These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

Calypso NT: Reliable, Efficient Parallel Processing  

E-print Network

Calypso NT is a parallel processing system that runs on Windows NT workstations. The system allows a parallel program written in C to run on the Windows NT platform; the port to Windows NT made significant changes to the programming interface…

Dasgupta, Partha

2

Overview of ICLASS research: Reliable and parallel computing  

NASA Technical Reports Server (NTRS)

An overview of Illinois Computer Laboratory for Aerospace Systems and Software (ICLASS) Research: Reliable and Parallel Computing is presented. Topics covered include: reliable and fault tolerant computing; fault tolerant multiprocessor architectures; fault tolerant matrix computation; and parallel processing.

Iyer, Ravi K.

1987-01-01

3

Armed Services Vocational Aptitude Battery (ASVAB): Alternate Forms Reliability (Forms 8, 9, 10, and 11). Technical Paper for Period October 1980-April 1985.  

ERIC Educational Resources Information Center

A study investigated the alternate forms reliability of the Armed Services Vocational Aptitude Battery (ASVAB) Forms 8, 9, 10, and 11. Usable data were obtained from 62,938 armed services applicants who took the ASVAB in January and February 1983. Results showed that the parallel forms reliability coefficients between ASVAB Form 8a and the…

Palmer, Pamla; And Others

4

Reliability evaluation and optimal design in heterogeneous multi-state series-parallel systems  

Microsoft Academic Search

This paper addresses the heterogeneous redundancy allocation problem in multi-state series-parallel reliability structures, with the objective of minimizing the total cost of the system design while satisfying the given reliability constraint and the consumer load demand. The demand distribution is presented as a piecewise cumulative load curve, and each subsystem is allowed to consist of parallel redundant components of not more than

Vikas K. Sharma; Manju Agarwal; Kanwar Sen

2011-01-01

5

The Reliable Router: A Reliable and High-Performance Communication Substrate for Parallel Computers  

Microsoft Academic Search

The Reliable Router (RR) is a network switching element targeted to two-dimensional mesh interconnection network topologies. It is designed to run at 100 MHz and reach a useful link bandwidth of 3.2 Gbit/sec. The Reliable Router uses adaptive routing coupled with link-level retransmission and a unique-token protocol to increase both performance and reliability. The RR can handle a single node or link failure anywhere in

William J. Dally; Larry R. Dennison; David Harris; Kinhong Kan; Thucydides Xanthopoulos

1994-01-01

6

Masking reveals parallel form systems in the visual brain  

PubMed Central

It is generally supposed that there is a single, hierarchically organized pathway dedicated to form processing, in which complex forms are elaborated from simpler ones, beginning with the orientation-selective cells of V1. In this psychophysical study, we undertook to test another hypothesis, namely that the brain's visual form system consists of multiple parallel systems and that complex forms are other than the sum of their parts. Inspired by imaging experiments which show that forms of increasing perceptual complexity (lines, angles, and rhombuses) constituted from the same elements (lines) activate the same visual areas (V1, V2, and V3) with the same intensity and latency (Shigihara and Zeki, 2013, 2014), we used backward masking to test the supposition that these forms are processed in parallel. We presented subjects with lines, angles, and rhombuses as different target-mask pairs. Evidence in favor of our supposition would be that masking is most effective when target and mask are processed by the same system and least effective when they are processed in different systems. Our results showed that rhombuses were strongly masked by rhombuses but only weakly masked by lines or angles, whereas angles and lines were well masked by each other. The relative resistance of rhombuses to masking by low-level forms like lines and angles suggests that complex forms like rhombuses may be processed in a separate parallel system, whereas lines and angles are processed in the same one. PMID:25120460

Lo, Yu Tung; Zeki, Semir

2014-01-01

7

Reliability Optimization of Series-Parallel Systems Using a Genetic Algorithm David W. Coit, IEEE Student Member  

E-print Network

Key words: genetic algorithm, combinatorial optimization. A genetic algorithm (GA) is developed and demonstrated to analyze series-parallel systems and to determine
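For illustration only, a minimal GA sketch for this kind of redundancy allocation problem might look like the following. This is not Coit and Smith's actual encoding or penalty scheme; the component reliabilities, costs, budget, and GA settings are all assumed values.

```python
import random
random.seed(0)

R = [0.80, 0.90, 0.85]                 # per-subsystem component reliabilities (assumed)
C = [2.0, 3.0, 2.5]                    # per-component costs (assumed)
BUDGET, NMAX = 25.0, 5                 # cost budget, max parallel copies (assumed)

def fitness(ind):
    """System reliability of a series chain of parallel subsystems."""
    cost = sum(c * n for c, n in zip(C, ind))
    if cost > BUDGET:
        return 0.0                     # simple death penalty for infeasible designs
    rel = 1.0
    for r, n in zip(R, ind):
        rel *= 1 - (1 - r) ** n        # parallel redundancy within a subsystem
    return rel

pop = [[random.randint(1, NMAX) for _ in R] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents, children = pop[:10], []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(R))          # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                  # mutate one redundancy level
            i = random.randrange(len(R))
            child[i] = random.randint(1, NMAX)
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best))
```

The published GA uses a more sophisticated penalty-guided treatment of infeasible solutions; the death penalty above is only the simplest stand-in.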

Smith, Alice E.

8

Achieving low-cost high-reliability computation through redundant parallel processing  

Microsoft Academic Search

This paper presents a reconfigurable parallel architecture comprising an FPGA backbone and multiple processing nodes connected in a redundant array architecture and constructed mainly from low-cost commercial components. The reconfigurability of the backbone aids in allowing the system to operate as a fault-tolerant cluster utilising the principle of reliability through redundancy. Although initially designed for space-borne on-board processing of satellite

I. V. McLoughlin; T. Bretschneider

2006-01-01

9

Parameter Interval Estimation of System Reliability for Repairable Multistate Series-Parallel System with Fuzzy Data  

PubMed Central

The purpose of this paper is to create an interval estimation of the fuzzy system reliability for the repairable multistate series-parallel system (RMSS). A two-sided fuzzy confidence interval for the fuzzy system reliability is constructed. The performance of the fuzzy confidence interval is considered based on the coverage probability and the expected length. In order to obtain the fuzzy system reliability, fuzzy set theory is applied to the system reliability problem when dealing with uncertainties in the RMSS. A fuzzy number with a triangular membership function is used for constructing the fuzzy failure rate and the fuzzy repair rate in the fuzzy reliability for the RMSS. The results show that a good interval estimator is the fuzzy confidence interval whose coverage probability attains the expected confidence coefficient with the narrowest expected length. The model presented herein is an effective estimation method when the sample size is n ≥ 100. In addition, the optimal α-cuts for the narrowest lower expected length and the narrowest upper expected length are considered. PMID:24987728
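As a rough illustration of the triangular-membership construction, the sketch below propagates an α-cut of a fuzzy failure rate into an interval for reliability. The failure-rate values, mission time, and the exponential reliability model R(t) = exp(-λt) are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def alpha_cut(tri, alpha):
    """Alpha-cut [lower, upper] of a triangular fuzzy number (a, m, b)."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

fuzzy_lam = (0.8e-3, 1.0e-3, 1.3e-3)   # fuzzy failure rate per hour (assumed values)
t = 500.0                               # mission time in hours (assumed)

for alpha in (0.0, 0.5, 1.0):
    lam_lo, lam_hi = alpha_cut(fuzzy_lam, alpha)
    # R(t) = exp(-lambda * t) is decreasing in lambda, so endpoints swap.
    r_lo, r_hi = np.exp(-lam_hi * t), np.exp(-lam_lo * t)
    print(f"alpha={alpha:.1f}: R(t) in [{r_lo:.4f}, {r_hi:.4f}]")
```

At α = 1 the interval collapses to the crisp value; lower α-cuts widen it, which is the mechanism behind the paper's choice of optimal α-cuts.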

2014-01-01

10

Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis  

NASA Technical Reports Server (NTRS)

During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems but, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
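The parity idea is small enough to sketch directly. In the toy Python example below (the disk contents are made up), the parity block is the XOR of the data blocks, so any single self-identifying failure can be rebuilt from the survivors:

```python
def parity(blocks):
    """XOR parity across equal-length data blocks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

stripe = [b"disk0data", b"disk1data", b"disk2data"]   # one toy stripe
p = parity(stripe)

# Disk 1 fails (self-identifying); rebuild it from the survivors plus parity.
rebuilt = parity([stripe[0], stripe[2], p])
assert rebuilt == stripe[1]
```

Because XOR is its own inverse, the same operation both generates the parity and reconstructs a missing block, which is why single-failure correction comes so cheaply.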

Gibson, Garth Alan

1990-01-01

11

Increasing the parallelism of filters through transformation to block state variable form  

Microsoft Academic Search

The block state variable form is investigated as a technique to increase the parallelism of a filter. This increase in parallelism allows more parallel processors to be usefully applied to the problem, resulting in a faster processing rate than is possible in the unblocked form. Upper and lower bounds on the sample period bound and the number of processors required

D. Schwartz; T. Barnwell

1984-01-01

12

Forward kinematics of a class of parallel (Stewart) platforms with closed-form solutions  

Microsoft Academic Search

The condition under which closed-form solutions of forward kinematics of parallel platforms are obtainable is explored. It is found that forward position analysis has closed-form solutions if one rotational degree of freedom (DOF) of a parallel platform is decoupled from the other five DOFs. Geometrically, this condition is satisfied when five end points at the platform or at the base

Chang-de Zhang; Shin-Min Song

1991-01-01

13

Reliability  

NSDL National Science Digital Library

In essence, reliability is the consistency of test results. To understand the meaning of reliability and how it relates to validity, imagine going to an airport to take flight #007 from Pittsburgh to San Diego. If, every time the airplane makes the flight

Christmann, Edwin P.; Badgett, John L.

2008-11-01

14

Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method  

ERIC Educational Resources Information Center

In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel
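A worked sketch of the underlying objective may help. Under a 2PL IRT model, the test information function is TIF(θ) = Σᵢ D²aᵢ²Pᵢ(θ)(1−Pᵢ(θ)), and two forms are approximately parallel when their TIFs agree across θ. The item parameters below are invented for illustration; a GA of the kind proposed would search the item pool for subsets minimizing this TIF gap subject to constraints.

```python
import numpy as np

def tif(theta, a, b, D=1.7):
    """Test information function under a 2PL IRT model."""
    p = 1.0 / (1.0 + np.exp(-D * a[:, None] * (theta[None, :] - b[:, None])))
    item_info = (D * a[:, None]) ** 2 * p * (1 - p)
    return item_info.sum(axis=0)

theta = np.linspace(-3, 3, 13)
# Hypothetical discrimination (a) and difficulty (b) parameters for two forms
a1, b1 = np.array([1.2, 0.9, 1.5]), np.array([-0.5, 0.0, 0.8])
a2, b2 = np.array([1.1, 1.0, 1.4]), np.array([-0.4, 0.1, 0.7])

# Forms are (approximately) parallel when their TIFs match across theta.
print(np.abs(tif(theta, a1, b1) - tif(theta, a2, b2)).max())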

Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

2008-01-01

15

Exploring Equivalent Forms Reliability Using a Key Stage 2 Reading Test  

ERIC Educational Resources Information Center

This article outlines an empirical investigation into equivalent forms reliability using a case study of a national curriculum reading test. Within the situation being studied, there has been a genuine attempt to create several equivalent forms and so it is of interest to compare the actual behaviour of the relationship between these forms to the…

Benton, Tom

2013-01-01

16

Reliability  

E-print Network

This paper presents an experimental research on the size of individuals when fixed and dynamic size populations are employed with Genetic Programming (GP). We propose an improvement to the Plague operator (PO), that we have called Random Plague (RPO). Then by further studies based on the RPO results we analyzed the Fault Tolerance on Parallel Genetic Programming.

Daniel Lombra Nagonzalez; Francisco Fernandez De Vega

17

Equivalent forms and split-half reliability of the NU-CHIPS administered in noise.  

PubMed

The effects of white noise on the equivalent forms reliability and internal consistency reliability of the Northwestern University-Children's Perception of Speech Test (NU-CHIPS) were examined. Subjects were 36 normally hearing 10-year-old children who were assigned randomly in equal numbers to one of three experimental groups. Each group was administered all four forms of the NU-CHIPS at one of three signal-to-noise ratios (S/N = -4, S/N = 0, S/N = +2). The reliability of the NU-CHIPS when presented in noise is diminished relative to its reported reliability when administered in quiet as revealed by Pearson product-moment correlation coefficients. PMID:6716990
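Internal consistency of the split-half kind reported here is typically computed by correlating half-test scores and applying the Spearman-Brown correction to project the correlation to full test length. A self-contained sketch with simulated data (not NU-CHIPS responses; the listener/item counts and response model are assumptions):

```python
import numpy as np

def split_half_reliability(scores):
    """Odd-even split-half reliability with Spearman-Brown correction.

    scores: (subjects x items) matrix of item scores.
    """
    odd, even = scores[:, 0::2].sum(axis=1), scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]        # correlation between half-tests
    return 2 * r / (1 + r)                  # projected full-length reliability

rng = np.random.default_rng(0)
ability = rng.normal(size=(36, 1))                      # 36 simulated listeners
items = (ability + rng.normal(size=(36, 50)) > 0).astype(int)  # 50 binary items
print(split_half_reliability(items))
```

Noise in the stimulus degrades the correlation between halves, which is exactly how diminished reliability shows up in this statistic.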

Chermak, G D; Pederson, C M; Bendel, R B

1984-05-01

18

Reliability Modeling Methodology for Independent Approaches on Parallel Runways Safety Analysis  

NASA Technical Reports Server (NTRS)

This document is an adjunct to the final report An Integrated Safety Analysis Methodology for Emerging Air Transport Technologies. That report presents the results of our analysis of the problem of simultaneous but independent, approaches of two aircraft on parallel runways (independent approaches on parallel runways, or IAPR). This introductory chapter presents a brief overview and perspective of approaches and methodologies for performing safety analyses for complex systems. Ensuing chapter provide the technical details that underlie the approach that we have taken in performing the safety analysis for the IAPR concept.

Babcock, P.; Schor, A.; Rosch, G.

1998-01-01

19

Comparison of heuristic methods for reliability optimization of series-parallel systems  

E-print Network

Three heuristics, the max-min approach, Nakagawa and Nakashima method, and Kim and Yum method, are considered for the redundancy allocation problem with series-parallel structures. The max-min approach can formulate the problem as an integer linear...
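As a rough illustration of the max-min idea (raise the reliability of the weakest subsystem under a cost budget), a brute-force version for a tiny series-parallel system might look like the sketch below. All component reliabilities, costs, and the budget are assumed values, and real instances need the heuristics precisely because enumeration does not scale.

```python
from itertools import product

r = [0.80, 0.90, 0.85]     # component reliability per subsystem (assumed)
c = [2.0, 3.0, 2.5]        # component cost per subsystem (assumed)
BUDGET = 25.0

def subsystem_rel(ri, n):
    """Reliability of n identical components in parallel."""
    return 1.0 - (1.0 - ri) ** n

best = None
for ns in product(range(1, 6), repeat=3):          # candidate redundancy levels
    cost = sum(ci * n for ci, n in zip(c, ns))
    if cost > BUDGET:
        continue
    worst = min(subsystem_rel(ri, n) for ri, n in zip(r, ns))
    if best is None or worst > best[0]:            # max-min criterion
        best = (worst, ns, cost)

print(best)   # (weakest-subsystem reliability, allocation, cost)
```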

Lee, Hsiang

2003-01-01

20

Validity and Reliability of International Physical Activity Questionnaire-Short Form in Chinese Youth  

ERIC Educational Resources Information Center

Purpose: The psychometric profiles of the widely used International Physical Activity Questionnaire-Short Form (IPAQ-SF) in Chinese youth have not been reported. The purpose of this study was to examine the validity and reliability of the IPAQ-SF using a sample of Chinese youth. Method: One thousand and twenty-one youth (M[subscript age] = 14.26 ±…

Wang, Chao; Chen, Peijie; Zhuang, Jie

2013-01-01

21

Alternate Form and Test-Retest Reliability of easyCBM Reading Measures. Technical Report # 0906  

ERIC Educational Resources Information Center

We report the results of a test-retest and alternate form reliability study of grade 1, 3, 5, and 8 reading measures from the easyCBM assessment system. Approximately 50 students in each grade participated in the study. In Grade 1, we studied the following measures: Phoneme Segmenting, Letter Sounds, Letter Names, Word Reading Fluency, and Passage…

Alonzo, Julie; Tindal, Gerald

2009-01-01

22

Unified form for parallel ion viscous stress in magnetized plasmas E. D. Helda)  

E-print Network

This work presents a unified form for the parallel ion viscous stress in magnetized plasmas, bridging collisional pitch-angle scattering and free-streaming effects in deriving closure relations that apply both in the hot core and in the cool edge of magnetized fusion…


Held, Eric

23

Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis  

NASA Technical Reports Server (NTRS)

Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

Liu, Kuojuey Ray

1990-01-01

24

Parallel FE Approximation of the Even/Odd Parity Form of the Linear Boltzmann Equation  

SciTech Connect

A novel solution method has been developed to solve the linear Boltzmann equation on an unstructured triangular mesh. Instead of tackling the first-order form of the equation, this approach is based on the even/odd-parity form in conjunction with the conventional multigroup discrete-ordinates approximation. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, and the method is well suited for massively parallel computers.

Drumm, Clifton R.; Lorenz, Jens

1999-07-21

25

Magnetosheath filamentary structures formed by ion acceleration at the quasi-parallel bow shock  

NASA Astrophysics Data System (ADS)

Results from 2.5-D electromagnetic hybrid simulations show the formation of field-aligned, filamentary plasma structures in the magnetosheath. They begin at the quasi-parallel bow shock and extend far into the magnetosheath. These structures exhibit anticorrelated, spatial oscillations in plasma density and ion temperature. Closer to the bow shock, magnetic field variations associated with density and temperature oscillations may also be present. Magnetosheath filamentary structures (MFS) form primarily in the quasi-parallel sheath; however, they may extend to the quasi-perpendicular magnetosheath. They occur over a wide range of solar wind Alfvénic Mach numbers and interplanetary magnetic field directions. At lower Mach numbers with lower levels of magnetosheath turbulence, MFS remain highly coherent over large distances. At higher Mach numbers, magnetosheath turbulence decreases the level of coherence. Magnetosheath filamentary structures result from localized ion acceleration at the quasi-parallel bow shock and the injection of energetic ions into the magnetosheath. The localized nature of ion acceleration is tied to the generation of fast magnetosonic waves at and upstream of the quasi-parallel shock. The increased pressure in flux tubes containing the shock-accelerated ions results in the depletion of the thermal plasma in these flux tubes and the enhancement of density in flux tubes void of energetic ions. This results in the observed anticorrelation between ion temperature and plasma density.

Omidi, N.; Sibeck, D.; Gutynska, O.; Trattner, K. J.

2014-04-01

26

Microelectromechanical filter formed from parallel-connected lattice networks of contour-mode resonators  

DOEpatents

A microelectromechanical (MEM) filter is disclosed which has a plurality of lattice networks formed on a substrate and electrically connected together in parallel. Each lattice network has a series resonant frequency and a shunt resonant frequency provided by one or more contour-mode resonators in the lattice network. Different types of contour-mode resonators, including single-input, single-output resonators, differential resonators, balun resonators, and ring resonators, can be used in the MEM filter. The MEM filter can have a center frequency in the range of 10 MHz-10 GHz, with a filter bandwidth of up to about 1% when all of the lattice networks have the same series resonant frequency and the same shunt resonant frequency. The filter bandwidth can be increased up to about 5% by using unique series and shunt resonant frequencies for the lattice networks.

Wojciechowski, Kenneth E; Olsson, III, Roy H; Ziaei-Moayyed, Maryam

2013-07-30

27

Parallel processing in the brain's visual form system: an fMRI study  

PubMed Central

We here extend and complement our earlier time-based, magneto-encephalographic (MEG), study of the processing of forms by the visual brain (Shigihara and Zeki, 2013) with a functional magnetic resonance imaging (fMRI) study, in order to better localize the activity produced in early visual areas when subjects view simple geometric stimuli of increasing perceptual complexity (lines, angles, rhombuses) constituted from the same elements (lines). Our results show that all three categories of form activate all three visual areas with which we were principally concerned (V1–V3), with angles producing the strongest and rhombuses the weakest activity in all three. The difference between the activity produced by angles and rhombuses was significant, that between lines and rhombuses was trend significant while that between lines and angles was not. Taken together with our earlier MEG results, the present ones suggest that a parallel strategy is used in processing forms, in addition to the well-documented hierarchical strategy. PMID:25126064

Shigihara, Yoshihito; Zeki, Semir

2014-01-01

28

The Validation of Parallel Test Forms: "Mountain" and "Beach" Picture Series for Assessment of Language Skills  

ERIC Educational Resources Information Center

Pictures are widely used to elicit expressive language skills, and pictures must be established as parallel before changes in ability can be demonstrated by assessment using picture prompts. Why parallel prompts are required, and what must be done to ensure that prompts are in fact parallel, is not widely known. To date, evidence of…

Bae, Jungok; Lee, Yae-Sheik

2011-01-01

29

Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.  

SciTech Connect

Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height and energy period values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (IFORM) is standard design practice for generating environmental contours. In this paper, the traditional application of the IFORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the IFORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: inverse FORM, principal component analysis, environmental contours, extreme sea state characterization, wave energy converters.
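A minimal sketch of the traditional IFORM recipe described here: points on a circle of radius β in standard-normal space are mapped through an inverse Rosenblatt transform into (Hs, Te) space. The Weibull marginal for Hs and the conditional lognormal for Te below are invented stand-ins for distributions that would be fitted to hindcast or buoy data.

```python
import numpy as np
from scipy import stats

# --- assumed, illustrative fits (not from real sea state data) ---
hs_dist = stats.weibull_min(c=1.6, scale=1.9)           # marginal Hs (m)
def te_given_hs(hs):                                     # conditional Te (s)
    mu, sigma = 1.5 + 0.4 * np.log(hs + 1.0), 0.15
    return stats.lognorm(s=sigma, scale=np.exp(mu))

T_RETURN_YR = 100
STATES_PER_YR = 365.25 * 24 / 3                          # 3-hour sea states
beta = stats.norm.ppf(1.0 - 1.0 / (T_RETURN_YR * STATES_PER_YR))

theta = np.linspace(0, 2 * np.pi, 360)
u1, u2 = beta * np.cos(theta), beta * np.sin(theta)      # circle in u-space

hs = hs_dist.ppf(stats.norm.cdf(u1))                     # inverse Rosenblatt
te = np.array([te_given_hs(h).ppf(stats.norm.cdf(v))
               for h, v in zip(hs, u2)])
# (hs, te) now trace the 100-year environmental contour
```

The paper's PCA step would replace the raw (Hs, Te) pair with an uncorrelated representation before fitting, but the circle-and-transform core stays the same.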

Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee; Neary, Vincent Sinclair

2014-09-01

30

Reliability and validity of a short form household food security scale in a Caribbean community  

PubMed Central

Background We evaluated the reliability and validity of the short form household food security scale in a different setting from the one in which it was developed. Methods The scale was interview administered to 531 subjects from 286 households in north central Trinidad in Trinidad and Tobago, West Indies. We evaluated the six items by fitting item response theory models to estimate item thresholds, estimating agreement among respondents in the same households and estimating the slope index of income-related inequality (SII) after adjusting for age, sex and ethnicity. Results Item-score correlations ranged from 0.52 to 0.79 and Cronbach's alpha was 0.87. Item responses gave within-household correlation coefficients ranging from 0.70 to 0.78. Estimated item thresholds (standard errors) from the Rasch model ranged from -2.027 (0.063) for the 'balanced meal' item to 2.251 (0.116) for the 'hungry' item. The 'balanced meal' item had the lowest threshold in each ethnic group even though there was evidence of differential functioning for this item by ethnicity. Relative thresholds of other items were generally consistent with US data. Estimation of the SII, comparing those at the bottom with those at the top of the income scale, gave relative odds for an affirmative response of 3.77 (95% confidence interval 1.40 to 10.2) for the lowest severity item, and 20.8 (2.67 to 162.5) for highest severity item. Food insecurity was associated with reduced consumption of green vegetables after additionally adjusting for income and education (0.52, 0.28 to 0.96). Conclusions The household food security scale gives reliable and valid responses in this setting. Differing relative item thresholds compared with US data do not require alteration to the cut-points for classification of 'food insecurity without hunger' or 'food insecurity with hunger'. The data provide further evidence that re-evaluation of the 'balanced meal' item is required. PMID:15200684
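For reference, the Cronbach's alpha statistic reported here (0.87) is computed from item variances and total-score variance. A self-contained sketch on simulated responses (the sample size, item count, and response model are assumptions, not the Trinidad data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n subjects x k items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))                           # latent food insecurity
responses = (trait + rng.normal(size=(200, 6)) > 0).astype(int)  # 6 binary items
print(cronbach_alpha(responses))   # correlated items yield a high alpha
```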

Gulliford, Martin C; Mahabir, Deepak; Rocke, Brian

2004-01-01

31

A reliability study of springback on the sheet metal forming process under probabilistic variation of prestrain and blank holder force  

NASA Astrophysics Data System (ADS)

This work deals with a reliability assessment of the springback problem during the sheet metal forming process. The effects of operative parameters and material properties (blank holder force and plastic prestrain) on springback are investigated. A generic reliability approach was developed to control springback. Subsequently, the Monte Carlo simulation technique in conjunction with the Latin hypercube sampling method was adopted to study the probabilistic springback. A finite element method based on implicit/explicit algorithms was used to model the springback problem. The proposed constitutive law for sheet metal takes into account the adaptation of plastic parameters of the hardening law for each prestrain level considered. The Rackwitz-Fiessler algorithm is used to find reliability properties from response surfaces of chosen springback geometrical parameters. The obtained results were analyzed using multi-state limit reliability functions based on geometry compensations.

Mrad, Hatem; Bouazara, Mohamed; Aryanpour, Gholamreza

2013-08-01

32

Reliability of Arabic ICIQ-UI short form in Saudi Arabia  

PubMed Central

Context: The International Consultation on Incontinence Questionnaire-Urinary Incontinence Short Form (ICIQ-UI SF) provides a brief measure of symptoms and impact of urinary incontinence on quality of life. It is suitable for use in clinical practice and research. An Arabic version of the ICIQ-UI SF was translated and validated in Egypt and Syria. Aims: The objective was to assess the reliability of the Arabic version of the ICIQ-UI SF in women from Saudi Arabia. Settings and Design: A study at the Urogynecology Clinic was conducted from November 2010 until August 2011. Materials and Methods: Thirty-seven consecutive Saudi women attending urogynecologic clinic were recruited. Questionnaires were distributed for self-completion and then redistributed to the same set of respondents two to four weeks later as part of a test-retest analysis for assessing questionnaire's stability. Statistical Analysis Used: Agreement between two measurements was determined by weighted Kappa. Internal consistency was assessed using Cronbach's alpha coefficient. Results: Participants had a mean (SD) age of 39 (9.9), median parity of 4, and mean BMI (SD) of 30.9 kg/m2 (4.6). There were no differences in the frequency and amount of urine leaks or the impact of UI on quality of life observed between the two visits. Assessment of internal consistency was excellent with the Cronbach's alpha coefficient of 0.97 (95% CI: 0.88-0.98). Participants agreed that the questionnaire was clear, appropriate, and easy to understand. Conclusions: The Arabic ICIQ-UI SF is a stable and clear questionnaire that can be used for UI assessment in clinical practice and research among Saudi women. PMID:23662008

Al-Shaikh, Ghadeer; Al-Badr, Ahmad; Al Maarik, Amira; Cotterill, Nikki; Al-Mandeel, Hazem M.

2013-01-01

33

Parallel Dimers and Anti-parallel Tetramers Formed by Epidermal Growth Factor Receptor Pathway Substrate Clone 15 (EPS15)*  

E-print Network

(Received for publication, August 28, 1997, and in revised form, October 17, 1997.) The recently discovered localization of epidermal growth factor receptor pathway substrate clone 15 (Eps15) with the clathrin adaptor protein complex, AP-2, strongly suggests that Eps15 has an important role in the pathway of clathrin…

Kirchhausen, Tomas

34

A Validation Study of the Dutch Childhood Trauma Questionnaire-Short Form: Factor Structure, Reliability, and Known-Groups Validity  

ERIC Educational Resources Information Center

Objective: The 28-item Childhood Trauma Questionnaire-Short Form (CTQ-SF) has been translated into at least 10 different languages. The validity of translated versions of the CTQ-SF, however, has generally not been examined. The objective of this study was to investigate the factor structure, internal consistency reliability, and known-groups…

Thombs, Brett D.; Bernstein, David P.; Lobbestael, Jill; Arntz, Arnoud

2009-01-01

35

Reliability of the International Physical Activity Questionnaire in Research Settings: Last 7-Day Self-Administered Long Form  

ERIC Educational Resources Information Center

The purpose of this study was to examine the test-retest reliability of the last 7-day long form International Physical Activity Questionnaire (Craig et al., 2003) and to examine the construct validity for the measure in a research setting. Participants were 151 male (n = 52) and female (n = 99) university students (M age = 24.15 years, SD = 5.01)…

Levy, Susan S.; Readdy, R. Tucker

2009-01-01

36

G-quadruplexes form ultrastable parallel structures in deep eutectic solvent.  

PubMed

G-quadruplex DNA is highly polymorphic. Its conformational transitions are involved in a series of important life events. These controllable diverse structures also make G-quadruplex DNA a promising candidate as a catalyst, biosensor, and DNA-based architecture. So far, G-quadruplex DNA-based applications have been restricted to aqueous media. Since many chemical reactions and devices are required to be performed under strictly anhydrous conditions, even at high temperature, it is challenging and meaningful to work with G-quadruplex DNA in a water-free medium. In this report, we systematically studied 10 representative G-quadruplexes in anhydrous room-temperature deep eutectic solvents (DESs). The results indicate that intramolecular, intermolecular, and even higher-order G-quadruplex structures can be formed in DES. Intriguingly, in DES, the parallel structure becomes the preferred G-quadruplex DNA conformation. More importantly, compared to aqueous media, G-quadruplex has ultrastability in DES and, surprisingly, some G-quadruplex DNA can survive even beyond 110 °C. Our work should shed light on the applications of G-quadruplex DNA in chemical reactions and DNA-based devices performed in an anhydrous environment, even at high temperature. PMID:23282194

Zhao, Chuanqi; Ren, Jinsong; Qu, Xiaogang

2013-01-29

37

An Investigation of Angle Relationships Formed by Parallel Lines Cut by a Transversal Using GeoGebra  

NSDL National Science Digital Library

In this lesson, students will discover angle relationships formed (corresponding, alternate interior, alternate exterior, same-side interior, same-side exterior) when two parallel lines are cut by a transversal. They will establish definitions and identify whether these angle pairs are supplementary or congruent.

2013-01-08

38

Bringing the Cognitive Estimation Task into the 21st Century: Normative Data on Two New Parallel Forms  

PubMed Central

The Cognitive Estimation Test (CET) is widely used by clinicians and researchers to assess the ability to produce reasonable cognitive estimates. Although several studies have published normative data for versions of the CET, many of the items are now outdated and parallel forms of the test do not exist to allow cognitive estimation abilities to be assessed on more than one occasion. In the present study, we devised two new 9-item parallel forms of the CET. These versions were administered to 184 healthy male and female participants aged 18–79 years with 9–22 years of education. Increasing age and years of education were found to be associated with successful CET performance as well as gender, intellect, naming, arithmetic and semantic memory abilities. To validate that the parallel forms of the CET were sensitive to frontal lobe damage, both versions were administered to 24 patients with frontal lobe lesions and 48 age-, gender- and education-matched controls. The frontal patients’ error scores were significantly higher than the healthy controls on both versions of the task. This study provides normative data for parallel forms of the CET for adults which are also suitable for assessing frontal lobe dysfunction on more than one occasion without practice effects. PMID:24671170

MacPherson, Sarah E.; Wagner, Gabriela Peretti; Murphy, Patrick; Bozzali, Marco; Cipolotti, Lisa; Shallice, Tim

2014-01-01

39

Measurement of impulsive choice in rats: Same- and alternate-form test-retest reliability and temporal tracking.  

PubMed

Impulsive choice is typically measured by presenting smaller-sooner (SS) versus larger-later (LL) rewards, with biases towards the SS indicating impulsivity. The current study tested rats on different impulsive choice procedures with LL delay manipulations to assess same-form and alternate-form test-retest reliability. In the systematic-GE procedure (Green & Estle, 2003), the LL delay increased after several sessions of training; in the systematic-ER procedure (Evenden & Ryan, 1996), the delay increased within each session; and in the adjusting-M procedure (Mazur, 1987), the delay changed after each block of trials within a session based on each rat's choices in the previous block. In addition to measuring choice behavior, we also assessed temporal tracking of the LL delays using the median times of responding during LL trials. The two systematic procedures yielded similar results in both choice and temporal tracking measures following extensive training, whereas the adjusting procedure resulted in relatively more impulsive choices and poorer temporal tracking. Overall, the three procedures produced acceptable same-form test-retest reliability over time, but the adjusting procedure did not show significant alternate-form test-retest reliability with the other two procedures. The results suggest that systematic procedures may supply better measurements of impulsive choice in rats. PMID:25490901

Peterson, Jennifer R; Hill, Catherine C; Kirkpatrick, Kimberly

2014-12-01

40

Development and Reliability Testing of a Fast-Food Restaurant Observation Form.  

PubMed

Purpose. To develop a reliable observational data collection instrument to measure characteristics of the fast-food restaurant environment likely to influence consumer behaviors, including product availability, pricing, and promotion. Design. The study used observational data collection. Setting. Restaurants were in the Chicago Metropolitan Statistical Area. Subjects. A total of 131 chain fast-food restaurant outlets were included. Measures. Interrater reliability was measured for product availability, pricing, and promotion measures on a fast-food restaurant observational data collection instrument. Analysis. Analysis was done with Cohen's κ coefficient and proportion of overall agreement for categorical variables and intraclass correlation coefficient (ICC) for continuous variables. Results. Interrater reliability, as measured by average κ coefficient, was .79 for menu characteristics, .84 for kids' menu characteristics, .92 for food availability and sizes, .85 for beverage availability and sizes, .78 for measures on the availability of nutrition information, .75 for characteristics of exterior advertisements, and .62 and .90 for exterior and interior characteristics measures, respectively. For continuous measures, average ICC was .88 for food pricing measures, .83 for beverage prices, and .65 for counts of exterior advertisements. Conclusion. Over 85% of measures demonstrated substantial or almost perfect agreement. Although some measures required revision or protocol clarification, results from this study suggest that the instrument may be used to reliably measure the fast-food restaurant environment. PMID:24819996
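Cohen's κ, the interrater agreement statistic used here for categorical measures, corrects observed agreement for the agreement expected by chance. A small sketch with made-up ratings from two hypothetical observers:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical codes of the same items."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                       # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) # chance agreement
             for c in np.union1d(r1, r2))
    return (po - pe) / (1 - pe)

# Hypothetical presence/absence codes for 10 menu items by two observers
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(cohens_kappa(a, b))
```

The "substantial or almost perfect" labels in the conclusion refer to the conventional Landis-Koch interpretation bands for κ.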

Rimkus, Leah; Ohri-Vachaspati, Punam; Powell, Lisa M; Zenk, Shannon N; Quinn, Christopher M; Barker, Dianne C; Pugach, Oksana; Resnick, Elissa A; Chaloupka, Frank J

2014-05-12

41

Alternate Form Reliability and Concurrent Validity of the PPVT-R for Referred Rehabilitation Agency Adults.  

ERIC Educational Resources Information Center

Investigated the relationships among the Peabody Picture Vocabulary Test-Revised (PPVT-R) alternate forms and the relationship of each PPVT-R form with the Wechsler Adult Intelligence Scale-Revised (WAIS-R). All correlations with both forms of the PPVT-R were significant. PPVT-R mean scores, however, significantly underestimated all WAIS-R mean scores.…

Stevenson, James D., Jr.

1986-01-01

42

Composite Reliability and Standard Errors of Measurement for a Seven-Subtest Short Form of the Wechsler Adult Intelligence Scale-Revised.  

ERIC Educational Resources Information Center

Composite reliability and standard errors of measurement were computed for prorated Verbal, Performance, and Full-Scale intelligence quotient (IQ) scores from a seven-subtest short form of the Wechsler Adult Intelligence Scale-Revised. Results with 1,880 adults (standardization sample) indicate that this form is as reliable as the complete test.…

Schretlen, David; And Others

1994-01-01

43

Reliability and validity of the parent form of the social competence scale in Chinese preschoolers.  

PubMed

The Parent Form of the Social Competence Scale (SCS-PF) was translated into Chinese and validated in a sample of Chinese preschool children (N = 443). Results confirmed a single dimension and high internal consistency in the SCS-PF. Mothers' ratings on the SCS-PF correlated moderately with teachers' ratings on the Teacher Form of the Social Competence Scale and weakly with teachers' ratings on the Student-Teacher Relationship Scale. PMID:23045868

Zhang, Xiao; Ke, Xue; Wang, Xiaoyan

2012-08-01

44

HELIOS Critical Design Review: Reliability  

NASA Technical Reports Server (NTRS)

This paper presents the Helios Critical Design Review Reliability from October 16-20, 1972. The topics include: 1) Reliability Requirement; 2) Reliability Apportionment; 3) Failure Rates; 4) Reliability Assessment; 5) Reliability Block Diagram; and 6) Reliability Information Sheet.

Benoehr, H. C.; Herholz, J.; Prem, H.; Mann, D.; Reichert, L.; Rupp, W.; Campbell, D.; Boettger, H.; Zerwes, G.; Kurvin, C.

1972-01-01

45

Defining the "Correct Form": Using Biomechanics to Develop Reliable and Valid Assessment Instruments  

ERIC Educational Resources Information Center

Physical educators should be able to define the "correct form" they expect to see each student performing in their classes. Moreover, they should be able to go beyond assessing students' skill levels by measuring the outcomes (products) of movements (i.e., how far they throw the ball or how many successful attempts are completed) or counting the…

Satern, Miriam N.

2011-01-01

46

Development and reliability testing of a food store observation form. — Measures of the Food Environment  

Cancer.gov


47

[Reliability and validity of the Severe Impairment Battery, short form (SIB-s), in patients with dementia in Spain].  

PubMed

INTRODUCTION. People with progressive dementia evolve into a state where traditional neuropsychological tests are not effective. The Severe Impairment Battery (SIB) and its short form (SIB-s) were developed for evaluating the cognitive status of patients with severe dementia. AIM. To evaluate the psychometric attributes of the SIB-s in patients with severe dementia. PATIENTS AND METHODS. 127 institutionalized patients (female: 86.6%; mean age: 82.6 ± 7.5 years) with dementia were assessed with the SIB-s, the Global Deterioration Scale (GDS), Mini-Mental State Examination (MMSE), Severe Mini-Mental State Examination (sMMSE), Barthel Index, and FAST. RESULTS. SIB-s acceptability, reliability, validity, and precision were analyzed. The mean total score for the scale was 19.1 ± 15.34 (range: 0-48). The floor effect was 18.1%, only marginally higher than the desirable 15%. Factor analysis identified a single factor explaining 68% of the total variance of the scale. Cronbach's alpha coefficient was 0.96 and the item-total corrected correlations ranged from 0.27 to 0.83. The item homogeneity value was 0.43. Test-retest and inter-rater reliability for the total score was satisfactory (ICC: 0.96 and 0.95, respectively). The SIB-s showed moderate correlation with functional dependency scales (Barthel Index: 0.48; FAST: -0.74). The standard error of measurement was 3.07 for the total score. CONCLUSIONS. The SIB-s is a reliable, valid, and relatively brief instrument for evaluating patients with severe dementia in the Spanish population. PMID:25522858

Cruz-Orduna, I; Aguera-Ortiz, L F; Montorio-Cerrato, I; Leon-Salas, B; Valle de Juan, M C; Martinez-Martin, P

2015-01-01

48

The relative noise levels of parallel axis gear sets with various contact ratios and gear tooth forms  

NASA Technical Reports Server (NTRS)

The real noise-reduction benefit that may be obtained through the use of one gear tooth form as compared to another is an important design parameter for any geared system, especially for helicopters, in which both weight and reliability are very important factors. This paper describes the design and testing of nine sets of gears which are as identical as possible except for their basic tooth geometry. Noise measurements were made at various combinations of load and speed for each gear set so that direct comparisons could be made. The resultant data were analyzed so that valid conclusions could be drawn and interpreted for design use.

Drago, Raymond J.; Lenski, Joseph W., Jr.; Spencer, Robert H.; Valco, Mark; Oswald, Fred B.

1993-01-01

49

A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows  

NASA Astrophysics Data System (ADS)

Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative benefits of this technique. For the offline approximation, latin hypercube sampling (LHS) was used for design space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+ with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
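A compressed sketch of the offline-approximation loop described here (LHS design, expensive evaluations, Kriging fit), using a cheap analytic stand-in for the CFD solver and a Gaussian-process regressor as the Kriging interpolator. The stand-in function, sample sizes, and kernel settings are all illustrative assumptions, not details from the dissertation.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Cheap analytic stand-in for an expensive CFD force evaluation (assumed)
def aero_force(x):
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + x[:, 2] * x[:, 3]

sampler = qmc.LatinHypercube(d=4, seed=0)   # 4 design DOFs, as in the study
X = sampler.random(n=40)                    # space-filling DOE sites in [0,1]^4
y = aero_force(X)                           # "run CFD" once at each site

# Kriging interpolation via Gaussian-process regression
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                     normalize_y=True).fit(X, y)

x_new = sampler.random(n=5)                 # candidate designs to screen cheaply
pred, sd = surrogate.predict(x_new, return_std=True)
print(pred, sd)                             # estimates plus uncertainty
```

The closed-form (isentropic nozzle) approximation in the dissertation plays the role of a sanity check on such a surrogate, flagging regions where the fitted model disagrees with the idealized physics.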

Allphin, Devin

50

The Myeloproliferative Neoplasm Symptom Assessment Form (MPN-SAF): international prospective validation and reliability trial in 402 patients.  

PubMed

Symptomatic burden in myeloproliferative neoplasms is present in most patients and compromises quality of life. We sought to validate a broadly applicable 18-item instrument (Myeloproliferative Neoplasm Symptom Assessment Form [MPN-SAF], coadministered with the Brief Fatigue Inventory) to assess symptoms of myelofibrosis, essential thrombocythemia, and polycythemia vera among prospective cohorts in the United States, Sweden, and Italy. A total of 402 MPN-SAF surveys were administered (English [25%], Italian [46%], and Swedish [28%]) in 161 patients with essential thrombocythemia, 145 patients with polycythemia vera, and 96 patients with myelofibrosis. Responses among the 3 administered languages showed great consistency after controlling for MPN subtype. Strong correlations existed between individual items and key symptomatic elements represented on both the MPN-SAF and the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire-C30. Enrolling physicians' blinded opinion of patient symptoms (6 symptoms assessed) were highly correlated with corresponding patients' responses. Serial administration of the English MPN-SAF among 53 patients showed that most MPN-SAF items are well correlated (r > 0.5, P < .001) and highly reproducible (intraclass correlation coefficient > 0.7). The MPN-SAF is a comprehensive and reliable instrument that is available in multiple languages to evaluate symptoms associated with all types of MPNs in clinical trials globally. PMID:21536863

Scherber, Robyn; Dueck, Amylou C; Johansson, Peter; Barbui, Tiziano; Barosi, Giovanni; Vannucchi, Alessandro M; Passamonti, Francesco; Andreasson, Bjorn; Ferarri, Maria L; Rambaldi, Alessandro; Samuelsson, Jan; Birgegard, Gunnar; Tefferi, Ayalew; Harrison, Claire N; Radia, Deepti; Mesa, Ruben A

2011-07-14

51

Subjective Well-Being Under Neuroleptics Scale short form (SWN-K): reliability and validity in an Estonian speaking sample  

PubMed Central

Background. The Subjective Well-Being Under Neuroleptic Treatment Scale short form (SWN-K) is a self-rating scale developed to measure mentally ill patients' well-being under antipsychotic drug treatment. This paper reports on the adaptation and psychometric properties of the instrument in an Estonian psychiatric sample. Methods. In a naturalistic study design, 124 inpatients or outpatients suffering from a first psychotic episode or chronic psychotic illness completed the translated SWN-K instrument. Item content analysis, internal consistency analysis, exploratory principal components analysis, and confirmatory factor analysis were used to construct the Estonian version of the SWN-K (SWN-K-E). Additionally, socio-demographic and clinical data, observer-rated psychopathology, medication side effects, daily antipsychotic drug dosages, and general functioning were assessed at two time points, at baseline and after a 29-week period; the associations of the SWN-K-E scores with these variables were explored. Results. After having selected 20 items for the Estonian adaptation, the internal consistency of the total SWN-K-E was 0.93 and the subscale consistencies ranged from 0.70 to 0.80. Good test-retest reliabilities were observed for the adapted scale scores, with the correlation of the total score over about 6 months being r = 0.70. Confirmatory factor analysis replicated the presence of a higher-order factor (general well-being) and five first-order factors (mental functioning, physical functioning, social integration, emotional regulation, and self-control); the model fitted the data well. The results indicated a moderately high correlation (r = 0.54) between the SWN-K-E total score and patients' evaluations of how satisfied they were with their lives in general. No significant correlations were found between the overall subjective well-being score and age, severity of psychopathology, drug adverse effects, or prescribed drug dosage. Conclusion. Taken together, the results demonstrate that the Estonian version of the SWN-K is a reliable and valid instrument with psychometric properties similar to the original English version. The potential uses of the scale in both research and clinical settings are considered. PMID:24025191

2013-01-01

52

Stochastic Modeling of Composite Web Services for Closed-Form Analysis of Their Performance and Reliability Bottlenecks  

Microsoft Academic Search

Web services providers often commit service-level agreements (SLAs) with their customers for guaranteeing the quality of the services. These SLAs are related not just to functional attributes of the services but to performance and reliability attributes as well. When combining several services into a composite service, it is non-trivial to determine, prior to service deployment, performance and reliability values of

N. Sato; Kishor S. Trivedi

2007-01-01

53

Parent Ratings Using the Chinese Version of the Parent Gifted Rating Scales-School Form: Reliability and Validity for Chinese Students  

ERIC Educational Resources Information Center

This study examined the reliability and validity of the scores of a Chinese-translated version of the Gifted Rating Scales-School Form (GRS-S) using parents as raters and explored the effects of gender and grade on the ratings. A total of 222 parents participated in the study and rated their child independently using the Chinese version of the…

Li, Huijun; Lee, Donghyuck; Pfeiffer, Steve I.; Petscher, Yaacov

2008-01-01

54

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 1. Technical Report #1216  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due…

Anderson, Daniel; Park, Jasmine, Bitnara; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

2012-01-01

55

The French-Canadian Version of the Self-Report Coping Scale: Estimates of the Reliability, Validity, and Development of a Short Form  

ERIC Educational Resources Information Center

This investigation was conducted to explore the reliability and validity of scores on the French Canadian version of the Self-Report Coping Scale (SRCS; D. L. Causey & E. F. Dubow, 1992) and that of a short form of the SRCS. Evidence provides initial support for construct validity by replication of the factor structure and correlations with…

Hebert, Martine; Parent, Nathalie; Daignault, Isabelle V.

2007-01-01

56

A parallel processing tutorial  

Microsoft Academic Search

An overview of parallel computing is provided, with reference to numerical analysis and, in particular, to computational electromagnetics. The history of parallelism is reviewed, and the general principles are provided. The two main types of parallelism encountered, pipelining and replication, are discussed, and an example of each is described. A parallel algorithm for forming a matrix-vector product is presented and

David B. Davidson

1990-01-01

57

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Passage Reading Fluency Assessments: Grade 4. Technical Report #1219  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

58

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 5. Technical Report #1220  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

Lai, Cheng-Fei; Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

2012-01-01

59

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 2. Technical Report #1217  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due to…

Anderson, Daniel; Lai, Cheng-Fei; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

2012-01-01

60

An Investigation of Psychometric Properties of Coping Styles Scale Brief Form: A Study of Validity and Reliability  

ERIC Educational Resources Information Center

The aim of the current study was to develop a short form of the Coping Styles Scale based on the COPE Inventory. A total of 275 undergraduate students (114 female and 74 male) participated in the first study. In order to test the factor structure of the Coping Styles Scale Brief Form, principal components factor analysis and direct oblique rotation were…

Bacanli, Hasan; Surucu, Mustafa; Ilhan, Tahsin

2013-01-01

61

A 12-Item Short-Form Health Survey: construction of scales and preliminary tests of reliability and validity

Microsoft Academic Search

Regression methods were used to select and score 12 items from the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36) to reproduce the Physical Component Summary and Mental Component Summary scales in the general US population (n=2,333). The resulting 12-item short-form (SF-12) achieved multiple R squares of 0.911 and 0.918 in predictions of the SF-36 Physical Component Summary and SF-36

Ware John E. Jr; Mark Kosinski; Susan D. Keller

1996-01-01

62

The Behavior Problems Inventory-Short Form for Individuals with Intellectual Disabilities: Part II--Reliability and Validity  

ERIC Educational Resources Information Center

Background: The Behavior Problems Inventory-01 (BPI-01) is an informant-based behaviour rating instrument for intellectual disabilities (ID) with 49 items and three sub-scales: "Self-injurious Behavior," "Stereotyped Behavior" and "Aggressive/Destructive Behavior." The Behavior Problems Inventory-Short Form (BPI-S) is a BPI-01 spin-off with 30…

Rojahn, J.; Rowe, E. W.; Sharber, A. C.; Hastings, R.; Matson, J. L.; Didden, R.; Kroes, D. B. H.; Dumont, E. L. M.

2012-01-01

63

Reliability and structural integrity  

NASA Technical Reports Server (NTRS)

An analytic model is developed to calculate the reliability of a structure after it is inspected for cracks. The model accounts for the growth of undiscovered cracks between inspections and their effect upon the reliability after subsequent inspections. The model is based upon a differential form of Bayes' Theorem for reliability, and upon fracture mechanics for crack growth.

Davidson, J. R.

1976-01-01

64

The investigation of supply chain's reliability measure: a case study  

NASA Astrophysics Data System (ADS)

In this paper, using the supply chain operational reference model, the reliability of the relationships within a supply chain is evaluated. For this purpose, in the first step, the chain under investigation is divided into several stages, including first and second suppliers, initial and final customers, and the producing company. The supply chain system is then broken down into different subsystem parts based on the relationships formed between these stages, relationships that reflect the transportation of orders between stages. Considering the location of the system elements, which can be arranged in one of five forms (series, parallel, series/parallel, parallel/series, or combinations of these), we determine the structure of relationships in the divided subsystems. According to reliability evaluation scales on the three levels of the supply chain, the reliability of each chain is then calculated. Finally, using the formulas for calculating reliability in combined systems, the reliability of each subsystem and ultimately the whole system is evaluated.

Taghizadeh, Houshang; Hafezi, Ehsan

2012-10-01
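
The series/parallel decomposition this abstract describes maps directly onto textbook reliability algebra: series reliabilities multiply, while parallel failure probabilities multiply. A minimal sketch, assuming independent elements with known reliabilities; the structure and numbers are illustrative, not taken from the paper:

```python
# Minimal sketch of combined-system reliability, assuming independent
# elements with known reliabilities. Structure and numbers are invented.
from functools import reduce

def series(*r):
    """A series arrangement works only if every element works."""
    return reduce(lambda acc, ri: acc * ri, r, 1.0)

def parallel(*r):
    """A parallel arrangement fails only if every element fails."""
    return 1.0 - reduce(lambda acc, ri: acc * (1.0 - ri), r, 1.0)

# Hypothetical chain: two redundant suppliers feed one producer,
# which ships through two redundant delivery channels.
suppliers = parallel(0.95, 0.90)
producer = 0.98
delivery = parallel(0.92, 0.88)
print(series(suppliers, producer, delivery))  # overall chain reliability
```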

65

Female Genital Mutilation in Sierra Leone: Forms, Reliability of Reported Status, and Accuracy of Related Demographic and Health Survey Questions  

PubMed Central

Objective. To determine forms of female genital mutilation (FGM), assess consistency between self-reported and observed FGM status, and assess the accuracy of Demographic and Health Surveys (DHS) FGM questions in Sierra Leone. Methods. This cross-sectional study, conducted between October 2010 and April 2012, enrolled 558 females aged 12–47 from eleven antenatal clinics in northeast Sierra Leone. Data on demography, FGM status, and self-reported anatomical descriptions were collected. Genital inspection confirmed the occurrence and extent of cutting. Results. All participants reported FGM status; 4 refused genital inspection. Using the WHO classification of FGM, 31.7% had type Ib; 64.1% type IIb; and 4.2% type IIc. There was a high level of agreement between reported and observed FGM prevalence (81.2% and 81.4%, resp.). There was no correlation between DHS FGM responses and anatomic extent of cutting, as 2.7% reported pricking; 87.1% flesh removal; and 1.1% that genitalia was sewn closed. Conclusion. Types I and II are the main forms of FGM, with labia majora alterations in almost 5% of cases. Self-reports on FGM status could serve as a proxy measurement for FGM prevalence but not for FGM type. The DHS FGM questions are inaccurate for determining cutting extent. PMID:24204384

Grant, Donald S.; Berggren, Vanja

2013-01-01

66

Network reliability  

NASA Technical Reports Server (NTRS)

Network control (or network management) functions are essential for efficient and reliable operation of a network. Some control functions are currently included as part of the Open System Interconnection model. For local area networks, it is widely recognized that there is a need for additional control functions, including fault isolation functions, monitoring functions, and configuration functions. These functions can be implemented in either a central or distributed manner. The Fiber Distributed Data Interface Medium Access Control and Station Management protocols provide an example of distributed implementation. Relevant information is presented here in outline form.

Johnson, Marjory J.

1985-01-01

67

Formation of G-quadruplexes in poly-G sequences: structure of a propeller-type parallel-stranded G-quadruplex formed by a G(15) stretch.  

PubMed

Poly-G sequences are found in different genomes including human and have the potential to form higher-order structures with various applications. Previously, long poly-G sequences were thought to lead to multiple possible ways of G-quadruplex folding, rendering their structural characterization challenging. Here we investigate the structure of G-quadruplexes formed by poly-G sequences d(TTGnT), where n = 12 to 19. Our data show the presence of multiple and/or higher-order G-quadruplex structures in most sequences. Strikingly, NMR spectra of the TTG15T sequence containing a stretch of 15 continuous guanines are exceptionally well-resolved and indicate the formation of a well-defined G-quadruplex structure. The NMR solution structure of this sequence revealed a propeller-type parallel-stranded G-quadruplex containing three G-tetrad layers and three single-guanine propeller loops. The same structure can potentially form anywhere along a long Gn stretch, making it unique for molecular recognition by other cellular molecules. PMID:25375976

Sengar, Anjali; Heddi, Brahim; Phan, Anh Tuân

2014-12-16

68

A new model of in vitro fungal biofilms formed on human nail fragments allows reliable testing of laser and light therapies against onychomycosis.  

PubMed

Onychomycoses represent approximately 50% of all nail diseases worldwide. In warmer and more humid countries like Brazil, the incidence of onychomycoses caused by non-dermatophyte molds (NDM, including Fusarium spp.) or yeasts (including Candida albicans) has been increasing. Traditional antifungal treatments used for the dermatophyte-borne disease are less effective against onychomycoses caused by NDM. Although some laser and light treatments have demonstrated clinical efficacy against onychomycosis, their US Food and Drug Administration (FDA) approval as "first-line" therapy is pending, partly due to the lack of well-demonstrated fungicidal activity in a reliable in vitro model. Here, we describe a reliable new in vitro model to determine the fungicidal activity of laser and light therapies against onychomycosis caused by Fusarium oxysporum and C. albicans. Biofilms formed in vitro on sterile human nail fragments were treated with a 1064-nm neodymium-doped yttrium aluminum garnet (Nd:YAG) laser; 420-nm intense pulsed light (IPL) followed by Nd:YAG; or near-infrared (NIR) light (700-1400 nm). Laser and light antibiofilm effects were evaluated using a cell viability assay and scanning electron microscopy (SEM). All treatments were highly effective against C. albicans and F. oxysporum biofilms, resulting in decreases in cell viability of 45-60% for C. albicans and 92-100% for F. oxysporum. The model described here yielded fungicidal activities that matched more closely those observed in the clinic, when compared to published in vitro models for laser and light therapies. Thus, our model might represent an important tool for the initial testing, validation, and "fine-tuning" of laser and light therapies against onychomycosis. PMID:25471266

Vila, Taissa Vieira Machado; Rozental, Sonia; de Sá Guimarães, Claudia Maria Duarte

2014-12-01

69

The Zarit Caregiver Burden Interview Short Form (ZBI-12) in spouses of Veterans with Chronic Spinal Cord Injury, Validity and Reliability of the Persian Version  

PubMed Central

Background: To test the psychometric properties of the Persian version of the Zarit Burden Interview (ZBI-12) in an Iranian population. Methods: After translation and cultural adaptation of the questionnaire into Persian, 100 caregiver spouses of Iran-Iraq war (1980-88) veterans with chronic spinal cord injury who live in the city of Mashhad, Iran, were invited to participate in the study. The Persian version of the ZBI-12, accompanied by the Persian SF-36, was completed by the caregivers to test the validity of the Persian ZBI-12. A Pearson's correlation coefficient was calculated for validity testing. In order to assess the reliability of the Persian ZBI-12, we re-administered it to a random subsample of 48 caregiver spouses 3 days later. Results: Overall, the internal consistency of the questionnaire was found to be strong (Cronbach's alpha 0.77). The intercorrelation between the different domains of the ZBI-12 at test-retest was 0.78. The results revealed that the majority of questions on the Persian ZBI-12 correlate significantly with one another. In terms of validity, our results showed significant correlations between some domains of the Persian version of the Short Form Health Survey-36 and the Persian Zarit Burden Interview, such as Q1 with Role Physical (P=0.03), General Health (P=0.034), Social Functioning (P=0.037), and Mental Health (P=0.023), and Q3 with Physical Functioning (P=0.001), Vitality (P=0.002), and Social Functioning (P=0.001). Conclusions: Our findings suggest that the Persian version of the Zarit Burden Interview is both a valid and reliable instrument for measuring the burden of caregivers of individuals with chronic spinal cord injury. PMID:25692171

Rajabi-Mashhadi, Mohammad T; Mashhadinejad, Hosein; Ebrahimzadeh, Mohammad H; Golhasani-Keshtan, Farideh; Ebrahimi, Hanieh; Zarei, Zahra

2015-01-01

70

Photovoltaic module reliability workshop  

Microsoft Academic Search

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of Workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986 to 1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if the PV…

L. Mrig

1990-01-01

71

Evaluation of General Classes of Reliability Estimators Often Used in Statistical Analyses of Quasi-Experimental Designs  

NASA Astrophysics Data System (ADS)

In this paper, major reliability estimators are analyzed and their comparative results are discussed; their strengths and weaknesses are evaluated in a case study. Each of the reliability estimators has certain advantages and disadvantages. Inter-rater reliability is one of the best ways to estimate reliability when the measure is an observation. However, it requires multiple raters or observers. As an alternative, one could look at the correlation of ratings of the same single observer repeated on two different occasions. Each of the reliability estimators will give a different value for reliability. In general, the test-retest and inter-rater reliability estimates will be lower in value than the parallel forms and internal consistency ones because they involve measuring at different times or with different raters. These distinctions matter because reliability estimates are often used in statistical analyses of quasi-experimental designs.

Saini, K. K.; Sehgal, R. K.; Sethi, B. L.

2008-10-01
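
As a concrete companion to the estimators discussed above, here is a small sketch of two of them: test-retest reliability (a Pearson correlation between two administrations) and internal consistency (Cronbach's alpha). All data below are invented for illustration.

```python
# Illustrative computations of two reliability estimators named in the
# abstract. The score data are invented.
import numpy as np

def test_retest(x1, x2):
    """Pearson correlation between scores from two administrations."""
    return np.corrcoef(x1, x2)[0, 1]

def cronbach_alpha(items):
    """items: (n_subjects, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

print(test_retest([12, 15, 11, 18, 14], [13, 14, 12, 17, 15]))
print(cronbach_alpha([[3, 4, 3], [2, 2, 3], [4, 5, 4], [3, 3, 2], [5, 4, 5]]))
```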

72

General peroxidase activity of a parallel G-quadruplex-hemin DNAzyme formed by Pu39WT - a mixed G-quadruplex forming sequence in the Bcl-2 P1 promoter  

PubMed Central

Background A 39-base-pair sequence (Pu39WT) located 58 to 19 base pairs upstream of the Bcl-2 P1 promoter has been implicated in the formation of an intramolecular mixed G-quadruplex structure and is believed to play a major role in the regulation of bcl-2 transcription. However, extensive functional exploration requires further investigation. To further exploit the structure-function relationship of the Pu39WT-hemin DNAzyme, the secondary structure and peroxidase activity of the Pu39WT-hemin complex were investigated. Results Experimental results showed that when Pu39WT was incubated with hemin, it formed a parallel G-quadruplex-hemin complex in K+ or Na+ solution, rather than a mixed hybrid without bound hemin. Also, Pu39WT-hemin showed peroxidase activity in the presence of H2O2, oxidizing ABTS2- to produce the colored radical anion (ABTS•-), which could then be used to determine the parameters governing the catalytic efficiency and reveal the peroxidase activity of the Pu39WT-hemin DNAzyme. Conclusions These results demonstrate the general peroxidase activity of the Pu39WT-hemin DNAzyme, which is an intramolecular parallel G-quadruplex structure. This peroxidase activity of hemin complexed with the G-quadruplex-forming sequence in the Bcl-2 gene promoter may imply a potential mechanism of hemin-mediated cellular injury. PMID:25050134

2014-01-01

73

Computer Reliability  

NASA Technical Reports Server (NTRS)

Using a NASA developed program, Dr. J. Walter Bond is creating a course in computer reliability modeling. The course will examine three different computer programs, one of them NASA's Care III, the others UCLA's Aries 78 and Aries 82. All three are designed to help estimate the reliability of complex, redundant, fault tolerant system. In computer design, software of this kind can predict or model the effects of various hardware or software failures, a process called reliability modeling.

1987-01-01

74

Reliability analysis  

NASA Technical Reports Server (NTRS)

The objective was to search for and demonstrate approaches and concepts for fast wafer probe tests of mechanisms affecting the reliability of MOS technology and, based on these, develop and optimize test chips and test procedures. Progress is reported on four important wafer-level reliability problems: gate-oxide radiation hardness; hot-electron effects; time-dependence dielectric breakdown; and electromigration.

1985-01-01

75

Parallel processing for control applications  

SciTech Connect

Parallel processing has been a topic of discussion in computer science circles for decades. Using more than one computer to control a process has many advantages that compensate for the additional cost. Initially, multiple computers were used to attain higher speeds; a single CPU could not perform all of the operations necessary for real-time operation. As technology progressed and CPUs became faster, the speed issue became less significant, although the additional processing capability continues to make high speed an attractive element of parallel processing. Another reason for multiple processors is reliability. For the purpose of this discussion, reliability and robustness will be the focal point. Most contemporary conceptions of parallel processing involve visions of hundreds of single computers networked to provide 'computing power'. Indeed, our own teraflop machines are built from large numbers of computers configured in a network (and thus limited by the network). There are many approaches to parallel configurations, and this presentation offers something slightly different from the contemporary networked model. In the world of embedded computers, which is a pervasive force in contemporary computer controls, there are many single-chip computers available. If one backs away from the PC-based parallel computing model and considers the possibilities of a parallel control device based on multiple single-chip computers, a new area of possibilities becomes apparent. This study will look at the use of multiple single-chip computers in a parallel configuration, with emphasis placed on maximum reliability.

Telford, J. W. (John W.)

2001-01-01

76

Electricity Reliability  

E-print Network

Electricity Delivery and Energy Reliability: High Temperature Superconductivity (HTS). HTS materials are candidates for future electricity delivery because they have virtually no resistance to electric current, offering the possibility of new electric power equipment with more energy efficiency and higher capacity than today's systems.

77

Parallel rendering  

NASA Technical Reports Server (NTRS)

This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

Crockett, Thomas W.

1995-01-01

78

The MOS 36-item Short-Form Health Survey (SF36): III. Tests of data quality, scaling assumptions, and reliability across diverse patient groups  

Microsoft Academic Search

The widespread use of standardized health surveys is predicated on the largely untested assumption that scales constructed from those surveys will satisfy minimum psychometric requirements across diverse population groups. Data from the Medical Outcomes Study (MOS) were used to evaluate data completeness and quality, test scaling assumptions, and estimate internal-consistency reliability for the eight scales constructed from the MOS SF-36

Colleen A. McHorney; Ware John E. Jr; J. F. Rachel Lu; Cathy Donald Sherbourne

1994-01-01

79

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Word and Passage Reading Fluency Assessments: Grade 3. Technical Report #1218  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

80

Parallel Optimisation  

NSDL National Science Digital Library

An introduction to optimisation techniques that may improve parallel performance and scaling on HECToR. It assumes that the reader has some experience of parallel programming, including basic MPI and OpenMP. Scaling is a measurement of the ability of a parallel code to use increasing numbers of cores efficiently. A scalable application is one that, when the number of processors is increased, performs better by a factor which justifies the additional resource employed. Making a parallel application scale to many thousands of processes requires not only careful attention to the communication, data, and work distribution but also to the choice of algorithms. Since the choice of algorithm is too broad a subject, and too particular to the application domain, to cover in this brief guide, we concentrate on general good practice for parallel optimisation on HECToR.

81

Scalable parallel communications  

NASA Technical Reports Server (NTRS)

Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth service to a single application); and (3) coarse grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism) also with near linear speed-ups.

Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

1992-01-01

82

Item Selection for the Development of Parallel Forms from an IRT-Based Seed Test Using a Sampling and Classification Approach  

ERIC Educational Resources Information Center

Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…

Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan

2012-01-01

83

Multithreading and Parallel Microprocessors  

E-print Network

Multithreading and Parallel Microprocessors. Stephen Jenks, Electrical Engineering and Computer Science, Scalable Parallel and Distributed Systems Lab. Outline: parallelism in microprocessors; multicore processor parallelism; parallel programming for shared memory (OpenMP, POSIX Threads, Java Threads).

Shinozuka, Masanobu

84

2006 Long-Term Reliability Assessment: The Reliability of the Bulk Power Systems in North America

E-print Network

This assessment covers the reliability of the bulk power systems in North America. Topics include fuel supply and delivery for electric generation as important to reliability, and the dependence of future reliability on close coordination of generation and transmission planning.

85

Does memory contaminate test-retest reliability?  

PubMed

The Wonderlic Personnel Test (1983) was administered twice over a 3-week period under conditions in which the activity of the second test was experimentally manipulated. Data from 302 undergraduates were analyzed. The standard test-retest reliability coefficient, .872, was not significantly different from the coefficients obtained from three other groups that, on the second test, were each given specific instructions: (a) to reason out the answers (pure reassess condition); (b) to use reasoning, memory of their initial responses, or both (reassess and memory); or (c) to take an alternate form of the test (parallel). However, the standard test-retest reliability coefficient was higher, p less than .10, than the coefficient obtained from a condition (pure memory) in which subjects were instructed to duplicate their previous responses, using only memory. Although the subjects in the test-retest and combined reassess and memory conditions reported recalling previous answers for 20-25% of the items on the second test, it was concluded that conscious repetition of specific responses did not seriously inflate the estimate of test-retest reliability. PMID:1613489

McKelvie, S J

1992-01-01

86

Redefining reliability  

SciTech Connect

Want to buy some reliability? The question would have been unthinkable in some markets served by the natural gas business even a few years ago, but in the new gas marketplace, industrial, commercial and even some residential customers have the opportunity to choose from among an array of options about the kind of natural gas service they need--and are willing to pay for. The complexities of this brave new world of restructuring and competition have sent the industry scrambling to find ways to educate and inform its customers about the increased responsibility they will have in determining the level of gas reliability they choose. This article discusses the new options and the new responsibilities of customers, the need for continuous education, and MidAmerican Energy Company's experiment in direct marketing of natural gas.

Paulson, S.L.

1995-11-01

87

Broadband monitoring simulation with massively parallel processors  

NASA Astrophysics Data System (ADS)

Modern efficient optimization techniques, namely needle optimization and gradual evolution, enable one to design optical coatings of any type. Moreover, these techniques allow one to obtain multiple solutions with close spectral characteristics. It is important, therefore, to develop software tools that allow one to choose a practically optimal solution from a wide variety of possible theoretical designs. A practically optimal solution provides the highest production yield when the optical coating is manufactured. Computational manufacturing is a low-cost tool for choosing a practically optimal solution. The theory of probability predicts that reliable production yield estimations require many hundreds or even thousands of computational manufacturing experiments, so reliable estimation of the production yield may require too much computational time. The most time-consuming operation is calculation of the discrepancy function used by a broadband monitoring algorithm. This function is formed as a sum of terms over a wavelength grid. These terms can be computed simultaneously in different threads of computation, which opens great opportunities for parallelization. Multi-core and multi-processor systems can provide speedups of several times. Additional potential for further acceleration is offered by Graphics Processing Units (GPUs): a modern GPU consists of hundreds of massively parallel processors and is capable of performing floating-point operations efficiently.

Trubetskov, Mikhail; Amotchkina, Tatiana; Tikhonravov, Alexander

2011-09-01
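
The parallelization opportunity described here, a discrepancy function that is a sum of independent per-wavelength terms, can be sketched as follows. Note that merit_term() is a hypothetical stand-in for the real per-wavelength spectral computation, not the authors' code:

```python
# Sketch: split the wavelength grid across worker processes and sum the
# per-wavelength terms of the discrepancy function. merit_term() is a
# placeholder, not the authors' actual spectral model.
import numpy as np
from multiprocessing import Pool

def merit_term(wavelength):
    # placeholder for |measured - theoretical|^2 at one wavelength
    return np.sin(wavelength) ** 2

def discrepancy(wavelengths, workers=4):
    with Pool(workers) as pool:
        return sum(pool.map(merit_term, wavelengths))

if __name__ == "__main__":
    grid = np.linspace(400.0, 700.0, 10_000)  # wavelength grid, nm
    print(discrepancy(grid))
```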

88

Large Parallel UPS Systems Utilizing PWM Technology  

Microsoft Academic Search

In the past, large uninterruptible power supply (UPS) applications requiring parallel operation of pulse-width modulated (PWM) inverters were not practical. New developments in PWM logic design have made load sharing of PWM inverters very precise. This paper addresses the critical factors of the inverter and system design which permit parallel operation and enhancement of system reliability. Taking full advantage…

John Reed; Naresh Sharma

1984-01-01

89

Manufacturing & Reliability  

E-print Network

Deformation processing with superimposed pressures up to 2 GPa is possible, conducted on novel forging and forming equipment. Equipment: Advanced Deformation Simulator (MTS Model 311.31), with hot/warm/cold forming, multiple deformation sequences, a 110-kip forging actuator, a 220-kip indexing actuator, and a maximum loading rate of 120 in/s.

Rollins, Andrew M.

90

Photovoltaic module reliability workshop  

SciTech Connect

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of Workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986--1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if the PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, substantial research and testing are still required to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

Mrig, L. (ed.)

1990-01-01

91

Photovoltaic module reliability workshop  

NASA Astrophysics Data System (ADS)

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of Workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986 to 1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if the PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, substantial research and testing are still required to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the U.S., PV manufacturers, DOE laboratories, electric utilities and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

Mrig, L.

92

Adaptive parallel logic networks  

NASA Technical Reports Server (NTRS)

Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

Martinez, Tony R.; Vidal, Jacques J.

1988-01-01

93

Parallel Resistors  

NSDL National Science Digital Library

Students will measure the resistance of resistors that they have drawn on paper with a graphite pencil. They will then connect two resistors in parallel and measure the resistance of the combination. In this activity, it is important that students color v

Michael Horton

2009-05-30
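
The combination the students measure follows the standard rule for resistors in parallel, 1/R_total = 1/R1 + 1/R2 + ...; a one-function sketch with illustrative values:

```python
# Resistors in parallel: the reciprocal of the total resistance is the
# sum of the reciprocals. Example values are illustrative.
def parallel_resistance(*resistances):
    return 1.0 / sum(1.0 / r for r in resistances)

print(parallel_resistance(1000.0, 1000.0))  # two 1 kOhm strips -> 500.0 Ohm
```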

94

ESTABLISHING THE VALIDITY OF SHORT-FORM COMPOSITE ITEMS IN THE CONTEXT OF TEACHING EVALUATIONS  

E-print Network

states that: "Instruments to measure student ratings of instruction should solicit, at a minimum, student perspectives on (a) the delivery of instruction, (b) the assessment of learning, (c) the availability of the faculty, and (d) whether the goals... to the two items; Hill & Lewicki, 2005). Several different types of reliability measures exist. These include: (a) split-half, (b) internal consistency, (c) parallel forms, and (d) test-retest reliability. In terms of internal consistency, coefficient...

Sawalani, Gita Murli

2008-01-01

95

Structure, reliability, and predictive validity of the Texas Christian University Correctional Residential Self-Rating Form at Intake in a residential substance abuse treatment facility.  

PubMed

This study examined the structure and predictive validity of the Texas Christian University Correctional Residential Self-Rating Form at Intake in a court mandated inpatient substance abuse treatment facility (N = 729). Client characteristics such as treatment motivation and psychological and social functioning were examined as predictors of prospective behavioral outcomes including compliance with treatment program rules and guidelines as well as completion of the treatment program. Results suggest that a broad indicator of individuals' pretreatment motivation predicted their ability to complete the program. Treatment noncompliance, as measured by the number of rule infractions committed during the inpatient treatment, was significantly predicted by individuals' propensity to externalize their symptoms. Implications for the effective use of the CR SRF-Intake as a screener for potential treatment problems are discussed as well as possible targets for interventions in substance abuse populations. PMID:20598835

Lowmaster, Sara E; Morey, Leslie C; Baker, Kay L; Hopwood, Christopher J

2010-09-01

96

Reliability of Projects: A Quantitative Approach Part 1 Reliability and Productivity Models of the Elements of Work Flows  

Microsoft Academic Search

Classical reliability theory is applicable to the analysis of project reliability. The main idea behind the mathematical modeling of a project's reliability is to represent the entire project as a mixed system of parallel-serial human actions. Every human action, depending on the difficulty of the problems under investigation, can be successful or unsuccessful with some probability. Combining these probabilities with…

Pavel Barseghyan

97

Perfect Pipelining: A New Loop Parallelization Technique  

Microsoft Academic Search

Parallelizing compilers do not handle loops in a satisfactory manner. Fine-grain transformations capture irregular parallelism inside a loop body not amenable to coarser approaches but have limited ability to exploit parallelism across iterations. Coarse methods sacrifice irregular forms of parallelism in favor of pipelining (overlapping) iterations. In this paper we present a new transformation, Perfect Pipelining, that bridges the gap between these fine- and…

Alexander Aiken; Alexandru Nicolau

1988-01-01

98

Parallelization: Sieve of Eratosthenes  

NSDL National Science Digital Library

This module presents the Sieve of Eratosthenes, a method for finding the prime numbers below a certain integer. One can model the sieve for small integers by hand. For bigger integers, it becomes necessary to use a coded implementation. This code can be either serial (sequential) or parallel. Students will explore the various forms of parallelism (shared memory, distributed memory, and hybrid) as well as the scaling of the algorithm on multiple cores in its various forms, observing the relationship between run time of the program and number of cores devoted to the program. An assessment rubric, two exercises, and two student project ideas allow the student to consolidate her/his understanding of the material presented in the module.

Weeden, Aaron
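A compact sketch of the distributed-memory flavor of the module's algorithm: base primes up to sqrt(n) are found serially, then independent segments of the remaining range are sieved by worker processes. Python stands in here for whatever language the module's own code uses:

```python
# Segmented parallel Sieve of Eratosthenes sketch: serial base sieve,
# then independent range segments sieved by a pool of workers.
import math
from multiprocessing import Pool

def simple_sieve(limit):
    """Serial sieve for the base primes up to sqrt(n)."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(limit) + 1):
        if flags[p]:
            flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
    return [i for i, f in enumerate(flags) if f]

def sieve_segment(args):
    """Mark composites in [lo, hi) using the precomputed base primes."""
    lo, hi, base = args
    flags = bytearray([1]) * (hi - lo)
    for p in base:
        start = max(p * p, ((lo + p - 1) // p) * p)
        flags[start - lo :: p] = bytearray(len(flags[start - lo :: p]))
    return [lo + i for i, f in enumerate(flags) if f]

def parallel_sieve(n, workers=4):
    base = simple_sieve(math.isqrt(n))
    lo = math.isqrt(n) + 1
    step = max(1, (n - lo + 1) // workers + 1)
    chunks = [(s, min(s + step, n + 1), base) for s in range(lo, n + 1, step)]
    with Pool(workers) as pool:
        segments = pool.map(sieve_segment, chunks)
    return base + [p for seg in segments for p in seg]

if __name__ == "__main__":
    print(parallel_sieve(100))  # primes up to 100
```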

99

Parallel Anisotropic Tetrahedral Adaptation  

NASA Technical Reports Server (NTRS)

An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

Park, Michael A.; Darmofal, David L.

2008-01-01

100

Parallel hierarchical radiosity rendering  

SciTech Connect

In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

Carter, M.

1993-07-01
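
The symmetrization step mentioned in this abstract can be reconstructed in outline (a plausible sketch, not necessarily the dissertation's exact derivation): the classical radiosity system has a non-symmetric coefficient matrix, but the reciprocity relation between form factors lets each row be rescaled into symmetric form.

```latex
% Classical radiosity system (coefficients not symmetric in general):
%   B_i = E_i + \rho_i \sum_j F_{ij} B_j
% Multiply row i by A_i/\rho_i and use reciprocity A_i F_{ij} = A_j F_{ji}:
\frac{A_i}{\rho_i} B_i - \sum_j A_i F_{ij} B_j = \frac{A_i}{\rho_i} E_i
% The off-diagonal coefficient of B_j in row i is now -A_i F_{ij}, equal
% to the coefficient of B_i in row j, so the system matrix is symmetric.
```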

101

Parallel Programming in the Age of Ubiquitous Parallelism  

NASA Astrophysics Data System (ADS)

Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs.

Pingali, Keshav

2014-04-01

102

von Willebrand factor (vWf) as a plasma marker of endothelial activation in diabetes: improved reliability with parallel determination of the vWf propeptide (vWf:AgII).  

PubMed

Elevated plasma von Willebrand factor (vWf) levels are found in diabetes and other vasculopathies, and predict cardiovascular mortality. vWf is stored and released from endothelial cell secretory granules, along with equimolar amounts of its propeptide (vWf:AgII). In the present study, we examined plasma propeptide levels as a marker of endothelial secretion in vivo, using an ELISA based on monoclonal antibodies. vWf but not propeptide levels are influenced by blood groups, explaining in part the smaller variation in plasma propeptide levels among normal individuals. In both controls and insulin-dependent diabetic patients, we found a close correlation between propeptide and immunoreactive vWf levels (r2=0.54, p <0.0001). vWf and propeptide were elevated in patient subgroups with microalbuminuria or overt diabetic nephropathy, whereas only the propeptide was significantly elevated in the normoalbuminuric subgroup. This observation suggests that in conjunction with vWf, propeptide measurements may improve the identification of endothelial activation, which occurs frequently even without increased urinary albumin excretion. In 12 NIDDM patients, a 3-week diet enriched in monounsaturated fat (MUFA) resulted in parallel decreases in vWf (-22%, p <0.05) and propeptide (-17%, p <0.05) levels, indicating that the experimental diet affected endothelial secretion rather than vWf catabolism. A carbohydrate-enriched control diet did not significantly influence either marker. Our results suggest that concomitant determinations of plasma vWf and propeptide are useful tools to assess endothelial activation in vivo, and reinforce our previous conclusion that a diet rich in MUFA can improve endothelial function in NIDDM. PMID:9869174

Vischer, U M; Emeis, J J; Bilo, H J; Stehouwer, C D; Thomsen, C; Rasmussen, O; Hermansen, K; Wollheim, C B; Ingerslev, J

1998-12-01

103

Weka-Parallel: Machine Learning in Parallel  

Microsoft Academic Search

We present Weka-Parallel, which is a modification to Weka, a popular machine learning software package. Weka-Parallel expands upon the original program by allowing one to perform n-fold cross-validations in parallel. This added parallelism causes Weka-Parallel to demonstrate a significant speed increase over Weka by lowering the amount of time necessary to evaluate a dataset using any given classifier. Weka-Parallel is designed for the researcher who needs…

Sebastian Celis; David R. Musicant

104

Short-term reliability of a brief hazard perception test.  

PubMed

Hazard perception tests (HPTs) have been successfully implemented in some countries as a part of the driver licensing process and, while their validity has been evaluated, their short-term stability is unknown. This study examined the short-term reliability of a brief, dynamic version of the HPT. Fifty-five young adults (mean age = 21 years) with at least two years of post-licensing driving experience completed parallel, 21-scene HPTs with a one-month interval separating each test. Minimal practice effects (≤0.1 s) were manifested. Internal consistency (Cronbach's alpha) averaged 0.73 for the two forms. The correlation between the two tests was 0.55 (p<0.001), and correcting for lack of reliability increased the correlation to 0.72. Thus, a brief form of the HPT demonstrates acceptable short-term reliability in drivers whose hazard perception should be stable, an important feature for implementation and consumer acceptance. One implication of these results is that valid HPT scores should predict future crash risk, a desirable property for user acceptance of such tests. However, short-term stability should be assessed over longer periods and in other driver groups, particularly novices and older adults, in whom inter-individual differences in the development of hazard perception skill may render HPT tests unstable, even over short intervals. PMID:25173997

Scialfa, Charles T; Pereverseff, Rosemary S; Borkenhagen, David

2014-12-01
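
The "correcting for lack of reliability" step is presumably the classical Spearman correction for attenuation; written out below, with the caveat that the exact per-form reliabilities the authors used are not given in this snippet:

```latex
% Correction for attenuation: the estimated true-score correlation is the
% observed correlation divided by the geometric mean of the reliabilities.
r_{\text{corrected}} = \frac{r_{12}}{\sqrt{r_{11}\, r_{22}}}
% With r_12 = 0.55 and both form reliabilities near the reported average
% alpha of 0.73, this gives roughly 0.55 / 0.73 = 0.75, in the
% neighborhood of the 0.72 the abstract reports.
```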

105

Parallelization of VQ codebook generation by two algorithms: parallel LBG and aggressive PNN [image compression applications  

Microsoft Academic Search

Summary form only given. We evaluate two parallel algorithms for codebook generation for VQ compression: parallel LBG and aggressive PNN. Parallel LBG is based on the LBG algorithm with the K-means method. The cost of both algorithms mainly consists of: (a) the computation part; (b) the communication part; and (c) the update part. Aggressive PNN is a…

A. Wakatani

2005-01-01

106

PHACT: Parallel HOG and Correlation Tracking  

NASA Astrophysics Data System (ADS)

Histogram of Oriented Gradients (HOG) based methods for the detection of humans have become one of the most reliable methods of detecting pedestrians with a single passive imaging camera. However, they are not 100 percent reliable. This paper presents an improved tracker for the monitoring of pedestrians within images. The Parallel HOG and Correlation Tracking (PHACT) algorithm utilises self learning to overcome the drifting problem. A detection algorithm that utilises HOG features runs in parallel to an adaptive and stateful correlator. The combination of both acting in a cascade provides a much more robust tracker than the two components separately could produce.

Hassan, Waqas; Birch, Philip; Young, Rupert; Chatwin, Chris

2014-03-01

107

Reliability model generator  

NASA Technical Reports Server (NTRS)

An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.

McMann, Catherine M. (Inventor); Cohen, Gerald C. (Inventor)

1991-01-01

108

Comparison of Reliability Measures under Factor Analysis and Item Response Theory  

ERIC Educational Resources Information Center

Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability rho and an unweighted sum score-based omega, among many others. With increasing popularity of item response theory, a parallel reliability measure pi…

Cheng, Ying; Yuan, Ke-Hai; Liu, Cheng

2012-01-01

109

Reliability Engineering & Data Collection  

Microsoft Academic Search

International petrochemical, chemical, refining, and petroleum industries are trying to implement reliability programs to improve plant safety while maintaining plant availability. These programs can vary significantly in size and complexity. Any kind of reliability program, like a preventive maintenance (PM) program, always consists of one or more reliability models and the reliability data needed to execute these models. It is…

Michel Houtermans; T. V. Capelle; M. Al-Ghumgham

2007-01-01

110

Reviewing Traffic Reliability Research  

Microsoft Academic Search

Multi-dimensionality, stochasticity, and dynamics are essential characteristics of urban traffic operation. Traffic reliability research introduces the idea of reliability into traffic studies and is an important field for the causal analysis of traffic problems. Considerable research has been conducted on traffic reliability, covering theory to practice and model to algorithm. There already exists a framework for reliability analysis. However, few…

Dianhai WANG; Hongsheng QI; Cheng XU

2010-01-01

111

Reliability Generalization: "Lapsus Linguae"  

ERIC Educational Resources Information Center

This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,…

Smith, Julie M.

2011-01-01

112

Reliable Evaluations of URL Normalization  

Microsoft Academic Search

URL normalization is a process of transforming URL strings into canonical form. Through this process, duplicate URL representations for web pages can be reduced significantly. There are a number of normalization methods. In this paper, we describe four metrics for evaluating normalization methods. The reliability and consistency of a URL is also considered in our evaluation. With the metrics proposed,

Sung Jin Kim; Hyo Sook Jeong; Sang Ho Lee

2006-01-01

113

A trial for a reliable shape measurement using interferometry and deflectometry  

NASA Astrophysics Data System (ADS)

Phase measuring deflectometry is an emerging technique for measuring specular complex surfaces, such as aspherical and free-form surfaces. It is very attractive for its wide dynamic range of vertical scale and its broad application range. Because it is a gradient-based surface profilometry technique, the measured data must be integrated to obtain the surface shape, which can be a cause of low accuracy. On the other hand, interferometry is an accurate and well-known method for precision shape measurement. In interferometry, the original measured data is the phase of the interference signal, which directly shows the surface shape of the target. However, interferometry is too precise to measure aspherical surfaces, free-form surfaces, and the ordinary surfaces common in industry. To assure accuracy in ultra-precision measurement, reliability is the most important thing, and reliability can be maintained by cross-checking. I therefore propose a measuring method using both interferometry and deflectometry for reliable shape measurement. In this concept, the global shape is measured using deflectometry and the local shape around flat areas is measured using interferometry. The result of deflectometry is global and precise, but it includes ambiguity due to slope integration. With interferometry, only a small area can be measured, one that is almost parallel to the reference surface, but the result is accurate and reliable. Combining both results should yield a global, precise, and reliable measurement. I will present the concept of combining interferometry and deflectometry and some preliminary experimental results.

Hanayama, Ryohei

2014-07-01

114

Comprehensive Design Reliability Activities for Aerospace Propulsion Systems  

NASA Technical Reports Server (NTRS)

This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion systems mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

Christenson, R. L.; Whitley, M. R.; Knight, K. C.

2000-01-01

115

Improved CDMA Performance Using Parallel Interference Cancellation  

NASA Technical Reports Server (NTRS)

This report considers a general parallel interference cancellation scheme that significantly reduces the degradation effect of user interference but with a lesser implementation complexity than the maximum-likelihood technique. The scheme operates on the fact that parallel processing simultaneously removes from each user the interference produced by the remaining users accessing the channel in an amount proportional to their reliability. The parallel processing can be done in multiple stages. The proposed scheme uses tentative decision devices with different optimum thresholds at the multiple stages to produce the most reliably received data for generation and cancellation of user interference. The 1-stage interference cancellation is analyzed for three types of tentative decision devices, namely, hard, null zone, and soft decision, and two types of user power distribution, namely, equal and unequal powers. Simulation results are given for a multitude of different situations, in particular, those cases for which the analysis is too complex.

Simon, Marvin; Divsalar, Dariush

1995-01-01
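
A hedged numerical sketch of one cancellation stage for synchronous CDMA with BPSK and hard tentative decisions. The spreading codes, powers, and noise level are all invented, and the report itself also analyzes null-zone and soft decisions:

```python
# One-stage parallel interference cancellation (PIC) sketch for
# synchronous CDMA with BPSK. All signal parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
users, chips = 4, 32
codes = rng.choice([-1.0, 1.0], size=(users, chips)) / np.sqrt(chips)
bits = rng.choice([-1.0, 1.0], size=users)
received = codes.T @ bits + 0.1 * rng.standard_normal(chips)

matched = codes @ received      # matched-filter outputs per user
tentative = np.sign(matched)    # hard tentative decisions

# Parallel cancellation: for each user, subtract all other users'
# reconstructed signals from the received waveform, then re-detect.
final = np.empty(users)
for k in range(users):
    others = np.delete(np.arange(users), k)
    cleaned = received - codes[others].T @ tentative[others]
    final[k] = np.sign(codes[k] @ cleaned)

print("true bits:", bits)
print("PIC bits :", final)
```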

116

Parallel Mandelbrot Set Model  

NSDL National Science Digital Library

The Parallel Mandelbrot Set Model is a parallelization of the sequential MandelbrotSet model, which does all the computations on a single processor core. This parallelization is able to use a computer with more than one core (or processor) to carry out the same computation, thus speeding up the process. The parallelization is done using the model elements in the Parallel Java group. These model elements allow easy use of the Parallel Java library created by Alan Kaminsky. In particular, the parallelization used for this model is based on code in Chapters 11 and 12 of Kaminsky's book Building Parallel Java. The Parallel Mandelbrot Set Model was developed using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double click the ejs_chaos_ParallelMandelbrotSet.jar file to run the program if Java is installed.

Franciscouembre

2011-11-24
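
The row-parallel decomposition such a model can use is easy to illustrate in Python (the model itself is Java, built on Kaminsky's Parallel Java library): each image row is independent of the others, so rows can be farmed out to a pool of worker processes.

```python
# Row-parallel Mandelbrot sketch: each row is an independent task.
# Image size, viewport, and iteration cap are illustrative.
from multiprocessing import Pool

WIDTH, HEIGHT, MAX_ITER = 400, 300, 256

def mandelbrot_row(y):
    row = []
    ci = -1.2 + 2.4 * y / (HEIGHT - 1)
    for x in range(WIDTH):
        cr = -2.0 + 3.0 * x / (WIDTH - 1)
        zr = zi = 0.0
        n = 0
        while zr * zr + zi * zi <= 4.0 and n < MAX_ITER:
            zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
            n += 1
        row.append(n)   # iteration count at this pixel
    return row

if __name__ == "__main__":
    with Pool() as pool:
        image = pool.map(mandelbrot_row, range(HEIGHT))  # one task per row
    print(image[HEIGHT // 2][:10])
```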

117

Implementation of an efficient parallel BDD package  

Microsoft Academic Search

Large BDD applications push computing resources to their limits. One solution to overcoming resource limitations is to distribute the BDD data structure across multiple networked workstations. This paper presents an efficient parallel BDD package for a distributed environment such as a network of workstations (NOW) or a distributed memory parallel computer. The implementation exploits a number of different forms of…

Tony Stornetta; Forrest Brewer

1996-01-01

118

Parallel Activation in Bilingual Phonological Processing  

ERIC Educational Resources Information Center

In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

Lee, Su-Yeon

2011-01-01

119

Parallel rendering techniques for massively parallel visualization  

SciTech Connect

As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques, and presents rendering algorithms, developed for massively parallel processors (MPPs), for polygons, spheres, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

Hansen, C.; Krogh, M.; Painter, J.

1995-07-01

120

Towards reliable multimodal sensing in aware environments  

Microsoft Academic Search

A prototype system for implementing a reliable sensor network for large-scale smart environments is presented. Most applications within any form of smart environment (rooms, offices, homes, etc.) are dependent on reliable who, where, when, and what information about their inhabitants (users). This information can be inferred from different sensors spread throughout the space. However, isolated sensing technologies provide limited…

Scott Stillman; Irfan Essa

2001-01-01

121

Reliability analysis of composite structures  

NASA Technical Reports Server (NTRS)

A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters are then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, fabrication and assembly processes. The influence of structural geometry and mode of failure are also considerations in the evaluation. Example problems are given to illustrate various levels of analytical complexity.

Kan, Han-Pin

1992-01-01

122

Task parallel implementation of NAS parallel benchmarks.  

E-print Network

??The multi-core era brings new challenges to the programming community. Parallelization requirements of applications in mainstream computing and applications in emergent fields of high performance… (more)

Nanjaiah, Shashi Kumar

2010-01-01

123

Power electronics reliability analysis.  

SciTech Connect

This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
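
As a minimal sketch of the fault-tree idea described above (and not the report's actual models), the following assumes independent components and hypothetical failure probabilities; in reliability terms, an OR gate captures series behavior and an AND gate captures redundancy:

```python
# Minimal sketch, assuming independent components: a fault tree where an
# OR gate fails if any input fails and an AND gate fails only if all do.
def or_gate(*failure_probs):             # series behavior in reliability terms
    r = 1.0
    for q in failure_probs:
        r *= (1.0 - q)
    return 1.0 - r                       # failure probability of the gate

def and_gate(*failure_probs):            # redundant/parallel behavior
    q = 1.0
    for p in failure_probs:
        q *= p
    return q

# Hypothetical device: two redundant power stages feeding one controller.
q_stage, q_ctrl = 0.02, 0.005
q_system = or_gate(and_gate(q_stage, q_stage), q_ctrl)
print(f"system reliability = {1.0 - q_system:.5f}")
```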

Smith, Mark A.; Atcitty, Stanley

2009-12-01

124

Human Reliability Program Overview  

SciTech Connect

This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

Bodin, Michael

2012-09-25

125

Parallel iterative methods for solving linear equations  

SciTech Connect

A form of matrix splitting into interlocking quadrants is introduced; this leads to a new class of iterative methods for the numerical solution of linear simultaneous equations which are applicable for use on parallel computers. 11 references.
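
The interlocking-quadrant splitting itself is not reproduced here; as a generic illustration of how a matrix splitting yields a parallelizable iteration, the sketch below uses the plain Jacobi splitting A = D + R, whose per-component updates are mutually independent. The matrix and right-hand side are hypothetical:

```python
# Generic illustration (not Evans & Haghighi's quadrant splitting): Jacobi
# iteration, whose per-component updates are independent and so map
# naturally onto parallel processors.
import numpy as np

def jacobi(A, b, iters=200):
    D = np.diag(A)                       # diagonal part of the splitting
    R = A - np.diagflat(D)               # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D              # every entry updatable in parallel
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant, so it converges
b = np.array([1.0, 2.0])
print(jacobi(A, b))                      # approximately solves A x = b
```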

Evans, D.J.; Haghighi, R.S.

1982-01-01

126

Design considerations for parallel graphics libraries  

NASA Technical Reports Server (NTRS)

Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

Crockett, Thomas W.

1994-01-01

127

Parallel Adaptive Mesh Refinement Library  

NASA Technical Reports Server (NTRS)

Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
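
As a toy illustration of the block-tree structure described above (not PARAMESH's Fortran 90 interface), the sketch below builds a two-dimensional quad-tree in which refining a grid block creates four child blocks at the next refinement level; all names are hypothetical:

```python
# Toy sketch (not PARAMESH's API): a quad-tree of logically Cartesian grid
# blocks, where refining a block creates four children covering its
# quadrants at the next refinement level.
class GridBlock:
    def __init__(self, x0, y0, size, level=0):
        self.x0, self.y0, self.size, self.level = x0, y0, size, level
        self.children = []               # empty for leaf blocks

    def refine(self):
        h = self.size / 2
        self.children = [
            GridBlock(self.x0,     self.y0,     h, self.level + 1),
            GridBlock(self.x0 + h, self.y0,     h, self.level + 1),
            GridBlock(self.x0,     self.y0 + h, h, self.level + 1),
            GridBlock(self.x0 + h, self.y0 + h, h, self.level + 1),
        ]

root = GridBlock(0.0, 0.0, 1.0)
root.refine()                            # refine where the application demands
root.children[2].refine()                # deeper resolution in one quadrant
print(len(root.children))                # 4
```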

Mac-Neice, Peter; Olson, Kevin

2005-01-01

128

Parallel integrated frame synchronizer chip  

NASA Technical Reports Server (NTRS)

A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

2000-01-01

129

Speculative parallelization of partially parallel loops  

E-print Network

…and applied a fully parallel data dependence test to determine if it had any cross-processor dependences. If the test failed, then the loop was re-executed serially. While this method exploits doall parallelism well, it can cause slowdowns for loops...

Dang, Francis Hoai Dinh

2009-05-15

130

Low-power approaches for parallel, free-space photonic interconnects  

SciTech Connect

Future advances in the application of photonic interconnects will involve the insertion of parallel-channel links into Multi-Chip Modules (MCMs) and board-level parallel connections. Such applications will drive photonic link components into more compact forms that consume far less power than traditional telecommunication data links. These will make use of new device-level technologies such as vertical cavity surface-emitting lasers and special low-power parallel photoreceiver circuits. Depending on the application, these device technologies will often be monolithically integrated to reduce the amount of board or module real estate required by the photonics. Highly parallel MCM and board-level applications will also require simplified drive circuitry, lower cost, and higher reliability than has been demonstrated in photonic and optoelectronic technologies. An example is found in two-dimensional point-to-point array interconnects for MCM stacking. These interconnects are based on high-efficiency Vertical Cavity Surface Emitting Lasers (VCSELs), Heterojunction Bipolar Transistor (HBT) photoreceivers, integrated micro-optics, and MCM-compatible packaging techniques. Individual channels have been demonstrated at 100 Mb/s, operating with a direct 3.3V CMOS electronic interface while using 45 mW of electrical power. These results demonstrate how optoelectronic device technologies can be optimized for low-power parallel link applications.

Carson, R.F.; Lovejoy, M.L.; Lear, K.L.; Warren, M.E.; Seigal, P.K.; Craft, D.C.; Kilcoyne, S.P.; Patrizi, G.A.; Blum, O.

1995-12-31

131

Combinatorial reliability analysis of multiprocessor computers  

SciTech Connect

The authors propose a combinatorial method to evaluate the reliability of multiprocessor computers. Multiprocessor structures are classified as crossbar switch, time-shared buses, and multiport memories. Closed-form reliability expressions are derived via combinatorial path enumeration on the probabilistic-graph representation of a multiprocessor system. The method can analyze the reliability performance of real systems like C.mmp, Tandem 16, and Univac 1100/80. User-oriented performance levels are defined for measuring the performability of degradable multiprocessor systems. For a regularly structured multiprocessor system, this technique is fast and easy to use for evaluating system reliability with statistically independent component reliabilities. System availability can also be evaluated by this reliability study. 6 references.
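
The paper's closed-form derivations are not reproduced here, but a numerical version of reliability from path enumeration is easy to sketch: the system works if at least one minimal path of components works, and inclusion-exclusion handles overlapping paths. Component names and reliabilities below are hypothetical, and independence is assumed:

```python
# Toy illustration of path enumeration with inclusion-exclusion: the system
# works if at least one minimal path of components works (independence
# assumed; component labels are hypothetical).
from itertools import combinations

paths = [{"P1", "M1"}, {"P2", "M2"}]     # minimal paths of a 2x2 structure
r = {"P1": 0.9, "P2": 0.9, "M1": 0.95, "M2": 0.95}

def union_prob(paths, r):
    total = 0.0
    for k in range(1, len(paths) + 1):
        for subset in combinations(paths, k):
            comps = set().union(*subset)
            term = 1.0
            for c in comps:
                term *= r[c]
            total += (-1) ** (k + 1) * term
    return total

print(f"system reliability = {union_prob(paths, r):.5f}")
```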

Hwang, K.; Tian-Pong Chang

1982-12-01

132

User's guide to the Reliability Estimation System Testbed (REST)  

NASA Technical Reports Server (NTRS)

The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

1992-01-01

133

Efficient reliable broadcast for commodity clusters  

Microsoft Academic Search

High-speed collective communication is the key to achieving high performance in parallel computing. In the past, collective operations have usually been implemented using unicast operations. We propose a new architecture, EQA (Enhanced Queue Architecture), for implementing high-speed collective operations in a cluster. With the incorporation of EQA and the hardware broadcast facility in network switches, an efficient reliable broadcast operation is

Kwan-Po Wong; Cho-Li Wang

2000-01-01

134

Parallel flow diffusion battery  

DOEpatents

A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

Yeh, H.C.; Cheng, Y.S.

1984-01-01

135

DC Circuits: Parallel Resistances  

NSDL National Science Digital Library

In this interactive learning activity, students will learn about parallel circuits. They will measure and calculate the resistance of parallel circuits and answer several questions about the example circuit shown.
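
Stated compactly, the rule this activity has students verify is that the reciprocals of parallel resistances add:

```latex
% Equivalent resistance of n resistors in parallel, and the two-resistor case:
\[
\frac{1}{R_\mathrm{eq}} = \sum_{i=1}^{n} \frac{1}{R_i},
\qquad
R_\mathrm{eq} = \frac{R_1 R_2}{R_1 + R_2} \quad \text{for } n = 2.
\]
```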

136

Parallel I/O Systems  

NSDL National Science Digital Library

* Redundant disk array architectures
* Fault tolerance issues in parallel I/O systems
* Caching and prefetching
* Parallel file systems
* Parallel I/O systems
* Parallel I/O programming paradigms
* Parallel I/O applications and environments
* Parallel programming with parallel I/O

Amy Apon

137

Reliability models for dataflow computer systems  

NASA Technical Reports Server (NTRS)

The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

Kavi, K. M.; Buckles, B. P.

1985-01-01

138

Reliability quantification and visualization for electric microgrids  

NASA Astrophysics Data System (ADS)

The electric grid in the United States is undergoing modernization from the aging infrastructure of the past to a more robust and reliable power system of the future. The primary efforts in this direction have come from the federal government through the American Recovery and Reinvestment Act of 2009 (Recovery Act). This has provided the U.S. Department of Energy (DOE) with $4.5 billion to develop and implement programs through DOE's Office of Electricity Delivery and Energy Reliability (OE) over a period of 5 years (2008-2012). This was initially a part of Title XIII of the Energy Independence and Security Act of 2007 (EISA), which was later modified by the Recovery Act. As a part of DOE's Smart Grid Programs, Smart Grid Investment Grants (SGIG) and Smart Grid Demonstration Projects (SGDP) were developed as two of the largest programs, with federal grants of $3.4 billion and $600 million respectively. The Renewable and Distributed Systems Integration (RDSI) demonstration projects were launched in 2008 with the aim of reducing peak electricity demand by 15 percent at distribution feeders. Nine such projects, located around the nation, were competitively selected. The City of Fort Collins, in cooperative partnership with other federal and commercial entities, was identified to research, develop and demonstrate a 3.5 MW integrated mix of heterogeneous distributed energy resources (DER) to reduce peak load on two feeders by 20-30 percent. This project was called FortZED RDSI and provided an opportunity to demonstrate integrated operation of a group of assets, including demand response (DR), as a single controllable entity, which is often called a microgrid. As per IEEE Standard 1547.4-2011 (IEEE Guide for Design, Operation, and Integration of Distributed Resource Island Systems with Electric Power Systems), a microgrid can be defined as an electric power system with the following characteristics: (1) DR and load are present, (2) it has the ability to disconnect from and parallel with the area Electric Power System (EPS), (3) it includes the local EPS and may include portions of the area EPS, and (4) it is intentionally planned. A more reliable electric power grid requires microgrids to operate in tandem with the EPS. Reliability can be quantified through various metrics for performance measurement; in North America this is done through North American Electric Reliability Corporation (NERC) metrics. The microgrid differs significantly from the traditional EPS, especially at the asset level, due to heterogeneity in assets, so its performance cannot be quantified by the same metrics used for the EPS. Some of the NERC metrics are calculated and interpreted in this work to quantify performance for a single asset and for a group of assets in a microgrid, and two more metrics are introduced for system-level performance quantification. The next step is a better representation of the large amount of data generated by the microgrid. Visualization is one such form of representation, which is explored in detail, and a graphical user interface (GUI) is developed as a deliverable tool to the operator for informed decision making and planning. Electronic Appendices I and II contain data and MATLAB program codes for the analysis and visualization in this work.

Panwar, Mayank

139

Special issue on parallelism  

Microsoft Academic Search

The articles presented in our Special Issue on parallel processing on the supercomputing scale reflect, to some extent, splits in the community developing these machines. There are several schools of thought on how best to implement parallel processing at both the hard- and software levels. Controversy exists over the wisdom of aiming for general- or special-purpose parallel machines, and what

Karen A. Frenkel

1986-01-01

140

CFD on parallel computers  

NASA Astrophysics Data System (ADS)

CFD, or Computational Fluid Dynamics, is one of the scientific disciplines that has always posed new challenges to the capabilities of modern, ultra-fast supercomputers, and now to the even faster parallel computers. For applications where number crunching is of primary importance, there is perhaps no escaping parallel computers, since sequential computers can only be (as projected) as fast as a few gigaflops and no more, unless, of course, some altogether new technology appears in the future. For parallel computers, on the other hand, there is no such limit, since any number of processors can be made to work in parallel. Computationally demanding CFD codes and parallel computers are therefore soul-mates, and will remain so for all foreseeable future. So much so that there is a separate and fast-emerging discipline that tackles problems specific to CFD as applied to parallel computers. For some years now, there has been an international conference on parallel CFD. So, one can indeed say that parallel CFD has arrived. To understand how CFD codes are parallelized, one must understand a little about how parallel computers function. Therefore, in what follows we will first deal with parallel computers, then with what a typical CFD code (if there is such a thing) looks like, and then with the strategies of parallelization.

Basu, A. J.

1994-10-01

141

Coordinating heterogeneous parallelism  

Microsoft Academic Search

Our goal is to produce a client-server based programming environment to enable massively parallel symbolic computing on heterogeneous ensembles of parallel hardware. Multiple users should be able to log on as clients and use any combination of the resources available, via a single simple language. We want our users to have control over the kind of parallelism employed by their

Duncan J. Batey; Julian A. Padget

1995-01-01

142

Parallel simulation today  

NASA Technical Reports Server (NTRS)

This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

Nicol, David; Fujimoto, Richard

1992-01-01

143

Research in parallel computing  

NASA Technical Reports Server (NTRS)

This report summarizes work on parallel computations for NASA Grant NAG-1-1529 for the period 1 Jan. - 30 June 1994. Short summaries on highly parallel preconditioners, target-specific parallel reductions, and simulation of delta-cache protocols are provided.

Ortega, James M.; Henderson, Charles

1994-01-01

144

Decomposing the Potentially Parallel  

NSDL National Science Digital Library

This course provides an introduction to the issues involved in decomposing problems onto parallel machines, and to the types of architectures and programming styles commonly found in parallel computers. The list of topics discussed includes types of decomposition, task farming, regular domain decomposition, unbalanced grids, and parallel molecular dynamics.

Elspeth Minty, Robert Davey, Alan Simpson, David Henty

145

Mapping Accuracy of Short Reads from Massively Parallel Sequencing and the Implications for Quantitative Expression Profiling  

Microsoft Academic Search

Background: Massively parallel sequencing offers an enormous potential for expression profiling, in particular for interspecific comparisons. Currently, different platforms for massively parallel sequencing are available, which differ in read length and sequencing costs. The 454 technology offers the highest read length; the other sequencing technologies are more cost effective, at the expense of shorter reads. Reliable expression profiling by massively parallel sequencing

Nicola Palmieri; Christian Schlötterer; Rodolfo Aramayo

2009-01-01

146

Reliability for tactical cryocoolers  

NASA Astrophysics Data System (ADS)

For the past 18 years Carleton Life Support Systems has produced over 15,000 tactical cryogenic coolers that are primarily used in military infrared systems with excellent demonstrated reliability. As system reliability has improved, the cooler performance has emerged as a dominant component for reliability predictions. This has driven cooler reliability requirements to increase from a 1500-hour rotary cooler in chiefly ground applications to current requirements of 20,000 hours for linear coolers in advanced airborne applications. At the same time there is a push for improved cooldown time, lower power, lighter weight and smaller package. This paper reviews our progress on extending cooler life. It reviews recent product returns and contends that the majority of issues are not primarily related to reliability. It also reviews how system performance specifications are restrictive to the cooler designer in achieving higher reliability in tactical coolers.

Hoefelmeyer, Henry L.; Nelson, Randy; Nelson, Robert

2004-08-01

147

Test-Retest Reliability and Minimal Detectable Change on Balance and Ambulation Tests, the 36-Item Short Form Health Survey, and the Unified Parkinson Disease Rating Scale in People With Parkinsonism  

Microsoft Academic Search

Background and Purpose. Distinguishing between a clinically significant change and change due to measurement error can be difficult. The purpose of this study was to determine test-retest reliability and minimal detectable change for the Berg Balance Scale (BBS), forward and backward functional reach, the Romberg Test and the Sharpened Romberg Test (SRT) with eyes open and closed, the Activities-specific

Teresa Steffen; Megan Seney

148

Human reliability analysis  

SciTech Connect

The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory, drawing upon reliability analysis, psychology, human factors engineering, and statistics, and integrating elements of these fields within a systems framework. They provide a history of human reliability analysis and include examples of the application of the systems approach.

Dougherty, E.M.; Fragola, J.R.

1988-01-01

149

Summary of Research on Reliability Criteria-Based Flight System Control  

NASA Technical Reports Server (NTRS)

This paper presents research on the reliability assessment of adaptive flight control systems. The topics include: 1) Overview of Project Focuses; 2) Reliability Analysis; and 3) Design for Reliability. This paper is presented in viewgraph form.

Wu, N. Eva; Belcastro, Christine (Technical Monitor)

2002-01-01

150

Recalibrating software reliability models  

NASA Technical Reports Server (NTRS)

In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.

Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John

1989-01-01

151

Software Reliability 2002  

NASA Technical Reports Server (NTRS)

In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data and may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating the reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Non-parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.

Wallace, Dolores R.

2003-01-01

152

Low-power, parallel photonic interconnections for Multi-Chip Module applications  

SciTech Connect

New applications of photonic interconnects will involve the insertion of parallel-channel links into Multi-Chip Modules (MCMs). Such applications will drive photonic link components into more compact forms that consume far less power than traditional telecommunication data links. MCM-based applications will also require simplified drive circuitry, lower cost, and higher reliability than has been demonstrated currently in photonic and optoelectronic technologies. The work described is a parallel link array, designed for vertical (Z-Axis) interconnection of the layers in a MCM-based signal processor stack, operating at a data rate of 100 Mb/s. This interconnect is based upon high-efficiency VCSELs, HBT photoreceivers, integrated micro-optics, and MCM-compatible packaging techniques.

Carson, R.F.; Lovejoy, M.L.; Lear, K.L.

1994-12-31

153

A Bayesian approach to reliability and confidence  

NASA Technical Reports Server (NTRS)

The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
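
A scalar illustration of the Bayesian machinery discussed above, under a uniform (worst-case) prior and hypothetical pass/fail test counts; the time-varying failure-rate case the abstract describes needs the numerical treatment mentioned there:

```python
# Scalar illustration only (the time-varying failure-rate case in the
# abstract needs numerical work): Beta-Binomial update with the uniform
# prior described in the text as the worst-case choice.
from scipy import stats

successes, trials = 48, 50               # hypothetical test outcomes
posterior = stats.beta(1 + successes, 1 + (trials - successes))

print(f"posterior mean reliability = {posterior.mean():.4f}")
lo, hi = posterior.interval(0.90)
print(f"90% credible interval = ({lo:.4f}, {hi:.4f})")
```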

Barnes, Ron

1989-01-01

154

Bonded Retainers - Clinical Reliability  

Microsoft Academic Search

Bonded retainers have become a very important retention appliance in orthodontic treatment. They are popular because they are considered reliable, independent of patient cooperation, highly efficient, easy to fabricate, and almost invisible. Of these traits, reliability is the subject of this clinical study. A total of 549 patients with retainers were analyzed with regard to wearing time, extension of the

Dietmar Segner; Bettina Heinrici

2000-01-01

155

On reliability growth testing  

Microsoft Academic Search

Reliability development growth testing (RDGT) is the most common method used to improve equipment reliability. The author had an opportunity to perform an analysis of hardware that experienced environmental stress screening (ESS), environmental qualification testing (EQT), RDGT and field usage. The failure mode and corrective action data were used to qualitatively assess the effectiveness of RDGT testing. The results of

E. Demko

1995-01-01

156

Hawaii electric system reliability.  

SciTech Connect

This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability, but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

Silva Monroy, Cesar Augusto; Loose, Verne William

2012-09-01

157

Semiconductor network reliability assessment  

Microsoft Academic Search

The paper discusses the reliability test plan and test results on semiconductor network microelectronic devices. Included in the test plan are reliability programs, life testing, step stress testing, and environmental tests. The failure analysis-corrective action cycle is discussed at length. The failure analysis procedure is outlined with many specific examples of analysis results at various points in the analysis procedure.

J. Adams; W. Workman

1964-01-01

158

The reliability of multitest regimens with sacroiliac pain provocation tests  

Microsoft Academic Search

Background: Studies concerning the reliability of individual sacroiliac tests have inconsistent results. It has been suggested that the use of a test regimen is a more reliable form of diagnosis than individually performed tests. Objective: To assess the interrater reliability of multitest scores by using a regimen of 5 commonly used sacroiliac pain provocation tests. Methods: Two examiners examined 78

Dirk J. Kokmeyer; Peter van der Wurff; Geert Aufdemkampe; Theresa C. M. Fickenscher

2002-01-01

159

Parallel digital forensics infrastructure.  

SciTech Connect

This report documents the architecture and implementation of a Parallel Digital Forensics (PDF) infrastructure, which is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger; the only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

Liebrock, Lorie M. (New Mexico Tech, Socorro, NM); Duggan, David Patrick

2009-10-01

160

Java Parallel Secure Stream for Grid Computing  

SciTech Connect

The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, due to the need to tune the TCP window size to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate based single sign-on mechanism and SSL based connection establishment are integrated into this package. Finally, a few applications using this package are discussed.

Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

2001-09-01

161

Serial-parallel decompositions of Mueller matrices.  

PubMed

The algebraic methods for serial and parallel decompositions of Mueller matrices are combined in order to obtain a general framework for a suitable analysis of polarimetric measurements based on equivalent systems constituted by simple components. A general procedure for the parallel decomposition of a Mueller matrix into a convex sum of pure elements is presented and applied to the two canonical forms of depolarizing Mueller matrices [Ossikovski, J. Opt. Soc. Am. A 27, 123 (2010).], leading to the serial-parallel decomposition of any Mueller matrix. The resultant model is consistent with the mathematical structure and the reciprocity properties of Mueller matrices. PMID:23456000
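
In symbols, the two compositions combined in this framework are matrix multiplication for serial arrangements and a convex sum for parallel ones; this restatement follows standard polarimetric conventions rather than the paper's own notation:

```latex
% Serial composition is matrix multiplication; parallel composition is a
% convex sum of pure (nondepolarizing) Mueller matrices:
\[
M_\mathrm{serial} = M_n \cdots M_2 M_1,
\qquad
M_\mathrm{parallel} = \sum_i p_i M_i, \quad p_i \ge 0,\ \sum_i p_i = 1.
\]
```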

Gil, José J; San José, Ignacio; Ossikovski, Razvigor

2013-01-01

162

Non-Cartesian parallel imaging reconstruction.  

PubMed

Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499

Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

2014-11-01

163

Availability and reliability overview  

SciTech Connect

With the diversity of fuel costs, outages of high voltage direct current (HVDC) systems can have a large economic impact. Available methods to evaluate reliability are based on simple probability and sufficient data. A valid, consistent data base for historical performance is available through CIGRE publications. Additional information on future performance is available from each supplier's bid. Using all available information, including the customer's own estimate of reliability, reliability can be evaluated by calculating the expected value of energy unavailability for each supplier. 4 figures, 2 tables.

Albrecht, P.F.; Fink, J.L.

1984-01-01

164

PCLIPS: Parallel CLIPS  

NASA Technical Reports Server (NTRS)

A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge-base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

1994-01-01

165

An iterative parallel algorithm of finite element method  

Microsoft Academic Search

In this paper, a parallel algorithm with iterative form for solving finite element equations is presented. Based on the iterative solution of linear algebra equations, the parallel computational steps are introduced in this method. Also, by using the weighted residual method and choosing appropriate weighting functions, the basic finite element form of the parallel algorithm is deduced. The program of

Hu Ning; Zhang Ru-qing

1992-01-01

166

Medical Device Reliability BIOMATERIALS  

E-print Network

…generation packaging, where conformal coatings will serve as the primary interface between the device… Our goal is to provide medical device manufacturers… and consistency of active implantable medical devices. These devices, including pacemakers, cardiac defibrillators…

167

Sequential cumulative fatigue reliability  

NASA Technical Reports Server (NTRS)

A component is assumed to be subjected to a sequence of several groups of sinusoidal stresses. Each group consists of a specific number of cycles having the same maximum alternating stress level and the same mean stress level, the maximum alternating stress level being different from group to group. A method for predicting the reliability of components subjected to such loads is proposed, given their distributional alternating stress versus cycles-to-failure (S-N) diagram. It is called the 'conditional reliability-equivalent life' method. It is applied to four cases using distributional fatigue data generated in the Reliability Research Laboratory of The University of Arizona, and the predicted reliabilities are compared and discussed.

Kececioglu, D.; Chester, L. B.; Gardner, E. O.

1974-01-01

168

Parallel Processing Letters, c World Scienti c Publishing Company  

E-print Network

for point-to-point communication (an example is a restricted form of reduction), but memory allocation… of powerful forms of reduction and scan [11] difficult in Fortran or C. Our prime concerns are the structure… Portability can only be achieved if the parallel solution is not geared towards a specific parallel

Passau, Universität

169

Reliability and Regression Analysis  

NSDL National Science Digital Library

This applet, by David M. Lane of Rice University, demonstrates how the reliability of X and Y affect various aspects of the regression of Y on X. Java 1.1 is required and a full set of instructions is given in order to get the full value from the applet. Exercises and definitions to key terms are also given to help students understand reliability and regression analysis.

Lane, David M.

2009-02-17

170

The Journey Toward Reliability  

NSDL National Science Digital Library

Kansas State University faculty members have partnered with industry to assist in the implementation of a reliability centered manufacturing (RCM) program. This paper highlights faculty members' experiences, the benefits to industry of implementing a reliability centered manufacturing program, and faculty members' roles in the RCM program implementation. The paper includes lessons learned by faculty members, short-term extensions of the faculty-industry partnership, and a long-term vision for an RCM institute at the university level.

Brockway, Kathy V.; Spaulding, Greg

2010-03-15

171

A high-speed linear algebra library with automatic parallelism  

NASA Technical Reports Server (NTRS)

Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited, even though there are numerous computationally demanding programs that would significantly benefit from parallel processing. This paper describes DSSLIB, a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

Boucher, Michael L.

1994-01-01

172

Multidisciplinary System Reliability Analysis  

NASA Technical Reports Server (NTRS)

The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

2001-01-01

173

Software reliability studies  

NASA Technical Reports Server (NTRS)

The long-term goal of this research is to identify or create a model for use in analyzing the reliability of flight control software. The immediate tasks addressed are the creation of data useful to the study of software reliability and the production of results pertinent to software reliability through the analysis of existing reliability models and data. The completed data-creation portion of this research consists of a Generic Checkout System (GCS) design document created in cooperation with NASA and Research Triangle Institute (RTI) experimenters. This will lead to design and code reviews, with the resulting product being one of the versions used in the Terminal Descent Experiment being conducted by the Systems Validations Methods Branch (SVMB) of NASA/Langley. An appended paper details an investigation of the Jelinski-Moranda and Geometric models for software reliability. The models were given data from a process that they had correctly simulated and asked to make predictions about the reliability of that process. It was found that either model will usually fail to make good predictions. These problems were attributed to randomness in the data, and replication of data was recommended.

Wilson, Larry W.

1989-01-01

174

Statistical modelling of software reliability  

NASA Technical Reports Server (NTRS)

During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

Miller, Douglas R.

1991-01-01

175

Compositional C++: Compositional Parallel Programming  

Microsoft Academic Search

A compositional parallel program is a program constructed by composing component programs in parallel, where the composed program inherits properties of its components. In this paper, we describe a small extension of C++ called Compositional C++ or CC++ which is an object-oriented notation that supports compositional parallel programming. CC++ integrates different paradigms of parallel programming: data-parallel, task-parallel and object-parallel paradigms;

K. Mani Chandy; Carl Kesselman

1992-01-01

176

NAS Parallel Benchmark Results  

Microsoft Academic Search

The NAS Parallel Benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion. In other words, the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and

Subhash Saini; David H. Bailey

1995-01-01

177

Parallelizing quantum circuits  

Microsoft Academic Search

We present a novel automated technique for parallelizing quantum circuits via forward and backward translation to measurement-based quantum computing patterns and analyze the trade off in terms of depth and space complexity. As a result we distinguish a class of polynomial depth circuits that can be parallelized to logarithmic depth while adding only polynomial many auxiliary qubits. In particular, we

Anne Broadbent; Elham Kashefi

2009-01-01

178

Applied Parallel Metadata Indexing  

Microsoft Academic Search

The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data

Jacobi, Michael R.

2012-01-01

179

Parallel computing works  

SciTech Connect

An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

Not Available

1991-10-23

180

Characterization of parallel subtraction  

PubMed Central

Parallel subtraction is an operation defined on pairs of positive operators. In terms of electrical networks, one may pose the following problem: Given an electrical network, represented by a specified positive operator, determine the set of positive operators which when connected in parallel with the specified operator yield another prescribed operator. The set of solutions of this electrical network problem is shown to have a minimum. The minimum is termed “the parallel difference of the fixed operators,” and the operation is termed “parallel subtraction.” The parallel difference is used to obtain explicit error estimates for an iteration procedure which approximates the geometric mean of positive operators. This concept of the geometric mean reduces to the square root of the product of the operators if the operators commute. Finally, by using the geometric mean, an operator version of the Gaussian mean is presented. PMID:16592689
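
For scalar resistances, the operations described here reduce to familiar formulas; the restatement below is standard usage (the operator form assumes A + B is invertible), not a quotation from the paper:

```latex
% Scalar parallel sum and the parallel difference that inverts it:
\[
a : b = \frac{ab}{a+b},
\qquad
a : x = c \;\Longrightarrow\; x = \frac{ac}{a-c} \quad (a > c > 0).
\]
% Operator analogue: A : B = A (A+B)^{-1} B for positive operators; the
% parallel difference is the minimum of the solution set described above.
```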

Anderson, W. N.; Morley, T. D.; Trapp, G. E.

1979-01-01

181

A master and slave control strategy for parallel operation of three-phase UPS systems with different ratings  

Microsoft Academic Search

Parallel operation of UPS systems has been used to increase the power capacity of the system or to secure reliable supply of power to critical loads. During parallel operation, load sharing control to maintain the current balance is critical for reliable operation, since load sharing is very sensitive to differences in the components of each module such as amplitude/phase difference, line

Woo-Cheol Lee; Taeck-Ki Lee; Sang-Hoon Lee; Kyung-Hwan Kim; Dong-Seok Hyun; In-Young Suh

2004-01-01

182

Statistical criteria for parallel tests: a comparison of accuracy and power.  

PubMed

Parallel tests are needed so that alternate forms can be applied to different groups or on different occasions, but also in the context of split-half reliability estimation for a given test. Statistically, parallelism holds beyond reasonable doubt when the null hypotheses of equality of observed means and variances across the two forms (or halves) are not rejected. Several statistical tests have been proposed for this purpose, but their performance has never been compared. This study assessed the relative performance (type I error rate and power) of the Student-Pitman-Morgan, Bradley-Blackwood, and Wilks tests of equality of means and variances in the typical conditions surrounding studies of parallelism-namely, integer-valued and bounded test scores with distributions that may not be bivariate normal. The results advise against the use of the Wilks test and support the use of the Bradley-Blackwood test because of its simplicity and its minimally better performance in comparison with the more cumbersome Student-Pitman-Morgan test. PMID:23413034
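
The equivalence underlying the Student-Pitman-Morgan approach is easy to state: for paired scores X and Y, the variances are equal exactly when X + Y and X - Y are uncorrelated, so a Pearson correlation test on those sums and differences tests variance equality. A minimal sketch with simulated (hypothetical) scores:

```python
# Sketch of the Morgan-Pitman idea behind these tests: for paired scores
# X, Y, the variances are equal exactly when X+Y and X-Y are uncorrelated,
# so a Pearson correlation test on (X+Y, X-Y) tests variance equality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(50, 10, size=200)         # hypothetical form-A scores
y = x + rng.normal(0, 4, size=200)       # hypothetical form-B scores

r, p = stats.pearsonr(x + y, x - y)
print(f"r = {r:.3f}, p = {p:.3f}")       # small p suggests unequal variances
```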

García-Pérez, Miguel A

2013-12-01

183

Reliability-based aeroelastic optimization of a composite aircraft wing via fluid-structure interaction of high fidelity solvers  

NASA Astrophysics Data System (ADS)

We consider reliability-based aeroelastic optimization of an AGARD 445.6 composite aircraft wing with stochastic parameters. Both commercial engineering software and an in-house reliability analysis code are employed in this high-fidelity computational framework. The finite volume based flow solver Fluent is used to solve the 3D Euler equations, while Gambit is the fluid domain mesh generator and Catia-V5-R16 is used as a parametric 3D solid modeler. Abaqus, a structural finite element solver, is used to compute the structural response of the aeroelastic system. The mesh-based parallel code coupling interface MPCCI-3.0.6 is used to exchange the pressure and displacement information between Fluent and Abaqus to perform a loosely coupled fluid-structure interaction by employing a staggered algorithm. To compute the probability of failure for the probabilistic constraints, one of the well known MPP (Most Probable Point) based reliability analysis methods, FORM (First Order Reliability Method), is implemented in Matlab. This in-house developed Matlab code is embedded in the multidisciplinary optimization workflow, which is driven by Modefrontier. Modefrontier 4.1 is used for its gradient-based optimization algorithm NBI-NLPQLP, which is based on the sequential quadratic programming method. A Pareto optimal solution for the stochastic aeroelastic optimization is obtained for a specified reliability index, and the results are compared with the results of deterministic aeroelastic optimization.

Nikbay, M.; Fakkusoglu, N.; Kuru, M. N.

2010-06-01

184

Gearbox Reliability Collaborative Update (Presentation)  

SciTech Connect

This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

Sheng, S.

2013-10-01

185

Reliability Assessment for Two Versions of Vocabulary Levels Tests  

ERIC Educational Resources Information Center

This article reports a reliability study of two versions of the Vocabulary Levels Test at the 5000 word level. This study was motivated by a finding from an ongoing longitudinal study of vocabulary acquisition that Version A and Version B of the Vocabulary Levels Test at the 5000 word level were not parallel. In order to investigate this issue,…

Xing, Peiling; Fulcher, Glenn

2007-01-01

186

A case study of the splithalf reliability coefficient  

Microsoft Academic Search

Different values for split-half reliability will be found for a single test if the items comprising the contrasted halves of the test are selected in different ways. The author presents evidence based on 4 arbitrary splits, such as the odd-even item split, on 30 random splits, and on 14 parallel splits in which the division was determined by item analysis

L. J. Cronbach

1946-01-01
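
Cronbach's observation that different splits yield different coefficients is easy to reproduce. A minimal sketch with the Spearman-Brown step-up correction (illustrative only, not the paper's method; the data and function names are invented):

```python
import numpy as np

def split_half_reliability(scores, split="odd_even", seed=None):
    """Split-half reliability with the Spearman-Brown step-up correction.

    scores : (n_persons, n_items) matrix; split is 'odd_even' or 'random'.
    """
    scores = np.asarray(scores, float)
    idx = np.arange(scores.shape[1])
    if split == "random":
        idx = np.random.default_rng(seed).permutation(idx)
    half_a = scores[:, idx[0::2]].sum(axis=1)
    half_b = scores[:, idx[1::2]].sum(axis=1)
    r = np.corrcoef(half_a, half_b)[0, 1]   # correlation between half-tests
    return 2 * r / (1 + r)                  # Spearman-Brown correction

rng = np.random.default_rng(7)
ability = rng.normal(size=(200, 1))
items = (ability + rng.normal(size=(200, 20)) > 0).astype(float)
print(split_half_reliability(items),                     # odd-even split
      split_half_reliability(items, "random", seed=1))   # one random split
```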

187

Proposed reliability cost model  

NASA Technical Reports Server (NTRS)

The research investigations which were involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach is dependent upon the use of a series of subsystem-oriented CER's and, where possible, CTR's, in devising a suitable cost-effective policy.

Delionback, L. M.

1973-01-01

188

Quantifying reliability uncertainty : a proof of concept.  

SciTech Connect

This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.

Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna (Los Alamos National Laboratory, Los Alamos, NM); Lorio, John F.; Fatherley, Quinn (Los Alamos National Laboratory, Los Alamos, NM); Anderson-Cook, Christine (Los Alamos National Laboratory, Los Alamos, NM); Wilson, Alyson G. (Los Alamos National Laboratory, Los Alamos, NM); Zurn, Rena M.

2009-10-01
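
A toy version of the Bayesian side of such an analysis: Beta posteriors from go/no-go component data, propagated by Monte Carlo through a mixed series/parallel structure. The system layout and numbers are hypothetical, not from the paper; note how the zero-failure component's posterior depends on the prior, echoing the sensitivity the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(1)

# go/no-go data per component: (number of tests, number of failures);
# component "B" has zero failures, the prior-sensitive case noted above
data = {"A": (50, 1), "B": (40, 0), "C": (40, 2)}

def posterior_draws(n, f, size, a0=1.0, b0=1.0):
    """Draws from the Beta(a0 + successes, b0 + failures) posterior."""
    return rng.beta(a0 + n - f, b0 + f, size)

size = 100_000
r_a = posterior_draws(*data["A"], size)
r_b = posterior_draws(*data["B"], size)
r_c = posterior_draws(*data["C"], size)

# hypothetical system: A in series with the redundant parallel pair (B, C)
r_sys = r_a * (1.0 - (1.0 - r_b) * (1.0 - r_c))
lo, hi = np.percentile(r_sys, [5, 95])
print(f"posterior mean {r_sys.mean():.4f}, 90% interval ({lo:.4f}, {hi:.4f})")
```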

189

Electronic logic for enhanced switch reliability  

DOEpatents

A logic circuit is used to enhance redundant switch reliability. Two or more switches are monitored for logical high or low output. The output of the logic circuit is a redundant and fail-safe representation of the switch outputs. When both switch outputs are high, the output is high. Similarly, when both switch outputs are low, the logic circuit's output is low. When the output states of the two switches do not agree, the circuit resolves the conflict by recalling the last output state in which both switches simultaneously agreed and producing the logical complement of that state. Thus, the logic circuit of the present invention allows the redundant switches to be treated as if they were in parallel when the switches are open and as if they were in series when the switches are closed. A fail-safe system having maximum reliability is thereby produced.

Cooper, J.A.

1984-01-20
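
The described behavior is straightforward to model in software. A sketch follows; the class name and API are invented for illustration and this is not the patent's circuit implementation.

```python
class RedundantSwitchLogic:
    """Sketch of the two-switch voting logic described in the patent.

    Agreement passes through and is memorized; disagreement yields the
    logical complement of the last state both switches shared.
    """

    def __init__(self, initial_state=False):
        self.last_agreed = initial_state

    def output(self, s1: bool, s2: bool) -> bool:
        if s1 == s2:
            self.last_agreed = s1       # memorize the agreed state
            return s1
        return not self.last_agreed     # fail-safe resolution of conflict

logic = RedundantSwitchLogic()
print(logic.output(True, True))    # True: both switches high
print(logic.output(True, False))   # False: conflict, complement of last agreed
```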

190

Understanding biological computation: reliable learning and recognition.  

PubMed Central

We experimentally examine the consequences of the hypothesis that the brain operates reliably, even though individual components may intermittently fail, by computing with dynamical attractors. Specifically, such a mechanism exploits dynamic collective behavior of a system with attractive fixed points in its phase space. In contrast to the usual methods of reliable computation involving a large number of redundant elements, this technique of self-repair only requires collective computation with a few units, and it is amenable to quantitative investigation. Experiments on parallel computing arrays show that this mechanism leads naturally to rapid self-repair, adaptation to the environment, recognition and discrimination of fuzzy inputs, and conditional learning, properties that are commonly associated with biological computation. PMID:6593731

Hogg, T; Huberman, B A

1984-01-01
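
A Hopfield-style network is the textbook example of computing with attractive fixed points as described above. A minimal sketch of recall of a stored pattern from a corrupted input (illustrative only, not the authors' experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian weight matrix whose attractors include the stored patterns."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, sweeps=5):
    """Asynchronous +/-1 updates relax a noisy state toward an attractor."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

pattern = rng.choice([-1, 1], size=(1, 64))
W = train(pattern)
noisy = pattern[0].copy()
noisy[rng.choice(64, size=10, replace=False)] *= -1   # corrupt ~15% of bits
print(np.array_equal(recall(W, noisy), pattern[0]))   # usually True
```

The corrupted input relaxing back to the stored pattern is the "self-repair" and "recognition of fuzzy inputs" behavior the abstract describes, achieved with a few collectively computing units rather than massive redundancy.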

191

Software reliability perspectives  

NASA Technical Reports Server (NTRS)

Software which is used in life-critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering nor fault tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of their poor performance can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.

Wilson, Larry; Shen, Wenhui

1987-01-01

192

The NAS parallel benchmarks  

NASA Technical Reports Server (NTRS)

A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification: all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.

1991-01-01

193

The NAS parallel benchmarks  

NASA Technical Reports Server (NTRS)

A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

Bailey, David (editor); Barton, John (editor); Lasinski, Thomas (editor); Simon, Horst (editor)

1993-01-01

194

NAS parallel benchmark results  

NASA Technical Reports Server (NTRS)

The NAS (Numerical Aerodynamic Simulation) parallel benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a 'pencil and paper' fashion. The performance results of various systems using the NAS parallel benchmarks are presented. These results represent the best results that have been reported to the authors for the specific systems listed. They represent implementation efforts performed by personnel in both the NAS Applied Research Branch of NASA Ames Research Center and in other organizations.

Bailey, D. H.; Barszcz, E.; Dagum, L.; Simon, H. D.

1992-01-01

195

JSD: Parallel Job Accounting on the IBM SP2  

NASA Technical Reports Server (NTRS)

The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.

Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)

1995-01-01

196

Parallelization of thermochemical nanolithography  

NASA Astrophysics Data System (ADS)

One of the most pressing technological challenges in the development of next generation nanoscale devices is the rapid, parallel, precise and robust fabrication of nanostructures. Here, we demonstrate the possibility to parallelize thermochemical nanolithography (TCNL) by employing five nano-tips for the fabrication of conjugated polymer nanostructures and graphene-based nanoribbons.

Carroll, Keith M.; Lu, Xi; Kim, Suenne; Gao, Yang; Kim, Hoe-Joon; Somnath, Suhas; Polloni, Laura; Sordan, Roman; King, William P.; Curtis, Jennifer E.; Riedo, Elisa

2014-01-01

197

The Parallel Axiom  

ERIC Educational Resources Information Center

Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

Rogers, Pat

1972-01-01

198

Series and Parallel Circuits  

NSDL National Science Digital Library

In this activity, learners demonstrate and discuss simple circuits as well as the differences between parallel and serial circuit design and functions. Learners test two different circuit designs through the use of low voltage light bulbs.

IEEE

2013-08-30
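
The rules this activity demonstrates reduce to two one-line formulas; a small sketch:

```python
def series(*resistances):
    """Equivalent resistance in series: values add."""
    return sum(resistances)

def parallel(*resistances):
    """Equivalent resistance in parallel: reciprocals add."""
    return 1.0 / sum(1.0 / r for r in resistances)

# two 100-ohm bulbs: 200 ohms in series, 50 ohms in parallel
print(series(100, 100), parallel(100, 100))
```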

199

Parallels with nature  

NASA Astrophysics Data System (ADS)

Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

2014-10-01

200

Parallelization of CFD codes  

NASA Astrophysics Data System (ADS)

The use of parallelization is examined for conducting CFD representations such as 3D Navier-Stokes simulations of flows about aircraft for engineering purposes. References are made to fine-, medium-, and coarse-grain levels of parallelism, the use of artificial viscosity, and the use of explicit Runge-Kutta time integration. The inherent parallelism in CFD is examined with attention given to the use of patched multiblocks on shared-memory and local-memory MIMD machines. Medium-grain parallelism is effective for the shared-memory MIMDs when using a compiler directive that advances the equations in time after copying them onto several independent processors. Local-memory computers can be used to avoid the performance restrictions of memory access by using processors with built-in memories. The microblock concept is described, and some examples are given of decomposed domains, including a computational result for a simulation of the Euler equations.

Bergman, C. M.; Vos, J. B.

1991-08-01

201

Series/Parallel Batteries  

NSDL National Science Digital Library

It is important for students to understand how resistors, capacitors, and batteries combine in series and parallel. The combination of batteries has a lot of practical applications in science competitions. This lab also reinforces how to use a voltmeter.

Michael Horton

2009-05-30

202

Parallel programming with PCN  

SciTech Connect

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, a set of tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory at info.mcs.anl.gov.

Foster, I.; Tuecke, S.

1991-09-01

203

Parallel programming with PCN  

SciTech Connect

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

Foster, I.; Tuecke, S.

1991-12-01

204

Parallelization: Binary Tree Traversal  

NSDL National Science Digital Library

This module teaches the use of binary trees to sort through large data sets, different traversal methods for binary trees, including parallel methods, and how to scale a binary tree traversal on multiple compute cores. Upon completion of this module, students should be able to recognize the structure of a binary tree, employ different methods for traversing a binary tree, understand how to parallelize a binary tree traversal, and how to scale a binary tree traversal over multiple compute cores.

Aaron Weeden
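
A minimal sketch of the module's core idea, handing each subtree to a different core and combining the results; the tree encoding and function names are invented for illustration, and the same split can be applied recursively to scale over more cores.

```python
from concurrent.futures import ProcessPoolExecutor

# a binary tree as nested tuples: (value, left, right), None for empty
TREE = (8, (3, (1, None, None), (6, None, None)),
           (10, None, (14, (13, None, None), None)))

def inorder(node):
    """Sequential in-order traversal of one subtree."""
    if node is None:
        return []
    value, left, right = node
    return inorder(left) + [value] + inorder(right)

def parallel_inorder(root):
    """Traverse the two subtrees on separate cores, then combine."""
    value, left, right = root
    with ProcessPoolExecutor(max_workers=2) as pool:
        left_part = pool.submit(inorder, left)
        right_part = pool.submit(inorder, right)
        return left_part.result() + [value] + right_part.result()

if __name__ == "__main__":
    print(parallel_inorder(TREE))   # [1, 3, 6, 8, 10, 13, 14]
```

For a tree this small the process overhead dwarfs the work; the structure only pays off when each subtree holds enough data to keep a core busy.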

205

Grid Reliability / Grid Dependability  

E-print Network

Fragment of lecture slides on grid reliability and grid dependability. Recoverable content: life-critical, long-life and safety-critical domains, e.g., the aviation industry (aircraft control) and space missions; key terms include dependability, reliability, availability, safety, integrity and maintainability; cites a draft standard on assessment of system dependability (Publication 1069-5).

206

Wood Durability Service & Reliability  

E-print Network

Wood Durability Laboratory: Service & Reliability. Equipment & Facilities: field sites for AWPA E-7 (mold) tests; wood weathering facilities; lab-scale pressure treating cylinders; X-ray preservative analyzer (Oxford Twin-X); state-of-the-art facilities for wood and plastics composites manufacturing.

207

Software reliability report  

NASA Technical Reports Server (NTRS)

There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world, thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost effective manner. The context of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data which is then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens which are in AIR-LAB to measure the performance of reliability models.

Wilson, Larry

1991-01-01

208

Human Reliability Engineering  

Microsoft Academic Search

Human reliability engineering (HRE) is the description, analysis, and improvement of situations in which human errors have been made or could be made. The probability of human error is distinguished from the probability of incident. HRE can be carried out at different levels: A. Prevention with regard to a future human error; B. Prevention of a future incident and correction

H. Kragt

1978-01-01

209

Quantifying Human Performance Reliability.  

ERIC Educational Resources Information Center

Human performance reliability for tasks in the time-space continuous domain is defined and a general mathematical model presented. The human performance measurement terms time-to-error and time-to-error-correction are defined. The model and measurement terms are tested using laboratory vigilance and manual control tasks. Error and error-correction…

Askren, William B.; Regulinski, Thaddeus L.

210

Measuring agreement in medical informatics reliability studies  

Microsoft Academic Search

Agreement measures are used frequently in reliability studies that involve categorical data. Simple measures like observed agreement and specific agreement can reveal a good deal about the sample. Chance-corrected agreement in the form of the kappa statistic is used frequently based on its correspondence to an intraclass correlation coefficient and the ease of calculating it, but its magnitude depends on

George Hripcsak; Daniel F. Heitjan

2002-01-01
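
For reference, the kappa statistic discussed above is only a few lines of code; a sketch with invented example ratings:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters on categorical labels."""
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # chance agreement from the raters' marginal category frequencies
    p_exp = sum((c1[c] / n) * (c2[c] / n) for c in set(c1) | set(c2))
    return (p_obs - p_exp) / (1 - p_exp)

r1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
r2 = ["yes", "no", "no", "no", "yes", "no", "yes", "yes"]
print(cohens_kappa(r1, r2))   # 0.5: observed 0.75 vs chance 0.5
```

The dependence of kappa's magnitude on the marginal frequencies, which the abstract alludes to, can be seen by rerunning this with more skewed category distributions.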

211

Efficient Reliable Internet Storage  

Microsoft Academic Search

This position paper presents a new design for an Internet-wide peer-to-peer storage facility. The design is intended to reduce the required replication significantly without loss of availability. Two techniques are proposed. First, aggressive use of parallel recovery, made possible by placing blocks randomly, rather than in a DHT-based fashion. Second, tracking of individual nodes' availabilities, so that

Robbert van Renesse

212

Parallel Composition Communication and Allow Hiding Parallel Processes  

E-print Network

Fragment of lecture slides on the saga of axiomatizing parallel composition, built around the challenge of whether (Dish1 + Dish2) || Coke equals (Dish1 || Coke) + (Dish2 || Coke).

Groote, Jan Friso

213

Parallel Composition Communication and Allow Hiding Parallel Processes  

E-print Network

Fragment of lecture slides on the saga of axiomatizing parallel composition: the challenge of whether (Dish1 + Dish2) || Coke equals (Dish1 || Coke) + (Dish2 || Coke), Faron Moller's result, and the raisons d'être of the auxiliary left-merge and communication-merge (|) operators.

Mousavi, Mohammad

214

General Aviation Aircraft Reliability Study  

NASA Technical Reports Server (NTRS)

This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.

Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

2001-01-01

215

Validity and Reliability of the EOM-EIS for Early Adolescents.  

ERIC Educational Resources Information Center

Examined Extended Objective Measure of Ego Identity Status for reliability and validity among 467 secondary school students. Results were supportive of appropriateness of all measures for the subjects. Analysis of reliability, validity, demographic characteristics, and psychosocial maturity yielded results which parallel theoretical framework and…

Jones, Randy M.; Streitmatter, Janice L.

1987-01-01

216

[Validity and reliability of the Work Ability Index in primary care workers in Argentina].  

PubMed

This study evaluates the validity and reliability of the Work Ability Index (WAI) in Argentina. The instrument was applied to 100 workers, all agents of Primary Health Care in the county of General Pueyrredón. For construct validity, the dimensional structure was studied by means of exploratory factor analysis, based on a polychoric matrix, with parallel analysis used to obtain the number of factors. For correlation validity, the Spearman correlation was estimated between the WAI and the dimensions of the 36-Item Short Form Health Survey (SF-36). Reliability was assessed with Cronbach's alpha. The internal consistency of the scale was 0.80, indicating acceptable reliability. The WAI score yielded the following results: 12% moderate, 50% good and 38% optimal. In the validation process, a three-dimensional structure was identified which accounts for 66% of the total variance of the data through the main components. The theoretical assumptions of the construct validity were confirmed by the direct and significant correlation between WAI scores and the dimensions of health status assessment, with the highest value in the physical functioning dimension (0.478) and the lowest in the bodily pain dimension (-0.218). It was concluded that the WAI, translated and adapted into Spanish, showed adequate psychometric properties and can therefore be used in association studies between aspects of work and their impact on health. PMID:23995544

Peralta, Norma; Godoi Vasconcelos, Ana Glória; Härter Griep, Rosane; Miller, Leticia

2012-01-01
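
Cronbach's alpha, the reliability estimate used in this study, can be computed directly from an item-score matrix. A minimal sketch (not the authors' analysis code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, k_items) matrix of item scores."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)
```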

217

On the Forward Kinematics of Parallel Manipulators  

Microsoft Academic Search

In this article we present a novel procedure for the system atic analysis of the forward kinematics of a class of parallel manipulators that generalize the well-known Stewart plat form. The designs comprise a movable platform connected to a fixed base by a set of legs, the lengths of which can be con trolled. The legs are connected to the

R. Nair; J. H. Maddocks

1994-01-01

218

Artful Balance: The Parallel Structures of Style.  

ERIC Educational Resources Information Center

Based on an extensive computer-aided examination of representative published American writing, this book examines and compares how various kinds of prose employ the diverse forms of parallelism. A scale of rhetorical value for assessing the co-occurring rhetorical devices of repetition is also presented. The chapters are entitled: "Balance or…

Hiatt, Mary P.

219

Reliable broadcast protocols  

NASA Technical Reports Server (NTRS)

A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols have the same role in distributed operating systems of the future that message passing mechanisms have in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation or how to balance the latency-communication-storage costs is not an easy question.

Joseph, T. A.; Birman, Kenneth P.

1989-01-01

220

Human Reliability Program Workshop  

SciTech Connect

A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat, including HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

Landers, John; Rogers, Erin; Gerke, Gretchen

2014-05-18

221

Reliability and testing  

NASA Technical Reports Server (NTRS)

Reliability and its interdependence with testing are important topics for the development and manufacturing of successful products. This generally accepted fact is not only a technical statement, but must also be seen in the light of 'Human Factors.' While the background for this paper is the experience gained with electromechanical/electronic space products, including control and system considerations, it is believed that the content could also be of interest for other fields.

Auer, Werner

1996-01-01

222

Compact, Reliable EEPROM Controller  

NASA Technical Reports Server (NTRS)

A compact, reliable controller for an electrically erasable, programmable read-only memory (EEPROM) has been developed specifically for a space-flight application. The design may be adaptable to other applications in which there are requirements for reliability in general and, in particular, for prevention of inadvertent writing of data in EEPROM cells. Inadvertent writes pose risks of loss of reliability in the original space-flight application and could pose such risks in other applications. Prior EEPROM controllers are large and complex and do not provide all reasonable protections (in many cases, few or no protections) against inadvertent writes. In contrast, the present controller provides several layers of protection against inadvertent writes. The controller also incorporates a write-time monitor, enabling determination of trends in the performance of an EEPROM through all phases of testing. The controller has been designed as an integral subsystem of a system that includes not only the controller and the controlled EEPROM aboard a spacecraft but also computers in a ground control station, relatively simple onboard support circuitry, and an onboard communication subsystem that utilizes the MIL-STD-1553B protocol. (MIL-STD-1553B is a military standard that encompasses a method of communication and electrical-interface requirements for digital electronic subsystems connected to a data bus. MIL-STD-1553B is commonly used in defense and space applications.) The intent was to maximize reliability while minimizing the size and complexity of onboard circuitry. In operation, control of the EEPROM is effected via the ground computers, the MIL-STD-1553B communication subsystem, and the onboard support circuitry, all of which, in combination, provide the multiple layers of protection against inadvertent writes. There is no controller software, unlike in many prior EEPROM controllers; software can be a major contributor to unreliability, particularly in fault situations such as the loss of power or brownouts. Protection is also provided by a power-monitoring circuit.

Katz, Richard; Kleyner, Igor

2010-01-01

223

Spacecraft transmitter reliability  

NASA Technical Reports Server (NTRS)

A workshop on spacecraft transmitter reliability was held at the NASA Lewis Research Center on September 25 and 26, 1979, to discuss present knowledge and to plan future research areas. Since formal papers were not submitted, this synopsis was derived from audio tapes of the workshop. The following subjects were covered: users' experience with space transmitters; cathodes; power supplies and interfaces; and specifications and quality assurance. A panel discussion ended the workshop.

1980-01-01

224

Distribution system reliability indices  

SciTech Connect

Distribution system reliability assessment can be divided into two basic segments of measuring past performance and predicting future performance. This paper compares the results obtained from two surveys dealing with United States and Canadian utility activities in regard to service continuity data collection and utilization. The paper also presents a summary of service continuity statistics for those Canadian utilities that participate in the Canadian Electrical Association annual service continuity reports.

Billinton, R.; Billinton, J.E.

1989-01-01

225

Sublattice parallel replica dynamics  

NASA Astrophysics Data System (ADS)

Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998), 10.1103/PhysRevB.57.R13985] by combining it with the synchronous sublattice approach of Shim and Amar [Y. Shim and J. G. Amar, Phys. Rev. B 71, 125432 (2005), 10.1103/PhysRevB.71.125432], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.

Martínez, Enrique; Uberuaga, Blas P.; Voter, Arthur F.

2014-06-01

226

Software reliability studies  

NASA Technical Reports Server (NTRS)

There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

Hoppa, Mary Ann; Wilson, Larry W.

1994-01-01

227

Beyond reliability to profitability  

SciTech Connect

Reliability concerns have controlled much of power generation design and operations. Emerging from a strictly regulated environment, profitability is becoming a much more important concept for today's power generation executives. This paper discusses the conceptual advance: view power plant maintenance as a profit center, go beyond reliability, and embrace profitability. Profit Centered Maintenance begins with the premise that financial considerations, namely profitability, drive most aspects of modern process and manufacturing operations. Profit Centered Maintenance is a continuous process of reliability and administrative improvement and optimization. For power generation executives with troublesome maintenance programs, Profit Centered Maintenance can be the blueprint to increased profitability. It requires the culture change to make decisions based on value, to reengineer the administration of maintenance, and to enable the people performing and administering maintenance to make the most of available maintenance information technology. The key steps are to optimize the physical function of maintenance and to resolve recurring maintenance problems so that the need for maintenance can be reduced. Profit Centered Maintenance is more than just an attitude; it is a path to profitability, whether that takes the form of increased profits or increased market share.

Bond, T.H. [Thomas Bond Consultants, San Diego, CA (United States); Mitchell, J.S. [Mitchell Associates, San Juan Capistrano, CA (United States)

1996-07-01

228

Is quantum parallelism real?  

NASA Astrophysics Data System (ADS)

In this paper we raise questions about the reality of computational quantum parallelism. Such questions are important because while quantum theory is rigorously established, the hypothesis that it supports a more powerful model of computation remains speculative. More specifically, we suggest the possibility that the seeming computational parallelism offered by quantum superpositions is actually effected by gate-level parallelism in the reversible implementation of the quantum operator. In other words, when the total number of logic operations is analyzed, quantum computing may not be more powerful than classical. This fact has significant public policy implications with regard to the relative levels of effort that are appropriate for the development of quantum-parallel algorithms and associated hardware (i.e., qubit-based) versus quantum-scale classical hardware.

Lanzagorta, Marco; Uhlmann, Jeffrey

2008-04-01

229

Parallel optical sampler  

DOEpatents

An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

2014-05-20

230

Parallelizing Quantum Circuits  

E-print Network

We present a novel automated technique for parallelizing quantum circuits via forward and backward translation to measurement-based quantum computing patterns and analyze the trade off in terms of depth and space complexity. As a result we distinguish a class of polynomial depth circuits that can be parallelized to logarithmic depth while adding only polynomial many auxiliary qubits. In particular, we provide for the first time a full characterization of patterns with flow of arbitrary depth, based on the notion of influencing paths and a simple rewriting system on the angles of the measurement. Our method leads to insightful knowledge for constructing parallel circuits and as applications, we demonstrate several constant and logarithmic depth circuits. Furthermore, we prove a logarithmic separation in terms of quantum depth between the quantum circuit model and the measurement-based model.

Anne Broadbent; Elham Kashefi

2007-04-13

231

Parallel State Estimation Assessment with Practical Data  

SciTech Connect

This paper presents a full-cycle parallel state estimation (PSE) implementation using a preconditioned conjugate gradient algorithm. The developed code is able to solve large-size power system state estimation within 5 seconds using real-world data, comparable to the Supervisory Control And Data Acquisition (SCADA) rate. This achievement allows the operators to know the system status much faster to help improve grid reliability. Case study results of the Bonneville Power Administration (BPA) system with real measurements are presented. The benefits of fast state estimation are also discussed.

Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

2014-10-31
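
The solver family named in the abstract is standard. Below is a serial sketch of a Jacobi-preconditioned conjugate gradient for a symmetric positive definite system; in a production PSE code the matrix-vector products are what get parallelized, which this illustration does not attempt.

```python
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=1000):
    """Jacobi-preconditioned conjugate gradient for an SPD system Ax = b."""
    M_inv = 1.0 / np.diag(A)            # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50)); A = A @ A.T + 50 * np.eye(50)  # SPD test matrix
b = rng.normal(size=50)
print(np.linalg.norm(A @ pcg(A, b) - b))   # small residual
```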

232

A (data) parallel implementation of the dual revised simplex method  

E-print Network

Fragment of presentation slides on a (data-)parallel implementation of the dual revised simplex method. Recoverable content: linear programming is a fundamental model in optimal decision-making; solution techniques include the simplex method (1947) and interior point methods; for large sparse LPs the BTRAN, FTRAN and INVERT operations are naturally serial, while PRICE is naturally parallel; task-parallel multiple BTRAN is one proposed route to parallelism.

Hall, Julian

233

Parallel scripting for applications at the petascale and beyond.  

SciTech Connect

Scripting accelerates and simplifies the composition of existing codes to form more powerful applications. Parallel scripting extends this technique to allow for the rapid development of highly parallel applications that can run efficiently on platforms ranging from multicore workstations to petascale supercomputers.

Wilde, M.; Zhang, Z.; Clifford, B.; Hategan, M.; Iskra, K.; Beckman, P.; Foster, I.; Raicu, I.; Espinosa, A.; Univ. of Chicago

2009-11-01

234

Compiling FORTRAN for Massively Parallel Architectures Peter Brezany  

E-print Network

Peter Brezany (University of Vienna). Fragment of a paper on compiling Fortran for massively parallel architectures: the transformation of existing scientific Fortran code into a form suitable for parallel processing on DMMPs. Fortran is still being used as the primary language for the development of scientific software.

Brezany, Peter

235

A discrete ordinate response matrix method for massively parallel computers  

Microsoft Academic Search

A discrete ordinate response matrix method is formulated for the solution of neutron transport problems on massively parallel computers. The response matrix formulation eliminates iteration on the scattering source. The nodal matrices which result from the diamond-differenced equations are utilized in a factored form which minimizes memory requirements and significantly reduces the required number of […]. The algorithm utilizes massive parallelism by

U. R. Hanebutte; E. E. Lewis

1991-01-01

236

SPINning parallel systems software.  

SciTech Connect

We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes, and the connections among them, are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

Matlin, O.S.; Lusk, E.; McCune, W.

2002-03-15

237

Parallel signal processing  

NASA Astrophysics Data System (ADS)

The potential application of parallel computing techniques to digital signal processing for radar is examined, and two types of regular array processor are discussed. The first type of processor is the systolic or wavefront processor. The application of this type of processor to adaptive beamforming is discussed and the joint STL-RSRE adaptive antenna processor test-bed is reviewed. The second type of regular array processor is the SIMD parallel computer. One such processor, the Mil-DAP, is described, and its application to a varied range of radar signal processing tasks is discussed.

McWhirter, John G.

1989-12-01

238

On Component Reliability and System Reliability for Space Missions  

NASA Technical Reports Server (NTRS)

This paper addresses the basics, the limitations and the relationship between component reliability and system reliability through a study of flight computing architectures and related avionics components for NASA future missions. Component reliability analysis and system reliability analysis need to be evaluated at the same time, and the limitations of each analysis and the relationship between the two analyses need to be understood.

Chen, Yuan; Gillespie, Amanda M.; Monaghan, Mark W.; Sampson, Michael J.; Hodson, Robert F.

2012-01-01

239

Reliability Calculus: A Theoretical Framework To Analyze Communication Reliability  

E-print Network

Reliability Calculus: A Theoretical Framework To Analyze Communication Reliability. Wenbo He, Xue Liu (University of Nebraska-Lincoln; School of Computer Science, McGill University; Nokia Research Center). Abstract: Communication reliability is one of the most important concerns and fundamental issues

Liu, Xue

240

Substation Configuration Reliability 1 Reliability of Substation Configurations  

E-print Network

Reliability of Substation Configurations, Daniel Nack (Iowa State University). Fragment of a paper on substation configuration reliability: even a well-designed substation still contains what could be described as weak points or points of failure that would lead to outages, and analyzing them requires a more complex method of reliability assessment than those used to look at a single substation design.

McCalley, James D.

241

Substation Reliability Centered Maintenance  

SciTech Connect

Substation Reliability Centered Maintenance (RCM) is a technique that is used to develop maintenance plans and criteria so the operational capability of substation equipment is achieved, restored, or maintained. The objective of the RCM process is to focus attention on system equipment in a manner that leads to the formulation of an optimal maintenance plan. The RCM concept originated in the airline industry in the 1970s and has been used since 1985 to establish maintenance requirements for nuclear power plants. The RCM process is initially applied during the design and development phase of equipment or systems on the premise that reliability is a design characteristic. It is then reapplied, as necessary, during the operational phase to sustain a more optimal maintenance program based on actual field experiences. The purpose of the RCM process is to develop a maintenance program that provides desired or specified levels of operational safety and reliability at the lowest possible overall cost. The objectives are to predict or detect and correct incipient failures before they occur or before they develop into major defects, reduce the probability of failure, detect hidden problems, and improve the cost-effectiveness of the maintenance program. RCM accomplishes two basic purposes: (1) It identifies incipient equipment problems in real time, averting potentially expensive catastrophic failures by communicating potential problems to appropriate system operators and maintenance personnel. (2) It provides decision support by recommending, identifying, and scheduling preventive maintenance. Recommendations are based on maintenance criteria, maintenance history, experience with similar equipment, real-time field data, and resource constraints. Hardware and software are used to accomplish these two purposes. The RCM system includes instrumentation that monitors critical substation equipment as well as computer software that helps analyze equipment data.

Purucker, S.L.

1992-11-01

242

Ferrite logic reliability study  

NASA Technical Reports Server (NTRS)

Development and use of digital circuits called all-magnetic logic are reported. In these circuits the magnetic elements and their windings comprise the active circuit devices in the logic portion of a system. The ferrite logic device belongs to the all-magnetic class of logic circuits. The FLO device is novel in that it makes use of a dual or bimaterial ferrite composition in one physical ceramic body. This bimaterial feature, coupled with its potential for relatively high speed operation, makes it attractive for high reliability applications. (Maximum speed of operation approximately 50 kHz.)

Baer, J. A.; Clark, C. B.

1973-01-01

243

Parallel programming with PCN  

SciTech Connect

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

Foster, I.; Tuecke, S.

1993-01-01

244

Parallelism and evolutionary algorithms  

Microsoft Academic Search

This paper contains a modern vision of the parallelization techniques used for evolutionary algorithms (EAs). The work is motivated by two fundamental facts: first, the different families of EAs have naturally converged in the last decade while parallel EAs (PEAs) seem still to lack unified studies, and second, there is a large number of improvements in these algorithms and

Enrique Alba; Marco Tomassini

2002-01-01

245

Parallelization for reaction  

E-print Network

Fragment of workshop slides on parallelization for reaction waves with complex chemistry, presented at a Workshop on Computational and Applied Mathematics. Contributors: S. Descombes, M. Duarte, T. Dumont, V. Louvet, M. Massot; affiliations include the Camille Jordan Institute and the EM2C Laboratory (École Centrale Paris), France.

Louvet, Violaine

246

High performance parallel architectures  

SciTech Connect

In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

Anderson, R.E. (Lawrence Livermore National Lab., CA (USA))

1989-09-01

247

IU parallel processing benchmark  

Microsoft Academic Search

A benchmark is presented that was designed to evaluate the merits of various parallel architectures as applied to image understanding (IU). This benchmark exercise addresses the issue of system performance on an integrated set of tasks, where the task interactions that are typical of complex vision applications are present. The goal of this exercise is to gain a better understanding

Charles Weems; Edward Riseman; Allen Hanson; Azriel Rosenfeld

1988-01-01

248

NAS Parallel Benchmarks Results  

NASA Technical Reports Server (NTRS)

The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil and paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for Class B LU, SP and BT benchmarks, and we mention NAS's future plans for the NPB.

Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

1995-01-01

249

Parallel Spectral Numerical Methods  

NSDL National Science Digital Library

This module teaches the principles of Fourier spectral methods, their utility in solving partial differential equations, and how to implement them in code. Performance considerations for several Fourier spectral implementations are discussed and methods for effective scaling on parallel computers are explained.

Gong Chen

250

Parallel hierarchical global illumination  

SciTech Connect

Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

Snell, Q.O.

1997-10-08
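
Direct simulation of light transport, as described above, starts with photon shooting. The deliberately tiny sketch below runs independent photon batches on separate cores and only estimates direct illumination of a disk; the geometry and function names are invented, and the analytic answer for this setup is (1 - sqrt(2)/2)/2, about 0.146.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def shoot_photons(seed, n):
    """Trace one batch of photons from an isotropic point source at the
    origin and count hits on a unit disk in the plane z = 1."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # isotropic directions
    toward = v[:, 2] > 0                            # heading toward the plane
    t = 1.0 / v[toward, 2]                          # ray-plane intersection
    x, y = v[toward, 0] * t, v[toward, 1] * t
    return int(np.count_nonzero(x**2 + y**2 <= 1.0))

if __name__ == "__main__":
    batches, n = 8, 250_000
    with ProcessPoolExecutor() as pool:
        hits = sum(pool.map(shoot_photons, range(batches), [n] * batches))
    print(f"fraction of photons hitting the disk: {hits / (batches * n):.4f}")
```

Because photon batches are independent, this style of simulation scales across processors with essentially no communication, which is the property the abstract's scaling claims rest on.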

251

Parallel Traveling Salesman Problem  

NSDL National Science Digital Library

The traveling salesman problem is a classic optimization problem in which one seeks to minimize the path taken by a salesman in traveling between N cities, where the salesman stops at each city one and only one time, never retracing his/her route. This implementation is designed to run on UNIX systems with X-Windows, and includes parallelization using MPI.

Joiner, David; Hassinger, Jonathan
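
The structure of such a solver is easy to sketch without MPI: run independent 2-opt restarts in parallel and keep the best tour. This sketch uses Python's process pool rather than the MPI/X-Windows implementation the record describes; all names are illustrative.

```python
import math
import random
from concurrent.futures import ProcessPoolExecutor

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(seed, pts):
    """One random restart: shuffle a tour, then improve it with 2-opt."""
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    rng.shuffle(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]  # reverse a segment
                if tour_length(cand, pts) < tour_length(tour, pts):
                    tour, improved = cand, True
    return tour_length(tour, pts), tour

if __name__ == "__main__":
    pts = [(random.random(), random.random()) for _ in range(30)]
    with ProcessPoolExecutor() as pool:
        results = pool.map(two_opt, range(8), [pts] * 8)
    best_len, best_tour = min(results)
    print(f"best of 8 parallel restarts: {best_len:.3f}")
```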

252

FIELD RELIABILITY OF ELECTRONIC SYSTEMS  

E-print Network

FIELD RELIABILITY OF ELECTRONIC SYSTEMS (Risø-M-2418): an analytical study of in-the-field experience of electronics reliability. Tage Elm, Risø National Laboratory, DK-4000 Roskilde, Denmark.

253

Testing for PV Reliability (Presentation)  

SciTech Connect

The DOE SUNSHOT workshop is seeking input from the community about PV reliability and how the DOE might address gaps in understanding. This presentation describes the types of testing that are needed for PV reliability and introduces a discussion to identify gaps in our understanding of PV reliability testing.

Kurtz, S.; Bansal, S.

2014-09-01

254

Kinesthetic Aftereffect Scores Are Reliable  

ERIC Educational Resources Information Center

The validity of the Kinesthetic Aftereffect (KAE) as a measure of personality has been criticized because of KAE's poor test-retest reliability. However, systematic bias effects render KAE retest sessions invalid and make test-retest reliability an inappropriate measure of KAE's true reliability. (Author/CTM)

Mishara, Brian L.; Baker, A. Harvey

1978-01-01

255

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2012 CFR

...2012-10-01 false Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2012-10-01

256

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2014 CFR

...2014-10-01 false Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2014-10-01

257

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2013 CFR

...2013-10-01 false Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2013-10-01

258

PARALLEL ELECTRIC FIELD SPECTRUM OF SOLAR WIND TURBULENCE  

SciTech Connect

By searching through more than 10 satellite years of THEMIS and Cluster data, 3 reliable examples of parallel electric field turbulence in the undisturbed solar wind have been found. The perpendicular and parallel electric field spectra in these examples have similar shapes and amplitudes, even at large scales (frequencies below the ion gyroscale), where Alfvenic turbulence with no parallel electric field component is thought to dominate. The spectra of the parallel electric field fluctuations are power laws with exponents near -5/3 below the ion scales (~0.1 Hz), and with a flattening of the spectrum in the vicinity of this frequency. At small scales (above a few Hz), the spectra are steeper than -5/3 with values in the range of -2.1 to -2.8. These steeper slopes are consistent with expectations for kinetic Alfven turbulence, although their amplitude relative to the perpendicular fluctuations is larger than expected.

Mozer, F. S.; Chen, C. H. K., E-mail: fmozer@ssl.berkeley.edu [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States)

2013-05-01

259

Message based event specification for debugging nondeterministic parallel programs  

SciTech Connect

Portability and reliability of parallel programs can be severely impaired by their nondeterministic behavior. Therefore, an effective means to precisely and accurately specify unacceptable nondeterministic behavior is necessary for testing and debugging parallel programs. In this paper we describe a class of expressions, called Message Expressions, that can be used to specify nondeterministic behavior of message passing parallel programs. Specification of program behavior with Message Expressions is easier than pattern based specification techniques in that the former does not require knowledge of run-time event order, whereas the latter depends on the user's knowledge of the run-time event order for correct specification. We also discuss our adaptation of Message Expressions for use in a dynamic distributed testing and debugging tool, called mdb, for programs written for PVM (Parallel Virtual Machine).

Damohdaran-Kamal, S.K. [Los Alamos National Lab., NM (United States); Francioni, J.M. [University of Southwestern Louisiana, Lafayette, LA (United States)

1995-02-01

260

Mathematical models for the reliability research [telecommunications systems]

Microsoft Academic Search

Summary form only given. The analysis of the mathematical models carried out has allowed the derivation of calculated expressions for determining the reliability indexes of computer network routes from known values of element indexes.

N. Kazakova

2003-01-01

261

Information hiding in parallel programs  

SciTech Connect

A fundamental principle in program design is to isolate difficult or changeable design decisions. Application of this principle to parallel programs requires identification of decisions that are difficult or subject to change, and the development of techniques for hiding these decisions. We experiment with three complex applications, and identify mapping, communication, and scheduling as areas in which decisions are particularly problematic. We develop computational abstractions that hide such decisions, and show that these abstractions can be used to develop elegant solutions to programming problems. In particular, they allow us to encode common structures, such as transforms, reductions, and meshes, as software cells and templates that can be reused in different applications. An important characteristic of these structures is that they do not incorporate mapping, communication, or scheduling decisions: these aspects of the design are specified separately, when composing existing structures to form applications. This separation of concerns allows the same cells and templates to be reused in different contexts.

Foster, I.

1992-01-30

262

ZAMBEZI: a parallel pattern parallel fault sequential circuit fault simulator  

Microsoft Academic Search

Sequential circuit fault simulators use the multiple bits in a computer data word to accelerate simulation. We introduce, and implement, a new sequential circuit fault simulator, a parallel pattern parallel fault simulator, ZAMBEZI, which simultaneously simulates multiple faults with multiple vectors in one data word. ZAMBEZI is developed by enhancing the control flow, of existing parallel pattern algorithms. For a

Minesh B. Amin; Bapiraju Vinnakota

1996-01-01

263

Parallel Consensual Neural Networks  

NASA Technical Reports Server (NTRS)

A new neural network architecture is proposed and applied in classification of remote sensing/geographic data from multiple sources. The new architecture is called the parallel consensual neural network and its relation to hierarchical and ensemble neural networks is discussed. The parallel consensual neural network architecture is based on statistical consensus theory. The input data are transformed several times and the different transformed data are applied as if they were independent inputs and are classified using stage neural networks. Finally, the outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote sensing data and geographic data are given. The performance of the consensual neural network architecture is compared to that of a two-layer (one hidden layer) conjugate-gradient backpropagation neural network. The results with the proposed neural network architecture compare favorably in terms of classification accuracy to the backpropagation method.

Benediktsson, J. A.; Sveinsson, J. R.; Ersoy, O. K.; Swain, P. H.

1993-01-01

264

Parallel multilevel preconditioners  

SciTech Connect

In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

1989-01-01

265

Parallel Subconvolution Filtering Architectures  

NASA Technical Reports Server (NTRS)

These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
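
The bookkeeping behind the overlap-and-save method can be shown compactly. The following time-domain sketch uses hypothetical sizes and a naive inner loop where the architectures above would place a DFT-IDFT pair; it illustrates only the block structure: each block begins with the last M-1 samples of the previous block, and only the non-aliased outputs are kept.

    #include <stdio.h>
    #include <string.h>

    #define M 4    /* filter taps */
    #define L 16   /* block length (the would-be DFT size) */

    static void overlap_save(const double *x, long n, const double h[M], double *y) {
        double block[L];
        double hist[M - 1] = {0};            /* tail of the previous block */
        long step = L - (M - 1);             /* new samples consumed per block */
        for (long pos = 0; pos < n; pos += step) {
            memcpy(block, hist, (M - 1) * sizeof(double));
            for (long i = 0; i < step; i++)
                block[M - 1 + i] = (pos + i < n) ? x[pos + i] : 0.0;
            memcpy(hist, block + step, (M - 1) * sizeof(double));
            /* the first M-1 outputs of a circular convolution would be
               aliased and discarded; only valid outputs are computed here */
            for (long i = M - 1; i < L && pos + i - (M - 1) < n; i++) {
                double acc = 0.0;
                for (int k = 0; k < M; k++) acc += h[k] * block[i - k];
                y[pos + i - (M - 1)] = acc;
            }
        }
    }

    int main(void) {
        double x[40], y[40] = {0};
        const double h[M] = {0.25, 0.25, 0.25, 0.25};  /* moving average */
        for (int i = 0; i < 40; i++) x[i] = (i % 8 == 0) ? 1.0 : 0.0;
        overlap_save(x, 40, h, y);
        for (int i = 0; i < 12; i++) printf("%.2f ", y[i]);
        printf("\n");
        return 0;
    }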

Gray, Andrew A.

2003-01-01

266

Collisionless parallel shocks  

NASA Technical Reports Server (NTRS)

Consideration is given to a collisionless parallel shock based on solitary-type solutions of the modified derivative nonlinear Schroedinger equation (MDNLS) for parallel Alfven waves. The standard derivative nonlinear Schroedinger equation is generalized in order to include the possible anisotropy of the plasma distribution and higher-order Korteweg-de Vries-type dispersion. Stationary solutions of MDNLS are discussed. The anisotropic nature of 'adiabatic' reflections leads to the asymmetric particle distribution in the upstream as well as in the downstream regions of the shock. As a result, nonzero heat flux appears near the front of the shock. It is shown that this causes the stochastic behavior of the nonlinear waves, which can significantly contribute to the shock thermalization.

Khabibrakhmanov, I. KH.; Galeev, A. A.; Galinskii, V. L.

1993-01-01

267

Parallel Computational Subunits in Dentate Granule Cells Generate Multiple Place Fields  

PubMed Central

A fundamental question in understanding neuronal computations is how dendritic events influence the output of the neuron. Different forms of integration of neighbouring and distributed synaptic inputs, isolated dendritic spikes and local regulation of synaptic efficacy suggest that individual dendritic branches may function as independent computational subunits. In the present paper, we study how these local computations influence the output of the neuron. Using a simple cascade model, we demonstrate that triggering somatic firing by a relatively small dendritic branch requires the amplification of local events by dendritic spiking and synaptic plasticity. The moderately branching dendritic tree of granule cells seems optimal for this computation since larger dendritic trees favor local plasticity by isolating dendritic compartments, while reliable detection of individual dendritic spikes in the soma requires a low branch number. Finally, we demonstrate that these parallel dendritic computations could contribute to the generation of multiple independent place fields of hippocampal granule cells. PMID:19750211

Ujfalussy, Balázs; Kiss, Tamás; Érdi, Péter

2009-01-01

268

Supporting dynamic parallel object arrays  

Microsoft Academic Search

We present efficient support for generalized arrays of parallel data driven objects. Array elements are regular C++ objects, and are scattered across the parallel machine. An individual element is addressed by its ...

Orion Sky Lawlor; Laxmikant V. Kalé

2003-01-01

269

Resistor Combinations for Parallel Circuits.  

ERIC Educational Resources Information Center

To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
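
In the same spirit as those tables, a short search (illustrative only) lists the resistor pairs up to 30 ohms whose parallel combination R1*R2/(R1+R2) is a whole number:

    #include <stdio.h>

    int main(void) {
        for (int r1 = 1; r1 <= 30; r1++)
            for (int r2 = r1; r2 <= 30; r2++)
                if ((r1 * r2) % (r1 + r2) == 0)   /* integer total resistance */
                    printf("%2d ohm || %2d ohm = %2d ohm\n",
                           r1, r2, (r1 * r2) / (r1 + r2));
        return 0;
    }

Running it prints entries such as 3 ohm || 6 ohm = 2 ohm, the kind of whole-number combination such tables collect.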

McTernan, James P.

1978-01-01

270

Standard Templates Adaptive Parallel Library  

E-print Network

STAPL (Standard Templates Adaptive Parallel Library) is a parallel C++ library designed as a superset of the C++ Standard Template Library (STL), sequentially consistent for functions with the same name, and executed on uni- or multi-processor...

Arzu, Francisco Jose

2012-06-07

271

A Parallel Repetition Theorem  

Microsoft Academic Search

We show that a parallel repetition of any two-prover one-round proof system (MIP(2,1)) decreases the probability of error at an exponential rate. No constructive bound was previously known. The constant in the exponent (in our analysis) depends only on the original probability of error and on the total number of possible answers of the two provers. The dependency on the

Ran Raz

1998-01-01

272

Parallelization: Infectious Disease  

NSDL National Science Digital Library

Epidemiology is the study of infectious disease. Infectious diseases are said to be "contagious" among people if they are transmittable from one person to another. Epidemiologists can use models to assist them in predicting the behavior of infectious diseases. This module will develop a simple agent-based infectious disease model, develop a parallel algorithm based on the model, provide a coded implementation for the algorithm, and explore the scaling of the coded implementation on high performance cluster resources.

Aaron Weeden

273

Massively parallel neural computation  

E-print Network

and communication resources is developed and then used to implement a neural computation system on the multi-FPGA platform. Finding suitable benchmark neural networks for a massively parallel neural computation system proves to be a challenge. A synthetic...

Fox, Paul James

2013-03-12

274

Parallel sphere rendering  

SciTech Connect

Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

Krogh, M.; Painter, J.; Hansen, C.

1996-10-01

275

Parallel Ada benchmarks for the SVMS  

NASA Technical Reports Server (NTRS)

The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through the tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed that would measure Ada tasking efficiency on parallel architectures as well as determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools in the development of the SVMS architecture.

Collard, Philippe E.

1990-01-01

276

Parallel Pascal - An extended Pascal for parallel computers  

NASA Technical Reports Server (NTRS)

Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

Reeves, A. P.

1984-01-01

277

Synchronous Parallel Kinetic Monte Carlo  

SciTech Connect

A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
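
The synchronicity idea can be illustrated with a toy loop (serial, with invented rates; not the authors' code). All domains share a timestep drawn from the maximum total rate, and a domain whose local rate is lower fires a "null" do-nothing event with the complementary probability, which is what keeps every domain at exactly the same simulated time.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define DOMAINS 4

    int main(void) {
        double rate[DOMAINS] = {2.0, 0.5, 1.0, 3.5};  /* total event rate per domain */
        double rmax = 0.0;
        for (int d = 0; d < DOMAINS; d++) if (rate[d] > rmax) rmax = rate[d];

        srand(3);
        double t = 0.0;
        for (int step = 0; step < 5; step++) {
            /* common timestep drawn from the max-rate exponential */
            double u = (rand() + 1.0) / (RAND_MAX + 2.0);
            double dt = -log(u) / rmax;
            t += dt;
            for (int d = 0; d < DOMAINS; d++) {
                /* real event with probability rate/rmax, else a null event;
                   a parallel code would give each domain its own RNG stream */
                double v = (double)rand() / RAND_MAX;
                printf("t=%.3f domain %d: %s\n", t, d,
                       v < rate[d] / rmax ? "execute event" : "null event");
            }
        }
        return 0;
    }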

Martínez, E; Marian, J; Kalos, M H

2006-12-14

278

Roo: A parallel theorem prover  

SciTech Connect

We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

Lusk, E.L.; McCune, W.W.; Slaney, J.K.

1991-11-01

279

Parallelized direct execution simulation of message-passing parallel programs  

NASA Technical Reports Server (NTRS)

As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

1994-01-01

280

Nonparametric approach to reliability and its applications.  

E-print Network

Reliability concepts are used by reliability engineers in the industry to perform systematic reliability studies for the identification and possible elimination of failure causes, quantification…

Jayasinghe, C

2013-01-01

281

Channel flow and the development of parallel-dipping normal faults

E-print Network

Thorsten J. Nagel and W. Roger Buck. In a series of numerical experiments, arrays of parallel-dipping normal faults formed only... This observation contradicts previous studies relating parallel-dipping normal faults to consistent horizontal...

Buck, Roger

282

Parallelism in gene transcription among sympatric lake whitefish ( Coregonus clupeaformis Mitchill) ecotypes  

Microsoft Academic Search

We tested the hypothesis that phenotypic parallelism between dwarf and normal whitefish ecotypes (Coregonus clupeaformis, Salmonidae) is accompanied by parallelism in gene transcription. The most striking phenotypic differences between these forms involve energetic metabolism and swimming activity. Therefore, we predicted that genes showing parallel expression should mainly belong to functional groups associated with these phenotypes. Transcriptome profiles were obtained

N. DEROME; P. DUCHESNE; L. BERNATCHEZ

2006-01-01

283

Folding mechanism of the alpha-subunit of tryptophan synthase, an alpha/beta barrel protein: global analysis highlights the interconversion of multiple native, intermediate, and unfolded forms through parallel channels.  

PubMed

A variety of techniques have been used to investigate the urea-induced kinetic folding mechanism of the alpha-subunit of tryptophan synthase from Escherichia coli. A distinctive property of this 29 kDa alpha/beta barrel protein is the presence of two stable equilibrium intermediates, populated at approximately 3 and 5 M urea. The refolding process displays multiple kinetic phases whose lifetimes span the submillisecond to greater than 100 s time scale; unfolding studies yield two relaxation times on the order of 10-100 s. In an effort to understand the populations and structural properties of both the stable and transient intermediates, stopped-flow, manual-mixing, and equilibrium circular dichroism data were globally fit to various kinetic models. Refolding and unfolding experiments from various initial urea concentrations as well as forward and reverse double-jump experiments were critical for model discrimination. The simplest kinetic model that is consistent with all of the available data involves four slowly interconverting unfolded forms that collapse within 5 ms to a marginally stable intermediate with significant secondary structure. This early intermediate is an off-pathway species that must unfold to populate a set of four on-pathway intermediates that correspond to the 3 M urea equilibrium intermediate. Reequilibrations among these conformers act as rate-limiting steps in folding for a majority of the population. A fraction of the native conformation appears in less than 1 s at 25 degrees C, demonstrating that even large proteins can rapidly traverse a complex energy surface. PMID:9893998

Bilsel, O; Zitzewitz, J A; Bowers, K E; Matthews, C R

1999-01-19

284

Photon detection with parallel asynchronous processing  

NASA Technical Reports Server (NTRS)

An approach to photon detection with a parallel asynchronous signal processor is described. The visible or IR photon-detection capability of the silicon p(+)-n-n(+) detectors and the parallel asynchronous processing are addressed separately. This approach would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the devices would form a 2D array processor with a 2D array of inputs located directly behind a focal-plane detector array. A 2D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems can integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The possibility of multispectral image processing is addressed.

Coon, D. D.; Perera, A. G. U.

1990-01-01

285

Reliability of wireless sensor networks.  

PubMed

Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (by increasing the network lifetime) and increase the reliability of the network (by improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability, but they significantly increase the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs considering the battery level as a key factor. Moreover, this model is based on routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of the power consumption on the reliability of WSNs. PMID:25157553

Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

2014-01-01

286

Reliability of Wireless Sensor Networks  

PubMed Central

Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (by increasing the network lifetime) and increase the reliability of the network (by improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability, but they significantly increase the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs considering the battery level as a key factor. Moreover, this model is based on routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of the power consumption on the reliability of WSNs. PMID:25157553

Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

2014-01-01

287

Photovoltaic performance and reliability workshop  

Microsoft Academic Search

This workshop was the sixth in a series of workshops sponsored by NREL\\/DOE under the general subject of photovoltaic testing and reliability during the period 1986-1993. PV performance and PV reliability are at least as important as PV cost, if not more. In the U.S., PV manufacturers, DOE laboratories, electric utilities, and others are engaged in the photovoltaic reliability research

L. Mrig

1993-01-01

288

Parallelized nested sampling  

NASA Astrophysics Data System (ADS)

One of the important advantages of nested sampling as an MCMC technique is its ability to draw representative samples from multimodal distributions and distributions with other degeneracies. This coverage is accomplished by maintaining a number of so-called live samples within a likelihood constraint. In usual practice, at each step, only the sample with the least likelihood is discarded from this set of live samples and replaced. In [1], Skilling shows that for a given number of live samples, discarding only one sample yields the highest precision in estimation of the log-evidence. However, if we increase the number of live samples, more samples can be discarded at once while still maintaining the same precision. For computer code running only serially, this modification would considerably increase the wall clock time necessary to reach convergence. However, if we use a computer with parallel processing capabilities, and we write our code to take advantage of this parallelism to replace multiple samples concurrently, the performance penalty can be eliminated entirely and possibly reversed. In this case, we must use the more general equation in [1] for computing the expectation of the shrinkage distribution: E[-log t] = (N_r - r + 1)^(-1) + (N_r - r + 2)^(-1) + ... + N_r^(-1), for shrinkage t with N_r live samples and r samples discarded at each iteration. The equation for the variance, Var(-log t) = (N_r - r + 1)^(-2) + (N_r - r + 2)^(-2) + ... + N_r^(-2), is used to find the appropriate number of live samples N_r to use with r > 1 to match the variance achieved with N_1 live samples and r = 1. In this paper, we show that by replacing multiple discarded samples in parallel, we are able to achieve a more thorough sampling of the constrained prior distribution, reduce runtime, and increase precision.
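
A direct numerical reading of the two moment formulas (a sketch; the baseline of N_1 = 100 live samples and r = 8 discards is invented) sums the series and scans for the smallest N_r whose variance matches the serial r = 1 case:

    #include <stdio.h>

    /* E[-log t] and Var(-log t) for n live samples with r discards */
    static void moments(int n, int r, double *mean, double *var) {
        *mean = 0.0; *var = 0.0;
        for (int k = n - r + 1; k <= n; k++) {
            *mean += 1.0 / k;
            *var  += 1.0 / ((double)k * k);
        }
    }

    int main(void) {
        int n1 = 100, r = 8;                 /* serial baseline, discard count */
        double m1, v1, m, v;
        moments(n1, 1, &m1, &v1);            /* r = 1 reference variance */
        for (int nr = n1; nr <= 2000; nr++) {
            moments(nr, r, &m, &v);
            if (v <= v1) {                   /* first N_r matching serial precision */
                printf("r=%d needs Nr=%d (var %.3g vs %.3g)\n", r, nr, v, v1);
                break;
            }
        }
        return 0;
    }

With these numbers the scan lands roughly near N_r = N_1 * sqrt(r), since each term of the variance sum is approximately 1/N_r^2.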

Henderson, R. Wesley; Goggans, Paul M.

2014-12-01

289

Optimization of Multiobjective System Reliability Design Using FLC controlled GA  

NASA Astrophysics Data System (ADS)

A practical optimal reliability design for a system requiring high reliability can be formulated as an appropriate mathematical programming model; in the real world, however, several decision criteria must be considered. In particular, system reliability and construction cost fundamentally conflict with each other, so when both are taken into consideration, the system reliability design model can be formulated as a bi-objective mathematical programming model. In this research, we consider a bi-criteria redundant system reliability design problem that is optimized by selecting and assigning system components among available candidates for constructing a series-parallel redundant system. Such a problem is formulated as a bi-criteria nonlinear integer programming (bi-nIP) model. In the past decade, several researchers have developed heuristic algorithms, including genetic algorithms (GAs), for solving multi-criteria system reliability optimization problems and obtained acceptable and satisfactory results. Unfortunately, the quality of the Pareto solutions obtained by solving a multi-objective optimization problem with a GA cannot be guaranteed, and the number of Pareto solutions obtained is sometimes small. In order to overcome these problems, we propose a hybrid genetic algorithm combined with a Fuzzy Logic Controller (FLC) and a local search technique to obtain Pareto solutions that are as numerous and as good as possible. The efficiency of the proposed method is demonstrated through comparative numerical experiments.
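
For reference, the series-parallel reliability objective that is traded against cost has a simple closed form: a subsystem of k_i parallel components, each of reliability p_i, fails only when all its components fail, and the series system requires every subsystem to survive. The sketch below evaluates one hypothetical design point; all numbers are invented, and a GA would search over the redundancy levels and component choices.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double p[] = {0.90, 0.85, 0.95};   /* component reliability per subsystem */
        int    k[] = {2, 3, 2};            /* redundancy level per subsystem */
        double c[] = {5.0, 3.0, 8.0};      /* unit cost per component */
        double rel = 1.0, cost = 0.0;
        for (int i = 0; i < 3; i++) {
            rel  *= 1.0 - pow(1.0 - p[i], k[i]);   /* parallel redundancy */
            cost += c[i] * k[i];                   /* series system cost */
        }
        printf("system reliability %.5f at cost %.1f\n", rel, cost);
        return 0;
    }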

Mukuda, Minoru; Tsujimura, Yasuhiro; Gen, Mitsuo

290

Highly parallel computation  

NASA Technical Reports Server (NTRS)

Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to scientific computing. Architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

Denning, Peter J.; Tichy, Walter F.

1990-01-01

291

Parallel sphere rendering  

SciTech Connect

Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.
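
The final compositing stage amounts to a depth-aware merge of the partial images. A minimal sketch (two workers, a 4-pixel image, invented data; not the optimal compositing method the paper presents) keeps the nearest fragment at each pixel:

    #include <stdio.h>
    #include <float.h>

    #define PIX 4

    typedef struct { float depth; unsigned char color; } Frag;

    /* z-buffer merge: for every pixel, keep the fragment closest to the eye */
    static void composite(Frag out[PIX], Frag part[][PIX], int nworkers) {
        for (int i = 0; i < PIX; i++) {
            out[i].depth = FLT_MAX;
            out[i].color = 0;                        /* background */
            for (int w = 0; w < nworkers; w++)
                if (part[w][i].depth < out[i].depth)
                    out[i] = part[w][i];
        }
    }

    int main(void) {
        Frag part[2][PIX] = {
            {{1.0f, 'A'}, {FLT_MAX, 0}, {2.0f, 'A'}, {FLT_MAX, 0}},
            {{2.0f, 'B'}, {1.5f, 'B'},  {1.0f, 'B'}, {FLT_MAX, 0}},
        };
        Frag out[PIX];
        composite(out, part, 2);
        for (int i = 0; i < PIX; i++)
            printf("pixel %d: %c\n", i, out[i].color ? out[i].color : '.');
        return 0;
    }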

Krogh, M.; Hansen, C.; Painter, J. [Los Alamos National Lab., NM (United States); de Verdiere, G.C. [CEA Centre d`Etudes de Limeil, 94 - Villeneuve-Saint-Georges (France)

1995-05-01

292

Parallel Eclipse Project Checkout  

NASA Technical Reports Server (NTRS)

Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to check out the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a request to checkout for each plug-in in the feature has been inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any Eclipse-based repository with a similar structure. It also can apply build parameters and preferences automatically at the end of the checkout.
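
The underlying pattern, a fixed pool of threads draining a shared queue of checkout requests, can be sketched in C as follows (plug-in names, pool size, and the sleep standing in for network I/O are all hypothetical; the real tool drives version-control operations inside Eclipse):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NTHREADS 4

    static const char *plugins[] = {"core", "ui", "net", "docs", "tests", "build"};
    static const int nplugins = 6;
    static int next_item = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);           /* claim the next queue item */
            int i = next_item < nplugins ? next_item++ : -1;
            pthread_mutex_unlock(&lock);
            if (i < 0) return NULL;              /* queue drained */
            printf("checking out plug-in %s\n", plugins[i]);
            sleep(1);                            /* stand-in for network I/O */
        }
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        for (int t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, worker, NULL);
        for (int t = 0; t < NTHREADS; t++)
            pthread_join(tid[t], NULL);
        puts("all plug-ins checked out; trigger one clean build now");
        return 0;
    }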

Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

2011-01-01

293

The Verification-based Analysis of Reliable Multicast Protocol  

NASA Technical Reports Server (NTRS)

Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

Wu, Yunqing

1996-01-01

294

The Specification-Based Validation of Reliable Multicast Protocol  

NASA Technical Reports Server (NTRS)

Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems, and perform validation on the formal RMP specifications. The validation analysis helps identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

Wu, Yunqing

1995-01-01

295

Parallel language constructs for tensor product computations on loosely coupled architectures  

NASA Technical Reports Server (NTRS)

A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The authors focus on tensor product array computations, a simple but important class of numerical algorithms. They consider first the problem of programming one-dimensional kernel routines, such as parallel tridiagonal solvers, and then look at how such parallel kernels can be combined to form parallel tensor product algorithms.

Mehrotra, Piyush; Van Rosendale, John

1989-01-01

296

76 FR 16277 - System Restoration Reliability Standards  

Federal Register 2010, 2011, 2012, 2013

...Reliability Standards and One New Glossary Term and for Retirement of Five Existing Reliability Standards and One Glossary Term. The three Reliability standards...EOP Reliability Standards and the glossary term filed by NERC in this...

2011-03-23

297

Massively Parallel QCD  

SciTech Connect

The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

2007-04-11

298

Parallelization of a treecode  

E-print Network

I describe here the performance of a parallel treecode with individual particle timesteps. The code is based on the Barnes-Hut algorithm and runs cosmological N-body simulations on parallel machines with a distributed memory architecture using the MPI message-passing library. For a configuration with a constant number of particles per processor the scalability of the code was tested up to P=128 processors on an IBM SP4 machine. In the large P limit the average CPU time per processor necessary for solving the gravitational interactions is about 10% higher than that expected from the ideal scaling relation. The processor domains are determined every large timestep according to a recursive orthogonal bisection, using a weighting scheme which takes into account the total particle computational load within the timestep. The results of the numerical tests show that the load balancing efficiency L of the code is high (>= 90%) up to P=32, and decreases to L ~ 80% when P=128. In the latter case it is found that some aspects of the code performance are affected by machine hardware, while the proposed weighting scheme can achieve a load balance as high as L ~ 90% even in the large P limit.

R. Valdarnini

2003-03-18

299

Applied Parallel Metadata Indexing  

SciTech Connect

The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, I implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.

Jacobi, Michael R [Los Alamos National Laboratory

2012-08-01

300

Reliability Assessment Using Discriminative Sampling and Metamodeling  

E-print Network

Reliability assessment, or reliability analysis, is the foundation for reliability engineering and reliability-based design, and has recently been a heated research topic.

Wang, Gaofeng Gary

301

Stirling Convertor Fasteners Reliability Quantification  

NASA Technical Reports Server (NTRS)

Onboard Radioisotope Power Systems (RPS) being developed for NASA's deep-space science and exploration missions require reliable operation for up to 14 years and beyond. Stirling power conversion is a candidate for use in an RPS because it offers a multifold increase in the conversion efficiency of heat to electric power and reduced inventory of radioactive material. Structural fasteners are responsible for maintaining structural integrity of the Stirling power convertor, which is critical to ensure reliable performance during the entire mission. Design of fasteners involves variables related to the fabrication, manufacturing, behavior of fasteners and joining parts material, structural geometry of the joining components, size and spacing of fasteners, mission loads, boundary conditions, etc. These variables have inherent uncertainties, which need to be accounted for in the reliability assessment. This paper describes these uncertainties along with a methodology to quantify the reliability, and provides results of the analysis in terms of quantified reliability and sensitivity of Stirling power conversion reliability to the design variables. Quantification of the reliability includes both structural and functional aspects of the joining components. Based on the results, the paper also describes guidelines to improve the reliability and verification testing.

Shah, Ashwin R.; Korovaichuk, Igor; Kovacevich, Tiodor; Schreiber, Jeffrey G.

2006-01-01

302

Reliability Based Design Optimization of Bridge Abutments Using Pseudo-dynamic Method  

Microsoft Academic Search

In this paper, the reliability of a gravity retaining wall bridge abutment is analyzed. The first order reliability method (FORM) is applied to estimate the component reliability indices of each failure mode and to assess the effect of uncertainties in design parameters. Two modes of failure namely rotation of the wall about its heel, sliding of the wall on its

B. Munwar Basha; G. L. Sivakumar Babu

303

Claims about the Reliability of Student Evaluations of Instruction: The Ecological Fallacy Rides Again  

ERIC Educational Resources Information Center

The vast majority of the research on student evaluation of instruction has assessed the reliability of groups of courses and yielded either a single reliability coefficient for the entire group, or grouped reliability coefficients for each student evaluation of teaching (SET) item. This manuscript argues that these practices constitute a form of…

Morley, Donald D.

2012-01-01

304

Parallel-Stranded DNA with Natural Base Sequences  

Microsoft Academic Search

Noncanonical parallel-stranded DNA double helices (ps-DNA) of natural nucleotide sequences are usually less stable than the canonical antiparallel-stranded DNA structures, which ensures reliable cell functioning. However, recent data indicate a possible role of ps-DNA in DNA loops or in regions of trinucleotide repeats connected with neurodegenerative diseases. The review surveys recent studies on the effect of nucleotide sequence on preference

A. K. Shchyolkina; O. F. Borisova; M. A. Livshits; T. M. Jovin

2003-01-01

305

Photovoltaic performance and reliability workshop  

NASA Astrophysics Data System (ADS)

This workshop was the sixth in a series of workshops sponsored by NREL/DOE under the general subject of photovoltaic testing and reliability during the period 1986-1993. PV performance and PV reliability are at least as important as PV cost, if not more. In the U.S., PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together to exchange technical knowledge and field experience related to current information in this evolving field of PV reliability. The papers presented here reflect this effort since the last workshop held in September, 1992. The topics covered include: cell and module characterization, module and system testing, durability and reliability, system field experience, and standards and codes.

Mrig, L.

1993-12-01

306

Statistical modeling of software reliability  

NASA Technical Reports Server (NTRS)

This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

Miller, Douglas R.

1992-01-01

307

Reliability-based design optimization using efficient global reliability analysis.  

SciTech Connect

Finding the optimal (lightest, least expensive, etc.) design for an engineered component that meets or exceeds a specified level of reliability is a problem of obvious interest across a wide spectrum of engineering fields. Various methods for this reliability-based design optimization problem have been proposed. Unfortunately, this problem is rarely solved in practice because, regardless of the method used, solving the problem is too expensive or the final solution is too inaccurate to ensure that the reliability constraint is actually satisfied. This is especially true for engineering applications involving expensive, implicit, and possibly nonlinear performance functions (such as large finite element models). The Efficient Global Reliability Analysis method was recently introduced to improve both the accuracy and efficiency of reliability analysis for this type of performance function. This paper explores how this new reliability analysis method can be used in a design optimization context to create a method of sufficient accuracy and efficiency to enable the use of reliability-based design optimization as a practical design tool.

Bichon, Barron J. (Southwest Research Institute, San Antonio, TX); Mahadevan, Sankaran (Vanderbilt University, Nashville, TN); Eldred, Michael Scott

2010-05-01

308

Parallel Computing in SCALE  

SciTech Connect

The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement activities has been developed to provide an integrated framework for future methods development. Some of the major components of the SCALE parallel computing development plan are parallelization and multithreading of computationally intensive modules and redesign of the fundamental SCALE computational architecture.

DeHart, Mark D [ORNL] [ORNL; Williams, Mark L [ORNL] [ORNL; Bowman, Stephen M [ORNL] [ORNL

2010-01-01

309

Public Cluster : parallel machine with multi-block approach  

E-print Network

We introduce a new approach to enable an open and public parallel machine which is accessible to multiple users with multiple jobs belonging to different blocks running at the same time. The concept is required especially for parallel machines which are dedicated to public use, as implemented at the LIPI Public Cluster. We have deployed the simplest technique by running multiple daemons of the parallel processing engine with different configuration files specified for each user assigned to access the system, and have also developed an integrated system to fully control and monitor the whole system over the web. A brief performance analysis is also given for the Message Passing Interface (MPI) engine. It is shown that the proposed approach is quite reliable and affects the whole performance only slightly.

Akbar, Z; Ajinagoro, B I; Ohara, G I; Firmansyah, I; Hermanto, B; Handoko, L T

2007-01-01

310

Parallel computation of seismic analysis of high arch dam  

NASA Astrophysics Data System (ADS)

Parallel computation programs are developed for three-dimensional meso-mechanics analysis of fully-graded dam concrete and seismic response analysis of high arch dams (ADs), based on the Parallel Finite Element Program Generator (PFEPG). The computational algorithms of the numerical simulation of the meso-structure of concrete specimens were studied. Taking into account damage evolution, static preload, strain rate effect, and the heterogeneity of the meso-structure of dam concrete, the fracture processes of damage evolution and configuration of the cracks can be directly simulated. In the seismic response analysis of ADs, all the following factors are involved, such as the nonlinear contact due to the opening and slipping of the contraction joints, energy dispersion of the far-field foundation, dynamic interactions of the dam-foundation-reservoir system, and the combining effects of seismic action with all static loads. The correctness, reliability and efficiency of the two parallel computational programs are verified with practical illustrations.

Chen, Houqun; Ma, Huaifa; Tu, Jin; Cheng, Guangqing; Tang, Juzhen

2008-03-01

311

The STAPL Parallel Container Framework  

E-print Network

programming. stapl is a parallel C++ library with functionality similar to stl, the ISO adopted C++ Standard Template Library [49]. stl is a collection of basic algorithms, containers and iterators that can be used as high-level building blocks... for sequential applications. Similar to stl, stapl provides a collection of parallel algorithms (pAlgorithms), parallel and distributed containers (pContainers) [63, 65, 64, 15, 66], and pViews to abstract the data access in pContainers. stapl provides...

Tanase, Ilie Gabriel

2012-02-14

312

Parallel Imaging Microfluidic Cytometer  

PubMed Central

By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30 times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. PMID:21704835

Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

2011-01-01

313

Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux  

SciTech Connect

In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components with one associated with the parallel thermal energy and the other the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from low to high parallel temperature region.

Guo Zehua; Tang Xianzhu [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)

2012-06-15

314

Physiologic Trend Detection and Artifact Rejection: A Parallel Implementation of a Multi-state Kalman Filtering Algorithm  

PubMed Central

Using a parallel implementation of the multi-state Kalman filtering algorithm, we have developed an accurate method of reliably detecting and identifying trends, abrupt changes, and artifacts from multiple physiologic data streams in real-time. The Kalman filter algorithm was implemented within an innovative software architecture for parallel computation: a parallel process trellis. Examples, processed in real-time, of both simulated and actual data serve to illustrate the potential value of the Kalman filter as a tool in physiologic monitoring.
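
For readers unfamiliar with the filter itself, a scalar Kalman update with a simple innovation-based artifact test looks like the following generic textbook sketch (invented noise values and data; not the authors' multi-state parallel implementation):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double x = 80.0, p = 4.0;          /* state estimate and its variance */
        double q = 0.01, rn = 4.0;         /* process / measurement noise, assumed */
        double z[] = {80.2, 79.8, 80.4, 140.0, 80.1, 80.3};  /* 140 is an artifact */
        for (int i = 0; i < 6; i++) {
            p += q;                                    /* predict */
            double innov = z[i] - x;
            double s = p + rn;                         /* innovation variance */
            if (fabs(innov) / sqrt(s) > 4.0) {         /* reject gross outliers */
                printf("z=%.1f rejected as artifact\n", z[i]);
                continue;
            }
            double kgain = p / s;                      /* update */
            x += kgain * innov;
            p *= 1.0 - kgain;
            printf("z=%.1f -> estimate %.2f\n", z[i], x);
        }
        return 0;
    }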

Sittig, Dean F.; Factor, Michael

1989-01-01

315

Cerebro : forming parallel internets and enabling ultra-local economies  

E-print Network

Internet-based mobile communications have been increasing rapidly [5], yet there is little or no progress in platforms that enable applications for discovery, context-awareness and sharing of data and services in a peer-wise ...

Ypodimatopoulos, Polychronis Panagiotis

2008-01-01

316

A reliable multicast for XTP  

NASA Technical Reports Server (NTRS)

Multicast services needed for current distributed applications on LANs fall generally into one of three categories: datagram, semi-reliable, and reliable. Transport layer multicast datagrams represent unreliable service in which the transmitting context 'fires and forgets'. XTP executes these semantics when the MULTI and NOERR mode bits are both set. Distributing sensor data and other applications in which application-level error recovery strategies are appropriate benefit from the efficiency in multidestination delivery offered by datagram service. Semi-reliable service refers to multicasting in which the control algorithms of the transport layer--error, flow, and rate control--are used in transferring the multicast distribution to the set of receiving contexts, the multicast group. The multicast defined in XTP provides semi-reliable service. Since, under a semi-reliable service, joining a multicast group means listening on the group address and entails no coordination with other members, a semi-reliable facility can be used for communication between a client and a server group as well as for true peer-to-peer group communication. Resource location in a LAN is an important application domain. The term 'semi-reliable' refers to the fact that group membership changes go undetected. No attempt is made to assess the current membership of the group at any time--before, during, or after--the data transfer.
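
A minimal sketch of the datagram ('fire and forget') category in plain IP/UDP multicast terms, since XTP implementations are not widely available; the group address, port, and payload are hypothetical, and sender and receiver are meant to run in separate processes:

    import socket

    GROUP, PORT = "239.1.2.3", 5007   # hypothetical group address and port

    # --- sender: fire and forget, no ACKs and no retransmission ---
    def send_reading(payload: bytes) -> None:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the LAN
        s.sendto(payload, (GROUP, PORT))
        s.close()

    # --- receiver (separate process): joining the group is just listening on
    # the group address; it entails no coordination with other members ---
    def listen_once() -> bytes:
        r = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        r.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        r.bind(("", PORT))
        mreq = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
        r.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        data, _addr = r.recvfrom(1024)
        return data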

Dempsey, Bert J.; Weaver, Alfred C.

1990-01-01

317

The process group approach to reliable distributed computing  

NASA Technical Reports Server (NTRS)

The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

Birman, Kenneth P.

1992-01-01

318

1982 engineering conference on reliability for the electrical power industry  

SciTech Connect

Emergency onsite ac power systems at nuclear power plants are a major concern in plant risk assessments because of the relatively large frequency of loss of offsite power and the dependence of most other safety systems on ac power. Detailed reviews of onsite ac power system designs and reviews of experience with diesel generators at US nuclear power plants form the basis of system reliability analyses that show significant improvements in reliability can be obtained at moderate cost for some plants. Onsite ac power system modifications analyzed include procedural modifications, minor equipment modifications and major equipment additions. Relative costs of various modifications are compared with associated system reliability improvements.

Campbell, D.J.; Arendt, J.S.; Battle, R.E.; Baranowsky, P.W.

1982-01-01

319

Parallel Smoothed Aggregation Multigrid: Aggregation Strategies on Massively Parallel Machines  

SciTech Connect

Algebraic multigrid methods offer the hope that multigrid convergence can be achieved (for at least some important applications) without a great deal of effort from engineers and scientists wishing to solve linear systems. In this paper the authors consider parallelization of the smoothed aggregation multi-grid method. Smoothed aggregation is one of the most promising algebraic multigrid methods. Therefore, developing parallel variants with both good convergence and efficiency properties is of great importance. However, parallelization is nontrivial due to the somewhat sequential aggregation (or grid coarsening) phase. In this paper, they discuss three different parallel aggregation algorithms and illustrate the advantages and disadvantages of each variant in terms of parallelism and convergence. Numerical results will be shown on the Intel Teraflop computer for some large problems coming from nontrivial codes: quasi-static electric potential simulation and a fluid flow calculation.
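
As a rough sketch of what the aggregation (coarsening) phase does, here is a minimal serial greedy aggregation over a matrix graph; it illustrates the somewhat sequential step the paper sets out to parallelize, not any of the three parallel variants discussed there:

    # Greedy aggregation sketch; adj maps node -> set of strongly-coupled
    # neighbour nodes in the matrix graph.
    def aggregate(adj):
        agg_of = {}            # node -> aggregate id
        n_aggs = 0
        # Phase 1: seed an aggregate around every node whose whole
        # neighbourhood is still unaggregated.
        for v, nbrs in adj.items():
            if v in agg_of or any(u in agg_of for u in nbrs):
                continue
            for u in nbrs | {v}:
                agg_of[u] = n_aggs
            n_aggs += 1
        # Phase 2: sweep leftovers into an adjacent aggregate.
        for v, nbrs in adj.items():
            if v not in agg_of:
                hit = next((agg_of[u] for u in nbrs if u in agg_of), None)
                agg_of[v] = hit if hit is not None else n_aggs
                if hit is None:
                    n_aggs += 1
        return agg_of, n_aggs

    # 1-D Laplacian graph on 7 nodes -> aggregates of roughly 3 nodes each
    adj = {i: {j for j in (i - 1, i + 1) if 0 <= j < 7} for i in range(7)}
    print(aggregate(adj))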

Ray S. Tuminaro

2000-11-09

320

Parallel repeat solution solver for sparse matrices  

SciTech Connect

This research successfully examines the repeat solution phase of the solution of the linear system A x = b, for a sparse matrix A and a vector b. A new algorithm uses the partitioned inverses of the conventional lower and upper triangular factors of A. Solutions are obtained by direct matrix-vector multiplication of those factors by the vector b. Examples from power system analysis, structural analysis, and finite-element methods have been used to determine the effectiveness of ordering and partitioning in forming these factors. Although not as sparse as the traditional LU factors, these partitioned inverse factors are sufficiently sparse (0.5 to 2%) to make their use practical. Large cases require about 50% more arithmetic operations in comparison with conventional forward elimination and back substitution. However, these operations can now be done in parallel, making a significant speedup possible. Two hypothetical hardware designs were simulated to show the expected speedup from parallel processing.
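
A dense NumPy/SciPy illustration of the repeat-solution idea: the factors are inverted once, and each new right-hand side then costs only matrix-vector products, which parallelize naturally. The paper keeps the inverse factors sparse via ordering and partitioning; inverting whole factors, as below, is just the dense analogue:

    import numpy as np
    from scipy.linalg import lu

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    P, L, U = lu(A)                 # one-time factorization, A = P @ L @ U
    Linv = np.linalg.inv(L)         # one-time inversion of the factors
    Uinv = np.linalg.inv(U)

    def repeat_solve(b):
        # x = U^{-1} (L^{-1} (P^T b)): pure matvecs, no substitution recurrence
        return Uinv @ (Linv @ (P.T @ b))

    b = np.array([1.0, 2.0, 3.0])
    print(np.allclose(A @ repeat_solve(b), b))   # True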

Enns, M.K.; Alvarado, F.; MacGregor, D.M.

1985-01-01

321

Reliability consideration for erratic loadings  

SciTech Connect

Traditionally, power systems reliability studies, have been concerned with the modelling and evaluation of various systems; whether transmission and distribution lines or generating components. The concept of reliability considers a component as either non repairable or repairable. In the latter case, the reliability is a measure of the component`s availability for a specified period of time. This paper examines the impact of a steel plant`s demand on the availability of the generating units for an island utility. The load is quite erratic, and is speculated as having a deleterious effect on the life of the generating machines committed to meeting the customer`s requirements. These machines are normally on speed (frequency) control, and would track the ramping rate of the client. The paper attempts to quantify the increased maintenance due to the load, also changes in outage rates, and as such determine the impact on reliability and availability of these sets.

Walrond, S.P.; Sharma, C.

1995-12-31

322

Reliability and Maintainability (RAM) Training  

NASA Technical Reports Server (NTRS)

The theme of this manual is failure physics: the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low-cost reliable products. In a broader sense the manual should do more. It should underscore the urgent need for mature attitudes toward reliability. Five of the chapters were originally presented as a classroom course to over 1000 Martin Marietta engineers and technicians. Another four chapters and three appendixes have been added. We begin with a view of reliability from the years 1940 to 2000. Chapter 2 starts the training material with a review of mathematics and a description of what elements contribute to product failures. The remaining chapters elucidate basic reliability theory and the disciplines that allow us to control and eliminate failures.

Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Packard, Michael H. (Editor)

2000-01-01

323

Supporting dynamic parallel object arrays  

Microsoft Academic Search

We present efficient support for generalized arrays of parallel data driven objects. The “array elements” are scattered across a parallel machine. Each array element is an object that can be thought of as a virtual processor. The individual elements are addressed by their “index”, which can be an arbitrary object rather than a simple integer. For example, it can be

Orion Sky Lawlor; Laxmikant V. Kalé

2001-01-01

324

The parallel composition of processes  

E-print Network

We suggest that the canonical parallel operation of processes is composition in a well-supported compact closed category of spans of reflexive graphs. We present the parallel operations of classical process algebras as derived operations arising from monoid objects in such a category, representing the fact that they are protocols based on an underlying broadcast communication.

Albasini, L de Francesco; Walters, R F C

2009-01-01

325

Parallel Matlab MIT Lincoln Laboratory  

E-print Network

Motivation: sensor analysis systems are implemented in other languages; transformation involves years of software effort; most users will not touch any solution that requires other languages (even cmex). Portability is a further concern. MatlabMPI functionality includes message passing, e.g. MPI_Recv(source,comm,tag), and a "Core Lite" Parallel Matlab subset.

Kepner, Jeremy

326

Limited width parallel prefix circuits  

Microsoft Academic Search

In this paper, we present lower and upper bounds on the size of limited width, bounded and unbounded fan-out parallel prefix circuits. The lower bounds on the sizes of such circuits are a function of the depth, width, and number of inputs. The size requirement of an N input bounded fan-out parallel prefix circuit having limited width W and extra
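
For context, the computation such circuits implement is the prefix (scan) operation. A minimal Python sketch of one unbounded fan-out pattern (Kogge-Stone style: ceil(log2 n) levels, with all combinations at a level independent and hence parallel); this illustrates the operation, not the limited-width constructions the paper bounds:

    def prefix_scan(xs, op):
        xs = list(xs)
        n, d = len(xs), 1
        while d < n:
            # all updates at one level are independent -> one "parallel" step
            xs = [op(xs[i - d], xs[i]) if i >= d else xs[i] for i in range(n)]
            d *= 2
        return xs

    print(prefix_scan([1, 2, 3, 4, 5], lambda a, b: a + b))  # [1, 3, 6, 10, 15]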

David A. Carlson; Binay Sugla

1990-01-01

327

Advanced techniques in reliability model representation and solution  

NASA Technical Reports Server (NTRS)

The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.

Palumbo, Daniel L.; Nicol, David M.

1992-01-01

328

Photovoltaics Performance and Reliability Workshop  

NASA Astrophysics Data System (ADS)

This document consists of papers and viewgraphs compiled from the proceedings of a workshop held in September 1992. This workshop was the fifth in a series sponsored by NREL/DOE under the general subject areas of photovoltaic module testing and reliability. PV manufacturers, DOE laboratories, electric utilities, and others exchanged technical knowledge and field experience. The topics of cell and module characterization, module and system performance, materials and module durability/reliability research, solar radiation, and applications are discussed.

Mrig, L.

329

Photovoltaics performance and reliability workshop  

SciTech Connect

This document consists of papers and viewgraphs compiled from the proceedings of a workshop held in September 1992. This workshop was the fifth in a series sponsored by NREL/DOE under the general subject areas of photovoltaic module testing and reliability. PV manufacturers, DOE laboratories, electric utilities and others exchanged technical knowledge and field experience. The topics of cell and module characterization, module and system performance, materials and module durability/reliability research, solar radiation, and applications are discussed.

Mrig, L. [ed.]

1992-11-01

330

Photovoltaics performance and reliability workshop  

SciTech Connect

This document consists of papers and viewgraphs compiled from the proceedings of a workshop held in September 1992. This workshop was the fifth in a series sponsored by NREL/DOE under the general subject areas of photovoltaic module testing and reliability. PV manufacturers, DOE laboratories, electric utilities and others exchanged technical knowledge and field experience. The topics of cell and module characterization, module and system performance, materials and module durability/reliability research, solar radiation, and applications are discussed.

Mrig, L. (ed.)

1992-01-01

331

76 FR 23222 - Electric Reliability Organization Interpretation of Transmission Operations Reliability  

Federal Register 2010, 2011, 2012, 2013

...Docket No. RM10-29-000] Electric Reliability Organization Interpretation of Transmission Operations Reliability AGENCY: Federal Energy Regulatory...approve the North American Electric Reliability Corporation's (NERC's)...

2011-04-26

332

High Performance Parallel Architectures  

NASA Technical Reports Server (NTRS)

Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments that can collect observations at hundreds of bands have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation, aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is the Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
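
A minimal sketch of the parallel-PCA structure: each worker computes partial covariance statistics over its slice of the pixels, which are then combined and eigendecomposed. Python multiprocessing stands in for the report's MPI/Beowulf implementation, and the data sizes are made up:

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def partial_sums(chunk):
        """Per-worker sufficient statistics for the covariance matrix."""
        return chunk.shape[0], chunk.sum(axis=0), chunk.T @ chunk

    def parallel_pca(X, n_workers=4, k=3):
        chunks = np.array_split(X, n_workers)       # split pixels across workers
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            results = list(pool.map(partial_sums, chunks))
        n = sum(r[0] for r in results)
        s = sum(r[1] for r in results)
        ss = sum(r[2] for r in results)
        mean = s / n
        cov = (ss - n * np.outer(mean, mean)) / (n - 1)  # combine partial sums
        w, v = np.linalg.eigh(cov)                       # small: bands x bands
        order = np.argsort(w)[::-1][:k]
        return (X - mean) @ v[:, order]                  # project onto top-k PCs

    if __name__ == "__main__":
        X = np.random.rand(100_000, 32)   # 100k "pixels", 32 "bands"
        print(parallel_pca(X).shape)      # (100000, 3)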

El-Ghazawi, Tarek; Kaewpijit, Sinthop

1998-01-01

333

Parallel consensual neural networks.  

PubMed

A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data. PMID:18255610
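
A toy sketch of the PCNN structure, assuming scikit-learn is available: several transformed copies of the input (random projections here, standing in for the paper's transforms such as wavelet packets) are classified by stage networks whose outputs are combined with accuracy-based weights. The paper derives the weights by optimization; validation accuracy is used below only to keep the sketch short:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 8))
    y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)        # toy 2-class problem
    Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

    stages, weights = [], []
    for seed in range(3):                               # three stage networks
        R = np.random.default_rng(seed).normal(size=(8, 8))  # input transform
        net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                            random_state=seed)
        net.fit(Xtr @ R, ytr)
        stages.append((R, net))
        weights.append(net.score(Xva @ R, yva))         # weight by accuracy

    w = np.array(weights) / sum(weights)
    proba = sum(wi * net.predict_proba(Xva @ R) for wi, (R, net) in zip(w, stages))
    consensus = proba.argmax(axis=1)                    # weighted consensual decision
    print("consensus accuracy:", (consensus == yva).mean())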

Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

1997-01-01

334

Measurement, estimation, and prediction of software reliability  

NASA Technical Reports Server (NTRS)

Quantitative indices of software reliability are defined, and application of three important indices is indicated: (1) reliability measurement, (2) reliability estimation, and (3) reliability prediction. State of the art techniques for each of these procedures are presented together with considerations of data acquisition. Failure classifications and other documentation for comprehensive software reliability evaluation are described.

Hecht, H.

1977-01-01

335

Sub-Second Parallel State Estimation  

SciTech Connect

This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA) and discusses the benefits of the fast computational speed for power system applications. The test data were provided by BPA. They are two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data are extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, which is more than 10 times faster than today's commercial tool. This improved computational performance can help increase the reliability value of state estimation in many aspects: (1) the shorter the time required for execution of state estimation, the more time remains for operators to take appropriate actions, and/or to apply automatic or manual corrective control actions. This increases the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance. Therefore, the robustness of SE can be enhanced by repeating the execution of the SE with adaptive adjustments, including removing bad data and/or adjusting different initial conditions to compute a better estimate within the same time as a traditional state estimator's single estimate. There are other benefits of the sub-second SE: the PSE results can potentially be used in local and/or wide-area automatic corrective control actions that are currently dependent on raw measurements, minimizing the impact of bad measurements, and they provide opportunities to enhance power grid reliability and efficiency. PSE can also enable other advanced tools that rely on SE outputs and could be used to further improve operators' actions and automated controls to mitigate effects of severe events on the grid. The power grid continues to grow and the number of measurements is increasing at an accelerated rate due to the variety of smart grid devices being introduced. A parallel state estimation implementation will have better performance than traditional, sequential state estimation by utilizing the power of high performance computing (HPC). This increased performance positions parallel state estimators as valuable tools for operating the increasingly more complex power grid.
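
For orientation, the numerical core of most state estimators is a weighted-least-squares step. A textbook linear (DC) sketch with a made-up measurement model follows; it shows the structure of the computation, not the PSE tool's solvers:

    import numpy as np

    # Model: z = H x + e, weights W = diag(1/sigma_i^2)
    H = np.array([[1.0, 0.0],
                  [-1.0, 1.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])            # hypothetical measurement model
    sigma = np.array([0.01, 0.02, 0.01, 0.05])
    x_true = np.array([0.3, -0.1])
    z = H @ x_true + np.random.default_rng(1).normal(0, sigma)

    W = np.diag(1.0 / sigma**2)
    G = H.T @ W @ H                        # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)

    # Simple bad-data indicator: residuals normalized by measurement sigma
    r = z - H @ x_hat
    print("estimate:", x_hat, "max |r|/sigma:", np.max(np.abs(r) / sigma))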

Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.; Wang, Shaobu; Huang, Zhenyu

2014-10-31

336

A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix  

NASA Technical Reports Server (NTRS)

A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm; certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
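
For orientation, a minimal sketch of the classical cyclic Jacobi iteration for a real symmetric matrix, the special case from which such methods derive. The paper's algorithm replaces the symmetric rotations with norm-reducing transformations for general complex matrices; in either case, rotations on disjoint index pairs are independent and can be applied in parallel:

    import numpy as np

    def jacobi_eig(A, sweeps=10):
        """Cyclic Jacobi for a real symmetric matrix (classical special case)."""
        A = A.copy().astype(float)
        n = A.shape[0]
        V = np.eye(n)
        for _ in range(sweeps):
            for p in range(n - 1):
                for q in range(p + 1, n):
                    if abs(A[p, q]) < 1e-12:
                        continue
                    # rotation angle that zeroes A[p, q]
                    theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                    c, s = np.cos(theta), np.sin(theta)
                    J = np.eye(n)
                    J[p, p] = J[q, q] = c
                    J[p, q], J[q, p] = s, -s
                    A = J.T @ A @ J        # rotations on disjoint (p, q) pairs
                    V = V @ J              # are independent -> parallel sweeps
        return np.diag(A), V

    A = np.array([[4.0, 1.0, 2.0], [1.0, 3.0, 0.5], [2.0, 0.5, 5.0]])
    w, V = jacobi_eig(A)
    print(np.sort(w), np.sort(np.linalg.eigvalsh(A)))   # should agree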

Shroff, Gautam

1989-01-01

337

Supporting data intensive applications with medium grained parallelism  

SciTech Connect

ADAMS is an ambitious effort to provide new database access paradigms for the kinds of scientific applications that require massively parallel access to very large data sets in order to be effective. Many of the Grand Challenge Problems fall into this category, as well as those kinds of scientific research which depend on widely distributed shared sets of disparate data. The essence of the ADAMS approach is to view data purely in functional terms, rather than the more traditional structural view in which multiple data items are aggregated into records or tuples of flat files. Further, ADAMS has been implemented as an embedded interface so that scientists can develop applications in the host programming language of their choice, often Fortran, Pascal, or C, and still access shared data generated in other environments. The syntax and semantics of ADAMS are essentially complete. The functional nature of the ADAMS data interface paradigm simplifies its implementation in a distributed environment, e.g., the Mentat run-time system, because one must only distribute functional servers, not pieces of data structures. However, this only opens up the possibility of effective parallel database processing; to realize this potential far more work must be done in the areas of data dependence, intra-statement parallelism, parallel query optimization, and maintaining consistency and reliability in concurrent systems. Discovering how to make effective parallel data access an actuality in real scientific applications is the point of this research.

Pfaltz, J.L.; French, J.C.; Grimshaw, A.S.; Son, S.H.

1992-04-01

338

Appendix E: Parallel Pascal development system  

NASA Technical Reports Server (NTRS)

The Parallel Pascal Development System enables Parallel Pascal programs to be developed and tested on a conventional computer. It consists of several system programs, including a Parallel Pascal to standard Pascal translator, and a library of Parallel Pascal subprograms. The library includes subprograms for using Parallel Pascal on a parallel system with a fixed degree of parallelism, such as the Massively Parallel Processor, to conveniently manipulate arrays which have different dimensions than the hardware. Programs can be conveniently tested with small sized arrays on the conventional computer before attempting to run on a parallel system.

1985-01-01

339

Parallel 3-D spherical-harmonics transport methods  

SciTech Connect

This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The authors have developed massively parallel algorithms and codes for solving the radiation transport equation on 3-D unstructured spatial meshes consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. Three self-adjoint forms of the transport equation are solved: the even-parity form, the odd-parity form, and the self-adjoint angular flux form. The authors developed this latter form, which offers several significant advantages relative to the traditional forms. The transport equation is discretized in space using a trilinear finite-element approximation, in direction using a spherical-harmonic approximation, and in energy using the multigroup approximation. The discrete equations are solved using a parallel conjugate-gradient method. All of the parallel algorithms were implemented on the CM-5 computer at LANL. Calculations are presented which demonstrate that the solution technique is both highly parallel and efficient.

Morel, J.E.; McGhee, J.M. [Los Alamos National Lab., NM (United States). Computing, Information, and Communications Div.; Manteuffel, T. [Univ. of Colorado, Boulder, CO (United States). Dept. of Mathematics

1997-08-01

340

Fatigue Reliability of Gas Turbine Engine Structures  

NASA Technical Reports Server (NTRS)

The results of an investigation of fatigue reliability in engine structures are described. The description consists of two parts. Part 1 is for method development. Part 2 is a specific case study. In Part 1, the essential concepts and practical approaches to damage tolerance design in the gas turbine industry are summarized. These have evolved over the years in response to flight safety certification requirements. The effect of Non-Destructive Evaluation (NDE) methods on these approaches is also reviewed. Assessment methods based on probabilistic fracture mechanics, with regard to both crack initiation and crack growth, are outlined. Limit state modeling techniques from structural reliability theory are shown to be appropriate for application to this problem, for both individual failure mode and system-level assessment. In Part 2, the results of a case study for the high pressure turbine of a turboprop engine are described. The response surface approach is used to construct a fatigue performance function. This performance function is used with the First Order Reliability Method (FORM) to determine the probability of failure and the sensitivity of the fatigue life to the engine parameters for the first stage disk rim of the two stage turbine. A hybrid combination of regression and Monte Carlo simulation is used to incorporate time-dependent random variables. System reliability is used to determine the system probability of failure, and the sensitivity of the system fatigue life to the engine parameters of the high pressure turbine. The variation in the primary hot gas and secondary cooling air, the uncertainty of the complex mission loading, and the scatter in the material data are considered.

Cruse, Thomas A.; Mahadevan, Sankaran; Tryon, Robert G.

1997-01-01

341

DPF: A Data Parallel Fortran Benchmark Suite  

E-print Network

DPF: A Data Parallel Fortran Benchmark Suite. Yu Charlie Hu, S. Lennart Johnsson, Dimitris Kehagias. Parallel Processing Symposium, Geneva, Switzerland, April 1997. Abstract: We present the Data Parallel Fortran (DPF) benchmark suite, a set of data parallel Fortran codes.

Johnsson, S. Lennart

342

Parallel Adaptive Mesh Refinement  

SciTech Connect

As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the ability of both meshing methods to resolve simulation details by varying the local grid spacing.
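
A minimal 1-D sketch of the flagging step that drives AMR, marking cells where the local solution jump exceeds a tolerance; this is the generic idea only, not the structured/unstructured implementations discussed in the report:

    import numpy as np

    def flag_cells(u, tol):
        grad = np.abs(np.diff(u))                 # 1-D cell-to-cell jump
        return np.where(grad > tol)[0]            # indices needing refinement

    x = np.linspace(0.0, 1.0, 65)
    u = np.tanh((x - 0.5) / 0.02)                 # sharp front at x = 0.5
    flags = flag_cells(u, tol=0.2)
    print("refine near:", x[flags])               # clusters around the front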

Diachin, L; Hornung, R; Plassmann, P; WIssink, A

2005-03-04

343

Parallel processing and expert systems  

NASA Technical Reports Server (NTRS)

Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

Lau, Sonie; Yan, Jerry C.

1991-01-01

344

Is Monte Carlo embarrassingly parallel?  

SciTech Connect

Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
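
A schematic of the per-cycle rendezvous described above, assuming mpi4py is installed (run with, e.g., mpiexec); the particle physics is stubbed out with a random draw, since only the synchronization structure is the point:

    from mpi4py import MPI
    import random

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    k_eff = 1.0
    for cycle in range(10):
        local_fissions = 0
        for _ in range(100_000 // size):       # this loop is embarrassingly parallel
            if random.random() < 0.4:          # stand-in for history tracking
                local_fissions += 1
        # Rendezvous: every rank must stop here each cycle (the choke point),
        # because the full fission source and k_eff feed the next cycle.
        total = comm.allreduce(local_fissions, op=MPI.SUM)
        k_eff = total / (100_000 * 0.4)        # toy estimate; input to next cycle
        if rank == 0:
            print(cycle, k_eff)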

Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

2012-07-01

345

Imprecise Reliability F.P.A. Coolen  

E-print Network

Imprecise Reliability. F.P.A. Coolen, Department of Mathematical Sciences, Durham University, Durham. We discuss imprecise reliability, particularly focussing on reliability theory with uncertainty quantified via lower and upper probabilities. We identify topics for further study, and we briefly discuss some research challenges. Keywords: expert judgements; imprecise

Coolen, Frank

346

Reliability Assessment Incorporating Operational Considerations and Economic  

E-print Network

Power Systems Engineering Research Center (PSERC) research project titled "Reliability Assessment Incorporating Operational Considerations and Economic Aspects".

347

77 FR 26686 - Transmission Planning Reliability Standards  

Federal Register 2010, 2011, 2012, 2013

...RM11-18-000; Order No. 762] Transmission Planning Reliability Standards AGENCY: Federal...Commission remands proposed Transmission Planning (TPL) Reliability Standard TPL-002-0b...Commission remands proposed Transmission Planning (TPL) Reliability Standard TPL-...

2012-05-07

348

Transfer form  

Cancer.gov

10/02 Transfer Investigational Agent Form This form is to be used for an intra-institutional transfer, one transfer/form. Division of Cancer Prevention National Cancer Institute National Institutes of Health TRANSFER FROM: Investigator transferring agent:

349

Parallel search of strongly ordered game trees  

SciTech Connect

The alpha-beta algorithm forms the basis of many programs that search game trees. A number of methods have been designed to improve the utility of the sequential version of this algorithm, especially for use in game-playing programs. These enhancements are based on the observation that alpha beta is most effective when the best move in each position is considered early in the search. Trees that have this so-called strong ordering property are not only of practical importance but possess characteristics that can be exploited in both sequential and parallel environments. This paper draws upon experiences gained during the development of programs which search chess game trees. Over the past decade major enhancements of the alpha beta algorithm have been developed by people building game-playing programs, and many of these methods will be surveyed and compared here. The balance of the paper contains a study of contemporary methods for searching chess game trees in parallel, using an arbitrary number of independent processors. To make efficient use of these processors, one must have a clear understanding of the basic properties of the trees actually traversed when alpha-beta cutoffs occur. This paper provides such insights and concludes with a brief description of a refinement to a standard parallel search algorithm for this problem. 33 references.
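
A minimal negamax-style alpha-beta sketch showing why strong ordering matters: when the best move is searched first, the cutoff triggers early and the remaining moves are never examined. The node interface (moves/play/evaluate) and the toy tree are hypothetical:

    def alphabeta(node, depth, alpha, beta, order_hint=None):
        # evaluate() is assumed to score from the side-to-move's perspective
        if depth == 0 or not node.moves():
            return node.evaluate()
        moves = node.moves()
        if order_hint:                       # try the predicted best move first
            moves.sort(key=order_hint)
        best = float("-inf")
        for m in moves:
            score = -alphabeta(node.play(m), depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:                # cutoff: siblings never searched
                break
        return best

    class Leafy:
        """Toy game tree with static values at the leaves."""
        def __init__(self, value=0, children=()):
            self.value, self.children = value, children
        def moves(self):
            return list(range(len(self.children)))
        def play(self, m):
            return self.children[m]
        def evaluate(self):
            return self.value

    tree = Leafy(children=(Leafy(children=(Leafy(3), Leafy(5))),
                           Leafy(children=(Leafy(-2), Leafy(9)))))
    print(alphabeta(tree, depth=2, alpha=float("-inf"), beta=float("inf")))  # 3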

Marsland, T.A.; Campbell, M.

1982-12-01

350

Implementing a parallel C++ runtime system for scalable parallel systems  

Microsoft Academic Search

pC++ is a language extension to C++ designed to allow programmers to compose "concurrent aggregate" collection classes which can be aligned and distributed over the memory hierarchy of a parallel machine in a manner modeled on the High Performance Fortran Forum (HPFF) directives for Fortran 90. pC++ allows the user to write portable and efficient code which will run on a wide range of scalable parallel computer systems.

A. Malony; B. Mohr; P. Beckman; D. Gannon; S. Yang; F. Bodin; S. Kesavan

1993-01-01

351

Massively Parallel Finite Element Programming  

Microsoft Academic Search

Today’s large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting

Timo Heister; Martin Kronbichler; Wolfgang Bangerth

2010-01-01

352

Distributing a GIS using a parallel data approach  

NASA Astrophysics Data System (ADS)

The limitations of serial processors for managing large computationally intensive dataset problems in fields such as visualization and Geographical Information Systems (GIS) are well known. Parallel processing techniques, where one or many computational tasks are distributed across a number of processing elements, have been proposed as a solution to the problem. We describe a model for visualizing oceanographic data that extends an earlier technique of using data parallel algorithms on a dedicated parallel computer to an object-oriented distributed visualization system that forms a virtual parallel machine on a network of computers. This paper presents a visualization model being developed by the University of Southern Mississippi demonstrating interactive visualization of oceanographic data. The test case involves visualization of two- and three-dimensional oceanographic data (salinity, sound speed profile, currents, temperature, and depth) with Windows NT Pentium-class computers serving as both servers and client workstations.

Monde, John R.; Wild, Michael

1998-05-01

353

Supercomputing on massively parallel bit-serial architectures  

NASA Technical Reports Server (NTRS)

Research on the Goodyear Massively Parallel Processor (MPP) suggests that high-level parallel languages are practical and can be designed with powerful new semantics that allow algorithms to be efficiently mapped to the real machines. For the MPP these semantics include parallel/associative array selection for both dense and sparse matrices, variable precision arithmetic to trade accuracy for speed, micro-pipelined train broadcast, and conditional branching at the processing element (PE) control unit level. The preliminary design of a FORTRAN-like parallel language for the MPP has been completed and is being used to write programs to perform sparse matrix array selection, min/max search, matrix multiplication, Gaussian elimination on single bit arrays and other generic algorithms. A description is given of the MPP design. Features of the system and its operation are illustrated in the form of charts and diagrams.

Iobst, Ken

1985-01-01

354

Parallel computation using boundary elements in solid mechanics  

NASA Technical Reports Server (NTRS)

The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming the linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain the analytical results for influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, the parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for solving a demonstration problem on the Sequent Symmetry S81 parallel computing system.

Chien, L. S.; Sun, C. T.

1990-01-01

355

Robust Design of Reliability Test Plans Using Degradation Measures.  

SciTech Connect

With short production development times, there is an increased need to demonstrate product reliability relatively quickly with minimal testing. In such cases there may be few if any observed failures. Thus, it may be difficult to assess reliability using the traditional reliability test plans that measure only time (or cycles) to failure. For many components, degradation measures will contain important information about performance and reliability. These measures can be used to design a minimal test plan, in terms of number of units placed on test and duration of the test, necessary to demonstrate a reliability goal. Generally, the assumption is made that the error associated with a degradation measure follows a known distribution, usually normal, although in practice cases may arise where that assumption is not valid. In this paper, we examine such degradation measures, both simulated and real, and present non-parametric methods to demonstrate reliability and to develop reliability test plans for the future production of components with this form of degradation.

Lane, Jonathan Wesley; Lane, Jonathan Wesley; Crowder, Stephen V.; Crowder, Stephen V.

2014-10-01

356

First reliability test of a surface micromachined microengine using SHiMMeR  

SciTech Connect

The first-ever reliability stress test on surface micromachined microengines developed at Sandia National Laboratories (SNL) has been completed. We stressed 41 microengines at 36,000 RPM and inspected the functionality at 60 RPM. We have observed an infant mortality region, a region of low failure rate (useful life), and no signs of wearout in the data. The reliability data are presented and interpreted using standard reliability methods. Failure analysis results on the stressed microengines are presented. In our effort to study the reliability of MEMS, we need to observe the failures of large numbers of parts to determine the failure modes. To facilitate testing of large numbers of micromachines, the Sandia High Volume Measurement of Micromachine Reliability (SHiMMeR) system has computer controlled positioning and the capability to inspect moving parts. The development of this parallel testing system is discussed in detail.

Tanner, D.M.; Smith, N.F.; Bowman, D.J. [and others

1997-08-01

357

Why Structured Parallel Programming Matters Murray Cole  

E-print Network

Why Structured Parallel Programming Matters. Murray Cole, Institute for Computing Systems Architecture. Many (most?) parallel applications don't actually involve arbitrary, dynamic interaction patterns.

Cole, Murray

358

Benchmarking Parallel Java Master's Project Report  

E-print Network

Benchmarking Parallel Java: Master's Project Report. Asma'u Sani Mohammed. The project benchmarks the Parallel Java API by implementing the OpenMP version of the NAS Parallel Benchmark (NPB) in comparison with FORTRAN OpenMP. Benchmarking Parallel Java allows us to understand

Kaminsky, Alan

359

Parallel Marker Based Image Segmentation with Watershed  

E-print Network

Parallel Marker Based Image Segmentation with Watershed Transformation. Alina N. Moga. The parallel watershed transformation used homogeneity with the watershed transformation. Boundary-based region merging is then effected to condense non

360

Uhlmann's parallelism Nagaoka's quantum information geometry  

E-print Network

Uhlmann's parallelism and Nagaoka's quantum information geometry. Keiji Matsumoto. METR 97-09, October 1997. Abstract: In this paper, intrinsic relation

Yamamoto, Hirosuke

361

Automatic Generation of Parallel CRC Circuits  

Microsoft Academic Search

A parallel CRC circuit simultaneously processes multiple data bits. A generic VHDL description of parallel CRC circuits lets designers synthesize CRC circuits for any generator polynomial or required amount of parallelism
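
A software analogue in Python: a table-driven CRC-32 that folds 8 bits into the register per step, the same precomputation idea a parallel CRC circuit applies in hardware for w bits per clock. This uses the standard reflected CRC-32 polynomial and is not the article's VHDL generator:

    import zlib

    TABLE = []
    for i in range(256):
        c = i
        for _ in range(8):                     # precompute 8 bit-steps per byte
            c = (c >> 1) ^ (0xEDB88320 if c & 1 else 0)
        TABLE.append(c)

    def crc32(data: bytes) -> int:
        c = 0xFFFFFFFF
        for b in data:                         # one table lookup = 8 bit-steps
            c = (c >> 8) ^ TABLE[(c ^ b) & 0xFF]
        return c ^ 0xFFFFFFFF

    msg = b"parallel CRC"
    assert crc32(msg) == zlib.crc32(msg)       # matches the reference value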

Michael Sprachmann

2001-01-01

362

Assessment of NDE reliability data  

NASA Technical Reports Server (NTRS)

Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
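
A sketch of the binomial probability-of-detection bound, assuming SciPy; the Clopper-Pearson form below is one standard choice and may differ in detail from the program described:

    from scipy.stats import beta

    def pod_lower_bound(detections, trials, confidence=0.95):
        """One-sided lower confidence bound on probability of detection,
        from s detections in n inspection trials (Clopper-Pearson)."""
        if detections == 0:
            return 0.0
        return beta.ppf(1.0 - confidence, detections, trials - detections + 1)

    # e.g. 29 detections of 29 cracks -> the classic "29 of 29" rule giving
    # roughly 90% POD at 95% confidence
    print(round(pod_lower_bound(29, 29, 0.95), 3))   # ~0.902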

Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.

1975-01-01

363

Reliability model for planetary gear  

NASA Technical Reports Server (NTRS)

A reliability model is presented for planetary gear trains in which the ring gear is fixed, the Sun gear is the input, and the planet arm is the output. The input and output shafts are coaxial and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. This type of gear train is commonly used in main rotor transmissions for helicopters and in other applications which require high reductions in speed. The reliability model is based on the Weibull distribution of the individual reliabilities of the transmission components. The transmission's basic dynamic capacity is defined as the input torque which may be applied for one million input rotations of the Sun gear. Load and life are related by a power law. The load life exponent and basic dynamic capacity are developed as functions of the component capacities.
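
A sketch of the model's two generic ingredients, with made-up parameters: series combination of Weibull component reliabilities, and the load-life power law relating torque to life:

    import numpy as np

    def system_reliability(t, thetas, betas):
        """R_sys(t) = prod_i exp(-(t/theta_i)^beta_i) for a series system."""
        return np.prod([np.exp(-(t / th) ** b) for th, b in zip(thetas, betas)])

    def life_at_load(base_life, capacity, torque, p):
        """Load-life power law: L = L_ref * (C / T)^p."""
        return base_life * (capacity / torque) ** p

    # hypothetical sun/planet/bearing parameters (millions of input rotations)
    thetas, betas = [80.0, 120.0, 60.0], [1.5, 2.0, 1.1]
    L = life_at_load(1.0, capacity=500.0, torque=350.0, p=3.0)  # ~2.9
    print(L, system_reliability(L, thetas, betas))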

Savage, M.; Paridon, C. A.; Coy, J. J.

1982-01-01

364

Application of spatial and angular domain based parallelism to a discrete ordinates formulation with unstructured spatial discretization  

Microsoft Academic Search

A parallel discrete ordinate formulation employing a general, unstructured finite element spatial discretization is presented for steady, gray, nonscattering radiative heat transport within a participating medium. The formulation is based on the first-order form of the Boltzmann transport equation and allows for any combination of spatial and angular domain based parallelism. The formulation is tested on a massively parallel,

1997-01-01

365

Detection of faults and software reliability analysis  

NASA Technical Reports Server (NTRS)

Multi-version or N-version programming is proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. These versions are executed in parallel in the application environment; each receives identical inputs and each produces its version of the required outputs. The outputs are collected by a voter and, in principle, they should all be the same. In practice there may be some disagreement. If this occurs, the results of the majority are taken to be the correct output, and that is the output used by the system. A total of 27 programs were produced. Each of these programs was then subjected to one million randomly-generated test cases. The experiment yielded a number of programs containing faults that are useful for general studies of software reliability as well as studies of N-version programming. Fault tolerance through data diversity and analytic models of comparison testing are discussed.
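
A minimal majority voter over N version outputs, as described above (toy values; real voters must also handle tolerance comparisons for floating-point outputs):

    from collections import Counter

    def vote(outputs):
        """Majority voter over N independently developed versions.
        Returns (value, agreed); agreed is False if no majority exists."""
        value, count = Counter(outputs).most_common(1)[0]
        return value, count > len(outputs) // 2

    # Three versions disagree on one output; the majority result is used.
    print(vote([42, 42, 41]))   # (42, True)
    print(vote([1, 2, 3]))      # no majority -> (value, False)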

Knight, John C.

1987-01-01

366

Computational Thermochemistry and Benchmarking of Reliable Methods  

SciTech Connect

During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

2006-06-20

367

Employing machine learning for reliable miRNA target identification in plants  

PubMed Central

Background miRNAs are ~21 nucleotide long small noncoding RNA molecules, formed endogenously in most of the eukaryotes, which mainly control their target genes post-transcriptionally by interacting with and silencing them. While many tools have been developed for animal miRNA target systems, plant miRNA target identification has witnessed limited development. Most of the existing tools have been centered on exact complementarity matching. Very few of them considered other factors like multiple target sites and the role of flanking regions. Result In the present work, a Support Vector Regression (SVR) approach has been implemented for plant miRNA target identification, utilizing position specific dinucleotide density variation information around the target sites, to yield highly reliable results. It has been named p-TAREF (plant-Target Refiner). Performance comparison for p-TAREF was done with other prediction tools for plants with utmost rigor, and p-TAREF was found to perform better in several aspects. Further, p-TAREF was run over the experimentally validated miRNA targets from species like Arabidopsis, Medicago, Rice and Tomato, and detected them accurately, suggesting gross usability of p-TAREF for plant species. Using p-TAREF, target identification was done for the complete Rice transcriptome, supported by expression and degradome based data. miR156 was found to be an important component of the Rice regulatory system, where control of genes associated with growth and transcription looked predominant. The entire methodology has been implemented in a multi-threaded parallel architecture in Java, to enable fast processing for the web-server version as well as the standalone version. This also allows it to run even on a simple desktop computer in concurrent mode. It also provides a facility to gather experimental support for predictions made, through on-the-spot expression data analysis, in its web-server version. Conclusion A machine learning multivariate feature tool has been implemented in parallel and locally installable form for plant miRNA target identification. The performance was assessed and compared through comprehensive testing and benchmarking, suggesting reliable performance and gross usability for transcriptome-wide plant miRNA target identification. PMID:22206472

2011-01-01

368

"Feeling" Series and Parallel Resistances.  

ERIC Educational Resources Information Center

Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
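
The relations being "felt" are the usual combination rules; a two-line Python check:

    def series(*rs):
        return sum(rs)                          # R = R1 + R2 + ...

    def parallel(*rs):
        return 1.0 / sum(1.0 / r for r in rs)   # 1/R = 1/R1 + 1/R2 + ...

    print(series(100, 220))     # 320 ohms
    print(parallel(100, 220))   # 68.75 ohms: less than the smallest branch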

Morse, Robert A.

1993-01-01

369

Parallel Inversion of Sparse Matrices  

Microsoft Academic Search

This paper presents a parallel algorithm for obtaining the inverse of a large, nonsingular symmetric matrix A of dimension nxn. The inversion method proposed is based on the triangular factors of A. The task of obtaining the

Ramon Betancourt; Fernando L. Alvarado

1986-01-01

370

PARALLEL GREEDY RANDOMIZED ADAPTIVE SEARCH ...  

E-print Network

Dec 6, 2004 ... up of those elements that can be added to the current solution under construction without ...... The execution times of the independent parallel program executing ... processing a finite set of jobs on a finite set of machines.

2004-12-06

371

Turbomachinery CFD on parallel computers  

NASA Technical Reports Server (NTRS)

The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

1992-01-01

372

RELAV - RELIABILITY/AVAILABILITY ANALYSIS PROGRAM  

NASA Technical Reports Server (NTRS)

RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for reliability calculations and infinite repair resources for availability calculations. No more than 967 items or groups can be modeled by RELAV. If larger problems can be broken into subsystems of 967 items or less, the subsystem results can be used as item inputs to a system problem. The calculated availabilities are steady-state values. Group results are presented in the order in which they were calculated (from the most embedded level out to the system level). This provides a good mechanism to perform trade studies. Starting from the system result and working backwards, the granularity gets finer; therefore, system elements that contribute most to system degradation are detected quickly. RELAV is a C-language program originally developed under the UNIX operating system on a MASSCOMP MC500 computer. It has been modified, as necessary, and ported to an IBM PC compatible with a math coprocessor. The current version of the program runs in the DOS environment and requires a Turbo C vers. 2.0 compiler. RELAV has a memory requirement of 103 KB and was developed in 1989. RELAV is a copyrighted work with all copyright vested in NASA.
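
A sketch of the equal-probability (cumulative binomial) case in Python, with made-up numbers; RELAV's folding process applies such a computation group by group from the most embedded level outward:

    from math import comb

    def k_of_n_reliability(k, n, p):
        """P(at least k of n identical items succeed): cumulative binomial,
        the equal-probability case RELAV is described as handling."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # e.g. 2-out-of-3 redundancy with item reliability 0.9
    print(k_of_n_reliability(2, 3, 0.9))   # 0.972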

Bowerman, P. N.

1994-01-01

373

Reliable predictions of unusual molecules.  

PubMed

Quantum chemistry can today be employed to invent new molecules and investigate their properties and chemical bonding. However, the predicted species must be viable in order to be synthesized by experimentalists. In this perspective article we describe the technology of reliable theoretical predictions and show how understanding of chemical bonding in studied chemical systems could help to design new molecular structures. We also provide a short overview of successfully predicted and already produced (in some cases) planar hypercoordinate species to demonstrate that the consistent theoretical prediction of viable molecules with unusual structures and properties is now a reliable tool for exploring new, yet unknown molecules, clusters, nanomaterials and solids. PMID:23103915

Ivanov, Alexander S; Boldyrev, Alexander I

2012-12-14

374

Computing Reliabilities Of Ceramic Components  

NASA Technical Reports Server (NTRS)

CARES/PC computer program performs statistical analysis of data obtained from fracture of simple, uniaxial tensile or flexural specimens of ceramics and estimates Weibull and Batdorf material parameters from these data. CARES/PC is a subset of the Ceramics Analysis and Reliability Evaluation of Structures (CARES) program (LEW-15168), which calculates fast-fracture reliabilities or failure probabilities of ceramic components by use of Batdorf and Weibull models to describe effects of multiaxial stress states on strengths of materials. CARES/PC is written and compiled with the Microsoft FORTRAN v5.0 compiler.
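As a rough illustration of the kind of estimation involved (a didactic sketch only; CARES/PC is FORTRAN and uses its own estimators, and the strength data below are invented), two-parameter Weibull statistics can be fit to uniaxial fracture strengths by median-rank regression and then used to evaluate a fast-fracture failure probability:

```python
import numpy as np

def weibull_fit_median_rank(strengths):
    """Estimate Weibull modulus m and characteristic strength s0 by linear
    regression on the Weibull plot ln(-ln(1-F)) = m*ln(sigma) - m*ln(s0),
    using Bernard's median-rank plotting positions."""
    x = np.sort(np.asarray(strengths, dtype=float))
    n = len(x)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median ranks
    slope, intercept = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
    return slope, np.exp(-intercept / slope)       # m, s0

def failure_probability(sigma, m, s0):
    """Two-parameter Weibull fast-fracture failure probability."""
    return 1.0 - np.exp(-(sigma / s0) ** m)

m, s0 = weibull_fit_median_rank([310, 335, 350, 362, 375, 390, 410, 430])
print(m, s0, failure_probability(400, m, s0))
```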

Szatmary, S. A.; Gyekenyesi, J. P.; Nemeth, N. N.

1993-01-01

375

Reliability of concentrix™ CPV modules  

NASA Astrophysics Data System (ADS)

Reliability and durability of all CPV power plant components is very important since long-term operation in a harsh environment is required. The last three Concentrix CPV Module generations have all been certified according to IEC 62108:2007. Based on field experience to date, there is no indication of where and how to improve Soitec's Concentrix™ CPV Module reliability. To prove and further improve the CPV Module robustness, extensive internal and external accelerated ageing tests are executed at Soitec. In this paper, some of these tests are described and results are presented.

Gerster, E.; Gombert, A.; Wanka, S.

2012-10-01

376

Parallel architecture for OPS5  

Microsoft Academic Search

An architecture that captures some of the inherent parallelism of the OPS5 expert system language has been designed and implemented at Oak Ridge National Laboratory. A central feature of this architecture is a network bus over which a single host processor broadcasts messages to a set of parallel rule processors. This transmit-only bus is implemented by a memory-mapped scheme which

Philip L. Butler; J. D. Allen Jr.; Donald W. Bouldin

1988-01-01

377

Parallel Algorithms for Term Matching  

Microsoft Academic Search

We present a new randomized parallel algorithm for term matching. Let n be the number of nodes of the directed acyclic graphs (dags) representing the terms to be matched; then our algorithm uses O(log² n) parallel time and M(n) processors, where M(n) is the complexity of n by n matrix multiplication. The number of processors is a significant improvement over previously

Cynthia Dwork; Paris C. Kanellakis; Larry J. Stockmeyer

1986-01-01

378

Forms of matter and forms of radiation  

E-print Network

The theory of defects in ordered and ill-ordered media is a well-advanced part of condensed matter physics. Concepts developed in this field also occur in the study of spacetime singularities, namely: i) the topological theory of quantized defects (Kibble's cosmic strings) and ii) the Volterra process for continuous defects, used to classify the Poincaré symmetry breakings. We reassess the classification of Minkowski spacetime defects in the same theoretical frame, starting from the conjecture that these defects fall into two classes, as they relate to massive particles or to radiation. This we justify on the empirical evidence of the Hubble expansion. We introduce timelike and null congruences of geodesics treated as ordered media, viz. 'm'-crystals of massive particles and 'r'-crystals of massless particles, with parallel 4-momenta in M^4. Classifying their defects (or 'forms') we find (i) 'm'- and 'r'- Volterra continuous line defects and (ii) quantized topologically stable 'r'-defects, these latter forms being of various dimensionalities. Besides these 'perfect' forms, there are 'imperfect' disclinations that bound misorientation walls in three dimensions. We also speculate on the possible relation of these forms with the large-scale structure of the Universe.

Maurice Kleman

2011-04-08

379

Architectures for reasoning in parallel  

NASA Technical Reports Server (NTRS)

The research conducted has dealt with rule-based expert systems and the algorithms that may lead to their effective parallelization. Both the forward- and backward-chained control paradigms were investigated in the course of this work, as was the best computer architecture for the developed algorithms. Two experimental vehicles were built to facilitate this research: Backpac, a parallel backward-chained rule-based reasoning system, and Datapac, a parallel forward-chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct, future. Applying future to a function call causes that call to run as a task in parallel with the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors: an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32-processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines; the Multimax has all its processors hung off a common bus. All are shared-memory machines, but have different schemes for sharing the memory and different locales for the shared memory. The main results of the investigations come from experiments on the 10-processor Encore and the Concert with partitions of 32 or fewer processors. Additionally, experiments have been run with a stripped-down version of EMYCIN.
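Multilisp's future has close analogues in modern task libraries. The toy below re-renders the idea in Python with concurrent.futures rather than Multilisp; the rule representation is hypothetical and is not Backpac or Datapac code. Each rule match in a forward-chaining step is spawned as a parallel task, and forcing the futures gathers the newly derived facts:

```python
from concurrent.futures import ThreadPoolExecutor

def match(rule, facts):
    # hypothetical rule representation: (name, condition, consequent)
    name, condition, consequent = rule
    return consequent if condition(facts) else None

def parallel_forward_step(rules, facts, pool):
    futures = [pool.submit(match, r, facts) for r in rules]  # like (future (match ...))
    new = {f.result() for f in futures} - {None}             # forcing the futures
    return facts | new

rules = [
    ("r1", lambda f: "a" in f, "b"),
    ("r2", lambda f: "b" in f, "c"),
]
with ThreadPoolExecutor(max_workers=4) as pool:
    facts = {"a"}
    while True:
        nxt = parallel_forward_step(rules, facts, pool)
        if nxt == facts:          # fixed point: no new facts derived
            break
        facts = nxt
print(facts)  # {'a', 'b', 'c'}
```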

Hall, Lawrence O.

1989-01-01

380

Efficiency of parallel direct optimization  

NASA Technical Reports Server (NTRS)

Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.

Janies, D. A.; Wheeler, W. C.

2001-01-01

381

RELIABILITY OF CAPACITOR CHARGING UNITS  

E-print Network

University of Wisconsin Thermonuclear Plasma Studies, PLP 51, July 30, 1965. Recent tests have been made on the accuracy of the voltage to which various capacitor banks

Sprott, Julien Clinton

382

Reliable Multicast Transport Protocol (RMTP)  

Microsoft Academic Search

This paper presents the design, implementation, and performance of a reliable multicast transport protocol (RMTP). RMTP is based on a hierarchical structure in which receivers are grouped into local regions or domains and in each domain there is a special receiver called a designated receiver (DR) which is responsible for sending acknowledgments periodically to the sender, for processing acknowledgment from

Sanjoy Paul; Krishan K. Sabnani; John C.-H. Lin; Supratik Bhattacharyya

1997-01-01

383

Becoming a high reliability organization.  

PubMed

Aircraft carriers, electrical power grids, and wildland firefighting, though seemingly different, are exemplars of high reliability organizations (HROs)--organizations that have the potential for catastrophic failure yet engage in nearly error-free performance. HROs commit to safety at the highest level and adopt a special approach to its pursuit. High reliability organizing has been studied and discussed for some time in other industries and is receiving increasing attention in health care, particularly in high-risk settings like the intensive care unit (ICU). The essence of high reliability organizing is a set of principles that enable organizations to focus attention on emergent problems and to deploy the right set of resources to address those problems. HROs behave in ways that sometimes seem counterintuitive--they do not try to hide failures but rather celebrate them as windows into the health of the system, they seek out problems, they avoid focusing on just one aspect of work and are able to see how all the parts of work fit together, they expect unexpected events and develop the capability to manage them, and they defer decision making to local frontline experts who are empowered to solve problems. Given the complexity of patient care in the ICU, the potential for medical error, and the particular sensitivity of critically ill patients to harm, high reliability organizing principles hold promise for improving ICU patient care. PMID:22188677

Christianson, Marlys K; Sutcliffe, Kathleen M; Miller, Melissa A; Iwashyna, Theodore J

2011-01-01

384

Smart Grid - a reliability perspective  

Microsoft Academic Search

Increasing complexity of power grids, growing demand, and requirement for greater grid reliability, security and efficiency, as well as environmental and energy sustainability concerns, continue to highlight the need for a quantum leap in harnessing communication and information technologies. This leap toward a "smarter" grid is now widely referred to as the "smart grid."

Khosrow Moslehi; Ranjit Kumar

2010-01-01

385

Web Awards: Are They Reliable?  

ERIC Educational Resources Information Center

School library media specialists recommend quality Web sites to children based on evaluations and Web awards. This article examines three types of Web awards and who grants them, suggests ways to determine their reliability, and discusses specific award sites. Includes a bibliography of Web sites. (PEN)

Everhart, Nancy; McKnight, Kathleen

1997-01-01

386

Photovoltaic performance and reliability workshop  

Microsoft Academic Search

This proceedings is the compilation of papers presented at the ninth PV Performance and Reliability Workshop, held at the Sheraton Denver West Hotel on September 4-6, 1996. This year's workshop included presentations from 25 speakers and had over 100 attendees. All of the presentations that were given are included in this proceedings. Topics of the papers included: defining service lifetime

Kroposki

1996-01-01

387

Photovoltaics Performance and Reliability Workshop  

Microsoft Academic Search

This document consists of papers and viewgraphs compiled from the proceedings of a workshop held in September 1992. This workshop was the fifth in a series sponsored by NREL/DOE under the general subject areas of photovoltaic module testing and reliability. PV manufacturers, DOE laboratories, electric utilities, and others exchanged technical knowledge and field experience. The topics of cell and module

L. Mrig

1992-01-01

388

Trends in human reliability analysis  

Microsoft Academic Search

The approach to human reliability has been changing during the past decades, partly due to the needs of probabilistic risk assessment of large-scale industrial installations, partly due to a change within psychological research towards cognitive studies. In the paper, some of the characteristic features of this change are discussed. Definition of human error and judgement of performance are becoming increasingly

JENS RASMUSSEN

1985-01-01

389

Reliability analysis of ventilation systems  

SciTech Connect

This article proposes design parameters and systems analysis procedures for a mine ventilation system which incorporates backup and redundancy in its switching and fan systems in order to optimize the safety and reliability of the overall system. Failure probabilities due to frost buildup and other factors are assessed and several design regimes are comparatively evaluated. Diagrams are included.

Petrov, N.N.; Butorina, O.S.

1987-09-01

390

Photovoltaic performance and reliability workshop  

SciTech Connect

This proceedings is the compilation of papers presented at the ninth PV Performance and Reliability Workshop, held at the Sheraton Denver West Hotel on September 4-6, 1996. This year's workshop included presentations from 25 speakers and had over 100 attendees. All of the presentations that were given are included in this proceedings. Topics of the papers included: defining service lifetime and developing models for PV module lifetime; examining and determining failure and degradation mechanisms in PV modules; combining IEEE/IEC/UL testing procedures; AC module performance and reliability testing; inverter reliability/qualification testing; standardization of utility interconnect requirements for PV systems; activities needed to separate variables by testing individual components of PV systems (e.g., cells, modules, batteries, inverters, charge controllers) for individual reliability and then testing them in actual system configurations; more results reported from field experience on modules, inverters, batteries, and charge controllers from field-deployed PV systems; and system certification and standardized testing for stand-alone and grid-tied systems.

Kroposki, B.

1996-10-01

391

Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer  

DOEpatents

Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

2014-02-11

392

Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer  

DOEpatents

Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

2014-08-12

393

Compound estimation procedures in reliability  

NASA Technical Reports Server (NTRS)

At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability even when few or no failures have been recorded; point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components, and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages, have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored, and preliminary models involving the bivariate Poisson distribution and the Consael process (a bivariate Poisson process) were developed. Possible shortcomings of the models are noted. An example is given to illustrate the procedures. These investigations are ongoing, with the aim of developing estimators that extend to components (and subsystems) with three or more design stages.
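A minimal sketch of the flavor of such estimation, under assumed conjugate-model simplifications (this is not the report's bivariate Poisson/Consael machinery; the discount weight and the data are invented for illustration): place a gamma prior on an exponential failure rate, let down-weighted old-stage failures inform the prior, and update on current-stage data, yielding a mission-reliability point estimate even with zero recent failures:

```python
from math import exp

def posterior_reliability(t_mission, failures, exposure, a0=0.5, b0=1.0):
    """Gamma(a0, b0) prior on an exponential failure rate; the posterior
    after `failures` in `exposure` hours is Gamma(a0+failures, b0+exposure).
    The posterior-mean rate gives a point estimate exp(-lambda*t)."""
    a, b = a0 + failures, b0 + exposure
    return exp(-(a / b) * t_mission)

# Pool two design stages: old-stage evidence shapes the prior, down-weighted
# by an assumed discount w, then the current stage's data update it.
w = 0.5
a_prior = 0.5 + w * 3          # 3 failures observed on the old design
b_prior = 1.0 + w * 2000.0     # over 2000 operating hours
print(posterior_reliability(100.0, failures=0, exposure=500.0,
                            a0=a_prior, b0=b_prior))
```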

Barnes, Ron

1990-01-01

394

Parallel asynchronous systems and image processing algorithms  

NASA Technical Reports Server (NTRS)

A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

Coon, D. D.; Perera, A. G. U.

1989-01-01

395

Power Quality and Reliability Project  

NASA Technical Reports Server (NTRS)

One area where universities and industry can link is in the area of power systems reliability and quality - key concepts in the commercial, industrial and public sector engineering environments. Prairie View A&M University (PVAMU) has established a collaborative relationship with the University of Texas at Arlington (UTA), NASA/Johnson Space Center (JSC), and EP&C Engineering and Technology Group (EP&C), a small disadvantaged business that specializes in power quality and engineering services. The primary goal of this collaboration is to facilitate the development and implementation of a Strategic Integrated Power/Systems Reliability and Curriculum Enhancement Program. The objectives of the first phase of this work are: (a) to develop a course in power quality and reliability, (b) to use the campus of Prairie View A&M University as a laboratory for the study of systems reliability and quality issues, and (c) to provide students with NASA/EP&C shadowing and internship experience. In this work, a course titled "Reliability Analysis of Electrical Facilities" was developed and taught for two semesters, and about thirty-seven students have benefited directly from it. A laboratory accompanying the course was also developed, and four facilities at Prairie View A&M University were surveyed. Tests performed include (i) earth-ground testing, (ii) measuring voltage, amperage and harmonics of various panels in the buildings, (iii) checking that wire sizes were adequate for the loads they carried, (iv) vibration tests on the engines, chillers and water pumps, and (v) infrared testing for arcing or misfiring of electrical or mechanical systems.

Attia, John O.

2001-01-01

396

A Semantic Wiki Alerting Environment Incorporating Credibility and Reliability Evaluation  

E-print Network

In this paper, we describe a prototype we are developing that we call the Semantic Wiki Alerting Environment, which incorporates credibility and reliability evaluation in the form of a semantic wiki. A gang ontology and semantic inferencing are used to annotate the reports

Kokar, Mieczyslaw M.

397

Pad finish related board-level solder joint reliability research  

Microsoft Academic Search

Pad finish is the direct interface between the PCB and the solder ball; it plays an important role in determining the composition and characteristics of the intermetallic compound (IMC) formed at those interfaces, and can even change the mechanical properties and microstructure of the bulk solder joint. In this paper, we investigated the reliability of 10 different pad finish combinations. The substrate finish includes electro-plated

Chen Zhengrong; Zhou Jianwei; Fu Xingming; Lee Jaisung

2010-01-01

398

Reliability and Validity of the Learning Styles Questionnaire.  

ERIC Educational Resources Information Center

Describes a study of Chinese undergraduate students at the Hong Kong Polytechnic that was conducted to examine the reliability and predictive validity of a short form of Honey and Mumford's Learning Styles Questionnaire. Correlations between learning style scores and preferences for different types of learning activities are discussed. (16…

Fung, Y. H.; And Others

1993-01-01

399

Defining Requirements for Improved Photovoltaic System Reliability  

SciTech Connect

Reliable systems are an essential ingredient of any technology progressing toward commercial maturity and large-scale deployment. This paper defines reliability as meeting system functional requirements, and then develops a framework to understand and quantify photovoltaic system reliability based on initial and ongoing costs and system value. The core elements necessary to achieve reliable PV systems are reviewed. These include appropriate system design, satisfactory component reliability, and proper installation and servicing. Reliability status, key issues, and present needs in system reliability are summarized for four application sectors.

Maish, A.B.

1998-12-21

400

Quantum Memory Hierarchies: Efficient Designs to Match Available Parallelism in Quantum Computing  

Microsoft Academic Search

The assumption of maximum parallelism support for the successful realization of scalable quantum computers has led to homogeneous, "sea-of-qubits" architectures. The resulting architectures overcome the primary challenges of reliability and scalability at the cost of physically unacceptable system area. We find that by exploiting the natural serialization at both the application and the physical microarchitecture level of a quantum computer,

Darshan D. Thaker; Tzvetan S. Metodi; Andrew W. Cross; Isaac L. Chuang; Frederic T. Chong

2006-01-01

401

Evaluating the Error Resilience of Parallel Programs  

E-print Network

Evaluating the error resilience of HPC applications is an essential step toward understanding their reliability. We present a methodology to characterize the resilience of OpenMP programs using fault-injection experiments. We find

Gurumurthi, Sudhanva

402

Native Speakers' versus L2 Learners' Sensitivity to Parallelism in Vp-Ellipsis  

ERIC Educational Resources Information Center

This article examines sensitivity to structural parallelism in verb phrase ellipsis constructions in English native speakers as well as in three groups of advanced second language (L2) learners. The results of a set of experiments, based on those of Tanenhaus and Carlson (1990), reveal subtle but reliable differences among the various learner…

Duffield, Nigel G.; Matsuo, Ayumi

2009-01-01

403

A parallel-plate actuated test structure for fatigue analysis of MEMS  

Microsoft Academic Search

Silicon, heavily used as a structural material in MEMS, is subject to several reliability concerns, most importantly fatigue, which can limit the utility of MEMS devices in commercial and defense applications. A novel parallel-plate actuated test structure for fatigue analysis of MEMS is designed in this paper, and the structure is fabricated by bulk micromachining. Firstly, according to the predefined

Qi Min; Junyong Tao; Yun'an Zhang; Xun Chen

2011-01-01

404

A parallel stereo algorithm that produces dense depth maps and preserves image features  

Microsoft Academic Search

To compute reliable dense depth maps, a stereo algorithm must preserve depth discontinuities and avoid gross errors. In this paper, we show how simple and parallel techniques can be combined to achieve this goal and deal with complex real world scenes. Our algorithm relies on correlation followed by interpolation. During the correlation phase the two images play a symmetric role

Pascal Fua

1992-01-01

405

A parallel stereo algorithm that produces dense depth maps and preserves image features  

Microsoft Academic Search

To compute reliable dense depth maps, a stereo algorithm must preserve depth discontinuities and avoid gross errors. In this paper, we show how simple and parallel techniques can be combined to achieve this goal and deal with complex real world scenes. Our algorithm relies on correlation followed by interpolation. During the correlation phase the two images play

Pascal Fua

1991-01-01

406

PARALLEL IDENTIFICATION OF STRUCTURAL DAMAGES USING VIBRATION MODES AND SENSOR CHARACTERISTICS  

Microsoft Academic Search

The knowledge of modal parameters is used to enhance a parallel system identification technique that is aimed at estimating the story stiffness and damping of a building structure. The modal parameters are used to decide the pass band for band-pass filters to improve the quality of data by selecting the reliable signals confined in the vicinity of modal frequencies. The

Reiki YOSHIMOTO; Akira MITA; Koichi MORITA

407

A knowledge management system for series-parallel availability optimization and design  

Microsoft Academic Search

System availability is an important subject in the design of industrial systems as system structures become more complicated. Improving a system's reliability also raises its cost; availability is increased through redundancy. The Redundancy Allocation Problem (RAP) of a series-parallel system is traditionally resolved by experienced system designers. We proposed a genetic

Ying-shen Juang; Shui-shun Lin; Hsing-pei Kao

2008-01-01

408

CONTAMINANT TRANSPORT IN PARALLEL FRACTURED MEDIA: SUDICKY AND FRIND REVISITED  

EPA Science Inventory

This paper is concerned with a modified, nondimensional form of the parallel fracture, contaminant transport model of Sudicky and Frind (1982). The modifications include the boundary condition at the fracture wall, expressed by a parameter, and the power-law relationship between...

409

CONTAMINANT TRANSPORT IN PARALLEL FRACTURED MEDIA: SUDICKY AND FRIND REVISITED  

EPA Science Inventory

This paper is concerned with a modified, nondimensional form of the parallel fracture, contaminant transport model of Sudicky and Frind (1982). The modifications include the boundary condition at the fracture wall, expressed by a parameter, and the power-law relationship betwe...

410

Hierarchical parallel computer architecture defined by computational multidisciplinary mechanics  

NASA Technical Reports Server (NTRS)

The goal is to develop an architecture for parallel processors enabling optimal handling of multi-disciplinary computation of fluid-solid simulations employing finite element and difference schemes. The goals, philosophical and modeling directions, static and dynamic poly trees, example problems, interpolative reduction, and the impact on solvers are shown in viewgraph form.

Padovan, Joe; Gute, Doug; Johnson, Keith

1989-01-01

411

HIGHLY PARALLEL EVOLUTIONARY ALGORITHMS FOR GLOBAL OPTIMIZATION, SYMBOLIC INFERENCE AND  

E-print Network

Evolutionary algorithms employ operators such as mutation, recombination, reproduction and selection. Mutation randomly perturbs a candidate solution, recombination decomposes two distinct solutions and then randomly mixes their parts to form a novel solution, and reproduction replicates a candidate solution.

Neumaier, Arnold

412

Parallel electronic circuit simulation on the iPSC system  

Microsoft Academic Search

A parallel circuit simulator was implemented on the iPSC system. Concurrent model evaluation, hierarchical BBDF (bordered block diagonal form) reordering, and distributed multifrontal decomposition to solve the sparse matrix are used. A speedup of six times has been achieved on an eight-processor iPSC hypercube system.

C.-P. Yuan; R. Lucas; P. Chan; R. Dutton

1988-01-01

413

An Analysis of Gang Scheduling for Multiprogrammed Parallel Computing Environments  

E-print Network

Gang scheduling is a resource management scheme for multiprogrammed parallel computing environments. We present and analyze a queueing theoretic model for a general gang scheduling scheme that forms the basis of a multipro

Papaefthymiou, Marios

414

Optimal Schedules for Parallel Prefix Computation with Bounded Resources  

E-print Network

Given x_1, ..., x_N, parallel prefix computes x_1 ∘ x_2 ∘ ... ∘ x_k, for 1 ≤ k ≤ N, with associative operation ∘. We show optimal schedules for parallel prefix computation with a fixed number of resources p ≥ 2 for a prefix of size N ≥ p(p+1)/2. The time of the optimal schedules with p resources is ⌈2N/(p+1)⌉ for N ≥ p(p+1)/2, which we prove to be the strict lower bound (i.e., what can be achieved maximally). We then present a pipelined form of optimal schedules taking ⌈2N/(p+1)⌉ + ⌈(p-1)/2⌉ - 1 time, a constant overhead of ⌈(p-1)/2⌉ - 1 over the optimal schedules. Parallel prefix is an important common operation in many algorithms, including the evaluation of polynomials, general Horner expressions, carry look-ahead circuits, and ranking and packing problems. A most important application of parallel prefix is loop parallelizing transformation.
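The flavor of the bound can be seen in a simple two-phase blocked scan, which does roughly 2N/p operations per worker: each worker scans its block locally, a serial pass combines block totals, and a second parallel pass applies the offsets. The Python sketch below implements that simple scheme, not the paper's optimal or pipelined schedules:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import accumulate
from operator import add

def parallel_prefix(xs, p=4, op=add):
    """Blocked two-phase prefix scan: local scans in parallel, an exclusive
    serial scan of block totals, then parallel re-offsetting of each block."""
    n = len(xs)
    size = -(-n // p)                                  # ceil(n / p)
    blocks = [xs[i:i + size] for i in range(0, n, size)]
    with ThreadPoolExecutor(max_workers=p) as pool:
        local = list(pool.map(lambda b: list(accumulate(b, op)), blocks))
        offsets, total = [], None                      # exclusive scan of totals
        for b in local:
            offsets.append(total)
            total = b[-1] if total is None else op(total, b[-1])
        def shift(args):
            off, b = args
            return b if off is None else [op(off, v) for v in b]
        return [v for b in pool.map(shift, zip(offsets, local)) for v in b]

print(parallel_prefix(list(range(1, 11)), p=3))  # [1, 3, 6, 10, ..., 55]
```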

Alexandru Nicolau; Haigeng Wang

1991-01-01

415

Asynchronous parallel status comparator  

DOEpatents

Disclosed is an apparatus for matching asynchronously received signals and determining whether two or more out of a total number of possible signals match. The apparatus comprises, in one embodiment, an array of sensors positioned in discrete locations and in communication with one or more processors. The processors will receive signals if the sensors detect a change in the variable sensed from a nominal to a special condition and will transmit location information in the form of a digital data set to two or more receivers. The receivers collect, read, latch and acknowledge the data sets and forward them to decoders that produce an output signal for each data set received. The receivers also periodically reset the system following each scan of the sensor array. A comparator then determines if any two or more, as specified by the user, of the output signals corresponds to the same location. A sufficient number of matches produces a system output signal that activates a system to restore the array to its nominal condition. 4 figs.
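The matching rule itself is easy to express in software. A hypothetical analogue of the comparator stage (the patent describes hardware; the names and location codes here are illustrative) counts how many receivers latched the same location code during one scan and signals when a user-chosen threshold is reached:

```python
from collections import Counter

def match_alarm(location_reports, threshold=2):
    """Given asynchronously collected location data sets (one per decoder
    output), return the locations reported by at least `threshold` of them;
    a non-empty result would drive the system output signal."""
    counts = Counter(location_reports)
    return {loc for loc, c in counts.items() if c >= threshold}

# Location codes latched by the receivers during one scan of the sensor array:
print(match_alarm(["A7", "C2", "A7", "B5"], threshold=2))  # {'A7'}
```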

Arnold, J.W.; Hart, M.M.

1992-12-15

416

Asynchronous parallel status comparator  

DOEpatents

Apparatus for matching asynchronously received signals and determining whether two or more out of a total number of possible signals match. The apparatus comprises, in one embodiment, an array of sensors positioned in discrete locations and in communication with one or more processors. The processors will receive signals if the sensors detect a change in the variable sensed from a nominal to a special condition and will transmit location information in the form of a digital data set to two or more receivers. The receivers collect, read, latch and acknowledge the data sets and forward them to decoders that produce an output signal for each data set received. The receivers also periodically reset the system following each scan of the sensor array. A comparator then determines if any two or more, as specified by the user, of the output signals corresponds to the same location. A sufficient number of matches produces a system output signal that activates a system to restore the array to its nominal condition.

Arnold, Jeffrey W. (828 Hickory Ridge Rd., Aiken, SC 29801); Hart, Mark M. (223 Limerick Dr., Aiken, SC 29803)

1992-01-01

417

Scalable Parallel Random Number Generators Library, The (SPRNG)  

NSDL National Science Digital Library

Computational stochastic approaches (Monte Carlo methods) based on random sampling are becoming extremely important research tools not only in their "traditional" fields such as physics, chemistry or applied mathematics but also in social sciences and, recently, in various branches of industry. One indication of their importance is the fact that Monte Carlo calculations consume about one half of all supercomputer cycles. One of the indispensable and important ingredients for reliable and statistically sound calculations is the source of pseudo random numbers. The goal of our project is to develop, implement and test a scalable package for parallel pseudo random number generation which will be easy to use on a variety of architectures, especially in large-scale parallel Monte Carlo applications.
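SPRNG itself is a C/C++ library, so the snippet below only illustrates the goal it serves, statistically independent streams per parallel task, using NumPy's SeedSequence spawning as a stand-in:

```python
import numpy as np

# One root seed, spawned into per-worker child seeds; each worker then owns
# an independent, reproducible generator with no stream overlap.
root = np.random.SeedSequence(20240101)
streams = [np.random.default_rng(s) for s in root.spawn(4)]

# Each worker draws from its own stream, e.g. a toy Monte Carlo mean per task.
estimates = [rng.random(100_000).mean() for rng in streams]
print(estimates)   # four independent estimates near 0.5
```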

Michael Mascagni, Ashok Srinivasan

418

Performance and Scalability Evaluation of the Ceph Parallel File System  

SciTech Connect

Ceph is an open-source and emerging parallel distributed file and storage system technology. By design, Ceph assumes it is running on unreliable, commodity storage and network hardware, and provides reliability and fault-tolerance through controlled object placement and data replication. We evaluated the Ceph technology for scientific high-performance computing (HPC) environments. This paper presents our evaluation methodology, experiments, results and observations, mostly from parallel I/O performance and scalability perspectives. Our work made two unique contributions. First, our evaluation was performed under a realistic setup for a large-scale capability HPC environment using a commercial high-end storage system. Second, our path of investigation, tuning efforts, and findings made direct contributions to Ceph's development and improved its code quality, scalability, and performance. These changes should benefit both the Ceph and HPC communities at large. Throughout the evaluation, we observed that Ceph is still an evolving technology under fast-paced development, showing great promise.

Wang, Feiyi [ORNL]; Nelson, Mark [Inktank Storage, Inc.]; Oral, H Sarp [ORNL]; Settlemyer, Bradley W [ORNL]; Atchley, Scott [ORNL]; Caldwell, Blake A [ORNL]; Hill, Jason J [ORNL]

2013-01-01

419

JPARSS: A Java Parallel Network Package for Grid Computing  

SciTech Connect

The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because doing so requires tuning the TCP window size to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. The package enables single sign-on, certificate delegation and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments are presented to show that using Java parallel streams is more effective than tuning the TCP window size. In addition a simple architecture using Web services
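The core idea is easy to sketch. The fragment below is a minimal Python analogue, not the JPARSS API (which is Java), and the host and ports are hypothetical: the payload is split into one partition per connection, and the partitions are pushed over several TCP streams concurrently so that no single window-limited stream caps throughput:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def send_parallel(host, ports, data):
    """Split `data` into one partition per endpoint and send the partitions
    over several TCP connections at once (hypothetical endpoints)."""
    n = len(ports)
    size = -(-len(data) // n)                       # ceil(len(data) / n)
    parts = [data[i:i + size] for i in range(0, len(data), size)]

    def push(port, part):
        with socket.create_connection((host, port)) as s:
            s.sendall(part)

    with ThreadPoolExecutor(max_workers=n) as pool:
        list(pool.map(push, ports, parts))          # one stream per partition

# Requires a receiver listening on each port, e.g.:
# send_parallel("data.example.org", [9001, 9002, 9003], b"x" * 30_000_000)
```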

Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

2002-03-01

420

Computing contingency statistics in parallel.  

SciTech Connect

Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and chi-squared independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
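A minimal map-reduce rendering of the approach (a sketch under assumed simplifications, not the paper's open-source implementation): each worker builds a local contingency table for its chunk, the partial tables are reduced by addition, and the joint and marginal probabilities then yield a derived statistic such as mutual information:

```python
from collections import Counter
from functools import reduce
from math import log2
from multiprocessing import Pool

def local_table(chunk):
    """Map step: per-chunk contingency counts over (x, y) category pairs."""
    return Counter(chunk)

def mutual_information(pairs, workers=4):
    """Reduce the partial tables by addition, then sum point-wise mutual
    information weighted by the joint probabilities. Note the reduce step's
    communication grows with table size, as the paper observes."""
    chunks = [pairs[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        table = reduce(lambda a, b: a + b, pool.map(local_table, chunks))
    n = sum(table.values())
    px, py = Counter(), Counter()
    for (x, y), c in table.items():
        px[x] += c
        py[y] += c
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in table.items())

if __name__ == "__main__":
    data = [("a", 1), ("a", 1), ("b", 0), ("b", 1)] * 1000
    print(mutual_information(data))
```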

Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

2010-09-01

421

Parallel plasma fluid turbulence calculations  

SciTech Connect

The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.

Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

1994-12-31

422

Tax Forms  

NSDL National Science Digital Library

As thoughts in the US turn to taxes (April 15 is just around the corner), Mary Jane Ledvina of the Louisiana State University regional government depository library has provided a simple, effective pointers page to downloadable tax forms. Included are federal tax forms and those for 43 states. Of course, available forms vary by state. Most forms are in Adobe Acrobat (.pdf) format. This is a simple, crisply designed page that should save time, although probably not headaches.

Ledvina, Mary J.

1997-01-01

423

Experiments with OR-parallel logic programs  

SciTech Connect

We present here the results of several experiments involving OR-parallelism, based on the implementation of a parallel Warren Abstract Machine at Argonne National Laboratory. The experiments illustrate a variety of effects resulting from various types of programs, and raise issues that must be dealt with in any parallel implementation. We also demonstrate a tool for obtaining a visual representation of the parallelism.

Disz, T.; Lusk, E.; Overbeek, R.

1987-02-01

424

Parallel computational complexity in statistical physics  

Microsoft Academic Search

We examine several models in statistical physics from the perspective of parallel computational complexity theory. In each case, we describe a parallel method of simulation that is faster than current sequential methods. We find that parallel complexity results are in accord with intuitive notions of physical complexity for the models studied. First, we investigate the parallel complexity of sampling Lorentz

Kenneth J. Moriarty

1998-01-01

425

DPF: A Data Parallel Fortran Benchmark Suite  

Microsoft Academic Search

We present the Data Parallel Fortran (DPF) benchmark suite, a set of data parallel Fortran codes for evaluating data parallel compilers appropriate for any target parallel architecture, with shared or distributed memory. The codes are provided in basic, optimized and several library versions. The functionality of the benchmarks covers collective communication functions, scientific software library functions, and application

Y. Charlie Hu; S. Lennart Johnsson; Dimitris Kehagias; Nadia Shalaby

1997-01-01

426

Language Extensions in Support of Compiler Parallelization  

E-print Network

We compare the performance of four versions of each benchmark: 1) sequential Java, 2) sequential X10, 3) hand-parallelized X10, and 4) parallel Java. Averaged over ten JGF Section 2 and 3 benchmarks, the parallel X10 version also speeds up code due to the elimination of runtime checks. For the eight benchmarks for which parallel

Kasahara, Hironori

427

Open architecture for multilingual parallel texts  

E-print Network

Multilingual parallel texts (abbreviated to parallel texts) are linguistic versions of the same content ("translations"); e.g., the Maastricht Treaty in English and Spanish are parallel texts. This document is about creating an open architecture for the whole Authoring, Translation and Publishing Chain (ATP-chain) for the processing of parallel texts.

Benitez, M T Carrasco

2008-01-01

428

Refinement Transformation Using Abstract Parallel Machines  

E-print Network

for circuit specification ('concrete parallel machines'). The ability to define Abstract Parallel Machines

Goodman, Joy

429

Dynamic parallel complexity of computational circuits  

Microsoft Academic Search

The dynamic parallel complexity of general computational circuits (defined in introduction) is discussed. We exhibit some relationships between parallel circuit evaluation and some uniform closure properties of a certain class of unary functions and present a systematic method for the design of processor efficient parallel algorithms for circuit evaluation. Using this method: (1) we improve the algorithm for parallel Boolean

Gary L. Miller; Shang-Hua Teng

1987-01-01

430

Parallel sorting algorithms for optimizing particle simulations  

Microsoft Academic Search

Real world particle simulation codes have to handle a huge number of particles and their interactions. Thus, parallel implementations are required to get suitable production codes. Parallel sorting is often used to organize the set of particles or to redistribute data for locality and load balancing concerns. In this article, the use and design of parallel sorting algorithms for parallel

Michael Hofmann; G. Runger; P. Gibbon; R. Speck

2010-01-01

431

Tutorial: Performance and reliability in redundant disk arrays  

NASA Technical Reports Server (NTRS)

A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity, is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.
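N+1 parity is short enough to demonstrate directly. In this illustrative sketch (real RAID implementations operate on striped sectors, not toy byte strings), the parity block is the bytewise XOR of the N data blocks, so the contents of any single failed disk are recovered as the XOR of all surviving blocks plus parity:

```python
def parity(blocks):
    """Bytewise XOR of equal-length blocks: XOR of all N data blocks gives
    the parity block; XOR of the survivors plus parity rebuilds a lost block."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

data = [b"disk0---", b"disk1---", b"disk2---"]   # N data "disks"
p = parity(data)                                  # the +1 parity disk

# Disk 1 fails: reconstruct its contents from the survivors and the parity.
recovered = parity([data[0], data[2], p])
assert recovered == data[1]
print(recovered)
```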

Gibson, Garth A.

1993-01-01

432

Measuring agreement in medical informatics reliability studies.  

PubMed

Agreement measures are used frequently in reliability studies that involve categorical data. Simple measures like observed agreement and specific agreement can reveal a good deal about the sample. Chance-corrected agreement in the form of the kappa statistic is used frequently based on its correspondence to an intraclass correlation coefficient and the ease of calculating it, but its magnitude depends on the tasks and categories in the experiment. It is helpful to separate the components of disagreement when the goal is to improve the reliability of an instrument or of the raters. Approaches based on modeling the decision making process can be helpful here, including tetrachoric correlation, polychoric correlation, latent trait models, and latent class models. Decision making models can also be used to better understand the behavior of different agreement metrics. For example, if the observed prevalence of responses in one of two available categories is low, then there is insufficient information in the sample to judge raters' ability to discriminate cases, and kappa may underestimate the true agreement and observed agreement may overestimate it. PMID:12474424
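A compact sketch of the kappa computation described above (standard formulas; the confusion tables are invented to illustrate the prevalence effect): observed agreement is the diagonal mass of the table, expected agreement comes from the marginals, and kappa is the chance-corrected ratio:

```python
def kappa(table):
    """Cohen's kappa from a square rater-by-rater confusion table:
    observed agreement corrected for chance agreement from the marginals."""
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n                 # observed
    pe = sum(sum(table[i]) * sum(row[i] for row in table)
             for i in range(k)) / (n * n)                       # expected
    return (po - pe) / (1 - pe)

# Two raters, two categories. The second table has the same 92% observed
# agreement but low prevalence in one category, which depresses kappa.
print(kappa([[45, 5], [5, 45]]))   # balanced: kappa = 0.8
print(kappa([[90, 4], [4, 2]]))    # skewed: kappa ~ 0.29
```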

Hripcsak, George; Heitjan, Daniel F

2002-04-01

433

What makes a family reliable?  

NASA Technical Reports Server (NTRS)

Asteroid families are clusters of asteroids in proper element space which are thought to be fragments from former collisions. Studies of families promise to improve understanding of large collision events, and a large event can open up the interior of a former parent body to view. While a variety of searches for families have found the same heavily populated families, and some searches have found the same families of lower population, there is much apparent disagreement among the lower-population families proposed by different investigations. We discuss indicators of reliability, factors compromising reliability, an illustration of the influence of different data samples, and how several investigations perceived families in the same region of proper element space.

Williams, James G.

1992-01-01

434

Model-Based Reliability Analysis  

SciTech Connect

Modeling, in conjunction with testing, is a rich source of insight. Model parameters are easily controlled and monitoring can be done unobtrusively. The ability to inject faults without otherwise affecting performance is particularly critical. Many iterations can be done quickly with a model while varying parameters and conditions based on a small number of validation tests. The objective of Model-Based Reliability Analysis (MBRA) is to identify ways to capitalize on the insights gained from modeling to make both qualitative and quantitative statements about product reliability. MBRA will be developed and exercised in the realm of weapon system development and maintenance, where the challenges of severe environmental requirements, limited production quantities, and use of one-shot devices can make testing prohibitively expensive. However, the general principles will also be applicable to other product types.

Rene L. Bierbaum; Thomas d. Brown; Thomas J. Kerschen

2001-01-22

435

Gearbox Reliability Collaborative Bearing Calibration  

SciTech Connect

NREL has initiated the Gearbox Reliability Collaborative (GRC) to investigate the root cause of the low wind turbine gearbox reliability. The GRC follows a multi-pronged approach based on a collaborative of manufacturers, owners, researchers and consultants. The project combines analysis, field testing, dynamometer testing, condition monitoring, and the development and population of a gearbox failure database. At the core of the project are two 750kW gearboxes that have been redesigned and rebuilt so that they are representative of the multi-megawatt gearbox topology currently used in the industry. These gearboxes are heavily instrumented and are tested in the field and on the dynamometer. This report discusses the bearing calibrations of the gearboxes.

van Dam, J.

2011-10-01

436

On-orbit spacecraft reliability  

NASA Technical Reports Server (NTRS)

Operational and historic data for 350 spacecraft from 52 U.S. space programs were analyzed for on-orbit reliability. Failure rate estimates are made for on-orbit operation of spacecraft subsystems, components, and piece parts, as well as estimates of failure probability for the same elements during launch; confidence intervals for both parameters are also given. The results indicate that: (1) the success of spacecraft operation is only slightly affected by most reported incidents of anomalous behavior; (2) the occurrence of the majority of anomalous incidents could have been prevented prior to launch; (3) no detrimental effect of spacecraft dormancy is evident; (4) cycled components in general are not demonstrably less reliable than uncycled components; and (5) application of product assurance elements is conducive to spacecraft success.

Bloomquist, C.; Demars, D.; Graham, W.; Henmi, P.

1978-01-01

437

Visualizing Parallel Computer System Performance  

NASA Technical Reports Server (NTRS)

Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

Malony, Allen D.; Reed, Daniel A.

1988-01-01

438

Massively Parallel MRI Detector Arrays  

PubMed Central

Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

Keil, Boris; Wald, Lawrence L

2013-01-01
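
The g-factor analysis mentioned in this abstract can be made concrete with a toy computation. The sketch below evaluates the SENSE g-factor for a made-up coil sensitivity matrix, assuming unit noise covariance; the formula is the standard one from the parallel-imaging literature, not code from this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coil sensitivities: 8 channels, 2 aliased pixel locations
# (acceleration factor R = 2). Real sensitivities come from calibration scans.
S = rng.standard_normal((8, 2)) + 1j * rng.standard_normal((8, 2))

# With identity noise covariance, the SENSE g-factor at aliased pixel i is
#   g_i = sqrt( [ (S^H S)^{-1} ]_ii * [ S^H S ]_ii )
SHS = S.conj().T @ S
SHS_inv = np.linalg.inv(SHS)
g = np.sqrt(np.abs(np.diag(SHS_inv) * np.diag(SHS)))
print("g-factors:", g)   # g >= 1; values near 1 mean little SNR penalty
```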

439

Fast data parallel polygon rendering  

SciTech Connect

This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted for applications which require very fast polygon rendering for extremely large sets of polygons such as are found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

Ortega, F.A.; Hansen, C.D.

1993-09-01
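
As a small stand-in for the "simple shading model" referenced above, the following sketch flat-shades many triangles at once with Lambert's cosine law, vectorized the way a data-parallel renderer would batch its polygons. All names and values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
tris = rng.random((10_000, 3, 3))            # (n_triangles, 3 vertices, xyz)

light = np.array([0.0, 0.0, 1.0])            # directional light along +z

e1 = tris[:, 1] - tris[:, 0]                 # edge vectors of each triangle
e2 = tris[:, 2] - tris[:, 0]
normals = np.cross(e1, e2)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Lambert's cosine law, clamped at zero for back-facing polygons.
intensity = np.clip(normals @ light, 0.0, 1.0)
base_color = np.array([0.2, 0.5, 0.8])
colors = intensity[:, None] * base_color     # (n_triangles, rgb)
```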

440

A Continuous Nonparametric Reliability Estimator  

Microsoft Academic Search

Nonparametric point and interval reliability estimators are obtained which imply a continuous underlying time-to-failure density function. These estimators are simple to use and involve cumulative normal probabilities. Sample calculations are presented to illustrate their use. Using the point estimator, performance comparisons are made with two standard estimators by means of Monte Carlo simulation in the two-parameter Weibull family of distributions.

H. F. Martz JR; M. L. Hailey

1971-01-01
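
The Monte Carlo comparison described above is easy to sketch. The snippet below does the same kind of experiment in a two-parameter Weibull family, using the plain empirical survival function as a stand-in estimator (the paper's smoothed estimator based on cumulative normal probabilities is not reproduced here); all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
shape, scale = 1.8, 100.0       # two-parameter Weibull (illustrative values)
t0, n, reps = 80.0, 30, 2000    # evaluation time, sample size, MC replications

true_R = np.exp(-(t0 / scale) ** shape)   # true reliability at t0

est = np.empty(reps)
for r in range(reps):
    sample = scale * rng.weibull(shape, size=n)
    est[r] = np.mean(sample > t0)         # empirical reliability estimate

bias = est.mean() - true_R
mse = np.mean((est - true_R) ** 2)
print(f"true R(t0)={true_R:.3f}  bias={bias:+.4f}  MSE={mse:.5f}")
```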

441

Reliability Research for Photovoltaic Modules  

NASA Technical Reports Server (NTRS)

Report describes research approach used to improve reliability of photovoltaic modules. Aimed at raising useful module lifetime to 20 to 30 years. Development of cost-effective solutions to module-lifetime problem requires compromises between degradation rates, failure rates, and lifetimes, on one hand, and costs of initial manufacture, maintenance, and lost energy, on other hand. Life-cycle costing integrates disparate economic terms, allowing cost effectiveness to be quantified, allowing comparison of different design alternatives.

Ross, Ronald J., Jr.

1986-01-01

442

High frequency switched capacitor IIR filters using parallel cyclic type circuits  

Microsoft Academic Search

In order to reduce the performance deterioration due to the finite gain bandwidth (GB) product of op-amps in switched capacitor (SC) transversal filters, parallel cyclic type circuits have been proposed. The authors consider how to implement direct form I SC IIR (infinite impulse response) filters using the parallel cyclic type circuit. The effects of finite GB products of op-amps and

Yoshinori HIRATA; Kyoko KATO; Nobuaki TAKAHASHI; Tsuyoshi TAKEBE

1992-01-01

443

Parallel Hybrid Clustering using Genetic Programming and Multi-Objective Fitness with Density (PYRAMID)  

E-print Network

Samir Tout, William Sverdlik, Junping Sun (Graduate School of Computer and Information …). Fragment: parallel hybrid clustering using genetic programming and multi-objective fitness with density (PYRAMID) … on user-supplied parameters, PYRAMID employs a combination of data parallelism, a form of genetic …

Fernandez, Thomas

444

Reliability Models and Attributable Risk  

NASA Technical Reports Server (NTRS)

The intention of this report is to bring a developing and extremely useful statistical methodology to greater attention within the Safety, Reliability, and Quality Assurance Office of the NASA Johnson Space Center. The statistical methods in this exposition are found under the heading of attributable risk. Recently the Safety, Reliability, and Quality Assurance Office at the Johnson Space Center has supported efforts to bring methods of medical research statistics dealing with the survivability of people to bear on the problems of aerospace that deal with the reliability of component hardware used in the NASA space program. This report, which describes several study designs for which attributable risk is used, is in concert with the latter goals. The report identifies areas of active research in attributable risk while briefly describing much of what has been developed in the theory of attributable risk. The report, which is largely a report on an earlier report, attempts to recast the medical setting and language commonly found in descriptions of attributable risk into the setting and language of the space program and its component hardware.

Jarvinen, Richard D.

1999-01-01
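
The recasting the report describes is straightforward to illustrate. The sketch below computes attributable risk from a 2x2 table, with the epidemiological roles translated into hardware terms; all counts are invented.

```python
# Hypothetical 2x2 layout, recast from epidemiology into hardware terms:
# "exposed" = components from a suspect production lot, "cases" = failures.
a, b = 12, 188     # exposed:   failed, survived
c, d = 5, 395      # unexposed: failed, survived

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)

# Attributable risk (risk difference) and the attributable fraction
# among the exposed: the share of exposed failures due to the exposure.
ar = risk_exposed - risk_unexposed
af_exposed = ar / risk_exposed

print(f"AR = {ar:.4f}, attributable fraction (exposed) = {af_exposed:.2%}")
```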

445

Reliability in individual monitoring service.  

PubMed

As a laboratory certified to ISO 9001:2008 and accredited to ISO/IEC 17025, the Secondary Standard Dosimetry Laboratory (SSDL)-Nuclear Malaysia has incorporated an overall comprehensive system for technical and quality management in promoting a reliable individual monitoring service (IMS). Faster identification and resolution of issues regarding dosemeter preparation and issuing of reports, personnel enhancement, improved customer satisfaction and overall efficiency of laboratory activities are all results of the implementation of an effective quality system. Review of these measures and responses to observed trends provide continuous improvement of the system. By having these mechanisms, reliability of the IMS can be assured in the promotion of safe behaviour at all levels of the workforce utilising ionising radiation facilities. Upgrading the reporting program to a web-based system (e-SSDL) marks a major improvement in the overall reliability of Nuclear Malaysia's IMS. The system is a vital step in providing a user friendly and effective occupational exposure evaluation program in the country. It provides a higher level of confidence in the results generated for occupational dose monitoring of the IMS, thus enhancing the status of the radiation protection framework of the country. PMID:21147789

Mod Ali, N

2011-03-01

446

Reliability Generalization: Exploring Variance in Measurement Error Affecting Score Reliability across Studies.  

ERIC Educational Resources Information Center

Proposes a new method, reliability generalization, for meta-analysis. Reliability generalization characterizes the typical reliability of scores for a test across studies, the amount of variability in reliability coefficients, and the sources of this variability. Analysis of 87 reliability coefficients for two scales of the Bem Sex Role Inventory…

Vacha-Haase, Tammi

1998-01-01

447

Hybrid parallel programming with MPI and Unified Parallel C.  

SciTech Connect

The Message Passing Interface (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node. Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) are growing in popularity because of their ability to provide a shared global address space that spans the memories of multiple compute nodes. However, taking advantage of UPC can require a large recoding effort for existing parallel applications. In this paper, we explore a new hybrid parallel programming model that combines MPI and UPC. This model allows MPI programmers incremental access to a greater amount of memory, enabling memory-constrained MPI codes to process larger data sets. In addition, the hybrid model offers UPC programmers an opportunity to create static UPC groups that are connected over MPI. As we demonstrate, the use of such groups can significantly improve the scalability of locality-constrained UPC codes. This paper presents a detailed description of the hybrid model and demonstrates its effectiveness in two applications: a random access benchmark and the Barnes-Hut cosmological simulation. Experimental results indicate that the hybrid model can greatly enhance performance; using hybrid UPC groups that span two cluster nodes, RA performance increases by a factor of 1.33 and using groups that span four cluster nodes, Barnes-Hut experiences a twofold speedup at the expense of a 2% increase in code size.

Dinan, J.; Balaji, P.; Lusk, E.; Sadayappan, P.; Thakur, R.; Mathematics and Computer Science; The Ohio State Univ.

2010-01-01
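
The UPC side of this hybrid model cannot be expressed in Python, but the MPI-side idea of carving the world into static groups can be sketched with mpi4py. The group size and reductions below are illustrative assumptions, not the paper's code; in the paper, each group would additionally share a UPC global address space.

```python
# Run with e.g.: mpiexec -n 8 python groups.py
from mpi4py import MPI

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()

GROUP_SIZE = 4                      # ranks per group (assumed, not from paper)
color = rank // GROUP_SIZE          # which group this rank belongs to
group_comm = world.Split(color=color, key=rank)

# Inside a group, members share work on a common data set.
local = rank + 1.0
group_sum = group_comm.allreduce(local, op=MPI.SUM)

# Across groups, leaders (group-local rank 0) exchange results over MPI.
leaders = world.Split(color=0 if group_comm.Get_rank() == 0 else MPI.UNDEFINED,
                      key=rank)
if leaders != MPI.COMM_NULL:
    total = leaders.allreduce(group_sum, op=MPI.SUM)
    print(f"group {color}: group_sum={group_sum}, total over leaders={total}")
```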

448

Parallel algorithms for mapping pipelined and parallel computations  

NASA Technical Reports Server (NTRS)

Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

Nicol, David M.

1988-01-01
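
For orientation, the textbook dynamic program for the chain-mapping problem discussed above is shown below: contiguously partition m modules onto n processors to minimize the bottleneck load. This O(n m^2) baseline is a simple member of the class the paper improves upon (the paper reaches O(nm log m) for its setting); weights are invented.

```python
def map_chain(weights, n_procs):
    """Contiguously partition a chain of module weights onto n_procs
    processors, minimizing the maximum per-processor load (bottleneck).
    Textbook O(n * m^2) dynamic program, shown as a baseline only."""
    m = len(weights)
    prefix = [0] * (m + 1)
    for i, w in enumerate(weights):
        prefix[i + 1] = prefix[i] + w
    INF = float("inf")
    # dp[p][i] = best bottleneck mapping the first i modules onto p processors
    dp = [[INF] * (m + 1) for _ in range(n_procs + 1)]
    dp[0][0] = 0.0
    for p in range(1, n_procs + 1):
        for i in range(m + 1):
            for j in range(i + 1):   # modules j..i-1 go on processor p
                load = prefix[i] - prefix[j]
                dp[p][i] = min(dp[p][i], max(dp[p - 1][j], load))
    return dp[n_procs][m]

print(map_chain([4, 2, 7, 1, 3, 5], 3))   # -> 8.0, e.g. [4,2] [7,1] [3,5]
```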

449

Constructions: Parallel Through A Point  

NSDL National Science Digital Library

After a review of construction basics, you will learn the technique of constructing a line parallel to a given line through a point not on the line. Reviewing the general rules of constructions, and how to copy a line segment and an angle, is helpful first. Then, using paper, pencil, straight edge, and compass, you will learn how to construct a parallel through a point. A video demonstration is available to help you. (Windows Media ...

Neubert, Mrs.

2010-12-31

450

Reliability-Aware Power Management for Parallel Real-Time Applications with Precedence Constraints  

E-print Network

Y. Guo and D. Zhu (University of Texas at San Antonio, {yguo,dzhu}@cs.utsa.edu) and Hakan Aydin (Department of Computer Science, George Mason University, Fairfax, VA 22030, aydin@cs.gmu.edu). Fragment: the negative effects of the Dynamic Voltage and Frequency Scaling … management remains one of the grand challenges for the research and engineering community, both …

Aydin, Hakan

451

Statistical Approaches to Achieving Sufficiently High Test Score Reliabilities for Research Purposes  

Microsoft Academic Search

The author provides statistical approaches to aid investigators in assuring that sufficiently high test score reliabilities are achieved for specific research purposes. The statistical approaches use tests of statistical significance between the obtained reliability and lowest population reliability that an investigator will tolerate. The statistical approaches work for coefficient alpha and related coefficients and for alternate-forms, split-half (2-part alpha), and

Richard A. Charter

2008-01-01

452

Parallel 3-D Electromagnetic Particle code using High Performance Fortran: Parallel TRISTAN  

E-print Network

TRISTAN, a 3-D electromagnetic particle code, has been parallelized using High Performance Fortran (HPF) as a RPM (Real Parallel Machine), targeting platforms such as the Hitachi SR-8000. In our parallel program, the simulation domain

Nishikawa, Ken-Ichi

453

FAROW: A tool for fatigue and reliability of wind turbines  

NASA Astrophysics Data System (ADS)

FAROW is a computer program that evaluates the fatigue and reliability of wind turbine components using structural reliability methods. A deterministic fatigue life formulation is based on functional forms of three basic parts of wind turbine fatigue calculation: (1) the loading environment, (2) the gross level of structural response given the load environment, and (3) the local failure criterion given both load environment and gross stress response. The calculated lifetime is compared with a user-specified target lifetime to assess probabilities of premature failure. The parameters of the functional forms can be defined as either constants or random variables. The reliability analysis uses the deterministic lifetime calculation as the limit state function of a FORM/SORM (first and second order reliability methods) calculation based on techniques developed by Rackwitz. Besides probability of premature failure, FAROW calculates the mean lifetime, the relative importance of each of the random variables, and the sensitivity of the results to all of the input parameters, both constant inputs and the parameters that define the random variable inputs. The ability to check the probability of failure with Monte Carlo simulation is included as an option.

Veers, P. S.; Lange, C. H.; Winterstein, S. R.

454

FAROW: A tool for fatigue and reliability of wind turbines  

SciTech Connect

FAROW is a computer program that evaluates the fatigue and reliability of wind turbine components using structural reliability methods. A deterministic fatigue life formulation is based on functional forms of three basic parts of wind turbine fatigue calculation: (1) the loading environment, (2) the gross level of structural response given the load environment, and (3) the local failure criterion given both load environment and gross stress response. The calculated lifetime is compared with a user-specified target lifetime to assess probabilities of premature failure. The parameters of the functional forms can be defined as either constants or random variables. The reliability analysis uses the deterministic lifetime calculation as the limit state function of a FORM/SORM (first and second order reliability methods) calculation based on techniques developed by Rackwitz. Besides probability of premature failure, FAROW calculates the mean lifetime, the relative importance of each of the random variables, and the sensitivity of the results to all of the input parameters, both constant inputs and the parameters that define the random variable inputs. The ability to check the probability of failure with Monte Carlo simulation is included as an option.

Veers, P.S. [Sandia National Labs., Albuquerque, NM (US); Lange, C.H.; Winterstein, S.R. [Stanford Univ., CA (US). Civil Engineering Dept.

1993-07-01
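
FAROW's FORM/SORM machinery is beyond a short sketch, but the Monte Carlo check the abstract mentions as an option is easy to mimic. The functional forms and distributions below are invented stand-ins, not FAROW's models.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000
target_life = 20.0          # years (illustrative design target)

# Invented stand-ins for the three parts of the fatigue calculation:
# loading environment, gross stress response, and local failure criterion.
wind_load  = rng.weibull(2.0, N) * 8.0                          # load amplitude
stress     = 1.5 * wind_load                                    # gross response
sn_coeff   = rng.lognormal(mean=np.log(4e4), sigma=0.3, size=N) # S-N strength
m_exponent = 3.0                                                # S-N slope

# Deterministic lifetime model used as the limit state: life = C / S^m
life = sn_coeff / np.maximum(stress, 1e-9) ** m_exponent

pf = np.mean(life < target_life)     # probability of premature failure
print(f"mean life = {life.mean():.1f} yr, P(life < target) = {pf:.4f}")
```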

455

Mapping Pixel Windows To Vectors For Parallel Processing  

NASA Technical Reports Server (NTRS)

Mapping performed by matrices of transistor switches. Arrays of transistor switches devised for use in forming simultaneous connections from square subarray (window) of n x n pixels within electronic imaging device containing np x np array of pixels to linear array of n^2 input terminals of electronic neural network or other parallel-processing circuit. Method helps to realize potential for rapidity in parallel processing for such applications as enhancement of images and recognition of patterns. In providing simultaneous connections, overcomes timing bottleneck of older multiplexing, serial-switching, and sample-and-hold methods.

Duong, Tuan A.

1996-01-01

456

Radiation transport on unstructured mesh with parallel computers  

SciTech Connect

This paper summarizes the developmental work on a deterministic transport code that provides multidimensional radiation transport capabilities on an unstructured mesh. The second-order form of the Boltzmann transport equation is solved utilizing the discrete ordinates angular differencing and the Galerkin finite element spatial differencing. The discretized system, which couples the spatial-angular dependence, is solved simultaneously using a parallel conjugate-gradient (CG) iterative solver. This approach eliminates the need for the conventional inner iterations over the discrete directions and is well-suited for massively parallel computers.

Fan, W.C.; Drumm, C.R.

2000-07-01

457

Fatigue reliability of wind turbine components  

NASA Astrophysics Data System (ADS)

Fatigue life estimates for wind turbine components can be extremely variable due to both inherently random and uncertain parameters. A structural reliability analysis is used to quantify the probability that the fatigue life will fall short of a selected target. Reliability analysis also produces measures of the relative importance of the various sources of uncertainty and the sensitivity of the reliability to each input parameter. The process of obtaining reliability estimates is briefly outlined. An example fatigue reliability calculation for a blade joint is formulated; reliability estimates, importance factors, and sensitivities are produced. Guidance in selecting distribution functions for the random variables used to model the random and uncertain parameters is also provided.

Veers, P. S.

458

Parallel language constructs for tensor product computations on loosely coupled architectures  

NASA Technical Reports Server (NTRS)

Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low level programming environment, and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is addressed first; it is then examined how such parallel kernels can be combined to form parallel tensor product algorithms.

Mehrotra, Piyush; Vanrosendale, John

1989-01-01
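
The tensor-product combination of 1-D kernels described above can be sketched in a few lines: apply a tridiagonal solve along each axis of a 2-D grid in turn (ADI-style). This is a serial illustration of the structure, not the paper's distributed-memory primitives; the system coefficients are invented.

```python
import numpy as np
from scipy.linalg import solve_banded

def tridiag_solve_along(axis, rhs, lower, diag, upper):
    """Apply a 1-D tridiagonal solve along one axis of a 2-D array.
    solve_banded handles many right-hand sides at once, which is exactly
    the tensor-product structure: one 1-D kernel swept across the grid."""
    n = rhs.shape[axis]
    ab = np.zeros((3, n))
    ab[0, 1:] = upper            # superdiagonal
    ab[1, :] = diag              # main diagonal
    ab[2, :-1] = lower           # subdiagonal
    b = rhs if axis == 0 else rhs.T
    x = solve_banded((1, 1), ab, b)
    return x if axis == 0 else x.T

# Example: solve the system (I + tridiag(-1, 2, -1)) along each axis.
n = 64
u = np.random.default_rng(3).standard_normal((n, n))
lo = up = -1.0 * np.ones(n - 1)
dg = 3.0 * np.ones(n)            # 1 (identity) + 2 (stencil diagonal)

v = tridiag_solve_along(0, u, lo, dg, up)   # sweep along columns
w = tridiag_solve_along(1, v, lo, dg, up)   # then along rows
```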

459

Interobserver and intraobserver reliability of therapist-assisted videotaped evaluations of upper-limb hemiplegia

Microsoft Academic Search

Purpose: Therapist-assisted videotaped sessions have been used to augment physical examinations in the evaluation of hand and arm function in patients with spastic hemiplegia. The purpose of this study was to assess the interobserver and intraobserver reliability of standardized videotaped examinations in the evaluation and functional classification of these patients.

Peter M Waters; David Zurakowski; Paul Patterson; Donald S Bae; Donna Nimec

2004-01-01

460

76 FR 23171 - Electric Reliability Organization Interpretations of Interconnection Reliability Operations and...  

Federal Register 2010, 2011, 2012, 2013

...No. 750] Electric Reliability Organization Interpretations of Interconnection Reliability Operations and Coordination...IRO-005-1 and TOP-005-1 Reliability Standards, which were...this document via the Internet through the...

2011-04-26

461

Parallel Processing in Amplitude Analysis  

E-print Network

Lecture slides (M. R. Shepherd, Parallel Processing Lecture 2, March 31, 2011) covering electromagnetic, weak, and strong interactions; Quantum Chromodynamics (QCD); the flux-tube model and energy density from lattice QCD; and hybrid mesons (conventional mesons have the flux tube in its ground state, hybrid mesons an excited flux tube).

Evans, Hal

462

Parallel Programming Examples using MPI  

NSDL National Science Digital Library

Despite the rate at which computers have advanced in recent history, human imagination has advanced faster. Often greater computing power can be achieved by having multiple computers work together on a single problem. This tutorial discusses how Message Passing Interface (MPI) can be used to implement parallel programming solutions in a variety of cases.

Joiner, David; The Shodor Education Foundation, Inc.
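
In the spirit of this tutorial, here is a minimal message-passing example using mpi4py (the Python MPI bindings; the tutorial itself may use a different language). Each rank integrates a slice of 4/(1+x^2) over [0,1] and the partial sums are reduced to estimate pi.

```python
# Run with: mpiexec -n 4 python pi_mpi.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Midpoint rule over [0,1], with the sample points strided across ranks.
n = 1_000_000
x = (np.arange(rank, n, size) + 0.5) / n
partial = np.sum(4.0 / (1.0 + x * x)) / n

pi = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi ~= {pi:.10f}")
```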

463

Parallel Supercomputing with Commodity Components  

Microsoft Academic Search

We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed floating point operations (1.2 Petaflops) over

Michael S. Warren; Donald J. Becker; M. Patrick Goda; John K. Salmon; Thomas L. Sterling

1997-01-01

464

Parallel circuit simulation on supercomputers  

Microsoft Academic Search

Circuit simulation is a very time-consuming and numerically intensive application, especially when the problem size is large as in the case of VLSI circuits. To improve the performance of circuit simulators without sacrificing accuracy, a variety of parallel processing algorithms have been investigated due to the recent availability of a number of commercial multiprocessor machines. In this paper, research in

R. A. Saleh; K. A. Gallivan; M.-C. Chang; I. N. Hajj; T. N. Trick; D. Smart

1989-01-01

465

Tutorial: Parallel Simulation on Supercomputers  

SciTech Connect

This tutorial introduces typical hardware and software characteristics of extant and emerging supercomputing platforms, and presents issues and solutions in executing large-scale parallel discrete event simulation scenarios on such high performance computing systems. Covered topics include synchronization, model organization, example applications, and observed performance from illustrative large-scale runs.

Perumalla, Kalyan S [ORNL

2012-01-01

466

Ejs Parallel Plate Capacitor Model  

NSDL National Science Digital Library

The Ejs Parallel Plate Capacitor model displays a parallel-plate capacitor which consists of two identical metal plates, placed parallel to one another. The capacitor can be charged by connecting one plate to the positive terminal of a battery and the other plate to the negative terminal. The dielectric constant and the separation of the plates can be changed via sliders. You can modify this simulation if you have Ejs installed by right-clicking within the plot and selecting "Open Ejs Model" from the pop-up menu item. Ejs Parallel Plate Capacitor model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_bu_capacitor.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models for Newtonian mechanics are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.

Duffy, Andrew

2008-07-14
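
The quantities this simulation exposes through its sliders follow from the ideal parallel-plate formula, which is easy to sanity-check by hand. A quick computation, assuming the ideal model (fringing fields neglected) and made-up slider values:

```python
# Ideal parallel-plate model: C = eps0 * eps_r * A / d
EPS0 = 8.854e-12           # vacuum permittivity, F/m

def capacitance(area_m2, gap_m, eps_r=1.0):
    return EPS0 * eps_r * area_m2 / gap_m

C = capacitance(area_m2=0.01, gap_m=1e-3)       # 10 cm x 10 cm plates, 1 mm gap
V = 9.0                                          # battery voltage
Q = C * V                                        # stored charge
U = 0.5 * C * V**2                               # stored energy
print(f"C = {C*1e12:.1f} pF, Q = {Q*1e9:.2f} nC, U = {U*1e9:.1f} nJ")
```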

467

The Everett axiom of parallelism  

E-print Network

In this work we consider the meaningfulness of the concept of parallel worlds. To that end we propose the model of the infinite-dimensional multievent space, generating the Everettian alterverse at each point of Minkowski space-time. Our research reveals the fractal character of such an alterverse.

Lebedev, Yury A; Dulphan, Anna Ya

2013-01-01

468

Parallel distributed computing using Python  

NASA Astrophysics Data System (ADS)

This work presents two software components aimed to relieve the costs of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state of the art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated to PETSc-FEM, an MPI and PETSc based parallel, multiphysics, finite elements code developed at CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.

Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

2011-09-01
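
A minimal petsc4py sketch in the spirit of this record: assemble a distributed tridiagonal system and solve it with conjugate gradients. This is not code from the paper or from PETSc-FEM; the problem and options are illustrative. It runs serially or under mpiexec.

```python
from petsc4py import PETSc

n = 100
A = PETSc.Mat().createAIJ([n, n])
A.setUp()

rstart, rend = A.getOwnershipRange()      # rows owned by this process
for i in range(rstart, rend):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
A.assemble()

b = A.createVecLeft()
b.set(1.0)
x = A.createVecRight()

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType(PETSc.KSP.Type.CG)
ksp.getPC().setType(PETSc.PC.Type.JACOBI)
ksp.solve(b, x)
print(f"iterations: {ksp.getIterationNumber()}, "
      f"residual: {ksp.getResidualNorm():.2e}")
```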

469

Real-time\\/parallel computing  

Microsoft Academic Search

This book discusses the real-time, parallel computing of digitized images including both the symbolic and semantic data derived from such images. The processing, storing, and transmitting of images and image data are examined. Techniques and algorithms for the analysis and manipulation of images are explored both theoretically and in terms of implementation in hardware and software. The main subject areas

M. Onoe; K. Preston; A. Rosenfield

1983-01-01

470

Optimal Circuits for Parallel Multipliers  

Microsoft Academic Search

We present new design and analysis techniques for the synthesis of parallel multiplier circuits that have smaller predicted delay than the best current multipliers. In [4], Oklobdzija et al. suggested a new approach, the Three-Dimensional Method (TDM), for Partial Product Reduction Tree (PPRT) design that produces multipliers that outperform the current best designs. The goal of TDM is to produce

Paul F. Stelling; Charles U. Martel; Vojin G. Oklobdzija; R. Ravi

1998-01-01

471

Coarray Fortran for parallel programming  

Microsoft Academic Search

Co-Array Fortran, formerly known as F--, is a small extension of Fortran 95 for parallel processing. A Co-Array Fortran program is interpreted as if it were replicated a number of times and all copies were executed asynchronously. Each copy has its own set of data objects and is termed an image. The array syntax of Fortran 95 is extended with

Robert W. Numrich; John Reid

1998-01-01

472

Parallel, Distributed Scripting with Python  

SciTech Connect

Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadm tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and co-ordinate the work.

Miller, P J

2002-05-24
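
The password-checker example above parallelizes naturally by splitting the dictionary. The sketch below uses the standard multiprocessing module rather than the report's MPI extensions, and substitutes SHA-256 for crypt-style hashing so it runs anywhere; the word list and hash scheme are assumptions, not the report's.

```python
import hashlib
from multiprocessing import Pool

TARGET = hashlib.sha256(b"sunshine").hexdigest()   # the "unknown" hash

def check_chunk(words):
    """Return the first word in this chunk whose hash matches, else None."""
    for w in words:
        if hashlib.sha256(w.encode()).hexdigest() == TARGET:
            return w
    return None

if __name__ == "__main__":
    dictionary = ["password", "letmein", "dragon", "sunshine", "qwerty"] * 5000
    nproc = 4
    chunks = [dictionary[i::nproc] for i in range(nproc)]   # stride-split
    with Pool(nproc) as pool:
        for hit in pool.map(check_chunk, chunks):
            if hit:
                print("match:", hit)
```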

473

A portable parallel particle program  

Microsoft Academic Search

We describe our implementation of the parallel hashed oct-tree (HOT) code, and in particular its application to neighbor finding in a smoothed particle hydrodynamics (SPH) code. We also review the error bounds on the multipole approximations involved in treecodes, and extend them to include general cell-cell interactions. Performance of the program on a variety of problems (including gravity, SPH, vortex

Michael S. Warren; John K. Salmon

1995-01-01

474

Learning Style Scales: a valid and reliable questionnaire  

PubMed Central

Purpose: Learning-style instruments assist students in developing their own learning strategies and outcomes, in eliminating learning barriers, and in acknowledging peer diversity. Only a few psychometrically validated learning-style instruments are available. This study aimed to develop a valid and reliable learning-style instrument for nursing students. Methods: A cross-sectional survey study was conducted in two nursing schools in two countries. A purposive sample of 156 undergraduate nursing students participated in the study. Face and content validity was obtained from an expert panel. The LSS construct was established using principal axis factoring (PAF) with oblimin rotation, a scree plot test, and parallel analysis (PA). The reliability of LSS was tested using Cronbach's α, corrected item-total correlation, and test-retest. Results: Factor analysis revealed five components, confirmed by PA and a relatively clear curve on the scree plot. Component strength and interpretability were also confirmed. The factors were labeled as perceptive, solitary, analytic, competitive, and imaginative learning styles. Cronbach's α was >0.70 for all subscales in both study populations. The corrected item-total correlations were >0.30 for the items in each component. Conclusion: The LSS is a valid and reliable inventory for evaluating learning style preferences in nursing students in various multicultural environments. PMID:25134513

2014-01-01
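
The Cronbach's α reported for each subscale follows a simple formula, sketched below on made-up Likert-style item responses (the function is standard; the data and sample shape are invented, merely echoing the study's 156 respondents).

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Made-up 1-5 Likert responses for a 5-item subscale, 156 respondents:
# a shared latent trait plus noise, so items are positively correlated.
rng = np.random.default_rng(42)
latent = rng.normal(3.0, 1.0, size=(156, 1))
items = np.clip(np.rint(latent + rng.normal(0, 0.7, size=(156, 5))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")   # correlated items -> alpha > 0.70
```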

475

Complete classification of parallel Lorentz surfaces in four-dimensional neutral pseudosphere  

SciTech Connect

A Lorentz surface of an indefinite space form is called parallel if its second fundamental form is parallel with respect to the Van der Waerden-Bortolotti connection. Such surfaces are locally invariant under the reflection with respect to the normal space at each point. Parallel surfaces are important in geometry as well as in general relativity since extrinsic invariants of such surfaces do not change from point to point. Parallel Lorentz surfaces in four-dimensional (4D) Lorentzian space forms are classified by Chen and Van der Veken [''Complete classification of parallel surfaces in 4-dimensional Lorentz space forms,'' Tohoku Math. J. 61, 1 (2009)]. Explicit classifications of parallel Lorentz surfaces in the pseudo-Euclidean 4-space E_2^4 and in the pseudohyperbolic 4-space H_2^4(-1) were obtained recently by Chen et al. [''Complete classification of parallel Lorentzian surfaces in Lorentzian complex space forms,'' Int. J. Math. 21, 665 (2010); ''Complete classification of parallel Lorentz surfaces in neutral pseudo hyperbolic 4-space,'' Cent. Eur. J. Math. 8, 706 (2010)], respectively. In this article, we completely classify the remaining case; namely, parallel Lorentz surfaces in the 4D neutral pseudosphere S_2^4(1). Our result states that there are 24 families of such surfaces in S_2^4(1). Conversely, every parallel Lorentz surface in S_2^4(1) is obtained from one of the 24 families. The main result indicates that there are major differences between Lorentz surfaces in the de Sitter 4-space dS_4 and in the neutral pseudo 4-sphere S_2^4.

Chen, Bang-Yen [Department of Mathematics, Michigan State University, East Lansing, Michigan 48824-1027 (United States)

2010-08-15

476

Parallel Worldline Numerics: Implementation and Error Analysis  

E-print Network

We give an overview of the worldline numerics technique, and discuss the parallel CUDA implementation of a worldline numerics algorithm. In the worldline numerics technique, we wish to generate an ensemble of representative closed-loop particle trajectories, and use these to compute an approximate average value for Wilson loops. We show how this can be done with a specific emphasis on cylindrically symmetric magnetic fields. The fine-grained, massive parallelism provided by the GPU architecture results in considerable speedup in computing Wilson loop averages. Furthermore, we give a brief overview of uncertainty analysis in the worldline numerics method. There are uncertainties from discretizing each loop, and from using a statistical ensemble of representative loops. The former can be minimized so that the latter dominates. However, determining the statistical uncertainties is complicated by two subtleties. Firstly, the distributions generated by the worldline ensembles are highly non-Gaussian, and so the standard error in the mean is not a good measure of the statistical uncertainty. Secondly, because the same ensemble of worldlines is used to compute the Wilson loops at different values of $T$ and $x_\mathrm{cm}$, the uncertainties associated with each computed value of the integrand are strongly correlated. We recommend a form of jackknife analysis which deals with both of these problems.

Dan Mazur; Jeremy S. Heyl

2014-07-28
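
The delete-one jackknife the authors recommend is easy to sketch for a nonlinear function of an ensemble average, which is exactly the non-Gaussian, correlated situation described above. The toy "Wilson loop" distribution below is invented; only the jackknife mechanics are the point.

```python
import numpy as np

def jackknife(samples, statistic):
    """Delete-one jackknife estimate and error for a statistic that is a
    (possibly nonlinear) function of the sample ensemble."""
    n = len(samples)
    full = statistic(samples)
    leave_one_out = np.array(
        [statistic(np.delete(samples, i, axis=0)) for i in range(n)]
    )
    bias_corrected = n * full - (n - 1) * leave_one_out.mean()
    err = np.sqrt((n - 1) * np.mean((leave_one_out - leave_one_out.mean()) ** 2))
    return bias_corrected, err

# Toy stand-in for per-loop Wilson-loop values: heavy-tailed, non-Gaussian.
rng = np.random.default_rng(5)
wilson = np.exp(-np.abs(rng.standard_cauchy(500)))   # invented distribution

# Nonlinear statistic of the ensemble average, e.g. -log of the mean.
est, err = jackknife(wilson, lambda w: -np.log(np.mean(w)))
print(f"-log<W> = {est:.4f} +/- {err:.4f}")
```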

477

Optimization of Reliability Allocation and Testing Schedule for Software Systems  

E-print Network

The system testing activity, subject to testing schedule and resource constraints, can be formulated as a combinatorial optimization problem with known cost, reliability, effort, and other attributes of the system components; reliability is related to the testing cost through various types of reliability growth curves. We achieve closed-form solutions

Newcastle upon Tyne, University of

478

The Effect of a Looker's Past Reliability on Infants' Reasoning about Beliefs  

ERIC Educational Resources Information Center

We investigated whether 16-month-old infants' past experience with a person's gaze reliability influences their expectation about the person's ability to form beliefs. Infants were first administered a search task in which they observed an experimenter show excitement while looking inside a box that either contained a toy (reliable looker…

Poulin-Dubois, Diane; Chow, Virginia

2009-01-01

479

Lifting SU(3)-structures to nearly parallel G2-structures  

NASA Astrophysics Data System (ADS)

Hitchin shows in [N. Hitchin, The geometry of three-forms in six and seven dimensions, J. Differ. Geom. 55 (2000) 547-576. math.DG/0010054v1; N. Hitchin, Stable forms and special metrics, Global Differential Geometry: The Mathematical Legacy of Alfred Gray, in: Contemp. Math., volume 288, American Math. Soc., 2001, pp. 70-89. math.DG/0107101v1] that half-flat SU(3)-structures on a six-dimensional manifold M can be lifted to a parallel G2-structure on the product M×R. We show that Hitchin's approach can also be used to construct nearly parallel G2-structures by lifting so-called nearly half-flat structures. These SU(3)-structures are described by pairs (ω, ψ) of stable 2- and 3-forms with dψ = λω^2, for some λ ∈ ℝ\{0}.

Stock, Sebastian

2009-01-01

480

Mirror versus parallel bimanual reaching  

PubMed Central

Background: In spite of their importance to everyday function, tasks that require both hands to work together, such as lifting and carrying large objects, have not been well studied, and the full potential of how new technology might facilitate recovery remains unknown. Methods: To help identify the best modes for self-teleoperated bimanual training, we used an advanced haptic/graphic environment to compare several modes of practice. In a 2-by-2 study, we compared mirror vs. parallel reaching movements, and also compared a veridical display to one that transforms the right hand's cursor to the opposite side, reducing the area that the visual system has to monitor. Twenty healthy, right-handed subjects (5 in each group) practiced 200 movements. We hypothesized that parallel reaching movements would be the best performing, and that attending to one visual area would reduce the task difficulty. Results: The two-way comparison revealed that mirror movement times took an average 1.24 s longer to complete than parallel. Surprisingly, subjects' movement times moving to one target (attending to one visual area) also took an average of 1.66 s longer than subjects moving to two targets. For both hands, there was also a significant interaction effect, revealing the lowest errors for parallel movements with a veridical display (moving to two separate targets). These results point to the expected levels of challenge for these bimanual training modes, which could be used to advise therapy choices in self-neurorehabilitation. PMID:23837908

2013-01-01

481

Scalable parallel dynamic fracture simulation using an extrinsic cohesive zone model  

E-print Network

Article history: received 23 July 2012; received in revised form 12 July 2013; accepted 15 July 2013. Fragment: … remains nearly constant when the number of processors increases at the same rate as the number … for the parallelization of …

Paulino, Glaucio H.

482

Ultimate DWDM format in fiber-true bit-parallel solitons on WDM beams  

NASA Technical Reports Server (NTRS)

Whether true solitons can exist on WDM beams (and in what form) is a question that is generally unknown. This paper will discuss an answer to this question and a demonstration of the bit-parallel WDM transmission.

Yeh, C.; Bergman, L. A.

2000-01-01

483

High reliability low jitter 80 kV pulse generator.  

SciTech Connect

Switching can be considered to be the essence of pulsed power. Time accurate switch/trigger systems with low inductance are useful in many applications. This article describes a unique switch geometry coupled with a low-inductance capacitive energy store. The system provides a fast-rising high voltage pulse into a low impedance load. It can be challenging to generate high voltage (more than 50 kilovolts) into impedances less than 10 Ω, from a low voltage control signal with a fast rise time and high temporal accuracy. The required power amplification is large, and is usually accomplished with multiple stages. The multiple stages can adversely affect the temporal accuracy and the reliability of the system. In the present application, a highly reliable and low jitter trigger generator was required for the Z pulsed-power facility [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, and J. R. Woodworth, 2007 IEEE Pulsed Power Conference, Albuquerque, NM (IEEE, Piscataway, NJ, 2007), p. 979]. The large investment in each Z experiment demands low prefire probability and low jitter simultaneously. The system described here is based on a 100 kV DC-charged high-pressure spark gap, triggered with an ultraviolet laser. The system uses a single optical path for simultaneously triggering two parallel switches, allowing lower inductance and electrode erosion with a simple optical system. Performance of the system includes 6 ns output rise time into 5.6 Ω, 550 ps one-sigma jitter measured from the 5 V trigger to the high voltage output, and misfire probability less than 10^-4. The design of the system and some key measurements will be shown in the paper. We will discuss the design goals related to high reliability and low jitter. While reliability is usually important, and is coupled with jitter, reliability is seldom given more than a qualitative analysis (if any at all). We will show how reliability of the system was calculated, and results of a jitter-reliability tradeoff study. We will describe the behavior of sulfur hexafluoride as the insulating gas in the mildly nonuniform field geometry at pressures of 300 to 500 kPa. We will show the resistance of the arc channels, and show the performance comparisons with normal two-channel operation, and single channel operation.

Savage, Mark Edward; Stoltzfus, Brian Scott

2009-06-01

484

Reliability based inspection scheduling for fixed offshore structures  

SciTech Connect

In order to ensure the structural integrity of offshore structures, it is necessary to carry out periodic inspections. Fatigue cracking is one of the main deterioration processes, and the inspection for cracks in welded joints forms a significant part of the inspection effort. Recent developments in structural reliability theory and fatigue fracture mechanics, together with the availability of various probabilistic databases (such as corrosion fatigue data and inspection reliability data), can be used to provide a theoretical framework for inspection planning. In addition, advances in knowledge based system technology allow the incorporation of practical constraints with the theoretical results to provide an integrated practical solution. This paper describes the development of a knowledge based system incorporating the latest reliability based inspection planning methods. This development is the result of a large EC funded project under the THERMIE initiative and has had technical input from several organizations in four European countries.

Dharmavasan, S.; Peers, S.M.C. [University College London (United Kingdom); Faber, M.H. [RCP-Denmark APS, Mariager (Denmark); Dijkstra, O.D. [TNO Building and Construction Research, Delft (Netherlands); Cervetto, D. [Registro Italiano Navale, Genoa (Italy); Manfredi, E. [Univ. of Pisa (Italy)

1994-12-31

485

Reliability of internally corroding pipelines  

SciTech Connect

Internal corrosion is an increasing problem worldwide in onshore and offshore pipelines. This paper describes how the results of genuine high resolution magnetic flux leakage (MFL) inspection, together with fitness-for-purpose assessments, are used as the basis for defining cost effective rehabilitation strategies for internally corroding pipelines. Strategies are highlighted for pipelines containing 'active' corrosion which cannot be eliminated. Attention is given to (1) advances in the methods for assessing the significance of corrosion and (2) the benefits of using modern reliability methodologies which allow the probability of failure with time to be determined. Case studies are presented of the successful use of the above methods.

Jones, D.G.; Dawson, S.J.; Clyne, A.J. [BG plc, Northumberland (United Kingdom)

1998-12-31

486

Reliability of structural brittle materials  

NASA Technical Reports Server (NTRS)

Traditionally, the use of brittle materials has been avoided in demanding structural applications because of their unreliability. They have been used, however, owing to other desirable properties, in nonstructural applications or where the mechanical load is minimal. The most common design method utilized today for brittle materials is the probabilistic one, which takes into consideration the flaw and stress distribution within the brittle material. It does not take into consideration the fracture mechanics effect of strength degradation while aging under a mechanical load. This project will combine the two methods, probabilistic and fracture mechanics, into a more reliable design method for brittle materials.

Hall, W. B.

1985-01-01

487

Parallelized event chain algorithm for dense hard sphere and polymer systems  

NASA Astrophysics Data System (ADS)

We combine parallelization and cluster Monte Carlo for hard sphere systems and present a parallelized event chain algorithm for the hard disk system in two dimensions. For parallelization we use a spatial partitioning of the system into simulation cells. We find that it is crucial for correctness to ensure detailed balance on the level of Monte Carlo sweeps by drawing the starting sphere of event chains within each simulation cell with replacement. We analyze the performance gains for the parallelized event chain and find a criterion for an optimal degree of parallelization. Because of the cluster nature of event chain moves, massive parallelization will not be optimal. Finally, we discuss first applications of the event chain algorithm to dense polymer systems, i.e., bundle-forming solutions of attractive semiflexible polymers.

Kampmann, Tobias A.; Boltz, Horst-Holger; Kierfeld, Jan

2015-01-01
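
The core event chain move is compact enough to sketch. The snippet below performs one straight chain in +x for hard disks in a periodic box: a disk advances until it contacts another disk, then the remaining displacement budget is transferred ("lifted") to the disk it hit. This is a serial, O(N)-search simplification of the algorithm above, without cell lists or the paper's per-cell parallel scheme; all parameters are invented.

```python
import numpy as np

def event_chain_move(pos, sigma, L, chain_length, rng):
    """One straight event chain in +x for hard disks of diameter sigma
    in a periodic L x L box (simplified: no cell lists, O(N) search).
    In practice +x and +y chains are alternated for ergodicity."""
    n = len(pos)
    i = rng.integers(n)               # starting disk, drawn with replacement
    remaining = chain_length
    while remaining > 0.0:
        # Find the first disk j that disk i would hit moving in +x.
        best_s, best_j = remaining, -1
        for j in range(n):
            if j == i:
                continue
            dy = (pos[j, 1] - pos[i, 1] + L / 2) % L - L / 2
            if abs(dy) >= sigma:
                continue                            # cannot collide in y
            dx = (pos[j, 0] - pos[i, 0]) % L        # forward x-distance
            s = dx - np.sqrt(sigma**2 - dy**2)      # travel until contact
            if 0.0 <= s < best_s:
                best_s, best_j = s, j
        pos[i, 0] = (pos[i, 0] + best_s) % L        # advance disk i
        remaining -= best_s
        if best_j < 0:
            break                     # chain budget exhausted, no collision
        i = best_j                    # lifting: transfer motion to disk j

# Tiny demo: disks on a loose grid (an overlap-free starting configuration).
rng = np.random.default_rng(9)
L, sigma = 10.0, 1.0
xs, ys = np.meshgrid(np.arange(5) * 2.0 + 0.5, np.arange(5) * 2.0 + 0.5)
pos = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
for _ in range(100):
    event_chain_move(pos, sigma, L, chain_length=3.0, rng=rng)
```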

488

Electric Power Reliability in Chemical Plants  

E-print Network

The quality and reliability of utility-generated electric power is presently receiving a great deal of attention from the chemical and refining industry. What changes have taken place to make electric power reliability a major topic of discussion...

Cross, M. B.

489

SUGGESTED ANALYTIC APPROACH TO TRANSMISSION RELIABILITY MARGIN  

E-print Network

Suggested Analytic Approach to Transmission Reliability Margin, draft report, June 22, 1999, by Jianfeng Zhang, Ian Dobson, and Fernando L. Alvarado (Power Systems Engineering Research Center, Electrical & Computer Engineering Dept., University of Wisconsin, Madison, WI 53706, USA). Abstract fragment: Transmission Reliability Margin (TRM

490

Arnold Schwarzenegger REAL-TIME GRID RELIABILITY  

E-print Network

Real-Time Grid Reliability Management: California ISO Phasor Application Summary Report. Prepared for the California Energy Commission Public Interest Energy Research Program by Lawrence Berkeley National Laboratory, Consortium for Electric Reliability Technology Solutions.

491

Arnold Schwarzenegger REAL-TIME GRID RELIABILITY  

E-print Network

Real-Time Grid Reliability Management: California ISO Real… Prepared for the California Energy Commission Public Interest Energy Research Program by Lawrence Berkeley National Laboratory, Consortium for Electric Reliability Technology Solutions. Appendix C, October 2008, CEC-500

492

77 FR 26714 - Transmission Planning Reliability Standards  

Federal Register 2010, 2011, 2012, 2013

...Docket No. RM12-1-000] Transmission Planning Reliability Standards AGENCY: Federal...the approval of modified Transmission Planning Reliability Standard, TPL-001-2 (Transmission System Planning Performance Requirements), which...

2012-05-07

493

76 FR 66229 - Transmission Planning Reliability Standards  

Federal Register 2010, 2011, 2012, 2013

...Docket No. RM11-18-000] Transmission Planning Reliability Standards AGENCY: Federal...SUMMARY: Transmission Planning (TPL) Reliability Standards are intended...contingencies and allowable system impacts in the planning process. The table includes a...

2011-10-26

494

Data Communication Principles Reliable Data Transfer  

E-print Network

Lecture slides (Mahalingam Ramkumar, Mississippi State University, CSE 4153/6153, September 8, 2014) on data communication principles: data communication basics, switching, reliable data transfer, and the data rate of a communication …

Ramkumar, Mahalingam

495

Permission Forms  

ERIC Educational Resources Information Center

The prevailing practice in public schools is to routinely require permission or release forms for field trips and other activities that pose potential for liability. The legal status of such forms varies, but they are generally considered to be neither rock-solid protection nor legally valueless in terms of immunity. The following case and the…

Zirkel, Perry A.

2005-01-01

496

A discrete ordinate response matrix method for massively parallel computers  

SciTech Connect

A discrete ordinate response matrix method is formulated for the solution of neutron transport problems on massively parallel computers. The response matrix formulation eliminates iteration on the scattering source. The nodal matrices which result from the diamond-differenced equations are utilized in a factored form which minimizes memory requirements and significantly reduces the required number of operations. The algorithm utilizes massive parallelism by assigning each spatial node to a processor. The algorithm is accelerated effectively by a synthetic method in which the low-order diffusion equations are also solved by mass