Science.gov

Sample records for parallel forms reliability

  1. Robinson's Measure of Agreement as a Parallel Forms Reliability Coefficient.

    ERIC Educational Resources Information Center

    Willson, Victor L.

    A major deficiency in classical test theory is the reliance on Pearson product-moment (PPM) correlation concepts in the definition of reliability. PPM measures are totally insensitive to first-moment differences between tests, which leads to the dubious assumption of essential tau-equivalence. Robinson proposed a measure of agreement that is sensitive
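
    Since the crux here is correlation versus agreement, a small numeric sketch may help (Python; robinson_a follows the standard statement of Robinson's agreement coefficient A and should be verified against the original article):

        import numpy as np

        def pearson_r(x, y):
            # Classical PPM correlation: blind to first-moment (mean) shifts.
            x, y = np.asarray(x, float), np.asarray(y, float)
            return np.corrcoef(x, y)[0, 1]

        def robinson_a(x, y):
            # Agreement A = 1 - D/Dmax: D sums squared deviations of each score
            # from its pair mean; Dmax sums squared deviations from the grand mean.
            x, y = np.asarray(x, float), np.asarray(y, float)
            pair_mean = (x + y) / 2.0
            grand_mean = np.concatenate([x, y]).mean()
            d = ((x - pair_mean) ** 2 + (y - pair_mean) ** 2).sum()
            d_max = ((x - grand_mean) ** 2 + (y - grand_mean) ** 2).sum()
            return 1.0 - d / d_max

        form_a = [10, 12, 14, 16, 18]
        form_b = [15, 17, 19, 21, 23]        # same ranking, +5 point shift
        print(pearson_r(form_a, form_b))     # 1.0: the shift is invisible
        print(robinson_a(form_a, form_b))    # about 0.56: the shift is penalized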

  2. Parallelized reliability estimation of reconfigurable computer networks

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Das, Subhendu; Palumbo, Dan

    1990-01-01

    A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.
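
    The bounding idea, generating the most probable failure sequences first so that any unexplored probability mass separates the upper bound from the lower, can be illustrated with a toy state-space walk (Python; the 2-of-3 voting model and all probabilities are invented for illustration, not taken from ASSURE):

        from itertools import combinations

        # Toy model: three redundant channels; the system fails when two or
        # more channels fail (2-of-3 majority voting).
        P_FAIL = [1e-3, 1e-3, 2e-3]          # per-mission failure probabilities

        def subset_prob(failed):
            p = 1.0
            for i, pf in enumerate(P_FAIL):
                p *= pf if i in failed else (1.0 - pf)
            return p

        def system_failed(failed):
            return len(failed) >= 2

        def bounded_failure_prob(max_states):
            # Visit failure subsets from most to least probable; every state
            # left unexplored donates its whole mass to the upper bound only.
            states = [frozenset(c) for r in range(len(P_FAIL) + 1)
                      for c in combinations(range(len(P_FAIL)), r)]
            states.sort(key=subset_prob, reverse=True)
            lower = explored = 0.0
            for s in states[:max_states]:
                p = subset_prob(s)
                explored += p
                if system_failed(s):
                    lower += p
            return lower, lower + (1.0 - explored)

        print(bounded_failure_prob(4))   # loose: unexplored mass inflates the upper bound
        print(bounded_failure_prob(8))   # exhaustive: the two bounds coincide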

  3. Essay Reliability: Form and Meaning.

    ERIC Educational Resources Information Center

    Shale, Doug

    This study is an attempt at a cohesive characterization of the concept of essay reliability. As such, it takes as a basic premise that previous and current practices in reporting reliability estimates for essay tests have certain shortcomings. The study provides an analysis of these shortcomings--partly to encourage a fuller understanding of the

  4. Parallelism in the brain's visual form system

    PubMed Central

    Shigihara, Yoshihito; Zeki, Semir

    2013-01-01

    We used magnetoencephalography (MEG) to determine whether increasingly complex forms constituted from the same elements (lines) activate visual cortex with the same or different latencies. Twenty right-handed healthy adult volunteers viewed two different forms, lines and rhomboids, representing two levels of complexity. Our results showed that the earliest responses produced by lines and rhomboids in both striate and prestriate cortex had similar peak latencies (40 ms) although lines produced stronger responses than rhomboids. Dynamic causal modeling (DCM) showed that a parallel multiple input model to striate and prestriate cortex accounts best for the MEG response data. These results lead us to conclude that the perceptual hierarchy between lines and rhomboids is not mirrored by a temporal hierarchy in latency of activation and thus that a strategy of parallel processing appears to be used to construct forms, without implying that a hierarchical strategy may not be used in separate visual areas, in parallel. PMID:24118503

  5. Construction of Parallel Test Forms Using Optimal Test Designs.

    ERIC Educational Resources Information Center

    Dirir, Mohamed A.

    The effectiveness of an optimal item selection method in designing parallel test forms was studied during the development of two forms that were parallel to an existing form for each of three language arts tests for fourth graders used in the Connecticut Mastery Test. Two listening comprehension forms, two reading comprehension forms, and two

  6. A parallel 3D ALE code for metal forming analyses

    SciTech Connect

    Neely, R.; Couch, R.; Dube, E.; Futral, S.

    1995-01-30

    A three-dimensional arbitrary Lagrange-Eulerian (ALE) code is being developed for use as a general purpose tool for metal forming analyses. The focus of the effort is on the processes of forging, extrusion, casting and rolling. The ALE approach was chosen as an efficient way to deal with the large deformations and complicated flows associated with these processes. A prototype version of the software package, ALE3D, exists and is being applied to the enumerated processes. The development of the code is being driven by the dual constraints of portability and extensibility. A general purpose simulation tool must be capable of running on a variety of platforms, from single processor workstations to massively parallel platforms. It must also be configured to easily accommodate new physical models and parameters. The focus of this paper will be on computer science issues, with parallelization being the dominant issue. Long term goals will be described, as well as current status.

  7. Reliability and mass analysis of dynamic power conversion systems with parallel or standby redundancy

    NASA Technical Reports Server (NTRS)

    Juhasz, A. J.; Bloomfield, H. S.

    1985-01-01

    A combinatorial reliability approach is used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis is also performed, specifically for a 100 kWe nuclear Brayton power conversion system with parallel redundancy. Although this study is done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.

  8. Reliability and mass analysis of dynamic power conversion systems with parallel or standby redundancy

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.; Bloomfield, Harvey S.

    1987-01-01

    A combinatorial reliability approach was used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis was also performed, specifically for a 100-kWe nuclear Brayton power conversion system with parallel redundancy. Although this study was done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.

  9. Alternate Forms Reliability of the Behavioral Relaxation Scale: Preliminary Results

    ERIC Educational Resources Information Center

    Lundervold, Duane A.; Dunlap, Angel L.

    2006-01-01

    Alternate forms reliability of the Behavioral Relaxation Scale (BRS; Poppen,1998), a direct observation measure of relaxed behavior, was examined. A single BRS score, based on long duration observation (5-minute), has been found to be a valid measure of relaxation and is correlated with self-report and some physiological measures. Recently,…

  10. Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)

    NASA Astrophysics Data System (ADS)

    Baran, Ismet; Tutum, Cem C.; Hattel, Jesper H.

    2013-08-01

    In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first-order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (T_max) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative distribution functions of the CDOCE and T_max as well as the correlation coefficients are obtained by using the FORM and the results are compared with corresponding Monte-Carlo simulations (MCS). According to the results obtained from the FORM, an increase in the pulling speed yields an increase in the probability of T_max being greater than the resin degradation temperature. A similar trend is also seen for the probability of the CDOCE being less than 0.8.
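
    A generic FORM sketch may clarify the machinery (Python; the limit state g and its coefficients are invented stand-ins for the pultrusion LSFs, e.g. CDOCE - 0.8 after transformation to standard normal space):

        import numpy as np
        from scipy.stats import norm

        def g(u):
            # Hypothetical limit state in standard normal space; g < 0 is failure.
            return 3.0 - u[0] - 0.5 * u[1] ** 2

        def grad(f, u, h=1e-6):
            # Central-difference gradient.
            u = np.asarray(u, float)
            return np.array([(f(u + h * e) - f(u - h * e)) / (2 * h)
                             for e in np.eye(len(u))])

        def form_beta(g, n_dim=2, iters=50, tol=1e-8):
            # Hasofer-Lind / Rackwitz-Fiessler iteration for the design point.
            u = np.zeros(n_dim)
            for _ in range(iters):
                gu, dg = g(u), grad(g, u)
                u_new = (dg @ u - gu) * dg / (dg @ dg)
                if np.linalg.norm(u_new - u) < tol:
                    return np.linalg.norm(u_new)
                u = u_new
            return np.linalg.norm(u)

        beta = form_beta(g)
        print(beta, norm.cdf(-beta))   # reliability index and failure probability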

  11. Parameter interval estimation of system reliability for repairable multistate series-parallel system with fuzzy data.

    PubMed

    Bamrungsetthapong, Wimonmas; Pongpullponsak, Adisak

    2014-01-01

    The purpose of this paper is to create an interval estimation of the fuzzy system reliability for the repairable multistate series-parallel system (RMSS). A two-sided fuzzy confidence interval for the fuzzy system reliability is constructed. The performance of the fuzzy confidence interval is considered based on the coverage probability and the expected length. In order to obtain the fuzzy system reliability, fuzzy set theory is applied to the system reliability problem when dealing with uncertainties in the RMSS. A fuzzy number with a triangular membership function is used for constructing the fuzzy failure rate and the fuzzy repair rate in the fuzzy reliability for the RMSS. The results show that a good interval estimator is one whose coverage probability attains the expected confidence coefficient with the narrowest expected length. The model presented herein is an effective estimation method when the sample size is n ≥ 100. In addition, the optimal α-cuts for the narrowest lower expected length and the narrowest upper expected length are considered. PMID:24987728
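
    The α-cut mechanics on triangular fuzzy rates can be sketched generically (Python; the one-unit availability A = μ/(λ + μ), the series-parallel layout, and all rate values are illustrative assumptions rather than the paper's RMSS model):

        def alpha_cut(tri, alpha):
            # Interval of a triangular fuzzy number (l, m, u) at level alpha.
            l, m, u = tri
            return (l + alpha * (m - l), u - alpha * (u - m))

        def availability(lam, mu):
            # Steady-state availability of one repairable unit.
            return mu / (lam + mu)

        def system_availability(a1, a2, a3):
            # Illustrative layout: unit 1 in series with a parallel pair (2, 3).
            return a1 * (1.0 - (1.0 - a2) * (1.0 - a3))

        lam = (0.8e-3, 1.0e-3, 1.3e-3)   # fuzzy failure rate (per hour)
        mu = (0.09, 0.10, 0.12)          # fuzzy repair rate (per hour)

        alpha = 0.5
        lam_lo, lam_hi = alpha_cut(lam, alpha)
        mu_lo, mu_hi = alpha_cut(mu, alpha)

        # Availability decreases in lambda and increases in mu, so interval
        # endpoints come from opposite corners of the (lambda, mu) box.
        a_lo = availability(lam_hi, mu_lo)
        a_hi = availability(lam_lo, mu_hi)
        print(system_availability(a_lo, a_lo, a_lo),
              system_availability(a_hi, a_hi, a_hi))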

  12. Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gibson, Garth Alan

    1990-01-01

    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems but, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
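
    The single-erasure code at the heart of such arrays fits in a few lines: the parity block is the bytewise XOR of the data blocks, and any one self-identifying failed disk is recovered by XOR-ing the survivors (a generic sketch of RAID-style parity, not the dissertation's implementation):

        from functools import reduce

        def parity(blocks):
            # Parity block = bytewise XOR across equal-length blocks.
            return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

        def reconstruct(survivors):
            # Because x ^ x = 0, XOR-ing all surviving blocks (data plus
            # parity) yields the single missing block, whichever disk held it.
            return parity(survivors)

        data = [b"disk0dat", b"disk1dat", b"disk2dat"]
        p = parity(data)
        lost = data[1]                          # disk 1 fails (self-identifying)
        assert reconstruct([data[0], data[2], p]) == lost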

  13. An Examination of the Effect of Multidimensionality on Parallel Forms Construction.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    This paper examines the effect of using unidimensional item response theory (IRT) item parameter estimates of multidimensional items to create weakly parallel test forms using target information curves. To date, all computer-based algorithms that have been devised to create parallel test forms assume that the items are unidimensional. This paper

  14. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel

  15. Generating Random Parallel Test Forms Using CTT in a Computer-Based Environment.

    ERIC Educational Resources Information Center

    Weiner, John A.; Gibson, Wade M.

    1998-01-01

    Describes a procedure for automated-test-forms assembly based on Classical Test Theory (CTT). The procedure uses stratified random-content sampling and test-form preequating to ensure both content and psychometric equivalence in generating virtually unlimited parallel forms. Extends the usefulness of CTT in automated test construction. (Author/SLD)
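
    The core of such a procedure, drawing each form with the same number of items from every content stratum so that generated forms match in content coverage, might look as follows (Python; the item pool and blueprint counts are invented, and the preequating step is omitted):

        import random

        def build_form(pool, blueprint, rng):
            # pool: {stratum: [item ids]}; blueprint: {stratum: count needed}.
            # Stratified random content sampling: every form draws the same
            # number of items per stratum, yielding content-parallel forms.
            return {s: rng.sample(pool[s], k) for s, k in blueprint.items()}

        pool = {
            "algebra": [f"ALG{i:03d}" for i in range(40)],
            "geometry": [f"GEO{i:03d}" for i in range(30)],
            "data": [f"DAT{i:03d}" for i in range(30)],
        }
        blueprint = {"algebra": 10, "geometry": 8, "data": 7}

        rng = random.Random(7)               # seeded for reproducibility
        form_1 = build_form(pool, blueprint, rng)
        form_2 = build_form(pool, blueprint, rng)
        print(form_1["algebra"][:3], form_2["algebra"][:3])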

  16. Commentary on "Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data"

    ERIC Educational Resources Information Center

    Hayton, James C.

    2009-01-01

    In the article "Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data," Dinno (this issue) provides strong evidence that the distribution of random data does not have a significant influence on the outcome of the analysis. Hayton appreciates the thorough approach to evaluating this assumption, and agrees

  17. Similarity of the Multidimensional Space Defined by Parallel Forms of a Mathematics Test.

    ERIC Educational Resources Information Center

    Reckase, Mark D.; And Others

    The purpose of the paper is to determine whether test forms of the Mathematics Usage Test (AAP Math) of the American College Testing Program are parallel in a multidimensional sense. The AAP Math is an achievement test of mathematics concepts acquired by high school students by the end of their third year. To determine the dimensionality of the

  19. Reliability Modeling Methodology for Independent Approaches on Parallel Runways Safety Analysis

    NASA Technical Reports Server (NTRS)

    Babcock, P.; Schor, A.; Rosch, G.

    1998-01-01

    This document is an adjunct to the final report An Integrated Safety Analysis Methodology for Emerging Air Transport Technologies. That report presents the results of our analysis of the problem of simultaneous but independent approaches of two aircraft on parallel runways (independent approaches on parallel runways, or IAPR). This introductory chapter presents a brief overview and perspective of approaches and methodologies for performing safety analyses for complex systems. Ensuing chapters provide the technical details that underlie the approach that we have taken in performing the safety analysis for the IAPR concept.

  20. Exploring Equivalent Forms Reliability Using a Key Stage 2 Reading Test

    ERIC Educational Resources Information Center

    Benton, Tom

    2013-01-01

    This article outlines an empirical investigation into equivalent forms reliability using a case study of a national curriculum reading test. Within the situation being studied, there has been a genuine attempt to create several equivalent forms and so it is of interest to compare the actual behaviour of the relationship between these forms to the

  1. An Investigation into Reliability, Availability, and Serviceability (RAS) Features for Massively Parallel Processor Systems

    SciTech Connect

    KELLY, SUZANNE M.; OGDEN, JEFFREY BRANDON

    2002-10-01

    A study has been completed into the RAS features necessary for Massively Parallel Processor (MPP) systems. As part of this research, a use case model was built of how RAS features would be employed in an operational MPP system. Use cases are an effective way to specify requirements so that all involved parties can easily understand them. This technique is in contrast to laundry lists of requirements, which are subject to misunderstanding because they lack context. As documented in the use case model, the study included a look at incorporating system software and end-user applications, as well as hardware, into the RAS system.

  2. The Reliability and Validity of the Coopersmith Self-Esteem Inventory-Form B.

    ERIC Educational Resources Information Center

    Chiu, Lian-Hwang

    1985-01-01

    The purpose of this study was to determine the test-retest reliability and concurrent validity of the short form (Form B) of the Coopersmith Self-Esteem Inventory. Criterion measures for validity included: (1) sociometric measures; (2) teacher's popularity ranking; and, (3) self-esteem rating. (Author/LMO)

  3. Parallel FE Approximation of the Even/Odd Parity Form of the Linear Boltzmann Equation

    SciTech Connect

    Drumm, Clifton R.; Lorenz, Jens

    1999-07-21

    A novel solution method has been developed to solve the linear Boltzmann equation on an unstructured triangular mesh. Instead of tackling the first-order form of the equation, this approach is based on the even/odd-parity form in conjunction with the conventional multigroup discrete-ordinates approximation. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, and the method is well suited for massively parallel computers.

  4. Magnetosheath Filamentary Structures Formed by Ion Acceleration at the Quasi-Parallel Bow Shock

    NASA Technical Reports Server (NTRS)

    Omidi, N.; Sibeck, D.; Gutynska, O.; Trattner, K. J.

    2014-01-01

    Results from 2.5-D electromagnetic hybrid simulations show the formation of field-aligned, filamentary plasma structures in the magnetosheath. They begin at the quasi-parallel bow shock and extend far into the magnetosheath. These structures exhibit anticorrelated, spatial oscillations in plasma density and ion temperature. Closer to the bow shock, magnetic field variations associated with density and temperature oscillations may also be present. Magnetosheath filamentary structures (MFS) form primarily in the quasi-parallel sheath; however, they may extend to the quasi-perpendicular magnetosheath. They occur over a wide range of solar wind Alfvénic Mach numbers and interplanetary magnetic field directions. At lower Mach numbers with lower levels of magnetosheath turbulence, MFS remain highly coherent over large distances. At higher Mach numbers, magnetosheath turbulence decreases the level of coherence. Magnetosheath filamentary structures result from localized ion acceleration at the quasi-parallel bow shock and the injection of energetic ions into the magnetosheath. The localized nature of ion acceleration is tied to the generation of fast magnetosonic waves at and upstream of the quasi-parallel shock. The increased pressure in flux tubes containing the shock accelerated ions results in the depletion of the thermal plasma in these flux tubes and the enhancement of density in flux tubes void of energetic ions. This results in the observed anticorrelation between ion temperature and plasma density.

  5. Magnetosheath filamentary structures formed by ion acceleration at the quasi-parallel bow shock

    NASA Astrophysics Data System (ADS)

    Omidi, N.; Sibeck, D.; Gutynska, O.; Trattner, K. J.

    2014-04-01

    Results from 2.5-D electromagnetic hybrid simulations show the formation of field-aligned, filamentary plasma structures in the magnetosheath. They begin at the quasi-parallel bow shock and extend far into the magnetosheath. These structures exhibit anticorrelated, spatial oscillations in plasma density and ion temperature. Closer to the bow shock, magnetic field variations associated with density and temperature oscillations may also be present. Magnetosheath filamentary structures (MFS) form primarily in the quasi-parallel sheath; however, they may extend to the quasi-perpendicular magnetosheath. They occur over a wide range of solar wind Alfvénic Mach numbers and interplanetary magnetic field directions. At lower Mach numbers with lower levels of magnetosheath turbulence, MFS remain highly coherent over large distances. At higher Mach numbers, magnetosheath turbulence decreases the level of coherence. Magnetosheath filamentary structures result from localized ion acceleration at the quasi-parallel bow shock and the injection of energetic ions into the magnetosheath. The localized nature of ion acceleration is tied to the generation of fast magnetosonic waves at and upstream of the quasi-parallel shock. The increased pressure in flux tubes containing the shock accelerated ions results in the depletion of the thermal plasma in these flux tubes and the enhancement of density in flux tubes void of energetic ions. This results in the observed anticorrelation between ion temperature and plasma density.

  6. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

  7. Searching for globally optimal functional forms for interatomic potentials using genetic programming with parallel tempering.

    PubMed

    Slepoy, A; Peters, M D; Thompson, A P

    2007-11-30

    Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. PMID:17565499
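
    The parameter-fitting half of the search (not the genetic programming over functional forms) can be miniaturized as parallel tempering over Lennard-Jones parameters (Python; the target data, temperatures, and step sizes are invented for illustration):

        import math
        import random

        def lj_energy(r, eps, sigma):
            # Lennard-Jones pair potential.
            return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

        R = [0.95, 1.0, 1.1, 1.3, 1.6]                  # pair distances
        TARGET = [lj_energy(r, 1.0, 1.0) for r in R]    # "reference" energies

        def cost(p):
            eps, sigma = p
            return sum((lj_energy(r, eps, sigma) - t) ** 2
                       for r, t in zip(R, TARGET))

        def parallel_tempering(temps, steps=20000, seed=1):
            rng = random.Random(seed)
            reps = [[rng.uniform(0.5, 2.0), rng.uniform(0.5, 2.0)] for _ in temps]
            costs = [cost(p) for p in reps]
            for step in range(steps):
                for i, t in enumerate(temps):           # Metropolis moves
                    cand = [x + rng.gauss(0, 0.05) for x in reps[i]]
                    c = cost(cand)
                    if c < costs[i] or rng.random() < math.exp((costs[i] - c) / t):
                        reps[i], costs[i] = cand, c
                if step % 100 == 0:                     # replica-exchange swap
                    i = rng.randrange(len(temps) - 1)
                    d = (1 / temps[i] - 1 / temps[i + 1]) * (costs[i] - costs[i + 1])
                    if d > 0 or rng.random() < math.exp(d):
                        reps[i], reps[i + 1] = reps[i + 1], reps[i]
                        costs[i], costs[i + 1] = costs[i + 1], costs[i]
            best = min(range(len(temps)), key=lambda i: costs[i])
            return reps[best], costs[best]

        print(parallel_tempering([0.01, 0.1, 1.0]))     # should settle near (1, 1)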

  8. The Diagnostic Reading Test, Survey Section, Form E: A Reliability Study.

    ERIC Educational Resources Information Center

    Pellegrine, R. J.

    The Diagnostic Reading Tests were designed to assess the reading skills of college students enrolled in reading centers. To assess the reliability of the Diagnostic Reading Tests, Survey Section, Form E (DRTE), a study was conducted with university freshmen as subjects. The DRTE was administered to 31 students in an Educational Opportunity Program…

  9. Validity and Reliability of International Physical Activity Questionnaire-Short Form in Chinese Youth

    ERIC Educational Resources Information Center

    Wang, Chao; Chen, Peijie; Zhuang, Jie

    2013-01-01

    Purpose: The psychometric profiles of the widely used International Physical Activity Questionnaire-Short Form (IPAQ-SF) in Chinese youth have not been reported. The purpose of this study was to examine the validity and reliability of the IPAQ-SF using a sample of Chinese youth. Method: One thousand and twenty-one youth (M_age = 14.26

  10. Alternating d(G-A) sequences form a parallel-stranded DNA homoduplex.

    PubMed Central

    Rippe, K; Fritsch, V; Westhof, E; Jovin, T M

    1992-01-01

    The oligonucleotides d[(G-A)₇G] and d[(G-A)₁₂G] self-associate under physiological conditions (10 mM MgCl₂, neutral pH) into a stable double-helical structure (psRR-DNA) in which the two polypurine strands are in a parallel orientation, in contrast to the antiparallel disposition of conventional B-DNA. We have characterized psRR-DNA by gel electrophoresis, UV absorption, vacuum UV circular dichroism, monomer-excimer fluorescence of oligonucleotides end-labelled with pyrene, and chemical probing with diethyl pyrocarbonate and dimethyl sulfate. The duplex is stable at pH 4-9, suggesting that the structure is compatible with, but does not require, protonation of the A residues. The data support a model derived from force-field analysis in which the parallel-stranded d(G-A)ₙ helix is right-handed and constituted of alternating, symmetrical G(syn)·G(syn) and A(anti)·A(anti) base pairs with N1H...O6 and N6H...N7 hydrogen bonds, respectively. This dinucleotide structure may be the source of a negative peak observed at 190 nm in the vacuum UV CD spectrum, a feature previously reported only for left-handed Z-DNA. The related sequence d[(GAAGGA)₄G] also forms a parallel-stranded duplex but one that is less stable and probably involves a slightly different secondary structure. We discuss the potential intervention of psRR-DNA in recombination, gene expression and the stabilization of genomic structure. PMID:1396571

  11. Microelectromechanical filter formed from parallel-connected lattice networks of contour-mode resonators

    DOEpatents

    Wojciechowski, Kenneth E; Olsson, III, Roy H; Ziaei-Moayyed, Maryam

    2013-07-30

    A microelectromechanical (MEM) filter is disclosed which has a plurality of lattice networks formed on a substrate and electrically connected together in parallel. Each lattice network has a series resonant frequency and a shunt resonant frequency provided by one or more contour-mode resonators in the lattice network. Different types of contour-mode resonators, including single input, single output resonators, differential resonators, balun resonators, and ring resonators, can be used in the MEM filter. The MEM filter can have a center frequency in the range of 10 MHz-10 GHz, with a filter bandwidth of up to about 1% when all of the lattice networks have the same series resonant frequency and the same shunt resonant frequency. The filter bandwidth can be increased up to about 5% by using unique series and shunt resonant frequencies for the lattice networks.

  12. Closed-form massively-parallel range-from-image-flow algorithm

    SciTech Connect

    Raviv, D.; Albus, J.S.

    1990-10-01

    The authors provide a closed-form solution for obtaining the 3D structure of a scene for a given six degree of freedom motion of a camera. The solution is massively parallel, i.e., the range that corresponds to each pixel depends on the spatial and temporal changes in intensities of that pixel, and on the motion parameters of the camera. The measurements of the intensities are done in a priori known directions. The solution is for the general case of camera motion. The derivation is based upon representing the image in the spherical coordinate system, although a similar approach could be taken for other image domains, e.g., the planar coordinate system. They comment on the computational cost, errors, and singular points of the solutions. They also suggest a practical way to significantly reduce the computations and implement them.

  13. Parallel processing in the brain's visual form system: an fMRI study

    PubMed Central

    Shigihara, Yoshihito; Zeki, Semir

    2014-01-01

    We here extend and complement our earlier time-based, magneto-encephalographic (MEG), study of the processing of forms by the visual brain (Shigihara and Zeki, 2013) with a functional magnetic resonance imaging (fMRI) study, in order to better localize the activity produced in early visual areas when subjects view simple geometric stimuli of increasing perceptual complexity (lines, angles, rhombuses) constituted from the same elements (lines). Our results show that all three categories of form activate all three visual areas with which we were principally concerned (V1–V3), with angles producing the strongest and rhombuses the weakest activity in all three. The difference between the activity produced by angles and rhombuses was significant, that between lines and rhombuses was trend significant while that between lines and angles was not. Taken together with our earlier MEG results, the present ones suggest that a parallel strategy is used in processing forms, in addition to the well-documented hierarchical strategy. PMID:25126064

  14. The Validation of Parallel Test Forms: "Mountain" and "Beach" Picture Series for Assessment of Language Skills

    ERIC Educational Resources Information Center

    Bae, Jungok; Lee, Yae-Sheik

    2011-01-01

    Pictures are widely used to elicit expressive language skills, and pictures must be established as parallel before changes in ability can be demonstrated by assessment using pictures prompts. Why parallel prompts are required and what it is necessary to do to ensure that prompts are in fact parallel is not widely known. To date, evidence of

  16. The 36-Item Short Form Health Survey: Reliability and Validity in Chinese Medical Students

    PubMed Central

    Zhang, Yang; QU, Bo; Lun, Shi-si; Guo, Ying; Liu, Jie

    2012-01-01

    Objective: The 36-Item Short Form Health Survey (SF-36) is widely validated and popularly used in assessing the subjective quality of life (QOL) of patients and the general public. The aim of the study is to assess the psychometric properties of the SF-36 in medical students in mainland China. Methods: The reliability and validity of the SF-36 questionnaire were assessed by conducting a cross-sectional study of Chinese medical students in December 2011. All 1358 3rd year and 4th year medical students from 46 classes at China Medical University were investigated. Results: The overall Cronbach's α coefficient of the SF-36 questionnaire was 0.791, while the Cronbach's α coefficients for the individual dimensions were > 0.70, except for the social function dimension, which was 0.631. Results showed that the SF-36 questionnaire was reliable and valid. Conclusion: In general, this study provides evidence that the SF-36 questionnaire is a suitable measure for assessing the QOL of medical students in China. PMID:22991490

  17. Normative data for measuring performance change on parallel forms of a 15-word list recall test.

    PubMed

    Carlesimo, Giovanni A; De Risi, Marco; Monaco, Marco; Costa, Alberto; Fadda, Lucia; Picardi, Angelo; Di Gennaro, Giancarlo; Caltagirone, Carlo; Grammaldo, Liliana

    2014-05-01

    Declarative memory evaluation is an essential step in the clinical and neuropsychological assessment of a variety of neurological disorders. It typically addresses the issue of normality/abnormality of an individual's performance. Another clinical application of the neuropsychological assessment of declarative memory is the longitudinal evaluation of an individual's performance change. In fact, in a variety of neurological conditions repeated assessments are needed to evaluate the modifications of a memory disorder as a function of time or in response to a pharmacological or rehabilitation treatment. This study was aimed at collecting data for measuring and interpreting performance change on a memory test for verbal material. For this purpose, we administered to 100 healthy subjects (age range 20-80 years; years of formal education range 8-17 years) three parallel forms of a test requiring the immediate and delayed recall of a 15-word list. The subjects performed the recall test three times (each time with a different list) at least 1 week apart. The order of the lists was randomized across subjects. Results revealed that performance on the three lists was highly correlated and did not vary as a function of the order of presentation. However, accuracy of recall was slightly better on one list compared with the others. Based on a method devised by Payne and Jones (J Clin Psychol 13:115-121, 1957), we provide normative data for establishing whether a discrepancy in recall accuracy on two versions of the test exceeds the discrepancy expected based on the performance of normal controls. PMID:24218156
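
    The Payne and Jones (1957) logic for judging such a discrepancy can be sketched as follows (Python; the normative means, standard deviations, and inter-form correlation below are placeholders for the values tabled in the article):

        from math import sqrt
        from scipy.stats import norm

        def discrepancy_z(x1, x2, m1, m2, sd1, sd2, r12):
            # Standardize the observed form-1 minus form-2 difference against
            # the spread of differences expected in normal controls.
            sd_diff = sqrt(sd1 ** 2 + sd2 ** 2 - 2 * r12 * sd1 * sd2)
            return ((x1 - x2) - (m1 - m2)) / sd_diff

        # Placeholder norms for two parallel 15-word lists (immediate recall).
        z = discrepancy_z(x1=52, x2=41, m1=48.0, m2=47.0,
                          sd1=9.0, sd2=9.5, r12=0.80)
        print(z, 2 * norm.sf(abs(z)))   # z and a two-tailed abnormality level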

  18. Reliability and Validity of the Korean Young Schema Questionnaire-Short Form-3 in Medical Students

    PubMed Central

    Lee, Seung Jae; Choi, Young Hee; Rim, Hyo Deog; Won, Seung Hee

    2015-01-01

    Objective The Young Schema Questionnaire (YSQ) is a self-report measure of early maladaptive schemas and is currently in its third revision; it is available in both long (YSQ-L3) and short (YSQ-S3) forms. The goal of this study was to develop a Korean version of the YSQ-S3 and establish its psychometric properties in a Korean sample. Methods A total of 542 graduate medical students completed the Korean version of the YSQ-S3 and several other psychological scales. A subsample of 308 subjects completed the Korean YSQ-S3 both before and after a 2-year test-retest interval. Correlation, regression, and confirmatory factor analyses were performed on the data. Results The internal consistency of the 90-item Korean YSQ-S3 was 0.97 and that of each schema was acceptable, with Cronbach's alphas ranging from 0.59 to 0.90. The test-retest reliability ranged from 0.46 to 0.65. Every schema showed robust positive correlations with most psychological measures. The confirmatory factor analysis for the 18-factor structure originally proposed by Young, Klosko, and Weishaar (2003) showed that most goodness-of-fit statistics were indicative of a satisfactory fit. Conclusion These findings support the reliability and validity of the Korean version of the YSQ-S3. PMID:26207121

  19. Self-stigma of mental illness scale--short form: reliability and validity.

    PubMed

    Corrigan, Patrick W; Michaels, Patrick J; Vega, Eduardo; Gause, Michael; Watson, Amy C; Rüsch, Nicolas

    2012-08-30

    The internalization of public stigma by persons with serious mental illnesses may lead to self-stigma, which harms self-esteem, self-efficacy, and empowerment. Previous research has evaluated a hierarchical model that distinguishes among stereotype awareness, agreement, application to self, and harm to self with the 40-item Self-Stigma of Mental Illness Scale (SSMIS). This study addressed SSMIS critiques (too long, contains offensive items that discourage test completion) by strategically omitting half of the original scale's items. Here we report reliability and validity of the 20-item short form (SSMIS-SF) based on data from three previous studies. Retained items were rated less offensive by a sample of consumers. Results indicated adequate internal consistencies for each subscale. Repeated measures ANOVAs showed subscale means progressively diminished from awareness to harm. In support of its validity, the harm subscale was found to be inversely and significantly related to self-esteem, self-efficacy, empowerment, and hope. After controlling for level of depression, these relationships remained significant with the exception of the relation between empowerment and the harm SSMIS-SF subscale. Future research with the SSMIS-SF should evaluate its sensitivity to change and its stability through test-retest reliability. PMID:22578819

  20. Bringing the Cognitive Estimation Task into the 21st Century: Normative Data on Two New Parallel Forms

    PubMed Central

    MacPherson, Sarah E.; Wagner, Gabriela Peretti; Murphy, Patrick; Bozzali, Marco; Cipolotti, Lisa; Shallice, Tim

    2014-01-01

    The Cognitive Estimation Test (CET) is widely used by clinicians and researchers to assess the ability to produce reasonable cognitive estimates. Although several studies have published normative data for versions of the CET, many of the items are now outdated and parallel forms of the test do not exist to allow cognitive estimation abilities to be assessed on more than one occasion. In the present study, we devised two new 9-item parallel forms of the CET. These versions were administered to 184 healthy male and female participants aged 18–79 years with 9–22 years of education. Increasing age and years of education were found to be associated with successful CET performance as well as gender, intellect, naming, arithmetic and semantic memory abilities. To validate that the parallel forms of the CET were sensitive to frontal lobe damage, both versions were administered to 24 patients with frontal lobe lesions and 48 age-, gender- and education-matched controls. The frontal patients’ error scores were significantly higher than the healthy controls on both versions of the task. This study provides normative data for parallel forms of the CET for adults which are also suitable for assessing frontal lobe dysfunction on more than one occasion without practice effects. PMID:24671170

  1. Multiple clusters of release sites formed by individual thalamic afferents onto cortical interneurons ensure reliable transmission

    PubMed Central

    Bagnall, Martha W.; Hull, Court; Bushong, Eric A.; Ellisman, Mark H.; Scanziani, Massimo

    2012-01-01

    Summary Thalamic afferents supply the cortex with sensory information by contacting both excitatory neurons and inhibitory interneurons. Interestingly, thalamic contacts with interneurons constitute such a powerful synapse that even one afferent can fire interneurons, thereby driving feedforward inhibition. However, the spatial representation of this potent synapse on interneuron dendrites is poorly understood. Using Ca imaging and electron microscopy we show that an individual thalamic afferent forms multiple contacts with the interneuronal proximal dendritic arbor, preferentially near branch points. More contacts are correlated with larger amplitude synaptic responses. Each contact, consisting of a single bouton, can release up to 7 vesicles simultaneously, resulting in graded and reliable Ca transients. Computational modeling indicates that the release of multiple vesicles at each contact minimally reduces the efficiency of the thalamic afferent in exciting the interneuron. This strategy preserves the spatial representation of thalamocortical inputs across the dendritic arbor over a wide range of release conditions. PMID:21745647

  2. Validity, Reliability, and Potential Bias of Short Forms of Students' Evaluation of Teaching: The Case of UAE University

    ERIC Educational Resources Information Center

    Dodeen, Hamzeh

    2013-01-01

    Students' opinions continue to be a significant factor in the evaluation of teaching in higher education institutions. The purpose of this study was to psychometrically assess short students evaluation of teaching (SET) forms using the UAE University form as a model. The study evaluated the form validity, reliability, the overall question,

  3. Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data

    ERIC Educational Resources Information Center

    Dinno, Alexis

    2009-01-01

    Horn's parallel analysis (PA) is the method of consensus in the literature on empirical methods for deciding how many components/factors to retain. Different authors have proposed various implementations of PA. Horn's seminal 1965 article, a 1996 article by Thompson and Daniel, and a 2004 article by Hayton, Allen, and Scarpello all make assertions

  4. Structural Aspects of the Antiparallel and Parallel Duplexes Formed by DNA, 2′-O-Methyl RNA and RNA Oligonucleotides

    PubMed Central

    Szabat, Marta; Pedzinski, Tomasz; Czapik, Tomasz; Kierzek, Elzbieta; Kierzek, Ryszard

    2015-01-01

    This study investigated the influence of the nature of oligonucleotides on their ability to form antiparallel and parallel duplexes. Base pairing of homopurine DNA, 2′-O-MeRNA and RNA oligonucleotides with respective homopyrimidine DNA, 2′-O-MeRNA and RNA as well as chimeric oligonucleotides containing LNA resulted in the formation of 18 various duplexes. UV melting, circular dichroism and fluorescence studies revealed the influence of nucleotide composition on duplex structure and thermal stability depending on the buffer pH value. Most duplexes simultaneously adopted both orientations. However, at pH 5.0, parallel duplexes were more favorable. Moreover, the presence of LNA nucleotides within a homopyrimidine strand favored the formation of parallel duplexes. PMID:26579720

  5. Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.

    SciTech Connect

    Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee; Neary, Vincent Sinclair

    2014-09-01

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height (Hs) and energy period (Te) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (IFORM) is standard design practice for generating environmental contours. In this paper, the traditional application of the IFORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the IFORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: Inverse FORM, Principal Component Analysis, Environmental Contours, Extreme Sea State Characterization, Wave Energy Converters
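
    The basic IFORM construction, a circle of radius β in standard normal space mapped through fitted marginal and conditional distributions, has this generic shape (Python; the Weibull marginal for Hs, the conditional lognormal for Te, and every parameter value are stand-ins for distributions fitted to hindcast data):

        import numpy as np
        from scipy import stats

        # Reliability index for a 50-year contour built from 3-hour sea states.
        T_RETURN_YRS, STATE_HRS = 50, 3
        n_states = T_RETURN_YRS * 365.25 * 24 / STATE_HRS
        beta = stats.norm.ppf(1 - 1 / n_states)

        theta = np.linspace(0, 2 * np.pi, 360)
        u1, u2 = beta * np.cos(theta), beta * np.sin(theta)

        # Map u1 through an assumed Weibull marginal for Hs.
        hs = stats.weibull_min.ppf(stats.norm.cdf(u1), c=1.5, scale=2.8)

        # Map u2 through an assumed conditional lognormal for Te given Hs.
        mu_ln, sigma_ln = 1.1 + 0.25 * np.log(hs + 1.0), 0.12
        te = stats.lognorm.ppf(stats.norm.cdf(u2), s=sigma_ln,
                               scale=np.exp(mu_ln))

        contour = np.column_stack([hs, te])   # (Hs, Te) points on the contour
        print(beta, contour[:3])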

  6. Reliability and Validity of the Brief Problem Monitor, an Abbreviated Form of the Child Behavior Checklist

    PubMed Central

    Piper, Brian J.; Gray, Hilary M.; Raber, Jacob; Birkett, Melissa A.

    2014-01-01

    Aim: The parent form of the 113-item Child Behavior Checklist (CBCL) is widely utilized by child psychiatrists and psychologists. This report examines the reliability and validity of a recently developed abbreviated version of the CBCL, the Brief Problem Monitor (BPM). Methods: Caregivers (N = 567) completed the CBCL online and the 19 BPM items were examined separately. Results: Internal consistency of the BPM was high (Cronbach's alpha = 0.91) and satisfactory for the Internalizing (0.78), Externalizing (0.86), and Attention (0.87) scales. High correlations between the CBCL and BPM were identified for the total score (r = 0.95) as well as the Internalizing (0.86), Externalizing (0.93), and Attention (0.97) scales. The BPM and its scales were sensitive and identified significantly higher behavioral and emotional problems among children whose caregiver reported a psychiatric diagnosis of Attention Deficit Hyperactivity Disorder, bipolar, depression, anxiety, developmental disabilities, or Autism Spectrum Disorders relative to a comparison group that had not been diagnosed with these disorders. BPM ratings also differed by the socioeconomic status and education of the caregiver. Mothers with higher annual incomes rated their children as having 38.8% fewer total problems (Cohen's d = 0.62) as well as 42.8% lower Internalizing (d = 0.53), 44.1% less Externalizing (d = 0.62), and 30.9% decreased Attention (d = 0.39) scores. A similar pattern was evident for maternal education (d = 0.30 to 0.65). Conclusion: Overall, these findings provide strong psychometric support for the BPM, although the differences based on the characteristics of the parent indicate that additional information from other sources (e.g., teachers) should be obtained to complement parental reports. PMID:24735087
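
    Cronbach's alpha, the internal-consistency figure quoted throughout, is quick to compute from a respondent-by-item score matrix (a generic sketch on simulated data, not the BPM dataset):

        import numpy as np

        def cronbach_alpha(items):
            # items: respondents x items matrix of scores.
            items = np.asarray(items, float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(0)
        trait = rng.normal(size=(500, 1))                 # shared construct
        scores = trait + rng.normal(size=(500, 19))       # 19 noisy BPM-like items
        print(cronbach_alpha(scores))                     # about 0.95 here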

  7. Comparisons between Classical Test Theory and Item Response Theory in Automated Assembly of Parallel Test Forms

    ERIC Educational Resources Information Center

    Lin, Chuan-Ju

    2008-01-01

    The automated assembly of alternate test forms for online delivery provides an alternative to computer-administered, fixed test forms, or computerized-adaptive tests when a testing program migrates from paper/pencil testing to computer-based testing. The weighted deviations model (WDM) heuristic is particularly promising for automated test assembly…

  8. Retest and Alternate-Form Reliabilities of the PPVT-R with Fourth, Fifth, and Sixth Grade Pupils.

    ERIC Educational Resources Information Center

    Tillinghast, B. S., Jr.; And Others

    1983-01-01

    A study using the Peabody Picture Vocabulary Test (Revised) was conducted to determine whether the increase in reliability when both Forms L and M were employed justified the increase in time required for the longer procedure. Children in grades four, five, and six were involved in the project. (PP)

  9. A Validation Study of the Dutch Childhood Trauma Questionnaire-Short Form: Factor Structure, Reliability, and Known-Groups Validity

    ERIC Educational Resources Information Center

    Thombs, Brett D.; Bernstein, David P.; Lobbestael, Jill; Arntz, Arnoud

    2009-01-01

    Objective: The 28-item Childhood Trauma Questionnaire-Short Form (CTQ-SF) has been translated into at least 10 different languages. The validity of translated versions of the CTQ-SF, however, has generally not been examined. The objective of this study was to investigate the factor structure, internal consistency reliability, and known-groups

  10. Measuring Teacher Self-Report on Classroom Practices: Construct Validity and Reliability of the Classroom Strategies Scale-Teacher Form

    ERIC Educational Resources Information Center

    Reddy, Linda A.; Dudek, Christopher M.; Fabiano, Gregory A.; Peters, Stephanie

    2015-01-01

    This article presents information about the construct validity and reliability of a new teacher self-report measure of classroom instructional and behavioral practices (the Classroom Strategies Scales-Teacher Form; CSS-T). The theoretical underpinnings and empirical basis for the instructional and behavioral management scales are presented.

  11. Developing Form Assembly Specifications for Exams with Multiple Choice and Constructed Response Items: Balancing Reliability and Validity Concerns

    ERIC Educational Resources Information Center

    Hendrickson, Amy; Patterson, Brian; Ewing, Maureen

    2010-01-01

    The psychometric considerations and challenges associated with including constructed response items on tests are discussed along with how these issues affect the form assembly specifications for mixed-format exams. Reliability and validity, security and fairness, pretesting, content and skills coverage, test length and timing, weights, statistical…

  12. Reliability of the International Physical Activity Questionnaire in Research Settings: Last 7-Day Self-Administered Long Form

    ERIC Educational Resources Information Center

    Levy, Susan S.; Readdy, R. Tucker

    2009-01-01

    The purpose of this study was to examine the test-retest reliability of the last 7-day long form International Physical Activity Questionnaire (Craig et al., 2003) and to examine the construct validity for the measure in a research setting. Participants were 151 male (n = 52) and female (n = 99) university students (M_age = 24.15 years, SD = 5.01)

  13. Utilization of parallel processing in solving the inviscid form of the average-passage equation system for multistage turbomachinery

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.

    1987-01-01

    A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to are specific to the Cray X-MP line of computers and its associated SSD (Solid-State Disk). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.

  14. The utilization of parallel processing in solving the inviscid form of the average-passage equation system for multistage turbomachinery

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.

    1987-01-01

    A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to in this paper are specific to the Cray X-MP line of computers and its associated SSD (Solid-state Storage Device). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.

  15. Reliability of a Group Form of the Peabody Picture Vocabulary Test.

    ERIC Educational Resources Information Center

    Tillinghast, B. S., Jr.; Renzulli, Joseph S.

    1968-01-01

    The purpose of this study was to further examine the reliability of the Peabody Picture Vocabulary Test (PPVT), a new instrument to measure hearing vocabulary so that a student's verbal intelligence may be inferred. A group testing procedure was utilized by reproducing the PPVT plates on 35 millimeter transparent slides and projecting them onto a…

  16. The relative noise levels of parallel axis gear sets with various contact ratios and gear tooth forms

    NASA Technical Reports Server (NTRS)

    Drago, Raymond J.; Lenski, Joseph W., Jr.; Spencer, Robert H.; Valco, Mark; Oswald, Fred B.

    1993-01-01

    The real noise-reduction benefit that may be obtained through the use of one gear tooth form as compared to another is an important design parameter for any geared system, especially for helicopters, in which both weight and reliability are very important factors. This paper describes the design and testing of nine sets of gears which are as identical as possible except for their basic tooth geometry. Noise measurements were made at various combinations of load and speed for each gear set so that direct comparisons could be made. The resultant data was analyzed so that valid conclusions could be drawn and interpreted for design use.

  17. Measuring teacher self-report on classroom practices: Construct validity and reliability of the Classroom Strategies Scale-Teacher Form.

    PubMed

    Reddy, Linda A; Dudek, Christopher M; Fabiano, Gregory A; Peters, Stephanie

    2015-12-01

    This article presents information about the construct validity and reliability of a new teacher self-report measure of classroom instructional and behavioral practices (the Classroom Strategies Scales-Teacher Form; CSS-T). The theoretical underpinnings and empirical basis for the instructional and behavioral management scales are presented. Information is provided about the construct validity, internal consistency, test-retest reliability, and freedom from item-bias of the scales. Given previous investigations with the CSS Observer Form, it was hypothesized that internal consistency would be adequate and that confirmatory factor analyses (CFA) of CSS-T data from 293 classrooms would offer empirical support for the CSS-T's Total, Composite and subscales, and yield a similar factor structure to that of the CSS Observer Form. Goodness-of-fit indices of χ²/df, Root Mean Square Error of Approximation, Goodness of Fit Index, and Adjusted Goodness of Fit Index suggested satisfactory fit of the proposed CFA models, whereas the Comparative Fit Index did not. Internal consistency estimates of .93 and .94 were obtained for the Instructional Strategies and Behavioral Strategies Total scales, respectively. Adequate test-retest reliability was found for the instructional and behavioral total scales (r = .79, r = .84; percent agreement 93% and 93%). The CSS-T evidences freedom from item bias on important teacher demographics (age, educational degree, and years of teaching experience). Implications of results are discussed. PMID:25622226
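
    For readers checking model-fit claims like these, RMSEA follows directly from the model chi-square, its degrees of freedom, and the sample size (a standard-formula sketch; the numbers below are placeholders, not values from the article):

        from math import sqrt

        def rmsea(chi2, df, n):
            # Root Mean Square Error of Approximation for a CFA model.
            return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

        print(rmsea(chi2=340.0, df=130, n=293))   # about .074; <= .08 is often
                                                  # read as adequate fit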

  18. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. These forces were to be determined using parallel surrogate and exact approximation methods, thus demonstrating the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+, with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
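
    The surrogate loop described above, space-filling sampling over the design variables, an expensive solver run at each mapped site, then Kriging interpolation of the response, has this generic shape (Python; expensive_cfd stands in for the STAR-CCM+ runs, and scikit-learn's Gaussian process regressor stands in for the Kriging interpolator):

        import numpy as np
        from scipy.stats import qmc
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def expensive_cfd(x):
            # Placeholder for a converged CFD evaluation of aerodynamic force.
            return np.sin(3 * x[0]) * np.exp(-x[1]) + 0.5 * x[2] - 0.1 * x[3] ** 2

        # Latin hypercube sample over four design-variable degrees of freedom.
        sampler = qmc.LatinHypercube(d=4, seed=42)
        X = qmc.scale(sampler.random(n=40),
                      l_bounds=[0, 0, -1, -1], u_bounds=[1, 2, 1, 1])
        y = np.array([expensive_cfd(x) for x in X])

        # Kriging (Gaussian process) surrogate fitted to the mapped test sites.
        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.5] * 4),
                                      normalize_y=True).fit(X, y)

        x_new = np.array([[0.3, 1.0, 0.2, -0.4]])
        mean, sd = gp.predict(x_new, return_std=True)
        print(mean[0], sd[0], expensive_cfd(x_new[0]))   # surrogate vs "truth"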

  19. Self-Formed Barrier with Cu-Mn alloy Metallization and its Effects on Reliability

    SciTech Connect

    Koike, J.; Wada, M.; Usui, T.; Nasu, H.; Takahashi, S.; Shimizu, N.; Yoshimaru, M.; Shibata, H.

    2006-02-07

    Advancement of semiconductor devices requires the realization of an ultra-thin (less than 5 nm thick) diffusion barrier layer between Cu interconnect and insulating layers. Self-forming barrier layers have been considered as an alternative barrier structure to the conventional Ta/TaN barrier layers. The present work investigated the possibility of the self-forming barrier layer using Cu-Mn alloy thin films deposited directly on SiO2. After annealing at 450 °C for 30 min, an amorphous oxide layer of 3-4 nm in thickness was formed uniformly at the interface. The oxide formation was accompanied by complete expulsion of Mn atoms from the Cu-Mn alloy, leading to a drastic decrease in resistivity of the film. No interdiffusion was observed between Cu and SiO2, indicating an excellent diffusion-barrier property of the interface oxide.

  20. Defining the "Correct Form": Using Biomechanics to Develop Reliable and Valid Assessment Instruments

    ERIC Educational Resources Information Center

    Satern, Miriam N.

    2011-01-01

    Physical educators should be able to define the "correct form" they expect to see each student performing in their classes. Moreover, they should be able to go beyond assessing students' skill levels by measuring the outcomes (products) of movements (i.e., how far they throw the ball or how many successful attempts are completed) or counting the…

  1. Development and reliability testing of a food store observation form. Measures of the Food Environment

    Cancer.gov

  3. Reliability of equivalent sphere model in blood-forming organ dose estimation

    SciTech Connect

    Shinn, J.L.; Wilson, J.W.; Nealy, J.E.

    1990-04-01

    The radiation dose equivalents to blood-forming organs (BFO's) of the astronauts at the Martian surface due to major solar flare events are calculated using the detailed body geometry of Langley and Billings. The solar flare spectra of the February 1956, November 1960, and August 1972 events are employed instead of the idealized Webber form. The detailed geometry results are compared with those based on the 5-cm sphere model, which was often used in the past to approximate BFO dose or dose equivalent. Larger discrepancies are found for the latter two events, possibly due to the lower numbers of highly penetrating protons. It is concluded that the 5-cm sphere model is not suitable for quantitative use in connection with future NASA deep-space, long-duration mission shield design studies.

  4. Reliability of equivalent sphere model in blood-forming organ dose estimation

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.; Nealy, John E.

    1990-01-01

    The radiation dose equivalents to blood-forming organs (BFO's) of the astronauts at the Martian surface due to major solar flare events are calculated using the detailed body geometry of Langley and Billings. The solar flare spectra of the February 1956, November 1960, and August 1972 events are employed instead of the idealized Webber form. The detailed geometry results are compared with those based on the 5-cm sphere model, which was often used in the past to approximate BFO dose or dose equivalent. Larger discrepancies are found for the latter two events, possibly due to the lower numbers of highly penetrating protons. It is concluded that the 5-cm sphere model is not suitable for quantitative use in connection with future NASA deep-space, long-duration mission shield design studies.

  5. Parallel-plate submicron gap formed by micromachined low-density pillars for near-field radiative heat transfer

    NASA Astrophysics Data System (ADS)

    Ito, Kota; Miura, Atsushi; Iizuka, Hideo; Toshiyoshi, Hiroshi

    2015-02-01

    Near-field radiative heat transfer has been a subject of great interest due to its applicability to thermal management and energy conversion. In this letter, a submicron gap between a pair of diced fused quartz substrates is formed by using micromachined low-density pillars to obtain both parallelism and small parasitic heat conduction. The gap uniformity is validated by optical interferometry at the four corners of the substrates. The heat flux across the gap is measured in a steady state and is no greater than twice the theoretically predicted radiative heat flux, which indicates that the parasitic heat conduction is suppressed to the level of the radiative heat transfer or less. The heat conduction through the pillars is modeled, and it is found to be limited by the thermal contact resistance between the pillar top and the opposing substrate surface. The methodology used to form and evaluate the gap promotes near-field radiative heat transfer for various applications such as thermal rectification, thermal modulation, and thermophotovoltaics.

  6. Parallel-plate submicron gap formed by micromachined low-density pillars for near-field radiative heat transfer

    SciTech Connect

    Ito, Kota; Miura, Atsushi; Iizuka, Hideo; Toshiyoshi, Hiroshi

    2015-02-23

    Near-field radiative heat transfer has been a subject of great interest due to its applicability to thermal management and energy conversion. In this letter, a submicron gap between a pair of diced fused quartz substrates is formed by using micromachined low-density pillars to obtain both parallelism and small parasitic heat conduction. The gap uniformity is validated by optical interferometry at the four corners of the substrates. The heat flux across the gap is measured in a steady state and is no greater than twice the theoretically predicted radiative heat flux, which indicates that the parasitic heat conduction is suppressed to the level of the radiative heat transfer or less. The heat conduction through the pillars is modeled, and it is found to be limited by the thermal contact resistance between the pillar top and the opposing substrate surface. The methodology used to form and evaluate the gap promotes near-field radiative heat transfer for various applications such as thermal rectification, thermal modulation, and thermophotovoltaics.

  7. easyCBM Beginning Reading Measures: Grades K-1 Alternate Form Reliability and Criterion Validity with the SAT-10. Technical Report #1403

    ERIC Educational Resources Information Center

    Wray, Kraig; Lai, Cheng-Fei; Sáez, Leilani; Alonzo, Julie; Tindal, Gerald

    2013-01-01

    We report the results of an alternate form reliability and criterion validity study of kindergarten and grade 1 (N = 84-199) reading measures from the easyCBM assessment system and the Stanford Early School Achievement Test/Stanford Achievement Test, 10th edition (SESAT/SAT-10) across 5 time points. The alternate form reliabilities ranged from…

  8. Reliability of the State-Trait Anxiety Inventory, Form Y in Japanese samples.

    PubMed

    Iwata, N; Mishima, N

    1999-04-01

    The internal consistency of the State-Trait Anxiety Inventory, Form Y was examined using data collected from Japanese participants in five diverse surveys, one of which included American university students. The Cronbach coefficient alpha was calculated separately for state and trait items as well as for anxiety-present and anxiety-absent items. The internal consistency was higher for the anxiety-absent items than for the state and trait anxiety items, but this tendency was not clear for the anxiety-present items. The trait anxiety items showed the lowest internal consistency for all Japanese groups, whereas the anxiety-present items showed the lowest alpha for American university students. This difference may underlie the difference in the two-factor structure between Japanese respondents and people in Western countries. PMID:10335063

  9. Reliability and Validity of the Short Form of the Literacy-Independent Cognitive Assessment in the Elderly

    PubMed Central

    Kim, Jungeun; Jeong, Jee H.; Han, Seol-Heui; Ryu, Hui Jin; Lee, Jun-Young; Ryu, Seung-Ho; Lee, Dong Woo; Shim, Yong S.

    2013-01-01

    Background and Purpose The Literacy-Independent Cognitive Assessment (LICA) has been developed for a diagnosis of dementia and is a useful neuropsychological test battery for illiterate as well as literate populations. The objective of this study was to develop the short form of the LICA (S-LICA) and to evaluate the reliability and validity of the S-LICA. Methods The subtests of the S-LICA were selected based on the factor analysis and validation study results of the LICA. Patients with dementia (n = 101) and normal elderly controls (n = 185) participated in this study. Results Cronbach's coefficient alpha of the S-LICA was 0.92 for illiterate subjects and 0.94 for literate subjects, and the item-total correlation ranged from 0.63 to 0.81 (p < .01). The test-retest reliability of the S-LICA total score was high (r = 0.94, p < .001), and the subtests had high test-retest reliabilities (r = 0.68-0.87, p < .01). The correlation between the K-MMSE and S-LICA total scores was substantial in both the illiterate subjects (r = 0.837, p < .001) and the literate subjects (r = 0.802, p < .001). The correlation between the S-LICA and LICA was very high (r = 0.989, p < .001). The area under the curve of the receiver operating characteristic was 0.999 for the literate subjects and 0.985 for the illiterate subjects. The sensitivity and specificity of the S-LICA for a diagnosis of dementia were 97% and 96% at the cutoff point of 72 for the literate subjects, and 96% and 93% at the cutoff point of 68 for the illiterate subjects, respectively. Conclusions Our results indicate that the S-LICA is a reliable and valid instrument for quick evaluation of patients with dementia in both illiterate and literate elderly populations. PMID:23626649
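
    Most of the internal-consistency figures quoted in these records (the 0.92-0.94 above, the α = .97 below) are Cronbach's coefficient alpha. A minimal sketch of the computation, assuming a respondents-by-items score matrix; the toy data are illustrative only.

    ```python
    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for a (respondents x items) score matrix."""
        k = scores.shape[1]                      # number of items
        item_vars = scores.var(axis=0, ddof=1)   # per-item sample variances
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    # Toy example: 6 respondents x 4 correlated items -> high alpha.
    rng = np.random.default_rng(0)
    base = rng.normal(size=(6, 1))
    scores = base + 0.3 * rng.normal(size=(6, 4))
    print(round(cronbach_alpha(scores), 2))
    ```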

  10. Two-Repeat Human Telomeric d(TAGGGTTAGGGT) Sequence Forms Interconverting Parallel and Antiparallel G-Quadruplexes in Solution: Distinct Topologies, Thermodynamic Properties, and Folding/Unfolding Kinetics

    PubMed Central

    Patel, Dinshaw J.

    2015-01-01

    We demonstrate by NMR that the two-repeat human telomeric sequence d(TAGGGTTAGGGT) can form both parallel and antiparallel G-quadruplex structures in K+-containing solution. Both structures are dimeric G-quadruplexes involving three stacked G-tetrads. The sequence d(TAGGGUTAGGGT), containing a single thymine-to-uracil substitution at position 6, formed a predominantly parallel dimeric G-quadruplex with double-chain-reversal loops; the structure was symmetric, and all guanines were anti. Another modified sequence, d(UAGGGTBrUAGGGT), formed a predominantly antiparallel dimeric G-quadruplex with edgewise loops; the structure was asymmetric with six syn guanines and six anti guanines. The two structures can coexist and interconvert in solution. For the latter sequence, the antiparallel form is more favorable at low temperatures (<50 °C), while the parallel form is more favorable at higher temperatures; at temperatures lower than 40 °C, the antiparallel G-quadruplex folds faster but unfolds more slowly than the parallel G-quadruplex. PMID:14653736

  11. Hypo-activity screening in school setting; examining reliability and validity of the Teacher Estimation of Activity Form (TEAF).

    PubMed

    Rosenblum, Sara; Engel-Yeger, Batya

    2015-06-01

    It is well established that physical activity during childhood contributes to children's physical and psychological health. The aim of this study was to test the reliability and validity of the Hebrew version of the Teacher Estimation of Activity Form (TEAF) questionnaire as a screening tool among school-aged children in Israel. Six physical education teachers completed TEAF questionnaires for 123 children aged 5-12 years: 68 children (55%) with Typical Development (TD) and 55 children (45%) diagnosed with Developmental Coordination Disorder (DCD). The Hebrew version of the TEAF showed a very high level of internal consistency (α = .97). There were no significant gender differences. Significant differences were found between children with and without DCD, attesting to the test's construct validity. Concurrent validity was established by finding a significantly high correlation (r = .76, p < .01) between the TEAF and Movement-ABC mean scores within the DCD group. The TEAF demonstrated acceptable reliability and validity estimates. It appears to be a promising standardized practical tool in both research and practice for describing information about school-aged children's involvement in physical activity. Further research with larger samples is indicated to establish cut-off scores determining the point at which hypo-activity is identified in stratified age groups. Furthermore, the majority of the participants in this study were boys, and further research is needed that includes more girls for a better understanding of the phenomenon of hypo-activity. PMID:25665095

  12. Reliability of the Wraparound Observation Form--Second Version: an instrument designed to assess the fidelity of the wraparound approach.

    PubMed

    Nordness, Philip D; Epstein, Michael H

    2003-06-01

    The push to rapidly implement the wraparound approach for families of children with serious emotional disturbance (SED) has resulted in a number of service models that may or may not be in accordance with its theoretical foundation. Given the number of wraparound programs being implemented nationwide, the need to develop instruments that can measure the fidelity of wraparound services in a reliable manner should not be ignored. The purpose of this study was to examine the interobserver agreement of the Wraparound Observation Form--Second Version (WOF-2), an observation system designed to assess the fidelity of wraparound services. Observations were conducted across 30 family planning meetings where wraparound services were provided. The mean percentage agreement across the 48 items was 96.7% (range 83.3-100%) and the average kappa statistic was 0.886 (range = 0.318-1.0). On the basis of these results, the WOF-2 appears to be a reliable instrument for assessing the delivery of wraparound services. PMID:12801072
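
    The two statistics this record reports, percent agreement and Cohen's kappa, are straightforward to compute from paired observer codings. A minimal sketch, assuming scikit-learn is available; the item codings are hypothetical.

    ```python
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical binary codings by two observers across eight items.
    rater_a = np.array([1, 1, 0, 1, 1, 0, 1, 1])
    rater_b = np.array([1, 1, 0, 1, 0, 0, 1, 1])

    percent_agreement = 100 * np.mean(rater_a == rater_b)
    kappa = cohen_kappa_score(rater_a, rater_b)  # chance-corrected agreement
    print(f"agreement: {percent_agreement:.1f}%, kappa: {kappa:.2f}")
    ```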

  13. Initial validation of the Spanish childhood trauma questionnaire-short form: factor structure, reliability and association with parenting.

    PubMed

    Hernandez, Ana; Gallardo-Pujol, David; Pereda, Noemí; Arntz, Arnoud; Bernstein, David P; Gaviria, Ana M; Labad, Antonio; Valero, Joaquín; Gutiérrez-Zotes, José Alfonso

    2013-05-01

    The present study examines the internal consistency and factor structure of the Spanish version of the Childhood Trauma Questionnaire-Short Form (CTQ-SF) and the association between the CTQ-SF subscales and parenting style. Cronbach's α and confirmatory factor analyses (CFA) were performed in a female clinical sample (n = 185). Kendall's τ correlations were calculated between the maltreatment and parenting scales in a subsample of 109 patients. The Spanish CTQ-SF showed adequate psychometric properties and a good fit of the 5-factor structure. The neglect and abuse scales were negatively associated with parental care and positively associated with overprotection scales. The results of this study provide initial support for the reliability and validity of the Spanish CTQ-SF. PMID:23266990

  14. Reliability and psychometric properties of the Greek translation of the State-Trait Anxiety Inventory form Y: Preliminary data

    PubMed Central

    Fountoulakis, Konstantinos N; Papadopoulou, Marina; Kleanthous, Soula; Papadopoulou, Anna; Bizeli, Vasiliki; Nimatoudis, Ioannis; Iacovides, Apostolos; Kaprinis, George S

    2006-01-01

    Background The State-Trait Anxiety Inventory form Y is a brief self-rating scale for the assessment of state and trait anxiety. The aim of the current preliminary study was to assess the psychometric properties of its Greek translation. Materials and methods 121 healthy volunteers aged 27.22 ± 10.61 years and 22 depressed patients aged 29.48 ± 9.28 years entered the study. In 20 of them the instrument was re-applied 12 days later. Translation and back-translation were performed. The clinical diagnosis was reached with the SCAN v.2.0 and the IPDE. The Symptoms Rating Scale for Depression and Anxiety (SRSDA) and the EPQ were applied for cross-validation purposes. The statistical analysis included the Pearson correlation coefficient and the calculation of Cronbach's alpha. Results The State score for healthy subjects was 34.30 ± 10.79 and the Trait score was 36.07 ± 10.47. The respective scores for the depressed patients were 56.22 ± 8.86 and 53.83 ± 10.87. Both State and Trait scores followed the normal distribution in control subjects. Cronbach's alpha was 0.93 for the State and 0.92 for the Trait subscale. The Pearson correlation coefficient between the State and Trait subscales was 0.79. Both subscales correlated fairly with the anxiety subscale of the SRSDA. Test-retest reliability was excellent, with the Pearson coefficient being between 0.75 and 0.98 for individual items and equal to 0.96 for State and 0.98 for Trait. Conclusion The current study provided preliminary evidence concerning the reliability and the validity of the Greek translation of the STAI-form Y. Its properties are generally similar to those reported in the international literature, but further research is necessary. PMID:16448554

  15. The Bruininks-Oseretsky Test of Motor Proficiency-Short Form is reliable in children living in remote Australian Aboriginal communities

    PubMed Central

    2013-01-01

    Background The Lililwan Project is the first population-based study to determine Fetal Alcohol Spectrum Disorders (FASD) prevalence in Australia and was conducted in the remote Fitzroy Valley in North Western Australia. The diagnostic process for FASD requires accurate assessment of gross and fine motor functioning using standardised cut-offs for impairment. The Bruininks-Oseretsky Test of Motor Proficiency, Second Edition (BOT-2) is a norm-referenced assessment of motor function used worldwide and in FASD clinics in North America. It is available in a Complete Form with 53 items or a Short Form with 14 items. Its reliability in measuring motor performance in children exposed to alcohol in utero or living in remote Australian Aboriginal communities is unknown. Methods A prospective inter-rater and test-retest reliability study was conducted using the BOT-2 Short Form. A convenience sample of children (n = 30) aged 7 to 9 years participating in the Lililwan Project cohort (n = 108) study completed the reliability study. Over 50% of mothers of Lililwan Project children drank alcohol during pregnancy. Two raters simultaneously scoring each child determined inter-rater reliability. Test-retest reliability was determined by assessing each child on a second occasion, using predominantly the same rater. Reliability was analysed by calculating Intra-Class Correlation Coefficients, ICC(2,1), Percentage Exact Agreement (PEA), and Percentage Close Agreement (PCA); measures of Minimal Detectable Change (MDC) were also calculated. Results Thirty Aboriginal children (18 male, 12 female; mean age 8.8 years) were assessed at eight remote Fitzroy Valley communities. The inter-rater reliability for the BOT-2 Short Form score sheet outcomes ranged from 0.88 (95% CI, 0.77-0.94) to 0.92 (95% CI, 0.84-0.96), indicating excellent reliability. The test-retest reliability (median interval between tests being 45.5 days) for the BOT-2 Short Form score sheet outcomes ranged from 0.62 (95% CI, 0.34-0.80) to 0.73 (95% CI, 0.50-0.86), indicating fair to good reliability. The raw score MDC was 6.12. Conclusion The BOT-2 Short Form has acceptable reliability for use in remote Australian Aboriginal communities and will be useful in determining motor deficits in children exposed to alcohol prenatally. This is the first known study evaluating the reliability of the BOT-2 Short Form, either in the context of assessment for FASD or in Aboriginal children. PMID:24010634
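
    The minimal detectable change reported above is conventionally derived from the test-retest ICC and the score standard deviation through the standard error of measurement. A short sketch, assuming the common MDC95 formulation; the SD value below is hypothetical, chosen only to land near the reported raw-score MDC.

    ```python
    import math

    def minimal_detectable_change(sd: float, icc: float, z: float = 1.96) -> float:
        """MDC at 95% confidence from a test-retest ICC and the score SD."""
        sem = sd * math.sqrt(1 - icc)   # standard error of measurement
        return z * math.sqrt(2) * sem   # sqrt(2): difference of two measurements

    # Hypothetical inputs in the range reported above (raw-score MDC ~6).
    print(round(minimal_detectable_change(sd=4.0, icc=0.7), 2))  # -> 6.07
    ```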

  16. Concurrent Validity and Reliability of the Kaufman Version of the McCarthy Scales Short Form for a Sample of Mexican-American Children.

    ERIC Educational Resources Information Center

    Valencia, Richard R.; Rankin, Richard J.

    1983-01-01

    The concurrent validity and reliability of Kaufman's short-form version of the McCarthy Scales of Children's Abilities were examined for a sample of 342 Mexican-American preschool- and kindergarten-age children. The results showed that generally the positive psychometric properties of the Kaufman short form were also noted for the children in this…

  17. Subjective Well-Being Under Neuroleptics Scale short form (SWN-K): reliability and validity in an Estonian speaking sample

    PubMed Central

    2013-01-01

    Background The Subjective Well-Being Under Neuroleptic Treatment Scale short form (SWN-K) is a self-rating scale developed to measure mentally ill patients' well-being under antipsychotic drug treatment. This paper reports on the adaptation and psychometric properties of the instrument in an Estonian psychiatric sample. Methods In a naturalistic study design, 124 inpatients or outpatients suffering from a first psychotic episode or chronic psychotic illness completed the translated SWN-K instrument. Item content analysis, internal consistency analysis, exploratory principal components analysis, and confirmatory factor analysis were used to construct the Estonian version of the SWN-K (SWN-K-E). Additionally, socio-demographic and clinical data, observer-rated psychopathology, medication side effects, daily antipsychotic drug dosages, and general functioning were assessed at two time points, at baseline and after a 29-week period; the associations of the SWN-K-E scores with these variables were explored. Results After 20 items had been selected for the Estonian adaptation, the internal consistency of the total SWN-K-E was 0.93 and the subscale consistencies ranged from 0.70 to 0.80. Good test-retest reliabilities were observed for the adapted scale scores, with the correlation of the total score over about 6 months being r = 0.70. Confirmatory factor analysis replicated the presence of a higher-order factor (general well-being) and five first-order factors (mental functioning, physical functioning, social integration, emotional regulation, and self-control); the model fitted the data well. The results indicated a moderate-to-high correlation (r = 0.54) between the SWN-K-E total score and patients' evaluation of how satisfied they were with their lives in general. No significant correlations were found between the overall subjective well-being score and age, severity of psychopathology, drug adverse effects, or prescribed drug dosage. Conclusion Taken together, the results demonstrate that the Estonian version of the SWN-K is a reliable and valid instrument with psychometric properties similar to those of the original English version. The potential uses of the scale in both research and clinical settings are considered. PMID:24025191

  18. Parallel β-sheets and polar zippers in amyloid fibrils formed by residues 10–39 of the yeast prion protein Ure2p

    PubMed Central

    Chan, Jerry C.C.; Oyler, Nathan A.; Yau, Wai-Ming; Tycko, Robert

    2005-01-01

    We report the results of solid state nuclear magnetic resonance (NMR) and atomic force microscopy measurements on amyloid fibrils formed by residues 10–39 of the yeast prion protein Ure2p (Ure2p10–39). Measurements of intermolecular 13C-13C nuclear magnetic dipole-dipole couplings indicate that Ure2p10–39 fibrils contain in-register parallel β-sheets. Measurements of intermolecular 15N-13C dipole-dipole couplings, using a new solid state NMR technique called DSQ-REDOR, are consistent with hydrogen bonds between sidechain amide groups of Gln18 residues. Such sidechain hydrogen bonding interactions have been called “polar zippers” by M.F. Perutz and have been proposed to stabilize amyloid fibrils formed by peptides with glutamine- and asparagine-rich sequences, such as Ure2p10–39. We propose that polar zipper interactions account for the in-register parallel β-sheet structure in Ure2p10–39 fibrils and that similar peptides will also exhibit parallel β-sheet structures in amyloid fibrils. We present molecular models for Ure2p10–39 fibrils that are consistent with available experimental data. Finally, we show that solid state 13C NMR chemical shifts for 13C-labeled Ure2p10–39 fibrils are insensitive to hydration level, indicating that the fibril structure is not affected by the presence or absence of bulk water. PMID:16060675

  19. The Major G-Quadruplex Formed in the Human BCL-2 Proximal Promoter Adopts a Parallel Structure with a 13-nt Loop in K+ Solution

    PubMed Central

    2014-01-01

    The human BCL-2 gene contains a 39-bp GC-rich region upstream of the P1 promoter that has been shown to be critically involved in the regulation of BCL-2 gene expression. Inhibition of BCL-2 expression can decrease cellular proliferation and enhance the efficacy of chemotherapy. Here we report the major G-quadruplex formed in the Pu39 G-rich strand of this BCL-2 promoter region. The 1245G4 quadruplex adopts a parallel structure with one 13-nt and two 1-nt chain-reversal loops. The 1245G4 quadruplex involves four nonsuccessive G-runs (I, II, IV, and V), unlike the previously reported bcl2 MidG4 quadruplex formed on the central four G-runs. The parallel 1245G4 quadruplex with the 13-nt loop, unexpectedly, appears to be more stable than the mixed parallel/antiparallel MidG4. Parallel-stranded structures with two 1-nt loops and one variable-length middle loop are found to be prevalent in promoter G-quadruplexes; the variable middle loop is suggested to determine the specific overall structure and potential ligand recognition site. A limit of 7 nt on loop length is used in all quadruplex-predicting software, so the formation and high stability of the 1245G4 quadruplex with a 13-nt loop is significant. The presence of two distinct interchangeable G-quadruplexes in the overlapping region of the BCL-2 promoter is intriguing, suggesting a novel mechanism for gene transcriptional regulation and ligand modulation. PMID:24450880

  20. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 1. Technical Report #1216

    ERIC Educational Resources Information Center

    Anderson, Daniel; Park, Jasmine, Bitnara; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due…

  1. Validity and Reliability of the Turkish Form of Technology-Rich Outcome-Focused Learning Environment Inventory

    ERIC Educational Resources Information Center

    Cakir, Mustafa

    2011-01-01

    The purpose of the study was to investigate the reliability and validity of a Turkish adaptation of the Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI), which was developed by Aldridge, Dorman, and Fraser. A sample of 985 students from 16 high schools (Grades 9-12) participated in the study. The translation process followed…

  2. Closed-form solution of mid-potential between two parallel charged plates with more extensive application

    NASA Astrophysics Data System (ADS)

    Shang, Xiang-Yu; Yang, Chen; Zhou, Guo-Qing

    2015-10-01

    Efficient calculation of the electrostatic interactions, including the repulsive force between charged molecules in a biomolecule system or charged particles in a colloidal system, is necessary for molecular-scale or particle-scale mechanical analyses of these systems. The electrostatic repulsive force depends on the mid-plane potential between two charged particles. Previous analytical solutions of the mid-plane potential, including those based on simplified assumptions and modern mathematical methods, are reviewed. It is shown that none of these solutions applies to wide ranges of inter-particle distance from 0 to 10 and surface potential from 1 to 10. Three previous analytical solutions are chosen to develop a semi-analytical solution which is proven to have more extensive applications. Furthermore, an empirical closed-form expression of the mid-plane potential is proposed based on a large set of numerical solutions. This empirical solution has extensive applications as well as high computational efficiency. Project supported by the National Key Basic Research Program of China (Grant No. 2012CB026103), the National Natural Science Foundation of China (Grant No. 51009136), and the Natural Science Foundation of Jiangsu Province, China (Grant No. BK2011212).
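
    The solutions reviewed in this record all start from the one-dimensional Poisson-Boltzmann equation between the plates. In the usual reduced variables it takes the standard form below; this is the textbook formulation, offered as a hedged reconstruction rather than the paper's exact notation.

    ```latex
    % Dimensionless 1-D Poisson--Boltzmann equation between two parallel
    % charged plates: \psi = e\phi/k_B T is the scaled potential and
    % \xi = \kappa x the scaled distance (\kappa = inverse Debye length).
    \[
      \frac{d^{2}\psi}{d\xi^{2}} = \sinh\psi ,
      \qquad
      \left.\frac{d\psi}{d\xi}\right|_{\xi=\xi_m} = 0 ,
    \]
    % where \xi_m is the mid-plane. The mid-plane potential \psi_m then
    % fixes the repulsive pressure via P = 2 n_0 k_B T (\cosh\psi_m - 1).
    ```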

  3. An Investigation of Psychometric Properties of Coping Styles Scale Brief Form: A Study of Validity and Reliability

    ERIC Educational Resources Information Center

    Bacanli, Hasan; Surucu, Mustafa; Ilhan, Tahsin

    2013-01-01

    The aim of the current study was to develop a short form of the Coping Styles Scale based on the COPE Inventory. A total of 275 undergraduate students (114 female, 74 male) participated in the first study. In order to test the factor structure of the Coping Styles Scale Brief Form, principal components factor analysis and direct oblique rotation were…

  4. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Passage Reading Fluency Assessments: Grade 4. Technical Report #1219

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

  5. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 2. Technical Report #1217

    ERIC Educational Resources Information Center

    Anderson, Daniel; Lai, Cheg-Fei; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due to…

  6. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 5. Technical Report #1220

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei; Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.

  7. Reproducible and Reliable Real-time PCR Assay to Measure Mature Form of miR-141.

    PubMed

    Aghaee-Bakhtiari, Seyed Hamid; Arefian, Ehsan; Soleimani, Masoud; Noorbakhsh, Farshid; Samiee, Siamak Mirab; Fard-Esfahani, Pezhman; Mahdian, Reza

    2016-02-01

    miR-141 is one of the miRNAs that show significant expression variations in different human malignancies, including prostate cancer, hepatocellular carcinoma, renal cell carcinoma, pancreatic cancer, gastric cancer, and ovarian cancer. Furthermore, various studies have designated miR-141 as a prognostic and diagnostic biomarker in different types of cancer. Thus, accurate and precise quantification of miR-141 is essential for clinical diagnostics, and development of a reproducible and reliable assay for miR-141 is a first step toward standardizing quantification of this valuable biomarker for in vitro diagnostic assays. Using a stem-loop approach, we designed a TaqMan real-time PCR assay for miR-141. This method allowed us to quantify miR-141 reproducibly and reliably. The specificity, sensitivity, interassay and intraassay variability, and dynamic range of the method were determined. The assay had a linear dynamic range of 3E-9.6E copies/reaction, and the limit of detection was determined to be between 960 and 192 copies/reaction with a 95% confidence interval. In addition, the R2 value was >0.99 and the slope of the standard curve >-3.27, indicating high amplification efficiency (>99%). The coefficient of variation for Ct values was <1.9% and 2.39% for the intraassay and interassay measurements, respectively. This study can therefore be a first step toward standardizing miR-141 evaluation, which will assist physicians in improved prognosis, diagnosis, and treatment. PMID:25789530
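
    The quoted efficiency follows from the standard-curve slope through the usual dilution-series relation; a worked form of that standard identity is given below (the exact slope-to-efficiency mapping used in the paper is assumed, not stated).

    ```latex
    % qPCR amplification efficiency from the standard-curve slope; a slope
    % of -3.32 corresponds to perfect doubling per cycle (E = 100%).
    \[
      E = 10^{-1/\mathrm{slope}} - 1 ,
      \qquad
      \mathrm{slope} = -3.32 \;\Rightarrow\; E = 10^{1/3.32} - 1 \approx 1.00 .
    \]
    ```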

  8. The Behavior Problems Inventory-Short Form for Individuals with Intellectual Disabilities: Part II--Reliability and Validity

    ERIC Educational Resources Information Center

    Rojahn, J.; Rowe, E. W.; Sharber, A. C.; Hastings, R.; Matson, J. L.; Didden, R.; Kroes, D. B. H.; Dumont, E. L. M.

    2012-01-01

    Background: The Behavior Problems Inventory-01 (BPI-01) is an informant-based behaviour rating instrument for intellectual disabilities (ID) with 49 items and three sub-scales: "Self-injurious Behavior," "Stereotyped Behavior" and "Aggressive/Destructive Behavior." The Behavior Problems Inventory-Short Form (BPI-S) is a BPI-01 spin-off with 30…

  9. Japanese Version of Home Form of the ADHD-RS: An Evaluation of Its Reliability and Validity

    ERIC Educational Resources Information Center

    Tani, Iori; Okada, Ryo; Ohnishi, Masafumi; Nakajima, Shunji; Tsujii, Masatsugu

    2010-01-01

    Using the Japanese version of the home form of the ADHD-RS, this survey attempted to compare scores between the US and Japan and examined the correlates of the ADHD-RS. We collected responses from parents or caregivers of 5977 children (3119 males and 2858 females) in nursery, elementary, and lower-secondary schools. A confirmatory factor analysis of…

  10. Balancing the Need for Reliability and Time Efficiency: Short Forms of the Wechsler Adult Intelligence Scale-III

    ERIC Educational Resources Information Center

    Jeyakumar, Sharon L. E.; Warriner, Erin M.; Raval, Vaishali V.; Ahmad, Saadia A.

    2004-01-01

    Tables permitting the conversion of short-form composite scores to full-scale IQ estimates have been published for previous editions of the Wechsler Adult Intelligence Scale (WAIS). Equivalent tables are now needed for selected subtests of the WAIS-III. This article used Tellegen and Briggs's formulae to convert the sum of scaled scores for four…

  11. Parallel computers

    SciTech Connect

    Treleaven, P.

    1989-01-01

    This book presents an introduction to object-oriented, functional, and logic parallel computing on which the fifth generation of computer systems will be based. Coverage includes concepts for parallel computing languages, a parallel object-oriented system (DOOM) and its language (POOL), an object-oriented multilevel VLSI simulator using POOL, and implementation of lazy functional languages on parallel architectures.

  12. Parallel processing

    SciTech Connect

    Krishnamurthy, E.V.

    1989-01-01

    This book provides an introduction to the fundamental principles and practice of parallel processing. After a general introduction to the many facets of parallelism, the first part of the book is devoted to the development of a coherent theoretical framework. Particular attention is paid to the modeling, semantics, and complexity of interacting parallel processes. The second part of the book considers more practical aspects such as parallel processor architecture, parallel and distributed programming, and concurrent transaction handling in databases.

  13. Female Genital Mutilation in Sierra Leone: Forms, Reliability of Reported Status, and Accuracy of Related Demographic and Health Survey Questions

    PubMed Central

    Grant, Donald S.; Berggren, Vanja

    2013-01-01

    Objective. To determine forms of female genital mutilation (FGM), assess consistency between self-reported and observed FGM status, and assess the accuracy of Demographic and Health Surveys (DHS) FGM questions in Sierra Leone. Methods. This cross-sectional study, conducted between October 2010 and April 2012, enrolled 558 females aged 12–47 from eleven antenatal clinics in northeast Sierra Leone. Data on demography, FGM status, and self-reported anatomical descriptions were collected. Genital inspection confirmed the occurrence and extent of cutting. Results. All participants reported FGM status; 4 refused genital inspection. Using the WHO classification of FGM, 31.7% had type Ib; 64.1% type IIb; and 4.2% type IIc. There was a high level of agreement between reported and observed FGM prevalence (81.2% and 81.4%, resp.). There was no correlation between DHS FGM responses and anatomic extent of cutting, as 2.7% reported pricking; 87.1% flesh removal; and 1.1% that genitalia was sewn closed. Conclusion. Types I and II are the main forms of FGM, with labia majora alterations in almost 5% of cases. Self-reports on FGM status could serve as a proxy measurement for FGM prevalence but not for FGM type. The DHS FGM questions are inaccurate for determining cutting extent. PMID:24204384

  14. The potassium permanganate method. A reliable method for differentiating amyloid AA from other forms of amyloid in routine laboratory practice.

    PubMed Central

    van Rijswijk, M. H.; van Heusden, C. W.

    1979-01-01

    Alterations in the affinity of amyloid for Congo red after incubation of tissue sections with potassium permanganate, as described by Wright et al., were studied. The affinity of amyloid for Congo red after incubation with potassium permanganate did not change in patients with myeloma-associated amyloidosis, familial amyloidotic polyneuropathy, medullary carcinoma of the thyroid, pancreatic islet amyloid, and cerebral amyloidosis. Affinity for Congo red was lost after incubation with potassium permanganate in tissue sections from patients with secondary amyloidosis and amyloidosis complicating familial Mediterranean fever (consisting of amyloid AA). Patients with primary amyloidosis could be divided into two groups, one with potassium-permanganate-sensitive and one with potassium-permanganate-resistant amyloid deposits. These two groups correlated with the clinical classification into typical organ distribution (presenting with nephropathy) and atypical organ distribution (presenting with cardiomyopathy, nephropathy, and glossopathy) and the expected presence of amyloid AA or amyloid AL. Potassium permanganate sensitivity seems to be restricted to amyloid AA. The potassium permanganate method can be important in dividing the major forms of generalized amyloidosis into AA amyloid and non-AA amyloid. This can be used for differentiating early stages of the disease and cases otherwise difficult to classify. It is important to define patient groups properly, especially in evaluating the effect of therapeutic measures. (Am J Pathol 97:43-58, 1979) PMID:495695

  15. Reliability and validity of the Spanish version of the Child Health and Illness Profile (CHIP) Child-Edition, Parent Report Form (CHIP-CE/PRF)

    PubMed Central

    2010-01-01

    Background The objectives of the study were to assess the reliability and the content, construct, and convergent validity of the Spanish version of the CHIP-CE/PRF, to analyze parent-child agreement, and to compare the results with those of the original U.S. version. Methods Parents of a representative sample of children aged 6-12 years were selected from 9 primary schools in Barcelona. Test-retest reliability was assessed in a convenience subsample of parents from 2 schools. Parents completed the Spanish version of the CHIP-CE/PRF. The Achenbach Child Behavior Checklist (CBCL) was administered to a convenience subsample. Results The overall response rate was 67% (n = 871). There was no floor effect. A ceiling effect was found in 4 subdomains. Reliability was acceptable at the domain level (internal consistency = 0.68-0.86; test-retest intraclass correlation coefficients = 0.69-0.85). Younger girls had better scores on Satisfaction and Achievement than older girls. The Comfort domain score was lower (worse) in children with a probable mental health problem, with a high effect size (ES = 1.45). The level of parent-child agreement was low (0.22-0.37). Conclusions The results of this study suggest that the parent version of the Spanish CHIP-CE has acceptable psychometric properties, although further research is needed to check reliability at the sub-domain level. The CHIP-CE parent report form provides a comprehensive, psychometrically sound measure of health for Spanish children 6 to 12 years old. It can offer a complementary perspective to the self-reported measure or an alternative when the child is unable to complete the questionnaire. In general, the results are similar to those of the original U.S. version. PMID:20678198

  16. Nutritional form for the elderly is a reliable and valid instrument for the determination of undernutrition risk, and it is associated with health-related quality of life.

    PubMed

    Gombos, Tímea; Kertész, Krisztina; Csíkos, Ágnes; Söderhamn, Ulrika; Söderhamn, Olle; Prohászka, Zoltán

    2008-02-01

    Undernutrition is a common problem associated with clinical complications such as impaired immune response, reduced muscle strength, impaired wound healing, and susceptibility to infections; therefore, it is an important treatment target to reduce morbidity and mortality associated with chronic diseases and aging. The aim of the present study was to apply a reliable and valid instrument for the determination of undernutrition risk in an in-hospital patient population and to describe possible associations between risk of undernutrition and some aspects of health-related quality of life in patients with chronic diseases. Fifty-six adult patients with different chronic diseases were interviewed with the NUFFE questionnaire and the EQ-5D. Anthropometric measurements were performed. The reliability and validity of the NUFFE instrument were tested, and its correlation with the EQ-5D was calculated. EuroQol scores correlated significantly with the total NUFFE scores and with the items constructing the most important factor of the instrument, which explained 53.74% of its variance. The Nutritional Form for the Elderly was shown to be a reliable instrument in the study group: its internal consistency measured by Cronbach's alpha was 0.62, and the item-total score correlations were significant for half of the items. Criterion-related validity, concurrent validity, and construct validity of the NUFFE were established. We have shown that an impaired level of health-related quality of life is an important determinant of risk for undernutrition. The Nutritional Form for the Elderly is an appropriate instrument to estimate undernutrition risk in a general, in-hospital patient population with various chronic diseases and to identify "at risk" patients who may benefit from professional dietary interventions to reduce undernutrition-related complications. PMID:19083389

  17. Computerized life and reliability modelling for turboprop transmissions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Radil, K. C.; Lewicki, D. G.; Coy, J. J.

    1988-01-01

    A generalized life and reliability model is presented for parallel shaft geared prop-fan and turboprop aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on classical fatigue theory and the two parameter Weibull failure distribution. A computer program was developed to calculate the transmission life and reliability. The program is modular. In its present form, the program can analyze five different transmission arrangements. However, the program can be modified easily to include additional transmission arrangements. An example is included which compares the life of a compound two-stage transmission with the life of a split-torque, parallel compound two-stage transmission, as calculated by the computer program.
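
    The combination rule described here, system reliability as the product of independent two-parameter Weibull component reliabilities along the main load path, can be sketched numerically. The component parameters below are hypothetical illustrations, not values from the report.

    ```python
    import numpy as np

    def weibull_rel(t, eta, beta):
        """Two-parameter Weibull reliability R(t) = exp(-(t/eta)^beta)."""
        return np.exp(-(t / eta) ** beta)

    # Hypothetical bearings/gears in the main load path: (eta hours, beta).
    components = [(9_000, 1.3), (12_000, 1.3), (20_000, 2.5), (25_000, 2.5)]

    t = np.linspace(0.0, 5_000.0, 501)
    r_sys = np.prod([weibull_rel(t, eta, beta) for eta, beta in components], axis=0)

    # System L10 life: first time at which system reliability falls to 0.90.
    l10 = t[np.searchsorted(-r_sys, -0.90)]
    print(f"approximate system L10 life: {l10:.0f} h")
    ```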

  19. Parallel Inhibition of Dopamine Amacrine Cells and Intrinsically Photosensitive Retinal Ganglion Cells in a Non-Image-Forming Visual Circuit of the Mouse Retina.

    PubMed

    Vuong, Helen E; Hardi, Claudia N; Barnes, Steven; Brecha, Nicholas C

    2015-12-01

    An inner retinal microcircuit composed of dopamine (DA)-containing amacrine cells and melanopsin-containing, intrinsically photosensitive retinal ganglion cells (M1 ipRGCs) processes information about the duration and intensity of light exposures, mediating light adaptation, circadian entrainment, pupillary reflexes, and other aspects of non-image-forming vision. The neural interaction is reciprocal: M1 ipRGCs excite DA amacrine cells, and these, in turn, feed inhibition back onto M1 ipRGCs. We found that the neuropeptide somatostatin [somatotropin release inhibiting factor (SRIF)] also inhibits the intrinsic light response of M1 ipRGCs and postulated that, to tune the bidirectional interaction of M1 ipRGCs and DA amacrine cells, SRIF amacrine cells would provide inhibitory modulation to both cell types. SRIF amacrine cells, DA amacrine cells, and M1 ipRGCs form numerous contacts. DA amacrine cells and M1 ipRGCs express the SRIF receptor subtypes sst(2A) and sst4, respectively. SRIF modulation of the microcircuit was investigated with targeted patch-clamp recordings of DA amacrine cells in TH-RFP mice and M1 ipRGCs in OPN4-EGFP mice. SRIF increases K(+) currents, decreases Ca(2+) currents, and inhibits spike activity in both cell types, actions reproduced by the selective sst(2A) agonist L-054,264 (N-[(1R)-2-[[[(1S*,3R*)-3-(aminomethyl)cyclohexyl]methyl]amino]-1-(1H-indol-3-ylmethyl)-2-oxoethyl]spiro[1H-indene-1,4'-piperidine]-1'-carboxamide) in DA amacrine cells and the selective sst4 agonist L-803,087 (N(2)-[4-(5,7-difluoro-2-phenyl-1H-indol-3-yl)-1-oxobutyl]-L-arginine methyl ester trifluoroacetate) in M1 ipRGCs. These parallel actions of SRIF may serve to counteract the disinhibition of M1 ipRGCs caused by SRIF inhibition of DA amacrine cells. This allows the actions of SRIF on DA amacrine cells to proceed with adjusting retinal DA levels without destabilizing light responses by M1 ipRGCs, which project to non-image-forming targets in the brain. PMID:26631476

  20. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  1. Application of principal component analysis (PCA) and improved joint probability distributions to the inverse first-order reliability method (I-FORM) for predicting extreme sea states

    DOE PAGES Beta

    Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.; Neary, Vincent S.

    2016-01-06

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.
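
    A minimal sketch of the pipeline the abstract describes: rotate the (Hs, Te) data into uncorrelated principal components, fit marginal distributions, trace the I-FORM reliability circle in standard-normal space, and map it back to physical variables. The data, the normal marginals, and the 50-year/3-hour sea-state assumptions are all illustrative; the paper's improved distribution fits would replace the `norm` marginals used here.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical hindcast data: wave height Hs (m) and energy period Te (s).
    rng = np.random.default_rng(0)
    hs = 2.0 * rng.weibull(1.6, 5_000)
    te = 4.0 + 1.2 * hs + rng.normal(0.0, 0.8, 5_000)
    data = np.column_stack([hs, te])

    # 1. PCA: uncorrelated representation of the variables.
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    pc = (data - mean) @ vt.T

    # 2. Fit marginal distributions to the components (illustrative choice).
    fits = [stats.norm.fit(pc[:, i]) for i in range(2)]

    # 3. I-FORM: reliability circle for a 50-year contour of 3-h sea states.
    n_states = 50 * 365.25 * 8               # sea states per recurrence interval
    beta = stats.norm.ppf(1 - 1 / n_states)  # reliability index
    theta = np.linspace(0.0, 2.0 * np.pi, 360)
    u = beta * np.column_stack([np.cos(theta), np.sin(theta)])

    # 4. Map the circle back: standard normal -> component -> physical space.
    comp = np.column_stack(
        [stats.norm.ppf(stats.norm.cdf(u[:, i]), *fits[i]) for i in range(2)]
    )
    contour = comp @ vt + mean
    print("contour extremes (Hs, Te):", contour.max(axis=0).round(2))
    ```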

  2. A new model of in vitro fungal biofilms formed on human nail fragments allows reliable testing of laser and light therapies against onychomycosis.

    PubMed

    Vila, Taissa Vieira Machado; Rozental, Sonia; de Sá Guimarães, Claudia Maria Duarte

    2015-04-01

    Onychomycoses represent approximately 50% of all nail diseases worldwide. In warmer and more humid countries like Brazil, the incidence of onychomycoses caused by non-dermatophyte molds (NDM, including Fusarium spp.) or yeasts (including Candida albicans) has been increasing. Traditional antifungal treatments used for the dermatophyte-borne disease are less effective against onychomycoses caused by NDM. Although some laser and light treatments have demonstrated clinical efficacy against onychomycosis, their US Food and Drug Administration (FDA) approval as "first-line" therapy is pending, partly due to the lack of well-demonstrated fungicidal activity in a reliable in vitro model. Here, we describe a reliable new in vitro model to determine the fungicidal activity of laser and light therapies against onychomycosis caused by Fusarium oxysporum and C. albicans. Biofilms formed in vitro on sterile human nail fragments were treated with a 1064 nm neodymium-doped yttrium aluminum garnet (Nd:YAG) laser, 420 nm intense pulsed light (IPL) followed by Nd:YAG, or near-infrared light (NIR, 700-1400 nm). Light and laser antibiofilm effects were evaluated using a cell viability assay and scanning electron microscopy (SEM). All treatments were highly effective against C. albicans and F. oxysporum biofilms, resulting in decreases in cell viability of 45-60% for C. albicans and 92-100% for F. oxysporum. The model described here yielded fungicidal activities that matched more closely those observed in the clinic when compared to published in vitro models for laser and light therapies. Thus, our model might represent an important tool for the initial testing, validation, and "fine-tuning" of laser and light therapies against onychomycosis. PMID:25471266

  3. The Zarit Caregiver Burden Interview Short Form (ZBI-12) in spouses of Veterans with Chronic Spinal Cord Injury, Validity and Reliability of the Persian Version

    PubMed Central

    Rajabi-Mashhadi, Mohammad T; Mashhadinejad, Hosein; Ebrahimzadeh, Mohammad H; Golhasani-Keshtan, Farideh; Ebrahimi, Hanieh; Zarei, Zahra

    2015-01-01

    Background: To test the psychometric properties of the Persian version of the Zarit Burden Interview (ZBI-12) in the Iranian population. Methods: After translating and culturally adapting the questionnaire into Persian, 100 caregiver spouses of Iran-Iraq war (1980-88) veterans with chronic spinal cord injury who live in the city of Mashhad, Iran, were invited to participate in the study. The Persian version of the ZBI-12, accompanied by the Persian SF-36, was completed by the caregivers to test the validity of the Persian ZBI-12. A Pearson's correlation coefficient was calculated for validity testing. In order to assess the reliability of the Persian ZBI-12, we administered the ZBI-12 randomly to 48 caregiver spouses again 3 days later. Results: Generally, the internal consistency of the questionnaire was found to be strong (Cronbach's alpha 0.77). The intercorrelation matrix between the different domains of the ZBI-12 at test-retest was 0.78. The results revealed that the majority of questions in the Persian ZBI-12 have a significant correlation with each other. In terms of validity, our results showed significant correlations between some domains of the Persian version of the Short Form Health Survey-36 and the Persian Zarit Burden Interview, such as Q1 with Role Physical (P = 0.03), General Health (P = 0.034), Social Function (0.037), and Mental Health (0.023), and Q3 with Physical Function (P = 0.001), Vitality (0.002), and Social Function (0.001). Conclusions: Our findings suggest that the Persian version of the Zarit Burden Interview is both a valid and reliable instrument for measuring the burden of caregivers of individuals with chronic spinal cord injury. PMID:25692171

  4. Reliability and structural integrity

    NASA Technical Reports Server (NTRS)

    Davidson, J. R.

    1976-01-01

    An analytic model is developed to calculate the reliability of a structure after it is inspected for cracks. The model accounts for the growth of undiscovered cracks between inspections and their effect upon the reliability after subsequent inspections. The model is based upon a differential form of Bayes' Theorem for reliability, and upon fracture mechanics for crack growth.
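
    The coupling described here, crack growth between inspections plus a Bayes update at each inspection, has a standard generic form. The expressions below are the usual textbook formulation with a probability-of-detection (POD) curve, offered as a hedged reconstruction rather than the report's exact model.

    ```latex
    % Posterior crack-size density after an inspection that finds no crack,
    % where POD(a) is the probability of detecting a crack of size a:
    \[
      f'(a) = \frac{\bigl[1 - \mathrm{POD}(a)\bigr]\, f(a)}
                   {\int_{0}^{\infty} \bigl[1 - \mathrm{POD}(a')\bigr]\, f(a')\, da'} .
    \]
    % Between inspections, undiscovered cracks grow by a fracture-mechanics
    % relation such as Paris' law:
    \[
      \frac{da}{dN} = C\,(\Delta K)^{m} .
    \]
    ```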

  5. Item Selection for the Development of Parallel Forms from an IRT-Based Seed Test Using a Sampling and Classification Approach

    ERIC Educational Resources Information Center

    Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan

    2012-01-01

    Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…

  7. Competitive Dominance among Strains of Luminous Bacteria Provides an Unusual Form of Evidence for Parallel Evolution in Sepiolid Squid-Vibrio Symbioses

    PubMed Central

    Nishiguchi, Michele K.; Ruby, Edward G.; McFall-Ngai, Margaret J.

    1998-01-01

    One of the principal assumptions in symbiosis research is that associated partners have evolved in parallel. We report here experimental evidence for parallel speciation patterns among several partners of the sepiolid squid-luminous bacterial symbioses. Molecular phylogenies for 14 species of host squids were derived from sequences of both the nuclear internal transcribed spacer region and the mitochondrial cytochrome oxidase subunit I; the glyceraldehyde phosphate dehydrogenase locus was sequenced for phylogenetic determinations of 7 strains of bacterial symbionts. Comparisons of trees constructed for each of the three loci revealed a parallel phylogeny between the sepiolids and their respective symbionts. Because both the squids and their bacterial partners can be easily cultured independently in the laboratory, we were able to couple these phylogenetic analyses with experiments to examine the ability of the different symbiont strains to compete with each other during the colonization of one of the host species. Our results not only indicate a pronounced dominance of native symbiont strains over nonnative strains, but also reveal a hierarchy of symbiont competency that reflects the phylogenetic relationships of the partners. For the first time, molecular systematics has been coupled with experimental colonization assays to provide evidence for the existence of parallel speciation among a set of animal-bacterial associations. PMID:9726861

  8. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

    This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygons, spheres, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  9. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low-cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, while other TCPs running in parallel provide high-bandwidth service to a single application); and (3) coarse-grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism), also with near-linear speed-ups.

  10. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  11. VLSI and parallel computation

    SciTech Connect

    Suaya, R.; Birtwistle, G.

    1988-01-01

    This volume presents a cross-section of the most current research in parallel computation, encompassing theoretical models, VLSI design, routing, and machine implementations. The book comprises a series of invited tutorial chapters on advanced topics in VLSI and concurrency. The chapters have been revised and updated to form a coherent volume exploring issues of fundamental importance in parallel computation, as well as significant research results in the contributors' specialties. Topics include load sharing models, PRAM models of computation, neural networks, Cochlea models, the design of algorithms for explicit concurrency, and VLSI CAD.

  12. Parallel programming

    SciTech Connect

    Perrott, R.H.

    1987-01-01

    This book examines the major hardware developments and programming concepts that have influenced the introduction of parallelism. It provides an overview of some of the features of specific machine architectures and their interaction with developments in software technology. The independent areas of multiprocessor and distributed programming, programming array and vector processors, and data flow programming are also examined in detail. Topics covered include: hardware technology developments; software technology developments; mutual exclusion; process synchronization; message passing primitives; Modula-2; Pascal Plus; Ada; Occam: a distributed computing language; Cray-1 FORTRAN translator: CFT; CDC Cyber FORTRAN; Illiac IV CFD FORTRAN; distributed array processor FORTRAN; Actus: a Pascal-based language; data flow programming.

  13. Examining the reliability and validity of a modified version of the International Physical Activity Questionnaire, long form (IPAQ-LF) in Nigeria: a cross-sectional study

    PubMed Central

    Oyeyemi, Adewale L; Bello, Umar M; Philemon, Saratu T; Aliyu, Habeeb N; Majidadi, Rebecca W; Oyeyemi, Adetoyeje Y

    2014-01-01

    Objectives To investigate the reliability and an aspect of validity of a modified version of the long International Physical Activity Questionnaire (Hausa IPAQ-LF) in Nigeria. Design Cross-sectional study, examining the reliability and construct validity of the Hausa IPAQ-LF compared with anthropometric and biological variables. Setting Metropolitan Maiduguri, the capital city of Borno State in Nigeria. Participants 180 Nigerian adults (50% women) with a mean age of 35.6 (SD=10.3) years, recruited from neighbourhoods with diverse socioeconomic status and walkability. Outcome measures Domains (domestic physical activity (PA), occupational PA, leisure-time PA, active transportation and sitting time) and intensities of PA (vigorous, moderate and walking) were measured with the Hausa IPAQ-LF on two different occasions, 8 days apart. Outcomes for construct validity were measured body mass index (BMI), systolic blood pressure (SBP) and diastolic blood pressure (DBP). Results The Hausa IPAQ-LF demonstrated good test-retest reliability (intraclass correlation coefficient, ICC>0.75) for total PA (ICC=0.79, 95% CI 0.65 to 0.82), occupational PA (ICC=0.77, 95% CI 0.68 to 0.82), active transportation (ICC=0.82, 95% CI 0.75 to 0.87) and vigorous intensity activities (ICC=0.82, 95% CI 0.76 to 0.87). Reliability was substantially higher for total PA (ICC=0.80), occupational PA (ICC=0.78), leisure-time PA (ICC=0.75) and active transportation (ICC=0.80) in men than in women, but domestic PA (ICC=0.38) and sitting time (ICC=0.71) demonstrated more substantial reliability coefficients in women than in men. For the construct validity, domestic PA was significantly related mainly with SBP (r=-0.27) and DBP (r=-0.17), and leisure-time PA and total PA were significantly related only with SBP (r=-0.16) and BMI (r=-0.29), respectively. Similarly, moderate-intensity PA was mainly related with SBP (r=-0.16, p<0.05) and DBP (r=-0.21, p<0.01), but vigorous-intensity PA was only related with BMI (r=-0.11, p<0.05). Conclusions The modified Hausa IPAQ-LF demonstrated sufficient evidence of test-retest reliability and may be valid for assessing context-specific PA behaviours of adults in Nigeria. PMID:25448626
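
    For readers unfamiliar with the ICC statistics quoted above, the sketch below computes the standard two-way ICC(2,1) and ICC(3,1) coefficients from an n-subjects-by-k-sessions score matrix. The data are synthetic stand-ins, not the Hausa IPAQ-LF data.

```python
# Two-way intraclass correlation coefficients from a subjects x sessions matrix.
import numpy as np

def icc(scores):
    """Return (ICC(2,1), ICC(3,1)) for an (n subjects, k sessions) array."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between sessions
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    msr, msc = ss_rows / (n - 1), ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    icc21 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    icc31 = (msr - mse) / (msr + (k - 1) * mse)
    return icc21, icc31

rng = np.random.default_rng(2)
true_pa = rng.normal(600, 200, 180)              # latent activity level (synthetic)
sessions = np.column_stack([true_pa + rng.normal(0, 100, 180) for _ in range(2)])
print("ICC(2,1) = %.2f, ICC(3,1) = %.2f" % icc(sessions))
```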

  14. d(CGGTGGT) forms an octameric parallel G-quadruplex via stacking of unusual G(:C):G(:C):G(:C):G(:C) octads

    PubMed Central

    Borbone, Nicola; Amato, Jussara; Oliviero, Giorgia; D'Atri, Valentina; Gabelica, Valérie; De Pauw, Edwin; Piccialli, Gennaro; Mayol, Luciano

    2011-01-01

    Among non-canonical DNA secondary structures, G-quadruplexes are currently widely studied because of their probable involvement in many pivotal biological roles, and for their potential use in nanotechnology. The overall quadruplex scaffold can exhibit several morphologies through intramolecular or intermolecular organization of G-rich oligodeoxyribonucleic acid strands. In particular, several G-rich strands can form higher-order assemblies by multimerization between several G-quadruplex units. Here, we report on the identification of a novel dimerization pathway. Our nuclear magnetic resonance, circular dichroism, UV, gel electrophoresis and mass spectrometry studies on the DNA sequence d(CGGTGGT) demonstrate that this sequence forms an octamer when annealed in the presence of K+ or NH4+ ions, through the 5′-5′ stacking of two tetramolecular G-quadruplex subunits via unusual G(:C):G(:C):G(:C):G(:C) octads. PMID:21715378

  15. Parallelism in System Tools

    SciTech Connect

    Matney, Sr., Kenneth D; Shipman, Galen M

    2010-01-01

    The Cray XT, when employed in conjunction with the Lustre filesystem, has provided the ability to generate huge amounts of data in the form of many files. Typically, this is accommodated by satisfying the requests of large numbers of Lustre clients in parallel. In contrast, a single service node (Lustre client) cannot adequately service such datasets. This means that the use of traditional UNIX tools like cp, tar, et al. (which have no parallel capability) can result in substantial impact to user productivity. For example, to copy a 10 TB dataset from the service node using cp would take about 24 hours, under more or less ideal conditions. During production operation, this could easily extend to 36 hours. In this paper, we introduce the Lustre User Toolkit for Cray XT, developed at the Oak Ridge Leadership Computing Facility (OLCF). We will show that Linux commands, implementing highly parallel I/O algorithms, provide orders of magnitude greater performance, greatly reducing impact to productivity.
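
    The general idea, spreading one large copy across many workers instead of a single serial cp-style loop, can be sketched in a few lines. This is an illustration of the approach only, not the OLCF toolkit itself, and the helper names and paths are placeholders.

```python
# Hedged sketch of a parallel recursive copy: one worker process per file batch.
import shutil
from pathlib import Path
from concurrent.futures import ProcessPoolExecutor

def copy_one(pair):
    src, dst = pair
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)                       # copy data and metadata

def parallel_copy(src_root, dst_root, workers=16):
    src_root, dst_root = Path(src_root), Path(dst_root)
    pairs = [(p, dst_root / p.relative_to(src_root))
             for p in src_root.rglob("*") if p.is_file()]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(copy_one, pairs))          # many files move concurrently

if __name__ == "__main__":
    parallel_copy("/lustre/dataset", "/archive/dataset")   # placeholder paths
```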

  16. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  17. A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2012-01-01

    Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true
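
    Under a simple normal true-score model the two definitions coincide, which is easy to check numerically. The sketch below uses an invented error variance and only illustrates the definitions, not the paper's derivation.

```python
# Check: corr(estimates on two parallel forms) equals corr(estimate, truth)^2.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
theta = rng.normal(0, 1, n)                 # true abilities
err = 0.5                                   # per-form error SD (assumption)
est1 = theta + rng.normal(0, err, n)        # form 1 ability estimates
est2 = theta + rng.normal(0, err, n)        # parallel form 2 estimates

print(np.corrcoef(est1, est2)[0, 1])        # definition (a): ~ 1/(1+0.25) = 0.80
print(np.corrcoef(est1, theta)[0, 1] ** 2)  # definition (b): also ~ 0.80
```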

  18. A microarray-based method for the parallel analysis of genotypes and expression profiles of wood-forming tissues in Eucalyptus grandis

    PubMed Central

    Barros, Eugenia; van Staden, Carol-Ann; Lezar, Sabine

    2009-01-01

    Background Fast-growing Eucalyptus grandis trees are one of the most efficient producers of wood in South Africa. The most serious problem affecting the quality and yield of solid wood products is the occurrence of end splitting in logs. Selection of E. grandis planting stock that exhibit preferred wood qualities is thus a priority of the South African forestry industry. We used microarray-based DNA-amplified fragment length polymorphism (AFLP) analysis in combination with expression profiling to develop fingerprints and profile gene expression of wood-forming tissue of seven different E. grandis trees. Results A 1578-probe cDNA microarray was constructed by arraying 768 cDNA-AFLP clones and 810 cDNA library clones from seven individual E. grandis trees onto silanised slides. The results revealed that the 32% of the spotted fragments that showed distinct expression patterns (with a fold change of at least 1.4 or -1.4 and a p value of 0.01) could be grouped into clusters representing co-expressed genes. Evaluation of the binary distribution of cDNA-AFLP fragments on the array showed that the individual genotypes could be discriminated. Conclusion A simple, yet general method was developed for genotyping and expression profiling of wood-forming tissue of E. grandis trees differing in their splitting characteristics and in their lignin contents. Evaluation of gene expression profiles and the binary distribution of cDNA-AFLP fragments on the chip suggest that the prototype chip developed could be useful for transcript profiling and for the identification of Eucalyptus trees with preferred wood quality traits in commercial breeding programmes. PMID:19473481

  19. Broadband monitoring simulation with massively parallel processors

    NASA Astrophysics Data System (ADS)

    Trubetskov, Mikhail; Amotchkina, Tatiana; Tikhonravov, Alexander

    2011-09-01

    Modern efficient optimization techniques, namely needle optimization and gradual evolution, enable one to design optical coatings of any type. Moreover, these techniques make it possible to obtain multiple solutions with close spectral characteristics. It is important, therefore, to develop software tools that allow one to choose a practically optimal solution from a wide variety of possible theoretical designs. A practically optimal solution provides the highest production yield when the optical coating is manufactured. Computational manufacturing is a low-cost tool for choosing a practically optimal solution. The theory of probability predicts that reliable production yield estimations require many hundreds or even thousands of computational manufacturing experiments, so a reliable estimate of the production yield may require too much computational time. The most time-consuming operation is calculation of the discrepancy function used by a broadband monitoring algorithm. This function is formed by a sum of terms over the wavelength grid. These terms can be computed simultaneously in different computational threads, which opens great opportunities for parallelization. Multi-core and multi-processor systems can provide speedups of several times. Additional potential for further acceleration of computations is connected with using Graphics Processing Units (GPU). A modern GPU consists of hundreds of massively parallel processors and is capable of performing floating-point operations efficiently.
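
    Because the discrepancy function is a sum of independent per-wavelength terms, it decomposes naturally into partial sums evaluated concurrently. The sketch below shows the decomposition with CPU worker processes; the merit-function form and all names are assumptions for illustration, not the authors' code, and a GPU version would tile the same partial sums across thousands of threads.

```python
# Chunked parallel evaluation of a sum-over-wavelengths discrepancy function.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def partial_sum(args):
    """Partial sum of squared deviations over one chunk of the wavelength grid."""
    measured, target = args
    return float(((measured - target) ** 2).sum())

def discrepancy(measured, target, workers=4):
    chunks = zip(np.array_split(measured, workers), np.array_split(target, workers))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    grid = np.linspace(400e-9, 900e-9, 100_000)            # wavelength grid
    target = np.sin(grid * 2e7) ** 2                       # stand-in target spectrum
    measured = target + np.random.default_rng(4).normal(0, 0.01, grid.size)
    print(f"discrepancy = {discrepancy(measured, target):.4f}")
```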

  20. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Word and Passage Reading Fluency Assessments: Grade 3. Technical Report #1218

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

  1. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  2. Reliability training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  3. The RecA-binding pilE G4 Sequence Essential for Pilin Antigenic Variation forms Parallel-stranded Monomeric and 5′-End Stacked Dimeric G-quadruplexes

    PubMed Central

    Kuryavyi, Vitaly; Cahoon, Laty A.; Seifert, H. Steven; Patel, Dinshaw J.

    2013-01-01

    Neisseria gonorrhoeae is an obligate human pathogen that can escape immune surveillance through antigenic variation of surface structures such as pili. A G-quadruplex-forming (G4) sequence (5′-G3TG3TTG3TG3) located upstream of the N. gonorrhoeae pilin expression locus (pilE) is necessary for initiation of pilin antigenic variation, a recombination-based, high-frequency, diversity-generation system. We have determined NMR-based structures of the all-parallel-stranded monomeric and novel 5′-end-stacked dimeric pilE G-quadruplexes in monovalent cation-containing solutions. We demonstrate that the three-layered all-parallel-stranded monomeric pilE G-quadruplex, which contains single-residue double-chain-reversal loops and can be modeled without steric clashes into the three-nucleotide DNA-binding site of RecA, binds and promotes E. coli RecA-mediated strand exchange in vitro. We discuss how interactions between RecA and the monomeric pilE G-quadruplex could facilitate the specialized recombination reactions leading to pilin diversification. PMID:23085077

  4. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
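
    One plausible reading of the symmetrization step mentioned above relies on the standard form-factor reciprocity relation; the manipulation below is common in the radiosity literature and is shown as a hedged reconstruction rather than the dissertation's exact transformation.

```latex
% Radiosity system B_i - rho_i * sum_j F_ij B_j = E_i, scaled row-by-row by
% A_i / rho_i. Reciprocity A_i F_{ij} = A_j F_{ji} then makes the matrix symmetric.
\[
  B_i - \rho_i \sum_j F_{ij} B_j = E_i
  \quad\Longrightarrow\quad
  \frac{A_i}{\rho_i} B_i - \sum_j A_i F_{ij} B_j = \frac{A_i}{\rho_i} E_i ,
\]
\[
  M_{ij} = \frac{A_i}{\rho_i}\,\delta_{ij} - A_i F_{ij},
  \qquad
  M_{ij} = M_{ji} \;\text{for } i \neq j \text{ by reciprocity.}
\]
```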

  5. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
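
    For reference, the direct O(N^2) sum that fast Gauss transforms replace can be written in a few lines; the parameter names below are illustrative, and this naive version is the conceptual baseline for the O(N/n_p)-type parallel algorithms above.

```python
# Naive discrete Gauss transform: G(y_j) = sum_i q_i * exp(-|y_j - x_i|^2 / delta).
import numpy as np

def direct_gauss_transform(sources, targets, weights, delta):
    # Pairwise squared distances, shape (M targets, N sources): O(M*N) work/memory.
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / delta) @ weights

rng = np.random.default_rng(5)
x = rng.random((1000, 3))        # source points
y = rng.random((1000, 3))        # target points
q = rng.random(1000)             # source weights
print(direct_gauss_transform(x, y, q, delta=0.1)[:3])
```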

  6. Assessing the Discriminant Ability, Reliability, and Comparability of Multiple Short Forms of the Boston Naming Test in an Alzheimer's Disease Center Cohort

    PubMed Central

    Katsumata, Yuriko; Mathews, Melissa; Abner, Erin L.; Jicha, Gregory A.; Caban-Holt, Allison; Smith, Charles D.; Nelson, Peter T.; Kryscio, Richard J.; Schmitt, Frederick A.; Fardo, David W.

    2015-01-01

    Background The Boston Naming Test (BNT) is a commonly used neuropsychological test of confrontation naming that aids in determining the presence and severity of dysnomia. Many short versions of the original 60-item test have been developed and are routinely administered in clinical/research settings. Because of the common need to translate similar measures within and across studies, it is important to evaluate the operating characteristics and agreement of different BNT versions. Methods We analyzed longitudinal data of research volunteers (n = 681) from the University of Kentucky Alzheimer's Disease Center longitudinal cohort. Conclusions With the notable exception of the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) 15-item BNT, short forms were internally consistent and highly correlated with the full version; these measures varied by diagnosis and generally improved from normal to mild cognitive impairment (MCI) to dementia. All short forms retained the ability to discriminate between normal subjects and those with dementia. The ability to discriminate between normal and MCI subjects was less strong for the short forms than the full BNT, but they exhibited similar patterns. These results have important implications for researchers designing longitudinal studies, who must consider that the statistical properties of even closely related test forms may be quite different. PMID:25613081

  7. Parallel pivoting combined with parallel reduction

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1987-01-01

    Parallel algorithms for triangularization of large, sparse, and unsymmetric matrices are presented. The method combines the parallel reduction with a new parallel pivoting technique, control over generations of fill-ins and a check for numerical stability, all done in parallel with the work being distributed over the active processes. The parallel technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.

  8. Establishing the Validity and Reliability of the Student Practice Evaluation Form-Revised (SPEF-R) in Occupational Therapy Practice Education: A Rasch Analysis.

    PubMed

    Rodger, Sylvia; Chien, Chi-Wen; Turpin, Merrill; Copley, Jodie; Coleman, Allison; Brown, Ted; Caine, Anne-Maree

    2016-03-01

    This study investigated construct validity and internal consistency of the Student Practice Evaluation Form-Revised Edition Package (SPEF-R) which evaluates students' performance on practice education placements. The SPEF-R has 38 items covering eight domains, and each item is rated on a 5-point rating scale. Data from 125 students' final placement evaluations in their final year of study were analyzed using the Rasch measurement model. The SPEF-R exhibited satisfactory rating scale performance and unidimensionality across the eight domains, providing construct validity evidence. Only 2 items misfit the Rasch model's expectations (both related to students' performance with client groups, which were often rated as not observed). Additionally, the internal consistency of each SPEF-R domain was found to be excellent (Cronbach's α = .86 to .91) and all individual items had reasonable to excellent item-total correlation coefficients. The study results indicate that the SPEF-R can be used with confidence to evaluate students' performance during placements, but continued validation and refinement are required. PMID:24214417

  9. SSD Reliability

    NASA Astrophysics Data System (ADS)

    Zambelli, C.; Olivo, P.

    SSDs are complex electronic systems prone to wear-out and failure mechanisms mainly related to their basic component: the Flash memory. The reliability of a Flash memory depends on many technological and architectural aspects, from the physical concepts on which the storage paradigm is based to the interaction among cells, from possible new physical mechanisms arising as the technology scales down to the countermeasures adopted within the memory controller to face erroneous behaviors.

  10. Parallel NPARC: Implementation and Performance

    NASA Technical Reports Server (NTRS)

    Townsend, S. E.

    1996-01-01

    Version 3 of the NPARC Navier-Stokes code includes support for large-grain (block level) parallelism using explicit message passing between a heterogeneous collection of computers. This capability has the potential for significant performance gains, depending upon the block data distribution. The parallel implementation uses a master/worker arrangement of processes. The master process assigns blocks to workers, controls worker actions, and provides remote file access for the workers. The processes communicate via explicit message passing using an interface library which provides portability to a number of message passing libraries, such as PVM (Parallel Virtual Machine). A Bourne shell script is used to simplify the task of selecting hosts, starting processes, retrieving remote files, and terminating a computation. This script also provides a simple form of fault tolerance. An analysis of the computational performance of NPARC is presented, using data sets from an F/A-18 inlet study and a Rocket Based Combined Cycle Engine analysis. Parallel speedup and overall computational efficiency were obtained for various NPARC run parameters on a cluster of IBM RS6000 workstations. The data show that although NPARC performance compares favorably with the estimated potential parallelism, typical data sets used with previous versions of NPARC will often need to be reblocked for optimum parallel performance. In one of the cases studied, reblocking increased peak parallel speedup from 3.2 to 11.8.
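
    A toy version of the master/worker arrangement described above, with Python multiprocessing standing in for PVM-style message passing, might look like the following; the block contents and the "solve" step are placeholders.

```python
# Master assigns grid blocks to workers via a task queue; workers return results.
import multiprocessing as mp

def worker(tasks, results):
    for block_id, cells in iter(tasks.get, None):    # None is the stop sentinel
        results.put((block_id, sum(cells)))          # stand-in for a flow solve

if __name__ == "__main__":
    blocks = {i: list(range(i * 100, (i + 1) * 100)) for i in range(8)}
    tasks, results = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(tasks, results)) for _ in range(4)]
    for p in procs:
        p.start()
    for item in blocks.items():                      # master assigns blocks
        tasks.put(item)
    for _ in procs:
        tasks.put(None)                              # tell each worker to exit
    solved = dict(results.get() for _ in blocks)
    for p in procs:
        p.join()
    print(len(solved), "blocks solved")
```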

  11. Results from the translation and adaptation of the Iranian Short-Form McGill Pain Questionnaire (I-SF-MPQ): preliminary evidence of its reliability, construct validity and sensitivity in an Iranian pain population

    PubMed Central

    2011-01-01

    Background The Short Form McGill Pain Questionnaire (SF-MPQ) is one of the most widely used instruments to assess pain. The aim of this study was to translate and culturally adapt the questionnaire for Farsi (the official language of Iran) speakers in order to test its reliability and sensitivity. Methods We followed Guillemin's guidelines for cross-cultural adaptation of health-related measures, which include forward-backward translations, expert committee meetings, and face validity testing in a pilot group. Subsequently, the questionnaire was administered to a sample of 100 diverse chronic pain patients attending a tertiary pain and rehabilitation clinic. In order to evaluate test-retest reliability, patients completed the questionnaire in the morning and early evening of their first visit. Finally, patients were asked to complete the questionnaire for the third time after completing a standardized treatment protocol three weeks later. Intraclass correlation coefficient (ICC) was used to evaluate reliability. We used principal component analysis to assess construct validity. Results Ninety-two subjects completed the questionnaire both in the morning and in the evening of the first visit (test-retest reliability), and after three weeks (sensitivity to change). Eight patients who did not finish the treatment protocol were excluded from the study. Internal consistency was found by Cronbach's alpha to be 0.951, 0.832 and 0.840 for sensory, affective and total scores respectively. ICC resulted in 0.906 for sensory, 0.712 for affective and 0.912 for total pain score. Item to subscale score correlations supported the convergent validity of each item to its hypothesized subscale. Correlations were observed to range from r² = 0.202 to r² = 0.739. Sensitivity or responsiveness was evaluated by paired t-test, which exhibited a significant difference between pre- and post-treatment scores (p < 0.001). Conclusion The results of this study indicate that the Iranian version of the SF-MPQ is a reliable questionnaire and responsive to changes in the subscale and total pain scores in Persian chronic pain patients over time. PMID:22074591

  12. A Method of Estimating Rater Reliability.

    ERIC Educational Resources Information Center

    van den Bergh, Huub; Eiting, Mindert H.

    1989-01-01

    A method of assessing rater reliability via a design of overlapping rater teams is presented. Covariances or correlations of ratings can be analyzed with LISREL models. Models in which the rater reliabilities are congeneric, tau-equivalent, or parallel can be tested. Two examples based on essay ratings are presented. (TJH)

  13. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. It presents rendering algorithms, developed for massively parallel processors (MPPs), for polygons, spheres, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  14. The Ohio Scales Youth Form: Expansion and Validation of a Self-Report Outcome Measure for Young Children

    ERIC Educational Resources Information Center

    Dowell, Kathy A.; Ogles, Benjamin M.

    2008-01-01

    We examined the validity and reliability of a self-report outcome measure for children between the ages of 8 and 11. The Ohio Scales Problem Severity scale is a brief, practical outcome measure available in three parallel forms: Parent, Youth, and Agency Worker. The Youth Self-Report form is currently validated for children ages 12 and older. The…

  16. PHACT: Parallel HOG and Correlation Tracking

    NASA Astrophysics Data System (ADS)

    Hassan, Waqas; Birch, Philip; Young, Rupert; Chatwin, Chris

    2014-03-01

    Histogram of Oriented Gradients (HOG) based methods for the detection of humans have become one of the most reliable methods of detecting pedestrians with a single passive imaging camera. However, they are not 100 percent reliable. This paper presents an improved tracker for the monitoring of pedestrians within images. The Parallel HOG and Correlation Tracking (PHACT) algorithm utilises self-learning to overcome the drifting problem. A detection algorithm that utilises HOG features runs in parallel to an adaptive and stateful correlator. The combination of both acting in a cascade provides a much more robust tracker than the two components separately could produce.
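
    A minimal sketch of the cascade idea, a HOG pedestrian detector running alongside a normalized-correlation template tracker whose template is re-seeded by detections, is shown below with standard OpenCV calls. It illustrates the general approach, not the authors' PHACT implementation or its self-learning logic.

```python
# Detector/correlator cascade: HOG detections re-seed a template that a
# normalized cross-correlation search falls back on when detection fails.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def track(frames):
    template = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        rects, _ = hog.detectMultiScale(gray, winStride=(8, 8))
        if len(rects) > 0:                       # detector fired: re-seed template
            x, y, w, h = rects[0]
            template = gray[y:y + h, x:x + w].copy()
            yield (x, y, w, h)
        elif template is not None:               # fall back to correlation tracking
            res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(res)
            if score > 0.5:                      # assumed confidence threshold
                yield (loc[0], loc[1], template.shape[1], template.shape[0])
```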

  17. How Reliable Are TOEFL Scores?

    ERIC Educational Resources Information Center

    Wainer, Howard; Lukhele, Robert

    1997-01-01

    The reliability of scores from four forms of the Test of English as a Foreign Language (TOEFL) was estimated using a hybrid item response theory model. It was found that there was very little difference between overall reliability when the testlet items were assumed to be independent and when their dependence was modeled. (Author/SLD)

  18. Photovoltaic module reliability workshop

    SciTech Connect

    Mrig, L.

    1990-01-01

    The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986--1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, there is still a need for substantial research and testing to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

  19. DUST EXTINCTION FROM BALMER DECREMENTS OF STAR-FORMING GALAXIES AT 0.75 ≤ z ≤ 1.5 WITH HUBBLE SPACE TELESCOPE/WIDE-FIELD-CAMERA 3 SPECTROSCOPY FROM THE WFC3 INFRARED SPECTROSCOPIC PARALLEL SURVEY

    SciTech Connect

    Dominguez, A.; Siana, B.; Masters, D.; Henry, A. L.; Martin, C. L.; Scarlata, C.; Bedregal, A. G.; Malkan, M.; Ross, N. R.; Atek, H.; Colbert, J. W.; Teplitz, H. I.; Rafelski, M.; McCarthy, P.; Hathi, N. P.; Dressler, A.; Bunker, A.

    2013-02-15

    Spectroscopic observations of Hα and Hβ emission lines of 128 star-forming galaxies in the redshift range 0.75 ≤ z ≤ 1.5 are presented. These data were taken with slitless spectroscopy using the G102 and G141 grisms of the Wide-Field-Camera 3 (WFC3) on board the Hubble Space Telescope as part of the WFC3 Infrared Spectroscopic Parallel survey. Interstellar dust extinction is measured from stacked spectra that cover the Balmer decrement (Hα/Hβ). We present dust extinction as a function of Hα luminosity (down to 3 × 10^41 erg s^-1), galaxy stellar mass (reaching 4 × 10^8 M_Sun), and rest-frame Hα equivalent width. The faintest galaxies are two times fainter in Hα luminosity than galaxies previously studied at z ~ 1.5. An evolution is observed where galaxies of the same Hα luminosity have lower extinction at higher redshifts, whereas no evolution is found within our error bars with stellar mass. The lower Hα luminosity galaxies in our sample are found to be consistent with no dust extinction. We find an anti-correlation of the [O III] λ5007/Hα flux ratio as a function of luminosity where galaxies with L_Hα < 5 × 10^41 erg s^-1 are brighter in [O III] λ5007 than Hα. This trend is evident even after extinction correction, suggesting that the increased [O III] λ5007/Hα ratio in low-luminosity galaxies is likely due to lower metallicity and/or higher ionization parameters.

  20. Programming parallel processors

    SciTech Connect

    Babb, R.G. II

    1987-01-01

    This book surveys the major commercially available, scientific parallel computers with emphasis on how they are programmed. For each machine, the way in which parallel performance can be assessed is shown for the same, small example program. A wide range of parallel machines is covered, from superminis to parallel vector supercomputers, including both shared memory and message-passing machines. Topics covered include: exploiting multiprocessors: issues and options; Alliant FX/8; BBN Butterfly Parallel Processor; CRAY X-MP; FPS T Series Parallel Processor; IBM 3090; Intel iPSC Concurrent Computer; Loral Dataflo LDF 100; and Sequent Balance Series.

  1. Fault-tolerant parallel processor

    SciTech Connect

    Harper, R.E.; Lala, J.H. )

    1991-06-01

    This paper addresses issues central to the design and operation of an ultrareliable, Byzantine resilient parallel computer. Interprocessor connectivity requirements are met by treating connectivity as a resource that is shared among many processing elements, allowing flexibility in their configuration and reducing complexity. Redundant groups are synchronized solely by message transmissions and receptions, which also provide input data consistency and output voting. Reliability analysis results are presented that demonstrate the reduced failure probability of such a system. Performance analysis results are presented that quantify the temporal overhead involved in executing such fault-tolerance-specific operations. Empirical performance measurements of prototypes of the architecture are presented. 30 refs.

  2. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  3. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi (Albuquerque, NM); Cheng, Yung-Sung (Albuquerque, NM)

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  4. Using Alternate Forms of the Multidimensional Health Locus of Control Scale: Caveat Emptor

    ERIC Educational Resources Information Center

    Hubley, Anita M.; Wagner, Shannon

    2004-01-01

    This study examined whether Forms A and B of the Multidimensional Health Locus of Control Scale (MHLCS) are parallel by comparing (a) mean performance on the internal, powerful others, and chance subscales, (b) the internal consistency and one-week test-retest reliability estimates for each of the subscales, (c) the intercorrelations among the

  5. Parallel processing ITS

    SciTech Connect

    Fan, W.C.; Halbleib, J.A. Sr.

    1996-09-01

    This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.

  6. Parallel program design

    SciTech Connect

    Chandy, K.M.; Misra, J. )

    1989-01-01

    The main theme of this book demonstrates that to program parallel computers, you need to understand how to program any computer well -- to program, that is, independently of any specific architecture. It considers a wide spectrum of computer architectures, and develops parallel programs for a variety of problems. This book is a statement of unique and important ideas necessary for understanding parallel programs.

  7. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  8. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.

  9. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  10. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to

  11. Parallel Implicit Algorithms for CFD

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSC library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSC during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSC framework.
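
    The Newton-Krylov core of the approach is available off the shelf today; as a hedged illustration, the sketch below solves a small nonlinear boundary-value problem with SciPy's newton_krylov, which accesses the Jacobian only through matrix-vector products. The Schwarz (domain-decomposition preconditioning) layer is omitted here.

```python
# Newton-Krylov on a discrete nonlinear two-point BVP: -u'' + u^3 = 1, u(0)=u(1)=0.
import numpy as np
from scipy.optimize import newton_krylov

n = 64
h = 1.0 / (n + 1)

def residual(u):
    u_pad = np.concatenate(([0.0], u, [0.0]))            # Dirichlet boundaries
    lap = (u_pad[:-2] - 2 * u_pad[1:-1] + u_pad[2:]) / h**2
    return -lap + u**3 - 1.0

u = newton_krylov(residual, np.zeros(n), f_tol=1e-8)     # Jacobian-free solve
print(f"max u = {u.max():.4f}")
```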

  12. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  13. Improved CDMA Performance Using Parallel Interference Cancellation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Divsalar, Dariush

    1995-01-01

    This report considers a general parallel interference cancellation scheme that significantly reduces the degradation effect of user interference but with a lesser implementation complexity than the maximum-likelihood technique. The scheme operates on the fact that parallel processing simultaneously removes from each user the interference produced by the remaining users accessing the channel in an amount proportional to their reliability. The parallel processing can be done in multiple stages. The proposed scheme uses tentative decision devices with different optimum thresholds at the multiple stages to produce the most reliably received data for generation and cancellation of user interference. The 1-stage interference cancellation is analyzed for three types of tentative decision devices, namely, hard, null zone, and soft decision, and two types of user power distribution, namely, equal and unequal powers. Simulation results are given for a multitude of different situations, in particular, those cases for which the analysis is too complex.
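
    A one-stage cancellation with hard tentative decisions and equal powers can be simulated in a few lines; the user count, code length, and noise level below are illustrative assumptions consistent with the scheme's structure.

```python
# One-stage parallel interference cancellation for synchronous BPSK CDMA.
import numpy as np

rng = np.random.default_rng(6)
K, N = 8, 31                                     # users, chips per symbol
codes = rng.choice([-1.0, 1.0], (K, N)) / np.sqrt(N)  # unit-norm spreading codes
bits = rng.choice([-1.0, 1.0], K)                # one BPSK symbol per user
r = codes.T @ bits + rng.normal(0, 0.3, N)       # received chip vector

y = codes @ r                                    # matched-filter outputs
tentative = np.sign(y)                           # hard tentative decisions

# In parallel for every user: regenerate and subtract the interference implied
# by all OTHER users' tentative decisions, then decide again.
R = codes @ codes.T                              # code cross-correlation matrix
interference = (R - np.diag(np.diag(R))) @ tentative
final = np.sign(y - interference)

print("conventional bit errors:", int((np.sign(y) != bits).sum()))
print("after 1-stage PIC:      ", int((final != bits).sum()))
```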

  14. Towards Distributed Memory Parallel Program Analysis

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2008-06-17

    This paper presents a parallel attribute evaluation for distributed memory parallel computer architectures, where previously only shared memory parallel support for this technique had been developed. Attribute evaluation is a part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis, which cannot be addressed by a file-by-file view of large-scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

  15. Design considerations for parallel graphics libraries

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  16. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization of structural models and assembly sequences using virtual reality techniques; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  17. Generation and analysis of large reliability models

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Nicol, David M.

    1990-01-01

    An effort has been underway for several years at NASA's Langley Research Center to extend the capability of Markov modeling techniques for reliability analysis to the designers of highly reliable avionic systems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG), a software tool which uses as input a graphical, object-oriented block diagram of the system, is discussed. RMG uses an automated failure modes-effects analysis algorithm to produce the reliability model from the graphical description. Also considered is the ASSURE software tool, a parallel processing program which uses the ASSIST modeling language and SURE semi-Markov solution technique. An executable failure modes-effects analysis is used by ASSURE. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that large system architectures can now be analyzed.
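
    For intuition about what such tools compute, the sketch below evaluates a tiny Markov reliability model by hand: a triplex unit that fails once two of its three components have failed. The failure rate and mission time are made-up values, and the code is a generic illustration of Markov reliability evaluation, not RMG or ASSURE itself.

      # Minimal Markov reliability evaluation (illustrative values only).
      # States: 0 = three components good, 1 = two good, 2 = failed (absorbing).
      import numpy as np
      from scipy.linalg import expm

      lam = 1e-4                      # assumed per-hour component failure rate
      T = 10.0                        # assumed mission time, hours

      Q = np.array([[-3 * lam, 3 * lam, 0.0],   # generator matrix (row form)
                    [0.0, -2 * lam, 2 * lam],
                    [0.0, 0.0, 0.0]])

      p0 = np.array([1.0, 0.0, 0.0])  # start with all components healthy
      pT = p0 @ expm(Q * T)           # state probabilities at time T
      print("P(system failure by T) =", pT[2])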

  18. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the architecture and implementation of the parallel digital forensics (PDF) infrastructure.

  19. Application Portable Parallel Library

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    Application Portable Parallel Library (APPL) computer program is subroutine-based message-passing software library intended to provide consistent interface to variety of multiprocessor computers on market today. Minimizes effort needed to move application program from one computer to another. User develops application program once and then easily moves application program from parallel computer on which created to another parallel computer. ("Parallel computer" also includes heterogeneous collection of networked computers). Written in C language with one FORTRAN 77 subroutine for UNIX-based computers and callable from application programs written in C language or FORTRAN 77.

  20. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C³I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with partitioned knowledge bases indicate that significant speed increases, including superlinear speedups in some cases, are possible.

  1. Reliability computation from reliability block diagrams

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.; Eckstein, E. Y.

    1975-01-01

    Computer program computes system reliability for very general class of reliability block diagrams. Four factors are considered in calculating probability of system success: active block redundancy, standby block redundancy, partial redundancy, and presence of equivalent blocks in the diagram.
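
    The composition rules such a program automates are easy to state for independent blocks. The sketch below is a generic illustration of those rules (active redundancy and partial redundancy), not the 1975 program itself:

      # Reliability composition for independent blocks (illustrative sketch).
      from math import comb

      def series(*rs):                # every block must succeed
          out = 1.0
          for r in rs:
              out *= r
          return out

      def parallel(*rs):              # active redundancy: any block suffices
          q = 1.0
          for r in rs:
              q *= (1.0 - r)
          return 1.0 - q

      def k_of_n(k, n, r):            # partial redundancy, identical blocks
          return sum(comb(n, i) * r**i * (1 - r)**(n - i)
                     for i in range(k, n + 1))

      # Hypothetical diagram: a redundant sensor pair feeding a 2-of-3 voter.
      system = series(parallel(0.95, 0.95), k_of_n(2, 3, 0.99))
      print(f"system reliability = {system:.6f}")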

  2. Eclipse Parallel Tools Platform

    Energy Science and Technology Software Center (ESTSC)

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration, support for a small number of parallel architectures, and basic Fortran integration. Future versions will extend the functionality substantially, provide a number of core parallel tools, and provide support across a wide range of parallel architectures and languages.

  3. Reliability of an Adapted Version of the Modified Six Elements Test as a Measure of Executive Function.

    PubMed

    Bertens, Dirk; Fasotti, Luciano; Egger, Jos I M; Boelen, Danielle H E; Kessels, Roy P C

    2016-01-01

    The Modified Six Elements Test (MSET) is used to examine executive deficits, more specifically planning deficits. This study investigates the reliability of an adapted version of the MSET and proposes a novel scoring method. Two parallel versions of the adapted MSET were administered to 60 healthy participants in a counterbalanced order. Test-retest and parallel-form reliability were examined using intraclass correlation coefficients, Bland-Altman analyses, standard errors of measurement, and smallest real differences, representing clinically relevant changes over time. Moreover, the ecological validity of the adapted MSET was evaluated using the Executive Function Index, a self-rating questionnaire measuring everyday executive performance. No systematic differences between the test occasions were present, and the adapted MSET including the proposed scoring method was capable of detecting real clinical changes. Intraclass correlations for the test-retest and parallel-form reliability were modest, and the variability between the test scores was high. The nonsignificant correlations with the Executive Function Index did not confirm the previously established ecological validity of the MSET. We show that both parallel versions of the test are clinically equivalent and can be used to measure executive function over the course of time without task-specific learning effects. PMID:26111243
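
    For reference, the measurement-error quantities named here are related in the standard way (s denotes the between-subject standard deviation of the scores; this is the textbook form, not necessarily the exact variant used in the study):

      \mathrm{SEM} = s\,\sqrt{1 - \mathrm{ICC}}, \qquad \mathrm{SRD} = 1.96 \cdot \sqrt{2} \cdot \mathrm{SEM}

    A retest change smaller than the SRD is thus indistinguishable from measurement noise at the 95% level.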

  4. Parallel Lisp simulator

    SciTech Connect

    Weening, J.S.

    1988-05-01

    CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.

  5. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  6. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
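
    The segmented flavor of the idea fits in a few lines. The sketch below is an illustration using a Python process pool and a blocked (segmented) decomposition, rather than the paper's scattered decomposition on a hypercube: the seed primes up to sqrt(N) are found serially, then each worker sieves one block of the range independently.

      # Segmented parallel Sieve of Eratosthenes (illustrative sketch).
      from multiprocessing import Pool
      from math import isqrt

      N = 10_000_000

      def small_primes(limit):
          """Serial sieve for the seed primes up to sqrt(N)."""
          flags = bytearray([1]) * (limit + 1)
          flags[0:2] = b"\x00\x00"
          for p in range(2, isqrt(limit) + 1):
              if flags[p]:
                  flags[p * p::p] = bytes(len(range(p * p, limit + 1, p)))
          return [i for i, f in enumerate(flags) if f]

      SEED = small_primes(isqrt(N))

      def count_segment(bounds):
          """Each worker crosses off composites in its own [lo, hi) block."""
          lo, hi = bounds
          flags = bytearray([1]) * (hi - lo)
          for p in SEED:
              start = max(p * p, (lo + p - 1) // p * p)
              flags[start - lo::p] = bytes(len(range(start, hi, p)))
          return sum(flags)            # primes remaining in this block

      if __name__ == "__main__":
          nseg = 8                     # one block per worker
          edges = [2 + (N - 2) * i // nseg for i in range(nseg + 1)]
          with Pool(nseg) as pool:
              counts = pool.map(count_segment, list(zip(edges, edges[1:])))
          print(sum(counts), "primes below", N)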

  7. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  8. Integrated circuit reliability testing

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Sayah, Hoshyar R. (Inventor)

    1988-01-01

    A technique is described for use in determining the reliability of microscopic conductors deposited on an uneven surface of an integrated circuit device. A wafer containing integrated circuit chips is formed with a test area having regions of different heights. At the time the conductors are formed on the chip areas of the wafer, an elongated serpentine assay conductor is deposited on the test area so the assay conductor extends over multiple steps between regions of different heights. Also, a first test conductor is deposited in the test area upon a uniform region of first height, and a second test conductor is deposited in the test area upon a uniform region of second height. The occurrence of high resistances at the steps between regions of different height is indicated by deriving the measured length of the serpentine conductor using the resistance measured between the ends of the serpentine conductor, and comparing that to the design length of the serpentine conductor. The percentage by which the measured length exceeds the design length, at which the integrated circuit will be discarded, depends on the required reliability of the integrated circuit.

  9. Integrated circuit reliability testing

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Sayah, Hoshyar R. (Inventor)

    1990-01-01

    A technique is described for use in determining the reliability of microscopic conductors deposited on an uneven surface of an integrated circuit device. A wafer containing integrated circuit chips is formed with a test area having regions of different heights. At the time the conductors are formed on the chip areas of the wafer, an elongated serpentine assay conductor is deposited on the test area so the assay conductor extends over multiple steps between regions of different heights. Also, a first test conductor is deposited in the test area upon a uniform region of first height, and a second test conductor is deposited in the test area upon a uniform region of second height. The occurrence of high resistances at the steps between regions of different height is indicated by deriving the measured length of the serpentine conductor using the resistance measured between the ends of the serpentine conductor, and comparing that to the design length of the serpentine conductor. The percentage by which the measured length exceeds the design length, at which the integrated circuit will be discarded, depends on the required reliability of the integrated circuit.

  10. Short-term reliability of a brief hazard perception test.

    PubMed

    Scialfa, Charles T; Pereverseff, Rosemary S; Borkenhagen, David

    2014-12-01

    Hazard perception tests (HPTs) have been successfully implemented in some countries as a part of the driver licensing process and, while their validity has been evaluated, their short-term stability is unknown. This study examined the short-term reliability of a brief, dynamic version of the HPT. Fifty-five young adults (mean age = 21 years) with at least two years of post-licensing driving experience completed parallel, 21-scene HPTs with a one-month interval separating each test. Minimal practice effects (about 0.1 s) were manifested. Internal consistency (Cronbach's alpha) averaged 0.73 for the two forms. The correlation between the two tests was 0.55 (p<0.001), and correcting for lack of reliability increased the correlation to 0.72. Thus, a brief form of the HPT demonstrates acceptable short-term reliability in drivers whose hazard perception should be stable, an important feature for implementation and consumer acceptance. One implication of these results is that valid HPT scores should predict future crash risk, a desirable property for user acceptance of such tests. However, short-term stability should be assessed over longer periods and in other driver groups, particularly novices and older adults, in whom inter-individual differences in the development of hazard perception skill may render HPT tests unstable, even over short intervals. PMID:25173997

  11. Reliability model generator

    NASA Technical Reports Server (NTRS)

    McMann, Catherine M. (Inventor); Cohen, Gerald C. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.

  12. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  13. Parallel nearest neighbor calculations

    NASA Astrophysics Data System (ADS)

    Trease, Harold

    We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.

  14. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  15. Parallel methods for dynamic simulation of multiple manipulator systems

    NASA Technical Reports Server (NTRS)

    Mcmillan, Scott; Sadayappan, P.; Orin, David E.

    1993-01-01

    In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.

  16. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  17. Calculating system reliability with SRFYDO

    SciTech Connect

    Morzinski, Jerome; Anderson-Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.

  18. Low-power approaches for parallel, free-space photonic interconnects

    SciTech Connect

    Carson, R.F.; Lovejoy, M.L.; Lear, K.L.; Warren, M.E.; Seigal, P.K.; Craft, D.C.; Kilcoyne, S.P.; Patrizi, G.A.; Blum, O.

    1995-12-31

    Future advances in the application of photonic interconnects will involve the insertion of parallel-channel links into Multi-Chip Modules (MCMs) and board-level parallel connections. Such applications will drive photonic link components into more compact forms that consume far less power than traditional telecommunication data links. These will make use of new device-level technologies such as vertical cavity surface-emitting lasers and special low-power parallel photoreceiver circuits. Depending on the application, these device technologies will often be monolithically integrated to reduce the amount of board or module real estate required by the photonics. Highly parallel MCM and board-level applications will also require simplified drive circuitry, lower cost, and higher reliability than has been demonstrated in photonic and optoelectronic technologies. An example is found in two-dimensional point-to-point array interconnects for MCM stacking. These interconnects are based on high-efficiency Vertical Cavity Surface Emitting Lasers (VCSELs), Heterojunction Bipolar Transistor (HBT) photoreceivers, integrated micro-optics, and MCM-compatible packaging techniques. Individual channels have been demonstrated at 100 Mb/s, operating with a direct 3.3V CMOS electronic interface while using 45 mW of electrical power. These results demonstrate how optoelectronic device technologies can be optimized for low-power parallel link applications.

  19. Simplified Parallel Domain Traversal

    SciTech Connect

    Erickson III, David J

    2011-01-01

    Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO₂ and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.

  20. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincaré's model for a non-Euclidean geometry is defined and analyzed. (LS)

  1. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  2. UCLA Parallel PIC Framework

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor K.; Norton, Charles D.

    2004-12-01

    The UCLA Parallel PIC Framework (UPIC) has been developed to provide trusted components for the rapid construction of new, parallel Particle-in-Cell (PIC) codes. The Framework uses object-based ideas in Fortran95, and is designed to provide support for various kinds of PIC codes on various kinds of hardware. The focus is on student programmers. The Framework supports multiple numerical methods, different physics approximations, different numerical optimizations and implementations for different hardware. It is designed with "defensive" programming in mind, meaning that it contains many error checks and debugging helps. Above all, it is designed to hide the complexity of parallel processing. It is currently being used in a number of new Parallel PIC codes.

  3. Parallels with nature

    NASA Astrophysics Data System (ADS)

    2014-10-01

    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  4. On mesh rezoning algorithms for parallel platforms

    SciTech Connect

    Plaskacz, E.J.

    1995-07-01

    A mesh rezoning algorithm for finite element simulations in a parallel-distributed environment is described. The cornerstones of the algorithm are: the parallel computation of distortion norms on the element and subdomain level, the exchange of the individual subdomain norms to form a subdomain distortion vector, the classification of subdomains and the rezoning behavior prescribed within each subdomain as a response to its own classification and the classification of neighboring subdomains.
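
    The first three cornerstones map directly onto a few collective operations. The sketch below uses assumed names, a made-up threshold, and mpi4py as a stand-in for the paper's environment; it computes a subdomain distortion norm, exchanges it, and classifies every subdomain.

      # Illustrative parallel distortion-norm exchange (not the paper's code).
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD

      def element_distortion(nodes):
          """Toy stand-in: one distortion measure per local element."""
          return np.linalg.norm(nodes, axis=1)

      local_elements = np.random.rand(1000, 4)   # this subdomain's elements
      subdomain_norm = element_distortion(local_elements).max()

      # Exchange the individual norms to form the subdomain distortion vector.
      distortion = np.array(comm.allgather(subdomain_norm))

      THRESHOLD = 0.9                            # assumed classification cut
      needs_rezone = distortion > THRESHOLD      # classification of all subdomains
      # Each rank then prescribes its rezoning behavior from its own flag and
      # the flags of its neighboring subdomains.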

  5. Reliability Generalization: "Lapsus Linguae"

    ERIC Educational Resources Information Center

    Smith, Julie M.

    2011-01-01

    This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,

  6. Parallel DC notch filter

    NASA Astrophysics Data System (ADS)

    Kwok, Kam-Cheung; Chan, Ming-Kam

    1991-12-01

    In the process of image acquisition, the object of interest may not be evenly illuminated, so an image with shading irregularities is produced. This type of image is very difficult to analyze, and consequently a lot of research work has concentrated on this problem. One way to remove the effect of uneven illumination is to filter the image. The dc notch filter is one of the spatial-domain filters used for reducing the effect of uneven light illumination on the image. Although the dc notch filter is a spatial-domain filter, it is still rather time consuming to apply, especially when it is implemented on a microcomputer. To overcome the speed problem, a parallel dc notch filter is proposed. Based on the separability of the dc notch filter algorithm, image parallelism (a parallel image processing model) is used. To improve the performance of the microcomputer, an INMOS IMS B008 module motherboard with four IMS T800-17 transputers is installed in the microcomputer, so the dc notch filter is in fact implemented on the transputer network. This parallel dc notch filter yields a large improvement in the computation time of the filter in comparison with the sequential one. Furthermore, the speed-up is used to analyze the performance of the parallel algorithm. As a result, parallel implementation of the dc notch filter on a transputer network gives real-time performance.

  7. Can There Be Reliability without "Reliability?"

    ERIC Educational Resources Information Center

    Mislevy, Robert J.

    2004-01-01

    An "Educational Researcher" article by Pamela Moss (1994) asks the title question, "Can there be validity without reliability?" Yes, she answers, if by reliability one means "consistency among independent observations intended as interchangeable" (Moss, 1994, p. 7), quantified by internal consistency indices such as KR-20 coefficients and

  8. Wind turbine reliability database update.

    SciTech Connect

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.

    2009-03-01

    This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  9. Reduction of a general matrix to tridiagonal form using a hypercube multiprocessor

    SciTech Connect

    Geist, G.A.

    1989-01-01

    Recently there has been a renewed interest in finding reliable methods of reducing general matrices to tridiagonal form. We have developed a serial reduction algorithm that appears to be very reliable in practice. In this paper we describe a parallel version of our algorithm, which has been implemented using a portable library developed at Oak Ridge National Laboratory (ORNL). The library allows the code to be portable across most commercial hypercubes. The algorithm was developed as one step in the process of finding eigenvalues of nonsymmetric matrices. Our original parallel eigenvalue routines reduced the matrix to Hessenberg form and then applied QR iteration, but their performance was disappointing. Our new parallel routines reduce the matrix to tridiagonal form and then apply LR iteration. Using an iPSC/2, we compare the performance of the new parallel routines with our previous parallel routines and show that the new routines are nearly an order of magnitude faster, allowing us to solve much larger problems than previously attempted. 14 refs., 6 figs., 2 tabs.

  10. Comparison of Reliability Measures under Factor Analysis and Item Response Theory

    ERIC Educational Resources Information Center

    Cheng, Ying; Yuan, Ke-Hai; Liu, Cheng

    2012-01-01

    Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability rho and an unweighted sum score-based omega, among many others. With increasing popularity of item response theory, a parallel reliability measure pi

  11. Parallel time integration software

    Energy Science and Technology Software Center (ESTSC)

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  12. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  13. Reliability models applicable to space telescope solar array assembly system

    NASA Astrophysics Data System (ADS)

    Patil, S. A.

    1986-01-01

    A complex system may consist of a number of subsystems with several components in series, parallel, or a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
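
    Reading k in the usual k-out-of-n "good" sense consistent with the limits quoted (the subsystem works when at least k of its n identical components, each of reliability r(t), work), the subsystem reliability takes the standard binomial form:

      R_{\text{sub}}(t) = \sum_{i=k}^{n} \binom{n}{i}\, r(t)^{i}\, \bigl(1 - r(t)\bigr)^{n-i}

    Setting k = 1 recovers the parallel model, 1 - (1 - r(t))^n, and k = n the series model, r(t)^n.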

  14. Assuring reliability program effectiveness.

    NASA Technical Reports Server (NTRS)

    Ball, L. W.

    1973-01-01

    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  15. Parallel shear and turbulence

    NASA Astrophysics Data System (ADS)

    Hayes, Tiffany; Gilmore, Mark; Watts, Christopher; Xie, Shuangwei; Yan, Lincan

    2009-11-01

    Instabilities may be caused in plasma due to (shear) flow. These flows can be transverse or parallel to the magnetic field. Past work has generally focused on controlling and understanding the processes that occur from (shear) flow transverse to the magnetic field. At UNM, experimental work is being performed in the HelCat (Helicon Cathode) device to control the parallel flow in order to study and understand the processes that arise from this situation. It is also our aim to be able to control the transverse flow simultaneously, but independently of the parallel flow. By inserting a system of biased rings and grids into the plasma we are able to modify the flows, and hence the turbulence. Flows are measured using a seven-tip Mach probe. Results of our ability to control the flows independently are presented.

  16. Parallel optical sampler

    SciTech Connect

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder modulator. A method of sampling the optical analog input signal is disclosed.

  17. Parallel channel flow excursions

    SciTech Connect

    Johnston, B.S.

    1990-01-01

    Among the many known types of vapor-liquid flow instability is the excursion which may occur in heated parallel channels. Under certain conditions, the pressure drop requirement in a heated channel may increase with decreases in flow rate. This leads to an excursive reduction in flow. For channels heated by electricity or nuclear fission, this can result in overheating and damage to the channel. In the design of any parallel channel device, flow excursion limits should be established. After a review of parallel channel behavior and analysis, a conservative criterion will be proposed for avoiding excursions. In support of this criterion, recent experimental work on boiling in downward flow will be described. 5 figs.

  18. Derivation of operation rules for reservoirs in parallel with joint water demand

    NASA Astrophysics Data System (ADS)

    Zeng, Xiang; Hu, Tiesong; Xiong, Lihua; Cao, Zhixian; Xu, Chongyu

    2015-12-01

    The purpose of this paper is to derive the general optimality conditions of the commonly used operating policies for reservoirs in parallel with joint water demand, which are defined in terms of system-wide release rules and individual reservoir storage balancing functions. Following that, a new set of release rules for individual reservoirs is proposed in analytical form by considering the optimality conditions for the balance of total water delivery utility and carryover storage value of individual reservoirs. Theoretical analysis indicates that the commonly used operating policies are a special case of the newly derived rules. The derived release rules are then applied to simulating the operation of a parallel reservoir system in northeastern China. Compared to the performance of the commonly used policies, some advantages of the proposed operation rules are illustrated. Most notably, less water shortage occurrence and higher water supply reliability are obtained from the proposed operation rules.

  19. Low-power, parallel photonic interconnections for Multi-Chip Module applications

    SciTech Connect

    Carson, R.F.; Lovejoy, M.L.; Lear, K.L.

    1994-12-31

    New applications of photonic interconnects will involve the insertion of parallel-channel links into Multi-Chip Modules (MCMs). Such applications will drive photonic link components into more compact forms that consume far less power than traditional telecommunication data links. MCM-based applications will also require simplified drive circuitry, lower cost, and higher reliability than has been demonstrated currently in photonic and optoelectronic technologies. The work described is a parallel link array, designed for vertical (Z-Axis) interconnection of the layers in a MCM-based signal processor stack, operating at a data rate of 100 Mb/s. This interconnect is based upon high-efficiency VCSELs, HBT photoreceivers, integrated micro-optics, and MCM-compatible packaging techniques.

  1. Mechanically reliable scales and coatings

    SciTech Connect

    Tortorelli, P.F.; Alexander, K.B.

    1995-07-01

    As the first stage in examining the mechanical reliability of protective surface oxides, the behavior of alumina scales formed on iron-aluminum alloys during high-temperature cyclic oxidation was characterized in terms of damage and spallation tendencies. Scales were thermally grown on specimens of three iron-aluminum compositions using a series of exposures to air at 1000°C. Gravimetric data and microscopy revealed substantially better integrity and adhesion of the scales grown on an alloy containing zirconium. The use of polished (rather than just ground) specimens resulted in scales that were more suitable for subsequent characterization of mechanical reliability.

  2. Realistic analytical phantoms for parallel magnetic resonance imaging.

    PubMed

    Guerquin-Kern, M; Lejeune, L; Pruessmann, K P; Unser, M

    2012-03-01

    The quantitative validation of reconstruction algorithms requires reliable data. Rasterized simulations are popular, but they are tainted by an aliasing component that impacts the assessment of the performance of reconstruction. We introduce analytical simulation tools that are suited to parallel magnetic resonance imaging and allow one to build realistic phantoms. The proposed phantoms are composed of ellipses and regions with piecewise-polynomial boundaries, including spline contours, Bézier contours, and polygons. In addition, they take the channel sensitivity into account, for which we investigate two possible models. Our analytical formulations provide well-defined data in both the spatial and k-space domains. Our main contribution is the closed-form determination of the Fourier transforms that are involved. Experiments validate the proposed implementation. In a typical parallel magnetic resonance imaging reconstruction experiment, we quantify the bias in the overly optimistic results obtained with rasterized simulations (the inverse-crime situation). We provide a package that implements the different simulations and provide tools to guide the design of realistic phantoms. PMID:22049364
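
    The centered ellipse illustrates why such closed forms exist at all: the indicator function of an ellipse with semi-axes a and b has the textbook 2-D Fourier transform

      \hat{f}(k_x, k_y) = ab\,\frac{J_1\!\left(2\pi\sqrt{(a k_x)^2 + (b k_y)^2}\right)}{\sqrt{(a k_x)^2 + (b k_y)^2}}

    where J_1 is the first-order Bessel function; shifted and rotated ellipses follow from the usual Fourier identities. (This is the standard result, quoted here for intuition; the paper's piecewise-polynomial regions require more elaborate closed forms.)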

  3. Coarrays for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Snyder, W. Van

    2011-01-01

    The design of the Coarray feature of Fortran 2008 was guided by answering the question "What is the smallest change required to convert Fortran to a robust and efficient parallel language?" Two fundamental issues that any parallel programming model must address are work distribution and data distribution. In order to coordinate work distribution and data distribution, methods for communication and synchronization must be provided. Although originally designed for Fortran, the Coarray paradigm has stimulated development in other languages. X10, Chapel, UPC, Titanium, and class libraries being developed for C++ have the same conceptual framework.

  4. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.

  5. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
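
    For reference, if a fraction p of a fixed-size job parallelizes perfectly, Amdahl's law bounds the speedup on n processors, while the Sandia results are naturally read through the scaled-speedup (Gustafson) form, in which the problem grows with the machine:

      S_{\text{Amdahl}}(n) = \frac{1}{(1-p) + p/n} \le \frac{1}{1-p},
      \qquad
      S_{\text{scaled}}(n) = (1-p) + p\,n

    For illustration, with p = 0.999 and n = 1024 the scaled form gives S ≈ 1023, of the order of the Sandia numbers quoted above.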

  6. Reliability computation from reliability block diagrams

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.; Eckstein, R. E.

    1971-01-01

    A method and a computer program are presented to calculate probability of system success from an arbitrary reliability block diagram. The class of reliability block diagrams that can be handled include any active/standby combination of redundancy, and the computations include the effects of dormancy and switching in any standby redundancy. The mechanics of the program are based on an extension of the probability tree method of computing system probabilities.
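
    The effect such a program captures is visible in the simplest textbook case: two identical units with constant failure rate λ, comparing active redundancy against a cold standby with a perfect switch and no dormancy failures (the program itself also handles dormancy and switching effects, which these idealized forms ignore):

      R_{\text{active}}(t) = 2e^{-\lambda t} - e^{-2\lambda t},
      \qquad
      R_{\text{standby}}(t) = e^{-\lambda t}\,(1 + \lambda t)

    The standby form dominates the active form for all t > 0, since the spare accumulates no failure exposure while dormant.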

  7. Power electronics reliability analysis.

    SciTech Connect

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
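
    The fault-tree approach mentioned here reduces to a small amount of arithmetic once basic events are assumed independent. A minimal sketch follows, with hypothetical components and failure rates, not the report's fictitious device:

      # Minimal fault-tree evaluation with independent basic events.
      from math import exp

      def p_fail(rate_per_year, years=1.0):
          """Constant-rate failure probability over the horizon."""
          return 1.0 - exp(-rate_per_year * years)

      def or_gate(*ps):               # output fails if ANY input event occurs
          ok = 1.0
          for p in ps:
              ok *= (1.0 - p)
          return 1.0 - ok

      def and_gate(*ps):              # output fails only if ALL inputs occur
          out = 1.0
          for p in ps:
              out *= p
          return out

      igbt = p_fail(0.02)             # assumed failure rates, per year
      driver = p_fail(0.01)
      dc_link_cap = p_fail(0.005)
      fan_pair = and_gate(p_fail(0.05), p_fail(0.05))   # redundant cooling

      system = or_gate(igbt, driver, dc_link_cap, fan_pair)
      print(f"P(system failure in 1 year) = {system:.4f}")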

  8. Human Reliability Program Overview

    SciTech Connect

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  9. Redundant system reliability analysis

    NASA Technical Reports Server (NTRS)

    Masreliez, C. J.

    1979-01-01

    Computer Aided Redundant System Reliability Analysis (CARSRA) program facilitates reliability assessment of fault-tolerant reconfigurable systems. CARSRA accounts for influences from transient faults and is used to model wide range of redundancy management strategies.

  10. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited, even though there are numerous computationally demanding programs that would significantly benefit from the application of parallel processing. This paper describes DSSLIB, a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  11. Aerospace mechanical reliability practice

    NASA Technical Reports Server (NTRS)

    Fedor, O. H.

    1982-01-01

    The impact of mechanical-reliability practice on the Saturn/Apollo launch program is considered with reference to the interrelationship of analysts and designers with management. Rocket engine development, ground testing, and launch facilities in the Saturn/Apollo program are discussed, and the Saturn reliability approach is examined in regard to management style, decision making, human error control, and reliability analyses. It is noted that the use of a conservative design philosophy contributed to the reliability achieved.

  12. Reliability as Argument

    ERIC Educational Resources Information Center

    Parkes, Jay

    2007-01-01

    Reliability consists of both important social and scientific values and methods for evidencing those values, though in practice methods are often conflated with the values. With the two distinctly understood, a reliability argument can be made that articulates the particular reliability values most relevant to the particular measurement situation

  13. Architectural design for reliability

    SciTech Connect

    Cranwell, R.M.; Hunter, R.L.

    1997-08-01

    Design-for-reliability concepts can be applied to the products of the construction industry, which includes buildings, bridges, transportation systems, dams, and other structures. The application of a systems approach to designing in reliability emphasizes the importance of incorporating uncertainty in the analyses, the benefits of optimization analyses, and the importance of integrating reliability, safety, and security. 4 refs., 3 figs.

  14. Reliability in aposematic signaling

    PubMed Central

    2010-01-01

    In light of recent work, we will expand on the role and variability of aposematic signals. The focus of this review will be the concepts of reliability and honesty in aposematic signaling. We claim that reliable signaling can solve the problem of aposematic evolution, and that variability in reliability can shed light on the complexity of aposematic systems. PMID:20539774

  15. Reliability model generator specification

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C.; Mccann, Catherine

    1990-01-01

    The Reliability Model Generator (RMG), a program which produces reliability models from block diagrams for ASSIST, the interface to the reliability evaluation tool SURE, is described. An account is given of the motivation for RMG, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed example traces.

  16. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recently published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  17. Parallel Dislocation Simulator

    Energy Science and Technology Software Center (ESTSC)

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  18. NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90, and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2, and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP, and BT benchmarks, and we mention NAS's future plans for the NPB.

  19. Optimizing parallel reduction operations

    SciTech Connect

    Denton, S.M.

    1995-06-01

    A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
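
    The optimizations discussed hinge on the mathematical properties of the reduction; associativity, for example, is what lets partial results be computed concurrently and combined afterwards. Below is a minimal Python sketch of that pattern (not Sisal, and not the paper's implementation), with the chunking and data invented for illustration.

        # Sketch: a parallel sum reduction. Partial sums are computed
        # concurrently and combined afterwards; this is valid only because
        # addition is associative -- exactly the kind of property the
        # paper's classification of reduction operations is concerned with.
        from concurrent.futures import ProcessPoolExecutor

        def partial_sum(chunk):
            total = 0
            for x in chunk:
                total += x
            return total

        def parallel_sum(data, workers=4):
            size = (len(data) + workers - 1) // workers
            chunks = [data[i:i + size] for i in range(0, len(data), size)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                partials = pool.map(partial_sum, chunks)
            return sum(partials)   # sequential combine of the partial results

        if __name__ == "__main__":
            print(parallel_sum(list(range(1_000_001))))   # 500000500000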

  20. Parallel Total Energy

    Energy Science and Technology Software Center (ESTSC)

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave-function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  1. Parallel Multigrid Equation Solver

    Energy Science and Technology Software Center (ESTSC)

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  2. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  3. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  4. Reliability analysis of composite structures

    NASA Technical Reports Server (NTRS)

    Kan, Han-Pin

    1992-01-01

    A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters are then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, fabrication and assembly processes. The influence of structural geometry and mode of failure are also considerations in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
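
    The central computation, obtaining a failure probability from fitted load and strength distributions, can be illustrated with Monte Carlo sampling as a stand-in for the report's numerical integration; the normal distributions and parameters below are assumptions made purely for illustration.

        # Sketch: structural reliability as P(strength > applied stress), with
        # Monte Carlo sampling standing in for the report's numerical
        # integration. Distributions and parameters are invented.
        import random

        random.seed(1)
        N = 200_000
        failures = 0
        for _ in range(N):
            strength = random.gauss(100.0, 8.0)   # scatter in material strength
            stress = random.gauss(70.0, 10.0)     # scatter in applied load
            if stress >= strength:
                failures += 1

        print(f"estimated reliability = {1 - failures / N:.4f}")
        # Normal-normal closed form: Phi((100 - 70) / sqrt(8**2 + 10**2)) ~ 0.990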

  5. Business of reliability

    NASA Astrophysics Data System (ADS)

    Engel, Pierre

    1999-12-01

    The presentation is organized around three themes: (1) The decrease in reception equipment costs allows non-remote-sensing organizations to access a technology until recently reserved to a scientific elite. What this means is the rise of 'operational' executive agencies considering space-based technology and operations as a viable input to their daily tasks. This is possible thanks to totally dedicated ground receiving entities focusing on one application for themselves, rather than serving a vast community of users. (2) The multiplication of earth observation platforms will form the base for reliable technical and financial solutions. One obstacle to the growth of the earth observation industry is the variety of policies (commercial versus non-commercial) ruling the distribution of the data and value-added products. In particular, the high volume of data sales required for a return on investment conflicts with the traditionally low-volume data use of most applications. Constant access to data sources presupposes monitoring needs as well as technical proficiency. (3) Large-volume use of data coupled with low-cost equipment is only possible when the technology has proven reliable, in terms of application results, financial risks, and data supply. Each of these factors is reviewed. The expectation is that international cooperation between agencies and private ventures will pave the way for future business models. As an illustration, the presentation proposes some recent non-traditional monitoring applications that may lead to significant use of earth observation data, value-added products, and services: flood monitoring, ship detection, marine oil pollution deterrent systems, and rice acreage monitoring.

  6. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion systems mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided, one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  7. Improving Reliability of a Residency Interview Process

    PubMed Central

    Serres, Michelle L.; Gundrum, Todd E.

    2013-01-01

    Objective. To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. Methods. In phase 1 of the study, the authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. Results. In phase 1, the evaluation form had a reliability of 0.98 with a person separation of 6.56; reproducibly, the form separated applicants into 6 distinct groups. Using that form in phases 2 and 3, the largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while the confirmatory phase 3 value was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. Conclusion. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact of content specificity. PMID:24159209

  8. Parallel computers and parallel algorithms for CFD: An introduction

    NASA Astrophysics Data System (ADS)

    Roose, Dirk; Vandriessche, Rafael

    1995-10-01

    This text presents a tutorial on those aspects of parallel computing that are important for the development of efficient parallel algorithms and software for computational fluid dynamics. We first review the main architectural features of parallel computers and we briefly describe some parallel systems on the market today. We introduce some important concepts concerning the development and the performance evaluation of parallel algorithms. We discuss how work load imbalance and communication costs on distributed memory parallel computers can be minimized. We present performance results for some CFD test cases. We focus on applications using structured and block structured grids, but the concepts and techniques are also valid for unstructured grids.

  9. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

  10. Parallel Object-Oriented Methods and Applications

    NASA Astrophysics Data System (ADS)

    Karmesin, S. R.; Reynders, J. V. W.; Cummings, J. C.; Williams, T. J.; Humphrey, B. F.; Beckman, P. H.

    1997-08-01

    POOMA is an object-oriented C++ class library for doing large scale scientific computations. At its highest level it provides the user with data-parallel objects for simulating PDEs and containers of particles for kinetic simulations. These objects translate data-parallel statements into local computation, communication, and synchronization for execution on a variety of serial or parallel architectures. This allows development on workstations and execution on systems up through massively parallel supercomputers. POOMA's data-parallel objects allow construction of algorithms at a higher level and the separation of physics, coordinate system, and parallel communication code. POOMA uses expression templates for computational efficiency, and internally can use message passing or other forms of communication between processors. We discuss the POOMA framework design, present performance comparisons with F90, F77, and C, and compare and contrast the advantages and disadvantages of each for this sort of system. We also present results from real applications built on POOMA for multimaterial hydrodynamics, neutronics, and fusion plasmas.

  11. Global image processing operations on parallel architectures

    NASA Astrophysics Data System (ADS)

    Webb, Jon A.

    1990-09-01

    Image processing operations fall into two classes: local and global. Local operations affect only a small corresponding area in the output image, and include edge detection, smoothing, and point operations. In global operations, any input pixel can affect any or a large number of output data. Global operations include histogram, image warping, Hough transform, and connected components. Parallel architectures offer a promising method for speeding up these image processing operations. Local operations are easy to parallelize, because the input data can be divided among processors, processed in parallel separately, and the outputs combined by concatenation. Global operations are harder to parallelize. In fact, some global operations cannot be executed in parallel; it is possible for a global operation to require serial execution for correct computation of the result. However, an important class of global operations, namely those that are reversible (those that can be computed in forward or reverse order on a data structure), can be computed in parallel using a restricted form of divide and conquer called split and merge. These reversible operations include the global operations mentioned above, and many more besides, even such non-image-processing operations as parsing, string search, and sorting. The split and merge method will be illustrated by applying it to these algorithms. Performance analysis of the method on different architectures (one-dimensional, two-dimensional, and binary tree processor arrays) will be demonstrated.
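
    A histogram illustrates the split-and-merge pattern for a reversible operation: each worker histograms its share of the pixels, and the partial histograms merge by elementwise addition. The Python sketch below illustrates the pattern only; the bin count and synthetic image are invented, and the paper's processor-array implementations differ.

        # Sketch of split-and-merge for a reversible global operation:
        # each worker histograms its share of the pixels, and the partial
        # histograms merge by elementwise addition.
        from concurrent.futures import ProcessPoolExecutor

        BINS = 16

        def local_histogram(pixels):
            hist = [0] * BINS
            for p in pixels:          # p assumed in 0..255
                hist[p * BINS // 256] += 1
            return hist

        def merge(h1, h2):
            return [a + b for a, b in zip(h1, h2)]

        def parallel_histogram(pixels, workers=4):
            size = (len(pixels) + workers - 1) // workers
            parts = [pixels[i:i + size] for i in range(0, len(pixels), size)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                result = [0] * BINS
                for h in pool.map(local_histogram, parts):
                    result = merge(result, h)
            return result

        if __name__ == "__main__":
            image = [(i * 37) % 256 for i in range(10_000)]   # synthetic pixels
            print(parallel_histogram(image))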

  12. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
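
    The DFT-IDFT overlap-and-save method underlying these architectures can be sketched in software: filter a long signal in small FFT blocks and discard the circularly wrapped first M-1 outputs of each block. The NumPy sketch below uses invented sizes and a trivial filter, and stands in for, rather than reproduces, the VLSI architectures described.

        # Sketch of DFT/IDFT overlap-and-save filtering: small FFT blocks,
        # with the first M-1 outputs of each block discarded. Sizes invented.
        import numpy as np

        def overlap_save(x, h, nfft=64):
            M = len(h)
            L = nfft - M + 1                      # valid outputs per block
            H = np.fft.fft(h, nfft)
            xp = np.concatenate([np.zeros(M - 1), x])
            y = []
            for start in range(0, len(x), L):
                block = xp[start:start + nfft]
                if len(block) < nfft:
                    block = np.pad(block, (0, nfft - len(block)))
                yb = np.fft.ifft(np.fft.fft(block) * H).real
                y.append(yb[M - 1:])              # drop circularly wrapped samples
            return np.concatenate(y)[:len(x)]

        x = np.random.default_rng(0).standard_normal(1000)
        h = np.ones(8) / 8.0                      # simple moving-average filter
        assert np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)])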

  13. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  14. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
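
    The claim's two-phase structure, first determining which grid portions bound each object, then populating each portion, can be sketched with plain loops standing in for the per-processor work; the 1-D interval objects and portion count below are invented for illustration.

        # Sketch of the claim's two-phase structure, with plain loops standing
        # in for the per-processor work. Objects are 1-D intervals and the
        # grid is split into n contiguous portions; all values are invented.

        def populate(grid_min, grid_max, n, objects):
            width = (grid_max - grid_min) / n
            # Phase 1: for each object, find which grid portions bound it.
            bounded_by = {i: [] for i in range(n)}
            for obj_id, (lo, hi) in enumerate(objects):
                first = max(0, int((lo - grid_min) / width))
                last = min(n - 1, int((hi - grid_min) / width))
                for portion in range(first, last + 1):
                    bounded_by[portion].append(obj_id)
            # Phase 2: each portion is populated with its bounded objects
            # (in the patent, each processor owns one distinct portion).
            return bounded_by

        objects = [(0.5, 1.5), (3.2, 3.9), (1.9, 6.1)]
        print(populate(0.0, 8.0, 4, objects))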

  15. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Gryphon, Coranth D.; Miller, Mark D.

    1991-01-01

    PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.

  16. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton On Hudson, NY); Chiu, George (Cross River, NY); Cipolla, Thomas M. (Katonah, NY); Coteus, Paul W. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hall, Shawn (Pleasantville, NY); Haring, Rudolf A. (Cortlandt Manor, NY); Heidelberger, Philip (Cortlandt Manor, NY); Kopcsay, Gerard V. (Yorktown Heights, NY); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY); Takken, Todd (Brewster, NY)

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  17. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

  18. ASSEMBLY OF PARALLEL PLATES

    DOEpatents

    Groh, E.F.; Lennox, D.H.

    1963-04-23

    This invention is concerned with a rigid assembly of parallel plates in which keyways are stamped out along the edges of the plates and a self-retaining key is inserted into aligned keyways. Spacers having similar keyways are included between adjacent plates. The entire assembly is locked into a rigid structure by fastening only the outermost plates to the ends of the keys. (AEC)

  19. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users' Guide. The focus of this document is, to the extent possible, to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

  20. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  1. A New Approach to Parallel Interference Cancellation for CDMA

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Martin

    1996-01-01

    This paper introduces an improved nonlinear parallel interference cancellation scheme that significantly reduces the degrading effect of user interference, with implementation complexity linear in the number of users. The scheme operates on the fact that parallel processing simultaneously removes from each user a part of the interference produced by the remaining users accessing the channel, the amount removed being proportional to their reliability. The parallel processing can be done in multiple stages. Simulation results are given for a multitude of different situations, in particular those cases for which the analysis is too complex.
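
    A minimal sketch of one stage of partial parallel interference cancellation for a toy synchronous CDMA system follows; the fixed cancellation weight stands in for the reliability-proportional scaling described above, and the user count, spreading codes, and noise level are all invented for illustration.

        # Sketch of multistage partial parallel interference cancellation for
        # a toy synchronous CDMA system. The fixed weight stands in for the
        # reliability-proportional scaling; all values are invented.
        import numpy as np

        rng = np.random.default_rng(0)
        K, N = 4, 16                                  # users, spreading length
        codes = rng.choice([-1.0, 1.0], size=(K, N)) / np.sqrt(N)
        bits = rng.choice([-1.0, 1.0], size=K)
        r = codes.T @ bits + 0.1 * rng.standard_normal(N)   # received signal

        est = codes @ r                               # matched-filter soft outputs
        weight = 0.7                                  # partial-cancellation weight
        for _ in range(3):                            # multiple PIC stages
            decisions = np.sign(est)
            # interference seen by each user from all the other users
            interference = codes @ (codes.T @ decisions) - decisions
            est = codes @ r - weight * interference

        print("decided bits:", np.sign(est), " true bits:", bits)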

  2. A massively asynchronous, parallel brain

    PubMed Central

    Zeki, Semir

    2015-01-01

    Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously, with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain. PMID:25823871

  3. JSD: Parallel Job Accounting on the IBM SP2

    NASA Technical Reports Server (NTRS)

    Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)

    1995-01-01

    The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.

  4. Camera calibration based on parallel lines

    NASA Astrophysics Data System (ADS)

    Li, Weimin; Zhang, Yuhai; Zhao, Yu

    2015-01-01

    Nowadays, computer vision is widely used in our daily life. In order to get reliable information, camera calibration cannot be neglected. Traditional camera calibration cannot be used in practice because the accurate coordinate information of the referenced control points is unavailable. In this article, we present a camera calibration algorithm which can determine the intrinsic parameters as well as the extrinsic parameters. The algorithm is based on parallel lines in photos, which are common in real-life photographs; that is, we can recover the intrinsic as well as the extrinsic parameters from information picked from ordinary photos. In more detail, we use two pairs of parallel lines to compute the vanishing points. If these parallel line pairs are perpendicular, the two vanishing points are conjugate with each other, and we can use several views (at least 5) to determine the image of the absolute conic (IAC). Then we can easily get the intrinsic parameters by Cholesky factorization of the matrix of the IAC. When a vanishing point is connected with the camera optical center, the resulting line is parallel to the original lines in the scene plane; from this we can get the extrinsic parameters R and T. Both the simulation and the experiment results meet our expectations.
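
    The vanishing-point step the algorithm rests on is compact in homogeneous coordinates: the line through two image points is their cross product, and the two lines of a parallel pair meet at the vanishing point. The sketch below uses invented image coordinates.

        # Sketch of the vanishing-point step in homogeneous coordinates:
        # line through two points = cross product of the points, and the
        # intersection of two lines = cross product of the lines.
        # Point coordinates are invented.
        import numpy as np

        def line(p, q):
            return np.cross([*p, 1.0], [*q, 1.0])

        def intersect(l1, l2):
            v = np.cross(l1, l2)
            return v[:2] / v[2]          # back to inhomogeneous coordinates

        # Two images of parallel scene lines (e.g., rails converging in a photo):
        l1 = line((0.0, 0.0), (400.0, 190.0))
        l2 = line((0.0, 100.0), (400.0, 240.0))
        print("vanishing point:", intersect(l1, l2))   # (800.0, 380.0)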

  5. Using multivariate generalizability theory to assess the effect of content stratification on the reliability of a performance assessment.

    PubMed

    Keller, Lisa A; Clauser, Brian E; Swanson, David B

    2010-12-01

    In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are randomly sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring this stratification in the reliability analysis results in an underestimate of "parallel forms" reliability, and an overestimate of the person-by-task component. This research explores the effect of representing and misrepresenting the stratification appropriately in estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that the proper specification of the analytic design is essential in yielding the proper information both about the generalizability of the assessment and the standard error of measurement. Further, illustrative D studies present the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed. PMID:20509047

  6. Parallel State Estimation Assessment with Practical Data

    SciTech Connect

    Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

    2013-07-31

    This paper presents a parallel state estimation (PSE) implementation using a preconditioned gradient algorithm and an orthogonal decomposition-based algorithm. Preliminary tests against a commercial Energy Management System (EMS) State Estimation (SE) tool using real-world data are performed. The results show that while the preconditioned gradient algorithm can solve the SE problem more quickly with the help of parallel computing techniques, it might not be well suited to real-world data because of the large condition number of the gain matrix introduced by the wide range of measurement weights. With the help of the PETSc package, and considering one iteration of the SE process, the orthogonal decomposition-based PSE algorithm can achieve a 5-20 times speedup compared with the commercial EMS tool. It is very promising that the developed PSE can solve the SE problem for large power systems at the SCADA rate, to improve grid reliability.
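
    As a hedged illustration of the kind of solver being parallelized, the sketch below applies a Jacobi-preconditioned conjugate gradient iteration to a small invented symmetric positive-definite gain matrix; the paper's actual algorithms, data, and PETSc-based implementation differ.

        # Sketch: Jacobi-preconditioned conjugate gradient for a symmetric
        # positive-definite system G x = b, of the kind that arises from the
        # weighted-least-squares normal equations in state estimation.
        # The tiny G and b are invented for illustration.
        import numpy as np

        def pcg(G, b, tol=1e-10, max_iter=100):
            M_inv = 1.0 / np.diag(G)            # Jacobi preconditioner
            x = np.zeros_like(b)
            r = b - G @ x
            z = M_inv * r
            p = z.copy()
            for _ in range(max_iter):
                Gp = G @ p
                alpha = (r @ z) / (p @ Gp)
                x += alpha * p
                r_new = r - alpha * Gp
                if np.linalg.norm(r_new) < tol:
                    break
                z_new = M_inv * r_new
                beta = (r_new @ z_new) / (r @ z)
                p = z_new + beta * p
                r, z = r_new, z_new
            return x

        G = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
        print(pcg(G, np.array([1.0, 2.0, 3.0])))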

  7. Human reliability analysis

    SciTech Connect

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory, drawing upon reliability analysis, psychology, human factors engineering, and statistics, and integrating elements of these fields within a systems framework. The book provides a history of human reliability analysis and includes examples of the application of the systems approach.

  8. Science Grade 7, Long Form.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Bureau of Curriculum Development.

    The Grade 7 Science course of study was prepared in two parallel forms: a short form designed for students who have achieved a high measure of success in previous science courses, and a long form for those who have not been able to maintain the pace. Both forms contain similar content. The Grade 7 guide is the first in a three-year sequence for

  9. Parallel computational complexity in statistical physics

    NASA Astrophysics Data System (ADS)

    Moriarty, Kenneth J.

    1998-12-01

    We examine several models in statistical physics from the perspective of parallel computational complexity theory. In each case, we describe a parallel method of simulation that is faster than current sequential methods. We find that parallel complexity results are in accord with intuitive notions of physical complexity for the models studied. First, we investigate the parallel complexity of sampling Lorentz lattice gas (LLG) trajectories. We show that the single-particle LLG can be simulated in highly parallel fashion, in contrast to multi-particle lattice gases which most likely cannot. In the case of diffusion-limited aggregation (DLA), we show that a polynomial speedup is feasible even though a highly parallel algorithm probably is not. In particular, we present a polynomial-processor algorithm for generating DLA clusters that runs in a time sub-linear in the cluster mass. We relate the dynamic exponent of our parallel DLA algorithm to static scaling exponents of DLA and give numerical estimates. We investigate the parallel complexity of the invaded cluster (IC) algorithm and find that a single sweep can be carried out in highly parallel fashion but that a polynomial number of sweeps most likely cannot be compressed into a polylogarithmic number of parallel steps. We argue that quantities measured for a sub-system of size l, using the IC algorithm, should exhibit a crossover to Swendsen-Wang behavior for l sufficiently smaller than the system size L, and we propose a scaling form to describe this phenomenon. By studying sub-systems, we observe critical slowing for the 2d Ising and 3-state Potts models. We define the dynamic exponent of the IC algorithm according to τ_{ε,max} ~ L^{z_IC}, where τ_{ε,max} is the maximum value of the energy autocorrelation time attained over all sub-system sizes for a given L. We give numerical estimates of z_IC for the 2d Ising and 3-state Potts models which result in improved upper bounds on the parallel complexity of sampling the critical points of these systems.

  10. Information hiding in parallel programs

    SciTech Connect

    Foster, I.

    1992-01-30

    A fundamental principle in program design is to isolate difficult or changeable design decisions. Application of this principle to parallel programs requires identification of decisions that are difficult or subject to change, and the development of techniques for hiding these decisions. We experiment with three complex applications, and identify mapping, communication, and scheduling as areas in which decisions are particularly problematic. We develop computational abstractions that hide such decisions, and show that these abstractions can be used to develop elegant solutions to programming problems. In particular, they allow us to encode common structures, such as transforms, reductions, and meshes, as software cells and templates that can be reused in different applications. An important characteristic of these structures is that they do not incorporate mapping, communication, or scheduling decisions: these aspects of the design are specified separately, when composing existing structures to form applications. This separation of concerns allows the same cells and templates to be reused in different contexts.

  11. Device for balancing parallel strings

    DOEpatents

    Mashikian, Matthew S. (Storrs, CT)

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  12. Recalibrating software reliability models

    NASA Technical Reports Server (NTRS)

    Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John

    1989-01-01

    In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.

  13. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  14. System and Software Reliability (C103)

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed, but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared, and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that products meet NASA requirements for reliability measurement. For the new models for the software component of the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool with better models would greatly add value in assessing GSFC projects.

  15. Reliability quantification and visualization for electric microgrids

    NASA Astrophysics Data System (ADS)

    Panwar, Mayank

    The electric grid in the United States is undergoing modernization from the state of an aging infrastructure of the past to a more robust and reliable power system of the future. The primary efforts in this direction have come from the federal government through the American Recovery and Reinvestment Act of 2009 (Recovery Act). This has provided the U.S. Department of Energy (DOE) with $4.5 billion to develop and implement programs through DOE's Office of Electricity Delivery and Energy Reliability (OE) over a period of 5 years (2008-2012). This was initially a part of Title XIII of the Energy Independence and Security Act of 2007 (EISA), which was later modified by the Recovery Act. As a part of DOE's Smart Grid Programs, the Smart Grid Investment Grants (SGIG) and Smart Grid Demonstration Projects (SGDP) were developed as two of the largest programs, with federal grants of $3.4 billion and $600 million respectively. The Renewable and Distributed Systems Integration (RDSI) demonstration projects were launched in 2008 with the aim of reducing peak electricity demand by 15 percent at distribution feeders. Nine such projects, located around the nation, were competitively selected. The City of Fort Collins, in cooperative partnership with other federal and commercial entities, was identified to research, develop, and demonstrate a 3.5 MW integrated mix of heterogeneous distributed energy resources (DER) to reduce peak load on two feeders by 20-30 percent. This project was called FortZED RDSI and provided an opportunity to demonstrate the integrated operation of a group of assets, including demand response (DR), as a single controllable entity, which is often called a microgrid. As per IEEE Standard 1547.4-2011 (IEEE Guide for Design, Operation, and Integration of Distributed Resource Island Systems with Electric Power Systems), a microgrid can be defined as an electric power system which has the following characteristics: (1) DR and load are present, (2) it has the ability to disconnect from and parallel with the area Electric Power System (EPS), (3) it includes the local EPS and may include portions of the area EPS, and (4) it is intentionally planned. A more reliable electric power grid requires microgrids to operate in tandem with the EPS. Reliability can be quantified through various metrics for performance measurement; in North America this is done through North American Electric Reliability Corporation (NERC) metrics. The microgrid differs significantly from the traditional EPS, especially at the asset level, due to heterogeneity in assets, so its performance cannot be quantified by the same metrics as used for the EPS. Some of the NERC metrics are calculated and interpreted in this work to quantify performance for a single asset and for a group of assets in a microgrid, and two more metrics are introduced for system-level performance quantification. The next step is a better representation of the large amount of data generated by the microgrid. Visualization is one such form of representation, which is explored in detail, and a graphical user interface (GUI) is developed as a deliverable tool to the operator for informative decision making and planning. Electronic appendices I and II contain data and MATLAB program codes for the analysis and visualization in this work.

  16. Parallel State Estimation Assessment with Practical Data

    SciTech Connect

    Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

    2014-10-31

    This paper presents a full-cycle parallel state estimation (PSE) implementation using a preconditioned conjugate gradient algorithm. The developed code is able to solve large-size power system state estimation within 5 seconds using real-world data, comparable to the Supervisory Control And Data Acquisition (SCADA) rate. This achievement allows the operators to know the system status much faster to help improve grid reliability. Case study results of the Bonneville Power Administration (BPA) system with real measurements are presented. The benefits of fast state estimation are also discussed.

  17. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
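
    The tables can be generated directly from the parallel-resistance formula R = R1*R2/(R1+R2); the short sketch below enumerates small resistor pairs whose parallel combination is a whole number, with the value range invented for illustration.

        # Sketch: enumerate resistor pairs whose parallel combination is a
        # whole number, the idea behind the article's tables. Range invented.
        def parallel_resistance(r1, r2):
            return (r1 * r2) / (r1 + r2)

        for r1 in range(1, 13):
            for r2 in range(r1, 13):
                total = parallel_resistance(r1, r2)
                if total == int(total):
                    print(f"{r1} ohm || {r2} ohm = {int(total)} ohm")
        # e.g. 3 ohm || 6 ohm = 2 ohm, 4 ohm || 12 ohm = 3 ohm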

  18. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  19. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to checkout the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a request to checkout for each plug-in in the feature has been inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to checkout now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any Eclipse-based repository with a similar structure. It also can apply build parameters and preferences automatically at the end of the checkout.
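
    A minimal sketch of the pattern described, parsing a feature for its plug-ins and checking them out through a bounded thread pool, is given below; the feature-file structure, repository URL, and the use of svn as the checkout command are assumptions for illustration, not PEPC's actual code.

        # Sketch of the PEPC pattern: parse a feature for its plug-in list,
        # then check the plug-ins out through a thread pool of configurable
        # size. Repository URL and checkout command are invented stand-ins.
        import subprocess
        import xml.etree.ElementTree as ET
        from concurrent.futures import ThreadPoolExecutor

        def plugins_from_feature(feature_xml):
            root = ET.parse(feature_xml).getroot()
            return [p.get("id") for p in root.iter("plugin")]

        def checkout(plugin_id):
            # stand-in for the real SCM checkout of one plug-in
            subprocess.run(
                ["svn", "checkout", f"https://repo.example.org/{plugin_id}"],
                check=True)

        def parallel_checkout(feature_xml, threads=8):
            with ThreadPoolExecutor(max_workers=threads) as pool:
                list(pool.map(checkout, plugins_from_feature(feature_xml)))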

  20. Fastpath Speculative Parallelization

    NASA Astrophysics Data System (ADS)

    Spear, Michael F.; Kelsey, Kirk; Bai, Tongxin; Dalessandro, Luke; Scott, Michael L.; Ding, Chen; Wu, Peng

    We describe Fastpath, a system for speculative parallelization of sequential programs on conventional multicore processors. Our system distinguishes between the lead thread, which executes at almost-native speed, and speculative threads, which execute somewhat slower. This allows us to achieve nontrivial speedup, even on two-core machines. We present a mathematical model of potential speedup, parameterized by application characteristics and implementation constants. We also present preliminary results gleaned from two different Fastpath implementations, each derived from an implementation of software transactional memory.

  1. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to particular classes of problems. The architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  2. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  3. Parallel programming and the poker programming environment. Interim technical report

    SciTech Connect

    Snyder, L.

    1984-04-01

    Parallel programming is described as the conversion of an abstract, machine-independent algorithm to a form, called a program, suitable for execution on a particular computer. The conversion activity is simplified where the form of the abstraction is close to the form required by the programming system. Five mechanisms are identified as commonly occurring in algorithm specifications. The Poker Parallel Programming Environment is shown to support these five mechanisms conveniently; thus the conversion is easy and the parallel programming is simple. The Poker environment is described and examples are provided. An analysis of the efficiency of the programming facilities provided by Poker is given; all of them appear to be quite efficient.

  4. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  5. Hawaii electric system reliability.

    SciTech Connect

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability "worth" and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  6. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  7. Synchronous Parallel Kinetic Monte Carlo

    SciTech Connect

    Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.

  8. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  9. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  10. Parallel Harness for Informatic Stream Hashing

    Energy Science and Technology Software Center (ESTSC)

    2012-09-11

    PHISH is a lightweight framework which a set of independent processes can use to exchange data as they run on the same desktop machine, on processors of a parallel machine, or on different machines across a network. This enables them to work in a coordinated parallel fashion to perform computations on either streaming, archived, or self-generated data. The PHISH distribution includes a simple, portable library for performing data exchanges in useful patterns either via MPI message-passing or ZMQ sockets. PHISH input scripts are used to describe a data-processing algorithm, and additional tools provided in the PHISH distribution convert the script into a form that can be launched as a parallel job.

  11. Parallel Harness for Informatic Stream Hashing

    SciTech Connect

    2012-09-11

    PHISH is a lightweight framework which a set of independent processes can use to exchange data as they run on the same desktop machine, on processors of a parallel machine, or on different machines across a network. This enables them to work in a coordinated parallel fashion to perform computations on either streaming, archived, or self-generated data. The PHISH distribution includes a simple, portable library for performing data exchanges in useful patterns either via MPI message-passing or ZMQ sockets. PHISH input scripts are used to describe a data-processing algorithm, and additional tools provided in the PHISH distribution convert the script into a form that can be launched as a parallel job.
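
    The PUSH/PULL socket pattern that PHISH can layer its data exchanges on is easy to demonstrate with ZMQ directly. The sketch below is a generic Python/pyzmq illustration, not the PHISH API; the endpoint and the hashing stage are invented for the example.

        # Generic stream-hashing pipeline over ZMQ PUSH/PULL (not the PHISH API).
        import hashlib
        import threading
        import zmq

        ENDPOINT = "tcp://127.0.0.1:5557"    # hypothetical endpoint

        def producer(lines):
            push = zmq.Context.instance().socket(zmq.PUSH)
            push.bind(ENDPOINT)
            for line in lines:
                push.send_string(line)
            push.send_string("")             # empty message marks end of stream
            push.close()

        def hasher():
            pull = zmq.Context.instance().socket(zmq.PULL)
            pull.connect(ENDPOINT)
            while True:
                msg = pull.recv_string()
                if not msg:
                    break
                print(hashlib.sha256(msg.encode()).hexdigest()[:16], msg)
            pull.close()

        worker = threading.Thread(target=hasher)
        worker.start()
        producer(["alpha", "beta", "gamma"])
        worker.join()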

  12. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  13. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, the author developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
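
    The import-and-index pattern described above looks roughly like the following pymongo sketch. The database and collection names, fields, and query are hypothetical examples, not the tool's actual schema.

        # Sketch of per-attribute indexing and metadata search with pymongo.
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")    # assumed cluster address
        records = client["archive_metadata"]["user_alice"]   # one collection per user

        # Import one file's metadata, then index each attribute for fast search.
        records.insert_one({"path": "/archive/run42/out.h5",
                            "owner": "alice",
                            "size": 123456,
                            "tags": ["simulation", "run42"]})
        for attribute in ("path", "owner", "size", "tags"):
            records.create_index(attribute)

        # A user-defined metadata query, as the FUSE layer might issue it.
        for doc in records.find({"tags": "run42", "size": {"$gt": 100000}}):
            print(doc["path"])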

  14. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  15. Benchmarking massively parallel architectures

    SciTech Connect

    Lubeck, O.; Moore, J.; Simmons, M.; Wasserman, H.

    1993-01-01

    The purpose of this paper is to summarize some initial experiences related to measuring the performance of massively parallel processors (MPPs) at Los Alamos National Laboratory (LANL). Actually, the range of MPP architectures the authors have used is rather limited, being confined mostly to the Thinking Machines Corporation (TMC) Connection Machine CM-2 and CM-5. Some very preliminary work has been carried out on the Kendall Square KSR-1, and efforts related to other machines, such as the Intel Paragon and the soon-to-be-released CRAY T3D, are planned. This paper will concentrate more on methodology rather than discuss specific architectural strengths and weaknesses; the latter is expected to be the subject of future reports. MPP benchmarking is a field in critical need of structure and definition. As the authors have stated previously, such machines have enormous potential, and there is certainly a dire need for orders of magnitude computational power over current supercomputers. However, performance reports for MPPs must emphasize actual sustainable performance from real applications in a careful, responsible manner. Such has not always been the case. A recent paper has described in some detail the problem of potentially misleading performance reporting in the parallel scientific computing field. Thus, in this paper, the authors briefly offer a few general ideas on MPP performance analysis.

  16. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  17. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement activities has been developed to provide an integrated framework for future methods development. Some of the major components of the SCALE parallel computing development plan are parallelization and multithreading of computationally intensive modules and redesign of the fundamental SCALE computational architecture.

  18. Parallel computations in hydro acoustics

    NASA Astrophysics Data System (ADS)

    Pelz, Richard B.

    1994-10-01

    This research concerns the algorithmic development, computer implementation, and direct numerical simulation of incompressible and compressible flow of naval relevance. Calculations were executed on a class of current-generation multiprocessors. Pseudospectral methods were used exclusively. The lack of parallel algorithms critical to the effective implementation of spectral methods on parallel computers necessitated the development of parallel FFT algorithms for real, conjugate symmetric, and real symmetric sequences. These algorithms apply to spectral methods but also to many other areas of scientific computing. The last algorithm, the parallel fast discrete cosine transform, is used extensively in image and signal processing. The parallel Fourier pseudospectral method for the incompressible Navier-Stokes equations was developed and implemented on many multiprocessors. Reconnection of orthogonally interacting vortex tubes was then investigated using the algorithm on parallel computers as well as vector supercomputers. The parallel Fourier pseudospectral method for the compressible Navier-Stokes equations was also developed. Shock/vortex interactions in two dimensions were investigated.

  19. Extended Parallelism Models for Optimization on Massively Parallel Computers

    SciTech Connect

    Eldred, M.S.; Schimel, B.D.

    1999-05-24

    Single-level parallel optimization approaches, those in which either the simulation code executes in parallel or the optimization algorithm invokes multiple simultaneous single-processor analyses, have been investigated previously and been shown to be effective in reducing the time required to compute optimal solutions. However, these approaches have clear performance limitations that prevent effective scaling with the thousands of processors available in massively parallel supercomputers. In more recent work, a capability has been developed for multilevel parallelism in which multiple instances of multiprocessor simulations are coordinated simultaneously. This implementation employs a master-slave approach using the Message Passing Interface (MPI) within the DAKOTA software toolkit. Mathematical analysis on achieving peak efficiency in multilevel parallelism has shown that the most effective processor partitioning scheme is the one that limits the size of multiprocessor simulations in favor of concurrent execution of multiple simulations. That is, if both coarse-grained and fine-grained parallelism can be exploited, then preference should be given to the coarse-grained parallelism. This analysis was verified in multilevel parallel computational experiments on networks of workstations (NOWs) and on the Intel TeraFLOPS massively parallel supercomputer. In current work, methods for exploiting additional coarse-grained parallelism in optimization are being investigated so that fine-grained efficiency losses can be further minimized. These activities are focusing on both algorithmic coarse-grained parallelism (multiple independent function evaluations) through the development of speculative gradient methods and concurrent iterator strategies, and on function evaluation coarse-grained parallelism (multiple separable simulations within a function evaluation) through the development of general partitioning and nested synchronization facilities. The net result is a total of four separate levels of parallelism which can minimize efficiency losses and achieve near linear scaling on massively parallel computers.

  20. A Bayesian approach to reliability and confidence

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1989-01-01

    The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) is noted, along with the development of the Extended Orbiter Duration--Weakest Link study, which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of the NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component, and hence, by a simple extension, for a system of components, in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property, since reliability usually decreases markedly as the parts degrade over time. While researchers have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case must be attacked by computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that, as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
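
    As a concrete illustration of a closed-form Bayesian reliability estimate, consider the textbook conjugate case (shown here for illustration; it is not necessarily the form derived in the study): for exponential lifetimes with unknown failure rate λ and prior λ ~ Gamma(α, β), observing n failures in total time on test T gives the posterior Gamma(α + n, β + T), and the posterior-mean reliability at time t has the closed form

        \hat{R}(t) \;=\; \mathbb{E}\left[e^{-\lambda t} \mid \text{data}\right] \;=\; \left(\frac{\beta + T}{\beta + T + t}\right)^{\alpha + n}.

    When the failure rate varies over time, as for degrading mechanical parts, the exponential likelihood is replaced by, e.g., a Weibull one, and the posterior generally loses this closed form--the situation the abstract says must be handled numerically.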

  1. Toward Parallel Document Clustering

    SciTech Connect

    Mogill, Jace A.; Haglin, David J.

    2011-09-01

    A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a distance metric that obeys the triangle inequality; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into anchors around reference documents called pivots. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a Bag of Words program for text corpora, and initial performance results of an end-to-end document-processing workflow are reported.
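
    The saving comes from the triangle inequality: if a document's cached distance to a pivot already differs from the query-to-pivot distance by more than the search radius, the document-to-query distance cannot be within the radius and need not be computed. A generic sketch of this pruning rule (not the Anchors Hierarchy implementation itself):

        # Triangle-inequality pruning around a single pivot (illustrative only).
        def neighbors_within(query, docs, pivot, radius, dist):
            """Indices of docs within `radius` of `query`, skipping distance
            computations that the triangle inequality proves unnecessary."""
            d_qp = dist(query, pivot)
            d_dp = [dist(d, pivot) for d in docs]    # cached once per pivot
            hits = []
            for i, d in enumerate(docs):
                if abs(d_qp - d_dp[i]) > radius:     # lower bound on dist(query, d)
                    continue                         # pruned without a comparison
                if dist(query, d) <= radius:
                    hits.append(i)
            return hits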

  2. Parallel tridiagonal equation solvers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
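
    Of the three, cyclic odd-even reduction is the easiest to sketch. The Python/NumPy version below is a serial illustration of the scheme for a system of n = 2^k - 1 unknowns, written for clarity rather than for an array machine; on ILLIAC-style hardware each inner loop is what would execute in parallel across the array.

        # Cyclic (odd-even) reduction for a tridiagonal system, serial sketch.
        # Solves a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] with a[0] = c[-1] = 0.
        import numpy as np

        def cyclic_reduction(a, b, c, d):
            a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
            n = len(b)
            assert n & (n + 1) == 0, "size must be 2**k - 1"
            stride = 1
            while stride < n:                          # eliminate odd-level rows
                for i in range(2 * stride - 1, n, 2 * stride):
                    im, ip = i - stride, i + stride
                    alpha = a[i] / b[im]
                    a[i] = -alpha * a[im]
                    b[i] -= alpha * c[im]
                    d[i] -= alpha * d[im]
                    if ip < n:
                        gamma = c[i] / b[ip]
                        c[i] = -gamma * c[ip]
                        b[i] -= gamma * a[ip]
                        d[i] -= gamma * d[ip]
                stride *= 2
            x = np.zeros(n)
            stride = (n + 1) // 2
            while stride >= 1:                         # back substitution
                for i in range(stride - 1, n, 2 * stride):
                    xm = x[i - stride] if i >= stride else 0.0
                    xp = x[i + stride] if i + stride < n else 0.0
                    x[i] = (d[i] - a[i] * xm - c[i] * xp) / b[i]
                stride //= 2
            return x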

  3. Unified Parallel Software

    Energy Science and Technology Software Center (ESTSC)

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. This consists of:
    o libups.a C/Fortran callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF).
    o libuserd-HDF.so EnSight user-defined reader for visualizing data files written with UPS File IO.
    o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl Executables/scripts to get information from data files and to simplify the use of EnSight on those data files.
    o ups_io_rm/ups_io_cp Manipulate data files written with UPS File IO.
    These tools are portable to a wide variety of Unix platforms.

  4. Parallel Vegetation Stripe Formation Through Hydrologic Interactions

    NASA Astrophysics Data System (ADS)

    Cheng, Yiwei; Stieglitz, Marc; Turk, Greg; Engel, Victor

    2010-05-01

    It has long been a challenge to theoretical ecologists to describe vegetation pattern formations such as the "tiger bush" stripes and "leopard bush" spots in Niger, and the regular maze patterns often observed in bogs in North America and Eurasia. To date, most simulation models focus on reproducing the spot and labyrinthine patterns, and on the vegetation bands which form perpendicular to surface and groundwater flow directions. Various hypotheses have been invoked to explain the formation of vegetation patterns: selective grazing by herbivores, fire, and anisotropic environmental conditions such as slope. Recently, short-distance facilitation and long-distance competition between vegetation (a.k.a. scale-dependent feedback) has been proposed as a generic mechanism for vegetation pattern formation. In this paper, we test the generality of this mechanism by employing an existing, spatially explicit, advection-reaction-diffusion type model to describe the formation of regularly spaced vegetation bands, including those that are parallel to the flow direction. Such vegetation patterns are, for example, characteristic of the ridge and slough habitat in the Florida Everglades, which is thought to have formed parallel to the prevailing surface water flow direction. To our knowledge, this is the first time that a simple model encompassing a nutrient accumulation mechanism along with biomass development and flow is used to demonstrate the formation of parallel stripes. We also explore the interactive effects of plant transpiration, slope and anisotropic hydraulic conductivity on the resulting vegetation pattern. Our results highlight the ability of the short-distance facilitation and long-distance competition mechanism to explain the formation of the different vegetation patterns beyond semi-arid regions. Therefore, we propose that the parallel stripes, like the other periodic patterns observed in both isotropic and anisotropic environments, are self-organized and form as a result of scale-dependent feedback. Results from this study improve upon the current understanding of the formation of parallel stripes and provide a more general theoretical framework for future empirical and modeling efforts.
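
    The general mechanism is easy to demonstrate numerically. The sketch below integrates a schematic Klausmeier-type advection-reaction-diffusion model in one dimension; it is a generic illustration of flow-driven banding with invented parameters, not the authors' Everglades model (whose contribution is the nutrient-accumulation mechanism for flow-parallel stripes).

        # Schematic 1-D advection-reaction-diffusion model of banded vegetation.
        # Parameters and the model form (Klausmeier-type) are illustrative only.
        import numpy as np

        n, dx, dt = 256, 0.5, 0.01
        a, m, v, D = 2.0, 0.45, 10.0, 1.0        # rainfall, mortality, flow, diffusion
        rng = np.random.default_rng(0)
        w = np.full(n, a)                        # water
        b = 0.1 + 0.01 * rng.random(n)           # biomass, small random perturbation

        for _ in range(20000):
            uptake = w * b * b                   # short-distance facilitation
            w_x = (np.roll(w, -1) - w) / dx      # upwind advection of water downslope
            b_xx = (np.roll(b, 1) - 2 * b + np.roll(b, -1)) / dx**2
            w += dt * (a - w - uptake + v * w_x) # long-distance competition for water
            b += dt * (uptake - m * b + D * b_xx)
        # For suitable parameters, `b` now shows regularly spaced bands.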

  5. Parallel TreeSPH

    NASA Astrophysics Data System (ADS)

    Davé, Romeel; Dubinski, John; Hernquist, Lars

    1997-08-01

    We describe PTreeSPH, a gravity treecode combined with an SPH hydrodynamics code designed for parallel supercomputers having distributed memory. Our computational algorithm is based on the popular TreeSPH code of Hernquist & Katz (1989) [ApJS, 70, 419]. PTreeSPH utilizes a domain decomposition procedure and a synchronous hypercube communication paradigm to build self-contained subvolumes of the simulation on each processor at every timestep. Computations then proceed in a manner analogous to a serial code. We use the Message Passing Interface (MPI) communications package, making our code easily portable to a variety of parallel systems. PTreeSPH uses individual smoothing lengths and timesteps, with a communication algorithm designed to minimize exchange of information while still providing all information required to accurately perform SPH computations. We have incorporated periodic boundary conditions with forces calculated using a quadrupole Ewald summation method, and comoving integration under a variety of cosmologies. Following algorithms presented in Katz et al. (1996) [ApJS, 105, 19], we have also included radiative cooling, heating from a parameterized ionizing background, and star formation. A cosmological simulation from z = 49 to z = 2 with 64³ gas particles and 64³ dark matter particles requires 1800 node-hours on a Cray T3D, with a communications overhead of 8%, load balanced to the ≈95% level. When used on the new Cray T3E, this code will be capable of performing cosmological hydrodynamical simulations down to z = 0 with 2×10⁶ particles, or to z = 2 with 10⁷ particles, in a reasonable amount of time. Even larger simulations will be practical in situations where the matter is not highly clustered or when periodic boundaries are not required.

  6. Parallel Imaging Microfluidic Cytometer

    PubMed Central

    Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30 times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. PMID:21704835

  7. Ultra Reliability Workshop Introduction

    NASA Technical Reports Server (NTRS)

    Shapiro, Andrew A.

    2006-01-01

    This plan is the accumulation of substantial work by a large number of individuals. The Ultra-Reliability team consists of representatives from each center who have agreed to champion the program and be the focal point for their center. A number of individuals from NASA, government agencies (including the military), universities, industry and non-governmental organizations also contributed significantly to this effort. Most of their names may be found on the Ultra-Reliability PBMA website.

  8. Parallel Ada benchmarks for the SVMS

    NASA Technical Reports Server (NTRS)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through the tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed that would measure Ada tasking efficiency on parallel architectures as well as determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools in the development of the SVMS architecture.

  9. Combinatorial parallel and scientific computing.

    SciTech Connect

    Pinar, Ali; Hendrickson, Bruce Alan

    2005-04-01

    Combinatorial algorithms have long played a pivotal enabling role in many applications of parallel computing. Graph algorithms in particular arise in load balancing, scheduling, mapping and many other aspects of the parallelization of irregular applications. These are still active research areas, mostly due to evolving computational techniques and rapidly changing computational platforms. But the relationship between parallel computing and discrete algorithms is much richer than the mere use of graph algorithms to support the parallelization of traditional scientific computations. Important, emerging areas of science are fundamentally discrete, and they are increasingly reliant on the power of parallel computing. Examples include computational biology, scientific data mining, and network analysis. These applications are changing the relationship between discrete algorithms and parallel computing. In addition to their traditional role as enablers of high performance, combinatorial algorithms are now customers for parallel computing. New parallelization techniques for combinatorial algorithms need to be developed to support these nontraditional scientific approaches. This chapter will describe some of the many areas of intersection between discrete algorithms and parallel scientific computing. Due to space limitations, this chapter is not a comprehensive survey, but rather an introduction to a diverse set of techniques and applications with a particular emphasis on work presented at the Eleventh SIAM Conference on Parallel Processing for Scientific Computing. Some topics highly relevant to this chapter (e.g. load balancing) are addressed elsewhere in this book, and so we will not discuss them here.

  10. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.

  11. Parallel structural optimization with different parallel analysis interfaces

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E. M.; Hsiung, Ching-Kuo

    1990-01-01

    The real benefit of structural optimization techniques is in the application of these techniques to large structures such as full vehicles or full aircraft. For these structures, however, the sequential computer's time and memory requirements prohibit the solutions. With the rapid development of parallel computers, parallel processing of large scale structural optimization problems is achievable. In this paper we discuss the parallel processing of structural optimization problems with parallel structural analysis. Two different types of interface between the optimization and analysis routines are developed and tested.

  12. Message based event specification for debugging nondeterministic parallel programs

    SciTech Connect

    Damohdaran-Kamal, S.K.; Francioni, J.M.

    1995-02-01

    Portability and reliability of parallel programs can be severely impaired by their nondeterministic behavior. Therefore, an effective means to precisely and accurately specify unacceptable nondeterministic behavior is necessary for testing and debugging parallel programs. In this paper we describe a class of expressions, called Message Expressions, that can be used to specify the nondeterministic behavior of message-passing parallel programs. Specification of program behavior with Message Expressions is easier than with pattern-based specification techniques in that the former does not require knowledge of run-time event order, whereas the latter depends on the user's knowledge of the run-time event order for correct specification. We also discuss our adaptation of Message Expressions for use in a dynamic distributed testing and debugging tool, called mdb, for programs written for PVM (Parallel Virtual Machine).

  13. Photon detection with parallel asynchronous processing

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1990-01-01

    An approach to photon detection with a parallel asynchronous signal processor is described. The visible or IR photon-detection capability of the silicon p(+)-n-n(+) detectors and the parallel asynchronous processing are addressed separately. This approach would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the devices would form a 2D array processor with a 2D array of inputs located directly behind a focal-plane detector array. A 2D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems can integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The possibility of multispectral image processing is addressed.

  14. Reliability Generalization of the Psychopathy Checklist Applied in Youthful Samples

    ERIC Educational Resources Information Center

    Campbell, Justin S.; Pulos, Steven; Hogan, Mike; Murry, Francie

    2005-01-01

    This study examines the average reliability of Hare Psychopathy Checklists (PCLs) adapted for use in samples of youthful offenders (aged 12 to 21 years). Two forms of reliability are examined: 18 alpha estimates of internal consistency and 18 intraclass correlation (two or more raters) estimates of interrater reliability. The results, an average

  15. Transition between viscous and collisionless regimes of parallel flows damping in RFPs*

    NASA Astrophysics Data System (ADS)

    Fiksel, G.; Mirnov, V. V.; Svidzinski, V. A.

    2009-05-01

    Strong ion heating is observed during sawtooth crashes in the Madison Symmetric Torus reversed field pinch (RFP) experiments. The mechanism of dissipation due to damping of parallel flows generated by tearing instabilities is examined. In the collisional limit, the viscous dissipation is caused by the effect of parallel viscosity in the Braginskii equations. Since the ion mean free path λ exceeds the parallel scale length of the flows, k∥⁻¹, the collisional formalism cannot provide reliable predictions. Several kinetic closures have been proposed in the past to incorporate kinetic effects into the plasma momentum equation in the form of a Landau-like integral for the effective collisionless viscous force. To investigate the transition from viscous to collisionless regimes we develop an alternative approach based on numerical solution of the kinetic equation with the Landau collision operator. Direct computational modeling of the ion heating yields the rate of dissipation and allows us to follow the transition between the two limiting cases as a function of the parameter k∥λ. *Work supported by the N.S.F. and the U.S.D.O.E.

  16. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is the Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
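
    The data-parallel structure of PCA is what a cluster implementation can exploit: each node accumulates partial sums for the covariance matrix over its share of the pixels, and only those small reduced quantities need to be communicated. The sketch below illustrates that decomposition in NumPy (a plain Python sum stands in for the MPI reduction a real cluster would use); it is illustrative, not the report's implementation.

        # Data-parallel PCA sketch: per-node partial sums, then a global reduce.
        import numpy as np

        def partial_stats(chunk):
            """One node's pass over its rows: count, feature sum, second moment."""
            return chunk.shape[0], chunk.sum(axis=0), chunk.T @ chunk

        def parallel_pca(chunks, k):
            stats = [partial_stats(c) for c in chunks]    # independent per node
            n = sum(s[0] for s in stats)
            mean = sum(s[1] for s in stats) / n           # reduced: global mean
            second = sum(s[2] for s in stats)             # reduced: second moment
            cov = (second - n * np.outer(mean, mean)) / (n - 1)
            vals, vecs = np.linalg.eigh(cov)              # symmetric eigensolve
            return vecs[:, np.argsort(vals)[::-1][:k]]    # top-k principal axes

        pixels = np.random.default_rng(1).normal(size=(1024, 64))  # pixels x bands
        components = parallel_pca(np.array_split(pixels, 4), k=8)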

  17. Parallelization of adaptive MC integrators

    NASA Astrophysics Data System (ADS)

    Kreckel, Richard

    1997-11-01

    Monte Carlo (MC) methods for numerical integration seem to be embarrassingly parallel at first sight. When adaptive schemes are applied in order to enhance convergence, however, the seemingly most natural way of replicating the whole job on each processor can potentially ruin the adaptive behaviour. Using the popular VEGAS algorithm as an example, an economical method of semi-micro parallelization with variable grain size is presented and contrasted with another, straightforward approach of macro-parallelization. A portable implementation of this semi-micro parallelization is used in the xloops project and is made publicly available.

  18. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  19. Parallel processor engine model program

    NASA Technical Reports Server (NTRS)

    Mclaughlin, P.

    1984-01-01

    The Parallel Processor Engine Model Program is a generalized engineering tool intended to aid in the design of parallel processing real-time simulations of turbofan engines. It is written in the FORTRAN programming language and executes as a subset of the SOAPP simulation system. Input/output and execution control are provided by SOAPP; however, the analysis, emulation and simulation functions are completely self-contained. A framework in which a wide variety of parallel processing architectures could be evaluated and tools with which the parallel implementation of a real-time simulation technique could be assessed are provided.

  20. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  1. Reliability and validity of Turkish versions of the child, parent and staff cancer fatigue scales.

    PubMed

    Gerçeker, Gülçin Özalp; Yilmaz, Hatice Bal

    2012-01-01

    This study was designed to adapt Turkish versions of scales that evaluate fatigue in children with cancer from the perspectives of the children, parents and staff. The objective of this study was to validate the "Child Fatigue Scale-24 hours" (CFS-24 hours), "Parent Fatigue Scale-24 hours" (PFS-24 hours) and "Staff Fatigue Scale-24 hours" (SFS-24 hours) for use in Turkish clinical research settings. Translation of the scales into Turkish and validity and reliability tests were performed. The validity of the translated scales was assessed with language validity and content validity. The reliability of the translated scales was assessed with internal consistency. The scales were evaluated by calculation of the Cronbach alpha coefficient for parallel form reliability with 52 pediatric cancer patients, 86 parents and 43 nurses. The internal consistency was estimated as 0.88 for the Child Fatigue Scale-24 hours, 0.77 for the Parent Fatigue Scale-24 hours, and 0.72 for the Staff Fatigue Scale-24 hours (Cronbach's α). The Turkish versions of the Child Fatigue Scale-24 hours, the Parent Fatigue Scale-24 hours and the Staff Fatigue Scale-24 hours were judged reliable and valid instruments to assess fatigue in children and showed good psychometric properties. These scales should assist in understanding to what extent initiatives can minimize or eliminate fatigue. The scales are recommended for further studies and for routine use in pediatric oncology clinics, with nursing initiatives planned accordingly. PMID:22994723
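
    For reference, the internal-consistency statistic reported above is Cronbach's alpha, which for a scale of k items is

        \alpha \;=\; \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right),

    where σᵢ² is the variance of item i and σ_X² is the variance of the total score; the values 0.88, 0.77, and 0.72 quoted for the child, parent, and staff scales are estimates of this coefficient.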

  2. Proposed reliability cost model

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1973-01-01

    The research investigations which were involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reenforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach is dependent upon the use of a series of subsystem-oriented CER's and sometimes possible CTR's, in devising a suitable cost-effective policy.

  3. Orbiter Autoland reliability analysis

    NASA Technical Reports Server (NTRS)

    Welch, D. Phillip

    1993-01-01

    The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended duration orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland, because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability for the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not yet been determined.

  4. Parallel Smoothed Aggregation Multigrid: Aggregation Strategies on Massively Parallel Machines

    SciTech Connect

    Ray S. Tuminaro

    2000-11-09

    Algebraic multigrid methods offer the hope that multigrid convergence can be achieved (for at least some important applications) without a great deal of effort from engineers and scientists wishing to solve linear systems. In this paper the authors consider parallelization of the smoothed aggregation multi-grid method. Smoothed aggregation is one of the most promising algebraic multigrid methods. Therefore, developing parallel variants with both good convergence and efficiency properties is of great importance. However, parallelization is nontrivial due to the somewhat sequential aggregation (or grid coarsening) phase. In this paper, they discuss three different parallel aggregation algorithms and illustrate the advantages and disadvantages of each variant in terms of parallelism and convergence. Numerical results will be shown on the Intel Teraflop computer for some large problems coming from nontrivial codes: quasi-static electric potential simulation and a fluid flow calculation.

  5. Software reliability perspectives

    NASA Technical Reports Server (NTRS)

    Wilson, Larry; Shen, Wenhui

    1987-01-01

    Software which is used in life-critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering nor fault-tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of their failures can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.

  6. Reliability Centered Maintenance - Methodologies

    NASA Technical Reports Server (NTRS)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  7. Waste package reliability

    SciTech Connect

    Sastre, C.; Pescatore, C.; Sullivan, T.

    1986-02-01

    Probabilistic reliability analysis is identified as the preferred method to identify, organize, and convey the information necessary to meet the NRC standard on reasonable assurance of waste package performance according to regulatory requirements. The document addresses both the qualitative and quantitative aspects of the analysis, and suggests reliability analysis requirements for a prospective license applicant, as well as review procedures for the regulatory agency. In particular, a method for the quantitative evaluation of waste package reliability is demonstrated through a simplified analysis. The method is based on the repetitive use of a performance model for values of the model parameters that span their range of uncertainty. Techniques for selecting values of the input parameters, viewed as random variables, and for generating empirical correlations among experimental data are also described. Aspects which would need to be covered in a more comprehensive document are indicated.
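
    The repeated-model-evaluation method described is, in essence, Monte Carlo propagation of parameter uncertainty through a performance model. The sketch below shows the generic pattern; the performance model and parameter distributions are invented placeholders, not values from the document.

        # Monte Carlo propagation of parameter uncertainty (generic illustration).
        import numpy as np

        rng = np.random.default_rng(42)

        def lifetime_model(corrosion_rate, wall_thickness):
            """Placeholder performance model: years until containment breach."""
            return wall_thickness / corrosion_rate

        n = 100_000
        corrosion = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)  # cm/yr
        thickness = rng.uniform(8.0, 12.0, size=n)                       # cm

        lifetimes = lifetime_model(corrosion, thickness)
        target = 1000.0                                                  # required yr
        print("P(lifetime >= 1000 yr) =", np.mean(lifetimes >= target))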

  8. Structural Properties of G,T-Parallel Duplexes

    PubMed Central

    Aviñó, Anna; Cubero, Elena; Gargallo, Raimundo; González, Carlos; Orozco, Modesto; Eritja, Ramon

    2010-01-01

    The structure of G,T-parallel-stranded duplexes of DNA carrying similar amounts of adenine and guanine residues is studied by means of molecular dynamics (MD) simulations and UV- and CD spectroscopies. In addition the impact of the substitution of adenine by 8-aminoadenine and guanine by 8-aminoguanine is analyzed. The presence of 8-aminoadenine and 8-aminoguanine stabilizes the parallel duplex structure. Binding of these oligonucleotides to their target polypyrimidine sequences to form the corresponding G,T-parallel triplex was not observed. Instead, when unmodified parallel-stranded duplexes were mixed with their polypyrimidine target, an interstrand Watson-Crick duplex was formed. As predicted by theoretical calculations parallel-stranded duplexes carrying 8-aminopurines did not bind to their target. The preference for the parallel-duplex over the Watson-Crick antiparallel duplex is attributed to the strong stabilization of the parallel duplex produced by the 8-aminopurines. Theoretical studies show that the isomorphism of the triads is crucial for the stability of the parallel triplex. PMID:20798879

  9. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; Wissink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the ability of both meshing methods to resolve simulation details by varying the local grid spacing.
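
    As a toy illustration of the refinement decision at the heart of AMR, the sketch below flags cells where the discrete gradient of a one-dimensional field exceeds a threshold; the field, grid size, and threshold are all invented for the example.

      import numpy as np

      # Flag cells for refinement where the discrete gradient is large.
      x = np.linspace(0.0, 1.0, 200)
      u = np.tanh((x - 0.4) / 0.01)            # sharp front in a smooth field
      flag = np.abs(np.diff(u)) > 0.05         # refine only near the front
      print(f"{flag.sum()} of {flag.size} cells flagged for refinement")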

  10. Gearbox Reliability Collaborative Update (Presentation)

    SciTech Connect

    Sheng, S.

    2013-10-01

    This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

  11. Reliable inverter systems

    NASA Technical Reports Server (NTRS)

    Nagano, S.

    1979-01-01

    Base driver with common-load-current feedback protects paralleled inverter systems from open or short circuits. Circuit eliminates total system oscillation that can occur in conventional inverters because of open circuit in primary transformer winding. Common feedback signal produced by functioning modules forces operating frequency of failed module to coincide with clock drive so module resumes normal operating frequency in spite of open circuit.

  12. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting optimal parallel speed-up as the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
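
    The parallel pattern behind a contingency statistics engine is a reduction of partial tables: each processor counts category pairs on its own data partition, and the partial tables are merged by summing counts. The sketch below (Python rather than the engine's C++, with made-up data) also hints at why the reduction is heavier than for moment statistics: the merged object grows with the number of distinct pairs, which is what limits the speed-up.

      from collections import Counter
      from functools import reduce

      def partial_table(records):
          """Count (x, y) category pairs on one data partition."""
          return Counter((x, y) for x, y in records)

      # Three partitions, as they might live on three processors.
      partitions = [
          [("a", 0), ("b", 1), ("a", 0)],
          [("a", 1), ("b", 1)],
          [("b", 0), ("a", 0)],
      ]

      # The parallel reduction: merge partial tables by summing counts.
      table = reduce(lambda t, u: t + u, (partial_table(p) for p in partitions))
      print(table)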

  13. Understanding biological computation: reliable learning and recognition.

    PubMed Central

    Hogg, T; Huberman, B A

    1984-01-01

    We experimentally examine the consequences of the hypothesis that the brain operates reliably, even though individual components may intermittently fail, by computing with dynamical attractors. Specifically, such a mechanism exploits dynamic collective behavior of a system with attractive fixed points in its phase space. In contrast to the usual methods of reliable computation involving a large number of redundant elements, this technique of self-repair only requires collective computation with a few units, and it is amenable to quantitative investigation. Experiments on parallel computing arrays show that this mechanism leads naturally to rapid self-repair, adaptation to the environment, recognition and discrimination of fuzzy inputs, and conditional learning, properties that are commonly associated with biological computation. PMID:6593731
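
    A Hopfield-style network is the textbook example of computing with attractive fixed points, and a few lines suffice to show the self-repair behavior described above; this sketch is only an analogue of the mechanism, not the authors' experimental arrays.

      import numpy as np

      rng = np.random.default_rng(0)
      patterns = rng.choice([-1, 1], size=(3, 64))       # stored memories
      W = sum(np.outer(p, p) for p in patterns) / 64.0   # Hebbian weights
      np.fill_diagonal(W, 0.0)

      # Corrupt a stored pattern (a "fuzzy input"), then let the dynamics
      # relax the state back toward the nearest attractor.
      state = patterns[0].copy()
      flips = rng.choice(64, size=10, replace=False)
      state[flips] *= -1

      for _ in range(10):
          state = np.sign(W @ state)
          state[state == 0] = 1

      # Fraction of units agreeing with the stored memory (near 1.0 if recovered).
      print(np.mean(state == patterns[0]))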

  14. Electronic logic for enhanced switch reliability

    DOEpatents

    Cooper, J.A.

    1984-01-20

    A logic circuit is used to enhance redundant switch reliability. Two or more switches are monitored for logical high or low output. The output for the logic circuit produces a redundant and fail-safe representation of the switch outputs. When both switch outputs are high, the output is high. Similarly, when both switch outputs are low, the logic circuit's output is low. When the output states of the two switches do not agree, the circuit resolves the conflict by memorizing the last output state which both switches were simultaneously in and produces the logical complement of this output state. Thus, the logic circuit of the present invention allows the redundant switches to be treated as if they were in parallel when the switches are open and as if they were in series when the switches are closed. A failsafe system having maximum reliability is thereby produced.
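
    The resolution rule is simple enough to state in a few lines of code. The sketch below mirrors the behavior described in the abstract (agreement passes through; disagreement yields the complement of the last agreed state); it is an illustration of the logic, not the patented circuit.

      class RedundantSwitchLogic:
          """Agree: pass the shared value and remember it.
          Disagree: output the complement of the last agreed state."""

          def __init__(self, initial=False):
              self.last_agreed = initial

          def output(self, a, b):
              if a == b:
                  self.last_agreed = a
                  return a
              return not self.last_agreed   # fail-safe on disagreement

      logic = RedundantSwitchLogic()
      print(logic.output(True, True))    # True: both switches high
      print(logic.output(True, False))   # False: conflict, complement of last agreed
      print(logic.output(False, False))  # False: both switches low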

  15. Reliable aluminum contact formation by electrostatic bonding

    NASA Astrophysics Data System (ADS)

    Kárpáti, T.; Pap, A. E.; Radnóczi, Gy.; Beke, B.; Bársony, I.; Fürjes, P.

    2015-07-01

    The paper presents a detailed study of a reliable method developed for aluminum fusion wafer bonding assisted by the electrostatic force evolving during the anodic bonding process. The IC-compatible procedure described allows the parallel formation of electrical and mechanical contacts, facilitating a reliable packaging of electromechanical systems with backside electrical contacts. This fusion bonding method supports the fabrication of complex microelectromechanical systems (MEMS) and micro-opto-electromechanical systems (MOEMS) structures with enhanced temperature stability, which is crucial in mechanical sensor applications such as pressure or force sensors. Due to the applied electrical potential of −1000 V the Al metal layers are compressed by electrostatic force, and at the bonding temperature of 450 °C intermetallic diffusion causes aluminum ions to migrate between the metal layers.

  16. Quantifying reliability uncertainty : a proof of concept.

    SciTech Connect

    Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna; Lorio, John F.; Fatherley, Quinn; Anderson-Cook, Christine; Wilson, Alyson G.; Zurn, Rena M.

    2009-10-01

    This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
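
    A minimal Bayesian sketch of the kind of calculation described, assuming invented go/no-go data and a flat Beta(1,1) prior (the paper's actual models and data are not given here): sample each component's reliability from its posterior, combine the samples through the series/parallel structure, and read off an uncertainty interval.

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical go/no-go data: (successes, trials) per component.
      comp_a, comp_b, comp_c = (48, 50), (19, 20), (9, 10)

      def posterior_samples(successes, trials, n=100_000):
          # Beta(1, 1) prior updated with binomial data.
          return rng.beta(1 + successes, 1 + trials - successes, size=n)

      ra, rb, rc = (posterior_samples(*d) for d in (comp_a, comp_b, comp_c))

      # Component A in series with the parallel pair (B, C).
      system = ra * (1.0 - (1.0 - rb) * (1.0 - rc))
      lo, hi = np.percentile(system, [5, 95])
      print(f"median {np.median(system):.3f}, 90% interval ({lo:.3f}, {hi:.3f})")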

  17. Designing reliability into accelerators

    SciTech Connect

    Hutton, A.

    1992-08-01

    For the next generation of high performance, high average luminosity colliders, the "factories," reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: concept; design; motivation; management techniques; and fault diagnosis.

  19. Software reliability report

    NASA Technical Reports Server (NTRS)

    Wilson, Larry

    1991-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab-type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world; thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost-effective manner. The concept of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data, which were then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens which are in AIR-LAB to measure the performance of reliability models.
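
    The variance the report describes is easy to reproduce with a toy replication experiment: simulate several debugging histories from a Jelinski-Moranda-style "Basic" model (parameters invented here) and compare the resulting total debugging times.

      import random

      def one_history(n_faults=30, phi=0.01, seed=None):
          """N seeded faults, each failing at rate phi; fixing one
          removes it, so the failure rate drops as debugging proceeds."""
          rng = random.Random(seed)
          t, times = 0.0, []
          for remaining in range(n_faults, 0, -1):
              t += rng.expovariate(remaining * phi)  # next inter-failure time
              times.append(t)
          return times

      # Ten replications of the same debugging process.
      finals = [one_history(seed=s)[-1] for s in range(10)]
      print([round(t) for t in finals])   # note the wide spread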

  20. Parametric Mass Reliability Study

    NASA Technical Reports Server (NTRS)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, are typically the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  1. Reliable solar cookers

    SciTech Connect

    Magney, G.K.

    1992-12-31

    The author describes the activities of SERVE, a Christian relief and development agency, to introduce solar ovens to the Afghan refugees in Pakistan. It has provided 5,000 solar cookers since 1984. The experience has demonstrated the potential of the technology and the need for a durable and reliable product. Common complaints about the cookers are discussed and the ideal cooker is described.

  2. Grid reliability management tools

    SciTech Connect

    Eto, J.; Martinez, C.; Dyer, J.; Budhraja, V.

    2000-10-01

    To summarize, the Consortium for Electric Reliability Technology Solutions (CERTS) is engaged in a multi-year program of public interest R&D to develop and prototype software tools that will enhance system reliability during the transition to competitive markets. The core philosophy embedded in the design of these tools is the recognition that in the future reliability will be provided through market operations, not the decisions of central planners. Embracing this philosophy calls for tools that: (1) Recognize that the game has moved from machine modeling and engineering analysis to simulating markets to understand the impacts on reliability (and vice versa); (2) Provide real-time data and support information transparency toward enhancing the ability of operators and market participants to quickly grasp, analyze, and act effectively on information; (3) Allow operators, in particular, to measure, monitor, assess, and predict both system performance as well as the performance of market participants; and (4) Allow rapid incorporation of the latest sensing, data communication, computing, visualization, and algorithmic techniques and technologies.

  3. 1 CFR 21.23 - Parallel citations of Code and Federal Register.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    § 21.23 Parallel citations of Code and Federal Register. For parallel reference, the Code of Federal Regulations and the Federal Register may be cited in the following forms, as appropriate: ___ CFR ___ (___ FR ___).

  4. 1 CFR 21.23 - Parallel citations of Code and Federal Register.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    § 21.23 Parallel citations of Code and Federal Register. For parallel reference, the Code of Federal Regulations and the Federal Register may be cited in the following forms, as appropriate: ___ CFR ___ (___ FR ___).

  5. 1 CFR 21.23 - Parallel citations of Code and Federal Register.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    § 21.23 Parallel citations of Code and Federal Register. For parallel reference, the Code of Federal Regulations and the Federal Register may be cited in the following forms, as appropriate: ___ CFR ___ (___ FR ___).

  6. 1 CFR 21.23 - Parallel citations of Code and Federal Register.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    § 21.23 Parallel citations of Code and Federal Register. For parallel reference, the Code of Federal Regulations and the Federal Register may be cited in the following forms, as appropriate: ___ CFR ___ (___ FR ___).

  7. 1 CFR 21.23 - Parallel citations of Code and Federal Register.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    § 21.23 Parallel citations of Code and Federal Register. For parallel reference, the Code of Federal Regulations and the Federal Register may be cited in the following forms, as appropriate: ___ CFR ___ (___ FR ___).

  8. Parallel processing techniques for finite element analysis of nonlinear large truss structures

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1989-01-01

    Methods were developed for parallel processing of finite element solutions of large truss structures. The parallel processing techniques were implemented in two stages, i.e., the repeated forming of the nonlinear global stiffness matrix and the solving of the global system of equations. The Sequent Balance 21000 parallel computer was employed to demonstrate the procedures and the speed-up.

  9. Is Monte Carlo embarrassingly parallel?

    SciTech Connect

    Hoogenboom, J. E.

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
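
    The effect of the rendezvous points can be seen with a toy cost model: per-cycle compute time shrinks as 1/P, while the end-of-cycle collective costs a fixed latency plus a term that grows with P. All constants below are illustrative, not measurements from the paper.

      histories_per_cycle = 1_000_000
      t_history = 2e-6     # s per history
      t_latency = 5e-3     # s per rendezvous
      t_per_proc = 1e-3    # s of rendezvous cost per participating processor

      def cycle_time(P):
          return histories_per_cycle * t_history / P + t_latency + t_per_proc * P

      t1 = cycle_time(1)
      for P in (1, 8, 16, 32, 64, 128):
          print(f"P={P:4d}  speedup={t1 / cycle_time(P):6.1f}")
      # Speedup saturates, then falls once the sync term dominates.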

  10. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  11. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
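
    In the spirit of the rsync-style comparison the patent describes (though not its actual protocol), a node can checksum its state block by block against the template and save only the blocks that differ. A minimal sketch with invented block size and data:

      import hashlib

      BLOCK = 4096

      def blocks(data):
          return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

      def delta_against_template(template, node_state):
          """Return (index, block) pairs the node must actually save;
          blocks whose checksum matches the template are skipped."""
          t_sums = [hashlib.sha1(b).digest() for b in blocks(template)]
          out = []
          for i, b in enumerate(blocks(node_state)):
              if i >= len(t_sums) or hashlib.sha1(b).digest() != t_sums[i]:
                  out.append((i, b))
          return out

      template = bytes(16 * BLOCK)            # previously stored checkpoint
      state = bytearray(template)
      state[5 * BLOCK + 7] = 0xFF             # this node diverges slightly
      print(len(delta_against_template(template, bytes(state))), "of 16 blocks saved")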

  12. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

  13. Parallel integer sorting with medium and fine-scale parallelism

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
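
    The routing idea behind barrel-sort can be rendered in a few lines: assign each processor a contiguous key range, route every key to the owner of its range (the single communication phase), then sort locally and concatenate. This is a single-process sketch of the idea, not the authors' implementation.

      import random

      def barrel_sort(keys, n_procs, key_max):
          width = key_max // n_procs + 1
          buckets = [[] for _ in range(n_procs)]
          for k in keys:                 # the "communication" phase
              buckets[k // width].append(k)
          out = []
          for b in buckets:              # each processor sorts its own bucket
              out.extend(sorted(b))
          return out

      random.seed(0)
      data = [random.randrange(10_000) for _ in range(20)]
      assert barrel_sort(data, 4, 10_000) == sorted(data)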

  14. Multimedia OC12 parallel interface using VCSEL array to achieve high-performance cost-effective optical interconnections

    NASA Astrophysics Data System (ADS)

    Chang, Edward S.

    1996-09-01

    Multimedia communication needs high-performance, cost-effective communication techniques to transport data for the fast-growing multimedia traffic resulting from the recent deployment of the World Wide Web (WWW), media-on-demand, and other multimedia applications. To transport a large volume of multimedia data, high-performance servers are required to perform media processing and transfer. Typically, the high-performance multimedia server is a massively parallel processor with a high number of I/O ports, high storage capacity, fast signal processing, and excellent cost-performance. The parallel I/O ports of the server are connected to multiple clients through a network switch which uses parallel links in both switch-to-server and switch-to-client connections. In addition to media processing and storage, media communication is also a major function of the multimedia system. Without a high-performance communication network, a high-performance server cannot deliver its full capacity of service to clients. Fortunately, many advanced communication technologies developed for networking can be adopted by multimedia communication to economically deliver the full capacity of a high-performance multimedia service to clients. VCSEL array technology has been developed for gigabit-rate parallel optical interconnections because of its advantages in high bandwidth, small size, and easy fabrication. Several firms are developing multifiber, low-skew, low-cost ribbon cables to transfer signals from a VCSEL array. The OC12 SONET data rate is widely used by high-performance multimedia communications for its high data rate and cost-effectiveness. Therefore, the OC12 VCSEL parallel optical interconnection is an ideal technology to meet the high-performance, low-cost requirements for delivering affordable multimedia services to mass users. This paper describes a multimedia OC12 parallel optical interconnection using a VCSEL array transceiver, a multifiber ribbon cable, and MT connectors to achieve a high-performance, low-cost parallel link. A logical model of a multimedia server with parallel connections to an ATM switch and to clients is presented. The design of the parallel optical link is analyzed. Furthermore, the link configured for testing, the test method, and test results are presented to confirm the analysis and to assure reliable link performance.

  15. Reliability-based aeroelastic optimization of a composite aircraft wing via fluid-structure interaction of high fidelity solvers

    NASA Astrophysics Data System (ADS)

    Nikbay, M.; Fakkusoglu, N.; Kuru, M. N.

    2010-06-01

    We consider reliability-based aeroelastic optimization of an AGARD 445.6 composite aircraft wing with stochastic parameters. Both commercial engineering software and an in-house reliability analysis code are employed in this high-fidelity computational framework. The finite-volume-based flow solver Fluent is used to solve the 3D Euler equations, while Gambit is the fluid domain mesh generator and Catia-V5-R16 is used as a parametric 3D solid modeler. Abaqus, a structural finite element solver, is used to compute the structural response of the aeroelastic system. The mesh-based parallel code coupling interface MPCCI-3.0.6 is used to exchange the pressure and displacement information between Fluent and Abaqus to perform a loosely coupled fluid-structure interaction by employing a staggered algorithm. To compute the probability of failure for the probabilistic constraints, one of the well-known MPP (Most Probable Point) based reliability analysis methods, FORM (First Order Reliability Method), is implemented in Matlab. This in-house Matlab code is embedded in the multidisciplinary optimization workflow, which is driven by Modefrontier. Modefrontier 4.1 is used for its gradient-based optimization algorithm NBI-NLPQLP, which is based on the sequential quadratic programming method. A Pareto optimal solution for the stochastic aeroelastic optimization is obtained for a specified reliability index, and the results are compared with those of the deterministic aeroelastic optimization.
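
    FORM's most probable point is commonly found with the Hasofer-Lind/Rackwitz-Fiessler iteration; the sketch below shows the generic method on an invented linear limit state in standard normal space (the paper's actual limit states and Matlab code are not reproduced here).

      import math
      import numpy as np

      def hl_rf(g, n, tol=1e-10, max_iter=100):
          """HL-RF iteration; returns the reliability index beta = ||u*||."""
          u = np.zeros(n)
          for _ in range(max_iter):
              g0, h = g(u), 1e-6
              grad = np.array([(g(u + h * e) - g0) / h for e in np.eye(n)])
              u_new = (grad @ u - g0) * grad / (grad @ grad)
              if np.linalg.norm(u_new - u) < tol:
                  u = u_new
                  break
              u = u_new
          return float(np.linalg.norm(u))

      # Toy limit state g(u) = 4 + u0 - 2*u1, exact beta = 4/sqrt(5).
      beta = hl_rf(lambda u: 4 + u[0] - 2 * u[1], n=2)
      pf = 0.5 * math.erfc(beta / math.sqrt(2))   # Phi(-beta)
      print(f"beta = {beta:.4f} (exact {4 / math.sqrt(5):.4f}), Pf = {pf:.2e}")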

  16. Hydrologic Terrain Processing Using Parallel Computing

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Watson, D. W.; Wallace, R. M.; Schreuders, K.; Tesfa, T. K.

    2009-12-01

    Topography, in the form of Digital Elevation Models (DEMs), is widely used to derive information for the modeling of hydrologic processes. Hydrologic terrain analysis augments the information content of digital elevation data by removing spurious pits, deriving a structured flow field, and calculating surfaces of hydrologic information derived from the flow field. The increasing availability of high-resolution terrain datasets for large areas poses a challenge for existing algorithms that process terrain data to extract this hydrologic information. This paper will describe parallel algorithms that have been developed to enhance hydrologic terrain pre-processing so that larger datasets can be more efficiently computed. Message Passing Interface (MPI) parallel implementations have been developed for pit removal, flow direction, and generalized flow accumulation methods within the Terrain Analysis Using Digital Elevation Models (TauDEM) package. The parallel algorithm works by decomposing the domain into striped or tiled data partitions where each tile is processed by a separate processor. This method also reduces the memory requirements of each processor so that larger size grids can be processed. The parallel pit removal algorithm is adapted from the method of Planchon and Darboux, which starts from a high elevation and then progressively scans the grid, lowering each grid cell to the maximum of the original elevation or the lowest neighbor. The MPI implementation reconciles elevations along process domain edges after each scan. Generalized flow accumulation extends flow accumulation approaches commonly available in GIS through the integration of multiple inputs and a broad class of algebraic rules into the calculation of flow related quantities. It is based on establishing a flow field through DEM grid cells that is then used to evaluate any mathematical function that incorporates dependence on values of the quantity being evaluated at upslope (or downslope) grid cells as well as other input quantities. The parallel generalized flow accumulation implementation relies on a dependency grid initialized with the number of upslope grid cells, which is reduced as each upslope cell is evaluated so as to track via a ready queue when each grid cell is ready for computation. The parallel implementations of these terrain analysis methods have enabled the processing of grids larger than were possible using the memory-based single processor implementation, as well as reducing computation times when run on multi-core desktop workstations and parallel computing clusters.
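
    The serial kernel of the pit-removal step is compact; a minimal rendering of the Planchon-Darboux idea is below (the parallel TauDEM version runs such sweeps per tile and reconciles tile edges after each sweep). The grid and its values are invented.

      import numpy as np

      def fill_pits(dem, eps=0.0):
          """Start interior cells at a huge elevation, then repeatedly
          lower each cell to max(original z, lowest settled neighbor + eps)."""
          z = dem.astype(float)
          w = np.full_like(z, np.inf)
          w[0, :], w[-1, :], w[:, 0], w[:, -1] = z[0, :], z[-1, :], z[:, 0], z[:, -1]
          changed = True
          while changed:
              changed = False
              for i in range(1, z.shape[0] - 1):
                  for j in range(1, z.shape[1] - 1):
                      nbr = min(w[i-1, j], w[i+1, j], w[i, j-1], w[i, j+1])
                      new = max(z[i, j], nbr + eps)
                      if new < w[i, j]:
                          w[i, j] = new
                          changed = True
          return w

      dem = np.array([[5, 5, 5, 5],
                      [5, 1, 4, 5],     # the "1" is a spurious pit
                      [5, 4, 4, 5],
                      [5, 5, 5, 5]])
      print(fill_pits(dem))             # interior raised to the spill level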

  17. Parallel computing: One opportunity, four challenges

    SciTech Connect

    Gaudiot, J.-L.

    1989-12-31

    The author briefly reviews the area of parallel computer processing, which has been expanding at a great rate in the past decade. Great strides have been made in hardware and in the speed of chips. However, to some degree the hardware area is beginning to run into basic physical speed limits, which will slow its rate of advance. The author looks at ways that computer architecture and software applications can work to continue the rate of increase in computing power that has occurred over the past decade. Four particular areas are mentioned: programmability; communication network design; reliable operation; and performance evaluation and benchmarking.

  18. Parallel inverse iteration with reorthogonalization

    SciTech Connect

    Fann, G.I.; Littlefield, R.J.

    1993-03-01

    A parallel method for finding orthogonal eigenvectors of real symmetric tridiagonal matrices is described. The method uses inverse iteration with repeated Modified Gram-Schmidt (MGS) reorthogonalization of the unconverged iterates for clustered eigenvalues. This approach is more parallelizable than reorthogonalizing against fully converged eigenvectors, as is done by LAPACK's current DSTEIN routine. The new method is found to provide accuracy and speed comparable to DSTEIN's and to have good parallel scalability even for matrices with large clusters of eigenvalues. We present results for residual and orthogonality tests, plus timings on IBM RS/6000 (sequential) and Intel Touchstone DELTA (parallel) computers.
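
    The heart of the method fits in a short sketch: inverse-iterate toward each clustered eigenvalue, but reorthogonalize the still-unconverged iterate against its cluster mates at every step. The matrix below is an invented tridiagonal with a tight cluster; this illustrates the scheme, not the authors' code.

      import numpy as np

      def inverse_iteration(T, lam, others, n_iter=30):
          """Eigenvector of symmetric tridiagonal T for approximate
          eigenvalue lam, with an MGS pass against `others` each step."""
          n = T.shape[0]
          A = T - (lam + 1e-8) * np.eye(n)   # small offset avoids exact singularity
          v = np.random.default_rng(0).standard_normal(n)
          for _ in range(n_iter):
              v = np.linalg.solve(A, v)
              for q in others:               # reorthogonalize the unconverged iterate
                  v -= (q @ v) * q
              v /= np.linalg.norm(v)
          return v

      n = 8
      T = (np.diag(np.ones(n)) + np.diag(np.full(n - 1, 1e-6), 1)
           + np.diag(np.full(n - 1, 1e-6), -1))   # tightly clustered spectrum
      vecs = []
      for lam in np.linalg.eigvalsh(T)[:3]:
          vecs.append(inverse_iteration(T, lam, vecs))
      V = np.array(vecs)
      print(np.round(np.abs(V @ V.T), 8))   # ~ identity: orthogonal eigenvectors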

  20. Space Shuttle Propulsion System Reliability

    NASA Technical Reports Server (NTRS)

    Welzyn, Ken; VanHooser, Katherine; Moore, Dennis; Wood, David

    2011-01-01

    This session includes the following presentations: (1) External Tank (ET) System Reliability and Lessons, (2) Space Shuttle Main Engine (SSME) Reliability Validated by a Million Seconds of Testing, (3) Reusable Solid Rocket Motor (RSRM) Reliability via Process Control, and (4) Solid Rocket Booster (SRB) Reliability via Acceptance and Testing.

  1. Human Reliability Program Workshop

    SciTech Connect

    Landers, John; Rogers, Erin; Gerke, Gretchen

    2014-05-18

    A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat, to include HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

  2. Reliability of photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1986-01-01

    In order to assess the reliability of photovoltaic modules, four categories of known array failure and degradation mechanisms are discussed, and target reliability allocations have been developed within each category based on the available technology and the life-cycle-cost requirements of future large-scale terrestrial applications. Cell-level failure mechanisms associated with open-circuiting or short-circuiting of individual solar cells generally arise from cell cracking or the fatigue of cell-to-cell interconnects. Power degradation mechanisms considered include gradual power loss in cells, light-induced effects, and module optical degradation. Module-level failure mechanisms and life-limiting wear-out mechanisms are also explored.

  3. Reliable broadcast protocols

    NASA Technical Reports Server (NTRS)

    Joseph, T. A.; Birman, Kenneth P.

    1989-01-01

    A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols have the same role in distributed operating systems of the future that message passing mechanisms have in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation or how to balance the latency-communication-storage costs is not an easy question.

  4. Waste package reliability analysis

    SciTech Connect

    Pescatore, C.; Sastre, C.

    1983-01-01

    Proof of future performance of a complex system such as a high-level nuclear waste package over a period of hundreds to thousands of years cannot be had in the ordinary sense of the word. The general method of probabilistic reliability analysis could provide an acceptable framework to identify, organize, and convey the information necessary to satisfy the criterion of reasonable assurance of waste package performance according to the regulatory requirements set forth in 10 CFR 60. General principles which may be used to evaluate the qualitative and quantitative reliability of a waste package design are indicated and illustrated with a sample calculation of a repository concept in basalt. 8 references, 1 table.

  5. ATLAS reliability analysis

    SciTech Connect

    Bartsch, R.R.

    1995-09-01

    Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.

  6. Compact, Reliable EEPROM Controller

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Kleyner, Igor

    2010-01-01

    A compact, reliable controller for an electrically erasable, programmable read-only memory (EEPROM) has been developed specifically for a space-flight application. The design may be adaptable to other applications in which there are requirements for reliability in general and, in particular, for prevention of inadvertent writing of data in EEPROM cells. Inadvertent writes pose risks of loss of reliability in the original space-flight application and could pose such risks in other applications. Prior EEPROM controllers are large and complex and do not provide all reasonable protections (in many cases, few or no protections) against inadvertent writes. In contrast, the present controller provides several layers of protection against inadvertent writes. The controller also incorporates a write-time monitor, enabling determination of trends in the performance of an EEPROM through all phases of testing. The controller has been designed as an integral subsystem of a system that includes not only the controller and the controlled EEPROM aboard a spacecraft but also computers in a ground control station, relatively simple onboard support circuitry, and an onboard communication subsystem that utilizes the MIL-STD-1553B protocol. (MIL-STD-1553B is a military standard that encompasses a method of communication and electrical-interface requirements for digital electronic subsystems connected to a data bus. MIL-STD-1553B is commonly used in defense and space applications.) The intent was to maximize reliability while minimizing the size and complexity of onboard circuitry. In operation, control of the EEPROM is effected via the ground computers, the MIL-STD-1553B communication subsystem, and the onboard support circuitry, all of which, in combination, provide the multiple layers of protection against inadvertent writes. There is no controller software, unlike in many prior EEPROM controllers; software can be a major contributor to unreliability, particularly in fault situations such as the loss of power or brownouts. Protection is also provided by a power-monitoring circuit.

  7. Spacecraft transmitter reliability

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A workshop on spacecraft transmitter reliability was held at the NASA Lewis Research Center on September 25 and 26, 1979, to discuss present knowledge and to plan future research areas. Since formal papers were not submitted, this synopsis was derived from audio tapes of the workshop. The following subjects were covered: users' experience with space transmitters; cathodes; power supplies and interfaces; and specifications and quality assurance. A panel discussion ended the workshop.

  8. Reliability Degradation Due to Stockpile Aging

    SciTech Connect

    Robinson, David G.

    1999-04-01

    The objective of this research is the investigation of alternative methods for characterizing the reliability of systems with time-dependent failure modes associated with stockpile aging. Reference to 'reliability degradation' has, unfortunately, come to be associated with all types of aging analyses: both deterministic and stochastic. In this research, in keeping with the true theoretical definition, reliability is defined as a probabilistic description of system performance as a function of time. Traditional reliability methods used to characterize stockpile reliability depend on the collection of a large number of samples or observations. Clearly, after the experiments have been performed and the data have been collected, critical performance problems can be identified. A major goal of this research is to identify existing methods and/or develop new mathematical techniques and computer analysis tools to anticipate stockpile problems before they become critical issues. One of the most popular methods for characterizing the reliability of components, particularly electronic components, assumes that failures occur in a completely random fashion, i.e., uniformly across time. This method is based primarily on the use of constant failure rates for the various elements that constitute the weapon system, i.e., the systems do not degrade while in storage. Experience has shown that predictions based upon this approach should be regarded with great skepticism, since the relationship between the predicted life and the observed life has been difficult to validate. In addition to this fundamental problem, the approach does not recognize that there are time-dependent material properties and variations associated with the manufacturing process and the operational environment. To appreciate the uncertainties in predicting system reliability, a number of alternative methods are explored in this report. All of the methods are very different from those currently used to assess stockpile reliability, but have been used extensively in various forms outside Sandia National Laboratories. It is hoped that this report will encourage the use of 'nontraditional' reliability and uncertainty techniques in gaining insight into stockpile reliability issues.
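
    The report's core objection can be made concrete by comparing a constant-failure-rate model with an aging (Weibull, shape > 1) model; the parameters below are purely illustrative.

      import math

      lam = 1.0 / 40.0        # 1/years, constant-rate model
      k, scale = 2.5, 35.0    # Weibull shape and scale, aging model

      for t in (5, 10, 20, 30):
          r_const = math.exp(-lam * t)
          r_weib = math.exp(-((t / scale) ** k))
          print(f"t={t:2d} yr  constant-rate R={r_const:.3f}  Weibull R={r_weib:.3f}")
      # The models roughly agree early in life and diverge as the
      # stockpile ages, which is the argument against constant rates.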

  9. Probabilistic structural mechanics research for parallel processing computers

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

    1991-01-01

    Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods was hampered by their computationally intense nature. Solution of PSM problems requires repeated analyses of structures that are often large, and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.

  10. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  11. Parallel language constructs for tensor product computations on loosely coupled architectures

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Van Rosendale, John

    1989-01-01

    A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The authors focus on tensor product array computations, a simple but important class of numerical algorithms. They consider first the problem of programming one-dimensional kernel routines, such as parallel tridiagonal solvers, and then look at how such parallel kernels can be combined to form parallel tensor product algorithms.
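
    The structure these primitives target is the Kronecker-product identity (A kron B) vec(X) = vec(B X A^T): a tensor product operator reduces to independent 1D kernel sweeps, each of which parallelizes across the rows or columns of X. A small numpy check of the identity:

      import numpy as np

      rng = np.random.default_rng(0)
      A, B = rng.standard_normal((4, 4)), rng.standard_normal((5, 5))
      X = rng.standard_normal((5, 4))

      lhs = np.kron(A, B) @ X.flatten(order="F")   # the tensor product operator
      rhs = (B @ X @ A.T).flatten(order="F")       # two 1D sweeps instead
      print(np.allclose(lhs, rhs))                 # True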

  12. Development of a Parallel Redundant STATCOM System

    NASA Astrophysics Data System (ADS)

    Takeda, Masatoshi; Yasuda, Satoshi; Tamai, Shinzo; Morishima, Naoki

    This paper presents a new concept for a parallel redundant STATCOM system. This system consists of a number of medium-capacity STATCOM units connected in parallel, which can achieve high operational reliability and functional flexibility. The proposed STATCOM system has redundant operation characteristics such that the remaining STATCOM units can maintain their operation even if some of the units are out of service. It also has flexible convertibility, so that it can be converted to a BTB or a UPFC system easily, according to the diversified and changing needs of power systems. In order to realize this concept, the authors developed several important key technologies for the STATCOM, such as a novel PWM scheme that enables effective cancellation of lower-order harmonics, GCT inverter technologies with small loss consumption, and a coordination control scheme with capacitor banks to ensure effective dynamic performance with minimum loss consumption. The proposed STATCOM system was put into practical applications, exhibiting excellent performance characteristics at each site.

  13. Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux

    SciTech Connect

    Guo Zehua; Tang Xianzhu

    2012-06-15

    In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components with one associated with the parallel thermal energy and the other the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from low to high parallel temperature region.

  14. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, T.

    1986-01-01

    A nonlinear structural dynamics program with an element library that exploits parallel processing is under development. The aim is to exploit scheduling-allocation so that parallel processing and vectorization can effectively be treated in a general-purpose program. As a byproduct, an automatic scheme for assigning time steps was devised. A rudimentary form of the program is complete and has been tested; it shows that substantial advantage can be taken of parallelism. In addition, a stability proof for the subcycling algorithm has been developed.

  15. General Aviation Aircraft Reliability Study

    NASA Technical Reports Server (NTRS)

    Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

    2001-01-01

    This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.

  16. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)

  17. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)

  18. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  19. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processors) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable coded messages in O(n/P) time, using O(P) processors (for all P with 1 ≤ P ≤ n/log n), deterministically as well as randomly, on the weakest version of parallel random access machines, in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and prefix sums.
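
    For reference, the sequential kernel being parallelized is ordinary prefix-code decoding; the paper's algorithm simulates the decoding automaton from many starting points and reconciles the runs with prefix sums. The code table below is invented.

      code = {"0": "a", "10": "b", "110": "c", "111": "d"}

      def decode(bits):
          out, cur = [], ""
          for bit in bits:
              cur += bit
              if cur in code:        # prefix property: first hit is a symbol
                  out.append(code[cur])
                  cur = ""
          return "".join(out)

      print(decode("0100110111"))    # -> "abacd"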

  20. Appendix E: Parallel Pascal development system

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The Parallel Pascal Development System enables Parallel Pascal programs to be developed and tested on a conventional computer. It consists of several system programs, including a Parallel Pascal to standard Pascal translator, and a library of Parallel Pascal subprograms. The library includes subprograms for using Parallel Pascal on a parallel system with a fixed degree of parallelism, such as the Massively Parallel Processor, to conveniently manipulate arrays which have larger dimensions than the hardware. Programs can be conveniently tested with small-sized arrays on the conventional computer before attempting to run on a parallel system.

  1. Address tracing for parallel machines

    NASA Technical Reports Server (NTRS)

    Stunkel, Craig B.; Janssens, Bob; Fuchs, W. Kent

    1991-01-01

    Recently implemented parallel system address-tracing methods based on several metrics are surveyed. The issues specific to collection of traces for both shared and distributed memory parallel computers are highlighted. Five general categories of address-trace collection methods are examined: hardware-captured, interrupt-based, simulation-based, altered microcode-based, and instrumented program-based traces. The problems unique to shared memory and distributed memory multiprocessors are examined separately.

  2. Matching parallel algorithm and architecture

    SciTech Connect

    Chiang, Y.P.; Fu, K.S.

    1983-01-01

    An attributed directed graph model which is a combination of high-level Petri nets and AND/OR graphs is described. This model provides a method for matching parallel algorithms to architectures, or vice versa. The analysis of parallel computation using this model is described. Examples are given to demonstrate the descriptive power of this model and how it helps us to match an algorithm and an architecture. 18 references.

  3. Asynchronous parallel status comparator

    DOEpatents

    Arnold, Jeffrey W. (828 Hickory Ridge Rd., Aiken, SC 29801); Hart, Mark M. (223 Limerick Dr., Aiken, SC 29803)

    1992-01-01

    Apparatus for matching asynchronously received signals and determining whether two or more out of a total number of possible signals match. The apparatus comprises, in one embodiment, an array of sensors positioned in discrete locations and in communication with one or more processors. The processors will receive signals if the sensors detect a change in the variable sensed from a nominal to a special condition and will transmit location information in the form of a digital data set to two or more receivers. The receivers collect, read, latch and acknowledge the data sets and forward them to decoders that produce an output signal for each data set received. The receivers also periodically reset the system following each scan of the sensor array. A comparator then determines if any two or more, as specified by the user, of the output signals corresponds to the same location. A sufficient number of matches produces a system output signal that activates a system to restore the array to its nominal condition.

  4. Asynchronous parallel status comparator

    DOEpatents

    Arnold, J.W.; Hart, M.M.

    1992-12-15

    Disclosed is an apparatus for matching asynchronously received signals and determining whether two or more out of a total number of possible signals match. The apparatus comprises, in one embodiment, an array of sensors positioned in discrete locations and in communication with one or more processors. The processors will receive signals if the sensors detect a change in the variable sensed from a nominal to a special condition and will transmit location information in the form of a digital data set to two or more receivers. The receivers collect, read, latch and acknowledge the data sets and forward them to decoders that produce an output signal for each data set received. The receivers also periodically reset the system following each scan of the sensor array. A comparator then determines if any two or more, as specified by the user, of the output signals corresponds to the same location. A sufficient number of matches produces a system output signal that activates a system to restore the array to its nominal condition. 4 figs.
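
    Behind both records is a simple m-out-of-n voting rule on the location codes carried by the data sets. A sketch, with invented location labels:

      from collections import Counter

      def match(locations, need=2):
          """True when at least `need` received signals name the same location."""
          counts = Counter(locations)
          return any(c >= need for c in counts.values())

      print(match(["A3", "B7", "A3"]))   # True: two sensors agree on A3
      print(match(["A3", "B7", "C1"]))   # False: no two signals match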

  5. Parallel architectures for problem solving

    SciTech Connect

    Kale, L.V.

    1985-01-01

    The problem of exploiting a large amount of hardware in parallel is one of the biggest challenges facing computer science today. The problem of designing parallel architectures and execution methods for solving large combinatorially explosive problems is studied here. Such problems typically do not have a regular structure that can be readily exploited for parallel execution. Prolog is chosen as a language to specify computation because it is seen as a language that is conceptually simple as well as amenable to parallel interpretation. A tree representation of Prolog computation called the REDUCE-OR tree is described as an alternative to the AND-OR tree representation. A process model based on this representation is developed; it captures more parallelism than most other proposed models. A class of bus architectures is proposed to implement the process model. A general model of parallel Prolog systems is developed and the proposed architectures examined in its framework. One of the important features of the proposed architectures is that they limit contracting of work to a close neighborhood. Various interconnection networks are analyzed, and a new one called the lattice-mesh is proposed. The lattice-mesh improves on the square grid of buses, while retaining its linear-area property. An extensive simulation framework was built. Results of some of the experiments conducted on the simulation system are given.

  6. Architectures for reasoning in parallel

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.

    1989-01-01

    The research conducted has dealt with rule-based expert systems and the algorithms that may lead to their effective parallelization. Both the forward- and backward-chained control paradigms were investigated in the course of this work, as was the best computer architecture for the developed and investigated algorithms. Two experimental vehicles were developed to facilitate this research: Backpac, a parallel backward-chained rule-based reasoning system, and Datapac, a parallel forward-chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct future. Applying future to a function call causes that call to be evaluated as a task running in parallel with the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors: an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32-processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines; the Multimax has all its processors hung off a common bus. All are shared-memory machines, but they have different schemes for sharing the memory and different locales for the shared memory. The main results of the investigation come from experiments on the 10-processor Encore and on the Concert with partitions of 32 or fewer processors. Additionally, experiments have been run with a stripped-down version of EMYCIN.
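
    A sketch of the future construct's semantics using Python's concurrent.futures (an analogy only; Multilisp futures are implicit and touch-forced, which Python's explicit result() call merely approximates):

        from concurrent.futures import ThreadPoolExecutor

        def rule_matches(rule, facts):
            # Stand-in for evaluating one rule against working memory.
            return [f for f in facts if rule(f)]

        facts = list(range(10))
        rules = [lambda f: f % 2 == 0, lambda f: f > 5]

        # Each rule evaluation is spawned as a task running alongside the caller,
        # analogous to wrapping a call in Multilisp's future.
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(rule_matches, r, facts) for r in rules]
            results = [f.result() for f in futures]  # touching a future forces its value
        print(results)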

  7. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. ©2001 The Willi Hennig Society.
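
    For reference, the scaling quantities discussed here are the standard measures (textbook definitions, not taken from the paper):

        S(p) = T(1) / T(p),        E(p) = S(p) / p

    where T(p) is the wall-clock time with p slave processors. "Excellent parallel efficiency" corresponds to E(p) near 1, and the branch-swapping plateau beyond 16 processors corresponds to S(p) flattening.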

  8. Parallel computation of seismic analysis of high arch dam

    NASA Astrophysics Data System (ADS)

    Chen, Houqun; Ma, Huaifa; Tu, Jin; Cheng, Guangqing; Tang, Juzhen

    2008-03-01

    Parallel computation programs are developed for three-dimensional meso-mechanics analysis of fully-graded dam concrete and seismic response analysis of high arch dams (ADs), based on the Parallel Finite Element Program Generator (PFEPG). The computational algorithms of the numerical simulation of the meso-structure of concrete specimens were studied. Taking into account damage evolution, static preload, strain rate effect, and the heterogeneity of the meso-structure of dam concrete, the fracture processes of damage evolution and configuration of the cracks can be directly simulated. In the seismic response analysis of ADs, all the following factors are involved, such as the nonlinear contact due to the opening and slipping of the contraction joints, energy dispersion of the far-field foundation, dynamic interactions of the dam-foundation-reservoir system, and the combining effects of seismic action with all static loads. The correctness, reliability and efficiency of the two parallel computational programs are verified with practical illustrations.

  9. Parallel keyed hash function construction based on chaotic maps

    NASA Astrophysics Data System (ADS)

    Xiao, Di; Liao, Xiaofeng; Deng, Shaojiang

    2008-06-01

    Recently, a variety of chaos-based hash functions have been proposed. Nevertheless, none of them works efficiently in a parallel computing environment. In this Letter, an algorithm for parallel keyed hash function construction is proposed, whose structure can ensure the uniform sensitivity of the hash value to the message. By means of the mechanisms of both changeable parameters and self-synchronization, the keystream establishes a close relation with the algorithm key and with the content and order of each message block. The entire message is modulated into the chaotic iteration orbit, and the coarse-grained trajectory is extracted as the hash value. Theoretical analysis and computer simulation indicate that the proposed algorithm can satisfy the performance requirements of a hash function. It is simple, efficient, practicable, and reliable. These properties make it a good choice for hashing on parallel computing platforms.
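
    A toy illustration of the general idea, per-block chaotic iteration whose parameters depend on the key and block position, combined order-sensitively into one digest (this is not the authors' algorithm and has no cryptographic strength):

        from multiprocessing import Pool

        def logistic_digest(args):
            block, key, index = args
            x = (key + index * 0.001) % 1.0 or 0.5   # changeable parameter per block
            for byte in block:
                x = 3.99 * x * (1.0 - x)             # chaotic logistic iteration
                x = (x + byte / 255.0) % 1.0 or 0.5  # modulate message into the orbit
            return int(x * 2**32)                    # coarse-grained trajectory sample

        def parallel_hash(message, key=0.654321, size=8):
            blocks = [message[i:i + size] for i in range(0, len(message), size)]
            with Pool() as pool:                     # blocks hashed concurrently
                parts = pool.map(logistic_digest, [(b, key, i) for i, b in enumerate(blocks)])
            digest = 0
            for part in parts:                       # order-sensitive combination
                digest = (digest * 31 + part) % 2**128
            return digest

        if __name__ == "__main__":
            print(hex(parallel_hash(b"parallel keyed hash demo")))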

  10. Sub-Second Parallel State Estimation

    SciTech Connect

    Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.; Wang, Shaobu; Huang, Zhenyu

    2014-10-31

    This report describes the performance of Pacific Northwest National Laboratory's (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA), and discusses the benefits of fast computational speed for power system applications. The test data, provided by BPA, are two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data were extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, more than 10 times faster than today's commercial tools. This improved computational performance can increase the reliability value of state estimation in several ways: (1) the shorter the time required to execute state estimation, the more time remains for operators to take appropriate actions and/or to apply automatic or manual corrective controls, which improves the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance, so its robustness can be enhanced by repeating the execution with adaptive adjustments, such as removing bad data and/or adjusting initial conditions, to compute a better estimate in the same time a traditional state estimator needs for a single estimate. Sub-second SE brings other benefits as well: PSE results can potentially be used in local and/or wide-area automatic corrective control actions that currently depend on raw measurements, minimizing the impact of bad measurements and providing opportunities to enhance power grid reliability and efficiency; PSE can also enable other advanced tools that rely on SE outputs, further improving operators' actions and the automated controls that mitigate the effects of severe events on the grid. The power grid continues to grow, and the number of measurements is increasing at an accelerated rate due to the variety of smart grid devices being introduced. A parallel state estimation implementation will perform better than traditional, sequential state estimation by utilizing the power of high performance computing (HPC). This increased performance positions parallel state estimators as valuable tools for operating an increasingly complex power grid.
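
    State estimation itself is typically formulated as a weighted least-squares problem; a linear toy version for reference (generic formulation with invented numbers, not PNNL's PSE solver):

        import numpy as np

        # z = H x + noise: measurements as a linear function of the state vector.
        H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])  # measurement model
        W = np.diag([1 / 0.01, 1 / 0.01, 1 / 0.04])          # inverse error variances
        z = np.array([1.02, 0.97, 0.08])                     # telemetered values

        # Normal equations of weighted least squares: (H^T W H) x = H^T W z
        x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
        print(x_hat)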

  11. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    SciTech Connect

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  12. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  13. Ultimately Reliable Pyrotechnic Systems

    NASA Technical Reports Server (NTRS)

    Scott, John H.; Hinkel, Todd

    2015-01-01

    This paper presents the methods by which NASA has designed, built, tested, and certified pyrotechnic devices for high-reliability operation in extreme environments, and illustrates potential applications in the oil and gas industry. NASA's extremely successful application of pyrotechnics is built upon documented procedures and test methods that have been maintained and developed since the Apollo Program. Standards are managed and rigorously enforced for performance margins, redundancy, lot sampling, and personnel safety. The pyrotechnics utilized in spacecraft include such devices as small initiators and detonators with the power of a shotgun shell, detonating cord systems for explosive energy transfer across many feet, precision linear shaped charges for breaking structural membranes, and booster charges to actuate valves and pistons. NASA's pyrotechnics program is one of the most successful in the history of human spaceflight: no pyrotechnic device developed in accordance with NASA's Human Spaceflight standards has ever failed in flight use. NASA's pyrotechnic initiators work reliably in temperatures as low as -420 F. Each of the 135 Space Shuttle flights fired 102 of these initiators, some setting off multiple pyrotechnic devices, without a single failure. The landing on Mars of the Curiosity rover fired 174 of NASA's pyrotechnic initiators to complete the famous '7 minutes of terror'; even after traveling through extreme radiation and thermal environments on the way to Mars, every one of them worked. These initiators have also fired on the surface of Titan. NASA's design controls, procedures, and processes produce the most reliable pyrotechnics in the world. Application of pyrotechnics designed and procured in this manner could enable the energy industry's emergency equipment, such as shutoff valves and deep-sea blowout preventers, to be left in place for years in extreme environments and still be relied upon to function when needed, greatly enhancing safety and operational availability.

  14. VCSEL-based parallel optical transmission module

    NASA Astrophysics Data System (ADS)

    Shen, Rongxuan; Chen, Hongda; Zuo, Chao; Pei, Weihua; Zhou, Yi; Tang, Jun

    2005-02-01

    This paper describes the design process and performance of an optimized parallel optical transmission module. Based on a 1×12 VCSEL (Vertical Cavity Surface Emitting Laser) array, we designed and fabricated high-speed parallel optical modules. Our parallel optical module contains a 1×12 VCSEL array, a 12-channel CMOS laser driver circuit, a high-speed PCB (Printed Circuit Board), an MT fiber connector and a packaging housing. The L-I-V characteristics of the 850 nm VCSEL were measured: at an operating current of 8 mA, the 3 dB frequency bandwidth exceeds 3 GHz and the optical output is 1 mW. The aggregate transmission rate across all 12 channels is 30 Gbit/s, with 2.5 Gbit/s per channel. By integrating the 1×12 VCSEL array with the driver array, we built a high-speed PCB to provide the optoelectronic chip with the operating voltage and high-speed signal current. LVDS (Low-Voltage Differential Signaling) was chosen as the input signal format to achieve better high-frequency performance. Active coupling was adopted with an MT connector (8° slant fiber array). We used Small Form Factor Pluggable (SFP) packaging; with the edge connector, the module can be inserted into the system without a bonding process.

  15. Nuclear performance and reliability

    SciTech Connect

    Rothwell, G.

    1993-07-01

    If fewer forced outages are a sign of improved safety, nuclear power plants have become safer and more productive over time. There has been a significant improvement in nuclear power plant performance, due largely to a decline in the forced outage rate and a dramatic drop in the average number of forced outages per fuel cycle. To encourage further increases in performance, regulatory incentive schemes should reward reactor operators for improved reliability and safety, as well as for improved performance.

  16. Ferrite logic reliability study

    NASA Technical Reports Server (NTRS)

    Baer, J. A.; Clark, C. B.

    1973-01-01

    Development and use of digital circuits called all-magnetic logic are reported. In these circuits the magnetic elements and their windings comprise the active circuit devices in the logic portion of a system. The ferrite logic device belongs to the all-magnetic class of logic circuits. The FLO device is novel in that it makes use of a dual or bimaterial ferrite composition in one physical ceramic body. This bimaterial feature, coupled with its potential for relatively high speed operation, makes it attractive for high reliability applications. (Maximum speed of operation approximately 50 kHz.)

  17. Lyapunov functions for parallel neural networks

    NASA Astrophysics Data System (ADS)

    Goles, Eric; Vichniac, Grad Y.

    1986-08-01

    We construct additive Lyapunov functions for neural networks defined by threshold transition rules implemented synchronously (à la Little). These functions take the forms E_p = -||Tx||_1 (Manhattan metric norm in the configuration space) and E_p = -Σ_i ||Σ_j T_ij x_j|| (norm in the space of internal states) if q states per site are present. These functions can be seen as energies for parallel iterations, comparable to Hopfield's energy E_s = -(1/2)⟨x, Tx⟩ for sequential iterations. Applications to the Marr-Poggio cooperative algorithm for stereopsis are presented.

  18. Fault Tree Reliability Analysis and Design-for-reliability

    Energy Science and Technology Software Center (ESTSC)

    1998-05-05

    WinR provides a fault tree analysis capability for performing systems reliability and design-for-reliability analyses. The package includes capabilities for sensitivity and uncertainty analysis, field failure data analysis, and optimization.
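
    To make the fault-tree arithmetic concrete, here is a minimal sketch of top-event probability for AND/OR gates over independent basic events (generic fault-tree math, not WinR's implementation):

        def gate_and(probs):
            # AND gate: all inputs must fail; independence assumed.
            out = 1.0
            for p in probs:
                out *= p
            return out

        def gate_or(probs):
            # OR gate: the gate fails if any input fails.
            out = 1.0
            for p in probs:
                out *= (1.0 - p)
            return 1.0 - out

        # Top event: (pump_a AND pump_b) OR control_failure
        print(gate_or([gate_and([0.01, 0.02]), 0.001]))  # -> ~0.0012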

  19. On Component Reliability and System Reliability for Space Missions

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Gillespie, Amanda M.; Monaghan, Mark W.; Sampson, Michael J.; Hodson, Robert F.

    2012-01-01

    This paper is to address the basics, the limitations and the relationship between component reliability and system reliability through a study of flight computing architectures and related avionics components for NASA future missions. Component reliability analysis and system reliability analysis need to be evaluated at the same time, and the limitations of each analysis and the relationship between the two analyses need to be understood.
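
    For independent components, the standard series/parallel formulas make the component-to-system relationship concrete (textbook identities, not results from the paper):

        R_series = ∏_i R_i,            R_parallel = 1 − ∏_i (1 − R_i)

    A serial chain is less reliable than its weakest component, while parallel redundancy is more reliable than its strongest one.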

  20. Understanding the Elements of Operational Reliability: A Key for Achieving High Reliability

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.

    2010-01-01

    This viewgraph presentation reviews operational reliability and its role in achieving high reliability through design and process reliability. The topics include: 1) Reliability Engineering Major Areas and interfaces; 2) Design Reliability; 3) Process Reliability; and 4) Reliability Applications.

  1. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  2. FAROW. Fatigue and Reliability of Wind Turbines

    SciTech Connect

    Veer, S.P.; Winterstein, S.R.; Lange, C.H.; Wilson, T.A.

    1994-11-01

    FAROW is a computer program that assists in the probabilistic analysis of the Fatigue and Reliability Of Wind turbines. The fatigue lifetime of wind turbine components is calculated using functional forms for important input quantities. Parameters of these functions are defined in an input file as either constants or random variables. The user can select from a library of random variable distribution functions. FAROW uses structural reliability techniques to calculate the mean time to failure, the probability of failure before a target lifetime, the relative importance of each of the random inputs, and the sensitivity of the reliability to all input parameters. Monte Carlo simulation is also available.

  3. Fatigue and Reliability of Wind Turbines

    SciTech Connect

    1995-08-17

    FAROW is a computer program that assists in the probabilistic analysis of the Fatigue and Reliability Of Wind turbines. The fatigue lifetime of wind turbine components is calculated using functional forms for important input quantities. Parameters of these functions are defined in an input file as either constants or random variables. The user can select from a library of random variable distribution functions. FAROW uses structural reliability techniques to calculate the mean time to failure, the probability of failure before a target lifetime, the relative importance of each of the random inputs, and the sensitivity of the reliability to all input parameters. Monte Carlo simulation is also available.
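
    A toy Monte Carlo version of one FAROW output, the probability of failure before a target lifetime, with the fatigue-life function and distributions chosen purely for illustration (not FAROW's models):

        import random

        def fatigue_life(stress_amp, coef, exponent):
            # Illustrative S-N-style functional form: life = coef * S**(-exponent)
            return coef * stress_amp ** (-exponent)

        def prob_failure_before(target_years, n=100_000, seed=1):
            rng = random.Random(seed)
            failures = 0
            for _ in range(n):
                s = rng.lognormvariate(2.3, 0.2)   # random stress amplitude
                c = rng.lognormvariate(12.0, 0.3)  # random material coefficient
                if fatigue_life(s, c, exponent=3.0) < target_years:
                    failures += 1
            return failures / n

        print(prob_failure_before(20.0))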

  4. Parallel plasma fluid turbulence calculations

    SciTech Connect

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-12-31

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.

  5. Parallel computation and computers for artificial intelligence

    SciTech Connect

    Kowalik, J.S. )

    1988-01-01

    This book discusses Parallel Processing in Artificial Intelligence; Parallel Computing using Multilisp; Execution of Common Lisp in a Parallel Environment; Qlisp; Restricted AND-Parallel Execution of Logic Programs; PARLOG: Parallel Programming in Logic; and Data-driven Processing of Semantic Nets. Attention is also given to: Application of the Butterfly Parallel Processor in Artificial Intelligence; On the Range of Applicability of an Artificial Intelligence Machine; Low-level Vision on Warp and the Apply Programming Mode; AHR: A Parallel Computer for Pure Lisp; FAIM-1: An Architecture for Symbolic Multi-processing; and Overview of AI Application Oriented Parallel Processing Research in Japan.

  6. Incorporating public preferences in planning urban water supply reliability

    NASA Astrophysics Data System (ADS)

    Howe, Charles W.; Smith, Mark Griffin

    1993-10-01

    This study has two objectives: (1) to compare the attitudes of the water-using public, water officials, and elected officials toward the risk of water supply shortage; and (2) to develop a methodology for incorporating water users' valuation of reliability in system design. Using contingent valuation techniques, we have measured the benefits and costs of different reliability levels in terms of water users' willingness to pay (WTP) for increases in reliability and in terms of their willingness to accept (WTA) compensation in the form of lower water bills for lower levels of reliability. Three cities in northern Colorado with diverse baseline levels of water supply reliability (Aurora, Boulder, and Longmont) are the study sites. Contrary to our hypothesis that water managers are unjustifiably risk averse, we find that water managers' preferences are consistent with customer WTP (WTA) values associated with the risk of water shortages and the system costs associated with reliability. Water managers in Boulder (high reliability) were willing to consider reductions in the level of system reliability while water managers in Aurora and Longmont (low reliability) favored the status quo or increased reliability. While these attitudes were sometimes contrary to a majority of customers' expressed interests in change, they were shown to be justified by comparison of supply system costs (savings) with aggregate WTP for additional reliability (WTA for less reliability).

  7. Massively parallel MRI detector arrays.

    PubMed

    Keil, Boris; Wald, Lawrence L

    2013-04-01

    Originally proposed as a method to increase sensitivity by extending the locally high sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts, relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called "ultimate" SNR and g-factor. We also review the methods for optimally combining array data and the changes in RF methodology needed to construct massively parallel MRI detector arrays, and show some examples of the state of the art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  8. PARAVT: Parallel Voronoi Tessellation code

    NASA Astrophysics Data System (ADS)

    Gonzalez, Roberto E.

    2016-01-01

    We present a new open source code for massive parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open source, parallel implementation has been available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. The domain decomposition takes into account consistent boundary computation between tasks and supports periodic conditions. In addition, the code computes neighbor lists, the Voronoi density, and the Voronoi cell volume for each particle, and can compute the density on a regular grid.
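
    The per-particle quantities PARAVT reports can be illustrated on a single task with SciPy (serial, and not PARAVT's MPI code; the point set is mock data):

        import numpy as np
        from scipy.spatial import Voronoi

        points = np.random.default_rng(0).random((100, 3))  # mock particle positions
        vor = Voronoi(points)

        # Natural neighbors: particle pairs whose Voronoi cells share a ridge.
        neighbors = {i: set() for i in range(len(points))}
        for a, b in vor.ridge_points:
            neighbors[a].add(b)
            neighbors[b].add(a)
        print(len(neighbors[0]), "neighbors of particle 0")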

  9. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels; it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  10. Massively Parallel MRI Detector Arrays

    PubMed Central

    Keil, Boris; Wald, Lawrence L

    2013-01-01

    Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called ultimate SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  11. Fast data parallel polygon rendering

    SciTech Connect

    Ortega, F.A.; Hansen, C.D.

    1993-09-01

    This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted at applications which require very fast polygon rendering for extremely large sets of polygons, such as are found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

  12. Supporting data intensive applications with medium grained parallelism

    SciTech Connect

    Pfaltz, J.L.; French, J.C.; Grimshaw, A.S.; Son, S.H.

    1992-04-01

    ADAMS is an ambitious effort to provide new database access paradigms for the kinds of scientific applications that require massively parallel access to very large data sets in order to be effective. Many of the Grand Challenge Problems fall into this category, as well as those kinds of scientific research which depend on widely distributed shared sets of disparate data. The essence of the ADAMS approach is to view data purely in functional terms, rather than the more traditional structural view in which multiple data items are aggregated into records or tuples of flat files. Further, ADAMS has been implemented as an embedded interface so that scientists can develop applications in the host programming language of their choice, often Fortran, Pascal, or C, and still access shared data generated in other environments. The syntax and semantics of ADAMS is essentially complete. The functional nature of the ADAMS data interface paradigm simplifies its implementation in a distributed environment, e.g., the Mentat run-time system, because one must only distribute functional servers, not pieces of data structures. However, this only opens up the possibility of effective parallel database processing; to realize this potential, far more work must be done in the areas of data dependence, intra-statement parallelism, parallel query optimization, and maintaining consistency and reliability in concurrent systems. Discovering how to make effective parallel data access a reality in real scientific applications is the point of this research.

  13. Parallelization of the Simulated Annealing Algorithm

    NASA Astrophysics Data System (ADS)

    Seacat, Russell Holland, III

    Nuclear medicine imaging involves the introduction of a radiopharmaceutical into the body and the subsequent detection of the radiation emanating from the organ at which the procedure was directed. The data set resulting from such a procedure is generally very underdetermined, due to the dimensions of the imaging apparatus, and underconstrained, due to the noise in the imaging process. A means by which more information can be obtained is through a form of imaging utilizing coded apertures. Although it increases the amount of information collected, coded-aperture imaging results in a multiplexing of the data. Demultiplexing the data requires a reconstruction process not required in conventional nuclear medicine imaging. The reconstruction process requires the optimization of an estimate of the object to be reconstructed. This optimization is done through the minimization of an energy functional, which in turn requires the optimization of several parameters. Solution of this type of problem is difficult because there are far too many degrees of freedom to permit an exhaustive search for an optimum, and in many cases no algorithms are known which will determine the exact optimum with significantly less work than exhaustive search. Instead, heuristic algorithms, such as the simulated annealing algorithm, have been employed and have proven effective in minimizing such energy functionals. Unfortunately, the simulated annealing algorithm, as is characteristic of Monte Carlo algorithms, is very computer intensive; in fact, it is so intensive that insufficient computational power is often the chief hindrance to investigation of the algorithm. The simulated annealing algorithm is, however, amenable to parallel processing. The goal of the research in this dissertation is to investigate the parameters involved in implementing the simulated annealing algorithm in parallel; the form of the algorithm implemented here, however, requires no annealing because the energy functionals investigated are quadratic in form. The parameters related to the parallelization of the simulated annealing algorithm include the decomposition of the reconstruction space among the processors, the formulation of the problem at the estimate level with the smallest task being a single perturbation trial evaluated on a local basis, and the communications required to keep all the processors as current as possible with changes made simultaneously to the estimate. Three objects, varying in size, shape and detail, are reconstructed utilizing the TRIMM parallel processor.
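
    A compact serial sketch of the perturbation-trial loop described above (generic simulated annealing with an invented quadratic toy energy; the dissertation's parallel implementation decomposes the reconstruction space across processors, which this sketch does not attempt):

        import math, random

        def anneal(energy, x, steps=10_000, t0=1.0, seed=0):
            rng = random.Random(seed)
            e = energy(x)
            for k in range(steps):
                t = t0 * (1.0 - k / steps)               # linear cooling schedule
                i = rng.randrange(len(x))
                trial = x[:]                             # smallest task: one perturbation
                trial[i] += rng.uniform(-1.0, 1.0)
                de = energy(trial) - e
                if de < 0 or (t > 0 and rng.random() < math.exp(-de / t)):
                    x, e = trial, e + de                 # accept the trial
            return x, e

        # Quadratic toy energy standing in for the reconstruction functional.
        target = [3.0, -1.0, 2.0]
        quad = lambda v: sum((a - b) ** 2 for a, b in zip(v, target))
        print(anneal(quad, [0.0, 0.0, 0.0]))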

  14. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

  15. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 7. Technical Report #1206

    ERIC Educational Resources Information Center

    Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the seventh-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis, …

  16. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 2. Technical Report #1201

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei; Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the second-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis, …

  17. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 6. Technical Report #1205

    ERIC Educational Resources Information Center

    Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the sixth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis, …

  18. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 5. Technical Report #1204

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the fifth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis, …

  19. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 4. Technical Report #1203

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the fourth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis, …

  20. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 3. Technical Report #1202

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei; Irvin, P. Shawn; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the third-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis, …

  1. Maximum likelihood estimation and the multivariate Bernoulli distribution: An application to reliability

    SciTech Connect

    Kvam, P.H.

    1994-08-01

    We investigate systems designed using redundant component configurations. If external events exist in the working environment that cause two or more components in the system to fail within the same demand period, the designed redundancy in the system can be quickly nullified. In the engineering field, such events are called common cause failures (CCFs), and are primary factors in some risk assessments. If CCFs have positive probability, but are not addressed in the analysis, the assessment may contain a gross overestimation of the system reliability. We apply a discrete, multivariate shock model for a parallel system of two or more components, allowing for positive probability that such external events can occur. The methods derived are motivated by attribute data for emergency diesel generators from various US nuclear power plants. Closed form solutions for maximum likelihood estimators exist in many cases; statistical tests and confidence intervals are discussed for the different test environments considered.
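
    A common closed-form simplification of the effect described, though not the paper's multivariate Bernoulli model, is the beta-factor model, in which a fraction beta of each component's failure probability is a common cause shared by all components:

        def parallel_system_unreliability(q, n, beta):
            """q: total component failure probability per demand,
            n: redundant components, beta: common-cause fraction."""
            q_ccf = beta * q                 # shock that fails all components at once
            q_indep = (1 - beta) * q         # independent part of each failure
            return q_ccf + (1 - q_ccf) * q_indep ** n

        print(parallel_system_unreliability(q=0.01, n=2, beta=0.1))
        # ~1.08e-3: dominated by the common-cause term; ignoring CCF would
        # give (0.01)**2 = 1e-4, a gross overestimate of system reliability.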

  2. Testing for PV Reliability (Presentation)

    SciTech Connect

    Kurtz, S.; Bansal, S.

    2014-09-01

    The DOE SUNSHOT workshop is seeking input from the community about PV reliability and how the DOE might address gaps in understanding. This presentation describes the types of testing that are needed for PV reliability and introduces a discussion to identify gaps in our understanding of PV reliability testing.

  3. Recent Developments in Reliability Analysis.

    ERIC Educational Resources Information Center

    Krippendorff, Klaus

    When one wants to set data reliability standards for a class of scientific inquiries or when one needs to compare and select among many different kinds of data with reliabilities that are crucial to a particular research undertaking, then one needs a single reliability coefficient that is adaptable to all or most situations. Work toward this goal

  4. Parallel coding schemes of whisker velocity in the rat's somatosensory system.

    PubMed

    Lottem, Eran; Gugig, Erez; Azouz, Rony

    2015-03-15

    The function of rodents' whisker somatosensory system is to transform tactile cues, in the form of vibrissa vibrations, into neuronal responses. It is well established that rodents can detect numerous tactile stimuli and tell them apart. However, the transformation of tactile stimuli obtained through whisker movements to neuronal responses is not well-understood. Here we examine the role of whisker velocity in tactile information transmission and its coding mechanisms. We show that in anaesthetized rats, whisker velocity is related to the radial distance of the object contacted and its own velocity. Whisker velocity is accurately and reliably coded in first-order neurons in parallel, by both the relative time interval between velocity-independent first spike latency of rapidly adapting neurons and velocity-dependent first spike latency of slowly adapting neurons. At the same time, whisker velocity is also coded, although less robustly, by the firing rates of slowly adapting neurons. Comparing first- and second-order neurons, we find similar decoding efficiencies for whisker velocity using either temporal or rate-based methods. Both coding schemes are sufficiently robust and hardly affected by neuronal noise. Our results suggest that whisker kinematic variables are coded by two parallel coding schemes and are disseminated in a similar way through various brain stem nuclei to multiple brain areas. PMID:25552637

  5. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-03-01

    Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time-sharing mechanism to handle the problem of scheduling gangs of processors. User programs and their gangs of processors are put to sleep and awakened by the gang scheduler to provide a time-sharing environment. Time quantums are adjusted according to priority queues and a system of fair-share accounting. The initial platform for this software is the 128-processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory. 2 refs., 1 fig.

  6. ITER LHe Plants Parallel Operation

    NASA Astrophysics Data System (ADS)

    Fauve, E.; Bonneton, M.; Chalifour, M.; Chang, H.-S.; Chodimella, C.; Monneret, E.; Vincent, G.; Flavien, G.; Fabre, Y.; Grillot, D.

    The ITER Cryogenic System includes three identical liquid helium (LHe) plants, with a total average cooling capacity equivalent to 75 kW at 4.5 K. The LHe plants provide the 4.5 K cooling power to the magnets and cryopumps. They are designed to operate in parallel and to handle heavy load variations. In this proceeding we will describe the present status of the ITER LHe plants with emphasis on i) the project schedule, ii) the plants' characteristics/layout and iii) the basic principles and control strategies for a stable operation of the three LHe plants in parallel.

  7. Medipix2 parallel readout system

    NASA Astrophysics Data System (ADS)

    Fanti, V.; Marzeddu, R.; Randaccio, P.

    2003-08-01

    A fast parallel readout system based on a PCI board has been developed in the framework of the Medipix collaboration. The readout electronics consists of two boards: the motherboard directly interfacing the Medipix2 chip, and the PCI board with digital I/O ports 32 bits wide. The device driver and readout software have been developed at low level in Assembler to allow fast data transfer and image reconstruction. The parallel readout permits a transfer rate up to 64 Mbytes/s. http://medipix.web.cern.ch/MEDIPIX/

  8. Program for computer aided reliability estimation

    NASA Technical Reports Server (NTRS)

    Mathur, F. P. (inventor)

    1972-01-01

    A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.
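
    In the spirit of the program's repository of redundancy equations, a sketch that tabulates one such reliability function over a mission-time variable (the exponential/parallel-redundancy form is a standard model, not necessarily one of the program's stored equations):

        import math

        def redundant_reliability(t, lam, n):
            # n identical units in parallel, each with constant failure rate lam.
            return 1.0 - (1.0 - math.exp(-lam * t)) ** n

        for t in (100, 1000, 10000):          # mission hours as the varied parameter
            print(t, round(redundant_reliability(t, lam=1e-4, n=3), 6))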

  9. Parallel computation using boundary elements in solid mechanics

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1990-01-01

    The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming a linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, a parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for a demonstration problem solved on the Sequent Symmetry S81 parallel computing system.

  10. Supercomputing on massively parallel bit-serial architectures

    NASA Technical Reports Server (NTRS)

    Iobst, Ken

    1985-01-01

    Research on the Goodyear Massively Parallel Processor (MPP) suggests that high-level parallel languages are practical and can be designed with powerful new semantics that allow algorithms to be efficiently mapped to the real machines. For the MPP these semantics include parallel/associative array selection for both dense and sparse matrices, variable precision arithmetic to trade accuracy for speed, micro-pipelined train broadcast, and conditional branching at the processing element (PE) control unit level. The preliminary design of a FORTRAN-like parallel language for the MPP has been completed and is being used to write programs to perform sparse matrix array selection, min/max search, matrix multiplication, Gaussian elimination on single bit arrays and other generic algorithms. A description is given of the MPP design. Features of the system and its operation are illustrated in the form of charts and diagrams.

  11. Design and performance of VLSI based parallel multiplier

    SciTech Connect

    Agrawal, D.P.; Pathak, G.C.; Swain, N.K.; Agrawal, B.K.

    1983-01-01

    The VLSI design and layout of an O(log^2 n)-time n-bit binary parallel multiplier for two unsigned operands is introduced. The proposed design partitions the multiplier and multiplicand bits into four groups of n/4 bits each and then reduces the resulting matrix of sixteen product terms using 3-to-2 parallel counters and a Brent-Kung O(log n)-time parallel adder. Area-time performance of the present scheme has been compared with existing schemes for parallel multipliers. The regular and recursive design of the multiplier is shown to be suitable for VLSI implementation, and an improved table-lookup multiplier forms the basis of the recursive design scheme. 17 references.
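
    The partitioning idea can be checked numerically with a short sketch (an arithmetic illustration of the four-way digit split and the sixteen partial products; it says nothing about the counter/adder hardware):

        def partitioned_multiply(a, b, n=32):
            # Split each operand (assumed to fit in n bits) into four n//4-bit
            # digits and sum the 16 partial products, mimicking the product-term
            # matrix that the counters and parallel adder reduce in hardware.
            w = n // 4
            mask = (1 << w) - 1
            da = [(a >> (w * i)) & mask for i in range(4)]
            db = [(b >> (w * i)) & mask for i in range(4)]
            acc = 0
            for i in range(4):
                for j in range(4):
                    acc += da[i] * db[j] << (w * (i + j))
            return acc

        assert partitioned_multiply(123456, 7890) == 123456 * 7890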

  12. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  13. Parallel retreat of rock slopes underlain by alternation of strata

    NASA Astrophysics Data System (ADS)

    Imaizumi, Fumitoshi; Nishii, Ryoko; Murakami, Wataru; Daimaru, Hiromu

    2015-06-01

    Characteristic landscapes (e.g., cuesta, cliff and overhang of caprock, or stepped terrain) formed by differential erosion can be found in areas composed of variable geology exhibiting different resistances to weathering. Parallel retreat of slopes, defined as recession of slopes without changes in their topography, is sometimes observed on slopes composed of multiple strata. However, the conditions needed for such parallel retreat have not yet been sufficiently clarified. In this study, we elucidated the conditions for parallel retreat of rock slopes composed of alternating layers using a geometric method. In addition, to evaluate whether various rock slopes fulfilled the conditions for parallel retreat, we analyzed topographic data obtained from periodic measurement of rock slopes in the Aka-kuzure landslide, central Japan. Our geometric analysis of the two-dimensional slopes indicates that dip angle, slope gradient, and erosion rate are the factors that determine parallel retreat conditions. However, dip angle does not significantly affect parallel retreat conditions in the case of steep back slopes (slope gradient > 40°). In contrast, dip angle is an important factor when we consider the parallel retreat conditions in dip slopes and gentler back slopes (slope gradient < 40°). Geology in the Aka-kuzure landslide is complex because of faulting, folding, and toppling, but the spatial distribution of the erosion rate measured by airborne LiDAR scanning and terrestrial laser scanning (TLS) roughly fulfills the parallel retreat conditions. The Aka-kuzure landslide is characterized by a repetition of steep sandstone cliffs and gentle shale slopes that form a stepped topography. The inherent resistance of sandstone to weathering is greater than that of shale. However, the vertical erosion rate within the sandstone was higher than that within the shale, due to the direct relationship between slope gradient and vertical erosion rate in the Aka-kuzure landslide.

  14. Spectrophotometric Assay of Mebendazole in Dosage Forms Using Sodium Hypochlorite

    NASA Astrophysics Data System (ADS)

    Swamy, N.; Prashanth, K. N.; Basavaiah, K.

    2014-07-01

    A simple, selective and sensitive spectrophotometric method is described for the determination of mebendazole (MBD) in bulk drug and dosage forms. The method is based on the reaction of MBD with hypochlorite in the presence of sodium bicarbonate to form the chloro derivative of MBD, followed by the destruction of the excess hypochlorite by nitrite ion. The color was formed by the oxidation of iodide with the chloro derivative of MBD to iodine in the presence of starch, forming the blue colored product, which was measured at 570 nm. The optimum conditions that affect the reaction were ascertained and, under these conditions, a linear relationship was obtained in the concentration range of 1.25-25.0 μg/ml MBD. The calculated molar absorptivity and Sandell sensitivity values are 9.56·10³ l·mol⁻¹·cm⁻¹ and 0.031 μg/cm², respectively. The limits of detection and quantification are 0.11 and 0.33 μg/ml, respectively. The proposed method was applied successfully to the determination of MBD in bulk drug and dosage forms, and no interference was observed from excipients present in the dosage forms. The reliability of the proposed method was further checked by parallel determination by the reference method and also by recovery studies.

  15. A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix

    NASA Technical Reports Server (NTRS)

    Shroff, Gautam

    1989-01-01

    A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm; certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
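
    For orientation, a serial classical Jacobi sweep for the real symmetric case is sketched below (the paper's algorithm is a parallel, norm-reducing generalization to general complex matrices, which this toy does not attempt):

        import numpy as np

        def jacobi_eigenvalues(a, sweeps=10):
            a = a.astype(float).copy()
            n = a.shape[0]
            for _ in range(sweeps):
                for p in range(n - 1):
                    for q in range(p + 1, n):
                        if abs(a[p, q]) < 1e-12:
                            continue
                        # Rotation angle chosen to zero the off-diagonal a[p, q].
                        theta = 0.5 * np.arctan2(2 * a[p, q], a[q, q] - a[p, p])
                        c, s = np.cos(theta), np.sin(theta)
                        j = np.eye(n)
                        j[p, p] = j[q, q] = c
                        j[p, q], j[q, p] = s, -s
                        a = j.T @ a @ j                 # similarity transform
            return np.sort(np.diag(a))

        m = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 0.2], [0.5, 0.2, 1.0]])
        print(jacobi_eigenvalues(m))
        print(np.sort(np.linalg.eigvalsh(m)))           # agreement check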

  16. Physiologic Trend Detection and Artifact Rejection: A Parallel Implementation of a Multi-state Kalman Filtering Algorithm

    PubMed Central

    Sittig, Dean F.; Factor, Michael

    1989-01-01

    Using a parallel implementation of the multi-state Kalman filtering algorithm, we have developed an accurate method of reliably detecting and identifying trends, abrupt changes, and artifacts from multiple physiologic data streams in real-time. The Kalman filter algorithm was implemented within an innovative software architecture for parallel computation: a parallel process trellis. Examples, processed in real-time, of both simulated and actual data serve to illustrate the potential value of the Kalman filter as a tool in physiologic monitoring.
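
    A heavily simplified, scalar version of the filtering-with-artifact-rejection idea fits in a few lines. The sketch below (Python/NumPy) runs a random-walk Kalman filter and flags samples whose innovation exceeds a 3-sigma gate; the noise values are placeholders, and the multi-state method described above goes further by running several such models in parallel and weighing them probabilistically:

      # Scalar Kalman filter with a simple artifact gate (Python/NumPy).
      import numpy as np

      def kalman_trend(z, q=1e-4, r=1.0):
          """Random-walk state model: x_k = x_{k-1} + w, z_k = x_k + v."""
          x, p = float(z[0]), 1.0            # state estimate and its variance
          estimates, artifacts = [], []
          for zk in z:
              p = p + q                      # predict (state assumed constant)
              innov = zk - x                 # innovation (measurement residual)
              s = p + r                      # innovation variance
              if innov * innov > 9.0 * s:    # ~3-sigma gate: flag as artifact
                  artifacts.append(True)     # reject sample, keep prior estimate
              else:
                  artifacts.append(False)
                  k = p / s                  # Kalman gain
                  x += k * innov             # update state
                  p *= (1.0 - k)             # update variance
              estimates.append(x)
          return np.array(estimates), np.array(artifacts)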

  17. The AIS-5000 parallel processor

    SciTech Connect

    Schmitt, L.A.; Wilson, S.S.

    1988-05-01

    The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has superior cost/performance characteristics compared with two-dimensional mesh-connected systems. The design of the processing elements and their interconnections, as well as the software used to program the system, allows a wide variety of algorithms and applications to be implemented. In this paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways, and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance benchmarks are given for certain simple and complex functions.

  18. Tutorial: Parallel Simulation on Supercomputers

    SciTech Connect

    Perumalla, Kalyan S

    2012-01-01

    This tutorial introduces typical hardware and software characteristics of extant and emerging supercomputing platforms, and presents issues and solutions in executing large-scale parallel discrete event simulation scenarios on such high performance computing systems. Covered topics include synchronization, model organization, example applications, and observed performance from illustrative large-scale runs.

  19. GRay: Massive parallel ODE integrator

    NASA Astrophysics Data System (ADS)

    Chan, Chi-kwan; Psaltis, Dimitrios; Ozel, Feryal

    2014-03-01

    GRay is a massively parallel ordinary differential equation integrator that employs the "stream processing paradigm." It is designed to efficiently integrate billions of photons in curved spacetime according to Einstein's general theory of relativity. The code is implemented in CUDA C/C++.
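
    In the stream-processing style, the same integrator kernel is applied independently to a large batch of states. The sketch below, in Python/NumPy rather than the code's CUDA C/C++, and with a toy harmonic oscillator standing in for the geodesic equations, shows the pattern of advancing a million independent "rays" with one vectorized RK4 step:

      # Batched classical RK4 step (Python/NumPy); toy right-hand side.
      import numpy as np

      def rk4_step(f, y, t, h):
          k1 = f(t, y)
          k2 = f(t + h / 2, y + h / 2 * k1)
          k3 = f(t + h / 2, y + h / 2 * k2)
          k4 = f(t + h, y + h * k3)
          return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

      # Simple harmonic oscillator, dq/dt = p, dp/dt = -q, as a stand-in
      # for the geodesic equations; each row is one independent "ray".
      f = lambda t, y: np.stack([y[:, 1], -y[:, 0]], axis=1)
      y = np.random.default_rng(0).normal(size=(1_000_000, 2))
      for step in range(100):
          y = rk4_step(f, y, 0.0, 0.01)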

  20. Matpar: Parallel Extensions for MATLAB

    NASA Technical Reports Server (NTRS)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  1. Parallel, Distributed Scripting with Python

    SciTech Connect

    Miller, P J

    2002-05-24

    Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadm tools such as password crackers, file purgers, etc ... These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and co-ordinate the work.
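
    The dictionary-check example parallelizes naturally. The sketch below uses the standard library's multiprocessing pool rather than the MPI-based approach the abstract has in mind, a SHA-256 hash stands in for the password encryption, and the target hash and words.txt file are hypothetical:

      # Parallel dictionary check (Python, standard library only).
      import hashlib
      from multiprocessing import Pool

      TARGET = hashlib.sha256(b"hunter2").hexdigest()   # hypothetical target

      def check(word):
          # Return the word if its hash matches the target, else None.
          return word if hashlib.sha256(word.encode()).hexdigest() == TARGET else None

      if __name__ == "__main__":
          with open("words.txt") as f:                  # ~25,000-word dictionary
              words = f.read().split()
          with Pool() as pool:                          # one worker per core
              hits = [w for w in pool.map(check, words, chunksize=500) if w]
          print(hits)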

  2. Parallel computing: A case study

    NASA Astrophysics Data System (ADS)

    Slaets, Jan F. W.; Travieso, Gonzalo

    1989-11-01

    A simple molecular dynamics simulation is used to analyze some speed optimization techniques. The efficiency of sequential and parallel algorithms is discussed. An implementation on a T800 transputer array is proposed and the estimated performance is compared with that obtained on a supercomputer.

  3. Evaluation of fault-tolerant parallel-processor architectures over long space missions

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1989-01-01

    The impact of a five year space mission environment on fault-tolerant parallel processor architectures is examined. The target application is a Strategic Defense Initiative (SDI) satellite requiring 256 parallel processors to provide the computation throughput. The reliability requirements are that the system still be operational after five years with 0.99 probability and that the probability of system failure during one-half hour of full operation be less than 10⁻⁷. The fault tolerance features an architecture must possess to meet these reliability requirements are presented, many potential architectures are briefly evaluated, and one candidate architecture, the Charles Stark Draper Laboratory's Fault-Tolerant Parallel Processor (FTPP), is evaluated in detail. A methodology for designing a preliminary system configuration to meet the reliability and performance requirements of the mission is then presented and demonstrated by designing an FTPP configuration.
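
    A back-of-the-envelope version of the five-year requirement can be written down with a simple k-of-n model. The sketch below assumes independent, exponentially distributed processor failures and perfect fault coverage (the paper's actual evaluation uses far richer models), with a made-up per-processor failure rate:

      # k-of-n mission reliability sketch (Python standard library).
      from math import comb, exp

      RATE = 1e-6          # hypothetical per-processor failures per hour
      HOURS = 5 * 8760     # five-year mission
      NEED = 256           # processors required for full throughput

      def mission_reliability(spares):
          """P(at least NEED of NEED + spares processors survive the mission)."""
          n = NEED + spares
          p = exp(-RATE * HOURS)            # single-processor reliability
          return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                     for k in range(NEED, n + 1))

      for s in (0, 8, 16, 32):
          print(s, mission_reliability(s))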

  4. Product reliability and thin-film photovoltaics

    NASA Astrophysics Data System (ADS)

    Gaston, Ryan; Feist, Rebekah; Yeung, Simon; Hus, Mike; Bernius, Mark; Langlois, Marc; Bury, Scott; Granata, Jennifer; Quintana, Michael; Carlson, Carl; Sarakakis, Georgios; Ogden, Douglas; Mettas, Adamantios

    2009-08-01

    Despite significant growth in photovoltaics (PV) over the last few years, only approximately 1.07 billion kWhr of electricity is estimated to have been generated from PV in the US during 2008, or 0.27% of total electrical generation. PV market penetration is set for a paradigm shift, as fluctuating hydrocarbon prices and an acknowledgement of the environmental impacts associated with their use, combined with breakthrough new PV technologies, such as thin-film and BIPV, are driving the cost of energy generated with PV to parity or cost advantage versus more traditional forms of energy generation. In addition to reaching cost parity with grid supplied power, a key to the long-term success of PV as a viable energy alternative is the reliability of systems in the field. New technologies may or may not have the same failure modes as previous technologies. Reliability testing and product lifetime issues continue to be one of the key bottlenecks in the rapid commercialization of PV technologies today. In this paper, we highlight the critical need for moving away from relying on traditional qualification and safety tests as a measure of reliability and focus instead on designing for reliability and its integration into the product development process. A drive towards quantitative predictive accelerated testing is emphasized and an industrial collaboration model addressing reliability challenges is proposed.

  5. Reliability analysis of continuous fiber composite laminates

    NASA Technical Reports Server (NTRS)

    Thomas, David J.; Wetherhold, Robert C.

    1990-01-01

    A composite lamina may be viewed as a homogeneous solid whose directional strengths are random variables. Calculation of the lamina reliability under a multi-axial stress state can be approached by either assuming that the strengths act separately (modal or independent action), or that they interact through a quadratic interaction criterion. The independent action reliability may be calculated in closed form, while interactive criteria require simulations; there is currently insufficient data to make a final determination of preference between them. Using independent action for illustration purposes, the lamina reliability may be plotted in either stress space or in a non-dimensional representation. For the typical laminated plate structure, the individual lamina reliabilities may be combined in order to produce formal upper and lower bounds of reliability for the laminate, similar in nature to the bounds on properties produced from variational elastic methods. These bounds are illustrated for a (0/±15)s Graphite/Epoxy (GR/EP) laminate. In addition, simple physically plausible phenomenological rules are proposed for redistribution of load after a lamina has failed. These rules are illustrated by application to (0/±15)s and (90/±45/0)s GR/EP laminates and results are compared with respect to the proposed bounds.
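
    The closed-form independent-action case is short enough to write out. If each directional strength is Weibull-distributed and acts separately, the lamina survives only when every strength exceeds its stress, so the per-mode survival probabilities multiply. The numbers below are invented for illustration:

      # Independent-action lamina reliability sketch (Python/NumPy).
      import numpy as np

      def lamina_reliability(stress, scale, shape):
          """Product of Weibull survival probabilities, one per failure mode:
          R = exp(-sum((stress_i / scale_i) ** shape_i))."""
          stress, scale, shape = map(np.asarray, (stress, scale, shape))
          return float(np.exp(-np.sum((stress / scale) ** shape)))

      # Hypothetical longitudinal, transverse, and shear stresses (MPa)
      R = lamina_reliability(stress=[900.0, 20.0, 35.0],
                             scale=[1500.0, 40.0, 70.0],   # Weibull scale strengths
                             shape=[20.0, 15.0, 18.0])     # Weibull moduli
      print(R)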

  6. Mirror versus parallel bimanual reaching

    PubMed Central

    2013-01-01

    Background In spite of their importance to everyday function, tasks that require both hands to work together such as lifting and carrying large objects have not been well studied and the full potential of how new technology might facilitate recovery remains unknown. Methods To help identify the best modes for self-teleoperated bimanual training, we used an advanced haptic/graphic environment to compare several modes of practice. In a 2-by-2 study, we compared mirror vs. parallel reaching movements, and also compared veridical display to one that transforms the right hand’s cursor to the opposite side, reducing the area that the visual system has to monitor. Twenty healthy, right-handed subjects (5 in each group) practiced 200 movements. We hypothesized that parallel reaching movements would be the best performing, and attending to one visual area would reduce the task difficulty. Results The two-way comparison revealed that mirror movement times took an average 1.24 s longer to complete than parallel. Surprisingly, subjects’ movement times moving to one target (attending to one visual area) also took an average of 1.66 s longer than subjects moving to two targets. For both hands, there was also a significant interaction effect, revealing the lowest errors for parallel movements moving to two targets (p < 0.001). This was the only group that began and maintained low errors throughout training. Conclusion Combined with other evidence, these results suggest that the most intuitive reaching performance can be observed with parallel movements with a veridical display (moving to two separate targets). These results point to the expected levels of challenge for these bimanual training modes, which could be used to advise therapy choices in self-neurorehabilitation. PMID:23837908

  7. Parallel execution of LISP programs

    SciTech Connect

    Weening, J.S.

    1989-01-01

    This dissertation considers several issues in the execution of Lisp programs on shared-memory multiprocessors. An overview of constructs for explicit parallelism in Lisp is first presented. The problems of partitioning a program into processes and scheduling these processes are then described, and a number of methods for performing these tasks are proposed. These include cutting off process creation based on properties of the computation tree of the program, and basing partitioning decisions on the runtime state of the system instead of on the program alone. An experimental study of these methods has been performed using a simulator for parallel Lisp. The simulator, written in Common Lisp using a continuation-passing style, is described in detail. This is followed by a description of the experiments that were performed and an analysis of the results. Two programs are used as illustrations: a fast Fourier transform, which has an abundance of parallelism, and the Cocke-Younger-Kasami parsing algorithm, for which good speedup is not as easy to obtain. The difficulty of using cutoff-based partitioning methods, and the differences between various scheduling methods, are shown. A combination of partitioning and scheduling methods which the author calls dynamic partitioning is analyzed in more detail. This method is based on examining the machine's runtime state; it requires that the programmer only identify parallelism in the program, without deciding which potential parallelism is actually useful. Several theorems are proved providing upper bounds on the amount of overhead produced by this method. He concludes that for programs whose computation trees have small height relative to their total size, dynamic partitioning can achieve asymptotically minimal overhead in the cost of process creation.

  8. File concepts for parallel I/O

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1989-01-01

    The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.

  9. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
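
    The flavor of the approach can be illustrated with plain (non-adaptive) importance sampling, in which the sampling density is simply re-centered near an approximate most probable failure point; the adaptive method above goes further by refining the sampling domain incrementally. The limit state and all numbers below are hypothetical:

      # Importance-sampling estimate of a failure probability (Python/NumPy).
      import numpy as np

      rng = np.random.default_rng(0)

      def g(x):                         # toy limit state: failure when g < 0
          return 6.0 - x[:, 0] - x[:, 1]

      # Standard-normal inputs; sample from a normal re-centered at the
      # (here analytically known) most probable failure point (3, 3).
      mpp = np.array([3.0, 3.0])
      x = rng.normal(loc=mpp, size=(100_000, 2))

      # Weight = true density / sampling density, computed in log space
      log_w = -0.5 * (x**2).sum(axis=1) + 0.5 * ((x - mpp)**2).sum(axis=1)
      pf = np.mean((g(x) < 0) * np.exp(log_w))
      print(pf)   # exact value is 1 - Phi(6 / sqrt(2)), about 1.1e-5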

  10. Parallelization of implicit finite difference schemes in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel

    1990-01-01

    Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult and non-intuitive. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.

  11. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2011-01-01

    Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
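
    The serial-closure effect the authors describe falls straight out of the multiplication rule: if a system needs n technologies in series and an overall reliability target R, an equal allocation forces each technology to R^(1/n). A minimal illustration with made-up numbers:

      # Equal-allocation reliability requirement for serial technologies.
      target = 0.99                        # hypothetical system requirement
      for n in (4, 8, 12, 16):             # number of serial technologies
          print(n, round(target ** (1.0 / n), 5))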

  12. Superfast robust digital image correlation analysis with parallel computing

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Tian, Long

    2015-03-01

    Existing digital image correlation (DIC) using the robust reliability-guided displacement tracking (RGDT) strategy for full-field displacement measurement is a path-dependent process that can only be executed sequentially. This path-dependent tracking strategy not only limits the potential of DIC for further improvement of its computational efficiency but also wastes the parallel computing power of modern computers with multicore processors. To maintain the robustness of the existing RGDT strategy and to overcome its deficiency, an improved RGDT strategy using a two-section tracking scheme is proposed. In the improved RGDT strategy, the calculated points with correlation coefficients higher than a preset threshold are all taken as reliably computed points and given the same priority to extend the correlation analysis to their neighbors. Thus, DIC calculation is first executed in parallel at multiple points by separate independent threads. Then for the few calculated points with correlation coefficients smaller than the threshold, DIC analysis using existing RGDT strategy is adopted. Benefiting from the improved RGDT strategy and the multithread computing, superfast DIC analysis can be accomplished without sacrificing its robustness and accuracy. Experimental results show that the presented parallel DIC method performed on a common eight-core laptop can achieve about a 7 times speedup.
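
    The two-section scheme can be miniaturized as a two-pass loop: a fully parallel first pass over all points of interest, then a sequential second pass that revisits only the points whose correlation fell below the threshold. Everything below is schematic; correlate() is a stand-in, not a real DIC kernel:

      # Two-pass, threshold-gated analysis sketch (Python standard library).
      from concurrent.futures import ThreadPoolExecutor
      import random

      def correlate(point):
          # Deterministic stand-in for a real DIC correlation kernel
          return point, random.Random(point).random()

      points = range(10_000)
      THRESHOLD = 0.2

      with ThreadPoolExecutor() as pool:        # pass 1: independent, parallel
          coeffs = dict(pool.map(correlate, points))

      low = [p for p, c in coeffs.items() if c < THRESHOLD]
      for p in low:                             # pass 2: sequential fallback,
          coeffs[p] = correlate(p)[1]           # e.g., reliability-guided re-analysis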

  13. Age-forming aluminum panels

    NASA Technical Reports Server (NTRS)

    Baxter, G. I.

    1976-01-01

    Contoured-stiffened 63 by 337 inch 2124 aluminum alloy panels are machined in-the-flat to make integral, tapered T-capped stringers, parallel with longitudinal centerline. Aging fixture, which includes net contour formers made from lofted contour templates, has eggcrate-like structure for use in forming and checking panels.

  14. The design and implementation of a workbench for application specific message-passing parallel reconfigurable architectures

    SciTech Connect

    Hwang, K.R.D.

    1989-01-01

    This thesis develops a message-passing model for the design, simulation and evaluation of parallel reconfigurable architectures. A designer's workbench, OODRA (Object-Oriented Design of Reliable/Reconfigurable Architecture), has been implemented to realize the proposed message-passing model; it provides a window-based, menu-driven, graphics-interactive environment for designing application-specific parallel architectures, as well as for developing reconfiguration algorithms, reliability analysis, and architectural-level yield analysis. Applications of the workbench include the design and evaluation of an adaptive digital beamforming architecture, and of a fast Fourier transform algorithm running on a simulated Hypercube architecture.

  15. Demonstrating tail-gas treater reliability reduces costs

    SciTech Connect

    Kafesjian, A.S.; Dewey, R.C.

    1995-04-01

    The reliability of a hybrid tail-gas treating unit (TGTU), proposed as an alternative to parallel TGTUs, is nearly equal to that of two parallel units. This is proven using fault tree analysis. A Gulf Coast refiner was able to reduce major process equipment needed to satisfy environmental regulations and permit expansion of sulfur recovery facilities. Estimated capital cost savings of 67% are achievable by installing a hybrid system instead of a complete unit to process tail gas from the second of two parallel Claus sulfur recovery units. Using the same component failure rates, component repair times and human error probabilities for both cases and applying well-documented methods of quantifying onstream time, it is shown that there is not a meaningful difference in the annual sulfur dioxide (SO{sub 2}) emissions between the two cases. The hybrid unit provides the reliability and onstream time required by regulatory agencies, while reducing the capital outlay to comply with environmental regulations. The paper discusses background of the problem, fault tree methodology, two case descriptions, fault tree development, quench tower subsystem failure, component reliability data, quantitative fault tree analysis, and emissions comparison.
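
    The core comparison rests on elementary redundancy arithmetic: a unit's steady-state availability is MTBF/(MTBF + MTTR), and two independent units in parallel are unavailable only when both are down. With hypothetical numbers (the paper's fault trees account for far more components, plus human error):

      # Single vs. parallel-redundant unit availability (illustrative numbers).
      mtbf, mttr = 8000.0, 48.0            # hours between failures / to repair
      a_one = mtbf / (mtbf + mttr)         # single-unit availability
      a_two = 1.0 - (1.0 - a_one) ** 2     # two independent units in parallel
      print(a_one, a_two)                  # ~0.99404 vs ~0.99996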

  16. Reliability and the system engineer

    SciTech Connect

    Elia, F.A. Jr. )

    1988-01-01

    Today's system engineers must be able to predict the reliability of systems to ensure that reliability goals are met before finalizing system designs or design modifications. This paper presents an updated view of the role of the system engineer in reliability analysis. References are provided for the tools available to the system engineer to accomplish these goals. These tools include computer programs for reliability analysis, applicable codes and standards, and databases for component reliability and system event frequency data. The benefits of this approach to reliability analysis are discussed in terms of nuclear and chemical plant safety. Also discussed is the need for the system engineer to assimilate normal system operating requirements, test requirements, code requirements, and human factors, as well as system transients.

  17. Nuclear weapon reliability evaluation methodology

    SciTech Connect

    Wright, D.L.

    1993-06-01

    This document provides an overview of those activities that are normally performed by Sandia National Laboratories to provide nuclear weapon reliability evaluations for the Department of Energy. These reliability evaluations are first provided as a prediction of the attainable stockpile reliability of a proposed weapon design. Stockpile reliability assessments are provided for each weapon type as the weapon is fielded and are continuously updated throughout the weapon stockpile life. The reliability predictions and assessments depend heavily on data from both laboratory simulation and actual flight tests. An important part of the methodology is the set of review opportunities that occur throughout the entire process, which assure a consistent approach and appropriate use of the data for reliability evaluation purposes.

  18. Reliability of Wireless Sensor Networks

    PubMed Central

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2014-01-01

    Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (by increasing the network lifetime) and increase the reliability of the network (by improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability, but they significantly increase the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs considering the battery level as a key factor. Moreover, this model is based on routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of the power consumption on the reliability of WSNs. PMID:25157553
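
    The basic structure of such a model is easy to sketch: a packet survives a path only if every node on it works, and a multipath strategy succeeds if at least one path works. Below, a battery factor scales each node's reliability; all values are invented, and a real WSN model (including the paper's) conditions on the routing algorithm in use:

      # Multipath delivery reliability sketch (Python/NumPy).
      import numpy as np

      def path_reliability(node_r):
          """All nodes on the path must work."""
          return float(np.prod(node_r))

      def multipath_reliability(paths):
          """At least one (independent) path must work."""
          return 1.0 - float(np.prod([1.0 - path_reliability(p) for p in paths]))

      battery = 0.8                              # hypothetical battery derating
      paths = [[0.99 * battery, 0.98 * battery, 0.97 * battery],
               [0.96 * battery, 0.95 * battery]]
      print(multipath_reliability(paths))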

  20. CONTAMINANT TRANSPORT IN PARALLEL FRACTURED MEDIA: SUDICKY AND FRIND REVISITED

    EPA Science Inventory

    This paper is concerned with a modified, nondimensional form of the parallel fracture, contaminant transport model of Sudicky and Frind (1982). The modifications include the boundary condition at the fracture wall, expressed by a parameter, and the power-law relationship between...

  2. Hierarchical parallel computer architecture defined by computational multidisciplinary mechanics

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug; Johnson, Keith

    1989-01-01

    The goal is to develop an architecture for parallel processors enabling optimal handling of the multi-disciplinary computation of fluid-solid simulations employing finite element and difference schemes. The goals, philosophical and modeling directions, static and dynamic poly trees, example problems, interpolative reduction, and the impact on solvers are shown in viewgraph form.

  3. Parallel processing spacecraft communication system

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)

    1998-01-01

    An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used; and the one which is used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID. Each channel ID can be separately processed in parallel. This obviates the problem of waiting for error correction processing. If that channel number is zero, however, it indicates that the frame of data represents a critical command only. That data is handled in a special way, independent of the software. Otherwise, the processed data is further handled using special double-buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.

  4. Parallel multiplex laser feedback interferometry

    SciTech Connect

    Zhang, Song; Tan, Yidong; Zhang, Shulian

    2013-12-15

    We present a parallel multiplex laser feedback interferometer based on spatial multiplexing which avoids the signal crosstalk of earlier feedback interferometers. The interferometer outputs two close parallel laser beams, whose frequencies are simultaneously shifted by 2Ω by two acousto-optic modulators. A static reference mirror is inserted into one of the optical paths as the reference optical path. The other beam impinges on the target as the measurement optical path. Phase variations of the two feedback laser beams are simultaneously measured through heterodyne demodulation with two different detectors. Their subtraction accurately reflects the target displacement. Under typical room conditions, experimental results show a resolution of 1.6 nm and accuracy of 7.8 nm within the range of 100 μm.

  5. Parallel supercomputing with commodity components

    NASA Technical Reports Server (NTRS)

    Warren, M. S.; Goda, M. P.; Becker, D. J.

    1997-01-01

    We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 × 10¹⁵ floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

  6. Instruction-level parallel processing.

    PubMed

    Fisher, J A; Rau, R

    1991-09-13

    The performance of microprocessors has increased steadily over the past 20 years at a rate of about 50% per year. This is the cumulative result of architectural improvements as well as increases in circuit speed. Moreover, this improvement has been obtained in a transparent fashion, that is, without requiring programmers to rethink their algorithms and programs, thereby enabling the tremendous proliferation of computers that we see today. To continue this performance growth, microprocessor designers have incorporated instruction-level parallelism (ILP) into new designs. ILP utilizes the parallel execution of the lowest level computer operations (adds, multiplies, loads, and so on) to increase performance transparently. The use of ILP promises to make possible, within the next few years, microprocessors whose performance is many times that of a CRAY-1S. This article provides an overview of ILP, with an emphasis on ILP architectures (superscalar, VLIW, and dataflow processors) and the compiler techniques necessary to make ILP work well. PMID:17831442

  7. A generalized parallel replica dynamics

    NASA Astrophysics Data System (ADS)

    Binder, Andrew; Lelièvre, Tony; Simpson, Gideon

    2015-03-01

    Metastability is a common obstacle to performing long molecular dynamics simulations. Many numerical methods have been proposed to overcome it. One method is parallel replica dynamics, which relies on the rapid convergence of the underlying stochastic process to a quasi-stationary distribution. Two requirements for applying parallel replica dynamics are knowledge of the time scale on which the process converges to the quasi-stationary distribution and a mechanism for generating samples from this distribution. By combining a Fleming-Viot particle system with convergence diagnostics to simultaneously identify when the process converges while also generating samples, we can address both points. This variation on the algorithm is illustrated with various numerical examples, including those with entropic barriers and the 2D Lennard-Jones cluster of seven atoms.

  8. All-exchanges parallel tempering.

    PubMed

    Calvo, F

    2005-09-22

    An alternative exchange strategy for parallel tempering simulations is introduced. Instead of attempting to swap configurations between two randomly chosen but adjacent replicas, the acceptance probabilities of all possible swap moves are calculated a priori. One specific swap move is then selected according to its probability and enforced. The efficiency of the method is illustrated first on the case of two Lennard-Jones (LJ) clusters containing 13 and 31 atoms, respectively. The convergence of the caloric curve is seen to be at least twice as fast as in conventional parallel tempering simulations, especially for the difficult case of LJ31. Further evidence for an improved efficiency is reported on the ergodic measure introduced by Mountain and Thirumalai [J. Phys. Chem. 93, 6975 (1989)], calculated here for LJ13 close to the melting point. Finally, tests on two simple spin systems indicate that the method should be particularly useful when a limited number of replicas are available. PMID:16392474
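
    The selection step described above can be sketched directly: compute a Metropolis acceptance factor for every replica pair, then draw one pair with probability proportional to its factor. The sketch below (Python/NumPy) is schematic; in particular, it uses a simple proportional draw and leaves the detailed-balance bookkeeping of the actual method to the paper:

      # All-exchanges swap selection sketch (Python/NumPy).
      import numpy as np

      rng = np.random.default_rng(1)

      def select_swap(betas, energies):
          """Return one (i, j) replica pair, drawn with probability
          proportional to its Metropolis swap acceptance factor."""
          pairs, weights = [], []
          n = len(betas)
          for i in range(n - 1):
              for j in range(i + 1, n):
                  acc = min(1.0, np.exp((betas[i] - betas[j])
                                        * (energies[i] - energies[j])))
                  pairs.append((i, j))
                  weights.append(acc)
          w = np.asarray(weights)
          if w.sum() == 0.0:
              return None                       # no swap is acceptable
          return pairs[rng.choice(len(pairs), p=w / w.sum())]

      print(select_swap(betas=[1.0, 0.8, 0.6, 0.4],
                        energies=[-50.0, -46.0, -43.0, -41.0]))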

  9. Parallel processing via a dual olfactory pathway in the honeybee.

    PubMed

    Brill, Martin F; Rosenbaum, Tobias; Reus, Isabelle; Kleineidam, Christoph J; Nawrot, Martin P; Rössler, Wolfgang

    2013-02-01

    In their natural environment, animals face complex and highly dynamic olfactory input. Thus vertebrates as well as invertebrates require fast and reliable processing of olfactory information. Parallel processing has been shown to improve processing speed and power in other sensory systems and is characterized by extraction of different stimulus parameters along parallel sensory information streams. Honeybees possess an elaborate olfactory system with unique neuronal architecture: a dual olfactory pathway comprising a medial projection-neuron (PN) antennal lobe (AL) protocerebral output tract (m-APT) and a lateral PN AL output tract (l-APT) connecting the olfactory lobes with higher-order brain centers. We asked whether this neuronal architecture serves parallel processing and employed a novel technique for simultaneous multiunit recordings from both tracts. The results revealed response profiles from a high number of PNs of both tracts to floral, pheromonal, and biologically relevant odor mixtures tested over multiple trials. PNs from both tracts responded to all tested odors, but with different characteristics indicating parallel processing of similar odors. Both PN tracts were activated by widely overlapping response profiles, which is a requirement for parallel processing. The l-APT PNs had broad response profiles suggesting generalized coding properties, whereas the responses of m-APT PNs were comparatively weaker and less frequent, indicating higher odor specificity. Comparison of response latencies within and across tracts revealed odor-dependent latencies. We suggest that parallel processing via the honeybee dual olfactory pathway provides enhanced odor processing capabilities serving sophisticated odor perception and olfactory demands associated with a complex olfactory world of this social insect. PMID:23392673

  10. Efficient, massively parallel eigenvalue computation

    NASA Technical Reports Server (NTRS)

    Huo, Yan; Schreiber, Robert

    1993-01-01

    In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.

  11. Parallel strategies for SAR processing

    NASA Astrophysics Data System (ADS)

    Segoviano, Jesus A.

    2004-12-01

    This article proposes a series of strategies for speeding up the computer processing of the Synthetic Aperture Radar (SAR) signal, following the three usual lines of action for accelerating any computer program: optimizing the data structures, optimizing the application architecture, and improving the hardware. For the first of these, the data structures usually employed in SAR processing are examined and parallel alternatives are proposed, together with the way the parallelization of the algorithms used in the process is implemented. For the second, the parallel application architecture classifies processes as fine or coarse grained; these are assigned to individual processors or divided among several processors, each in its corresponding architecture. For the third, the hardware platforms on which the SAR process is implemented are studied, including shared-memory multiprocessors and distributed-memory multicomputers. A comparison between them gives guidelines for obtaining maximum throughput with minimum latency and maximum effectiveness with minimum cost, all with limited complexity. It is concluded that processing the algorithms in a GNU/Linux environment on a Beowulf cluster platform offers, under certain conditions, the best compromise between performance and cost, and promises the greatest development in the coming years for computationally demanding Synthetic Aperture Radar applications.

  12. Parallel Processing in Combustion Analysis

    NASA Technical Reports Server (NTRS)

    Schunk, Richard Gregory; Chung, T. J.

    2000-01-01

    The objective of this research is to demonstrate the application of the Flow-field Dependent Variation (FDV) method to a problem of current interest in supersonic chemical combustion. Due in part to the stiffness of the chemical reactions, the solution of such problems on unstructured three dimensional grids often dictates the use of parallel computers. Preliminary results for the injection of a supersonic hydrogen stream into vitiated air are presented.

  13. Parallel Power Grid Simulation Toolkit

    Energy Science and Technology Software Center (ESTSC)

    2015-09-14

    ParGrid is a 'wrapper' that integrates a coupled power grid simulation toolkit, consisting of a library to manage the synchronization and communication of independent simulations. The library included in ParGrid, named FSKIT, is intended to support the coupling of multiple continuous and discrete event parallel simulations. The code is designed using modern object-oriented C++ methods, utilizing C++11 and current Boost libraries to ensure compatibility with multiple operating systems and environments.

  14. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

  15. A fourth generation reliability predictor

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Martensen, Anna L.

    1988-01-01

    A reliability/availability predictor computer program has been developed and is currently being beta-tested by over 30 US companies. The computer program is called the Hybrid Automated Reliability Predictor (HARP). HARP was developed to fill an important gap in reliability assessment capabilities. This gap was manifested through the use of its third-generation cousin, the Computer-Aided Reliability Estimation (CARE III) program, over a six-year development period and an additional three-year period during which CARE III has been in the public domain. The accumulated experience of the over 30 establishments now using CARE III was used in the development of the HARP program.

  16. Task parallelism and high-performance languages

    SciTech Connect

    Foster, I.

    1996-03-01

    The definition of High Performance Fortran (HPF) is a significant event in the maturation of parallel computing: it represents the first parallel language that has gained widespread support from vendors and users. The subject of this paper is the incorporation of support for task parallelism. The term task parallelism refers to the explicit creation of multiple threads of control, or tasks, which synchronize and communicate under programmer control. Task and data parallelism are complementary rather than competing programming models. While task parallelism is more general and can be used to implement algorithms that are not amenable to data-parallel solutions, many problems can benefit from a mixed approach, with for example a task-parallel coordination layer integrating multiple data-parallel computations. Other problems admit to both data- and task-parallel solutions, with the better solution depending on machine characteristics, compiler performance, or personal taste. For these reasons, we believe that a general-purpose high-performance language should integrate both task- and data-parallel constructs. The challenge is to do so in a way that provides the expressivity needed for applications, while preserving the flexibility and portability of a high-level language. In this paper, we examine and illustrate the considerations that motivate the use of task parallelism. We also describe one particular approach to task parallelism in Fortran, namely the Fortran M extensions. Finally, we contrast Fortran M with other proposed approaches and discuss the implications of this work for task parallelism and high-performance languages.

  17. Scalable Performance Environments for Parallel Systems

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Olson, Robert D.; Aydt, Ruth A.; Madhyastha, Tara M.; Birkett, Thomas; Jensen, David W.; Nazief, Bobby A. A.; Totty, Brian K.

    1991-01-01

    As parallel systems expand in size and complexity, the absence of performance tools for these parallel systems exacerbates the already difficult problems of application program and system software performance tuning. Moreover, given the pace of technological change, we can no longer afford to develop ad hoc, one-of-a-kind performance instrumentation software; we need scalable, portable performance analysis tools. We describe an environment prototype based on the lessons learned from two previous generations of performance data analysis software. Our environment prototype contains a set of performance data transformation modules that can be interconnected in user-specified ways. It is the responsibility of the environment infrastructure to hide details of module interconnection and data sharing. The environment is written in C++ with the graphical displays based on X windows and the Motif toolkit. It allows users to interconnect and configure modules graphically to form an acyclic, directed data analysis graph. Performance trace data are represented in a self-documenting stream format that includes internal definitions of data types, sizes, and names. The environment prototype supports the use of head-mounted displays and sonic data presentation in addition to the traditional use of visual techniques.

  18. Human reliability assessment: tools for law enforcement

    NASA Astrophysics Data System (ADS)

    Ryan, Thomas G.; Overlin, Trudy K.

    1997-01-01

    This paper suggests ways in which human reliability analysis (HRA) can assist the United States Justice System, and more specifically law enforcement, in enhancing the reliability of the process from evidence gathering through adjudication. HRA is an analytic process for identifying, describing, quantifying, and interpreting the state of human performance, and for developing and recommending enhancements based on the results of individual HRAs. It also draws on lessons learned from compilations of several HRAs. Given the high legal standards the Justice System is bound to, human errors that might appear trivial in other venues can make the difference between a successful and an unsuccessful prosecution. HRA has made a major contribution to the efficiency, favorable cost-benefit ratio, and overall success of many enterprises where humans interface with sophisticated technologies, such as the military, ground transportation, chemical and oil production, nuclear power generation, commercial aviation, and space flight. Each of these enterprises presents similar challenges to the humans responsible for executing actions and action sequences, especially where problem solving and decision making are concerned. Nowhere are humans confronted with problem solving and decision making to a greater degree than are the diverse individuals and teams responsible for arrest and adjudication in criminal proceedings. This paper concludes that because of the parallels between the aforementioned technologies and the adjudication process, especially crime scene evidence gathering, there is reason to believe that the HRA technology, developed and enhanced in other applications, can be transferred to the Justice System with minimal cost and significant payoff.

  19. Permission Forms

    ERIC Educational Resources Information Center

    Zirkel, Perry A.

    2005-01-01

    The prevailing practice in public schools is to routinely require permission or release forms for field trips and other activities that pose potential for liability. The legal status of such forms varies, but they are generally considered to be neither rock-solid protection nor legally valueless in terms of immunity. The following case and the…

  1. Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes

    SciTech Connect

    Parsons, I D; Solberg, J M

    2006-02-03

    This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.

  2. Performance and Scalability Evaluation of the Ceph Parallel File System

    SciTech Connect

    Wang, Feiyi; Nelson, Mark; Oral, H Sarp; Settlemyer, Bradley W; Atchley, Scott; Caldwell, Blake A; Hill, Jason J

    2013-01-01

    Ceph is an open-source, emerging parallel distributed file and storage system technology. By design, Ceph assumes it is running on unreliable, commodity storage and network hardware, and it provides reliability and fault tolerance through controlled object placement and data replication. We evaluated the Ceph technology for scientific high-performance computing (HPC) environments. This paper presents our evaluation methodology, experiments, results, and observations, mostly from parallel I/O performance and scalability perspectives. Our work made two unique contributions. First, our evaluation is performed under a realistic setup for a large-scale capability HPC environment using a commercial high-end storage system. Second, our path of investigation, tuning efforts, and findings made direct contributions to Ceph's development and improved its code quality, scalability, and performance. These changes should also benefit both the Ceph and HPC communities at large. Throughout the evaluation, we observed that Ceph is still an evolving technology under fast-paced development, showing great promise.

  3. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.

  4. Software reliability - Measures and effects in flight critical digital avionics systems

    NASA Technical Reports Server (NTRS)

    Dunn, William R.

    1986-01-01

    The paper discusses software reliability as it applies particularly to design and evaluation of flight-critical digital avionics systems. Measures of software reliability, measurement methods and reliability (macro-) models are discussed. Recent work assessing their accuracy in predicting software errors in 'fly-by-wire' Newtonian applications is presented. Additional, detailed topics are discussed including software error distributions (e.g. catastrophic vs. noncatastrophic) and the effects of system growth/maturity on reliability improvement. In practical flight-critical digital applications, software reliability improvement is sought through use of parallel, redundant software (i.e. N-version programming) or backup software that can be invoked in the event of (primary) software failure. Achievable reliability levels are however highly sensitive to common-mode specification and programming errors. Recent data correlating these errors with net software reliability are discussed.

  5. Highly reliable PLC systems

    SciTech Connect

    Beckman, L.V.

    1995-03-01

    Today's control engineers are afforded many options when designing microprocessor based systems for safety applications. The use of some form of redundancy is typical, but the final selection must match the requirements of the application. Should the system be fail safe or fault tolerant? Is safety the overriding consideration, or is production a concern as well? Are redundant PLCs (Programmable Logic Controllers) adequate, or should a system specifically designed for safety applications be utilized? There is a considerable effort in progress, both in the USA and in Europe, to establish guidelines and standards which match the safety integrity of the system with the degree of risk inherent in the application. This paper is intended to provide an introduction to the subject, and explore some of the microprocessor based alternatives available to the control or safety engineer.

  6. Catalytic Parallel Kinetic Resolution under Homogeneous Conditions

    PubMed Central

    Duffey, Trisha A.; MacKay, James A.; Vedejs, Edwin

    2010-01-01

    Two complementary chiral catalysts, the phosphine 8d and the DMAP-derived ent-23b, are used simultaneously to selectively activate one of a mixture of two different achiral anhydrides as acyl donors under homogeneous conditions. The resulting activated intermediates 25 and 26 react with the racemic benzylic alcohol 5 to form enantioenriched esters (R)-24 and (S)-17 by fully catalytic parallel kinetic resolution (PKR). The aroyl ester (R)-24 is obtained with near-ideal enantioselectivity for the PKR process, but (S)-17 is contaminated by ca. 8% of the minor enantiomer (R)-17 resulting from a second pathway via formation of mixed anhydride 24 and its activation by 8d. PMID:20557113

  7. A parallel algorithm for implicit depletant simulations

    NASA Astrophysics Data System (ADS)

    Glaser, Jens; Karas, Andrew S.; Glotzer, Sharon C.

    2015-11-01

    We present an algorithm to simulate the many-body depletion interaction between anisotropic colloids in an implicit way, integrating out the degrees of freedom of the depletants, which we treat as an ideal gas. Because the depletant particles are statistically independent and the depletion interaction is short-ranged, depletants are randomly inserted in parallel into the excluded volume surrounding a single translated and/or rotated colloid. A configurational bias scheme is used to enhance the acceptance rate. The method is validated and benchmarked both on multi-core processors and graphics processing units for the case of hard spheres, hemispheres, and discoids. With depletants, we report novel cluster phases in which hemispheres first assemble into spheres, which then form ordered hcp/fcc lattices. The method is significantly faster than any method that tracks depletants explicitly and lacks cluster moves, for systems of colloid packing fraction φc < 0.50, and it additionally enables simulation of the fluid-solid transition.

  8. Interrelation Between Safety Factors and Reliability

    NASA Technical Reports Server (NTRS)

    Elishakoff, Isaac; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    An evaluation was performed to establish the relationships between safety factors and reliability. Results obtained show that the use of the safety factor is not contradictory to the employment of the probabilistic methods. In many cases the safety factors can be directly expressed by the required reliability levels. However, there is a major difference that must be emphasized: whereas the safety factors are allocated in an ad hoc manner, the probabilistic approach offers a unified mathematical framework. The establishment of the interrelation between the concepts opens an avenue to specify safety factors based on reliability. In cases where there are several modes of failure, the allocation of safety factors should be based on having the same reliability associated with each failure mode. This immediately suggests that by the probabilistic methods the existing over-design or under-design can be eliminated. The report includes three parts: Part 1-Random Actual Stress and Deterministic Yield Stress; Part 2-Deterministic Actual Stress and Random Yield Stress; Part 3-Both Actual Stress and Yield Stress Are Random.
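
    As a worked illustration of the interrelation (notation mine, not necessarily the report's): if both the actual stress S and the yield stress Y are normally distributed, the reliability and the central safety factor are linked through the reliability index,

      \beta = \frac{\mu_Y - \mu_S}{\sqrt{\sigma_Y^2 + \sigma_S^2}},
      \qquad R = \Phi(\beta),
      \qquad n = \frac{\mu_Y}{\mu_S}
      \;\Rightarrow\;
      \beta = \frac{n - 1}{\sqrt{n^2 V_Y^2 + V_S^2}},

    where V_Y and V_S are the coefficients of variation and \Phi is the standard normal distribution function. Prescribing a required reliability R fixes \beta and hence determines the safety factor n, rather than allocating it ad hoc.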

  9. Stirling Convertor Fasteners Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Kovacevich, Tiodor; Schreiber, Jeffrey G.

    2006-01-01

    Onboard Radioisotope Power Systems (RPS) being developed for NASA's deep-space science and exploration missions require reliable operation for up to 14 years and beyond. Stirling power conversion is a candidate for use in an RPS because it offers a multifold increase in the conversion efficiency of heat to electric power and reduced inventory of radioactive material. Structural fasteners are responsible for maintaining the structural integrity of the Stirling power convertor, which is critical to ensure reliable performance during the entire mission. The design of fasteners involves variables related to fabrication, manufacturing, the behavior of the fastener and joining-part materials, the structural geometry of the joining components, the size and spacing of fasteners, mission loads, boundary conditions, etc. These variables have inherent uncertainties, which need to be accounted for in the reliability assessment. This paper describes these uncertainties along with a methodology to quantify the reliability, and provides results of the analysis in terms of quantified reliability and sensitivity of Stirling power conversion reliability to the design variables. Quantification of the reliability includes both structural and functional aspects of the joining components. Based on the results, the paper also describes guidelines to improve the reliability and verification testing.

  10. The Reliability of Density Measurements.

    ERIC Educational Resources Information Center

    Crothers, Charles

    1978-01-01

    Data from a land-use study of small- and medium-sized towns in New Zealand are used to ascertain the relationship between official and effective density measures. It was found that the reliability of official measures of density is very low overall, although reliability increases with community size. (Author/RLV)

  11. Computer-Aided Reliability Estimation

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.; Stiffler, J. J.; Bryant, L. A.; Petersen, P. L.

    1986-01-01

    CARE III (Computer-Aided Reliability Estimation, Third Generation) helps estimate reliability of complex, redundant, fault-tolerant systems. Program specifically designed for evaluation of fault-tolerant avionics systems. However, CARE III general enough for use in evaluation of other systems as well.

  12. Satellites: Reliability and operation security

    NASA Astrophysics Data System (ADS)

    Giudicelli, Philippe; Demarquilly, Dominique

    1993-11-01

    The importance of safety and reliability analysis is underlined. It is a crucial point in designing systems, such as satellites, in which failures cannot easily be repaired and inoperative equipment cannot be replaced. A system's reliability and security can be assessed by computerized simulations, which also allow solutions to be found. The major difficulty concerning satellites is searching for errors in nonelectronic equipment.

  13. The Verification-based Analysis of Reliable Multicast Protocol

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1996-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  14. Fatigue reliability assessment of correlated welded web-frame joints

    NASA Astrophysics Data System (ADS)

    Huang, W.; Garbatov, Y.; Guedes Soares, C.

    2014-03-01

    The objective of this work is to analyze the fatigue reliability of complex welded structures composed of multiple web-frame joints, accounting for correlation effects. A three-dimensional finite element model using 20-node solid elements is generated. A linear elastic finite element analysis was performed, hotspot stresses in a web-frame joint were analyzed, and fatigue damage was quantified employing the S-N approach. The statistical descriptors of the fatigue life of a non-correlated web-frame joint containing several critical hotspots were estimated. The fatigue reliability of a web-frame joint was modeled as a series system of correlated components using the Ditlevsen bounds. The fatigue reliability of the entire welded structure with multiple web-frame joints, modeled as a parallel system of non-correlated web-frame joints, was also calculated.
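
    For reference, the Ditlevsen (narrow) bounds used for the series system of correlated hotspots bound the system failure probability P_f in terms of the individual failure events F_i and their pairwise intersections:

      P(F_1) + \sum_{i=2}^{n} \max\!\Big(0,\; P(F_i) - \sum_{j=1}^{i-1} P(F_i \cap F_j)\Big)
      \;\le\; P_f \;\le\;
      \sum_{i=1}^{n} P(F_i) - \sum_{i=2}^{n} \max_{j<i} P(F_i \cap F_j).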

  15. Mapping Pixel Windows To Vectors For Parallel Processing

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    1996-01-01

    Mapping performed by matrices of transistor switches. Arrays of transistor switches devised for use in forming simultaneous connections from square subarray (window) of n x n pixels within electronic imaging device containing np x np array of pixels to linear array of n² input terminals of electronic neural network or other parallel-processing circuit. Method helps to realize potential for rapidity in parallel processing for such applications as enhancement of images and recognition of patterns. In providing simultaneous connections, overcomes timing bottleneck of older multiplexing, serial-switching, and sample-and-hold methods.
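
    In software terms (rather than transistor switches), the mapping amounts to copying an n x n window of the pixel array into a flat vector of n² values; the sketch below is a minimal, hypothetical illustration:

      #include <stdio.h>

      #define NP 8   /* image is NP x NP pixels */
      #define N  3   /* window is N x N */

      /* Copy the N x N window with top-left corner (row, col) into a flat
         vector of N*N values -- the software analogue of the simultaneous
         switch connections to the parallel processor's input terminals. */
      static void map_window(const int img[NP][NP], int row, int col,
                             int vec[N * N]) {
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++)
                  vec[i * N + j] = img[row + i][col + j];
      }

      int main(void) {
          int img[NP][NP], vec[N * N];
          for (int i = 0; i < NP; i++)
              for (int j = 0; j < NP; j++)
                  img[i][j] = i * NP + j;
          map_window(img, 2, 3, vec);
          for (int k = 0; k < N * N; k++)
              printf("%d ", vec[k]);
          printf("\n");
          return 0;
      }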

  16. Parallel Monte Carlo Simulation for control system design

    NASA Technical Reports Server (NTRS)

    Schubert, Wolfgang M.

    1995-01-01

    The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.
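
    The core of the cost function is a Monte Carlo estimate of a violation probability; because the samples are independent, they distribute naturally across processors. A minimal single-threaded sketch (the stability test and all names here are hypothetical placeholders, not the report's code):

      #include <stdio.h>
      #include <stdlib.h>

      /* Placeholder for a real closed-loop analysis of one sampled plant. */
      static int violates(double gain, double plant_param) {
          return gain * plant_param < 1.0;  /* hypothetical criterion */
      }

      /* Estimate P(design-metric violation) for a candidate controller by
         sampling an uncertain plant parameter; this estimate is the cost
         the genetic algorithm minimizes. */
      static double mc_cost(double gain, int n_samples) {
          int violations = 0;
          for (int i = 0; i < n_samples; i++) {
              double p = 0.5 + 1.5 * ((double)rand() / RAND_MAX);
              violations += violates(gain, p);
          }
          return (double)violations / n_samples;
      }

      int main(void) {
          srand(42);
          printf("estimated P(violation) = %.3f\n", mc_cost(1.2, 100000));
          return 0;
      }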

  17. Reliability-based design optimization using efficient global reliability analysis.

    SciTech Connect

    Bichon, Barron J.; Mahadevan, Sankaran; Eldred, Michael Scott

    2010-05-01

    Finding the optimal (lightest, least expensive, etc.) design for an engineered component that meets or exceeds a specified level of reliability is a problem of obvious interest across a wide spectrum of engineering fields. Various methods for this reliability-based design optimization problem have been proposed. Unfortunately, this problem is rarely solved in practice because, regardless of the method used, solving the problem is too expensive or the final solution is too inaccurate to ensure that the reliability constraint is actually satisfied. This is especially true for engineering applications involving expensive, implicit, and possibly nonlinear performance functions (such as large finite element models). The Efficient Global Reliability Analysis method was recently introduced to improve both the accuracy and efficiency of reliability analysis for this type of performance function. This paper explores how this new reliability analysis method can be used in a design optimization context to create a method of sufficient accuracy and efficiency to enable the use of reliability-based design optimization as a practical design tool.
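
    The underlying problem statement (in generic notation, mine rather than the paper's) is an optimization with a probabilistic constraint:

      \min_{d} \; f(d)
      \quad \text{subject to} \quad
      P\big[g_i(X, d) \le 0\big] \le p_{f,i}^{\mathrm{target}}, \quad i = 1, \dots, m,

    where d are design variables, X are random variables, and each g_i is an implicit (e.g., finite element) performance function; the expense of estimating the probabilities P[g_i <= 0] at every candidate d is what efficient global reliability analysis is meant to reduce.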

  18. Reliably assessing prediction reliability for high dimensional QSAR data.

    PubMed

    Huang, Jianping; Fan, Xiaohui

    2013-02-01

    Predictability and prediction reliability are of utmost importance in characterizing a good quantitative structure-activity relationship (QSAR) model. However, validation methods are insufficient to guarantee the prediction reliability of QSAR models. Moreover, high-dimensional samples also pose a great challenge to traditional methods in terms of predictive power. Therefore, this study presents a predictive classifier (i.e., TreeEC) that can assess prediction reliability with high confidence, especially for high-dimensional QSAR data. Two approaches for assessing prediction reliability are provided, i.e., applicability domain and prediction confidence. We demonstrate that the applicability domain has difficulty guaranteeing the models' prediction reliability, where samples close to the domain center are often predicted more poorly than those outside the domain. Instead, prediction confidence is more promising for assessing prediction reliability. Based on a large data set assessed by prediction confidence, external samples assessed with confidence greater than 95% can be reliably predicted with an accuracy of 94%, in contrast to the average accuracy of 84%. We also illustrate that TreeEC is less affected by high dimensionality than other popular methods according to 11 public data sets. A free version of TreeEC with a user-friendly interface can be downloaded from http://pharminfo.zju.edu.cn/computation/TreeEC/TreeEC.html. PMID:23250826

  19. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

  20. Photovoltaic performance and reliability workshop

    SciTech Connect

    Mrig, L.

    1993-12-01

    This workshop was the sixth in a series of workshops sponsored by NREL/DOE under the general subject of photovoltaic testing and reliability during the period 1986--1993. PV performance and PV reliability are at least as important as PV cost, if not more. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in the photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together to exchange the technical knowledge and field experience as related to current information in this evolving field of PV reliability. The papers presented here reflect this effort since the last workshop held in September, 1992. The topics covered include: cell and module characterization, module and system testing, durability and reliability, system field experience, and standards and codes.

  1. Global Arrays Parallel Programming Toolkit

    SciTech Connect

    Nieplocha, Jaroslaw; Krishnan, Manoj Kumar; Palmer, Bruce J.; Tipparaju, Vinod; Harrison, Robert J.; Chavarría-Miranda, Daniel

    2011-01-01

    The two predominant classes of programming models for parallel computing are distributed memory and shared memory. Both shared memory and distributed memory models have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic can have a negative impact on performance and scalability. Careful code restructuring to increase data reuse and replacing fine-grain load/stores with block access to shared data can address the problem and yield performance for shared memory that is competitive with message passing. However, this performance comes at the cost of compromising the ease of use that the shared memory model advertises. Distributed memory models, such as message passing or one-sided communication, offer performance and scalability but they are difficult to program. The Global Arrays toolkit attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed by the programmer. This management is achieved by calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be specified by the programmer and hence managed. GA is related to global address space languages such as UPC, Titanium, and, to a lesser extent, Co-Array Fortran. In addition, by providing a set of data-parallel operations, GA is also related to data-parallel languages such as HPF, ZPL, and Data Parallel C. However, the Global Arrays programming model is implemented as a library that works with most languages used for technical computing and does not rely on compiler technology for achieving parallel efficiency. It also supports a combination of task- and data-parallelism and is available as an extension of the message-passing (MPI) model. The GA model exposes to the programmer the hierarchical memory of modern high-performance computer systems and, by recognizing the communication overhead for remote data transfer, promotes data reuse and locality of reference. Virtually all scalable architectures possess non-uniform memory access characteristics that reflect their multi-level memory hierarchies. These hierarchies typically comprise processor registers, multiple levels of cache, local memory, and remote memory. Over time, both the number of levels and the cost (in processor cycles) of accessing deeper levels have been increasing. It is important for any scalable programming model to address the memory hierarchy, since it is critical to the efficient execution of scalable applications.
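
    A minimal sketch of the put/get style the abstract describes, using the GA C bindings; treat the exact call signatures as assumptions to be checked against the toolkit's documentation:

      #include <mpi.h>
      #include "ga.h"
      #include "macdecls.h"

      int main(int argc, char **argv) {
          MPI_Init(&argc, &argv);
          GA_Initialize();

          /* create a 2-D distributed double array; chunk = -1 lets GA
             choose the distribution */
          int dims[2] = {1000, 1000}, chunk[2] = {-1, -1};
          int g_a = NGA_Create(C_DBL, 2, dims, "A", chunk);
          GA_Zero(g_a);

          /* each process can ask which block of the global array it owns,
             making data locality explicit and manageable ... */
          int lo[2], hi[2];
          NGA_Distribution(g_a, GA_Nodeid(), lo, hi);

          /* ... and can transfer any patch between the global address
             space and local storage */
          double buf[100];
          int plo[2] = {0, 0}, phi[2] = {9, 9}, ld[1] = {10};
          NGA_Get(g_a, plo, phi, buf, ld);  /* global -> local */
          GA_Sync();                        /* collective sync point */

          GA_Destroy(g_a);
          GA_Terminate();
          MPI_Finalize();
          return 0;
      }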

  2. Implementing clips on a parallel computer

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1987-01-01

    The C Language Integrated Production System (CLIPS) is a forward-chaining rule-based language that provides training and delivery for expert systems. Conceptually, rule-based languages have great potential for benefiting from the inherent parallelism of the algorithms that they employ. During each cycle of execution, a knowledge base of information is compared against a set of rules to determine if any rules are applicable. Parallelism also can be employed for use with multiple cooperating expert systems. To investigate the potential benefits of using a parallel computer to speed up the comparison of facts to rules in expert systems, a parallel version of CLIPS was developed for the FLEX/32, a large-grain parallel computer. The FLEX implementation takes a macroscopic approach in achieving parallelism by splitting whole sets of rules among several processors rather than by splitting the components of an individual rule among processors. The parallel CLIPS prototype demonstrates the potential advantages of integrating expert system tools with parallel computers.

  3. Parallelizing alternating direction implicit solver on GPUs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We present a parallel Alternating Direction Implicit (ADI) solver on GPUs. Our implementation significantly improves existing implementations in two aspects. First, we address the scalability issue of existing Parallel Cyclic Reduction (PCR) implementations by eliminating their hardware resource con...

  4. Parallel computational fluid dynamics - Implementations and results

    NASA Astrophysics Data System (ADS)

    Simon, Horst D.

    The present volume on parallel CFD discusses implementations on parallel machines, numerical algorithms for parallel CFD, and performance evaluation and computer science issues. Attention is given to a parallel algorithm for compressible flows through rotor-stator combinations, a massively parallel Euler solver for unstructured grids, a fast scheme to analyze 3D disk airflow on a parallel computer, and a block implicit multigrid solution of the Euler equations. Topics addressed include a 3D ADI algorithm on distributed memory multiprocessors, clustered element-by-element computations for fluid flow, hypercube FFT and the Fourier pseudospectral method, and an investigation of parallel iterative algorithms for CFD. Also discussed are fluid dynamics using interface methods on parallel processors, sorting for particle flow simulation on the connection machine, a large grain mapping method, and efforts toward a Teraflops capability for CFD.

  5. Do parallel beta-helix proteins have a unique Fourier transform infrared spectrum?

    PubMed Central

    Khurana, R; Fink, A L

    2000-01-01

    Several polypeptides have been found to adopt an unusual domain structure known as the parallel beta-helix. These domains are characterized by parallel beta-strands, three of which form a single parallel beta-helix coil, and lead to long, extended beta-sheets. We have used ATR-FTIR (attenuated total reflectance-Fourier transform infrared spectroscopy) to analyze the secondary structure of representative examples of this class of protein. Because the three-dimensional structures of parallel beta-helix proteins are unique, we initiated this study to determine if there was a corresponding unique FTIR signal associated with the parallel beta-helix conformation. Analysis of the amide I region, emanating from the carbonyl stretch vibration, reveals a strong absorbance band at 1638 cm(-1) in each of the parallel beta-helix proteins. This band is assigned to the parallel beta-sheet structure. However, components at this frequency are also commonly observed for beta-sheets in many classes of globular proteins. Thus we conclude that there is no unique infrared signature for parallel beta-helix structure. Additional contributions in the 1638 cm(-1) region, and at lower frequencies, were ascribed to hydrogen bonding between the coils in the loop/turn regions and amide side-chain interactions, respectively. A 13-residue peptide that forms fibrils and has been proposed to form beta-helical structure was also examined, and its FTIR spectrum was compared to that of the parallel beta-helix proteins. PMID:10653812

  6. Good form.

    PubMed

    Sorrel, Amy Lynn

    2015-03-01

    New standardized prior authorization forms for health care services and prescription drugs released by the Texas Department of Insurance promise to alleviate administrative busy work and its related costs. PMID:25761070

  7. Parallel Assembly of LIGA Components

    SciTech Connect

    Christenson, T.R.; Feddema, J.T.

    1999-03-04

    In this paper, a prototype robotic workcell for the parallel assembly of LIGA components is described. A Cartesian robot is used to press 386 and 485 micron diameter pins into a LIGA substrate and then place a 3-inch diameter wafer with LIGA gears onto the pins. Upward and downward looking microscopes are used to locate holes in the LIGA substrate, pins to be pressed in the holes, and gears to be placed on the pins. This vision system can locate parts within 3 microns, while the Cartesian manipulator can place the parts within 0.4 microns.

  8. Parallel Mapping Approaches for GNUMAP

    PubMed Central

    Clement, Nathan L.; Clement, Mark J.; Snell, Quinn; Johnson, W. Evan

    2013-01-01

    Mapping short next-generation reads to reference genomes is an important element in SNP calling and expression studies. A major limitation to large-scale whole-genome mapping is the large memory requirements for the algorithm and the long run-time necessary for accurate studies. Several parallel implementations have been performed to distribute memory on different processors and to equally share the processing requirements. These approaches are compared with respect to their memory footprint, load balancing, and accuracy. When using MPI with multi-threading, linear speedup can be achieved for up to 256 processors. PMID:23396612

  9. True Shear Parallel Plate Viscometer

    NASA Technical Reports Server (NTRS)

    Ethridge, Edwin; Kaukler, William

    2010-01-01

    This viscometer (which can also be used as a rheometer) is designed for use with liquids over a large temperature range. The device consists of horizontally disposed, similarly sized, parallel plates with a precisely known gap. The lower plate is driven laterally with a motor to apply shear to the liquid in the gap. The upper plate is freely suspended from a double-arm pendulum with a sufficiently long radius to reduce height variations during the swing to negligible levels. A sensitive load cell measures the shear force applied by the liquid to the upper plate. Viscosity is measured by taking the ratio of shear stress to shear rate.
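
    The measurement reduces to the Newtonian definition of viscosity: with shear force F on the upper plate of wetted area A, lower-plate speed v, and gap h,

      \eta = \frac{\tau}{\dot{\gamma}} = \frac{F / A}{v / h},

    so the load-cell reading and the drive speed give the viscosity directly once A and h are known.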

  10. The PARTY parallel runtime system

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Mirchandaney, Ravi; Smith, R. M.; Crowley, Kay; Nicol, D. M.

    1989-01-01

    The present automated system organizes the data and computational operations entailed by parallel problems in ways that optimize multiprocessor performance. General heuristics for partitioning program data and control are implemented by capturing and manipulating representations of a computation at run time. These heuristics are directed toward the dynamic identification and allocation of concurrent work in computations with irregular computational patterns. For problems with repetitive computation patterns, such as the iterative methods employed in scientific computation, an optimized static workload partitioning is computed.

  11. Introduction to the POKER parallel programming environment

    SciTech Connect

    Snyder, L.

    1983-01-01

    The POKER parallel programming environment is a graphics-based, interactive system for programming the configurable, highly parallel (CHiP) computer. Designed to support nearly all aspects of parallel programming in one integrated system, POKER has been implemented as a (≈35,000-line) C program on the VAX 11/780 under UNIX. It provides a number of novel features including graphics programming of parallel processor communication. 4 references.

  12. Parallel multi-computers and artificial intelligence

    SciTech Connect

    Uhr, L.

    1986-01-01

    This book examines the present state and future direction of multicomputer parallel architectures for artificial intelligence research and development of artificial intelligence applications. The book provides a survey of the large variety of parallel architectures, describing the current state of the art and suggesting promising architectures to produce artificial intelligence systems such as intelligent robots. This book integrates the artificial intelligence and parallel processing research areas and discusses parallel processing from the viewpoint of artificial intelligence.

  13. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility that allows rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  14. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.
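
    A minimal, generic illustration of the kind of nested OpenMP parallelism such directives express (this is ordinary OpenMP, not CAPO/NanosCompiler output):

      #include <omp.h>
      #include <stdio.h>

      int main(void) {
          omp_set_max_active_levels(2);        /* allow two nested levels */

          #pragma omp parallel num_threads(2)  /* outer level, e.g. zones */
          {
              int outer = omp_get_thread_num();
              #pragma omp parallel num_threads(3)  /* inner, loop level */
              {
                  printf("outer thread %d, inner thread %d\n",
                         outer, omp_get_thread_num());
              }
          }
          return 0;
      }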

  15. Parallel Computing Using Web Servers and "Servlets".

    ERIC Educational Resources Information Center

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common

  16. Coordination in serial-parallel image processing

    NASA Astrophysics Data System (ADS)

    Wójcik, Waldemar; Dubovoi, Vladymyr M.; Duda, Marina E.; Romaniuk, Ryszard S.; Yesmakhanova, Laura; Kozbakova, Ainur

    2015-12-01

    Serial-parallel systems are used to convert images. Controlling their operation gives rise to a coordination problem. The paper summarizes a model for coordinating resource allocation in relation to the task of synchronizing parallel processes; a genetic coordination algorithm is developed, and its adequacy is verified on a parallel image-processing task.

  17. Inductive Information Retrieval Using Parallel Distributed Computation.

    ERIC Educational Resources Information Center

    Mozer, Michael C.

    This paper reports on an application of parallel models to the area of information retrieval and argues that massively parallel, distributed models of computation, called connectionist, or parallel distributed processing (PDP) models, offer a new approach to the representation and manipulation of knowledge. Although this document focuses on

  18. Identifying, Quantifying, Extracting and Enhancing Implicit Parallelism

    ERIC Educational Resources Information Center

    Agarwal, Mayank

    2009-01-01

    The shift of the microprocessor industry towards multicore architectures has placed a huge burden on the programmers by requiring explicit parallelization for performance. Implicit Parallelization is an alternative that could ease the burden on programmers by parallelizing applications "under the covers" while maintaining sequential semantics…

  19. Parallel transport and band theory in crystals

    NASA Astrophysics Data System (ADS)

    Fruchart, Michel; Carpentier, David; Gawędzki, Krzysztof

    2014-06-01

    We show that different conventions for Bloch Hamiltonians on non-Bravais lattices correspond to different natural definitions of parallel transport of Bloch eigenstates. Generically the Berry curvatures associated with these parallel transports differ, while physical quantities are naturally related to a canonical choice of the parallel transport.
