Note: This page contains sample records for the topic parallel forms reliability from Science.gov.
While these samples are representative of the content of Science.gov,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of Science.gov
to obtain the most current and comprehensive results.
Last update: August 15, 2014.
1

Estimating Parallel Form Reliability from One Administration of a Criterion-Referenced Test: A Computer Program for Practitioners.  

ERIC Educational Resources Information Center

Explains Subkoviak's method for estimating alternate-form reliability from one administration of a criterion-referenced test and describes computer program that handles tests for large number of examinees and allows application of Subkoviak's technique. Concludes that program is superior to other methods since user can directly check…

Saltstone, Robert; And Others

1989-01-01
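
The single-administration idea above can be sketched in a few lines of Python. This is an illustrative reconstruction of Subkoviak's approach under a binomial error model with Kelley-regressed true scores, not the Saltstone program itself; the function name, cutoff, and KR-20 value are all assumptions.

    import numpy as np
    from scipy.stats import binom

    def subkoviak_agreement(scores, n_items, cutoff, kr20):
        """Estimate P(same mastery decision on two parallel forms)
        from a single administration (hypothetical helper)."""
        scores = np.asarray(scores, dtype=float)
        mean_prop = scores.mean() / n_items
        agreements = []
        for x in scores:
            # Kelley regression of each observed proportion toward the
            # group mean, using KR-20 as the reliability estimate.
            tau = kr20 * (x / n_items) + (1.0 - kr20) * mean_prop
            # P(classified a "master" on one form) under binomial error.
            p_master = 1.0 - binom.cdf(cutoff - 1, n_items, tau)
            # Consistent decision across two independent parallel forms.
            agreements.append(p_master**2 + (1.0 - p_master)**2)
        return float(np.mean(agreements))

    # Invented example: 30-item test, mastery cutoff 21, KR-20 = .85.
    rng = np.random.default_rng(0)
    print(subkoviak_agreement(rng.binomial(30, 0.7, 500), 30, 21, 0.85))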

2

Reliability of a Parallel Pipe Network  

NASA Technical Reports Server (NTRS)

The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.

Herrera, Edgar; Chamis, Christopher (Technical Monitor)

2001-01-01
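
The report's actual network data are not reproduced above, but the stated question lends itself to a compact Monte Carlo sketch. All distributions and thresholds below are invented for illustration, and head loss is simplified to H = R·Q².

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Uncertain design variables (hypothetical means and spreads).
    head = rng.normal(50.0, 2.5, n)          # head across the parallel lines [m]
    r1 = rng.lognormal(np.log(0.8), 0.1, n)  # resistance coefficient, line 1
    r2 = rng.lognormal(np.log(1.2), 0.1, n)  # resistance coefficient, line 2

    # Equal head loss across parallel lines: H = R * Q**2, so Q = sqrt(H / R).
    q1 = np.sqrt(head / r1)
    q2 = np.sqrt(head / r2)

    q1_min, q2_min = 7.5, 6.0                # specified minimum flow rates
    p_fail = np.mean((q1 < q1_min) | (q2 < q2_min))
    print(f"P(flow below minimum in either line) ~ {p_fail:.4f}")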

3

Overview of ICLASS research: Reliable and parallel computing  

NASA Technical Reports Server (NTRS)

An overview of Illinois Computer Laboratory for Aerospace Systems and Software (ICLASS) Research: Reliable and Parallel Computing is presented. Topics covered include: reliable and fault tolerant computing; fault tolerant multiprocessor architectures; fault tolerant matrix computation; and parallel processing.

Iyer, Ravi K.

1987-01-01

4

Essay Reliability: Form and Meaning.  

ERIC Educational Resources Information Center

This study is an attempt at a cohesive characterization of the concept of essay reliability. As such, it takes as a basic premise that previous and current practices in reporting reliability estimates for essay tests have certain shortcomings. The study provides an analysis of these shortcomings--partly to encourage a fuller understanding of the…

Shale, Doug

5

Lorentzian Affine Hypersurfaces with Parallel Cubic Form  

Microsoft Academic Search

We study Lorentzian affine hypersurfaces of $\mathbb{R}^{n+1}$ having parallel cubic form with respect to the Levi-Civita connection of the affine Berwald-Blaschke metric. As main result, we obtain a complete classification of these hypersurfaces.

Zejun Hu; Cece Li; Haizhong Li; Luc Vrancken

2011-01-01

6

Thermodynamics of Forming a Parallel DNA Crossover  

PubMed Central

The process of genetic recombination involves the formation of branched four-stranded DNA structures known as Holliday junctions. The Holliday junction is known to have an antiparallel orientation of its helices, i.e., the crossover occurs between strands of opposite polarity. Some intermediates in this process are known to involve two crossover sites, and these may involve crossovers between strands of identical polarity. Surprisingly, if a crossover occurs at every possible juxtaposition of backbones between parallel DNA double helices, the molecules form a paranemic structure with two helical domains, known as PX-DNA. Model PX-DNA molecules can be constructed from a variety of DNA molecules with five nucleotide pairs in the minor groove and six, seven or eight nucleotide pairs in the major groove. A topoisomer of the PX motif is the juxtaposed JX1 molecule, wherein one crossover is missing between the two helical domains. The JX1 molecule offers an outstanding baseline molecule with which to compare the PX molecule, so as to measure the thermodynamic cost of forming a crossover in a parallel molecule. We have made these measurements using calorimetric and ultraviolet hypochromicity methods, as well as denaturing gradient gel electrophoretic methods. The results suggest that in relaxed conditions, a system that meets the pairing requirements for PX-DNA would prefer to form the PX motif relative to juxtaposed molecules, particularly for the 6:5 structure.

Spink, Charles H.; Ding, Liang; Yang, Qingyi; Sheardy, Richard D.; Seeman, Nadrian C.

2009-01-01

7

Angles Formed by Parallel Lines and a Transversal  

NSDL National Science Digital Library

In this lesson you will learn how to classify angles formed by parallel lines and a transversal as well as how to find the measures of these angles. You have probably heard of parallel lines but you probably don't know about all the special angles that are formed when a line intersects a set of parallel lines. Click on the lecture below to learn about these special angles. The lecture has sound so make sure your ...

Brown, Mrs.

2007-10-19

8

Armed Services Vocational Aptitude Battery (ASVAB): Alternate Forms Reliability (Forms 8, 9, 10, and 11). Technical Paper for Period October 1980-April 1985.  

ERIC Educational Resources Information Center

A study investigated the alternate forms reliability of the Armed Services Vocational Aptitude Battery (ASVAB) Forms 8, 9, 10, and 11. Usable data were obtained from 62,938 armed services applicants who took the ASVAB in January and February 1983. Results showed that the parallel forms reliability coefficients between ASVAB Form 8a and the…

Palmer, Pamla; And Others
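
The parallel (alternate) forms reliability coefficient reported in studies like this one is simply the Pearson correlation between examinees' scores on the two forms. A minimal sketch with simulated, not ASVAB, data:

    import numpy as np

    rng = np.random.default_rng(1)
    true_ability = rng.normal(0, 1, 1000)
    form_a = true_ability + rng.normal(0, 0.5, 1000)  # scores on form A
    form_b = true_ability + rng.normal(0, 0.5, 1000)  # scores on form B

    r_ab = np.corrcoef(form_a, form_b)[0, 1]
    print(f"parallel-forms reliability estimate: {r_ab:.3f}")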

9

Minimal eating observation form: Reliability and validity  

Microsoft Academic Search

Objectives  Eating difficulties are common for patients in hospitals (82% have one or more). Eating difficulties predict undernourishment, need for assistance when eating, length of hospital stay and level of care after hospital stay. Eating difficulties have through factor analysis (FA) been found to belong to three dimensions (ingestion, deglutition and energy). The present study investigates inter-observer reliability. Other questions at…

A. Westergren; C. Lindholm; A. Mattsson; K. Ulander

2009-01-01

10

PRAND: GPU accelerated parallel random number generation library: Using most reliable algorithms and applying parallelism of modern GPUs and CPUs  

NASA Astrophysics Data System (ADS)

The library PRAND for pseudorandom number generation for modern CPUs and GPUs is presented. It contains both single-threaded and multi-threaded realizations of a number of modern and most reliable generators recently proposed and studied in Barash (2011), Matsumoto and Nishimura (1998), L'Ecuyer (1999, 1999), Barash and Shchur (2006) and the efficient SIMD realizations proposed in Barash and Shchur (2011). One of the useful features for using PRAND in parallel simulations is the ability to initialize up to 10^19 independent streams. Using massive parallelism of modern GPUs and SIMD parallelism of modern CPUs substantially improves performance of the generators.

Barash, L. Yu.; Shchur, L. N.

2014-04-01
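
PRAND's own API is not shown in the abstract, so the sketch below illustrates the same idea, many statistically independent streams for parallel simulation, using NumPy's SeedSequence spawning rather than PRAND itself.

    import numpy as np

    root = np.random.SeedSequence(20140401)
    children = root.spawn(8)  # one independent stream per worker
    streams = [np.random.Generator(np.random.Philox(s)) for s in children]

    # Each worker draws from its own stream; streams do not overlap.
    partial_sums = [g.random(1_000_000).sum() for g in streams]
    print(sum(partial_sums) / 8e6)  # should be close to 0.5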

11

Method for forming a flat band of parallel, contiguous strands  

US Patent & Trademark Office Database

A method for fabricating a flat band of parallel, contiguous strands or wires using just a single strand or wire. Wire is wound onto a drum to produce a single-wire helix having a predetermined pitch, after which the same wire is wound in a reverse direction onto the drum to produce a second single-wire helix contiguous with the first one. These two winding steps are repeated until a helix having the desired number of contiguous wires has been formed. Unwinding the multi-wire helix from the drum produces the desired flat band of parallel, contiguous wires.

1986-12-23

12

Reliability analysis of a direct parallel connected n+1 redundant power system based on highly reliable DC/DC modules

Microsoft Academic Search

An n+1 redundant system using modular hybrid DC/DC converters connected in parallel where the normally associated isolation diodes are omitted is described. Reliability and efficiency analysis of the systems was performed, based on a comparison between the system described and a system based on a conventional uninterruptible power system (UPS) with added redundant functions. It is concluded that the proposed…

L. Thorsell; P. Lindman

1988-01-01
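
The reliability payoff of the n+1 scheme follows from a simple binomial argument: the system survives as long as at least n of its n+1 modules work. A minimal sketch with an invented module reliability, ignoring the diode and converter details studied in the paper:

    from math import comb

    def r_n_plus_1(r_module: float, n: int) -> float:
        """System reliability when at least n of n+1 modules must work."""
        m = n + 1
        return sum(comb(m, k) * r_module**k * (1 - r_module)**(m - k)
                   for k in range(n, m + 1))

    print(r_n_plus_1(0.98, n=4))  # five modules, any four sufficient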

13

The Reliable Router: A Reliable and High-Performance Communication Substrate for Parallel Computers  

Microsoft Academic Search

The Reliable Router (RR) is a network switching element targeted to two-dimensional mesh interconnection network topologies. It is designed to run at 100 MHz and reach a useful link bandwidth of 3.2 Gbit/sec. The Reliable Router uses adaptive routing coupled with link-level retransmission and a unique-token protocol to increase both performance and reliability. The RR can handle a single node or link failure anywhere in…

William J. Dally; Larry R. Dennison; David Harris; Kinhong Kan; Thucydides Xanthopoulos

1994-01-01

14

Parameter Interval Estimation of System Reliability for Repairable Multistate Series-Parallel System with Fuzzy Data  

PubMed Central

The purpose of this paper is to create an interval estimation of the fuzzy system reliability for the repairable multistate series–parallel system (RMSS). A two-sided fuzzy confidence interval for the fuzzy system reliability is constructed. The performance of the fuzzy confidence interval is considered based on the coverage probability and the expected length. In order to obtain the fuzzy system reliability, fuzzy set theory is applied to the system reliability problem when dealing with uncertainties in the RMSS. A fuzzy number with a triangular membership function is used for constructing the fuzzy failure rate and the fuzzy repair rate in the fuzzy reliability for the RMSS. The results show that a good interval estimator attains coverage probabilities close to the expected confidence coefficient with the narrowest expected length. The model presented herein is an effective estimation method when the sample size is n ≥ 100. In addition, the optimal α-cuts for the narrowest lower expected length and the narrowest upper expected length are considered.

2014-01-01

15

Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis  

NASA Technical Reports Server (NTRS)

During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.

Gibson, Garth Alan

1990-01-01
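
Why a code as simple as parity suffices for single, self-identifying failures: the parity block is the XOR of the data blocks, so any one lost block equals the XOR of the survivors. A toy sketch, with byte strings standing in for disk blocks:

    from functools import reduce

    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                            blocks))

    data = [b"disk0data", b"disk1data", b"disk2data"]
    parity = xor_blocks(data)

    # Disk 1 fails (and identifies itself as failed); rebuild its block
    # from the surviving data blocks plus the parity block.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]
    print(rebuilt)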

16

Parallel 3D ALE code for metal forming analyses.  

National Technical Information Service (NTIS)

A three-dimensional arbitrary Lagrange-Eulerian (ALE) code is being developed for use as a general purpose tool for metal forming analyses. The focus of the effort is on the processes of forging, extrusion, casting and rolling. The ALE approach was chosen...

R. Neely; R. Couch; E. Dube; S. Futral

1995-01-01

17

Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method  

ERIC Educational Resources Information Center

In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…

Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

2008-01-01
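
A toy version of GA-based parallel-form assembly can be sketched as follows; this is not the authors' implementation. It selects a fixed-size item subset whose 2PL test information function (TIF) best matches a target TIF, with an invented item pool and GA settings.

    import numpy as np

    rng = np.random.default_rng(7)
    POOL, FORM_LEN, POP, GENS = 200, 30, 60, 150
    theta = np.linspace(-3, 3, 13)           # ability grid

    a = rng.uniform(0.5, 2.0, POOL)          # 2PL discriminations
    b = rng.normal(0.0, 1.0, POOL)           # 2PL difficulties

    # Item information under the 2PL model: a^2 * p * (1 - p).
    p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
    INFO = (a**2) * p * (1 - p)              # shape (len(theta), POOL)

    # Target TIF taken from a randomly chosen reference form.
    target = INFO[:, rng.choice(POOL, FORM_LEN, replace=False)].sum(axis=1)

    def fitness(form):
        # Negative absolute TIF mismatch; larger is better.
        return -np.abs(INFO[:, form].sum(axis=1) - target).sum()

    population = [rng.choice(POOL, FORM_LEN, replace=False)
                  for _ in range(POP)]
    for _ in range(GENS):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:POP // 2]
        children = []
        while len(children) < POP - len(parents):
            i, j = rng.choice(len(parents), 2, replace=False)
            merged = np.union1d(parents[i], parents[j])        # crossover
            child = rng.choice(merged, FORM_LEN, replace=False)
            if rng.random() < 0.3:                             # mutation
                outside = np.setdiff1d(np.arange(POOL), child)
                child[rng.integers(FORM_LEN)] = rng.choice(outside)
            children.append(child)
        population = parents + children

    best = max(population, key=fitness)
    print("final TIF mismatch:", -fitness(best))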

18

The classification of 4-dimensional non-degenerate affine hypersurfaces with parallel cubic form  

Microsoft Academic Search

In this paper, we complete the classification of 4-dimensional non-degenerate affine hypersurfaces with parallel cubic form with respect to the Levi-Civita connection of the affine Berwald–Blaschke metric.

Zejun Hu; Cece Li; Haizhong Li; Luc Vrancken

2011-01-01

19

The Experiences in Close Relationship Scale (ECR)-Short Form: Reliability, Validity, and Factor Structure

Microsoft Academic Search

We developed a 12-item, short form of the Experiences in Close Relationship Scale (ECR; Brennan, Clark, & Shaver, 1998) across 6 studies. In Study 1, we examined the reliability and factor structure of the measure. In Studies 2 and 3, we cross-validated the reliability, factor structure, and validity of the short form measure; whereas in Study 4, we examined test-retest…

Meifen Wei; Daniel W. Russell; Brent Mallinckrodt; David L. Vogel

2007-01-01

20

Similarity of the Multidimensional Space Defined by Parallel Forms of a Mathematics Test.  

ERIC Educational Resources Information Center

The purpose of the paper is to determine whether test forms of the Mathematics Usage Test (AAP Math) of the American College Testing Program are parallel in a multidimensional sense. The AAP Math is an achievement test of mathematics concepts acquired by high school students by the end of their third year. To determine the dimensionality of the…

Reckase, Mark D.; And Others

21

The classification of 3-dimensional Lorentzian affine hypersurfaces with parallel cubic form  

Microsoft Academic Search

We study Lorentzian affine hypersurfaces in R^{n+1} with parallel cubic form with respect to the Levi-Civita connection of the affine metric. As main result, a complete classification of such non-degenerate affine hypersurfaces in R^4 is given.

Zejun Hu; Cece Li

2011-01-01

22

Commentary on "Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data"  

ERIC Educational Resources Information Center

In the article "Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data," Dinno (this issue) provides strong evidence that the distribution of random data does not have a significant influence on the outcome of the analysis. Hayton appreciates the thorough approach to evaluating this assumption, and agrees…

Hayton, James C.

2009-01-01

23

A Prediction Interval for a Score on a Parallel Test Form.  

ERIC Educational Resources Information Center

Given any observed number-right score on a test, a method is described for obtaining a prediction interval for the corresponding number-right score on a randomly parallel form of the same test. The interval can be written down directly from published tables of the hypergeometric distribution. (Author)

Lord, Frederic M.

1981-01-01
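
The construction can be sketched as follows: for randomly parallel n-item forms, the number right X on the observed form, given the two-form total t = x + y, is hypergeometric and does not depend on the examinee's ability, so inverting that conditional distribution yields a prediction set for y. This is an illustrative reconstruction of the idea, not Lord's published table lookup.

    from scipy.stats import hypergeom

    def prediction_set(x, n, alpha=0.10):
        """Approximate 100*(1-alpha)% prediction set for the score y on a
        randomly parallel n-item form, given observed score x (sketch)."""
        keep = []
        for y in range(n + 1):
            t = x + y
            # 2n items in the combined pool, t answered correctly overall,
            # n of them drawn onto the observed form.
            dist = hypergeom(2 * n, t, n)
            if min(dist.cdf(x), dist.sf(x - 1)) > alpha / 2:
                keep.append(y)
        return keep

    print(prediction_set(x=21, n=30))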

24

Reliability Modeling Methodology for Independent Approaches on Parallel Runways Safety Analysis  

NASA Technical Reports Server (NTRS)

This document is an adjunct to the final report An Integrated Safety Analysis Methodology for Emerging Air Transport Technologies. That report presents the results of our analysis of the problem of simultaneous but independent approaches of two aircraft on parallel runways (independent approaches on parallel runways, or IAPR). This introductory chapter presents a brief overview and perspective of approaches and methodologies for performing safety analyses for complex systems. Ensuing chapters provide the technical details that underlie the approach that we have taken in performing the safety analysis for the IAPR concept.

Babcock, P.; Schor, A.; Rosch, G.

1998-01-01

25

Reliability  

NSDL National Science Digital Library

In essence, reliability is the consistency of test results. To understand the meaning of reliability and how it relates to validity, imagine going to an airport to take flight #007 from Pittsburgh to San Diego. If, every time the airplane makes the flight…

Christmann, Edwin P.; Badgett, John L.

2008-11-01

26

Parallel FE Approximation of the Even/Odd Parity Form of the Linear Boltzmann Equation  

SciTech Connect

A novel solution method has been developed to solve the linear Boltzmann equation on an unstructured triangular mesh. Instead of tackling the first-order form of the equation, this approach is based on the even/odd-parity form in conjunction with the conventional multigroup discrete-ordinates approximation. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, and the method is well suited for massively parallel computers.

Drumm, Clifton R.; Lorenz, Jens

1999-07-21

27

Magnetosheath filamentary structures formed by ion acceleration at the quasi-parallel bow shock  

NASA Astrophysics Data System (ADS)

Results from 2.5-D electromagnetic hybrid simulations show the formation of field-aligned, filamentary plasma structures in the magnetosheath. They begin at the quasi-parallel bow shock and extend far into the magnetosheath. These structures exhibit anticorrelated, spatial oscillations in plasma density and ion temperature. Closer to the bow shock, magnetic field variations associated with density and temperature oscillations may also be present. Magnetosheath filamentary structures (MFS) form primarily in the quasi-parallel sheath; however, they may extend to the quasi-perpendicular magnetosheath. They occur over a wide range of solar wind Alfvénic Mach numbers and interplanetary magnetic field directions. At lower Mach numbers with lower levels of magnetosheath turbulence, MFS remain highly coherent over large distances. At higher Mach numbers, magnetosheath turbulence decreases the level of coherence. Magnetosheath filamentary structures result from localized ion acceleration at the quasi-parallel bow shock and the injection of energetic ions into the magnetosheath. The localized nature of ion acceleration is tied to the generation of fast magnetosonic waves at and upstream of the quasi-parallel shock. The increased pressure in flux tubes containing the shock accelerated ions results in the depletion of the thermal plasma in these flux tubes and the enhancement of density in flux tubes void of energetic ions. This results in the observed anticorrelation between ion temperature and plasma density.

Omidi, N.; Sibeck, D.; Gutynska, O.; Trattner, K. J.

2014-04-01

28

An Investigation into Reliability, Availability, and Serviceability (RAS) Features for Massively Parallel Processor Systems  

SciTech Connect

A study has been completed into the RAS features necessary for Massively Parallel Processor (MPP) systems. As part of this research, a use case model was built of how RAS features would be employed in an operational MPP system. Use cases are an effective way to specify requirements so that all involved parties can easily understand them. This technique is in contrast to laundry lists of requirements that are subject to misunderstanding as they are without context. As documented in the use case model, the study included a look at incorporating system software and end-user applications, as well as hardware, into the RAS system.

KELLY, SUZANNE M.; OGDEN, JEFFREY BRANDON

2002-10-01

29

Highly reliable 64-channel sequential and parallel tubular reactor system for high-throughput screening of heterogeneous catalysts  

NASA Astrophysics Data System (ADS)

A highly reliable 64-channel sequential and parallel tubular reactor for high-throughput screening of heterogeneous catalysts is constructed with stainless steel. In order to have a uniform flow rate at each channel, 64 capillaries are placed between the outlet of the multiport valve and the inlet of each reactor. Flow rate can be controlled within ±1.5%. Flow distribution can be easily adjusted for sequential and parallel modes of operation. The reactor diameter is too big to have a uniform temperature distribution. Hence, the reactor body is separated into three radial zones and controlled independently with nine thermocouples. Temperature accuracy is ±0.5 °C at 300 °C and ±1 °C at 500 °C in sequential mode, while it is ±2.5 °C in the range of 250-500 °C in parallel mode. The temperature, flow rate, reaction sequence, and product analysis are controlled by LABVIEW™ software and monitored simultaneously with a live graph display. The accuracy in the conversion is ±2% at the level of 73% conversion when all reactors are loaded with the same amount of catalyst. A quaternary catalyst library of 56 samples composed of Pt, Cu, Fe, and Co supported on AlSBA-15 (SBA-15 substituted with Al) is evaluated in the selective catalytic reduction of NO at various temperatures with our system. The most active compositions are rapidly screened at various temperatures.

Oh, Kwang Seok; Park, Yong Ki; Woo, Seong Ihl

2005-06-01

30

Mitochondrial gene rearrangements confirm the parallel evolution of the crab-like form.  

PubMed Central

The repeated appearance of strikingly similar crab-like forms in independent decapod crustacean lineages represents a remarkable case of parallel evolution. Uncertainty surrounding the phylogenetic relationships among crab-like lineages has hampered evolutionary studies. As is often the case, aligned DNA sequences by themselves were unable to fully resolve these relationships. Four nested mitochondrial gene rearrangements--including one of the few reported movements of an arthropod protein-coding gene--are congruent with the DNA phylogeny and help to resolve a crucial node. A phylogenetic analysis of DNA sequences, and gene rearrangements, supported five independent origins of the crab-like form, and suggests that the evolution of the crab-like form may be irreversible. This result supports the utility of mitochondrial gene rearrangements in phylogenetic reconstruction.

Morrison, C L; Harvey, A W; Lavery, S; Tieu, K; Huang, Y; Cunningham, C W

2002-01-01

31

Validity and Reliability of International Physical Activity Questionnaire-Short Form in Chinese Youth  

ERIC Educational Resources Information Center

Purpose: The psychometric profiles of the widely used International Physical Activity Questionnaire-Short Form (IPAQ-SF) in Chinese youth have not been reported. The purpose of this study was to examine the validity and reliability of the IPAQ-SF using a sample of Chinese youth. Method: One thousand and twenty-one youth (M_age = 14.26 ±…

Wang, Chao; Chen, Peijie; Zhuang, Jie

2013-01-01

32

Secure Internet Banking with Privacy Enhanced Mail - A Protocol for Reliable Exchange of Secured Order Forms  

Microsoft Academic Search

The Protocol for Reliable Exchange of Secured Order Forms is a model for securing today's favourite Internet service for business, the World-Wide Web, and its capability for exchanging order forms. Based on the PEM Internet standards (RFC 1421–1424) the protocol includes integrity of communication contents and authenticity of its origin, which allows for non-repudiation services, as well as confidentiality. It…

Stephan Kolletzki

1996-01-01

33

Reliable  

Microsoft Academic Search

This article is concerned with the reliable H∞ output feedback control problem against actuator failures for a class of uncertain discrete time-delay systems with randomly occurred nonlinearities (RONs). The failures of actuators are quantified by a variable varying in a given interval. RONs are introduced to model a class of sector-like nonlinearities that occur in a probabilistic way according to…

Yisha Liu; Zidong Wang; Wei Wang

2011-01-01

34

Perceptual integration of motion and form information: evidence of parallel-continuous processing.  

PubMed

In three visual search experiments, the processes involved in the efficient detection of motion-form conjunction targets were investigated. Experiment 1 was designed to estimate the relative contributions of stationary and moving nontargets to the search rate. Search rates were primarily determined by the number of moving nontargets; stationary nontargets sharing the target form also exerted a significant effect, but this was only about half as strong as that of moving nontargets; stationary nontargets not sharing the target form had little influence. In Experiments 2 and 3, the effects of display factors influencing the visual (form) quality of moving items (movement speed and item size) were examined. Increasing the speed of the moving items (> 1.5 degrees/sec) facilitated target detection when the task required segregation of the moving from the stationary items. When no segregation was necessary, increasing the movement speed impaired performance: With large display items, motion speed had little effect on target detection, but with small items, search efficiency declined when items moved faster than 1.5 degrees/sec. This pattern indicates that moving nontargets exert a strong effect on the search rate (Experiment 1) because of the loss of visual quality for moving items above a certain movement speed. A parallel-continuous processing account of motion-form conjunction search is proposed, which combines aspects of Guided Search (Wolfe, 1994) and attentional engagement theory (Duncan & Humphreys, 1989). PMID:10909242

von Mühlenen, A; Müller, H J

2000-04-01

35

Alternating d(G-A) sequences form a parallel-stranded DNA homoduplex.  

PubMed Central

The oligonucleotides d[(G-A)7G] and d[(G-A)12G] self-associate under physiological conditions (10 mM MgCl2, neutral pH) into a stable double-helical structure (psRR-DNA) in which the two polypurine strands are in a parallel orientation in contrast to the antiparallel disposition of conventional B-DNA. We have characterized psRR-DNA by gel electrophoresis, UV absorption, vacuum UV circular dichroism, monomer-excimer fluorescence of oligonucleotides end-labelled with pyrene, and chemical probing with diethyl pyrocarbonate and dimethyl sulfate. The duplex is stable at pH 4-9, suggesting that the structure is compatible with, but does not require, protonation of the A residues. The data support a model derived from force-field analysis in which the parallel-stranded d(G-A)n helix is right-handed and constituted of alternating, symmetrical Gsyn.Gsyn and Aanti.Aanti base pairs with N1H...O6 and N6H...N7 hydrogen bonds, respectively. This dinucleotide structure may be the source of a negative peak observed at 190 nm in the vacuum UV CD spectrum, a feature previously reported only for left-handed Z-DNA. The related sequence d[(GAAGGA)4G] also forms a parallel-stranded duplex but one that is less stable and probably involves a slightly different secondary structure. We discuss the potential intervention of psRR-DNA in recombination, gene expression and the stabilization of genomic structure.

Rippe, K; Fritsch, V; Westhof, E; Jovin, T M

1992-01-01

36

Reliability and validity of the Attitudes Toward Seeking Professional Psychological Help Scale-Short Form  

Microsoft Academic Search

We examined the reliability and validity of the Attitudes Toward Seeking Professional Psychological Help Scale-Short Form (ATSPPH-SF), a widely cited measure of mental health treatment attitudes. Data from 296 college students and 389 primary care patients were analyzed. The ATSPPH-SF evidenced adequate internal consistency. Higher scores (indicating more positive treatment attitudes) were associated with less treatment-related stigma, and greater intentions…

Jon D. Elhai; William Schweinle; Susan M. Anderson

2008-01-01

37

Microelectromechanical filter formed from parallel-connected lattice networks of contour-mode resonators  

DOEpatents

A microelectromechanical (MEM) filter is disclosed which has a plurality of lattice networks formed on a substrate and electrically connected together in parallel. Each lattice network has a series resonant frequency and a shunt resonant frequency provided by one or more contour-mode resonators in the lattice network. Different types of contour-mode resonators, including single input, single output resonators, differential resonators, balun resonators, and ring resonators, can be used in the MEM filter. The MEM filter can have a center frequency in the range of 10 MHz-10 GHz, with a filter bandwidth of up to about 1% when all of the lattice networks have the same series resonant frequency and the same shunt resonant frequency. The filter bandwidth can be increased up to about 5% by using unique series and shunt resonant frequencies for the lattice networks.

Wojciechowski, Kenneth E; Olsson, III, Roy H; Ziaei-Moayyed, Maryam

2013-07-30

38

The Validation of Parallel Test Forms: "Mountain" and "Beach" Picture Series for Assessment of Language Skills  

ERIC Educational Resources Information Center

Pictures are widely used to elicit expressive language skills, and pictures must be established as parallel before changes in ability can be demonstrated by assessment using picture prompts. Why parallel prompts are required and what it is necessary to do to ensure that prompts are in fact parallel is not widely known. To date, evidence of…

Bae, Jungok; Lee, Yae-Sheik

2011-01-01

39

Reliability of self reported form of female genital mutilation and WHO classification: cross sectional study  

PubMed Central

Objective To assess the reliability of self reported form of female genital mutilation (FGM) and to compare the extent of cutting verified by clinical examination with the corresponding World Health Organization classification. Design Cross sectional study. Settings One paediatric hospital and one gynaecological outpatient clinic in Khartoum, Sudan, 2003-4. Participants 255 girls aged 4-9 and 282 women aged 17-35. Main outcome measures The women's reports of FGM, the actual anatomical extent of the mutilation, and the corresponding types according to the WHO classification. Results All girls and women reported to have undergone FGM had this verified by genital inspection. None of those who said they had not undergone FGM were found to have it. Many said to have undergone “sunna circumcision” (excision of prepuce and part or all of clitoris, equivalent to WHO type I) had a form of FGM extending beyond the clitoris (10/23 (43%) girls and 20/35 (57%) women). Of those who said they had undergone this form, nine girls (39%) and 19 women (54%) actually had WHO type III (infibulation and excision of part or all of external genitalia). The anatomical extent of forms classified as WHO type III varies widely. In 12/32 girls (38%) and 27/245 women (11%) classified as having WHO type III, the labia majora were not involved. Thus there is a substantial overlap, in an anatomical sense, between WHO types II and III. Conclusion The reliability of reported form of FGM is low. There is considerable under-reporting of the extent. The WHO classification fails to relate the defined forms to the severity of the operation. It is important to be aware of these aspects in the conduct and interpretation of epidemiological and clinical studies. WHO should revise its classification.

Elmusharaf, Susan; Elhadi, Nagla; Almroth, Lars

2006-01-01

40

Reliability and Validity of a Short Form of the Marijuana Craving Questionnaire  

PubMed Central

Background The Marijuana Craving Questionnaire (MCQ) is a valid and reliable, 47-item self-report instrument that assesses marijuana craving along four dimensions: compulsivity, emotionality, expectancy, and purposefulness. For use in research and clinical settings, we constructed a 12-item version of the MCQ by selecting three items from each of the four factors that exhibited the greatest within-factor internal consistency (Cronbach's alpha coefficient). Methods Adult marijuana users (n = 490), who had made at least one serious attempt to quit marijuana use but were not seeking treatment, completed the MCQ-Short Form (MCQ-SF) in a single session. Results Confirmatory factor analysis of the MCQ-SF indicated good fit with the 4-factor MCQ model, and the coefficient of congruence indicated moderate similarity in factor patterns and loadings between the MCQ and MCQ-SF. Homogeneity (unidimensionality and internal consistency) of MCQ-SF factors was also consistent with reliability values obtained in the initial validation of the MCQ. Conclusions Findings of psychometric fidelity indicate that the MCQ-SF is a reliable and valid measure of the same multidimensional aspects of marijuana craving as the MCQ in marijuana users not seeking treatment.

Heishman, Stephen J.; Evans, Rebecca J.; Singleton, Edward G.; Levin, Kenneth H.; Copersino, Marc L.; Gorelick, David A.

2009-01-01
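
Cronbach's alpha coefficient, the within-factor internal-consistency statistic used above for item selection, is straightforward to compute. A minimal sketch with simulated responses (rows are respondents, columns are items):

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for a respondents-by-items score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(3)
    latent = rng.normal(0, 1, (200, 1))
    responses = latent + rng.normal(0, 1, (200, 12))  # 12 correlated items
    print(f"alpha = {cronbach_alpha(responses):.3f}")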

41

Optimal design of parallel triplex forming oligonucleotides containing Twisted Intercalating Nucleic Acids--TINA  

PubMed Central

Twisted intercalating nucleic acid (TINA) is a novel intercalator and stabilizer of Hoogsteen type parallel triplex formations (PT). Specific design rules for the position of TINA in triplex forming oligonucleotides (TFOs) have not previously been presented. We describe a complete collection of easy and robust design rules based upon more than 2500 melting points (Tm) determined by FRET. To increase the sensitivity of PT, multiple TINAs should be placed with at least 3 nt in-between or preferably one TINA for each half helix turn and/or whole helix turn. We find that ΔTm of base mismatches on PT is remarkably high (between 7.4 and 15.2°C) compared to antiparallel duplexes (between 3.8 and 9.4°C). The specificity of PT by ΔTm increases when shorter TFOs and higher pH are chosen. To increase ΔTms, base mismatches should be placed in the center of the TFO and, when feasible, A, C or T to G base mismatches should be avoided. Base mismatches can be neutralized by intercalation of a TINA on each side of the base mismatch and masked by a TINA intercalating directly 3′ (preferable) or 5′ of it. We predict that TINA stabilized PT will improve the sensitivity and specificity of DNA based clinical diagnostic assays.

Schneider, Uffe V.; Mikkelsen, Nikolaj D.; Jøhnk, Nina; Okkels, Limei M.; Westh, Henrik; Lisby, Gorm

2010-01-01

42

Solution structure of all parallel G-quadruplex formed by the oncogene RET promoter sequence  

PubMed Central

RET protein functions as a receptor-type tyrosine kinase and has been found to be aberrantly expressed in a wide range of human diseases. A highly GC-rich region upstream of the promoter plays an important role in the transcriptional regulation of RET. Here, we report the NMR solution structure of the major intramolecular G-quadruplex formed on the G-rich strand of this region in K+ solution. The overall G-quadruplex is composed of three stacked G-tetrads and four syn guanines, which shows distinct features for an all parallel-stranded folding topology. The core structure contains one G-tetrad with all syn guanines and two others with all anti guanines. There are three double-chain reversal loops: the first and the third loops are made of 3 nt G-C-G segments, while the second one contains only 1 nt, C10. These loops interact with the core G-tetrads in a specific way that defines and stabilizes the overall G-quadruplex structure, and their conformations are in accord with the experimental mutations. The distinct RET promoter G-quadruplex structure suggests that it can be specifically involved in gene regulation and can be an attractive target for pathway-specific drug design.

Tong, Xiaotian; Lan, Wenxian; Zhang, Xu; Wu, Houming; Liu, Maili; Cao, Chunyang

2011-01-01

43

Self-Stigma of Mental Illness Scale - Short Form: Reliability and Validity  

PubMed Central

The internalization of public stigma by persons with serious mental illnesses may lead to self-stigma, which harms self-esteem, self-efficacy, and empowerment. Previous research has evaluated a hierarchical model that distinguishes among stereotype awareness, agreement, application to self, and harm to self with the 40-item Self-Stigma of Mental Illness Scale (SSMIS). This study addressed SSMIS critiques (too long, contains offensive items that discourage test completion) by strategically omitting half of the original scale’s items. Here we report reliability and validity of the 20-item short form (SSMIS-SF) based on data from three previous studies. Retained items were rated less offensive by a sample of consumers. Results indicated adequate internal consistencies for each subscale. Repeated measures ANOVAs showed subscale means progressively diminished from awareness to harm. In support of its validity, the harm subscale was found to be inversely and significantly related to self-esteem, self-efficacy, empowerment, and hope. After controlling for level of depression, these relationships remained significant with the exception of the relation between empowerment and the harm SSMIS-SF subscale. Future research with the SSMIS-SF should evaluate its sensitivity to change and its stability through test-retest reliability.

Corrigan, Patrick W.; Michaels, Patrick J.; Vega, Eduardo; Gause, Michael; Watson, Amy C.; Rusch, Nicolas

2012-01-01

44

Self-stigma of mental illness scale--short form: reliability and validity.  

PubMed

The internalization of public stigma by persons with serious mental illnesses may lead to self-stigma, which harms self-esteem, self-efficacy, and empowerment. Previous research has evaluated a hierarchical model that distinguishes among stereotype awareness, agreement, application to self, and harm to self with the 40-item Self-Stigma of Mental Illness Scale (SSMIS). This study addressed SSMIS critiques (too long, contains offensive items that discourage test completion) by strategically omitting half of the original scale's items. Here we report reliability and validity of the 20-item short form (SSMIS-SF) based on data from three previous studies. Retained items were rated less offensive by a sample of consumers. Results indicated adequate internal consistencies for each subscale. Repeated measures ANOVAs showed subscale means progressively diminished from awareness to harm. In support of its validity, the harm subscale was found to be inversely and significantly related to self-esteem, self-efficacy, empowerment, and hope. After controlling for level of depression, these relationships remained significant with the exception of the relation between empowerment and the harm SSMIS-SF subscale. Future research with the SSMIS-SF should evaluate its sensitivity to change and its stability through test-retest reliability. PMID:22578819

Corrigan, Patrick W; Michaels, Patrick J; Vega, Eduardo; Gause, Michael; Watson, Amy C; Rüsch, Nicolas

2012-08-30

45

Validity, Reliability, and Potential Bias of Short Forms of Students' Evaluation of Teaching: The Case of UAE University  

ERIC Educational Resources Information Center

Students' opinions continue to be a significant factor in the evaluation of teaching in higher education institutions. The purpose of this study was to psychometrically assess short student evaluation of teaching (SET) forms using the UAE University form as a model. The study evaluated the form's validity, reliability, the overall question,…

Dodeen, Hamzeh

2013-01-01

46

An Investigation of Angle Relationships Formed by Parallel Lines Cut by a Transversal Using GeoGebra  

NSDL National Science Digital Library

In this lesson, students will discover angle relationships formed (corresponding, alternate interior, alternate exterior, same-side interior, same-side exterior) when two parallel lines are cut by a transversal. They will establish definitions and identify whether these angle pairs are supplementary or congruent.

2013-01-08

47

Bringing the cognitive estimation task into the 21st century: normative data on two new parallel forms.  

PubMed

The Cognitive Estimation Test (CET) is widely used by clinicians and researchers to assess the ability to produce reasonable cognitive estimates. Although several studies have published normative data for versions of the CET, many of the items are now outdated and parallel forms of the test do not exist to allow cognitive estimation abilities to be assessed on more than one occasion. In the present study, we devised two new 9-item parallel forms of the CET. These versions were administered to 184 healthy male and female participants aged 18-79 years with 9-22 years of education. Increasing age and years of education were found to be associated with successful CET performance as well as gender, intellect, naming, arithmetic and semantic memory abilities. To validate that the parallel forms of the CET were sensitive to frontal lobe damage, both versions were administered to 24 patients with frontal lobe lesions and 48 age-, gender- and education-matched controls. The frontal patients' error scores were significantly higher than the healthy controls on both versions of the task. This study provides normative data for parallel forms of the CET for adults which are also suitable for assessing frontal lobe dysfunction on more than one occasion without practice effects. PMID:24671170

MacPherson, Sarah E; Wagner, Gabriela Peretti; Murphy, Patrick; Bozzali, Marco; Cipolotti, Lisa; Shallice, Tim

2014-01-01

48

Bringing the Cognitive Estimation Task into the 21st Century: Normative Data on Two New Parallel Forms  

PubMed Central

The Cognitive Estimation Test (CET) is widely used by clinicians and researchers to assess the ability to produce reasonable cognitive estimates. Although several studies have published normative data for versions of the CET, many of the items are now outdated and parallel forms of the test do not exist to allow cognitive estimation abilities to be assessed on more than one occasion. In the present study, we devised two new 9-item parallel forms of the CET. These versions were administered to 184 healthy male and female participants aged 18–79 years with 9–22 years of education. Increasing age and years of education were found to be associated with successful CET performance as well as gender, intellect, naming, arithmetic and semantic memory abilities. To validate that the parallel forms of the CET were sensitive to frontal lobe damage, both versions were administered to 24 patients with frontal lobe lesions and 48 age-, gender- and education-matched controls. The frontal patients’ error scores were significantly higher than the healthy controls on both versions of the task. This study provides normative data for parallel forms of the CET for adults which are also suitable for assessing frontal lobe dysfunction on more than one occasion without practice effects.

MacPherson, Sarah E.; Wagner, Gabriela Peretti; Murphy, Patrick; Bozzali, Marco; Cipolotti, Lisa; Shallice, Tim

2014-01-01

49

Human telomeric DNA forms parallel-stranded intramolecular G-quadruplex in K+ solution under molecular crowding condition.  

PubMed

The G-rich strand of human telomeric DNA can fold into a four-stranded structure called G-quadruplex and inhibit telomerase activity that is expressed in 85-90% tumor cells. For this reason, telomere quadruplex is emerging as a potential therapeutic target for cancer. Information on the structure of the quadruplex in the physiological environment is important for structure-based drug design targeting the quadruplex. Recent studies have raised significant controversy regarding the exact structure of the quadruplex formed by human telomeric DNA in a physiological relevant environment. Studies on the crystal prepared in K+ solution revealed a distinct propeller-shaped parallel-stranded conformation. However, many later works failed to confirm such structure in physiological K+ solution but rather led to the identification of a different hybrid-type mixed parallel/antiparallel quadruplex. Here we demonstrate that human telomere DNA adopts a parallel-stranded conformation in physiological K+ solution under molecular crowding conditions created by PEG. At the concentration of 40% (w/v), PEG induced complete structural conversion to a parallel-stranded G-quadruplex. We also show that the quadruplex formed under such a condition has unusual stability and significant negative impact on telomerase processivity. Since the environment inside cells is molecularly crowded, our results obtained under the cell mimicking condition suggest that the parallel-stranded quadruplex may be the more favored structure under physiological conditions, and drug design targeting the human telomeric quadruplex should take this into consideration. PMID:17705383

Xue, Yong; Kan, Zhong-yuan; Wang, Quan; Yao, Yuan; Liu, Jiang; Hao, Yu-hua; Tan, Zheng

2007-09-12

50

Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data  

ERIC Educational Resources Information Center

Horn's parallel analysis (PA) is the method of consensus in the literature on empirical methods for deciding how many components/factors to retain. Different authors have proposed various implementations of PA. Horn's seminal 1965 article, a 1996 article by Thompson and Daniel, and a 2004 article by Hayton, Allen, and Scarpello all make assertions…

Dinno, Alexis

2009-01-01

51

The reliability of speeded tests  

Microsoft Academic Search

Some methods are presented for estimating the reliability of a partially speeded test without the use of a parallel form. The effect of these formulas on some test data is illustrated. Whenever an odd-even reliability is computed it is probably desirable to use one of the formulas noted in Section 2 of this paper in addition to the usual Spearman-Brown…

Harold Gulliksen

1950-01-01
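
The usual odd-even estimate that Gulliksen's formulas adjust is the split-half correlation stepped up by the Spearman-Brown formula, r_full = 2r/(1 + r). A minimal sketch on simulated, unspeeded item responses; for a genuinely speeded test this plain estimate is inflated, which is the paper's point.

    import numpy as np

    def split_half_sb(items: np.ndarray) -> float:
        """Odd-even split-half reliability with Spearman-Brown step-up."""
        odd = items[:, 0::2].sum(axis=1)
        even = items[:, 1::2].sum(axis=1)
        r_half = np.corrcoef(odd, even)[0, 1]
        return 2 * r_half / (1 + r_half)  # correct to full test length

    rng = np.random.default_rng(5)
    ability = rng.normal(0, 1, (300, 1))
    probs = 1 / (1 + np.exp(-(ability - rng.normal(0, 1, 40))))
    items = (rng.random((300, 40)) < probs).astype(float)
    print(f"split-half reliability: {split_half_sb(items):.3f}")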

52

A reliability study of springback on the sheet metal forming process under probabilistic variation of prestrain and blank holder force  

NASA Astrophysics Data System (ADS)

This work deals with a reliability assessment of the springback problem during the sheet metal forming process. The effects of operative parameters and material properties, blank holder force and plastic prestrain, on springback are investigated. A generic reliability approach was developed to control springback. Subsequently, the Monte Carlo simulation technique in conjunction with the Latin hypercube sampling method was adopted to study the probabilistic springback. A finite element method based on implicit/explicit algorithms was used to model the springback problem. The proposed constitutive law for sheet metal takes into account the adaptation of plastic parameters of the hardening law for each prestrain level considered. The Rackwitz-Fiessler algorithm is used to find reliability properties from response surfaces of chosen springback geometrical parameters. The obtained results were analyzed using multi-state limit reliability functions based on geometry compensations.

Mrad, Hatem; Bouazara, Mohamed; Aryanpour, Gholamreza

2013-08-01
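
The Latin hypercube step mentioned above can be sketched with SciPy's qmc module. The two uncertain inputs are the ones the paper names (plastic prestrain and blank holder force), but the bounds and the springback response below are invented stand-ins for the finite element model.

    import numpy as np
    from scipy.stats import qmc

    rng = np.random.default_rng(11)
    sampler = qmc.LatinHypercube(d=2, seed=11)
    unit = sampler.random(n=256)
    # column 0: plastic prestrain [-]; column 1: blank holder force [kN]
    samples = qmc.scale(unit, l_bounds=[0.00, 20.0], u_bounds=[0.10, 80.0])

    def springback_angle(prestrain, bhf):
        # Stand-in for the implicit/explicit FE springback simulation.
        return 8.0 - 25.0 * prestrain - 0.05 * bhf + rng.normal(0.0, 0.2)

    angles = np.array([springback_angle(p, f) for p, f in samples])
    print(f"P(springback > 5 deg) ~ {(angles > 5.0).mean():.3f}")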

53

A Validation Study of the Dutch Childhood Trauma Questionnaire-Short Form: Factor Structure, Reliability, and Known-Groups Validity  

ERIC Educational Resources Information Center

Objective: The 28-item Childhood Trauma Questionnaire-Short Form (CTQ-SF) has been translated into at least 10 different languages. The validity of translated versions of the CTQ-SF, however, has generally not been examined. The objective of this study was to investigate the factor structure, internal consistency reliability, and known-groups…

Thombs, Brett D.; Bernstein, David P.; Lobbestael, Jill; Arntz, Arnoud

2009-01-01

54

Alternate forms of the auditory-verbal learning test: issues of test comparability, longitudinal reliability, and moderating demographic variables  

Microsoft Academic Search

The present investigation examines the alternate-form and longitudinal reliability of two versions of the Auditory-Verbal Learning Test (AVLT) on a large, multiregional, healthy male sample. Subjects included 2,059 bisexual and homosexual HIV-seronegative males recruited from the Multicenter AIDS Cohort Study from centers in Baltimore, Chicago, Los Angeles, and Pittsburgh. The findings revealed no significant differences between forms upon initial or…

Craig Lyons Uchiyama; Louis F. D'Elia; Ann M. Dellinger; James T. Becker; Ola A. Selnes; Jerry E. Wesch; Bai Bai Chen; Paul Satz; Wilfred van Gorp; Eric N. Miller

1995-01-01

55

Reliability and Validity of the Sensation-Seeking Scale: Psychometric Problems in Form V.  

ERIC Educational Resources Information Center

Psychometric properties of Zuckerman's Sensation Seeking Scale were examined. Evidence supported the theoretical notion of an individual difference variable in arousal-seeking. Other evidence, however, suggested that measurement problems continue to hamper research: the total score was moderately reliable, but the subscales were only marginally…

Ridgeway, Doreen; Russell, James A.

1980-01-01

56

The Tridimensional Personality Questionnaire: Reliability and Validity Studies and Derivation of a Short Form  

Microsoft Academic Search

A series of interrelated analyses were conducted on 2 samples of college students to examine the reliability and validity of the Tridimensional Personality Questionnaire (TPQ) and to develop and validate a short version of the scale. Factor analyses were conducted and tended to approximate Cloninger's proposed model. Novelty Seeking predicted a range of substance use and abuse measures, and substance…

Kenneth J. Sher; Mark D. Wood; Timothy M. Crews; P. A. Vandiver

1995-01-01

57

The Queensland high risk foot form (QHRFF) - is it a reliable and valid clinical research tool for foot disease?  

PubMed Central

Background Foot disease complications, such as foot ulcers and infection, contribute to considerable morbidity and mortality. These complications are typically precipitated by “high-risk factors”, such as peripheral neuropathy and peripheral arterial disease. High-risk factors are more prevalent in specific “at risk” populations such as diabetes, kidney disease and cardiovascular disease. To the best of the authors’ knowledge a tool capturing multiple high-risk factors and foot disease complications in multiple at risk populations has yet to be tested. This study aimed to develop and test the validity and reliability of a Queensland High Risk Foot Form (QHRFF) tool. Methods The study was conducted in two phases. Phase one developed a QHRFF using an existing diabetes foot disease tool, literature searches, stakeholder groups and an expert panel. Phase two tested the QHRFF for validity and reliability. Four clinicians, representing different levels of expertise, were recruited to test validity and reliability. Three cohorts of patients were recruited; one tested criterion measure reliability (n = 32), another tested criterion validity and inter-rater reliability (n = 43), and another tested intra-rater reliability (n = 19). Validity was determined using sensitivity, specificity and positive predictive values (PPV). Reliability was determined using Kappa, weighted Kappa and intra-class correlation (ICC) statistics. Results A QHRFF tool containing 46 items across seven domains was developed. Criterion measure reliability of at least moderate categories of agreement (Kappa > 0.4; ICC > 0.75) was seen in 91% (29 of 32) tested items. Criterion validity of at least moderate categories (PPV > 0.7) was seen in 83% (60 of 72) tested items. Inter- and intra-rater reliability of at least moderate categories (Kappa > 0.4; ICC > 0.75) was seen in 88% (84 of 96) and 87% (20 of 23) tested items respectively. Conclusions The QHRFF had acceptable validity and reliability across the majority of items; particularly items identifying relevant co-morbidities, high-risk factors and foot disease complications. Recommendations have been made to improve or remove identified weaker items for future QHRFF versions. Overall, the QHRFF possesses suitable practicality, validity and reliability to assess and capture relevant foot disease items across multiple at risk populations.

2014-01-01
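
Cohen's Kappa, the chance-corrected agreement statistic behind thresholds like Kappa > 0.4 above, is simple to compute for two raters. A minimal sketch on invented binary ratings:

    import numpy as np

    def cohen_kappa(r1, r2):
        """Cohen's kappa for two raters over the same set of items."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        p_obs = np.mean(r1 == r2)                 # observed agreement
        cats = np.union1d(r1, r2)
        p_exp = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
        return (p_obs - p_exp) / (1 - p_exp)      # chance-corrected

    rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0]
    rater2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0]
    print(f"kappa = {cohen_kappa(rater1, rater2):.2f}")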

58

The relative noise levels of parallel axis gear sets with various contact ratios and gear tooth forms  

NASA Technical Reports Server (NTRS)

The real noise reduction benefits which may be obtained through the use of one gear tooth form as compared to another is an important design parameter for any geared system, especially for helicopters in which both weight and reliability are very important factors. This paper describes the design and testing of nine sets of gears which are as identical as possible except for their basic tooth geometry. Noise measurements were made at various combinations of load and speed for each gear set so that direct comparisons could be made. The resultant data was analyzed so that valid conclusions could be drawn and interpreted for design use.

Drago, Raymond J.; Lenski, Joseph W., Jr.; Spencer, Robert H.; Valco, Mark; Oswald, Fred B.

1993-01-01

59

Preparation, Thermal Properties and Thermal Reliability of Form-Stable Paraffin\\/Polypropylene Composite for Thermal Energy Storage  

Microsoft Academic Search

This study is focused on the preparation, characterization, and determination of thermal properties and thermal reliability of paraffin/polypropylene (PP) composite as a novel form-stable phase change material (PCM) for thermal energy storage applications. In the composite, paraffin acts as the PCM while PP serves as the supporting material. The composites prepared at different mass fractions of paraffin (50, 60, 70,…

Cemil Alkan; Kemal Kaya; Ahmet Sarı

2009-01-01

60

Do cataclastic deformation bands form parallel to lines of no finite elongation (LNFE) or zero extension directions?  

NASA Astrophysics Data System (ADS)

Conjugate cataclastic deformation bands cut unconsolidated sand and gravel at McKinleyville, California, and dip shallowly towards the north-northeast and south-southwest. The acute dihedral angle between the two sets of deformation bands is 47° and is bisected by the sub-horizontal, north-northeast directed incremental and finite shortening directions. Trishear models of fault propagation folding above the McKinleyville fault predict two sets of LNFE (lines of no finite elongation) that plunge steeply and shallowly to the south and north. These predictions are inconsistent with deformation band orientations and suggest that deformation bands did not form parallel to these LNFE. During plane strain, zero extension directions with acute dihedral angles of 47° develop when the dilatancy rate (dV/dε1) is -4.3. Experimental dilatancy rates for Vosges sandstone (cohesion > 0) and unconsolidated Hostun sand suggest the deformation bands either developed parallel to zero extension directions or in accordance with the Mohr-Coulomb criterion, assuming initial porosities of 22% and 39%, respectively. An empirical relationship between dV/dε1, relative density and mean stress suggests that dilatancy rates for Vosges sandstone overestimate dV/dε1 at McKinleyville. Deformation bands at McKinleyville likely developed either in a Mohr-Coulomb orientation, or an intermediate orientation bounded by the Mohr-Coulomb (θC) and Roscoe (θR) angles.

Imber, Jonathan; Perry, Tom; Jones, Richard R.; Wightman, Ruth H.

2012-12-01

61

Reliability and validity of the parent form of the social competence scale in Chinese preschoolers.  

PubMed

The Parent Form of the Social Competence Scale (SCS-PF) was translated into Chinese and validated in a sample of Chinese preschool children (N = 443). Results confirmed a single dimension and high internal consistency in the SCS-PF. Mothers' ratings on the SCS-PF correlated moderately with teachers' ratings on the Teacher Form of the Social Competence Scale and weakly with teachers' ratings on the Student-Teacher Relationship Scale. PMID:23045868

Zhang, Xiao; Ke, Xue; Wang, Xiaoyan

2012-08-01

62

Self-Formed Barrier with Cu-Mn alloy Metallization and its Effects on Reliability  

SciTech Connect

Advancement of semiconductor devices requires the realization of an ultra-thin (less than 5 nm thick) diffusion barrier layer between Cu interconnect and insulating layers. Self-forming barrier layers have been considered as an alternative barrier structure to the conventional Ta/TaN barrier layers. The present work investigated the possibility of the self-forming barrier layer using Cu-Mn alloy thin films deposited directly on SiO2. After annealing at 450 °C for 30 min, an amorphous oxide layer of 3-4 nm in thickness was formed uniformly at the interface. The oxide formation was accompanied by complete expulsion of Mn atoms from the Cu-Mn alloy, leading to a drastic decrease in resistivity of the film. No interdiffusion was observed between Cu and SiO2, indicating an excellent diffusion-barrier property of the interface oxide.

Koike, J.; Wada, M. [Dept. of Materials Science, Tohoku University, Sendai 980-8579 (Japan); Usui, T.; Nasu, H.; Takahashi, S.; Shimizu, N.; Yoshimaru, M.; Shibata, H. [Semiconductor Technology Academic Research Center (STARC), Yokohama, 222-0033 (Japan)

2006-02-07

63

Development and reliability testing of a food store observation form. — Measures of the Food Environment  

Cancer.gov


64

Assessment of the Reliability and Validity of the Discrete-Trials Teaching Evaluation Form  

ERIC Educational Resources Information Center

Discrete-trials teaching (DTT) is a frequently used method for implementing Applied Behavior Analysis treatment with children with autism. Fazzio, Arnal, and Martin (2007) developed a 21-component checklist, the Discrete-Trials Teaching Evaluation Form (DTTEF), for assessing instructors conducting DTT. In Phase 1 of this research, three experts on…

Babel, Danielle A.; Martin, Garry L.; Fazzio, Daniela; Arnal, Lindsay; Thomson, Kendra

2008-01-01

65

Defining the "Correct Form": Using Biomechanics to Develop Reliable and Valid Assessment Instruments  

ERIC Educational Resources Information Center

Physical educators should be able to define the "correct form" they expect to see each student performing in their classes. Moreover, they should be able to go beyond assessing students' skill levels by measuring the outcomes (products) of movements (i.e., how far they throw the ball or how many successful attempts are completed) or counting the…

Satern, Miriam N.

2011-01-01

66

Novel N2O-oxynitridation technology for forming highly reliable EEPROM tunnel oxide films  

Microsoft Academic Search

Ultrathin (≃6 nm) oxynitrided SiO2 (SiOxNy) films have been formed on Si(100) by rapid thermal processing (RTP) in an N2O ambient. It is demonstrated that with this technology the generation of electron traps in bulk SiO2 and the low-field leakage during Fowler-Nordheim electron injection can be greatly reduced. This behavior of SiOxNy film can be explained by the idea

H. Fukuda; M. Yasuda; T. Iwabuchi; S. Ohno

1991-01-01

67

The Four Canonical TPR Subunits of Human APC/C Form Related Homo-Dimeric Structures and Stack in Parallel to Form a TPR Suprahelix

PubMed Central

The anaphase-promoting complex or cyclosome (APC/C) is a large E3 RING-cullin ubiquitin ligase composed of between 14 and 15 individual proteins. A striking feature of the APC/C is that only four proteins are involved in directly recognizing target proteins and catalyzing the assembly of a polyubiquitin chain. All other subunits, which account for > 80% of the mass of the APC/C, provide scaffolding functions. A major proportion of these scaffolding subunits are structurally related. In metazoans, there are four canonical tetratricopeptide repeat (TPR) proteins that form homo-dimers (Apc3/Cdc27, Apc6/Cdc16, Apc7 and Apc8/Cdc23). Here, we describe the crystal structure of the N-terminal homo-dimerization domain of Schizosaccharomyces pombe Cdc23 (Cdc23Nterm). Cdc23Nterm is composed of seven contiguous TPR motifs that self-associate through a related mechanism to those of Cdc16 and Cdc27. Using the Cdc23Nterm structure, we generated a model of full-length Cdc23. The resultant “V”-shaped molecule docks into the Cdc23-assigned density of the human APC/C structure determined using negative stain electron microscopy (EM). Based on sequence conservation, we propose that Apc7 forms a homo-dimeric structure equivalent to those of Cdc16, Cdc23 and Cdc27. The model is consistent with the Apc7-assigned density of the human APC/C EM structure. The four canonical homo-dimeric TPR proteins of human APC/C stack in parallel on one side of the complex. Remarkably, the uniform relative packing of neighboring TPR proteins generates a novel left-handed suprahelical TPR assembly. This finding has implications for understanding the assembly of other TPR-containing multimeric complexes.

Zhang, Ziguo; Chang, Leifu; Yang, Jing; Conin, Nora; Kulkarni, Kiran; Barford, David

2013-01-01

68

Reliability and validity of the Turkish version short-form McGill pain questionnaire in patients with rheumatoid arthritis.  

PubMed

The translation of existing pain measurement scales is considered important in producing internationally comparable measures for evidence based practice. In measuring the pain experience, the short-form of McGill's pain questionnaire (SF-MPQ) is one of the most widely used and translated instruments. The purpose of this study was to examine whether the Turkish version of the SF-MPQ is a valid and reliable tool to assess pain and to be used as a clinical and research instrument. Translation and retranslation of the English version of the SF-MPQ was done blindly and independently by four individuals and adapted by a team. Eighty-nine rheumatological patients awaiting a check-up by a rheumatologist were assessed with the Turkish version of the SF-MPQ in the morning and in the afternoon of the same day. Internal consistency was found adequate at both assessments, with Cronbach's alpha 0.705 for test and 0.713 for retest. For the total, sensory, and affective scores and the evaluative total pain intensity, high intraclass correlations were demonstrated (0.891, 0.868, 0.716, and 0.796, respectively). Correlation of total, sensory and affective scores with the numeric rating scale was tested for construct validity, demonstrating r = 0.637 (p < 0.001) for test and r = 0.700 (p < 0.001) for retest. Correlation with erythrocyte sedimentation rates for concurrent validity was found to be r = 0.518 (p < 0.001) for test and r = 0.497 (p < 0.001) for retest. The results of this study indicate that the Turkish version of the SF-MPQ is a reliable and valid instrument for the measurement of pain in Turkish speaking patients with rheumatoid arthritis. PMID:17106618

Yakut, Yavuz; Yakut, Edibe; Bayar, Kiliçhan; Uygur, Fatma

2007-07-01

69

Parallel Programming with Polaris  

Microsoft Academic Search

Parallel programming tools are limited, making effective parallel programming difficult and cumbersome. Compilers that translate conventional sequential programs into parallel form would liberate programmers from the complexities of explicit, machine oriented parallel programming. The paper discusses parallel programming with Polaris, an experimental translator of conventional Fortran programs that targets machines such as the Cray T3D

William Blume; Ramon Doallo; Rudolf Eigenmann; John Grout; Jay Hoeflinger; Thomas Lawrence; Jaejin Lee; David A. Padua; Yunheung Paek; William M. Pottenger; Lawrence Rauchwerger; Peng Tu

1996-01-01

70

Initial validation of the Spanish childhood trauma questionnaire-short form: factor structure, reliability and association with parenting.  

PubMed

The present study examines the internal consistency and factor structure of the Spanish version of the Childhood Trauma Questionnaire-Short Form (CTQ-SF) and the association between the CTQ-SF subscales and parenting style. Cronbach's α and confirmatory factor analyses (CFA) were performed in a female clinical sample (n = 185). Kendall's τ correlations were calculated between the maltreatment and parenting scales in a subsample of 109 patients. The Spanish CTQ-SF showed adequate psychometric properties and a good fit of the 5-factor structure. The neglect and abuse scales were negatively associated with parental care and positively associated with overprotection scales. The results of this study provide initial support for the reliability and validity of the Spanish CTQ-SF. PMID:23266990

Hernandez, Ana; Gallardo-Pujol, David; Pereda, Noemí; Arntz, Arnoud; Bernstein, David P; Gaviria, Ana M; Labad, Antonio; Valero, Joaquín; Gutiérrez-Zotes, Jose Alfonso

2013-05-01

71

Comparison of Educators' and Industrial Managers' Work Motivation Using Parallel Forms of the Work Components Study Questionnaire.  

ERIC Educational Resources Information Center

The idea that educators would differ from business managers on Herzberg's motivation factors and Blum's security orientations was posited. Parallel questionnaires were used to measure the motivational variables. The sample was composed of 432 teachers, 118 administrators, and 192 industrial managers. Data were analyzed using multivariate and…

Thornton, Billy W.; And Others

72

Farsi Version of Social Skills Rating System-Secondary Student Form: Cultural Adaptation, Reliability and Construct Validity  

PubMed Central

Objective: Assessment of social skills is a necessary requirement to develop and evaluate the effectiveness of cognitive and behavioral interventions. This paper reports the cultural adaptation and psychometric properties of the Farsi version of the social skills rating system-secondary students form (SSRS-SS) questionnaire (Gresham and Elliot, 1990), in a normative sample of secondary school students. Methods: A two-phase design was used: phase 1 consisted of the linguistic adaptation, and in phase 2, using cross-sectional sample survey data, the construct validity and reliability of the Farsi version of the SSRS-SS were examined in a sample of 724 adolescents aged 13 to 19 years. Results: The content validity index was excellent, and the floor/ceiling effects were low. After deleting five of the original SSRS-SS items, the findings gave support for item convergent and divergent validity. Factor analysis revealed four subscales. Results showed good internal consistency (0.89) and temporal stability (0.91) for the total scale score. Conclusion: The findings support the use of the 27-item Farsi version in the school setting. Directions for future research regarding the applicability of the scale in other settings and populations of adolescents are discussed.

Eslami, Ahmad Ali; Amidi Mazaheri, Maryam; Mostafavi, Firoozeh; Abbasi, Mohamad Hadi; Noroozi, Ensieh

2014-01-01

73

Binding of oligonucleotides to a viral hairpin forming RNA triplexes with parallel G*G·C triplets

PubMed Central

Infrared and UV spectroscopies have been used to study the assembly of a hairpin nucleotide sequence (nucleotides 3–30) of the 5′ non-coding region of the hepatitis C virus RNA (5′-GGCGGGGAUUAUCCCCGCUGUGAGGCGG-3′) with an RNA 20mer ligand (5′-CCGCCUCACAAAGGUGGGGU-3′) in the presence of magnesium ion and spermidine. The resulting complex involves two helical structural domains: the first is an intermolecular duplex stem at the bottom of the target hairpin, and the second is a parallel triplex generated by the intramolecular hairpin duplex and the ligand. Infrared spectroscopy shows that N-type sugars are exclusively present in the complex. This is the first reported formation of an RNA parallel triplex with the purine motif, and it shows that this type of targeting of RNA strands to viral RNA duplexes can be used as an alternative to antisense oligonucleotides or ribozymes.

Carmona, Pedro; Molina, Marina

2002-01-01

74

The Reliability of Simple, Direct Measures of Written Expression.  

ERIC Educational Resources Information Center

The reliability of four measures of written expression was examined (total words written, mature words, words spelled correctly, and letters in sequence). Subjects included elementary-age students in several school districts, some of whom were learning disabled. Results revealed high coefficients for test-retest reliability, parallel-form

Marston, Doug; Deno, Stanley

75

Verbal and Visual Parallelism  

ERIC Educational Resources Information Center

This study investigates the practice of presenting multiple supporting examples in parallel form. The elements of parallelism and its use in argument were first illustrated by Aristotle. Although real texts may depart from the ideal form for presenting multiple examples, rhetorical theory offers a rationale for minimal, parallel presentation. The…

Fahnestock, Jeanne

2003-01-01

76

Test reliability and effective test length  

Microsoft Academic Search

Measures of effective test length, independent of the number of items in the test or of the time required for administration, are developed for speeded and power tests. These measures are used in determining reliability for (1) speeded and power tests, where a separately timed short parallel form is administered in addition to the full-length test; (2) power

William H. Angoff

1953-01-01
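Effective-length measures of this kind rest on the classical relation between reliability and test length. For orientation, the standard Spearman-Brown formula is sketched below (background material, not a reconstruction of Angoff's own derivation):

    % Reliability r_k of a test lengthened by a factor k, given reliability r:
    r_k = \frac{k\,r}{1 + (k - 1)\,r}
    % Solving for k gives the length factor needed to reach a target r_k:
    k = \frac{r_k\,(1 - r)}{r\,(1 - r_k)}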

77

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 1. Technical Report #1216  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due…

Anderson, Daniel; Park, Jasmine, Bitnara; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

2012-01-01

78

The French-Canadian Version of the Self-Report Coping Scale: Estimates of the Reliability, Validity, and Development of a Short Form  

ERIC Educational Resources Information Center

This investigation was conducted to explore the reliability and validity of scores on the French Canadian version of the Self-Report Coping Scale (SRCS; D. L. Causey & E. F. Dubow, 1992) and that of a short form of the SRCS. Evidence provides initial support for construct validity by replication of the factor structure and correlations with…

Hebert, Martine; Parent, Nathalie; Daignault, Isabelle V.

2007-01-01

79

The major G-quadruplex formed in the human BCL-2 proximal promoter adopts a parallel structure with a 13-nt loop in K+ solution.  

PubMed

The human BCL-2 gene contains a 39-bp GC-rich region upstream of the P1 promoter that has been shown to be critically involved in the regulation of BCL-2 gene expression. Inhibition of BCL-2 expression can decrease cellular proliferation and enhance the efficacy of chemotherapy. Here we report the major G-quadruplex formed in the Pu39 G-rich strand in this BCL-2 promoter region. The 1245G4 quadruplex adopts a parallel structure with one 13-nt and two 1-nt chain-reversal loops. The 1245G4 quadruplex involves four nonsuccessive G-runs, I, II, IV, V, unlike the previously reported bcl2 MidG4 quadruplex formed on the central four G-runs. The parallel 1245G4 quadruplex with the 13-nt loop, unexpectedly, appears to be more stable than the mixed parallel/antiparallel MidG4. Parallel-stranded structures with two 1-nt loops and one variable-length middle loop are found to be prevalent in the promoter G-quadruplexes; the variable middle loop is suggested to determine the specific overall structure and potential ligand recognition site. A limit of 7 nt in loop length is used in all quadruplex-predicting software. Thus, the formation and high stability of the 1245G4 quadruplex with a 13-nt loop is significant. The presence of two distinct interchangeable G-quadruplexes in the overlapping region of the BCL-2 promoter is intriguing, suggesting a novel mechanism for gene transcriptional regulation and ligand modulation. PMID:24450880

Agrawal, Prashansa; Lin, Clement; Mathad, Raveendra I; Carver, Megan; Yang, Danzhou

2014-02-01

80

Factor structure, reliability, and known groups validity of the German version of the Childhood Trauma Questionnaire (Short-form) in Swiss patients and nonpatients.  

PubMed

The Childhood Trauma Questionnaire-Short Form is the most widely used instrument to assess childhood trauma and has been translated into 10 languages. However, research into validity and reliability of these translated versions is scarce. The present study aimed to investigate the factor structure, internal consistency, reliability, and known-groups validity of the German Childhood Trauma Questionnaire-Short Form (Bernstein & Fink, 1998). Six hundred and sixty-one clinical and nonclinical participants completed the German Childhood Trauma Questionnaire-Short Form. A confirmatory factor analysis was conducted to assess the 5-factor structure of the original Childhood Trauma Questionnaire-Short Form. To investigate known-groups validity, the confirmatory factor analysis latent factor levels between clinical and nonclinical participants were compared. The original 5-factor structure was confirmed, with only the Physical Neglect scale showing rather poor fit. As a conclusion, the results support the validity and reliability of the German Childhood Trauma Questionnaire-Short Form. It is recommended to use the German Childhood Trauma Questionnaire-Short Form to assess experiences of childhood trauma. PMID:24641795

Karos, Kai; Niederstrasser, Nils; Abidi, Latifa; Bernstein, David P; Bader, Klaus

2014-01-01

81

Validity and Reliability of the Turkish Form of Technology-Rich Outcome-Focused Learning Environment Inventory  

ERIC Educational Resources Information Center

The purpose of the study was to investigate the reliability and validity of a Turkish adaptation of Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI) which was developed by Aldridge, Dorman, and Fraser. A sample of 985 students from 16 high schools (Grades 9-12) participated in the study. Translation process followed…

Cakir, Mustafa

2011-01-01

82

Comparability, Reliability, and Practice Effects on Alternate Forms of the Digit Symbol Substitution and Symbol Digit Modalities Tests  

ERIC Educational Resources Information Center

The present study examined the comparability of 4 alternate forms of the Digit Symbol Substitution test and the Symbol Digit Modalities (written) test, including the original versions. Male contact-sport athletes (N=112) were assessed on 1 of the 4 forms of each test. Reasonable alternate form comparability was demonstrated through establishing…

Hinton-Bayre, Anton; Geffen, Gina

2005-01-01

83

Solution structure of the major G-quadruplex formed in the human VEGF promoter in K+: insights into loop interactions of the parallel G-quadruplexes  

PubMed Central

Vascular endothelial growth factor (VEGF) proximal promoter region contains a poly G/C-rich element that is essential for basal and inducible VEGF expression. The guanine-rich strand on this tract has been shown to form the DNA G-quadruplex structure, whose stabilization by small molecules can suppress VEGF expression. We report here the nuclear magnetic resonance structure of the major intramolecular G-quadruplex formed in this region in K+ solution using the 22mer VEGF promoter sequence with G-to-T mutations of two loop residues. Our results have unambiguously demonstrated that the major G-quadruplex formed in the VEGF promoter in K+ solution is a parallel-stranded structure with a 1:4:1 loop-size arrangement. A unique capping structure was shown to form in this 1:4:1 G-quadruplex. Parallel-stranded G-quadruplexes are commonly found in the human promoter sequences. The nuclear magnetic resonance structure of the major VEGF G-quadruplex shows that the 4-nt middle loop plays a central role for the specific capping structures and in stabilizing the most favored folding pattern. It is thus suggested that each parallel G-quadruplex likely adopts unique capping and loop structures by the specific middle loops and flanking segments, which together determine the overall structure and specific recognition sites of small molecules or proteins. LAY SUMMARY: The human VEGF is a key regulator of angiogenesis and plays an important role in tumor survival, growth and metastasis. VEGF overexpression is frequently found in a wide range of human tumors; the VEGF pathway has become an attractive target for cancer therapeutics. DNA G-quadruplexes have been shown to form in the proximal promoter region of VEGF and are amenable to small molecule drug targeting for VEGF suppression. The detailed molecular structure of the major VEGF promoter G-quadruplex reported here will provide an important basis for structure-based rational development of small molecule drugs targeting the VEGF G-quadruplex for gene suppression.

Agrawal, Prashansa; Hatzakis, Emmanuel; Guo, Kexiao; Carver, Megan; Yang, Danzhou

2013-01-01

84

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Passage Reading Fluency Assessments: Grade 4. Technical Report #1219  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

85

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 2. Technical Report #1217  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due to…

Anderson, Daniel; Lai, Cheng-Fei; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

2012-01-01

86

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 5. Technical Report #1220  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

Lai, Cheng-Fei; Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

2012-01-01

87

Reliability and Validity of the Korean Version of the Childhood Trauma Questionnaire-Short Form for Psychiatric Outpatients  

PubMed Central

Objective The Childhood Trauma Questionnaire (CTQ) is perhaps the most widely used and well-studied retrospective measure of childhood abuse or neglect. This study tested the initial reliability and validity of a Korean translation of the Childhood Trauma Questionnaire (CTQ-K) among non-psychotic psychiatric outpatients. Methods The CTQ-K was administered to a total of 163 non-psychotic psychiatric outpatients at a university-affiliated training hospital. Internal consistency, four-week test-retest reliability, and validity were calculated. A portion of the participants (n=65) also completed the Trauma Assessment Questionnaire (TAQ), the Impact of Events Scale-Revised, and the Dissociative Experiences Scale-Taxon. Results Four-week test-retest reliability was high (r=0.87) and internal consistency was good (Cronbach's α=0.88). Each type of childhood trauma was significantly correlated with the corresponding subscale of the TAQ, thus confirming its concurrent validity. In addition, the CTQ-K total score was positively related to post-traumatic symptoms and pathological dissociation, demonstrating the convergent validity of the scale. The CTQ-K was also negatively correlated with the competence and safety subscale of the TAQ, confirming discriminant validity. Additionally, we confirmed the factorial validity by identifying a five-factor structure that explained 64% of the total variance. Conclusion Our study indicates that the CTQ-K is a measure of psychometric soundness that can be used to assess childhood abuse or neglect in Korean patients. It also supports the cross-cultural equivalence of the scale.

Park, Seon-Cheol; Yang, Hyunjoo; Oh, Dong Hoon

2011-01-01

88

American Shoulder and Elbow Surgeons Standardized Shoulder Assessment Form, patient self-report section: Reliability, validity, and responsiveness  

Microsoft Academic Search

The purpose of this study was to examine the psychometric properties of the American Shoulder and Elbow Surgeons Standardized Shoulder Assessment Form (ASES), patient self-report section. Patients with shoulder dysfunction (n = 63) completed the ASES, The University of Pennsylvania Shoulder Score, and the Short Form-36 during the initial evaluation, 24 to 72 hours after the initial

Lori A. Michener; Philip W. McClure; Brian J. Sennett

89

An Investigation of Psychometric Properties of Coping Styles Scale Brief Form: A Study of Validity and Reliability  

ERIC Educational Resources Information Center

The aim of the current study was to develop a short form of the Coping Styles Scale based on the COPE Inventory. A total of 275 undergraduate students (114 female and 74 male) participated in the first study. In order to test the factor structure of the Coping Styles Scale Brief Form, principal components factor analysis and direct oblique rotation was…

Bacanli, Hasan; Surucu, Mustafa; Ilhan, Tahsin

2013-01-01

90

Introns form compositional clusters in parallel with the compositional clusters of the coding sequences to which they pertain.  

PubMed

This report deals with the study of compositional properties of human gene sequences, evaluating similarities and differences among functionally distinct sectors of the gene independently of the reading frame. To retrieve the compositional information of DNA, we present a neighbor-base-dependent coding system in which the alphabet of 64 letters (DNA triplets) is compressed to an alphabet of 14 letters here termed triplet composons. The triplets containing the same set of distinct bases, in whatever order and number, form a triplet composon. The reading of the DNA sequence is performed starting at any letter of the initial triplet and then moving triplet-to-triplet until the end of the sequence. The readings were made in an overlapping way along the length of the sequences. The analysis of the compositional content in terms of the composon usage frequencies of the gene sequences shows that: (i) the compositional content of the sequences is far from that of random sequences, even in the case of non-protein coding sequences; (ii) coding sequences can be classified as components of compositional clusters; and (iii) intron sequences in a cluster have the same composon usage frequencies, even as their base composition differs notably from that of their home coding sequences. A comparison of the composon usage frequencies between human and mouse homologous genes indicated that two clusters found in humans do not have their counterpart in mouse, whereas the other clusters are stable in both species with respect to their composon usage frequencies in both coding and noncoding sequences. PMID:21132282

Fuertes, Miguel A; Pérez, José M; Zuckerkandl, Emile; Alonso, Carlos

2011-01-01
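The coding scheme described above is concrete enough to sketch directly. A minimal illustration (function names are mine; it assumes the "overlapping" reading means a step of one base, which the abstract implies but does not state exactly):

    from collections import Counter

    def composon(triplet):
        # A composon is the set of distinct bases in a triplet, ignoring
        # order and multiplicity; the 64 triplets collapse to 14 composons
        # (4 one-base, 6 two-base, and 4 three-base sets).
        return "".join(sorted(set(triplet)))

    def composon_usage(seq):
        # Overlapping reading: every window of three consecutive bases is
        # scored, so the result does not depend on the reading frame.
        counts = Counter(composon(seq[i:i + 3]) for i in range(len(seq) - 2))
        total = sum(counts.values())
        return {c: n / total for c, n in sorted(counts.items())}

    print(composon_usage("GGCGGGGATTATCCCCGCTG"))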

91

A 12-Item Short-Form Health Survey: construction of scales and preliminary tests of reliability and validity

Microsoft Academic Search

Regression methods were used to select and score 12 items from the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36) to reproduce the Physical Component Summary and Mental Component Summary scales in the general US population (n=2,333). The resulting 12-item short-form (SF-12) achieved multiple R squares of 0.911 and 0.918 in predictions of the SF-36 Physical Component Summary and SF-36

Ware John E. Jr; Mark Kosinski; Susan D. Keller

1996-01-01

92

Reliability, validity, and utility of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) in assessments of bariatric surgery candidates.  

PubMed

In the current study, we examined the reliability, validity, and clinical utility of Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2011) scores in a sample of 759 bariatric surgery candidates. We provide descriptives for all scales, internal consistency and standard error of measurement estimates for all substantive scales, external correlates of substantive scales using chart review and self-report criteria, and relative risk ratios to assess the clinical utility of the instrument. Results generally support the reliability, validity, and clinical utility of MMPI-2-RF scale scores in the psychological evaluation of bariatric surgery candidates. Limitations, future directions, and practical application of these results are discussed. PMID:23914953

Tarescavage, Anthony M; Wygant, Dustin B; Boutacoff, Lana I; Ben-Porath, Yossef S

2013-12-01

93

Japanese Version of Home Form of the ADHD-RS: An Evaluation of Its Reliability and Validity  

ERIC Educational Resources Information Center

Using the Japanese version of the home form of the ADHD-RS, this survey attempted to compare scores between the US and Japan and examined the correlates of the ADHD-RS. We collected responses from parents or guardians of 5977 children (3119 males and 2858 females) in nursery, elementary, and lower-secondary schools. A confirmatory factor analysis of…

Tani, Iori; Okada, Ryo; Ohnishi, Masafumi; Nakajima, Shunji; Tsujii, Masatsugu

2010-01-01

94

Female genital mutilation in Sierra Leone: forms, reliability of reported status, and accuracy of related Demographic and Health Survey questions.

PubMed

Objective. To determine forms of female genital mutilation (FGM), assess consistency between self-reported and observed FGM status, and assess the accuracy of Demographic and Health Surveys (DHS) FGM questions in Sierra Leone. Methods. This cross-sectional study, conducted between October 2010 and April 2012, enrolled 558 females aged 12-47 from eleven antenatal clinics in northeast Sierra Leone. Data on demography, FGM status, and self-reported anatomical descriptions were collected. Genital inspection confirmed the occurrence and extent of cutting. Results. All participants reported FGM status; 4 refused genital inspection. Using the WHO classification of FGM, 31.7% had type Ib; 64.1% type IIb; and 4.2% type IIc. There was a high level of agreement between reported and observed FGM prevalence (81.2% and 81.4%, resp.). There was no correlation between DHS FGM responses and anatomic extent of cutting, as 2.7% reported pricking; 87.1% flesh removal; and 1.1% that genitalia was sewn closed. Conclusion. Types I and II are the main forms of FGM, with labia majora alterations in almost 5% of cases. Self-reports on FGM status could serve as a proxy measurement for FGM prevalence but not for FGM type. The DHS FGM questions are inaccurate for determining cutting extent. PMID:24204384

Bjälkander, Owolabi; Grant, Donald S; Berggren, Vanja; Bathija, Heli; Almroth, Lars

2013-01-01

95

Reliability and validity of the Spanish version of the Child Health and Illness Profile (CHIP) Child-Edition, Parent Report Form (CHIP-CE/PRF)  

PubMed Central

Background The objectives of the study were to assess the reliability, and the content, construct, and convergent validity of the Spanish version of the CHIP-CE/PRF, to analyze parent-child agreement, and compare the results with those of the original U.S. version. Methods Parents from a representative sample of children aged 6-12 years were selected from 9 primary schools in Barcelona. Test-retest reliability was assessed in a convenience subsample of parents from 2 schools. Parents completed the Spanish version of the CHIP-CE/PRF. The Achenbach Child Behavioural Checklist (CBCL) was administered to a convenience subsample. Results The overall response rate was 67% (n = 871). There was no floor effect. A ceiling effect was found in 4 subdomains. Reliability was acceptable at the domain level (internal consistency = 0.68-0.86; test-retest intraclass correlation coefficients = 0.69-0.85). Younger girls had better scores on Satisfaction and Achievement than older girls. Comfort domain score was lower (worse) in children with a probable mental health problem, with high effect size (ES = 1.45). The level of parent-child agreement was low (0.22-0.37). Conclusions The results of this study suggest that the parent version of the Spanish CHIP-CE has acceptable psychometric properties although further research is needed to check reliability at sub-domain level. The CHIP-CE parent report form provides a comprehensive, psychometrically sound measure of health for Spanish children 6 to 12 years old. It can be a complementary perspective to the self-reported measure or an alternative when the child is unable to complete the questionnaire. In general, the results are similar to the original U.S. version.

2010-01-01

96

Data Parallelism and Functional Programming  

Microsoft Academic Search

Data parallelism is often seen as a form of explicit parallelism for SIMD and vector machines, and data parallel programming as an explicit programming paradigm for these architectures. Data parallel languages possess certain software qualities as well, which justifies their use in higher level programming and specification closer to the algorithm domain. Thus, it is interesting to study how the

Björn Lisper

1996-01-01
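As a minimal illustration of the data-parallel style the abstract discusses (a generic Python analogue, not one of the languages it surveys):

    from multiprocessing import Pool

    def normalize(x):
        # One pure operation applied uniformly to every element of a
        # collection -- the essence of data parallelism.
        return (x - 50.0) / 25.0

    if __name__ == "__main__":
        data = list(range(100))
        with Pool() as pool:
            result = pool.map(normalize, data)  # elements spread over workers
        print(result[:4])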

97

Braid: integrating task and data parallelism  

Microsoft Academic Search

Archetype data parallel or task parallel applications are well served by contemporary languages. However, for applications containing a balance of task and data parallelism the choice of language is less clear. While there are languages that enable both forms of parallelism, e.g., one can write data parallel programs using a task parallel language, there are few languages which support both.

Emily A. West; Andrew S. Grimshaw

1995-01-01

98

General peroxidase activity of a parallel G-quadruplex-hemin DNAzyme formed by Pu39WT - a mixed G-quadruplex forming sequence in the Bcl-2 P1 promoter  

PubMed Central

Background A 39-base-pair sequence (Pu39WT) located 58 to 19 base pairs upstream of the Bcl-2 P1 promoter has been implicated in the formation of an intramolecular mixed G-quadruplex structure and is believed to play a major role in the regulation of bcl-2 transcription. However, an extensive functional exploration requires further investigation. To further exploit the structure–function relationship of the Pu39WT-hemin DNAzyme, the secondary structure and peroxidase activity of the Pu39WT-hemin complex were investigated. Results Experimental results showed that when Pu39WT was incubated with hemin, it formed a uniparallel G-quadruplex-hemin complex in K+ or Na+ solution, rather than a mixed hybrid without bound hemin. Also, Pu39WT-hemin showed peroxidase activity toward ABTS²⁻ in the presence of H2O2, producing the colored radical anion (ABTS•⁻), which could then be used to determine the parameters governing the catalytic efficiency and reveal the peroxidase activity of the Pu39WT-hemin DNAzyme. Conclusions These results demonstrate the general peroxidase activity of the Pu39WT-hemin DNAzyme, which is an intramolecular parallel G-quadruplex structure. This peroxidase activity of hemin complexed with the G-quadruplex-forming sequence in the Bcl-2 gene promoter may imply a potential mechanism of hemin-mediated cellular injury.

2014-01-01

99

Parallel Rendering.  

National Technical Information Service (NTIS)

This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems...

T. W. Crockett

1995-01-01

100

Parallel processing  

SciTech Connect

This report examines the current techniques of parallel processing, transputers, vector and vector supercomputers and covers such areas as transputer applications, programming models and language design for parallel processing.

Jesshop, C.

1987-01-01

101

Improved electrical and reliability characteristics in metal/oxide/nitride/oxide/silicon capacitors with blocking oxide layers formed under the radical oxidation process.  

PubMed

We propose a Metal-Oxide-Nitride-Oxide-Silicon (MONOS) structure whose blocking oxide is formed by radical oxidation on the silicon nitride (Si3N4) layer to improve the electrical and reliability characteristics. We directly compare the electrical and reliability properties of the MONOS capacitors with two different blocking oxide (SiO2) layers, which are called a "radical oxide" grown by the radical oxidation and a "CVD oxide" deposited by chemical vapor deposition (CVD), respectively. The MONOS capacitor with a radical oxide shows a larger C-V memory window of 3.6 V at sweep voltages from 9 V to -9 V, faster program/erase speeds of 1 μs/1 ms at bias voltages of -6 V and 8 V, a lower leakage current of 7 pA and a longer data retention, compared to those of the MONOS capacitor with a CVD oxide. These improvements have been attributed to both high densification of the blocking oxide film and increased nitride-related memory traps at the interface between the blocking oxide and Si3N4 layer by radical oxidation. PMID:21128482

An, Ho-Myoung; Kim, Hee Dong; Seo, Yu Jeong; Kim, Kyoung Chan; Sung, Yun Mo; Koo, Sang-Mo; Koh, Jung-Hyuk; Kim, Tae Geun

2010-07-01

102

Parallel rendering  

NASA Technical Reports Server (NTRS)

This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

Crockett, Thomas W.

1995-01-01

103

Parallel Algorithms  

NSDL National Science Digital Library

Content prepared for the Supercomputing 2002 session on "Using Clustering Technologies in the Classroom". Contains a series of exercises for teaching parallel computing concepts through kinesthetic activities.

Gray, Paul

104

Parallel Optimisation  

NSDL National Science Digital Library

An introduction to optimisation techniques that may improve parallel performance and scaling on HECToR. It assumes that the reader has some experience of parallel programming, including basic MPI and OpenMP. Scaling is a measure of the ability of a parallel code to use increasing numbers of cores efficiently. A scalable application is one that, when the number of processors is increased, performs better by a factor which justifies the additional resource employed. Making a parallel application scale to many thousands of processes requires not only careful attention to the communication, data and work distribution but also to the choice of algorithms. Since the choice of algorithm is too broad a subject, and too particular to the application domain, to cover in this brief guide, we concentrate on general good practice towards parallel optimisation on HECToR.
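The notion of scaling sketched above can be made concrete with Amdahl's law (a textbook bound, not taken from the HECToR guide itself):

    def amdahl_speedup(serial_fraction, n_cores):
        # Ideal speedup when a fixed fraction of the runtime is serial.
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

    for n in (1, 16, 256, 4096):
        s = amdahl_speedup(0.05, n)
        # Efficiency (speedup per core) shows why performing "better by a
        # factor which justifies the additional resource" gets hard at scale.
        print(n, round(s, 1), round(s / n, 3))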

105

Item Selection for the Development of Parallel Forms from an IRT-Based Seed Test Using a Sampling and Classification Approach  

ERIC Educational Resources Information Center

Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…

Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan

2012-01-01

106

Comparative Study in the Reliability Allocation Process.  

National Technical Information Service (NTIS)

The purpose of the report was to investigate the differences in system reliability and system cost in the reliability allocation process between an independent assumption among operating components of an active-parallel-exponential system and a dependent ...

T. D. Cox

1972-01-01
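Under the independence assumption that the report contrasts with dependence, the reliability of an active-parallel system of exponential components has a standard closed form. A minimal sketch with hypothetical failure rates:

    import math

    def parallel_reliability(failure_rates, t):
        # Independent exponential components in active parallel:
        # R(t) = 1 - prod_i (1 - exp(-lambda_i * t)).
        unreliability = 1.0
        for lam in failure_rates:
            unreliability *= 1.0 - math.exp(-lam * t)
        return 1.0 - unreliability

    # Two redundant units with hypothetical failure rates (per hour):
    print(round(parallel_reliability([0.002, 0.003], t=500.0), 4))  # ~0.5089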

107

Parallel biocomputing  

PubMed Central

Background With the advent of high throughput genomics and high-resolution imaging techniques, there is a growing necessity in biology and medicine for parallel computing, and with the low cost of computing, it is now cost-effective for even small labs or individuals to build their own personal computation cluster. Methods Here we briefly describe how to use commodity hardware to build a low-cost, high-performance compute cluster, and provide an in-depth example and sample code for parallel execution of R jobs using MOSIX, a mature extension of the Linux kernel for parallel computing. A similar process can be used with other cluster platform software. Results As a statistical genetics example, we use our cluster to run a simulated eQTL experiment. Because eQTL is computationally intensive, and is conceptually easy to parallelize, like many statistics/genetics applications, parallel execution with MOSIX gives a linear speedup in analysis time with little additional effort. Conclusions We have used MOSIX to run a wide variety of software programs in parallel with good results. The limitations and benefits of using MOSIX are discussed and compared to other platforms.

2011-01-01
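The workflow described above is embarrassingly parallel, which is why the speedup is near linear. As a single-machine analogue (plain Python multiprocessing standing in for the R-under-MOSIX setup; the replicate function is a hypothetical stand-in for one eQTL simulation):

    from multiprocessing import Pool
    import random

    def one_replicate(seed):
        # Stand-in for one simulated eQTL replicate; tasks share no state,
        # the property behind the near-linear speedup the abstract reports.
        rng = random.Random(seed)
        return max(rng.gauss(0, 1) for _ in range(10_000))

    if __name__ == "__main__":
        with Pool() as pool:
            stats = pool.map(one_replicate, range(200))
        print(sum(stats) / len(stats))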

108

Parallel machines: Parallel machine languages  

SciTech Connect

This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view, with the objective of discovering the critical hardware structures which must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable general-purpose parallel computation. Linguistic concerns, compiling issues, intermediate language issues, and hardware/technological constraints are presented as a combined approach to architectural development. This book presents the notion of a parallel machine language.

Iannucci, R.A. (IBM (US))

1990-01-01

109

Scalable parallel communications  

NASA Technical Reports Server (NTRS)

Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth service to a single application); and (3) coarse grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism) also with near linear speed-ups.

Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

1992-01-01

110

Reliability Estimation Using Validity Coefficients.  

ERIC Educational Resources Information Center

Occasionally situations arise in which a measurement does not lend itself to such traditional methods of reliability estimation as the test-retest, parallel-test, or internal consistency methods. This paper proposes basing reliability estimation in such situations on estimates of validity coefficients as lower bounds. (Author/LMO)

Krammer, Hein, P. M.; Van Der Linden, Wim J.

1986-01-01
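The proposal rests on a standard classical-test-theory inequality; my gloss of the abstract's argument, using the usual attenuation bound:

    % An observed validity coefficient is bounded by the reliabilities:
    r_{XY} \le \sqrt{r_{XX}\, r_{YY}} \le \sqrt{r_{XX}}
    % so the squared validity is a lower bound on the reliability:
    r_{XX} \ge r_{XY}^{2}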

111

Integer Programming Formulation of Constrained Reliability Problems  

Microsoft Academic Search

This paper investigates the solution by integer programming of reliability optimization problems which are subject to linear and nonlinear separable restraints. In particular, the following problems are solved: (1) maximizing reliability for a parallel redundancy system subject to multiple linear restraints, (2) minimizing cost of a parallel redundancy system subject to multiple nonlinear and separable restraint functions while maintaining an

F. A. Tillman; J. M. Liittschwager

1967-01-01
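Problem (1) above, in its common series-of-parallel-stages form, is small enough to illustrate by brute force. A sketch with hypothetical numbers and a single linear budget, not the paper's integer-programming method:

    from itertools import product

    r = [0.90, 0.80, 0.95]   # per-stage component reliabilities (hypothetical)
    cost = [2.0, 3.0, 1.5]   # cost of one redundant unit per stage
    budget = 20.0

    def system_reliability(n):
        # Series chain of stages, each an active-parallel group of n[i] units.
        rel = 1.0
        for ri, ni in zip(r, n):
            rel *= 1.0 - (1.0 - ri) ** ni
        return rel

    feasible = (n for n in product(range(1, 6), repeat=3)
                if sum(c * k for c, k in zip(cost, n)) <= budget)
    best = max(feasible, key=system_reliability)
    print(best, round(system_reliability(best), 6))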

112

Parallel thinking  

Microsoft Academic Search

Assuming that the multicore revolution plays out the way the microprocessor industry expects, it seems that within a decade most programming will involve parallelism at some level. One needs to ask how this affects the way we teach computer science, or even how we have people think about computation. With regard to teaching, there seem to be three basic

Guy E. Blelloch

2009-01-01

113

Stitch-Bond Parallel-Gap Welding for IC Circuits: Stitch-bonded flatbacks can be superior to soldered dual-in-lines where size, weight, and reliability are important.  

National Technical Information Service (NTIS)

This citation summarizes a one-page announcement of technology available for utilization. Flatback integrated circuits installed by stitch-bond/parallel-gap welding can be considerably more economical for complex circuit boards than conventional solder-in...

1981-01-01

114

Improved techniques of parallel gap welding and monitoring  

NASA Technical Reports Server (NTRS)

Welding programs which show that parallel gap welding is a reliable process are discussed. When monitoring controls and nondestructive tests are incorporated into the process, parallel gap welding becomes more reliable and cost effective. The panel fabrication techniques and the HAC thermal cycling test indicate reliable product integrity. The design and building of automated tooling and fixturing for welding are discussed.

Mardesich, N.; Gillanders, M. S.

1984-01-01

115

Parallel processing and simulation  

Microsoft Academic Search

Summary form only given. High instruction execution rates may be achieved through a large number of inexpensive processors operating in parallel. The harnessing of this raw computing power to discrete event simulation applications is an active area of research. Three major approaches to the problem of assigning computational tasks to processing elements may be identified: (1) model based assignment, (2) local

John C. Comfort; David Jefferson; Y. V. Reddy; Paul Reynolds; Sallie Sheppard

1983-01-01

116

The MOS 36-item Short-Form Health Survey (SF36): III. Tests of data quality, scaling assumptions, and reliability across diverse patient groups  

Microsoft Academic Search

The widespread use of standardized health surveys is predicated on the largely untested assumption that scales constructed from those surveys will satisfy minimum psychometric requirements across diverse population groups. Data from the Medical Outcomes Study (MOS) were used to evaluate data completeness and quality, test scaling assumptions, and estimate internal-consistency reliability for the eight scales constructed from the MOS SF-36

Colleen A. McHorney; Ware John E. Jr; J. F. Rachel Lu; Cathy Donald Sherbourne

1994-01-01

117

Adaptive parallel logic networks  

NASA Technical Reports Server (NTRS)

Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

Martinez, Tony R.; Vidal, Jacques J.

1988-01-01
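A toy sketch of the rule-style specification mentioned above (names and rules are hypothetical; ASOCS realizes the rules as a concurrent combinational network, whereas this scan is sequential):

    # Each rule pairs a Boolean conjunction (variable -> required value)
    # with an output asserted when the conjunction holds.
    RULES = [
        ({"overheat": True, "fan_ok": False}, "shutdown"),
        ({"overheat": True, "fan_ok": True}, "throttle"),
    ]

    def evaluate(state):
        # In ASOCS every rule is checked concurrently by the adaptive
        # network; here we simply scan the rule list.
        return [out for cond, out in RULES
                if all(state.get(k) == v for k, v in cond.items())]

    print(evaluate({"overheat": True, "fan_ok": False}))  # ['shutdown']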

118

Parallelizing Timed Petri Net simulations  

NASA Technical Reports Server (NTRS)

The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

Nicol, David M.

1993-01-01

119

Sense Discrimination with Parallel Corpora  

Microsoft Academic Search

This paper describes an experiment that uses translation equivalents derived from parallel corpora to determine sense distinctions that can be used for automatic sense-tagging and other disambiguation tasks. Our results show that sense distinctions derived from cross-lingual information are at least as reliable as those made by human annotators. Because our approach is fully automated through all its steps, it

Nancy Ide; Tomaz Erjavec; Dan Tufis

2002-01-01

120

Evaluation of General Classes of Reliability Estimators Often Used in Statistical Analyses of Quasi-Experimental Designs  

NASA Astrophysics Data System (ADS)

In this paper, major reliability estimators are analyzed and their comparative results are discussed. Their strengths and weaknesses are evaluated in this case study. Each of the reliability estimators has certain advantages and disadvantages. Inter-rater reliability is one of the best ways to estimate reliability when your measure is an observation. However, it requires multiple raters or observers. As an alternative, you could look at the correlation of ratings of the same single observer repeated on two different occasions. Each of the reliability estimators will give a different value for reliability. In general, the test-retest and inter-rater reliability estimates will be lower in value than the parallel-forms and internal-consistency ones because they involve measuring at different times or with different raters; this matters because reliability estimates are often used in statistical analyses of quasi-experimental designs.

Saini, K. K.; Sehgal, R. K.; Sethi, B. L.

2008-10-01
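The estimator families compared above reduce to a few small formulas. A minimal sketch with synthetic scores (test-retest and parallel-forms estimates as a correlation between two administrations; internal consistency as Cronbach's alpha):

    import statistics as st

    def pearson(x, y):
        mx, my = st.fmean(x), st.fmean(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x)
               * sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den

    def cronbach_alpha(items):
        # items: one list of scores per item, respondents in the same order.
        k = len(items)
        totals = [sum(col) for col in zip(*items)]
        item_var = sum(st.variance(it) for it in items)
        return k / (k - 1) * (1 - item_var / st.variance(totals))

    time1 = [12, 15, 11, 18, 14]
    time2 = [13, 14, 12, 17, 15]
    print(round(pearson(time1, time2), 3))   # test-retest (same form twice)
    items = [[3, 4, 2, 5, 4], [2, 4, 3, 5, 3], [3, 5, 2, 4, 4]]
    print(round(cronbach_alpha(items), 3))   # internal consistency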

121

d(CGGTGGT) forms an octameric parallel G-quadruplex via stacking of unusual G(:C):G(:C):G(:C):G(:C) octads  

PubMed Central

Among non-canonical DNA secondary structures, G-quadruplexes are currently widely studied because of their probable involvement in many pivotal biological roles, and for their potential use in nanotechnology. The overall quadruplex scaffold can exhibit several morphologies through intramolecular or intermolecular organization of G-rich oligodeoxyribonucleic acid strands. In particular, several G-rich strands can form higher order assemblies by multimerization between several G-quadruplex units. Here, we report on the identification of a novel dimerization pathway. Our nuclear magnetic resonance, circular dichroism, UV, gel electrophoresis and mass spectrometry studies on the DNA sequence dCGGTGGT demonstrate that this sequence forms an octamer when annealed in the presence of K+ or NH4+ ions, through the 5′-5′ stacking of two tetramolecular G-quadruplex subunits via unusual G(:C):G(:C):G(:C):G(:C) octads.

Borbone, Nicola; Amato, Jussara; Oliviero, Giorgia; D'Atri, Valentina; Gabelica, Valerie; De Pauw, Edwin; Piccialli, Gennaro; Mayol, Luciano

2011-01-01

122

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Word and Passage Reading Fluency Assessments: Grade 3. Technical Report #1218  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

123

Reliability analysis  

NASA Technical Reports Server (NTRS)

The objective was to search for and demonstrate approaches and concepts for fast wafer-probe tests of mechanisms affecting the reliability of MOS technology and, based on these, to develop and optimize test chips and test procedures. Progress is reported on four important wafer-level reliability problems: gate-oxide radiation hardness; hot-electron effects; time-dependent dielectric breakdown; and electromigration.

1985-01-01

124

Reliability training  

NASA Technical Reports Server (NTRS)

Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

Lalli, Vincent R. (editor); Malec, Henry A. (editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

1992-01-01

125

Parallel processing and simulation  

SciTech Connect

Summary form only given. High instruction execution rates may be achieved through a large number of inexpensive processors operating in parallel. The harnessing of this raw computing power to discrete event simulation applications is an active area of research. Three major approaches to the problem of assigning computational tasks to processing elements may be identified: (1) model based assignment, (2) local function based assignment, and (3) global function based assignment.

Comfort, J.C.

1983-01-01

126

High Performance Parallel Computing.  

National Technical Information Service (NTIS)

The three major research areas have been parallel structuring of computations, basic software for support of parallel computations and parallel architectures and supporting hardware. The work on parallel structuring of computations falls into three catego...

J. C. Browne

1982-01-01

127

Parallel Computing Explained  

NSDL National Science Digital Library

Several tutorials on parallel computing. Overview of parallel computing. Porting and code parallelization. Scalar, cache, and parallel code tuning. Timing, profiling and performance analysis. Overview of IBM Regatta P690.

Ncsa

128

The Bergen left-right discrimination test: practice effects, reliable change indices, and strategic performance in the standard and alternate form with inverted stimuli.  

PubMed

Several authors have pointed out that left-right discrimination (LRD) tasks may be entangled with differential demands on mental rotation (MR). However, studies considering this interrelationship are rare. To differentially assess LRD of stimuli with varying additional demands on MR, we constructed and evaluated an extended version of the Bergen right-left discrimination (BRLD) test, including additional subtests with inverted stickmen stimuli, in 174 healthy participants (50 men, 124 women) and measured subjective reports on participants' strategies to accomplish the task. Moreover, we analyzed practice effects and reliable change indices (RCIs) on BRLD performance, as well as gender differences. Performance significantly differed between subtests with high and low demands on MR, with best scores on subtests with low demands on MR. Participants' subjective strategies corroborate these results: MR was most frequently reported for subtests with the highest MR demands (and lowest test performance). Pronounced practice effects were observed for all subtests. Sex differences were not observed. We conclude that our extended version of the BRLD allows for the differentiation between LRD with high and low demands on MR abilities. The type of stimulus materials is likely to be critical for the differential assessment of MR and LRD. Moreover, RCIs provide a basis for the clinical application of the BRLD. PMID:24174271
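The abstract does not spell out the RCI formula; a common textbook formulation (Jacobson-Truax) scales the observed change by the standard error of the difference implied by the measure's reliability. A minimal sketch with illustrative numbers:

```python
import math

def reliable_change_index(x1, x2, sd_baseline, r_xx):
    """Jacobson-Truax reliable change index (one common formulation).

    x1, x2      -- scores at time 1 and time 2
    sd_baseline -- standard deviation of baseline scores
    r_xx        -- reliability of the measure (e.g., test-retest)
    """
    sem = sd_baseline * math.sqrt(1.0 - r_xx)   # standard error of measurement
    se_diff = math.sqrt(2.0) * sem              # standard error of the difference
    return (x2 - x1) / se_diff

# |RCI| > 1.96 is conventionally read as change beyond measurement error.
print(reliable_change_index(x1=42, x2=55, sd_baseline=10.0, r_xx=0.85))
```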

Grewe, Philip; Ohmann, Hanno A; Markowitsch, Hans J; Piefke, Martina

2014-05-01

129

The feasibility, reliability and validity of the McGill Quality of Life Questionnaire-Cardiff Short Form (MQOL-CSF) in palliative care population  

Microsoft Academic Search

In terminally-ill patients, effective measurement of health-related quality of life (HRQoL) needs to be done while imposing minimal burden. In an attempt to ensure that routine HRQoL assessment is simple but capable of eliciting adequate information, the McGill Quality of Life Questionnaire-Cardiff Short Form (MQOL-CSF: 8 items) was developed from its original version, the McGill Quality of Life Questionnaire (MQOL:

Pei Lin Lua; Sam Salek; Ilora Finlay; Chris Lloyd-Richards

2005-01-01

130

Andorra-I: A Parallel Prolog System that Transparently Exploits both And and Or-Parallelism  

Microsoft Academic Search

Andorra-I is an experimental parallel Prolog system that transparently exploits both dependent and-parallelism and or-parallelism. It constitutes the first implementation of the Basic Andorra model, a parallel execution model for logic programs in which determinate goals are executed before other goals. This model, besides combining two of the most important forms of implicit parallelism in logic programs, also provides a

Vítor Santos Costa; David H. D. Warren; Rong Yang

1991-01-01

131

An efficient reliable broadcast protocol  

Microsoft Academic Search

Many distributed and parallel applications can make good use of broadcast communication. In this paper we present a (software) protocol that simulates reliable broadcast, even on an unreliable network. Using this protocol, application programs need not worry about lost messages. Recovery of communication failures is handled automatically and transparently by the protocol. In normal operation, our protocol is more efficient

M. Frans Kaashoek; Andrew S. Tanenbaum; Susan Flynn Hummel; Henri E. Bal

1989-01-01

132

Parallel Programming in the Age of Ubiquitous Parallelism  

NASA Astrophysics Data System (ADS)

Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs.

Pingali, Keshav

2014-04-01

133

Parallel pivoting combined with parallel reduction  

NASA Technical Reports Server (NTRS)

Parallel algorithms for the triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The parallel technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.

Alaghband, Gita

1987-01-01

134

Constructing an Index for the Subjective Well-being Under Neuroleptics scale (SWN), short form: applying structural equation modeling for testing reliability and validity of the index.  

PubMed

Structural equation modeling (SEM) has been widely used in psychology and sociology for testing the validity of measurement instruments. However, this statistical technique has so far played a minor role in quality-of-life research. The main objective of this paper is to demonstrate the potential of SEM for constructing and testing the validity of a Subjective Well-being under Neuroleptics (SWN) index for patients with schizophrenia. For these purposes, data from the GEO study (Gesundheitsökonomische Evaluation von Olanzapin in Deutschland; Health economics study of olanzapine in the treatment of schizophrenia in Germany) were used. The GEO is a prospective, comparative, noninterventional, observational study. A total of 646 participants treated with either olanzapine (n = 416) or haloperidol (n = 230) were enrolled in the study; 360 patients were available for factor analyses. The short (20-item) form of the SWN scale was administered to assess patients' perspectives on their quality of life. Structural equation models (SEMs) were then applied to construct 5- and 10-item indexes based on the SWN. The data indicate that the 5-item index is the most time-saving approach for evaluating perceptions of well-being (and thus, quality of life) among patients with schizophrenia. The application of SEM showed no appreciable loss of validity for this index. PMID:17004003

Schmidt, Peter; Clouth, Johannes; Haggenmüller, Lars; Naber, Dieter; Reitberger, Ursula

2006-09-01

135

Parallel Recognition of Series-Parallel Graphs  

Microsoft Academic Search

Recently, He and Yesha gave an algorithm for recognizing directed series parallel graphs in time O(log² n) with linearly many EREW processors. We give a new algorithm for this problem, based on a structural characterization of series parallel graphs in terms of their ear decompositions. Our algorithm can recognize undirected as well as directed series parallel graphs. It can be implemented

David Eppstein

1992-01-01

136

The Ohio Scales Youth Form: Expansion and Validation of a Self-Report Outcome Measure for Young Children  

ERIC Educational Resources Information Center

We examined the validity and reliability of a self-report outcome measure for children between the ages of 8 and 11. The Ohio Scales Problem Severity scale is a brief, practical outcome measure available in three parallel forms: Parent, Youth, and Agency Worker. The Youth Self-Report form is currently validated for children ages 12 and older. The…

Dowell, Kathy A.; Ogles, Benjamin M.

2008-01-01

137

Results from the translation and adaptation of the Iranian Short-Form McGill Pain Questionnaire (I-SF-MPQ): preliminary evidence of its reliability, construct validity and sensitivity in an Iranian pain population  

PubMed Central

Background The Short Form McGill Pain Questionnaire (SF-MPQ) is one of the most widely used instruments to assess pain. The aim of this study was to translate and culturally adapt the questionnaire for Farsi (the official language of Iran) speakers in order to test its reliability and sensitivity. Methods We followed Guillemin's guidelines for cross-cultural adaptation of health-related measures, which include forward-backward translations, expert committee meetings, and face validity testing in a pilot group. Subsequently, the questionnaire was administered to a sample of 100 diverse chronic pain patients attending a tertiary pain and rehabilitation clinic. In order to evaluate test-retest reliability, patients completed the questionnaire in the morning and early evening of their first visit. Finally, patients were asked to complete the questionnaire for the third time after completing a standardized treatment protocol three weeks later. The intraclass correlation coefficient (ICC) was used to evaluate reliability. We used principal component analysis to assess construct validity. Results Ninety-two subjects completed the questionnaire both in the morning and in the evening of the first visit (test-retest reliability), and after three weeks (sensitivity to change). Eight patients who did not finish the treatment protocol were excluded from the study. Internal consistency was found by Cronbach's alpha to be 0.951, 0.832 and 0.840 for sensory, affective and total scores respectively. ICC resulted in 0.906 for sensory, 0.712 for affective and 0.912 for total pain score. Item-to-subscale score correlations supported the convergent validity of each item to its hypothesized subscale. Correlations were observed to range from r² = 0.202 to r² = 0.739. Sensitivity or responsiveness was evaluated by a paired t-test, which exhibited a significant difference between pre- and post-treatment scores (p < 0.001). Conclusion The results of this study indicate that the Iranian version of the SF-MPQ is a reliable questionnaire and responsive to changes in the subscale and total pain scores in Persian chronic pain patients over time.
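For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's alpha can be computed directly from an item-score matrix; a minimal sketch with toy data (not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

scores = np.array([[3, 4, 4], [2, 2, 3], [5, 5, 4], [1, 2, 1]])
print(round(cronbach_alpha(scores), 3))   # high alpha for these toy data
```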

2011-01-01

138

Special parallel processing workshop  

SciTech Connect

This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts of parallel processing.

NONE

1994-12-01

139

Parallel Mesh Generation  

Microsoft Academic Search

Parallel mesh generation is a relatively new research area between the boundaries of two scientific computing disciplines: computational geometry and parallel computing. In this chapter we present a survey of parallel unstructured mesh generation methods. Parallel mesh generation methods decompose the original mesh generation problem into smaller sub-problems which are meshed in parallel. We organize the parallel mesh generation methods

Nikos Chrisochoides

140

Reliability Analysis of Redundant Networks Using Flow Graphs  

Microsoft Academic Search

A flow graph approach for reliability analysis is applied to redundant networks with elements in simple series-parallel configurations and to the more general case with elements in non-series-parallel combinations. The reliability of networks whose elements are subject to open or short failures is analyzed with flow graphs. Typical examples are shown.
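For reference, the closed-form special cases underlying such analyses are the standard formulas for independent elements with open/short failure modes: a series network fails if any element fails open or all elements fail short, and a parallel network fails if any element fails short or all elements fail open. A brief sketch of those textbook expressions (an illustration, not the paper's flow-graph procedure):

```python
import numpy as np

def series_reliability(q_open, q_short):
    """Series network of independent elements with open/short failure
    modes: works unless some element fails open or all fail short."""
    q_open, q_short = np.asarray(q_open), np.asarray(q_short)
    return np.prod(1 - q_open) - np.prod(q_short)

def parallel_reliability(q_open, q_short):
    """Parallel network: works unless some element fails short
    or all elements fail open."""
    q_open, q_short = np.asarray(q_open), np.asarray(q_short)
    return np.prod(1 - q_short) - np.prod(q_open)

print(series_reliability([0.05, 0.05], [0.02, 0.02]))    # two elements in series
print(parallel_reliability([0.05, 0.05], [0.02, 0.02]))  # same elements in parallel
```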

K. B. Misra; T. S. M. Rao

1970-01-01

141

Intraclass correlations: Uses in assessing rater reliability  

Microsoft Academic Search

Reliability coefficients often take the form of intraclass correlation coefficients. In this article, guidelines are given for choosing among 6 different forms of the intraclass correlation for reliability studies in which n targets are rated by k judges. Relevant to the choice of the coefficient are the appropriate statistical model for the reliability study and the applications to be made
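As a concrete instance of one of the six forms, ICC(2,1) follows from the two-way ANOVA mean squares of an n × k rating matrix; a minimal sketch with a small illustrative 6 targets × 4 judges matrix:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, single rater, absolute agreement,
    for an (n targets x k judges) rating matrix."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-targets MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-judges MS
    sse = ((x - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

ratings = np.array([[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
                    [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]])
print(round(icc_2_1(ratings), 2))   # roughly 0.29 for this example
```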

Patrick E. Shrout; Joseph L. Fleiss

1979-01-01

142

PHACT: Parallel HOG and Correlation Tracking  

NASA Astrophysics Data System (ADS)

Histogram of Oriented Gradients (HOG) based methods for the detection of humans have become one of the most reliable methods of detecting pedestrians with a single passive imaging camera. However, they are not 100 percent reliable. This paper presents an improved tracker for the monitoring of pedestrians within images. The Parallel HOG and Correlation Tracking (PHACT) algorithm utilises self-learning to overcome the drifting problem. A detection algorithm that utilises HOG features runs in parallel to an adaptive and stateful correlator. The combination of both acting in a cascade provides a much more robust tracker than the two components could produce separately.

Hassan, Waqas; Birch, Philip; Young, Rupert; Chatwin, Chris

2014-03-01

143

Parallel systems using the Weibull model  

NASA Astrophysics Data System (ADS)

A series system's reliability is based on the minimum lifetime of its components; its dual, the parallel system, is based on the maximum. Here, we consider the statistical analysis of a parallel system whose components follow the Weibull parametric model. Our perspective is Bayesian. Due to the mathematical complexity, we obtain the posterior distribution using the Metropolis-Hastings simulation method. Based on this posterior, we evaluate the evidence of the Full Bayesian Significance Test (FBST) for comparing component reliabilities. The reason for using the FBST is the fact that we are testing precise hypotheses. An example illustrates the methodology.
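The identity behind the model: a parallel system works while at least one component works, so its reliability is R_sys(t) = 1 - prod_i (1 - R_i(t)) under independence. A brief sketch with Weibull components (the parameters are illustrative; the paper's Bayesian FBST analysis is not reproduced here):

```python
import numpy as np

def weibull_reliability(t, shape, scale):
    """Single Weibull component: R(t) = exp(-(t/scale)**shape)."""
    return np.exp(-(t / scale) ** shape)

def parallel_system_reliability(t, shapes, scales):
    """Parallel system lifetime is the maximum of component lifetimes,
    so R_sys(t) = 1 - prod_i (1 - R_i(t)) for independent components."""
    r = np.array([weibull_reliability(t, b, e) for b, e in zip(shapes, scales)])
    return 1.0 - np.prod(1.0 - r, axis=0)

# Two components with illustrative shape/scale parameters, evaluated at t = 1500.
print(parallel_system_reliability(1500.0, shapes=[1.8, 2.2], scales=[1000.0, 1200.0]))
```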

Polpo, A.; Coque, M. A.; de B. Pereira, C. A.

2008-11-01

144

Parallelizing constraint programs  

Microsoft Academic Search

The availability of commodity multicore and multiprocessor machines and the inherent parallelism in constraint programming search offer significant opportunities for constraint programming. Both constraint-based local search and finite-domain techniques can dramatically benefit from parallelization. Yet, currently available libraries and languages offer very limited support to exploit the inherent parallelism and the high human cost incurred to develop parallel solutions confine

Laurent D. Michel

2010-01-01

145

Software behavior oriented parallelization  

Microsoft Academic Search

Many sequential applications are difficult to parallelize because of unpredictable control flow, indirect data access, and input- dependent parallelism. These difficulties led us to build a software system for behavior oriented parallelization (BOP), which allows a program to be parallelized based on partial information about pro- gram behavior, for example, a user reading just part of the source code, or

Chen Ding; Xipeng Shen; Kirk Kelsey; Chris Tice; Ruke Huang; Chengliang Zhang

2007-01-01

146

Parallel Mandelbrot Set Model  

NSDL National Science Digital Library

The Parallel Mandelbrot Set Model is a parallelization of the sequential MandelbrotSet model, which does all the computations on a single processor core. This parallelization is able to use a computer with more than one core (or processor) to carry out the same computation, thus speeding up the process. The parallelization is done using the model elements in the Parallel Java group. These model elements allow easy use of the Parallel Java library created by Alan Kaminsky. In particular, the parallelization used for this model is based on code in Chapters 11 and 12 of Kaminsky's book Building Parallel Java. The Parallel Mandelbrot Set Model was developed using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double click the ejs_chaos_ParallelMandelbrotSet.jar file to run the program if Java is installed.
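A common decomposition for this kind of parallel speedup is one task per image row; a minimal Python analogue using a process pool (a stand-in illustration, not the model's Parallel Java code):

```python
from multiprocessing import Pool

WIDTH, HEIGHT, MAX_ITER = 400, 300, 100

def mandelbrot_row(j):
    """Escape-time iteration counts for one row of the image."""
    y = -1.2 + 2.4 * j / (HEIGHT - 1)
    row = []
    for i in range(WIDTH):
        x = -2.0 + 3.0 * i / (WIDTH - 1)
        c, z, n = complex(x, y), 0j, 0
        while abs(z) <= 2.0 and n < MAX_ITER:
            z = z * z + c
            n += 1
        row.append(n)
    return row

if __name__ == "__main__":
    with Pool() as pool:                       # one worker per available core
        image = pool.map(mandelbrot_row, range(HEIGHT))
    print(len(image), "rows computed")
```

Because the rows are independent, the speedup is limited mainly by process startup and result-gathering overhead.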

Franciscouembre

2011-11-24

147

Photovoltaic module reliability workshop  

SciTech Connect

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986--1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if the PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, substantial research and testing are still required to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

Mrig, L. (ed.)

1990-01-01

148

Multiple Parallelisms in Animal Cytokinesis  

Microsoft Academic Search

The process of cytokinesis in animal cells is usually presented as a relatively simple picture: A cleavage plane is first positioned in the equatorial region by the astral microtubules of the anaphase mitotic apparatus, and a contractile ring made up of parallel filaments of actin and myosin II is formed and encircles the cortex at the division site. Active sliding

Taro Q. P. Uyeda; Akira Nagasaki; Shigehiko Yumura

2004-01-01

149

DUST EXTINCTION FROM BALMER DECREMENTS OF STAR-FORMING GALAXIES AT 0.75 ≤ z ≤ 1.5 WITH HUBBLE SPACE TELESCOPE/WIDE-FIELD-CAMERA 3 SPECTROSCOPY FROM THE WFC3 INFRARED SPECTROSCOPIC PARALLEL SURVEY  

SciTech Connect

Spectroscopic observations of Hα and Hβ emission lines of 128 star-forming galaxies in the redshift range 0.75 ≤ z ≤ 1.5 are presented. These data were taken with slitless spectroscopy using the G102 and G141 grisms of the Wide-Field-Camera 3 (WFC3) on board the Hubble Space Telescope as part of the WFC3 Infrared Spectroscopic Parallel survey. Interstellar dust extinction is measured from stacked spectra that cover the Balmer decrement (Hα/Hβ). We present dust extinction as a function of Hα luminosity (down to 3 × 10^41 erg s^-1), galaxy stellar mass (reaching 4 × 10^8 M_⊙), and rest-frame Hα equivalent width. The faintest galaxies are two times fainter in Hα luminosity than galaxies previously studied at z ≈ 1.5. An evolution is observed where galaxies of the same Hα luminosity have lower extinction at higher redshifts, whereas no evolution is found within our error bars with stellar mass. The lower Hα luminosity galaxies in our sample are found to be consistent with no dust extinction. We find an anti-correlation of the [O III] λ5007/Hα flux ratio as a function of luminosity where galaxies with L_Hα < 5 × 10^41 erg s^-1 are brighter in [O III] λ5007 than Hα. This trend is evident even after extinction correction, suggesting that the increased [O III] λ5007/Hα ratio in low-luminosity galaxies is likely due to lower metallicity and/or higher ionization parameters.
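For orientation, the standard conversion from a Balmer decrement to a color excess compares the observed Hα/Hβ ratio with the Case B intrinsic value of 2.86 under an assumed extinction curve. A brief sketch (the curve coefficients below are approximate Cardelli-like values, an assumption rather than the survey's adopted curve):

```python
import math

# Approximate extinction-curve values at Halpha and Hbeta (Cardelli-like);
# illustrative assumptions, not necessarily the survey's adopted curve.
K_HALPHA, K_HBETA = 2.53, 3.61
INTRINSIC_RATIO = 2.86          # Case B recombination value

def ebv_from_balmer_decrement(f_halpha, f_hbeta):
    """Color excess E(B-V) from an observed Halpha/Hbeta flux ratio."""
    observed = f_halpha / f_hbeta
    return 2.5 / (K_HBETA - K_HALPHA) * math.log10(observed / INTRINSIC_RATIO)

print(round(ebv_from_balmer_decrement(4.0, 1.0), 3))   # ratio 4.0 -> modest reddening
```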

Dominguez, A.; Siana, B.; Masters, D. (Department of Physics and Astronomy, University of California Riverside, Riverside, CA 92521, United States); Henry, A. L.; Martin, C. L. (Department of Physics, University of California, Santa Barbara, CA 93106, United States); Scarlata, C.; Bedregal, A. G. (Minnesota Institute for Astrophysics, University of Minnesota, Minneapolis, MN 55455, United States); Malkan, M.; Ross, N. R. (Department of Physics and Astronomy, University of California Los Angeles, Los Angeles, CA 90095, United States); Atek, H.; Colbert, J. W. (Spitzer Science Center, Caltech, Pasadena, CA 91125, United States); Teplitz, H. I.; Rafelski, M. (Infrared Processing and Analysis Center, Caltech, Pasadena, CA 91125, United States); McCarthy, P.; Hathi, N. P.; Dressler, A. (Observatories of the Carnegie Institution for Science, Pasadena, CA 91101, United States); Bunker, A., E-mail: albertod@ucr.edu (Department of Physics, Oxford University, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH, United Kingdom)

2013-02-15

150

An Algorithm for the Reliability Evaluation of Redundant Networks  

Microsoft Academic Search

An algorithm is presented for the evaluation of the reliability of any redundant network. It uses the properties of digraphs and is especially suitable for the computer analysis of large complex networks. A method for deriving the reliability expression for any type of network is also described, along with a Fortran computer program for the reliability evaluation of series-parallel networks.

K. B. Misra

1970-01-01

151

Microlenses focal length measurement using Z-scan and parallel moiré deflectometry  

NASA Astrophysics Data System (ADS)

In this paper, a simple and accurate method based on Z-scan and parallel moiré deflectometry for measuring the focal length of microlenses is reported. A laser beam is focused by one lens and re-collimated by another, and then strikes a parallel moiré deflectometer. In the presence of a microlens near the focal point of the first lens, the radius of curvature of the beam is changed; the parallel moiré fringes are formed only due to the beam divergence or convergence. The focal length of the microlens is obtained from the moiré fringe period graph without the need to know the position of the principal planes. The method is simple, reliable, and completely automated, and its implementation is straightforward. Since a focused laser beam and Z-scan in free space are used, it can be employed for determining the small focal lengths of small microlenses without serious limitations on their size.

Rasouli, Saifollah; Rajabi, Y.; Sarabi, H.

2013-12-01

152

Exploiting fine grain parallelism in Prolog  

SciTech Connect

The goals of this paper are to design a Prolog system that automatically exploits parallelism in Prolog with low-overhead memory management and task management schemes, and to demonstrate by means of detailed simulations that such a Prolog system can indeed achieve a significant speedup over the fastest sequential Prolog systems. The authors achieve these goals by first identifying the large sources of overhead in parallel Prolog execution: side-effects caused by parallel tasks, choicepoints created by parallel tasks, task creation, task scheduling, task suspension and context switching. The authors then identify a form of parallelism, called flow parallelism, that can be exploited with low overhead because parallel execution is restricted to goals that do not cause side-effects and do not create choicepoints. The authors develop a master-slave model of parallel execution that eliminates task suspension and context switching. The model uses program partitioning and task scheduling techniques that do not require task suspension and context switching to prevent deadlock. The authors identify architectural techniques to support the parallel execution model and develop the Flow Parallel Prolog Machine (FPPM) architecture and implementation. Finally, the authors evaluate the performance of FPPM and investigate the design tradeoffs using measurements on a detailed, register-transfer-level simulator. FPPM achieves an average speedup of about a factor of 2 (as much as a factor of 5 for some programs) over the current highest performance sequential Prolog implementation, the VLSI-BAM. The speedups over other parallel Prolog systems are much larger.

Singhal, A.

1990-01-01

153

Temperature Integrated Load Sharing of Paralleled Modules  

Microsoft Academic Search

Paralleling power modules is a design approach for sharing system loads (stresses) equally to improve system reliability. Due to variations in the parameters of the power converter system, temperature mismatches may occur. These mismatches may lead to unequal life expectancy of individual converters in the total system. It is believed that equalizing the operating temperature of the semiconductor devices may improve total

J. L. Barnette; M. R. Zolghadri; M. Walters; A. Homaifar

2006-01-01

154

DC Circuits: Parallel Resistances  

NSDL National Science Digital Library

In this interactive learning activity, students will learn about parallel circuits. They will measure and calculate the resistance of parallel circuits and answer several questions about the example circuit shown.

2013-07-30

155

Parallel Particle Swarm Optimizer.  

National Technical Information Service (NTIS)

Time requirements for the solving of complex large-scale engineering problems can be substantially reduced by using parallel computation. Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel impleme...

J. F. Schutte, B. Fregly, R. T. Haftka, A. D. George

2003-01-01

156

Parallel flow diffusion battery  

DOEpatents

A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

Yeh, H.C.; Cheng, Y.S.

1984-01-01

157

Parallel text search methods  

Microsoft Academic Search

A comparison of recently proposed parallel text search methods to alternative available search strategies that use serial processing machines suggests parallel methods do not provide large-scale gains in either retrieval effectiveness or efficiency.

Gerard Salton; Chris Buckley

1988-01-01

158

The Circulating Processor Model of Parallel Systems  

Microsoft Academic Search

This paper introduces the circulating processor model for parallel computer systems. Models of parallel systems tend to be computationally complex due to synchronization constraints such as task forking and joining. However, product-form queueing network models remain computationally efficient as the size of the system grows, by calculating only the mean performance metrics of the system. The circulating processor model

Amy W. Apon; Lawrence W. Dowdy

1997-01-01

159

Avionics Design for Reliability.  

National Technical Information Service (NTIS)

Contents: Introduction and overview--Reliability under austerity; Avionics reliability control during development; Reliability growth modelling for avionics; Illusory reliability growth; Experienced in-flight avionics malfunctions; Failures affecting reli...

1976-01-01

160

Parallel integrated frame synchronizer chip  

NASA Technical Reports Server (NTRS)

A parallel integrated frame synchronizer implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem, and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem, where it is output from a data output port.

Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

2000-01-01

161

Parallel Adaptive Mesh Refinement Library  

NASA Technical Reports Server (NTRS)

Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.

Mac-Neice, Peter; Olson, Kevin

2005-01-01

162

CFD on parallel computers  

NASA Astrophysics Data System (ADS)

CFD or Computational Fluid Dynamics is one of the scientific disciplines that has always posed new challenges to the capabilities of the modern, ultra-fast supercomputers, and now to the even faster parallel computers. For applications where number crunching is of primary importance, there is perhaps no escaping parallel computers, since sequential computers can only be (as projected) as fast as a few gigaflops and no more, unless, of course, some altogether new technology appears in the future. For parallel computers, on the other hand, there is no such limit, since any number of processors can be made to work in parallel. Computationally demanding CFD codes and parallel computers are therefore soul-mates, and will remain so for the foreseeable future. So much so that there is a separate and fast-emerging discipline that tackles problems specific to CFD as applied to parallel computers. For some years now, there has been an international conference on parallel CFD. So, one can indeed say that parallel CFD has arrived. To understand how CFD codes are parallelized, one must understand a little about how parallel computers function. Therefore, in what follows we will first deal with parallel computers, what a typical CFD code looks like, and then the strategies of parallelization.

Basu, A. J.

1994-10-01

163

NAS Parallel Benchmarks.  

National Technical Information Service (NTIS)

The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as th...

D. H. Bailey

2009-01-01

164

Parallel Implicit Algorithms for CFD  

NASA Technical Reports Server (NTRS)

The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.

Keyes, David E.

1998-01-01

165

Reliability of a science admission test (HAM-Nat) at Hamburg medical school  

PubMed Central

Objective: The University Hospital in Hamburg (UKE) started to develop a test of knowledge in natural sciences for admission to medical school in 2005 (Hamburger Auswahlverfahren für Medizinische Studiengänge, Naturwissenschaftsteil, HAM-Nat). This study is a step towards establishing the HAM-Nat. We are investigating parallel forms reliability, the effect of a crash course in chemistry on test results, and correlations of HAM-Nat test results with a test of scientific reasoning (similar to a subtest of the "Test for Medical Studies", TMS). Methods: 316 first-year students participated in the study in 2007. They completed different versions of the HAM-Nat test, which consisted of items that had already been used (HN2006) and new items (HN2007). Four weeks later, half of the participants were tested on the HN2007 version of the HAM-Nat again, while the other half completed the test of scientific reasoning. Within this four-week interval students were offered a five-day chemistry course. Results: Parallel forms reliability for four different test versions ranged from rtt=.53 to rtt=.67. The retest reliabilities of the HN2007 halves were rtt=.54 and rtt=.61. Correlations of the two HAM-Nat versions with the test of scientific reasoning were r=.34 and r=.21. The crash course in chemistry had no effect on HAM-Nat scores. Conclusions: The results suggest that further versions of the test of natural sciences will not easily conform to the standards of internal consistency, parallel-forms reliability and retest reliability. Much care has to be taken in order to assemble items which could be used interchangeably for the construction of new test versions. The test of scientific reasoning and the HAM-Nat are tapping different constructs. Participation in a chemistry course did not improve students’ achievement, probably because the content of the course was not coordinated with the test and many students lacked motivation to do well in the second test.
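In its simplest form, a parallel-forms coefficient such as rtt above is the Pearson correlation between scores on two forms taken by the same examinees. A minimal sketch with invented scores:

```python
import numpy as np

def parallel_forms_reliability(form_a, form_b):
    """Parallel-forms reliability estimated as the Pearson correlation
    between total scores on two test forms taken by the same examinees."""
    a, b = np.asarray(form_a, float), np.asarray(form_b, float)
    return np.corrcoef(a, b)[0, 1]

# Invented total scores for eight examinees on two forms.
form_a = [55, 62, 47, 70, 58, 66, 51, 63]
form_b = [53, 65, 45, 68, 60, 64, 50, 61]
print(round(parallel_forms_reliability(form_a, form_b), 2))
```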

Hissbach, Johanna; Klusmann, Dietrich; Hampe, Wolfgang

2011-01-01

166

Parallel algorithm development  

SciTech Connect

Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
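Alternative (1), building explicit message passing directly into the source code, might look like the following present-day sketch using mpi4py (an assumed stand-in for illustration; the report's codes would have used the libraries of its era):

```python
# Explicit message passing: each process computes a partial result and the
# partial results are combined with an explicit reduction message.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each process sums its own slice of the global index range.
local = np.arange(rank * 1000, (rank + 1) * 1000, dtype=float)
partial = local.sum()
total = comm.reduce(partial, op=MPI.SUM, root=0)   # explicit communication

if rank == 0:
    print("global sum:", total)
```

Run under an MPI launcher, e.g. `mpiexec -n 4 python sum.py`.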

Adams, T.F.

1996-06-01

167

Parallel I/O Systems  

NSDL National Science Digital Library

* Redundant disk array architectures
* Fault tolerance issues in parallel I/O systems
* Caching and prefetching
* Parallel file systems
* Parallel I/O systems
* Parallel I/O programming paradigms
* Parallel I/O applications and environments
* Parallel programming with parallel I/O

Apon, Amy

168

Towards Distributed Memory Parallel Program Analysis  

SciTech Connect

This paper presents a parallel attribute evaluation for distributed memory parallel computer architectures, where previously only shared memory parallel support for this technique had been developed. Attribute evaluation is a part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis which cannot be addressed by a file-by-file view of large-scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

Quinlan, D; Barany, G; Panas, T

2008-06-17

169

Design considerations for parallel graphics libraries  

NASA Technical Reports Server (NTRS)

Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

Crockett, Thomas W.

1994-01-01

170

Parallel Atomistic Simulations  

SciTech Connect

Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.

HEFFELFINGER,GRANT S.

2000-01-18

171

Introduction to parallel computing  

SciTech Connect

Today's supercomputers and parallel computers provide an unprecedented amount of computational power in one machine. A basic understanding of the parallel computing techniques that assist in the capture and utilization of that computational power is essential to appreciate the capabilities and the limitations of parallel supercomputers. In addition, an understanding of technical vocabulary is critical in order to converse about parallel computers. The relevant techniques, vocabulary, currently available hardware architectures, and programming languages which provide the basic concepts of parallel computing are introduced in this document. This document updates the document entitled Introduction to Parallel Supercomputing, M88-42, October 1988. It includes a new section on languages for parallel computers, updates the hardware related sections, and includes current references.

Lafferty, E.L.; Michaud, M.C.; Prelle, M.J.; Goethert, J.B.

1992-05-01

172

High Performance Parallel Computational Nanotechnology  

NASA Technical Reports Server (NTRS)

At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

Saini, Subhash; Craw, James M. (Technical Monitor)

1995-01-01

173

Reliability computation from reliability block diagrams  

NASA Technical Reports Server (NTRS)

A computer program computes system reliability for a very general class of reliability block diagrams. Four factors are considered in calculating the probability of system success: active block redundancy, standby block redundancy, partial redundancy, and the presence of equivalent blocks in the diagram.
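Of the four factors listed, active block redundancy and series composition admit a simple recursive evaluation; a brief sketch (standby and partial redundancy, which the program also handles, are omitted):

```python
from math import prod

def block_reliability(block):
    """Evaluate a nested reliability block diagram given as
    ('series', [...]), ('parallel', [...]), or a float leaf probability."""
    if isinstance(block, float):
        return block
    kind, children = block
    r = [block_reliability(c) for c in children]
    if kind == "series":                      # all blocks must work
        return prod(r)
    if kind == "parallel":                    # active redundancy: any one suffices
        return 1.0 - prod(1.0 - ri for ri in r)
    raise ValueError(kind)

# A series block followed by a triple-redundant stage, one arm of which
# is itself a two-block series string.
diagram = ("series", [0.99, ("parallel", [0.90, 0.90, ("series", [0.95, 0.97])])])
print(round(block_reliability(diagram), 4))
```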

Chelson, P. O.; Eckstein, E. Y.

1975-01-01

174

Two portable parallel tridiagonal solvers  

SciTech Connect

Many scientific computer codes involve linear systems of equations which are coupled only between nearest neighbors in a single dimension. The most common situation can be formulated as a tridiagonal matrix relating source terms and unknowns. This system of equations is commonly solved using simple forward and back substitution. The usual algorithm is spectacularly ill-suited for parallel processing with distributed data, since information must be sequentially communicated across all domains. Two new tridiagonal algorithms have been implemented in FORTRAN 77. The two algorithms differ only in the form of the unknown which is to be found. The first and simplest algorithm solves for a scalar quantity evaluated at each point along the single dimension being considered. The second algorithm solves for a vector quantity evaluated at each point. The solution method is related to other recently published approaches, such as that of Bondeli. An alternative parallel tridiagonal solver, used as part of an Alternating Direction Implicit (ADI) scheme, has recently been developed at LLNL by Lambert. For a discussion of useful parallel tridiagonal solvers, see the work of Mattor et al. Previous work appears to be concerned only with scalar unknowns. This paper presents a new technique which treats both scalar and vector unknowns. There is no restriction upon the sizes of the subdomains. Even though the usual tridiagonal formulation may not be theoretically optimal when used iteratively, it is used in so many computer codes that it appears reasonable to write a direct substitute for it. The new tridiagonal code can be used on parallel machines with a minimum of disruption to pre-existing programming. As tested on various parallel computers, the parallel code shows efficiency greater than 50% (that is, more than half of the available computer operations are used to advance the calculation) when each processor is given at least 100 unknowns for which to solve.
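The serial baseline referred to here, simple forward and back substitution, is the Thomas algorithm; a brief sketch makes the sequential data dependence across the whole dimension visible (illustrative Python, not the report's FORTRAN 77 implementation):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Serial Thomas algorithm for a tridiagonal system with sub-diagonal a
    (a[0] unused), diagonal b, super-diagonal c (c[-1] unused), and
    right-hand side d. Each step depends on the previous one, which is
    exactly what makes the method hard to parallelize across domains."""
    n = len(d)
    b, d = np.array(b, float), np.array(d, float)
    for i in range(1, n):                     # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

a = [0.0, -1.0, -1.0, -1.0]; b = [2.0, 2.0, 2.0, 2.0]
c = [-1.0, -1.0, -1.0, 0.0]; d = [1.0, 0.0, 0.0, 1.0]
print(thomas_solve(a, b, c, d))   # expect [1. 1. 1. 1.]
```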

Eltgroth, P.G.

1994-07-15

175

Parallel digital forensics infrastructure.  

SciTech Connect

This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics (PDF).

Liebrock, Lorie M. (New Mexico Tech, Socorro, NM); Duggan, David Patrick

2009-10-01

176

FormCalc 7  

NASA Astrophysics Data System (ADS)

We present additions and improvements in Version 7 of FormCalc, most notably analytic tensor reduction, choice of OPP methods, and MSSM initialization via FeynHiggs, as well as a parallelized Cuba library for numerical integration.

Agrawal, S.; Hahn, T.; Mirabella, E.

2012-06-01

177

Natural and Artificial Parallel Computation.  

National Technical Information Service (NTIS)

Contents: The Nature of Parallel Programming; Applications of Parallel Supercomputers: Scientific Results and Computer Science Lessons; Towards General-Purpose Parallel Computers; Cooperative Computation in Brains and Computers; Parallel Systems in the Ce...

O. Simula

1992-01-01

178

Optimistic parallelism requires abstractions  

Microsoft Academic Search

The problem of writing software for multicore processors is greatly simplified if we could automatically parallelize sequential programs. Although auto-parallelization has been studied for many decades, it has succeeded only in a few application areas such as dense matrix computations. In particular, auto-parallelization of irregular programs, which are organized around large, pointer-based data structures like graphs, has seemed intractable.

Milind Kulkarni; Keshav Pingali; Bruce Walter; Ganesh Ramanarayanan; Kavita Bala; L. Paul Chew

2007-01-01

179

PCLIPS: Parallel CLIPS  

NASA Technical Reports Server (NTRS)

A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C³I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

1994-01-01

180

Integrated circuit reliability testing  

NASA Technical Reports Server (NTRS)

A technique is described for use in determining the reliability of microscopic conductors deposited on an uneven surface of an integrated circuit device. A wafer containing integrated circuit chips is formed with a test area having regions of different heights. At the time the conductors are formed on the chip areas of the wafer, an elongated serpentine assay conductor is deposited on the test area so the assay conductor extends over multiple steps between regions of different heights. Also, a first test conductor is deposited in the test area upon a uniform region of first height, and a second test conductor is deposited in the test area upon a uniform region of second height. The occurrence of high resistances at the steps between regions of different height is indicated by deriving the measured length of the serpentine conductor using the resistance measured between the ends of the serpentine conductor, and comparing that to the design length of the serpentine conductor. The percentage by which the measured length exceeds the design length, at which the integrated circuit will be discarded, depends on the required reliability of the integrated circuit.

Buehler, Martin G. (Inventor); Sayah, Hoshyar R. (Inventor)

1990-01-01

181

Integrated circuit reliability testing  

NASA Technical Reports Server (NTRS)

A technique is described for use in determining the reliability of microscopic conductors deposited on an uneven surface of an integrated circuit device. A wafer containing integrated circuit chips is formed with a test area having regions of different heights. At the time the conductors are formed on the chip areas of the wafer, an elongated serpentine assay conductor is deposited on the test area so the assay conductor extends over multiple steps between regions of different heights. Also, a first test conductor is deposited in the test area upon a uniform region of first height, and a second test conductor is deposited in the test area upon a uniform region of second height. The occurrence of high resistances at the steps between regions of different height is indicated by deriving the measured length of the serpentine conductor using the resistance measured between the ends of the serpentine conductor, and comparing that to the design length of the serpentine conductor. The percentage by which the measured length exceeds the design length, at which the integrated circuit will be discarded, depends on the required reliability of the integrated circuit.

Buehler, Martin G. (inventor); Sayah, Hoshyar R. (inventor)

1988-01-01

182

One and two dimensional parallel partial response for parallel readout optical memories  

Microsoft Academic Search

We extend partial response (PR) precoding to two dimensions and consider it, as well as parallel one-dimensional (1D) PR, for use in parallel readout optical memory systems. We also develop expressions for optically implementable two-dimensional (2D) zero-forcing equalizers to be used in conjunction with these forms of PR precoding

Brita H. Olson; S. C. Esener

1995-01-01

183

Low-power approaches for parallel, free-space photonic interconnects  

SciTech Connect

Future advances in the application of photonic interconnects will involve the insertion of parallel-channel links into Multi-Chip Modules (MCMs) and board-level parallel connections. Such applications will drive photonic link components into more compact forms that consume far less power than traditional telecommunication data links. These will make use of new device-level technologies such as vertical cavity surface-emitting lasers and special low-power parallel photoreceiver circuits. Depending on the application, these device technologies will often be monolithically integrated to reduce the amount of board or module real estate required by the photonics. Highly parallel MCM and board-level applications will also require simplified drive circuitry, lower cost, and higher reliability than has been demonstrated in photonic and optoelectronic technologies. An example is found in two-dimensional point-to-point array interconnects for MCM stacking. These interconnects are based on high-efficiency Vertical Cavity Surface Emitting Lasers (VCSELs), Heterojunction Bipolar Transistor (HBT) photoreceivers, integrated micro-optics, and MCM-compatible packaging techniques. Individual channels have been demonstrated at 100 Mb/s, operating with a direct 3.3V CMOS electronic interface while using 45 mW of electrical power. These results demonstrate how optoelectronic device technologies can be optimized for low-power parallel link applications.

Carson, R.F.; Lovejoy, M.L.; Lear, K.L.; Warren, M.E.; Seigal, P.K.; Craft, D.C.; Kilcoyne, S.P.; Patrizi, G.A.; Blum, O.

1995-12-31

184

Planarity Testing in Parallel  

Microsoft Academic Search

We present a parallel algorithm based on open ear decomposition to construct an embedding of a graph onto the plane or report that the graph is nonplanar. Our parallel algorithm runs on a CRCW PRAM in logarithmic time with a number of processors bounded by that needed for finding connected components in a graph and for performing bucket

Vijaya Ramachandran; John H. Reif

1994-01-01

185

Parallelization of thermochemical nanolithography.  

PubMed

One of the most pressing technological challenges in the development of next generation nanoscale devices is the rapid, parallel, precise and robust fabrication of nanostructures. Here, we demonstrate the possibility to parallelize thermochemical nanolithography (TCNL) by employing five nano-tips for the fabrication of conjugated polymer nanostructures and graphene-based nanoribbons. PMID:24337109

Carroll, Keith M; Lu, Xi; Kim, Suenne; Gao, Yang; Kim, Hoe-Joon; Somnath, Suhas; Polloni, Laura; Sordan, Roman; King, William P; Curtis, Jennifer E; Riedo, Elisa

2014-01-16

186

The Nas Parallel Benchmarks  

Microsoft Academic Search

A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of five parallel kernels and three simulated application benchmarks. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their pencil-and-paper specification: all details of these benchmarks are

D. Bailey; E. Barszcz; J. Barton; D. Browning; R. Carter; L. Dagum

1994-01-01

187

Parallel texture caching  

Microsoft Academic Search

The creation of high-quality images requires new functionality and higher performance in real-time graphics architectures. In terms of functionality, texture mapping has become an integral component of graphics systems, and in terms of performance, parallel techniques are used at all stages of the graphics pipeline. In rasterization, texture caching has become prevalent for reducing texture bandwidth requirements. However, parallel

Homan Igehy; Matthew Eldridge; Pat Hanrahan

1999-01-01

188

Parallel computing works  

SciTech Connect

An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

Not Available

1991-10-23

189

Parallel methods for dynamic simulation of multiple manipulator systems  

NASA Technical Reports Server (NTRS)

In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.

Mcmillan, Scott; Sadayappan, P.; Orin, David E.

1993-01-01

190

Parallel Programming and Parallel Abstractions in Fortress  

Microsoft Academic Search

\\u000a The Programming Language Research Group at Sun Microsystems Laboratories seeks to apply lessons learned from the Java (TM)\\u000a Programming Language to the next generation of programming languages. The Java language supports platform-independent parallel\\u000a programming with explicit multithreading and explicit locks. As part of the DARPA program for High Productivity Computing\\u000a Systems, we are developing Fortress, a language intended to support

Guy L. Steele Jr.

2006-01-01

191

Parallelism and Scalability in an Image Processing Application  

Microsoft Academic Search

The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chips. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced

Morten S. Rasmussen; Matthias B. Stuart; Sven Karlsson

2009-01-01

192

Compositional C++: Compositional Parallel Programming  

Microsoft Academic Search

A compositional parallel program is a program constructed by composing component programs in parallel, where the composed program inherits properties of its components. In this paper, we describe a small extension of C++ called Compositional C++ or CC++ which is an object-oriented notation that supports compositional parallel programming. CC++ integrates different paradigms of parallel programming: data-parallel, task-parallel and object-parallel paradigms;

K. Mani Chandy; Carl Kesselman

1992-01-01

193

ANN-based Reliability Analysis for Deep Excavation  

Microsoft Academic Search

In this study, a reliability evaluation method integrating an artificial neural network (ANN) with the first-order reliability method (FORM) or Monte Carlo simulation (MCS) is explored. By performing a case study on the reliability of deep excavation within soft ground, an analysis procedure for reliability analysis is proposed. The evaluation model of ANN-based FORM or ANN-based MCS is superior to traditional reliability

Fu-Kuo Huang; G. S. Wang

2007-01-01

194

Parallel nearest neighbor calculations  

NASA Astrophysics Data System (ADS)

We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.

Trease, Harold

195

Light, The Universe and Parallel Radiosity  

Microsoft Academic Search

Radiosity provides an effective method for modelling a diffuse environment. Unfortunately this method is computationally expensive. The main cost of the radiosity method is the calculation of form factors which takes approximately 95% of the computation time. Through the use of a task farm the computation of form factors can be parallelised, on a Massively Parallel Processor (MPP) super

Alex Brodsky

1996-01-01

196

Reliability Generalization: "Lapsus Linguae"  

ERIC Educational Resources Information Center

This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,…

Smith, Julie M.

2011-01-01

197

The NAS parallel benchmarks  

NASA Technical Reports Server (NTRS)

A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

Bailey, David (editor); Barton, John (editor); Lasinski, Thomas (editor); Simon, Horst (editor)

1993-01-01

198

Analysis of Reliability Block Diagrams by Boolean Techniques  

Microsoft Academic Search

A reliability block diagram for complex systems is often analyzed by applying the series/parallel product laws or, where this is not possible, by using a conditional probability result (Bayes theorem). In both cases, the analysis is conducted in the probabilistic domain and, for complex systems, is lengthy. An alternative method is to consider the component reliability parameters to be Boolean
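
The series/parallel product laws mentioned in this record are one-liners: a series system works only if every block works, and a parallel group fails only if every block fails. A minimal sketch with hypothetical block reliabilities:

```python
from math import prod

def series(reliabilities):
    # A series system works only if every block works.
    return prod(reliabilities)

def parallel(reliabilities):
    # A parallel group fails only if every block fails.
    return 1.0 - prod(1.0 - r for r in reliabilities)

# Two parallel pairs connected in series, hypothetical block reliabilities.
print(series([parallel([0.9, 0.8]), parallel([0.95, 0.7])]))  # 0.9653...
```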

R. G. Bennetts

1982-01-01

199

New Parallel Sorting Schemes.  

National Technical Information Service (NTIS)

This paper describes a family of parallel sorting algorithms for a multiprocessor system. These algorithms are enumeration sorts and comprise the following phases: count acquisition: the keys are subdivided into subsets and for each key the number of smal...
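
The count-acquisition phase described here is the core of an enumeration sort: each key's final position is the number of keys that compare below it, and the counts for different keys are independent, which is what makes the sort parallel. A toy sketch in which a process pool stands in for the multiprocessor system (the per-key task granularity and tie-breaking are illustrative, not the paper's scheme):

```python
from concurrent.futures import ProcessPoolExecutor

def rank(args):
    key, i, keys = args
    # "Count acquisition": count keys below this one (ties broken by index);
    # each rank can be computed independently, hence in parallel.
    return sum(k < key or (k == key and j < i) for j, k in enumerate(keys))

def enumeration_sort(keys):
    with ProcessPoolExecutor() as pool:
        ranks = pool.map(rank, [(k, i, keys) for i, k in enumerate(keys)])
    out = [None] * len(keys)
    for key, r in zip(keys, ranks):
        out[r] = key          # each key lands directly at its final position
    return out

if __name__ == "__main__":
    print(enumeration_sort([5, 3, 8, 3, 1]))   # [1, 3, 3, 5, 8]
```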

F. P. Preparata

1977-01-01

200

Parallel programming with PCN  

SciTech Connect

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

Foster, I.; Tuecke, S.

1991-12-01

201

Parallel RC Circuits.  

National Technical Information Service (NTIS)

Reviews the operation of a parallel RC circuit and specifically points out how to solve for branch currents and total impedance by using Ohm's law. Reviews vector representations and shows how approximate total current and phase angle are found by measuring...

1994-01-01

202

Parallel programming with PCN  

SciTech Connect

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, a set of tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory at info.mcs.anl.gov.

Foster, I.; Tuecke, S.

1991-09-01

203

Parallelization of thermochemical nanolithography  

NASA Astrophysics Data System (ADS)

One of the most pressing technological challenges in the development of next generation nanoscale devices is the rapid, parallel, precise and robust fabrication of nanostructures. Here, we demonstrate the possibility to parallelize thermochemical nanolithography (TCNL) by employing five nano-tips for the fabrication of conjugated polymer nanostructures and graphene-based nanoribbons. Electronic supplementary information (ESI) available: Details on the cantilevers array, on the sample preparation, and on the GO AFM experiments. See DOI: 10.1039/c3nr05696a

Carroll, Keith M.; Lu, Xi; Kim, Suenne; Gao, Yang; Kim, Hoe-Joon; Somnath, Suhas; Polloni, Laura; Sordan, Roman; King, William P.; Curtis, Jennifer E.; Riedo, Elisa

2014-01-01

204

Series and Parallel Circuits  

NSDL National Science Digital Library

In this activity, learners demonstrate and discuss simple circuits as well as the differences between parallel and serial circuit design and functions. Learners test two different circuit designs through the use of low voltage light bulbs.

IEEE

2013-08-30

205

Designing Parallel Operating Systems via Parallel Programming  

Microsoft Academic Search

Ever-increasing demand for computing capability is driving the construction of ever-larger computer clusters, soon to be reaching tens of thousands of processors. Many functionalities of system software have failed to scale accordingly: systems are becoming more complex, less reliable, and less efficient. Our premise is that these deficiencies arise from a lack of global control and coordination of

Eitan Frachtenberg; Kei Davis; Fabrizio Petrini; Juan Fernández; José Carlos Sancho

2004-01-01

206

Self-Tuning Parallelism  

Microsoft Academic Search

Assigning additional processors to a parallel application may slow it down or lead to poor computer utilization. This paper demonstrates that it is possible for an application to automatically choose its own, optimal degree of parallelism. The technique is based on a simple binary search procedure for finding the optimal number of processors, subject to one of the following criteria:

Otilia Werner-kytölä; Walter F. Tichy

2000-01-01

207

Parallel encrypted array multipliers  

SciTech Connect

An algorithm for direct two's-complement and sign-magnitude parallel multiplication is described. The partial product matrix representing the multiplication is converted to an equivalent matrix by encryption. Its reduction, producing the final result, needs no specialized adders and can be added with any parallel array addition technique. It contains no negative terms and no extra "correction" rows; in addition, it produces the multiplication with fewer than the minimal number of rows required for a direct multiplication process.

Vassiliadis, S.; Putrino, M.; Schwarz, E.M.

1988-07-01

208

Reliable algorithm for modal decomposition  

NASA Technical Reports Server (NTRS)

This paper describes a reliable, general algorithm for modal decomposition in real arithmetic and its use in analyzing and synthesizing control logic for linear dynamic systems. The numerical difficulties are described associated with computing the Jordan canonical form when the system has repeated, or nearly repeated, eigenvalues. A new algorithm is described that satisfactorily solves these numerical difficulties. The relation and extension to related numerical analysis research are discussed to clarify the reliability of the techniques. Finally, its implementation as a practical modal decomposition method for efficiently computing the matrix exponential, transfer functions, and frequency response is also described.
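
The numerical difficulty this record alludes to is easy to reproduce: with nearly repeated eigenvalues the eigenvector matrix is nearly singular, so a naive modal computation of the matrix exponential loses accuracy. A small sketch of the failure mode, using a contrived 2x2 matrix and SciPy's Pade-based expm as the reference:

```python
import numpy as np
from scipy.linalg import expm   # scaling-and-squaring reference implementation

eps = 1e-9                      # nearly repeated eigenvalues
A = np.array([[1.0, 1.0],
              [0.0, 1.0 + eps]])

lam, V = np.linalg.eig(A)
print("cond(V) =", np.linalg.cond(V))   # enormous: eigenvectors nearly parallel

# Naive modal computation of exp(A): V diag(exp(lam)) V^{-1}.
expA_modal = (V * np.exp(lam)) @ np.linalg.inv(V)

# Accuracy lost relative to the Pade-based reference.
print("max error:", np.abs(expA_modal - expm(A)).max())
```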

Walker, Robert A.; Bryson, Arthur E., Jr.

1990-01-01

209

Exploiting parallel microprocessor microarchitectures with a compiler code generator  

Microsoft Academic Search

With advances in VLSI technology, microprocessor designers can provide more microarchitectural parallelism to increase performance. We have identified four major forms of such parallelism: multiple microoperations issued per cycle, multiple result distribution buses, multiple execution units, and pipelined execution units. The experiments reported in this paper address two important issues: The effects of these forms and the appropriate balance among

Wen-mei W. Hwu; Pohua P. Chang

1988-01-01

210

Validity and reliability of the multidimensional health locus of control scale for college students  

PubMed Central

Background: The purpose of the present study was to assess the validity and reliability of Form A of the Multidimensional Health Locus of Control scales in Iran. Health locus of control is one of the most widely measured parameters of health belief for the planning of health education programs. Methods: 496 university students participated in this study. The reliability coefficients were calculated by three different methods: test-retest, parallel forms, and Cronbach alpha. To survey the validity of the scale, we used three methods: content validity, concurrent validity, and construct validity. Results: We established the content validity of the Persian translation by translating (and then back-translating) each item from the English version into the Persian version. The concurrent validity of the questionnaire, as measured against Levenson's IPC scale, was .57 (P < .001), .49 (P < .01), and .53 (P < .001) for the I, P, and C subscales, respectively. Exploratory principal components analysis supported a three-factor structure, with items loading adequately on each factor. Moreover, the approximate orthogonality of the dimensions was confirmed through correlation analyses. The reliability results were also acceptable. Conclusion: The results showed that the reliability and validity of the Persian Form A of the MHLC were acceptable, and the scale is suggested as an applicable criterion for similar studies in Iran.
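
Of the three reliability methods the study names, Cronbach's alpha is the easiest to reproduce from raw item scores: alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores). A minimal sketch; the score matrix below is simulated, not the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, k_items) matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                  # common trait
items = latent + 0.8 * rng.normal(size=(200, 6))    # 6 noisy items
print(round(cronbach_alpha(items), 3))
```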

Moshki, Mahdi; Ghofranipour, Fazlollah; Hajizadeh, Ebrahim; Azadfallah, Parviz

2007-01-01

211

Evaluation of competing software reliability predictions  

NASA Technical Reports Server (NTRS)

Different software reliability models can produce very different answers when called upon to predict future reliability in a reliability growth context. Users need to know which, if any, of the competing predictions are trustworthy. Some techniques are presented which form the basis of a partial solution to this problem. Rather than attempting to decide which model is generally best, the approach adopted here allows a user to decide upon the most appropriate model for each application.

Abdel-Ghaly, A. A.; Chan, P. Y.; Littlewood, B.

1986-01-01

212

Comprehensive Design Reliability Activities for Aerospace Propulsion Systems  

NASA Technical Reports Server (NTRS)

This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion systems mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

Christenson, R. L.; Whitley, M. R.; Knight, K. C.

2000-01-01

213

A high-speed linear algebra library with automatic parallelism  

NASA Technical Reports Server (NTRS)

Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

Boucher, Michael L.

1994-01-01

214

Sublattice parallel replica dynamics  

NASA Astrophysics Data System (ADS)

Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998), 10.1103/PhysRevB.57.R13985] by combining it with the synchronous sublattice approach of Shim and Amar [Y. Shim and J. G. Amar, Phys. Rev. B 71, 125432 (2005), 10.1103/PhysRevB.71.125432], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.

Martínez, Enrique; Uberuaga, Blas P.; Voter, Arthur F.

2014-06-01

215

Redundant system reliability analysis  

NASA Technical Reports Server (NTRS)

The Computer Aided Redundant System Reliability Analysis (CARSRA) program facilitates reliability assessment of fault-tolerant reconfigurable systems. CARSRA accounts for influences from transient faults and is used to model a wide range of redundancy management strategies.

Masreliez, C. J.

1979-01-01

216

Reliability Prediction for Spacecraft.  

National Technical Information Service (NTIS)

This study provides the basis for improving the utility of Mil-Hdbk-217 for reliability prediction of spacecraft components and systems. The reliability performance histories of 300 satellite vehicles, which were launched between the early 1960's through ...

H. Hecht; M. Hecht

1985-01-01

217

Human Reliability Program Overview  

SciTech Connect

This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

Bodin, Michael

2012-09-25

218

Scalable Parallel Matrix Multiplication on Distributed Memory Parallel Computers  

Microsoft Academic Search

Consider any known sequential algorithm for matrix multiplication over an arbitrary ring with time complexity O(N^alpha), where 2 < alpha <= 3. We show that such an algorithm can be parallelized on a distributed memory parallel computer (DMPC) in O(log N) time by using N^alpha/log N processors. Such a parallel computation is cost optimal and matches the performance of PRAM. Furthermore, our parallelization on a DMPC

Keqin Li

2000-01-01

219

Reliability model generator specification  

NASA Technical Reports Server (NTRS)

The Reliability Model Generator (RMG), a program which produces reliability models from block diagrams for ASSIST, the interface to the reliability evaluation tool SURE, is described. An account is given of the motivation for RMG, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed traces of examples.

Cohen, Gerald C.; Mccann, Catherine

1990-01-01

220

Reliability as Argument  

ERIC Educational Resources Information Center

Reliability consists of both important social and scientific values and methods for evidencing those values, though in practice methods are often conflated with the values. With the two distinctly understood, a reliability argument can be made that articulates the particular reliability values most relevant to the particular measurement situation…

Parkes, Jay

2007-01-01

221

Predicting software reliability  

NASA Technical Reports Server (NTRS)

A detailed look is given to software reliability techniques. A conceptual model of the failure process is examined, and some software reliability growth models are discussed. Problems for which no current solutions exist are addressed, emphasizing the very difficult problem of safety-critical systems for which the reliability requirements can be enormously demanding.

Littlewood, B.

1989-01-01

222

SPINning parallel systems software.  

SciTech Connect

We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

Matlin, O.S.; Lusk, E.; McCune, W.

2002-03-15

223

Using multivariate generalizability theory to assess the effect of content stratification on the reliability of a performance assessment.  

PubMed

In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are randomly sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring this stratification in the reliability analysis results in an underestimate of "parallel forms" reliability, and an overestimate of the person-by-task component. This research explores the effect of representing and misrepresenting the stratification appropriately in estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that the proper specification of the analytic design is essential in yielding the proper information both about the generalizability of the assessment and the standard error of measurement. Further, illustrative D studies present the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed. PMID:20509047
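
The person-by-task component at issue here can be made concrete with the simplest one-facet generalizability study: in a fully crossed persons x tasks design, the variance components fall out of the ANOVA mean squares, and the generalizability coefficient is person variance over person variance plus task-averaged error. A sketch under those simplifying assumptions; it deliberately ignores the content stratification that the paper shows must be modeled:

```python
import numpy as np

def g_study(x):
    """One-facet crossed p x t design: estimate variance components
    from mean squares and return the generalizability coefficient."""
    n_p, n_t = x.shape
    grand = x.mean()
    p_eff = x.mean(axis=1) - grand
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand

    ms_p = n_t * (p_eff ** 2).sum() / (n_p - 1)
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_t - 1))

    var_p = max((ms_p - ms_res) / n_t, 0.0)   # person variance component
    var_pt = ms_res                           # interaction/residual component
    return var_p / (var_p + var_pt / n_t)     # E(rho^2) for an n_t-task form

rng = np.random.default_rng(1)
person = rng.normal(0, 1.0, size=(50, 1))
task = rng.normal(0, 0.5, size=(1, 8))
x = person + task + rng.normal(0, 1.0, size=(50, 8))
print(round(g_study(x), 3))    # roughly 1 / (1 + 1/8) for these settings
```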

Keller, Lisa A; Clauser, Brian E; Swanson, David B

2010-12-01

224

Technique Used in Determining Field Operational Reliability.  

National Technical Information Service (NTIS)

The report depicts several of the problems involved in conducting an investigation to determine the operational reliability of an Army radio set utilized under actual field conditions. A reporting form was distributed to the troops prior to the exercise c...

J. W. D'Oria

1966-01-01

225

Parallel Complexity of Matrix Multiplication  

Microsoft Academic Search

Effective design of parallel matrix multiplication algorithms relies on the consideration of many interdependent issues based on the underlying parallel machine or network upon which such algorithms will be implemented, as well as the type of methodology utilized by an algorithm. In this paper, we determine the parallel complexity of multiplying two (not necessarily square) matrices on parallel distributed-memory machines

Eunice E. Santos

2003-01-01

226

Scalable Parallel Programming with CUDA  

Microsoft Academic Search

The advent of multicore CPUs and manycore GPUs means that mainstream processor chips are now parallel systems. Furthermore, their parallelism continues to scale with Moore's law. The challenge is to develop mainstream application software that transparently scales its parallelism to leverage the increasing number of processor cores, much as 3D graphics applications transparently scale their parallelism to manycore GPUs with

John Nickolls; Ian Buck; Michael Garland; Kevin Skadron

2008-01-01

227

Effective Automatic Parallelization with Polaris  

Microsoft Academic Search

The Polaris project has delivered a new parallelizing compiler that overcomes severe limitations of current compilers. While available parallelizing compilers may succeed on small kernels, they often fail to extract any meaningful parallelism from large applications. In contrast, Polaris has proven to speed up real programs significantly beyond the degree achieved by the parallelization tools available on the SGI

William Blume; Rudolf Eigenmann; Keith Faigin; John Grout; Jay Hoeflinger; David Padua; Paul Petersen; William Pottenger; Lawrence Rauchwerger; Peng Tu; Stephen Weatherford

1995-01-01

228

Parallel Spectral Numerical Methods  

NSDL National Science Digital Library

This module teaches the principles of Fourier spectral methods, their utility in solving partial differential equations, and how to implement them in code. Performance considerations for several Fourier spectral implementations are discussed and methods for effective scaling on parallel computers are explained.

Chen, Gong; Cloutier, Brandon; Li, Ning; Muite, Benson; Rigge, Paul

229

Parallel Merge Sort  

Microsoft Academic Search

We give a parallel implementation of merge sort on a CREW PRAM that uses n processors and O(log n) time; the constant in the running time is small. We also give a more complex version of the algorithm for the EREW PRAM; it also uses n processors and O(log n) time. The constant in the running time is still moderate, though not
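
The shape of the algorithm (sort runs in parallel, then merge) survives translation to far more modest settings than a PRAM. A sketch that uses a process pool for the parallel phase and a k-way merge to combine; this illustrates the idea only and does not reproduce the paper's O(log n) CREW PRAM construction:

```python
import heapq
from concurrent.futures import ProcessPoolExecutor

def parallel_merge_sort(data, workers=4):
    step = -(-len(data) // workers)            # ceiling division
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        runs = list(pool.map(sorted, chunks))   # sort the runs in parallel
    return list(heapq.merge(*runs))             # k-way merge of sorted runs

if __name__ == "__main__":
    import random
    xs = [random.random() for _ in range(10_000)]
    assert parallel_merge_sort(xs) == sorted(xs)
```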

Richard Cole

1986-01-01

230

Parallel hierarchical global illumination  

SciTech Connect

Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

Snell, Q.O.

1997-10-08

231

Parallel programming with PCN  

SciTech Connect

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

Foster, I.; Tuecke, S.

1993-01-01

232

Massively parallel processor computer  

NASA Technical Reports Server (NTRS)

An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

Fung, L. W. (inventor)

1983-01-01

233

Remarks on Parallel Analysis.  

ERIC Educational Resources Information Center

Use of parallel analysis (PA), a selection rule for the number-of-factors problem, is investigated from the viewpoint of permutation assessment through a Monte Carlo simulation. Results reveal advantages and limitations of PA. Tables of sample eigenvalues are included. (SLD)
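
Parallel analysis itself is short to implement: retain components whose observed eigenvalues exceed those obtained from random data of the same dimensions, with the random benchmark summarized by a Monte Carlo percentile. A minimal sketch; thresholding at the 95th percentile is one common convention rather than anything prescribed by this paper:

```python
import numpy as np

def parallel_analysis(data, n_iter=500, pct=95, seed=0):
    n, p = data.shape
    rng = np.random.default_rng(seed)
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.normal(size=(n, p))             # random data, same shape
        rand[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    thresh = np.percentile(rand, pct, axis=0)
    return int(np.sum(obs > thresh))            # number of factors to retain

rng = np.random.default_rng(1)
f = rng.normal(size=(300, 2))                   # two latent factors
x = f @ rng.normal(size=(2, 10)) + rng.normal(size=(300, 10))
print(parallel_analysis(x))                     # typically 2
```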

Buja, Andreas; Eyuboglu, Nermin

1992-01-01

234

Parallel and distributed computation  

Microsoft Academic Search

This book focuses on numerical algorithms suited for parallelization for solving systems of equations and optimization problems. Emphasis on relaxation methods of the Jacobi and Gauss-Seidel type, and issues of communication and synchronization. Topics covered include: Algorithms for systems of linear equations and matrix inversion; Iterative methods for nonlinear problems; and Shortest paths and dynamic programming.

Dimitri P. Bertsekas; John N. Tsitsiklis

1989-01-01

235

Parallel distributed viewshed analysis  

Microsoft Academic Search

The paper describes a number of distributed approaches to implementing a parallel visibility algorithm for viewshed analysis. The problem can be simplified by considering a range of domain partitioning strategies for optimizing the processor workloads. The best approaches are shown to work 22 times faster across a network of 24 processors. Such strategies allow traditional

J. Andrew Ware; David B. Kidner; Philip J. Rallings

1998-01-01

236

Hexagonal Parallel Pattern Transformations  

Microsoft Academic Search

The concept of the two-dimensional (2-D) parallel computer with square module arrays was first introduced by Unger. It is the purpose of this paper to discuss the relative merits of square and hexagonal module arrays, to propose an operational symbolism for the various basic hexagonal modular transformations which may be performed by these computers, to illustrate some logical circuit implementation,

M. J. E. Golay

1969-01-01

237

Parallelizing the Data Cube  

Microsoft Academic Search

This paper presents a general methodology for the efficient parallelization of existing data cube construction algorithms. We describe two different partitioning strategies, one for top-down and one for bottom-up cube algorithms. Both partitioning strategies assign subcubes to individual processors in such a way that the loads assigned to the processors are balanced. Our methods reduce inter-processor communication overhead

Frank K. H. A. Dehne; Todd Eavis; Susanne E. Hambrusch; Andrew Rau-chaplin

2001-01-01

238

High performance parallel architectures  

SciTech Connect

In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user/programmer's point of view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

Anderson, R.E. (Lawrence Livermore National Lab., CA (USA))

1989-09-01

239

Parallel Traveling Salesman Problem  

NSDL National Science Digital Library

The traveling salesman problem is a classic optimization problem in which one seeks to minimize the path taken by a salesman in traveling between N cities, where the salesman stops at each city one and only one time, never retracing his/her route. This implementation is designed to run on UNIX systems with X-Windows, and includes parallelization using MPI.

Joiner, David; Hassinger, Jonathan

240

Compositional parallel programming languages.  

SciTech Connect

In task-parallel programs, diverse activities can take place concurrently, and communication and synchronization patterns are complex and not easily predictable. Previous work has identified compositionality as an important design principle for task-parallel programs. In this article, we discuss alternative approaches to the realization of this principle, which holds that properties of program components should be preserved when those components are composed in parallel with other program components. We review two programming languages, Strand and Program Composition Notation, that support compositionality via a small number of simple concepts, namely, monotone operations on shared objects, a uniform addressing mechanism, and parallel composition. Both languages have been used extensively for large-scale application development, allowing us to provide an informed assessment of both their strengths and their weaknesses. We observe that while compositionality simplifies development of complex applications, the use of specialized languages hinders reuse of existing code and tools and the specification of domain decomposition strategies. This suggests an alternative approach based on small extensions to existing sequential languages. We conclude the article with a discussion of two languages that realized this strategy.

Foster, I.; Mathematics and Computer Science

1996-01-01

241

Parallel Plate Detectors.  

National Technical Information Service (NTIS)

A 5x3 cm^2 (timing only) and a 15x5 cm^2 (timing and position) parallel plate avalanche counter (PPAC) are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate

D. Gardes; P. Volkov

1981-01-01

242

Reliability models for dataflow computer systems  

NASA Technical Reports Server (NTRS)

The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

Kavi, K. M.; Buckles, B. P.

1985-01-01

243

User's guide to the Reliability Estimation System Testbed (REST)  

NASA Technical Reports Server (NTRS)

The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

1992-01-01

244

Banking on NUG reliability  

SciTech Connect

Plant reliability is an important issue that has been raised frequently in the context of purchased power. Most recently, section 712 of the 1992 Energy Policy Act asked state regulators to explore whether the use of highly leveraged capital structures by exempt wholesale generators (EWGs) threatens reliability. This article shows that the relative reliability of nonutility generators (NUGs) and utility-owned generation varies from case to case. Therefore, absolute statements about NUG reliability as a function of financial leverage are not only difficult to make but also highly suspect. NUG reliability is a contentious issue. Some parties strongly support the view that NUG reliability generally exceeds that of utility-owned generation. Others question this view. Indeed, studies that showed NUG plants to be more reliable than utility-owned generation may have overstated NUG reliability for several reasons. First, NUG plants are, on average, newer than utility plants; newer plants tend to be more reliable. Second, the relatively continuous operation of NUGs causes less wear and tear. And third, NUG reliability data may suffer from a self-selection bias: comprehensive data on utility plant performance exist, but NUG plant performance data have been compiled from survey responses and successful NUGs may have been more likely to respond.

Kolbe, L.; Johnson, S.; Pfeifenberger, J.

1994-05-15

245

Parallel Web prefetching on cluster server  

Microsoft Academic Search

Prefetching is an important technique for a single Web server to reduce the average Web access latency, and applying it on a cluster server will produce better performance. Two models for parallel Web prefetching on a cluster server, described in the form of I/O automata, are proposed in this paper according to the different service approaches of the Web cluster server: session persistence and

Cairong Yan; Junyi Shen; Qinke Peng

2005-01-01

246

Reliability quantification and visualization for electric microgrids  

NASA Astrophysics Data System (ADS)

The electric grid in the United States is undergoing modernization from the state of an aging infrastructure of the past to a more robust and reliable power system of the future. The primary efforts in this direction have come from the federal government through the American Recovery and Reinvestment Act of 2009 (Recovery Act). This has provided the U.S. Department of Energy (DOE) with $4.5 billion to develop and implement programs through DOE's Office of Electricity Delivery and Energy Reliability (OE) over a period of 5 years (2008-2012). This was initially a part of Title XIII of the Energy Independence and Security Act of 2007 (EISA), which was later modified by the Recovery Act. As a part of DOE's Smart Grid Programs, Smart Grid Investment Grants (SGIG) and Smart Grid Demonstration Projects (SGDP) were developed as two of the largest programs, with federal grants of $3.4 billion and $600 million respectively. The Renewable and Distributed Systems Integration (RDSI) demonstration projects were launched in 2008 with the aim of reducing peak electricity demand by 15 percent at distribution feeders. Nine such projects were competitively selected, located around the nation. The City of Fort Collins, in co-operative partnership with other federal and commercial entities, was identified to research, develop and demonstrate a 3.5 MW integrated mix of heterogeneous distributed energy resources (DER) to reduce peak load on two feeders by 20-30 percent. This project was called FortZED RDSI and provided an opportunity to demonstrate integrated operation of a group of assets, including demand response (DR), as a single controllable entity, which is often called a microgrid. As per IEEE Standard 1547.4-2011 (IEEE Guide for Design, Operation, and Integration of Distributed Resource Island Systems with Electric Power Systems), a microgrid can be defined as an electric power system which has the following characteristics: (1) DR and load are present, (2) has the ability to disconnect from and parallel with the area Electric Power Systems (EPS), (3) includes the local EPS and may include portions of the area EPS, and (4) is intentionally planned. A more reliable electric power grid requires microgrids to operate in tandem with the EPS. The reliability can be quantified through various metrics for performance measure. This is done through North American Electric Reliability Corporation (NERC) metrics in North America. The microgrid differs significantly from the traditional EPS, especially at the asset level, due to heterogeneity in assets. Thus, the performance cannot be quantified by the same metrics as used for the EPS. Some of the NERC metrics are calculated and interpreted in this work to quantify performance for a single asset and a group of assets in a microgrid. Two more metrics are introduced for system-level performance quantification. The next step is a better representation of the large amount of data generated by the microgrid. Visualization is one such form of representation, which is explored in detail, and a graphical user interface (GUI) is developed as a deliverable tool for the operator for informative decision making and planning. Electronic appendices I and II contain data and MATLAB© program codes for analysis and visualization for this work.
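
The record does not spell out which NERC metrics it computes for single assets versus groups of assets, so as a stand-in the sketch below uses the simplest such measure, availability over a reporting period (service hours over service plus forced-outage hours); the formula choice and all operating data are assumptions for illustration:

```python
def availability(service_hours, forced_outage_hours):
    """Availability of one asset over a reporting period:
    hours in service divided by service plus forced-outage hours."""
    return service_hours / (service_hours + forced_outage_hours)

# Hypothetical operating data (hours) for three microgrid assets.
assets = {"pv": (4000.0, 50.0), "chp": (7000.0, 300.0), "battery": (6000.0, 120.0)}

for name, (sh, foh) in assets.items():
    print(name, round(availability(sh, foh), 4))   # single-asset metric

# One simple group-level roll-up: hour-weighted fleet availability.
total_sh = sum(sh for sh, _ in assets.values())
total_foh = sum(foh for _, foh in assets.values())
print("group", round(total_sh / (total_sh + total_foh), 4))
```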

Panwar, Mayank

247

Reliability of neural encoding  

NASA Astrophysics Data System (ADS)

The reliability with which a neuron is able to create the same firing pattern when presented with the same stimulus is of critical importance to the understanding of neuronal information processing. We show that reliability is closely related to the process of phaselocking. Experimental results for the reliability of neuronal firing in the spinal cord of rat are presented and compared to results from an integrate and fire model.

Alstrøm, Preben; Beierholm, Ulrik; Nielsen, Carsten Dahl; Ryge, Jesper; Kiehn, Ole

2002-11-01

248

Software Reliability Improvement Techniques  

Microsoft Academic Search

Digital systems offer various advantages over analog systems. Their use in large-scale control systems has greatly expanded in recent years. This raises challenging issues to be resolved. Extremely high confidence in software reliability is one issue for safety-critical systems, such as NPPs. Some issues related to software reliability are tightly coupled with software faults to evaluate software reliability (Chapter 4). There

Han Seong Son; Seo Ryong Koo

249

Human reliability analysis  

SciTech Connect

The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory, drawing upon reliability analysis, psychology, human factors engineering, and statistics, and integrating elements of these fields within a systems framework. The treatment provides a history of human reliability analysis and includes examples of the application of the systems approach.

Dougherty, E.M.; Fragola, J.R.

1988-01-01

250

Parallel Consensual Neural Networks  

NASA Technical Reports Server (NTRS)

A new neural network architecture is proposed and applied in classification of remote sensing/geographic data from multiple sources. The new architecture is called the parallel consensual neural network and its relation to hierarchical and ensemble neural networks is discussed. The parallel consensual neural network architecture is based on statistical consensus theory. The input data are transformed several times and the different transformed data are applied as if they were independent inputs and are classified using stage neural networks. Finally, the outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote sensing data and geographic data are given. The performance of the consensual neural network architecture is compared to that of a two-layer (one hidden layer) conjugate-gradient backpropagation neural network. The results with the proposed neural network architecture compare favorably in terms of classification accuracy to the backpropagation method.

Benediktsson, J. A.; Sveinsson, J. R.; Ersoy, O. K.; Swain, P. H.

1993-01-01

251

Parallel Subconvolution Filtering Architectures  

NASA Technical Reports Server (NTRS)

These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
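
The overlap-and-save mechanics underlying these architectures can be shown in a few lines: filter a long signal with a modest DFT-IDFT pair by transforming overlapping blocks and keeping only the alias-free samples of each inverse transform. A serial NumPy sketch of the method; the FFT size is arbitrary, and the paper's parallel, subfilter-decomposed form is not attempted here:

```python
import numpy as np

def overlap_save(x, h, nfft=256):
    """Long FIR filtering via the DFT/IDFT overlap-and-save method."""
    m = len(h)
    step = nfft - m + 1                              # new input samples per block
    H = np.fft.rfft(h, nfft)                         # fixed frequency-domain filter
    x_pad = np.concatenate([np.zeros(m - 1), x, np.zeros(nfft)])
    out = []
    for start in range(0, len(x), step):
        block = x_pad[start:start + nfft]            # overlaps previous block by m-1
        yb = np.fft.irfft(np.fft.rfft(block) * H, nfft)
        out.append(yb[m - 1:])                       # discard the aliased samples
    return np.concatenate(out)[:len(x)]

rng = np.random.default_rng(0)
x, h = rng.normal(size=4000), rng.normal(size=64)
assert np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)])
```

Note how the FFT size is set by the desired block rate, not by the filter order, which is the property the report emphasizes.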

Gray, Andrew A.

2003-01-01

252

The massively parallel processor  

NASA Technical Reports Server (NTRS)

Future sensor systems will utilize massively parallel computing systems for rapid analysis of two-dimensional data. The Goddard Space Flight Center has an ongoing program to develop these systems. A single-instruction multiple data computer known as the Massively Parallel Processor (MPP) is being fabricated for NASA by the Goodyear Aerospace Corporation. This processor contains 16,384 processing elements arranged in a 128 x 128 array. The MPP will be capable of adding more than 6 billion 8-bit numbers per second. Multiplication of eight-bit numbers can occur at a rate of 2 billion per second. Delivery of the MPP to Goddard Space Flight Center is scheduled for 1983.

Schaefer, D. H.; Fischer, J. R.; Wallgren, K. R.

1980-01-01

253

Collisionless parallel shocks  

NASA Technical Reports Server (NTRS)

Consideration is given to a collisionless parallel shock based on solitary-type solutions of the modified derivative nonlinear Schroedinger equation (MDNLS) for parallel Alfven waves. The standard derivative nonlinear Schroedinger equation is generalized in order to include the possible anisotropy of the plasma distribution and higher-order Korteweg-de Vries-type dispersion. Stationary solutions of MDNLS are discussed. The anisotropic nature of 'adiabatic' reflections leads to the asymmetric particle distribution in the upstream as well as in the downstream regions of the shock. As a result, nonzero heat flux appears near the front of the shock. It is shown that this causes the stochastic behavior of the nonlinear waves, which can significantly contribute to the shock thermalization.

Khabibrakhmanov, I. KH.; Galeev, A. A.; Galinskii, V. L.

1993-01-01

254

PCLIPS: Parallel CLIPS  

NASA Technical Reports Server (NTRS)

PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.

Gryphon, Coranth D.; Miller, Mark D.

1991-01-01

255

Stochastic scheduling of parallel processors  

SciTech Connect

Selected topics of interest from the area of parallel processing systems are investigated. Problems concern specifically an optimal scheduling of jobs subject to a dependency structure, an analysis of the performance of a heuristic assignment schedule in a multiserver system of many competing queues, and the optimal service rate control of a parallel processing system. In general, multi-tasking leads to a stochastic scheduling problem in which n jobs subject to precedence constraints are to be processed on m processors. Of particular interest are intree forms of the precedence constraints and i.i.d. job processing times. Using an optimal stochastic control formulation, it is shown, under some conditions on the distributions, that HLF (Highest Levels First) policies and HLF combined with LERPT (Longest Expected Remaining Processing Time) within each level minimize expected makespan for nonpreemptive and preemptive scheduling, respectively, when m = 2. The relative performance of HLF heuristics is investigated for a model in which the job execution times are i.i.d. with an exponential distribution. Many situations in resource sharing environments can be modeled as a multi-server system of many competing queues.

Ko, S.J.

1985-01-01
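
As a concrete illustration of the HLF policy analyzed above, here is a small simulation sketch of nonpreemptive Highest-Levels-First list scheduling on m identical processors. The intree encoding via a successor map and all names are illustrative; none of the optimality analysis is reproduced.

```python
import heapq

def hlf_makespan(succ, times, m=2):
    """Simulate nonpreemptive Highest-Levels-First list scheduling of an
    intree on m identical processors and return the makespan.
    succ[j] is the unique successor of job j (None for the root)."""
    def level(j):                            # distance from job j to the root
        return 0 if succ[j] is None else 1 + level(succ[j])
    npred = {j: 0 for j in succ}
    for j, s in succ.items():
        if s is not None:
            npred[s] += 1
    ready = [j for j in succ if npred[j] == 0]
    running, clock, busy = [], 0.0, 0        # running: min-heap of (finish, job)
    while ready or running:
        while ready and busy < m:            # start the highest-level ready jobs
            j = max(ready, key=level)
            ready.remove(j)
            heapq.heappush(running, (clock + times[j], j))
            busy += 1
        clock, j = heapq.heappop(running)    # advance to the next completion
        busy -= 1
        if succ[j] is not None:
            npred[succ[j]] -= 1
            if npred[succ[j]] == 0:
                ready.append(succ[j])
    return clock

# Example: leaves 1, 2, 3 feed job 4, which feeds root 5; unit times, m = 2.
succ = {1: 4, 2: 4, 3: 4, 4: 5, 5: None}
print(hlf_makespan(succ, {j: 1.0 for j in succ}))  # 4.0
```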

256

Fast parallel sorting algorithms  

Microsoft Academic Search

A parallel bucket-sort algorithm is presented that requires time O(log n) and the use of n processors. The algorithm makes use of a technique that requires more space than the product of processors and time. A realistic model is used in which no memory contention is permitted. A procedure is also presented to sort n numbers in time O(k log

Daniel S. Hirschberg; R. L. Rivest

1978-01-01

257

Cid: A Parallel, "Shared-Memory" C for Distributed-Memory Machines  

Microsoft Academic Search

Cid is a parallel, “shared-memory” superset of C for distributed-memory machines. A major objective is to keep the entry cost low. For users, the language should be easily comprehensible to a C programmer. For implementors, it should run on standard hardware (including workstation farms); it should not require major new compilation techniques (which may not even be widely applicable); and it should

Rishiyur S. Nikhil

1994-01-01

258

Parallelizing the Data Cube  

Microsoft Academic Search

This paper presents a general methodology for the efficient parallelization of existing data cube construction algorithms. We describe two different partitioning strategies, one for top-down and one for bottom-up cube algorithms. Both partitioning strategies assign subcubes to individual processors in such a way that the loads assigned to the processors are balanced. Our methods reduce inter-processor communication overhead by

Frank K. H. A. Dehne; Todd Eavis; Susanne E. Hambrusch; Andrew Rau-Chaplin

2002-01-01

259

Recalibrating software reliability models  

NASA Technical Reports Server (NTRS)

In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.

Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John

1990-01-01
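
A minimal sketch of the u-plot idea described above, under the assumption that each one-step-ahead prediction supplies a CDF for the next time to failure; the Kolmogorov distance of the u's from uniformity is the usual summary, and the recalibration step itself is not shown. All numbers are illustrative.

```python
import numpy as np

def u_plot(predicted_cdfs, observed_times):
    """Probability-integral transforms u_i = F_i(t_i): if the predictions
    were perfect, the u's would look like a sample from U(0, 1).  Returns
    the sorted u's and the Kolmogorov distance of their empirical CDF from
    the line of unit slope."""
    u = np.sort([F(t) for F, t in zip(predicted_cdfs, observed_times)])
    n = len(u)
    ks = max((np.arange(1, n + 1) / n - u).max(), (u - np.arange(n) / n).max())
    return u, ks

# Example with exponential predictions (rates are illustrative):
cdfs = [lambda t, r=r: 1 - np.exp(-r * t) for r in (0.9, 1.1, 1.0)]
print(u_plot(cdfs, [1.2, 0.7, 1.4]))
```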

260

Operational safety reliability research  

SciTech Connect

Operating reactor events such as the TMI accident and the Salem automatic-trip failures raised the concern that during a plant's operating lifetime the reliability of systems could degrade from the design level that was considered in the licensing process. To address this concern, NRC is sponsoring the Operational Safety Reliability Research project. The objectives of this project are to identify the essential tasks of a reliability program and to evaluate the effectiveness and attributes of such a reliability program applicable to maintaining an acceptable level of safety during the operating lifetime at the plant.

Hall, R.E.; Boccio, J.L.

1986-01-01

261

Recalibrating software reliability models  

NASA Technical Reports Server (NTRS)

In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.

Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John

1989-01-01

262

Making programmable BMS safe and reliable  

SciTech Connect

Burner management systems ensure safe admission of fuel to the furnace and prevent explosions. This article describes how programmable control systems can be every bit as safe and reliable as hardwired or standard programmable logic controller-based designs. High-pressure boilers are required by regulatory agencies and insurance companies alike to be equipped with a burner management system (BMS) to ensure safe admission of fuel to the furnace and to prevent explosions. These systems work in parallel with, but independently of, the combustion and feedwater control systems that start up, monitor, and shut down burners and furnaces. Safety and reliability are the fundamental requirements of a BMS. Programmable control system for BMS applications are now available that incorporate high safety and reliability into traditional microprocessor-based designs. With one of these control systems, a qualified systems engineer applying relevant standards, such as the National Fire Protection Assn (NFPA) 85 series, can design and implement a superior BMS.

Cusimano, J.A.

1995-12-01

263

Scalable Parallel Matrix Multiplication on Distributed Memory Parallel Computers  

Microsoft Academic Search

Consider any known sequential algorithm for matrix multiplication over an arbitrary ring with time complexity O(N^α), where 2 < α ≤ 3. We show that such an algorithm can be parallelized on a distributed memory parallel computer (DMPC) in O(log N) time by using N^α/log N processors. Such a parallel computation is cost optimal and matches the performance of PRAM. Furthermore, our parallelization on a DMPC

Keqin Li

2001-01-01

264

Reliability Analysis and Modeling of ZigBee Networks  

NASA Astrophysics Data System (ADS)

The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adapted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important because these services will be stopped if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree and mesh. The paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of each layer. Channel resource usage, device role, network topology and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In star or tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve the reliability problem. For mesh networks, whose complexity is higher, a division technique is applied: a mesh network is classified into several non-reducible series systems and edge-parallel systems, so its reliability is easily solved using series-parallel systems through our proposed scheme. The numerical results demonstrate that the reliability of mesh networks increases as the number of edges in the parallel systems increases, while the reliability of all three network types drops quickly as the numbers of edges and nodes grow. Greater resource usage is another factor that decreases reliability; in general, network complexity, heavier resource usage and more complex object relationships all lower network reliability.

Lin, Cheng-Min
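
A minimal sketch of the series-parallel reliability arithmetic the analysis above relies on, assuming independent components; the component values in the example are illustrative, not taken from the paper.

```python
def series(rel):
    """Series system: every component must work, R = prod(R_i)."""
    out = 1.0
    for r in rel:
        out *= r
    return out

def parallel(rel):
    """Parallel (redundant) system: R = 1 - prod(1 - R_i)."""
    out = 1.0
    for r in rel:
        out *= 1.0 - r
    return 1.0 - out

# A path through a mesh decomposed into a series chain whose middle stage
# is three redundant edges in parallel (all numbers illustrative):
print(series([0.99, parallel([0.95, 0.95, 0.95]), 0.99]))  # ~0.98
```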

265

Characterization of parallel computers and algorithms  

NASA Astrophysics Data System (ADS)

The principal current development in computing is the advent of the parallel computer in all its various forms - for example pipelined vector computers (CRAY-1 and CYBER 205) and arrays of processors (ICL DAP). This paper defines a two-parameter characterization of such computers that measures both the maximum performance and the amount of hardware parallelism. This allows a rational comparison of the performance of alternative algorithms on widely differing computers. As an example we consider the choice of the best algorithm for the solution of tridiagonal systems of equations.

Hockney, R. W.

1982-06-01
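
The two parameters in question are conventionally Hockney's r_inf (asymptotic peak rate) and n_1/2 (the vector length achieving half of it); a minimal sketch of the resulting timing model follows, with the example numbers purely illustrative.

```python
def exec_time(n, r_inf, n_half):
    """Hockney's two-parameter model: t(n) = (n + n_half) / r_inf, so the
    achieved rate is r(n) = r_inf / (1 + n_half / n)."""
    return (n + n_half) / r_inf

# Illustrative comparison: a fast pipeline with a long start-up
# (r_inf = 100 Mflop/s, n_half = 1000) loses to a slower, short-start-up
# machine (20 Mflop/s, n_half = 20) on short vectors:
for n in (10, 100, 10000):
    print(n, exec_time(n, 100e6, 1000) < exec_time(n, 20e6, 20))
```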

266

Device for balancing parallel strings  

DOEpatents

A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

Mashikian, Matthew S. (Storrs, CT)

1985-01-01

267

Performance Prediction of Large Parallel Applications using Parallel Simulations  

Microsoft Academic Search

Accurate simulation of large parallel applications can be facilitated with the use of direct execution and parallel discrete event simulation. This paper describes the use of COMPASS, a direct execution-driven, parallel simulator for performance prediction of programs that include both communication and I\\/O intensive applications. The simulator has been used to predict the performance of such applications on both distributed

Rajive Bagrodia; Ewa Deeljman; Steven Docy; Thomas Phan

1999-01-01

268

Hawaii electric system reliability.  

SciTech Connect

This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

Silva Monroy, Cesar Augusto; Loose, Verne William

2012-09-01

269

Recalibrating Software Reliability Models.  

National Technical Information Service (NTIS)

In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the ...

S. Brocklehurst P. Y. Chan B. Littlewood J. Snell

1989-01-01

270

Recalibrating Software Reliability Models  

Microsoft Academic Search

In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Worse, we are not even in a position to be able to decide a priori which of the many models is most suitable in a particular context. Our own recent work has

Sarah Brocklehurst; P. Y. Chan; Bev Littlewood; John Snell

1990-01-01

271

Practical software reliability modeling  

Microsoft Academic Search

NASA is increasingly dependent upon software in systems critical to the success of NASA's mission. The capability to accurately measure the reliability of the software in these systems is essential to ensuring that NASA systems will meet mission requirements. The Software Assurance Technology Center at the NASA Goddard Space Flight Center explored software reliability modeling as a practical measurement technique.

Dolores R. Wallace

2001-01-01

272

IEEE Reliability Test System  

Microsoft Academic Search

This report describes a load model, generation system, and transmission network which can be used to test or compare methods for reliability analysis of power systems. The objective is to define a system sufficiently broad to provide a basis for reporting on analysis methods for combined generation\\/transmission (composite) reliability.

1979-01-01

273

Web as a Parallel Corpus.  

National Technical Information Service (NTIS)

Parallel corpora have become an essential resource for work in multi- lingual natural language processing. In this report, we describe our work using the STRAND system for mining parallel text on the World Wide Web, first reviewing the original algorithm ...

N. A. Smith P. Resnick

2002-01-01

274

The Galley Parallel File System  

NASA Technical Reports Server (NTRS)

As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

Nieuwejaar, Nils; Kotz, David

1996-01-01

275

Parallel Pascal - An extended Pascal for parallel computers  

NASA Technical Reports Server (NTRS)

Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

Reeves, A. P.

1984-01-01

276

A Bayesian approach to reliability and confidence  

NASA Technical Reports Server (NTRS)

The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.

Barnes, Ron

1989-01-01
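
The closed forms mentioned above for time-varying failure rates are not reproduced here; as a minimal illustration of the Bayesian idea with a uniform (worst-case) prior, a conjugate Beta-Binomial sketch for a constant-reliability component follows. All numbers are illustrative.

```python
from scipy.stats import beta

def posterior_reliability(successes, trials, a0=1.0, b0=1.0):
    """Beta-Binomial update: a Beta(a0, b0) prior on component reliability
    (a0 = b0 = 1 is the uniform prior) combined with pass/fail test data
    yields a Beta(a0 + s, b0 + f) posterior."""
    a, b = a0 + successes, b0 + trials - successes
    mean = a / (a + b)
    lo95 = beta.ppf(0.05, a, b)   # one-sided 95% credible lower bound
    return mean, lo95

print(posterior_reliability(48, 50))  # e.g. 48 successes in 50 component tests
```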

277

Parallel Eclipse Project Checkout  

NASA Technical Reports Server (NTRS)

Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to checkout the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a request to checkout for each plug-in in the feature has been inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to checkout now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any Eclipse-based repository with a similar structure. It also can apply build parameters and preferences automatically at the end of the checkout.

Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

2011-01-01
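
A minimal sketch of the parallel-checkout pattern described above, using a configurable thread pool; the feature-file parsing, the `svn` command, and the repository layout are assumptions for illustration, not PEPC's actual code.

```python
import subprocess
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

def checkout_feature(feature_xml, repo_url, threads=8):
    """Parse a feature description and check out each listed plug-in in
    parallel on a configurable thread pool (hypothetical svn layout)."""
    plugins = [p.get("id") for p in ET.parse(feature_xml).getroot().iter("plugin")]
    def checkout(plugin_id):
        subprocess.run(["svn", "checkout", f"{repo_url}/{plugin_id}", plugin_id],
                       check=True)
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(checkout, plugins))   # propagates any checkout failure
```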

278

Fastpath Speculative Parallelization  

NASA Astrophysics Data System (ADS)

We describe Fastpath, a system for speculative parallelization of sequential programs on conventional multicore processors. Our system distinguishes between the lead thread, which executes at almost-native speed, and speculative threads, which execute somewhat slower. This allows us to achieve nontrivial speedup, even on two-core machines. We present a mathematical model of potential speedup, parameterized by application characteristics and implementation constants. We also present preliminary results gleaned from two different Fastpath implementations, each derived from an implementation of software transactional memory.

Spear, Michael F.; Kelsey, Kirk; Bai, Tongxin; Dalessandro, Luke; Scott, Michael L.; Ding, Chen; Wu, Peng

279

Synchronous Parallel Kinetic Monte Carlo  

SciTech Connect

A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.

Martínez, E; Marian, J; Kalos, M H

2006-12-14
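
A schematic of the time-synchronicity idea, assuming the standard null-event construction (each domain's event rate is padded up to a shared maximum so that all domains advance on one clock); boundary-conflict handling and everything else specific to the paper is omitted, and all numbers are illustrative.

```python
import math
import random

def synchronous_step(domain_rates, r_max, clock):
    """One synchronous parallel kMC step: domain i executes a real event
    with probability R_i / r_max and a null (do-nothing) event otherwise,
    while the shared clock advances by an exponential waiting time drawn
    at the common frame rate r_max."""
    fired = [random.random() < r / r_max for r in domain_rates]
    clock += -math.log(random.random()) / r_max
    return fired, clock

# Three domains with unequal total rates, padded to a common r_max:
rates, r_max, t = [2.0, 3.5, 1.0], 4.0, 0.0
for _ in range(5):
    events, t = synchronous_step(rates, r_max, t)
    print(round(t, 4), events)
```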

280

Parallel Computing Experiences with CUDA  

Microsoft Academic Search

The CUDA programming model provides a straightforward means of describing inherently parallel computations, and NVIDIA's Tesla GPU architecture delivers high computational throughput on massively parallel problems. This article surveys experiences gained in applying CUDA to a diverse set of problems and the parallel speedups over sequential codes running on traditional CPU architectures attained by executing key computations on the GPU.

Michael Garland; Scott Le Grand; John Nickolls; Joshua Anderson; Jim Hardwick; Scott Morton; Everett Phillips; Yao Zhang; Vasily Volkov

2008-01-01

281

Parallelized direct execution simulation of message-passing parallel programs  

NASA Technical Reports Server (NTRS)

As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

1994-01-01

282

Parallel beamforming using synthetic transmit beams.  

PubMed

Parallel beamforming is frequently used to increase the acquisition rate of medical ultrasound imaging. However, such imaging systems will not be spatially shift invariant due to significant variation across adjacent beams. This paper investigates a few methods of parallel beamforming that aim at eliminating this flaw and restoring the shift invariance property. The beam-to-beam variations occur because the transmit and receive beams are not aligned. The underlying idea of the main method presented here is to generate additional synthetic transmit beams (STB) through interpolation of the received, unfocused signal at each array element prior to beamforming. Now each of the parallel receive beams can be aligned perfectly with a transmit beam--synthetic or real--thus eliminating the distortion caused by misalignment. The proposed method was compared to the other compensation methods through a simulation study based on the ultrasound simulation software Field II. The results have been verified with in vitro experiments. The simulations were done with parameters similar to a standard cardiac examination with two parallel receive beams and a transmit-line spacing corresponding to the Rayleigh criterion, wavelength times f-number (lambda x f#). From the results presented, it is clear that straightforward parallel beamforming reduces the spatial shift invariance property of an ultrasound imaging system. The proposed method of using synthetic transmit beams seems to restore this important property, enabling higher acquisition rates without loss of image quality. PMID:17328324

Hergum, Torbjørn; Bjåstad, Tore; Kristoffersen, Kjell; Torp, Hans

2007-02-01
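
A minimal sketch of the interpolation idea: per-element, unfocused receive data from two adjacent transmit events are blended to synthesize a transmit beam co-aligned with each parallel receive beam, before receive beamforming. Linear weighting is an assumption here; the paper's actual interpolation scheme may differ.

```python
import numpy as np

def synthetic_transmit_beam(rf_left, rf_right, frac):
    """Blend unfocused per-element receive data (samples x elements) from
    two adjacent transmit events; frac in [0, 1] is the synthetic beam's
    lateral position between the two real transmit beams."""
    return (1.0 - frac) * np.asarray(rf_left) + frac * np.asarray(rf_right)
```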

283

Reliability Generalization of the Psychopathy Checklist Applied in Youthful Samples  

ERIC Educational Resources Information Center

This study examines the average reliability of Hare Psychopathy Checklists (PCLs) adapted for use in samples of youthful offenders (aged 12 to 21 years). Two forms of reliability are examined: 18 alpha estimates of internal consistency and 18 intraclass correlation (two or more raters) estimates of interrater reliability. The results, an average…

Campbell, Justin S.; Pulos, Steven; Hogan, Mike; Murry, Francie

2005-01-01
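
Of the two reliability forms examined above, internal consistency has a standard closed form (Cronbach's alpha); a minimal sketch follows, with the score matrix purely illustrative. The intraclass correlations used for interrater reliability are not shown.

```python
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    for an (examinees x items) score matrix."""
    s = np.asarray(scores, dtype=float)
    k = s.shape[1]
    return k / (k - 1) * (1 - s.var(axis=0, ddof=1).sum()
                          / s.sum(axis=1).var(ddof=1))

# Illustrative 4-examinee x 3-item score matrix:
print(cronbach_alpha([[2, 3, 3], [1, 1, 2], [3, 3, 3], [2, 2, 1]]))
```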

284

The design and implementation of a workbench for application specific message-passing parallel reconfigurable architectures  

Microsoft Academic Search

This thesis develops a message-passing model for the design, simulation and evaluation of parallel reconfigurable architectures. A designer's workbench, OODRA (Object-Oriented Design of Reliable/Reconfigurable Architecture), has been implemented to realize the proposed message-passing model, and it provides a window-based, menu-driven, graphics-interactive environment for designing application-specific parallel architectures as well as the development of reconfiguration algorithms, the reliability analysis,

K. R. D

1989-01-01

285

Tolerant (Parallel) Programming  

NASA Technical Reports Server (NTRS)

In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2(sup 3) is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

DiNucci, David C.; Bailey, David H. (Technical Monitor)

1997-01-01

286

Parallel Eigenvalue extraction  

NASA Technical Reports Server (NTRS)

A new numerical algorithm for the solution of large-order eigenproblems typically encountered in linear elastic finite element systems is presented. The architecture of parallel processing is utilized in the algorithm to achieve increased speed and efficiency of calculations. The algorithm is based on the frontal technique for the solution of linear simultaneous equations and the modified subspace eigenanalysis method for the solution of the eigenproblem. Assembly, elimination and back-substitution of degrees of freedom are performed concurrently, using a number of fronts. All fronts converge to and diverge from a predefined global front during elimination and back-substitution, respectively. In the meantime, reduction of the stiffness and mass matrices required by the modified subspace method can be completed during the convergence/divergence cycle and an estimate of the required eigenpairs obtained. Successive cycles of convergence and divergence are repeated until the desired accuracy of calculations is achieved. The advantages of this new algorithm in parallel computer architecture are discussed.

Akl, Fred A.

1989-01-01

287

Applied Parallel Metadata Indexing  

SciTech Connect

The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, which only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.

Jacobi, Michael R [Los Alamos National Laboratory

2012-08-01

288

Making parallel lines meet  

PubMed Central

The extracellular matrix is constructed beyond the plasma membrane, challenging mechanisms for its control by the cell. In plants, the cell wall is highly ordered, with cellulose microfibrils aligned coherently over a scale spanning hundreds of cells. To a considerable extent, deploying aligned microfibrils determines mechanical properties of the cell wall, including strength and compliance. Cellulose microfibrils have long been seen to be aligned in parallel with an array of microtubules in the cell cortex. How do these cortical microtubules affect the cellulose synthase complex? This question has stood for as many years as the parallelism between the elements has been observed, but now an answer is emerging. Here, we review recent work establishing that the link between microtubules and microfibrils is mediated by a protein named cellulose synthase-interacting protein 1 (CSI1). The protein binds both microtubules and components of the cellulose synthase complex. In the absence of CSI1, microfibrils are synthesized but their alignment becomes uncoupled from the microtubules, an effect that is phenocopied in the wild type by depolymerizing the microtubules. The characterization of CSI1 significantly enhances knowledge of how cellulose is aligned, a process that serves as a paradigmatic example of how cells dictate the construction of their extracellular environment.

Baskin, Tobias I.; Gu, Ying

2012-01-01

289

Photovoltaic system reliability  

SciTech Connect

This paper discusses the reliability of several photovoltaic projects including SMUD's PV Pioneer project, various projects monitored by Ascension Technology, and the Colorado Parks project. System times-to-failure range from 1 to 16 years, and maintenance costs range from 1 to 16 cents per kilowatt-hour. Factors contributing to the reliability of these systems are discussed, and practices are recommended that can be applied to future projects. This paper also discusses the methodology used to collect and analyze PV system reliability data.

Maish, A.B.; Atcitty, C. [Sandia National Labs., NM (United States); Greenberg, D. [Ascension Technology, Inc., Lincoln Center, MA (United States)] [and others

1997-10-01

290

Availability and reliability overview  

SciTech Connect

With the diversity of fuel costs, outages of high voltage direct current (HVDC) systems can have a large economic impact. Available methods to evaluate reliability are based on simple probability and sufficient data. A valid, consistent data base for historical performance is available through CIGRE publications. Additional information on future performance is available from each supplier's bid. Using all available information, including the customer's own estimate of reliability, reliability can be evaluated by calculating the expected value of energy unavailability for each supplier. 4 figures, 2 tables.

Albrecht, P.F.; Fink, J.L.

1984-01-01

291

Reliability Analysis Model  

NASA Technical Reports Server (NTRS)

RAM program determines probability of success for one or more given objectives in any complex system. Program includes failure mode and effects, criticality and reliability analyses, and some aspects of operations, safety, flight technology, systems design engineering, and configuration analyses.

1970-01-01

292

Microcircuit Reliability Bibliography.  

National Technical Information Service (NTIS)

The bibliography identifies literature pertinent to the reliability conscious microelectronic device industry. It is designed to enable the research engineer to conduct retrospective searches. File entry is made through a subject, corporate author or spec...

1971-01-01

293

Substation Reliability Centered Maintenance.  

National Technical Information Service (NTIS)

Substation Reliability Centered Maintenance (RCM) is a technique that is used to develop maintenance plans and criteria so the operational capability of substation equipment is achieved, restored, or maintained. The objective of the RCM process is to focu...

S. L. Purucker

1992-01-01

294

Underfill flow as viscous flow between parallel plates driven by capillary action  

Microsoft Academic Search

Epoxy underfill is often required to enhance the reliability of flip-chip interconnects. This study evaluates the flow of filled epoxy underfill materials between parallel plates driven by capillary action. An exact model was developed to understand the functional relationship between flow distance, flow time, separation distance, surface tension, and viscosity for quasi-steady laminar flow between parallel plates. The model was

Matthew K. Schwiebert; William H. Leong

1995-01-01
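
Under the classical quasi-steady assumptions, the functional relationship described above is a Washburn-type law; a minimal sketch follows. Whether it coincides exactly with the authors' model cannot be confirmed from the abstract, and the property values in the example are illustrative.

```python
import math

def underfill_front(t, gap, surface_tension, contact_angle_deg, viscosity):
    """Capillary flow front between parallel plates (quasi-steady laminar):
    x(t) = sqrt(surface_tension * gap * cos(theta) * t / (3 * viscosity))."""
    theta = math.radians(contact_angle_deg)
    return math.sqrt(surface_tension * gap * math.cos(theta) * t / (3 * viscosity))

# Illustrative values: 50 um gap, 0.04 N/m, 30 deg contact angle, 5 Pa.s, 60 s:
print(underfill_front(60.0, 50e-6, 0.04, 30.0, 5.0))  # flow distance in metres
```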

295

Underfill flow as viscous flow between parallel plates driven by capillary action  

Microsoft Academic Search

Epoxy underfill is often required to enhance the reliability of flip-chip interconnects. This study evaluates the flow of typical epoxy underfill materials between parallel plates driven by capillary action. An exact model was developed to understand the functional relationship between flow distance, flow time, separation distance, surface tension, and viscosity for quasisteady laminar flow between parallel plates. The model was

Matthew K. Schwiebert; William H. Leong

1996-01-01

296

Cooperative Communication Network with Parallel Spreading Method for MC-CDMA Systems  

Microsoft Academic Search

We propose an approach that improves reliability and throughput in wireless terrestrial networks with a novel cooperation strategy based on a parallel spreading method. In the proposed scheme, each spread sequence is constructed as a set of sub-spread sequences by the parallel spreading method. They are distributed to adjacent terminals and delivered in a collaborative fashion. By exploiting multiple radio resources to transmit

Jaesung Lim

2007-01-01

297

PARALLEL ELECTRIC FIELD SPECTRUM OF SOLAR WIND TURBULENCE  

SciTech Connect

By searching through more than 10 satellite years of THEMIS and Cluster data, 3 reliable examples of parallel electric field turbulence in the undisturbed solar wind have been found. The perpendicular and parallel electric field spectra in these examples have similar shapes and amplitudes, even at large scales (frequencies below the ion gyroscale), where Alfvenic turbulence with no parallel electric field component is thought to dominate. The spectra of the parallel electric field fluctuations are power laws with exponents near -5/3 below the ion scales (~0.1 Hz), and with a flattening of the spectrum in the vicinity of this frequency. At small scales (above a few Hz), the spectra are steeper than -5/3 with values in the range of -2.1 to -2.8. These steeper slopes are consistent with expectations for kinetic Alfven turbulence, although their amplitude relative to the perpendicular fluctuations is larger than expected.

Mozer, F. S.; Chen, C. H. K., E-mail: fmozer@ssl.berkeley.edu [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States)

2013-05-01

298

Message based event specification for debugging nondeterministic parallel programs  

SciTech Connect

Portability and reliability of parallel programs can be severely impaired by their nondeterministic behavior. Therefore, an effective means to precisely and accurately specify unacceptable nondeterministic behavior is necessary for testing and debugging parallel programs. In this paper we describe a class of expressions, called Message Expressions, that can be used to specify nondeterministic behavior of message passing parallel programs. Specification of program behavior with Message Expressions is easier than with pattern-based specification techniques, in that the former does not require knowledge of run-time event order, whereas the latter depends on the user's knowledge of the run-time event order for correct specification. We also discuss our adaptation of Message Expressions for use in a dynamic distributed testing and debugging tool, called mdb, for programs written for PVM (Parallel Virtual Machine).

Damodaran-Kamal, S.K. [Los Alamos National Lab., NM (United States); Francioni, J.M. [University of Southwestern Louisiana, Lafayette, LA (United States)

1995-02-01

299

The Journey Toward Reliability  

NSDL National Science Digital Library

Kansas State University faculty members have partnered with industry to assist in the implementation of a reliability centered manufacturing (RCM) program. This paper highlights faculty members' experiences, benefits to industry of implementing a reliability centered manufacturing program, and faculty members' roles in the RCM program implementation. The paper includes lessons learned by faculty members, short-term extensions of the faculty-industry partnership, and a long-term vision for an RCM institute at the university level.

Brockway, Kathy V.; Spaulding, Greg

2010-03-15

300

Laser reliability prediction  

Microsoft Academic Search

This report presents the results of a program to locate, collect, and analyze laser reliability data, to construct laser reliability models, and to prepare revision sheets suitable for inclusion as a revision of Section 2.4, Lasers, in Mil-HDBK-217B. The report describes the methodology, analyses, models, failure rates, factors, and ground rules involved in the effort. It summarizes 10 million item-hours

T. R. Gagnier; E. W. Kimball; R. R. Selleck

1975-01-01

301

Multidisciplinary System Reliability Analysis  

NASA Technical Reports Server (NTRS)

The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

2001-01-01

302

Toward Parallel Document Clustering  

SciTech Connect

A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a triangle-inequality-obeying distance metric; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program, and initial performance results of an end-to-end document processing workflow are reported.

Mogill, Jace A.; Haglin, David J.

2011-09-01
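
A minimal, sequential sketch of the triangle-inequality pruning at the heart of the Anchors Hierarchy (the names are illustrative, and the authors' parallel extension is not shown): if the current best pivot p satisfies d(p, q) >= 2·d(x, p), then d(x, q) >= d(p, q) - d(x, p) >= d(x, p), so the distance from x to pivot q need not be computed.

```python
def assign_to_pivots(docs, pivots, dist):
    """Map each document to its nearest pivot, skipping distance
    computations that the triangle inequality proves cannot improve
    on the current best pivot."""
    pp = {(p, q): dist(p, q) for p in pivots for q in pivots}  # pivot-pivot table
    owner = {}
    for x in docs:
        best, best_d = pivots[0], dist(x, pivots[0])
        for q in pivots[1:]:
            if pp[(best, q)] >= 2 * best_d:
                continue                 # pruned: q provably no closer than best
            d = dist(x, q)
            if d < best_d:
                best, best_d = q, d
        owner[x] = best
    return owner
```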

303

Parallel Imaging Microfluidic Cytometer  

PubMed Central

By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30 times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take.

Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

2011-01-01

304

Parallelizing OVERFLOW: Experiences, Lessons, Results  

NASA Technical Reports Server (NTRS)

The computer code OVERFLOW is widely used in the aerodynamic community for the numerical solution of the Navier-Stokes equations. Current trends in computer systems and architectures are toward multiple processors and parallelism, including distributed memory. This report describes work that has been carried out by the author and others at Ames Research Center with the goal of parallelizing OVERFLOW using a variety of parallel architectures and parallelization strategies. This paper begins with a brief description of the OVERFLOW code. This description includes the basic numerical algorithm and some software engineering considerations. Next comes a description of a parallel version of OVERFLOW, OVERFLOW/PVM, using PVM (Parallel Virtual Machine). This parallel version of OVERFLOW uses the manager/worker style and is part of the standard OVERFLOW distribution. Then comes a description of a parallel version of OVERFLOW, OVERFLOW/MPI, using MPI (Message Passing Interface). This parallel version of OVERFLOW uses the SPMD (Single Program Multiple Data) style. Finally comes a discussion of alternatives to explicit message-passing in the context of parallelizing OVERFLOW.

Jespersen, Dennis C.

1999-01-01
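
As a minimal illustration of the SPMD style used by OVERFLOW/MPI (sketched here in Python with mpi4py rather than the code's actual implementation, and with the zone count and per-zone work as illustrative placeholders): every rank runs the same program on its own subset of grid zones and joins in global reductions.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_zones = 16                                    # hypothetical multi-zone grid
my_zones = range(rank, n_zones, size)           # round-robin zone ownership
local = sum(zone ** 2 for zone in my_zones)     # stand-in for per-zone solves

total = comm.reduce(local, op=MPI.SUM, root=0)  # e.g., a global residual
if rank == 0:
    print("global residual stand-in:", total)
```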

305

Reliability analysis and reliability-based optimization of composite laminated plate subject to buckling  

NASA Astrophysics Data System (ADS)

The purpose of this thesis is to investigate the effects of variations in design parameters on the reliability of a composite plate subject to buckling. Then, the reliability-based design which maximizes the reliability in terms of the ply orientation angles of the individual layers is obtained. The importance of considering structural reliability in designing a composite plate subject to buckling is illustrated. The composite material is known to have more uncertainties than a conventional material due to the fabrication process. It has been known that a deterministic optimum design is strongly anisotropic and very sensitive to changes in loading conditions. Therefore, it is necessary to consider the effect of variations in design parameters by applying structural reliability theory. The reliability is evaluated by modeling the buckling failure as a series system consisting of potential eigenmodes. The mode reliability is obtained by the first-order reliability method (FORM), where material constants, ply orientation angles and the applied loads are considered as random variables. In order to keep track of the intended buckling mode during the reliability analysis, the mode tracking method is utilized. Then, the failure probability of the series system is approximated by Ditlevsen's upper bound. The reliability-based optimization is formulated to find a laminate construction which maximizes the system reliability. The problem is formulated as a nested problem with two levels of optimization: the reliability analysis and the reliability-based design. Through numerical calculation, the laminate construction of the reliability-based design is shown to be much different from that of the deterministic one. The deterministic buckling-load-maximization design has more than two critical buckling modes, whereas the reliability-based design has close mode reliabilities for the critical modes. The well-balanced mode reliabilities in the latter lead to a higher system reliability. Finally, effects of the correlation between the random variables are investigated. It is clarified that the reliability-based design changes significantly with the correlation coefficients. It is also shown that a reliability-based design ignoring correlation is sometimes less safe than even a deterministic buckling load maximization design when the random variables are correlated.

Kogiso, Nozomu
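
A minimal sketch of the Ditlevsen upper bound named in the abstract; the mode and pairwise joint failure probabilities (which the thesis would obtain from FORM) are illustrative inputs here.

```python
def ditlevsen_upper_bound(p, p_joint):
    """Ditlevsen's upper bound for a series system of n failure modes:
    Pf <= P(F1) + sum_{i=2..n} [ P(Fi) - max_{j<i} P(Fi & Fj) ].
    p[i] is the i-th mode failure probability; p_joint[i][j] is the
    pairwise joint failure probability for modes i and j (j < i)."""
    bound = p[0]
    for i in range(1, len(p)):
        bound += p[i] - max(p_joint[i][j] for j in range(i))
    return bound

# Three modes with illustrative probabilities and pairwise joints:
p = [1e-3, 8e-4, 5e-4]
p_joint = {1: {0: 2e-4}, 2: {0: 1e-4, 1: 2.5e-4}}
print(ditlevsen_upper_bound(p, p_joint))  # 1.85e-3
```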

306

Statistical modelling of software reliability  

NASA Technical Reports Server (NTRS)

During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

Miller, Douglas R.

1991-01-01

307

Parallel processing research in the former Soviet Union  

SciTech Connect

This technical assessment report examines strengths and weaknesses of parallel processing research and development in the Soviet Union from the 1980s to June 1991. The assessment was carried out by a panel of US scientists who are experts on parallel processing hardware, software, algorithms, and applications, and on Soviet computing. Soviet computer research and development organizations have pursued many of the major avenues of inquiry related to parallel processing that the West has chosen to explore. But, the limited size and substantial breadth of their effort have limited the collective depth of Soviet activity. Even more serious limitations (and delays) of Soviet achievement in parallel processing research can be traced to shortcomings of the Soviet computer industry, which was unable to supply adequate, reliable computer components. Without the ability to build, demonstrate, and test embodiments of their ideas in actual high-performance parallel hardware, both the scope of activity and the success of Soviet parallel processing researchers were severely limited. The quality of the Soviet parallel processing research assessed varied from very sound and interesting to pedestrian, with most of the groups at the major hardware and software centers to which the work is largely confined doing good (or at least serious) research. In a few instances, interesting and competent parallel language development work was found at institutions not associated with hardware development efforts. Unlike Soviet mainframe and minicomputer developers, Soviet parallel processing researchers have not concentrated their efforts on reverse-engineering specific Western systems. No evidence was found of successful Soviet attempts to use breakthroughs in parallel processing technology to “leapfrog” impediments and limitations that Soviet industrial weakness in microelectronics and other computer manufacturing areas impose on the performance of high-end Soviet computers.

Dongarra, J.J.; Snyder, L.; Wolcott, P.

1992-03-01

309

Parallel Evaluation of Recursive Rule Queries  

Microsoft Academic Search

We investigate the parallel computational complexity of recursive rule queries. These queries are a subset of first-order relational queries augmented with recursion. They form an important part of the PROLOG language and can be evaluated in PTIME. In (32) Sagiv has shown that it is decidable whether a typed recursive rule query is equivalent to a first-order relational query. We

Stavros S. Cosmadakis; Paris C. Kanellakis

1986-01-01

310

Quantum search by parallel eigenvalue adiabatic passage  

NASA Astrophysics Data System (ADS)

We propose a strategy to implement the Grover search algorithm by adiabatic passage in a very efficient way. An adiabatic process can be characterized by the instantaneous eigenvalues of the pertaining Hamiltonian, some of which form a gap. The key to the efficiency is based on the use of parallel eigenvalues. This allows us to obtain nonadiabatic losses that are exponentially small, independently of the number of items in the database in which the search is performed.

Daems, D.; Guérin, S.; Cerf, N. J.

2008-10-01

311

Orbiter Autoland reliability analysis  

NASA Technical Reports Server (NTRS)

The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended-duration Orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability for the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not yet been determined.

Welch, D. Phillip

1993-01-01
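
To make the reliability-block-diagram arithmetic in the record above concrete, the following minimal Python sketch evaluates a small series/parallel diagram; the component values are illustrative assumptions, not figures from the report.

    # Reliability block diagram arithmetic: series blocks multiply,
    # redundant (parallel) blocks combine as 1 - product of unreliabilities.
    def series(*rels):
        """Reliability when every component must work."""
        r = 1.0
        for x in rels:
            r *= x
        return r

    def parallel(*rels):
        """Reliability when any one redundant component suffices."""
        q = 1.0
        for x in rels:
            q *= (1.0 - x)
        return 1.0 - q

    # Hypothetical example: a sensor in series with two redundant channels.
    sensor, channel = 0.98, 0.95
    print(f"system reliability = {series(sensor, parallel(channel, channel)):.4f}")

Real block diagrams for the Orbiter are far larger, but they reduce to repeated application of these two rules.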

312

Proposed reliability cost model  

NASA Technical Reports Server (NTRS)

The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends upon the use of a series of subsystem-oriented CER's and, where possible, CTR's in devising a suitable cost-effective policy.

Delionback, L. M.

1973-01-01

313

PMESH: A parallel mesh generator  

SciTech Connect

The Parallel Mesh Generation (PMESH) Project is a joint LDRD effort by A Division and Engineering to develop a unique mesh generation system that can construct large calculational meshes (of up to 10^9 elements) on massively parallel computers. Such a capability will remove a critical roadblock to unleashing the power of massively parallel processors (MPPs) for physical analysis. PMESH will support a variety of LLNL 3-D physics codes in the areas of electromagnetics, structural mechanics, thermal analysis, and hydrodynamics.

Hardin, D.D.

1994-10-21

314

Parallel Database Systems: New Issues  

Microsoft Academic Search

Parallel database systems attempt to exploit recent multiprocessor computer architectures in order to build high-performance and high-availability database servers at a much lower price than equivalent mainframe computers. Although there are commercial SQL-based products, a number of open problems hamper the full exploitation of the capabilities of parallel systems. These problems touch on issues ranging from those of parallel processing

PATRICK VALDURIEZ

1993-01-01

315

Parallel Monte Carlo reactor neutronics  

SciTech Connect

The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved.

Blomquist, R.N.; Brown, F.B.

1994-03-01
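
The near-linear speedups reported above rest on the embarrassingly parallel structure of Monte Carlo particle histories. A minimal sketch of that structure, using Python's multiprocessing with per-worker seeds for reproducibility (toy physics only, not either of the parallelized codes):

    import multiprocessing as mp
    import random

    def simulate_batch(args):
        seed, n = args
        rng = random.Random(seed)      # independent, reproducible stream
        # Toy "transport": count particles absorbed (hypothetical physics).
        return sum(1 for _ in range(n) if rng.random() < 0.3)

    if __name__ == "__main__":
        workers, n_per = 4, 250_000
        with mp.Pool(workers) as pool:
            counts = pool.map(simulate_batch, [(s, n_per) for s in range(workers)])
        print("absorbed fraction ~", sum(counts) / (workers * n_per))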

316

Gearbox Reliability Collaborative Update (Presentation)  

SciTech Connect

This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

Sheng, S.

2013-10-01

317

Flexible resource allocation for reliable virtual cluster computing systems  

Microsoft Academic Search

Virtualization and cloud computing technologies now make it possible to create scalable and reliable virtual high performance computing clusters. Integrating these technologies, however, is complicated by fundamental and inherent differences in the way in which these systems allocate resources to computational tasks. Cloud computing systems immediately allocate available resources or deny requests. In contrast, parallel computing systems route all requests

Thomas J. Hacker; Kanak Mahadik

2011-01-01

318

Reliability Assessment for Two Versions of Vocabulary Levels Tests  

ERIC Educational Resources Information Center

This article reports a reliability study of two versions of the Vocabulary Levels Test at the 5000 word level. This study was motivated by a finding from an ongoing longitudinal study of vocabulary acquisition that Version A and Version B of Vocabulary Levels Test at the 5000 word level were not parallel. In order to investigate this issue,…

Xing, Peiling; Fulcher, Glenn

2007-01-01

319

Structural Properties of G,T-Parallel Duplexes  

PubMed Central

The structure of G,T-parallel-stranded duplexes of DNA carrying similar amounts of adenine and guanine residues is studied by means of molecular dynamics (MD) simulations and UV- and CD spectroscopies. In addition the impact of the substitution of adenine by 8-aminoadenine and guanine by 8-aminoguanine is analyzed. The presence of 8-aminoadenine and 8-aminoguanine stabilizes the parallel duplex structure. Binding of these oligonucleotides to their target polypyrimidine sequences to form the corresponding G,T-parallel triplex was not observed. Instead, when unmodified parallel-stranded duplexes were mixed with their polypyrimidine target, an interstrand Watson-Crick duplex was formed. As predicted by theoretical calculations parallel-stranded duplexes carrying 8-aminopurines did not bind to their target. The preference for the parallel-duplex over the Watson-Crick antiparallel duplex is attributed to the strong stabilization of the parallel duplex produced by the 8-aminopurines. Theoretical studies show that the isomorphism of the triads is crucial for the stability of the parallel triplex.

Avino, Anna; Cubero, Elena; Gargallo, Raimundo; Gonzalez, Carlos; Orozco, Modesto; Eritja, Ramon

2010-01-01

320

Parallel processor engine model program  

NASA Technical Reports Server (NTRS)

The Parallel Processor Engine Model Program is a generalized engineering tool intended to aid in the design of parallel processing real-time simulations of turbofan engines. It is written in the FORTRAN programming language and executes as a subset of the SOAPP simulation system. Input/output and execution control are provided by SOAPP; however, the analysis, emulation and simulation functions are completely self-contained. A framework in which a wide variety of parallel processing architectures could be evaluated and tools with which the parallel implementation of a real-time simulation technique could be assessed are provided.

Mclaughlin, P.

1984-01-01

321

Parallel processing and expert systems  

NASA Technical Reports Server (NTRS)

Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

Yan, Jerry C.; Lau, Sonie

1991-01-01

322

HPC Infrastructure for Solid Earth Simulation on Parallel Computers  

NASA Astrophysics Data System (ADS)

Recently, various types of parallel computers with various architectures and processing elements (PE) have emerged, including PC clusters and the Earth Simulator. Moreover, users can easily access these computer resources over the network in a Grid environment. It is well known that thorough tuning is required for programmers to achieve excellent performance on each computer, and the tuning method strongly depends on the type of PE and architecture. Optimization by tuning is very tough work, especially for developers of applications, and parallel programming using a message passing library such as MPI is another big task for application programmers. In the GeoFEM project (http://gefeom.tokyo.rist.or.jp), the authors have developed a parallel FEM platform for solid earth simulation on the Earth Simulator, which supports parallel I/O, parallel linear solvers, and parallel visualization. This platform efficiently hides the complicated procedures for parallel programming and optimization on vector processors from application programmers. This type of infrastructure is very useful: source code developed on a single-processor PC is easily optimized for a massively parallel computer by linking it to the parallel platform installed on the target machine. This parallel platform, called HPC Infrastructure, will provide dramatic efficiency, portability, and reliability in the development of scientific simulation codes. For example, the source code is expected to be under 10,000 lines, and porting legacy codes to a parallel computer takes two or three weeks. The original GeoFEM platform supports only I/O, linear solvers, and visualization; in the present work, further development for adaptive mesh refinement (AMR) and dynamic load balancing (DLB) has been carried out. In this presentation, examples of large-scale solid earth simulation using the Earth Simulator will be demonstrated. Moreover, recent results of a parallel computational steering tool using an MxN communication model will be shown. In an MxN communication model, the large-scale computation modules run on M PEs while high-performance parallel visualization modules run concurrently on N PEs. This allows computation and visualization to select suitable parallel hardware environments respectively. Meanwhile, real-time steering can be achieved during computation so that users can check and adjust the computation process in real time. Furthermore, different numbers of PEs can achieve a better configuration between computation and visualization in a Grid environment.

Nakajima, K.; Chen, L.; Okuda, H.

2004-12-01
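
The MxN model described above can be sketched with mpi4py (assumed available): the world communicator is split into a computation group of M ranks and a visualization group of N ranks, and results are shipped between the groups. Rank counts and message contents below are hypothetical.

    from mpi4py import MPI

    world = MPI.COMM_WORLD
    rank, size = world.Get_rank(), world.Get_size()
    M = size // 2                        # hypothetical split between compute and viz

    color = 0 if rank < M else 1         # 0 = computation group, 1 = visualization
    local = world.Split(color, key=rank) # per-group communicator

    if color == 0 and local.Get_rank() == 0:
        # Compute group's root sends a (toy) field to the viz group's first rank.
        world.send({"step": 1, "field": [0.0] * 8}, dest=M, tag=99)
    elif rank == M:
        data = world.recv(source=0, tag=99)
        print("viz group received step", data["step"])

Run with, e.g., mpiexec -n 4 python script.py.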

323

Software reliability perspectives  

NASA Technical Reports Server (NTRS)

Software which is used in life-critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering, nor fault-tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of their failures can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.

Wilson, Larry; Shen, Wenhui

1987-01-01

324

High Performance Parallel Architectures  

NASA Technical Reports Server (NTRS)

Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation, aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is the Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.

El-Ghazawi, Tarek; Kaewpijit, Sinthop

1998-01-01
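
For reference, the serial core of PCA-based dimension reduction is compact; what the record above parallelizes (and what stresses the interconnect) is chiefly the covariance accumulation across distributed pixels. A minimal NumPy sketch on synthetic data:

    import numpy as np

    def pca_reduce(pixels, k):
        """pixels: (n_samples, n_bands); returns (n_samples, k) scores."""
        centered = pixels - pixels.mean(axis=0)
        cov = np.cov(centered, rowvar=False)          # (n_bands, n_bands)
        evals, evecs = np.linalg.eigh(cov)            # ascending eigenvalues
        top = evecs[:, np.argsort(evals)[::-1][:k]]   # leading k directions
        return centered @ top

    # Hypothetical hyperspectral cube: 10,000 pixels x 224 bands -> 5 components.
    cube = np.random.rand(100 * 100, 224).astype(np.float32)
    print(pca_reduce(cube, 5).shape)                  # (10000, 5)

A parallel version would distribute pixel rows across nodes and reduce partial covariance sums, which is where the communication demand noted above arises.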

325

Global Optimizations for Parallelism and Locality on Scalable Parallel Machines  

Microsoft Academic Search

Data locality is critical to achieving high performance on large-scale parallel machines. Non-local data accesses result in communication that can greatly impact performance. Thus the mapping, or decomposition, of the computation and data onto the processors of a scalable parallel machine is a key issue in compiling programs for these architectures. This paper describes a compiler algorithm that

Jennifer-Ann M. Anderson; Monica S. Lam

1993-01-01

326

Electronic logic for enhanced switch reliability  

DOEpatents

A logic circuit is used to enhance redundant switch reliability. Two or more switches are monitored for logical high or low output. The output for the logic circuit produces a redundant and fail-safe representation of the switch outputs. When both switch outputs are high, the output is high. Similarly, when both switch outputs are low, the logic circuit's output is low. When the output states of the two switches do not agree, the circuit resolves the conflict by memorizing the last output state which both switches were simultaneously in and produces the logical complement of this output state. Thus, the logic circuit of the present invention allows the redundant switches to be treated as if they were in parallel when the switches are open and as if they were in series when the switches are closed. A failsafe system having maximum reliability is thereby produced.

Cooper, J.A.

1984-01-20
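
The behavior claimed in the patent record above can be modeled in a few lines; this is a software sketch of the described logic, not the patented circuit itself.

    class RedundantSwitchLogic:
        """Two monitored switches; a disagreement resolves to the complement
        of the last state both switches were simultaneously in."""
        def __init__(self, initial=False):
            self.last_agreed = initial

        def output(self, sw_a: bool, sw_b: bool) -> bool:
            if sw_a == sw_b:
                self.last_agreed = sw_a   # agreement passes straight through
                return sw_a
            return not self.last_agreed   # conflict: fail-safe complement

    logic = RedundantSwitchLogic()
    print(logic.output(True, True))    # True  (both high)
    print(logic.output(True, False))   # False (conflict after agreed-high)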

327

Quantifying reliability uncertainty : a proof of concept.  

SciTech Connect

This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.

Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna (Los Alamos National Laboratory, Los Alamos, NM); Lorio, John F.; Fatherley, Quinn (Los Alamos National Laboratory, Los Alamos, NM); Anderson-Cook, Christine (Los Alamos National Laboratory, Los Alamos, NM); Wilson, Alyson G. (Los Alamos National Laboratory, Los Alamos, NM); Zurn, Rena M.

2009-10-01
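
A hedged sketch of the Bayesian half of the approach above, for go/no-go data on two series components: sample each component's reliability posterior and propagate the product. Jeffreys Beta(0.5, 0.5) priors are assumed here (the paper's exact choices may differ), and the test counts are made up.

    import numpy as np

    rng = np.random.default_rng(1)
    tests = [(25, 0), (40, 1)]      # (n tests, n failures) per series component
    draws = 10_000

    post = np.ones(draws)
    for n, f in tests:
        post *= rng.beta(0.5 + n - f, 0.5 + f, size=draws)  # component posterior

    lo, hi = np.percentile(post, [5, 95])
    print(f"median {np.median(post):.3f}, 90% interval ({lo:.3f}, {hi:.3f})")

As the record notes, the zero-failure component dominates the width of the resulting interval.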

328

Concrete form block and form block structure  

US Patent & Trademark Office Database

A concrete form block for construction of a building includes first and second panel devices, each having inner and outer faces separated by ribs. Projecting connectors are disposed on the inner faces and each has a pin-receiving aperture. U-shaped couplers are used to connect the two panel devices together so that their inner faces are parallel. Each connector has first and second connecting pins and these are received in the apertures of the connectors of the panel devices, with each pin being pivotable in its aperture after insertion. The panel devices can be moved from a collapsed configuration having at least a reduced space between the inner faces and an in-use configuration with more space between these faces. There is also disclosed a panel structure having upper and lower channel forming frames connected to an outer wall portion thereof. These form a channel for receiving equipment for utilities.

2013-05-21

329

Determining Component Reliability and Redundancy for Optimum System Reliability  

Microsoft Academic Search

The usual constrained reliability optimization problem is extended to include determining the optimal level of component reliability and the number of redundancies in each stage. With cost, weight, and volume constraints, the problem is one in which the component reliability is a variable, and the optimal trade-off between adding components and improving individual component reliability is determined. This is a

Frank A. Tillman; Ching-Lai Hwang

1977-01-01

330

Toward a More Reliable Theory of Software Reliability  

Microsoft Academic Search

The notions of time and the operational profile incorporated into software reliability are incomplete. Reliability should be redefined as a function of application complexity, test effectiveness, and operating environment. We do not yet have a reliability equation that incorporates application complexity, test effectiveness, test suite diversity, and a fuller definition of the operational profile. We challenge the software reliability community to

James A. Whittaker; Jeffrey M. Voas

2000-01-01

331

Reliability Generalization (RG) Analysis: The Test Is Not Reliable  

ERIC Educational Resources Information Center

Literature shows that most researchers are unaware of some of the characteristics of reliability. This paper clarifies some misconceptions by describing the procedures, benefits, and limitations of reliability generalization while using it to illustrate the nature of score reliability. Reliability generalization (RG) is a meta-analytic method…

Warne, Russell

2008-01-01

332

Development of a Short Form of the Roommate Rapport Scale.  

ERIC Educational Resources Information Center

Evaluated a short form of the Roommate Rapport Scale that would maintain the scale's reliability and eliminate potentially objectionable items using students (N=320) who resided in dormitories. Results showed the short form to be reliable and unidimensional. (ABL)

Carey, John C.; And Others

1988-01-01

333

Parametric Mass Reliability Study  

NASA Technical Reports Server (NTRS)

The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs and the mass of ORU subcomponents to reliability.

Holt, James P.

2014-01-01

334

Travel time reliability  

Microsoft Academic Search

Travel time and travel time reliability are important performance measures for assessing traffic condition and extent of congestion on a roadway. Most commonly used methods to obtain travel time data either produce only estimates of travel times or too few travel time data points for meaningful analysis. This study focuses on using a new probe vehicle technique, the Bluetooth technology,

Maria Martchouk

2009-01-01

335

Reliability growth via testing  

Microsoft Academic Search

Observed data values are typically assumed to come from an infinite population of items in reliability and survival analysis applications. The case of a finite population of items with exponentially distributed lifetimes is considered here. The data set consists of the lifetimes of a large number of items that are known to have exponentially distributed failure times with a failure

Lawrence M. Leemis

2010-01-01

336

Reliable Multicast Via Satellites  

Microsoft Academic Search

Automatic repeat request (ARQ) is a well-known technique to provide error control. In reliable satellite multicasting, ARQ may reduce system throughput as the number of receivers increases, since the satellite has to retransmit a packet until all receivers correctly receive it. This performance degradation might be alleviated substantially by conducting retransmissions through terrestrial paths from the sender to each receiver

Guohong Cao; Yiqiong Wu

2001-01-01

337

Reliable broadcast protocols  

Microsoft Academic Search

A reliable broadcast protocol for an unreliable broadcast network is described. The protocol operates between the application programs and the broadcast network. It isolates the application programs from the unreliable characteristics of the communication network. The protocol guarantees that all of the broadcast messages are received at all of the operational receivers in a broadcast group. In addition, the sequence

Jo-Mei Chang; Nicholas F. Maxemchuk

1984-01-01

338

Software reliability report  

NASA Technical Reports Server (NTRS)

There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab-type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost-effective in the real world; thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost-effective manner. The context of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data, which is then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens in AIR-LAB to measure the performance of reliability models.

Wilson, Larry

1991-01-01
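
One way to generate the kind of replicated data discussed above is to simulate the "Basic" (Goel-Okumoto-type) model directly: for mean value function m(t) = a(1 - exp(-bt)), the total fault count is Poisson(a) and the failure times are i.i.d. exponential with rate b. Parameters below are illustrative only.

    import numpy as np

    def simulate_basic(a, b, rng):
        """One replicated debugging run under the Basic NHPP model."""
        n = rng.poisson(a)                       # faults actually surfaced
        return np.sort(rng.exponential(1.0 / b, size=n))

    rng = np.random.default_rng(7)
    for i in range(5):                           # five replications
        t = simulate_basic(a=100, b=0.02, rng=rng)
        print(f"run {i}: {len(t)} failures, first at t = {t[0]:.1f}")

The run-to-run spread in these outputs is exactly the randomness that, per the report, inflates the variance of model predictions.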

339

Parallel execution model for Prolog  

SciTech Connect

One candidate language for parallel symbolic computing is Prolog. Numerous ways for executing Prolog in parallel have been proposed, but current efforts suffer from several deficiencies. Many cannot support fundamental types of concurrency in Prolog. Other models are of purely theoretical interest, ignoring implementation costs. Detailed simulation studies of execution models are scarce; at present little is known about the costs and benefits of executing Prolog in parallel. In this thesis, a new parallel execution model for Prolog is presented: the PPP model or Parallel Prolog Processor. The PPP supports AND-parallelism, OR-parallelism, and intelligent backtracking. An implementation of the PPP is described, through the extension of an existing Prolog abstract machine architecture. Several examples of PPP execution are presented, and compilation to the PPP abstract instruction set is discussed. The performance effects of this model are reported, based on a simulation of a large benchmark set. The implications of these results for parallel Prolog systems are discussed, and directions for future work are indicated.

Fagin, B.S.

1987-01-01

340

Formal verification of parallel programs  

Microsoft Academic Search

Two formal models for parallel computation are presented: an abstract conceptual model and a parallel-program model. The former model does not distinguish between control and data states. The latter model includes the capability for the representation of an infinite set of control states by allowing there to be arbitrarily many instruction pointers (or processes) executing the program. An induction principle

Robert M. Keller

1976-01-01

341

Parallelism in random access machines  

Microsoft Academic Search

A model of computation based on random access machines operating in parallel and sharing a common memory is presented. The computational power of this model is related to that of traditional models. In particular, deterministic parallel RAM's can accept in polynomial time exactly the sets accepted by polynomial tape bounded Turing machines; nondeterministic RAM's can accept in polynomial time exactly

Steven Fortune; James Wyllie

1978-01-01

342

Fast data parallel polygon rendering  

Microsoft Academic Search

This paper describes a data parallel method for polygon rendering on a massively parallel machine. This method, based on a simple shading model, is targeted for applications which require very fast rendering for extremely large sets of polygons. Such sets are found in many scientific visualization applications. The renderer can handle arbitrarily complex polygons which need not be meshed. Issues

Frank A. Ortega; Charles D. Hansen; James P. Ahrens

1993-01-01

343

Parallelizing Monte Carlo with PMC  

SciTech Connect

PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.

Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

1994-11-01

344

Parallel pseudospectral domain decomposition techniques  

NASA Technical Reports Server (NTRS)

The influence of interface boundary conditions on the ability to parallelize pseudospectral multidomain algorithms is investigated. Using the properties of spectral expansions, a novel parallel two domain procedure is generalized to an arbitrary number of domains each of which can be solved on a separate processor. This interface boundary condition considerably simplifies influence matrix techniques.

Gottlieb, David; Hirsch, Richard S.

1989-01-01

345

Parallel pseudospectral domain decomposition techniques  

NASA Technical Reports Server (NTRS)

The influence of interface boundary conditions on the ability to parallelize pseudospectral multidomain algorithms is investigated. Using the properties of spectral expansions, a novel parallel two domain procedure is generalized to an arbitrary number of domains each of which can be solved on a separate processor. This interface boundary condition considerably simplifies influence matrix techniques.

Gottlieb, David; Hirsh, Richard S.

1988-01-01

346

Parallel controlled conspiracy number search  

Microsoft Academic Search

Tree search algorithms play an important role in many applications in the field of artificial intelligence. When playing board games like chess, computers use game tree search algorithms to evaluate a position. In this paper, we present a procedure that we call Parallel Controlled Conspiracy Number Search (Parallel CCNS). We briefly describe the principles of the sequential CCNS algorithm,

Ulf Lorenz

2001-01-01

347

A Kleene Iteration for Parallelism  

Microsoft Academic Search

This paper extends automata-theoretic techniques to unbounded parallel behaviour, as seen for instance in Petri nets. Languages are defined to be sets of (labelled) series-parallel posets or, equivalently, sets of terms in an algebra with two product operations: sequential and parallel. In an earlier paper, we restricted ourselves to languages of posets having bounded width and introduced a notion of branching automaton. In

Kamal Lodaya; Pascal Weil

1998-01-01

348

Massively parallel vector processing computer  

SciTech Connect

This patent describes a vector processing node for a computer of the type having a network of simultaneously operating vector processing nodes interconnected by bidirectional external busses for conveying parallel data words between the vector processing nodes. The vector processing node comprising: a bi-directional first bus for conveying parallel data words; a bi-directional second bus for conveying parallel data words; vector memory means connected for read and write access through the second bus for storing vectors comprising sequences of parallel data words conveyed on the second bus; vector processing means connected to the second bus for transmitting parallel data words to and receiving parallel data words from the vector memory means for generating output vectors comprising functions of input vectors stored in the vector memory means and for storing the output vectors in the vector memory means; and control means including a computer processor connected to the first bus, external port means controlled by the computer processor and connected between the first bus and the external busses, and local port means controlled by the computer processor connected between the first and second busses, for transmitting parallel data words to and receiving parallel data words from the first bus, the second bus, the external busses, and the vector memory.

Call, D.B.; Mudrow, A.; Johnson, R.C.; Bennion, R.F.

1990-01-02

349

Fast Parallel Matrix Inversion Algorithms  

Microsoft Academic Search

In this paper, an investigation of the parallel arithmetic complexity of matrix inversion, solving systems of linear equations, computing determinants and computing the characteristic polynomial of a matrix is reported. The parallel arithmetic complexity of solving equations has been an open question for several years. The gap between the complexity of the best algorithms (2n + O(1), where n is

L. Csanky

1975-01-01

350

Parallelism in Mobile Agent Network  

Microsoft Academic Search

The paper deals with parallel mobile agents and a related performance evaluation framework. A model called mobile agent network is proposed. It includes a multi-agent system consisting of co-operating and communicating mobile agents, a set of processing nodes in which the agents perform services, and a network that connects processing nodes and allows agent mobility. Parallelism in mobile agent network is

Vjekoslav Sinkovic; Ignac Lovrek; Mario Kusek

351

Vertical Pricing and Parallel Imports  

Microsoft Academic Search

We generalize an earlier model of international vertical pricing to explain key features of parallel imports, or unauthorized trade in legitimate goods. When a manufacturer (or trademark owner) sells its product through an independent agent in one country, the agent may find it profitable to engage in parallel trade, selling the product to another country without the authorization of the

Yongmin Chen; KEITH E. MASKUS

2005-01-01

352

The Galley Parallel File System  

NASA Technical Reports Server (NTRS)

Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

Nieuwejaar, Nils; Kotz, David

1996-01-01

353

Reliability Degradation Due to Stockpile Aging  

SciTech Connect

The objective of this research is the investigation of alternative methods for characterizing the reliability of systems with time-dependent failure modes associated with stockpile aging. Reference to 'reliability degradation' has, unfortunately, come to be associated with all types of aging analyses: both deterministic and stochastic. In this research, in keeping with the true theoretical definition, reliability is defined as a probabilistic description of system performance as a function of time. Traditional reliability methods used to characterize stockpile reliability depend on the collection of a large number of samples or observations. Clearly, after the experiments have been performed and the data has been collected, critical performance problems can be identified. A major goal of this research is to identify existing methods and/or develop new mathematical techniques and computer analysis tools to anticipate stockpile problems before they become critical issues. One of the most popular methods for characterizing the reliability of components, particularly electronic components, assumes that failures occur in a completely random fashion, i.e., uniformly across time. This method is based primarily on the use of constant failure rates for the various elements that constitute the weapon system, i.e., the systems do not degrade while in storage. Experience has shown that predictions based upon this approach should be regarded with great skepticism, since the relationship between the predicted life and the observed life has been difficult to validate. In addition to this fundamental problem, the approach does not recognize that there are time-dependent material properties and variations associated with the manufacturing process and the operational environment. To appreciate the uncertainties in predicting system reliability, a number of alternative methods are explored in this report. All of the methods are very different from those currently used to assess stockpile reliability, but have been used extensively in various forms outside Sandia National Laboratories. It is hoped that this report will encourage the use of 'nontraditional' reliability and uncertainty techniques in gaining insight into stockpile reliability issues.

Robinson, David G.

1999-04-01
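
The report's complaint about constant failure rates can be made concrete: an exponential model has no memory and cannot represent aging, while a Weibull model with shape parameter greater than one can. A small illustrative comparison (parameters are hypothetical, not stockpile data):

    import math

    lam = 1.0 / 50.0            # constant-rate model: R(t) = exp(-lam * t)
    eta, beta = 50.0, 2.5       # Weibull aging model: R(t) = exp(-(t/eta)**beta)

    for t in (10, 30, 50):
        r_const = math.exp(-lam * t)
        r_aging = math.exp(-((t / eta) ** beta))
        print(f"t={t:2d}: constant-rate R={r_const:.3f}, Weibull R={r_aging:.3f}")

The Weibull curve stays higher early in life and then drops off sharply as t approaches the characteristic life, a shape no constant-rate model can reproduce.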

354

Space Shuttle Propulsion System Reliability  

NASA Technical Reports Server (NTRS)

This session includes the following presentations: (1) External Tank (ET) System Reliability and Lessons; (2) Space Shuttle Main Engine (SSME): Reliability Validated by a Million Seconds of Testing; (3) Reusable Solid Rocket Motor (RSRM) Reliability via Process Control; and (4) Solid Rocket Booster (SRB) Reliability via Acceptance and Testing.

Welzyn, Ken; VanHooser, Katherine; Moore, Dennis; Wood, David

2011-01-01

355

Parallel-anemometric approach to windmill siting  

SciTech Connect

As more people turn to the use of small-scale wind energy systems, the need to develop a reliable short-term method for windmill siting becomes critical. The parallel-anemometric by direction (PAD) approach to siting is one attempt at such a method. Measurements of speed and direction for a three-month period are taken at both a prospective wind energy site and a site with known wind characteristics. Data is divided into sets by direction, and correlations between the two sites are computed for each set. The Weibull distribution is applied to these correlations in order to extrapolate the short-term data to a long-term prediction of available power at the prospective site.

Halperin, D.A.; Beckman, R.A.

1981-01-01
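
The Weibull extrapolation step above can be sketched with a standard method-of-moments fit (the directional-correlation stage is omitted, and the wind speeds below are made up):

    import math

    def weibull_fit(speeds):
        """Approximate Weibull shape k and scale c from sample moments."""
        n = len(speeds)
        mu = sum(speeds) / n
        sd = (sum((s - mu) ** 2 for s in speeds) / (n - 1)) ** 0.5
        k = (sd / mu) ** -1.086          # common wind-energy approximation
        c = mu / math.gamma(1.0 + 1.0 / k)
        return k, c

    speeds = [4.2, 6.1, 5.5, 7.8, 3.9, 6.6, 5.0, 8.3, 4.7, 6.9]  # m/s, hourly means
    k, c = weibull_fit(speeds)
    e_v3 = c**3 * math.gamma(1.0 + 3.0 / k)   # E[v^3], which drives available power
    print(f"k = {k:.2f}, c = {c:.2f} m/s, E[v^3] = {e_v3:.1f} (m/s)^3")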

356

Reliability and Validity of a Computer Mediated Single-Word Intelligibility Test: Preliminary Findings for Children with Repaired Cleft Lip and Palate  

PubMed Central

Objective To determine the reliability and validity of a computer-mediated, 50 word intelligibility test designed to be a global measure of severity of speech disability in children with repaired cleft lip and palate (CLP). Design A prospective between group design was used with convenience sampling of patients from a university craniofacial center. Participants Thirty-eight children between the ages of 4 and 9 years. Twenty-two had repaired CLP while 16 had no clefts. Twenty adults served as listeners. Main Outcome Measure(s) Speech intelligibility scores were calculated for repeated administrations of a single-word test based upon the number of correct orthographically transcribed words by 4 groups of 5 listeners per child. Measures of parallel forms, inter-listener, and intra-listener reliability were estimated; measures of construct validity were also determined. Results All measures of reliability were adequate. Parallel forms reliability of the test based upon mean scores from 5 listeners per child was high (r=.97). Thirty-seven of 38 children had differences between forms of 11 percentage points or less. Construct validity of the test was shown by a) significantly lower speech intelligibility scores for children with CLP than controls, and b) a moderately high correlation (r=.79) between intelligibility scores and percent consonants correct for all children. Conclusions A computerized, single-word intelligibility test was described which appears to be a reliable and valid measure of global speech deficits in children with CLP. Additional development of the test may further facilitate standardized assessment of children with CLP.

Zajac, David J.; Plante, Caitrin; Lloyd, Amanda; Haley, Katarina L.

2011-01-01
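
The parallel-forms reliability reported above is, at bottom, a correlation between scores on the two alternate word lists. A minimal sketch with invented scores (not the study's data; statistics.correlation requires Python 3.10+):

    import statistics

    form_a = [62, 71, 55, 88, 90, 47, 76, 81]   # % words correct, list A
    form_b = [60, 74, 52, 85, 93, 50, 72, 84]   # % words correct, list B

    r = statistics.correlation(form_a, form_b)  # Pearson r
    print(f"parallel-forms reliability r = {r:.3f}")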

357

Computation and parallel implementation for early vision  

NASA Technical Reports Server (NTRS)

The problem of early vision is to transform one or more retinal illuminance images (pixel arrays) to image representations built out of primitive visual features such as edges, regions, disparities, and clusters. These transformed representations form the input to later vision stages that perform higher-level vision tasks, including matching and recognition. Researchers developed algorithms for: (1) edge finding in the scale space formulation; (2) correlation methods for computing matches between pairs of images; and (3) clustering of data by neural networks. These algorithms are formulated for parallel implementation on SIMD machines, such as the Massively Parallel Processor, a 128 x 128 array processor with 1024 bits of local memory per processor. For some cases, researchers can show speedups of three orders of magnitude over serial implementations.

Gualtieri, J. Anthony

1990-01-01

358

Rethinking validity and reliability in content analysis  

Microsoft Academic Search

The central thesis in this essay is that validity and reliability should be conceptualized differently across the various forms of content and the various uses of theory. This is especially true with applied communication research, where a theory is not always available to guide the design. A distinction needs to be made between manifest and latent (pattern and projective) content. Also,

W. James Potter

1999-01-01

359

Measuring agreement in medical informatics reliability studies  

Microsoft Academic Search

Agreement measures are used frequently in reliability studies that involve categorical data. Simple measures like observed agreement and specific agreement can reveal a good deal about the sample. Chance-corrected agreement in the form of the kappa statistic is used frequently based on its correspondence to an intraclass correlation coefficient and the ease of calculating it, but its magnitude depends on

George Hripcsak; Daniel F. Heitjan

2002-01-01

360

Are Stromatolites Reliable Biosignatures?  

Microsoft Academic Search

On the one hand, there is no doubt that some (perhaps most) stromatolites on Earth were formed with biologic influence. On the other, recent work has suggested that stromatolite-like structures have formed without biologic input.

F. A. Corsetti; W. M. Berelson; J. R. Spear; C. Pepe-Raney; C. Marshall; A. Olcott-Marshall

2010-01-01

361

Parallel processing and expert systems  

NASA Technical Reports Server (NTRS)

Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

Lau, Sonie; Yan, Jerry C.

1991-01-01

362

Template based parallel checkpointing in a massively parallel computer system  

DOEpatents

A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.

Archer, Charles Jens (Rochester, MN); Inglett, Todd Alan (Rochester, MN)

2009-01-13
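
The template comparison described above is essentially a fixed-block, checksum-driven delta. A toy sketch of that idea follows (block size and hash choice are arbitrary here, and the patent's rsync variant is more elaborate):

    import hashlib

    BLOCK = 4096

    def block_sums(data: bytes):
        return [hashlib.md5(data[i:i + BLOCK]).digest()
                for i in range(0, len(data), BLOCK)]

    def delta(template: bytes, current: bytes):
        """(index, block) pairs that differ from the stored template."""
        t, c = block_sums(template), block_sums(current)
        return [(i, current[i * BLOCK:(i + 1) * BLOCK])
                for i in range(len(c)) if i >= len(t) or c[i] != t[i]]

    template = bytes(32 * BLOCK)                # previous checkpoint image
    current = bytearray(template)
    current[5 * BLOCK] = 1                      # dirty exactly one block
    print(len(delta(template, bytes(current))), "of 32 blocks need transfer")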

363

Meta-evaluation of Machine Translation Using Parallel Legal Texts  

Microsoft Academic Search

In this paper we report our recent work on the evaluation of a number of popular automatic evaluation metrics for machine translation using parallel legal texts. The evaluation is carried out, following a recognized evaluation protocol, to assess the reliability, the strengths and weaknesses of these evaluation metrics in terms of their correlation with human judgment of translation quality. The

Billy Tak-ming Wong; Chunyu Kit

2009-01-01

364

Parallel-to-serial biphase-data converter  

NASA Technical Reports Server (NTRS)

Data converter produces a serial biphase output signal from parallel input data. Alternate bits are loaded into a shift register in complement form so that the bits appear at the end of the shift register in a true-complement form sequence.

Truelove, R. D.

1968-01-01

365

Reliable broadcast protocols  

NASA Technical Reports Server (NTRS)

A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols have the same role in distributed operating systems of the future that message passing mechanisms have in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation or how to balance the latency-communication-storage costs is not an easy question.

Joseph, T. A.; Birman, Kenneth P.

1989-01-01

366

Data networks reliability  

NASA Astrophysics Data System (ADS)

The research from 1984 to 1986 on Data Network Reliability had the objective of developing general principles governing the reliable and efficient control of data networks. The research was centered around three major areas: congestion control, multiaccess networks, and distributed asynchronous algorithms. The major topics within congestion control were the use of flow control to reduce congestion and the use of routing to reduce congestion. The major topics within multiaccess networks were the communication properties of multiaccess channels, collision resolution, and packet radio networks. The major topics within asynchronous distributed algorithms were failure recovery, time vs. communication tradeoffs, and the general theory of distributed algorithms.

Gallager, Robert G.

1988-10-01

367

Power electronics reliability.  

SciTech Connect

The project's goals are: (1) use experiments and modeling to investigate and characterize stress-related failure modes of post-silicon power electronic (PE) devices such as silicon carbide (SiC) and gallium nitride (GaN) switches; and (2) seek opportunities for condition monitoring (CM) and prognostics and health management (PHM) to further enhance the reliability of power electronics devices and equipment. CM - detect anomalies and diagnose problems that require maintenance. PHM - track damage growth, predict time to failure, and manage subsequent maintenance and operations in such a way to optimize overall system utility against cost. The benefits of CM/PHM are: (1) operate power conversion systems in ways that will preclude predicted failures; (2) reduce unscheduled downtime and thereby reduce costs; and (3) pioneering reliability in SiC and GaN.

Kaplar, Robert James; Brock, Reinhard C.; Marinella, Matthew; King, Michael Patrick; Stanley, James K.; Smith, Mark A.; Atcitty, Stanley

2010-10-01

368

Parallel processors and nonlinear structural dynamics algorithms and software  

NASA Technical Reports Server (NTRS)

A nonlinear structural dynamics program with an element library that exploits parallel processing is under development. The aim is to exploit scheduling-allocation so that parallel processing and vectorization can effectively be treated in a general-purpose program. As a byproduct, an automatic scheme for assigning time steps was devised. A rudimentary form of the program is complete and has been tested; it shows that substantial advantage can be taken of parallelism. In addition, a stability proof for the subcycling algorithm has been developed.

Belytschko, T.

1986-01-01

369

Implementing a parallel C++ runtime system for scalable parallel systems  

Microsoft Academic Search

pC++ is a language extension to C++ designed to allow programmers to compose "concurrent aggregate" collection classes which can be aligned and distributed over the memory hierarchy of a parallel machine in a manner modeled on the High Performance Fortran Forum (HPFF) directives for Fortran 90. pC++ allows the user to write portable and efficient code which will run on a wide range of scalable parallel computer systems.

A. Malony; B. Mohr; P. Beckman; D. Gannon; S. Yang; F. Bodin; S. Kesavan

1993-01-01

370

Distribution system reliability indices  

SciTech Connect

Distribution system reliability assessment can be divided into two basic segments: measuring past performance and predicting future performance. This paper compares the results obtained from two surveys dealing with United States and Canadian utility activities in regard to service continuity data collection and utilization. The paper also presents a summary of service continuity statistics for those Canadian utilities that participate in the Canadian Electrical Association annual service continuity reports.

Billinton, R.; Billinton, J.E.

1989-01-01

371

Reliability Centred Maintenance  

Microsoft Academic Search

Reliability centred maintenance (RCM) is a method for maintenance planning that was developed within the aircraft industry and later adapted to several other industries and military branches. A large number of standards and guidelines have been issued where the RCM methodology is tailored to different application areas, e.g., IEC 60300-3-11, MIL-STD-217, NAVAIR 00-25-403 (NAVAIR 2005), SAE JA 1012 (SAE 2002),

Marvin Rausand; Jørn Vatn

372

Compact, Reliable EEPROM Controller  

NASA Technical Reports Server (NTRS)

A compact, reliable controller for an electrically erasable, programmable read-only memory (EEPROM) has been developed specifically for a space-flight application. The design may be adaptable to other applications in which there are requirements for reliability in general and, in particular, for prevention of inadvertent writing of data in EEPROM cells. Inadvertent writes pose risks of loss of reliability in the original space-flight application and could pose such risks in other applications. Prior EEPROM controllers are large and complex and do not provide all reasonable protections (in many cases, few or no protections) against inadvertent writes. In contrast, the present controller provides several layers of protection against inadvertent writes. The controller also incorporates a write-time monitor, enabling determination of trends in the performance of an EEPROM through all phases of testing. The controller has been designed as an integral subsystem of a system that includes not only the controller and the controlled EEPROM aboard a spacecraft but also computers in a ground control station, relatively simple onboard support circuitry, and an onboard communication subsystem that utilizes the MIL-STD-1553B protocol. (MIL-STD-1553B is a military standard that encompasses a method of communication and electrical-interface requirements for digital electronic subsystems connected to a data bus. MIL-STD-1553B is commonly used in defense and space applications.) The intent was to maximize reliability while minimizing the size and complexity of onboard circuitry. In operation, control of the EEPROM is effected via the ground computers, the MIL-STD-1553B communication subsystem, and the onboard support circuitry, all of which, in combination, provide the multiple layers of protection against inadvertent writes. There is no controller software, unlike in many prior EEPROM controllers; software can be a major contributor to unreliability, particularly in fault situations such as the loss of power or brownouts. Protection is also provided by a power-monitoring circuit.

Katz, Richard; Kleyner, Igor

2010-01-01

373

Reliability and testing  

NASA Technical Reports Server (NTRS)

Reliability and its interdependence with testing are important topics for the development and manufacturing of successful products. This generally accepted fact is not only a technical statement, but must also be seen in the light of 'Human Factors.' While the background for this paper is the experience gained with electromechanical/electronic space products, including control and system considerations, it is believed that the content could also be of interest for other fields.

Auer, Werner

1996-01-01

374

Spacecraft transmitter reliability  

NASA Technical Reports Server (NTRS)

A workshop on spacecraft transmitter reliability was held at the NASA Lewis Research Center on September 25 and 26, 1979, to discuss present knowledge and to plan future research areas. Since formal papers were not submitted, this synopsis was derived from audio tapes of the workshop. The following subjects were covered: users' experience with space transmitters; cathodes; power supplies and interfaces; and specifications and quality assurance. A panel discussion ended the workshop.

1980-01-01

375

Measuring the Performance of Parallel Message-Based Process Architectures  

Microsoft Academic Search

Message-based process architectures are widely regarded as an effective method for structuring parallel protocol processing on shared-memory multiprocessor platforms. A message-based process architecture is formed by binding one or more processing elements with the data messages and control messages received from applications and network interfaces. In this architecture, parallelism is achieved by simultaneously escorting multiple messages on separate

Douglas C. Schmidt; Tatsuya Suda

1995-01-01

376

PSSA: Parallel Stretched Simulated Annealing  

NASA Astrophysics Data System (ADS)

We consider the problem of finding all the global (and some local) minimizers of a given nonlinear optimization function (a class of problems also known as multi-local programming problems), using a novel approach based on Parallel Computing. The approach, named Parallel Stretched Simulated Annealing (PSSA), combines simulated annealing with stretching function technique, in a parallel execution environment. Our PSSA software allows to increase the resolution of the search domains (thus facilitating the discovery of new solutions) while keeping the search time bounded. The software was tested with a set of well known problems and some numerical results are presented.

Ribeiro, Tiago; Rufino, José; Pereira, Ana I.

2011-09-01

377

Software reliability studies  

NASA Technical Reports Server (NTRS)

There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

Hoppa, Mary Ann; Wilson, Larry W.

1994-01-01

378

PSPIKE: A Parallel Hybrid Sparse Linear System Solver  

NASA Astrophysics Data System (ADS)

The availability of large-scale computing platforms comprising tens of thousands of multicore processors motivates the need for the next generation of highly scalable sparse linear system solvers. These solvers must optimize parallel performance, processor (serial) performance, as well as memory requirements, while being robust across broad classes of applications and systems. In this paper, we present a new parallel solver that combines the desirable characteristics of direct methods (robustness) and effective iterative solvers (low computational cost), while alleviating their drawbacks (memory requirements, lack of robustness). Our proposed hybrid solver is based on the general sparse solver PARDISO and the “Spike” family of hybrid solvers. The resulting algorithm, called PSPIKE, is as robust as direct solvers, more reliable than classical preconditioned Krylov subspace methods, and much more scalable than direct sparse solvers. We support our performance and parallel scalability claims using detailed experimental studies and comparison with direct solvers, as well as classical preconditioned Krylov methods.
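The general hybrid pattern, a direct factorization reused as a preconditioner inside a Krylov iteration, can be sketched with SciPy; the block-diagonal LU below is only a stand-in for PSPIKE's PARDISO/Spike machinery, and the banded matrix is a toy example.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy banded system standing in for a sparse application matrix.
n = 1000
A = sp.diags([-1.0, -1.0, 4.0, -1.0, -1.0], [-2, -1, 0, 1, 2],
             shape=(n, n), format="csc")
b = np.ones(n)

# Direct part: LU-factor the diagonal blocks (PSPIKE delegates this to
# PARDISO; SciPy's splu is a stand-in). Each factorization and each
# block solve is independent, which is where the parallelism lives.
nblocks = 4
bs = n // nblocks
blocks = [spla.splu(A[i * bs:(i + 1) * bs, i * bs:(i + 1) * bs].tocsc())
          for i in range(nblocks)]

def block_solve(r):
    return np.concatenate([blocks[i].solve(r[i * bs:(i + 1) * bs])
                           for i in range(nblocks)])

M = spla.LinearOperator((n, n), matvec=block_solve)

# Iterative part: preconditioned Krylov solve over the full system.
x, info = spla.gmres(A, b, M=M, atol=1e-10)
print("info:", info, " residual:", np.linalg.norm(b - A @ x))
```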

Manguoglu, Murat; Sameh, Ahmed H.; Schenk, Olaf

379

Transfer form  

Cancer.gov

Transfer Investigational Agent Form (10/02). This form is to be used for an intra-institutional transfer, one transfer per form. Division of Cancer Prevention, National Cancer Institute, National Institutes of Health.

380

Parallel Implicit Algorithms for CFD.  

National Technical Information Service (NTIS)

The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). 'Newton' refers to a quadratically convergent nonlinear iterat...

D. E. Keyes

1998-01-01

381

Demonstrating Forces between Parallel Wires.  

ERIC Educational Resources Information Center

Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)

Baker, Blane

2000-01-01

382

Designing and Building Parallel Programs  

NSDL National Science Digital Library

Designing and Building Parallel Programs [Online] is an innovative traditional print and online resource publishing project. It incorporates the content of a textbook published by Addison-Wesley into an evolving online resource.

383

ICANS to UCANS: Parallel evolution  

Microsoft Academic Search

I present the earliest neutron sources, exhibit the historical development of slow-neutron sources, trace the early technical and community developments and the origins of ICANS and UCANS, and find parallels between them.

John M. Carpenter

384

Basic Electricity - AC Parallel Circuits.  

National Technical Information Service (NTIS)

Shows the elements of an AC parallel circuit, examines the effects of current, and shows what the generator sees in the following circuits: where XL exceeds XC, where XC exceeds XL, and where XC and XL are equal.

1994-01-01

385

Parallel Vector Tile Optimizing Library.  

National Technical Information Service (NTIS)

PVTOL is a C++ library that allows cross-platform software portability without sacrificing high performance. Researchers at MIT Lincoln Laboratory developed the Parallel Vector Tile Optimizing Library (PVTOL) to address a primary challenge faced by develo...

E. M. Rutledge

2011-01-01

386

Parallelizing Timed Petri Net Simulations.  

National Technical Information Service (NTIS)

The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then si...

D. M. Nicol

1993-01-01

387

The Gaussian parallel relay network  

Microsoft Academic Search

We introduce the real, discrete-time Gaussian parallel relay network. This simple network is theoretically important in the context of network information theory. We present upper and lower bounds on capacity and explain where they coincide.

B. Schein; R. Gallager

2000-01-01

388

"Feeling" Series and Parallel Resistances.  

ERIC Educational Resources Information Center

Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)

Morse, Robert A.

1993-01-01

389

Fast Algorithms for Parallel Architectures.  

National Technical Information Service (NTIS)

Our work on Fast Algorithms for Parallel Architectures led us to investigate methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory MIMD multiprocessor. We have studied only those techniques havin...

M. H. Schultz

1990-01-01

390

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 2. Technical Report #1201  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the second-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Lai, Cheng-Fei; Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Tindal, Gerald

2012-01-01

391

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 3. Technical Report #1202  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the third-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Lai, Cheng-Fei; Irvin, P. Shawn; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

2012-01-01

392

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 7. Technical Report #1206  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the seventh-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Park, Bitnara Jasmine; Tindal, Gerald

2012-01-01

393

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 6. Technical Report #1205  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the sixth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

394

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 5. Technical Report #1204  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the fifth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Park, Bitnara Jasmine; Irvin, P. Shawn; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

2012-01-01

395

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 4. Technical Report #1203  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the fourth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Park, Bitnara Jasmine; Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

396

Master/slave speculative parallelization  

Microsoft Academic Search

Master/Slave Speculative Parallelization (MSSP) is an execution paradigm for improving the execution rate of sequential programs by parallelizing them speculatively for execution on a multiprocessor. In MSSP, one processor---the master---executes an approximate version of the program to compute selected values that the full program's execution is expected to compute. The master's results are checked by slave processors that execute the

Craig B. Zilles; Gurindar S. Sohi

2002-01-01

397

Graphics applications utilizing parallel processing  

NASA Technical Reports Server (NTRS)

The results of research conducted to develop a parallel graphics application algorithm to depict the numerical solution of the 1-D wave equation, the vibrating string, are presented. The research was conducted on a Flexible Flex/32 multiprocessor and a Sequent Balance 21000 multiprocessor. The wave equation is implemented using the finite difference method. The synchronization issues that arose from the parallel implementation and the strategies used to alleviate the effects of the synchronization overhead are discussed.
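For illustration, a minimal serial sketch of the finite-difference scheme; in a parallel implementation like the one described, the spatial grid would be partitioned among processors, and the neighbor (halo) exchanges at each time step are the synchronization points discussed.

```python
import numpy as np

# Vibrating string: u_tt = c^2 u_xx with fixed ends, explicit
# finite differences. Each interior update depends only on
# neighboring points, so the grid can be split across processors
# with halo exchanges each step.
nx, nt = 101, 500
c, dx, dt = 1.0, 0.01, 0.005
r2 = (c * dt / dx) ** 2          # squared Courant number, <= 1 for stability

x = np.linspace(0.0, 1.0, nx)
u_prev = np.sin(np.pi * x)       # initial pluck
u = u_prev.copy()                # zero initial velocity

for _ in range(nt):
    u_next = np.zeros_like(u)    # ends stay pinned at zero
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
print("midpoint displacement:", round(float(u[nx // 2]), 4))
```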

Rice, John R.

1990-01-01

398

Parallel Parsing of Arithmetic Expressions  

Microsoft Academic Search

Parallel algorithms for parsing expressions on mesh, shuffle, cube, and cube-connected cycle parallel computers are presented. With n processors, parsing requires O(√n) time on the mesh-connected model and O(log² n) time on the others. For the mesh-connected computer, the author uses a wrap-around row-major ordering. For the shuffle computer, he uses an extra connection between adjacent processors,

Y. N. Srikant

1990-01-01

399

Address tracing for parallel machines  

NASA Technical Reports Server (NTRS)

Recently implemented parallel system address-tracing methods based on several metrics are surveyed. The issues specific to collection of traces for both shared and distributed memory parallel computers are highlighted. Five general categories of address-trace collection methods are examined: hardware-captured, interrupt-based, simulation-based, altered microcode-based, and instrumented program-based traces. The problems unique to shared memory and distributed memory multiprocessors are examined separately.

Stunkel, Craig B.; Janssens, Bob; Fuchs, W. Kent

1991-01-01

400

Extension and validation of fault-tree analysis for reliability prediction. Final report  

Microsoft Academic Search

This report presents the reliability projection for a type of fossil-fueled power plant which makes use of a combustion turbine and heat-recovery steam generator in parallel operation with a package boiler. A previous EPRI study (EPRI-AF--811, dated June 1978) demonstrated that a fault-tree reliability model can be used to estimate the reliability of such a plant. The present report makes

R. Land; L. Rayes; E. T. Burns

1980-01-01

401

Component Reliability and System Reliability for Space Missions.  

National Technical Information Service (NTIS)

This paper addresses the basics, the limitations, and the relationship between component reliability and system reliability through a study of flight computing architectures and related avionics components for NASA future missions. Component reliabilit...

A. M. Gillespie; M. J. Sampson; M. W. Monaghan; R. F. Hodson; Y. Chen

2012-01-01

402

Calculation of Optimally Reliable Solar Cell Arrays  

Microsoft Academic Search

Present state-of-the-art emphasis has been placed on the use of silicon solar cells interconnected in series-parallel groups to form a solar array providing basic power for long-lifetime spacecraft (perhaps greater than 3 months). To assure that sufficient power will be available to operate equipment during the specified mission time, a reasonable margin must be designed into an array to

R. Brenan; F. Mason

1964-01-01

403

Efficiency of parallel direct optimization  

NASA Technical Reports Server (NTRS)

Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.

Janies, D. A.; Wheeler, W. C.

2001-01-01

404

Testing of reliability - Analysis tools  

NASA Technical Reports Server (NTRS)

An outline is presented of issues raised in verifying the accuracy of reliability analysis tools. State-of-the-art reliability analysis tools implement various decomposition, aggregation, and estimation techniques to compute the reliability of a diversity of complex fault-tolerant computer systems. However, no formal methodology has been formulated for validating the reliability estimates produced by these tools. The author presents three stages of testing that can be performed on most reliability analysis tools to effectively increase confidence in a tool. These testing stages were applied to the SURE (Semi-Markov Unreliability Range Evaluator) reliability analysis tool, and the results of the testing are discussed.

Hayhurst, Kelly J.

1989-01-01

405

On Component Reliability and System Reliability for Space Missions  

NASA Technical Reports Server (NTRS)

This paper addresses the basics, the limitations, and the relationship between component reliability and system reliability through a study of flight computing architectures and related avionics components for NASA future missions. Component reliability analysis and system reliability analysis need to be evaluated at the same time, and the limitations of each analysis and the relationship between the two analyses need to be understood.

Chen, Yuan; Gillespie, Amanda M.; Monaghan, Mark W.; Sampson, Michael J.; Hodson, Robert F.

2012-01-01

406

Asynchronous parallel status comparator  

DOEpatents

Apparatus for matching asynchronously received signals and determining whether two or more out of a total number of possible signals match. The apparatus comprises, in one embodiment, an array of sensors positioned in discrete locations and in communication with one or more processors. The processors will receive signals if the sensors detect a change in the variable sensed from a nominal to a special condition and will transmit location information in the form of a digital data set to two or more receivers. The receivers collect, read, latch and acknowledge the data sets and forward them to decoders that produce an output signal for each data set received. The receivers also periodically reset the system following each scan of the sensor array. A comparator then determines if any two or more, as specified by the user, of the output signals correspond to the same location. A sufficient number of matches produces a system output signal that activates a system to restore the array to its nominal condition.

Arnold, Jeffrey W. (828 Hickory Ridge Rd., Aiken, SC 29801); Hart, Mark M. (223 Limerick Dr., Aiken, SC 29803)

1992-01-01

407

Asynchronous parallel status comparator  

DOEpatents

Disclosed is an apparatus for matching asynchronously received signals and determining whether two or more out of a total number of possible signals match. The apparatus comprises, in one embodiment, an array of sensors positioned in discrete locations and in communication with one or more processors. The processors will receive signals if the sensors detect a change in the variable sensed from a nominal to a special condition and will transmit location information in the form of a digital data set to two or more receivers. The receivers collect, read, latch and acknowledge the data sets and forward them to decoders that produce an output signal for each data set received. The receivers also periodically reset the system following each scan of the sensor array. A comparator then determines if any two or more, as specified by the user, of the output signals correspond to the same location. A sufficient number of matches produces a system output signal that activates a system to restore the array to its nominal condition. 4 figs.

Arnold, J.W.; Hart, M.M.

1992-12-15

408

Parallel search of strongly ordered game trees  

SciTech Connect

The alpha-beta algorithm forms the basis of many programs that search game trees. A number of methods have been designed to improve the utility of the sequential version of this algorithm, especially for use in game-playing programs. These enhancements are based on the observation that alpha-beta is most effective when the best move in each position is considered early in the search. Trees that have this so-called strong ordering property are not only of practical importance but possess characteristics that can be exploited in both sequential and parallel environments. This paper draws upon experiences gained during the development of programs which search chess game trees. Over the past decade major enhancements of the alpha-beta algorithm have been developed by people building game-playing programs, and many of these methods will be surveyed and compared here. The balance of the paper contains a study of contemporary methods for searching chess game trees in parallel, using an arbitrary number of independent processors. To make efficient use of these processors, one must have a clear understanding of the basic properties of the trees actually traversed when alpha-beta cutoffs occur. This paper provides such insights and concludes with a brief description of a refinement to a standard parallel search algorithm for this problem. 33 references.
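For reference, a minimal sequential alpha-beta, the baseline these parallel methods build on; the `children` function is assumed to yield moves best-first, since strong ordering is what triggers early cutoffs.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Sequential alpha-beta; `children(node)` should return moves
    best-first, since strong ordering is what enables early cutoffs."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:            # cutoff: prune remaining siblings
                break
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if beta <= alpha:
            break
    return value

# Toy two-ply tree: interior nodes 0..2, leaves 3..6 with fixed scores.
tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
leaf = {3: 3, 4: 5, 5: 6, 6: 9}
print(alphabeta(0, 2, float("-inf"), float("inf"), True,
                lambda n: tree.get(n, []), lambda n: leaf.get(n, 0)))  # 6
```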

Marsland, T.A.; Campbell, M.

1982-12-01

409

VCSEL-based parallel optical transmission module  

NASA Astrophysics Data System (ADS)

This paper describes the design process and performance of an optimized parallel optical transmission module. Based on a 1×12 VCSEL (Vertical Cavity Surface Emitting Laser) array, we designed and fabricated high-speed parallel optical modules. Our parallel optical module contains a 1×12 VCSEL array, a 12-channel CMOS laser driver circuit, a high-speed PCB (Printed Circuit Board), an MT fiber connector, and a packaging housing. The L-I-V characteristics of the 850 nm VCSEL were measured: at an operating current of 8 mA, the 3 dB frequency bandwidth exceeds 3 GHz and the optical output is 1 mW. The aggregate transmission rate of all 12 channels is 30 Gbit/s, with 2.5 Gbit/s per channel. By integrating the 1×12 VCSEL array with the driver array, we built a high-speed PCB to provide the optoelectronic chip with the operating voltage and high-speed signal currents. LVDS (Low-Voltage Differential Signaling) was chosen as the input signal format to achieve better high-frequency performance. Active coupling was adopted with an MT connector (8° slant fiber array). We used Small Form Factor Pluggable (SFP) packaging; with the edge connector, the module can be inserted into the system without a bonding process.

Shen, Rongxuan; Chen, Hongda; Zuo, Chao; Pei, Weihua; Zhou, Yi; Tang, Jun

2005-02-01

410

Intelligent spatial ecosystem modeling using parallel processors  

SciTech Connect

Spatial modeling of ecosystems is essential if one's modeling goals include developing a relatively realistic description of past behavior and predictions of the impacts of alternative management policies on future ecosystem behavior. Development of these models has been limited in the past by the large amount of input data required and the difficulty of even large mainframe serial computers in dealing with large spatial arrays. These two limitations have begun to erode with the increasing availability of remote sensing data and GIS systems to manipulate it, and the development of parallel computer systems which allow computation of large, complex, spatial arrays. Although many forms of dynamic spatial modeling are highly amenable to parallel processing, the primary focus in this project is on process-based landscape models. These models simulate spatial structure by first compartmentalizing the landscape into some geometric design and then describing flows within compartments and spatial processes between compartments according to location-specific algorithms. The authors are currently building and running parallel spatial models at the regional scale for the Patuxent River region in Maryland, the Everglades in Florida, and Barataria Basin in Louisiana. The authors are also planning a project to construct a series of spatially explicit linked ecological and economic simulation models aimed at assessing the long-term potential impacts of global climate change.

Maxwell, T.; Costanza, R. (Maryland International Inst. for Ecological Economics, Solomons (United States))

1993-05-01

411

DRBD: DYNAMIC RELIABILITY BLOCK DIAGRAMS FOR SYSTEM RELIABILITY MODELLING  

Microsoft Academic Search

With the rapid advances of computer-based technology in mission-critical domains such as aerospace, military, and power industries, critical systems exhibit more complex, dependent, and dynamic behaviours. Such dynamic system behaviours cannot be fully captured by existing reliability modelling tools. In this paper, we introduce a new reliability modelling tool, called dynamic reliability block diagrams (DRBD), to model dynamic relationships

H. Xu; L. Xing; R. Robidoux

2009-01-01

412

Ferrite logic reliability study  

NASA Technical Reports Server (NTRS)

Development and use of digital circuits called all-magnetic logic are reported. In these circuits the magnetic elements and their windings comprise the active circuit devices in the logic portion of a system. The ferrite logic (FLO) device belongs to the all-magnetic class of logic circuits. The FLO device is novel in that it makes use of a dual or bimaterial ferrite composition in one physical ceramic body. This bimaterial feature, coupled with its potential for relatively high speed operation, makes it attractive for high reliability applications. (Maximum speed of operation approximately 50 kHz.)

Baer, J. A.; Clark, C. B.

1973-01-01

413

Reliability in Scientific Research  

NASA Astrophysics Data System (ADS)

1. Basic principles of reliability, human error, and other general issues; 2. Mathematical calculations; 3. Basic issues concerning hardware systems; 4. Obtaining items from commercial sources; 5. General points regarding the design and construction of apparatus; 6. Vacuum system leaks and related problems; 7. Vacuum pumps and gauges, and other vacuum-system concerns; 8. Mechanical devices and systems; 9. Cryogenic systems; 10. Visible and near-visible optics; 11. Electronic systems; 12. Interconnecting, wiring, and cabling for electronics; 13. Computer hardware and software, and stored information; 14. Experimental method.

Walker, I. R.

2011-01-01

414

17 CFR 12.24 - Parallel proceedings.  

Code of Federal Regulations, 2010 CFR

§ 12.24 Parallel proceedings (17 CFR, Consideration of Pleadings, 2009-04-01 edition). (a) Definition. For purposes of this section, a parallel proceeding shall...

2009-04-01

415

17 CFR 12.24 - Parallel proceedings.  

Code of Federal Regulations, 2010 CFR

§ 12.24 Parallel proceedings (17 CFR, Consideration of Pleadings, 2010-04-01 edition). (a) Definition. For purposes of this section, a parallel proceeding shall...

2010-04-01

416

Parallelizing the spectral transform method, part 2  

NASA Astrophysics Data System (ADS)

This paper describes the parallelization and performance of the spectral method for solving the shallow water equations on the surface of a sphere using a 128-node Intel iPSC/860 hypercube. The shallow water equations form a computational kernel of more complex climate models. This work is part of a research program to develop climate models that are capable of much longer simulations at a significantly finer resolution than current models. Such models are important in understanding the effects of the increasing atmospheric concentrations of greenhouse gases, and the computational requirements are so large that massively parallel multiprocessors will be necessary to run climate model simulations in a reasonable amount of time. The spectral method involves the transformation of data between the physical, Fourier, and spectral domains. Each of these domains is two-dimensional. The spectral method performs Fourier transforms in the longitude direction followed by summation in the latitude direction to evaluate the discrete spectral transform. A simple way of parallelizing the spectral code is to decompose the physical problem domain in just the latitude direction. This allows an optimized sequential FFT algorithm to be used in the longitude direction. However, this approach limits the number of processors that can be brought to bear on the problem. Decomposing the problem over both directions allows the parallelism inherent in the problem to be exploited more effectively: the grain size is reduced and more processors can be used. Results are presented that show that decomposing over both directions does result in a more rapid solution of the problem. The importance of minimizing communication latency and overlapping communication with calculation is stressed. General methods for doing this, which may be applied to many other problems, are discussed.
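The decomposition trade-off can be sketched in a few lines of Python (grid sizes and processor counts below are illustrative): a one-dimensional split caps the usable processor count at the number of latitudes, while a two-dimensional split exposes prows x pcols-way parallelism at a finer grain.

```python
def block_ranges(n, p):
    """Split n indices as evenly as possible among p processors."""
    base, extra = divmod(n, p)
    ranges, start = [], 0
    for i in range(p):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

def decompose_2d(nlat, nlon, prows, pcols):
    """Assign processor (i, j) a latitude-by-longitude block."""
    lat = block_ranges(nlat, prows)
    lon = block_ranges(nlon, pcols)
    return {(i, j): (lat[i], lon[j])
            for i in range(prows) for j in range(pcols)}

# 1-D split: processor count is capped by the number of latitudes.
print(decompose_2d(64, 128, 8, 1)[(0, 0)])    # ((0, 8), (0, 128))
# 2-D split: 8 x 16 = 128 processors, much finer grain per block.
print(decompose_2d(64, 128, 8, 16)[(3, 5)])   # ((24, 32), (40, 48))
```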

Walker, D. W.; Worley, P. H.; Drake, J. B.

1991-07-01

417

Validation of Software Reliability Models.  

National Technical Information Service (NTIS)

This report presents the results of a study and investigation of software reliability models. In particular, the purpose was to investigate the statistical properties of selected software reliability models, including the statistical properties of the par...

R. E. Schafer; J. E. Angus; J. F. Alter; S. E. Emoto

1979-01-01

418

Reliability of DSM impact estimates.  

National Technical Information Service (NTIS)

Demand-side management (DSM) critics continue to question the reliability of DSM program savings, and therefore, the need for funding such programs. In this paper, the authors examine the issues underlying the discussion of reliability of DSM program savi...

E. L. Vine; M. G. Kushler

1995-01-01

419

Nuclear weapon reliability evaluation methodology.  

National Technical Information Service (NTIS)

This document provides an overview of those activities that are normally performed by Sandia National Laboratories to provide nuclear weapon reliability evaluations for the Department of Energy. These reliability evaluations are first provided as a predic...

D. L. Wright

1993-01-01

420

Analysis tools for reliability databases.  

National Technical Information Service (NTIS)

This report outlines the work performed at Risoe, under contract with the Swedish Nuclear Power Inspectorate, with the goal to develop analysis tools for reliability databases, that can suit the information needs of the users of the TUD (Reliability/ Main...

J. Dorrepaal

1996-01-01

421

Effectively exploiting parallelism in data flow analysis  

Microsoft Academic Search

We present an effective approach to performing data flow analysis in parallel and identify three types of parallelism inherent in this solution process: independent-problem parallelism, separate-unit parallelism and algorithmic parallelism. We present our investigations of Fortran procedures from the Perfect Benchmarks and netlib libraries, which reveal structural characteristics of program flow graphs that are amenable to algorithmic parallelism. Previously, the utility of

Yong-Fong Lee; Barbara G. Ryder

1994-01-01

422

Supporting data intensive applications with medium grained parallelism  

SciTech Connect

ADAMS is an ambitious effort to provide new database access paradigms for the kinds of scientific applications that require massively parallel access to very large data sets in order to be effective. Many of the Grand Challenge Problems fall into this category, as well as those kinds of scientific research which depend on widely distributed shared sets of disparate data. The essence of the ADAMS approach is to view data purely in functional terms, rather than the more traditional structural view in which multiple data items are aggregated into records or tuples of flat files. Further, ADAMS has been implemented as an embedded interface so that scientists can develop applications in the host programming language of their choice, often Fortran, Pascal, or C, and still access shared data generated in other environments. The syntax and semantics of ADAMS is essentially complete. The functional nature of the ADAMS data interface paradigm simplifies its implementation in a distributed environment, e.g., the Mentat run-time system, because one must only distribute functional servers, not pieces of data structures. However, this only opens up the possibility of effective parallel database processing; to realize this potential, far more work must be done in the areas of data dependence, intra-statement parallelism, parallel query optimization, and maintaining consistency and reliability in concurrent systems. Discovering how to make effective parallel data access an actuality in real scientific applications is the point of this research.

Pfaltz, J.L.; French, J.C.; Grimshaw, A.S.; Son, S.H.

1992-04-01

423

Making Reliability Arguments in Classrooms  

ERIC Educational Resources Information Center

Reliability methodology needs to evolve, as validity has done, into an argument supported by theory and empirical evidence. Nowhere is the inadequacy of current methods more visible than in classroom assessment. Reliability arguments would also permit additional methodologies for evidencing reliability in classrooms. It would liberalize methodology…

Parkes, Jay; Giron, Tilia

2006-01-01

424

A parallel Jacobson-Oksman optimization algorithm [parallel processing (computers)]  

NASA Technical Reports Server (NTRS)

A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

Straeter, T. A.; Markos, A. T.

1975-01-01

425

Optimization Algorithms for Exploiting the Parallelism-Communication Tradeoff in Pipelined Parallelism  

Microsoft Academic Search

We address the problem of finding parallel plans for SQL queries using the two-phase approach of join ordering followed by parallelization. We focus on the parallelization phase and develop algorithms for exploiting pipelined parallelism. We formulate parallelization as scheduling a weighted operator tree to minimize response time. Our model of response time captures the fundamental tradeoff between parallel

Waqar Hasan; Rajeev Motwani

1994-01-01

426

Physiologic Trend Detection and Artifact Rejection: A Parallel Implementation of a Multi-state Kalman Filtering Algorithm  

PubMed Central

Using a parallel implementation of the multi-state Kalman filtering algorithm, we have developed an accurate method of reliably detecting and identifying trends, abrupt changes, and artifacts from multiple physiologic data streams in real-time. The Kalman filter algorithm was implemented within an innovative software architecture for parallel computation: a parallel process trellis. Examples, processed in real-time, of both simulated and actual data serve to illustrate the potential value of the Kalman filter as a tool in physiologic monitoring.
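A minimal single-state sketch of the filtering idea, assuming a scalar local-level model: innovations far outside the predicted spread are flagged as artifacts or abrupt changes. The paper's multi-state formulation and trellis-based parallelization are richer than this.

```python
import numpy as np

def kalman_level(z, q=1e-4, r=1e-2, gate=3.0):
    """Scalar local-level Kalman filter; innovations beyond `gate`
    predicted standard deviations are flagged as artifacts or
    abrupt changes (single-state simplification)."""
    x, p = float(z[0]), 1.0
    estimates, flags = [], []
    for zk in z:
        p += q                          # predict step
        s = p + r                       # innovation variance
        innov = zk - x
        flags.append(abs(innov) > gate * np.sqrt(s))
        k = p / s                       # Kalman gain, then update
        x += k * innov
        p *= (1.0 - k)
        estimates.append(x)
    return np.array(estimates), np.array(flags)

rng = np.random.default_rng(7)
t = np.arange(200)
z = 0.01 * t + 0.05 * rng.standard_normal(200)   # slow trend plus noise
z[120] += 2.0                                    # injected artifact
est, flags = kalman_level(z)
print("flagged samples:", np.flatnonzero(flags))
```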

Sittig, Dean F.; Factor, Michael

1989-01-01

427

Parallel plasma fluid turbulence calculations  

SciTech Connect

The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.

Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

1994-12-31

428

Computing contingency statistics in parallel.  

SciTech Connect

Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
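The map-reduce structure described here can be sketched as per-partition counting followed by a merge whose cost scales with table size rather than data size; derived statistics such as chi-square then read directly off the merged table. The data and partitioning below are toy examples.

```python
from collections import Counter
from itertools import product

def local_table(pairs):
    """Map step: per-partition contingency counts (embarrassingly parallel)."""
    return Counter(pairs)

def merge(tables):
    """Reduce step: communication scales with table size, not data size."""
    total = Counter()
    for t in tables:
        total.update(t)
    return total

partitions = [[("a", 0), ("a", 1), ("b", 0)],
              [("b", 0), ("a", 0)]]
table = merge(map(local_table, partitions))   # map() could be a pool.map()

# Derived statistics read directly off the merged table.
n = sum(table.values())
rows, cols = Counter(), Counter()
for (xi, yi), c in table.items():
    rows[xi] += c
    cols[yi] += c
chi2 = sum((table[(xi, yi)] - rows[xi] * cols[yi] / n) ** 2
           / (rows[xi] * cols[yi] / n)
           for xi, yi in product(rows, cols))
print(dict(table), " chi2 =", round(chi2, 3))
```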

Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

2010-09-01

429

Parallel imaging in MR angiography.  

PubMed

The recently developed techniques of parallel imaging with phased array coils are rapidly becoming accepted for magnetic resonance angiography (MRA) applications. This article reviews the various current parallel imaging techniques and their application to MRA. The increased scan efficiency provided by parallel imaging allows increased temporal or spatial resolution, and reduction of artifacts in contrast-enhanced MRA (CE-MRA). Increased temporal resolution in CE-MRA can be used to reduce the need for bolus timing and to provide hemodynamic information helpful for diagnosis. In addition, increased spatial resolution (or volume coverage) can be acquired in a breathhold (eg, in renal CE-MRA), or in otherwise limited clinically acceptable scan durations. The increased scan efficiency provided by parallel imaging has been successfully applied to CE-MRA as well as other MRA techniques such as inflow and phase contrast imaging. The large signal-to-noise ratio available in many MRA techniques lends these acquisitions to increased scan efficiency through parallel imaging. PMID:15479999

Wilson, Gregory J; Hoogeveen, Romhild M; Willinek, Winfried A; Muthupillai, Raja; Maki, Jeffrey H

2004-06-01

430

Evaluation of fault-tolerant parallel-processor architectures over long space missions  

NASA Technical Reports Server (NTRS)

The impact of a five year space mission environment on fault-tolerant parallel processor architectures is examined. The target application is a Strategic Defense Initiative (SDI) satellite requiring 256 parallel processors to provide the computation throughput. The reliability requirements are that the system still be operational after five years with 0.99 probability and that the probability of system failure during one-half hour of full operation be less than 10^-7. The fault tolerance features an architecture must possess to meet these reliability requirements are presented, many potential architectures are briefly evaluated, and one candidate architecture, the Charles Stark Draper Laboratory's Fault-Tolerant Parallel Processor (FTPP), is evaluated in detail. A methodology for designing a preliminary system configuration to meet the reliability and performance requirements of the mission is then presented and demonstrated by designing an FTPP configuration.

Johnson, Sally C.

1989-01-01

431

Forms & Guidelines  

Cancer.gov

Step 1: Developing a Cancer Prevention Clinical Trial, Forms & Guidelines (2003): General Guidelines for Consortia; a form for a Lead Organization to add a Participating Organization to a Consortium (doc, 63 KB); the NCI Request for Proposals; and the current DCP Letter of Intent Submission Form.

432

Reliability analysis of continuous fiber composite laminates  

NASA Technical Reports Server (NTRS)

A composite lamina may be viewed as a homogeneous solid whose directional strengths are random variables. Calculation of the lamina reliability under a multi-axial stress state can be approached by either assuming that the strengths act separately (modal or independent action), or that they interact through a quadratic interaction criterion. The independent action reliability may be calculated in closed form, while interactive criteria require simulations; there is currently insufficient data to make a final determination of preference between them. Using independent action for illustration purposes, the lamina reliability may be plotted in either stress space or in a non-dimensional representation. For the typical laminated plate structure, the individual lamina reliabilities may be combined in order to produce formal upper and lower bounds of reliability for the laminate, similar in nature to the bounds on properties produced from variational elastic methods. These bounds are illustrated for a (0/±15)_s Graphite/Epoxy (GR/EP) laminate. In addition, simple physically plausible phenomenological rules are proposed for redistribution of load after a lamina has failed. These rules are illustrated by application to (0/±15)_s and (90/±45/0)_s GR/EP laminates and results are compared with respect to the proposed bounds.
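A small sketch of the independent-action calculation under assumed Weibull survival models per failure mode (all stresses and parameters below are hypothetical, not from the paper): lamina reliability is the product of modal survival probabilities, and a crude pair of laminate bounds follows from assuming independent versus perfectly correlated first-ply failures.

```python
import math

def lamina_reliability(stresses, scales, shapes):
    """Independent action: each failure mode survives independently,
    here with an illustrative Weibull survival model per mode."""
    r = 1.0
    for s, s0, m in zip(stresses, scales, shapes):
        r *= math.exp(-((max(s, 0.0) / s0) ** m))
    return r

# Hypothetical per-lamina mode stresses and strength parameters
# for a (0/+-15)_s layup; none of these values are from the paper.
laminae = [
    lamina_reliability((120.0, 8.0), (200.0, 30.0), (10.0, 8.0)),
    lamina_reliability((110.0, 12.0), (200.0, 30.0), (10.0, 8.0)),
    lamina_reliability((110.0, 12.0), (200.0, 30.0), (10.0, 8.0)),
]

# Crude first-ply-failure bounds: independent lamina failures give the
# product; perfectly correlated failures give the weakest lamina.
print("lamina R:", [round(r, 4) for r in laminae])
print("laminate bounds: [%.4f, %.4f]" % (math.prod(laminae), min(laminae)))
```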

Thomas, David J.; Wetherhold, Robert C.

1990-01-01

433

Design and performance of VLSI based parallel multiplier  

SciTech Connect

The VLSI design and layout of an O(log² n)-time n-bit binary parallel multiplier for two unsigned operands is introduced. The proposed design consists of partitioning the multiplier and multiplicand bits into four groups of n/4 bits each and then reducing the matrix of sixteen product terms using three-to-two parallel counters and a Brent-Kung O(log n)-time parallel adder. Area-time performance of the present scheme has been compared with that of existing schemes for parallel multipliers. The regular and recursive design of the multiplier is shown to be suitable for VLSI implementation, and an improved table-lookup multiplier has been used to form the basis of the recursive design scheme. 17 references.
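The arithmetic behind the partitioning is easy to verify in software: splitting each n-bit operand into four n/4-bit groups yields sixteen cross products whose shifted sum is the full product. In hardware the reduction is done by the 3:2 counter tree and the log-time adder; an ordinary sum stands in for both below.

```python
def split4(x, n):
    """Split an n-bit operand into four n/4-bit groups (little-endian)."""
    w = n // 4
    mask = (1 << w) - 1
    return [(x >> (i * w)) & mask for i in range(4)], w

def multiply_4way(a, b, n=16):
    """Form the sixteen cross products a_i * b_j; in hardware these are
    reduced with 3:2 counters and summed by a log-time parallel adder.
    Here a plain Python sum stands in for that reduction tree."""
    (ga, w), (gb, _) = split4(a, n), split4(b, n)
    partials = [(ga[i] * gb[j]) << ((i + j) * w)
                for i in range(4) for j in range(4)]
    return sum(partials)

assert multiply_4way(0xBEEF, 0x1234) == 0xBEEF * 0x1234
print(hex(multiply_4way(0xBEEF, 0x1234)))
```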

Agrawal, D.P.; Pathak, G.C.; Swain, N.K.; Agrawal, B.K.

1983-01-01

434

Parallel computation using boundary elements in solid mechanics  

NASA Technical Reports Server (NTRS)

The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming the linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain the analytical results for influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, the parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for solving a demonstration problem on the Sequent Symmetry S81 parallel computing system.

Chien, L. S.; Sun, C. T.

1990-01-01

435

Supercomputing on massively parallel bit-serial architectures  

NASA Technical Reports Server (NTRS)

Research on the Goodyear Massively Parallel Processor (MPP) suggests that high-level parallel languages are practical and can be designed with powerful new semantics that allow algorithms to be efficiently mapped to the real machines. For the MPP these semantics include parallel/associative array selection for both dense and sparse matrices, variable precision arithmetic to trade accuracy for speed, micro-pipelined train broadcast, and conditional branching at the processing element (PE) control unit level. The preliminary design of a FORTRAN-like parallel language for the MPP has been completed and is being used to write programs to perform sparse matrix array selection, min/max search, matrix multiplication, Gaussian elimination on single bit arrays and other generic algorithms. A description is given of the MPP design. Features of the system and its operation are illustrated in the form of charts and diagrams.

Iobst, Ken

1985-01-01

436

Spray forming  

Microsoft Academic Search

Spray forming is a relatively new manufacturing process for near net shape preforms in a wide variety of alloys. Spray formed materials have a characteristic equiaxed microstructure with small grain sizes, low levels of solute partitioning, and inhibited coarsening of secondary phases. After consolidation to full density, spray formed materials have consistently shown properties superior to conventionally cast materials, and

P. S. Grant

1995-01-01

437

Massively Parallel MRI Detector Arrays  

PubMed Central

Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays.

Keil, Boris; Wald, Lawrence L

2013-01-01

438

Fast data parallel polygon rendering  

SciTech Connect

This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted for applications which require very fast polygon rendering for extremely large sets of polygons, such as are found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

Ortega, F.A.; Hansen, C.D.

1993-09-01

439

Visualizing Parallel Computer System Performance  

NASA Technical Reports Server (NTRS)

Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

Malony, Allen D.; Reed, Daniel A.

1988-01-01

440

Features in Continuous Parallel Coordinates.  

PubMed

Continuous Parallel Coordinates (CPC) are a contemporary visualization technique for combining several scalar fields given over a common domain. They facilitate a continuous view for parallel coordinates by considering a smooth scalar field instead of a finite number of straight lines. We show that there are feature curves in CPC which appear to be the dominant structures of a CPC. We present methods to extract and classify them and demonstrate their usefulness to enhance the visualization of CPCs. In particular, we show that these feature curves are related to discontinuities in Continuous Scatterplots (CSP). We show this by exploiting a curve-curve duality between parallel and Cartesian coordinates, which is a generalization of the well-known point-line duality. Furthermore, we illustrate the theoretical considerations. In conclusion, we discuss relations and aspects of the CPC/CSP features concerning the data analysis. PMID:22034308

Lehmann, Dirk J; Theisel, Holger

2011-12-01

441

A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix  

NASA Technical Reports Server (NTRS)

A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
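For orientation, a sketch of the classical cyclic Jacobi iteration for the symmetric case, the simpler relative of the norm-reducing method (which targets general complex matrices). Rotations on disjoint index pairs are independent, which is precisely the property parallel orderings exploit.

```python
import numpy as np

def jacobi_eigen(A, sweeps=20, tol=1e-12):
    """Cyclic Jacobi for a symmetric matrix. Rotations touching
    disjoint (p, q) pairs are independent, which is what parallel
    orderings exploit; this sketch runs them sequentially and
    handles only the symmetric case."""
    A = A.copy().astype(float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        if np.sqrt(np.sum(np.tril(A, -1) ** 2)) < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                # Rotation angle that zeroes A[p, q].
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
                V = V @ J
    return np.diag(A), V

M = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 0.2], [0.5, 0.2, 1.0]])
vals, vecs = jacobi_eigen(M)
print(np.sort(vals))                 # should match the reference below
print(np.linalg.eigvalsh(M))
```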

Shroff, Gautam

1989-01-01

442

Form classification  

NASA Astrophysics Data System (ADS)

The problem of form classification is to assign a single-page form image to one of a set of predefined form types or classes. We classify the form images using low-level pixel density information from the binary images of the documents. In this paper, we solve the form classification problem with a classifier based on the k-means algorithm, supported by adaptive boosting. Our classification method is tested on the NIST scanned tax forms databases (special forms databases 2 and 6), which include machine-typed and handwritten documents. Our method improves the performance over published results on the same databases, while still using a simple set of image features.
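A toy sketch of classification from pixel-density features, using a simple nearest-centroid rule with one centroid per class; the paper's k-means clustering and adaptive boosting are richer than this, and the synthetic "pages" below are random stand-ins.

```python
import numpy as np

def density_features(img, grid=4):
    """Split a binary page image into a grid x grid mesh and use the
    ink density of each cell as a low-level feature vector."""
    h, w = img.shape
    cells = [img[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.array(cells)

def fit_centroids(features, labels):
    """One centroid per form class (a single k-means-style step per
    class; the paper additionally reweights classifiers by boosting)."""
    return {c: np.mean([f for f, l in zip(features, labels) if l == c],
                       axis=0)
            for c in sorted(set(labels))}

def classify(feat, centroids):
    return min(centroids, key=lambda c: np.linalg.norm(feat - centroids[c]))

rng = np.random.default_rng(0)
pages = [rng.random((64, 48)) < p for p in (0.10, 0.12, 0.40, 0.38)]
feats = [density_features(p) for p in pages]
cents = fit_centroids(feats, ["sparse", "sparse", "dense", "dense"])
print(classify(density_features(rng.random((64, 48)) < 0.39), cents))
```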

Reddy, K. V. Umamaheswara; Govindaraju, Venu

2008-01-01

443

Constructions: Parallel Through A Point  

NSDL National Science Digital Library

After a review of construction basics, the technique of constructing a parallel line through a point not on the line will be learned. Reviews of the general rules of constructions in geometry and of how to copy a line segment and an angle are linked. Then, using paper, pencil, straight edge, and compass, you will learn how to construct a parallel through a point. A video demonstration is available to help you. (Windows Media ...

Neubert, Mrs.

2010-12-31

444

Gang scheduling a parallel machine  

SciTech Connect

Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processors. User programs and their gangs of processors are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory. 2 refs., 1 fig.

Gorda, B.C.; Brooks, E.D. III.

1991-03-01

445

Gang scheduling a parallel machine  

SciTech Connect

Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processes. User programs and their gangs of processes are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory.

Gorda, B.C.; Brooks, E.D. III.

1991-12-01

446

Reliability Impacts in Life Support Architecture and Technology Selection  

NASA Technical Reports Server (NTRS)

Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
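The underlying arithmetic can be sketched with the constant-failure-rate model the analysis assumes: per-technology reliability from MTBF, a series product for added closure (every added function must also survive), and redundancy to buy reliability back at a mass cost. The MTBF values below are hypothetical, not the ISS database figures.

```python
import math

def reliability(mtbf_hours, mission_hours):
    """Constant failure rate: R(t) = exp(-t / MTBF)."""
    return math.exp(-mission_hours / mtbf_hours)

def series(rs):
    """All functions must work: each added technology multiplies risk."""
    return math.prod(rs)

def redundant(r, n):
    """n identical parallel strings; the system fails only if all fail."""
    return 1.0 - (1.0 - r) ** n

mission = 24 * 365                     # hypothetical 1-year mission, hours
mtbfs = [8000.0, 12000.0, 5000.0]      # hypothetical subsystem MTBFs, hours
singles = [reliability(m, mission) for m in mtbfs]
print("single-string:", round(series(singles), 4))
print("dual-string  :", round(series([redundant(r, 2) for r in singles]), 4))
```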

Lange, Kevin E.; Anderson, Molly S.

2011-01-01

447

SIERRA - A 3-D device simulator for reliability modeling  

Microsoft Academic Search

SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient square (CGS, BCG) method.

Jue-Hsien Chern; Lawrence A. Arledge Jr.; Ping Yang; John T. Maeda

1989-01-01

448

SIERRA: a 3-D device simulator for reliability modeling  

Microsoft Academic Search

SIERRA is a 3-D general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under DC, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient square (CGS, BCG) method.

Jue-hsien Chern; John T. Maeda; Lawrence A. Arledge Jr.; Ping Yang

1989-01-01

449

Computational methods for efficient structural reliability and reliability sensitivity analysis  

NASA Astrophysics Data System (ADS)

This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
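A minimal, non-adaptive importance-sampling sketch of the failure-probability estimate, with a toy linear limit state standing in for a turbine-blade model: samples are drawn from a density shifted toward the failure domain and reweighted by the ratio of the original to the sampling PDF. An adaptive scheme would relocate that density incrementally from accumulated failure points.

```python
import numpy as np
from math import erfc

rng = np.random.default_rng(1)

def g(x):
    """Limit state: failure when g < 0 (toy stand-in for a blade model)."""
    return 6.0 - x[:, 0] - x[:, 1]

def norm_pdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

n = 200_000
# Sampling density shifted toward the failure region x1 + x2 > 6;
# an adaptive scheme would update this shift from previous failures.
mu_is = np.array([3.0, 3.0])
x = rng.normal(mu_is, 1.0, size=(n, 2))
fail = g(x) < 0.0

# Importance weight: original joint PDF over sampling joint PDF.
w = (norm_pdf(x, 0.0, 1.0).prod(axis=1) /
     norm_pdf(x, mu_is, 1.0).prod(axis=1))
pf = float(np.mean(fail * w))
print(f"P_f estimate: {pf:.3e}   exact: {0.5 * erfc(3.0):.3e}")
```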

Wu, Y.-T.

1993-04-01

450

Automatic parallelization of discrete event simulation programs  

Microsoft Academic Search

Developing parallel discrete event simulation code is currently very time-consuming and requires a high level of expertise. Few tools, if any, exist to aid conversion of existing sequential simulation programs to efficient parallel code. Traditional approaches to automatic parallelization, as used in many parallelizing compilers, are not well-suited for this application because of the irregular, data-dependent nature of discrete event simulations.

Jya-Jang Tsai; Richard M. Fujimoto

1993-01-01

451

Automatic Parallelization of Discrete Event Simulation Programs  

Microsoft Academic Search

Developing parallel discrete event simulation code is currently very time-consuming and requires a high level of expertise. Few tools, if any, exist to aid conversion of existing sequential simulation programs to efficient parallel code. Traditional approaches to automatic parallelization, as used in many parallelizing compilers, are not well-suited for this application because of the irregular, data-dependent nature of discrete event simulations.

Jya-Jang Tsai; Richard M. Fujimoto

1993-01-01

452

Composites: Trees for Data Parallel Programming  

Microsoft Academic Search

Data parallel programming languages offer ease of programming and debugging and scalability of parallel programs to increasing numbers of processors. Unfortunately, the usefulness of these languages for non-scientific programmers and loosely coupled parallel machines is currently limited. In this paper, we present the composite tree model which seeks to provide greater flexibility via parallel data types,

Mark Chu-carroll; Lori L. Pollock

1994-01-01

453

Tool-supported parallel application development  

Microsoft Academic Search

Our goal is to ease the parallelization of applications on distributed-memory parallel processors. Part of our team is implementing parallel kernels common to industrially significant applications using High Performance Fortran (HPF) and the Message Passing Interface (MPI). They are assisted in this activity by a second group developing an integrated tool environment, Annai, consisting of a parallelization support tool, a

C. Clemencon; K. M. Decker; V. R. Deshpande; A. Endo; J. Fritscher; P. A. R. Lorenzo; N. Masuda; A. Muller; R. Ruhl; W. Sawyer; B. J. N. Wylie; F. Zimmermann

1996-01-01

454

Exploiting heterogeneous parallelism on a multithreaded multiprocessor  

Microsoft Academic Search

This paper describes an integrated architecture, compiler, runtime, and operating system solution to exploiting heterogeneous parallelism. The architecture is a pipelined multi-threaded multiprocessor, enabling the execution of very fine (multiple operations within an instruction) to very coarse (multiple jobs) parallel activities. The compiler and runtime focus on managing parallelism within a job, while the operating system focuses on managing parallelism

Gail A. Alverson; Robert Alverson; David Callahan; Brian Koblenz; Allan Porterfield; Burton J. Smith

1992-01-01

455

Polaris: the next generation in parallelizing compilers  

Microsoft Academic Search

It is the goal of the Polaris project to develop a new parallelizing compiler that will overcome limitations of current compilers. While current parallelizing compilers may succeed on small kernels, they often fail to extract any meaningful parallelism from large applications. After a study of application codes, it was concluded that by adding a few new techniques to current compilers, automatic parallelization becomes

W Blume

1994-01-01

456

Polaris: Improving the Effectiveness of Parallelizing Compilers  

Microsoft Academic Search

It is the goal of the Polaris project to develop a new parallelizing compiler that will overcome limitations of current compilers. While current parallelizing compilers may succeed on small kernels, they often fail to extract any meaningful parallelism from large applications. After a study of application codes, it was concluded that by adding a few new techniques to current compilers, automatic parallelization becomes

William Blume; Rudolf Eigenmann; Keith Faigin; John Grout; Jay Hoeflinger; David A. Padua; Paul Petersen; William M. Pottenger; Lawrence Rauchwerger; Peng Tu; Stephen Weatherford

1994-01-01

457

Reliable Communication in the Presence of Failures  

Microsoft Academic Search

The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying

Kenneth P. Birman; Thomas A. Joseph

1985-01-01

458

Reliable communication in the presence of failures  

Microsoft Academic Search

The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying

Kenneth P. Birman; Thomas A. Joseph

1987-01-01

459

File concepts for parallel I/O  

NASA Technical Reports Server (NTRS)

The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, using multiple storage devices. Problem areas are also identified and discussed.
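
The concurrent-access idea sketched in this abstract later became standard practice in MPI-IO; for instance, with mpi4py each process can write a disjoint slice of one shared file (the file name below is hypothetical):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    data = np.full(4, rank, dtype=np.int32)   # each rank's slice
    fh = MPI.File.Open(comm, "shared.dat",
                       MPI.MODE_CREATE | MPI.MODE_WRONLY)
    # Disjoint offsets let all processes write the same file concurrently.
    fh.Write_at(rank * data.nbytes, data)
    fh.Close()
    # Run with e.g.: mpiexec -n 4 python parallel_file.py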

Crockett, Thomas W.

1989-01-01

460

Parallel distributed computing using Python  

NASA Astrophysics Data System (ADS)

This work presents two software components aimed at easing the cost of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state-of-the-art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated with PETSc-FEM, an MPI- and PETSc-based parallel, multiphysics, finite element code developed at the CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows, with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.
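
A minimal example in the style of the package's documented buffer API (the partial-sum computation is arbitrary):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    local = np.array([float(rank + 1)])       # each process's contribution
    total = np.zeros(1)
    comm.Allreduce(local, total, op=MPI.SUM)  # combine across all ranks

    if rank == 0:
        print("sum of 1..size =", total[0])   # equals size*(size+1)/2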

Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

2011-09-01

461

Feedback-optimized parallel tempering  

Microsoft Academic Search

We introduce an algorithm for systematically improving the efficiency of parallel tempering Monte Carlo simulations by optimizing the simulated temperature set. Our approach is closely related to a recently introduced adaptive algorithm that optimizes the simulated statistical ensemble in generalized broad-histogram Monte Carlo simulations. Conventionally, a temperature set is chosen in such a way that the acceptance rates for
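
The optimization step is the paper's subject; the replica-exchange move it tunes is the standard Metropolis swap between adjacent temperatures, whose measured acceptance rate supplies the feedback signal. A generic sketch (not the paper's algorithm):

    import math, random

    def maybe_swap(beta_lo, beta_hi, E_lo, E_hi):
        # Standard parallel-tempering acceptance probability:
        # min(1, exp[(beta_lo - beta_hi) * (E_lo - E_hi)])
        delta = (beta_lo - beta_hi) * (E_lo - E_hi)
        return delta >= 0 or random.random() < math.exp(delta)

    # Tracking per-pair acceptance rates over many sweeps is the kind of
    # feedback a method like this would use to redistribute the temperatures.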

Helmut G. Katzgraber; David A. Huse; Matthias Troyer

462

Parallel Processing and Information Retrieval.  

ERIC Educational Resources Information Center

This issue contains nine articles that provide an overview of trends and research in parallel information retrieval. Topics discussed include network design for text searching; the Connection Machine System; PThomas, an adaptive information retrieval system on the Connection Machine; algorithms for document clustering; and system architecture for…

Rasmussen, Edie M.; And Others

1991-01-01

463

New Parallel-Sorting Schemes  

Microsoft Academic Search

In this paper, we describe a family of parallel-sorting algorithms for a multiprocessor system. These algorithms are enumeration sortings and comprise the following phases: 1) count acquisition: the keys are subdivided into subsets and for each key we determine the number of smaller keys (count) in every subset; 2) rank determination: the rank of a key is the sum of
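
The two phases map directly onto a rank-then-scatter sketch; since each rank is independent, the ranks can be computed concurrently (illustrative only; CPython threads serialize CPU-bound work):

    from concurrent.futures import ThreadPoolExecutor

    def enumeration_sort(keys):
        # Rank of a key = number of smaller keys, ties broken by index;
        # each rank is independent, so all ranks can be found in parallel.
        def rank(i):
            return sum(k < keys[i] or (k == keys[i] and j < i)
                       for j, k in enumerate(keys))
        out = [None] * len(keys)
        with ThreadPoolExecutor() as pool:
            for i, r in enumerate(pool.map(rank, range(len(keys)))):
                out[r] = keys[i]
        return out

    print(enumeration_sort([5, 3, 8, 3, 1]))  # [1, 3, 3, 5, 8]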

Franco P. Preparata

1978-01-01

464

Query Optimization for Parallel Execution  

Microsoft Academic Search

The decreasing cost of computing makes it economically viable to reduce the response time of decision support queries by using parallel execution to exploit inexpensive resources. This goal poses the following query optimization problem: minimize response time subject to constraints on throughput, which we motivate as the dual of the traditional DBMS problem. We address this novel problem

Sumit Ganguly; Waqar Hasan; Ravi Krishnamurthy

1992-01-01

465

Where are the parallel algorithms?  

NASA Technical Reports Server (NTRS)

Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

Voigt, R. G.

1985-01-01

466

Integrating hadoop and parallel DBMs  

Microsoft Academic Search

Teradata's parallel DBMS has been successfully deployed in large data warehouses over the last two decades for large scale business analysis in various industries over data sets ranging from a few terabytes to multiple petabytes. However, due to the explosive data volume increase in recent years at some customer sites, some data such as web logs and sensor data are

Yu Xu; Pekka Kostamaa; Like Gao

2010-01-01

467

Passive Parallel Automatic Minimalist Processing  

Microsoft Academic Search

Research supporting the idea that many basic cognitive processes can be described as fast, parallel, and automatic is reviewed. Memory retrieval/decision processes have often been ignored in the cognitive literature. However, in some cases, computationally complex processes can be replaced with simple passive processes. Cue-dependent retrieval from memory provides a straightforward example of how encoding, memory, and retrieval

Roger Ratcliff; Gail McKoon

468

Practical Parallelism using Transputer Arrays  

Microsoft Academic Search

This paper explores methods for extracting parallelism from a wide variety of numerical applications. We investigate communications overheads and load-balancing for networks of transputers. After a discussion of some practical strategies for constructing occam programs, two case studies are analysed in detail.

David J. Pritchard; C. R. Askew; D. B. Carpenter; Ian Glendinning; Anthony J. G. Hey; Denis A. Nicole

1987-01-01

469

Aligning Sentences in Parallel Corpora  

Microsoft Academic Search

In this paper we describe a statistical technique for aligning sentences with their translations in two parallel corpora. In addition to certain anchor points that are available in our data, the only information about the sentences that we use for calculating alignments is the number of tokens that they contain. Because we make no use of the lexical details of
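
The paper's statistical model aside, the core idea, aligning on token counts alone with dynamic programming, fits in a short sketch (the mismatch cost and bead penalty below are ad hoc stand-ins for the paper's probabilities):

    def align(src_lens, tgt_lens):
        # DP over sentence lengths (token counts), allowing 1-1, 1-0,
        # 0-1, 2-1 and 1-2 beads, in the spirit of length-based alignment.
        INF = float("inf")
        m, n = len(src_lens), len(tgt_lens)
        cost = [[INF] * (n + 1) for _ in range(m + 1)]
        back = [[None] * (n + 1) for _ in range(m + 1)]
        cost[0][0] = 0.0
        moves = [(1, 1), (1, 0), (0, 1), (2, 1), (1, 2)]
        for i in range(m + 1):
            for j in range(n + 1):
                if cost[i][j] == INF:
                    continue
                for di, dj in moves:
                    if i + di <= m and j + dj <= n:
                        s, t = sum(src_lens[i:i + di]), sum(tgt_lens[j:j + dj])
                        c = cost[i][j] + abs(s - t) / (s + t + 1)
                        c += 0.3 if (di, dj) != (1, 1) else 0.0  # bead penalty
                        if c < cost[i + di][j + dj]:
                            cost[i + di][j + dj] = c
                            back[i + di][j + dj] = (di, dj)
        beads, i, j = [], m, n          # walk back to recover the alignment
        while (i, j) != (0, 0):
            di, dj = back[i][j]
            beads.append(((i - di, i), (j - dj, j)))
            i, j = i - di, j - dj
        return beads[::-1]

    print(align([10, 12, 7], [11, 6, 6, 7]))  # expect a 1-2 bead in the middle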

Peter F. Brown; Jennifer C. Lai; Robert L. Mercer

1991-01-01

470

Parallel processing of image contours  

NASA Astrophysics Data System (ADS)

This paper describes a parallel algorithm for ranking the pixels on a curve in O(log N) time using an EREW PRAM model. The algorithms accomplish this with N processors for a √N × √N image. After applying such an algorithm to an image, we are able to move the pixels from a curve into processors having consecutive addresses. This is important on hypercube-connected machines like the Connection Machine because we can subsequently apply many algorithms to the curve using powerful segmented scan operations (i.e., parallel prefix operations). We shall illustrate this by first showing how we can find piecewise linear approximations of curves using Ramer's algorithm. This process has the effect of converting closed curves into simple polygons. As another example, we shall describe a more complicated parallel algorithm for computing the visibility graph of a simple planar polygon. The algorithm accomplishes this in O(k log N) time using O(N²/log N) processors for an N-vertex polygon, where k is the link-diameter of the polygon in consideration. Both of these algorithms require only scan operations (as well as local neighbor communication) as the means of inter-processor communication. Thus, the algorithms can not only be implemented on an EREW PRAM, but also on a hypercube-connected parallel machine, which is a more practical machine model. All these algorithms were implemented on the Connection Machine, and various performance tests were conducted.
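
Ramer's algorithm itself is compact in its sequential form; the paper's contribution is doing this with segmented scans on the Connection Machine. A plain Python sketch of the sequential algorithm:

    import math

    def ramer(points, eps):
        # Keep the point farthest from the chord if its distance exceeds
        # eps; otherwise replace the whole stretch by the chord. Recurse.
        (x0, y0), (x1, y1) = points[0], points[-1]
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy) or 1.0
        dmax, imax = 0.0, 0
        for i, (x, y) in enumerate(points[1:-1], start=1):
            d = abs(dy * (x - x0) - dx * (y - y0)) / norm
            if d > dmax:
                dmax, imax = d, i
        if dmax > eps:
            return ramer(points[:imax + 1], eps)[:-1] + ramer(points[imax:], eps)
        return [points[0], points[-1]]

    print(ramer([(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)], 1.0))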

Chen, Ling T.; Davis, Larry S.; Kruskal, Clyde P.

1992-04-01

471

Coarray Fortran for parallel programming  

Microsoft Academic Search

Co-Array Fortran, formerly known as F--, is a small extension of Fortran 95 for parallel processing. A Co-Array Fortran program is interpreted as if it were replicated a number of times and all copies were executed asynchronously. Each copy has its own set of data objects and is termed an image. The array syntax of Fortran 95 is extended with

Robert W. Numrich; John Reid

1998-01-01

472

GRay: Massive parallel ODE integrator  

NASA Astrophysics Data System (ADS)

GRay is a massive parallel ordinary differential equation integrator that employs the "stream processing paradigm." It is designed to efficiently integrate billions of photons in curved spacetime according to Einstein's general theory of relativity. The code is implemented in CUDA C/C++.

Chan, Chi-kwan; Psaltis, Dimitrios; Ozel, Feryal

2014-03-01

473

Parallel Symbolic Computing in Cid  

Microsoft Academic Search

We have designed and implemented a language called Cid for parallel applications with recursive linked data structures (e.g., lists, trees, graphs) and complex control structures (data dependent, recursion). Cid is unique in that, while targeting distributed memory machines, it attempts to preserve the traditional “MIMD threads plus lock-protected shared data” programming model that is standard on shared memory machines.

Rishiyur S. Nikhil

1995-01-01

474

Ejs Parallel Plate Capacitor Model  

NSDL National Science Digital Library

The Ejs Parallel Plate Capacitor model displays a parallel-plate capacitor which consists of two identical metal plates, placed parallel to one another. The capacitor can be charged by connecting one plate to the positive terminal of a battery and the other plate to the negative terminal. The dielectric constant and the separation of the plates can be changed via sliders. You can modify this simulation if you have Ejs installed by right-clicking within the plot and selecting "Open Ejs Model" from the pop-up menu item. Ejs Parallel Plate Capacitor model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_bu_capacitor.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models for Newtonian mechanics are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.
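
The relation the sliders explore is the elementary parallel-plate formula; a one-line check:

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def capacitance(eps_r, area_m2, gap_m):
        # Ideal parallel-plate capacitor: C = eps0 * eps_r * A / d
        return EPS0 * eps_r * area_m2 / gap_m

    print(capacitance(1.0, 0.01, 1e-3))  # 100 cm^2 plates, 1 mm gap: ~88.5 pF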

Duffy, Andrew

2008-07-14

475

Matpar: Parallel Extensions for MATLAB  

NASA Technical Reports Server (NTRS)

Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

Springer, P. L.

1998-01-01

476

Resequencing Considerations in Parallel Downloads  

Microsoft Academic Search

Several recent studies have proposed methods to accelerate the receipt of a file by downloading its parts from different servers in parallel. This paper formulates models for an approach based on receiving only one copy of each of the data packets in a file, while different packets may be obtained from different sources. This approach guarantees

Yoav Nebat; Moshe Sidi

2002-01-01

477

Parallel Computation of Invariant Measures  

Microsoft Academic Search

Let S: [0,1] → [0,1] be a nonsingular transformation and let P: L¹(0,1) → L¹(0,1) be the corresponding Frobenius–Perron operator. In this paper we propose a parallel algorithm for computing a fixed density of P, using Ulam's method and a modified Monte Carlo approach. Numerical results are also presented.
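
Setting aside the parallel, modified Monte Carlo aspects, the serial skeleton of Ulam's method is short: discretize [0,1] into n bins, estimate the bin-to-bin transition fractions by sampling, and take the stationary left eigenvector. A sketch, tried here on the logistic map, whose invariant density is known:

    import numpy as np

    def ulam(S, n=200, samples=10_000, seed=0):
        # Row-stochastic Ulam matrix: fraction of bin i mapped into bin j.
        rng = np.random.default_rng(seed)
        P = np.zeros((n, n))
        for i in range(n):
            x = (i + rng.random(samples)) / n          # sample points in bin i
            j = np.minimum((S(x) * n).astype(int), n - 1)
            P[i] = np.bincount(j, minlength=n) / samples
        # Fixed density of the operator = left eigenvector for eigenvalue 1.
        vals, vecs = np.linalg.eig(P.T)
        f = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
        return f / f.sum() * n                         # normalize as a density

    # Logistic map S(x) = 4x(1-x); invariant density 1/(pi*sqrt(x(1-x))).
    density = ulam(lambda x: 4.0 * x * (1.0 - x))
    print(density[:5])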

Jiu Ding; Zizhong Wang

2001-01-01

478

Turbomachinery CFD on parallel computers  

Microsoft Academic Search

The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers

Richard A. Blech; Edward J. Milner; Angela Quealy; Scott E. Townsend

1992-01-01

479

The plane with parallel coordinates  

Microsoft Academic Search

By means of Parallel Coordinates, planar “graphs” of multivariate relations are obtained. Certain properties of the relationship correspond to the geometrical properties of its graph. On the plane a point ↔ line duality with several interesting properties is induced. A new duality between bounded and unbounded convex sets and hstars (a generalization of hyperbolas) and between Convex Unions and Intersections is found. This
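
Such planar graphs are easy to produce today; for instance, pandas ships a parallel-coordinates plot (the toy data below is arbitrary):

    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import parallel_coordinates

    # Each observation becomes a polyline crossing the parallel axes.
    df = pd.DataFrame({"x1": [1, 2, 3], "x2": [4, 1, 2],
                       "x3": [2, 3, 1], "label": ["a", "b", "a"]})
    parallel_coordinates(df, "label")
    plt.show()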

Alfred Inselberg

1985-01-01

480

Safe parallelism for robotic control  

Microsoft Academic Search

During the Spring 2008 semester at Olin College, we introduced the programming language occam-pi to undergraduates as part of their first course in robotics. Students were able to explore image processing and autonomous behavioral control in a parallel programming language on a small mobile robotics platform with just two weeks of tutorial instruction. Our experiences to date suggest that the

Matthew C. Jadud; Christian L. Jacobsen; Carl G. Ritson; Jonathan Simpson

2008-01-01