These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

Reliability of a Parallel Pipe Network  

NASA Technical Reports Server (NTRS)

The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
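
As a rough illustration of the simplest of these methods, the sketch below estimates such a probability by Monte Carlo for a two-line parallel network. The head distribution, line resistances, head-loss law (H = R*Q^2), and minimum flows are invented for the example and are not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy two-line parallel network: a pump delivers head H to two parallel
# lines; the turbulent head-loss law H = R * Q^2 gives Q_i = sqrt(H / R_i).
# All distributions and constants below are invented for illustration.
H = rng.normal(60.0, 5.0, n)                 # pump head [m]
R1 = rng.lognormal(np.log(0.8), 0.10, n)     # line 1 resistance
R2 = rng.lognormal(np.log(1.1), 0.10, n)     # line 2 resistance

Q1 = np.sqrt(np.clip(H, 0.0, None) / R1)
Q2 = np.sqrt(np.clip(H, 0.0, None) / R2)

Q1_MIN, Q2_MIN = 8.0, 7.0                    # specified minimum flow rates
p_fail = np.mean((Q1 < Q1_MIN) | (Q2 < Q2_MIN))
print(f"P(a line falls below its specified minimum) ~ {p_fail:.4f}")
```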

Herrera, Edgar; Chamis, Christopher (Technical Monitor)

2001-01-01

2

Parallelized reliability estimation of reconfigurable computer networks  

NASA Technical Reports Server (NTRS)

A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.
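
ASSURE itself is not reproduced here; the sketch below only illustrates, with invented numbers, the bounding idea the abstract describes: generate the state-space best-first, count the explored probability mass that ends in failure as a lower bound on system failure, and add the unexplored frontier mass to obtain an upper bound.

```python
import heapq
import itertools

# Invented toy model: state = number of working processors. At each step
# the mission either sees no further fault, or one more fault that the
# reconfiguration logic covers with probability COVERAGE.
P_NO_MORE_FAULTS = 0.995
COVERAGE = 0.99

def successors(working):
    if working == 1:                 # no spare left: the next fault is fatal
        return [("DONE", P_NO_MORE_FAULTS),
                ("FAILED", 1 - P_NO_MORE_FAULTS)]
    return [("DONE", P_NO_MORE_FAULTS),
            (working - 1, (1 - P_NO_MORE_FAULTS) * COVERAGE),
            ("FAILED", (1 - P_NO_MORE_FAULTS) * (1 - COVERAGE))]

tie = itertools.count()              # tiebreaker: heap never compares states
lower = 0.0                          # mass proven to end in system failure
frontier = [(-1.0, next(tie), 4)]    # (-path probability, tiebreak, state)
for _ in range(1000):                # exploration budget (states expanded)
    if not frontier:
        break
    neg_p, _, state = heapq.heappop(frontier)
    p = -neg_p
    if state == "FAILED":
        lower += p
    elif state != "DONE":
        for nxt, q in successors(state):
            heapq.heappush(frontier, (-(p * q), next(tie), nxt))

upper = lower + sum(-q for q, _, _ in frontier)  # add unexplored mass
print(f"P(system failure) bounds: [{lower:.3e}, {upper:.3e}]")
```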

Nicol, David M.; Das, Subhendu; Palumbo, Dan

1990-01-01

3

Practical preventive reliability using matrix forms  

NASA Technical Reports Server (NTRS)

This paper describes a practical new tool to improve the reliability of equipment. It uses a matrix form to help monitor anticipated component faults precisely. The tool provides the practical measures necessary to generate matrix tables that accurately display all causes of failure and all planned programmatic analyses, tests, processes, and inspections. When completed, the matrix tables give users in-depth visibility of the degree of protection against component failure that is provided by the program's preventive measure plans. The matrix form is an excellent technique for tracking, and thereby influencing, the reliability of nonelectronic spacecraft subsystems or components. No electronic applications have been tried, but the procedure can be useful in the electronics field, e.g., in integrated circuit production planning, and perhaps in process control, inspection or test lines for other electronic components. The drawing of forms and the recording/manipulation of the data can be computerized.

Johnson, S. A.

1983-01-01

4

Closed-form solutions of the parallel plate problem  

Microsoft Academic Search

Closed-form solutions to the parallel plate problem have been derived for the design of electrostatic devices that employ the parallel plate. With a dimensionless height and force introduced to simplify the nonlinear parallel plate problem, a simple cubic equation describing the behavior of the height of the movable plate under an applied voltage has been derived and solved analytically to provide closed-form solutions.
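
For context, the textbook nondimensionalization of the electrostatic parallel-plate actuator yields a cubic of this kind. The sketch below assumes the standard form f = h(1 - h)^2, with pull-in at h = 1/3 and f = 4/27; Lee's exact notation and normalization may differ.

```python
import numpy as np

# Standard textbook nondimensionalization: with deflection fraction
# h = x/g0 and dimensionless force f = eps0*A*V^2 / (2*k*g0^3), static
# equilibrium of the movable plate gives f = h*(1 - h)^2, i.e. the cubic
#   h^3 - 2*h^2 + h - f = 0,
# whose stable root disappears (pull-in) at h = 1/3, f = 4/27.
def stable_deflection(f):
    roots = np.roots([1.0, -2.0, 1.0, -f])
    real = roots[np.abs(roots.imag) < 1e-8].real
    stable = real[(real >= -1e-9) & (real <= 1 / 3 + 1e-6)]
    return stable.min() if stable.size else None

for f in (0.05, 0.10, 0.14, 0.20):
    h = stable_deflection(f)
    if h is None:
        print(f"f = {f:.2f}: pull-in, no stable equilibrium")
    else:
        print(f"f = {f:.2f}: stable deflection h = {h:.4f}")
```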

Ki Bang Lee

2007-01-01

5

Armed Services Vocational Aptitude Battery (ASVAB): Alternate Forms Reliability (Forms 8, 9, 10, and 11). Technical Paper for Period October 1980-April 1985.  

ERIC Educational Resources Information Center

A study investigated the alternate forms reliability of the Armed Services Vocational Aptitude Battery (ASVAB) Forms 8, 9, 10, and 11. Usable data were obtained from 62,938 armed services applicants who took the ASVAB in January and February 1983. Results showed that the parallel forms reliability coefficients between ASVAB Form 8a and the…

Palmer, Pamla; And Others

6

An Energy-Efficient Reliability Model for Parallel Disk Systems  

Microsoft Academic Search

In the last decade, parallel disk systems have increasingly become popular for data-intensive applications running on high-performance computing platforms. Conservation of energy in parallel disk systems has a strong impact on the cost of cooling equipment and backup power-generation. This is because a significant amount of energy is consumed by parallel disks in high-performance computing centers. Although a wide range

Fangyang Shen; Xiao Qin; Andres Salazar; Adam Manzanares; Kiranmai Bellam

2009-01-01

7

Masking reveals parallel form systems in the visual brain  

PubMed Central

It is generally supposed that there is a single, hierarchically organized pathway dedicated to form processing, in which complex forms are elaborated from simpler ones, beginning with the orientation-selective cells of V1. In this psychophysical study, we undertook to test another hypothesis, namely that the brain’s visual form system consists of multiple parallel systems and that complex forms are other than the sum of their parts. Inspired by imaging experiments which show that forms of increasing perceptual complexity (lines, angles, and rhombuses) constituted from the same elements (lines) activate the same visual areas (V1, V2, and V3) with the same intensity and latency (Shigihara and Zeki, 2013, 2014), we used backward masking to test the supposition that these forms are processed in parallel. We presented subjects with lines, angles, and rhombuses as different target-mask pairs. Evidence in favor of our supposition would be if masking is the most effective when target and mask are processed by the same system and least effective when they are processed in different systems. Our results showed that rhombuses were strongly masked by rhombuses but only weakly masked by lines or angles, but angles and lines were well masked by each other. The relative resistance of rhombuses to masking by low-level forms like lines and angles suggests that complex forms like rhombuses may be processed in a separate parallel system, whereas lines and angles are processed in the same one. PMID:25120460

Lo, Yu Tung; Zeki, Semir

2014-01-01

8

Alternate Forms Reliability of the Behavioral Relaxation Scale: Preliminary Results  

ERIC Educational Resources Information Center

Alternate forms reliability of the Behavioral Relaxation Scale (BRS; Poppen, 1998), a direct observation measure of relaxed behavior, was examined. A single BRS score, based on a long-duration (5-minute) observation, has been found to be a valid measure of relaxation and is correlated with self-report and some physiological measures. Recently,…

Lundervold, Duane A.; Dunlap, Angel L.

2006-01-01

9

Parameter Interval Estimation of System Reliability for Repairable Multistate Series-Parallel System with Fuzzy Data  

PubMed Central

The purpose of this paper is to create an interval estimation of the fuzzy system reliability for the repairable multistate series–parallel system (RMSS). A two-sided fuzzy confidence interval for the fuzzy system reliability is constructed. The performance of the fuzzy confidence interval is considered based on the coverage probability and the expected length. In order to obtain the fuzzy system reliability, fuzzy set theory is applied to the system reliability problem when dealing with uncertainties in the RMSS. A fuzzy number with a triangular membership function is used for constructing the fuzzy failure rate and the fuzzy repair rate in the fuzzy reliability for the RMSS. The results show that a good interval estimator for the fuzzy confidence interval is one whose obtained coverage probability attains the expected confidence coefficient with the narrowest expected length. The model presented herein is an effective estimation method when the sample size is n ≥ 100. In addition, the optimal α-cuts for the narrowest lower expected length and the narrowest upper expected length are considered. PMID:24987728
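
A minimal sketch of the α-cut interval arithmetic such an approach rests on, using invented triangular fuzzy rates, the steady-state availability A = μ/(λ + μ) of a repairable unit, and a toy series-parallel layout; the paper's actual estimator and coverage study are not reproduced.

```python
def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number tri = (a, b, c)."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def avail_interval(lam, mu, alpha):
    """Steady-state availability A = mu/(lam + mu) of one repairable unit,
    as an interval, for triangular fuzzy failure/repair rates."""
    l_lo, l_hi = alpha_cut(lam, alpha)
    m_lo, m_hi = alpha_cut(mu, alpha)
    # A decreases in lam and increases in mu, so the endpoints pair up as:
    return (m_lo / (l_hi + m_lo), m_hi / (l_lo + m_hi))

# Invented toy RMSS: two parallel units in series with a third unit.
lam = (0.008, 0.010, 0.013)   # fuzzy failure rate (per hour), invented
mu = (0.09, 0.10, 0.12)       # fuzzy repair rate (per hour), invented

for alpha in (0.0, 0.5, 1.0):
    a_lo, a_hi = avail_interval(lam, mu, alpha)
    par = (1 - (1 - a_lo) ** 2, 1 - (1 - a_hi) ** 2)  # two units in parallel
    sys_lo, sys_hi = par[0] * a_lo, par[1] * a_hi     # series with third unit
    print(f"alpha = {alpha:.1f}: system availability in "
          f"[{sys_lo:.4f}, {sys_hi:.4f}]")
```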

2014-01-01

10

Parallel 3D ALE code for metal forming analyses.  

National Technical Information Service (NTIS)

A three-dimensional arbitrary Lagrange-Eulerian (ALE) code is being developed for use as a general purpose tool for metal forming analyses. The focus of the effort is on the processes of forging, extrusion, casting and rolling. The ALE approach was chosen...

R. Neely, R. Couch, E. Dube, S. Futral

1995-01-01

11

Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method  

ERIC Educational Resources Information Center

In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…
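
A toy version of such a GA is sketched below, assuming a Rasch item pool and an invented target TIF evaluated at three ability points; the article's operators and constraint handling are certainly more elaborate.

```python
import math
import random

random.seed(1)

# Item pool under a Rasch model: item i has difficulty b_i, and its
# information at ability theta is I = P*(1-P), P = 1/(1+exp(-(theta-b))).
POOL = [random.uniform(-2.5, 2.5) for _ in range(200)]
THETAS = [-1.0, 0.0, 1.0]        # ability points where the TIF is matched
TARGET = [2.2, 3.0, 2.2]         # target TIF values (invented)
LEN = 20                         # items per assembled form

def tif(form):
    vals = []
    for t in THETAS:
        ps = [1 / (1 + math.exp(-(t - POOL[i]))) for i in form]
        vals.append(sum(p * (1 - p) for p in ps))
    return vals

def loss(form):                  # squared deviation from the target TIF
    return sum((a - b) ** 2 for a, b in zip(tif(form), TARGET))

def mutate(form):
    form = form.copy()
    form[random.randrange(LEN)] = random.choice(
        [i for i in range(len(POOL)) if i not in form])
    return form

def crossover(a, b):             # sample a child from the parents' item union
    return random.sample(list(set(a) | set(b)), LEN)

pop = [random.sample(range(len(POOL)), LEN) for _ in range(60)]
for _ in range(200):
    pop.sort(key=loss)
    elite = pop[:20]
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(40)]

best = min(pop, key=loss)
print("best TIF:", [round(v, 3) for v in tif(best)], "target:", TARGET)
```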

Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

2008-01-01

12

Simulation of a Reliable Parallel Robot Controller

E-print Network

Parallel solutions to the robot control problem are an attractive alternative to single-processor systems as the need for fine-motion robot control increases. Since multiple processors provide inherent redundancy, parallel…

Bennett, John K.

13

Cyclic AMP Mediates a Presynaptic Form of LTP at Cerebellar Parallel Fiber Synapses  

Microsoft Academic Search

The N-methyl-D-aspartate receptor–independent form of long-term potentiation (LTP) at hippocampal mossy fiber synapses requires presynaptic Ca2+–dependent activation of adenylyl cyclase. To determine whether this form of LTP might occur at other synapses, we examined cerebellar parallel fibers that, like hippocampal mossy fiber synapses, express high levels of the Ca2+/calmodulin-sensitive adenylyl cyclase I. Repetitive stimulation of parallel fibers caused a long-lasting

Paul A Salin; Robert C Malenka; Roger A Nicoll

1996-01-01

14

Development of parallel 3D RKPM meshless bulk forming simulation system  

Microsoft Academic Search

A parallel computational implementation of a modern meshless system is presented for explicit 3D bulk forming simulation problems. The system is implemented with the reproducing kernel particle method. Aspects of a coarse-grain parallel paradigm, the domain decomposition method, are detailed for a Lagrangian formulation using model partitioning. Integration cells are uniquely assigned to each processing element and particles overlap in boundary zones.

H. Wang; Guangyao Li; X. Han; Zhi Hua Zhong

2007-01-01

15

An index-based short form of the WAIS-III with accompanying analysis of reliability  

E-print Network

An index-based short form of the WAIS-III with accompanying analysis of reliability and abnormality of differences. Objectives: To develop an index-based, seven-subtest short form of the WAIS-III that offers… the following web address: http://www.abdn.ac.uk/~psy086/Dept/sf_wais3.htm. Conclusions: The short form…

Crawford, John R.

16

Commentary on "Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data"  

ERIC Educational Resources Information Center

In the article "Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data," Dinno (this issue) provides strong evidence that the distribution of random data does not have a significant influence on the outcome of the analysis. Hayton appreciates the thorough approach to evaluating this assumption, and agrees…

Hayton, James C.

2009-01-01

17

Reliability Modeling Methodology for Independent Approaches on Parallel Runways Safety Analysis  

NASA Technical Reports Server (NTRS)

This document is an adjunct to the final report An Integrated Safety Analysis Methodology for Emerging Air Transport Technologies. That report presents the results of our analysis of the problem of simultaneous but independent approaches of two aircraft on parallel runways (independent approaches on parallel runways, or IAPR). This introductory chapter presents a brief overview and perspective of approaches and methodologies for performing safety analyses for complex systems. Ensuing chapters provide the technical details that underlie the approach that we have taken in performing the safety analysis for the IAPR concept.

Babcock, P.; Schor, A.; Rosch, G.

1998-01-01

18

Comparison of heuristic methods for reliability optimization of series-parallel systems  

E-print Network

Three heuristics, the max-min approach, Nakagawa and Nakashima method, and Kim and Yum method, are considered for the redundancy allocation problem with series-parallel structures. The max-min approach can formulate the problem as an integer linear...
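
To make the max-min objective concrete, the sketch below brute-forces a tiny redundancy allocation, maximizing the weakest subsystem's reliability under a cost budget; component reliabilities, costs, and the budget are invented, and the cited heuristics are designed precisely to avoid such enumeration.

```python
from itertools import product

# Toy series-parallel system: 3 subsystems in series; subsystem i uses
# n_i identical parallel components with reliability R[i] and cost C[i].
R = [0.90, 0.85, 0.95]     # component reliabilities (invented)
C = [2.0, 3.0, 1.5]        # component costs (invented)
BUDGET = 20.0

def subsystem_rel(r, n):   # n components in parallel
    return 1 - (1 - r) ** n

best = None
for ns in product(range(1, 6), repeat=3):
    cost = sum(n * c for n, c in zip(ns, C))
    if cost > BUDGET:
        continue
    rels = [subsystem_rel(r, n) for r, n in zip(R, ns)]
    key = min(rels)        # max-min objective: raise the weakest link
    if best is None or key > best[0]:
        best = (key, ns, cost, rels)

key, ns, cost, rels = best
print("allocation:", ns, "cost:", cost)
print("subsystem reliabilities:", [round(x, 4) for x in rels])
print("system reliability:", round(rels[0] * rels[1] * rels[2], 4))
```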

Lee, Hsiang

2012-06-07

19

A comprehensive parallel study on the board level reliability of SAC, SACX and SCN solders  

Microsoft Academic Search

Legislation that mandates the banning of lead (Pb) in electronics due to environmental and health concerns has been actively pursued in many countries during the past fifteen years. Lead-free electronics will be deployed in many products that serve markets where the reliability is a critical requirement. Although a large number of research studies have been performed and are currently under

Fubin Song; Jeffery C. C. Lo; Jimmy K. S. Lam; Tong Jiang; S. W. Ricky Lee

2008-01-01

20

Corrected Estimates of WAIS-R Short Form Reliability and Standard Error of Measurement.  

ERIC Educational Resources Information Center

The calculations of D. Schretlen, R. H. B. Benedict, and J. H. Bobholz for the reliabilities of a short form of the Wechsler Adult Intelligence Scale--Revised (WAIS-R) (1994) consistently overestimated the values. More accurate values are provided for the WAIS--R and a seven-subtest short form. (SLD)

Axelrod, Bradley N.; And Others

1996-01-01

21

Alzheimer's Dementia: Performance on parallel forms of the Dementia Assessment Battery  

Microsoft Academic Search

Fifty-four patients with Alzheimer's disease performed on the Dementia Assessment Battery, which comprised finger tapping, forward digit span, naming, verbal memory, visual memory, Token Test, digit cancellation, word-list generation, symbol-digit substitution, and copying geometric designs. Four forms of the battery were administered at weekly intervals. The equivalence of the forms, the relative difficulty of the tests, retest reliability, and

Evelyn Lee Teng; Cynthia Wimer; Eugene Roberts; Antonio R. Damasio; Paul J. Eslinger; Marshal F. Folstein; Larry E. Tune; Peter J. Whitehouse; Eileen L. Bardolph; Helena C. Chui; Victor W. Henderson

1989-01-01

22

How reliable are patient-completed medication reconciliation forms compared with pharmacy lists?  

Microsoft Academic Search

Objectives Medication reconciliation is a Joint Commission for the Accreditation of Healthcare Organizations requirement to reduce medication errors. This study evaluated the reliability of patient-completed medication reconciliation forms (MRs) compared with pharmacy-generated lists and determined if there was a difference in concordance when patients completed the forms from memory compared with when they brought a separate list or pill bottles.

Carolyn Meyer; Michael Stern; Wendy Woolley; Rebecca Jeanmonod; Donald Jeanmonod

23

The Validation of Parallel Test Forms: "Mountain" and "Beach" Picture Series for Assessment of Language Skills  

ERIC Educational Resources Information Center

Pictures are widely used to elicit expressive language skills, and pictures must be established as parallel before changes in ability can be demonstrated by assessment using picture prompts. Why parallel prompts are required and what it is necessary to do to ensure that prompts are in fact parallel is not widely known. To date, evidence of…

Bae, Jungok; Lee, Yae-Sheik

2011-01-01

24

Reliability of self reported form of female genital mutilation and WHO classification: cross sectional study  

PubMed Central

Objective To assess the reliability of self reported form of female genital mutilation (FGM) and to compare the extent of cutting verified by clinical examination with the corresponding World Health Organization classification. Design Cross sectional study. Settings One paediatric hospital and one gynaecological outpatient clinic in Khartoum, Sudan, 2003-4. Participants 255 girls aged 4-9 and 282 women aged 17-35. Main outcome measures The women's reports of FGM, the actual anatomical extent of the mutilation, and the corresponding types according to the WHO classification. Results All girls and women reported to have undergone FGM had this verified by genital inspection. None of those who said they had not undergone FGM were found to have it. Many of those said to have undergone “sunna circumcision” (excision of prepuce and part or all of clitoris, equivalent to WHO type I) had a form of FGM extending beyond the clitoris (10/23 (43%) girls and 20/35 (57%) women). Of those who said they had undergone this form, nine girls (39%) and 19 women (54%) actually had WHO type III (infibulation and excision of part or all of external genitalia). The anatomical extent of forms classified as WHO type III varies widely. In 12/32 girls (38%) and 27/245 women (11%) classified as having WHO type III, the labia majora were not involved. Thus there is a substantial overlap, in an anatomical sense, between WHO types II and III. Conclusion The reliability of reported form of FGM is low. There is considerable under-reporting of the extent. The WHO classification fails to relate the defined forms to the severity of the operation. It is important to be aware of these aspects in the conduct and interpretation of epidemiological and clinical studies. WHO should revise its classification. PMID:16803943

Elmusharaf, Susan; Elhadi, Nagla; Almroth, Lars

2006-01-01

25

G-quadruplexes form ultrastable parallel structures in deep eutectic solvent.  

PubMed

G-quadruplex DNA is highly polymorphic. Its conformation transition is involved in a series of important life events. These controllable diverse structures also make G-quadruplex DNA a promising candidate as catalyst, biosensor, and DNA-based architecture. So far, G-quadruplex DNA-based applications have been restricted to aqueous media. Since many chemical reactions and devices are required to be performed under strictly anhydrous conditions, even at high temperature, it is challenging and meaningful to study G-quadruplex DNA in a water-free medium. In this report, we systematically studied 10 representative G-quadruplexes in anhydrous room-temperature deep eutectic solvents (DESs). The results indicate that intramolecular, intermolecular, and even higher-order G-quadruplex structures can be formed in DES. Intriguingly, in DES, the parallel structure becomes the preferred G-quadruplex DNA conformation. More importantly, compared to aqueous media, G-quadruplex has ultrastability in DES and, surprisingly, some G-quadruplex DNA can survive even beyond 110 °C. Our work would shed light on the applications of G-quadruplex DNA to chemical reactions and DNA-based devices performed in an anhydrous environment, even at high temperature. PMID:23282194

Zhao, Chuanqi; Ren, Jinsong; Qu, Xiaogang

2013-01-29

26

Validity, Reliability, and Potential Bias of Short Forms of Students' Evaluation of Teaching: The Case of UAE University  

ERIC Educational Resources Information Center

Students' opinions continue to be a significant factor in the evaluation of teaching in higher education institutions. The purpose of this study was to psychometrically assess short students' evaluation of teaching (SET) forms, using the UAE University form as a model. The study evaluated the form's validity, reliability, the overall question,…

Dodeen, Hamzeh

2013-01-01

27

Multiple clusters of release sites formed by individual thalamic afferents onto cortical interneurons ensure reliable transmission  

PubMed Central

Summary Thalamic afferents supply the cortex with sensory information by contacting both excitatory neurons and inhibitory interneurons. Interestingly, thalamic contacts with interneurons constitute such a powerful synapse that even one afferent can fire interneurons, thereby driving feedforward inhibition. However, the spatial representation of this potent synapse on interneuron dendrites is poorly understood. Using Ca imaging and electron microscopy we show that an individual thalamic afferent forms multiple contacts with the interneuronal proximal dendritic arbor, preferentially near branch points. More contacts are correlated with larger amplitude synaptic responses. Each contact, consisting of a single bouton, can release up to 7 vesicles simultaneously, resulting in graded and reliable Ca transients. Computational modeling indicates that the release of multiple vesicles at each contact minimally reduces the efficiency of the thalamic afferent in exciting the interneuron. This strategy preserves the spatial representation of thalamocortical inputs across the dendritic arbor over a wide range of release conditions. PMID:21745647

Bagnall, Martha W.; Hull, Court; Bushong, Eric A.; Ellisman, Mark H.; Scanziani, Massimo

2012-01-01

28

Bringing the Cognitive Estimation Task into the 21st Century: Normative Data on Two New Parallel Forms  

PubMed Central

The Cognitive Estimation Test (CET) is widely used by clinicians and researchers to assess the ability to produce reasonable cognitive estimates. Although several studies have published normative data for versions of the CET, many of the items are now outdated and parallel forms of the test do not exist to allow cognitive estimation abilities to be assessed on more than one occasion. In the present study, we devised two new 9-item parallel forms of the CET. These versions were administered to 184 healthy male and female participants aged 18–79 years with 9–22 years of education. Increasing age and years of education were found to be associated with successful CET performance as well as gender, intellect, naming, arithmetic and semantic memory abilities. To validate that the parallel forms of the CET were sensitive to frontal lobe damage, both versions were administered to 24 patients with frontal lobe lesions and 48 age-, gender- and education-matched controls. The frontal patients’ error scores were significantly higher than the healthy controls on both versions of the task. This study provides normative data for parallel forms of the CET for adults which are also suitable for assessing frontal lobe dysfunction on more than one occasion without practice effects. PMID:24671170

MacPherson, Sarah E.; Wagner, Gabriela Peretti; Murphy, Patrick; Bozzali, Marco; Cipolotti, Lisa; Shallice, Tim

2014-01-01

29

Parallel spinors and parallel forms  

Microsoft Academic Search

(a) n = 2m, m > 2: the holonomy representation is (SU(m), [lm]R), and N = 2; (b) n = 4m, m > 2: the holonomy representation is (Sp(m), [V2m]R), and N = m + 1; (c) n = 8: the holonomy representation is (Spin(7), d7), and N = 1; (d) n = 7: the holonomy representation is (G2, p7), …

McKenzie Y. Wang

1989-01-01

30

Reducing Symmetric Banded Matrices to Tridiagonal Form - A Comparison of a New Parallel Algorithm with Two Serial Algorithms on the iPSC/860

Microsoft Academic Search

We compare three algorithms for reducing symmetric banded matrices to tridiagonal form and evaluate their performance on the Intel iPSC/860 hypercube parallel computer. Two of these algorithms, the routines BANDR and SBTRD from the EISPACK and LAPACK libraries, resp., are serial algorithms with little potential for coarse grain parallelism. The third one, called SBTH, is a new parallel algorithm. Results

Bruno Lang

1992-01-01

31

The Standardized Diagnosis of Autism, Autism Diagnostic Interview-Revised: Interrater Reliability of the German Form of the Interview  

Microsoft Academic Search

The feasibility and reliability of the German form of the revised parental interview to diagnose autism (Autism Diagnostic Interview-Revised, ADI-R) were investigated in this study. Brief examples are given of formerly and currently used diagnostic guidelines, together with an outline of the interview algorithm, which establishes thresholds for inclusion criteria. An excellent-to-good reliability could be demonstrated for the

F. Poustka; S. Lisch; D. Rühl; A. Sacher; G. Schmötzer; K. Werner

1996-01-01

32

Measuring Executive Function in Early Childhood: A Focus on Maximal Reliability and the Derivation of Short Forms  

PubMed Central

This study provided estimates of the maximal reliability of a newly developed battery of executive function (EF) tasks for use in early childhood. In addition, it demonstrated how changes in maximal reliability can inform the selection of different “short forms” of the battery, depending on child age. Participants included children from the Family Life Project—a prospective longitudinal study (N = 1292) of families who were over-sampled from low income and African American families at the birth of a new child—at age 3, 4, and 5 year assessments. Results indicated that the EF battery had reasonably good maximal reliability (H = .73, 95% CI = .69 - .76) in a mixed-age sample that included children who were randomly selected from the age 3, 4, and 5-year assessments. In contrast, the maximal reliability of the battery ranged from poor to modest for within-age samples (Hs = .47 [95% CI = .37 - .52], .62 [95% CI = .57 - .66], and .61 [95% CI = .55 - .66] at ages 3, 4, and 5 years, respectively). Although the derivation of a three-task “short form” of the battery always resulted in statistically significant decrements in maximal reliability, in some cases, the relative decrement in maximal reliability was quite modest and may be tolerable given the time savings and potential reduction in participant burden. Results are discussed with respect to the benefits of using maximal reliability to both evaluate task batteries and derive short forms, as well as how a focus on maximal reliability informs ongoing questions about the measurement and conceptualization of EF in early childhood. PMID:23397928
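
Maximal reliability for a single-factor battery is commonly quantified with Hancock's coefficient H. Assuming a congeneric model and invented standardized loadings, the effect of dropping tasks to form a short form can be sketched as follows.

```python
def coefficient_H(loadings):
    """Hancock's maximal reliability H for standardized loadings of a
    single-factor (congeneric) model: the reliability of the optimally
    weighted composite of the indicators."""
    s = sum(l * l / (1 - l * l) for l in loadings)
    return s / (1 + s)

# Invented standardized loadings for a 6-task EF battery.
full = [0.55, 0.48, 0.62, 0.40, 0.51, 0.45]
short = sorted(full, reverse=True)[:3]   # a 3-task "short form"

print(f"full battery H = {coefficient_H(full):.3f}")
print(f"short form H   = {coefficient_H(short):.3f}")
```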

Willoughby, Michael T.; Pek, Jolynn; Blair, Clancy B.

2014-01-01

33

A reliability study of springback on the sheet metal forming process under probabilistic variation of prestrain and blank holder force  

NASA Astrophysics Data System (ADS)

This work deals with a reliability assessment of the springback problem during the sheet metal forming process. The effects of operative parameters and material properties (blank holder force and plastic prestrain) on springback are investigated. A generic reliability approach was developed to control springback. Subsequently, the Monte Carlo simulation technique in conjunction with the Latin hypercube sampling method was adopted to study the probabilistic springback. A finite element method based on implicit/explicit algorithms was used to model the springback problem. The proposed constitutive law for sheet metal takes into account the adaptation of plastic parameters of the hardening law for each prestrain level considered. The Rackwitz-Fiessler algorithm is used to find reliability properties from response surfaces of the chosen springback geometrical parameters. The obtained results were analyzed using multi-state limit reliability functions based on geometry compensations.

Mrad, Hatem; Bouazara, Mohamed; Aryanpour, Gholamreza

2013-08-01

34

A Validation Study of the Dutch Childhood Trauma Questionnaire-Short Form: Factor Structure, Reliability, and Known-Groups Validity  

ERIC Educational Resources Information Center

Objective: The 28-item Childhood Trauma Questionnaire-Short Form (CTQ-SF) has been translated into at least 10 different languages. The validity of translated versions of the CTQ-SF, however, has generally not been examined. The objective of this study was to investigate the factor structure, internal consistency reliability, and known-groups…

Thombs, Brett D.; Bernstein, David P.; Lobbestael, Jill; Arntz, Arnoud

2009-01-01

35

An index-based short-form of the WISC-IV with accompanying analysis of the reliability and abnormality of differences

E-print Network

…with accompanying analysis of the reliability and abnormality of Index score differences. The use of the short-form is illustrated with a case… The WISC-IV (Wechsler, 2003a) continues to serve as a workhorse of cognitive assessment in clinical research and practice…

Crawford, John R.

36

Reliability of the International Physical Activity Questionnaire in Research Settings: Last 7-Day Self-Administered Long Form  

ERIC Educational Resources Information Center

The purpose of this study was to examine the test-retest reliability of the last 7-day long form International Physical Activity Questionnaire (Craig et al., 2003) and to examine the construct validity for the measure in a research setting. Participants were 151 male (n = 52) and female (n = 99) university students (M age = 24.15 years, SD = 5.01)…

Levy, Susan S.; Readdy, R. Tucker

2009-01-01

37

Comparisons between Classical Test Theory and Item Response Theory in Automated Assembly of Parallel Test Forms  

ERIC Educational Resources Information Center

The automated assembly of alternate test forms for online delivery provides an alternative to computer-administered, fixed test forms, or computerized-adaptive tests when a testing program migrates from paper/pencil testing to computer-based testing. The weighted deviations model (WDM) heuristic is particularly promising for automated test assembly…

Lin, Chuan-Ju

2008-01-01

38

Massively parallel processor  

NASA Technical Reports Server (NTRS)

A brief description is given of the Massively Parallel Processor (MPP). Major applications of the MPP are in the area of image processing (where the operands are often very small integers) from very high spatial resolution passive image sensors, signal processing of radar data, and numerical modeling simulations of climate. The system can be programmed in assembly language or a high level language. Information on background, status, architecture, programming, hardware reliability, applications, and the MPP's development as a national resource for parallel algorithm research are presented in outline form.

1985-01-01

39

The Queensland high risk foot form (QHRFF) - is it a reliable and valid clinical research tool for foot disease?  

PubMed Central

Background Foot disease complications, such as foot ulcers and infection, contribute to considerable morbidity and mortality. These complications are typically precipitated by “high-risk factors”, such as peripheral neuropathy and peripheral arterial disease. High-risk factors are more prevalent in specific “at risk” populations such as diabetes, kidney disease and cardiovascular disease. To the best of the authors’ knowledge a tool capturing multiple high-risk factors and foot disease complications in multiple at risk populations has yet to be tested. This study aimed to develop and test the validity and reliability of a Queensland High Risk Foot Form (QHRFF) tool. Methods The study was conducted in two phases. Phase one developed a QHRFF using an existing diabetes foot disease tool, literature searches, stakeholder groups and expert panel. Phase two tested the QHRFF for validity and reliability. Four clinicians, representing different levels of expertise, were recruited to test validity and reliability. Three cohorts of patients were recruited; one tested criterion measure reliability (n = 32), another tested criterion validity and inter-rater reliability (n = 43), and another tested intra-rater reliability (n = 19). Validity was determined using sensitivity, specificity and positive predictive values (PPV). Reliability was determined using Kappa, weighted Kappa and intra-class correlation (ICC) statistics. Results A QHRFF tool containing 46 items across seven domains was developed. Criterion measure reliability of at least moderate categories of agreement (Kappa > 0.4; ICC > 0.75) was seen in 91% (29 of 32) tested items. Criterion validity of at least moderate categories (PPV > 0.7) was seen in 83% (60 of 72) tested items. Inter- and intra-rater reliability of at least moderate categories (Kappa > 0.4; ICC > 0.75) was seen in 88% (84 of 96) and 87% (20 of 23) tested items respectively. Conclusions The QHRFF had acceptable validity and reliability across the majority of items; particularly items identifying relevant co-morbidities, high-risk factors and foot disease complications. Recommendations have been made to improve or remove identified weaker items for future QHRFF versions. Overall, the QHRFF possesses suitable practicality, validity and reliability to assess and capture relevant foot disease items across multiple at risk populations. PMID:24468080

2014-01-01

40

The test-retest reliability of the Wechsler-Bellevue Intelligence Test (Form I) for a neuropsychiatric population  

Microsoft Academic Search

Form I of the W-B test was administered twice to 53 NP patients. Intervals were one week for 33 patients, one month for 20. There were no significant differences between these two groups. A statistically significant increase of 10.75 weighted score points on the second test was found. The test-retest reliabilities of the 11 subtests range from the .90's to

Richard C. Hamister

1949-01-01

41

Reliability and validity of the parent form of the social competence scale in Chinese preschoolers.  

PubMed

The Parent Form of the Social Competence Scale (SCS-PF) was translated into Chinese and validated in a sample of Chinese preschool children (N = 443). Results confirmed a single dimension and high internal consistency in the SCS-PF. Mothers' ratings on the SCS-PF correlated moderately with teachers' ratings on the Teacher Form of the Social Competence Scale and weakly with teachers' ratings on the Student-Teacher Relationship Scale. PMID:23045868

Zhang, Xiao; Ke, Xue; Wang, Xiaoyan

2012-08-01

42

Do cataclastic deformation bands form parallel to lines of no finite elongation (LNFE) or zero extension directions?  

NASA Astrophysics Data System (ADS)

Conjugate cataclastic deformation bands cut unconsolidated sand and gravel at McKinleyville, California, and dip shallowly towards the north-northeast and south-southwest. The acute dihedral angle between the two sets of deformation bands is 47° and is bisected by the sub-horizontal, north-northeast directed incremental and finite shortening directions. Trishear models of fault propagation folding above the McKinleyville fault predict two sets of LNFE (lines of no finite elongation) that plunge steeply and shallowly to the south and north. These predictions are inconsistent with deformation band orientations and suggest that deformation bands did not form parallel to these LNFE. During plane strain, zero extension directions with acute dihedral angles of 47° develop when the dilatancy rate (dV/dε1) is -4.3. Experimental dilatancy rates for Vosges sandstone (cohesion > 0) and unconsolidated Hostun sand suggest the deformation bands either developed parallel to zero extension directions or in accordance with the Mohr-Coulomb criterion, assuming initial porosities of 22% and 39%, respectively. An empirical relationship between dV/dε1, relative density and mean stress suggests that dilatancy rates for Vosges sandstone overestimate dV/dε1 at McKinleyville. Deformation bands at McKinleyville likely developed either in a Mohr-Coulomb orientation, or an intermediate orientation bounded by the Mohr-Coulomb (θC) and Roscoe (θR) angles.

Imber, Jonathan; Perry, Tom; Jones, Richard R.; Wightman, Ruth H.

2012-12-01

43

Psychometric Properties of the Social Problem Solving Inventory-Revised Short-Form: Is the Short Form a Valid and Reliable Measure for Young Adults?  

Microsoft Academic Search

The purpose of the present study was to examine the psychometric properties of the Social Problem-Solving Inventory-Revised Short-Form (SPSI-R:SF), a 25-item self-report measure of real life social problem-solving ability. A sample of 219 Australian university students aged 16–25 years participated in the study. The reliability of the SPSI-R:SF scales was adequate to excellent. Evidence was demonstrated for convergent validity and divergent

Deanne Hawkins; Kate Sofronoff; Jeanie Sheffield

2009-01-01

44

Reliability and Validity of a Spanish Version of the Social Skills Rating System--Teacher Form  

ERIC Educational Resources Information Center

The aim of this study was to examine the psychometric properties of a Spanish version of the Social Skills Scale of the Social Skills Rating System-Teacher Form (SSRS-T) with a sample of children attending elementary schools in Puerto Rico (N = 357). The SSRS-T was developed for use with English-speaking children. Although translated, adapted, and…

Jurado, Michelle; Cumba-Aviles, Eduardo; Collazo, Luis C.; Matos, Maribel

2006-01-01

45

Development and reliability testing of a food store observation form. — Measures of the Food Environment  

Cancer.gov

46

Defining the "Correct Form": Using Biomechanics to Develop Reliable and Valid Assessment Instruments  

ERIC Educational Resources Information Center

Physical educators should be able to define the "correct form" they expect to see each student performing in their classes. Moreover, they should be able to go beyond assessing students' skill levels by measuring the outcomes (products) of movements (i.e., how far they throw the ball or how many successful attempts are completed) or counting the…

Satern, Miriam N.

2011-01-01

47

A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows  

NASA Astrophysics Data System (ADS)

Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+ with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
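
The offline-approximation pipeline described above (a space-filling LHS design, expensive runs at the sampled sites, then Kriging interpolation) can be sketched with standard libraries; here the expensive CFD solve is replaced by an invented analytic stand-in.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Stand-in for the expensive CFD run: maps 4 design variables to an
# "aerodynamic force" (an invented analytic function, not a flow solve).
def expensive_solver(x):
    return np.sin(3 * x[:, 0]) * x[:, 1] + x[:, 2] ** 2 - 0.5 * x[:, 3]

# 1) Space-filling design over 4 design-variable DOF via Latin hypercube.
sampler = qmc.LatinHypercube(d=4, seed=0)
X_train = sampler.random(n=40)
y_train = expensive_solver(X_train)

# 2) Kriging (Gaussian-process) interpolation of the sampled responses.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2] * 4),
                              normalize_y=True)
gp.fit(X_train, y_train)

# 3) The surrogate now answers design queries without further "CFD" runs.
X_new = sampler.random(n=5)
pred, sigma = gp.predict(X_new, return_std=True)
for p, s, truth in zip(pred, sigma, expensive_solver(X_new)):
    print(f"surrogate {p:+.3f} +/- {s:.3f}   truth {truth:+.3f}")
```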

Allphin, Devin

48

The Parent Report Form of the CHIP-Child Edition: Reliability and Validity

Microsoft Academic Search

Background: There is increasing recognition of the importance of obtaining children's reports of their health, but significant challenges must be overcome to do so in a systematic, population-based manner. Objective: The objective of this study was to present the initial tests of the Child Report Form of the Child Health and Illness Profile-Child Edition (CHIP-CE/CRF), a self-report health status

Anne W. Riley; Christopher B. Forrest; Barbara Starfield; Judith A. Robertson; Phyllis Friello

2004-01-01

49

Japanese version of school form of the ADHD-RS: an evaluation of its reliability and validity.  

PubMed

Using the Japanese version of the school form of the ADHD-RS, this survey compared scores between the US and Japan and examined the correlates of the ADHD-RS. The classroom teachers of 7414 children (3842 males and 3572 females) evaluated all the children's behaviors. A confirmatory factor analysis of the ADHD-RS confirmed the same two-factor solution (Inattentive and Hyperactive-Impulsive) as previous studies. ADHD-RS scores were not related to IQ, but were associated with standardized achievement test scores. Males showed stronger ADHD tendencies than did females, and males tended to score lower as they grew older. A comparison of scores between the US and Japan found that the Japanese children scored lower than did US children. A Japanese version of the school form of the ADHD-RS with good reliability and validity was developed. More research on ADHD in Japanese children is required. PMID:20688467

Ohnishi, Masafumi; Okada, Ryo; Tani, Iori; Nakajima, Shunji; Tsujii, Masatsugu

2010-01-01

50

The Forms of Bullying Scale (FBS): validity and reliability estimates for a measure of bullying victimization and perpetration in adolescence.  

PubMed

The study of bullying behavior and its consequences for young people depends on valid and reliable measurement of bullying victimization and perpetration. Although numerous self-report bullying-related measures have been developed, robust evidence of their psychometric properties is scant, and several limitations inhibit their applicability. The Forms of Bullying Scale (FBS), with versions to measure bullying victimization (FBS-V) and perpetration (FBS-P), was developed on the basis of existing instruments for use with 12- to 15-year-old adolescents, to economically yet comprehensively measure both bullying perpetration and victimization. Measurement properties were estimated. Scale validity was tested using data from 2 independent studies of 3,496 Grade 8 and 783 Grade 8-10 students, respectively. Construct validity of scores on the FBS was shown in confirmatory factor analysis. The factor structure was not invariant across gender. Strong associations between the FBS-V and FBS-P and separate single-item bullying items demonstrated adequate concurrent validity. Correlations, in directions as expected with social-emotional outcomes (i.e., depression, anxiety, conduct problems, and peer support), provided robust evidence of convergent and discriminant validity. Responses to the FBS items were found to be valid and concurrently reliable measures of self-reported frequency of bullying victimization and perpetration, as well as being useful to measure involvement in the different forms of bullying behaviors. PMID:23730831

Shaw, Thérèse; Dooley, Julian J; Cross, Donna; Zubrick, Stephen R; Waters, Stacey

2013-12-01

51

The four canonical TPR subunits of human APC/C form related homo-dimeric structures and stack in parallel to form a TPR suprahelix.

PubMed

The anaphase-promoting complex or cyclosome (APC/C) is a large E3 RING-cullin ubiquitin ligase composed of between 14 and 15 individual proteins. A striking feature of the APC/C is that only four proteins are involved in directly recognizing target proteins and catalyzing the assembly of a polyubiquitin chain. All other subunits, which account for >80% of the mass of the APC/C, provide scaffolding functions. A major proportion of these scaffolding subunits are structurally related. In metazoans, there are four canonical tetratricopeptide repeat (TPR) proteins that form homo-dimers (Apc3/Cdc27, Apc6/Cdc16, Apc7 and Apc8/Cdc23). Here, we describe the crystal structure of the N-terminal homo-dimerization domain of Schizosaccharomyces pombe Cdc23 (Cdc23(Nterm)). Cdc23(Nterm) is composed of seven contiguous TPR motifs that self-associate through a related mechanism to those of Cdc16 and Cdc27. Using the Cdc23(Nterm) structure, we generated a model of full-length Cdc23. The resultant "V"-shaped molecule docks into the Cdc23-assigned density of the human APC/C structure determined using negative stain electron microscopy (EM). Based on sequence conservation, we propose that Apc7 forms a homo-dimeric structure equivalent to those of Cdc16, Cdc23 and Cdc27. The model is consistent with the Apc7-assigned density of the human APC/C EM structure. The four canonical homo-dimeric TPR proteins of human APC/C stack in parallel on one side of the complex. Remarkably, the uniform relative packing of neighboring TPR proteins generates a novel left-handed suprahelical TPR assembly. This finding has implications for understanding the assembly of other TPR-containing multimeric complexes. PMID:23583778

Zhang, Ziguo; Chang, Leifu; Yang, Jing; Conin, Nora; Kulkarni, Kiran; Barford, David

2013-11-15

52

Multiple nanoscale parallel grooves formed on Si3N4/TiC ceramic by femtosecond pulsed laser  

NASA Astrophysics Data System (ADS)

Multiple nanoscale parallel grooves were induced on Si3N4/TiC ceramic by a femtosecond pulsed laser with a pulse width of 120 fs, wavelength of 800 nm and repetition rate of 1000 Hz. Pulse energy, scanning speed and the number of overscans were studied for the formation of regular parallel grooves. The evolution of surface morphology, ablation dimension and surface roughness with different processing parameters was measured by scanning electron microscope (SEM), atomic force microscope (AFM) and white light interferometer. The results show that the uniform multiple nanoscale parallel grooves are obtained by optimizing the pulse energy, scanning speed and number of overscans. The optimum parameters are 2.5 μJ pulse energy and 130 μm/s scanning speed with 1 overscan. At a constant scanning speed of 130 μm/s, the period of the parallel grooves stays relatively constant with increasing pulse energy, fluctuating around 600 nm, which is smaller than the laser wavelength. Additionally, the period was found to increase in a roughly linear fashion with increasing scanning speed. The depth of grooves increases with the increasing pulse energy and decreasing scanning speed; the surface roughness increases with the increasing pulse energy, decreasing scanning speed and increasing number of overscans. Meanwhile, the formation mechanism of laser-induced multiple nanoscale parallel grooves on the Si3N4/TiC ceramic surface was discussed.

Xing, Youqiang; Deng, Jianxin; Lian, Yunsong; Zhang, Kedong; Zhang, Guodong; Zhao, Jun

2014-01-01

53

Using CAD Geometric Variation Approach for Lettering Complicated Letter on 3D Free-Form Surface by a 3DOF Parallel Machine Tool  

Microsoft Academic Search

A novel CAD geometric variation approach is proposed for machining a complicated shape workpiece and lettering on a 3D free-form surface or a plane by means of a 3-DOF parallel machine tool. First, a simulation mechanism of a 3-DOF 3-SPR parallel manipulator is created, and its workspace is constructed by means of the 3-SPR simulation mechanism. Second, the tool path guiding

Yi Lu; Jia-yin Xu

2007-01-01

54

Verbal and Visual Parallelism  

ERIC Educational Resources Information Center

This study investigates the practice of presenting multiple supporting examples in parallel form. The elements of parallelism and its use in argument were first illustrated by Aristotle. Although real texts may depart from the ideal form for presenting multiple examples, rhetorical theory offers a rationale for minimal, parallel presentation. The…

Fahnestock, Jeanne

2003-01-01

55

Inter- and intrarater reliability of Minimal Eating Observation and Nutrition Form - version II (MEONF-II) nurse assessments among hospital inpatients  

PubMed Central

Background The Minimal Eating Observation and Nutrition form – version II (MEONF – II) is a recently developed nursing nutritional screening tool. However, its inter- and intrarater reliability has not been assessed. Methods Inpatients (n = 24; median age, 69 years; 11 women) were assessed by eight nurses (interrater reliability, two nurses scored each patient independently) using the MEONF-II on two consecutive days (intrarater reliability, each patient was scored by the same nurse day 1 and day 2). Results Six patients were at moderate/high undernutrition risk. Inter- and intrarater reliabilities (Gwet’s agreement coefficient) for the MEONF-II 2-category classification (no/low risk versus moderate/high risk) were 0.93 and 0.81; for the 3-category classification (no/low – moderate – high risk) reliabilities (Gwet’s weighted agreement coefficient) were 0.98 and 0.88; and total score inter- and intrarater reliabilities (intraclass correlation) were 0.92 and 0.84. Conclusion Reliability of MEONF-II nurse assessments among adult hospital inpatients was supported and the tool can be used in research and clinical practice. PMID:25093011

2014-01-01

56

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 1. Technical Report #1216  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due…

Anderson, Daniel; Park, Jasmine, Bitnara; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

2012-01-01

57

On Reliable and Scalable Peer-to-Peer Web Document Sharing (Proceedings of the 2002 International Parallel and Distributed Processing Symposium, IPDPS'02)

E-print Network

objects of all clients' browser caches. Upon the request of a client, the browser first checks if the object exists in the local browser cache. If so, the request… If a user request misses in its local browser cache and the proxy…

58

Improved Reliability and ESD Characteristics of Flip-Chip GaN-Based LEDs With Internal Inverse-Parallel Protection Diodes  

Microsoft Academic Search

In this letter, a GaN/sapphire light-emitting diode (LED) structure was designed with improved electrostatic discharge (ESD) performance through the use of a shunt GaN ESD diode connected in inverse-parallel to the GaN LED. Thus, electrostatic charge can be discharged from the GaN LED through the shunt diode. We found that the ESD withstanding capability of GaN/sapphire LEDs incorporating this ESD-protection

Shih-Chang Shei; Jinn-Kong Sheu; Chien-Fu Shen

2007-01-01

59

Validity and Reliability of the Turkish Form of Technology-Rich Outcome-Focused Learning Environment Inventory  

ERIC Educational Resources Information Center

The purpose of the study was to investigate the reliability and validity of a Turkish adaptation of Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI) which was developed by Aldridge, Dorman, and Fraser. A sample of 985 students from 16 high schools (Grades 9-12) participated in the study. Translation process followed…

Cakir, Mustafa

2011-01-01

60

The Footwear Assessment Form: a reliable clinical tool to assess footwear characteristics of relevance to postural stability in older adults  

Microsoft Academic Search

Objective: Falls in older adults are common and may result in serious injury. Inappropriate footwear has been suggested to be a contributing factor to many falls. However no studies have been undertaken to determine whether clinicians can reliably assess footwear variables thought to influence postural stability in older adults. The aim of this study was therefore to develop a simple

Hylton B Menz; Catherine Sherrington

2000-01-01

61

Health-related quality of life of HIV-infected women: evidence for the reliability, validity and responsiveness of the Medical Outcomes Study Short-Form 20  

Microsoft Academic Search

The purpose of this study was to assess the reliability, validity and responsiveness of a health-related quality of life (HRQOL) instrument, the Medical Outcomes Study Short-Form 20-Item General Health Survey (MOS SF-20), in a sample of women with the human immunodeficiency virus (HIV). Longitudinal data were collected on 202 HIV-infected women without AIDS who were receiving care at Kings County Hospital

M. Y. Smith; J. Feldman; P. Kelly; J. A. DeHovitz; K. Chirgwin; H. Minkoff

1996-01-01

63

Validity, Reliability, and Standard Errors of Measurement for Two Seven-Subtest Short Forms of the Wechsler Adult Intelligence Scale–III  

Microsoft Academic Search

Validity and reliability coefficients and standard errors of measurement for two 7-subtest short forms (SFs) of the Wechsler Adult Intelligence Scale–III (WAIS-III; D. Wechsler, 1997) are provided. Data for the study were obtained from the WAIS-III—WMS-III Technical Manual and were based on the 2,450 adolescents and adults in the WAIS-III standardization sample. SF1 consists of Information, Digit Span, Arithmetic, Similarities,…

Joseph J. Ryan; L. Charles Ward

1999-01-01

64

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 2. Technical Report #1217  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due to…

Anderson, Daniel; Lai, Cheng-Fei; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

2012-01-01

65

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Passage Reading Fluency Assessments: Grade 4. Technical Report #1219  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

66

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 5. Technical Report #1220  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

Lai, Cheng-Fei; Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

2012-01-01

67

Specific Sequences from the Carboxyl Terminus of Human p53 Gene Product Form AntiParallel Tetramers in Solution  

Microsoft Academic Search

Human p53 is a tumor-suppressor gene product associated with control of the cell cycle and with growth suppression, and it is known to form homotetramers in solution. To investigate the relationship of structure to tetramerization, nine peptides corresponding to carboxyl-terminal sequences in human p53 were chemically synthesized, and their equilibrium associative properties were determined by analytical ultracentrifugation. Secondary structure, as…

Hiroshi Sakamoto; Marc S. Lewis; Hiroaki Kodama; Ettore Appella; Kazuyasu Sakaguchi

1994-01-01

68

Parallel zippers formed by alpha-helical peptide columns in crystals of Boc-Aib-Glu(OBzl)-Leu-Aib-Ala-Leu-Aib-Ala-Lys(Z)-Aib-OMe.  

PubMed Central

The crystal structure of the decapeptide Boc-Aib-Glu(OBzl)-Leu-Aib-Ala-Leu-Aib-Ala-Lys(Z)-Aib-OMe (where Aib is alpha-aminoisobutyryl, Boc is t-butoxycarbonyl, OBzl is benzyl ester, and Z is benzyloxycarbonyl) illustrates a parallel zipper arrangement of interacting helical peptide columns. Head-to-tail NH...OC hydrogen bonding extends the alpha-helices formed by the decapeptide into long columns in the crystal. An additional NH...OC hydrogen bond in the head-to-tail region, between the extended side chains of Glu(OBzl), residue 2 in one molecule, and Lys(Z), residue 9 in another molecule, forms a "double tooth" on the side of the column. These double teeth are repeated regularly on the helical columns with spaces of six residues between them (approximately 10 A). The double teeth on a pair of parallel columns (all carbonyl groups pointed in the same direction) interdigitate in a zipper motif. All contacts in the zipper portion are of the van der Waals type. The peptide, with formula C66H103N11O17.H2O, crystallizes in space group P2(1)2(1)2(1) with a = 10.677(4) A, b = 16.452(6) A, and c = 43.779(13) A; overall agreement R = 10.2% for 3527 observed reflections (|F0| > 3 sigma); resolution 0.9 A. PMID:2236010

Karle, I L; Flippen-Anderson, J L; Uma, K; Balaram, P

1990-01-01

69

Balancing the Need for Reliability and Time Efficiency: Short Forms of the Wechsler Adult Intelligence Scale-III  

Microsoft Academic Search

Tables permitting the conversion of short-form composite scores to full-scale IQ estimates have been published for previous editions of the Wechsler Adult Intelligence Scale (WAIS). Equivalent tables are now needed for selected subtests of the WAIS-III. This article used Tellegen and Briggs’s formulae to convert the sum of scaled scores for four selected WAIS-III short-form combinations into full-scale IQ estimates.

Sharon L. E. Jeyakumar; Erin M. Warriner; Vaishali V. Raval; Saadia A. Ahmad

2004-01-01

70

Reliability, validity, and utility of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) in assessments of bariatric surgery candidates.  

PubMed

In the current study, we examined the reliability, validity, and clinical utility of Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2011) scores in a sample of 759 bariatric surgery candidates. We provide descriptives for all scales, internal consistency and standard error of measurement estimates for all substantive scales, external correlates of substantive scales using chart review and self-report criteria, and relative risk ratios to assess the clinical utility of the instrument. Results generally support the reliability, validity, and clinical utility of MMPI-2-RF scale scores in the psychological evaluation of bariatric surgery candidates. Limitations, future directions, and practical application of these results are discussed. PMID:23914953

Tarescavage, Anthony M; Wygant, Dustin B; Boutacoff, Lana I; Ben-Porath, Yossef S

2013-12-01

71

Japanese Version of Home Form of the ADHD-RS: An Evaluation of Its Reliability and Validity  

ERIC Educational Resources Information Center

Using the Japanese version of the home form of the ADHD-RS, this survey attempted to compare the scores between the US and Japan and examined the correlates of the ADHD-RS. We collected responses from parents or rearers of 5977 children (3119 males and 2858 females) in nursery, elementary, and lower-secondary schools. A confirmatory factor analysis of…

Tani, Iori; Okada, Ryo; Ohnishi, Masafumi; Nakajima, Shunji; Tsujii, Masatsugu

2010-01-01

72

Balancing the Need for Reliability and Time Efficiency: Short Forms of the Wechsler Adult Intelligence Scale-III  

ERIC Educational Resources Information Center

Tables permitting the conversion of short-form composite scores to full-scale IQ estimates have been published for previous editions of the Wechsler Adult Intelligence Scale (WAIS). Equivalent tables are now needed for selected subtests of the WAIS-III. This article used Tellegen and Briggs's formulae to convert the sum of scaled scores for four…

Jeyakumar, Sharon L. E.; Warriner, Erin M.; Raval, Vaishali V.; Ahmad, Saadia A.

2004-01-01

73

Reliability and validity of the Spanish version of the Child Health and Illness Profile (CHIP) Child-Edition, Parent Report Form (CHIP-CE/PRF)  

PubMed Central

Background The objectives of the study were to assess the reliability, and the content, construct, and convergent validity of the Spanish version of the CHIP-CE/PRF, to analyze parent-child agreement, and compare the results with those of the original U.S. version. Methods Parents from a representative sample of children aged 6-12 years were selected from 9 primary schools in Barcelona. Test-retest reliability was assessed in a convenience subsample of parents from 2 schools. Parents completed the Spanish version of the CHIP-CE/PRF. The Achenbach Child Behavioural Checklist (CBCL) was administered to a convenience subsample. Results The overall response rate was 67% (n = 871). There was no floor effect. A ceiling effect was found in 4 subdomains. Reliability was acceptable at the domain level (internal consistency = 0.68-0.86; test-retest intraclass correlation coefficients = 0.69-0.85). Younger girls had better scores on Satisfaction and Achievement than older girls. Comfort domain score was lower (worse) in children with a probable mental health problem, with high effect size (ES = 1.45). The level of parent-child agreement was low (0.22-0.37). Conclusions The results of this study suggest that the parent version of the Spanish CHIP-CE has acceptable psychometric properties although further research is needed to check reliability at sub-domain level. The CHIP-CE parent report form provides a comprehensive, psychometrically sound measure of health for Spanish children 6 to 12 years old. It can be a complementary perspective to the self-reported measure or an alternative when the child is unable to complete the questionnaire. In general, the results are similar to the original U.S. version. PMID:20678198

2010-01-01

74

Parallel processing  

SciTech Connect

This book provides an introduction to the fundamental principles and practice of parallel processing. After a general introduction to the many facets of parallelism, the first part of the book is devoted to the development of a coherent theoretical framework. Particular attention is paid to the modeling, semantics, and complexity of interacting parallel processes. The second part of the book considers more practical aspects such as parallel processor architecture, parallel and distributed programming, and concurrent transaction handling in databases.

Krishnamurthy, E.V. (Waikato Univ., Hamilton (New Zealand))

1989-01-01

75

General peroxidase activity of a parallel G-quadruplex-hemin DNAzyme formed by Pu39WT - a mixed G-quadruplex forming sequence in the Bcl-2 P1 promoter  

PubMed Central

Background A 39-base-pair sequence (Pu39WT) located 58 to 19 base pairs upstream of the Bcl-2 P1 promoter has been implicated in the formation of an intramolecular mixed G-quadruplex structure and is believed to play a major role in the regulation of bcl-2 transcription. However, an extensive functional exploration requires further investigation. To further exploit the structure–function relationship of the Pu39WT-hemin DNAzyme, the secondary structure and peroxidase activity of the Pu39WT-hemin complex were investigated. Results Experimental results showed that when Pu39WT was incubated with hemin, it formed an intramolecular parallel G-quadruplex-hemin complex in K+ or Na+ solution, rather than a mixed hybrid without bound hemin. Also, Pu39WT-hemin showed peroxidase activity toward the substrate ABTS2- in the presence of H2O2, producing the colored radical anion (ABTS•-), which could then be used to determine the parameters governing the catalytic efficiency and reveal the peroxidase activity of the Pu39WT-hemin DNAzyme. Conclusions These results demonstrate the general peroxidase activity of the Pu39WT-hemin DNAzyme, which is an intramolecular parallel G-quadruplex structure. This peroxidase activity of hemin complexed with the G-quadruplex-forming sequence in the Bcl-2 gene promoter may imply a potential mechanism of hemin-mediated cellular injury. PMID:25050134

2014-01-01

76

Reliability and structural integrity  

NASA Technical Reports Server (NTRS)

An analytic model is developed to calculate the reliability of a structure after it is inspected for cracks. The model accounts for the growth of undiscovered cracks between inspections and their effect upon the reliability after subsequent inspections. The model is based upon a differential form of Bayes' Theorem for reliability, and upon fracture mechanics for crack growth.

Davidson, J. R.

1976-01-01
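
In simplified discrete form (a hedged illustration; the report itself works with a differential form of Bayes' Theorem, and POD(a) and p below are generic symbols, not the report's notation), the post-inspection update reads: if a crack of size a is present with prior probability p and an inspection with probability of detection POD(a) finds nothing, the posterior probability that the crack is still present is

```latex
% Hypothetical discrete sketch of a Bayes update after a clean
% inspection; not the report's differential formulation.
\[
  p' \;=\; \frac{\bigl(1-\mathrm{POD}(a)\bigr)\,p}
                {\bigl(1-\mathrm{POD}(a)\bigr)\,p + (1-p)} .
\]
```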

77

Network reliability  

NASA Technical Reports Server (NTRS)

Network control (or network management) functions are essential for efficient and reliable operation of a network. Some control functions are currently included as part of the Open System Interconnection model. For local area networks, it is widely recognized that there is a need for additional control functions, including fault isolation functions, monitoring functions, and configuration functions. These functions can be implemented in either a central or distributed manner. The Fiber Distributed Data Interface Medium Access Control and Station Management protocols provide an example of distributed implementation. Relative information is presented here in outline form.

Johnson, Marjory J.

1985-01-01

78

Parallel Algorithms  

NSDL National Science Digital Library

Content prepared for the Supercomputing 2002 session on "Using Clustering Technologies in the Classroom". Contains a series of exercises for teaching parallel computing concepts through kinesthetic activities.

Gray, Paul

79

Parallel Optimisation  

NSDL National Science Digital Library

An introduction to optimisation techniques that may improve parallel performance and scaling on HECToR. It assumes that the reader has some experience of parallel programming including basic MPI and OpenMP. Scaling is a measurement of the ability of a parallel code to use increasing numbers of cores efficiently. A scalable application is one that, when the number of processors is increased, performs better by a factor which justifies the additional resource employed. Making a parallel application scale to many thousands of processes requires not only careful attention to the communication, data, and work distribution but also to the choice of algorithms. Since the choice of algorithm is too broad a subject, and too particular to the application domain, to cover in this brief guide, we concentrate on general good practices towards parallel optimisation on HECToR.

80

Parallel image compression  

NASA Technical Reports Server (NTRS)

A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

Reif, John H.

1987-01-01

81

Item Selection for the Development of Parallel Forms from an IRT-Based Seed Test Using a Sampling and Classification Approach  

ERIC Educational Resources Information Center

Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…

Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan

2012-01-01

82

Parallel processing for control applications  

SciTech Connect

Parallel processing has been a topic of discussion in computer science circles for decades. Using more than one single computer to control a process has many advantages that compensate for the additional cost. Initially multiple computers were used to attain higher speeds. A single CPU could not perform all of the operations necessary for real time operation. As technology progressed and CPUs became faster, the speed issue became less significant. The additional processing capabilities however continue to make high speeds an attractive element of parallel processing. Another reason for multiple processors is reliability. For the purpose of this discussion, reliability and robustness will be the focal point. Most contemporary conceptions of parallel processing include visions of hundreds of single computers networked to provide 'computing power'. Indeed our own teraflop machines are built from large numbers of computers configured in a network (and thus limited by the network). There are many approaches to parallel configurations and this presentation offers something slightly different from the contemporary networked model. In the world of embedded computers, which is a pervasive force in contemporary computer controls, there are many single chip computers available. If one backs away from the PC based parallel computing model and considers the possibilities of a parallel control device based on multiple single chip computers, a new area of possibilities becomes apparent. This study will look at the use of multiple single chip computers in a parallel configuration with emphasis placed on maximum reliability.

Telford, J. W. (John W.)

2001-01-01

83

Parallel biocomputing  

PubMed Central

Background With the advent of high throughput genomics and high-resolution imaging techniques, there is a growing necessity in biology and medicine for parallel computing, and with the low cost of computing, it is now cost-effective for even small labs or individuals to build their own personal computation cluster. Methods Here we briefly describe how to use commodity hardware to build a low-cost, high-performance compute cluster, and provide an in-depth example and sample code for parallel execution of R jobs using MOSIX, a mature extension of the Linux kernel for parallel computing. A similar process can be used with other cluster platform software. Results As a statistical genetics example, we use our cluster to run a simulated eQTL experiment. Because eQTL is computationally intensive, and is conceptually easy to parallelize, like many statistics/genetics applications, parallel execution with MOSIX gives a linear speedup in analysis time with little additional effort. Conclusions We have used MOSIX to run a wide variety of software programs in parallel with good results. The limitations and benefits of using MOSIX are discussed and compared to other platforms. PMID:21418580

2011-01-01
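
The approach is easy to sketch in outline. Below is a minimal Python analogue (an assumption for illustration: the paper runs R jobs under MOSIX, and names such as one_replicate are invented here), showing why independent simulation replicates parallelize with near-linear speedup:

```python
# Minimal Python analogue of running many independent simulation
# replicates in parallel; the paper itself uses R under MOSIX.
import random
from multiprocessing import Pool

def one_replicate(seed: int) -> float:
    """Run one independent replicate and return a summary statistic."""
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
    return max(samples)  # stand-in for, e.g., an eQTL test statistic

if __name__ == "__main__":
    with Pool() as pool:                      # one worker per core
        stats = pool.map(one_replicate, range(100))
    print(f"mean over {len(stats)} replicates: {sum(stats)/len(stats):.3f}")
```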

84

Scalable parallel communications  

NASA Technical Reports Server (NTRS)

Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth service to a single application); and (3) coarse grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism) also with near linear speed-ups.

Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

1992-01-01

85

Slab-Dip Variability and Trench-Parallel Flow beneath Non-Uniform Overriding Plates: Insights from 3D Numerical Models

NASA Astrophysics Data System (ADS)

Forces driving plate tectonics are reasonably well known but some factors controlling the dynamics and the geometry of subduction processes are still poorly understood. The effect of the thermal state of the subducting and overriding plates on the slab dip has been systematically studied in previous works by means of 2D and 3D numerical modeling. These models showed that kinematically-driven slabs subducting under a cold overriding plate are affected by an increased hydrodynamic suction, due to the lower temperature of the mantle wedge, which leads to a lower subduction angle, and eventually to the formation of flat slab segments. In these models the subduction is achieved by imposing a constant velocity at the top of the overriding plate, which may lead to unrealistic results. Here we present the results of 3D non-Newtonian thermo-mechanical numerical models, considering a dynamically-driven self-sustained subduction, to test the influence of a non-uniform overriding plate. Variations of the thermal state of the overriding plate along the trench cause variations in the hydrodynamic suction, which lead to variations of the slab dip along strike (Fig. 1) and a significant trench-parallel flow. When the material can flow around the edges of the slab, through the addition of lateral plates, the trench-parallel flow is enhanced (Fig. 2), whereas the variations of the slab dip are diminished. [Fig. 1: Effect of a non-uniform overriding plate on slab dip; 3D view of the 1000 °C isosurface. Fig. 2: Effect of a non-uniform overriding plate on trench-parallel flow; map view of the slab at different depths and times, showing viscosity (colormap) and velocity (arrows).]

Rodríguez-González, J.; Billen, M. I.; Negredo, A. M.

2012-12-01

86

Evaluation of General Classes of Reliability Estimators Often Used in Statistical Analyses of Quasi-Experimental Designs  

NASA Astrophysics Data System (ADS)

In this paper, major reliability estimators are analyzed and their comparative results are discussed. Their strengths and weaknesses are evaluated in this case study. Each of the reliability estimators has certain advantages and disadvantages. Inter-rater reliability is one of the best ways to estimate reliability when your measure is an observation. However, it requires multiple raters or observers. As an alternative, you could look at the correlation of ratings of the same single observer repeated on two different occasions. Each of the reliability estimators will give a different value for reliability. In general, the test-retest and inter-rater reliability estimates will be lower in value than the parallel forms and internal consistency ones because they involve measuring at different times or with different raters. This matters because reliability estimates are often used in statistical analyses of quasi-experimental designs.

Saini, K. K.; Sehgal, R. K.; Sethi, B. L.

2008-10-01
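
Two of the estimator families compared above can be made concrete in a few lines. The sketch below (illustrative data; cronbach_alpha is a helper defined here, not a library function) computes test-retest reliability as a Pearson correlation and internal consistency as Cronbach's alpha:

```python
# Hedged sketch: test-retest reliability (Pearson r between two
# administrations) and internal consistency (Cronbach's alpha).
# The scores are made up. Requires Python 3.10+ for statistics.correlation.
from statistics import correlation, variance

def cronbach_alpha(items):
    """items[i][p] = score of person p on item i."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

time1 = [12, 15, 11, 18, 14, 16]          # first administration
time2 = [13, 14, 12, 17, 15, 17]          # second administration
print(f"test-retest r = {correlation(time1, time2):.2f}")

items = [[3, 4, 2, 5], [2, 4, 3, 5], [3, 5, 2, 4]]   # 3 items, 4 people
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```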

87

Parallel MATLAB: Parallel For Loops  

E-print Network

Slides introducing parallel for loops in MATLAB. The iterations of a parallel loop must be completely independent; there are also some restrictions on array-data access. OpenMP implements a directive…

Crawford, T. Daniel

88

Parallelizing Timed Petri Net simulations  

NASA Technical Reports Server (NTRS)

The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

Nicol, David M.

1993-01-01

89

Reliability analysis  

NASA Technical Reports Server (NTRS)

The objective was to search for and demonstrate approaches and concepts for fast wafer probe tests of mechanisms affecting the reliability of MOS technology and, based on these, develop and optimize test chips and test procedures. Progress is reported on four important wafer-level reliability problems: gate-oxide radiation hardness; hot-electron effects; time-dependence dielectric breakdown; and electromigration.

1985-01-01

90

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Word and Passage Reading Fluency Assessments: Grade 3. Technical Report #1218  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

91

Manufacturing & Reliability  

E-print Network

…(i.e., liquid nitrogen) up to 1400 °C. Monotonic as well as cyclic fatigue testing is possible via remote… and mechanical characterization (e.g., mechanical testing, reliability testing, fatigue, etc.) expertise…

Rollins, Andrew M.

92

Parallel Anisotropic Tetrahedral Adaptation  

NASA Technical Reports Server (NTRS)

An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

Park, Michael A.; Darmofal, David L.

2008-01-01

93

Parallel Transports in Webs  

E-print Network

For connected reductive linear algebraic structure groups it is proven that every web is holonomically isolated. The possible tuples of parallel transports in a web form a Lie subgroup of the corresponding power of the structure group. This Lie subgroup is explicitly calculated and turns out to be independent of the chosen local trivializations. Moreover, explicit necessary and sufficient criteria for the holonomical independence of webs are derived. The results above can even be sharpened: Given an arbitrary neighbourhood of the base points of a web, then this neighbourhood contains some segments of the web whose parameter intervals coincide, but do not include 0 (that corresponds to the base points of the web), and whose parallel transports already form the same Lie subgroup as those of the full web do.

Christian Fleischhack

2003-03-31

94

Parallel Computing Explained  

NSDL National Science Digital Library

Several tutorials on parallel computing. Overview of parallel computing. Porting and code parallelization. Scalar, cache, and parallel code tuning. Timing, profiling and performance analysis. Overview of IBM Regatta P690.

Ncsa

95

Cobra: a Comprehensive Bundle-based Reliable Architecture  

E-print Network

To address this issue we propose Cobra, a distributed, scalable, highly parallel reliable architecture, making use of the available hardware resources. Cobra organizes the system's units dynamically using…

Bertacco, Valeria

96

Converting thread-level parallelism to instruction-level parallelism via simultaneous multithreading  

Microsoft Academic Search

To achieve high performance, contemporary computer systems rely on two forms of parallelism: instruction-level parallelism (ILP) and thread-level parallelism (TLP). Wide-issue super-scalar processors exploit ILP by executing multiple instructions from a single program in a single cycle. Multiprocessors (MP) exploit TLP by executing different threads in parallel on different processors. Unfortunately, both parallel processing styles statically partition processor resources, thus…

Jack L. Lo; Joel S. Emer; Henry M. Levy; Rebecca L. Stamm; Dean M. Tullsen; S. J. Eggers

1997-01-01

97

Parallel Programming in the Age of Ubiquitous Parallelism  

NASA Astrophysics Data System (ADS)

Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs.

Pingali, Keshav

2014-04-01

98

The Ohio Scales Youth Form: Expansion and Validation of a Self-Report Outcome Measure for Young Children  

ERIC Educational Resources Information Center

We examined the validity and reliability of a self-report outcome measure for children between the ages of 8 and 11. The Ohio Scales Problem Severity scale is a brief, practical outcome measure available in three parallel forms: Parent, Youth, and Agency Worker. The Youth Self-Report form is currently validated for children ages 12 and older. The…

Dowell, Kathy A.; Ogles, Benjamin M.

2008-01-01

99

Speculative parallelization of partially parallel loops  

E-print Network

…with even one cross-processor flow dependence, because we have to re-execute sequentially. Moreover, the existing, partial parallelism of loops is not exploited. We demonstrate a generalization of the speculative doall parallelization technique, called…

Dang, Francis Hoai Dinh

2009-05-15

100

Special parallel processing workshop  

SciTech Connect

This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

NONE

1994-12-01

101

Photovoltaic module reliability workshop  

SciTech Connect

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986--1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, there is still a need for the substantial research and testing required to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

Mrig, L. (ed.)

1990-01-01

102

Coherent parallel C  

Microsoft Academic Search

Coherent Parallel C (CPC) is an extension of C for parallelism. The extensions are not simply parallel for loops; instead, a data parallel programming model is adopted. This means that one has an entire process for each data object. An example of an "object" is one mesh point in a finite element solver. How the processes are actually distributed on…

Edward W. Felten; Steve W. Otto

1988-01-01

103

Parallel Mandelbrot Set Model  

NSDL National Science Digital Library

The Parallel Mandelbrot Set Model is a parallelization of the sequential MandelbrotSet model, which does all the computations on a single processor core. This parallelization is able to use a computer with more than one core (or processor) to carry out the same computation, thus speeding up the process. The parallelization is done using the model elements in the Parallel Java group. These model elements allow easy use of the Parallel Java library created by Alan Kaminsky. In particular, the parallelization used for this model is based on code in Chapters 11 and 12 of Kaminsky's book Building Parallel Java. The Parallel Mandelbrot Set Model was developed using the Easy Java Simulations (EJS) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double click the ejs_chaos_ParallelMandelbrotSet.jar file to run the program if Java is installed.

Franciscouembre

2011-11-24

104

Optical Interferometric Parallel Data Processor  

NASA Technical Reports Server (NTRS)

Image data processed faster than in present electronic systems. Optical parallel-processing system effectively calculates two-dimensional Fourier transforms in time required by light to travel from plane 1 to plane 8. Coherence interferometer at plane 4 splits light into parts that form double image at plane 6 if projection screen placed there.

Breckinridge, J. B.

1987-01-01

105

Theoretical dynamic response of electrostatic parallel plate  

NASA Astrophysics Data System (ADS)

Theoretical dynamic response of an electrostatic parallel plate to a time-varying voltage has been obtained as a function of the applied voltage. The nonlinear equation of motion is solved theoretically for the dynamic response when an ac drive voltage with a dc bias voltage actuates the movable plate of the parallel plate with small amplitude. The dynamic response is expressed in a closed form to support research on a variety of microactuators and sensors employing parallel plates.

Lee, Ki Bang

2007-10-01
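
The standard lumped-parameter form of this problem (stated here as background; the paper derives its own closed-form solution, and the symbols below are generic) is a mass-spring-damper driven by the nonlinear electrostatic force across the gap:

```latex
% Textbook parallel-plate actuator model: plate mass m, damping c,
% stiffness k, rest gap g_0, plate area A, permittivity eps_0,
% drive V(t) = V_dc + v_ac cos(omega t). Not the paper's notation.
\[
  m\,\ddot{x} + c\,\dot{x} + k\,x
    \;=\; \frac{\varepsilon_0 A\,V(t)^2}{2\,(g_0 - x)^2},
  \qquad V(t) = V_{\mathrm{dc}} + v_{\mathrm{ac}}\cos\omega t .
\]
```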

106

DUST EXTINCTION FROM BALMER DECREMENTS OF STAR-FORMING GALAXIES AT 0.75 ≤ z ≤ 1.5 WITH HUBBLE SPACE TELESCOPE/WIDE-FIELD-CAMERA 3 SPECTROSCOPY FROM THE WFC3 INFRARED SPECTROSCOPIC PARALLEL SURVEY

SciTech Connect

Spectroscopic observations of Hα and Hβ emission lines of 128 star-forming galaxies in the redshift range 0.75 ≤ z ≤ 1.5 are presented. These data were taken with slitless spectroscopy using the G102 and G141 grisms of the Wide-Field-Camera 3 (WFC3) on board the Hubble Space Telescope as part of the WFC3 Infrared Spectroscopic Parallel survey. Interstellar dust extinction is measured from stacked spectra that cover the Balmer decrement (Hα/Hβ). We present dust extinction as a function of Hα luminosity (down to 3 × 10^41 erg s^-1), galaxy stellar mass (reaching 4 × 10^8 M_Sun), and rest-frame Hα equivalent width. The faintest galaxies are two times fainter in Hα luminosity than galaxies previously studied at z ≈ 1.5. An evolution is observed where galaxies of the same Hα luminosity have lower extinction at higher redshifts, whereas no evolution is found within our error bars with stellar mass. The lower Hα luminosity galaxies in our sample are found to be consistent with no dust extinction. We find an anti-correlation of the [O III] λ5007/Hα flux ratio as a function of luminosity where galaxies with L_Hα < 5 × 10^41 erg s^-1 are brighter in [O III] λ5007 than Hα. This trend is evident even after extinction correction, suggesting that the increased [O III] λ5007/Hα ratio in low-luminosity galaxies is likely due to lower metallicity and/or higher ionization parameters.

Dominguez, A.; Siana, B.; Masters, D. (Department of Physics and Astronomy, University of California Riverside, Riverside, CA 92521); Henry, A. L.; Martin, C. L. (Department of Physics, University of California, Santa Barbara, CA 93106); Scarlata, C.; Bedregal, A. G. (Minnesota Institute for Astrophysics, University of Minnesota, Minneapolis, MN 55455); Malkan, M.; Ross, N. R. (Department of Physics and Astronomy, University of California Los Angeles, Los Angeles, CA 90095); Atek, H.; Colbert, J. W. (Spitzer Science Center, Caltech, Pasadena, CA 91125); Teplitz, H. I.; Rafelski, M. (Infrared Processing and Analysis Center, Caltech, Pasadena, CA 91125); McCarthy, P.; Hathi, N. P.; Dressler, A. (Observatories of the Carnegie Institution for Science, Pasadena, CA 91101); Bunker, A., E-mail: albertod@ucr.edu (Department of Physics, Oxford University, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH, United Kingdom)

2013-02-15
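
The Balmer-decrement method rests on a textbook relation (quoted as background, not from the paper): assuming a Case B intrinsic ratio of 2.86 and an extinction curve k(λ), the nebular reddening follows from the observed line ratio as

```latex
% Standard reddening estimate from the Balmer decrement; for a
% Cardelli-type curve, k(Hbeta) - k(Halpha) is roughly 1.1.
\[
  E(B-V) \;=\; \frac{2.5}{k(\mathrm{H}\beta) - k(\mathrm{H}\alpha)}
  \,\log_{10}\!\left[\frac{(\mathrm{H}\alpha/\mathrm{H}\beta)_{\mathrm{obs}}}{2.86}\right].
\]
```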

107

Fault-tolerant parallel processor  

SciTech Connect

This paper addresses issues central to the design and operation of an ultrareliable, Byzantine resilient parallel computer. Interprocessor connectivity requirements are met by treating connectivity as a resource that is shared among many processing elements, allowing flexibility in their configuration and reducing complexity. Redundant groups are synchronized solely by message transmissions and receptions, which also provide input data consistency and output voting. Reliability analysis results are presented that demonstrate the reduced failure probability of such a system. Performance analysis results are presented that quantify the temporal overhead involved in executing such fault-tolerance-specific operations. Empirical performance measurements of prototypes of the architecture are presented. 30 refs.

Harper, R.E.; Lala, J.H. (Charles Stark Draper Laboratory, Inc., Cambridge, MA (USA))

1991-06-01

108

Compiling for NUMA Parallel Machines  

E-print Network

A common feature of many scalable parallel machines is non-uniform memory access (NUMA). A parallelizing compiler for NUMA parallel machines must exploit both parallelism and data locality…

Zanibbi, Richard

109

Parallel Activation in Bilingual Phonological Processing  

ERIC Educational Resources Information Center

In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

Lee, Su-Yeon

2011-01-01

110

Parallel integrated frame synchronizer chip  

NASA Technical Reports Server (NTRS)

A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

2000-01-01

111

DC Circuits: Parallel Resistances  

NSDL National Science Digital Library

In this interactive learning activity, students will learn about parallel circuits. They will measure and calculate the resistance of parallel circuits and answer several questions about the example circuit shown.

2013-07-30
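
The relation the activity has students verify is the standard parallel-resistance formula; a quick worked example:

```latex
\[
  \frac{1}{R_{\mathrm{eq}}} \;=\; \sum_{i=1}^{n}\frac{1}{R_i},
  \qquad\text{e.g.}\;
  R_1 = 100\,\Omega,\; R_2 = 50\,\Omega
  \;\Rightarrow\;
  R_{\mathrm{eq}} = \Bigl(\tfrac{1}{100}+\tfrac{1}{50}\Bigr)^{-1}
  \approx 33.3\,\Omega .
\]
```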

112

Parallel text search methods  

Microsoft Academic Search

A comparison of recently proposed parallel text search methods to alternative available search strategies that use serial processing machines suggests parallel methods do not provide large-scale gains in either retrieval effectiveness or efficiency.

Gerard Salton; Chris Buckley

1988-01-01

113

Parallel flow diffusion battery  

DOEpatents

A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

Yeh, Hsu-Chi (Albuquerque, NM); Cheng, Yung-Sung (Albuquerque, NM)

1984-08-07

114

Parallel flow diffusion battery  

DOEpatents

A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

Yeh, H.C.; Cheng, Y.S.

1984-01-01

115

Parallel Implicit Algorithms for CFD  

NASA Technical Reports Server (NTRS)

The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.

Keyes, David E.

1998-01-01
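
For readers unfamiliar with the Newton-Krylov half of NKS, the serial SciPy sketch below (my own minimal example, not the project's code; it omits the Schwarz preconditioning entirely) solves a small nonlinear boundary-value problem with a Newton outer iteration and matrix-free Krylov inner solves:

```python
# Minimal Newton-Krylov illustration: solve u'' = exp(u), u(0)=u(1)=0,
# on a uniform grid. The Jacobian is never formed explicitly; SciPy's
# newton_krylov accesses it only through matrix-vector products.
import numpy as np
from scipy.optimize import newton_krylov

N = 100
h = 1.0 / (N + 1)

def residual(u):
    upad = np.concatenate(([0.0], u, [0.0]))          # Dirichlet BCs
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return lap - np.exp(u)

u = newton_krylov(residual, np.zeros(N))
print(f"min(u) = {u.min():.4f}")                      # dips below zero
```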

116

Parallel simulation today  

NASA Technical Reports Server (NTRS)

This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

Nicol, David; Fujimoto, Richard

1992-01-01

117

Special issue on parallelism  

Microsoft Academic Search

The articles presented in our Special Issue on parallel processing on the supercomputing scale reflect, to some extent, splits in the community developing these machines. There are several schools of thought on how best to implement parallel processing at both the hard- and software levels. Controversy exists over the wisdom of aiming for general- or special-purpose parallel machines, and what…

Karen A. Frenkel

1986-01-01

118

DYNAMIC LANGUAGE PARALLELIZATION  

E-print Network

Ph.D. thesis by Lorenz F. Huelsbergen, University of Wisconsin-Madison, 1993. Dynamic language parallelization is a new…

Huelsbergen, Lorenz

119

Decomposing the Potentially Parallel  

NSDL National Science Digital Library

This course provides an introduction to the issues involved in decomposing problems onto parallel machines, and to the types of architectures and programming styles commonly found in parallel computers. The list of topics discussed includes types of decomposition, task farming, regular domain decomposition, unbalanced grids, and parallel molecular dynamics.

Elspeth Minty, Robert Davey, Alan Simpson, David Henty

120

Reliability of Computation in the Cerebellum  

PubMed Central

The mossy fiber-granule cell-parallel fiber-Purkinje cell system of the cerebellar cortex is investigated from the viewpoint of reliability of computation. It is shown that the effects of variability in the inputs to a Purkinje cell can be reduced by having a large number of parallel fibers whose activities are statistically independent. The mossy fiber-granule cell relay is shown to be capable of performing the required function of transforming the activity in a small number of mossy fibers into activity in a much larger number of parallel fibers, while ensuring that there is little correlation between the activities of individual parallel fibers. The effects of variability in the outputs of Purkinje cells may be reduced by redundancy and convergence schemes, as evidenced by the geometrical pattern of parallel fibers and Purkinje cells and the convergence of these cells onto their target neurons. PMID:5579146

Sabah, N. H.

1971-01-01
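
The statistical core of the argument is the familiar variance-reduction identity: averaging N statistically independent parallel-fiber inputs of equal variance reduces the variance of the averaged input to a Purkinje cell by a factor of N,

```latex
\[
  \operatorname{Var}\!\Bigl(\frac{1}{N}\sum_{i=1}^{N} x_i\Bigr)
  \;=\; \frac{\sigma^{2}}{N},
  \qquad x_i \text{ independent},\;
  \operatorname{Var}(x_i) = \sigma^{2}.
\]
```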

121

Towards Distributed Memory Parallel Program Analysis  

SciTech Connect

This paper presents a parallel attribute evaluation for distributed memory parallel computer architectures, where previously only shared memory parallel support for this technique had been developed. Attribute evaluation is a part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis which cannot be addressed by a file-by-file view of large scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

Quinlan, D; Barany, G; Panas, T

2008-06-17

122

Parallel algorithm development  

SciTech Connect

Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

Adams, T.F.

1996-06-01
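
Strategy (1), building explicit message passing into the source code, looks like the following in a modern setting (a hedged sketch using mpi4py, which postdates this report; the report's own examples would have used FORTRAN with a message-passing library):

```python
# Explicit message passing: each rank computes a partial sum of
# 1..1000 and rank 0 combines them. Run with: mpiexec -n 4 python sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

chunk = 1000 // size
lo = rank * chunk + 1
hi = 1000 if rank == size - 1 else lo + chunk - 1
partial = sum(range(lo, hi + 1))

total = comm.reduce(partial, op=MPI.SUM, root=0)   # explicit communication
if rank == 0:
    print(f"total = {total}")                      # 500500
```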

123

Parallel Discrete Event Simulation Benno Overeinder Bob Hertzberger Peter Sloot  

E-print Network

…as the complex of activities associated with constructing models of real world systems and simulating them, allowing reliable evaluation of the effectiveness of these strategies. Section 2 gives an introduction to discrete event simulation. In section 3 a parallel view of the sequential…

124

Parallel Atomistic Simulations  

SciTech Connect

Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.

HEFFELFINGER,GRANT S.

2000-01-18
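
The spatial decomposition reviewed above can be illustrated with a toy binning step (purely illustrative Python, not from the review): each processor owns one cell of the simulation box and only the atoms inside it, so short-range force computation needs communication only with neighboring cells.

```python
# Toy spatial decomposition: bin atoms into NCELL^3 cubic cells.
import random

BOX, NCELL = 10.0, 4                 # box edge length, cells per side
cell_len = BOX / NCELL

atoms = [tuple(random.uniform(0.0, BOX) for _ in range(3))
         for _ in range(1000)]

cells = {}                           # (i, j, k) -> atoms owned by that cell
for pos in atoms:
    key = tuple(min(int(c / cell_len), NCELL - 1) for c in pos)
    cells.setdefault(key, []).append(pos)

print(f"{len(cells)} occupied cells, "
      f"~{len(atoms) / NCELL**3:.1f} atoms per cell on average")
```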

125

High Performance Parallel Computational Nanotechnology  

NASA Technical Reports Server (NTRS)

At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; scalable numerical algorithms for reliability, verifications and testability. There appears no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

Saini, Subhash; Craw, James M. (Technical Monitor)

1995-01-01

126

Analyzing Fuzzy System Reliability Using Confidence Interval

E-print Network

Abstract: In this paper, a new method has been developed for analyzing the fuzzy system reliability of series and parallel systems using fuzzy confidence intervals, where the reliability of each component of each system is unknown. To compute system reliability, we estimate the reliability of each component of the systems using fuzzy statistical data. Arithmetic operations over trapezoidal fuzzy numbers are used to analyze the fuzzy reliability. A numerical example is presented; the calculations were performed using programming in the software R. Key words: Reliability • fuzzy confidence interval • fuzzy number

Ezzatallah Baloui Jamkhaneh; Azam Nozari; Ali Nadi Ghara

127

Parallel digital forensics infrastructure.  

SciTech Connect

This report documents the architecture and implementation of a parallel digital forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

Liebrock, Lorie M. (New Mexico Tech, Socorro, NM); Duggan, David Patrick

2009-10-01

128

Languages for parallel architectures  

SciTech Connect

This book presents mathematical methods for modelling parallel computer architectures, based on the results of ESPRIT's project 415 on computer languages for parallel architectures. Presented are investigations incorporating a wide variety of programming styles, including functional, logic, and object-oriented paradigms. Topics covered include Philips's parallel object-oriented language POOL, lazy-functional languages, the languages IDEAL, K-LEAF, FP2, and Petri-net semantics for the AADL language.

Bakker, J.W.

1989-01-01

129

Optimistic parallelism requires abstractions  

Microsoft Academic Search

The problem of writing software for multicore processors is greatly simplified if we could automatically parallelize sequential programs. Although auto-parallelization has been studied for many decades, it has succeeded only in a few application areas such as dense matrix computations. In particular, auto-parallelization of irregular programs, which are organized around large, pointer-based data structures like graphs, has seemed intractable.

Milind Kulkarni; Keshav Pingali; Bruce Walter; Ganesh Ramanarayanan; Kavita Bala; L. Paul Chew

2007-01-01

130

Complexity of parallel algorithms  

SciTech Connect

This thesis addresses a number of theoretical issues in parallel computation. There are many open questions relating to what can be done with parallel computers and what are the most effective techniques to use to develop parallel algorithms. The author examines various problems in the hope of gaining insight into the general questions. One topic investigated is the relationship between sequential and parallel algorithms. Introduced is the concept of a P-complete algorithm to capture what it means for an algorithm to be inherently sequential. It is shown that a number of sequential greedy algorithms are P-complete, including the greedy algorithm for finding a path in a graph. However, a problem is not necessarily difficult if an algorithm to solve it is P-complete. In some cases, the natural sequential algorithm is P-complete but a different technique gives a fast parallel algorithm. This shows that it is necessary to use different techniques for parallel computation than are used for sequential computation. Fast parallel algorithms for a number of simple graph theory problems are given. The algorithms illustrate a number of different techniques that are useful for parallel algorithms. The final topic that we address is parallel approximation of P-complete problems.

Anderson

1985-11-01

131

Complexity of parallel algorithms  

SciTech Connect

This thesis addresses a number of theoretical issues in parallel computation. There are many open questions relating to what can be done with parallel computers and what are the most effective techniques to use to develop parallel algorithms. Various problems are examined in the hope of gaining insight into the general questions. One topic investigated is the relationship between sequential and parallel algorithms. The concept of a P-complete algorithm is introduced to capture what it means for an algorithm to be inherently sequential. It is shown that a number of sequential greedy algorithms are P-complete, including the greedy algorithm for finding a path in a graph. However, an algorithm being P-complete does not necessarily mean that the problem is difficult. In some cases, the natural sequential algorithm is P-complete but a different technique gives a fast parallel algorithm. This shows that it is necessary to use different techniques for parallel computation than are used for sequential computation. Fast parallel algorithms are given for a number of simple graph theory problems. The algorithms illustrate a number of different techniques that are useful for parallel algorithms. A number of results on approximating P-complete problems with parallel algorithms are given that are similar to results on approximating NP-complete problems with sequential algorithms.
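
For readers unfamiliar with the example, the greedy path algorithm in question can be stated in a few lines: from a start vertex, repeatedly move to the lowest-numbered unvisited neighbor. A minimal Python sketch follows (the graph is invented); computing this lexicographically-first maximal path is the problem shown to be P-complete, i.e., believed inherently sequential.

    def greedy_path(adj, start):
        # repeatedly step to the lowest-numbered unvisited neighbor
        path, seen = [start], {start}
        v = start
        while True:
            unvisited = [u for u in sorted(adj[v]) if u not in seen]
            if not unvisited:
                return path
            v = unvisited[0]
            path.append(v)
            seen.add(v)

    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
    print(greedy_path(adj, 0))    # prints [0, 1, 3, 2]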

Anderson, R.J.

1986-01-01

132

Parallel MATLAB at VT: Parallel For Loops  

E-print Network

Slide excerpt (abbreviations: FSU, Florida State University; AOE, Department of Aerospace and Ocean Engineering; ARC, Advanced Research Computing; ICAM, Interdisciplinary Center for Applied Mathematics). The notes cover MATLAB parallel for loops: there are restrictions on the order of execution and on array-data access. OpenMP implements a directive
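
As a rough cross-language analogue of such a parallel for loop, the sketch below (standard-library Python; the loop body is invented) distributes iterations over a process pool; the same restriction applies, namely that iterations must be independent of one another.

    from multiprocessing import Pool

    def body(i):                 # one loop iteration; no cross-iteration state
        return i * i

    if __name__ == "__main__":
        with Pool(4) as pool:
            results = pool.map(body, range(16))   # result order is preserved
        print(results)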

Crawford, T. Daniel

133

Parallel MATLAB at VT: Parallel For Loops  

E-print Network

Slide excerpt (abbreviations: FSU, Florida State University; AOE, Department of Aerospace and Ocean Engineering; ARC, Advanced Research Computing; ICAM, Interdisciplinary Center for Applied Mathematics). The notes cover MATLAB parallel for loops: loop iterations must be completely independent, and there are also some restrictions on array-data access. OpenMP implements a directive

Crawford, T. Daniel

134

Parallel MATLAB at VT: Parallel For Loops  

E-print Network

Slide excerpt (abbreviations: FSU, Florida State University; AOE, Department of Aerospace and Ocean Engineering; ARC, Advanced Research Computing; ICAM, Interdisciplinary Center for Applied Mathematics). The notes cover MATLAB parallel for loops: loop iterations must be independent, and there are also some restrictions on array-data access. OpenMP implements a directive

Crawford, T. Daniel

135

Parallelizing and De-parallelizing Elimination Orders.  

National Technical Information Service (NTIS)

The order in which the variables of a linear system are processed determines the total amounts of fill and work to perform LU decomposition on the system. We identify a trade-off between the amounts of fill and work for a given order and the parallelism i...
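
The fill/work quantities can be made concrete with the classical "elimination game": eliminating a vertex of the system's graph pairwise-connects its remaining neighbors, and every edge so added is fill. A minimal Python sketch, with an invented graph and a crude per-pivot work estimate:

    from itertools import combinations

    def fill_and_work(adj, order):
        adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
        fill = work = 0
        eliminated = set()
        for v in order:
            nbrs = adj[v] - eliminated
            work += len(nbrs) ** 2            # rough work for this pivot
            for a, b in combinations(sorted(nbrs), 2):
                if b not in adj[a]:           # new edge: one unit of fill
                    adj[a].add(b)
                    adj[b].add(a)
                    fill += 1
            eliminated.add(v)
        return fill, work

    adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2, 4}, 4: {3}}
    for order in ([0, 1, 2, 3, 4], [4, 3, 2, 1, 0]):
        print(order, fill_and_work(adj, order))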

C. F. Bornstein

1998-01-01

136

Comparison of Reliability Measures under Factor Analysis and Item Response Theory  

ERIC Educational Resources Information Center

Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability rho and an unweighted sum score-based omega, among many others. With the increasing popularity of item response theory, a parallel reliability measure pi…
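
For orientation, the two named coefficients have simple closed forms under the unifactor model: omega for the unweighted sum score and maximal reliability rho (coefficient H) for the optimally weighted composite. A worked Python sketch with invented standardized loadings:

    # omega = (sum lambda)^2 / ((sum lambda)^2 + sum theta)
    # rho   = 1 / (1 + 1 / sum(lambda_i^2 / theta_i))
    lam = [0.7, 0.6, 0.8, 0.5]            # made-up standardized loadings
    theta = [1 - l * l for l in lam]      # uniquenesses under standardization

    s = sum(lam)
    omega = s * s / (s * s + sum(theta))
    rho = 1 / (1 + 1 / sum(l * l / t for l, t in zip(lam, theta)))
    print(f"omega = {omega:.3f}, maximal reliability rho = {rho:.3f}")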

Cheng, Ying; Yuan, Ke-Hai; Liu, Cheng

2012-01-01

137

ANN-based Reliability Analysis for Deep Excavation  

Microsoft Academic Search

In this study, a reliability evaluation method integrating an artificial neural network (ANN) with the first-order reliability method (FORM) or Monte Carlo simulation (MCS) is explored. By performing a case study on the reliability of a deep excavation within soft ground, an analysis procedure for reliability analysis is proposed. The evaluation model of ANN-based FORM or ANN-based MCS is superior to traditional reliability
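
The MCS ingredient is easy to sketch: sample the random variables, evaluate a limit-state function g, and estimate the failure probability as the fraction of samples with g < 0 (the ANN enters as a cheap surrogate for an expensive g). A minimal Python sketch with an invented limit state and distributions:

    import random

    def g(load, resistance):            # failure when load exceeds resistance
        return resistance - load

    random.seed(1)
    n, failures = 100_000, 0
    for _ in range(n):
        load = random.gauss(100.0, 15.0)        # hypothetical distributions
        resistance = random.gauss(150.0, 20.0)
        if g(load, resistance) < 0:
            failures += 1
    print("estimated P_f =", failures / n)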

Fu-Kuo Huang; G. S. Wang

2007-01-01

138

Case studies in asynchronous data parallelism  

SciTech Connect

Is the owner-computes style of parallelism, captured in a variety of data parallel languages, attractive as a paradigm for designing explicit control parallel codes? This question gives rise to a number of others. Will such use be unwieldy? Will the resulting code run well? What can such an approach offer beyond merely replicating, in a more labor intensive way, the services and coverage of data parallel languages? We investigate these questions via a simple example and "real world" case studies developed using C-Linda, a language for explicit parallel programming formed by the merger of C with the Linda coordination language. The results demonstrate owner-computes is an effective design strategy in Linda.

Carriero, N.; Gelernter, D. [Yale Univ., New Haven, CT (United States)

1994-04-01

139

Reliability Evaluation using Triangular Intuitionistic Fuzzy Numbers Arithmetic Operations  

E-print Network

Abstract—Fuzzy sets are generally used to analyze fuzzy system reliability. Here, intuitionistic fuzzy set theory is used for analyzing the fuzzy system reliability. To analyze the fuzzy system reliability, the reliability of each component of the system is considered as a triangular intuitionistic fuzzy number. Triangular intuitionistic fuzzy numbers and their arithmetic operations are introduced. Expressions for computing the fuzzy reliability of a series system and a parallel system following triangular intuitionistic fuzzy numbers are described. An imprecise reliability model of an electric network in a darkroom is taken as an example. To compute the imprecise reliability of this system, the reliability of each component is represented by a triangular intuitionistic fuzzy number. A numerical example is presented. Keywords—Fuzzy set, Intuitionistic fuzzy number, System reliability, Triangular intuitionistic fuzzy number.

G. S. Mahapatra; T. K. Roy

140

Reliability Generalization: "Lapsus Linguae"  

ERIC Educational Resources Information Center

This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability

Smith, Julie M.

2011-01-01

141

Optimistic parallelism requires abstractions  

Microsoft Academic Search

Irregular applications, which manipulate large, pointer-based data structures like graphs, are difficult to parallelize manually. Automatic tools and techniques such as restructuring compilers and run-time speculative execution have failed to uncover much parallelism in these applications, in spite of a lot of effort by the research community. These difficulties have even led some researchers to wonder if there is any

Milind Kulkarni; Keshav Pingali; Bruce Walter; Ganesh Ramanarayanan; Kavita Bala; L. Paul Chew

2007-01-01

142

Parallelization of thermochemical nanolithography.  

PubMed

One of the most pressing technological challenges in the development of next-generation nanoscale devices is the rapid, parallel, precise and robust fabrication of nanostructures. Here, we demonstrate the possibility of parallelizing thermochemical nanolithography (TCNL) by employing five nano-tips for the fabrication of conjugated polymer nanostructures and graphene-based nanoribbons. PMID:24337109

Carroll, Keith M; Lu, Xi; Kim, Suenne; Gao, Yang; Kim, Hoe-Joon; Somnath, Suhas; Polloni, Laura; Sordan, Roman; King, William P; Curtis, Jennifer E; Riedo, Elisa

2014-01-01

143

Parallel Programming Workshop  

NSDL National Science Digital Library

This is an online course on parallel programming. Topics include MPI basics, point-to-point communication, derived datatypes, virtual topologies, collective communication, parallel I/O, and performance analysis and profiling. Other approaches, such as OpenMP and High Performance Fortran (HPF), will also be discussed. A Computational Fluid Dynamics section includes flux functions, Riemann solvers, Euler equations, and Navier-Stokes equations.
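
A taste of the point-to-point material, as a minimal Python sketch using mpi4py (assumed installed; run under mpirun with at least two ranks):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        # lowercase send/recv transfer pickled Python objects
        comm.send({"payload": [1, 2, 3]}, dest=1, tag=7)
    elif rank == 1:
        data = comm.recv(source=0, tag=7)
        print("rank 1 received", data)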

144

Parallel discrete event simulation  

Microsoft Academic Search

Parallel discrete event simulation (PDES), sometimes called distributed simulation, refers to the execution of a single discrete event simulation program on a parallel computer. PDES has attracted a considerable amount of interest in recent years. From a pragmatic standpoint, this interest arises from the fact that large simulations in engineering, computer science, economics, and military applications, to mention a few,

Richard M. Fujimoto

1990-01-01

145

Conscious Parallelism Revisited  

Microsoft Academic Search

Conscious parallelism, sometimes called tacit collusion, occurs where firms adopt their business practices based on what other firms are doing, rather than competing for customers. The most obvious manifestation occurs where prices across companies in an industry not only become suspiciously similar, but also change rapidly in strikingly parallel ways. Suggested examples are legion and varied: airline tickets, gasoline, cellular

Reza Dibadj

2010-01-01

146

Reliable Evaluations of URL Normalization  

Microsoft Academic Search

URL normalization is a process of transforming URL strings into canonical form. Through this process, duplicate URL representations for web pages can be reduced significantly. There are a number of normalization methods. In this paper, we describe four metrics for evaluating normalization methods. The reliability and consistency of a URL is also considered in our evaluation. With the metrics proposed,
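
A few of the normalization steps such metrics would score, sketched with Python's standard library (the particular rule set here is illustrative, not the paper's):

    from urllib.parse import urlsplit, urlunsplit

    def normalize(url):
        parts = urlsplit(url)
        scheme = parts.scheme.lower()                 # lowercase the scheme
        host = parts.hostname.lower() if parts.hostname else ""
        # drop default ports, keep explicit non-default ones
        if parts.port and (scheme, parts.port) not in {("http", 80), ("https", 443)}:
            host = f"{host}:{parts.port}"
        path = parts.path or "/"                      # empty path becomes "/"
        return urlunsplit((scheme, host, path, parts.query, ""))  # drop fragment

    print(normalize("HTTP://Example.COM:80/a/b?x=1#frag"))
    # -> http://example.com/a/b?x=1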

Sung Jin Kim; Hyo Sook Jeong; Sang Ho Lee

2006-01-01

147

A Design Methodology for Data-Parallel Applications  

Microsoft Academic Search

A methodology for the design and development of data parallel applications and components is presented. Data-parallelism is a well-understood form of parallel computation, yet developing simple applications can involve substantial efforts to express the problem in low-level notations. We describe a process of software development for data-parallel applications starting from high-level specifications, generating repeated refinements of designs to

Lars S. Nyland; Jan F. Prins; Allen Goldberg; Peter H. Mills

2000-01-01

148

A trial for a reliable shape measurement using interferometry and deflectometry  

NASA Astrophysics Data System (ADS)

Phase measuring deflectometry is an emerging technique for measuring specular complex surfaces, such as aspherical and free-form surfaces. It is very attractive for its wide dynamic range of vertical scale and its broad application range. Because it is a gradient-based surface profilometry, the measured data must be integrated to obtain the surface shape, which can be a cause of low accuracy. On the other hand, interferometry is an accurate and well-known method for precision shape measurement. In interferometry, the original measured data is the phase of the interference signal, which directly shows the surface shape of the target. However, interferometry is too precise to measure aspherical surfaces, free-form surfaces, and the usual surfaces in common industry. To assure accuracy in ultra-precision measurement, reliability is the most important thing, and reliability can be maintained by cross-checking. I therefore propose a measuring method using both interferometry and deflectometry for reliable shape measurement. In this concept, the global shape is measured using deflectometry and the local shape around flat areas is measured using interferometry. The result of deflectometry is global and precise, but it includes ambiguity due to slope integration. In interferometry, only a small area that is almost parallel to the reference surface can be measured, but the result is accurate and reliable. Combining both results should yield a global, precise, and reliable measurement. I will present the concept of combining interferometry and deflectometry and some preliminary experimental results.
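
The integration step that makes deflectometry global but ambiguous is easy to see numerically: cumulatively summing measured slopes recovers the profile only up to an unknown constant, which is exactly what a local interferometric patch could pin down. A small Python sketch with synthetic data:

    import numpy as np

    x = np.linspace(0.0, 1.0, 101)
    true_surface = 0.5 * x**2                     # hypothetical profile
    slopes = np.gradient(true_surface, x)         # what deflectometry measures

    # trapezoidal cumulative integration of the slope data
    dx = x[1] - x[0]
    recovered = np.concatenate(
        [[0.0], np.cumsum(0.5 * (slopes[1:] + slopes[:-1]) * dx)])

    # the integration constant is unknown; an interferometric patch fixes it
    offset = true_surface[0] - recovered[0]
    print("max residual:", np.max(np.abs(recovered + offset - true_surface)))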

Hanayama, Ryohei

2014-07-01

149

Non-Cartesian parallel imaging reconstruction.  

PubMed

Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. J. Magn. Reson. Imaging 2014;40:1022-1040. © 2014 Wiley Periodicals, Inc. PMID:24408499

Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

2014-11-01

150

On mesh rezoning algorithms for parallel platforms  

SciTech Connect

A mesh rezoning algorithm for finite element simulations in a parallel-distributed environment is described. The cornerstones of the algorithm are: the parallel computation of distortion norms on the element and subdomain level, the exchange of the individual subdomain norms to form a subdomain distortion vector, the classification of subdomains and the rezoning behavior prescribed within each subdomain as a response to its own classification and the classification of neighboring subdomains.
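
The norm-exchange step can be sketched in a few lines of Python with mpi4py (assumed installed): each rank computes its subdomain distortion norm, an allgather forms the subdomain distortion vector, and each rank classifies itself against an invented threshold and its neighbors' values.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    rng = np.random.default_rng(rank)
    element_distortion = rng.uniform(0.0, 1.0, 1000)   # per-element norms (toy)
    my_norm = float(np.sqrt((element_distortion ** 2).mean()))

    norms = comm.allgather(my_norm)          # the subdomain distortion vector
    THRESHOLD = 0.6                          # hypothetical rezoning trigger
    neighbors = [r for r in (rank - 1, rank + 1) if 0 <= r < size]
    rezone = my_norm > THRESHOLD or any(norms[r] > THRESHOLD for r in neighbors)
    print(f"rank {rank}: norm={my_norm:.3f}, rezone={rezone}")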

Plaskacz, E.J.

1995-07-01

151

Can there be reliability without reliability?  

NASA Astrophysics Data System (ADS)

A recent article by Pamela Moss asks the title question, 'Can there be validity without reliability?' If by reliability we mean only KR-20 coefficients or inter-rater correlations, the answer is yes. Sometimes these particular indices for evaluating evidence suit the problem we encounter; sometimes they don't. If by reliability we mean credibility of evidence, where credibility is defined as 'appropriate to the intended inference', the answer is no, we cannot have validity without reliability. Because 'validity' encompasses the process of reasoning as well as the data, uncritically accepting observations as strong evidence, when they may be incorrect, misleading, unrepresentative, or fraudulent, may lead coincidentally to correct conclusions but not to valid ones. This paper discusses and illustrates a broader conception of 'reliability' in educational assessment, to ground a deeper understanding of the issues raised by Professor Moss's question.

Mislevy, Robert J.

1994-10-01

152

Bilingual parallel programming  

SciTech Connect

Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

Foster, I.; Overbeek, R.

1990-01-01

153

Series/Parallel Batteries  

NSDL National Science Digital Library

It is important for students to understand how resistors, capacitors, and batteries combine in series and parallel. The combination of batteries has a lot of practical applications in science competitions. This lab also reinforces how to use a voltmeter t

Horton, Michael

2009-05-30

154

Parallel Plate Antenna.  

National Technical Information Service (NTIS)

The invention as disclosed is a parallel plate antenna having a number of stacked horizontal plates and two vertical plates. Alternating ones of the horizontal plates are electrically coupled to one vertical plate such that the horizontal plates coupled t...

D. F. Rivera

2009-01-01

155

Parallels with nature  

NASA Astrophysics Data System (ADS)

Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

2014-10-01

156

Calculational parallel programming: parallel programming with homomorphism and mapreduce  

Microsoft Academic Search

Parallel skeletons are designed to encourage programmers to build parallel programs from ready-made components for which efficient implementations are known to exist, making both parallel programming and the parallelization process simpler. Homomorphism and mapReduce are two well-known parallel skeletons. Homomorphism, widely studied in the program calculation community for more than twenty years, ideally suits the divide-and-conquer parallel computation paradigm over lists,
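
The homomorphism skeleton rests on one equation: h(xs ++ ys) = combine(h(xs), h(ys)) for an associative combine, so any split of the list can be mapped and reduced independently. A minimal standard-library Python sketch for sum-of-squares (the example itself is invented):

    from functools import reduce
    from multiprocessing import Pool

    def h(chunk):                       # the homomorphism applied to a sublist
        return sum(x * x for x in chunk)

    def combine(a, b):                  # associative combine for ++
        return a + b

    if __name__ == "__main__":
        data = list(range(1_000))
        chunks = [data[i::4] for i in range(4)]     # any split works
        with Pool(4) as pool:
            parts = pool.map(h, chunks)             # the "map" phase
        total = reduce(combine, parts)              # the "reduce" phase
        print(total == sum(x * x for x in data))    # True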

Zhenjiang Hu; K. Emoto; Z. Hu; K. Kakehi; K. Matsuzaki; M. Takeichi

2010-01-01

157

Human Reliability Program Overview  

SciTech Connect

This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

Bodin, Michael

2012-09-25

158

Power electronics reliability analysis.  

SciTech Connect

This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
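
The fault-tree step can be sketched directly: assuming independent components, failure probabilities combine through AND gates (redundancy, so all inputs must fail) and OR gates (any input failure propagates). The device and all numbers below are fictitious, echoing the report's fictitious example:

    def AND(*p):            # redundant parts: subsystem fails only if all fail
        out = 1.0
        for x in p:
            out *= x
        return out

    def OR(*p):             # series dependency: any failure is a system failure
        out = 1.0
        for x in p:
            out *= (1.0 - x)
        return 1.0 - out

    p_igbt, p_cap, p_ctrl = 0.02, 0.01, 0.005   # invented failure probabilities
    # two redundant capacitor banks, in series with the IGBT stage and controller
    p_system = OR(p_igbt, AND(p_cap, p_cap), p_ctrl)
    print(f"P(system failure) = {p_system:.5f}")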

Smith, Mark A.; Atcitty, Stanley

2009-12-01

159

Parallelization for geophysical waveform analysis  

E-print Network

&M University to aid the parallel programmer by providing standard implementations of common parallel programming tasks. Our research involves using STAPL to apply parallel methods to a problem that has already been solved sequentially: Seismic ray tracing...

Kurth, Derek Edward

2013-02-22

160

Bayesian reliability analysis for fuzzy lifetime data  

Microsoft Academic Search

Lifetime data are important in reliability analysis. Classical reliability estimation is based on precise lifetime data. It is usually assumed that observed lifetime data are precise real numbers. However, some collected lifetime data might be imprecise and are represented in the form of fuzzy numbers. Thus, it is necessary to generalize classical statistical estimation methods for real numbers to fuzzy

Hong-zhong Huang; Ming J. Zuo; Zhan-quan Sun

2006-01-01

161

Reliability in aposematic signaling  

PubMed Central

In light of recent work, we will expand on the role and variability of aposematic signals. The focus of this review will be the concepts of reliability and honesty in aposematic signaling. We claim that reliable signaling can solve the problem of aposematic evolution, and that variability in reliability can shed light on the complexity of aposematic systems. PMID:20539774

2010-01-01

162

A high-speed linear algebra library with automatic parallelism  

NASA Technical Reports Server (NTRS)

Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

Boucher, Michael L.

1994-01-01

163

Sublattice parallel replica dynamics.  

PubMed

Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998)] by combining it with the synchronous sublattice approach of Shim and Amar [Y. Shim and J. G. Amar, Phys. Rev. B 71, 125432 (2005)], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers. PMID:25019913

Martínez, Enrique; Uberuaga, Blas P; Voter, Arthur F

2014-06-01

164

Sublattice parallel replica dynamics  

NASA Astrophysics Data System (ADS)

Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998), 10.1103/PhysRevB.57.R13985] by combining it with the synchronous sublattice approach of Shim and Amar [Y. Shim and J. G. Amar, Phys. Rev. B 71, 125432 (2005), 10.1103/PhysRevB.71.125432], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.

Martínez, Enrique; Uberuaga, Blas P.; Voter, Arthur F.

2014-06-01

165

Reliability models for dataflow computer systems  

NASA Technical Reports Server (NTRS)

The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

Kavi, K. M.; Buckles, B. P.

1985-01-01

166

Parallel FFT & Isoefficiency: The Fast Fourier Transform in Parallel  

E-print Network

Slide excerpt (lecture L-14, 14 February 2014): the Fast Fourier Transform in parallel; the fastest Fourier transform; isoefficiency; introduction of the discrete Fourier transform (DFT) of a periodic function f(t).
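
For context, the algorithm behind such lectures is the radix-2 Cooley-Tukey FFT; its two half-size recursions are independent, which is what parallel FFTs exploit. A textbook Python sketch (input length assumed to be a power of two):

    import cmath

    def fft(a):
        n = len(a)
        if n == 1:
            return a[:]
        # the even and odd half-problems are independent: the parallelism
        even, odd = fft(a[0::2]), fft(a[1::2])
        out = [0j] * n
        for k in range(n // 2):
            t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + t
            out[k + n // 2] = even[k] - t
        return out

    print([round(abs(v), 6) for v in fft([1, 1, 1, 1, 0, 0, 0, 0])])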

Verschelde, Jan

167

Reliability evaluation of solar photovoltaic arrays  

Microsoft Academic Search

The operational lifetime of large solar PV arrays is investigated using probability theory for the assessment of reliability. Arrays based on the following three solar cell interconnection schemes have been considered: (i) the simple series-parallel (SP) array, (ii) the total-cross-tied (TCT) array, which is obtained from the SP array by connecting ties across each row of junctions, and (iii) the

Nalin K. Gautam; N. D. Kaushika

2002-01-01

168

User's guide to the Reliability Estimation System Testbed (REST)  

NASA Technical Reports Server (NTRS)

The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

1992-01-01

169

Scalable Parallel Crash Simulations  

SciTech Connect

We are pleased to submit our efforts in parallelizing the PRONTO application suite for consideration in the SuParCup 99 competition. PRONTO is a finite element transient dynamics simulator which includes a smoothed particle hydrodynamics (SPH) capability; it is similar in scope to the well-known DYNA, PamCrash, and ABAQUS codes. Our efforts over the last few years have produced a fully parallel version of the entire PRONTO code which (1) runs fast and scalably on thousands of processors, (2) has performed the largest finite-element transient dynamics simulations we are aware of, and (3) includes several new parallel algorithmic ideas that have solved some difficult problems associated with contact detection and SPH scalability. We motivate this work, describe the novel algorithmic advances, give performance numbers for PRONTO running on Sandia's Intel Teraflop machine, and highlight two prototypical large-scale computations we have performed with the parallel code. We have successfully parallelized a large-scale production transient dynamics code with a novel algorithmic approach that utilizes multiple decompositions for different key segments of the computations. To be able to simulate a more than ten million element model in a few tenths of second per timestep is unprecedented for solid dynamics simulations, especially when full global contact searches are required. The key reason is our new algorithmic ideas for efficiently parallelizing the contact detection stage. To our knowledge scalability of this computation had never before been demonstrated on more than 64 processors. This has enabled parallel PRONTO to become the only solid dynamics code we are aware of that can run effectively on 1000s of processors. More importantly, our parallel performance compares very favorably to the original serial PRONTO code which is optimized for vector supercomputers. On the container crush problem, a Teraflop node is as fast as a single processor of the Cray Jedi. This means that on the Teraflop machine we can now run simulations with tens of millions of elements thousands of times faster than we could on the Jedi! This is enabling transient dynamics simulations of unprecedented scale and fidelity. Not only can previous applications be run with vastly improved resolution and speed, but qualitatively new and different analyses have been made possible.

Attaway, Stephen; Barragy, Ted; Brown, Kevin; Gardner, David; Gruda, Jeff; Heinstein, Martin; Hendrickson, Bruce; Metzinger, Kurt; Neilsen, Mike; Plimpton, Steve; Pott, John; Swegle, Jeff; Vaughan, Courtenay

1999-06-01

170

Declarative Parallel Programming for GPUs  

E-print Network

Slide excerpt: Declarative Parallel Programming for GPUs, Arun Chauhan and Andrew Lumsdaine, Indiana University, Bloomington, USA; presented at ParCo 2011, September 1, 2011. The slides survey mainstream parallelism and the focus of today's parallel programming models (one slide courtesy of Vivek Sarkar, Rice University).

Chauhan, Arun

171

Parallel Seismic Ray Tracing  

E-print Network

of the method while others are intended to be representative of basic geological features such as salt domes. We also present a theoretical model to understand the performance of the pWFC algorithm. We evaluate the performance of the proposed parallel...

Jain, Tarun K

2013-12-09

172

Parallel programming with PCN  

SciTech Connect

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

Foster, I.; Tuecke, S.

1993-01-01

173

High performance parallel architectures  

SciTech Connect

In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

Anderson, R.E. (Lawrence Livermore National Lab., CA (USA))

1989-09-01

174

Parallel Circuits Lab  

NSDL National Science Digital Library

This in-class lab exercise will give students a familiarity with basic series and parallel circuits as well as measuring voltage, current and resistance. The worksheet provided leads students through the experiment step by step. Spaces for student measurements and conclusions are provided on the sheet. This document may be downloaded in PDF file format.

2012-05-04

175

Optimizing parallel reduction operations  

SciTech Connect

A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
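
The reason reduction classes matter is associativity: an associative operation can be combined pairwise in a tree, turning an O(n) sequential chain into log-depth levels whose combines are mutually independent. A small Python sketch of the tree schedule (evaluated serially here; each level's combines could run concurrently):

    from functools import reduce
    import operator

    def tree_reduce(op, values):
        # combine adjacent pairs level by level; each level is parallelizable
        while len(values) > 1:
            pairs = list(zip(values[0::2], values[1::2]))
            values = [op(a, b) for a, b in pairs] + values[len(pairs) * 2:]
        return values[0]

    data = list(range(1, 101))
    assert tree_reduce(operator.add, data) == reduce(operator.add, data)
    print(tree_reduce(operator.add, data))    # 5050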

Denton, S.M.

1995-06-01

176

Parallelizing MATLAB  

E-print Network

Slide excerpt: Parallelizing MATLAB, Arun Chauhan, Indiana University (ParaM project; Supercomputing, OSC booth, 2004-11-10). The slides discuss the performance gap and a small MATLAB example (x = 1; y = x / 10; z = x * 20; r = y + z).

Chauhan, Arun

177

Parallel Consensual Neural Networks  

Microsoft Academic Search

Optimized combination, regularization, and pruning is proposed for the Parallel Consensual Neural Networks (PCNNs), a neural network architecture based on the consensus of a collection of stage neural networks trained on the same input data with different representations. Here, a regularization scheme is presented for the PCNN and, in training, a regularized cost function is minimized.

J. A. Benediktsson; J. Larsen; J. R. Sveinsson; L. K. Hansen

178

Massively parallel processor computer  

NASA Technical Reports Server (NTRS)

An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

Fung, L. W. (inventor)

1983-01-01

179

Parallel hierarchical global illumination  

SciTech Connect

Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

Snell, Q.O.

1997-10-08

180

Parallel Traveling Salesman Problem  

NSDL National Science Digital Library

The traveling salesman problem is a classic optimization problem in which one seeks to minimize the path taken by a salesman in traveling between N cities, where the salesman stops at each city one and only one time, never retracing his/her route. This implementation is designed to run on UNIX systems with X-Windows, and includes parallelization using MPI.

Joiner, David; Hassinger, Jonathan

181

Parallelism and evolutionary algorithms  

Microsoft Academic Search

This paper contains a modern vision of the parallelization techniques used for evolutionary algorithms (EAs). The work is motivated by two fundamental facts: first, the different families of EAs have naturally converged in the last decade while parallel EAs (PEAs) seem still to lack unified studies, and second, there is a large number of improvements in these algorithms and

Enrique Alba; Marco Tomassini

2002-01-01

182

Recalibrating software reliability models  

NASA Technical Reports Server (NTRS)

In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.

Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John

1989-01-01

183

Parallel Web prefetching on cluster server  

Microsoft Academic Search

Prefetching is an important technique for a single Web server to reduce the average Web access latency, and applying it on a cluster server will produce better performance. Two models for parallel Web prefetching on a cluster server, described in the form of I/O automata, are proposed in this paper according to the different service approaches of a Web cluster server: session persistence and

Cairong Yan; Junyi Shen; Qinke Peng

2005-01-01

184

A Laser Surface Textured Parallel Thrust Bearing  

Microsoft Academic Search

The potential use of a new technology of laser surface texturing (LST) in parallel thrust bearings is theoretically investigated. The surface texture has the form of micro-dimples with pre-selected diameter, depth, and area density. It can be applied to only a portion of the bearing area (partial LST) or the full bearing area (full LST). Optimum parameters of the dimples,

V. Brizmer; Y. Kligerman; I. Etsion

2003-01-01

185

Making programmable BMS safe and reliable  

SciTech Connect

Burner management systems ensure safe admission of fuel to the furnace and prevent explosions. This article describes how programmable control systems can be every bit as safe and reliable as hardwired or standard programmable logic controller-based designs. High-pressure boilers are required by regulatory agencies and insurance companies alike to be equipped with a burner management system (BMS) to ensure safe admission of fuel to the furnace and to prevent explosions. These systems work in parallel with, but independently of, the combustion and feedwater control systems that start up, monitor, and shut down burners and furnaces. Safety and reliability are the fundamental requirements of a BMS. Programmable control system for BMS applications are now available that incorporate high safety and reliability into traditional microprocessor-based designs. With one of these control systems, a qualified systems engineer applying relevant standards, such as the National Fire Protection Assn (NFPA) 85 series, can design and implement a superior BMS.

Cusimano, J.A.

1995-12-01

186

Science Grade 7, Long Form.  

ERIC Educational Resources Information Center

The Grade 7 Science course of study was prepared in two parallel forms: a short form designed for students who had achieved a high measure of success in previous science courses, and the long form for those who have not been able to maintain the pace. Both forms contain similar content. The Grade 7 guide is the first in a three-year sequence for…

New York City Board of Education, Brooklyn, NY. Bureau of Curriculum Development.

187

Reliability of Generation Supply  

Microsoft Academic Search

A new method for estimation of the reliability of generation supply in a single compact system or in an interconnected system is described. Measures of reliability calculated using the new method are 1) capacity deficiency rate, 2) expected duration of capacity deficient period, and 3)

Alton Patton; Damon Holditch

1968-01-01

188

Parallel Consensual Neural Networks  

NASA Technical Reports Server (NTRS)

A new neural network architecture is proposed and applied in classification of remote sensing/geographic data from multiple sources. The new architecture is called the parallel consensual neural network and its relation to hierarchical and ensemble neural networks is discussed. The parallel consensual neural network architecture is based on statistical consensus theory. The input data are transformed several times and the different transformed data are applied as if they were independent inputs and are classified using stage neural networks. Finally, the outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote sensing data and geographic data are given. The performance of the consensual neural network architecture is compared to that of a two-layer (one hidden layer) conjugate-gradient backpropagation neural network. The results with the proposed neural network architecture compare favorably in terms of classification accuracy to the backpropagation method.

Benediktsson, J. A.; Sveinsson, J. R.; Ersoy, O. K.; Swain, P. H.

1993-01-01

189

Parallel multilevel preconditioners  

SciTech Connect

In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.
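
To show where such a preconditioner slots in, here is a minimal preconditioned conjugate gradient in Python with the simplest (Jacobi) choice standing in for the multilevel operator; the matrix is a 1-D Laplacian and all sizes and tolerances are illustrative:

    import numpy as np

    n = 50
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian
    b = np.ones(n)
    M_inv = 1.0 / np.diag(A)       # Jacobi; a multilevel B would replace this

    x = np.zeros(n)
    r = b - A @ x
    z = M_inv * r                  # preconditioner application
    p = z.copy()
    for it in range(1000):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < 1e-10:
            break
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    print("iterations:", it + 1, " residual:", np.linalg.norm(b - A @ x))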

Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

1989-01-01

190

Parallelization: Infectious Disease  

NSDL National Science Digital Library

Epidemiology is the study of infectious disease. Infectious diseases are said to be "contagious" among people if they are transmittable from one person to another. Epidemiologists can use models to assist them in predicting the behavior of infectious diseases. This module will develop a simple agent-based infectious disease model, develop a parallel algorithm based on the model, provide a coded implementation for the algorithm, and explore the scaling of the coded implementation on high performance cluster resources.
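
A serial starting point for such a module might look like the following Python sketch of a minimal S-I-R agent model (all rates and sizes are invented); a parallel version would partition the agents across processes.

    import random

    random.seed(42)
    N, DAYS = 1000, 60
    P_INFECT, P_RECOVER = 0.00015, 0.1   # per-contact and per-day rates (toy)
    state = ["S"] * N
    state[0] = "I"                       # one initial infectious agent

    for day in range(DAYS):
        n_inf = state.count("I")
        # chance a susceptible agent is infected today, given n_inf contacts
        p_day = 1 - (1 - P_INFECT) ** n_inf
        for i in range(N):
            if state[i] == "S" and random.random() < p_day:
                state[i] = "I"
            elif state[i] == "I" and random.random() < P_RECOVER:
                state[i] = "R"
    print({s: state.count(s) for s in "SIR"})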

Weeden, Aaron

191

A Bayesian approach to reliability and confidence  

NASA Technical Reports Server (NTRS)

The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
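
The uniform-prior case mentioned at the end has a particularly simple closed form for pass/fail test data: a Beta(1, 1) prior on reliability updated by s successes and f failures gives a Beta(1 + s, 1 + f) posterior. A Python sketch with invented counts (SciPy, if available, supplies the probability statement):

    s, f = 48, 2                      # invented test successes and failures
    a, b = 1 + s, 1 + f               # posterior Beta parameters
    posterior_mean = a / (a + b)      # point estimate of reliability
    print(f"posterior mean reliability = {posterior_mean:.4f}")

    # probability statement such as P(R > 0.9), via the Beta CDF (assumes SciPy)
    try:
        from scipy.stats import beta
        print("P(R > 0.9) =", 1 - beta.cdf(0.9, a, b))
    except ImportError:
        pass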

Barnes, Ron

1989-01-01

192

Visual rays are parallel.  

PubMed

We show that human observers using monocular viewing treat the pencil of 'visual rays' that diverges from the vantage point as experientally parallel. This oddity becomes very noticeable in the case of wide-angle presentations, where the angle subtended by a pair of visual rays may be as large as the angular size of the display. In our presentations such angles subtended over 100 deg. There are various ways to demonstrate the effect; in this study we measure the attitudes of pictorial objects that appear to be situated in mutually parallel attitudes in pictorial space. Our finding is that such objects appear parallel if they are similarly oriented with respect to the local visual rays. This leads to 'errors' in the judgment of mutual orientations of up to 100 deg. Although this appears to be the first quantitative study of the effect, we trace it to qualitative reports by Helmholtz (late 19th century) and Kepler (early 17th century) as well as speculation by early authors (AD 500). The effect has apparently been noticed by visual artists from the late middle ages to the present day. PMID:21125944

Koenderink, Jan; van Doorn, Andrea; de Ridder, Huib; Oomes, Stijn

2010-01-01

193

Inspection criteria ensure quality control of parallel gap soldering  

NASA Technical Reports Server (NTRS)

Investigation of parallel gap soldering of electrical leads resulted in recommendation on material preparation, equipment, process control, and visual inspection criteria to ensure reliable solder joints. The recommendations will minimize problems in heat-dwell time, amount of solder, bridging conductors, and damage of circuitry.

Burka, J. A.

1968-01-01

194

Parallel Compression Checkpointing for Socket-Level Heterogeneous Systems  

Microsoft Academic Search

Checkpointing is an effective fault-tolerant technique to improve the reliability of large-scale parallel computing systems. However, checkpointing causes a large number of computation nodes to store a huge amount of data into the file system simultaneously. It not only requires huge storage space to store the system state, but also puts tremendous pressure on the

Yongpeng Liu; Hong Zhu; Yongyan Liu; Feng Wang; Baohua Fan

2011-01-01

195

A control method for high power UPSs in parallel operation  

Microsoft Academic Search

The parallel operation of static inverters is, in a large number of cases, the appropriate solution to achieve the high power required by some applications or to improve system reliability. In many industrial applications, raising the power is equivalent to adding a new UPS unit to an already existing one. In UPS systems there are situations where

A. P. Martins; A. S. Carvalho; A. S. Araujo

1995-01-01

196

Supporting tasks with adaptive groups in data parallel programming  

Microsoft Academic Search

A set of communication operations is defined which allows a form of task parallelism to be achieved in a data parallel architecture. The set of processors can be subdivided recursively into groups, and a communication operation inside a group never conflicts with communications taking place in other groups. The groups may be subdivided and recombined at any time, allowing the
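
MPI offers a direct analogue of such adaptive groups: splitting a communicator yields independent sub-communicators, so a collective inside one group never conflicts with collectives in the others, and groups can be re-split or recombined at any time. A minimal Python sketch with mpi4py (assumed installed; run under mpirun):

    from mpi4py import MPI

    world = MPI.COMM_WORLD
    rank = world.Get_rank()

    color = rank % 2                            # two groups: even and odd ranks
    group = world.Split(color=color, key=rank)  # independent sub-communicator
    total = group.allreduce(rank, op=MPI.SUM)   # collective stays inside group
    print(f"world rank {rank}: group {color} sum = {total}")
    group.Free()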

John O'Donnell

2005-01-01

197

Darwinian Evolution in Parallel Universes: A Parallel Genetic Algorithm for  

E-print Network

The problem of selecting variables associated with an outcome of interest commonly arises in various industrial engineering applications. The genetic algorithm (GA) can be applied to it with some modification. Our idea is to run a number of GAs in parallel without allowing each GA to fully converge

Zhu, Mu

198

Reliability and maintainability: a common ground for cooperation [aircraft  

Microsoft Academic Search

Except for flight critical systems and safety, the commercial aircraft manufacturers and the airlines have not placed much emphasis on reliability and maintainability. When they did include reliability, it was mainly in the form of redundant systems rather than more robust and reliable parts. With a worldwide economic slowdown and a declining defense budget, both the commercial airline industry and

T. J. Howard; J. P. Laverdure

1994-01-01

199

From Programming Models for Massively Parallel Computers (MMPM'95), IEEE Computer Society Press, 1996.  

E-print Network

of the Proteus programming language [7, 10]. Nested-parallel languages such as nesl and Proteus are characterized by nested data parallelism; a fundamental form of parallelism is the apply-to-each function, and both nesl and Proteus have a special form for this, the iterator

Prins, Jan

200

Matrix representation of continued fractions and its use in parallel computation algorithms  

Microsoft Academic Search

Matrix analogs of some continued fractions of general form are considered and appropriate computation algorithms are described. The study is connected with the general parallelism problem and the use of parallel computing systems.

V. I. Panchuk; V. V. Garbovskii

1989-01-01

201

A Global Approach to Absolute Parallelism Geometry  

NASA Astrophysics Data System (ADS)

In this paper we provide a global investigation of the geometry of parallelizable manifolds (or absolute parallelism geometry) frequently used in applications. We discuss different linear connections and curvature tensors from a global point of view. We give an existence and uniqueness theorem for a remarkable linear connection, called the canonical connection. Different curvature tensors are expressed in a compact form in terms of the torsion tensor of the canonical connection only. Using the Bianchi identities, some interesting identities are derived. An important special fourth-order tensor, which we refer to as Wanas tensor, is globally defined and investigated. Finally a "double-view" for the fundamental geometric objects of an absolute parallelism space is established: The expressions of these geometric objects are computed in the parallelization basis and are compared with the corresponding local expressions in the natural basis. Physical aspects of some geometric objects considered are pointed out.
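
For orientation, here is a minimal sketch of the standard objects involved, using textbook definitions rather than expressions taken from the paper: a parallelization {e_i} determines the canonical (Weitzenböck) connection by declaring the parallelization fields covariantly constant, and its torsion is the usual one,

    \nabla_X e_i = 0, \qquad T(X, Y) = \nabla_X Y - \nabla_Y X - [X, Y].

The canonical connection is flat, which is why the various curvature tensors considered can be written compactly in terms of the torsion T alone.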

Youssef, Nabil L.; Elsayed, Waleed A.

2013-08-01

202

The STAPL Parallel Container Framework  

E-print Network

This dissertation presents the STAPL Parallel Container Framework (PCF), which is designed to facilitate the development of generic parallel containers. We introduce a set of concepts and a methodology for assembling a pContainer from...

Tanase, Ilie Gabriel

2012-02-14

203

The Galley Parallel File System  

NASA Technical Reports Server (NTRS)

As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

Nieuwejaar, Nils; Kotz, David

1996-01-01

204

On parallel machine scheduling 1  

E-print Network

Parallel machines with setup times, where the setup has to be performed by a single server, are considered. The objective is to minimize the makespan; the problem is hard even for the case of two identical parallel machines. This paper presents a pseudopolynomial

Magdeburg, Universität

205

Parallel Pascal - An extended Pascal for parallel computers  

NASA Technical Reports Server (NTRS)

Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

Reeves, A. P.

1984-01-01

206

CSM parallel structural methods research  

NASA Technical Reports Server (NTRS)

Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

Storaasli, Olaf O.

1989-01-01

207

Roo: A parallel theorem prover  

SciTech Connect

We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

Lusk, E.L.; McCune, W.W.; Slaney, J.K.

1991-11-01

208

The Journey Toward Reliability  

NSDL National Science Digital Library

Kansas State University faculty members have partnered with industry to assist in the implementation of a reliability centered manufacturing (RCM) program. This paper highlights faculty members' experiences, benefits to industry of implementing a reliability centered manufacturing program, and faculty members' roles in the RCM program implementation. The paper includes lessons learned by faculty members, short-term extensions of the faculty-industry partnership, and a long-term vision for an RCM institute at the university level.

Brockway, Kathy V.; Spaulding, Greg

2010-03-15

209

Parallel Kinematic Machines (PKM)  

SciTech Connect

The purpose of this 3-year cooperative research project was to develop a parallel kinematic machining (PKM) capability for complex parts that normally require expensive multiple setups on conventional orthogonal machine tools. This non-conventional, non-orthogonal machining approach is based on a 6-axis positioning system commonly referred to as a hexapod. Sandia National Laboratories/New Mexico (SNL/NM) was the lead site responsible for a multitude of projects that defined the machining parameters and detailed the metrology of the hexapod. The role of the Kansas City Plant (KCP) in this project was limited to evaluating the application of this unique technology to production applications.

Henry, R.S.

2000-03-17

210

Parallel Processing System  

NASA Technical Reports Server (NTRS)

In order to process very high resolution image data from spacecraft sensors, Goddard Space Flight Center commissioned the development of a Massively Parallel Processor (MPP) based upon simultaneous processing of image picture elements (pixels) rather than serial processing. It resulted in a considerable increase in computational speed. MasPar Computer Corporation's MasPar MP-1 incorporates this technology, allowing users to attack a variety of computationally-intensive problems. The MP-1 is no longer manufactured but has been replaced by the MP-2, a more advanced model.

1991-01-01

211

Parallel Eclipse Project Checkout  

NASA Technical Reports Server (NTRS)

Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to check out the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a request to checkout for each plug-in in the feature has been inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any Eclipse-based repository with a similar structure. It also can apply build parameters and preferences automatically at the end of the checkout.
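
As a hedged illustration of the pattern described (the repository URL, checkout command, and file names below are hypothetical, not PEPC's actual code), a feature-driven parallel checkout can be sketched in Python as:

    # Sketch: check out the plug-ins listed in an Eclipse feature.xml in
    # parallel, overlapping the I/O-bound network transfers with a thread pool.
    import subprocess
    import xml.etree.ElementTree as ET
    from concurrent.futures import ThreadPoolExecutor

    REPO = "https://example.org/svn/"  # hypothetical repository root

    def plugin_ids(feature_xml):
        """Collect the plug-in ids listed in a feature description."""
        root = ET.parse(feature_xml).getroot()
        return [p.get("id") for p in root.iter("plugin")]

    def checkout(plugin_id):
        # Each checkout waits mostly on the network, so threads overlap well.
        subprocess.run(["svn", "checkout", REPO + plugin_id, plugin_id],
                       check=True)

    with ThreadPoolExecutor(max_workers=8) as pool:  # configurable pool size
        list(pool.map(checkout, plugin_ids("feature.xml")))

Suspending the IDE's automatic builds for the duration, as described above, would wrap this whole block rather than the individual checkouts.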

Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

2011-01-01

212

Software reliability studies  

NASA Technical Reports Server (NTRS)

The long-term goal of this research is to identify or create a model for use in analyzing the reliability of flight control software. The immediate tasks addressed are the creation of data useful to the study of software reliability and production of results pertinent to software reliability through the analysis of existing reliability models and data. The completed data creation portion of this research consists of a Generic Checkout System (GCS) design document created in cooperation with NASA and Research Triangle Institute (RTI) experimenters. This will lead to design and code reviews with the resulting product being one of the versions used in the Terminal Descent Experiment being conducted by the Systems Validations Methods Branch (SVMB) of NASA/Langley. An appended paper details an investigation of the Jelinski-Moranda and Geometric models for software reliability. The models were given data from a process that they have correctly simulated and asked to make predictions about the reliability of that process. It was found that either model will usually fail to make good predictions. These problems were attributed to randomness in the data and replication of data was recommended.

Wilson, Larry W.

1989-01-01

213

Parallel Vegetation Stripe Formation Through Hydrologic Interactions  

NASA Astrophysics Data System (ADS)

It has long been a challenge to theoretical ecologists to describe vegetation pattern formations such as the "tiger bush" stripes and "leopard bush" spots in Niger, and the regular maze patterns often observed in bogs in North America and Eurasia. To date, most simulation models focus on reproducing the spot and labyrinthine patterns, and on the vegetation bands which form perpendicular to surface and groundwater flow directions. Various hypotheses have been invoked to explain the formation of vegetation patterns: selective grazing by herbivores, fire, and anisotropic environmental conditions such as slope. Recently, short distance facilitation and long distance competition between vegetation (a.k.a. scale-dependent feedback) has been proposed as a generic mechanism for vegetation pattern formation. In this paper, we test the generality of this mechanism by employing an existing, spatially explicit, advection-reaction-diffusion type model to describe the formation of regularly spaced vegetation bands, including those that are parallel to flow direction. Such vegetation patterns are, for example, characteristic of the ridge and slough habitat in the Florida Everglades, and are thought to have formed parallel to the prevailing surface water flow direction. To our knowledge, this is the first time that a simple model encompassing a nutrient accumulation mechanism along with biomass development and flow is used to demonstrate the formation of parallel stripes. We also explore the interactive effects of plant transpiration, slope and anisotropic hydraulic conductivity on the resulting vegetation pattern. Our results highlight the ability of the short distance facilitation and long distance competition mechanism to explain the formation of the different vegetation patterns beyond semi-arid regions. Therefore, we propose that the parallel stripes, like the other periodic patterns observed in both isotropic and anisotropic environments, are self-organized and form as a result of scale-dependent feedback. Results from this study improve upon the current understanding of the formation of parallel stripes and provide a more general theoretical framework for future empirical and modeling efforts.
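
For readers unfamiliar with the model class, a generic advection-reaction-diffusion system of the kind referred to has the form (the authors' specific growth, uptake, and transport terms may differ):

    \partial_t b = D_b \nabla^2 b + G(b, n), \qquad
    \partial_t n = D_n \nabla^2 n - \mathbf{v} \cdot \nabla n - U(b, n),

where b is biomass, n a nutrient concentration, \mathbf{v} the surface-water flow velocity, G net biomass growth, and U nutrient uptake; short-distance facilitation and long-distance competition arise from the different spatial scales of these terms.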

Cheng, Yiwei; Stieglitz, Marc; Turk, Greg; Engel, Victor

2010-05-01

214

Weakly parallel tests in latent trait theory with some criticisms of classical test theory  

Microsoft Academic Search

A new concept of weakly parallel tests, in contrast to strongly parallel tests in latent trait theory, is proposed. Some criticisms of the fundamental concepts in classical test theory, such as the reliability of a test and the standard error of estimation, are given.

Fumiko Samejima

1977-01-01

215

Applied Parallel Metadata Indexing  

SciTech Connect

The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, the author developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
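
A hedged sketch of the described backend in Python with pymongo (collection names, fields, and values are invented for illustration; the tool's actual schema is not given in this record):

    # Sketch: import file metadata into a per-user MongoDB collection,
    # index every attribute, and search by metadata instead of file paths.
    from pymongo import ASCENDING, MongoClient

    client = MongoClient("localhost", 27017)
    table = client["archive"]["alice"]        # one table (collection) per user

    table.insert_one({"path": "/archive/alice/run42.h5", "size": 1 << 30,
                      "mtime": "2012-07-01", "tags": ["simulation", "run42"]})

    for attr in ("path", "size", "mtime", "tags"):   # index on each attribute
        table.create_index([(attr, ASCENDING)])

    # Query by user-defined metadata rather than remembered paths.
    for doc in table.find({"tags": "run42", "size": {"$gt": 1 << 20}}):
        print(doc["path"])

The FUSE layer would then translate file system calls (for example, listing a virtual directory named after a query) into find() calls like the one above.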

Jacobi, Michael R [Los Alamos National Laboratory

2012-08-01

216

Tolerant (parallel) Programming  

NASA Technical Reports Server (NTRS)

In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2^3 is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

DiNucci, David C.; Bailey, David H. (Technical Monitor)

1997-01-01

217

Automated grading of venous beading: an algorithm and parallel implementation  

NASA Astrophysics Data System (ADS)

A consistent, reliable method of quantifying diabetic retinopathy is required, both for patient assessment and eventually for use in screening tests for diabetes. To this end, an algorithm for determining the degree of venous beading in digitized ocular fundus images has been developed. A parallel implementation of the algorithm has also been investigated. The algorithm thresholds the fundus image to extract vein silhouettes. Morphological closing is used to fill any anomalous holes. Thinning is used to determine vein centerlines. Vein diameters are measured normal to the centerlines. A frequency analysis of vein diameter with distance along the centerline is then performed to permit estimation of venous beading. For the parallel implementation, the binary vein silhouette and the vein centerline are rotated so that vein diameter may be estimated in one direction only. The time complexity of the parallel algorithm is O(N). Algorithm performance is demonstrated with real fundus images. A simulation of the parallel algorithm is used with actual fundus images.
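
A hedged sketch of the described pipeline using scikit-image and SciPy (the paper's own implementation predates these libraries; the function choices and the simplified centerline ordering here are assumptions):

    # Sketch: vein silhouette -> closing -> centerline -> diameters -> spectrum.
    import numpy as np
    from scipy import ndimage
    from skimage.filters import threshold_otsu
    from skimage.morphology import binary_closing, skeletonize

    def beading_spectrum(fundus_gray):
        veins = fundus_gray < threshold_otsu(fundus_gray)  # dark veins -> silhouette
        veins = binary_closing(veins)                      # fill anomalous holes
        centerline = skeletonize(veins)                    # thin to centerlines
        # Diameter estimate: twice the distance from the centerline to the
        # background (pixel ordering along the vessel is simplified here).
        diameters = 2.0 * ndimage.distance_transform_edt(veins)[centerline]
        # Frequency analysis of diameter variation -> beading estimate.
        return np.abs(np.fft.rfft(diameters - diameters.mean()))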

Shen, Zhijiang; Gregson, Peter H.; Cheng, Heng-Da; Kozousek, V.

1991-11-01

218

Parallel Computing in SCALE  

SciTech Connect

The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement activities has been developed to provide an integrated framework for future methods development. Some of the major components of the SCALE parallel computing development plan are parallelization and multithreading of computationally intensive modules and redesign of the fundamental SCALE computational architecture.

DeHart, Mark D [ORNL] [ORNL; Williams, Mark L [ORNL] [ORNL; Bowman, Stephen M [ORNL] [ORNL

2010-01-01

219

Proposed reliability cost model  

NASA Technical Reports Server (NTRS)

The research investigations which were involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach is dependent upon the use of a series of subsystem-oriented CERs and, where possible, CTRs, in devising a suitable cost-effective policy.

Delionback, L. M.

1973-01-01

220

Orbiter Autoland reliability analysis  

NASA Technical Reports Server (NTRS)

The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended duration orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability for the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not been determined yet.

Welch, D. Phillip

1993-01-01

221

Reliability (and Fault Tree) Analysis Using Expert Opinions  

Microsoft Academic Search

In this article we introduce a formal procedure for the use of expert opinions in reliability (and fault tree) analysis. We consider the case of multicomponent parallel redundant systems for which there could be a single expert or a group of experts giving us opinions about each component. Inherent in our approach are a procedure for reflecting our judgment of

Dennis V. Lindley; Nozer D. Singpurwalla

1986-01-01

222

Photon detection with parallel asynchronous processing  

NASA Technical Reports Server (NTRS)

An approach to photon detection with a parallel asynchronous signal processor is described. The visible or IR photon-detection capability of the silicon p(+)-n-n(+) detectors and the parallel asynchronous processing are addressed separately. This approach would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the devices would form a 2D array processor with a 2D array of inputs located directly behind a focal-plane detector array. A 2D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems can integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The possibility of multispectral image processing is addressed.

Coon, D. D.; Perera, A. G. U.

1990-01-01

223

Toward Parallel Document Clustering  

SciTech Connect

A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a triangle-inequality-obeying distance metric; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program and initial performance results of an end-to-end document processing workflow are reported.
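
The pruning idea behind such a structure can be sketched as follows (illustrative names only; this is the generic triangle-inequality argument, not the authors' code). If a document x is already known to lie within distance best_d of pivot q, the triangle inequality gives d(x, p) >= d(p, q) - best_d for any other pivot p, so p can be skipped whenever d(p, q) > 2 * best_d:

    # Sketch: assign each document to its nearest pivot, skipping distance
    # computations that the triangle inequality rules out. Pivot-to-pivot
    # distances are assumed precomputed and cheap (few pivots, many docs).
    def assign_to_anchors(docs, pivots, dist):
        anchors = {p: [] for p in pivots}
        for x in docs:
            best_p, best_d = None, float("inf")
            for p in pivots:
                # d(x, p) >= dist(p, best_p) - best_d > best_d  =>  skip p.
                if best_p is not None and dist(p, best_p) > 2 * best_d:
                    continue
                d = dist(x, p)
                if d < best_d:
                    best_p, best_d = p, d
            anchors[best_p].append(x)
        return anchors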

Mogill, Jace A.; Haglin, David J.

2011-09-01

224

Run-time methods for parallelizing partially parallel loops  

Microsoft Academic Search

In this paper we give a new run-time technique for finding an optimal parallel execution schedule for a partially parallel loop, i.e., a loop whose parallelization requires synchronization to ensure that the iterations are executed in the correct order. Given the original loop, the compiler generates inspector code that performs run-time preprocessing of the loop's access pattern, and scheduler code
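
A hedged sketch of the inspector half of such a scheme (generic wavefront scheduling, not the paper's actual algorithm): the inspector scans the loop's access pattern and assigns each iteration to the earliest "wavefront" consistent with its cross-iteration dependences; iterations within a wavefront can then be executed in parallel by the scheduler.

    # Sketch: accesses[i] = set of memory locations iteration i touches.
    def build_schedule(accesses):
        last_wavefront = {}   # location -> wavefront of last iteration touching it
        wavefronts = {}       # wavefront number -> iterations in that wavefront
        for i, locs in enumerate(accesses):
            level = 1 + max((last_wavefront.get(l, -1) for l in locs), default=-1)
            wavefronts.setdefault(level, []).append(i)
            for l in locs:
                last_wavefront[l] = level
        return [wavefronts[k] for k in sorted(wavefronts)]

    # Iterations 0 and 1 touch disjoint data; iteration 2 reuses location 0.
    print(build_schedule([{0}, {1}, {0, 2}]))   # -> [[0, 1], [2]]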

Lawrence Rauchwerger; Nancy M. Amato; David A. Padua

1995-01-01

225

Parallel TreeSPH  

NASA Astrophysics Data System (ADS)

We describe PTreeSPH, a gravity treecode combined with an SPH hydrodynamics code designed for parallel supercomputers having distributed memory. Our computational algorithm is based on the popular TreeSPH code of Hernquist & Katz (1989) [ApJS, 70, 419]. PTreeSPH utilizes a domain decomposition procedure and a synchronous hypercube communication paradigm to build self-contained subvolumes of the simulation on each processor at every timestep. Computations then proceed in a manner analogous to a serial code. We use the Message Passing Interface (MPI) communications package, making our code easily portable to a variety of parallel systems. PTreeSPH uses individual smoothing lengths and timesteps, with a communication algorithm designed to minimize exchange of information while still providing all information required to accurately perform SPH computations. We have incorporated periodic boundary conditions with forces calculated using a quadrupole Ewald summation method, and comoving integration under a variety of cosmologies. Following algorithms presented in Katz et al. (1996) [ApJS, 105, 19], we have also included radiative cooling, heating from a parameterized ionizing background, and star formation. A cosmological simulation from z = 49 to z = 2 with 64^3 gas particles and 64^3 dark matter particles requires ~1800 node-hours on a Cray T3D, with a communications overhead of ~8%, load balanced to ≥95% level. When used on the new Cray T3E, this code will be capable of performing cosmological hydrodynamical simulations down to z = 0 with ~2 × 10^6 particles, or to z = 2 with ~10^7 particles, in a reasonable amount of time. Even larger simulations will be practical in situations where the matter is not highly clustered or when periodic boundaries are not required.

Davé, Romeel; Dubinski, John; Hernquist, Lars

1997-08-01

226

Quantifying reliability uncertainty: a proof of concept.

SciTech Connect

This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
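
As a hedged sketch of the Bayesian side of such an analysis (priors, structure, and data below are invented for illustration): sample each component's reliability from its posterior given go/no-go data, then propagate the samples through the series/parallel structure to get an uncertainty interval for system reliability.

    import numpy as np

    rng = np.random.default_rng(0)

    def posterior_samples(successes, trials, n=100_000):
        # Beta(1,1) prior -> Beta(1+s, 1+f) posterior for a go/no-go component.
        return rng.beta(1 + successes, 1 + trials - successes, size=n)

    a = posterior_samples(48, 50)   # component A, in series with (B parallel C)
    b = posterior_samples(29, 30)   # component B
    c = posterior_samples(27, 30)   # component C

    system = a * (1.0 - (1.0 - b) * (1.0 - c))
    print(np.percentile(system, [5, 50, 95]))   # uncertainty interval for R_sys

The sensitivity noted above shows up here directly: for a component with zero observed failures, the posterior (and hence the system interval) depends strongly on the assumed prior.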

Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna (Los Alamos National Laboratory, Los Alamos, NM); Lorio, John F.; Fatherley, Quinn (Los Alamos National Laboratory, Los Alamos, NM); Anderson-Cook, Christine (Los Alamos National Laboratory, Los Alamos, NM); Wilson, Alyson G. (Los Alamos National Laboratory, Los Alamos, NM); Zurn, Rena M.

2009-10-01

227

Parallel structural optimization with different parallel analysis interfaces  

NASA Technical Reports Server (NTRS)

The real benefit of structural optimization techniques is in the application of these techniques to large structures such as full vehicles or full aircraft. For these structures, however, the sequential computer's time and memory requirements prohibit the solutions. With the rapid development of parallel computers, parallel processing of large scale structural optimization problems is achievable. In this paper we discuss the parallel processing of structural optimization problems with parallel structural analysis. Two different types of interface between the optimization and analysis routines are developed and tested.

El-Sayed, Mohamed E. M.; Hsiung, Ching-Kuo

1990-01-01

228

Reliable Shapelet Image Analysis  

E-print Network

Aims: We discuss the applicability and reliability of the shapelet technique for scientific image analysis. Methods: We quantify the effects of non-orthogonality of sampled shapelet basis functions and misestimation of shapelet parameters. We perform the shapelet decomposition on artificial galaxy images with underlying shapelet models and galaxy images from the GOODS survey, comparing the publicly available IDL implementation with our new C++ implementation. Results: Non-orthogonality of the sampled basis functions and misestimation of the shapelet parameters can cause substantial misinterpretation of the physical properties of the decomposed objects. Additional constraints, image preprocessing and enhanced precision have to be incorporated in order to achieve reliable decomposition results.
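
For orientation, the standard one-dimensional shapelet basis functions are Gauss-Hermite functions with scale \beta (the two-dimensional basis is their tensor product):

    B_n(x; \beta) = [2^n n! \sqrt{\pi}\, \beta]^{-1/2} H_n(x/\beta)\, e^{-x^2/(2\beta^2)},

and an object is decomposed as f(x) \approx \sum_n f_n B_n(x; \beta). Misestimating \beta or the centroid, or sampling the B_n so coarsely that they lose orthogonality, distorts the recovered coefficients f_n, which is the failure mode quantified above.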

P. Melchior; M. Meneghetti; M. Bartelmann

2006-08-17

229

Cerebro : forming parallel internets and enabling ultra-local economies  

E-print Network

Internet-based mobile communications have been increasing rapidly [5], yet there is little or no progress in platforms that enable applications for discovery, context-awareness and sharing of data and services in a peer-wise ...

Ypodimatopoulos, Polychronis Panagiotis

2008-01-01

230

Designing reliability into accelerators  

SciTech Connect

For the next generation of high performance, high average luminosity colliders, the "factories," reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: Concept; design; motivation; management techniques; and fault diagnosis.

Hutton, A.

1992-08-01

232

Columbus safety and reliability  

NASA Astrophysics Data System (ADS)

Analyses carried out to ensure Columbus reliability, availability, and maintainability, and operational and design safety are summarized. Failure modes/effects/criticality is the main qualitative tool used. The main aspects studied are fault tolerance, hazard consequence control, risk minimization, human error effects, restorability, and safe-life design.

Longhurst, F.; Wessels, H.

1988-10-01

233

Consider insulation reliability  

Microsoft Academic Search

This paper reports that when calcium silicate and two brands of mineral wool were compared in a series of laboratory tests, calcium silicate was more reliable. And in-service experience with mineral wool at a Canadian heavy crude refinery provided examples of many of the lab's findings. Lab tests, conducted under controlled conditions following industry accepted practices, showed calcium silicate insulation

Gamboa

1993-01-01

234

PARALLEL DATABASE MACHINES Kjell Bratbergsengen  

E-print Network

Work in the Database Technology Group at the Department of Computer Systems and Telematics, NTH, has been supported

235

Parallel processing and expert systems  

NASA Technical Reports Server (NTRS)

Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

Yan, Jerry C.; Lau, Sonie

1991-01-01

236

Parallel computation with the force  

NASA Technical Reports Server (NTRS)

A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code and has been studied on two shared memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures and performance implications.

Jordan, H. F.

1985-01-01

237

Parallel Bifold: Large-Scale Parallel Pattern Mining with Constraints  

E-print Network

When computationally feasible, mining huge databases produces tremendously large

Zaiane, Osmar R.

238

Parallelism in gene transcription among sympatric lake whitefish ( Coregonus clupeaformis Mitchill) ecotypes  

Microsoft Academic Search

We tested the hypothesis that phenotypic parallelism between dwarf and normal whitefish ecotypes (Coregonus clupeaformis, Salmonidae) is accompanied by parallelism in gene transcription. The most striking phenotypic differences between these forms involve energetic metabolism and swimming activity. Therefore, we predicted that genes showing parallel expression should mainly belong to functional groups associated with these phenotypes. Transcriptome profiles were obtained

N. DEROME; P. DUCHESNE; L. BERNATCHEZ

2006-01-01

239

Reliability Degradation Due to Stockpile Aging  

SciTech Connect

The objective of this research is the investigation of alternative methods for characterizing the reliability of systems with time dependent failure modes associated with stockpile aging. Reference to 'reliability degradation' has, unfortunately, come to be associated with all types of aging analyses: both deterministic and stochastic. In this research, in keeping with the true theoretical definition, reliability is defined as a probabilistic description of system performance as a function of time. Traditional reliability methods used to characterize stockpile reliability depend on the collection of a large number of samples or observations. Clearly, after the experiments have been performed and the data has been collected, critical performance problems can be identified. A major goal of this research is to identify existing methods and/or develop new mathematical techniques and computer analysis tools to anticipate stockpile problems before they become critical issues. One of the most popular methods for characterizing the reliability of components, particularly electronic components, assumes that failures occur in a completely random fashion, i.e. uniformly across time. This method is based primarily on the use of constant failure rates for the various elements that constitute the weapon system, i.e. the systems do not degrade while in storage. Experience has shown that predictions based upon this approach should be regarded with great skepticism since the relationship between the life predicted and the observed life has been difficult to validate. In addition to this fundamental problem, the approach does not recognize that there are time dependent material properties and variations associated with the manufacturing process and the operational environment. To appreciate the uncertainties in predicting system reliability a number of alternative methods are explored in this report. All of the methods are very different from those currently used to assess stockpile reliability, but have been used extensively in various forms outside Sandia National Laboratories. It is hoped that this report will encourage the use of 'nontraditional' reliability and uncertainty techniques in gaining insight into stockpile reliability issues.

Robinson, David G.

1999-04-01

240

High Performance Parallel Architectures  

NASA Technical Reports Server (NTRS)

Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, that can collect observations at hundreds of bands, have been operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension Reduction is a spectral transformation, aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is the Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
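
A hedged sketch of the underlying dimension-reduction step (a generic covariance-based PCA; the authors' parallel decomposition and communication scheme are not reproduced here). The per-chunk partial sums below are what each cluster node would contribute before a reduction step:

    import numpy as np

    def pca_reduce(pixels, k):
        """pixels: (n_pixels, n_bands) spectra; returns (n_pixels, k)."""
        chunks = np.array_split(pixels, 4)        # stand-ins for per-node shares
        n = sum(c.shape[0] for c in chunks)
        s = sum(c.sum(axis=0) for c in chunks)    # reduction: sum of spectra
        ss = sum(c.T @ c for c in chunks)         # reduction: sum of outer products
        mean = s / n
        cov = ss / n - np.outer(mean, mean)       # band-by-band covariance
        w, v = np.linalg.eigh(cov)                # eigenvalues in ascending order
        components = v[:, ::-1][:, :k]            # top-k principal axes
        return (pixels - mean) @ components

The reductions of s and ss (size n_bands and n_bands^2) dominate communication, which is one reason interconnect bandwidth matters as band counts grow.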

El-Ghazawi, Tarek; Kaewpijit, Sinthop

1998-01-01

241

PSHFT - COMPUTERIZED LIFE AND RELIABILITY MODELLING FOR TURBOPROP TRANSMISSIONS  

NASA Technical Reports Server (NTRS)

The computer program PSHFT calculates the life of a variety of aircraft transmissions. A generalized life and reliability model is presented for turboprop and parallel shaft geared prop-fan aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on the statistical two parameter Weibull failure distribution method and classical fatigue theories. The computer program developed to calculate the transmission model is modular. In its present form, the program can analyze five different transmission arrangements. Moreover, the program can be easily modified to include additional transmission arrangements. PSHFT uses the properties of a common block two-dimensional array to separate the component and transmission property values from the analysis subroutines. The rows correspond to specific components with the first row containing the values for the entire transmission. Columns contain the values for specific properties. Since the subroutines (which determine the transmission life and dynamic capacity) interface solely with this property array, they are separated from any specific transmission configuration. The system analysis subroutines work in an identical manner for all transmission configurations considered. Thus, other configurations can be added to the program by simply adding component property determination subroutines. PSHFT consists of a main program, a series of configuration specific subroutines, generic component property analysis subroutines, systems analysis subroutines, and a common block. The main program selects the routines to be used in the analysis and sequences their operation. The series of configuration specific subroutines input the configuration data, perform the component force and life analyses (with the help of the generic component property analysis subroutines), fill the property array, call up the system analysis routines, and finally print out the analysis results for the system and components. PSHFT is written in FORTRAN 77 and compiled on a Microsoft FORTRAN compiler. The program will run on an IBM PC AT compatible with at least 104k bytes of memory. The program was developed in 1988.
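
For orientation, the two-parameter Weibull model referred to gives each bearing or gear a survival probability of the standard form (the life adjustment factors and load dependence used by the program are not reproduced here):

    R_i(t) = \exp[-(t/\eta_i)^{\beta_i}], \qquad R_{sys}(t) = \prod_i R_i(t),

with characteristic life \eta_i and Weibull slope \beta_i. The transmission model multiplies the component reliabilities along the main load paths, and a system life at a chosen reliability level follows by inverting R_{sys}.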

Savage, M.

1994-01-01

242

FIELD RELIABILITY OF ELECTRONIC SYSTEMS  

E-print Network

This report relates field reliability to safety and risk analysis approaches with a broader, system-oriented view of reliability. [Only a table-of-contents fragment survives in this record, listing section-by-section comparisons between predicted and observed reliability for several data sources, including military avionics.]

243

Measuring agreement in medical informatics reliability studies  

Microsoft Academic Search

Agreement measures are used frequently in reliability studies that involve categorical data. Simple measures like observed agreement and specific agreement can reveal a good deal about the sample. Chance-corrected agreement in the form of the kappa statistic is used frequently based on its correspondence to an intraclass correlation coefficient and the ease of calculating it, but its magnitude depends on
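
For reference, the kappa statistic mentioned has the standard form

    \kappa = (p_o - p_e) / (1 - p_e), \qquad p_e = \sum_k p_{k.} p_{.k},

where p_o is the observed proportion of agreement and p_e the agreement expected by chance from the raters' marginal proportions; because p_e is computed from the marginals, samples with identical observed agreement can yield different kappa values.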

George Hripcsak; Daniel F. Heitjan

2002-01-01

244

Validity and reliability of the Turkish version of the "Hospital Survey on Patient Safety Culture"

Microsoft Academic Search

BACKGROUND: The Hospital Survey on Patient Safety Culture (HSOPS) is used to assess safety culture in many countries. Accordingly, the questionnaire has been translated into Turkish for the study of patient safety culture in Turkish hospitals. The aim of this study is threefold: to determine the validity and reliability of the translated form of HSOPS, to evaluate physicians' and nurses'

Said Bodur; Emel Filiz

2010-01-01

245

General Aviation Aircraft Reliability Study  

NASA Technical Reports Server (NTRS)

This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.

Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

2001-01-01

246

Symbolic analysis for parallelizing compilers  

Microsoft Academic Search

The notion of dependence captures the most important properties of a program for efficient execution on parallel computers. The dependence structure of a program defines the necessary constraints on the order of execution of the program components and provides sufficient information for the exploitation of the available parallelism. Static discovery and management of the dependence structure of programs save a

Mohammad R. Haghighat; Constantine D. Polychronopoulos

1996-01-01

247

Parallelism in random access machines  

Microsoft Academic Search

A model of computation based on random access machines operating in parallel and sharing a common memory is presented. The computational power of this model is related to that of traditional models. In particular, deterministic parallel RAM's can accept in polynomial time exactly the sets accepted by polynomial tape bounded Turing machines; nondeterministic RAM's can accept in polynomial time exactly

Steven Fortune; James Wyllie

1978-01-01

248

Fast data parallel polygon rendering  

Microsoft Academic Search

This paper describes a data parallel method for polygon rendering on a massively parallel machine. This method, based on a simple shading model, is targeted for applications which require very fast rendering for extremely large sets of polygons. Such sets are found in many scientific visualization applications. The renderer can handle arbitrarily complex polygons which need not be meshed. Issues

Frank A. Ortega; Charles D. Hansen; James P. Ahrens

1993-01-01

249

Parallelizing Monte Carlo with PMC  

SciTech Connect

PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.

Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

1994-11-01

250

Loading databases using dataflow parallelism  

Microsoft Academic Search

This paper describes a parallel database load prototype for Digital's Rdb database product. The prototype takes a dataflow approach to database parallelism. It includes an explorer that discovers and records the cluster configuration in a database, a client CUI interface that gathers the load job description from the user and from the Rdb catalogs, and an optimizer that picks the

Tom Barclay; Robert Barnes; Jim Gray; Prakash Sundaresan

1994-01-01

251

Power electronics reliability.  

SciTech Connect

The project's goals are: (1) use experiments and modeling to investigate and characterize stress-related failure modes of post-silicon power electronic (PE) devices such as silicon carbide (SiC) and gallium nitride (GaN) switches; and (2) seek opportunities for condition monitoring (CM) and prognostics and health management (PHM) to further enhance the reliability of power electronics devices and equipment. CM - detect anomalies and diagnose problems that require maintenance. PHM - track damage growth, predict time to failure, and manage subsequent maintenance and operations in such a way to optimize overall system utility against cost. The benefits of CM/PHM are: (1) operate power conversion systems in ways that will preclude predicted failures; (2) reduce unscheduled downtime and thereby reduce costs; and (3) pioneer reliability in SiC and GaN.

Kaplar, Robert James; Brock, Reinhard C.; Marinella, Matthew; King, Michael Patrick; Stanley, James K.; Smith, Mark A.; Atcitty, Stanley

2010-10-01

252

Reliability of photovoltaic modules  

NASA Astrophysics Data System (ADS)

In order to assess the reliability of photovoltaic modules, four categories of known array failure and degradation mechanisms are discussed, and target reliability allocations have been developed within each category based on the available technology and the life-cycle-cost requirements of future large-scale terrestrial applications. Cell-level failure mechanisms associated with open-circuiting or short-circuiting of individual solar cells generally arise from cell cracking or the fatigue of cell-to-cell interconnects. Power degradation mechanisms considered include gradual power loss in cells, light-induced effects, and module optical degradation. Module-level failure mechanisms and life-limiting wear-out mechanisms are also explored.

Ross, R. G., Jr.

1986-01-01

253

Reliable broadcast protocols  

NASA Technical Reports Server (NTRS)

A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols have the same role in distributed operating systems of the future that message passing mechanisms have in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation or how to balance the latency-communication-storage costs is not an easy question.

Joseph, T. A.; Birman, Kenneth P.

1989-01-01

254

Parallel Adaptive Mesh Refinement  

SciTech Connect

As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the ability of both meshing methods to resolve simulation details by varying the local grid spacing.

Diachin, L; Hornung, R; Plassmann, P; WIssink, A

2005-03-04

255

Parallel language constructs for tensor product computations on loosely coupled architectures  

NASA Technical Reports Server (NTRS)

A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The authors focus on tensor product array computations, a simple but important class of numerical algorithms. They consider first the problem of programming one-dimensional kernel routines, such as parallel tridiagonal solvers, and then look at how such parallel kernels can be combined to form parallel tensor product algorithms.
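
A worked identity makes the combination concrete (standard Kronecker-product algebra, not notation from the paper): applying a tensor product operator reduces to applying one-dimensional kernels along each axis,

    (A \otimes B)\, \mathrm{vec}(X) = \mathrm{vec}(B X A^T),

so a two-dimensional tensor product algorithm can be assembled from a parallel kernel that applies B down the columns of X and another that applies A across its rows.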

Mehrotra, Piyush; Van Rosendale, John

1989-01-01

256

Compact, Reliable EEPROM Controller  

NASA Technical Reports Server (NTRS)

A compact, reliable controller for an electrically erasable, programmable read-only memory (EEPROM) has been developed specifically for a space-flight application. The design may be adaptable to other applications in which there are requirements for reliability in general and, in particular, for prevention of inadvertent writing of data in EEPROM cells. Inadvertent writes pose risks of loss of reliability in the original space-flight application and could pose such risks in other applications. Prior EEPROM controllers are large and complex and do not provide all reasonable protections (in many cases, few or no protections) against inadvertent writes. In contrast, the present controller provides several layers of protection against inadvertent writes. The controller also incorporates a write-time monitor, enabling determination of trends in the performance of an EEPROM through all phases of testing. The controller has been designed as an integral subsystem of a system that includes not only the controller and the controlled EEPROM aboard a spacecraft but also computers in a ground control station, relatively simple onboard support circuitry, and an onboard communication subsystem that utilizes the MIL-STD-1553B protocol. (MIL-STD-1553B is a military standard that encompasses a method of communication and electrical-interface requirements for digital electronic subsystems connected to a data bus. MIL-STD-1553B is commonly used in defense and space applications.) The intent was to maximize reliability while minimizing the size and complexity of onboard circuitry. In operation, control of the EEPROM is effected via the ground computers, the MIL-STD-1553B communication subsystem, and the onboard support circuitry, all of which, in combination, provide the multiple layers of protection against inadvertent writes. There is no controller software, unlike in many prior EEPROM controllers; software can be a major contributor to unreliability, particularly in fault situations such as the loss of power or brownouts. Protection is also provided by a power-monitoring circuit.

Katz, Richard; Kleyner, Igor

2010-01-01

257

Semiconductor device reliability  

Microsoft Academic Search

Recent advances in reliability (R) testing and analysis for semiconductor components are discussed in reviews and reports. Topics addressed include R models and failure mechanisms, failure analysis, optoelectronic R, compound-semiconductor R, high-speed-circuit R, R stress screening, and lifetime extrapolation and test standardization. Consideration is given to temperature and use-condition effects on LED degradation, screening and burn-in of electrooptic devices, R…

A. Christou; B. A. Unger

1990-01-01

258

Reliability and testing  

NASA Technical Reports Server (NTRS)

Reliability and its interdependence with testing are important topics for the development and manufacturing of successful products. This generally accepted fact is not only a technical statement, but must also be seen in the light of 'Human Factors.' While the background for this paper is the experience gained with electromechanical/electronic space products, including control and system considerations, it is believed that the content could also be of interest for other fields.

Auer, Werner

1996-01-01

259

Reliability Centred Maintenance  

Microsoft Academic Search

Reliability centred maintenance (RCM) is a method for maintenance planning that was developed within the aircraft industry and later adapted to several other industries and military branches. A large number of standards and guidelines have been issued where the RCM methodology is tailored to different application areas, e.g., IEC 60300-3-11, MIL-STD-217, NAVAIR 00-25-403 (NAVAIR 2005), SAE JA 1012 (SAE 2002), …

Marvin Rausand; Jørn Vatn

260

The Galley Parallel File System  

NASA Technical Reports Server (NTRS)

Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

Nieuwejaar, Nils; Kotz, David

1996-01-01

261

Parallel contingency statistics with Titan.  

SciTech Connect

This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting optimal parallel speed-up as the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.

Thompson, David C.; Pebay, Philippe Pierre

2009-09-01

262

Software reliability studies  

NASA Technical Reports Server (NTRS)

There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

Hoppa, Mary Ann; Wilson, Larry W.

1994-01-01

263

Probabilistic structural mechanics research for parallel processing computers  

NASA Technical Reports Server (NTRS)

Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods was hampered by their computationally intense nature. Solution of PSM problems requires repeated analyses of structures that are often large, and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.

Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

1991-01-01

264

Computation and parallel implementation for early vision  

NASA Technical Reports Server (NTRS)

The problem of early vision is to transform one or more retinal illuminance images (pixel arrays) to image representations built out of primitive visual features such as edges, regions, disparities, and clusters. These transformed representations form the input to later vision stages that perform higher level vision tasks including matching and recognition. Researchers developed algorithms for: (1) edge finding in the scale space formulation; (2) correlation methods for computing matches between pairs of images; and (3) clustering of data by neural networks. These algorithms are formulated for parallel implementation on SIMD machines, such as the Massively Parallel Processor, a 128 x 128 array processor with 1024 bits of local memory per processor. For some cases, researchers can show speedups of three orders of magnitude over serial implementations.
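
As a hint of why such operations suit SIMD arrays, here is a small numpy sketch of scale-space edge finding in which every pixel is treated identically; the smoothing scheme and threshold are illustrative assumptions, not the report's algorithms.

```python
import numpy as np

def edges_at_scale(image, smoothing_steps, threshold):
    """Smooth repeatedly (coarser scale), differentiate, and threshold.
    Every pixel undergoes the same operations, so the loop body maps
    directly onto a SIMD array processor."""
    img = image.astype(float)
    for _ in range(smoothing_steps):     # repeated 4-neighbour averaging
        img = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                      np.roll(img, 1, 1) + np.roll(img, -1, 1))
    gx, gy = np.gradient(img)
    return np.hypot(gx, gy) > threshold  # boolean edge map
```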

Gualtieri, J. Anthony

1990-01-01

265

Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux  

SciTech Connect

In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components with one associated with the parallel thermal energy and the other the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from low to high parallel temperature region.
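
For orientation, the two components the abstract distinguishes are conventionally defined as moments of the distribution function over the peculiar parallel velocity; in a standard kinetic notation (assumed here, not quoted from the paper):

```latex
q_{\parallel\parallel} = \frac{m}{2}\int w_\parallel^{3}\, f \,\mathrm{d}^3v,
\qquad
q_{\parallel\perp} = \frac{m}{2}\int w_\parallel\, w_\perp^{2}\, f \,\mathrm{d}^3v,
\qquad
w_\parallel = v_\parallel - u_\parallel,
```

where q_{∥∥} transports parallel thermal energy and q_{∥⊥} transports perpendicular thermal energy along the field line; the closures described above relate these two moments to the lower order fluid moments and field quantities.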

Guo Zehua; Tang Xianzhu [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)

2012-06-15

266

Voltage Sensing using MEMS Parallel-Plate Actuation  

Microsoft Academic Search

Microelectromechanical systems (MEMS) have been proposed as DC electrical metrology references. The design reported here is the first to enhance the qualities of a MEMS DC reference with potential tuning and sensing via an isolated and monolithically integrated MEMS technology and, thereby, convert a stable parallel-plate voltage reference to a simple, sensitive, low-burden voltage sensor. This on-chip system reliably measures…
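
The textbook electrostatics behind parallel-plate MEMS references (standard results, not this paper's specific design equations): plate capacitance, the attractive force at bias V, and the pull-in voltage at which a spring-suspended plate snaps through after travelling one third of the initial gap d:

```latex
C = \frac{\varepsilon_0 A}{g}, \qquad
F_{\mathrm{es}} = \frac{\varepsilon_0 A V^{2}}{2 g^{2}}, \qquad
V_{\mathrm{PI}} = \sqrt{\frac{8\,k\,d^{3}}{27\,\varepsilon_0 A}},
```

with A the plate area, g the instantaneous gap, and k the suspension stiffness; a stable reference or sensor must operate below V_PI.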

Russell Y. Webb; Noel C. MacDonald

267

Experimenting with parasail: parallel specification and implementation language  

Microsoft Academic Search

This tutorial provides an opportunity to experiment with a new language designed to support the safe, secure, and productive development of parallel programs. ParaSail is a new language with pervasive parallelism coupled with extensive compile-time checking of annotations in the form of assertions, preconditions, postconditions, etc. ParaSail does all checking at compile time, and eliminates race conditions, null dereferences, uninitialized…

S. Tucker Taft

2011-01-01

268

Using Coloured Petri Nets for design of parallel raytracing environment  

E-print Network

This paper deals with the parallel raytracing part of the virtual-reality system PROLAND, developed at the home institution of the authors. It describes an actual implementation of the raytracing part and introduces a Coloured Petri Nets model of the implementation. The model is used for an evaluation of the implementation by means of simulation-based performance analysis and also forms the basis for future improvements of its parallelization strategy.

Korecko, Stefan

2010-01-01

269

A NEW HYBRID RELIABILITY ANALYSIS METHOD: THE DESIGN POINT - RESPONSE SURFACE - SIMULATION METHOD  

Microsoft Academic Search

Classical reliability methods such as First- and Second-Order Reliability Methods (FORM and SORM) have been important breakthroughs toward feasible and reliable integration of probabilistic information and uncertainty analysis into advanced design methods and modern design codes. These methods have been successfully used in solving challenging reliability problems. Nevertheless, caution should be used in the applications of these methods since their…
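
For reference, the quantities behind FORM and SORM in standard notation (a textbook summary, not drawn from this abstract): the reliability index β is the distance from the origin to the limit-state surface G(u) = 0 in standard normal space, FORM linearizes the surface at that design point, and SORM corrects for its curvatures (Breitung's asymptotic formula):

```latex
\beta = \min_{G(\mathbf{u}) = 0} \lVert \mathbf{u} \rVert, \qquad
P_f^{\mathrm{FORM}} \approx \Phi(-\beta), \qquad
P_f^{\mathrm{SORM}} \approx \Phi(-\beta) \prod_{i=1}^{n-1} \bigl(1 + \beta\,\kappa_i\bigr)^{-1/2},
```

where κ_i are the principal curvatures at the design point; hybrid methods add response surface and simulation stages precisely because these approximations can fail for strongly nonlinear limit states.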

M. Barbato; J. P. Conte

2008-01-01

270

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 2. Technical Report #1201  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the second-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Lai, Cheng-Fei; Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Tindal, Gerald

2012-01-01

271

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 7. Technical Report #1206  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the seventh-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Park, Bitnara Jasmine; Tindal, Gerald

2012-01-01

272

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 5. Technical Report #1204  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the fifth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Park, Bitnara Jasmine; Irvin, P. Shawn; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

2012-01-01

273

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 6. Technical Report #1205  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the sixth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

274

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 3. Technical Report #1202  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the third-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Lai, Cheng-Fei; Irvin, P. Shawn; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

2012-01-01

275

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 4. Technical Report #1203  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the fourth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Park, Bitnara Jasmine; Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

276

Parallel keyed hash function construction based on chaotic maps  

NASA Astrophysics Data System (ADS)

Recently, a variety of chaos-based hash functions have been proposed. Nevertheless, none of them works efficiently in a parallel computing environment. In this Letter, an algorithm for parallel keyed hash function construction is proposed, whose structure can ensure the uniform sensitivity of the hash value to the message. By means of the mechanism of both changeable-parameter and self-synchronization, the keystream establishes a close relation with the algorithm key, the content, and the order of each message block. The entire message is modulated into the chaotic iteration orbit, and the coarse-graining trajectory is extracted as the hash value. Theoretical analysis and computer simulation indicate that the proposed algorithm can satisfy the performance requirements of a hash function. It is simple, efficient, practicable, and reliable. These properties make it a good choice for hashing on a parallel computing platform.
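
A deliberately tiny toy in the same spirit, emphatically not the authors' algorithm and not cryptographically secure: each block is digested independently through a logistic-map orbit whose seed depends on the key and the block index, so block digests can be computed in parallel while the final value stays sensitive to both content and order.

```python
def block_digest(block: bytes, key: float, index: int) -> int:
    """Digest one block with a logistic-map orbit seeded by key and position."""
    x = (key + (index + 1) * 0.6180339887) % 1.0 or 0.5   # position-dependent seed
    for byte in block:
        x = 3.99 * x * (1.0 - x)                   # chaotic iteration
        x = (x + (byte + 1) / 257.0) % 1.0 or 0.5  # modulate message into orbit
    for _ in range(16):                            # extra mixing iterations
        x = 3.99 * x * (1.0 - x)
    return int(x * 2**32)                          # coarse-grained trajectory

def parallel_keyed_hash(msg: bytes, key: float, block_size: int = 16) -> int:
    blocks = [msg[i:i + block_size] for i in range(0, len(msg), block_size)]
    digests = [block_digest(b, key, i) for i, b in enumerate(blocks)]
    h = 0                                          # the per-block work above is
    for d in digests:                              # independent, hence parallel;
        h = ((h ^ d) * 0x9E3779B1) % 2**32         # this combine step is cheap
    return h
```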

Xiao, Di; Liao, Xiaofeng; Deng, Shaojiang

2008-06-01

277

Is Monte Carlo embarrassingly parallel?  

SciTech Connect

Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
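
The cycle-end rendezvous is easy to see in a schematic mpi4py loop (a toy power-iteration analogue with made-up statistics, not reactor physics): history tracking is embarrassingly parallel, but every rank must reach the collective before the next cycle's source normalization and multiplication factor exist.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rng = np.random.default_rng(seed=comm.rank)

local_batch = 10_000            # histories tracked by this rank per cycle
k_eff = 1.0
for cycle in range(50):
    # Embarrassingly parallel: each rank simulates its own histories.
    local_new = int(rng.poisson(k_eff, local_batch).sum())
    # Rendezvous: a collective gathers the global fission production so
    # k_eff (and population control) can be formed for the next cycle.
    total_new = comm.allreduce(local_new, op=MPI.SUM)
    k_eff = total_new / (local_batch * comm.size)
```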

Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

2012-07-01

278

THE PARALLEL JAVA 2 LIBRARY Parallel Programming in 100% Java  

E-print Network

Slide excerpt covering reducers and the Parallel Java 2 cluster middleware. Future work: GPU support, with GPU kernels written in C/CUDA and CPU main programs in Java, e.g. public void main (String[] args) throws Exception { // Validate command line …

Kaminsky, Alan

279

EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS  

SciTech Connect

We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

F. PETRINI; W. FENG

1999-09-01

280

Template based parallel checkpointing in a massively parallel computer system  

SciTech Connect

A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
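
The rsync-style comparison is straightforward to sketch serially (block size and hash choice are assumptions for illustration; the patented system additionally broadcasts the template and runs this per node): checksum each block of the node state against the template and retain only the blocks that differ.

```python
import hashlib

BLOCK = 64 * 1024  # checkpoint block size (illustrative)

def blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def delta_checkpoint(node_state: bytes, template: bytes) -> dict:
    """Keep only blocks whose checksum differs from the template
    checkpoint, so far less data is transmitted and stored."""
    template_sums = [hashlib.sha1(b).digest() for b in blocks(template)]
    delta = {}
    for i, b in enumerate(blocks(node_state)):
        if i >= len(template_sums) or hashlib.sha1(b).digest() != template_sums[i]:
            delta[i] = b               # changed block: must be saved
    return delta                       # block index -> raw data
```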

Archer, Charles Jens (Rochester, MN); Inglett, Todd Alan (Rochester, MN)

2009-01-13

281

On Component Reliability and System Reliability for Space Missions  

NASA Technical Reports Server (NTRS)

This paper addresses the basics, the limitations, and the relationship between component reliability and system reliability through a study of flight computing architectures and related avionics components for NASA future missions. Component reliability analysis and system reliability analysis need to be evaluated at the same time, and the limitations of each analysis and the relationship between the two analyses need to be understood.

Chen, Yuan; Gillespie, Amanda M.; Monaghan, Mark W.; Sampson, Michael J.; Hodson, Robert F.

2012-01-01

282

Parallel inverse iteration with reorthogonalization  

SciTech Connect

A parallel method for finding orthogonal eigenvectors of real symmetric tridiagonal matrices is described. The method uses inverse iteration with repeated Modified Gram-Schmidt (MGS) reorthogonalization of the unconverged iterates for clustered eigenvalues. This approach is more parallelizable than reorthogonalizing against fully converged eigenvectors, as is done by LAPACK's current DSTEIN routine. The new method is found to provide accuracy and speed comparable to DSTEIN's and to have good parallel scalability even for matrices with large clusters of eigenvalues. We present results for residual and orthogonality tests, plus timings on IBM RS/6000 (sequential) and Intel Touchstone DELTA (parallel) computers.
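
A compact numpy sketch of the underlying kernel, using dense storage for brevity and reorthogonalizing against already-accepted vectors in DSTEIN's serial style; the paper's contribution is to reorthogonalize the unconverged iterates of a cluster against each other instead, which is what makes the method parallelizable.

```python
import numpy as np

def inverse_iteration(T, eigvals, tol=1e-10, max_it=50, seed=0):
    """Eigenvectors of a symmetric tridiagonal T (dense here for brevity)
    by shifted inverse iteration with Modified Gram-Schmidt."""
    n = T.shape[0]
    rng = np.random.default_rng(seed)
    V = np.zeros((n, len(eigvals)))
    for k, lam in enumerate(eigvals):
        A = T - (lam + 1e-10) * np.eye(n)  # tiny shift keeps A nonsingular
        v = rng.standard_normal(n)
        for _ in range(max_it):
            v = np.linalg.solve(A, v)      # inverse-iteration step
            for j in range(k):             # MGS reorthogonalization
                v -= (V[:, j] @ v) * V[:, j]
            v /= np.linalg.norm(v)
            if np.linalg.norm(T @ v - lam * v) <= tol * (abs(lam) + 1.0):
                break
        V[:, k] = v
    return V
```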

Fann, G.I.; Littlefield, R.J.

1993-03-01

284

Unpowered to powered failure rate ratio - A key reliability parameter  

NASA Technical Reports Server (NTRS)

It is shown that the initial assumption of the ratio of unpowered to powered failure rates can have a strong influence on the design of a modular system intended for space missions. The analysis is performed for parallel systems and for triple modular redundant (TMR) systems. Parallel systems are shown to be much more sensitive to the unpowered to powered failure rate ratio than the TMR/spares systems. However, regardless of which standby redundancy technique is considered, the dependence of the system reliability on this ratio increases as the number of standby spares increases.
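
To see how the ratio enters, consider a minimal warm-standby model, assumed here for illustration rather than taken from the paper: one active unit with failure rate λ, one spare failing at rate rλ while unpowered, exponential failures, and perfect switching. Then

```latex
R(t) = e^{-\lambda t}\left(1 + \frac{1 - e^{-r\lambda t}}{r}\right),
```

which recovers the fully powered parallel pair 1 - (1 - e^{-λt})^2 at r = 1 and the cold-standby result e^{-λt}(1 + λt) as r → 0, so the assumed ratio r directly shifts the predicted reliability, and the effect compounds as spares are added.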

Taylor, D. S.

1974-01-01

285

Appendix E: Parallel Pascal development system  

NASA Technical Reports Server (NTRS)

The Parallel Pascal Development System enables Parallel Pascal programs to be developed and tested on a conventional computer. It consists of several system programs, including a Parallel Pascal to standard Pascal translator, and a library of Parallel Pascal subprograms. The library includes subprograms for using Parallel Pascal on a parallel system with a fixed degree of parallelism, such as the Massively Parallel Processor, to conveniently manipulate arrays which have larger dimensions than the hardware. Programs can be conveniently tested with small sized arrays on the conventional computer before attempting to run on a parallel system.

1985-01-01

286

System Support for Implicitly Parallel Programming  

Microsoft Academic Search

Implicit parallelization involves developing parallel algorithms and applications in environments that provide sequential semantics, e.g., the C programming language. System tools convert the parallel algorithms into a set of threads partitioned appropriately for a particular parallel machine organization. The resulting parallel programs are easier and faster to develop, debug and maintain, because the programmer can request…

Matthew I. Frank

287

How reliable are DSM resources  

SciTech Connect

Planners are increasingly concerned with the reliability of DSM resources, but "reliability" means different things to different people. What's essential is that DSM resources be evaluated in a manner consistent with that of generating resources.

Jackson, P.W.

1993-05-01

288

Parallel Inversion of Sparse Matrices  

Microsoft Academic Search

This paper presents a parallel algorithm for obtaining the inverse of a large, nonsingular symmetric matrix A of dimension n×n. The inversion method proposed is based on the triangular factors of A. The task of obtaining the…

Ramon Betancourt; Fernando L. Alvarado

1986-01-01

289

Software Logging under Speculative Parallelization  

E-print Network

María Llabería, Víctor Viñals, Lawrence Rauchwerger, and Josep Torrellas (Universidad de Zaragoza; http://iacoma.cs.uiuc.edu; Texas A&M University, rwerger@cs.tamu.edu). Summary: Speculative parallelization aggressively runs…

Garzarán, María Jesús

290

Parallelization of Stellar Atmosphere Codes  

NASA Astrophysics Data System (ADS)

Parallel computing has turned out to be an enabling technology for solving complex physical systems. However, the transition from shared memory, vector computers to massively parallel, distributed memory systems and, recently, to hybrid systems poses new challenges to the scientist. We present a cook-book (with a very strong, personal bias) based on our experience with parallelization of our code. Some of the general tools and communication libraries are discussed. Our approach includes a mixture of algorithm based, grid based and physical module based parallelization. The advantages, scalability and limitations of each are discussed using example calculations for supernovae. We hope to show that effective parallelization becomes easier with increasing complexity of the physical problem, making stellar atmosphere modeling beyond the classical assumptions very suitable.

Höflich, P.

2003-01-01

291

Debugging Serial and Parallel Codes  

NSDL National Science Digital Library

Introduction to debugger software. Serial debugging of array indexing, arguments mismatch, infinite loops, pointer misuse, and memory allocation. Parallel debugging of process count, shared memory, MPI I/O, collective communications, and OpenMP scope.

Ncsa

292

Designing and Building Parallel Programs  

NSDL National Science Digital Library

Designing and Building Parallel Programs [Online] is an innovative traditional print and online resource publishing project. It incorporates the content of a textbook published by Addison-Wesley into an evolving online resource.

293

Parallel Computing on Semidefinite Programs  

E-print Network

Apr 22, 2003 ... Mathematics and Computer Science Division ... Three criteria that influence the parallel scalability of the solver are ... these methods lack polynomial convergence in theory and sometimes exhibit slow convergence in practice.

2003-04-22

294

Spiral parallel magnetic resonance imaging.  

PubMed

Spiral k-space scanning is a rapid magnetic resonance imaging (MRI) technique that can provide an order of magnitude reduction in scan time compared to conventional spin warp techniques. Parallel imaging is another method for reducing scan time that exploits spatially varying radiofrequency (RF) coil sensitivities to reduce the amount of data required to reconstruct an image. Combining spiral scanning with parallel imaging provides a scan time reduction factor that is the product of the reduction factors for each of the techniques and thus can permit very rapid imaging. Image reconstruction for spiral parallel MRI is more involved than for spin warp parallel MRI and is an area of active research. Two techniques for performing this image reconstruction are PILS, a simple image-domain method that relies on localized coil sensitivities, and BOSCO, a method that is based on successive convolution operations in k-space. PMID:17946823

Meyer, Craig H; Hu, Peng

2006-01-01

295

Demonstrating Forces between Parallel Wires.  

ERIC Educational Resources Information Center

Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)

Baker, Blane

2000-01-01

296

Turbomachinery CFD on parallel computers  

NASA Technical Reports Server (NTRS)

The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

1992-01-01

297

Program for computer aided reliability estimation  

NASA Technical Reports Server (NTRS)

A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.
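
The general equations representative of basic redundancy schemes are of the familiar k-out-of-n family; as a textbook illustration (not the program's actual equation repository),

```latex
R_{k\text{-of-}n}(t) = \sum_{i=k}^{n} \binom{n}{i}\, R(t)^{i}\,\bigl(1 - R(t)\bigr)^{n-i},
\qquad
R_{\mathrm{TMR}} = 3R^{2} - 2R^{3},
```

where R(t) is the single-unit reliability; designating t or a failure-rate parameter as the variable yields exactly the kind of tabular or graphic output the abstract describes.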

Mathur, F. P. (inventor)

1972-01-01

298

Reliability analysis of hybrid ceramic/steel gun barrels  

E-print Network

Uncorrected proof front matter (M. Grujicic et al.; received in final form 25 February 2002). The abstract concerns failure of the ceramic gun-barrel lining, and the failure probability for the lining is also discussed. Keywords: failure; gun-barrel lining; reliability; thermo…

Grujicic, Mica

299

Snow reliability in ski resorts considering artificial snowmaking  

Microsoft Academic Search

Snow reliability is the key factor to make skiing on slopes possible and to ensure added value in winter tourism. In this context snow reliability is defined by the duration of a snowpack on the ski runs of at least 50 mm snow water equivalent (SWE) within the main season (Dec-Mar). Furthermore the snowpack should form every winter and be…

M. Hofstätter; H. Formayer; P. Haas

2009-01-01

300

Dynamic reliability behavior for sliding wear of carburized steel  

Microsoft Academic Search

In this paper two types of dynamic reliability model are proposed and compared with the Weibull distribution of sliding wear. These models are concerned with the relationship between the hazard function and the dynamic reliability. One takes a differential equation form, called the DE model; the other takes an algebraic dependence, the AE model. The physical problem used in the simulation…

K. S. Wang; C. S. Chen; J. J. Huang

1997-01-01

301

The Reliability of Environmental Measures of the College Alcohol Environment.  

ERIC Educational Resources Information Center

Assesses the inter-rater reliability of two environmental scanning tools designed to identify alcohol-related advertisements targeting college students. Inter-rater reliability for these forms varied across different rating categories and ranged from poor to excellent. Suggestions for future research are addressed. (Contains 26 references and 6…

Clapp, John D.; Whitney, Mike; Shillington, Audrey M.

2002-01-01

302

Scaling properties of geometric parallelization  

NASA Astrophysics Data System (ADS)

We present a universal scaling law for all geometrically parallelized computer simulation algorithms. For algorithms with local interaction laws we calculate the scaling exponents for zero and infinite lattice size. The scaling is tested on local (cellular automata, Metropolis Ising) as well as cluster (Swendsen-Wang) algorithms. The practical aspects of the scaling properties lead to a simple recipe for finding the optimum number of processors to be used for the parallel simulation of a particular system.

Jakobs, A.; Gerling, R. W.

1992-01-01

303

Shifting the Parallel Programming Paradigm  

Microsoft Academic Search

Multicore computer architectures are now the standard for desktop computers, high-end servers and personal laptops. Due to the multicore shift in computer architecture, software engineers must write multithreaded programs to harness the resources of these parallel machines. Unfortunately, today's parallel programming techniques are difficult to reason about, highly error-prone and challenging to maintain for large-scale software. This…

Justin E. Gottschlich; Dwight Y. Winkler; Mark W. Holmes; Jeremy G. Siek; Raytheon Company

304

MCNPX RUNNING PARALLEL UNDER PVM  

Microsoft Academic Search

SUMMARY Gaps in the MCNPX code release 2.1.5 and release 2.2.3 were closed to enable running the code in multitasking mode on distributed memory parallel machines via the Parallel Virtual Machine (PVM) software. Performance tests were performed to check the runtime behavior of the code. These tests show that the code scales well on small sized cluster machines, and provides…

Franz X. Gallmeier; Phillip D. Ferguson

2008-01-01

305

Fuzzy Reliability Measures of Fuzzy  

Microsoft Academic Search

Software systems are becoming increasingly complex, and testing them to a reliable level requires a great deal of effort. Stochastic testing promises a solution to this increased testing burden and gives the opportunity to analyze reliability, mean time to failure, etc. Today there are hundreds of reliability models, with more models developed every year. Still, there exist no models…

B. Praba; R. Sujatha; S. Srikrishna

2009-01-01

306

Reliability as an interdomain service  

Microsoft Academic Search

Reliability is a critical requirement of the Internet. The availability and resilience of the Internet under failures can have significant global effects. However, in the current Internet routing architecture, achieving the high level of reliability demanded by many mission-critical activities can be costly. In this paper, we first propose a novel solution framework called reliability as an…

Hao Wang; Yang Richard Yang; Paul H. Liu; Jia Wang; Alexandre Gerber; Albert G. Greenberg

2007-01-01

308

Enhancing the Performance of a Multiplayer Game by Using a Parallelizing Compiler  

E-print Network

Video games have been a very popular form of digital entertainment in recent years and are presented on many different platforms. Keywords: parallel computing; parallelizing compilers; OSCAR.

Kasahara, Hironori

309

Issues with Multithreaded Parallelism on Multicore Architectures  

E-print Network

Course slides: Issues with Multithreaded Parallelism on Multicore Architectures, Marc Moreno Maza, University of Western Ontario, London, Ontario (Canada), course CS3101.

Moreno Maza, Marc

310

Spectrophotometric Assay of Mebendazole in Dosage Forms Using Sodium Hypochlorite  

NASA Astrophysics Data System (ADS)

A simple, selective and sensitive spectrophotometric method is described for the determination of mebendazole (MBD) in bulk drug and dosage forms. The method is based on the reaction of MBD with hypochlorite in the presence of sodium bicarbonate to form the chloro derivative of MBD, followed by the destruction of the excess hypochlorite by nitrite ion. The color was formed by the oxidation of iodide with the chloro derivative of MBD to iodine in the presence of starch, forming the blue colored product, which was measured at 570 nm. The optimum conditions that affect the reaction were ascertained and, under these conditions, a linear relationship was obtained in the concentration range of 1.25-25.0 μg/ml MBD. The calculated molar absorptivity and Sandell sensitivity values are 9.56·10^3 l·mol^-1·cm^-1 and 0.031 μg/cm^2, respectively. The limits of detection and quantification are 0.11 and 0.33 μg/ml, respectively. The proposed method was applied successfully to the determination of MBD in bulk drug and dosage forms, and no interference was observed from excipients present in the dosage forms. The reliability of the proposed method was further checked by parallel determination by the reference method and also by recovery studies.
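
The reported figures of merit hang together through the usual spectrophotometric relations (standard definitions; the molar mass of mebendazole, roughly 295.3 g/mol, is supplied here as an outside value and is not stated in the record):

```latex
A = \varepsilon\, b\, c, \qquad
S_s = \frac{M_w}{\varepsilon} \approx \frac{295.3}{9.56 \times 10^{3}} \approx 0.031\ \mu\mathrm{g/cm^{2}}, \qquad
\mathrm{LOD} = \frac{3.3\,\sigma}{s}, \quad \mathrm{LOQ} = \frac{10\,\sigma}{s},
```

so the quoted Sandell sensitivity is consistent with the quoted molar absorptivity, and the LOQ/LOD ratio of 0.33/0.11 = 3 matches the conventional 10/3.3 definitions (σ is the blank standard deviation and s the calibration slope).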

Swamy, N.; Prashanth, K. N.; Basavaiah, K.

2014-07-01

311

Reliability analysis of continuous fiber composite laminates  

NASA Technical Reports Server (NTRS)

A composite lamina may be viewed as a homogeneous solid whose directional strengths are random variables. Calculation of the lamina reliability under a multi-axial stress state can be approached by either assuming that the strengths act separately (modal or independent action), or that they interact through a quadratic interaction criterion. The independent action reliability may be calculated in closed form, while interactive criteria require simulations; there is currently insufficient data to make a final determination of preference between them. Using independent action for illustration purposes, the lamina reliability may be plotted in either stress space or in a non-dimensional representation. For the typical laminated plate structure, the individual lamina reliabilities may be combined in order to produce formal upper and lower bounds of reliability for the laminate, similar in nature to the bounds on properties produced from variational elastic methods. These bounds are illustrated for a (0/±15)_s Graphite/Epoxy (GR/EP) laminate. In addition, simple physically plausible phenomenological rules are proposed for redistribution of load after a lamina has failed. These rules are illustrated by application to (0/±15)_s and (90/±45/0)_s GR/EP laminates and results are compared with respect to the proposed bounds.
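
One plausible closed form of the independent-action reliability, assuming Weibull-distributed directional strengths (a common modeling choice, not confirmed by the abstract): the lamina survives only if every strength exceeds its stress, so

```latex
R_{\mathrm{lamina}} = \prod_{k} P\bigl(S_k > \sigma_k\bigr)
= \exp\!\left[-\sum_{k} \left(\frac{\sigma_k}{\sigma_{0k}}\right)^{m_k}\right],
```

and a first-ply-failure (series-system) argument over the laminae then supplies a lower bound of the kind described, R_laminate ≥ Π_i R_i, with an upper bound following from more forgiving post-failure assumptions.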

Thomas, David J.; Wetherhold, Robert C.

1990-01-01

312

Asynchronous parallel status comparator  

DOEpatents

Disclosed is an apparatus for matching asynchronously received signals and determining whether two or more out of a total number of possible signals match. The apparatus comprises, in one embodiment, an array of sensors positioned in discrete locations and in communication with one or more processors. The processors will receive signals if the sensors detect a change in the variable sensed from a nominal to a special condition and will transmit location information in the form of a digital data set to two or more receivers. The receivers collect, read, latch and acknowledge the data sets and forward them to decoders that produce an output signal for each data set received. The receivers also periodically reset the system following each scan of the sensor array. A comparator then determines if any two or more, as specified by the user, of the output signals corresponds to the same location. A sufficient number of matches produces a system output signal that activates a system to restore the array to its nominal condition. 4 figs.

Arnold, J.W.; Hart, M.M.

1992-12-15

314

Parallel 3-D spherical-harmonics transport methods  

SciTech Connect

This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The authors have developed massively parallel algorithms and codes for solving the radiation transport equation on 3-D unstructured spatial meshes consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. Three self-adjoint forms of the transport equation are solved: the even-parity form, the odd-parity form, and the self-adjoint angular flux form. The authors developed this latter form, which offers several significant advantages relative to the traditional forms. The transport equation is discretized in space using a trilinear finite-element approximation, in direction using a spherical-harmonic approximation, and in energy using the multigroup approximation. The discrete equations are solved using a parallel conjugate-gradient method. All of the parallel algorithms were implemented on the CM-5 computer at LANL. Calculations are presented which demonstrate that the solution technique is both highly parallel and efficient.

Morel, J.E.; McGhee, J.M. [Los Alamos National Lab., NM (United States). Computing, Information, and Communications Div.; Manteuffel, T. [Univ. of Colorado, Boulder, CO (United States). Dept. of Mathematics

1997-08-01

315

Reliable Quantum Computers  

E-print Network

The new field of quantum error correction has developed spectacularly since its origin less than two years ago. Encoded quantum information can be protected from errors that arise due to uncontrolled interactions with the environment. Recovery from errors can work effectively even if occasional mistakes occur during the recovery procedure. Furthermore, encoded quantum information can be processed without serious propagation of errors. Hence, an arbitrarily long quantum computation can be performed reliably, provided that the average probability of error per quantum gate is less than a certain critical value, the accuracy threshold. A quantum computer storing about 10^6 qubits, with a probability of error per quantum gate of order 10^{-6}, would be a formidable factoring engine. Even a smaller, less accurate quantum computer would be able to perform many useful tasks. (This paper is based on a talk presented at the ITP Conference on Quantum Coherence and Decoherence, 15-18 December 1996.)

John Preskill

1997-05-16

316

Consider insulation reliability  

SciTech Connect

This paper reports that when calcium silicate and two brands of mineral wool were compared in a series of laboratory tests, calcium silicate was more reliable. In-service experience with mineral wool at a Canadian heavy crude refinery provided examples of many of the lab's findings. Lab tests, conducted under controlled conditions following industry accepted practices, showed calcium silicate insulation was stronger, tougher and more durable than the mineral wools to which it was compared. For instance, the calcium silicate insulation exhibited only some minor surface cracking when heated to 1,200°F (649°C), while the mineral wools suffered binder burnout resulting in sagging, delamination and a general loss of dimensional stability.

Gamboa (Manville Mechanical Insulations, a Div. of Schuller International Inc., Denver, CO (United States))

1993-01-01

317

Physiologic Trend Detection and Artifact Rejection: A Parallel Implementation of a Multi-state Kalman Filtering Algorithm  

PubMed Central

Using a parallel implementation of the multi-state Kalman filtering algorithm, we have developed an accurate method of reliably detecting and identifying trends, abrupt changes, and artifacts from multiple physiologic data streams in real-time. The Kalman filter algorithm was implemented within an innovative software architecture for parallel computation: a parallel process trellis. Examples, processed in real-time, of both simulated and actual data serve to illustrate the potential value of the Kalman filter as a tool in physiologic monitoring.
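
A minimal single-state sketch of the filter-plus-artifact-rejection idea (the paper's filter is multi-state and runs inside a parallel process trellis; the noise parameters and gate below are illustrative assumptions):

```python
import numpy as np

def kalman_trend(zs, q=1e-4, r=0.5, gate=4.0):
    """Scalar random-walk Kalman filter with an innovation gate:
    measurements whose standardized innovation exceeds `gate` are
    rejected as artifacts instead of being folded into the estimate."""
    x, p = float(zs[0]), 1.0        # state estimate and its variance
    estimates = []
    for z in zs[1:]:
        p += q                      # predict (random-walk trend model)
        s = p + r                   # innovation variance
        nu = z - x                  # innovation
        if nu * nu / s < gate * gate:   # artifact test
            k = p / s               # Kalman gain
            x += k * nu
            p *= 1.0 - k
        estimates.append(x)
    return np.array(estimates)

# Example: a slow drift with one gross artifact at sample 60.
signal = np.linspace(80, 90, 120) + np.random.default_rng(1).normal(0, 0.5, 120)
signal[60] = 140.0                  # spurious spike, e.g., a probe artifact
trend = kalman_trend(signal)
```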

Sittig, Dean F.; Factor, Michael

1989-01-01

318

Supporting data intensive applications with medium grained parallelism  

SciTech Connect

ADAMS is an ambitious effort to provide new database access paradigms for the kinds of scientific applications that require massively parallel access to very large data sets in order to be effective. Many of the Grand Challenge Problems fall into this category, as well as those kinds of scientific research which depend on widely distributed shared sets of disparate data. The essence of the ADAMS approach is to view data purely in functional terms, rather than the more traditional structural view in which multiple data items are aggregated into records or tuples of flat files. Further, ADAMS has been implemented as an embedded interface so that scientists can develop applications in the host programming language of their choice, often Fortran, Pascal, or C, and still access shared data generated in other environments. The syntax and semantics of ADAMS are essentially complete. The functional nature of the ADAMS data interface paradigm simplifies its implementation in a distributed environment, e.g., the Mentat run-time system, because one must only distribute functional servers, not pieces of data structures. However, this only opens up the possibility of effective parallel database processing; to realize this potential far more work must be done in the areas of data dependence, intra-statement parallelism, parallel query optimization, and maintaining consistency and reliability in concurrent systems. Discovering how to make effective parallel data access an actuality in real scientific applications is the point of this research.

Pfaltz, J.L.; French, J.C.; Grimshaw, A.S.; Son, S.H.

1992-04-01

319

Transfer form  

Cancer.gov

Transfer Investigational Agent Form (10/02). This form is to be used for an intra-institutional transfer, one transfer per form. Division of Cancer Prevention, National Cancer Institute, National Institutes of Health.

320

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2011 CFR

...2011-10-01 false Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2011-10-01

321

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2012 CFR

...2012-10-01 false Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2012-10-01

322

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2010 CFR

...2010-10-01 true Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2010-10-01

323

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2013 CFR

...2013-10-01 false Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2013-10-01

324

The Reliability of a Criterion-Referenced Composite with the Parts of the Composite Having Different Cutting Scores.  

ERIC Educational Resources Information Center

Rajaratnam, Cronbach and Gleser's generalizability formula for stratified-parallel tests and Raju's coefficient beta are generalized to estimate the reliability of a composite of criterion-referenced tests, where the parts have different cutting scores. (Author/GK)

Raju, Nambury S.

1982-01-01

325

Parallel computation using boundary elements in solid mechanics  

NASA Technical Reports Server (NTRS)

The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming the linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain the analytical results for influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, the parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for solving a demonstration problem on the Sequent Symmetry S81 parallel computing system.

Chien, L. S.; Sun, C. T.

1990-01-01

326

Parallel plasma fluid turbulence calculations  

SciTech Connect

The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.

Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

1994-12-31

327

Computational methods for efficient structural reliability and reliability sensitivity analysis  

NASA Technical Reports Server (NTRS)

This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
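
A bare-bones importance-sampling sketch of the failure-probability estimate (a fixed sampling density rather than the paper's adaptive, incrementally updated one; the limit state, shift, and sample count are invented for illustration):

```python
import numpy as np

def importance_sampling_pf(g, shift, n=200_000, seed=0):
    """Estimate P[g(X) < 0] for standard-normal X by sampling from a
    normal density recentred at `shift` (toward the failure region)
    and reweighting each sample by the likelihood ratio p(x)/q(x)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, shift.size)) + shift   # draws from q
    log_w = -x @ shift + 0.5 * shift @ shift           # log p(x)/q(x)
    fail = g(x) < 0.0                                  # failure indicator
    return float(np.mean(np.exp(log_w) * fail))

# Example limit state: failure when x0 + x1 > 5 (exact Pf = Phi(-5/sqrt(2))).
g = lambda x: 5.0 - x[:, 0] - x[:, 1]
pf = importance_sampling_pf(g, shift=np.array([2.5, 2.5]))
```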

Wu, Y.-T.

1993-01-01

328

Reliable communication in the presence of failures  

Microsoft Academic Search

The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying

Kenneth P. Birman; Thomas A. Joseph

1987-01-01

329

Failure analysis of the electrostatic parallel-plate micro-actuator  

Microsoft Academic Search

Non-parallel electrode plates result in further tilting of the upper electrode until it reaches another equilibrium position. The relationship between the obliquity β in the final equilibrium state and the initial obliquity α is constructed. Efficient failure analysis is becoming essential for electrostatic parallel-plate actuators, especially in high-reliability and safety-critical applications. The simulation was carried out in CoventorWare.

Fengli Liu; Yongping Hao

2008-01-01

330

Massively Parallel MRI Detector Arrays  

PubMed Central

Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758
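
The trade-off these arrays are designed around is usually written with the standard parallel-imaging SNR relation (a textbook form, not specific to this article):

```latex
\mathrm{SNR}_{\mathrm{accel}} = \frac{\mathrm{SNR}_{\mathrm{full}}}{g \sqrt{R}},
```

where R is the acceleration (undersampling) factor and g ≥ 1 is the coil geometry factor; higher channel counts aim to hold g near 1 as R grows.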

Keil, Boris; Wald, Lawrence L

2013-01-01

331

Visualizing Parallel Computer System Performance  

NASA Technical Reports Server (NTRS)

Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed, almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

Malony, Allen D.; Reed, Daniel A.

1988-01-01

332

Hybrid parallel programming with MPI and Unified Parallel C.  

SciTech Connect

The Message Passing Interface (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node. Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) are growing in popularity because of their ability to provide a shared global address space that spans the memories of multiple compute nodes. However, taking advantage of UPC can require a large recoding effort for existing parallel applications. In this paper, we explore a new hybrid parallel programming model that combines MPI and UPC. This model allows MPI programmers incremental access to a greater amount of memory, enabling memory-constrained MPI codes to process larger data sets. In addition, the hybrid model offers UPC programmers an opportunity to create static UPC groups that are connected over MPI. As we demonstrate, the use of such groups can significantly improve the scalability of locality-constrained UPC codes. This paper presents a detailed description of the hybrid model and demonstrates its effectiveness in two applications: a random access benchmark and the Barnes-Hut cosmological simulation. Experimental results indicate that the hybrid model can greatly enhance performance; using hybrid UPC groups that span two cluster nodes, RA performance increases by a factor of 1.33 and using groups that span four cluster nodes, Barnes-Hut experiences a twofold speedup at the expense of a 2% increase in code size.

Dinan, J.; Balaji, P.; Lusk, E.; Sadayappan, P.; Thakur, R.; Mathematics and Computer Science; The Ohio State Univ.

2010-01-01
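
The static groups in the hybrid model have a rough analogue in ordinary MPI sub-communicators. The mpi4py sketch below illustrates only that grouping pattern (plain MPI throughout, not UPC, and the group size is an arbitrary assumption): ranks inside a group combine a partial result, then only group leaders communicate across groups.

```python
# Run with: mpiexec -n 8 python groups.py  (requires mpi4py)
from mpi4py import MPI

world = MPI.COMM_WORLD
rank = world.Get_rank()

GROUP_SIZE = 2                      # ranks per group (an assumed parameter)
group = world.Split(color=rank // GROUP_SIZE, key=rank)

# Inside a group: cooperate on a partial sum (stand-in for shared-memory work).
group_sum = group.allreduce(float(rank), op=MPI.SUM)

# Across groups: only group leaders (group rank 0) participate.
leader_color = 0 if group.Get_rank() == 0 else MPI.UNDEFINED
leaders = world.Split(color=leader_color, key=rank)
if leaders != MPI.COMM_NULL:
    total = leaders.allreduce(group_sum, op=MPI.SUM)
    if leaders.Get_rank() == 0:
        print("grand total:", total)
```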

333

Parallel algorithms for mapping pipelined and parallel computations  

NASA Technical Reports Server (NTRS)

Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

Nicol, David M.

1988-01-01
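
The flavor of the underlying problem shows up already in the classic chain-partitioning formulation: assign m modules, kept in order, to n processors so that the most heavily loaded processor is as light as possible. The straightforward O(nm^2) dynamic program below is a baseline sketch; the paper's point is precisely to beat such bounds (e.g. O(nm log m)).

```python
def min_bottleneck_partition(weights, n_procs):
    """Assign a chain of module weights contiguously to n_procs processors,
    minimizing the heaviest processor load (classic O(n*m^2) dynamic program)."""
    m = len(weights)
    prefix = [0.0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    INF = float("inf")
    # best[k][i]: minimal bottleneck placing the first i modules on k processors
    best = [[INF] * (m + 1) for _ in range(n_procs + 1)]
    best[0][0] = 0.0
    for k in range(1, n_procs + 1):
        for i in range(1, m + 1):
            for j in range(i):  # modules j..i-1 all go on processor k
                load = prefix[i] - prefix[j]
                best[k][i] = min(best[k][i], max(best[k - 1][j], load))
    return best[n_procs][m]

print(min_bottleneck_partition([4, 2, 7, 1, 3, 5], 3))  # -> 8.0 ([4,2][7,1][3,5])
```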

334

Constructions: Parallel Through A Point  

NSDL National Science Digital Library

After review of Construction Basics, the technique of constructing a parallel line through a point not on the line will be learned. Let's review the basics of Constructions in Geometry first: Constructions - General Rules Review of how to copy an angle is helpful; please review that here: Constructions: Copy a Line Segment and an Angle Now, using a paper, pencil, straight edge, and compass, you will learn how to construct a parallel through a point. A video demonstration is available to help you. (Windows Media ...

Neubert, Mrs.

2010-12-31

335

Benchmarking parallel image analysis systems  

NASA Astrophysics Data System (ADS)

Parallel image analysis systems are shown by the Abingdon Cross benchmark to excel over all other architectures both in terms of speed and cost performance. The Abingdon Cross benchmark is task specific so that the performance of the system under test can be optimized and adjusted to take full advantage of the system's capabilities. The authors have devised a new set of benchmarks for parallel image analysis systems consisting of individual tests of the following operations: (1) point process, (2) integer convolution, (3) Fourier transform, (4) Boolean algebra, (5) histogramming, (6) maximum ranking, (7) median ranking, (8) erosion/dilation, (9) memory to disk transfer, and (10) memory to display transfer.

Preston, Kendall, Jr.; Seigart, Carol

1990-07-01

336

Reliability of Wireless Sensor Networks  

PubMed Central

Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (by increasing the network lifetime) and increase the reliability of the network (by improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability, but they significantly increase the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs considering the battery level as a key factor. Moreover, this model is based on routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of the power consumption on the reliability of WSNs. PMID:25157553

Damaso, Antonio; Rosa, Nelson; Maciel, Paulo

2014-01-01
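
The reliability-versus-power conflict is easy to see in a toy series/parallel model: nodes along a path act in series, alternative paths act in parallel. The sketch below is an illustration only; the linear battery scaling and the independence assumptions are ours, not the paper's model.

```python
def node_reliability(base, battery):
    # Assumed toy model: hardware reliability scaled by remaining battery fraction.
    return base * battery

def path_reliability(nodes):
    """A path works only if every node on it works (series combination)."""
    r = 1.0
    for base, battery in nodes:
        r *= node_reliability(base, battery)
    return r

def multipath_reliability(paths):
    """Multipath routing: delivery succeeds if at least one path works."""
    p_all_fail = 1.0
    for path in paths:
        p_all_fail *= 1.0 - path_reliability(path)
    return 1.0 - p_all_fail

path_a = [(0.99, 0.9), (0.99, 0.8)]                  # (base reliability, battery)
path_b = [(0.99, 0.5), (0.99, 0.6), (0.99, 0.7)]
print(f"single path A : {path_reliability(path_a):.3f}")
print(f"multipath A+B : {multipath_reliability([path_a, path_b]):.3f}")
```

Sending over both paths raises delivery reliability, but in a fuller model it would also drain more batteries, which is exactly the trade-off the paper quantifies.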

337

Forms of matter and forms of radiation  

E-print Network

The theory of defects in ordered and ill-ordered media is a well-advanced part of condensed matter physics. Concepts developed in this field also occur in the study of spacetime singularities, namely: i)- the topological theory of quantized defects (Kibble's cosmic strings) and ii)- the Volterra process for continuous defects, used to classify the Poincaré symmetry breakings. We reassess the classification of Minkowski spacetime defects in the same theoretical frame, starting from the conjecture that these defects fall into two classes, according as they relate to massive particles or to radiation. This we justify on the empirical evidence of Hubble's expansion. We introduce timelike and null congruences of geodesics treated as ordered media, viz. 'm'-crystals of massive particles and 'r'-crystals of massless particles, with parallel 4-momenta in M^4. Classifying their defects (or 'forms') we find (i) 'm'- and 'r'- Volterra continuous line defects and (ii) quantized topologically stable 'r'-defects, these latter forms being of various dimensionalities. Besides these 'perfect' forms, there are 'imperfect' disclinations that bound misorientation walls in three dimensions. We also speculate on the possible relation of these forms with the large-scale structure of the Universe.

Maurice Kleman

2009-05-28

338

Parallelization of implicit finite difference schemes in computational fluid dynamics  

NASA Technical Reports Server (NTRS)

Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult and non-intuitive. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.

Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel

1990-01-01

339

Parallel counters for signed binary signals  

Microsoft Academic Search

A parallel counter is a combinational logic circuit that receives a set of binary count signals in parallel and determines the final count after some fixed delay. In this paper, a more general parallel counter is presented whose count inputs have three states (i.e. down, none, and up or, equivalently, -1, 0, and 1). Such parallel up/down counters find applications

Behrooz Parhami

1989-01-01

340

Determinacy and Repeatability of Parallel Program Schemata  

E-print Network

Determinacy and Repeatability of Parallel Program Schemata. Jack B. Dennis (Computer Science); contact: vsarkar@rice.edu. The study of asynchronous parallel computations dates back to at least the 1960s; for example, the doctoral thesis of Earl [...]. Abstract: the concept of "determinism" of parallel programs and parallel

Kavi, Krishna

341

Exploiting heterogeneous parallelism on a multithreaded multiprocessor  

Microsoft Academic Search

This paper describes an integrated architecture, compiler, runtime, and operating system solution to exploiting heterogeneous parallelism. The architecture is a pipelined multi-threaded multiprocessor, enabling the execution of very fine (multiple operations within an instruction) to very coarse (multiple jobs) parallel activities. The compiler and runtime focus on managing parallelism within a job, while the operating system focuses on managing parallelism

Gail A. Alverson; Robert Alverson; David Callahan; Brian Koblenz; Allan Porterfield; Burton J. Smith

1992-01-01

342

Parallel object-oriented programming in SYMPAL  

Microsoft Academic Search

An object-oriented programming model in a parallel system is presented. It is designed for modelling, describing and solving a wide variety of AI problems. AI applications, in particular, deal with knowledge-bases and a lot of problems have parallel solutions. Therefore, while parallel computers are becoming widespread, there is a need for a rich language that enables the exploitation of parallelism

I. Danieli; S. Cohen

1988-01-01

343

Reliability Differentiation of Electricity Transmission  

Microsoft Academic Search

In many jurisdictions, electric utility restructuring has included the creation of an independent system operator (ISO) to dispatch generation, establish the market-clearing price, and allocate limited transmission capacity among users. This paper differentiates reliability through rate unbundling. We propose a capacity reservation tariff (CRT) to induce the users to self-select their preferred levels of reliability. Based on these self-selected reliability

Chi-Keung Woo; Ira Horowitz; Jennifer Martin

1998-01-01

344

A fourth generation reliability predictor  

NASA Technical Reports Server (NTRS)

A reliability/availability predictor computer program has been developed and is currently being beta-tested by over 30 US companies. The computer program is called the Hybrid Automated Reliability Predictor (HARP). HARP was developed to fill an important gap in reliability assessment capabilities. This gap was manifested through the use of its third-generation cousin, the Computer-Aided Reliability Estimation (CARE III) program, over a six-year development period and an additional three-year period during which CARE III has been in the public domain. The accumulated experience of the over 30 establishments now using CARE III was used in the development of the HARP program.

Bavuso, Salvatore J.; Martensen, Anna L.

1988-01-01

345

File concepts for parallel I/O  

NASA Technical Reports Server (NTRS)

The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.

Crockett, Thomas W.

1989-01-01

346

Parallel Programming Examples using MPI  

NSDL National Science Digital Library

Despite the rate at which computers have advanced in recent history, human imagination has advanced faster. Often greater computing power can be achieved by having multiple computers work together on a single problem. This tutorial discusses how Message Passing Interface (MPI) can be used to implement parallel programming solutions in a variety of cases.

Joiner, David; The Shodor Education Foundation, Inc.
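
In the spirit of such tutorials, here is a minimal mpi4py example (Python bindings for MPI; the array-sum task is a placeholder of ours) that scatters a vector across ranks and reduces the partial sums back to rank 0:

```python
# Run with: mpiexec -n 4 python mpi_sum.py  (requires mpi4py and numpy;
# assumes the number of ranks divides N evenly)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 1_000_000
data = np.arange(N, dtype="d") if rank == 0 else None
chunk = np.empty(N // size, dtype="d")

comm.Scatter(data, chunk, root=0)          # hand each rank its slice
partial = np.array([chunk.sum()])
total = np.zeros(1)
comm.Reduce(partial, total, op=MPI.SUM, root=0)

if rank == 0:
    print("sum:", total[0], "expected:", N * (N - 1) / 2.0)
```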

347

Availability of a Parallel System  

Microsoft Academic Search

This paper considers a system which has two subsystems in parallel. One subsystem has a general life time and the other is 2-out-of-n:F. The first subsystem is given priority for repair. This paper discusses the availability of the system.

R. Ramanarayanan; K. Usha

1980-01-01
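
Setting aside the repair-priority dynamics the paper actually analyzes, a static snapshot of such a configuration is a one-liner per subsystem: a 2-out-of-n:F subsystem is up while at most one component is down, and the two subsystems combine in parallel. A sketch under an independence assumption, with illustrative numbers:

```python
def avail_2_out_of_n_F(a, n):
    """2-out-of-n:F subsystem: it fails once 2 components have failed,
    so it is up while all n, or exactly n - 1, components are up."""
    return a ** n + n * a ** (n - 1) * (1.0 - a)

def system_availability(a1, a2):
    """Two subsystems in parallel: the system is down only if both are down."""
    return 1.0 - (1.0 - a1) * (1.0 - a2)

a_sub2 = avail_2_out_of_n_F(0.95, n=4)
print(f"2-out-of-4:F availability: {a_sub2:.4f}")        # ~0.9860
print(f"parallel system          : {system_availability(0.90, a_sub2):.6f}")
```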

348

Parallel Programming with Declarative Ada  

E-print Network

[...] the Ada language, and Ada programming practice is given by Booch [2]. Explicit parallelism in Ada [...]. Declarative programming languages (e.g., functional and logic programming languages) are semantically elegant [...] on a modern structured imperative language with single-assignment variables. Such a language combines

349

Portable Parallel Programming in HPC++  

Microsoft Academic Search

HPC++ is a C++ library and language extension framework that is being developed by the HPC++ consortium as a standard model for portable parallel C++ programming. This paper provides a brief introduction to HPC++ style programming and outlines some of the unresolved issues

Peter H. Beckman; Dennis Gannon; Elizabeth Johnson

1996-01-01

350

The plane with parallel coordinates  

Microsoft Academic Search

By means of Parallel Coordinates, planar "graphs" of multivariate relations are obtained. Certain properties of the relationship correspond to the geometrical properties of its graph. On the plane a point ↔ line duality with several interesting properties is induced. A new duality between bounded and unbounded convex sets and hstars (a generalization of hyperbolas) and between Convex Unions and Intersections is found. This

Alfred Inselberg

1985-01-01
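
The construction itself is simple to reproduce: each N-dimensional point becomes a polyline crossing N parallel vertical axes. A short matplotlib sketch with random stand-in data:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
data = rng.normal(size=(30, 5))        # 30 points in 5 dimensions
axes_x = np.arange(data.shape[1])

# Normalize each variable to [0, 1] so all axes share a common scale.
lo, hi = data.min(axis=0), data.max(axis=0)
norm = (data - lo) / (hi - lo)

for row in norm:
    plt.plot(axes_x, row, alpha=0.5)   # one N-D point -> one polyline
plt.xticks(axes_x, [f"x{i + 1}" for i in axes_x])
plt.title("Parallel coordinates: N-D points as polylines")
plt.show()
```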

351

A rugged scalable parallel system  

Microsoft Academic Search

Tremendous strides are being made in the development of applications for scalable, parallel, high performance computing systems. One of the factors limiting further applications has been the lack of small, rugged, embeddable systems to support embedded airborne, shipboard, and landbased installations operating in severe environments. Litton Guidance and Control Systems, together with MasPar Computer Corporation, and supported by the Advanced

Alan L. Smeyne; John R. Nickolls

1995-01-01

352

Parallel Reduction Part I. Preliminaries  

E-print Network

From BIG CPU, BIG DATA: imagine a square dartboard (Figure 4.1) with sides of length 1 [...]. The examples use the Parallel Java 2 Library class edu.rit.util.Random, rather than the standard Java class java.util.Random, for two reasons: my PRNG class is faster than Java's PRNG class; and my PRNG class has features useful

Kaminsky, Alan
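
The dartboard in that excerpt is the standard Monte Carlo estimate of pi: the fraction of uniform random points that land inside the inscribed quarter circle tends to pi/4. A Python sketch with one seeded PRNG per worker (the standard library here, not the book's Parallel Java 2 classes; the simple sequential seeds are for illustration only):

```python
import multiprocessing as mp
import random

def throw_darts(args):
    """Count darts landing inside the quarter circle within the unit square."""
    n, seed = args
    rng = random.Random(seed)          # each worker owns an independent PRNG
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    workers, per_worker = 4, 250_000
    with mp.Pool(workers) as pool:
        tasks = [(per_worker, seed) for seed in range(workers)]
        hits = sum(pool.map(throw_darts, tasks))
    print("pi ~", 4.0 * hits / (workers * per_worker))
```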

353

Query Optimization for Parallel Execution  

Microsoft Academic Search

The decreasing cost of computing makes it economically viable to reduce the response time of decision support queries by using parallel execution to exploit inexpensive resources. This goal poses the following query optimization problem: minimize response time subject to constraints on throughput, which we motivate as the dual of the traditional DBMS problem. We address this novel problem

Sumit Ganguly; Waqar Hasan; Ravi Krishnamurthy

1992-01-01

354

Ejs Parallel Plate Capacitor Model  

NSDL National Science Digital Library

The Ejs Parallel Plate Capacitor model displays a parallel-plate capacitor which consists of two identical metal plates, placed parallel to one another. The capacitor can be charged by connecting one plate to the positive terminal of a battery and the other plate to the negative terminal. The dielectric constant and the separation of the plates can be changed via sliders. You can modify this simulation if you have Ejs installed by right-clicking within the plot and selecting "Open Ejs Model" from the pop-up menu item. Ejs Parallel Plate Capacitor model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_bu_capacitor.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models for Newtonian mechanics are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.

Duffy, Andrew

2008-07-14
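
The physics behind the simulation is the ideal parallel-plate relation C = eps_r * eps_0 * A / d (fringing fields neglected). A worked example:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2, separation_m, eps_r=1.0):
    """Ideal parallel-plate capacitor; fringing fields are neglected."""
    return eps_r * EPS0 * area_m2 / separation_m

# 10 cm x 10 cm plates, 1 mm apart, in air:
C = parallel_plate_capacitance(0.1 * 0.1, 1e-3)
print(f"C = {C * 1e12:.1f} pF")  # ~88.5 pF
```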

355

Parallel Processing and Information Retrieval.  

ERIC Educational Resources Information Center

This issue contains nine articles that provide an overview of trends and research in parallel information retrieval. Topics discussed include network design for text searching; the Connection Machine System; PThomas, an adaptive information retrieval system on the Connection Machine; algorithms for document clustering; and system architecture for…

Rasmussen, Edie M.; And Others

1991-01-01

356

Managing Checkpoints for Parallel Programs  

E-print Network

Managing Checkpoints for Parallel Programs. Jim Pruyne and Miron Livny, Department of Computer Sciences, University of Wisconsin-Madison. {pruyne, miron}@cs.wisc.edu. Abstract: Checkpointing is a valuable tool for any scheduling system to have. With the ability to checkpoint, schedulers are not locked

Feitelson, Dror

358

PARALLEL AND DISTRIBUTED SIMULATION SYSTEMS  

Microsoft Academic Search

Originating from basic research conducted in the 1970's and 1980's, the parallel and distributed simulation field has matured over the last few decades. Today, operational systems have been fielded for applications such as military training, analysis of communication networks, and air traffic control systems, to mention a few. This tutorial gives an overview of technologies to distribute the execution

Richard M. Fujimoto

1999-01-01

359

Tutorial: Parallel Simulation on Supercomputers  

SciTech Connect

This tutorial introduces typical hardware and software characteristics of extant and emerging supercomputing platforms, and presents issues and solutions in executing large-scale parallel discrete event simulation scenarios on such high performance computing systems. Covered topics include synchronization, model organization, example applications, and observed performance from illustrative large-scale runs.

Perumalla, Kalyan S [ORNL

2012-01-01

360

Systematic parallel programming Jurgen Dingel  

E-print Network

Systematic parallel programming. Jürgen Dingel. May 19, 2000. CMU-CS-99-172, School of Computer Science. [...] (DMSO) and the Semiconductor Research Corporation (SRC). The views and conclusions contained [...]. The calculus allows the stepwise formal derivation of an abstract, low-level implementation from a trusted, high-level specification, and thus helps structuring and documenting

362

Forms & Guidelines  

Cancer.gov

2003. Step 1: Developing a Cancer Prevention Clinical Trial. Forms & Guidelines: General Guidelines for Consortia; Lead Organization to add Participating to Consortium (doc, 63kb); NCI Request for Proposals; Current DCP Letter of Intent Submission Form

363

Interrelation Between Safety Factors and Reliability  

NASA Technical Reports Server (NTRS)

An evaluation was performed to establish relationships between safety factors and reliability. Results obtained show that the use of the safety factor is not contradictory to the employment of probabilistic methods. In many cases the safety factors can be directly expressed by the required reliability levels. However, there is a major difference that must be emphasized: whereas the safety factors are allocated in an ad hoc manner, the probabilistic approach offers a unified mathematical framework. The establishment of the interrelation between the concepts opens an avenue to specify safety factors based on reliability. In cases where there are several forms of failure, the allocation of safety factors should be based on having the same reliability associated with each failure mode. This immediately suggests that by the probabilistic methods the existing over-design or under-design can be eliminated. The report includes three parts: Part 1, Random Actual Stress and Deterministic Yield Stress; Part 2, Deterministic Actual Stress and Random Yield Stress; Part 3, Both Actual Stress and Yield Stress Are Random.

Elishakoff, Isaac; Chamis, Christos C. (Technical Monitor)

2001-01-01
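
For the Part 1 case the correspondence is direct: with a normally distributed actual stress of coefficient of variation c and a central safety factor SF (deterministic yield stress divided by mean stress), reliability is Phi((SF - 1)/c). A small sketch (the normality assumption and the numbers are illustrative, not the report's data):

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def reliability_from_safety_factor(sf, cov):
    """Random normal stress with coefficient of variation cov; deterministic
    yield stress; central safety factor sf = yield stress / mean stress."""
    return Phi((sf - 1.0) / cov)

for sf in (1.2, 1.5, 2.0):
    print(f"SF = {sf:.1f} -> R = {reliability_from_safety_factor(sf, 0.15):.5f}")
```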

364

Tax Forms  

NSDL National Science Digital Library

As thoughts in the US turn to taxes (April 15 is just around the corner), Mary Jane Ledvina of the Louisiana State University regional government depository library has provided a simple, effective pointers page to downloadable tax forms. Included are federal tax forms and those for 43 states. Of course, available forms vary by state. Most forms are in Adobe Acrobat (.pdf) format. This is a simple, crisply designed page that should save time, although probably not headaches.

Ledvina, Mary J.

1997-01-01

365

CONTAMINANT TRANSPORT IN PARALLEL FRACTURED MEDIA: SUDICKY AND FRIND REVISITED  

EPA Science Inventory

This paper is concerned with a modified, nondimensional form of the parallel fracture, contaminant transport model of Sudicky and Frind (1982). The modifications include the boundary condition at the fracture wall, expressed by a parameter, and the power-law relationship between...

367

Characterizing Correctness Properties of Parallel Programs Using Fixpoints  

Microsoft Academic Search

We have shown that correctness properties of parallel programs can be described using computation trees and that from these descriptions fixpoint characterizations can be generated. We have also given conditions on the form of computation tree descriptions to ensure that a correctness property can be characterized using continuous fixpoints. A consequence is that a correctness property such as inevitability under

E. Allen Emerson; Edmund M. Clarke

1980-01-01

368

Enabling active storage on parallel I/O software stacks  

Microsoft Academic Search

As data sizes continue to increase, the concept of active storage is well fitted for many data analysis kernels. Nevertheless, while this concept has been investigated and deployed in a number of forms, enabling it from the parallel I/O software stack has been largely unexplored. In this paper, we propose and evaluate an active storage system that allows data analysis,

Seung Woo Son; Samuel Lang; Philip Carns; Robert Ross; Rajeev Thakur; Berkin Ozisikyilmaz; Prabhat Kumar; Wei-Keng Liao; Alok Choudhary

2010-01-01

369

Evidence for parallel elongated structures in the mesosphere  

NASA Technical Reports Server (NTRS)

The physical cause of partial reflection from the mesosphere is of interest. Data are presented from an image-forming radar at Brighton, Colorado, that suggest that some of the radar scattering is caused by parallel elongated structures lying almost directly overhead. Possible physical sources for such structures include gravity waves and roll vortices.

Adams, G. W.; Brosnahan, J. W.; Walden, D. C.

1983-01-01

370

Mirror versus parallel bimanual reaching  

PubMed Central

Background In spite of their importance to everyday function, tasks that require both hands to work together such as lifting and carrying large objects have not been well studied and the full potential of how new technology might facilitate recovery remains unknown. Methods To help identify the best modes for self-teleoperated bimanual training, we used an advanced haptic/graphic environment to compare several modes of practice. In a 2-by-2 study, we compared mirror vs. parallel reaching movements, and also compared veridical display to one that transforms the right hand's cursor to the opposite side, reducing the area that the visual system has to monitor. Twenty healthy, right-handed subjects (5 in each group) practiced 200 movements. We hypothesized that parallel reaching movements would be the best performing, and attending to one visual area would reduce the task difficulty. Results The two-way comparison revealed that mirror movement times took an average of 1.24 s longer to complete than parallel. Surprisingly, subjects' movement times moving to one target (attending to one visual area) also took an average of 1.66 s longer than subjects moving to two targets. For both hands, there was also a significant interaction effect, revealing the lowest errors for parallel movements moving to two targets (p < 0.05), with the best overall performance for parallel movements with a veridical display (moving to two separate targets). These results point to the expected levels of challenge for these bimanual training modes, which could be used to advise therapy choices in self-neurorehabilitation. PMID:23837908

2013-01-01

371

Bedding parallel veins and their relationship to folding  

NASA Astrophysics Data System (ADS)

Laminated bedding parallel veins hosted in turbiditic sandstone shale sequences from central Victoria, Australia, consist of stacked, millimetre thick, sub-parallel sheets of quartz separated by micaceous layers, wall rock slivers and pressure solution seams. They have very high length to thickness ratios, are laterally continuous over tens to hundreds of metres, and have relatively uniform thickness compared to other vein types. They are intimately associated with and folded by chevron folds, and the quartz grain shape elongation lineation is commonly orthogonal to mesoscopic and macroscopic fold hinge lines. The bedding parallel veins have two morphological forms. Type I are thin (commonly 5-10 cm) laminated veins which have complex microstructures dominated by phyllosilicate inclusion surfaces, related to oblique opening along bedding with varying rates of deposition (opening) relative to shear displacement (slip) along the bedding surfaces. More common are Type II, thicker (generally <20 cm), banded veins of alternating milky-white quartz with wall rock inclusion laminae (formerly fragments) bounded by stylolitic partings parallel to both bedding and the vein margins. The inclusion surfaces in Type I veins track the opening direction during vein formation. Vein opening-sense criteria suggest cyclical pore fluid pressure fluctuations which predate the amplification and propagation of the host chevron folds; i.e. prior to attainment of significant limb dip. Different layer parallel shortening and amplification rates for individual layers within the sedimentary sequence may lead to bedding parallel veins with an opening sense unrelated to the flexural slip folds which eventually follow.

Jessell, M. W.; Willman, C. E.; Gray, D. R.

1994-06-01

372

18 CFR 39.5 - Reliability Standards.  

Code of Federal Regulations, 2010 CFR

...2010-04-01 2010-04-01 false Reliability Standards. 39.5 Section 39...CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.5 Reliability...

2010-04-01

373

Avionics design for reliability bibliography  

NASA Technical Reports Server (NTRS)

A bibliography with abstracts was presented in support of AGARD lecture series No. 81. The following areas were covered: (1) program management, (2) design for high reliability, (3) selection of components and parts, (4) environment consideration, (5) reliable packaging, (6) life cycle cost, and (7) case histories.

1976-01-01

374

System reliability of suspension bridges  

Microsoft Academic Search

Provisions for the design of existing suspension bridges often rely on a deterministic basis. Consequently, the reliability of these bridges cannot be assessed if current provisions are applied. In order to develop cost-effective design and maintenance strategies for suspension bridges a system reliability-based approach has to be used. This is accomplished by a probabilistic finite element geometrically nonlinear analysis approach.

Kiyohiro Imai; Dan M. Frangopol

2002-01-01

375

Economic design of reliable networks  

Microsoft Academic Search

This paper describes a general approach to the optimal design of communications networks when considering both economics and reliability. The approach uses a genetic algorithm to identify the best topology of network arcs to collectively meet cost and network reliability considerations. This approach is distinct because it is highly flexible and can readily solve many versions of the network design

Darren L. Deeter; Alice E. Smith

1998-01-01

376

Electric Reliability & Hurricane Preparedness Plan  

E-print Network

Electric Reliability & Hurricane Preparedness Plan. Joe Bosco, Account Executive, October 17, 2012. NASA: 99+% reliability. Hurricane Katrina: MPC infrastructure damage: 9,000 broken poles; customers w/o power (100%); power restored in 11 days; manpower 11,000. MPC's Hurricane Preparedness

377

Software-Reliability-Engineered Testing  

Microsoft Academic Search

Software testing often results in delays to market and high cost without assuring product reliability. Software reliability engineered testing (SRET), an AT&T best practice, carefully engineers testing to overcome these weaknesses. The article describes SRET in the context of an actual project at AT&T, which is called Fone Follower. The author selected this example because of its simplicity; it in

John D. Musa

1996-01-01

378

Form classification  

NASA Astrophysics Data System (ADS)

The problem of form classification is to assign a single-page form image to one of a set of predefined form types or classes. We classify the form images using low-level pixel density information from the binary images of the documents. In this paper, we solve the form classification problem with a classifier based on the k-means algorithm, supported by adaptive boosting. Our classification method is tested on the NIST scanned tax forms databases (special forms databases 2 and 6), which include machine-typed and handwritten documents. Our method improves the performance over published results on the same databases, while still using a simple set of image features.

Reddy, K. V. Umamaheswara; Govindaraju, Venu

2008-01-01

379

Three-dimensional parallel vortex rings in Bose-Einstein condensates  

SciTech Connect

We construct three-dimensional structures of topological defects hosted in trapped wave fields, in the form of vortex stars, vortex cages, parallel vortex lines, perpendicular vortex rings, and parallel vortex rings, and we show that the latter exist as robust stationary, collective states of nonrotating Bose-Einstein condensates. We discuss the stability properties of excited states containing several parallel vortex rings hosted by the condensate, including their dynamical and structural stability.

Crasovan, Lucian-Cornel [ICFO-Institut de Ciencies Fotoniques, and Department of Signal Theory and Communications, Universitat Politecnica de Catalunya, ES 08034 Barcelona (Spain); Department of Theoretical Physics, Institute of Atomic Physics, P.O. Box MG-6, Bucharest (Romania); Perez-Garcia, Victor M. [Departamento de Matematicas, ETSI Industriales, Universidad de Castilla-La Mancha, 13071 Ciudad Real (Spain); Danaila, Ionut [Laboratoire Jacques-Louis Lions, Universite Paris 6, 175 Rue du Chevaleret, 75013 Paris (France); Mihalache, Dumitru [Department of Theoretical Physics, Institute of Atomic Physics, P.O. Box MG-6, Bucharest (Romania); Torner, Lluis [ICFO-Institut de Ciencies Fotoniques, and Department of Signal Theory and Communications, Universitat Politecnica de Catalunya, ES 08034 Barcelona (Spain)

2004-09-01

380

Statistical modeling of software reliability  

NASA Technical Reports Server (NTRS)

This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

Miller, Douglas R.

1992-01-01

381

Scalable Parallel Random Number Generators Library, The (SPRNG)  

NSDL National Science Digital Library

Computational stochastic approaches (Monte Carlo methods) based on random sampling are becoming extremely important research tools not only in their "traditional" fields such as physics, chemistry or applied mathematics but also in social sciences and, recently, in various branches of industry. An indication of their importance is, for example, the fact that Monte Carlo calculations consume about one half of the supercomputer cycles. One of the indispensable and important ingredients for reliable and statistically sound calculations is the source of pseudo random numbers. The goal of our project is to develop, implement and test a scalable package for parallel pseudo random number generation which will be easy to use on a variety of architectures, especially in large-scale parallel Monte Carlo applications.

Michael Mascagni, Ashok S.
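
The key requirement, independent and reproducible streams per process, can be illustrated with NumPy's seed-spawning machinery (shown as an analogue only; SPRNG itself is a separate library with its own generators):

```python
import numpy as np

# Spawn child seeds that yield statistically independent generators.
root = np.random.SeedSequence(20240101)
streams = [np.random.default_rng(s) for s in root.spawn(4)]

# Each parallel worker would own exactly one stream.
for i, rng in enumerate(streams):
    x = rng.random(100_000)
    print(f"stream {i}: mean = {x.mean():.4f}")
```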

382

Task parallelism and high-performance languages  

SciTech Connect

The definition of High Performance Fortran (HPF) is a significant event in the maturation of parallel computing: it represents the first parallel language that has gained widespread support from vendors and users. The subject of this paper is to incorporate support for task parallelism. The term task parallelism refers to the explicit creation of multiple threads of control, or tasks, which synchronize and communicate under programmer control. Task and data parallelism are complementary rather than competing programming models. While task parallelism is more general and can be used to implement algorithms that are not amenable to data-parallel solutions, many problems can benefit from a mixed approach, with for example a task-parallel coordination layer integrating multiple data-parallel computations. Other problems admit to both data- and task-parallel solutions, with the better solution depending on machine characteristics, compiler performance, or personal taste. For these reasons, we believe that a general-purpose high-performance language should integrate both task- and data-parallel constructs. The challenge is to do so in a way that provides the expressivity needed for applications, while preserving the flexibility and portability of a high-level language. In this paper, we examine and illustrate the considerations that motivate the use of task parallelism. We also describe one particular approach to task parallelism in Fortran, namely the Fortran M extensions. Finally, we contrast Fortran M with other proposed approaches and discuss the implications of this work for task parallelism and high-performance languages.

Foster, I.

1996-03-01

383

Scalable Performance Environments for Parallel Systems  

NASA Technical Reports Server (NTRS)

As parallel systems expand in size and complexity, the absence of performance tools for these parallel systems exacerbates the already difficult problems of application program and system software performance tuning. Moreover, given the pace of technological change, we can no longer afford to develop ad hoc, one-of-a-kind performance instrumentation software; we need scalable, portable performance analysis tools. We describe an environment prototype based on the lessons learned from two previous generations of performance data analysis software. Our environment prototype contains a set of performance data transformation modules that can be interconnected in user-specified ways. It is the responsibility of the environment infrastructure to hide details of module interconnection and data sharing. The environment is written in C++ with the graphical displays based on X windows and the Motif toolkit. It allows users to interconnect and configure modules graphically to form an acyclic, directed data analysis graph. Performance trace data are represented in a self-documenting stream format that includes internal definitions of data types, sizes, and names. The environment prototype supports the use of head-mounted displays and sonic data presentation in addition to the traditional use of visual techniques.

Reed, Daniel A.; Olson, Robert D.; Aydt, Ruth A.; Madhyastha, Tara M.; Birkett, Thomas; Jensen, David W.; Nazief, Bobby A. A.; Totty, Brian K.

1991-01-01

384

Parallel Worldline Numerics: Implementation and Error Analysis  

E-print Network

We give an overview of the worldline numerics technique, and discuss the parallel CUDA implementation of a worldline numerics algorithm. In the worldline numerics technique, we wish to generate an ensemble of representative closed-loop particle trajectories, and use these to compute an approximate average value for Wilson loops. We show how this can be done with a specific emphasis on cylindrically symmetric magnetic fields. The fine-grained, massive parallelism provided by the GPU architecture results in considerable speedup in computing Wilson loop averages. Furthermore, we give a brief overview of uncertainty analysis in the worldline numerics method. There are uncertainties from discretizing each loop, and from using a statistical ensemble of representative loops. The former can be minimized so that the latter dominates. However, determining the statistical uncertainties is complicated by two subtleties. Firstly, the distributions generated by the worldline ensembles are highly non-Gaussian, and so the standard error in the mean is not a good measure of the statistical uncertainty. Secondly, because the same ensemble of worldlines is used to compute the Wilson loops at different values of $T$ and $x_\mathrm{cm}$, the uncertainties associated with each computed value of the integrand are strongly correlated. We recommend a form of jackknife analysis which deals with both of these problems.

Dan Mazur; Jeremy S. Heyl

2014-07-28
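
A plain leave-one-out jackknife for the mean looks as follows; for the correlated case the abstract describes, one would delete whole worldlines (every integrand value a loop contributes to) rather than single values. A sketch on heavy-tailed toy data:

```python
import numpy as np

def jackknife_mean_error(samples):
    """Leave-one-out jackknife estimate of the mean and its uncertainty."""
    n = len(samples)
    loo_means = (samples.sum() - samples) / (n - 1)  # mean with sample i removed
    center = loo_means.mean()
    var = (n - 1) / n * ((loo_means - center) ** 2).sum()
    return samples.mean(), np.sqrt(var)

rng = np.random.default_rng(7)
data = rng.lognormal(mean=0.0, sigma=1.5, size=5_000)  # strongly non-Gaussian
m, err = jackknife_mean_error(data)
print(f"mean = {m:.3f} +/- {err:.3f}")
```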

385

A Parallel Quantum Computer Simulator  

E-print Network

A Quantum Computer is a new type of computer which can efficiently solve complex problems such as prime factorization. A quantum computer threatens the security of public key encryption systems because these systems rely on the fact that prime factorization is computationally difficult. Errors limit the effectiveness of quantum computers. Because of the exponential nature of quantum computers, simulating the effect of errors on them requires a vast amount of processing and memory resources. In this paper we describe a parallel simulator which assesses the feasibility of quantum computers. We also derive and validate an analytical model of execution time for the simulator, which shows that parallel quantum computer simulation is very scalable.

Kevin M. Obenland; Alvin M. Despain

1998-04-16

386

Parallel multiplex laser feedback interferometry  

SciTech Connect

We present a parallel multiplex laser feedback interferometer based on spatial multiplexing which avoids the signal crosstalk in the former feedback interferometer. The interferometer outputs two close parallel laser beams, whose frequencies are shifted by two acousto-optic modulators by 2Ω simultaneously. A static reference mirror is inserted into one of the optical paths as the reference optical path. The other beam impinges on the target as the measurement optical path. Phase variations of the two feedback laser beams are simultaneously measured through heterodyne demodulation with two different detectors. Their subtraction accurately reflects the target displacement. Under typical room conditions, experimental results show a resolution of 1.6 nm and accuracy of 7.8 nm within the range of 100 μm.

Zhang, Song; Tan, Yidong; Zhang, Shulian, E-mail: zsl-dpi@mail.tsinghua.edu.cn [State Key Laboratory of Precision Measurements, Department of Precision Instruments, Tsinghua University, Beijing 100084 (China)]

2013-12-15

387

Instruction-level parallel processing.  

PubMed

The performance of microprocessors has increased steadily over the past 20 years at a rate of about 50% per year. This is the cumulative result of architectural improvements as well as increases in circuit speed. Moreover, this improvement has been obtained in a transparent fashion, that is, without requiring programmers to rethink their algorithms and programs, thereby enabling the tremendous proliferation of computers that we see today. To continue this performance growth, microprocessor designers have incorporated instruction-level parallelism (ILP) into new designs. ILP utilizes the parallel execution of the lowest level computer operations (adds, multiplies, loads, and so on) to increase performance transparently. The use of ILP promises to make possible, within the next few years, microprocessors whose performance is many times that of a CRAY-1S. This article provides an overview of ILP, with an emphasis on ILP architectures (superscalar, VLIW, and dataflow processors) and the compiler techniques necessary to make ILP work well. PMID:17831442

Fisher, J A; Rau, R

1991-09-13

388

Parallel processing spacecraft communication system  

NASA Technical Reports Server (NTRS)

An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used, and the one which is used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID. Each channel ID can be separately processed in parallel. This obviates the problem of waiting for error correction processing. If that channel number is zero, however, it indicates that the frame of data represents a critical command only. That data is handled in a special way, independent of the software. Otherwise, the processed data is further handled using special double buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.

Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)

1998-01-01

389

Parallel supercomputing with commodity components  

SciTech Connect

We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 x 10^15 floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

Warren, M.S.; Goda, M.P. [Los Alamos National Lab., NM (United States); Becker, D.J. [Goddard Space Flight Center, Greenbelt, MD (United States)] [and others

1997-09-01

390

Parallel multiplex laser feedback interferometry  

NASA Astrophysics Data System (ADS)

We present a parallel multiplex laser feedback interferometer based on spatial multiplexing which avoids the signal crosstalk in the former feedback interferometer. The interferometer outputs two close parallel laser beams, whose frequencies are shifted by two acousto-optic modulators by 2Ω simultaneously. A static reference mirror is inserted into one of the optical paths as the reference optical path. The other beam impinges on the target as the measurement optical path. Phase variations of the two feedback laser beams are simultaneously measured through heterodyne demodulation with two different detectors. Their subtraction accurately reflects the target displacement. Under typical room conditions, experimental results show a resolution of 1.6 nm and accuracy of 7.8 nm within the range of 100 μm.

Zhang, Song; Tan, Yidong; Zhang, Shulian

2013-12-01

391

Synthetic gene networks as potential flexible parallel logic gates  

NASA Astrophysics Data System (ADS)

We show how a synthetic gene network can function, in an optimal window of noise, as a robust logic gate. Interestingly, noise enhances the reliability of the logic operation. Further, the noise level can also be used to switch logic functionality, for instance toggle between AND, OR and XOR gates. We also consider a two-dimensional model of a gene network, where we show how two complementary gate operations can be achieved simultaneously. This indicates the flexible parallel processing potential of this biological system.

Ando, Hiroyasu; Sinha, Sudeshna; Storni, Remo; Aihara, Kazuyuki

2011-03-01

392

Dynamic Scheduling on Parallel Machines  

Microsoft Academic Search

The problem of online job scheduling on various parallel architectures is studied. An O((log log n)^{1/2})-competitive algorithm for online dynamic scheduling on an n x n mesh is given. It is proved that this algorithm is optimal up to a constant factor. The algorithm is not greedy, and the lower bound proof shows that no greedy-like algorithm can be very

Anja Feldmann; Jiri Sgall; Shang-hua Teng

1991-01-01

393

Universal schemes for parallel communication  

Microsoft Academic Search

In this paper we isolate a combinatorial problem that, we believe, lies at the heart of this question and provide some encouragingly positive solutions to it. We show that there exists an N-processor realistic computer that can simulate arbitrary idealistic N-processor parallel computations with only a factor of O(log N) loss of runtime efficiency. The main innovation is an O(log

Leslie G. Valiant; Gordon J. Brebner

1981-01-01

394

Parallel strategies for SAR processing  

NASA Astrophysics Data System (ADS)

This article proposes a series of strategies for improving the computational processing of the Synthetic Aperture Radar (SAR) signal, following the three usual lines of action to speed up the execution of any computer program. First, the optimization of both the data structures and the application architecture is studied. Second, a hardware improvement is considered. For the former, the data structures usually employed in SAR processing are examined, the use of parallel structures is proposed, and the implementation of the parallelized algorithms is described. The parallel application architecture classifies processes as fine or coarse grain; these are assigned to individual processors or divided among processors, each in its corresponding architecture. For the latter, the hardware employed in the parallel processing used for SAR is studied. The improvement concerns several kinds of platforms on which SAR processing is implemented, shared-memory multiprocessors and distributed-memory multicomputers. A comparison between them gives some guidelines for achieving maximum throughput with minimum latency and maximum effectiveness with minimum cost, together with limited complexity. It is concluded that processing the algorithms in a GNU/Linux environment on a Beowulf cluster platform offers, under certain conditions, the best compromise between performance and cost, and promises the greatest development in the coming years for the computationally demanding Synthetic Aperture Radar applications.

Segoviano, Jesus A.

2004-12-01

395

Fully Parallel Stochastic LDPC Decoders  

Microsoft Academic Search

Stochastic decoding is a new approach to iterative decoding on graphs. This paper presents a hardware architecture for fully parallel stochastic low-density parity-check (LDPC) decoders. To obtain the characteristics of the proposed architecture, we apply this architecture to decode an irregular state-of-the-art (1056,528) LDPC code on a Xilinx Virtex-4 LX200 field-programmable gate-array (FPGA) device. The implemented decoder achieves

Saeed Sharifi Tehrani; Shie Mannor; Warren J. Gross

2008-01-01

396

Middle Path Coarse Grain Parallelization  

E-print Network

(Slide fragments; only headings are recoverable.) OpenMP Fortran and MPI Fortran backends; program decomposition into BPA, RB, and SB blocks; near-fine-grain parallelism in loop bodies (3rd layer); data dependency, extended control dependency, and conditional branch.

Kasahara, Hironori

397

Parallel Monte Carlo Simulation for control system design  

NASA Technical Reports Server (NTRS)

The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.

Schubert, Wolfgang M.

1995-01-01

398

Mapping Pixel Windows To Vectors For Parallel Processing  

NASA Technical Reports Server (NTRS)

Mapping performed by matrices of transistor switches. Arrays of transistor switches devised for use in forming simultaneous connections from square subarray (window) of n x n pixels within electronic imaging device containing n_p x n_p array of pixels to linear array of n^2 input terminals of electronic neural network or other parallel-processing circuit. Method helps to realize potential for rapidity in parallel processing for such applications as enhancement of images and recognition of patterns. In providing simultaneous connections, overcomes timing bottleneck of older multiplexing, serial-switching, and sample-and-hold methods.

Duong, Tuan A.

1996-01-01
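
In software, the mapping is just an index transformation from the n x n window at a given corner onto a length-n^2 vector; the invention's point is performing all n^2 connections simultaneously in hardware. A serial sketch of the mapping itself:

```python
import numpy as np

def window_to_vector(image, row, col, n):
    """Map the n x n window with top-left corner (row, col) onto a length-n**2
    vector (done serially here; the described switch matrix does it at once)."""
    return image[row:row + n, col:col + n].reshape(n * n)

image = np.arange(36).reshape(6, 6)      # a toy 6 x 6 "sensor"
print(window_to_vector(image, 1, 2, 3))  # -> [ 8  9 10 14 15 16 20 21 22]
```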

399

Reliability and structural integrity. [analytical model for calculating crack detection probability  

NASA Technical Reports Server (NTRS)

An analytic model is developed to calculate the reliability of a structure after it is inspected for cracks. The model accounts for the growth of undiscovered cracks between inspections and their effect upon the reliability after subsequent inspections. The model is based upon a differential form of Bayes' Theorem for reliability, and upon fracture mechanics for crack growth.

Davidson, J. R.

1973-01-01
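
The Bayesian core of such a model is the update after an inspection that finds nothing: the crack may truly be absent, or present but missed. A sketch of that single step (the detection probability and prior are illustrative; the report additionally grows undiscovered cracks between inspections):

```python
def posterior_crack_probability(prior, detection_prob):
    """Bayes' update given an inspection that found no crack."""
    missed = prior * (1.0 - detection_prob)      # crack present but not seen
    return missed / (missed + (1.0 - prior))     # renormalize over "no find"

p = 0.05  # prior probability that a crack is present
for k in range(1, 4):
    p = posterior_crack_probability(p, detection_prob=0.8)
    print(f"after clean inspection {k}: P(crack) = {p:.4f}")
```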

400

Claims about the Reliability of Student Evaluations of Instruction: The Ecological Fallacy Rides Again  

ERIC Educational Resources Information Center

The vast majority of the research on student evaluation of instruction has assessed the reliability of groups of courses and yielded either a single reliability coefficient for the entire group, or grouped reliability coefficients for each student evaluation of teaching (SET) item. This manuscript argues that these practices constitute a form of…

Morley, Donald D.

2012-01-01

401

Reliability Aspects of Design of Combined Piled-Raft Foundations (CPRF)  

Microsoft Academic Search

SUMMARY The reliability aspects of structural behaviour of CPRF are investigated. The problem is connected with the stochastic model of soil properties. In this approach the influence of autocorrelation for the stiffness modulus is considered. The calculations are made by means of the First Order Reliability Method (FORM) according to Level II of the reliability analysis. The first results are

Carsten Ahner; Dmitri Soukhov; Gert König

1998-01-01

402

Validity of reliability: comparison of interrater reliabilities of psychopathological symptoms.  

PubMed

We have tested the stability of interrater reliability of psychiatric symptoms over a quarter of a century using 2 rating scales. Interrater reliabilities of items of 2 psychiatric rating scales employed in 2 consecutive follow-ups were compared. Interrater reliabilities proved to be by and large stable. Interrater reliability depends on the standard deviation of the item scores. In addition to the traditional approach, a new statistical method for unifying the assessments from multiple raters is also presented. Using this method, we demonstrated that probabilities of correct ratings are higher in the absence of manifest symptoms, or in the presence of symptoms, as compared with cases characterized by middle scores. To interpret the relationships revealed in the setting of the experiment, we introduce for its theoretical designation the term "validity of reliability." It is recommended for evaluation of results of rating scales in the context of psychiatric nosology. PMID:17632252

Pethõ, Bertalan; Tusnády, Gábor; Vargha, András; Tolna, Judit; Farkas, Márta; Vizkeleti, Györgyi; Tóth, Agoston; Szilágyi, András; Bitter, István; Kelemen, András; Czobor, Pál

2007-07-01

403

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 9, NO. 3, MARCH 1998 283 Parallel Computation  

E-print Network

Performance demonstrates that parallel computational methods can significantly reduce the computational time. We developed parallel methods to reduce the time required to perform two computationally intensive analyses: homologous sequence searching and multiple sequence alignment. Our parallel searching method reduces

404

Highly parallel sparse Cholesky factorization  

NASA Technical Reports Server (NTRS)

Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

Gilbert, John R.; Schreiber, Robert

1990-01-01
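
The idea of a dense factorization as the key subroutine can be seen in a serial sketch. In the right-looking form below (NumPy, illustrative only), the trailing-submatrix column updates are mutually independent, which is exactly the parallelism a grid of processors can exploit.

    import numpy as np

    def cholesky_right_looking(A):
        # Right-looking Cholesky; after column k is scaled, each trailing
        # column update is independent and could run on its own processor.
        A = A.copy()
        n = A.shape[0]
        for k in range(n):
            A[k, k] = np.sqrt(A[k, k])
            A[k+1:, k] /= A[k, k]
            for j in range(k + 1, n):
                A[j:, j] -= A[j:, k] * A[j, k]
        return np.tril(A)

    M = np.array([[4., 2.], [2., 3.]])
    L = cholesky_right_looking(M)
    assert np.allclose(L @ L.T, M)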

405

Catalytic Parallel Kinetic Resolution under Homogeneous Conditions  

PubMed Central

Two complementary chiral catalysts, the phosphine 8d and the DMAP-derived ent-23b, are used simultaneously to selectively activate one of a mixture of two different achiral anhydrides as acyl donors under homogeneous conditions. The resulting activated intermediates 25 and 26 react with the racemic benzylic alcohol 5 to form enantioenriched esters (R)-24 and (S)-17 by fully catalytic parallel kinetic resolution (PKR). The aroyl ester (R)-24 is obtained with near-ideal enantioselectivity for the PKR process, but (S)-17 is contaminated by ca. 8% of the minor enantiomer (R)-17 resulting from a second pathway via formation of mixed anhydride 24 and its activation by 8d. PMID:20557113

Duffey, Trisha A.; MacKay, James A.; Vedejs, Edwin

2010-01-01

406

A parallel stereo algorithm that produces dense depth maps and preserves image features  

Microsoft Academic Search

To compute reliable dense depth maps, a stereo algorithm must preserve depth discontinuities and avoid gross errors. In this paper, we show how simple and parallel techniques can be combined to achieve this goal and deal with complex real world scenes. Our algorithm relies on correlation followed by interpolation. During the correlation phase the two images play a symmetric role

Pascal Fua

1992-01-01

407

A parallel-plate actuated test structure for fatigue analysis of MEMS  

Microsoft Academic Search

Silicon, heavily used as a structural material in MEMS, is subject to several reliability concerns, most importantly fatigue, which can limit the utility of MEMS devices in commercial and defense applications. A novel parallel-plate actuated test structure for fatigue analysis of MEMS is designed in this paper, and the structure is fabricated by bulk micromachining. Firstly, according to the predefined

Qi Min; Junyong Tao; Yun'an Zhang; Xun Chen

2011-01-01

408

Parallel global optimisation meta-heuristics using an asynchronous island-model  

Microsoft Academic Search

We propose an asynchronous island-model algorithm distribution framework and test the performance of the popular Differential Evolution algorithm when a few processors are available. We confirm that the island-model introduces the possibility of creating new algorithms consistently going beyond the performances of parallel Differential Evolution multi starts. Moreover, we suggest that using heterogeneous strategies along different islands consistently reaches the reliability

Dario Izzo; Marek Rucinski; Christos Ampatzis

2009-01-01
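
A toy island model makes the topology concrete. The serial sketch below evolves several populations independently and migrates each island's best individual around a ring; it stands in for, and greatly simplifies, the asynchronous distributed Differential Evolution the paper studies (the objective and parameters are made up).

    import random

    def sphere(x):
        return sum(v * v for v in x)

    def evolve(pop, steps=50):
        # Simple mutation hill-climb standing in for a real optimizer.
        for _ in range(steps):
            parent = min(pop, key=sphere)
            child = [v + random.gauss(0, 0.1) for v in parent]
            worst = max(range(len(pop)), key=lambda i: sphere(pop[i]))
            if sphere(child) < sphere(pop[worst]):
                pop[worst] = child
        return pop

    islands = [[[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
               for _ in range(4)]
    for epoch in range(10):
        islands = [evolve(pop) for pop in islands]
        for i, pop in enumerate(islands):      # ring migration of the best
            migrant = min(islands[(i - 1) % len(islands)], key=sphere)
            pop[max(range(len(pop)), key=lambda j: sphere(pop[j]))] = list(migrant)
    print(min(sphere(ind) for pop in islands for ind in pop))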

409

Scheduling Asymmetric Parallelism on a PlayStation3 Cluster Filip Blagojevic 1  

E-print Network

and is reliable in predicting optimal mappings of nested parallelism in MPI programs on the PS3 cluster. The presented co-scheduling heuristics reduce slack time on the accelerator cores of the PS3 and improve performance on a cluster of Sony PlayStation3 (PS3) nodes. Our analysis reveals the sensitivity of computation and communication

Nikolopoulos, Dimitris

410

Design of voltage controller for parallel operated self excited induction generator — Micro grid  

Microsoft Academic Search

The micro-grid system is currently a conceptual solution for fulfilling the commitment of reliable power delivery in future power systems. Renewable power sources such as wind and hydro offer the best potential to supply free power for future micro-grid systems. The micro grid considered consists of six parallel-operated self-excited induction generators driven by either wind or hydro systems. The self excited

Monika Jain; Sushma Gupta; Gayatri Agnihotri

2011-01-01

411

Native Speakers' versus L2 Learners' Sensitivity to Parallelism in Vp-Ellipsis  

ERIC Educational Resources Information Center

This article examines sensitivity to structural parallelism in verb phrase ellipsis constructions in English native speakers as well as in three groups of advanced second language (L2) learners. The results of a set of experiments, based on those of Tanenhaus and Carlson (1990), reveal subtle but reliable differences among the various learner…

Duffield, Nigel G.; Matsuo, Ayumi

2009-01-01

412

Parallel channel instabilities in boiling water reactor systems: boundary conditions for out of phase oscillations  

Microsoft Academic Search

In this paper we study the boundary conditions during out of phase oscillations, in a system formed by two parallel channels coupled to multimodal neutron kinetics. The fact that the pressure drop can change with time, but remains the same in all the parallel channels, leads us to analytical integration of the time derivative term of the channel momentum equation,

J. L. Muñoz-Cobo; M. Z. Podowski; S. Chiva

2002-01-01

413

NWChem: scalable parallel computational chemistry  

SciTech Connect

NWChem is a general purpose computational chemistry code specifically designed to run on distributed memory parallel computers. The core functionality of the code focuses on molecular dynamics, Hartree-Fock and density functional theory methods for both plane-wave and Gaussian basis sets, tensor contraction engine based coupled cluster capabilities and combined quantum mechanics/molecular mechanics descriptions. It was realized from the beginning that scalable implementations of these methods required a programming paradigm inherently different from what message passing approaches could offer. In response a global address space library, the Global Array Toolkit, was developed. The programming model it offers is based on using predominantly one-sided communication. This model underpins most of the functionality in NWChem, and its power is exemplified by the fact that the code scales to tens of thousands of processors. In this paper the core capabilities of NWChem are described as well as their implementation to achieve an efficient computational chemistry code with high parallel scalability. NWChem is a modern, open source, computational chemistry code specifically designed for large scale parallel applications. To meet the challenges of developing efficient, scalable and portable programs of this nature a particular code design was adopted. This code design involved two main features. First of all, the code is built up in a modular fashion so that a large variety of functionality can be integrated easily. Secondly, to facilitate writing complex parallel algorithms the Global Array toolkit was developed. This toolkit allows one to write parallel applications in a shared memory like approach, but offers additional mechanisms to exploit data locality to lower communication overheads. This framework has proven to be very successful in computational chemistry but is applicable to any engineering domain. Within the context created by the features above, NWChem has grown into a general purpose computational chemistry code that supports a wide variety of energy expressions and capabilities to calculate properties based thereon. The main energy expressions are classical mechanics force fields, Hartree-Fock and DFT both for finite systems and condensed phase systems, coupled cluster, as well as QM/MM. For most energy expressions single point calculations, geometry optimizations, excited states, and other properties are available. Below we briefly discuss each of the main energy expressions and the critical points involved in scalable implementations thereof.

van Dam, Hubertus JJ; De Jong, Wibe A.; Bylaska, Eric J.; Govind, Niranjan; Kowalski, Karol; Straatsma, TP; Valiev, Marat

2011-11-01
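
The flavor of predominantly one-sided communication can be shown with MPI's RMA interface via mpi4py; this is a sketch of the general idea, not the Global Array Toolkit API that NWChem actually uses. Run with two ranks (e.g. mpiexec -n 2 python rma_demo.py, a hypothetical file name).

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    buf = np.zeros(4, dtype='d')            # each rank exposes a window
    win = MPI.Win.Create(buf, comm=comm)

    win.Fence()
    if rank == 0:
        data = np.arange(4, dtype='d')
        win.Put(data, target_rank=1)        # one-sided write into rank 1's
    win.Fence()                             # memory; rank 1 posts no receive

    if rank == 1:
        print("rank 1 window now holds:", buf)
    win.Free()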

414

The process group approach to reliable distributed computing  

NASA Technical Reports Server (NTRS)

The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

Birman, Kenneth P.

1992-01-01

415

Reliability and Maintainability (RAM) Training  

NASA Technical Reports Server (NTRS)

The theme of this manual is failure physics: the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low-cost reliable products. In a broader sense the manual should do more. It should underscore the urgent need for mature attitudes toward reliability. Five of the chapters were originally presented as a classroom course to over 1000 Martin Marietta engineers and technicians. Another four chapters and three appendixes have been added. We begin with a view of reliability from the years 1940 to 2000. Chapter 2 starts the training material with a review of mathematics and a description of what elements contribute to product failures. The remaining chapters elucidate basic reliability theory and the disciplines that allow us to control and eliminate failures.

Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Packard, Michael H. (Editor)

2000-01-01

416

A reliable liquid helium detector  

NASA Technical Reports Server (NTRS)

Detector and indicator system, utilizing commercial perforated germanium cryogenic thermometer as level sensor containing adjustable level discriminator with indicators, operates reliably over pressure range from 50 to 900 mm Hg without electronic adjustments.

Krawczonek, W. M.; Stephenson, B.

1972-01-01

417

Asynchronous Migration in Parallel Genetic Programming  

E-print Network

Every subpopulation was migrated between processors using a fully connected topology. The parallel implementation used MPI as a message passing library. In the first stage of the implementation, the migration

Fernandez, Thomas

418

Testing Techniques for Parallel Software (TTPS).  

National Technical Information Service (NTIS)

The Testing Techniques for Parallel Software (TTPS) project was conducted by Optimization Technology, Inc. for Rome Laboratory. The purpose was to investigate and implement candidate parallel software testing techniques to improve the quality and reliabil...

R. C. Cox

1996-01-01

419

Parallel Coupled Micro-Macro Actuators  

E-print Network

This thesis presents a new actuator system consisting of a micro-actuator and a macro-actuator coupled in parallel via a compliant transmission. The system is called the Parallel Coupled Micro-Macro Actuator, or PaCMMA. ...

Morrell, John Bryant

1996-01-01

420

DC Circuits: Series-Parallel Resistances  

NSDL National Science Digital Library

In this interactive learning activity, students will learn more about series-parallel circuits. They will measure and calculate the resistance of series-parallel circuits and answer several questions about the example circuit shown.

2013-07-29
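
The calculation the activity asks students to perform reduces to two rules, sketched below with made-up resistor values: resistances add in series, and conductances (reciprocals) add in parallel.

    def series(*rs):
        # Equivalent resistance in series: R = R1 + R2 + ...
        return sum(rs)

    def parallel(*rs):
        # Equivalent resistance in parallel: 1/R = 1/R1 + 1/R2 + ...
        return 1.0 / sum(1.0 / r for r in rs)

    # R1 in series with the parallel pair R2 || R3:
    r_total = series(100.0, parallel(220.0, 330.0))
    print(r_total)   # 100 + 132 = 232 ohms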

421

On-the-fly pipeline parallelism  

E-print Network

Pipeline parallelism organizes a parallel program as a linear sequence of s stages. Each stage processes elements of a data stream, passing each processed data element to the next stage, and then taking on a new element ...

Lee, I-Ting Angelina
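
A queue-and-thread sketch shows the basic shape of pipeline parallelism: each stage consumes from one queue and produces into the next, so different elements of the stream occupy different stages at the same time. This Python toy is illustrative only; the thesis concerns on-the-fly pipeline parallelism in a Cilk-based runtime.

    import threading
    import queue

    DONE = object()   # sentinel marking the end of the stream

    def stage(fn, q_in, q_out):
        # One pipeline stage: transform each element, forward the sentinel.
        while True:
            item = q_in.get()
            if item is DONE:
                q_out.put(DONE)
                return
            q_out.put(fn(item))

    q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
    fns = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
    threads = [threading.Thread(target=stage, args=(f, qi, qo))
               for f, qi, qo in zip(fns, (q0, q1, q2), (q1, q2, q3))]
    for t in threads:
        t.start()
    for x in range(5):
        q0.put(x)
    q0.put(DONE)
    while (out := q3.get()) is not DONE:
        print(out)   # ((x + 1) * 2) - 3 for x = 0..4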

422

Instrumentation for parallel magnetic resonance imaging  

E-print Network

of arrays of sensors. In parallelization, multiple MR scanners (or multiple sensors) are used to collect images from different samples simultaneously. This allows for an increase in the throughput, not the inherent speed, of the MR experiment. Parallel...

Brown, David Gerald

2007-04-25

423

Adaptively Parallel Processor Allocation for Cilk Jobs  

E-print Network

The problem of allocating processor resources fairly and efficiently to parallel jobs has been studied extensively in the past. Most of this work, however, assumes that the instantaneous parallelism of the jobs is known ...

Sen, Siddhartha

424

High performance parallel algorithms for incompressible flows  

E-print Network

Object-Oriented design for the algorithms and its parallel implementation in multi-threading and multi-processing environments is presented. Inexpensive parallel matrix-vector products using bounded buffers for inter-processor communication are suggested...

Sambavaram, Sreekanth Reddy

2012-06-07

425

Reliable Computing 1 (2) (1995), pp. 109-140 Parallel interval-basedreasoning in medical  

E-print Network

The paper continues a series of works, including the monograph [33], on which inference and data manipulation is based. The semantic justification of these logics is provided

Kearfott, R. Baker

426

Laser and LED reliability update  

NASA Astrophysics Data System (ADS)

The reliability of various types of InGaAsP/InP lasers and LEDs is reviewed with regard to failure modes and system requirements. A systematic exposition, including the degradation modes that govern lifetime, is given. Optical transmission systems are reviewed in general terms. Surface-emitting LEDs and lasers are mainly discussed; edge-emitting LEDs are considered briefly. Degradation modes of optical devices and spectral aspects of reliability for distributed-feedback (DFB) lasers are described.

Fukuda, Mitsuo

1988-10-01

427

Advanced techniques in reliability model representation and solution  

NASA Technical Reports Server (NTRS)

The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.

Palumbo, Daniel L.; Nicol, David M.

1992-01-01
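
A miniature Markov reliability model suggests what such tools automate at much larger scale. The sketch below (SciPy; the rates and structure are illustrative, not from the paper) computes the probability that a repairable duplex system has reached its absorbing failed state by a mission time t.

    import numpy as np
    from scipy.linalg import expm

    # States: 0 = both units up, 1 = one up, 2 = system failed (absorbing).
    # Units fail at rate lam; a failed unit is repaired at rate mu.
    lam, mu = 1e-4, 1e-2
    Q = np.array([[-2 * lam,      2 * lam,  0.0],
                  [      mu, -(mu + lam),  lam],
                  [     0.0,         0.0,  0.0]])   # CTMC generator

    t = 1000.0                    # mission time
    p = expm(Q * t)[0]            # distribution starting from state 0
    print("P(system failed by t) =", p[2])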

428

Parallel machine architecture and compiler design facilities  

NASA Technical Reports Server (NTRS)

The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility for rapid prototyping of parallelized compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

1990-01-01

429

Overview – Parallel Computing: Numerics, Applications, and Trends  

Microsoft Academic Search

This book is intended for researchers and practitioners as a foundation for modern parallel computing with several of its important parallel applications, and also for students as a basic or supplementary book to accompany advanced courses on parallel computing. Fifteen chapters cover the most important issues in parallel computing, from basic principles through more complex theoretical problems and applications, together

Marián Vajteršic; Peter Zinterhof; Roman Trobec

430

Hybrid Parallel Programming on HPC Platforms  

Microsoft Academic Search

Summary Most HPC systems are clusters of shared memory nodes. Parallel programming must combine the distributed memory parallelization on the node interconnect with the shared memory parallelization inside of each node. Various hybrid MPI+OpenMP programming models are compared with pure MPI. Benchmark results of several platforms are presented. This paper analyzes the strength and weakness of several parallel programming

Rolf Rabenseifner

2003-01-01

431

Integrating Parallelizing Compilation Technologies for SMP Clusters  

Microsoft Academic Search

In this paper, a source to source parallelizing compiler system, AutoPar, is presented. The system transforms FORTRAN programs to multi-level hybrid MPI/OpenMP parallel programs. Integrated parallel optimizing technologies are utilized extensively to derive an effective program decomposition in the whole program scope. Other features such as synchronization optimization and communication optimization improve the performance scalability of the generated parallel programs,

Xiao-Bing Feng; Li Chen; Yi-Ran Wang; An Xiao-mi; Lin Ma; Chun-lei Sang; Zhao-Qing Zhang

2005-01-01

432

Automatic Multilevel Parallelization Using OpenMP  

NASA Technical Reports Server (NTRS)

In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.

Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

2002-01-01

433

EUROPA Parallel C++ Version 2.1  

E-print Network

EUROPA Parallel C++ Version 2.1. The EUROPA Working Group on Parallel C++, Architecture SIG (Roberts@cs.ucl.ac.uk; Winder Russel, R.Winder@dcs.kcl.ac.uk). Abstract: This paper presents the definition of EUROPA: a framework within which parallel C++ environments can be developed and standardised. EUROPA (also called EC++) sets

Caromel, Denis

434

Identifying, Quantifying, Extracting and Enhancing Implicit Parallelism  

ERIC Educational Resources Information Center

The shift of the microprocessor industry towards multicore architectures has placed a huge burden on the programmers by requiring explicit parallelization for performance. Implicit Parallelization is an alternative that could ease the burden on programmers by parallelizing applications "under the covers" while maintaining sequential semantics…

Agarwal, Mayank

2009-01-01

435

Compiling parallel programs by optimizing performance  

Microsoft Academic Search

This paper describes how Crystal, a language based on familiar mathematical notation and lambda calculus, addresses the issues of programmability and performance for parallel supercomputers. Some scientific programmers and theoreticians may ask, “What is new about Crystal?” or “How is it different from existing functional languages?” The answers lie in its model of parallel computation and a theory of parallel

Marina Chen; Young-Il Choo; Jingke Li

1988-01-01

436

Parallelism generics for Ada 2005 and beyond  

Microsoft Academic Search

The Ada programming language is seemingly well-positioned to take advantage of emerging multi-core technologies. While it has always been possible to write parallel algorithms in Ada, there are certain classes of problems where the level of effort to write parallel algorithms outweighs the ease and simplicity of a sequential approach. This can result in lost opportunities for parallelism and

Brad J. Moore

2010-01-01

437

Agent communication based SAR image parallel processing  

Microsoft Academic Search

Airborne SAR remote sensing images are characterized by large data volumes and a heavy computation burden, so processing requires very large computer memory and strong computation capability. After introducing the SAR image processing procedure, we study SAR image processing using parallel computation technology. The parallel processing mechanism is based on the parallel computer cluster operation

Tianhe Chi; Xin Zhang; Hongqiao Wu; Jinyun Fang

2003-01-01

438

GPU Massively Parallel Part I. Preliminaries  

E-print Network

Chapter 21 GPU Massively Parallel Part I. Preliminaries Part II. Tightly Coupled Multicore Part III. Loosely Coupled Cluster Part IV. GPU Acceleration Chapter 21. GPU Massively Parallel Chapter 22. GPU Parallel Reduction Chapter 23. Multi-GPU Programming Chapter 24. GPU Sequential Dependencies

Kaminsky, Alan

439

Hyperdimensional Data Analysis Using Parallel Coordinates  

Microsoft Academic Search

This article presents the basic results of using the parallel coordinate representation as a high-dimensional data analysis tool. Several alternatives are reviewed. The basic algorithm for parallel coordinates is laid out and a discussion of its properties as a projective transformation is given. Several duality results are discussed along with their interpretations as data analysis tools. Permutations of the parallel

Edward J. Wegman

1990-01-01

440

Class Overview: 1. Concepts of parallel structures  

E-print Network

Class notes excerpt (slide residue): concepts of data parallelism (unlimited / fixed / scalable parallelism); working-group discussion; locality, including temporal locality (references to memory clustered in time), with the count3s program as the running example. Parallel & Distributed Computing class notes, 01/31/2011.

Iamnitchi, Adriana

441

Experimental Study of Multipopulation Parallel Genetic Programming.  

E-print Network

There is no doubt about the nature of the Parallel Genetic Programming (PGP) algorithm; all authors agree. Studies have reported the efficiency of parallel Genetic Algorithms and have studied the relationship

Fernandez, Thomas

442

Generating random numbers in parallel Mark Hoemmen  

E-print Network

the random numbers in parallel too. We won't make you implement all of this yourself in HW 1, but we do think you may need to parallelize random number generation yourself some time. We'll begin by explaining what we mean by "random". You may want random numbers in some of your parallel applications in the future, and you may very well have

California at Berkeley, University of
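
One standard answer to the parallel random number question is to give each worker its own statistically independent stream. The NumPy sketch below does this with seed spawning; the seed and worker count are arbitrary.

    import numpy as np

    # Spawn independent child seeds from one root seed, then build one
    # generator per worker; the streams do not need to coordinate.
    ss = np.random.SeedSequence(20240101)
    streams = [np.random.default_rng(child) for child in ss.spawn(4)]

    for i, rng in enumerate(streams):
        print(f"worker {i}:", rng.random(3))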

443

Issues with Multithreaded Parallelism on Multicore Architectures  

E-print Network

Issues with Multithreaded Parallelism on Multicore Architectures. Marc Moreno Maza, University of Western Ontario, London, Ontario (Canada). CS3101 lecture slides.

Moreno Maza, Marc

444

Happe Honeywell Associative Parallel Processing Ensemble  

Microsoft Academic Search

Many problems, inherent in air traffic control, weather analysis and prediction, nuclear reaction, missile tracking, and hydrodynamics have common processing characteristics that can most efficiently be solved using parallel “non-conventional” techniques. Because of high sensor data rates, these parallel problem solving techniques cannot be economically applied using the standard sequential computer. The application of special processing techniques such as parallel/associative

Orin E. Marvel

1973-01-01

445

ASYNCHRONOUS PARALLEL PATTERN SEARCH FOR NONLINEAR OPTIMIZATION  

E-print Network

ASYNCHRONOUS PARALLEL PATTERN SEARCH FOR NONLINEAR OPTIMIZATION. PATRICIA D. HOUGH, TAMARA G. KOLDA. Vol. 23, No. 1, pp. 134-156. Abstract. We introduce a new asynchronous parallel pattern search (APPS). Parallel pattern search can be quite useful for engineering optimization problems characterized by a small number

Kolda, Tamara G.
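
The synchronous compass search that APPS starts from fits in a few lines; the sketch below polls the 2n coordinate directions, accepts improvements, and halves the step otherwise. It omits the paper's actual contribution, removing the synchronization across processors (each trial-point evaluation is what would be farmed out).

    def pattern_search(f, x, step=1.0, tol=1e-6):
        # Basic compass/pattern search on a list of coordinates x.
        n = len(x)
        fx = f(x)
        while step > tol:
            improved = False
            for i in range(n):
                for s in (+step, -step):
                    y = list(x)
                    y[i] += s
                    fy = f(y)          # each trial point could be evaluated
                    if fy < fx:        # on a different processor
                        x, fx = y, fy
                        improved = True
            if not improved:
                step *= 0.5            # no direction improved: shrink step
        return x, fx

    print(pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                         [0.0, 0.0]))  # converges near (1, -2)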

446

ASYNCHRONOUS PARALLEL PATTERN SEARCH FOR NONLINEAR OPTIMIZATION

E-print Network

ASYNCHRONOUS PARALLEL PATTERN SEARCH FOR NONLINEAR OPTIMIZATION. PATRICIA D. HOUGH, TAMARA G. KOLDA. Vol. 23, No. 1, pp. 134-156. Abstract. We introduce a new asynchronous parallel pattern search (APPS). Parallel pattern search can be quite useful for engineering optimization problems characterized

Kolda, Tamara G.

447

Parallel Processing at the High School Level.  

ERIC Educational Resources Information Center

This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

Sheary, Kathryn Anne

448

Parallel VLSI Circuit Analysis and Optimization  

E-print Network

List-of-tables residue from the thesis front matter; the recoverable entries compare HMAPS implementation 1 (inter-algorithm parallelism only, 4 threads) and HMAPS implementation 2 (inter- and intra-algorithm parallelism, 8 threads) against Newton+Gear2, and give a computational component cost breakdown (in seconds) for each example circuit.

Ye, Xiaoji

2012-02-14

449

Complete classification of parallel Lorentz surfaces in four-dimensional neutral pseudosphere  

SciTech Connect

A Lorentz surface of an indefinite space form is called parallel if its second fundamental form is parallel with respect to the Van der Waerden-Bortolotti connection. Such surfaces are locally invariant under the reflection with respect to the normal space at each point. Parallel surfaces are important in geometry as well as in general relativity since extrinsic invariants of such surfaces do not change from point to point. Parallel Lorentz surfaces in four-dimensional (4D) Lorentzian space forms are classified by Chen and Van der Veken [''Complete classification of parallel surfaces in 4-dimensional Lorentz space forms,'' Tohoku Math. J. 61, 1 (2009)]. Explicit classifications of parallel Lorentz surfaces in the pseudo-Euclidean 4-space E{sub 2}{sup 4} and in the pseudohyperbolic 4-space H{sub 2}{sup 4}(-1) were obtained recently by Chen et al. [''Complete classification of parallel Lorentzian surfaces in Lorentzian complex space forms,'' Int. J. Math. 21, 665 (2010); ''Complete classification of parallel Lorentz surfaces in neutral pseudo hyperbolic 4-space,'' Cent. Eur. J. Math. 8, 706 (2010)], respectively. In this article, we completely classify the remaining case, namely parallel Lorentz surfaces in the 4D neutral pseudosphere S{sub 2}{sup 4}(1). Our result states that there are 24 families of such surfaces in S{sub 2}{sup 4}(1). Conversely, every parallel Lorentz surface in S{sub 2}{sup 4}(1) is obtained from one of the 24 families. The main result indicates that there are major differences between Lorentz surfaces in the de Sitter 4-space dS{sub 4} and in the neutral pseudo 4-sphere S{sub 2}{sup 4}.

Chen, Bang-Yen [Department of Mathematics, Michigan State University, East Lansing, Michigan 48824-1027 (United States)

2010-08-15
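
For reference, the defining condition in the abstract's first sentence can be written out; the display below is the standard definition, not notation taken from the paper.

    % A surface is parallel when its second fundamental form h is
    % covariantly constant with respect to the Van der Waerden-Bortolotti
    % connection (Levi-Civita connection \nabla plus normal connection D):
    \[
      (\bar{\nabla}_X h)(Y,Z)
        = D_X\bigl(h(Y,Z)\bigr) - h(\nabla_X Y, Z) - h(Y, \nabla_X Z) = 0
    \]
    % for all vector fields X, Y, Z tangent to the surface.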

450

P-HARP: A parallel dynamic spectral partitioner  

SciTech Connect

Partitioning unstructured graphs is central to the parallel solution of problems in computational science and engineering. The authors have introduced earlier the sequential version of an inertial spectral partitioner called HARP which maintains the quality of recursive spectral bisection (RSB) while forming the partitions an order of magnitude faster than RSB. The serial HARP is known to be the fastest spectral partitioner to date, three to four times faster than similar partitioners on a variety of meshes. This paper presents a parallel version of HARP, called P-HARP. Two types of parallelism have been exploited: loop level parallelism and recursive parallelism. P-HARP has been implemented in MPI on the SGI/Cray T3E and the IBM SP2. Experimental results demonstrate that P-HARP can partition a mesh of over 100,000 vertices into 256 partitions in 0.25 seconds on a 64-processor T3E. Experimental results further show that P-HARP can give nearly a 20-fold speedup on 64 processors. These results indicate that graph partitioning is no longer a major bottleneck that hinders the advancement of computational science and engineering for dynamically-changing real-world applications.

Sohn, A. [New Jersey Inst. of Tech., Newark, NJ (United States). Dept. of Computer and Information Science; Biswas, R. [National Aeronautics and Space Administration, Moffett Field, CA (United States). Ames Research Center; Simon, H.D. [Lawrence Berkeley National Lab., CA (United States). Computing Sciences Directorate

1997-05-01

451

Form approved:  

Cancer.gov

RETURNED AGENTS LIST Use one form for each Agent and Protocol NCI Protocol Number: Return. No.: Institution Address: Date Received: Principal Investigator (PI) for Study (Please type or print): Signature of Authorizing Official: Date of Return Shipment: Signature

452

Assessment Forms  

NSDL National Science Digital Library

This site contains printable, downloadable forms for teachers to use in the classroom. It includes a daily point sheet, group self-evaluation, portfolio assessment chart, student progress report, student progress self-evaluation, and others.

453

Parallel Assembly of LIGA Components  

SciTech Connect

In this paper, a prototype robotic workcell for the parallel assembly of LIGA components is described. A Cartesian robot is used to press 386 and 485 micron diameter pins into a LIGA substrate and then place a 3-inch diameter wafer with LIGA gears onto the pins. Upward and downward looking microscopes are used to locate holes in the LIGA substrate, pins to be pressed in the holes, and gears to be placed on the pins. This vision system can locate parts within 3 microns, while the Cartesian manipulator can place the parts within 0.4 microns.

Christenson, T.R.; Feddema, J.T.

1999-03-04

454

Standard Templates Adaptive Parallel Library  

E-print Network

OCR-damaged front matter (contents, list of figures, list of tables); the recoverable entries cover STL components, STAPL, pRanges and their subranges, the STAPL mechanism, the class hierarchy of pRange, radix sort speedups, and non-mutating, mutating, and numeric function templates (equivalent to STL). STAPL Parallel...

Arzu, Francisco Jose

2012-06-07

455

Ultimate DWDM format in fiber-true bit-parallel solitons on WDM beams  

NASA Technical Reports Server (NTRS)

Whether true solitons can exist on WDM beams (and in what form) is a question that is generally unknown. This paper will discuss an answer to this question and a demonstration of the bit-parallel WDM transmission.

Yeh, C.; Bergman, L. A.

2000-01-01

456

Measurement, estimation, and prediction of software reliability  

NASA Technical Reports Server (NTRS)

Quantitative indices of software reliability are defined, and application of three important indices is indicated: (1) reliability measurement, (2) reliability estimation, and (3) reliability prediction. State of the art techniques for each of these procedures are presented together with considerations of data acquisition. Failure classifications and other documentation for comprehensive software reliability evaluation are described.

Hecht, H.

1977-01-01

457

Implementation and performance of parallelized elegant.  

SciTech Connect

The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

Wang, Y.; Borland, M.; Accelerator Systems Division (APS)

2008-01-01

458

Xyce parallel electronic simulator design.  

SciTech Connect

This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

2010-09-01

459

Multiplexing of injury codes for the parallel operation of enzyme logic gates.  

PubMed

The development of a highly parallel enzyme logic sensing concept employing a novel encoding scheme for the determination of multiple pathophysiological conditions is reported. The new concept multiplexes a contingent of enzyme-based logic gates to yield a distinct 'injury code' corresponding to a unique pathophysiological state as prescribed by a truth table. The new concept is illustrated using an array of NAND and AND gates to assess the biomedical significance of numerous biomarker inputs including creatine kinase, lactate dehydrogenase, norepinephrine, glutamate, alanine transaminase, lactate, glucose, glutathione disulfide, and glutathione reductase to assess soft-tissue injury, traumatic brain injury, liver injury, abdominal trauma, hemorrhagic shock, and oxidative stress. Under the optimal conditions, physiological and pathological levels of these biomarkers were detected through either optical or electrochemical techniques by monitoring the level of the outputs generated by each of the six logic gates. By establishing a pathologically meaningful threshold for each logic gate, the absorbance and amperometric assays tendered the diagnosis in a digitally encoded 6-bit word, defined as an 'injury code'. This binary 'injury code' enabled the effective discrimination of 64 unique pathological conditions to offer a comprehensive high-fidelity diagnosis of multiple injury conditions. Such processing of relevant biomarker inputs and the subsequent multiplexing of the logic gate outputs to yield a comprehensive 'injury code' offer significant potential for the rapid and reliable assessment of varied and complex forms of injury in circumstances where access to a clinical laboratory is not viable. While the new concept of parallel and multiplexed enzyme logic gates is illustrated here in connection to multi-injury diagnosis, it could be readily extended to a wide range of practical medical, industrial, security and environmental applications. PMID:20617272

Halámek, Jan; Windmiller, Joshua Ray; Zhou, Jian; Chuang, Min-Chieh; Santhosh, Padmanabhan; Strack, Guinevere; Arugula, Mary A; Chinnapareddy, Soujanya; Bocharova, Vera; Wang, Joseph; Katz, Evgeny

2010-09-01
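
The encoding scheme can be illustrated with a thresholded-gate sketch: six gates each map a biomarker pair to one bit, and the concatenated bits form the 6-bit word that distinguishes the 2^6 = 64 conditions. The gate mix, threshold, and input values below are hypothetical placeholders, not the paper's calibrated assay.

    GATE_KINDS = ["AND", "AND", "AND", "NAND", "NAND", "NAND"]  # assumed mix

    def gate_bit(kind, a, b, threshold=0.5):
        # One logic gate: binarize the two normalized biomarker levels
        # against a pathological threshold, then apply AND or NAND.
        hi_a, hi_b = a > threshold, b > threshold
        out = (hi_a and hi_b) if kind == "AND" else not (hi_a and hi_b)
        return int(out)

    def injury_code(pairs):
        # pairs: six (a, b) biomarker tuples -> 6-bit diagnostic word.
        return "".join(str(gate_bit(k, a, b))
                       for k, (a, b) in zip(GATE_KINDS, pairs))

    print(injury_code([(0.9, 0.8), (0.1, 0.2), (0.7, 0.9),
                       (0.2, 0.1), (0.9, 0.9), (0.3, 0.6)]))  # '101101'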

460

Performance prediction for complex parallel applications  

SciTech Connect

Today's massively parallel machines are typically message-passing systems consisting of hundreds or thousands of processors. Implementing parallel applications efficiently in this environment is a challenging task, and poor parallel design decisions can be expensive to correct. Tools and techniques that allow the fast and accurate evaluation of different parallelization strategies would significantly improve the productivity of application developers and increase throughput on parallel architectures. This paper investigates one of the major issues in building tools to compare parallelization strategies: determining what type of performance models of the application code and of the computer system are sufficient for a fast and accurate comparison of different strategies. The paper is built around a case study employing the Performance Prediction Tool (PerPreT) to predict performance of the Parallel Spectral Transform Shallow Water Model code (PSTSWM) on the Intel Paragon. 13 refs., 6 tabs.

Brehm, J. [Hannover Univ. (Germany). Inst. fuer Rechnerstrukturen und Betriebssysteme; Worley, P.H. [Oak Ridge National Lab., TN (United States)

1997-04-01

461

Optimal Resource Allocation for Security in Reliability Systems M. N. Azaiez  

E-print Network

systems, about protecting nuclear power plants against terrorist attacks or sabotage, or about ensuring the security of simple series and parallel systems. However, it is clearly important in practice to extend

Wang, Hai

462

Ultra precision and reliable bonding method  

NASA Technical Reports Server (NTRS)

The bonding of two materials through hydroxide-catalyzed hydration/dehydration is achieved at room temperature by applying hydroxide ions to at least one of the two bonding surfaces and by placing the surfaces sufficiently close to each other to form a chemical bond between them. The surfaces may be placed sufficiently close to each other by simply placing one surface on top of the other. A silicate material may also be used as a filling material to help fill gaps between the surfaces caused by surface figure mismatches. A powder of a silica-based or silica-containing material may also be used as an additional filling material. The hydroxide-catalyzed bonding method forms bonds which are not only as precise and transparent as optical contact bonds, but also as strong and reliable as high-temperature frit bonds. The hydroxide-catalyzed bonding method is also simple and inexpensive.

Gwo, Dz-Hung (Inventor)

2001-01-01

463

18 CFR 39.11 - Reliability reports.  

...AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS...reports. (a) The Electric Reliability Organization shall conduct assessments as determined...Commission. (b) The Electric Reliability Organization shall conduct assessments of...

2014-04-01

464

18 CFR 39.11 - Reliability reports.  

Code of Federal Regulations, 2011 CFR

...AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS...reports. (a) The Electric Reliability Organization shall conduct assessments as determined...Commission. (b) The Electric Reliability Organization shall conduct assessments of...

2011-04-01

465

18 CFR 39.11 - Reliability reports.  

Code of Federal Regulations, 2012 CFR

...AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS...reports. (a) The Electric Reliability Organization shall conduct assessments as determined...Commission. (b) The Electric Reliability Organization shall conduct assessments of...

2012-04-01

466

18 CFR 39.11 - Reliability reports.  

Code of Federal Regulations, 2010 CFR

...AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS...reports. (a) The Electric Reliability Organization shall conduct assessments as determined...Commission. (b) The Electric Reliability Organization shall conduct assessments of...

2010-04-01

467

18 CFR 39.11 - Reliability reports.  

Code of Federal Regulations, 2013 CFR

...AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS...reports. (a) The Electric Reliability Organization shall conduct assessments as determined...Commission. (b) The Electric Reliability Organization shall conduct assessments of...

2013-04-01

468

40 CFR 75.42 - Reliability criteria.  

Code of Federal Regulations, 2010 CFR

...2010-07-01 2010-07-01 false Reliability criteria. 75.42 Section 75.42...Alternative Monitoring Systems § 75.42 Reliability criteria. To demonstrate reliability equal to or better than the continuous...

2010-07-01

469

Reliability growth models for NASA applications  

NASA Technical Reports Server (NTRS)

The objective of any reliability growth study is prediction of reliability at some future instant. Another objective is statistical inference: estimation of reliability for reliability demonstration. A cause of concern for the development engineer and management is that reliability demands an excessive number of tests for reliability demonstration. For example, the Space Transportation Main Engine (STME) program requirements call for .99 reliability at 90 pct. confidence for demonstration. This requires running 230 tests with zero failures if a classical binomial model is used. It is therefore also an objective to explore reliability growth models for reliability demonstration and tracking and their applicability to NASA programs. A reliability growth model is an analytical tool used to monitor the reliability progress during the development program and to establish a test plan to demonstrate an acceptable system reliability.

Taneja, Vidya S.

1991-01-01
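
The 230-test figure quoted above follows from the zero-failure binomial demonstration rule: the smallest n with R**n <= 1 - C. A two-line check:

    from math import ceil, log

    R, C = 0.99, 0.90                 # demonstrated reliability, confidence
    n = ceil(log(1 - C) / log(R))     # smallest n with R**n <= 1 - C
    print(n)                          # 230 tests with zero failures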

470

High-performance retargetable simulator for parallel architectures. Technical report  

SciTech Connect

In this thesis, the authors describe Proteus, a high-performance simulation-based system for the evaluation of parallel algorithms and system software. Proteus is built around a retargetable parallel architecture simulator and a flexible data collection and display component. The simulator uses a combination of simulation and direct execution to achieve high performance, while retaining simulation accuracy. Proteus can be configured to simulate a wide range of shared memory and message passing MIMD architectures and the level of simulation detail can be chosen by the user. Detailed memory, cache and network simulation is supported. Parallel programs can be written using a programming model based on C and a set of runtime system calls for thread and memory management. The system allows nonintrusive monitoring of arbitrary information about an execution, and provides flexible graphical utilities for displaying recorded data. To validate the accuracy of the system, a number of published experiments were reproduced on Proteus. In all cases the results obtained by simulation are very close to those published, a fact that provides support for the reliability of the system. Performance measurements demonstrate that the simulator is one to two orders of magnitude faster than other similar multiprocessor simulators.

Dellarocas, C.N.

1991-06-01

471

Parallel program debugging with flowback analysis  

SciTech Connect

This thesis describes the design and implementation of an integrated debugging system for parallel programs running on shared memory multi-processors. The goal of the debugging system is to present to the programmer a graphical view of the dynamic program dependences while keeping the execution-time overhead low. The author first describes the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. Execution time overhead is kept low by recording only a small amount of trace during a program's execution. He uses semantic analysis and a technique called incremental tracing to keep the time and space overhead low. As part of the semantic analysis, he uses a static program dependence graph structure that reduces the amount of work done at compile time and takes advantage of the dynamic information produced during execution time. The cornerstone of the incremental tracing concept is to generate a coarse trace during execution and fill incrementally, during the interactive portion of the debugging session, the gap between the information gathered in the coarse trace and the information needed to do the flowback analysis using the coarse trace. Then, he describes how to extend the flowback analysis to parallel programs. The flowback analysis can span process boundaries; i.e., the most recent modification to a shared variable might be traced to a different process than the one that contains the current reference. The static and dynamic program dependence graphs of the individual processes are tied together with synchronization and data dependence information to form complete graphs that represent the entire program.

Choi, Jongdeok.

1989-01-01

472

PERFORMANCE MEASUREMENT OF MONTE CARLO PHOTON TRANSPORT ON PARALLEL MACHINES  

E-print Network

particle transport is an inherently parallel (or embarrassingly parallel) computational method. There is also interest in exploring multithreaded architectures to improve the parallel performance of scientific

Majumdar, Amit

473

Computational Thermochemistry and Benchmarking of Reliable Methods  

SciTech Connect

During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

2006-06-20

474

Detection of faults and software reliability analysis  

NASA Technical Reports Server (NTRS)

Multi-version or N-version programming is proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. These versions are executed in parallel in the application environment; each receives identical inputs and each produces its version of the required outputs. The outputs are collected by a voter and, in principle, they should all be the same. In practice there may be some disagreement. If this occurs, the results of the majority are taken to be the correct output, and that is the output used by the system. A total of 27 programs were produced. Each of these programs was then subjected to one million randomly-generated test cases. The experiment yielded a number of programs containing faults that are useful for general studies of software reliability as well as studies of N-version programming. Fault tolerance through data diversity and analytic models of comparison testing are discussed.

Knight, John C.

1987-01-01
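
The voting step is simple to sketch. Below, three stand-in "versions" of the same routine are run on one input and a majority voter picks the output; rounding plays the role of the comparison tolerance that real voters need for floating-point results. The experiment itself used 27 independently written programs, not these toys.

    from collections import Counter

    def voter(outputs):
        # Accept the strict-majority output; otherwise flag disagreement.
        value, count = Counter(outputs).most_common(1)[0]
        if count > len(outputs) // 2:
            return value
        raise RuntimeError("no majority among versions")

    def v1(x): return x ** 0.5
    def v2(x): return x ** 0.5
    def v3(x): return x ** 0.5 + 1e-9   # tiny numerical deviation

    outputs = [round(v(2.0), 6) for v in (v1, v2, v3)]
    print(voter(outputs))               # 1.414214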

475

Assessment of NDE reliability data  

NASA Technical Reports Server (NTRS)

Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.

Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.

1975-01-01
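
The binomial confidence calculation the report describes can be sketched as a Clopper-Pearson-style one-sided lower bound on the probability of detection; the hit/miss counts below are made up.

    from scipy.stats import beta

    def pod_lower_bound(hits, trials, confidence=0.95):
        # One-sided lower confidence bound on detection probability from
        # hit/miss data (exact binomial, Clopper-Pearson style).
        if hits == 0:
            return 0.0
        return beta.ppf(1 - confidence, hits, trials - hits + 1)

    print(pod_lower_bound(hits=28, trials=29))  # ~0.86 at 95% confidence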

476

76 FR 42534 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits; System...  

Federal Register 2010, 2011, 2012, 2013

...Interconnection Reliability Operating Limits; System Restoration Reliability...Interconnection Reliability Operating Limits (IROL) within its Wide-Area...NERC Glossary of Terms, ``Operational Planning Analysis'' and...Interconnection Reliability Operating Limits, Order No. 748, 134...

2011-07-19

477

Photovoltaic power system reliability considerations  

NASA Technical Reports Server (NTRS)

An example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems is presented. This particular application is for a solar cell power system demonstration project designed to provide electric power requirements for remote villages. The techniques utilized involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of fail-safe and planned spare parts engineering philosophy.

Lalli, V. R.

1980-01-01

478

Strong words or moderate words: A comparison of the reliability and validity of responses on attitude scales  

E-print Network

wording. This study tested this assumption using commonly accepted criteria for reliability and validity. Two forms of attitude scales were created--a strongly worded form and a moderately worded form--measuring two attitude objects--attitude towards...

Frey, Bruce B.; Edwards, Lisa M.

2011-01-01

479

First reliability test of a surface micromachined microengine using SHiMMeR  

SciTech Connect

The first-ever reliability stress test on surface micromachined microengines developed at Sandia National Laboratories (SNL) has been completed. We stressed 41 microengines at 36,000 RPM and inspected the functionality at 60 RPM. We have observed an infant mortality region, a region of low failure rate (useful life), and no signs of wearout in the data. The reliability data are presented and interpreted using standard reliability methods. Failure analysis results on the stressed microengines are presented. In our effort to study the reliability of MEMS, we need to observe the failures of large numbers of parts to determine the failure modes. To facilitate testing of large numbers of micromachines, the Sandia High Volume Measurement of Micromachine Reliability (SHiMMeR) system has computer controlled positioning and the capability to inspect moving parts. The development of this parallel testing system is discussed in detail.

Tanner, D.M.; Smith, N.F.; Bowman, D.J. [and others

1997-08-01

480

Integrated Task and Data Parallel Programming  

NASA Technical Reports Server (NTRS)

This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++ edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.

Grimshaw, A. S.

1998-01-01
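
A rough C++ analogue of the integration the record above proposes, under the assumption that a "task parallel object" maps to an async task and a "data parallel object" to a chunked threaded loop. The names and structure are invented for illustration; this is not the dissertation's language or the Legion API.

    // Two task-parallel objects, each managing its own data-parallel object,
    // run concurrently: the multi-paradigm composition the record describes.
    #include <algorithm>
    #include <cstdio>
    #include <future>
    #include <thread>
    #include <vector>

    // Data-parallel "object": applies f elementwise across several threads.
    template <typename F>
    void parallelForEach(std::vector<double>& v, F f, unsigned nthreads = 4) {
        std::vector<std::thread> pool;
        std::size_t chunk = (v.size() + nthreads - 1) / nthreads;
        for (unsigned t = 0; t < nthreads; ++t) {
            std::size_t lo = t * chunk;
            std::size_t hi = std::min(v.size(), lo + chunk);
            pool.emplace_back([&v, f, lo, hi] {
                for (std::size_t i = lo; i < hi; ++i) v[i] = f(v[i]);
            });
        }
        for (auto& th : pool) th.join();
    }

    int main() {
        std::vector<double> a(1 << 20, 1.0), b(1 << 20, 2.0);
        // Task-parallel level: two independent tasks run concurrently.
        auto ta = std::async(std::launch::async, [&] {
            parallelForEach(a, [](double x) { return x * x; });
        });
        auto tb = std::async(std::launch::async, [&] {
            parallelForEach(b, [](double x) { return x + 1.0; });
        });
        ta.get();
        tb.get();
        std::printf("a[0]=%g b[0]=%g\n", a[0], b[0]);
    }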

481

Parallel network simulations with NEURON.  

PubMed

The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2,000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored. PMID:16732488

Migliore, M; Cannia, C; Lytton, W W; Markram, H; Hines, M L

2006-10-01
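
The integrate-then-exchange pattern described above is simple enough to sketch. Below is a hedged C++/MPI illustration of the idea, not NEURON's actual API or source: each process advances its subnet by the minimum interprocessor connection delay, then all processes exchange the spikes generated in that interval, which by construction arrive before they are due for delivery. The fixed buffer size and fake spike data are assumptions.

    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    constexpr int MAX_SPIKES = 64;  // fixed per-rank exchange buffer (assumption)

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const double minDelay = 1.0;  // ms: minimum presynaptic-to-postsynaptic delay
        const double tstop = 10.0;    // ms
        for (double t = 0.0; t < tstop; t += minDelay) {
            // 1) Integrate the local subnet over [t, t + minDelay) and record
            //    ids of local cells that spiked (faked here).
            std::vector<int> sent(MAX_SPIKES, -1);
            int nspikes = 0;
            sent[nspikes++] = rank;  // pretend one local cell fired

            // 2) Exchange: every rank receives every other rank's spike buffer.
            std::vector<int> recv(static_cast<std::size_t>(MAX_SPIKES) * size);
            MPI_Allgather(sent.data(), MAX_SPIKES, MPI_INT,
                          recv.data(), MAX_SPIKES, MPI_INT, MPI_COMM_WORLD);

            // 3) Deliver spikes targeting local cells. Because the exchange
            //    interval equals the minimum delay, no spike can arrive late
            //    for the next integration interval.
            for (int id : recv)
                if (id >= 0) { /* enqueue synaptic event for local targets */ }
        }
        if (rank == 0) std::printf("done\n");
        MPI_Finalize();
    }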

482

Parallel processing for scientific computations  

NASA Technical Reports Server (NTRS)

The main contribution of the effort in the last two years is the introduction of the MOPPS system. After an extensive literature search, we developed the system described next. MOPPS employs a new solution to the problem of managing programs that solve scientific and engineering applications in a distributed processing environment. With this solution, autonomous computers cooperate efficiently in solving large scientific problems. MOPPS has the advantage of not assuming any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it manages under various conditions. The manager applies this knowledge to improve the performance of future runs; that is, the program manager learns from experience.

Alkhatib, Hasan S.

1991-01-01
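
A minimal sketch of the learn-from-history idea attributed to the MOPPS program manager above: record observed runtimes per (program, processor count) and choose the historically fastest configuration for the next run. The class, names, and data are invented for illustration; MOPPS's actual knowledge base is surely richer.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <utility>

    class ProgramManager {
        // (program name, processor count) -> (total seconds observed, runs)
        std::map<std::pair<std::string, int>, std::pair<double, int>> history_;
    public:
        void record(const std::string& prog, int procs, double seconds) {
            auto& e = history_[{prog, procs}];
            e.first += seconds;
            e.second += 1;
        }
        // Choose the processor count with the best observed mean runtime.
        int bestProcs(const std::string& prog, int fallback) const {
            double best = 1e300;
            int choice = fallback;
            for (const auto& [key, val] : history_)
                if (key.first == prog && val.first / val.second < best) {
                    best = val.first / val.second;
                    choice = key.second;
                }
            return choice;
        }
    };

    int main() {
        ProgramManager mgr;
        mgr.record("fluid_solver", 8, 120.0);
        mgr.record("fluid_solver", 16, 75.0);
        mgr.record("fluid_solver", 32, 80.0);  // communication overhead kicks in
        std::printf("next run uses %d processors\n", mgr.bestProcs("fluid_solver", 8));
    }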

483

Parallel computing in enterprise modeling.  

SciTech Connect

This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what distinguishes them from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that would greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. That said, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale needed for realistic results. With the recent upheavals in the financial markets and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

2008-08-01
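
To see why the absence of a spatial organizing principle is painful, consider the obvious content-free placement below. This is an invented sketch, not the Parallel Particle Data Model's actual scheme: hashing entity ids balances load on average but scatters interacting entities across ranks, which keeps communication volume high.

    #include <cstdio>
    #include <functional>
    #include <string>
    #include <vector>

    // Content-free placement: with no spatial locality to exploit, an entity's
    // owner rank can only be derived from its identity, not its neighbors.
    int ownerRank(const std::string& entityId, int nranks) {
        return static_cast<int>(std::hash<std::string>{}(entityId) % nranks);
    }

    int main() {
        std::vector<std::string> entities = {"person-17", "firm-3", "bank-9"};
        for (const auto& e : entities)
            std::printf("%s -> rank %d\n", e.c_str(), ownerRank(e, 16));
    }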

484

On the Theory of Quadrature Oscillations Obtained Through Parallel VCOs  

Microsoft Academic Search

This paper presents a theory of parallel quadrature voltage-controlled oscillators, which are widely used to generate in-phase and quadrature signals in communication systems. The theory is developed without making any hypothesis on the circuit topology and allows us to thoroughly analyze the synchronized oscillations, as well as the effect of unavoidable parameter mismatches, by means of closed-form expressions for the

Antonio Buonomo; Alessandro Lo Schiavo

2010-01-01
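
The abstract above is cut off before its closed-form results, so for orientation here is the standard phase-reduction form such analyses start from; this is an assumption drawn from the general coupled-oscillator literature, not the paper's own equations. Two oscillators coupled with the antisymmetric (±90°) phase shifts typical of parallel quadrature VCOs obey

    \dot\theta_1 = \omega_1 + K \sin\!\left(\theta_2 - \theta_1 - \tfrac{\pi}{2}\right), \qquad
    \dot\theta_2 = \omega_2 + K \sin\!\left(\theta_1 - \theta_2 + \tfrac{\pi}{2}\right),

so the phase difference \varphi = \theta_1 - \theta_2 satisfies \dot\varphi = \Delta\omega - 2K\cos\varphi. A synchronized solution requires |\Delta\omega| \le 2K and locks at \cos\varphi^* = \Delta\omega/(2K): exact quadrature (\varphi^* = \pm\pi/2) for zero mismatch, and a phase error of roughly \Delta\omega/(2K) radians otherwise, which is the kind of mismatch sensitivity the paper quantifies in closed form.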

485

Dynamics of polysilicon parallel-plate electrostatic actuators  

Microsoft Academic Search

The response of a polysilicon parallel-plate electrostatic actuator to a.c. signals at different bias voltages has been measured with a laser interferometer. Using microhinges, large plates (with areas from 100 µm² to ≈ 0.1 mm²) with long thin support beams (such as 600 µm × 3 µm × 1.5 µm) are rotated off the surface of the substrate to form

Patrick B. Chu; Phyllis R. Nelson; Mark L. Tachiki; Kristofer S. J. Pister

1996-01-01
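
For context, the textbook small-signal picture of a biased parallel-plate actuator (standard results, not taken from this paper): a plate of area A at gap g with applied voltage V feels

    F = \frac{\varepsilon_0 A V^2}{2 g^2},

so driving with V(t) = V_b + v(t), |v| \ll V_b, yields an a.c. force component \approx \varepsilon_0 A V_b\, v(t)/g^2 that scales linearly with the bias, which is why the response is characterized at different bias voltages. The bias also softens the suspension, k_\mathrm{eff} = k - \varepsilon_0 A V_b^2/g^3, shifting the resonance downward, and static stability is lost at pull-in, g = \tfrac{2}{3}g_0 and V_\mathrm{PI} = \sqrt{8 k g_0^3/(27\,\varepsilon_0 A)}.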

486

A subliminal manipulation of the Extended Parallel Process Model  

E-print Network

...the context of skin cancer. The goals of this study were to (1) assess the effects of subliminal embeds as fear appeals and (2) to do so within the framework of the Extended Parallel Process Model, the EPPM (Witte, 1992a). While this study demonstrated that subliminal... go unnoticed by individuals (Dixon, 1981). To extend the inquiry into subliminal message processing, this project places embedded pictures (a form of subliminal research) in the context of skin cancer.

Stephenson, Michael Taylor

2012-06-07

487

The ParaScope parallel programming environment  

NASA Technical Reports Server (NTRS)

The ParaScope parallel programming environment, developed to support scientific programming of shared-memory multiprocessors, includes a collection of tools that use global program analysis to help users develop and debug parallel programs. This paper focuses on ParaScope's compilation system, its parallel program editor, and its parallel debugging system. The compilation system extends the traditional single-procedure compiler by providing a mechanism for managing the compilation of complete programs. Thus, ParaScope can support both traditional single-procedure optimization and optimization across procedure boundaries. The ParaScope editor brings both compiler analysis and user expertise to bear on program parallelization. It assists the knowledgeable user by displaying and managing analysis and by providing a variety of interactive program transformations that are effective in exposing parallelism. The debugging system detects and reports timing-dependent errors, called data races, in execution of parallel programs. The system combines static analysis, program instrumentation, and run-time reporting to provide a mechanical system for isolating errors in parallel program executions. Finally, we describe a new project to extend ParaScope to support programming in FORTRAN D, a machine-independent parallel programming language intended for use with both distributed-memory and shared-memory parallel computers.

Cooper, Keith D.; Hall, Mary W.; Hood, Robert T.; Kennedy, Ken; Mckinley, Kathryn S.; Mellor-Crummey, John M.; Torczon, Linda; Warren, Scott K.

1993-01-01
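
The timing-dependent errors ParaScope's debugging system targets are easy to reproduce. Below is a minimal modern-C++ illustration of the error class, not ParaScope code: an unsynchronized shared counter whose final value depends on thread interleaving, next to the atomic repair.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    int main() {
        long racy = 0;              // unsynchronized: a data race
        std::atomic<long> safe{0};  // the fix: atomic updates (or a mutex)

        auto work = [&] {
            for (int i = 0; i < 1'000'000; ++i) {
                ++racy;  // lost updates possible; undefined behavior in C++
                safe.fetch_add(1, std::memory_order_relaxed);
            }
        };
        std::thread t1(work), t2(work);
        t1.join();
        t2.join();
        // "racy" usually prints less than 2000000; "safe" always prints 2000000.
        std::printf("racy=%ld safe=%ld\n", racy, safe.load());
    }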

488

Computer-Aided Parallelizer and Optimizer  

NASA Technical Reports Server (NTRS)

The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components of the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Because OpenMP is a widely supported standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information, currently provided by CAPTools. Compiler directives are generated through identification of parallel loops at the outermost level, construction of parallel regions around parallel loops and optimization of those regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts have also been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

Jin, Haoqiang

2011-01-01
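
To make the directive classes named above concrete, here is a hand-written C++ analogue of the kind of annotated loop CAPO produces (CAPO itself emits OpenMP directives for Fortran; the loop, data, and variable names here are invented):

    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<double> x(n, 0.5), y(n);
        double sum = 0.0;
        // x, y: shared; tmp: private (block scope); sum: reduction. With no
        // loop-carried dependence, iterations may run on any thread in any order.
        #pragma omp parallel for default(shared) reduction(+ : sum)
        for (int i = 0; i < n; ++i) {
            double tmp = x[i] * x[i];  // private by virtue of its block scope
            y[i] = tmp + 1.0;
            sum += tmp;                // combined safely by the reduction clause
        }
        std::printf("sum = %f\n", sum);
    }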

489

SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws  

NASA Technical Reports Server (NTRS)

With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single-process) C or C++, and an order of magnitude less costly than development of comparable parallel code. Moreover, SequenceL not only automatically parallelizes the code, but since it is based on CSP-NT, it is provably race free, thus eliminating the largest quality challenge the parallelized software developer faces.

Cooke, Daniel; Rushton, Nelson

2013-01-01
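
As a rough picture of what Normalize-Transpose buys: a scalar operation applied to a sequence is extended elementwise, and because the resulting iterations are independent by construction, a translator can emit a race-free threaded map with no annotations from the programmer. The sketch below is a hand-written C++ illustration of that idea, not output of the actual SequenceL-to-C++ translator; ntMap and its structure are assumptions.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <thread>
    #include <vector>

    template <typename F>
    std::vector<double> ntMap(const std::vector<double>& xs, F f) {
        std::vector<double> ys(xs.size());
        unsigned nt = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> pool;
        std::size_t chunk = (xs.size() + nt - 1) / nt;
        for (unsigned t = 0; t < nt; ++t) {
            std::size_t lo = t * chunk;
            std::size_t hi = std::min(xs.size(), lo + chunk);
            // Each output element depends only on its own input element, so
            // the map is race free by construction -- the property the CSP-NT
            // laws guarantee for the generated code.
            pool.emplace_back([&, lo, hi] {
                for (std::size_t i = lo; i < hi; ++i) ys[i] = f(xs[i]);
            });
        }
        for (auto& th : pool) th.join();
        return ys;
    }

    int main() {
        std::vector<double> x = {1.0, 4.0, 9.0, 16.0};
        auto y = ntMap(x, [](double v) { return std::sqrt(v); });
        for (double v : y) std::printf("%g ", v);
        std::printf("\n");
    }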

490

RELIABILITY MODELLING WITH FUZZY COVARIATES  

Microsoft Academic Search

Uncertainty is an intrinsic feature of the data that carry the underlying information of reliability engineering realities. Randomness and fuzziness are two different types of uncertainty, although there is a certain link between them. Cox's PH (Proportional Hazards) models and Lawless and Thiagarajah's CIF (Conditional Intensity Function) models address random uncertainty in a very general form. As a matter of reflection, condition monitoring

R. Guo; C. E. Love

2002-01-01
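
The two model families named in the abstract above have these standard forms, stated here from the general literature for orientation rather than from the paper itself; the paper's contribution is to let the covariates be fuzzy rather than crisp:

    \text{Cox PH:}\quad \lambda(t \mid \mathbf{z}) = \lambda_0(t)\,\exp\!\left(\boldsymbol\beta^{\top}\mathbf{z}\right), \qquad
    \text{CIF:}\quad \lambda\!\left(t \mid H(t)\right) = \exp\!\left(\boldsymbol\theta^{\top}\mathbf{z}(t)\right),

where \lambda_0 is a baseline hazard, \mathbf{z} a covariate vector, and the CIF covariates \mathbf{z}(t) may include functions of elapsed time and of the time since the last failure, so that Poisson and renewal processes arise as special cases.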

491