Note: This page contains sample records for the topic parallel forms reliability from Science.gov.
While these samples are representative of the content of Science.gov,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of Science.gov
to obtain the most current and comprehensive results.
Last update: November 12, 2013.
1

Robinson's Measure of Agreement as a Parallel Forms Reliability Coefficient.  

ERIC Educational Resources Information Center

A major deficiency in classical test theory is its reliance on Pearson product-moment (PPM) correlation concepts in the definition of reliability. PPM measures are totally insensitive to first-moment differences between tests, which leads to the dubious assumption of essential tau-equivalence. Robinson proposed a measure of agreement that is sensitive…

Willson, Victor L.
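The truncated abstract above contrasts the Pearson correlation with Robinson's coefficient of agreement. As an illustrative sketch (the paper's exact coefficient is not quoted in this snippet), one standard formulation of Robinson's A compares squared deviations from each pair's mean against squared deviations from the grand mean, so a pure mean shift between two forms lowers A while leaving Pearson's r untouched:

```python
import numpy as np

def robinsons_a(x, y):
    """Robinson's coefficient of agreement A (one common formulation).

    Unlike the Pearson correlation, A penalizes first-moment (mean)
    differences between forms: it compares squared deviations from
    each pair's mean against squared deviations from the grand mean
    of all scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    pair_means = (x + y) / 2.0
    grand_mean = np.concatenate([x, y]).mean()
    disagreement = np.sum((x - pair_means) ** 2 + (y - pair_means) ** 2)
    total = np.sum((x - grand_mean) ** 2 + (y - grand_mean) ** 2)
    return 1.0 - disagreement / total

form_a = np.array([10.0, 12.0, 14.0, 16.0, 18.0])  # hypothetical scores
form_b = form_a + 5.0                              # same ranking, shifted mean

r = np.corrcoef(form_a, form_b)[0, 1]  # Pearson ignores the shift: r == 1.0
a = robinsons_a(form_a, form_b)        # A is penalized by it: a < 1.0
print(r, a)
```

With identical forms A equals 1; here the five-point shift leaves r at 1.0 but drags A well below it.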

2

Reliability Speedup: An Effective Metric for Parallel Application with Checkpointing  

Microsoft Academic Search

As parallel computing systems scale up, system reliability drastically decreases, so parallel applications running on such systems must tolerate hardware failures. Checkpointing, which periodically saves the state of the computation to stable storage, is widely used in the domain of large-scale parallel computing. This introduces non-negligible fault-tolerance overhead. The traditional speedup metric only measures the performance of a failure-free system.

Zhiyuan Wang

2009-01-01
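The record above proposes "reliability speedup" as a metric; its exact definition is not reproduced in this snippet. As related background on the trade-off it measures, Young's (1974) classic first-order approximation chooses the checkpoint interval that balances checkpoint cost against expected rework after a failure (the numbers below are hypothetical):

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's (1974) first-order approximation of the optimal
    checkpoint interval: tau ~ sqrt(2 * C * MTBF), where C is the
    time to write one checkpoint and MTBF is the system's mean
    time between failures, both in seconds."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Hypothetical numbers: a 60 s checkpoint on a system with a 24 h MTBF.
tau = young_interval(60.0, 24 * 3600.0)
print(f"checkpoint every {tau / 60:.1f} min")
```

Shorter MTBF (a larger machine) pushes the interval down, which is exactly the overhead a reliability-aware speedup metric has to account for.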

3

Swift: fast, reliable, loosely coupled parallel computation.

SciTech Connect

A common pattern in scientific computing involves the execution of many tasks that are coupled only in the sense that the output of one may be passed as input to one or more others - for example, as a file, or via a Web Services invocation. While such 'loosely coupled' computations can involve large amounts of computation and communication, the concerns of the programmer tend to be different than in traditional high performance computing, being focused on management issues relating to the large numbers of datasets and tasks (and often, the complexities inherent in 'messy' data organizations) rather than the optimization of interprocessor communication. To address these concerns, we have developed Swift, a system that combines a novel scripting language called SwiftScript with a powerful runtime system based on CoG Karajan and Falkon to allow for the concise specification, and reliable and efficient execution, of large loosely coupled computations. Swift adopts and adapts ideas first explored in the GriPhyN virtual data system, improving on that system in many regards. We describe the SwiftScript language and its use of XDTM to describe the logical structure of complex file system structures. We also present the Swift system and its use of CoG Karajan, Falkon, and Globus services to dispatch and manage the execution of many tasks in different execution environments. We summarize application experiences and detail performance experiments that quantify the cost of Swift operations.

Zhao, Y.; Hategan, M.; Clifford, B.; Foster, I.; von Laszewski, G.; Nefedova, V.; Raicu, I.; Stef-Praun, T.; Wilde, M.; Mathematics and Computer Science; Univ. of Chicago

2007-01-01

4

Evaluating the Series or Parallel Structure Assumption for System Reliability  

Microsoft Academic Search

The structure of a system is important for specifying a mathematical form for calculating reliability from component level data. For some systems, full-system reliability data are expensive or difficult to obtain, and using component-level data to improve the estimation of system reliability can be highly beneficial. However, if the intended structure of the system during design does not match the

Christine M. Anderson-Cook

2008-01-01
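For reference, the two structural assumptions the paper evaluates correspond to the standard textbook composition rules for independent components, sketched below (the component reliabilities are illustrative):

```python
import math

def series_reliability(component_rs):
    """A series system works only if every component works:
    R = product of component reliabilities."""
    return math.prod(component_rs)

def parallel_reliability(component_rs):
    """A parallel system fails only if every component fails:
    R = 1 - product of component unreliabilities."""
    return 1.0 - math.prod(1.0 - r for r in component_rs)

rs = [0.95, 0.90, 0.85]          # hypothetical component reliabilities
print(series_reliability(rs))    # 0.95 * 0.90 * 0.85
print(parallel_reliability(rs))  # 1 - 0.05 * 0.10 * 0.15
```

Mis-specifying which rule applies during design is exactly the mismatch the abstract warns about: the two formulas give very different system-level estimates from the same component data.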

5

Absence of parallel forms for the traditional individual intelligence tests  

Microsoft Academic Search

The issue of the absence of parallel forms for the traditional individual intelligence tests has received little attention in the area of psychological testing ever since the early demise of the Wechsler Bellevue Form II and the delayed discontinuance of Form M of the Stanford-Binet Intelligence Scale. Five reasons have been presented here to argue that the availability of parallel

M. Y. Quereshi

2003-01-01

6

Towards Flexible, Reliable, High Throughput Parallel Discrete Event Simulations  

Microsoft Academic Search

The excessive amount of time necessary to complete large-scale discrete-event simulations of complex systems such as telecommunication networks, transportation systems, and multiprocessor computers continues to plague researchers and impede progress in many important domains. Parallel discrete-event simulation techniques offer an attractive solution to this problem by enabling scalable execution, and much prior research has been focused on this approach.

Richard Fujimoto; Alfred Park; Jen-Chih Huang

2007-01-01

7

Analytical form of Shepp-Logan phantom for parallel MRI  

Microsoft Academic Search

We present an analytical form of ground-truth k-space data for the 2-D Shepp-Logan brain phantom in the presence of multiple and non-homogeneous receiving coils. The analytical form allows us to conduct realistic simulations and validations of reconstruction algorithms for parallel MRI. The key contribution of our work is to use a polynomial representation of the coil's sensitivity. We show that

Matthieu Guerquin-Kern; F. I. Karahanoğlu; Dimitri Van De Ville; Klaas P. Pruessmann; Michael Unser

2010-01-01

8

Bayesian Reliability Analysis of Complex Series/Parallel Systems of Binomial Subsystems and Components

Microsoft Academic Search

A Bayesian procedure is presented for estimating the reliability (or availability) of a complex system of independent binomial series or parallel subsystems and components. Repeated identical components or subsystems are also permitted. The method uses either test or prior data (perhaps both or neither) at the system, subsystem, and component levels. Beta prior distributions are assumed throughout. The method is

H. F. Martz; R. A. Waller

1990-01-01
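The Martz and Waller procedure propagates beta priors and binomial test data up through the system structure; the snippet below sketches only the component-level conjugate update that such methods build on (the prior and test counts are hypothetical):

```python
def beta_binomial_update(a, b, successes, failures):
    """Conjugate update: with a Beta(a, b) prior on a component's
    reliability and s successes / f failures in binomial tests,
    the posterior is Beta(a + s, b + f)."""
    return a + successes, b + failures

def beta_mean(a, b):
    """Posterior mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Hypothetical component: vague Beta(1, 1) prior, 48 passes in 50 tests.
a, b = beta_binomial_update(1.0, 1.0, 48, 2)
print(beta_mean(a, b))  # 49 / 52
```

Repeating this update at the component and subsystem levels, then combining through the series/parallel structure, is the essence of the Bayesian procedure the abstract describes.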

9

Maximum-reliability parallel-serial structure with two types of component faults  

SciTech Connect

We consider the problem of finding the maximum-reliability parallel-serial structure with constraints on the number of elements. Given are n identical elements subject to faults of two types: an "open" fault with probability p and a "short circuit" with probability q. The faults are assumed statistically independent. The elements are used to construct a parallel-serial network. What n-component structure produces the maximum reliability? Such problems have been studied, but the algorithms proposed by previous authors are inapplicable for large n. Thus, the enumeration method is totally hopeless even for relatively small n (n ≥ 100), and the complexity of the algorithm rapidly increases with n. In this paper we consider a somewhat different statement of the problem: find the maximum-reliability parallel-serial network with at most n elements. This problem is more difficult but more interesting than the previous one. First, we have to find the optimum not only among n-element networks but also among networks with fewer elements. Second, this is the proper formulation of the problem given constraints on the network parameters, e.g., on network cost or weight. We carry out a qualitative analysis of the problem and propose a solution algorithm that finds an exact or an approximate solution for virtually any n.

Kirilyuk, V.S.

1995-09-01

10

Parallel arrays of microtubules formed in electric and magnetic fields

Microsoft Academic Search

The influence of electric and magnetic fields on microtubule assembly in vitro was studied. Both types of field caused alignment of microtubules in parallel arrays, as demonstrated by electron micrographs. These findings suggest a possible role of microtubules in the biological effects of exogenous as well as endogenous

Peter M. Vassilev; Reni T. Dronzine; Maria P. Vassileva; Georgi A. Georgiev

1982-01-01

11

Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data  

PubMed Central

Horn’s parallel analysis (PA) is the method of consensus in the literature on empirical methods for deciding how many components/factors to retain. Different authors have proposed various implementations of PA. Horn’s seminal 1965 article, a 1996 article by Thompson and Daniel, and a 2004 article by Hayton et al., all make assertions about the requisite distributional forms of the random data generated for use in PA. Readily available software is used to test whether the results of PA are sensitive to several distributional prescriptions in the literature regarding the rank, normality, mean, variance, and range of simulated data on a portion of the National Comorbidity Survey Replication (Pennell et al., 2004) by varying the distributions in each PA. The results of PA were found not to vary by distributional assumption. The conclusion is that PA may be reliably performed with the computationally simplest distributional assumptions about the simulated data.

Dinno, Alexis

2009-01-01
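Horn's procedure itself is simple to state: retain a component only while its observed eigenvalue exceeds what random data of the same dimensions would produce. A minimal sketch using the mean-eigenvalue criterion on synthetic single-factor data (one of the distributional choices the article shows to be largely immaterial):

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_analysis(data, n_sims=200):
    """Horn's parallel analysis: count components whose observed
    correlation-matrix eigenvalues exceed the mean eigenvalues of
    simulated random normal data of the same shape."""
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        noise = rng.standard_normal((n, p))
        sims[i] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    return int(np.sum(obs > sims.mean(axis=0)))

# Synthetic data with one strong common factor across 6 variables.
factor = rng.standard_normal((500, 1))
data = factor + 0.5 * rng.standard_normal((500, 6))
k = parallel_analysis(data)
print(k)  # expect 1 retained component
```

Swapping `standard_normal` for a uniform or otherwise-shaped generator is exactly the kind of distributional variation the article tests and finds inconsequential.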

12

RELIABILITY OPTIMIZATION OF A NON-REPAIRABLE COMPOUND SERIES-PARALLEL SYSTEM  

Microsoft Academic Search

Product reliability and manufacturing cost are two essential factors in increasing competition within industries. Most studies on predicting product or system reliability are based on failure-rate models and assume that the components of a reliability system are independent of one another. The compatibility among components is thus ignored, making predictions of system reliability imprecise. This study, focusing primarily on a non-repairable

Chung-Ho Wang; Yi Hsu

2008-01-01

13

CONSTRUCTING PARALLEL SIMULATION EXERCISES FOR ASSESSMENT CENTERS AND OTHER FORMS OF BEHAVIORAL ASSESSMENT  

Microsoft Academic Search

Assessment centers rely on multiple, carefully constructed behavioral simulation exercises to measure individuals on multiple performance dimensions. Although methods for establishing parallelism among alternate forms of paper-and-pencil tests have been well researched (i.e., to equate tests on difficulty such that the scores can be compared), little research has considered the why and how of parallel simulation exercises. This

BRADLEY J. BRUMMEL; DEBORAH E. RUPP; SETH M. SPAIN

2009-01-01

14

Highly reliable 64-channel sequential and parallel tubular reactor system for high-throughput screening of heterogeneous catalysts  

Microsoft Academic Search

A highly reliable 64-channel sequential and parallel tubular reactor for high-throughput screening of heterogeneous catalysts is constructed with stainless steel. In order to have a uniform flow rate at each channel, 64 capillaries are placed between the outlet of a multiport valve and the inlet of each reactor. Flow rate can be controlled within +/-1.5%. Flow distribution can be easily adjusted for

Kwang Seok Oh; Yong Ki Park; Seong Ihl Woo

2005-01-01

15

Alternate form reliability and equivalency of the rey auditory verbal learning test  

Microsoft Academic Search

Assessed alternate form reliability and equivalency for the Rey Auditory Verbal Learning Test (AVLT) in a clinical sample. A test-retest, counterbalanced design was utilized with a diagnostically heterogeneous group of 85 VA Medical Center patients. The mean test-retest interval was 140 min. Alternate form reliability coefficients were highly significant, all p<.001, and ranged from .60 to .77. The forms yielded

Joseph J. Ryan; Michael E. Geisser; David M. Randall; Randy J. Georgemiller

1986-01-01

16

Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method  

ERIC Educational Resources Information Center

In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, the genetic algorithm (GA), to construct…

Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

2008-01-01

17

Reliabilities, Validities, and Cutoff Scores of the Depression Hopelessness Suicide Screening Form Among Women Offenders  

Microsoft Academic Search

Depression and hopelessness can be associated with negative outcomes among offenders, such as reduced treatment impact, institutional misconduct, suicide risk, and health care costs. This study evaluated the reliability and validity of the Depression Hopelessness Suicide Screening Form (DHS) among women offenders. The DHS Depression and Hopelessness scales showed good internal consistency and test-retest reliability. Convergent and discriminant validities were

Daryl G. Kroner; Tamara Kang; Jeremy F. Mills; Andrew J. R. Harris; Michelle M. Green

2011-01-01

18

Genomic evidence for the parallel evolution of coastal forms in the Senecio lautus complex.  

PubMed

Instances of parallel ecotypic divergence, where adaptation to similar conditions repeatedly causes similar phenotypic changes in closely related organisms, are useful for studying the role of ecological selection in speciation. Here we used a combination of traditional and next-generation genotyping techniques to test for the parallel divergence of plants from the Senecio lautus complex, a phenotypically variable groundsel that has adapted to disparate environments in the South Pacific. Phylogenetic analysis of a broad selection of Senecio species showed that members of the S. lautus complex form a distinct lineage that has diversified recently in Australasia. An inspection of thousands of polymorphisms in the genome of 27 natural populations from the S. lautus complex in Australia revealed a signal of strong genetic structure independent of habitat and phenotype. Additionally, genetic differentiation between populations was correlated with the geographical distance separating them, and the genetic diversity of populations strongly depended on geographical location. Importantly, coastal forms appeared in several independent phylogenetic clades, a pattern that is consistent with the parallel evolution of these forms. Analyses of the patterns of genomic differentiation between populations further revealed that adjacent populations displayed greater genomic heterogeneity than allopatric populations and are differentiated according to variation in soil composition. These results are consistent with a process of parallel ecotypic divergence in the face of gene flow. PMID:23710896

Roda, Federico; Ambrose, Luke; Walter, Gregory M; Liu, Huanle L; Schaul, Andrea; Lowe, Andrew; Pelser, Pieter B; Prentis, Peter; Rieseberg, Loren H; Ortiz-Barrientos, Daniel

2013-05-25

19

A hot carrier parallel testing technique to give a reliable extrapolation  

Microsoft Academic Search

A technique for wafer level parallel testing of hot-carrier lifetime has been developed. Transistors of the same dimensions were adjacently arranged on the same chip, and the lifetimes were extrapolated from their measured lifetime at a low drain voltage for practical use. This technique was applied to the optimization of LDD (lightly doped drain) sidewall thickness. This technique eliminates the

Norio Koike; Manko Ito; Hiroko Kuriyama

1990-01-01

20

Alternate Forms Reliability of the Assessment of Motor and Process Skills.  

ERIC Educational Resources Information Center

The alternate-forms reliability of the Assessment of Motor and Process Skills (AMPS) (A. Fisher, 1997), where alternate forms means different pairs of AMPS tasks, was studied with 91 people who had performed four AMPS tasks. Results support use of the AMPS activities of daily living motor and process scales. (SLD)

Kirkley, Karen N.; Fisher, Anne G.

1999-01-01

21

Parallel FE Approximation of the Even/Odd Parity Form of the Linear Boltzmann Equation  

SciTech Connect

A novel solution method has been developed to solve the linear Boltzmann equation on an unstructured triangular mesh. Instead of tackling the first-order form of the equation, this approach is based on the even/odd-parity form in conjunction with the conventional multigroup discrete-ordinates approximation. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, and the method is well suited for massively parallel computers.

Drumm, Clifton R.; Lorenz, Jens

1999-07-21
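For orientation, the even/odd-parity splitting the abstract refers to is standard in transport theory; the form below follows textbook presentations with isotropic scattering and is not necessarily the paper's exact multigroup discretization:

```latex
% Even- and odd-parity angular fluxes
\psi^{\pm}(\mathbf{r},\boldsymbol{\Omega})
  = \tfrac{1}{2}\left[\psi(\mathbf{r},\boldsymbol{\Omega})
      \pm \psi(\mathbf{r},-\boldsymbol{\Omega})\right]

% Second-order even-parity equation (isotropic scattering, source q)
-\,\boldsymbol{\Omega}\cdot\nabla\!\left(\frac{1}{\sigma_t}\,
    \boldsymbol{\Omega}\cdot\nabla\psi^{+}\right)
  + \sigma_t\,\psi^{+}
  = \frac{\sigma_s}{4\pi}\,\phi + q^{+}
    - \boldsymbol{\Omega}\cdot\nabla\!\left(\frac{q^{-}}{\sigma_t}\right),
\qquad
\phi = \int_{4\pi}\psi^{+}\,d\Omega
```

The second-order form is self-adjoint, which is what makes it attractive for finite element discretization and, as the abstract notes, for parallel solution of the coupled space-angle system.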

22

Reliability  

Microsoft Academic Search

This special volume of Statistical Sciences presents some innovative, if not provocative, ideas in the area of reliability, or perhaps more appropriately named, integrated system assessment. In this age of exponential growth in science, engineering and technology, the capability to evaluate the performance, reliability and safety of complex systems presents new challenges. Today's methodology must respond to the ever-increasing demands

Sallie Keller-McNulty; Alyson Wilson; Christine Anderson-Cook

2007-01-01

23

Highly reliable 64-channel sequential and parallel tubular reactor system for high-throughput screening of heterogeneous catalysts  

NASA Astrophysics Data System (ADS)

A highly reliable 64-channel sequential and parallel tubular reactor for high-throughput screening of heterogeneous catalysts is constructed with stainless steel. In order to have a uniform flow rate at each channel, 64 capillaries are placed between the outlet of a multiport valve and the inlet of each reactor. Flow rate can be controlled within +/-1.5%. Flow distribution can be easily adjusted for sequential and parallel modes of operation. The reactor diameter is too big to have a uniform temperature distribution; hence, the reactor body is separated into three radial zones that are controlled independently with nine thermocouples. Temperature accuracy is +/-0.5 °C at 300 °C and +/-1 °C at 500 °C in sequential mode, while it is +/-2.5 °C in the range of 250-500 °C in parallel mode. The temperature, flow rate, reaction sequence, and product analysis are controlled by LABVIEW™ software and monitored simultaneously with a live graph. The accuracy in the conversion is +/-2% at the level of 73% conversion when all reactors are loaded with the same amount of catalyst. A quaternary catalyst library of 56 samples composed of Pt, Cu, Fe, and Co supported on AlSBA-15 (SBA-15 substituted with Al) is evaluated in the selective catalytic reduction of NO at various temperatures with our system. The most active compositions are rapidly screened at various temperatures.

Oh, Kwang Seok; Park, Yong Ki; Woo, Seong Ihl

2005-06-01

24

Reliability  

Microsoft Academic Search

This special volume of Statistical Sciences presents some innovative, if not provocative, ideas in the area of reliability, or perhaps more appropriately named, integrated system assessment. In this age of exponential growth in science, engineering and technology, the capability to evaluate the performance, reliability and safety of complex systems presents new challenges. Today's methodology must respond to the ever-increasing demands

Sallie Keller-McNulty; Alyson Wilson; Christine Anderson-Cook

2006-01-01

25

The Alternate Forms Reliability of the New Tasks Added to the Assessment of Motor and Process Skills.  

ERIC Educational Resources Information Center

Evaluated the alternate forms reliability of new versus old tasks of the Assessment of Motor and Process Skills (AMPS) (A. Fisher, 1993). Participants were 44 persons from the AMPS database. Results support good alternate forms reliability of the motor and process ability measures and suggest that the newly calibrated tasks can be used reliably

Ellison, Stephanie; Fisher, Anne G.; Duran, Leslie

2001-01-01

26

Using the ASSIST Short Form for Evaluating an Information Technology Application: Validity and Reliability Issues  

Microsoft Academic Search

In this study, the Approaches and Study Skills Inventory for Students (ASSIST) short form was used to gain insight about learning style characteristics that might influence students' use of an online library of plant science learning objects. This study provides evidence concerning the internal consistency reliability and construct validity of the Deep, Strategic and Surface scale scores when used

Carol A. Speth; Deana M. Namuth; Donald J. Lee

2007-01-01

27

Reliability and Construct Validity of Alternate Forms of the Learning Style Inventory  

Microsoft Academic Search

Learning style assessment provides a framework within which individual differences for specific ways of learning can be described. Kolb's Learning Style Inventory has been used to assess learners' preferences for specific phases of an experiential learning cycle. This study was designed to determine the reliability and construct validity of an alternate form of the Learning Style Inventory using a semantic

Jon C. Marshall; Sharon L. Merritt

1985-01-01

28

Secure Internet Banking with Privacy Enhanced Mail - A Protocol for Reliable Exchange of Secured Order Forms  

Microsoft Academic Search

The Protocol for Reliable Exchange of Secured Order Forms is a model for securing today's favourite Internet service for business, the World-Wide Web, and its capability for exchanging order forms. Based on the PEM Internet standards (RFC 1421–1424) the protocol includes integrity of communication contents and authenticity of its origin, which allows for non-repudiation services, as well as confidentiality. It

Stephan Kolletzki

1996-01-01

29

Validity and reliability of the Food-Life Questionnaire. Short form.  

PubMed

Measures of beliefs and attitudes towards food need to be valid, and easy to use and interpret. The present study aimed to establish the validity and reliability of a short form of the Food-Life Questionnaire (FLQ). Participants (247 females; 118 males), recruited in South Australia, completed a questionnaire in 2012 incorporating the original FLQ, a revised short form (FLQ-SF), and measures of food choice and consumption. Validity (construct, criterion-related, and incremental) and reliability (internal consistency and short-form) were assessed. Factor analysis established that short-form items loaded onto five factors consistent with the original FLQ and explained 60% of variance. Moderate correlations were observed between the FLQ-SF and a measure of food choices (r=.32-.64), and the FLQ-SF predicted unhealthy food consumption over and above the full FLQ, demonstrating criterion-related and incremental validity, respectively. The final FLQ-SF included 21 items and had a Cronbach's alpha of .75. Short-form reliability was established with correlations between corresponding subscales of the FLQ and FLQ-SF ranging from r=.64-.84. Overall, the FLQ-SF is brief, psychometrically robust, and easy to administer. It should be considered an important tool in research informing public policies and programs that aim to improve food choices. PMID:23856433

Sharp, Gemma; Hutchinson, Amanda D; Prichard, Ivanka; Wilson, Carlene

2013-07-12
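The internal-consistency figure quoted above (alpha = .75) is Cronbach's alpha. A minimal sketch of the statistic on synthetic item scores (the data below are simulated, not from the FLQ):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.standard_normal((300, 1))
scores = trait + rng.standard_normal((300, 5))  # 5 noisy items, one trait
alpha = cronbach_alpha(scores)
print(round(alpha, 2))
```

With a single common trait and unit noise, alpha lands near the k·r̄/(1+(k-1)r̄) value implied by the average inter-item correlation, which is how scale length and item quality trade off in short forms like the FLQ-SF.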

30

Validity and Reliability of MOS Short Form Health Survey (SF-36) for Use in India  

PubMed Central

Background: Health is defined as a state of complete physical, mental and social well-being rather than just the absence of disease or infirmity. In order to measure health in the community, a reliable and validated instrument is required. Objectives: To adapt and translate the Medical Outcomes Study Short-Form Health Survey (SF-36) for use in India, to study its validity and reliability and to explore its higher-order factor structure. Materials and Methods: Face-to-face interviews were conducted with 184 adult subjects by two trained interviewers. Statistical analyses for establishing item-level validity, scale-level validity and reliability and tests of known-group comparison were performed. The higher-order factor structure was investigated using principal component analysis with varimax rotation. Results: The questionnaire was well understood by the respondents. Item-level validity was established using tests of item internal consistency, equality of item-scale correlations and item-discriminant validity. Tests of scale-level validity and reliability performed well as all the scales met the required internal consistency criteria. Tests of known-group comparison discriminated well across groups differing in socio-demographic and clinical variables. The higher-order factor structure was found to comprise two factors, with factor loadings similar to those observed in other Asian countries. Conclusion: The item- and scale-level statistical analyses supported the validity and reliability of the SF-36 for use in India.

Sinha, Richa; van den Heuvel, Wim J A; Arokiasamy, Perianayagam

2013-01-01

31

Computer-assisted image analysis for measuring body segmental angles during a static strength element on parallel bars: validity and reliability  

Microsoft Academic Search

This study aimed to introduce a technique using computer-assisted image analysis for measuring body segmental angles during a static strength element on parallel bars. Criterion validity and intra-rater reliability of measurements were evaluated using digital photography, skin markers and a gravity-reference goniometer. Twenty male former gymnasts participated in this study. They performed a strength hold element on parallel bars (V-sit)

Theophanis Siatras

2011-01-01

32

Geriatric Depression Scale-Short Form–Validity and Reliability of the Hebrew Version  

Microsoft Academic Search

This study evaluated the validity and reliability of the Hebrew version of the Geriatric Depression Scale Short Form (GDS-SF) in an Israeli geriatric population. Twenty-seven inpatients with a diagnosis of major depression according to the DSM-IV criteria and 21 healthy volunteers were assessed with the GDS-SF Hebrew Version, Hamilton Depression Rating Scale (HAM-D), and Mini-Mental State Examination (MMSE). The GDS-SF

G. Zalsman; D. Aizenberg; M. Sigler; E. Nahshoni; A. Weizman

1998-01-01

33

Microelectromechanical filter formed from parallel-connected lattice networks of contour-mode resonators  

DOEpatents

A microelectromechanical (MEM) filter is disclosed which has a plurality of lattice networks formed on a substrate and electrically connected together in parallel. Each lattice network has a series resonant frequency and a shunt resonant frequency provided by one or more contour-mode resonators in the lattice network. Different types of contour-mode resonators, including single-input single-output resonators, differential resonators, balun resonators, and ring resonators, can be used in the MEM filter. The MEM filter can have a center frequency in the range of 10 MHz-10 GHz, with a filter bandwidth of up to about 1% when all of the lattice networks have the same series resonant frequency and the same shunt resonant frequency. The filter bandwidth can be increased up to about 5% by using unique series and shunt resonant frequencies for the lattice networks.

Wojciechowski, Kenneth E; Olsson, III, Roy H; Ziaei-Moayyed, Maryam

2013-07-30

34

Solution structure of all parallel G-quadruplex formed by the oncogene RET promoter sequence  

PubMed Central

RET protein functions as a receptor-type tyrosine kinase and has been found to be aberrantly expressed in a wide range of human diseases. A highly GC-rich region upstream of the promoter plays an important role in the transcriptional regulation of RET. Here, we report the NMR solution structure of the major intramolecular G-quadruplex formed on the G-rich strand of this region in K+ solution. The overall G-quadruplex is composed of three stacked G-tetrads and four syn guanines, which shows distinct features for an all-parallel-stranded folding topology. The core structure contains one G-tetrad with all syn guanines and two others with all anti guanines. There are three double-chain reversal loops: the first and the third loops are made of 3-nt G-C-G segments, while the second one contains only the 1-nt C10. These loops interact with the core G-tetrads in a specific way that defines and stabilizes the overall G-quadruplex structure, and their conformations are in accord with the mutational experiments. The distinct RET promoter G-quadruplex structure suggests that it can be specifically involved in gene regulation and can be an attractive target for pathway-specific drug design.

Tong, Xiaotian; Lan, Wenxian; Zhang, Xu; Wu, Houming; Liu, Maili; Cao, Chunyang

2011-01-01

35

G-quadruplexes form ultrastable parallel structures in deep eutectic solvent.  

PubMed

G-quadruplex DNA is highly polymorphic. Its conformational transitions are involved in a series of important life events. These controllable diverse structures also make G-quadruplex DNA a promising candidate as a catalyst, biosensor, and DNA-based architecture. So far, G-quadruplex DNA-based applications have been restricted to aqueous media. Since many chemical reactions and devices are required to be performed under strictly anhydrous conditions, even at high temperature, it is challenging and meaningful to study G-quadruplex DNA in a water-free medium. In this report, we systematically studied 10 representative G-quadruplexes in anhydrous room-temperature deep eutectic solvents (DESs). The results indicate that intramolecular, intermolecular, and even higher-order G-quadruplex structures can be formed in DES. Intriguingly, in DES, the parallel structure becomes the preferred G-quadruplex DNA conformation. More importantly, compared to aqueous media, G-quadruplex has ultrastability in DES and, surprisingly, some G-quadruplex DNA can survive even beyond 110 °C. Our work would shed light on the applications of G-quadruplex DNA to chemical reactions and DNA-based devices performed in an anhydrous environment, even at high temperature. PMID:23282194

Zhao, Chuanqi; Ren, Jinsong; Qu, Xiaogang

2013-01-11

36

Human telomeric DNA forms parallel-stranded intramolecular G-quadruplex in K+ solution under molecular crowding condition.  

PubMed

The G-rich strand of human telomeric DNA can fold into a four-stranded structure called a G-quadruplex and inhibit telomerase activity, which is expressed in 85-90% of tumor cells. For this reason, the telomere quadruplex is emerging as a potential therapeutic target for cancer. Information on the structure of the quadruplex in the physiological environment is important for structure-based drug design targeting the quadruplex. Recent studies have raised significant controversy regarding the exact structure of the quadruplex formed by human telomeric DNA in a physiologically relevant environment. Studies on the crystal prepared in K+ solution revealed a distinct propeller-shaped parallel-stranded conformation. However, many later works failed to confirm such a structure in physiological K+ solution but rather led to the identification of a different hybrid-type mixed parallel/antiparallel quadruplex. Here we demonstrate that human telomeric DNA adopts a parallel-stranded conformation in physiological K+ solution under molecular crowding conditions created by PEG. At a concentration of 40% (w/v), PEG induced complete structural conversion to a parallel-stranded G-quadruplex. We also show that the quadruplex formed under such a condition has unusual stability and a significant negative impact on telomerase processivity. Since the environment inside cells is molecularly crowded, our results obtained under this cell-mimicking condition suggest that the parallel-stranded quadruplex may be the more favored structure under physiological conditions, and drug design targeting the human telomeric quadruplex should take this into consideration. PMID:17705383

Xue, Yong; Kan, Zhong-yuan; Wang, Quan; Yao, Yuan; Liu, Jiang; Hao, Yu-hua; Tan, Zheng

2007-08-17

37

Multiple clusters of release sites formed by individual thalamic afferents onto cortical interneurons ensure reliable transmission  

PubMed Central

Summary: Thalamic afferents supply the cortex with sensory information by contacting both excitatory neurons and inhibitory interneurons. Interestingly, thalamic contacts with interneurons constitute such a powerful synapse that even one afferent can fire interneurons, thereby driving feedforward inhibition. However, the spatial representation of this potent synapse on interneuron dendrites is poorly understood. Using Ca imaging and electron microscopy we show that an individual thalamic afferent forms multiple contacts with the interneuronal proximal dendritic arbor, preferentially near branch points. More contacts are correlated with larger amplitude synaptic responses. Each contact, consisting of a single bouton, can release up to 7 vesicles simultaneously, resulting in graded and reliable Ca transients. Computational modeling indicates that the release of multiple vesicles at each contact minimally reduces the efficiency of the thalamic afferent in exciting the interneuron. This strategy preserves the spatial representation of thalamocortical inputs across the dendritic arbor over a wide range of release conditions.

Bagnall, Martha W.; Hull, Court; Bushong, Eric A.; Ellisman, Mark H.; Scanziani, Massimo

2012-01-01

38

Closed-form forward position kinematics for a (3-1-1-1)2 fully parallel manipulator  

Microsoft Academic Search

This paper derives closed-form equations to solve the forward position kinematics of fully parallel manipulators with a 3-1-1-1 architecture on both the (planar) base and top platforms; the tetrahedrons emerging from the base and top platforms must have one leg in common. The closed-form solutions allow real-time execution on standard computer hardware: since only a few linear and quadratic equations

Herman Bruyninckx

1998-01-01

39

Parallel Computing of Multi-scale Finite Element Sheet Forming Analyses Based on Crystallographic Homogenization Method  

SciTech Connect

Since multi-scale finite element analysis (FEA) requires large computation time, development of parallel computing techniques for the multi-scale analysis is indispensable. A parallel elastic/crystalline viscoplastic FEA code based on a crystallographic homogenization method has been developed using a PC cluster. The homogenization scheme is introduced to compute macro-continuum plastic deformations and material properties by considering a polycrystal texture. Since the dynamic explicit method is applied, the analysis using micro crystal structures computes the homogenized stresses in parallel based on domain partitioning of the macro-continuum, without solving simultaneous linear equations. The micro-structure is defined by crystal orientations measured by scanning electron microscopy (SEM) and electron backscatter diffraction (EBSD). In order to improve the parallel performance of the elastoplasticity analysis, whose computational cost increases dynamically and unevenly during the analysis, a dynamic workload balancing technique is introduced. The technique, an automatic task distribution method, adapts the macro-continuum subdomain sizes to maintain computational load balance among cluster nodes. The analysis code is applied to estimate the formability of polycrystalline sheet metal.
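The dynamic workload-balancing idea described above, redistributing subdomain work as per-subdomain costs change, can be illustrated with a simple greedy heuristic. This is only a sketch of the general technique (longest-processing-time assignment), not the authors' actual task-distribution method, and the cost values are hypothetical:

```python
def rebalance(costs, n_nodes):
    """Assign subdomain costs to cluster nodes with the
    longest-processing-time (LPT) greedy heuristic: place each
    subdomain, heaviest first, on the currently least-loaded node."""
    loads = [0.0] * n_nodes
    assignment = {}
    for i in sorted(range(len(costs)), key=lambda i: -costs[i]):
        node = min(range(n_nodes), key=loads.__getitem__)
        assignment[i] = node
        loads[node] += costs[i]
    return assignment, loads

# Example: five subdomains whose elastoplastic cost grew unevenly
# during the analysis (illustrative numbers).
assignment, loads = rebalance([5.0, 4.0, 3.0, 3.0, 3.0], n_nodes=2)
```

A real implementation would re-run such an assignment periodically, using measured per-subdomain wall-clock times as the costs and migrating elements between nodes accordingly.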

Kuramae, Hiroyuki; Okada, Kenji; Uetsuji, Yasutomo; Nakamachi, Eiji [Osaka Institute of Technology, 5-16-1, Omiya, Asahi-ku, Osaka 535-8585 (Japan); Tam, Nguyen Ngoc; Nakamura, Yasunori [Osaka Sangyou University, 3-1-1, Nakagaito, Daito, Osaka 574-8530 (Japan)

2005-08-05

40

Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data  

ERIC Educational Resources Information Center

Horn's parallel analysis (PA) is the method of consensus in the literature on empirical methods for deciding how many components/factors to retain. Different authors have proposed various implementations of PA. Horn's seminal 1965 article, a 1996 article by Thompson and Daniel, and a 2004 article by Hayton, Allen, and Scarpello all make…
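Horn's PA can be sketched in a few lines. This is a minimal illustration that assumes normally distributed random data (the very distributional choice whose impact the article examines) and retains components whose observed eigenvalue exceeds the mean random eigenvalue; it is one common implementation, not any author's reference code:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis: count components whose observed
    correlation-matrix eigenvalue exceeds the mean eigenvalue of
    random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, descending.
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.zeros((n_iter, p))
    for i in range(n_iter):
        r = rng.normal(size=(n, p))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    threshold = rand_eig.mean(axis=0)
    return int(np.sum(obs_eig > threshold))
```

Variants replace the mean threshold with the 95th percentile of the random eigenvalues, or draw the random data from a different distribution, which is exactly the sensitivity question the article explores.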

Dinno, Alexis

2009-01-01

41

The reliability of speeded tests  

Microsoft Academic Search

Some methods are presented for estimating the reliability of a partially speeded test without the use of a parallel form. The effect of these formulas on some test data is illustrated. Whenever an odd-even reliability is computed it is probably desirable to use one of the formulas noted in Section 2 of this paper in addition to the usual Spearman-Brown
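The odd-even computation the abstract refers to can be sketched as follows; as the abstract notes, this standard split-half estimate is problematic for speeded tests, which is why Gulliksen's corrective formulas are needed. A minimal sketch of the usual procedure (synthetic data, illustrative only):

```python
import numpy as np

def split_half_reliability(items):
    """Odd-even split-half reliability stepped up to full test length
    by the Spearman-Brown formula: r_full = 2 * r_half / (1 + r_half).
    `items` is an n_persons x k_items score matrix."""
    items = np.asarray(items, dtype=float)
    odd = items[:, 0::2].sum(axis=1)    # score on odd-numbered items
    even = items[:, 1::2].sum(axis=1)   # score on even-numbered items
    r = np.corrcoef(odd, even)[0, 1]
    return 2.0 * r / (1.0 + r)
```

For a purely speeded test the odd and even halves are nearly perfectly correlated by construction (unreached items are consecutive), so this estimate approaches 1.0 regardless of the test's true precision.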

Harold Gulliksen

1950-01-01

42

A Parallel GNFS Algorithm Based on a Reliable Look-Ahead Block Lanczos Method for Integer Factorization  

Microsoft Academic Search

The Rivest-Shamir-Adleman (RSA) algorithm is a very popular and secure public key cryptosystem, but its security relies on the difficulty of factoring large integers. The General Number Field Sieve (GNFS) algorithm is currently the best known method for factoring large integers over 110 digits. Our previous work on the parallel GNFS algorithm, which integrated Montgomery's block Lanczos method to

Laurence Tianruo Yang; Li Xu; Man Lin; John P. Quinn

2006-01-01

43

A reliability study of springback on the sheet metal forming process under probabilistic variation of prestrain and blank holder force  

NASA Astrophysics Data System (ADS)

This work deals with a reliability assessment of the springback problem during the sheet metal forming process. The effects of operative parameters and material properties (blank holder force and plastic prestrain) on springback are investigated. A generic reliability approach was developed to control springback. Subsequently, the Monte Carlo simulation technique, in conjunction with Latin hypercube sampling, was adopted to study probabilistic springback. A finite element method based on implicit/explicit algorithms was used to model the springback problem. The proposed constitutive law for sheet metal takes into account the adaptation of the plastic parameters of the hardening law for each prestrain level considered. The Rackwitz-Fiessler algorithm is used to find reliability properties from response surfaces of chosen springback geometrical parameters. The obtained results were analyzed using multi-state limit reliability functions based on geometry compensation.
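The Latin hypercube Monte Carlo step can be sketched as follows. Only the sampling scheme is the point here: the springback response surface, its coefficients, the parameter ranges, and the tolerance limit below are all made up for illustration and bear no relation to the study's model:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Stratified uniform samples on [0,1)^d: one point per
    equal-probability stratum in each dimension, with strata
    permuted independently across dimensions."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

rng = np.random.default_rng(0)
u = latin_hypercube(1000, 2, rng)
prestrain = 0.10 * u[:, 0]          # plastic prestrain, uniform on [0, 10%] (hypothetical)
bhf = 5.0 + 15.0 * u[:, 1]          # blank holder force in kN (hypothetical range)
springback = 2.0 + 30.0 * prestrain - 0.05 * bhf   # made-up response surface
p_exceed = np.mean(springback > 3.0)  # estimated probability of exceeding tolerance
```

Compared with plain Monte Carlo, the stratification guarantees the full range of each input is covered, so failure-probability estimates converge with far fewer (expensive) finite element evaluations.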

Mrad, Hatem; Bouazara, Mohamed; Aryanpour, Gholamreza

2013-08-01

44

Comparisons between Classical Test Theory and Item Response Theory in Automated Assembly of Parallel Test Forms  

ERIC Educational Resources Information Center

The automated assembly of alternate test forms for online delivery provides an alternative to computer-administered fixed test forms or computerized-adaptive tests when a testing program migrates from paper/pencil testing to computer-based testing. The weighted deviations model (WDM) heuristic is particularly promising for automated test assembly…

Lin, Chuan-Ju

2008-01-01

45

Reliability of the International Physical Activity Questionnaire in Research Settings: Last 7-Day Self-Administered Long Form  

ERIC Educational Resources Information Center

The purpose of this study was to examine the test-retest reliability of the last 7-day long form International Physical Activity Questionnaire (Craig et al., 2003) and to examine the construct validity for the measure in a research setting. Participants were 151 male (n = 52) and female (n = 99) university students (M age = 24.15 years, SD = 5.01)…

Levy, Susan S.; Readdy, R. Tucker

2009-01-01

46

The Reliability and Validity of a Chinese-Translated Version of the Gifted Rating Scale- Preschool/Kindergarten Form  

ERIC Educational Resources Information Center

This study examines the reliability and validity of a Chinese-translated version of the Gifted Rating Scales-Preschool/Kindergarten Form (GRS-P) and explores the effect of gender and age on each of the subscales. Data were collected from 250 kindergarten children, with age ranging from 4 years, 0 months to 6 years, 11 months. Results indicated…

Siu, Angela F. Y.

2010-01-01

47

Alternate forms of the auditory-verbal learning test: issues of test comparability, longitudinal reliability, and moderating demographic variables  

Microsoft Academic Search

The present investigation examines the alternate-form and longitudinal reliability of two versions of the Auditory-Verbal Learning Test (AVLT) on a large, multiregional, healthy male sample. Subjects included 2,059 bisexual and homosexual HIV-seronegative males recruited from the Multicenter AIDS Cohort Study from centers in Baltimore, Chicago, Los Angeles, and Pittsburgh. The findings revealed no significant differences between forms upon initial or

Craig Lyons Uchiyama; Louis F. D'Elia; Ann M. Dellinger; James T. Becker; Ola A. Selnes; Jerry E. Wesch; Bai Bai Chen; Paul Satz; Wilfred van Gorp; Eric N. Miller

1995-01-01

48

From a Direct Solver to a Parallel Iterative Solver in 3-D Forming Simulation  

Microsoft Academic Search

The industrial simulation code FORGE3, devoted to three-dimensional forming applications, has been equipped with a robust iterative solver. Using an implicit approach, this code carries out the large deformations of viscoplastic incompressible materials with unilateral contact conditions. It is based on a mixed velocity/pressure finite element method using tetrahedral unstructured meshes. Central to the Newton iterations dealing with

Thierry Coupez; Stéphane Marie

1997-01-01

49

Role category questionnaire measures of cognitive complexity: Reliability and comparability of alternative forms  

Microsoft Academic Search

The measure of cognitive complexity (construct differentiation) based on the standard two-peer version of Crockett's Role Category Questionnaire (RCQ) was found to have high four-week test-retest reliability under conditions of either strict enforcement of a time limit for completion of the RCQ

Daniel J. O'Keefe; Gregory J. Shepherd; Thomas Streeter

1982-01-01

50

Psychometric Properties of the Social Problem Solving Inventory-Revised Short-Form: Is the Short Form a Valid and Reliable Measure for Young Adults?  

Microsoft Academic Search

The purpose of the present study was to examine the psychometric properties of the Social Problem-Solving Inventory-Revised Short-Form (SPSI-R:SF), a 25-item self-report measure of real-life social problem-solving ability. A sample of 219 Australian university students aged 16–25 years participated in the study. The reliability of the SPSI-R:SF scales was adequate to excellent. Evidence was demonstrated for convergent validity and divergent

Deanne Hawkins; Kate Sofronoff; Jeanie Sheffield

2009-01-01

51

Self-Formed Barrier with Cu-Mn alloy Metallization and its Effects on Reliability  

SciTech Connect

Advancement of semiconductor devices requires the realization of an ultra-thin (less than 5 nm thick) diffusion barrier layer between Cu interconnect and insulating layers. Self-forming barrier layers have been considered as an alternative barrier structure to the conventional Ta/TaN barrier layers. The present work investigated the possibility of the self-forming barrier layer using Cu-Mn alloy thin films deposited directly on SiO2. After annealing at 450 °C for 30 min, an amorphous oxide layer of 3-4 nm in thickness was formed uniformly at the interface. The oxide formation was accompanied by complete expulsion of Mn atoms from the Cu-Mn alloy, leading to a drastic decrease in resistivity of the film. No interdiffusion was observed between Cu and SiO2, indicating an excellent diffusion-barrier property of the interface oxide.

Koike, J.; Wada, M. [Dept. of Materials Science, Tohoku University, Sendai 980-8579 (Japan); Usui, T.; Nasu, H.; Takahashi, S.; Shimizu, N.; Yoshimaru, M.; Shibata, H. [Semiconductor Technology Academic Research Center (STARC), Yokohama, 222-0033 (Japan)

2006-02-07

52

Specific sequences from the carboxyl terminus of human p53 gene product form anti-parallel tetramers in solution.  

PubMed Central

Human p53 is a tumor-suppressor gene product associated with control of the cell cycle and with growth suppression, and it is known to form homotetramers in solution. To investigate the relationship of structure to tetramerization, nine peptides corresponding to carboxyl-terminal sequences in human p53 were chemically synthesized, and their equilibrium associative properties were determined by analytical ultracentrifugation. Secondary structure, as determined by circular dichroism measurements, was correlated with oligomerization properties of each peptide. The sedimentation profiles of peptides 319-393 and 319-360 fit a two-state model of peptide monomers in equilibrium with peptide tetramers. Successive deletion of amino- and carboxyl-terminal residues from 319-360 reduced tetramer formation. Further, substitution of alanine for Leu-323, Tyr-327, and Leu-330 abolished tetramerization. Circular dichroism studies showed that peptide 319-351 had the highest alpha-helix content, while the other peptides that did not form tetramers had low helical structure. These studies define a minimal region and identify certain critical residues involved in tetramerization. Cross-linking studies between monomer units in the tetramer suggest that the helices adopt an anti-parallel arrangement. We propose that conformational shifts in the helical structure of the p53 tetramerization domain result in a repositioning of subunits relative to one another. This repositioning provides an explanation relating conformational changes at the carboxyl terminus with changes in sequence-specific DNA binding by the highly conserved central domain.

Sakamoto, H; Lewis, M S; Kodama, H; Appella, E; Sakaguchi, K

1994-01-01

53

The Four Canonical TPR Subunits of Human APC/C Form Related Homo-Dimeric Structures and Stack in Parallel to Form a TPR Suprahelix.  

PubMed

The anaphase-promoting complex or cyclosome (APC/C) is a large E3 RING-cullin ubiquitin ligase composed of between 14 and 15 individual proteins. A striking feature of the APC/C is that only four proteins are involved in directly recognizing target proteins and catalyzing the assembly of a polyubiquitin chain. All other subunits, which account for >80% of the mass of the APC/C, provide scaffolding functions. A major proportion of these scaffolding subunits are structurally related. In metazoans, there are four canonical tetratricopeptide repeat (TPR) proteins that form homo-dimers (Apc3/Cdc27, Apc6/Cdc16, Apc7 and Apc8/Cdc23). Here, we describe the crystal structure of the N-terminal homo-dimerization domain of Schizosaccharomyces pombe Cdc23 (Cdc23(Nterm)). Cdc23(Nterm) is composed of seven contiguous TPR motifs that self-associate through a related mechanism to those of Cdc16 and Cdc27. Using the Cdc23(Nterm) structure, we generated a model of full-length Cdc23. The resultant "V"-shaped molecule docks into the Cdc23-assigned density of the human APC/C structure determined using negative stain electron microscopy (EM). Based on sequence conservation, we propose that Apc7 forms a homo-dimeric structure equivalent to those of Cdc16, Cdc23 and Cdc27. The model is consistent with the Apc7-assigned density of the human APC/C EM structure. The four canonical homo-dimeric TPR proteins of human APC/C stack in parallel on one side of the complex. Remarkably, the uniform relative packing of neighboring TPR proteins generates a novel left-handed suprahelical TPR assembly. This finding has implications for understanding the assembly of other TPR-containing multimeric complexes. PMID:23583778

Zhang, Ziguo; Chang, Leifu; Yang, Jing; Conin, Nora; Kulkarni, Kiran; Barford, David

2013-04-11

54

The Short-Form McGill Pain Questionnaire as an outcome measure: Test–retest reliability and responsiveness to change  

Microsoft Academic Search

Abilities of the Short-Form McGill Pain Questionnaire to assess change have scarcely been addressed in previous studies. The aim of the present study was to examine test–retest reliability, sensitivity to change and responsiveness to clinically important change using a Norwegian version (NSF-MPQ) in different groups of patients. ICC(1,1) values for test–retest reliability (relative reliability) assessed 1–3 days apart for total,
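The ICC(1,1) index the abstract reports for relative test-retest reliability is the one-way random-effects intraclass correlation. A minimal sketch of the standard Shrout-Fleiss computation on an n_subjects x k_occasions score matrix (illustrative, not the study's code):

```python
import numpy as np

def icc_1_1(x):
    """ICC(1,1), one-way random-effects intraclass correlation
    (Shrout & Fleiss case 1): (BMS - WMS) / (BMS + (k-1) * WMS),
    where BMS/WMS are the between/within-subject mean squares."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    bms = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
    wms = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (bms - wms) / (bms + (k - 1) * wms)
```

With identical scores on both occasions the within-subject mean square is zero and the ICC is exactly 1; systematic shifts between occasions lower it, which is why ICC(1,1) is described as an index of relative rather than absolute agreement.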

Liv Inger Strand; Anne Elisabeth Ljunggren; Baard Bogen; Tove Ask; Tom Backer Johnsen

2008-01-01

55

Initial validation of the Spanish childhood trauma questionnaire-short form: factor structure, reliability and association with parenting.  

PubMed

The present study examines the internal consistency and factor structure of the Spanish version of the Childhood Trauma Questionnaire-Short Form (CTQ-SF) and the association between the CTQ-SF subscales and parenting style. Cronbach's α and confirmatory factor analyses (CFA) were performed in a female clinical sample (n = 185). Kendall's τ correlations were calculated between the maltreatment and parenting scales in a subsample of 109 patients. The Spanish CTQ-SF showed adequate psychometric properties and a good fit of the 5-factor structure. The neglect and abuse scales were negatively associated with parental care and positively associated with overprotection scales. The results of this study provide initial support for the reliability and validity of the Spanish CTQ-SF. PMID:23266990

Hernandez, Ana; Gallardo-Pujol, David; Pereda, Noemí; Arntz, Arnoud; Bernstein, David P; Gaviria, Ana M; Labad, Antonio; Valero, Joaquín; Gutiérrez-Zotes, Jose Alfonso

2012-12-24

56

Reliability and psychometric properties of the Greek translation of the State-Trait Anxiety Inventory form Y: Preliminary data  

PubMed Central

Background: The State-Trait Anxiety Inventory form Y is a brief self-rating scale for the assessment of state and trait anxiety. The aim of the current preliminary study was to assess the psychometric properties of its Greek translation. Materials and methods: 121 healthy volunteers 27.22 ± 10.61 years old and 22 depressed patients 29.48 ± 9.28 years old entered the study. In 20 of them the instrument was re-applied 1–2 days later. Translation and back-translation were performed. The clinical diagnosis was reached with the SCAN v.2.0 and the IPDE. The Symptoms Rating Scale for Depression and Anxiety (SRSDA) and the EPQ were applied for cross-validation purposes. The statistical analysis included the Pearson correlation coefficient and the calculation of Cronbach's alpha. Results: The State score for healthy subjects was 34.30 ± 10.79 and the Trait score was 36.07 ± 10.47. The respective scores for the depressed patients were 56.22 ± 8.86 and 53.83 ± 10.87. Both State and Trait scores followed the normal distribution in control subjects. Cronbach's alpha was 0.93 for the State and 0.92 for the Trait subscale. The Pearson correlation coefficient between the State and Trait subscales was 0.79. Both subscales correlated fairly with the anxiety subscale of the SRSDA. Test-retest reliability was excellent, with the Pearson coefficient being between 0.75 and 0.98 for individual items and equal to 0.96 for State and 0.98 for Trait. Conclusion: The current study provided preliminary evidence concerning the reliability and the validity of the Greek translation of the STAI-form Y. Its properties are generally similar to those reported in the international literature, but further research is necessary.
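Internal-consistency figures like the 0.93/0.92 reported above come from the standard item-variance formula for Cronbach's alpha. A minimal sketch on synthetic data (not the STAI items):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an n_persons x k_items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Synthetic "scale": 20 items, each a noisy reading of one latent trait.
rng = np.random.default_rng(0)
trait = rng.normal(size=(500, 1))
items = trait + 0.7 * rng.normal(size=(500, 20))
alpha = cronbach_alpha(items)
```

Alpha rises with the number of items and with their average intercorrelation, which is why a 20-item state subscale can reach the 0.9 range even when individual item correlations are modest.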

Fountoulakis, Konstantinos N; Papadopoulou, Marina; Kleanthous, Soula; Papadopoulou, Anna; Bizeli, Vasiliki; Nimatoudis, Ioannis; Iacovides, Apostolos; Kaprinis, George S

2006-01-01

57

Comparison of Educators' and Industrial Managers' Work Motivation Using Parallel Forms of the Work Components Study Questionnaire.  

ERIC Educational Resources Information Center

The idea that educators would differ from business managers on Herzberg's motivation factors and Blum's security orientations was posited. Parallel questionnaires were used to measure the motivational variables. The sample was composed of 432 teachers, 118 administrators, and 192 industrial managers. Data were analyzed using multivariate and…

Thornton, Billy W.; And Others

58

Binding of oligonucleotides to a viral hairpin forming RNA triplexes with parallel G*G·C triplets  

PubMed Central

Infrared and UV spectroscopies have been used to study the assembly of a hairpin nucleotide sequence (nucleotides 3–30) of the 5′ non-coding region of the hepatitis C virus RNA (5′-GGCGGGGAUUAUCCCCGCUGUGAGGCGG-3′) with a RNA 20mer ligand (5′-CCGCCUCACAAAGGUGGGGU-3′) in the presence of magnesium ion and spermidine. The resulting complex involves two helical structural domains: the first one is an intermolecular duplex stem at the bottom of the target hairpin and the second one is a parallel triplex generated by the intramolecular hairpin duplex and the ligand. Infrared spectroscopy shows that N-type sugars are exclusively present in the complex. This is the first case of formation of a RNA parallel triplex with a purine motif and shows that this type of targeting of RNA strands to viral RNA duplexes can be used as an alternative to antisense oligonucleotides or ribozymes.

Carmona, Pedro; Molina, Marina

2002-01-01

59

Verbal and Visual Parallelism  

ERIC Educational Resources Information Center

This study investigates the practice of presenting multiple supporting examples in parallel form. The elements of parallelism and its use in argument were first illustrated by Aristotle. Although real texts may depart from the ideal form for presenting multiple examples, rhetorical theory offers a rationale for minimal, parallel presentation. The…

Fahnestock, Jeanne

2003-01-01

60

Test reliability and effective test length  

Microsoft Academic Search

Measures of effective test length are developed for speeded and power tests, which are independent of the number of items in the test or of the time required for administration. These measures are used in determining reliability for (1) speeded and power tests, where a separately timed short parallel form is administered in addition to the full-length test; (2) power

William H. Angoff

1953-01-01

61

PARALLEL STRINGS - PARALLEL UNIVERSES  

Microsoft Academic Search

Sometimes different parts of the battery community just don't seem to operate on the same level, and attitudes towards parallel battery strings are a prime example of this. Engineers at telephone company central offices are quite happy operating 20 or more parallel strings on the same dc bus, while many manufacturers warn against connecting more than four or five strings

Jim McDowall; Saft America

62

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 1. Technical Report #1216  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due…

Anderson, Daniel; Park, Bitnara Jasmine; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

2012-01-01

63

Parent Ratings Using the Chinese Version of the Parent Gifted Rating Scales-School Form: Reliability and Validity for Chinese Students  

ERIC Educational Resources Information Center

This study examined the reliability and validity of the scores of a Chinese-translated version of the Gifted Rating Scales-School Form (GRS-S) using parents as raters and explored the effects of gender and grade on the ratings. A total of 222 parents participated in the study and rated their child independently using the Chinese version of the…

Li, Huijun; Lee, Donghyuck; Pfeiffer, Steve I.; Petscher, Yaacov

2008-01-01

64

RAD9 and DNA polymerase epsilon form parallel sensory branches for transducing the DNA damage checkpoint signal in Saccharomyces cerevisiae.  

PubMed

In response to DNA damage and replication blocks, yeast cells arrest at distinct points in the cell cycle and induce the transcription of genes whose products facilitate DNA repair. Examination of the inducibility of RNR3 in response to UV damage has revealed that the various checkpoint genes can be arranged in a pathway consistent with their requirement to arrest cells at different stages of the cell cycle. While RAD9, RAD24, and MEC3 are required to activate the DNA damage checkpoint when cells are in G1 or G2, POL2 is required to sense UV damage and replication blocks when cells are in S phase. The phosphorylation of the essential central transducer, Rad53p, is dependent on POL2 and RAD9 in response to UV damage, indicating that RAD53 functions downstream of both these genes. Mutants defective for both pathways are severely deficient in Rad53p phosphorylation and RNR3 induction and are significantly more sensitive to DNA damage and replication blocks than single mutants alone. These results show that POL2 and RAD9 function in parallel branches for sensing and transducing the UV DNA damage signal. Each of these pathways subsequently activates the central transducers Mec1p/Esr1p/Sad3p and Rad53p/Mec2p/Sad1p, which are required for both cell-cycle arrest and transcriptional responses. PMID:8895664

Navas, T A; Sanchez, Y; Elledge, S J

1996-10-15

65

Evaluation methods for operational reliability of adhesive joints based on high-strength film-forming adhesives  

Microsoft Academic Search

Analysis of extensive experimental data related to evaluation of crack resistance for adhesive joints is presented. Testing techniques to provide determination in action of crack resistance for any adhesive used in various structures are suggested. These techniques are aimed at showing the suitability of an adhesive to trouble-free conditions of its service providing operational reliability of an adhesive structure.

G. N. Finogenov; N. S. Rogov

1994-01-01

66

Evaluation methods for operational reliability of adhesive joints based on high-strength film-forming adhesives  

SciTech Connect

Analysis of extensive experimental data related to evaluation of crack resistance for adhesive joints is presented. Testing techniques to provide determination in action of crack resistance for any adhesive used in various structures are suggested. These techniques are aimed at showing the suitability of an adhesive to trouble-free conditions of its service providing operational reliability of an adhesive structure.

Finogenov, G.N.; Rogov, N.S. [All-Russian Institute of Aviation Materials, Moscow (Russian Federation)

1994-07-01

67

Specific Sequences from the Carboxyl Terminus of Human p53 Gene Product Form Anti-Parallel Tetramers in Solution  

Microsoft Academic Search

Human p53 is a tumor-suppressor gene product associated with control of the cell cycle and with growth suppression, and it is known to form homotetramers in solution. To investigate the relationship of structure to tetramerization, nine peptides corresponding to carboxyl-terminal sequences in human p53 were chemically synthesized, and their equilibrium associative properties were determined by analytical ultracentrifugation. Secondary structure, as

Hiroshi Sakamoto; Marc S. Lewis; Hiroaki Kodama; Ettore Appella; Kazuyasu Sakaguchi

1994-01-01

68

Parallel zippers formed by alpha-helical peptide columns in crystals of Boc-Aib-Glu(OBzl)-Leu-Aib-Ala-Leu-Aib-Ala-Lys(Z)-Aib-OMe.  

PubMed Central

The crystal structure of the decapeptide Boc-Aib-Glu(OBzl)-Leu-Aib-Ala-Leu-Aib-Ala-Lys(Z)-Aib-OMe (where Aib is alpha-aminoisobutyryl, Boc is t-butoxycarbonyl, OBzl is benzyl ester, and Z is benzyloxycarbonyl) illustrates a parallel zipper arrangement of interacting helical peptide columns. Head-to-tail NH...OC hydrogen bonding extends the alpha-helices formed by the decapeptide into long columns in the crystal. An additional NH...OC hydrogen bond in the head-to-tail region, between the extended side chains of Glu(OBzl), residue 2 in one molecule, and Lys(Z), residue 9 in another molecule, forms a "double tooth" on the side of the column. These double teeth are repeated regularly on the helical columns with spaces of six residues between them (approximately 10 Å). The double teeth on a pair of parallel columns (all carbonyl groups pointed in the same direction) interdigitate in a zipper motif. All contacts in the zipper portion are of the van der Waals type. The peptide, with formula C66H103N11O17.H2O, crystallizes in space group P2(1)2(1)2(1) with a = 10.677(4) Å, b = 16.452(6) Å, and c = 43.779(13) Å; overall agreement R = 10.2% for 3527 observed reflections (|F0| > 3σ); resolution 0.9 Å.

Karle, I L; Flippen-Anderson, J L; Uma, K; Balaram, P

1990-01-01

69

The reliability and validity of the Polish version of the Breastfeeding Self-Efficacy Scale-Short Form: Translation and psychometric assessment  

Microsoft Academic Search

Background: The majority of women discontinue breastfeeding before the recommended 6 months postpartum. If health professionals are to improve low breastfeeding duration and exclusivity rates, they need to reliably assess high-risk women and identify predisposing factors that are amenable to intervention. One possible modifiable variable is breastfeeding confidence. The Breastfeeding Self-Efficacy Scale Short Form (BSES-SF) is a 14-item measure designed to

Karolina Wutke; Cindy-Lee Dennis

2007-01-01

70

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 5. Technical Report #1220  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

Lai, Cheng-Fei; Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

2012-01-01

71

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 2. Technical Report #1217  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due to…

Anderson, Daniel; Lai, Cheng-Fei; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

2012-01-01

72

Health-related quality of life of HIV-infected women: evidence for the reliability, validity and responsiveness of the Medical Outcomes Study Short-Form 20  

Microsoft Academic Search

The purpose of this study was to assess the reliability, validity and responsiveness of a health-related quality of life (HRQOL) instrument, the Medical Outcomes Study Short-Form 20-Item General Health Survey (MOS SF-20), in a sample of women with the human immunodeficiency virus (HIV). Longitudinal data were collected on 202 HIV-infected women without AIDS who were receiving care at Kings County Hospital

M. Y. Smith; J. Feldman; P. Kelly; J. A. DeHovitz; K. Chirgwin; H. Minkoff

1996-01-01

73

Oligo-α-deoxynucleotides covalently linked to an intercalating agent. Double helices with parallel strands are formed with complementary oligo-β-deoxynucleotides.  

PubMed Central

An oligo-[alpha]-deoxynucleotide of sequence (5')d(TCTAAACTC) (3') was synthesized using the alpha-anomers of deoxynucleosides and its 5'-phosphate was covalently linked to a 9-amino acridine derivative via a pentamethylene linker. Two oligo-[beta]-deoxynucleotides containing the complementary sequence in either the 5'→3' or the 3'→5' orientation were synthesized using natural [beta]-deoxynucleosides. Complex formation was investigated by absorption and fluorescence spectroscopies. No change in spectroscopic properties was detected with the anti-parallel [beta] sequence. Absorption changes were induced in the visible absorption band of the acridine derivative at 2 degrees C when the acridine-substituted oligo-[alpha]-deoxynucleotide was mixed in equimolecular amounts with the complementary [beta]-sequence in the parallel orientation. Hypochromism was observed in the UV range. The fluorescence of the acridine derivative was quenched by the guanine base present in the second position of the complementary sequence. Cooperative dissociation curves were observed and identical values of melting temperatures were obtained by absorption and fluorescence. An increase in salt concentration stabilized the complex with a delta Tm of 8 degrees C when NaCl concentration increased from 0.1 to 1 M. These results demonstrate that an oligo-[alpha]-deoxynucleotide covalently linked to an intercalating agent is able to form a double helix with an oligo-[beta]-deoxynucleotide. The two strands of this [alpha]-[beta] double helix adopt a parallel 5'→3' orientation. The acridine ring is able to intercalate between the first two base pairs on the 5'-side of the duplex structure.

Sun, J S; Asseline, U; Rouzaud, D; Montenay-Garestier, T; Nguyen, T T; Helene, C

1987-01-01

74

American Shoulder and Elbow Surgeons Standardized Shoulder Assessment Form, patient self-report section: Reliability, validity, and responsiveness  

Microsoft Academic Search

The purpose of this study was to examine the psychometric properties of the American Shoulder and Elbow Surgeons Standardized Shoulder Assessment Form (ASES), patient self-report section. Patients with shoulder dysfunction (n = 63) completed the ASES, The University of Pennsylvania Shoulder Score, and the Short Form–36 during the initial evaluation, 24 to 72 hours after the initial visit, and after

Lori A Michener; Philip W McClure; Brian J Sennett

2002-01-01

75

$B\to\rho$ form factors including higher twist contributions and reliability of pQCD approach

Microsoft Academic Search

We discuss $B\to\rho$ form factors within the framework of perturbative QCD, including the higher twist contributions, and study the validity of such an approach in calculating quantities such as form factors, which in principle and quite generally are thought to be completely non-perturbative objects and are expected to receive large contributions from the non-perturbative regime in the calculations. It is

Namit Mahajan

2004-01-01

76

Reliability and sensitivity measures of the Greek version of the short form of the McGill Pain Questionnaire  

Microsoft Academic Search

The translation of existing healthcare measurement scales is considered a feasible, efficient and popular approach to produce internationally comparable measures. The short form of the McGill Pain Questionnaire is one of the most widely used and translated instruments to measure the pain experience. The Greek version of the short form of the McGill Pain Questionnaire (GR-SFMPQ) has recently been developed

George Georgoudis; Jacqueline A. Oldham; Paul J. Watson

2001-01-01

77

Local probabilistic sensitivity measures for comparing FORM and Monte Carlo calculations illustrated with dike ring reliability calculations  

NASA Astrophysics Data System (ADS)

We define local probabilistic sensitivity measures as proportional to ∂E(X_i | Z = z)/∂z, where Z is a function of random variables X_1, ..., X_n. These measures are local in that they depend only on the neighborhood of Z = z, but unlike other local sensitivity measures, the local probabilistic sensitivity of X_i does not depend on values of other input variables. For the independent linear normal model, or indeed for any model for which X_i has linear regression on Z, the above measure equals σ(X_i)ρ(Z, X_i)/σ(Z). When linear regression does not hold, the new sensitivity measures can be compared with the correlation coefficients to indicate the degree of departure from linearity. We say that Z is probabilistically dissonant in X_i at Z = z if Z is increasing (decreasing) in X_i at z, but probabilistically decreasing (increasing) at z. Probabilistic dissonance is rather common in complicated models. The new measures are able to pick up this probabilistic dissonance. These notions are illustrated with data from an ongoing uncertainty analysis of dike ring reliability.
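A hedged numerical sketch of the linear-normal case described above (model coefficients and standard deviations are assumed for illustration, not taken from the paper): when X_i has linear regression on Z, the slope ∂E(X_i | Z = z)/∂z is the constant Cov(X_i, Z)/Var(Z), and the sketch checks that this equals σ(X_i)ρ(Z, X_i)/σ(Z).

```python
import math

# Sketch: independent linear normal model Z = a1*X1 + a2*X2.
# E(X_i | Z = z) is linear in z, with slope Cov(X_i, Z) / Var(Z);
# the abstract's closed form sigma(X_i)*rho(Z, X_i)/sigma(Z) is compared to it.
a = [2.0, -1.0]      # model coefficients (assumed for illustration)
sigma = [1.5, 0.5]   # standard deviations of X1, X2 (assumed)

var_z = sum((ai * si) ** 2 for ai, si in zip(a, sigma))
sigma_z = math.sqrt(var_z)

slopes = []
for i in (0, 1):
    cov_iz = a[i] * sigma[i] ** 2           # Cov(X_i, Z) for independent X_i
    slope = cov_iz / var_z                  # slope of E(X_i | Z = z) in z
    rho = cov_iz / (sigma[i] * sigma_z)     # correlation of X_i with Z
    assert abs(slope - sigma[i] * rho / sigma_z) < 1e-12
    slopes.append(slope)
```

With these toy values the two expressions agree term by term, as the linear-regression argument predicts.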

Cooke, Roger M.; van Noortwijk, Jan M.

1999-03-01

78

Reliability and psychometric properties of the Greek translation of the State-Trait Anxiety Inventory form Y: Preliminary data  

Microsoft Academic Search

BACKGROUND: The State-Trait Anxiety Inventory form Y is a brief self-rating scale for the assessment of state and trait anxiety. The aim of the current preliminary study was to assess the psychometric properties of its Greek translation. MATERIALS AND METHODS: 121 healthy volunteers 27.22 ± 10.61 years old, and 22 depressed patients 29.48 ± 9.28 years old entered the study.

Konstantinos N Fountoulakis; Marina Papadopoulou; Soula Kleanthous; Anna Papadopoulou; Vasiliki Bizeli; Ioannis Nimatoudis; Apostolos Iacovides; George S Kaprinis

2006-01-01

79

Reliability and validity of the Spanish version of the Child Health and Illness Profile (CHIP) Child-Edition, Parent Report Form (CHIP-CE/PRF)  

PubMed Central

Background The objectives of the study were to assess the reliability, and the content, construct, and convergent validity of the Spanish version of the CHIP-CE/PRF, to analyze parent-child agreement, and compare the results with those of the original U.S. version. Methods Parents from a representative sample of children aged 6-12 years were selected from 9 primary schools in Barcelona. Test-retest reliability was assessed in a convenience subsample of parents from 2 schools. Parents completed the Spanish version of the CHIP-CE/PRF. The Achenbach Child Behavioural Checklist (CBCL) was administered to a convenience subsample. Results The overall response rate was 67% (n = 871). There was no floor effect. A ceiling effect was found in 4 subdomains. Reliability was acceptable at the domain level (internal consistency = 0.68-0.86; test-retest intraclass correlation coefficients = 0.69-0.85). Younger girls had better scores on Satisfaction and Achievement than older girls. Comfort domain score was lower (worse) in children with a probable mental health problem, with high effect size (ES = 1.45). The level of parent-child agreement was low (0.22-0.37). Conclusions The results of this study suggest that the parent version of the Spanish CHIP-CE has acceptable psychometric properties although further research is needed to check reliability at sub-domain level. The CHIP-CE parent report form provides a comprehensive, psychometrically sound measure of health for Spanish children 6 to 12 years old. It can be a complementary perspective to the self-reported measure or an alternative when the child is unable to complete the questionnaire. In general, the results are similar to the original U.S. version.

2010-01-01

80

The potassium permanganate method. A reliable method for differentiating amyloid AA from other forms of amyloid in routine laboratory practice.  

PubMed Central

Alterations in affinity of amyloid for Congo red after incubation of tissue sections with potassium permanganate, as described by Wright et al, were studied. The affinity of amyloid for Congo red after incubation with potassium permanganate did not change in patients with myeloma-associated amyloidosis, familial amyloidotic polyneuropathy, medullary carcinoma of the thyroid, pancreatic islet amyloid, and cerebral amyloidosis. Affinity for Congo red was lost after incubation with potassium permanganate in tissue sections from patients with secondary amyloidosis and amyloidosis complicating familial Mediterranean fever (consisting of amyloid AA). Patients with primary amyloidosis could be divided into two groups, one with potassium-permanganate-sensitive and one with potassium-permanganate-resistant amyloid deposits. These two groups correlated with the clinical classification in typical organ distribution (presenting with nephropathy) and atypical organ distribution (presenting with cardiomyopathy, nephropathy, and glossopathy) and the expected presence of amyloid AA or amyloid AL. Potassium permanganate sensitivity seems to be restricted to amyloid AA. The potassium permanganate method can be important in dividing the major forms of generalized amyloidosis into AA amyloid and non-AA amyloid. This can be used for differentiating early stages of the disease and cases otherwise difficult to classify. It is important to define patient groups properly, especially in evaluating the effect of therapeutic measures. (Am J Pathol 97:43-58, 1979).

van Rijswijk, M. H.; van Heusden, C. W.

1979-01-01

81

Parallel processing  

SciTech Connect

This report examines the current techniques of parallel processing, transputers, vector and vector supercomputers and covers such areas as transputer applications, programming models and language design for parallel processing.

Jesshop, C.

1987-01-01

82

The investigation of supply chain's reliability measure: a case study  

NASA Astrophysics Data System (ADS)

In this paper, using the supply chain operations reference model, the reliability of the available relationships in a supply chain is investigated. For this purpose, in the first step, the chain under investigation is divided into several stages, including first and second suppliers, initial and final customers, and the producing company. Based on the relationships formed between these stages, the supply chain system is then broken down into different subsystem parts. The relationships between the stages are based on the transportation of orders between stages. Considering the location of the system elements, which can take one of five forms, namely series, parallel, series/parallel, parallel/series, or combinations of these, we determine the structure of relationships in the divided subsystems. According to reliability evaluation scales on the three levels of the supply chain, the reliability of each chain is then calculated. Finally, using the formulas for calculating reliability in combined systems, the reliability of each subsystem, and ultimately of the whole system, is investigated.
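As a hedged illustration of the standard combination formulas such an analysis applies (toy reliabilities, not the study's data): a series subsystem works only if every element works, while a parallel subsystem fails only if every element fails.

```python
# Sketch of the series/parallel reliability rules (illustrative values).
def series(rels):
    p = 1.0
    for r in rels:
        p *= r          # all elements must work
    return p

def parallel(rels):
    q = 1.0
    for r in rels:
        q *= (1.0 - r)  # all elements must fail for the subsystem to fail
    return 1.0 - q

# e.g. two suppliers in parallel, feeding a producer and a customer link in series
suppliers = parallel([0.90, 0.85])       # 1 - 0.10*0.15 = 0.985
chain = series([suppliers, 0.95, 0.98])  # producer, final-customer link
```

Mixed structures (series/parallel, parallel/series) follow by composing the two functions, which mirrors how the paper evaluates each divided subsystem before combining them into the whole-chain figure.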

Taghizadeh, Houshang; Hafezi, Ehsan

2012-10-01

83

Parallel Algorithms  

NSDL National Science Digital Library

Content prepared for the Supercomputing 2002 session on "Using Clustering Technologies in the Classroom". Contains a series of exercises for teaching parallel computing concepts through kinesthetic activities.

Gray, Paul

84

Parallelizing assistant for parallel architectures  

SciTech Connect

This research presents the design and implementation of a prototype programming tool for vectorization and parallelization assistance, called the Workstation Vectorization and Parallelization Assistance Environment (WVPAE). The proposed working environment for WVPAE is the workstation. The WVPAE is designed to function as an experimentation facility for interactive vectorization and parallelization assistance during the implementation or maintenance of parallel-computing applications. The target high-level language for implementing parallel-computing applications is chosen to be a Fortran-like language, and the target parallel machine has to be specified by the user. The target parallel-machine architecture can be either a vector or a multiprocessor machine. The assistance provided by the WVPAE is based on analyzing user programs to discover all barriers that may cause either vectorization blocking or parallelization blocking. Vectorization and parallelization barriers are categorized in this research into barriers due to sequential language constructs and those due to dependence relationships. The WVPAE tool provides diagnostic messages and advice for most of the barriers defined in this research.

Arafeh, B.R.

1986-01-01

85

Software reliability  

Microsoft Academic Search

The first session of the software reliability area will address Software Reliability Needs. It includes three invited papers that deal, respectively, with the origination of reliability requirements, with issues of reliability measurements, and with reliability modeling and prediction. All of them represent the cutting edge of the current technology and treat the subject in a broad manner that may be

Herbert Hecht

1980-01-01

86

Item Selection for the Development of Parallel Forms from an IRT-Based Seed Test Using a Sampling and Classification Approach  

ERIC Educational Resources Information Center

Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…

Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan

2012-01-01

87

c-MYC promoter G-quadruplex formed at the 5′-end of NHE III1 element: insights into biological relevance and parallel-stranded G-quadruplex stability

PubMed Central

We studied the structures and stabilities of G-quadruplexes formed in Myc1234, the region containing the four consecutive 5′ runs of guanines of c-MYC promoter NHE III1, which have recently been shown to form in a supercoiled plasmid system in aqueous solution. We determined the NMR solution structure of the 1:2:1 parallel-stranded loop isomer, one of the two major loop isomers formed in Myc1234 in K+ solution. This major loop isomer, although sharing the same folding structure, appears to be markedly less stable than the major loop isomer formed in the single-stranded c-MYC NHE III1 oligonucleotide, the Myc2345 G-quadruplex. Our NMR structures indicated that the different thermostabilities of the two 1:2:1 parallel c-MYC G-quadruplexes are likely caused by the different base conformations of the single nucleotide loops. The observation of the formation of the Myc1234 G-quadruplex in the supercoiled plasmid thus points to the potential role of supercoiling in the G-quadruplex formation in promoter sequences. We also performed a systematic thermodynamic analysis of modified c-MYC NHE III1 sequences, which provided a quantitative measure of the contributions of various loop sequences to the thermostabilities of parallel-stranded G-quadruplexes. This information is important for understanding the equilibrium of promoter G-quadruplex loop isomers and for their drug targeting.

Mathad, Raveendra I.; Hatzakis, Emmanuel; Dai, Jixun; Yang, Danzhou

2011-01-01

88

Parallel quicksort  

SciTech Connect

This paper reports on the development of a parallel version of quicksort on a CRCW PRAM. The algorithm uses n processors and a linear space to sort n keys in the expected time O(log n) with large probability.
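The CRCW PRAM model is not directly reproducible on stock hardware, but the divide-and-conquer shape of a parallel quicksort can be sketched with a thread pool (an illustrative sketch under that substitution, not the authors' algorithm; `pquicksort` and its depth cutoff are invented for this example).

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: partition around a pivot, then sort the two sides concurrently
# down to a fixed spawn depth.  (CPython threads give no real speedup here;
# the point is only the parallel divide-and-conquer structure.)
def pquicksort(keys, pool, depth=2):
    if len(keys) <= 1:
        return list(keys)
    pivot = keys[len(keys) // 2]
    lo = [k for k in keys if k < pivot]
    eq = [k for k in keys if k == pivot]
    hi = [k for k in keys if k > pivot]
    if depth > 0:  # spawn the two recursive calls as parallel tasks
        f_lo = pool.submit(pquicksort, lo, pool, depth - 1)
        f_hi = pool.submit(pquicksort, hi, pool, depth - 1)
        return f_lo.result() + eq + f_hi.result()
    return pquicksort(lo, pool, 0) + eq + pquicksort(hi, pool, 0)

with ThreadPoolExecutor(max_workers=4) as pool:
    out = pquicksort([5, 3, 8, 1, 9, 2, 7], pool)
```

The fixed depth bounds the number of in-flight tasks so blocked parents cannot exhaust the pool, a practical concern the PRAM model abstracts away.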

Vrto, I. (Inst. of Technical Cybernetics, Slovak Academy of Sciences, Dubravska Cesta 9, 842-37 Bratislava (CS)); Chlebus, B.S. (Dept. of Computer Science, Univ. of California, Riverside, CA (US))

1991-04-01

89

Moment methods for structural reliability  

Microsoft Academic Search

First-order reliability method (FORM) is considered to be one of the most reliable computational methods. In the last decades, researchers have examined the shortcomings of FORM, primarily accuracy and the difficulties involved in searching for the design point by iteration using the derivatives of the performance function. In order to improve upon FORM, several structural reliability methods have been developed

Yan-Gang Zhao; Tetsuro Ono

2001-01-01

90

Parallel processing for control applications  

SciTech Connect

Parallel processing has been a topic of discussion in computer science circles for decades. Using more than one computer to control a process has many advantages that compensate for the additional cost. Initially, multiple computers were used to attain higher speeds: a single CPU could not perform all of the operations necessary for real-time operation. As technology progressed and CPUs became faster, the speed issue became less significant. The additional processing capability, however, continues to make high speed an attractive element of parallel processing. Another reason for multiple processors is reliability. For the purposes of this discussion, reliability and robustness will be the focal point. Most contemporary conceptions of parallel processing involve visions of hundreds of single computers networked to provide 'computing power'. Indeed, our own teraflop machines are built from large numbers of computers configured in a network (and thus limited by the network). There are many approaches to parallel configurations, and this presentation offers something slightly different from the contemporary networked model. In the world of embedded computers, which is a pervasive force in contemporary computer controls, there are many single-chip computers available. If one backs away from the PC-based parallel computing model and considers the possibilities of a parallel control device based on multiple single-chip computers, a new area of possibilities becomes apparent. This study will look at the use of multiple single-chip computers in a parallel configuration, with emphasis placed on maximum reliability.

Telford, J. W. (John W.)

2001-01-01

91

SHAKE parallelization  

PubMed Central

SHAKE is a widely used algorithm to impose general holonomic constraints during molecular simulations. By imposing constraints on stiff degrees of freedom that require integration with small time steps (without the constraints) we are able to calculate trajectories with time steps larger by approximately a factor of two. The larger time step makes it possible to run longer simulations. Another approach to extend the scope of Molecular Dynamics is parallelization. Parallelization speeds up the calculation of the forces between the atoms and makes it possible to compute longer trajectories with better statistics for thermodynamic and kinetic averages. A combination of SHAKE and parallelism is therefore highly desired. Unfortunately, the most widely used SHAKE algorithm (of bond relaxation) is inappropriate for parallelization and alternatives are needed. The alternatives must minimize communication, lead to good load balancing, and offer significantly better performance than the bond relaxation approach. The algorithm should also scale with the number of processors. We describe the theory behind different implementations of constrained dynamics on parallel systems, and their implementation on common architectures.
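As a hedged sketch of the constraint idea behind SHAKE (a single distance constraint in 2D with equal unit masses and illustrative coordinates; real implementations sweep over all constraints, and the parallel variants the abstract discusses go further):

```python
# Sketch: restore |r1 - r2| = d after an unconstrained step by applying a
# Lagrange-multiplier correction along the pre-step bond vector r12_old.
def shake_pair(r1, r2, r12_old, d, tol=1e-10, max_iter=100):
    for _ in range(max_iter):
        r12 = [a - b for a, b in zip(r1, r2)]
        diff = sum(x * x for x in r12) - d * d   # constraint violation
        if abs(diff) < tol:
            break
        # first-order correction, equal unit masses: g = diff / (4 r12.r12_old)
        g = diff / (4.0 * sum(x * y for x, y in zip(r12, r12_old)))
        r1 = [x - g * y for x, y in zip(r1, r12_old)]
        r2 = [x + g * y for x, y in zip(r2, r12_old)]
    return r1, r2

# the unconstrained step stretched the bond; r12_old = r1_old - r2_old = (-1, 0)
r1, r2 = shake_pair([0.0, 0.0], [1.3, 0.1], [-1.0, 0.0], 1.0)
```

Because each correction couples only the two atoms of one constraint, the serial bond-relaxation sweep is inherently sequential, which is exactly why the parallel alternatives discussed in the abstract are needed.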

Elber, Ron; Ruymgaart, A. Peter; Hess, Berk

2011-01-01

92

Parallel computing and multitasking  

NASA Astrophysics Data System (ADS)

Over the past decade we have witnessed an evolution of scientific computers in which more and more concurrent or parallel arithmetic operations are allowed. The segmented pipeline arithmetic functional units, direct vectorization, indirect vectorization, multiprocessing and finally multitasking represent stages of development of parallel computation. Algorithms for the solution of physics problems must be tailored, if possible, to the forms required for these various kinds of parallelism. We report on some experiences we have had building and running various parallelized physics codes with particular emphasis on the Cray-2. We show that the implementation of multitasking and the subsequent debugging effort are straightforward. These techniques are applicable to more methods, including implicit ones, than was originally predicted. We present arguments that favor the use of interactive timesharing operating systems, particularly for the multitasking situation.

Anderson, David V.; Horowitz, Eric J.; Koniges, Alice E.; McCoy, Michael G.

1986-12-01

93

Parallel Resistors  

NSDL National Science Digital Library

Students will measure the resistance of resistors that they have drawn on paper with a graphite pencil. They will then connect two resistors in parallel and measure the resistance of the combination. In this activity, it is important that students color v
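The relation the activity has students verify is that conductances add in parallel, so 1/R_total = 1/R1 + 1/R2 + …. A minimal sketch (illustrative values):

```python
# Sketch: combined resistance of resistors connected in parallel.
def parallel_resistance(*ohms):
    return 1.0 / sum(1.0 / r for r in ohms)

r = parallel_resistance(100.0, 100.0)  # two equal resistors halve: 50 ohms
```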

Horton, Michael

2009-05-30

94

Stitch-Bond Parallel-Gap Welding for IC Circuits: Stitch-bonded flatbacks can be superior to soldered dual-in-lines where size, weight, and reliability are important.  

National Technical Information Service (NTIS)

This citation summarizes a one-page announcement of technology available for utilization. Flatback integrated circuits installed by stitch-bond/parallel-gap welding can be considerably more economical for complex circuit boards than conventional solder-in...

1981-01-01

95

Parallel hierarchical radiosity rendering.  

National Technical Information Service (NTIS)

In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coe...

M. Carter

1993-01-01

96

Nutritional form for the elderly is a reliable and valid instrument for the determination of undernutrition risk, and it is associated with health-related quality of life  

Microsoft Academic Search

Undernutrition is a common problem associated with clinical complications such as impaired immune response, reduced muscle strength, impaired wound healing, and susceptibility to infections; therefore, it is an important treatment target to reduce morbidity and mortality associated with chronic diseases and aging. The aim of the present study was to apply a reliable and valid instrument for the determination of

Tímea Gombos; Krisztina Kertész; Ágnes Csíkos; Ulrika Söderhamn; Olle Söderhamn; Zoltán Prohászka

2008-01-01

97

Parallel biocomputing  

Microsoft Academic Search

Background With the advent of high throughput genomics and high-resolution imaging techniques, there is a growing necessity in biology and medicine for parallel computing, and with the low cost of computing, it is now cost-effective for even small labs or individuals to build their own personal computation cluster. Methods Here we briefly describe how to use commodity hardware to build a low-cost,

Kenneth S Kompass; Thomas J Hoffmann; John S Witte

2011-01-01

98

Measuring health status in British patients with rheumatoid arthritis: reliability, validity and responsiveness of the short form 36-item health survey (SF36)  

Microsoft Academic Search

SUMMARY The objective was to assess the performance of the SF-36 health survey (SF-36) in a sample of patients with rheumatoid arthritis (RA) stratified by functional class. The eight SF-36 subscales and the two summary scales (the physical and mental component scales) were assessed for test-retest reliability, construct validity and responsiveness to self-reported change in health. In 233 patients with

D. A. RUTA; N. P. HURST; P. KIND; M. HUNTER; A. STUBBINGS

1998-01-01

99

Parallel Information Processing.  

ERIC Educational Resources Information Center

Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

Rasmussen, Edie M.

1992-01-01

100

Highly reliable copper dual-damascene interconnects with self-formed MnSixOy barrier layer

Microsoft Academic Search

Copper (Cu) dual-damascene interconnects with a self-formed MnSixOy barrier layer were successfully fabricated. Transmission electron microscopy shows that an approximately 2-nm-thick, continuous MnSixOy layer was formed at the interface of Cu and dielectric SiO2, and that no barrier was formed at the via bottom because no oxygen was present at the via bottom during annealing. No leakage-current increase was

Takamasa Usui; Hayato Nasu; Shingo Takahashi; Noriyoshi Shimizu; T. Nishikawa; Masaki Yoshimaru; Hideki Shibata; Makoto Wada; Junichi Koike

2006-01-01

101

Reliability, Validity, and Responsiveness of a Modified International Knee Documentation Committee Subjective Knee Form (Pedi-IKDC) in Children With Knee Disorders  

Microsoft Academic Search

Background: The International Knee Documentation Committee (IKDC) Subjective Knee Form is a knee-specific measure of symptoms, function, and sports activity. A modified IKDC Subjective Knee Form (pedi-IKDC) has been developed for use in children and adolescents. The purpose of this study was to determine the psychometric characteristics of the pedi-IKDC in children and adolescents with knee disorders. Hypothesis: The pedi-IKDC is

Mininder S. Kocher; Jeremy T. Smith; Maura D. Iversen; Katherine Brustowicz; Olabode Ogunwole; Jason Andersen; Won Joon Yoo; Eric D. McFeely; Allen F. Anderson; David Zurakowski

2011-01-01

102

Development, reliability and factor analysis of a self-administered questionnaire which originates from the World Health Organization's Composite International Diagnostic Interview - Short Form (CIDI-SF) for assessing mental disorders  

PubMed Central

Background The Composite International Diagnostic Interview – Short Form consists of short-form scales for evaluating psychiatric disorders. Even for this version, training of the interviewer is required; moreover, confidentiality may not be adequately protected. This study focuses on the preliminary validation of a brief self-completed questionnaire which originates from the CIDI-SF. Sampling and Methods A preliminary version was assessed for content and face validity. An intermediate version was evaluated for test-retest reliability. The final version of the questionnaire was evaluated for exploratory factor analysis and internal consistency. Results After the modifications by the focus groups, the questionnaire included 29 initial probe questions and 56 secondary questions. The test-retest reliability weighted Kappas were acceptable to excellent for the vast majority of questions. Factor analysis revealed six factors explaining 53.6% of total variance. Cronbach's alpha was 0.89 for the questionnaire and 0.89, 0.67, 0.71, 0.71, 0.49, and 0.67 for the six factors, respectively. Conclusion The questionnaire has satisfactory reliability and internal consistency, and might be suitable for use in community research and clinical practice. In the future, the questionnaire could be further validated (i.e., concurrent validity, discriminant validity).
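As a hedged sketch of the internal-consistency statistic the abstract reports (Cronbach's alpha; the toy responses below are illustrative, not the study's data): alpha = k/(k-1) · (1 − Σ item variances / variance of total score).

```python
# Sketch: Cronbach's alpha from per-item response vectors (population variance).
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):  # items: list of per-item score lists, equal length
    k = len(items)
    totals = [sum(col) for col in zip(*items)]      # each respondent's total
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1.0 - item_var / variance(totals))

alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 3], [2, 2, 4, 4]])
```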

2008-01-01

103

Parallel fast gauss transform  

SciTech Connect

We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
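As a hedged sketch of the direct O(N^2) sum that the fast gauss transform accelerates (a one-dimensional toy with an assumed bandwidth h, not the authors' plane-wave/octree implementation):

```python
import math

# Sketch: g(y_j) = sum_i q_i * exp(-|y_j - x_i|^2 / h^2), evaluated directly.
# Two nested loops over sources and targets give the O(N^2) cost the FGT avoids.
def direct_gauss_transform(sources, weights, targets, h):
    out = []
    for y in targets:
        s = 0.0
        for x, q in zip(sources, weights):
            s += q * math.exp(-((y - x) ** 2) / (h * h))
        out.append(s)
    return out

vals = direct_gauss_transform([0.0, 1.0], [1.0, 2.0], [0.0, 0.5], h=1.0)
```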

Sampath, Rahul S [ORNL]; Sundar, Hari [Siemens Corporate Research]; Veerapaneni, Shravan [New York University]

2010-01-01

104

Parallel hierarchical radiosity rendering  

SciTech Connect

In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

Carter, M.

1993-07-01

105

A parallel radiosity algorithm for virtual reality  

Microsoft Academic Search

This paper presents a parallel algorithm for radiosity computation in a virtual reality environment, based on computing in parallel the eigenvalues and eigenvectors of the form factor matrix. This leads to a novel approach to radiosity computation for virtual reality, called parallel eigenvector radiosity. Performance evaluation shows that this method significantly decreases the execution time for complex environments on a cluster of PCs.

Qiong Zhang; Zhichao Li; Riwei Wang

2010-01-01

106

Parallel data detection in page-oriented optical memory  

NASA Astrophysics Data System (ADS)

We discuss a novel two-dimensional parallel technique for reliable data detection in page-oriented optical memories. The method is motivated by decision feedback techniques and is fully parallel, offering convenient, locally connected electronic implementation. The algorithm is shown to offer significant improvements over simple threshold detection and in some cases can approach the maximum-likelihood bound of data reliability.

Neifeld, Mark A.; Chugg, K. M.; King, B. M.

1996-09-01

107

The feasibility, reliability and validity of the McGill Quality of Life Questionnaire-Cardiff Short Form (MQOL-CSF) in palliative care population  

Microsoft Academic Search

In terminally-ill patients, effective measurement of health-related quality of life (HRQoL) needs to be done while imposing minimal burden. In an attempt to ensure that routine HRQoL assessment is simple but capable of eliciting adequate information, the McGill Quality of Life Questionnaire-Cardiff Short Form (MQOL-CSF: 8 items) was developed from its original version, the McGill Quality of Life Questionnaire (MQOL:

Pei Lin Lua; Sam Salek; Ilora Finlay; Chris Lloyd-Richards

2005-01-01

108

The Ohio Scales Youth Form: Expansion and Validation of a Self-Report Outcome Measure for Young Children  

ERIC Educational Resources Information Center

We examined the validity and reliability of a self-report outcome measure for children between the ages of 8 and 11. The Ohio Scales Problem Severity scale is a brief, practical outcome measure available in three parallel forms: Parent, Youth, and Agency Worker. The Youth Self-Report form is currently validated for children ages 12 and older. The…

Dowell, Kathy A.; Ogles, Benjamin M.

2008-01-01

109

Totally Parallel Multilevel Algorithms.  

National Technical Information Service (NTIS)

Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergen...

P. O. Frederickson

1988-01-01

110

Implementation of Parallel Algorithms.  

National Technical Information Service (NTIS)

Contents: Intermediate Representation for Parallel Implementation; Data Movement on Processor Arrays; Data-Parallel Implementations of Fast Multipole Algorithms for N-Body Interaction; Rate Control in Parallel Algorithms; Implementing Asynchronous Paralle...

J. H. Reif R. Wagner

1993-01-01

111

Special parallel processing workshop  

SciTech Connect

This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

NONE

1994-12-01

112

SSD Reliability  

NASA Astrophysics Data System (ADS)

SSDs are complex electronic systems prone to wear-out and failure mechanisms mainly related to their basic component: the Flash memory. The reliability of a Flash memory depends on many technological and architectural aspects, from the physical concepts on which the storage paradigm is built to the interaction among cells, and from possible new physical mechanisms arising as the technology scales down to the countermeasures adopted within the memory controller to handle erroneous behaviors.

Zambelli, C.; Olivo, P.

113

Stable unstable reliability theory.  

PubMed

Classical reliability theory assumes that individuals have identical true scores on both testing occasions, a condition described as stable. If some individuals' true scores are different on different testing occasions, described as unstable, the estimated reliability can be misleading. A model called stable unstable reliability theory (SURT) frames stability or instability as an empirically testable question. SURT assumes a mixed population of stable and unstable individuals in unknown proportions, with w(i) the probability that individual i is stable. w(i) becomes i's test score weight which is used to form a weighted correlation coefficient r(w) which is reliability under SURT. If all w(i) = 1 then r(w) is the classical reliability coefficient; thus classical theory is a special case of SURT. Typically r(w) is larger than the conventional reliability r, and confidence intervals on true scores are typically shorter than conventional intervals. r(w) is computed with routines in a publicly available R package. PMID:22500569

Thomas, Hoben; Lohaus, Arnold; Domsch, Holger

2011-02-02
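The weighted correlation r(w) at the heart of SURT can be sketched in a few lines. The weights below are invented for illustration; SURT estimates each w(i) from the data, which is not reproduced here. With all weights equal to 1 the function reduces to the classical Pearson correlation, mirroring the abstract's claim that classical theory is a special case.

```python
import numpy as np

def weighted_correlation(x, y, w):
    """Weighted Pearson correlation: observations with larger w contribute more.
    In SURT, w[i] is the estimated probability that individual i is stable."""
    x, y, w = (np.asarray(v, dtype=float) for v in (x, y, w))
    mx = np.average(x, weights=w)
    my = np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
    sy = np.sqrt(np.average((y - my) ** 2, weights=w))
    return cov / (sx * sy)

# Hypothetical test-retest scores; the last individual looks "unstable".
x = [10.0, 12.0, 14.0, 30.0]
y = [11.0, 13.0, 15.0, 2.0]

r_plain = weighted_correlation(x, y, [1.0, 1.0, 1.0, 1.0])   # all w(i) = 1: classical r
r_down  = weighted_correlation(x, y, [1.0, 1.0, 1.0, 0.05])  # down-weight the unstable case
```

Down-weighting the unstable individual raises the estimated reliability, which is the typical pattern the abstract describes (r(w) larger than the conventional r).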

114

Teaching parallel programming early  

Microsoft Academic Search

In this position paper, we point out the importance of teaching a basic understanding of parallel computations and parallel programming early in computer science education, in order to give students the necessary expertise to cope with future computer architectures that will exhibit an explicitly parallel programming model. We elaborate on a programming model, namely shared-memory bulk-synchronous parallel programming with

Christoph W. Kessler

2006-01-01

115

Photovoltaic module reliability workshop  

SciTech Connect

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986--1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, substantial research and testing are still required to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience in this important area. The papers presented here reflect this effort.

Mrig, L. (ed.)

1990-01-01

116

Making Massively Parallel Systems Work  

Microsoft Academic Search

Massively parallel systems are based on distributed memory concepts and consist of several hundreds to thousands of nodes interconnected by a very high bandwidth network. Making these systems work requires a very careful operating system design. A distributed operating system is required that takes the form of a functionally dedicated server system. This approach reduces system overhead on the nodes and enables a problem-oriented mapping

R. Berg; J. Cordsen; J. Heuer; J. Nolte; B. Oestmann; M. Sander; H. Schmidt; F. Schön; W. Schröder-preikschat

1990-01-01

117

Reliability Simulation  

NASA Astrophysics Data System (ADS)

Reliability offers significant modeling challenges. Typically, parts cannot be tested until failure under normal operating conditions. Since the target is frequently a decade or longer of useful life, this is impractical. Consequently, accelerated testing is performed. This procedure only works when the physics is well understood, and the failure mechanism is not accelerated by factors not under the control of the testing. Consequently, modeling of the failure mechanism is crucial in making extrapolated predictions of lifetime. Technology Computer-Aided Design tools have advanced to the point where multiple physics can be included and the testing simulated fully. This chapter describes such an extended tool and provides examples of applying it to the understanding of two different failure mechanisms.

Law, M. E.; Griglione, M.; Patrick, E.; Rowsey, N.; Horton, D.

118

Revisiting parallel catadioptric goniophotometers  

NASA Astrophysics Data System (ADS)

A thorough knowledge of the angular distribution of light scattered by an illuminated surface under different angles is essential in numerous industrial and research applications. Traditionally, the angular distribution of a reflected or transmitted light flux as function of the illumination angle, described by the Bidirectional Scattering Distribution Function (BSDF), is measured with a point-by-point scanning goniophotometer yielding impractically long acquisition times. Significantly faster measurements can be achieved by a device capable of simultaneously imaging the far-field distribution of light scattered by a sample onto a two-dimensional sensor array. Such an angular-to-spatial mapping function can be realized with a parallel catadioptric mapping goniophotometer (CMG). In this contribution, we formally establish the design requirement for a reliable CMG. Based on heuristic considerations we show that, to avoid degrading the angular-to-spatial function, the acceptance angle of the lens system inherent to a CMG must be smaller than 60°. By means of a parametric study, we investigate the practical design limitations of a CMG caused by the constraints imposed by the properties of a real lens system. Our study reveals that the values of the key design parameters of a CMG fall within a relatively small range. This imposes the shape of the ellipsoidal reflector and drastically restricts the room for a design trade-off between the sample size and the angular resolution. We provide a quantitative analysis for the key parameters of a CMG for two relevant cases.

Karamata, Boris; Andersen, Marilyne

2013-04-01

119

Exploiting fine grain parallelism in Prolog  

SciTech Connect

The goals of this paper are to design a Prolog system that automatically exploits parallelism in Prolog with low-overhead memory management and task management schemes, and to demonstrate by means of detailed simulations that such a Prolog system can indeed achieve a significant speedup over the fastest sequential Prolog systems. The authors achieve these goals by first identifying the largest sources of overhead in parallel Prolog execution: side-effects caused by parallel tasks, choicepoints created by parallel tasks, task creation, task scheduling, task suspension, and context switching. The authors then identify a form of parallelism, called flow parallelism, that can be exploited with low overhead because parallel execution is restricted to goals that do not cause side-effects and do not create choicepoints. The authors develop a master-slave model of parallel execution that eliminates task suspension and context switching. The model uses program partitioning and task scheduling techniques that do not require task suspension and context switching to prevent deadlock. The authors identify architectural techniques to support the parallel execution model and develop the Flow Parallel Prolog Machine (FPPM) architecture and implementation. Finally, the authors evaluate the performance of FPPM and investigate the design tradeoffs using measurements on a detailed, register-transfer-level simulator. FPPM achieves an average speedup of about a factor of 2 (as much as a factor of 5 for some programs) over the current highest-performance sequential Prolog implementation, the VLSI-BAM. The speedups over other parallel Prolog systems are much larger.

Singhal, A.

1990-01-01

120

Parallel Computing in Optimization.  

National Technical Information Service (NTIS)

One of the major developments in computing in recent years has been the introduction of a variety of parallel computers, and the development of algorithms that effectively utilize their capabilities. Very little of this parallel algorithm development, how...

R. B. Schnabel

1984-01-01

121

Parallel Particle Swarm Optimizer.  

National Technical Information Service (NTIS)

Time requirements for the solving of complex large-scale engineering problems can be substantially reduced by using parallel computation. Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel impleme...

J. F. Schutte B. Fregly R. T. Haftka A. D. George

2003-01-01

122

Parallel flow diffusion battery  

DOEpatents

A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

Yeh, Hsu-Chi (Albuquerque, NM); Cheng, Yung-Sung (Albuquerque, NM)

1984-08-07

123

Parallel flow diffusion battery  

DOEpatents

A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

Yeh, H.C.; Cheng, Y.S.

1984-01-01

124

Model of Parallel Performance.  

National Technical Information Service (NTIS)

This report introduces a general model of parallel performance. With the goal of developing conceptual and empirical methods for characterizing and understanding parallel algorithms, new definitions of speedup and efficiency have been formulated. These de...

E. A. Carmona M. D. Rice

1989-01-01

125

Pthreads for Dynamic Parallelism  

Microsoft Academic Search

Expressing a large number of lightweight, parallel threads in a shared address space significantly eases the task of writing a parallel program. Threads can be dynamically created to execute individual parallel tasks; the implementation schedules these threads onto the processors and effectively balances the load. However, unless the thread scheduler is designed carefully, such a parallel program may suffer

Girija J. Narlikar; Guy E. Blelloch

1998-01-01

126

Decomposing the Potentially Parallel  

NSDL National Science Digital Library

This course provides an introduction to the issues involved in decomposing problems onto parallel machines, and to the types of architectures and programming styles commonly found in parallel computers. The list of topics discussed includes types of decomposition, task farming, regular domain decomposition, unbalanced grids, and parallel molecular dynamics.

Elspeth Minty, Robert Davey, Alan Simpson, David Henty

127

Reliability of a science admission test (HAM-Nat) at Hamburg medical school  

PubMed Central

Objective: The University Hospital in Hamburg (UKE) started to develop a test of knowledge in natural sciences for admission to medical school in 2005 (Hamburger Auswahlverfahren für Medizinische Studiengänge, Naturwissenschaftsteil, HAM-Nat). This study is a step towards establishing the HAM-Nat. We are investigating parallel forms reliability, the effect of a crash course in chemistry on test results, and correlations of HAM-Nat test results with a test of scientific reasoning (similar to a subtest of the "Test for Medical Studies", TMS). Methods: 316 first-year students participated in the study in 2007. They completed different versions of the HAM-Nat test which consisted of items that had already been used (HN2006) and new items (HN2007). Four weeks later half of the participants were tested on the HN2007 version of the HAM-Nat again, while the other half completed the test of scientific reasoning. Within this four week interval students were offered a five-day chemistry course. Results: Parallel forms reliability for four different test versions ranged from rtt=.53 to rtt=.67. The retest reliabilities of the HN2007 halves were rtt=.54 and rtt=.61. Correlations of the two HAM-Nat versions with the test of scientific reasoning were r=.34 and r=.21. The crash course in chemistry had no effect on HAM-Nat scores. Conclusions: The results suggest that further versions of the test of natural sciences will not easily conform to the standards of internal consistency, parallel-forms reliability and retest reliability. Much care has to be taken in order to assemble items which could be used interchangeably for the construction of new test versions. The test of scientific reasoning and the HAM-Nat are tapping different constructs. Participation in a chemistry course did not improve students’ achievement, probably because the content of the course was not coordinated with the test and many students lacked motivation to do well in the second test.

Hissbach, Johanna; Klusmann, Dietrich; Hampe, Wolfgang

2011-01-01
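The parallel-forms reliability coefficients reported above (the rtt values) are Pearson correlations between scores on two versions of the test, so they can be computed directly. A minimal sketch, with invented score vectors for eight hypothetical students:

```python
import numpy as np

# Hypothetical scores of 8 students on two parallel forms of a test.
form_a = np.array([12, 18, 15, 20, 9, 14, 17, 11], dtype=float)
form_b = np.array([13, 17, 16, 19, 10, 15, 16, 12], dtype=float)

# Parallel-forms reliability is estimated as the Pearson correlation
# between scores on the two forms.
r_tt = np.corrcoef(form_a, form_b)[0, 1]
```

With closely agreeing forms like these, r_tt is close to 1; the much lower values in the study (.53 to .67) indicate that the HAM-Nat versions were far from interchangeable.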

128

Reliability considerations for full-bridge DC-DC converter in fuel-cell applications  

Microsoft Academic Search

In this paper, the reliability of a full-bridge DC-DC converter for fuel-cell applications is determined. In the full-bridge topology, MOSFETs must be paralleled in order to increase the current rating. It is shown that this paralleling greatly decreases the reliability, whereas using IPM switches significantly increases the reliability in high-power applications.

A. H. Ranjbar; B. Abdi; G. B. Gharehpetian; J. Milimonfared

2008-01-01

129

Towards Distributed Memory Parallel Program Analysis  

SciTech Connect

This paper presents a parallel attribute evaluation for distributed memory parallel computer architectures, where previously only shared memory parallel support for this technique had been developed. Attribute evaluation is part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis which cannot be addressed by a file-by-file view of large-scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

Quinlan, D; Barany, G; Panas, T

2008-06-17

130

Parallel I/O Systems  

NSDL National Science Digital Library

* Redundant disk array architectures, * Fault tolerance issues in parallel I/O systems, * Caching and prefetching, * Parallel file systems, * Parallel I/O systems, * Parallel I/O programming paradigms, * Parallel I/O applications and environments, * Parallel programming with parallel I/O

Apon, Amy

131

System Reliability Sensitivity Measures  

Microsoft Academic Search

System reliability sensitivity measures are proposed to assist designers and reliability analysts in prioritizing reliability improvement and testing activities. In complex system designs, a common goal is to improve system reliability by increasing the reliability of the components to be used within the system. Component reliability is generally estimated from field failure data or test data. Unfortunately, these data are

Tongdan Jin; David W. Coit

132

Introduction to parallel computing  

SciTech Connect

Today's supercomputers and parallel computers provide an unprecedented amount of computational power in one machine. A basic understanding of the parallel computing techniques that assist in the capture and utilization of that computational power is essential to appreciate the capabilities and the limitations of parallel supercomputers. In addition, an understanding of technical vocabulary is critical in order to converse about parallel computers. The relevant techniques, vocabulary, currently available hardware architectures, and programming languages which provide the basic concepts of parallel computing are introduced in this document. This document updates the document entitled Introduction to Parallel Supercomputing, M88-42, October 1988. It includes a new section on languages for parallel computers, updates the hardware related sections, and includes current references.

Lafferty, E.L.; Michaud, M.C.; Prelle, M.J.; Goethert, J.B.

1992-05-01

133

Parallel operation control technique of voltage source inverters in UPS  

Microsoft Academic Search

Control techniques for parallel operation of voltage source inverters with other inverters or with the utility source have been applied in many fields, especially in uninterruptible power supplies (UPS). A multi-module UPS can flexibly expand power system capacity. Furthermore, it can be used to build a parallel redundant system in order to improve the reliability

Duan Shanxu; Meng Yu; Xiong Jian; Kang Yong; Chen Jian

1999-01-01

134

Parallel operation of voltage source inverters with minimal intermodule reactors  

Microsoft Academic Search

Realization of large horsepower motor drives using parallel-connected voltage source inverters rated at smaller power levels would be highly desirable. A robust technique for such a realization would result in several benefits including modularity, ease of maintenance, n+1 redundancy, reliability, etc. Techniques for parallel operation of voltage source inverters with relatively large load inductance have been well established in the

Bin Shi; Giri Venkataramanan

2004-01-01

135

Flutter reliability analysis of suspension bridges  

Microsoft Academic Search

A reliability analysis method is proposed in this paper through a combination of the advantages of the response surface method (RSM), finite element method (FEM), first-order reliability method (FORM) and the importance sampling updating method. The method is especially applicable for the reliability evaluation of complex structures of which the limit state surfaces are not known explicitly. After the accuracy

Jin Cheng; C. S. Cai; Ru-cheng Xiao; S. R. Chen

2005-01-01

136

Linearization in parallel pCRL  

Microsoft Academic Search

We describe a linearization algorithm for parallel pCRL processes similar to the one implemented in the linearizer of the µCRL Toolset. This algorithm finds its roots in formal language theory: the 'grammar' defining a process is transformed into a variant of Greibach Normal Form. Next, any such form is further reduced to linear form, i.e., to an equation that resembles

Jan Friso Groote; Alban Ponse; Yaroslav S. Usenko

2001-01-01

137

Quantifying reliability uncertainty : a proof of concept  

Microsoft Academic Search

This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative

Kathleen V. Diegert; Michael A. Dvorack; James T. Ringland; Michael Joseph Mundt; Aparna Huzurbazar; John F. Lorio; Quinn Fatherley; Christine Anderson-Cook; Alyson G. Wilson; Rena M. Zurn

2009-01-01
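The point estimates for a mixed series/parallel system combine via the standard reliability block formulas (a series system needs every component; a parallel group fails only if all its members fail). This is a minimal sketch of those textbook formulas, not the paper's Classical or Bayesian uncertainty machinery, and the component reliabilities are invented for illustration.

```python
import math

def series_reliability(rs):
    """A series system works only if every component works."""
    return math.prod(rs)

def parallel_reliability(rs):
    """A parallel (redundant) group fails only if every component fails."""
    return 1.0 - math.prod(1.0 - r for r in rs)

# Hypothetical mixed system: two redundant pumps in parallel,
# in series with a single controller.
pumps = parallel_reliability([0.90, 0.90])   # redundancy lifts 0.90 to 0.99
system = series_reliability([pumps, 0.95])   # series with the 0.95 controller
```

Quantifying the uncertainty around such estimates, rather than the estimates themselves, is what the abstract's Classical and Bayesian methods address.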

138

Two portable parallel tridiagonal solvers  

SciTech Connect

Many scientific computer codes involve linear systems of equations which are coupled only between nearest neighbors in a single dimension. The most common situation can be formulated as a tridiagonal matrix relating source terms and unknowns. This system of equations is commonly solved using simple forward and back substitution. The usual algorithm is spectacularly ill suited for parallel processing with distributed data, since information must be sequentially communicated across all domains. Two new tridiagonal algorithms have been implemented in FORTRAN 77. The two algorithms differ only in the form of the unknown which is to be found. The first and simplest algorithm solves for a scalar quantity evaluated at each point along the single dimension being considered. The second algorithm solves for a vector quantity evaluated at each point. The solution method is related to other recently published approaches, such as that of Bondeli. An alternative parallel tridiagonal solver, used as part of an Alternating Direction Implicit (ADI) scheme, has recently been developed at LLNL by Lambert. For a discussion of useful parallel tridiagonal solvers, see the work of Mattor, et al. Previous work appears to be concerned only with scalar unknowns. This paper presents a new technique which treats both scalar and vector unknowns. There is no restriction upon the sizes of the subdomains. Even though the usual tridiagonal formulation may not be theoretically optimal when used iteratively, it is used in so many computer codes that it appears reasonable to write a direct substitute for it. The new tridiagonal code can be used on parallel machines with a minimum of disruption to pre-existing programming. As tested on various parallel computers, the parallel code shows efficiency greater than 50% (that is, more than half of the available computer operations are used to advance the calculation) when each processor is given at least 100 unknowns for which to solve.

Eltgroth, P.G.

1994-07-15
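The "simple forward and back substitution" the abstract calls spectacularly ill-suited for distributed data is the classic Thomas algorithm. A minimal serial sketch of that sequential baseline (not the report's parallel solver) follows; the demo system is invented and chosen so the exact solution is a vector of ones.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system by forward elimination and back substitution.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Each step depends on the previous one, which is why this algorithm
    cannot be split across domains without the techniques the report develops."""
    n = len(b)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                     # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Demo: 5x5 diagonally dominant system whose solution is all ones.
a = np.array([0., 1., 1., 1., 1.])
b = np.array([4., 4., 4., 4., 4.])
c = np.array([1., 1., 1., 1., 0.])
d = np.array([5., 6., 6., 6., 5.])
x = thomas_solve(a, b, c, d)
```

The data dependence in both loops runs strictly in one direction along the dimension, which is exactly the sequential communication pattern the abstract describes.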

139

Parallel digital forensics infrastructure.  

SciTech Connect

This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the architecture and implementation of the parallel digital forensics (PDF) infrastructure.

Liebrock, Lorie M. (New Mexico Tech, Socorro, NM); Duggan, David Patrick

2009-10-01

140

Global approach to detection of parallelism  

SciTech Connect

Modern computers will increasingly rely on parallelism to achieve high computation rates. Techniques to automatically detect and exploit parallelism have proven effective for computers with vector capabilities. To employ similar techniques for asynchronous multiprocessor machines, the analysis and transformations used for vectorization must be extended to apply to entire programs rather than single loops. Three subproblems are addressed. A sequential-to-parallel conversion technique is presented. This algorithm, called a parallel code generator, is suitable for conversion of entire subroutines to parallel form. This algorithm is shown to be optimal for a restrictive form of the conversion problem. Additional transformations can be added to the basic parallel code generator. Loop interchange is added to the conversion problem, but it is shown that finding the optimal solution is then NP-complete. The presence of loop-carried dependences results in less-efficient parallel code. Loop alignment is a general tool for removing loop-carried dependences and improving the effectiveness of the parallel code generator. Loop alignment is hampered by alignment conflicts. A transformation called code replication can be used to break alignment conflicts at the cost of additional computation in the output program.

Callahan, C.D. II

1987-01-01

141

Reliability in engineering design  

Microsoft Academic Search

Reliability measures are examined, taking into account the reliability function, the expected life, the failure rate and the hazard function, the reliability and hazard function for well-known distributions, hazard models and product life, the estimation of the hazard function and the reliability function from empirical data, and comments on distribution selection. Static reliability models are considered along with aspects of

K. C. Kapur; L. R. Lamberson

1977-01-01

142

Automatic Generation of Parallel Programs with Dynamic Load Balancing  

Microsoft Academic Search

Existing parallelizing compilers are targeted towards parallel architectures where all processors are dedicated to a single application. However, a new type of parallel system has become available in the form of high-performance workstations connected by high-speed networks. Such systems pose new problems for compilers because the available processing power on each workstation may change with time due

Bruce S. Siegell; Peter Steenkiste

1994-01-01

143

Low-power approaches for parallel, free-space photonic interconnects  

SciTech Connect

Future advances in the application of photonic interconnects will involve the insertion of parallel-channel links into Multi-Chip Modules (MCMs) and board-level parallel connections. Such applications will drive photonic link components into more compact forms that consume far less power than traditional telecommunication data links. These will make use of new device-level technologies such as vertical cavity surface-emitting lasers and special low-power parallel photoreceiver circuits. Depending on the application, these device technologies will often be monolithically integrated to reduce the amount of board or module real estate required by the photonics. Highly parallel MCM and board-level applications will also require simplified drive circuitry, lower cost, and higher reliability than has been demonstrated in photonic and optoelectronic technologies. An example is found in two-dimensional point-to-point array interconnects for MCM stacking. These interconnects are based on high-efficiency Vertical Cavity Surface Emitting Lasers (VCSELs), Heterojunction Bipolar Transistor (HBT) photoreceivers, integrated micro-optics, and MCM-compatible packaging techniques. Individual channels have been demonstrated at 100 Mb/s, operating with a direct 3.3V CMOS electronic interface while using 45 mW of electrical power. These results demonstrate how optoelectronic device technologies can be optimized for low-power parallel link applications.

Carson, R.F.; Lovejoy, M.L.; Lear, K.L.; Warren, M.E.; Seigal, P.K.; Craft, D.C.; Kilcoyne, S.P.; Patrizi, G.A.; Blum, O.

1995-12-31

144

Parallelizing Conditional Recurrences  

Microsoft Academic Search

Recursive functions which use conditional constructs are common in functional (and imperative) programs. We present a collection of techniques for handling such functions in a parallel synthesis method. These techniques can help us enlarge the class of sequential functions which can be systematically transformed into their parallel equivalents.

Wei-ngan Chin; John Darlington; Yike Guo

1996-01-01

145

The Nas Parallel Benchmarks  

Microsoft Academic Search

A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of five parallel kernels and three simulated application benchmarks. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their pencil-and-paper specification---all details of these benchmarks are

D. Bailey; E. Barszcz; J. Barton; D. Browning; R. Carter; L. Dagum

1994-01-01

146

Parallel discrete event simulation  

Microsoft Academic Search

Parallel discrete event simulation (PDES), sometimes called distributed simulation, refers to the execution of a single discrete event simulation program on a parallel computer. PDES has attracted a considerable amount of interest in recent years. From a pragmatic standpoint, this interest arises from the fact that large simulations in engineering, computer science, economics, and military applications, to mention a few,

Richard M. Fujimoto

1990-01-01

147

Performance of Parallel Algorithms.  

National Technical Information Service (NTIS)

A notation to express the performance of a parallel computation is developed. A formalization of the performance measures for a parallel algorithm in which Amdahl's law is a special case is given. The general formulation of Amdahl's law is summarized. The...

J. J. Lukkien

1989-01-01
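Amdahl's law, of which the report's formalization is a generalization, fits in a few lines. The parallel fraction and processor count below are illustrative values, not figures from the report.

```python
def amdahl_speedup(parallel_fraction, n_procs):
    """Amdahl's law: overall speedup is capped by the serial fraction of the work."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_procs)

# Even with 95% of the work parallelizable, 1024 processors
# yield a speedup of less than 20x: the 5% serial part dominates.
s = amdahl_speedup(0.95, 1024)
```

This cap on achievable speedup is why efficiency (speedup divided by processor count) is the companion measure formalized in performance models like the one this record describes.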

148

Massively parallel mathematical sieves  

SciTech Connect

The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.

Montry, G.R.

1989-01-01
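For reference, the serial baseline being parallelized in this record is the classic Sieve of Eratosthenes; the scattered-decomposition parallel version itself is not reproduced here. A minimal sketch:

```python
def sieve(limit):
    """Sieve of Eratosthenes: mark composites by crossing out multiples
    of each prime p up to sqrt(limit), starting from p*p."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= limit:
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
        p += 1
    return [n for n, prime in enumerate(is_prime) if prime]
```

A parallel version along the lines of the abstract would partition the integer range across processors, with each processor crossing out multiples within its own subrange.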

149

Parallel computing works  

SciTech Connect

An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

Not Available

1991-10-23

150

Comparison of Reliability Measures under Factor Analysis and Item Response Theory  

ERIC Educational Resources Information Center

|Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability rho and an unweighted sum score-based omega, among many others. With increasing popularity of item response theory, a parallel reliability measure…

Cheng, Ying; Yuan, Ke-Hai; Liu, Cheng

2012-01-01

151

Compositional C++: Compositional Parallel Programming  

Microsoft Academic Search

A compositional parallel program is a program constructed by composing component programs in parallel, where the composed program inherits properties of its components. In this paper, we describe a small extension of C++ called Compositional C++ or CC++ which is an object-oriented notation that supports compositional parallel programming. CC++ integrates different paradigms of parallel programming: data-parallel, task-parallel and object-parallel paradigms;

K. Mani Chandy; Carl Kesselman

1992-01-01

152

Optimal reliability of systems subject to imperfect fault-coverage  

Microsoft Academic Search

This paper maximizes the reliability of systems subjected to imperfect fault-coverage. The results include the effect of common-cause failures and 'maximum allowable spare limit'. The generalized results are presented and then the policies for some specific systems are given. The systems considered include parallel, parallel-series, series-parallel, k-out-of-n, and NMR (k-out-of-(2k-1)) systems. The results are generalized for the non s-identical

Suprasad V. Amari; Joanne Bechta Dugan; Ravindra B. Misra

1999-01-01
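The structures listed in this record (series, parallel, k-out-of-n) all reduce to one binomial formula when components are s-identical with independent failures; the imperfect fault-coverage that the paper actually models is ignored in this minimal sketch:

```python
from math import comb

def k_out_of_n_reliability(k: int, n: int, p: float) -> float:
    """Probability that at least k of n independent components,
    each with reliability p, are working (k-out-of-n:G system)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Special cases: a series system is n-out-of-n, a parallel system is 1-out-of-n.
series = k_out_of_n_reliability(3, 3, 0.9)    # 0.9**3 = 0.729
parallel = k_out_of_n_reliability(1, 3, 0.9)  # 1 - 0.1**3 = 0.999
tmr = k_out_of_n_reliability(2, 3, 0.9)       # 2-out-of-3 (NMR with k=2) = 0.972
```

The NMR case in the abstract is exactly k-out-of-(2k-1) in this formula.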

153

Thermodynamic stability and folding kinetics of the major G-quadruplex and its loop-isomers formed in the Nuclease Hypersensitive Element in the human c-Myc promoter-Effect of loops and flanking segments on the stability of parallel-stranded intramolecular G-quadruplexes  

PubMed Central

Overexpression of the c-Myc proto-oncogene is associated with a broad spectrum of human cancers. The nuclease hypersensitivity element III1 (NHE III1) of the c-Myc promoter can form transcriptionally active and silenced forms and the formation of DNA G-quadruplex structures has been shown to be critical for c-Myc transcriptional silencing. The major G-quadruplex formed in the c-Myc NHE III1 is a mixture of four loop-isomers, which have all been shown to be biologically relevant to c-Myc transcriptional control. In this study we performed a thorough thermodynamic and kinetic study of the four c-Myc loop-isomers in K+ solution. The four loop-isomers all form parallel-stranded G-quadruplexes with short loop lengths. While the parallel-stranded G-quadruplex has been known to favor short loop lengths, our results show that the difference in thermodynamic and kinetic properties of the four loop-isomers, and hence between the parallel G-quadruplexes with similar loop lengths, is more significant than previously recognized. At 20 mM K+, the average difference of the Tm values between the most stable loop-isomer 14/23 and the least stable loop-isomer 11/20 is greater than 10 degrees. In addition, the capping structures formed by the extended flanking segments are shown to contribute to a stabilization of 2–3°C in Tm for the c-Myc promoter G-quadruplex. Understanding the intrinsic thermodynamic stability and kinetic properties of the c-Myc G-quadruplex loop-isomers can help understand their biological roles and drug targeting.

Hatzakis, Emmanuel; Okamoto, Keika; Yang, Danzhou

2010-01-01

154

Java Parallel Secure Stream for Grid Computing  

SciTech Connect

The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because the TCP window size must be tuned to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate based single sign-on mechanism and SSL based connection establishment are integrated into this package. Finally a few applications using this package are discussed.

Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

2001-09-01
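The partitioning scheme described for JPARSS can be sketched in Python; threads and an in-process `transfer` function stand in for the package's parallel Java socket streams, and all names here are illustrative, not JPARSS's API:

```python
from concurrent.futures import ThreadPoolExecutor

def split(data: bytes, nstreams: int):
    """Partition data into nstreams roughly equal chunks."""
    size = (len(data) + nstreams - 1) // nstreams
    return [data[i : i + size] for i in range(0, len(data), size)]

def transfer(chunk_id: int, chunk: bytes):
    # Stand-in for sending one partition over its own stream.
    return chunk_id, chunk

def parallel_send(data: bytes, nstreams: int = 4) -> bytes:
    chunks = split(data, nstreams)
    with ThreadPoolExecutor(max_workers=nstreams) as pool:
        results = pool.map(transfer, range(len(chunks)), chunks)
    # Reassemble by chunk index, regardless of completion order.
    return b"".join(chunk for _, chunk in sorted(results))
```

The key design point the abstract argues for is that several concurrent streams keep the aggregate in-flight window large without per-connection tuning; the reassembly-by-index step is what makes the out-of-order arrivals safe.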

155

Parallel visual computation  

NASA Astrophysics Data System (ADS)

The functional abilities and parallel architecture of the human visual system are a rich source of ideas about visual processing. Any visual task that we can perform quickly and effortlessly is likely to have a computational solution using a parallel algorithm. Recently, several such parallel algorithms have been found that exploit information implicit in an image to compute intrinsic properties of surfaces, such as surface orientation, reflectance and depth. These algorithms require a computational architecture that has similarities to that of visual cortex in primates.

Ballard, Dana H.; Hinton, Geoffrey E.; Sejnowski, Terrence J.

1983-11-01

156

Parallel processing in Ada  

SciTech Connect

Ada was designed from the beginning with parallel processing applications in mind. Its tasking mechanism is a coherent response to the language issues involved in parallel processing, and carefully balances the often conflicting goals of high-level language features on the one hand and efficient implementation on the other. The purpose of this discussion is to place the design of Ada's parallel processing in its proper historical and technical context. In the process we will show how Ada itself has clarified some issues and thus established trends in language design.

Mundie, D.A.; Fisher, D.A.

1986-08-01

157

Wind Turbine Reliability Database Update.  

National Technical Information Service (NTIS)

This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition...

J. A. Stinebaugh; P. S. Veers; R. R. Hill; V. A. Peters

2009-01-01

158

Parallel Program Archetypes.  

National Technical Information Service (NTIS)

The research supported by this grant falls into three categories: distributed systems, parallel programming, and theory of concurrent compositions. We developed a distributed systems framework, called Infospheres, that allows any Java programmer to create...

M. Chandy

1997-01-01

159

High Performance Parallel Computing.  

National Technical Information Service (NTIS)

The accomplishments of the research project 'High Performance Parallel Computing' for the year 1983 span algorithm formulation, parallel programming languages, basic software for the Texas Reconfigurable Array Computer and validation of design concepts for...

J. C. Browne; G. J. Lipovski; M. Malek

1985-01-01

160

Simplified Parallel Domain Traversal  

SciTech Connect

Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO₂ and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.

Erickson III, David J. [ORNL]

2011-01-01

161

Parallel Lisp Simulator,  

National Technical Information Service (NTIS)

CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper des...

J. S. Weening

1988-01-01

162

Partitioning and parallel radiosity  

NASA Astrophysics Data System (ADS)

This paper proposes a theoretical framework, based on domain subdivision, for parallel radiosity. Three implementation approaches, taking advantage of partitioning algorithms and a global shared memory architecture, are presented.

Merzouk, S.; Winkler, C.; Paul, J. C.

1996-03-01

163

Parallel lumigraph reconstruction  

Microsoft Academic Search

This paper presents three techniques for reconstructing Lumigraphs/Lightfields on commercial ccNUMA parallel distributed shared memory computers. The first method is a parallel extension of the software-based method proposed in the Lightfield paper. This expands the ray/two-plane intersection test along the film plane, which effectively becomes scan conversion. The second method extends this idea by using a shear/warp factorization that accelerates

Peter-Pike Sloan; Charles Hansen

1999-01-01

164

UCLA Parallel PIC Framework  

Microsoft Academic Search

The UCLA Parallel PIC Framework (UPIC) has been developed to provide trusted components for the rapid construction of new, parallel Particle-in-Cell (PIC) codes. The Framework uses object-based ideas in Fortran95, and is designed to provide support for various kinds of PIC codes on various kinds of hardware. The focus is on student programmers. The Framework supports multiple numerical methods, different

Viktor K. Decyk; Charles D. Norton

2004-01-01

165

Parallel DC notch filter  

NASA Astrophysics Data System (ADS)

In the process of image acquisition, the object of interest may not be evenly illuminated, producing an image with shading irregularities. This type of image is very difficult to analyze, and much research has concentrated on this problem. One way to remove the uneven illumination is to filter the image. The dc notch filter is one of the spatial domain filters used to reduce the effect of uneven light illumination on the image. Although the dc notch filter is a spatial domain filter, it is still rather time consuming to apply, especially when it is implemented on a microcomputer. To overcome the speed problem, a parallel dc notch filter is proposed. Based on the separability of the dc notch filter algorithm, image parallelism (a parallel image processing model) is used. To improve the performance of the microcomputer, an INMOS IMS B008 Module Mother Board with four IMS T800-17 processors is installed, and the dc notch filter is implemented on the resulting transputer network. This parallel dc notch filter greatly improves the computation time of the filter in comparison with the sequential one. Furthermore, speed-up is used to analyze the performance of the parallel algorithm. As a result, parallel implementation of the dc notch filter on a transputer network gives real-time performance of this filter.

Kwok, Kam-Cheung; Chan, Ming-Kam

1991-12-01
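The abstract does not give the filter's exact definition; as an illustrative stand-in (an assumption, not the paper's implementation), a dc notch filter can be read as suppressing the local average, i.e. the slowly varying illumination component. A pure-Python sketch, where the row loop marks the natural unit to farm out across a transputer network:

```python
def dc_notch(image, radius=1):
    """Subtract the local mean over a (2*radius+1)^2 window from each
    pixel, removing the slowly varying (dc) illumination component.
    `image` is a list of rows of numbers."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):          # each row is independent of the others,
        for x in range(w):      # so rows can be distributed to processors
            vals = [image[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = image[y][x] - sum(vals) / len(vals)
    return out
```

On a uniformly lit region the output is zero everywhere, which is the sense in which the filter "notches out" the dc component.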

166

Low-power, parallel photonic interconnections for Multi-Chip Module applications  

SciTech Connect

New applications of photonic interconnects will involve the insertion of parallel-channel links into Multi-Chip Modules (MCMs). Such applications will drive photonic link components into more compact forms that consume far less power than traditional telecommunication data links. MCM-based applications will also require simplified drive circuitry, lower cost, and higher reliability than has been demonstrated currently in photonic and optoelectronic technologies. The work described is a parallel link array, designed for vertical (Z-Axis) interconnection of the layers in a MCM-based signal processor stack, operating at a data rate of 100 Mb/s. This interconnect is based upon high-efficiency VCSELs, HBT photoreceivers, integrated micro-optics, and MCM-compatible packaging techniques.

Carson, R.F.; Lovejoy, M.L.; Lear, K.L.

1994-12-31

167

ParPEST: a pipeline for EST data analysis based on parallel computing  

PubMed Central

Background: Expressed Sequence Tags (ESTs) are short and error-prone DNA sequences generated from the 5' and 3' ends of randomly selected cDNA clones. They provide an important resource for comparative and functional genomic studies and, moreover, represent reliable information for the annotation of genomic sequences. Because of advances in biotechnologies, ESTs are determined daily in the form of large datasets. Therefore, suitable and efficient bioinformatic approaches are necessary to organize the data and related information content for further investigations. Results: We implemented ParPEST (Parallel Processing of ESTs), a pipeline based on parallel computing for EST analysis. The results are organized in a suitable data warehouse to provide a starting point to mine expressed sequence datasets. The collected information is useful for investigations on data quality and information content, enriched by a preliminary functional annotation. Conclusion: The pipeline presented here has been developed to perform an exhaustive and reliable analysis of EST data and to provide a curated set of information based on a relational database. Moreover, it is designed to reduce the execution time of the specific steps required for a complete analysis using distributed processes and parallelized software. It is conceived to run on modest hardware and to scale, at affordable cost, with the increasing demands typical of such data.

D'Agostino, Nunzio; Aversano, Mario; Chiusano, Maria Luisa

2005-01-01

168

Can There Be Reliability Without Reliability.  

National Technical Information Service (NTIS)

A recent article by Pamela Moss asks the title question, 'Can there be validity without reliability?' If by reliability we mean only KR-20 coefficients or inter-rater correlations, the answer is yes. Sometimes these particular indices for evaluating evidenc...

R. J. Mislevy

1994-01-01

169

Can There Be Reliability without "Reliability?"  

ERIC Educational Resources Information Center

|An "Educational Researcher" article by Pamela Moss (1994) asks the title question, "Can there be validity without reliability?" Yes, she answers, if by reliability one means "consistency among independent observations intended as interchangeable" (Moss, 1994, p. 7), quantified by internal consistency indices such as KR-20 coefficients and…

Mislevy, Robert J.

2004-01-01
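KR-20, the internal consistency index named in these two records, applies to dichotomously scored (0/1) items. A small sketch of the computation, assuming the population variance of total scores:

```python
def kr20(scores):
    """Kuder-Richardson Formula 20 for a matrix of 0/1 item scores
    (rows = examinees, columns = items)."""
    n = len(scores)
    k = len(scores[0])
    p = [sum(row[j] for row in scores) / n for j in range(k)]  # item difficulties
    pq = sum(pi * (1 - pi) for pi in p)                        # sum of item variances
    totals = [sum(row) for row in scores]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n       # population variance
    return (k / (k - 1)) * (1 - pq / var_total)
```

A perfectly Guttman-ordered set of responses already yields a high coefficient on tiny data, which is one reason such indices, as these records argue, capture only one narrow notion of reliability.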

170

Performance Bounds for Parallel Processors.  

National Technical Information Service (NTIS)

A general model of computation on a p-parallel processor is proposed, distinguishing clearly between the logical parallelism (p* processes) inherent in a computation, and the physical parallelism (p processors) available in the computer organization. This ...

R. B. L. Lee

1976-01-01

171

Realistic analytical phantoms for parallel magnetic resonance imaging.  

PubMed

The quantitative validation of reconstruction algorithms requires reliable data. Rasterized simulations are popular but they are tainted by an aliasing component that impacts the assessment of the performance of reconstruction. We introduce analytical simulation tools that are suited to parallel magnetic resonance imaging and allow one to build realistic phantoms. The proposed phantoms are composed of ellipses and regions with piecewise-polynomial boundaries, including spline contours, Bézier contours, and polygons. In addition, they take the channel sensitivity into account, for which we investigate two possible models. Our analytical formulations provide well-defined data in both the spatial and k-space domains. Our main contribution is the closed-form determination of the Fourier transforms that are involved. Experiments validate the proposed implementation. In a typical parallel magnetic resonance imaging reconstruction experiment, we quantify the bias in the overly optimistic results obtained with rasterized simulations (the inverse-crime situation). We provide a package that implements the different simulations and provide tools to guide the design of realistic phantoms. PMID:22049364

Guerquin-Kern, M; Lejeune, L; Pruessmann, K P; Unser, M

2011-10-28

172

Component specification for parallel coupling infrastructure.  

SciTech Connect

Coupled systems comprise multiple mutually interacting subsystems, and are an increasingly common computational science application, most notably as multiscale and multiphysics models. Parallel computing, and in particular message-passing programming, have spurred the development of these models, but also present a parallel coupling problem (PCP) in the form of intermodel data dependencies. The PCP complicates model coupling through requirements for the description, transfer, and transformation of the distributed data that models in a parallel coupled system exchange. Component-based software engineering has been proposed as one means of conquering software complexity in scientific applications, and given the compound nature of coupled models, it is a natural approach to addressing the parallel coupling problem. We define a software component specification for solving the parallel coupling problem. This design draws from the already successful Common Component Architecture (CCA). We abstract the parallel coupling problem's elements and map them onto a set of CCA components, defining a parallel coupling infrastructure toolkit. We discuss a reference implementation based on the Model Coupling Toolkit. We demonstrate how these components might be deployed to solve relevant coupling problems in climate modeling.

Larson, J. W.; Norris, B.; Mathematics and Computer Science; Australian National Univ.

2007-01-01

173

Mechanically reliable scales and coatings  

SciTech Connect

As the first stage in examining the mechanical reliability of protective surface oxides, the behavior of alumina scales formed on iron-aluminum alloys during high-temperature cyclic oxidation was characterized in terms of damage and spallation tendencies. Scales were thermally grown on specimens of three iron-aluminum compositions using a series of exposures to air at 1000°C. Gravimetric data and microscopy revealed substantially better integrity and adhesion of the scales grown on an alloy containing zirconium. The use of polished (rather than just ground) specimens resulted in scales that were more suitable for subsequent characterization of mechanical reliability.

Tortorelli, P.F.; Alexander, K.B.

1995-07-01

174

Multi-ASIP based parallel and scalable implementation of motion estimation kernel for high definition videos  

Microsoft Academic Search

Parallel implementations of motion estimation for high definition videos typically exploit various forms of parallelism (GOP, frame-, slice- and macroblock-level) to deliver real-time throughput. Although parallel implementations deliver real-time throughput, they often suffer from limited flexibility and scalability due to the form of parallelism and architecture used. In this work, we use Group Of MacroBlocks (GOMB) and Intra-MB (IMB) parallelism

Hong Chinh Doan; Haris Javaid; Sri Parameswaran

2011-01-01

175

Parallel processing architecture  

DOEpatents

The parallel processing architecture provides a processor array which accepts input data at a faster rate than its processing elements are able to execute. The main features of this architecture are its programmability, scalability, high bandwidth communication and low cost. It provides high connectivity while maintaining minimum distance between processor elements. This architecture enables construction of a parallel processing system with high bandwidth communication in six directions among the neighboring processors. It provides for future growth into more complex and optimized algorithms, and facilitates incorporation of hardware advances with little effect on currently installed systems. Parallel processing architecture is useful for data sharing in an array, pattern recognition within a data array, and sustaining a data input rate which is higher than the pattern recognition algorithm execution time (particle identification in high energy physics).

Crosetto, D.B.

1992-01-01

176

A Self-Learning Method of Parallel Texts Alignment  

Microsoft Academic Search

This paper describes a language independent method for alignment of parallel texts that re-uses acquired knowledge. The system extracts word translation equivalents and re-uses them as correspondence points in order to enhance the alignment of parallel texts. Points that may cause misalignment are filtered using confidence bands of linear regression analysis instead of heuristics, which are not theoretically reliable. Homographs

António Ribeiro; José Gabriel Pereira Lopes; João Mexia

2000-01-01

177

Power electronics reliability analysis.  

SciTech Connect

This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.

Smith, Mark A.; Atcitty, Stanley

2009-12-01
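The report's second approach (deriving system reliability from component reliability via a fault tree) can be sketched under an independence assumption: an AND gate models redundancy (the top event occurs only if all inputs fail), an OR gate models series dependence (any input failure is a system failure). The converter structure below is hypothetical, not from the report:

```python
def and_gate(*failure_probs):
    """Top event fails only if all inputs fail (redundant/parallel parts),
    assuming independent failures."""
    prob = 1.0
    for q in failure_probs:
        prob *= q
    return prob

def or_gate(*failure_probs):
    """Top event fails if any input fails (series dependence),
    assuming independent failures."""
    surviving = 1.0
    for q in failure_probs:
        surviving *= 1.0 - q
    return 1.0 - surviving

# Hypothetical device: two redundant switching legs (each failing with
# probability 0.05) in series with a controller (failing with 0.01).
q_top = or_gate(and_gate(0.05, 0.05), 0.01)
system_reliability = 1.0 - q_top
```

Composing gates this way yields a baseline model that, as the report suggests, can then be used to ask which component improvement buys the most system reliability per dollar.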

178

Printed wiring assembly and interconnection reliability  

NASA Astrophysics Data System (ADS)

This report presents reliability prediction models for printed wiring assemblies, solderless wrap assemblies, wrapped and soldered assemblies, and discrete wiring assemblies w/electroless deposited PTH for inclusion in MIL-HDBK-217. Collected field failure rate data were utilized to develop and evaluate the factors. The reliability prediction models are presented in a form compatible with MIL-HDBK-217D.

Coit, D. W.

1981-11-01

179

Printed wiring assembly and interconnection reliability  

Microsoft Academic Search

This report presents reliability prediction models for printed wiring assemblies, solderless wrap assemblies, wrapped and soldered assemblies, and discrete wiring assemblies w/electroless deposited PTH for inclusion in MIL-HDBK-217. Collected field failure rate data were utilized to develop and evaluate the factors. The reliability prediction models are presented in a form compatible with MIL-HDBK-217D.

D. W. Coit

1981-01-01

180

Languages for Parallel Processors  

NASA Astrophysics Data System (ADS)

The effective programming of parallel computers is much more complex than the programming of conventional serial computers. There are two fundamental models of highly parallel computer architectures: single instruction stream-multiple data stream, in which a single program control unit is used to control a set of slave processing elements, and multiple instruction stream-multiple data stream, in which a set of interconnected independent processors cooperate on a single task. The high level programming language constructs appropriate for each model are discussed.

Reeves, A. P.

181

Scalable Parallel Crash Simulations  

SciTech Connect

We are pleased to submit our efforts in parallelizing the PRONTO application suite for consideration in the SuParCup 99 competition. PRONTO is a finite element transient dynamics simulator which includes a smoothed particle hydrodynamics (SPH) capability; it is similar in scope to the well-known DYNA, PamCrash, and ABAQUS codes. Our efforts over the last few years have produced a fully parallel version of the entire PRONTO code which (1) runs fast and scalably on thousands of processors, (2) has performed the largest finite-element transient dynamics simulations we are aware of, and (3) includes several new parallel algorithmic ideas that have solved some difficult problems associated with contact detection and SPH scalability. We motivate this work, describe the novel algorithmic advances, give performance numbers for PRONTO running on Sandia's Intel Teraflop machine, and highlight two prototypical large-scale computations we have performed with the parallel code. We have successfully parallelized a large-scale production transient dynamics code with a novel algorithmic approach that utilizes multiple decompositions for different key segments of the computations. To be able to simulate a more than ten million element model in a few tenths of a second per timestep is unprecedented for solid dynamics simulations, especially when full global contact searches are required. The key reason is our new algorithmic ideas for efficiently parallelizing the contact detection stage. To our knowledge scalability of this computation had never before been demonstrated on more than 64 processors. This has enabled parallel PRONTO to become the only solid dynamics code we are aware of that can run effectively on 1000s of processors. More importantly, our parallel performance compares very favorably to the original serial PRONTO code which is optimized for vector supercomputers. On the container crush problem, a Teraflop node is as fast as a single processor of the Cray Jedi.
This means that on the Teraflop machine we can now run simulations with tens of millions of elements thousands of times faster than we could on the Jedi! This is enabling transient dynamics simulations of unprecedented scale and fidelity. Not only can previous applications be run with vastly improved resolution and speed, but qualitatively new and different analyses have been made possible.

Attaway, Stephen; Barragy, Ted; Brown, Kevin; Gardner, David; Gruda, Jeff; Heinstein, Martin; Hendrickson, Bruce; Metzinger, Kurt; Neilsen, Mike; Plimpton, Steve; Pott, John; Swegle, Jeff; Vaughan, Courtenay

1999-06-01

182

Massively parallel computing system  

DOEpatents

A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

Benner, R.E.; Gustafson, J.L.; Montry, G.R.

1989-03-01

183

Architectural design for reliability  

SciTech Connect

Design-for-reliability concepts can be applied to the products of the construction industry, which includes buildings, bridges, transportation systems, dams, and other structures. The application of a systems approach to designing in reliability emphasizes the importance of incorporating uncertainty in the analyses, the benefits of optimization analyses, and the importance of integrating reliability, safety, and security. 4 refs., 3 figs.

Cranwell, R.M.; Hunter, R.L.

1997-08-01

184

Reliability in MEMS Packaging  

Microsoft Academic Search

Cost effective packaging and robust reliability are two critical factors for successful commercialization of MEMS and microsystems. While packaging contributes to the effective production cost of MEMS devices, reliability addresses consumer's confidence in and expectation on sustainable performance of the products. There are a number of factors that contribute to the reliability of MEMS; packaging, in particular, in bonding and

Tai-Ran Hsu

2006-01-01

185

Improving Reliability of a Residency Interview Process  

PubMed Central

Objective. To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. Methods. In phase 1 of the study, authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. Results. In phase 1, the evaluation form had a reliability of 0.98 with person separation of 6.56; the form reproducibly separated applicants into 6 distinct groups. Using that form in phases 2 and 3, our largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while confirmatory phase 3 was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. Conclusion. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact from content specificity.

Serres, Michelle L.; Gundrum, Todd E.

2013-01-01

186

Pillar: A Parallel Implementation Language  

Microsoft Academic Search

As parallelism in microprocessors becomes mainstream, new programming languages and environments are emerging to meet the challenges of parallel programming. To support research on these languages, we are developing a low-level language infrastructure called Pillar (derived from Parallel Implementation Language). Although Pillar programs are intended to be automatically generated from source programs in each parallel language, Pillar programs

Todd Anderson; Neal Glew; Peng Guo; Brian T. Lewis; Wei Liu; Zhanglin Liu; Leaf Petersen; Mohan Rajagopalan; James M. Stichnoth; Gansha Wu; Dan Zhang

2007-01-01

187

Technique Used in Determining Field Operational Reliability.  

National Technical Information Service (NTIS)

The report depicts several of the problems involved in conducting an investigation to determine the operational reliability of an Army radio set utilized under actual field conditions. A reporting form was distributed to the troops prior to the exercise c...

J. W. D'Oria

1966-01-01

188

Parallel Computational Geometry  

Microsoft Academic Search

We present efficient parallel algorithms for several basic problems in computational geometry: convex hulls, Voronoi diagrams, detecting line segment intersections, triangulating simple polygons, minimizing a circumscribing triangle, and recursive data-structures for three-dimensional queries.
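The hull construction these parallel algorithms build on can be illustrated with a compact serial version. The sketch below is Andrew's monotone-chain algorithm in Python, not the paper's parallel formulation, which distributes the work across processors.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull, O(n log n).
    Returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, drop duplicates

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```

The interior point (1, 1) is discarded and the four square corners come back in counter-clockwise order.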

Alok Aggarwal; Bernard Chazelle; Leonidas J. Guibas; Colm Ó'dúnlaing; Chee-keng Yap

1988-01-01

189

Parallel Spectral Numerical Methods  

NSDL National Science Digital Library

This module teaches the principles of Fourier spectral methods, their utility in solving partial differential equations, and how to implement them in code. Performance considerations for several Fourier spectral implementations are discussed, and methods for effective scaling on parallel computers are explained.
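As a minimal illustration of the module's topic, the sketch below advances the 1-D periodic heat equation by one step in Fourier space with NumPy. The function name and parameters are illustrative, not taken from the module.

```python
import numpy as np

def heat_step_spectral(u, dt, nu=1.0):
    """One exact step of u_t = nu * u_xx on [0, 2*pi) with periodic
    boundaries: each Fourier mode k decays by exp(-nu * k**2 * dt)."""
    n = u.size
    k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers
    u_hat = np.fft.fft(u)
    u_hat *= np.exp(-nu * k ** 2 * dt)    # exact per-mode decay
    return np.fft.ifft(u_hat).real

x = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
u1 = heat_step_spectral(np.sin(x), dt=0.1)  # sin(x) decays by exp(-0.1)
```

Because the update is exact per mode, a single sine wave is damped by exactly exp(-nu k² dt) with no time-stepping error; in a parallel implementation the FFT itself is what gets distributed.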

Chen, Gong; Cloutier, Brandon; Li, Ning; Muite, Benson; Rigge, Paul

190

Pringle Parallel Computer.  

National Technical Information Service (NTIS)

The Pringle is a 64-processor MIMD computer with a 64 million (8-bit) instructions per second execution rate. The Pringle runs programs written for the Configurable, Highly Parallel (CHiP) Computer. That is, the Pringle executes the 64 separate instruction stre...

A. A. Kapauau; J. T. Field; D. B. Gannon; L. Snyder

1984-01-01

191

Parallel Adaptive Mesh Refinement  

Microsoft Academic Search

As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution

L Diachin; R Hornung; P Plassmann; A Wissink

2005-01-01

192

Simple Fast Parallel Hashing  

Microsoft Academic Search

A hash table is a representation of a set in a linear size data structure that supports constant-time membership queries. We show how to construct a hash table for any given set of n keys in O(lg lg n) parallel time with high probability, using n processors on a weak version of a

Joseph Gil; Yossi Matias

1994-01-01

193

Parallelism and Functionalism  

Microsoft Academic Search

It has recently been argued by Paul Thagard (1986) that parallel computational models of cognition demonstrate the falsity of the popular theory of mind known as functionalism. It is my contention that his argument is seriously mistaken and rests on a misunderstanding of the functionalist position. While my primary aim is to defend functionalism from Thagard's attack, in the process

William M. Ramsey

1989-01-01

194

Optimizing parallel reduction operations  

SciTech Connect

A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
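The chunked re-association that makes reductions parallelizable can be sketched in Python. The paper's setting is the Sisal compiler; this thread-pool version only illustrates why associativity is the property the optimizations depend on.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def parallel_reduce(op, values, identity, workers=4):
    """Reduce `values` with an ASSOCIATIVE binary `op`: each worker
    reduces one chunk, then the partial results are combined.  Only
    associativity licenses this re-ordering of the combination tree."""
    chunk = max(1, len(values) // workers)
    chunks = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(lambda c: reduce(op, c, identity), chunks))
    return reduce(op, partials, identity)

total = parallel_reduce(lambda a, b: a + b, list(range(100)), 0)
```

A non-associative operator (such as floating-point subtraction) would give chunk-dependent answers, which is exactly why the classes of reduction operations matter for optimization.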

Denton, S.M.

1995-06-01

195

PARALLEL TRIANGULAR MESH REDUCTION  

Microsoft Academic Search

The visualization of large and complex models is frequently required, and a number of operations must be performed before the visualization itself, whether an analysis of the input data or a model simplification. One technique that enhances the available computational power is parallel computation. It can be seen that multiprocessor computers are more often available even

MARTIN FRANC; VÁCLAV SKALA

2000-01-01

196

Interprocedural Analysis for Parallelization  

Microsoft Academic Search

This paper presents an extensive empirical evaluation of an interprocedural parallelizing compiler, developed as part of the Stanford SUIF compiler system. The system incorporates a comprehensive and integrated collection of analyses, including privatization and reduction recognition for both array and scalar variables, and symbolic analysis of array subscripts. The interprocedural analysis framework is designed to provide analysis results nearly as

Mary W. Hall; Brian R. Murphy; Saman P. Amarasinghe; Shih-wei Liao; Monica S. Lam

1995-01-01

197

Parallel Traveling Salesman Problem  

NSDL National Science Digital Library

The traveling salesman problem is a classic optimization problem in which one seeks to minimize the path taken by a salesman in traveling between N cities, where the salesman stops at each city one and only one time, never retracing his/her route. This implementation is designed to run on UNIX systems with X-Windows, and includes parallelization using MPI.
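The decomposition such an MPI implementation typically uses — fixing the first city and giving each rank one choice of second city — can be sketched serially in Python. The function names are illustrative, not from the module.

```python
from itertools import permutations
from math import dist, inf

def tour_length(order, cities):
    """Closed-tour length for a visiting order over city coordinates."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def best_tour_branch(cities, second):
    """Best tour with city 0 fixed first and `second` visited second;
    each such branch is an independent unit of parallel work."""
    rest = [i for i in range(1, len(cities)) if i != second]
    best_order, best_len = None, inf
    for perm in permutations(rest):
        order = (0, second) + perm
        length = tour_length(order, cities)
        if length < best_len:
            best_order, best_len = order, length
    return best_order, best_len

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]  # unit square
# each branch could run on its own MPI rank; here we scan them serially
best_order, best_len = min((best_tour_branch(cities, s) for s in range(1, 4)),
                           key=lambda r: r[1])
```

On the unit square the optimal tour is the perimeter, length 4; the parallel version would reduce the per-rank minima with a single collective operation.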

Joiner, David; Hassinger, Jonathan

198

Parallel molecular genetic analysis  

Microsoft Academic Search

We describe recent progress in parallel molecular genetic analyses using DNA microarrays, gel-based systems, and capillary electrophoresis and utilization of these approaches in a variety of molecular biology assays. These applications include use of polymorphic markers for mapping of genes and disease-associated loci and carrier detection for genetic diseases. Application of these technologies in molecular diagnostics as well as fluorescent

Steven E McKenzie; Elaine Mansfield; Eric Rappaport; Saul Surrey; Paolo Fortina

1998-01-01

199

Quality enhancement of parallel MDP flows with mask suppliers  

NASA Astrophysics Data System (ADS)

For many maskshops, parallel mask data preparation (MDP) flows designed with a final data comparison are viewed as a reliable method that can reduce quality risks caused by mis-operation. However, in recent years, more and more mask data mistakes have shown that present parallel MDP flows still cannot capture all mask data errors. In this paper, we show the major failure modes of parallel MDP flows from analyzing MDP quality accidents and share our approaches to achieving further improvement together with mask suppliers.

Deng, Erwin; Lee, Rachel; Lee, Chun Der

2013-06-01

200

Tempest: A Substrate for Portable Parallel Programs  

Microsoft Academic Search

This paper describes Tempest, a collection of mechanisms for communication and synchronization in parallel programs. With these mechanisms, authors of compilers, libraries, and application programs can exploit—across a wide range of hardware platforms—the best of shared memory, message passing, and hybrid combinations of the two. Because Tempest provides mechanisms, not policies, programmers can tailor communication to a pro-

Mark D. Hill; James R. Larus; David A. Wood

1995-01-01

201

Instruction scheduling for instruction level parallel processors  

Microsoft Academic Search

Nearly all personal computer and workstation processors, and virtually all high-performance embedded processor cores, now embody instruction level parallel (ILP) processing in the form of superscalar or very long instruction word (VLIW) architectures. ILP processors put much more of a burden on compilers; without ...

PAOLO FARABOSCHI; JOSEPH A. FISHER; CLIFF YOUNG

2001-01-01

202

Science Grade 7, Long Form.  

ERIC Educational Resources Information Center

The Grade 7 Science course of study was prepared in two parallel forms: a short form designed for students who had achieved a high measure of success in previous science courses, and a long form for those who have not been able to maintain the pace. Both forms contain similar content. The Grade 7 guide is the first in a three-year sequence for…

New York City Board of Education, Brooklyn, NY. Bureau of Curriculum Development.

203

Ultrascalable petaflop parallel supercomputer  

DOEpatents

A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton On Hudson, NY); Chiu, George (Cross River, NY); Cipolla, Thomas M. (Katonah, NY); Coteus, Paul W. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hall, Shawn (Pleasantville, NY); Haring, Rudolf A. (Cortlandt Manor, NY); Heidelberger, Philip (Cortlandt Manor, NY); Kopcsay, Gerard V. (Yorktown Heights, NY); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY); Takken, Todd (Brewster, NY)

2010-07-20

204

Parallel multilevel preconditioners  

SciTech Connect

In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

1989-01-01

205

Reliability quantification and visualization for electric microgrids  

NASA Astrophysics Data System (ADS)

The electric grid in the United States is undergoing modernization from the state of an aging infrastructure of the past to a more robust and reliable power system of the future. The primary efforts in this direction have come from the federal government through the American Recovery and Reinvestment Act of 2009 (Recovery Act). This provided the U.S. Department of Energy (DOE) with $4.5 billion to develop and implement programs through DOE's Office of Electricity Delivery and Energy Reliability (OE) over a period of 5 years (2008-2012). This was initially a part of Title XIII of the Energy Independence and Security Act of 2007 (EISA), which was later modified by the Recovery Act. As a part of DOE's Smart Grid Programs, Smart Grid Investment Grants (SGIG) and Smart Grid Demonstration Projects (SGDP) were developed as two of the largest programs, with federal grants of $3.4 billion and $600 million respectively. The Renewable and Distributed Systems Integration (RDSI) demonstration projects were launched in 2008 with the aim of reducing peak electricity demand by 15 percent at distribution feeders. Nine such projects were competitively selected, located around the nation. The City of Fort Collins, in cooperative partnership with other federal and commercial entities, was identified to research, develop and demonstrate a 3.5 MW integrated mix of heterogeneous distributed energy resources (DER) to reduce peak load on two feeders by 20-30 percent. This project was called FortZED RDSI and provided an opportunity to demonstrate integrated operation of a group of assets, including demand response (DR), as a single controllable entity, which is often called a microgrid. 
As per IEEE Standard 1547.4-2011 (IEEE Guide for Design, Operation, and Integration of Distributed Resource Island Systems with Electric Power Systems), a microgrid can be defined as an electric power system with the following characteristics: (1) DR and load are present, (2) it has the ability to disconnect from and parallel with the area Electric Power System (EPS), (3) it includes the local EPS and may include portions of the area EPS, and (4) it is intentionally planned. A more reliable electric power grid requires microgrids to operate in tandem with the EPS. Reliability can be quantified through various metrics for performance measurement; in North America this is done through North American Electric Reliability Corporation (NERC) metrics. The microgrid differs significantly from the traditional EPS, especially at the asset level, due to heterogeneity in assets. Thus, its performance cannot be quantified by the same metrics as used for the EPS. Some of the NERC metrics are calculated and interpreted in this work to quantify performance for a single asset and for a group of assets in a microgrid. Two more metrics are introduced for system-level performance quantification. The next step is a better representation of the large amount of data generated by the microgrid. Visualization is one such form of representation, which is explored in detail, and a graphical user interface (GUI) is developed as a deliverable tool for the operator for informative decision making and planning. Electronic appendices I and II contain data and MATLAB© program codes for analysis and visualization for this work.

Panwar, Mayank

206

Parallelization: Infectious Disease  

NSDL National Science Digital Library

Epidemiology is the study of infectious disease. Infectious diseases are said to be "contagious" among people if they are transmittable from one person to another. Epidemiologists can use models to assist them in predicting the behavior of infectious diseases. This module will develop a simple agent-based infectious disease model, develop a parallel algorithm based on the model, provide a coded implementation for the algorithm, and explore the scaling of the coded implementation on high performance cluster resources.
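A minimal, fully-mixed agent-based SIR step of the kind the module parallelizes might look like the following Python sketch. The parameter names and the uniform-mixing assumption are ours, not the module's; a parallel version would block-partition the agent list across processes.

```python
import random

def sir_step(states, beta, gamma, rng):
    """One synchronous update of a fully-mixed agent-based SIR model.
    `states` holds 'S', 'I' or 'R' per agent."""
    n_inf = states.count('I')
    p_infect = 1.0 - (1.0 - beta) ** n_inf   # P(an S agent is infected)
    new = []
    for s in states:
        if s == 'S' and rng.random() < p_infect:
            new.append('I')                  # susceptible becomes infective
        elif s == 'I' and rng.random() < gamma:
            new.append('R')                  # recovery with probability gamma
        else:
            new.append(s)
    return new

rng = random.Random(42)
pop = ['I'] * 5 + ['S'] * 95
for _ in range(30):
    pop = sir_step(pop, beta=0.01, gamma=0.2, rng=rng)
```

The population size is conserved at every step, which is a cheap invariant to check when validating a parallel port of such a model.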

Weeden, Aaron

207

Scalable Parallel Crash Simulations  

Microsoft Academic Search

We are pleased to submit our efforts in parallelizing the PRONTO application suite for con- sideration in the SuParCup 99 competition. PRONTO is a finite element transient dynamics simulator which includes a smoothed particle hydrodynamics (SPH) capability; it is similar in scope to the well-known DYNA, PamCrash, and ABAQUS codes. Our efforts over the last few years have produced a

Stephen Attaway; Ted Barragy; Kevin Brown; David Gardner; Jeff Gruda; Martin Heinstein; Bruce Hendrickson; Kurt Metzinger; Mike Neilsen; Steve Plimpton; John Pott; Jeff Swegle; Courtenay Vaughan

1999-01-01

208

Xyce parallel electronic simulator.  

SciTech Connect

This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users' Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

2010-05-01

209

Parallel reduced area multipliers  

Microsoft Academic Search

As developed by Wallace and Dadda, a method for high-speed, parallel multiplication is to generate a matrix of partial products and then reduce the partial products to two numbers whose sum is equal to the final product. The resulting two numbers are then summed using a fast carry-propagate adder. This paper presents Reduced Area multipliers, which employ a modified reduction

K'andrea C. Bickerstaff; Michael J. Schulte; Earl E. Swartzlander Jr.

1995-01-01

210

Design for reliability - a Reliability Engineering Framework  

Microsoft Academic Search

More and more, reliability is seen as a key differentiator in an extremely competitive globalized market. Recent examples from the automotive sector illustrate very well how serious the impact from field problems can be, more in particular when safety risks are possibly involved: liability claims, recall actions, negative effect on the market share of a brand and so on. But

J. F. J. M. Caers; X. J. Zhao; J. Mooren; L. Stulens; E. Eggink

2010-01-01

211

Device for balancing parallel strings  

DOEpatents

A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

Mashikian, Matthew S. (Storrs, CT)

1985-01-01

212

Information hiding in parallel programs  

SciTech Connect

A fundamental principle in program design is to isolate difficult or changeable design decisions. Application of this principle to parallel programs requires identification of decisions that are difficult or subject to change, and the development of techniques for hiding these decisions. We experiment with three complex applications, and identify mapping, communication, and scheduling as areas in which decisions are particularly problematic. We develop computational abstractions that hide such decisions, and show that these abstractions can be used to develop elegant solutions to programming problems. In particular, they allow us to encode common structures, such as transforms, reductions, and meshes, as software cells and templates that can be reused in different applications. An important characteristic of these structures is that they do not incorporate mapping, communication, or scheduling decisions: these aspects of the design are specified separately, when composing existing structures to form applications. This separation of concerns allows the same cells and templates to be reused in different contexts.

Foster, I.

1992-01-30

213

An MIMD parallel computer system  

NASA Astrophysics Data System (ADS)

The need to run large, compute-bound programs encountered in research, coupled with the high availability of mini- and microcomputers in the laboratory environment, has prompted the linking of independent processors to form multicomputer systems. Important characteristics of the system presented here are the lack of shared memory between processors and the use of purely standard hardware to effect the linking. The resulting MIMD machine is suitable for executing asynchronous and weakly synchronous parallel programs. This is facilitated by assembly language software support to handle communication and to organise the independent sections of executable code for the individual processors. The design principles involved in this hardware configuration and the attendant software are introduced. A brief description of program execution behaviour is given. Applications and examples of programming problems which have been implemented on the system are discussed. An empirical method for assessing the timewise gain which ensues from use of the system is presented and experimental results obtained are outlined.

Joubert, G. R.; Maeder, A. J.

1982-06-01

214

Operational safety reliability research  

SciTech Connect

Operating reactor events such as the TMI accident and the Salem automatic-trip failures raised the concern that during a plant's operating lifetime the reliability of systems could degrade from the design level that was considered in the licensing process. To address this concern, NRC is sponsoring the Operational Safety Reliability Research project. The objectives of this project are to identify the essential tasks of a reliability program and to evaluate the effectiveness and attributes of such a reliability program applicable to maintaining an acceptable level of safety during the operating lifetime at the plant.

Hall, R.E.; Boccio, J.L.

1986-01-01

215

Reliability Analysis and Modeling of ZigBee Networks  

NASA Astrophysics Data System (ADS)

The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low-rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adapted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important because these services will be stopped if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree and mesh. This paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of these layers. Channel resource usage, device role, network topology and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In star or tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve the reliability problem. For mesh networks, a division technique is applied to overcome the problem because their complexity is higher than that of the other topologies. A mesh network using division technology is classified into several non-reducible series systems and edge-parallel systems. Hence, the reliability of mesh networks is easily solved using series-parallel systems through our proposed scheme. The numerical results demonstrate that reliability increases for mesh networks when the number of edges in parallel systems increases, while reliability drops quickly for all three networks when the number of edges and nodes increases. 
However, lower network reliability will occur due to network complexity, greater resource usage and complex object relationships.
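The series and parallel reliability-block-diagram (RBD) formulas the paper applies can be stated in a few lines. This Python sketch (our own, not the paper's code) also reproduces the qualitative claim that adding parallel edges raises reliability.

```python
from functools import reduce

def series(*rels):
    """Series RBD: the path works only if every block works,
    so reliabilities multiply."""
    return reduce(lambda a, b: a * b, rels, 1.0)

def parallel(*rels):
    """Parallel RBD: the path fails only if every block fails,
    so unreliabilities multiply."""
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), rels, 1.0)

# a two-hop path, then the same link made redundant
two_hop = series(0.9, 0.9)        # 0.81: series lowers reliability
redundant = parallel(0.9, 0.9)    # 0.99: redundancy raises it
```

A star or tree path is a pure `series` chain, while a mesh edge with redundancy is a `parallel` block inside that chain, matching the series-parallel decomposition described above.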

Lin, Cheng-Min

216

Parallel Processing Creates a Low-Cost Growth Path.  

ERIC Educational Resources Information Center

Discusses the advantages of parallel processor computers in terms of expandability, cost, performance, and reliability, and suggests that such computers be used in library automation systems as a cost-effective approach to planning for the growth of information services and computer applications. (CLB)

Shekhel, Alex; Freeman, Eva

1987-01-01

217

Applying parallel data processing to automated test equipment  

SciTech Connect

Parallel processing techniques make possible simultaneous acquisitions of product test data through the use of multiple microprocessors. Dedicating one processor with its own memory, analog to digital converter, programmable level detectors, and event timer to each product test point provides flexibility and avoids contentions for measurement resources. Significant increases in speed and reliability over single-processor testers have been achieved.

Rolfe, E.J.; Boswell, M.D.

1985-10-01

218

Parallel adaptive tetrahedral mesh generation by the advancing front technique  

Microsoft Academic Search

A parallel adaptive tetrahedral mesh generation program using the advancing front method is described. The problem domain is initially defined by a coarse background mesh of tetrahedral elements, which forms the input for finite element analysis and from which adaptive parameters are calculated. Parallel adaptive mesh generation is then carried out by dividing the background mesh into subdomains and refining

J. K. Wilson; B. H. V. Topping

1998-01-01

219

Analysis of parallel mechanism for human hip joint power assist  

Microsoft Academic Search

In this paper, we focus on a 6-DOF parallel mechanism for human hip joint power assist. Unlike conventional power assist systems, the proposed hip joint assisting mechanism is formed as a parallel mechanism. Considering the anatomical mechanism of the human hip joint, the 6-DOF assisting system consists of three serial chains, and each chain is a UPS

Yong Yu; Wenyuan Liang; Yunjian Ge

2010-01-01

220

Parallel scripting for applications at the petascale and beyond.  

SciTech Connect

Scripting accelerates and simplifies the composition of existing codes to form more powerful applications. Parallel scripting extends this technique to allow for the rapid development of highly parallel applications that can run efficiently on platforms ranging from multicore workstations to petascale supercomputers.
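A small fan-out/fan-in composition of existing codes — the pattern parallel scripting automates at scale — can be sketched with Python's standard thread pool. Swift itself uses its own runtime; the task names here are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(seed):
    """Stage 1: one of many independent runs (illustrative workload)."""
    return (seed * 2654435761) % 1000

def analyze(results):
    """Stage 2: runs once all stage-1 outputs are available."""
    return sum(results) / len(results)

# fan-out: the independent tasks run concurrently
with ThreadPoolExecutor(max_workers=8) as pool:
    stage1 = list(pool.map(simulate, range(16)))

# fan-in: the dependent task consumes every stage-1 result
mean = analyze(stage1)
```

The only coupling between stages is that stage-2 input is stage-1 output, which is exactly the loose coupling that lets a scripting runtime schedule the fan-out on anything from a workstation to a supercomputer.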

Wilde, M.; Zhang, Z.; Clifford, B.; Hategan, M.; Iskra, K.; Beckman, P.; Foster, I.; Raicu, I.; Espinosa, A.; Univ. of Chicago

2009-11-01

221

Supporting dynamic parallel object arrays  

Microsoft Academic Search

We present efficient support for generalized arrays of parallel data driven objects. Array elements are regular C++ objects, and are scattered across the parallel machine. An individual element is addressed by its ...

Orion Sky Lawlor; Laxmikant V. Kalé

2003-01-01

222

Resistor Combinations for Parallel Circuits.  

ERIC Educational Resources Information Center

To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
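The tables the article advocates can be generated mechanically from the parallel-resistance formula 1/R = Σ 1/Rᵢ. A short Python sketch (our own, not from the article):

```python
def parallel_resistance(*rs):
    """Equivalent resistance of resistors in parallel: 1/R = sum(1/Ri)."""
    return 1.0 / sum(1.0 / r for r in rs)

def whole_number_pairs(limit):
    """Pairs (r1, r2) with r1 <= r2 <= limit whose parallel combination
    is a whole number -- the kind of table the article describes."""
    pairs = []
    for r1 in range(1, limit + 1):
        for r2 in range(r1, limit + 1):
            total = parallel_resistance(r1, r2)
            if abs(total - round(total)) < 1e-9:   # whole-number total
                pairs.append((r1, r2, int(round(total))))
    return pairs
```

For example, 3 Ω in parallel with 6 Ω gives exactly 2 Ω, one of the whole-number combinations such a table would list.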

McTernan, James P.

1978-01-01

223

Design of a Parallel Language.  

National Technical Information Service (NTIS)

Concurr is a new language for parallel systems. The language is designed as an easy-to-use parallel programming facility. The language also attempts to overcome some 'unnatural' restrictions of previous sequential languages. Clearly, new languages are nee...

J. R. Weisbecker

1988-01-01

224

Parallel Metaheuristics for Workforce Planning  

Microsoft Academic Search

Workforce planning is an important activity that enables organizations to determine the workforce needed for continued success. A workforce planning problem is a very complex task requiring modern techniques to be solved adequately. In this work, we describe the development of three parallel metaheuristic methods, a parallel genetic algorithm, a parallel scatter search, and a parallel hybrid genetic algorithm, which

Enrique Alba; Gabriel Luque; Francisco Luna

2007-01-01

225

Reliability of malfunction tolerance  

Microsoft Academic Search

A generalized algorithm of fault tolerance is presented, using time, structural and information redundancy types. It is shown that the algorithm of fault tolerance might be implemented using hardware and software. It is also shown that for the design of an efficient fault tolerant system, its elements must be malfunction tolerant. The advantage of element malfunction tolerance is proven in reliability terms. Reliability analysis

Igor Schagaev

2008-01-01

226

Reliability of Seismograph Stations  

Microsoft Academic Search

In a recent paper, Dr. H. Jeffreys works out a 'reliability' factor for seismograph stations throughout the world, using information from the International Seismological Summary for 1930 January to 1931 March. Reliability results based on this information do not, however, represent present conditions. Seismology has made rapid headway since 1931, and a number of stations have improved both their recording

R. C. Hayes

1936-01-01

227

Wind energy - how reliable  

Microsoft Academic Search

The reliability of a wind energy system depends on the size of the propeller and the size of the back-up energy storage. Design of the optimum system for a given reliability level can be performed if a time series of wind speed data is available. However, a design based on conventional meteorological records, which sample the wind speed with a

D. J. Sherman

1980-01-01

228

Survey of Network Reliability.  

National Technical Information Service (NTIS)

We present a brief survey of the current state of the art in network reliability. We survey only exact methods; Monte Carlo methods are not surveyed. Most network reliability problems are, in the worst case, NP-hard and are, in a sense, more difficult tha...

A. Agrawal R. E. Barlow

1983-01-01

229

Bonded Retainers - Clinical Reliability  

Microsoft Academic Search

Bonded retainers have become a very important retention appliance in orthodontic treatment. They are popular because they are considered reliable, independent of patient cooperation, highly efficient, easy to fabricate, and almost invisible. Of these traits, reliability is the subject of this clinical study. A total of 549 patients with retainers were analyzed with regard to wearing time, extension of the

Dietmar Segner; Bettina Heinrici

2000-01-01

230

Statistical theory on reliability  

NASA Astrophysics Data System (ADS)

Considerable progress was made by the Principal Investigator, Professor Asit Basu, and his collaborators in the areas of tests for exponentiality, component life length estimation, sequential and influential methods, and Bayesian approaches for repairable systems. All of these results contribute to a better understanding of reliability principles and better techniques for applied reliability practice.

Basu, Asit P.

1991-12-01

231

Reliability, Recursion, and Risk.  

ERIC Educational Resources Information Center

The discrete mathematics topics of trees and computational complexity are implemented in a simple reliability program which illustrates the process advantages of the PASCAL programing language. The discussion focuses on the impact that reliability research can provide in assessment of the risks found in complex technological ventures. (Author/JJK)

Henriksen, Melvin, Ed.; Wagon, Stan, Ed.

1991-01-01

232

Design reliability engineering  

Microsoft Academic Search

Improved design techniques are needed to achieve high reliability at minimum cost. This is especially true of space systems where lifetimes of many years without maintenance are needed and severe mass limitations exist. Reliability must be designed into these systems from the start. Techniques are now being explored to structure a formal design process that will be more complete and

R. Niall; N. M. Hunt; D. Buden

1989-01-01

233

Synchronous Parallel Kinetic Monte Carlo  

SciTech Connect

A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.

Martínez, E; Marian, J; Kalos, M H

2006-12-14

234

Workforce planning with parallel algorithms  

Microsoft Academic Search

Workforce planning is an important activity that enables organizations to determine the workforce needed for continued success. A workforce planning problem is a very complex task that requires modern techniques to be solved adequately. In this work, we describe the development of two parallel metaheuristic methods, a parallel genetic algorithm and a parallel scatter search, which can find high-quality

Enrique Alba; Gabriel Luque; Francisco Luna

2006-01-01

235

Roo: A parallel theorem prover  

SciTech Connect

We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

Lusk, E.L.; McCune, W.W.; Slaney, J.K.

1991-11-01

236

Interprocedural parallelization analysis in SUIF  

Microsoft Academic Search

As shared-memory multiprocessor systems become widely available, there is an increasing need for tools to simplify the task of developing parallel programs. This paper describes one such tool, the automatic parallelization system in the Stanford SUIF compiler. This article represents the culmination of a several-year research effort aimed at making parallelizing compilers significantly more effective. We have developed a system

Mary W. Hall; Saman P. Amarasinghe; Brian R. Murphy; Shih-Wei Liao; Monica S. Lam

2005-01-01

237

Dependency-driven Parallel Programming  

Microsoft Academic Search

The appearance of low-cost highly parallel hardware architectures has raised the alarm that a radically new way of thinking is required in programming to face the continually increasing parallelism of hardware. In our data dependency based framework, we treat data dependencies as first-class entities in programs. Programming a highly parallel machine or chip is formulated as finding

Eva Burrows; Magne Haveraaen

238

Imprecise reliability: An introductory review  

Microsoft Academic Search

The main aim of the paper is to define what imprecise reliability is and what problems can be solved within its framework. From this point of view, various branches of reliability analysis are considered, including analysis of monotone systems, repairable systems, multi-state systems, structural reliability, software reliability, human reliability, and fault tree analysis.

Lev V. Utkin

239

Reliability Engineering : Futility and Error  

Microsoft Academic Search

The conventional definition of reliability (and of reliability engineering as a discipline) suggests that reliability is a quantifiable performance requirement of a product or system. This means that reliability can be specified, designed for, predicted, measured and demonstrated, using applicable mathematical and statistical models. This paper argues against the futile and erroneous quantification of reliability. Many reliability engineering practices are incorrect, misleading,

R. W. A. Barnard

240

Benchmarking massively parallel architectures  

SciTech Connect

The purpose of this paper is to summarize some initial experiences related to measuring the performance of massively parallel processors (MPPs) at Los Alamos National Laboratory (LANL). Actually, the range of MPP architectures the authors have used is rather limited, being confined mostly to the Thinking Machines Corporation (TMC) Connection Machine CM-2 and CM-5. Some very preliminary work has been carried out on the Kendall Square KSR-1, and efforts related to other machines, such as the Intel Paragon and the soon-to-be-released CRAY T3D are planned. This paper will concentrate more on methodology rather than discuss specific architectural strengths and weaknesses; the latter is expected to be the subject of future reports. MPP benchmarking is a field in critical need of structure and definition. As the authors have stated previously, such machines have enormous potential, and there is certainly a dire need for orders-of-magnitude gains in computational power over current supercomputers. However, performance reports for MPPs must emphasize actual sustainable performance from real applications in a careful, responsible manner. Such has not always been the case. A recent paper has described in some detail the problem of potentially misleading performance reporting in the parallel scientific computing field. Thus, in this paper, the authors briefly offer a few general ideas on MPP performance analysis.

Lubeck, O.; Moore, J.; Simmons, M.; Wasserman, H.

1993-07-01

242

Parallel superconvergent multigrid  

SciTech Connect

We describe a class of multiscale algorithms for the solution of large sparse linear systems that are particularly well adapted to massively parallel supercomputers. While standard multigrid algorithms are unable to effectively use all processors when computing on coarse grids, the new algorithms utilize the same number of processors at all times. The basic idea is to solve many coarse scale problems simultaneously, combining the results in an optimal way to provide an improved fine scale solution. As a result, convergence rates are much faster than for standard multigrid methods - we have obtained V-cycle convergence rates as good as .0046 with one smoothing application per cycle, and .0013 with two smoothings. On massively parallel machines the improved convergence rate is attained at no extra computational cost since processors that would otherwise be sitting idle are utilized to provide the better convergence. On serial machines the algorithm is slower because of the extra time spent on multiple coarse scales, though in certain cases the improved convergence rate may justify this - particularly in cases where other methods do not converge. In constant coefficient situations the algorithm is easily analyzed theoretically using Fourier methods on a single grid. The fact that only one grid is involved substantially simplifies convergence proofs. A feature of the algorithms is the use of a matched pair of operators: an approximate inverse for smoothing and a superinterpolation operator to move the correction from coarse to fine scales, chosen to optimize the rate of convergence.

Frederickson, P.O.; McBryan, O.A.

1987-01-01

243

The reliability of multitest regimens with sacroiliac pain provocation tests  

Microsoft Academic Search

Background: Studies concerning the reliability of individual sacroiliac tests have inconsistent results. It has been suggested that the use of a test regimen is a more reliable form of diagnosis than individually performed tests. Objective: To assess the interrater reliability of multitest scores by using a regimen of 5 commonly used sacroiliac pain provocation tests. Methods: Two examiners examined 78

Dirk J. Kokmeyer; Peter van der Wurff; Geert Aufdemkampe; Theresa C. M. Fickenscher

2002-01-01

244

Eddy Current Distribution in Parallel Conductors . Wirbelstromverteilung in Parallelen Leitern.  

National Technical Information Service (NTIS)

A numerical analysis of the electrical properties and distribution of eddy currents in parallel conductors is presented. The conditions existing in conductors in the form of thin plates, wide plates, and circular cylinders are examined. Computer programs ...

M. Ehrich

1970-01-01

245

Automatic generation of synchronization instructions for parallel processors  

SciTech Connect

The development of high speed parallel multi-processors, capable of parallel execution of doacross and forall loops, has stimulated the development of compilers to transform serial FORTRAN programs to parallel forms. One of the duties of such a compiler must be to place synchronization instructions in the parallel version of the program to insure the legal execution order of doacross and forall loops. This thesis gives strategies usable by a compiler to generate these synchronization instructions. It presents algorithms for reducing the parallelism in FORTRAN programs to match a target architecture, recovering some of the parallelism so discarded, and reducing the number of synchronization instructions that must be added to a FORTRAN program, as well as basic strategies for placing synchronization instructions. These algorithms are developed for two synchronization instruction sets. 20 refs., 56 figs.

Midkiff, S.P.

1986-05-01
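The await/advance pattern a compiler inserts for doacross loops can be mimicked with one flag per iteration. Below is an illustrative sketch in Python threads, not the thesis's machine-level synchronization instruction sets; the loop body `a[i] = a[i-1] + 1` and all names are invented for demonstration:

```python
import threading

def doacross(n, workers=4):
    """Run the doacross loop a[i] = a[i-1] + 1 with cyclic scheduling.

    done[i] plays the role of a compiler-inserted synchronization flag:
    iteration i "awaits" done[i-1] before reading a[i-1], then
    "advances" done[i] to release iteration i+1.
    """
    a = [0] * n
    done = [threading.Event() for _ in range(n)]

    def worker(first):
        for i in range(first, n, workers):
            if i > 0:
                done[i - 1].wait()        # await: a[i-1] has been written
            a[i] = a[i - 1] + 1 if i > 0 else 1
            done[i].set()                 # advance: release the successor

    threads = [threading.Thread(target=worker, args=(s,)) for s in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return a
```

Despite running on several threads, the flags force the cross-iteration dependence to be honored, so `doacross(8)` returns `[1, 2, 3, 4, 5, 6, 7, 8]` deterministically.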

246

Improving Parallel I/O Performance with Data Layout Awareness  

SciTech Connect

Parallel applications can benefit greatly from massive computational capability, but their performance suffers from the large latency of I/O accesses. Poor I/O performance has been identified as a critical cause of the low sustained performance of parallel computing systems. In this study, we propose a data layout-aware optimization strategy to promote a better integration of the parallel I/O middleware and parallel file systems, two major components of current parallel I/O systems, and to improve data access performance. We explore the layout-aware optimization in both independent I/O and collective I/O, the two primary forms of I/O in parallel applications. We illustrate that layout-aware I/O optimization can improve the performance of current parallel I/O strategies effectively. The experimental results verify that the proposed strategy improves parallel I/O performance by nearly 40% on average. The proposed layout-aware parallel I/O has promising potential for improving the I/O performance of parallel systems.

Chen, Yong [ORNL]; Sun, Xian-He [Illinois Institute of Technology]; Thakur, Dr. Rajeev [Argonne National Laboratory (ANL)]; Song, Huaiming [Illinois Institute of Technology]; Jin, Hui [Illinois Institute of Technology]

2010-01-01

247

A systolic array parallelizing compiler  

SciTech Connect

This book presents a completely new approach to the problem of systolic array parallelizing compiler. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler which can generate efficient parallel code for complete LINPACK routines. This book begins by analyzing the architectural strength of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and compiler-generated parallel code are given to clarify the overall picture of the compiler. The book concludes that systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

Tseng, P.S. (Bell Communications Research, Inc. (US))

1990-01-01

248

Reliability and validity of Turkish versions of the child, parent and staff cancer fatigue scales.  

PubMed

This study was designed to adapt Turkish versions of scales that evaluate fatigue in children with cancer from the perspectives of the children, parents and staff. The objective was to validate the "Child Fatigue Scale-24 hours" (CFS-24 hours), "Parent Fatigue Scale-24 hours" (PFS-24 hours) and "Staff Fatigue Scale-24 hours" (SFS-24 hours) for use in Turkish clinical research settings. Translation of the scales into Turkish and validity and reliability tests were performed. The validity of the translated scales was assessed with language validity and content validity; their reliability was assessed with internal consistency. The scales were evaluated by calculating the Cronbach alpha coefficient for parallel-form reliability with 52 pediatric cancer patients, 86 parents and 43 nurses. The internal consistency (Cronbach's α) was estimated as 0.88 for the Child Fatigue Scale-24 hours, 0.77 for the Parent Fatigue Scale-24 hours, and 0.72 for the Staff Fatigue Scale-24 hours. The Turkish versions of the three scales were judged reliable and valid instruments for assessing fatigue in children and showed good psychometric properties. These scales should help determine to what extent nursing initiatives can minimize or eliminate fatigue; they are recommended for further studies and for routine use in pediatric oncology clinics, with nursing initiatives planned accordingly. PMID:22994723

Gerçeker, Gülçin Özalp; Yilmaz, Hatice Bal

2012-01-01
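The internal-consistency figures quoted in the record above (0.88, 0.77, 0.72) are Cronbach's α values. A minimal sketch of the standard formula, not the study's own code, with an invented item-score matrix:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns.

    items: one list per item, each holding one score per respondent.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

Three perfectly consistent items give α = 1.0; the 0.72-0.88 range reported above indicates acceptable, but not perfect, consistency.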

249

Toward Parallel Document Clustering  

SciTech Connect

A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a triangle-inequality-obeying distance metric; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program, and initial performance results of an end-to-end document processing workflow are reported.

Mogill, Jace A.; Haglin, David J.

2011-09-01

250

Series and Parallel Circuits  

NSDL National Science Digital Library

Tony R. Kuphaldt is the creator of All About Circuits, a collection of online textbooks about circuits and electricity. The site is split into volumes, chapters, and topics to make finding and learning about these subjects convenient. Volume 1, Chapter 5: Series and Parallel Circuits begins by explaining the basic differences between the two types of circuits. The topics then progress to more difficult subject matter such as conductance and Ohm's law, with a section on building circuits for a more hands-on component. This website would be a great jumping-off point for educators who want to teach circuits or a fantastic supplemental resource for students who want or need to learn more.

Kuphaldt, Tony R.

2008-07-01
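The composition rules that chapter starts from are compact enough to state in code. These are illustrative helper functions of mine, not material from the textbook:

```python
def series(*rs):
    """Equivalent resistance of resistors in series: values simply add (ohms)."""
    return sum(rs)

def parallel(*rs):
    """Equivalent resistance in parallel: reciprocal of the summed reciprocals."""
    return 1.0 / sum(1.0 / r for r in rs)

def current(volts, ohms):
    """Ohm's law, I = V / R, giving current in amperes."""
    return volts / ohms
```

For example, a 9 V source driving a 100 Ω resistor in series with two 100 Ω resistors in parallel sees 150 Ω total and draws 60 mA.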

251

Parallel Imaging Microfluidic Cytometer  

PubMed Central

By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity and, (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30-times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take.

Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

2011-01-01

252

Parallel Detection of Cathodoluminescence.  

NASA Astrophysics Data System (ADS)

Available from UMI in association with The British Library. A GEC P8600 Charge-coupled device has been used in the design and fabrication of a parallel detection system or optical multichannel analyser for the analysis of Cathodoluminescence Spectra. The P8600, whilst designed for video applications, is used as a linear array by merging entire rows of pixels together on the on-board output amplifier. A dual slope integration method of correlated double sampling has been used for noise reduction. An analysis of the performance of this system is given and the achieved noise level of 22 electrons is found to be in good agreement with that theoretically possible. A complete description of the circuits is given together with details of its use with a "Link 860" computer/analyser and a "Philips 400" electron microscope. To demonstrate the system, a study of the cathodoluminescent properties of Cadmium Telluride grown by molecular beam epitaxy has been made. In particular the effect of dislocations, stacking faults and twins on luminescence has been studied. Dislocations are seen to cause a quenching of excitonic emission with no corresponding increase in any other emission. The effect of stacking faults was seen to vary between different samples with an enhancement of long wavelength emission seen in poor quality samples. This supports the premise that the faults are nucleated by surface impurities which are also responsible for the enhanced emission. Some twin defects have been found to cause enhanced excitonic emission. This is compatible with the existence of natural quantum wells at twin faults proposed by other workers. The speed with which the parallel detection system can acquire spectra makes it a valuable tool in the study of beam sensitive materials. To demonstrate this, measurements were made of the decay rates of the weak cathodoluminescence from the organic crystal Coronene. 
These rates were seen to have time constants less than two minutes and such measurements would not have been amenable by conventional methods.

Day, John C. C.

253

Software Reliability Measurement.  

National Technical Information Service (NTIS)

The report contains plans for a complete software reliability measurement program using both manual and automatic data entry. The program is to be run in conjunction with SAMTEC at Vandenberg AFB in an effort to establish measurement and evaluation criter...

J. P. Johnson

1975-01-01

254

Human Reliability Research.  

National Technical Information Service (NTIS)

The research effort focused on two major areas, a survey and analysis of existing failure reporting systems, and the investigation of alternative indirect approaches to determining human performance and quantifying the human reliability contribution to we...

C. Beek; K. Haynam; G. Markisohn

1967-01-01

255

Reliability of photovoltaic modules  

Microsoft Academic Search

In order to assess the reliability of photovoltaic modules, four categories of known array failure and degradation mechanisms are discussed, and target reliability allocations have been developed within each category based on the available technology and the life-cycle-cost requirements of future large-scale terrestrial applications. Cell-level failure mechanisms associated with open-circuiting or short-circuiting of individual solar cells generally arise from cell

R. G. Ross Jr.

1986-01-01

256

Fault-tolerant and efficient parallel computation. Doctoral thesis  

SciTech Connect

Recent advances in computer technology have made parallel machines a reality. Massively parallel systems use many general-purpose, inexpensive processing elements to attain computation speed-ups comparable to or better than those achieved by expensive, specialized machines with a small number of fast processors. In such a setting, however, one would expect to see an increased number of processor failures attributable to hardware or software. This may eliminate the potential advantage of parallel computation. We believe that this presents a reliability bottleneck that is among the fundamental problems in parallel computation. We investigate algorithmic ways of introducing fault-tolerance in multiprocessors under the constraint of preserving efficiency. This research demonstrates how, in certain models of parallel computation, it is possible to combine efficiency and fault-tolerance. We show that in the models we study, it is possible to develop efficient parallel algorithms without concern for fault-tolerance, and then correctly and efficiently execute these algorithms on parallel machines whose processors are subject to arbitrary dynamic fail-stop errors. By ensuring efficient executions for any pattern of failures, efficiency is also maintained when failures are infrequent, or when the expected number of failures is small.

Shvartsman, A.A.

1992-05-01

257

Parallel RF transmission in MRI.  

PubMed

Following the development of parallel imaging, parallel transmission describes the use of multiple RF transmit coils. Parallel transmission can be applied to improve RF excitation, in particular, multidimensional, spatially selective RF excitation. For instance, parallel transmission is able to shorten spatially selective RF pulses in two or three dimensions, or to minimize the occurring SAR. One potential major application might be the compensation of patient-induced B(1) inhomogeneities, particularly at high main fields. This paper provides an overview of selected aspects of this new transmission approach. The basic principles of parallel transmission are discussed, initial experimental proofs are described, and the impact of error propagation on coil design for parallel transmission is outlined. PMID:16705630

Katscher, Ulrich; Börnert, Peter

2006-05-01

258

A Survey of Parallel Sorting Algorithms.  

National Technical Information Service (NTIS)

A rather comprehensive survey of parallel sorting algorithms is included herein. Parallel sorting algorithms are considered in two major categories - the internal parallel sorting algorithms and the external parallel sorting algorithms. Because external s...

D. J. DeWitt; D. Friedland; D. K. Hsiao; M. J. Menon

1981-01-01

259

Parallel Monte Carlo reactor neutronics  

SciTech Connect

The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved.

Blomquist, R.N.; Brown, F.B.

1994-03-01

260

Parallel Search On Video Cards  

Microsoft Academic Search

Recent approaches exploiting the massively parallel architecture of graphics processors (GPUs) to accelerate database operations have achieved intriguing results. While parallel sorting received significant attention, parallel search has not been explored. With p-ary search we present a novel parallel search algorithm for large-scale database index operations that scales with the number of processors and outperforms traditional thread-level

Tim Kaldewey; Jeff Hagen; Eric Sedlar
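The p-ary idea is to split the active range into p segments and probe all p boundaries at once, so the range shrinks by a factor of p per round instead of 2. A hedged serial simulation follows; the function name and segmenting details are my own, and the paper executes the probes concurrently on GPU threads rather than in a loop:

```python
def p_ary_search(arr, key, p=4):
    """Locate key in the sorted list arr, narrowing by a factor of p per round.

    Returns the index of key, or -1 if it is absent.
    """
    lo, hi = 0, len(arr)
    while hi - lo > 1:
        step = (hi - lo + p - 1) // p           # segment width, rounded up
        bounds = list(range(lo, hi, step)) + [hi]
        for b0, b1 in zip(bounds, bounds[1:]):  # the p probes, done at once on a GPU
            if arr[b0] <= key and (b1 == hi or key < arr[b1]):
                lo, hi = b0, b1                 # keep the one viable segment
                break
        else:
            return -1                           # key smaller than arr[lo]
    return lo if lo < len(arr) and arr[lo] == key else -1
```

Each round costs one synchronized set of p comparisons, giving roughly log_p(n) rounds instead of log_2(n).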

261

Cost-Effective Parallel Computing  

Microsoft Academic Search

Many academic papers imply that parallel computing is only worthwhile when applications achieve nearly linear speedup (i.e., execute nearly p times faster on p processors). This note shows that parallel computing is cost-effective whenever speedup exceeds costup, the parallel system cost divided by uniprocessor cost. Furthermore, when applications have large memory requirements (e.g., 512 megabytes), the costup, and hence the speedup necessary to

David A. Wood; Mark D. Hill

1995-01-01
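Wood and Hill's criterion takes only a few lines to check. The machine prices below are invented for illustration, not figures from the note:

```python
def costup(parallel_cost, uniprocessor_cost):
    """Cost of the parallel system relative to a uniprocessor."""
    return parallel_cost / uniprocessor_cost

def cost_effective(speedup, parallel_cost, uniprocessor_cost):
    """Parallel computing pays off whenever speedup exceeds costup."""
    return speedup > costup(parallel_cost, uniprocessor_cost)
```

If memory dominates system cost, a 16-processor machine might cost only 4x a uniprocessor; a decidedly sublinear speedup of 6 is then already cost-effective.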

262

Reliability of MEMS  

NASA Astrophysics Data System (ADS)

MEMS reliability, especially the study of the reliability of their physical characteristics, is an area that is still in its infancy [1]. However, reliable MEMS already exist: hundreds of millions of MEMS devices are produced, and some are even intended for use in safety-critical applications. The wide variety of materials and physical principles used makes it difficult to make general statements about MEMS reliability, and in several cases reliability has not even been studied, confirmed or modeled. Consequently, the lack of long-term reliable devices reduces their level of acceptance considerably. The aging of MEMS is always connected with the occurrence of defects and their mobility; the creation rate and the mobility of the defects are precursors of the aging behavior, and the mobility of defects is enhanced by greater stress gradients. Both the stress gradient and the defects can be readily determined by means of high-resolution X-ray diffraction techniques (HRXRD). The idea is to connect mechanical stress, thermal load and even radiation damage, which lead to the corresponding signal drift of MEMS devices, with structural properties such as defect density and mobility. HRXRD techniques such as the rocking curve (RC) and reciprocal space maps (RSM) are well suited to detect the features leading to the drift of MEMS devices, and are therefore very powerful tools for studying aging through the determination of the stresses and defects in the devices. We are convinced that these advanced state-of-the-art X-ray methods will serve as a useful tool for establishing a fundamental understanding of the reliability and aging problems of MEMS.

Dommann, Alex; Neels, Antonia

2011-02-01

263

HPC Infrastructure for Solid Earth Simulation on Parallel Computers  

NASA Astrophysics Data System (ADS)

Recently, various types of parallel computers with various architectures and processing elements (PE) have emerged, including PC clusters and the Earth Simulator. Moreover, users can easily access these computer resources through the network in a Grid environment. It is well known that thorough tuning is required for programmers to achieve excellent performance on each computer, and the tuning method depends strongly on the type of PE and architecture. Optimization by tuning is very tough work, especially for developers of applications, and parallel programming using a message-passing library such as MPI is another big task for application programmers. In the GeoFEM project (http://gefeom.tokyo.rist.or.jp), the authors have developed a parallel FEM platform for solid earth simulation on the Earth Simulator, which supports parallel I/O, parallel linear solvers and parallel visualization. This platform efficiently hides complicated procedures for parallel programming and optimization on vector processors from application programmers. This type of infrastructure is very useful: source code developed on a single-processor PC is easily optimized on a massively parallel computer by linking it to the parallel platform installed on the target computer. This parallel platform, called HPC Infrastructure, will provide dramatic efficiency, portability and reliability in the development of scientific simulation codes. For example, the source code is expected to be less than 10,000 lines, and porting legacy codes to a parallel computer takes 2 or 3 weeks. The original GeoFEM platform supports only I/O, linear solvers and visualization; in the present work, further development for adaptive mesh refinement (AMR) and dynamic load balancing (DLB) has been carried out. In this presentation, examples of large-scale solid earth simulation using the Earth Simulator will be demonstrated.
Moreover, recent results of a parallel computational steering tool using an MxN communication model will be shown. In an MxN communication model, the large-scale computation modules run on M PE's while high-performance parallel visualization modules run concurrently on N PE's, which allows computation and visualization to each select a suitable parallel hardware environment. Meanwhile, real-time steering can be achieved during computation, so that users can check and adjust the computation process in real time. Furthermore, different numbers of PE's can be chosen to achieve a better configuration between computation and visualization in a Grid environment.

Nakajima, K.; Chen, L.; Okuda, H.

2004-12-01

264

1/f noise as a reliability estimation for solar panels  

Microsoft Academic Search

The purpose of this work is to study the 1/f noise of a forward-biased dark solar cell as a nondestructive reliability estimation of solar panels. It is shown that one cell with a given defect can be detected in a solar panel by low-frequency noise measurements in darkness. One real solar panel of 5 cells in parallel

R. Alabedra; B. Orsal

1984-01-01

265

Reliability (and Fault Tree) Analysis Using Expert Opinions  

Microsoft Academic Search

In this article we introduce a formal procedure for the use of expert opinions in reliability (and fault tree) analysis. We consider the case of multicomponent parallel redundant systems for which there could be a single expert or a group of experts giving us opinions about each component. Inherent in our approach are a procedure for reflecting our judgment of

Dennis V. Lindley; Nozer D. Singpurwalla

1986-01-01
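The arithmetic behind multicomponent series and parallel redundant systems, to which the expert opinions in this record apply, can be sketched as follows. This is textbook reliability algebra, not code from the article, and the function names are illustrative.

```python
from functools import reduce

def series_reliability(rels):
    """Series system: every component must work, so reliabilities multiply."""
    return reduce(lambda acc, r: acc * r, rels, 1.0)

def parallel_reliability(rels):
    """Parallel redundant system: it fails only if every component fails."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), rels, 1.0)
```

For example, two 0.9-reliable components give 0.81 in series but 0.99 in parallel; redundancy of this kind is what component-level expert opinions ultimately feed into.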

266

A case study of the split-half reliability coefficient  

Microsoft Academic Search

Different values for split-half reliability will be found for a single test if the items comprising the contrasted halves of the test are selected in different ways. The author presents evidence based on 4 arbitrary splits, such as the odd-even item split, on 30 random splits, and on 14 parallel splits in which the division was determined by item analysis

L. J. Cronbach

1946-01-01
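An odd-even split-half coefficient with the usual Spearman-Brown correction, one of the arbitrary splits compared in this record, can be computed as in the following sketch (the helper names are mine, chosen for illustration):

```python
def pearson(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """Odd-even split-half reliability with the Spearman-Brown correction.

    `item_scores` is a list of per-examinee item-score lists.
    """
    odd = [sum(items[0::2]) for items in item_scores]
    even = [sum(items[1::2]) for items in item_scores]
    r = pearson(odd, even)
    return 2 * r / (1 + r)
```

Different splits of the same items generally yield different half-test correlations, which is exactly the variability the study documents.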

267

Reliability of the natural remanent magnetization recorded in Chinese loess  

Microsoft Academic Search

Chinese loess-paleosol sequences undoubtedly have recorded geomagnetic events (both polarity reversals and excursions). However, the fidelity of the rapid paleomagnetic field oscillations during a polarity reversal remains uncertain. To test the reliability and consistency of the natural remanent magnetization records in Chinese loess, 10 subsets of parallel samples across the Matuyama-Brunhes (MB) reversal boundary were obtained from the Luochuan region

Chunsheng Jin; Qingsong Liu

2010-01-01

268

Towards quantitative software reliability assessment in incremental development processes  

Microsoft Academic Search

The iterative and incremental development process is becoming a major development process model in industry, and allows a good deal of parallelism between development and testing. In this paper we develop a quantitative software reliability assessment method for incremental development processes, based on the familiar non-homogeneous Poisson processes. More specifically, we utilize the software metrics observed in each incremental

Toshiya Fujii; Tadashi Dohi; Takaji Fujiwara

2011-01-01

269

Parallel data processor  

US Patent & Trademark Office Database

A parallel processor has a controller for generating control signals, and a plurality of identical processing cells, each of which is connected to at least one neighboring cell and responsive to the controller for processing data in accordance with the control signals. Each processing cell includes a memory, a first register, a second register, and an arithmetic logic unit (ALU). An input of the first register is coupled to a memory output. The output of the first register is coupled to a second register located in a neighboring cell. An input of the second register is coupled to receive an output from a first register located in a neighboring cell. The output of the second register is coupled to an input of the ALU. In another feature, mask logic is interposed between A and B operand sources, and two inputs of the ALU. The mask logic also inputs a mask source, and in response to control signals, can output the A operand logically OR'ed with the mask, and can output the B operand logically AND'ed with the mask. In another feature, each cell includes a multiplexor coupled to a neighboring cell for selectively transmitting cell data to the neighbor, or for effectively bypassing the cell during data shift operations by transmitting data that is received from a neighboring cell to a neighboring cell. Other enhancements to a cell architecture are also disclosed.

2000-06-06
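The mask logic described in this patent record (A operand OR'ed with the mask, B operand AND'ed with it) is easy to model bitwise. The following sketch illustrates only that one feature, not the patented cell architecture:

```python
def masked_operands(a, b, mask):
    """Model of the described mask logic: the A operand leaves the mask
    stage OR'ed with the mask, the B operand AND'ed with it."""
    return a | mask, b & mask
```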

270

Quantifying reliability uncertainty : a proof of concept.  

SciTech Connect

This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.

Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna (Los Alamos National Laboratory, Los Alamos, NM); Lorio, John F.; Fatherley, Quinn (Los Alamos National Laboratory, Los Alamos, NM); Anderson-Cook, Christine (Los Alamos National Laboratory, Los Alamos, NM); Wilson, Alyson G. (Los Alamos National Laboratory, Los Alamos, NM); Zurn, Rena M.

2009-10-01

271

High-Level Abstract Parallel Programming Platform: Application to GIS Image Decomposition  

Microsoft Academic Search

In this paper, we designed and implemented a high-level abstract parallel programming platform that relieves the programmer of all the hassle involved in parallel programming. That is, the programmer is asked only to specify the program in a suitable form that hides many of the hardware features. All the parallel process control tasks, which were very challenging, are

Salim Ghanemi

2008-01-01

272

Parallel Adaptive Mesh Refinement  

SciTech Connect

As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35].
Note the ability of both meshing methods to resolve simulation details by varying the local grid spacing.

Diachin, L; Hornung, R; Plassmann, P; WIssink, A

2005-03-04

273

Distributed computing with parallel networking  

Microsoft Academic Search

For many large scientific applications, computing on a cluster is a viable, economical alternative to a dedicated parallel machine. Application performance on a cluster is largely determined by the speed of the underlying communication network. The authors use a parallel network approach to improve the communication network performance. More specifically, they use multiple networks based on Ethernet to improve the

K. Maly; M. Zubair; S. Kelbar

1993-01-01

274

PARALLEL METAHEURISTICS FOR COMBINATORIAL OPTIMIZATION  

Microsoft Academic Search

In this paper, we review parallel metaheuristics for approximating the global optimal solution of combinatorial optimization problems. Recent developments on the parallel implementation of genetic algorithms, simulated annealing, tabu search, variable neighborhood search, and greedy randomized adaptive search procedures (GRASP) are discussed.

MAURICIO G. C. RESENDE; PANOS M. PARDALOS; SANDRA DUNI

1999-01-01

275

Parallel Programming with Interacting Processes  

Microsoft Academic Search

In this paper, we argue that interacting processes (IP) with multiparty interactions are an ideal model for parallel programming. The IP model with multiparty interactions was originally proposed by N. Francez and I. R. Forman [1] for distributed programming of reactive applications. We analyze the IP model and provide new insights into it from the parallel programming perspective. We

Peiyi Tang; Yoichi Muraoka

1999-01-01

276

Analytical Modeling of Pipeline Parallelism  

Microsoft Academic Search

Parallel programming is a requirement in the multi-core era. One of the most promising techniques to make parallel programming available to general users is the use of parallel programming patterns. Functional pipeline parallelism is a pattern that is well suited for many emerging applications, such as streaming and

Angeles G. Navarro; Rafael Asenjo; Siham Tabik; Calin Cascaval

2009-01-01

277

Supporting dynamic parallel object arrays  

Microsoft Academic Search

We present efficient support for generalized arrays of parallel data driven objects. The “array elements” are scattered across a parallel machine. Each array element is an object that can be thought of as a virtual processor. The individual elements are addressed by their “index”, which can be an arbitrary object rather than a simple integer. For example, it can be

Orion Sky Lawlor; Laxmikant V. Kalé

2001-01-01

278

Parallel Programming for Computer Vision  

Microsoft Academic Search

Two Unix environments developed for programming parallel computers to handle image-processing and vision applications are described. Visx is a portable environment for the development of vision applications that has been used for many years on serial computers in research. Visx was adapted to run on a multiprocessor with modest parallelism by using functional decomposition and standard operating-system capabilities to exploit

Anthony P. Reeves

1991-01-01

279

Parallel Programming Using Skeleton Functions  

Microsoft Academic Search

Programming parallel machines is notoriously difficult. Factors contributing to this difficulty include the complexity of concurrency, the effect of resource allocation on performance and the current diversity of parallel machine models. The net result is that effective portability, which depends crucially on the predictability of performance, has been lost. Functional programming languages have been put forward as solutions

John Darlington; A. J. Field; Peter G. Harrison; Paul H. J. Kelly; David W. N. Sharp; Qian Wu; R. While

1993-01-01
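In skeleton-based parallel programming, common coordination patterns are captured once as higher-order functions that a compiler or runtime can map onto a machine. A minimal sketch of two classic skeletons, written in Python rather than the functional languages the paper targets (names are illustrative):

```python
def farm(worker, inputs):
    """'Farm' skeleton: independent applications of worker to each input.
    A real implementation would distribute the calls across processors."""
    return [worker(x) for x in inputs]

def pipeline(*stages):
    """'Pipeline' skeleton: compose stages so each could run on its own
    processor, with data streaming between them."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run
```

Because the coordination structure is fixed by the skeleton, its performance can be modeled in advance, which is the predictability argument the abstract makes.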

280

Another view on parallel speedup  

Microsoft Academic Search

In this paper three models of parallel speedup are studied: fixed-size speedup, fixed-time speedup, and memory-bounded speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as its special cases. This study proposes a new metric for performance evaluation and leads to a better understanding of parallel processing.

Xian-He Sun; Lionel M. Ni

1990-01-01
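For reference, the two special cases mentioned, Amdahl's law and Gustafson's scaled speedup, can be written directly (p is the parallelizable fraction, n the number of processors):

```python
def amdahl_speedup(p, n):
    """Fixed-size speedup: the serial fraction (1 - p) bounds the gain."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Scaled speedup: the problem grows with n, so speedup stays near-linear."""
    return (1.0 - p) + p * n
```

With p = 0.5, Amdahl's formula caps speedup at 2 no matter how large n gets, while Gustafson's keeps growing with n; memory-bounded models interpolate between these two regimes.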

281

Scattering points in parallel coordinates.  

PubMed

In this paper, we present a novel parallel coordinates design integrated with points (Scattering Points in Parallel Coordinates, SPPC), by taking advantage of both parallel coordinates and scatterplots. Different from most multiple views visualization frameworks involving parallel coordinates where each visualization type occupies an individual window, we convert two selected neighboring coordinate axes into a scatterplot directly. Multidimensional scaling is adopted to allow converting multiple axes into a single subplot. The transition between two visual types is designed in a seamless way. In our work, a series of interaction tools has been developed. Uniform brushing functionality is implemented to allow the user to perform data selection on both points and parallel coordinate polylines without explicitly switching tools. A GPU accelerated Dimensional Incremental Multidimensional Scaling (DIMDS) has been developed to significantly improve the system performance. Our case study shows that our scheme is more efficient than traditional multi-view methods in performing visual analysis tasks. PMID:19834165

Yuan, Xiaoru; Guo, Peihong; Xiao, He; Zhou, Hong; Qu, Huamin

282

Parallel contingency statistics with Titan.  

SciTech Connect

This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting the optimal parallel speed-up that the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.

Thompson, David C.; Pebay, Philippe Pierre

2009-09-01
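The scalability issue the report describes can be seen in a toy map-reduce sketch: each process counts its own (row, column) pairs, and the merge cost grows with the number of distinct pairs rather than staying a fixed-size set of moments. This illustrates the phenomenon only; it is not Titan's C++ API.

```python
from collections import Counter

def local_contingency(pairs):
    """Per-process contingency counts over (row, column) category pairs."""
    return Counter(pairs)

def merge_contingency(tables):
    """Reduction step: merging moves one entry per distinct category pair,
    so communication grows with table size, unlike moment-based engines
    that exchange a constant number of aggregates."""
    total = Counter()
    for table in tables:
        total.update(table)
    return total
```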

283

Reliability of Sleep Measures.  

National Technical Information Service (NTIS)

The reliability of sleep measures was calculated over two nights (and within the nights) for twenty young adult males. Percent time in stages 1, 2, 3, and 4, percent movement time, number of movements, and number of stage changes were significantly correl...

J. Moses A. Lubin P. Naitoh L. C. Johnson

1972-01-01

284

Methods for reliable teleportation  

Microsoft Academic Search

Recent experimental results and proposals towards implementation of quantum teleportation are discussed. It is proved that reliable (theoretically, 100% probability of success) teleportation cannot be achieved using the methods applied in recent experiments, i.e., without quantum systems interacting with one another. Teleportation proposals involving atoms and electromagnetic cavities are reviewed and the most feasible methods are described. In particular, the

Lev Vaidman; Nadav Yoran

1999-01-01

285

Designing reliability into accelerators  

SciTech Connect

For the next generation of high performance, high average luminosity colliders, the "factories," reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: Concept; design; motivation; management techniques; and fault diagnosis.

Hutton, A.

1992-08-01

286

Designing reliability into accelerators  

SciTech Connect

For the next generation of high performance, high average luminosity colliders, the "factories," reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: Concept; design; motivation; management techniques; and fault diagnosis.

Hutton, A.

1992-08-01

287

Flash memory reliability  

Microsoft Academic Search

With reference to the mainstream technology, the most relevant failure mechanisms which affect yield and reliability of Flash memory are reviewed, showing the primary role played by tunnel oxide defects. The effectiveness of a good test methodology combined with a proper product design for screening at wafer sort latent defects of tunnel oxide is highlighted as a key factor for

P. Cappelletti; A. Modelli

1998-01-01

288

Grid reliability management tools  

SciTech Connect

To summarize, Consortium for Electric Reliability Technology Solutions (CERTS) is engaged in a multi-year program of public interest R&D to develop and prototype software tools that will enhance system reliability during the transition to competitive markets. The core philosophy embedded in the design of these tools is the recognition that in the future reliability will be provided through market operations, not the decisions of central planners. Embracing this philosophy calls for tools that: (1) Recognize that the game has moved from modeling machine and engineering analysis to simulating markets to understand the impacts on reliability (and vice versa); (2) Provide real-time data and support information transparency toward enhancing the ability of operators and market participants to quickly grasp, analyze, and act effectively on information; (3) Allow operators, in particular, to measure, monitor, assess, and predict both system performance as well as the performance of market participants; and (4) Allow rapid incorporation of the latest sensing, data communication, computing, visualization, and algorithmic techniques and technologies.

Eto, J.; Martinez, C.; Dyer, J.; Budhraja, V.

2000-10-01

289

Fiber optics reliability  

Microsoft Academic Search

This book contains the proceedings of a conference of the International Society for Optical Engineering. The topics covered include: ionizing radiation dosimetry applications; single-wavelength lasers for coherent transmission; and reliability of InGaAsP/InP light-emitting diodes.

D. K. Paul; R. A. Greenwell; S. G. Wadekar

1988-01-01

290

Reliability in engineering design  

Microsoft Academic Search

In the design of any system, the design variables and parameters are probabilistic in nature. Thus, it is obvious that the factors that determine the stress and strength of the components are also probabilistic. This means that when the reliability aspects of design are evaluated, the probabilistic nature of the variables and parameters for a system must be considered. The

K. C. Kapur; L. R. Lamberson

1977-01-01

291

Applications of Parallel Platforms and Models in Evolutionary Multi-Objective Optimization  

Microsoft Academic Search

This chapter presents a review of modern parallel platforms and the way in which they can be exploited to implement parallel multi-objective evolutionary algorithms. Regarding parallel platforms, a special emphasis is given to global metacomputing, which is an emerging form of parallel computing with promising applications in evolutionary (both multi- and single-objective) optimization. In addition, we present the well-known models

Antonio Lopez Jaimes; Carlos A. Coello Coello

292

Reliability Degradation Due to Stockpile Aging  

SciTech Connect

The objective of this research is the investigation of alternative methods for characterizing the reliability of systems with time-dependent failure modes associated with stockpile aging. Reference to 'reliability degradation' has, unfortunately, come to be associated with all types of aging analyses: both deterministic and stochastic. In this research, in keeping with the true theoretical definition, reliability is defined as a probabilistic description of system performance as a function of time. Traditional reliability methods used to characterize stockpile reliability depend on the collection of a large number of samples or observations. Clearly, after the experiments have been performed and the data have been collected, critical performance problems can be identified. A major goal of this research is to identify existing methods and/or develop new mathematical techniques and computer analysis tools to anticipate stockpile problems before they become critical issues. One of the most popular methods for characterizing the reliability of components, particularly electronic components, assumes that failures occur in a completely random fashion, i.e. uniformly across time. This method is based primarily on the use of constant failure rates for the various elements that constitute the weapon system, i.e. the systems do not degrade while in storage. Experience has shown that predictions based upon this approach should be regarded with great skepticism, since the relationship between the predicted life and the observed life has been difficult to validate. In addition to this fundamental problem, the approach does not recognize that there are time-dependent material properties and variations associated with the manufacturing process and the operational environment. To appreciate the uncertainties in predicting system reliability, a number of alternative methods are explored in this report.
All of the methods are very different from those currently used to assess stockpile reliability, but have been used extensively in various forms outside Sandia National Laboratories. It is hoped that this report will encourage the use of 'nontraditional' reliability and uncertainty techniques in gaining insight into stockpile reliability issues.

Robinson, David G.

1999-04-01
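The contrast the report draws, constant failure rates versus time-dependent (aging) failure modes, is the contrast between an exponential lifetime model and, for example, a Weibull one. A minimal sketch (the Weibull choice is mine, as one common way to represent aging):

```python
import math

def exp_reliability(t, lam):
    """Constant failure rate: memoryless, so the system never 'ages'."""
    return math.exp(-lam * t)

def weibull_reliability(t, scale, shape):
    """Weibull model: shape > 1 gives an increasing hazard rate, i.e. aging."""
    return math.exp(-((t / scale) ** shape))
```

With shape = 1 the Weibull model reduces exactly to the constant-failure-rate case; with shape > 1 reliability falls off faster late in life, which the constant-rate assumption cannot capture.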

293

Parallel fabrication of polymer-protected nanogaps  

NASA Astrophysics Data System (ADS)

A method to create an array of sub-5 nm nanogaps with self-aligned holes in a protective polymer overlayer is presented. The parallel formation of the nanogaps, intended for electrical sensing of biomolecules in an aqueous environment, is achieved by electromigration using a simple voltage ramp across parallel-connected electrode patterns with individual constrictions. It was observed that the nanogap always formed on the cathode side of a bowtie electrode, with corresponding hillocks on the anode side, with the distance of the gap/hillock formation from the constriction depending on the ambient temperature. This technique provides a practical means to fabricate a series of polymer-protected nanogaps with considerably higher efficiency than afforded by the normally slow serial process of electromigration.

Zhang, H.; Thompson, C. V.; Stellacci, F.; Thong, J. T. L.

2010-09-01

294

Parallel fabrication of polymer-protected nanogaps.  

PubMed

A method to create an array of sub-5 nm nanogaps with self-aligned holes in a protective polymer overlayer is presented. The parallel formation of the nanogaps, intended for electrical sensing of biomolecules in an aqueous environment, is achieved by electromigration using a simple voltage ramp across parallel-connected electrode patterns with individual constrictions. It was observed that the nanogap always formed on the cathode side of a bowtie electrode, with corresponding hillocks on the anode side, with the distance of the gap/hillock formation from the constriction depending on the ambient temperature. This technique provides a practical means to fabricate a series of polymer-protected nanogaps with considerably higher efficiency than afforded by the normally slow serial process of electromigration. PMID:20739741

Zhang, H; Thompson, C V; Stellacci, F; Thong, J T L

2010-08-26

295

Parallel simulation of digital LSI circuits  

NASA Astrophysics Data System (ADS)

Integrated circuit technology has been advancing at a phenomenal rate over the last several years, and promises to continue to do so. If circuit design is to keep pace with fabrication technology, radically new approaches to computer-aided design will be necessary. One appealing approach is general-purpose parallel processing. This thesis explores the issues involved in developing a framework for circuit simulation which exploits the locality exhibited by circuit operation to achieve a high degree of parallelism. This framework maps the topology of the circuit onto the multiprocessor, assigning the simulation of individual partitions to separate processors. A new form of synchronization is developed, based upon a history maintenance and roll-back strategy. The circuit simulator PRSIM was designed and implemented to determine the efficacy of this approach. The results of several preliminary experiments are reported, along with an analysis of the behavior of PRSIM.

Arnold, J. M.

1985-02-01

296

Reliability Analysis in Distributed Systems  

Microsoft Academic Search

Reliability of a distributed processing system is an important design parameter that can be described in terms of the reliability of processing elements and communication links and also of the redundancy of programs and data files. The traditional terminal-pair reliability does not capture the redundancy of programs and files in a distributed system. Two reliability measures are introduced: distributed program

Cauligi S. Raghavendra; Viktor K. Prasanna; Salim Hariri

1988-01-01

297

Reliability issues at the LHC  

Microsoft Academic Search

The lectures on reliability issues at the LHC will focus on five main modules over five days. Module 1, Basic Elements in Reliability Engineering: basic terms, definitions and methods, from components up to the system and the plant, common-cause failures and human-factor issues. Module 2, Interrelations of Reliability & Safety (R&S): reliability and the risk-informed approach,

P Kafka; James D Gillies

2002-01-01

298

Nuclear weapon reliability evaluation methodology  

Microsoft Academic Search

This document provides an overview of those activities that are normally performed by Sandia National Laboratories to provide nuclear weapon reliability evaluations for the Department of Energy. These reliability evaluations are first provided as a prediction of the attainable stockpile reliability of a proposed weapon design. Stockpile reliability assessments are provided for each weapon type as the weapon is fielded

1993-01-01

299

Software Reliability Engineering: A Roadmap  

Microsoft Academic Search

Software reliability engineering is focused on engineering techniques for developing and maintaining software systems whose reliability can be quantitatively evaluated. In order to estimate as well as to predict the reliability of software systems, failure data need to be properly measured by various means during software development and operational phases. Moreover, credible software reliability models are required to track underlying

Michael R. Lyu

2007-01-01

300

Parallel-Stranded DNA with Natural Base Sequences  

Microsoft Academic Search

Noncanonical parallel-stranded DNA double helices (ps-DNA) of natural nucleotide sequences are usually less stable than the canonical antiparallel-stranded DNA structures, which ensures reliable cell functioning. However, recent data indicate a possible role of ps-DNA in DNA loops or in regions of trinucleotide repeats connected with neurodegenerative diseases. The review surveys recent studies on the effect of nucleotide sequence on preference

A. K. Shchyolkina; O. F. Borisova; M. A. Livshits; T. M. Jovin

2003-01-01

301

Parallel incremental compilation. Doctoral thesis  

SciTech Connect

The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow for some types of program changes. We describe a set of techniques that enable incremental compilation to exploit fine-grained concurrency in a shared-memory multi-processor and achieve asymptotic improvement over sequential algorithms. Because parallel non-incremental compilation is a special case of parallel incremental compilation, the design of a parallel compiler is a corollary of our result. Instead of running the individual phases concurrently, our design specifies compiler phases that are mutually sequential. However, each phase is designed to exploit fine-grained parallelism. By allowing each phase to present its output as a complete structure rather than as a stream of data, we can apply techniques such as parallel prefix and parallel divide-and-conquer, and we can construct applicative data structures to achieve sublinear execution time. Parallel algorithms for each phase of a compiler are presented to demonstrate that a complete incremental compiler can achieve execution time that is asymptotically less than sequential algorithms.

Gafter, N.M.

1990-06-01
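One of the techniques named in this record, parallel prefix, computes all running "sums" of an associative operation in O(log n) parallel steps. A serial emulation of the recursive-doubling (Hillis-Steele) scan, for illustration only:

```python
def parallel_prefix(xs, op):
    """Inclusive scan by recursive doubling. Each pass combines elements a
    power-of-two stride apart; within a pass the updates are mutually
    independent, so on a parallel machine they could run concurrently."""
    result = list(xs)
    step = 1
    while step < len(result):
        result = [
            result[i] if i < step else op(result[i - step], result[i])
            for i in range(len(result))
        ]
        step *= 2
    return result
```

Any associative `op` works, which is what lets compiler phases such as scoping and numbering be expressed as scans over complete structures rather than serial passes.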

302

Acceleration of Solar Wind by Parallel Electric Fields  

NASA Astrophysics Data System (ADS)

A unique feature of the SW ions (H+ and He++) is that they appear as beams in the velocity space. We have investigated the possibility that electric fields parallel to the magnetic field direction could accelerate the solar coronal particles producing these beams. The current paradigm is that SW particles travel together with the same mean velocity. This restriction applies only to beams flowing perpendicular to B. Beams flowing parallel to B can have any speed. We have examined Cluster SW data and explain with a simple potential drop model how beams form as the solar thermal plasma is accelerated by a parallel electric field.

Parks, G.; McCarthy, M.; Lee, E.; Dandouras, I.; Reme, H.; Kistler, L.

2012-04-01

303

Reliable Group Communication in Distributed Systems  

Microsoft Academic Search

The design and implementation of a reliable group communication mechanism is presented. The mechanism guarantees a form of atomicity in that the messages are received by all operational members of the group or by none of them. Since the overhead in enforcing the order of messages is nontrivial, the mechanism provides two types of message transmission: one guarantees delivery of

S. Navaratnam; Samuel T. Chanson; Gerald W. Neufeld

1988-01-01

304

Parallel multiarea state estimation. Final report  

SciTech Connect

In this project new methods are developed to significantly speed up the computational process of power-system static state estimation and of the accompanying detection-identification. For this purpose, a power system is geographically and/or electrically decomposed into several subsystems. Raw measurements made within each subsystem are transmitted to its local center. Then, under the direction of the central computer at the control center of the entire system, local computers at local centers, operating simultaneously in a satellite configuration, process their local measurements and compute estimates for states in their respective portions of the system. Approximate Newton's methods are developed for minimizing the weighted estimation error. For the purpose of reducing computation time, the methods are designed to exploit parallel operation of local computers and to take advantage of the random sparse structure found in power network equations by employing sparse matrix techniques in parallel. Furthermore, they are designed to keep data transmission between computers to a minimum. These methods, if implemented, are expected to lead to major improvements in the response time, system size manageability and reliability of state estimation. They will significantly shorten the time interval between telemeasurement and report of state estimate (as compared with the conventional centralized single-area approach). They will allow the estimator to monitor large power systems without excessive degradation in response time. They will make it possible to estimate most of the states in the entire system even when malfunctions occur in small isolated areas.

Mukai, H.

1982-01-01

305

Optimal Kinematic Design of a 2DOF Planar Parallel Manipulator  

Microsoft Academic Search

Closed-form solutions were developed to optimize the kinematic design of a 2-degree-of-freedom (2-DOF) planar parallel manipulator. The optimum design based on the workspace was presented. Meanwhile, a global, comprehensive conditioning index was introduced to evaluate the kinematic designs. The optimal parallel manipulator is incorporated into a 5-DOF hybrid machine tool which includes a 2-DOF rotational milling head and a long movement

Jun Wu; Tiemin Li; Xinjun Liu; Liping Wang

2007-01-01

306

Parallel in Time Simulation of Multiscale Stochastic Chemical Kinetics  

Microsoft Academic Search

A version of the time-parallel algorithm parareal is analyzed and applied to stochastic models in chemical kinetics. A fast predictor at the macroscopic scale (evaluated in serial) is available in the form of the usual reaction rate equations. A stochastic simulation algorithm is used to obtain an exact realization of the process at the mesoscopic scale (in parallel). The underlying
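The predictor-corrector structure described in this abstract can be illustrated on a deterministic toy problem. The coarse and fine propagators below are hypothetical forward-Euler stand-ins (the paper's fine propagator is a stochastic simulation algorithm, not reproduced here):

```python
# Sketch of the parareal iteration for dy/dt = -a*y. The cheap coarse
# propagator G runs serially; the expensive fine solves over each time
# slice are independent and could run in parallel.

def coarse(y, t0, t1, a=1.0):
    # one large forward-Euler step (cheap, serial predictor)
    return y + (t1 - t0) * (-a * y)

def fine(y, t0, t1, a=1.0, substeps=100):
    # many small steps (expensive; parallel over time slices)
    h = (t1 - t0) / substeps
    for _ in range(substeps):
        y = y + h * (-a * y)
    return y

def parareal(y0, t_grid, iterations=5):
    n = len(t_grid) - 1
    # initial serial coarse prediction over the whole interval
    U = [y0]
    for i in range(n):
        U.append(coarse(U[-1], t_grid[i], t_grid[i + 1]))
    for _ in range(iterations):
        # fine solves per slice are mutually independent
        F = [fine(U[i], t_grid[i], t_grid[i + 1]) for i in range(n)]
        G_old = [coarse(U[i], t_grid[i], t_grid[i + 1]) for i in range(n)]
        new = [y0]
        for i in range(n):
            # predictor-corrector update: G(new) + F(old) - G(old)
            new.append(coarse(new[-1], t_grid[i], t_grid[i + 1]) + F[i] - G_old[i])
        U = new
    return U
```

After at most n iterations (n = number of time slices) the parareal values coincide with the serial fine solution, which is the property the parallelization exploits.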

Stefan Engblom

2009-01-01

307

A parallel mechanism used on human hip joint power assist  

Microsoft Academic Search

In this paper, a 3-DOF parallel mechanism which is wearable for human hip joint power assist is proposed. Different from the conventional power assist system, the proposed wearable parallel mechanism is formed by both mechanical links/joints and human bones/joints. Considering the anatomical mechanism of human hip joint, the 3-DOF wearable system consists of two connected modules, the 3-DOF hip joint

Yong Yu; Wenyuan Liang; Yunjian Ge

2009-01-01

308

Implementing a parallel C++ runtime system for scalable parallel systems  

Microsoft Academic Search

pC++ is a language extension to C++ designed to allow programmers to compose "concurrent aggregate" collection classes which can be aligned and distributed over the memory hierarchy of a parallel machine in a manner modeled on the High Performance Fortran Forum (HPFF) directives for Fortran 90. pC++ allows the user to write portable and efficient code which will run on a wide range of scalable parallel computer systems.

A. Malony; B. Mohr; P. Beckman; D. Gannon; S. Yang; F. Bodin; S. Kesavan

1993-01-01

309

Software for parallel processing applications  

SciTech Connect

Parallel computing has been used to solve large computing problems in high-energy physics. Typical problems include offline event reconstruction, Monte Carlo event generation and reconstruction, and lattice QCD calculations. Fermilab has extensive experience in parallel computing using CPS (cooperative processes software) and networked UNIX workstations for the loosely-coupled problems of event reconstruction and Monte Carlo generation, and CANOPY and ACPMAPS for lattice QCD. Both systems will be discussed. Parallel software has been developed by many other groups, both commercial and research-oriented. Examples include PVM, Express and network-Linda for workstation clusters and PCN and STRAND88 for more tightly-coupled machines.

Wolbers, S.

1992-10-01

310

Reliability and durability problems  

NASA Astrophysics Data System (ADS)

The papers presented in this volume focus on methods for determining the stress-strain state of structures and machines and evaluating their reliability and service life. Specific topics discussed include a method for estimating the service life of thin-sheet automotive structures, stressed state at the tip of small cracks in anisotropic plates under biaxial tension, evaluation of the elastic-dissipative characteristics of joints by vibrational diagnostics methods, and calculation of the reliability of ceramic structures for arbitrary long-term loading programs. Papers are also presented on the effect of prior plastic deformation on fatigue damage kinetics, axisymmetric and local deformation of cylindrical parts during finishing-hardening treatments, and adhesion of polymers to diffusion coatings on steels.

Bojtsov, B. V.; Kondrashov, V. Z.

311

Power electronics reliability.  

SciTech Connect

The project's goals are: (1) use experiments and modeling to investigate and characterize stress-related failure modes of post-silicon power electronic (PE) devices such as silicon carbide (SiC) and gallium nitride (GaN) switches; and (2) seek opportunities for condition monitoring (CM) and prognostics and health management (PHM) to further enhance the reliability of power electronics devices and equipment. CM - detect anomalies and diagnose problems that require maintenance. PHM - track damage growth, predict time to failure, and manage subsequent maintenance and operations in such a way to optimize overall system utility against cost. The benefits of CM/PHM are: (1) operate power conversion systems in ways that will preclude predicted failures; (2) reduce unscheduled downtime and thereby reduce costs; and (3) pioneering reliability in SiC and GaN.

Kaplar, Robert James; Brock, Reinhard C.; Marinella, Matthew; King, Michael Patrick; Stanley, James K.; Smith, Mark A.; Atcitty, Stanley

2010-10-01

312

Reliability Centred Maintenance  

Microsoft Academic Search

Reliability centred maintenance (RCM) is a method for maintenance planning that was developed within the aircraft industry and later adapted to several other industries and military branches. A large number of standards and guidelines have been issued where the RCM methodology is tailored to different application areas, e.g., IEC 60300-3-11, MIL-STD-217, NAVAIR 00-25-403 (NAVAIR 2005), SAE JA 1012 (SAE 2002),

Marvin Rausand; Jørn Vatn

313

ATLAS reliability analysis  

SciTech Connect

Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.
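The combination of per-component Weibull failure models into a per-shot system reliability can be sketched as follows. The component parameters below are invented for illustration and are not the ATLAS values:

```python
import math

def weibull_reliability(shots, eta, beta):
    # Probability that a component survives the given number of shots,
    # for a Weibull model with scale eta (characteristic life in shots)
    # and shape beta (beta > 1 gives failure probability that grows
    # with accumulated shots, as in the abstract).
    return math.exp(-(shots / eta) ** beta)

def system_reliability(components, shots):
    # Series system: the shot succeeds only if every component survives,
    # so individual reliabilities multiply.
    r = 1.0
    for eta, beta in components:
        r *= weibull_reliability(shots, eta, beta)
    return r

# hypothetical capacitor-bank component lives (eta, beta) pairs
bank = [(5000.0, 1.5)] * 10 + [(20000.0, 2.0)] * 4
```

A maintenance interval can then be read off as the largest shot count for which the system reliability still exceeds the 95% target.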

Bartsch, R.R.

1995-09-01

314

Parallel Shear Flows Over Cavities.  

National Technical Information Service (NTIS)

Incompressible separated flows have long presented problems to the theoretician. Parallel shear flow over a cavity is an ideal flow configuration to evaluate numerically to shed light on fundamental relationships. It also provides a basis to predict flow ...

V. O'Brien

1970-01-01

315

Highly Parallel Sparse Cholesky Factorization.  

National Technical Information Service (NTIS)

Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model concep...

J. R. Gilbert; R. Schreiber

1990-01-01

316

Highly Parallel Sparse Cholesky Factorization.  

National Technical Information Service (NTIS)

The paper develops and compares several fine-grained parallel algorithms to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed-memory SIMD machine whose programming model co...

J. R. Gilbert; R. Schreiber

1990-01-01

317

Parallel computing and domain decomposition  

SciTech Connect

Domain decomposition techniques appear to be a natural way to make good use of parallel computers. In particular, these techniques divide a computation into a local part, which may be done without any interprocessor communication, and a part that involves communication between neighboring and distant processors. This paper discusses some of the issues in designing and implementing a parallel domain decomposition algorithm. A framework for evaluating the cost of parallelism is introduced and applied to answering questions such as which and how many processors should solve global problems and what impact load balancing has on the choice of domain decomposition algorithm. The sources of performance bottlenecks are discussed. This analysis suggests that domain decomposition techniques will be effective on high-performance parallel processors and on networks of workstations. 17 refs., 8 figs.
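The split into a purely local part plus a small communicated part can be illustrated with a one-dimensional Jacobi sweep. The two-subdomain decomposition below is a minimal sketch, with the single exchanged halo value standing in for interprocessor communication:

```python
def jacobi_sweep(u):
    # one Jacobi relaxation sweep for u'' = 0 with fixed endpoints
    return [u[0]] + [(u[i - 1] + u[i + 1]) / 2 for i in range(1, len(u) - 1)] + [u[-1]]

def decomposed_sweep(u, split):
    # Split the grid at index `split` into two subdomains. Each subdomain
    # sweep is entirely local except for one halo value copied across the
    # interface (the "communication" part of the decomposition).
    left = u[:split + 1] + [u[split + 1]]   # local points + right halo
    right = [u[split]] + u[split + 1:]      # left halo + local points
    new_left = jacobi_sweep(left)[:-1]      # drop the halo after the sweep
    new_right = jacobi_sweep(right)[1:]
    return new_left + new_right
```

Because the halo carries exactly the neighbor value each interface point needs, the decomposed sweep reproduces the global sweep while keeping almost all work local to a subdomain.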

Gropp, W.

1991-01-01

318

Interpreting Quantum Parallelism by Sequents  

NASA Astrophysics Data System (ADS)

We introduce an interpretation of quantum superposition in predicative sequent calculus, in the framework of basic logic. Then we introduce a new predicative connective for the entanglement. Our aim is to represent quantum parallelism in terms of logical proofs.

Battilotti, Giulia

2010-12-01

319

Parallel computing and domain decomposition.  

National Technical Information Service (NTIS)

Domain decomposition techniques appear to be a natural way to make good use of parallel computers. In particular, these techniques divide a computation into a local part, which may be done without any interprocessor communication, and a part that involves commu...

W. Gropp

1991-01-01

320

Challenge of Massively Parallel Computing.  

National Technical Information Service (NTIS)

Since the mid-1980's, there have been a number of commercially available parallel computers with hundreds or thousands of processors. These machines have provided a new capability to the scientific community, and they have been used successfully by scientists ...

D. E. Womble

1999-01-01

321

Performance Model for Massive Parallelism.  

National Technical Information Service (NTIS)

A popular argument is that vector and parallel architectures should not be carried to extremes because the scalar or serial portion of the code will eventually dominate. Since pipeline stages and extra processors obviously add hardware cost, a corollary t...

J. L. Gustafson

1988-01-01

322

Parallel Implicit Algorithms for CFD.  

National Technical Information Service (NTIS)

The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). 'Newton' refers to a quadratically convergent nonlinear iterat...

D. E. Keyes

1998-01-01

323

Demonstrating Forces between Parallel Wires.  

ERIC Educational Resources Information Center

Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)

Baker, Blane

2000-01-01

324

New Methodologies for Parallel Architecture  

Microsoft Academic Search

Moore's law continues to grant computer architects ever more transistors in the foreseeable future, and parallelism is the key to continued performance scaling in modern microprocessors. In this paper, the achievements of our research project on parallel architecture, which is supported by the National Basic Research 973 Program of China, are systematically presented. The innovative approaches and techniques to solve

Dong-Rui Fan; Xiao-Wei Li; Guo-Jie Li

2011-01-01

325

Debugging in a parallel environment  

SciTech Connect

This paper describes the preliminary results of a project investigating approaches to dynamic debugging in parallel processing systems. Debugging programs in a multiprocessing environment is particularly difficult because of potential errors in synchronization of tasks, data dependencies, sharing of data among tasks, and irreproducibility of specific machine instruction sequences from one job to the next. The basic methodology involved in predicate-based debuggers is given as well as other desirable features of dynamic parallel debugging. 13 refs.

Wasserman, H.J.; Griffin, J.H.

1985-01-01

326

Sumatra and Cascadia: Parallels Explored  

Microsoft Academic Search

The 2004 Sumatra-Andaman Mw9.2 earthquake has spawned superficial parallels between the Sumatra and Cascadia convergent margins in terms of rupture length and tsunami generation; however, the parallels go deeper than these simple parameters. The accretionary wedges of both systems are fed by large accreting submarine fans at their northern ends, sourcing sediments from remote continental interiors. The Astoria and Nitinat

C. Goldfinger; L. C. McNeill

2006-01-01

327

Parallelization of the SIP algorithm  

NASA Astrophysics Data System (ADS)

This work is devoted to the development of an algorithm for the parallelization of the SIP-solver for pentadiagonal matrices. It bases on the so-called block Jacobi algorithm. The parallelization of the matrix solver is done using the OpenMP standard. The performance of the new algorithm is illustrated with the numerical solution of a 2D Laplace's equation problem and a 2D Navier-Stokes equation problem (lid driven cavity).

Dierich, F.; Nikrityuk, P. A.

2012-09-01

328

Parallelization of FM-Index  

Microsoft Academic Search

A parallel design and implementation of the FM-index is presented in this paper. The FM-index is a self-contained, highly compressed indexing algorithm whose performance is crucial in applications. With the popularity of multi-core processors, parallel computing allows the FM-index to run faster by performing multiple computations simultaneously when possible. Our approach works by splitting input data into overlapping blocks
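The overlapping-block idea can be sketched as follows. Plain substring search stands in for the per-block FM-index query, and the block size is an arbitrary illustration value; the key point is that an overlap of len(pattern)-1 characters guarantees no match is lost at a block boundary:

```python
def block_search(text, pattern, block_size):
    # Split the input into blocks that overlap by len(pattern)-1
    # characters; each block could then be indexed and searched by an
    # independent worker, and the results merged.
    overlap = len(pattern) - 1
    matches = set()
    start = 0
    while start < len(text):
        block = text[start:start + block_size + overlap]
        pos = block.find(pattern)
        while pos != -1:
            matches.add(start + pos)      # translate to global offset
            pos = block.find(pattern, pos + 1)
        start += block_size
    return sorted(matches)
```

The set-merge step also deduplicates matches that fall inside the overlap region and are therefore found by two adjacent blocks.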

Di Zhang; Yunquan Zhang; Shengfei Liu; Xiaodi Huang

2008-01-01

329

Prioritization in Parallel Symbolic Computing  

Microsoft Academic Search

It is argued that scheduling is an important determinant of performance for many parallel symbolic computations, in addition to the issues of dynamic load balancing and grain size control. We propose associating unbounded levels of priorities with tasks and messages as the mechanism of choice for specifying scheduling strategies. We demonstrate how priorities can be used in parallelizing computations in

Laxmikant V. Kalé; Balkrishna Ramkumar; Vikram A. Saletore; Amitabh Sinha

1992-01-01

330

Another view on parallel speedup  

Microsoft Academic Search

In this paper three models of parallel speedup are studied. They are fixed-size speedup, fixed-time speedup and memory-bounded speedup. Two sets of speedup formulations are derived for these three models. One set requires more information and gives more accurate estimation. Another set considers a simplified case and provides a clear picture of possible performance gain of parallel processing. The simplified
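The three models can be written down directly. The formulations below are the commonly cited closed forms (Amdahl-style fixed-size, Gustafson-style fixed-time, and Sun-Ni memory-bounded with a work-scaling function g), not necessarily the exact formulations derived in the paper:

```python
def fixed_size_speedup(p, s):
    # Amdahl: problem size fixed; s is the serial fraction of the work
    return 1.0 / (s + (1.0 - s) / p)

def fixed_time_speedup(p, s):
    # Gustafson: problem scales so total runtime stays fixed
    return s + (1.0 - s) * p

def memory_bounded_speedup(p, s, g=lambda p: p):
    # Sun-Ni: parallel work grows as g(p) with the available memory.
    # g(p) = p recovers fixed-time; g(p) = 1 recovers fixed-size.
    return (s + (1.0 - s) * g(p)) / (s + (1.0 - s) * g(p) / p)
```

The memory-bounded form makes the "simplified picture" explicit: the two classical models are just the extreme choices of how problem size scales with processor count.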

Xian-He Sun; Lionel M. Ni

1990-01-01

331

A parallel string search algorithm  

Microsoft Academic Search

A new parallel processing algorithm for solving string search problems is presented. The proposed algorithm uses O(m×n) processors where n is the length of a text and m is the length of a pattern. It requires two and only two iteration steps to find the pattern in the text, while the best existing parallel algorithm needs the computation time O(loglog
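The O(m×n)-processor idea can be simulated serially: every (shift, offset) character comparison is independent, so one parallel step computes all comparisons and a second step performs an AND-reduction per shift. A minimal sketch (a serial simulation, not the paper's processor-array implementation):

```python
def parallel_find(text, pattern):
    n, m = len(text), len(pattern)
    # Step 1: all (i, j) comparisons are independent -> one parallel step
    hits = [[text[i + j] == pattern[j] for j in range(m)]
            for i in range(n - m + 1)]
    # Step 2: AND-reduction per candidate shift -> second parallel step
    return [i for i, row in enumerate(hits) if all(row)]
```

With one processor per (i, j) pair, both steps take constant depth, which is the source of the two-iteration claim in the abstract.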

Yoshiyasu Takefuji; Toshimitsu Tanaka; Kuo Chun Lee

1992-01-01

332

Adaptive Explicitly Parallel Instruction Computing  

Microsoft Academic Search

Poor scalability of Superscalar architectures with increasing instruction-level parallelism (ILP) has resulted in a trend towards statically scheduled horizontal architectures such as Very Large Instruction Word (VLIW) processors and their more sophisticated successors called Explicitly Parallel Instruction Computing (EPIC) architectures. We extend the EPIC model with additional capabilities to reconfigure the datapath at runtime in terms of the number and types of functional units...

Krishna V. Palem; Surendranath Talla; Patrick W. Devaney

1999-01-01

333

Genetic Algorithms for Reliability-Based Optimization of Water Distribution Systems  

Microsoft Academic Search

A new approach for reliability-based optimization of water distribution networks is presented. The approach links a genetic algorithm (GA) as the optimization tool with the first-order reliability method (FORM) for estimating network capacity reliability. Network capacity reliability in this case study refers to the probability of meeting minimum allowable pressure constraints across the network under uncertain nodal demands and uncertain

Bryan A. Tolson; Holger R. Maier; Angus R. Simpson; Barbara J. Lence

2004-01-01

334

Parallel asynchronous particle swarm optimization  

PubMed Central

The high computational cost of complex engineering optimization problems has motivated the development of parallel optimization algorithms. A recent example is the parallel particle swarm optimization (PSO) algorithm, which is valuable due to its global search capabilities. Unfortunately, because existing parallel implementations are synchronous (PSPSO), they do not make efficient use of computational resources when a load imbalance exists. In this study, we introduce a parallel asynchronous PSO (PAPSO) algorithm to enhance computational efficiency. The performance of the PAPSO algorithm was compared to that of a PSPSO algorithm in homogeneous and heterogeneous computing environments for small- to medium-scale analytical test problems and a medium-scale biomechanical test problem. For all problems, the robustness and convergence rate of PAPSO were comparable to those of PSPSO. However, the parallel performance of PAPSO was significantly better than that of PSPSO for heterogeneous computing environments or heterogeneous computational tasks. For example, PAPSO was 3.5 times faster than PSPSO for the biomechanical test problem executed on a heterogeneous cluster with 20 processors. Overall, PAPSO exhibits excellent parallel performance when a large number of processors (more than about 15) is utilized and either (1) heterogeneity exists in the computational task or environment, or (2) the computation-to-communication time ratio is relatively small.

Koh, Byung-Il; George, Alan D.; Haftka, Raphael T.; Fregly, Benjamin J.

2006-01-01

335

Transfer form  

Cancer.gov

06/06 Transfer Investigational Agent Form. This form is to be used for an intra-institutional transfer, one transfer/form. Cancer Therapy Evaluation Program, Division of Cancer Treatment and Diagnosis, National Cancer Institute, National Institutes of

336

Parallel search of strongly ordered game trees  

SciTech Connect

The alpha-beta algorithm forms the basis of many programs that search game trees. A number of methods have been designed to improve the utility of the sequential version of this algorithm, especially for use in game-playing programs. These enhancements are based on the observation that alpha-beta is most effective when the best move in each position is considered early in the search. Trees that have this so-called strong ordering property are not only of practical importance but possess characteristics that can be exploited in both sequential and parallel environments. This paper draws upon experiences gained during the development of programs which search chess game trees. Over the past decade major enhancements of the alpha-beta algorithm have been developed by people building game-playing programs, and many of these methods will be surveyed and compared here. The balance of the paper contains a study of contemporary methods for searching chess game trees in parallel, using an arbitrary number of independent processors. To make efficient use of these processors, one must have a clear understanding of the basic properties of the trees actually traversed when alpha-beta cutoffs occur. This paper provides such insights and concludes with a brief description of a refinement to a standard parallel search algorithm for this problem. 33 references.
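For reference, the sequential alpha-beta backbone that these enhancements build on can be sketched over a toy tree representation (nested lists with numeric leaf scores, an assumption of this sketch, not the chess-specific representation of the paper):

```python
def alphabeta(node, alpha, beta, maximizing):
    # node: either a number (leaf evaluation) or a list of child nodes.
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:          # strong ordering: try best move first
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:       # beta cutoff: opponent avoids this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:           # alpha cutoff
            break
    return value
```

The better the move ordering, the earlier the cutoffs fire, which is exactly the strong-ordering property the abstract says both sequential and parallel searchers exploit.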

Marsland, T.A.; Campbell, M.

1982-12-01

337

Improvement of Weapon Systems Reliability Through Reliability Improvement Warranties.  

National Technical Information Service (NTIS)

This report outlines the basic causes of poor weapon systems reliability. These include: (1) Military requirements that demand greater improvements in capability over improvements in reliability; (2) Inadequate development testing; and (3) The lack of inc...

J. D. Shmoldas

1977-01-01

338

Parallel Algorithms For Globally Adaptive Quadrature  

Microsoft Academic Search

[Abstract not available; the record contains only thesis front-matter and table-of-contents fragments: Declaration; Copyright; The Author; Acknowledgements; 1 Introduction; 2 Parallel Computing; 2.1 Why use parallel computers?; 2.2 Architecture of parallel computers]

Jonathan Mark Bull

1997-01-01

339

[Implementing graphical and analytical procedures for developing parallel tests].  

PubMed

We present an instrumental study in which two general procedures (graphical and objective) for constructing parallel forms are implemented. Both procedures are based on an item pool which is calibrated by means of the classical test theory. The graphic procedure is Gulliksen's matched random subtests method. The objective procedure is based on the criteria proposed by van der Linden and Boekkooi-Timminga, and uses zero-one programming. The stand-alone program FOR-PAR is free and user-friendly, and allows the parallel forms to be obtained directly from the item scores. FOR-PAR covers an important need in applied research, because developing parallel forms is a requirement in some applications, and, so far, no programmes of this type were available. The procedures which are implemented were applied in an illustrative example based on an adjustment test intended for visually handicapped people. PMID:19403089
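A minimal sketch of difficulty-matched form construction in the spirit of Gulliksen's matched subtests method (an illustration, not the FOR-PAR implementation): rank the calibrated items by difficulty and split neighbouring pairs between the two forms, so the forms end up with near-identical difficulty profiles.

```python
def matched_forms(items):
    # items: list of (item_id, difficulty) pairs calibrated under
    # classical test theory. Items with the closest difficulties are
    # paired and split between the two forms; an odd leftover item
    # is simply discarded in this sketch.
    ranked = sorted(items, key=lambda it: it[1])
    form_a, form_b = [], []
    for k in range(0, len(ranked) - 1, 2):
        form_a.append(ranked[k])
        form_b.append(ranked[k + 1])
    return form_a, form_b
```

The objective zero-one-programming procedure mentioned in the abstract replaces this greedy pairing with constraints that equalize several statistics at once.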

Ferrando Piera, Pere Joan; Lorenzo-Seva, Urbano; Pallero González, Rafael

2009-05-01

340

Substation Reliability Centered Maintenance  

SciTech Connect

Substation Reliability Centered Maintenance (RCM) is a technique that is used to develop maintenance plans and criteria so the operational capability of substation equipment is achieved, restored, or maintained. The objective of the RCM process is to focus attention on system equipment in a manner that leads to the formulation of an optimal maintenance plan. The RCM concept originated in the airline industry in the 1970s and has been used since 1985 to establish maintenance requirements for nuclear power plants. The RCM process is initially applied during the design and development phase of equipment or systems on the premise that reliability is a design characteristic. It is then reapplied, as necessary, during the operational phase to sustain a more optimal maintenance program based on actual field experiences. The purpose of the RCM process is to develop a maintenance program that provides desired or specified levels of operational safety and reliability at the lowest possible overall cost. The objectives are to predict or detect and correct incipient failures before they occur or before they develop into major defects, reduce the probability of failure, detect hidden problems, and improve the cost-effectiveness of the maintenance program. RCM accomplishes two basic purposes: (1) It identifies in real-time incipient equipment problems, averting potentially expensive catastrophic failures by communicating potential problems to appropriate system operators and maintenance personnel. (2) It provides decision support by recommending, identifying, and scheduling preventive maintenance. Recommendations are based on maintenance criteria, maintenance history, experience with similar equipment, real-time field data, and resource constraints. Hardware and software are used to accomplish these two purposes. The RCM system includes instrumentation that monitors critical substation equipment as well as computer software that helps analyze equipment data.

Purucker, S.L.

1992-11-01

341

Reliability, synchrony and noise  

PubMed Central

The brain is noisy. Neurons receive tens of thousands of highly fluctuating inputs and generate spike trains that appear highly irregular. Much of this activity is spontaneous—uncoupled to overt stimuli or motor outputs—leading to questions about the functional impact of this noise. Although noise is most often thought of as disrupting patterned activity and interfering with the encoding of stimuli, recent theoretical and experimental work has shown that noise can play a constructive role—leading to increased reliability or regularity of neuronal firing in single neurons and across populations. These results raise fundamental questions about how noise can influence neural function and computation.

Ermentrout, G. Bard; Galan, Roberto F.; Urban, Nathaniel N.

2008-01-01

342

Parallel 3-D spherical-harmonics transport methods  

SciTech Connect

This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The authors have developed massively parallel algorithms and codes for solving the radiation transport equation on 3-D unstructured spatial meshes consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. Three self-adjoint forms of the transport equation are solved: the even-parity form, the odd-parity form, and the self-adjoint angular flux form. The authors developed this latter form, which offers several significant advantages relative to the traditional forms. The transport equation is discretized in space using a trilinear finite-element approximation, in direction using a spherical-harmonic approximation, and in energy using the multigroup approximation. The discrete equations are solved using a parallel conjugate-gradient method. All of the parallel algorithms were implemented on the CM-5 computer at LANL. Calculations are presented which demonstrate that the solution technique is both highly parallel and efficient.

Morel, J.E.; McGhee, J.M. [Los Alamos National Lab., NM (United States). Computing, Information, and Communications Div.; Manteuffel, T. [Univ. of Colorado, Boulder, CO (United States). Dept. of Mathematics

1997-08-01

343

Storage Reliability of Reserve Batteries.  

National Technical Information Service (NTIS)

This report concerns the storage reliability of reserve batteries. Items developed for munitions have a 20 yr. shelf life requirement over a wide temperature range. Developers need to prove storage reliability. Actual documentation is preferred. Science c...

J. Swank; A. Goldberg

2001-01-01

344

The reliability of lie detection performance.  

PubMed

We examined whether individuals' ability to detect deception remained stable over time. In two sessions, held one week apart, university students viewed video clips of individuals and attempted to differentiate between the lie-tellers and truth-tellers. Overall, participants had difficulty detecting all types of deception. When viewing children answering yes-no questions about a transgression (Experiments 1 and 5), participants' performance was highly reliable. However, rating adults who provided truthful or fabricated accounts did not produce a significant alternate forms correlation (Experiment 2). This lack of reliability was not due to the types of deceivers (i.e., children versus adults) or interviews (i.e., closed-ended questions versus extended accounts) (Experiment 3). Finally, the type of deceptive scenario (naturalistic vs. experimentally-manipulated) could not account for differences in reliability (Experiment 4). Theoretical and legal implications are discussed. PMID:18594955

Leach, Amy-May; Lindsay, R C L; Koehler, Rachel; Beaudry, Jennifer L; Bala, Nicholas C; Lee, Kang; Talwar, Victoria

2008-07-02

345

FAROW: A tool for fatigue and reliability of wind turbines  

Microsoft Academic Search

FAROW is a computer program that evaluates the fatigue and reliability of wind turbine components using structural reliability methods. A deterministic fatigue life formulation is based on functional forms of three basic parts of wind turbine fatigue calculation: (1) the loading environment, (2) the gross level of structural response given the load environment, and (3) the local failure criterion given

P. S. Veers; C. H. Lange; S. R. Winterstein

1993-01-01

346

Software-Reliability Modeling: The Case for Deterministic Behavior  

Microsoft Academic Search

Software-reliability models (SRMs) are used for the assessment and improvement of reliability in software systems. These models are normally based on stochastic processes, with the nonhomogeneous Poisson process being one of the most prominent model forms. An underlying assumption of these models is that software failures occur randomly in time. This assumption has never been quantitatively tested. Our contribution in

Scott Dick; Cindy L. Bethel; Abraham Kandel

2007-01-01

347

The Reliability of Environmental Measures of the College Alcohol Environment.  

ERIC Educational Resources Information Center

Assesses the inter-rater reliability of two environmental scanning tools designed to identify alcohol-related advertisements targeting college students. Inter-rater reliability for these forms varied across different rating categories and ranged from poor to excellent. Suggestions for future research are addressed. (Contains 26 references and 6…

Clapp, John D.; Whitney, Mike; Shillington, Audrey M.

2002-01-01

348

Dumper upgrades boost reliability  

SciTech Connect

This article describes how coal handling equipment reliability was improved at two power plants after refurbishment. Rotary railroad car dumpers first gained widespread adoption for bulk materials transportation more than 50 years ago. At that time, the concept of rotating a loaded railcar 180 degrees to dump its entire load was considered revolutionary. Today, such equipment, often called roll dumpers, seems so commonplace that the current state-of-the-art dumper designs may go unrecognized because of their external similarities to the earliest developments. Current rotary car dumper systems are more efficient, accurate and productive. Because they have been more or less dependable, many dumper installations in use today date from the very early years. However, wear, weathering and new demands on such aging equipment naturally take their toll. TVA engineers had become watchful for signs of catastrophic failures at the unsheltered dumpers and closely monitored the frequency of breakdowns and downtime at all power plants. It appeared that the future economics of fossil fuel usage would increase each power plant's need for dependable, productive coal handling and car dumping systems. The fossil fuel procurement department routinely reviews cost data to identify and track the components that make up procurement and fuel handling costs. During the winter of 1987-88, the work identified railcar dumpers at both the Kingston and Shawnee plants as major contributors to increasing costs. Weathering was not the major problem here because both installations were under cover. However, the real needs involved improved reliability and greater efficiency.

Johnson, R. [Tennessee Valley Authority, Knoxville, TN (United States)

1995-04-01

349

Highly reliable fused couplers  

NASA Astrophysics Data System (ADS)

The focus of this paper is Oplink's highly reliable fused coupler program. It is based on a proprietary Oplink fusion system design and offers a unique technology. The fusion temperature is extremely high, so the cladding glasses of the fibers are strongly fused together. The optical performance benefits include superior stability over time and tolerance of extreme environmental condition changes. Additionally, insertion losses and polarization dependent losses (PDL) can be held at extremely low values with almost no change over time. Reliability testing shows results beyond the Telcordia GR-1221-CORE requirements, with up to 8000 hours in the damp heat test and high-temperature storage (at 100°C). In addition, we present data from other stringent tests, and the FIT number is estimated from the test data. This fusion system design technology can be applied to a wide range of fused fiber products such as tap couplers and fused wavelength division multiplexers, and can potentially be implemented for fused products used in undersea submarine systems.

Shi, Zhong; Ren, Steve; Wu, Weiti

2002-09-01

350

The Reliability of Neurons  

PubMed Central

The prevalent probabilistic view is virtually untestable; it remains a plausible belief. The cases usually cited cannot be taken as evidence for it. Several grounds for this conclusion are developed. Three issues are distinguished in an attempt to clarify a murky debate: (a) the utility of probabilistic methods in data reduction, (b) the value of models that assume indeterminacy, and (c) the validity of the inference that the nervous system is largely indeterministic at the neuronal level. No exception is taken to the first two; the second is a private heuristic question. The third is the issue to which the assertion in the first two sentences is addressed. Of the two kinds of uncertainty, statistical mechanical (= practical unpredictability) as in a gas, and Heisenbergian indeterminacy, the first certainly exists; the second is moot at the neuronal level. It would contribute to discussion to recognize that neurons perform with a degree of reliability. Although unreliability is difficult to establish, to say nothing of measure, evidence that some neurons have a high degree of reliability, in both connections and activity, is increasing greatly. An example is given from sternarchine electric fish.

Bullock, Theodore Holmes

1970-01-01

351

Parallel plasma fluid turbulence calculations  

SciTech Connect

The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.

Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

1994-12-31

352

Parallelization of the simplex method using the quadrant interlocking factorization  

SciTech Connect

This dissertation considers the parallelization of the simplex method of linear programming. Current implementations of the simplex method on sequential computers are based on a triangular factorization of the inverse of the current basis. An alternative decomposition designed for parallel computation, called the quadrant interlocking factorization, has previously been proposed for solving linear systems of equations. This research presents the theoretical justification and algorithms required to implement this new factorization in a simplex-based linear-programming system. Four algorithms are presented for updating the quadrant-interlocking factorization of the basis matrix when modified by a rank-one matrix. Parallel algorithms for producing the factorization in product form and for solving the linear systems of the simplex method are developed. The computations are scheduled to minimize the total execution time on a multiple-instructions multiple-data (MIMD) parallel computer that incorporates p identical processors sharing a common memory.

Zaki, H.A.

1987-01-01

353

Kinesthetic Aftereffect Scores Are Reliable  

ERIC Educational Resources Information Center

The validity of the Kinesthetic Aftereffect (KAE) as a measure of personality has been criticized because of KAE's poor test-retest reliability. However, systematic bias effects render KAE retest sessions invalid and make test-retest reliability an inappropriate measure of KAE's true reliability. (Author/CTM)

Mishara, Brian L.; Baker, A. Harvey

1978-01-01

354

Recent Developments in Reliability Analysis.  

ERIC Educational Resources Information Center

When one wants to set data reliability standards for a class of scientific inquiries or when one needs to compare and select among many different kinds of data with reliabilities that are crucial to a particular research undertaking, then one needs a single reliability coefficient that is adaptable to all or most situations. Work toward this goal…

Krippendorff, Klaus

355

Massively parallel MRI detector arrays.  

PubMed

Originally proposed as a method to increase sensitivity by extending the locally high sensitivity of small surface coil elements to larger areas via reception, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts, relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called "ultimate" SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758
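The "methods for optimally combining array data" that the review surveys start from the simple root-sum-of-squares (RSS) combination of per-coil voxel signals. A minimal sketch under our own naming (the review's optimal combination would additionally weight each coil by its sensitivity and noise covariance, which is omitted here):

```python
import math

def rss_combine(coil_signals):
    """Root-sum-of-squares combination of array-coil data.

    coil_signals: one list of voxel magnitudes per coil element.
    Returns one combined magnitude per voxel."""
    # zip(*...) transposes coil-major data into voxel-major tuples
    return [math.sqrt(sum(s * s for s in voxel))
            for voxel in zip(*coil_signals)]
```

For two coils seeing a voxel at magnitudes 3 and 4, the combined value is 5, illustrating how each element contributes where it is most sensitive.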

Keil, Boris; Wald, Lawrence L

2013-02-07

356

Visualizing parallel computer system performance  

SciTech Connect

Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed, almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels but also both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

Malony, A.D.; Reed, D.A. (Illinois Univ., Urbana, IL (USA). Dept. of Computer Science)

1988-01-01

357

Fast data parallel polygon rendering  

SciTech Connect

This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted for applications which require very fast polygon rendering for extremely large sets of polygons such as are found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

Ortega, F.A.; Hansen, C.D.

1993-09-01

358

Massively parallel MRI detector arrays  

NASA Astrophysics Data System (ADS)

Originally proposed as a method to increase sensitivity by extending the locally high sensitivity of small surface coil elements to larger areas via reception, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts, relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called "ultimate" SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays.

Keil, Boris; Wald, Lawrence L.

2013-04-01

359

Wave Propagation Using Parallel Computational Techniques.  

NASA Astrophysics Data System (ADS)

This thesis is concerned with the application of computational techniques appropriate to simulation of wave propagation from radiating elements in an antenna array utilizing parallel computer architectures. The intention of this study is to implement numerical solutions of the wave equation on a parallel computer architecture to gain a computational speed advantage and to develop direct simulation techniques (based on a description dependent only on local physics) which are an accurate representation of the physical system. Energy conserving integration schemes for the wave equation represented in finite difference form have been developed and analyzed. The analysis entails first verifying the stability of these finite difference equations using von Neumann stability analysis and then finding analytic closed form solutions using discrete Laplace and Fourier transforms. Analytic results are then verified by comparison to the simulations. These integration schemes have been extended to include boundary conditions which permit the simulation of free-space wave propagation on a finite computational mesh. The simulations are characterized by computational requirements which are independent of the geometry of the simulation. Therefore this type of simulation will be used as a tool to simulate wave propagation in inhomogeneous media with an arbitrary spatial distribution of sources and reflectors. It will also be appropriate as a method of evaluating antenna performance in both the far and near field.
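The finite-difference integration described above can be illustrated, in one dimension, by the standard leapfrog update for u_tt = c²·u_xx; von Neumann analysis requires the Courant number r = c·Δt/Δx to satisfy r ≤ 1 for stability. A sketch under our own naming (the thesis treats more general meshes and radiating boundary conditions):

```python
def step(u_prev, u, r2):
    """One leapfrog step of the 1-D wave equation with fixed (u = 0) ends.

    u_prev, u: field at the two previous time levels.
    r2: squared Courant number (c*dt/dx)**2; must be <= 1 for stability."""
    n = len(u)
    u_next = [0.0] * n
    for i in range(1, n - 1):
        # u^{n+1}_i = 2 u^n_i - u^{n-1}_i + r^2 (u^n_{i+1} - 2 u^n_i + u^n_{i-1})
        u_next[i] = 2 * u[i] - u_prev[i] + r2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    return u_next
```

At r = 1 a single-point disturbance splits into left- and right-travelling parts on the very next step, mirroring the d'Alembert character of the continuous equation.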

Sparagna, Stephen Michael

1990-01-01

360

Hybrid parallel programming with MPI and Unified Parallel C.  

SciTech Connect

The Message Passing Interface (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node. Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) are growing in popularity because of their ability to provide a shared global address space that spans the memories of multiple compute nodes. However, taking advantage of UPC can require a large recoding effort for existing parallel applications. In this paper, we explore a new hybrid parallel programming model that combines MPI and UPC. This model allows MPI programmers incremental access to a greater amount of memory, enabling memory-constrained MPI codes to process larger data sets. In addition, the hybrid model offers UPC programmers an opportunity to create static UPC groups that are connected over MPI. As we demonstrate, the use of such groups can significantly improve the scalability of locality-constrained UPC codes. This paper presents a detailed description of the hybrid model and demonstrates its effectiveness in two applications: a random access benchmark and the Barnes-Hut cosmological simulation. Experimental results indicate that the hybrid model can greatly enhance performance; using hybrid UPC groups that span two cluster nodes, RA performance increases by a factor of 1.33 and using groups that span four cluster nodes, Barnes-Hut experiences a twofold speedup at the expense of a 2% increase in code size.

Dinan, J.; Balaji, P.; Lusk, E.; Sadayappan, P.; Thakur, R.; Mathematics and Computer Science; The Ohio State Univ.

2010-01-01

361

Detecting coarse-grain parallelism using an interprocedural parallelizing compiler  

Microsoft Academic Search

This paper presents an extensive empirical evaluation of an interprocedural parallelizing compiler, developed as part of the Stanford SUIF compiler system. The system incorporates a comprehensive and integrated collection of analyses, including privatization and reduction recognition for both array and scalar variables, and symbolic analysis of array subscripts. The interprocedural analysis framework is designed to provide analysis results nearly as

Mary H. Hall; Saman P. Amarasinghe; Brian R. Murphy; Shih-Wei Liao; Monica S. Lam

1995-01-01

362

Constructions: Parallel Through A Point  

NSDL National Science Digital Library

After review of Construction Basics, the technique of constructing a parallel line through a point not on the line will be learned. Let's review the basics of Constructions in Geometry first: Constructions - General Rules Review of how to copy an angle is helpful; please review that here: Constructions: Copy a Line Segment and an Angle Now, using a paper, pencil, straight edge, and compass, you will learn how to construct a parallel through a point. A video demonstration is available to help you. (Windows Media ...

Neubert, Mrs.

2010-12-31

363

Gang scheduling a parallel machine  

SciTech Connect

Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processes. User programs and their gangs of processes are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory.
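The sleep/wake cycle with fair-share accounting can be sketched as a priority queue keyed on CPU time consumed: the gang that has received the least service is awakened next, runs all of its processes together for one quantum, and goes back to sleep. A toy simulation with hypothetical job names and workloads (the real scheduler manages live processes, not abstract work counts):

```python
import heapq

def gang_schedule(gangs, quantum, total_time):
    """Simulate gang scheduling with fair-share accounting.

    gangs: {name: units of work remaining}.
    Returns a timeline of (start_time, gang, units_run) entries."""
    used = {g: 0 for g in gangs}           # fair-share ledger per gang
    remaining = dict(gangs)
    heap = [(0, g) for g in gangs]         # (cpu_time_used, gang): least-served first
    heapq.heapify(heap)
    timeline, t = [], 0
    while heap and t < total_time:
        _, g = heapq.heappop(heap)         # wake the most deserving gang
        run = min(quantum, remaining[g])   # all of g's processes run together
        timeline.append((t, g, run))
        t += run
        used[g] += run
        remaining[g] -= run
        if remaining[g] > 0:               # put the gang back to sleep, reprioritized
            heapq.heappush(heap, (used[g], g))
    return timeline
```

With two gangs of unequal length the scheduler alternates until the shorter one finishes, then lets the longer one run out.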

Gorda, B.C.; Brooks, E.D. III.

1991-12-01

364

Gang scheduling a parallel machine  

SciTech Connect

Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processes. User programs and their gangs of processes are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory. 2 refs., 1 fig.

Gorda, B.C.; Brooks, E.D. III.

1991-03-01

365

THE RELIABILITY OF SPECIAL TESTS IN MEASURING PERSONALITY.  

ERIC Educational Resources Information Center

A FOLLOWUP STUDY WAS REPORTED THAT ENLARGED THE SCOPE OF THE AUTHOR'S PREVIOUS STUDY OF THE OPPOSITE-FORM APPROACH USED BY STUDENTS IN TEST AND MEASUREMENT COURSES. THE STUDY HAD THREE PURPOSES--(1) TO INVESTIGATE THE RELIABILITIES OF OPPOSITE-FORM INVENTORIES, (2) TO CROSS VALIDATE OPPOSITE-FORM INVENTORIES, AND (3) TO STUDY THE PATTERNING OF…

ONG, JIN

366

Consider insulation reliability  

SciTech Connect

This paper reports that when calcium silicate and two brands of mineral wool were compared in a series of laboratory tests, calcium silicate was more reliable, and in-service experience with mineral wool at a Canadian heavy crude refinery provided examples of many of the lab's findings. Lab tests, conducted under controlled conditions following industry accepted practices, showed calcium silicate insulation was stronger, tougher and more durable than the mineral wools to which it was compared. For instance, the calcium silicate insulation exhibited only some minor surface cracking when heated to 1,200°F (649°C), while the mineral wools suffered binder burnout resulting in sagging, delamination and a general loss of dimensional stability.

Gamboa (Manville Mechanical Insulations, a Div. of Schuller International Inc., Denver, CO (United States))

1993-01-01

367

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2011 CFR

...2011-10-01 false Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2011-10-01

368

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2012 CFR

...2012-10-01 false Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2012-10-01

369

Dynamic redundancy allocation for reliable and high-performance nanocomputing  

Microsoft Academic Search

Nanoelectronic devices are considered to be the fabrics of future nanocomputing systems due to their ultra-high speed and integration density. However, the imperfect bottom-up self-assembly fabrication leads to excessive defects that emerge as a barrier for reliable computing. In addition, transient errors continue to be an issue in nanoscale integration. The massive parallelism rendered by the ultra-high integration density opens

Shuo Wang; Lei Wang; Faquir Jain

2007-01-01

370

Spray forming  

Microsoft Academic Search

Spray forming is a relatively new manufacturing process for near net shape preforms in a wide variety of alloys. Spray formed materials have a characteristic equiaxed microstructure with small grain sizes, low levels of solute partitioning, and inhibited coarsening of secondary phases. After consolidation to full density, spray formed materials have consistently shown properties superior to conventionally cast materials, and

P. S. Grant

1995-01-01

371

Approximate Time-Parallel Cache Simulation  

Microsoft Academic Search

In time-parallel simulation, the simulation time axis is decomposed into a number of slices which are assigned to parallel processes for concurrent simulation. Although a promising parallelization technique, it is difficult to apply. Recently, using approximation with time-parallel simulation has been proposed to extend the class of suitable models and to improve the performance of existing

Tobias Kiesling

2004-01-01

372

A Parallel Communication Infrastructure for STAPL  

Microsoft Academic Search

Communication is an important but difficult aspect of parallel programming. This paper describes a parallel communication infrastructure, based on remote method invocation, to simplify parallel programming by abstracting low-level shared-memory or message passing details while maintaining high performance and portability. STAPL, the Standard Template Adaptive Parallel Library, builds upon this infrastructure to make communication transparent

Steven Saunders; Lawrence Rauchwerger

373

Residual Splash for Optimally Parallelizing Belief Propagation  

Microsoft Academic Search

As computer architectures move towards multicore we must build a theoretical understanding of parallelism in machine learning. In this paper we focus on parallel inference in graphical models. We demonstrate that the natural, fully synchronous parallelization of belief propagation is highly inefficient. By bounding the achievable parallel performance in chain graphical models we develop a theoretical understanding

Joseph E. Gonzalez; Yucheng Low; Carlos Guestrin

2009-01-01

374

Performance considerations for parallel FFT algorithms  

Microsoft Academic Search

The authors describe parallel algorithm performance evaluation in a programming and instrumentation environment (PIE), an environment geared toward efficient parallel programming and the prediction, implementation, measurement, and evaluation of parallel fast Fourier transform (FFT) algorithms. An example of a mature technology for evaluating parallel applications is provided, emphasizing the need for integration between modeling and measurements. Performance tradeoffs for a

Masakatsu Kosaka; Zary Segall

1990-01-01

375

Parallel p-code for parallel Pascal and other high level languages  

SciTech Connect

Parallel p-code is an intermediate compiler language for parallel processors. It was originally designed as part of a parallel Pascal compiler for NASA's massively parallel processor (MPP). However, it should also be suitable for a wide variety of high level languages and parallel architectures. Parallel p-code is based on a p-code language for serial processors. The authors describe the extensions which were necessary for the parallel environment. 6 references.

Bruner, J.D.; Reeves, A.P.

1983-01-01

376

Parallel P-code for Parallel Pascal and other high level languages  

SciTech Connect

Parallel P-code is an intermediate compiler language for parallel processors. It was originally designed as part of a Parallel Pascal compiler for NASA's Massively Parallel Processor (MPP). However, it should also be suitable for a wide variety of high level languages and parallel architectures. Parallel P-code is based on a P-code language for serial processors; this paper describes the extensions which were necessary for the parallel environment.

Bruner, J.D.; Reeves, A.P.

1983-07-21

377

Experimental Assessment of Parallel Systems  

Microsoft Academic Search

In the research reported in this paper, transient faults were injected in the nodes and in the communication subsystem (by using software fault injection) of a commercial parallel machine running several real applications. The results showed that a significant percentage of faults caused the system to produce wrong results while the application seemed to terminate normally, thus demonstrating that fault

João Gabriel Silva; Joao Carreira; Henrique Madeira; Francisco Moreira; P. Moreira

1996-01-01

378

Why Structured Parallel Programming Matters  

Microsoft Academic Search

Simple parallel programming frameworks such as Pthreads, or the six function core of MPI, are universal in the sense that they support the expression of arbitrarily complex patterns of computation and interaction between concurrent activities. Pragmatically, their descriptive power is constrained only by the programmer's creativity and capacity for attention to detail. Meanwhile, as our understanding of the structure

Murray Cole

2004-01-01

379

Parallel performance characteristics of ICEPIC  

Microsoft Academic Search

Fast, efficient results from the ICEPIC (improved concurrent electromagnetic particle in cell) code are key to the Air Force Research Laboratory's efforts to design high power microwave sources for electronic warfare and nonlethal weaponry. Parallelization of ICEPIC allows the use of DoD supercomputer assets to perform device simulations which would previously have been impossible, and also to obtain these results

P. Mardahl; A. Greenwood; T. Murphy; K. Cartwright

2003-01-01

380

Ejs Parallel Plate Capacitor Model  

NSDL National Science Digital Library

The Ejs Parallel Plate Capacitor model displays a parallel-plate capacitor which consists of two identical metal plates, placed parallel to one another. The capacitor can be charged by connecting one plate to the positive terminal of a battery and the other plate to the negative terminal. The dielectric constant and the separation of the plates can be changed via sliders. You can modify this simulation if you have Ejs installed by right-clicking within the plot and selecting "Open Ejs Model" from the pop-up menu item. Ejs Parallel Plate Capacitor model was created using the Easy Java Simulations (Ejs) modeling tool. It is distributed as a ready-to-run (compiled) Java archive. Double clicking the ejs_bu_capacitor.jar file will run the program if Java is installed. Ejs is a part of the Open Source Physics Project and is designed to make it easier to access, modify, and generate computer models. Additional Ejs models for Newtonian mechanics are available. They can be found by searching ComPADRE for Open Source Physics, OSP, or Ejs.
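The physics behind the sliders is the ideal parallel-plate relation C = κ·ε₀·A/d, so halving the plate separation doubles the capacitance. A minimal numerical sketch (function and parameter names are ours, not part of the Ejs model, and edge effects are ignored):

```python
EPS0 = 8.854e-12  # vacuum permittivity, farads per meter

def capacitance(area_m2, separation_m, kappa=1.0):
    """Ideal parallel-plate capacitance C = kappa * eps0 * A / d.

    kappa is the dielectric constant (1.0 for vacuum)."""
    return kappa * EPS0 * area_m2 / separation_m
```

A 100 cm² plate pair 1 mm apart in vacuum gives about 88.5 pF; moving the plates to 0.5 mm doubles that, which is exactly the behavior the separation slider demonstrates.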

Duffy, Andrew

2008-07-14

381

Parallel, Distributed Scripting with Python  

SciTech Connect

Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadm tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and coordinate the work.
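The password-checker example is embarrassingly parallel: split the dictionary into chunks and search them concurrently. A sketch of that split using the standard library (names and the use of SHA-256 are our assumptions; the original would have used Unix crypt() and, per the abstract, MPI-style distribution across machines rather than a local pool):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def check_chunk(target_hash, words):
    """Return the word in this chunk whose hash matches, or None."""
    for w in words:
        if hashlib.sha256(w.encode()).hexdigest() == target_hash:
            return w
    return None

def crack(target_hash, dictionary, workers=4):
    """Distribute the dictionary over workers and search the chunks in parallel."""
    # stride-slicing gives each worker an evenly sized chunk
    chunks = [dictionary[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for hit in pool.map(lambda c: check_chunk(target_hash, c), chunks):
            if hit is not None:
                return hit
    return None
```

The coordination problem the abstract mentions is visible even here: results must be collected and the first hit reported, while the remaining chunks finish or are abandoned.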

Miller, P J

2002-05-24

382

Coupled parallel waveguide semiconductor laser  

SciTech Connect

The operation of a new type of tunable laser, where the two separately controlled individual lasers are placed vertically in parallel, has been demonstrated. One of the cavities (''control'' cavity) is operated below threshold and assists the longitudinal mode selection and tuning of the other laser. With a minor modification, the same device can operate as an independent two-wavelength laser source.

Mukai, S.; Kapon, E.; Katz, J.; Lindsey, C.; Rav-Noy, Z.; Margalit, S.; Yariv, A.

1984-03-01

383

Cathodic protection with parallel cylinders  

Microsoft Academic Search

This paper reports that anodes should be placed so as to supply a uniform current density to the surface of the protected cathode to maintain it within a specified potential range relative to the adjacent electrolytic medium. Analysis of two or more parallel circular cylinders is carried out by solving Laplace's equation with the uniform current density on the cathode.

John Newman

1991-01-01

384

Bloom and Hypertext: Parallel Taxonomies?  

ERIC Educational Resources Information Center

Discusses parallels between Bloom's taxonomy for cognitive learning outcomes and hypertext design, and argues that hypertext designs should be consistent with desired learning outcomes. Bloom's taxonomy is explained; hypertext designs are described and diagrammed; and the need for more research connecting hypertext design and effective learning is…

Ross, Tweed W.

1993-01-01

385

Coarray Fortran for parallel programming  

Microsoft Academic Search

Co-Array Fortran, formerly known as F--, is a small extension of Fortran 95 for parallel processing. A Co-Array Fortran program is interpreted as if it were replicated a number of times and all copies were executed asynchronously. Each copy has its own set of data objects and is termed an image. The array syntax of Fortran 95 is extended with

Robert W. Numrich; John Reid

1998-01-01

386

Microeconomic Scheduler for Parallel Computers.  

National Technical Information Service (NTIS)

We describe a scheduler based on the microeconomic paradigm for scheduling on-line a set of parallel jobs in a multiprocessor system. In addition to increasing the system throughput and reducing the response time, we consider fairness in allocating system...
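The microeconomic paradigm can be illustrated, in a much-simplified form, as proportional-share allocation: each job holds funds (a bid), and processors are divided in proportion to those bids. A toy sketch under our own assumptions; the actual scheduler is on-line and also optimizes throughput, response time, and fairness dynamics:

```python
def allocate(processors, bids):
    """Split `processors` among jobs in proportion to their bids,
    using largest-remainder rounding so the total is exact."""
    total = sum(bids.values())
    shares = {j: processors * b / total for j, b in bids.items()}
    alloc = {j: int(s) for j, s in shares.items()}      # floor of each share
    leftover = processors - sum(alloc.values())
    # hand the remaining processors to the largest fractional remainders
    for j in sorted(shares, key=lambda j: shares[j] - alloc[j],
                    reverse=True)[:leftover]:
        alloc[j] += 1
    return alloc
```

Equal bids yield equal allocations; a job bidding twice as much receives twice the processors, which is one simple notion of fairness a market-based scheduler can provide.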

I. Stoica H. Abdel-Wahab

1995-01-01

387

Aligning Sentences in Parallel Corpora  

Microsoft Academic Search

In this paper we describe a statistical technique for aligning sentences with their translations in two parallel corpora. In addition to certain anchor points that are available in our data, the only information about the sentences that we use for calculating alignments is the number of tokens that they contain. Because we make no use of the lexical details of
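Length-based alignment of this kind can be sketched as dynamic programming over sentence token counts: a 1-1 match costs the length difference, and skipping a sentence on either side pays a gap penalty. This is a deliberately simplified stand-in (the statistical model in the paper scores alignments probabilistically and handles 2-1 beads as well):

```python
def align(src_lens, tgt_lens, gap_penalty=5):
    """Align two sequences of sentence token counts.

    Allows 1-1 matches (cost = |length difference|) and 1-0 / 0-1
    skips (cost = gap_penalty). Returns the matched (i, j) pairs."""
    n, m = len(src_lens), len(tgt_lens)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0 and cost[i - 1][j] + gap_penalty < cost[i][j]:
                cost[i][j] = cost[i - 1][j] + gap_penalty
                back[i][j] = (i - 1, j)
            if j > 0 and cost[i][j - 1] + gap_penalty < cost[i][j]:
                cost[i][j] = cost[i][j - 1] + gap_penalty
                back[i][j] = (i, j - 1)
            if i > 0 and j > 0:
                c = cost[i - 1][j - 1] + abs(src_lens[i - 1] - tgt_lens[j - 1])
                if c < cost[i][j]:
                    cost[i][j] = c
                    back[i][j] = (i - 1, j - 1)
    pairs, ij = [], (n, m)          # walk back to recover the 1-1 matches
    while ij != (0, 0):
        pi, pj = back[ij[0]][ij[1]]
        if pi == ij[0] - 1 and pj == ij[1] - 1:
            pairs.append((pi, pj))
        ij = (pi, pj)
    return pairs[::-1]
```

Given source lengths [10, 3, 12] and target lengths [11, 12], the cheapest path matches the 10 with the 11 and the 12 with the 12, skipping the 3-token source sentence.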

Peter F. Brown; Jennifer C. Lai; Robert L. Mercer

1991-01-01

388

Microeconomic Scheduler for Parallel Computers.  

National Technical Information Service (NTIS)

We describe a scheduler based on the microeconomic paradigm for scheduling on-line a set of parallel jobs in a multiprocessor system. In addition to the classical objectives of increasing the system throughput and reducing the response time, we consider f...

I. Stoica H. Abdel-wahab A. Pothen

1995-01-01

389

Parallel Programming Examples using MPI  

NSDL National Science Digital Library

Despite the rate at which computers have advanced in recent history, human imagination has advanced faster. Often greater computing power can be achieved by having multiple computers work together on a single problem. This tutorial discusses how Message Passing Interface (MPI) can be used to implement parallel programming solutions in a variety of cases.

Joiner, David; The Shodor Education Foundation, Inc.

390

Scalable parallel suffix array construction  

Microsoft Academic Search

Suffix arrays are a simple and powerful data structure for text processing that can be used for full text indexes, data compression, and many other applications, in particular in bioinformatics. We describe the first implementation and experimental evaluation of a scalable parallel algorithm for suffix array construction. The implementation works on distributed memory computers using MPI. Experiments with up to
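The data structure itself is simply the lexicographic order of all suffixes of a text. A compact sequential sketch by prefix doubling, which sorts suffixes by their first 2^k characters and doubles k each round (the paper's contribution is doing this scalably in parallel with MPI; this sketch only illustrates the structure being built):

```python
def suffix_array(text):
    """Return the suffix array of a non-empty string via prefix doubling."""
    n = len(text)
    rank = [ord(c) for c in text]   # rank of each suffix by its first char
    sa = list(range(n))
    k = 1
    while True:
        # sort by (rank of first k chars, rank of next k chars)
        key = lambda i: (rank[i], rank[i + k] if i + k < n else -1)
        sa.sort(key=key)
        new_rank = [0] * n          # re-rank; equal keys share a rank
        for idx in range(1, n):
            new_rank[sa[idx]] = new_rank[sa[idx - 1]] + (key(sa[idx]) != key(sa[idx - 1]))
        rank = new_rank
        if rank[sa[-1]] == n - 1:   # all ranks distinct: fully sorted
            return sa
        k *= 2
```

For "banana" the suffixes in sorted order begin at positions 5, 3, 1, 0, 4, 2.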

Fabian Kulla; Peter Sanders

2007-01-01

391

The plane with parallel coordinates  

Microsoft Academic Search

By means of Parallel Coordinates, planar “graphs” of multivariate relations are obtained. Certain properties of the relationship correspond to the geometrical properties of its graph. On the plane a point ↔ line duality with several interesting properties is induced. A new duality between bounded and unbounded convex sets and hstars (a generalization of hyperbolas) and between Convex Unions and Intersections is found. This
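The point-line duality can be made concrete: with the two vertical axes drawn a distance d apart, a Cartesian point (a, b) becomes the segment joining height a on the first axis to height b on the second, and every point of the line y = m·x + c maps to a segment through one common point, the line's dual. A small sketch with our own function names:

```python
def point_to_segment(a, b, d=1.0):
    """Cartesian point (a, b) -> segment from (0, a) to (d, b)
    between parallel axes drawn d apart."""
    return (0.0, a), (d, b)

def line_to_point(m, c, d=1.0):
    """Dual of the line y = m*x + c (undefined when m == 1):
    the common crossing point of all its points' segments."""
    return d / (1.0 - m), c / (1.0 - m)

def segment_at(seg, x):
    """Height of a segment (extended as a line) at horizontal position x."""
    (x0, y0), (x1, y1) = seg
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)
```

Checking two points on y = -2x + 3 confirms that their segments both pass through the dual point (1/3, 1).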

Alfred Inselberg

1985-01-01

392

Resequencing Considerations in Parallel Downloads  

Microsoft Academic Search

Several recent studies have proposed methods to accelerate the receipt of a file by downloading its parts from different servers in parallel. This paper formulates models for an approach based on receiving only one copy of each of the data packets in a file, while different packets may be obtained from different sources. This approach guarantees
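When each packet arrives from whichever source delivers it first, packets arrive out of order and must wait in a resequencing buffer until the next in-sequence packet appears. A minimal simulation of that buffer (names and the burst/peak bookkeeping are our own framing, not the paper's model):

```python
import heapq

def resequence(arrivals):
    """Deliver packets in sequence from an out-of-order arrival stream.

    Early packets wait in a min-heap (the resequencing buffer); each
    in-order run is released as one burst.
    Returns (list of delivery bursts, peak buffer occupancy)."""
    buf, next_seq = [], 0
    bursts, peak = [], 0
    for seq in arrivals:
        heapq.heappush(buf, seq)
        peak = max(peak, len(buf))
        burst = []
        while buf and buf[0] == next_seq:   # release any in-order run
            burst.append(heapq.heappop(buf))
            next_seq += 1
        if burst:
            bursts.append(burst)
    return bursts, peak
```

An arrival order of 2, 0, 1, 3 forces packet 2 to wait; when packet 1 arrives, packets 1 and 2 are released together as one burst.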

Yoav Nebat; Moshe Sidi

2002-01-01

393

Portable Parallel Programming in HPC  

Microsoft Academic Search

HPC++ is a C++ library and language extension framework that is being developed by the HPC++ consortium as a standard model for portable parallel C++ programming. This paper provides a brief introduction to HPC++ style programming and outlines some of the unresolved issues...

Peter H. Beckman; Dennis Gannon; Elizabeth Johnson

1996-01-01

394

Cluster-based parallel image processing toolkit  

NASA Astrophysics Data System (ADS)

Many image processing tasks exhibit a high degree of data locality and parallelism and map quite readily to specialized massively parallel computing hardware. However, as network technologies continue to mature, workstation clusters are becoming a viable and economical parallel computing resource, so it is important to understand how to use these environments for parallel image processing as well. In this paper we discuss our implementation of a parallel image processing software library (the Parallel Image Processing Toolkit). The Toolkit uses a message-passing model of parallelism designed around the Message Passing Interface (MPI) standard. Experimental results are presented to demonstrate the parallel speedup obtained with the Parallel Image Processing Toolkit in a typical workstation cluster over a wide variety of image processing tasks. We also discuss load balancing and the potential for parallelizing portions of image processing tasks that seem to be inherently sequential, such as visualization and data I/O.
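
The data-parallel decomposition such a toolkit relies on can be sketched as follows: split the image into horizontal strips, process each strip independently, and reassemble in order. A thread pool stands in for the MPI worker cluster, and `parallel_brighten` is a hypothetical kernel, not one from the Toolkit.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten_rows(rows, factor):
    # Per-strip kernel: scale every pixel, clamping to the 8-bit range.
    return [[min(255, int(p * factor)) for p in row] for row in rows]

def parallel_brighten(image, factor, nworkers=2):
    # Split the image (a list of rows) into strips, process each strip
    # in its own worker, then concatenate the results in order.
    size = (len(image) + nworkers - 1) // nworkers
    strips = [image[i * size:(i + 1) * size] for i in range(nworkers)]
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        done = pool.map(lambda s: brighten_rows(s, factor), strips)
    out = []
    for strip in done:
        out.extend(strip)
    return out
```

Operations with purely local pixel dependencies partition this way trivially; stencils and global reductions need halo exchange or gather steps.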

Squyres, J. M.; Lumsdaine, A.; Stevenson, Robert L.

1995-03-01

395

Demonstrating tail-gas treater reliability reduces costs  

SciTech Connect

The reliability of a hybrid tail-gas treating unit (TGTU), proposed as an alternative to parallel TGTUs, is nearly equal to that of two parallel units. This is proven using fault tree analysis. A Gulf Coast refiner was able to reduce major process equipment needed to satisfy environmental regulations and permit expansion of sulfur recovery facilities. Estimated capital cost savings of 67% are achievable by installing a hybrid system instead of a complete unit to process tail gas from the second of two parallel Claus sulfur recovery units. Using the same component failure rates, component repair times and human error probabilities for both cases and applying well-documented methods of quantifying onstream time, it is shown that there is not a meaningful difference in the annual sulfur dioxide (SO{sub 2}) emissions between the two cases. The hybrid unit provides the reliability and onstream time required by regulatory agencies, while reducing the capital outlay to comply with environmental regulations. The paper discusses background of the problem, fault tree methodology, two case descriptions, fault tree development, quench tower subsystem failure, component reliability data, quantitative fault tree analysis, and emissions comparison.
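
In the simplest independent-failure model, the arithmetic behind such comparisons reduces to series and parallel combination rules; the numbers below are illustrative, not the refinery's failure data.

```python
def series(*rs):
    # A series system works only if every component works.
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel(*rs):
    # A parallel (redundant) system fails only if every component fails.
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q
```

Two redundant units of reliability 0.9 yield 0.99, which is why a cheaper hybrid unit that approaches the reliability of full duplication is attractive. Fault tree analysis, as used in the record, generalizes these rules to arbitrary AND/OR failure logic.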

Kafesjian, A.S.; Dewey, R.C. [Ford, Bacon and Davis, Inc., Salt Lake City, UT (United States)

1995-04-01

396

Selection Panel for Reliable Soldering Systems  

Microsoft Academic Search

In order to guarantee a good solder reliability, the choice of soldering partners is crucial. With the on-going reduction of dimension of solder joints, intermetallic phase formation becomes an issue for the long-term-stability. The intermetallics can be mostly observed after soldering. Especially lead-fee solders are well-known for fast reaction by needle-shape forming intermetallics as well as by consumption of solder

S. Nieland; M. Bahr

2006-01-01

397

Supporting server-level fault tolerance in concurrent-push-based parallel video servers  

Microsoft Academic Search

Parallel video servers have been proposed for building large-scale video-on-demand (VoD) systems from mul- tiple low-cost servers. However, when adding more servers to scale up the capacity, system-level reliability will decrease as failure of any one of the servers will cripple the entire system. To tackle this reliability problem, this paper proposes and analyzes architectures to support server-level fault tolerance

Jack Y. B. Lee

2001-01-01

398

Softspec: Software-based Speculative Parallelism  

Microsoft Academic Search

We present Softspec, a technique for parallelizing sequential applications using a hybrid compile-time and run-time technique. Softspec parallelizes loops whose memory references are stride-predictable. By detecting and speculatively executing potential parallelism at runtime Softspec eliminates the need for complex program analysis required by parallelizing compilers. By using runtime information Softspec succeeds in parallelizing loops whose memory access patterns are statically...

Derek Bruening; Srikrishna Devabhaktuni; Saman Amarasinghe

2000-01-01

399

Reservoir Thermal Recovery Simulation on Parallel Computers  

Microsoft Academic Search

The rapid development of parallel computers has provided a hardware background for massively refined reservoir simulation. However, the lack of parallel reservoir simulation software has blocked the application of parallel computers to reservoir simulation. Although a variety of parallel methods have been studied and applied to black oil, compositional, and chemical model numerical simulations, there has been limited parallel software...

Baoyan Li; Yuanle Ma

2000-01-01

400

Characterizing Correctness Properties of Parallel Programs Using Fixpoints  

Microsoft Academic Search

We have shown that correctness properties of parallel programs can be described using computation trees and that from these descriptions fixpoint characterizations can be generated. We have also given conditions on the form of computation tree descriptions to ensure that a correctness property can be characterized using continuous fixpoints. A consequence is that a correctness property such as inevitability under...

E. Allen Emerson; Edmund M. Clarke

1980-01-01

401

Enabling active storage on parallel I/O software stacks  

Microsoft Academic Search

As data sizes continue to increase, the concept of active storage is well fitted for many data analysis kernels. Nevertheless, while this concept has been investigated and deployed in a number of forms, enabling it from the parallel I/O software stack has been largely unexplored. In this paper, we propose and evaluate an active storage system that allows data analysis...

Seung Woo Son; Samuel Lang; Philip Carns; Robert Ross; Rajeev Thakur; Berkin Ozisikyilmaz; Prabhat Kumar; Wei-Keng Liao; Alok Choudhary

2010-01-01

402

P4M: A Parallel Version Of P3M  

NASA Astrophysics Data System (ADS)

We present a basic parallel implementation of the particle--particle/particle--mesh (P3M) code for cosmological simulations on the IBM RS/6000 SP distributed-memory supercomputer using explicit message passing (in the form of MPL). The resulting code (P4M) is tested and compared to the serial version. We examine performance issues and discuss applications and future modifications.

Brieu, Philippe P.; Evrard, August E.

1998-05-01

403

Parallel test description and analysis of parallel test system speedup through Amdahl's law  

Microsoft Academic Search

This paper will outline various types of parallel test, discuss an adaptation of Amdahl's law to parallel test, and discuss possible extensions to ATML for parallel test. Amdahl's law is an equation in computer science that is used to derive the speedup gained through parallelizing software; it expresses the speedup as a function of the number of processors. Parallel test...
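
Amdahl's law itself is a one-line formula: with parallelizable fraction p of the work and n processors, speedup = 1 / ((1 - p) + p/n). A minimal sketch:

```python
def amdahl_speedup(p, n):
    # p: fraction of the work that can be parallelized (0 <= p <= 1)
    # n: number of processors (or parallel test stations)
    return 1.0 / ((1.0 - p) + p / n)
```

The serial fraction dominates quickly: with p = 0.9, ten processors give only about 5.3x, and even infinitely many processors cannot exceed 1/(1 - p) = 10x.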

Nathan Waivio

2007-01-01

404

Nuclear weapon reliability evaluation methodology  

SciTech Connect

This document provides an overview of those activities that are normally performed by Sandia National Laboratories to provide nuclear weapon reliability evaluations for the Department of Energy. These reliability evaluations are first provided as a prediction of the attainable stockpile reliability of a proposed weapon design. Stockpile reliability assessments are provided for each weapon type as the weapon is fielded and are continuously updated throughout the weapon stockpile life. The reliability predictions and assessments depend heavily on data from both laboratory simulation and actual flight tests. An important part of the methodology is the set of review opportunities that occur throughout the entire process, which assure a consistent approach and appropriate use of the data for reliability evaluation purposes.

Wright, D.L. [Sandia National Labs., Albuquerque, NM (United States)

1993-06-01

405

Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes  

SciTech Connect

This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.

Parsons, I D; Solberg, J M

2006-02-03

406

JPARSS: A Java Parallel Network Package for Grid Computing  

SciTech Connect

The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance due to network tuning of TCP window size to improve bandwidth and to reduce latency on a high speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously and allows Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning TCP window size. This package enables single sign-on, certificate delegation and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments will be presented to show that using Java parallel streams is more effective than tuning TCP window size. In addition a simple architecture using Web services...

Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

2002-03-01

407

Reliability Evaluation of Solar Photovoltaic Arrays  

NASA Astrophysics Data System (ADS)

The operational lifetimes of large solar PV arrays are investigated using probability theory for the assessment of reliability. Arrays based on the following three solar cell interconnection schemes have been considered: (i) the simple series-parallel (SP) array, (ii) the total-cross-tied (TCT) array, obtained from the SP array by connecting ties across each row of junctions, and (iii) the bridge-linked (BL) array, in which all cells are interconnected in bridge rectifier fashion. To evaluate the reliability of the bridge-linked configuration, the cut-set technique is used. Computational results based on arrays consisting of (720 x 20) solar cells indicate that the operational life of an array is almost doubled by the introduction of cross ties (TCT or BL schemes) in the array. The operational lifetime can be further increased by approximately 30% by modularized networks based on TCT and BL configurations. These results are based on a theoretical analysis, however, and not on measured efficiency and life expectancy of solar cells.
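
In the simplest independent-failure model, the SP array combines a series rule within each string (every cell must work) with a parallel rule across strings (one working string suffices). A sketch with illustrative parameters, not the paper's (720 x 20) analysis:

```python
def sp_array_reliability(r, cells_per_string, strings):
    # Simple series-parallel (SP) array, independent cell failures:
    # a string works only if all of its series-connected cells work;
    # the array works if at least one string works.  r is the
    # single-cell reliability.
    string_ok = r ** cells_per_string
    return 1.0 - (1.0 - string_ok) ** strings
```

Cross ties (the TCT and BL schemes) improve on this because a single failed cell no longer disables an entire string, which is what the cut-set analysis in the record quantifies.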

Gautam, Nalin K.; Kaushika, N. D.

2002-02-01

408

US electric power system reliability  

NASA Astrophysics Data System (ADS)

Electric energy supply, transmission and distribution systems are investigated in order to determine priorities for legislation. The status and the outlook for electric power reliability are discussed.

409

Numerical analysis and parallel processing  

SciTech Connect

Each week of this three week meeting was a self-contained event, although each had the same underlying theme: the effect of parallel processing on numerical analysis. Each week provided the opportunity for intensive study to broaden participants' research interests or deepen their understanding of topics of which they already had some knowledge. There was also the opportunity for continuing individual research in the stimulating environment created by the presence of several experts of international stature. This book contains lecture notes for most of the major courses of lectures presented at the meeting; they cover topics in parallel algorithms for large sparse linear systems and optimization, an introductory survey of level-index arithmetic and superconvergence in the finite element method.

Turner, P.R. (U.S. Naval Academy, Annapolis, MD (US))

1989-01-01

410

Parallel supercomputing with commodity components  

SciTech Connect

We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 x 10{sup 15} floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

Warren, M.S.; Goda, M.P. [Los Alamos National Lab., NM (United States)]; Becker, D.J. [Goddard Space Flight Center, Greenbelt, MD (United States)]; and others

1997-09-01

411

Parallel Materialization of Large ABoxes  

PubMed Central

This paper is concerned with the efficient computation of materialization in a knowledge base with a large ABox. We present a framework for performing this task on a shared-nothing parallel machine. The framework partitions TBox and ABox axioms using a min-min strategy. It utilizes an existing system, like SwiftOWLIM, to perform local inference computations and coordinates exchange of relevant information between processors. Our approach is able to exploit parallelism in the axioms of the TBox to achieve speedup in a cluster. However, this approach is limited by the complexity of the TBox. We present an experimental evaluation of the framework using datasets from the Lehigh University Benchmark (LUBM).

Narayanan, Sivaramakrishnan; Catalyurek, Umit; Kurc, Tahsin; Saltz, Joel

2011-01-01

412

Algorithms for parallel polygon rendering  

SciTech Connect

This book is the result of research in the implementation of polygon-based graphics operations on certain general purpose parallel processors; the aim is to provide a speed-up over sequential implementations of the graphics operations concerned, and the resulting software can be viewed as a subset of the application suites of the relevant parallel machines. A literature review and a brief description of the architectures considered give an introduction into the field. Most algorithms are consistently presented in an extension of the Occam language which includes single instruction multiple data stream (SIMD) data types and operations on them. Methods for polygon rendering - including the operations of filling, hidden surface elimination and smooth shading - are presented for SIMD architectures like the DAP and for a dual-paradigm (SIMD-MIMD) machine constructed out of a DAP-like processor array and a transputer network. Polygon clipping algorithms for both transputer and the DAP are described and contrasted.

Theoharis, T. (St. Catherine's College, Cambridge (GB))

1989-01-01

413

Permission Forms  

ERIC Educational Resources Information Center

The prevailing practice in public schools is to routinely require permission or release forms for field trips and other activities that pose potential for liability. The legal status of such forms varies, but they are generally considered to be neither rock-solid protection nor legally valueless in terms of immunity. The following case and the…

Zirkel, Perry A.

2005-01-01

414

An efficient SAR parallel processor  

Microsoft Academic Search

A parallel architecture especially designed for a synthetic-aperture-radar (SAR) processing algorithm based on an appropriate two-dimensional fast Fourier transform (FFT) code is presented. The algorithm is briefly summarized, and the FFT code is given for the one-dimensional case, although all results can be immediately generalized to the double FFT. The computer architecture, which consists of a toroidal net with transputers

Giorgio Franceschetti; ANTONINO MAZZEO; NICOLA MAZZOCCA; V. Pascazio; GILDA SCHIRINZI

1991-01-01

415

The Jitney Parallel Optical Interconnect  

Microsoft Academic Search

The Jitney Parallel Optical Interconnect consists of a transmitter module, a receiver module, and a cable capable of sending 1 GigaByte/sec over 1-100 meter spans. This technology has been developed to be cost competitive with copper bus technology, while still offering all the features of optical interconnects. Jitney has a two Byte wide interface with extra lines for clocking and...

J. D. Crow; Joong-Ho Choi; M. S. Cohen; G. Johnson; D. Kuchta; D. Lacey; S. Ponnapalli; P. Pepeljugoski; K. Stawiasz; J. Trewhella; P. Xiao; S. Tremblay; S. Ouimet; A. Lacerte; M. Gauvin; D. Booth; W. Nation; T. L. Smith; B. A. DeBaun; G. D. Henson; S. A. Igl; N. A. Lee; A. J. Piekarczyk; A. S. Kuczma; S. L. Spanoudis

1996-01-01

416

Parallel Structured Adaptive Mesh Refinement  

Microsoft Academic Search

Parallel structured adaptive mesh refinement is a technique for efficient utilization of computational resources. It reduces the computational effort and memory requirements needed for numerical simulation of complex phenomena, described by partial differential equations. Structured adaptive mesh refinement (SAMR) is applied in simulations where the domain is divided into logically rectangular patches, where each patch is discretized with a structured...

Jarmo Rantakokko; Michael Thuné

417

Load balancing for parallel forwarding  

Microsoft Academic Search

Workload distribution is critical to the performance of network processor based parallel forwarding systems. Scheduling schemes that operate at the packet level, e.g., round-robin, cannot preserve packet-ordering within individual TCP connections. Moreover, these schemes create duplicate information in processor caches and therefore are inefficient in resource utilization. Hashing operates at the flow level and is naturally able to maintain per-connection...
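
Flow-level hashing can be sketched in a few lines: hash the connection 5-tuple so that every packet of a flow lands on the same processor, preserving per-connection packet order. A stable checksum is used here (CRC32, an assumption; the paper does not prescribe a particular hash, and Python's built-in `hash()` is randomized across runs).

```python
import zlib

def assign_processor(flow, nprocs):
    # flow: the connection 5-tuple (src IP, src port, dst IP, dst port,
    # protocol).  Hashing the whole tuple keeps all packets of one TCP
    # connection on one processor, preserving their order.
    key = ",".join(map(str, flow)).encode()
    return zlib.crc32(key) % nprocs
```

The trade-off the record studies follows directly: a deterministic flow-to-processor mapping preserves ordering and cache affinity, but a few heavy flows can leave the load unbalanced.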

Weiguang Shi; M. H. MacGregor; Pawel Gburzynski

2005-01-01

418

Parallel strategies for SAR processing  

NASA Astrophysics Data System (ADS)

This article proposes a series of strategies for improving the computational processing of the Synthetic Aperture Radar (SAR) signal, following the three usual lines of action for speeding up the execution of any computer program. On the one hand, it studies the optimization of both the data structures and the application architecture; on the other hand, it considers hardware improvements. For the former, the data structures usually employed in SAR processing are studied, the use of parallel ones is proposed, and the parallelization of the algorithms employed in the process is described. In addition, the parallel application architecture classifies processes as fine or coarse grain; these are assigned to individual processors or divided among processors, each in its corresponding architecture. For the latter, the hardware employed in the parallel computer processing used for SAR handling is studied. The improvement here concerns the several kinds of platforms on which SAR processing is implemented: shared memory multiprocessors and distributed memory multicomputers. A comparison between them gives some guidelines to follow in order to obtain maximum throughput with minimum latency, and maximum effectiveness with minimum cost, together with limited complexity. It is concluded that processing the algorithms in a GNU/Linux environment on a Beowulf cluster platform offers, under certain conditions, the best compromise between performance and cost, and promises the greatest development for computation-hungry Synthetic Aperture Radar applications in the coming years.

Segoviano, Jesus A.

2004-12-01

419

Parallel multi-delay simulation  

Microsoft Academic Search

The Multi-Delay Parallel (MDP) algorithm is an unconventional multi-delay algorithm in that it uses no timing wheel, or any event-sorting mechanism of any kind. Instead, wide bit-fields containing net values for several different times are used to resolve out-of-order events, and bit-parallel operations are performed to simulate the required gates. The MDP algorithm was designed to be implemented in...

Yun Sik Lee; Peter M. Maurer

1993-01-01

420

Enabling parallel computing in CRASH  

NASA Astrophysics Data System (ADS)

We present the new parallel version (PCRASH2) of the cosmological radiative transfer code CRASH2 for distributed memory supercomputing facilities. The code is based on a static domain decomposition strategy inspired by geometric dilution of photons in the optically thin case, which ensures a favourable performance speed-up with an increasing number of computational cores. Linear speed-up is ensured as long as the number of radiation sources is equal to the number of computational cores or larger. The propagation of rays is segmented and rays are only propagated through one sub-domain per time-step to guarantee an optimal balance between communication and computation. We have extensively checked PCRASH2 with a standardized set of test cases to validate the parallelization scheme. The parallel version of CRASH2 can easily handle the propagation of radiation from a large number of sources and is ready for the extension of the ionization network to species other than hydrogen and helium.

Partl, A. M.; Maselli, A.; Ciardi, B.; Ferrara, A.; Müller, V.

2011-06-01

421

UW Madison Libraries: Parallel Press  

NSDL National Science Digital Library

UW-Madison Libraries' Parallel Press combines book publishing traditions with new technology to provide print-on-demand books and a series of chapbooks (small, inexpensive books featuring the works of authors and poets with a Wisconsin connection). Print-on-demand books parallel the online editions created by the Libraries' digitizing initiatives. Currently, four titles, including David Hayman's A First-Draft Version of Finnegans Wake (originally published in 1963) and The Book of Beasts (1954), by T.H. White, are available via the Parallel Press print-on-demand service. The poetry chapbook series began in 1999 with the publication of four Wisconsin poets (Elizabeth Oness, Max Garland, Katharine Whitcomb, and Andrea Potos) and has continued with six chapbooks per year. A prose chapbook series began in 2002 with American Trilogy. This chapbook consists of historical reproductions of the American Declaration of Independence, Constitution, and Bill of Rights, with introductory material by UW Professor Stephen E. Lucas, and an afterword by John P. Kaminski, Director of the Center for the Study of the American Constitution -- published as part of a university-wide reflection, one year later, on the impact of the Sept. 11, 2001, terrorist attacks.

422

Parallel Environment for Quantum Computing  

NASA Astrophysics Data System (ADS)

To facilitate numerical study of noise and decoherence in QC algorithms, and of the efficacy of error correction schemes, we have developed a Fortran 90 quantum computer simulator with parallel processing capabilities. It permits rapid evaluation of quantum algorithms for a large number of qubits and for various ``noise'' scenarios. State vectors are distributed over many processors, to employ a large number of qubits. Parallel processing is implemented by the Message-Passing Interface protocol. A description of how to spread the wave function components over many processors, along with how to efficiently describe the action of general one- and two-qubit operators on these state vectors, will be delineated. Grover's search and Shor's factoring algorithms with noise will be discussed as examples. A major feature of this work is that concurrent versions of the algorithms can be evaluated with each version subject to diverse noise effects, corresponding to solving a stochastic Schrodinger equation. The density matrix for the ensemble of such noise cases is constructed using parallel distribution methods to evaluate its associated entropy. Applications of this powerful tool are made to delineate the stability and correction of QC processes using Hamiltonian based dynamics.
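
The core kernel such a simulator distributes across processors, applying a one-qubit operator to the full state vector, can be sketched sequentially; the names below are illustrative (the actual code is Fortran 90 with MPI).

```python
import math

# Hadamard gate: a 2x2 unitary used here as the example operator.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply_one_qubit_gate(state, gate, target):
    # state: list of 2**n amplitudes for an n-qubit register.
    # Pair up amplitudes whose indices differ only in the target bit
    # and mix each pair through the 2x2 gate.
    new = list(state)
    step = 1 << target
    for i in range(len(state)):
        if i & step == 0:
            a, b = state[i], state[i | step]
            new[i] = gate[0][0] * a + gate[0][1] * b
            new[i | step] = gate[1][0] * a + gate[1][1] * b
    return new
```

In the distributed version, amplitudes are spread over processors, and a gate on a high-order qubit pairs amplitudes held on different processors, which is where MPI communication enters.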

Tabakin, Frank; Diaz, Bruno Julia

2009-03-01

423

Overt versus covert assessment of observer reliability.  

PubMed

This study examined the tendency of observers to make less reliable recordings of behavioral events when a calibrating observer is absent. Experienced observers in 4 different research sites coded videotapes of family interaction using the multicategory system developed at each site. 60 tapes were coded simultaneously by randomly selected observer pairs (overt reliability assessment) and 40 tapes were coded independently by 2 observers who did not know that their entries could be compared (covert assessment). Within site, intraclass correlations (ICCs) were computed separately for both forms of reliability assessment and a variety of behaviors. Overt ICCs were very high for most behaviors in all 4 systems. The corresponding covert reliabilities were significantly lower. Covert decline was conspicuous in the first 10 min of a 1-hour coding session. Hence, observer fatigue was not its principal cause. Apparently, observers lapse into a less attentive "set" prior to coding without a partner. This tendency is most discernible when a highly complex system is employed. PMID:6734309

Weinrott, M R; Jones, R R

1984-06-01

424

Reliability analysis of airship remote sensing system  

NASA Astrophysics Data System (ADS)

The Airship Remote Sensing System (ARSS), used to obtain dynamic or real-time images in remote sensing of catastrophes and the environment, is a complex mixed system. Its sensor platform is a remotely controlled airship. The achievement of a remote sensing mission depends on a series of factors, so it is very important to analyze the reliability of the ARSS. First, the system model was simplified from a multi-stage system to a two-state system on the basis of the results of failure mode and effect analysis and failure mode, effect and criticality analysis. The fault tree was created after analyzing all factors and their interrelations. This fault tree includes four branches: the engine subsystem, the remote control subsystem, the airship construction subsystem, and the flying meteorology and climate subsystem. By way of fault tree analysis and classification of basic events, the weak links were discovered. The results of test running showed no difference in comparison with the theoretical analysis. In accordance with the above conclusions, a plan for reliability growth and reliability maintenance was proposed. System reliability was raised from 89 percent to 92 percent through reform of the man-machine interactive interface and the addition of secondary backup and secondary remote control equipment.

Qin, Jun

1998-08-01

425

The reliability of counting actinic keratosis.  

PubMed

Many epidemiological studies and clinical trials have been performed concerning actinic keratoses. The most common endpoint in the majority of articles is the counting of actinic keratoses before and after treatment; nevertheless, some authors argue that this is not a reliable form of evaluation. The aim of this study was to evaluate actinic keratosis counting by various raters and suggest approaches to increase its reliability. Cross-sectional study: forty-three patients were evaluated by four raters (inter- and intra-rater assessment) on the face and forearms. The mean actinic keratosis counts on the face and forearms were 7.7 and 9.1. The overall agreement among the raters for the facial and forearm actinic keratoses was 0.74 and 0.77, respectively. The intra-rater assessment showed high rates of agreement for the face (ICC = 0.93) and forearms (ICC = 0.83). Higher agreement occurred when counting up to five lesions. Four raters led to increased measurement variability and loss of reliability. Higher rates of agreement may be achieved with small numbers of lesions; limiting and/or segmenting body areas to reduce lesion counts in AK prevention designs are strategies that may lead to greater reliability of these measurements. PMID:24045957

Ianhez, M; Junior, L F F Fleury; Bagatin, E; Miot, H A

2013-09-18

426

18 CFR 39.5 - Reliability Standards.  

Code of Federal Regulations, 2010 CFR

...2009-04-01 2009-04-01 false Reliability Standards. 39.5 Section 39...CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.5 Reliability...

2009-04-01

427

18 CFR 39.5 - Reliability Standards.  

Code of Federal Regulations, 2013 CFR

...2013-04-01 2013-04-01 false Reliability Standards. 39.5 Section 39...CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.5 Reliability...

2013-04-01

428

18 CFR 39.5 - Reliability Standards.  

Code of Federal Regulations, 2010 CFR

...2010-04-01 2010-04-01 false Reliability Standards. 39.5 Section 39...CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.5 Reliability...

2010-04-01

429

Reliability Test Report. Modular Cryogenic Generator.  

National Technical Information Service (NTIS)

Reliability Testing of the LOX-30 Liquid Oxygen Plant was evaluated in accordance with MIL-STD-781. Reliability Tests were divided into Environmental Requirements, Reliability Growth and Reliability Demonstration Tests. Accept/reject criteria for the demo...

R. Ferret

1978-01-01

430

NWChem: scalable parallel computational chemistry  

SciTech Connect

NWChem is a general purpose computational chemistry code specifically designed to run on distributed memory parallel computers. The core functionality of the code focuses on molecular dynamics, Hartree-Fock and density functional theory methods for both plane-wave basis sets as well as Gaussian basis sets, tensor contraction engine based coupled cluster capabilities and combined quantum mechanics/molecular mechanics descriptions. It was realized from the beginning that scalable implementations of these methods required a programming paradigm inherently different from what message passing approaches could offer. In response a global address space library, the Global Array Toolkit, was developed. The programming model it offers is based on using predominantly one-sided communication. This model underpins most of the functionality in NWChem and the power of it is exemplified by the fact that the code scales to tens of thousands of processors. In this paper the core capabilities of NWChem are described as well as their implementation to achieve an efficient computational chemistry code with high parallel scalability. NWChem is a modern, open source, computational chemistry code1 specifically designed for large scale parallel applications2. To meet the challenges of developing efficient, scalable and portable programs of this nature a particular code design was adopted. This code design involved two main features. First of all, the code is build up in a modular fashion so that a large variety of functionality can be integrated easily. Secondly, to facilitate writing complex parallel algorithms the Global Array toolkit was developed. This toolkit allows one to write parallel applications in a shared memory like approach, but offers additional mechanisms to exploit data locality to lower communication overheads. This framework has proven to be very successful in computational chemistry but is applicable to any engineering domain. 
Within the context created by the features above, NWChem has grown into a general purpose computational chemistry code that supports a wide variety of energy expressions and capabilities to calculate properties based upon them. The main energy expressions are classical mechanics force fields, Hartree-Fock and DFT for both finite and condensed phase systems, coupled cluster, and QM/MM. For most energy expressions, single point calculations, geometry optimizations, excited states, and other properties are available. Below we briefly discuss each of the main energy expressions and the critical points involved in scalable implementations thereof.
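The one-sided, global-address-space model the abstract describes can be illustrated with a toy sketch. The class and method names below are hypothetical stand-ins, not the actual Global Array Toolkit API; real implementations distribute the blocks over processes and perform put/get/accumulate over the interconnect without the remote side participating.

```python
# Toy sketch of a one-sided, global-address-space array (hypothetical API;
# the real Global Array Toolkit distributes blocks over many processes).

class ToyGlobalArray:
    def __init__(self, n, nprocs):
        self.n = n
        self.nprocs = nprocs
        self.data = [0.0] * n          # stands in for distributed storage

    def owner(self, i):
        """Block distribution: which process owns element i."""
        block = (self.n + self.nprocs - 1) // self.nprocs
        return i // block

    def put(self, lo, hi, values):
        """One-sided write: the caller needs no cooperation from owners."""
        self.data[lo:hi] = values

    def get(self, lo, hi):
        """One-sided read of an arbitrary section."""
        return self.data[lo:hi]

    def acc(self, lo, hi, values):
        """One-sided accumulate (remote +=), common in NWChem-style codes."""
        for k, v in enumerate(values):
            self.data[lo + k] += v

ga = ToyGlobalArray(8, nprocs=4)
ga.put(0, 4, [1.0, 2.0, 3.0, 4.0])
ga.acc(2, 4, [10.0, 10.0])
print(ga.get(0, 4))    # [1.0, 2.0, 13.0, 14.0]
```

The point of the model is that `put`, `get`, and `acc` name a remote data section directly, so data locality can be exploited (keep `owner(i)` local) while still programming against one logically shared array.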

van Dam, Hubertus JJ; De Jong, Wibe A.; Bylaska, Eric J.; Govind, Niranjan; Kowalski, Karol; Straatsma, TP; Valiev, Marat

2011-11-01

431

Catalytic Parallel Kinetic Resolution under Homogeneous Conditions  

PubMed Central

Two complementary chiral catalysts, the phosphine 8d and the DMAP-derived ent-23b, are used simultaneously to selectively activate one of a mixture of two different achiral anhydrides as acyl donors under homogeneous conditions. The resulting activated intermediates 25 and 26 react with the racemic benzylic alcohol 5 to form enantioenriched esters (R)-24 and (S)-17 by fully catalytic parallel kinetic resolution (PKR). The aroyl ester (R)-24 is obtained with near-ideal enantioselectivity for the PKR process, but (S)-17 is contaminated by ca. 8% of the minor enantiomer (R)-17 resulting from a second pathway via formation of mixed anhydride 24 and its activation by 8d.

Duffey, Trisha A.; MacKay, James A.; Vedejs, Edwin

2010-01-01

432

Reliability of Microelectronic Circuit Connections.  

National Technical Information Service (NTIS)

This report describes the test program used for establishing failure rates of lap soldered and parallel gap welded microelectronic circuit connections in environmental tests simulating condition of usage. In order to determine failure rates based on pre-p...

R. D. Bryant; M. H. Bester; J. L. Behhedahl; A. G. Gross

1967-01-01

433

Maximizing Reliability While Scheduling Real-Time Task-Graphs on a Cluster of Computers  

Microsoft Academic Search

Improper scheduling of real-time applications on a cluster may lead to missed deadlines and offset the gain of using the system and software parallelism. Most existing scheduling algorithms do not consider the effects of factors such as real-time deadlines, system reliability, processing power fragmentation, inter-task communication, and degree of parallelism on performance. In this paper we introduce a new scheduling algorithm, which

Alaa Amin; Reda A. Ammar; Sanguthevar Rajasekaran

2005-01-01

434

Reliability analysis based on significance  

Microsoft Academic Search

Due to the expected increase of defects and errors in circuits based on deep submicron technologies, reliability has become an important design criterion. As reliability improvement is generally achieved by adding redundancy, identifying and classifying the critical blocks of a circuit is a major concern. This work presents two new classification methods regarding the significance of a block with respect to

Lirida A. de B. Naviner; Jean-Francois Naviner; Tian Ban; G. S. Gutemberg

2011-01-01

435

Lithium battery safety and reliability  

Microsoft Academic Search

Lithium batteries have been used in a variety of applications for a number of years. As their use continues to grow, particularly in the consumer market, a greater emphasis needs to be placed on safety and reliability. There is a useful technique which can help to design cells and batteries having a greater degree of safety and higher reliability. This

Samuel C. Levy

1991-01-01

436

Photovoltaic performance and reliability workshop.  

National Technical Information Service (NTIS)

This workshop was the sixth in a series of workshops sponsored by NREL/DOE under the general subject of photovoltaic testing and reliability during the period 1986--1993. PV performance and PV reliability are at least as important as PV cost, if not more....

L. Mrig

1993-01-01

437

Highly reliable harmony search algorithm  

Microsoft Academic Search

In this paper, after a literature overview, the study concentrates on the pitch adjustment ratio function of the harmony search algorithm. A more rational function is proposed which increases the robustness of the algorithm and therefore leads to a highly reliable algorithm. Simulations on a set of standard TSP problems demonstrate that the parameter of reliability (variance over average) has experienced 75%

Nima Taherinejad

2009-01-01

438

The Demonstration of Telemetry Reliability  

Microsoft Academic Search

This paper discusses the analytical techniques which can be utilized in the demonstration of telemetry reliability in a system development program. Analytical techniques have been established to provide telemetry equipment reliability predictions at periodic intervals as functional and environmental test data are accrued. A mathematical model has been developed which allows for a realistic and sound approach to the problem.

Edwin D. Karmiol; W. Thomas Weir; John S. Youtchaff

1963-01-01

439

Reliability analysis of distribution networks  

Microsoft Academic Search

Monitoring of failures and outages in the transmission and distribution of electrical energy is necessary for determining the reliability of network components and of the supply of electrical energy to consumers. Incorrect input data leads, of course, to false results even if the correct computing method is used. The paper deals with obtaining the reliability indices of a distribution network by analyzing

Radomir Gono; Stanislav Rusek; Michal Kratky

2007-01-01

440

Reliability of Optical Wireless Links  

Microsoft Academic Search

The article presents some problems of optical wireless link (OWL) reliability. The influence of the atmosphere on OWL reliability is explained. Link models are presented from which the dependability characteristics can be determined. A summary of the basic ways of improving reliability is given. Optical wireless links represent an alternative to optical-fibre systems for high bit rate, in

JIRI NEMECEK; VIERA BIOLKOVA; OTAKAR WILFERT; DALIBOR BIOLEK

441

Numerical Errors: Reliable Numerical Simulations  

SciTech Connect

Understanding numerical errors in long calculations is a very subtle science and is critical to understanding the reliability of the final answer. We will carefully examine the accumulation of numerical errors over time and discuss how these can lead to reliability estimates. The primary focus will be on a newly uncovered understanding of mode resolution which is at the heart of all numerical computations.

Jameson, L

2001-07-27

442

Software-Reliability-Engineered Testing  

Microsoft Academic Search

Software testing often results in delays to market and high cost without assuring product reliability. Software reliability engineered testing (SRET), an AT&T best practice, carefully engineers testing to overcome these weaknesses. The article describes SRET in the context of an actual project at AT&T, which is called Fone Follower. The author selected this example because of its simplicity; it in

John D. Musa

1996-01-01

443

The Validity of Reliability Measures  

Microsoft Academic Search

It is shown that some commonly used indices can be misleading in their quantification of reliability. The effects are seen most vividly in assessing the reliability of gain or difference scores. In addition, it is explained that the calculation of all the currently used indices is based on a wrong assumption about the value of the correlation coefficient between true

G. M. Seddon

1988-01-01

444

Reliability, sampling, and algorithmic randomness  

Microsoft Academic Search

We investigate the relationship between software testing, statistical reliability modelling, and random sampling. Let P be a program whose reliability is assessed statistically based on the number of times P fails when executed on a sample S of its inputs. We show that if this assessment is grossly misleading, then either the degree of randomness (Kolmogorov complexity) of S is

Andy Podgurski

1991-01-01

445

Quality improvement principles boost SCADA system reliability  

SciTech Connect

A major section of Chevron Pipe Line Co.'s SCADA system was recently brought up to the industry-standard 99.5% data-reporting reliability by an intercompany team applying quality improvement (QI) principles. To make the study manageable, the scope was limited to only half the CPL SCADA system, southeast Texas. The study concentrated on the 20% of these remote sites which all happened to operate below 90% reliability. The team surveyed 21 sites and recorded data on the root causes of reliability problems. The data were categorized and formed into a Pareto chart. This chart indicated that the root cause of 80% of problems was related to lack of maintenance on both radio equipment and RTU/PLCs. These results were presented to management along with recommendations for forming a quality improvement team to work on developing a preventive maintenance system, a task to be performed jointly by the radio technicians and the pipe line technicians. The goal was to allow the technicians to develop a working relationship with one another and to facilitate a better knowledge of the physical interfaces involved.
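The Pareto-chart step described above reduces to ranking root causes and finding the "vital few" that account for roughly 80% of failures. A minimal sketch, using hypothetical failure-cause counts rather than CPL's actual survey data:

```python
# Pareto analysis sketch: rank root causes and find the few that account
# for ~80% of failures (counts below are hypothetical, not CPL's data).
causes = {"radio maintenance": 45, "RTU/PLC maintenance": 35,
          "power supply": 10, "antenna alignment": 6, "other": 4}

ranked = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)
total = sum(causes.values())

cum = 0.0
vital_few = []
for name, count in ranked:
    cum += count
    vital_few.append(name)
    if cum / total >= 0.80:          # stop once 80% of failures are covered
        break

print(vital_few)   # ['radio maintenance', 'RTU/PLC maintenance']
```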

Boling, J.E. (Chevron Information Technology Co., New Orleans, LA (United States))

1994-08-01

446

A knowledge management system for series-parallel availability optimization and design  

Microsoft Academic Search

System availability is an important subject in the design of industrial systems as system structures become more complicated. Improving a system's reliability also increases its cost. Availability is increased by a redundancy system. The Redundancy Allocation Problem (RAP) of a series-parallel system is traditionally resolved by experienced system designers. We proposed a genetic
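The series-parallel availability being optimized can be computed directly: a stage of redundant components fails only if all of them fail, and the system is up only if every stage is up. A sketch with illustrative availabilities (not the paper's model):

```python
# Availability of a series-parallel system: each stage holds redundant
# components in parallel; stages are in series. Values are illustrative.

def stage_availability(avails):
    """Parallel redundancy: stage fails only if every component fails."""
    p_fail = 1.0
    for a in avails:
        p_fail *= (1.0 - a)
    return 1.0 - p_fail

def system_availability(stages):
    """Series of stages: all stages must be up."""
    total = 1.0
    for stage in stages:
        total *= stage_availability(stage)
    return total

# Three stages: two redundant 0.9 units, one 0.95 unit, three 0.8 units.
stages = [[0.9, 0.9], [0.95], [0.8, 0.8, 0.8]]
print(round(system_availability(stages), 6))
```

A RAP solver (genetic or otherwise) searches over the number of components per stage to maximize this quantity subject to a cost budget.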

Ying-shen Juang; Shui-shun Lin; Hsing-pei Kao

2008-01-01

447

Frequency Diagnostic Universal Fault Protection for Current Fed Parallel Electronic Resonant Ballast  

Microsoft Academic Search

Current fed parallel resonant electronic ballasts (PREB) dominate the applications for the T8 fluorescent lamp in North America for the advantage of low cost, flexibility and high reliability. At fault condition, PREB generates much heat externally, but it is difficult to provide fault protection using traditional approaches without sacrificing other advantages of PREB. In this paper, a digital solution is

Qinghong Yu; Joe Parisella

2006-01-01

448

Optimal design of series-parallel systems considering maintenance and salvage value  

Microsoft Academic Search

A reliability based design (RBD) model is developed for a series-parallel system with deteriorating components in order to minimize the life cycle cost of the system. The effects of fixed asset depreciation, preventive maintenance and minimal repair are incorporated in the model. We also propose equations to model the effects of preventive maintenance on the system's failure rate and the

Amit Monga; Ming J Zuo

2001-01-01

449

Parallel operation of single phase inverter modules with no control interconnections  

Microsoft Academic Search

To provide reliable power under scheduled and unscheduled outages requires an uninterruptible power supply (UPS) which can be easily expanded to meet the needs of a growing demand. A system such as this should also be fault tolerant and include the capability for redundancy. These goals can be met by paralleling together smaller inverters if a control scheme can be
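A standard way to parallel inverters with no control interconnections is frequency/voltage droop, where each unit measures only its own output power. The sketch below assumes a droop-style scheme with illustrative gains and setpoints; the paper's own control law may differ in detail.

```python
# Frequency/voltage droop sketch for paralleling inverters without
# control interconnections (gains and setpoints are illustrative).

def droop(p_out, q_out, f0=60.0, v0=120.0, kp=0.001, kq=0.002):
    """Each inverter lowers its frequency/voltage with its own output."""
    return f0 - kp * p_out, v0 - kq * q_out

# Two identical units sharing a load tend toward equal power because the
# heavier-loaded one runs at a lower frequency and sheds load:
f_a, v_a = droop(500.0, 100.0)
f_b, v_b = droop(700.0, 100.0)
print(f_a, f_b)   # 59.5 vs 59.3: unit B slows, shifting power back to A
```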

A. Tuladhar; H. Jin; T. Unger; K. Mauch

1997-01-01

450

Precise measurement of first Townsend coefficient, using parallel plate avalanche chamber  

NASA Astrophysics Data System (ADS)

By employing iso-C4H10 gas, we have studied the effective parameters in the first Townsend coefficient measurement using parallel plate avalanche chamber (PPAC). Obtained results are free from space charge and gap deformation effects, which have seriously affected previous PPAC-based measurements. The required conditions for a reliable Townsend coefficient measurement are presented as well.
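In a uniform-field avalanche the gain follows M = exp(αd), so the first Townsend coefficient α can be recovered from gain measurements at known gap widths; consistency of α across gaps is one check that space-charge effects are absent. The numbers below are illustrative, not the paper's measurements:

```python
import math

# First Townsend coefficient from avalanche gain in a uniform field:
# M = exp(alpha * d), hence alpha = ln(M) / d. Values are illustrative.

def townsend_alpha(gain, gap_cm):
    return math.log(gain) / gap_cm

# Two hypothetical gap settings at the same reduced field should yield
# the same alpha if space-charge and gap-deformation effects are absent:
a1 = townsend_alpha(gain=math.exp(6.0), gap_cm=0.3)
a2 = townsend_alpha(gain=math.exp(4.0), gap_cm=0.2)
print(a1, a2)   # both 20.0 ionizations per cm: a consistent measurement
```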

Nakhostin, Mohammad; Baba, Mamoru; Ohtsuki, Tsutomu; Oishi, Takuji; Itoga, Toshiro

2007-03-01

451

Dosage Form  

Center for Drug Evaluation (CDER)

... Concept ID: C42636. Version Number. 008. Description. This standard provides for all drug dosage forms. The granularity of ...

452

Actigraph data are reliable, with functional reliability increasing with aggregation.  

PubMed

Motion sensor devices such as actigraphs are increasingly used in studies that seek to obtain an objective assessment of activity level. They have many advantages, and are useful additions to research in fields such as sleep assessment, drug efficacy, behavior genetics, and obesity. However, questions still remain over the reliability of data collected using actigraphic assessment. We aimed to apply generalizability theory to actigraph data collected on a large, general-population sample in middle childhood, during 8 cognitive tasks across two body loci, and to examine reliability coefficients on actigraph data aggregated across different numbers of tasks and different numbers of attachment loci. Our analyses show that aggregation greatly increases actigraph data reliability, with reliability coefficients on data collected at one body locus during 1 task (.29) being much lower than that aggregated across data collected on two body loci and during 8 tasks (.66). Further increases in reliability coefficients by aggregating across four loci and 12 tasks were estimated to be modest in prospective analyses, indicating an optimum trade-off between data collection and reliability estimates. We also examined possible instrumental effects on actigraph data and found these to be nonsignificant, further supporting the reliability and validity of actigraph data as a method of activity level assessment. PMID:18697683
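The study's point that aggregation raises reliability can be illustrated with the single-facet Spearman-Brown projection, rel_k = k·r / (1 + (k−1)·r). This is only an analogy: the paper uses full generalizability theory over two facets (tasks and body loci), so these numbers do not reproduce its coefficients.

```python
# Single-facet illustration of why aggregation raises reliability:
# Spearman-Brown projection (an analogy, not the paper's G-theory model).

def spearman_brown(r, k):
    """Projected reliability of an average of k parallel measurements."""
    return k * r / (1.0 + (k - 1.0) * r)

r1 = 0.29                      # single task, single locus (from the study)
for k in (1, 4, 8, 16):
    print(k, round(spearman_brown(r1, k), 2))
```

As in the study, the gains flatten as k grows, which is the trade-off behind its "optimum" amount of data collection.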

Wood, Alexis C; Kuntsi, Jonna; Asherson, Philip; Saudino, Kimberly J

2008-08-01

453

Parallel channel instabilities in boiling water reactor systems: boundary conditions for out of phase oscillations  

Microsoft Academic Search

In this paper we study the boundary conditions during out of phase oscillations, in a system formed by two parallel channels coupled to multimodal neutron kinetics. The fact that the pressure drop can change with time, but remains the same in all the parallel channels, leads us to analytical integration of the time derivative term of the channel momentum equation,

J. L. Muñoz-Cobo; M. Z. Podowski; S. Chiva

2002-01-01

454

The openGL visualization of the 2D parallel FDTD algorithm  

NASA Astrophysics Data System (ADS)

This paper presents a way of visualizing a two-dimensional version of a parallel algorithm of the FDTD method. The visualization module was created on the basis of the OpenGL graphic standard with the use of the GLUT interface. In addition, the work includes efficiency results for the parallel algorithm in the form of speedup charts.

Walendziuk, Wojciech

2005-02-01

455

Multifrontal Parallel Distributed Symmetric and Unsymmetric Solvers.  

National Technical Information Service (NTIS)

The authors consider the solution of both symmetric and unsymmetric systems of sparse linear equations. A new parallel distributed memory multifrontal approach is described. To handle numerical pivoting efficiently, a parallel asynchronous algorithm with dynamic scheduling of th...

P. R. Amestoy; I. S. Duff; J. Y. L'Excellent

1998-01-01

456

Multilist Scheduling. A New Parallel Programming Model.  

National Technical Information Service (NTIS)

Parallel programming requires task scheduling to optimize performance; this primarily involves balancing the load over the processors. In many cases, it is critical to perform task scheduling at runtime. For example, (1) in many parallel applications the ...
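The load-balancing problem named above is often attacked with greedy list scheduling: assign each arriving task to the currently least-loaded processor. This is a generic sketch of that baseline, not the multilist algorithm the report proposes:

```python
import heapq

# Greedy runtime load balancing: each task goes to the least-loaded
# processor (a generic baseline, not the report's multilist scheduler).

def schedule(task_costs, nprocs):
    heap = [(0.0, p) for p in range(nprocs)]   # (load, processor id)
    heapq.heapify(heap)
    assignment = {}
    for tid, cost in enumerate(task_costs):
        load, p = heapq.heappop(heap)          # least-loaded processor
        assignment[tid] = p
        heapq.heappush(heap, (load + cost, p))
    return assignment, max(l for l, _ in heap)

assign, makespan = schedule([4, 3, 3, 2, 2], nprocs=2)
print(assign, makespan)   # greedy makespan 8 (optimal here would be 7)
```

The gap between the greedy makespan and the optimum is exactly the kind of slack that smarter runtime schedulers try to recover.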

I. C. Wu H. T. Kung P. Steenkiste D. O'Hallaron G. Thompson

1993-01-01

457

Delft Parallel Processor 84/16.  

National Technical Information Service (NTIS)

The development of the Delft Parallel Processor started in 1976. Since then the machine has grown to a processor with 16 independently operating, but tightly connected processing elements. The parallel processor is supervised by a host processor. In the f...

J. H. M. Andriessen

1986-01-01

458

Massive Parallelism in the Future of Science.  

National Technical Information Service (NTIS)

Massive parallelism appears in three domains of action of concern to scientists, where it produces collective action that is not possible from any individual agent's behavior. In the domain of data parallelism, computers comprising very large numbers of p...

P. J. Denning

1988-01-01

459

Parallel computational fluid dynamics - Implementations and results  

SciTech Connect

The present volume on parallel CFD discusses implementations on parallel machines, numerical algorithms for parallel CFD, and performance evaluation and computer science issues. Attention is given to a parallel algorithm for compressible flows through rotor-stator combinations, a massively parallel Euler solver for unstructured grids, a fast scheme to analyze 3D disk airflow on a parallel computer, and a block implicit multigrid solution of the Euler equations. Topics addressed include a 3D ADI algorithm on distributed memory multiprocessors, clustered element-by-element computations for fluid flow, hypercube FFT and the Fourier pseudospectral method, and an investigation of parallel iterative algorithms for CFD. Also discussed are fluid dynamics using interface methods on parallel processors, sorting for particle flow simulation on the connection machine, a large grain mapping method, and efforts toward a Teraflops capability for CFD.

Simon, H.D.

1992-01-01

460

Parallel computational fluid dynamics - Implementations and results  

NASA Astrophysics Data System (ADS)

The present volume on parallel CFD discusses implementations on parallel machines, numerical algorithms for parallel CFD, and performance evaluation and computer science issues. Attention is given to a parallel algorithm for compressible flows through rotor-stator combinations, a massively parallel Euler solver for unstructured grids, a fast scheme to analyze 3D disk airflow on a parallel computer, and a block implicit multigrid solution of the Euler equations. Topics addressed include a 3D ADI algorithm on distributed memory multiprocessors, clustered element-by-element computations for fluid flow, hypercube FFT and the Fourier pseudospectral method, and an investigation of parallel iterative algorithms for CFD. Also discussed are fluid dynamics using interface methods on parallel processors, sorting for particle flow simulation on the connection machine, a large grain mapping method, and efforts toward a Teraflops capability for CFD.

Simon, Horst D.

461

ENABLING PRIMITIVES FOR COMPILING PARALLEL LANGUAGES  

Microsoft Academic Search

This paper presents three novel language-implementation primitives (lazy threads, stacklets, and synchronizers) and shows how they combine to provide a parallel call at nearly the efficiency of a sequential call. The central idea is to transform parallel calls into parallel-ready sequential calls. Excess parallelism degrades into sequential calls with the attendant efficient stack management and direct transfer of control and data, unless a call truly needs to execute

1995-01-01

462

Parallelizing Algorithms for Symbolic Computation using ||MAPLE||  

Microsoft Academic Search

||MAPLE|| (speak: parallel Maple) is a portable system for parallel symbolic computation. The system is built as an interface between the parallel declarative programming language Strand and the sequential computer algebra system Maple, thus providing the elegance of Strand and the power of the existing sequential algorithms in Maple. The implementation of different parallel programming paradigms shows that it is fairly easy to parallelize even complex algebraic algorithms

Kurt Siegl

1993-01-01

463

Hybrid Parallel Programming on HPC Platforms  

Microsoft Academic Search

Summary: Most HPC systems are clusters of shared memory nodes. Parallel programming must combine the distributed memory parallelization on the node inter-connect with the shared memory parallelization inside of each node. Various hybrid MPI+OpenMP programming models are compared with pure MPI. Benchmark results of several platforms are presented. This paper analyzes the strength and weakness of several parallel programming

Rolf Rabenseifner

2003-01-01

464

Probabilistic modeling of aquifer heterogeneity using reliability methods  

Microsoft Academic Search

A probabilistic model of groundwater contaminant transport is presented. The model is based on coupling first- and second-order reliability methods (FORM and SORM) with a two-dimensional finite element solution of groundwater transport equations. Uncertainty in aquifer media is considered by modeling hydraulic conductivity as a spatial random field with a prescribed correlation structure. FORM and SORM provide the probability that

Clint N. Dawson

1996-01-01

465

High-reliability gas turbine combined-cycle development program: Phase I. Final report  

Microsoft Academic Search

The objective of the High Reliability Gas Turbine Combined Cycle Development Program is to generate a new conceptual centerline design for gas turbine and accessories, with reliability as the key parameter. Tradeoff studies of reliability vs cost, performance, firing temperature and other parameters formed the basis for all major design approaches and decisions. This program results in the conceptual design

Kunkel

1981-01-01

466

Reliability Based Design Optimization of Bridge Abutments Using Pseudo-dynamic Method  

Microsoft Academic Search

In this paper, the reliability of a gravity retaining wall bridge abutment is analyzed. The first order reliability method (FORM) is applied to estimate the component reliability indices of each failure mode and to assess the effect of uncertainties in design parameters. Two modes of failure namely rotation of the wall about its heel, sliding of the wall on its
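For the simplest case of a linear limit state g = R − S with independent normal resistance R and load S, the first order reliability method reduces to the reliability index β = (μR − μS)/√(σR² + σS²), with failure probability Φ(−β). The values below are illustrative, not the abutment model's:

```python
import math

# FORM sketch for a linear limit state g = R - S with independent normal
# resistance R and load S (illustrative values, not the abutment model):
# beta = (muR - muS) / sqrt(sR^2 + sS^2); failure probability = Phi(-beta).

def reliability_index(mu_r, s_r, mu_s, s_s):
    return (mu_r - mu_s) / math.sqrt(s_r**2 + s_s**2)

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

beta = reliability_index(mu_r=100.0, s_r=15.0, mu_s=60.0, s_s=20.0)
print(round(beta, 3), round(phi(-beta), 4))   # beta = 1.6, pf ~ 0.0548
```

For nonlinear limit states (rotation, sliding) FORM iterates to the design point, but the index computed there plays the same role.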

B. Munwar Basha; G. L. Sivakumar Babu

467

Parallel MRI at microtesla fields  

PubMed Central

Parallel imaging techniques have been widely used in high-field magnetic resonance imaging (MRI). Multiple receiver coils have been shown to improve image quality and allow accelerated image acquisition. Magnetic resonance imaging at ultra-low fields (ULF MRI) is a new imaging approach that uses SQUID (superconducting quantum interference device) sensors to measure the spatially encoded precession of pre-polarized nuclear spin populations at microtesla-range measurement fields. In this work, parallel imaging at microtesla fields is systematically studied for the first time. A seven-channel SQUID system, designed for both ULF MRI and magnetoencephalography (MEG), is used to acquire 3D images of a human hand, as well as 2D images of a large water phantom. The imaging is performed at 46 microtesla measurement field with pre-polarization at 40 mT. It is shown how the use of seven channels increases imaging field of view and improves signal-to-noise ratio for the hand images. A simple procedure for approximate correction of concomitant gradient artifacts is described. Noise propagation is analyzed experimentally, and the main source of correlated noise is identified. Accelerated imaging based on one-dimensional undersampling and 1D SENSE (sensitivity encoding) image reconstruction is studied in the case of the 2D phantom. Actual 3-fold imaging acceleration in comparison to single-average fully encoded Fourier imaging is demonstrated. These results show that parallel imaging methods are efficient in ULF MRI, and that imaging performance of SQUID-based instruments improves substantially as the number of channels is increased.
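The 1D SENSE unfolding used for the accelerated phantom images can be sketched in its simplest form: with acceleration R = 2, each aliased pixel is a coil-weighted sum of two true pixels half a field of view apart, and known coil sensitivities give a small linear system per pixel pair. The numbers below are synthetic, not data from the SQUID instrument:

```python
# Minimal 1D SENSE unfolding for 2 coils at acceleration R = 2:
# solve one 2x2 system per pair of superimposed pixels.

def solve2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Coil sensitivities at the two superimposed pixel locations (synthetic):
s = [(1.0, 0.4),   # coil 1 at (x1, x2)
     (0.3, 0.9)]   # coil 2 at (x1, x2)
rho = (2.0, 5.0)   # true pixel intensities (to be recovered)

# Aliased measurements: the folded image value seen by each coil.
y1 = s[0][0] * rho[0] + s[0][1] * rho[1]
y2 = s[1][0] * rho[0] + s[1][1] * rho[1]

est = solve2x2(s[0][0], s[0][1], s[1][0], s[1][1], y1, y2)
print(est)   # recovers the true intensities up to rounding
```

With more coils than the acceleration factor (seven channels here), the per-pixel system becomes overdetermined and is solved in the least-squares sense, which is where the SNR benefit of extra channels enters.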

Zotev, Vadim S.; Volegov, Petr L.; Matlashov, Andrei N.; Espy, Michelle A.; Mosher, John C.; Kraus, Robert H.

2008-01-01

468

Parallel Processing at the High School Level.  

ERIC Educational Resources Information Center

This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

Sheary, Kathryn Anne

469

Building Bilingual Dictionaries from Parallel Web Documents  

Microsoft Academic Search

In this paper we describe a system for automatically constructing a bilingual dictionary for cross-language information retrieval applications. We describe how we automatically target candidate parallel documents, filter the candidate documents and process them to create parallel sentences. The parallel sentences are then automatically translated using an adaptation of the EMIM technique and a dictionary of translation terms is created.

Craig J. A. Mcewan; Iadh Ounis; Ian Ruthven

2002-01-01

470

Inductive Information Retrieval Using Parallel Distributed Computation.  

ERIC Educational Resources Information Center

This paper reports on an application of parallel models to the area of information retrieval and argues that massively parallel, distributed models of computation, called connectionist, or parallel distributed processing (PDP) models, offer a new approach to the representation and manipulation of knowledge. Although this document focuses on…

Mozer, Michael C.

471

Deterministic asynchronous interpretation of parallel microprograms  

SciTech Connect

This paper is a continuation of the study of the deterministic properties of asynchronous interpretation of parallel microprograms. The author describes classes of parallel microprograms that generate deterministic asynchronous computations, presents the Petri nets modeling these computation, considers their special features for certain classes of microprograms, and states a necessary and sufficient condition for deterministic asynchronous interpretations of parallel microprograms.

Achasova, S.M.

1986-07-01

472

A Container-Iterator Parallel Programming Model  

Microsoft Academic Search

There are several parallel programming models available for numerical computations at different levels of expressibility and ease of use. For the development of new domain specific programming models, a splitting into a distributed data container and parallel data iterators is proposed. Data distribution is implemented in application specific libraries. Data iterators are directly analysed and compiled automatically into parallel

Gerhard W. Zumbusch

2007-01-01

473

Parallel Computing Using Web Servers and "Servlets".  

ERIC Educational Resources Information Center

Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

Lo, Alfred; Bloor, Chris; Choi, Y. K.

2000-01-01

474

Happe Honeywell Associative Parallel Processing Ensemble  

Microsoft Academic Search

Many problems, inherent in air traffic control, weather analysis and prediction, nuclear reaction, missile tracking, and hydrodynamics have common processing characteristics that can most efficiently be solved using parallel “non-conventional” techniques. Because of high sensor data rates, these parallel problem solving techniques cannot be economically applied using the standard sequential computer. The application of special processing techniques such as parallel/associative

Orin E. Marvel

1973-01-01

475

A new type of parallel finger mechanism  

Microsoft Academic Search

Based on the principle of parallel robot mechanism and bionics, a new type of three degree-of-freedom parallel finger mechanism is proposed. The basic unit of mechanism is parallelogram linkage. In the paper, the emphasis is laid on the study of the finger mechanism; the forward solution and inverse solution of the finger mechanism are obtained, an idea of parallel finger

Dejun Mu; Zhen Huang

2007-01-01

476

Identifying, Quantifying, Extracting and Enhancing Implicit Parallelism  

ERIC Educational Resources Information Center

The shift of the microprocessor industry towards multicore architectures has placed a huge burden on the programmers by requiring explicit parallelization for performance. Implicit Parallelization is an alternative that could ease the burden on programmers by parallelizing applications "under the covers" while maintaining sequential semantics…

Agarwal, Mayank

2009-01-01

477

Hyperdimensional Data Analysis Using Parallel Coordinates  

Microsoft Academic Search

This article presents the basic results of using the parallel coordinate representation as a high-dimensional data analysis tool. Several alternatives are reviewed. The basic algorithm for parallel coordinates is laid out and a discussion of its properties as a projective transformation is given. Several duality results are discussed along with their interpretations as data analysis tools. Permutations of the parallel
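The basic construction the article reviews is simple to state: an n-dimensional point becomes a polyline whose vertex on the i-th (vertical, equally spaced) axis is the point's i-th coordinate, usually normalized to the axis range. A minimal sketch:

```python
# Parallel-coordinates sketch: an n-dimensional point becomes a polyline
# whose vertex on axis i is the point's i-th (normalized) coordinate.

def to_polyline(point, lo, hi):
    """Vertices (axis index, normalized height) for one data point."""
    return [(i, (x - lo[i]) / (hi[i] - lo[i])) for i, x in enumerate(point)]

lo, hi = [0, 0, 10], [10, 5, 20]
pts = to_polyline([5, 5, 15], lo, hi)
print(pts)   # [(0, 0.5), (1, 1.0), (2, 0.5)]
```

The duality results the article discusses (e.g. points in the plane mapping to line intersections between adjacent axes) operate on exactly these polylines.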

Edward J. Wegman

1990-01-01

478

Global approach to detection of parallelism  

Microsoft Academic Search

Modern computers will increasingly rely on parallelism to achieve high computation rates. Techniques to automatically detect and exploit parallelism have shown effective for computers with vector capabilities. To employ similar techniques for asynchronous multiprocessor machines, the analysis and transformations used for vectorization must be extended to apply to entire programs rather than single loops. Three subproblems are addressed. A sequential-to-parallel

David Callahan; C. D. II

1987-01-01

479

OPALS - Optical parallel array logic system  

Microsoft Academic Search

A new optical-digital computing system called OPALS (optical parallel array logic system) is presented. OPALS can execute various parallel neighborhood operations such as cellular logic as well as parallel logical operations for two-dimensional sampled objects. The system has the ability to perform iterative operations. OPALS is systemized, centering on the optical logic method using image coding and optical correlation techniques.

Jun Tanida; Yoshiki Ichioka

1986-01-01

480

VCSEL-based parallel optical transmission module  

Microsoft Academic Search

This paper describes the design process and performance of the optimized parallel optical transmission module. Based on 1×12 VCSEL (Vertical Cavity Surface Emitting Laser) array, we designed and fabricated the high speed parallel optical modules. Our parallel optical module contains a 1×12 VCSEL array, a 12 channel CMOS laser driver circuit, a high speed PCB (Printed Circuit Board), a MT

Rongxuan Shen; Hongda Chen; Chao Zuo; Weihua Pei; Yi Zhou; Jun Tang

2005-01-01

481

Inductive Information Retrieval Using Parallel Distributed Computation.  

ERIC Educational Resources Information Center

This paper reports on an application of parallel models to the area of information retrieval and argues that massively parallel, distributed models of computation, called connectionist, or parallel distributed processing (PDP) models, offer a new approach to the representation and manipulation of knowledge. Although this document focuses on…

Mozer, Michael C.

482

Series-Parallel Combination Circuits  

NSDL National Science Digital Library

Tony R. Kuphaldt is the creator of All About Circuits, a collection of online textbooks about circuits and electricity. The site is split into volumes, chapters, and topics to make finding and learning about these subjects convenient. Volume 1, Chapter 7: Series-Parallel Combination Circuits digs deeper into these circuits than Chapter 5. This chapter offers a step-by-step analysis technique in order to identify all changes in voltage and current. It also offers a set of detailed instructions for component failure analysis. All in all, this is a great resource for educators or students.
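The step-by-step reduction the chapter teaches (collapse each parallel group to its equivalent, then sum the series chain) can be expressed directly. The network below is a generic illustration, not one of the chapter's worked examples:

```python
# Series-parallel resistance reduction: collapse parallel groups, then
# sum series elements (network values are illustrative).

def parallel(*rs):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

def series(*rs):
    """Equivalent resistance of resistors in series."""
    return sum(rs)

# Example: R1 = 100 ohms in series with (300 ohms || 600 ohms).
r_eq = series(100.0, parallel(300.0, 600.0))
print(r_eq)   # 100 + 200 = 300.0 ohms
```

Once the total resistance is known, the same reduction is unwound step by step to recover every branch voltage and current, which is the chapter's analysis technique.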

Kuphaldt, Tony R.

2008-07-01

483

On Parallelism in Turing Machines  

Microsoft Academic Search

A model of parallel computation based on a generalization of nondeterminism in Turing machines is introduced. Complexity classes //T(n)-TIME, //L(n)-SPACE, //LOGSPACE, //PTIME, etc. are defined for these machines in a way analogous to T(n)-TIME, L(n)-SPACE, LOGSPACE, PTIME, etc. for deterministic machines. It is shown that, given appropriate honesty conditions, L(n)-SPACE ⊆ //L(n)²-TIME, T(n)-TIME ⊆ //log T(n)-SPACE, and //L(n)-SPACE ⊆ exp L(n)-TIME.

Dexter Kozen

1976-01-01

484

Parallel Mapping Approaches for GNUMAP  

PubMed Central

Mapping short next-generation reads to reference genomes is an important element in SNP calling and expression studies. A major limitation to large-scale whole-genome mapping is the large memory requirements for the algorithm and the long run-time necessary for accurate studies. Several parallel implementations have been performed to distribute memory on different processors and to equally share the processing requirements. These approaches are compared with respect to their memory footprint, load balancing, and accuracy. When using MPI with multi-threading, linear speedup can be achieved for up to 256 processors.
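The MPI multi-threaded mapper itself is not shown in the abstract; as a stand-in, the sketch below uses Python's multiprocessing to illustrate the same data-parallel idea: partition the reads across worker processes, map each partition independently, and collect the hits. The toy exact-match "mapper", the reference string, and all names are invented for illustration and are not GNUMAP's implementation.

```python
from multiprocessing import Pool

REFERENCE = "ACGTACGTTTGACGT"  # toy reference genome

def map_read(read):
    """Return (read, position) for an exact match, or (read, -1) if unmapped."""
    return (read, REFERENCE.find(read))

def map_reads_parallel(reads, workers=4):
    """Partition the reads across worker processes and merge the results."""
    with Pool(workers) as pool:
        return pool.map(map_read, reads)

if __name__ == "__main__":
    reads = ["ACGT", "TTGA", "GGGG"]
    print(map_reads_parallel(reads))
```

Because each read maps independently, the work divides cleanly across processes; the memory footprint question discussed in the paper arises because each process here holds its own copy of the reference.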

Clement, Nathan L.; Clement, Mark J.; Snell, Quinn; Johnson, W. Evan

2013-01-01

485

Efficient parallel algorithms and VLSI architectures for manipulator Jacobian computation  

SciTech Connect

The real-time computation of the manipulator Jacobian that relates the manipulator joint velocities to the linear and angular velocities of the manipulator end-effector is pursued. Since the Jacobian can be expressed in the form of a first-order linear recurrence, the time lower bound to complete the Jacobian can be proved to be of order O(N) on uniprocessor computers, and of order O(log₂ N) on both parallel single-instruction-stream multiple-data-stream (SIMD) computers and parallel VLSI pipelines, where N is the number of links of the manipulator. To achieve the computation time lower bound, we developed the generalized-k method on uniprocessor computers, the parallel forward and backward recursive doubling algorithm (PFABRD) on SIMD computers, and a parallel systolic architecture on VLSI pipelines. All the methods are capable of computing the Jacobian at any desired reference coordinate frame k from the base coordinate frame to the end-effector coordinate frame. The computation effort in terms of floating point operations is minimal when k is in the range (4, N − 3) for the generalized-k method, and k = (N + 1)/2 for both the PFABRD algorithm and the parallel pipeline.
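The O(log₂ N) bound rests on evaluating a first-order linear recurrence by recursive doubling. The sketch below shows that general idea, not the paper's PFABRD algorithm: each term x_i = a_i·x_{i-1} + b_i is treated as the affine map x → a·x + b, and all prefix compositions are built in ⌈log₂ N⌉ combining rounds; on a SIMD machine each round runs in parallel, while here the rounds are simulated sequentially.

```python
def compose(f, g):
    """Compose affine maps: (g after f)(x) = a2*(a1*x + b1) + b2."""
    a1, b1 = f
    a2, b2 = g
    return (a2 * a1, a2 * b1 + b2)

def prefix_recurrence(maps, x0):
    """All x_i of x_i = a_i*x_{i-1} + b_i via recursive doubling."""
    pre = list(maps)          # pre[i] starts as the single map (a_i, b_i)
    n = len(pre)
    step = 1
    while step < n:           # ceil(log2 n) rounds
        nxt = list(pre)
        for i in range(step, n):   # each i is independent -> parallelizable
            nxt[i] = compose(pre[i - step], pre[i])
        pre = nxt
        step *= 2
    return [a * x0 + b for a, b in pre]
```

After the doubling rounds, pre[i] is the composition of maps 0..i, so every x_i is available simultaneously; with one processor per element, only the log-many rounds are sequential.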

Yeung, T.B. (LSI Logic Corp., Milpitas, CA (US)); Lee, C.S.G. (School of Electrical Engineering, Purdue Univ., West Lafayette, IN (US))

1989-09-01

486

Highly Parallel Alternating Directions Algorithm for Time Dependent Problems  

NASA Astrophysics Data System (ADS)

In our work, we consider the time dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction, and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time integration scheme for two and three dimensional parabolic problems in which the second-order derivative, with respect to each space variable, is treated implicitly while the other variable is made explicit at each time sub-step. To achieve good parallel performance, the solution of the Poisson problem for the pressure correction is replaced by solving a sequence of one-dimensional second order elliptic boundary value problems in each spatial direction. The parallel code is implemented using the standard MPI functions and tested on two modern parallel computer systems. The performed numerical tests demonstrate a good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
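Each one-dimensional second-order elliptic problem in the splitting step reduces to a tridiagonal linear system, which the Thomas algorithm solves in O(n). The sketch below shows that building block only (it is not the authors' code); the diagonals and right-hand side are illustrative.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# -u'' = f discretised with the (-1, 2, -1) stencil gives one such system per
# grid line; every grid line in each direction can be solved independently,
# which is what makes direction splitting parallel-friendly.
x = thomas_solve([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0], [1.0, 1.0, 1.0])
```

Because the per-line solves are independent, the sequence of 1D problems maps naturally onto a distributed-memory MPI decomposition.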

Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.

2011-11-01

487

Integrated modular engine - Reliability assessment  

NASA Astrophysics Data System (ADS)

A major driver in the increased interest in integrated modular engine configurations is the desire for ultra reliability for future rocket propulsion systems. The concept of configuring multiple sets of turbomachinery networked to multiple thrust chamber assemblies has been identified as an approach with potential to achieve significant reliability enhancement. This paper summarizes the results of a reliability study comparing networked systems vs. discrete engine installations, both with and without major module and engine redundancy. The study was conducted for gas generator, expander, and staged combustion cycles. The results are representative of either booster or upper-stage applications and are indicative of either plug or nonplug installation philosophies.
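The reliability advantage of networking can be shown with a back-of-the-envelope model: with discrete engines, every turbopump-and-chamber pair must work, while a cross-strapped network lets any k of n pumps feed any k of n chambers. This is only an illustrative independent-failure sketch, not the study's model, and the module reliabilities below are invented.

```python
from math import comb

def k_of_n(k, n, r):
    """P(at least k of n independent modules with reliability r survive)."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

r_pump, r_chamber = 0.99, 0.995
n = 4

# Discrete installation: 4 fixed pump/chamber pairs, all must work.
discrete = (r_pump * r_chamber) ** n

# Networked with one-module redundancy: any 3 of 4 pumps AND any 3 of 4 chambers.
networked = k_of_n(3, n, r_pump) * k_of_n(3, n, r_chamber)
```

Even with a single spare module on each side, the networked configuration's mission reliability exceeds the all-must-work discrete case, which is the qualitative effect the study quantifies for the different engine cycles.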

Parsley, R. C.; Ward, T. B.

1992-07-01

488

Complete classification of parallel Lorentz surfaces in four-dimensional neutral pseudosphere  

SciTech Connect

A Lorentz surface of an indefinite space form is called parallel if its second fundamental form is parallel with respect to the Van der Waerden-Bortolotti connection. Such surfaces are locally invariant under reflection with respect to the normal space at each point. Parallel surfaces are important in geometry as well as in general relativity, since extrinsic invariants of such surfaces do not change from point to point. Parallel Lorentz surfaces in four-dimensional (4D) Lorentzian space forms were classified by Chen and Van der Veken ["Complete classification of parallel surfaces in 4-dimensional Lorentz space forms," Tohoku Math. J. 61, 1 (2009)]. Recently, explicit classifications of parallel Lorentz surfaces in the pseudo-Euclidean 4-space E_2^4 and in the pseudohyperbolic 4-space H_2^4(-1) were obtained by Chen et al. ["Complete classification of parallel Lorentzian surfaces in Lorentzian complex space forms," Int. J. Math. 21, 665 (2010); "Complete classification of parallel Lorentz surfaces in neutral pseudo hyperbolic 4-space," Cent. Eur. J. Math. 8, 706 (2010)]. In this article, we completely classify the remaining case, namely, parallel Lorentz surfaces in the 4D neutral pseudosphere S_2^4(1). Our result states that there are 24 families of such surfaces in S_2^4(1); conversely, every parallel Lorentz surface in S_2^4(1) is obtained from one of the 24 families. The main result indicates that there are major differences between Lorentz surfaces in the de Sitter 4-space dS_4 and in the neutral pseudo 4-sphere S_2^4.

Chen, Bang-Yen [Department of Mathematics, Michigan State University, East Lansing, Michigan 48824-1027 (United States)

2010-08-15

489

Xyce parallel electronic simulator design.  

SciTech Connect

This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the ground up to be a SPICE-compatible, distributed-memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator, so having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines, and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been on circuits for nuclear weapons. However, this has not been the only focus, and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort involving a number of researchers, engineers, scientists, mathematicians, and computer scientists. In addition to this diversity of background, a certain amount of staff turnover is to be expected on long-term projects as people move on to other work. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document, in one place, a number of the software quality practices followed by the Xyce team. It is also hoped that this document will be a good source of information for new developers.

Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

2010-09-01

490

Risk-Reliability Programming for Optimal Water Quality Control  

Microsoft Academic Search

A risk-reliability programming approach is developed for optimal allocation of releases for control of water quality downstream of a multipurpose reservoir. Additionally, the approach allows the evaluation of optimal risk/reliability values. Risk is defined as a probability of not satisfying constraints given in probabilistic form, e.g., encroachment of the water quality reservation on that for flood control. The objective function includes…

Slobodan P. Simonovic; Gerald T. Orlob

1984-01-01

491

A Study of the Phase and Filter Properties of Arrays of Parallel Conductors between Ground Planes  

Microsoft Academic Search

A number of structures are analyzed which consist of arrays of parallel conductors between ground planes or above a single ground plane. These include interdigital line, meander line, a form of helix, …

J. T. Bolljahn; G. L. Matthaei

1962-01-01

492

Overview of IBM System/390 parallel sysplex - a commercial parallel processing system  

Microsoft Academic Search

Scalability has never been more a part of System/390 than with Parallel Sysplex. The Parallel Sysplex environment permits a mainframe or Parallel Enterprise Server to grow from a single system to a configuration of 32 systems (initially), and appear as a single image to the end user and applications. The IBM S/390 Parallel Sysplex provides capacity for today's largest commercial…

Jeffrey M. Nick; Jen-Yao Chung; Nicholas S. Bowen

1996-01-01

493

Debugging and analysis of large-scale parallel programs. Doctoral thesis  

SciTech Connect

One of the most serious problems in the development cycle of large-scale parallel programs is the lack of tools for debugging and performance analysis. Parallel programs are more difficult to analyze than their sequential counterparts for several reasons. First, race conditions in parallel programs can cause non-deterministic behavior, which reduces the effectiveness of traditional cyclic debugging techniques. Second, invasive, interactive analysis can distort a parallel program's execution beyond recognition. Finally, comprehensive analysis of a parallel program's execution requires collection, management, and presentation of an enormous amount of information. This dissertation addresses the problem of debugging and analysis of large-scale parallel programs executing on shared-memory multiprocessors. It proposes a methodology for top-down analysis of parallel program executions that replaces previous ad-hoc approaches. To support this methodology, a formal model for shared-memory communication among processes in a parallel program is developed. It is shown how synchronization traces based on this abstract model can be used to create indistinguishable executions that form the basis for debugging. This result is used to develop a practical technique for tracing parallel program executions on shared-memory parallel processors so that their executions can be repeated deterministically on demand.
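The trace-and-replay idea at the core of the dissertation can be sketched concretely: record the order in which threads win a shared lock, then force the same order on a second run so the two executions are indistinguishable with respect to synchronization. The classes and demo below are an illustrative toy, not the dissertation's mechanism.

```python
import threading

class ReplayLock:
    """Grants a conceptual lock only in a previously recorded acquisition order."""
    def __init__(self, trace):
        self._trace = list(trace)        # recorded thread ids, in order
        self._cond = threading.Condition()

    def acquire(self, tid):
        with self._cond:
            # Block until it is this thread's turn according to the trace.
            self._cond.wait_for(lambda: self._trace[0] == tid)

    def release(self):
        with self._cond:
            self._trace.pop(0)           # advance to the next recorded holder
            self._cond.notify_all()

# Replay a recorded order ["B", "A"]: thread A is forced to wait for B,
# regardless of how the scheduler interleaves them.
events = []
replay = ReplayLock(["B", "A"])

def worker(tid):
    replay.acquire(tid)
    events.append(tid)
    replay.release()

threads = [threading.Thread(target=worker, args=(t,)) for t in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the recorded trace, not the scheduler, decides who proceeds, repeated runs produce the same synchronization order on demand, which is the property the dissertation exploits for cyclic debugging.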

Mellor-Crummey, J.M.

1989-09-01

494

Reliability Evaluation of Semiconductor Memories.  

National Technical Information Service (NTIS)

The report presents a study which was conducted to evaluate the reliability of high usage semiconductor memories. The study determined parametric and functional tests which are required for military specifications. Special attention was given to the appli...

A. C. L. Chiang

1976-01-01

495

Hardware/Software Reliability Study.  

National Technical Information Service (NTIS)

Techniques for reliability analysis of the logical part of hardware/software systems are reviewed, and recommendations for requirements and analysis techniques needed to demonstrate compliance are made. Working practices to be used to assess software reli...

P. Mellor

1987-01-01

496

VCSEL reliability: a user's perspective  

NASA Astrophysics Data System (ADS)

VCSEL arrays are being considered for use in interconnect applications that require high speed, high bandwidth, high density, and high reliability. In order to better understand the reliability of VCSEL arrays, we initiated an internal project at Sun Microsystems, Inc. In this paper, we present preliminary results of an ongoing accelerated temperature-humidity-bias stress test on VCSEL arrays from several manufacturers. This test revealed no significant differences between the reliability of AlGaAs, oxide-confined VCSEL arrays constructed with a trench oxide and those using a mesa for isolation. This test did find that the reliability of arrays needs to be measured on arrays and not estimated from data on singulated VCSELs, as is common practice.

McElfresh, David K.; Lopez, Leoncio D.; Melanson, Robert; Vacar, Dan

2005-03-01

497

Bibliography on Reliability. Addendum I.  

National Technical Information Service (NTIS)

This report is the first Addendum to US Army Materiel Systems Analysis Activity Technical Report No. 82 a. Roberta Wooten, July 1973. It consists of an update of a bibliography of reliability theory and practice including references from government public...

H. P. Betz; A. S. Chem

1977-01-01

498

Introduction to Structural Reliability Theory.  

National Technical Information Service (NTIS)

The report provides an introduction to the state-of-the-art in structural reliability theory directed specifically toward the marine industry. Comprehensive probabilistic models are described for the environment wave loads acting on a marine structure, it...

A. E. Mansour

1989-01-01

499

Reliability Analysis of Phased Missions.  

National Technical Information Service (NTIS)

In a phased mission the relevant system configuration (block diagram or fault tree) changes during consecutive time periods (phases). Many systems are required to perform phased missions. A classic example is a space vehicle. A reliability analysis for a ...

J. D. Esary; H. Ziehms

1975-01-01

500

Performance prediction for complex parallel applications  

SciTech Connect

Today's massively parallel machines are typically message-passing systems consisting of hundreds or thousands of processors. Implementing parallel applications efficiently in this environment is a challenging task, and poor parallel design decisions can be expensive to correct. Tools and techniques that allow the fast and accurate evaluation of different parallelization strategies would significantly improve the productivity of application developers and increase throughput on parallel architectures. This paper investigates one of the major issues in building tools to compare parallelization strategies: determining what type of performance models of the application code and of the computer system are sufficient for a fast and accurate comparison of different strategies. The paper is built around a case study employing the Performance Prediction Tool (PerPreT) to predict performance of the Parallel Spectral Transform Shallow Water Model code (PSTSWM) on the Intel Paragon. 13 refs., 6 tabs.
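The kind of analytical model such a tool evaluates can be as simple as computation spread over p processors plus a per-step communication charge (latency plus volume over bandwidth). The sketch below is a generic model of this shape, not PerPreT's; every machine parameter and workload number is invented.

```python
def predicted_time(p, flops, steps, msg_bytes,
                   flop_rate=1e8, latency=1e-4, bandwidth=1e8):
    """Predicted wall time: compute shared over p plus per-step message cost."""
    compute = flops / (p * flop_rate)
    comm = 0.0 if p == 1 else steps * (latency + msg_bytes / bandwidth)
    return compute + comm

workload = dict(flops=1e10, steps=1000, msg_bytes=1e5)
serial = predicted_time(1, **workload)
for p in (16, 256):
    t = predicted_time(p, **workload)
    print(p, round(serial / t, 1))   # speedup flattens as communication dominates
```

Evaluating such closed-form models is effectively free, which is why an analytical model, when accurate enough, lets developers compare parallelization strategies without running each one on the target machine.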

Brehm, J. [Hannover Univ. (Germany). Inst. fuer Rechnerstrukturen und Betriebssysteme; Worley, P.H. [Oak Ridge National Lab., TN (United States)

1997-04-01