Science.gov

Sample records for parallel forms reliability

  1. Reliability of a Parallel Pipe Network

    NASA Technical Reports Server (NTRS)

    Herrera, Edgar; Chamis, Christopher (Technical Monitor)

    2001-01-01

    The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
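
    A minimal sketch of the sampling-based flavor of such a reliability calculation, applied to a hypothetical two-line parallel pipe network: the head-flow relation, the parameter distributions, and the minimum-flow threshold below are illustrative assumptions, not values from the NASA report, which also applies analytical reliability methods.

```python
import numpy as np

# Hypothetical two-line parallel network: head loss h = r * q**2 per line,
# so the flow a line can deliver at pump head h is q = sqrt(h / r).
rng = np.random.default_rng(0)
n = 100_000

pump_head = rng.normal(100.0, 5.0, n)                    # uncertain pump head
resistance = rng.lognormal(np.log(0.02), 0.1, (n, 2))    # uncertain line resistances

flow = np.sqrt(pump_head[:, None] / resistance)          # flow delivered by each line

q_min = 60.0                                             # specified minimum flow per line
failure = (flow < q_min).any(axis=1)                     # either line below its minimum
print(f"Estimated P(flow below minimum) = {failure.mean():.4f}")
```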

  2. Parallelized reliability estimation of reconfigurable computer networks

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Das, Subhendu; Palumbo, Dan

    1990-01-01

    A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.

  3. A parallel version of FORM 3

    NASA Astrophysics Data System (ADS)

    Fliegner, D.; Rétey, A.; Vermaseren, J. A. M.

    2001-08-01

    The parallel version of the symbolic manipulation program FORM for clusters of workstations and massive parallel systems is presented. We discuss various cluster architectures and the implementation of the parallel program using message passing (MPI). Performance results for real physics applications are shown.

  4. Swift: fast, reliable, loosely coupled parallel computation.

    SciTech Connect

    Zhao, Y.; Hategan, M.; Clifford, B.; Foster, I.; von Laszewski, G.; Nefedova, V.; Raicu, I.; Stef-Praun, T.; Wilde, M.; Mathematics and Computer Science; Univ. of Chicago

    2007-01-01

    A common pattern in scientific computing involves the execution of many tasks that are coupled only in the sense that the output of one may be passed as input to one or more others - for example, as a file, or via a Web Services invocation. While such 'loosely coupled' computations can involve large amounts of computation and communication, the concerns of the programmer tend to be different than in traditional high performance computing, being focused on management issues relating to the large numbers of datasets and tasks (and often, the complexities inherent in 'messy' data organizations) rather than the optimization of interprocessor communication. To address these concerns, we have developed Swift, a system that combines a novel scripting language called SwiftScript with a powerful runtime system based on CoG Karajan and Falkon to allow for the concise specification, and reliable and efficient execution, of large loosely coupled computations. Swift adopts and adapts ideas first explored in the GriPhyN virtual data system, improving on that system in many regards. We describe the SwiftScript language and its use of XDTM to describe the logical structure of complex file system structures. We also present the Swift system and its use of CoG Karajan, Falkon, and Globus services to dispatch and manage the execution of many tasks in different execution environments. We summarize application experiences and detail performance experiments that quantify the cost of Swift operations.

  5. Construction of Parallel Test Forms Using Optimal Test Designs.

    ERIC Educational Resources Information Center

    Dirir, Mohamed A.

    The effectiveness of an optimal item selection method in designing parallel test forms was studied during the development of two forms that were parallel to an existing form for each of three language arts tests for fourth graders used in the Connecticut Mastery Test. Two listening comprehension forms, two reading comprehension forms, and two…

  6. Parallel Merit Reliability: Error of Measurement from a Single Test Administration.

    ERIC Educational Resources Information Center

    Stuck, Ivan A.

    Parallel merit reliability (PMR) indexes the same consistency of measurement that is reflected in a validity coefficient; it reflects the reliability of measurement across identical merit score cases. Research has identified the potential benefits of the PMR approach as providing item level and cut-score reliability indices without requiring…

  7. PRAND: GPU accelerated parallel random number generation library: Using most reliable algorithms and applying parallelism of modern GPUs and CPUs

    NASA Astrophysics Data System (ADS)

    Barash, L. Yu.; Shchur, L. N.

    2014-04-01

    The library PRAND for pseudorandom number generation for modern CPUs and GPUs is presented. It contains both single-threaded and multi-threaded realizations of a number of modern and most reliable generators recently proposed and studied in Barash (2011), Matsumoto and Nishimura (1998), L'Ecuyer (1999, 1999), Barash and Shchur (2006) and the efficient SIMD realizations proposed in Barash and Shchur (2011). One of the useful features for using PRAND in parallel simulations is the ability to initialize up to 10^19 independent streams. Using massive parallelism of modern GPUs and SIMD parallelism of modern CPUs substantially improves performance of the generators.
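
    PRAND itself is a C/CUDA library, so the following Python/NumPy fragment is only a conceptual analogue of the feature highlighted above: spawning many statistically independent generator streams, one per parallel worker.

```python
import numpy as np

# Spawn independent child streams from a single root seed; each child can
# drive one thread, process, or GPU block of a parallel simulation.
root = np.random.SeedSequence(12345)
children = root.spawn(8)
streams = [np.random.default_rng(s) for s in children]

# Draws from different streams are statistically independent.
for i, rng in enumerate(streams):
    print(f"stream {i}: {rng.standard_normal(3)}")
```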

  8. Alternate Forms Reliability of the Behavioral Relaxation Scale: Preliminary Results

    ERIC Educational Resources Information Center

    Lundervold, Duane A.; Dunlap, Angel L.

    2006-01-01

    Alternate forms reliability of the Behavioral Relaxation Scale (BRS; Poppen, 1998), a direct observation measure of relaxed behavior, was examined. A single BRS score, based on a long-duration (5-minute) observation, has been found to be a valid measure of relaxation and is correlated with self-report and some physiological measures. Recently,…

  9. Reliability and mass analysis of dynamic power conversion systems with parallel or standby redundancy

    NASA Technical Reports Server (NTRS)

    Juhasz, A. J.; Bloomfield, H. S.

    1985-01-01

    A combinatorial reliability approach is used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis is also performed, specifically for a 100 kWe nuclear Brayton power conversion system with parallel redundancy. Although this study is done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.

  10. Reliability and mass analysis of dynamic power conversion systems with parallel or standby redundancy

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.; Bloomfield, Harvey S.

    1987-01-01

    A combinatorial reliability approach was used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis was also performed, specifically for a 100-kWe nuclear Brayton power conversion system with parallel redundancy. Although this study was done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.
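
    A small sketch of the combinatorial comparison behind the two preceding records: system reliability of n identical units under parallel (hot) redundancy versus cold standby redundancy, assuming exponential unit lifetimes and a perfect switch. The failure rate, mission time, and unit counts are made-up illustrations, not the Brayton-system figures.

```python
import math

def parallel_redundancy(lam, t, n):
    """All n units active; the system works if at least one survives."""
    r = math.exp(-lam * t)                     # single-unit reliability
    return 1.0 - (1.0 - r) ** n

def standby_redundancy(lam, t, n):
    """One unit active, n-1 cold spares, perfect switching (Poisson sum)."""
    return math.exp(-lam * t) * sum((lam * t) ** k / math.factorial(k)
                                    for k in range(n))

lam = 1.0e-5                                   # assumed failure rate per hour
t = 7 * 365 * 24                               # assumed 7-year mission, in hours
for n in (1, 2, 3):
    print(f"n={n}: parallel R={parallel_redundancy(lam, t, n):.4f}, "
          f"standby R={standby_redundancy(lam, t, n):.4f}")
```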

  11. Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)

    NASA Astrophysics Data System (ADS)

    Baran, Ismet; Tutum, Cem C.; Hattel, Jesper H.

    2013-08-01

    In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first-order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (T_max) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative distribution functions of the CDOCE and T_max as well as the correlation coefficients are obtained by using the FORM and the results are compared with corresponding Monte-Carlo simulations (MCS). According to the results obtained from the FORM, an increase in the pulling speed yields an increase in the probability of T_max being greater than the resin degradation temperature. A similar trend is also seen for the probability of the CDOCE being less than 0.8.
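
    To make the FORM-versus-Monte-Carlo comparison concrete, here is a minimal example on a linear limit state g = R − S with independent normal variables; the means and standard deviations are placeholders, and the actual pultrusion limit states (CDOCE and T_max) require the full process model described above.

```python
import math
import numpy as np

# Linear limit state g = R - S with independent normal R and S; failure when g < 0.
mu_R, sig_R = 0.85, 0.03       # assumed "capacity" (e.g., attainable cure degree)
mu_S, sig_S = 0.80, 0.02       # assumed "demand" (critical value)

beta = (mu_R - mu_S) / math.hypot(sig_R, sig_S)     # reliability index
pf_form = 0.5 * math.erfc(beta / math.sqrt(2))      # FORM: Pf = Phi(-beta)

rng = np.random.default_rng(1)
n = 1_000_000
g = rng.normal(mu_R, sig_R, n) - rng.normal(mu_S, sig_S, n)
pf_mc = (g < 0).mean()                              # crude Monte Carlo estimate

print(f"beta = {beta:.3f}, FORM Pf = {pf_form:.4e}, MC Pf = {pf_mc:.4e}")
```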

  12. Bristol Stool Form Scale reliability and agreement decreases when determining Rome III stool form designations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Rater reproducibility of the Bristol Stool Form Scale (BSFS), which categorizes stools into one of seven types, is unknown. We sought to determine reliability and agreement by individual stool type and when responses are categorized by Rome III clinical designation as normal or abnormal (constipatio...

  13. Parameter Interval Estimation of System Reliability for Repairable Multistate Series-Parallel System with Fuzzy Data

    PubMed Central

    2014-01-01

    The purpose of this paper is to create an interval estimation of the fuzzy system reliability for the repairable multistate series–parallel system (RMSS). A two-sided fuzzy confidence interval for the fuzzy system reliability is constructed. The performance of the fuzzy confidence interval is assessed based on the coverage probability and the expected length. In order to obtain the fuzzy system reliability, fuzzy set theory is applied to the system reliability problem when dealing with uncertainties in the RMSS. A fuzzy number with a triangular membership function is used for constructing the fuzzy failure rate and the fuzzy repair rate in the fuzzy reliability for the RMSS. The results show that a good interval estimator is one whose obtained coverage probability is close to the expected confidence coefficient and whose expected length is the narrowest. The model presented herein is an effective estimation method when the sample size is n ≥ 100. In addition, the optimal α-cut for the narrowest lower expected length and the narrowest upper expected length is considered. PMID:24987728

  14. Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gibson, Garth Alan

    1990-01-01

    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
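
    A toy illustration of the parity-coding result summarized above: with one parity block, any single self-identifying disk failure can be corrected by XOR-ing the surviving blocks. Block sizes and contents are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
data_disks = [rng.integers(0, 256, 8, dtype=np.uint8) for _ in range(4)]
parity = np.bitwise_xor.reduce(data_disks)        # contents of the parity disk

failed = 2                                        # suppose disk 2 fails (self-identifying)
survivors = [d for i, d in enumerate(data_disks) if i != failed]
recovered = np.bitwise_xor.reduce(survivors + [parity])

assert np.array_equal(recovered, data_disks[failed])
print("recovered disk", failed, ":", recovered)
```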

  15. Generating Random Parallel Test Forms Using CTT in a Computer-Based Environment.

    ERIC Educational Resources Information Center

    Weiner, John A.; Gibson, Wade M.

    1998-01-01

    Describes a procedure for automated-test-forms assembly based on Classical Test Theory (CTT). The procedure uses stratified random-content sampling and test-form preequating to ensure both content and psychometric equivalence in generating virtually unlimited parallel forms. Extends the usefulness of CTT in automated test construction. (Author/SLD)
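
    A hedged sketch of the stratified random-content sampling idea: items are grouped by content strata and each generated form draws the same number of items per stratum. The item bank, strata, and per-stratum counts are hypothetical, and the test-form preequating step mentioned in the abstract is not shown.

```python
import random
from collections import defaultdict

# Hypothetical item bank: 40 items in each of three content strata.
item_bank = [(f"{stratum}-{i:02d}", stratum)
             for stratum in ("grammar", "vocab", "reading")
             for i in range(40)]

def build_form(bank, per_stratum, rng):
    """Draw the same number of items from every content stratum."""
    by_stratum = defaultdict(list)
    for item, stratum in bank:
        by_stratum[stratum].append(item)
    form = []
    for stratum in sorted(by_stratum):
        form.extend(rng.sample(by_stratum[stratum], per_stratum))
    return form

rng = random.Random(7)
form_a = build_form(item_bank, per_stratum=10, rng=rng)
form_b = build_form(item_bank, per_stratum=10, rng=rng)
print(len(form_a), "items per form;", len(set(form_a) & set(form_b)), "items shared")
```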

  16. Separation of image parts using 2-D parallel form recursive filters.

    PubMed

    Sivaramakrishna, R

    1996-01-01

    This correspondence deals with a new technique to separate objects or image parts in a composite image. A parallel form extension of a 2-D Steiglitz-McBride method is applied to the discrete cosine transform (DCT) of the image containing the objects that are to be separated. The obtained parallel form is the sum of several filters or systems, where the impulse response of each filter corresponds to the DCT of one object in the original image. Preliminary results on an image with two objects show that the algorithm works well, even in the case where one object occludes another as well as in the case of moderate noise. PMID:18285105

  17. Exploring Equivalent Forms Reliability Using a Key Stage 2 Reading Test

    ERIC Educational Resources Information Center

    Benton, Tom

    2013-01-01

    This article outlines an empirical investigation into equivalent forms reliability using a case study of a national curriculum reading test. Within the situation being studied, there has been a genuine attempt to create several equivalent forms and so it is of interest to compare the actual behaviour of the relationship between these forms to the…

  18. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…

  19. Reliability Modeling Methodology for Independent Approaches on Parallel Runways Safety Analysis

    NASA Technical Reports Server (NTRS)

    Babcock, P.; Schor, A.; Rosch, G.

    1998-01-01

    This document is an adjunct to the final report An Integrated Safety Analysis Methodology for Emerging Air Transport Technologies. That report presents the results of our analysis of the problem of simultaneous but independent approaches of two aircraft on parallel runways (independent approaches on parallel runways, or IAPR). This introductory chapter presents a brief overview and perspective of approaches and methodologies for performing safety analyses for complex systems. Ensuing chapters provide the technical details that underlie the approach that we have taken in performing the safety analysis for the IAPR concept.

  20. Genomic evidence for the parallel evolution of coastal forms in the Senecio lautus complex.

    PubMed

    Roda, Federico; Ambrose, Luke; Walter, Gregory M; Liu, Huanle L; Schaul, Andrea; Lowe, Andrew; Pelser, Pieter B; Prentis, Peter; Rieseberg, Loren H; Ortiz-Barrientos, Daniel

    2013-06-01

    Instances of parallel ecotypic divergence, where adaptation to similar conditions repeatedly causes similar phenotypic changes in closely related organisms, are useful for studying the role of ecological selection in speciation. Here we used a combination of traditional and next generation genotyping techniques to test for the parallel divergence of plants from the Senecio lautus complex, a phenotypically variable groundsel that has adapted to disparate environments in the South Pacific. Phylogenetic analysis of a broad selection of Senecio species showed that members of the S. lautus complex form a distinct lineage that has diversified recently in Australasia. An inspection of thousands of polymorphisms in the genome of 27 natural populations from the S. lautus complex in Australia revealed a signal of strong genetic structure independent of habitat and phenotype. Additionally, genetic differentiation between populations was correlated with the geographical distance separating them, and the genetic diversity of populations strongly depended on geographical location. Importantly, coastal forms appeared in several independent phylogenetic clades, a pattern that is consistent with the parallel evolution of these forms. Analyses of the patterns of genomic differentiation between populations further revealed that adjacent populations displayed greater genomic heterogeneity than allopatric populations and are differentiated according to variation in soil composition. These results are consistent with a process of parallel ecotypic divergence in the face of gene flow. PMID:23710896

  1. The Reliability and Validity of the Coopersmith Self-Esteem Inventory-Form B.

    ERIC Educational Resources Information Center

    Chiu, Lian-Hwang

    1985-01-01

    The purpose of this study was to determine the test-retest reliability and concurrent validity of the short form (Form B) of the Coopersmith Self-Esteem Inventory. Criterion measures for validity included: (1) sociometric measures; (2) teacher's popularity ranking; and, (3) self-esteem rating. (Author/LMO)

  2. Analysing the reliability of actuation elements in series and parallel configurations for high-redundancy actuation

    NASA Astrophysics Data System (ADS)

    Steffen, Thomas; Schiller, Frank; Blum, Michael; Dixon, Roger

    2013-08-01

    A high-redundancy actuator (HRA) is an actuation system composed of a high number of actuation elements, increasing both travel and force above the capability of an individual element. This approach provides inherent fault tolerance: if one of the elements fails, the capabilities of the whole actuator may be reduced, but it retains core functionality. Many different configurations are possible, with different implications for the actuator capability and reliability. This article analyses the reliability of the HRA based on the likelihood of an unacceptable reduction in capability. The analysis of the HRA is a highly structured problem, but it does not fit into known reliability categories (such as the k-out-of-n system), and a fault-tree analysis becomes prohibitively large. Instead, a multi-state systems approach is pursued here, which provides an easy, concise and efficient reliability analysis of the HRA. The resulting probability distribution can be used to find the optimal configuration of an HRA for a given set of requirements.
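
    A toy multi-state calculation in the spirit of the analysis: treat system capability as the number of working elements among n independent, identical elements and compute the probability that capability stays at or above a minimum acceptable level. The element count and element reliability are illustrative, and the article's series/parallel HRA structure is richer than this.

```python
from math import comb

def capability_reliability(n, r, k_min):
    """P(at least k_min of n independent elements, each working with prob r)."""
    return sum(comb(n, k) * r**k * (1 - r)**(n - k) for k in range(k_min, n + 1))

n, r = 12, 0.98                     # assumed: 12 elements, 98% element reliability
for k_min in (12, 11, 10):
    print(f"capability >= {k_min}/{n}: R = {capability_reliability(n, r, k_min):.6f}")
```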

  3. An expert system for ensuring the reliability of the technological process of cold sheet metal forming

    NASA Astrophysics Data System (ADS)

    Kashapova, L. R.; Pankratov, D. L.; Utyaganov, P. P.

    2016-06-01

    In order to exclude periodic defects in parts manufactured by cold sheet metal forming, a method for automated estimation of the reliability of the technological process was developed. The technique is based on the analysis of reliability factors: part design; material, mechanical, and physical requirements; hardware settings; tool characteristics; etc. The work presents an expert system based on a statistical accumulation of the knowledge of the operator (technologist) and the decisions of control algorithms.

  4. Complete classification of parallel spatial surfaces in pseudo-Riemannian space forms with arbitrary index and dimension

    NASA Astrophysics Data System (ADS)

    Chen, Bang-Yen

    2010-02-01

    A spatial surface of a pseudo-Riemannian space form is called parallel if its second fundamental form is parallel with respect to the Van der Waerden-Bortolotti connection. It is well known that a surface in a pseudo-Riemannian space form is parallel if and only if it is locally invariant under the reflection with respect to the normal space at each point. Such surfaces are important in geometry as well as in general relativity since the extrinsic invariants of the surfaces do not change from point to point. Recently, parallel spatial surfaces in 4-dimensional Lorentzian space forms were classified by Chen and Van der Veken (2009) [6]. In this article, we completely classify parallel spatial surfaces in pseudo-Riemannian space forms with an arbitrary index and dimensions. As an immediate by-product, we achieve the classification of all spatial surfaces in Lorentzian space forms with arbitrary dimensions.

  5. Parallel FE Approximation of the Even/Odd Parity Form of the Linear Boltzmann Equation

    SciTech Connect

    Drumm, Clifton R.; Lorenz, Jens

    1999-07-21

    A novel solution method has been developed to solve the linear Boltzmann equation on an unstructured triangular mesh. Instead of tackling the first-order form of the equation, this approach is based on the even/odd-parity form in conjunction with the conventional multigroup discrete-ordinates approximation. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, and the method is well suited for massively parallel computers.

  6. Magnetosheath filamentary structures formed by ion acceleration at the quasi-parallel bow shock

    NASA Astrophysics Data System (ADS)

    Omidi, N.; Sibeck, D.; Gutynska, O.; Trattner, K. J.

    2014-04-01

    Results from 2.5-D electromagnetic hybrid simulations show the formation of field-aligned, filamentary plasma structures in the magnetosheath. They begin at the quasi-parallel bow shock and extend far into the magnetosheath. These structures exhibit anticorrelated, spatial oscillations in plasma density and ion temperature. Closer to the bow shock, magnetic field variations associated with density and temperature oscillations may also be present. Magnetosheath filamentary structures (MFS) form primarily in the quasi-parallel sheath; however, they may extend to the quasi-perpendicular magnetosheath. They occur over a wide range of solar wind Alfvénic Mach numbers and interplanetary magnetic field directions. At lower Mach numbers with lower levels of magnetosheath turbulence, MFS remain highly coherent over large distances. At higher Mach numbers, magnetosheath turbulence decreases the level of coherence. Magnetosheath filamentary structures result from localized ion acceleration at the quasi-parallel bow shock and the injection of energetic ions into the magnetosheath. The localized nature of ion acceleration is tied to the generation of fast magnetosonic waves at and upstream of the quasi-parallel shock. The increased pressure in flux tubes containing the shock accelerated ions results in the depletion of the thermal plasma in these flux tubes and the enhancement of density in flux tubes void of energetic ions. This results in the observed anticorrelation between ion temperature and plasma density.

  7. Magnetosheath Filamentary Structures Formed by Ion Acceleration at the Quasi-Parallel Bow Shock

    NASA Technical Reports Server (NTRS)

    Omidi, N.; Sibeck, D.; Gutynska, O.; Trattner, K. J.

    2014-01-01

    Results from 2.5-D electromagnetic hybrid simulations show the formation of field-aligned, filamentary plasma structures in the magnetosheath. They begin at the quasi-parallel bow shock and extend far into the magnetosheath. These structures exhibit anticorrelated, spatial oscillations in plasma density and ion temperature. Closer to the bow shock, magnetic field variations associated with density and temperature oscillations may also be present. Magnetosheath filamentary structures (MFS) form primarily in the quasi-parallel sheath; however, they may extend to the quasi-perpendicular magnetosheath. They occur over a wide range of solar wind Alfvénic Mach numbers and interplanetary magnetic field directions. At lower Mach numbers with lower levels of magnetosheath turbulence, MFS remain highly coherent over large distances. At higher Mach numbers, magnetosheath turbulence decreases the level of coherence. Magnetosheath filamentary structures result from localized ion acceleration at the quasi-parallel bow shock and the injection of energetic ions into the magnetosheath. The localized nature of ion acceleration is tied to the generation of fast magnetosonic waves at and upstream of the quasi-parallel shock. The increased pressure in flux tubes containing the shock accelerated ions results in the depletion of the thermal plasma in these flux tubes and the enhancement of density in flux tubes void of energetic ions. This results in the observed anticorrelation between ion temperature and plasma density.

  8. Lineation-parallel c-axis Fabric of Quartz Formed Under Water-rich Conditions

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Zhang, J.; Li, P.

    2014-12-01

    The crystallographic preferred orientation (CPO) of quartz is of great significance because it records much valuable information pertinent to the deformation of quartz-rich rocks in the continental crust. The lineation-parallel c-axis CPO (i.e., c-axis forming a maximum parallel to the lineation) in naturally deformed quartz is generally considered to form under high temperature (> ~550 °C) conditions. However, most laboratory deformation experiments on quartzite failed to produce such a CPO at high temperatures up to 1200 °C. Here we report a new occurrence of the lineation-parallel c-axis CPO of quartz from kyanite-quartz veins in eclogite. Optical microstructural observations, Fourier transform infrared (FTIR) and electron backscattered diffraction (EBSD) techniques were integrated to illuminate the nature of quartz CPOs. Quartz exhibits mostly straight to slightly curved grain boundaries, modest intracrystalline plasticity, and significant shape preferred orientation (SPO) and CPOs, indicating dislocation creep dominated the deformation of quartz. Kyanite grains in the veins are mostly strain-free, suggestive of their higher strength than quartz. The pronounced SPO and CPOs in kyanite were interpreted to originate from anisotropic crystal growth and/or mechanical rotation during vein-parallel shearing. FTIR results show quartz contains a trivial amount of structurally bound water (several tens of H/10^6 Si), while kyanite has a water content of 384-729 H/10^6 Si; however, petrographic observations suggest quartz from the veins was in fact deformed under water-rich conditions. We argue that the observed lineation-parallel c-axis fabric in quartz was inherited from preexisting CPOs as a result of anisotropic grain growth under stress facilitated by water, rather than due to a dominant c-slip. The preservation of the quartz CPOs probably benefited from the preexisting quartz CPOs which renders most quartz grains unsuitably oriented for an easy a-slip at

  9. A Test Reliability Analysis of an Abbreviated Version of the Pupil Control Ideology Form.

    ERIC Educational Resources Information Center

    Gaffney, Patrick V.

    A reliability analysis was conducted of an abbreviated, 10-item version of the Pupil Control Ideology Form (PCI), using the Cronbach's alpha technique (L. J. Cronbach, 1951) and the computation of the standard error of measurement. The PCI measures a teacher's orientation toward pupil control. Subjects were 168 preservice teachers from one private…
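
    The two quantities named above, Cronbach's alpha and the standard error of measurement (SEM = SD·sqrt(1 − alpha)), can be computed as in this sketch; the item responses are simulated, not PCI data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_people, n_items = 168, 10
trait = rng.normal(0.0, 1.0, n_people)
scores = trait[:, None] + rng.normal(0.0, 1.0, (n_people, n_items))  # simulated item scores

item_variances = scores.var(axis=0, ddof=1).sum()
total = scores.sum(axis=1)
alpha = n_items / (n_items - 1) * (1 - item_variances / total.var(ddof=1))

sem = total.std(ddof=1) * np.sqrt(1 - alpha)   # standard error of measurement
print(f"Cronbach's alpha = {alpha:.3f}, SEM = {sem:.2f} raw-score points")
```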

  10. The Question of Reliability of Course and Evaluation Forms at Indiana University.

    ERIC Educational Resources Information Center

    Majer, Kenneth; Stayrook, Nicholas

    The reliability of the two most widely used course evaluation instruments at Indiana University was examined. The 38-item Form A, a modification of a course-evaluation instrument reported by Hildebrand and Wilson, at the University of California at Davis, asked students to rate instructors on a seven-point scale, from excellent to poor. The…

  11. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on triangular array and another based on rectangular array, are presented for the multiphase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

  12. The Behaviour Problems Inventory-Short Form: Reliability and Factorial Validity in Adults with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Mascitelli, Andréa N.; Rojahn, Johannes; Nicolaides, Vias C.; Moore, Linda; Hastings, Richard P.; Christian-Jones, Ceri

    2015-01-01

    Background: The Behaviour Problems Inventory-Short Form (BPI-S) is a spin-off of the BPI-01 that was empirically developed from a large BPI-01 data set. In this study, the reliability and factorial validity of the BPI-S was investigated for the first time on newly collected data from adults with intellectual disabilities. Methods: The sample…

  13. Validity and Reliability of International Physical Activity Questionnaire-Short Form in Chinese Youth

    ERIC Educational Resources Information Center

    Wang, Chao; Chen, Peijie; Zhuang, Jie

    2013-01-01

    Purpose: The psychometric profiles of the widely used International Physical Activity Questionnaire-Short Form (IPAQ-SF) in Chinese youth have not been reported. The purpose of this study was to examine the validity and reliability of the IPAQ-SF using a sample of Chinese youth. Method: One thousand and twenty-one youth (M_age = 14.26 ±…

  14. Microelectromechanical filter formed from parallel-connected lattice networks of contour-mode resonators

    DOEpatents

    Wojciechowski, Kenneth E; Olsson, III, Roy H; Ziaei-Moayyed, Maryam

    2013-07-30

    A microelectromechanical (MEM) filter is disclosed which has a plurality of lattice networks formed on a substrate and electrically connected together in parallel. Each lattice network has a series resonant frequency and a shunt resonant frequency provided by one or more contour-mode resonators in the lattice network. Different types of contour-mode resonators including single input, single output resonators, differential resonators, balun resonators, and ring resonators can be used in MEM filter. The MEM filter can have a center frequency in the range of 10 MHz-10 GHz, with a filter bandwidth of up to about 1% when all of the lattice networks have the same series resonant frequency and the same shunt resonant frequency. The filter bandwidth can be increased up to about 5% by using unique series and shunt resonant frequencies for the lattice networks.

  15. Identifying logical planes formed of compute nodes of a subcommunicator in a parallel computer

    DOEpatents

    Davis, Kristan D.; Faraj, Daniel

    2016-05-03

    In a parallel computer, a plurality of logical planes formed of compute nodes of a subcommunicator may be identified by: for each compute node of the subcommunicator and for a number of dimensions beginning with a first dimension: establishing, by a plane building node, in a positive direction of the first dimension, all logical planes that include the plane building node and compute nodes of the subcommunicator in a positive direction of a second dimension, where the second dimension is orthogonal to the first dimension; and establishing, by the plane building node, in a negative direction of the first dimension, all logical planes that include the plane building node and compute nodes of the subcommunicator in the positive direction of the second dimension.

  16. Identifying logical planes formed of compute nodes of a subcommunicator in a parallel computer

    DOEpatents

    Davis, Kristan D.; Faraj, Daniel A.

    2016-03-01

    In a parallel computer, a plurality of logical planes formed of compute nodes of a subcommunicator may be identified by: for each compute node of the subcommunicator and for a number of dimensions beginning with a first dimension: establishing, by a plane building node, in a positive direction of the first dimension, all logical planes that include the plane building node and compute nodes of the subcommunicator in a positive direction of a second dimension, where the second dimension is orthogonal to the first dimension; and establishing, by the plane building node, in a negative direction of the first dimension, all logical planes that include the plane building node and compute nodes of the subcommunicator in the positive direction of the second dimension.

  17. Parallel processing in the brain's visual form system: an fMRI study

    PubMed Central

    Shigihara, Yoshihito; Zeki, Semir

    2014-01-01

    We here extend and complement our earlier time-based, magneto-encephalographic (MEG), study of the processing of forms by the visual brain (Shigihara and Zeki, 2013) with a functional magnetic resonance imaging (fMRI) study, in order to better localize the activity produced in early visual areas when subjects view simple geometric stimuli of increasing perceptual complexity (lines, angles, rhombuses) constituted from the same elements (lines). Our results show that all three categories of form activate all three visual areas with which we were principally concerned (V1–V3), with angles producing the strongest and rhombuses the weakest activity in all three. The difference between the activity produced by angles and rhombuses was significant, that between lines and rhombuses was trend significant while that between lines and angles was not. Taken together with our earlier MEG results, the present ones suggest that a parallel strategy is used in processing forms, in addition to the well-documented hierarchical strategy. PMID:25126064

  18. The Validation of Parallel Test Forms: "Mountain" and "Beach" Picture Series for Assessment of Language Skills

    ERIC Educational Resources Information Center

    Bae, Jungok; Lee, Yae-Sheik

    2011-01-01

    Pictures are widely used to elicit expressive language skills, and pictures must be established as parallel before changes in ability can be demonstrated by assessment using picture prompts. Why parallel prompts are required, and what must be done to ensure that prompts are in fact parallel, is not widely known. To date, evidence of…

  19. Pharmacokinetic Comparison of Omeprazole Granule and Suspension Forms in Children: A Randomized, Parallel Pilot Trial.

    PubMed

    Karami, S; Dehghanzadeh, G; Haghighat, M; Mirzaei, R; Rahimi, H R

    2016-03-01

    Although omeprazole is widely used for the treatment of gastric acid-mediated disorders, its pharmacokinetic and chemical instability does not allow a simple aqueous dosage form to be formulated for therapy of these patients, especially children. The aim of this study was first to prepare a suspension dosage form of omeprazole and second to compare the blood levels of 2 oral dosage forms, suspension and granule, by high-performance liquid chromatography (HPLC). The omeprazole suspension was prepared by adding omeprazole powder to 8.4% sodium bicarbonate to give a final concentration of 2 mg/ml omeprazole. A randomized, parallel pilot trial was then performed in 34 pediatric patients with acid-peptic disorder in whom omeprazole use was being considered. Selected patients received the suspension and granule forms, respectively. After oral administration, blood samples were collected and analyzed for omeprazole levels using a validated HPLC method. The mean omeprazole blood concentrations before the next dose (trough level) were 0.12±0.08 µg/ml and 0.18±0.15 µg/ml for the granule and suspension groups, respectively, and the mean blood levels after dosing (C2, peak level) were 0.68±0.61 µg/ml and 0.86±0.76 µg/ml for the granule and suspension groups, respectively. No significant differences were observed between the 2 dosage forms 2 h before (P=0.52) and after (P=0.56) the last dose. These results demonstrate that omeprazole suspension is a suitable substitute for granule in pediatrics. PMID:26398674

  20. Optimal design of parallel triplex forming oligonucleotides containing Twisted Intercalating Nucleic Acids--TINA.

    PubMed

    Schneider, Uffe V; Mikkelsen, Nikolaj D; Jøhnk, Nina; Okkels, Limei M; Westh, Henrik; Lisby, Gorm

    2010-07-01

    Twisted intercalating nucleic acid (TINA) is a novel intercalator and stabilizer of Hoogsteen type parallel triplex formations (PT). Specific design rules for position of TINA in triplex forming oligonucleotides (TFOs) have not previously been presented. We describe a complete collection of easy and robust design rules based upon more than 2500 melting points (Tm) determined by FRET. To increase the sensitivity of PT, multiple TINAs should be placed with at least 3 nt in-between or preferably one TINA for each half helixturn and/or whole helixturn. We find that ΔTm of base mismatches on PT is remarkably high (between 7.4 and 15.2 °C) compared to antiparallel duplexes (between 3.8 and 9.4 °C). The specificity of PT by ΔTm increases when shorter TFOs and higher pH are chosen. To increase ΔTms, base mismatches should be placed in the center of the TFO and when feasible, A, C or T to G base mismatches should be avoided. Base mismatches can be neutralized by intercalation of a TINA on each side of the base mismatch and masked by a TINA intercalating directly 3' (preferably) or 5' of it. We predict that TINA stabilized PT will improve the sensitivity and specificity of DNA based clinical diagnostic assays. PMID:20338879

  1. Optimal design of parallel triplex forming oligonucleotides containing Twisted Intercalating Nucleic Acids—TINA

    PubMed Central

    Schneider, Uffe V.; Mikkelsen, Nikolaj D.; Jøhnk, Nina; Okkels, Limei M.; Westh, Henrik; Lisby, Gorm

    2010-01-01

    Twisted intercalating nucleic acid (TINA) is a novel intercalator and stabilizer of Hoogsteen type parallel triplex formations (PT). Specific design rules for position of TINA in triplex forming oligonucleotides (TFOs) have not previously been presented. We describe a complete collection of easy and robust design rules based upon more than 2500 melting points (Tm) determined by FRET. To increase the sensitivity of PT, multiple TINAs should be placed with at least 3 nt in-between or preferably one TINA for each half helixturn and/or whole helixturn. We find that ΔTm of base mismatches on PT is remarkably high (between 7.4 and 15.2°C) compared to antiparallel duplexes (between 3.8 and 9.4°C). The specificity of PT by ΔTm increases when shorter TFOs and higher pH are chosen. To increase ΔTms, base mismatches should be placed in the center of the TFO and when feasible, A, C or T to G base mismatches should be avoided. Base mismatches can be neutralized by intercalation of a TINA on each side of the base mismatch and masked by a TINA intercalating directly 3′ (preferably) or 5′ of it. We predict that TINA stabilized PT will improve the sensitivity and specificity of DNA based clinical diagnostic assays. PMID:20338879

  2. An Examination of the Reliability of Scores from Zuckerman's Sensation Seeking Scales, Form V.

    ERIC Educational Resources Information Center

    Deditius-Island, Heide K.; Caruso, John C.

    2002-01-01

    Conducted a reliability generalization study on Zuckerman's Sensation Seeking Scale (M. Zuckerman and others, 1964) using 113 reliability coefficients from 21 published studies. The reliability of scores was marginal for four of the five scales, and low for the other. Mean age of subjects has a significant relationship with score reliability. (SLD)

  3. Reliability and Validity of the Korean Young Schema Questionnaire-Short Form-3 in Medical Students

    PubMed Central

    Lee, Seung Jae; Choi, Young Hee; Rim, Hyo Deog; Won, Seung Hee

    2015-01-01

    Objective The Young Schema Questionnaire (YSQ) is a self-report measure of early maladaptive schemas and is currently in its third revision; it is available in both long (YSQ-L3) and short (YSQ-S3) forms. The goal of this study was to develop a Korean version of the YSQ-S3 and establish its psychometric properties in a Korean sample. Methods A total of 542 graduate medical students completed the Korean version of the YSQ-S3 and several other psychological scales. A subsample of 308 subjects completed the Korean YSQ-S3 both before and after a 2-year test-retest interval. Correlation, regression, and confirmatory factor analyses were performed on the data. Results The internal consistency of the 90-item Korean YSQ-S3 was 0.97 and that of each schema was acceptable, with Cronbach's alphas ranging from 0.59 to 0.90. The test-retest reliability ranged from 0.46 to 0.65. Every schema showed robust positive correlations with most psychological measures. The confirmatory factor analysis for the 18-factor structure originally proposed by Young, Klosko, and Weishaar (2003) showed that most goodness-of-fit statistics were indicative of a satisfactory fit. Conclusion These findings support the reliability and validity of the Korean version of the YSQ-S3. PMID:26207121

  4. Validity, Reliability, and Potential Bias of Short Forms of Students' Evaluation of Teaching: The Case of UAE University

    ERIC Educational Resources Information Center

    Dodeen, Hamzeh

    2013-01-01

    Students' opinions continue to be a significant factor in the evaluation of teaching in higher education institutions. The purpose of this study was to psychometrically assess short student evaluation of teaching (SET) forms using the UAE University form as a model. The study evaluated the form validity, reliability, the overall question,…

  5. Bringing the cognitive estimation task into the 21st century: normative data on two new parallel forms.

    PubMed

    MacPherson, Sarah E; Wagner, Gabriela Peretti; Murphy, Patrick; Bozzali, Marco; Cipolotti, Lisa; Shallice, Tim

    2014-01-01

    The Cognitive Estimation Test (CET) is widely used by clinicians and researchers to assess the ability to produce reasonable cognitive estimates. Although several studies have published normative data for versions of the CET, many of the items are now outdated and parallel forms of the test do not exist to allow cognitive estimation abilities to be assessed on more than one occasion. In the present study, we devised two new 9-item parallel forms of the CET. These versions were administered to 184 healthy male and female participants aged 18-79 years with 9-22 years of education. Increasing age and years of education were found to be associated with successful CET performance as well as gender, intellect, naming, arithmetic and semantic memory abilities. To validate that the parallel forms of the CET were sensitive to frontal lobe damage, both versions were administered to 24 patients with frontal lobe lesions and 48 age-, gender- and education-matched controls. The frontal patients' error scores were significantly higher than the healthy controls on both versions of the task. This study provides normative data for parallel forms of the CET for adults which are also suitable for assessing frontal lobe dysfunction on more than one occasion without practice effects. PMID:24671170

  6. A novel transition pathway of ligand-induced topological conversion from hybrid forms to parallel forms of human telomeric G-quadruplexes.

    PubMed

    Wang, Zi-Fu; Li, Ming-Hao; Chen, Wei-Wen; Hsu, Shang-Te Danny; Chang, Ta-Chau

    2016-05-01

    The folding topology of DNA G-quadruplexes (G4s) depends not only on their nucleotide sequences but also on environmental factors and/or ligand binding. Here, a G4 ligand, 3,6-bis(1-methyl-4-vinylpyridium iodide)-9-(1-(1-methyl-piperidinium iodide)-3,6,9-trioxaundecane) carbazole (BMVC-8C3O), can induce topological conversion of non-parallel to parallel forms in human telomeric DNA G4s. Nuclear magnetic resonance (NMR) spectroscopy with hydrogen-deuterium exchange (HDX) reveals the presence of persistent imino proton signals corresponding to the central G-quartet during topological conversion of Tel23 and Tel25 G4s from hybrid to parallel forms, implying that the transition pathway mainly involves local rearrangements. In contrast, rapid HDX was observed during the transition of 22-CTA G4 from an anti-parallel form to a parallel form, resulting in complete disappearance of all the imino proton signals, suggesting the involvement of substantial unfolding events associated with the topological transition. Site-specific imino proton NMR assignments of Tel23 G4 enable determination of the interconversion rates of individual guanine bases and detection of the presence of intermediate states. Since the rate of ligand binding is much higher than the rate of ligand-induced topological conversion, a three-state kinetic model was evoked to establish the associated energy diagram for the topological conversion of Tel23 G4 induced by BMVC-8C3O. PMID:26975658

  7. A novel transition pathway of ligand-induced topological conversion from hybrid forms to parallel forms of human telomeric G-quadruplexes

    PubMed Central

    Wang, Zi-Fu; Li, Ming-Hao; Chen, Wei-Wen; Hsu, Shang-Te Danny; Chang, Ta-Chau

    2016-01-01

    The folding topology of DNA G-quadruplexes (G4s) depends not only on their nucleotide sequences but also on environmental factors and/or ligand binding. Here, a G4 ligand, 3,6-bis(1-methyl-4-vinylpyridium iodide)-9-(1-(1-methyl-piperidinium iodide)-3,6,9-trioxaundecane) carbazole (BMVC-8C3O), can induce topological conversion of non-parallel to parallel forms in human telomeric DNA G4s. Nuclear magnetic resonance (NMR) spectroscopy with hydrogen-deuterium exchange (HDX) reveals the presence of persistent imino proton signals corresponding to the central G-quartet during topological conversion of Tel23 and Tel25 G4s from hybrid to parallel forms, implying that the transition pathway mainly involves local rearrangements. In contrast, rapid HDX was observed during the transition of 22-CTA G4 from an anti-parallel form to a parallel form, resulting in complete disappearance of all the imino proton signals, suggesting the involvement of substantial unfolding events associated with the topological transition. Site-specific imino proton NMR assignments of Tel23 G4 enable determination of the interconversion rates of individual guanine bases and detection of the presence of intermediate states. Since the rate of ligand binding is much higher than the rate of ligand-induced topological conversion, a three-state kinetic model was evoked to establish the associated energy diagram for the topological conversion of Tel23 G4 induced by BMVC-8C3O. PMID:26975658

  8. Form analysis using digital signal processing reliably discriminates far-field R waves from P waves.

    PubMed

    Van Hemel, Norbert M; Wohlgemuth, Peter; Engbers, Jos G; Lawo, Thomas; Nebaznivy, Jan; Taborsky, Milos; Witte, Joachim; Boute, Wim; Munneke, Dave; Van Groeningen, Chris

    2004-12-01

    The correct detection of atrial arrhythmias by pacemakers is often limited by the presence of far-field R waves (FFRWs) in the atrial electrogram. Digital signal processing (DSP) of intracardiac signals is assumed to provide improved discrimination between P waves and FFRWs when compared to current methods. For this purpose, 100 bipolar and unipolar intracardiac atrial recordings from 31 patients were collected during pacemaker replacement and used for the off-line application of a novel DSP algorithm. Digital processing of the atrial intracardiac electrogram (IEGM) signals (8 bit, 800 samples/s) included filtering and calculation of the maximum amplitude and slope of the detected events. The form parameter was calculated, being the sum of the most negative value of the amplitude and that of the slope of the detected event. The algorithm collects form parameter data of P waves and FFRWs and composes histograms of these data. A sufficiently large gap between the FFRW and P wave histograms allows discrimination of these two signals based on form parameters. Three independent observers reviewed the reliability of classification with this algorithm. Sensitivity and specificity of FFRW detection were 99.63% and 100%, respectively, and no P waves were falsely classified. It can be concluded that this novel DSP algorithm shows excellent discrimination of FFRWs under off-line conditions and justify the implementation of this algorithm in future pacemakers for real-time discrimination between P waves and FFRWs. This method prevents false mode switching and allows correct and immediate intervention pacing for atrial tachyarrhythmias. PMID:15613124

  9. Parallel Computing of Multi-scale Finite Element Sheet Forming Analyses Based on Crystallographic Homogenization Method

    SciTech Connect

    Kuramae, Hiroyuki; Okada, Kenji; Uetsuji, Yasutomo; Nakamachi, Eiji; Tam, Nguyen Ngoc; Nakamura, Yasunori

    2005-08-05

    Since the multi-scale finite element analysis (FEA) requires large computation time, development of the parallel computing technique for the multi-scale analysis is inevitable. A parallel elastic/crystalline viscoplastic FEA code based on a crystallographic homogenization method has been developed using a PC cluster. The homogenization scheme is introduced to compute macro-continuum plastic deformations and material properties by considering a polycrystal texture. Since the dynamic explicit method is applied to this method, the analysis using micro crystal structures computes the homogenized stresses in parallel based on domain partitioning of macro-continuum without solving simultaneous linear equations. The micro-structure is defined by crystal orientations measured by scanning electron microscopy (SEM) with electron backscatter diffraction (EBSD). In order to improve parallel performance of elastoplasticity analysis, which dynamically and partially increases computational costs during the analysis, a dynamic workload balancing technique is introduced to the parallel analysis. The technique, which is an automatic task distribution method, is realized by adaptation of subdomain size for macro-continuum to maintain the computational load balancing among cluster nodes. The analysis code is applied to estimate the polycrystalline sheet metal formability.

  10. Re-forming supercritical quasi-parallel shocks. I - One- and two-dimensional simulations

    NASA Technical Reports Server (NTRS)

    Thomas, V. A.; Winske, D.; Omidi, N.

    1990-01-01

    The process of reforming supercritical quasi-parallel shocks is investigated using one-dimensional and two-dimensional hybrid (particle ion, massless fluid electron) simulations both of shocks and of simpler two-stream interactions. It is found that the supercritical quasi-parallel shock is not steady. Instead of a well-defined shock ramp between upstream and downstream states that remains at a fixed position in the flow, the ramp periodically steepens, broadens, and then reforms upstream of its former position. It is concluded that the wave generation process is localized at the shock ramp and that the reformation process proceeds in the absence of upstream perturbations intersecting the shock.

  11. Structural Aspects of the Antiparallel and Parallel Duplexes Formed by DNA, 2’-O-Methyl RNA and RNA Oligonucleotides

    PubMed Central

    Szabat, Marta; Pedzinski, Tomasz; Czapik, Tomasz; Kierzek, Elzbieta; Kierzek, Ryszard

    2015-01-01

    This study investigated the influence of the nature of oligonucleotides on the abilities to form antiparallel and parallel duplexes. Base pairing of homopurine DNA, 2’-O-MeRNA and RNA oligonucleotides with respective homopyrimidine DNA, 2’-O-MeRNA and RNA as well as chimeric oligonucleotides containing LNA resulted in the formation of 18 various duplexes. UV melting, circular dichroism and fluorescence studies revealed the influence of nucleotide composition on duplex structure and thermal stability depending on the buffer pH value. Most duplexes simultaneously adopted both orientations. However, at pH 5.0, parallel duplexes were more favorable. Moreover, the presence of LNA nucleotides within a homopyrimidine strand favored the formation of parallel duplexes. PMID:26579720

  12. Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data

    ERIC Educational Resources Information Center

    Dinno, Alexis

    2009-01-01

    Horn's parallel analysis (PA) is the method of consensus in the literature on empirical methods for deciding how many components/factors to retain. Different authors have proposed various implementations of PA. Horn's seminal 1965 article, a 1996 article by Thompson and Daniel, and a 2004 article by Hayton, Allen, and Scarpello all make assertions…
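
    A minimal implementation of Horn's parallel analysis as it is commonly described: retain a component only while its observed eigenvalue exceeds a chosen percentile of eigenvalues obtained from random data of the same dimensions. The simulated data are for demonstration only, and the article's point is precisely that published implementations differ in details such as mean versus percentile thresholds and the number of iterations.

```python
import numpy as np

def parallel_analysis(data, n_iter=500, percentile=95, seed=0):
    """Number of components whose observed eigenvalues exceed the random-data threshold."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))            # uncorrelated random data, same size
        rand[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    threshold = np.percentile(rand, percentile, axis=0)
    keep = 0
    for observed, limit in zip(obs, threshold):
        if observed > limit:
            keep += 1
        else:
            break
    return keep

# Simulated data with two correlated blocks of variables -> PA should suggest 2.
rng = np.random.default_rng(4)
f = rng.standard_normal((300, 2))
data = np.hstack([f[:, [0]] + 0.5 * rng.standard_normal((300, 3)),
                  f[:, [1]] + 0.5 * rng.standard_normal((300, 3))])
print("components to retain:", parallel_analysis(data))
```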

  13. Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.

    SciTech Connect

    Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee; Neary, Vincent Sinclair

    2014-09-01

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height (Hs) and energy period (Te) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (IFORM) is standard design practice for generating environmental contours. In this paper, the traditional application of the IFORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the IFORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: Inverse FORM, Principal Component Analysis, Environmental Contours, Extreme Sea State Characterization, Wave Energy Converters
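
    A hedged sketch of the conventional IFORM step the paper starts from: points on a circle of radius beta in standard normal space are mapped to (Hs, Te) through an assumed joint model, here a Weibull marginal for Hs and a lognormal conditional for Te. The distribution parameters are placeholders rather than fitted hindcast values, and the paper's modifications (PCA and new distribution-fitting techniques) are not shown.

```python
import numpy as np
from scipy import stats

# Return period and sea-state duration define the target exceedance probability.
T_return_years, sea_state_hours = 100, 3
n_states = T_return_years * 365.25 * 24 / sea_state_hours
beta = stats.norm.ppf(1 - 1 / n_states)                 # reliability index (circle radius)

theta = np.linspace(0, 2 * np.pi, 181)
u1, u2 = beta * np.cos(theta), beta * np.sin(theta)     # points in standard normal space

# Assumed joint model: Hs ~ Weibull, Te | Hs ~ lognormal (placeholder parameters).
hs = stats.weibull_min.ppf(stats.norm.cdf(u1), c=1.5, scale=2.0)
mu_te = np.log(4.0 + 1.5 * np.sqrt(hs))
te = stats.lognorm.ppf(stats.norm.cdf(u2), s=0.15, scale=np.exp(mu_te))

print(f"beta = {beta:.2f}; contour Hs range: {hs.min():.2f}-{hs.max():.2f} m")
```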

  14. Reliability and Validity of the Brief Problem Monitor, an Abbreviated Form of the Child Behavior Checklist

    PubMed Central

    Piper, Brian J.; Gray, Hilary M.; Raber, Jacob; Birkett, Melissa A.

    2014-01-01

    Aim The parent form of the 113-item Child Behavior Checklist (CBCL) is widely utilized by child psychiatrists and psychologists. This report examines the reliability and validity of a recently developed abbreviated version of the CBCL, the Brief Problem Monitor (BPM). Methods Caregivers (N=567) completed the CBCL online and the 19 BPM items were examined separately. Results Internal consistency of the BPM was high (Cronbach’s alpha=0.91) and satisfactory for the Internalizing (0.78), Externalizing (0.86), and Attention (0.87) scales. High correlations between the CBCL and BPM were identified for the total score (r=0.95) as well as the Internalizing (0.86), Externalizing (0.93), and Attention (0.97) scales. The BPM and its scales were sensitive and identified significantly higher behavioral and emotional problems among children whose caregiver reported a psychiatric diagnosis of Attention Deficit Hyperactivity Disorder, bipolar, depression, anxiety, developmental disabilities, or Autism Spectrum Disorders relative to a comparison group that had not been diagnosed with these disorders. BPM ratings also differed by the socioeconomic status and education of the caregiver. Mothers with higher annual incomes rated their children as having 38.8% fewer total problems (Cohen’s d=0.62) as well as 42.8% lower Internalizing (d=0.53), 44.1% lower Externalizing (d=0.62), and 30.9% lower Attention (d=0.39). A similar pattern was evident for maternal education (d=0.30 to 0.65). Conclusion Overall, these findings provide strong psychometric support for the BPM, although the differences based on the characteristics of the parent indicate that additional information from other sources (e.g., teachers) should be obtained to complement parental reports. PMID:24735087
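
    The internal consistency figures reported above are Cronbach's alpha coefficients; the following is a minimal sketch of how that statistic is computed from a respondent-by-item score matrix. The simulated 19-item ratings are hypothetical stand-ins, not BPM data.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_variance = items.sum(axis=1).var(ddof=1)     # variance of total scores
        return (k / (k - 1.0)) * (1.0 - item_variances / total_variance)

    # Hypothetical example: correlated 0-2 ratings on 19 items, as on the BPM.
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(300, 1))
    scores = np.clip(np.rint(latent + rng.normal(scale=0.8, size=(300, 19)) + 1), 0, 2)
    print(round(cronbach_alpha(scores), 2))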

  15. In search of parsimony: reliability and validity of the Functional Performance Inventory-Short Form

    PubMed Central

    Leidy, Nancy Kline; Knebel, Ann

    2010-01-01

    Purpose: The 65-item Functional Performance Inventory (FPI), developed to quantify functional performance in patients with chronic obstructive pulmonary disease (COPD), has been shown to be reliable and valid. The purpose of this study was to create a shorter version of the FPI while preserving the integrity and psychometric properties of the original. Patients and methods: Secondary analyses were performed on qualitative and quantitative data used to develop and validate the FPI long form. Seventeen men and women with COPD participated in the qualitative work, while 154 took part in the mail survey; 54 completed 2-week reproducibility assessment, and 40 relatives contributed validation data. Following a systematic process of item reduction, performance properties of the 32-item short form (FPI-SF) were examined. Results: The FPI-SF was internally consistent (total scale α = 0.93; subscales: 0.76–0.89) and reproducible (r = 0.88; subscales: 0.69–0.86). Validity was maintained, with significant (P < 0.001) correlations between the FPI-SF and the Functional Status Questionnaire (activities of daily living, r = 0.71; instrumental activities of daily living, r = 0.73), Duke Activity Status Index (r = 0.65), Bronchitis-Emphysema Symptom Checklist (r = −0.61), Basic Need Satisfaction Inventory (r = 0.61) and Cantril’s Ladder of Life Satisfaction (r = 0.63), and Katz Adjustment Scale for Relatives (socially expected activities, r = 0.51; free-time activities, r = −0.49, P < 0.01). The FPI-SF differentiated patients with an FEV1% predicted greater than and less than 50% (t = 4.26, P < 0.001), and those with severe and moderate levels of perceived severity and activity limitation (t = 9.91, P < 0.001). Conclusion: Results suggest the FPI-SF is a viable alternative to the FPI for situations in which a shorter instrument is desired. Further assessment of the instrument’s performance properties in new samples of patients with COPD is warranted. PMID:21191436

  16. A reliability study of springback on the sheet metal forming process under probabilistic variation of prestrain and blank holder force

    NASA Astrophysics Data System (ADS)

    Mrad, Hatem; Bouazara, Mohamed; Aryanpour, Gholamreza

    2013-08-01

    This work deals with a reliability assessment of the springback problem during the sheet metal forming process. The effects of operative parameters and material properties, blank holder force and plastic prestrain, on springback are investigated. A generic reliability approach was developed to control springback. Subsequently, the Monte Carlo simulation technique in conjunction with the Latin hypercube sampling method was adopted to study the probabilistic springback. The finite element method based on implicit/explicit algorithms was used to model the springback problem. The proposed constitutive law for sheet metal takes into account the adaptation of plastic parameters of the hardening law for each prestrain level considered. The Rackwitz-Fiessler algorithm is used to find reliability properties from response surfaces of chosen springback geometrical parameters. The obtained results were analyzed using multi-state limit reliability functions based on geometry compensations.
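
    As a rough illustration of the sampling step described above, the following minimal sketch computes a Latin hypercube Monte Carlo estimate of a failure probability. The limit-state function is a placeholder springback surrogate and the input statistics are illustrative values; they are not the paper's finite element model, hardening law, or response surfaces.

    import numpy as np
    from scipy.stats import norm

    def latin_hypercube_normal(n_samples, means, stds, seed=0):
        """Latin hypercube samples of independent normal variables."""
        rng = np.random.default_rng(seed)
        u = np.empty((n_samples, len(means)))
        for j in range(len(means)):
            # One stratified uniform draw per equal-probability bin, shuffled.
            u[:, j] = (rng.permutation(n_samples) + rng.uniform(size=n_samples)) / n_samples
        return norm.ppf(u) * np.asarray(stds) + np.asarray(means)

    def springback_angle(x):
        """Placeholder surrogate: springback grows with prestrain, falls with blank holder force."""
        prestrain, bhf = x[:, 0], x[:, 1]
        return 6.0 + 40.0 * prestrain - 0.04 * bhf

    # Random inputs: plastic prestrain and blank holder force (kN), illustrative statistics.
    samples = latin_hypercube_normal(10000, means=[0.05, 60.0], stds=[0.01, 5.0])
    failures = springback_angle(samples) > 7.5        # limit state: allowable springback angle
    print("Estimated failure probability:", failures.mean())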

  17. Comparisons between Classical Test Theory and Item Response Theory in Automated Assembly of Parallel Test Forms

    ERIC Educational Resources Information Center

    Lin, Chuan-Ju

    2008-01-01

    The automated assembly of alternate test forms for online delivery provides an alternative to computer-administered, fixed test forms, or computerized-adaptive tests when a testing program migrates from paper/pencil testing to computer-based testing. The weighted deviations model (WDM) heuristic is particularly promising for automated test assembly…

  18. Developing Form Assembly Specifications for Exams with Multiple Choice and Constructed Response Items: Balancing Reliability and Validity Concerns

    ERIC Educational Resources Information Center

    Hendrickson, Amy; Patterson, Brian; Ewing, Maureen

    2010-01-01

    The psychometric considerations and challenges associated with including constructed response items on tests are discussed along with how these issues affect the form assembly specifications for mixed-format exams. Reliability and validity, security and fairness, pretesting, content and skills coverage, test length and timing, weights, statistical…

  19. A Validation Study of the Dutch Childhood Trauma Questionnaire-Short Form: Factor Structure, Reliability, and Known-Groups Validity

    ERIC Educational Resources Information Center

    Thombs, Brett D.; Bernstein, David P.; Lobbestael, Jill; Arntz, Arnoud

    2009-01-01

    Objective: The 28-item Childhood Trauma Questionnaire-Short Form (CTQ-SF) has been translated into at least 10 different languages. The validity of translated versions of the CTQ-SF, however, has generally not been examined. The objective of this study was to investigate the factor structure, internal consistency reliability, and known-groups…

  20. Measuring Teacher Self-Report on Classroom Practices: Construct Validity and Reliability of the Classroom Strategies Scale-Teacher Form

    ERIC Educational Resources Information Center

    Reddy, Linda A.; Dudek, Christopher M.; Fabiano, Gregory A.; Peters, Stephanie

    2015-01-01

    This article presents information about the construct validity and reliability of a new teacher self-report measure of classroom instructional and behavioral practices (the Classroom Strategies Scales-Teacher Form; CSS-T). The theoretical underpinnings and empirical basis for the instructional and behavioral management scales are presented.…

  1. Utilization of parallel processing in solving the inviscid form of the average-passage equation system for multistage turbomachinery

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.

    1987-01-01

    A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to are specific to the Cray X-MP line of computers and its associated SSD (Solid-State Disk). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.

  2. An Examination of the Assumption that the Equating of Parallel Forms is Population-Independent.

    ERIC Educational Resources Information Center

    Angoff, William H.; Cowell, William R.

    Linear and equipercentile equating conversions were developed for two forms of the Graduate Record Examinations (GRE) quantitative test and the verbal-plus-quantitative test. From a very large sample of students taking the GRE in October 1981, subpopulations were selected with respect to race, sex, field of study, and level of performance (defined…

  3. Parallel Cortical Networks Formed by Modular Organization of Primary Motor Cortex Outputs.

    PubMed

    Hamadjida, Adjia; Dea, Melvin; Deffeyes, Joan; Quessy, Stephan; Dancause, Numa

    2016-07-11

    In primates, the refinement of motor behaviors, in particular hand use, is associated with the establishment of more direct projections from primary motor cortex (M1) onto cervical motoneurons [1, 2] and the appearance of additional premotor and sensory cortical areas [3]. All of these areas have reciprocal connections with M1 [4-7]. Thus, during the evolution of the sensorimotor network, the number of interlocutors with which M1 interacts has tremendously increased. It is not clear how these additional interconnections are organized in relation to one another within the hand representation of M1. This is important because the organization of connections between M1 and phylogenetically newer and specialized cortical areas is likely to be key to the increased repertoire of hand movements in primates. In cebus monkeys, we used injections of retrograde tracers into the hand representation of different cortical areas of the sensorimotor network (ventral and dorsal premotor areas [PMv and PMd], supplementary motor area [SMA], and posterior parietal cortex [area 5]), and we analyzed the pattern of labeled neurons within the hand representation of M1. Instead of being uniformly dispersed across M1, neurons sending projections to each distant cortical area were largely segregated in different subregions of M1. These data support the view that primates split the cortical real estate of M1 into modules, each preferentially interconnected with a particular cortical area within the sensorimotor network. This modular organization could sustain parallel processing of interactions with multiple specialized cortical areas to increase the behavioral repertoire of the hand. PMID:27322001

  4. The relative noise levels of parallel axis gear sets with various contact ratios and gear tooth forms

    NASA Technical Reports Server (NTRS)

    Drago, Raymond J.; Lenski, Joseph W., Jr.; Spencer, Robert H.; Valco, Mark; Oswald, Fred B.

    1993-01-01

    The real noise reduction benefit which may be obtained through the use of one gear tooth form as compared to another is an important design parameter for any geared system, especially for helicopters in which both weight and reliability are very important factors. This paper describes the design and testing of nine sets of gears which are as identical as possible except for their basic tooth geometry. Noise measurements were made at various combinations of load and speed for each gear set so that direct comparisons could be made. The resultant data were analyzed so that valid conclusions could be drawn and interpreted for design use.

  5. Self-Formed Barrier with Cu-Mn alloy Metallization and its Effects on Reliability

    SciTech Connect

    Koike, J.; Wada, M.; Usui, T.; Nasu, H.; Takahashi, S.; Shimizu, N.; Yoshimaru, M.; Shibata, H.

    2006-02-07

    Advancement of semiconductor devices requires the realization of an ultra-thin (less than 5 nm thick) diffusion barrier layer between Cu interconnect and insulating layers. Self-forming barrier layers have been considered as an alternative barrier structure to the conventional Ta/TaN barrier layers. The present work investigated the possibility of the self-forming barrier layer using Cu-Mn alloy thin films deposited directly on SiO2. After annealing at 450 deg. C for 30 min, an amorphous oxide layer of 3-4 nm in thickness was formed uniformly at the interface. The oxide formation was accompanied by complete expulsion of Mn atoms from the Cu-Mn alloy, leading to a drastic decrease in resistivity of the film. No interdiffusion was observed between Cu and SiO2, indicating an excellent diffusion-barrier property of the interface oxide.

  6. The Validity and Reliability of The Social Communication Questionnaire-Turkish Form in Autistics Aged 4-18 Years

    PubMed Central

    Avcil, Sibelnur; Baykara, Burak; Baydur, Hakan; Münir, Kerim M.; Inal Emiroğlu, Neslihan

    2016-01-01

    Objective The Social Communication Questionnaire (SCQ) is a valid and reliable 40-item scale used to assess pervasive developmental disorders (PDDs). The aim of this study was to determine the validity and reliability of the SCQ-Turkish Form (SCQ-TF). Materials and Methods The study included 100 children and adolescents aged 4-18 years; 50 were diagnosed as PDD and 50 were diagnosed with intellectual disability (ID) based on DSM-IV-TR criteria. The consistency, test-retest reliability, content validity, and discriminant validity of SCQ-TF for the groups in the study sample were evaluated. SCQ-TF was compared to the Childhood Autism Rating Scale (CARS), Autism Behavioural Checklist (ABC), and Clinical Global Impression-Severity of Illness (CGI-SI). The most appropriate SCQ-TF cut-off point was determined via ROC analysis. Results The 4-factor structure of SCQ-TF accounted for 43.0% of the observed total variance. Correlations between SCQ-TF and the other measures were significant. The Cronbach's alpha value for the SCQ-TF total score was 0.80. The intraclass correlation coefficient (ICC) varied between 0.87 and 0.96, and the cut-off point was 15. Conclusion The findings show that SCQ-TF is valid and reliable for use in Turkey in those aged 4-18 years. PMID:25742038

  7. Defining the "Correct Form": Using Biomechanics to Develop Reliable and Valid Assessment Instruments

    ERIC Educational Resources Information Center

    Satern, Miriam N.

    2011-01-01

    Physical educators should be able to define the "correct form" they expect to see each student performing in their classes. Moreover, they should be able to go beyond assessing students' skill levels by measuring the outcomes (products) of movements (i.e., how far they throw the ball or how many successful attempts are completed) or counting the…

  8. An Investigation into the Test Reliability of the Pupil Control Ideology Form.

    ERIC Educational Resources Information Center

    Gaffney, Patrick V.; Byrd-Gaffney, Sharon

    The Pupil Control Ideology Form (PCI) is one of the major instruments used by researchers interested in the study of school climate. Pupil control is a central feature of the organizational life of schools, and each school appears to have a prevailing ideology of pupil control. The PCI is a self-report instrument used to measure an educator's…

  9. Reliability of equivalent sphere model in blood-forming organ dose estimation

    SciTech Connect

    Shinn, J.L.; Wilson, J.W.; Nealy, J.E.

    1990-04-01

    The radiation dose equivalents to blood-forming organs (BFO's) of the astronauts at the Martian surface due to major solar flare events are calculated using the detailed body geometry of Langley and Billings. The solar flare spectra of February 1956, November 1960, and August 1972 events are employed instead of the idealized Webber form. The detailed geometry results are compared with those based on the 5-cm sphere model which was used often in the past to approximate BFO dose or dose equivalent. Larger discrepancies are found for the later two events possibly due to the lower numbers of highly penetrating protons. It is concluded that the 5-cm sphere model is not suitable for quantitative use in connection with future NASA deep-space, long-duration mission shield design studies.

  10. Reliability of equivalent sphere model in blood-forming organ dose estimation

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.; Nealy, John E.

    1990-01-01

    The radiation dose equivalents to blood-forming organs (BFO's) of the astronauts at the Martian surface due to major solar flare events are calculated using the detailed body geometry of Langley and Billings. The solar flare spectra of February 1956, November 1960, and August 1972 events are employed instead of the idealized Webber form. The detailed geometry results are compared with those based on the 5-cm sphere model which was used often in the past to approximate BFO dose or dose equivalent. Larger discrepancies are found for the later two events possibly due to the lower numbers of highly penetrating protons. It is concluded that the 5-cm sphere model is not suitable for quantitative use in connection with future NASA deep-space, long-duration mission shield design studies.

  11. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative
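
    As a rough illustration of the surrogate (offline approximation) strategy described above, the following minimal sketch fits an inexpensive polynomial response surface to a small design-of-experiments set of "expensive" solver evaluations and then queries the surrogate in place of further runs. The expensive_cfd function and the design range are hypothetical stand-ins, not the study's valve or piston-cylinder model.

    import numpy as np

    def expensive_cfd(valve_lift):
        """Stand-in for a costly CFD evaluation returning an aerodynamic force."""
        return 120.0 * np.sin(2.5 * valve_lift) + 15.0 * valve_lift**2

    # Design of experiments: a handful of sample points spanning the design range.
    doe_points = np.linspace(0.1, 1.0, 6)
    doe_forces = np.array([expensive_cfd(x) for x in doe_points])

    # Offline surrogate: a cubic response surface fitted to the DOE results.
    surrogate = np.polynomial.Polynomial.fit(doe_points, doe_forces, deg=3)

    # The surrogate is now queried cheaply in place of additional CFD runs.
    query = np.linspace(0.1, 1.0, 200)
    worst_error = np.max(np.abs(surrogate(query) - expensive_cfd(query)))
    print("Maximum surrogate error over the design range:", round(float(worst_error), 2))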

  12. easyCBM Beginning Reading Measures: Grades K-1 Alternate Form Reliability and Criterion Validity with the SAT-10. Technical Report #1403

    ERIC Educational Resources Information Center

    Wray, Kraig; Lai, Cheng-Fei; Sáez, Leilani; Alonzo, Julie; Tindal, Gerald

    2013-01-01

    We report the results of an alternate form reliability and criterion validity study of kindergarten and grade 1 (N = 84-199) reading measures from the easyCBM© assessment system and Stanford Early School Achievement Test/Stanford Achievement Test, 10th edition (SESAT/SAT-10) across 5 time points. The alternate form reliabilities ranged from…

  13. Reliability of the State-Trait Anxiety Inventory, Form Y in Japanese samples.

    PubMed

    Iwata, N; Mishima, N

    1999-04-01

    The internal consistency of the State-Trait Anxiety Inventory, Form Y was examined using data collected from Japanese participants by five diverse surveys, one of which also included American university students. Cronbach coefficient alpha was calculated separately for state and trait items as well as for anxiety-present and absent items. The internal consistency was higher for the anxiety-absent items than for the state and trait anxiety items, but this tendency was not clear for the anxiety-present items. The trait anxiety items showed the lowest internal consistency for all Japanese groups, whereas the anxiety-present items showed the lowest alpha for American university students. It can be considered that this difference might induce the difference in two-factor structure between Japanese and people in Western countries. PMID:10335063

  14. Parallel-plate submicron gap formed by micromachined low-density pillars for near-field radiative heat transfer

    SciTech Connect

    Ito, Kota; Miura, Atsushi; Iizuka, Hideo; Toshiyoshi, Hiroshi

    2015-02-23

    Near-field radiative heat transfer has been a subject of great interest due to the applicability to thermal management and energy conversion. In this letter, a submicron gap between a pair of diced fused quartz substrates is formed by using micromachined low-density pillars to obtain both the parallelism and small parasitic heat conduction. The gap uniformity is validated by optical interferometry at four corners of the substrates. The heat flux across the gap is measured in a steady-state and is no greater than twice the theoretically predicted radiative heat flux, which indicates that the parasitic heat conduction is suppressed to the level of the radiative heat transfer or less. The heat conduction through the pillars is modeled, and it is found to be limited by the thermal contact resistance between the pillar top and the opposing substrate surface. The methodology to form and evaluate the gap promotes the near-field radiative heat transfer to various applications such as thermal rectification, thermal modulation, and thermophotovoltaics.

  15. Non-image forming effects of illuminance level: Exploring parallel effects on physiological arousal and task performance.

    PubMed

    Huiberts, Laura M; Smolders, Karin C H J; de Kort, Yvonne A W

    2016-10-01

    This study investigated diurnal non-image forming (NIF) effects of illuminance level on physiological arousal in parallel to NIF effects on vigilance and working memory performance. We employed a counterbalanced within-subjects design in which thirty-nine participants (mean age=21.2; SD=2.1; 11 male) completed three 90-min sessions (165 vs. 600lx vs. 1700lx at eye level) either in the morning (N=18) or afternoon (N=21). During each session, participants completed four measurement blocks (incl. one baseline block) each consisting of a 10-min Psychomotor Vigilance Task (PVT) and a Backwards Digit-Span Task (BDST) including easy trials (4-6 digits) and difficult trials (7-8 digits). Heart rate (HR), skin conductance level (SCL) and systolic blood pressure (SBP) were measured continuously. The results revealed significant improvements in performance on the BDST difficult trials under 1700lx vs. 165lx (p=0.01), while illuminance level did not affect performance on the PVT and BDST easy trials. Illuminance level impacted HR and SCL, but not SBP. In the afternoon sessions, HR was significantly higher under 1700lx vs. 165lx during PVT performance (p=0.05), while during BDST performance, HR was only slightly higher under 600 vs. 165lx (p=0.06). SCL was significantly higher under 1700lx vs. 165lx during performance on BDST easy trials (p=0.02) and showed similar, but nonsignificant trends during the PVT and BDST difficult trials. Although both physiology and performance were affected by illuminance level, no consistent pattern emerged with respect to parallel changes in physiology and performance. Rather, physiology and performance seemed to be affected independently, via unique pathways. PMID:27221368

  16. The Forms of Bullying Scale (FBS): validity and reliability estimates for a measure of bullying victimization and perpetration in adolescence.

    PubMed

    Shaw, Thérèse; Dooley, Julian J; Cross, Donna; Zubrick, Stephen R; Waters, Stacey

    2013-12-01

    The study of bullying behavior and its consequences for young people depends on valid and reliable measurement of bullying victimization and perpetration. Although numerous self-report bullying-related measures have been developed, robust evidence of their psychometric properties is scant, and several limitations inhibit their applicability. The Forms of Bullying Scale (FBS), with versions to measure bullying victimization (FBS-V) and perpetration (FBS-P), was developed on the basis of existing instruments, for use with 12- to 15-year-old adolescents to economically, yet comprehensively measure both bullying perpetration and victimization. Measurement properties were estimated. Scale validity was tested using data from 2 independent studies of 3,496 Grade 8 and 783 Grade 8-10 students, respectively. Construct validity of scores on the FBS was shown in confirmatory factor analysis. The factor structure was not invariant across gender. Strong associations between the FBS-V and FBS-P and separate single-item bullying items demonstrated adequate concurrent validity. Correlations, in directions as expected with social-emotional outcomes (i.e., depression, anxiety, conduct problems, and peer support), provided robust evidence of convergent and discriminant validity. Responses to the FBS items were found to be valid and concurrently reliable measures of self-reported frequency of bullying victimization and perpetration, as well as being useful to measure involvement in the different forms of bullying behaviors. (PsycINFO Database Record (c) 2013 APA, all rights reserved). PMID:23730831

  17. The Four Canonical TPR Subunits of Human APC/C Form Related Homo-Dimeric Structures and Stack in Parallel to Form a TPR Suprahelix

    PubMed Central

    Zhang, Ziguo; Chang, Leifu; Yang, Jing; Conin, Nora; Kulkarni, Kiran; Barford, David

    2013-01-01

    The anaphase-promoting complex or cyclosome (APC/C) is a large E3 RING-cullin ubiquitin ligase composed of between 14 and 15 individual proteins. A striking feature of the APC/C is that only four proteins are involved in directly recognizing target proteins and catalyzing the assembly of a polyubiquitin chain. All other subunits, which account for > 80% of the mass of the APC/C, provide scaffolding functions. A major proportion of these scaffolding subunits are structurally related. In metazoans, there are four canonical tetratricopeptide repeat (TPR) proteins that form homo-dimers (Apc3/Cdc27, Apc6/Cdc16, Apc7 and Apc8/Cdc23). Here, we describe the crystal structure of the N-terminal homo-dimerization domain of Schizosaccharomyces pombe Cdc23 (Cdc23Nterm). Cdc23Nterm is composed of seven contiguous TPR motifs that self-associate through a related mechanism to those of Cdc16 and Cdc27. Using the Cdc23Nterm structure, we generated a model of full-length Cdc23. The resultant “V”-shaped molecule docks into the Cdc23-assigned density of the human APC/C structure determined using negative stain electron microscopy (EM). Based on sequence conservation, we propose that Apc7 forms a homo-dimeric structure equivalent to those of Cdc16, Cdc23 and Cdc27. The model is consistent with the Apc7-assigned density of the human APC/C EM structure. The four canonical homo-dimeric TPR proteins of human APC/C stack in parallel on one side of the complex. Remarkably, the uniform relative packing of neighboring TPR proteins generates a novel left-handed suprahelical TPR assembly. This finding has implications for understanding the assembly of other TPR-containing multimeric complexes. PMID:23583778

  18. The Bruininks-Oseretsky Test of Motor Proficiency-Short Form is reliable in children living in remote Australian Aboriginal communities

    PubMed Central

    2013-01-01

    Background The Lililwan Project is the first population-based study to determine Fetal Alcohol Spectrum Disorders (FASD) prevalence in Australia and was conducted in the remote Fitzroy Valley in North Western Australia. The diagnostic process for FASD requires accurate assessment of gross and fine motor functioning using standardised cut-offs for impairment. The Bruininks-Oseretsky Test of Motor Proficiency, Second Edition (BOT-2) is a norm-referenced assessment of motor function used worldwide and in FASD clinics in North America. It is available in a Complete Form with 53 items or a Short Form with 14 items. Its reliability in measuring motor performance in children exposed to alcohol in utero or living in remote Australian Aboriginal communities is unknown. Methods A prospective inter-rater and test-retest reliability study was conducted using the BOT-2 Short Form. A convenience sample of children (n = 30) aged 7 to 9 years participating in the Lililwan Project cohort (n = 108) study, completed the reliability study. Over 50% of mothers of Lililwan Project children drank alcohol during pregnancy. Two raters simultaneously scoring each child determined inter-rater reliability. Test-retest reliability was determined by assessing each child on a second occasion using predominantly the same rater. Reliability was analysed by calculating Intra-Class correlation Coefficients, ICC(2,1), Percentage Exact Agreement (PEA) and Percentage Close Agreement (PCA) and measures of Minimal Detectable Change (MDC) were calculated. Results Thirty Aboriginal children (18 male, 12 female: mean age 8.8 years) were assessed at eight remote Fitzroy Valley communities. The inter-rater reliability for the BOT-2 Short Form score sheet outcomes ranged from 0.88 (95%CI, 0.77 – 0.94) to 0.92 (95%CI, 0.84 – 0.96) indicating excellent reliability. The test-retest reliability (median interval between tests being 45.5 days) for the BOT-2 Short Form score sheet outcomes ranged from

  19. The Myeloproliferative Neoplasm Symptom Assessment Form (MPN-SAF): international prospective validation and reliability trial in 402 patients.

    PubMed

    Scherber, Robyn; Dueck, Amylou C; Johansson, Peter; Barbui, Tiziano; Barosi, Giovanni; Vannucchi, Alessandro M; Passamonti, Francesco; Andreasson, Bjorn; Ferarri, Maria L; Rambaldi, Alessandro; Samuelsson, Jan; Birgegard, Gunnar; Tefferi, Ayalew; Harrison, Claire N; Radia, Deepti; Mesa, Ruben A

    2011-07-14

    Symptomatic burden in myeloproliferative neoplasms is present in most patients and compromises quality of life. We sought to validate a broadly applicable 18-item instrument (Myeloproliferative Neoplasm Symptom Assessment Form [MPN-SAF], coadministered with the Brief Fatigue Inventory) to assess symptoms of myelofibrosis, essential thrombocythemia, and polycythemia vera among prospective cohorts in the United States, Sweden, and Italy. A total of 402 MPN-SAF surveys were administered (English [25%], Italian [46%], and Swedish [28%]) in 161 patients with essential thrombocythemia, 145 patients with polycythemia vera, and 96 patients with myelofibrosis. Responses among the 3 administered languages showed great consistency after controlling for MPN subtype. Strong correlations existed between individual items and key symptomatic elements represented on both the MPN-SAF and the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire-C30. Enrolling physicians' blinded opinion of patient symptoms (6 symptoms assessed) were highly correlated with corresponding patients' responses. Serial administration of the English MPN-SAF among 53 patients showed that most MPN-SAF items are well correlated (r > 0.5, P < .001) and highly reproducible (intraclass correlation coefficient > 0.7). The MPN-SAF is a comprehensive and reliable instrument that is available in multiple languages to evaluate symptoms associated with all types of MPNs in clinical trials globally. PMID:21536863

  20. Farsi Version of Social Skills Rating System-Secondary Student Form: Cultural Adaptation, Reliability and Construct Validity

    PubMed Central

    Eslami, Ahmad Ali; Amidi Mazaheri, Maryam; Mostafavi, Firoozeh; Abbasi, Mohamad Hadi; Noroozi, Ensieh

    2014-01-01

    Objective: Assessment of social skills is a necessary requirement to develop and evaluate the effectiveness of cognitive and behavioral interventions. This paper reports the cultural adaptation and psychometric properties of the Farsi version of the social skills rating system-secondary students form (SSRS-SS) questionnaire (Gresham and Elliot, 1990), in a normative sample of secondary school students. Methods: A two-phase design was used that phase 1 consisted of the linguistic adaptation and in phase 2, using cross-sectional sample survey data, the construct validity and reliability of the Farsi version of the SSRS-SS were examined in a sample of 724 adolescents aged from 13 to 19 years. Results: Content validity index was excellent, and the floor/ceiling effects were low. After deleting five of the original SSRS-SS items, the findings gave support for the item convergent and divergent validity. Factor analysis revealed four subscales. Results showed good internal consistency (0.89) and temporal stability (0.91) for the total scale score. Conclusion: Findings demonstrated support for the use of the 27-item Farsi version in the school setting. Directions for future research regarding the applicability of the scale in other settings and populations of adolescents are discussed. PMID:25053964

  1. HELIOS Critical Design Review: Reliability

    NASA Technical Reports Server (NTRS)

    Benoehr, H. C.; Herholz, J.; Prem, H.; Mann, D.; Reichert, L.; Rupp, W.; Campbell, D.; Boettger, H.; Zerwes, G.; Kurvin, C.

    1972-01-01

    This paper presents the Helios Critical Design Review Reliability from October 16-20, 1972. The topics include: 1) Reliability Requirement; 2) Reliability Apportionment; 3) Failure Rates; 4) Reliability Assessment; 5) Reliability Block Diagram; and 6) Reliability Information Sheet.

  2. Comparison of Educators' and Industrial Managers' Work Motivation Using Parallel Forms of the Work Components Study Questionnaire.

    ERIC Educational Resources Information Center

    Thornton, Billy W.; And Others

    The idea that educators would differ from business managers on Herzberg's motivation factors and Blum's security orientations was posited. Parallel questionnaires were used to measure the motivational variables. The sample was composed of 432 teachers, 118 administrators, and 192 industrial managers. Data were analyzed using multivariate and…

  3. Binding of oligonucleotides to a viral hairpin forming RNA triplexes with parallel G*G•C triplets

    PubMed Central

    Carmona, Pedro; Molina, Marina

    2002-01-01

    Infrared and UV spectroscopies have been used to study the assembly of a hairpin nucleotide sequence (nucleotides 3–30) of the 5′ non-coding region of the hepatitis C virus RNA (5′-GGCGGGGAUUAUCCCCGCUGUGAGGCGG-3′) with a RNA 20mer ligand (5′-CCGCCUCACAAAGGUGGGGU-3′) in the presence of magnesium ion and spermidine. The resulting complex involves two helical structural domains: the first one is an intermolecular duplex stem at the bottom of the target hairpin and the second one is a parallel triplex generated by the intramolecular hairpin duplex and the ligand. Infrared spectroscopy shows that N-type sugars are exclusively present in the complex. This is the first case of formation of a RNA parallel triplex with purine motif and shows that this type of targeting RNA strands to viral RNA duplexes can be used as an alternative to antisense oligonucleotides or ribozymes. PMID:11884630

  4. Re-forming supercritical quasi-parallel shocks. II - Mechanism for wave generation and front re-formation

    NASA Technical Reports Server (NTRS)

    Winske, D.; Thomas, V. A.; Omidi, N.; Quest, K. B.

    1990-01-01

    This paper continues the study of Thomas et al. (1990) in which hybrid simulations of quasi-parallel shocks were performed in one and two spatial dimensions. To identify the wave generation processes, the electromagnetic structure of the shock is examined by performing a number of one-dimensional hybrid simulations of quasi-parallel shocks for various upstream conditions. In addition, numerical experiments were carried out in which the backstreaming ions were removed from the calculations to show their fundamental importance in the reformation process. The calculations show that the waves are excited before ions can propagate far enough upstream to generate resonant modes. At some later times, the waves are regenerated at the leading edge of the interface, with properties like those of their initial interactions.

  5. Short-Forms of the Schedule for Nonadaptive and Adaptive Personality (SNAP) for Self- and Collateral Ratings: Development, Reliability, and Validity.

    ERIC Educational Resources Information Center

    Harlan, Elena; Clark, Lee Anna

    1999-01-01

    Reports the development of a paragraph-descriptor short form of the Schedule for Nonadaptive and Adaptive Personality (SNAP); (L. Clark, 1993) with self- and other versions. Data from 294 college students, with parental ratings for 94 students, support the reliability and validity of the measure. (SLD)

  6. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 1. Technical Report #1216

    ERIC Educational Resources Information Center

    Anderson, Daniel; Park, Jasmine, Bitnara; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due…

  7. Parent Ratings Using the Chinese Version of the Parent Gifted Rating Scales-School Form: Reliability and Validity for Chinese Students

    ERIC Educational Resources Information Center

    Li, Huijun; Lee, Donghyuck; Pfeiffer, Steve I.; Petscher, Yaacov

    2008-01-01

    This study examined the reliability and validity of the scores of a Chinese-translated version of the Gifted Rating Scales-School Form (GRS-S) using parents as raters and explored the effects of gender and grade on the ratings. A total of 222 parents participated in the study and rated their child independently using the Chinese version of the…

  8. Validity and Reliability of the Turkish Form of Technology-Rich Outcome-Focused Learning Environment Inventory

    ERIC Educational Resources Information Center

    Cakir, Mustafa

    2011-01-01

    The purpose of the study was to investigate the reliability and validity of a Turkish adaptation of Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI) which was developed by Aldridge, Dorman, and Fraser. A sample of 985 students from 16 high schools (Grades 9-12) participated in the study. Translation process followed…

  9. Investigating the Comparability of School Scores across Test Forms that Are Not Parallel. Technical Guidelines for Performance Assessment.

    ERIC Educational Resources Information Center

    Fitzpatrick, Anne R.

    This study, one of a series designed to answer practical questions about performance based assessment, examined the comparability of school scores on short, nonparallel test forms. The data were obtained from mathematics tests with both multiple choice (MC) and performance assessment (PA) items. The tests were administered in a statewide testing…

  10. Logical memory subtest of the Wechsler Memory Scale: age and education norms and alternate-form reliability of two scoring systems.

    PubMed

    Abikoff, H; Alvir, J; Hong, G; Sukoff, R; Orazio, J; Solomon, S; Saravay, S

    1987-08-01

    The Logical Memory (LM) subtest of the Wechsler Memory Scale has been characterized by imprecise scoring instructions which can make data interpretation and study comparisons difficult. A total of 339 adults, from 18 to 83 years old, took either Form I or Form II of the LM. Verbal recall of the story passages was evaluated using gist and verbatim scoring systems. Interrater reliability was very high for both scoring approaches. The two forms were equivalent for gist recall. However, verbatim recall of Form I was more difficult than Form II because the former consists of more words to remember. Recall was related more to educational level than to age. For both gist and verbatim scoring, age and education norms were generated for immediate, delayed, and 24-h recall. PMID:3597734

  11. Reliability and Validity of the Korean Version of the Childhood Trauma Questionnaire-Short Form for Psychiatric Outpatients

    PubMed Central

    Park, Seon-Cheol; Yang, Hyunjoo; Oh, Dong Hoon

    2011-01-01

    Objective The Childhood Trauma Questionnaire (CTQ) is perhaps the most widely used and well-studied retrospective measure of childhood abuse or neglect. This study tested the initial reliability and validity of a Korean translation of the Childhood Trauma Questionnaire (CTQ-K) among non-psychotic psychiatric outpatients. Methods The CTQ-K was administered to a total of 163 non-psychotic psychiatric outpatients at a university-affiliated training hospital. Internal consistency, four-week test-retest reliability, and validity were calculated. A portion of the participants (n=65) also completed the Trauma Assessment Questionnaire (TAQ), the Impact of Events Scale-Revised, and the Dissociative Experiences Scale-Taxon. Results Four-week test-retest reliability was high (r=0.87) and internal consistency was good (Cronbach's α=0.88). Each type of childhood trauma was significantly correlated with the corresponding subscale of the TAQ, thus confirming its concurrent validity. In addition, the CTQ-K total score was positively related to post-traumatic symptoms and pathological dissociation, demonstrating the convergent validity of the scale. The CTQ-K was also negatively correlated with the competence and safety subscale of the TAQ, confirming discriminant validity. Additionally, we confirmed the factorial validity by identifying a five-factor structure that explained 64% of the total variance. Conclusion Our study indicates that the CTQ-K is a measure of psychometric soundness that can be used to assess childhood abuse or neglect in Korean patients. It also supports the cross-cultural equivalence of the scale. PMID:22216039

  12. An Investigation of Psychometric Properties of Coping Styles Scale Brief Form: A Study of Validity and Reliability

    ERIC Educational Resources Information Center

    Bacanli, Hasan; Surucu, Mustafa; Ilhan, Tahsin

    2013-01-01

    The aim of the current study was to develop a short form of Coping Styles Scale based on COPE Inventory. A total of 275 undergraduate students (114 female, and 74 male) were administered in the first study. In order to test factors structure of Coping Styles Scale Brief Form, principal components factor analysis and direct oblique rotation was…

  13. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Passage Reading Fluency Assessments: Grade 4. Technical Report #1219

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

  14. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 5. Technical Report #1220

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei; Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

  15. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 2. Technical Report #1217

    ERIC Educational Resources Information Center

    Anderson, Daniel; Lai, Cheg-Fei; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due to…

  16. A G-Rich Sequence within the c-kit Oncogene Promoter Forms a Parallel G-Quadruplex Having Asymmetric G-Tetrad Dynamics

    PubMed Central

    Hsu, Shang-Te Danny; Varnai, Peter; Bugaut, Anthony; Reszka, Anthony P.; Neidle, Stephen; Balasubramanian, Shankar

    2011-01-01

    Guanine-rich DNA sequences with the ability to form quadruplex structures are enriched in the promoter regions of protein-coding genes, particularly those of proto-oncogenes. G-quadruplexes are structurally polymorphic and their folding topologies can depend on the sample conditions. We report here on a structural study using solution state NMR spectroscopy of a second G-quadruplex-forming motif (c-kit2) that has been recently identified in the promoter region of the c-kit oncogene. In the presence of potassium ions, c-kit2 exists as an ensemble of structures that share the same parallel-stranded propeller-type conformations. Subtle differences in structural dynamics have been identified using hydrogen–deuterium exchange experiments by NMR spectroscopy, suggesting the coexistence of at least two structurally similar but dynamically distinct substates, which undergo slow interconversion on the NMR timescale. PMID:19705869

  17. A G-rich sequence within the c-kit oncogene promoter forms a parallel G-quadruplex having asymmetric G-tetrad dynamics.

    PubMed

    Hsu, Shang-Te Danny; Varnai, Peter; Bugaut, Anthony; Reszka, Anthony P; Neidle, Stephen; Balasubramanian, Shankar

    2009-09-23

    Guanine-rich DNA sequences with the ability to form quadruplex structures are enriched in the promoter regions of protein-coding genes, particularly those of proto-oncogenes. G-quadruplexes are structurally polymorphic and their folding topologies can depend on the sample conditions. We report here on a structural study using solution state NMR spectroscopy of a second G-quadruplex-forming motif (c-kit2) that has been recently identified in the promoter region of the c-kit oncogene. In the presence of potassium ions, c-kit2 exists as an ensemble of structures that share the same parallel-stranded propeller-type conformations. Subtle differences in structural dynamics have been identified using hydrogen-deuterium exchange experiments by NMR spectroscopy, suggesting the coexistence of at least two structurally similar but dynamically distinct substates, which undergo slow interconversion on the NMR timescale. PMID:19705869

  18. Can older adults with dementia accurately report depression using brief forms? Reliability and validity of the Geriatric Depression Scale.

    PubMed

    Lach, Helen W; Chang, Yu-Ping; Edwards, Dorothy

    2010-05-01

    The Geriatric Depression Scale (GDS) is a commonly used screening tool, but its use in older adults with cognitive impairment has been controversial. This study compared the short forms of the GDS with clinician diagnosis of depression using standard criteria (Diagnostic and Statistical Manual of Mental Disorders, 4th edition, text revision) in people with and without dementia. Sensitivity and specificity were acceptable for all forms of the GDS. These results build evidence for using the short GDS 5- and 15-item versions in populations that include people with mild to moderate dementia, increasing the ease of depression screening so it can be performed more frequently in clinical settings. PMID:20349852

  19. The Truncated Human Telomeric Sequence forms a Hybrid-Type Intramolecular Mixed Parallel/antiparallel G-quadruplex Structure in K(+) Solution.

    PubMed

    Liu, Yuxia; Cheng, Dengfeng; Ge, Min; Lin, Weizhen

    2016-07-01

    In 80-90% of tumor cells, telomerase becomes active and stabilizes the length of telomeres. The formation and stabilization of G-quadruplexes formed from human telomeric sequences have been shown to inhibit the activity of telomerase; thus, the human telomeric G-quadruplex structure has become a potential target for the development of cancer therapy. Hence, the structure of the G-quadruplex formed in K(+) solution has attracted intense interest for further study. However, the exact structure of the human telomeric G-quadruplex in K(+) is extremely controversial, and this study provides information for the understanding of different G-quadruplexes. Here, we report that 22nt and 24nt human telomeric sequences form a unimolecular hybrid-type mixed parallel/antiparallel G-quadruplex in K(+) solution, as elucidated using circular dichroism, differential scanning calorimetry, and gel electrophoresis. Moreover, the individual configurations of these two sequences were proposed in this study. Detailed structural information on the G-quadruplex formed under physiologically relevant conditions is necessary for structure-based rational drug design. PMID:26867976

  20. Reliability and validity of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) in evaluations of chronic low back pain patients.

    PubMed

    Tarescavage, Anthony M; Scheman, Judith; Ben-Porath, Yossef S

    2015-06-01

    The purpose of the current study was to investigate the reliability and concurrent validity of Minnesota Multiphasic Personality Inventory (MMPI)-2-Restructured Form (2-RF) (Ben-Porath & Tellegen, 2008/2011) scores in a sample of 811 chronic low back pain patients (346 males, 529 females) beginning treatment in a short-term interdisciplinary pain rehabilitation program. We calculated internal consistency coefficients, mean-item correlations, and SEM for all substantive scales, as well as zero-order correlations with collateral medical record information and self-report testing. Results indicated reliability and validity for most of the MMPI-2-RF substantive scales. Implications of these findings and limitations of this study are discussed. PMID:25436662

  1. The C-terminal region of the transcriptional regulator THAP11 forms a parallel coiled-coil domain involved in protein dimerization.

    PubMed

    Cukier, Cyprian D; Maveyraud, Laurent; Saurel, Olivier; Guillet, Valérie; Milon, Alain; Gervais, Virginie

    2016-06-01

    Thanatos associated protein 11 (THAP11) is a cell cycle and cell growth regulator differentially expressed in cancer cells. THAP11 belongs to a distinct family of transcription factors recognizing specific DNA sequences via an atypical zinc finger motif and regulating diverse cellular processes. Outside the extensively characterized DNA-binding domain, THAP proteins vary in size and predicted domains, for which structural data are still lacking. We report here the crystal structure of the C-terminal region of human THAP11 protein, providing the first 3D structure of a coiled-coil motif from a THAP family member. We further investigate the stability, dynamics and oligomeric properties of the determined structure combining molecular dynamics simulations and biophysical experiments. Our results show that the C-ter region of THAP11 forms a left-handed parallel homo-dimeric coiled-coil structure possessing several unusual features. PMID:26975212

  2. Identifying a largest logical plane from a plurality of logical planes formed of compute nodes of a subcommunicator in a parallel computer

    DOEpatents

    Davis, Kristan D.; Faraj, Daniel A.

    2016-07-12

    In a parallel computer, a largest logical plane from a plurality of logical planes formed of compute nodes of a subcommunicator may be identified by: identifying, by each compute node of the subcommunicator, all logical planes that include the compute node; calculating, by each compute node for each identified logical plane that includes the compute node, an area of the identified logical plane; initiating, by a root node of the subcommunicator, a gather operation; receiving, by the root node from each compute node of the subcommunicator, each node's calculated areas as contribution data to the gather operation; and identifying, by the root node in dependence upon the received calculated areas, a logical plane of the subcommunicator having the greatest area.
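
    As a rough, single-process illustration of the steps described above, the following sketch mocks the per-node plane enumeration as given data and simulates the gather with an ordinary list, after which the root picks the plane with the greatest area. The node and plane identifiers are illustrative only and do not reflect the patented implementation.

    # Each "compute node" has already identified its logical planes and computed
    # their areas; entries are ((plane_id, (width, height)), area).
    node_plane_areas = {
        0: [(("plane-A", (4, 2)), 8), (("plane-B", (3, 3)), 9)],
        1: [(("plane-B", (3, 3)), 9), (("plane-C", (6, 1)), 6)],
        2: [(("plane-C", (6, 1)), 6)],
    }

    # Simulated gather operation: the root receives every node's calculated areas.
    gathered = [pair for areas in node_plane_areas.values() for pair in areas]

    # The root identifies the logical plane with the greatest area.
    largest_plane, largest_area = max(gathered, key=lambda pair: pair[1])
    print("Largest logical plane:", largest_plane, "area:", largest_area)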

  3. Female Genital Mutilation in Sierra Leone: Forms, Reliability of Reported Status, and Accuracy of Related Demographic and Health Survey Questions

    PubMed Central

    Grant, Donald S.; Berggren, Vanja

    2013-01-01

    Objective. To determine forms of female genital mutilation (FGM), assess consistency between self-reported and observed FGM status, and assess the accuracy of Demographic and Health Surveys (DHS) FGM questions in Sierra Leone. Methods. This cross-sectional study, conducted between October 2010 and April 2012, enrolled 558 females aged 12–47 from eleven antenatal clinics in northeast Sierra Leone. Data on demography, FGM status, and self-reported anatomical descriptions were collected. Genital inspection confirmed the occurrence and extent of cutting. Results. All participants reported FGM status; 4 refused genital inspection. Using the WHO classification of FGM, 31.7% had type Ib; 64.1% type IIb; and 4.2% type IIc. There was a high level of agreement between reported and observed FGM prevalence (81.2% and 81.4%, resp.). There was no correlation between DHS FGM responses and anatomic extent of cutting, as 2.7% reported pricking; 87.1% flesh removal; and 1.1% that genitalia was sewn closed. Conclusion. Types I and II are the main forms of FGM, with labia majora alterations in almost 5% of cases. Self-reports on FGM status could serve as a proxy measurement for FGM prevalence but not for FGM type. The DHS FGM questions are inaccurate for determining cutting extent. PMID:24204384

  4. Female genital mutilation in sierra leone: forms, reliability of reported status, and accuracy of related demographic and health survey questions.

    PubMed

    Bjälkander, Owolabi; Grant, Donald S; Berggren, Vanja; Bathija, Heli; Almroth, Lars

    2013-01-01

    Objective. To determine forms of female genital mutilation (FGM), assess consistency between self-reported and observed FGM status, and assess the accuracy of Demographic and Health Surveys (DHS) FGM questions in Sierra Leone. Methods. This cross-sectional study, conducted between October 2010 and April 2012, enrolled 558 females aged 12-47 from eleven antenatal clinics in northeast Sierra Leone. Data on demography, FGM status, and self-reported anatomical descriptions were collected. Genital inspection confirmed the occurrence and extent of cutting. Results. All participants reported FGM status; 4 refused genital inspection. Using the WHO classification of FGM, 31.7% had type Ib; 64.1% type IIb; and 4.2% type IIc. There was a high level of agreement between reported and observed FGM prevalence (81.2% and 81.4%, resp.). There was no correlation between DHS FGM responses and anatomic extent of cutting, as 2.7% reported pricking; 87.1% flesh removal; and 1.1% that genitalia was sewn closed. Conclusion. Types I and II are the main forms of FGM, with labia majora alterations in almost 5% of cases. Self-reports on FGM status could serve as a proxy measurement for FGM prevalence but not for FGM type. The DHS FGM questions are inaccurate for determining cutting extent. PMID:24204384

  5. Reliability and validity of the Spanish version of the Child Health and Illness Profile (CHIP) Child-Edition, Parent Report Form (CHIP-CE/PRF)

    PubMed Central

    2010-01-01

    Background The objectives of the study were to assess the reliability, and the content, construct, and convergent validity of the Spanish version of the CHIP-CE/PRF, to analyze parent-child agreement, and compare the results with those of the original U.S. version. Methods Parents from a representative sample of children aged 6-12 years were selected from 9 primary schools in Barcelona. Test-retest reliability was assessed in a convenience subsample of parents from 2 schools. Parents completed the Spanish version of the CHIP-CE/PRF. The Achenbach Child Behavioural Checklist (CBCL) was administered to a convenience subsample. Results The overall response rate was 67% (n = 871). There was no floor effect. A ceiling effect was found in 4 subdomains. Reliability was acceptable at the domain level (internal consistency = 0.68-0.86; test-retest intraclass correlation coefficients = 0.69-0.85). Younger girls had better scores on Satisfaction and Achievement than older girls. Comfort domain score was lower (worse) in children with a probable mental health problem, with high effect size (ES = 1.45). The level of parent-child agreement was low (0.22-0.37). Conclusions The results of this study suggest that the parent version of the Spanish CHIP-CE has acceptable psychometric properties although further research is needed to check reliability at sub-domain level. The CHIP-CE parent report form provides a comprehensive, psychometrically sound measure of health for Spanish children 6 to 12 years old. It can be a complementary perspective to the self-reported measure or an alternative when the child is unable to complete the questionnaire. In general, the results are similar to the original U.S. version. PMID:20678198
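
    For readers unfamiliar with the test-retest statistic quoted here (and in several records below), a small sketch of a two-way random-effects, single-measure intraclass correlation (ICC(2,1), after Shrout and Fleiss) on hypothetical test-retest data follows; the sample size, scores, and two occasions are illustrative only.

        import numpy as np

        def icc_2_1(x: np.ndarray) -> float:
            # x: subjects x occasions matrix; two-way random-effects, single-measure ICC.
            n, k = x.shape
            grand = x.mean()
            row_means = x.mean(axis=1)
            col_means = x.mean(axis=0)
            msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subjects mean square
            msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-occasions mean square
            sse = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
            mse = sse / ((n - 1) * (k - 1))                        # residual mean square
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        rng = np.random.default_rng(0)
        true_score = rng.normal(50.0, 10.0, size=(60, 1))          # hypothetical stable trait
        scores = true_score + rng.normal(0.0, 4.0, size=(60, 2))   # two testing occasions with noise
        print(f"ICC(2,1) = {icc_2_1(scores):.2f}")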

  6. Computerized life and reliability modelling for turboprop transmissions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Radil, K. C.; Lewicki, D. G.; Coy, J. J.

    1988-01-01

    A generalized life and reliability model is presented for parallel shaft geared prop-fan and turboprop aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on classical fatigue theory and the two parameter Weibull failure distribution. A computer program was developed to calculate the transmission life and reliability. The program is modular. In its present form, the program can analyze five different transmission arrangements. However, the program can be modified easily to include additional transmission arrangements. An example is included which compares the life of a compound two-stage transmission with the life of a split-torque, parallel compound two-stage transmission, as calculated by the computer program.
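
    As a rough, generic illustration of how per-component reliability models of this kind combine (this is not the NASA program described above), the sketch below multiplies two-parameter Weibull survival probabilities for components assumed to lie in series on the main load path; the Weibull slopes and characteristic lives are hypothetical.

        import math

        def weibull_reliability(t, beta, theta):
            # Two-parameter Weibull survival probability at life t.
            return math.exp(-((t / theta) ** beta))

        # Hypothetical components on one load path: (Weibull slope, characteristic life in hours).
        components = [(1.5, 9000.0), (1.5, 12000.0), (2.5, 15000.0)]
        t = 3000.0  # mission life of interest, hours

        # A series combination: the transmission path survives only if every element survives.
        r_system = 1.0
        for beta, theta in components:
            r_system *= weibull_reliability(t, beta, theta)
        print(f"path reliability at {t:.0f} h: {r_system:.4f}")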

  7. Computerized life and reliability modelling for turboprop transmissions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Radil, K. C.; Lewicki, D. G.; Coy, J. J.

    1988-01-01

A generalized life and reliability model is presented for parallel shaft geared prop-fan and turboprop aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on classical fatigue theory and the two parameter Weibull failure distribution. A computer program was developed to calculate the transmission life and reliability. The program is modular. In its present form, the program can analyze five different transmission arrangements. However, the program can be modified easily to include additional transmission arrangements. An example is included which compares the life of a compound two-stage transmission with the life of a split-torque, parallel compound two-stage transmission, as calculated by the computer program.

  8. Parallel Inhibition of Dopamine Amacrine Cells and Intrinsically Photosensitive Retinal Ganglion Cells in a Non-Image-Forming Visual Circuit of the Mouse Retina

    PubMed Central

    Vuong, Helen E.; Hardi, Claudia N.; Barnes, Steven

    2015-01-01

    An inner retinal microcircuit composed of dopamine (DA)-containing amacrine cells and melanopsin-containing, intrinsically photosensitive retinal ganglion cells (M1 ipRGCs) process information about the duration and intensity of light exposures, mediating light adaptation, circadian entrainment, pupillary reflexes, and other aspects of non-image-forming vision. The neural interaction is reciprocal: M1 ipRGCs excite DA amacrine cells, and these, in turn, feed inhibition back onto M1 ipRGCs. We found that the neuropeptide somatostatin [somatotropin release inhibiting factor (SRIF)] also inhibits the intrinsic light response of M1 ipRGCs and postulated that, to tune the bidirectional interaction of M1 ipRGCs and DA amacrine cells, SRIF amacrine cells would provide inhibitory modulation to both cell types. SRIF amacrine cells, DA amacrine cells, and M1 ipRGCs form numerous contacts. DA amacrine cells and M1 ipRGCs express the SRIF receptor subtypes sst2A and sst4 respectively. SRIF modulation of the microcircuit was investigated with targeted patch-clamp recordings of DA amacrine cells in TH–RFP mice and M1 ipRGCs in OPN4–EGFP mice. SRIF increases K+ currents, decreases Ca2+ currents, and inhibits spike activity in both cell types, actions reproduced by the selective sst2A agonist L-054,264 (N-[(1R)-2-[[[(1S*,3R*)-3-(aminomethyl)cyclohexyl]methyl]amino]-1-(1H-indol-3-ylmethyl)-2-oxoethyl]spiro[1H-indene-1,4′-piperidine]-1′-carboxamide) in DA amacrine cells and the selective sst4 agonist L-803,087 (N2-[4-(5,7-difluoro-2-phenyl-1H-indol-3-yl)-1-oxobutyl]-l-arginine methyl ester trifluoroacetate) in M1 ipRGCs. These parallel actions of SRIF may serve to counteract the disinhibition of M1 ipRGCs caused by SRIF inhibition of DA amacrine cells. This allows the actions of SRIF on DA amacrine cells to proceed with adjusting retinal DA levels without destabilizing light responses by M1 ipRGCs, which project to non-image-forming targets in the brain. SIGNIFICANCE

  9. Reliability of fluid systems

    NASA Astrophysics Data System (ADS)

    Kopáček, Jaroslav; Fojtášek, Kamil; Dvořák, Lukáš

    2016-03-01

This paper focuses on the importance of determining reliability, especially in complex fluid systems for demanding production technology. The initial criterion for assessing reliability is the failure of an object (element), which is treated as a random variable whose data (values) can be processed using the mathematical methods of probability theory and statistics. The basic indicators of reliability are defined, together with their application to calculations for series, parallel, and backed-up systems. For illustration, calculation examples of reliability indicators are given for various elements of the system and for a selected pneumatic circuit.
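
    The series and parallel formulas the abstract alludes to are compact enough to state directly; the element reliabilities below are invented for illustration (a backed-up, or standby, element would need a separate model).

        from math import prod

        def series_reliability(rs):
            # All elements must work for the system to work.
            return prod(rs)

        def parallel_reliability(rs):
            # The system fails only if every element fails.
            return 1.0 - prod(1.0 - r for r in rs)

        elements = [0.95, 0.90, 0.98]  # hypothetical element reliabilities
        print(series_reliability(elements))    # about 0.838
        print(parallel_reliability(elements))  # about 0.9999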

  10. Parallel Inhibition of Dopamine Amacrine Cells and Intrinsically Photosensitive Retinal Ganglion Cells in a Non-Image-Forming Visual Circuit of the Mouse Retina.

    PubMed

    Vuong, Helen E; Hardi, Claudia N; Barnes, Steven; Brecha, Nicholas C

    2015-12-01

    An inner retinal microcircuit composed of dopamine (DA)-containing amacrine cells and melanopsin-containing, intrinsically photosensitive retinal ganglion cells (M1 ipRGCs) process information about the duration and intensity of light exposures, mediating light adaptation, circadian entrainment, pupillary reflexes, and other aspects of non-image-forming vision. The neural interaction is reciprocal: M1 ipRGCs excite DA amacrine cells, and these, in turn, feed inhibition back onto M1 ipRGCs. We found that the neuropeptide somatostatin [somatotropin release inhibiting factor (SRIF)] also inhibits the intrinsic light response of M1 ipRGCs and postulated that, to tune the bidirectional interaction of M1 ipRGCs and DA amacrine cells, SRIF amacrine cells would provide inhibitory modulation to both cell types. SRIF amacrine cells, DA amacrine cells, and M1 ipRGCs form numerous contacts. DA amacrine cells and M1 ipRGCs express the SRIF receptor subtypes sst(2A) and sst4 respectively. SRIF modulation of the microcircuit was investigated with targeted patch-clamp recordings of DA amacrine cells in TH-RFP mice and M1 ipRGCs in OPN4-EGFP mice. SRIF increases K(+) currents, decreases Ca(2+) currents, and inhibits spike activity in both cell types, actions reproduced by the selective sst(2A) agonist L-054,264 (N-[(1R)-2-[[[(1S*,3R*)-3-(aminomethyl)cyclohexyl]methyl]amino]-1-(1H-indol-3-ylmethyl)-2-oxoethyl]spiro[1H-indene-1,4'-piperidine]-1'-carboxamide) in DA amacrine cells and the selective sst4 agonist L-803,087 (N(2)-[4-(5,7-difluoro-2-phenyl-1H-indol-3-yl)-1-oxobutyl]-L-arginine methyl ester trifluoroacetate) in M1 ipRGCs. These parallel actions of SRIF may serve to counteract the disinhibition of M1 ipRGCs caused by SRIF inhibition of DA amacrine cells. This allows the actions of SRIF on DA amacrine cells to proceed with adjusting retinal DA levels without destabilizing light responses by M1 ipRGCs, which project to non-image-forming targets in the brain. PMID:26631476

  11. Application of principal component analysis (PCA) and improved joint probability distributions to the inverse first-order reliability method (I-FORM) for predicting extreme sea states

DOE PAGES Beta

    Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.; Neary, Vincent S.

    2016-01-06

Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.
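
    A compressed sketch of the two preprocessing ideas named here, run on entirely synthetic (Hs, Te) data: a PCA rotation to obtain an uncorrelated representation, and the I-FORM reliability index that fixes the radius of the contour circle in standard-normal space. The distributions, sea-state duration, and return period below are assumptions for illustration, not the paper's fitted values.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        hs = rng.lognormal(mean=0.5, sigma=0.4, size=5000)        # synthetic significant wave height (m)
        te = 4.0 + 1.5 * hs + rng.normal(0.0, 0.5, size=5000)     # synthetic energy period (s)

        # PCA: eigenvectors of the covariance matrix give an uncorrelated representation.
        X = np.column_stack([hs, te])
        Xc = X - X.mean(axis=0)
        _, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        pcs = Xc @ vecs
        print("PC correlation:", np.corrcoef(pcs, rowvar=False)[0, 1])  # near zero

        # I-FORM reliability index for a 50-year contour, assuming hourly sea states.
        p_exceed = 1.0 / (50 * 365.25 * 24)
        beta = stats.norm.ppf(1.0 - p_exceed)
        theta = np.linspace(0.0, 2.0 * np.pi, 360)
        u1, u2 = beta * np.cos(theta), beta * np.sin(theta)       # contour circle in standard-normal space
        # The full method maps (u1, u2) back through fitted marginal and conditional
        # distributions of the principal components to obtain the (Hs, Te) contour.
        print(f"reliability index beta = {beta:.2f}")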

  12. The Zarit Caregiver Burden Interview Short Form (ZBI-12) in spouses of Veterans with Chronic Spinal Cord Injury, Validity and Reliability of the Persian Version

    PubMed Central

    Rajabi-Mashhadi, Mohammad T; Mashhadinejad, Hosein; Ebrahimzadeh, Mohammad H; Golhasani-Keshtan, Farideh; Ebrahimi, Hanieh; Zarei, Zahra

    2015-01-01

Background: To test the psychometric properties of the Persian version of the Zarit Burden Interview (ZBI-12) in the Iranian population. Methods: After translation and cultural adaptation of the questionnaire into Persian, 100 caregiver spouses of Iran-Iraq war (1980-88) veterans with chronic spinal cord injury who live in the city of Mashhad, Iran, were invited to participate in the study. The Persian version of the ZBI-12, accompanied by the Persian SF-36, was completed by the caregivers to test the validity of the Persian ZBI-12. A Pearson's correlation coefficient was calculated for validity testing. In order to assess the reliability of the Persian ZBI-12, we re-administered the ZBI-12 to a random subsample of 48 caregiver spouses 3 days later. Results: Overall, the internal consistency of the questionnaire was found to be strong (Cronbach's alpha 0.77). The intercorrelation matrix between the different domains of the ZBI-12 at test-retest was 0.78. The results revealed that the majority of questions in the Persian ZBI-12 have a significant correlation with each other. In terms of validity, our results showed significant correlations between some domains of the Persian version of the Short Form Health Survey-36 and the Persian Zarit Burden Interview, such as Q1 with Role Physical (P=0.03), General Health (P=0.034), Social Function (P=0.037), and Mental Health (P=0.023), and Q3 with Physical Function (P=0.001), Vitality (P=0.002), and Social Function (P=0.001). Conclusions: Our findings suggest that the Persian version of the Zarit Burden Interview is both a valid and reliable instrument for measuring the burden of caregivers of individuals with chronic spinal cord injury. PMID:25692171

  13. Network reliability

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1985-01-01

    Network control (or network management) functions are essential for efficient and reliable operation of a network. Some control functions are currently included as part of the Open System Interconnection model. For local area networks, it is widely recognized that there is a need for additional control functions, including fault isolation functions, monitoring functions, and configuration functions. These functions can be implemented in either a central or distributed manner. The Fiber Distributed Data Interface Medium Access Control and Station Management protocols provide an example of distributed implementation. Relative information is presented here in outline form.

  14. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  15. The investigation of supply chain's reliability measure: a case study

    NASA Astrophysics Data System (ADS)

    Taghizadeh, Houshang; Hafezi, Ehsan

    2012-10-01

In this paper, using the supply chain operations reference model, the reliability of the relationships in a supply chain is investigated. For this purpose, in the first step, the chain under investigation is divided into several stages, including first and second suppliers, initial and final customers, and the producing company. Based on the relationships formed between these stages, the supply chain system is then broken down into different subsystem parts. The relationships formed between the stages are based on the transportation of orders between stages. Considering the location of the system elements, which can take one of five forms, namely series, parallel, series/parallel, parallel/series, or their combinations, we determine the structure of relationships in the divided subsystems. According to reliability evaluation scales on the three levels of the supply chain, the reliability of each chain is then calculated. Finally, using the formulas for calculating the reliability of combined systems, the reliability of each subsystem and ultimately the whole system is investigated.

  16. A Japanese short form of the Swanson Cognitive Processing Test to measure working memory: reliability, validity, and differences in scores between primary school children of the United States and Japan.

    PubMed

    Hwang, Yeonhee; Hosokawa, Toru; Swanson, H Lee; Ishizaka, Ikuyo; Kifune, Noriyuki; Ohira, Dan; Ota, Tomio

    2006-08-01

    The purpose of this study was to examine the reliability and validity of a Japanese short form of the Swanson Cognitive Processing Test, which assesses capacity of working memory. Test-retest reliability was acceptable (r = .76). Concurrent validity was suggested through comparison of scores on the Reading Span Task (r = .55). Means on the Japanese short form were comparable with means for the 3 subtests for the older group and 2 subtests for the younger group. With the exception of the Auditory Digit Sequence, results suggested that both the Japanese short form and the initial Swanson Cognitive Processing Test measured comparably the working memory in the two samples of children. PMID:17037447

  17. Item Selection for the Development of Parallel Forms from an IRT-Based Seed Test Using a Sampling and Classification Approach

    ERIC Educational Resources Information Center

    Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan

    2012-01-01

    Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…

  18. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygonal, sphere, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  19. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  20. Parallel machines: Parallel machine languages

    SciTech Connect

    Iannucci, R.A. )

    1990-01-01

This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view, with the objective of discovering the critical hardware structures which must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable general-purpose parallel computation. Linguistic concerns, compiling issues, intermediate language issues, and hardware/technological constraints are presented as a combined approach to architectural development. This book presents the notion of a parallel machine language.

  1. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth

  2. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

In this paper the authors introduce the idea of parallel pipelining for water lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
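
    Plugging illustrative radii into the authors' estimate N = (R/r)^α gives a feel for the numbers; the radii below are assumed, not taken from the paper.

        # N = (R/r)**alpha, with alpha = 4 for laminar and 19/7 for turbulent lubricating flow.
        R, r = 0.5, 0.1  # large-pipe and small-pipe radii (m), chosen for illustration
        for label, alpha in [("laminar", 4.0), ("turbulent", 19.0 / 7.0)]:
            n = (R / r) ** alpha
            print(f"{label}: about {n:.0f} small pipes to match one large pipe")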

  3. Data parallelism

    SciTech Connect

    Gorda, B.C.

    1992-09-01

    Data locality is fundamental to performance on distributed memory parallel architectures. Application programmers know this well and go to great pains to arrange data for optimal performance. Data Parallelism, a model from the Single Instruction Multiple Data (SIMD) architecture, is finding a new home on the Multiple Instruction Multiple Data (MIMD) architectures. This style of programming, distinguished by taking the computation to the data, is what programmers have been doing by hand for a long time. Recent work in this area holds the promise of making the programmer's task easier.

  4. Data parallelism

    SciTech Connect

    Gorda, B.C.

    1992-09-01

Data locality is fundamental to performance on distributed memory parallel architectures. Application programmers know this well and go to great pains to arrange data for optimal performance. Data Parallelism, a model from the Single Instruction Multiple Data (SIMD) architecture, is finding a new home on the Multiple Instruction Multiple Data (MIMD) architectures. This style of programming, distinguished by taking the computation to the data, is what programmers have been doing by hand for a long time. Recent work in this area holds the promise of making the programmer's task easier.

  5. Responses of free-ranging rhesus monkeys to a natural form of social separation. I. Parallels with mother-infant separation in captivity.

    PubMed

    Berman, C M; Rasmussen, K L; Suomi, S J

    1994-08-01

    Observations of 23 free-ranging rhesus monkey infants on Cayo Santiago, Puerto Rico, indicated that mothers' first postpartum estrous periods were marked by large increases in the amount of time infants were separated from their mothers, by disturbances in mother-infant relationships, and by increases in infant distress behavior. When their mothers resumed mating, most infants showed signs of agitation; a few briefly showed indications of depression. Male infants responded to their mothers' resumption of mating by playing more, whereas females engaged in less play and more allogrooming. The results suggest (a) that basic parallels exist between the behavioral responses of rhesus infants to their mothers' resumption of mating in the field and to forcible separation from their mothers in captivity and (b) that early separation experiences may play a role in the normal development or manifestation of sex differences in behavior. PMID:7956463

  6. Test-Retest Reliability of a Short Form of the Children’s Social Desirability Scale for Nutrition and Health-Related Research

    PubMed Central

    Miller, Patricia H.; Baxter, Suzanne D.; Hitchcock, David B.; Royer, Julie A.; Smith, Albert F.; Guinn, Caroline H.

    2014-01-01

    Objective To examine test-retest reliability and internal consistency of the Children’s Social Desirability Short (S-CSD) scale, consisting of 14 items from the Children’s Social Desirability scale. Methods The previously validated S-CSD scale was classroom administered to 97 fourth-grade children (80% African American, 76% low socioeconomic status) in 2 sessions a month apart. Each classroom administration lasted approximately 5 minutes. Results The S-CSD scale showed acceptable levels of test-retest reliability (0.70) and internal consistency (0.82 and 0.85 for the first and second administrations, respectively). Reliability was adequate within subgroups of gender, socioeconomic status, academic achievement, and body mass index percentile. Levels of social desirability did not differ across subgroups. Conclusions and Implications Social desirability bias is a potential source of systematic response error in children’s self-report assessments of nutrition and health-related behaviors. The S-CSD scale may be used with diverse groups of children to reliably and efficiently assess social desirability bias. PMID:24418615
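
    The internal-consistency figures quoted here are Cronbach's alpha values; a generic computation on simulated yes/no responses (14 items, matching the scale length, but otherwise hypothetical data) looks like this:

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            # items: respondents x items matrix of scores.
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1.0 - item_vars / total_var)

        rng = np.random.default_rng(2)
        trait = rng.normal(size=(200, 1))                                   # latent social-desirability trait
        responses = (trait + rng.normal(size=(200, 14)) > 0).astype(float)  # 14 simulated yes/no items
        print(f"alpha = {cronbach_alpha(responses):.2f}")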

  7. Improved techniques of parallel gap welding and monitoring

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Gillanders, M. S.

    1984-01-01

Welding programs which show that parallel gap welding is a reliable process are discussed. When monitoring controls and nondestructive tests are incorporated into the process, parallel gap welding becomes more reliable and cost effective. The panel fabrication techniques and the HAC thermal cycling test indicate reliable product integrity. The design and building of automated tooling and fixturing for welding are discussed.

  8. A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2012-01-01

    Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…
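
    A quick simulation under the classical assumption of a constant standard error shows why the two definitions in this record coincide; the values below are illustrative, not the paper's.

        import numpy as np

        rng = np.random.default_rng(1)
        n, se = 100_000, 0.5                         # examinees and assumed standard error of estimation
        theta = rng.normal(0.0, 1.0, n)              # true abilities
        form_a = theta + rng.normal(0.0, se, n)      # ability estimates from form A
        form_b = theta + rng.normal(0.0, se, n)      # ability estimates from a parallel form B

        rel_parallel = np.corrcoef(form_a, form_b)[0, 1]        # definition (a): parallel-forms correlation
        rel_squared = np.corrcoef(form_a, theta)[0, 1] ** 2     # definition (b): squared correlation with truth
        print(rel_parallel, rel_squared, 1.0 / (1.0 + se**2))   # all close to 0.8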

  9. Measuring health status in British patients with rheumatoid arthritis: reliability, validity and responsiveness of the short form 36-item health survey (SF-36).

    PubMed

    Ruta, D A; Hurst, N P; Kind, P; Hunter, M; Stubbings, A

    1998-04-01

    The objective was to assess the performance of the SF-36 health survey (SF-36) in a sample of patients with rheumatoid arthritis (RA) stratified by functional class. The eight SF-36 subscales and the two summary scales (the physical and mental component scales) were assessed for test retest reliability, construct validity and responsiveness to self-reported change in health. In 233 patients with RA, the SF-36 scales were: reliable (intra-class correlation coefficients 0.76-0.93); correlated with American College of Rheumatology (ACR) core disease activity measures [Spearman r = -0.12 (erythrocyte sedimentation rate) to -0.89 (Modified Health Assessment Questionnaire)]; and responsive to improvements in health (standardized response means 0.27-0.9). The distribution of scores on four of the eight subscales (physical function, role limitations physical, role limitations emotional and social function) was clearly non-Gaussian. Very marked floor effects were noted with the physical function scale, and both ceiling and floor effects with the other three subscales. The two SF-36 physical and mental component summary scales are reliable, valid and responsive measures of health status in patients with RA. Six of the eight subscales meet standards required for comparing groups of patients, and the physical function and general health scales may be suitable for monitoring individuals. The two scales measuring role limitations have poor measurement characteristics. The SF-36 pain and physical function scales may be suitable for use as patient self-assessed measures of pain and physical function within the ACR core disease activity set. PMID:9619895

  10. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  11. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. The present evaluation of the development status of these architectures shows that neither has attained a decisive advantage in the treatment of most near-homogeneous problems; for problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  12. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Word and Passage Reading Fluency Assessments: Grade 3. Technical Report #1218

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

  13. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  14. Reliability training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  15. Highly improved reliability of amber light emitting diode with Ca -α-SiAlON phosphor in glass formed by gas pressure sintering for automotive applications.

    PubMed

    Yoon, Chang-Bun; Kim, Sanghyun; Choi, Sung-Woo; Yoon, Chulsoo; Ahn, Sang Hyeon; Chung, Woon Jin

    2016-04-01

    Phosphor in glass (PiG) with 40 wt% of Ca-α-SiAlON phosphor and 60 wt% of Pb-free silicate glass was synthesized and mounted on a high-power blue LED to make an amber LED for automotive applications. Gas pressure sintering was applied after the conventional sintering process was used to achieve fully dense PiG plates. Changes in photoluminescence spectra and color coordination were inspected by varying the thickness of the plates that were mounted after optical polishing and machining. A trade-off between luminous flux and color purity was observed. The commercial feasibility of amber PiG packaged LED, which can satisfy international regulations for automotive components, was successfully demonstrated by examining the practical reliability under 85% humidity at an 85°C condition. PMID:27192294

  16. Parallel Information Processing.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    1992-01-01

    Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

  17. Assessing the Discriminant Ability, Reliability, and Comparability of Multiple Short Forms of the Boston Naming Test in an Alzheimer’s Disease Center Cohort

    PubMed Central

    Katsumata, Yuriko; Mathews, Melissa; Abner, Erin L.; Jicha, Gregory A.; Caban-Holt, Allison; Smith, Charles D.; Nelson, Peter T.; Kryscio, Richard J.; Schmitt, Frederick A.; Fardo, David W.

    2015-01-01

    Background The Boston Naming Test (BNT) is a commonly used neuropsychological test of confrontation naming that aids in determining the presence and severity of dysnomia. Many short versions of the original 60-item test have been developed and are routinely administered in clinical/research settings. Because of the common need to translate similar measures within and across studies, it is important to evaluate the operating characteristics and agreement of different BNT versions. Methods We analyzed longitudinal data of research volunteers (n = 681) from the University of Kentucky Alzheimer’s Disease Center longitudinal cohort. Conclusions With the notable exception of the Consortium to Establish a Registry for Alzheimer’s Disease (CERAD) 15-item BNT, short forms were internally consistent and highly correlated with the full version; these measures varied by diagnosis and generally improved from normal to mild cognitive impairment (MCI) to dementia. All short forms retained the ability to discriminate between normal subjects and those with dementia. The ability to discriminate between normal and MCI subjects was less strong for the short forms than the full BNT, but they exhibited similar patterns. These results have important implications for researchers designing longitudinal studies, who must consider that the statistical properties of even closely related test forms may be quite different. PMID:25613081

  18. Development, reliability and factor analysis of a self-administered questionnaire which originates from the World Health Organization's Composite International Diagnostic Interview – Short Form (CIDI-SF) for assessing mental disorders

    PubMed Central

    2008-01-01

Background The Composite International Diagnostic Interview - Short Form consists of short-form scales for evaluating psychiatric disorders. This version, too, requires training of the interviewer, and confidentiality may not be adequately protected. This study focuses on the preliminary validation of a brief self-completed questionnaire which originates from the CIDI-SF. Sampling and Methods A preliminary version was assessed for content and face validity. An intermediate version was evaluated for test-retest reliability. The final version of the questionnaire was evaluated using exploratory factor analysis and assessed for internal consistency. Results After the modifications by the focus groups, the questionnaire included 29 initial probe questions and 56 secondary questions. The test-retest reliability weighted kappas were acceptable to excellent for the vast majority of questions. Factor analysis revealed six factors explaining 53.6% of total variance. Cronbach's alpha was 0.89 for the questionnaire and 0.89, 0.67, 0.71, 0.71, 0.49, and 0.67 for the six factors, respectively. Conclusion The questionnaire has satisfactory reliability and internal consistency, and may be efficient for use in community research and clinical practice. In the future, the questionnaire could be further validated (i.e., concurrent validity, discriminant validity). PMID:18402667

  19. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  20. The Satz-Mogel short form of the Wechsler Adult Intelligence Scale--revised: effects of global mental status and age on test-retest reliability.

    PubMed

    McPherson, S; Buckwalter, G J; Tingus, K; Betz, B; Back, C

    2000-10-01

    Abbreviated versions of the Wechsler Adult Intelligence Scale-Revised (WAIS-R) have been developed as time saving devices that provide accurate estimates of overall level of general intellectual functioning while decreasing test administration time. The Satz-Mogel short form of the WAIS-R has received substantial attention in the literature as an accurate measure of intellectual functions when compared with the Full WAIS-R. However, most studies comparing the Satz-Mogel version to the Full WAIS-R have only provided correlational analyses. Our study was an attempt to apply a more rigorous statistical methodology in determining if the Full WAIS-R and abbreviated versions are equivalent. We explored the impact of level of global mental status and age on the Satz-Mogel version. Although the two forms of the test correlated highly, repeated measures design indicated significant differences between Satz-Mogel and Full WAIS-R when participants were divided into groups based on level of global impairment and age. Our results suggest that the Satz-Mogel version of the test may not be equivalent to the full WAIS-R and is likely to misrepresent a patient's level of intellectual functioning, particularly for patients with progressive degenerative conditions. The implications of applying Satz-Mogel scoring to the Wechsler Adult Intelligence Scale-III (WAIS-III) are discussed. PMID:11094390

  1. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
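
    For scale, the O(N^2) direct sum that the fast transform replaces can be written in a few lines of NumPy; the points and weights below are synthetic, and the fast algorithm itself is not reproduced here.

        # Direct evaluation of f(x_i) = sum_j q_j * exp(-|x_i - y_j|^2 / delta), O(N^2) work.
        import numpy as np

        rng = np.random.default_rng(3)
        N, delta = 1000, 0.1
        sources = rng.random((N, 3))
        targets = rng.random((N, 3))
        weights = rng.random(N)

        d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)  # N x N squared distances
        f = (weights[None, :] * np.exp(-d2 / delta)).sum(axis=1)
        print(f.shape, f[:3])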

  2. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  3. Reliability physics

    NASA Technical Reports Server (NTRS)

    Cuddihy, E. F.; Ross, R. G., Jr.

    1984-01-01

    Speakers whose topics relate to the reliability physics of solar arrays are listed and their topics briefly reviewed. Nine reports are reviewed ranging in subjects from studies of photothermal degradation in encapsulants and polymerizable ultraviolet stabilizers to interface bonding stability to electrochemical degradation of photovoltaic modules.

  4. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs

  5. Adaptive parallel logic networks

    SciTech Connect

    Martinez, T.R.; Vidal, J.J.

    1988-02-01

    This paper presents a novel class of special purpose processors referred to as ASOCS (adaptive self-organizing concurrent systems). Intended applications include adaptive logic devices, robotics, process control, system malfunction management, and in general, applications of logic reasoning. ASOCS combines massive parallelism with self-organization to attain a distributed mechanism for adaptation. The ASOCS approach is based on an adaptive network composed of many simple computing elements (nodes) which operate in a combinational and asynchronous fashion. Problem specification (programming) is obtained by presenting to the system if-then rules expressed as Boolean conjunctions. New rules are added incrementally. In the current model, when conflicts occur, precedence is given to the most recent inputs. With each rule, desired network response is simply presented to the system, following which the network adjusts itself to maintain consistency and parsimony of representation. Data processing and adaptation form two separate phases of operation. During processing, the network acts as a parallel hardware circuit. Control of the adaptive process is distributed among the network nodes and efficiently exploits parallelism.

  6. Special parallel processing workshop

    SciTech Connect

    1994-12-01

This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

  7. Parallel NPARC: Implementation and Performance

    NASA Technical Reports Server (NTRS)

    Townsend, S. E.

    1996-01-01

    Version 3 of the NPARC Navier-Stokes code includes support for large-grain (block level) parallelism using explicit message passing between a heterogeneous collection of computers. This capability has the potential for significant performance gains, depending upon the block data distribution. The parallel implementation uses a master/worker arrangement of processes. The master process assigns blocks to workers, controls worker actions, and provides remote file access for the workers. The processes communicate via explicit message passing using an interface library which provides portability to a number of message passing libraries, such as PVM (Parallel Virtual Machine). A Bourne shell script is used to simplify the task of selecting hosts, starting processes, retrieving remote files, and terminating a computation. This script also provides a simple form of fault tolerance. An analysis of the computational performance of NPARC is presented, using data sets from an F/A-18 inlet study and a Rocket Based Combined Cycle Engine analysis. Parallel speedup and overall computational efficiency were obtained for various NPARC run parameters on a cluster of IBM RS6000 workstations. The data show that although NPARC performance compares favorably with the estimated potential parallelism, typical data sets used with previous versions of NPARC will often need to be reblocked for optimum parallel performance. In one of the cases studied, reblocking increased peak parallel speedup from 3.2 to 11.8.
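
    The master/worker arrangement described here is a common pattern; the mpi4py sketch below (not NPARC code, and the block ids are hypothetical) shows rank 0 dealing blocks to workers and collecting their results. Run it with at least two MPI ranks, e.g. mpiexec -n 4.

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:
            blocks = list(range(10))                 # hypothetical grid-block ids
            # Deal blocks out round-robin; a real master would track load and worker status.
            for worker in range(1, size):
                comm.send(blocks[worker - 1::size - 1], dest=worker, tag=0)
            results = [comm.recv(source=w, tag=1) for w in range(1, size)]
            print("per-block results:", results)
        else:
            my_blocks = comm.recv(source=0, tag=0)
            comm.send({b: 1.0 / (b + 1) for b in my_blocks}, dest=0, tag=1)  # stand-in for a block solve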

  8. Parallel hierarchical method in networks

    NASA Astrophysics Data System (ADS)

    Malinochka, Olha; Tymchenko, Leonid

    2007-09-01

This method of parallel-hierarchical Q-transformation offers a new approach to the creation of a computing medium - parallel-hierarchical (PH) networks - investigated in the form of a model of a neuron-like scheme of data processing [1-5]. The approach has a number of advantages compared with other methods of forming neuron-like media (for example, the already known methods of forming artificial neural networks). The main advantage of the approach is the use of the multilevel parallel interaction dynamics of information signals at different hierarchy levels of computer networks, which makes it possible to exploit such known natural features of the organization of computation as the topographic nature of mapping, simultaneity (parallelism) of signal operation, the inlaid structure of the cortex, the rough hierarchy of the cortex, and the spatially correlated in time mechanism of perception and training [5].

  9. Photovoltaic module reliability workshop

    NASA Astrophysics Data System (ADS)

    Mrig, L.

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986 to 1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if the PV technology is to make a major impact in the power generation market and to compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, there is still a need for substantial research and testing to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the U.S., PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

  10. Photovoltaic module reliability workshop

    SciTech Connect

    Mrig, L.

    1990-01-01

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986-1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if the PV technology is to make a major impact in the power generation market and to compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, there is still a need for substantial research and testing to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

  11. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper will describe recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygonal, sphere, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  12. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; and RELACS - A recursive layout computing system. Parallel linear conflict-free subtree access.

  13. DUST EXTINCTION FROM BALMER DECREMENTS OF STAR-FORMING GALAXIES AT 0.75 ≤ z ≤ 1.5 WITH HUBBLE SPACE TELESCOPE/WIDE-FIELD-CAMERA 3 SPECTROSCOPY FROM THE WFC3 INFRARED SPECTROSCOPIC PARALLEL SURVEY

    SciTech Connect

    Dominguez, A.; Siana, B.; Masters, D.; Henry, A. L.; Martin, C. L.; Scarlata, C.; Bedregal, A. G.; Malkan, M.; Ross, N. R.; Atek, H.; Colbert, J. W.; Teplitz, H. I.; Rafelski, M.; McCarthy, P.; Hathi, N. P.; Dressler, A.; Bunker, A.

    2013-02-15

    Spectroscopic observations of Hα and Hβ emission lines of 128 star-forming galaxies in the redshift range 0.75 ≤ z ≤ 1.5 are presented. These data were taken with slitless spectroscopy using the G102 and G141 grisms of the Wide-Field-Camera 3 (WFC3) on board the Hubble Space Telescope as part of the WFC3 Infrared Spectroscopic Parallel survey. Interstellar dust extinction is measured from stacked spectra that cover the Balmer decrement (Hα/Hβ). We present dust extinction as a function of Hα luminosity (down to 3 × 10^41 erg s^-1), galaxy stellar mass (reaching 4 × 10^8 M_Sun), and rest-frame Hα equivalent width. The faintest galaxies are two times fainter in Hα luminosity than galaxies previously studied at z ≈ 1.5. An evolution is observed where galaxies of the same Hα luminosity have lower extinction at higher redshifts, whereas no evolution is found within our error bars with stellar mass. The lower Hα luminosity galaxies in our sample are found to be consistent with no dust extinction. We find an anti-correlation of the [O III] λ5007/Hα flux ratio as a function of luminosity where galaxies with L_Hα < 5 × 10^41 erg s^-1 are brighter in [O III] λ5007 than Hα. This trend is evident even after extinction correction, suggesting that the increased [O III] λ5007/Hα ratio in low-luminosity galaxies is likely due to lower metallicity and/or higher ionization parameters.
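    For reference, the Balmer decrement is converted to a color excess through a standard relation; a minimal worked form, assuming the Case B intrinsic ratio Hα/Hβ = 2.86 and a Calzetti-like attenuation curve with k(Hβ) ≈ 4.60 and k(Hα) ≈ 3.33 (standard assumptions, not values taken from the abstract above), is

        E(B-V) = \frac{2.5}{k(\mathrm{H}\beta) - k(\mathrm{H}\alpha)} \, \log_{10}\!\left[ \frac{(\mathrm{H}\alpha/\mathrm{H}\beta)_{\mathrm{obs}}}{2.86} \right], \qquad A(\mathrm{H}\alpha) = k(\mathrm{H}\alpha)\, E(B-V).

    With these coefficients the prefactor 2.5/(k(Hβ) - k(Hα)) is roughly 1.97, so an observed decrement of 4.0 would give E(B-V) ≈ 0.29 mag.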

  14. Algorithmically Specialized Parallel Architecture For Robotics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    Computing system called Robot Mathematics Processor (RMP) contains large number of processor elements (PE's) connected in various parallel and serial combinations reconfigurable via software. Special-purpose architecture designed for solving diverse computational problems in robot control, simulation, trajectory generation, workspace analysis, and like. System an MIMD-SIMD parallel architecture capable of exploiting parallelism in different forms and at several computational levels. Major advantage lies in design of cells, which provides flexibility and reconfigurability superior to previous SIMD processors.

  15. MPP Parallel FORTH

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Massively Parallel Processor (MPP) Parallel FORTH is a derivative of FORTH-83 and Unified Software Systems' Uni-FORTH. The extension of FORTH into the realm of parallel processing on the MPP is described. With few exceptions, Parallel FORTH was made to follow the description of Uni-FORTH as closely as possible. Likewise, the parallel FORTH extensions were designed to be as philosophically similar to serial FORTH as possible. The MPP hardware characteristics, as viewed by the FORTH programmer, are discussed. Then a description is presented of how parallel FORTH is implemented on the MPP.

  16. The Trojan Lifetime Champions Health Survey: Development, Validity, and Reliability

    PubMed Central

    Sorenson, Shawn C.; Romano, Russell; Scholefield, Robin M.; Schroeder, E. Todd; Azen, Stanley P.; Salem, George J.

    2015-01-01

    Context Self-report questionnaires are an important method of evaluating lifespan health, exercise, and health-related quality of life (HRQL) outcomes among elite, competitive athletes. Few instruments, however, have undergone formal characterization of their psychometric properties within this population. Objective To evaluate the validity and reliability of a novel health and exercise questionnaire, the Trojan Lifetime Champions (TLC) Health Survey. Design Descriptive laboratory study. Setting A large National Collegiate Athletic Association Division I university. Patients or Other Participants A total of 63 university alumni (age range, 24 to 84 years), including former varsity collegiate athletes and a control group of nonathletes. Intervention(s) Participants completed the TLC Health Survey twice at a mean interval of 23 days with randomization to the paper or electronic version of the instrument. Main Outcome Measure(s) Content validity, feasibility of administration, test-retest reliability, parallel-form reliability between paper and electronic forms, and estimates of systematic and typical error versus differences of clinical interest were assessed across a broad range of health, exercise, and HRQL measures. Results Correlation coefficients, including intraclass correlation coefficients (ICCs) for continuous variables and κ agreement statistics for ordinal variables, for test-retest reliability averaged 0.86, 0.90, 0.80, and 0.74 for HRQL, lifetime health, recent health, and exercise variables, respectively. Correlation coefficients, again ICCs and κ, for parallel-form reliability (ie, equivalence) between paper and electronic versions averaged 0.90, 0.85, 0.85, and 0.81 for HRQL, lifetime health, recent health, and exercise variables, respectively. Typical measurement error was less than the a priori thresholds of clinical interest, and we found minimal evidence of systematic test-retest error. We found strong evidence of content validity, convergent
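    As an illustration of the reliability statistics referenced above, the following is a minimal sketch (not the authors' code) of an ICC(2,1) computation from a subjects-by-forms score matrix, using the standard two-way ANOVA decomposition; the data and variable names are hypothetical.

        import numpy as np

        def icc_2_1(scores):
            # ICC(2,1): two-way random-effects, absolute-agreement, single-measure form.
            # scores: (n_subjects, k_forms) array, e.g. test-retest or paper vs. electronic.
            x = np.asarray(scores, dtype=float)
            n, k = x.shape
            grand = x.mean()
            msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
            msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # between forms
            sse = np.sum((x - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
            mse = sse / ((n - 1) * (k - 1))
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        # Hypothetical example: 5 participants scored on two parallel forms.
        scores = np.array([[70, 72], [55, 57], [88, 85], [64, 66], [91, 93]])
        print(round(icc_2_1(scores), 3))

    In practice the same matrix layout covers both the test-retest and the paper-versus-electronic comparisons described in the abstract; only the columns change.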

  17. Fault-tolerant parallel processor

    SciTech Connect

    Harper, R.E.; Lala, J.H. )

    1991-06-01

    This paper addresses issues central to the design and operation of an ultrareliable, Byzantine resilient parallel computer. Interprocessor connectivity requirements are met by treating connectivity as a resource that is shared among many processing elements, allowing flexibility in their configuration and reducing complexity. Redundant groups are synchronized solely by message transmissions and receptions, which also provide input data consistency and output voting. Reliability analysis results are presented that demonstrate the reduced failure probability of such a system. Performance analysis results are presented that quantify the temporal overhead involved in executing such fault-tolerance-specific operations. Empirical performance measurements of prototypes of the architecture are presented. 30 refs.

  18. Reliability and Confidence.

    ERIC Educational Resources Information Center

    Test Service Bulletin, 1952

    1952-01-01

    Some aspects of test reliability are discussed. Topics covered are: (1) how high should a reliability coefficient be?; (2) two factors affecting the interpretation of reliability coefficients--range of talent and interval between testings; (3) some common misconceptions--reliability of speed tests, part vs. total reliability, reliability for what…

  19. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  20. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  1. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  2. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
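    To make the block-tree idea concrete, here is a minimal sketch of a grid-block node whose children refine the parent's region; it is written in Python rather than PARAMESH's Fortran 90, and it is not the package's actual data layout, just an illustration of the quad-tree structure.

        from dataclasses import dataclass, field
        from typing import List, Tuple

        @dataclass
        class GridBlock:
            # One node of the block tree: a logically Cartesian patch covering [lo, hi].
            lo: Tuple[float, float]
            hi: Tuple[float, float]
            level: int = 0
            children: List["GridBlock"] = field(default_factory=list)

            def refine(self):
                # Split a 2-D block into four children (a quad-tree; 3-D gives eight, an oct-tree).
                (x0, y0), (x1, y1) = self.lo, self.hi
                xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
                corners = [((x0, y0), (xm, ym)), ((xm, y0), (x1, ym)),
                           ((x0, ym), (xm, y1)), ((xm, ym), (x1, y1))]
                self.children = [GridBlock(lo, hi, self.level + 1) for lo, hi in corners]

        def leaves(block):
            # The leaf blocks are the ones that actually carry solution data.
            if not block.children:
                return [block]
            return [leaf for child in block.children for leaf in leaves(child)]

        root = GridBlock((0.0, 0.0), (1.0, 1.0))
        root.refine()                  # refine wherever the application demands resolution
        root.children[0].refine()
        print(len(leaves(root)), [b.level for b in leaves(root)])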

  3. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  4. Improved CDMA Performance Using Parallel Interference Cancellation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Divsalar, Dariush

    1995-01-01

    This report considers a general parallel interference cancellation scheme that significantly reduces the degradation effect of user interference but with a lesser implementation complexity than the maximum-likelihood technique. The scheme operates on the fact that parallel processing simultaneously removes from each user the interference produced by the remaining users accessing the channel in an amount proportional to their reliability. The parallel processing can be done in multiple stages. The proposed scheme uses tentative decision devices with different optimum thresholds at the multiple stages to produce the most reliably received data for generation and cancellation of user interference. The 1-stage interference cancellation is analyzed for three types of tentative decision devices, namely, hard, null zone, and soft decision, and two types of user power distribution, namely, equal and unequal powers. Simulation results are given for a multitude of different situations, in particular, those cases for which the analysis is too complex.
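    A minimal numpy sketch of one stage of parallel interference cancellation for synchronous BPSK CDMA with hard tentative decisions only; the code sizes, noise level, and variable names are illustrative assumptions, not parameters from the report.

        import numpy as np

        rng = np.random.default_rng(0)
        n_users, spread_len = 4, 32
        codes = rng.choice([-1.0, 1.0], size=(n_users, spread_len)) / np.sqrt(spread_len)
        bits = rng.choice([-1.0, 1.0], size=n_users)
        received = bits @ codes + 0.1 * rng.standard_normal(spread_len)  # synchronous channel plus noise

        # Stage 0: conventional matched-filter (correlator) tentative decisions.
        tentative = np.sign(codes @ received)

        # Stage 1: for every user in parallel, subtract the interference estimated from
        # all *other* users' tentative decisions, then re-decide.
        estimates = tentative[:, None] * codes           # per-user reconstructed signals
        total = estimates.sum(axis=0)
        cleaned = received - (total - estimates)         # remove all but the user's own signal
        decisions = np.sign(np.einsum("us,us->u", codes, cleaned))
        print(decisions == bits)

    Soft or null-zone tentative decisions, and additional stages, would replace the np.sign step, as the analysis in the report considers.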

  5. Eclipse Parallel Tools Platform

    SciTech Connect

    Watson, Gregory; DeBardeleben, Nathan; Rasmussen, Craig

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that only provides minimal functionality for parallel tool integration, support for a small number of parallel architectures, and basic

  6. Design considerations for parallel graphics libraries

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  7. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
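    As a rough sketch of the spatial-decomposition idea mentioned above (illustrative only, not drawn from the review itself), each rank can be made to own a rectangular cell of the simulation box and the atoms currently inside it:

        import numpy as np

        def owner_ranks(positions, box, grid):
            # Map each atom position to the rank owning its spatial cell.
            # positions: (n_atoms, 3); box: (3,) edge lengths; grid: (px, py, pz) process grid.
            grid = np.asarray(grid)
            cell = np.floor(positions / np.asarray(box) * grid).astype(int)
            cell = np.clip(cell, 0, grid - 1)
            return cell[:, 0] * grid[1] * grid[2] + cell[:, 1] * grid[2] + cell[:, 2]

        pos = np.random.default_rng(1).random((10, 3)) * 20.0
        print(owner_ranks(pos, box=(20.0, 20.0, 20.0), grid=(2, 2, 2)))

    Replicated-data and force decompositions differ only in what is partitioned: every rank holds all coordinates in the former, and a block of the pairwise force matrix in the latter.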

  8. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  9. Two Level Parallel Grammatical Evolution

    NASA Astrophysics Data System (ADS)

    Ošmera, Pavel

    This paper describes a Two Level Parallel Grammatical Evolution (TLPGE) that can evolve complete programs using a variable length linear genome to govern the mapping of a Backus Naur Form grammar definition. To increase the efficiency of Grammatical Evolution (GE) the influence of backward processing was tested and a second level with differential evolution was added. The significance of backward coding (BC) and the comparison with standard coding of GEs is presented. The new method is based on parallel grammatical evolution (PGE) with a backward processing algorithm, which is further extended with a differential evolution algorithm. Thus a two-level optimization method was formed in an attempt to take advantage of the benefits of both original methods and avoid their difficulties. Both methods used are discussed and the architecture of their combination is described. The application of the method is also discussed, and results on a real-world problem are described.

  10. Integrated circuit reliability testing

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Sayah, Hoshyar R. (Inventor)

    1990-01-01

    A technique is described for use in determining the reliability of microscopic conductors deposited on an uneven surface of an integrated circuit device. A wafer containing integrated circuit chips is formed with a test area having regions of different heights. At the time the conductors are formed on the chip areas of the wafer, an elongated serpentine assay conductor is deposited on the test area so the assay conductor extends over multiple steps between regions of different heights. Also, a first test conductor is deposited in the test area upon a uniform region of first height, and a second test conductor is deposited in the test area upon a uniform region of second height. The occurrence of high resistances at the steps between regions of different height is indicated by deriving the measured length of the serpentine conductor using the resistance measured between the ends of the serpentine conductor, and comparing that to the design length of the serpentine conductor. The percentage by which the measured length exceeds the design length, at which the integrated circuit will be discarded, depends on the required reliability of the integrated circuit.
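    The acceptance test described above reduces to a simple calculation; a hedged sketch with made-up numbers (the resistance per unit length, design length, and rejection threshold below are illustrative only, not values from the patent):

        # Infer the effective length of the serpentine assay conductor from its
        # end-to-end resistance, then compare it with the design length.
        resistance_per_unit = 0.5          # ohms per micrometer, from the flat test conductors
        design_length = 10_000.0           # micrometers
        measured_resistance = 5_400.0      # ohms, measured across the serpentine conductor

        measured_length = measured_resistance / resistance_per_unit
        excess_pct = 100.0 * (measured_length - design_length) / design_length
        reject_threshold_pct = 5.0         # depends on the required reliability
        verdict = "discard" if excess_pct > reject_threshold_pct else "accept"
        print(f"excess {excess_pct:.1f}% -> {verdict}")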

  11. Integrated circuit reliability testing

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Sayah, Hoshyar R. (Inventor)

    1988-01-01

    A technique is described for use in determining the reliability of microscopic conductors deposited on an uneven surface of an integrated circuit device. A wafer containing integrated circuit chips is formed with a test area having regions of different heights. At the time the conductors are formed on the chip areas of the wafer, an elongated serpentine assay conductor is deposited on the test area so the assay conductor extends over multiple steps between regions of different heights. Also, a first test conductor is deposited in the test area upon a uniform region of first height, and a second test conductor is deposited in the test area upon a uniform region of second height. The occurrence of high resistances at the steps between regions of different height is indicated by deriving the measured length of the serpentine conductor using the resistance measured between the ends of the serpentine conductor, and comparing that to the design length of the serpentine conductor. The percentage by which the measured length exceeds the design length, at which the integrated circuit will be discarded, depends on the required reliability of the integrated circuit.

  12. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital Forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the architecture and implementation of the parallel digital forensics (PDF) infrastructure.

  13. Reliability model generator

    NASA Technical Reports Server (NTRS)

    McMann, Catherine M. (Inventor); Cohen, Gerald C. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.

  14. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

  15. Parallel MR Imaging

    PubMed Central

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

    2015-01-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the under-sampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
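    A compact sketch of the SENSE-type unfolding step for a single aliased pixel (coil sensitivities and acceleration factor assumed known; illustrative only, not a full reconstruction):

        import numpy as np

        def sense_unfold(aliased, sens):
            # aliased: (n_coils,) complex values of one aliased pixel across coils.
            # sens:    (n_coils, R) coil sensitivities at the R locations folded together.
            # Solve sens @ rho = aliased in the least-squares sense for the R true pixel values.
            rho, *_ = np.linalg.lstsq(sens, aliased, rcond=None)
            return rho

        rng = np.random.default_rng(2)
        n_coils, R = 8, 2
        sens = rng.standard_normal((n_coils, R)) + 1j * rng.standard_normal((n_coils, R))
        truth = np.array([1.0 + 0.5j, -0.3 + 0.2j])
        aliased = sens @ truth + 0.01 * rng.standard_normal(n_coils)
        print(np.allclose(sense_unfold(aliased, sens), truth, atol=0.05))

    GRAPPA-type methods work instead in k-space, estimating the missing lines from acquired neighbors before Fourier transformation.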

  16. Calculating system reliability with SRFYDO

    SciTech Connect

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
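    For a series system of the kind SRFYDO targets, the core bookkeeping is that the system works only if every component works; the sketch below uses an illustrative Weibull age dependence and hypothetical parameters, not SRFYDO's actual Bayesian model.

        from math import exp, prod

        def weibull_reliability(age, scale, shape):
            # Illustrative age-dependent component survival model (not SRFYDO's model).
            return exp(-((age / scale) ** shape))

        def series_system_reliability(component_reliabilities):
            # A series system succeeds only if every component succeeds.
            return prod(component_reliabilities)

        age_months = 12.0
        components = [weibull_reliability(age_months, scale, shape)
                      for scale, shape in [(120.0, 1.2), (200.0, 0.9), (90.0, 1.5)]]
        print([round(r, 3) for r in components], round(series_system_reliability(components), 3))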

  17. Eclipse Parallel Tools Platform

    Energy Science and Technology Software Center (ESTSC)

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that only provides minimal functionality for parallel tool integration, support for a small number of parallel architectures

  18. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  19. Reliability Generalization: "Lapsus Linguae"

    ERIC Educational Resources Information Center

    Smith, Julie M.

    2011-01-01

    This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,…

  20. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
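    A minimal sketch of the general idea of parallelizing the sieve by giving each worker a contiguous segment of the integers; the scattered decomposition and hypercube specifics of the paper are omitted, and all names and parameters below are illustrative.

        from math import isqrt
        from multiprocessing import Pool

        def sieve_segment(args):
            lo, hi, base_primes = args
            # Mark multiples of the shared base primes inside [lo, hi).
            flags = bytearray([1]) * (hi - lo)
            for p in base_primes:
                start = max(p * p, ((lo + p - 1) // p) * p)
                for m in range(start, hi, p):
                    flags[m - lo] = 0
            return [lo + i for i, f in enumerate(flags) if f]

        def parallel_sieve(n, workers=4):
            limit = isqrt(n) + 1
            base = [p for p in range(2, limit) if all(p % q for q in range(2, isqrt(p) + 1))]
            step = (n - 1) // workers + 1
            chunks = [(lo, min(lo + step, n + 1), base) for lo in range(2, n + 1, step)]
            with Pool(workers) as pool:
                return sorted(p for seg in pool.map(sieve_segment, chunks) for p in seg)

        if __name__ == "__main__":
            print(parallel_sieve(100))

    The segments are independent once the small base primes are shared, which is why this decomposition scales across many processing elements.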

  1. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C^3P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  2. Parallel methods for dynamic simulation of multiple manipulator systems

    NASA Technical Reports Server (NTRS)

    Mcmillan, Scott; Sadayappan, P.; Orin, David E.

    1993-01-01

    In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.

  3. Low-power approaches for parallel, free-space photonic interconnects

    SciTech Connect

    Carson, R.F.; Lovejoy, M.L.; Lear, K.L.; Warren, M.E.; Seigal, P.K.; Craft, D.C.; Kilcoyne, S.P.; Patrizi, G.A.; Blum, O.

    1995-12-31

    Future advances in the application of photonic interconnects will involve the insertion of parallel-channel links into Multi-Chip Modules (MCMs) and board-level parallel connections. Such applications will drive photonic link components into more compact forms that consume far less power than traditional telecommunication data links. These will make use of new device-level technologies such as vertical cavity surface-emitting lasers and special low-power parallel photoreceiver circuits. Depending on the application, these device technologies will often be monolithically integrated to reduce the amount of board or module real estate required by the photonics. Highly parallel MCM and board-level applications will also require simplified drive circuitry, lower cost, and higher reliability than has been demonstrated in photonic and optoelectronic technologies. An example is found in two-dimensional point-to-point array interconnects for MCM stacking. These interconnects are based on high-efficiency Vertical Cavity Surface Emitting Lasers (VCSELs), Heterojunction Bipolar Transistor (HBT) photoreceivers, integrated micro-optics, and MCM-compatible packaging techniques. Individual channels have been demonstrated at 100 Mb/s, operating with a direct 3.3V CMOS electronic interface while using 45 mW of electrical power. These results demonstrate how optoelectronic device technologies can be optimized for low-power parallel link applications.

  4. Wind turbine reliability database update.

    SciTech Connect

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.

    2009-03-01

    This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  5. Averaging Internal Consistency Reliability Coefficients

    ERIC Educational Resources Information Center

    Feldt, Leonard S.; Charter, Richard A.

    2006-01-01

    Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…

  6. Can There Be Reliability without "Reliability?"

    ERIC Educational Resources Information Center

    Mislevy, Robert J.

    2004-01-01

    An "Educational Researcher" article by Pamela Moss (1994) asks the title question, "Can there be validity without reliability?" Yes, she answers, if by reliability one means "consistency among independent observations intended as interchangeable" (Moss, 1994, p. 7), quantified by internal consistency indices such as KR-20 coefficients and…

  7. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  8. Parallel nearest neighbor calculations

    NASA Astrophysics Data System (ADS)

    Trease, Harold

    We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.

  9. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  10. Comparison of Reliability Measures under Factor Analysis and Item Response Theory

    ERIC Educational Resources Information Center

    Cheng, Ying; Yuan, Ke-Hai; Liu, Cheng

    2012-01-01

    Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability rho and an unweighted sum score-based omega, among many others. With increasing popularity of item response theory, a parallel reliability measure pi…

  11. SLAPP: A systolic linear algebra parallel processor

    SciTech Connect

    Drake, B.L.; Luk, F.T.; Speiser, J.M.; Symanski, J.J.

    1987-07-01

    Systolic array computer architectures provide a means for fast computation of the linear algebra algorithms that form the building blocks of many signal-processing algorithms, facilitating their real-time computation. For applications to signal processing, the systolic array operates on matrices, an inherently parallel view of the data, using numerical linear algebra algorithms that have been suitably parallelized to efficiently utilize the available hardware. This article describes work currently underway at the Naval Ocean Systems Center, San Diego, California, to build a two-dimensional systolic array, SLAPP, demonstrating efficient and modular parallelization of key matrix computations for real-time signal- and image-processing problems.

  12. Parallel optical memories for very large databases

    NASA Astrophysics Data System (ADS)

    Mitkas, Pericles A.; Berra, P. B.

    1993-02-01

    The steady increase in volume of current and future databases dictates the development of massive secondary storage devices that allow parallel access and exhibit high I/O data rates. Optical memories, such as parallel optical disks and holograms, can satisfy these requirements because they combine high recording density and parallel one- or two-dimensional output. Several configurations for database storage involving different types of optical memory devices are investigated. All these approaches include some level of optical preprocessing in the form of data filtering in an attempt to reduce the amount of data per transaction that reach the electronic front-end.

  13. Parallel system simulation

    SciTech Connect

    Tai, H.M.; Saeks, R.

    1984-03-01

    A relaxation algorithm for solving large-scale system simulation problems in parallel is proposed. The algorithm, which is composed of both a time-step parallel algorithm and a component-wise parallel algorithm, is described. The interconnected nature of the system, which is characterized by the component connection model, is fully exploited by this approach. A technique for finding an optimal number of the time steps is also described. Finally, this algorithm is illustrated via several examples in which the possible trade-offs between the speed-up ratio, efficiency, and waiting time are analyzed.

  14. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  15. Parallels with nature

    NASA Astrophysics Data System (ADS)

    2014-10-01

    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  16. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  17. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

  18. Partitioning and parallel radiosity

    NASA Astrophysics Data System (ADS)

    Merzouk, S.; Winkler, C.; Paul, J. C.

    1996-03-01

    This paper proposes a theoretical framework, based on domain subdivision for parallel radiosity. Moreover, three various implementation approaches, taking advantage of partitioning algorithms and global shared memory architecture, are presented.

  19. Simplified Parallel Domain Traversal

    SciTech Connect

    Erickson III, David J

    2011-01-01

    Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO2 and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.

  20. A trial for a reliable shape measurement using interferometry and deflectometry

    NASA Astrophysics Data System (ADS)

    Hanayama, Ryohei

    2014-07-01

    Phase measuring deflectometry is an emerging technique for measuring specular complex surfaces, such as aspherical and free-form surfaces. It is very attractive for its wide dynamic range of vertical scale and its broad range of applications. Because it is a gradient-based surface profilometry technique, the measured data must be integrated to obtain the surface shape, and this integration can be a cause of low accuracy. On the other hand, interferometry is an accurate and well-known method for precision shape measurement. In interferometry, the original measured data is the phase of the interference signal, which directly represents the surface shape of the target. However, interferometry is too precise to measure aspherical surfaces, free-form surfaces, and ordinary surfaces in common industry. To assure accuracy in ultra-precision measurement, reliability is the most important thing, and reliability can be maintained by cross-checking. I therefore propose a measuring method that uses both interferometry and deflectometry for reliable shape measurement. In this concept, the global shape is measured using deflectometry and the local shape around a flat area is measured using interferometry. The result of deflectometry is global and precise, but it includes ambiguity due to slope integration. In interferometry, only a small area that is almost parallel to the reference surface can be measured, but the result is accurate and reliable. Combining both results should yield a global, precise, and reliable measurement. I present the concept of combining interferometry and deflectometry and some preliminary experimental results.
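    To illustrate why the integration step can degrade accuracy, here is a deliberately naive sketch that reconstructs a height map from measured slopes by cumulative summation along two paths and averaging; this is not the method used in the paper, and all names and values are illustrative. Small systematic slope errors accumulate into visible shape errors.

        import numpy as np

        def integrate_gradients(gx, gy, dx=1.0, dy=1.0):
            # gx, gy: slope maps dz/dx and dz/dy on a regular grid.
            # Path 1: integrate along the first row, then down each column.
            z1 = np.zeros_like(gx)
            z1[0, :] = np.cumsum(gx[0, :]) * dx
            z1[1:, :] = z1[0, :] + np.cumsum(gy[1:, :], axis=0) * dy
            # Path 2: integrate along the first column, then across each row.
            z2 = np.zeros_like(gx)
            z2[:, 0] = np.cumsum(gy[:, 0]) * dy
            z2[:, 1:] = z2[:, [0]] + np.cumsum(gx[:, 1:], axis=1) * dx
            return 0.5 * (z1 + z2)   # averaging two paths reduces, but does not remove, drift

        y, x = np.mgrid[0:64, 0:64] * 0.1
        z_true = 0.02 * (x - 3.2) ** 2 + 0.01 * (y - 3.2) ** 2      # a mild free-form-like surface
        gx = np.gradient(z_true, 0.1, axis=1)
        gy = np.gradient(z_true, 0.1, axis=0)
        z_rec = integrate_gradients(gx + 1e-3, gy, dx=0.1, dy=0.1)  # small systematic slope error
        print(float(np.abs((z_rec - z_rec.mean()) - (z_true - z_true.mean())).max()))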

  1. Continuous parallel coordinates.

    PubMed

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data. PMID:19834230
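    For reference, the point-line duality mentioned above can be stated briefly (axes placed at u = 0 and u = 1; this is the standard construction, not a result specific to this paper): a data point (x_1, x_2) is drawn as the line segment joining (0, x_1) and (1, x_2), and conversely every point on the scatterplot line x_2 = m x_1 + b (m ≠ 1) maps to a segment through the single dual point

        \left( \frac{1}{1 - m}, \; \frac{b}{1 - m} \right),

    which is why line-like density in the scatterplot concentrates at points between (or outside) the parallel axes.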

  2. Reliability computation from reliability block diagrams

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.; Eckstein, R. E.

    1971-01-01

    A method and a computer program are presented to calculate probability of system success from an arbitrary reliability block diagram. The class of reliability block diagrams that can be handled include any active/standby combination of redundancy, and the computations include the effects of dormancy and switching in any standby redundancy. The mechanics of the program are based on an extension of the probability tree method of computing system probabilities.
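    The elementary combinations such a program builds on can be sketched in a few lines; the sketch below handles active redundancy only and omits the dormancy and switching effects that the program also models, and the diagram and numbers are hypothetical.

        from math import prod

        def series(blocks):
            # A series group succeeds only if every block succeeds.
            return prod(blocks)

        def active_parallel(blocks):
            # An active-redundant group fails only if every block fails.
            return 1.0 - prod(1.0 - r for r in blocks)

        # Hypothetical diagram: two redundant pumps feeding a single valve and controller.
        pumps = active_parallel([0.95, 0.95])
        print(round(series([pumps, 0.99, 0.998]), 4))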

  3. Human Reliability Program Overview

    SciTech Connect

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  4. Power electronics reliability analysis.

    SciTech Connect

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.

  5. Method of Administration of PROMIS Scales Did Not Significantly Impact Score Level, Reliability or Validity

    PubMed Central

    Bjorner, Jakob B.; Rose, Matthias; Gandek, Barbara; Stone, Arthur A.; Junghaenel, Doerte U.; Ware, John E.

    2014-01-01

    Objective To test the impact of method of administration (MOA) on score level, reliability, and validity of scales developed in the Patient Reported Outcomes Measurement Information System (PROMIS). Study Design and Setting Two non-overlapping parallel forms each containing 8 items from each of three PROMIS item banks (Physical Function, Fatigue and Depression) were completed by 923 adults with COPD, depression, or rheumatoid arthritis. In a randomized cross-over design, subjects answered one form by interactive voice response (IVR) technology, paper questionnaire (PQ), personal digital assistant (PDA), or personal computer (PC) and a second form by PC, in the same administration. Method equivalence was evaluated through analyses of difference scores, intraclass correlations (ICC), and convergent/discriminant validity. Results In difference score analyses, no significant mode differences were found and all confidence intervals were within the pre-specified MID of 0.2 SD. Parallel forms reliabilities were very high (ICC=0.85-0.93). Only one across mode ICC was significantly lower than the same mode ICC. Tests of validity showed no differential effect by MOA. Participants preferred screen interface over PQ and IVR. Conclusion We found no statistically or clinically significant differences in score levels or psychometric properties of IVR, PQ or PDA administration as compared to PC. PMID:24262772

  6. Reliable Design Versus Trust

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  7. Evaluation of competing software reliability predictions

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaly, A. A.; Chan, P. Y.; Littlewood, B.

    1986-01-01

    Different software reliability models can produce very different answers when called upon to predict future reliability in a reliability growth context. Users need to know which, if any, of the competing predictions are trustworthy. Some techniques are presented which form the basis of a partial solution to this problem. Rather than attempting to decide which model is generally best, the approach adopted here allows a user to decide upon the most appropriate model for each application.

  8. Reliability model generator specification

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C.; Mccann, Catherine

    1990-01-01

    The Reliability Model Generator (RMG) is described: a program that produces reliability models from block diagrams for ASSIST, the interface to the reliability evaluation tool SURE. The motivation for RMG is given and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed example traces.

  9. Predicting software reliability

    NASA Technical Reports Server (NTRS)

    Littlewood, B.

    1989-01-01

    A detailed look is given to software reliability techniques. A conceptual model of the failure process is examined, and some software reliability growth models are discussed. Problems for which no current solutions exist are addressed, emphasizing the very difficult problem of safety-critical systems for which the reliability requirements can be enormously demanding.

  10. Derivation of operation rules for reservoirs in parallel with joint water demand

    NASA Astrophysics Data System (ADS)

    Zeng, Xiang; Hu, Tiesong; Xiong, Lihua; Cao, Zhixian; Xu, Chongyu

    2015-12-01

    The purpose of this paper is to derive the general optimality conditions of the commonly used operating policies for reservoirs in parallel with joint water demand, which are defined in terms of system-wide release rules and individual reservoir storage balancing functions. Following that, a new set of release rules for individual reservoirs is proposed in analytical form by considering the optimality conditions for the balance between total water delivery utility and the carryover storage value of individual reservoirs. Theoretical analysis indicates that the commonly used operating policies are a special case of the newly derived rules. The derived release rules are then applied to simulating the operation of a parallel reservoir system in northeastern China. Compared to the performance of the commonly used policies, the proposed operation rules show several advantages; most notably, fewer water-shortage occurrences and higher water supply reliability are obtained.

  11. Low-power, parallel photonic interconnections for Multi-Chip Module applications

    SciTech Connect

    Carson, R.F.; Lovejoy, M.L.; Lear, K.L.

    1994-12-31

    New applications of photonic interconnects will involve the insertion of parallel-channel links into Multi-Chip Modules (MCMs). Such applications will drive photonic link components into more compact forms that consume far less power than traditional telecommunication data links. MCM-based applications will also require simplified drive circuitry, lower cost, and higher reliability than has been demonstrated currently in photonic and optoelectronic technologies. The work described is a parallel link array, designed for vertical (Z-axis) interconnection of the layers in an MCM-based signal processor stack, operating at a data rate of 100 Mb/s. This interconnect is based upon high-efficiency VCSELs, HBT photoreceivers, integrated micro-optics, and MCM-compatible packaging techniques.

  12. Business of reliability

    NASA Astrophysics Data System (ADS)

    Engel, Pierre

    1999-12-01

    The presentation is organized around three themes: (1) The decrease of reception equipment costs allows non-remote-sensing organizations to access a technology until recently reserved for a scientific elite. What this means is the rise of 'operational' executive agencies considering space-based technology and operations as a viable input to their daily tasks. This is possible thanks to totally dedicated ground receiving entities focusing on one application for themselves, rather than serving a vast community of users. (2) The multiplication of earth observation platforms will form the base for reliable technical and financial solutions. One obstacle to the growth of the earth observation industry is the variety of policies (commercial versus non-commercial) ruling the distribution of the data and value-added products. In particular, the high volume of data sales required for the return on investment conflicts with the traditionally low-volume data use of most applications. Constant access to data sources presupposes monitoring needs as well as technical proficiency. (3) Large-volume use of data coupled with low-cost equipment is only possible when the technology has proven reliable, in terms of application results, financial risks and data supply. Each of these factors is reviewed. The expectation is that international cooperation between agencies and private ventures will pave the way for future business models. As an illustration, the presentation proposes to use some recent non-traditional monitoring applications that may lead to significant use of earth observation data, value-added products and services: flood monitoring, ship detection, marine oil pollution deterrent systems and rice acreage monitoring.

  13. Validity and reliability of the multidimensional health locus of control scale for college students

    PubMed Central

    Moshki, Mahdi; Ghofranipour, Fazlollah; Hajizadeh, Ebrahim; Azadfallah, Parviz

    2007-01-01

    Background The purpose of the present study was to assess the validity and reliability of Form A of the Multidimensional Health Locus of Control (MHLC) scales in Iran. Health locus of control is one of the most widely measured parameters of health belief for the planning of health education programs. Methods 496 university students participated in this study. The reliability coefficients were calculated by three different methods: test-retest, parallel forms and Cronbach's alpha. In order to survey the validity of the scale we used three methods: content validity, concurrent validity and construct validity. Results We established the content validity of the Persian translation by translating (and then back-translating) each item from the English version into Persian. The concurrent validity of the questionnaire, as measured against Levenson's IPC scale, was .57 (P < .001), .49 (P < .01) and .53 (P < .001) for the I, P and C dimensions, respectively. Exploratory principal components analysis supported a three-factor structure, with items loading adequately on each factor. Moreover, the approximate orthogonality of the dimensions was confirmed through correlation analyses. In addition, the reliability results were acceptable. Conclusion The results showed that the reliability and validity of the Persian Form A of the MHLC scales were acceptable, and it is suggested as an applicable criterion for similar studies in Iran. PMID:17942001
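
    For readers unfamiliar with the reliability estimates named in the Methods, the sketch below shows how Cronbach's alpha and a simple parallel-forms coefficient (here, the Pearson correlation of two form totals) can be computed. The item scores and form totals are hypothetical; this is illustrative code, not the study's analysis.

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1)       # variance of each item
          total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
          return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

      def parallel_forms_r(form_a_totals, form_b_totals):
          """Parallel-forms reliability estimated as the correlation of form totals."""
          return np.corrcoef(form_a_totals, form_b_totals)[0, 1]

      # Hypothetical data: five respondents, a four-item scale, and two form totals.
      items = np.array([[3, 4, 3, 4], [2, 2, 3, 2], [4, 5, 4, 4], [1, 2, 1, 2], [3, 3, 4, 3]])
      form_a = np.array([12, 9, 17, 6, 13])
      form_b = np.array([11, 10, 16, 7, 12])
      print(f"alpha = {cronbach_alpha(items):.3f}, parallel-forms r = {parallel_forms_r(form_a, form_b):.3f}")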

  14. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  15. Parallel time integration software

    Energy Science and Technology Software Center (ESTSC)

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  16. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion system mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided, one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources are listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  17. Improving Reliability of a Residency Interview Process

    PubMed Central

    Serres, Michelle L.; Gundrum, Todd E.

    2013-01-01

    Objective. To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. Methods. In phase 1 of the study, the authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. Results. In phase 1, the evaluation form had a reliability of 0.98 with a person separation of 6.56; reproducibly, the form separated applicants into 6 distinct groups. Using that form in phases 2 and 3, the largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while the confirmatory phase 3 g-coefficient was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. Conclusion. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact of content specificity. PMID:24159209

  18. Parallel optical sampler

    SciTech Connect

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  19. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance out of workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  20. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
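
    A toy illustration (not the model from the paper) of the firing rule that underlies data flow graphs: a node is enabled when every input edge holds a token, and firing consumes one token per input edge and produces one token on each output edge. The node names below are hypothetical.

      from collections import defaultdict

      def simulate(edges, initial_tokens, max_steps=100):
          """edges: list of (src, dst) pairs; initial_tokens: {edge: token count}."""
          tokens = defaultdict(int, initial_tokens)
          inputs, outputs = defaultdict(list), defaultdict(list)
          for e in edges:
              outputs[e[0]].append(e)
              inputs[e[1]].append(e)
          fired = []
          for _ in range(max_steps):
              enabled = [n for n in inputs if all(tokens[e] > 0 for e in inputs[n])]
              if not enabled:
                  break                         # nothing can fire: finished or deadlocked
              node = enabled[0]                 # fire one enabled node per step
              for e in inputs[node]:
                  tokens[e] -= 1                # consume one token from each input edge
              for e in outputs[node]:
                  tokens[e] += 1                # produce one token on each output edge
              fired.append(node)
          return fired

      # Hypothetical pipeline A -> B -> C with a single token waiting on A -> B.
      print(simulate([("A", "B"), ("B", "C")], {("A", "B"): 1}))   # ['B', 'C']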

  1. Embodied and Distributed Parallel DJing.

    PubMed

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids, etc., or to groups of persons with special needs performing in traditional ways. The latter might be people with disabilities performing as musicians on traditional instruments, or as actors in theatre. In this paper we focus on the innovative potential of including people with special needs when creating new cultural activities. In our project RHYME our goal was to create health-promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contributions, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments for Empowering Multi-Sensorial Things. PMID:27534347

  2. Coarrays for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Snyder, W. Van

    2011-01-01

    The design of the Coarray feature of Fortran 2008 was guided by answering the question "What is the smallest change required to convert Fortran to a robust and efficient parallel language?" Two fundamental issues that any parallel programming model must address are work distribution and data distribution. In order to coordinate work distribution and data distribution, methods for communication and synchronization must be provided. Although originally designed for Fortran, the Coarray paradigm has stimulated development in other languages. X10, Chapel, UPC, Titanium, and class libraries being developed for C++ share the same conceptual framework.

  3. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
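
    The two views of speedup at issue here can be stated in a few lines. The sketch below restates Amdahl's fixed-size bound and the scaled-size (Gustafson-Barsis) speedup; the serial fraction and node count used are purely illustrative, not the figures from the Sandia study.

      def amdahl_speedup(serial_fraction, n_procs):
          """Fixed-size (Amdahl) speedup: S = 1 / (s + (1 - s) / N)."""
          return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

      def scaled_speedup(serial_fraction, n_procs):
          """Scaled-size (Gustafson-Barsis) speedup: S = N - s * (N - 1)."""
          return n_procs - serial_fraction * (n_procs - 1)

      # Illustrative serial fraction of 0.4% on a 1024-node machine.
      print(amdahl_speedup(0.004, 1024))   # roughly 200: the fixed-size bound bites
      print(scaled_speedup(0.004, 1024))   # roughly 1020: scaling the problem recovers speedup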

  4. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  5. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage.

  6. Science Grade 7, Long Form.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Bureau of Curriculum Development.

    The Grade 7 Science course of study was prepared in two parallel forms: a short form designed for students who had achieved a high measure of success in previous science courses, and a long form for those who have not been able to maintain that pace. Both forms contain similar content. The Grade 7 guide is the first in a three-year sequence for…

  7. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  8. Parallel Multigrid Equation Solver

    Energy Science and Technology Software Center (ESTSC)

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  9. Parallel Dislocation Simulator

    Energy Science and Technology Software Center (ESTSC)

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  10. Parallel Total Energy

    Energy Science and Technology Software Center (ESTSC)

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  11. NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks. Finally, we mention NAS's future plans for the NPB.

  12. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  13. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  14. Optical parallel selectionist systems

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    1993-01-01

    There are at least two major classes of computers in nature and technology: connectionist and selectionist. A subset of connectionist systems (Turing Machines) dominates modern computing, although another subset (Neural Networks) is growing rapidly. Selectionist machines have unique capabilities which should allow them to do truly creative operations. It is possible to make a parallel optical selectionist system using methods described in this paper.

  15. Human reliability analysis

    SciTech Connect

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory, drawing upon reliability analysis, psychology, human factors engineering, and statistics, and integrating elements of these fields within a systems framework. The treatment provides a history of human reliability analysis and includes examples of the application of the systems approach.

  16. Combinatorial reliability analysis of multiprocessor computers

    SciTech Connect

    Hwang, K.; Tian-Pong Chang

    1982-12-01

    The authors propose a combinatorial method to evaluate the reliability of multiprocessor computers. Multiprocessor structures are classified as crossbar switch, time-shared buses, and multiport memories. Closed-form reliability expressions are derived via combinatorial path enumeration on the probabilistic-graph representation of a multiprocessor system. The method can analyze the reliability performance of real systems such as C.mmp, Tandem 16, and Univac 1100/80. User-oriented performance levels are defined for measuring the performability of degradable multiprocessor systems. For a regularly structured multiprocessor system, the technique is fast and easy to use for evaluating system reliability with statistically independent component reliabilities. System availability can also be evaluated by this reliability study. 6 references.
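
    As a hedged, brute-force companion to the closed-form approach described above (and not the authors' method), the sketch below computes exact two-terminal reliability on a small probabilistic graph by enumerating every combination of working and failed links. The topology and link reliabilities are hypothetical.

      from itertools import product

      def terminal_reliability(edges, rel, source, target):
          """edges: list of (u, v); rel: {edge: probability that the link works}."""
          total = 0.0
          for states in product([True, False], repeat=len(edges)):
              prob, alive = 1.0, []
              for edge, up in zip(edges, states):
                  prob *= rel[edge] if up else 1.0 - rel[edge]
                  if up:
                      alive.append(edge)
              reached, frontier = {source}, [source]
              while frontier:                      # traverse working links only
                  node = frontier.pop()
                  for u, v in alive:
                      for nxt in ((v,) if u == node else (u,) if v == node else ()):
                          if nxt not in reached:
                              reached.add(nxt)
                              frontier.append(nxt)
              if target in reached:
                  total += prob                    # this state keeps source and target connected
          return total

      # Hypothetical processor-to-memory "bridge" of five links, each 0.9 reliable.
      edges = [("P", "A"), ("P", "B"), ("A", "M"), ("B", "M"), ("A", "B")]
      print(terminal_reliability(edges, {e: 0.9 for e in edges}, "P", "M"))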

  17. Constructing higher order DNA origami arrays using DNA junctions of anti-parallel/parallel double crossovers

    NASA Astrophysics Data System (ADS)

    Ma, Zhipeng; Park, Seongsu; Yamashita, Naoki; Kawai, Kentaro; Hirai, Yoshikazu; Tsuchiya, Toshiyuki; Tabata, Osamu

    2016-06-01

    DNA origami provides a versatile method for the construction of nanostructures with defined shape, size and other properties; such nanostructures may enable a hierarchical assembly of large scale architecture for the placement of other nanomaterials with atomic precision. However, the effective use of these higher order structures as functional components depends on knowledge of their assembly behavior and mechanical properties. This paper demonstrates construction of higher order DNA origami arrays with controlled orientations based on the formation of two types of DNA junctions: anti-parallel and parallel double crossovers. A two-step assembly process, in which preformed rectangular DNA origami monomer structures themselves undergo further self-assembly to form numerically unlimited arrays, was investigated to reveal the influences of assembly parameters. AFM observations showed that when parallel double crossover DNA junctions are used, the assembly of DNA origami arrays occurs with fewer monomers than for structures formed using anti-parallel double crossovers, given the same assembly parameters, indicating that the configuration of parallel double crossovers is not energetically preferred. However, the direct measurement by AFM force-controlled mapping shows that both DNA junctions of anti-parallel and parallel double crossovers have homogeneous mechanical stability with any part of DNA origami.

  18. Recalibrating software reliability models

    NASA Technical Reports Server (NTRS)

    Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John

    1989-01-01

    In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.
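
    A minimal sketch of the u-plot idea referred to above, under the simplifying assumption of exponential one-step-ahead predictive distributions (the rates and observed times are hypothetical): each predictive CDF is evaluated at the observed inter-failure time, and the resulting u-values should look uniform on [0, 1] when the predictions are unbiased. The full recalibration step, which transforms later predictions through a smoothed u-plot, is not shown.

      import numpy as np

      def u_values(predicted_rates, observed_times):
          """u_i = predictive CDF at the observed time, assuming exponential predictions."""
          rates = np.asarray(predicted_rates, dtype=float)
          times = np.asarray(observed_times, dtype=float)
          return 1.0 - np.exp(-rates * times)

      def kolmogorov_distance(u):
          """Maximum deviation of the empirical CDF of the u-values from the uniform CDF."""
          u = np.sort(np.asarray(u, dtype=float))
          n = len(u)
          upper = np.arange(1, n + 1) / n - u      # ECDF just after each point vs. uniform
          lower = u - np.arange(0, n) / n          # uniform vs. ECDF just before each point
          return float(max(upper.max(), lower.max()))

      # Hypothetical predictions (failure rates) and observed inter-failure times.
      u = u_values([0.010, 0.012, 0.008, 0.009], [90.0, 60.0, 150.0, 110.0])
      print(u.round(3), kolmogorov_distance(u))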

  19. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.

  20. Recalibrating software reliability models

    NASA Technical Reports Server (NTRS)

    Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John

    1990-01-01

    In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.

  1. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  2. System and Software Reliability (C103)

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Within the last decade better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new software models of the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool with better models would greatly add value in assessing GSFC projects.

  3. Reliability quantification and visualization for electric microgrids

    NASA Astrophysics Data System (ADS)

    Panwar, Mayank

    and parallel with the area Electric Power Systems (EPS), (3) includes the local EPS and may include portions of the area EPS, and (4) is intentionally planned. A more reliable electric power grid requires microgrids to operate in tandem with the EPS. The reliability can be quantified through various metrics for performance measurement. This is done through North American Electric Reliability Corporation (NERC) metrics in North America. The microgrid differs significantly from the traditional EPS, especially at the asset level, due to heterogeneity in assets. Thus, its performance cannot be quantified by the same metrics as used for the EPS. Some of the NERC metrics are calculated and interpreted in this work to quantify performance for a single asset and a group of assets in a microgrid. Two more metrics are introduced for system-level performance quantification. The next step is a better representation of the large amount of data generated by the microgrid. Visualization is one such form of representation, which is explored in detail, and a graphical user interface (GUI) is developed as a deliverable tool for the operator for informed decision making and planning. Electronic appendices I and II contain data and MATLAB program code for the analysis and visualization in this work.

  4. Artful Balance: The Parallel Structures of Style.

    ERIC Educational Resources Information Center

    Hiatt, Mary P.

    Based on an extensive computer-aided examination of representative published American writing, this book examines and compares how various kinds of prose employ the diverse forms of parallelism. A scale of rhetorical value for assessing the cooccurring rhetorical devices of repetition is also presented. The chapters are entitled: "Balance or…

  5. A New Approach to Parallel Interference Cancellation for CDMA

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Martin

    1996-01-01

    This paper introduces an improved nonlinear parallel interference cancellation scheme that significantly reduces the degrading effect of user interference, with implementation complexity linear in the number of users. The scheme exploits the fact that parallel processing simultaneously removes from each user a part of the interference produced by the remaining users accessing the channel, the amount removed being proportional to their reliability. The parallel processing can be done in multiple stages. Simulation results are given for a multitude of different situations, in particular those cases for which the analysis is too complex.
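
    To make the multistage idea concrete, here is a small, hedged sketch of partial parallel interference cancellation for synchronous CDMA. The spreading codes, amplitudes, noise level, and per-stage cancellation weights are all hypothetical choices for illustration; this is the generic scheme, not the authors' exact detector.

      import numpy as np

      rng = np.random.default_rng(0)
      n_chips, n_users = 32, 6
      S = rng.choice([-1.0, 1.0], size=(n_chips, n_users)) / np.sqrt(n_chips)  # unit-norm codes
      A = np.ones(n_users)                               # user amplitudes
      b = rng.choice([-1.0, 1.0], size=n_users)          # transmitted bits
      r = S @ (A * b) + 0.1 * rng.standard_normal(n_chips)

      b_hat = np.sign(S.T @ r)                           # stage 0: matched-filter decisions
      for w in (0.5, 0.75, 1.0):                         # partial-cancellation weight per stage
          composite = S @ (A * b_hat)                    # estimate of all users' combined signal
          new_stat = np.empty(n_users)
          for k in range(n_users):
              others = composite - S[:, k] * (A[k] * b_hat[k])   # interference seen by user k
              new_stat[k] = S[:, k] @ (r - w * others)           # partially cancelled statistic
          b_hat = np.sign(new_stat)

      print("transmitted:", b)
      print("detected:   ", b_hat)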

  6. JSD: Parallel Job Accounting on the IBM SP2

    NASA Technical Reports Server (NTRS)

    Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)

    1995-01-01

    The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.

  7. Hawaii electric system reliability.

    SciTech Connect

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  8. Seeing in parallel

    SciTech Connect

    Little, J.J.; Poggio, T.; Gamble, E.B. Jr.

    1988-01-01

    Computer algorithms have been developed for early vision processes that give separate cues to the distance from the viewer of three-dimensional surfaces, their shape, and their material properties. The MIT Vision Machine is a computer system that integrates several early vision modules to achieve high-performance recognition and navigation in unstructured environments. It is also an experimental environment for theoretical progress in early vision algorithms, their parallel implementation, and their integration. The Vision Machine consists of a movable, two-camera Eye-Head input device and an 8K Connection Machine. The authors have developed and implemented several parallel early vision algorithms that compute edge detection, stereopsis, motion, texture, and surface color in close to real time. The integration stage, based on coupled Markov random field models, leads to a cartoon-like map of the discontinuities in the scene, with partial labeling of the brightness edges in terms of their physical origin.

  9. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of subconvolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
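
    A minimal sketch of the DFT-IDFT overlap-and-save building block on which these architectures rest, in standard textbook form; the block length, filter, and test signal below are illustrative and unrelated to any particular hardware design.

      import numpy as np

      def overlap_save(x, h, fft_len=64):
          """Block convolution of x with FIR filter h via the overlap-save method."""
          m = len(h)
          hop = fft_len - (m - 1)                        # valid output samples per block
          H = np.fft.fft(h, fft_len)
          x_pad = np.concatenate([np.zeros(m - 1), x])   # prepend m-1 samples of "history"
          out_len = len(x) + m - 1                       # length of the full linear convolution
          pieces = []
          for start in range(0, out_len, hop):
              block = x_pad[start:start + fft_len]
              block = np.pad(block, (0, fft_len - len(block)))     # zero-pad the final block(s)
              y_block = np.fft.ifft(np.fft.fft(block) * H).real
              pieces.append(y_block[m - 1:])             # discard samples corrupted by circular wrap
          return np.concatenate(pieces)[:out_len]

      x = np.random.default_rng(1).standard_normal(200)
      h = np.array([0.25, 0.5, 0.25, 0.1])
      assert np.allclose(overlap_save(x, h), np.convolve(x, h))    # matches direct convolution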

  10. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721