Sample records for parallel forms reliability

  1. Test Reliability at the Individual Level

    PubMed Central

    Hu, Yueqin; Nesselroade, John R.; Erbacher, Monica K.; Boker, Steven M.; Burt, S. Alexandra; Keel, Pamela K.; Neale, Michael C.; Sisk, Cheryl L.; Klump, Kelly

    2016-01-01

    Reliability has a long history as one of the key psychometric properties of a test. However, a given test might not measure people equally reliably. Test scores from some individuals may have considerably greater error than others. This study proposed two approaches using intraindividual variation to estimate test reliability for each person. A simulation study suggested that the parallel tests approach and the structural equation modeling approach recovered the simulated reliability coefficients. Then in an empirical study, where forty-five females were measured daily on the Positive and Negative Affect Schedule (PANAS) for 45 consecutive days, separate estimates of reliability were generated for each person. Results showed that reliability estimates of the PANAS varied substantially from person to person. The methods provided in this article apply to tests measuring changeable attributes and require repeated measures across time on each individual. This article also provides a set of parallel forms of PANAS. PMID:28936107
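    As a rough sketch of the parallel-tests approach described above, a person's reliability can be estimated by correlating that person's scores on two parallel forms across repeated daily measurements. All variances below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration for ONE person: 45 daily administrations of
# two parallel forms. Variance choices are assumptions, not study values.
days = 45
true_affect = rng.normal(0.0, 2.0, size=days)            # day-to-day true-score change
form_a = true_affect + rng.normal(0.0, 1.0, size=days)   # parallel form A
form_b = true_affect + rng.normal(0.0, 1.0, size=days)   # parallel form B

# Person-specific reliability estimate: correlation of the two forms across
# days (here the population value would be 4 / (4 + 1) = 0.8).
r_xx = np.corrcoef(form_a, form_b)[0, 1]
print(round(r_xx, 2))
```

    With independent errors of equal variance on the two forms, this correlation estimates that individual's reliability coefficient.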

  2. Accuracy of a Classical Test Theory-Based Procedure for Estimating the Reliability of a Multistage Test. Research Report. ETS RR-17-02

    ERIC Educational Resources Information Center

    Kim, Sooyeon; Livingston, Samuel A.

    2017-01-01

    The purpose of this simulation study was to assess the accuracy of a classical test theory (CTT)-based procedure for estimating the alternate-forms reliability of scores on a multistage test (MST) having 3 stages. We generated item difficulty and discrimination parameters for 10 parallel, nonoverlapping forms of the complete 3-stage test and…

  3. A parallel form of the Gudjonsson Suggestibility Scale.

    PubMed

    Gudjonsson, G H

    1987-09-01

    The purpose of this study is twofold: (1) to present a parallel form of the Gudjonsson Suggestibility Scale (GSS, Form 1); (2) to study test-retest reliabilities of interrogative suggestibility. Three groups of subjects were administered the two suggestibility scales in a counterbalanced order. Group 1 (28 normal subjects) and Group 2 (32 'forensic' patients) completed both scales within the same testing session, whereas Group 3 (30 'forensic' patients) completed the two scales between one week and eight months apart. All the correlations were highly significant, giving support for high 'temporal consistency' of interrogative suggestibility.

  4. Reliability of a science admission test (HAM-Nat) at Hamburg medical school.

    PubMed

    Hissbach, Johanna; Klusmann, Dietrich; Hampe, Wolfgang

    2011-01-01

    The University Hospital in Hamburg (UKE) started to develop a test of knowledge in natural sciences for admission to medical school in 2005 (Hamburger Auswahlverfahren für Medizinische Studiengänge, Naturwissenschaftsteil, HAM-Nat). This study is a step towards establishing the HAM-Nat. We are investigating parallel forms reliability, the effect of a crash course in chemistry on test results, and correlations of HAM-Nat test results with a test of scientific reasoning (similar to a subtest of the "Test for Medical Studies", TMS). 316 first-year students participated in the study in 2007. They completed different versions of the HAM-Nat test which consisted of items that had already been used (HN2006) and new items (HN2007). Four weeks later half of the participants were tested on the HN2007 version of the HAM-Nat again, while the other half completed the test of scientific reasoning. Within this four-week interval students were offered a five-day chemistry course. Parallel forms reliability for four different test versions ranged from r(tt)=.53 to r(tt)=.67. The retest reliabilities of the HN2007 halves were r(tt)=.54 and r(tt)=.61. Correlations of the two HAM-Nat versions with the test of scientific reasoning were r=.34 and r=.21. The crash course in chemistry had no effect on HAM-Nat scores. The results suggest that further versions of the test of natural sciences will not easily conform to the standards of internal consistency, parallel-forms reliability and retest reliability. Much care has to be taken in order to assemble items which could be used interchangeably for the construction of new test versions. The test of scientific reasoning and the HAM-Nat are tapping different constructs. Participation in a chemistry course did not improve students' achievement, probably because the content of the course was not coordinated with the test and many students lacked motivation to do well in the second test.

  5. Reliability of a science admission test (HAM-Nat) at Hamburg medical school

    PubMed Central

    Hissbach, Johanna; Klusmann, Dietrich; Hampe, Wolfgang

    2011-01-01

    Objective: The University Hospital in Hamburg (UKE) started to develop a test of knowledge in natural sciences for admission to medical school in 2005 (Hamburger Auswahlverfahren für Medizinische Studiengänge, Naturwissenschaftsteil, HAM-Nat). This study is a step towards establishing the HAM-Nat. We are investigating parallel forms reliability, the effect of a crash course in chemistry on test results, and correlations of HAM-Nat test results with a test of scientific reasoning (similar to a subtest of the "Test for Medical Studies", TMS). Methods: 316 first-year students participated in the study in 2007. They completed different versions of the HAM-Nat test which consisted of items that had already been used (HN2006) and new items (HN2007). Four weeks later half of the participants were tested on the HN2007 version of the HAM-Nat again, while the other half completed the test of scientific reasoning. Within this four-week interval students were offered a five-day chemistry course. Results: Parallel forms reliability for four different test versions ranged from rtt=.53 to rtt=.67. The retest reliabilities of the HN2007 halves were rtt=.54 and rtt=.61. Correlations of the two HAM-Nat versions with the test of scientific reasoning were r=.34 and r=.21. The crash course in chemistry had no effect on HAM-Nat scores. Conclusions: The results suggest that further versions of the test of natural sciences will not easily conform to the standards of internal consistency, parallel-forms reliability and retest reliability. Much care has to be taken in order to assemble items which could be used interchangeably for the construction of new test versions. The test of scientific reasoning and the HAM-Nat are tapping different constructs. Participation in a chemistry course did not improve students’ achievement, probably because the content of the course was not coordinated with the test and many students lacked motivation to do well in the second test. PMID:21866246

  6. A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2012-01-01

    Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…

  7. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

  8. The Trojan Lifetime Champions Health Survey: development, validity, and reliability.

    PubMed

    Sorenson, Shawn C; Romano, Russell; Scholefield, Robin M; Schroeder, E Todd; Azen, Stanley P; Salem, George J

    2015-04-01

    Self-report questionnaires are an important method of evaluating lifespan health, exercise, and health-related quality of life (HRQL) outcomes among elite, competitive athletes. Few instruments, however, have undergone formal characterization of their psychometric properties within this population. To evaluate the validity and reliability of a novel health and exercise questionnaire, the Trojan Lifetime Champions (TLC) Health Survey. Descriptive laboratory study. A large National Collegiate Athletic Association Division I university. A total of 63 university alumni (age range, 24 to 84 years), including former varsity collegiate athletes and a control group of nonathletes. Participants completed the TLC Health Survey twice at a mean interval of 23 days with randomization to the paper or electronic version of the instrument. Content validity, feasibility of administration, test-retest reliability, parallel-form reliability between paper and electronic forms, and estimates of systematic and typical error versus differences of clinical interest were assessed across a broad range of health, exercise, and HRQL measures. Correlation coefficients, including intraclass correlation coefficients (ICCs) for continuous variables and κ agreement statistics for ordinal variables, for test-retest reliability averaged 0.86, 0.90, 0.80, and 0.74 for HRQL, lifetime health, recent health, and exercise variables, respectively. Correlation coefficients, again ICCs and κ, for parallel-form reliability (ie, equivalence) between paper and electronic versions averaged 0.90, 0.85, 0.85, and 0.81 for HRQL, lifetime health, recent health, and exercise variables, respectively. Typical measurement error was less than the a priori thresholds of clinical interest, and we found minimal evidence of systematic test-retest error. 
We found strong evidence of content validity, convergent construct validity with the Short-Form 12 Version 2 HRQL instrument, and feasibility of administration in an elite, competitive athletic population. These data suggest that the TLC Health Survey is a valid and reliable instrument for assessing lifetime and recent health, exercise, and HRQL among elite competitive athletes. Generalizability of the instrument may be enhanced by additional, larger-scale studies in diverse populations.

  9. The Trojan Lifetime Champions Health Survey: Development, Validity, and Reliability

    PubMed Central

    Sorenson, Shawn C.; Romano, Russell; Scholefield, Robin M.; Schroeder, E. Todd; Azen, Stanley P.; Salem, George J.

    2015-01-01

    Context Self-report questionnaires are an important method of evaluating lifespan health, exercise, and health-related quality of life (HRQL) outcomes among elite, competitive athletes. Few instruments, however, have undergone formal characterization of their psychometric properties within this population. Objective To evaluate the validity and reliability of a novel health and exercise questionnaire, the Trojan Lifetime Champions (TLC) Health Survey. Design Descriptive laboratory study. Setting A large National Collegiate Athletic Association Division I university. Patients or Other Participants A total of 63 university alumni (age range, 24 to 84 years), including former varsity collegiate athletes and a control group of nonathletes. Intervention(s) Participants completed the TLC Health Survey twice at a mean interval of 23 days with randomization to the paper or electronic version of the instrument. Main Outcome Measure(s) Content validity, feasibility of administration, test-retest reliability, parallel-form reliability between paper and electronic forms, and estimates of systematic and typical error versus differences of clinical interest were assessed across a broad range of health, exercise, and HRQL measures. Results Correlation coefficients, including intraclass correlation coefficients (ICCs) for continuous variables and κ agreement statistics for ordinal variables, for test-retest reliability averaged 0.86, 0.90, 0.80, and 0.74 for HRQL, lifetime health, recent health, and exercise variables, respectively. Correlation coefficients, again ICCs and κ, for parallel-form reliability (ie, equivalence) between paper and electronic versions averaged 0.90, 0.85, 0.85, and 0.81 for HRQL, lifetime health, recent health, and exercise variables, respectively. Typical measurement error was less than the a priori thresholds of clinical interest, and we found minimal evidence of systematic test-retest error. 
We found strong evidence of content validity, convergent construct validity with the Short-Form 12 Version 2 HRQL instrument, and feasibility of administration in an elite, competitive athletic population. Conclusions These data suggest that the TLC Health Survey is a valid and reliable instrument for assessing lifetime and recent health, exercise, and HRQL among elite competitive athletes. Generalizability of the instrument may be enhanced by additional, larger-scale studies in diverse populations. PMID:25611315

  10. Evaluation of General Classes of Reliability Estimators Often Used in Statistical Analyses of Quasi-Experimental Designs

    NASA Astrophysics Data System (ADS)

    Saini, K. K.; Sehgal, R. K.; Sethi, B. L.

    2008-10-01

    In this paper major reliability estimators are analyzed and their results are compared; the strengths and weaknesses of each are evaluated in this case study. Each of the reliability estimators has certain advantages and disadvantages. Inter-rater reliability is one of the best ways to estimate reliability when your measure is an observation; however, it requires multiple raters or observers. As an alternative, you could look at the correlation of ratings of the same single observer repeated on two different occasions. Each of the reliability estimators will give a different value for reliability. In general, the test-retest and inter-rater reliability estimates will be lower in value than the parallel-forms and internal-consistency ones because they involve measuring at different times or with different raters. Reliability estimates of these kinds are often used in statistical analyses of quasi-experimental designs.
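    The ordering claimed above can be illustrated with a minimal simulation, using arbitrary variance choices: drift in the attribute between occasions depresses the test-retest correlation relative to a same-occasion parallel-forms correlation.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
true_score = rng.normal(0.0, 1.0, n)

# Parallel forms on the same occasion: only the error terms differ.
form_a = true_score + rng.normal(0.0, 0.6, n)
form_b = true_score + rng.normal(0.0, 0.6, n)

# Test-retest: the attribute itself drifts between occasions, so real
# instability is absorbed into the apparent unreliability.
retest = true_score + rng.normal(0.0, 0.5, n) + rng.normal(0.0, 0.6, n)

r_parallel = np.corrcoef(form_a, form_b)[0, 1]   # population value 1/1.36 ~ 0.74
r_retest = np.corrcoef(form_a, retest)[0, 1]     # attenuated by drift
print(round(r_parallel, 2), round(r_retest, 2))
```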

  11. Development of Officer Selection Battery Forms 3 and 4

    DTIC Science & Technology

    1986-03-01

    the development, standardization, and validation of two parallel forms of a test to be used for assessing young men and women applying to ROTC. Fairly...appropriate difficulty, high reliability, and state-of-the-art validity and fairness for minorities and women. EDGAR M. JOHNSON, Technical Director...administrable test for use in assessing young men and women applying to Advanced Army ROTC. Procedure: Earlier research had performed an analysis of the

  12. Reliability of an e-PRO Tool of EORTC QLQ-C30 for Measurement of Health-Related Quality of Life in Patients With Breast Cancer: Prospective Randomized Trial.

    PubMed

    Wallwiener, Markus; Matthies, Lina; Simoes, Elisabeth; Keilmann, Lucia; Hartkopf, Andreas D; Sokolov, Alexander N; Walter, Christina B; Sickenberger, Nina; Wallwiener, Stephanie; Feisst, Manuel; Gass, Paul; Fasching, Peter A; Lux, Michael P; Wallwiener, Diethelm; Taran, Florin-Andrei; Rom, Joachim; Schneeweiss, Andreas; Graf, Joachim; Brucker, Sara Y

    2017-09-14

    Breast cancer represents the most common malignant disease in women worldwide. As currently systematic palliative treatment only has a limited effect on survival rates, the concept of health-related quality of life (HRQoL) is gaining more and more importance in the therapy setting of metastatic breast cancer. One of the major patient-reported outcomes (PROs) for measuring HRQoL in patients with breast cancer is provided by the European Organization for Research and Treatment of Cancer (EORTC). Currently, paper-based surveys still predominate, as only a few reliable and validated electronic-based questionnaires are available. Facing the possibilities associated with evolving digitalization in medicine, validation of electronic versions of well-established PROs is essential in order to contribute to comprehensive and holistic oncological care and to ensure high quality in cancer research. The aim of this study was to analyze the reliability of a tablet-based measuring application for the EORTC QLQ-C30 in German in patients with adjuvant and (curative) metastatic breast cancer. Paper- and tablet-based questionnaires were completed by a total of 106 female patients with adjuvant and metastatic breast cancer recruited as part of the e-PROCOM study. All patients were required to complete the electronic- (e-PRO) and paper-based versions of the HRQoL EORTC QLQ-C30 questionnaire. A frequency analysis was performed to determine descriptive sociodemographic characteristics. Both dimensions of reliability (parallel forms reliability [Wilcoxon test] and test of internal consistency [Spearman rho and agreement rates for single items, Pearson correlation and Kendall tau for each scale]) were analyzed. High correlations were shown for both dimensions of reliability (parallel forms reliability and internal consistency) in the patient's response behavior between paper- and electronic-based questionnaires. 
Regarding the test of parallel forms reliability, no significant differences were found in 27 of 30 single items and in 14 of 15 scales, whereas a statistically significant correlation in the test of consistency was found in all 30 single items and all 15 scales. The evaluated e-PRO version of the EORTC QLQ-C30 is reliable for patients with both adjuvant and metastatic breast cancer, showing a high correlation in almost all questions (and in many scales). Thus, we conclude that the validated paper-based PRO assessment and the e-PRO tool are equally valid. However, the reliability should also be analyzed in other prospective trials to ensure that usability is reliable in all patient groups. ClinicalTrials.gov NCT03132506; https://clinicaltrials.gov/ct2/show/NCT03132506 (Archived by WebCite at http://www.webcitation.org/6tRcgQuou).

  13. Optimum allocation of redundancy among subsystems connected in series. Ph.D. Thesis - Case Western Reserve Univ., Sep. 1970

    NASA Technical Reports Server (NTRS)

    Bien, D. D.

    1973-01-01

    This analysis considers the optimum allocation of redundancy in a system of serially connected subsystems in which each subsystem is of the k-out-of-n type. Redundancy is optimally allocated when: (1) reliability is maximized for given costs; or (2) costs are minimized for given reliability. Several techniques are presented for achieving optimum allocation and their relative merits are discussed. Approximate solutions in closed form were attainable only for the special case of series-parallel systems and the efficacy of these approximations is discussed.
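    The k-out-of-n subsystems that this allocation problem works with have a closed-form reliability given by the binomial distribution. A minimal sketch, with made-up component reliabilities and structure:

```python
from math import comb

def k_out_of_n(p, k, n):
    """Reliability of a k-out-of-n subsystem: at least k of its n
    independent components (each with reliability p) must work."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def series_system(subsystems):
    """Series connection of subsystems: every subsystem must work."""
    r = 1.0
    for p, k, n in subsystems:
        r *= k_out_of_n(p, k, n)
    return r

# Three serially connected 2-out-of-3 subsystems, component reliability 0.9.
print(round(series_system([(0.9, 2, 3)] * 3), 4))  # -> 0.9183
```

    Redundancy allocation then amounts to choosing the n (and hence cost) of each subsystem to maximize this product under a budget, or minimize cost under a reliability floor.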

  14. Reliability of a Parallel Pipe Network

    NASA Technical Reports Server (NTRS)

    Herrera, Edgar; Chamis, Christopher (Technical Monitor)

    2001-01-01

    The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
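    A minimal Monte Carlo sketch of the kind of probability statement made above. The head-loss relation h = R·q² and all input distributions below are stand-in assumptions, not the models used in the report:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Stand-in uncertain inputs (distributions are illustrative assumptions):
head = rng.normal(50.0, 2.0, n)          # pump head delivered [m]
resistance = rng.normal(0.08, 0.01, n)   # lumped line-loss coefficient R

# Assumed head-loss relation h = R * q**2, solved for the line flow rate q.
q = np.sqrt(np.clip(head, 0.0, None) / resistance)

# Probability that flow falls below its specified minimum (here 22 units).
p_fail = np.mean(q < 22.0)
print(round(p_fail, 3))
```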

  15. The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-06-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and have sensitivity to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law and sequential, in which elements are fired independently. In the parallel focusing a high intensity ultrasonic beam is formed in the specimen at the focal point. However, in sequential focusing only low intensity signals from individual elements enter the sample and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images and use this to characterise the nonlinearity of small closed fatigue cracks. In particular we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.

  16. Reliability models applicable to space telescope solar array assembly system

    NASA Technical Reports Server (NTRS)

    Patil, S. A.

    1986-01-01

    A complex system may consist of a number of subsystems with several components in series, parallel, or combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of the reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series combination systems. The models are developed by assuming the failure rates of the components as functions of time and as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
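    A sketch of the k-failures-out-of-n model family described above, assuming constant (non-aging) failure rates and made-up numbers loosely patterned on the 20-SPA series structure; none of the parameters are taken from the study:

```python
from math import comb, exp

def component_r(lam, t):
    """Component reliability under a constant failure rate (no aging)."""
    return exp(-lam * t)

def subsystem_r(lam, t, n, max_failures):
    """Subsystem survives if at most max_failures of its n components fail,
    i.e. an (n - max_failures)-out-of-n structure."""
    p = component_r(lam, t)
    k = n - max_failures
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def system_r(lam, t, n_sub, n, max_failures):
    """Series of identical subsystems: every subsystem must survive."""
    return subsystem_r(lam, t, n, max_failures) ** n_sub

# Illustrative only: 20 panel assemblies in series, each with 10 strings
# tolerating 2 string failures, string failure rate 0.02 per year.
for years in (1, 3, 5):
    print(years, round(system_r(0.02, years, 20, 10, 2), 4))
```

    Time-varying hazard rates (aging) would replace component_r with the appropriate survival function; the combinatorial shell stays the same.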

  17. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    PubMed

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.
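    For the simplest persons-by-trials design, the variance-component estimation underlying G theory fits in a few lines. This is a hand-rolled sketch on simulated data with assumed variances, not the ERA Toolbox's Matlab implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated ERP scores: 40 participants x 30 trials (persons x trials design).
n_p, n_t = 40, 30
person = rng.normal(0.0, 1.0, size=(n_p, 1))   # universe scores
trial = rng.normal(0.0, 0.3, size=(1, n_t))    # trial (occasion) effects
noise = rng.normal(0.0, 1.0, size=(n_p, n_t))  # person-by-trial + error
x = person + trial + noise

# Mean squares from the two-way ANOVA decomposition of the p x t table.
grand = x.mean()
row = x.mean(axis=1, keepdims=True)
col = x.mean(axis=0, keepdims=True)
ms_p = n_t * ((row - grand) ** 2).sum() / (n_p - 1)
ms_res = ((x - row - col + grand) ** 2).sum() / ((n_p - 1) * (n_t - 1))

var_p = (ms_p - ms_res) / n_t   # person (universe-score) variance component
var_res = ms_res                # person-by-trial interaction + error

# Relative G coefficient for the 30-trial average (trial main effect
# does not enter relative comparisons).
g = var_p / (var_p + var_res / n_t)
print(round(g, 3))
```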

  18. Assessment of an Illness-specific Dimension of Self-esteem in Youths with Type 1 Diabetes

    PubMed Central

    Iannotti, Ronald J.; Nansel, Tonja R.; Haynie, Denise L.; Sobel, Douglas O.; Simons-Morton, Bruce

    2009-01-01

    Objective The initial validation of a brief assessment of a diabetes-specific self-esteem dimension in adolescents with type 1 diabetes. Methods Youths with type 1 diabetes (n = 87) aged 10–16 years were administered the multidimensional Self-Esteem Questionnaire (SEQ) and a newly designed assessment of diabetes-specific self-esteem (DSSE). Their parents completed parallel forms. Adherence to the diabetes regimen and glycemic control were also assessed. Results In factor analysis, DSSE items formed a distinct dimension of self-esteem in addition to the SEQ dimensions. This factor uniquely contributed to differences in youths’ global self-esteem. Significant associations with adherence and glycemic control suggested its concurrent validity. Agreement between youth- and parent-report DSSE forms supported inter-rater reliability. Conclusions The findings provide preliminary support for recognizing the importance of a DSSE dimension in adolescents’ adjustment to diabetes, and for the reliability and validity of the proposed assessment strategy. PMID:18664512

  19. Turkish adaptation of the pregnancy-related anxiety questionnaire-revised 2: Validity and reliability study in multiparous and primiparous pregnancy.

    PubMed

    Aksoy Derya, Yeşim; Timur Taşhan, Sermin; Duman, Mesude; Durgun Ozan, Yeter

    2018-07-01

    The purpose of this study was to create a Turkish version of the Pregnancy-Related Anxiety Questionnaire-Revised 2 (PRAQ-R2), which was revised for application to multiparous and primiparous pregnancy, and to explore its psychometric characteristics in multiparous and primiparous pregnancy. This study was methodologically designed to assess the reliability and validity of the PRAQ-R2. The study was carried out in the obstetrics clinic of a training and research hospital in Malatya. A total of 616 healthy pregnant women (399 multiparous and 217 primiparous) constituted the sample of the study. The cultural adaptation process of the questionnaire was conducted in three phases: language validity, content validity, and pilot application. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were used to test the construct validity of the questionnaire. The reliability of the PRAQ-R2 was evaluated with Cronbach's alpha internal consistency coefficient, item-total correlation, test-retest analysis, and parallel forms reliability. The EFA revealed that the PRAQ-R2 consists of 10 items for the multiparous group and 11 for the primiparous group after adding the item "I am anxious about the delivery because I have never experienced one before." The CFA for both groups supported the three-factor questionnaire yielded by the EFA. Good fit index values were obtained in both groups. Cronbach's alpha internal consistency coefficient ranged from 0.81 to 0.93 for the multiparous group and 0.87 to 0.94 for the primiparous group for the complete PRAQ-R2 and each of its subdimensions. In addition, the item-total correlation, test-retest analysis, and parallel forms reliability of the questionnaire were highly correlated. The PRAQ-R2 is a valid and reliable instrument that can be used to evaluate the level of anxiety in Turkish pregnant women irrespective of parity. 
The use of the PRAQ-R2 in prenatal healthcare services will contribute to the early diagnosis, treatment, and management of pregnancy-related anxiety. Copyright © 2018 Elsevier Ltd. All rights reserved.
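    Cronbach's alpha, used above for internal consistency, can be computed directly from a respondents-by-items score matrix; the 4×3 response matrix here is a made-up example:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents x items score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses: 4 respondents, 3 items on a Likert-type scale.
data = [[2, 3, 3],
        [4, 4, 5],
        [1, 2, 2],
        [3, 3, 4]]
print(round(cronbach_alpha(data), 3))
```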

  20. Using multivariate generalizability theory to assess the effect of content stratification on the reliability of a performance assessment.

    PubMed

    Keller, Lisa A; Clauser, Brian E; Swanson, David B

    2010-12-01

    In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are randomly sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring this stratification in the reliability analysis results in an underestimate of "parallel forms" reliability, and an overestimate of the person-by-task component. This research explores the effect of representing and misrepresenting the stratification appropriately in estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that the proper specification of the analytic design is essential in yielding the proper information both about the generalizability of the assessment and the standard error of measurement. Further, illustrative D studies present the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed.
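    An illustrative D study of the kind mentioned above: once variance components are in hand, reliability can be projected for different numbers of randomly sampled tasks. The component values below are invented for illustration, not taken from the paper:

```python
# Hypothetical variance components from a persons x tasks G study.
var_person = 0.40   # universe-score variance
var_ptask = 0.55    # person-by-task interaction + residual

def g_coefficient(n_tasks):
    """D-study projection: generalizability of an n_tasks-task mean score."""
    return var_person / (var_person + var_ptask / n_tasks)

for n in (1, 5, 10, 20):
    print(n, round(g_coefficient(n), 3))
```

    Misattributing fixed-strata variance to the person-by-task term inflates var_ptask in this formula, which is exactly how ignoring stratification understates "parallel forms" reliability.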

  1. Evaluation of fault-tolerant parallel-processor architectures over long space missions

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1989-01-01

    The impact of a five-year space mission environment on fault-tolerant parallel processor architectures is examined. The target application is a Strategic Defense Initiative (SDI) satellite requiring 256 parallel processors to provide the computation throughput. The reliability requirements are that the system still be operational after five years with 0.99 probability and that the probability of system failure during one-half hour of full operation be less than 10^-7. The fault tolerance features an architecture must possess to meet these reliability requirements are presented, many potential architectures are briefly evaluated, and one candidate architecture, the Charles Stark Draper Laboratory's Fault-Tolerant Parallel Processor (FTPP) is evaluated in detail. A methodology for designing a preliminary system configuration to meet the reliability and performance requirements of the mission is then presented and demonstrated by designing an FTPP configuration.

  2. Parallelized reliability estimation of reconfigurable computer networks

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Das, Subhendu; Palumbo, Dan

    1990-01-01

    A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.
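    ASSURE's generate-and-bound idea can be sketched serially: explore the state space, prune paths whose probability falls below a threshold, and let the pruned mass separate the lower and upper bounds on failure probability. The toy model below is entirely hypothetical; ASSURE's actual input is a grammar describing a semi-Markov state space, and it runs the exploration in parallel.

```python
def transitions(state):
    """Successor states and probabilities of a toy mission model
    (hypothetical numbers; not ASSURE's grammar-driven model)."""
    if state[0] != "up":
        return []                               # "done"/"failed" are absorbing
    spares = state[1]
    p_done, cov = 0.9, 0.999                    # assumed step/coverage probs
    succ = [(("done",), p_done)]
    if spares > 0:
        succ += [(("up", spares - 1), (1 - p_done) * cov),
                 (("failed",), (1 - p_done) * (1 - cov))]
    else:
        succ += [(("failed",), 1 - p_done)]
    return succ

def failure_bounds(initial, prune=1e-12):
    """Generate and analyze the state space in one pass. Pruned probability
    mass is the gap between the lower and upper failure-probability bounds."""
    lower = unexplored = 0.0
    stack = [(initial, 1.0)]
    while stack:
        state, p = stack.pop()
        if p < prune:
            unexplored += p                     # give up: upper bound only
            continue
        succ = transitions(state)
        if not succ:                            # absorbing state reached
            if state[0] == "failed":
                lower += p
            continue
        stack += [(nxt, p * q) for nxt, q in succ]
    return lower, lower + unexplored

lo, hi = failure_bounds(("up", 2))
print(f"P(failure) in [{lo:.3e}, {hi:.3e}]")
```

In ASSURE the analogous frontier is partitioned across processors, which is where the dynamic load balancing mentioned above comes in.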

  3. Reliability and mass analysis of dynamic power conversion systems with parallel or standby redundancy

    NASA Technical Reports Server (NTRS)

    Juhasz, A. J.; Bloomfield, H. S.

    1985-01-01

    A combinatorial reliability approach is used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis is also performed, specifically for a 100 kWe nuclear Brayton power conversion system with parallel redundancy. Although this study is done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.
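    The combinatorial approach for active parallel redundancy can be sketched as a k-out-of-n calculation (the converter count and unit reliability below are invented for illustration, not taken from the study):

```python
from math import comb

def k_of_n_reliability(k, n, r):
    """Probability that at least k of n identical, independent units
    (each with reliability r) are still working: the binomial tail."""
    return sum(comb(n, j) * r**j * (1 - r)**(n - j) for j in range(k, n + 1))

# Hypothetical example: four converters sized so that any three can carry
# the full 100-kWe load; r_unit is an assumed single-converter reliability.
r_unit = 0.95
print(f"3-of-4 system reliability: {k_of_n_reliability(3, 4, r_unit):.6f}")
```

Note this binomial form applies to active parallel redundancy only; standby redundancy, in which unused spares do not accumulate operating time, generally requires a different (e.g. Poisson-based) model.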

  4. Sensitive and label-free detection of miRNA-145 by triplex formation.

    PubMed

    Aviñó, Anna; Huertas, César S; Lechuga, Laura M; Eritja, Ramon

    2016-01-01

    The development of new strategies for detecting microRNAs (miRNAs) has become a crucial step in the diagnostic field. miRNA profiles depend greatly on the sample and the analytical platform employed, sometimes leading to contradictory results. In this work, we study the use of modified parallel tail-clamps to detect a miRNA sequence involved in tumor suppression by triplex formation. Thermal denaturing curves and circular dichroism (CD) measurements were performed to confirm that parallel clamps carrying 8-aminoguanine form the most stable triplex structures with their target miRNA. The modified tail-clamps were tested as bioreceptors in a surface plasmon resonance (SPR) biosensor for the detection of miRNA-145. The detection limit was improved 2.4-fold, demonstrating that a stable triplex structure is formed between the target miRNA and the 8-aminoguanine tail-clamp bioreceptor. This new approach is an essential step toward the label-free and reliable detection of miRNA signatures for diagnostic purposes.

  5. Parallel short forms for the assessment of activities of daily living in cardiovascular rehabilitation patients (PADL-cardio): development and validation.

    PubMed

    Schmucker, Andreas; Abberger, Birgit; Boecker, Maren; Baumeister, Harald

    2017-11-26

    To develop and validate parallel short forms for the assessment of activities of daily living in cardiac rehabilitation patients (PADL-cardio I & II). PADL-cardio I & II were developed based on a sample of 106 patients [mean age = 57.6; standard deviation (SD) = 11.1; 72.6% males] using Rasch analysis and validated with a sample of 81 patients (mean age = 59.1; SD = 11.1; 88.9% males). All patients answered PADL-cardio and the Short Form 12 Health Survey. Both versions of PADL-cardio are composed of 10 items. Fit to the Rasch model was documented by a non-significant item-trait interaction score (PADL-cardio I: χ² = 31.08, df = 30, p = 0.41; PADL-cardio II: χ² = 45.6, df = 40, p = 0.25). The two versions were free of differential item functioning. Person-separation reliability was 0.72/0.78, and unidimensionality was supported. The two versions correlated at r = 0.98, and the correlation between PADL-cardio and the underlying item bank was 0.99 for both versions. Concurrent validity is indicated by correlations with the Short Form 12 Health Survey (r = -0.37 to -0.40). PADL-cardio provides a short and psychometrically sound option for the assessment of activities of daily living in cardiovascular rehabilitation patients. The two versions of PADL-cardio are equivalent; hence, they can be used to reduce practice and retest effects in repeated measurement, facilitating the longitudinal assessment of activities of daily living. Implications for Rehabilitation: New parallel test forms for the assessment of activities of daily living in cardiac rehabilitation (PADL-cardio I & PADL-cardio II) are available. PADL-cardio I & II consist of 10 items and are therefore especially timesaving. Concurrent validity is supported by correlations with the Short Form 12 Health Survey. Therapeutic success could be determined more precisely with the parallel forms, which reduce practice and retest effects.

  6. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  7. Combining points and lines in rectifying satellite images

    NASA Astrophysics Data System (ADS)

    Elaksher, Ahmed F.

    2017-09-01

    The quick advance in remote sensing technologies has established the potential to gather accurate and reliable information about the Earth's surface using high resolution satellite images. Remote sensing satellite images of less than one-meter pixel size are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between the image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation could be used to represent the mathematical relationship between the image-space and object-space coordinate systems, and it provides the required accuracy for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images with different resolutions. The point-based parallel projection transformation model and its extended form are presented and the corresponding line-based forms are developed. Results showed that the RMS values computed using the point- or line-based transformation models are equivalent and satisfy the requirement for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based transformation models are insignificant. The results showed high correlation between the differences in the ground elevation and the RMS.
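    The point-based parallel projection model is linear in its eight parameters (x = A1·X + A2·Y + A3·Z + A4, y = A5·X + A6·Y + A7·Z + A8), so it can be fitted to ground control points by ordinary least squares. The sketch below uses synthetic data (all numbers invented) rather than real satellite imagery, and omits the line-based extension:

```python
import numpy as np

def fit_parallel_projection(ground, image):
    """Least-squares fit of the 8-parameter parallel (affine) projection.
    `ground` is an (n, 3) array of X, Y, Z; `image` is (n, 2) of x, y.
    Returns a (4, 2) coefficient matrix (columns: x-params, y-params)."""
    A = np.hstack([ground, np.ones((len(ground), 1))])   # design matrix
    coef, *_ = np.linalg.lstsq(A, image, rcond=None)
    return coef

def apply_projection(coef, ground):
    A = np.hstack([ground, np.ones((len(ground), 1))])
    return A @ coef

# Synthetic check: recover a known transformation from noisy control points.
rng = np.random.default_rng(1)
truth = rng.normal(size=(4, 2))                          # invented parameters
gcp = rng.uniform(0, 1000, (12, 3))                      # 12 control points
img = np.hstack([gcp, np.ones((12, 1))]) @ truth + rng.normal(0, 0.01, (12, 2))

coef = fit_parallel_projection(gcp, img)
res = apply_projection(coef, gcp) - img
rms = np.sqrt((res ** 2).mean())
print(f"fit RMS: {rms:.4f} pixels")
```

With more control points than the eight parameters require, the residual RMS estimates the rectification accuracy the article reports.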

  8. High-rate serial interconnections for embedded and distributed systems with power and resource constraints

    NASA Astrophysics Data System (ADS)

    Sheynin, Yuriy; Shutenko, Felix; Suvorova, Elena; Yablokov, Evgenej

    2008-04-01

    High rate interconnections are important subsystems in modern data processing and control systems of many classes. They are especially important in prospective embedded and on-board systems, which tend to be multicomponent systems with parallel or distributed architecture [1]. Modular architecture systems of previous generations were based on parallel busses that were widely used and standardised: VME, PCI, CompactPCI, etc. Bus evolution consisted of improving bus protocol efficiency (burst transactions, split transactions, etc.) and increasing operation frequencies. However, due to the multi-drop nature of busses and multi-wire skew problems, further speedup of parallel bussing became more and more limited. For embedded and on-board systems, an additional reason for this trend lay in the weight, size and power constraints on an interconnection and its components. Parallel interfaces have become technologically more challenging as their respective clock frequencies have increased to keep pace with the bandwidth requirements of their attached storage devices. Since each interface uses a data clock to gate and validate the parallel data (which is normally 8 bits or 16 bits wide), the clock frequency need only be equivalent to the byte rate or word rate being transmitted. In other words, for a given transmission frequency, the wider the data bus, the slower the clock. As the clock frequency increases, more high frequency energy is available in each of the data lines, and a portion of this energy is dissipated in radiation. Each data line not only transmits this energy but also receives some from its neighbours. This form of mutual interference is commonly called "cross-talk," and the signal distortion it produces can become another major contributor to loss of data integrity unless compensated by appropriate cable designs.
Other transmission problems such as frequency-dependent attenuation and signal reflections, while also applicable to serial interfaces, are more troublesome in parallel interfaces due to the number of additional cable conductors involved. To compensate for these drawbacks, higher quality cables, shorter cable runs and fewer devices on the bus have been the norm. Finally, the physical bulk of the parallel cables makes them more difficult to route inside an enclosure, hinders cooling airflow and is incompatible with the trend toward smaller form-factor devices. Parallel busses have served systems well over the past 20 years, but the accumulated problems dictate the need for change, and the technology is available to spur the transition. The general trend in high-rate interconnections has turned from parallel bussing to scalable interconnections with a network architecture and high-rate point-to-point links. Analysis showed that data links with serial information transfer could achieve higher throughput and efficiency, and this was confirmed in various research and practical designs. Serial interfaces offer an improvement over older parallel interfaces: better performance, better scalability, and also better reliability, as parallel interfaces are at the limits of speed for reliable data transfer. The trend was implemented in the evolution of major standards families: e.g. from PCI/PCI-X parallel bussing to the PCI Express interconnection architecture with serial lines, from the CompactPCI parallel bus to the ATCA (Advanced Telecommunications Computing Architecture) specification with serial links and network topologies of an interconnection, etc. In this article we consider a general set of characteristics and features of serial interconnections and give a brief overview of serial interconnection specifications. In more detail, we present the SpaceWire interconnection technology.
Having been developed for space on-board systems applications, SpaceWire has important features and characteristics that make it a prospective interconnection for a wide range of embedded systems.

  9. Breaking Barriers to Low-Cost Modular Inverter Production & Use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogdan Borowy; Leo Casey; Jerry Foshage

    2005-05-31

    The goal of this cost-share contract is to advance key technologies to reduce size, weight and cost while enhancing performance and reliability of a Modular Inverter Product for Distributed Energy Resources (DER). Efforts address technology development to meet technical needs of the DER market: protection, isolation, reliability, and quality. Program activities build on SatCon Technology Corporation inverter experience (e.g., AIPM, Starsine, PowerGate) for Photovoltaic, Fuel Cell, and Energy Storage applications. Efforts focused on four technical areas: capacitors, cooling, voltage sensing, and control of parallel inverters. Capacitor efforts developed a hybrid capacitor approach for conditioning SatCon's AIPM unit supply voltages by incorporating several types and sizes to store energy and filter at high, medium and low frequencies while minimizing parasitics (ESR and ESL). Cooling efforts converted the liquid-cooled AIPM module to an air-cooled unit using augmented-fin, impingement-flow cooling. Voltage sensing efforts successfully modified the existing AIPM sensor board to allow several application-dependent configurations and enable voltage sensor galvanic isolation. Parallel inverter control efforts realized a reliable technique to control individual inverters, connected in a parallel configuration, without a communication link. Individual inverter currents, AC and DC, were balanced in the paralleled modules by introducing a delay to the individual PWM gate pulses. The load current sharing is robust and independent of load types (i.e., linear and nonlinear, resistive and/or inductive). This simple yet powerful method for paralleling individual devices dramatically improves reliability and fault tolerance of parallel inverter power systems. A patent application has been made based on this control technology.

  10. The relative noise levels of parallel axis gear sets with various contact ratios and gear tooth forms

    NASA Technical Reports Server (NTRS)

    Drago, Raymond J.; Lenski, Joseph W., Jr.; Spencer, Robert H.; Valco, Mark; Oswald, Fred B.

    1993-01-01

    The real noise reduction benefit that may be obtained through the use of one gear tooth form as compared to another is an important design parameter for any geared system, especially for helicopters, in which both weight and reliability are very important factors. This paper describes the design and testing of nine sets of gears which are as identical as possible except for their basic tooth geometry. Noise measurements were made at various combinations of load and speed for each gear set so that direct comparisons could be made. The resultant data were analyzed so that valid conclusions could be drawn and interpreted for design use.

  11. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  12. The Social Attribution Task - Multiple Choice (SAT-MC): Psychometric comparison with social cognitive measures for schizophrenia research.

    PubMed

    Johannesen, Jason K; Fiszdon, Joanna M; Weinstein, Andrea; Ciosek, David; Bell, Morris D

    2018-04-01

    The Social Attribution Task-Multiple Choice (SAT-MC) tests the ability to extract social themes from viewed object motion. This form of animacy perception is thought to aid the development of social inference, but appears impaired in schizophrenia. The current study was undertaken to examine the psychometric equivalence of two forms of the SAT-MC and to compare their performance against social cognitive tests recommended for schizophrenia research. Thirty-two schizophrenia (SZ) and 30 substance use disorder (SUD) participants completed both SAT-MC forms, the Bell-Lysaker Emotion Recognition Task (BLERT), the Hinting Task, The Awareness of Social Inference Test (TASIT), the Ambiguous Intentions and Hostility Questionnaire (AIHQ), and questionnaire measures of interpersonal function. Test sensitivity, construct and external validity, test-retest reliability, and internal consistency were evaluated. SZ scored significantly lower than SUD on both SAT-MC forms, each classifying ~60% of SZ as impaired, compared with ~30% of SUD. The SAT-MC forms demonstrated good test-retest and parallel-form reliability, minimal practice effect, high internal consistency, and similar patterns of correlation with social cognitive and external validity measures. The SAT-MC compared favorably to recommended social cognitive tests across psychometric features and, with the exception of the TASIT, was the most sensitive to impairment in schizophrenia when compared to a chronic substance use sample. Published by Elsevier B.V.

  13. Comparison of Reliability Measures under Factor Analysis and Item Response Theory

    ERIC Educational Resources Information Center

    Cheng, Ying; Yuan, Ke-Hai; Liu, Cheng

    2012-01-01

    Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability rho and an unweighted sum score-based omega, among many others. With increasing popularity of item response theory, a parallel reliability measure pi…

  14. An empirical look at the Defense Mechanism Test (DMT): reliability and construct validity.

    PubMed

    Ekehammar, Bo; Zuber, Irena; Konstenius, Marja-Liisa

    2005-07-01

    Although the Defense Mechanism Test (DMT) has been in use for almost half a century, there are still quite contradictory views about whether it is a reliable instrument, and if so, what it really measures. Thus, based on data from 39 female students, we first examined DMT inter-coder reliability by analyzing the agreement among trained judges in their coding of the same DMT protocols. Second, we constructed a "parallel" photographic picture that retained all structural characteristics of the original and analyzed DMT parallel-test reliability. Third, we examined the construct validity of the DMT by (a) employing three self-report defense-mechanism inventories and analyzing the intercorrelations between DMT defense scores and corresponding defenses in these instruments, (b) studying the relationships between DMT responses and scores on trait and state anxiety, and (c) relating DMT defense scores to measures of self-esteem. The main results showed that the DMT can be coded with high reliability by trained coders, that the parallel-test reliability is unsatisfactory compared to traditional psychometric standards, that there is a certain generalizability in the number of perceptual distortions that people display from one picture to another, and that the construct validation provided meager empirical evidence for the conclusion that the DMT measures what it purports to measure, that is, psychological defense mechanisms.

  15. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis were twofold: we showed that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and we developed methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  16. Factorial validity and reliability of the Malaysian simplified Chinese version of Multidimensional Scale of Perceived Social Support (MSPSS-SCV) among a group of university students.

    PubMed

    Guan, Ng Chong; Seng, Loh Huai; Hway Ann, Anne Yee; Hui, Koh Ong

    2015-03-01

    This study was aimed at validating the Malaysian simplified Chinese version of the Multidimensional Scale of Perceived Social Support (MSPSS-SCV) among a group of medical and dental students at University Malaya. Two hundred and two students who took part in this study were given the MSPSS-SCV, the Medical Outcome Study social support survey, the Malay version of the Beck Depression Inventory, the Malay version of the General Health Questionnaire, and the English version of the MSPSS. After 1 week, these students were again required to complete the MSPSS-SCV, but with the item sequences shuffled. The scale displayed excellent internal consistency (Cronbach's α = .924), high test-retest reliability (.71), parallel-form reliability (.92; Spearman's ρ, P < .01), and validity. In conclusion, the MSPSS-SCV demonstrated sound psychometric properties in measuring social support among a group of medical and dental students. It could therefore be used as a simple screening tool among young educated Malaysian adolescents. © 2013 APJPH.

  17. Fuel cells provide a revenue-generating solution to power quality problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, J.M. Jr.

    Electric power quality and reliability are becoming increasingly important as computers and microprocessors assume a larger role in commercial, health care and industrial buildings and processes. At the same time, constraints on transmission and distribution of power from central stations are making local areas vulnerable to low voltage, load addition limitations, power quality and power reliability problems. Many customers currently utilize some form of premium power in the form of standby generators and/or UPS systems. These include customers where continuous power is required because of health and safety or security reasons (hospitals, nursing homes, places of public assembly, air traffic control, military installations, telecommunications, etc.). They also include customers with industrial or commercial processes which cannot tolerate an interruption of power because of product loss or equipment damage. The paper discusses the use of the PC25 fuel cell power plant for backup and parallel power supplies for critical industrial applications. Several PC25 installations are described: the use of propane in a PC25; the use by rural cooperatives; and a demonstration of PC25 technology using landfill gas.

  18. Monte Carlo simulation methodology for the reliability of aircraft structures under damage tolerance considerations

    NASA Astrophysics Data System (ADS)

    Rambalakos, Andreas

    Current federal aviation regulations in the United States and around the world mandate the need for aircraft structures to meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity; this is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy, comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed-form expressions for the system capacity cumulative distribution function (CDF) are developed by expanding the current expression for the capacity CDF of a parallel system comprised of three elements to parallel systems comprised of up to six elements. These newly developed expressions are used to check the accuracy of the implementation of a Monte Carlo simulation algorithm that determines the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent sequential fastener failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach.
The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation) and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component will not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: namely a parameter associated with the time to crack initiation, the applied nominal stress fluctuation and the minimum acceptable reliability level.
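    The equal-load-sharing parallel system described above can be sketched as a short Monte Carlo routine. This is an illustrative fiber-bundle-style model with assumed normal strengths and invented numbers, not the thesis's fastener model: when an element breaks, its share of the load is redistributed equally to the survivors.

```python
import numpy as np

def failure_prob(n, load, strength_mean, strength_sd,
                 trials=100_000, seed=3):
    """Monte Carlo failure probability of an equal-load-sharing parallel
    system with independent normal element strengths (assumed model)."""
    rng = np.random.default_rng(seed)
    strengths = rng.normal(strength_mean, strength_sd, (trials, n))
    strengths.sort(axis=1)                 # column k = (k+1)-th weakest
    # With k elements failed, each survivor carries load/(n-k). The system
    # survives iff for some k the (k+1)-th weakest element can carry that
    # share, i.e. system capacity = max_k (n-k) * x_(k+1) meets the load.
    k = np.arange(n)
    share = load / (n - k)                 # per-survivor load after k failures
    survives = (strengths >= share).any(axis=1)
    return 1.0 - survives.mean()

# Invented example: six elements, each nominally carrying 0.9 of its mean
# strength before any failures occur.
print(f"P(failure): {failure_prob(6, 5.4, 1.0, 0.15):.4f}")
```

The closed-form capacity CDF expressions mentioned in the abstract (for three to six elements) provide exact checks for exactly this kind of simulation.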

  19. Reliability Analysis and Modeling of ZigBee Networks

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low-rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adopted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important, because these services will stop if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree and mesh. The paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of each layer. Channel resource usage, device role, network topology and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In star or tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve the reliability problem. For mesh networks, however, a division technique is applied to overcome their higher complexity: a mesh network is divided into several non-reducible series systems and edge-parallel systems. Hence, the reliability of mesh networks is easily solved using series-parallel systems through our proposed scheme. The numerical results demonstrate that reliability increases for mesh networks when the number of edges in the parallel systems increases, while reliability drops quickly when the number of edges and the number of nodes increase for all three networks. Greater resource usage is another factor that decreases reliability, as do network complexity and complex object relationships.
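    The series and edge-parallel decomposition reduces to two textbook RBD formulas, sketched below with invented reliability numbers (not the paper's ZigBee measurements):

```python
from functools import reduce

def series(*rs):
    """Series RBD: all blocks must work, R = product of r_i."""
    return reduce(lambda acc, r: acc * r, rs, 1.0)

def parallel(*rs):
    """Parallel RBD: at least one block must work, R = 1 - prod(1 - r_i)."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), rs, 1.0)

# Hypothetical mesh fragment: two node-to-node routes in parallel, each a
# series of two hops (r = 0.95 each), feeding one coordinator (r = 0.99).
route = series(0.95, 0.95)
net = series(parallel(route, route), 0.99)
print(f"fragment reliability: {net:.4f}")
```

Adding a third parallel route raises the fragment's reliability, while lengthening each route's series of hops lowers it, which is the qualitative behavior the numerical results above describe.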

  20. How to make your own response boxes: A step-by-step guide for the construction of reliable and inexpensive parallel-port response pads from computer mice.

    PubMed

    Voss, Andreas; Leonhart, Rainer; Stahl, Christoph

    2007-11-01

    Psychological research is based in large parts on response latencies, which are often registered by keypresses on a standard computer keyboard. Recording response latencies with a standard keyboard is problematic because keypresses are buffered within the keyboard hardware before they are signaled to the computer, adding error variance to the recorded latencies. This can be circumvented by using external response pads connected to the computer's parallel port. In this article, we describe how to build inexpensive, reliable, and easy-to-use response pads with six keys from two standard computer mice that can be connected to the PC's parallel port. We also address the problem of recording data from the parallel port with different software packages under Microsoft's Windows XP.

  1. Tools and Techniques for Adding Fault Tolerance to Distributed and Parallel Programs

    DTIC Science & Technology

    1991-12-07

    The scale of parallel computing systems is rapidly approaching dimensions where fault tolerance can no longer be ignored. No matter how reliable the individual components may be, … those employed in the Tandem [7] and Stratus [35] systems, is clearly impractical.

  2. A parallel orbital-updating based plane-wave basis method for electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui

    2017-11-01

Motivated by the recently proposed parallel orbital-updating approach in the real-space method [1], we propose a parallel orbital-updating based plane-wave basis method for solving the eigenvalue problems arising in electronic structure calculations. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.

  3. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    NASA Technical Reports Server (NTRS)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

With the introduction of new parallel architectures like the Cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever.
In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less costly than development of comparable parallel code. Moreover, SequenceL not only automatically parallelizes the code, but since it is based on CSP-NT, it is provably race free, thus eliminating the largest quality challenge the parallelized software developer faces.
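The Normalize-Transpose idea can be illustrated with a toy dispatcher: a function written once for scalars is applied elementwise when handed sequences, and each elementwise application is independent, which is what makes the parallelization automatic. This is only a hypothetical Python sketch of the concept, not SequenceL's actual semantics or implementation:

```python
def nt_apply(f, *args):
    """Toy sketch of Normalize-Transpose (NT) dispatch: if any argument
    is a sequence, scalars are broadcast to match (normalize) and f is
    mapped over corresponding elements (transpose). Each elementwise
    call is independent, so a real implementation could run them in
    parallel without annotations."""
    if not any(isinstance(a, list) for a in args):
        return f(*args)                                    # all scalars
    n = max(len(a) for a in args if isinstance(a, list))
    norm = [a if isinstance(a, list) else [a] * n for a in args]
    return [nt_apply(f, *row) for row in zip(*norm)]

add = lambda x, y: x + y                # written once, for scalars only
print(nt_apply(add, [1, 2, 3], 10))     # [11, 12, 13]
print(nt_apply(add, [1, 2], [10, 20]))  # [11, 22]
```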

  4. Improved CDMA Performance Using Parallel Interference Cancellation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Divsalar, Dariush

    1995-01-01

    This report considers a general parallel interference cancellation scheme that significantly reduces the degradation effect of user interference but with a lesser implementation complexity than the maximum-likelihood technique. The scheme operates on the fact that parallel processing simultaneously removes from each user the interference produced by the remaining users accessing the channel in an amount proportional to their reliability. The parallel processing can be done in multiple stages. The proposed scheme uses tentative decision devices with different optimum thresholds at the multiple stages to produce the most reliably received data for generation and cancellation of user interference. The 1-stage interference cancellation is analyzed for three types of tentative decision devices, namely, hard, null zone, and soft decision, and two types of user power distribution, namely, equal and unequal powers. Simulation results are given for a multitude of different situations, in particular, those cases for which the analysis is too complex.
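The one-stage cancellation described above can be sketched for a small synchronous CDMA system. A hedged illustration with hard tentative decisions and an assumed two-user correlation matrix; the values are illustrative, not taken from the report:

```python
def hard_decision(x):
    return 1 if x >= 0 else -1

def pic_stage(y, R, b_hat):
    """One stage of hard-decision parallel interference cancellation:
    each user simultaneously subtracts the interference reconstructed
    from the other users' tentative decisions, then re-decides."""
    K = len(y)
    cleaned = [
        y[i] - sum(R[i][j] * b_hat[j] for j in range(K) if j != i)
        for i in range(K)
    ]
    return [hard_decision(c) for c in cleaned]

# Two equal-power users with cross-correlation 0.7 (illustrative values)
R = [[1.0, 0.7], [0.7, 1.0]]
b = [1, -1]                                    # transmitted bits
y = [sum(R[i][j] * b[j] for j in range(2))     # noiseless matched-filter
     for i in range(2)]                        # outputs y = R b
tentative = [hard_decision(v) for v in y]      # conventional detector
print(pic_stage(y, R, tentative))              # [1, -1]
```

In this noiseless example the conventional detector's tentative decisions are already correct, and the cancellation stage restores the full unit amplitude for each user before the final decision; with noise, the gain comes from cancelling interference in proportion to decision reliability.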

  5. To the question about the states of workability for automatic control systems with complicated structure

    NASA Astrophysics Data System (ADS)

    Kuznetsov, P. A.; Kovalev, I. V.; Losev, V. V.; Kalinin, A. O.; Murygin, A. V.

    2016-04-01

The article discusses the reliability of automated control systems and analyzes approaches to classifying their health states. Such classification can follow the traditional binary approach, built on the notion of "serviceability", or other ways of estimating the system state. The article presents one such option, which provides a selective, component-by-component evaluation of the reliability of the entire system. Various automatic control systems and their elements are described from the point of view of health and risk, together with a mathematical method for determining an object's transition from state to state; the states differ from one another in how the objective function is realized. The interplay of elements in different states is explored, along with the aggregate state of elements connected in series or in parallel. Tables of the various logic states and the principles for computing them under series and parallel connection are given. The proposed approach is illustrated through simulation by finding the probability that the system reaches a given state for parallel- and serially-connected elements with differing probabilities of moving from state to state. The materials of the article will be useful for analyzing the reliability of automated control systems and for engineering highly reliable systems. The proposed mechanism for determining the state of the system provides more detailed information about it and allows a selective approach to the reliability of the system as a whole; such detailed reliability results allow the engineer to make an informed decision when designing means of improving reliability.

  6. The Reliability of Criterion-Referenced Measures.

    ERIC Educational Resources Information Center

    Livingston, Samuel A.

    The assumptions of the classical test-theory model are used to develop a theory of reliability for criterion-referenced measures which parallels that for norm-referenced measures. It is shown that the Spearman-Brown formula holds for criterion-referenced measures and that the criterion-referenced reliability coefficient can be used to correct…

  7. Design of fuel cell powered data centers for sufficient reliability and availability

    NASA Astrophysics Data System (ADS)

    Ritchie, Alexa J.; Brouwer, Jacob

    2018-04-01

    It is challenging to design a sufficiently reliable fuel cell electrical system for use in data centers, which require 99.9999% uptime. Such a system could lower emissions and increase data center efficiency, but the reliability and availability of such a system must be analyzed and understood. Currently, extensive backup equipment is used to ensure electricity availability. The proposed design alternative uses multiple fuel cell systems each supporting a small number of servers to eliminate backup power equipment provided the fuel cell design has sufficient reliability and availability. Potential system designs are explored for the entire data center and for individual fuel cells. Reliability block diagram analysis of the fuel cell systems was accomplished to understand the reliability of the systems without repair or redundant technologies. From this analysis, it was apparent that redundant components would be necessary. A program was written in MATLAB to show that the desired system reliability could be achieved by a combination of parallel components, regardless of the number of additional components needed. Having shown that the desired reliability was achievable through some combination of components, a dynamic programming analysis was undertaken to assess the ideal allocation of parallel components.
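The core redundancy calculation can be sketched directly: with independent, identical components in parallel, overall reliability is 1 - (1 - r)^n, and one can search for the smallest n that meets a target. A minimal sketch with made-up numbers; the paper's MATLAB analysis additionally covers repair and allocation of parallel components:

```python
def parallel_reliability(r, n):
    """Reliability of n identical components in parallel (any one suffices)."""
    return 1.0 - (1.0 - r) ** n

def components_needed(r, target, tol=1e-9):
    """Smallest number of parallel components meeting a reliability
    target, comparing failure probabilities with a tiny tolerance to
    absorb floating-point error."""
    n = 1
    while (1.0 - r) ** n > (1.0 - target) * (1.0 + tol):
        n += 1
    return n

# A 99%-reliable module vs. a "six nines" (99.9999%) uptime target
print(components_needed(0.99, 0.999999))  # 3
```

The search makes the paper's point concrete: the desired reliability is reachable with a finite number of parallel units, after which the remaining question is the cost-optimal allocation.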

  8. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration given the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms increased significantly, by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325

  9. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration given the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms increased significantly, by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  10. Estimating Measures of Pass-Fail Reliability from Parallel Half-Tests.

    ERIC Educational Resources Information Center

    Woodruff, David J.; Sawyer, Richard L.

    Two methods for estimating measures of pass-fail reliability are derived, by which both theta and kappa may be estimated from a single test administration. The methods require only a single test administration and are computationally simple. Both are based on the Spearman-Brown formula for estimating stepped-up reliability. The non-distributional…
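The Spearman-Brown step-up that both methods rely on is compact enough to state directly. A sketch, assuming parallel half-tests (k = 2); the input correlation below is illustrative:

```python
def spearman_brown(r_part, k=2.0):
    """Stepped-up reliability of a test lengthened by factor k, given
    the reliability (or parallel-parts correlation) of one part; k=2
    projects full-test reliability from parallel half-tests."""
    return k * r_part / (1.0 + (k - 1.0) * r_part)

# A half-test correlation of 0.60 steps up to full-test reliability 0.75
print(spearman_brown(0.60))  # ≈ 0.75
```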

  11. NAS Requirements Checklist for Job Queuing/Scheduling Software

    NASA Technical Reports Server (NTRS)

    Jones, James Patton

    1996-01-01

    The increasing reliability of parallel systems and clusters of computers has resulted in these systems becoming more attractive for true production workloads. Today, the primary obstacle to production use of clusters of computers is the lack of a functional and robust Job Management System for parallel applications. This document provides a checklist of NAS requirements for job queuing and scheduling in order to make most efficient use of parallel systems and clusters for parallel applications. Future requirements are also identified to assist software vendors with design planning.

  12. Linear control theory for gene network modeling.

    PubMed

    Shin, Yong-Jun; Bleris, Leonidas

    2010-09-16

Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies, including cascade and parallel forms, feedback loops, and feedforward loops. We reproduce experimental results and provide a rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) representations can be used to reliably predict the properties and transient behavior of complex network topologies, and we point to specific design strategies for synthetic networks.
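The frequency-domain view can be sketched with first-order stages composed in the cascade and parallel forms the study examines: transfer functions multiply in cascade and add in parallel. The time constants below are illustrative stand-ins, not values from the paper:

```python
from functools import reduce

def first_order(tau):
    """H(s) = 1 / (tau*s + 1): a first-order stage, used here as an
    illustrative stand-in for one gene-expression step."""
    return lambda s: 1.0 / (tau * s + 1.0)

def cascade(*stages):
    """Cascade (series) form: transfer functions multiply."""
    return lambda s: reduce(lambda acc, h: acc * h(s), stages, 1.0 + 0j)

def parallel(*stages):
    """Parallel form: transfer functions add."""
    return lambda s: sum(h(s) for h in stages)

# Evaluate the frequency response at s = j*w (w in rad/s)
H = cascade(first_order(1.0), first_order(2.0))
print(abs(H(0)))         # unity gain at DC
print(abs(H(1j * 1.0)))  # attenuated at w = 1: |H| = 1/sqrt(10) ≈ 0.316
```

Evaluating |H(jw)| over a range of w gives the Bode magnitude of the network; the same stage objects can be recombined to compare cascade against parallel topologies.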

  13. Comparing reliabilities of strip and conventional patch testing.

    PubMed

    Dickel, Heinrich; Geier, Johannes; Kreft, Burkhard; Pfützner, Wolfgang; Kuss, Oliver

    2017-06-01

The standardized protocol for performing the strip patch test has proven to be valid, but evidence on its reliability is still missing. To estimate the parallel-test reliability of the strip patch test as compared with the conventional patch test, 132 subjects were enrolled in this multicentre, prospective, randomized, investigator-blinded reliability study. Simultaneous duplicate strip and conventional patch tests were performed with the Finn Chambers® on Scanpor® tape test system and the patch test preparations nickel sulfate 5% pet., potassium dichromate 0.5% pet., and lanolin alcohol 30% pet. Reliability was estimated with Cohen's kappa coefficient. Parallel-test reliability values of the three standard patch test preparations turned out to be acceptable, with slight advantages for the strip patch test. The differences in reliability were 9% (95% CI: -8% to 26%) for nickel sulfate and 23% (95% CI: -16% to 63%) for potassium dichromate, both favouring the strip patch test. The standardized strip patch test method for the detection of allergic contact sensitization in patients with suspected allergic contact dermatitis is reliable. Its application in routine clinical practice can be recommended, especially if the conventional patch test result is presumably false negative. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
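Cohen's kappa, the reliability measure used in the study, corrects observed agreement for the agreement expected by chance. A sketch with a hypothetical 2x2 table of duplicate patch-test readings; the counts below are made up, not the study's data:

```python
def cohens_kappa(table):
    """Cohen's kappa from a square agreement table; table[i][j] counts
    pairs read as category i by one method and j by the other."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_obs = sum(table[i][i] for i in range(k)) / n        # observed agreement
    row_m = [sum(table[i][j] for j in range(k)) / n for i in range(k)]
    col_m = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_exp = sum(row_m[i] * col_m[i] for i in range(k))    # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical duplicate readings: 20 concordant positive, 100 concordant
# negative, 6 discordant
table = [[20, 3],
         [3, 100]]
print(round(cohens_kappa(table), 2))  # ≈ 0.84
```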

  14. Inspection criteria ensure quality control of parallel gap soldering

    NASA Technical Reports Server (NTRS)

    Burka, J. A.

    1968-01-01

    Investigation of parallel gap soldering of electrical leads resulted in recommendation on material preparation, equipment, process control, and visual inspection criteria to ensure reliable solder joints. The recommendations will minimize problems in heat-dwell time, amount of solder, bridging conductors, and damage of circuitry.

  15. Employing machine learning for reliable miRNA target identification in plants

    PubMed Central

    2011-01-01

Background miRNAs are ~21 nucleotide long small noncoding RNA molecules, formed endogenously in most eukaryotes, which mainly control their target genes post-transcriptionally by interacting with and silencing them. While many tools have been developed for animal miRNA target systems, plant miRNA target identification has seen limited development. Most existing tools are centered on exact complementarity matching; very few consider other factors such as multiple target sites and the role of flanking regions. Result In the present work, a Support Vector Regression (SVR) approach has been implemented for plant miRNA target identification, utilizing position-specific dinucleotide density variation around the target sites to yield highly reliable results. It has been named p-TAREF (plant-Target Refiner). p-TAREF was rigorously compared with other prediction tools for plants and found to perform better in several respects. Further, p-TAREF was run over experimentally validated miRNA targets from species such as Arabidopsis, Medicago, rice, and tomato, and detected them accurately, suggesting its general usability across plant species. Using p-TAREF, target identification was performed for the complete rice transcriptome, supported by expression and degradome data. miR156 was found to be an important component of the rice regulatory system, where control of genes associated with growth and transcription appeared predominant. The entire methodology has been implemented in a multi-threaded parallel architecture in Java to enable fast processing in both the web-server and standalone versions, which also allows it to run concurrently even on a simple desktop computer. The web-server version additionally provides a facility to gather experimental support for predictions through on-the-spot expression data analysis.
Conclusion A machine learning multivariate feature tool has been implemented in a parallel and locally installable form for plant miRNA target identification. Its performance was assessed and compared through comprehensive testing and benchmarking, suggesting reliable performance and general usability for transcriptome-wide plant miRNA target identification. PMID:22206472
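The kind of position-specific dinucleotide feature the abstract describes can be illustrated with a toy extractor; the window size and normalization here are assumptions for illustration, not p-TAREF's actual encoding:

```python
def dinucleotide_density(seq, window=5):
    """Per-window dinucleotide densities along a sequence: a toy version
    of position-specific dinucleotide features around a putative target
    site. Windowing and normalization are illustrative assumptions."""
    feats = []
    for start in range(0, len(seq) - window + 1, window):
        win = seq[start:start + window]
        counts = {}
        for i in range(len(win) - 1):
            di = win[i:i + 2]
            counts[di] = counts.get(di, 0) + 1
        total = max(len(win) - 1, 1)          # dinucleotides per window
        feats.append({di: c / total for di, c in counts.items()})
    return feats

# Two windows of a toy RNA fragment; each dinucleotide's share per window
print(dinucleotide_density("AUGGCAUGGC", window=5))
```

Vectors like these, one per position window, are the sort of input a regression model such as an SVR can score to refine candidate target sites.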

  16. Employing machine learning for reliable miRNA target identification in plants.

    PubMed

    Jha, Ashwani; Shankar, Ravi

    2011-12-29

    miRNAs are ~21 nucleotide long small noncoding RNA molecules, formed endogenously in most of the eukaryotes, which mainly control their target genes post transcriptionally by interacting and silencing them. While a lot of tools has been developed for animal miRNA target system, plant miRNA target identification system has witnessed limited development. Most of them have been centered around exact complementarity match. Very few of them considered other factors like multiple target sites and role of flanking regions. In the present work, a Support Vector Regression (SVR) approach has been implemented for plant miRNA target identification, utilizing position specific dinucleotide density variation information around the target sites, to yield highly reliable result. It has been named as p-TAREF (plant-Target Refiner). Performance comparison for p-TAREF was done with other prediction tools for plants with utmost rigor and where p-TAREF was found better performing in several aspects. Further, p-TAREF was run over the experimentally validated miRNA targets from species like Arabidopsis, Medicago, Rice and Tomato, and detected them accurately, suggesting gross usability of p-TAREF for plant species. Using p-TAREF, target identification was done for the complete Rice transcriptome, supported by expression and degradome based data. miR156 was found as an important component of the Rice regulatory system, where control of genes associated with growth and transcription looked predominant. The entire methodology has been implemented in a multi-threaded parallel architecture in Java, to enable fast processing for web-server version as well as standalone version. This also makes it to run even on a simple desktop computer in concurrent mode. It also provides a facility to gather experimental support for predictions made, through on the spot expression data analysis, in its web-server version. 
A machine learning multivariate feature tool has been implemented in parallel and locally installable form, for plant miRNA target identification. The performance was assessed and compared through comprehensive testing and benchmarking, suggesting a reliable performance and gross usability for transcriptome wide plant miRNA target identification.

  17. Stitch-bond parallel-gap welding for IC circuits

    NASA Technical Reports Server (NTRS)

    Chvostal, P.; Tuttle, J.; Vanderpool, R.

    1980-01-01

    Stitch-bonded flatpacks are superior to soldered dual-in-lines where size, weight, and reliability are important. Results should interest designers of packaging for complex high-reliability electronics, such as that used in security systems, industrial process control, and vehicle electronics.

  18. Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gibson, Garth Alan

    1990-01-01

During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
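The parity code discussed in the thesis can be sketched with XOR across blocks: because x ^ x = 0, XOR-ing the surviving blocks with the parity block reproduces the data of any single, self-identifying failed disk. A minimal sketch:

```python
def parity_block(blocks):
    """XOR parity across equal-length data blocks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

def recover(survivors, parity):
    """Rebuild the single failed block: XOR the parity with every
    surviving block; since x ^ x = 0, only the lost block remains."""
    return parity_block(survivors + [parity])

data = [b"disk0", b"disk1", b"disk2"]
p = parity_block(data)                    # stored on the redundant disk
rebuilt = recover([data[0], data[2]], p)  # disk 1 has failed
print(rebuilt == data[1])                 # True
```

This is the single-failure case; identifying *which* disk failed comes from the drive's own error reporting, which is why "self-identifying" failures suffice for plain parity.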

  19. Molecular structures of amyloid and prion fibrils: consensus versus controversy.

    PubMed

    Tycko, Robert; Wickner, Reed B

    2013-07-16

    Many peptides and proteins self-assemble into amyloid fibrils. Examples include mammalian and fungal prion proteins, polypeptides associated with human amyloid diseases, and proteins that may have biologically functional amyloid states. To understand the propensity for polypeptides to form amyloid fibrils and to facilitate rational design of amyloid inhibitors and imaging agents, it is necessary to elucidate the molecular structures of these fibrils. Although fibril structures were largely mysterious 15 years ago, a considerable body of reliable structural information about amyloid fibril structures now exists, with essential contributions from solid state nuclear magnetic resonance (NMR) measurements. This Account reviews results from our laboratories and discusses several structural issues that have been controversial. In many cases, the amino acid sequences of amyloid fibrils do not uniquely determine their molecular structures. Self-propagating, molecular-level polymorphism complicates the structure determination problem and can lead to apparent disagreements between results from different laboratories, particularly when different laboratories study different polymorphs. For 40-residue β-amyloid (Aβ₁₋₄₀) fibrils associated with Alzheimer's disease, we have developed detailed structural models from solid state NMR and electron microscopy data for two polymorphs. These polymorphs have similar peptide conformations, identical in-register parallel β-sheet organizations, but different overall symmetry. Other polymorphs have also been partially characterized by solid state NMR and appear to have similar structures. In contrast, cryo-electron microscopy studies that use significantly different fibril growth conditions have identified structures that appear (at low resolution) to be different from those examined by solid state NMR. 
Based on solid state NMR and electron paramagnetic resonance (EPR) measurements, the in-register parallel β-sheet organization found in β-amyloid fibrils also occurs in many other fibril-forming systems. We attribute this common structural motif to the stabilization of amyloid structures by intermolecular interactions among like amino acids, including hydrophobic interactions and polar zippers. Surprisingly, we have recently identified and characterized antiparallel β-sheets in certain fibrils that are formed by the D23N mutant of Aβ₁₋₄₀, a mutant that is associated with early-onset, familial neurodegenerative disease. Antiparallel D23N-Aβ₁₋₄₀ fibrils are metastable with respect to parallel structures and, therefore, represent an off-pathway intermediate in the amyloid fibril formation process. Other methods have recently produced additional evidence for antiparallel β-sheets in other amyloid-formation intermediates. As an alternative to simple parallel and antiparallel β-sheet structures, researchers have proposed β-helical structural models for some fibrils, especially those formed by mammalian and fungal prion proteins. Solid state NMR and EPR data show that fibrils formed in vitro by recombinant PrP have in-register parallel β-sheet structures. However, the structure of infectious PrP aggregates is not yet known. The fungal HET-s prion protein has been shown to contain a β-helical structure. However, all yeast prions studied by solid state NMR (Sup35p, Ure2p, and Rnq1p) have in-register parallel β-sheet structures, with their Gln- and Asn-rich N-terminal segments forming the fibril core.

  20. Reliability Evaluation for Clustered WSNs under Malware Propagation

    PubMed Central

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C.; Yu, Shui; Cao, Qiying

    2016-01-01

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node’s MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN. PMID:27294934
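The parallel-serial-parallel evaluation can be sketched with classical reliability algebra, assuming independent, identical elements; the node reliability and counts below are illustrative, not the paper's:

```python
def parallel_r(rels):
    """Parallel block: fails only if every element fails."""
    fail = 1.0
    for r in rels:
        fail *= (1.0 - r)
    return 1.0 - fail

def series_r(rels):
    """Series block: works only if every element works."""
    work = 1.0
    for r in rels:
        work *= r
    return work

def wsn_reliability(node_r, nodes_per_cluster, clusters_per_route, n_routes):
    """Clustered WSN as a parallel-serial-parallel system: nodes within
    a cluster in parallel, clusters along a route in series, and routes
    to the sink in parallel (identical, independent elements assumed)."""
    cluster = parallel_r([node_r] * nodes_per_cluster)
    route = series_r([cluster] * clusters_per_route)
    return parallel_r([route] * n_routes)

# Illustrative sizing: 3 nodes/cluster, 4 clusters/route, 2 routes
print(round(wsn_reliability(0.9, 3, 4, 2), 6))  # ≈ 0.999984
```

In the paper's setting the per-node reliability itself would come from the MTTF computed under malware propagation, and the three structural parameters are exactly the ones the validation varies.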

  1. Reliability Evaluation for Clustered WSNs under Malware Propagation.

    PubMed

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C; Yu, Shui; Cao, Qiying

    2016-06-10

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node's MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN.

  2. Combinatorial Reliability and Repair

    DTIC Science & Technology

    1992-07-01

Press, Oxford, 1987. [2] G. Gordon and L. Traldi, Generalized activities and the Tutte polynomial, Discrete Math. 85 (1990), 167-176. [3] A. B. Huseby, A...Chromatic polynomials and network reliability, Discrete Math. 67 (1987), 57-79. [7] A. Satyanarayana and R. K. Wood, A linear-time algorithm for computing K-terminal reliability in series-parallel networks, SIAM J. Comput. 14 (1985), 818-832. [8] L. Traldi, Generalized activities and K-terminal reliability, Discrete Math. 96 (1991), 131-149.

  3. Temperature compensated photovoltaic array

    DOEpatents

    Mosher, D.M.

    1997-11-18

A temperature compensated photovoltaic module comprises a series of solar cells with a thermally activated switch connected in parallel with several of the cells. The photovoltaic module is adapted to charge conventional batteries whose temperature coefficient differs from that of the module. The calibration temperatures of the switches are chosen so that the colder the ambient temperature, the more switches turn on, each forming a closed circuit that shorts its associated solar cells. By shorting some of the solar cells as the ambient temperature decreases, the module avoids excessively overcharging the battery at lower temperatures. The PV module is an integrated solution that is reliable and inexpensive. 2 figs.

  4. Wikipedia mining of hidden links between political leaders

    NASA Astrophysics Data System (ADS)

    Frahm, Klaus M.; Jaffrès-Runser, Katia; Shepelyansky, Dima L.

    2016-12-01

We describe a new method, the reduced Google matrix, which establishes direct and hidden links between a subset of nodes of a large directed network. The approach draws on parallels with quantum scattering theory, developed for processes in nuclear and mesoscopic physics and quantum chaos. The method is applied to the Wikipedia networks in different language editions, analyzing several groups of political leaders from the USA, UK, Germany, France, Russia, and the G20. We demonstrate that this approach reliably recovers direct and hidden links among political leaders, and we argue that the reduced Google matrix method can form the mathematical basis for studies in the social and political sciences analyzing Leader-Member eXchange (LMX).

  5. Probabilistic structural mechanics research for parallel processing computers

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

    1991-01-01

    Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods was hampered by their computationally intense nature. Solution of PSM problems requires repeated analyses of structures that are often large, and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.

  6. Serial Back-Plane Technologies in Advanced Avionics Architectures

    NASA Technical Reports Server (NTRS)

    Varnavas, Kosta

    2005-01-01

    Current back-plane technologies such as VME, and current personal computer back planes such as PCI, are shared-bus systems that can exhibit nondeterministic latencies. This means a card can take control of the bus and use resources indefinitely, affecting the ability of other cards in the back plane to acquire the bus. This significantly degrades the reliability of the system. Additionally, these parallel busses only have bandwidths in the hundreds-of-megahertz range, and EMI and noise effects get worse as the bandwidth increases. To provide scalable, fault-tolerant, advanced computing systems, more applicable to today's connected computing environment, and to better meet the needs of future requirements for advanced space instruments and vehicles, serial back-plane technologies should be implemented in advanced avionics architectures. Serial back-plane technologies eliminate the problem of one card acquiring the bus and never relinquishing it, or one minor problem on the backplane bringing the whole system down. Being serial instead of parallel reduces many of the signal integrity issues associated with parallel back planes and thus significantly improves reliability. The increased speeds associated with a serial backplane are an added bonus.

  7. Method of forming oriented block copolymer line patterns, block copolymer line patterns formed thereby, and their use to form patterned articles

    DOEpatents

    Russell, Thomas P.; Hong, Sung Woo; Lee, Doug Hyun; Park, Soojin; Xu, Ting

    2015-10-13

    A block copolymer film having a line pattern with a high degree of long-range order is formed by a method that includes forming a block copolymer film on a substrate surface with parallel facets, and annealing the block copolymer film to form an annealed block copolymer film having linear microdomains parallel to the substrate surface and orthogonal to the parallel facets of the substrate. The line-patterned block copolymer films are useful for the fabrication of magnetic storage media, polarizing devices, and arrays of nanowires.

  8. Method of forming oriented block copolymer line patterns, block copolymer line patterns formed thereby, and their use to form patterned articles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, Thomas P.; Hong, Sung Woo; Lee, Dong Hyun

    A block copolymer film having a line pattern with a high degree of long-range order is formed by a method that includes forming a block copolymer film on a substrate surface with parallel facets, and annealing the block copolymer film to form an annealed block copolymer film having linear microdomains parallel to the substrate surface and orthogonal to the parallel facets of the substrate. The line-patterned block copolymer films are useful for the fabrication of magnetic storage media, polarizing devices, and arrays of nanowires.

  9. CPM Test-Retest Reliability: "Standard" vs "Single Test-Stimulus" Protocols.

    PubMed

    Granovsky, Yelena; Miller-Barmak, Adi; Goldstein, Oren; Sprecher, Elliot; Yarnitsky, David

    2016-03-01

    Assessment of pain inhibitory mechanisms using conditioned pain modulation (CPM) is clinically relevant for prediction of pain and analgesic efficacy. Our objective is to provide necessary estimates of intersession CPM reliability, to enable transformation of the CPM paradigm into a clinical tool. Two cohorts of young healthy subjects (N = 65) participated in two dual-session studies. In Study I, a Bath-Thermode CPM protocol was used, with hot water immersion and contact heat as conditioning- and test-stimuli, respectively, in a classical parallel CPM design introducing the test-stimulus first, and then the conditioning- and repeated test-stimuli in parallel. Study II consisted of two CPM protocols: 1) Two-Thermodes, one for each of the stimuli, in the same parallel design as above, and 2) a single test-stimulus (STS) protocol with a single administration of a contact heat test-stimulus, partially overlapped in time by a remote, shorter contact heat as the conditioning stimulus. Test-retest reliability was assessed within 3-7 days. The STS-CPM had superior reliability (intraclass correlation, ICC(2,1) = 0.59) over the Bath-Thermode (ICC(2,1) = 0.34) or Two-Thermodes (ICC(2,1) = 0.21) protocols. The hand immersion conditioning pain had higher reliability than thermode pain (ICC(2,1) = 0.76 vs ICC(2,1) = 0.16). Conditioned test-stimulus pain scores were of good (ICC(2,1) = 0.62) or fair (ICC(2,1) = 0.43) reliability for the Bath-Thermode and the STS, respectively, but not for the Two-Thermodes protocol (ICC(2,1) = 0.20). The newly developed STS-CPM paradigm was more reliable than the other CPM protocols tested here, and should be further investigated for its clinical relevance. It appears that the large contact size of the conditioning-stimulus and the use of a single rather than dual test-stimulus pain contribute to augmentation of CPM reliability.
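The intraclass correlations reported above follow the ICC(2,1) convention of Shrout and Fleiss: two-way random effects, absolute agreement, single measurement. A minimal sketch of how such a coefficient can be computed from an n-subjects-by-k-sessions score matrix (not the study's analysis code):

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    `data` is an (n subjects x k sessions) array of scores."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    # Mean squares from a two-way ANOVA without replication
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    ss_err = np.sum((data - row_means[:, None] - col_means[None, :] + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

Perfect session-to-session agreement gives 1.0; session effects and residual noise pull the coefficient down, as in the protocol comparisons above.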

  10. The role of bed-parallel slip in the development of complex normal fault zones

    NASA Astrophysics Data System (ADS)

    Delogkos, Efstratios; Childs, Conrad; Manzocchi, Tom; Walsh, John J.; Pavlides, Spyros

    2017-04-01

    Normal faults exposed in Kardia lignite mine, Ptolemais Basin, NW Greece formed at the same time as bed-parallel slip-surfaces, so that while the normal faults grew they were intermittently offset by bed-parallel slip. Following offset by a bed-parallel slip-surface, further fault growth is accommodated by reactivation on one or both of the offset fault segments. Where one fault is reactivated the site of bed-parallel slip is a bypassed asperity. Where both faults are reactivated, they propagate past each other to form a volume between overlapping fault segments that displays many of the characteristics of relay zones, including elevated strains and transfer of displacement between segments. Unlike conventional relay zones, however, these structures contain either a repeated or a missing section of stratigraphy which has a thickness equal to the throw of the fault at the time of the bed-parallel slip event, and the displacement profiles along the relay-bounding fault segments have discrete steps at their intersections with bed-parallel slip-surfaces. With further increase in displacement, the overlapping fault segments connect to form a fault-bound lens. Conventional relay zones form during initial fault propagation, but with coeval bed-parallel slip, relay-like structures can form later in the growth of a fault. Geometrical restoration of cross-sections through selected faults shows that repeated bed-parallel slip events during fault growth can lead to complex internal fault zone structure that masks its origin. Bed-parallel slip, in this case, is attributed to flexural-slip arising from hanging-wall rollover associated with a basin-bounding fault outside the study area.

  11. Fabrication of uniform multi-compartment particles using microfluidic electrospray technology for cell co-culture studies.

    PubMed

    Liu, Zhou; Shum, Ho Cheung

    2013-01-01

    In this work, we demonstrate a robust and reliable approach to fabricate multi-compartment particles for cell co-culture studies. By taking advantage of the laminar flow within our microfluidic nozzle, multiple parallel streams of liquids flow towards the nozzle without significant mixing. Afterwards, the multiple parallel streams merge into a single stream, which is sprayed into air, forming monodisperse droplets under an electric field with a high field strength. The resultant multi-compartment droplets are subsequently cross-linked in a calcium chloride solution to form calcium alginate micro-particles with multiple compartments. Each compartment of the particles can be used for encapsulating different types of cells or biological cell factors. These hydrogel particles with cross-linked alginate chains show similarity in the physical and mechanical environment as the extracellular matrix of biological cells. Thus, the multi-compartment particles provide a promising platform for cell studies and co-culture of different cells. In our study, cells are encapsulated in the multi-compartment particles and the viability of cells is quantified using a fluorescence microscope after the cells are stained for a live/dead assay. The high cell viability after encapsulation indicates the cytocompatibility and feasibility of our technique. Our multi-compartment particles have great potential as a platform for studying cell-cell interactions as well as interactions of cells with extracellular factors.

  12. Fabrication of uniform multi-compartment particles using microfluidic electrospray technology for cell co-culture studies

    PubMed Central

    Liu, Zhou; Shum, Ho Cheung

    2013-01-01

    In this work, we demonstrate a robust and reliable approach to fabricate multi-compartment particles for cell co-culture studies. By taking advantage of the laminar flow within our microfluidic nozzle, multiple parallel streams of liquids flow towards the nozzle without significant mixing. Afterwards, the multiple parallel streams merge into a single stream, which is sprayed into air, forming monodisperse droplets under an electric field with a high field strength. The resultant multi-compartment droplets are subsequently cross-linked in a calcium chloride solution to form calcium alginate micro-particles with multiple compartments. Each compartment of the particles can be used for encapsulating different types of cells or biological cell factors. These hydrogel particles with cross-linked alginate chains show similarity in the physical and mechanical environment as the extracellular matrix of biological cells. Thus, the multi-compartment particles provide a promising platform for cell studies and co-culture of different cells. In our study, cells are encapsulated in the multi-compartment particles and the viability of cells is quantified using a fluorescence microscope after the cells are stained for a live/dead assay. The high cell viability after encapsulation indicates the cytocompatibility and feasibility of our technique. Our multi-compartment particles have great potential as a platform for studying cell-cell interactions as well as interactions of cells with extracellular factors. PMID:24404050

  13. Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    NASA Technical Reports Server (NTRS)

    Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.

    1992-01-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  14. A PC parallel port button box provides millisecond response time accuracy under Linux.

    PubMed

    Stewart, Neil

    2006-02-01

    For psychologists, it is sometimes necessary to measure people's reaction times to the nearest millisecond. This article describes how to use the PC parallel port to receive signals from a button box to achieve millisecond response time accuracy. The workings of the parallel port, the corresponding port addresses, and a simple Linux program for controlling the port are described. A test of the speed and reliability of button box signal detection is reported. If the reader is moderately familiar with Linux, this article should provide sufficient instruction for him or her to build and test his or her own parallel port button box. This article also describes how the parallel port could be used to control an external apparatus.
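As a rough illustration of the approach the article describes, the sketch below decodes the standard SPP status register (base address + 1) to which a button box would be wired, and shows one way to read it on Linux via /dev/port. The 0x378 base address and the wiring are assumptions (check /proc/ioports), reading /dev/port requires root, and millisecond-accurate polling still needs the scheduling care the article describes; the original work uses inb() after ioperm() in C, for which this Python stands in.

```python
import os

BASE = 0x378       # legacy LPT1 base address (assumption; check /proc/ioports)
STATUS = BASE + 1  # the status register carries the five input lines

def decode_status(byte):
    """Map the SPP status register bits to connector logic levels.
    Bit 3 = ERROR, 4 = SELECT, 5 = PAPER OUT, 6 = ACK, 7 = BUSY (inverted)."""
    return {
        "error": bool(byte & 0x08),
        "select": bool(byte & 0x10),
        "paper_out": bool(byte & 0x20),
        "ack": bool(byte & 0x40),
        "busy": not (byte & 0x80),  # the port hardware inverts this line
    }

def read_status():
    """Read the status register through /dev/port (Linux only, needs root)."""
    fd = os.open("/dev/port", os.O_RDONLY)
    try:
        os.lseek(fd, STATUS, os.SEEK_SET)
        return os.read(fd, 1)[0]
    finally:
        os.close(fd)
```

A button wired between a status pin and ground simply flips the corresponding bit, so detecting a press reduces to polling `read_status()` and decoding.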

  15. The formation of quasi-parallel shocks. [in space, solar and astrophysical plasmas

    NASA Technical Reports Server (NTRS)

    Cargill, Peter J.

    1991-01-01

    In a collisionless plasma, the coupling between a piston and the plasma must take place through either laminar or turbulent electromagnetic fields. Of the three types of coupling (laminar, Larmor and turbulent), shock formation in the parallel regime is dominated by the latter and in the quasi-parallel regime by a combination of all three, depending on the piston. In the quasi-perpendicular regime, there is usually a good separation between piston and shock. This is not true in the quasi-parallel and parallel regime. Hybrid numerical simulations for hot plasma pistons indicate that when the electrons are hot, a shock forms, but does not cleanly decouple from the piston. For hot ion pistons, no shock forms in the parallel limit: in the quasi-parallel case, a shock forms, but there is severe contamination from hot piston ions. These results suggest that the properties of solar and astrophysical shocks, such as particle acceleration, cannot be readily separated from their driving mechanism.

  16. FaCSI: A block parallel preconditioner for fluid-structure interaction in hemodynamics

    NASA Astrophysics Data System (ADS)

    Deparis, Simone; Forti, Davide; Grandperrin, Gwenol; Quarteroni, Alfio

    2016-12-01

    Modeling Fluid-Structure Interaction (FSI) in the vascular system is mandatory to reliably compute mechanical indicators in vessels undergoing large deformations. In order to cope with the computational complexity of the coupled 3D FSI problem after discretizations in space and time, a parallel solution is often mandatory. In this paper we propose a new block parallel preconditioner for the coupled linearized FSI system obtained after space and time discretization. We name it FaCSI to indicate that it exploits the Factorized form of the linearized FSI matrix, the use of static Condensation to formally eliminate the interface degrees of freedom of the fluid equations, and the use of a SIMPLE preconditioner for saddle-point problems. FaCSI is built upon a block Gauss-Seidel factorization of the FSI Jacobian matrix and it uses ad-hoc preconditioners for each physical component of the coupled problem, namely the fluid, the structure and the geometry. In the fluid subproblem, after operating static condensation of the interface fluid variables, we use a SIMPLE preconditioner on the reduced fluid matrix. Moreover, to efficiently deal with a large number of processes, FaCSI exploits efficient single field preconditioners, e.g., based on domain decomposition or the multigrid method. We measure the parallel performances of FaCSI on a benchmark cylindrical geometry and on a problem of physiological interest, namely the blood flow through a patient-specific femoropopliteal bypass. We analyze the dependence of the number of linear solver iterations on the cores count (scalability of the preconditioner) and on the mesh size (optimality).
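The block Gauss-Seidel structure underlying FaCSI can be illustrated on a dense two-block toy system. This is only a schematic of the factorized idea, with dense solves standing in for the per-field inner preconditioners; the static condensation and SIMPLE steps of the actual method are not modeled, and the block names are illustrative.

```python
import numpy as np

def block_gauss_seidel_prec(A_ff, A_fs, A_sf, A_ss):
    """Apply-function for the lower block-triangular preconditioner
    M = [[A_ff, 0], [A_sf, A_ss]] of the coupled system [[A_ff, A_fs], [A_sf, A_ss]].
    The upper coupling block A_fs is dropped by the triangular approximation."""
    nf = A_ff.shape[0]

    def apply(r):
        rf, rs = r[:nf], r[nf:]
        xf = np.linalg.solve(A_ff, rf)               # "fluid" block solve
        xs = np.linalg.solve(A_ss, rs - A_sf @ xf)   # "structure" solve, coupled via A_sf
        return np.concatenate([xf, xs])

    return apply
```

Used inside a Krylov method, one such application per iteration replaces an exact inverse of the coupled Jacobian; the quality of the block approximation governs the iteration counts whose scalability the paper measures.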

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Gary

    The primary objective of this project was to demonstrate the feasibility and reliability of utilizing high-temperature superconducting (HTS) materials in a Transmission Level Superconducting Fault Current Limiter (SFCL) application. During the project, the type of high-temperature superconducting material used evolved from 1st-generation (1G) BSCCO-2212 melt-cast bulk high-temperature superconductors to 2nd-generation (2G) YBCO-based high-temperature superconducting tape. The SFCL employed SuperPower's “Matrix” technology, which offers modular features to enable scale-up to transmission voltage levels. The SFCL consists of individual modules that contain elements and parallel inductors that assist in carrying the current during the fault. A number of these modules are arranged in an m x n array to form the current-limiting matrix.

  18. Temperature compensated photovoltaic array

    DOEpatents

    Mosher, Dan Michael

    1997-11-18

    A temperature compensated photovoltaic module (20) comprised of a series of solar cells (22) having a thermally activated switch (24) connected in parallel with several of the cells (22). The photovoltaic module (20) is adapted to charge conventional batteries having a temperature coefficient (TC) differing from the temperature coefficient (TC) of the module (20). The calibration temperatures of the switches (24) are chosen whereby the colder the ambient temperature for the module (20), the more switches that are on and form a closed circuit to short the associated solar cells (22). By shorting some of the solar cells (22) as the ambient temperature decreases, the battery being charged by the module (20) is not excessively overcharged at lower temperatures. PV module (20) is an integrated solution that is reliable and inexpensive.

  19. Tutorial: Performance and reliability in redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Gibson, Garth A.

    1993-01-01

    A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity, is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike.
In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.
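The N+1 parity encoding highlighted in the tutorial fits in a few lines: the parity block is the bytewise XOR of the N data blocks, and XOR-ing the surviving blocks with the parity block rebuilds any single failed block. A minimal sketch (names are illustrative):

```python
def parity(blocks):
    """N+1 parity: bytewise XOR of N equal-length data blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def reconstruct(survivors, parity_block):
    """XOR of the surviving blocks and the parity block rebuilds the failed one."""
    return parity(list(survivors) + [parity_block])
```

This is why the scheme tolerates exactly one failure per parity group: XOR cancels every surviving block, leaving the missing one.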

  20. Reliability Modeling Methodology for Independent Approaches on Parallel Runways Safety Analysis

    NASA Technical Reports Server (NTRS)

    Babcock, P.; Schor, A.; Rosch, G.

    1998-01-01

    This document is an adjunct to the final report, An Integrated Safety Analysis Methodology for Emerging Air Transport Technologies. That report presents the results of our analysis of the problem of simultaneous but independent approaches of two aircraft on parallel runways (independent approaches on parallel runways, or IAPR). This introductory chapter presents a brief overview and perspective of approaches and methodologies for performing safety analyses for complex systems. Ensuing chapters provide the technical details that underlie the approach that we have taken in performing the safety analysis for the IAPR concept.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alred, Erik J.; Scheele, Emily G.; Berhanu, Workalemahu M.

    Recent experiments indicate a connection between the structure of amyloid aggregates and their cytotoxicity as related to neurodegenerative diseases. Of particular interest is the Iowa Mutant, which causes early-onset Alzheimer's disease. While wild-type Amyloid β-peptides form only parallel beta-sheet aggregates, the mutant also forms meta-stable antiparallel beta sheets. Since these structural variations may cause the difference in the pathological effects of the two Aβ-peptides, we have studied in silico the relative stability of the wild type and Iowa mutant in both parallel and antiparallel forms. We compare regular molecular dynamics simulations with simulations in which the viscosity of the samples is reduced, which, we show, leads to higher sampling efficiency. By analyzing and comparing these four sets of all-atom molecular dynamics simulations, we probe the role of the various factors that could lead to the structural differences. Our analysis indicates that the parallel forms of both wild type and Iowa mutant aggregates are stable, while the antiparallel aggregates are meta-stable for the Iowa mutant and not stable for the wild type. The differences result from the direct alignment of hydrophobic interactions in the in-register parallel oligomers, making them more stable than the antiparallel aggregates. The slightly higher thermodynamic stability of the Iowa mutant fibril-like oligomers in its parallel organization over that in antiparallel form is supported by previous experimental measurements showing slow inter-conversion of antiparallel aggregates into parallel ones. Knowledge of the mechanism that selects between parallel and antiparallel conformations and determines their relative stability may open new avenues for the development of therapies targeting familial forms of early-onset Alzheimer's disease.

  2. Measurement properties of the Spinal Cord Injury-Functional Index (SCI-FI) short forms.

    PubMed

    Heinemann, Allen W; Dijkers, Marcel P; Ni, Pengsheng; Tulsky, David S; Jette, Alan

    2014-07-01

    To evaluate the psychometric properties of the Spinal Cord Injury-Functional Index (SCI-FI) short forms (basic mobility, self-care, fine motor, ambulation, manual wheelchair, and power wheelchair) based on internal consistency; correlations among the short forms, the full item banks, and a 10-item computer adaptive test version; magnitude of ceiling and floor effects; and test information functions. Cross-sectional cohort study. Six rehabilitation hospitals in the United States. Individuals with traumatic spinal cord injury (N=855) recruited from 6 national Spinal Cord Injury Model Systems facilities. Not applicable. SCI-FI full item bank, 10-item computer adaptive test, and parallel short form scores. The SCI-FI short forms (with separate versions for individuals with paraplegia and tetraplegia) demonstrate very good internal consistency, group-level reliability, excellent correlations between short forms and scores based on the total item bank, and minimal ceiling and floor effects (except ceiling effects for persons with paraplegia on self-care, fine motor, and power wheelchair ability and floor effects for persons with tetraplegia on self-care, fine motor, and manual wheelchair ability). The test information functions are acceptable across the range of scores where most persons in the sample performed. Clinicians and researchers should consider the SCI-FI short forms when computer adaptive testing is not feasible. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
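Internal-consistency figures like those reported for the SCI-FI short forms (and the SHAPS α = 0.96 elsewhere in this collection) are conventionally computed as Cronbach's alpha. A minimal sketch, not the study's analysis code:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n respondents x k items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)
```

When items covary strongly the total-score variance dominates the summed item variances and alpha approaches 1; uncorrelated items drive it toward 0 or below.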

  3. Soft Pneumatic Actuator Fascicles for High Force and Reliability

    PubMed Central

    Robertson, Matthew A.; Sadeghi, Hamed; Florez, Juan Manuel

    2017-01-01

    Abstract Soft pneumatic actuators (SPAs) are found in mobile robots, assistive wearable devices, and rehabilitative technologies. While soft actuators have been one of the most crucial elements of technology leading the development of the soft robotics field, they fall short of force output and bandwidth requirements for many tasks. In addition, other general problems remain open, including robustness, controllability, and repeatability. The SPA-pack architecture presented here aims to satisfy these standards of reliability crucial to the field of soft robotics, while also improving the basic performance capabilities of SPAs by borrowing advantages leveraged ubiquitously in biology; namely, the structured parallel arrangement of lower power actuators to form the basis of a larger and more powerful actuator module. An SPA-pack module consisting of a number of smaller SPAs will be studied using an analytical model and physical prototype. Experimental measurements show an SPA pack to generate over 112 N linear force, while the model indicates that the benefit of parallel actuator grouping over a geometrically equivalent single SPA scales as an increasing function of the number of individual actuators in the group. For a module of four actuators, a 23% increase in force production over a volumetrically equivalent single SPA is predicted and validated, while further gains appear possible up to 50%. These findings affirm the advantage of utilizing a fascicle structure for high-performance soft robotic applications over existing monolithic SPA designs. An example of a high-performance soft robotic platform will be presented to demonstrate the capability of SPA-pack modules in a complete and functional system. PMID:28289573

  4. Soft Pneumatic Actuator Fascicles for High Force and Reliability.

    PubMed

    Robertson, Matthew A; Sadeghi, Hamed; Florez, Juan Manuel; Paik, Jamie

    2017-03-01

    Soft pneumatic actuators (SPAs) are found in mobile robots, assistive wearable devices, and rehabilitative technologies. While soft actuators have been one of the most crucial elements of technology leading the development of the soft robotics field, they fall short of force output and bandwidth requirements for many tasks. In addition, other general problems remain open, including robustness, controllability, and repeatability. The SPA-pack architecture presented here aims to satisfy these standards of reliability crucial to the field of soft robotics, while also improving the basic performance capabilities of SPAs by borrowing advantages leveraged ubiquitously in biology; namely, the structured parallel arrangement of lower power actuators to form the basis of a larger and more powerful actuator module. An SPA-pack module consisting of a number of smaller SPAs will be studied using an analytical model and physical prototype. Experimental measurements show an SPA pack to generate over 112 N linear force, while the model indicates that the benefit of parallel actuator grouping over a geometrically equivalent single SPA scales as an increasing function of the number of individual actuators in the group. For a module of four actuators, a 23% increase in force production over a volumetrically equivalent single SPA is predicted and validated, while further gains appear possible up to 50%. These findings affirm the advantage of utilizing a fascicle structure for high-performance soft robotic applications over existing monolithic SPA designs. An example of a high-performance soft robotic platform will be presented to demonstrate the capability of SPA-pack modules in a complete and functional system.

  5. Modeling the Control Systems of Gas-Turbines to Ensure Their Reliable Parallel Operation in the UPS of Russia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinogradov, A. Yu., E-mail: vinogradov-a@ntcees.ru; Gerasimov, A. S.; Kozlov, A. V.

    Consideration is given to different approaches to modeling the control systems of gas turbines as a component of CCPP and GTPP to ensure their reliable parallel operation in the UPS of Russia. The disadvantages of the approaches to the modeling of combined-cycle units in studying long-term electromechanical transients accompanied by power imbalance are pointed out. Examples are presented to support the use of more detailed models of gas turbines in electromechanical transient calculations. It is shown that the modern speed control systems of gas turbines in combination with relatively low equivalent inertia have a considerable effect on electromechanical transients, including those caused by disturbances not related to power imbalance.

  6. Validation of Malay Version of Snaith-Hamilton Pleasure Scale: Comparison between Depressed Patients and Healthy Subjects at an Out-Patient Clinic in Malaysia

    PubMed Central

    NG, Chong Guan; CHIN, Soo Cheng; YEE, Anne Hway Ann; LOH, Huai Seng; SULAIMAN, Ahmad Hatim; Sherianne Sook Kuan, WONG; HABIL, Mohamed Hussain

    2014-01-01

    Background: The Snaith-Hamilton Pleasure Scale (SHAPS) is a self-assessment scale designed to evaluate anhedonia in various psychiatric disorders. In order to facilitate its use in Malaysian settings, our current study aimed to examine the validity of a Malay-translated version of the SHAPS (SHAPS-M). Methods: In this cross-sectional study, a total of 44 depressed patients and 82 healthy subjects were recruited from a university out-patient clinic. All participants were given both the Malay and English versions of the SHAPS, Fawcett-Clark Pleasure Scale (FCPS), General Health Questionnaire 12 (GHQ-12), and the Beck Depression Inventory (BDI) to assess their hedonic state, general mental health condition and levels of depression. Results: The results showed that the SHAPS-M has impressive internal consistency (α = 0.96), concurrent validity and good parallel-form reliability (intraclass correlation coefficient, ICC = 0.65). Conclusion: In addition to demonstrating good psychometric properties, the SHAPS-M is easy to administer. Therefore, it is a valid, reliable, and suitable questionnaire for assessing anhedonia among depressed patients in Malaysia. PMID:25246837

  7. Validation of Malay Version of Snaith-Hamilton Pleasure Scale: Comparison between Depressed Patients and Healthy Subjects at an Out-Patient Clinic in Malaysia.

    PubMed

    Ng, Chong Guan; Chin, Soo Cheng; Yee, Anne Hway Ann; Loh, Huai Seng; Sulaiman, Ahmad Hatim; Sherianne Sook Kuan, Wong; Habil, Mohamed Hussain

    2014-05-01

    The Snaith-Hamilton Pleasure Scale (SHAPS) is a self-assessment scale designed to evaluate anhedonia in various psychiatric disorders. In order to facilitate its use in Malaysian settings, our current study aimed to examine the validity of a Malay-translated version of the SHAPS (SHAPS-M). In this cross-sectional study, a total of 44 depressed patients and 82 healthy subjects were recruited from a university out-patient clinic. All participants were given both the Malay and English versions of the SHAPS, Fawcett-Clark Pleasure Scale (FCPS), General Health Questionnaire 12 (GHQ-12), and the Beck Depression Inventory (BDI) to assess their hedonic state, general mental health condition and levels of depression. The results showed that the SHAPS-M has impressive internal consistency (α = 0.96), concurrent validity and good parallel-form reliability (intraclass correlation coefficient, ICC = 0.65). In addition to demonstrating good psychometric properties, the SHAPS-M is easy to administer. Therefore, it is a valid, reliable, and suitable questionnaire for assessing anhedonia among depressed patients in Malaysia.

  8. JSD: Parallel Job Accounting on the IBM SP2

    NASA Technical Reports Server (NTRS)

    Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)

    1995-01-01

    The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.
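
The core of a jsd-like aggregator is rolling per-process accounting records up into one entry per parallel job. The sketch below is a hypothetical illustration (field names and record layout are assumptions, not the actual jsd format):

```python
from collections import defaultdict

# Hypothetical per-process accounting records of the kind AIX keeps for the
# individual processes of a parallel job: (job_id, node, cpu_seconds, max_rss_kb)
records = [
    ("job42", "node01", 120.5, 80000),
    ("job42", "node02", 118.9, 79500),
    ("job42", "node03", 121.2, 81000),
    ("job7",  "node01",  10.0,  4000),
]

def aggregate(records):
    """Roll per-process records up into one accounting entry per parallel job."""
    jobs = defaultdict(lambda: {"nodes": set(), "cpu_seconds": 0.0, "max_rss_kb": 0})
    for job_id, node, cpu, rss in records:
        entry = jobs[job_id]
        entry["nodes"].add(node)          # distinct nodes the job ran on
        entry["cpu_seconds"] += cpu       # total CPU across all processes
        entry["max_rss_kb"] = max(entry["max_rss_kb"], rss)
    return dict(jobs)

summary = aggregate(records)
```

With per-job totals like these, the system-use and bottleneck questions raised in the abstract become simple queries over the summary.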

  9. TVA-Based Assessment of Visual Attention Using Line-Drawings of Fruits and Vegetables

    PubMed Central

    Wang, Tianlu; Gillebert, Celine R.

    2018-01-01

    Visuospatial attention and short-term memory allow us to prioritize, select, and briefly maintain part of the visual information that reaches our senses. These cognitive abilities are quantitatively accounted for by Bundesen’s theory of visual attention (TVA; Bundesen, 1990). Previous studies have suggested that TVA-based assessments are sensitive to inter-individual differences in spatial bias, visual short-term memory capacity, top-down control, and processing speed in healthy volunteers as well as in patients with various neurological and psychiatric conditions. However, most neuropsychological assessments of attention and executive functions, including TVA-based assessment, make use of alphanumeric stimuli and/or are performed verbally, which can pose difficulties for individuals who have trouble processing letters or numbers. Here we examined the reliability of TVA-based assessments when the stimuli are not alphanumeric but are instead line-drawings of fruits and vegetables. We compared five TVA parameters quantifying the aforementioned cognitive abilities, obtained by modeling accuracy data on a whole/partial report paradigm using conventional alphabet stimuli versus the food stimuli. Significant correlations were found for all TVA parameters, indicating high parallel-form reliability. Split-half correlations assessing internal reliability, and correlations between predicted and observed data assessing goodness-of-fit, were both significant. Our results indicate that line-drawings of fruits and vegetables can be used for a reliable assessment of attention and short-term memory. PMID:29535660
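
The split-half correlations mentioned above can be illustrated with a minimal sketch: split trials into odd and even halves, correlate the half-scores across participants, and apply the Spearman-Brown correction to estimate full-length reliability. The data below are hypothetical:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(item_scores):
    """Odd-even split; Spearman-Brown steps the half-test r up to full length."""
    odd = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r_half = pearson(odd, even)
    return 2 * r_half / (1 + r_half)

# Hypothetical accuracy scores: rows = participants, columns = trials
scores = [
    [1, 1, 0, 1, 1, 1], [0, 1, 0, 0, 1, 0], [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0], [1, 0, 1, 1, 1, 1], [0, 1, 0, 0, 0, 1],
]
rel = split_half_reliability(scores)
```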

  10. A Gradient-Based Multistart Algorithm for Multimodal Aerodynamic Shape Optimization Problems Based on Free-Form Deformation

    NASA Astrophysics Data System (ADS)

    Streuber, Gregg Mitchell

    Environmental and economic factors motivate the pursuit of more fuel-efficient aircraft designs. Aerodynamic shape optimization is a powerful tool in this effort, but is hampered by the presence of multimodality in many design spaces. Gradient-based multistart optimization uses a sampling algorithm and multiple parallel optimizations to reliably apply fast gradient-based optimization to moderately multimodal problems. Ensuring that the sampled geometries remain physically realizable requires manually developing specialized linear constraints for each class of problem. Utilizing free-form deformation geometry control allows these linear constraints to be written in a geometry-independent fashion, greatly easing the process of applying the algorithm to new problems. This algorithm was used to assess the presence of multimodality when optimizing a wing in subsonic and transonic flows, under inviscid and viscous conditions, and a blended wing-body under transonic, viscous conditions. Multimodality was present in every wing case, while the blended wing-body was found to be generally unimodal.
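
The multistart idea (sample many starting points, run a fast gradient-based local optimization from each, and keep the best result) can be sketched in one dimension. The objective below is a toy multimodal function, not an aerodynamic model:

```python
import math

def f(x):
    # Synthetic multimodal objective: several local minima, one global minimum
    return math.sin(3 * x) + 0.1 * x * x

def grad(x):
    return 3 * math.cos(3 * x) + 0.2 * x

def descend(x, step=0.01, iters=500):
    """Plain gradient descent; converges to the local minimum of x's basin."""
    for _ in range(iters):
        x -= step * grad(x)
    return x

# Multistart: descend from a grid of starting points and keep the best finisher
starts = [-3.0 + 6.0 * i / 9 for i in range(10)]
finishes = [descend(x0) for x0 in starts]
best_x = min(finishes, key=f)
best_val = f(best_x)
```

Each start finds only its own basin's minimum; sampling several starts is what recovers the global one, which is the same logic the abstract applies to multimodal design spaces.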

  11. Stability of Iowa mutant and wild type Aβ-peptide aggregates

    NASA Astrophysics Data System (ADS)

    Alred, Erik J.; Scheele, Emily G.; Berhanu, Workalemahu M.; Hansmann, Ulrich H. E.

    2014-11-01

    Recent experiments indicate a connection between the structure of amyloid aggregates and their cytotoxicity as related to neurodegenerative diseases. Of particular interest is the Iowa Mutant, which causes early-onset of Alzheimer's disease. While wild-type Amyloid β-peptides form only parallel beta-sheet aggregates, the mutant also forms meta-stable antiparallel beta sheets. Since these structural variations may cause the difference in the pathological effects of the two Aβ-peptides, we have studied in silico the relative stability of the wild type and Iowa mutant in both parallel and antiparallel forms. We compare regular molecular dynamics simulations with such where the viscosity of the samples is reduced, which, we show, leads to higher sampling efficiency. By analyzing and comparing these four sets of all-atom molecular dynamics simulations, we probe the role of the various factors that could lead to the structural differences. Our analysis indicates that the parallel forms of both wild type and Iowa mutant aggregates are stable, while the antiparallel aggregates are meta-stable for the Iowa mutant and not stable for the wild type. The differences result from the direct alignment of hydrophobic interactions in the in-register parallel oligomers, making them more stable than the antiparallel aggregates. The slightly higher thermodynamic stability of the Iowa mutant fibril-like oligomers in its parallel organization over that in antiparallel form is supported by previous experimental measurements showing slow inter-conversion of antiparallel aggregates into parallel ones. Knowledge of the mechanism that selects between parallel and antiparallel conformations and determines their relative stability may open new avenues for the development of therapies targeting familial forms of early-onset Alzheimer's disease.

  12. Advanced techniques in reliability model representation and solution

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Nicol, David M.

    1992-01-01

    The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.
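
As a toy illustration of the kind of Markov reliability model that SURE-style tools solve (not the NASA tools' actual algorithms), the sketch below Euler-integrates the Kolmogorov forward equations for a two-unit parallel system and checks the result against the closed-form unreliability:

```python
import math

# Toy Markov reliability model: two redundant units, each failing at rate lam,
# no repair. States: 0 = both up, 1 = one up, 2 = system failed (absorbing).
def unreliability(lam, t, steps=20000):
    """Euler-integrate the Kolmogorov forward equations; return P(state 2 at t)."""
    p0, p1, p2 = 1.0, 0.0, 0.0
    dt = t / steps
    for _ in range(steps):
        d0 = -2.0 * lam * p0
        d1 = 2.0 * lam * p0 - lam * p1
        d2 = lam * p1
        p0, p1, p2 = p0 + d0 * dt, p1 + d1 * dt, p2 + d2 * dt
    return p2

lam = 1e-3        # per-hour failure rate (illustrative)
t = 1000.0        # mission time, hours
q = unreliability(lam, t)
exact = (1.0 - math.exp(-lam * t)) ** 2   # closed form for this simple chain
```

Tools like SURE handle semi-Markov models with non-exponential holding times and far larger state spaces; this sketch covers only the simplest exponential case.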

  13. Evaluating Statistical Targets for Assembling Parallel Mixed-Format Test Forms

    ERIC Educational Resources Information Center

    Debeer, Dries; Ali, Usama S.; van Rijn, Peter W.

    2017-01-01

    Test assembly is the process of selecting items from an item pool to form one or more new test forms. Often new test forms are constructed to be parallel with an existing (or an ideal) test. Within the context of item response theory, the test information function (TIF) or the test characteristic curve (TCC) are commonly used as statistical…
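
Matching a target test information function (TIF) can be made concrete with a simple greedy heuristic. This is a sketch with synthetic 2PL item parameters, not one of the assembly methods evaluated in the article:

```python
import math, random

def info(a, b, theta):
    """Fisher information of a 2PL item (discrimination a, difficulty b) at theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def assemble_to_target(bank, target, thetas, length):
    """Greedy: repeatedly add the bank item that brings the form's TIF
    closest (in squared deviation over a theta grid) to the target TIF."""
    form, tif = [], [0.0] * len(thetas)
    remaining = list(bank)
    for _ in range(length):
        def deviation(item):
            return sum((tif[k] + info(item[0], item[1], th) - target[k]) ** 2
                       for k, th in enumerate(thetas))
        best = min(remaining, key=deviation)
        remaining.remove(best)
        form.append(best)
        tif = [tif[k] + info(best[0], best[1], th) for k, th in enumerate(thetas)]
    return form, tif

random.seed(3)
bank = [(random.uniform(0.8, 2.0), random.uniform(-2.0, 2.0)) for _ in range(60)]
thetas = [-1.5, 0.0, 1.5]
# Target TIF taken from a hypothetical 10-item reference form
reference = bank[:10]
target = [sum(info(a, b, th) for a, b in reference) for th in thetas]
form, tif = assemble_to_target(bank[10:], target, thetas, 10)
```

Greedy selection ignores constraints such as content balance and item overlap, which is why operational test assembly uses the richer optimization targets the article evaluates.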

  14. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion system mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources are listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  15. Parallel Mitogenome Sequencing Alleviates Random Rooting Effect in Phylogeography.

    PubMed

    Hirase, Shotaro; Takeshima, Hirohiko; Nishida, Mutsumi; Iwasaki, Wataru

    2016-04-28

    Reliably rooted phylogenetic trees play irreplaceable roles in clarifying diversification in the patterns of species and populations. However, such trees are often unavailable in phylogeographic studies, particularly when the focus is on rapidly expanded populations that exhibit star-like trees. A fundamental bottleneck is known as the random rooting effect, where a distant outgroup tends to root an unrooted tree "randomly." We investigated whether parallel mitochondrial genome (mitogenome) sequencing alleviates this effect in phylogeography using a case study on the Sea of Japan lineage of the intertidal goby Chaenogobius annularis. Eighty-three C. annularis individuals were collected and their mitogenomes were determined by high-throughput and low-cost parallel sequencing. Phylogenetic analysis of these mitogenome sequences was conducted to root the Sea of Japan lineage, which has a star-like phylogeny and had not been reliably rooted. The topologies of the bootstrap trees were investigated to determine whether the use of mitogenomes alleviated the random rooting effect. The mitogenome data successfully rooted the Sea of Japan lineage by alleviating the effect, which hindered phylogenetic analysis that used specific gene sequences. The reliable rooting of the lineage led to the discovery of a novel, northern lineage that expanded during an interglacial period with high bootstrap support. Furthermore, the finding of this lineage suggested the existence of additional glacial refugia and provided a new recent calibration point that revised the divergence time estimation between the Sea of Japan and Pacific Ocean lineages. This study illustrates the effectiveness of parallel mitogenome sequencing for solving the random rooting problem in phylogeographic studies.

  16. Look-ahead Dynamic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-20

    The look-ahead dynamic simulation software system incorporates high-performance parallel computing technologies, significantly reduces the solution time for each transient simulation case, and brings dynamic simulation analysis into on-line applications to enable more transparency for better reliability and asset utilization. It takes a snapshot of the current power grid status, runs the system dynamic simulation in parallel, and outputs the transient response of the power system in real time.

  17. Standardization and validation of a parallel form of the verbal and non-verbal recognition memory test in an Italian population sample.

    PubMed

    Smirni, Daniela; Smirni, Pietro; Di Martino, Giovanni; Cipolotti, Lisa; Oliveri, Massimiliano; Turriziani, Patrizia

    2018-05-04

    In the neuropsychological assessment of several neurological conditions, evaluation of recognition memory is required. Recognition seems more appropriate than recall for studying verbal and non-verbal memory, because interference from psychological and emotional disorders is less relevant in recognition than in recall memory paradigms. In many neurological disorders, repeated longitudinal assessments are needed to monitor the effectiveness of rehabilitation programs or pharmacological treatments on the recovery of memory. To contain practice effects in repeated neuropsychological evaluations, it is necessary to use parallel forms of the tests. Having two parallel forms of the same test that keep administration procedures and scoring constant is a great advantage both in clinical practice, for monitoring memory disorders, and in experimental practice, allowing repeated evaluation of memory in healthy and neurological subjects. The first aim of the present study was to provide normative values in an Italian sample (n = 160) for a parallel form of a verbal and non-verbal recognition memory battery. Multiple regression analysis revealed significant effects of age and education on recognition memory performance, whereas sex did not reach a significant probability level. Inferential cutoffs were determined and equivalent scores computed. Second, the study aimed to validate the equivalence of the two parallel forms of the Recognition Memory Test. Correlation analyses between the total scores of the two versions of the test, and between the three subtasks, revealed that the two forms are parallel and the subtasks are equivalent in difficulty.

  18. A novel transition pathway of ligand-induced topological conversion from hybrid forms to parallel forms of human telomeric G-quadruplexes

    PubMed Central

    Wang, Zi-Fu; Li, Ming-Hao; Chen, Wei-Wen; Hsu, Shang-Te Danny; Chang, Ta-Chau

    2016-01-01

    The folding topology of DNA G-quadruplexes (G4s) depends not only on their nucleotide sequences but also on environmental factors and/or ligand binding. Here, a G4 ligand, 3,6-bis(1-methyl-4-vinylpyridium iodide)-9-(1-(1-methyl-piperidinium iodide)-3,6,9-trioxaundecane) carbazole (BMVC-8C3O), can induce topological conversion of non-parallel to parallel forms in human telomeric DNA G4s. Nuclear magnetic resonance (NMR) spectroscopy with hydrogen-deuterium exchange (HDX) reveals the presence of persistent imino proton signals corresponding to the central G-quartet during topological conversion of Tel23 and Tel25 G4s from hybrid to parallel forms, implying that the transition pathway mainly involves local rearrangements. In contrast, rapid HDX was observed during the transition of 22-CTA G4 from an anti-parallel form to a parallel form, resulting in complete disappearance of all the imino proton signals, suggesting the involvement of substantial unfolding events associated with the topological transition. Site-specific imino proton NMR assignments of Tel23 G4 enable determination of the interconversion rates of individual guanine bases and detection of the presence of intermediate states. Since the rate of ligand binding is much higher than the rate of ligand-induced topological conversion, a three-state kinetic model was evoked to establish the associated energy diagram for the topological conversion of Tel23 G4 induced by BMVC-8C3O. PMID:26975658

  19. A general method to eliminate laboratory induced recombinants during massive, parallel sequencing of cDNA library.

    PubMed

    Waugh, Caryll; Cromer, Deborah; Grimm, Andrew; Chopra, Abha; Mallal, Simon; Davenport, Miles; Mak, Johnson

    2015-04-09

    Massive, parallel sequencing is a potent tool for dissecting the regulation of biological processes by revealing the dynamics of the cellular RNA profile under different conditions. Similarly, massive, parallel sequencing can be used to reveal the complexity of viral quasispecies that are often found in the RNA virus infected host. However, the production of cDNA libraries for next-generation sequencing (NGS) necessitates the reverse transcription of RNA into cDNA and the amplification of the cDNA template using PCR, which may introduce artefacts in the form of phantom nucleic acid species that can bias the composition and interpretation of the original RNA profiles. Using HIV as a model, we have characterised the major sources of error during the conversion of viral RNA to cDNA, namely excess RNA template and the RNaseH activity of the polymerase enzyme, reverse transcriptase. In addition, we have analysed the effect of PCR cycle number on the detection of recombinants and assessed the contribution of transfection of highly similar plasmid DNA to the formation of recombinant species during the production of our control viruses. We have identified RNA template concentration, RNaseH activity of reverse transcriptase, and PCR conditions as key parameters that must be carefully optimised to minimise chimeric artefacts. Using our optimised RT-PCR conditions, in combination with our modified PCR amplification procedure, we have developed a reliable technique for accurate determination of RNA species using NGS technology.

  20. Overdistribution illusions: Categorical judgments produce them, confidence ratings reduce them.

    PubMed

    Brainerd, C J; Nakamura, K; Reyna, V F; Holliday, R E

    2017-01-01

    Overdistribution is a form of memory distortion in which an event is remembered as belonging to too many episodic states, states that are logically or empirically incompatible with each other. We investigated a response formatting method of suppressing 2 basic types of overdistribution, disjunction and conjunction illusions, which parallel some classic illusions in the judgment and decision making literature. In this method, subjects respond to memory probes by rating their confidence that test cues belong to specific episodic states (e.g., presented on List 1, presented on List 2), rather than by making the usual categorical judgments about those states. The central prediction, which was derived from the task calibration principle of fuzzy-trace theory, was that confidence ratings should reduce overdistribution by diminishing subjects' reliance on noncompensatory gist memories. The data of 3 experiments agreed with that prediction. In Experiment 1, there were reliable disjunction illusions with categorical judgments but not with confidence ratings. In Experiment 2, both response formats produced reliable disjunction illusions, but those for confidence ratings were much smaller than those for categorical judgments. In Experiment 3, there were reliable conjunction illusions with categorical judgments but not with confidence ratings. Apropos of recent controversies over confidence-accuracy correlations in memory, such correlations were positive for hits, negative for correct rejections, and the 2 types of correlations were of equal magnitude.
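
The logic of a disjunction illusion can be made concrete with a few lines of arithmetic on hypothetical acceptance rates (the numbers below are illustrative, not the experiments' data):

```python
# Hypothetical acceptance rates for the same studied cues under three probes
p_list1 = 0.55        # "Was it on List 1?"
p_list2 = 0.48        # "Was it on List 2?"
p_either = 0.80       # "Was it on List 1 or List 2?"

# An item can belong to at most one list, so internally consistent episodic
# judgments require p_list1 + p_list2 <= p_either. Any positive excess is
# overdistribution: the item is "remembered" into too many episodic states.
overdistribution = (p_list1 + p_list2) - p_either
```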

  1. Overdistribution Illusions: Categorical Judgments Produce Them, Confidence Ratings Reduce Them

    PubMed Central

    Brainerd, C. J.; Nakamura, K.; Reyna, V. F.; Holliday, R. E.

    2017-01-01

    Overdistribution is a form of memory distortion in which an event is remembered as belonging to too many episodic states, states that are logically or empirically incompatible with each other. We investigated a response formatting method of suppressing two basic types of overdistribution, disjunction and conjunction illusions, which parallel some classic illusions in the judgment and decision making literature. In this method, subjects respond to memory probes by rating their confidence that test cues belong to specific episodic states (e.g., presented on List 1, presented on List 2), rather than by making the usual categorical judgments about those states. The central prediction, which was derived from the task calibration principle of fuzzy-trace theory, was that confidence ratings should reduce overdistribution by diminishing subjects’ reliance on noncompensatory gist memories. The data of three experiments agreed with that prediction. In Experiment 1, there were reliable disjunction illusions with categorical judgments but not with confidence ratings. In Experiment 2, both response formats produced reliable disjunction illusions, but those for confidence ratings were much smaller than those for categorical judgments. In Experiment 3, there were reliable conjunction illusions with categorical judgments but not with confidence ratings. Apropos of recent controversies over confidence-accuracy correlations in memory, such correlations were positive for hits, negative for correct rejections, and the two types of correlations were of equal magnitude. PMID:28054811

  2. The effect of cell design and test criteria on the series/parallel performance of nickel cadmium cells and batteries

    NASA Technical Reports Server (NTRS)

    Halpert, G.; Webb, D. A.

    1983-01-01

    Three batteries were operated in parallel from a common bus during charge and discharge. SMM utilized NASA Standard 20 AH cells and batteries, and LANDSAT-D utilized NASA 50 AH cells and batteries of a similar design. Each battery consisted of 22 series-connected cells providing the nominal 28 V bus. The three batteries were charged in parallel using the voltage limit/current taper mode, wherein the voltage limit was temperature compensated. Discharge occurred on the demand of the spacecraft instruments and electronics. Both flights were planned for three- to five-year missions. The series/parallel configuration of cells and batteries for the 3-5 yr missions required a well-controlled product with built-in reliability and uniformity. Examples are given of how component, cell, and battery selection methods affect the uniformity of the series/parallel operation of the batteries, both in testing and in flight.

  3. Improving reliability of a residency interview process.

    PubMed

    Peeters, Michael J; Serres, Michelle L; Gundrum, Todd E

    2013-10-14

    To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. In phase 1 of the study, authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. In phase 1, the evaluation form had a reliability of 0.98 with a person separation of 6.56; reproducibly, the form separated applicants into 6 distinct groups. Using that form in phases 2 and 3, our largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while the confirmatory phase 3 g-coefficient was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact of content specificity.
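
The g-coefficients above come from generalizability theory. The way reliability rises with the number of stations can be sketched with hypothetical variance components (not the ones estimated in the study):

```python
def g_coefficient(var_person, var_residual, n_stations):
    """Generalizability (g) coefficient for a persons x stations design:
    true-score (candidate) variance over observed-score variance of the
    mean rating across n_stations stations."""
    return var_person / (var_person + var_residual / n_stations)

# Hypothetical variance components: candidate variance and the
# candidate-by-station residual (which carries content specificity)
var_p, var_res = 1.0, 1.2
g4 = g_coefficient(var_p, var_res, 4)   # 4 interview stations
g8 = g_coefficient(var_p, var_res, 8)   # doubling the stations raises g
```

Averaging over more stations shrinks the residual's share of observed variance, which is the mechanism behind "process reliability improved with more stations."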

  4. Progress on the Multiphysics Capabilities of the Parallel Electromagnetic ACE3P Simulation Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kononenko, Oleksiy

    2015-03-26

    ACE3P is a 3D parallel simulation suite that is being developed at SLAC National Accelerator Laboratory. Effectively utilizing supercomputer resources, ACE3P has become a key tool for the coupled electromagnetic, thermal and mechanical research and design of particle accelerators. Based on the existing finite-element infrastructure, a massively parallel eigensolver is developed for modal analysis of mechanical structures. It complements a set of the multiphysics tools in ACE3P and, in particular, can be used for the comprehensive study of microphonics in accelerating cavities ensuring the operational reliability of a particle accelerator.

  5. Measuring health-related quality of life in young adolescents: reliability and validity in the Norwegian version of the Pediatric Quality of Life Inventory 4.0 (PedsQL) generic core scales.

    PubMed

    Reinfjell, Trude; Diseth, Trond H; Veenstra, Marijke; Vikan, Arne

    2006-09-14

    Health-Related Quality of Life (HRQOL) studies concerning children and adolescents are a growing field of research. The Pediatric Quality of Life Inventory (PedsQL) is considered a promising HRQOL instrument, with age-appropriate versions and parallel forms available for both child and parents. The purpose of the current study was to evaluate the psychometric properties of the Norwegian translation of the Pediatric Quality of Life Inventory (PedsQL) 4.0 generic core scales in a sample of healthy young adolescents. A cross-sectional study of 425 healthy young adolescents and 237 of their caregivers participating as proxies was conducted. Reliability was assessed by Cronbach's alpha. Construct validity was assessed using exploratory factor analysis and by exploring the intercorrelations between and among the four PedsQL subscales for adolescents and their parents. All the self-report scales and proxy-report scales showed satisfactory reliability, with Cronbach's alpha varying between 0.77 and 0.88. Factor analysis showed results comparable with the original version, except for the Physical Health scale. On average, monotrait-multimethod correlations were higher than multitrait-multimethod correlations. Sex differences were noted on the emotional functioning subscale: girls reported lower HRQOL than boys. The Norwegian PedsQL is a valid and reliable generic pediatric health-related quality of life measure that can be recommended for self-reports and proxy-reports for children in the age group 13-15 years.
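
Cronbach's alpha, the reliability index used above, can be computed directly from item-level scores. A minimal sketch with hypothetical data (not the PedsQL items):

```python
def cronbach_alpha(items):
    """items: list of per-item score lists, all over the same respondents.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 4-item subscale scored 1-5 by 6 respondents
items = [
    [3, 4, 2, 5, 4, 3],
    [3, 5, 2, 4, 4, 2],
    [2, 4, 3, 5, 3, 3],
    [3, 4, 2, 5, 5, 3],
]
alpha = cronbach_alpha(items)
```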

  6. Transverse conformal Killing forms on Kähler foliations

    NASA Astrophysics Data System (ADS)

    Jung, Seoung Dal

    2015-04-01

    On a closed, connected Riemannian manifold with a Kähler foliation of codimension q = 2m, any transverse Killing r-form (r ≥ 2) is parallel (Jung and Jung, 2012). In this paper, we study transverse conformal Killing forms on Kähler foliations. In fact, if the foliation is minimal, then for any transverse conformal Killing r-form ϕ (2 ≤ r ≤ q − 2), Jϕ is parallel. Here J is defined in Section 4.
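
For context, a conformal Killing r-form on an n-dimensional Riemannian manifold is commonly defined (in Semmelmann's convention; the transverse case replaces n by the codimension q and the connection by the transverse Levi-Civita connection) by the equation below. This is background notation, not a formula from the paper itself:

```latex
\nabla_X \varphi \;=\; \frac{1}{r+1}\,\iota_X \,\mathrm{d}\varphi
\;-\; \frac{1}{n-r+1}\, X^{\flat} \wedge \delta\varphi ,
```

where X is any vector field, ι_X is the interior product, X♭ the metric-dual 1-form, and δ the codifferential. Killing forms are the special case δφ = 0, and parallel forms (as in the abstract's conclusion) satisfy ∇φ = 0.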

  7. Evaluation of Job Queuing/Scheduling Software: Phase I Report

    NASA Technical Reports Server (NTRS)

    Jones, James Patton

    1996-01-01

    The recent proliferation of high-performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, the Numerical Aerodynamic Simulation (NAS) supercomputer facility compiled a requirements checklist for job queuing/scheduling software. Next, NAS began an evaluation of the leading job management system (JMS) software packages against the checklist. This report describes the three-phase evaluation process and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still insufficient, even in the leading JMSs. However, by ranking each JMS evaluated against the requirements, we provide data that will be useful to other sites in selecting a JMS.

  8. Improving Reliability of a Residency Interview Process

    PubMed Central

    Serres, Michelle L.; Gundrum, Todd E.

    2013-01-01

    Objective. To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. Methods. In phase 1 of the study, authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. Results. In phase 1, the evaluation form had a reliability of 0.98 with a person separation of 6.56; reproducibly, the form separated applicants into 6 distinct groups. Using that form in phases 2 and 3, our largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while the confirmatory phase 3 g-coefficient was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. Conclusion. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact of content specificity. PMID:24159209

  9. A 1 MA, variable risetime pulse generator for high energy density plasma research

    NASA Astrophysics Data System (ADS)

    Greenly, J. B.; Douglas, J. D.; Hammer, D. A.; Kusse, B. R.; Glidden, S. C.; Sanders, H. D.

    2008-07-01

    COBRA is a 0.5 Ω pulse generator driving loads of order 10 nH inductance at currents above 1 MA. The design is based on independently timed, laser-triggered switching of four water pulse-forming lines whose outputs are added in parallel to drive the load current pulse. The detailed design and operation of the switching to give a wide variety of current pulse shapes and rise times from 95 to 230 ns are described. The design and operation of a simple inductive load voltage monitor are described, which allows good accounting of load impedance and energy dissipation. A method of eliminating gas bubbles on the underside of nearly horizontal insulator surfaces in water was required for reliable operation of COBRA; a novel and effective solution to this problem is described.

  10. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…
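
A compact sketch of the genetic-algorithm idea for this problem follows, with a synthetic item bank and simplified operators (illustrative only, not the authors' implementation): chromosomes are candidate item subsets, and fitness is the squared deviation of the form's test information function (TIF) from a target.

```python
import math, random

def info(a, b, theta):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def tif(form, bank, thetas):
    return [sum(info(*bank[i], th) for i in form) for th in thetas]

def loss(form, bank, thetas, target):
    return sum((t - g) ** 2 for t, g in zip(tif(form, bank, thetas), target))

def mutate(form, n_bank, rate=0.2):
    """Occasionally swap one selected item for an unused bank item."""
    form = list(form)
    if random.random() < rate:
        out = random.randrange(len(form))
        form[out] = random.choice([i for i in range(n_bank) if i not in form])
    return form

def crossover(p1, p2, length):
    """Child = random length-sized subset of the parents' combined items."""
    merged = list(dict.fromkeys(list(p1) + list(p2)))
    random.shuffle(merged)
    return merged[:length]

def ga_assemble(bank, target, thetas, length, pop=40, gens=60):
    population = [random.sample(range(len(bank)), length) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda f: loss(f, bank, thetas, target))
        parents = population[: pop // 2]                 # elitist selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents), length), len(bank))
                    for _ in range(pop - len(parents))]
        population = parents + children
    return min(population, key=lambda f: loss(f, bank, thetas, target))

random.seed(7)
items = [(random.uniform(0.8, 2.0), random.uniform(-2.0, 2.0)) for _ in range(70)]
thetas = [-1.0, 0.0, 1.0]
reference, bank = items[:10], items[10:]   # parallel the reference from fresh items
target = [sum(info(a, b, th) for a, b in reference) for th in thetas]
best = ga_assemble(bank, target, thetas, 10)
```

Real assembly adds constraints (content coverage, enemy items, exposure), which enter the GA as penalty terms in the fitness function.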

  11. Micro/Nanoscale Parallel Patterning of Functional Biomolecules, Organic Fluorophores and Colloidal Nanocrystals

    PubMed Central

    2009-01-01

    We describe the design and optimization of a reliable strategy that combines self-assembly and lithographic techniques, leading to very precise micro-/nanopositioning of biomolecules for the realization of micro- and nanoarrays of functional DNA and antibodies. Moreover, based on the covalent immobilization of stable and versatile SAMs of programmable chemical reactivity, this approach constitutes a general platform for the parallel site-specific deposition of a wide range of molecules such as organic fluorophores and water-soluble colloidal nanocrystals. PMID:20596482

  12. Insight into a conformation of the PNA-PNA duplex with (2′R,4′R)- and (2′R,4′S)-prolyl-(1S,2S)-2-aminocyclopentanecarboxylic acid backbones

    NASA Astrophysics Data System (ADS)

    Maitarad, Amphawan; Poomsuk, Nattawee; Vilaivan, Chotima; Vilaivan, Tirayut; Siriwong, Khatcharin

    2018-04-01

    Suitable conformations for peptide nucleic acid (PNA) self-hybrids with (2′R,4′R)- and (2′R,4′S)-prolyl-(1S,2S)-2-aminocyclopentanecarboxylic acid backbones (namely, acpcPNA and epi-acpcPNA, respectively) were investigated based on molecular dynamics simulations. The results revealed that hybridization of the acpcPNA was observed only in the parallel direction, with a conformation close to the P-type structure. In contrast, self-hybrids of the epi-acpcPNA were formed in the antiparallel and parallel directions; the antiparallel duplex adopted the B-form conformation, and the parallel duplex was between B- and P-forms. The calculated binding energies and the experimental data indicate that the antiparallel epi-acpcPNA self-hybrid was more stable than the parallel duplex.

  13. 1060-nm VCSEL-based parallel-optical modules for optical interconnects

    NASA Astrophysics Data System (ADS)

    Nishimura, N.; Nagashima, K.; Kise, T.; Rizky, A. F.; Uemura, T.; Nekado, Y.; Ishikawa, Y.; Nasu, H.

    2015-03-01

    The capability of mounting a parallel-optical module onto a PCB through a solder-reflow process helps reduce the number of piece parts, simplify the assembly process, and minimize the footprint for both AOC and on-board applications. We introduce solder-reflow-capable parallel-optical modules employing 1060-nm InGaAs/GaAs VCSELs, which offer the advantages of wider modulation bandwidth, longer transmission distance, and higher reliability. We demonstrate 4-channel parallel optical link performance at a bit stream of 28 Gb/s 2^31-1 PRBS per channel, transmitted through a 50-μm-core MMF beyond 500 m. We also introduce a new mounting technology for parallel-optical modules that maintains good coupling and robust electrical connection during the solder-reflow process between an optical module and a polymer-waveguide-embedded PCB.

  14. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

  15. An Evaluation of Different Statistical Targets for Assembling Parallel Forms in Item Response Theory

    PubMed Central

    Ali, Usama S.; van Rijn, Peter W.

    2015-01-01

    Assembly of parallel forms is an important step in the test development process. Therefore, choosing a suitable theoretical framework to generate well-defined test specifications is critical. The performance of different statistical targets of test specifications using the test characteristic curve (TCC) and the test information function (TIF) was investigated. Test length, the number of test forms, and content specifications are considered as well. The TCC target results in forms that are parallel in difficulty, but not necessarily in terms of precision. Conversely, test forms created using a TIF target are parallel in terms of precision, but not necessarily in terms of difficulty. As sometimes the focus is on either TIF or TCC, differences in either difficulty or precision can arise. Differences in difficulty can be mitigated by equating, but differences in precision cannot. In a series of simulations using a real item bank, the two-parameter logistic model, and mixed integer linear programming for automated test assembly, these differences were found to be quite substantial. When both TIF and TCC are combined into one target, with their relative importance manipulated, these differences can be made to disappear.
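    Under the two-parameter logistic (2PL) model named in the abstract, both statistical targets are simple sums over items: the TCC is the sum of item response probabilities (expected raw score), and the TIF is the sum of item informations. A minimal sketch:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def tcc(items, theta):
    """Test characteristic curve: expected raw score at theta."""
    return sum(p_2pl(theta, a, b) for a, b in items)

def tif(items, theta):
    """Test information function: sum of a^2 * p * (1 - p) over items."""
    return sum(a * a * p * (1.0 - p)
               for a, b in items
               for p in [p_2pl(theta, a, b)])
```

    Two forms can match on one target but not the other, which is the difficulty-versus-precision trade-off the study investigates.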

  16. Reliability of Three Benton Judgment of Line Orientation Short Forms in Idiopathic Parkinson’s Disease

    PubMed Central

    Gullett, Joseph M.; Price, Catherine C.; Nguyen, Peter; Okun, Michael S.; Bauer, Russell M.; Bowers, Dawn

    2013-01-01

    Individuals with Parkinson’s disease (PD) often exhibit deficits in visuospatial functioning throughout the course of their disease. These deficits should be carefully assessed as they may have implications for patient safety and disease severity. One of the most commonly administered tests of visuospatial ability, the Benton Judgment of Line Orientation (JLO), consists of 30 pairs of lines requiring the patient to match the orientation of two lines to an array of 11 lines on a separate page. Reliable short forms have been constructed out of the full JLO form, but the reliability of these forms in PD has yet to be examined. Recent functional MRI studies examining the JLO demonstrate right parietal and occipital activation, as well as bilateral frontal activation and PD is known to adversely affect these pathways. We compared the reliability of the original full form to three unique short forms in a sample of 141 non-demented, idiopathic PD patients and 56 age and education matched controls. Results indicated that a two-thirds length short form can be used with high reliability and classification accuracy in patients with idiopathic PD. The other short forms performed in a similar, though slightly less reliable manner. PMID:23957375
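    How reliability is expected to change for a shortened form such as the two-thirds-length JLO can be sketched with the classical Spearman-Brown formula; this is an illustrative projection, not the empirical analysis the study performed.

```python
def spearman_brown(r_full, length_factor):
    """Predicted reliability when test length is multiplied by length_factor
    (e.g. 2/3 for a two-thirds-length short form), via Spearman-Brown."""
    return (length_factor * r_full) / (1.0 + (length_factor - 1.0) * r_full)
```

    For a full form with reliability 0.90, a two-thirds-length form is projected at about 0.86, consistent with the general finding that modest shortening costs little reliability.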

  17. A novel milliliter-scale chemostat system for parallel cultivation of microorganisms in stirred-tank bioreactors.

    PubMed

    Schmideder, Andreas; Severin, Timm Steffen; Cremer, Johannes Heinrich; Weuster-Botz, Dirk

    2015-09-20

    A pH-controlled parallel stirred-tank bioreactor system was modified for parallel continuous cultivation on a 10 mL scale by connecting multichannel peristaltic pumps for feeding and medium removal with micro-pipes (250 μm inner diameter). Parallel chemostat processes with Escherichia coli as an example showed high reproducibility with regard to culture volume and flow rates as well as dry cell weight, dissolved oxygen concentration and pH control at steady states (n=8, coefficient of variation <5%). Reliable estimation of kinetic growth parameters of E. coli was easily achieved within one parallel experiment by preselecting ten different steady states. Scalability of milliliter-scale steady-state results was demonstrated by chemostat studies with a stirred-tank bioreactor on a liter scale. Thus, parallel and continuously operated stirred-tank bioreactors on a milliliter scale facilitate time-saving and cost-reducing steady-state studies with microorganisms. The applied continuous bioreactor system overcomes the drawbacks of existing miniaturized bioreactors, such as poor mass transfer and insufficient process control. Copyright © 2015 Elsevier B.V. All rights reserved.
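    Estimating kinetic growth parameters from preselected steady states can be sketched as follows, assuming Monod kinetics: at steady state the dilution rate D equals the growth rate, D = μmax·S/(Ks+S), so (μmax, Ks) follow from a linear fit of 1/D against 1/S. This is a simplified illustration, not the authors' procedure.

```python
def fit_monod(dilution_rates, substrates):
    """Estimate (mu_max, Ks) from chemostat steady states via the
    Lineweaver-Burk linearization 1/D = (Ks/mu_max)*(1/S) + 1/mu_max."""
    xs = [1.0 / s for s in substrates]       # 1/S
    ys = [1.0 / d for d in dilution_rates]   # 1/D
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    mu_max = 1.0 / intercept
    ks = slope * mu_max
    return mu_max, ks
```

    With ten steady states at different dilution rates, one parallel run yields the whole fit, which is the time saving the abstract points to.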

  18. Statistical evaluation of synchronous spike patterns extracted by frequent item set mining

    PubMed Central

    Torre, Emiliano; Picado-Muiño, David; Denker, Michael; Borgelt, Christian; Grün, Sonja

    2013-01-01

    We recently proposed frequent itemset mining (FIM) as a method to perform an optimized search for patterns of synchronous spikes (item sets) in massively parallel spike trains. This search outputs the occurrence count (support) of individual patterns that are not trivially explained by the counts of any superset (closed frequent item sets). The number of patterns found by FIM makes direct statistical tests infeasible due to severe multiple testing. To overcome this issue, we proposed to test the significance not of individual patterns, but instead of their signatures, defined as the pairs of pattern size z and support c. Here, we derive in detail a statistical test for the significance of the signatures under the null hypothesis of full independence (pattern spectrum filtering, PSF) by means of surrogate data. As a result, injected spike patterns that mimic assembly activity are well detected, yielding a low false negative rate. However, this approach is prone to additionally classify patterns resulting from chance overlap of real assembly activity and background spiking as significant. These patterns represent false positives with respect to the null hypothesis of having one assembly of given signature embedded in otherwise independent spiking activity. We propose the additional method of pattern set reduction (PSR) to remove these false positives by conditional filtering. By employing stochastic simulations of parallel spike trains with correlated activity in the form of injected spike synchrony in subsets of the neurons, we demonstrate for a range of parameter settings that the analysis scheme composed of FIM, PSF and PSR allows reliable detection of active assemblies in massively parallel spike trains. PMID:24167487
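    The signature idea can be shown with a brute-force sketch: tally the support c of every neuron subset of size z ≥ 2 that fires synchronously in binned parallel spike trains, then collapse the patterns into a spectrum over (z, c) pairs. This is not FIM itself — real data need the optimized closed-itemset search — and the size cap is an assumption for tractability.

```python
from collections import Counter
from itertools import combinations

def pattern_spectrum(bins, max_size=4):
    """Support of every synchronous pattern (neuron subset, size >= 2) across
    time bins, collapsed into (pattern size z, support c) signature counts."""
    support = Counter()
    for firing in bins:  # firing: set of neuron ids active in one time bin
        for z in range(2, min(len(firing), max_size) + 1):
            for pattern in combinations(sorted(firing), z):
                support[pattern] += 1
    return Counter((len(p), c) for p, c in support.items())
```

    PSF would then test each (z, c) signature against its surrogate-data distribution instead of testing every individual pattern.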

  19. Score Equating and Nominally Parallel Language Tests.

    ERIC Educational Resources Information Center

    Moy, Raymond

    Score equating requires that the forms to be equated are functionally parallel. That is, the two test forms should rank order examinees in a similar fashion. In language proficiency testing situations, this assumption is often called into doubt because of the numerous tests that have been proposed as measures of language proficiency and the…

  20. An Alternative Methodology for Creating Parallel Test Forms Using the IRT Information Function.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    The purpose of this paper is to report results on the development of a new computer-assisted methodology for creating parallel test forms using the item response theory (IRT) information function. Recently, several researchers have approached test construction from a mathematical programming perspective. However, these procedures require…

  1. The Potential Impact of Not Being Able to Create Parallel Tests on Expected Classification Accuracy

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2011-01-01

    In many practical testing situations, alternate test forms from the same testing program are not strictly parallel to each other and instead the test forms exhibit small psychometric differences. This article investigates the potential practical impact that these small psychometric differences can have on expected classification accuracy. Ten…

  2. Parallel language constructs for tensor product computations on loosely coupled architectures

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Van Rosendale, John

    1989-01-01

    A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The authors focus on tensor product array computations, a simple but important class of numerical algorithms. They consider first the problem of programming one-dimensional kernel routines, such as parallel tridiagonal solvers, and then look at how such parallel kernels can be combined to form parallel tensor product algorithms.

  3. Parallelization and automatic data distribution for nuclear reactor simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebrock, L.M.

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  4. Highly fault-tolerant parallel computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spielman, D.A.

    We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log^{O(1)} w processors and time t log^{O(1)} w. The failure probability of the computation will be at most t · exp(-w^{1/4}). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log^{O(1)} n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.

  5. Evaluation of Urinary Tract Dilation Classification System for Grading Postnatal Hydronephrosis.

    PubMed

    Hodhod, Amr; Capolicchio, John-Paul; Jednak, Roman; El-Sherif, Eid; El-Doray, Abd El-Alim; El-Sherbiny, Mohamed

    2016-03-01

    We assessed the reliability and validity of the Urinary Tract Dilation classification system as a new grading system for postnatal hydronephrosis. We retrospectively reviewed charts of patients who presented with hydronephrosis from 2008 to 2013. We included patients diagnosed prenatally and those with hydronephrosis discovered incidentally during the first year of life. We excluded cases involving urinary tract infection, neurogenic bladder and chromosomal anomalies, those associated with extraurinary congenital malformations and those with followup of less than 24 months without resolution. Hydronephrosis was graded postnatally using the Society for Fetal Urology system, and then the management protocol was chosen. All units were regraded using the Urinary Tract Dilation classification system and compared to the Society for Fetal Urology system to assess reliability. Univariate and multivariate analyses were performed to assess the validity of the Urinary Tract Dilation classification system in predicting hydronephrosis resolution and surgical intervention. A total of 490 patients (730 renal units) were eligible to participate. The Urinary Tract Dilation classification system was reliable in the assessment of hydronephrosis (parallel forms 0.92). Hydronephrosis resolved in 357 units (49%), and 86 units (12%) were managed by surgical intervention. The remainder of renal units demonstrated stable or improved hydronephrosis. Multivariate analysis revealed that the likelihood of surgical intervention was predicted independently by Urinary Tract Dilation classification system risk group, while Society for Fetal Urology grades were predictive of likelihood of resolution. The Urinary Tract Dilation classification system is reliable for evaluation of postnatal hydronephrosis and is valid in predicting surgical intervention. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
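    The study's reliability figure quantifies agreement between two grading systems applied to the same renal units. As a hedged illustration only (not the parallel-forms coefficient the study reports), a chance-corrected agreement statistic such as Cohen's kappa can be computed from paired grades:

```python
from collections import Counter

def cohens_kappa(grades_a, grades_b):
    """Chance-corrected agreement between two raters/grading systems
    applied to the same cases."""
    n = len(grades_a)
    observed = sum(a == b for a, b in zip(grades_a, grades_b)) / n
    ca, cb = Counter(grades_a), Counter(grades_b)
    expected = sum(ca[g] * cb[g] for g in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

    Kappa of 1 means perfect agreement; 0 means agreement no better than chance given each system's grade distribution.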

  6. Reliability and validity of the Persian lower extremity functional scale (LEFS) in a heterogeneous sample of outpatients with lower limb musculoskeletal disorders.

    PubMed

    Negahban, Hossein; Hessam, Masumeh; Tabatabaei, Saeid; Salehi, Reza; Sohani, Soheil Mansour; Mehravar, Mohammad

    2014-01-01

    The aim was to culturally translate and validate the Persian lower extremity functional scale (LEFS) in a heterogeneous sample of outpatients with lower extremity musculoskeletal disorders (n = 304). This is a prospective methodological study. After a standard forward-backward translation, psychometric properties were assessed in terms of test-retest reliability, internal consistency, construct validity, dimensionality, and ceiling or floor effects. The acceptable level of intraclass correlation coefficient >0.70 and Cronbach's alpha coefficient >0.70 was obtained for the Persian LEFS. Correlations between Persian LEFS and Short-Form 36 Health Survey (SF-36) subscales of the Physical Health component (rs range = 0.38-0.78) were higher than correlations between Persian LEFS and SF-36 subscales of the Mental Health component (rs range = 0.15-0.39). A corrected item-total correlation of >0.40 (Spearman's rho) was obtained for all items of the Persian LEFS. Horn's parallel analysis detected a total of two factors. No ceiling or floor effects were detected for the Persian LEFS. The Persian version of the LEFS is a reliable and valid instrument that can be used to measure functional status in Persian-speaking patients with different musculoskeletal disorders of the lower extremity. Implications for Rehabilitation The Persian lower extremity functional scale (LEFS) is a reliable, internally consistent and valid instrument, with no ceiling or floor effects, to determine functional status of heterogeneous patients with musculoskeletal disorders of the lower extremity. The Persian version of the LEFS can be used in clinical and research settings to measure function in Iranian patients with different musculoskeletal disorders of the lower extremity.

  7. Second Evaluation of Job Queuing/Scheduling Software. Phase 1

    NASA Technical Reports Server (NTRS)

    Jones, James Patton; Brickell, Cristy; Chancellor, Marisa (Technical Monitor)

    1997-01-01

    The recent proliferation of high performance workstations and the increased reliability of parallel systems have illustrated the need for robust job management systems to support parallel applications. To address this issue, NAS compiled a requirements checklist for job queuing/scheduling software. Next, NAS evaluated the leading job management system (JMS) software packages against the checklist. A year has now elapsed since the first comparison was published, and NAS has repeated the evaluation. This report describes this second evaluation and presents the results of Phase 1: Capabilities versus Requirements. We show that JMS support for running parallel applications on clusters of workstations and parallel systems is still lacking; however, definite progress has been made by the vendors to correct the deficiencies. This report is supplemented by a WWW interface to the data collected, to aid other sites in extracting the evaluation information on specific requirements of interest.

  8. [Parallelisms in the sound signal of domestic sheep and Northern fur seals].

    PubMed

    Nikol'skiĭ, A A; Lisitsina, T Iu

    2011-01-01

    The parallelisms in the communicative behavior of domestic sheep and Northern fur seals within a herd are accompanied by parallelisms in the parameters of a sound signal, the calling scream. This signal maintains ties between infants and their mothers over long distances. The basis of the parallelisms is amplitude modulation at two levels: direct amplitude modulation of the carrier frequency, and modulation of the carrier-frequency oscillation. Parallelisms in the signal's oscillatory process result in corresponding parallelisms in the structure of its frequency spectrum.

  9. Identifying the impact of G-quadruplexes on Affymetrix 3' arrays using cloud computing.

    PubMed

    Memon, Farhat N; Owen, Anne M; Sanchez-Graillet, Olivia; Upton, Graham J G; Harrison, Andrew P

    2010-01-15

    A tetramer quadruplex structure is formed by four parallel strands of DNA/RNA containing runs of guanine. These quadruplexes are able to form because guanine can Hoogsteen hydrogen bond to other guanines, and a tetrad of guanines can form a stable arrangement. Recently, we have discovered that probes on Affymetrix GeneChips that contain runs of guanine do not measure gene expression reliably. We associate this finding with the likelihood that quadruplexes are forming on the surface of GeneChips. In order to cope with the rapidly expanding size of GeneChip array datasets in the public domain, we are exploring the use of cloud computing to replicate our experiments on 3' arrays to look at the effect of the location of G-spots (runs of guanines). Cloud computing is a recently introduced high-performance solution that takes advantage of the computational infrastructure of large organisations such as Amazon and Google. We expect that cloud computing will become widely adopted because it enables bioinformaticians to avoid capital expenditure on expensive computing resources and to only pay a cloud computing provider for what is used. Moreover, as well as financial efficiency, cloud computing is an ecologically-friendly technology; it enables efficient data-sharing, and we expect it to be faster for development purposes. Here we propose the advantageous use of cloud computing to perform a large data-mining analysis of public domain 3' arrays.
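    Flagging the probes at issue amounts to scanning sequences for runs of guanine. A minimal sketch (the threshold of four consecutive Gs is an assumption for illustration, not a parameter taken from the paper):

```python
import re

def has_g_spot(probe, run_length=4):
    """True if the probe sequence contains a run of >= run_length guanines,
    a proxy for G-quadruplex-prone probes."""
    return re.search("G{%d,}" % run_length, probe.upper()) is not None
```

    Applied across a chip's probe set, such a filter separates G-run probes from the rest before comparing their expression measurements.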

  10. Coiled transmission line pulse generators

    DOEpatents

    McDonald, Kenneth Fox

    2010-11-09

    Methods and apparatus are provided for fabricating and constructing solid dielectric "Coiled Transmission Line" pulse generators in radial or axial coiled geometries. The pour and cure fabrication process enables a wide variety of geometries and form factors. The volume between the conductors is filled with liquid blends of monomers, polymers, oligomers, and/or cross-linkers and dielectric powders; and then cured to form high field strength and high dielectric constant solid dielectric transmission lines that intrinsically produce ideal rectangular high voltage pulses when charged and switched into matched impedance loads. Voltage levels may be increased by Marx and/or Blumlein principles incorporating spark gap or, preferentially, solid state switches (such as optically triggered thyristors) which produce reliable, high repetition rate operation. Moreover, these Marxed pulse generators can be DC charged and do not require additional pulse forming circuitry, pulse forming lines, transformers, or a high voltage spark gap output switch. The apparatus accommodates a wide range of voltages, impedances, pulse durations, pulse repetition rates, and duty cycles. The resulting mobile or flight platform friendly cylindrical geometric configuration is much more compact, light-weight, and robust than conventional linear geometries, or pulse generators constructed from conventional components. Optional pulse-shape improvements may be accommodated by installing additional circuitry. The Coiled Transmission Lines can also be connected in parallel to decrease the impedance, or in series to increase the pulse length.

  11. The Validation of Parallel Test Forms: "Mountain" and "Beach" Picture Series for Assessment of Language Skills

    ERIC Educational Resources Information Center

    Bae, Jungok; Lee, Yae-Sheik

    2011-01-01

    Pictures are widely used to elicit expressive language skills, and pictures must be established as parallel before changes in ability can be demonstrated by assessment using pictures prompts. Why parallel prompts are required and what it is necessary to do to ensure that prompts are in fact parallel is not widely known. To date, evidence of…

  12. Demographic Planning: An Action Approach

    ERIC Educational Resources Information Center

    Finch, Harold L.; Smith, Joyce

    1974-01-01

    Community colleges are in a good position to obtain reliable long-term forecasts of future demand. An approach developed at Johnson County Community College in Overland Park, Kansas, has enabled the college to assist other community institutions in their parallel planning efforts. (Author/MLF)

  13. High-throughput Titration of Luciferase-expressing Recombinant Viruses

    PubMed Central

    Garcia, Vanessa; Krishnan, Ramya; Davis, Colin; Batenchuk, Cory; Le Boeuf, Fabrice; Abdelbary, Hesham; Diallo, Jean-Simon

    2014-01-01

    Standard plaque assays to determine infectious viral titers can be time consuming, are not amenable to a high volume of samples, and cannot be done with viruses that do not form plaques. As an alternative to plaque assays, we have developed a high-throughput titration method that allows for the simultaneous titration of a high volume of samples in a single day. This approach involves infection of the samples with a Firefly luciferase tagged virus, transfer of the infected samples onto an appropriate permissive cell line, subsequent addition of luciferin, reading of plates in order to obtain luminescence readings, and finally the conversion from luminescence to viral titers. The assessment of cytotoxicity using a metabolic viability dye can be easily incorporated in the workflow in parallel and provide valuable information in the context of a drug screen. This technique provides a reliable, high-throughput method to determine viral titers as an alternative to a standard plaque assay. PMID:25285536

  14. Pure quasi-P wave equation and numerical solution in 3D TTI media

    NASA Astrophysics Data System (ADS)

    Zhang, Jian-Min; He, Bing-Shou; Tang, Huai-Gu

    2017-03-01

    Based on the pure quasi-P wave equation in transverse isotropic media with a vertical symmetry axis (VTI media), a quasi-P wave equation is obtained in transverse isotropic media with a tilted symmetry axis (TTI media). This is achieved using projection transformation, which rotates the direction vector in the coordinate system of observation toward the direction vector for the coordinate system in which the z-component is parallel to the symmetry axis of the TTI media. The equation has a simple form, is easily calculated, is not influenced by the pseudo-shear wave, and can be calculated reliably when δ is greater than ɛ. The finite difference method is used to solve the equation. In addition, a perfectly matched layer (PML) absorbing boundary condition is obtained for the equation. Theoretical analysis and numerical simulation results with forward modeling prove that the equation can accurately simulate a quasi-P wave in TTI medium.

  15. An SPR based immunoassay for the sensitive detection of the soluble epithelial marker E-cadherin.

    PubMed

    Vergara, Daniele; Bianco, Monica; Pagano, Rosanna; Priore, Paola; Lunetti, Paola; Guerra, Flora; Bettini, Simona; Carallo, Sonia; Zizzari, Alessandra; Pitotti, Elena; Giotta, Livia; Capobianco, Loredana; Bucci, Cecilia; Valli, Ludovico; Maffia, Michele; Arima, Valentina; Gaballo, Antonio

    2018-06-11

    Protein biomarkers are important diagnostic tools for cancer and several other diseases. To be validated in a clinical context, a biomarker should satisfy some requirements, including the ability to provide reliable information on a pathological state by measuring its expression levels. In parallel, the development of an approach capable of detecting biomarkers with high sensitivity and specificity would be ideally suited for clinical applications. Here, we performed a label-free immunoassay using Surface Plasmon Resonance (SPR)-based detection of the soluble form of E-cadherin, a cell-cell contact protein involved in maintaining tissue integrity. With this approach, we obtained specific and quantitative detection of E-cadherin from a few hundred μl of serum from breast cancer patients, achieving a 10-fold enhancement in the detection limit over a traditional colorimetric ELISA. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Oak Ridge Leadership Computing Facility Position Paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oral, H Sarp; Hill, Jason J; Thach, Kevin G

    This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in the architecture and administration of large-scale Lustre deployments as well as HPSS archival systems. Additionally, as these systems are architected, deployed, and expanded over time, reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Otto, C.; Thomas, G.A.; Peticolas, W.L.; Rippe, K.

    Raman spectra of the parallel-stranded duplex formed from the deoxyoligonucleotides 5′-d((A)₁₀TAATTTTAAATATTT)-3′ (D1) and 5′-d((T)₁₀ATTAAAATTTATAAA)-3′ (D2) in H₂O and D₂O have been acquired. The spectra of the parallel-stranded DNA are then compared to the spectra of the antiparallel double helix formed from the deoxyoligonucleotides D1 and 5′-d(AAATATTTAAAATTA-(T)₁₀)-3′ (D3). The Raman spectra of the antiparallel-stranded (aps) duplex are reminiscent of the spectra of poly(d(A))·poly(d(T)), and a B-form structure similar to that adopted by the homopolymer duplex is assigned to the antiparallel double helix. The spectra of the parallel-stranded (ps) and antiparallel-stranded duplexes differ significantly due to changes in helical organization, i.e., base pairing, base stacking, and backbone conformation. Large changes observed in the carbonyl stretching region implicate the involvement of the C(2) carbonyl of thymine in base pairing. The interaction of adenine with the C(2) carbonyl of thymine is consistent with formation of reverse Watson-Crick base pairing in parallel-stranded DNA. Phosphate-furanose vibrations similar to those observed for B-form DNA of heterogeneous sequence and high A,T content are observed at 843 and 1,092 cm⁻¹ in the spectra of the parallel-stranded duplex.

  18. Smart photonic networks and computer security for image data

    NASA Astrophysics Data System (ADS)

    Campello, Jorge; Gill, John T.; Morf, Martin; Flynn, Michael J.

    1998-02-01

    Work reported here is part of a larger project on 'Smart Photonic Networks and Computer Security for Image Data', studying the interactions of coding and security, switching architecture simulations, and basic technologies. Coding and security: coding methods that are appropriate for data security in data fusion networks were investigated. These networks have several characteristics that distinguish them from other currently employed networks, such as Ethernet LANs or the Internet. The most significant characteristics are very high maximum data rates; predominance of image data; narrowcasting - transmission of data from one source to a designated set of receivers; data fusion - combining related data from several sources; and simple sensor nodes with limited buffering. These characteristics affect both the lower level network design and the higher level coding methods. Data security encompasses privacy, integrity, reliability, and availability. Privacy, integrity, and reliability can be provided through encryption and coding for error detection and correction. Availability is primarily a network issue; network nodes must be protected against failure or routed around in the case of failure. One of the more promising techniques is the use of 'secret sharing'. We consider this method as a special case of our new space-time code diversity based algorithms for secure communication. These algorithms enable us to exploit parallelism and scalable multiplexing schemes to build photonic network architectures. A number of very high-speed switching and routing architectures and their relationships with very high performance processor architectures were studied. Indications are that routers for very high speed photonic networks can be designed using the very robust and distributed TCP/IP protocol, if suitable processor architecture support is available.

  19. A parallel algorithm for finding the shortest exit paths in mines

    NASA Astrophysics Data System (ADS)

    Jastrzab, Tomasz; Buchcik, Agata

    2017-11-01

    In the paper we study the problem of finding the shortest exit path in an underground mine in case of emergency. Since emergency situations, such as underground fires, can put the miners' lives at risk, the ability to quickly determine the safest exit path is crucial. We propose a parallel algorithm capable of finding the shortest path between the safe exit point and any other point in the mine. The algorithm is also able to take into account the characteristics of individual miners, to make the path determination more reliable.
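
    As a serial baseline for the parallel search described above, Dijkstra's algorithm computes shortest paths from the safe exit point to every other point; the graph and weights below are hypothetical, and miner-specific characteristics could be accommodated by rescaling edge weights per miner.

    ```python
    # Dijkstra's single-source shortest-path algorithm: the serial core that
    # a parallel exit-path search would build on. Graph and weights are
    # hypothetical examples.
    import heapq

    def shortest_paths(graph, source):
        """graph: {node: [(neighbor, weight), ...]}. Returns {node: distance}."""
        dist = {source: 0}
        pq = [(0, source)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue  # stale queue entry; a shorter path was already found
            for v, w in graph.get(u, ()):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist
    ```

    Running the search from the exit rather than from each miner yields, in one pass, the safest-path distance for every point in the mine.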

  20. ASDTIC control and standardized interface circuits applied to buck, parallel and buck-boost dc to dc power converters

    NASA Technical Reports Server (NTRS)

    Schoenfeld, A. D.; Yu, Y.

    1973-01-01

    Versatile standardized pulse modulation nondissipatively regulated control signal processing circuits were applied to the three most commonly used dc-to-dc power converter configurations: (1) the series switching buck regulator, (2) the pulse modulated parallel inverter, and (3) the buck-boost converter. The unique control concept and the commonality of control functions for all switching regulators have resulted in improved static and dynamic performance and control circuit standardization. New power-circuit technology was also applied to enhance reliability and to achieve optimum weight and efficiency.
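
    For orientation, the ideal steady-state voltage ratios of the buck, boost, and buck-boost topologies can be sketched as follows; these are lossless continuous-conduction-mode textbook relations in terms of duty cycle D, not the ASDTIC control circuits themselves.

    ```python
    # Ideal steady-state transfer ratios of basic dc-to-dc converter
    # topologies versus switch duty cycle D (0 < D < 1). Textbook lossless
    # CCM relations, shown only to fix ideas.
    def buck(vin, d):
        return vin * d             # steps down: |Vout| <= Vin

    def boost(vin, d):
        return vin / (1 - d)       # steps up: |Vout| >= Vin

    def buck_boost(vin, d):
        return -vin * d / (1 - d)  # inverting; magnitude spans both regimes
    ```

    At D = 0.5 a 28 V bus (a common spacecraft level) yields 14 V from a buck stage and 56 V from a boost stage, while the buck-boost output passes through -28 V.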

  1. The Verification-based Analysis of Reliable Multicast Protocol

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1996-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  2. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research.

    PubMed

    Koo, Terry K; Li, Mae Y

    2016-06-01

    Intraclass correlation coefficient (ICC) is a widely used reliability index in test-retest, intrarater, and interrater reliability analyses. This article introduces the basic concept of ICC in the context of reliability analysis. There are 10 forms of ICCs. Because each form involves distinct assumptions in its calculation and will lead to different interpretations, researchers should explicitly specify the ICC form they used in their calculation. A thorough review of the research design is needed in selecting the appropriate form of ICC to evaluate reliability. The best practice of reporting ICC should include software information and the "model," "type," and "definition" selections. When coming across an article that includes ICC, readers should first check whether information about the ICC form has been reported and whether an appropriate ICC form was used. Based on the 95% confidence interval of the ICC estimate, values less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.9 are indicative of poor, moderate, good, and excellent reliability, respectively. This article provides a practical guideline for clinical researchers to choose the correct form of ICC and suggests the best practice of reporting ICC parameters in scientific publications. It also gives readers an appreciation for what to look for when coming across ICC while reading an article.
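
    To illustrate why the form matters, here is a sketch of one of the 10 forms, ICC(2,1) (two-way random effects, absolute agreement, single rater), computed from ANOVA mean squares following the standard Shrout-Fleiss definitions.

    ```python
    # ICC(2,1): two-way random effects, absolute agreement, single rater.
    # Computed directly from the ANOVA mean squares of an
    # n_subjects x k_raters ratings matrix.
    import numpy as np

    def icc_2_1(ratings):
        Y = np.asarray(ratings, dtype=float)
        n, k = Y.shape
        grand = Y.mean()
        ms_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
        ms_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
        resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0) + grand
        ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    ```

    Because ICC(2,1) demands absolute agreement, two raters who agree perfectly except for a constant offset score well below 1, whereas a consistency-type form would score them at 1; this is exactly the kind of interpretive difference the article warns about.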

  3. Similarity of the Multidimensional Space Defined by Parallel Forms of a Mathematics Test.

    ERIC Educational Resources Information Center

    Reckase, Mark D.; And Others

    The purpose of the paper is to determine whether test forms of the Mathematics Usage Test (AAP Math) of the American College Testing Program are parallel in a multidimensional sense. The AAP Math is an achievement test of mathematics concepts acquired by high school students by the end of their third year. To determine the dimensionality of the…

  4. Visual assessment of hemiplegic gait following stroke: pilot study.

    PubMed

    Hughes, K A; Bell, F

    1994-10-01

    A form that will guide clinicians through a reliable and valid visual assessment of hemiplegic gait was designed. Six hemiplegic patients were filmed walking along an instrumented walkway. These films were shown to three physiotherapists who used the form to rate the patients' gait. Each physiotherapist rated the six patients at both stages of recovery, repeating this a further two times. This resulted in 108 completed forms. Within-rater reliability is statistically significant for some raters and some individual form sections. Between-rater reliability is significant for some sections. Detailed analysis has shown that parts of the form have caused reduced reliability. These are mainly sections that ask for severity judgments or are duplicated. Some indication of normal gait should be included on the form. To test validity fully the form should be tested on a group of patients who all have significant changes in each objective gait measurement.

  5. Automated Testability Decision Tool

    DTIC Science & Technology

    1991-09-01

    Vol. 16, 1968, pp. 538-558. Bertsekas, D. P., "Constrained Optimization and Lagrange Multiplier Methods," Academic Press, New York. McLeavey, D.W... McLeavey, J.A., "Parallel Optimization Methods in Standby Reliability," University of Connecticut, School of Business Administration, Bureau of Business

  6. Validity and Reliability of Thai Version of the Foot and Ankle Ability Measure (FAAM) Subjective Form.

    PubMed

    Arunakul, Marut; Arunakul, Preeyaphan; Suesiritumrong, Chakhrist; Angthong, Chayanin; Chernchujit, Bancha

    2015-06-01

    Self-administered questionnaires have become an important tool for clinical outcome assessment of foot and ankle-related problems. The Foot and Ankle Ability Measure (FAAM) subjective form is a region-specific questionnaire that is widely used and has shown sufficient validity and reliability in previous studies. The objectives were to translate the original English version of the FAAM into Thai and to evaluate the validity and reliability of the Thai FAAM in patients with foot and ankle-related problems. The FAAM subjective form was translated into Thai using a forward-backward translation protocol. Afterward, reliability and validity were tested using responses from 60 consecutive patients who completed two questionnaires: the Thai FAAM subjective form and the Short Form (SF)-36. Validity was tested by correlating the scores from both questionnaires. Reliability was assessed by measuring test-retest reliability and internal consistency. The Thai FAAM score, including the activity of daily living (ADL) and Sport subscales, demonstrated sufficient correlations with the physical functioning (PF) and physical composite score (PCS) domains of the SF-36 (statistically significant at the p < 0.001 level, with values ≥ 0.5). The test-retest study revealed high intraclass correlation coefficients of 0.80 and 0.77 for the two subscales, respectively. The internal consistency was strong (Cronbach alpha = 0.94 and 0.88, respectively). The Thai version of the FAAM subjective form retained the characteristics of the original version and proved to be a reliable evaluation instrument for patients with foot and ankle-related problems.

  7. Development of microcomputer-based mental acuity tests for repeated-measures studies

    NASA Technical Reports Server (NTRS)

    Kennedy, R. S.; Wilkes, R. L.; Baltzley, D. R.; Fowlkes, J. E.

    1990-01-01

    The purpose of this report is to detail the development of the Automated Performance Test System (APTS), a computerized battery of mental acuity tests that can be used to assess human performance in the presence of toxic elements and environmental stressors. There were four objectives in the development of APTS. First, the technical requirements for developing APTS followed the tenets of the classical theory of mental tests, which requires that tests meet set criteria like stability and reliability (the lack of which constitutes insensitivity). To be employed in the study of the exotic conditions of protracted space flight, a battery with multiple parallel forms is required. The second criterion was for the battery to have factorial multidimensionality, and the third was for the battery to be sensitive to factors known to compromise performance. A fourth objective was for the tests to converge on the abilities entailed in mission specialist tasks. A series of studies is reported in which candidate APTS tests were subjected to an examination of their psychometric properties for repeated-measures testing. From this work, tests were selected that possessed the requisite metric properties of stability, reliability, and factor richness. In addition, studies are reported which demonstrate the predictive validity of the tests against holistic measures of intelligence.

  8. Carbon Nanofiber versus Graphene-Based Stretchable Capacitive Touch Sensors for Artificial Electronic Skin.

    PubMed

    Cataldi, Pietro; Dussoni, Simeone; Ceseracciu, Luca; Maggiali, Marco; Natale, Lorenzo; Metta, Giorgio; Athanassiou, Athanassia; Bayer, Ilker S

    2018-02-01

    Stretchable capacitive devices are instrumental for new-generation multifunctional haptic technologies particularly suited for soft robotics and electronic skin applications. A majority of elongating soft electronics still rely on silicone for building devices or sensors by multiple-step replication. In this study, fabrication of a reliable elongating parallel-plate capacitive touch sensor, using nitrile rubber gloves as templates, is demonstrated. Spray coating both sides of a rubber piece cut out of a glove with a conductive polymer suspension carrying dispersed carbon nanofibers (CnFs) or graphene nanoplatelets (GnPs) is sufficient for making electrodes with low sheet resistance values (≈10 Ω sq -1 ). The electrodes based on CnFs maintain their conductivity up to 100% elongation whereas the GnPs-based ones form cracks before 60% elongation. However, both electrodes are reliable under elongation levels associated with human joints motility (≈20%). Strikingly, structural damage due to repeated elongation/recovery cycles could be healed through annealing. Haptic sensing characteristics of a stretchable capacitive device wrapped around the fingertip of a robotic hand (iCub) are demonstrated. Tactile forces as low as 0.03 N and as high as 5 N can be easily sensed by the device under elongation or over curvilinear surfaces.
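
    For a sense of scale, the rest capacitance of such a parallel-plate sensor follows C = eps0 * eps_r * A / d; the patch dimensions and the nitrile relative permittivity in this sketch are illustrative assumptions, not values reported in the study.

    ```python
    # Parallel-plate capacitance estimate. All dimensions and eps_r below are
    # assumed example values, not measurements from the paper.
    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def plate_capacitance(eps_r, area_m2, gap_m):
        return EPS0 * eps_r * area_m2 / gap_m

    # e.g. a 1 cm^2 patch with a 100 um rubber dielectric, eps_r ~ 5 (assumed)
    c0 = plate_capacitance(5.0, 1e-4, 100e-6)  # on the order of tens of pF
    ```

    Touch or stretch changes A and d, so the fractional capacitance change is the quantity the readout electronics track.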

  9. Study on Distribution Reliability with Parallel and On-site Distributed Generation Considering Protection Miscoordination and Tie Line

    NASA Astrophysics Data System (ADS)

    Chaitusaney, Surachai; Yokoyama, Akihiko

    In a distribution system, Distributed Generation (DG) is expected to improve system reliability by acting as backup generation. However, DG's contribution to fault current may cause the loss of existing protection coordination, e.g. recloser-fuse coordination and breaker-breaker coordination. This problem can drastically deteriorate system reliability, and it becomes more serious and complicated when there are several DG sources in the system. Hence, this conflict between DG's reliability benefits and protection coordination needs detailed investigation before DG is installed or enhanced. A model of composite DG fault current is proposed to find the threshold beyond which existing protection coordination is lost. Cases of protection miscoordination are described, together with their consequences. Since a distribution system may be tied with another system, the issues of tie lines and on-site DG are integrated into this study. Reliability indices are evaluated and compared in the distribution reliability test system RBTS Bus 2.

  10. Reverse engineering a gene network using an asynchronous parallel evolution strategy

    PubMed Central

    2010-01-01

    Background The use of reverse engineering methods to infer gene regulatory networks by fitting mathematical models to gene expression data is becoming increasingly popular and successful. However, increasing model complexity means that more powerful global optimisation techniques are required for model fitting. The parallel Lam Simulated Annealing (pLSA) algorithm has been used in such approaches, but recent research has shown that island Evolutionary Strategies can produce faster, more reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task. Results Here, we present synchronous and asynchronous versions of the piES algorithm, and apply them to a real reverse engineering problem: inferring parameters in the gap gene network. We find that the asynchronous piES exhibits very little communication overhead, and shows significant speed-up for up to 50 nodes: the piES running on 50 nodes is nearly 10 times faster than the best serial algorithm. We compare the asynchronous piES to pLSA on the same test problem, measuring the time required to reach particular levels of residual error, and show that it shows much faster convergence than pLSA across all optimisation conditions tested. Conclusions Our results demonstrate that the piES is consistently faster and more reliable than the pLSA algorithm on this problem, and scales better with increasing numbers of nodes. In addition, the piES is especially well suited to further improvements and adaptations: Firstly, the algorithm's fast initial descent speed and high reliability make it a good candidate for being used as part of a global/local search hybrid algorithm. Secondly, it has the potential to be used as part of a hierarchical evolutionary algorithm, which takes advantage of modern multi-core computing architectures. PMID:20196855
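
    The island-migration idea behind the piES can be shown in a toy serial form: two (mu, lambda)-ES islands minimizing the sphere function and exchanging their best individuals each generation. This is only a conceptual sketch; the actual piES is asynchronous, MPI-based, and fits gap gene network parameters rather than a benchmark function.

    ```python
    # Toy island Evolution Strategy: two (mu, lambda)-ES islands with
    # one-individual migration per generation, minimizing the sphere function.
    # Serial stand-in for the asynchronous parallel version described above.
    import random

    def sphere(x):
        return sum(v * v for v in x)

    def es_islands(dim=5, mu=5, lam=20, gens=200, sigma=0.3, seed=1):
        rng = random.Random(seed)
        islands = [[[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
                   for _ in range(2)]
        for _ in range(gens):
            for pop in islands:
                offspring = []
                for _ in range(lam):
                    parent = rng.choice(pop)
                    offspring.append([v + rng.gauss(0, sigma) for v in parent])
                offspring.sort(key=sphere)
                pop[:] = offspring[:mu]  # (mu, lambda) truncation selection
            # migration: each island replaces its worst with the other's best
            islands[0][-1] = list(islands[1][0])
            islands[1][-1] = list(islands[0][0])
        return min(sphere(ind) for pop in islands for ind in pop)
    ```

    In the real asynchronous variant, islands migrate whenever they finish a generation instead of at a global barrier, which is what keeps communication overhead low as node counts grow.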

  11. Composite Reliability and Standard Errors of Measurement for a Seven-Subtest Short Form of the Wechsler Adult Intelligence Scale-Revised.

    ERIC Educational Resources Information Center

    Schretlen, David; And Others

    1994-01-01

    Composite reliability and standard errors of measurement were computed for prorated Verbal, Performance, and Full-Scale intelligence quotient (IQ) scores from a seven-subtest short form of the Wechsler Adult Intelligence Scale-Revised. Results with 1,880 adults (standardization sample) indicate that this form is as reliable as the complete test.…

  12. Kappa and Rater Accuracy: Paradigms and Parameters

    ERIC Educational Resources Information Center

    Conger, Anthony J.

    2017-01-01

    Drawing parallels to classical test theory, this article clarifies the difference between rater accuracy and reliability and demonstrates how category marginal frequencies affect rater agreement and Cohen's kappa. Category assignment paradigms are developed: comparing raters to a standard (index) versus comparing two raters to one another…

  13. An integrated control strategy for the composite braking system of an electric vehicle with independently driven axles

    NASA Astrophysics Data System (ADS)

    Sun, Fengchun; Liu, Wei; He, Hongwen; Guo, Hongqiang

    2016-08-01

    For an electric vehicle with independently driven axles, an integrated braking control strategy was proposed to coordinate the regenerative braking and the hydraulic braking. The integrated strategy includes three modes, namely the hybrid composite mode, the parallel composite mode and the pure hydraulic mode. For the hybrid composite mode and the parallel composite mode, the coefficients of distributing the braking force between the hydraulic braking and the two motors' regenerative braking were optimised offline, and the response surfaces related to the driving state parameters were established. Meanwhile, the six-sigma method was applied to deal with the uncertainty problems for reliability. Additionally, the pure hydraulic mode is activated to ensure the braking safety and stability when the predictive failure of the response surfaces occurs. Experimental results under given braking conditions showed that the braking requirements could be well met with high braking stability and energy regeneration rate, and the reliability of the braking strategy was guaranteed on general braking conditions.

  14. A Comparison of Flow-Through Versus Non-Flow-Through Proton Exchange Membrane Fuel Cell Systems for NASA's Exploration Missions

    NASA Technical Reports Server (NTRS)

    Hoberecht, Mark A.

    2010-01-01

    As part of the Exploration Technology Development Program (ETDP) under the auspices of the Exploration Systems Mission Directorate (ESMD), NASA is developing both primary fuel cell power systems and regenerative fuel cell (RFC) energy storage systems within the fuel cell portion of the Energy Storage Project. This effort is being led by the NASA Glenn Research Center (GRC) in partnership with the NASA Johnson Space Center (JSC), Jet Propulsion Laboratory (JPL), NASA Kennedy Space Center (KSC), and industrial partners. The development goals are to improve fuel cell and electrolysis stack electrical performance; reduce system mass, volume, and parasitic power requirements; and increase system life and reliability. A major focus of this effort has been the parallel development of both flow-through and non-flow-through proton exchange membrane (PEM) primary fuel cell power systems. The plan has been, at the appropriate time, to select a single primary fuel cell technology for eventual flight hardware development. Ideally, that appropriate time would occur after both technologies have achieved a technology readiness level (TRL) of six, which represents an engineering-model-fidelity PEM fuel cell system being successfully tested in a relevant environment. Budget constraints in fiscal year 2009 and beyond have prevented NASA from continuing to pursue the parallel development of both primary fuel cell options. Because very limited data exists for either system, a top-level, qualitative assessment based on engineering judgement was performed expeditiously to provide guidance for a selection. At that time, the non-flow-through technology was selected for continued development because of potentially major advantages in terms of weight, volume, parasitic power, reliability, and life. This author believes that the advantages are significant enough, and the potential benefits great enough, to offset the higher state of technology readiness of flow-through technology. This paper summarizes the technical considerations which helped form the engineering judgement that led to the final decision.

  15. An Investigation of the Impact of Guessing on Coefficient α and Reliability

    PubMed Central

    2014-01-01

    Guessing is known to influence the test reliability of multiple-choice tests. Although there are many studies that have examined the impact of guessing, they used rather restrictive assumptions (e.g., parallel test assumptions, homogeneous inter-item correlations, homogeneous item difficulty, and homogeneous guessing levels across items) to evaluate the relation between guessing and test reliability. Based on the item response theory (IRT) framework, this study investigated the extent of the impact of guessing on reliability under more realistic conditions where item difficulty, item discrimination, and guessing levels actually vary across items with three different test lengths (TL). By accommodating multiple item characteristics simultaneously, this study also focused on examining interaction effects between guessing and other variables entered in the simulation to be more realistic. The simulation of the more realistic conditions and calculations of reliability and classical test theory (CTT) item statistics were facilitated by expressing CTT item statistics, coefficient α, and reliability in terms of IRT model parameters. In addition to the general negative impact of guessing on reliability, results showed interaction effects between TL and guessing and between guessing and test difficulty.
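
    The study's central relation, that a larger 3PL guessing parameter c depresses coefficient alpha, can be reproduced in a small Monte-Carlo sketch; the item and simulee counts below are arbitrary choices for illustration, not the study's simulation design.

    ```python
    # Monte-Carlo sketch: simulate dichotomous responses under a 3PL model
    # (discrimination fixed at 1) and show that raising the guessing
    # parameter c lowers coefficient alpha.
    import random, math

    def cronbach_alpha(X):
        n_items = len(X[0])
        total = [sum(row) for row in X]
        def var(v):
            m = sum(v) / len(v)
            return sum((x - m) ** 2 for x in v) / (len(v) - 1)
        item_vars = sum(var([row[j] for row in X]) for j in range(n_items))
        return n_items / (n_items - 1) * (1 - item_vars / var(total))

    def simulate_alpha(c, n_items=20, n_persons=2000, seed=7):
        rng = random.Random(seed)
        thetas = [rng.gauss(0, 1) for _ in range(n_persons)]
        bs = [rng.gauss(0, 1) for _ in range(n_items)]  # item difficulties
        # P(correct) = c + (1 - c) * logistic(theta - b)
        X = [[1 if rng.random() < c + (1 - c) / (1 + math.exp(-(t - b))) else 0
              for b in bs] for t in thetas]
        return cronbach_alpha(X)
    ```

    Comparing simulate_alpha(0.0) against simulate_alpha(0.4) with everything else held fixed isolates the effect of guessing, mirroring (in simplified form) the paper's IRT-based analytic comparisons.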

  16. easyCBM Beginning Reading Measures: Grades K-1 Alternate Form Reliability and Criterion Validity with the SAT-10. Technical Report #1403

    ERIC Educational Resources Information Center

    Wray, Kraig; Lai, Cheng-Fei; Sáez, Leilani; Alonzo, Julie; Tindal, Gerald

    2013-01-01

    We report the results of an alternate form reliability and criterion validity study of kindergarten and grade 1 (N = 84-199) reading measures from the easyCBM© assessment system and Stanford Early School Achievement Test/Stanford Achievement Test, 10th edition (SESAT/SAT-­10) across 5 time points. The alternate form reliabilities ranged from…

  17. TIMEDELN: A programme for the detection and parametrization of overlapping resonances using the time-delay method

    NASA Astrophysics Data System (ADS)

    Little, Duncan A.; Tennyson, Jonathan; Plummer, Martin; Noble, Clifford J.; Sunderland, Andrew G.

    2017-06-01

    TIMEDELN implements the time-delay method of determining resonance parameters from the characteristic Lorentzian form displayed by the largest eigenvalues of the time-delay matrix. TIMEDELN constructs the time-delay matrix from input K-matrices and analyses its eigenvalues. This new version implements multi-resonance fitting and may be run serially or as a high performance parallel code with three levels of parallelism. TIMEDELN takes K-matrices from a scattering calculation, either read from a file or calculated on a dynamically adjusted grid, and calculates the time-delay matrix. This is then diagonalized, with the largest eigenvalue representing the longest time-delay experienced by the scattering particle. A resonance shows up as a characteristic Lorentzian form in the time-delay: the programme searches the time-delay eigenvalues for maxima and traces resonances when they pass through different eigenvalues, separating overlapping resonances. It also performs the fitting of the calculated data to the Lorentzian form and outputs resonance positions and widths. Any remaining overlapping resonances can be fitted jointly. The branching ratios of decay into the open channels can also be found. The programme may be run serially or in parallel with three levels of parallelism. The parallel code modules are abstracted from the main physics code and can be used independently.
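
    The Lorentzian profile that TIMEDELN fits can be illustrated on synthetic data: near an isolated resonance the largest time-delay eigenvalue behaves as q(E) = Gamma / ((E - Er)^2 + Gamma^2/4) plus a slowly varying background. The grid, noise level, and least-squares routine below are stand-ins for real K-matrix input, not the programme's own fitter.

    ```python
    # Fit a Lorentzian time-delay profile to noisy synthetic data to recover
    # a resonance position Er and width Gamma (illustrative stand-in for
    # TIMEDELN's multi-resonance fitting).
    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(E, Er, Gamma, bg):
        return Gamma / ((E - Er) ** 2 + Gamma ** 2 / 4.0) + bg

    E = np.linspace(0.0, 2.0, 400)
    rng = np.random.default_rng(0)
    q = lorentzian(E, 1.1, 0.05, 2.0) + rng.normal(0, 0.5, E.size)  # "data"

    # seed the position at the data's peak, then least-squares refine
    p0 = (E[np.argmax(q)], 0.1, 0.0)
    (Er_fit, Gamma_fit, bg_fit), _ = curve_fit(lorentzian, E, q, p0=p0)
    ```

    Overlapping resonances correspond to sums of such profiles fitted jointly, which is the multi-resonance extension this version of the programme adds.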

  18. The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1995-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems, and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  19. A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix

    NASA Technical Reports Server (NTRS)

    Shroff, Gautam

    1989-01-01

    A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
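
    The simplest member of the Jacobi family, cyclic Jacobi for a real symmetric matrix, illustrates the rotation-based iteration that the norm-reducing algorithm extends to general complex matrices; this sketch is a textbook baseline, not Eberlein's algorithm itself.

    ```python
    # Classical cyclic Jacobi iteration for a real symmetric matrix. Each
    # plane rotation zeroes one off-diagonal pair; repeated sweeps drive the
    # matrix to diagonal form, with ultimately quadratic convergence.
    import math

    def jacobi_eigenvalues(A, sweeps=10):
        n = len(A)
        A = [row[:] for row in A]  # work on a copy
        for _ in range(sweeps):
            for p in range(n - 1):
                for q in range(p + 1, n):
                    if abs(A[p][q]) < 1e-15:
                        continue
                    # rotation angle that zeroes the (p, q) entry
                    theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
                    c, s = math.cos(theta), math.sin(theta)
                    for k in range(n):  # apply rotation to rows p and q
                        A[p][k], A[q][k] = (c * A[p][k] - s * A[q][k],
                                            s * A[p][k] + c * A[q][k])
                    for k in range(n):  # and to columns p and q
                        A[k][p], A[k][q] = (c * A[k][p] - s * A[k][q],
                                            s * A[k][p] + c * A[k][q])
        return sorted(A[i][i] for i in range(n))
    ```

    Because disjoint (p, q) rotation pairs commute, up to n/2 of them can be applied simultaneously, which is the property parallel Jacobi-like methods exploit.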

  20. Report on noninvasive prenatal testing: classical and alternative approaches.

    PubMed

    Pantiukh, Kateryna S; Chekanov, Nikolay N; Zaigrin, Igor V; Zotov, Alexei M; Mazur, Alexander M; Prokhortchouk, Egor B

    2016-01-01

    Concerns about traditional prenatal aneuploidy testing methods, such as the low accuracy of noninvasive procedures and the health risks associated with invasive ones, were overcome with the introduction of novel genetics-based noninvasive prenatal testing (NIPT) methods. These were rapidly adopted into clinical practice in many countries after a series of successful trials of various independent submethods. Here we present results of our own NIPT trial carried out in Moscow, Russia. 1012 samples were subjected to a method aimed at measuring chromosome coverage by massive parallel sequencing. Two alternative approaches were ascertained: one based on maternal/fetal differential methylation and another based on allelic difference. While the former failed to provide stable results, the latter was found to be promising and worthy of a large-scale trial. One critical point in any NIPT approach is the determination of the fetal cell-free DNA fraction, which dictates the reliability of the results obtained for a given sample. We show that two different chromosome Y representation measures, by real-time PCR and by whole-genome massive parallel sequencing, are practically interchangeable (r=0.94). We also propose a novel method based on maternal/fetal allelic difference which is applicable in pregnancies with fetuses of either sex. Even in its pilot form it correlates well with chromosome Y coverage estimates (r=0.74) and can be further improved by increasing the number of polymorphisms.

  1. Providers' assessment of transition readiness among adolescent and young adult kidney transplant recipients.

    PubMed

    Marchak, Jordan Gilleland; Reed-Knight, Bonney; Amaral, Sandra; Mee, Laura; Blount, Ronald L

    2015-12-01

    The Readiness for Transition Questionnaire- provider version (RTQ-Provider) was developed to evaluate adolescent patients' transition readiness and healthcare behaviors from the perspective of the healthcare provider. The RTQ-Provider is a parallel version of the RTQ-Teen and RTQ-Parent completed by patients and parents. This study seeks to evaluate the psychometric properties of the RTQ-Provider and its utility as a clinical transition planning tool. Participants consisted of 49 kidney transplant recipients between the ages of 15 and 21. The RTQ-Provider was completed by the pediatric nephrologist and psychologist from the multidisciplinary healthcare team and compared to RTQ data from teens and parents. The RTQ-Provider demonstrated good-to-excellent internal consistency and interrater reliability. Construct validity was supported through significant predictive relationships between providers' perceptions of transition readiness and older patient age, increased patient healthcare responsibility, and decreased parent involvement in health care. By providing parallel teen, parent, and provider forms, the RTQ has the potential to foster open communication between patients, families, and healthcare team members regarding transition readiness. The study provides initial support for the RTQ-Provider as a clinical tool to assess providers' perceptions of transition readiness; however, future longitudinal research is needed to evaluate predictive validity following patients' transfer to adult care. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. The parallel-sequential field subtraction techniques for nonlinear ultrasonic imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-04-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage and have sensitivity to particularly closed defects. This study utilizes two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In the parallel focusing, a high intensity ultrasonic beam is formed in the specimen at the focal point. However, in sequential focusing only low intensity signals from individual elements enter the sample and the full matrix of transmit-receive signals is recorded; with elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images formed from the coherent component of the field and use this to characterize nonlinearity of closed fatigue cracks. In particular we monitor the reduction in amplitude at the fundamental frequency at each focal point and use this metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g., back wall or large scatters) and allow damage to be detected at an early stage.

  3. Multichannel microfluidic chip for rapid and reliable trapping and imaging plant-parasitic nematodes

    NASA Astrophysics Data System (ADS)

    Amrit, Ratthasart; Sripumkhai, Witsaroot; Porntheeraphat, Supanit; Jeamsaksiri, Wutthinan; Tangchitsomkid, Nuchanart; Sutapun, Boonsong

    2013-05-01

    A fast and reliable testing technique to count and identify nematode species residing in plant roots is essential for export control and certification. This work proposes a multichannel microfluidic chip with an integrated flow-through microfilter to retain the nematodes in a trapping chamber. Once trapped, the nematodes can be simply and conveniently imaged, and their species later identified by a trained technician. Multiple samples can be tested in parallel using the proposed microfluidic chip, thereby increasing the number of samples tested per day.

  4. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general threedimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.

  5. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE PAGES

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett; ...

    2017-01-01

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.

  6. Genetic-gonadal-genitals sex (3G-sex) and the misconception of brain and gender, or, why 3G-males and 3G-females have intersex brain and intersex gender

    PubMed Central

    2012-01-01

    The categorization of individuals as “male” or “female” is based on chromosome complement and gonadal and genital phenotype. This combined genetic-gonadal-genitals sex, here referred to as 3G-sex, is internally consistent in ~99% of humans (i.e., one has either the “female” form at all levels, or the “male” form at all levels). About 1% of the human population is identified as “intersex” because of either having an intermediate form at one or more levels, or having the “male” form at some levels and the “female” form at other levels. These two types of “intersex” reflect the facts, respectively, that the different levels of 3G-sex are neither completely dimorphic nor perfectly consistent. Using 3G-sex as a model to understand sex differences in other domains (e.g., brain, behavior) leads to the erroneous assumption that sex differences in these other domains are also highly dimorphic and highly consistent. But parallel lines of research have led to the conclusion that sex differences in the brain and in behavior, cognition, personality, and other gender characteristics are for the most part not dimorphic and not internally consistent (i.e., having one brain/gender characteristic with the “male” form is not a reliable predictor for the form of other brain/gender characteristics). Therefore, although only ~1% of humans are 3G-“intersex”, when it comes to brain and gender, we all have an intersex gender (i.e., an array of masculine and feminine traits) and an intersex brain (a mosaic of “male” and “female” brain characteristics). PMID:23244600

  7. Genetic-gonadal-genitals sex (3G-sex) and the misconception of brain and gender, or, why 3G-males and 3G-females have intersex brain and intersex gender.

    PubMed

    Joel, Daphna

    2012-12-17

    The categorization of individuals as "male" or "female" is based on chromosome complement and gonadal and genital phenotype. This combined genetic-gonadal-genitals sex, here referred to as 3G-sex, is internally consistent in ~99% of humans (i.e., one has either the "female" form at all levels, or the "male" form at all levels). About 1% of the human population is identified as "intersex" because of either having an intermediate form at one or more levels, or having the "male" form at some levels and the "female" form at other levels. These two types of "intersex" reflect the facts, respectively, that the different levels of 3G-sex are neither completely dimorphic nor perfectly consistent. Using 3G-sex as a model to understand sex differences in other domains (e.g., brain, behavior) leads to the erroneous assumption that sex differences in these other domains are also highly dimorphic and highly consistent. But parallel lines of research have led to the conclusion that sex differences in the brain and in behavior, cognition, personality, and other gender characteristics are for the most part not dimorphic and not internally consistent (i.e., having one brain/gender characteristic with the "male" form is not a reliable predictor for the form of other brain/gender characteristics). Therefore, although only ~1% of humans are 3G-"intersex", when it comes to brain and gender, we all have an intersex gender (i.e., an array of masculine and feminine traits) and an intersex brain (a mosaic of "male" and "female" brain characteristics).

  8. Neural network applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Alspector, Joshua

    1994-01-01

    Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.

  9. Active parallel redundancy for electronic integrator-type control circuits

    NASA Technical Reports Server (NTRS)

    Peterson, R. A.

    1971-01-01

    Circuit extends the concept of redundant feedback control from type-0 to type-1 control systems. Inactive channels are slaves to the active channel; if the latter fails, it is rejected and a slave channel is activated. High reliability and elimination of single-component catastrophic failure are important in closed-loop control systems.

  10. Reliable aluminum contact formation by electrostatic bonding

    NASA Astrophysics Data System (ADS)

    Kárpáti, T.; Pap, A. E.; Radnóczi, Gy; Beke, B.; Bársony, I.; Fürjes, P.

    2015-07-01

    The paper presents a detailed study of a reliable method developed for aluminum fusion wafer bonding assisted by the electrostatic force evolving during the anodic bonding process. The IC-compatible procedure described allows the parallel formation of electrical and mechanical contacts, facilitating a reliable packaging of electromechanical systems with backside electrical contacts. This fusion bonding method supports the fabrication of complex microelectromechanical systems (MEMS) and micro-opto-electromechanical systems (MOEMS) structures with enhanced temperature stability, which is crucial in mechanical sensor applications such as pressure or force sensors. Due to the applied electrical potential of  -1000 V the Al metal layers are compressed by electrostatic force, and at the bonding temperature of 450 °C intermetallic diffusion causes aluminum ions to migrate between metal layers.

  11. Aeroflex Technology as Class-Y Demonstrator

    NASA Technical Reports Server (NTRS)

    Suh, Jong-ook; Agarwal, Shri; Popelar, Scott

    2014-01-01

    Modern space field programmable gate array (FPGA) devices with increased functional density and operational frequency, such as Xilinx Virtex 4 (V4) and Virtex 5 (V5), are packaged in non-hermetic ceramic flip chip forms. These next generation space parts were not qualified to the MIL-PRF-38535 Qualified Manufacturer Listing (QML) class-V when they were released, because class-V was only intended for hermetic parts. In order to bring Xilinx V5 type packages into the QML system, it was suggested that class-Y be set up as a new category. From 2010 through 2014, a JEDEC G12 task group developed screening and qualification requirements for class-Y products. The Document Standardization Division of the Defense Logistics Agency (DLA) has completed an engineering practice study. In parallel with the class-Y efforts, the NASA Electronic Parts and Packaging (NEPP) program has funded JPL to study potential reliability issues of the class-Y products. The major hurdle of this task was the absence of adequate research samples. Figure 1-1 shows schematic diagrams of typical structures of class-Y type products. Typically, class-Y products are either in ceramic flip chip column grid array (CGA) or land grid array (LGA) form. In class-Y packages, underfill and heat spreader adhesive materials are directly exposed to the spacecraft environment due to their non-hermeticity. One of the concerns originally raised was that the underfill material could degrade in the spacecraft environment and negatively impact the reliability of the package. In order to study such issues, it was necessary to use ceramic daisy chain flip chip package samples so that the continuity of flip chip solder bumps could be monitored during the reliability tests.
However, none of the commercially available class-Y daisy chain parts had electrical connections through flip chip solder bumps; only the solder columns were daisy chained, which made it impossible to test the continuity of flip chip solder bumps without using extremely costly functional parts. Among space parts manufacturers interested in producing class-Y products, Aeroflex Microelectronic Solutions-HiRel had been developing assembly processes using internal R&D class-Y type samples. In early 2012, JPL and Aeroflex initiated a collaboration to study the reliability of the Aeroflex technology as a class-Y demonstrator.

  12. Processes and Procedures for Estimating Score Reliability and Precision

    ERIC Educational Resources Information Center

    Bardhoshi, Gerta; Erford, Bradley T.

    2017-01-01

    Precision is a key facet of test development, with score reliability determined primarily according to the types of error one wants to approximate and demonstrate. This article identifies and discusses several primary forms of reliability estimation: internal consistency (i.e., split-half, KR-20, α), test-retest, alternate forms, interscorer, and…

  13. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
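
    The shifted subspace iteration the abstract describes can be sketched compactly. This is a toy dense version under stated assumptions: the paper's parallel banded solvers are replaced by a dense LU solve, the guard-vector count and iteration limit are arbitrary, and all names are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def subspace_iteration(K, M, k, sigma=0.0, iters=60, guard=2, seed=0):
        """Shift-accelerated subspace iteration for K x = lambda M x.
        Each step solves (K - sigma*M) Y = M X; in the paper that solve
        is distributed across parallel banded solvers."""
        n = K.shape[0]
        X = np.random.default_rng(seed).standard_normal((n, k + guard))
        A = K - sigma * M                       # the shift decouples/accelerates
        for _ in range(iters):
            X = np.linalg.solve(A, M @ X)       # block inverse iteration step
            X, _ = np.linalg.qr(X)              # re-orthonormalize the subspace
            w, V = eigh(X.T @ K @ X, X.T @ M @ X)  # Rayleigh-Ritz projection
            X = X @ V
        return w[:k], X[:, :k]

    # Small demo: tridiagonal stiffness K, diagonal mass M (both SPD).
    n = 20
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    M = np.diag(1.0 + np.arange(n) / n)
    w, _ = subspace_iteration(K, M, k=3)
    ```

    The shift sigma trades computation against communication exactly as the abstract describes: a shift closer to the sought eigenvalues speeds convergence but makes the banded systems more nearly singular, so an optimal shift balances the two costs.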

  14. ARTS III/Parallel Processor Design Study

    DOT National Transportation Integrated Search

    1975-04-01

    It was the purpose of this design study to investigate the feasibility, suitability, and cost-effectiveness of augmenting the ARTS III failsafe/failsoft multiprocessor system with a form of parallel processor to accomodate a large growth in air traff...

  15. A Structure-Toxicity Study of Aβ42 Reveals a New Anti-Parallel Aggregation Pathway

    PubMed Central

    Vignaud, Hélène; Bobo, Claude; Lascu, Ioan; Sörgjerd, Karin Margareta; Zako, Tamotsu; Maeda, Mizuo; Salin, Benedicte; Lecomte, Sophie; Cullin, Christophe

    2013-01-01

    Amyloid beta (Aβ) peptides produced by APP cleavage are central to the pathology of Alzheimer’s disease. Despite widespread interest in this issue, the relationship between the auto-assembly and toxicity of these peptides remains controversial. One intriguing feature stems from their capacity to form anti-parallel β-sheet oligomeric intermediates that can be converted into a parallel topology to allow the formation of protofibrillar and fibrillar Aβ. Here, we present a novel approach to determining the molecular aspects of Aβ assembly that are responsible for its in vivo toxicity. We selected Aβ mutants with varying intracellular toxicities. In vitro, only toxic Aβ (including wild-type Aβ42) formed urea-resistant oligomers. These oligomers were able to assemble into fibrils that are rich in anti-parallel β-sheet structures. Our results support the existence of a new pathway that depends on the folding capacity of Aβ. PMID:24244667

  16. Parallelization of implicit finite difference schemes in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel

    1990-01-01

    Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult and non-intuitive. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.

  17. A reliability design method for a lithium-ion battery pack considering the thermal disequilibrium in electric vehicles

    NASA Astrophysics Data System (ADS)

    Xia, Quan; Wang, Zili; Ren, Yi; Sun, Bo; Yang, Dezhen; Feng, Qiang

    2018-05-01

    With the rapid development of lithium-ion battery technology in the electric vehicle (EV) industry, the lifetime of the battery cell has increased substantially; however, the reliability of the battery pack is still inadequate. Because of the complexity of the battery pack, a reliability design method for a lithium-ion battery pack considering the thermal disequilibrium is proposed in this paper based on cell redundancy. Based on this method, a three-dimensional electric-thermal-flow-coupled model, a stochastic degradation model of cells under field dynamic conditions and a multi-state system reliability model of a battery pack are established. The relationships between the multi-physics coupling model, the degradation model and the system reliability model are first constructed to analyze the reliability of the battery pack, followed by analysis examples with different redundancy strategies. By comparing the reliability of battery packs with different redundant cell numbers and configurations, several conclusions for the redundancy strategy are obtained. Most notably, the reliability does not monotonically increase with the number of redundant cells, because of thermal disequilibrium effects. In this work, a 6 × 5 parallel-series configuration is found to be the optimal system structure. In addition, the effects of the cell arrangement and cooling conditions are investigated.
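
    For contrast with the paper's coupled electric-thermal analysis, the textbook i.i.d. reliability of an m-series-by-n-parallel pack is a one-liner. This is a deliberately idealized sketch, not the paper's model: it treats cells as independent and identical, which is exactly the assumption the thermal-disequilibrium result invalidates.

    ```python
    def pack_reliability(r_cell, n_parallel, m_series):
        """Idealized i.i.d. model: the pack is m_series groups in series,
        and each group survives if at least one of its n_parallel cells
        works.  R = (1 - (1 - r)**n) ** m."""
        r_group = 1.0 - (1.0 - r_cell) ** n_parallel
        return r_group ** m_series

    # E.g., the 6 x 5 parallel-series layout mentioned in the abstract,
    # with an assumed per-cell reliability of 0.95.
    r_pack = pack_reliability(0.95, n_parallel=5, m_series=6)
    ```

    In this idealized model, adding parallel cells always raises reliability; the paper's point is that once thermal coupling between neighboring cells is modeled, that monotonicity breaks down, which is why a finite configuration such as 6 × 5 can be optimal.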

  18. A novel approach for analyzing fuzzy system reliability using different types of intuitionistic fuzzy failure rates of components.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-03-01

    This paper addresses fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. Until now, the literature on fuzzy system reliability has assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set. In practical problems, however, such a situation rarely occurs. Therefore, in the present paper, a new algorithm is introduced to construct the membership function and non-membership function of the fuzzy reliability of a system whose components follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership function and non-membership function of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership functions and non-membership functions of the fuzzy reliability of a series system and a parallel system are constructed. Our study generalizes various works in the literature. Numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
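
    The series/parallel structure functions underlying such an analysis are simple to state. The sketch below uses plain alpha-cut interval arithmetic on triangular fuzzy numbers as a stand-in; it is not the paper's intuitionistic nonlinear-programming algorithm, and the component numbers are invented for illustration.

    ```python
    def tri_alpha_cut(a, m, b, alpha):
        """[lower, upper] alpha-cut of a triangular fuzzy number (a, m, b)."""
        return (a + alpha * (m - a), b - alpha * (b - m))

    def series_reliability(cuts):
        # Series system: R = prod(r_i).  Monotone in each r_i, so the
        # interval bounds multiply directly.
        lo = hi = 1.0
        for l, h in cuts:
            lo *= l
            hi *= h
        return lo, hi

    def parallel_reliability(cuts):
        # Parallel system: R = 1 - prod(1 - r_i), also monotone in each r_i.
        q_lo = q_hi = 1.0
        for l, h in cuts:
            q_lo *= (1.0 - l)
            q_hi *= (1.0 - h)
        return 1.0 - q_lo, 1.0 - q_hi

    # Two hypothetical components with triangular reliabilities.
    cuts0 = [tri_alpha_cut(0.7, 0.9, 0.95, 0.0), tri_alpha_cut(0.8, 0.85, 0.9, 0.0)]
    s_lo, s_hi = series_reliability(cuts0)      # support of the series system
    p_lo, p_hi = parallel_reliability(cuts0)    # support of the parallel system
    ```

    Sweeping alpha from 0 to 1 and stacking the resulting intervals reconstructs the full membership function of the system reliability; the intuitionistic extension of the paper adds an analogous computation for the non-membership function.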

  19. Fatigue reliability of deck structures subjected to correlated crack growth

    NASA Astrophysics Data System (ADS)

    Feng, G. Q.; Garbatov, Y.; Guedes Soares, C.

    2013-12-01

    The objective of this work is to analyse the fatigue reliability of deck structures subjected to correlated crack growth. The stress intensity factors of the correlated cracks are obtained by finite element analysis, and based on these the geometry correction functions are derived. Monte Carlo simulations are applied to predict the statistical descriptors of correlated cracks based on the Paris-Erdogan equation. A probabilistic model of crack growth as a function of time is used to analyse the fatigue reliability of deck structures accounting for the crack propagation correlation. A deck structure is modelled as a series system of stiffened panels, where each stiffened panel is regarded as a parallel system composed of plates and longitudinal stiffeners. It has been shown that the method developed here can be conveniently applied to the fatigue reliability assessment of structures subjected to correlated crack growth.
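
    The Monte Carlo / Paris-Erdogan step can be illustrated for a single uncorrelated crack. This is only a sketch under stated assumptions: a closed-form integration of the Paris-Erdogan law with an assumed lognormal scatter in the material constant C; the paper's correlated-crack finite-element analysis is far richer, and every number below is hypothetical.

    ```python
    import numpy as np

    def cycles_to_failure(a0, ac, C, m, dS, Y=1.0):
        """Closed-form integration of the Paris-Erdogan law
        da/dN = C * dK**m with dK = Y * dS * sqrt(pi * a), valid for m != 2."""
        k = C * (Y * dS * np.sqrt(np.pi)) ** m
        p = 1.0 - m / 2.0
        return (ac ** p - a0 ** p) / (k * p)

    # Monte Carlo over a lognormally scattered material constant C
    # (initial crack 1 mm, critical crack 20 mm, stress range 100 MPa).
    rng = np.random.default_rng(1)
    C = rng.lognormal(mean=np.log(1e-12), sigma=0.3, size=10_000)
    N = cycles_to_failure(a0=0.001, ac=0.02, C=C, m=3.0, dS=100.0)
    p_fail = np.mean(N < 5.0e6)   # fraction failing before 5e6 cycles
    ```

    Repeating this per crack with correlated samples of C (and combining panels in the series-of-parallel-systems layout the abstract describes) is the structure of the full reliability computation.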

  20. Factors which Limit the Value of Additional Redundancy in Human Rated Launch Vehicle Systems

    NASA Technical Reports Server (NTRS)

    Anderson, Joel M.; Stott, James E.; Ring, Robert W.; Hatfield, Spencer; Kaltz, Gregory M.

    2008-01-01

    The National Aeronautics and Space Administration (NASA) has embarked on an ambitious program to return humans to the moon and beyond. As NASA moves forward in the development and design of new launch vehicles for future space exploration, it must fully consider the implications that rule-based requirements of redundancy or fault tolerance have on system reliability/risk. These considerations include common cause failure, increased system complexity, combined serial and parallel configurations, and the impact of design features implemented to control premature activation. These factors and others must be considered in trade studies to support design decisions that balance safety, reliability, performance and system complexity to achieve a relatively simple, operable system that provides the safest and most reliable system within the specified performance requirements. This paper describes conditions under which additional functional redundancy can impede improved system reliability. Examples from current NASA programs including the Ares I Upper Stage will be shown.

  1. Exploring Equivalent Forms Reliability Using a Key Stage 2 Reading Test

    ERIC Educational Resources Information Center

    Benton, Tom

    2013-01-01

    This article outlines an empirical investigation into equivalent forms reliability using a case study of a national curriculum reading test. Within the situation being studied, there has been a genuine attempt to create several equivalent forms and so it is of interest to compare the actual behaviour of the relationship between these forms to the…

  2. On the polymorphism of benzocaine; a low-temperature structural phase transition for form (II).

    PubMed

    Chan, Eric J; Rae, A David; Welberry, T Richard

    2009-08-01

    A low-temperature structural phase transition has been observed for form (II) of benzocaine (BZC). Lowering the temperature doubles the b-axis repeat and changes the space group from P2(1)2(1)2(1) to P112(1) with gamma now 99.37 degrees. The structure is twinned, the twin rule corresponding to a 2(1) screw rotation parallel to a. The phase transition is associated with a sequential displacement parallel to a of zigzag bi-layers of ribbons perpendicular to b*. No similar phase transition was observed for form (I) and this was attributed to the different packing symmetries of the two room-temperature polymorphic forms.

  3. Tunable high-q superconducting notch filter

    DOEpatents

    Pang, C.S.; Falco, C.M.; Kampwirth, R.T.; Schuller, I.K.

    1979-11-29

    A superconducting notch filter is made of three substrates disposed in a cryogenic environment. A superconducting material is disposed on one substrate in a pattern of a circle and an annular ring connected together. The second substrate has a corresponding pattern to form a parallel plate capacitor and the second substrate has the circle and annular ring connected by a superconducting spiral that forms an inductor. The third substrate has a superconducting spiral that is placed parallel to the first superconducting spiral to form a transformer. Relative motion of the first substrate with respect to the second is effected from outside the cryogenic environment to vary the capacitance and hence the frequency of the resonant circuit formed by the superconducting devices.

  4. Measuring reliable change in cognition using the Edinburgh Cognitive and Behavioural ALS Screen (ECAS).

    PubMed

    Crockford, Christopher; Newton, Judith; Lonergan, Katie; Madden, Caoifa; Mays, Iain; O'Sullivan, Meabhdh; Costello, Emmet; Pinto-Grau, Marta; Vajda, Alice; Heverin, Mark; Pender, Niall; Al-Chalabi, Ammar; Hardiman, Orla; Abrahams, Sharon

    2018-02-01

    Cognitive impairment affects approximately 50% of people with amyotrophic lateral sclerosis (ALS). Research has indicated that impairment may worsen with disease progression. The Edinburgh Cognitive and Behavioural ALS Screen (ECAS) was designed to measure neuropsychological functioning in ALS, with its alternate forms (ECAS-A, B, and C) allowing for serial assessment over time. The aim of the present study was to establish reliable change scores for the alternate forms of the ECAS, and to explore practice effects and test-retest reliability of the ECAS's alternate forms. Eighty healthy participants were recruited, with 57 completing two and 51 completing three assessments. Participants were administered alternate versions of the ECAS serially (A-B-C) at four-month intervals. Intra-class correlation analysis was employed to explore test-retest reliability, while analysis of variance was used to examine the presence of practice effects. Reliable change indices (RCI) and regression-based methods were utilized to establish change scores for the ECAS alternate forms. Test-retest reliability was excellent for ALS Specific, ALS Non-Specific, and ECAS Total scores of the combined ECAS A, B, and C (all > .90). No significant practice effects were observed over the three testing sessions. RCI and regression-based methods produced similar change scores. The alternate forms of the ECAS possess excellent test-retest reliability in a healthy control sample, with no significant practice effects. The use of conservative RCI scores is recommended. Therefore, a change of ≥8, ≥4, and ≥9 for ALS Specific, ALS Non-Specific, and ECAS Total score is required for reliable change.
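
    The reliable change index the abstract refers to is commonly computed in the Jacobson-Truax form. A minimal sketch, with illustrative parameter values rather than the study's normative data:

    ```python
    import math

    def reliable_change_index(x1, x2, sd, r_xx):
        """Jacobson-Truax RCI: observed change scaled by the standard
        error of the difference between two administrations."""
        sem = sd * math.sqrt(1.0 - r_xx)      # standard error of measurement
        se_diff = math.sqrt(2.0) * sem        # SE of the difference score
        return (x2 - x1) / se_diff

    # E.g., with SD = 10 and test-retest r = .90 (hypothetical numbers),
    # a 10-point change gives RCI ~ 2.24, beyond the usual +/-1.96 cutoff.
    rci = reliable_change_index(100.0, 110.0, sd=10.0, r_xx=0.90)
    ```

    A change is deemed reliable when |RCI| exceeds the chosen cutoff; the study's recommended thresholds (≥8, ≥4, ≥9 points on the three ECAS scores) are the score-scale equivalents of such a conservative cutoff.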

  5. Design considerations for parallel graphics libraries

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  6. The openGL visualization of the 2D parallel FDTD algorithm

    NASA Astrophysics Data System (ADS)

    Walendziuk, Wojciech

    2005-02-01

    This paper presents a way of visualization of a two-dimensional version of a parallel algorithm of the FDTD method. The visualization module was created on the basis of the OpenGL graphic standard with the use of the GLUT interface. In addition, the work includes the results of the efficiency of the parallel algorithm in the form of speedup charts.

  7. The effect of earthquake on architecture geometry with non-parallel system irregularity configuration

    NASA Astrophysics Data System (ADS)

    Teddy, Livian; Hardiman, Gagoek; Nuroji; Tudjono, Sri

    2017-12-01

    Indonesia is an area prone to earthquake that may cause casualties and damage to buildings. The fatalities or the injured are not largely caused by the earthquake, but by building collapse. The collapse of the building is resulted from the building behaviour against the earthquake, and it depends on many factors, such as architectural design, geometry configuration of structural elements in horizontal and vertical plans, earthquake zone, geographical location (distance to earthquake center), soil type, material quality, and construction quality. One of the geometry configurations that may lead to the collapse of the building is irregular configuration of non-parallel system. In accordance with FEMA-451B, irregular configuration in non-parallel system is defined to have existed if the vertical lateral force-retaining elements are neither parallel nor symmetric with main orthogonal axes of the earthquake-retaining axis system. Such configuration may lead to torque, diagonal translation and local damage to buildings. It does not mean that non-parallel irregular configuration should not be formed on architectural design; however the designer must know the consequence of earthquake behaviour against buildings with irregular configuration of non-parallel system. The present research has the objective to identify earthquake behaviour in architectural geometry with irregular configuration of non-parallel system. The present research was quantitative with simulation experimental method. It consisted of 5 models, where architectural data and model structure data were inputted and analyzed using the software SAP2000 in order to find out its performance, and ETAB2015 to determine the eccentricity occurred. The output of the software analysis was tabulated, graphed, compared and analyzed with relevant theories. For areas of strong earthquake zones, avoid designing buildings which wholly form irregular configuration of non-parallel system. 
If it is inevitable to design a building with building parts containing irregular configuration of non-parallel system, make it more rigid by forming a triangle module, and use the formula. A good collaboration is needed between architects and structural experts in creating earthquake architecture.

  8. War and peace: morphemes and full forms in a noninteractive activation parallel dual-route model.

    PubMed

    Baayen, H; Schreuder, R

    This article introduces a computational tool for modeling the process of morphological segmentation in visual and auditory word recognition in the framework of a parallel dual-route model. Copyright 1999 Academic Press.

  9. The Reliability and Validity of the Instructional Climate Inventory-Student Form.

    ERIC Educational Resources Information Center

    Worrell, Frank C.

    2000-01-01

    This study examines the reliability and validity of the Instructional Climate Survey-Form S (ICI-S), a 20-item instrument that measures school climate, administered to students (N=328) in three programs. Analysis indicates that the ICI-S was best explained by one factor. Reliability coefficients of the total score were within the acceptable range for all…

  10. The Experiences in Close Relationship Scale (ECR)-short form: reliability, validity, and factor structure.

    PubMed

    Wei, Meifen; Russell, Daniel W; Mallinckrodt, Brent; Vogel, David L

    2007-04-01

    We developed a 12-item, short form of the Experiences in Close Relationship Scale (ECR; Brennan, Clark, & Shaver, 1998) across 6 studies. In Study 1, we examined the reliability and factor structure of the measure. In Studies 2 and 3, we cross-validated the reliability, factor structure, and validity of the short form measure; whereas in Study 4, we examined test-retest reliability over a 1-month period. In Studies 5 and 6, we further assessed the reliability, factor structure, and validity of the short version of the ECR when administered as a stand-alone instrument. Confirmatory factor analyses indicated that 2 factors, labeled Anxiety and Avoidance, provided a good fit to the data after removing the influence of response sets. We found validity to be equivalent for the short and the original versions of the ECR across studies. Finally, the results were comparable when we embedded the short form within the original version of the ECR and when we administered it as a stand-alone measure.

  11. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines.
In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, while other TCPs running in parallel provide high bandwidth service to a single application); and (3) coarse-grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism), also with near linear speed-ups.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aoki, Kenji

    A read/write head for a magnetic tape includes an elongated chip assembly and a tape running surface formed in the longitudinal direction of the chip assembly. A pair of spaced, substantially parallel read/write gap lines for supporting read/write elements extends longitudinally along the tape running surface of the chip assembly. Also, at least one groove is formed on the tape running surface on both sides of each of the read/write gap lines and extends substantially parallel to the read/write gap lines.

  13. The Reliability and Validity of the Coopersmith Self-Esteem Inventory-Form B.

    ERIC Educational Resources Information Center

    Chiu, Lian-Hwang

    1985-01-01

    The purpose of this study was to determine the test-retest reliability and concurrent validity of the short form (Form B) of the Coopersmith Self-Esteem Inventory. Criterion measures for validity included: (1) sociometric measures; (2) teacher's popularity ranking; and (3) self-esteem rating. (Author/LMO)

  14. Low-cost optical interconnect module for parallel optical data links

    NASA Astrophysics Data System (ADS)

    Noddings, Chad; Hirsch, Tom J.; Olla, M.; Spooner, C.; Yu, Jason J.

    1995-04-01

    We have designed, fabricated, and tested a prototype parallel ten-channel unidirectional optical data link. When scaled to production, we project that this technology will satisfy the following market penetration requirements: (1) a transmission distance of up to 70 meters, (2) a data rate of at least 1 gigabyte/second, and (3) a volume selling price of $0.35 to $0.50 per MByte/second. These goals can be achieved by means of the assembly innovations described in this paper: a novel alignment method that is integrated with low-cost, few-chip module packaging techniques, yielding high coupling efficiency and a reduced component count. Furthermore, high coupling efficiency increases projected reliability by reducing the driver's power requirements.

  15. A Parallel Independent Component Analysis Approach to Investigate Genomic Influence on Brain Function

    PubMed Central

    Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D.

    2009-01-01

    Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities including interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, and indicated reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and single nucleotide polymorphism array produced interesting findings. PMID:19834575

  16. A Parallel Independent Component Analysis Approach to Investigate Genomic Influence on Brain Function.

    PubMed

    Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D

    2008-01-01

    Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities including interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, and indicated reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and single nucleotide polymorphism array produced interesting findings.

  17. A Massively Parallel Bayesian Approach to Planetary Protection Trajectory Analysis and Design

    NASA Technical Reports Server (NTRS)

    Wallace, Mark S.

    2015-01-01

    The NASA Planetary Protection Office has levied a requirement that the upper stage of future planetary launches have a less than 10⁻⁴ chance of impacting Mars within 50 years after launch. A brute-force approach requires a decade of computer time to demonstrate compliance. By using a Bayesian approach and taking advantage of the demonstrated reliability of the upper stage, the required number of fifty-year propagations can be massively reduced. By spreading the remaining embarrassingly parallel Monte Carlo simulations across multiple computers, compliance can be demonstrated in a reasonable time frame. The method used is described here.
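The embarrassingly parallel structure of such a Monte Carlo compliance run can be illustrated with a minimal sketch: each worker owns an independently seeded random stream, hit counts are simply summed, and the impact probability is estimated from the pooled total. The "propagation" below is a placeholder predicate with an assumed probability, not a real 50-year trajectory integration:

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Stand-in impact probability (assumption for this sketch); a real study
# would propagate the upper stage for 50 years and test for a Mars encounter.
TRUE_P = 1e-3

def propagate_chunk(seed, n_trials):
    """One worker's share of trials. Each chunk owns an independently
    seeded RNG, so the total is reproducible regardless of scheduling."""
    rng = random.Random(seed)
    return sum(rng.random() < TRUE_P for _ in range(n_trials))

def estimate_impact_probability(total_trials, n_workers=8):
    """Farm the trials out to workers and pool the hit counts."""
    per_worker = total_trials // n_workers
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        hits = sum(pool.map(propagate_chunk, range(n_workers),
                            [per_worker] * n_workers))
    return hits / (per_worker * n_workers)

p_hat = estimate_impact_probability(200_000)
```

Since the chunks never communicate, the same code scales from threads on one machine to independent jobs on many machines, which is the property the record exploits.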

  18. Forward Period Analysis Method of the Periodic Hamiltonian System.

    PubMed

    Wang, Pengfei

    2016-01-01

    Using the forward period analysis (FPA), we obtain the periods of a Morse oscillator and of a mathematical pendulum system to an accuracy of 100 significant digits. From these results, the long-term solutions over [0, 10^60] (time units), a range reaching from the Planck time to beyond the age of the universe, are computed reliably and quickly with a parallel multiple-precision Taylor series (PMT) scheme. The application of FPA to periodic systems can greatly reduce the computation time of long-term reliable simulations. This scheme provides an efficient way to generate reference solutions, against which long-term simulations using other schemes can be tested.
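For the mathematical pendulum, a double-precision reference period is easy to produce from the closed form T = 4*sqrt(L/g)*K(sin(theta0/2)), evaluating the complete elliptic integral K via the arithmetic-geometric mean. This is only an illustrative baseline; the paper's FPA scheme targets 100 significant digits, far beyond what this sketch delivers:

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean; the iteration converges quadratically."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def pendulum_period(theta0, length=1.0, g=1.0):
    """Exact period of a mathematical pendulum with amplitude theta0:
    T = 4*sqrt(L/g)*K(k), k = sin(theta0/2), using the identity
    K(k) = pi / (2 * agm(1, sqrt(1 - k^2)))."""
    k = math.sin(theta0 / 2.0)
    K = math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))
    return 4.0 * math.sqrt(length / g) * K
```

Small amplitudes recover the textbook limit 2*pi*sqrt(L/g), while the period grows with amplitude, exactly the kind of reference value a long-term integrator can be checked against.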

  19. Fiber-optic microsphere-based arrays for multiplexed biological warfare agent detection.

    PubMed

    Song, Linan; Ahn, Soohyoun; Walt, David R

    2006-02-15

    We report a multiplexed high-density DNA array capable of rapid, sensitive, and reliable identification of potential biological warfare agents. An optical fiber bundle containing 6000 individual 3.1-μm-diameter fibers was chemically etched to yield microwells and used as the substrate for the array. Eighteen different 50-mer single-stranded DNA probes were covalently attached to 3.1-μm microspheres. Probe sequences were designed for Bacillus anthracis, Yersinia pestis, Francisella tularensis, Brucella melitensis, Clostridium botulinum, Vaccinia virus, and one biological warfare agent (BWA) simulant, Bacillus thuringiensis kurstaki. The microspheres were distributed into the microwells to form a randomized multiplexed high-density DNA array. A detection limit of 10 fM in a 50-μL sample volume was achieved within 30 min of hybridization for B. anthracis, Y. pestis, Vaccinia virus, and B. thuringiensis kurstaki. We used both specific responses of probes upon hybridization to complementary targets as well as response patterns of the multiplexed array to identify BWAs with high accuracy. We demonstrated the application of this multiplexed high-density DNA array for parallel identification of target BWAs in spiked sewage samples after PCR amplification. The array's miniaturized feature size, fabrication flexibility, reusability, and high reproducibility may enable this array platform to be integrated into a highly sensitive, specific, and reliable portable instrument for in situ BWA detection.

  20. Life Cycle Systems Engineering Approach to NASA's 2nd Generation Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Thomas, Dale; Smith, Charles; Safie, Fayssal; Kittredge, Sheryl

    2002-01-01

    The overall goal of the 2nd Generation RLV Program is to substantially reduce technical and business risks associated with developing a new class of reusable launch vehicles. NASA's specific goals are to improve the safety of a 2nd-generation system by 2 orders of magnitude - equivalent to a crew risk of 1-in-10,000 missions - and decrease the cost tenfold, to approximately $1,000 per pound of payload launched. Architecture definition is being conducted in parallel with the maturation of key technologies specifically identified to improve safety and reliability, while reducing operational costs. An architecture broadly includes an Earth-to-orbit reusable launch vehicle, on-orbit transfer vehicles and upper stages, mission planning, ground and flight operations, and support infrastructure, both on the ground and in orbit. The systems engineering approach ensures that the technologies developed - such as lightweight structures, long-life rocket engines, reliable crew escape, and robust thermal protection systems - will synergistically integrate into the optimum vehicle. Given a candidate architecture that possesses credible physical processes and realistic technology assumptions, the next set of analyses addresses the system's functionality across the spread of operational scenarios characterized by the design reference missions. The safety/reliability and cost/economics associated with operating the system will also be modeled and analyzed to answer the questions "How safe is it?" and "How much will it cost to acquire and operate?" The systems engineering review process factors in comprehensive budget estimates, detailed project schedules, and business and performance plans, against the goals of safety, reliability, and cost, in addition to overall technical feasibility. This approach forms the basis for investment decisions in the 2nd Generation RLV Program's risk-reduction activities.
Through this process, NASA will continually refine its specialized needs and identify where Defense and commercial requirements overlap those of civil missions.

  1. Dynamic Assessment of School-Age Children's Narrative Ability: An Experimental Investigation of Classification Accuracy

    ERIC Educational Resources Information Center

    Pena, Elizabeth D.; Gillam, Ronald B.; Malek, Melynn; Ruiz-Felter, Roxanna; Resendiz, Maria; Fiestas, Christine; Sabel, Tracy

    2006-01-01

    Two experiments examined reliability and classification accuracy of a narration-based dynamic assessment task. Purpose: The first experiment evaluated whether parallel results were obtained from stories created in response to 2 different wordless picture books. If so, the tasks and measures would be appropriate for assessing pretest and posttest…

  2. International physical activity questionnaire: reliability and validity of the Turkish version.

    PubMed

    Saglam, Melda; Arikan, Hulya; Savci, Sema; Inal-Ince, Deniz; Bosnak-Guclu, Meral; Karabulut, Erdem; Tokgozoglu, Lale

    2010-08-01

    Physical inactivity is a global problem which is related to many chronic health disorders. Physical activity scales which allow cross-cultural comparisons have been developed. The goal was to assess the reliability and validity of a Turkish version of the International Physical Activity Questionnaire (IPAQ). 1,097 university students (721 women, 376 men; ages 18-32) volunteered. Short and long forms of the IPAQ gave good agreement and comparable 1-wk. test-retest reliabilities. Caltrac accelerometer data were compared with IPAQ scores in 80 participants with good agreement for short and long forms. Turkish versions of the IPAQ short and long forms are reliable and valid in assessment of physical activity.

  3. Current characterization methods for cellulose nanomaterials.

    PubMed

    Foster, E Johan; Moon, Robert J; Agarwal, Umesh P; Bortner, Michael J; Bras, Julien; Camarero-Espinosa, Sandra; Chan, Kathleen J; Clift, Martin J D; Cranston, Emily D; Eichhorn, Stephen J; Fox, Douglas M; Hamad, Wadood Y; Heux, Laurent; Jean, Bruno; Korey, Matthew; Nieh, World; Ong, Kimberly J; Reid, Michael S; Renneckar, Scott; Roberts, Rose; Shatkin, Jo Anne; Simonsen, John; Stinson-Bagby, Kelly; Wanasekara, Nandula; Youngblood, Jeff

    2018-04-23

    A new family of materials composed of cellulose, cellulose nanomaterials (CNMs), having properties and functionalities distinct from molecular cellulose and wood pulp, is being developed for applications that were once thought impossible for cellulosic materials. Commercialization, paralleled by research in this field, is fueled by the unique combination of characteristics, such as high on-axis stiffness, sustainability, scalability, and mechanical reinforcement of a wide variety of materials, leading to their utility across a broad spectrum of high-performance material applications. However, this exponential growth in interest and activity has outpaced the development of the measurement protocols necessary for consistent, reliable, and accurate materials characterization. These protocols, developed in the broader research community, are critical for advancing the understanding, process optimization, and utilization of CNMs in materials development. This review establishes detailed best practices, methods, and techniques for characterizing CNM particle morphology, surface chemistry, surface charge, purity, crystallinity, rheological properties, mechanical properties, and toxicity for two distinct forms of CNMs: cellulose nanocrystals and cellulose nanofibrils.

  4. 76 FR 73608 - Reliability Technical Conference, North American Electric Reliability Corporation, Public Service...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-29

    ... or municipal authority play in forming your bulk power system reliability plans? b. Do you support..., North American Electric Reliability Corporation (NERC) Nick Akins, CEO of American Electric Power (AEP..., EL11-62-000] Reliability Technical Conference, North American Electric Reliability Corporation, Public...

  5. Efficient partitioning and assignment on programs for multiprocessor execution

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1993-01-01

    The general problem studied is that of segmenting or partitioning programs for distribution across a multiprocessor system. Efficient partitioning and the assignment of program elements are of great importance, since the time consumed in this overhead activity may easily dominate the computation, effectively eliminating any gains made by the use of parallelism. In this study, the partitioning of sequentially structured programs (written in FORTRAN) is evaluated. Heuristics developed for similar applications are examined. Finally, a model for queueing networks with finite queues is developed, which may be used to analyze multiprocessor system architectures with a shared-memory approach to the problem of partitioning. The properties of sequentially written programs form obstacles to large-scale (at the procedure or subroutine level) parallelization. Data dependencies of even the minutest nature, reflecting the sequential development of the program, severely limit parallelism. The design of heuristic algorithms is tied to the experience gained in the parallel splitting. Parallelism obtained through the physical separation of data has seen some success, especially at the data element level. Data parallelism on a grander scale requires models that accurately reflect the effects of blocking caused by finite queues. A model for the approximation of the performance of finite queueing networks is developed. This model makes use of the decomposition approach combined with the efficiency of product-form solutions.

  6. SIERRA - A 3-D device simulator for reliability modeling

    NASA Astrophysics Data System (ADS)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient squared (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.

  7. SAS and SPSS macros to calculate standardized Cronbach's alpha using the upper bound of the phi coefficient for dichotomous items.

    PubMed

    Sun, Wei; Chou, Chih-Ping; Stacy, Alan W; Ma, Huiyan; Unger, Jennifer; Gallaher, Peggy

    2007-02-01

    Cronbach's α is widely used in social science research to estimate the internal consistency reliability of a measurement scale. However, when items are not strictly parallel, the Cronbach's α coefficient provides a lower-bound estimate of true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach's α for a scale with dichotomous items can be improved by using the upper bound of the coefficient phi. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach's α via this method. The simulation analysis showed that Cronbach's α from upper-bound phi might be appropriate for estimating the real reliability when standardized Cronbach's α is problematic.
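The standardized coefficient itself is simple to state: with k items and mean inter-item correlation r_bar, standardized alpha = k*r_bar / (1 + (k-1)*r_bar). A minimal Python rendering of that formula (not the article's SAS/SPSS macros, and without the upper-bound-phi correction for dichotomous items) looks like:

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Population Pearson correlation between two item-score vectors."""
    mx, my = mean(x), mean(y)
    cov = mean([(a - mx) * (b - my) for a, b in zip(x, y)])
    return cov / (pstdev(x) * pstdev(y))

def standardized_alpha(items):
    """Standardized Cronbach's alpha from the mean inter-item
    correlation r_bar: alpha = k * r_bar / (1 + (k - 1) * r_bar)."""
    k = len(items)
    r_bar = mean([pearson(items[i], items[j])
                  for i in range(k) for j in range(i + 1, k)])
    return k * r_bar / (1 + (k - 1) * r_bar)

# Hypothetical item-score matrix: one row per item, one column per respondent
scale = [[3, 4, 2, 5, 4], [2, 4, 1, 5, 3], [3, 5, 2, 4, 4]]
```

When all items correlate perfectly (r_bar = 1) the formula collapses to alpha = 1, and lower mean correlations pull the estimate down, which is the lower-bound behavior the abstract describes.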

  8. Application of the DMRG in two dimensions: a parallel tempering algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Shijie; Zhao, Jize; Zhang, Xuefeng; Eggert, Sebastian

    The Density Matrix Renormalization Group (DMRG) is known to be a powerful algorithm for treating one-dimensional systems. When the DMRG is applied in two dimensions, however, the convergence becomes much less reliable, and "metastable states" typically appear that are unfortunately quite robust even when a very high number of DMRG states is kept. To overcome this problem we have now successfully developed a parallel tempering DMRG algorithm. Similar to parallel tempering in quantum Monte Carlo, this algorithm allows the systematic switching of DMRG states between different model parameters, which is very efficient for solving convergence problems. Using this method we have determined the phase diagram of the XXZ model on the anisotropic triangular lattice, which can be realized by hardcore bosons in optical lattices. Supported by SFB Transregio 49 of the Deutsche Forschungsgemeinschaft (DFG) and the Allianz für Hochleistungsrechnen Rheinland-Pfalz (AHRP).

  9. PSHFT - COMPUTERIZED LIFE AND RELIABILITY MODELLING FOR TURBOPROP TRANSMISSIONS

    NASA Technical Reports Server (NTRS)

    Savage, M.

    1994-01-01

    The computer program PSHFT calculates the life of a variety of aircraft transmissions. A generalized life and reliability model is presented for turboprop and parallel shaft geared prop-fan aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on the statistical two-parameter Weibull failure distribution method and classical fatigue theories. The computer program developed to calculate the transmission model is modular. In its present form, the program can analyze five different transmission arrangements. Moreover, the program can be easily modified to include additional transmission arrangements. PSHFT uses the properties of a common block two-dimensional array to separate the component and transmission property values from the analysis subroutines. The rows correspond to specific components, with the first row containing the values for the entire transmission. Columns contain the values for specific properties. Since the subroutines (which determine the transmission life and dynamic capacity) interface solely with this property array, they are separated from any specific transmission configuration. The system analysis subroutines work in an identical manner for all transmission configurations considered. Thus, other configurations can be added to the program by simply adding component property determination subroutines. PSHFT consists of a main program, a series of configuration-specific subroutines, generic component property analysis subroutines, system analysis subroutines, and a common block. The main program selects the routines to be used in the analysis and sequences their operation. 
    The series of configuration-specific subroutines input the configuration data, perform the component force and life analyses (with the help of the generic component property analysis subroutines), fill the property array, call up the system analysis routines, and finally print out the analysis results for the system and components. PSHFT is written in FORTRAN 77 and compiled with the Microsoft FORTRAN compiler. The program will run on an IBM PC AT compatible with at least 104K bytes of memory. The program was developed in 1988.
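The statistical core of such a model, a two-parameter Weibull survival probability for each bearing and gear multiplied across a series load path, can be sketched briefly. The component parameters below are hypothetical placeholders, not values from PSHFT:

```python
import math

def weibull_reliability(t, eta, beta):
    """Two-parameter Weibull survival probability R(t) = exp(-(t/eta)^beta)
    (eta: characteristic life, beta: Weibull slope / shape parameter)."""
    return math.exp(-((t / eta) ** beta))

def system_reliability(t, components):
    """Series-system reliability for a main load path: every bearing and
    gear must survive, so the component reliabilities multiply."""
    r = 1.0
    for eta, beta in components:
        r *= weibull_reliability(t, eta, beta)
    return r

# Hypothetical (eta hours, beta) pairs for three drivetrain components
drivetrain = [(9000.0, 1.5), (12000.0, 2.0), (15000.0, 1.2)]
r_4000h = system_reliability(4000.0, drivetrain)
```

The multiplicative combination is what makes the model modular: adding a transmission arrangement only means supplying that arrangement's component (eta, beta) pairs.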

  10. Validation and reliability of the Turkish Utian Quality-of-Life Scale in postmenopausal women.

    PubMed

    Abay, Halime; Kaplan, Sena

    2016-04-01

    There are a limited number of menopause-specific quality-of-life scales for the Turkish population. This study was conducted to evaluate the validity and reliability of the Turkish Utian Quality-of-Life Scale in postmenopausal women. The study group comprised 250 postmenopausal women who applied to a training and research hospital's menopause clinic in Turkey. A survey form and the Turkish Utian Quality-of-Life Scale were used to collect data, and the Turkish version of the Short Form-36 was used to evaluate reliability with an equivalent form. Language-validity, content-validity, and construct-validity methods were used to assess the validity of the scale, and Cronbach's α coefficient calculation and the equivalent-form reliability methods were used to assess the reliability of the scale. The Turkish Utian Quality-of-Life Scale was determined to be a valid and reliable instrument for measuring the quality of life of postmenopausal women. Confirmatory factor analysis demonstrated that the instrument fits well with 23 items and a four-factor model. The Cronbach's α coefficients for the quality-of-life domains were as follows: 0.88 overall, 0.79 health, 0.78 emotional, 0.76 sexual, and 0.75 occupational. Reliability of the instrument was confirmed through significant correlations between scores on the Turkish version of the Utian Quality-of-Life Scale and the Turkish version of the Short Form-36 (r = 0.745, P < 0.001). This research emphasizes that the Turkish Utian Quality-of-Life Scale is reliable and valid in postmenopausal women and a useful instrument for measuring quality of life during menopause.

  11. CONTAMINANT TRANSPORT IN PARALLEL FRACTURED MEDIA: SUDICKY AND FRIND REVISITED

    EPA Science Inventory

    This paper is concerned with a modified, nondimensional form of the parallel fracture, contaminant transport model of Sudicky and Frind (1982). The modifications include the boundary condition at the fracture wall, expressed by a parameter, and the power-law relationship between...

  12. CONTAMINANT TRANSPORT IN PARALLEL FRACTURED MEDIA: SUDICKY AND FRIND REVISITED

    EPA Science Inventory

    This paper is concerned with a modified, nondimensional form of the parallel fracture, contaminant transport model of Sudicky and Frind (1982). The modifications include the boundary condition at the fracture wall, expressed by a parameter , and the power-law relationship betwe...

  13. Systems Engineering Approach to Technology Integration for NASA's 2nd Generation Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Thomas, Dale; Smith, Charles; Thomas, Leann; Kittredge, Sheryl

    2002-01-01

    The overall goal of the 2nd Generation RLV Program is to substantially reduce technical and business risks associated with developing a new class of reusable launch vehicles. NASA's specific goals are to improve the safety of a 2nd-generation system by 2 orders of magnitude - equivalent to a crew risk of 1-in-10,000 missions - and decrease the cost tenfold, to approximately $1,000 per pound of payload launched. Architecture definition is being conducted in parallel with the maturation of key technologies specifically identified to improve safety and reliability, while reducing operational costs. An architecture broadly includes an Earth-to-orbit reusable launch vehicle, on-orbit transfer vehicles and upper stages, mission planning, ground and flight operations, and support infrastructure, both on the ground and in orbit. The systems engineering approach ensures that the technologies developed - such as lightweight structures, long-life rocket engines, reliable crew escape, and robust thermal protection systems - will synergistically integrate into the optimum vehicle. To best direct technology development decisions, analytical models are employed to accurately predict the benefits of each technology toward potential space transportation architectures as well as the risks associated with each technology. Rigorous systems analysis provides the foundation for assessing progress toward safety and cost goals. The systems engineering review process factors in comprehensive budget estimates, detailed project schedules, and business and performance plans, against the goals of safety, reliability, and cost, in addition to overall technical feasibility. This approach forms the basis for investment decisions in the 2nd Generation RLV Program's risk-reduction activities. Through this process, NASA will continually refine its specialized needs and identify where Defense and commercial requirements overlap those of civil missions.

  15. Bristol Stool Form Scale reliability and agreement decreases when determining Rome III stool form designations

    USDA-ARS?s Scientific Manuscript database

    Rater reproducibility of the Bristol Stool Form Scale (BSFS), which categorizes stools into one of seven types, is unknown. We sought to determine reliability and agreement by individual stool type and when responses are categorized by Rome III clinical designation as normal or abnormal (constipatio...

  16. Conversion between parallel and antiparallel β -sheets in wild-type and Iowa mutant Aβ40 fibrils

    NASA Astrophysics Data System (ADS)

    Xi, Wenhui; Hansmann, Ulrich H. E.

    2018-01-01

    Using a variant of Hamiltonian replica exchange, we study, for wild-type and Iowa mutant Aβ40, the conversion between fibrils with antiparallel β-sheets and fibrils with parallel β-sheets. We show that the wild type and the mutant form distinct salt bridges that in turn stabilize different fibril organizations. The conversion between the two fibril forms leads to the release of small aggregates that in the Iowa mutant may shift the equilibrium from fibrils to more toxic oligomers.

  17. Decentralized diagnostics based on a distributed micro-genetic algorithm for transducer networks monitoring large experimental systems.

    PubMed

    Arpaia, P; Cimmino, P; Girone, M; La Commara, G; Maisto, D; Manna, C; Pezzetti, M

    2014-09-01

    An evolutionary approach to centralized multiple-fault diagnostics is extended to distributed transducer networks monitoring large experimental systems. Given a set of anomalies detected by the transducers, each instance of the multiple-fault problem is formulated as several parallel communicating sub-tasks running on different transducers, and thus solved one by one on spatially separated parallel processes. A micro-genetic algorithm merges the evaluation-time efficiency arising from a small population distributed on parallel synchronized processors with the effectiveness of centralized evolutionary techniques due to an optimal mix of exploitation and exploration. In this way, the holistic view and effectiveness advantages of evolutionary global diagnostics are combined with the reliability and efficiency benefits of distributed parallel architectures. The proposed approach was validated both (i) by simulation at CERN, on a case study of a cold box for enhancing the cryogenics diagnostics of the Large Hadron Collider, and (ii) by experiments, under the framework of the industrial research project MONDIEVOB (Building Remote Monitoring and Evolutionary Diagnostics), co-funded by the EU and the company Del Bo srl, Napoli, Italy.
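A micro-genetic algorithm in its generic, single-process form keeps a very small population, breeds among the fittest with elitism, and restarts from fresh random individuals whenever the population converges. The sketch below illustrates that loop on a toy minimization problem; it is not the distributed, transducer-resident variant described in the record:

```python
import random

def micro_ga(fitness, dim, bounds, generations=400, pop_size=5, seed=1):
    """Micro-genetic algorithm: a tiny elitist population, crossover-only
    breeding among the fittest, and a restart from fresh random
    individuals whenever the population converges."""
    rng = random.Random(seed)
    lo, hi = bounds

    def rand_ind():
        return [rng.uniform(lo, hi) for _ in range(dim)]

    def converged(pop):
        return all(abs(v - w) < 1e-3
                   for ind in pop for v, w in zip(ind, pop[0]))

    pop = [rand_ind() for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        pop.sort(key=fitness)
        if fitness(pop[0]) < fitness(best):
            best = list(pop[0])
        if converged(pop):
            # Restart: keep the elite, refill the rest at random.
            pop = [list(best)] + [rand_ind() for _ in range(pop_size - 1)]
            continue
        children = [list(pop[0])]  # elitism: the best survives unchanged
        while len(children) < pop_size:
            p1, p2 = rng.sample(pop[:3], 2)  # mate among the fittest
            cut = rng.randrange(1, dim) if dim > 1 else 0
            children.append(p1[:cut] + p2[cut:])
        pop = children
    return best

# Toy usage: minimize the sphere function in three dimensions
best = micro_ga(lambda x: sum(v * v for v in x), 3, (-5.0, 5.0))
```

The restart mechanism substitutes for mutation: the tiny population converges quickly, and each restart re-injects the diversity needed to keep exploring, which is the exploitation/exploration mix the abstract refers to.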

  18. Robust Segmentation of Overlapping Cells in Histopathology Specimens Using Parallel Seed Detection and Repulsive Level Set

    PubMed Central

    Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin

    2013-01-01

    Automated image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of breast cancer. Automated segmentation of the cells comprising imaged tissue microarrays (TMA) is a prerequisite for any subsequent quantitative analysis. Unfortunately, crowding and overlapping of cells present significant challenges for most traditional segmentation algorithms. In this paper, we propose a novel algorithm which can reliably separate touching cells in hematoxylin stained breast TMA specimens which have been acquired using a standard RGB camera. The algorithm is composed of two steps. It begins with a fast, reliable object center localization approach which utilizes single-path voting followed by mean-shift clustering. Next, the contour of each cell is obtained using a level set algorithm based on an interactive model. We compared the experimental results with those reported in the most current literature. Finally, performance was evaluated by comparing the pixel-wise accuracy provided by human experts with that produced by the new automated segmentation algorithm. The method was systematically tested on 234 image patches exhibiting dense overlap and containing more than 2200 cells. It was also tested on whole slide images including blood smears and tissue microarrays containing thousands of cells. Since the voting step of the seed detection algorithm is well suited for parallelization, a parallel version of the algorithm was implemented using graphics processing units (GPUs), which resulted in significant speed-up over the C/C++ implementation. PMID:22167559

  19. Parallel language constructs for tensor product computations on loosely coupled architectures

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Vanrosendale, John

    1989-01-01

    Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low-level programming environment and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is addressed first; then it is examined how such parallel kernels can be combined to form parallel tensor product algorithms.

  20. Parallel computation using boundary elements in solid mechanics

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1990-01-01

    The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming the linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. First, the global coefficient matrix is formed concurrently. Then, a parallel Gaussian elimination scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for a demonstration problem solved on the Sequent Symmetry S81 parallel computing system.
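    The first two parallelized components (concurrent formation of the global coefficient matrix, followed by a solve) can be sketched generically. This is a minimal illustration, not the original Sequent implementation: `influence_row` is a hypothetical stand-in for the boundary-element influence coefficients, rows are assembled concurrently with a thread pool, and a dense library solve stands in for the parallel Gaussian elimination.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def influence_row(i, n):
    """Hypothetical stand-in for one row of boundary-element influence
    coefficients (the real ones come from element integrals evaluated
    symbolically, e.g. in MACSYMA)."""
    return np.array([1.0 / (1.0 + abs(i - j)) for j in range(n)])

def assemble_parallel(n, workers=4):
    """Each row of the global coefficient matrix is formed concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rows = list(pool.map(lambda i: influence_row(i, n), range(n)))
    return np.vstack(rows)

n = 8
A = assemble_parallel(n)
b = np.ones(n)                 # boundary data (illustrative)
x = np.linalg.solve(A, b)      # dense solve in place of parallel Gaussian elimination
```

    Because each row depends only on its own index, the assembly is embarrassingly parallel, which is the property the paper exploits on the Sequent machine.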

  1. An approach to enhance pnetCDF performance in environmental modeling applications

    EPA Science Inventory

    Data intensive simulations are often limited by their I/O (input/output) performance, and "novel" techniques need to be developed in order to overcome this limitation. The software package pnetCDF (parallel network Common Data Form), which works with parallel file syste...

  2. Geopotential Error Analysis from Satellite Gradiometer and Global Positioning System Observables on Parallel Architecture

    NASA Technical Reports Server (NTRS)

    Schutz, Bob E.; Baker, Gregory A.

    1997-01-01

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.

  3. Geopotential error analysis from satellite gradiometer and global positioning system observables on parallel architectures

    NASA Astrophysics Data System (ADS)

    Baker, Gregory Allen

    The recovery of a high resolution geopotential from satellite gradiometer observations motivates the examination of high performance computational techniques. The primary subject matter addresses specifically the use of satellite gradiometer and GPS observations to form and invert the normal matrix associated with a large degree and order geopotential solution. Memory resident and out-of-core parallel linear algebra techniques along with data parallel batch algorithms form the foundation of the least squares application structure. A secondary topic includes the adoption of object oriented programming techniques to enhance modularity and reusability of code. Applications implementing the parallel and object oriented methods successfully calculate the degree variance for a degree and order 110 geopotential solution on 32 processors of the Cray T3E. The memory resident gradiometer application exhibits an overall application performance of 5.4 Gflops, and the out-of-core linear solver exhibits an overall performance of 2.4 Gflops. The combination solution derived from a sun-synchronous gradiometer orbit produces average geoid height variances of 17 millimeters.

  4. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless test compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  5. Assessing reliability of short and tick box forms of the ANU-ADRI: Convenient alternatives of a self-report Alzheimer's disease risk assessment.

    PubMed

    Kim, Sarang; Cherbuin, Nicolas; Anstey, Kaarin J

    2016-06-01

    To assess the reliability of short versions of the Australian National University Alzheimer's Disease Risk Index (ANU-ADRI). A short form of the ANU-ADRI (ANU-ADRI-SF) was developed by assessing risk and protective factors with single questions where possible and with short forms of sub-questionnaires where available. The tick-box form of the ANU-ADRI (ANU-ADRI-TB) was developed with unique questions for each risk and protective factor for Alzheimer's disease. The short versions were evaluated in an independent community sample of 504 participants with a mean age of 45.01 years (SD = 14.85, range = 18-81). The short versions demonstrated high reliability when compared with the ANU-ADRI. However, the proportion of misclassification was high for some risk factors, particularly for the ANU-ADRI-TB. The ANU-ADRI-SF may be considered if its less reliable questions can be replaced with more reliable questions from the full ANU-ADRI for risk/protective factors with high misclassification.

  6. Reliability and equivalence of alternate forms for the Symbol Digit Modalities Test: implications for multiple sclerosis clinical trials.

    PubMed

    Benedict, Ralph H B; Smerbeck, Audrey; Parikh, Rajavi; Rodgers, Jonathan; Cadavid, Diego; Erlanger, David

    2012-09-01

    Cognitive impairment is common in multiple sclerosis (MS), but is seldom assessed in clinical trials investigating the effects of disease-modifying therapies. The Symbol Digit Modalities Test (SDMT) is a particularly promising tool due to its sensitivity and robust correlation with brain magnetic resonance imaging (MRI) and vocational disability. Unfortunately, there are no validated alternate SDMT forms, which are needed to mitigate practice effects. The aim of the study was to assess the reliability and equivalence of SDMT alternate forms. Twenty-five healthy participants completed each of five alternate versions of the SDMT - the standard form, two versions from the Rao Brief Repeatable Battery, and two forms specifically designed for this study. Order effects were controlled using a Latin-square research design. All five versions of the SDMT produced mean values within 3 raw score points of one another. Three forms were very consistent, and not different by conservative statistical tests. The SDMT test-retest reliability using these forms was good to excellent, with all r values exceeding 0.80. For the first time, we find good evidence that at least three alternate versions of the SDMT are of equivalent difficulty in healthy adults. The forms are reliable, and can be implemented in clinical trials emphasizing cognitive outcomes.

  7. Optimization Based Efficiencies in First Order Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Peck, Jeffrey A.; Mahadevan, Sankaran

    2003-01-01

    This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank-one updating technique. In problems that use a commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
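    The BFORM idea — one initial finite-difference gradient, then Broyden rank-one secant updates of the gradient inside the usual FORM (HL-RF) iteration — can be sketched as follows. The limit state function `g`, starting point, and tolerances below are illustrative assumptions, not the paper's example problems.

```python
import numpy as np

def g(u):
    # illustrative limit state in standard normal space (not from the paper)
    return 3.0 - u[0] - u[1] - 0.1 * u[0] * u[1]

def fd_grad(f, u, h=1e-6):
    """Central finite-difference gradient: the expensive black-box step
    that BFORM performs only once."""
    return np.array([(f(u + h * e) - f(u - h * e)) / (2 * h)
                     for e in np.eye(len(u))])

def bform(g, u0, tol=1e-8, max_iter=50):
    u = np.asarray(u0, float)
    grad = fd_grad(g, u)          # one full finite-difference gradient
    gu = g(u)
    for _ in range(max_iter):
        # HL-RF step toward the most probable point (MPP)
        u_new = (grad @ u - gu) / (grad @ grad) * grad
        gu_new = g(u_new)
        du = u_new - u
        if du @ du > 0:
            # Broyden rank-one secant update of the gradient vector
            grad = grad + ((gu_new - gu - grad @ du) / (du @ du)) * du
        u, gu = u_new, gu_new
        if np.linalg.norm(du) < tol:
            break
    return np.linalg.norm(u), u   # reliability index beta, MPP

beta, u_star = bform(g, [0.0, 0.0])
```

    Each iteration costs a single evaluation of `g`, whereas repeating the finite-difference gradient would cost 2n evaluations per step for n random variables.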

  8. An intercalation-locked parallel-stranded DNA tetraplex

    DOE PAGES

    Tripathi, S.; Zhang, D.; Paukstelis, P. J.

    2015-01-27

    DNA has proved to be an excellent material for nanoscale construction because complementary DNA duplexes are programmable and structurally predictable. However, in the absence of Watson–Crick pairings, DNA can be structurally more diverse. Here, we describe the crystal structures of d(ACTCGGATGAT) and the brominated derivative, d(ACBrUCGGABrUGAT). These oligonucleotides form parallel-stranded duplexes with a crystallographically equivalent strand, resulting in the first examples of DNA crystal structures that contain four different symmetric homo base pairs. Two of the parallel-stranded duplexes are coaxially stacked in opposite directions and locked together to form a tetraplex through intercalation of the 5'-most A–A base pairs between adjacent G–G pairs in the partner duplex. The intercalation region is a new type of DNA tertiary structural motif with similarities to the i-motif. 1H–1H nuclear magnetic resonance and native gel electrophoresis confirmed the formation of a parallel-stranded duplex in solution. Finally, we modified specific nucleotide positions and added d(GAY) motifs to oligonucleotides and were readily able to obtain similar crystals. This suggests that this parallel-stranded DNA structure may be useful in the rational design of DNA crystals and nanostructures.

  9. Sulfur isotopes of host strata for Howards Pass (Yukon–Northwest Territories) Zn-Pb deposits implicate anaerobic oxidation of methane, not basin stagnation

    USGS Publications Warehouse

    Johnson, Craig A.; Slack, John F.; Dumoulin, Julie A.; Kelley, Karen Duttweiler; Falck, Hendrik

    2018-01-01

    A new sulfur isotope stratigraphic profile has been developed for Ordovician-Silurian mudstones that host the Howards Pass Zn-Pb deposits (Canada) in an attempt to reconcile the traditional model of a stagnant euxinic basin setting with new contradictory findings. Our analyses of pyrite confirm the up-section 34S enrichment reported previously, but additional observations show parallel depletion of carbonate 13C, an increase in organic carbon weight percent, and a change in pyrite morphology. Taken together, the data suggest that the 34S enrichment reflects a transition in the mechanism of pyrite formation during diagenesis, not isotopic evolution of a stagnant water mass. Low in the stratigraphic section, pyrite formed mainly in the sulfate reduction zone in association with organic matter–driven bacterial sulfate reduction. In contrast, starting just below the Zn-Pb mineralized horizon, pyrite formed increasingly within the sulfate-methane transition zone in association with anaerobic oxidation of methane. Our new insights on diagenesis have implications for (1) the setting of Zn-Pb ore formation, (2) the reliability of redox proxies involving metals, and (3) the source of ore sulfur for Howards Pass, and potentially for other stratiform Zn-Pb deposits contained in carbonaceous strata.

  10. How do cardiorespiratory fitness improvements vary with physical training modality in heart failure patients? A quantitative guide

    PubMed Central

    Smart, Neil A

    2013-01-01

    BACKGROUND: Peak oxygen consumption (VO2) is the gold standard measure of cardiorespiratory fitness and a reliable predictor of survival in chronic heart failure patients. Furthermore, any form of physical training usually improves cardiorespiratory fitness, although the magnitude of improvement in peak VO2 may vary across different training prescriptions. OBJECTIVE: To quantify, and subsequently rank, the magnitude of improvement in peak VO2 for different physical training prescriptions using data from published meta-analyses and randomized controlled trials. METHODS: Prospective randomized controlled parallel trials and meta-analyses of exercise training in chronic heart failure patients that provided data on change in peak VO2 for nine a priori comparative analyses were examined. RESULTS: All forms of physical training were beneficial, although the improvement in peak VO2 varied with modality. High-intensity interval exercise yielded the largest increase in peak VO2, followed in descending order by moderate-intensity aerobic exercise, functional electrical stimulation, inspiratory muscle training, combined aerobic and resistance training, and isolated resistance training. With regard to setting, the present study was unable to determine whether outpatient or unsupervised home exercise provided greater benefits in terms of peak VO2 improvement. CONCLUSIONS: Interval exercise is not suitable for all patients, especially the high-intensity variety; however, when indicated, this form of exercise should be adopted to optimize peak VO2 adaptations. Other forms of activity, such as functional electrical stimulation, may be more appropriate for patients who are not capable of high-intensity interval training, especially for severely deconditioned patients who are initially unable to exercise. PMID:24294043

  11. Reliable protocol for shear wave elastography of lower limb muscles at rest and during passive stretching.

    PubMed

    Dubois, Guillaume; Kheireddine, Walid; Vergari, Claudio; Bonneau, Dominique; Thoreux, Patricia; Rouch, Philippe; Tanter, Mickael; Gennisson, Jean-Luc; Skalli, Wafa

    2015-09-01

    Development of shear wave elastography gave access to non-invasive muscle stiffness assessment in vivo. The aim of the present study was to define a measurement protocol to be used in clinical routine for quantifying the shear modulus of lower limb muscles. Four positions were defined to evaluate shear modulus in 10 healthy subjects: parallel to the fibers, in the anterior and posterior aspects of the lower limb, at rest and during passive stretching. Reliability was first evaluated on two muscles by three operators; these measurements were repeated six times. Then, measurement reliability was compared in 11 muscles by two operators; these measurements were repeated three times. Reproducibility of shear modulus was 0.48 kPa and repeatability was 0.41 kPa, with all muscles pooled. Position did not significantly influence reliability. Shear wave elastography appeared to be an appropriate and reliable tool to evaluate the shear modulus of lower limb muscles with the proposed protocol. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  12. Wikipedia and the Wisdom of Crowds: A Student Project

    ERIC Educational Resources Information Center

    Barnhisel, Greg; Rapchak, Marcia

    2014-01-01

    Students in a senior English class examined the question of whether the "wisdom of experts" or "the wisdom of crowds" is more reliable and useful in a writing course by engaging in a parallel Wikipedia project. Each student either created a new entry or made significant changes to an existing Wikipedia entry, tracked changes to…

  13. An overview of computational simulation methods for composite structures failure and life analysis

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1993-01-01

    Three parallel computational simulation methods are being developed at the LeRC Structural Mechanics Branch (SMB) for composite structures failure and life analysis: progressive fracture CODSTRAN; hierarchical methods for high-temperature composites; and probabilistic evaluation. Results to date demonstrate that these methods are effective in simulating composite structures failure/life/reliability.

  14. A Teratogenic Deformity Index for Evaluating Impacts of Selenium on Fish Populations

    Treesearch

    A. Dennis Lemly

    1997-01-01

    This paper describes a method for using teratogenic deformities in fish as the basis for evaluating impacts of selenium contamination. Teratogenic deformities are reliable bioindicators of selenium toxicosis in fish. They are produced in response to dietary exposure of parent fish and subsequent deposition of selenium in eggs. There is a close parallel between...

  15. The Relationship of Language and Symbolic Play in Children with Hearing Loss.

    ERIC Educational Resources Information Center

    Yoshinaga-Itano, Christine; Snyder, Lynn S.; Day, Diane

    1999-01-01

    The internal reliability and concurrent validity of the Play Assessment Questionnaire (PAQ) was compared to that of the Minnesota Child Development Inventory with 170 deaf or hard of hearing infants and toddlers. The PAQ was found to be a useful nonverbal tool that assesses symbolic play behaviors and demonstrates a parallel development with…

  16. Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels

    ERIC Educational Resources Information Center

    Wang, Han

    2010-01-01

    Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…

  17. Reliable inverter systems

    NASA Technical Reports Server (NTRS)

    Nagano, S.

    1979-01-01

    Base driver with common-load-current feedback protects paralleled inverter systems from open or short circuits. Circuit eliminates total system oscillation that can occur in conventional inverters because of open circuit in primary transformer winding. Common feedback signal produced by functioning modules forces operating frequency of failed module to coincide with clock drive so module resumes normal operating frequency in spite of open circuit.

  18. Comparative assessment of recombinant and native immunogenic forms of Fasciola hepatica proteins for serodiagnosis of sheep fasciolosis.

    PubMed

    Mokhtarian, Kobra; Meamar, Ahmad Reza; Khoshmirsafa, Majid; Razmjou, Elham; Masoori, Leila; Khanmohammadi, Majid; Akhlaghi, Lame; Falak, Reza

    2018-01-01

    Laboratory diagnosis of sheep fasciolosis is commonly performed by coprological examinations; however, this method may lead to false negative results during the acute phase of the infection. Furthermore, the poor sensitivity of coprological methods is considered to be a paradox in the chronic phase of the infection. In this study, we compared the immunoreactivity of native and recombinant forms of Fasciola hepatica excretory/secretory antigens and determined their capabilities for the development of F. hepatica-specific immunoassays. Immunoreactivity and specificity of recombinant and native forms of F. hepatica antigens, including fatty acid binding protein (FABP), glutathione-S-transferase (GST), and cathepsin L-1 (CL1), in parallel with native forms of FABP and GST, were studied for serodiagnosis of the chronic form of sheep fasciolosis, individually or in combination with each other by enzyme-linked immunosorbent assays (ELISA). The correlation of the findings was assessed by receiver-operator characteristic (ROC); furthermore, the specificity and sensitivity were assessed by Youden's J. Serologic cross-reactivity was evaluated using samples from healthy sheep (n = 40), Fasciola-infected sheep (n = 30), and sheep with other parasitic infections (n = 43). The FABPs were determined to be greater than 95% sensitive for F. hepatica serodiagnosis. The most desirable diagnostic recombinant antigen was rCL1, which showed 100% sensitivity and 97% specificity in ELISA and was capable of discriminating the positive and negative samples by maximum Youden's J results. We conclude that rCL1 can be used for routine serodiagnosis of chronic fasciolosis. Thus, it could be advantageous in development of immunoassays for screening of ovine herds in fasciolosis-endemic areas and as a reliable agent for detection of fasciolosis in non-endemic regions.

  19. Parallel processing in the honeybee olfactory pathway: structure, function, and evolution.

    PubMed

    Rössler, Wolfgang; Brill, Martin F

    2013-11-01

    Animals face highly complex and dynamic olfactory stimuli in their natural environments, which require fast and reliable olfactory processing. Parallel processing is a common principle of sensory systems supporting this task, for example in visual and auditory systems, but its role in olfaction remained unclear. Studies in the honeybee focused on a dual olfactory pathway. Two sets of projection neurons connect glomeruli in two antennal-lobe hemilobes via lateral and medial tracts in opposite sequence with the mushroom bodies and lateral horn. Comparative studies suggest that this dual-tract circuit represents a unique adaptation in Hymenoptera. Imaging studies indicate that glomeruli in both hemilobes receive redundant sensory input. Recent simultaneous multi-unit recordings from projection neurons of both tracts revealed widely overlapping response profiles strongly indicating parallel olfactory processing. Whereas lateral-tract neurons respond fast with broad (generalistic) profiles, medial-tract neurons are odorant specific and respond slower. In analogy to "what-" and "where" subsystems in visual pathways, this suggests two parallel olfactory subsystems providing "what-" (quality) and "when" (temporal) information. Temporal response properties may support across-tract coincidence coding in higher centers. Parallel olfactory processing likely enhances perception of complex odorant mixtures to decode the diverse and dynamic olfactory world of a social insect.

  20. Testing for carryover effects after cessation of treatments: a design approach.

    PubMed

    Sturdevant, S Gwynn; Lumley, Thomas

    2016-08-02

    Recently, trials addressing noisy measurements with diagnosis occurring by exceeding thresholds (such as diabetes and hypertension) have been published which attempt to measure carryover - the impact that treatment has on an outcome after cessation. The design of these trials has been criticised, and simulations have been conducted which suggest that the parallel designs used are not adequate to test this hypothesis; two proposed solutions are that either a different parallel design or a cross-over design could allow for diagnosis of carryover. We undertook a systematic simulation study to determine the ability of a cross-over or a parallel-group trial design to detect carryover effects on incident hypertension in a population with prehypertension. We simulated blood pressure and focused on varying the criteria used to diagnose systolic hypertension. Using the difference in cumulative incidence of hypertension to analyse parallel-group or cross-over trials resulted in none of the designs having an acceptable Type I error rate: under the null hypothesis of no carryover, the rejection rate is well above the nominal 5% level. When a treatment is effective during the intervention period, reliable testing for a carryover effect is difficult. Neither parallel-group nor cross-over designs using the difference in cumulative incidence appear to be a feasible approach. Future trials should ensure their design and analysis is validated by simulation.
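    The kind of Monte Carlo check described — estimating a design's Type I error rate by repeatedly simulating trials under the null hypothesis and counting false rejections — can be sketched for a simple two-arm parallel-group comparison. The incidence, sample size, and two-proportion z-test below are illustrative assumptions, not the paper's blood-pressure model (which is where the inflation arises).

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n_per_arm=200, p_incidence=0.3):
    """Two-arm parallel-group trial simulated under the null: the same
    incidence in both arms. Returns True if a two-sided two-proportion
    z-test (falsely) rejects at the 5% level."""
    a = rng.binomial(n_per_arm, p_incidence)   # events in arm A
    b = rng.binomial(n_per_arm, p_incidence)   # events in arm B
    p_pool = (a + b) / (2 * n_per_arm)
    se = np.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
    if se == 0:
        return False
    z = (a - b) / n_per_arm / se
    return abs(z) > 1.959964                   # two-sided 5% critical value

n_sim = 5000
type1 = sum(one_trial() for _ in range(n_sim)) / n_sim
```

    A well-calibrated design should give `type1` close to 0.05; in the paper's threshold-diagnosis setting the analogous estimate comes out far above nominal, which is the central finding.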

  1. Mitochondrial gene rearrangements confirm the parallel evolution of the crab-like form.

    PubMed Central

    Morrison, C L; Harvey, A W; Lavery, S; Tieu, K; Huang, Y; Cunningham, C W

    2002-01-01

    The repeated appearance of strikingly similar crab-like forms in independent decapod crustacean lineages represents a remarkable case of parallel evolution. Uncertainty surrounding the phylogenetic relationships among crab-like lineages has hampered evolutionary studies. As is often the case, aligned DNA sequences by themselves were unable to fully resolve these relationships. Four nested mitochondrial gene rearrangements--including one of the few reported movements of an arthropod protein-coding gene--are congruent with the DNA phylogeny and help to resolve a crucial node. A phylogenetic analysis of DNA sequences, and gene rearrangements, supported five independent origins of the crab-like form, and suggests that the evolution of the crab-like form may be irreversible. This result supports the utility of mitochondrial gene rearrangements in phylogenetic reconstruction. PMID:11886621

  2. Parallel fabrication of macroporous scaffolds.

    PubMed

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques. © 2018 Wiley Periodicals, Inc.

  3. Formation of organic layer on femtosecond laser-induced periodic surface structures

    NASA Astrophysics Data System (ADS)

    Yasumaru, Naoki; Sentoku, Eisuke; Kiuchi, Junsuke

    2017-05-01

    Two types of laser-induced periodic surface structures (LIPSS) formed on titanium by femtosecond (fs) laser pulses (λ = 800 nm, τ = 180 fs, ν = 1 kHz) in air were investigated experimentally. At a laser fluence F above the ablation threshold, LIPSS with a minimum mean spacing of D < λ⁄2 were observed perpendicular to the laser polarization direction. In contrast, for F slightly below the ablation threshold, ultrafine LIPSS with a minimum value of D < λ/10 were formed parallel to the polarization direction. The surface roughness of the parallel-oriented LIPSS was almost the same as that of the non-irradiated surface, unlike the high roughness of the perpendicular-oriented LIPSS. In addition, although the surface state of the parallel-oriented LIPSS was the same as that of the non-irradiated surface, the perpendicular-oriented LIPSS were covered with an organic thin film similar to a cellulose derivative that cannot be easily formed by conventional chemical synthesis. The results of these surface analyses indicate that these two types of LIPSS are formed through different mechanisms. This fs-laser processing technique may become a new technology for the artificial synthesis of cellulose derivatives.

  4. Space Station Freedom power supply commonality via modular design

    NASA Technical Reports Server (NTRS)

    Krauthamer, S.; Gangal, M. D.; Das, R.

    1990-01-01

    At mature operations, Space Station Freedom will need more than 2000 power supplies to feed housekeeping and user loads. Advanced technology power supplies from 20 to 250 W have been hybridized for terrestrial, aerospace, and industry applications in compact, efficient, reliable, lightweight packages compatible with electromagnetic interference requirements. The use of these hybridized packages as modules, either singly or in parallel, to satisfy the wide range of user power supply needs for all elements of the station is proposed. Proposed characteristics for the power supplies include common mechanical packaging, digital control, self-protection, high efficiency at full and partial loads, synchronization capability to reduce electromagnetic interference, redundancy, and soft-start capability. The inherent reliability is improved compared with conventional discrete component power supplies because the hybrid circuits use high-reliability components such as ceramic capacitors. Reliability is further improved over conventional supplies because the hybrid packages, which may be treated as a single part, reduce the parts count in the power supply.

  5. Discharge reliability in ablative pulsed plasma thrusters

    NASA Astrophysics Data System (ADS)

    Wu, Zhiwen; Sun, Guorui; Yuan, Shiyue; Huang, Tiankun; Liu, Xiangyang; Xie, Kan; Wang, Ningfei

    2017-08-01

    Discharge reliability is typically neglected in low-ignition-cycle ablative pulsed plasma thrusters (APPTs). In this study, the discharge reliability of an APPT is assessed analytically and experimentally. The goals of this study are to better understand the ignition characteristics and to assess the accuracy of the analytical method. For each of six sets of operating conditions, 500 tests of a parallel-plate APPT with a coaxial semiconductor spark plug are conducted. The discharge voltage and current are measured with a high-voltage probe and a Rogowski coil, respectively, to determine whether the discharge is successful. Generally, the discharge success rate increases as the discharge voltage increases, and it decreases as the electrode gap and the number of ignitions increase. The theoretical analysis and the experimental results are reasonably consistent. This approach provides a reference for designing APPTs and improving their stability.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Godwin, Aaron

    The scope will be limited to analyzing the effect of the EFC within the system and how one improperly installed coupling affects the rest of the HPFL system. The discussion will include normal operations, impaired flow, and service interruptions. Normal operations are defined as two-way flow to buildings. Impaired operations are defined as a building that only has one-way flow being provided to the building. Service interruptions occur when a building does not have water available to it. The project will look at the following aspects of the reliability of the HPFL system: mean time to failure (MTTF) of EFCs, mean time between failures (MTBF), series system models, and parallel system models. These calculations will then be used to discuss the reliability of the system when one of the couplings fails and to compare the reliability of two-way feeds versus one-way feeds.
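    The series and parallel system models mentioned can be sketched with the standard reliability formulas; the component MTTF, mission time, and constant-failure-rate (exponential) assumption below are illustrative, not values from the project.

```python
import math

def series_reliability(rels):
    """A series system works only if every component works:
    R_sys = product of component reliabilities."""
    out = 1.0
    for r in rels:
        out *= r
    return out

def parallel_reliability(rels):
    """A parallel (redundant) system fails only if every component fails:
    R_sys = 1 - product of component unreliabilities."""
    fail = 1.0
    for r in rels:
        fail *= (1.0 - r)
    return 1.0 - fail

def exp_reliability(t, mttf):
    """Component reliability at time t under a constant failure rate:
    R(t) = exp(-t / MTTF)."""
    return math.exp(-t / mttf)

# illustrative: three couplings with an MTTF of 10 years each, 1-year mission
r = exp_reliability(1.0, 10.0)
r_series = series_reliability([r] * 3)      # couplings in line (one-way feed)
r_parallel = parallel_reliability([r] * 3)  # redundant feeds (two-way)
```

    The comparison of two-way versus one-way feeds reduces to exactly this contrast: redundant paths raise system reliability above any single component, while a chain of couplings lowers it below.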

  7. Development of Creative Behavior Observation Form: A Study on Validity and Reliability

    ERIC Educational Resources Information Center

    Dere, Zeynep; Ömeroglu, Esra

    2018-01-01

    In this study, the Creative Behavior Observation Form was developed to assess children's creativity. For the study group used in developing the reliability and validity of the Creative Behavior Observation Form, a total of 257 children aged 5-6 were sampled with the stratified sampling method. Content Validity Index (CVI) and…

  8. Classical test theory and Rasch analysis validation of the Upper Limb Functional Index in subjects with upper limb musculoskeletal disorders.

    PubMed

    Bravini, Elisabetta; Franchignoni, Franco; Giordano, Andrea; Sartorio, Francesco; Ferriero, Giorgio; Vercelli, Stefano; Foti, Calogero

    2015-01-01

    To perform a comprehensive analysis of the psychometric properties and dimensionality of the Upper Limb Functional Index (ULFI) using both classical test theory and Rasch analysis (RA). Prospective, single-group observational design. Freestanding rehabilitation center. Convenience sample of Italian-speaking subjects with upper limb musculoskeletal disorders (N=174). Not applicable. The Italian version of the ULFI. Data were analyzed using parallel analysis, exploratory factor analysis, and RA for evaluating dimensionality, functioning of rating scale categories, item fit, hierarchy of item difficulties, and reliability indices. Parallel analysis revealed 2 factors explaining 32.5% and 10.7% of the response variance. RA confirmed the failure of the unidimensionality assumption, and 6 items out of the 25 misfitted the Rasch model. When the analysis was rerun excluding the misfitting items, the scale showed acceptable fit values, loading meaningfully to a single factor. Item separation reliability and person separation reliability were .98 and .89, respectively. Cronbach alpha was .92. RA revealed weakness of the scale concerning dimensionality and internal construct validity. However, a set of 19 ULFI items defined through the statistical process demonstrated a unidimensional structure, good psychometric properties, and clinical meaningfulness. These findings represent a useful starting point for further analyses of the tool (based on modern psychometric approaches and confirmatory factor analysis) in larger samples, including different patient populations and nationalities. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
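
    As a companion to the internal-consistency figure reported above (Cronbach alpha = .92), alpha can be computed directly from item-level scores. A minimal sketch with toy data, not the ULFI dataset:

    ```python
    import statistics

    def cronbach_alpha(items):
        """Cronbach's alpha. `items` is a list of per-item score lists,
        each the same length (one entry per examinee)."""
        k = len(items)
        item_vars = sum(statistics.pvariance(it) for it in items)
        totals = [sum(scores) for scores in zip(*items)]  # per-person total score
        return (k / (k - 1)) * (1 - item_vars / statistics.pvariance(totals))

    # Toy scores for 3 items answered by 4 examinees (hypothetical data)
    items = [[1, 2, 3, 4], [2, 3, 4, 5], [1, 3, 3, 5]]
    print(round(cronbach_alpha(items), 3))  # → 0.981
    ```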

  9. Silicon-fiber blanket solar-cell array concept

    NASA Technical Reports Server (NTRS)

    Eliason, J. T.

    1973-01-01

    Proposed economical manufacture of solar-cell arrays involves parallel, planar weaving of filaments made of doped silicon fibers with diffused radial junction. Each filament is a solar cell connected either in series or parallel with others to form a blanket of deposited grids or attached electrode wire mesh screens.

  10. Parallel processing via a dual olfactory pathway in the honeybee.

    PubMed

    Brill, Martin F; Rosenbaum, Tobias; Reus, Isabelle; Kleineidam, Christoph J; Nawrot, Martin P; Rössler, Wolfgang

    2013-02-06

    In their natural environment, animals face complex and highly dynamic olfactory input. Thus vertebrates as well as invertebrates require fast and reliable processing of olfactory information. Parallel processing has been shown to improve processing speed and power in other sensory systems and is characterized by extraction of different stimulus parameters along parallel sensory information streams. Honeybees possess an elaborate olfactory system with unique neuronal architecture: a dual olfactory pathway comprising a medial projection-neuron (PN) antennal lobe (AL) protocerebral output tract (m-APT) and a lateral PN AL output tract (l-APT) connecting the olfactory lobes with higher-order brain centers. We asked whether this neuronal architecture serves parallel processing and employed a novel technique for simultaneous multiunit recordings from both tracts. The results revealed response profiles from a high number of PNs of both tracts to floral, pheromonal, and biologically relevant odor mixtures tested over multiple trials. PNs from both tracts responded to all tested odors, but with different characteristics indicating parallel processing of similar odors. Both PN tracts were activated by widely overlapping response profiles, which is a requirement for parallel processing. The l-APT PNs had broad response profiles suggesting generalized coding properties, whereas the responses of m-APT PNs were comparatively weaker and less frequent, indicating higher odor specificity. Comparison of response latencies within and across tracts revealed odor-dependent latencies. We suggest that parallel processing via the honeybee dual olfactory pathway provides enhanced odor processing capabilities serving sophisticated odor perception and olfactory demands associated with a complex olfactory world of this social insect.

  11. Deep-sea tsunami deposits in the Miocene Nishizaki Formation of Boso Peninsula, Central Japan

    NASA Astrophysics Data System (ADS)

    Lee, I. T.; Ogawa, Y.

    2003-12-01

    Many sets of deep-sea deposits considered to have been formed by the return flow of tsunami were found in the middle Miocene Nishizaki Formation of Boso Peninsula, Central Japan, which is located near a convergent plate boundary at present as well as in the past and has been frequently attacked by tsunami. The characteristics of the tsunami deposits in the Nishizaki Formation are as follows. Each set consists of 10-20 beds with parallel laminations formed under the upper flow regime, composed of alternating white and black pumiceous beds. The white beds comprise coarse sands and pebbles with thicknesses of 5-10 cm. In contrast, the black beds are made of silts with thicknesses of less than 1 cm. Among the 10-20 beds, the grain size is generally coarsest in the middle part of the set. The uppermost bed of each set shows cross-lamination formed under the lower flow regime, gradually changing into a finer graded bed on top. Sometimes the lower part of the parallel-laminated bed is associated with an underlying debrite or turbidite bed. Each set of these parallel-laminated beds is lenticular in shape, thinning to the east, consistent with the generally eastward paleocurrent indicated by the cross-lamination at the top. Such sedimentary characteristics differ from any event deposits reported in the deep sea but are similar to the deep-sea K/T boundary deposits of the Caribbean region. Statistically, the tsunami waves occur 12-13 times in total, of which the 5th-6th waves are known to be the strongest. The interval between successive return flows is known to be 30-40 minutes, long enough for the finer clastics to settle at the top of each bed. The parallel-laminated parts commonly show dish structures and never contain trace fossils, indicating rather rapid deposition throughout the set. Consequently, the sedimentary characteristics of the parallel-laminated beds of the Nishizaki Formation are attributed to the return flow of tsunami to the deep sea. We conclude that such deep-sea parallel-laminated deposits of pumiceous clastics occur just after a large earthquake, which forms the debrite or turbidite at the lowermost part.

  12. Genetic Parallel Programming: design and implementation.

    PubMed

    Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong

    2006-01-01

    This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializes it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.

  13. Growth of GaN- and ZnO-Based Nanorod Compound Structures

    DTIC Science & Technology

    2013-08-16

    …parallel with or forming a 60° tilted angle with respect to the two parallel lateral sides of individual NRs. In the edge-to-edge pattern, the shortest… kV and a probe-forming lens of Cs = 1.2 mm. 3. SEM and TEM Observations: Figures 2(a)-2(f) show the plan-view SEM images of samples I-VI… angle annular dark field (HAADF) image in TEM observation of an InGaN/GaN QW NR of sample I. In this image, the three almost vertical bright lines…

  14. Kinematics and dynamics of robotic systems with multiple closed loops

    NASA Astrophysics Data System (ADS)

    Zhang, Chang-De

    The kinematics and dynamics of robotic systems with multiple closed loops, such as Stewart platforms, walking machines, and hybrid manipulators, are studied. In the study of kinematics, the focus is on closed-form solutions of the forward position analysis of different parallel systems. A closed-form solution means that the solution is expressed as a polynomial in one variable. If the order of the polynomial is less than or equal to four, the solution has an analytical closed form. First, the conditions for obtaining analytical closed-form solutions are studied. For a Stewart platform, the condition is found to be that one rotational degree of freedom of the output link is decoupled from the other five. Based on this condition, a class of Stewart platforms with analytical closed-form solutions is formulated. Conditions for analytical closed-form solutions of other parallel systems are also studied. Closed-form solutions of forward kinematics for walking machines and multi-fingered grippers are then studied. For a parallel system with three three-degree-of-freedom subchains, there are 84 possible ways to select six independent joints among the nine joints. These 84 ways can be classified into three categories: Category 3:3:0, Category 3:2:1, and Category 2:2:2. It is shown that the first category has no solutions; the solutions of the second category have analytical closed form; and the solutions of the last category are higher-order polynomials. The study is then extended to a nearly general Stewart platform. The solution is a 20th-order polynomial, and the Stewart platform has a maximum of 40 possible configurations. Also, the study is extended to a new class of hybrid manipulators which consists of two serially connected parallel mechanisms. In the study of dynamics, a computationally efficient method for inverse dynamics of manipulators based on the virtual work principle is developed. 
Although this method is comparable with the recursive Newton-Euler method for serial manipulators, its advantage is more noteworthy when applied to parallel systems. An approach of inverse dynamics of a walking machine is also developed, which includes inverse dynamic modeling, foot force distribution, and joint force/torque allocation.
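
    The closed-form solutions discussed in this abstract reduce forward kinematics to a univariate polynomial whose real roots enumerate candidate configurations. A minimal numerical sketch, using a hypothetical quartic in place of the 20th-order Stewart-platform polynomial:

    ```python
    import numpy as np

    # Hypothetical quartic standing in for the 20th-order forward-kinematics
    # polynomial; each real root plays the role of a candidate configuration.
    coeffs = [1, 0, -5, 0, 4]  # x^4 - 5x^2 + 4 = (x^2 - 1)(x^2 - 4)
    roots = np.roots(coeffs)
    real_roots = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
    print([round(r, 6) for r in real_roots])  # → [-2.0, -1.0, 1.0, 2.0]
    ```

    For polynomials of order five and above no analytical closed form exists in general, which is why the abstract distinguishes the quartic-or-lower cases from the 20th-order general case that must be solved numerically.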

  15. A Bayesian approach to reliability and confidence

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1989-01-01

    The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
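
    For binomial test data, the uniform-prior Bayesian estimate described above has a simple closed form: a uniform Beta(1, 1) prior updated by s successes in n trials yields a Beta(1 + s, 1 + n - s) posterior for the reliability. A minimal sketch with hypothetical counts (not data from the JSC study):

    ```python
    import math

    def posterior_mean(successes, trials):
        """Posterior mean reliability under a uniform Beta(1, 1) prior."""
        return (successes + 1) / (trials + 2)

    def prob_above(r0, successes, trials, n=20000):
        """P(reliability > r0 | data) under the Beta(1+s, 1+f) posterior,
        by midpoint integration of the posterior density."""
        a, b = successes + 1, trials - successes + 1
        log_c = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
        h = (1.0 - r0) / n
        total = 0.0
        for i in range(n):
            x = r0 + (i + 0.5) * h
            total += math.exp(log_c + (a - 1) * math.log(x)
                              + (b - 1) * math.log(1.0 - x)) * h
        return total

    # Hypothetical component test data: 48 successes in 50 trials
    print(round(posterior_mean(48, 50), 4))   # → 0.9423
    print(round(prob_above(0.85, 48, 50), 3))  # probability statement about reliability
    ```

    This is the kind of probability statement the abstract refers to; with a uniform prior it is the worst case in the sense described, and incorporating expert opinion amounts to replacing Beta(1, 1) with a more informative prior.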

  16. The Importance of Considering Differences in Study Design in Network Meta-analysis: An Application Using Anti-Tumor Necrosis Factor Drugs for Ulcerative Colitis.

    PubMed

    Cameron, Chris; Ewara, Emmanuel; Wilson, Florence R; Varu, Abhishek; Dyrda, Peter; Hutton, Brian; Ingham, Michael

    2017-11-01

    Adaptive trial designs present a methodological challenge when performing network meta-analysis (NMA), as data from such adaptive trial designs differ from conventional parallel design randomized controlled trials (RCTs). We aim to illustrate the importance of considering study design when conducting an NMA. Three NMAs comparing anti-tumor necrosis factor drugs for ulcerative colitis were compared and the analyses replicated using Bayesian NMA. The NMA comprised 3 RCTs comparing 4 treatments (adalimumab 40 mg, golimumab 50 mg, golimumab 100 mg, infliximab 5 mg/kg) and placebo. We investigated the impact of incorporating differences in the study design among the 3 RCTs and presented 3 alternative methods on how to convert outcome data derived from one form of adaptive design to more conventional parallel RCTs. Combining RCT results without considering variations in study design resulted in effect estimates that were biased against golimumab. In contrast, using the 3 alternative methods to convert outcome data from one form of adaptive design to a format more consistent with conventional parallel RCTs facilitated more transparent consideration of differences in study design. This approach is more likely to yield appropriate estimates of comparative efficacy when conducting an NMA, which includes treatments that use an alternative study design. RCTs based on adaptive study designs should not be combined with traditional parallel RCT designs in NMA. We have presented potential approaches to convert data from one form of adaptive design to more conventional parallel RCTs to facilitate transparent and less-biased comparisons.

  17. Psychometric properties of the Swedish PedsQL, Pediatric Quality of Life Inventory 4.0 generic core scales.

    PubMed

    Petersen, Solveig; Hägglöf, Bruno; Stenlund, Hans; Bergström, Erik

    2009-09-01

    To study the psychometric performance of the Swedish version of the Pediatric Quality of Life Inventory (PedsQL) 4.0 generic core scales in a general child population in Sweden. PedsQL forms were distributed to 2403 schoolchildren and 888 parents in two different school settings. Reliability and validity were studied for self-reports and proxy reports, full forms and short forms. Confirmatory factor analysis tested the factor structure, and multigroup confirmatory factor analysis tested measurement invariance between boys and girls. Test-retest reliability was demonstrated for all scales, and internal consistency reliability was shown with alpha values exceeding 0.70 for all scales but one (self-report short form: social functioning). Child-parent agreement was low to moderate. The four-factor structure of the PedsQL and factorial invariance across sex subgroups were confirmed for the self-report forms and for the proxy short form, while model fit indices suggested improvement of several proxy full-form scales. The Swedish PedsQL 4.0 generic core scales are a reliable and valid tool for health-related quality of life (HRQoL) assessment in Swedish child populations. The proxy full form, however, should be used with caution. The study also supports continued use of the PedsQL as a four-factor model, capable of revealing meaningful HRQoL differences between boys and girls.

  18. Centrifugal multiplexing fixed-volume dispenser on a plastic lab-on-a-disk for parallel biochemical single-end-point assays

    PubMed Central

    La, Moonwoo; Park, Sang Min; Kim, Dong Sung

    2015-01-01

    In this study, a multiple sample dispenser for precisely metered fixed volumes was successfully designed, fabricated, and fully characterized on a plastic centrifugal lab-on-a-disk (LOD) for parallel biochemical single-end-point assays. The dispenser, namely a centrifugal multiplexing fixed-volume dispenser (C-MUFID), was designed with microfluidic structures based on theoretical modeling of centrifugal circumferential filling flow. The designed LODs were fabricated from a polystyrene substrate through micromachining and thermally bonded with a flat substrate. Furthermore, six parallel metering and dispensing assays were conducted at the same fixed volume (1.27 μl) with a relative variation of ±0.02 μl. Moreover, the samples were metered and dispensed at different sub-volumes. To visualize the metering and dispensing performance, the C-MUFID was integrated with a serpentine micromixer during parallel centrifugal mixing tests. Parallel biochemical single-end-point assays were successfully conducted on the developed LOD using a standard serum with albumin, glucose, and total protein reagents. The developed LOD could be widely applied to various biochemical single-end-point assays that require different volume ratios of sample and reagent by controlling the design of the C-MUFID. The proposed LOD is feasible for point-of-care diagnostics because of its mass-producible structure, reliable metering/dispensing performance, and parallel biochemical single-end-point assays, which can identify numerous biochemical analytes. PMID:25610516

  19. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.
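
    The if-then rules presented to ASOCS as Boolean conjunctions can be sketched in software, with each rule evaluated independently in the spirit of the system's combinational, parallel operation. The rule names and inputs below are hypothetical illustrations, not examples from the paper:

    ```python
    # Each rule is an if-then Boolean conjunction: it fires only when every
    # listed (variable, required_value) literal holds in the inputs.
    rules = {
        "alarm": [("temp_high", True), ("override", False)],
        "vent":  [("temp_high", True)],
    }

    def evaluate(rules, inputs):
        """Evaluate every rule against the same inputs; in ASOCS this step
        is performed by parallel combinational hardware, not a loop."""
        return {name: all(inputs.get(var) == want for var, want in lits)
                for name, lits in rules.items()}

    print(evaluate(rules, {"temp_high": True, "override": False}))
    # → {'alarm': True, 'vent': True}
    ```

    Adaptation, the separate operational phase mentioned above, would correspond to modifying the `rules` table rather than the evaluation step.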

  20. Parallel Computing: Some Activities in High Energy Physics

    NASA Astrophysics Data System (ADS)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  1. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. 
We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; scalable numerical algorithms for reliability, verifications and testability. There appears no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  2. Comparison of the reliability of parental reporting and the direct test of the Thai Speech and Language Test.

    PubMed

    Prathanee, Benjamas; Angsupakorn, Nipa; Pumnum, Tawitree; Seepuaham, Cholada; Jaiyong, Pechcharat

    2012-11-01

    To determine the reliability of the parental or caregiver's report and the direct test of the Thai Speech and Language Test for Children Aged 0-4 Years Old. Five investigators assessed speech and language abilities from video in both contexts: the parental or caregivers' report form and the test form of the Thai Speech and Language Test for Children Aged 0-4 Years Old. Twenty-five normal children and 30 children with delayed development or at risk for delayed speech and language skills were assessed at age intervals of 3, 6, 9, 12, 15, 18, 24, 30, 36 and 48 months. Reliability between parental or caregivers' testing and reporting was at a moderate level (0.41-0.60). Inter-rater reliability among investigators was excellent (0.86-1.00). The parental or caregivers' report form of the Thai Speech and Language Test for Children Aged 0-4 Years Old was thus a moderately reliable indicator, while trained professionals could use both forms of this test as reliable tools at an excellent level.
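
    The 0.41-0.60 "moderate" and 0.86-1.00 "excellent" bands quoted above match the conventional interpretation scale for chance-corrected agreement statistics. Assuming Cohen's kappa as the statistic (the abstract does not name it), a minimal sketch with hypothetical ratings:

    ```python
    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters labeling the same items."""
        n = len(rater_a)
        labels = set(rater_a) | set(rater_b)
        p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
        p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                    for c in labels)                               # chance agreement
        return (p_obs - p_exp) / (1 - p_exp)

    # Hypothetical pass/fail judgments on six milestone items
    a = ["pass", "pass", "fail", "pass", "fail", "fail"]
    b = ["pass", "fail", "fail", "pass", "fail", "pass"]
    print(round(cohens_kappa(a, b), 3))  # → 0.333
    ```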

  3. Nonequilibrium thermodynamics and the transport phenomena in magnetically confined plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balescu, R.

    1987-09-01

    The neoclassical theory of transport in magnetically confined plasmas is reviewed. The emphasis is laid on a set of relationships existing among the banana transport coefficients. The surface-averaged entropy production in such plasmas is evaluated. It is shown that neoclassical effects emerge from the entropy production due to parallel transport processes. The Pfirsch-Schlueter effect can be clearly interpreted as due to spatial fluctuations of parallel fluxes on a magnetic surface: the corresponding entropy production is the measure of these fluctuations. The banana fluxes can be formulated in a quasithermodynamic form in which the average entropy production is a bilinear form in the parallel fluxes and the conjugate generalized stresses. A formulation as a quadratic form in the thermodynamic forces is also possible, but leads to anomalies, which are discussed in some detail.

  4. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high-aspect-ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1,000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  5. Validity and Reliability of the Turkish Version for DSM-5 Level 2 Anger Scale (Child Form for Children Aged 11-17 Years and Parent Form for Children Aged 6-17 Years).

    PubMed

    Yalin Sapmaz, Şermin; Özek Erkuran, Handan; Yalin, Nefize; Önen, Özlem; Öztekin, Siğnem; Kavurma, Canem; Köroğlu, Ertuğrul; Aydemir, Ömer

    2017-12-01

    This study aimed to assess the validity and reliability of the Turkish version of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) Level 2 Anger Scale. The scale was prepared by translation and back-translation of the DSM-5 Level 2 Anger Scale. Study groups consisted of a clinical sample of cases diagnosed with depressive disorder and treated in a child and adolescent psychiatry unit and a community sample. The study was continued with 218 children and 160 parents. In the assessment process, child and parent forms of the DSM-5 Level 2 Anger Scale, the Children's Depression Inventory, and the Strengths and Difficulties Questionnaire-Parent Form were used. In the reliability analyses, the Cronbach alpha internal consistency coefficients were found to be very high for both child and parent forms. Item-total score correlation coefficients were high and very high for the child and parent forms, respectively, and statistically significant. As for construct validity, one factor was maintained for each form and was found to be consistent with the original form of the scale. As for concurrent validity, the child form of the scale showed significant correlation with the Children's Depression Inventory, while the parent form showed significant correlation with the Strengths and Difficulties Questionnaire-Parent Form. It was found that the Turkish version of the DSM-5 Level 2 Anger Scale can be utilized as a valid and reliable tool both in clinical practice and for research purposes.

  6. Nanomechanical DNA origami pH sensors.

    PubMed

    Kuzuya, Akinori; Watanabe, Ryosuke; Yamanaka, Yusei; Tamaki, Takuya; Kaino, Masafumi; Ohya, Yuichi

    2014-10-16

    Single-molecule pH sensors have been developed by utilizing molecular imaging of the pH-responsive shape transition of nanomechanical DNA origami devices with atomic force microscopy (AFM). Short DNA fragments that can form i-motifs were introduced into nanomechanical DNA origami devices with a pliers-like shape (DNA Origami Pliers), which consist of two levers, each 170 nm long and 20 nm wide, connected at a Holliday-junction fulcrum. DNA Origami Pliers can be observed in three distinct forms, cross, antiparallel, and parallel, and the cross form is the dominant species when no additional interaction is introduced to DNA Origami Pliers. Introduction of nine pairs of a 12-mer sequence (5'-AACCCCAACCCC-3'), which dimerizes into i-motif quadruplexes upon protonation of cytosine, drives the transition of DNA Origami Pliers from the open cross form into the closed parallel form under acidic conditions. Such pH-dependent transitions were clearly imaged on mica at molecular resolution by AFM, showing the potential application of the system to single-molecule pH sensors.

  7. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 5. Technical Report #1204

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the fifth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
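
    The split-half reliability analyses named in this and the following easyCBM reports conventionally pair an odd-even item split with the Spearman-Brown correction, since the raw half-test correlation understates full-test reliability. A minimal sketch with toy item scores, not easyCBM data:

    ```python
    import math
    import statistics

    def pearson(x, y):
        """Pearson correlation between two equal-length score lists."""
        mx, my = statistics.fmean(x), statistics.fmean(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
        return num / den

    def split_half_reliability(scores):
        """scores: one list of item scores per examinee; odd-even split."""
        odd = [sum(p[0::2]) for p in scores]
        even = [sum(p[1::2]) for p in scores]
        r = pearson(odd, even)
        return 2 * r / (1 + r)  # Spearman-Brown step-up to full test length

    # Hypothetical 0/1 item scores for four examinees on a four-item test
    scores = [[1, 1, 0, 1], [0, 1, 0, 0], [1, 1, 1, 1], [0, 0, 0, 1]]
    print(round(split_half_reliability(scores), 2))  # → 0.95
    ```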

  8. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 2. Technical Report #1201

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei; Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the second-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  9. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 4. Technical Report #1203

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the fourth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  10. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 6. Technical Report #1205

    ERIC Educational Resources Information Center

    Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the sixth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  11. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 7. Technical Report #1206

    ERIC Educational Resources Information Center

    Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the seventh-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  12. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 3. Technical Report #1202

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei; Irvin, P. Shawn; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the third-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
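The split-half and alternate-form reliability analyses named throughout these easyCBM reports follow standard classical test theory arithmetic. As a minimal hedged illustration (made-up data and function names, not easyCBM's procedure), a split-half computation with the Spearman-Brown step-up looks like:

```python
# Split-half reliability sketch: correlate odd-item and even-item half-test
# scores across examinees, then step up to full-test length.
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(item_scores):
    """item_scores: one row of 0/1 item scores per examinee."""
    odd = [sum(row[0::2]) for row in item_scores]   # odd-numbered items
    even = [sum(row[1::2]) for row in item_scores]  # even-numbered items
    r_halves = pearson(odd, even)
    # Spearman-Brown: estimated reliability of the full-length test
    return 2 * r_halves / (1 + r_halves)
```

Alternate-form reliability replaces the two half-tests with total scores from two parallel forms, correlated directly without the step-up.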

  13. Development of a bench-top device for parallel climate-controlled recordings of neuronal cultures activity with microelectrode arrays.

    PubMed

    Regalia, Giulia; Biffi, Emilia; Achilli, Silvia; Ferrigno, Giancarlo; Menegon, Andrea; Pedrocchi, Alessandra

    2016-02-01

Two binding requirements for in vitro studies of long-term neuronal network dynamics are (i) finely controlled environmental conditions to keep neuronal cultures viable and provide reliable data for more than a few hours and (ii) parallel operation on multiple neuronal cultures to shorten experimental time scales and enhance data reproducibility. In order to fulfill these needs with a Microelectrode Array (MEA)-based system, we designed a stand-alone device that permits uninterrupted monitoring of neuronal culture activity over long periods, overcoming drawbacks of existing MEA platforms. We integrated in a single device: (i) a closed chamber housing four MEAs equipped with access for chemical manipulations, (ii) environmental control systems and embedded sensors to reproduce and remotely monitor the standard in vitro culture environment on the lab bench (i.e., in terms of temperature, air CO2, and relative humidity), and (iii) a modular MEA interface analog front-end for reliable and parallel recordings. The system has been proven to keep environmental conditions stable, physiological, and homogeneous across different cultures. Prolonged recordings (up to 10 days) of spontaneous and pharmacologically stimulated neuronal culture activity have shown no signs of rundown thanks to the environmental stability and have not required withdrawing the cells from the chamber for culture medium manipulations. This system represents an effective MEA-based solution to elucidate neuronal network phenomena with slow dynamics, such as long-term plasticity, effects of chronic pharmacological stimulations, or late-onset pathological mechanisms. © 2015 Wiley Periodicals, Inc.

  14. Hierarchical parallel computer architecture defined by computational multidisciplinary mechanics

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug; Johnson, Keith

    1989-01-01

    The goal is to develop an architecture for parallel processors enabling optimal handling of multi-disciplinary computation of fluid-solid simulations employing finite element and difference schemes. The goals, philosophical and modeling directions, static and dynamic poly trees, example problems, interpolative reduction, and the impact on solvers are shown in viewgraph form.

  15. Scalable Parallel Algorithms for Multidimensional Digital Signal Processing

    DTIC Science & Technology

    1991-12-31

    Proceedings, San Diego, CA, August 1989, pp. 132-146. [13] A. L. Gorin, L. Auslander, and A. Silberger. Balanced computation of 2D transforms on a tree… Speech, Signal Processing, ASSP-34, Oct. 1986, pp. 1301-1309. [24] A. Norton and A. Silberger. Parallelization and performance analysis of the Cooley-Tukey…

  16. Fast parallel molecular algorithms for DNA-based computation: solving the elliptic curve discrete logarithm problem over GF(2^n).

    PubMed

    Li, Kenli; Zou, Shuting; Xv, Jin

    2008-01-01

    Elliptic curve cryptographic algorithms convert input data to unrecognizable encryption and the unrecognizable data back again into its original decrypted form. The security of this form of encryption hinges on the enormous difficulty that is required to solve the elliptic curve discrete logarithm problem (ECDLP), especially over GF(2^n), n ∈ Z+. This paper describes an effective method to find solutions to the ECDLP by means of a molecular computer. We propose that this research accomplishment would represent a breakthrough for applied biological computation and this paper demonstrates that in principle this is possible. Three DNA-based algorithms: a parallel adder, a parallel multiplier, and a parallel inverse over GF(2^n) are described. The biological operation time of all of these algorithms is polynomial with respect to n. Considering this analysis, cryptography using a public key might be less secure. In this respect, a principal contribution of this paper is to provide enhanced evidence of the potential of molecular computing to tackle such ambitious computations.

  17. Fast Parallel Molecular Algorithms for DNA-Based Computation: Solving the Elliptic Curve Discrete Logarithm Problem over GF(2^n)

    PubMed Central

    Li, Kenli; Zou, Shuting; Xv, Jin

    2008-01-01

    Elliptic curve cryptographic algorithms convert input data to unrecognizable encryption and the unrecognizable data back again into its original decrypted form. The security of this form of encryption hinges on the enormous difficulty that is required to solve the elliptic curve discrete logarithm problem (ECDLP), especially over GF(2^n), n ∈ Z+. This paper describes an effective method to find solutions to the ECDLP by means of a molecular computer. We propose that this research accomplishment would represent a breakthrough for applied biological computation and this paper demonstrates that in principle this is possible. Three DNA-based algorithms: a parallel adder, a parallel multiplier, and a parallel inverse over GF(2^n) are described. The biological operation time of all of these algorithms is polynomial with respect to n. Considering this analysis, cryptography using a public key might be less secure. In this respect, a principal contribution of this paper is to provide enhanced evidence of the potential of molecular computing to tackle such ambitious computations. PMID:18431451
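As background to the field arithmetic these DNA algorithms emulate, here is a hedged conventional-software sketch (not the molecular encoding from the paper) of GF(2^n) operations: addition is carry-free bitwise XOR, which is what makes a fully parallel adder possible, and multiplication is polynomial multiplication reduced by an irreducible polynomial. The GF(2^4) modulus x^4 + x + 1 is a textbook example choice, not taken from the paper.

```python
# Illustrative GF(2^n) arithmetic; elements are bit patterns of polynomial
# coefficients. Assumes the standard GF(2^4) modulus x^4 + x + 1 (0b10011).

def gf_add(a: int, b: int) -> int:
    return a ^ b  # carry-free: each bit position is independent

def gf_mul(a: int, b: int, mod: int = 0b10011, n: int = 4) -> int:
    """Shift-and-XOR multiplication with reduction by the modulus."""
    result = 0
    while b:
        if b & 1:
            result ^= a       # add the current multiple of a
        b >>= 1
        a <<= 1               # multiply a by x
        if a & (1 << n):      # degree reached n: reduce modulo the polynomial
            a ^= mod
    return result
```

Because addition carries no ripple, every bit (or DNA strand, in the paper's setting) can be processed simultaneously, which is the parallelism the abstract refers to.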

  18. Solid oxide fuel cell having compound cross flow gas patterns

    DOEpatents

    Fraioli, A.V.

    1983-10-12

    A core construction for a fuel cell is disclosed having both parallel and cross flow passageways for the fuel and the oxidant gases. Each core passageway is defined by electrolyte and interconnect walls. Each electrolyte wall consists of cathode and anode materials sandwiching an electrolyte material. Each interconnect wall is formed as a sheet of inert support material having therein spaced small plugs of interconnect material, where cathode and anode materials are formed as layers on opposite sides of each sheet and are electrically connected together by the interconnect material plugs. Each interconnect wall in a wavy shape is connected along spaced generally parallel line-like contact areas between corresponding spaced pairs of generally parallel electrolyte walls, operable to define one tier of generally parallel flow passageways for the fuel and oxidant gases. Alternate tiers are arranged to have the passageways disposed normal to one another. Solid mechanical connection of the interconnect walls of adjacent tiers to the opposite sides of the common electrolyte wall therebetween is only at spaced point-like contact areas where the previously mentioned line-like contact areas cross one another.

  19. Solid oxide fuel cell having compound cross flow gas patterns

    DOEpatents

    Fraioli, Anthony V.

    1985-01-01

    A core construction for a fuel cell is disclosed having both parallel and cross flow passageways for the fuel and the oxidant gases. Each core passageway is defined by electrolyte and interconnect walls. Each electrolyte wall consists of cathode and anode materials sandwiching an electrolyte material. Each interconnect wall is formed as a sheet of inert support material having therein spaced small plugs of interconnect material, where cathode and anode materials are formed as layers on opposite sides of each sheet and are electrically connected together by the interconnect material plugs. Each interconnect wall in a wavy shape is connected along spaced generally parallel line-like contact areas between corresponding spaced pairs of generally parallel electrolyte walls, operable to define one tier of generally parallel flow passageways for the fuel and oxidant gases. Alternate tiers are arranged to have the passageways disposed normal to one another. Solid mechanical connection of the interconnect walls of adjacent tiers to the opposite sides of the common electrolyte wall therebetween is only at spaced point-like contact areas where the previously mentioned line-like contact areas cross one another.

  20. Haptic adaptation to slant: No transfer between exploration modes

    PubMed Central

    van Dam, Loes C. J.; Plaisier, Myrthe A.; Glowania, Catharina; Ernst, Marc O.

    2016-01-01

    Human touch is an inherently active sense: to estimate an object’s shape humans often move their hand across its surface. This way the object is sampled both in a serial (sampling different parts of the object across time) and parallel fashion (sampling using different parts of the hand simultaneously). Both the serial (moving a single finger) and parallel (static contact with the entire hand) exploration modes provide reliable and similar global shape information, suggesting the possibility that this information is shared early in the sensory cortex. In contrast, we here show the opposite. Using an adaptation-and-transfer paradigm, a change in haptic perception was induced by slant-adaptation using either the serial or parallel exploration mode. A unified shape-based coding would predict that this would equally affect perception using other exploration modes. However, we found that adaptation-induced perceptual changes did not transfer between exploration modes. Instead, serial and parallel exploration components adapted simultaneously, but to different kinaesthetic aspects of exploration behaviour rather than object-shape per se. These results indicate that a potential combination of information from different exploration modes can only occur at down-stream cortical processing stages, at which adaptation is no longer effective. PMID:27698392

  1. Dynamics and control of cable-suspended parallel robots for giant telescopes

    NASA Astrophysics Data System (ADS)

    Zhuang, Peng; Yao, Zhengqiu

    2006-06-01

    A cable-suspended parallel robot utilizes the basic idea of the Stewart platform but replaces the parallel links with cables and the linear actuators with winches. It has many advantages over a conventional crane. The concept of applying a cable-suspended parallel robot to the construction and maintenance of a giant telescope is presented in this paper. Compared with the mass and travel of the moving platform of the robot, the mass and deformation of the cables can be disregarded. Based on these premises, the kinematic and dynamic models of the robot are built. Through simulation, the inertia and gravity of the moving platform are found to have a dominant effect on the dynamic characteristics of the robot, while the dynamics of the actuators can be disregarded, so a simplified dynamic model applicable to real-time control is obtained. Moreover, according to the control-law partitioning approach and optimization theory, a workspace model-based controller is proposed, considering the characteristic that the cables can only pull but not push. The simulation results indicate that the controller possesses good accuracy in pose and speed tracking, and keeps the cables in reliable tension by maintaining the minimum strain above a certain given value, thus ensuring smooth motion and accurate localization of the moving platform.

  2. The Virtual Short Physical Performance Battery

    PubMed Central

    Wrights, Abbie P.; Haakonssen, Eric H.; Dobrosielski, Meredith A.; Chmelo, Elizabeth A.; Barnard, Ryan T.; Pecorella, Anthony; Ip, Edward H.; Rejeski, W. Jack

    2015-01-01

    Background. Performance-based and self-report instruments of physical function are frequently used and provide complementary information. Identifying older adults with a mismatch between actual and perceived function has utility in clinical settings and in the design of interventions. Using novel, video-animated technology, the objective of this study was to develop a self-report measure that parallels the domains of objective physical function assessed by the Short Physical Performance Battery (SPPB)—the virtual SPPB (vSPPB). Methods. The SPPB, vSPPB, the self-report Pepper Assessment Tool for Disability, the Mobility Assessment Tool-short form, and a 400-m walk test were administered to 110 older adults (mean age = 80.6±5.2 years). One-week test–retest reliability of the vSPPB was examined in 30 participants. Results. The total SPPB (mean [±SD] = 7.7±2.8) and vSPPB (7.7±3.2) scores were virtually identical, yet moderately correlated (r = .601, p < .05). The component scores of the SPPB and vSPPB were also moderately correlated (all p values <.01). The vSPPB (intraclass correlation = .963, p < .05) was reliable; however, individuals with the lowest function overestimated their overall lower extremity function while participants of all functional levels overestimated their ability on chair stands, but accurately perceived their usual gait speed. Conclusion. In spite of the similarity between the SPPB and vSPPB, the moderate strength of the association between the two suggests that they offer unique perspectives on an older adult’s physical function. PMID:25829520

  3. The Development of a Motor-Free Short-Form of the Wechsler Intelligence Scale for Children-Fifth Edition.

    PubMed

    Piovesana, Adina M; Harrison, Jessica L; Ducat, Jacob J

    2017-12-01

    This study aimed to develop a motor-free short-form of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) that allows clinicians to estimate the Full Scale Intelligence Quotients of youths with motor impairments. Using the reliabilities and intercorrelations of six WISC-V motor-free subtests, psychometric methodologies were applied to develop look-up tables for four Motor-free Short-form indices: Verbal Comprehension Short-form, Perceptual Reasoning Short-form, Working Memory Short-form, and a Motor-free Intelligence Quotient. Index-level discrepancy tables were developed using the same methods to allow clinicians to statistically compare visual, verbal, and working memory abilities. The short-form indices had excellent reliabilities (r = .92-.97) comparable to the original WISC-V. This motor-free short-form of the WISC-V is a reliable alternative for the assessment of intellectual functioning in youths with motor impairments. Clinicians are provided with user-friendly look-up tables, index-level discrepancy tables, and base rates, displayed similarly to those in the WISC-V manuals to enable interpretation of assessment results.
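Building index reliabilities from subtest reliabilities and intercorrelations, as this abstract describes, is classically done with Mosier's composite-reliability formula. A hedged sketch with made-up values (not the WISC-V's) and unit weights:

```python
# Mosier composite reliability: 1 minus the summed subtest error variance
# over the composite variance. Assumes unit weights and unit subtest SDs.
def composite_reliability(reliabilities, intercorrelations):
    """reliabilities: r_ii per subtest; intercorrelations: full symmetric
    correlation matrix with 1.0 on the diagonal."""
    k = len(reliabilities)
    var_composite = sum(intercorrelations[i][j]
                        for i in range(k) for j in range(k))
    error_var = sum(1 - r for r in reliabilities)
    return 1 - error_var / var_composite
```

Note how pooling even two modestly correlated subtests yields a composite more reliable than either alone, which is why short-form indices can reach the r = .92-.97 range the study reports.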

  4. Methodology Series Module 9: Designing Questionnaires and Clinical Record Forms - Part II.

    PubMed

    Setia, Maninder Singh

    2017-01-01

    This article is a continuation of the previous module on designing questionnaires and clinical record form in which we have discussed some basic points about designing the questionnaire and clinical record forms. In this section, we will discuss the reliability and validity of questionnaires. The different types of validity are face validity, content validity, criterion validity, and construct validity. The different types of reliability are test-retest reliability, inter-rater reliability, and intra-rater reliability. Some of these parameters are assessed by subject area experts. However, statistical tests should be used for evaluation of other parameters. Once the questionnaire has been designed, the researcher should pilot test the questionnaire. The items in the questionnaire should be changed based on the feedback from the pilot study participants and the researcher's experience. After the basic structure of the questionnaire has been finalized, the researcher should assess the validity and reliability of the questionnaire or the scale. If an existing standard questionnaire is translated in the local language, the researcher should assess the reliability and validity of the translated questionnaire, and these values should be presented in the manuscript. The decision to use a self- or interviewer-administered, paper- or computer-based questionnaire depends on the nature of the questions, literacy levels of the target population, and resources.

  5. Methodology Series Module 9: Designing Questionnaires and Clinical Record Forms – Part II

    PubMed Central

    Setia, Maninder Singh

    2017-01-01

    This article is a continuation of the previous module on designing questionnaires and clinical record form in which we have discussed some basic points about designing the questionnaire and clinical record forms. In this section, we will discuss the reliability and validity of questionnaires. The different types of validity are face validity, content validity, criterion validity, and construct validity. The different types of reliability are test-retest reliability, inter-rater reliability, and intra-rater reliability. Some of these parameters are assessed by subject area experts. However, statistical tests should be used for evaluation of other parameters. Once the questionnaire has been designed, the researcher should pilot test the questionnaire. The items in the questionnaire should be changed based on the feedback from the pilot study participants and the researcher's experience. After the basic structure of the questionnaire has been finalized, the researcher should assess the validity and reliability of the questionnaire or the scale. If an existing standard questionnaire is translated in the local language, the researcher should assess the reliability and validity of the translated questionnaire, and these values should be presented in the manuscript. The decision to use a self- or interviewer-administered, paper- or computer-based questionnaire depends on the nature of the questions, literacy levels of the target population, and resources. PMID:28584367

  6. Electron acceleration in a secondary magnetic island formed during magnetic reconnection with a guide field

    NASA Astrophysics Data System (ADS)

    Wang, Huanyu; Lu, Quanming; Huang, Can; Wang, Shui

    2017-05-01

    Secondary magnetic islands may be generated in the vicinity of an X line during magnetic reconnection. In this paper, by performing two-dimensional (2-D) particle-in-cell simulations, we investigate the role of a secondary magnetic island in electron acceleration during magnetic reconnection with a guide field. The electron motions are found to be adiabatic, and we analyze the contributions of the parallel electric field and Fermi and betatron mechanisms to electron acceleration in the secondary island during the evolution of magnetic reconnection. When the secondary island is formed, electrons are accelerated by the parallel electric field due to the existence of the reconnection electric field in the electron current sheet. Electrons can be accelerated by both the parallel electric field and Fermi mechanism when the secondary island begins to merge with the primary magnetic island, which is formed simultaneously with the appearance of X lines. With the increase in the guide field, the contributions of the Fermi mechanism to electron acceleration become less and less important. When the guide field is sufficiently large, the contribution of the Fermi mechanism is almost negligible.

  7. Formation of Electrostatic Potential Drops in the Auroral Zone

    NASA Technical Reports Server (NTRS)

    Schriver, D.; Ashour-Abdalla, M.; Richard, R. L.

    2001-01-01

    In order to examine the self-consistent formation of large-scale quasi-static parallel electric fields in the auroral zone on a micro/meso scale, a particle-in-cell simulation has been developed. The code resolves electron Debye length scales so that electron micro-processes are included, and a variable grid scheme is used such that the overall length scale of the simulation is on the order of an Earth radius along the magnetic field. The simulation is electrostatic and includes the magnetic mirror force, as well as two types of plasmas, a cold dense ionospheric plasma and a warm tenuous magnetospheric plasma. In order to study the formation of parallel electric fields in the auroral zone, different magnetospheric ion and electron inflow boundary conditions are used to drive the system. It has been found that for conditions in the primary (upward) current region an upward directed quasi-static electric field can form across the system due to magnetic mirroring of the magnetospheric ions and electrons at different altitudes. For conditions in the return (downward) current region it is shown that a quasi-static parallel electric field in the opposite sense of that in the primary current region is formed, i.e., the parallel electric field is directed earthward. The conditions for how these different electric fields can be formed are discussed using satellite observations and numerical simulations.

  8. JPARSS: A Java Parallel Network Package for Grid Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jie; Akers, Walter; Chen, Ying

    2002-03-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because doing so requires tuning the TCP window size to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning TCP window size. This package enables single sign-on, certificate delegation, and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments will be presented to show that using Java parallel streams is more effective than tuning TCP window size. In addition a simple architecture using Web services…
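JPARSS itself is a Java package; the following Python sketch is a hypothetical illustration of the core idea the abstract describes, not the JPARSS API. It partitions a payload, pushes each partition over its own socket concurrently, and reassembles the pieces in stream order. Local socket pairs stand in for real wide-area connections.

```python
# Parallel-streams sketch: N concurrent sockets each carry one partition.
import socket
import threading

def parallel_send(payload: bytes, n_streams: int = 4) -> bytes:
    size = -(-len(payload) // n_streams)           # ceiling division
    parts = [payload[i * size:(i + 1) * size] for i in range(n_streams)]
    pairs = [socket.socketpair() for _ in range(n_streams)]
    received = [b""] * n_streams

    def sender(i):
        tx = pairs[i][0]
        tx.sendall(parts[i])
        tx.shutdown(socket.SHUT_WR)                # signal end of stream

    def receiver(i):
        rx = pairs[i][1]
        chunks = []
        while True:
            data = rx.recv(4096)
            if not data:
                break
            chunks.append(data)
        received[i] = b"".join(chunks)

    threads = [threading.Thread(target=f, args=(i,))
               for i in range(n_streams) for f in (sender, receiver)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    for tx, rx in pairs:
        tx.close()
        rx.close()
    return b"".join(received)      # partitions reassembled in stream order
```

On a real WAN, each stream gets its own TCP window, so N streams can fill a long fat pipe without per-connection window tuning, which is the effect the paper measures.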

  9. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept of data processing and application proposed in recent years. It is a new processing method based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster computing resources and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and solving the need for concurrent multi-user high-speed access to remotely sensed data. It verified the rationality, reliability, and superiority of the system design by testing the storage efficiency for different image data and multiple users and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images through building an actual Hadoop service system.
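The MapReduce pattern this abstract applies to image blocks can be sketched in plain Python (a simulation of the pattern, not Hadoop's API; the per-tile mean is a hypothetical stand-in for pyramid-level computation):

```python
# Map: each image block emits a keyed intermediate value.
# Reduce: values are grouped by key and aggregated.
from collections import defaultdict

def map_phase(blocks):
    for key, pixels in blocks:
        yield key, sum(pixels) / len(pixels)   # per-block mean as a stand-in

def reduce_phase(pairs):
    groups = defaultdict(list)
    for key, value in pairs:                   # the "shuffle" step
        groups[key].append(value)
    return {key: sum(vs) / len(vs) for key, vs in groups.items()}
```

In the real system, the map tasks run on the cluster nodes holding the HDFS blocks, so the computation moves to the data rather than the reverse.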

  10. The Software Correlator of the Chinese VLBI Network

    NASA Technical Reports Server (NTRS)

    Zheng, Weimin; Quan, Ying; Shu, Fengchun; Chen, Zhong; Chen, Shanshan; Wang, Weihua; Wang, Guangli

    2010-01-01

    The software correlator of the Chinese VLBI Network (CVN) has played an irreplaceable role in the CVN routine data processing, e.g., in the Chinese lunar exploration project. This correlator will be upgraded to process geodetic and astronomical observation data. In the future, with several new stations joining the network, CVN will carry out crustal movement observations, quick UT1 measurements, astrophysical observations, and deep space exploration activities. For the geodetic or astronomical observations, we need a wide-band 10-station correlator. For spacecraft tracking, a real-time and highly reliable correlator is essential. To meet the scientific and navigation requirements of CVN, two parallel software correlators in multiprocessor environments are under development. A high speed, 10-station prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm on a computer cluster platform is being developed. Another real-time software correlator for spacecraft tracking adopts thread-parallel technology, and it runs on SMP (Symmetric Multiple Processor) servers. Both correlators have the characteristics of flexible structure and scalability.
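At its core, a correlator cross-correlates the signals recorded at pairs of stations and finds the lag of the correlation peak, which encodes their relative geometric delay. This is a drastically simplified sketch of that one operation (real VLBI correlators, CVN's included, work on channelized, fringe-rotated data streams; the names here are illustrative):

```python
# Brute-force lagged cross-correlation of two sampled signals.
def cross_correlate(x, y, max_lag):
    n = len(x)
    return {lag: sum(x[i] * y[i + lag]
                     for i in range(n) if 0 <= i + lag < n)
            for lag in range(-max_lag, max_lag + 1)}

def best_lag(x, y, max_lag):
    corr = cross_correlate(x, y, max_lag)
    return max(corr, key=corr.get)   # lag of the correlation peak
```

Production correlators compute this via FFTs and distribute baselines across threads or MPI ranks, which is where the parallelism described in the abstract enters.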

  11. Science Grade 7, Long Form.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Bureau of Curriculum Development.

    The Grade 7 Science course of study was prepared in two parallel forms: a short form designed for students who had achieved a high measure of success in previous science courses, and the long form for those who have not been able to maintain the pace. Both forms contain similar content. The Grade 7 guide is the first in a three-year sequence for…

  12. Lineation-parallel c-axis Fabric of Quartz Formed Under Water-rich Conditions

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Zhang, J.; Li, P.

    2014-12-01

    The crystallographic preferred orientation (CPO) of quartz is of great significance because it records much valuable information pertinent to the deformation of quartz-rich rocks in the continental crust. The lineation-parallel c-axis CPO (i.e., c-axis forming a maximum parallel to the lineation) in naturally deformed quartz is generally considered to form under high-temperature (> ~550 ºC) conditions. However, most laboratory deformation experiments on quartzite have failed to produce such a CPO at high temperatures up to 1200 ºC. Here we report a new occurrence of the lineation-parallel c-axis CPO of quartz from kyanite-quartz veins in eclogite. Optical microstructural observations, Fourier transform infrared (FTIR) spectroscopy, and electron backscattered diffraction (EBSD) techniques were integrated to elucidate the nature of the quartz CPOs. Quartz exhibits mostly straight to slightly curved grain boundaries, modest intracrystalline plasticity, and significant shape preferred orientation (SPO) and CPOs, indicating that dislocation creep dominated the deformation of quartz. Kyanite grains in the veins are mostly strain-free, suggestive of their higher strength relative to quartz. The pronounced SPO and CPOs in kyanite were interpreted to originate from anisotropic crystal growth and/or mechanical rotation during vein-parallel shearing. FTIR results show quartz contains a trivial amount of structurally bound water (several tens of H/10^6 Si), while kyanite has a water content of 384-729 H/10^6 Si; however, petrographic observations suggest quartz from the veins was deformed under water-rich conditions. We argue that the observed lineation-parallel c-axis fabric in quartz was inherited from preexisting CPOs as a result of anisotropic grain growth under stress facilitated by water, rather than resulting from a dominant c-slip. The preservation of the quartz CPOs probably benefited from the preexisting quartz CPOs, which render most quartz grains unsuitably oriented for easy a-slip at lower temperatures, and from the weak deformation during subsequent exhumation. This hypothesis provides a reasonable explanation for the observations that most lineation-parallel c-axis fabrics of quartz are found in veins and that deformation experiments on quartz-rich rocks at high temperature have failed to produce such CPOs.

  13. Non-Cartesian Parallel Imaging Reconstruction

    PubMed Central

    Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole

    2014-01-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499

  14. 34 CFR 668.144 - Application for test approval.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the comparability of scores on the current test to scores on the previous test, and data from validity... explanation of the methodology and procedures for measuring the reliability of the test; (ii) Evidence that different forms of the test, including, if applicable, short forms, are comparable in reliability; (iii...

  15. Parallel family trees for transfer matrices in the Potts model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristobal A.; Canfora, Fabrizio; Hitschfeld, Nancy; Navarro, Gonzalo

    2015-02-01

The computational cost of transfer matrix methods for the Potts model is related to the question: in how many ways can two layers of a lattice be connected? Answering this question leads to the generation of a combinatorial set of lattice configurations. This set defines the configuration space of the problem, and the smaller it is, the faster the transfer matrix can be computed. The configuration space of generic (q, v) transfer matrix methods for strips is on the order of the Catalan numbers, which grow asymptotically as O(4^m), where m is the width of the strip. Other transfer matrix methods with a smaller configuration space do exist, but they make assumptions about the temperature or the number of spin states, or restrict the structure of the lattice. In this paper we propose a parallel algorithm that uses a sub-Catalan configuration space of O(3^m) to build the generic (q, v) transfer matrix in a compressed form. The improvement is achieved by grouping the original set of Catalan configurations into a forest of family trees, in such a way that the solution to the problem is now computed by solving the root node of each family. As a result, the algorithm becomes exponentially faster than the Catalan approach while remaining highly parallel. The resulting matrix is stored in a compressed form using O(3^m × 4^m) space, making numerical evaluation and decompression faster than evaluating the matrix in its O(4^m × 4^m) uncompressed form. Experimental results for different sizes of strip lattices show that the parallel family trees (PFT) strategy indeed runs exponentially faster than the Catalan Parallel Method (CPM), especially when dealing with dense transfer matrices. In terms of parallel performance, we report strong-scaling speedups of up to 5.7× when running on an 8-core shared-memory machine and 28× for a 32-core cluster.
The best balance of speedup and efficiency for the multi-core machine was achieved when using p = 4 processors, while for the cluster scenario it was in the range p ∈ [8, 10]. Because of the parallel capabilities of the algorithm, a large-scale execution of the parallel family trees strategy on a supercomputer could contribute to the study of wider strip lattices.
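A hedged sketch, not from the paper: the Catalan number C_m counts the generic (q, v) configuration space for a strip of width m and grows like 4^m, while the family-tree approach works with a space of size O(3^m). Only the two growth rates are illustrated here; the family-tree grouping itself is not reproduced.

```python
# Compare the Catalan configuration count C_m (~4^m asymptotically, so it
# eventually dominates 3^m) with the sub-Catalan 3^m space size.
from math import comb

def catalan(m: int) -> int:
    """C_m = C(2m, m) / (m + 1), the number of layer-connection configurations."""
    return comb(2 * m, m) // (m + 1)

for m in (8, 16, 24, 32):
    print(f"m={m:2d}  C_m={catalan(m):>18d}  3^m={3**m:>16d}  C_m/3^m={catalan(m)/3**m:8.2f}")
```

Because of the polynomial factor in the Catalan asymptotics, the ratio only exceeds 1 for wider strips, which is exactly the regime where the reduced configuration space pays off.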

  16. Study of the Reliability of CCSS-Aligned Math Measures (2012 Research Version): Grades 6-8. Technical Report #1312

    ERIC Educational Resources Information Center

    Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    In this technical report, we describe the results of a study of mathematics items written to align with the Common Core State Standards (CCSS) in grades 6-8. In each grade, CCSS items were organized into forms, and the reliability of these forms was evaluated along with an experimental form including items aligned with the National Council of…

  17. Electronic logic to enhance switch reliability in detecting openings and closures of redundant switches

    DOEpatents

    Cooper, James A.

    1986-01-01

    A logic circuit is used to enhance redundant switch reliability. Two or more switches are monitored for logical high or low output. The output for the logic circuit produces a redundant and failsafe representation of the switch outputs. When both switch outputs are high, the output is high. Similarly, when both switch outputs are low, the logic circuit's output is low. When the output states of the two switches do not agree, the circuit resolves the conflict by memorizing the last output state which both switches were simultaneously in and produces the logical complement of this output state. Thus, the logic circuit of the present invention allows the redundant switches to be treated as if they were in parallel when the switches are open and as if they were in series when the switches are closed. A failsafe system having maximum reliability is thereby produced.
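The latching behavior described in the patent abstract can be sketched in software; the class below is an illustrative model, not the patented circuit: the output follows the switches when they agree and, on disagreement, emits the complement of the last agreed state.

```python
# Hedged software model of the redundant-switch logic: agree -> pass through
# and memorize; disagree -> complement of the memorized agreed state.
class RedundantSwitchLogic:
    def __init__(self, initial: bool = False):
        self.last_agreed = initial  # last output state both switches shared

    def output(self, a: bool, b: bool) -> bool:
        if a == b:
            self.last_agreed = a
            return a
        # Conflict resolution: complement of the memorized agreed state.
        return not self.last_agreed

logic = RedundantSwitchLogic()
print(logic.output(True, True))    # both high -> True
print(logic.output(True, False))   # disagreement after agreed-high -> False
print(logic.output(False, False))  # both low -> False
print(logic.output(True, False))   # disagreement after agreed-low -> True
```

This reproduces the series/parallel intuition in the abstract: a single stuck switch cannot hold the output in its previous state.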

  18. Electronic logic for enhanced switch reliability

    DOEpatents

    Cooper, J.A.

    1984-01-20

    A logic circuit is used to enhance redundant switch reliability. Two or more switches are monitored for logical high or low output. The output for the logic circuit produces a redundant and fail-safe representation of the switch outputs. When both switch outputs are high, the output is high. Similarly, when both switch outputs are low, the logic circuit's output is low. When the output states of the two switches do not agree, the circuit resolves the conflict by memorizing the last output state which both switches were simultaneously in and produces the logical complement of this output state. Thus, the logic circuit of the present invention allows the redundant switches to be treated as if they were in parallel when the switches are open and as if they were in series when the switches are closed. A failsafe system having maximum reliability is thereby produced.

  19. Real-time stereo matching using orthogonal reliability-based dynamic programming.

    PubMed

    Gong, Minglun; Yang, Yee-Hong

    2007-03-01

A novel algorithm is presented in this paper for estimating reliable stereo matches in real time. Based on the dynamic programming-based technique we previously proposed, the new algorithm can generate semi-dense disparity maps using as few as two dynamic programming passes. The iterative best path tracing process used in traditional dynamic programming is replaced by a local minimum searching process, making the algorithm suitable for parallel execution. Most computations are implemented on programmable graphics hardware, which improves the processing speed and makes real-time estimation possible. The experiments on the four new Middlebury stereo datasets show that, on an ATI Radeon X800 card, the presented algorithm can produce reliable matches for 60%-80% of pixels at a rate of 10-20 frames per second. If needed, the algorithm can be configured for generating full-density disparity maps.

  20. A study of DC-DC converters with MCT's for arcjet power supplies

    NASA Technical Reports Server (NTRS)

    Stuart, Thomas A.

    1994-01-01

Many arcjet DC power supplies use PWM full-bridge converters with large arrays of parallel FETs. This report investigates an alternative supply using a variable-frequency series resonant converter with small arrays of parallel MCTs (MOS-controlled thyristors). The reasons for this approach are to increase reliability by reducing the number of switching devices and to decrease the surface mounting area of the switching arrays. The variable-frequency series resonant approach is used because the relatively slow switching speed of the MCT precludes the use of PWM. The 10 kW converter operated satisfactorily with an efficiency of over 91 percent. Test results indicate this efficiency could be increased further by additional optimization of the series resonant inductor.

  1. Rough Electrode Creates Excess Capacitance in Thin-Film Capacitors

    PubMed Central

    2017-01-01

    The parallel-plate capacitor equation is widely used in contemporary material research for nanoscale applications and nanoelectronics. To apply this equation, flat and smooth electrodes are assumed for a capacitor. This essential assumption is often violated for thin-film capacitors because the formation of nanoscale roughness at the electrode interface is very probable for thin films grown via common deposition methods. In this work, we experimentally and theoretically show that the electrical capacitance of thin-film capacitors with realistic interface roughness is significantly larger than the value predicted by the parallel-plate capacitor equation. The degree of the deviation depends on the strength of the roughness, which is described by three roughness parameters for a self-affine fractal surface. By applying an extended parallel-plate capacitor equation that includes the roughness parameters of the electrode, we are able to calculate the excess capacitance of the electrode with weak roughness. Moreover, we introduce the roughness parameter limits for which the simple parallel-plate capacitor equation is sufficiently accurate for capacitors with one rough electrode. Our results imply that the interface roughness beyond the proposed limits cannot be dismissed unless the independence of the capacitance from the interface roughness is experimentally demonstrated. The practical protocols suggested in our work for the reliable use of the parallel-plate capacitor equation can be applied as general guidelines in various fields of interest. PMID:28745040
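The baseline relation whose validity limits the abstract examines is the parallel-plate capacitor equation, C = ε0·εr·A/d. A minimal sketch of that baseline (the roughness-corrected extension from the paper is not reproduced; the film parameters below are illustrative):

```python
# Ideal parallel-plate capacitance C = eps0 * eps_r * A / d, assuming flat,
# smooth electrodes; rough electrodes yield a larger measured capacitance.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(eps_r: float, area_m2: float, d_m: float) -> float:
    return EPS0 * eps_r * area_m2 / d_m

# Illustrative thin film: 1 mm^2 area, eps_r = 3, 100 nm thick.
c = parallel_plate_capacitance(3.0, 1e-6, 100e-9)
print(f"{c * 1e12:.1f} pF")  # ~265.6 pF
```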

  2. Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    NASA Technical Reports Server (NTRS)

    Abdi, Frank

    1996-01-01

A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software, 'GENOA', is dedicated to parallel and high-speed analysis to perform probabilistic evaluation of the high-temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives achieved in the development were: (1) Utilization of the power of parallel processing and static/dynamic load-balancing optimization to make the complex simulation of the structure, material, and processing of high-temperature composites affordable; (2) Computational integration and synchronization of probabilistic mathematics, structural/material mechanics, and parallel computing; (3) Implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism and increase convergence rates through high- and low-level processor assignment; (4) Creation of the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid, and distributed-workstation types of computers; and (5) Market evaluation. The results of the Phase-2 effort provide a good basis for continuation and warrant a Phase-3 government and industry partnership.

  3. Rough Electrode Creates Excess Capacitance in Thin-Film Capacitors.

    PubMed

    Torabi, Solmaz; Cherry, Megan; Duijnstee, Elisabeth A; Le Corre, Vincent M; Qiu, Li; Hummelen, Jan C; Palasantzas, George; Koster, L Jan Anton

    2017-08-16

    The parallel-plate capacitor equation is widely used in contemporary material research for nanoscale applications and nanoelectronics. To apply this equation, flat and smooth electrodes are assumed for a capacitor. This essential assumption is often violated for thin-film capacitors because the formation of nanoscale roughness at the electrode interface is very probable for thin films grown via common deposition methods. In this work, we experimentally and theoretically show that the electrical capacitance of thin-film capacitors with realistic interface roughness is significantly larger than the value predicted by the parallel-plate capacitor equation. The degree of the deviation depends on the strength of the roughness, which is described by three roughness parameters for a self-affine fractal surface. By applying an extended parallel-plate capacitor equation that includes the roughness parameters of the electrode, we are able to calculate the excess capacitance of the electrode with weak roughness. Moreover, we introduce the roughness parameter limits for which the simple parallel-plate capacitor equation is sufficiently accurate for capacitors with one rough electrode. Our results imply that the interface roughness beyond the proposed limits cannot be dismissed unless the independence of the capacitance from the interface roughness is experimentally demonstrated. The practical protocols suggested in our work for the reliable use of the parallel-plate capacitor equation can be applied as general guidelines in various fields of interest.

  4. Utilization of parallel processing in solving the inviscid form of the average-passage equation system for multistage turbomachinery

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.

    1987-01-01

    A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to are specific to the Cray X-MP line of computers and its associated SSD (Solid-State Disk). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.

  5. Design and implementation of online automatic judging system

    NASA Astrophysics Data System (ADS)

    Liang, Haohui; Chen, Chaojie; Zhong, Xiuyu; Chen, Yuefeng

    2017-06-01

Manual judging in programming training and competitions suffers from low efficiency and poor reliability, so we designed an Online Automatic Judging (OAJ) system. The OAJ system, comprising a sandboxed judging side and a Web side, automatically compiles and runs submitted code and generates evaluation scores and corresponding reports. To prevent malicious code from damaging the system, submissions run inside a sandbox, ensuring the safety of the system. The OAJ system uses thread pools to achieve parallel testing and adopts database optimization mechanisms, such as horizontal table partitioning, to improve system performance and resource utilization. The test results show that the system has high performance, high reliability, high stability, and excellent extensibility.
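The thread-pool judging idea can be sketched as follows; this is an illustrative toy, not the OAJ system's actual interface, and the `run_test` helper and test cases are invented for the example (the real system would execute submissions inside a sandbox rather than in-process):

```python
# Hedged sketch: judge one submission against several test cases in parallel
# using a thread pool, then aggregate a score.
from concurrent.futures import ThreadPoolExecutor

def run_test(case):
    # Placeholder judge: compare the submission's output to the expected answer.
    func, arg, expected = case
    try:
        return func(arg) == expected
    except Exception:
        return False

def submitted(x):  # a toy "submission" to be judged
    return x * x

cases = [(submitted, 2, 4), (submitted, 3, 9), (submitted, 4, 15)]

with ThreadPoolExecutor(max_workers=4) as pool:
    verdicts = list(pool.map(run_test, cases))

score = 100 * sum(verdicts) // len(verdicts)
print(verdicts, score)  # [True, True, False] 66
```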

  6. The Shuttle processing contractors (SPC) reliability program at the Kennedy Space Center - The real world

    NASA Astrophysics Data System (ADS)

    McCrea, Terry

    The Shuttle Processing Contract (SPC) workforce consists of Lockheed Space Operations Co. as prime contractor, with Grumman, Thiokol Corporation, and Johnson Controls World Services as subcontractors. During the design phase, reliability engineering is instrumental in influencing the development of systems that meet the Shuttle fail-safe program requirements. Reliability engineers accomplish this objective by performing FMEA (failure modes and effects analysis) to identify potential single failure points. When technology, time, or resources do not permit a redesign to eliminate a single failure point, the single failure point information is formatted into a change request and presented to senior management of SPC and NASA for risk acceptance. In parallel with the FMEA, safety engineering conducts a hazard analysis to assure that potential hazards to personnel are assessed. The combined effort (FMEA and hazard analysis) is published as a system assurance analysis. Special ground rules and techniques are developed to perform and present the analysis. The reliability program at KSC is vigorously pursued, and has been extremely successful. The ground support equipment and facilities used to launch and land the Space Shuttle maintain an excellent reliability record.

  7. Experimental determination of pCo perturbation factors for plane-parallel chambers

    NASA Astrophysics Data System (ADS)

    Kapsch, R. P.; Bruggmoser, G.; Christ, G.; Dohm, O. S.; Hartmann, G. H.; Schüle, E.

    2007-12-01

For plane-parallel chambers used in electron dosimetry, modern dosimetry protocols recommend a cross-calibration against a calibrated cylindrical chamber. The rationale for this is the unacceptably large (up to 3-4%) chamber-to-chamber variation of the perturbation factor (p_wall)_Co that has been reported for plane-parallel chambers of a given type. Some recent publications have shown that this is no longer the case for modern plane-parallel chambers. The aims of the present study are to obtain reliable information about the variation of the perturbation factors for modern types of plane-parallel chambers and, if this variation is found to be acceptably small, to determine type-specific mean values for these perturbation factors which can be used for absorbed dose measurements in electron beams using plane-parallel chambers. In an extensive multi-center study, the individual perturbation factors pCo (which are usually assumed to be equal to (p_wall)_Co) were determined for a total of 35 plane-parallel chambers of the Roos type, 15 chambers of the Markus type, and 12 chambers of the Advanced Markus type. From a total of 188 cross-calibration measurements, the pCo values for different chambers of the same type were found to vary by at most 1.0%, 0.9%, and 0.6% for the Roos, Markus, and Advanced Markus types, respectively. The mean pCo values obtained from all measurements are 1.0198 (Roos), 1.0175 (Markus), and 1.0155 (Advanced Markus); the relative experimental standard deviation of the individual pCo values is less than 0.24% for all chamber types, and the relative standard uncertainty of the mean pCo values is 1.1%.

  8. Short forms of the Schedule for Nonadaptive and Adaptive Personality (SNAP) for self- and collateral ratings: development, reliability, and validity.

    PubMed

    Harlan, E; Clark, L A

    1999-06-01

Researchers and clinicians alike increasingly seek brief, reliable, and valid measures to obtain personality trait ratings from both selves and peers. We report the development of a paragraph-descriptor short form of a full-length personality assessment instrument, the Schedule for Nonadaptive and Adaptive Personality (SNAP), with both self- and other-rating versions. Reliability and validity data were collected on a sample of 294 college students, from 90 of whom we also obtained parental ratings of their personality. Internal consistency reliability was good in both self- and parent data. The factorial structures of the self-report short and long forms were very similar. Convergence between parental ratings was moderately high. Self-parent convergence was variable, with lower agreement on scales assessing subjective distress than on those assessing more observable behaviors; it also was stronger for higher-order factors than for scales.

  9. Dark-field transmission electron microscopy of cortical bone reveals details of extrafibrillar crystals.

    PubMed

    Schwarcz, Henry P; McNally, Elizabeth A; Botton, Gianluigi A

    2014-12-01

In a previous study we showed that most of the mineral in bone is present in the form of "mineral structures": 5-6 nm thick, elongated plates which surround and are oriented parallel to collagen fibrils. Using dark-field transmission electron microscopy, we viewed mineral structures in ion-milled sections of cortical human bone cut parallel to the collagen fibrils. Within the mineral structures we observe single crystals of apatite averaging 5.8 ± 2.7 nm in width and 28 ± 19 nm in length, their long axes oriented parallel to the fibril axis. Some appear to be composite, co-aligned crystals as thin as 2 nm. From their similarity to TEM images of crystals liberated from deproteinated bone, we infer that we are viewing sections through platy crystals of apatite that are assembled together to form the mineral structures. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Laboratory glassware rack for seismic safety

    NASA Technical Reports Server (NTRS)

    Cohen, M. M. (Inventor)

    1985-01-01

A rack for laboratory bottles and jars for chemicals and medicines has been designed to provide the maximum strength and security to the glassware in the event of a significant earthquake. The rack preferably is rectangular and may be made of a variety of chemically resistant materials including polypropylene, polycarbonate, and stainless steel. It comprises a first plurality of parallel vertical walls, and a second plurality of parallel vertical walls, perpendicular to the first. These intersecting vertical walls form a self-supporting structure without a bottom which sits on four legs. The top surface of the rack is formed by the top edges of all the vertical walls, which are not parallel but are skewed in three dimensions. These top edges form a grid matrix having a number of intersections of the vertical walls which define a number of rectangular compartments having varying widths, lengths, and heights.

  11. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2011-01-01

Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving the necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
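The constant-failure-rate and redundancy arithmetic behind such analyses can be illustrated as follows; the MTBF and mission duration below are invented for the example, not values from the ISS database:

```python
# Hedged illustration: with a constant failure rate, single-string reliability
# over a mission is R = exp(-t / MTBF); k redundant parallel strings give
# 1 - (1 - R)^k, at the cost of extra mass (ESM) for the added hardware.
from math import exp

def string_reliability(mtbf_hours: float, mission_hours: float) -> float:
    return exp(-mission_hours / mtbf_hours)

def redundant_reliability(r: float, k: int) -> float:
    return 1.0 - (1.0 - r) ** k

mission = 8760.0  # one year, in hours
r1 = string_reliability(5000.0, mission)  # illustrative MTBF of 5000 h
for k in (1, 2, 3):
    print(k, round(redundant_reliability(r1, k), 4))
```

Even modest system reliability targets drive k (and hence spares mass) up quickly when the single-string reliability is low, which is the qualitative result the abstract reports.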

  12. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Solomon, Jeffrey Michael (Inventor); Ghuman, Parminder Singh (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  13. A New Parallel Corpus Approach to Japanese Learners' English, Using Their Corrected Essays

    ERIC Educational Resources Information Center

    Miki, Nozomi

    2010-01-01

This research introduces unique parallel corpora to uncover linguistic behaviors in L2 argumentative writing in exact correspondence to the appropriate forms provided by English native speakers (NSs). The current paper targets the mysterious behavior of "I think" in argumentative prose. "I think" is regarded as arguably problematic and…

  14. Comparison of Educators' and Industrial Managers' Work Motivation Using Parallel Forms of the Work Components Study Questionnaire.

    ERIC Educational Resources Information Center

    Thornton, Billy W.; And Others

    The idea that educators would differ from business managers on Herzberg's motivation factors and Blum's security orientations was posited. Parallel questionnaires were used to measure the motivational variables. The sample was composed of 432 teachers, 118 administrators, and 192 industrial managers. Data were analyzed using multivariate and…

  15. Syntactic Change in the Parallel Architecture: The Case of Parasitic Gaps

    ERIC Educational Resources Information Center

    Culicover, Peter W.

    2017-01-01

    In Jackendoff's Parallel Architecture, the well-formed expressions of a language are licensed by correspondences between phonology, syntax, and conceptual structure. I show how this architecture can be used to make sense of the existence of parasitic gap constructions. A parasitic gap is one that is rendered acceptable because of the presence of…

  16. Growth of large aluminum nitride single crystals with thermal-gradient control

    DOEpatents

    Bondokov, Robert T; Rao, Shailaja P; Gibb, Shawn Robert; Schowalter, Leo J

    2015-05-12

    In various embodiments, non-zero thermal gradients are formed within a growth chamber both substantially parallel and substantially perpendicular to the growth direction during formation of semiconductor crystals, where the ratio of the two thermal gradients (parallel to perpendicular) is less than 10, by, e.g., arrangement of thermal shields outside of the growth chamber.

  17. Growth of large aluminum nitride single crystals with thermal-gradient control

    DOEpatents

    Bondokov, Robert T.; Rao, Shailaja P.; Schowalter, Leo J.

    2017-02-28

    In various embodiments, non-zero thermal gradients are formed within a growth chamber both substantially parallel and substantially perpendicular to the growth direction during formation of semiconductor crystals, where the ratio of the two thermal gradients (parallel to perpendicular) is less than 10, by, e.g., arrangement of thermal shields outside of the growth chamber.

  18. Repetitive resonant railgun power supply

    DOEpatents

    Honig, E.M.; Nunnally, W.C.

    1985-06-19

    A repetitive resonant railgun power supply provides energy for repetitively propelling projectiles from a pair of parallel rails. The supply comprises an energy storage capacitor, a storage inductor to form a resonant circuit with the energy storage capacitor and a magnetic switch to transfer energy between the resonant circuit and the pair of parallel rails for the propelling of projectiles.
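The storage capacitor and storage inductor described in the patent form a resonant circuit whose ringing frequency follows the standard relation f = 1/(2π√(LC)); a minimal sketch, with component values that are illustrative rather than taken from the patent:

```python
# LC resonant frequency f = 1 / (2*pi*sqrt(L*C)); values are illustrative.
from math import pi, sqrt

def resonant_frequency(l_henry: float, c_farad: float) -> float:
    return 1.0 / (2.0 * pi * sqrt(l_henry * c_farad))

print(f"{resonant_frequency(10e-6, 100e-6):.1f} Hz")  # 10 uH with 100 uF
```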

  19. WET EFFLUENT PARALLEL PLATE DIFFUSION DENUDER COUPLED CAPILLARY ION CHROMATOGRAPH FOR THE DETERMINATION OF ATMOSPHERIC TRACE GASES. (R825344)

    EPA Science Inventory

    We describe an inexpensive, compact parallel plate diffusion denuder coupled capillary IC system for the determination of soluble ionogenic atmospheric trace gases. The active sampling area (0.6×10 cm) of the denuder is formed in a novel manner by thermally bonding silica ge...

  20. Repetitive resonant railgun power supply

    DOEpatents

    Honig, Emanuel M.; Nunnally, William C.

    1988-01-01

    A repetitive resonant railgun power supply provides energy for repetitively propelling projectiles from a pair of parallel rails. The supply comprises an energy storage capacitor, a storage inductor to form a resonant circuit with the energy storage capacitor and a magnetic switch to transfer energy between the resonant circuit and the pair of parallel rails for the propelling of projectiles.

  1. Split-Waveguide Mounts For Submillimeter-Wave Multipliers And Harmonic Mixers

    NASA Technical Reports Server (NTRS)

    Raisanen, Antti; Choudhury, Debabani; Dengler, Robert J.; Oswald, John E.; Siegel, Peter H.

    1996-01-01

    Novel variation of split-waveguide mount for millimeter-and submillimeter-wavelength frequency multipliers and harmonic mixers developed. Designed to offer wide range of available matching impedances, while maintaining relatively simple fabrication sequence. Wide tuning range achieved with separate series and parallel elements, consisting of two pairs of noncontacting sliding backshorts, at fundamental and harmonic frequencies. Advantages include ease of fabrication, reliability, and tunability.

  2. Fast Computation and Assessment Methods in Power System Analysis

    NASA Astrophysics Data System (ADS)

    Nagata, Masaki

Power system analysis is essential for efficient and reliable power system operation and control. Recently, online security assessment systems have become important, as more efficient use of power networks is increasingly required. In this article, fast power system analysis techniques such as contingency screening, parallel processing, and intelligent systems application are briefly surveyed from the viewpoint of their application to online dynamic security assessment.

  3. Reliable Early Classification on Multivariate Time Series with Numerical and Categorical Attributes

    DTIC Science & Technology

    2015-05-22

design a procedure of feature extraction in REACT named MEG (Mining Equivalence classes with shapelet Generators) based on the concept of...Equivalence Classes Mining [12, 15]. MEG can efficiently and effectively generate the discriminative features. In addition, several strategies are proposed...technique of parallel computing [4] to propose a process of parallel MEG for substantially reducing the computational overhead of discovering shapelet

  4. Magnon-drag thermopile.

    PubMed

    Costache, Marius V; Bridoux, German; Neumann, Ingmar; Valenzuela, Sergio O

    2011-12-18

    Thermoelectric effects in spintronics are gathering increasing attention as a means of managing heat in nanoscale structures and of controlling spin information by using heat flow. Thermal magnons (spin-wave quanta) are expected to play a major role; however, little is known about the underlying physical mechanisms involved. The reason is the lack of information about magnon interactions and of reliable methods to obtain it, in particular for electrical conductors because of the intricate influence of electrons. Here, we demonstrate a conceptually new device that enables us to gather information on magnon-electron scattering and magnon-drag effects. The device resembles a thermopile formed by a large number of pairs of ferromagnetic wires placed between a hot and a cold source and connected thermally in parallel and electrically in series. By controlling the relative orientation of the magnetization in pairs of wires, the magnon drag can be studied independently of the electron and phonon-drag thermoelectric effects. Measurements as a function of temperature reveal the effect on magnon drag following a variation of magnon and phonon populations. This information is crucial to understand the physics of electron-magnon interactions, magnon dynamics and thermal spin transport.
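The series-electrical, parallel-thermal arrangement described above means the pair EMFs add, so the total thermopile voltage follows V = N · S_pair · ΔT; a minimal sketch with invented illustrative values (the device in the paper measures magnon-drag contributions to S_pair, which are not modeled here):

```python
# Hedged thermopile arithmetic: N wire pairs in electrical series each
# contribute S_pair * dT, so voltages add; values below are illustrative.
def thermopile_voltage(n_pairs: int, s_pair_v_per_k: float, delta_t_k: float) -> float:
    return n_pairs * s_pair_v_per_k * delta_t_k

# 50 pairs, 10 uV/K per pair, 2 K temperature difference:
print(thermopile_voltage(50, 10e-6, 2.0), "V")
```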

  5. Interactive computer modeling of combustion chemistry and coalescence-dispersion modeling of turbulent combustion

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.

    1984-01-01

An interactive computer code for simulation of a high-intensity turbulent combustor as a single-point inhomogeneous stirred reactor was developed from an existing batch-processing computer code, CDPSR. The interactive CDPSR code was used as a guide for interpretation and direction of DOE-sponsored companion experiments utilizing a xenon tracer with optical laser diagnostic techniques to experimentally determine the appropriate mixing frequency, and for validation of CDPSR as a mixing-chemistry model for a laboratory jet-stirred reactor. The coalescence-dispersion model for finite-rate mixing was incorporated into an existing interactive code, AVCO-MARK I, to enable simulation of a combustor as a modular array of stirred-flow and plug-flow elements, each having a prescribed finite mixing frequency, or axial distribution of mixing frequency, as appropriate. The speed and reliability of the batch kinetics integrator code CREKID were further increased by rewriting it in vectorized form for execution on a vector or parallel processor, and by incorporating numerical techniques which enhance execution speed by permitting specification of a very low accuracy tolerance.

  6. Octopamine Neuromodulation Regulates Gr32a-Linked Aggression and Courtship Pathways in Drosophila Males

    PubMed Central

    Andrews, Jonathan C.; Fernández, María Paz; Yu, Qin; Leary, Greg P.; Leung, Adelaine K. W.; Kavanaugh, Michael P.; Kravitz, Edward A.; Certel, Sarah J.

    2014-01-01

    Chemosensory pheromonal information regulates aggression and reproduction in many species, but how pheromonal signals are transduced to reliably produce behavior is not well understood. Here we demonstrate that the pheromonal signals detected by Gr32a-expressing chemosensory neurons to enhance male aggression are filtered through octopamine (OA, invertebrate equivalent of norepinephrine) neurons. Using behavioral assays, we find males lacking both octopamine and Gr32a gustatory receptors exhibit parallel delays in the onset of aggression and reductions in aggression. Physiological and anatomical experiments identify Gr32a to octopamine neuron synaptic and functional connections in the suboesophageal ganglion. Refining the Gr32a-expressing population indicates that mouth Gr32a neurons promote male aggression and form synaptic contacts with OA neurons. By restricting the monoamine neuron target population, we show that three previously identified OA-FruM neurons involved in behavioral choice are among the Gr32a-OA connections. Our findings demonstrate that octopaminergic neuromodulatory neurons function as early as a second-order step in this chemosensory-driven male social behavior pathway. PMID:24852170

  7. Improved silicon carbide for advanced heat engines

    NASA Technical Reports Server (NTRS)

    Whalen, Thomas J.; Mangels, J. A.

    1986-01-01

    The development of silicon carbide materials of high strength was initiated and components of complex shape and high reliability were formed. The approach was to adapt a beta-SiC powder and binder system to the injection molding process and to develop procedures and process parameters capable of providing a sintered silicon carbide material with improved properties. The initial effort was to characterize the baseline precursor materials, develop mixing and injection molding procedures for fabricating test bars, and characterize the properties of the sintered materials. Parallel studies of various mixing, dewaxing, and sintering procedures were performed in order to distinguish process routes for improving material properties. A total of 276 modulus-of-rupture (MOR) bars of the baseline material were molded, and 122 bars were fully processed to a sintered density of approximately 95 percent. Fluid mixing techniques were developed which significantly reduced flaw size and improved the strength of the material. Initial MOR tests indicated that the strength of the fluid-mixed material exceeds the baseline property by more than 33 percent.

  8. CMOL: A New Concept for Nanoelectronics

    NASA Astrophysics Data System (ADS)

    Likharev, Konstantin

    2005-03-01

    I will review the recent work on devices and architectures for future hybrid semiconductor/molecular integrated circuits, in particular those of ``CMOL'' variety [1]. Such circuits would combine an advanced CMOS subsystem fabricated by the usual lithographic patterning, two layers of parallel metallic nanowires formed, e.g., by nanoimprint, and two-terminal molecular devices self-assembled on the nanowire crosspoints. Estimates show that this powerful combination may allow CMOL circuits to reach an unparalleled density (up to 10^12 functions per cm^2) and ultrahigh rate of information processing (up to 10^20 operations per second on a single chip), at acceptable power dissipation. The main challenges on the way toward practical CMOL technology are: (i) reliable chemically-directed self-assembly of mid-size organic molecules, and (ii) the development of efficient defect-tolerant architectures for CMOL circuits. Our recent work has shown that such architectures may be developed not only for terabit-scale memories and naturally defect-tolerant mixed-signal neuromorphic networks, but (rather unexpectedly) also for FPGA-style digital Boolean circuits. [1] For details, see http://rsfq1.physics.sunysb.edu/˜likharev/nano/Springer04.pdf

  9. Technology transfer through a network of standard methods and recommended practices - The case of petrochemicals

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.; Karvounis, Sotirios

    2012-12-01

    Technology transfer may take place in parallel with cooperative action between companies participating in the same organizational scheme or using one another as subcontractors (outsourcing). In this case, cooperation should be realized by means of Standard Methods and Recommended Practices (SRPs) to achieve (i) quality of intermediate/final products according to specifications and (ii) industrial process control as required to guarantee such quality with minimum deviation (corresponding to maximum reliability) from preset mean values of representative quality parameters. This work deals with the design of the network of SRPs needed in each case for successful cooperation, which also implies the corresponding technology transfer, effectuated through a methodological framework developed in the form of an algorithmic procedure with 20 activity stages and 8 decision nodes. The functionality of this methodology is proved by presenting the path leading from (and relating) a standard test method for toluene, as petrochemical feedstock in toluene diisocyanate production, to the performance evaluation of industrial process control systems six generations upstream (i.e., from ASTM D5606 to BS EN 61003-1:2004 in the SRPs network).

  10. Probabilistic Design of a Wind Tunnel Model to Match the Response of a Full-Scale Aircraft

    NASA Technical Reports Server (NTRS)

    Mason, Brian H.; Stroud, W. Jefferson; Krishnamurthy, T.; Spain, Charles V.; Naser, Ahmad S.

    2005-01-01

    An approach is presented for carrying out the reliability-based design of a plate-like wing that is part of a wind tunnel model. The goal is to design the wind tunnel model to match the stiffness characteristics of the wing box of a flight vehicle while satisfying strength-based risk/reliability requirements that prevent damage to the wind tunnel model and fixtures. The flight vehicle is a modified F/A-18 aircraft. The design problem is solved using reliability-based optimization techniques. The objective function to be minimized is the difference between the displacements of the wind tunnel model and the corresponding displacements of the flight vehicle. The design variables control the thickness distribution of the wind tunnel model. Displacements of the wind tunnel model change with the thickness distribution, while displacements of the flight vehicle are a set of fixed data. The only constraint imposed is that the probability of failure be less than a specified value. Failure is assumed to occur if the stress caused by aerodynamic pressure loading is greater than the specified strength allowable. Two uncertain quantities are considered: the allowable stress and the thickness distribution of the wind tunnel model. Reliability is calculated using Monte Carlo simulation with response surfaces that provide approximate values of stresses. The response surface equations are, in turn, computed from finite element analyses of the wind tunnel model at specified design points. Because the response surface approximations were fit over a small region centered about the current design, the response surfaces were refit periodically as the design variables changed. Coarse-grained parallelism was used to simultaneously perform multiple finite element analyses.
Studies carried out in this paper demonstrate that this scheme of using moving response surfaces and coarse-grained computational parallelism reduces the execution time of the Monte Carlo simulation enough to make the design problem tractable. The results of the reliability-based designs performed in this paper show that large decreases in the probability of stress-based failure can be realized with only small sacrifices in the ability of the wind tunnel model to represent the displacements of the full-scale vehicle.
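    The core loop of this record's reliability calculation can be sketched as Monte Carlo sampling of the two uncertain quantities against a response surface. The quadratic surface, component values, and distributions below are illustrative stand-ins, not the paper's fitted surface or data.

```python
import random

def stress_surface(t):
    """Illustrative quadratic-style response surface standing in for
    finite-element stress as a function of panel thickness t (not the
    paper's actual fit)."""
    return 120.0 / t - 15.0 * t + 2.0 * t * t

def prob_of_failure(n_samples=100000, seed=1):
    """Monte Carlo estimate of P(stress > allowable) with two uncertain
    quantities: thickness (normal about its nominal value) and the
    strength allowable."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        t = rng.gauss(1.0, 0.05)            # thickness, nominal 1.0 (illustrative)
        allowable = rng.gauss(115.0, 5.0)   # strength allowable (illustrative)
        if stress_surface(t) > allowable:
            failures += 1
    return failures / n_samples

pf = prob_of_failure()
```

In the paper's scheme, `stress_surface` would be refit from fresh finite element analyses whenever the design moved outside the region where the surface was fitted.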

  11. Keldysh formalism for multiple parallel worlds

    NASA Astrophysics Data System (ADS)

    Ansari, M.; Nazarov, Y. V.

    2016-03-01

    We present a compact and self-contained review of the recently developed Keldysh formalism for multiple parallel worlds. The formalism has been applied to consistent quantum evaluation of the flows of informational quantities, in particular, to the evaluation of Renyi and Shannon entropy flows. We start with the formulation of the standard and extended Keldysh techniques in a single world in a form convenient for our presentation. We explain the use of Keldysh contours encompassing multiple parallel worlds. In the end, we briefly summarize the concrete results obtained with the method.

  12. CRUNCH_PARALLEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumaker, Dana E.; Steefel, Carl I.

    The code CRUNCH_PARALLEL is a parallel version of the CRUNCH code. CRUNCH code version 2.0 was previously released by LLNL (UCRL-CODE-200063). CRUNCH is a general-purpose reactive transport code developed by Carl Steefel (Steefel and Yabusaki, 1996). The code handles non-isothermal transport and reaction in one, two, and three dimensions. The reaction algorithm is generic in form, handling an arbitrary number of aqueous and surface complexation reactions as well as mineral dissolution/precipitation. A standardized database containing thermodynamic and kinetic data is used. The code includes advective, dispersive, and diffusive transport.
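    The governing balance that a reactive transport code discretizes can be illustrated with a minimal scalar analogue: one explicit, operator-split step of 1-D advection-dispersion with first-order decay. This is a teaching sketch only, not CRUNCH's algorithm (which handles full multicomponent geochemistry); all names and parameter values are illustrative.

```python
import math

def transport_react_step(c, v, d, k, dx, dt):
    """One explicit operator-split step of dc/dt = -v dc/dx + D d2c/dx2 - k c.
    Upwind advection; stable for v*dt/dx <= 1 and D*dt/dx**2 <= 0.5.
    Boundary cells are held fixed."""
    n = len(c)
    new = c[:]
    for i in range(1, n - 1):
        adv = -v * (c[i] - c[i - 1]) / dx                      # upwind difference
        disp = d * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx ** 2
        new[i] = (c[i] + dt * (adv + disp)) * math.exp(-k * dt)  # exact decay substep
    return new

# A unit solute pulse decays and spreads as it moves downstream
c = [0.0] * 20
c[2] = 1.0
for _ in range(30):
    c = transport_react_step(c, v=0.5, d=0.01, k=0.05, dx=1.0, dt=1.0)
```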

  13. Multiple resonant railgun power supply

    DOEpatents

    Honig, E.M.; Nunnally, W.C.

    1985-06-19

    A multiple repetitive resonant railgun power supply provides energy for repetitively propelling projectiles from a pair of parallel rails. A plurality of serially connected paired parallel rails are powered by similar power supplies. Each supply comprises an energy storage capacitor, a storage inductor to form a resonant circuit with the energy storage capacitor and a magnetic switch to transfer energy between the resonant circuit and the pair of parallel rails for the propelling of projectiles. The multiple serial operation permits relatively small energy components to deliver overall relatively large amounts of energy to the projectiles being propelled.
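    The resonant energy transfer described in this patent record rests on standard LC relations: a capacitor charged to V0 discharging through an inductor yields peak current V0*sqrt(C/L) at resonant frequency 1/(2*pi*sqrt(L*C)). The component values below are illustrative, not taken from the patent.

```python
import math

def resonant_transfer(v0, c, l):
    """Stored energy, peak current, and resonant frequency for an
    energy-storage capacitor C (farads) discharging through a storage
    inductor L (henries) in a lossless resonant circuit."""
    energy = 0.5 * c * v0 ** 2                        # joules in the capacitor
    i_peak = v0 * math.sqrt(c / l)                    # all energy in L at peak
    f_res = 1.0 / (2.0 * math.pi * math.sqrt(l * c))  # hertz
    return energy, i_peak, f_res

# Illustrative numbers: 10 kV on 2 mF through 5 uH
energy, i_peak, f_res = resonant_transfer(v0=10e3, c=2e-3, l=5e-6)
```

At the current peak the inductor holds the same energy the capacitor started with, which is the balance the magnetic switch exploits when routing energy to the rails.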

  14. Keldysh formalism for multiple parallel worlds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ansari, M.; Nazarov, Y. V., E-mail: y.v.nazarov@tudelft.nl

    We present a compact and self-contained review of the recently developed Keldysh formalism for multiple parallel worlds. The formalism has been applied to consistent quantum evaluation of the flows of informational quantities, in particular, to the evaluation of Renyi and Shannon entropy flows. We start with the formulation of the standard and extended Keldysh techniques in a single world in a form convenient for our presentation. We explain the use of Keldysh contours encompassing multiple parallel worlds. In the end, we briefly summarize the concrete results obtained with the method.

  15. Multiple resonant railgun power supply

    DOEpatents

    Honig, Emanuel M.; Nunnally, William C.

    1988-01-01

    A multiple repetitive resonant railgun power supply provides energy for repetitively propelling projectiles from a pair of parallel rails. A plurality of serially connected paired parallel rails are powered by similar power supplies. Each supply comprises an energy storage capacitor, a storage inductor to form a resonant circuit with the energy storage capacitor and a magnetic switch to transfer energy between the resonant circuit and the pair of parallel rails for the propelling of projectiles. The multiple serial operation permits relatively small energy components to deliver overall relatively large amounts of energy to the projectiles being propelled.

  16. Accurate secondary structure prediction and fold recognition for circular dichroism spectroscopy

    PubMed Central

    Micsonai, András; Wien, Frank; Kernya, Linda; Lee, Young-Ho; Goto, Yuji; Réfrégiers, Matthieu; Kardos, József

    2015-01-01

    Circular dichroism (CD) spectroscopy is a widely used technique for the study of protein structure. Numerous algorithms have been developed for the estimation of the secondary structure composition from the CD spectra. These methods often fail to provide acceptable results on α/β-mixed or β-structure–rich proteins. The problem arises from the spectral diversity of β-structures, which has hitherto been considered as an intrinsic limitation of the technique. The predictions are less reliable for proteins of unusual β-structures such as membrane proteins, protein aggregates, and amyloid fibrils. Here, we show that the parallel/antiparallel orientation and the twisting of the β-sheets account for the observed spectral diversity. We have developed a method called β-structure selection (BeStSel) for the secondary structure estimation that takes into account the twist of β-structures. This method can reliably distinguish parallel and antiparallel β-sheets and accurately estimates the secondary structure for a broad range of proteins. Moreover, the secondary structure components applied by the method are characteristic to the protein fold, and thus the fold can be predicted to the level of topology in the CATH classification from a single CD spectrum. By constructing a web server, we offer a general tool for a quick and reliable structure analysis using conventional CD or synchrotron radiation CD (SRCD) spectroscopy for the protein science research community. The method is especially useful when X-ray or NMR techniques fail. Using BeStSel on data collected by SRCD spectroscopy, we investigated the structure of amyloid fibrils of various disease-related proteins and peptides. PMID:26038575
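    The decomposition idea underlying secondary-structure estimation from CD spectra can be illustrated by writing an observed spectrum as a linear combination of component basis spectra and solving least squares for the fractions. The basis curves, wavelength grid, and fractions below are synthetic placeholders, not BeStSel's actual basis set.

```python
def lstsq_fractions(basis, spectrum):
    """Estimate component fractions by ordinary least squares: solve the
    normal equations (B^T B) f = B^T s by Gaussian elimination with
    partial pivoting, with B the matrix of basis spectra."""
    m, n = len(basis), len(spectrum)
    ata = [[sum(basis[i][k] * basis[j][k] for k in range(n))
            for j in range(m)] for i in range(m)]
    atb = [sum(basis[i][k] * spectrum[k] for k in range(n)) for i in range(m)]
    for col in range(m):                              # forward elimination
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    frac = [0.0] * m                                  # back substitution
    for r in range(m - 1, -1, -1):
        frac[r] = (atb[r] - sum(ata[r][c] * frac[c]
                                for c in range(r + 1, m))) / ata[r][r]
    return frac

# Synthetic "helix", "antiparallel sheet", "parallel sheet" basis spectra
helix = [1.0, 0.8, -0.5, -1.0, -0.9]
anti = [0.9, -0.2, -0.8, 0.3, 0.5]
para = [0.2, 0.6, -0.1, -0.4, 0.7]
basis = [helix, anti, para]
true_f = [0.5, 0.3, 0.2]
observed = [sum(f * b[k] for f, b in zip(true_f, basis)) for k in range(5)]
fractions = lstsq_fractions(basis, observed)
```

Distinguishing parallel from antiparallel sheets, as BeStSel does, amounts to giving the two orientations separate basis components rather than one pooled beta-sheet curve.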

  17. Influences on and Limitations of Classical Test Theory Reliability Estimates.

    ERIC Educational Resources Information Center

    Arnold, Margery E.

    It is incorrect to say "the test is reliable" because reliability is a function not only of the test itself, but of many factors. The present paper explains how different factors affect classical reliability estimates such as test-retest, interrater, internal consistency, and equivalent forms coefficients. Furthermore, the limits of classical test…

  18. Reliability Generalization of the Psychopathy Checklist Applied in Youthful Samples

    ERIC Educational Resources Information Center

    Campbell, Justin S.; Pulos, Steven; Hogan, Mike; Murry, Francie

    2005-01-01

    This study examines the average reliability of Hare Psychopathy Checklists (PCLs) adapted for use in samples of youthful offenders (aged 12 to 21 years). Two forms of reliability are examined: 18 alpha estimates of internal consistency and 18 intraclass correlation (two or more raters) estimates of interrater reliability. The results, an average…

  19. Alternate Forms Reliability of the Behavioral Relaxation Scale: Preliminary Results

    ERIC Educational Resources Information Center

    Lundervold, Duane A.; Dunlap, Angel L.

    2006-01-01

    Alternate forms reliability of the Behavioral Relaxation Scale (BRS; Poppen, 1998), a direct observation measure of relaxed behavior, was examined. A single BRS score, based on long duration observation (5-minute), has been found to be a valid measure of relaxation and is correlated with self-report and some physiological measures. Recently,…

  20. [Development of a Japanese version of a short form of the Profile of Emotional Competence].

    PubMed

    Nozaki, Yuki; Koyasu, Masuo

    2015-06-01

    Emotional competence refers to individual differences in the ability to appropriately identify, understand, express, regulate, and utilize one's own emotions and those of others. This study developed a Japanese version of a short form of the Profile of Emotional Competence, a measure that allows the comprehensive assessment of intra- and interpersonal emotional competence with fewer items, and investigated its reliability and validity. In Study 1, we selected items for a short version and compared it with the full scale in terms of scores, internal consistency, and validity. In Study 2, we examined the short form's test-retest reliability. Results supported the original two-factor model, and the measure had adequate reliability and validity. We discuss the construct validity and practical applicability of the short form of the Profile of Emotional Competence.

  1. A massively parallel computational approach to coupled thermoelastic/porous gas flow problems

    NASA Technical Reports Server (NTRS)

    Shia, David; Mcmanus, Hugh L.

    1995-01-01

    A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.
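    The "fully explicit form" that makes this scheme map well onto a massively parallel machine can be illustrated with a forward-Euler update of the 1-D heat equation: each new nodal value depends only on old neighboring values, so all nodes can be updated independently. The grid, material constants, and boundary conditions below are illustrative.

```python
def explicit_heat_step(T, alpha, dx, dt):
    """One fully explicit (forward-Euler) update of the 1-D heat
    equation dT/dt = alpha d2T/dx2. Stable for alpha*dt/dx**2 <= 0.5.
    Boundary values are held fixed (Dirichlet)."""
    r = alpha * dt / dx ** 2
    new = T[:]
    for i in range(1, len(T) - 1):
        # depends only on old values: trivially parallel across i
        new[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1])
    return new

# Cooling of an initially hot slab with both ends held at 0
T = [0.0] + [100.0] * 49 + [0.0]
for _ in range(500):
    T = explicit_heat_step(T, alpha=1.0e-5, dx=0.01, dt=4.0)
```

On a massively parallel computer, the loop over `i` becomes one update per processing element, which is why explicit formulations were chosen over implicit solves requiring global linear algebra.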

  2. Supercomputing on massively parallel bit-serial architectures

    NASA Technical Reports Server (NTRS)

    Iobst, Ken

    1985-01-01

    Research on the Goodyear Massively Parallel Processor (MPP) suggests that high-level parallel languages are practical and can be designed with powerful new semantics that allow algorithms to be efficiently mapped to the real machines. For the MPP these semantics include parallel/associative array selection for both dense and sparse matrices, variable precision arithmetic to trade accuracy for speed, micro-pipelined train broadcast, and conditional branching at the processing element (PE) control unit level. The preliminary design of a FORTRAN-like parallel language for the MPP has been completed and is being used to write programs to perform sparse matrix array selection, min/max search, matrix multiplication, Gaussian elimination on single bit arrays and other generic algorithms. A description is given of the MPP design. Features of the system and its operation are illustrated in the form of charts and diagrams.

  3. Multi-aircraft dynamics, navigation and operation

    NASA Astrophysics Data System (ADS)

    Houck, Sharon Wester

    Air traffic control stands on the brink of a revolution. Fifty years from now, we will look back and marvel that we ever flew by radio beacons and radar alone, much as we now marvel that early aviation pioneers flew by chronometer and compass alone. The microprocessor, satellite navigation systems, and air-to-air data links are the technical keys to this revolution. Many airports are near or at capacity now for at least portions of the day, making it clear that major increases in airport capacity will be required in order to support the projected growth in air traffic. This can be accomplished by adding airports, adding runways at existing airports, or increasing the capacity of existing runways. Technology that allows the use of ultra closely spaced (750 ft to 2500 ft) parallel approaches would greatly reduce the environmental impact of airport capacity increases. This research tackles the problem of multi-aircraft dynamics, navigation, and operation, specifically in the terminal area, and presents new findings on how ultra closely spaced parallel approaches may be accomplished. The underlying approach considers how multiple aircraft are flown in visual conditions, where spacing criteria are much less stringent, and then uses this data to study the critical parameters for collision avoidance during an ultra closely spaced parallel approach. Also included are experimental and analytical investigations of advanced guidance systems that are critical components of precision approaches. Together, these investigations form a novel approach to the design and analysis of parallel approaches for runways spaced less than 2500 ft apart. This research has concluded that it is technically feasible to reduce the required runway spacing during simultaneous instrument approaches to less than the current minimum of 3400 ft with the use of advanced navigation systems while maintaining the currently accepted levels of safety.
On a smooth day with both pilots flying a tunnel-in-the-sky display and being guided by a Category I LAAS, it is technically feasible to reduce the runway spacing to 1100 ft. If a Category I LAAS and an "intelligent auto-pilot" that executes both the approach and emergency escape maneuver are used, the technically achievable required runway spacing is reduced to 750 ft. Both statements presume full aircraft state information, including position, velocity, and attitude, is being reliably passed between aircraft at a rate equal to or greater than one Hz.

  4. A positive-definite form of bounce-averaged quasilinear velocity diffusion for the parallel inhomogeneity in a tokamak

    NASA Astrophysics Data System (ADS)

    Lee, Jungpyo; Smithe, David; Wright, John; Bonoli, Paul

    2018-02-01

    In this paper, the analytical form of the quasilinear diffusion coefficients is modified from the Kennel-Engelmann diffusion coefficients to guarantee the positive definiteness of its bounce average in a toroidal geometry. By evaluating the parallel inhomogeneity of plasmas and magnetic fields in the trajectory integral, we can ensure the positive definiteness and help illuminate some non-resonant toroidal effects in the quasilinear diffusion. When the correlation length of the plasma-wave interaction is comparable to the magnetic field variation length, the variation becomes important and the parabolic variation at the outer-midplane, the inner-midplane, and trapping tips can be evaluated by Airy functions. The new form allows the coefficients to include both resonant and non-resonant contributions, and the correlations between the consecutive resonances and in many poloidal periods. The positive-definite form is implemented in a wave code TORIC and we present an example for ITER using this form.

  5. Method for protecting chip corners in wet chemical etching of wafers

    DOEpatents

    Hui, Wing C.

    1994-01-01

    The present invention is a corner protection mask design that protects chip corners from undercutting during anisotropic etching of wafers. The corner protection masks abut the chip corner point and extend laterally from segments along one or both corner sides of the corner point, forming lateral extensions. The protection mask then extends from the lateral extensions, parallel to the direction of the corner side of the chip and parallel to scribe lines, thus conserving wafer space. Unmasked bomb regions strategically formed in the protection mask facilitate the break-up of the protection mask during etching. Corner protection masks are useful for chip patterns with deep grooves and either large or small chip mask areas. Auxiliary protection masks form nested concentric frames that etch from the center outward are useful for small chip mask patterns. The protection masks also form self-aligning chip mask areas. The present invention is advantageous for etching wafers with thin film windows, microfine and micromechanical structures, and for forming chip structures more elaborate than presently possible.

  6. Method for protecting chip corners in wet chemical etching of wafers

    DOEpatents

    Hui, W.C.

    1994-02-15

    The present invention is a corner protection mask design that protects chip corners from undercutting during anisotropic etching of wafers. The corner protection masks abut the chip corner point and extend laterally from segments along one or both corner sides of the corner point, forming lateral extensions. The protection mask then extends from the lateral extensions, parallel to the direction of the corner side of the chip and parallel to scribe lines, thus conserving wafer space. Unmasked bomb regions strategically formed in the protection mask facilitate the break-up of the protection mask during etching. Corner protection masks are useful for chip patterns with deep grooves and either large or small chip mask areas. Auxiliary protection masks form nested concentric frames that etch from the center outward are useful for small chip mask patterns. The protection masks also form self-aligning chip mask areas. The present invention is advantageous for etching wafers with thin film windows, microfine and micromechanical structures, and for forming chip structures more elaborate than presently possible. 63 figures.

  7. Development of An Assessment Test for An Anesthetic Machine.

    PubMed

    Tiviraj, Supinya; Yokubol, Bencharatana; Amornyotin, Somchai

    2016-05-01

    The study aimed to develop and assess the quality of an evaluation form used to evaluate nurse anesthetic trainees' skills in undertaking a pre-use check of an anesthetic machine. An evaluation form comprising 25 items was developed, informed by the guidelines published by national anesthesiologist societies and refined to reflect the anesthetic machine used in our institution. The items checked included the cylinder supplies and medical gas pipelines, vaporizer back bar, ventilator and anesthetic breathing system, scavenging system, and emergency back-up equipment. The authors sought the opinions of five experienced anesthetic trainers to judge the validity of the content. Inter-rater reliability was measured from two assessors' achievement scores for the performance of 36 nurse anesthetic trainees undertaking 15-minute anesthetic machine checks, and test-retest reliability was measured by correlating scores between two performances seven days apart. The five experienced anesthesiologists agreed that the evaluation form accurately reflected the objectives of anesthetic machine checking, equating to an index of congruency of 1.00. The inter-rater reliability of the independent assessors' scoring was 0.977 (p = 0.01) and the test-retest reliability was 0.883 (p = 0.01). The evaluation form proved to be a reliable and effective tool for assessing anesthetic nurse trainees' checking of an anesthetic machine before use. This evaluation form was brief, clear, and practical to use, and should help to improve anesthetic nurse education and patient safety.
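    A test-retest reliability coefficient of the kind reported here is, at its simplest, a correlation between scores from two occasions. The sketch below uses a plain Pearson correlation on hypothetical checklist scores; the data are invented for illustration and are not the study's.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient, used here as a simple stand-in
    for a test-retest reliability coefficient between two occasions."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical 25-point checklist scores from two occasions a week apart
first = [21, 23, 19, 24, 22, 20, 25, 18]
second = [20, 24, 19, 23, 23, 19, 25, 17]
r_test_retest = pearson_r(first, second)
```

Inter-rater reliability is often reported instead as an intraclass correlation, which additionally penalizes systematic differences between raters; the Pearson form above ignores such offsets.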

  8. The application of image processing in the measurement for three-light-axis parallelity of laser ranger

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Wang, Qianqian

    2008-12-01

    When a laser ranger is transported or used in field operations, the transmitting axis, receiving axis, and aiming axis may become non-parallel. Nonparallelism of the three light axes degrades the range-measuring ability or prevents the laser ranger from being operated accurately, so testing and adjusting three-light-axis parallelity during the production and maintenance of a laser ranger is important to ensure it can be used reliably. After comparing some common measurement methods for three-light-axis parallelity, this paper proposes a new measurement method using digital image processing. It uses a large-aperture off-axis paraboloid reflector to acquire images of the laser spot and the white-light cross line, and then processes the images on the LabVIEW platform. The center of the white-light cross line is found by the matching arithmetic in a LabVIEW DLL, and the center of the laser spot is found by grayscale transformation, binarization, and area filtering in turn. The software system can set the CCD, detect the off-axis paraboloid reflector, measure the parallelity of the transmitting axis and aiming axis, and control the attenuation device. The hardware system selects the SAA7111A, a programmable video decoding chip, to perform A/D conversion; a FIFO (first-in, first-out) is selected as the buffer, and a USB bus is used to transmit data to the PC. The three-light-axis parallelity is obtained from the position bias between the centers. A device based on this method is already in use, and its application shows that the method offers high precision, speed, and a high degree of automation.
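    The binarization-plus-centroid chain described for locating the laser spot can be sketched in a few lines: threshold the grayscale frame, then take the intensity-weighted centroid of the surviving pixels. This is a minimal illustration (the LabVIEW specifics and area filter are omitted); the frame data are synthetic.

```python
def spot_centroid(image, threshold):
    """Binarize a grayscale image at `threshold` and return the
    intensity-weighted centroid (cx, cy) of the above-threshold pixels,
    or None if no pixel survives thresholding."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v >= threshold:
                total += v
                sx += v * x
                sy += v * y
    if total == 0:
        return None
    return sx / total, sy / total

# Synthetic 5x5 frame with a bright spot centered at (x=3, y=2)
frame = [
    [0, 0, 0, 0, 0],
    [0, 0, 10, 20, 10],
    [0, 0, 20, 90, 20],
    [0, 0, 10, 20, 10],
    [0, 0, 0, 0, 0],
]
cx, cy = spot_centroid(frame, threshold=5)
```

The parallelity error then follows from the pixel offset between this centroid and the white-light cross-line center, scaled by the optical magnification.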

  9. Development of a short form of the Japanese version of the Interpersonal Relationship Inventory.

    PubMed

    Sumi, Katsunori

    2009-10-01

    The Interpersonal Relationship Inventory (Tilden, Nelson, & May, 1990a) is a 39-item self-report measure assessing three aspects (support, reciprocity, and conflict) of perceived social relationships. In this research, short forms of the Japanese version (Sumi, 2003) of the inventory were developed on the basis of data from two sources. For the item selection, data from 340 Japanese college students (148 women, 192 men; M age = 21.6 yr., SD = 1.6) who completed the original full form of the inventory were used to create three internally consistent short forms. The reliability and construct validity of the short forms were then examined by administering them to 513 college students (226 women, 287 men; M age = 19.9 yr., SD = 1.4). All the subscales of the short forms had acceptable internal consistency (alphas = .70-.90) and test-retest reliability (rs = .72-.81). Confirmatory factor analysis supported a three-factor structure for each short form. Scores on the subscales shared acceptable overlapping variance with the corresponding subscale scores of the original full form, and these scores were weakly but significantly correlated with scores for satisfaction with social support, loneliness, and perceived stress. All the short forms had acceptable reliability and construct validity.
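    The internal-consistency coefficients reported here (alphas = .70-.90) are Cronbach's alpha, which can be computed directly from item-level scores: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The scores below are invented for illustration.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency. `items` is a list of
    per-item score lists, one inner list per item, with the same
    respondents in the same order in each."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[p] for item in items) for p in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1.0 - item_var / var(totals))

# Hypothetical 3-item subscale answered by 6 respondents
item_scores = [
    [4, 3, 5, 2, 4, 3],
    [4, 2, 5, 3, 4, 2],
    [5, 3, 4, 2, 5, 3],
]
alpha = cronbach_alpha(item_scores)
```

Dropping items, as short-form construction does, lowers k and typically lowers alpha unless the retained items intercorrelate strongly, which is why item selection targets internal consistency.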

  10. A cephalometric study to establish the relationship of the occlusal plane to the three different ala-tragal lines and the Frankfort horizontal plane in different head forms.

    PubMed

    Subhas, S; Rupesh, P L; Devanna, R; Kumar, D R V; Paliwal, A; Solanki, P

    2017-04-01

    The aim of the study was to compare the relationship of the occlusal plane to three different ala-tragal lines, namely the superior, middle, and inferior lines, in individuals having different head forms, and its relation to the Frankfort horizontal plane. A total of 75 lateral cephalometric radiographs of subjects with natural dentition and a full complement of teeth, in the age group of 18-25 years, were screened and selected. A lateral cephalogram was made for each subject in an open-mouth position. Prior to making the lateral cephalogram, radiopaque markers were placed on the superior, middle, and inferior tragus points and on the inferior border of the ala of the nose. Cephalometric tracing was done over each cephalogram. In the mesiocephalic head form, the middle ala-tragal line was most parallel to the occlusal plane, with a mean angle of 1.96°. In the dolichocephalic head form, the superior ala-tragal line was most parallel to the occlusal plane, with a mean angle of 0.48°. In the brachycephalic head form, the middle ala-tragal line was most parallel to the occlusal plane, with a mean angle of 2.08°. The mean angulations of the occlusal plane to the Frankfort horizontal plane were 11.04°, 10.16°, and 10.60° in the mesiocephalic, dolichocephalic, and brachycephalic head forms, respectively. The study concludes that the middle ala-tragal line can be used as a reference for the mesiocephalic head form and the superior ala-tragal line for the dolichocephalic and brachycephalic head forms as a reference to establish the occlusal plane. Copyright © 2016. Published by Elsevier Masson SAS.
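    The angles reported in this record are angles between two lines digitized from landmark coordinates on a tracing. The sketch below computes such an angle from 2-D points; the landmark coordinates are hypothetical, chosen only to produce a small "near-parallel" angle like those in the results.

```python
import math

def angle_between(p1, p2, q1, q2):
    """Acute angle in degrees between the line through p1-p2 and the
    line through q1-q2, from 2-D landmark coordinates (x, y)."""
    a = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    b = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    deg = abs(math.degrees(a - b)) % 180.0
    return min(deg, 180.0 - deg)

# Hypothetical digitized points: ala and middle-tragus markers versus
# two points defining the occlusal plane
ala, mid_tragus = (10.0, 40.0), (80.0, 52.0)
occ_ant, occ_post = (15.0, 30.0), (85.0, 43.0)
angle = angle_between(ala, mid_tragus, occ_ant, occ_post)
```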

  11. Method of electroforming a rocket chamber

    NASA Technical Reports Server (NTRS)

    Fortini, A. (Inventor)

    1974-01-01

    A transpiration cooled rocket chamber is made by forming a porous metal wall on a suitably shaped mandrel. The porous wall may be made of sintered powdered metal, metal fibers sintered on the mandrel or wires woven onto the mandrel and then sintered to bond the interfaces of the wires. Intersecting annular and longitudinal ribs are then electroformed on the porous wall. An interchamber wall having orifices therein is then electroformed over the annular and longitudinal ribs. Parallel longitudinal ribs are then formed on the outside surface of the interchamber wall after which an annular jacket is electroformed over the parallel ribs to form distribution passages therewith. A feed manifold communicating with the distribution passages may be fabricated and welded to the rocket chamber or the feed manifold may be electroformed in place.

  12. 5-Methylpyrazine-2-carboxamide

    DOE PAGES

    Rillema, D. Paul; Senaratne, Nilmini K.; Moore, Curtis; ...

    2017-07-28

    The title compound, C6H7N3O, is nearly planar, with a dihedral angle of 2.14 (11)° between the pyrazine ring and the mean plane of the carboxamide group [C—C(=O)—N]. In the crystal, molecules are linked via pairs of N—H...O hydrogen bonds, forming inversion dimers with an R22(8) ring motif. These dimers are further linked by a pair of N—H...N hydrogen bonds, enclosing an R22(10) ring motif, and C—H...O hydrogen bonds, forming ribbons lying parallel to the ab plane. The ribbons are linked by offset π–π interactions [intercentroid distance = 3.759 (1) Å], forming two sets of mutually perpendicular slabs parallel to the (110) and (1-10) planes.

  13. Magnetosheath Filamentary Structures Formed by Ion Acceleration at the Quasi-Parallel Bow Shock

    NASA Technical Reports Server (NTRS)

    Omidi, N.; Sibeck, D.; Gutynska, O.; Trattner, K. J.

    2014-01-01

    Results from 2.5-D electromagnetic hybrid simulations show the formation of field-aligned, filamentary plasma structures in the magnetosheath. They begin at the quasi-parallel bow shock and extend far into the magnetosheath. These structures exhibit anticorrelated, spatial oscillations in plasma density and ion temperature. Closer to the bow shock, magnetic field variations associated with density and temperature oscillations may also be present. Magnetosheath filamentary structures (MFS) form primarily in the quasi-parallel sheath; however, they may extend to the quasi-perpendicular magnetosheath. They occur over a wide range of solar wind Alfvénic Mach numbers and interplanetary magnetic field directions. At lower Mach numbers with lower levels of magnetosheath turbulence, MFS remain highly coherent over large distances. At higher Mach numbers, magnetosheath turbulence decreases the level of coherence. Magnetosheath filamentary structures result from localized ion acceleration at the quasi-parallel bow shock and the injection of energetic ions into the magnetosheath. The localized nature of ion acceleration is tied to the generation of fast magnetosonic waves at and upstream of the quasi-parallel shock. The increased pressure in flux tubes containing the shock accelerated ions results in the depletion of the thermal plasma in these flux tubes and the enhancement of density in flux tubes void of energetic ions. This results in the observed anticorrelation between ion temperature and plasma density.

  14. Reliability analysis and initial requirements for FC systems and stacks

    NASA Astrophysics Data System (ADS)

    Åström, K.; Fontell, E.; Virtanen, S.

    In the year 2000, Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to market highly efficient, clean and cost-competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program, Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of SOFC technology. At Wärtsilä, methods have been implemented for analysing the system with respect to reliability and safety, as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method has been developed for assessing reliability and critical-failure predictability requirements for fuel cell stacks in a system consisting of several stacks. The method is based on a qualitative model of the stack configuration in which each stack can be in a functional, partially failed or critically failed state, each state having different failure rates and effects on system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical-failure predictability and operating strategy on system reliability and availability. An example configuration consisting of 5 × 5 stacks (a series of 5 sets of 5 parallel stacks) is analysed with respect to stack reliability requirements as a function of the predictability of critical failures and the Weibull shape factor of the failure rate distributions.
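    The 5 × 5 example configuration invites a back-of-the-envelope check. A minimal sketch, assuming independent stacks of equal reliability r and treating each parallel set as failed only when all five of its stacks fail; the paper's qualitative model, with partial and critical failure states, is richer than this:

```python
def series_parallel_reliability(r, n_parallel=5, n_series=5):
    """Reliability of n_series blocks in series, where each block is
    n_parallel independent stacks in parallel (a block works as long
    as at least one of its stacks works)."""
    block = 1.0 - (1.0 - r) ** n_parallel  # parallel block reliability
    return block ** n_series               # series chain of blocks

for r in (0.90, 0.99):
    print(r, series_parallel_reliability(r))
```

    Even modest per-stack reliability yields a highly reliable system under this idealized model, which is why the predictability of critical failures (not captured here) dominates the paper's analysis.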

  15. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE) viewpoint. To minimize constraint violations during the time integration process, penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference scheme to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was carried out. The DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.

  16. 14 CFR 29.1387 - Position light system dihedral angles.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... described in this section. (b) Dihedral angle L (left) is formed by two intersecting vertical planes, the... formed by two intersecting vertical planes, the first parallel to the longitudinal axis of the rotorcraft... longitudinal axis. (d) Dihedral angle A (aft) is formed by two intersecting vertical planes making angles of 70...

  17. 14 CFR 23.1387 - Position light system dihedral angles.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... described in this section. (b) Dihedral angle L (left) is formed by two intersecting vertical planes, the... formed by two intersecting vertical planes, the first parallel to the longitudinal axis of the airplane... longitudinal axis. (d) Dihedral angle A (aft) is formed by two intersecting vertical planes making angles of 70...

  18. Heating element support clip

    DOEpatents

    Sawyer, William C.

    1995-01-01

    An apparatus for supporting a heating element in a channel formed in a heater base is disclosed. A preferred embodiment includes a substantially U-shaped tantalum member. The U-shape is characterized by two substantially parallel portions of tantalum that each have an end connected to opposite ends of a base portion of tantalum. The parallel portions are each substantially perpendicular to the base portion and spaced apart a distance not larger than a width of the channel and not smaller than a width of a graphite heating element. The parallel portions each have a hole therein, and the centers of the holes define an axis that is substantially parallel to the base portion. An aluminum oxide ceramic retaining pin extends through the holes in the parallel portions and into a hole in a wall of the channel to retain the U-shaped member in the channel and to support the graphite heating element. The graphite heating element is confined by the parallel portions of tantalum, the base portion of tantalum, and the retaining pin. A tantalum tube surrounds the retaining pin between the parallel portions of tantalum.

  19. Heating element support clip

    DOEpatents

    Sawyer, W.C.

    1995-08-15

    An apparatus for supporting a heating element in a channel formed in a heater base is disclosed. A preferred embodiment includes a substantially U-shaped tantalum member. The U-shape is characterized by two substantially parallel portions of tantalum that each have an end connected to opposite ends of a base portion of tantalum. The parallel portions are each substantially perpendicular to the base portion and spaced apart a distance not larger than a width of the channel and not smaller than a width of a graphite heating element. The parallel portions each have a hole therein, and the centers of the holes define an axis that is substantially parallel to the base portion. An aluminum oxide ceramic retaining pin extends through the holes in the parallel portions and into a hole in a wall of the channel to retain the U-shaped member in the channel and to support the graphite heating element. The graphite heating element is confined by the parallel portions of tantalum, the base portion of tantalum, and the retaining pin. A tantalum tube surrounds the retaining pin between the parallel portions of tantalum. 6 figs.

  20. Claims about the Reliability of Student Evaluations of Instruction: The Ecological Fallacy Rides Again

    ERIC Educational Resources Information Center

    Morley, Donald D.

    2012-01-01

    The vast majority of the research on student evaluation of instruction has assessed the reliability of groups of courses and yielded either a single reliability coefficient for the entire group, or grouped reliability coefficients for each student evaluation of teaching (SET) item. This manuscript argues that these practices constitute a form of…

  1. Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data

    ERIC Educational Resources Information Center

    Dinno, Alexis

    2009-01-01

    Horn's parallel analysis (PA) is the method of consensus in the literature on empirical methods for deciding how many components/factors to retain. Different authors have proposed various implementations of PA. Horn's seminal 1965 article, a 1996 article by Thompson and Daniel, and a 2004 article by Hayton, Allen, and Scarpello all make assertions…
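    Horn's procedure itself is compact: retain the components whose observed eigenvalues exceed a percentile of eigenvalues obtained from random data of the same shape. A minimal sketch, assuming normally distributed random data (one distributional form; the article's point is precisely that this choice can matter):

```python
import numpy as np

def parallel_analysis(data, n_iter=200, percentile=95, seed=0):
    """Horn's parallel analysis: count components whose observed
    correlation-matrix eigenvalues exceed the given percentile of
    eigenvalues from standard-normal random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.empty((n_iter, k))
    for i in range(n_iter):
        x = rng.standard_normal((n, k))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]
    threshold = np.percentile(rand, percentile, axis=0)
    return int(np.sum(obs > threshold))

# Synthetic example: two correlated blocks of three variables each,
# so parallel analysis should retain two components.
rng = np.random.default_rng(1)
f = rng.standard_normal((300, 2))
data = np.hstack([f[:, [0]] + 0.5 * rng.standard_normal((300, 3)),
                  f[:, [1]] + 0.5 * rng.standard_normal((300, 3))])
print(parallel_analysis(data))
```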

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, S.; Zhang, D.; Paukstelis, P. J.

    DNA has proved to be an excellent material for nanoscale construction because complementary DNA duplexes are programmable and structurally predictable. However, in the absence of Watson–Crick pairings, DNA can be structurally more diverse. Here, we describe the crystal structures of d(ACTCGGATGAT) and the brominated derivative, d(AC BrUCGGA BrUGAT). These oligonucleotides form parallel-stranded duplexes with a crystallographically equivalent strand, resulting in the first examples of DNA crystal structures that contains four different symmetric homo base pairs. Two of the parallel-stranded duplexes are coaxially stacked in opposite directions and locked together to form a tetraplex through intercalation of the 5'-most A–A basemore » pairs between adjacent G–G pairs in the partner duplex. The intercalation region is a new type of DNA tertiary structural motif with similarities to the i-motif. 1H– 1H nuclear magnetic resonance and native gel electrophoresis confirmed the formation of a parallel-stranded duplex in solution. Finally, we modified specific nucleotide positions and added d(GAY) motifs to oligonucleotides and were readily able to obtain similar crystals. This suggests that this parallel-stranded DNA structure may be useful in the rational design of DNA crystals and nanostructures.« less

  3. On the suitability of the connection machine for direct particle simulation

    NASA Technical Reports Server (NTRS)

    Dagum, Leonard

    1990-01-01

    The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method was examined and reformulated in data parallel form. Some of the SPS algorithms can be directly translated to data parallel form, but several of the vectorizable algorithms have no direct data parallel equivalent, requiring the development of new, strictly data parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation, and a master/slave algorithm is developed to minimize communication cost in large table lookup. Validation of the method is undertaken through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure is provided of the performance of the Connection Machine for direct particle simulation. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad-based tradition of data parallel programming. An important outcome of this work has been new data parallel algorithms specifically of use for direct particle simulation but which also expand the data parallel diction.
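    The sort-based identification of collision candidates can be sketched in a data parallel style: bin particles into cells and sort by cell index so that same-cell particles become contiguous. This is a generic reconstruction of the idea, not the SPS algorithm itself (the cell scheme and names below are assumptions):

```python
import numpy as np

def collision_candidate_groups(positions, cell_size):
    """Bin 2-D particle positions into square cells and sort by a
    flattened cell index, so that collision candidates (particles in
    the same cell) end up contiguous in the sorted order."""
    cells = np.floor(positions / cell_size).astype(np.int64)
    nx = int(cells[:, 0].max()) + 1
    cell_id = cells[:, 1] * nx + cells[:, 0]    # flatten 2-D cell index
    order = np.argsort(cell_id, kind="stable")  # groups same-cell particles
    sorted_ids = cell_id[order]
    # start offset of each run of equal cell ids
    starts = np.flatnonzero(np.r_[True, sorted_ids[1:] != sorted_ids[:-1]])
    return order, sorted_ids, starts

pos = np.array([[0.1, 0.1], [0.9, 0.9], [0.2, 0.15], [1.6, 0.1]])
order, ids, starts = collision_candidate_groups(pos, cell_size=1.0)
print(order, ids, starts)  # particles 0, 1, 2 share cell 0
```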

  4. Parallel-SymD: A Parallel Approach to Detect Internal Symmetry in Protein Domains.

    PubMed

    Jha, Ashwani; Flurchick, K M; Bikdash, Marwan; Kc, Dukka B

    2016-01-01

    Internally symmetric proteins are proteins that have a symmetrical structure in their monomeric single-chain form. Around 10-15% of the protein domains can be regarded as having some sort of internal symmetry. In this regard, we previously published SymD (symmetry detection), an algorithm that determines whether a given protein structure has internal symmetry by attempting to align the protein to its own copy after the copy is circularly permuted by all possible numbers of residues. SymD has proven to be a useful algorithm to detect symmetry. In this paper, we present a new parallelized algorithm called Parallel-SymD for detecting symmetry of proteins on clusters of computers. The achieved speedup of the new Parallel-SymD algorithm scales well with the number of computing processors. Scaling is better for proteins with a larger number of residues. For a protein of 509 residues, a speedup of 63 was achieved on a parallel system with 100 processors.

  5. Parallel-SymD: A Parallel Approach to Detect Internal Symmetry in Protein Domains

    PubMed Central

    Jha, Ashwani; Flurchick, K. M.; Bikdash, Marwan

    2016-01-01

    Internally symmetric proteins are proteins that have a symmetrical structure in their monomeric single-chain form. Around 10–15% of the protein domains can be regarded as having some sort of internal symmetry. In this regard, we previously published SymD (symmetry detection), an algorithm that determines whether a given protein structure has internal symmetry by attempting to align the protein to its own copy after the copy is circularly permuted by all possible numbers of residues. SymD has proven to be a useful algorithm to detect symmetry. In this paper, we present a new parallelized algorithm called Parallel-SymD for detecting symmetry of proteins on clusters of computers. The achieved speedup of the new Parallel-SymD algorithm scales well with the number of computing processors. Scaling is better for proteins with a larger number of residues. For a protein of 509 residues, a speedup of 63 was achieved on a parallel system with 100 processors. PMID:27747230
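    A speedup of 63 on 100 processors can be read through Amdahl's law, S(p) = 1/(s + (1 − s)/p). The sketch below infers the serial fraction s consistent with that figure; s is not reported in the record, so this is purely illustrative:

```python
def amdahl_speedup(serial_fraction, p):
    """Amdahl's law: speedup on p processors given a serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

def implied_serial_fraction(speedup, p):
    """Invert Amdahl's law: the serial fraction consistent with an
    observed speedup on p processors."""
    return (p / speedup - 1.0) / (p - 1.0)

s = implied_serial_fraction(63.0, 100)
print(round(s, 4), round(amdahl_speedup(s, 100), 1))
```

    Under this reading, a serial fraction of well under 1% suffices to cap the speedup at 63, which is consistent with the record's observation that scaling improves for proteins with more residues (more parallelizable work per alignment).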

  6. Raising the Reliability of Forming Rolls by Alloying Their Core with Copper

    NASA Astrophysics Data System (ADS)

    Zhizhkina, N. A.

    2016-11-01

    The mechanical properties and the structure of forming rolls from cast irons of different compositions are studied. A novel iron including a copper additive that lowers its chilling and raises the homogeneity of the structure is suggested for the roll cores. The use of such iron should raise the reliability of the rolls in operation.

  7. A Test Reliability Analysis of an Abbreviated Version of the Pupil Control Ideology Form.

    ERIC Educational Resources Information Center

    Gaffney, Patrick V.

    A reliability analysis was conducted of an abbreviated, 10-item version of the Pupil Control Ideology Form (PCI), using the Cronbach's alpha technique (L. J. Cronbach, 1951) and the computation of the standard error of measurement. The PCI measures a teacher's orientation toward pupil control. Subjects were 168 preservice teachers from one private…
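    Cronbach's alpha, the statistic used in this reliability analysis, is straightforward to compute from a respondents-by-items score matrix. A minimal sketch with made-up Likert scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical 5 respondents x 4 Likert items (illustrative only)
x = [[4, 4, 3, 4], [2, 2, 2, 3], [5, 4, 5, 5], [3, 3, 2, 2], [4, 5, 4, 4]]
print(round(cronbach_alpha(x), 3))
```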

  8. Developing Form Assembly Specifications for Exams with Multiple Choice and Constructed Response Items: Balancing Reliability and Validity Concerns

    ERIC Educational Resources Information Center

    Hendrickson, Amy; Patterson, Brian; Ewing, Maureen

    2010-01-01

    The psychometric considerations and challenges associated with including constructed response items on tests are discussed along with how these issues affect the form assembly specifications for mixed-format exams. Reliability and validity, security and fairness, pretesting, content and skills coverage, test length and timing, weights, statistical…

  9. Measurement of impulsive choice in rats: Same and alternate form test-retest reliability and temporal tracking

    PubMed Central

    Peterson, Jennifer R.; Hill, Catherine C.; Kirkpatrick, Kimberly

    2016-01-01

    Impulsive choice is typically measured by presenting smaller-sooner (SS) versus larger-later (LL) rewards, with biases towards the SS indicating impulsivity. The current study tested rats on different impulsive choice procedures with LL delay manipulations to assess same-form and alternate-form test-retest reliability. In the systematic-GE procedure (Green & Estle, 2003), the LL delay increased after several sessions of training; in the systematic-ER procedure (Evenden & Ryan, 1996), the delay increased within each session; and in the adjusting-M procedure (Mazur, 1987), the delay changed after each block of trials within a session based on each rat’s choices in the previous block. In addition to measuring choice behavior, we also assessed temporal tracking of the LL delays using the median times of responding during LL trials. The two systematic procedures yielded similar results in both choice and temporal tracking measures following extensive training, whereas the adjusting procedure resulted in relatively more impulsive choices and poorer temporal tracking. Overall, the three procedures produced acceptable same form test-retest reliability over time, but the adjusting procedure did not show significant alternate form test-retest reliability with the other two procedures. The results suggest that systematic procedures may supply better measurements of impulsive choice in rats. PMID:25490901

  10. Parallel-vector unsymmetric Eigen-Solver on high performance computers

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Jiangning, Qin

    1993-01-01

    The popular QR algorithm for computing all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components of the QR algorithm, this study concluded that the reduction of an unsymmetric matrix to Hessenberg form (before applying the QR algorithm itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples of several test cases indicate that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to Hessenberg form offers computational advantages over the existing algorithm. The time saving obtained by the proposed methods increases with problem size.
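    The Hessenberg reduction step can be illustrated directly: a similarity transform built from Householder reflections zeroes the entries below the first subdiagonal while preserving the eigenvalue spectrum. A minimal serial sketch (the paper's parallel-vector variant is not reproduced here):

```python
import numpy as np

def hessenberg_reduce(A):
    """Reduce a square matrix to upper Hessenberg form with Householder
    reflections. Each reflection is applied from both sides, so the
    result is similar to A and has the same eigenvalues."""
    H = np.array(A, dtype=float)
    n = H.shape[0]
    for k in range(n - 2):
        x = H[k + 1:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            continue  # column already reduced
        v /= norm_v
        # P = I - 2 v v^T, applied as P H (rows) then H P (columns)
        H[k + 1:, :] -= 2.0 * np.outer(v, v @ H[k + 1:, :])
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
    return H

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))  # unsymmetric test matrix
H = hessenberg_reduce(A)
print(np.allclose(np.tril(H, -2), 0.0, atol=1e-10))   # Hessenberg shape
print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                  np.sort_complex(np.linalg.eigvals(H))))  # same spectrum
```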

  11. Parallel and perpendicular velocity sheared flows driven tripolar vortices in an inhomogeneous electron-ion quantum magnetoplasma

    NASA Astrophysics Data System (ADS)

    Mirza, Arshad M.; Masood, W.

    2011-12-01

    Nonlinear equations governing the dynamics of finite amplitude drift-ion acoustic-waves are derived by taking into account sheared ion flows parallel and perpendicular to the ambient magnetic field in a quantum magnetoplasma comprised of electrons and ions. It is shown that stationary solution of the nonlinear equations can be represented in the form of a tripolar vortex for specific profiles of the equilibrium sheared flows. The tripolar vortices are, however, observed to form on very short scales in dense quantum plasmas. The relevance of the present investigation with regard to dense astrophysical environments is also pointed out.

  12. The utilization of parallel processing in solving the inviscid form of the average-passage equation system for multistage turbomachinery

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.

    1987-01-01

    A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to in this paper are specific to the Cray X-MP line of computers and its associated SSD (Solid-state Storage Device). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.

  13. NUCLEAR REACTOR FUEL ELEMENT

    DOEpatents

    Wheelock, C.W.; Baumeister, E.B.

    1961-09-01

    A reactor fuel element utilizing fissionable fuel materials in plate form is described. This fuel element consists of bundles of fuel-bearing plates. The bundles are stacked inside of a tube which forms the shell of the fuel element. The plates each have longitudinal fins running parallel to the direction of coolant flow, and interspersed among and parallel to the fins are ribs which position the plates relative to each other and to the fuel element shell. The plate bundles are held together by thin bands or wires. The extended surface increases the heat transfer capabilities of a fuel element by a factor of 3 or more over those of a simple flat plate.

  14. Querying databases of trajectories of differential equations 2: Index functions

    NASA Technical Reports Server (NTRS)

    Grossman, Robert

    1991-01-01

    Suppose that a large number of parameterized trajectories γ of a dynamical system evolving in R^N are stored in a database. Let η ⊂ R^N denote a parameterized path in Euclidean space, and let ‖·‖ denote a norm on the space of paths. Data structures and indices for trajectories are defined and algorithms are given to answer queries of the following forms: Query 1. Given a path η, determine whether η occurs as a subtrajectory of any trajectory γ from the database. If so, return the trajectory; otherwise, return null. Query 2. Given a path η, return the trajectory γ from the database which minimizes the norm ‖η − γ‖.
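    Query 2 has an obvious brute-force baseline against which any index can be compared: evaluate ‖η − γ‖ for every stored trajectory and take the minimizer. A minimal sketch using the sup norm on trajectories sampled at common parameter values (the norm choice and sampling are assumptions, not details from the record):

```python
import numpy as np

def nearest_trajectory(eta, database):
    """Brute-force Query 2: return the index of the trajectory gamma
    minimizing the sup norm ||eta - gamma||, plus that distance.
    All trajectories are assumed sampled at the same parameter values."""
    dists = [np.max(np.abs(np.asarray(g) - eta)) for g in database]
    return int(np.argmin(dists)), float(min(dists))

t = np.linspace(0.0, 1.0, 50)
db = [np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t ** 2]
eta = np.sin(2 * np.pi * t) + 0.05  # query path: shifted copy of db[0]
idx, d = nearest_trajectory(eta, db)
print(idx, round(d, 3))
```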

  15. Requirements for implementing real-time control functional modules on a hierarchical parallel pipelined system

    NASA Technical Reports Server (NTRS)

    Wheatley, Thomas E.; Michaloski, John L.; Lumia, Ronald

    1989-01-01

    Analysis of a robot control system leads to a broad range of processing requirements. One fundamental requirement of a robot control system is a microcomputer system that provides sufficient processing capability. The use of multiple processors in a parallel architecture is beneficial for a number of reasons, including better cost performance, modular growth, increased reliability through replication, and flexibility for testing alternate control strategies via different partitioning. A survey of the progression from low-level control synchronizing primitives to higher-level communication tools is presented. The system communication and control mechanisms of existing robot control systems are compared to the hierarchical control model. The impact of this design methodology on current robot control systems is explored.

  16. QCM-D on mica for parallel QCM-D-AFM studies.

    PubMed

    Richter, Ralf P; Brisson, Alain

    2004-05-25

    Quartz crystal microbalance with dissipation monitoring (QCM-D) has developed into a recognized method to study adsorption processes in liquid, such as the formation of supported lipid bilayers and protein adsorption. However, the large intrinsic roughness of currently used gold-coated or silica-coated QCM-D sensors limits parallel structural characterization by atomic force microscopy (AFM). We present a method for coating QCM-D sensors with thin mica sheets operating in liquid with high stability and sensitivity. We define criteria to objectively assess the reliability of the QCM-D measurements and demonstrate that the mica-coated sensors can be used to follow the formation of supported lipid membranes and subsequent protein adsorption. This method allows combining QCM-D and AFM investigations on identical supports, providing detailed physicochemical and structural characterization of model membranes.

  17. Applications of Massive Mathematical Computations

    DTIC Science & Technology

    1990-04-01

    particles from the first principles of QCD. This problem is under intensive numerical study using special purpose parallel supercomputers in...several places around the world. The method used here is Monte Carlo integration on fixed 3-D plus time lattices. Reliable results are still years...mathematical and theoretical physics, but its most promising applications are in the numerical realization of QCD computations. Our programs for the solution

  18. Active Cells for Multifunctional Structures

    DTIC Science & Technology

    2014-09-24

    techniques to explore a variety of cell designs. • Designed a simplified active cell using Nitinol as the actuation method and relying on Joule heating...for contraction of the cell. • Developed manufacturing techniques for reliably creating Nitinol spring coils in a variety of diameters and gauges...design of the active cells to maximize the stroke length of the active cells by tuning the stiffness of a passive spring in parallel with the Nitinol

  19. Using a Multivariate Multilevel Polytomous Item Response Theory Model to Study Parallel Processes of Change: The Dynamic Association between Adolescents' Social Isolation and Engagement with Delinquent Peers in the National Youth Survey

    ERIC Educational Resources Information Center

    Hsieh, Chueh-An; von Eye, Alexander A.; Maier, Kimberly S.

    2010-01-01

    The application of multidimensional item response theory models to repeated observations has demonstrated great promise in developmental research. It allows researchers to take into consideration both the characteristics of item response and measurement error in longitudinal trajectory analysis, which improves the reliability and validity of the…

  20. Parallel Ada benchmarks for the SVMS

    NASA Technical Reports Server (NTRS)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through its tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R&D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures and to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools for the development of the SVMS architecture.

  1. Distinct lateral inhibitory circuits drive parallel processing of sensory information in the mammalian olfactory bulb

    PubMed Central

    Geramita, Matthew A; Burton, Shawn D; Urban, Nathan N

    2016-01-01

    Splitting sensory information into parallel pathways is a common strategy in sensory systems. Yet, how circuits in these parallel pathways are composed to maintain or even enhance the encoding of specific stimulus features is poorly understood. Here, we have investigated the parallel pathways formed by mitral and tufted cells of the olfactory system in mice and characterized the emergence of feature selectivity in these cell types via distinct lateral inhibitory circuits. We find differences in activity-dependent lateral inhibition between mitral and tufted cells that likely reflect newly described differences in the activation of deep and superficial granule cells. Simulations show that these circuit-level differences allow mitral and tufted cells to best discriminate odors in separate concentration ranges, indicating that segregating information about different ranges of stimulus intensity may be an important function of these parallel sensory pathways. DOI: http://dx.doi.org/10.7554/eLife.16039.001 PMID:27351103

  2. Psychometric Evaluation of the Medical Outcome Study (MOS) Social Support Survey Among Malay Postpartum Women in Kedah, North West of Peninsular Malaysia

    PubMed Central

    Mahmud, Wan Mohd Rushidi Wan; Awang, Amir; Mohamed, Mahmood Nazar

    2004-01-01

    The Malay version of the Medical Outcome Study (MOS) Social Support Survey was validated among a sample of postpartum Malay women attending selected health centers in Kedah, North West of Peninsular Malaysia. A total of 215 women between 4 and 12 weeks postpartum were recruited for the validation study. They were given questionnaires on socio-demography, the Malay versions of the MOS Social Support Survey, the Edinburgh Postnatal Depression Scale (EPDS) and the 21-item Beck Depression Inventory-II (BDI-II). Thirty of the women, who were bilingual, were also given the original English version of the instrument. A week later, these women were again given the Malay version of the MOS Social Support Survey. The scale displayed good internal consistency (Cronbach's alpha = 0.93), parallel form reliability (0.98) and test-retest reliability (0.97) (Spearman's rho; p < 0.01). The negative correlations of the overall support index (total social support measure) with the Malay versions of the EPDS and BDI-II confirmed its validity. Extraction of the 19 items (item 2 to item 20) from the MOS Social Support Survey using principal axis factoring with direct oblimin rotation converged into 3 dimensions of functional social support (informational, affectionate/positive social interaction and instrumental support) with reliability coefficients of 0.91, 0.83 and 0.75, respectively. The overall support index also displayed low but significant correlations with item 1, which represents a single measure of structural social support in the instrument (p < 0.01). The Malay version of the MOS Social Support Survey demonstrated good psychometric properties in measuring social support among a sample of Malay postpartum women attending selected health centers in Kedah, North West of Peninsular Malaysia, and it could be used as a simple instrument in primary care settings. PMID:22973124
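    Test-retest reliability of the kind reported here is Spearman's rho between two administrations, i.e., the Pearson correlation of the ranks. A minimal sketch with hypothetical paired total scores (not the study's data):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: the Pearson correlation of the
    ranks, with average ranks assigned to ties."""
    def ranks(a):
        a = np.asarray(a, dtype=float)
        order = np.argsort(a, kind="stable")
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)
        for v in np.unique(a):          # average ranks over ties
            mask = a == v
            r[mask] = r[mask].mean()
        return r
    return float(np.corrcoef(ranks(x), ranks(y))[0, 1])

# Hypothetical total scores at first administration and one week later
test = [57, 61, 44, 70, 52, 66, 48, 59]
retest = [55, 63, 46, 69, 50, 68, 47, 64]
print(round(spearman_rho(test, retest), 3))
```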

  3. Design, analysis, operation, and advanced control of hybrid renewable energy systems

    NASA Astrophysics Data System (ADS)

    Whiteman, Zachary S.

Because using non-renewable energy systems (e.g., coal-powered co-generation power plants) to generate electricity is an unsustainable, environmentally hazardous practice, it is important to develop cost-effective and reliable renewable energy systems, such as photovoltaics (PVs), wind turbines (WTs), and fuel cells (FCs). Non-renewable energy systems, however, are currently less expensive than individual renewable energy systems (IRESs). Furthermore, IRESs based on intermittent natural resources (e.g., solar irradiance and wind) are incapable of meeting continuous energy demands. Such shortcomings can be mitigated by judiciously combining two or more complementary IRESs to form a hybrid renewable energy system (HRES). Although previous research efforts focused on the design, operation, and control of HRESs have proven useful, no prior HRES research endeavor has taken a systematic and comprehensive approach towards establishing guidelines by which HRESs should be designed, operated, and controlled. The overall goal of this dissertation, therefore, is to establish the principles governing the design, operation, and control of HRESs, resulting in cost-effective and reliable energy solutions for stationary and mobile applications. To achieve this goal, we developed and demonstrated four separate HRES principles. Rational selection of HRES type: HRES components and their sizes should be rationally selected using knowledge of component costs, availability of renewable energy resources, and expected power demands of the application. HRES design: by default, the components of a HRES should be arranged in parallel for increased efficiency and reliability; however, a series HRES design may be preferred depending on the operational considerations of the HRES components. HRES control strategy selection: the choice of HRES control strategy depends on the dynamics of HRES components, their operational considerations, and the practical limitations of the HRES end-use. HRES data-driven control: information-rich data should be used to assist in the intelligent coordination of HRES components in meeting operating objectives when additional computation can be afforded and significant benefits can be realized.

  4. Searching for globally optimal functional forms for interatomic potentials using genetic programming with parallel tempering.

    PubMed

    Slepoy, A; Peters, M D; Thompson, A P

    2007-11-30

    Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. Copyright (c) 2007 Wiley Periodicals, Inc.
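The replica-exchange ingredient of the search can be sketched in miniature. The toy below is not the authors' implementation: `energy` is a stand-in double-well "fitting error" rather than a force-field fit, and the numbers are arbitrary. It shows Metropolis moves within each replica plus the standard parallel-tempering swap rule, accept with probability min(1, exp[(1/T_i - 1/T_j)(E_i - E_j)]):

```python
import math
import random

def energy(x):
    """Toy double-well objective; stands in for how badly a candidate
    functional form reproduces the reference configuration energies."""
    return (x * x - 1.0) ** 2

def parallel_tempering(temps, steps=2000, seed=1):
    """Metropolis sampling in each replica plus replica-exchange swaps."""
    rng = random.Random(seed)
    xs = [rng.uniform(-2.0, 2.0) for _ in temps]
    for _ in range(steps):
        for i, T in enumerate(temps):          # Metropolis move per replica
            cand = xs[i] + rng.gauss(0.0, 0.3)
            dE = energy(cand) - energy(xs[i])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                xs[i] = cand
        i = rng.randrange(len(temps) - 1)      # try swapping an adjacent pair
        delta = (1 / temps[i] - 1 / temps[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
        if delta >= 0 or rng.random() < math.exp(delta):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

replicas = parallel_tempering([0.05, 0.2, 1.0])
print(replicas[0])  # coldest replica settles near a minimum (x near +/-1)
```

Hot replicas cross barriers freely and, via the swaps, hand good candidates down to cold replicas, which refine them; in the paper the "moves" act on functional forms and parameters rather than a scalar x.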

  5. Integrated protocol for reliable and fast quantification and documentation of electrophoresis gels.

    PubMed

    Rehbein, Peter; Schwalbe, Harald

    2015-06-01

    Quantitative analysis of electrophoresis gels is an important part in molecular cloning, as well as in protein expression and purification. Parallel quantifications in yield and purity can be most conveniently obtained from densitometric analysis. This communication reports a comprehensive, reliable and simple protocol for gel quantification and documentation, applicable for single samples and with special features for protein expression screens. As major component of the protocol, the fully annotated code of a proprietary open source computer program for semi-automatic densitometric quantification of digitized electrophoresis gels is disclosed. The program ("GelQuant") is implemented for the C-based macro-language of the widespread integrated development environment of IGOR Pro. Copyright © 2014 Elsevier Inc. All rights reserved.
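Densitometric quantification of a lane reduces to integrating background-subtracted band intensities; a minimal Python sketch (GelQuant itself is an IGOR Pro macro, and the toy profile and flat-background model here are assumptions for illustration only):

```python
def quantify_lane(profile, threshold=10.0):
    """Integrate contiguous above-threshold runs of a 1-D densitometric
    lane profile (flat-background model); returns the band integrals and
    the purity of the dominant band (its share of total integrated signal)."""
    bands, current = [], 0.0
    for v in profile:
        if v > threshold:
            current += v - threshold      # subtract flat background estimate
        elif current:
            bands.append(current)         # close the band at a background pixel
            current = 0.0
    if current:
        bands.append(current)
    total = sum(bands)
    purity = max(bands) / total if total else 0.0
    return bands, purity

# Toy lane: one strong band (target protein) and one faint contaminant
bands, purity = quantify_lane([0, 0, 20, 30, 20, 0, 0, 15, 0])
print(bands, purity)  # -> [40.0, 5.0] and ~0.89
```

Yield then follows by comparing a band integral against a calibration standard run on the same gel.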

  6. The ventriloquist in periphery: impact of eccentricity-related reliability on audio-visual localization.

    PubMed

    Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier

    2013-10-28

The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimulus eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. Altogether, these results show that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eye or head position) influence how this information is merged, and therefore determine the perceptual outcome.

  7. The use of parallel imaging for MRI assessment of knees in children and adolescents.

    PubMed

    Doria, Andrea S; Chaudry, Gulraiz A; Nasui, Cristina; Rayner, Tammy; Wang, Chenghua; Moineddin, Rahim; Babyn, Paul S; White, Larry M; Sussman, Marshall S

    2010-03-01

Parallel imaging provides faster scanning at the cost of reduced signal-to-noise ratio (SNR) and increased artifacts. To compare the diagnostic performance of two parallel MRI protocols (PPs) for assessment of pathologic knees using an 8-channel knee coil (reference standard, conventional protocol [CP]) and to characterize the SNR losses associated with parallel imaging. Two radiologists blindly interpreted 1.5 Tesla knee MRI images in 21 children (mean 13 years, range 9-18 years) with clinical indications for an MRI scan. Sagittal proton density and T2-W fat-saturated FSE, axial T2-W fat-saturated FSE, and coronal T1-W (NEX of 1, 1, 1) images were obtained with both CP and PP. Images were read for soft tissue and osteochondral findings. There was a 75% decrease in acquisition time using PP in comparison to CP. The CP and PP protocols fell within excellent or the upper limits of substantial agreement: CP, kappa coefficient, 0.81 (95% CIs, 0.73-0.89); PP, 0.80-0.81 (0.73-0.89). The sensitivity of the two PPs was similar for assessment of soft (0.98-1.00) and osteochondral (0.89-0.94) tissues. Phantom data indicated SNR ratios of 1.67, 1.6, and 1.51 (axial, sagittal, and coronal planes) between CP and PP scans. Parallel MRI provides a reliable assessment for pediatric knees in a significantly reduced scan time without affecting the diagnostic performance of MRI.

  8. Parallel-plate heat pipe apparatus having a shaped wick structure

    DOEpatents

    Rightley, Michael J.; Adkins, Douglas R.; Mulhall, James J.; Robino, Charles V.; Reece, Mark; Smith, Paul M.; Tigges, Chris P.

    2004-12-07

    A parallel-plate heat pipe is disclosed that utilizes a plurality of evaporator regions at locations where heat sources (e.g. semiconductor chips) are to be provided. A plurality of curvilinear capillary grooves are formed on one or both major inner surfaces of the heat pipe to provide an independent flow of a liquid working fluid to the evaporator regions to optimize heat removal from different-size heat sources and to mitigate the possibility of heat-source shadowing. The parallel-plate heat pipe has applications for heat removal from high-density microelectronics and laptop computers.

  9. Validity and reliability of the robotic objective structured assessment of technical skills

    PubMed Central

    Siddiqui, Nazema Y.; Galloway, Michael L.; Geller, Elizabeth J.; Green, Isabel C.; Hur, Hye-Chun; Langston, Kyle; Pitter, Michael C.; Tarr, Megan E.; Martino, Martin A.

    2015-01-01

Objective: Objective structured assessments of technical skills (OSATS) have been developed to measure the skill of surgical trainees. Our aim was to develop an OSATS specifically for trainees learning robotic surgery. Study Design: This is a multi-institutional study in eight academic training programs. We created an assessment form to evaluate robotic surgical skill through five inanimate exercises. Obstetrics/gynecology, general surgery, and urology residents, fellows, and faculty completed five robotic exercises on a standard training model. Study sessions were recorded and randomly assigned to three blinded judges who scored performance using the assessment form. Construct validity was evaluated by comparing scores between participants with different levels of surgical experience; inter- and intra-rater reliability were also assessed. Results: We evaluated 83 residents, 9 fellows, and 13 faculty, totaling 105 participants; 88 (84%) were from obstetrics/gynecology. Our assessment form demonstrated construct validity, with faculty and fellows performing significantly better than residents (mean scores: 89 ± 8 faculty; 74 ± 17 fellows; 59 ± 22 residents, p<0.01). In addition, participants with more robotic console experience scored significantly higher than those with fewer prior console surgeries (p<0.01). R-OSATS demonstrated good inter-rater reliability across all five drills (mean Cronbach's α: 0.79 ± 0.02). Intra-rater reliability was also high (mean Spearman's correlation: 0.91 ± 0.11). Conclusions: We developed an assessment form for robotic surgical skill that demonstrates construct validity and inter- and intra-rater reliability. When paired with standardized robotic skill drills, this form may be useful to distinguish between levels of trainee performance. PMID:24807319

  10. Peer Bullying among High School Students: Turkish Version of Bullying Scale

    ERIC Educational Resources Information Center

    Arslan, Nihan

    2017-01-01

The aim of this study was to conduct the reliability and validity studies of the Turkish version of the Forms of Bullying Scale (FBS; Shaw et al., 2013). The Turkish form of the scale was administered to 357 high school students. The scale was examined through reliability analysis and confirmatory factor analysis within the scope of the adaptation study.…

  11. A Validation Study of the Dutch Childhood Trauma Questionnaire-Short Form: Factor Structure, Reliability, and Known-Groups Validity

    ERIC Educational Resources Information Center

    Thombs, Brett D.; Bernstein, David P.; Lobbestael, Jill; Arntz, Arnoud

    2009-01-01

    Objective: The 28-item Childhood Trauma Questionnaire-Short Form (CTQ-SF) has been translated into at least 10 different languages. The validity of translated versions of the CTQ-SF, however, has generally not been examined. The objective of this study was to investigate the factor structure, internal consistency reliability, and known-groups…

  12. Bruininks-Oseretsky Test of Motor Proficiency: Further Verification with 3- to 5- yr. -old Children.

    ERIC Educational Resources Information Center

    Beitel, Patricia A.; Mead, Barbara J.

    1982-01-01

The Bruininks-Oseretsky Test of Motor Proficiency was evaluated to determine test-retest reliability and whether there were presensitizing effects at retest for four- to five-year-olds. Test reliability was significantly high. No significant test sensitization of the short form to retesting with the short form or subtests was found. (Author/RD)

  13. Formation of interconnections to microfluidic devices

    DOEpatents

    Matzke, Carolyn M [Los Lunas, NM; Ashby, Carol I. H. [Edgewood, NM; Griego, Leonardo [Tijeras, NM

    2003-07-29

    A method is disclosed to form external interconnections to a microfluidic device for coupling of a fluid or light or both into a microchannel of the device. This method can be used to form optical or fluidic interconnections to microchannels previously formed on a substrate, or to form both the interconnections and microchannels during the same process steps. The optical and fluidic interconnections are formed parallel to the plane of the substrate, and are fluid tight.

  14. Validity and reliability of the NAB Naming Test.

    PubMed

    Sachs, Bonnie C; Rush, Beth K; Pedraza, Otto

    2016-05-01

    Confrontation naming is commonly assessed in neuropsychological practice, but few standardized measures of naming exist and those that do are susceptible to the effects of education and culture. The Neuropsychological Assessment Battery (NAB) Naming Test is a 31-item measure used to assess confrontation naming. Despite adequate psychometric information provided by the test publisher, there has been limited independent validation of the test. In this study, we investigated the convergent and discriminant validity, internal consistency, and alternate forms reliability of the NAB Naming Test in a sample of adults (Form 1: n = 247, Form 2: n = 151) clinically referred for neuropsychological evaluation. Results indicate adequate-to-good internal consistency and alternate forms reliability. We also found strong convergent validity as demonstrated by relationships with other neurocognitive measures. We found preliminary evidence that the NAB Naming Test demonstrates a more pronounced ceiling effect than other commonly used measures of naming. To our knowledge, this represents the largest published independent validation study of the NAB Naming Test in a clinical sample. Our findings suggest that the NAB Naming Test demonstrates adequate validity and reliability and merits consideration in the test arsenal of clinical neuropsychologists.

  15. 78 FR 10638 - Proposed Amendment to the Information Collection Requirements of Prohibited Transaction Exemption...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-14

    ... specified Internet site.\\3\\ \\1\\ See 74 FR 4546 (January 26, 2009). The final rule adopted, among other things, parallel amendments to SEC Form N-1A (the registration form for mutual funds) and to Rule 498...

  16. Local Norms and Test Characteristics for Selected Forms of the M.A.A. Placement Test.

    ERIC Educational Resources Information Center

    Melancon, Janet G.; Thompson, Bruce

    The psychometric integrity of selected items from the Mathematics Association of America (MAA) placement tests for college students was investigated. Two alternative and parallel versions of the test were developed (Form A and Form B) for this study. Data for 539 students seeking admission into an undergraduate mathematics curriculum at a private…

  17. 77 FR 73369 - Approval and Promulgation of Air Quality Implementation Plans; State of Florida; Regional Haze...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-10

    ... for parallel processing and re-submitted in final form as part of the State's September 17, 2012... characters, any form of encryption, and be free of any defects or viruses. For additional information about... the Internet and will be publicly available only in hard copy form. Publicly available docket...

  18. Linking Competencies in Educational Settings and Measuring Growth. Research Report. ETS RR-06-12

    ERIC Educational Resources Information Center

    von Davier, Alina A.; Carstensen, Claus H.; von Davier, Matthias

    2006-01-01

Measuring and linking competencies require special instruments, special data collection designs, and special statistical models. The measurement instruments are tests or test forms, which can be used in the following situations: The same test can be given repeatedly; two or more parallel test forms (i.e., forms intended to be similar in…

  19. Laser weld jig. [Patent application

    DOEpatents

    Van Blarigan, P.; Haupt, D.L.

    1980-12-05

A system is provided for welding a workpiece along a predetermined weld line that may be of irregular shape, which includes the step of forming a lip on the workpiece to extend parallel to the weld line, and moving the workpiece by engaging the lip between a pair of rotatable members. Rotation of one of the members at a constant speed causes the workpiece to move so that all points on the weld line sequentially pass a fixed point in space at a constant speed, so that a laser welding beam can be directed at that fixed point to form a weld along the weld line. The workpiece can include a reusable jig forming the lip, with the jig constructed to detachably hold parts to be welded at a position wherein the weld line of the parts extends parallel to the lip on the jig.

  20. NEUTRONIC REACTORS

    DOEpatents

    Anderson, J.B.

    1960-01-01

    A reactor is described which comprises a tank, a plurality of coaxial steel sleeves in the tank, a mass of water in the tank, and wire grids in abutting relationship within a plurality of elongated parallel channels within the steel sleeves, the wire being provided with a plurality of bends in the same plane forming adjacent parallel sections between bends, and the sections of adjacent grids being normally disposed relative to each other.

  1. Item Selection for the Development of Parallel Forms from an IRT-Based Seed Test Using a Sampling and Classification Approach

    ERIC Educational Resources Information Center

    Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan

    2012-01-01

    Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…

  2. Big data driven cycle time parallel prediction for production planning in wafer manufacturing

    NASA Astrophysics Data System (ADS)

    Wang, Junliang; Yang, Jungang; Zhang, Jie; Wang, Xiaoxi; Zhang, Wenjun Chris

    2018-07-01

Cycle time forecasting (CTF) is one of the most crucial issues for production planning to maintain high delivery reliability in semiconductor wafer fabrication systems (SWFS). This paper proposes a novel data-intensive cycle time (CT) prediction system with parallel computing to rapidly forecast the CT of wafer lots with large datasets. First, a density-peak-based radial basis function network (DP-RBFN) is designed to forecast the CT with the diverse and agglomerative CT data. Second, a network learning method based on a clustering technique is proposed to determine the density peaks. Third, a parallel computing approach for network training is proposed in order to speed up the training process with large-scale CT data. Finally, an experiment on an SWFS is presented, which demonstrates that the proposed CTF system can not only speed up the training process of the model but also outperform the radial basis function network, the back-propagation network, and multivariate-regression-based CTF methods in terms of mean absolute deviation and standard deviation.
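The DP-RBFN itself is not spelled out in the abstract; as a sketch of the underlying model family only, the following plain-Python code performs exact Gaussian-RBF interpolation (one center per training point, weights from Gaussian elimination), with hypothetical lot-feature/cycle-time pairs. The paper's density-peak clustering, which selects far fewer centers, is not reproduced here:

```python
import math

def rbf_fit_predict(X, y, X_new, sigma=1.0):
    """Exact Gaussian-RBF interpolation: centers at the training points,
    weights solved from Phi @ w = y by Gaussian elimination."""
    def phi(a, b):
        return math.exp(-sum((u - v) ** 2 for u, v in zip(a, b)) / (2 * sigma ** 2))
    n = len(X)
    # Augmented system [Phi | y]
    A = [[phi(X[i], X[j]) for j in range(n)] + [y[i]] for i in range(n)]
    for col in range(n):                       # forward elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):             # back substitution
        w[i] = (A[i][n] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return [sum(w[j] * phi(x, X[j]) for j in range(n)) for x in X_new]

# Hypothetical lot feature (e.g. queue length) -> observed cycle time
X = [[0.0], [1.0], [2.0], [3.0]]
y = [5.0, 7.0, 11.0, 16.0]
print(rbf_fit_predict(X, y, [[1.5]]))
```

By construction this interpolant reproduces the training targets exactly; the parallelization discussed in the paper would distribute the training computation across workers.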

  3. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrates that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
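Space-filling-curve partitioning can be illustrated compactly. The sketch below uses a Morton (Z-order) key, one common choice of curve, which may differ from the curve the authors used; cells are sorted along the curve and then cut into equal contiguous chunks, one per processor:

```python
def morton_key(ix, iy, iz, bits=10):
    """Interleave the bits of three integer cell coordinates (Z-order curve)."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

def partition(cells, nparts):
    """Sort cells along the curve, then cut into contiguous equal-sized chunks;
    the curve's spatial locality keeps each chunk geometrically compact."""
    order = sorted(cells, key=lambda c: morton_key(*c))
    n = len(order)
    return [order[p * n // nparts:(p + 1) * n // nparts] for p in range(nparts)]

cells = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]
parts = partition(cells, 4)
print([len(p) for p in parts])  # -> [16, 16, 16, 16]
```

Because the key is cheap to compute per cell, the decomposition can be redone on the fly each time the mesh adapts, which is the property the solver above exploits.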

  4. Temperature Control with Two Parallel Small Loop Heat Pipes for GLM Program

    NASA Technical Reports Server (NTRS)

    Khrustalev, Dmitry; Stouffer, Chuck; Ku, Jentung; Hamilton, Jon; Anderson, Mark

    2014-01-01

The concept of temperature control of an electronic component using a single Loop Heat Pipe (LHP) is well established for aerospace applications. Using two LHPs is often desirable for redundancy/reliability reasons or for increasing the overall heat source-sink thermal conductance. This effort elaborates on the temperature-controlling operation of a thermal system that includes two small ammonia LHPs thermally coupled together at the evaporator end as well as at the condenser end and operating "in parallel". A transient model of the LHP system was developed on the Thermal Desktop (TM) platform to understand some fundamental details of such parallel operation of the two LHPs. Extensive thermal-vacuum testing was conducted with two thermally coupled LHPs operating simultaneously as well as with only one LHP operating at a time. This paper outlines the temperature control procedures for two LHPs operating simultaneously with widely varying sink temperatures. The test data obtained during the thermal-vacuum testing, with both LHPs running simultaneously in comparison with only one LHP operating at a time, are presented with detailed explanations.

  5. Neural network architecture for form and motion perception (Abstract Only)

    NASA Astrophysics Data System (ADS)

    Grossberg, Stephen

    1991-08-01

Evidence is given for a new neural network theory of biological motion perception, a motion boundary contour system. This theory clarifies why parallel streams V1 → V2 and V1 → MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The motion boundary contour system consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a motion oriented contrast (MOC) filter, for preprocessing moving images; and a cooperative-competitive feedback (CC) loop, for generating emergent boundary segmentations of the filtered signals. The present work uses the MOC filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's Laws; and dependence of motion strength on stimulus orientation and spatial frequency. These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recently proposed solution of the global aperture problem, including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90 degrees whereas opposite directions differ by 180 degrees, and why a cortical stream V1 → V2 → MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the motion boundary contour system design.

  6. Predator-induced phenotypic plasticity of shape and behavior: parallel and unique patterns across sexes and species

    PubMed Central

    Kinnison, Michael T.

    2017-01-01

Phenotypic plasticity is often an adaptation of organisms to cope with temporally or spatially heterogeneous landscapes. Like other adaptations, one would predict that different species, populations, or sexes might thus show some degree of parallel evolution of plasticity, in the form of parallel reaction norms, when exposed to analogous environmental gradients. Indeed, one might even expect parallelism of plasticity to repeatedly evolve in multiple traits responding to the same gradient, resulting in integrated parallelism of plasticity. In this study, we experimentally tested for parallel patterns of predator-mediated plasticity of size, shape, and behavior of 2 species and sexes of mosquitofish. Examination of behavioral trials indicated that the 2 species showed unique patterns of behavioral plasticity, whereas the 2 sexes in each species showed parallel responses. Fish shape showed parallel patterns of plasticity for both sexes and species, although males showed evidence of unique plasticity related to reproductive anatomy. Moreover, patterns of shape plasticity due to predator exposure were broadly parallel to what has been depicted for predator-mediated population divergence in other studies (slender bodies, expanded caudal regions, ventrally located eyes, and reduced male gonopodia). We did not find evidence of phenotypic plasticity in fish size for either species or sex. Hence, our findings support broadly integrated parallelism of plasticity for sexes within species and less integrated parallelism for species. We interpret these findings with respect to their potential broader implications for the interacting roles of adaptation and constraint in the evolutionary origins of parallelism of plasticity in general. PMID:29491997

  7. Reliable noninvasive prenatal testing by massively parallel sequencing of circulating cell-free DNA from maternal plasma processed up to 24h after venipuncture.

    PubMed

    Buysse, Karen; Beulen, Lean; Gomes, Ingrid; Gilissen, Christian; Keesmaat, Chantal; Janssen, Irene M; Derks-Willemen, Judith J H T; de Ligt, Joep; Feenstra, Ilse; Bekker, Mireille N; van Vugt, John M G; Geurts van Kessel, Ad; Vissers, Lisenka E L M; Faas, Brigitte H W

    2013-12-01

    Circulating cell-free fetal DNA (ccffDNA) in maternal plasma is an attractive source for noninvasive prenatal testing (NIPT). The amount of total cell-free DNA significantly increases 24h after venipuncture, leading to a relative decrease of the ccffDNA fraction in the blood sample. In this study, we evaluated the downstream effects of extended processing times on the reliability of aneuploidy detection by massively parallel sequencing (MPS). Whole blood from pregnant women carrying normal and trisomy 21 (T21) fetuses was collected in regular EDTA anti-coagulated tubes and processed within 6h, 24 and 48h after venipuncture. Samples of all three different time points were further analyzed by MPS using Z-score calculation and the percentage of ccffDNA based on X-chromosome reads. Both T21 samples were correctly identified as such at all time-points. However, after 48h, a higher deviation in Z-scores was noticed. Even though the percentage of ccffDNA in a plasma sample has been shown previously to significantly decrease 24h after venipuncture, the percentages based on MPS results did not show a significant decrease after 6, 24 or 48h. The quality and quantity of ccffDNA extracted from plasma samples processed up to 24h after venipuncture are sufficiently high for reliable downstream NIPT analysis by MPS. Furthermore, we show that it is important to determine the percentage of ccffDNA in the fraction of the sample that is actually used for NIPT, as downstream procedures might influence the fetal or maternal fraction. © 2013.
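The Z-score calculation used to flag trisomies is, in its standard form, the test sample's chromosome-21 read fraction standardized against a euploid reference set; a sketch with invented fractions (not the study's data):

```python
from statistics import mean, stdev

def chrom_zscore(test_frac, ref_fracs):
    """Z-score of a chromosome's read fraction against euploid references."""
    return (test_frac - mean(ref_fracs)) / stdev(ref_fracs)

# Hypothetical chr21 read fractions from euploid reference pregnancies
ref = [0.0130, 0.0131, 0.0129, 0.0132, 0.0128]
z = chrom_zscore(0.0140, ref)
print(z)  # well above a typical +3 cutoff, so this sample would flag as T21
```

The higher Z-score deviation the study observed at 48 h corresponds to extra spread in these read fractions; the fetal fraction itself is estimated separately (here, from X-chromosome reads).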

  8. The Xpress Transfer Protocol (XTP): A tutorial (expanded version)

    NASA Technical Reports Server (NTRS)

    Sanders, Robert M.; Weaver, Alfred C.

    1990-01-01

The Xpress Transfer Protocol (XTP) is a reliable, real-time, lightweight transfer-layer protocol. Current transport layer protocols such as DoD's Transmission Control Protocol (TCP) and ISO's Transport Protocol (TP) were not designed for the next generation of high-speed, interconnected, reliable networks such as the fiber distributed data interface (FDDI) and gigabit/second wide area networks. Unlike all previous transport layer protocols, XTP is being designed to be implemented in hardware as a VLSI chip set. By streamlining the protocol, combining the transport and network layers, and utilizing the increased speed and parallelization possible with a VLSI implementation, XTP will be able to provide the end-to-end data transmission rates demanded in high-speed networks without compromising reliability and functionality. This paper describes the operation of the XTP protocol and, in particular, its error, flow, and rate control; inter-networking addressing mechanisms; and multicast support features, as defined in XTP Protocol Definition Revision 3.4.

  9. Earth's field NMR; a surface moisture detector?

    NASA Astrophysics Data System (ADS)

    Fukushima, Eiichi; Altobelli, Stephen; McDowell, Andrew; Zhang, Tongsheng

    2012-10-01

Earth's field NMR (EFNMR), being free of magnets, would be an ideal teaching medium as well as a mobile NMR technique except for its weak S/N. The common EFNMR apparatus uses a powerful prepolarization field to enhance the spin magnetization before the experiment. We introduce a coil design geared to larger but manageable samples, with sufficient sensitivity without prepolarization, to move EFNMR closer to routine use and to provide an inexpensive teaching tool. Our coil consists of parallel wires spread out on a plywood sheet to form a current sheet, with the current-return wires separated so they will not influence the main part of the coil assembly. The sensitive region is a relatively thin region parallel to the coil and close to it. A single turn of the coil is wound to be topologically equivalent to a figure-8. The two crossing segments in the center of a figure-8 form two of the parallel wires of the flat coil. Thus, a two-turn figure-8 has four crossing wires, so its topologically equivalent coil will have four parallel wires with currents in phase. Together with its excellent sensitivity, this coil offers outstanding interference rejection because of the figure-8 geometry. An example of such a coil has 328 parallel wires covering a ~1-meter-square plywood sheet, which yields a good NMR signal from 26 liters of water spread out roughly over the area of the coil in less than one minute in a nearby park.

  10. The Reliability of Environmental Measures of the College Alcohol Environment.

    ERIC Educational Resources Information Center

    Clapp, John D.; Whitney, Mike; Shillington, Audrey M.

    2002-01-01

    Assesses the inter-rater reliability of two environmental scanning tools designed to identify alcohol-related advertisements targeting college students. Inter-rater reliability for these forms varied across different rating categories and ranged from poor to excellent. Suggestions for future research are addressed. (Contains 26 references and 6…

  11. The reliability of a modified Kalamazoo Consensus Statement Checklist for assessing the communication skills of multidisciplinary clinicians in the simulated environment.

    PubMed

    Peterson, Eleanor B; Calhoun, Aaron W; Rider, Elizabeth A

    2014-09-01

    With increased recognition of the importance of sound communication skills and communication skills education, reliable assessment tools are essential. This study reports on the psychometric properties of an assessment tool based on the Kalamazoo Consensus Statement Essential Elements Communication Checklist. The Gap-Kalamazoo Communication Skills Assessment Form (GKCSAF), a modified version of an existing communication skills assessment tool, the Kalamazoo Essential Elements Communication Checklist-Adapted, was used to assess learners in a multidisciplinary, simulation-based communication skills educational program using multiple raters. 118 simulated conversations were available for analysis. Internal consistency and inter-rater reliability were determined by calculating a Cronbach's alpha score and intra-class correlation coefficients (ICC), respectively. The GKCSAF demonstrated high internal consistency with a Cronbach's alpha score of 0.844 (faculty raters) and 0.880 (peer observer raters), and high inter-rater reliability with an ICC of 0.830 (faculty raters) and 0.89 (peer observer raters). The Gap-Kalamazoo Communication Skills Assessment Form is a reliable method of assessing the communication skills of multidisciplinary learners using multi-rater methods within the learning environment. The Gap-Kalamazoo Communication Skills Assessment Form can be used by educational programs that wish to implement a reliable assessment and feedback system for a variety of learners. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
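
    Internal consistency figures such as the Cronbach's alpha values reported above can be reproduced from raw item scores in a few lines. The sketch below is a generic illustration with made-up ratings, not the GKCSAF analysis itself.

```python
def cronbach_alpha(items):
    """Cronbach's alpha from per-item score lists
    (one list per item, one entry per rated conversation)."""
    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-subject total score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical ratings: 3 checklist items scored across 5 conversations
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
print(round(cronbach_alpha(items), 3))  # → 0.886
```

    Alpha approaches 1.0 as the items covary strongly relative to their individual variances; identical items give exactly 1.0.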

  12. Jagged Tiling for Intra-tile Parallelism and Fine-Grain Multithreading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Sunil; Manzano Franco, Joseph B.; Marquez, Andres

    In this paper, we have developed a novel methodology that takes into consideration multithreaded many-core designs to better utilize memory/processing resources and improve memory residence in tileable applications. It takes advantage of polyhedral analysis and transformation in the form of PLUTO, combined with a highly optimized fine-grain tile runtime, to exploit parallelism at all levels. The main contributions of this paper include the introduction of multi-hierarchical tiling techniques that increase intra-tile parallelism, and a data-flow inspired runtime library that allows the expression of parallel tiles with an efficient synchronization registry. Our current implementation shows performance improvements on an Intel Xeon Phi board of up to 32.25% against instances produced by state-of-the-art compiler frameworks for selected stencil applications.

  13. Age-forming aluminum panels

    NASA Technical Reports Server (NTRS)

    Baxter, G. I.

    1976-01-01

    Contoured, stiffened 63 by 337 inch 2124 aluminum alloy panels are machined in-the-flat to make integral, tapered T-capped stringers parallel with the longitudinal centerline. Aging fixture, which includes net contour formers made from lofted contour templates, has eggcrate-like structure for use in forming and checking panels.

  14. Direct drive digital servo press with high parallel control

    NASA Astrophysics Data System (ADS)

    Murata, Chikara; Yabe, Jun; Endou, Junichi; Hasegawa, Kiyoshi

    2013-12-01

    The direct drive digital servo press has been developed as a university-industry joint research and development project since 1998. On the basis of this result, a 4-axis direct drive digital servo press was developed and brought to market in April 2002. This servo press is composed of one slide supported by 4 ball screws, and each axis has a linear scale measuring its position with high accuracy, below the micrometer level. Each axis is controlled independently by a servo motor and feedback system. This system can maintain a high level of parallelism and high accuracy even under a highly eccentric load. Furthermore, 'full stroke full power' is obtained by using ball screws. Using these features, various new types of press forming and stamping have been developed and put into production. The new stamping and forming methods are introduced, together with a strategy for press forming with high added value and the future direction of press forming.

  15. No more CKY two-forms in the NHEK

    NASA Astrophysics Data System (ADS)

    Mitsuka, Yoshihiro; Moutsopoulos, George

    2012-02-01

    We show that in the near-horizon limit of a Kerr-NUT-AdS black hole, the space of conformal Killing-Yano two-forms does not enhance and remains of dimension 2. The same holds for an analogous polar limit in the case of extremal NUT charge. We also derive the conformal Killing-Yano p-form equation for any background in an arbitrary dimension in the form of parallel transport.

  16. A sampling and classification item selection approach with content balancing.

    PubMed

    Chen, Pei-Hua

    2015-03-01

    Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in nonparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the cell-only method, a statistically based item selection method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential inter-item-distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method for constructing multiple parallel forms by matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is also computationally less intensive than the sequential and simultaneous versions of IID WDM.

  17. Lewis Structures Technology, 1988. Volume 1: Structural Dynamics

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The specific purpose of the symposium was to familiarize the engineering structures community with the depth and range of research performed by the Structures Division of the Lewis Research Center and its academic and industrial partners. Sessions covered vibration control, fracture mechanics, ceramic component reliability, parallel computing, nondestructive testing, dynamical systems, fatigue and damage, wind turbines, hot section technology, structural mechanics codes, computational methods for dynamics, structural optimization, and applications of structural dynamics.

  18. Map Projections and the Visual Detective: How to Tell if a Map Is Equal-Area, Conformal, or Neither

    ERIC Educational Resources Information Center

    Olson, Judy M.

    2006-01-01

    The ability to see whether a map is equal-area, conformal, or neither is useful for looking intelligently at large-area maps. For example, only if a map is equal-area can reliable judgments of relative size be made. If a map is equal-area, latitude-longitude cells are equal in size between a given pair of parallels, the cells between a given pair…

  19. Validity and Reliability of the Turkish version of DSM-5 Social Anxiety Disorder Severity Scale- Child Form.

    PubMed

    Yalin Sapmaz, Şermin; Ergin, Dilek; Şen Celasin, Nesrin; Karaarslan, Duygu; Öztürk, Masum; Özek Erkuran, Handan; Köroğlu, Ertuğrul; Aydemir, Ömer

    2017-12-01

    This study aimed to assess the validity and reliability of the Turkish version of the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) Social Anxiety Disorder Severity Scale - Child Form. The scale was prepared by carrying out translation and back-translation of the DSM-5 Social Anxiety Disorder Severity Scale - Child Form. The study group consisted of 31 patients who had been treated in a child psychiatry unit and diagnosed with social anxiety disorder, and 99 healthy volunteers who were attending middle or high school during the study period. For the assessment, the Screen for Child Anxiety and Related Emotional Disorders (SCARED) was also used along with the DSM-5 Social Anxiety Disorder Severity Scale - Child Form. Regarding reliability analyses, the Cronbach's alpha internal consistency coefficient was calculated as 0.941, while item-total score correlation coefficients ranged between 0.566 and 0.866. The test-retest correlation coefficient was calculated as r = 0.711. As for construct validity, a single factor explaining 66.0% of the variance was obtained. As for concurrent validity, the scale showed a high correlation with the SCARED. It was concluded that the Turkish version of the DSM-5 Social Anxiety Disorder Severity Scale - Child Form can be used as a valid and reliable tool both in clinical practice and for research purposes.

  20. Validity and Reliability of International Physical Activity Questionnaire-Short Form in Chinese Youth

    ERIC Educational Resources Information Center

    Wang, Chao; Chen, Peijie; Zhuang, Jie

    2013-01-01

    Purpose: The psychometric profiles of the widely used International Physical Activity Questionnaire-Short Form (IPAQ-SF) in Chinese youth have not been reported. The purpose of this study was to examine the validity and reliability of the IPAQ-SF using a sample of Chinese youth. Method: One thousand and twenty-one youth (M[subscript age] = 14.26 ±…

  1. The validity and reliability of the Thai version of the Kujala score for patients with patellofemoral pain syndrome.

    PubMed

    Apivatgaroon, Adinun; Angthong, Chayanin; Sanguanjit, Prakasit; Chernchujit, Bancha

    2016-10-01

    To develop a Thai version of the Kujala score and evaluate its validity and reliability. The Thai version of the Kujala score was developed using a forward-backward translation protocol. Forty-nine patients with patellofemoral pain syndrome (PFPS) answered the Thai versions of the questionnaires, including the Kujala score, Short Form-36 (SF-36) and the International Knee Documentation Committee (IKDC) Subjective Knee Form. The validity between the scores was tested. Reliability was assessed using test-retest reliability and internal consistency. The Thai version of the Kujala score showed a good correlation with the Thai IKDC Subjective Knee Form (Pearson's correlation coefficient r = 0.74; p < 0.01) and moderate correlations with the Thai SF-36 subscales of physical component summary, total score and role physical (r = 0.586, 0.571 and 0.524, respectively; p < 0.01). Test-retest reliability was excellent, with an intra-class correlation coefficient of 0.908 (p < 0.001; 95% CI [0.842-0.947]). Internal consistency was strong, with a Cronbach's alpha of 0.952 (p < 0.001). No floor or ceiling effects were observed. The Thai version of the Kujala score has shown good validity and reliability and can be used effectively for evaluating Thai patients with patellofemoral pain syndrome. Implications for Rehabilitation: The Kujala score is a self-administered questionnaire for patients with patellofemoral pain syndrome (PFPS). The validity and reliability of the Thai version are compatible with other versions (Turkish, Chinese and Persian). The Thai version has been shown to be valid and reliable in Thai PFPS patients and can be used for clinical evaluation and also in research work.

  2. Processing and mechanical properties of metal-ceramic composites with controlled microstructure formed by reactive metal penetration

    NASA Astrophysics Data System (ADS)

    Ellerby, Donald Thomas

    1999-12-01

    Compared to monolithic ceramics, metal-reinforced ceramic composites offer the potential for improved toughness and reliability in ceramic materials. As such, there is significant scientific and commercial interest in the microstructure and properties of metal-ceramic composites. Considerable work has been conducted on modeling the toughening behavior of metal reinforcements in ceramics; however, there has been limited application and testing of these concepts on real systems. Composites formed by newly developed reactive processes now offer the flexibility to systematically control metal-ceramic composite microstructure, and to test some of the property models that have been proposed for these materials. In this work, the effects of metal-ceramic composite microstructure on resistance curve (R-curve) behavior, strength, and reliability were systematically investigated. Al/Al2O3 composites were formed by reactive metal penetration (RMP) of aluminum metal into aluminosilicate ceramic preforms. Processing techniques were developed to control the metal content, metal composition, and metal ligament size in the resultant composite microstructure. Quantitative stereology and microscopy were used to characterize the composite microstructures, and then the influence of microstructure on strength, toughness, R-curve behavior, and reliability, was investigated. To identify the strength limiting flaws in the composite microstructure, fractography was used to determine the failure origins. Additionally, the crack bridging tractions produced by the metal ligaments in metal-ceramic composites formed by the RMP process were modeled. Due to relatively large flaws and low bridging stresses in RMP composites, no dependence of reliability on R-curve behavior was observed. The inherent flaws formed during reactive processing appear to limit the strength and reliability of composites formed by the RMP process. 
This investigation has established a clear relationship between processing, microstructure, and properties in metal-ceramic composites formed by the RMP process. RMP composite properties are determined by the metal-ceramic composite microstructure (e.g., metal content and ligament size), which can be systematically varied by processing. Furthermore, relative to the ceramic preforms used to make the composites, metal-ceramic composites formed by RMP generally have improved properties and combinations of properties that make them more desirable for advanced engineering applications.

  3. Compliant Robot Wrist

    NASA Technical Reports Server (NTRS)

    Voellmer, George

    1992-01-01

    Compliant element for robot wrist accepts small displacements in one direction only (to first approximation). Three such elements are combined to obtain translational compliance along three orthogonal directions, without rotational compliance about any of them. Element is double-blade flexure joint in which two sheets of spring steel are attached between opposing blocks, forming rectangle. Blocks move parallel to each other in one direction only. Sheets act as double cantilever beams deforming in S-shape, keeping blocks parallel.

  4. Photovoltaic cell array

    NASA Technical Reports Server (NTRS)

    Eliason, J. T. (Inventor)

    1976-01-01

    A photovoltaic cell array consisting of parallel columns of silicon filaments is described. Each fiber is doped to produce an inner region of one polarity type and an outer region of the opposite polarity type, thereby forming a continuous radial semiconductor junction. Spaced rows of electrical contacts alternately connect to the inner and outer regions to provide a plurality of electrical outputs, which may be combined in parallel or in series.

  5. Spontaneous Hot Flow Anomalies at Quasi-Parallel Shocks: 2. Hybrid Simulations

    NASA Technical Reports Server (NTRS)

    Omidi, N.; Zhang, H.; Sibeck, D.; Turner, D.

    2013-01-01

    Motivated by recent THEMIS observations, this paper uses 2.5-D electromagnetic hybrid simulations to investigate the formation of Spontaneous Hot Flow Anomalies (SHFA) upstream of quasi-parallel bow shocks during steady solar wind conditions and in the absence of discontinuities. The results show the formation of a large number of structures along and upstream of the quasi-parallel bow shock. Their outer edges exhibit density and magnetic field enhancements, while their cores exhibit drops in density, magnetic field, solar wind velocity and enhancements in ion temperature. Using virtual spacecraft in the simulation, we show that the signatures of these structures in the time series data are very similar to those of SHFAs seen in THEMIS data and conclude that they correspond to SHFAs. Examination of the simulation data shows that SHFAs form as the result of foreshock cavitons interacting with the bow shock. Foreshock cavitons in turn form due to the nonlinear evolution of ULF waves generated by the interaction of the solar wind with the backstreaming ions. Because foreshock cavitons are an inherent part of the shock dissipation process, the formation of SHFAs is also an inherent part of the dissipation process leading to a highly non-uniform plasma in the quasi-parallel magnetosheath including large scale density and magnetic field cavities.

  6. Selective recognition of parallel and anti-parallel thrombin-binding aptamer G-quadruplexes by different fluorescent dyes

    PubMed Central

    Zhao, Dan; Dong, Xiongwei; Jiang, Nan; Zhang, Dan; Liu, Changlin

    2014-01-01

    G-quadruplexes (G4) have found increasing potential in applications such as molecular therapeutics, diagnostics and sensing. Both Thioflavin T (ThT) and N-methyl mesoporphyrin IX (NMM) become fluorescent in the presence of most G4, but the thrombin-binding aptamer (TBA) has been reported as the only exception among the known G4-forming oligonucleotides when ThT is used as a high-throughput assay to identify G4 formation. Here, we investigate the interactions between ThT/NMM and TBA through fluorescence spectroscopy, circular dichroism and molecular docking simulations in the absence or presence of cations. The results show that a large ThT fluorescence enhancement can be observed only when ThT binds to the parallel TBA quadruplex, which ThT induces to form in the absence of cations. On the other hand, a large enhancement in NMM fluorescence can be obtained only in the presence of the anti-parallel TBA quadruplex, which is induced to fold by K+ or thrombin. The highly selective recognition of TBA quadruplexes with different topologies by the two probes may be useful for investigating the interactions between conformation-specific G4 and associated proteins, and could also be applied in label-free fluorescent sensing of other biomolecules. PMID:25245945

  7. Hyperswitch communication network

    NASA Technical Reports Server (NTRS)

    Peterson, J.; Pniel, M.; Upchurch, E.

    1991-01-01

    The Hyperswitch Communication Network (HCN) is a large scale parallel computer prototype being developed at JPL. Commercial versions of the HCN computer are planned. The HCN computer being designed is a message passing multiple instruction multiple data (MIMD) computer, and offers many advantages in price-performance ratio, reliability and availability, and manufacturing over traditional uniprocessors and bus based multiprocessors. The design of the HCN operating system is a uniquely flexible environment that combines both parallel processing and distributed processing. This programming paradigm can achieve a balance among the following competing factors: performance in processing and communications, user friendliness, and fault tolerance. The prototype is being designed to accommodate a maximum of 64 state of the art microprocessors. The HCN is classified as a distributed supercomputer. The HCN system is described, and the performance/cost analysis and other competing factors within the system design are reviewed.

  8. Reliability Measure of a Clinical Test: Appreciation of Music in Cochlear Implantees (AMICI)

    PubMed Central

    Cheng, Min-Yu; Spitzer, Jaclyn B.; Shafiro, Valeriy; Sheft, Stanley; Mancuso, Dean

    2014-01-01

    Purpose The goals of this study were (1) to investigate the reliability of a clinical music perception test, Appreciation of Music in Cochlear Implantees (AMICI), and (2) examine associations between the perception of music and speech. AMICI was developed as a clinical instrument for assessing music perception in persons with cochlear implants (CIs). The test consists of four subtests: (1) music versus environmental noise discrimination, (2) musical instrument identification (closed-set), (3) musical style identification (closed-set), and (4) identification of musical pieces (open-set). To be clinically useful, it is crucial for AMICI to demonstrate high test-retest reliability, so that CI users can be assessed and retested after changes in maps or programming strategies. Research Design Thirteen CI subjects were tested with AMICI for the initial visit and retested again 10–14 days later. Two speech perception tests (consonant-nucleus-consonant [CNC] and Bamford-Kowal-Bench Speech-in-Noise [BKB-SIN]) were also administered. Data Analysis Test-retest reliability and equivalence of the test’s three forms were analyzed using paired t-tests and correlation coefficients, respectively. Correlation analysis was also conducted between results from the music and speech perception tests. Results Results showed no significant difference between test and retest (p > 0.05) with adequate power (0.9) as well as high correlations between the three forms (Forms A and B, r = 0.91; Forms A and C, r = 0.91; Forms B and C, r = 0.95). Correlation analysis showed high correlation between AMICI and BKB-SIN (r = −0.71), and moderate correlation between AMICI and CNC (r = 0.4). Conclusions The study showed AMICI is highly reliable for assessing musical perception in CI users. PMID:24384082
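
    The test-retest comparison described above rests on a paired t statistic. As a generic illustration (with hypothetical scores, not the AMICI data), the statistic and its degrees of freedom can be computed as:

```python
import math
import statistics

def paired_t(test, retest):
    """Paired t statistic and degrees of freedom for test-retest scores."""
    diffs = [a - b for a, b in zip(test, retest)]
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)      # sample SD of the differences
    return mean_d / (sd_d / math.sqrt(n)), n - 1

# Hypothetical scores for 4 listeners at test and retest
t, df = paired_t([80, 85, 90, 75], [79, 84, 89, 76])  # t = 1.0, df = 3
```

    A small t relative to its degrees of freedom (as here) is what "no significant difference between test and retest" means in practice.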

  9. Strengthening the reliability and credibility of observational epidemiology studies by creating an Observational Studies Register.

    PubMed

    Swaen, Gerard M H; Carmichael, Neil; Doe, John

    2011-05-01

    To evaluate the need for the creation of a system in which observational epidemiology studies are registered: an Observational Studies Register (OSR). The current scientific process for observational epidemiology studies is described, and a parallel is drawn with the clinical trials area, where the creation of clinical trial registers has greatly restored and improved credibility and reliability. The advantages and disadvantages of an OSR are then compared; the advantages outweigh the disadvantages. The creation of an OSR, similar to the existing Clinical Trials Registers, will improve the assessment of publication bias and will provide an opportunity to compare the original study protocol with the results reported in the publication. Reliability, credibility, and transparency of observational epidemiology studies are strengthened by the creation of an OSR. We propose a structured, collaborative, and coordinated approach for observational epidemiology studies that can provide solutions for existing weaknesses and will strengthen credibility and reliability, similar to the approach currently used in clinical trials, where Clinical Trials Registers have played a key role in strengthening their scientific value. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    NASA Astrophysics Data System (ADS)

    Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-04-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of a repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, the conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the probabilities involving the absorbing set are obtained from differential equations and verified. Through forward inference, the reliability of the control unit is determined under different modes, and weak nodes in the control unit are identified.
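
    The role of an absorbing set in a Markov reliability model can be shown with a small discrete-time sketch. The three-state transition matrix below is hypothetical, not taken from the paper: once probability mass enters the failed state it never leaves, and reliability at step t is the probability of not yet being absorbed.

```python
def step(dist, P):
    """One step of a discrete-time Markov chain: new_dist = dist @ P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Hypothetical 3-state element: 0 = good, 1 = degraded, 2 = failed
P = [
    [0.95, 0.04, 0.01],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],   # absorbing failure state: no transitions out
]

dist = [1.0, 0.0, 0.0]    # start in the good state
for _ in range(10):
    dist = step(dist, P)

reliability = 1.0 - dist[2]   # probability of not having failed by step 10
```

    A continuous-time version replaces this iteration with the Chapman-Kolmogorov differential equations, which is the form the paper works with.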

  11. Learning Style Scales: a valid and reliable questionnaire.

    PubMed

    Abdollahimohammad, Abdolghani; Ja'afar, Rogayah

    2014-01-01

    Learning-style instruments assist students in developing their own learning strategies and outcomes, in eliminating learning barriers, and in acknowledging peer diversity. Only a few psychometrically validated learning-style instruments are available. This study aimed to develop a valid and reliable learning-style instrument for nursing students. A cross-sectional survey was conducted in two nursing schools in two countries. A purposive sample of 156 undergraduate nursing students participated in the study. Face and content validity were established by an expert panel. The construct of the Learning Style Scales (LSS) was established using principal axis factoring (PAF) with oblimin rotation, a scree plot test, and parallel analysis (PA). The reliability of the LSS was tested using Cronbach's α, corrected item-total correlations, and test-retest reliability. Factor analysis revealed five components, confirmed by PA and a relatively clear curve on the scree plot. Component strength and interpretability were also confirmed. The factors were labeled as perceptive, solitary, analytic, competitive, and imaginative learning styles. Cronbach's α was >0.70 for all subscales in both study populations, and the corrected item-total correlations were >0.30 for the items in each component. The LSS is a valid and reliable inventory for evaluating learning-style preferences of nursing students in various multicultural environments.
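
    Horn's parallel analysis, used above to confirm the five-component solution, retains a component only if its observed eigenvalue exceeds the corresponding mean eigenvalue from random data of the same shape. A minimal sketch with simulated data (not the LSS responses):

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Count components whose correlation-matrix eigenvalues exceed the
    mean eigenvalues obtained from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_sims):
        r = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    return int(np.sum(obs > rand / n_sims))

# Simulated responses: one strong common factor across 6 items
rng = np.random.default_rng(1)
factor = rng.standard_normal(200)
data = factor[:, None] + 0.5 * rng.standard_normal((200, 6))
n_keep = parallel_analysis(data)   # should retain at least one component
```

    Unlike the eigenvalue-greater-than-one rule, this criterion adapts to the sample size and number of items, which is why it is commonly paired with a scree plot as a cross-check.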

  12. A parallel-processing approach to computing for the geographic sciences; applications and systems enhancements

    USGS Publications Warehouse

    Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Liu, Shu-Guang; Nichols, Erin; Haga, Jim; Maddox, Brian; Bilderback, Chris; Feller, Mark; Homer, George

    2001-01-01

    The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost, personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting information science research into parallel computing systems and applications.

  13. Re-forming supercritical quasi-parallel shocks. I - One- and two-dimensional simulations

    NASA Technical Reports Server (NTRS)

    Thomas, V. A.; Winske, D.; Omidi, N.

    1990-01-01

    The process of reforming supercritical quasi-parallel shocks is investigated using one-dimensional and two-dimensional hybrid (particle ion, massless fluid electron) simulations both of shocks and of simpler two-stream interactions. It is found that the supercritical quasi-parallel shock is not steady. Instead of a well-defined shock ramp between upstream and downstream states that remains at a fixed position in the flow, the ramp periodically steepens, broadens, and then reforms upstream of its former position. It is concluded that the wave generation process is localized at the shock ramp and that the reformation process proceeds in the absence of upstream perturbations intersecting the shock.

  14. Laser weld jig

    DOEpatents

    Van Blarigan, Peter; Haupt, David L.

    1982-01-01

    A system is provided for welding a workpiece (10, FIG. 1) along a predetermined weld line (12) that may be of irregular shape, which includes the step of forming a lip (32) on the workpiece to extend parallel to the weld line, and moving the workpiece by engaging the lip between a pair of rotatable members (34, 36). Rotation of one of the members at a constant speed, causes the workpiece to move so that all points on the weld line sequentially pass a fixed point in space (17) at a constant speed, so that a laser welding beam can be directed at that fixed point to form a weld along the weld line. The workpiece can include a reuseable jig (24) forming the lip, and with the jig constructed to detachably hold parts (22, 20) to be welded at a position wherein the weld line of the parts extends parallel to the lip on the jig.

  15. The probability estimation of the electronic lesson implementation taking into account software reliability

    NASA Astrophysics Data System (ADS)

    Gurov, V. V.

    2017-01-01

    Software tools for educational purposes, such as e-lessons and computer-based testing systems, have a number of distinctive features from the point of view of reliability. The main ones are the need to ensure a sufficiently high probability of faultless operation for a specified time, and the impossibility of rapid recovery by replacing a failed program with a similar running program during class. The article considers the peculiarities of reliability evaluation for software, in contrast to assessments of hardware reliability. The basic requirements for the reliability of software used for conducting practical and laboratory classes in the form of computer-based training programs are given. A mathematical tool based on Markov chains is presented, which allows the degree of debugging of a training program to be determined for use in the educational process by applying the graph of software module interactions.

  16. The 10m incremental shuttle walk test is a highly reliable field exercise test for patients referred to cardiac rehabilitation: a retest reliability study.

    PubMed

    Hanson, Lisa C; Taylor, Nicholas F; McBurney, Helen

    2016-09-01

    To determine the retest reliability of the 10m incremental shuttle walk test (ISWT) in a mixed cardiac rehabilitation population. Participants completed two 10m ISWTs in a single session in a repeated-measures study; ten participants completed a third 10m ISWT as part of a pilot study. Setting: hospital physiotherapy department. Participants: 62 adults (mean age 68 years, SD 10) referred to a cardiac rehabilitation program. Retest reliability of the 10m ISWT was expressed as relative reliability and measurement error: relative reliability as an intraclass correlation coefficient (ICC), and measurement error as the standard error of measurement (SEM) with 95% confidence intervals for the group and the individual. There was a high level of relative reliability over the two walks, with an ICC of 0.99. The SEM (agreement) was 17 m, and a change of at least 23 m for the group and 54 m for an individual would be required to be 95% confident of exceeding measurement error. The 10m ISWT demonstrated good retest reliability and is sufficiently reliable to be applied in practice in this population without the use of a practice test. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
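
    Measurement-error quantities of this kind follow standard formulas: SEM = SD·√(1 − ICC), and the minimal detectable change for an individual at 95% confidence is 1.96·√2·SEM. A sketch with hypothetical inputs (the SD below is illustrative, not a value reported by the study):

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement from between-subject SD and ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc95_individual(sem):
    """Minimal detectable change (95% confidence) for a single individual."""
    return 1.96 * math.sqrt(2.0) * sem

sem = sem_from_icc(sd=170.0, icc=0.99)   # hypothetical SD of 170 m → SEM 17 m
change = mdc95_individual(sem)           # smallest individually reliable change
```

    The group-level threshold is smaller because averaging over n participants shrinks the error term by a factor of √n.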

  17. Confirming Pseudomonas putida as a reliable bioassay for demonstrating biocompatibility enhancement by solar photo-oxidative processes of a biorecalcitrant effluent.

    PubMed

    García-Ripoll, A; Amat, A M; Arques, A; Vicente, R; Ballesteros Martín, M M; Pérez, J A Sánchez; Oller, I; Malato, S

    2009-03-15

    Experiments based on Vibrio fischeri, activated sludge and Pseudomonas putida have been employed to track variation in the biocompatibility of an aqueous solution of a commercial pesticide along solar photo-oxidative processes (TiO2 and Fenton reagent). Activated sludge-based experiments demonstrated complete detoxification of the solution, although important toxicity was still detected according to the more sensitive V. fischeri assays. In parallel, the biodegradability of the organic matter was strongly enhanced, with a BOD5/COD ratio above 0.8. Bioassays run with P. putida gave similar trends, highlighting the convenience of using a P. putida culture as a reliable and reproducible method for assessing both toxicity and biodegradability, as a substitute for other more time-consuming methods.

  18. Reliability of the International Physical Activity Questionnaire in Research Settings: Last 7-Day Self-Administered Long Form

    ERIC Educational Resources Information Center

    Levy, Susan S.; Readdy, R. Tucker

    2009-01-01

    The purpose of this study was to examine the test-retest reliability of the last 7-day long form International Physical Activity Questionnaire (Craig et al., 2003) and to examine the construct validity for the measure in a research setting. Participants were 151 male (n = 52) and female (n = 99) university students (M age = 24.15 years, SD = 5.01)…
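    Test-retest reliability of the kind examined here is usually reported as an intraclass correlation. A self-contained sketch of ICC(2,1) from the two-way ANOVA decomposition, with made-up scores for illustration:

```python
# ICC(2,1): two-way random effects, absolute agreement, single measurement.
# The scores below are hypothetical, standing in for test and retest totals.

test   = [10.0, 12.0, 14.0, 16.0, 18.0]
retest = [11.0, 13.0, 15.0, 15.0, 19.0]

def icc_2_1(a, b):
    n, k = len(a), 2
    grand = (sum(a) + sum(b)) / (n * k)
    row_means = [(x + y) / 2 for x, y in zip(a, b)]
    col_means = [sum(a) / n, sum(b) / n]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for x in a + b)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-sessions mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print(round(icc_2_1(test, retest), 3))
```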

  19. Free Energy Reconstruction from Logarithmic Mean-Force Dynamics Using Multiple Nonequilibrium Trajectories.

    PubMed

    Morishita, Tetsuya; Yonezawa, Yasushige; Ito, Atsushi M

    2017-07-11

    Efficient and reliable estimation of the mean force (MF), the derivatives of the free energy with respect to a set of collective variables (CVs), has been a challenging problem because free energy differences are often computed by integrating the MF. Among various methods for computing free energy differences, logarithmic mean-force dynamics (LogMFD) [Morishita et al., Phys. Rev. E 2012, 85, 066702] invokes the conservation law in classical mechanics to integrate the MF, which allows us to estimate the free energy profile along the CVs on-the-fly. Here, we present a method called parallel dynamics, which improves the estimation of the MF by employing multiple replicas of the system and is straightforwardly incorporated in LogMFD or a related method. In the parallel dynamics, the MF is evaluated by a nonequilibrium path-ensemble using the multiple replicas based on the Crooks-Jarzynski nonequilibrium work relation. Thanks to the Crooks relation, realizing full-equilibrium states is no longer mandatory for estimating the MF. Additionally, sampling in the hidden subspace orthogonal to the CV space is highly improved with appropriate weights for each metastable state (if any), which is hardly achievable by typical free energy computational methods. We illustrate how to implement parallel dynamics by combining it with LogMFD, which we call logarithmic parallel dynamics (LogPD). Biosystems of alanine dipeptide and adenylate kinase in explicit water are employed as benchmark systems to which LogPD is applied to demonstrate the effect of multiple replicas on the accuracy and efficiency in estimating the free energy profiles using parallel dynamics.
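    A minimal sketch of the nonequilibrium work average underlying the approach above: the Jarzynski relation ΔF = -kT ln⟨e^(-W/kT)⟩ applied to synthetic Gaussian work values. This illustrates the relation only, not the authors' LogPD implementation; all numbers are made up and units are chosen so kT = 1.

```python
import math, random

# Jarzynski estimator of a free-energy difference from nonequilibrium work
# values gathered over many replicas.  Work values here are synthetic.

def jarzynski_free_energy(work_values, kT=1.0):
    """Delta F = -kT * ln< exp(-W/kT) >, averaged over replicas."""
    boltz = [math.exp(-w / kT) for w in work_values]
    return -kT * math.log(sum(boltz) / len(boltz))

random.seed(0)
works = [random.gauss(2.0, 0.5) for _ in range(200000)]   # mean 2, sd 0.5
# For Gaussian work, the estimator converges to mu - sigma^2/(2 kT) = 1.875,
# below the mean work: the exponential average favors low-work trajectories.
print(round(jarzynski_free_energy(works), 3))
```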

  20. Dynamic performance of high speed solenoid valve with parallel coils

    NASA Astrophysics Data System (ADS)

    Kong, Xiaowu; Li, Shizhen

    2014-07-01

    Methods of improving the dynamic performance of high speed on/off solenoid valves include increasing the magnetic force on the armature and the slew rate of the coil current, and decreasing the mass and stroke of the moving parts. An increase in magnetic force usually leads to a decrease in current slew rate, which can increase the delay time of the dynamic response of the solenoid valve. Using a high voltage to drive the coil can resolve this contradiction, but a high driving voltage also leads to greater cost and reduced safety and reliability. In this paper, a new scheme of parallel coils is investigated, in which the single coil of the solenoid is replaced by parallel coils with the same ampere-turns. Based on a mathematical model of the high speed solenoid valve, a theoretical formula for the delay time of the solenoid valve is deduced. Both the theoretical analysis and dynamic simulation show that the effect of dividing a single coil into N parallel sub-coils is close to that of driving the single coil with N times the original driving voltage, as far as the delay time of the solenoid valve is concerned. A specific test bench was designed to measure the dynamic performance of the high speed on/off solenoid valve. The experimental results also prove that both the delay time and the switching time of the solenoid valves can be decreased greatly by adopting the parallel coil scheme. This research presents a simple and practical method to improve the dynamic performance of high speed on/off solenoid valves.
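    A first-order RL sketch of the delay-time argument: the paper finds that N parallel sub-coils act, as far as delay time is concerned, like driving the single coil with N times the voltage, so here we only compute the single-coil current-rise delay at U and at N·U. All component values are hypothetical, not the paper's valve.

```python
import math

# Delay time for the coil current i(t) = (U/R) * (1 - exp(-R t / L)) to
# reach the threshold that starts moving the armature.  L, R, U and the
# threshold current are illustrative values.

def delay_time(U, R, L, i_threshold):
    """Time for the first-order RL current rise to reach i_threshold."""
    if i_threshold >= U / R:
        raise ValueError("threshold current never reached")
    return (L / R) * math.log(1.0 / (1.0 - i_threshold * R / U))

U, R, L, i_th = 24.0, 6.0, 0.012, 2.0   # volts, ohms, henries, amperes
t1 = delay_time(U, R, L, i_th)          # single coil at U
t4 = delay_time(4 * U, R, L, i_th)      # stand-in for 4 parallel sub-coils
print(f"{t1 * 1e3:.2f} ms -> {t4 * 1e3:.2f} ms")
```

The higher effective drive voltage pushes the current through the threshold earlier on the exponential rise, which is the mechanism behind the reported delay-time reduction.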

  1. Valles Marineris as a Cryokarstic Structure Formed by a Giant Dyke System: Support From New Analogue Experiments

    NASA Astrophysics Data System (ADS)

    Ozeren, M. S.; Sengor, A. M. C.; Acar, D.; Ülgen, S. C.; Onsel, I. E.

    2014-12-01

    Valles Marineris is the most significant near-linear depression on Mars. It is some 4000 km long, up to about 200 km wide and some 7 km deep. Although its margins look parallel at first sight, the entire structure has a long spindle shape with significant enlargement in its middle (Melas Chasma) caused by cuspate slope-retreat mechanisms. Farther to its north is Hebes Chasma, an entirely closed depression with a more pronounced spindle shape. Tithonium Chasma is a parallel, but much narrower, depression to its northeast. All these chasmata have axes parallel with one another, and such structures occur nowhere else on Mars. A scabland surface exists to the east of Valles Marineris, and the causative water mass seems to have issued from it. The great resemblance of these chasmata on Mars to poljes in the karstic regions on Earth has led us to assume that they owe their existence to dissolution of rock layers underlying them. We assumed that the dissolving layer consisted of water ice forming substantial layers, in fact entirely frozen seas several km deep. We simulated this geometry by using bentonite and flour layers (in different experiments) overlying layers of ice, in which a resistance coil was used to simulate a dyke. We used different thicknesses of bentonite and flour overlying ice layers, again of various thicknesses. The flour seems to simulate the Martian crust better because on Mars, g is only about 3/8ths of its value on Earth, so (for equal crustal density) the depth to which the cohesion term C remains important in the Mohr-Coulomb shear failure criterion is about 8/3 times greater. As examples we show two of those experiments in which both the rock-analogue and ice layers were 1.5 cm thick. Perfect analogues of the Valles Marineris formed above the dyke-analogue thermal source, complete with the near-linear structure, overall flat spindle shape, cuspate margins, a central ridge, parallel side faults, and parallel depressions resembling the Tithonium Chasma. When water was allowed to drain from the beginning, closed depressions formed that bear an amazing resemblance to Hebes Chasma. We postulate that the entire system of chasmata here discussed formed atop a major dyke swarm some 4000 km in length, not dissimilar to the 3500 km long Mesoproterozoic (Ectasian) dyke swarm disrupting the Canadian Shield.
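    The 8/3 scaling quoted above follows directly from a cohesion-versus-lithostatic-stress balance. A quick check, with illustrative values of cohesion and crustal density (not taken from the abstract):

```python
# Depth to which cohesion C matters in the Mohr-Coulomb criterion scales as
# z* ~ C / (rho * g): the depth where lithostatic stress rho*g*z equals C.
# With g_Mars ~ (3/8) g_Earth and equal crustal density, that depth is 8/3
# times greater on Mars.  C and rho below are illustrative.

def cohesion_depth(C, rho, g):
    """Depth at which lithostatic stress rho*g*z equals the cohesion C."""
    return C / (rho * g)

g_earth = 9.81
g_mars = 3.0 * g_earth / 8.0            # the abstract's 3/8ths approximation
z_e = cohesion_depth(1e7, 2500.0, g_earth)   # C = 10 MPa, rho = 2500 kg/m^3
z_m = cohesion_depth(1e7, 2500.0, g_mars)
print(round(z_m / z_e, 3))
```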

  2. 14 CFR 25.1387 - Position light system dihedral angles.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... this section. (b) Dihedral angle L (left) is formed by two intersecting vertical planes, the first... two intersecting vertical planes, the first parallel to the longitudinal axis of the airplane, and the... axis. (d) Dihedral angle A (aft) is formed by two intersecting vertical planes making angles of 70...

  3. 14 CFR 27.1387 - Position light system dihedral angles.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... this section. (b) Dihedral angle L (left) is formed by two intersecting vertical planes, the first... two intersecting vertical planes, the first parallel to the longitudinal axis of the rotorcraft, and... longitudinal axis. (d) Dihedral angle A (aft) is formed by two intersecting vertical planes making angles of 70...

  4. Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.

    PubMed

    Dematté, Lorenzo

    2012-01-01

    Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space has become more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transportation phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation can be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. In order to deliver on the promise made by systems biology to understand a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single-molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real-time, high-quality graphics output.
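    The per-molecule diffusion update at the heart of a Smoldyn-style simulator is exactly the kind of kernel a GPU evaluates in parallel, one thread per molecule. A pure-Python stand-in (parameters illustrative, not from the paper):

```python
import math, random

# Each molecule takes an independent Gaussian diffusive step with standard
# deviation sqrt(2 * D * dt) per coordinate -- the update a GPU version
# computes in parallel over all molecules.

def diffuse(positions, D, dt, rng):
    """One Brownian-dynamics step for a list of (x, y, z) positions."""
    s = math.sqrt(2.0 * D * dt)
    return [(x + rng.gauss(0.0, s), y + rng.gauss(0.0, s), z + rng.gauss(0.0, s))
            for x, y, z in positions]

rng = random.Random(1)
mols = [(0.0, 0.0, 0.0)] * 2000          # all molecules start at the origin
for _ in range(50):
    mols = diffuse(mols, D=1.0, dt=1e-3, rng=rng)

# After t = 50 * 1e-3, the mean squared displacement should approach
# 6 * D * t = 0.3 for free diffusion in three dimensions.
msd = sum(x * x + y * y + z * z for x, y, z in mols) / len(mols)
print(round(msd, 3))
```

Because every molecule's step is independent, the loop over molecules maps directly onto GPU threads, which is the source of the reported speed-ups.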

  5. Computer calculation of device, circuit, equipment, and system reliability.

    NASA Technical Reports Server (NTRS)

    Crosby, D. R.

    1972-01-01

    A grouping into four classes is proposed for all reliability computations related to electronic equipment. Examples are presented of reliability computations in three of these four classes. Each of the three specific reliability tasks described was originally undertaken to satisfy an engineering need for reliability data. The form and interpretation of the print-out of the specific reliability computations are presented. The justification for the costs of these computations is indicated. The skills of the personnel used to conduct the analysis, the interfaces between the personnel, and the timing of the projects are discussed.

  6. Fast parallel molecular algorithms for DNA-based computation: factoring integers.

    PubMed

    Chang, Weng-Long; Guo, Minyi; Ho, Michael Shan-Hui

    2005-06-01

    The RSA public-key cryptosystem is an algorithm that converts input data into an unrecognizable encrypted form and converts the unrecognizable data back into its original decrypted form. The security of the RSA public-key cryptosystem is based on the difficulty of factoring the product of two large prime numbers. This paper demonstrates how to factor the product of two large prime numbers, a breakthrough in basic biological operations, using a molecular computer. In order to achieve this, we propose three DNA-based algorithms, for a parallel subtractor, a parallel comparator, and parallel modular arithmetic, and formally verify our designed molecular solutions for factoring the product of two large prime numbers. Furthermore, this work indicates that public-key cryptosystems are perhaps insecure and also presents clear evidence of the ability of molecular computing to perform complicated mathematical operations.
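    The security claim above rests on factoring N = p·q being hard for large primes. A toy classical sketch of what the DNA-based subtractor/comparator/modular-arithmetic pipeline computes (in massive parallel over DNA strands); trial division is only feasible here because N is tiny.

```python
# Toy semiprime factoring by trial division.  The modular test n % d == 0
# is the classical counterpart of the paper's DNA-based modular arithmetic;
# the candidate loop is what the molecular computer explores in parallel.

def factor_semiprime(n):
    """Return (p, q) with p * q == n by testing candidate divisors."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d, n // d
    raise ValueError("no nontrivial factor found")

print(factor_semiprime(91))   # 91 = 7 * 13
```

The exponential blow-up of this search with the bit-length of N is precisely what RSA's security relies on for classical machines.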

  7. Microglomerular Synaptic Complexes in the Sky-Compass Network of the Honeybee Connect Parallel Pathways from the Anterior Optic Tubercle to the Central Complex.

    PubMed

    Held, Martina; Berz, Annuska; Hensgen, Ronja; Muenz, Thomas S; Scholl, Christina; Rössler, Wolfgang; Homberg, Uwe; Pfeiffer, Keram

    2016-01-01

    While the ability of honeybees to navigate relying on sky-compass information has been investigated in a large number of behavioral studies, the underlying neuronal system has so far received less attention. The sky-compass pathway has recently been described from its input region, the dorsal rim area (DRA) of the compound eye, to the anterior optic tubercle (AOTU). The aim of this study is to reveal the connection from the AOTU to the central complex (CX). For this purpose, we investigated the anatomy of large microglomerular synaptic complexes in the medial and lateral bulbs (MBUs/LBUs) of the lateral complex (LX). The synaptic complexes are formed by tubercle-lateral accessory lobe neuron 1 (TuLAL1) neurons of the AOTU and GABAergic tangential neurons of the central body's (CB) lower division (TL neurons). Both TuLAL1 and TL neurons strongly resemble neurons forming these complexes in other insect species. We further investigated the ultrastructure of these synaptic complexes using transmission electron microscopy. We found that single large presynaptic terminals of TuLAL1 neurons enclose many small profiles (SPs) of TL neurons. The synaptic connections between these neurons are established by two types of synapses: divergent dyads and divergent tetrads. Our data support the assumption that these complexes are a highly conserved feature in the insect brain and play an important role in reliable signal transmission within the sky-compass pathway.

  8. Predictive Genomic Analyses Inform the Basis for Vitamin Metabolism and Provisioning in Bacteria-Arthropod Endosymbioses

    PubMed Central

    Serbus, Laura R.; Rodriguez, Brian Garcia; Sharmin, Zinat; Momtaz, A. J. M. Zehadee; Christensen, Steen

    2017-01-01

    The requirement of vitamins for core metabolic processes creates a unique set of pressures for arthropods subsisting on nutrient-limited diets. While endosymbiotic bacteria carried by arthropods have been widely implicated in vitamin provisioning, the underlying molecular mechanisms are not well understood. To address this issue, standardized predictive assessment of vitamin metabolism was performed in 50 endosymbionts of insects and arachnids. The results predicted that arthropod endosymbionts overall have little capacity for complete de novo biosynthesis of conventional or active vitamin forms. Partial biosynthesis pathways were commonly predicted, suggesting a substantial role in vitamin provisioning. Neither taxonomic relationships between host and symbiont, nor the mode of host-symbiont interaction were clear predictors of endosymbiont vitamin pathway capacity. Endosymbiont genome size and the synthetic capacity of nonsymbiont taxonomic relatives were more reliable predictors. We developed a new software application that also predicted that last-step conversion of intermediates into active vitamin forms may contribute further to vitamin biosynthesis by endosymbionts. Most instances of predicted vitamin conversion were paralleled by predictions of vitamin use. This is consistent with achievement of provisioning in some cases through upregulation of pathways that were retained for endosymbiont benefit. The predicted absence of other enzyme classes further suggests a baseline of vitamin requirement by the majority of endosymbionts, as well as some instances of putative mutualism. Adaptation of this workflow to analysis of other organisms and metabolic pathways will provide new routes for considering the molecular basis for symbiosis on a comprehensive scale. PMID:28455417

  9. VCSEL technology for medical diagnostics and therapeutics

    NASA Astrophysics Data System (ADS)

    Hibbs-Brenner, M. K.; Johnson, K. L.; Bendett, M.

    2009-02-01

    In the 1990s a new laser technology, Vertical Cavity Surface Emitting Lasers, or VCSELs, emerged and transformed the data communication industry. The combination of performance characteristics, reliability and performance/cost ratio allowed high-data-rate communication over short distances at a commercially viable price. VCSELs have not been widely used outside of this application space, but with the development of new attributes, such as a wider range of available wavelengths, the demonstration of arrays of VCSELs on a single chip, and a variety of package form factors, VCSELs can have a significant impact on medical diagnostic and therapeutic applications. One area of potential application is neurostimulation. Researchers have previously demonstrated the feasibility of using 1850nm light for nerve stimulation. The ability to create an array of VCSELs emitting at this wavelength would allow significantly improved spatial resolution and multiple parallel channels of stimulation. For instance, 2D arrays of 100 lasers or more can be integrated on a single chip less than 2mm on a side. A second area of interest is non-invasive sensing. Performance attributes such as narrow spectral width, low power consumption, and packaging flexibility open up new possibilities in non-invasive and/or continuous sensing. This paper will suggest ways in which VCSELs can be implemented within these application areas, and the advantages provided by the unique performance characteristics of the VCSEL. The status of VCSEL technology as a function of available wavelength, array size and form factors will be summarized.

  10. Study of solid rocket motors for a space shuttle booster. Volume 2, book 1: Analysis and design

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An analysis of the factors which determined the selection of the solid rocket propellant engines for the space shuttle booster is presented. The 156 inch diameter, parallel burn engine was selected because of its transportability, cost effectiveness, and reliability. Other factors which caused favorable consideration are: (1) recovery and reuse are feasible and offer substantial cost savings, (2) abort can be easily accomplished, and (3) ecological effects are acceptable.

  11. Geodetic Observatory Wettzell - 20-m Radio Telescope and Twin Telescope

    NASA Technical Reports Server (NTRS)

    Neidhardt, Alexander; Kronschnabl, Gerhard; Schatz, Raimund

    2013-01-01

    In the year 2012, the 20-m radio telescope at the Geodetic Observatory Wettzell, Germany again contributed very successfully to the International VLBI Service for Geodesy and Astrometry observing program. Technical changes, developments, improvements, and upgrades were made to increase the reliability of the entire VLBI observing system. In parallel, the new Twin radio telescope Wettzell (TTW) got the first feedhorn, while the construction of the HF-receiving and the controlling system was continued.

  12. Reliable, Memory Speed Storage for Cluster Computing Frameworks

    DTIC Science & Technology

    2014-06-16

    specification API that can capture computations in many of today's popular data-parallel computing models, e.g., MapReduce and SQL. We also ported the Hadoop ... today's big data workloads: • Immutable data: Data is immutable once written, since dominant underlying storage systems, such as HDFS [3], only support ... network transfers, so reads can be data-local. • Program size vs. data size: In big data processing, the same operation is repeatedly applied on massive

  13. Voltage-controlled magnetization switching in MRAMs in conjunction with spin-transfer torque and applied magnetic field

    NASA Astrophysics Data System (ADS)

    Munira, Kamaram; Pandey, Sumeet C.; Kula, Witold; Sandhu, Gurtej S.

    2016-11-01

    The voltage-controlled magnetic anisotropy (VCMA) effect has attracted a significant amount of attention in recent years because of its low cell power consumption during the anisotropy modulation of a thin ferromagnetic film. However, the applied voltage or electric field alone is not enough to completely and reliably reverse the magnetization of the free layer of a magnetic random access memory (MRAM) cell from the anti-parallel to the parallel configuration or vice versa. An additional symmetry-breaking mechanism needs to be employed to ensure a deterministic writing process. Combinations of voltage-controlled magnetic anisotropy with spin-transfer torque (STT) and with an applied magnetic field (Happ) were evaluated for switching reliability, time taken to switch with a low error rate, and energy consumption during the switching process. In order to get a low write error rate in an MRAM cell with the VCMA switching mechanism, a spin-transfer torque current or an applied magnetic field comparable to the critical current and field of the free layer is necessary. In the hybrid processes, the VCMA effect shortens the duration during which the more power-hungry secondary mechanism is in place. Therefore, the total energy consumed during the hybrid writing processes, VCMA + STT or VCMA + Happ, is less than the energy consumed during pure spin-transfer torque or applied magnetic field switching.

  14. Surface-modified CMOS IC electrochemical sensor array targeting single chromaffin cells for highly parallel amperometry measurements.

    PubMed

    Huang, Meng; Delacruz, Joannalyn B; Ruelas, John C; Rathore, Shailendra S; Lindau, Manfred

    2018-01-01

    Amperometry is a powerful method to record quantal release events from chromaffin cells and is widely used to assess how specific drugs modify quantal size, kinetics of release, and early fusion pore properties. Surface-modified CMOS-based electrochemical sensor arrays allow simultaneous recordings from multiple cells. A reliable, low-cost technique is presented here for efficient targeting of single cells specifically to the electrode sites. An SU-8 microwell structure is patterned on the chip surface to provide insulation for the circuitry as well as cell trapping at the electrode sites. A shifted electrode design is also incorporated to increase the flexibility of the dimension and shape of the microwells. The sensitivity of the electrodes is validated by a dopamine injection experiment. Microwells with dimensions slightly larger than the cells to be trapped ensure excellent single-cell targeting efficiency, increasing the reliability and efficiency for on-chip single-cell amperometry measurements. The surface-modified device was validated with parallel recordings of live chromaffin cells trapped in the microwells. Rapid amperometric spikes with no diffusional broadening were observed, indicating that the trapped and recorded cells were in very close contact with the electrodes. The live cell recording confirms in a single experiment that spike parameters vary significantly from cell to cell but the large number of cells recorded simultaneously provides the statistical significance.

  15. Femtosecond laser cutting of human corneas for the subbasal nerve plexus evaluation.

    PubMed

    Kowtharapu, B S; Marfurt, C; Hovakimyan, M; Will, F; Richter, H; Wree, A; Stachs, O; Guthoff, R F

    2017-01-01

    Assessment of various morphological parameters of the corneal subbasal nerve plexus (SBP) is a valuable method of documenting the structural and presumably functional integrity of the corneal innervation in health and disease. The aim of this work is to establish a rapid, reliable and reproducible method for visualization of the human corneal SBP using femtosecond-laser-cut corneal tissue sections. Trephined healthy corneal buttons were fixed and processed using TissueSurgeon, a femtosecond-laser-based microtome, to obtain thick tissue sections of the corneal epithelium and anterior stroma cut parallel to the ocular surface within approximately 15 min. A near-infrared femtosecond laser was focused onto the cornea approximately 70-90 μm below the anterior surface to induce material separation. The obtained corneal sections were stained following standard immunohistochemical procedures with an anti-neuronal β-III tubulin antibody for visualization of the corneal nerves. Sections that contained the epithelium and approximately 20-30 μm of anterior stroma yielded excellent visualization of the SBP with minimal optical interference from underlying stromal nerves. In conclusion, the results of this study demonstrate that femtosecond laser cutting of the human cornea offers greater speed, ease and reliability than standard tissue preparation methods for obtaining high-quality thick sections of the anterior cornea cut parallel to the ocular surface. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  16. A systematic approach to embedded biomedical decision making.

    PubMed

    Song, Zhe; Ji, Zhongkai; Ma, Jian-Guo; Sputh, Bernhard; Acharya, U Rajendra; Faust, Oliver

    2012-11-01

    Embedded decision making is a key feature of many biomedical systems. In most cases human life directly depends on correct decisions made by these systems; therefore, they have to work reliably. This paper describes how we applied systems engineering principles to design a high-performance embedded classification system in a systematic and well-structured way. We introduce the structured design approach by discussing requirements capturing, specification refinement, implementation and testing. Thereby, we follow systems engineering principles and execute each of these processes as formally as possible. The requirements, which motivate the system design, describe an automated decision making system for diagnostic support. These requirements are refined into the implementation of a support vector machine (SVM) algorithm, which enables us to integrate automated decision making in embedded systems. With a formal model we establish the functionality, stability and reliability of the system. Furthermore, we investigated different parallel processing configurations of this computationally complex algorithm. We found that, by adding SVM processes, an almost linear speedup is possible. Once we had established these system properties, we translated the formal model into an implementation. The resulting implementation was tested using XMOS processors with both normal and failure cases, to build up trust in the implementation. Finally, we demonstrated that our parallel implementation achieves the speedup predicted by the formal model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
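    The near-linear speedup behaviour reported above is what Amdahl's law predicts when the serial fraction of the workload is small. A hedged sketch (the serial fraction here is hypothetical, not the paper's measured value):

```python
# Amdahl's law: speedup of a workload whose fraction `serial_fraction`
# cannot be parallelized across N processes.  With a small serial fraction,
# speedup stays close to linear for modest process counts, matching the
# "almost linear speedup" regime described in the abstract.

def amdahl_speedup(n_procs, serial_fraction):
    """Predicted speedup for n_procs parallel workers."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

for n in (1, 2, 4, 8):
    print(n, round(amdahl_speedup(n, 0.02), 2))
```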

  17. Electron parallel closures for various ion charge numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Jeong-Young, E-mail: j.ji@usu.edu; Held, Eric D.; Kim, Sang-Kyeun

    2016-03-15

    Electron parallel closures for the ion charge number Z = 1 [J.-Y. Ji and E. D. Held, Phys. Plasmas 21, 122116 (2014)] are extended to 1 ≤ Z ≤ 10. Parameters are computed for various Z, adopting the same form as the Z = 1 kernels. The parameters vary smoothly in Z and hence can be interpolated to obtain parameters and closures for noninteger, effective ion charge numbers.
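    Because the parameters vary smoothly in Z, obtaining a closure for a noninteger effective charge is a simple interpolation. A sketch with a hypothetical parameter table (the numbers below are made up, standing in for the paper's fitted parameters):

```python
# Linear interpolation of a closure parameter between integer-Z entries.
# The table is hypothetical; in practice each fitted kernel parameter
# would get its own table over 1 <= Z <= 10.

param_table = {1: 3.20, 2: 2.45, 3: 2.05, 4: 1.80, 5: 1.63}

def interpolate_param(z_eff, table):
    """Closure parameter at a noninteger effective charge number Z_eff."""
    zs = sorted(table)
    if not zs[0] <= z_eff <= zs[-1]:
        raise ValueError("Z_eff outside tabulated range")
    for lo, hi in zip(zs, zs[1:]):
        if lo <= z_eff <= hi:
            w = (z_eff - lo) / (hi - lo)
            return (1 - w) * table[lo] + w * table[hi]

print(interpolate_param(1.5, param_table))
```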

  18. Optimal Super Dielectric Material

    DTIC Science & Technology

    2015-09-01

    ...containing liquid with dissolved ionic species will form large dipoles, polarized opposite the applied field. Large dipole SDM placed between the...electrodes of a parallel plate capacitor will reduce the net field to an unprecedented extent. This family of materials can form materials with

  19. Geometrization of the Dirac theory of the electron

    NASA Technical Reports Server (NTRS)

    Fock, V.

    1977-01-01

    Using the concept of parallel displacement of a half-vector, the Dirac equations are written in a generally invariant form. The energy tensor is formed, and both the macroscopic and quantum-mechanical equations of motion are set up. The former have the usual form: the divergence of the energy tensor equals the Lorentz force; the latter are essentially identical with those of the geodesic line.

  20. Development and validation of the Spanish-English Language Proficiency Scale (SELPS).

    PubMed

    Smyk, Ekaterina; Restrepo, M Adelaida; Gorin, Joanna S; Gray, Shelley

    2013-07-01

    This study examined the development and validation of a criterion-referenced Spanish-English Language Proficiency Scale (SELPS) that was designed to assess the oral language skills of sequential bilingual children ages 4-8. This article reports results for the English proficiency portion of the scale. The SELPS assesses syntactic complexity, grammatical accuracy, verbal fluency, and lexical diversity based on 2 story retell tasks. In Study 1, 40 children were given 2 story retell tasks to evaluate the reliability of parallel forms. In Study 2, 76 children participated in the validation of the scale against language sample measures and teacher ratings of language proficiency. Study 1 indicated no significant differences between the SELPS scores on the 2 stories. Study 2 indicated that the SELPS scores correlated significantly with their counterpart language sample measures. Correlations between the SELPS and teacher ratings were moderate. The 2 story retells elicited comparable SELPS scores, providing a valuable tool for test-retest conditions in the assessment of language proficiency. Correlations between the SELPS scores and external variables indicated that these measures assessed the same language skills. Results provided empirical evidence regarding the validity of inferences about language proficiency based on the SELPS score.
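    Parallel-forms reliability of the kind examined with the two story-retell tasks above is commonly estimated as the Pearson correlation between scores on the two forms. A self-contained sketch with made-up scores:

```python
import math

# Pearson correlation between scores on two parallel forms; a high r
# indicates the forms rank examinees consistently.  Scores are made up.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

form_a = [12, 15, 9, 18, 14, 11]   # hypothetical story-retell 1 scores
form_b = [13, 14, 10, 17, 15, 10]  # hypothetical story-retell 2 scores
print(round(pearson_r(form_a, form_b), 3))
```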

  1. Sensory motor systems of artificial and natural hands.

    PubMed

    Chappell, Paul H; Cranny, Andy; Cotton, Darryl P J; White, Neil M; Beeby, Steve P

    2007-12-01

    The surgeon Ambroise Paré designed an anthropomorphic hand for wounded soldiers in the 16th century. Since that time, there have been advances in technology through the use of computer-aided design, modern materials, electronic controllers and sensors to realise artificial hands which have good functionality and reliability. Data from touch, object slip, finger position and temperature sensors, mounted in the fingers and on the palm, can be used in feedback loops to automatically hold objects. A study of the natural neuromuscular systems reveals a complexity which can only in part be realised today with technology. Highlights of the parallels and differences between natural and artificial hands are discussed with reference to the Southampton Hand. The anatomical structure of parts of the natural systems can be made artificially, such as the antagonist muscles using tendons. These solutions look promising, as they are based on the natural form, but in practice they lack the desired physical specification. However, concepts of the lower spinal loops can be mimicked in principle. Some future devices will require greater skills from the surgeon to create the interface between the natural system and an artificial device. Such developments may offer more natural control with ease of use for the limb-deficient person.

  2. Correlation of Bedrock Type with the Geography of Leptospirosis

    PubMed Central

    Kingscote, Barbara F.

    1970-01-01

    Leptospirosis occurs enzootically over most of Southern Ontario. Leptospira pomona is the serotype most commonly found in outbreaks. Antibodies to L. pomona occur frequently in the sera of deer in wilderness areas. The geographic distribution of leptospirosis presents a pattern which closely parallels the distribution of Paleozoic bedrock. By contrast, L. pomona infection is absent from areas underlain by Precambrian bedrock. Comparisons of water chemistry, soil type, habitat, and host and pathogen availability in these two geologically distinct environments have not defined the mechanisms involved in the disease pattern. Leptospires resembling saprophytic strains occur widely, regardless of bedrock type. High titers to L. biflexa, a saprophytic serotype, were found frequently in deer sera from a Precambrian area which was surveyed intensively. Antibodies to L. hardjo and L. sejroe occur in many bovine sera from a predominantly Precambrian area where Paleozoic outliers are numerous. Colloidal clay is common to leptospiral habitats. A microenvironment structured by the surface activity of clay is likely to be a key ecological factor in the landscape epizootiology of leptospirosis. In Ontario, bedrock composed of limestone and dolomite formed in the Paleozoic era appears to be a reliable ecological marker for Leptospira pomona infection. PMID:4246001

  3. Active flow control insight gained from a modified integral boundary layer equation

    NASA Astrophysics Data System (ADS)

    Seifert, Avraham

    2016-11-01

    Active Flow Control (AFC) can alter the development of boundary layers, with applications such as reducing drag by delaying separation, or separating the boundary layers and enhancing vortex shedding to increase drag. Historically, significant effects of steady AFC methods were observed. Unsteady actuation, however, is significantly more efficient than steady actuation. Full-scale AFC tests have been conducted with varying levels of success. While clearly relevant to industry, AFC implementation relies on expert knowledge with proven intuition, and/or costly and lengthy computational efforts; the absence of a simple, quick and reliable design method hinders the use of AFC. An updated form of the unsteady integral boundary layer (UIBL) equations, which includes AFC terms (unsteady wall transpiration and body forces), can be used to assist in AFC analysis and design. With these equations and a family of suitable velocity profiles, the momentum thickness can be calculated and matched with an outer, potential-flow solution in both 2D and 3D to create an AFC design tool, parallel to proven tools for airfoil design. Limiting cases of the UIBL equation can be used to analyze candidate AFC concepts in terms of their capability to modify boundary-layer development and system performance.
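
    As a concrete, heavily simplified illustration of the momentum-integral idea behind such design tools, the sketch below marches the classical steady von Kármán momentum integral for a laminar flat plate with an assumed parabolic velocity profile. This is a hedged example only: the closure, viscosity and edge velocity are assumed values, not the authors' unsteady AFC formulation.

```python
import math

# Hedged sketch (NOT the authors' UIBL formulation): the classical steady
# von Karman momentum integral d(theta)/dx = tau_w / (rho * U^2) for a
# laminar flat plate, closed with a parabolic profile u/U = 2*eta - eta^2,
# for which theta = (2/15)*delta and tau_w = 2*mu*U/delta. This reduces to
# d(delta)/dx = 15*nu / (U*delta), with exact solution delta = sqrt(30*nu*x/U).

nu = 1.5e-5   # kinematic viscosity of air, m^2/s (assumed)
U = 10.0      # boundary-layer edge velocity, m/s (assumed)

def march_delta(x_end, n=100_000):
    """Forward-Euler march of the boundary-layer thickness delta(x)."""
    dx = x_end / n
    delta = 1e-6                      # tiny initial thickness to avoid 1/0
    for _ in range(n):
        delta += dx * 15.0 * nu / (U * delta)
    return delta

x = 0.5
numeric = march_delta(x)
analytic = math.sqrt(30.0 * nu * x / U)
print(numeric, analytic)   # the Euler march reproduces the exact solution
```

    The momentum thickness then follows from the assumed profile closure as theta = (2/15) * delta.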

  4. Determination of Fermi contour and spin polarization of ν = 3/2 composite fermions via ballistic commensurability measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamburov, D.; Mueed, M. A.; Jo, I.

    2014-12-01

    We report ballistic transport commensurability minima in the magnetoresistance of nu = 3/2 composite fermions (CFs). The CFs are formed in high-quality two-dimensional electron systems confined to wide GaAs quantum wells and subjected to an in-plane, unidirectional periodic potential modulation. We observe a slight asymmetry of the CF commensurability positions with respect to nu = 3/2, which we explain quantitatively by comparing three CF density models and concluding that the nu = 3/2 CFs are likely formed by the minority carriers in the upper-energy spin state of the lowest Landau level. Our data also allow us to probe the shape and size of the CF Fermi contour. At a fixed electron density of about 1.8x10^11 cm^-2, as the quantum well width increases from 30 to 60 nm, the CFs show increasing spin polarization. We attribute this to the enhancement of the Zeeman energy relative to the Coulomb energy in wider wells, where the latter is softened because of the larger electron layer thickness. The application of an additional parallel magnetic field (B_parallel) leads to a significant distortion of the CF Fermi contour as B_parallel couples to the CFs' out-of-plane orbital motion. The distortion is much more severe compared to the nu = 1/2 CF case at comparable B_parallel. Moreover, the applied B_parallel further spin-polarizes the nu = 3/2 CFs, as deduced from the positions of the commensurability minima.

  5. Parallel computation of multigroup reactivity coefficient using iterative method

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-01

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets form a stainless-steel tube in which high-enriched uranium is deposited, and irradiation of the tube is intended to obtain fission products; the fission material is widely used in the form of kits in nuclear medicine. Irradiating FPM tubes in the reactor core can disturb reactor performance, and one such disturbance comes from changes in flux or reactivity. A method is therefore needed for assessing safety under ongoing configuration changes during the life of the reactor, and making the code faster becomes an absolute necessity. The neutron safety margin of the research reactor can be re-evaluated without modifying the reactivity calculation, which is an advantage of the perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model is computationally demanding, and several parallel algorithms with iterative methods have been developed for large, sparse matrix solutions. The red-black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and the reactivity coefficient. In this research, a code for reactivity calculation was developed as part of the safety analysis, using parallel processing; the calculation can be done more quickly and efficiently by exploiting parallel processing on a multicore computer. The code was applied to calculate the safety limits of irradiated FPM targets with increasing uranium content.
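
    The power iteration mentioned above can be sketched on a toy eigenvalue problem. This is a generic illustration, not the authors' code: the loss and fission matrices below are made-up stand-ins, and k_eff is simply the dominant eigenvalue of L^-1 F.

```python
import numpy as np

# Hedged sketch of the power-iteration (outer/source iteration) idea on a toy
# problem: k_eff is the dominant eigenvalue of M = L^{-1} F, where L collects
# neutron losses and F the fission source. Both matrices are assumed stand-ins.

rng = np.random.default_rng(0)
L = np.diag([2.0, 3.0, 4.0]) + 0.1 * rng.random((3, 3))  # assumed loss operator
F = 0.5 * np.ones((3, 3))                                # assumed fission operator

def power_iteration(L, F, iters=200):
    """Estimate k_eff and the flux shape by outer (source) iteration."""
    phi = np.ones(L.shape[0])              # initial flux guess
    k = 1.0
    for _ in range(iters):
        phi_new = np.linalg.solve(L, F @ phi / k)   # solve L*phi' = F*phi/k
        k = k * np.linalg.norm(phi_new) / np.linalg.norm(phi)
        phi = phi_new / np.linalg.norm(phi_new)
    return k, phi

k_eff, flux = power_iteration(L, F)
# cross-check against a direct eigen-solve of L^{-1} F
k_ref = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(L, F))))
print(k_eff, k_ref)
```

    On parallel hardware, the matrix-vector products and the inner linear solve (e.g., red-black Gauss-Seidel) are the steps that are distributed across cores.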

  6. Evolution of Kelvin-Helmholtz instability at Venus in the presence of the parallel magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, H. Y.; Key Laboratory of Planetary Sciences, Chinese Academy of Sciences, Nanjing 210008; Cao, J. B.

    2015-06-15

    Two-dimensional MHD simulations were performed to study the evolution of the Kelvin-Helmholtz (KH) instability at the Venusian ionopause in response to strong flow shear in the presence of an in-plane magnetic field parallel to the flow direction. The physical behavior of the KH instability, as well as the triggering and occurrence conditions for highly rolled-up vortices, is characterized through several physical parameters, including the Alfvén Mach number on the upper side of the layer, the density ratio, and the ratio of parallel magnetic fields between the two sides of the layer. Using these parameters, the simulations show that both a high density ratio and the parallel magnetic field component across the boundary layer play a role in stabilizing the instability. In the high density ratio case, the amount of total magnetic energy in the final quasi-steady state is much larger than in the initial state, which is clearly different from the case with a low density ratio. We particularly investigate the nonlinear development of the case with a high density ratio and uniform magnetic field. Before the instability saturates, a single magnetic island is formed and evolves into two quasi-steady islands in the nonlinear phase. A quasi-steady pattern eventually forms, embedded within a uniform magnetic field and a broadened boundary layer. The estimation of ion loss rates from Venus indicates that the stabilizing effect of the parallel magnetic field component on the KH instability becomes strong in the case of a high density ratio.

  7. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of the interbody forces, which leads to a new factorization of the mass matrix M. Specifically, a factorization of the inverse of the mass matrix is derived in the form of a Schur complement, M^-1 = C - B*A^-1B, wherein the matrices C, A, and B are block tridiagonal. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^-1. For closed-chain systems, similar factorizations and O(n) algorithms for computation of the operational-space mass matrix Lambda and its inverse Lambda^-1 are also derived. These O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized to yield both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.

  8. Plagiarism Detection for Indonesian Language using Winnowing with Parallel Processing

    NASA Astrophysics Data System (ADS)

    Arifin, Y.; Isa, S. M.; Wulandhari, L. A.; Abdurachman, E.

    2018-03-01

    Plagiarism takes many forms: not only copy-and-paste, but also changing passive into active voice, or paraphrasing without appropriate acknowledgment. It occurs in all languages, including Indonesian. There is much previous research on plagiarism detection for Indonesian using different methods, but some aspects still leave room for improvement. This research proposes a solution that improves the plagiarism detection technique so that it detects not only the copy-and-paste form but more advanced forms as well. The proposed solution uses winnowing with additional steps in the pre-processing stage: stemming for Indonesian, and fingerprint generation with parallel processing, which saves processing time while still producing the plagiarism result for the suspected document.
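
    The winnowing step can be sketched as follows. This is the generic algorithm (Schleimer et al.) rather than the authors' implementation: the Indonesian stemming stage is replaced by a crude lowercase/whitespace normalisation, and ties within a window are broken by the leftmost minimum for simplicity.

```python
# Generic winnowing fingerprinting, sketched with an assumed crude
# normalisation in place of the paper's Indonesian stemming step.

def winnow(text, k=5, window=4):
    """Return the document fingerprint as a set of (hash, position) pairs."""
    s = "".join(text.lower().split())                 # strip case and spaces
    grams = [hash(s[i:i + k]) for i in range(max(0, len(s) - k + 1))]
    prints = set()
    for i in range(max(0, len(grams) - window + 1)):
        win = grams[i:i + window]
        j = min(range(window), key=lambda t: win[t])  # minimal hash in window
        prints.add((win[j], i + j))
    return prints

def similarity(a, b):
    """Jaccard similarity over the fingerprint hash values."""
    ha = {h for h, _ in winnow(a)}
    hb = {h for h, _ in winnow(b)}
    return len(ha & hb) / max(1, len(ha | hb))

doc = "Deteksi plagiarisme dokumen bahasa Indonesia"
copied = "deteksi  plagiarisme dokumen bahasa indonesia"  # case/space noise
other = "kalimat yang sama sekali berbeda isinya"
print(similarity(doc, copied))   # 1.0: normalisation removes the noise
print(similarity(doc, other))    # near 0 for unrelated text
```

    In the paper's setting, fingerprint generation for many k-grams is the step parallelised across workers, since each window's minimum is independent.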

  9. Ethyl 2-[(carbamothioylamino)imino]propanoate.

    PubMed

    Corrêa, Charlane C; Graúdo, José Eugênio J C; de Oliveira, Luiz Fernando C; de Almeida, Mauro V; Diniz, Renata

    2011-08-01

    The title compound, C(6)H(11)N(3)O(2)S, consists of a roughly planar molecule (r.m.s. deviation from planarity = 0.077 Å for the non-H atoms) and has the S atom in an anti position to the imine N atom. This N atom is the acceptor of a strongly bent internal N-H⋯N hydrogen bond donated by the amino group. In the crystal, molecules are arranged in undulating layers parallel to (010). The molecules are linked via intermolecular amino-carboxyl N-H⋯O hydrogen bonds, forming chains parallel to [001]. The chains are cross-linked by N(carbazone)-H⋯S and C-H⋯S interactions, forming infinite sheets.

  10. Validation of the Short Form of the Academic Procrastination Scale.

    PubMed

    Yockey, Ronald D

    2016-02-01

    The factor structure, internal consistency reliability, and convergent validity of the five-item Academic Procrastination Scale-Short Form was investigated on an ethnically diverse sample of college students. The results provided support for the Academic Procrastination Scale-Short Form as a unidimensional measure of academic procrastination, which possessed good internal consistency reliability in this sample of 282 students. The scale also demonstrated good convergent validity, with moderate to large correlations with both the Procrastination Assessment Scale-Students and the Tuckman Procrastination Scale. Implications of the results are discussed and recommendations for future work provided.
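
    For reference, the internal-consistency coefficient typically reported for short scales like this one, Cronbach's alpha, can be computed as below. The 5-item response matrix here is fabricated for the demonstration; it has no connection to the actual APS-SF data.

```python
import numpy as np

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# The "responses" below are simulated from one shared latent factor, so a high
# alpha is expected; none of this is the study's data.

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))                   # one shared latent factor
scores = trait + 0.5 * rng.normal(size=(200, 5))    # five correlated "items"
alpha = cronbach_alpha(scores)
print(round(alpha, 2))   # high, since the items share a single factor
```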

  11. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 1. Technical Report #1216

    ERIC Educational Resources Information Center

    Anderson, Daniel; Park, Jasmine, Bitnara; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in spring 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due…

  12. Exploiting Data Sparsity in Parallel Matrix Powers Computations

    DTIC Science & Technology

    2013-05-03

    This report concerns parallel matrix powers computations for matrices of the form A = D + USV^H, where D is sparse and USV^H has low rank but may be dense. Matrices of this form arise in many practical applications, including numerical partial differential equation solvers and preconditioned iterative methods. If A has this form, the algorithm enables a communication-avoiding matrix powers computation.
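
    The structural point can be illustrated directly: for A = D + USV^H, a matrix-vector product never requires forming the dense A. The sketch below uses a diagonal D and random low-rank factors purely as assumed stand-ins.

```python
import numpy as np

# Structure-exploiting product for A = D + U S V^H: with D sparse (diagonal
# here, as an assumed simplification) and U S V^H of low rank r, A @ x costs
# O(n*r) instead of O(n^2), and A is never formed densely.

n, r = 1000, 3
rng = np.random.default_rng(3)
d = rng.random(n) + 1.0                 # diagonal of D (assumed values)
U = rng.random((n, r))
S = np.diag(rng.random(r))
V = rng.random((n, r))

def matvec(x):
    """Compute (D + U S V^H) @ x without building the dense matrix."""
    return d * x + U @ (S @ (V.conj().T @ x))

x = rng.random(n)
dense = (np.diag(d) + U @ S @ V.conj().T) @ x   # explicit reference, check only
print(np.allclose(matvec(x), dense))            # True
```

    Repeating `matvec` k times yields the matrix powers A^k x with the same per-step savings, which is what makes the communication-avoiding formulation attractive.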

  13. Factors related to the parallel use of complementary and alternative medicine with conventional medicine among patients with chronic conditions in South Korea.

    PubMed

    Choi, Byunghee; Han, Dongwoon; Na, Seonsam; Lim, Byungmook

    2017-06-01

    This study aims to examine the characteristics and behavioral patterns of patients with chronic conditions underlying their parallel use of conventional medicine (CM) and complementary and alternative medicine (CAM), the latter including traditional Korean Medicine (KM). This cross-sectional study used a self-administered anonymous survey to obtain results from inpatients staying in three hospitals in Gyeongnam province in Korea. Of the 423 participants surveyed, 334 (79.0%) used some form of CAM, among which KM therapies were the most common modalities. The results of a logistic regression analysis showed that the parallel use pattern was most apparent in the groups aged over 40. Patients with hypertension or joint diseases showed a higher propensity for parallel use, whereas patients with diabetes did not. In addition, many sociodemographic and health-related characteristics are related to the patterns of parallel use of CAM and CM. In the rural area of Korea, most inpatients who used CM for the management of chronic conditions used CAM in parallel. KM was the most common CAM modality, and the pattern of parallel use varied according to the disease conditions.

  14. SQDFT: Spectral Quadrature method for large-scale parallel O(N) Kohn-Sham calculations at high temperature

    NASA Astrophysics Data System (ADS)

    Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj; Pask, John E.

    2018-03-01

    We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw-Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw-Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.
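
    For reference, the textbook Clenshaw-Curtis rule on [-1, 1] (the global version, not SQDFT's spatially localized variant) can be generated as follows.

```python
import numpy as np

# Clenshaw-Curtis quadrature: Chebyshev nodes x_k = cos(pi*k/n) with weights
# from the standard cosine-sum formula. This is the generic textbook rule,
# sketched here only to illustrate the quadrature underlying the SQ approach.

def clenshaw_curtis(n):
    """Nodes x_k = cos(pi*k/n), k = 0..n, and the matching quadrature weights."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.empty(n + 1)
    for k in range(n + 1):
        s = 0.0
        for j in range(1, n // 2 + 1):
            b = 1.0 if 2 * j == n else 2.0
            s += b * np.cos(2.0 * j * theta[k]) / (4.0 * j * j - 1.0)
        c = 1.0 if k in (0, n) else 2.0
        w[k] = (c / n) * (1.0 - s)
    return x, w

x, w = clenshaw_curtis(16)
approx = float(np.dot(w, np.exp(x)))     # integrate exp on [-1, 1]
exact = np.e - 1.0 / np.e
print(approx, exact)                     # agreement to near machine precision
```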

  15. Parallel Syndromes: Two Dimensions of Narcissism and the Facets of Psychopathic Personality in Criminally-Involved Individuals

    PubMed Central

    2012-01-01

    Little research has examined different dimensions of narcissism that may parallel psychopathy facets in criminally-involved individuals. The present study examined the pattern of relationships between grandiose and vulnerable narcissism, assessed using the Narcissistic Personality Inventory-16 and the Hypersensitive Narcissism Scale, respectively, and the four facets of psychopathy (interpersonal, affective, lifestyle, and antisocial) assessed via the Psychopathy Checklist: Screening Version (PCL:SV). As predicted, grandiose and vulnerable narcissism showed differential relationships to psychopathy facets, with grandiose narcissism relating positively to the interpersonal facet of psychopathy and vulnerable narcissism relating positively to the lifestyle facet of psychopathy. Paralleling existing psychopathy research, vulnerable narcissism showed stronger associations than grandiose narcissism to 1) other forms of psychopathology, including internalizing and substance use disorders, and 2) self- and other-directed aggression, measured using the Life History of Aggression and the Forms of Aggression Questionnaire. Grandiose narcissism was nonetheless associated with social dysfunction marked by a manipulative and deceitful interpersonal style and unprovoked aggression. Potentially important implications for uncovering etiological pathways and developing treatment interventions for these disorders in externalizing adults are discussed. PMID:22448731

  16. Real-time multi-mode neutron multiplicity counter

    DOEpatents

    Rowland, Mark S; Alvarez, Raymond A

    2013-02-26

    Embodiments are directed to a digital data acquisition method that collects data regarding nuclear fission at high rates and performs real-time preprocessing of large volumes of data into directly useable forms for use in a system that performs non-destructive assaying of nuclear material and assemblies for mass and multiplication of special nuclear material (SNM). Pulses from a multi-detector array are fed in parallel to individual inputs that are tied to individual bits in a digital word. Data is collected by loading a word at the individual bit level in parallel, to reduce the latency associated with current shift-register systems. The word is read at regular intervals, all bits simultaneously, with no manipulation. The word is passed to a number of storage locations for subsequent processing, thereby removing the front-end problem of pulse pileup. The word is used simultaneously in several internal processing schemes that assemble the data in a number of more directly useable forms. The detector includes a multi-mode counter that executes a number of different count algorithms in parallel to determine different attributes of the count data.
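
    The bit-level word assembly described above can be schematised in a few lines; the 8-detector width and all names here are assumed for illustration only.

```python
# Schematic of the bit-parallel capture: each detector drives one bit of a
# word, the word is sampled whole, and several counters can then consume the
# same word in parallel, as in the multi-mode counter described above.

def sample_word(detector_hits):
    """Pack a list of 0/1 detector states into a single integer word."""
    word = 0
    for bit, hit in enumerate(detector_hits):
        word |= (hit & 1) << bit
    return word

def count_multiplicity(word):
    """One of several count algorithms that can run in parallel on the word."""
    return bin(word).count("1")

hits = [1, 0, 1, 1, 0, 0, 0, 1]         # simultaneous pulses from 8 detectors
word = sample_word(hits)
print(word, count_multiplicity(word))   # 141 4
```

    Reading all bits of the word at once is what removes the pulse-pileup bottleneck of shift-register front ends.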

  17. Equivalent circuit for the characterization of the resonance mode in piezoelectric systems

    NASA Astrophysics Data System (ADS)

    Fernández-Afonso, Y.; García-Zaldívar, O.; Calderón-Piñar, F.

    2015-12-01

    The impedance properties of polarized piezoelectrics can be described by electric equivalent circuits. The classic circuit used in the literature to describe real systems is formed by one resistor (R), one inductance (L) and one capacitance (C) connected in series, and one capacitance (C0) connected in parallel with the former. Nevertheless, the equations that describe the resonance and anti-resonance frequencies depend in a complex manner on R, L, C and C0. In this work a simpler model is proposed, formed by one inductance (L) and one capacitance (C) in series; one capacitance (C0) in parallel; one resistor (RP) in parallel; and one resistor (RS) in series with the other components. Unlike the traditional circuit, the equivalent-circuit elements of the proposed model can be determined simply from the experimental values of the resonance frequency fr, the anti-resonance frequency fa, the impedance modulus at resonance |Zr|, the impedance modulus at anti-resonance |Za| and the low-frequency capacitance C0, without fitting the experimental impedance data to the derived equation.
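
    A numerical sketch of the proposed five-element circuit follows; the component values are assumed for illustration and are not fitted to any measured sample. The motional L-C branch sits in parallel with C0 and RP, and RS is in series with that combination.

```python
import numpy as np

# Impedance of the five-element model described above, with assumed values:
# series L-C motional branch, in parallel with C0 and RP, plus RS in series.
# The resonance fr is the |Z| minimum and the anti-resonance fa the maximum.

L, C = 10e-3, 1e-9        # motional inductance and capacitance (assumed)
C0 = 5e-9                 # parallel (clamped) capacitance (assumed)
RP, RS = 1e6, 50.0        # parallel and series resistors (assumed)

def impedance(f):
    w = 2.0 * np.pi * f
    z_motional = 1j * w * L + 1.0 / (1j * w * C)     # series L-C branch
    y = 1.0 / z_motional + 1j * w * C0 + 1.0 / RP    # parallel admittances
    return RS + 1.0 / y

f = np.linspace(20e3, 80e3, 200_001)
zmag = np.abs(impedance(f))
fr = f[np.argmin(zmag)]    # resonance: impedance-magnitude minimum
fa = f[np.argmax(zmag)]    # anti-resonance: impedance-magnitude maximum
print(fr, fa)              # fa > fr, as expected for a piezoelectric mode
```

    With these values the |Z| minimum falls essentially at the series resonance 1/(2*pi*sqrt(L*C)), which is the kind of closed-form relation the proposed model exploits.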

  18. Drastic stabilization of parallel DNA hybridizations by a polylysine comb-type copolymer with hydrophilic graft chain.

    PubMed

    Miyoshi, Daisuke; Ueda, Yu-Mi; Shimada, Naohiko; Nakano, Shu-Ichi; Sugimoto, Naoki; Maruyama, Atsushi

    2014-09-01

    Electrostatic interactions play a major role in protein-DNA interactions. As a model system of a cationic protein, herein we focused on a comb-type copolymer of a polycation backbone and dextran side chains, poly(L-lysine)-graft-dextran (PLL-g-Dex), which has been reported to form soluble interpolyelectrolyte complexes with DNA strands. We investigated the effects of PLL-g-Dex on the conformation and thermodynamics of DNA oligonucleotides forming various secondary structures. Thermodynamic analysis of the DNA structures showed that the parallel conformations involved in both DNA duplexes and triplexes were significantly and specifically stabilized by PLL-g-Dex. On the basis of thermodynamic parameters, it was further possible to design DNA switches that undergo structural transition responding to PLL-g-Dex from an antiparallel duplex to a parallel triplex even with mismatches in the third strand hybridization. These results suggest that polycationic molecules are able to induce structural polymorphism of DNA oligonucleotides, because of the conformation-selective stabilization effects. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Thermal stability of G-rich anti-parallel DNA triplexes upon insertion of LNA and α-L-LNA.

    PubMed

    Kosbar, Tamer R; Sofan, Mamdouh A; Abou-Zeid, Laila; Pedersen, Erik B

    2015-05-14

    G-rich anti-parallel DNA triplexes were modified with LNA or α-L-LNA in their Watson-Crick and TFO strands. The triplexes were formed by targeting a pyrimidine strand to a putative hairpin formed by Hoogsteen base pairing, in order to use the UV melting method to evaluate the stability of the triplexes. Their thermal stability was reduced when the TFO strand was modified with LNA or α-L-LNA. The same trend was observed when the TFO strand and the purine Watson-Crick strand were both modified with LNA. When all triad components were modified with α-L-LNA and LNA in the middle of the triplex, the thermal melting temperature increased. When the pyrimidine sequence was modified with a single insertion of LNA or α-L-LNA, the ΔTm increased. Moreover, increasing the number of α-L-LNA insertions in the pyrimidine target sequence to six leads to a large increase in thermal stability. The conformational S-type structure of α-L-LNA in anti-parallel triplexes is preferable for triplex stability.

  20. On some methods for improving time of reachability sets computation for the dynamic system control problem

    NASA Astrophysics Data System (ADS)

    Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir

    2016-12-01

    The paper presents two different approaches to reducing the computation time of reachability sets. The first approach uses different data structures for storing the reachability sets in computer memory for calculation in single-threaded mode. The second approach is based on parallel algorithms operating on the data structures from the first approach. Within the framework of this paper, a parallel algorithm for approximate reachability set calculation on a computer with SMP architecture is proposed. The results of numerical modelling are presented in the form of tables which demonstrate the high efficiency of parallel computing technology and also show how computing time depends on the data structure used.

  1. Fluorous Parallel Synthesis of A Hydantoin/Thiohydantoin Library

    PubMed Central

    Lu, Yimin; Zhang, Wei

    2007-01-01

    A fluorous tagging strategy is applied to the solution-phase parallel synthesis of a library containing hydantoin and thiohydantoin analogs. Two perfluoroalkyl (Rf)-tagged α-amino esters each react with 6 aromatic aldehydes under reductive amination conditions. The twelve resulting amino esters then each react with 10 isocyanates and isothiocyanates in parallel. The resulting 120 ureas and thioureas undergo spontaneous cyclization to form the corresponding hydantoins and thiohydantoins. The intermediate and final product purifications are performed with solid-phase extraction (SPE) over FluoroFlash™ cartridges; no chromatography is required. Using standard instruments and a straightforward SPE technique, one chemist accomplished the 120-member library synthesis in less than 5 working days, including starting material synthesis and product analysis. PMID:15789556

  2. Parallel algorithms for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Amin-Javaheri, Masoud; Orin, David E.

    1989-01-01

    The development of an O(log2N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations which are required to compute the diagonal elements of the matrix. It results in O(log2N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying-size and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log2N) algorithm is presented in both equation and graphic forms which clearly show the parallelism inherent in the algorithm.
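
    The recursive-doubling reformulation of a first-order linear recurrence can be sketched as a parallel prefix over affine maps. The serial loop below emulates what would be concurrent sweeps on parallel hardware; the variable names are assumed for illustration.

```python
# Recursive doubling: the recurrence x_i = a_i * x_{i-1} + b_i is a prefix
# composition of affine maps (a, b). Composition is associative, so log2(N)
# doubling sweeps suffice; each sweep's inner loop is the part that runs
# concurrently on parallel hardware (emulated serially here).

def recursive_doubling(a, b, x0=0.0):
    """Return all x_i for x_i = a[i]*x_{i-1} + b[i], starting from x0."""
    n = len(a)
    A, B = list(a), list(b)     # A[i], B[i]: composed map reaching x_i
    s = 1
    while s < n:
        newA, newB = A[:], B[:]
        for i in range(s, n):   # independent updates: parallel in hardware
            newA[i] = A[i] * A[i - s]
            newB[i] = A[i] * B[i - s] + B[i]
        A, B = newA, newB
        s *= 2
    return [A[i] * x0 + B[i] for i in range(n)]

# cross-check against the obvious serial recurrence
a = [0.5, 2.0, 1.0, 3.0, 0.25, 1.5]
b = [1.0, -1.0, 2.0, 0.0, 4.0, 1.0]
serial, x = [], 2.0
for ai, bi in zip(a, b):
    x = ai * x + bi
    serial.append(x)
print(recursive_doubling(a, b, 2.0))
print(serial)   # the two agree
```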

  3. Optimization of a Focusable and Rotatable Shear-Wave Periodic Permanent Magnet Electromagnetic Acoustic Transducers for Plates Inspection

    PubMed Central

    Qiu, Gongzhe

    2017-01-01

    Due to the symmetry of conventional periodic-permanent-magnet electromagnetic acoustic transducers (PPM EMATs), two shear-horizontal (SH) waves are generated and propagate simultaneously in opposite directions, which makes signal recognition and interpretation complicated. Thus, this work presents a new SH-wave PPM EMAT design that rotates the parallel line sources to focus the wave beam in a single direction. A theoretical model of distributed line sources was deduced first, and the effects of parameters such as the inner coil width, the adjacent line-source spacing and the angle between parallel line sources on SH-wave focusing and directivity were studied, mainly with the help of 3D FEM. Employing the proposed PPM EMATs, experiments were carried out to verify the reliability of the FEM simulation. The results indicate that rotating the parallel line sources strengthens the wave on the closing side of the line sources, and that decreasing the inner coil width and the adjacent line-source spacing improves the amplitude and directivity of the signals excited by the transducers. Compared with traditional PPM EMATs, both the capacity for unidirectional excitation and the directivity of the proposed PPM EMATs are improved significantly. PMID:29186790

  4. Optimization of a Focusable and Rotatable Shear-Wave Periodic Permanent Magnet Electromagnetic Acoustic Transducers for Plates Inspection.

    PubMed

    Song, Xiaochun; Qiu, Gongzhe

    2017-11-24

    Due to the symmetry of conventional periodic-permanent-magnet electromagnetic acoustic transducers (PPM EMATs), two shear-horizontal (SH) waves are generated and propagate simultaneously in opposite directions, which makes signal recognition and interpretation complicated. Thus, this work presents a new SH-wave PPM EMAT design that rotates the parallel line sources to focus the wave beam in a single direction. A theoretical model of distributed line sources was deduced first, and the effects of parameters such as the inner coil width, the adjacent line-source spacing and the angle between parallel line sources on SH-wave focusing and directivity were studied, mainly with the help of 3D FEM. Employing the proposed PPM EMATs, experiments were carried out to verify the reliability of the FEM simulation. The results indicate that rotating the parallel line sources strengthens the wave on the closing side of the line sources, and that decreasing the inner coil width and the adjacent line-source spacing improves the amplitude and directivity of the signals excited by the transducers. Compared with traditional PPM EMATs, both the capacity for unidirectional excitation and the directivity of the proposed PPM EMATs are improved significantly.

  5. Data flow modeling techniques

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.

  6. Resource Provisioning in SLA-Based Cluster Computing

    NASA Astrophysics Data System (ADS)

    Xiong, Kaiqi; Suh, Sang

    Cluster computing is excellent for parallel computation and has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality-of-service (QoS) targets and a fee agreed between a customer and an application service provider. It plays an important role in e-business applications. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach to resource provisioning in such an environment that minimizes the total cost of the cluster computing resources used by an application service provider for an e-business application, which often requires parallel computation for high service performance, availability, and reliability, while satisfying the QoS and the fee negotiated between the customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.

  7. Network Adjustment of Orbit Errors in SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Bahr, Hermann; Hanssen, Ramon

    2010-03-01

    Orbit errors can induce significant long-wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims at correcting orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular components as linear functions of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular baseline and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least squares approach, where the unwrapped residual interferometric phase is observed and atmospheric contributions are considered to be stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.
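
    As a toy analogue of the least squares step (not the paper's full network adjustment), the sketch below estimates a linear phase ramp plus constant offset from a synthetic unwrapped residual interferogram; all parameter values are fabricated.

```python
import numpy as np

# Synthetic residual phase = linear ramp (the orbit-error signal) + constant
# offset + zero-mean noise standing in for the atmosphere, then a least
# squares fit with a [x, y, 1] design matrix per pixel.

rng = np.random.default_rng(7)
ny, nx = 50, 60
y, x = np.mgrid[0:ny, 0:nx]
ramp_x, ramp_y, offset = 0.03, -0.02, 1.5      # "true" error parameters
phase = (ramp_x * x + ramp_y * y + offset
         + 0.1 * rng.normal(size=(ny, nx)))    # constant-mean atmospheric noise

A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
coef, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
print(coef.round(3))   # recovers approximately (0.03, -0.02, 1.5)
```

    In the actual method the adjusted quantities are baseline parameters in a network of interferograms rather than per-pixel ramp coefficients, but the estimation principle is the same.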

  8. In-Bore Prostate Transperineal Interventions with an MRI-guided Parallel Manipulator: System Development and Preliminary Evaluation

    PubMed Central

    Eslami, Sohrab; Shang, Weijian; Li, Gang; Patel, Nirav; Fischer, Gregory S.; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Iordachita, Iulian

    2015-01-01

    Background Robot-assisted minimally invasive surgery is well recognized as a feasible approach to the diagnosis and treatment of prostate cancer in humans. Methods This paper presents the kinematics of a parallel 4-degrees-of-freedom (DOF) surgical manipulator designed for minimally invasive in-bore prostate percutaneous interventions through the patient's perineum. The proposed manipulator takes advantage of 4 sliders actuated by MRI-compatible piezoelectric motors and incremental rotary encoders. Errors, mostly originating from the design and manufacturing process, need to be identified and reduced before the robot is deployed in clinical trials. Results The manipulator has undergone several experiments to evaluate the repeatability and accuracy of needle placement, which is an essential concern in percutaneous prostate interventions. Conclusion The acquired results endorse the sustainability, precision (about 1 mm in air, in the x or y direction, at the needle's reference point), and reliability of the manipulator. PMID:26111458

  9. Research on retailer data clustering algorithm based on Spark

    NASA Astrophysics Data System (ADS)

    Huang, Qiuman; Zhou, Feng

    2017-03-01

    Big data analysis is currently a major topic in the IT field. Spark is a highly reliable, high-performance distributed parallel computing framework for big data sets. The k-means algorithm is one of the classical partitioning methods in cluster analysis. In this paper, we study the k-means clustering algorithm on Spark. Firstly, the principle of the algorithm is analyzed, and then clustering analysis is carried out on supermarket customers through experiments to discover distinct shopping patterns. This paper also proposes a parallelization of the k-means algorithm on Spark's distributed computing framework and gives a concrete design and implementation scheme. We use two years of sales data from a supermarket to validate the proposed clustering algorithm and achieve the goal of segmenting customers, and then analyze the clustering results to help enterprises adopt different marketing strategies for different customer groups and improve sales performance.
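The clustering kernel that the paper parallelizes can be sketched serially. Below is a minimal Lloyd's k-means in NumPy, not the paper's Spark implementation; in Spark, the assignment step and the per-cluster averaging would be distributed across partitions of the data.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain (serial) Lloyd's k-means. Spark parallelizes the two steps in
    the loop: nearest-center assignment and per-cluster mean computation."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assignment step: nearest center for every point.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: recompute each center as its cluster mean.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```

On two well-separated groups of customers (points), the sketch recovers one center per group, which is the "subdividing customers" step described in the abstract.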

  10. Wide-field high-speed space-division multiplexing optical coherence tomography using an integrated photonic device

    PubMed Central

    Huang, Yongyang; Badar, Mudabbir; Nitkowski, Arthur; Weinroth, Aaron; Tansu, Nelson; Zhou, Chao

    2017-01-01

    Space-division multiplexing optical coherence tomography (SDM-OCT) is a recently developed parallel OCT imaging method that achieves a multi-fold speed improvement. However, the assembly of the fiber optic components used in the first prototype system was labor-intensive and susceptible to errors. Here, we demonstrate a high-speed SDM-OCT system using an integrated photonic chip that can be reliably manufactured with high precision and low per-unit cost. A three-layer cascade of 1 × 2 splitters was integrated in the photonic chip to split the incident light into 8 parallel imaging channels with ~3.7 mm optical delay in air between adjacent channels. High-speed imaging (~1 s/volume) of porcine eyes ex vivo and wide-field imaging (~18.0 × 14.3 mm2) of human fingers in vivo were demonstrated with the chip-based SDM-OCT system. PMID:28856055

  11. Life and dynamic capacity modeling for aircraft transmissions

    NASA Technical Reports Server (NTRS)

    Savage, Michael

    1991-01-01

    A computer program to simulate the dynamic capacity and life of parallel-shaft aircraft transmissions is presented. Five basic configurations can be analyzed: single mesh, compound, parallel, reverted, and single-plane reductions. In execution, the program prompts the user for the data file prefix name, takes input from an ASCII file, and writes its output to a second ASCII file with the same prefix name. The input data file includes the transmission configuration, the input shaft torque and speed, and descriptions of the transmission geometry and the component gears and bearings. The program output file describes the transmission and its components, their capacities, locations, and loads. It also lists the dynamic capacity, ninety-percent-reliability life, and mean life of each component and of the transmission as a system. The program, its input and output files, and the theory behind its operation are described.
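The system-level ninety-percent-reliability life reported by such a program is conventionally obtained by combining component lives under a weakest-link Weibull model (the Lundberg-Palmgren style combination used for gears and bearings). A minimal sketch follows; the common Weibull slope of 1.5 is an illustrative assumption, not a value taken from the program described above.

```python
def system_life_90(component_lives, weibull_slope=1.5):
    """Ninety-percent-reliability life of a series system from its
    components' L10 lives, assuming all components share one two-parameter
    Weibull slope: (1/L_sys)^e = sum_i (1/L_i)^e."""
    return sum(L ** (-weibull_slope) for L in component_lives) ** (-1.0 / weibull_slope)
```

Because failures compound in a series system, the system life is always shorter than the shortest component life; two identical components of L10 life 100 h give a system L10 life of 100 * 2^(-2/3), roughly 63 h.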

  12. A real time, FEM based optimal control algorithm and its implementation using parallel processing hardware (transputers) in a microprocessor environment

    NASA Technical Reports Server (NTRS)

    Patten, William Neff

    1989-01-01

    There is an evident need for a means of establishing reliable, implementable controls for systems that are plagued by nonlinear and/or uncertain model dynamics. The development of a generic controller design tool for tough-to-control systems is reported. The method utilizes a moving-grid, time finite element based solution of the necessary conditions that describe an optimal controller for a system. The technique produces a discrete feedback controller. Real-time laboratory experiments are now being conducted to demonstrate the viability of the method. The resulting algorithm is being implemented in a microprocessor environment. Critical computational tasks are accomplished using low-cost, on-board multiprocessors (INMOS T800 Transputers) and parallel processing. Progress to date validates the methodology presented. Applications of the technique to the control of highly flexible robotic appendages are suggested.

  13. Extreme Forms of Child Labour in Turkey

    ERIC Educational Resources Information Center

    Degirmencioglu, Serdar M.; Acar, Hakan; Acar, Yuksel Baykara

    2008-01-01

    Two little known forms of child labour in Turkey are examined. The process through which these children are made to work has parallels with the experiences of slaves. First, a long-standing practice from Northwestern Turkey of parents hiring children to better-off farmers is examined. Further, a more recent problem is examined where children are…

  14. Large boron--epoxy filament-wound pressure vessels

    NASA Technical Reports Server (NTRS)

    Jensen, W. M.; Bailey, R. L.; Knoell, A. C.

    1973-01-01

    The advanced composite material used to fabricate the pressure vessel is a prepreg (partially cured) tape consisting of continuous, parallel boron filaments in an epoxy resin matrix. To fabricate the chamber, the tape is wound on a form which must be removable after the composite has been cured. The configuration of the boron-epoxy composite pressure vessel was determined by a computer program.

  15. Wood

    Treesearch

    David W. Green; Robert H. White; Antoni TenWolde; William Simpson; Joseph Murphy; Robert J. Ross; Roland Hernandez; Stan T. Lebow

    2006-01-01

    Wood is a naturally formed organic material consisting essentially of elongated tubular elements called cells arranged in a parallel manner for the most part. These cells vary in dimensions and wall thickness with position in the tree, age, conditions of growth, and kind of tree. The walls of the cells are formed principally of chain molecules of cellulose, polymerized...

  16. Parallel computational and experimental studies of the morphological modification of calcium carbonate by cobalt

    NASA Astrophysics Data System (ADS)

    Braybrook, A. L.; Heywood, B. R.; Jackson, R. A.; Pitt, K.

    2002-08-01

    Crystal growth can be controlled by the incorporation of dopant ions into the lattice, and yet the question of how such substituents affect the morphology has not been addressed. This paper describes the forms of calcite (CaCO3) which arise when the growth assay is doped with cobalt. Distinct and specific morphological changes are observed; the calcite crystals adopt a morphology which is dominated by the {01.1} family of faces. These experimental studies paralleled the development of computational methods for the analysis of crystal habit as a function of dopant concentration. In this case, the predicted defect morphology also argued for the dominance of the (01.1) face in the growth form. The appearance of this face was related to the preferential segregation of the dopant ions to the crystal surface. This study confirms the evolution of a robust computational model for the analysis of calcite growth forms under a range of environmental conditions and presages the use of such tools for the predictive development of crystal morphologies in those applications where chemico-physical functionality is linked closely to a specific crystallographic form.

  17. Aggregation and Gelation of Aromatic Polyamides with Parallel and Anti-parallel Alignment of Molecular Dipole Along the Backbone

    NASA Astrophysics Data System (ADS)

    Zhu, Dan; Shang, Jing; Ye, Xiaodong; Shen, Jian

    2016-12-01

    The understanding of macromolecular structures and interactions is important but difficult, because macromolecules adopt versatile conformations and aggregate states that vary with environmental conditions and history. In this work, two polyamides with parallel or anti-parallel dipoles along the linear backbone, named ABAB (parallel) and AABB (anti-parallel), have been studied. Using a combination of methods, the phase behaviors of the polymers during aggregation and gelation, i.e., the formation or dissociation of nuclei, fibrils, clusters of fibrils, and cluster-cluster aggregates, have been revealed. These rich phase behaviors are dominated by the inter-chain interactions, including dispersion, polarity, and hydrogen bonding, and correlate with the solubility parameters of the solvents, the temperature, and the polymer concentration. The results of X-ray diffraction and fast-mode dielectric relaxation indicate that AABB possesses a more rigid conformation than ABAB; because of this, AABB aggregates into long fibers whereas ABAB forms hairy fibril clusters, and the gelation concentration in toluene is 1 w/v% for AABB, lower than the 3 w/v% for ABAB.

  18. P-HARP: A parallel dynamic spectral partitioner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohn, A.; Biswas, R.; Simon, H.D.

    1997-05-01

    Partitioning unstructured graphs is central to the parallel solution of problems in computational science and engineering. The authors have earlier introduced the sequential version of an inertial spectral partitioner called HARP, which maintains the quality of recursive spectral bisection (RSB) while forming the partitions an order of magnitude faster than RSB. The serial HARP is known to be the fastest spectral partitioner to date, three to four times faster than similar partitioners on a variety of meshes. This paper presents a parallel version of HARP, called P-HARP. Two types of parallelism have been exploited: loop-level parallelism and recursive parallelism. P-HARP has been implemented in MPI on the SGI/Cray T3E and the IBM SP2. Experimental results demonstrate that P-HARP can partition a mesh of over 100,000 vertices into 256 partitions in 0.25 seconds on a 64-processor T3E. Experimental results further show that P-HARP can give nearly a 20-fold speedup on 64 processors. These results indicate that graph partitioning is no longer a major bottleneck hindering the advancement of computational science and engineering for dynamically changing real-world applications.
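The kernel operation underlying RSB-style partitioners such as HARP is a single spectral bisection using the Fiedler vector of the graph Laplacian. A minimal NumPy illustration follows (dense eigensolve; not the inertial/spectral hybrid, the recursion, or the MPI parallelization described in the paper).

```python
import numpy as np

def spectral_bisect(adj):
    """Split a graph in two using the Fiedler vector of its Laplacian,
    the basic step that recursive spectral bisection applies repeatedly."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                      # graph Laplacian L = D - A
    vals, vecs = np.linalg.eigh(lap)     # eigenvalues in ascending order
    fiedler = vecs[:, 1]                 # eigenvector of 2nd-smallest eigenvalue
    return fiedler <= np.median(fiedler)  # boolean half/half partition
```

On two triangles joined by a single edge, the median split of the Fiedler vector separates the triangles, cutting only the bridging edge; a recursive partitioner would then bisect each half again.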

  19. Sensor Fusion, Prognostics, Diagnostics and Failure Mode Control for Complex Aerospace Systems

    DTIC Science & Technology

    2010-10-01

    algorithm and to then tune the candidates individually using known metaheuristics. As will be... parallel. The result of this arrangement is that the processing is a form that is analogous to standard parallel genetic algorithms, and as such... search algorithm then uses the hybrid of fitness data to rank the results. The ETRAS controller is developed using pre-selection, showing that a

  20. Vehicular impact absorption system

    NASA Technical Reports Server (NTRS)

    Knoell, A. C.; Wilson, A. H. (Inventor)

    1978-01-01

    An improved vehicular impact absorption system characterized by a plurality of aligned crash cushions of substantially cubic configuration is described. Each cushion consists of a plurality of voided aluminum beverage cans arranged in substantial parallelism within a plurality of superimposed tiers, and a covering envelope formed of metal hardware cloth. A plurality of cables extends through the cushions in substantial parallelism with the cushions' axis of alignment and is adapted to be anchored at each of the opposite ends thereof.
