Sample records for test method variability

  1. Error response test system and method using test mask variable

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.

  2. Analysis of Within-Test Variability of Non-Destructive Test Methods to Evaluate Compressive Strength of Normal Vibrated and Self-Compacting Concretes

    NASA Astrophysics Data System (ADS)

    Nepomuceno, Miguel C. S.; Lopes, Sérgio M. R.

    2017-10-01

    Non-destructive tests (NDT) have been used in recent decades for the assessment of in-situ quality and integrity of concrete elements. An important step in the application of NDT methods concerns the interpretation and validation of the test results. In general, interpretation of NDT results should involve three distinct phases leading to the development of conclusions: processing of the collected data, analysis of within-test variability, and quantitative evaluation of the property under investigation. The analysis of within-test variability can provide valuable information, since it can be compared with the within-test variability normally associated with the NDT method in use, either to provide a measure of quality control or to detect the presence of abnormal circumstances during the in-situ application. This paper reports the analysis of the experimental results of within-test variability of NDT obtained for normal vibrated concrete and self-compacting concrete. The NDT reported include the surface hardness test, ultrasonic pulse velocity test, penetration resistance test, pull-off test, pull-out test and maturity test. The obtained results are discussed and conclusions are presented.

  3. Parametric methods outperformed non-parametric methods in comparisons of discrete numerical variables.

    PubMed

    Fagerland, Morten W; Sandvik, Leiv; Mowinckel, Petter

    2011-04-13

    The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for the analysis and presentation of results for discrete numerical variables. Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
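
    A minimal sketch of the comparison described above, using SciPy; the simulated event counts, group sizes, and seed are illustrative assumptions, not the authors' simulation design.

    ```python
    # Compare two groups of discrete numerical outcomes (events per
    # individual in {0,...,5}) with the Welch-adjusted T test and the
    # Wilcoxon-Mann-Whitney test. Data are simulated for illustration only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a = rng.integers(0, 6, size=50)   # events per individual, group A
    b = rng.integers(0, 6, size=60)   # events per individual, group B

    # "Welch U test": the T test with adjustment for unequal variances
    t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)

    # Wilcoxon-Mann-Whitney, for contrast
    u_stat, p_wmw = stats.mannwhitneyu(a, b, alternative="two-sided")

    print(f"difference of means: {a.mean() - b.mean():.3f}")
    print(f"Welch: t={t_welch:.3f}, p={p_welch:.3f}")
    print(f"WMW:   U={u_stat:.1f}, p={p_wmw:.3f}")
    ```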

  4. Falsification Testing of Instrumental Variables Methods for Comparative Effectiveness Research.

    PubMed

    Pizer, Steven D

    2016-04-01

    To demonstrate how falsification tests can be used to evaluate instrumental variables methods applicable to a wide variety of comparative effectiveness research questions. Brief conceptual review of instrumental variables and falsification testing principles and techniques accompanied by an empirical application. Sample STATA code related to the empirical application is provided in the Appendix. Comparative long-term risks of sulfonylureas and thiazolidinediones for management of type 2 diabetes. Outcomes include mortality and hospitalization for an ambulatory care-sensitive condition. Prescribing pattern variations are used as instrumental variables. Falsification testing is an easily computed and powerful way to evaluate the validity of the key assumption underlying instrumental variables analysis. If falsification tests are used, instrumental variables techniques can help answer a multitude of important clinical questions. © Health Research and Educational Trust.

  5. Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard

    PubMed Central

    Tang, Liansheng Larry; Yuan, Ao; Collins, John; Che, Xuan; Chan, Leighton

    2017-01-01

    The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input “data.” It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is the transformed empirical ROC curve at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values, so that the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze 2 real cancer diagnostic examples as an illustration. PMID:28469385
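
    As a rough illustration of fitting ROC parameters by least squares on empirically estimated operating points, here is a generic binormal sketch; it is not the authors' estimator (nor Tang and Zhou's), and the marker distributions below are assumed.

    ```python
    # Least-squares fit of a binormal ROC curve using empirical
    # sensitivities/specificities as input "data". Illustrative only.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    healthy = rng.normal(0.0, 1.0, 200)    # marker values, non-diseased
    diseased = rng.normal(1.2, 1.0, 150)   # marker values, diseased

    pooled = np.concatenate([healthy, diseased])
    thresholds = np.quantile(pooled, np.linspace(0.1, 0.9, 17))
    fpr = np.array([(healthy > c).mean() for c in thresholds])
    tpr = np.array([(diseased > c).mean() for c in thresholds])

    # Binormal model: Phi^{-1}(TPR) = a + b * Phi^{-1}(FPR)
    x = norm.ppf(np.clip(fpr, 0.01, 0.99))   # clip to keep probits finite
    y = norm.ppf(np.clip(tpr, 0.01, 0.99))
    b_hat, a_hat = np.polyfit(x, y, 1)
    auc = norm.cdf(a_hat / np.sqrt(1.0 + b_hat**2))   # binormal AUC formula
    print(f"a = {a_hat:.3f}, b = {b_hat:.3f}, AUC = {auc:.3f}")
    ```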

  6. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
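
    A small sketch of the procedure as described: randomly re-order k-1 of the k variables and take the largest eigenvalue of the correlation matrix as the criterion. The data, k, and replication count are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, k = 40, 5
    data = rng.normal(size=(n, k))
    data[:, 1:] += 0.5 * data[:, [0]]   # induce some association

    def largest_eig(x):
        return np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)).max()

    observed = largest_eig(data)
    n_perm = 2000
    null = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = data.copy()
        for j in range(1, k):           # randomly re-order k-1 variables
            rng.shuffle(shuffled[:, j])
        null[i] = largest_eig(shuffled)

    p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
    print(f"largest eigenvalue = {observed:.3f}, randomization p = {p_value:.4f}")
    ```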

  7. Determining the anaerobic threshold in water aerobic exercises: a comparison between the heart rate deflection point and the ventilatory method.

    PubMed

    Alberton, C L; Kanitz, A C; Pinto, S S; Antunes, A H; Finatto, P; Cadore, E L; Kruel, L F M

    2013-08-01

    The aim of this study was to compare the cardiorespiratory variables corresponding to the anaerobic threshold (AT) between different water-based exercises using two methods of determining the AT, the heart rate deflection point and the ventilatory method, and to correlate the variables between the two methods. Twenty young women performed three exercise sessions in the water. Maximal tests were performed in the water-based exercises stationary running, frontal kick and cross country skiing. The protocol started at a rate of 80 cycles per minute (cycle.min-1) for 2 min with subsequent increments of 10 cycle.min-1 every minute until exhaustion, with measurements of heart rate, oxygen uptake and ventilation throughout the test. Afterwards, the two methods were used to determine the values of these variables corresponding to the AT for each of the exercises. Comparisons were made using two-way ANOVA for repeated measures with Bonferroni's post hoc test. To correlate the same variables determined by the two methods, the intra-class correlation coefficient (ICC) was used. For all the variables, no significant differences were found between the two methods of determining the AT or among the three exercises. Moreover, the ICC values of each variable determined by the two methods were high and significant. The estimation of the heart rate deflection point can be used as a simple and practical method of determining the AT, which can be used when prescribing these exercises. In addition, these cardiorespiratory parameters may be determined by performing the test with only one of the evaluated exercises, since there were no differences in the evaluated variables.

  8. Test methods for environment-assisted cracking

    NASA Astrophysics Data System (ADS)

    Turnbull, A.

    1992-03-01

    The test methods for assessing environment-assisted cracking of metals in aqueous solution are described. The advantages and disadvantages are examined, and the interrelationship between results from different test methods is discussed. Differences in susceptibility to cracking occasionally observed between the various mechanical test methods often arise from variation in the environmental parameters between the different test conditions and from the lack of adequate specification, monitoring, and control of environmental variables. Time is also a significant factor when comparing results from short-term tests with long-exposure tests. In addition to these factors, intrinsic differences in the important mechanical variables, such as strain rate, associated with the various mechanical test methods can change the apparent sensitivity of the material to stress corrosion cracking. The increasing economic pressure for more accelerated testing is in conflict with the characteristic time dependence of corrosion processes. Unreliable results may be inevitable in some cases, but improved understanding of mechanisms and the development of mechanistically based models of environment-assisted cracking which incorporate the key mechanical, material, and environmental variables can provide the framework for a more realistic interpretation of short-term data.

  9. The Potential for Differential Findings among Invariance Testing Strategies for Multisample Measured Variable Path Models

    ERIC Educational Resources Information Center

    Mann, Heather M.; Rutstein, Daisy W.; Hancock, Gregory R.

    2009-01-01

    Multisample measured variable path analysis is used to test whether causal/structural relations among measured variables differ across populations. Several invariance testing approaches are available for assessing cross-group equality of such relations, but the associated test statistics may vary considerably across methods. This study is a…

  10. A survey of variable selection methods in two Chinese epidemiology journals

    PubMed Central

    2010-01-01

    Background Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis, e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals. PMID:20920252

  11. LandScape: a simple method to aggregate p-values and other stochastic variables without a priori grouping.

    PubMed

    Wiuf, Carsten; Schaumburg-Müller Pallesen, Jonatan; Foldager, Leslie; Grove, Jakob

    2016-08-01

    In many areas of science it is customary to perform many tests, potentially millions, simultaneously. To gain statistical power it is common to group tests based on a priori criteria, such as predefined regions or sliding windows. However, it is not straightforward to choose grouping criteria, and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values without relying on a priori criteria are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the family-wise error rate (FWER) is controlled in the strong sense. Validity of the method was demonstrated using simulations and real data analyses. Our method may be a useful supplement to standard procedures relying on evaluation of test statistics individually. Moreover, by being agnostic and not relying on predefined selected regions, it might be a practical alternative to conventionally used methods of aggregation of p-values over regions. The method is implemented in Python and freely available online (through GitHub, see the Supplementary information).
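
    For contrast, here is the kind of a priori grouping the method is designed to avoid: Fisher's method applied over fixed windows, with Bonferroni correction across windows. This is explicitly not the LandScape algorithm, only the conventional baseline it sidesteps; the data and window size are assumed.

    ```python
    import numpy as np
    from scipy.stats import combine_pvalues

    rng = np.random.default_rng(3)
    pvals = rng.uniform(size=1000)
    pvals[400:410] = rng.uniform(0, 1e-3, size=10)   # one enriched region

    window = 20
    n_windows = len(pvals) // window
    for start in range(0, len(pvals) - window + 1, window):
        stat, p = combine_pvalues(pvals[start:start + window], method="fisher")
        if p < 0.05 / n_windows:                     # Bonferroni over windows
            print(f"window [{start}, {start + window}): combined p = {p:.2e}")
    ```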

  12. LLNA variability: An essential ingredient for a comprehensive assessment of non-animal skin sensitization test methods and strategies.

    PubMed

    Hoffmann, Sebastian

    2015-01-01

    The development of non-animal skin sensitization test methods and strategies is quickly progressing. Either individually or in combination, their predictive capacity is usually described in comparison to local lymph node assay (LLNA) results. In this process, the important lesson from other endpoints, such as skin or eye irritation, to account for the variability of the reference test results (here, the LLNA) has not yet been fully acknowledged. In order to provide assessors as well as method and strategy developers with appropriate estimates, we investigated the variability of EC3 values from repeated substance testing using the publicly available NICEATM (NTP Interagency Center for the Evaluation of Alternative Toxicological Methods) LLNA database. Repeat experiments for more than 60 substances were analyzed, once taking the vehicle into account and once combining data over all vehicles. In general, variability was higher when different vehicles were used. In terms of skin sensitization potential, i.e., discriminating sensitizers from non-sensitizers, the false positive rate ranged from 14-20%, while the false negative rate was 4-5%. In terms of skin sensitization potency, the rate of assigning a substance to the next higher or next lower potency class was approximately 10-15%. In addition, general estimates for EC3 variability are provided that can be used for modelling purposes. With our analysis we stress the importance of considering LLNA variability in the assessment of skin sensitization test methods and strategies, and we provide estimates thereof.

  13. Testing high SPF sunscreens: a demonstration of the accuracy and reproducibility of the results of testing high SPF formulations by two methods and at different testing sites.

    PubMed

    Agin, Patricia Poh; Edmonds, Susan H

    2002-08-01

    The goals of this study were (i) to demonstrate that existing and widely used sun protection factor (SPF) test methodologies can produce accurate and reproducible results for high SPF formulations and (ii) to provide data on the number of test subjects needed, the variability of the data, and the appropriate exposure increments needed for testing high SPF formulations. Three high SPF formulations were tested, according to the Food and Drug Administration's (FDA) 1993 tentative final monograph (TFM) 'very water resistant' test method and/or the 1978 proposed monograph 'waterproof' test method, within one laboratory. A fourth high SPF formulation was tested at four independent SPF testing laboratories, using the 1978 waterproof SPF test method. All laboratories utilized xenon arc solar simulators. The data illustrate that the testing conducted within one laboratory, following either the 1978 proposed or the 1993 TFM SPF test method, was able to reproducibly determine the SPFs of the formulations tested, using either the statistical analysis method in the proposed monograph or the statistical method described in the TFM. When one formulation was tested at four different laboratories, the anticipated variation in the data owing to the equipment and other operational differences was minimized through the use of the statistical method described in the 1993 monograph. The data illustrate that either the 1978 proposed monograph SPF test method or the 1993 TFM SPF test method can provide accurate and reproducible results for high SPF formulations. Further, these results can be achieved with panels of 20-25 subjects with an acceptable level of variability. Utilization of the statistical controls from the 1993 sunscreen monograph can help to minimize lab-to-lab variability for well-formulated products.

  14. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    PubMed

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
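
    A stripped-down sketch of one of the compared procedures, the percentile (PC) bootstrap, applied to an observed-variable mediation model as a simplification of the latent variable case; the data-generating values echo the abstract's alpha = beta = .39 condition, but everything else is assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 200
    x = rng.normal(size=n)
    m = 0.39 * x + rng.normal(size=n)    # mediator (alpha = 0.39)
    y = 0.39 * m + rng.normal(size=n)    # outcome  (beta  = 0.39)

    def indirect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                     # slope of m on x
        X = np.column_stack([np.ones_like(x), m, x])   # y on m, adjusting for x
        b = np.linalg.lstsq(X, y, rcond=None)[0][1]
        return a * b

    est = indirect(x, m, y)
    boot = np.empty(2000)
    for i in range(boot.size):
        idx = rng.integers(0, n, size=n)               # resample cases
        boot[i] = indirect(x[idx], m[idx], y[idx])

    lo, hi = np.percentile(boot, [2.5, 97.5])          # percentile interval
    print(f"indirect effect = {est:.3f}, 95% PC bootstrap CI = [{lo:.3f}, {hi:.3f}]")
    ```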

  15. Mediators and moderators in early intervention research

    PubMed Central

    Breitborde, Nicholas J. K.; Srihari, Vinod H.; Pollard, Jessica M.; Addington, Donald N.; Woods, Scott W.

    2015-01-01

    Aim The goal of this paper is to provide clarification with regard to the nature of mediator and moderator variables and the statistical methods used to test for the existence of these variables. Particular attention will be devoted to discussing the ways in which the identification of mediator and moderator variables may help to advance the field of early intervention in psychiatry. Methods We completed a literature review of the methodological strategies used to test for mediator and moderator variables. Results Although several tests for mediator variables are currently available, recent evaluations suggest that tests which directly evaluate the indirect effect are superior. With regard to moderator variables, two approaches (‘pick-a-point’ and regions of significance) are available, and we provide guidelines with regard to how researchers can determine which approach may be most appropriate to use for their specific study. Finally, we discuss how to evaluate the clinical importance of mediator and moderator relationships as well as the methodology to calculate statistical power for tests of mediation and moderation. Conclusion Further exploration of mediator and moderator variables may provide valuable information with regard to interventions provided early in the course of a psychiatric illness. PMID:20536970

  16. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    PubMed Central

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

    Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are discussed using several medical examples. We present two ordinal-variable clustering examples, as a more challenging variable type to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is ensured. Moreover, descriptive and inferential statistics, in addition to the modeling approach, must be selected based on the scale of the variables. PMID:24672565

  17. Vector wind profile gust model

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1981-01-01

    To enable development of a vector wind gust model suitable for orbital flight test operations and trade studies, hypotheses concerning the distributions of gust component variables were verified. Methods for verifying the hypotheses that observed gust variables, including gust component magnitude, gust length, u range, and L range, are gamma distributed are presented. Observed gust modulus has been drawn from a bivariate gamma distribution that can be approximated with a Weibull distribution. Zonal and meridional gust components are bivariate gamma distributed. An analytical method for testing for bivariate gamma distributed variables is presented. Two distributions for gust modulus are described and the results of extensive hypothesis testing of one of the distributions are presented. The validity of the gamma distribution for representation of gust component variables is established.
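
    A minimal sketch of one such verification step: fit a gamma distribution to an observed gust variable and check the fit with a Kolmogorov-Smirnov statistic. The synthetic "gust magnitudes" are an assumption, not the report's data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    gust = rng.gamma(shape=2.0, scale=3.0, size=500)     # stand-in gust data

    shape, loc, scale = stats.gamma.fit(gust, floc=0.0)  # location fixed at 0
    ks_stat, p = stats.kstest(gust, "gamma", args=(shape, loc, scale))
    # Caveat: fitting parameters from the same sample biases the KS p-value
    # upward; a parametric bootstrap would give a calibrated test.
    print(f"shape = {shape:.2f}, scale = {scale:.2f}, KS p = {p:.3f}")
    ```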

  18. Performance of the AOAC use-dilution method with targeted modifications: collaborative study.

    PubMed

    Tomasino, Stephen F; Parker, Albert E; Hamilton, Martin A; Hamilton, Gordon C

    2012-01-01

    The U.S. Environmental Protection Agency (EPA), in collaboration with an industry work group, spearheaded a collaborative study designed to further enhance the AOAC use-dilution method (UDM). Based on feedback from laboratories that routinely conduct the UDM, improvements to the test culture preparation steps were prioritized. A set of modifications, largely based on culturing the test microbes on agar as specified in the AOAC hard surface carrier test method, was evaluated in a five-laboratory trial. The modifications targeted the preparation of the Pseudomonas aeruginosa test culture due to the difficulty in separating the pellicle from the broth in the current UDM. The proposed modifications (i.e., the modified UDM) were compared to the current UDM methodology for P. aeruginosa and Staphylococcus aureus. Salmonella choleraesuis was not included in the study. The goal was to determine whether the modifications reduced method variability. Three efficacy response variables were statistically analyzed: the number of positive carriers, the log reduction, and the pass/fail outcome. The scope of the collaborative study was limited to testing one liquid disinfectant (an EPA-registered quaternary ammonium product) at two levels of presumed product efficacy, high and low. Test conditions included use of 400 ppm hard water as the product diluent and a 5% organic soil load (horse serum) added to the inoculum. Unfortunately, the study failed to support the adoption of the major modification (use of an agar-based approach to grow the test cultures) based on an analysis of the method's variability. The repeatability and reproducibility standard deviations for the modified method were equal to or greater than those for the current method across the various test variables. However, the authors propose retaining the frozen stock preparation step of the modified method and, based on the statistical equivalency of the control log densities, support its adoption as a procedural change to the current UDM. The current UDM displayed acceptable responsiveness to changes in product efficacy; acceptable repeatability across multiple tests in each laboratory for the control counts and log reductions; and acceptable reproducibility across multiple laboratories for the control log density values and log reductions. Although the data do not support the adoption of all modifications, the UDM collaborative study data are valuable for assessing sources of method variability and for a reassessment of the performance standard for the UDM.

  19. Similitude assessment method for comparing PMHS response data from impact loading across multiple test devices.

    PubMed

    Dooley, Christopher J; Tenore, Francesco V; Gayzik, F Scott; Merkle, Andrew C

    2018-04-27

    Biological tissue testing is inherently susceptible to wide variability from specimen to specimen. A primary resource for encapsulating this range of variability is the biofidelity response corridor (BRC). In the field of injury biomechanics, BRCs are often used for the development and validation of both physical models, such as anthropomorphic test devices, and computational models. For the purpose of generating corridors, post-mortem human surrogates were tested across a range of loading conditions relevant to under-body blast events. To sufficiently cover the wide range of input conditions, a relatively small number of tests was performed across a large spread of conditions. The high volume of required testing called for leveraging the capabilities of multiple impact test facilities, all with slight variations in test devices. A method for assessing the similitude of responses between test devices was created as a metric for inclusion of a response in the resulting BRC. The goal of this method was to supply a statistically sound, objective way to assess the similitude of an individual response against a set of responses, to ensure that the BRC created from the set was affected primarily by biological variability, not anomalies or differences stemming from test devices. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Test Program for Evaluation of Variable Frequency Power Conditioners

    DOT National Transportation Integrated Search

    1973-08-01

    A test program is outlined for variable frequency power conditioners for 3-phase induction motors in vehicle propulsion applications. The Power Conditioner Unit (PCU) performance characteristics are discussed in some detail. Measurement methods, reco...

  1. The Chandra Source Catalog: Source Variability

    NASA Astrophysics Data System (ADS)

    Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations spanning integration times from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root-mean-square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to a preliminary assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.
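
    A toy version of the first within-observation test named above, assuming simulated photon arrival times: under a constant-rate model the arrival times are uniform over the exposure, so a Kolmogorov-Smirnov test against the uniform distribution flags variability. (The Kuiper and Gregory-Loredo tests are not sketched here.)

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    t_exp = 10_000.0                                   # exposure time [s]
    t_const = rng.uniform(0, t_exp, size=300)          # constant source
    t_flare = np.concatenate([rng.uniform(0, t_exp, size=250),
                              rng.uniform(6000, 6500, size=50)])  # flare

    for name, times in [("constant", t_const), ("flaring", t_flare)]:
        ks, p = stats.kstest(times / t_exp, "uniform")
        print(f"{name}: KS = {ks:.3f}, p = {p:.4f}")
    ```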

  2. The Chandra Source Catalog: Source Variability

    NASA Astrophysics Data System (ADS)

    Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Evans, I.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations spanning integration times from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root-mean-square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to an assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.

  3. Examining Parallelism of Sets of Psychometric Measures Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Patelis, Thanos; Marcoulides, George A.

    2011-01-01

    A latent variable modeling approach that can be used to examine whether several psychometric tests are parallel is discussed. The method consists of sequentially testing the properties of parallel measures via a corresponding relaxation of parameter constraints in a saturated model or an appropriately constructed latent variable model. The…

  4. A study of short test and charge retention test methods for nickel-cadmium spacecraft cells

    NASA Technical Reports Server (NTRS)

    Scott, W. R.

    1975-01-01

    Methods for testing nickel-cadmium cells for internal shorts and charge retention were studied. Included were (a) open circuit voltage decay after a brief charge, (b) open circuit voltage recovery after shorting, and (c) open circuit voltage decay and capacity loss after a full charge. The investigation included consideration of the effects of prior history, of conditioning cells prior to testing, and of various test method variables on the results of the tests. Sensitivity of the tests was calibrated in terms of equivalent external resistance. The results were correlated. It was shown that a large number of variables may affect the results of these tests. It is concluded that the voltage decay after a brief charge and the voltage recovery methods are more sensitive than the charged stand method, and can detect an internal short equivalent to a resistance of about (10,000/C) ohms, where C is the numerical value of the capacity of the cell in ampere-hours.

  5. A comparative review of methods for comparing means using partially paired data.

    PubMed

    Guo, Beibei; Yuan, Ying

    2017-06-01

    In medical experiments with the objective of testing the equality of two means, data are often partially paired by design or because of missing data. The partially paired data represent a combination of paired and unpaired observations. In this article, we review and compare nine methods for analyzing partially paired data, including the two-sample t-test, paired t-test, corrected z-test, weighted t-test, pooled t-test, optimal pooled t-test, multiple imputation method, mixed model approach, and the test based on a modified maximum likelihood estimate. We compare the performance of these methods through extensive simulation studies that cover a wide range of scenarios with different effect sizes, sample sizes, and correlations between the paired variables, as well as true underlying distributions. The simulation results suggest that when the sample size is moderate, the test based on the modified maximum likelihood estimator is generally superior to the other approaches when the data is normally distributed and the optimal pooled t-test performs the best when the data is not normally distributed, with well-controlled type I error rates and high statistical power; when the sample size is small, the optimal pooled t-test is to be recommended when both variables have missing data and the paired t-test is to be recommended when only one variable has missing data.
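
    Two of the nine approaches are easy to sketch side by side: a paired T test restricted to the complete pairs, and a Welch two-sample T test that ignores the pairing. The missingness rates and correlation below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n = 60
    x = rng.normal(0.0, 1.0, n)
    y = 0.6 * x + rng.normal(0.3, 1.0, n)        # correlated partner variable
    x_miss = rng.uniform(size=n) < 0.2           # ~20% of x missing
    y_miss = rng.uniform(size=n) < 0.2           # ~20% of y missing

    paired = ~x_miss & ~y_miss                   # complete pairs only
    t_p, p_p = stats.ttest_rel(x[paired], y[paired])
    t_u, p_u = stats.ttest_ind(x[~x_miss], y[~y_miss], equal_var=False)
    print(f"paired t on {paired.sum()} pairs: p = {p_p:.3f}")
    print(f"two-sample t on all available data: p = {p_u:.3f}")
    ```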

  6. Residual interference and wind tunnel wall adaption

    NASA Technical Reports Server (NTRS)

    Mokry, Miroslav

    1989-01-01

    Measured flow variables near the test section boundaries, used to guide adjustments of the walls in adaptive wind tunnels, can also be used to quantify the residual interference. Because of the finite number of wall control devices (jacks, plenum compartments), the finite test section length, and the approximate character of adaptation algorithms, unconfined flow conditions are not expected to be precisely attained even in the fully adapted stage. The procedures for the evaluation of residual wall interference are essentially the same as those used for assessing the correction in conventional, non-adaptive wind tunnels. Depending upon the number of flow variables utilized, one can speak of one- or two-variable methods; in two dimensions, also of Schwarz- or Cauchy-type methods. The one-variable methods use the measured static pressure and normal velocity at the test section boundary, but do not require any model representation. This is clearly an advantage for adaptive wall test sections, which are often relatively small with respect to the test model, and for the variety of complex flows commonly encountered in wind tunnel testing. For test sections with flexible walls, the normal component of velocity is given by the shape of the wall, adjusted for the displacement effect of its boundary layer. For ventilated test section walls, it has to be measured by the Calspan pipes, laser Doppler velocimetry, or other appropriate techniques. The interface discontinuity method, also described, is a genuine residual interference assessment technique. It is specific to adaptive wall wind tunnels, where the computation results for the fictitious flow in the exterior of the test section are provided.

  7. The sensitivity of relative toxicity rankings by the USF/NASA test method to some test variables

    NASA Technical Reports Server (NTRS)

    Hilado, C. J.; Labossiere, L. A.; Leon, H. A.; Kourtides, D. A.; Parker, J. A.; Hsu, M.-T. S.

    1976-01-01

    Pyrolysis temperature and the distance between the source and sensor of effluents are two important variables in tests for relative toxicity. Modifications of the USF/NASA toxicity screening test method to increase the upper temperature limit of pyrolysis, reduce the distance between the sample and the test animals, and increase the chamber volume available for animal occupancy, did not significantly alter rankings of relative toxicity of four representative materials. The changes rendered some differences no longer significant, but did not reverse any rankings. The materials studied were cotton, wool, aromatic polyamide, and polybenzimidazole.

  8. Testing common stream sampling methods for broad-scale, long-term monitoring

    Treesearch

    Eric K. Archer; Brett B. Roper; Richard C. Henderson; Nick Bouwes; S. Chad Mellison; Jeffrey L. Kershner

    2004-01-01

    We evaluated sampling variability of stream habitat sampling methods used by the USDA Forest Service and the USDI Bureau of Land Management monitoring program for the upper Columbia River Basin. Three separate studies were conducted to describe the variability of individual measurement techniques, variability between crews, and temporal variation throughout the summer...

  9. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    NASA Technical Reports Server (NTRS)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
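
    For orientation, a single-lot check under a classical one-sided variables sampling plan looks like the sketch below: accept when (USL - mean)/s meets an acceptability constant k. The plan parameters n and k here are assumed for illustration and are not taken from the NESC calculators.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    usl = 10.0                    # upper specification limit
    n, k = 30, 1.69               # plan parameters (assumed)

    sample = rng.normal(8.5, 0.8, size=n)        # measured lot items
    x_bar, s = sample.mean(), sample.std(ddof=1)
    margin = (usl - x_bar) / s
    print(f"mean = {x_bar:.2f}, s = {s:.2f}, (USL - mean)/s = {margin:.2f}"
          f" -> {'accept' if margin >= k else 'reject'}")
    ```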

  10. Binary recursive partitioning: background, methods, and application to psychology.

    PubMed

    Merkle, Edgar C; Shaffer, Victoria A

    2011-02-01

    Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in a detailed manner and illustrate their use in psychological research. We also provide R code for carrying out the methods.
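
    The paper itself supplies R code; a roughly equivalent sketch in Python (using scikit-learn's decision tree as a stand-in for BRP, with made-up data) shows the flavor: no significance tests, and the tree is judged by held-out predictive accuracy.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(9)
    X = rng.normal(size=(300, 3))
    y = ((X[:, 0] > 0) & (X[:, 1] > 0.5)).astype(int)   # rule-like response

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
    print(export_text(tree, feature_names=["x1", "x2", "x3"]))   # the decision tree
    print(f"held-out accuracy: {tree.score(X_te, y_te):.2f}")
    ```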

  11. New parameters in adaptive testing of ferromagnetic materials utilizing magnetic Barkhausen noise

    NASA Astrophysics Data System (ADS)

    Pal'a, Jozef; Ušák, Elemír

    2016-03-01

    A new method of magnetic Barkhausen noise (MBN) measurement, and of processing the measured data, for the non-destructive evaluation of ferromagnetic materials was tested. Using this method, we investigated whether it is possible to enhance the sensitivity and stability of measurement results by replacing the traditional MBN parameter (the root mean square) with a new parameter. In the tested method, a comprehensive set of MBN records from minor hysteresis loops is measured. Afterwards, the MBN data are collected into suitably designed matrices, and the MBN parameters that are most sensitive to the evaluated variable are sought. The method was verified on plastically deformed steel samples. It was shown that the proposed measuring method and data processing improve the sensitivity to the evaluated variable compared with measuring the traditional MBN parameter. Moreover, we found an MBN parameter that is highly resistant to changes in the applied field amplitude and is at the same time noticeably more sensitive to the evaluated variable.

  12. Second ventilatory threshold from heart-rate variability: valid when the upper body is involved?

    PubMed

    Mourot, Laurent; Fabre, Nicolas; Savoldelli, Aldo; Schena, Federico

    2014-07-01

    To determine the most accurate method based on spectral analysis of heart-rate variability (SA-HRV) during an incremental and continuous maximal test involving the upper body, the authors tested 4 different methods to obtain the heart rate (HR) at the second ventilatory threshold (VT(2)). Sixteen ski mountaineers (mean ± SD; age 25 ± 3 y, height 177 ± 8 cm, mass 69 ± 10 kg) performed a roller-ski test on a treadmill. Respiratory variables and HR were continuously recorded, and the 4 SA-HRV methods were compared with the gas-exchange method through Bland and Altman analyses. The best method was the one based on a time-varying spectral analysis with the high-frequency band ranging from 0.15 Hz to a cutoff point relative to the individual's respiratory sinus arrhythmia. The HR values were significantly correlated (r(2) = .903), with a mean HR difference from the respiratory method of 0.1 ± 3.0 beats/min and narrow limits of agreement (around -6/+6 beats/min). The 3 other methods led to larger errors and wider limits of agreement (up to 5 beats/min and around -23/+20 beats/min). It is possible to accurately determine VT(2) with an HR monitor during an incremental test involving the upper body if the appropriate HRV method is used.
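
    A bare-bones sketch of the SA-HRV ingredients named above: resample RR intervals onto an even time base and take a Welch spectrum, integrating the high-frequency band upward from 0.15 Hz. The synthetic RR series and the fixed 0.40 Hz upper edge are assumptions (the paper's best method moves that edge with the individual's respiratory sinus arrhythmia).

    ```python
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(13)
    beats = np.arange(600)
    rr = (0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * beats * 0.8)
              + 0.01 * rng.normal(size=600))    # RR intervals [s], ~75 bpm

    t = np.cumsum(rr)                           # beat times
    fs = 4.0                                    # resampling rate [Hz]
    t_even = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = np.interp(t_even, t, rr)          # evenly sampled RR series

    f, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
    hf = (f >= 0.15) & (f <= 0.40)
    hf_power = psd[hf].sum() * (f[1] - f[0])    # integrate the HF band
    print(f"HF power (0.15-0.40 Hz): {hf_power:.2e} s^2")
    ```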

  13. Transferability and inter-laboratory variability assessment of the in vitro bovine oocyte fertilization test.

    PubMed

    Tessaro, Irene; Modina, Silvia C; Crotti, Gabriella; Franciosi, Federica; Colleoni, Silvia; Lodde, Valentina; Galli, Cesare; Lazzari, Giovanna; Luciano, Alberto M

    2015-01-01

    The dramatic increase in the number of animals required for reproductive toxicity testing imposes the validation of alternative methods to reduce the use of laboratory animals. As we previously demonstrated for in vitro maturation test of bovine oocytes, the present study describes the transferability assessment and the inter-laboratory variability of an in vitro test able to identify chemical effects during the process of bovine oocyte fertilization. Eight chemicals with well-known toxic properties (benzo[a]pyrene, busulfan, cadmium chloride, cycloheximide, diethylstilbestrol, ketoconazole, methylacetoacetate, mifepristone/RU-486) were tested in two well-trained laboratories. The statistical analysis demonstrated no differences in the EC50 values for each chemical in within (inter-runs) and in between-laboratory variability of the proposed test. We therefore conclude that the bovine in vitro fertilization test could advance toward the validation process as alternative in vitro method and become part of an integrated testing strategy in order to predict chemical hazards on mammalian fertility. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

    PubMed Central

    2013-01-01

    Background In statistical modeling, finding the most favorable coding for an explanatory quantitative variable involves many tests. This process involves multiple testing problems and requires the correction of the significance level. Methods For each coding, a test on the nullity of the coefficient associated with the new coded variable is computed. The selected coding corresponds to that associated with the largest statistical test (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probab Lett, 71:33-38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performance of the proposed methods. These methods were illustrated using the data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented using R, and the associated CPMCGLM R package is available on the CRAN. PMID:23758852
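
    The resampling idea can be sketched generically: try several codings, keep the maximum test statistic, and calibrate it against the permutation distribution of that maximum. This is a plain max-test illustration under assumed data, not the CPMCGLM implementation.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    n = 150
    x = rng.normal(size=n)
    y = rng.normal(size=n)                      # null: no effect of x on y
    cuts = np.quantile(x, [0.25, 0.5, 0.75])    # candidate dichotomizations

    def max_abs_t(y):
        # largest |t| over all candidate codings of x
        return max(abs(stats.ttest_ind(y[x > c], y[x <= c]).statistic)
                   for c in cuts)

    observed = max_abs_t(y)
    null = np.array([max_abs_t(rng.permutation(y)) for _ in range(1000)])
    p_corr = (1 + np.sum(null >= observed)) / (1 + null.size)
    print(f"max |t| over codings = {observed:.2f}, corrected p = {p_corr:.3f}")
    ```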

  15. Outcomes Definitions and Statistical Tests in Oncology Studies: A Systematic Review of the Reporting Consistency

    PubMed Central

    Rivoirard, Romain; Duplay, Vianney; Oriol, Mathieu; Tinquaut, Fabien; Chauvin, Franck; Magne, Nicolas; Bourmaud, Aurelie

    2016-01-01

    Background Quality of reporting for Randomized Clinical Trials (RCTs) in oncology was analyzed in several systematic reviews, but, in this setting, there is a paucity of data on outcome definitions and the consistency of reporting for statistical tests in RCTs and Observational Studies (OBS). The objective of this review was to describe those two reporting aspects for OBS and RCTs in oncology. Methods From a list of 19 medical journals, three were retained for analysis after a random selection: British Medical Journal (BMJ), Annals of Oncology (AoO) and British Journal of Cancer (BJC). All original articles published between March 2009 and March 2014 were screened. Only studies whose main outcome was accompanied by a corresponding statistical test were included in the analysis. Studies based on censored data were excluded. The primary outcome was to assess quality of reporting for the description of the primary outcome measure in RCTs and of the variables of interest in OBS. A logistic regression was performed to identify covariates of studies potentially associated with concordance of tests between the Methods and Results parts. Results 826 studies were included in the review, and 698 were OBS. Variables were described in the Methods section for all OBS studies, and the primary endpoint was clearly detailed in the Methods section for 109 RCTs (85.2%). 295 OBS (42.2%) and 43 RCTs (33.6%) had perfect agreement for the reported statistical test between the Methods and Results parts. In multivariable analysis, the variable "number of included patients in study" was associated with test consistency: the aOR (adjusted Odds Ratio) for the third group compared to the first group was aOR Grp3 = 0.52 [0.31–0.89] (P value = 0.009). Conclusion Variables in OBS and the primary endpoint in RCTs are reported and described with high frequency. However, consistency of statistical tests between the Methods and Results sections of OBS is not always observed. Therefore, we encourage authors and peer reviewers to verify the consistency of statistical tests in oncology studies. PMID:27716793

  16. Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.

    PubMed

    Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki

    2016-07-01

    We present a comparative split-half resampling analysis of various data driven feature selection and classification methods for the whole brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter based feature selection, several embedded feature selection methods and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training sample classification accuracy and the set of selected features due to independent training and test sets have not been previously addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones with the difference in the test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification that suggests the utility of the embedded feature selection for this problem when linked with the good generalization performance. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.
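
    The split-half resampling logic is easy to mock up: repeatedly split the sample in half, run a filter-plus-SVM pipeline, and track both out-of-training accuracy and the overlap of the selected feature sets. Everything below (data, feature counts, regularization) is an illustrative assumption, not the study's setup.

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import StratifiedShuffleSplit
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(11)
    X = rng.normal(size=(200, 500))
    y = rng.integers(0, 2, size=200)
    X[:, :10] += 0.8 * y[:, None]            # 10 informative "voxels"

    splits = StratifiedShuffleSplit(n_splits=20, test_size=0.5, random_state=0)
    accs, sets = [], []
    for tr, te in splits.split(X, y):
        sel = SelectKBest(f_classif, k=10)                 # filter selection
        clf = make_pipeline(sel, LinearSVC(C=0.1, dual=False))
        clf.fit(X[tr], y[tr])
        accs.append(clf.score(X[te], y[te]))               # held-out accuracy
        sets.append(frozenset(np.flatnonzero(sel.get_support())))

    # stability of the selected feature sets across splits
    jac = np.mean([len(a & b) / len(a | b)
                   for i, a in enumerate(sets) for b in sets[i + 1:]])
    print(f"accuracy {np.mean(accs):.2f} +/- {np.std(accs):.2f}, "
          f"mean pairwise Jaccard {jac:.2f}")
    ```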

  17. A Comparison of Methods for Estimating Quadratic Effects in Nonlinear Structural Equation Models

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen

    2012-01-01

    Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent…

  18. Exact tests using two correlated binomial variables in contemporary cancer clinical trials.

    PubMed

    Yu, Jihnhee; Kepner, James L; Iyer, Renuka

    2009-12-01

    New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method with respect to testing cytostatic cancer treatment using correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.

  19. Exhaustive Search for Sparse Variable Selection in Linear Regression

    NASA Astrophysics Data System (ADS)

    Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato

    2018-04-01

    We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search (AES-K) method for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data alone. Using virtual measurement and analysis, we argue that this is caused by data shortage.
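
    The ES-K idea itself is compact: enumerate every K-subset of candidate variables, fit each by least squares, and rank the subsets by residual error (the collection of results then plays the role of a density of states). A small assumed example:

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(12)
    n, p, K = 100, 10, 3
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[[1, 4, 7]] = [1.5, -2.0, 1.0]           # true K-sparse coefficients
    y = X @ beta + 0.5 * rng.normal(size=n)

    results = []
    for subset in itertools.combinations(range(p), K):
        Xs = X[:, subset]
        coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ coef) ** 2)       # residual sum of squares
        results.append((rss, subset))

    results.sort()
    for rss, subset in results[:3]:              # best K-sparse combinations
        print(f"variables {subset}: RSS = {rss:.1f}")
    ```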

  20. Motivational antecedents to contraceptive method change following a pregnancy scare: a couple analysis.

    PubMed

    Miller, W B; Pasta, D J

    2001-01-01

    In this study we develop and then test a couple model of contraceptive method choice decision-making following a pregnancy scare. The central constructs in our model are satisfaction with one's current method and confidence in the use of it. Downstream in the decision sequence, satisfaction and confidence predict desires and intentions to change methods. Upstream they are predicted by childbearing motivations, contraceptive attitudes, and the residual effects of the couples' previous method decisions. We collected data from 175 mostly unmarried and racially/ethnically diverse couples who were seeking pregnancy tests. We used LISREL and its latent variable capacity to estimate a structural equation model of the couple decision-making sequence leading to a change (or not) in contraceptive method. Results confirm most elements in our model and demonstrate a number of important cross-partner effects. Almost one-half of the sample had positive pregnancy tests and the base model fitted to this subsample indicates less accuracy in partner perception and greater influence of the female partner on method change decision-making. The introduction of some hypothesis-generating exogenous variables to our base couple model, together with some unexpected findings for the contraceptive attitude variables, suggest interesting questions that require further exploration.

  1. Effect of workload setting on propulsion technique in handrim wheelchair propulsion.

    PubMed

    van Drongelen, Stefan; Arnet, Ursina; Veeger, Dirkjan H E J; van der Woude, Lucas H V

    2013-03-01

    To investigate the influence of workload setting (speed at constant power, method used to impose power) on the propulsion technique (i.e. force and timing characteristics) in handrim wheelchair propulsion. Twelve able-bodied men participated in this study. External forces were measured during handrim wheelchair propulsion on a motor-driven treadmill at different velocities and constant power output (to test the effect of speed) and at power outputs imposed by incline vs. pulley system (to test the effect of the method used to impose power). Outcome measures were the force and timing variables of the propulsion technique. The fraction of effective force (FEF) and timing variables showed significant differences between the speed conditions when propelling at the same power output (p < 0.01). Push time was reduced while push angle increased. The method used to impose power showed only slight differences in the timing variables, but not in the force variables. Researchers and clinicians must be aware of testing and evaluation conditions that may differently affect propulsion technique parameters despite an overall constant power output. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  2. Assessment of WENO-extended two-fluid modelling in compressible multiphase flows

    NASA Astrophysics Data System (ADS)

    Kitamura, Keiichi; Nonomura, Taku

    2017-03-01

    The two-fluid modelling based on an advection upstream splitting method (AUSM)-family numerical flux function, AUSM+-up, following the work by Chang and Liou [Journal of Computational Physics 2007;225:840-873], has been successfully extended to fifth order with weighted essentially non-oscillatory (WENO) schemes. Its performance is then surveyed in several numerical tests. The results showed the desired performance in one-dimensional benchmark problems: without relying upon an anti-diffusion device, the higher-order two-fluid method captures the phase interface within fewer grid points than the conventional second-order method, as well as a rarefaction wave and a very weak shock. At a high pressure ratio (e.g. 1,000), the interpolated variables appeared to affect the performance: the conservative-variable-based characteristic-wise WENO interpolation showed less sharp but more robust representations of the shocks and expansions than the primitive-variable-based counterpart did. In the two-dimensional shock/droplet test case, however, only the primitive-variable-based WENO with a huge void fraction realised a stable computation.
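
    For orientation, a minimal sketch of fifth-order WENO(-JS) reconstruction of a single scalar at a cell face on a uniform grid (the two-fluid equations, AUSM+-up flux, and characteristic-wise projection used in the paper are beyond this fragment):

    ```python
    import numpy as np

    def weno5_face(f, i, eps=1e-6):
        """Fifth-order WENO-JS reconstruction of f at face i+1/2 (left-biased)."""
        q = np.array([
            (2*f[i-2] - 7*f[i-1] + 11*f[i]) / 6.0,   # candidate stencil 0
            (-f[i-1] + 5*f[i] + 2*f[i+1]) / 6.0,     # candidate stencil 1
            (2*f[i] + 5*f[i+1] - f[i+2]) / 6.0,      # candidate stencil 2
        ])
        beta = np.array([                            # smoothness indicators
            13/12*(f[i-2] - 2*f[i-1] + f[i])**2 + 0.25*(f[i-2] - 4*f[i-1] + 3*f[i])**2,
            13/12*(f[i-1] - 2*f[i] + f[i+1])**2 + 0.25*(f[i-1] - f[i+1])**2,
            13/12*(f[i] - 2*f[i+1] + f[i+2])**2 + 0.25*(3*f[i] - 4*f[i+1] + f[i+2])**2,
        ])
        gamma = np.array([0.1, 0.6, 0.3])            # ideal (linear) weights
        alpha = gamma / (eps + beta)**2              # down-weight rough stencils
        w = alpha / alpha.sum()
        return float(w @ q)

    x = np.linspace(0, 1, 21)
    f = np.where(x < 0.5, 1.0, 0.0)                  # discontinuous profile
    print(weno5_face(f, 5), weno5_face(f, 10))       # smooth region vs. near the jump
    ```

    Near the discontinuity the non-linear weights collapse onto the smoothest stencil, which is what lets the scheme stay sharp without an explicit anti-diffusion device.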

  3. Outcomes Definitions and Statistical Tests in Oncology Studies: A Systematic Review of the Reporting Consistency.

    PubMed

    Rivoirard, Romain; Duplay, Vianney; Oriol, Mathieu; Tinquaut, Fabien; Chauvin, Franck; Magne, Nicolas; Bourmaud, Aurelie

    2016-01-01

    Quality of reporting for Randomized Clinical Trials (RCTs) in oncology has been analyzed in several systematic reviews, but there is a paucity of data on outcome definitions and on the consistency of reporting of statistical tests in RCTs and Observational Studies (OBS). The objective of this review was to describe these two reporting aspects for OBS and RCTs in oncology. From a list of 19 medical journals, three were retained for analysis after random selection: British Medical Journal (BMJ), Annals of Oncology (AoO) and British Journal of Cancer (BJC). All original articles published between March 2009 and March 2014 were screened. Only studies whose main outcome was accompanied by a corresponding statistical test were included in the analysis. Studies based on censored data were excluded. The primary outcome was the quality of reporting of the description of the primary outcome measure in RCTs and of the variables of interest in OBS. A logistic regression was performed to identify study covariates potentially associated with concordance of tests between the Methods and Results sections. 826 studies were included in the review, of which 698 were OBS. Variables were described in the Methods section for all OBS, and the primary endpoint was clearly detailed in the Methods section for 109 RCTs (85.2%). 295 OBS (42.2%) and 43 RCTs (33.6%) had perfect agreement between the statistical tests reported in the Methods and Results sections. In multivariable analysis, the variable "number of included patients in study" was associated with test consistency: the adjusted odds ratio (aOR) for the third group compared to the first was aOR = 0.52 [0.31-0.89] (P = 0.009). Variables in OBS and primary endpoints in RCTs are thus reported and described with high frequency; however, consistency of statistical tests between the Methods and Results sections is often lacking. We therefore encourage authors and peer reviewers to verify the consistency of statistical tests in oncology studies.

  4. Autonomic Evaluation of Patients With Gastroparesis and Neurostimulation: Comparisons of Direct/Systemic and Indirect/Cardiac Measures

    PubMed Central

    Stocker, Abigail; Abell, Thomas L.; Rashed, Hani; Kedar, Archana; Boatright, Ben; Chen, Jiande

    2016-01-01

    Background: Disorders of nausea, vomiting, abdominal pain, and related problems are often manifestations of gastrointestinal, neuromuscular, and/or autonomic dysfunction. Many of these patients respond to neurostimulation, either gastric electrical stimulation or electroacupuncture. Both of these therapeutic techniques appear to influence the autonomic nervous system, which can be evaluated directly by traditional testing and indirectly by heart rate variability. Methods: We studied patients undergoing gastric neuromodulation with systemic autonomic testing alone (39 patients; six males and 33 females, mean age 38 years) or with both systemic autonomic testing and heart rate variability (35 patients; seven males and 28 females, mean age 37 years), before and after gastric neuromodulation. We also performed a pilot study using both systemic autonomic testing and heart rate variability in a small number of patients (five patients, all females, mean age 48.6 years) with diabetic gastroparesis to compare the two techniques at baseline. Systemic autonomic testing and heart rate variability were performed with standardized techniques, and gastric electrical stimulation was performed as previously described, with electrodes implanted serosally in the myenteric plexus. Results: Both systemic autonomic testing and heart rate variability measures were often abnormal at baseline and showed changes after gastric neuromodulation therapy in two groups of symptomatic patients. Pilot data on a small group of similar patients with both systemic autonomic measures and heart rate variability showed good concordance between the two techniques. Conclusions: Both traditional direct autonomic measures and indirect measures such as heart rate variability were evaluated, including a pilot study of both methods in the same patient group. Both appear to be useful in the evaluation of patients at baseline and after stimulation therapies; however, a future full head-to-head comparison is warranted. PMID:27785318

  5. Dry and wet arc track propagation resistance testing

    NASA Technical Reports Server (NTRS)

    Beach, Rex

    1995-01-01

    The wet arc-propagation resistance test for wire insulation provides an assessment of the ability of an insulation to prevent damage in an electrical arc environment. Results of an arc-propagation test may vary slightly with the method of arc initiation; therefore a standard test method must be selected to evaluate the general arc-propagation resistance characteristics of an insulation. This test method initiates an arc by dripping salt water over pre-damaged wires, which creates a conductive path between the wires. The power supply, test current, circuit resistances, and other variables are optimized for testing 20 gauge wires. The use of other wire sizes may require modifications to the test variables. The dry arc-propagation resistance test for wire insulation also provides an assessment of the ability of an insulation to prevent damage in an electrical arc environment. In service, electrical arcs may originate from a variety of factors including insulation deterioration, faulty installation, and chafing. Here too, a standard test method must be selected to evaluate the general arc-propagation resistance characteristics of an insulation. This test method initiates an arc with a vibrating blade. The test also evaluates the ability of the insulation to prevent further arc propagation when the electrical arc is re-energized.

  6. Comparison of manual and automatic techniques for substriatal segmentation in 11C-raclopride high-resolution PET studies.

    PubMed

    Johansson, Jarkko; Alakurtti, Kati; Joutsa, Juho; Tohka, Jussi; Ruotsalainen, Ulla; Rinne, Juha O

    2016-10-01

    The striatum is the primary target in regional 11C-raclopride PET studies, and despite its small volume, it contains several functional and anatomical subregions. The outcome of a quantitative dopamine receptor study using 11C-raclopride PET depends heavily on the quality of the region-of-interest (ROI) definition of these subregions. The aim of this study was to evaluate subregional analysis techniques because new approaches have emerged but have not yet been compared directly. In this paper, we compared manual ROI delineation with several automatic methods. The automatic methods used either direct clustering of the PET image or individualization of chosen brain atlases on the basis of MRI or PET image normalization. State-of-the-art normalization methods and atlases were applied, including those provided in the FreeSurfer, Statistical Parametric Mapping 8 (SPM8), and FSL software packages. Evaluation of the automatic methods was based on voxel-wise congruity with the manual delineations and on the test-retest variability and reliability of the outcome measures, using data from seven healthy male participants who were scanned twice with 11C-raclopride PET on the same day. The results show that both manual and automatic methods can be used to define striatal subregions. Although most of the methods performed well with respect to the test-retest variability and reliability of binding potential, the smallest average test-retest variability and SEM were obtained using a connectivity-based atlas and PET normalization (test-retest variability = 4.5%, SEM = 0.17). The current state-of-the-art automatic ROI methods can be considered good alternatives to subjective and laborious manual segmentation in 11C-raclopride PET studies.
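
    Test-retest variability in such same-day rescan studies is conventionally summarized per subject as the absolute difference between the two scans divided by their mean; a minimal sketch with invented binding-potential values:

    ```python
    import numpy as np

    # Hypothetical binding potentials from two same-day scans of 7 subjects.
    scan1 = np.array([2.41, 2.10, 2.75, 2.33, 2.58, 2.20, 2.49])
    scan2 = np.array([2.52, 2.04, 2.66, 2.41, 2.49, 2.31, 2.38])

    trv = 100 * np.abs(scan1 - scan2) / ((scan1 + scan2) / 2)  # % per subject
    print(f"mean test-retest variability = {trv.mean():.1f}%")
    ```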

  7. Noninvasive Determination of Anaerobic Threshold Based on the Heart Rate Deflection Point in Water Cycling.

    PubMed

    Pinto, Stephanie S; Brasil, Roxana M; Alberton, Cristine L; Ferreira, Hector K; Bagatini, Natália C; Calatayud, Joaquin; Colado, Juan C

    2016-02-01

    This study compared heart rate (HR), oxygen uptake (VO2), percentage of maximal HR (%HRmax), percentage of maximal VO2 (%VO2max), and cadence (Cad) related to the anaerobic threshold (AT) during a water cycling maximal test between the heart rate deflection point (HRDP) and ventilatory threshold (VT) methods. In addition, the correlations between both methods were assessed for all variables. The test was performed by 27 men on a cycle ergometer in an aquatic environment. The protocol started at a Cad of 100 b·min⁻¹ for 3 minutes with subsequent increments of 15 b·min⁻¹ every 2 minutes until exhaustion. A paired two-tailed Student's t-test was used to compare the variables between the HRDP and VT methods. The Pearson product-moment correlation test was used to correlate the same variables determined by the 2 methods. There was no difference in HR (166 ± 13 vs. 166 ± 13 b·min⁻¹), VO2 (38.56 ± 6.26 vs. 39.18 ± 6.13 ml·kg⁻¹·min⁻¹), %HRmax (89.24 ± 3.84 vs. 89.52 ± 4.29%), %VO2max (70.44 ± 7.99 vs. 71.64 ± 8.32%), and Cad (174 ± 14 vs. 171 ± 8 b·min⁻¹) related to the AT between the HRDP and VT methods. Moreover, significant relationships were found between the methods for all variables analyzed (r = 0.57-0.97). The estimation of the HRDP may be a noninvasive and easy method to determine the AT, which could be used to adapt individualized training intensities for practitioners during water cycling classes.
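
    The statistical comparison itself is straightforward to reproduce; a minimal sketch with invented heart-rate pairs, using scipy for the paired two-tailed t-test and the Pearson correlation:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical anaerobic-threshold heart rates (b/min) from the two methods.
    hr_hrdp = np.array([168, 159, 172, 165, 170, 161, 166, 174])
    hr_vt   = np.array([167, 161, 171, 166, 168, 163, 167, 172])

    t, p_t = stats.ttest_rel(hr_hrdp, hr_vt)   # paired two-tailed t-test
    r, p_r = stats.pearsonr(hr_hrdp, hr_vt)    # agreement between methods
    print(f"paired t: p = {p_t:.3f}; Pearson r = {r:.2f} (p = {p_r:.3f})")
    ```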

  8. A demonstration of lack of variability among six tuberculin skin test readers.

    PubMed Central

    Perez-Stable, E J; Slutkin, G

    1985-01-01

    The variability of tuberculin skin test readings among six trained and experienced readers was evaluated using a modified sliding-caliper method. Each of 537 tests was read independently by two readers. There were 23 disagreements between paired readers, resulting in an overall interobserver reliability of 95.7 per cent. In 82 per cent of the paired readings the results differed by 2 mm or less. The lack of observer variability was likely due to the training and experience of the readers. PMID:4051078

  9. The "g" Factor and Cognitive Test Session Behavior: Using a Latent Variable Approach in Examining Measurement Invariance Across Age Groups on the WJ III

    ERIC Educational Resources Information Center

    Frisby, Craig L.; Wang, Ze

    2016-01-01

    Data from the standardization sample of the Woodcock-Johnson Psychoeducational Battery--Third Edition (WJ III) Cognitive standard battery and Test Session Observation Checklist items were analyzed to understand the relationship between g (general mental ability) and test session behavior (TSB; n = 5,769). Latent variable modeling methods were used…

  10. Mediators and moderators in early intervention research.

    PubMed

    Breitborde, Nicholas J K; Srihari, Vinod H; Pollard, Jessica M; Addington, Donald N; Woods, Scott W

    2010-05-01

    The goal of this paper is to provide clarification with regard to the nature of mediator and moderator variables and the statistical methods used to test for the existence of these variables. Particular attention will be devoted to discussing the ways in which the identification of mediator and moderator variables may help to advance the field of early intervention in psychiatry. We completed a literature review of the methodological strategies used to test for mediator and moderator variables. Although several tests for mediator variables are currently available, recent evaluations suggest that tests which directly evaluate the indirect effect are superior. With regard to moderator variables, two approaches ('pick-a-point' and regions of significance) are available, and we provide guidelines with regard to how researchers can determine which approach may be most appropriate to use for their specific study. Finally, we discuss how to evaluate the clinical importance of mediator and moderator relationships as well as the methodology to calculate statistical power for tests of mediation and moderation. Further exploration of mediator and moderator variables may provide valuable information with regard to interventions provided early in the course of a psychiatric illness.
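
    As a concrete illustration of a test that directly evaluates the indirect effect, a minimal bootstrap sketch of the a×b product of paths on simulated data (one common approach, not necessarily the exact procedure the authors recommend):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated trial: treatment X affects outcome Y partly through mediator M.
    n = 300
    X = rng.integers(0, 2, size=n).astype(float)
    M = 0.6 * X + rng.normal(size=n)
    Y = 0.4 * M + 0.2 * X + rng.normal(size=n)

    def indirect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                        # path a: X -> M
        design = np.column_stack([np.ones_like(x), x, m])
        b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # path b: M -> Y given X
        return a * b

    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, size=n)                  # resample individuals
        boot.append(indirect(X[idx], M[idx], Y[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect = {indirect(X, M, Y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
    ```

    A confidence interval that excludes zero is the usual evidence for mediation under this approach; the percentile interval avoids the normality assumption of the traditional z test.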

  11. Space Suit Joint Torque Testing

    NASA Technical Reports Server (NTRS)

    Valish, Dana J.

    2011-01-01

    In 2009 and early 2010, a test was performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design meets the requirements. However, because the original test was set up and conducted by a single test operator, there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future space suits. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data were compared using graphical and statistical analysis, and a variance in torque values for some of the tested joints was apparent. Potential variables that could have affected the data were identified and re-testing was conducted in an attempt to eliminate these variables. The results of the retest will be used to determine whether further testing and modification are necessary before the method can be validated.

  12. Association between heart rate variability and manual pulse rate.

    PubMed

    Hart, John

    2013-09-01

    One model for neurological assessment in chiropractic pertains to autonomic variability, tested commonly with heart rate variability (HRV). Since HRV may not be convenient to use on all patient visits, more user-friendly methods may help fill in the gaps. Accordingly, this study tests the association between manual pulse rate and heart rate variability. The manual rates were also compared to the heart rate derived from HRV. Forty-eight chiropractic students were examined with heart rate variability (SDNN and mean heart rate) and two manual radial pulse rate measurements. Inclusion criteria consisted of participants being chiropractic students. Exclusion criteria for 46 of the participants consisted of a body mass index greater than 30, age greater than 35, and a history of: a) dizziness upon standing, b) treatment of psychiatric disorders, and c) diabetes. No exclusion criteria were applied to the remaining two participants, who were also convenience-sample volunteers. Linear associations between the manual pulse rate methods and the two heart rate variability measures (SDNN and mean heart rate) were tested with Pearson's correlation and simple linear regression. Moderate-strength inverse (expected) correlations were observed between both manual pulse rate methods and SDNN (r = -0.640, 95% CI -0.781, -0.435; r = -0.632, 95% CI -0.776, -0.425). Strong direct (expected) relationships were observed between the manual pulse rate methods and the heart rate derived from HRV technology (r = 0.934, 95% CI 0.885, 0.962; r = 0.941, 95% CI 0.897, 0.966). Manual pulse rates may be a useful option for assessing autonomic variability. Furthermore, this study showed a strong relationship between manual pulse rates and heart rate derived from HRV technology.
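
    SDNN, the HRV measure used here, is simply the standard deviation of the normal-to-normal inter-beat intervals; a minimal sketch with invented values, first deriving SDNN and mean heart rate from one RR-interval series and then correlating manual pulse rates with SDNN across subjects:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # Hypothetical 5-min RR-interval series (ms) for one subject.
    rr = rng.normal(loc=850, scale=45, size=350)
    sdnn = rr.std(ddof=1)                      # SDNN, in ms
    hr_from_hrv = 60000 / rr.mean()            # mean heart rate from RR intervals
    print(f"SDNN = {sdnn:.1f} ms, HR = {hr_from_hrv:.1f} b/min")

    # Across subjects: correlate manual pulse rates with SDNN (invented data).
    manual = np.array([62, 71, 58, 80, 66, 75, 69, 84])
    sdnn_all = np.array([58, 41, 66, 30, 52, 37, 45, 24])
    r, p = stats.pearsonr(manual, sdnn_all)    # expected inverse relationship
    print(f"r = {r:.2f} (p = {p:.3f})")
    ```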

  13. Investigating Causal DIF via Propensity Score Methods

    ERIC Educational Resources Information Center

    Liu, Yan; Zumbo, Bruno D.; Gustafson, Paul; Huang, Yi; Kroc, Edward; Wu, Amery D.

    2016-01-01

    A variety of differential item functioning (DIF) methods have been proposed and used for ensuring that a test is fair to all test takers in a target population in the situations of, for example, a test being translated to other languages. However, once a method flags an item as DIF, it is difficult to conclude that the grouping variable (e.g.,…

  14. Resampling and Distribution of the Product Methods for Testing Indirect Effects in Complex Models

    ERIC Educational Resources Information Center

    Williams, Jason; MacKinnon, David P.

    2008-01-01

    Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of 2 normal random variables substantially outperform the traditional "z" test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a…

  15. Evaluation of variability and quality control procedures for a receptor-binding assay for paralytic shellfish poisoning toxins.

    PubMed

    Ruberu, S R; Langlois, G W; Masuda, M; Perera, S Kusum

    2012-01-01

    The receptor-binding assay (RBA) method for determining saxitoxin (STX) and its numerous analogues, which cause paralytic shellfish poisoning (PSP) in humans, was evaluated in a single-laboratory study. Each step of the assay preparation procedure, including the performance of the multi-detector TopCount® instrument, was evaluated for its contribution to method variability. The overall inherent RBA variability was determined to be 17%. Variability within the 12 detectors was observed; however, there was no reproducible pattern in detector performance. This observed variability among detectors could be attributed to other factors, such as pipetting errors. In an attempt to reduce the number of plates rejected due to excessive variability in the method's quality control parameters, a statistical approach was evaluated using either Grubbs' test or Student's t-test for rejecting outliers in the measurement of triplicate wells. This approach improved the ratio of accepted versus rejected plates, saving the cost and time of rerunning the assay. However, the potential reduction in accuracy and the lack of improvement in precision suggest caution when using this approach. The current study recommends an alternative quality control procedure for accepting or rejecting plates, in place of the criteria currently used in the published assay or of outlier testing. The recommended procedure involves the development of control charts to monitor the critical parameters identified in the published method (QC sample, EC₅₀, slope of calibration curve), with the addition of a fourth critical parameter, the top value (100% binding) of the calibration curve.
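
    Grubbs' test has a closed-form critical value based on the t-distribution; a minimal sketch (alpha and the well counts are invented) that also shows why triplicates give the test little power, consistent with the caution expressed above:

    ```python
    import numpy as np
    from scipy import stats

    def grubbs(x, alpha=0.05):
        """Two-sided Grubbs' statistic and its critical value."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        g = np.abs(x - x.mean()).max() / x.std(ddof=1)
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
        return g, g_crit

    wells = [5120.0, 5345.0, 9810.0]      # triplicate counts, one suspect well
    g, g_crit = grubbs(wells)
    print(f"G = {g:.4f}, critical = {g_crit:.4f}, reject = {g > g_crit}")
    # Note: for n = 3 the critical value (~1.154) sits just below the largest
    # attainable G ((n-1)/sqrt(n) = 1.1547), so triplicates rarely flag an outlier.
    ```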

  16. A New Approach for Culturing Lemna minor (Duckweed) and a Standardized Method for Using Atrazine as a Reference Toxicant

    EPA Science Inventory

    Lemna minor (Duckweed) is commonly used in aquatic toxicity investigations. Methods for culturing and testing with reference toxicants, such as atrazine, are somewhat variable among researchers. Our goal was to develop standardized methods of culturing and testing for use with L....

  17. Intra- and interlaboratory variability in acute toxicity tests with glochidia and juveniles of freshwater mussels (Unionidae)

    USGS Publications Warehouse

    Wang, N.; Augspurger, T.; Barnhart, M.C.; Bidwell, Joseph R.; Cope, W.G.; Dwyer, F.J.; Geis, S.; Greer, I.E.; Ingersoll, C.G.; Kane, C.M.; May, T.W.; Neves, R.J.; Newton, T.J.; Roberts, A.D.; Whites, D.W.

    2007-01-01

    The present study evaluated the performance and variability in acute toxicity tests with glochidia and newly transformed juvenile mussels using the standard methods outlined in American Society for Testing and Materials (ASTM). Multiple 48-h toxicity tests with glochidia and 96-h tests with juvenile mussels were conducted within a single laboratory and among five laboratories. All tests met the test acceptability requirements (e.g., ≥90% control survival). Intralaboratory tests were conducted over two consecutive mussel-spawning seasons with mucket (Actinonaias ligamentina) or fatmucket (Lampsilis siliquoidea) using copper, ammonia, or chlorine as a toxicant. For the glochidia of both species, the variability of intralaboratory median effective concentrations (EC50s) for the three toxicants, expressed as the coefficient of variation (CV), ranged from 14 to 27% in 24-h exposures and from 13 to 36% in 48-h exposures. The intralaboratory CV of copper EC50s for juvenile fatmucket was 24% in 48-h exposures and 13% in 96-h exposures. Interlaboratory tests were conducted with fatmucket glochidia and juveniles by five laboratories using copper as a toxicant. The interlaboratory CV of copper EC50s for glochidia was 13% in 24-h exposures and 24% in 48-h exposures, and the interlaboratory CV for juveniles was 22% in 48-h exposures and 42% in 96-h exposures. The high completion success and the overall low variability in test results indicate that the test methods have acceptable precision and can be performed routinely. © 2007 SETAC.

  18. Humidity effects on wire insulation breakdown strength.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appelhans, Leah

    2013-08-01

    Methods for testing the dielectric breakdown strength of insulation on metal wires under variable humidity conditions were developed. Two methods, an ASTM method and the twisted-pair method, were compared to determine whether the twisted-pair method could be used for determination of breakdown strength under variable humidity conditions. It was concluded that, although there were small differences in outcomes between the two testing methods, the non-standard method (twisted pair) would be appropriate for further testing of the effects of humidity on breakdown performance. The dielectric breakdown strength of 34G copper wire insulated with double-layer Poly-Thermaleze/Polyamide-imide insulation was measured using the twisted-pair method under a variety of relative humidity (RH) conditions and exposure times. Humidity at 50% RH and below was not found to affect the dielectric breakdown strength. At 80% RH the dielectric breakdown strength was significantly diminished. No effect of exposure time up to 140 hours was observed at 50% or 80% RH.

  19. Replicates in high dimensions, with applications to latent variable graphical models.

    PubMed

    Tan, Kean Ming; Ning, Yang; Witten, Daniela M; Liu, Han

    2016-12-01

    In classical statistics, much thought has been put into experimental design and data collection. In the high-dimensional setting, however, experimental design has been less of a focus. In this paper, we stress the importance of collecting multiple replicates for each subject in this setting. We consider learning the structure of a graphical model with latent variables, under the assumption that these variables take a constant value across replicates within each subject. By collecting multiple replicates for each subject, we are able to estimate the conditional dependence relationships among the observed variables given the latent variables. To test the null hypothesis of conditional independence between two observed variables, we propose a pairwise decorrelated score test. Theoretical guarantees are established for parameter estimation and for this test. We show that our proposal is able to estimate latent variable graphical models more accurately than some existing proposals, and apply the proposed method to a brain imaging dataset.

  20. Direct estimation and correction of bias from temporally variable non-stationary noise in a channelized Hotelling model observer.

    PubMed

    Fetterly, Kenneth A; Favazza, Christopher P

    2016-08-07

    Channelized Hotelling model observer (CHO) methods were developed to assess performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices (d′) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame⁻¹ resulted in d′ estimates which were as much as 2.9× greater than expected of a quantum-limited system. Over-estimation of d′ was presumed to be a result of bias error due to temporally variable non-stationary noise. Statistical theory which allows for independent contributions of 'signal' from a test object (o) and temporally variable non-stationary noise (ns) was developed. The theory demonstrates that the biased d′² is the sum of the detectability indices associated with the test object (d′²_o) and non-stationary noise (d′²_ns). Given the nature of the imaging system and the experimental methods, d′²_o cannot be directly determined independent of d′²_ns. However, methods to estimate d′²_ns independent of d′²_o were developed. In accordance with the theory, d′²_ns was subtracted from experimental estimates of d′², providing an unbiased estimate of d′²_o. Estimates of d′²_o exhibited trends consistent with expectations of an angiography system that is quantum limited for high DTD and compromised by detector electronic readout noise for low DTD conditions. Results suggest that these methods provide d′ estimates which are accurate and precise for [Formula: see text]. Further, results demonstrated that the source of bias was detector electronic readout noise. In summary, this work presents theory and methods to test for the presence of bias in Hotelling model observers due to temporally variable non-stationary noise and to correct this bias when the temporally variable non-stationary noise is independent of and additive with respect to the test object signal.
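
    The bias-subtraction step can be illustrated numerically. In the sketch below the channel outputs are simulated Gaussian data (invented, and with stationary noise, so the subtracted term here reflects finite-sampling bias only), not the angiographic measurements of the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def cho_d2(v1, v0):
        """Biased CHO detectability index squared from two sets of channel
        outputs (rows = images, columns = channels)."""
        dv = v1.mean(axis=0) - v0.mean(axis=0)
        S = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
        return float(dv @ np.linalg.solve(S, dv))     # d'^2 = dv^T S^-1 dv

    n, k = 200, 10                                    # images per set, channels
    signal = np.r_[1.0, np.zeros(k - 1)]              # object 'signal' in channel space
    v0a = rng.normal(size=(n, k))                     # signal-absent set A
    v0b = rng.normal(size=(n, k))                     # signal-absent set B
    v1 = rng.normal(size=(n, k)) + signal             # signal-present set

    d2_biased = cho_d2(v1, v0a)   # object term plus bias
    d2_noise = cho_d2(v0b, v0a)   # noise-only comparison estimates the bias term
    print(d2_biased, d2_noise, d2_biased - d2_noise)  # corrected estimate ~ 1.0
    ```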

  1. The variability of software scoring of the CDMAM phantom associated with a limited number of images

    NASA Astrophysics Data System (ADS)

    Yang, Chang-Ying J.; Van Metter, Richard

    2007-03-01

    Software scoring approaches provide an attractive alternative to human evaluation of CDMAM images from digital mammography systems, particularly for annual quality control testing as recommended by the European Protocol for the Quality Control of the Physical and Technical Aspects of Mammography Screening (EPQCM). Methods for correlating CDCOM-based results with human observer performance have been proposed. A common feature of all methods is the use of a small number (at most eight) of CDMAM images to evaluate the system. This study focuses on the potential variability in the estimated system performance that is associated with these methods. Sets of 36 CDMAM images were acquired under carefully controlled conditions from three different digital mammography systems. The threshold visibility thickness (TVT) for each disk diameter was determined using previously reported post-analysis methods from the CDCOM scorings for a randomly selected group of eight images for one measurement trial. This random selection process was repeated 3000 times to estimate the variability in the resulting TVT values for each disk diameter. The results from using different post-analysis methods, different random selection strategies and different digital systems were compared. Additional variability of the 0.1 mm disk diameter was explored by comparing the results from two different image data sets acquired under the same conditions from the same system. The magnitude and the type of error estimated for experimental data were explained through modeling. The modeled results also suggest a limitation in the current phantom design for the 0.1 mm diameter disks. Through modeling, it was also found that, because of the binomial nature of the CDMAM test statistics, the true variability of the test could be underestimated by the commonly used method of random re-sampling.
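
    The random-selection scheme itself is easy to emulate; a minimal sketch in which a stand-in statistic replaces the actual CDCOM/TVT post-analysis (the per-image scores are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    n_images, n_trials, subset = 36, 3000, 8
    # Stand-in per-image scores; in practice these come from CDCOM scorings.
    scores = rng.normal(loc=0.10, scale=0.02, size=n_images)

    def tvt(sample):
        """Placeholder for the TVT post-analysis of one eight-image trial."""
        return sample.mean()

    trials = np.array([
        tvt(scores[rng.choice(n_images, size=subset, replace=False)])
        for _ in range(n_trials)
    ])
    print(f"TVT spread over {n_trials} trials: sd = {trials.std(ddof=1):.4f}")
    ```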

  2. Impact of Uniform Methods on Interlaboratory Antibody Titration Variability: Antibody Titration and Uniform Methods.

    PubMed

    Bachegowda, Lohith S; Cheng, Yan H; Long, Thomas; Shaz, Beth H

    2017-01-01

    Substantial variability between different antibody titration methods prompted the development and introduction of uniform methods in 2008. The objective was to determine whether uniform methods consistently decrease interlaboratory variation in proficiency testing. Proficiency testing data for antibody titration between 2009 and 2013 were obtained from the College of American Pathologists. Each laboratory was supplied plasma and red cells to determine anti-A and anti-D antibody titers by its standard method: gel or tube, by uniform or other methods, at different testing phases (immediate spin and/or room temperature [anti-A], and/or anti-human globulin [AHG: anti-A and anti-D]) with different additives. Interlaboratory variations were compared by analyzing the distribution of titer results by method and phase. A median of 574 and 1100 responses were reported for anti-A and anti-D antibody titers, respectively, during the 5-year period. The three most frequently performed methods for anti-A antibody were uniform tube room temperature (147.5; range, 119-159), uniform tube AHG (143.5; range, 134-150), and other tube AHG (97; range, 82-116); for anti-D antibody, the methods were other tube (451; range, 431-465), uniform tube (404; range, 382-462), and uniform gel (137; range, 121-153). Of the more widely reported methods, the uniform gel AHG phase for anti-A and anti-D antibodies had the most participants with the same result (mode). For anti-A antibody, 0 of 8 (uniform versus other tube room temperature) and 1 of 8 (uniform versus other tube AHG) proficiency tests, and for anti-D antibody, 0 of 8 (uniform versus other tube) and 0 of 8 (uniform versus other gel) proficiency tests, showed significant reduction in titer variability. Uniform methods harmonize laboratory techniques but rarely reduce interlaboratory titer variance in comparison with other methods.

  3. Variable selection under multiple imputation using the bootstrap in a prognostic study

    PubMed Central

    Heymans, Martijn W; van Buuren, Stef; Knol, Dirk L; van Mechelen, Willem; de Vet, Henrica CW

    2007-01-01

    Background Missing data are a challenging problem in many prognostic studies. Multiple imputation (MI) accounts for imputation uncertainty and thereby allows for adequate statistical testing. We developed and tested a methodology combining MI with bootstrapping techniques for studying prognostic variable selection. Method In our prospective cohort study we merged data from three different randomized controlled trials (RCTs) to assess prognostic variables for chronicity of low back pain. Among the outcome and prognostic variables, data were missing in the range of 0 to 48.1%. We used four methods to investigate the influence of sampling and imputation variation, respectively: MI only, bootstrap only, and two methods that combine MI and bootstrapping. Variables were selected based on the inclusion frequency of each prognostic variable, i.e. the proportion of times that the variable appeared in the model. The discriminative and calibrative abilities of prognostic models developed by the four methods were assessed at different inclusion levels. Results We found that the effect of imputation variation on the inclusion frequency was larger than the effect of sampling variation. When MI and bootstrapping were combined, over the range of 0% (full model) to 90% variable selection, bootstrap-corrected c-index values of 0.70 to 0.71 and slope values of 0.64 to 0.86 were found. Conclusion We recommend accounting for both imputation and sampling variation in data sets with missing values. The new procedure of combining MI with bootstrapping for variable selection results in multivariable prognostic models with good performance and is therefore attractive to apply on data sets with missing values. PMID:17629912
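
    A minimal sketch of the inclusion-frequency idea, with column-mean imputation and a univariate correlation screen standing in for the paper's multiple imputation and model-building steps (all data and thresholds invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n, p, n_boot = 200, 6, 500
    X = rng.normal(size=(n, p))
    y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
    X[rng.random(size=X.shape) < 0.2] = np.nan        # ~20% missing at random

    def impute(Xm):
        """Stand-in for multiple imputation: column-mean fill."""
        return np.where(np.isnan(Xm), np.nanmean(Xm, axis=0), Xm)

    selected = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)              # bootstrap sample of subjects
        Xb, yb = impute(X[idx]), y[idx]
        # Stand-in selection rule: keep variables whose absolute correlation
        # with the outcome exceeds 0.2.
        r = np.array([np.corrcoef(Xb[:, j], yb)[0, 1] for j in range(p)])
        selected += np.abs(r) > 0.2

    print("inclusion frequencies:", (selected / n_boot).round(2))
    ```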

  4. Space Suit Joint Torque Measurement Method Validation

    NASA Technical Reports Server (NTRS)

    Valish, Dana; Eversley, Karina

    2012-01-01

    In 2009 and early 2010, a test method was developed and performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits. This was done in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design met the requirements. However, because the original test was set up and conducted by a single test operator, there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future development programs. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data were compared using graphical and statistical analysis; the results indicated a significant variance in the values reported for a subset of the re-tested joints. Potential variables that could have affected the data were identified and a third round of testing was conducted in an attempt to eliminate and/or quantify the effects of these variables. The results of the third test effort will be used to determine whether or not the proposed joint torque methodology can be applied to future space suit development contracts.

  5. Relative toxicity of products of pyrolysis and combustion of polymeric materials using various test conditions

    NASA Technical Reports Server (NTRS)

    Hilado, C. J.

    1976-01-01

    Relative toxicity data for a large number of natural and synthetic polymeric materials are presented which were obtained by 11 pyrolysis and three flaming-combustion test methods. The materials tested include flexible and rigid polyurethane foams, different kinds of fabrics and woods, and a variety of commodity polymers such as polyethylene. Animal exposure chambers of different volumes containing mice, rats, or rabbits were used in the tests, which were performed over the temperature range from ambient to 800 C with and without air flow or recirculation. The test results are found to be sensitive to such variables as exposure mode, temperature, air flow and dilution, material concentration, and animal species, but relative toxicity rankings appear to be similar for many methods and materials. It is concluded that times to incapacitation and to death provide a more suitable basis for relative toxicity rankings than percent mortality alone, that temperature is the most important variable in the tests reported, and that variables such as chamber volume and animal species may not significantly affect the rankings.

  6. Evaluating Programmed Test Interpretation Using Emotional Arousal as a Criterion.

    ERIC Educational Resources Information Center

    Forster, Jerald R.

    The question of whether or not the emotional aspects of test interpretation make it inappropriate for programmed methods was examined by comparing two methods of test result communication--programmed materials and verbal communication by a counselor. Dependent variables were measures of emotional arousal recorded by skin conductance units and the gain in…

  7. Experimental Development of Variability in Reading Rate in Grades Four, Five and Six.

    ERIC Educational Resources Information Center

    Harris, Theodore L.; and others

    Methods of testing, evaluating, and teaching reading in the fourth, fifth, and sixth grades are described. The construction and design of experimental tests of variability in reading speed are discussed. The design was based on the rationale that a meaningful reading-time score is directly related to the subject's purpose for reading. While reading speed may…

  8. Application of the modified chi-square ratio statistic in a stepwise procedure for cascade impactor equivalence testing.

    PubMed

    Weber, Benjamin; Lee, Sau L; Delvadia, Renishkumar; Lionberger, Robert; Li, Bing V; Tsong, Yi; Hochhaus, Guenther

    2015-03-01

    Equivalence testing of aerodynamic particle size distribution (APSD) through multi-stage cascade impactors (CIs) is important for establishing bioequivalence of orally inhaled drug products. Recent work demonstrated that the median of the modified chi-square ratio statistic (MmCSRS) is a promising metric for APSD equivalence testing of test (T) and reference (R) products, as it can be applied to a reduced number of CI sites that are more relevant for lung deposition. This metric is also less sensitive to the increased variability often observed at low-deposition sites. A method to establish critical values for the MmCSRS is described here. This method considers the variability of the R product by employing a reference variance scaling approach that allows definition of critical values as a function of the observed variability of the R product. A stepwise CI equivalence test is proposed that integrates the MmCSRS as a method for comparing the relative shapes of CI profiles and incorporates statistical tests for assessing equivalence of single actuation content and impactor sized mass. This stepwise CI equivalence test was applied to 55 published CI profile scenarios, which were classified as equivalent or inequivalent by members of the Product Quality Research Institute working group (PQRI WG). The results of the stepwise CI equivalence test using a 25% difference in MmCSRS as an acceptance criterion provided the best match with those of the PQRI WG, with the decisions of the two methods agreeing in 75% of the 55 CI profile scenarios.

  9. Factors Associated with HIV Testing Among Participants from Substance Use Disorder Treatment Programs in the US: A Machine Learning Approach.

    PubMed

    Pan, Yue; Liu, Hongmei; Metsch, Lisa R; Feaster, Daniel J

    2017-02-01

    HIV testing is the foundation for consolidated HIV treatment and prevention. In this study, we aim to discover the most relevant variables for predicting HIV testing uptake among substance users in substance use disorder treatment programs by applying random forest (RF), a robust multivariate statistical learning method. We also provide a descriptive introduction to this method for those who are unfamiliar with it. We used data from the National Institute on Drug Abuse Clinical Trials Network HIV testing and counseling study (CTN-0032). A total of 1281 HIV-negative or status-unknown participants from 12 US community-based substance use disorder treatment programs were included and were randomized into three HIV testing and counseling treatment groups. The a priori primary outcome was self-reported receipt of HIV test results. Classification accuracy of RF was compared to logistic regression, a standard statistical approach for binary outcomes. Variable importance measures for the RF model were used to select the most relevant variables. RF-based models produced much higher classification accuracy than those based on logistic regression. Treatment group was the most important predictor among all covariates, with a variable importance index of 12.9%. RF variable importance revealed that several types of condomless sex behaviors, condom use self-efficacy and attitudes towards condom use, and level of depression are the most important predictors of receipt of HIV testing results. There is a non-linear negative relationship between the count of condomless sex acts and the receipt of HIV testing. In conclusion, RF seems promising in discovering important factors related to HIV testing uptake among large numbers of predictors and should be encouraged in future HIV prevention and treatment research and intervention program evaluations.
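
    A minimal sketch of the comparison, using scikit-learn on invented data rather than the CTN-0032 covariates; the non-linear outcome illustrates why RF can outperform logistic regression, and feature_importances_ supplies the variable-importance ranking:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)

    # Invented predictors standing in for behavioral and attitudinal covariates.
    n = 1000
    X = rng.normal(size=(n, 8))
    logit = 1.5 * X[:, 0] - 1.0 * np.abs(X[:, 1]) + 0.5 * X[:, 2]
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))  # non-linear binary outcome

    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    print("RF accuracy:", cross_val_score(rf, X, y, cv=5).mean().round(3))
    print("LR accuracy:", cross_val_score(LogisticRegression(max_iter=1000),
                                          X, y, cv=5).mean().round(3))

    rf.fit(X, y)
    for j in np.argsort(rf.feature_importances_)[::-1]:
        print(f"x{j}: importance = {rf.feature_importances_[j]:.3f}")
    ```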

  10. Decadal predictions of Southern Ocean sea ice: testing different initialization methods with an Earth-system Model of Intermediate Complexity

    NASA Astrophysics Data System (ADS)

    Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana

    2013-04-01

    The sea ice extent in the Southern Ocean has increased since 1979, but the causes of this expansion have not been firmly identified. In particular, the contributions of internal variability and external forcing to this positive trend have not been fully established. In this region, the lack of observations and the overestimation of the internal variability of the sea ice by contemporary General Circulation Models (GCMs) make it difficult to understand the behaviour of the sea ice. Nevertheless, if its evolution is governed by the internal variability of the system, and if this internal variability is in some way predictable, a suitable initialization method should lead to simulation results that better fit reality. Current GCM decadal predictions are generally initialized through nudging towards some observed fields. This relatively simple method does not seem appropriate for the initialization of sea ice in the Southern Ocean. The present study aims at identifying an initialization method that could improve the quality of predictions of Southern Ocean sea ice at decadal timescales. We use LOVECLIM, an Earth-system Model of Intermediate Complexity that allows us to perform, within a reasonable computational time, the large number of simulations required to test different initialization procedures systematically. These involve three data assimilation methods: nudging, a particle filter, and an efficient particle filter. In a first step, simulations are performed in an idealized framework, i.e. data from a reference simulation of LOVECLIM are used instead of observations, hereinafter called pseudo-observations. In this configuration, the internal variability of the model obviously agrees with that of the pseudo-observations. This allows us to set aside the issues related to the overestimation of the internal variability by models compared to the observed one, and to work out a suitable methodology for assessing the efficiency of the initialization procedures tested. It also allows us to determine the upper limit of the improvement that can be expected if more sophisticated initialization methods are used in decadal prediction simulations and if models have an internal variability agreeing with the observed one. Furthermore, since pseudo-observations are available everywhere at any time step, we also analyse the differences between simulations initialized with a complete dataset of pseudo-observations and ones for which pseudo-observation data are not assimilated everywhere. In a second step, simulations are performed in a realistic framework, i.e. with the actual available observations. The same data assimilation methods are tested in order to check whether more sophisticated methods can improve the reliability and accuracy of decadal prediction simulations, even when performed with models that overestimate the internal variability of the sea ice extent in the Southern Ocean.

  11. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. The method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
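
    A minimal sketch of the sampling step only: draws of the population mean from a multivariate t and of the population covariance from a Wishart, both conditioned on a small invented dataset (the degrees-of-freedom choices and output sensitivities here are illustrative assumptions, not the paper's exact formulation):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Hypothetical initial measurements of two correlated input variables.
    data = rng.multivariate_normal([10.0, 5.0], [[4.0, 1.2], [1.2, 2.0]], size=15)
    n, d = data.shape
    xbar, S = data.mean(axis=0), np.cov(data, rowvar=False)

    # Realizations of the population covariance (Wishart centred on S) and of
    # the population mean (multivariate t centred on the sample mean).
    m = 2000
    cov_draws = stats.wishart.rvs(df=n - 1, scale=S / (n - 1), size=m,
                                  random_state=rng)
    mean_draws = stats.multivariate_t.rvs(loc=xbar, shape=S / n, df=n - d,
                                          size=m, random_state=rng)

    # Spread of the variance of a linear output y = x1 + 2*x2 across the
    # covariance realizations: the uncertainty that more data would shrink.
    w = np.array([1.0, 2.0])
    out_var = np.array([w @ C @ w for C in cov_draws])
    print(f"output variance: mean {out_var.mean():.2f}, sd {out_var.std(ddof=1):.2f}")
    print("sd of population-mean draws:", mean_draws.std(axis=0).round(2))
    ```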

  12. Correlation of Gerkin, Queen's College, George, and Jackson methods in estimating maximal oxygen consumption.

    PubMed

    Heydari, Payam; Varmazyar, Sakineh; Variani, Ali Safari; Hashemi, Fariba; Ataei, Seyed Sajad

    2017-10-01

    Testing of maximal oxygen consumption is the gold standard for measuring cardio-pulmonary fitness. This study aimed to determine the correlation of the Gerkin, Queen's College, George, and Jackson methods in estimating maximal oxygen consumption, and the demographic factors affecting maximal oxygen consumption. This descriptive cross-sectional study was conducted as a census of medical emergency students (n=57) at Qazvin University of Medical Sciences in 2016. The subjects first completed the General Health Questionnaire (PAR-Q) and demographic characteristics. Eligible subjects were then assessed using the Gerkin treadmill and Queen's College step exercise tests and the non-exercise George and Jackson methods. Data analysis was carried out using the independent t-test, one-way analysis of variance, and Pearson correlation in the SPSS software. The mean age of participants was 21.69±4.99 years. The mean maximal oxygen consumption from the Gerkin, Queen's College, George, and Jackson tests was 4.17, 3.36, 3.64, and 3.63 liters per minute, respectively. The Pearson test showed a significant correlation among all four tests. The George and Jackson tests had the greatest correlation (r=0.85, p<0.001). One-way analysis of variance and t-tests showed a significant relationship between the independent variables of weight and height and the dependent variable of maximal oxygen consumption in all four tests. There was also a significant relationship for body mass index in the Gerkin and Queen's College tests, and for hours of exercise per week in the George and Jackson tests (p<0.001). Given the observed correlations, these tests can potentially replace each other as necessary, so that the non-exercise Jackson test can be used instead of the Gerkin test.

  13. Formulation characteristics and in vitro release testing of cyclosporine ophthalmic ointments.

    PubMed

    Dong, Yixuan; Qu, Haiou; Pavurala, Naresh; Wang, Jiang; Sekar, Vasanthakumar; Martinez, Marilyn N; Fahmy, Raafat; Ashraf, Muhammad; Cruz, Celia N; Xu, Xiaoming

    2018-06-10

    The aim of the present study was to investigate the relationship between formulation/process variables versus the critical quality attributes (CQAs) of cyclosporine ophthalmic ointments and to explore the feasibility of using an in vitro approach to assess product sameness. A definitive screening design (DSD) was used to evaluate the impact of formulation and process variables. The formulation variables included drug percentage, percentage of corn oil and lanolin alcohol. The process variables studied were mixing temperature, mixing time and the method of mixing. The quality and performance attributes examined included drug assay, content uniformity, image analysis, rheology (storage modulus, shear viscosity) and in vitro drug release. Of the formulation variables evaluated, the percentage of the drug substance and the percentage of corn oil in the matrix were the most influential factors with respect to in vitro drug release. Conversely, the process parameters tested were observed to have minimal impact. An evaluation of the release mechanism of cyclosporine from the ointment revealed an interplay between formulation (e.g. physicochemical properties of the drug and ointment matrix type) and the release medium. These data provide a scientific basis to guide method development for in vitro drug release testing of ointment dosage forms. These results demonstrate that the in vitro methods used in this investigation were fit-for-purpose for detecting formulation and process changes and therefore amenable to assessment of product sameness. Published by Elsevier B.V.

  14. The potential of clustering methods to define intersection test scenarios: Assessing real-life performance of AEB.

    PubMed

    Sander, Ulrich; Lubbe, Nils

    2018-04-01

    Intersection accidents are frequent and harmful. The accident types 'straight crossing path' (SCP), 'left turn across path - oncoming direction' (LTAP/OD), and 'left-turn across path - lateral direction' (LTAP/LD) represent around 95% of all intersection accidents and one-third of all police-reported car-to-car accidents in Germany. The European New Car Assessment Program (Euro NCAP) have announced that intersection scenarios will be included in their rating from 2020; however, how these scenarios are to be tested has not been defined. This study investigates whether clustering methods can be used to identify a small number of test scenarios sufficiently representative of the accident dataset to evaluate Intersection Automated Emergency Braking (AEB). Data from the German In-Depth Accident Study (GIDAS) and the GIDAS-based Pre-Crash Matrix (PCM) from 1999 to 2016, containing 784 SCP and 453 LTAP/OD accidents, were analyzed with principal component methods to identify variables that account for the relevant total variances of the sample. Three different methods for data clustering were applied to each of the accident types, two similarity-based approaches, namely Hierarchical Clustering (HC) and Partitioning Around Medoids (PAM), and the probability-based Latent Class Clustering (LCC). The optimum number of clusters was derived for HC and PAM with the silhouette method. The PAM algorithm was both initiated with random start medoid selection and medoids from HC. For LCC, the Bayesian Information Criterion (BIC) was used to determine the optimal number of clusters. Test scenarios were defined from optimal cluster medoids weighted by their real-life representation in GIDAS. The set of variables for clustering was further varied to investigate the influence of variable type and character. We quantified how accurately each cluster variation represents real-life AEB performance using pre-crash simulations with PCM data and a generic algorithm for AEB intervention. The usage of different sets of clustering variables resulted in substantially different numbers of clusters. The stability of the resulting clusters increased with prioritization of categorical over continuous variables. For each different set of cluster variables, a strong in-cluster variance of avoided versus non-avoided accidents for the specified Intersection AEB was present. The medoids did not predict the most common Intersection AEB behavior in each cluster. Despite thorough analysis using various cluster methods and variable sets, it was impossible to reduce the diversity of intersection accidents into a set of test scenarios without compromising the ability to predict real-life performance of Intersection AEB. Although this does not imply that other methods cannot succeed, it was observed that small changes in the definition of a scenario resulted in a different avoidance outcome. Therefore, we suggest using limited physical testing to validate more extensive virtual simulations to evaluate vehicle safety. Copyright © 2018 Elsevier Ltd. All rights reserved.
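
    A minimal sketch of the cluster-then-pick-representative idea, using Ward hierarchical clustering with silhouette-based selection of k on invented accident descriptors (PAM, LCC, and the GIDAS frequency weighting of medoids are omitted):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(5)

    # Stand-in accident descriptors (e.g., speeds, angles), two loose groups.
    X = np.vstack([rng.normal(0, 1, size=(60, 3)), rng.normal(3, 1, size=(40, 3))])

    Z = linkage(X, method="ward")                 # hierarchical clustering (HC)

    best_k, best_s = None, -1.0
    for k in range(2, 8):
        labels = fcluster(Z, t=k, criterion="maxclust")
        s = silhouette_score(X, labels)
        if s > best_s:
            best_k, best_s = k, s
    print(f"silhouette-optimal k = {best_k} (score {best_s:.2f})")

    # One scenario per cluster: the member closest to the cluster mean,
    # a medoid-like representative.
    labels = fcluster(Z, t=best_k, criterion="maxclust")
    for c in range(1, best_k + 1):
        members = X[labels == c]
        medoid = members[np.argmin(((members - members.mean(0)) ** 2).sum(1))]
        print(f"cluster {c}: representative scenario {np.round(medoid, 2)}")
    ```

    The paper's negative finding is worth keeping in mind when using such sketches: a medoid can sit far from the decision boundary of an AEB algorithm, so a representative scenario need not predict real-life avoidance outcomes.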

  15. A rapid and repeatable method to deposit bioaerosols on material surfaces.

    PubMed

    Calfee, M Worth; Lee, Sang Don; Ryan, Shawn P

    2013-03-01

    A simple method for repeatably inoculating surfaces with a precise quantity of aerosolized spores was developed. Laboratory studies were conducted to evaluate the variability of the method within and between experiments, the spatial distribution of spore deposition, the applicability of the method to complex surface types, and the relationship between material surface roughness and spore recoveries. Surface concentrations, as estimated by recoveries from wetted-wipe sampling, were between 5×10³ and 1.5×10⁴ CFU cm⁻² across the entire area (930 cm²) inoculated. Between-test variability (CV) in spore recoveries was 40%, 81%, 66%, and 20% for stainless steel, concrete, wood, and drywall, respectively. Within-test variability was lower, and did not exceed 33%, 47%, 52%, and 20% for these materials. The data demonstrate that this method is repeatable, is effective at depositing spores across a target surface area, and can be used to dose complex materials such as concrete, wood, and drywall. In addition, the data demonstrate that surface sampling recoveries vary by material type, and this variability can be partially explained by the material surface roughness index. This deposition method was developed for use in biological agent detection, sampling, and decontamination studies; however, it is potentially beneficial to any scientific discipline that investigates surfaces bearing aerosol-borne particles. Published by Elsevier B.V.

  16. Methods specification for diagnostic test accuracy studies in fine-needle aspiration cytology: a survey of reporting practice.

    PubMed

    Schmidt, Robert L; Factor, Rachel E; Affolter, Kajsa E; Cook, Joshua B; Hall, Brian J; Narra, Krishna K; Witt, Benjamin L; Wilson, Andrew R; Layfield, Lester J

    2012-01-01

    Diagnostic test accuracy (DTA) studies on fine-needle aspiration cytology (FNAC) often show considerable variability in diagnostic accuracy between study centers. Many factors affect the accuracy of FNAC. A complete description of the testing parameters would help make valid comparisons between studies and determine causes of performance variation. We investigated the manner in which test conditions are specified in FNAC DTA studies to determine which parameters are most commonly specified and the frequency with which they are specified and to see whether there is significant variability in reporting practice. We identified 17 frequently reported test parameters and found significant variation in the reporting of these test specifications across studies. On average, studies reported 5 of the 17 items that would be required to specify the test conditions completely. A more complete and standardized reporting of methods, perhaps by means of a checklist, would improve the interpretation of FNAC DTA studies.

  17. A global × global test for testing associations between two large sets of variables.

    PubMed

    Chaturvedi, Nimisha; de Menezes, Renée X; Goeman, Jelle J

    2017-01-01

    In high-dimensional omics studies where multiple molecular profiles are obtained for each set of patients, there is often interest in identifying complex multivariate associations, for example, copy-number-regulated expression levels in a certain pathway or in a genomic region. To detect such associations, we present a novel approach to test for association between two sets of variables. Our approach generalizes the global test, which tests for association between a group of covariates and a single univariate response, to allow a high-dimensional multivariate response. We apply the method to several simulated datasets as well as two publicly available datasets, where we compare the performance of the multivariate global test (G2) with the univariate global test. The method is implemented in R and will be available as part of the globaltest package. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Longitudinal Reliability of Self-Reported Age at Menarche in Adolescent Girls: Variability across Time and Setting

    ERIC Educational Resources Information Center

    Dorn, Lorah D.; Sontag-Padilla, Lisa M.; Pabst, Stephanie; Tissot, Abbigail; Susman, Elizabeth J.

    2013-01-01

    Age at menarche is critical in research and clinical settings, yet there is a dearth of studies examining its reliability in adolescents. We examined age at menarche during adolescence, specifically, (a) average method reliability across 3 years, (b) test-retest reliability between time points and methods, (c) intraindividual variability of…

  19. Rolling-Element Fatigue Testing and Data Analysis - A Tutorial

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Zaretsky, Erwin V.

    2011-01-01

    In order to rank bearing materials, lubricants, and other design variables using rolling-element bench-type fatigue testing of bearing components and full-scale rolling-element bearing tests, the investigator needs to be cognizant of the variables that affect rolling-element fatigue life and be able to maintain and control them within an acceptable experimental tolerance. Once these variables are controlled, the number of tests and the test conditions must be specified to assure reasonable statistical certainty of the final results. There is a reasonable correlation between the results from elemental test rigs and those obtained with full-scale bearings. Using the statistical methods of W. Weibull and L. Johnson, the minimum number of tests required can be determined. This paper brings together and discusses the technical aspects of rolling-element fatigue testing and data analysis, and it makes recommendations to assure quality and reliable testing of rolling-element specimens and full-scale rolling-element bearings.
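
    A hedged sketch of Weibull life analysis in the spirit of the methods the tutorial covers: fit a two-parameter Weibull distribution to (hypothetical) bearing lives and report the L10 life, the life by which 10% of the population is expected to have failed. The maximum-likelihood fit below is one common choice; the classical Weibull/Johnson median-rank approach differs in detail.

```python
# Fit a two-parameter Weibull (location fixed at zero) to invented bearing
# lives and compute the L10 life from the fitted distribution.
import numpy as np
from scipy import stats

lives = np.array([41., 55., 63., 78., 90., 110., 132., 160.])  # Mrev, invented
shape, loc, scale = stats.weibull_min.fit(lives, floc=0)
l10 = scale * (-np.log(0.9)) ** (1.0 / shape)   # 10th percentile of the fit
print(f"Weibull slope = {shape:.2f}, L10 life = {l10:.1f}")
```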

  20. Do Two or More Multicomponent Instruments Measure the Same Construct? Testing Construct Congruence Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Tong, Bing

    2016-01-01

    A latent variable modeling procedure is discussed that can be used to test if two or more homogeneous multicomponent instruments with distinct components are measuring the same underlying construct. The method is widely applicable in scale construction and development research and can also be of special interest in construct validation studies.…

  1. Multicollinearity is a red herring in the search for moderator variables: A guide to interpreting moderated multiple regression models and a critique of Iacobucci, Schneider, Popovich, and Bakamitsos (2016).

    PubMed

    McClelland, Gary H; Irwin, Julie R; Disatnik, David; Sivan, Liron

    2017-02-01

    Multicollinearity is irrelevant to the search for moderator variables, contrary to the implications of Iacobucci, Schneider, Popovich, and Bakamitsos (Behavior Research Methods, 2016, this issue). Multicollinearity is like the red herring in a mystery novel that distracts the statistical detective from the pursuit of a true moderator relationship. We show multicollinearity is completely irrelevant for tests of moderator variables. Furthermore, readers of Iacobucci et al. might be confused by a number of their errors. We note those errors, but more positively, we describe a variety of methods researchers might use to test and interpret their moderated multiple regression models, including two-stage testing, mean-centering, spotlighting, orthogonalizing, and floodlighting without regard to putative issues of multicollinearity. We cite a number of recent studies in the psychological literature in which the researchers used these methods appropriately to test, to interpret, and to report their moderated multiple regression models. We conclude with a set of recommendations for the analysis and reporting of moderated multiple regression that should help researchers better understand their models and facilitate generalizations across studies.
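
    The paper's central point is easy to verify numerically: mean-centering changes the predictor-interaction correlations but leaves the moderator test untouched. In the simulated example below (our data, not the authors'), the t statistic for the interaction is identical with raw and centered predictors:

```python
# Demonstration that mean-centering does not change the interaction test
# in moderated multiple regression; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
x, z = rng.normal(size=n), rng.normal(size=n)
y = 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(size=n)
d = pd.DataFrame({"y": y, "x": x, "z": z,
                  "xc": x - x.mean(), "zc": z - z.mean()})

raw = smf.ols("y ~ x * z", data=d).fit()
cen = smf.ols("y ~ xc * zc", data=d).fit()
print(raw.tvalues["x:z"], cen.tvalues["xc:zc"])   # identical t statistics
```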

  2. Biological variability of the sweat chloride in diagnostic sweat tests: A retrospective analysis.

    PubMed

    Vermeulen, F; Lebecque, P; De Boeck, K; Leal, T

    2017-01-01

    The sweat test is the current gold standard for the diagnosis of cystic fibrosis (CF). CF is unlikely when sweat chloride (Clsw) is lower than 30 mmol/L, Clsw > 60 mmol/L is suggestive of CF, and values between 30 and 60 mmol/L are intermediate. To correctly interpret a sweat chloride value, the biological variability of sweat chloride has to be known. Sweat tests performed in two centers using the classic Gibson and Cooke method were retrospectively reviewed (n=5904). Within-test variability of Clsw was measured by comparing results from the right and left arm collected on the same day. Between-test variability was calculated from subjects with sweat tests performed on more than one occasion. Within-test variability of Clsw, calculated in 1022 subjects, was low, with differences between -3.2 mmol/L (p5) and +3.6 mmol/L (p95). Results from the left and right arm were classified differently in only 3 subjects. Between-test variability of Clsw in 197 subjects was larger, with differences between -18.2 mmol/L (p5) and +14.1 mmol/L (p95) between repeat tests. Changes in diagnostic conclusion were seen in 55/197 subjects, the most frequent being a change from the indeterminate to the 'CF unlikely' range (48/102). Variability of sweat chloride is substantial, with frequent changes in diagnostic conclusion, especially in the intermediate range. Copyright © 2016 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.

  3. The Reliability of Pharyngeal High Resolution Manometry with Impedance for Derivation of Measures of Swallowing Function in Healthy Volunteers

    PubMed Central

    Omari, Taher I.; Savilampi, Johanna; Kokkinn, Karmen; Schar, Mistyka; Lamvik, Kristin; Doeltgen, Sebastian; Cock, Charles

    2016-01-01

    Purpose. We evaluated the intra- and interrater agreement and test-retest reliability of analyst derivation of swallow function variables based on repeated high resolution manometry with impedance measurements. Methods. Five subjects swallowed 10 × 10 mL saline on two occasions one week apart producing a database of 100 swallows. Swallows were repeat-analysed by six observers using software. Swallow variables were indicative of contractility, intrabolus pressure, and flow timing. Results. The average intraclass correlation coefficients (ICC) for intra- and interrater comparisons of all variable means showed substantial to excellent agreement (intrarater ICC 0.85–1.00; mean interrater ICC 0.77–1.00). Test-retest results were less reliable. ICC for test-retest comparisons ranged from slight to excellent depending on the class of variable. Contractility variables differed most in terms of test-retest reliability. Amongst contractility variables, UES basal pressure showed excellent test-retest agreement (mean ICC 0.94), measures of UES postrelaxation contractile pressure showed moderate to substantial test-retest agreement (mean Interrater ICC 0.47–0.67), and test-retest agreement of pharyngeal contractile pressure ranged from slight to substantial (mean Interrater ICC 0.15–0.61). Conclusions. Test-retest reliability of HRIM measures depends on the class of variable. Measures of bolus distension pressure and flow timing appear to be more test-retest reliable than measures of contractility. PMID:27190520
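
    The intraclass correlation coefficients reported above can be computed from a subjects-by-raters matrix. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement) with invented ratings; the study's software and exact ICC form may differ:

```python
# ICC(2,1) from the two-way ANOVA decomposition of a subjects x raters
# matrix (Shrout-Fleiss form); ratings are hypothetical.
import numpy as np

def icc_2_1(Y):
    n, k = Y.shape
    grand = Y.mean()
    ms_r = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_c = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    sse = ((Y - Y.mean(axis=1, keepdims=True)
              - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

ratings = np.array([[9., 10., 9.], [6., 6., 7.], [8., 7., 8.], [7., 8., 8.]])
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```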

  4. Objective Evaluation of Vergence Disorders and a Research-Based Novel Method for Vergence Rehabilitation

    PubMed Central

    Kapoula, Zoï; Morize, Aurélien; Daniel, François; Jonqua, Fabienne; Orssaud, Christophe; Brémond-Gignac, Dominique

    2016-01-01

    Purpose We performed video-oculography to evaluate vergence eye movement abnormalities in students diagnosed clinically with vergence disorders. We tested the efficiency of a novel rehabilitation method and evaluated its benefits with video-oculography cross-correlated with clinical tests and symptomatology. Methods A total of 19 students (20–27 years old) underwent ophthalmologic, orthoptic examination, and a vergence test coupled with video-oculography. Eight patients were diagnosed with vergence disorders with a high symptomatology score (CISS) and performed a 5-week session of vergence rehabilitation. Vergence and rehabilitation tasks were performed with a trapezoid surface of light emitting diodes (LEDs) and adjacent buzzers (US 8851669). We used a novel Vergence double-step (Vd-s) protocol: the target stepped to a second position before the vergence movement completion. Afterward the vergence test was repeated 1 week and 1 month later. Results Abnormally increased intertrial variability was observed for many vergence parameters (gain, duration, and speed) for the subjects with vergence disorders. High CISS scores were correlated with variability and increased latency. After the Vd-s, variability of all parameters dropped to normal or better levels. Moreover, the convergence and divergence latency diminished significantly to levels better than normal; benefits were maintained 1 month after completion of Vd-s. CISS scores dropped to normal level, which was maintained up to 1 year. Conclusions and Translational Relevance: Intertrial variability is the major marker of vergence disorders. The Vd-s research-based method leads to normalization of vergence properties and lasting removal of symptoms. The efficiency of the method is due to the spatiotemporal parameters of repetitive trials that stimulate neural plasticity. PMID:26981330

  5. 40 CFR 60.46b - Compliance and performance test methods and procedures for particulate matter and nitrogen oxides.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....8, and shall conduct subsequent performance tests as requested by the Administrator, using the... Administrator when necessitated by process variables or other factors. (4) For Method 5 of appendix A of this... to the Administrator's satisfaction suitable methods to determine the average hourly heat input rate...

  6. An imbalance fault detection method based on data normalization and EMD for marine current turbines.

    PubMed

    Zhang, Milu; Wang, Tianzhen; Tang, Tianhao; Benbouzid, Mohamed; Diallo, Demba

    2017-05-01

    This paper proposes an imbalance fault detection method based on data normalization and Empirical Mode Decomposition (EMD) for a variable-speed direct-drive Marine Current Turbine (MCT) system. The method is based on the MCT stator current under conditions of wave and turbulence. The goal of this method is to extract the blade imbalance fault feature, which is concealed by the supply frequency and environmental noise. First, a Generalized Likelihood Ratio Test (GLRT) detector is developed and the monitoring variable is selected by analyzing the relationships between the variables. Then, the selected monitoring variable is converted into a time series through data normalization, which turns the imbalance fault characteristic frequency into a constant. Finally, the monitoring variable is filtered by the EMD method to eliminate the effect of turbulence. Experiments comparing different fault severities and turbulence intensities show that the proposed method is robust against turbulence. In comparison with other methods, the experimental results indicate the feasibility and efficacy of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Vibration Testing of Electrical Cables to Quantify Loads at Tie-Down Locations

    NASA Technical Reports Server (NTRS)

    Dutson, Joseph D.

    2013-01-01

    The standard method for defining static equivalent structural load factors for components is based on Miles' equation. Unless test data are available, 5% critical damping is assumed for all components when calculating loads. Application of this method to electrical cable tie-down hardware often results in high loads, which can exceed the capability of typical tie-down options such as cable ties and P-clamps. Random vibration testing of electrical cables was used to better understand the factors that influence component loads: natural frequency, damping, and mass participation. An initial round of vibration testing successfully identified variables of interest, checked out the test fixture and instrumentation, and provided justification for removing some conservatism in the standard method. Additional testing is planned that will include a larger range of cable sizes, treating the most significant contributors to load as variables, to further refine loads at cable tie-down points. Completed testing has provided justification to reduce loads at cable tie-downs by 45%, with additional refinement based on measured cable natural frequencies.
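
    Miles' equation itself is compact. A minimal sketch with illustrative numbers: the RMS acceleration response of a single-degree-of-freedom system with natural frequency fn and amplification Q = 1/(2*zeta), on a random-vibration input with acceleration PSD evaluated at fn:

```python
# Miles' equation: g_rms = sqrt((pi/2) * fn * Q * PSD(fn)). All input
# values are hypothetical; the 5% damping matches the assumption quoted
# in the abstract.
import math

fn = 120.0          # natural frequency, Hz (hypothetical)
zeta = 0.05         # 5% critical damping
Q = 1.0 / (2.0 * zeta)
psd_at_fn = 0.04    # input acceleration PSD at fn, g^2/Hz (hypothetical)

g_rms = math.sqrt((math.pi / 2.0) * fn * Q * psd_at_fn)
print(f"response = {g_rms:.1f} g RMS")   # a 3-sigma design load is 3 * g_rms
```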

  8. Can Statistical Machine Learning Algorithms Help for Classification of Obstructive Sleep Apnea Severity to Optimal Utilization of Polysomnography Resources?

    PubMed

    Bozkurt, Selen; Bostanci, Asli; Turhan, Murat

    2017-08-11

    The goal of this study is to evaluate machine learning methods for classifying the OSA severity of patients with suspected sleep-disordered breathing as normal, mild, moderate, or severe based on non-polysomnographic variables: 1) clinical data, 2) symptoms, and 3) physical examination. To produce classification models for OSA severity, five different machine learning methods (Bayesian network, Decision Tree, Random Forest, Neural Networks, and Logistic Regression) were trained, while relevant variables and their relationships were derived empirically from observed data. Each model was trained and evaluated using 10-fold cross-validation, and classification performance was assessed using true positive rate (TPR), false positive rate (FPR), Positive Predictive Value (PPV), F measure, and Area Under the Receiver Operating Characteristic curve (ROC-AUC). Results of 10-fold cross-validated tests with different variable settings promisingly indicated that the OSA severity of suspected OSA patients can be classified using non-polysomnographic features, with a true positive rate as high as 0.71 and a false positive rate as low as 0.15. Moreover, the test results for different variable settings revealed that the accuracy of the classification models was significantly improved when physical examination variables were added to the model. The study results showed that machine learning methods can be used to estimate the probabilities of no, mild, moderate, and severe obstructive sleep apnea; such approaches may improve initial OSA screening and help refer only suspected moderate or severe OSA patients to sleep laboratories for the expensive tests.
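
    The evaluation loop described above follows a standard pattern. A sketch with synthetic data in place of the clinical variables (the study's model set also included Bayesian networks and neural networks):

```python
# 10-fold cross-validation of two of the classifier families named above;
# make_classification stands in for the non-polysomnographic features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           n_classes=4, random_state=0)
for model in (RandomForestClassifier(random_state=0),
              LogisticRegression(max_iter=1000)):
    scores = cross_val_score(model, X, y, cv=10)   # accuracy by default
    print(type(model).__name__, scores.mean().round(3))
```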

  9. Group Comparisons in the Presence of Missing Data Using Latent Variable Modeling Techniques

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2010-01-01

    A latent variable modeling approach for examining population similarities and differences in observed variable relationship and mean indexes in incomplete data sets is discussed. The method is based on the full information maximum likelihood procedure of model fitting and parameter estimation. The procedure can be employed to test group identities…

  10. Dissociation Predicts Later Attention Problems in Sexually Abused Children

    ERIC Educational Resources Information Center

    Kaplow, Julie B.; Hall, Erin; Koenen, Karestan C.; Dodge, Kenneth A.; Amaya-Jackson, Lisa

    2008-01-01

    Objective: The goals of this research are to develop and test a prospective model of attention problems in sexually abused children that includes fixed variables (e.g., gender), trauma, and disclosure-related pathways. Methods: At Time 1, fixed variables, trauma variables, and stress reactions upon disclosure were assessed in 156 children aged…

  11. Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2011-01-01

    A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. The iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation: it simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example is discussed that illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set for a six-component balance.

  12. Estimating verbal fluency and naming ability from the test of premorbid functioning and demographic variables: Regression equations derived from a regional UK sample.

    PubMed

    Jenkinson, Toni-Marie; Muncer, Steven; Wheeler, Miranda; Brechin, Don; Evans, Stephen

    2018-06-01

    Neuropsychological assessment requires accurate estimation of an individual's premorbid cognitive abilities. Oral word reading tests, such as the test of premorbid functioning (TOPF), and demographic variables, such as age, sex, and level of education, provide a reasonable indication of premorbid intelligence, but their ability to predict other related cognitive abilities is less well understood. This study aimed to develop regression equations, based on the TOPF and demographic variables, to predict scores on tests of verbal fluency and naming ability. A sample of 119 healthy adults provided demographic information and were tested using the TOPF, FAS, animal naming test (ANT), and graded naming test (GNT). Multiple regression analyses, using the TOPF and demographics as predictor variables, were used to estimate verbal fluency and naming ability test scores. Change scores and cases of significant impairment were calculated for two clinical samples with diagnosed neurological conditions (TBI and meningioma) using the method in Knight, McMahon, Green, and Skeaff (). Demographic variables provided a significant contribution to the prediction of all verbal fluency and naming ability test scores; however, adding the TOPF score to the equation considerably improved prediction beyond that afforded by demographic variables alone. The percentage of variance accounted for by demographic variables and/or TOPF score was 19 per cent (FAS), 28 per cent (ANT), and 41 per cent (GNT). Change scores revealed significant differences in performance in the clinical groups, particularly the TBI group. Demographic variables, particularly education level, and scores on the TOPF should be taken into consideration when interpreting performance on tests of verbal fluency and naming ability. © 2017 The British Psychological Society.
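
    The regression-equation approach reduces to predicting an expected score and expressing the observed score as a discrepancy in residual-standard-deviation units. A hedged sketch with simulated data; the coefficients are not the study's published equations:

```python
# Predict an expected verbal-fluency score from TOPF and demographics,
# then compute a discrepancy (z) score using the residual SD. All data
# and coefficients are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 119
d = pd.DataFrame({"topf": rng.normal(100, 15, n),
                  "age": rng.uniform(18, 80, n),
                  "educ": rng.integers(8, 21, n)})
d["fas"] = 5 + 0.3 * d.topf + 0.5 * d.educ - 0.05 * d.age + rng.normal(0, 6, n)

fit = smf.ols("fas ~ topf + age + educ", data=d).fit()
resid_sd = np.sqrt(fit.mse_resid)
expected = fit.predict(d.iloc[[0]]).iloc[0]
z = (d.fas.iloc[0] - expected) / resid_sd   # discrepancy score
print(f"expected = {expected:.1f}, discrepancy z = {z:.2f}")
```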

  13. A comparison of the weights-of-evidence method and probabilistic neural networks

    USGS Publications Warehouse

    Singer, Donald A.; Kouda, Ryoichi

    1999-01-01

    The need to integrate large quantities of digital geoscience information to classify locations as mineral deposits or nondeposits has been met by the weights-of-evidence method in many situations. Widespread selection of this method may be more the result of its ease of use and interpretation than of comparisons with alternative methods. A comparison of the weights-of-evidence method to probabilistic neural networks is performed here with data from Chisel Lake–Andeson Lake, Manitoba, Canada. Each method is designed to estimate the probability of belonging to learned classes, where the estimated probabilities are used to classify the unknowns. Using these data, significantly lower classification error rates were observed for the neural network, not only when test and training data were the same (0.02 versus 23%), but also when validation data, not used in any training, were used to test the efficiency of classification (0.7 versus 17%). Although these data contain too few deposits, these tests demonstrate the neural network's ability to make unbiased probability estimates and its lower error rates, whether measured by number of polygons or by the area of land misclassified. For both methods, independent validation tests are required to ensure that estimates are representative of real-world results. Results from the weights-of-evidence method demonstrate a strong bias whereby most errors are barren areas misclassified as deposits. The weights-of-evidence method is based on Bayes rule, which requires independent variables in order to make unbiased estimates. The chi-square test for independence indicates no significant correlations among the variables in the Chisel Lake–Andeson Lake data. However, the expected-number-of-deposits test clearly demonstrates that these data violate the independence assumption. Other, independent simulations with three variables show that using variables with correlations of 1.0 can double the expected number of deposits, as can correlations of -1.0. Studies done in the 1970s on methods that use Bayes rule show that moderate correlations among attributes seriously affect estimates and that even small correlations lead to increases in misclassifications. Adverse effects have been observed with small to moderate correlations when only six to eight variables were used. Consistent evidence of upwardly biased probability estimates from multivariate methods founded on Bayes rule must be of considerable concern to institutions and governmental agencies where unbiased estimates are required. In addition to increasing the misclassification rate, biased probability estimates make classification into deposit and nondeposit classes an arbitrary subjective decision. The probabilistic neural network has no problem dealing with correlated variables; its performance depends strongly on having a thoroughly representative training set. Probabilistic neural networks or logistic regression should receive serious consideration where unbiased estimates are required. The weights-of-evidence method would serve to estimate thresholds between anomalies and background and for exploratory data analysis.

  14. A Method for Modeling the Intrinsic Dynamics of Intraindividual Variability: Recovering the Parameters of Simulated Oscillators in Multi-Wave Panel Data.

    ERIC Educational Resources Information Center

    Boker, Steven M.; Nesselroade, John R.

    2002-01-01

    Examined two methods for fitting models of intrinsic dynamics to intraindividual variability data by testing these techniques' behavior in equations through simulation studies. Among the main results is the demonstration that a local linear approximation of derivatives can accurately recover the parameters of a simulated linear oscillator, with…
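
    The local linear approximation of derivatives at the heart of the paper can be sketched in a few lines: estimate the second derivative from lagged differences and recover the frequency parameter of a linear oscillator by regression. Our simulation, noiseless for clarity:

```python
# Recover eta in x'' = -eta * x from time-delay-embedded data using
# difference-based derivative estimates (a local linear approximation).
import numpy as np

t = np.linspace(0, 20, 2001)
dt = t[1] - t[0]
eta = 4.0
x = np.cos(np.sqrt(eta) * t)             # simulated linear oscillator

tau = 5                                   # embedding lag, in samples
x0 = x[tau:-tau]
d2x = (x[2 * tau:] - 2 * x0 + x[:-2 * tau]) / (tau * dt) ** 2  # x'' estimate

eta_hat = -np.linalg.lstsq(x0[:, None], d2x, rcond=None)[0][0]
print(f"true eta = {eta}, recovered eta = {eta_hat:.3f}")
```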

  15. An Analysis of Methods Used To Reduce Nonresponse Bias in Survey Research.

    ERIC Educational Resources Information Center

    Johnson, Victoria A.

    The effectiveness of five methods used to estimate the population parameters of a variable of interest from a random sample in the presence of non-response to mail surveys was tested in conditions that vary the return rate and the relationship of the variable of interest to the likelihood of response. Data from 125,092 adult Alabama residents in…

  16. Development and Testing of a Coupled Ocean-atmosphere Mesoscale Ensemble Prediction System

    DTIC Science & Technology

    2011-06-28

    wind, temperature, and moisture variables, while the oceanographic ET is derived from ocean current, temperature, and salinity variables. Estimates of...uncertainty in the model. Rigorously accurate ensemble methods for describing the distribution of future states given past information include particle

  17. On the reliability of Fusarium oxysporum f. sp. niveum research: Do we need standardized testing methods?

    USDA-ARS?s Scientific Manuscript database

    Fusarium oxysporum f. sp. niveum (Fon) is a pathogen highly variable in aggressiveness that requires a standardized testing method to more accurately define isolate aggressiveness (races) and to identify resistant watermelon lines. Isolates of Fon vary in aggressiveness from weakly to highly aggres...

  18. Examination of the Diurnal Assumptions of the Test of Variables of Attention for Elementary Students

    ERIC Educational Resources Information Center

    Hurford, David P.; Lasater, Kara A.; Erickson, Sara E.; Kiesling, Nicole E.

    2013-01-01

    Objective: To examine the diurnal assumptions of the Test of Variables of Attention (TOVA). Method: The present study assessed 122 elementary students aged 5.5 to 10.0 years who were randomly assigned to one of four different groups based on time of administration (M-M: Morning-Morning, M-A: Morning-Afternoon, A-M: Afternoon-Morning, and A-A:…

  19. A Revised Simplex Method for Test Construction Problems. Research Report 90-5.

    ERIC Educational Resources Information Center

    Adema, Jos J.

    Linear programming models with 0-1 variables are useful for the construction of tests from an item bank. Most solution strategies for these models start with solving the relaxed 0-1 linear programming model, allowing the 0-1 variables to take on values between 0 and 1. Then, a 0-1 solution is found by just rounding, optimal rounding, or a…
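
    The relaxation step described above can be sketched with a toy test-assembly model: maximize total item information subject to a test-length constraint, solve with bounds 0 <= x_i <= 1, then round. Content constraints of a real model are omitted, and the data are invented:

```python
# LP relaxation of a 0-1 test-construction model, followed by the "just
# rounding" heuristic mentioned above.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
info = rng.uniform(0.2, 1.0, size=30)   # item information at the cut score
test_len = 10

# linprog minimizes, so negate the objective; constraint: sum(x) <= 10.
res = linprog(c=-info, A_ub=np.ones((1, 30)), b_ub=[test_len],
              bounds=[(0, 1)] * 30, method="highs")
selected = np.argsort(res.x)[-test_len:]   # round to the ten largest x_i
print(sorted(selected.tolist()))
```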

  20. The Effect of Educational Leadership on Organizational Variables: A Meta-Analysis Study in the Sample of Turkey

    ERIC Educational Resources Information Center

    Çogaltay, Nazim; Karadag, Engin

    2016-01-01

    The purpose of this study is to test the effect of educational leadership on some organizational variables using meta-analysis method. In this context, the results of independent researches were merged together and the hypotheses created within the scope of the study were tested. In order to determine the researches to be included in the study,…

  1. Variability of the QuantiFERON®-TB gold in-tube test using automated and manual methods.

    PubMed

    Whitworth, William C; Goodwin, Donald J; Racster, Laura; West, Kevin B; Chuke, Stella O; Daniels, Laura J; Campbell, Brandon H; Bohanon, Jamaria; Jaffar, Atheer T; Drane, Wanzer; Sjoberg, Paul A; Mazurek, Gerald H

    2014-01-01

    The QuantiFERON®-TB Gold In-Tube test (QFT-GIT) detects Mycobacterium tuberculosis (Mtb) infection by measuring release of interferon gamma (IFN-γ) when T-cells (in heparinized whole blood) are stimulated with specific Mtb antigens. The amount of IFN-γ is determined by enzyme-linked immunosorbent assay (ELISA). Automation of the ELISA method may reduce variability. To assess the impact of ELISA automation, we compared QFT-GIT results and variability when ELISAs were performed manually and with automation. Blood was collected into two sets of QFT-GIT tubes and processed at the same time. For each set, IFN-γ was measured in automated and manual ELISAs. Variability in interpretations and IFN-γ measurements was assessed between automated (A1 vs. A2) and manual (M1 vs. M2) ELISAs. Variability in IFN-γ measurements was also assessed on separate groups stratified by the mean of the four ELISAs. Subjects (N = 146) had two automated and two manual ELISAs completed. Overall, interpretations were discordant for 16 (11%) subjects. Excluding one subject with indeterminate results, 7 (4.8%) subjects had discordant automated interpretations and 10 (6.9%) subjects had discordant manual interpretations (p = 0.17). Quantitative variability was not uniform; within-subject variability was greater with higher IFN-γ measurements and with manual ELISAs. For subjects with mean TB responses within ±0.25 IU/mL of the 0.35 IU/mL cutoff, the within-subject standard deviation for two manual tests was 0.27 (CI95 = 0.22-0.37) IU/mL vs. 0.09 (CI95 = 0.07-0.12) IU/mL for two automated tests. QFT-GIT ELISA automation may reduce variability near the test cutoff. Methodological differences should be considered when interpreting and using IFN-γ release assays (IGRAs).
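
    The within-subject standard deviation quoted above has a standard estimator for paired repeat tests, sketched below; the IFN-γ values are invented and the study's exact computation may differ:

```python
# Within-subject SD from duplicate measurements: Sw = sqrt(sum(d^2)/(2n)),
# where d is the difference between a subject's two results.
import numpy as np

run1 = np.array([0.31, 0.44, 0.28, 0.52, 0.36])   # IU/mL, ELISA run 1
run2 = np.array([0.27, 0.49, 0.35, 0.41, 0.38])   # IU/mL, ELISA run 2
d = run1 - run2
sw = np.sqrt((d ** 2).sum() / (2 * len(d)))
print(f"within-subject SD = {sw:.3f} IU/mL")
```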

  2. Variable ranking based on the estimated degree of separation for two distributions of data by the length of the receiver operating characteristic curve.

    PubMed

    Maswadeh, Waleed M; Snyder, A Peter

    2015-05-30

    Variable responses are fundamental to all experiments, and they can consist of information-rich, redundant, and low-intensity signals. A dataset can consist of a collection of variable responses over multiple classes or groups. Usually, variables that contain very little information are removed from a dataset; sometimes all the variables are used in the data analysis phase. It is common practice to discriminate between two distributions of data; however, there is no formal algorithm to arrive at a degree of separation (DS) between two distributions of data. The DS is defined herein as the average of the sum of the areas from the probability density functions (PDFs) of A and B that contain at least a given percentage of A and/or B. Thus, DS90 is the average of the sum of the PDF areas of A and B that contain ≥90% of A and/or B. To arrive at a DS value, two synthesized PDFs or very large experimental datasets are required. Experimentally, it is common practice to generate relatively small datasets. Therefore, the challenge was to find a statistical parameter that can be used on small datasets to estimate, and highly correlate with, the DS90 parameter. Established statistical methods include the overlap area of the two data distribution profiles, Welch's t-test, the Kolmogorov-Smirnov (K-S) test, the Mann-Whitney-Wilcoxon test, and the area under the receiver operating characteristic (ROC) curve (AUC). The area between the ROC curve and the diagonal (ACD) and the length of the ROC curve (LROC) are introduced. The established, ACD, and LROC methods were correlated to the DS90 when applied to many pairs of synthesized PDFs. The LROC method provided the best linear correlation with, and estimation of, the DS90. As an example, the estimated DS90 from the LROC (DS90-LROC) is applied to a database of three Italian wines comprising thirteen variable responses for variable-ranking consideration. An important highlight of the DS90-LROC method is its use of the LROC curve methodology to test all variables one at a time against all pairs of classes in a dataset. Copyright © 2015 Elsevier B.V. All rights reserved.
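
    The LROC statistic itself is straightforward to compute: the summed Euclidean length of the segments of the ROC curve in the (FPR, TPR) plane. A sketch under our own implementation assumptions (the paper's exact computation may differ); perfect separation gives a length of 2, while complete overlap approaches sqrt(2):

```python
# Length of the ROC curve (LROC) for two simulated response distributions.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)
a = rng.normal(0.0, 1.0, 500)            # class A responses
b = rng.normal(1.5, 1.0, 500)            # class B responses
scores = np.concatenate([a, b])
labels = np.concatenate([np.zeros(500), np.ones(500)])

fpr, tpr, _ = roc_curve(labels, scores)
lroc = np.hypot(np.diff(fpr), np.diff(tpr)).sum()
print(f"LROC = {lroc:.3f}")
```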

  3. Methodology for the development of normative data for Spanish-speaking pediatric populations.

    PubMed

    Rivera, D; Arango-Lasprilla, J C

    2017-01-01

    To describe the methodology utilized to calculate reliability and to generate norms for 10 neuropsychological tests for children in Spanish-speaking countries. The study sample consisted of 4,373 healthy children from nine countries in Latin America (Chile, Cuba, Ecuador, Guatemala, Honduras, Mexico, Paraguay, Peru, and Puerto Rico) and Spain. Inclusion criteria for all countries were an age between 6 and 17 years, an Intelligence Quotient of ≥80 on the Test of Non-Verbal Intelligence (TONI-2), and a score of <19 on the Children's Depression Inventory. Participants completed 10 neuropsychological tests. Reliability and norms were calculated for all tests. Test-retest analysis showed excellent or good reliability on all tests (r's>0.55; p's<0.001) except M-WCST perseverative errors, whose coefficient magnitude was fair. All scores were normed using multiple linear regressions and the standard deviations of residual values. Age, age², sex, and mean level of parental education (MLPE) were included as predictors in the models by country. The non-significant variables (p > 0.05) were removed and the analyses were run again. This is the largest normative study of Spanish-speaking children and adolescents in the world. For the generation of normative data, a method based on linear regression models and the standard deviation of residual values was used. This method allows determination of the specific variables that predict test scores, helps identify and control for collinearity of predictive variables, and generates continuous and more reliable norms than traditional methods.

  4. Study on color difference estimation method of medicine biochemical analysis

    NASA Astrophysics Data System (ADS)

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun

    2006-01-01

    Biochemical analysis is an important inspection and diagnostic method in hospital clinics, and the biochemical analysis of urine is one important item. Urine test paper shows a corresponding color for each detection target or degree of illness. The color difference between a standard threshold and the color of the urine test paper can be used to judge the degree of illness, enabling further analysis and diagnosis. Color is a three-dimensional physical variable with a psychological component, while reflectance is one-dimensional; therefore, estimating color difference in urine testing can achieve better precision and convenience than the conventional one-dimensional reflectance method and can support an accurate diagnosis. A digital camera can easily capture an image of urine test paper and is therefore a convenient tool for urine biochemical analysis. In the experiment, color images of urine test paper were taken with a popular color digital camera and saved on a computer running simple color-space conversion (RGB -> XYZ -> L*a*b*) and calculation software. Test samples were graded according to intelligent detection of quantitative color. Because the images taken at each time point were saved on the computer, the whole course of an illness can be monitored. This method can also be used in other medical biochemical analyses related to color. Experimental results show that the test method is quick and accurate; it can be used in hospitals, calibration organizations, and homes, so its application prospects are extensive.
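
    The colour pipeline described above (RGB -> XYZ -> L*a*b*, followed by a colour difference) can be sketched with the standard sRGB/D65 constants; a calibrated camera workflow would add steps omitted here:

```python
# sRGB -> XYZ -> CIELAB conversion and a CIE76 Delta-E colour difference
# between a test-paper patch and a reference colour (both invented).
import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],      # linear sRGB -> XYZ (D65)
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.9505, 1.0000, 1.0890])   # D65 reference white

def rgb_to_lab(rgb8):
    c = np.asarray(rgb8, dtype=float) / 255.0
    c = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)
    xyz = M @ c / WHITE
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

delta_e = np.linalg.norm(rgb_to_lab([200, 150, 60]) - rgb_to_lab([190, 140, 70]))
print(f"Delta E (CIE76) = {delta_e:.1f}")
```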

  5. Implementation of testing equipment for asphalt materials.

    DOT National Transportation Integrated Search

    2009-05-01

    Three new automated methods for related asphalt testing were evaluated under this study in an effort to save time and reduce variability in testing. The Thermolyne SSDetect, Instrotek CoreLok and Instrotek CoreDry devices were evaluated throughout t...

  7. Relationships between Speech Intelligibility and Word Articulation Scores in Children with Hearing Loss

    PubMed Central

    Ertmer, David J.

    2012-01-01

    Purpose This investigation sought to determine whether scores from a commonly used word-based articulation test are closely associated with speech intelligibility in children with hearing loss. If the scores are closely related, articulation testing results might be used to estimate intelligibility. If not, the importance of direct assessment of intelligibility would be reinforced. Methods Forty-four children with hearing losses produced words from the Goldman-Fristoe Test of Articulation-2 and sets of 10 short sentences. Correlation analyses were conducted between scores for seven word-based predictor variables and percent-intelligible scores derived from listener judgments of stimulus sentences. Results Six of seven predictor variables were significantly correlated with percent-intelligible scores. However, regression analysis revealed that no single predictor variable or multi-variable model accounted for more than 25% of the variability in intelligibility scores. Implications The findings confirm the importance of assessing connected speech intelligibility directly. PMID:20220022

  8. The Work Endurance Recovery Method for Quantifying Training Loads in Judo.

    PubMed

    Morales, Jose; Franchini, Emerson; Garcia-Massó, Xavier; Solana-Tramunt, Mónica; Buscà, Bernat; González, Luis-Millán

    2016-10-01

    To adapt the work endurance recovery (WER) method based on randori maximal time to exhaustion (RMTE) for combat situations in judo. Eleven international-standard judo athletes (7 men and 4 women; mean age 20.73 ± 2.49 y, height 1.72 ± 0.11 m, body mass 67.36 ± 10.67 kg) were recruited to take part in the study. All participants performed a maximal incremental test (MIT), a Wingate test (WIN), a Special Judo Fitness Test (SJFT), and 2 RMTE tests. They then took part in a session at an international training camp in Barcelona, Spain, in which 4 methods of load quantification were implemented: the WER method, the Stagno method, the Lucia method, and the session rating of perceived exertion (RPEsession). RMTE demonstrated a very high test-retest reliability (intraclass correlation coefficient = .91), and correlations of the performance tests ranged from moderate to high: RMTE and MIT (r = .66), RMTE and WIN variables (r = .38-.53), RMTE and SJFT variables (r = .74-.77). The correlation between the WER method, which considers time to exhaustion, and the other systems for quantifying training load was high: WER and RPEsession (r = .87), WER and Stagno (r = .77), WER and Lucia (r = .73). A comparative repeated-measures analysis of variance of the normalized values of the quantification did not yield statistically significant differences. The WER method using RMTE is highly adaptable to quantify randori judo sessions and enables one to plan a priori individualized training loads.

  9. The Construction of a Long Variable of Conceptual Development in Social Education.

    ERIC Educational Resources Information Center

    Doig, Brian

    This paper demonstrates a method for constructing long variables using items that elicit partially correct responses across ages. Long variables may be defined by students at different ages (year levels) attempting common items within a test containing other items considered to be appropriate for each age or year level. A developmental model of…

  10. An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor.

    PubMed

    Qu, Fangfang; Ren, Dong; Wang, Jihua; Zhang, Zhong; Lu, Na; Meng, Lei

    2016-01-11

    Spectral analysis based on near-infrared (NIR) sensors is a powerful tool for complex information processing and high-precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of small sample sizes in the successive projections algorithm (SPA) as well as the lack of association between selected variables and the analyte. The proposed method is an evaluated bootstrap ensemble SPA method (EBSPA) based on a variable evaluation index (EI) for variable selection, and it is applied to the quantitative prediction of alcohol concentration in liquor using an NIR sensor. In the experiment, the proposed EBSPA is established with three kinds of modeling methods to test their performance. In addition, the proposed EBSPA combined with partial least squares is compared with other state-of-the-art variable selection methods. The results show that the proposed method can overcome the defects of SPA, and it has the best generalization performance and stability. Furthermore, the physical meaning of the variables selected from the near-infrared sensor data is clear, which can effectively reduce the number of variables and improve prediction accuracy.

  11. SPSS and SAS programming for the testing of mediation models.

    PubMed

    Dudley, William N; Benuzillo, Jose G; Carrico, Mineh S

    2004-01-01

    Mediation modeling can explain the nature of the relation among three or more variables. In addition, it can be used to show how a variable mediates the relation between levels of intervention and outcome. The Sobel test, developed in 1982, provides a statistical method for determining the influence of a mediator on an intervention or outcome. Although interactive Web-based and stand-alone methods exist for computing the Sobel test, SPSS and SAS programs that automatically run the required regression analyses and computations increase the accessibility of mediation modeling to nursing researchers. To illustrate the utility of the Sobel test and to make this programming available to the Nursing Research audience in both SAS and SPSS. The history, logic, and technical aspects of mediation testing are introduced. The syntax files sobel.sps and sobel.sas, created to automate the computation of the regression analysis and test statistic, are available from the corresponding author. The reported programming allows the user to complete mediation testing with the user's own data in a single-step fashion. A technical manual included with the programming provides instruction on program use and interpretation of the output. Mediation modeling is a useful tool for describing the relation between three or more variables. Programming and manuals for using this model are made available.
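
    The statistic the article automates is compact enough to show directly. A sketch with hypothetical regression output, where a is the X-to-mediator path, b the mediator-to-Y path adjusted for X, and sa, sb their standard errors:

```python
# Sobel test: z = a*b / sqrt(b^2*sa^2 + a^2*sb^2), with a normal reference
# distribution; the path estimates below are invented.
import math
from scipy.stats import norm

a, sa = 0.48, 0.11   # X -> mediator path and its SE
b, sb = 0.35, 0.09   # mediator -> Y path (adjusted for X) and its SE

z = (a * b) / math.sqrt(b**2 * sa**2 + a**2 * sb**2)
p = 2 * (1 - norm.cdf(abs(z)))
print(f"Sobel z = {z:.2f}, p = {p:.4f}")
```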

  12. Multi-locus variable number tandem repeat analysis for Escherichia coli causing extraintestinal infections.

    PubMed

    Manges, Amee R; Tellis, Patricia A; Vincent, Caroline; Lifeso, Kimberley; Geneau, Geneviève; Reid-Smith, Richard J; Boerlin, Patrick

    2009-11-01

    Discriminatory genotyping methods for the analysis of Escherichia coli other than O157:H7 are necessary for public health-related activities. A new multi-locus variable number tandem repeat analysis protocol is presented; this method achieves an index of discrimination of 99.5% and is reproducible and valid when tested on a collection of 836 diverse E. coli.
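
    The index of discrimination quoted above is conventionally the Hunter-Gaston index (our assumption, since the abstract does not name it), which is simple to compute from the sizes of the observed genotype groups:

```python
# Hunter-Gaston index of discrimination:
# D = 1 - sum(n_j * (n_j - 1)) / (N * (N - 1)), with n_j the number of
# isolates of the j-th MLVA type; the type counts here are invented.
from collections import Counter

types = ["A"] * 3 + ["B"] * 2 + list("CDEFGHIJ")   # hypothetical MLVA types
counts = Counter(types).values()
N = sum(counts)
D = 1 - sum(n * (n - 1) for n in counts) / (N * (N - 1))
print(f"index of discrimination = {D:.3f}")
```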

  13. Evaluating mediation and moderation effects in school psychology: A presentation of methods and review of current practice

    PubMed Central

    Fairchild, Amanda J.; McQuillin, Samuel D.

    2017-01-01

    Third variable effects elucidate the relation between two other variables, and can describe why they are related or under what conditions they are related. This article demonstrates methods to analyze two third-variable effects: moderation and mediation. The utility of examining moderation and mediation effects in school psychology is described and current use of the analyses in applied school psychology research is reviewed and evaluated. Proper statistical methods to test the effects are presented, and different effect size measures for the models are provided. Extensions of the basic moderator and mediator models are also described. PMID:20006988

  14. Evaluating mediation and moderation effects in school psychology: a presentation of methods and review of current practice.

    PubMed

    Fairchild, Amanda J; McQuillin, Samuel D

    2010-02-01

    Third variable effects elucidate the relation between two other variables, and can describe why they are related or under what conditions they are related. This article demonstrates methods to analyze two third-variable effects: moderation and mediation. The utility of examining moderation and mediation effects in school psychology is described and current use of the analyses in applied school psychology research is reviewed and evaluated. Proper statistical methods to test the effects are presented, and different effect size measures for the models are provided. Extensions of the basic moderator and mediator models are also described.

  15. Clustering and variable selection in the presence of mixed variable types and missing data.

    PubMed

    Storlie, C B; Myers, S M; Katusic, S K; Weaver, A L; Voigt, R G; Croarkin, P E; Stoeckel, R E; Port, J D

    2018-05-17

    We consider the problem of model-based clustering in the presence of many correlated, mixed continuous, and discrete variables, some of which may have missing values. Discrete variables are treated with a latent continuous variable approach, and the Dirichlet process is used to construct a mixture model with an unknown number of components. Variable selection is also performed to identify the variables that are most influential for determining cluster membership. The work is motivated by the need to cluster patients thought to potentially have autism spectrum disorder on the basis of many cognitive and/or behavioral test scores. There are a modest number of patients (486) in the data set along with many (55) test score variables (many of which are discrete valued and/or missing). The goal of the work is to (1) cluster these patients into similar groups to help identify those with similar clinical presentation and (2) identify a sparse subset of tests that inform the clusters in order to eliminate unnecessary testing. The proposed approach compares very favorably with other methods via simulation of problems of this type. The results of the autism spectrum disorder analysis suggested 3 clusters to be most likely, while only 4 test scores had high (>0.5) posterior probability of being informative. This will result in much more efficient and informative testing. The need to cluster observations on the basis of many correlated, continuous/discrete variables with missing values is a common problem in the health sciences as well as in many other disciplines. Copyright © 2018 John Wiley & Sons, Ltd.
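
    A truncated Dirichlet-process mixture with an unknown effective number of components can be sketched with scikit-learn; the paper's model additionally handles discrete variables, missing data, and variable selection, none of which this toy example attempts:

```python
# Dirichlet-process Gaussian mixture (truncated variational approximation)
# on synthetic continuous data; the number of occupied components is
# inferred rather than fixed.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 0.4, size=(150, 3)) for m in (0.0, 2.0, 4.0)])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0).fit(X)
labels = dpgmm.predict(X)
print("occupied components:", np.unique(labels).size)
```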

  16. Dissolution comparisons using a Multivariate Statistical Distance (MSD) test and a comparison of various approaches for calculating the measurements of dissolution profile comparison.

    PubMed

    Cardot, J-M; Roudier, B; Schütz, H

    2017-07-01

    The f2 test is generally used for comparing dissolution profiles. In cases of high variability, the f2 test is not applicable, and the Multivariate Statistical Distance (MSD) test is frequently proposed as an alternative by the FDA and EMA. The guidelines provide only general recommendations. MSD tests can be performed either on raw data, with or without time as a variable, or on the parameters of fitted models. In addition, the data can be limited, as in the case of the f2 test, to dissolutions of up to 85%, or all available data can be used. In the context of the present paper, the recommended calculation included all raw dissolution data up to the first point greater than 85% as variables, without the various times as parameters. The proposed MSD overcomes several drawbacks found in other methods.
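
    For reference, the f2 similarity factor that the MSD test replaces under high variability is computed over the common time points; f2 >= 50 is the usual similarity criterion. The profiles below are invented:

```python
# f2 = 50 * log10(100 / sqrt(1 + mean((R_t - T_t)^2))) over the common
# dissolution time points.
import numpy as np

ref  = np.array([22., 41., 60., 78., 88.])   # % dissolved, reference
test = np.array([18., 37., 57., 74., 86.])   # % dissolved, test batch

f2 = 50 * np.log10(100 / np.sqrt(1 + np.mean((ref - test) ** 2)))
print(f"f2 = {f2:.1f}")
```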

  17. Geophysical testing of rock and its relationships to physical properties

    DOT National Transportation Integrated Search

    2011-02-01

    Testing techniques were designed to characterize spatial variability in geotechnical engineering physical parameters of : rock formations. Standard methods using seismic waves, which are routinely used for shallow subsurface : investigation, have lim...

  18. A FORTRAN technique for correlating a circular environmental variable with a linear physiological variable in the sugar maple.

    PubMed

    Pease, J M; Morselli, M F

    1987-01-01

    This paper deals with a computer program adapted to a statistical method for analyzing an unlimited quantity of binary recorded data on an independent circular variable (e.g., wind direction) and a linear variable (e.g., maple sap flow volume). Circular variables cannot be statistically analyzed with linear methods unless they have been transformed. The program calculates a critical quantity, the acrophase angle (Φ, φ0). The technique is adapted from original mathematics [1] and is written in Fortran 77 for easier conversion between computer networks. Correlation analysis can be performed following the program, as can regression, which, because of the circular nature of the independent variable, becomes periodic regression. The technique was tested on a file of approximately 4050 data pairs.
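
    The core of such a periodic regression is to represent the circular variable by its sine and cosine; the acrophase is then the angle at which the fitted cosine peaks. A sketch with simulated data (our illustration of the general technique, not the Fortran program itself):

```python
# Periodic regression y = b0 + b1*cos(theta) + b2*sin(theta); the
# acrophase estimate is atan2(b2, b1). Data are simulated.
import numpy as np

rng = np.random.default_rng(6)
theta = rng.uniform(0, 2 * np.pi, 200)                    # e.g. wind direction
phi0 = np.pi / 3                                          # true acrophase
y = 2.0 * np.cos(theta - phi0) + rng.normal(0, 0.5, 200)  # e.g. sap flow

Xd = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
b = np.linalg.lstsq(Xd, y, rcond=None)[0]
print(f"estimated acrophase = {np.degrees(np.arctan2(b[2], b[1])):.1f} deg")
```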

  19. Active Learning to Overcome Sample Selection Bias: Application to Photometric Variable Star Classification

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.; Starr, Dan L.; Brink, Henrik; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; James, J. Berian; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  20. Apparatus for and method of testing an electrical ground fault circuit interrupt device

    DOEpatents

    Andrews, L.B.

    1998-08-18

    An apparatus for testing a ground fault circuit interrupt device includes a processor, an input device connected to the processor for receiving input from an operator, a storage media connected to the processor for storing test data, an output device connected to the processor for outputting information corresponding to the test data to the operator, and a calibrated variable load circuit connected between the processor and the ground fault circuit interrupt device. The ground fault circuit interrupt device is configured to trip a corresponding circuit breaker. The processor is configured to receive signals from the calibrated variable load circuit and to process the signals to determine a trip threshold current and/or a trip time. A method of testing the ground fault circuit interrupt device includes a first step of providing an identification for the ground fault circuit interrupt device. Test data is then recorded in accordance with the identification. By comparing test data from an initial test with test data from a subsequent test, a trend of performance for the ground fault circuit interrupt device is determined. 17 figs.

  1. Apparatus for and method of testing an electrical ground fault circuit interrupt device

    DOEpatents

    Andrews, Lowell B.

    1998-01-01

    An apparatus for testing a ground fault circuit interrupt device includes a processor, an input device connected to the processor for receiving input from an operator, a storage media connected to the processor for storing test data, an output device connected to the processor for outputting information corresponding to the test data to the operator, and a calibrated variable load circuit connected between the processor and the ground fault circuit interrupt device. The ground fault circuit interrupt device is configured to trip a corresponding circuit breaker. The processor is configured to receive signals from the calibrated variable load circuit and to process the signals to determine a trip threshold current and/or a trip time. A method of testing the ground fault circuit interrupt device includes a first step of providing an identification for the ground fault circuit interrupt device. Test data is then recorded in accordance with the identification. By comparing test data from an initial test with test data from a subsequent test, a trend of performance for the ground fault circuit interrupt device is determined.

  2. Prediction system of hydroponic plant growth and development using algorithm Fuzzy Mamdani method

    NASA Astrophysics Data System (ADS)

    Sudana, I. Made; Purnawirawan, Okta; Arief, Ulfa Mediaty

    2017-03-01

    Hydroponics is a method of farming without soil. One hydroponic plant is watercress (Nasturtium officinale). The growth and development of hydroponic watercress are influenced by nutrient levels, acidity, and temperature. These independent variables can be used as system inputs to predict the level of plant growth and development. The prediction system uses the Fuzzy Mamdani algorithm method. The system was built to implement a Fuzzy Inference System (FIS) as part of the Fuzzy Logic Toolbox (FLT), using MATLAB R2007b. FIS is a computing system that works on the principle of fuzzy reasoning, which is similar to human reasoning. Basically, FIS consists of four units: a fuzzification unit, a fuzzy-logic reasoning unit, a knowledge-base unit, and a defuzzification unit. The effect of the independent variables on plant growth and development can be visualized with the three-dimensional FIS output-surface diagram. In addition, statistical tests of data from the prediction system were performed using the multiple linear regression method, including multiple linear regression analysis, the T test, the F test, the coefficient of determination, and predictor contributions, calculated using SPSS (Statistical Product and Service Solutions) software.
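
    The four FIS units named above can be shown end to end in a minimal Mamdani sketch: triangular memberships, min-implication with max-aggregation, and centroid defuzzification. Membership shapes and rules are invented for illustration (the paper itself used MATLAB's Fuzzy Logic Toolbox):

```python
# One-input, one-output Mamdani inference: fuzzify a nutrient level,
# apply two rules, aggregate, and defuzzify by centroid.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

nutrient = 6.2                           # crisp input on a 0-10 scale
mu_low = tri(nutrient, -5.0, 0.0, 5.0)   # fuzzification
mu_high = tri(nutrient, 5.0, 10.0, 15.0)

g = np.linspace(0.0, 100.0, 1001)        # output universe: growth index
slow = tri(g, -50.0, 0.0, 50.0)
fast = tri(g, 50.0, 100.0, 150.0)

# Rules: IF nutrient is low THEN growth is slow;
#        IF nutrient is high THEN growth is fast.
aggregated = np.maximum(np.minimum(mu_low, slow), np.minimum(mu_high, fast))
growth = (g * aggregated).sum() / aggregated.sum()   # centroid defuzzification
print(f"predicted growth index = {growth:.1f}")
```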

  3. Evaluation of a direct blood culture disk diffusion antimicrobial susceptibility test.

    PubMed Central

    Doern, G V; Scott, D R; Rashad, A L; Kim, K S

    1981-01-01

    A total of 556 unique blood culture isolates of nonfastidious aerobic and facultatively anaerobic bacteria were examined by direct and standardized disk susceptibility test methods (4,234 antibiotic-organism comparisons). When discrepancies which could be accounted for by the variability inherent in disk diffusion susceptibility tests were excluded, the direct method demonstrated 96.8% overall agreement with the standardized method. A total of 1.6% minor, 1.5% major, and 0.1% very major discrepancies were noted. PMID:7325634

  4. An inter-laboratory comparison study of the ANSI/BIFMA standard test method M7.1 for furniture

    EPA Science Inventory

    Five laboratories using five different test chambers participated in the study to quantify within- and between-laboratory variability in the measurement of emissions of volatile organic compounds (VOCs) from new commercial furniture test items following ANSI/BIFMA M7.1. Test item...

  5. Controlling the type I error rate in two-stage sequential adaptive designs when testing for average bioequivalence.

    PubMed

    Maurer, Willi; Jones, Byron; Chen, Ying

    2018-05-10

    In a 2×2 crossover trial for establishing average bioequivalence (ABE) of a generic agent and a currently marketed drug, the recommended approach to hypothesis testing is the two one-sided tests (TOST) procedure, which depends, among other things, on the estimated within-subject variability. The power of this procedure, and therefore the sample size required to achieve a minimum power, depends on having a good estimate of this variability. When there is uncertainty, it is advisable to plan the design in two stages, with an interim sample size reestimation after the first stage, using an interim estimate of the within-subject variability. One method and 3 variations of doing this were proposed by Potvin et al. Using simulation, the operating characteristics, including the empirical type I error rate, of the 4 variations (called Methods A, B, C, and D) were assessed by Potvin et al and Methods B and C were recommended. However, none of these 4 variations formally controls the type I error rate of falsely claiming ABE, even though the amount of inflation produced by Method C was considered acceptable. A major disadvantage of assessing type I error rate inflation using simulation is that unless all possible scenarios for the intended design and analysis are investigated, it is impossible to be sure that the type I error rate is controlled. Here, we propose an alternative, principled method of sample size reestimation that is guaranteed to control the type I error rate at any given significance level. This method uses a new version of the inverse-normal combination of p-values test, in conjunction with standard group sequential techniques, that is more robust to large deviations in initial assumptions regarding the variability of the pharmacokinetic endpoints. The sample size reestimation step is based on significance levels and power requirements that are conditional on the first-stage results. This necessitates a discussion and exploitation of the peculiar properties of the power curve of the TOST testing procedure. We illustrate our approach with an example based on a real ABE study and compare the operating characteristics of our proposed method with those of Method B of Potvin et al. Copyright © 2018 John Wiley & Sons, Ltd.
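
    For orientation, the building block of all the designs discussed above is the TOST procedure on the log scale. The sketch below shows the standard single-stage TOST given summary statistics from a 2×2 crossover; it is not the authors' two-stage inverse-normal method, and the numbers are hypothetical.

        import numpy as np
        from scipy import stats

        def tost_abe(diff_log, se, df, theta=(np.log(0.80), np.log(1.25)), alpha=0.05):
            # diff_log: estimated mean log(test) - log(reference)
            # se, df:   standard error and error degrees of freedom from the ANOVA
            t_lo = (diff_log - theta[0]) / se        # H0: true difference <= log(0.80)
            t_hi = (diff_log - theta[1]) / se        # H0: true difference >= log(1.25)
            p = max(stats.t.sf(t_lo, df), stats.t.cdf(t_hi, df))
            half = stats.t.ppf(1 - alpha, df) * se   # half-width of the 90% CI
            ci = (np.exp(diff_log - half), np.exp(diff_log + half))
            return p < alpha, p, ci                  # ABE if both one-sided tests reject

        # Hypothetical summary statistics from a 2x2 crossover:
        print(tost_abe(diff_log=np.log(1.05), se=0.08, df=22))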

  6. Quantifying and mapping spatial variability in simulated forest plots

    Treesearch

    Gavin R. Corral; Harold E. Burkhart

    2016-01-01

    We used computer simulations to test the efficacy of multivariate statistical methods to detect, quantify, and map spatial variability of forest stands. Simulated stands were developed of regularly-spaced plantations of loblolly pine (Pinus taeda L.). We assumed no effects of competition or mortality, but random variability was added to individual tree characteristics...

  7. Six Sigma methods applied to cryogenic coolers assembly line

    NASA Astrophysics Data System (ADS)

    Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René

    2009-05-01

    The Six Sigma method has been applied to the manufacturing process of a rotary Stirling cooler, the RM2. The project is named NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project was based on the DMAIC guideline with its five stages: Define, Measure, Analyse, Improve, Control. The objective was set on the rate of coolers passing the performance test at the first attempt, with a goal value of 95%. A team was gathered involving the people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) was applied to the test bench, and results of the gage R&R study showed that measurement is one of the root causes of variability in the RM2 process. Two more root causes were identified by the team after process mapping analysis: the regenerator filling factor and the cleaning procedure. Causes of measurement variability were identified and eliminated, as shown by new gage R&R results. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process was established after a new calibration process for the test bench, a new filling procedure for the regenerator, and an additional cleaning stage were implemented. The objective of 95% of coolers passing the performance test at the first attempt was reached and maintained for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. Improved process capability has enabled the introduction of a sample testing procedure before delivery.

  8. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of sensitivity analysis. The effect of correlation strength among input variables on the sensitivity analysis is also assessed.
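
    FASTC needs exactly the a priori inputs named above: marginal distributions plus a correlation matrix. A minimal sketch of generating such correlated input samples through a Gaussian copula (a close relative of Iman's rank-based transform) is shown below; the JCA-style marginals and the correlation matrix are assumptions for illustration.

        import numpy as np
        from scipy import stats

        def correlated_samples(marginals, corr, n, seed=0):
            # Correlated normals via Cholesky, mapped to uniforms, then to the
            # target marginals by inverse CDF (Gaussian copula). The achieved
            # Pearson correlations differ slightly from the target after the
            # marginal transforms, as with any rank-based scheme.
            rng = np.random.default_rng(seed)
            L = np.linalg.cholesky(np.asarray(corr))
            z = rng.standard_normal((n, len(marginals))) @ L.T
            u = stats.norm.cdf(z)
            return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

        # Hypothetical JCA-like inputs: porosity, tortuosity, flow resistivity.
        marginals = [stats.uniform(0.90, 0.09),
                     stats.lognorm(s=0.1, scale=1.1),
                     stats.lognorm(s=0.3, scale=2e4)]
        corr = [[1.0, 0.5, 0.3], [0.5, 1.0, 0.4], [0.3, 0.4, 1.0]]
        x = correlated_samples(marginals, corr, n=10_000)
        print(np.corrcoef(x, rowvar=False).round(2))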

  9. Clinical Trials With Large Numbers of Variables: Important Advantages of Canonical Analysis.

    PubMed

    Cleophas, Ton J

    2016-01-01

    Canonical analysis assesses the combined effects of a set of predictor variables on a set of outcome variables, but it is little used in clinical trials despite the omnipresence of multiple variables. The aim of this study was to assess the performance of canonical analysis as compared with traditional multivariate methods using multivariate analysis of covariance (MANCOVA). As an example, a simulated data file with 12 gene expression levels and 4 drug efficacy scores was used. The correlation coefficient between the 12 predictor and 4 outcome variables was 0.87 (P = 0.0001), meaning that 76% of the variability in the outcome variables was explained by the 12 covariates. Repeated testing after the removal of 5 unimportant predictor variables and 1 outcome variable produced virtually the same overall result. The MANCOVA identified identical unimportant variables, but it was unable to provide overall statistics. (1) Canonical analysis is remarkable, because it can handle many more variables than traditional multivariate methods such as MANCOVA can. (2) At the same time, it accounts for the relative importance of the separate variables, their interactions and differences in units. (3) Canonical analysis provides overall statistics of the effects of sets of variables, whereas traditional multivariate methods only provide the statistics of the separate variables. (4) Unlike other methods for combining the effects of multiple variables such as factor analysis/partial least squares, canonical analysis is scientifically entirely rigorous. (5) Limitations include that it is less flexible than factor analysis/partial least squares, because only 2 sets of variables are used and because multiple solutions instead of one are offered. We do hope that this article will stimulate clinical investigators to start using this remarkable method.
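
    A brief sketch of the core computation is given below, using scikit-learn's CCA on simulated data shaped like the example above (12 predictors, 4 outcomes); the data-generating model is invented, and the overall significance test (e.g., via Wilks' lambda) is omitted.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        n = 250
        latent = rng.standard_normal((n, 2))                  # shared structure
        X = latent @ rng.standard_normal((2, 12)) + 0.5 * rng.standard_normal((n, 12))
        Y = latent @ rng.standard_normal((2, 4)) + 0.5 * rng.standard_normal((n, 4))

        cca = CCA(n_components=2).fit(X, Y)
        Xc, Yc = cca.transform(X, Y)
        # Canonical correlations: correlation of each pair of canonical variates.
        r = [np.corrcoef(Xc[:, k], Yc[:, k])[0, 1] for k in range(2)]
        print(np.round(r, 3))  # r[0]**2 = shared variance on the first pair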

  10. A Unified Framework for Association Analysis with Multiple Related Phenotypes

    PubMed Central

    Stephens, Matthew

    2013-01-01

    We consider the problem of assessing associations between multiple related outcome variables, and a single explanatory variable of interest. This problem arises in many settings, including genetic association studies, where the explanatory variable is genotype at a genetic variant. We outline a framework for conducting this type of analysis, based on Bayesian model comparison and model averaging for multivariate regressions. This framework unifies several common approaches to this problem, and includes both standard univariate and standard multivariate association tests as special cases. The framework also unifies the problems of testing for associations and explaining associations – that is, identifying which outcome variables are associated with genotype. This provides an alternative to the usual, but conceptually unsatisfying, approach of resorting to univariate tests when explaining and interpreting significant multivariate findings. The method is computationally tractable genome-wide for modest numbers of phenotypes (e.g. 5–10), and can be applied to summary data, without access to raw genotype and phenotype data. We illustrate the methods on both simulated examples, and to a genome-wide association study of blood lipid traits where we identify 18 potential novel genetic associations that were not identified by univariate analyses of the same data. PMID:23861737

  11. Effects of variables upon pyrotechnically induced shock response spectra

    NASA Technical Reports Server (NTRS)

    Smith, J. L.

    1986-01-01

    Throughout the aerospace industry, large variations of 50 percent (6 dB) or more are continually noted for linear shaped charge (LSC) generated shock response spectra (SRS) from flight data (from the exact same location on different flights) and from plate tests (side by side measurements on the same test). A research program was developed to investigate causes of these large SRS variations. A series of ball drop calibration tests to verify calibration of accelerometers and a series of plate tests to investigate charge and assembly variables were performed. The resulting data were analyzed to determine if and to what degree manufacturing and assembly variables, distance from the shock source, data acquisition instrumentation, and shock energy propagation affect the SRS. LSC variables consisted of coreload, standoff, and apex angle. The assembly variable was the torque on the LSC holder. Other variables were distance from source of accelerometers, accelerometer mounting methods, and joint effects. Results indicated that LSC variables did not affect SRS as long as the plate was severed. Accelerometers mounted on mounting blocks showed significantly lower levels above 5000 Hz. Lap joints did not affect SRS levels. The test plate was mounted in an almost free-free state; therefore, distance from the source did not affect the SRS. Several varieties and brands of accelerometers were used, and all but one demonstrated very large variations in SRS.

  12. Comparison of correlated correlations.

    PubMed

    Cohen, A

    1989-12-01

    We consider a problem where k highly correlated variables are available, each being a candidate for predicting a dependent variable. Only one of the k variables can be chosen as a predictor and the question is whether there are significant differences in the quality of the predictors. We review several tests derived previously and propose a method based on the bootstrap. The motivating medical problem was to predict 24 hour proteinuria by protein-creatinine ratio measured at either 08:00, 12:00 or 16:00. The tests which we discuss are illustrated by this example and compared using a small Monte Carlo study.
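
    A bootstrap of the kind proposed can be sketched in a few lines: resample subjects (keeping each subject's measurements together) and form a confidence interval for the difference between the two dependent correlations. The data below are simulated stand-ins for the proteinuria example.

        import numpy as np

        def bootstrap_corr_diff(x1, x2, y, n_boot=10_000, seed=0):
            # CI for r(x1, y) - r(x2, y) when x1 and x2 come from the same subjects.
            rng = np.random.default_rng(seed)
            n = len(y)
            diffs = np.empty(n_boot)
            for b in range(n_boot):
                idx = rng.integers(0, n, n)      # resample whole subjects
                diffs[b] = (np.corrcoef(x1[idx], y[idx])[0, 1]
                            - np.corrcoef(x2[idx], y[idx])[0, 1])
            return tuple(np.percentile(diffs, [2.5, 97.5]))

        # Simulated stand-in: two candidate predictors of 24 hour proteinuria.
        rng = np.random.default_rng(1)
        y = rng.gamma(2.0, 1.0, 120)
        x1 = y + 0.5 * rng.standard_normal(120)
        x2 = y + 0.8 * rng.standard_normal(120)
        print(bootstrap_corr_diff(x1, x2, y))    # predictors differ if 0 lies outside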

  13. [Modeling the academic performance of medical students in basic sciences and pre-clinical courses: a longitudinal study].

    PubMed

    Zúñiga, Denisse; Mena, Beltrán; Oliva, Rose; Pedrals, Nuria; Padilla, Oslando; Bitran, Marcela

    2009-10-01

    The study of predictors of academic performance is relevant for medical education. Most studies of academic performance use global ratings as the outcome measure and do not evaluate the influence of the assessment methods. To model, by multivariate analysis, the academic performance of medical students considering, besides academic and demographic variables, the methods used to assess students' learning and their preferred modes of information processing. Two hundred seventy-two students admitted to the medical school of the Pontificia Universidad Católica de Chile from 2000 to 2003. Six groups of variables were studied to model the students' performance in five basic science courses (Anatomy, Biology, Calculus, Chemistry and Physics) and two pre-clinical courses (Integrated Medical Clinic I and II). The assessment methods examined were multiple choice question tests, the Objective Structured Clinical Examination and tutor appraisal. The results of the university admission tests (high school grades, mathematics and biology tests), the assessment methods used, the curricular year and previous application to medical school were predictors of academic performance. The information processing modes influenced academic performance, but only in interaction with other variables. Perception (abstract or concrete) interacted with the assessment methods, and information use (active or reflexive) with sex. The correlation between the real and predicted grades was 0.7. In addition to the academic results obtained prior to university entrance, the methods of assessment used in the university and the information processing modes influence the academic performance of medical students in basic and preclinical courses.

  14. A method for monitoring the variability in nuclear absorption characteristics of aviation fuels

    NASA Technical Reports Server (NTRS)

    Sprinkle, Danny R.; Shen, Chih-Ping

    1988-01-01

    A technique for monitoring variability in the nuclear absorption characteristics of aviation fuels has been developed. It is based on a highly collimated low energy gamma radiation source and a sodium iodide counter. The source and the counter assembly are separated by a geometrically well-defined test fuel cell. A computer program for determining the mass attenuation coefficient of the test fuel sample, based on the data acquired for a preset counting period, has been developed and tested on several types of aviation fuel.
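
    The underlying physics is Beer-Lambert attenuation through the fuel cell, so the mass attenuation coefficient follows from the counts with and without fuel. The sketch below assumes this standard formulation and hypothetical counting data; it is not the report's actual program.

        import math

        def mass_attenuation(counts_empty, counts_fuel, background,
                             density_g_cm3, path_cm):
            # Beer-Lambert: I = I0 * exp(-(mu/rho) * rho * x), so
            # mu/rho = ln(I0/I) / (rho * x), in cm^2/g.
            i0 = counts_empty - background
            i = counts_fuel - background
            return math.log(i0 / i) / (density_g_cm3 * path_cm)

        # Hypothetical counts accumulated over the preset counting period:
        print(mass_attenuation(counts_empty=120_000, counts_fuel=95_000,
                               background=1_500, density_g_cm3=0.80, path_cm=10.0))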

  15. The influence of inspiratory muscle training combined with the Pilates method on lung function in elderly women: A randomized controlled trial

    PubMed Central

    de Alvarenga, Guilherme Medeiros; Charkovski, Simone Arando; dos Santos, Larissa Kelin; da Silva, Mayara Alves Barbosa; Tomaz, Guilherme Oliveira; Gamba, Humberto Remigio

    2018-01-01

    OBJECTIVE: Aging is progressive, and its effects on the respiratory system include changes in the composition of the connective tissues of the lung that influence thoracic and lung compliance. The Powerbreathe® K5 is a device used for inspiratory muscle training with resistance adapted to the level of the inspiratory muscles to be trained. The Pilates method promotes muscle rebalancing exercises that emphasize the powerhouse. The aim of this study was to evaluate the influence of inspiratory muscle training combined with the Pilates method on lung function in elderly women. METHODS: The participants were aged sixty years or older, were active women with no recent fractures, and were not gait device users. They were randomly divided into a Pilates with inspiratory training group (n=11), a Pilates group (n=11) and a control group (n=9). Spirometry, manovacuometry, a six-minute walk test, an abdominal curl-up test, and pulmonary variables were assessed before and after twenty intervention sessions. RESULTS: The intervention led to an increase in maximal inspiratory muscle strength and pressure and power pulmonary variables (p<0.0001), maximal expiratory muscle strength (p<0.0014), six-minute walk test performance (p<0.01), and abdominal curl-up test performance (p<0.00001). The control group showed no differences in the analyzed variables (p>0.05). CONCLUSION: The results of this study suggest inspiratory muscle training associated with the Pilates method provides an improvement in the lung function and physical conditioning of elderly patients. PMID:29924184

  16. Study of methods of improving the performance of the Langley Research Center Transonic Dynamics Tunnel (TDT)

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A study has been made of possible ways to improve the performance of the Langley Research Center's Transonic Dynamics Tunnel (TDT). The major effort was directed toward obtaining increased dynamic pressure in the Mach number range from 0.8 to 1.2, but methods to increase Mach number capability were also considered. Methods studied for increasing dynamic pressure capability were higher total pressure, auxiliary suction, reducing circuit losses, reduced test medium temperature, smaller test section and higher molecular weight test medium. Increased Mach number methods investigated were nozzle block inserts, variable geometry nozzle, changes in test section wall configuration, and auxiliary suction.

  17. Advances in Testing the Statistical Significance of Mediation Effects

    ERIC Educational Resources Information Center

    Mallinckrodt, Brent; Abraham, W. Todd; Wei, Meifen; Russell, Daniel W.

    2006-01-01

    P. A. Frazier, A. P. Tix, and K. E. Barron (2004) highlighted a normal theory method popularized by R. M. Baron and D. A. Kenny (1986) for testing the statistical significance of indirect effects (i.e., mediator variables) in multiple regression contexts. However, simulation studies suggest that this method lacks statistical power relative to some…
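
    The normal theory test referred to here is commonly implemented as the Sobel z test for the indirect effect a*b, sketched below with hypothetical path estimates; its low power relative to resampling approaches is precisely the point of the abstract.

        import math
        from scipy import stats

        def sobel_test(a, se_a, b, se_b):
            # a: effect of predictor on mediator; b: effect of mediator on outcome
            # (controlling for the predictor); normal-theory z test for a*b.
            se_ab = math.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
            z = (a * b) / se_ab
            return z, 2 * stats.norm.sf(abs(z))

        # Hypothetical regression estimates:
        print(sobel_test(a=0.40, se_a=0.10, b=0.35, se_b=0.12))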

  18. A Statistical Discrimination Experiment for Eurasian Events Using a Twenty-Seven-Station Network

    DTIC Science & Technology

    1980-07-08

    to test the effectiveness of a multivariate method of analysis for distinguishing earthquakes from explosions. The data base for the experiment...

  19. The influence of inspiratory muscle training combined with the Pilates method on lung function in elderly women: A randomized controlled trial.

    PubMed

    Alvarenga, Guilherme Medeiros de; Charkovski, Simone Arando; Santos, Larissa Kelin Dos; Silva, Mayara Alves Barbosa da; Tomaz, Guilherme Oliveira; Gamba, Humberto Remigio

    2018-01-01

    Aging is progressive, and its effects on the respiratory system include changes in the composition of the connective tissues of the lung that influence thoracic and lung compliance. The Powerbreathe® K5 is a device used for inspiratory muscle training with resistance adapted to the level of the inspiratory muscles to be trained. The Pilates method promotes muscle rebalancing exercises that emphasize the powerhouse. The aim of this study was to evaluate the influence of inspiratory muscle training combined with the Pilates method on lung function in elderly women. The participants were aged sixty years or older, were active women with no recent fractures, and were not gait device users. They were randomly divided into a Pilates with inspiratory training group (n=11), a Pilates group (n=11) and a control group (n=9). Spirometry, manovacuometry, a six-minute walk test, an abdominal curl-up test, and pulmonary variables were assessed before and after twenty intervention sessions. The intervention led to an increase in maximal inspiratory muscle strength and pressure and power pulmonary variables (p<0.0001), maximal expiratory muscle strength (p<0.0014), six-minute walk test performance (p<0.01), and abdominal curl-up test performance (p<0.00001). The control group showed no differences in the analyzed variables (p>0.05). The results of this study suggest inspiratory muscle training associated with the Pilates method provides an improvement in the lung function and physical conditioning of elderly patients.

  20. Accelerated Training for Large Feedforward Neural Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

    In this paper we introduce a new training algorithm, the scaled variable metric (SVM) method. Our approach attempts to increase the convergence rate of the modified variable metric method. It is also combined with the RBackprop algorithm, which computes the product of the matrix of second derivatives (Hessian) with an arbitrary vector. The RBackprop method allows us to avoid computationally expensive, direct line searches. In addition, it can be utilized in the new, 'predictive' updating technique of the inverse Hessian approximation. We have used directional slope testing to adjust the step size and found that this strategy works exceptionally well in conjunction with the RBackprop algorithm. Some supplementary, but nevertheless important, enhancements to the basic training scheme, such as improved setting of the scaling factor for the variable metric update and a computationally more efficient procedure for updating the inverse Hessian approximation, are presented as well. We summarize by comparing the SVM method with four first- and second-order optimization algorithms, including a very effective implementation of the Levenberg-Marquardt method. Our tests indicate promising computational speed gains of the new training technique, particularly for large feedforward networks, i.e., for problems where the training process may be the most laborious.

  1. A default Bayesian hypothesis test for mediation.

    PubMed

    Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan

    2015-03-01

    In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).

  2. Construction of Response Surface with Higher Order Continuity and Its Application to Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Romero, V. J.

    2002-01-01

    The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another in a two-design-variable problem with a known theoretical response function. Next, the methods are tested in a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher-order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.
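
    A minimal sketch of the MLS idea follows: at each query point, fit a low-order polynomial by least squares with weights that decay with distance, so the surface moves smoothly with the query. The linear basis, Gaussian weight, and test function below are illustrative choices, not those of the paper.

        import numpy as np

        def mls_predict(X, y, x_star, h=0.5):
            # Gaussian weights centred on the query point.
            w = np.exp(-np.sum((X - x_star) ** 2, axis=1) / h**2)
            B = np.hstack([np.ones((len(X), 1)), X])   # local linear basis [1, x1, x2]
            sw = np.sqrt(w)
            coef, *_ = np.linalg.lstsq(B * sw[:, None], y * sw, rcond=None)
            return np.concatenate(([1.0], x_star)) @ coef

        # Two-design-variable example with a known response function:
        rng = np.random.default_rng(0)
        X = rng.uniform(-1.0, 1.0, (200, 2))
        y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2
        print(mls_predict(X, y, np.array([0.2, -0.3])))  # compare with:
        print(np.sin(0.6) + 0.09)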

  3. Categorical Variables in Multiple Regression: Some Cautions.

    ERIC Educational Resources Information Center

    O'Grady, Kevin E.; Medoff, Deborah R.

    1988-01-01

    Limitations of dummy coding and nonsense coding as methods of coding categorical variables for use as predictors in multiple regression analysis are discussed. The combination of these approaches often yields estimates and tests of significance that are not intended by researchers for inclusion in their models. (SLD)
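
    For readers unfamiliar with the terminology, dummy coding turns a k-level categorical predictor into k-1 indicator columns, with the omitted level as the reference; the toy regression below illustrates the interpretation (data invented).

        import numpy as np
        import pandas as pd

        df = pd.DataFrame({"treatment": ["A", "B", "C", "A", "C"],
                           "outcome": [3.1, 4.0, 5.2, 2.9, 5.0]})
        # k-1 = 2 indicator columns; level "A" becomes the reference category.
        dummies = pd.get_dummies(df["treatment"], prefix="trt", drop_first=True)
        X = np.column_stack([np.ones(len(df)), dummies.to_numpy(dtype=float)])
        beta, *_ = np.linalg.lstsq(X, df["outcome"].to_numpy(), rcond=None)
        print(beta)  # [mean of A, offset of B from A, offset of C from A]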

  4. WWC Review of the Report "Learning the Control of Variables Strategy in Higher and Lower Achieving Classrooms: Contributions of Explicit Instruction and Experimentation"

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2012

    2012-01-01

    The study reviewed in this paper examined three separate methods for teaching the "control of variables strategy" ("CVS"), a procedure for conducting a science experiment so that only one variable is tested and all others are held constant, or "controlled." The study analyzed data from a randomized controlled trial of…

  5. Evaluation of the analytical variability of dipstick protein pads in canine urine.

    PubMed

    Giraldi, Marco; Paltrinieri, Saverio; Zatelli, Andrea

    2018-06-01

    The dipstick is a first-line and inexpensive test that can exclude the presence of proteinuria in dogs. However, no information is available about the analytical variability of canine urine dipstick analysis. The aim of this study was to assess the analytical variability in 2 dipsticks and the inter-operator variability in dipstick interpretation. Canine urine supernatants (n = 174) were analyzed with 2 commercially available dipsticks. Two observers evaluated each result blinded to the other observer and to the results of the other dipstick. Intra- and inter-assay variability was assessed in 5 samples (corresponding to the 5 different semi-quantitative results) tested 10 consecutive times over 5 consecutive days. The agreement between observers and between dipsticks was evaluated with Cohen's k test. Intra-assay repeatability was good (≤3/10 errors), whereas inter-assay variability was higher (from 1/5 to 4/5 discordant results). The concordance between the operators (k = 0.68 and 0.79 for the 2 dipsticks) and that of the dipsticks (k = 0.66 and 0.74 for the 2 operators) was good. However, 1 observer and 1 dipstick overestimated the results compared with the second observer or dipstick. In any case, discordant results accounted for a single unit of the semi-quantitative scale. As for any other method, analytic variability may affect the semi-quantitation of urinary proteins when using the dipstick method. Subjective interpretation of the pad and, to a lesser extent, intrinsic staining properties of the pads could affect the results. Further studies are warranted to evaluate the effect of this variability on clinical decisions. © 2018 American Society for Veterinary Clinical Pathology.
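
    Cohen's kappa, used above for the observer-observer and dipstick-dipstick comparisons, corrects raw agreement for agreement expected by chance. A small self-contained sketch with invented semi-quantitative grades:

        import numpy as np

        def cohens_kappa(r1, r2):
            # Agreement between two raters, corrected for chance agreement.
            r1, r2 = np.asarray(r1), np.asarray(r2)
            cats = np.union1d(r1, r2)
            n = len(r1)
            table = np.array([[np.sum((r1 == a) & (r2 == b)) for b in cats]
                              for a in cats])
            po = np.trace(table) / n                      # observed agreement
            pe = (table.sum(1) @ table.sum(0)) / n**2     # chance agreement
            return (po - pe) / (1 - pe)

        # Hypothetical dipstick protein grades (0-4) from two observers:
        obs1 = [0, 1, 1, 2, 3, 4, 2, 0, 1, 3]
        obs2 = [0, 1, 2, 2, 3, 4, 2, 0, 0, 3]
        print(round(cohens_kappa(obs1, obs2), 2))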

  6. Can the super model (SUMO) method improve hydrological simulations? Exploratory tests with the GR hydrological models

    NASA Astrophysics Data System (ADS)

    Santos, Léonard; Thirel, Guillaume; Perrin, Charles

    2017-04-01

    Errors made by hydrological models may come from problems in parameter estimation, uncertainty in observed measurements, numerical problems, and from the model conceptualization that simplifies reality. Here we focus on this last issue of hydrological modeling. One of the solutions to reduce structural uncertainty is to use a multimodel method, taking advantage of the great number and variety of existing hydrological models. In particular, because different models are not equally good in all situations, multimodel approaches can improve the robustness of modeled outputs. Traditionally, multimodel methods in hydrology are based on the output of the models (the simulated flow series). The aim of this poster is to introduce a different approach based on the internal variables of the models. The method is inspired by the SUper MOdel (SUMO; van den Berge et al., 2011) developed for climatology. The idea of the SUMO method is to correct the internal variables of a model by taking into account the values of the internal variables of (an)other model(s). This correction is made bilaterally between the different models. The ensemble of the different models constitutes a super model in which all the models exchange information on their internal variables with each other at each time step. Owing to this continuity in the exchanges, this multimodel algorithm is more dynamic than traditional multimodel methods. The method will first be tested using two GR4J models (in a state-space representation) with different parameterizations. The results will be presented and compared to traditional multimodel methods that will serve as benchmarks. In the future, other rainfall-runoff models will be used in the super model. References: van den Berge, L. A., Selten, F. M., Wiegerinck, W., and Duane, G. S. (2011). A multi-model ensemble method that combines imperfect models through learning. Earth System Dynamics, 2(1):161-177.

  7. Correlation between mobility assessed by the Modified Rivermead Mobility Index and physical function in stroke patients

    PubMed Central

    Park, Gi-Tae; Kim, Mihyun

    2016-01-01

    [Purpose] The purpose of this study was to investigate the relationship between mobility assessed by the Modified Rivermead Mobility Index and variables associated with physical function in stroke patients. [Subjects and Methods] One hundred stroke patients (35 males and 65 females; age 58.60 ± 13.91 years) participated in this study. Modified Rivermead Mobility Index, muscle strength (manual muscle test), muscle tone (Modified Ashworth Scale), range of motion of lower extremity, sensory function (light touch and proprioception tests), and coordination (heel to shin and lower-extremity motor coordination tests) were assessed. [Results] The Modified Rivermead Mobility Index was correlated with all the physical function variables assessed, except the degree of knee extension. In addition, stepwise linear regression analysis revealed that coordination (heel to shin test) was the explanatory variable closely associated with mobility in stroke patients. [Conclusion] The Modified Rivermead Mobility Index score was significantly correlated with all the physical function variables. Coordination (heel to shin test) was closely related to mobility function. These results may be useful in developing rehabilitation programs for stroke patients. PMID:27630440

  8. Comparison Between Two Methods for Estimating the Vertical Scale of Fluctuation for Modeling Random Geotechnical Problems

    NASA Astrophysics Data System (ADS)

    Pieczyńska-Kozłowska, Joanna M.

    2015-12-01

    The design process in geotechnical engineering requires the most accurate mapping of soil. The difficulty lies in the spatial variability of soil parameters, which has been investigated by many researchers for many years. This study analyses the soil-modeling problem by suggesting two effective methods of acquiring information for modeling, based on variability estimated from cone penetration tests (CPT). The first method has been used in geotechnical engineering, but the second one has not been associated with geotechnics so far. Both methods are applied to a case study in which the variability parameters are estimated. Knowledge of the variability of parameters allows, in the long term, more effective estimation of, for example, the probability of bearing capacity failure.

  9. NONPARAMETRIC MANOVA APPROACHES FOR NON-NORMAL MULTIVARIATE OUTCOMES WITH MISSING VALUES

    PubMed Central

    He, Fanyin; Mazumdar, Sati; Tang, Gong; Bhatia, Triptish; Anderson, Stewart J.; Dew, Mary Amanda; Krafty, Robert; Nimgaonkar, Vishwajit; Deshpande, Smita; Hall, Martica; Reynolds, Charles F.

    2017-01-01

    Between-group comparisons often entail many correlated response variables. The multivariate linear model, with its assumption of multivariate normality, is the accepted standard tool for these tests. When this assumption is violated, the nonparametric multivariate Kruskal-Wallis (MKW) test is frequently used. However, this test requires complete cases with no missing values in response variables. Deletion of cases with missing values likely leads to inefficient statistical inference. Here we extend the MKW test to retain information from partially-observed cases. Results of simulated studies and analysis of real data show that the proposed method provides adequate coverage and superior power to complete-case analyses. PMID:29416225

  10. Process fault detection and nonlinear time series analysis for anomaly detection in safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, T.L.; Mullen, M.F.; Wangen, L.E.

    In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material from two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.
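
    The Mahalanobis-distance residual test described above can be sketched compactly: estimate the residual mean and covariance from a fault-free window, then flag time steps whose squared distance exceeds a chi-square threshold. The three-tank residuals below are simulated stand-ins.

        import numpy as np
        from scipy import stats

        def mahalanobis_fault_test(residuals, train=100, alpha=0.01):
            # residuals: (T, p) model-minus-measurement residual vectors.
            ref = residuals[:train]                  # assumed fault-free window
            mu = ref.mean(axis=0)
            inv = np.linalg.inv(np.cov(ref, rowvar=False))
            d = residuals - mu
            d2 = np.einsum("ij,jk,ik->i", d, inv, d) # squared Mahalanobis distance
            return d2 > stats.chi2.ppf(1 - alpha, residuals.shape[1])

        # Simulated three-variable residuals with a leak starting at t = 150:
        rng = np.random.default_rng(0)
        r = rng.standard_normal((200, 3))
        r[150:, 0] -= 4.0                            # persistent loss in tank 1
        print(mahalanobis_fault_test(r)[145:155])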

  11. Do College Faculty Embrace Web 2.0 Technology?

    ERIC Educational Resources Information Center

    Siha, Samia M.; Bell, Reginald Lamar; Roebuck, Deborah

    2016-01-01

    The authors sought to determine if Rogers's Innovation Decision Process model could analyze Web 2.0 usage within the collegiate environment. The key independent variables studied in relationship to this model were gender, faculty rank, course content delivery method, and age. Chi-square nonparametric tests on the independent variables across…

  12. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  13. 40 CFR 60.316 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Section 60.316 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... for the measurement of VOC concentration. (3) Method 1 for sample and velocity traverses. (4) Method 2... smaller volumes, when necessitated by process variables or other factors, may be approved by the...

  14. 40 CFR 61.174 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Section 61.174 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... follows: (1) Method 5 for the measurement of particulate matter, (2) Method 1 for sample and velocity... when necessitated by process variables or other factors may be approved by the Administrator. (f) For...

  15. Parametric Methods for Dynamic 11C-Phenytoin PET Studies.

    PubMed

    Mansor, Syahir; Yaqub, Maqsood; Boellaard, Ronald; Froklage, Femke E; de Vries, Anke; Bakker, Esther D M; Voskuyl, Rob A; Eriksson, Jonas; Schwarte, Lothar A; Verbeek, Joost; Windhorst, Albert D; Lammertsma, Adriaan A

    2017-03-01

    In this study, the performance of various methods for generating quantitative parametric images of dynamic 11C-phenytoin PET studies was evaluated. Methods: Double-baseline 60-min dynamic 11C-phenytoin PET studies, including online arterial sampling, were acquired for 6 healthy subjects. Parametric images were generated using Logan plot analysis, a basis function method, and spectral analysis. Parametric distribution volume (VT) and influx rate (K1) were compared with those obtained from nonlinear regression analysis of time-activity curves. In addition, global and regional test-retest (TRT) variability was determined for parametric K1 and VT values. Results: Biases in VT observed with all parametric methods were less than 5%. For K1, spectral analysis showed a negative bias of 16%. The mean TRT variabilities of VT and K1 were less than 10% for all methods. Shortening the scan duration to 45 min provided similar VT and K1 with comparable TRT performance compared with 60-min data. Conclusion: Among the various parametric methods tested, the basis function method provided parametric VT and K1 values with the least bias compared with nonlinear regression data and showed TRT variabilities lower than 5%, also for smaller volume-of-interest sizes (i.e., higher noise levels) and shorter scan duration. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
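
    Of the three methods compared, the Logan plot is the simplest to show in code: after an equilibration time t*, a plot of the normalized tissue integral against the normalized plasma integral becomes linear with slope VT. The sketch below uses synthetic one-tissue-compartment data, not the study's.

        import numpy as np

        def logan_vt(t, ct, cp, t_star=20.0):
            # Cumulative trapezoidal integrals of tissue and plasma curves.
            int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (ct[1:] + ct[:-1]))))
            int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))
            m = (t >= t_star) & (ct > 0)
            slope, _ = np.polyfit(int_cp[m] / ct[m], int_ct[m] / ct[m], 1)
            return slope                              # estimated VT

        # Synthetic data: dct/dt = K1*cp - k2*ct, so the true VT is K1/k2.
        t = np.linspace(0.0, 60.0, 241)
        cp = 100.0 * np.exp(-0.15 * t) + 5.0 * np.exp(-0.01 * t)
        K1, k2 = 0.06, 0.04
        ct = np.zeros_like(t)
        for i in range(1, len(t)):
            ct[i] = ct[i - 1] + (t[i] - t[i - 1]) * (K1 * cp[i - 1] - k2 * ct[i - 1])
        print(logan_vt(t, ct, cp), K1 / k2)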

  16. Optimization of Answer Keys for Script Concordance Testing: Should We Exclude Deviant Panelists, Deviant Responses, or Neither?

    ERIC Educational Resources Information Center

    Gagnon, Robert; Lubarsky, Stuart; Lambert, Carole; Charlin, Bernard

    2011-01-01

    The Script Concordance Test (SCT) uses a panel-based, aggregate scoring method that aims to capture the variability of responses of experienced practitioners to particular clinical situations. The use of this type of scoring method is a key determinant of the tool's discriminatory power, but deviant answers could potentially diminish the…

  17. Techniques and Methods for Testing the Postural Function in Healthy and Pathological Subjects

    PubMed Central

    Paillard, Thierry; Noé, Frédéric

    2015-01-01

    The different techniques and methods employed as well as the different quantitative and qualitative variables measured in order to objectify postural control are often chosen without taking into account the population studied, the objective of the postural test, and the environmental conditions. For these reasons, the aim of this review was to present and justify the different testing techniques and methods with their different quantitative and qualitative variables to make it possible to precisely evaluate each sensory, central, and motor component of the postural function according to the experiment protocol under consideration. The main practical and technological methods and techniques used in evaluating postural control were explained and justified according to the experimental protocol defined. The main postural conditions (postural stance, visual condition, balance condition, and test duration) were also analyzed. Moreover, the mechanistic exploration of the postural function often requires implementing disturbing postural conditions by using motor disturbance (mechanical disturbance), sensory stimulation (sensory manipulation), and/or cognitive disturbance (cognitive task associated with maintaining postural balance) protocols. Each type of disturbance was tackled in order to facilitate understanding of subtle postural control mechanisms and the means to explore them. PMID:26640800

  18. Techniques and Methods for Testing the Postural Function in Healthy and Pathological Subjects.

    PubMed

    Paillard, Thierry; Noé, Frédéric

    2015-01-01

    The different techniques and methods employed as well as the different quantitative and qualitative variables measured in order to objectify postural control are often chosen without taking into account the population studied, the objective of the postural test, and the environmental conditions. For these reasons, the aim of this review was to present and justify the different testing techniques and methods with their different quantitative and qualitative variables to make it possible to precisely evaluate each sensory, central, and motor component of the postural function according to the experiment protocol under consideration. The main practical and technological methods and techniques used in evaluating postural control were explained and justified according to the experimental protocol defined. The main postural conditions (postural stance, visual condition, balance condition, and test duration) were also analyzed. Moreover, the mechanistic exploration of the postural function often requires implementing disturbing postural conditions by using motor disturbance (mechanical disturbance), sensory stimulation (sensory manipulation), and/or cognitive disturbance (cognitive task associated with maintaining postural balance) protocols. Each type of disturbance was tackled in order to facilitate understanding of subtle postural control mechanisms and the means to explore them.

  19. Development and Evaluation of a Contrast Sensitivity Perimetry Test for Patients with Glaucoma

    PubMed Central

    Hot, Aliya; Dul, Mitchell W.; Swanson, William H.

    2008-01-01

    Purpose To design a contrast sensitivity perimetry (CSP) protocol that decreases variability in glaucomatous defects while maintaining good sensitivity to glaucomatous loss. Methods Twenty patients with glaucoma and 20 control subjects were tested with a CSP protocol implemented on a monitor-based testing station. In the protocol, 26 locations were tested over the central visual field with Gabor patches with a peak spatial frequency of 0.4 cyc/deg and a two-dimensional spatial Gaussian envelope, with most of the energy concentrated within a 4° circular region. Threshold was estimated by a staircase method. Patients and 10 age-similar control subjects were also tested on conventional automated perimetry (CAP) with the 24-2 pattern and the SITA Standard testing strategy. The neuroretinal rim area of the patients was measured with a retinal tomograph (Retina Tomograph II [HRT]; Heidelberg Engineering, Heidelberg, Germany). A Bland-Altman analysis of agreement was used to assess test–retest variability, compare depth of defect shown by the two perimetric tests, and investigate the relations between contrast sensitivity and neuroretinal rim area. Results Variability showed less dependence on defect depth for CSP than for CAP (z = 9.3, P < 0.001). Defect depth was similar for CAP and CSP when averaged by quadrant (r = 0.26, P > 0.13). The relation between defect depth and rim area was more consistent with CSP than with CAP (z = 9, P < 0.001). Conclusions The implementation of CSP was successful in reducing test–retest variability in glaucomatous defects. CSP was in general agreement with CAP in terms of depth of defect and was in better agreement than CAP with HRT-determined rim area. PMID:18378580
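
    The Bland-Altman limits of agreement used throughout this study reduce to the mean and standard deviation of the test-retest differences. A minimal sketch with simulated paired sensitivities (in log units):

        import numpy as np

        def bland_altman(test, retest):
            # 95% limits of agreement: bias +/- 1.96 SD of the differences.
            diff = np.asarray(retest, float) - np.asarray(test, float)
            bias, sd = diff.mean(), diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        rng = np.random.default_rng(0)
        s1 = rng.normal(1.2, 0.4, 50)                 # test sensitivities
        s2 = s1 + rng.normal(0.0, 0.15, 50)           # retest sensitivities
        bias, loa = bland_altman(s1, s2)
        print(f"bias = {bias:.2f}, limits of agreement = {loa[0]:.2f} to {loa[1]:.2f}")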

  20. Machine learning search for variable stars

    NASA Astrophysics Data System (ADS)

    Pashchenko, Ilya N.; Sokolovsky, Kirill V.; Gavras, Panagiotis

    2018-04-01

    Photometric variability detection is often considered as a hypothesis testing problem: an object is variable if the null hypothesis that its brightness is constant can be ruled out given the measurements and their uncertainties. The practical applicability of this approach is limited by uncorrected systematic errors. We propose a new variability detection technique sensitive to a wide range of variability types while being robust to outliers and underestimated measurement uncertainties. We consider variability detection as a classification problem that can be approached with machine learning. Logistic Regression (LR), Support Vector Machines (SVM), k Nearest Neighbours (kNN), Neural Nets (NN), Random Forests (RF), and Stochastic Gradient Boosting classifier (SGB) are applied to 18 features (variability indices) quantifying scatter and/or correlation between points in a light curve. We use a subset of Optical Gravitational Lensing Experiment phase two (OGLE-II) Large Magellanic Cloud (LMC) photometry (30 265 light curves) that was searched for variability using traditional methods (168 known variable objects) as the training set and then apply the NN to a new test set of 31 798 OGLE-II LMC light curves. Among 205 candidates selected in the test set, 178 are real variables, while 13 low-amplitude variables are new discoveries. The machine learning classifiers considered are found to be more efficient (select more variables and fewer false candidates) compared to traditional techniques using individual variability indices or their linear combination. The NN, SGB, SVM, and RF show a higher efficiency compared to LR and kNN.
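
    The classification setup is straightforward to reproduce in outline: a feature table of variability indices, heavily imbalanced labels, and a standard classifier. The sketch below uses scikit-learn's random forest on simulated features; the feature distributions are invented and only stand in for real variability indices.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_var, n_const = 170, 3000                     # imbalance akin to the training set
        X = np.vstack([rng.normal(1.0, 0.6, (n_var, 18)),    # 18 indices, variables
                       rng.normal(0.0, 0.6, (n_const, 18))]) # constant stars
        y = np.r_[np.ones(n_var), np.zeros(n_const)].astype(int)

        # class_weight="balanced" compensates for the rarity of true variables.
        clf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                                     random_state=0)
        print(cross_val_score(clf, X, y, cv=5, scoring="f1").mean())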

  1. Applying multibeam sonar and mathematical modeling for mapping seabed substrate and biota of offshore shallows

    NASA Astrophysics Data System (ADS)

    Herkül, Kristjan; Peterson, Anneliis; Paekivi, Sander

    2017-06-01

    Both basic science and marine spatial planning are in a need of high resolution spatially continuous data on seabed habitats and biota. As conventional point-wise sampling is unable to cover large spatial extents in high detail, it must be supplemented with remote sensing and modeling in order to fulfill the scientific and management needs. The combined use of in situ sampling, sonar scanning, and mathematical modeling is becoming the main method for mapping both abiotic and biotic seabed features. Further development and testing of the methods in varying locations and environmental settings is essential for moving towards unified and generally accepted methodology. To fill the relevant research gap in the Baltic Sea, we used multibeam sonar and mathematical modeling methods - generalized additive models (GAM) and random forest (RF) - together with underwater video to map seabed substrate and epibenthos of offshore shallows. In addition to testing the general applicability of the proposed complex of techniques, the predictive power of different sonar-based variables and modeling algorithms were tested. Mean depth, followed by mean backscatter, were the most influential variables in most of the models. Generally, mean values of sonar-based variables had higher predictive power than their standard deviations. The predictive accuracy of RF was higher than that of GAM. To conclude, we found the method to be feasible and with predictive accuracy similar to previous studies of sonar-based mapping.

  2. An Improved Search Approach for Solving Non-Convex Mixed-Integer Non Linear Programming Problems

    NASA Astrophysics Data System (ADS)

    Sitopu, Joni Wilson; Mawengkang, Herman; Syafitri Lubis, Riri

    2018-01-01

    The nonlinear mathematical programming problem addressed in this paper has a structure characterized by a subset of variables restricted to assume discrete values, which are linear and separable from the continuous variables. The strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method, has been developed. This strategy is used to force the appropriate non-integer basic variables to move to their neighbourhood integer points. Successful implementation of these algorithms was achieved on various test problems.

  3. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.

  4. Variability aware compact model characterization for statistical circuit design optimization

    NASA Astrophysics Data System (ADS)

    Qiao, Ying; Qian, Kun; Spanos, Costas J.

    2012-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
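
    Linear propagation of variance means approximating the output variance as J @ Sigma @ J.T, with J the sensitivity (Jacobian) of the output to the model parameters. The sketch below applies this to a toy square-law transistor expression; the model and the parameter covariances are illustrative assumptions, not the EKV-EPFL model.

        import numpy as np

        def propagate_variance(f, p0, cov_p, eps=1e-6):
            # First-order variance of a scalar output f(p) about p0,
            # with a central finite-difference Jacobian.
            p0 = np.asarray(p0, float)
            J = np.empty_like(p0)
            for i in range(len(p0)):
                dp = np.zeros_like(p0)
                dp[i] = eps * max(abs(p0[i]), 1.0)
                J[i] = (f(p0 + dp) - f(p0 - dp)) / (2 * dp[i])
            return J @ np.asarray(cov_p) @ J

        def i_sat(p, vgs=1.0):
            # Toy square-law drain current: 0.5 * beta * (vgs - vth)^2.
            vth, beta = p
            return 0.5 * beta * (vgs - vth) ** 2

        cov = [[0.02**2, 0.0], [0.0, (5e-5)**2]]       # assumed vth/beta variances
        print(np.sqrt(propagate_variance(i_sat, [0.4, 1.0e-3], cov)))  # output sigma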

  5. Critical evaluation of connectivity-based point of care testing systems of glucose in a hospital environment.

    PubMed

    Floré, Katelijne M J; Fiers, Tom; Delanghe, Joris R

    2008-01-01

    In recent years, a number of point of care testing (POCT) glucometers have been introduced on the market. We investigated the analytical variability (lot-to-lot variation, calibration error, inter-instrument and inter-operator variability) of glucose POCT systems in a university hospital environment and compared these results with the analytical needs required for tight glucose monitoring. The reference hexokinase method was compared to different POCT systems based on glucose oxidase (blood gas instruments) or glucose dehydrogenase (handheld glucometers). Based upon daily internal quality control data, total errors were calculated for the various glucose methods and the analytical variability of the glucometers was estimated. The total error of the glucometers exceeded by far the desirable analytical specifications (based on a biological variability model). Lot-to-lot variation, inter-instrument variation and inter-operator variability contributed approximately equally to the total variance. Because the distribution of hematocrit values in a hospital environment is broad, converting blood glucose into plasma values using a fixed factor further increases variance. The percentage of outliers exceeded the ISO 15197 criteria over a broad glucose concentration range. The total analytical variation of handheld glucometers is larger than expected. Clinicians should be aware that the variability of glucose measurements obtained by blood gas instruments is lower than that of results obtained with handheld glucometers on capillary blood.

  6. Choice of Stimulus Range and Size Can Reduce Test-Retest Variability in Glaucomatous Visual Field Defects

    PubMed Central

    Swanson, William H.; Horner, Douglas G.; Dul, Mitchell W.; Malinovsky, Victor E.

    2014-01-01

    Purpose To develop guidelines for engineering perimetric stimuli to reduce test-retest variability in glaucomatous defects. Methods Perimetric testing was performed on one eye for 62 patients with glaucoma and 41 age-similar controls on size III and frequency-doubling perimetry and three custom tests with Gaussian blob and Gabor sinusoid stimuli. Stimulus range was controlled by values for ceiling (maximum sensitivity) and floor (minimum sensitivity). Bland-Altman analysis was used to derive 95% limits of agreement on test and retest, and bootstrap analysis was used to test the hypotheses about peak variability. Results Limits of agreement for the three custom stimuli were similar in width (0.72 to 0.79 log units) and peak variability (0.22 to 0.29 log units) for a stimulus range of 1.7 log units. The width of the limits of agreement for size III decreased from 1.78 to 1.37 to 0.99 log units for stimulus ranges of 3.9, 2.7, and 1.7 log units, respectively (F = 3.23, P < 0.001); peak variability was 0.99, 0.54, and 0.34 log units, respectively (P < 0.01). For a stimulus range of 1.3 log units, limits of agreement were narrowest with Gabor and widest with size III stimuli, and peak variability was lower (P < 0.01) with Gabor (0.18 log units) and frequency-doubling perimetry (0.24 log units) than with size III stimuli (0.38 log units). Conclusions Test-retest variability in glaucomatous visual field defects was substantially reduced by engineering the stimuli. Translational Relevance The guidelines should allow developers to choose from a wide range of stimuli. PMID:25371855

  7. Testing Pairwise Association between Spatially Autocorrelated Variables: A New Approach Using Surrogate Lattice Data

    PubMed Central

    Deblauwe, Vincent; Kennel, Pol; Couteron, Pierre

    2012-01-01

    Background Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data. Methodology/Principal Findings The null hypothesis that two observed mapped variables are the result of independent pattern generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for alternative methods. Conclusions/Significance The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy). We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material. PMID:23144961
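
    The surrogate idea transfers to one dimension in a few lines: generate random series with the same power spectrum (hence the same autocorrelation) as the observed one and use them to build the null distribution of the correlation. The sketch below uses Fourier phase randomization, a simpler 1-D relative of the paper's 2-D wavelet-based surrogates.

        import numpy as np

        def phase_randomized_surrogate(x, rng):
            # Keep the amplitude spectrum of x, randomize the phases.
            X = np.fft.rfft(x - x.mean())
            phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
            phases[0] = 0.0                               # keep DC real
            return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x)) + x.mean()

        def surrogate_corr_test(x, y, n_surr=999, seed=0):
            # Monte Carlo p-value for Pearson's r under the independence null.
            rng = np.random.default_rng(seed)
            r_obs = np.corrcoef(x, y)[0, 1]
            r_null = [np.corrcoef(phase_randomized_surrogate(x, rng), y)[0, 1]
                      for _ in range(n_surr)]
            return r_obs, (1 + np.sum(np.abs(r_null) >= abs(r_obs))) / (n_surr + 1)

        # Two independent AR(1) series: autocorrelated, yet truly unassociated.
        rng = np.random.default_rng(1)
        def ar1(n, phi=0.9):
            x = np.zeros(n)
            for i in range(1, n):
                x[i] = phi * x[i - 1] + rng.standard_normal()
            return x
        print(surrogate_corr_test(ar1(500), ar1(500)))    # p should be non-small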

  8. Generalized Likelihood Uncertainty Estimation (GLUE) methodology for optimization of extraction in natural products.

    PubMed

    Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H

    2018-06-01

    The optimization process is an important aspect of natural product extraction. Herein, an alternative approach to extraction optimization is proposed, namely Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, the feasible ranges of the independent variables, Monte Carlo simulation, and threshold criteria for the response variables. The GLUE method is tested on three different techniques, including ultrasound-, microwave-, and supercritical-CO2-assisted extractions, utilizing data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in dotty plots; deal with an unlimited number of independent and response variables; consider multiple combined threshold criteria, which are subjective and depend on the target of the investigation; and provide a range of values with their distribution for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
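
    A rough sketch of a GLUE-style screen, assuming SciPy's qmc module for the Latin hypercube; `model`, `bounds`, and `thresholds` are user-supplied stand-ins, not values from the study:

        import numpy as np
        from scipy.stats import qmc

        def glue_screen(model, bounds, thresholds, n=10000, seed=0):
            """Latin hypercube sample of the feasible ranges, Monte Carlo model
            runs, and retention of parameter sets meeting all threshold criteria.
            `model` maps a parameter vector to a dict of response values."""
            sampler = qmc.LatinHypercube(d=len(bounds), seed=seed)
            lows = [b[0] for b in bounds]
            highs = [b[1] for b in bounds]
            X = qmc.scale(sampler.random(n), lows, highs)
            keep = []
            for x in X:
                resp = model(x)
                if all(resp[k] >= v for k, v in thresholds.items()):
                    keep.append(x)
            return np.array(keep)   # "behavioural" sets; scatter these as dotty plots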

  9. Normal Variability of Weekly Musculoskeletal Screening Scores and the Influence of Training Load across an Australian Football League Season.

    PubMed

    Esmaeili, Alireza; Stewart, Andrew M; Hopkins, William G; Elias, George P; Lazarus, Brendan H; Rowell, Amber E; Aughey, Robert J

    2018-01-01

    Aim: The sit and reach test (S&R), dorsiflexion lunge test (DLT), and adductor squeeze test (AST) are commonly used in weekly musculoskeletal screening for athlete monitoring and injury prevention purposes. The aim of this study was to determine the normal week to week variability of the test scores, individual differences in variability, and the effects of training load on the scores. Methods: Forty-four elite Australian rules footballers from one club completed the weekly screening tests on day 2 or 3 post-main training (pre-season) or post-match (in-season) over a 10 month season. Ratings of perceived exertion and session duration for all training sessions were used to derive various measures of training load via both simple summations and exponentially weighted moving averages. Data were analyzed via linear and quadratic mixed modeling and interpreted using magnitude-based inference. Results: Substantial small to moderate variability was found for the tests at both season phases; for example over the in-season, the normal variability ±90% confidence limits were as follows: S&R ±1.01 cm, ±0.12; DLT ±0.48 cm, ±0.06; AST ±7.4%, ±0.6%. Small individual differences in variability existed for the S&R and AST (factor standard deviations between 1.31 and 1.66). All measures of training load had trivial effects on the screening scores. Conclusion: A change in a test score larger than the normal variability is required to be considered a true change. Athlete monitoring and flagging systems need to account for the individual differences in variability. The tests are not sensitive to internal training load when conducted 2 or 3 days post-training or post-match, and the scores should be interpreted cautiously when used as measures of recovery.
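
    A small sketch of the training-load calculations the abstract refers to (session-RPE load and its exponentially weighted moving average); the smoothing constant is a common convention, not necessarily the one used in the study:

        import numpy as np

        def session_load(rpe, duration_min):
            """Session-RPE load: rating of perceived exertion x duration."""
            return np.asarray(rpe, float) * np.asarray(duration_min, float)

        def ewma_load(daily_load, span_days=7):
            """Exponentially weighted moving average of daily load with
            lambda = 2 / (span + 1), a common athlete-monitoring convention."""
            lam = 2.0 / (span_days + 1.0)
            out = np.empty(len(daily_load))
            acc = float(daily_load[0])
            for i, x in enumerate(daily_load):
                acc = lam * float(x) + (1.0 - lam) * acc
                out[i] = acc
            return out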

  10. Critical assessment of precracked specimen configuration and experimental test variables for stress corrosion testing of 7075-T6 aluminum alloy plate

    NASA Technical Reports Server (NTRS)

    Domack, M. S.

    1985-01-01

    A research program was conducted to critically assess the effects of precracked specimen configuration, stress intensity solutions, compliance relationships and other experimental test variables for stress corrosion testing of 7075-T6 aluminum alloy plate. Modified compact and double beam wedge-loaded specimens were tested and analyzed to determine the threshold stress intensity factor and stress corrosion crack growth rate. Stress intensity solutions and experimentally determined compliance relationships were developed and compared with other solutions available in the literature. Crack growth data suggests that more effective crack length measurement techniques are necessary to better characterize stress corrosion crack growth. Final load determined by specimen reloading and by compliance did not correlate well, and was considered a major source of interlaboratory variability. Test duration must be determined systematically, accounting for crack length measurement resolution, time for crack arrest, and experimental interferences. This work was conducted as part of a round robin program sponsored by ASTM committees G1.06 and E24.04 to develop a standard test method for stress corrosion testing using precracked specimens.

  11. Heart Rate Variability Dynamics for the Prognosis of Cardiovascular Risk

    PubMed Central

    Ramirez-Villegas, Juan F.; Lam-Espinosa, Eric; Ramirez-Moreno, David F.; Calvo-Echeverry, Paulo C.; Agredo-Rodriguez, Wilfredo

    2011-01-01

    Statistical, spectral, multi-resolution and non-linear methods were applied to heart rate variability (HRV) series linked with classification schemes for the prognosis of cardiovascular risk. A total of 90 HRV records were analyzed: 45 from healthy subjects and 45 from cardiovascular risk patients. A total of 52 features from all the analysis methods were evaluated using standard two-sample Kolmogorov-Smirnov test (KS-test). The results of the statistical procedure provided input to multi-layer perceptron (MLP) neural networks, radial basis function (RBF) neural networks and support vector machines (SVM) for data classification. These schemes showed high performances with both training and test sets and many combinations of features (with a maximum accuracy of 96.67%). Additionally, there was a strong consideration for breathing frequency as a relevant feature in the HRV analysis. PMID:21386966
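
    A compact sketch of the feature-screening-plus-classification pipeline described above, using scikit-learn's SVM; the feature matrix, labels, and the 0.05 cutoff are placeholders:

        import numpy as np
        from scipy.stats import ks_2samp
        from sklearn.svm import SVC

        def ks_screen(X, y, alpha=0.05):
            """Indexes of features whose class-conditional distributions differ
            by a two-sample Kolmogorov-Smirnov test."""
            return [j for j in range(X.shape[1])
                    if ks_2samp(X[y == 0, j], X[y == 1, j]).pvalue < alpha]

        # X: subjects x 52 HRV features, y: 0 = healthy, 1 = at risk (placeholders)
        # cols = ks_screen(X_train, y_train)
        # clf = SVC(kernel="rbf").fit(X_train[:, cols], y_train)
        # accuracy = clf.score(X_test[:, cols], y_test)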

  12. Expert system for testing industrial processes and determining sensor status

    DOEpatents

    Gross, K.C.; Singer, R.M.

    1998-06-02

    A method and system are disclosed for monitoring both an industrial process and a sensor. The method and system include determining a minimum number of sensor pairs needed to test the industrial process as well as the sensor for evaluating the state of operation of both. The technique further includes generating a first and second signal characteristic of an industrial process variable. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the pair of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 24 figs.

  13. Expert system for testing industrial processes and determining sensor status

    DOEpatents

    Gross, Kenneth C.; Singer, Ralph M.

    1998-01-01

    A method and system for monitoring both an industrial process and a sensor. The method and system include determining a minimum number of sensor pairs needed to test the industrial process as well as the sensor for evaluating the state of operation of both. The technique further includes generating a first and second signal characteristic of an industrial process variable. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the pair of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.
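
    Both patent records describe the same pipeline: difference a redundant sensor pair, subtract the serially correlated Fourier modes, and run a sequential probability ratio test on the whitened residual. A minimal sketch under Gaussian assumptions; the mode count, error rates, and postulated shift m1 are illustrative:

        import numpy as np

        def residual(sensor_a, sensor_b, n_modes=8):
            """Difference of a redundant sensor pair minus its leading Fourier
            modes (the 'composite'); the remainder should be near-white noise."""
            d = np.asarray(sensor_a, float) - np.asarray(sensor_b, float)
            F = np.fft.rfft(d)
            F[n_modes:] = 0.0
            return d - np.fft.irfft(F, n=len(d))

        def sprt_mean_shift(res, sigma, m1, alpha=0.001, beta=0.001):
            """Wald SPRT for a mean shift of size m1 in white noise with known
            sigma: +1 = declare fault, -1 = declare normal, 0 = keep sampling."""
            A = np.log((1.0 - beta) / alpha)
            B = np.log(beta / (1.0 - alpha))
            llr = 0.0
            for x in res:
                llr += (m1 / sigma**2) * (x - m1 / 2.0)   # Gaussian log-likelihood ratio
                if llr >= A:
                    return +1
                if llr <= B:
                    return -1
            return 0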

  14. Enhanced Conformational Sampling Using Replica Exchange with Collective-Variable Tempering.

    PubMed

    Gil-Ley, Alejandro; Bussi, Giovanni

    2015-03-10

    The computational study of conformational transitions in RNA and proteins with atomistic molecular dynamics often requires suitable enhanced sampling techniques. We here introduce a novel method where concurrent metadynamics are integrated in a Hamiltonian replica-exchange scheme. The ladder of replicas is built with different strengths of the bias potential exploiting the tunability of well-tempered metadynamics. Using this method, free-energy barriers of individual collective variables are significantly reduced compared with simple force-field scaling. The introduced methodology is flexible and allows adaptive bias potentials to be self-consistently constructed for a large number of simple collective variables, such as distances and dihedral angles. The method is tested on alanine dipeptide and applied to the difficult problem of conformational sampling in a tetranucleotide.

  15. Decreased Variability of the 6-Minute Walk Test by Heart Rate Correction in Patients with Neuromuscular Disease

    PubMed Central

    Prahm, Kira P.; Witting, Nanna; Vissing, John

    2014-01-01

    Objective The 6-minute walk test (6MWT) is widely used to assess functional status in neurological disorders. However, the test is subject to great inter-test variability due to fluctuating motivation, fatigue and learning effects. We investigated whether inter-test variability of the 6MWT can be reduced by heart rate correction. Methods Sixteen patients with neuromuscular diseases, including Facioscapulohumeral muscular dystrophy, Limb-girdle muscular dystrophy, Charcot-Marie-Tooth disease, Dystrophia Myotonica and Congenital Myopathy, and 12 healthy subjects were studied. Patients were excluded if they had cardiac arrhythmias, received drug treatment for hypertension, or had any other medical conditions that could interfere with the interpretation of heart rate and walking capability. All completed three 6-minute walk tests on three different test days. Heart rate was measured continuously. Results Successive standard 6-minute walk tests showed considerable learning effects between Tests 1 and 2 (4.9%; P = 0.026), and Tests 2 and 3 (4.5%; P = 0.020) in patients. The same was seen in controls between Tests 1 and 2 (8.1%; P = 0.039). Heart rate correction abolished this learning effect. Conclusion A modified 6-minute walk test, correcting walking distance by average heart rate during walking, decreases the variability among repeated 6-minute walk tests and should be considered as an alternative outcome measure to the standard 6-minute walk test in future clinical follow-up and treatment trials. PMID:25479403
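
    One plausible reading of the correction, as a sketch: express the walked distance per heart beat over the six minutes. The numbers are made up:

        def hr_corrected_6mwt(distance_m, mean_hr_bpm):
            """Walked distance per heart beat over the 6-minute test."""
            return distance_m / (mean_hr_bpm * 6.0)

        print(hr_corrected_6mwt(480, 110))   # e.g. 480 m at 110 bpm -> ~0.73 m/beat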

  16. The comparison of rapid bioassays for the assessment of urban groundwater quality.

    PubMed

    Dewhurst, R E; Wheeler, J R; Chummun, K S; Mather, J D; Callaghan, A; Crane, M

    2002-05-01

    Groundwater is a complex mixture of chemicals that is naturally variable. Current legislation in the UK requires that groundwater quality and the degree of contamination are assessed using chemical methods. Such methods do not consider the synergistic or antagonistic interactions that may affect the bioavailability and toxicity of pollutants in the environment. Bioassays are a method for assessing the toxic impact of whole groundwater samples on the environment. Three rapid bioassays, Eclox, Microtox and ToxAlert, and a Daphnia magna 48-h immobilisation test were used to assess groundwater quality from sites with a wide range of historical uses. Eclox responses indicated that the test was very sensitive to changes in groundwater chemistry; 77% of the results had a percentage inhibition greater than 90%. ToxAlert, although suitable for monitoring changes in water quality under laboratory conditions, produced highly variable results due to fluctuations in temperature and the chemical composition of the samples. Microtox produced replicable results that correlated with those from D. magna tests.

  17. An issue encountered in solving problems in electricity and magnetism: curvilinear coordinates

    NASA Astrophysics Data System (ADS)

    Gülçiçek, Çağlar; Damlı, Volkan

    2016-11-01

    In physics lectures on electromagnetic theory and mathematical methods, physics teacher candidates have some difficulties with curvilinear coordinate systems. According to our experience, based on both in-class interactions and teacher candidates’ answers in test papers, they do not seem to have understood the variables in curvilinear coordinate systems very well. For this reason, the problems that physics teacher candidates have with variables in curvilinear coordinate systems have been selected as a study subject. The aim of this study is to find the physics teacher candidates’ problems with determining the variables of drawn shapes, and problems with drawing shapes based on given variables in curvilinear coordinate systems. Two different assessment tests were used in the study to achieve this aim. The curvilinear coordinates drawing test (CCDrT) was used to discover their problems related to drawing shapes, and the curvilinear coordinates detection test (CCDeT) was used to find out about problems related to determining variables. According to the findings obtained from both tests, most physics teacher candidates have problems with the ϕ variable, while they have limited problems with the r variable. Questions that are mostly answered wrongly have some common properties, such as value. According to inferential statistics, there is no significant difference between the means of the CCDeT and CCDrT scores. The mean of the CCDeT scores is only 4.63 and the mean of the CCDrT is only 4.66. Briefly, we can say that most physics teacher candidates have problems with drawing a shape using the variables of curvilinear coordinate systems or in determining the variables of drawn shapes. Part of this study was presented at the XI. National Science and Mathematics Education Congress (UFBMEK) in 2014.

  18. Evaluation of direct-to-consumer low-volume lab tests in healthy adults

    PubMed Central

    Kidd, Brian A.; Hoffman, Gabriel; Zimmerman, Noah; Li, Li; Morgan, Joseph W.; Glowe, Patricia K.; Botwin, Gregory J.; Parekh, Samir; Babic, Nikolina; Doust, Matthew W.; Stock, Gregory B.; Schadt, Eric E.; Dudley, Joel T.

    2016-01-01

    BACKGROUND. Clinical laboratory tests are now being prescribed and made directly available to consumers through retail outlets in the USA. Concerns have been raised with these tests regarding the uncertainty of the testing methods used in these venues and a lack of open, scientific validation of the technical accuracy and clinical equivalency of results obtained through these services. METHODS. We conducted a cohort study of 60 healthy adults to compare the uncertainty and accuracy in 22 common clinical lab tests between one company offering blood tests obtained from finger prick (Theranos) and 2 major clinical testing services that require standard venipuncture draws (Quest and LabCorp). Samples were collected in Phoenix, Arizona, at an ambulatory clinic and at retail outlets with point-of-care services. RESULTS. Theranos flagged tests outside their normal range 1.6× more often than other testing services (P < 0.0001). Of the 22 lab measurements evaluated, 15 (68%) showed significant interservice variability (P < 0.002). We found nonequivalent lipid panel test results between Theranos and other clinical services. Variability in testing services, sample collection times, and subjects markedly influenced lab results. CONCLUSION. While laboratory practice standards exist to control this variability, the disparities between testing services we observed could potentially alter clinical interpretation and health care utilization. Greater transparency and evaluation of testing technologies would increase their utility in personalized health management. FUNDING. This work was supported by the Icahn Institute for Genomics and Multiscale Biology, a gift from the Harris Family Charitable Foundation (to J.T. Dudley), and grants from the NIH (R01 DK098242 and U54 CA189201, to J.T. Dudley, and R01 AG046170 and U01 AI111598, to E.E. Schadt). PMID:27018593

  19. A systematic review of statistical methods used to test for reliability of medical instruments measuring continuous variables.

    PubMed

    Zaki, Rafdzah; Bulgiba, Awang; Nordin, Noorhaire; Azina Ismail, Noor

    2013-06-01

    Reliability measures precision, or the extent to which test results can be replicated. This is the first systematic review to identify statistical methods used to measure the reliability of equipment measuring continuous variables. This study also aims to highlight inappropriate statistical methods used in reliability analyses and their implications for medical practice. In 2010, five electronic databases were searched between 2007 and 2009 to look for reliability studies. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and finally 42 fitted the inclusion criteria. The Intra-class Correlation Coefficient (ICC) is the most popular method, with 25 (60%) studies having used this method, followed by comparison of means (8 studies, or 19%). Out of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and the type of ICC used. Most studies (71%) also tested the agreement of instruments. This study finds that the Intra-class Correlation Coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue and to correctly perform analyses in reliability studies.
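
    For reference, a self-contained sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement), one of the ICC variants whose reporting the review discusses:

        import numpy as np

        def icc_2_1(X):
            """ICC(2,1) from two-way ANOVA mean squares;
            X is an (n subjects x k trials/raters) array."""
            n, k = X.shape
            grand = X.mean()
            ms_r = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
            ms_c = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # trials
            sse = ((X - X.mean(axis=1, keepdims=True)
                      - X.mean(axis=0, keepdims=True) + grand) ** 2).sum()
            ms_e = sse / ((n - 1) * (k - 1))                             # residual
            return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)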

  20. Variable Structure Control of a Hand-Launched Glider

    NASA Technical Reports Server (NTRS)

    Anderson, Mark R.; Waszak, Martin R.

    2005-01-01

    Variable structure control system design methods are applied to the problem of aircraft spin recovery. A variable structure control law typically has two phases of operation. The reaching mode phase uses a nonlinear relay control strategy to drive the system trajectory to a pre-defined switching surface within the motion state space. The sliding mode phase involves motion along the surface as the system moves toward an equilibrium or critical point. Analysis results presented in this paper reveal that the conventional method for spin recovery can be interpreted as a variable structure controller with a switching surface defined at zero yaw rate. Application of Lyapunov stability methods shows that deflecting the ailerons in the direction of the spin helps to ensure that this switching surface is stable. Flight test results, obtained using an instrumented hand-launched glider, are used to verify stability of the reaching mode dynamics.
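
    A toy sketch of a reaching-mode relay law with the switching surface at zero yaw rate; the sign convention, gain, and deadband are illustrative assumptions, not the paper's control law:

        import numpy as np

        def reaching_mode_aileron(yaw_rate, gain=1.0, deadband=0.02):
            """Relay control toward the switching surface s = yaw rate = 0;
            inside the deadband the trajectory is treated as on the surface."""
            s = yaw_rate
            if abs(s) < deadband:
                return 0.0            # sliding mode: no relay action
            return gain * np.sign(s)  # deflect ailerons with the spin direction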

  1. Effects of Talker Variability on Vowel Recognition in Cochlear Implants

    ERIC Educational Resources Information Center

    Chang, Yi-ping; Fu, Qian-Jie

    2006-01-01

    Purpose: To investigate the effects of talker variability on vowel recognition by cochlear implant (CI) users and by normal-hearing (NH) participants listening to 4-channel acoustic CI simulations. Method: CI users were tested with their clinically assigned speech processors. For NH participants, 3 CI processors were simulated, using different…

  2. Evaluation of Reliability Coefficients for Two-Level Models via Latent Variable Analysis

    ERIC Educational Resources Information Center

    Raykov, Tenko; Penev, Spiridon

    2010-01-01

    A latent variable analysis procedure for evaluation of reliability coefficients for 2-level models is outlined. The method provides point and interval estimates of group means' reliability, overall reliability of means, and conditional reliability. In addition, the approach can be used to test simple hypotheses about these parameters. The…

  3. Visual Cues and Listening Effort: Individual Variability

    ERIC Educational Resources Information Center

    Picou, Erin M.; Ricketts, Todd A; Hornsby, Benjamin W. Y.

    2011-01-01

    Purpose: To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Method: Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and…

  4. Adjusting for radiotelemetry error to improve estimates of habitat use.

    Treesearch

    Scott L. Findholt; Bruce K. Johnson; Lyman L. McDonald; John W. Kern; Alan Ager; Rosemary J. Stussy; Larry D. Bryant

    2002-01-01

    Animal locations estimated from radiotelemetry have traditionally been treated as error-free when analyzed in relation to habitat variables. Location error lowers the power of statistical tests of habitat selection. We describe a method that incorporates the error surrounding point estimates into measures of environmental variables determined from a geographic...

  5. Estimation of Latent Group Effects: Psychometric Technical Report No. 2.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.

    Conventional methods of multivariate normal analysis do not apply when the variables of interest are not observed directly, but must be inferred from fallible or incomplete data. For example, responses to mental test items may depend upon latent aptitude variables, which are in turn modeled as functions of demographic effects in the population. A…

  6. Comparison of statistical tests for association between rare variants and binary traits.

    PubMed

    Bacanu, Silviu-Alin; Nelson, Matthew R; Whittaker, John C

    2012-01-01

    Genome-wide association studies have found thousands of common genetic variants associated with a wide variety of diseases and other complex traits. However, a large portion of the predicted genetic contribution to many traits remains unknown. One plausible explanation is that some of the missing variation is due to the effects of rare variants. Nonetheless, the statistical analysis of rare variants is challenging. A commonly used method is to contrast, within the same region (gene), the frequency of minor alleles at rare variants between cases and controls. However, this strategy is most useful under the assumption that the tested variants have similar effects. We previously proposed a method that can accommodate heterogeneous effects in the analysis of quantitative traits. Here we extend this method to binary traits, accommodating covariates. We use simulations for a variety of causal and covariate impact scenarios to compare the performance of the proposed method to standard logistic regression, C-alpha, SKAT, and EREC. We found that (i) logistic regression methods perform well when the heterogeneity of the effects is not extreme and (ii) SKAT and EREC have good performance under all tested scenarios, but they can be computationally intensive. Consequently, it would be more computationally desirable to use a two-step strategy: (i) selecting promising genes by faster methods and (ii) analyzing selected genes using SKAT/EREC. To select promising genes one can use (1) regression methods when effect heterogeneity is assumed to be low and the covariates explain a non-negligible part of trait variability, (2) C-alpha when heterogeneity is assumed to be large and covariates explain a small fraction of trait variability, and (3) the proposed trend and heterogeneity test when the heterogeneity is assumed to be non-trivial and the covariates explain a large fraction of trait variability.

  7. 40 CFR Appendix A to Part 63 - Test Methods

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... components by a different analyst). 3.3 Surrogate Reference Materials. The analyst may use surrogate compounds... the variance of the proposed method is significantly different from that of the validated method by... variables can be determined in eight experiments rather than 128 (W.J. Youden, Statistical Manual of the...

  8. 40 CFR 60.456 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Section 60.456 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... § 60.453. (2) Method 25 for the measurement of the VOC concentration in the gas stream vent. (3) Method... sampling times or smaller volumes, when necessitated by process variables or other factors, may be approved...

  9. Statistical interpretation of machine learning-based feature importance scores for biomarker discovery.

    PubMed

    Huynh-Thu, Vân Anh; Saeys, Yvan; Wehenkel, Louis; Geurts, Pierre

    2012-07-01

    Univariate statistical tests are widely used for biomarker discovery in bioinformatics. These procedures are simple, fast and their output is easily interpretable by biologists, but they can only identify variables that provide a significant amount of information in isolation from the other variables. As biological processes are expected to involve complex interactions between variables, univariate methods thus potentially miss some informative biomarkers. Variable relevance scores provided by machine learning techniques, however, are potentially able to highlight multivariate interacting effects, but unlike the p-values returned by univariate tests, these relevance scores are usually not statistically interpretable. This lack of interpretability hampers the determination of a relevance threshold for extracting a feature subset from the rankings and also prevents the wide adoption of these methods by practitioners. We evaluated several existing and novel procedures that extract relevant features from rankings derived from machine learning approaches. These procedures replace the relevance scores with measures that can be interpreted in a statistical way, such as p-values, false discovery rates, or family-wise error rates, for which it is easier to determine a significance level. Experiments were performed on several artificial problems as well as on real microarray datasets. Although the methods differ in terms of computing times and the tradeoff they achieve between false positives and false negatives, some of them greatly help in the extraction of truly relevant biomarkers and should thus be of great practical interest for biologists and physicians. As a side conclusion, our experiments also clearly highlight that using model performance as a criterion for feature selection is often counter-productive. Python source codes of all tested methods, as well as the MATLAB scripts used for data simulation, can be found in the Supplementary Material.
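
    A generic sketch of one score-to-p-value procedure of the kind the paper evaluates: build a null distribution of random-forest importances by permuting the labels. The model choice and permutation count are placeholders:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def importance_pvalues(X, y, n_perm=200, seed=0):
            """Empirical p-values for feature importances via label permutation."""
            rng = np.random.default_rng(seed)
            obs = RandomForestClassifier(
                n_estimators=500, random_state=0).fit(X, y).feature_importances_
            null = np.empty((n_perm, X.shape[1]))
            for b in range(n_perm):
                rf = RandomForestClassifier(n_estimators=500, random_state=b)
                null[b] = rf.fit(X, rng.permutation(y)).feature_importances_
            # fraction of permuted importances at least as large as the observed
            return (1 + (null >= obs).sum(axis=0)) / (1 + n_perm)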

  10. Tracking variable sedimentation rates in orbitally forced paleoclimate proxy series

    NASA Astrophysics Data System (ADS)

    Li, M.; Kump, L. R.; Hinnov, L.

    2017-12-01

    This study addresses two fundamental issues in cyclostratigraphy: quantitative testing of orbital forcing in cyclic sedimentary sequences and tracking variable sedimentation rates. The methodology proposed here addresses these issues as an inverse problem, and estimates the product-moment correlation coefficient between the frequency spectra of orbital solutions and paleoclimate proxy series over a range of "test" sedimentation rates. It is inspired by the ASM method (1). The number of orbital parameters involved in the estimation is also considered. The method relies on the hypothesis that orbital forcing had a significant impact on the paleoclimate proxy variations; this hypothesis is itself tested. The null hypothesis of no astronomical forcing is evaluated using the Beta distribution, for which the shape parameters are estimated using a Monte Carlo simulation approach. We introduce a metric to estimate the most likely sedimentation rate using the product-moment correlation coefficient, H0 significance level, and the number of contributing orbital parameters, i.e., the CHO value. The CHO metric is applied with a sliding window to track variable sedimentation rates along the paleoclimate proxy series. Two forward models with uniform and variable sedimentation rates are evaluated to demonstrate the robustness of the method. The CHO method is applied to the classical Late Triassic Newark depth rank series; the estimated sedimentation rates match closely with previously published sedimentation rates and provide a more highly time-resolved estimate (2,3). References: (1) Meyers, S.R., Sageman, B.B., Amer. J. Sci., 307, 773-792, 2007; (2) Kent, D.V., Olsen, P.E., Muttoni, G., Earth-Sci. Rev., 166, 153-180, 2017; (3) Li, M., Zhang, Y., Huang, C., Ogg, J., Hinnov, L., Wang, Y., Zou, Z., Li, L., 2017. Earth Planet. Sci. Lett., doi:10.1016/j.epsl.2017.07.015

  11. On the objective identification of flood seasons

    NASA Astrophysics Data System (ADS)

    Cunderlik, Juraj M.; Ouarda, Taha B. M. J.; Bobée, Bernard

    2004-01-01

    The determination of seasons of high and low probability of flood occurrence is a task with many practical applications in contemporary hydrology and water resources management. Flood seasons are generally identified subjectively by visually assessing the temporal distribution of flood occurrences and, then at a regional scale, verified by comparing the temporal distribution with distributions obtained at hydrologically similar neighboring sites. This approach is subjective, time consuming, and potentially unreliable. The main objective of this study is therefore to introduce a new, objective, and systematic method for the identification of flood seasons. The proposed method tests the significance of flood seasons by comparing the observed variability of flood occurrences with the theoretical flood variability in a nonseasonal model. The method also addresses the uncertainty resulting from sampling variability by quantifying the probability associated with the identified flood seasons. The performance of the method was tested on an extensive number of samples with different record lengths generated from several theoretical models of flood seasonality. The proposed approach was then applied on real data from a large set of sites with different flood regimes across Great Britain. The results show that the method can efficiently identify flood seasons from both theoretical and observed distributions of flood occurrence. The results were used for the determination of the main flood seasonality types in Great Britain.
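
    A generic Monte Carlo sketch in the spirit of the proposed test: compare the variance of observed monthly flood counts with its distribution under a uniform (nonseasonal) null. This is a stand-in, not the article's exact statistic:

        import numpy as np

        def flood_season_test(day_of_year, n_sim=9999, seed=0):
            """P-value for seasonality of flood occurrence dates."""
            rng = np.random.default_rng(seed)

            def month_count_var(days):
                months = (np.asarray(days) % 365 // 30.42).astype(int)
                return np.var(np.bincount(months, minlength=12)[:12])

            obs = month_count_var(day_of_year)
            null = np.array([month_count_var(rng.integers(0, 365, len(day_of_year)))
                             for _ in range(n_sim)])
            return (1 + np.sum(null >= obs)) / (1 + n_sim)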

  12. Test Anxiety and High-Stakes Test Performance between School Settings: Implications for Educators

    ERIC Educational Resources Information Center

    von der Embse, Nathaniel; Hasson, Ramzi

    2012-01-01

    With the enactment of standards-based accountability in education, high-stakes tests have become the dominant method for measuring school effectiveness and student achievement. Schools and educators are under increasing pressure to meet achievement standards. However, there are variables which may interfere with the authentic measurement of…

  13. A New Nonparametric Levene Test for Equal Variances

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Zumbo, Bruno D.

    2010-01-01

    Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current "gold…
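
    A sketch of one common rank-based Levene variant (rank-transform the pooled data, then ANOVA on absolute deviations of the ranks); not necessarily the exact statistic proposed in the article:

        import numpy as np
        from scipy import stats

        def rank_levene(*groups):
            """Levene-type test on ranks: pool and rank all data, then run a
            one-way ANOVA on absolute deviations from each group's mean rank."""
            sizes = [len(g) for g in groups]
            ranks = stats.rankdata(np.concatenate(groups))
            split = np.split(ranks, np.cumsum(sizes)[:-1])
            dev = [np.abs(r - r.mean()) for r in split]
            return stats.f_oneway(*dev)   # F statistic and p-value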

  14. Guidelines for the Investigation of Mediating Variables in Business Research.

    PubMed

    MacKinnon, David P; Coxe, Stefany; Baraldi, Amanda N

    2012-03-01

    Business theories often specify the mediating mechanisms by which a predictor variable affects an outcome variable. In the last 30 years, investigations of mediating processes have become more widespread with corresponding developments in statistical methods to conduct these tests. The purpose of this article is to provide guidelines for mediation studies by focusing on decisions made prior to the research study that affect the clarity of conclusions from a mediation study, the statistical models for mediation analysis, and methods to improve interpretation of mediation results after the research study. Throughout this article, the importance of a program of experimental and observational research for investigating mediating mechanisms is emphasized.
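
    A minimal sketch of the single-mediator product-of-coefficients estimate with a percentile bootstrap interval, a standard approach consistent with (but not taken verbatim from) the guidelines discussed:

        import numpy as np

        def indirect_effect(x, m, y, idx):
            """a*b: a from M ~ X, b (partial) from Y ~ X + M."""
            a = np.polyfit(x[idx], m[idx], 1)[0]
            design = np.column_stack([np.ones(len(idx)), x[idx], m[idx]])
            b = np.linalg.lstsq(design, y[idx], rcond=None)[0][2]
            return a * b

        def mediation_bootstrap(x, m, y, n_boot=5000, seed=0):
            """Point estimate and 95% percentile bootstrap CI for a*b."""
            rng = np.random.default_rng(seed)
            x, m, y = (np.asarray(v, float) for v in (x, m, y))
            n = len(x)
            boot = [indirect_effect(x, m, y, rng.integers(0, n, n))
                    for _ in range(n_boot)]
            return (indirect_effect(x, m, y, np.arange(n)),
                    np.percentile(boot, [2.5, 97.5]))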

  15. Spectral gene set enrichment (SGSE).

    PubMed

    Frost, H Robert; Li, Zhigang; Moore, Jason H

    2015-03-03

    Gene set testing is typically performed in a supervised context to quantify the association between groups of genes and a clinical phenotype. In many cases, however, a gene set-based interpretation of genomic data is desired in the absence of a phenotype variable. Although methods exist for unsupervised gene set testing, they predominantly compute enrichment relative to clusters of the genomic variables with performance strongly dependent on the clustering algorithm and number of clusters. We propose a novel method, spectral gene set enrichment (SGSE), for unsupervised competitive testing of the association between gene sets and empirical data sources. SGSE first computes the statistical association between gene sets and principal components (PCs) using our principal component gene set enrichment (PCGSE) method. The overall statistical association between each gene set and the spectral structure of the data is then computed by combining the PC-level p-values using the weighted Z-method with weights set to the PC variance scaled by Tracy-Widom test p-values. Using simulated data, we show that the SGSE algorithm can accurately recover spectral features from noisy data. To illustrate the utility of our method on real data, we demonstrate the superior performance of the SGSE method relative to standard cluster-based techniques for testing the association between MSigDB gene sets and the variance structure of microarray gene expression data. Unsupervised gene set testing can provide important information about the biological signal held in high-dimensional genomic data sets. Because it uses the association between gene sets and samples PCs to generate a measure of unsupervised enrichment, the SGSE method is independent of cluster or network creation algorithms and, most importantly, is able to utilize the statistical significance of PC eigenvalues to ignore elements of the data most likely to represent noise.
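
    A small sketch of the weighted Z-method step: combining PC-level p-values with weights (in SGSE, PC variances scaled by Tracy-Widom p-values, here passed in as given) into one gene-set p-value:

        import numpy as np
        from scipy.stats import norm

        def weighted_z(pvalues, weights):
            """Weighted Stouffer combination of per-PC p-values."""
            p = np.asarray(pvalues, float)
            w = np.asarray(weights, float)
            z = norm.isf(p)                              # one-sided z-scores
            return norm.sf((w * z).sum() / np.sqrt((w ** 2).sum()))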

  16. Alignment as a Teacher Variable

    ERIC Educational Resources Information Center

    Porter, Andrew C.; Smithson, John; Blank, Rolf; Zeidner, Timothy

    2007-01-01

    With the exception of the procedures developed by Porter and colleagues (Porter, 2002), other methods of defining and measuring alignment are essentially limited to alignment between tests and standards. Porter's procedures have been generalized to investigating the alignment between content standards, tests, textbooks, and even classroom…

  17. Estimation of diagnostic test accuracy without full verification: a review of latent class methods

    PubMed Central

    Collins, John; Huynh, Minh

    2014-01-01

    The performance of a diagnostic test is best evaluated against a reference test that is without error. For many diseases, this is not possible, and an imperfect reference test must be used. However, diagnostic accuracy estimates may be biased if inaccurately verified status is used as the truth. Statistical models have been developed to handle this situation by treating disease as a latent variable. In this paper, we conduct a systematized review of statistical methods using latent class models for estimating test accuracy and disease prevalence in the absence of complete verification. PMID:24910172

  18. Valx: A system for extracting and structuring numeric lab test comparison statements from text

    PubMed Central

    Hao, Tianyong; Liu, Hongfang; Weng, Chunhua

    2017-01-01

    Objectives To develop an automated method for extracting and structuring numeric lab test comparison statements from text and evaluate the method using clinical trial eligibility criteria text. Methods Leveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes 7 steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable - numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statements verification. Our reference standard was the consensus-based annotation among three raters for all comparison statements for two variables, i.e., HbA1c and glucose, identified from all of Type 1 and Type 2 diabetes trials in ClinicalTrials.gov. Results The precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, 97.8% for Type 2 Diabetes trials, respectively. The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, 92.3% for Type 2 diabetes trials, respectively. Conclusions Valx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement among the collaborative scientific community. PMID:26940748
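
    A toy illustration of steps 2-4 (numeric, unit, and operator extraction plus variable association) for two variables; the patterns and synonym table are illustrative, not Valx's actual rules:

        import re

        SYNONYMS = {"hba1c": "HbA1c", "glycated hemoglobin": "HbA1c",
                    "glucose": "glucose"}
        PATTERN = re.compile(
            r"(?P<var>HbA1c|glycated hemoglobin|glucose)\s*"
            r"(?P<op><=|>=|<|>|=)\s*(?P<num>\d+(?:\.\d+)?)\s*"
            r"(?P<unit>%|mg/dl|mmol/l)?", re.IGNORECASE)

        def extract_comparisons(text):
            """Return (variable, operator, value, unit) tuples found in text."""
            return [(SYNONYMS[m.group("var").lower()], m.group("op"),
                     float(m.group("num")), (m.group("unit") or "").lower())
                    for m in PATTERN.finditer(text)]

        print(extract_comparisons("Inclusion: HbA1c >= 7.5% and glucose < 200 mg/dL"))
        # [('HbA1c', '>=', 7.5, '%'), ('glucose', '<', 200.0, 'mg/dl')]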

  19. Monoamine Oxidase A (MAOA) Gene and Personality Traits from Late Adolescence through Early Adulthood: A Latent Variable Investigation

    PubMed Central

    Xu, Man K.; Gaysina, Darya; Tsonaka, Roula; Morin, Alexandre J. S.; Croudace, Tim J.; Barnett, Jennifer H.; Houwing-Duistermaat, Jeanine; Richards, Marcus; Jones, Peter B.

    2017-01-01

    Very few molecular genetic studies of personality traits have used longitudinal phenotypic data; therefore, the molecular basis for developmental change and stability of personality remains to be explored. We examined the role of the monoamine oxidase A gene (MAOA) on extraversion and neuroticism from adolescence to adulthood, using modern latent variable methods. A sample of 1,160 male and 1,180 female participants with complete genotyping data was drawn from a British national birth cohort, the MRC National Survey of Health and Development (NSHD). The predictor variable was based on a latent variable representing genetic variations of the MAOA gene measured by three SNPs (rs3788862, rs5906957, and rs979606). Latent phenotype variables were constructed using psychometric methods to represent cross-sectional and longitudinal phenotypes of extraversion and neuroticism measured at ages 16 and 26. In males, the MAOA genetic latent variable (AAG) was associated with a lower extraversion score at age 16 (β = −0.167; CI: −0.289, −0.045; p = 0.007, FDRp = 0.042), as well as a greater increase in extraversion score from 16 to 26 years (β = 0.197; CI: 0.067, 0.328; p = 0.003, FDRp = 0.036). No genetic association was found for neuroticism after adjustment for multiple testing. Although we did not find statistically significant associations after multiple-testing correction in females, this result needs to be interpreted with caution due to issues related to X-inactivation. The latent variable method is an effective way of modeling phenotype- and genetic-based variances and may therefore improve the methodology of molecular genetic studies of complex psychological traits. PMID:29075213

  20. Monoamine Oxidase A (MAOA) Gene and Personality Traits from Late Adolescence through Early Adulthood: A Latent Variable Investigation.

    PubMed

    Xu, Man K; Gaysina, Darya; Tsonaka, Roula; Morin, Alexandre J S; Croudace, Tim J; Barnett, Jennifer H; Houwing-Duistermaat, Jeanine; Richards, Marcus; Jones, Peter B

    2017-01-01

    Very few molecular genetic studies of personality traits have used longitudinal phenotypic data; therefore, the molecular basis for developmental change and stability of personality remains to be explored. We examined the role of the monoamine oxidase A gene (MAOA) on extraversion and neuroticism from adolescence to adulthood, using modern latent variable methods. A sample of 1,160 male and 1,180 female participants with complete genotyping data was drawn from a British national birth cohort, the MRC National Survey of Health and Development (NSHD). The predictor variable was based on a latent variable representing genetic variations of the MAOA gene measured by three SNPs (rs3788862, rs5906957, and rs979606). Latent phenotype variables were constructed using psychometric methods to represent cross-sectional and longitudinal phenotypes of extraversion and neuroticism measured at ages 16 and 26. In males, the MAOA genetic latent variable (AAG) was associated with a lower extraversion score at age 16 (β = -0.167; CI: -0.289, -0.045; p = 0.007, FDRp = 0.042), as well as a greater increase in extraversion score from 16 to 26 years (β = 0.197; CI: 0.067, 0.328; p = 0.003, FDRp = 0.036). No genetic association was found for neuroticism after adjustment for multiple testing. Although we did not find statistically significant associations after multiple-testing correction in females, this result needs to be interpreted with caution due to issues related to X-inactivation. The latent variable method is an effective way of modeling phenotype- and genetic-based variances and may therefore improve the methodology of molecular genetic studies of complex psychological traits.

  1. Effect of Test Specimen Shape and Size on Interlaminar Tensile Properties of Advanced Carbon-Carbon Composites

    NASA Technical Reports Server (NTRS)

    Vaughn, Wallace L.

    2015-01-01

    The interlaminar tensile strength of 1000-tow T-300 fiber ACC-6 carbon-carbon composites was measured using the method of bonding the coupons to adherends at room temperature. The size, 0.70 to 1.963 inches maximum width or radius, and shape, round or square, of the test coupons were varied to determine if the test method was sensitive to these variables. Sixteen total variations were investigated and the results modeled.

  2. Comparing four non-invasive methods to determine the ventilatory anaerobic threshold during cardiopulmonary exercise testing in children with congenital heart or lung disease.

    PubMed

    Visschers, Naomi C A; Hulzebos, Erik H; van Brussel, Marco; Takken, Tim

    2015-11-01

    The ventilatory anaerobic threshold (VAT) is an important measure for assessing aerobic fitness in patients with cardiopulmonary disease. Several methods exist to determine the VAT; however, there is no consensus on which of these methods is the most accurate. To compare four different non-invasive methods for the determination of the VAT via respiratory gas exchange analysis during a cardiopulmonary exercise test (CPET). A secondary objective is to determine the interobserver reliability of the VAT. CPET data of 30 children diagnosed with either cystic fibrosis (CF; N = 15) or with a surgically corrected dextro-transposition of the great arteries (asoTGA; N = 15) were included. No significant differences were found between conditions or among testers. The RER = 1 method differed the most compared to the other methods, showing significantly higher results in all six variables. The PET-O2 method differed significantly from the V-slope method and the VentEq method on five of six and four of six exercise variables, respectively. The V-slope and the VentEq method differed significantly on one of six exercise variables. Ten of thirteen ICCs that were >0.80 had a 95% CI > 0.70. The RER = 1 method and the V-slope method had the highest number of significant ICCs and 95% CIs. The V-slope method, the ventilatory equivalent method and the PET-O2 method are comparable and reliable methods to determine the VAT during CPET in children with CF or asoTGA. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
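
    For intuition, a simplified V-slope sketch: fit two lines to VCO2 versus VO2 and take the breakpoint minimizing total squared error. Real implementations add filtering and physiological constraints:

        import numpy as np

        def v_slope_vat(vo2, vco2):
            """Two-line fit of VCO2 vs VO2; the VO2 at the best breakpoint
            estimates the VAT (needs at least ~7 data points)."""
            order = np.argsort(vo2)
            x = np.asarray(vo2, float)[order]
            y = np.asarray(vco2, float)[order]
            best_i, best_sse = None, np.inf
            for i in range(3, len(x) - 3):            # candidate breakpoints
                sse = 0.0
                for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
                    coef = np.polyfit(xs, ys, 1)
                    sse += float(((np.polyval(coef, xs) - ys) ** 2).sum())
                if sse < best_sse:
                    best_i, best_sse = i, sse
            return x[best_i]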

  3. Determination of insoluble, soluble, and total dietary fiber (CODEX definition) by enzymatic-gravimetric method and liquid chromatography: collaborative study.

    PubMed

    McCleary, Barry V; DeVries, Jonathan W; Rader, Jeanne I; Cohen, Gerald; Prosky, Leon; Mugford, David C; Okuma, Kazuhiro

    2012-01-01

    A method for the determination of insoluble (IDF), soluble (SDF), and total dietary fiber (TDF), as defined by the CODEX Alimentarius, was validated in foods. Based upon the principles of AOAC Official Methods 985.29, 991.43, 2001.03, and 2002.02, the method quantitates water-insoluble and water-soluble dietary fiber. This method extends the capabilities of the previously adopted AOAC Official Method 2009.01, Total Dietary Fiber in Foods, Enzymatic-Gravimetric-Liquid Chromatographic Method, applicable to plant material, foods, and food ingredients consistent with CODEX Definition 2009, including naturally occurring, isolated, modified, and synthetic polymers meeting that definition. The method was evaluated through an AOAC/AACC collaborative study. Twenty-two laboratories participated, with 19 laboratories returning valid assay data for 16 test portions (eight blind duplicates) consisting of samples with a range of traditional dietary fiber, resistant starch, and nondigestible oligosaccharides. The dietary fiber content of the eight test pairs ranged from 10.45 to 29.90%. Digestion of samples under the conditions of AOAC 2002.02 followed by the isolation, fractionation, and gravimetric procedures of AOAC 985.29 (and its extensions 991.42 and 993.19) and 991.43 results in quantitation of IDF and soluble dietary fiber that precipitates (SDFP). The filtrate from the quantitation of water-alcohol-insoluble dietary fiber is concentrated, deionized, concentrated again, and analyzed by LC to determine the SDF that remains soluble (SDFS), i.e., all dietary fiber polymers of degree of polymerization ≥ 3, consisting primarily, but not exclusively, of oligosaccharides. SDF is calculated as the sum of SDFP and SDFS. TDF is calculated as the sum of IDF and SDF. The within-laboratory variability, repeatability SD (Sr), for IDF ranged from 0.13 to 0.71, and the between-laboratory variability, reproducibility SD (SR), for IDF ranged from 0.42 to 2.24. The within-laboratory variability Sr for SDF ranged from 0.28 to 1.03, and the between-laboratory variability SR for SDF ranged from 0.85 to 1.66. The within-laboratory variability Sr for TDF ranged from 0.47 to 1.41, and the between-laboratory variability SR for TDF ranged from 0.95 to 3.14. This is comparable to other official and approved dietary fiber methods, and the method is recommended for adoption as Official First Action.

  4. Reproducibility of Interferon Gamma (IFN-γ) Release Assays. A Systematic Review

    PubMed Central

    Tagmouti, Saloua; Slater, Madeline; Benedetti, Andrea; Kik, Sandra V.; Banaei, Niaz; Cattamanchi, Adithya; Metcalfe, John; Dowdy, David; van Zyl Smit, Richard; Dendukuri, Nandini

    2014-01-01

    Rationale: Interferon gamma (IFN-γ) release assays for latent tuberculosis infection result in a larger-than-expected number of conversions and reversions in occupational screening programs, and reproducibility of test results is a concern. Objectives: Knowledge of the relative contribution and extent of the individual sources of variability (immunological, preanalytical, or analytical) could help optimize testing protocols. Methods: We performed a systematic review of studies published by October 2013 on all potential sources of variability of commercial IFN-γ release assays (QuantiFERON-TB Gold In-Tube and T-SPOT.TB). The included studies assessed test variability under identical conditions and under different conditions (the latter both overall and stratified by individual sources of variability). Linear mixed effects models were used to estimate within-subject SD. Measurements and Main Results: We identified a total of 26 articles, including 7 studies analyzing variability under the same conditions, 10 studies analyzing variability with repeat testing over time under different conditions, and 19 studies reporting individual sources of variability. Most data were on QuantiFERON (only three studies on T-SPOT.TB). A considerable number of conversions and reversions were seen around the manufacturer-recommended cut-point. The estimated range of variability of IFN-γ response in QuantiFERON under identical conditions was ±0.47 IU/ml (coefficient of variation, 13%) and ±0.26 IU/ml (30%) for individuals with an initial IFN-γ response in the borderline range (0.25–0.80 IU/ml). The estimated range of variability in noncontrolled settings was substantially larger (±1.4 IU/ml; 60%). Blood volume inoculated into QuantiFERON tubes and preanalytic delay were identified as key sources of variability. Conclusions: This systematic review shows substantial variability with repeat IFN-γ release assays testing even under identical conditions, suggesting that reversions and conversions around the existing cut-point should be interpreted with caution. PMID:25188809

  5. COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.

    USGS Publications Warehouse

    Hromadka, T.V.; Yen, C.C.; Guymon, G.L.

    1985-01-01

    The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.

  6. Variable-intercept panel model for deformation zoning of a super-high arch dam.

    PubMed

    Shi, Zhongwen; Gu, Chongshi; Qin, Dong

    2016-01-01

    This study determines dam deformation similarity indexes based on an analysis of deformation zoning features and panel data clustering theory, with comprehensive consideration of the actual deformation behavior of super-high arch dams and the spatial-temporal features of dam deformation. Measurement methods for these indexes are studied. Based on the established deformation similarity criteria, the principle used to determine the number of dam deformation zones is constructed through the entropy weight method. This study proposes the deformation zoning method for super-high arch dams and its implementation steps, analyzes the effect of special influencing factors of different dam zones on the deformation, introduces dummy variables that represent the special effect of dam deformation, and establishes a variable-intercept panel model for deformation zoning of super-high arch dams. Based on different patterns of the special effect in the variable-intercept panel model, two panel analysis models were established to monitor fixed and random effects of dam deformation. The Hausman test for model selection and a method for assessing model effectiveness are discussed. Finally, the effectiveness of the established models is verified through a case study.
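
    A compact sketch of the entropy weight method mentioned above; rows are candidate zones, columns are similarity indexes, and positive entries are assumed:

        import numpy as np

        def entropy_weights(X):
            """Entropy weight method: indexes with more cross-zone divergence
            receive larger weights."""
            X = np.asarray(X, float)
            P = X / X.sum(axis=0)                   # zone shares per index
            logP = np.log(np.where(P > 0, P, 1.0))  # log(1)=0 keeps zero shares inert
            E = -(P * logP).sum(axis=0) / np.log(X.shape[0])   # entropy in [0, 1]
            d = 1.0 - E                             # degree of divergence
            return d / d.sum()                      # normalised weights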

  7. Determinants of Healthcare Expenditure in Economic Cooperation Organization (ECO) Countries: Evidence from Panel Cointegration Tests

    PubMed Central

    Samadi, Alihussein; Homaie Rad, Enayatollah

    2013-01-01

    Background: Over the last decade there has been an increase in healthcare expenditures while at the same time the inequity in distribution of resources has grown. These two issues have urged researchers to review the determinants of healthcare expenditures. In this study, we surveyed the determinants of health expenditures in Economic Cooperation Organization (ECO) countries. Methods: We used panel data econometric methods for the purpose of this research. For long-term analysis, we used the Pesaran cross-sectional dependence test, followed by panel unit root tests to establish whether the variables were stationary. Upon confirming that the variables were non-stationary, we used the Westerlund panel cointegration test to show whether long-term relationships exist between the variables. Finally, we estimated the model with the Continuously-Updated Fully Modified (CUP-FM) estimator. For short-term analysis, we used the Fixed Effects (FE) estimator. Results: A long-term relationship was found between health expenditures per capita and GDP per capita, the proportion of the population below 15 and above 65 years old, the number of physicians, and urbanisation. In addition, all the variables had short-term relationships with health expenditures, except for the proportion of the population above 65 years old. Conclusion: The coefficient of GDP was below 1 in the model. Therefore, health is a necessity good in ECO countries, and governments must pay due attention to the equal distribution of health services in all regions of the country. PMID:24596838

  8. Validation of structural analysis methods using the in-house liner cyclic rigs

    NASA Technical Reports Server (NTRS)

    Thompson, R. L.

    1982-01-01

    Test conditions and variables to be considered in each of the test rigs and test configurations, and also used in the validation of the structural predictive theories and tools, include: thermal and mechanical load histories (simulating an engine mission cycle); different boundary conditions; specimens and components of different dimensions and geometries; different materials; various cooling schemes and cooling hole configurations; several advanced burner liner structural design concepts; and the simulation of hot streaks. Based on these test conditions and test variables, the test matrices for each rig and configuration can be established to verify the predictive tools over as wide a range of test conditions as possible using the simplest possible tests. A flow chart for the thermal/structural analysis of a burner liner, and how the analysis relates to the tests, is shown schematically. The chart shows that several nonlinear constitutive theories are to be evaluated.

  9. Probabilistic Component Mode Synthesis of Nondeterministic Substructures

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1996-01-01

    Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems in these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. We present a method where the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented that illustrates the theory.
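
    For contrast, a brute-force Monte Carlo baseline on a cantilever-like test problem, i.e., the expensive reference computation that such probabilistic methods aim to replace. Beam properties and uncertainty levels are invented:

        import numpy as np

        def first_frequency(E, rho, L=1.0, b=0.02, h=0.005):
            """First bending frequency (Hz) of an Euler-Bernoulli cantilever."""
            I, A = b * h**3 / 12.0, b * h
            return (1.8751**2 / (2.0 * np.pi)) * np.sqrt(E * I / (rho * A)) / L**2

        rng = np.random.default_rng(0)
        E = rng.normal(70e9, 3e9, 100_000)       # uncertain modulus (aluminium-like)
        rho = rng.normal(2700.0, 50.0, 100_000)  # uncertain density
        f = np.sort(first_frequency(E, rho))
        cdf = np.arange(1, f.size + 1) / f.size  # empirical CDF of the eigenfrequency
        print(f"99th-percentile first frequency: {np.interp(0.99, cdf, f):.1f} Hz")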

  10. SCD-HeFT: Use of RR Interval Statistics for Long-term Risk Stratification for Arrhythmic Sudden Cardiac Death

    PubMed Central

    Au-yeung, Wan-tai M.; Reinhall, Per; Poole, Jeanne E.; Anderson, Jill; Johnson, George; Fletcher, Ross D.; Moore, Hans J.; Mark, Daniel B.; Lee, Kerry L.; Bardy, Gust H.

    2015-01-01

    Background In the SCD-HeFT a significant fraction of the congestive heart failure (CHF) patients ultimately did not die suddenly from arrhythmic causes. CHF patients will benefit from better tools to identify if ICD therapy is needed. Objective To identify predictor variables from baseline SCD-HeFT patients’ RR intervals that correlate with arrhythmic sudden cardiac death (SCD) and mortality and to design an ICD therapy screening test. Methods Ten predictor variables were extracted from pre-randomization Holter data from 475 patients enrolled in the SCD-HeFT ICD arm using novel and traditional heart rate variability methods. All variables were correlated with SCD using the Mann-Whitney-Wilcoxon test and receiver operating characteristic analysis. ICD therapy screening tests were designed by minimizing the cost of false classifications. Survival analysis, including log-rank test and Cox models, was also performed. Results α1 and α2 from detrended fluctuation analysis, the ratio of low to high frequency power, the number of PVCs per hour and heart rate turbulence slope are all statistically significant for predicting the occurrences of SCD (p<0.001) and survival (log-rank p<0.01). The most powerful multivariate predictor using Cox proportional hazards was α2, with a hazard ratio of 0.0465 (95% CI: 0.00528 – 0.409, p<0.01). Conclusion Predictor variables from RR intervals correlate with the occurrences of SCD and distinguish survival among SCD-HeFT ICD patients. We believe SCD prediction models should incorporate Holter-based RR interval analysis to refine ICD patient selection, especially in removing patients who are unlikely to benefit from ICD therapy. PMID:26096609
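
    A bare-bones sketch of the detrended fluctuation analysis exponent; α1 and α2 follow from evaluating it over short and long scale ranges. Window handling is simplified relative to production HRV code:

        import numpy as np

        def dfa_alpha(rr, scales):
            """DFA exponent: integrate the series, linearly detrend within
            windows of each scale n, and fit log F(n) against log n."""
            y = np.cumsum(np.asarray(rr, float) - np.mean(rr))
            F = []
            for n in scales:
                n_win = len(y) // n
                segs = y[:n_win * n].reshape(n_win, n)
                t = np.arange(n)
                res = [s - np.polyval(np.polyfit(t, s, 1), t) for s in segs]
                F.append(np.sqrt(np.mean(np.concatenate(res) ** 2)))
            return np.polyfit(np.log(list(scales)), np.log(F), 1)[0]

        # alpha1 = dfa_alpha(rr, range(4, 17)); alpha2 = dfa_alpha(rr, range(16, 65))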

  11. Short term spatio-temporal variability of soil water-extractable calcium and magnesium after a low severity grassland fire in Lithuania.

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Martin, David

    2014-05-01

    Fire has important impacts on the spatio-temporal distribution of soil nutrients (Outeiro et al., 2008). This impact depends on fire severity, the topography of the burned area, the type of soil and vegetation affected, and the post-fire meteorological conditions. Fire produces a complex mosaic of impacts on soil that can be extremely variable in space and time, even at small plot scale. In order to assess and map such a heterogeneous distribution, testing interpolation methods is fundamental to identify the best estimator and to better understand the spatial distribution of soil nutrients. The objective of this work is to identify the short-term spatial variability of water-extractable calcium and magnesium after a low severity grassland fire. The studied area is located near Vilnius (Lithuania) at 54° 42' N, 25° 08' E, 158 m a.s.l. Four days after the fire, a 400 m² plot (20 x 20 m, with 5 m spacing between sampling points) was laid out in the burned area. Twenty-five samples from the topsoil (0-5 cm) were collected immediately after the fire (IAF) and 2, 5, 7, and 9 months after the fire (a total of 125 samples across all sampling dates). The original water-extractable calcium and magnesium data did not follow a Gaussian distribution, so a natural (Napierian) logarithm (ln) transform was applied to normalize the data. Differences in water-extractable calcium and magnesium among sampling dates were tested with one-way ANOVA on the ln data. In order to assess the spatial variability of water-extractable calcium and magnesium, we tested several interpolation methods: Ordinary Kriging (OK); Inverse Distance Weighting (IDW) with powers of 1, 2, 3, and 4; Radial Basis Functions (RBF) - Inverse Multiquadratic (IMT), Multilog (MTG), Multiquadratic (MTQ), Natural Cubic Spline (NCS), and Thin Plate Spline (TPS); and Local Polynomial (LP) with powers of 1 and 2. Interpolation tests were carried out on the ln data. The best interpolation method was assessed using cross-validation, obtained by taking each observation in turn out of the sample pool and estimating it from the remaining ones. The errors produced (observed minus predicted) are used to evaluate the performance of each method; from these, the mean error (ME) and root mean square error (RMSE) were calculated. The best method was the one with the lowest RMSE (Pereira et al., in press). The results showed significant differences among sampling dates in water-extractable calcium (F = 138.78, p < 0.001) and water-extractable magnesium (F = 160.66, p < 0.001). Water-extractable calcium and magnesium were highest IAF, decreased until 7 months after the fire, and rose again at the last sampling date. Among the tested methods, the most accurate for interpolating water-extractable calcium were: IAF, IDW1; 2 months, IDW1; 5 months, OK; 7 months, IDW4; and 9 months, IDW3. For water-extractable magnesium, the best interpolation techniques were: IAF, IDW2; 2 months, IDW1; 5 months, IDW3; 7 months, TPS; and 9 months, IDW1. These results suggest that the spatial variability of these water-extractable nutrients changes over time. The causes of this variability will be discussed during the presentation. References Outeiro, L., Aspero, F., Ubeda, X. (2008) Geostatistical methods to study spatial variability of soil cation after a prescribed fire and rainfall. Catena, 74: 310-320. Pereira, P., Cerdà, A., Úbeda, X., Mataix-Solera, J., Arcenegui, V., Zavala, L. Modelling the impacts of wildfire on ash thickness in a short-term period, Land Degradation and Development, (In Press), DOI: 10.1002/ldr.2195
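
    A minimal sketch of the cross-validation procedure described above, assuming a simple inverse-distance-weighting interpolator and stand-in ln-transformed concentrations on the 5 m sampling grid. The RMSE comparison across IDW powers mirrors how the best estimator was selected; none of this is the authors' code.

      import numpy as np

      def idw_predict(xy_train, z_train, xy_target, power):
          # inverse-distance-weighted average of the training values
          d = np.linalg.norm(xy_train - xy_target, axis=1)
          w = 1.0 / np.maximum(d, 1e-12) ** power
          return np.sum(w * z_train) / np.sum(w)

      def loo_rmse(xy, z, power):
          # leave-one-out cross-validation: predict each point from the rest
          errs = []
          for i in range(len(z)):
              mask = np.arange(len(z)) != i
              errs.append(z[i] - idw_predict(xy[mask], z[mask], xy[i], power))
          return np.sqrt(np.mean(np.square(errs)))

      # 5 x 5 grid with 5 m spacing, matching the sampling design above
      xy = np.array([(i * 5.0, j * 5.0) for i in range(5) for j in range(5)])
      z = np.random.default_rng(1).normal(3.0, 0.4, 25)  # stand-in ln(Ca) values
      for p in (1, 2, 3, 4):
          print(f"IDW power {p}: LOO RMSE = {loo_rmse(xy, z, p):.3f}")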

  12. Accurate evaluation of sensitivity for calibration between a LiDAR and a panoramic camera used for remote sensing

    NASA Astrophysics Data System (ADS)

    García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier

    2016-04-01

    Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the use of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor in the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests, we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. The sensitivity of each variable involved in the calibration was computed by the Sobol method, which is based on the analysis of the variance breakdown, and by the Fourier amplitude sensitivity test method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
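
    To make the variance-based approach concrete, here is a minimal Monte Carlo estimator of first-order Sobol indices using the Saltelli sampling scheme. The three-variable toy model is a purely hypothetical stand-in for the camera-LiDAR calibration model.

      import numpy as np

      def sobol_first_order(model, n_vars, n_samples=4096, seed=0):
          # Saltelli-style Monte Carlo estimate of first-order Sobol indices
          rng = np.random.default_rng(seed)
          A = rng.random((n_samples, n_vars))
          B = rng.random((n_samples, n_vars))
          fA, fB = model(A), model(B)
          var = np.var(np.concatenate([fA, fB]))
          S = np.empty(n_vars)
          for i in range(n_vars):
              AB = A.copy()
              AB[:, i] = B[:, i]   # A with column i taken from B
              S[i] = np.mean(fB * (model(AB) - fA)) / var
          return S

      # hypothetical stand-in for the calibration-error model
      def model(X):
          return X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.1 * X[:, 2]

      print(sobol_first_order(model, 3))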

  13. A Novel Group-Fused Sparse Partial Correlation Method for Simultaneous Estimation of Functional Networks in Group Comparison Studies.

    PubMed

    Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando

    2018-05-01

    The conventional way to estimate functional networks is primarily based on Pearson correlation along with the classic Fisher Z test. In general, networks are calculated at the individual level and subsequently aggregated to obtain group-level networks. However, such estimated networks are inevitably affected by large inherent inter-subject variability. A joint graphical model with stability selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits might be compromised when two groups are being compared, given that JGMSS is blind to the other group when it is applied to estimate networks from a given group. We propose a novel method, GMGLASS, for robustly estimating networks from two groups using group-fused multiple graphical lasso combined with stability selection. Specifically, by simultaneously estimating similar within-group networks and the between-group difference, it is possible to address both the inter-subject variability of individually estimated networks inherent in existing methods such as the Fisher Z test, and JGMSS's neglect of between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of several key network metrics, and to compare it with JGMSS and the Fisher Z test, we apply all three methods to both simulated and in vivo data. As a method aimed at group comparison studies, our study involves two groups for each case, i.e., a normal control group and a patient group; for the in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.
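
    GMGLASS itself is not sketched here, but the baseline it improves on can be illustrated: per-group sparse partial-correlation networks estimated with the graphical lasso (scikit-learn's GraphicalLassoCV), followed by a naive between-group edge contrast. Unlike GMGLASS, this fits each group independently and omits the group-fusion penalty and stability selection; the data are random stand-ins.

      import numpy as np
      from sklearn.covariance import GraphicalLassoCV

      rng = np.random.default_rng(1)
      controls = rng.normal(size=(500, 10))   # time points x regions, group 1
      patients = rng.normal(size=(500, 10))   # time points x regions, group 2

      nets = {}
      for name, data in [("control", controls), ("patient", patients)]:
          prec = GraphicalLassoCV().fit(data).precision_
          d = np.sqrt(np.diag(prec))
          nets[name] = -prec / np.outer(d, d)   # partial correlations

      diff = nets["patient"] - nets["control"]  # naive between-group contrast
      print(np.round(diff, 2))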

  14. Enhanced Conformational Sampling Using Replica Exchange with Collective-Variable Tempering

    PubMed Central

    2015-01-01

    The computational study of conformational transitions in RNA and proteins with atomistic molecular dynamics often requires suitable enhanced sampling techniques. We here introduce a novel method where concurrent metadynamics are integrated in a Hamiltonian replica-exchange scheme. The ladder of replicas is built with different strengths of the bias potential exploiting the tunability of well-tempered metadynamics. Using this method, free-energy barriers of individual collective variables are significantly reduced compared with simple force-field scaling. The introduced methodology is flexible and allows adaptive bias potentials to be self-consistently constructed for a large number of simple collective variables, such as distances and dihedral angles. The method is tested on alanine dipeptide and applied to the difficult problem of conformational sampling in a tetranucleotide. PMID:25838811

  15. Linguistic methodology for the analysis of aviation accidents

    NASA Technical Reports Server (NTRS)

    Goguen, J. A.; Linde, C.

    1983-01-01

    A linguistic method for the analysis of small-group discourse was developed, and its use on transcripts of commercial air transport accidents is demonstrated. The method identifies the discourse types that occur and determines their linguistic structure; it identifies significant linguistic variables based upon these structures or other linguistic concepts such as speech act and topic; it tests hypotheses that support the significance and reliability of these variables; and it indicates the implications of the validated hypotheses. These implications fall into three categories: (1) training crews to use more nearly optimal communication patterns; (2) using linguistic variables as indices for aspects of crew performance such as attention; and (3) providing guidelines for the design of aviation procedures and equipment, especially those that involve speech.

  16. Establishment of alternative potency test for botulinum toxin type A using compound muscle action potential (CMAP) in rats.

    PubMed

    Torii, Yasushi; Goto, Yoshitaka; Nakahira, Shinji; Ginnaga, Akihiro

    2014-11-01

    The biological activity of botulinum toxin type A has been evaluated using the mouse intraperitoneal (ip) LD50 test. This method requires a large number of mice to precisely determine toxin activity, and, as such, poses problems with regard to animal welfare. We previously developed a compound muscle action potential (CMAP) assay using rats as an alternative method to the mouse ip LD50 test. In this study, to evaluate this quantitative method of measuring toxin activity using CMAP, we assessed the parameters necessary for quantitative tests according to ICH Q2 (R1). This assay could be used to evaluate the activity of the toxin, even when inactive toxin was mixed with the sample. To reduce the number of animals needed, this assay was set to measure two samples per animal. Linearity was detected over a range of 0.1-12.8 U/mL, and the measurement range was set at 0.4-6.4 U/mL. The results for accuracy and precision showed low variability. The body weight was selected as a variable factor, but it showed no effect on the CMAP amplitude. In this study, potency tests using the rat CMAP assay of botulinum toxin type A demonstrated that it met the criteria for a quantitative analysis method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes

    NASA Astrophysics Data System (ADS)

    Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping

    2017-01-01

    Batch processes are characterized by nonlinearity and system uncertainty; a conventional single model may therefore be ill-suited. A local-learning soft sensor based on a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Multiple local GPR models are then developed, one for each local input variable subset. When new test data arrive, the posterior probability of each best-performing local model is estimated by Bayesian inference and used to combine the local GPR models into the final prediction. The proposed soft sensor is demonstrated on an industrial fed-batch chlortetracycline fermentation process.
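
    A rough sketch of the ensemble idea, assuming scikit-learn's GaussianProcessRegressor: local models are trained on different input-variable subsets (here chosen by hand, standing in for the bootstrapping/PMI selection) and their predictions combined. For simplicity, the combination weights below are inverse predictive variances, a plain stand-in for the paper's Bayesian posterior-probability weighting.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 6))           # process variables
      y = X[:, 0] ** 2 + np.sin(X[:, 1]) + rng.normal(0, 0.1, 200)

      # local input-variable subsets (stand-ins for bootstrap/PMI selection)
      subsets = [[0, 1, 2], [0, 1, 4]]
      models = [GaussianProcessRegressor(normalize_y=True).fit(X[:, s], y)
                for s in subsets]

      x_new = rng.normal(size=(1, 6))
      preds = [m.predict(x_new[:, s], return_std=True)
               for m, s in zip(models, subsets)]
      mu = np.array([p[0][0] for p in preds])
      sd = np.array([p[1][0] for p in preds])
      w = 1.0 / sd ** 2                       # inverse-variance weights
      print(np.sum(w * mu) / np.sum(w))       # combined prediction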

  18. 2D versus 3D in the kinematic analysis of the horse at the trot.

    PubMed

    Miró, F; Santos, R; Garrido-Castro, J L; Galisteo, A M; Medina-Carnicer, R

    2009-08-01

    The handled trot of three Lusitano Purebred stallions was analyzed using 2D and 3D kinematic analysis methods. Using the same capture and analysis system, 2D and 3D data were obtained for several linear variables (stride length, maximal height of the hoof trajectories) and angular variables (angular range of motion, inclination of bone segments). A paired Student t-test was performed to detect statistically significant differences between data resulting from the two methodologies. With respect to the angular variables, there were significant differences in scapula inclination, shoulder angle, cannon inclination, and protraction-retraction angle among the forelimb variables, but none of the hind limb variables differed statistically. Differences between the two methods were found in most of the linear variables analyzed.
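
    The statistical comparison above is a standard paired t-test on per-stride variables; a minimal sketch with hypothetical shoulder-angle range-of-motion values:

      import numpy as np
      from scipy import stats

      # hypothetical shoulder-angle range of motion (degrees), same strides
      rom_2d = np.array([31.2, 29.8, 33.1, 30.5, 32.0, 28.9])
      rom_3d = np.array([33.0, 31.1, 34.6, 31.9, 33.4, 30.2])

      t, p = stats.ttest_rel(rom_2d, rom_3d)  # paired Student t-test
      print(f"t = {t:.2f}, p = {p:.3f}")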

  19. Relationships among Reading Performance, Locus of Control and Achievement for Marginal Admission Students.

    ERIC Educational Resources Information Center

    Pepper, Roger S.; Drexler, John A., Jr.

    The first phase of the study was a 2 x 2 factorial design, with locus of control and instructional method (lecture and demonstration) as independent variables and honor point average (HPA) as the dependent variable. The second phase used correlational techniques to test the extent to which reading performance and traditional predictors of…

  20. Interrater Agreement Evaluation: A Latent Variable Modeling Approach

    ERIC Educational Resources Information Center

    Raykov, Tenko; Dimitrov, Dimiter M.; von Eye, Alexander; Marcoulides, George A.

    2013-01-01

    A latent variable modeling method for evaluation of interrater agreement is outlined. The procedure is useful for point and interval estimation of the degree of agreement among a given set of judges evaluating a group of targets. In addition, the approach allows one to test for identity in underlying thresholds across raters as well as to identify…

  1. Temporal Patterns of Variable Relationships in Person-Oriented Research: Longitudinal Models of Configural Frequency Analysis

    ERIC Educational Resources Information Center

    von Eye, Alexander; Mun, Eun Young; Bogat, G. Anne

    2008-01-01

    This article reviews the premises of configural frequency analysis (CFA), including methods of choosing significance tests and base models, as well as protecting [alpha], and discusses why CFA is a useful approach when conducting longitudinal person-oriented research. CFA operates at the manifest variable level. Longitudinal CFA seeks to identify…

  2. Associations between Whole-Grain Intake, Psychosocial Variables, and Home Availability among Elementary School Children

    ERIC Educational Resources Information Center

    Rosen, Renee A.; Burgess-Champoux, Teri L.; Marquart, Len; Reicks, Marla M.

    2012-01-01

    Objective: Develop, refine, and test psychosocial scales for associations with whole-grain intake. Methods: A cross-sectional survey was conducted in a Minneapolis/St. Paul suburban elementary school with children in fourth through sixth grades (n = 98) and their parents (n = 76). Variables of interest were child whole-grain intake, self-efficacy,…

  3. Soil variability in engineering applications

    NASA Astrophysics Data System (ADS)

    Vessia, Giovanna

    2014-05-01

    Natural geomaterials, such as soils and rocks, show spatial variability and heterogeneity of physical and mechanical properties, which can be measured by in-field and laboratory testing. Heterogeneity concerns different values of litho-technical parameters pertaining to similar lithological units placed close to each other. Variability, on the contrary, is inherent to the formation and evolution processes experienced by each geological unit (homogeneous geomaterials on average) and is captured as a spatial structure of fluctuation of physical property values about their mean trend, e.g. the unit weight, the hydraulic permeability, the friction angle, and the cohesion, among others. These spatial variations must be managed by engineering models to accomplish reliable design of structures and infrastructures. Matheron (1962) introduced geostatistics as the most comprehensive tool to manage the spatial correlation of parameter measures used in a wide range of earth science applications. In the field of engineering geology, Vanmarcke (1977) made the first pioneering attempts to describe and manage the inherent variability of geomaterials, although Terzaghi (1943) had already highlighted that spatial fluctuations of the physical and mechanical parameters used in geotechnical design cannot be neglected. A few years later, Mandelbrot (1983) and Turcotte (1986) interpreted the internal arrangement of geomaterials according to fractal theory. In the same years, Vanmarcke (1983) proposed random field theory, providing mathematical tools to deal with the inherent variability of each geological unit or stratigraphic succession that can be treated as one material. In this approach, measurement fluctuations of physical parameters are interpreted through a spatial variability structure consisting of the correlation function and the scale of fluctuation. Fenton and Griffiths (1992) combined random field simulation with the finite element method to produce the Random Finite Element Method (RFEM), which has been used to investigate the random behavior of soils in a variety of classical geotechnical problems. Later studies collected worldwide variability values for many technical parameters of soils (Phoon and Kulhawy 1999a) and their spatial correlation functions (Phoon and Kulhawy 1999b). In Italy, Cherubini et al. (2007) calculated the spatial variability structure of sandy and clayey soils from standard cone penetration test readings. The large extent of the measured spatial variability of soils and rocks heavily affects the reliability of geotechnical design, as do other uncertainties introduced by testing devices and engineering models. Several methods have been proposed to deal with these sources of uncertainty in engineering design models (e.g. the First Order Reliability Method, the Second Order Reliability Method, the Response Surface Method, High Dimensional Model Representation, etc.). Nowadays, efforts in this field focus on (1) measuring the spatial variability of different rocks and soils and (2) developing numerical models that take spatial variability into account as an additional physical variable. References Cherubini C., Vessia G. and Pula W. 2007. Statistical soil characterization of Italian sites for reliability analyses. Proc. 2nd Int. Workshop on Characterization and Engineering Properties of Natural Soils, 3-4: 2681-2706. Griffiths D.V. and Fenton G.A. 1993. Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6): 577-587. Mandelbrot B.B. 1983. The Fractal Geometry of Nature. San Francisco: W.H. Freeman. Matheron G. 1962. Traité de Géostatistique appliquée. Tome 1, Editions Technip, Paris, 334 p. Phoon K.K. and Kulhawy F.H. 1999a. Characterization of geotechnical variability. Can Geotech J, 36(4): 612-624. Phoon K.K. and Kulhawy F.H. 1999b. Evaluation of geotechnical property variability. Can Geotech J, 36(4): 625-639. Terzaghi K. 1943. Theoretical Soil Mechanics. New York: John Wiley and Sons. Turcotte D.L. 1986. Fractals and fragmentation. J Geophys Res, 91: 1921-1926. Vanmarcke E.H. 1977. Probabilistic modeling of soil profiles. J Geotech Eng Div, ASCE, 103: 1227-1246. Vanmarcke E.H. 1983. Random Fields: Analysis and Synthesis. MIT Press, Cambridge.
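
    To illustrate Vanmarcke-style random field modeling of inherent soil variability, the sketch below samples a 1D Gaussian field with an exponential autocorrelation parameterized by the scale of fluctuation θ, via Cholesky factorization of the covariance matrix. The property values and θ are hypothetical.

      import numpy as np

      def random_field_1d(z, mean, std, theta, seed=42):
          # Gaussian field with exponential autocorrelation
          # rho(tau) = exp(-2|tau| / theta); theta = scale of fluctuation
          tau = np.abs(z[:, None] - z[None, :])
          C = std ** 2 * np.exp(-2.0 * tau / theta)
          L = np.linalg.cholesky(C + 1e-10 * np.eye(len(z)))
          return mean + L @ np.random.default_rng(seed).standard_normal(len(z))

      z = np.linspace(0.0, 20.0, 201)   # depth (m)
      phi = random_field_1d(z, mean=30.0, std=3.0, theta=2.0)
      print(phi[:5])   # e.g. friction angle (deg) fluctuating about its trend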

  4. Application of concepts from cross-recurrence analysis in speech production: an overview and comparison with other nonlinear methods.

    PubMed

    Lancia, Leonardo; Fuchs, Susanne; Tiede, Mark

    2014-06-01

    The aim of this article was to introduce an important tool, cross-recurrence analysis, to speech production applications by showing how it can be adapted to evaluate the similarity of multivariate patterns of articulatory motion. The method differs from classical applications of cross-recurrence analysis because no phase space reconstruction is conducted, and a cleaning algorithm removes the artifacts from the recurrence plot. The main features of the proposed approach are robustness to nonstationarity and efficient separation of amplitude variability from temporal variability. The authors tested these claims by applying their method to synthetic stimuli whose variability had been carefully controlled. The proposed method was also demonstrated in a practical application: It was used to investigate the role of biomechanical constraints in articulatory reorganization as a consequence of speeded repetition of CVCV utterances containing a labial and a coronal consonant. Overall, the proposed approach provided more reliable results than other methods, particularly in the presence of high variability. The proposed method is a useful and appropriate tool for quantifying similarity and dissimilarity in patterns of speech articulator movement, especially in such research areas as speech errors and pathologies, where unpredictable divergent behavior is expected.
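
    The core of a cross-recurrence plot without phase-space reconstruction, as described above, is simply a thresholded distance matrix between two multivariate trajectories. A minimal sketch follows (the trajectories, threshold, and channels are invented, and the authors' artifact-cleaning step is omitted):

      import numpy as np

      def cross_recurrence(x, y, eps):
          # CR[i, j] = 1 where the distance between observation x[i]
          # and observation y[j] falls below the threshold eps
          d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)
          return (d < eps).astype(int)

      rng = np.random.default_rng(0)
      t = np.linspace(0, 4 * np.pi, 200)
      # two multivariate articulatory-like trajectories (2 channels each)
      x = np.column_stack([np.sin(t), np.cos(t)]) + 0.05 * rng.normal(size=(200, 2))
      y = np.column_stack([np.sin(t + 0.3), np.cos(t + 0.3)])

      cr = cross_recurrence(x, y, eps=0.3)
      print(cr.shape, cr.mean())   # recurrence rate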

  5. Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.

    PubMed

    Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J

    2009-11-01

    Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
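
    A minimal sketch of the regression-based-norms logic: fit a normative multiple regression on demographic predictors plus estimated IQ, then express a patient's observed score as a z-score relative to the demographically expected score. The normative sample and all numbers are hypothetical.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 300   # hypothetical normative sample
      age = rng.uniform(20, 80, n)
      educ = rng.uniform(8, 20, n)
      iq_est = rng.normal(100, 15, n)   # estimated "premorbid" IQ
      score = 50 - 0.2 * age + 0.8 * educ + 0.1 * iq_est + rng.normal(0, 5, n)

      X = np.column_stack([np.ones(n), age, educ, iq_est])
      beta, *_ = np.linalg.lstsq(X, score, rcond=None)   # normative equation
      resid_sd = np.std(score - X @ beta, ddof=X.shape[1])

      # z-score of a patient's observed score against the expected score
      patient = np.array([1.0, 65.0, 12.0, 110.0])
      expected = patient @ beta
      z = (38.0 - expected) / resid_sd   # observed score of 38
      print(f"expected = {expected:.1f}, z = {z:.2f}")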

  6. Sensitivity analysis and nonlinearity assessment of steam cracking furnace process

    NASA Astrophysics Data System (ADS)

    Rosli, M. N.; Sudibyo, Aziz, N.

    2017-11-01

    In this paper, a sensitivity analysis and nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables, and to identify interactions between parameters. The result of the factorial design is used as a screening step to reduce the number of parameters and, subsequently, the complexity of the model. It shows that four of the six input parameters are significant. After screening, a step test is performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.
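
    For concreteness, main effects in a two-level factorial screening design are just contrasts between the +1 and -1 runs of each factor. The sketch below uses a full 2^3 design on three coded inputs and a toy response; the study used a fractional design over six inputs, which works the same way on a reduced run matrix.

      import numpy as np
      from itertools import product

      # full two-level design on three coded inputs (-1 / +1)
      design = np.array(list(product([-1, 1], repeat=3)))

      def plant(run):
          # toy stand-in response, e.g. a furnace outlet temperature
          afr, feed, fuel = run
          return 850 + 12 * afr + 6 * feed + 1.5 * fuel + 4 * afr * feed

      y = np.array([plant(run) for run in design])
      # main effect of factor i: mean response at +1 minus mean at -1
      effects = [y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean()
                 for i in range(3)]
      print(np.round(effects, 2))   # large effects flag significant inputs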

  7. Fnk Model of Cracking Rate Calculus for a Variable Asymmetry Coefficient

    NASA Astrophysics Data System (ADS)

    Roşca, Vâlcu; Miriţoiu, Cosmin Mihai

    2017-12-01

    In the study of material fracture, a very important parameter is the crack growth rate da/dN. This paper analyzes the crack growth rate comparatively using four mathematical models: (1) the polynomial method, using successive iterations according to the ASTM E647 standard; (2) the Paris formula; (3) the Walker formula; and (4) the NASGRO model, or Forman-Newman-Konig equation, abbreviated as the FNK model, which is used in NASA program studies. For the tests, compact tension (CT) specimens were made from V2A-class stainless steel (grade 10TiNiCr175) and loaded in variable axial-eccentric tension with asymmetry coefficients R = 0.1, 0.3, and 0.5 at 213 K (-60 °C). Variations of the crack growth rate are analyzed according to the above models, especially the FNK method, highlighting the effect of varying the asymmetry coefficient.
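
    Of the four models, the Paris formula is the simplest to sketch: da/dN = C(ΔK)^m with ΔK = YΔσ√(πa). The constants, geometry factor, and loading below are hypothetical; the FNK/NASGRO equation additionally includes an explicit dependence on the asymmetry coefficient R, among other terms.

      import numpy as np

      # Paris law: da/dN = C * dK**m, with dK = Y * dsigma * sqrt(pi * a)
      C, m = 1e-11, 3.0       # hypothetical constants, dK in MPa*sqrt(m)
      Y, dsigma = 1.0, 100.0  # geometry factor and stress range (MPa)

      a = 1e-3                # initial crack length (m)
      for cycle in range(200000):
          dK = Y * dsigma * np.sqrt(np.pi * a)
          a += C * dK ** m    # crack growth this cycle (m)
      print(f"crack length after 200000 cycles: {a * 1e3:.2f} mm")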

  8. One or many? Which and how many parenting variables should be targeted in interventions to reduce children's externalizing behavior?

    PubMed

    Loop, Laurie; Mouton, Bénédicte; Stievenart, Marie; Roskam, Isabelle

    2017-05-01

    This research compared the efficacy of two parenting interventions that vary in the number and nature of targeted variables in reducing preschoolers' externalizing behavior (EB). The goal was to identify which parenting intervention format (one-variable versus two-variable) produced greater behavioral adjustment in children. The first was a one-variable intervention manipulating parental self-efficacy beliefs. The second was a two-variable intervention manipulating both parents' self-efficacy beliefs and emotion coaching practices. The two interventions shared exactly the same design, consisting of eight parent group sessions. Effects on children's EB and observed behaviors were evaluated through a multi-method assessment at three time points (pre-test, post-test, and follow-up). The results highlighted that, compared with the waitlist condition, the two intervention formats tended to produce a significant reduction in children's parent-reported EB. However, the one-variable intervention led to a greater decrease in children's EB at follow-up. The opposite was found for children's observed behavior, which improved to a greater extent in the two-variable intervention at post-test and follow-up. The results illustrate that the intervention formats cannot be considered interchangeable, since their impact on children's behavior modification differs. The results are discussed for their research and clinical implications. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Development of criteria for the use of asphalt-rubber as a Stress-Absorbing Membrane Interlayer (SAMI)

    NASA Astrophysics Data System (ADS)

    Newcomb, D. E.; McKeen, R. G.

    1983-12-01

    This report documents over 2 years of research efforts to characterize asphalt-rubber mixtures to be used in Stress-Absorbing Membrane Interlayers (SAMI). The purpose of these SAMIs is to retard or prevent reflection cracking in asphalt-concrete overlays. Several laboratory experiments and one field trial were conducted to define significant test methods and parameters for incorporation into construction design and specification documents. Test methods used in this study included a modified softening point test, force-ductility, and Schweyer viscosity. Variables investigated included (1) Laboratory-mixing temperature; (2) Rubber type; (3) Laboratory storage time; (4) Laboratory storage condition; (5) Laboratory batch replication; (6) Laboratory mixing time; (7) Field mixing time; (8) Laboratory test temperature; (9) Force-Ductility elongation rates; and (10) Asphalt grade. It was found that mixing temperature, mixing time, rubber type, and asphalt grade all have significant effects upon the behavior of asphalt-rubber mixtures. Significant variability was also noticed in different laboratory batch replications. Varying laboratory test temperature and force-ductility elongation rate revealed further differences in asphalt-rubber mixtures.

  10. Variable selection based near infrared spectroscopy quantitative and qualitative analysis on wheat wet gluten

    NASA Astrophysics Data System (ADS)

    Lü, Chengxu; Jiang, Xunpeng; Zhou, Xingfan; Zhang, Yinqiao; Zhang, Naiqian; Wei, Chongfeng; Mao, Wenhua

    2017-10-01

    Wet gluten is a useful quality indicator for wheat, and short-wave near infrared spectroscopy (NIRS) is a high-performance technique with the advantages of being economical, rapid, and nondestructive. To study the feasibility of short-wave NIRS for analyzing wet gluten directly from wheat seed, 54 representative wheat seed samples were collected and scanned by spectrometer. Eight spectral pretreatment methods and a genetic algorithm (GA) variable selection method were used to optimize the analysis. Both quantitative and qualitative models of wet gluten were built by partial least squares regression and discriminant analysis. For quantitative analysis, normalization was the best pretreatment method; 17 wet-gluten-sensitive variables were selected by the GA, and the GA model performed better than the all-variable model, with R2V = 0.88 and RMSEV = 1.47. For qualitative analysis, automatic weighted least squares baseline correction was the best pretreatment method, and the all-variable models performed better than the GA models. The correct classification rates for the three classes (<24%, 24-30%, and >30% wet gluten content) were 95.45%, 84.52%, and 90.00%, respectively. The short-wave NIRS technique shows potential for both quantitative and qualitative analysis of wet gluten in wheat seed.
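
    A minimal sketch of the quantitative modeling step, assuming scikit-learn's PLSRegression on synthetic spectra. The GA wavelength selection is not reproduced, only the PLS calibration/validation split and the R2V/RMSEV metrics reported above; all data are simulated.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      X = rng.normal(size=(54, 120))          # 54 spectra x 120 wavelengths
      coef = np.zeros(120)
      coef[10:27] = 1.0                       # 17 informative variables
      y = X @ coef + rng.normal(0, 0.5, 54)   # stand-in wet gluten (%)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
      pred = pls.predict(X_te).ravel()
      rmsev = np.sqrt(np.mean((pred - y_te) ** 2))
      r2v = 1 - np.sum((pred - y_te) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
      print(f"R2V = {r2v:.2f}, RMSEV = {rmsev:.2f}")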

  11. Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.

    1998-01-01

    BLISS is a method for optimization of engineering systems by decomposition. It separates the system level optimization, having a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best guess initial design, the method improves that design in iterative cycles, each cycle comprised of two steps. In step one, the system level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual level supersonic business jet design, and a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. Modularity of the method is intended to fit the human organization and map well on the computing technology of concurrent processing.

  12. Inversion of the anomalous diffraction approximation for variable complex index of refraction near unity. [numerical tests for water-haze aerosol model

    NASA Technical Reports Server (NTRS)

    Smith, C. B.

    1982-01-01

    The Fymat analytic inversion method for retrieving a particle-area distribution function from anomalous diffraction multispectral extinction data and total area is generalized to the case of a variable complex refractive index m(lambda) near unity depending on spectral wavelength lambda. Inversion tests are presented for a water-haze aerosol model. An upper-phase shift limit of 5 pi/2 retrieved an accurate peak area distribution profile. Analytical corrections using both the total number and area improved the inversion.

  13. Testing Web Applications with Mutation Analysis

    ERIC Educational Resources Information Center

    Praphamontripong, Upsorn

    2017-01-01

    Web application software uses new technologies that have novel methods for integration and state maintenance that amount to new control flow mechanisms and new variable scoping. While modern web development technologies enhance the capabilities of web applications, they introduce challenges that current testing techniques do not adequately test…

  14. UNDERSTANDING AND ACCOUNTING FOR METHOD VARIABILITY IN WHOLE EFFLUENT TOXICITY APPLICATIONS UNDER THE NPDES PROGRAM

    EPA Science Inventory

    This chapter provides a brief introduction to whole effluent toxicity (WET) testing and describes the regulatory background and context of WET testing. This chapter also describes the purpose of this document and outlines the issues addressed in each chapter.

  15. Quality by design approach for understanding the critical quality attributes of cyclosporine ophthalmic emulsion.

    PubMed

    Rahman, Ziyaur; Xu, Xiaoming; Katragadda, Usha; Krishnaiah, Yellela S R; Yu, Lawrence; Khan, Mansoor A

    2014-03-03

    Restasis is an ophthalmic cyclosporine emulsion used for the treatment of dry eye syndrome. There is no generic version of this product, probably because of the limitations on establishing in vivo bioequivalence and the lack of alternative in vitro bioequivalence testing methods. The present investigation was carried out to understand and identify appropriate in vitro methods that can discriminate the effect of formulation and process variables on the critical quality attributes (CQA) of cyclosporine microemulsion formulations having the same qualitative (Q1) and quantitative (Q2) composition as Restasis. A quality by design (QbD) approach was used. The formulation variables chosen were mixing order method, phase volume ratio, and pH adjustment method, while the process variables were the temperature of primary and raw emulsion formation, microfluidizer pressure, and number of pressure cycles. The responses selected were particle size, turbidity, zeta potential, viscosity, osmolality, surface tension, contact angle, pH, and drug diffusion. The selected independent variables showed a statistically significant (p < 0.05) effect on droplet size, zeta potential, viscosity, turbidity, and osmolality; surface tension, contact angle, pH, and drug diffusion were not significantly affected. In summary, in vitro methods can detect formulation and manufacturing changes and would thus be important for quality control and for establishing sameness of cyclosporine ophthalmic products.

  16. Correlation to FVIII:C in Two Thrombin Generation Tests: TGA-CAT and INNOVANCE ETP.

    PubMed

    Ljungkvist, Marcus; Berndtsson, Maria; Holmström, Margareta; Mikovic, Danijela; Elezovic, Ivo; Antovic, Jovan P; Zetterberg, Eva; Berntorp, Erik

    2017-01-01

    Several thrombin generation tests are available, but few have been directly compared. Our primary aim was to investigate the correlation of two thrombin generation tests, the thrombin generation assay-calibrated automated thrombogram (TGA-CAT) and INNOVANCE ETP, with factor VIII levels (FVIII:C) in a group of patients with hemophilia A. The secondary aim was to investigate inter-laboratory variation for the TGA-CAT method. Blood samples were taken from 45 patients with mild, moderate, and severe hemophilia A. The TGA-CAT method was performed at both centers, while INNOVANCE ETP was performed only at the Stockholm center. Correlation between parameters was evaluated using Spearman's rank correlation test; for determination of TGA-CAT inter-laboratory variability, Bland-Altman plots were used. The correlations of the INNOVANCE ETP and TGA-CAT methods with FVIII:C in persons with hemophilia (PWH) were r = 0.701 and r = 0.734, respectively. The correlation between the two methods was r = 0.546. When dividing the study material into disease severity groups (mild, moderate, and severe) based on FVIII levels, both methods fail to discriminate between them. The variability of the TGA-CAT results performed at the two centers was reduced after normalization; before normalization, 29% of values showed less than ±10% difference, while after normalization this increased to 41%. Both methods correlate in an equal manner with FVIII:C in PWH but show a poor correlation with each other. The level of agreement for the TGA-CAT method was poor, though slightly improved after normalization of data. Further improvement of the standardization of these methods is warranted.
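
    The two analyses named above are standard: Spearman rank correlation of each assay against FVIII:C, and Bland-Altman bias and limits of agreement between assays. A minimal sketch on synthetic values (all numbers hypothetical):

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(5)
      fviii = rng.uniform(1, 40, 45)                     # FVIII:C levels
      etp_a = 500 + 20 * fviii + rng.normal(0, 80, 45)   # assay A readout
      etp_b = 480 + 22 * fviii + rng.normal(0, 120, 45)  # assay B readout

      rho, p = spearmanr(etp_a, fviii)
      print(f"Spearman r = {rho:.3f} (p = {p:.3g})")

      # Bland-Altman: bias and 95% limits of agreement between the assays
      diff = etp_a - etp_b
      bias, sd = diff.mean(), diff.std(ddof=1)
      print(f"bias = {bias:.1f}, LoA = [{bias - 1.96 * sd:.1f}, "
            f"{bias + 1.96 * sd:.1f}]")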

  17. Impact of global financial crisis on precious metals returns: An application of ARCH and GARCH methods

    NASA Astrophysics Data System (ADS)

    Ismail, Mohd Tahir; Abdullah, Nurul Ain; Abdul Karim, Samsul Ariffin

    2013-04-01

    This paper focuses on the resilience of precious metal returns in the face of the global financial crisis and provides guidance for investors before making investment decisions on precious metals. The returns of four precious metals (gold, silver, bronze, and platinum) are the variables selected in this study. All variables are transformed to natural logarithms (ln). Daily data over the period 2 January 1995 to 30 December 2011 are used. Unit root tests, namely the Augmented Dickey-Fuller (ADF) and Kwiatkowski-Phillips-Schmidt-Shin (KPSS) tests, were employed to determine the stationarity of the variables. Autoregressive Conditional Heteroscedasticity (ARCH) and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) methods were applied to measure the impact of the global financial crisis on precious metal returns. The results show that investing in platinum is less risky than investing in the other precious metals because it was not influenced by the crisis period.
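
    A minimal sketch of fitting a GARCH(1,1) model to log returns, assuming the third-party arch package is available. The synthetic price series stands in for the metal prices, and the study's crisis-period comparison is not reproduced here.

      import numpy as np
      import pandas as pd
      from arch import arch_model   # assumes the third-party arch package

      rng = np.random.default_rng(9)
      prices = pd.Series(np.exp(np.cumsum(rng.normal(0, 0.01, 1500))))
      returns = 100 * np.log(prices).diff().dropna()   # log returns (%)

      # GARCH(1,1): sigma_t^2 = omega + alpha*e_{t-1}^2 + beta*sigma_{t-1}^2
      res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
      print(res.params)   # omega, alpha[1], beta[1]: volatility clustering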

  18. Multilocus Association Mapping Using Variable-Length Markov Chains

    PubMed Central

    Browning, Sharon R.

    2006-01-01

    I propose a new method for association-based gene mapping that makes powerful use of multilocus data, is computationally efficient, and is straightforward to apply over large genomic regions. The approach is based on the fitting of variable-length Markov chain models, which automatically adapt to the degree of linkage disequilibrium (LD) between markers to create a parsimonious model for the LD structure. Edges of the fitted graph are tested for association with trait status. This approach can be thought of as haplotype testing with sophisticated windowing that accounts for extent of LD to reduce degrees of freedom and number of tests while maximizing information. I present analyses of two published data sets that show that this approach can have better power than single-marker tests or sliding-window haplotypic tests. PMID:16685642

  20. Sex Bias in Research Design.

    ERIC Educational Resources Information Center

    Grady, Kathleen E.

    1981-01-01

    Presents feminist criticisms of selected aspects of research methods in psychology. Reviews data relevant to sex bias in topic selection, subject selection and single-sex designs, operationalization of variables, testing for sex differences, and interpretation of results. Suggestions for achieving more "sex fair" research methods are discussed.…

  1. Studies of the Variables Affecting Behavior of Larval Zebrafish for Developmental Neurotoxicity Testing

    EPA Science Inventory

    The U.S. Environmental Protection Agency is evaluating methods to screen and prioritize large numbers of chemicals for developmental toxicity. We are exploring methods to detect developmentally neurotoxic chemicals using zebrafish behavior at 6 days of age. The behavioral paradig...

  2. Guidelines for the Investigation of Mediating Variables in Business Research

    PubMed Central

    Coxe, Stefany; Baraldi, Amanda N.

    2013-01-01

    Business theories often specify the mediating mechanisms by which a predictor variable affects an outcome variable. In the last 30 years, investigations of mediating processes have become more widespread with corresponding developments in statistical methods to conduct these tests. The purpose of this article is to provide guidelines for mediation studies by focusing on decisions made prior to the research study that affect the clarity of conclusions from a mediation study, the statistical models for mediation analysis, and methods to improve interpretation of mediation results after the research study. Throughout this article, the importance of a program of experimental and observational research for investigating mediating mechanisms is emphasized. PMID:25237213
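
    The classic product-of-coefficients test of mediation can be sketched in a few lines: regress M on X (path a), Y on M and X (path b), and test the indirect effect ab with the Sobel standard error. Bootstrap confidence intervals are generally preferred in modern practice; the data here are simulated.

      import numpy as np
      from scipy.stats import norm

      def ols(X, y):
          # OLS coefficients and standard errors (intercept prepended)
          X = np.column_stack([np.ones(len(y)), X])
          beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
          sigma2 = rss[0] / (len(y) - X.shape[1])
          se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
          return beta, se

      rng = np.random.default_rng(11)
      n = 400
      x = rng.normal(size=n)                       # predictor
      m = 0.5 * x + rng.normal(size=n)             # mediator
      y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome

      beta_a, se_a = ols(x, m)                         # path a: X -> M
      beta_b, se_b = ols(np.column_stack([m, x]), y)   # path b: M -> Y | X
      a, sa, b, sb = beta_a[1], se_a[1], beta_b[1], se_b[1]

      ab = a * b                                       # indirect effect
      z = ab / np.sqrt(a**2 * sb**2 + b**2 * sa**2)    # Sobel test
      print(f"ab = {ab:.3f}, z = {z:.2f}, p = {2 * norm.sf(abs(z)):.3g}")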

  3. Identifying elderly people at risk for cognitive decline by using the 2-step test.

    PubMed

    Maruya, Kohei; Fujita, Hiroaki; Arai, Tomoyuki; Hosoi, Toshiki; Ogiwara, Kennichi; Moriyama, Shunnichiro; Ishibashi, Hideaki

    2018-01-01

    [Purpose] The purpose is to verify the effectiveness of the 2-step test in predicting cognitive decline in elderly individuals. [Subjects and Methods] One hundred eighty-two participants aged over 65 years underwent the 2-step test, cognitive function tests, and higher-level competence testing. Participants were classified as Robust, <1.3, or <1.1 using the locomotive syndrome risk-stage criteria for the 2-step test, and variables were compared between groups. In addition, ordered logistic analysis was used with cognitive functions as independent variables across the three groups, the 2-step test result as the dependent variable, and age, gender, etc. as adjustment factors. [Results] In the crude data, the <1.3 and <1.1 groups were older and displayed lower motor and cognitive functions than the Robust group. Furthermore, the <1.3 group exhibited significantly lower memory retention than the Robust group. The 2-step test was related to the Stroop test (β: 0.06, 95% confidence interval: 0.01-0.12). [Conclusion] The finding is that the risk stage of the 2-step test is related to cognitive function, even at an initial risk stage. The 2-step test may help with earlier detection and implementation of prevention measures for locomotive syndrome and mild cognitive impairment.

  4. Repeatability of riparian vegetation sampling methods: how useful are these techniques for broad-scale, long-term monitoring?

    Treesearch

    Marc C. Coles-Ritchie; Richard C. Henderson; Eric K. Archer; Caroline Kennedy; Jeffrey L. Kershner

    2004-01-01

    Tests were conducted to evaluate variability among observers for riparian vegetation data collection methods and data reduction techniques. The methods are used as part of a large-scale monitoring program designed to detect changes in riparian resource conditions on Federal lands. Methods were evaluated using agreement matrices, the Bray-Curtis dissimilarity metric, the...

  5. Insights from analysis for harmful and potentially harmful constituents (HPHCs) in tobacco products.

    PubMed

    Oldham, Michael J; DeSoi, Darren J; Rimmer, Lonnie T; Wagner, Karl A; Morton, Michael J

    2014-10-01

    A total of 20 commercial cigarette and 16 commercial smokeless tobacco products were assayed for 96 compounds listed as harmful and potentially harmful constituents (HPHCs) by the US Food and Drug Administration. For each product, a single lot was used for all testing. Both International Organization for Standardization and Health Canada smoking regimens were used for cigarette testing. For those HPHCs detected, measured levels were consistent with levels reported in the literature; however, substantial assay variability (measured as average relative standard deviation) was found for most results. Using an abbreviated list of HPHCs, statistically significant differences for most of these HPHCs occurred when results were obtained 4-6 months apart (i.e., temporal variability). The assay variability and temporal variability demonstrate the need for standardized analytical methods with defined repeatability and reproducibility for each HPHC using certified reference standards. Temporal variability also means that simple conventional comparisons, such as two-sample t-tests, are inappropriate for comparing products tested at different points in time from the same laboratory or from different laboratories. Until capable laboratories use standardized assays with established repeatability, reproducibility, and certified reference standards, the resulting HPHC data will be unreliable for product comparisons or other decision making in regulatory science. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. A benchmarking method to measure dietary absorption efficiency of chemicals by fish.

    PubMed

    Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew

    2013-12-01

    Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. © 2013 SETAC.

  7. SU-F-I-80: Correction for Bias in a Channelized Hotelling Model Observer Caused by Temporally Variable Non-Stationary Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Favazza, C; Fetterly, K

    2016-06-15

    Purpose: Application of a channelized Hotelling model observer (CHO) over a wide range of x-ray angiography detector target dose (DTD) levels demonstrated substantial bias for conditions yielding low detectability indices (d’), including low DTD and small test objects. The purpose of this work was to develop theory and methods to correct this bias. Methods: A hypothesis was developed wherein the measured detectability index (d’b) for a known test object is positively biased by temporally variable non-stationary noise in the images. Hotelling’s T2 test statistic provided the foundation for a mathematical theory which accounts for independent contributions to the measured d’b value from both the test object (d’o) and non-stationary noise (d’ns). Experimental methods were developed to directly estimate d’o by determining d’ns and subtracting it from d’b, in accordance with the theory. Specifically, d’ns was determined from two sets of images from which the traditional test object was withheld. This method was applied to angiography images with DTD levels in the range 0 to 240 nGy and for disk-shaped iodine-based contrast targets with diameters of 0.5 to 4.0 mm. Results: Bias in d’ was evidenced by d’b values which exceeded those expected from a quantum-limited imaging system as object size and DTD decreased. d’ns increased with decreasing DTD, reaching a maximum of 2.6 for DTD = 0. Bias-corrected d’o estimates demonstrated sub-quantum-limited performance of the x-ray angiography system at low DTD. Findings demonstrated that the source of the non-stationary noise was detector electronic readout noise. Conclusion: Theory and methods to estimate and correct bias in CHO measurements caused by temporally variable non-stationary noise were presented. The temporally non-stationary noise was shown to be due to electronic readout noise. This method facilitates accurate estimates of d’ values over a large range of object sizes and detector target doses.

  8. Socio-Economic Factors Influencing on Total Fertility Rate in Iran: A Panel Data Analysis for the Period of 2002–2012

    PubMed Central

    Jafari, Hasan; Jaafaripooyan, Ebrahim; Vedadhir, Abou Ali; Foroushani, Abbas Rahimi; Ahadinejad, Bahman; Pourreza, Abolghasem

    2016-01-01

    Introduction: Over the last few decades, the total fertility rate (TFR) has followed a downward trend in Iran, a trend whose consequences some regard as negative. Considering the macro-population policies of recent years, this study aimed to examine the effect of several macro socio-economic variables, including the divorce, marriage, urbanization, and unemployment rates, on TFR in Iran from 2002 to 2012. Methods: This time series research was conducted in 2015 using the databases of the National Organization for Civil Registration (NOCR) and the Statistical Center of Iran. The study population was the provincial data for the selected variables. The main methods used were the common unit root test, the Pedroni cointegration test, redundant fixed effects tests, the correlated random effects (Hausman) test, and panel least squares with fixed effects. In order to determine the suitable model for estimating the panel data, likelihood ratio and Hausman tests were run in Eviews, and the fixed effects regression model was chosen as the dominant model. Results: The results indicated that the divorce rate had a negative and significant effect on TFR (p < 0.05). A positive and significant relationship was also observed between the marriage rate and TFR (p < 0.05). The urbanization rate (p = 0.24) and unemployment rate (p = 0.36) had no significant relationship with TFR. According to the F statistic, the overall model was also significant (p < 0.001). Conclusion: Given the modest effect of the studied factors on the reduction of TFR, it seems that variables other than those studied, such as cultural factors and values, might be the fundamental drivers of this change in the country. PMID:27504172

  9. multiUQ: An intrusive uncertainty quantification tool for gas-liquid multiphase flows

    NASA Astrophysics Data System (ADS)

    Turnquist, Brian; Owkes, Mark

    2017-11-01

    Uncertainty quantification (UQ) can improve our understanding of the sensitivity of gas-liquid multiphase flows to variability in inflow conditions and fluid properties, creating a valuable tool for engineers. While non-intrusive UQ methods (e.g., Monte Carlo) are simple and robust, the cost associated with these techniques can render them unrealistic. In contrast, intrusive UQ techniques modify the governing equations by replacing deterministic variables with stochastic variables, adding complexity but making UQ cost effective. Our numerical framework, called multiUQ, introduces an intrusive UQ approach for gas-liquid flows, leveraging a polynomial chaos expansion of the stochastic variables: density, momentum, pressure, viscosity, and surface tension. The gas-liquid interface is captured using a conservative level set approach, including a modified reinitialization equation which is robust and quadrature free. A least-squares method is leveraged to compute the stochastic interface normal and curvature needed in the continuum surface force method for surface tension. The solver is tested by applying uncertainty to one or two variables and verifying results against the Monte Carlo approach. NSF Grant #1511325.
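
    The intrusive machinery of multiUQ is beyond a snippet, but the underlying polynomial chaos idea can be sketched non-intrusively: project a response of one standard-normal uncertain input onto probabilists' Hermite polynomials via Gauss-Hermite quadrature, then read the mean and variance off the coefficients. The response function and expansion order here are invented for illustration.

      import numpy as np
      from math import factorial
      from numpy.polynomial import hermite_e as He

      def pce_coefficients(f, order, n_quad=40):
          # c_k = E[f(X) He_k(X)] / k!, X ~ N(0,1), by Gauss-Hermite quadrature
          x, w = He.hermegauss(n_quad)      # nodes/weights for exp(-x^2/2)
          w = w / np.sqrt(2.0 * np.pi)      # normalize to the N(0,1) density
          return np.array([
              np.sum(w * f(x) * He.hermeval(x, np.eye(order + 1)[k]))
              / factorial(k) for k in range(order + 1)])

      # toy response of one uncertain input, e.g. a perturbed surface tension
      g = lambda xi: np.sin(0.07 * (1.0 + 0.05 * xi))
      c = pce_coefficients(g, order=4)
      mean = c[0]
      variance = sum(factorial(k) * c[k] ** 2 for k in range(1, 5))
      print(mean, variance)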

  10. Comparative In vivo, Ex vivo, and In vitro Toxicity Studies of Engineered Nanomaterials

    EPA Science Inventory

    Efforts to reduce the number of animals in engineered nanomaterials (ENM) toxicity testing have resulted in the development of numerous alternative toxicity testing methods, but in vivo and in vitro results are still evolving and variable. This inconsistency could be due to the f...

  11. A SUGGESTED METHOD FOR PRE-SCHOOL IDENTIFICATION OF POTENTIAL READING DISABILITY.

    ERIC Educational Resources Information Center

    NEWTON, KENNETH R.; AND OTHERS

    The relationships between prereading measures of visual-motor-perceptual skills and reading achievement were studied. Subjects were 172 first graders. Pretests and post-tests for word recognition, motor coordination, and visual perception were administered. Fourteen variables were tested. Results indicated that form-copying was more effective than…

  12. Robust Confidence Interval for a Ratio of Standard Deviations

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  13. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
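
    For the simplest integrated case, a two-sample t-test on a continuous variable, the normal-approximation formula is n per group ≈ 2(z_{1-α/2} + z_{power})² / d², with d the standardized mean difference. A short sketch:

      from math import ceil
      from scipy.stats import norm

      def n_per_group(d, alpha=0.05, power=0.80):
          # normal-approximation n per group for a two-sample t-test
          # detecting a standardized mean difference d
          z_a = norm.ppf(1 - alpha / 2)
          z_b = norm.ppf(power)
          return ceil(2 * (z_a + z_b) ** 2 / d ** 2)

      print(n_per_group(0.5))   # medium effect: about 63 per group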

  14. Evaluation of a method of estimating low-flow frequencies from base-flow measurements at Indiana streams

    USGS Publications Warehouse

    Wilson, John Thomas

    2000-01-01

    A mathematical technique of estimating low-flow frequencies from base-flow measurements was evaluated by using data for streams in Indiana. Low-flow frequencies at low-flow partial-record stations were estimated by relating base-flow measurements to concurrent daily flows at nearby streamflow-gaging stations (index stations) for which low-flow-frequency curves had been developed. A network of long-term streamflow-gaging stations in Indiana provided a sample of sites with observed low-flow frequencies. Observed values of 7-day, 10-year low flow and 7-day, 2-year low flow were compared to predicted values to evaluate the accuracy of the method. Five test cases were used to evaluate the method under a variety of conditions in which the location of the index station and its drainage area varied relative to the partial-record station. A total of 141 pairs of streamflow-gaging stations were used in the five test cases. Four of the test cases used one index station; the fifth test case used two index stations. The number of base-flow measurements was varied for each test case to see if the accuracy of the method was affected by the number of measurements used. The most accurate and least variable results were produced when two index stations on the same stream as, or on tributaries of, the partial-record station were used. All but one value of the predicted 7-day, 10-year low flow were within 15 percent of the values observed for the long-term continuous record, and all of the predicted values of the 7-day, 2-year low flow were within 15 percent of the observed values. This apparent accuracy, to some extent, may be a result of the small sample set of 15. Of the four test cases that used one index station, the most accurate and least variable results were produced in the test case where the index station and partial-record station were on the same stream or on streams tributary to each other and where the index station had a larger drainage area than the partial-record station. In that test case, the method tended to overpredict, based on the median relative error. In 23 of 28 test pairs, the predicted 7-day, 10-year low flow was within 15 percent of the observed value; in 26 of 28 test pairs, the predicted 7-day, 2-year low flow was within 15 percent of the observed value. When the index station and partial-record station were on the same stream or streams tributary to each other and the index station had a smaller drainage area than the partial-record station, the method tended to underpredict the low-flow frequencies. Nineteen of 28 predicted values of the 7-day, 10-year low flow were within 15 percent of the observed values. Twenty-five of 28 predicted values of the 7-day, 2-year low flow were within 15 percent of the observed values. When the index station and the partial-record station were on different streams, the method tended to underpredict regardless of whether the index station had a larger or smaller drainage area than that of the partial-record station. Also, the variability of the relative error of estimate was greatest for the test cases that used index stations and partial-record stations from different streams. This variability, in part, may be caused by using more streamflow-gaging stations with small low-flow frequencies in these test cases. A small difference in the predicted and observed values can equate to a large relative error when dealing with stations that have small low-flow frequencies. In the test cases that used one index station, the method tended to predict smaller low-flow frequencies as the number of base-flow measurements was reduced from 20 to 5. Overall, the average relative error of estimate and the variability of the predicted values increased as the number of base-flow measurements was reduced.
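
    A minimal sketch of the transfer step described above, using hypothetical flow values and a simple ordinary-least-squares fit in log space standing in for the report's index-station relation:

        import numpy as np

        # Hypothetical data: base-flow measurements at the partial-record
        # station paired with concurrent daily mean flows at the index
        # station (both in cubic feet per second).
        partial = np.array([12.0, 8.5, 15.2, 6.9, 10.3, 7.8, 9.4, 11.1])
        index = np.array([55.0, 40.2, 70.9, 33.5, 48.7, 36.1, 44.0, 51.3])

        # Fit log(partial) = b0 + b1 * log(index).
        b1, b0 = np.polyfit(np.log(index), np.log(partial), 1)

        # Transfer the index station's 7-day, 10-year low flow (7Q10,
        # hypothetical value) through the fitted relation.
        index_7q10 = 25.0
        est_7q10 = np.exp(b0 + b1 * np.log(index_7q10))
        print(f"estimated 7Q10 at the partial-record station: {est_7q10:.1f} cfs")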

  15. Quantifying human disturbance in watersheds: Variable selection and performance of a GIS-based disturbance index for predicting the biological condition of perennial streams

    USGS Publications Warehouse

    Falcone, James A.; Carlisle, Daren M.; Weber, Lisa C.

    2010-01-01

    Characterizing the relative severity of human disturbance in watersheds is often part of stream assessments and is frequently done with the aid of Geographic Information System (GIS)-derived data. However, the choice of variables and how they are used to quantify disturbance are often subjective. In this study, we developed a number of disturbance indices by testing sets of variables, scoring methods, and weightings of 33 potential disturbance factors derived from readily available GIS data. The indices were calibrated using 770 watersheds located in the western United States for which the severity of disturbance had previously been classified from detailed local data by the United States Environmental Protection Agency (USEPA) Environmental Monitoring and Assessment Program (EMAP). The indices were calibrated by determining which variable or variable combinations and aggregation method best differentiated between least- and most-disturbed sites. Indices composed of several variables performed better than any individual variable, and best results came from a threshold method of scoring using six uncorrelated variables: housing unit density, road density, pesticide application, dam storage, land cover along a mainstem buffer, and distance to nearest canal/pipeline. The final index was validated with 192 withheld watersheds and correctly classified about two-thirds (68%) of least- and most-disturbed sites. These results provide information about the potential for using a disturbance index as a screening tool for a priori ranking of watersheds at a regional/national scale, and which landscape variables and methods of combination may be most helpful in doing so.
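
    A minimal sketch of the threshold method of scoring with the six selected variables; the thresholds and watershed values below are invented for illustration, since the calibrated values are not given in the abstract:

        import numpy as np

        # Hypothetical attributes (rows = watersheds) for the six variables
        # named in the study; thresholds are illustrative only.
        names = ["housing_density", "road_density", "pesticide_appl",
                 "dam_storage", "pct_disturbed_buffer", "canal_proximity"]
        X = np.array([[12.0, 1.8, 0.4, 30.0, 15.0, 0.8],
                      [95.0, 4.2, 2.1, 400.0, 60.0, 0.1]])
        thresholds = np.array([50.0, 2.0, 1.0, 100.0, 25.0, 0.5])

        # Threshold scoring: 1 point per variable exceeding its threshold.
        # For canal proximity, *smaller* distances indicate more disturbance,
        # so that comparison is reversed.
        scores = (X > thresholds).astype(int)
        scores[:, 5] = (X[:, 5] < thresholds[5]).astype(int)
        index = scores.sum(axis=1)
        print(index)  # higher totals = more disturbed watersheds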

  16. Reinforcement learning state estimator.

    PubMed

    Morimoto, Jun; Doya, Kenji

    2007-03-01

    In this study, we propose a novel use of reinforcement learning for estimating hidden variables and parameters of nonlinear dynamical systems. A critical issue in hidden-state estimation is that we cannot directly observe estimation errors. However, by defining errors of observable variables as a delayed penalty, we can apply a reinforcement learning framework to state estimation problems. Specifically, we derive a method to construct a nonlinear state estimator by finding an appropriate feedback input gain using the policy gradient method. We tested the proposed method on single pendulum dynamics and showed that the joint angle variable could be successfully estimated by observing only the angular velocity, and vice versa. In addition, we show that we could acquire a state estimator for the pendulum swing-up task in which a swing-up controller is also acquired by reinforcement learning simultaneously. Furthermore, we demonstrate that it is possible to estimate the dynamics of the pendulum itself while the hidden variables are estimated in the pendulum swing-up task. Application of the proposed method to a two-linked biped model is also presented.

  17. Accelerated/abbreviated test methods of the low-cost silicon solar array project. Study 4, task 3: Encapsulation

    NASA Technical Reports Server (NTRS)

    Kolyer, J. M.; Mann, N. R.

    1977-01-01

    Methods of accelerated and abbreviated testing were developed and applied to solar cell encapsulants. These encapsulants must provide protection for as long as 20 years outdoors at different locations within the United States. Consequently, encapsulants were exposed for increasing periods of time to the inherent climatic variables of temperature, humidity, and solar flux. Property changes in the encapsulants were observed. The goal was to predict long term behavior of encapsulants based upon experimental data obtained over relatively short test periods.

  18. CRMS vegetation analytical team framework: Methods for collection, development, and use of vegetation response variables

    USGS Publications Warehouse

    Cretini, Kari F.; Visser, Jenneke M.; Krauss, Ken W.; Steyer, Gregory D.

    2011-01-01

    This document identifies the main objectives of the Coastwide Reference Monitoring System (CRMS) vegetation analytical team, which are to provide (1) collection and development methods for vegetation response variables and (2) the ways in which these response variables will be used to evaluate restoration project effectiveness. The vegetation parameters (that is, response variables) collected in CRMS and other coastal restoration projects funded under the Coastal Wetlands Planning, Protection and Restoration Act (CWPPRA) are identified, and the field collection methods for these parameters are summarized. Existing knowledge on community and plant responses to changes in environmental drivers (for example, flooding and salinity), drawn from published literature and from the CRMS and CWPPRA monitoring dataset, is used to develop a suite of indices to assess wetland condition in coastal Louisiana. Two indices, the floristic quality index (FQI) and a productivity index, are described for herbaceous and forested vegetation. The FQI for herbaceous vegetation is tested with a long-term dataset from a CWPPRA marsh creation project. Example graphics for this index are provided and discussed. The other indices, an FQI for forest vegetation (that is, trees and shrubs) and productivity indices for herbaceous and forest vegetation, are proposed but not tested. New response variables may be added or current response variables removed as data become available and as our understanding of restoration success indicators develops. Once the indices are fully developed, each will be used by the vegetation analytical team to assess and evaluate CRMS/CWPPRA project and program effectiveness. The team plans to summarize its results in the form of written reports and/or graphics and present these items to CRMS Federal and State sponsors, restoration project managers, landowners, and other data users for their input.
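
    The abstract does not give the FQI formula; the sketch below uses the classic formulation (mean coefficient of conservatism times the square root of species richness) with hypothetical scores, which the CRMS index may modify:

        import math

        # Hypothetical coefficients of conservatism (C, scored 0-10)
        # assigned to the species recorded in one vegetation plot.
        c_values = [7, 4, 9, 3, 6, 8]

        mean_c = sum(c_values) / len(c_values)
        fqi = mean_c * math.sqrt(len(c_values))  # FQI = mean C * sqrt(richness)
        print(round(fqi, 2))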

  19. Quantifying Effects of Pharmacological Blockers of Cardiac Autonomous Control Using Variability Parameters.

    PubMed

    Miyabara, Renata; Berg, Karsten; Kraemer, Jan F; Baltatu, Ovidiu C; Wessel, Niels; Campos, Luciana A

    2017-01-01

    Objective: The aim of this study was to identify the most sensitive heart rate and blood pressure variability (HRV and BPV) parameters from a given set of well-known methods for the quantification of cardiovascular autonomic function after several autonomic blockades. Methods: Cardiovascular sympathetic and parasympathetic functions were studied in freely moving rats following peripheral muscarinic (methylatropine), β1-adrenergic (metoprolol), muscarinic + β1-adrenergic, α1-adrenergic (prazosin), and ganglionic (hexamethonium) blockades. Time domain, frequency domain, and symbolic dynamics measures for each of HRV and BPV were classified through the paired Wilcoxon test for all autonomic drugs separately. In order to select those variables that have a high relevance to, and stable influence on, our target measurements (HRV, BPV), we used Fisher's method to combine the p-values of multiple tests. Results: This analysis led to the following best set of cardiovascular variability parameters: the mean normal beat-to-beat interval/value (HRV/BPV: meanNN), the coefficient of variation (cvNN = standard deviation over meanNN), and the root mean square of successive differences (RMSSD) from the time domain analysis. In frequency domain analysis, the very-low-frequency (VLF) component was selected. From symbolic dynamics, the Shannon entropy of the word distribution (FWSHANNON) as well as POLVAR3, the nonlinear parameter to detect intermittently decreased variability, showed the best ability to discriminate between the different autonomic blockades. Conclusion: Through a complex comparative analysis of HRV and BPV measures altered by a set of autonomic drugs, we identified the most sensitive set of informative cardiovascular variability indexes able to pick up the modifications imposed by the autonomic challenges. These indexes may help to increase our understanding of cardiovascular sympathetic and parasympathetic functions in translational studies of experimental diseases.
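
    A minimal sketch of the selected time-domain parameters and the Fisher combination step, using hypothetical NN intervals and hypothetical per-experiment p-values:

        import numpy as np
        from scipy import stats

        # Hypothetical beat-to-beat (NN) intervals in milliseconds.
        nn = np.array([810, 795, 820, 805, 790, 815, 800, 812], float)

        mean_nn = nn.mean()                           # meanNN
        cv_nn = nn.std(ddof=1) / mean_nn              # cvNN = SD / meanNN
        rmssd = np.sqrt(np.mean(np.diff(nn) ** 2))    # RMSSD

        # Fisher's method: combine the p-values of one parameter across
        # the separate blockade experiments (hypothetical values shown).
        p_values = [0.04, 0.01, 0.20, 0.03]
        chi2 = -2.0 * np.sum(np.log(p_values))
        p_combined = stats.chi2.sf(chi2, df=2 * len(p_values))
        print(mean_nn, cv_nn, rmssd, p_combined)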

  20. Studies of the Variables Affecting Behavior of Larval Zebrafish for Developmental Neurotoxicity Testing*

    EPA Science Inventory

    The U.S. Environmental Protection Agency is evaluating methods to screen and prioritize large numbers of chemicals for developmental toxicity. We are exploring methods to screen for developmentally neurotoxic chemicals using zebrafish behavior at 6 days of age. The behavioral par...

  1. Evaluation of Two Types of Differential Item Functioning in Factor Mixture Models with Binary Outcomes

    ERIC Educational Resources Information Center

    Lee, HwaYoung; Beretvas, S. Natasha

    2014-01-01

    Conventional differential item functioning (DIF) detection methods (e.g., the Mantel-Haenszel test) can be used to detect DIF only across observed groups, such as gender or ethnicity. However, research has found that DIF is not typically fully explained by an observed variable. True sources of DIF may include unobserved, latent variables, such as…

  2. Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.

    ERIC Educational Resources Information Center

    Vidal, Sherry

    Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the squared differences between observed and predicted values of a dependent variable as a function of the independent variables. From a computer printout…
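
    A short demonstration of the subsumption claim: a two-sample t-test reproduced as a regression on a group dummy (simulated scores; the p-values match, and the t statistics agree up to sign):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        a = rng.normal(10.0, 2.0, 30)   # group A scores (simulated)
        b = rng.normal(11.5, 2.0, 30)   # group B scores (simulated)

        # Classic two-sample t-test (equal variances assumed).
        t, p = stats.ttest_ind(a, b)

        # The same test as a regression: y = b0 + b1 * group_dummy.
        y = np.concatenate([a, b])
        x = np.concatenate([np.zeros(30), np.ones(30)])
        slope, intercept, r, p_reg, se = stats.linregress(x, y)

        print(t, p)               # t-test results
        print(slope / se, p_reg)  # same |t| and identical p; sign follows dummy coding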

  3. Multivariate modelling and personality organization: a comparative study of the Defense Mechanism Test and linguistic expressions.

    PubMed

    Sundbom, E; Jeanneau, M

    1996-03-01

    The main aim of the study is to establish an empirical connection between perceptual defences as measured by the Defense Mechanism Test (DMT)--a projective percept-genetic method--and manifest linguistic expressions based on word pattern analyses. The subjects were 25 psychiatric patients with the diagnoses neurotic personality organization (NPO), borderline personality organization (BPO) and psychotic personality organization (PPO) in accordance with Kernberg's theory. A set of 130 DMT variables and 40 linguistic variables were analyzed by means of partial least squares (PLS) discriminant analysis separately and then pooled together. The overall hypothesis was that it would be possible to define the personality organization of the patients in terms of an amalgam of perceptual defences and word patterns, and that these two kinds of data would confirm each other. The result of the combined PLS analysis revealed a very good separation between the diagnostic groups as measured by the pooled variable sets. Among other things, it was shown that NPO patients are principally characterized by linguistic variables, whereas BPO and PPO patients are better defined by perceptual defences as measured by the DMT method.

  4. Examining the Missing Completely at Random Mechanism in Incomplete Data Sets: A Multiple Testing Approach

    ERIC Educational Resources Information Center

    Raykov, Tenko; Lichtenberg, Peter A.; Paulson, Daniel

    2012-01-01

    A multiple testing procedure for examining implications of the missing completely at random (MCAR) mechanism in incomplete data sets is discussed. The approach uses the false discovery rate concept and is concerned with testing group differences on a set of variables. The method can be used for ascertaining violations of MCAR and disproving this…
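
    A minimal sketch of the idea, assuming a simple missingness indicator and the Benjamini-Hochberg procedure as the false-discovery-rate step (simulated data; the paper's exact testing scheme may differ):

        import numpy as np
        from scipy import stats
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))        # six observed variables
        miss = rng.random(200) < 0.25        # indicator: case has missing data

        # Test each variable for mean differences between complete and
        # incomplete cases; under MCAR none should differ systematically.
        pvals = [stats.ttest_ind(X[miss, j], X[~miss, j]).pvalue for j in range(6)]

        # Benjamini-Hochberg false discovery rate control across the tests.
        reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
        print(reject, p_adj)  # any rejection casts doubt on MCAR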

  5. Evidence of an application of a variable MEMS capacitive sensor for detecting shunt occlusions

    NASA Astrophysics Data System (ADS)

    Apigo, David J.; Bartholomew, Philip L.; Russell, Thomas; Kanwal, Alokik; Farrow, Reginald C.; Thomas, Gordon A.

    2017-04-01

    A sensor was tested subdurally and in vitro, simulating a supine infant with a ventricular-peritoneal shunt and controlled occlusions. The variable MEMS capacitive device is able to detect and forecast blockages, similar to early detection procedures in cancer patients. For example, with gradual occlusion development over a year, the method forecasts danger more than one month ahead of blockage. The method also distinguishes between ventricular and peritoneal occlusions. Because the sensor provides quantitative data on the dynamics of the cerebrospinal fluid, it can help test new therapies and work toward understanding hydrocephalus as well as idiopathic normal pressure hydrocephalus. The sensor appears to be a substantial advance in the care of brain injuries treated with shunts and has the potential to make a significant impact in a clinical setting.

  6. Quantifying and Testing Indirect Effects in Simple Mediation Models when the Constituent Paths Are Nonlinear

    ERIC Educational Resources Information Center

    Hayes, Andrew F.; Preacher, Kristopher J.

    2010-01-01

    Most treatments of indirect effects and mediation in the statistical methods literature and the corresponding methods used by behavioral scientists have assumed linear relationships between variables in the causal system. Here we describe and extend a method first introduced by Stolzenberg (1980) for estimating indirect effects in models of…
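
    A small worked example of the idea being extended: when the X-to-M and M-to-Y paths are nonlinear, the instantaneous indirect effect at a point is the product of the two path derivatives evaluated there. The fitted equations and coefficients below are hypothetical:

        import sympy as sp

        # Hypothetical fitted equations with nonlinear paths:
        #   M = a0 + a1*ln(X),  Y = b0 + b1*M + b2*M**2
        x, m = sp.symbols("x m")
        a0, a1, b0, b1, b2 = 2.0, 0.8, 1.0, 0.5, -0.1

        M = a0 + a1 * sp.log(x)
        Y = b0 + b1 * m + b2 * m ** 2

        # Instantaneous indirect effect of X on Y through M at x = x0:
        # theta(x0) = (dM/dx) * (dY/dM), evaluated on the fitted curves.
        x0 = 3.0
        theta = sp.diff(M, x) * sp.diff(Y, m)
        theta_at_x0 = theta.subs({x: x0, m: M.subs(x, x0)})
        print(float(theta_at_x0))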

  7. Clinical experimental stress studies: methods and assessment.

    PubMed

    Bali, Anjana; Jaggi, Amteshwar Singh

    2015-01-01

    Stress is a state of threatened homeostasis during which a variety of adaptive processes are activated to produce physiological and behavioral changes. Stress induction methods are pivotal for understanding these physiological or pathophysiological changes in the body in response to stress. Furthermore, these methods are also important for the development of novel pharmacological agents for stress management. The well-described methods to induce stress in humans include the cold pressor test, Trier Social Stress Test, Montreal Imaging Stress Task, Maastricht Acute Stress Test, CO2 challenge test, Stroop test, Paced Auditory Serial Addition Task, noise stress, and Mannheim Multicomponent Stress Test. Stress assessment in humans is done by measuring biochemical markers such as cortisol, cortisol awakening response, dexamethasone suppression test, salivary α-amylase, plasma/urinary norepinephrine, norepinephrine spillover rate, and interleukins. Physiological and behavioral changes such as galvanic skin response, heart rate variability, pupil size, and muscle and/or skin sympathetic nerve activity (microneurography) and cardiovascular parameters such as heart rate, blood pressure, and self-reported anxiety are also monitored to assess stress response. This present review describes these commonly employed methods to induce stress in humans along with stress assessment methods.

  8. Estimating integrated variance in the presence of microstructure noise using linear regression

    NASA Astrophysics Data System (ADS)

    Holý, Vladimír

    2017-07-01

    Using financial high-frequency data for estimation of the integrated variance of asset prices is beneficial, but with an increasing number of observations, so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimation of the integrated variance that is robust to microstructure noise, as well as a test for the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
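
    A minimal sketch of the regression idea under an iid-noise assumption, where E[RV] ≈ IV + 2ω²n, so the intercept of RV regressed on n estimates the integrated variance (simulated prices; the authors' estimator may differ in detail):

        import numpy as np

        rng = np.random.default_rng(42)
        N = 23400                           # one trading day of 1-second prices
        iv = 1e-4                           # true integrated variance
        eff = np.cumsum(rng.normal(0, np.sqrt(iv / N), N))  # efficient log-price
        noise = rng.normal(0, 1e-3, N)      # iid microstructure noise
        p = eff + noise                     # observed log-price

        # Realized variance on subsamples of increasing sparsity.
        ns, rvs = [], []
        for step in [1, 2, 5, 10, 30, 60, 120, 300]:
            r = np.diff(p[::step])
            ns.append(len(r))
            rvs.append(np.sum(r ** 2))

        # E[RV] ~= IV + 2*omega^2*n, so regress RV on n.
        slope, intercept = np.polyfit(ns, rvs, 1)
        print(intercept)    # estimate of the integrated variance
        print(slope / 2.0)  # estimate of the noise variance omega^2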

  9. Enhancement of hepatitis virus immunoassay outcome predictions in imbalanced routine pathology data by data balancing and feature selection before the application of support vector machines.

    PubMed

    Richardson, Alice M; Lidbury, Brett A

    2017-08-14

    Data mining techniques such as support vector machines (SVMs) have been successfully used to predict outcomes for complex problems, including for human health. Much health data is imbalanced, with many more controls than positive cases. The impact of three balancing methods and one feature selection method is explored, to assess the ability of SVMs to classify imbalanced diagnostic pathology data associated with the laboratory diagnosis of hepatitis B (HBV) and hepatitis C (HCV) infections. Random forests (RFs) for predictor variable selection, and data reshaping to overcome a large imbalance of negative to positive test results in relation to HBV and HCV immunoassay results, are examined. The methodology is illustrated using data from ACT Pathology (Canberra, Australia), consisting of laboratory test records from 18,625 individuals who underwent hepatitis virus testing over the decade from 1997 to 2007. Overall, the prediction of HCV test results by immunoassay was more accurate than for HBV immunoassay results associated with identical routine pathology predictor variable data. HBV and HCV negative results were vastly in excess of positive results, so three approaches to handling the negative/positive data imbalance were compared. Generating datasets by the Synthetic Minority Oversampling Technique (SMOTE) resulted in significantly more accurate prediction than single downsizing or multiple downsizing (MDS) of the dataset. For downsized data sets, applying a RF for predictor variable selection had a small effect on the performance, which varied depending on the virus. For SMOTE, a RF had a negative effect on performance. An analysis of variance of the performance across settings supports these findings. Finally, age and assay results for alanine aminotransferase (ALT), sodium for HBV and urea for HCV were found to have a significant impact upon laboratory diagnosis of HBV or HCV infection using an optimised SVM model. Laboratories looking to include machine learning via SVM as part of their decision support need to be aware that the balancing method, predictor variable selection and the virus type interact to affect the laboratory diagnosis of hepatitis virus infection with routine pathology laboratory variables in different ways depending on which combination is being studied. This awareness should lead to careful use of existing machine learning methods, thus improving the quality of laboratory diagnosis.
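
    A minimal sketch of the SMOTE-then-SVM pipeline on synthetic imbalanced data, using the imbalanced-learn implementation of SMOTE (a stand-in for the authors' code and data):

        from imblearn.over_sampling import SMOTE
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.datasets import make_classification
        from sklearn.metrics import classification_report

        # Synthetic stand-in for imbalanced pathology data (about 95% negatives).
        X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95],
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        # Oversample the minority class in the training data only.
        X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

        clf = SVC(kernel="rbf").fit(X_bal, y_bal)
        print(classification_report(y_te, clf.predict(X_te)))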

  10. Development of toughened epoxy polymers for high performance composite and ablative applications

    NASA Technical Reports Server (NTRS)

    Allen, V. R.

    1982-01-01

    A survey of current procedures for the assessment of state of cure in epoxy polymers and for the evaluation of polymer toughness as related to nature of the crosslinking agent was made to facilitate a cause-effect study of the chemical modification of epoxy polymers. Various conformations of sample morphology were examined to identify testing variables and to establish optimum conditions for the selected physical test methods. Dynamic viscoelasticity testing was examined in conjunction with chemical analyses to allow observation of the extent of the curing reaction with size of the crosslinking agent the primary variable. Specifically the aims of the project were twofold: (1) to consider the experimental variables associated with development of "extent of cure" analysis, and (2) to assess methodology of fracture energy determination and to prescribe a meaningful and reproducible procedure. The following is separated into two categories for ease of presentation.

  11. Field and laboratory arsenic speciation methods and their application to natural-water analysis

    USGS Publications Warehouse

    Bednar, A.J.; Garbarino, J.R.; Burkhardt, M.R.; Ranville, J.F.; Wildeman, T.R.

    2004-01-01

    The toxic and carcinogenic properties of inorganic and organic arsenic species make their determination in natural water vitally important. Determination of individual inorganic and organic arsenic species is critical because the toxicology, mobility, and adsorptivity vary substantially. Several methods for the speciation of arsenic in groundwater, surface-water, and acid mine drainage sample matrices using field and laboratory techniques are presented. The methods provide quantitative determination of arsenite [As(III)], arsenate [As(V)], monomethylarsonate (MMA), dimethylarsinate (DMA), and roxarsone in 2-8 min at detection limits of less than 1 µg arsenic per liter (µg As L-1). All the methods use anion exchange chromatography to separate the arsenic species and inductively coupled plasma-mass spectrometry as an arsenic-specific detector. Different methods were needed because some sample matrices did not have all arsenic species present or were incompatible with particular high-performance liquid chromatography (HPLC) mobile phases. The bias and variability of the methods were evaluated using total arsenic, As(III), As(V), DMA, and MMA results from more than 100 surface-water, groundwater, and acid mine drainage samples, and reference materials. Concentrations in test samples were as much as 13,000 µg As L-1 for As(III) and 3,700 µg As L-1 for As(V). Methylated arsenic species were less than 100 µg As L-1 and were found only in certain surface-water samples, and roxarsone was not detected in any of the water samples tested. The distribution of inorganic arsenic species in the test samples ranged from 0% to 90% As(III). Laboratory-speciation method variability for As(III), As(V), MMA, and DMA in reagent water at 0.5 µg As L-1 was 8-13% (n=7). Field-speciation method variability for As(III) and As(V) at 1 µg As L-1 in reagent water was 3-4% (n=3). © 2003 Elsevier Ltd. All rights reserved.

  12. Moles: Tool-Assisted Environment Isolation with Closures

    NASA Astrophysics Data System (ADS)

    de Halleux, Jonathan; Tillmann, Nikolai

    Isolating test cases from environment dependencies is often desirable, as it increases test reliability and reduces test execution time. However, code that calls non-virtual methods or consumes sealed classes is often impossible to test in isolation. Moles is a new lightweight framework which addresses this problem. For any .NET method, Moles allows test-code to provide alternative implementations, given as .NET delegates, for which C# provides very concise syntax while capturing local variables in a closure object. Using code instrumentation, the Moles framework will redirect calls to provided delegates instead of the original methods. The Moles framework is designed to work together with the dynamic symbolic execution tool Pex to enable automated test generation. In a case study, testing code programmed against the Microsoft SharePoint Foundation API, we achieved full code coverage while running tests in isolation without an actual SharePoint server. The Moles framework integrates with .NET and Visual Studio.

  13. Experimental studies of braking of elastic tired wheel under variable normal load

    NASA Astrophysics Data System (ADS)

    Fedotov, A. I.; Zedgenizov, V. G.; Ovchinnikova, N. I.

    2017-10-01

    The paper analyzes the braking of a vehicle wheel subjected to disturbances of normal load variations. Experimental tests and methods for developing test modes as sinusoidal force disturbances of the normal wheel load were used, as were measuring methods for digital and analogue signals. Stabilization of vehicle wheel braking subjected to disturbances of normal load variations is a topical issue. The paper suggests a method for analyzing wheel braking processes under disturbances of normal load variations. A method to control wheel braking processes subjected to disturbances of normal load variations was also developed.

  14. Hardware-in-the-loop grid simulator system and method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, John Curtiss; Collins, Edward Randolph; Rigas, Nikolaos

    A hardware-in-the-loop (HIL) electrical grid simulation system and method that combines a reactive divider with a variable frequency converter to better mimic and control expected and unexpected parameters in an electrical grid. The invention provides grid simulation in a manner that allows improved testing of variable power generators, such as wind turbines, and their operation once interconnected with an electrical grid in multiple countries. The system further comprises an improved variable fault reactance (reactive divider) capable of providing a variable fault reactance power output to control a voltage profile, thereby creating an arbitrary recovery voltage. The system further comprises an improved isolation transformer designed to isolate zero-sequence current from either a primary or secondary winding in a transformer or to pass the zero-sequence current from a primary to a secondary winding.

  15. Resonance: The science behind the art of sonic drilling

    NASA Astrophysics Data System (ADS)

    Lucon, Peter Andrew

    The research presented in this dissertation quantifies the system dynamics and the influence of control variables of a sonic drill system. The investigation began with an initial body of work funded by the Department of Energy under a Small Business Innovative Research Phase I grant (grant number DE-FG02-06ER84618) to investigate the feasibility of using sonic drills to drill micro-well holes to depths of 1,500 feet. The Department of Energy funding enabled feasibility testing using a 750 hp sonic drill owned by Jeffery Barrow, owner of Water Development Co. During the initial feasibility testing, data were measured and recorded at the sonic drill head while the sonic drill penetrated to a depth of 120 feet. To demonstrate feasibility, the system had to be well understood to show that testing of a larger sonic drill could simulate the results of drilling a micro-well hole of 2.5 inch diameter. A first-order model of the system was developed that produced counterintuitive findings supporting the feasibility of using this method to drill deeper and produce micro-well holes to 1,500 feet using sonic drills. Although funding was not continued, the project work continued, expanding on the sonic drill models through the governing differential equation and its boundary value problem, finite difference methods, and finite element methods to determine the significance of the control variables that can affect the sonic drill. Using a design-of-experiments approach and commercially available software, the significance of the variables to the effectiveness of the drill system was determined. From the significant variables, as well as the real-world testing, a control system schematic for a sonic drill was derived and is patent pending. The control system includes sensors, actuators, programmable logic controllers, and a human-machine interface. It was determined that the control system should control the resonant mode and the weight on the bit as the two primary control variables. The sonic drill can also be controlled using feedback from sensors mounted on the sonic drill head, which is the driver for the sonic drill located above ground.

  16. Objective assessment of motor fatigue in multiple sclerosis using kinematic gait analysis: a pilot study

    PubMed Central

    2011-01-01

    Background Fatigue is a frequent and serious symptom in patients with Multiple Sclerosis (MS). However, to date there are only a few methods for the objective assessment of fatigue. The aim of this study was to develop a method for the objective assessment of motor fatigue using kinematic gait analysis based on treadmill walking and an infrared-guided system. Patients and methods Fourteen patients with clinically definite MS participated in this study. Fatigue was defined according to the Fatigue Scale for Motor and Cognition (FSMC). Patients underwent a physical exertion test involving walking at their pre-determined patient-specific preferred walking speed until they reached complete exhaustion. Gait was recorded using a video camera and a three-line-scanning camera system with 11 infrared sensors. Step length, width and height, maximum circumduction with the right and left leg, maximum knee flexion angle of the right and left leg, and trunk sway were measured and compared using paired t-tests (α = 0.005). In addition, variability in these parameters during one-minute intervals was examined. The fatigue index was defined as the number of significant mean and SD changes from the beginning to the end of the exertion test relative to the total number of gait kinematic parameters. Results Clearly, for some patients the mean gait parameters were more affected than the variability of their movements, while other patients had smaller differences in mean gait parameters with greater increases in variability. Finally, for other patients gait changes with physical exertion manifested both in changes in mean gait parameters and in altered variability. The variability and fatigue indices correlated significantly with the motoric but not with the cognitive dimension of the FSMC score (R = -0.602 and R = -0.592, respectively; P < 0.026). Conclusions Changes in gait patterns following a physical exertion test in patients with MS suffering from motor fatigue can be measured objectively. These changes in gait patterns can be described using the motor fatigue index and represent an objective measure to assess motor fatigue in MS patients. The results of this study have important implications for the assessment and treatment evaluation of fatigue in MS. PMID:22029427
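
    A minimal sketch of the fatigue index computation, restricted to mean changes for brevity (the study also counts SD changes), on simulated gait parameters:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        n_params = 20                              # gait parameters measured
        start = rng.normal(0, 1, (14, n_params))   # first-minute values, 14 patients
        end = start + rng.normal(0.1, 1, (14, n_params))  # post-exertion values

        # Count parameters whose mean changes significantly from start to end
        # of the exertion test (paired t-tests, alpha = 0.005 as in the study).
        p = np.array([stats.ttest_rel(start[:, j], end[:, j]).pvalue
                      for j in range(n_params)])
        fatigue_index = (p < 0.005).sum() / n_params
        print(fatigue_index)  # fraction of gait parameters that changed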

  17. Evaluation of a standardized procedure for microscopic cell counts in body fluids.

    PubMed

    Emerson, Jane F; Emerson, Scott S

    2005-01-01

    A standardized urinalysis and manual microscopic cell counting system was evaluated for its potential to reduce intra- and interoperator variability in urine and cerebrospinal fluid (CSF) cell counts. Replicate aliquots of pooled specimens were submitted blindly to technologists who were instructed to use either the Kova system with the disposable Glasstic slide (Hycor Biomedical, Inc., Garden Grove, CA) or the standard operating procedure of the University of California-Irvine (UCI), which uses plain glass slides for urine sediments and hemacytometers for CSF. The Hycor system provides a mechanical means of obtaining a fixed volume of fluid in which to resuspend the sediment, and fixes the volume of specimen to be microscopically examined by using capillary filling of a chamber containing in-plane counting grids. Ninety aliquots of pooled specimens of each type of body fluid were used to assess the inter- and intraoperator reproducibility of the measurements. The variability of replicate Hycor measurements made on a single specimen by the same or different observers was compared with that predicted by a Poisson distribution. The Hycor methods generally resulted in test statistics that were slightly lower than those obtained with the laboratory standard methods, indicating a trend toward decreasing the effects of various sources of variability. For 15 paired aliquots of each body fluid, tests for systematically higher or lower measurements with the Hycor methods were performed using the Wilcoxon signed-rank test. Also examined was the average difference between the Hycor and current laboratory standard measurements, along with a 95% confidence interval (CI) for the true average difference. Without increasing labor or the requirement for attention to detail, the Hycor method provides slightly better interrater comparisons than the current method used at UCI. Copyright 2005 Wiley-Liss, Inc.

  18. Application of Temperature Sensitivities During Iterative Strain-Gage Balance Calibration Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    A new method is discussed that may be used to correct wind tunnel strain-gage balance load predictions for the influence of residual temperature effects at the location of the strain-gages. The method was designed for the iterative analysis technique that is used in the aerospace testing community to predict balance loads from strain-gage outputs during a wind tunnel test. The new method implicitly applies temperature corrections to the gage outputs during the load iteration process. Therefore, it can use uncorrected gage outputs directly as input for the load calculations. The new method is applied in several steps. First, balance calibration data is analyzed in the usual manner assuming that the balance temperature was kept constant during the calibration. Then, the temperature difference relative to the calibration temperature is introduced as a new independent variable for each strain-gage output. Therefore, sensors must exist near the strain-gages so that the required temperature differences can be measured during the wind tunnel test. In addition, the format of the regression coefficient matrix needs to be extended so that it can support the new independent variables. In the next step, the extended regression coefficient matrix of the original calibration data is modified by using the manufacturer-specified temperature sensitivity of each strain-gage as the regression coefficient of the corresponding temperature difference variable. Finally, the modified regression coefficient matrix is converted to a data reduction matrix that the iterative analysis technique needs for the calculation of balance loads. Original calibration data and modified check load data of NASA's MC60D balance are used to illustrate the new method.

  19. SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.

    PubMed

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru

    2014-01-01

    Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM functions well only on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ to increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.
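
    A minimal sketch of SVM-RFE feature selection with a linear SVM on a stand-in multiclass dataset; an ordinary cross-validated grid search over C and γ would play the role of the paper's Taguchi parameter optimization:

        from sklearn.datasets import load_iris
        from sklearn.feature_selection import RFE
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        # A linear SVM ranks features by |weight|; RFE removes the weakest
        # feature each round until the requested number remains.
        X, y = load_iris(return_X_y=True)    # stand-in multiclass dataset
        svm = SVC(kernel="linear", C=1.0)
        rfe = RFE(estimator=svm, n_features_to_select=2).fit(X, y)
        print(rfe.ranking_)                  # rank 1 = kept features

        # Accuracy with the selected subset.
        acc = cross_val_score(SVC(kernel="rbf"), X[:, rfe.support_], y, cv=5).mean()
        print(acc)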

  1. Computerized test versus personal interview as admission methods for graduate nursing studies: A retrospective cohort study.

    PubMed

    Hazut, Koren; Romem, Pnina; Malkin, Smadar; Livshiz-Riven, Ilana

    2016-12-01

    The purpose of this study was to compare the predictive validity, economic efficiency, and faculty staff satisfaction of a computerized test versus a personal interview as admission methods for graduate nursing studies. A mixed method study was designed, including cross-sectional and retrospective cohorts, interviews, and cost analysis. One hundred and thirty-four students in the Master of Nursing program participated. The success of students in required core courses was similar in both admission method groups. The personal interview method was found to be a significant predictor of success, with cognitive variables the only significant contributors to the model. Higher satisfaction levels were reported with the computerized test compared with the personal interview method. The cost of the personal interview method, in annual hourly work, was 2.28 times higher than the computerized test. These findings may promote discussion regarding the cost benefit of the personal interview as an admission method for advanced academic studies in healthcare professions. © 2016 John Wiley & Sons Australia, Ltd.

  2. Development of a Probabilistic Component Mode Synthesis Method for the Analysis of Non-Deterministic Substructures

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1995-01-01

    Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems in these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. This paper presents a method where the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented that illustrates the theory.

  3. Evaluation of Criterion Validity for Scales with Congeneric Measures

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2007-01-01

    A method for estimating criterion validity of scales with homogeneous components is outlined. It accomplishes point and interval estimation of interrelationship indices between composite scores and criterion variables and is useful for testing hypotheses about criterion validity of measurement instruments. The method can also be used with missing…

  4. Precise time series photometry for the Kepler-2.0 mission

    NASA Astrophysics Data System (ADS)

    Aigrain, S.; Hodgkin, S. T.; Irwin, M. J.; Lewis, J. R.; Roberts, S. J.

    2015-03-01

    The recently approved NASA K2 mission has the potential to multiply by an order of magnitude the number of short-period transiting planets found by Kepler around bright and low-mass stars, and to revolutionize our understanding of stellar variability in open clusters. However, the data processing is made more challenging by the reduced pointing accuracy of the satellite, which has only two functioning reaction wheels. We present a new method to extract precise light curves from K2 data, combining list-driven, soft-edged aperture photometry with a star-by-star correction of systematic effects associated with the drift in the roll angle of the satellite about its boresight. The systematics are modelled simultaneously with the stars' intrinsic variability using a semiparametric Gaussian process model. We test this method on a week of data collected during an engineering test in 2014 January, perform checks to verify that our method does not alter intrinsic variability signals, and compute the precision as a function of magnitude on long-cadence (30 min) and planetary transit (2.5 h) time-scales. In both cases, we reach photometric precisions close to the precision reached during the nominal Kepler mission for stars fainter than 12th magnitude, and between 40 and 80 parts per million for brighter stars. These results confirm the bright prospects for planet detection and characterization, asteroseismology and stellar variability studies with K2. Finally, we perform a basic transit search on the light curves, detecting two bona fide transit-like events, seven detached eclipsing binaries and 13 classical variables.

  5. Comparison of body composition, heart rate variability, aerobic and anaerobic performance between competitive cyclists and triathletes

    PubMed Central

    Arslan, Erşan; Aras, Dicle

    2016-01-01

    [Purpose] The aim of this study was to compare the body composition, heart rate variability, and aerobic and anaerobic performance between competitive cyclists and triathletes. [Subjects] Six cyclists and eight triathletes with experience in competitions voluntarily participated in this study. [Methods] The subjects’ body composition was measured with an anthropometric tape and skinfold caliper. Maximal oxygen consumption and maximum heart rate were determined using the incremental treadmill test. Heart rate variability was measured by 7 min electrocardiographic recording. The Wingate test was conducted to determine anaerobic physical performance. [Results] There were significant differences in minimum power and relative minimum power between the triathletes and cyclists. Anthropometric characteristics and heart rate variability responses were similar among the triathletes and cyclists. However, triathletes had higher maximal oxygen consumption and lower resting heart rates. This study demonstrated that athletes in both sports have similar body composition and aerobic performance characteristics. PMID:27190476

  6. Prioritizing individual genetic variants after kernel machine testing using variable selection.

    PubMed

    He, Qianchuan; Cai, Tianxi; Liu, Yang; Zhao, Ni; Harmon, Quaker E; Almli, Lynn M; Binder, Elisabeth B; Engel, Stephanie M; Ressler, Kerry J; Conneely, Karen N; Lin, Xihong; Wu, Michael C

    2016-12-01

    Kernel machine learning methods, such as the SNP-set kernel association test (SKAT), have been widely used to test associations between traits and genetic polymorphisms. In contrast to traditional single-SNP analysis methods, these methods are designed to examine the joint effect of a set of related SNPs (such as a group of SNPs within a gene or a pathway) and are able to identify sets of SNPs that are associated with the trait of interest. However, as with many multi-SNP testing approaches, kernel machine testing can draw conclusions only at the SNP-set level and does not directly indicate which SNP(s) in an identified set actually drive the associations. A recently proposed procedure, KerNel Iterative Feature Extraction (KNIFE), provides a general framework for incorporating variable selection into kernel machine methods. In this article, we focus on quantitative traits and relatively common SNPs, adapt the KNIFE procedure to genetic association studies, and propose an approach to identify driver SNPs after the application of SKAT to gene set analysis. Our approach accommodates several kernels that are widely used in SNP analysis, such as the linear kernel and the Identity by State (IBS) kernel. The proposed approach provides practically useful utilities to prioritize SNPs and fills the gap between SNP-set analysis and biological functional studies. Both simulation studies and real data application are used to demonstrate the proposed approach. © 2016 WILEY PERIODICALS, INC.

  7. Reproducibility of the exponential rise technique of CO2 rebreathing for measuring PvCO2 and CvCO2 to non-invasively estimate cardiac output during incremental, maximal treadmill exercise.

    PubMed

    Cade, W Todd; Nabar, Sharmila R; Keyser, Randall E

    2004-05-01

    The purpose of this study was to determine the reproducibility of the indirect Fick method for the measurement of mixed venous carbon dioxide partial pressure (PvCO2) and venous carbon dioxide content (CvCO2) for estimation of cardiac output (Qc), using the exponential rise method of carbon dioxide rebreathing, during non-steady-state treadmill exercise. Ten healthy participants (eight female and two male) performed three incremental, maximal exercise treadmill tests to exhaustion within 1 week. Non-invasive Qc measurements were evaluated at rest, during each 3-min stage, and at peak exercise, across three identical treadmill tests, using the exponential rise technique for measuring mixed venous PCO2 and CCO2 and estimating the venous-arterial carbon dioxide content difference (Cv-aCO2). Measurements were divided into measured or estimated variables [heart rate (HR), oxygen consumption (VO2), volume of expired carbon dioxide (VCO2), end-tidal carbon dioxide (PETCO2), arterial carbon dioxide partial pressure (PaCO2), venous carbon dioxide partial pressure (PvCO2), and Cv-aCO2] and cardiorespiratory variables derived from the measured variables [Qc, stroke volume (Vs), and arteriovenous oxygen difference (Ca-vO2)]. In general, the derived cardiorespiratory variables demonstrated acceptable (R=0.61) to high (R>0.80) reproducibility, especially at higher intensities and peak exercise. Measured variables, excluding PaCO2 and Cv-aCO2, also demonstrated acceptable (R=0.6 to 0.79) to high reliability. The current study demonstrated acceptable to high reproducibility of the exponential rise indirect Fick method in measurement of mixed venous PCO2 and CCO2 for estimation of Qc during incremental treadmill exercise testing, especially at high-intensity and peak exercise.
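
    The underlying arithmetic is the indirect Fick principle for CO2, Qc = VCO2 / (CvCO2 - CaCO2); a worked example with hypothetical values:

        # Indirect Fick principle for CO2: Qc = VCO2 / (CvCO2 - CaCO2).
        # Hypothetical values; contents in mL CO2 per 100 mL (dL) of blood.
        vco2 = 250.0     # CO2 output, mL/min
        cv_co2 = 52.0    # mixed venous CO2 content, mL/dL
        ca_co2 = 48.0    # arterial CO2 content, mL/dL

        qc_l_min = vco2 / ((cv_co2 - ca_co2) * 10.0)  # *10 converts per-dL to per-L
        print(qc_l_min)  # about 6.25 L/min cardiac output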

  8. Predictive validity of pre-admission assessments on medical student performance

    PubMed Central

    Dabaliz, Al-Awwab; Kaadan, Samy; Dabbagh, M. Marwan; Barakat, Abdulaziz; Shareef, Mohammad Abrar; Al-Tannir, Mohamad; Obeidat, Akef

    2017-01-01

    Objectives To examine the predictive validity of pre-admission variables on students’ performance in a medical school in Saudi Arabia.  Methods In this retrospective study, we collected admission and college performance data for 737 students in preclinical and clinical years. Data included high school scores and other standardized test scores, such as those of the National Achievement Test and the General Aptitude Test. Additionally, we included the scores of the Test of English as a Foreign Language (TOEFL) and the International English Language Testing System (IELTS) exams. Those datasets were then compared with college performance indicators, namely the cumulative Grade Point Average (cGPA) and progress test, using multivariate linear regression analysis. Results In preclinical years, both the National Achievement Test (p=0.04, B=0.08) and TOEFL (p=0.017, B=0.01) scores were positive predictors of cGPA, whereas the General Aptitude Test (p=0.048, B=-0.05) negatively predicted cGPA. Moreover, none of the pre-admission variables were predictive of progress test performance in the same group. On the other hand, none of the pre-admission variables were predictive of cGPA in clinical years. Overall, cGPA strongly predicted students’ progress test performance (p<0.001 and B=19.02). Conclusions Only the National Achievement Test and TOEFL significantly predicted performance in preclinical years. However, these variables do not predict progress test performance, meaning that they do not predict the functional knowledge reflected in the progress test. We report various strengths and deficiencies in the current medical college admission criteria, and call for employing more sensitive and valid ones that predict student performance and functional knowledge, especially in the clinical years. PMID:29176032

  9. Techniques used for the screening of hemoglobin levels in blood donors: current insights and future directions.

    PubMed

    Chaudhary, Rajendra; Dubey, Anju; Sonker, Atul

    2017-01-01

    Blood donor hemoglobin (Hb) estimation is an important donation test that is performed prior to blood donation. It serves the dual purpose of protecting the donors' health against anemia and ensuring good quality of blood components, which has an implication on recipients' health. Diverse cutoff criteria have been defined world over depending on population characteristics; however, no testing methodology and sample requirement have been specified for Hb screening. Besides the technique, there are several physiological and methodological factors that affect accuracy and reliability of Hb estimation. These include the anatomical source of blood sample, posture of the donor, timing of sample and several other biological factors. Qualitative copper sulfate gravimetric method has been the archaic time-tested method that is still used in resource-constrained settings. Portable hemoglobinometers are modern quantitative devices that have been further modified to reagent-free cuvettes. Furthermore, noninvasive spectrophotometry was introduced, mitigating pain to blood donor and eliminating risk of infection. Notwithstanding a tremendous evolution in terms of ease of operation, accuracy, mobility, rapidity and cost, a component of inherent variability persists, which may partly be attributed to pre-analytical variables. Hence, blood centers should pay due attention to validation of test methodology, competency of operating staff and regular proficiency testing of the outputs. In this article, we have reviewed various regulatory guidelines, described the variables that affect the measurements and compared the validated technologies for Hb screening of blood donors along with enumeration of their merits and limitations.

  10. A practical approach to Sasang constitutional diagnosis using vocal features

    PubMed Central

    2013-01-01

    Background Sasang constitutional medicine (SCM) is a type of tailored medicine that divides human beings into four Sasang constitutional (SC) types. Diagnosis of SC types is crucial to proper treatment in SCM. Voice characteristics have been used as an essential clue for diagnosing SC types. In the past, many studies tried to extract quantitative vocal features to make diagnosis models; however, these studies were flawed by limited data collected from one or a few sites, long recording time, and low accuracy. We propose a practical diagnosis model having only a few variables, which decreases model complexity. This in turn, makes our model appropriate for clinical applications. Methods A total of 2,341 participants’ voice recordings were used in making a SC classification model and to test the generalization ability of the model. Although the voice data consisted of five vowels and two repeated sentences per participant, we used only the sentence part for our study. A total of 21 features were extracted, and an advanced feature selection method—the least absolute shrinkage and selection operator (LASSO)—was applied to reduce the number of variables for classifier learning. A SC classification model was developed using multinomial logistic regression via LASSO. Results We compared the proposed classification model to the previous study, which used both sentences and five vowels from the same patient’s group. The classification accuracies for the test set were 47.9% and 40.4% for male and female, respectively. Our result showed that the proposed method was superior to the previous study in that it required shorter voice recordings, is more applicable to practical use, and had better generalization performance. Conclusions We proposed a practical SC classification method and showed that our model having fewer variables outperformed the model having many variables in the generalization test. We attempted to reduce the number of variables in two ways: 1) the initial number of candidate features was decreased by considering shorter voice recording, and 2) LASSO was introduced for reducing model complexity. The proposed method is suitable for an actual clinical environment. Moreover, we expect it to yield more stable results because of the model’s simplicity. PMID:24200041
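
    A minimal sketch of L1-penalized (LASSO) multinomial logistic regression in the spirit of the proposed model, on synthetic stand-in data for the 21 vocal features and four SC types (the real data are not public):

        from sklearn.linear_model import LogisticRegression
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.datasets import make_classification

        # Synthetic stand-in: 21 features, 4 classes.
        X, y = make_classification(n_samples=500, n_features=21, n_informative=8,
                                   n_classes=4, random_state=0)

        # L1-penalized multinomial logistic regression: the LASSO penalty
        # drives coefficients of uninformative features to exactly zero.
        model = make_pipeline(
            StandardScaler(),
            LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000),
        )
        model.fit(X, y)
        n_used = (model[-1].coef_ != 0).any(axis=0).sum()
        print(f"features retained: {n_used} of 21")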

  11. Dynamic and Transient Performance of Turbofan/Turboshaft Convertible Engine With Variable Inlet Guide Vanes

    NASA Technical Reports Server (NTRS)

    McArdle, Jack G.; Barth, Richard L.; Wenzel, Leon M.; Biesiadny, Thomas J.

    1996-01-01

    A convertible engine called the CEST TF34, using the variable inlet guide vane method of power change, was tested on an outdoor stand at the NASA Lewis Research Center with a waterbrake dynamometer for the shaft load. A new digital electronic system, in conjunction with a modified standard TF34 hydromechanical fuel control, kept engine operation stable and safely within limits. All planned testing was completed successfully. Steady-state performance and acoustic characteristics were reported previously and are referenced. This report presents results of transient and dynamic tests. The transient tests measured engine response to several rapid changes in thrust and torque commands at constant fan (shaft) speed. Limited results from dynamic tests using the pseudorandom binary noise technique are also presented. Performance of the waterbrake dynamometer is discussed in an appendix.

  12. Laboratory test for ice adhesion strength using commercial instrumentation.

    PubMed

    Wang, Chenyu; Zhang, Wei; Siva, Adarsh; Tiea, Daniel; Wynne, Kenneth J

    2014-01-21

    A laboratory test method for evaluating ice adhesion has been developed employing a commercially available instrument normally used for dynamic mechanical analysis (TA RSA-III). This is the first laboratory ice adhesion test that does not require a custom-built apparatus. The upper grip range of ∼10 mm is an enabling feature that is essential for the test. The method involves removal of an ice cylinder from a polymer coating with a probe and the determination of peak removal force (Ps). To validate the test method, the strength of ice adhesion was determined for a prototypical glassy polymer, poly(methyl methacrylate). The distance of the probe from the PMMA surface has been identified as a critical variable for Ps. The new test provides a readily available platform for investigating fundamental surface characteristics affecting ice adhesion. In addition to the ice release test, PMMA coatings were characterized using DSC, DCA, and TM-AFM.

  13. Mechanical Impact Testing: A Statistical Measurement

    NASA Technical Reports Server (NTRS)

    Engel, Carl D.; Herald, Stephen D.; Davis, S. Eddie

    2005-01-01

    In the decades since the 1950s, when NASA first developed mechanical impact testing of materials, researchers have continued efforts to gain a better understanding of the chemical, mechanical, and thermodynamic nature of the phenomenon. The impact mechanism is a real combustion ignition mechanism that must be understood in the design of an oxygen system. The use of test data from this test method has been questioned due to the lack of a clear method for applying the data and the variability found between tests, material batches, and facilities. This effort explores a large database that has accumulated over a number of years and characterizes its overall nature. Moreover, testing was performed to determine the statistical nature of the test procedure to help establish sample-size guidelines for material characterization. The current method of determining a pass/fail criterion, based on light emission, sound report, or material charring, is questioned.

  14. The Need for Speed in Rodent Locomotion Analyses

    PubMed Central

    Batka, Richard J.; Brown, Todd J.; Mcmillan, Kathryn P.; Meadows, Rena M.; Jones, Kathryn J.; Haulcomb, Melissa M.

    2016-01-01

    Locomotion analysis is now widely used across many animal species to understand the motor defects in disease, functional recovery following neural injury, and the effectiveness of various treatments. More recently, rodent locomotion analysis has become an increasingly popular method in a diverse range of research. Speed is an inseparable aspect of locomotion that is still not fully understood, and its effects are often not properly incorporated while analyzing data. In this hybrid manuscript, we accomplish three things: (1) review the interaction between speed and locomotion variables in rodent studies, (2) comprehensively analyze the relationship between speed and 162 locomotion variables in a group of 16 wild-type mice using the CatWalk gait analysis system, and (3) develop and test a statistical method in which locomotion variables are analyzed and reported in the context of speed. Notable results include the following: (1) over 90% of variables, reported by CatWalk, were dependent on speed with an average R2 value of 0.624, (2) most variables were related to speed in a nonlinear manner, (3) current methods of controlling for speed are insufficient, and (4) the linear mixed model is an appropriate and effective statistical method for locomotion analyses that is inclusive of speed-dependent relationships. Given the pervasive dependency of locomotion variables on speed, we maintain that valid conclusions from locomotion analyses cannot be made unless they are analyzed and reported within the context of speed. PMID:24890845
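
    A minimal sketch of the statistical approach recommended above, a linear mixed model with walking speed as a fixed effect and animal identity as a random effect, using statsmodels on simulated data; the column names, effect sizes, and group structure are hypothetical, not the CatWalk dataset.

      # linear mixed model: speed as fixed effect, mouse as random intercept
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n_mice, n_trials = 16, 20
      df = pd.DataFrame({
          "mouse": np.repeat(np.arange(n_mice), n_trials),
          "speed": rng.uniform(10, 40, n_mice * n_trials),   # cm/s
      })
      # simulated stride length with a per-mouse random intercept plus noise
      df["stride"] = (3 + 0.05 * df["speed"]
                      + rng.normal(0, 0.3, n_mice)[df["mouse"]]
                      + rng.normal(0, 0.2, len(df)))

      # the speed coefficient reports the locomotion variable in the context
      # of speed, while the grouping absorbs between-animal variability
      model = smf.mixedlm("stride ~ speed", df, groups=df["mouse"]).fit()
      print(model.summary())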

  15. Multivariate Models for Prediction of Human Skin Sensitization Hazard

    PubMed Central

    Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole

    2016-01-01

    One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
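
    A minimal sketch of the two machine learning approaches named above (logistic regression and a support vector machine) evaluated on a 72-substance training set and a 24-substance external test set, with random placeholder features standing in for the assay, physicochemical, and read-across variables.

      # LR vs. SVM on a train/external-test split (sketch, synthetic data)
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(2)
      X_train, y_train = rng.normal(size=(72, 6)), rng.integers(0, 2, 72)
      X_test, y_test = rng.normal(size=(24, 6)), rng.integers(0, 2, 24)

      for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                        ("SVM", SVC(kernel="rbf"))]:
          pipe = make_pipeline(StandardScaler(), clf)   # scale, then classify
          pipe.fit(X_train, y_train)
          print(name, "test accuracy:", round(pipe.score(X_test, y_test), 3))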

  16. Large scale landslide susceptibility assessment using the statistical methods of logistic regression and BSA - study case: the sub-basin of the small Niraj (Transylvania Depression, Romania)

    NASA Astrophysics Data System (ADS)

    Roşca, S.; Bilaşco, Ş.; Petrea, D.; Fodorean, I.; Vescan, I.; Filip, S.; Măguţ, F.-L.

    2015-11-01

    The existence of a large number of GIS models for estimating landslide occurrence probability makes the selection of a specific one difficult. The present study focuses on the application of two quantitative models: the logistic and the BSA models. The comparative analysis of the results aims at identifying the most suitable model. The Niraj Mic Basin (87 km²) is characterised by a wide variety of landforms, with diverse morphometric, morphographical and geological characteristics, as well as by a high complexity of land use types where active landslides exist. For this reason it serves as the test area for applying the two models and comparing their results. The complexity of the inputs is illustrated by 16 factors, represented as 72 dummy variables, which were analysed on the basis of their importance within the model structures. Testing the statistical significance of each variable reduced the number of dummy variables to 12 that were considered significant for the test area within the logistic model, whereas the BSA model employed all the variables. The predictive power of the models was tested through the area under the ROC curve, which indicated good accuracy of the logistic model (AUROC = 0.86 for the testing area) and weaker predictability in validation (AUROC = 0.63 for the validation area).

  17. Effectiveness of Variable-Gain Kalman Filter Based on Angle Error Calculated from Acceleration Signals in Lower Limb Angle Measurement with Inertial Sensors

    PubMed Central

    Watanabe, Takashi

    2013-01-01

    The wearable sensor system developed by our group, which measures lower limb angles using a Kalman-filtering-based method, was suggested to be useful in the evaluation of gait function for rehabilitation support. However, a reduction in the variability of its measurement errors was desired. In this paper, a variable-Kalman-gain method based on the angle error calculated from acceleration signals is proposed to improve measurement accuracy. The proposed method was tested against a fixed-gain Kalman filter and a variable-Kalman-gain method based on acceleration magnitude that was used in previous studies. First, in angle measurement during treadmill walking, the proposed method measured lower limb angles with the highest accuracy: it significantly improved foot inclination angle measurement, while improving shank and thigh inclination angles only slightly. The variable-gain method based on acceleration magnitude was not effective for our Kalman filter system. Then, in angle measurement of a rigid-body model, the proposed method showed measurement accuracy similar to or higher than that reported in other studies that fixed markers of a camera-based motion measurement system on a rigid plate together with a sensor, or on the sensor directly. The proposed method was found to be effective for angle measurement with inertial sensors. PMID:24282442
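
    A minimal 1-D sketch of the idea described above, letting the Kalman gain vary with the angle error inferred from the acceleration signal, on synthetic data. The filter structure, tuning constants, and the direction of the adaptation (distrusting the accelerometer when the discrepancy is large) are illustrative assumptions, not the cited system.

      # scalar Kalman filter fusing gyro rate with accelerometer-derived angle
      import numpy as np

      def variable_gain_kalman(gyro, acc_angle, dt, q=1e-4, r0=1e-2, k_err=5.0):
          """Fuse gyro rate with accelerometer angle; R grows with angle error."""
          theta, p = acc_angle[0], 1.0
          out = np.empty(len(gyro))
          for i, (w, z) in enumerate(zip(gyro, acc_angle)):
              theta += w * dt                 # predict from gyro rate
              p += q
              err = abs(z - theta)            # angle error from acceleration
              r = r0 * (1.0 + k_err * err)    # variable measurement noise
              k = p / (p + r)                 # gain shrinks when err is large
              theta += k * (z - theta)
              p *= (1.0 - k)
              out[i] = theta
          return out

      t = np.arange(0, 5, 0.01)
      true = 0.5 * np.sin(2 * np.pi * 0.5 * t)
      gyro = np.gradient(true, t) + np.random.default_rng(3).normal(0, 0.05, t.size)
      acc = true + np.random.default_rng(4).normal(0, 0.1, t.size)
      est = variable_gain_kalman(gyro, acc, dt=0.01)
      print("RMSE:", np.sqrt(np.mean((est - true) ** 2)))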

  18. Expressive Language Intratest Scatter of Preschool-Age Children Who Stutter

    PubMed Central

    Millager, Ryan A.; Conture, Edward G.; Walden, Tedra A.; Kelly, Ellen M.

    2014-01-01

    Purpose The purpose of this study was to assess intratest scatter (variability) on standardized tests of expressive language by preschool-age children who do (CWS) and do not stutter (CWNS). Method Participants were 40 preschool-age CWS and 46 CWNS. Between-group comparisons of intratest scatter were made based on participant responses to the Expressive subtest of the Test of Early Language Development – 3 (TELD-Exp; Hresko, Reid, & Hamill, 1999) and the Expressive Vocabulary Test 2 (EVT-2; Williams, 2007). Within-group correlational analyses between intratest scatter and stuttering frequency and severity were also conducted for CWS. Results Findings indicated that, for CWS, categorical scatter on the EVT-2 was positively correlated with their stuttering frequency. No significant between-group differences in intratest scatter were found on the TELD-Exp or the EVT-2. Conclusions Consistent with earlier findings, variability in speech-language performance appears to be related to CWS’ stuttering, a finding taken to suggest an underlying cognitive-linguistic variable (e.g., cognitive load) may be common to both variables. PMID:25520550

  19. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
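
    A minimal sketch of an exterior penalty function method of the kind described above: the constrained problem is replaced by a sequence of unconstrained minimizations with an escalating penalty multiplier; the toy objective, constraint, and penalty schedule are illustrative, not the BIGDOT implementation.

      # exterior quadratic penalty method on a toy constrained problem
      import numpy as np
      from scipy.optimize import minimize

      f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2   # objective
      g = lambda x: x[0] + x[1] - 2                     # constraint g(x) <= 0

      def penalized(x, r):
          # penalty is zero inside the feasible region, quadratic outside
          return f(x) + r * max(0.0, g(x)) ** 2

      x = np.zeros(2)
      for r in [1.0, 10.0, 100.0, 1000.0]:              # escalate the penalty
          x = minimize(penalized, x, args=(r,)).x       # warm-start each stage
      print("solution:", x, "constraint value:", g(x))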

  20. Development and Application of Methods for Estimating Operating Characteristics of Discrete Test Item Responses without Assuming any Mathematical Form.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    In latent trait theory the latent space, or space of the hypothetical construct, is usually represented by some unidimensional or multi-dimensional continuum of real numbers. Like the latent space, the item response can either be treated as a discrete variable or as a continuous variable. Latent trait theory relates the item response to the latent…

  1. Common pitfalls in statistical analysis: Measures of agreement.

    PubMed

    Ranganathan, Priya; Pramesh, C S; Aggarwal, Rakesh

    2017-01-01

    Agreement between measurements refers to the degree of concordance between two (or more) sets of measurements. Statistical methods to test agreement are used to assess inter-rater variability or to decide whether one technique for measuring a variable can substitute another. In this article, we look at statistical measures of agreement for different types of data and discuss the differences between these and those for assessing correlation.
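
    As a small illustration of two common agreement measures discussed in such articles, the sketch below computes Cohen's kappa for categorical ratings and a Bland-Altman summary (bias and 95% limits of agreement) for continuous measurements; the ratings and measurements are invented.

      # Cohen's kappa and a Bland-Altman summary (sketch, invented data)
      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      rater1 = [0, 1, 1, 2, 0, 2, 1, 0, 2, 1]
      rater2 = [0, 1, 2, 2, 0, 2, 1, 0, 1, 1]
      print("kappa:", round(cohen_kappa_score(rater1, rater2), 3))

      a = np.array([5.1, 6.3, 7.0, 5.8, 6.5])   # method A
      b = np.array([5.0, 6.6, 6.8, 6.1, 6.4])   # method B
      diff = a - b
      bias = diff.mean()
      loa = 1.96 * diff.std(ddof=1)             # 95% limits of agreement
      print(f"bias {bias:.2f}, limits {bias - loa:.2f} to {bias + loa:.2f}")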

  2. 40 CFR 60.45c - Compliance and performance test methods and procedures for particulate matter.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... and procedures for particulate matter. 60.45c Section 60.45c Protection of Environment ENVIRONMENTAL... Administrator when necessitated by process variables or other factors. (5) For Method 5 or 5B of appendix A of... (CO2) measurement shall be obtained simultaneously with each run of Method 5, 5B, or 17 of appendix A...

  3. Screening large-scale association study data: exploiting interactions using random forests.

    PubMed

    Lunetta, Kathryn L; Hayward, L Brooke; Segal, Jonathan; Van Eerdewegh, Paul

    2004-12-10

    Genome-wide association studies for complex diseases will produce genotypes on hundreds of thousands of single nucleotide polymorphisms (SNPs). A logical first approach to dealing with massive numbers of SNPs is to use some test to screen the SNPs, retaining only those that meet some criterion for further study. For example, SNPs can be ranked by p-value, and those with the lowest p-values retained. When SNPs have large interaction effects but small marginal effects in a population, they are unlikely to be retained when univariate tests are used for screening. However, model-based screens that pre-specify interactions are impractical for data sets with thousands of SNPs. Random forest analysis is an alternative method that produces a single measure of importance for each predictor variable that takes into account interactions among variables without requiring model specification. Interactions increase the importance for the individual interacting variables, making them more likely to be given high importance relative to other variables. We test the performance of random forests as a screening procedure to identify small numbers of risk-associated SNPs from among large numbers of unassociated SNPs using complex disease models with up to 32 loci, incorporating both genetic heterogeneity and multi-locus interaction. Keeping other factors constant, if risk SNPs interact, the random forest importance measure significantly outperforms the Fisher Exact test as a screening tool. As the number of interacting SNPs increases, the improvement in performance of random forest analysis relative to Fisher Exact test for screening also increases. Random forests perform similarly to the univariate Fisher Exact test as a screening tool when SNPs in the analysis do not interact. In the context of large-scale genetic association studies where unknown interactions exist among true risk-associated SNPs or SNPs and environmental covariates, screening SNPs using random forest analyses can significantly reduce the number of SNPs that need to be retained for further study compared to standard univariate screening methods.
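
    A minimal sketch contrasting the two screening tools compared above, random forest importance and per-SNP Fisher exact tests, on synthetic genotypes with one built-in interacting SNP pair; sample sizes, effect construction, and forest settings are illustrative assumptions.

      # random forest importance vs. univariate Fisher exact screening
      import numpy as np
      from scipy.stats import fisher_exact
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(5)
      n, p = 400, 50
      X = rng.integers(0, 2, size=(n, p))               # binarized genotypes
      # phenotype driven by an interaction of SNPs 0 and 1, plus label noise
      y = ((X[:, 0] & X[:, 1]) ^ (rng.random(n) < 0.1)).astype(int)

      rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
      rf_rank = np.argsort(rf.feature_importances_)[::-1]

      pvals = []
      for j in range(p):                                # 2x2 table per SNP
          table = [[np.sum((X[:, j] == 1) & (y == 1)),
                    np.sum((X[:, j] == 1) & (y == 0))],
                   [np.sum((X[:, j] == 0) & (y == 1)),
                    np.sum((X[:, j] == 0) & (y == 0))]]
          pvals.append(fisher_exact(table)[1])

      print("top 5 by RF importance:  ", rf_rank[:5])
      print("top 5 by Fisher p-value: ", np.argsort(pvals)[:5])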

  4. Detection of fatigue cracks by nondestructive testing methods

    NASA Technical Reports Server (NTRS)

    Anderson, R. T.; Delacy, T. J.; Stewart, R. C.

    1973-01-01

    The effectiveness of various NDT methods in detecting small tight cracks was assessed by randomly introducing fatigue cracks into aluminum sheets. The study included optimizing the NDT methods, calibrating NDT equipment with fatigue-cracked standards, and evaluating a number of cracked specimens by the optimized NDT methods. The evaluations were conducted by highly trained personnel, provided with detailed procedures, in order to minimize the effects of human variability. These personnel performed the NDT on the test specimens without knowledge of the flaw locations and reported on the flaws detected. The performance of these tests was measured by comparing the flaws detected against the flaws present. The principal NDT methods utilized were radiographic, ultrasonic, penetrant, and eddy current. Holographic interferometry, acoustic emission monitoring, and replication methods were also applied to a reduced number of specimens. Generally, the best performance was shown by the eddy current, ultrasonic, penetrant, and holographic tests. Etching provided no measurable improvement, while proof loading improved flaw detectability. Data are shown that quantify the performance of the NDT methods applied.

  5. Characterization of the porosity of human dental enamel and shear bond strength in vitro after variable etch times: initial findings using the BET method.

    PubMed

    Nguyen, Trang T; Miller, Arthur; Orellana, Maria F

    2011-07-01

    (1) To quantitatively characterize human enamel porosity and surface area in vitro before and after etching for variable etching times; and (2) to evaluate shear bond strength after variable etching times. Specifically, our goal was to identify any correlation between enamel porosity and shear bond strength. Pore surface area, pore volume, and pore size of enamel from extracted human teeth were analyzed by Brunauer-Emmett-Teller (BET) gas adsorption before and after etching for 15, 30, and 60 seconds with 37% phosphoric acid. Orthodontic brackets were bonded with Transbond to the samples with variable etch times and were subsequently applied to a single-plane lap shear testing system. Pore volume and surface area increased after etching for 15 and 30 seconds. At 60 seconds, this increase was less pronounced. In contrast, pore size appeared to decrease after etching. No correlation was found between etching time and shear strength. Samples etched for 15, 30, and 60 seconds all demonstrated clinically viable shear strength values. The BET adsorption method could be a valuable tool in enhancing our understanding of enamel characteristics. Our findings indicate that distinct quantitative changes in enamel pore architecture are evident after etching. Further testing with a larger sample size would have to be carried out for more definitive conclusions to be made.

  6. Response surface modeling of boron adsorption from aqueous solution by vermiculite using different adsorption agents: Box-Behnken experimental design.

    PubMed

    Demirçivi, Pelin; Saygılı, Gülhayat Nasün

    2017-07-01

    In this study, a different method was applied for boron removal by using vermiculite as the adsorbent. Vermiculite, which was used in the experiments, was not modified with adsorption agents before boron adsorption using a separate process. Hexadecyltrimethylammonium bromide (HDTMA) and gallic acid (GA) were used as adsorption agents for vermiculite by maintaining the solid/liquid ratio at 12.5 g/L. HDTMA/GA concentration, contact time, pH, initial boron concentration, inert electrolyte and temperature effects on boron adsorption were analyzed. A three-factor, three-level Box-Behnken design model combined with response surface methodology (RSM) was employed to examine and optimize process variables for boron adsorption from aqueous solution by vermiculite using HDTMA and GA. Solution pH (2-12), temperature (25-60 °C) and initial boron concentration (50-8,000 mg/L) were chosen as independent variables and coded x1, x2 and x3 at three levels (-1, 0 and 1). Analysis of variance was used to test the significance of variables and their interactions at the 95% confidence level (α = 0.05). According to the regression coefficients, a second-order empirical equation was fitted between the adsorption capacity (qi) and the coded variables tested (xi). Optimum values of the variables were also evaluated for maximum boron adsorption by vermiculite-HDTMA (HDTMA-Verm) and vermiculite-GA (GA-Verm).
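
    The second-order empirical model referred to above has the standard Box-Behnken/RSM form; a sketch in LaTeX, with coefficients b to be estimated from the design (the study's fitted values are not reproduced here):

      q_i = b_0 + \sum_{j=1}^{3} b_j x_j + \sum_{j=1}^{3} b_{jj} x_j^2 + \sum_{j<k} b_{jk} x_j x_k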

  7. Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.

    PubMed

    Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E

    2014-02-28

    The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. By accounting for any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful for generalizing our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of the number of variables to the sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of the metabolic consequences of vitamin B6 deficiency illustrates the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one-group pre-intervention and post-intervention comparisons, multiple parallel-group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.

  8. Validity of a portable glucose, total cholesterol, and triglycerides multi-analyzer in adults.

    PubMed

    Coqueiro, Raildo da Silva; Santos, Mateus Carmo; Neto, João de Souza Leal; Queiroz, Bruno Morbeck de; Brügger, Nelson Augusto Jardim; Barbosa, Aline Rodrigues

    2014-07-01

    This study investigated the accuracy and precision of the Accutrend Plus system for determining blood glucose, total cholesterol, and plasma triglycerides in adults and evaluated its efficiency in measuring these blood variables. The sample consisted of 53 subjects (≥ 18 years). For laboratory determination of the blood variables, venous blood samples were collected and processed in a Labmax 240 analyzer. To measure the blood variables with the Accutrend Plus system, samples of capillary blood were collected. The analysis included the following tests: Wilcoxon and Student's t-tests for paired samples, Lin's concordance coefficient, the Bland-Altman method, the receiver operating characteristic curve, the McNemar test, and the kappa statistic. The results show that the Accutrend Plus system provided significantly higher values (p ≤ .05) of glucose and triglycerides, but not of total cholesterol (p > .05), compared with the values determined in the laboratory. However, the system showed good reproducibility (Lin's coefficient: glucose = .958, triglycerides = .992, total cholesterol = .940), high concordance with the laboratory method (Lin's coefficient: glucose = .952, triglycerides = .990, total cholesterol = .944), and high sensitivity (glucose = 80.0%, triglycerides = 90.5%, total cholesterol = 84.4%) and specificity (glucose = 100.0%, triglycerides = 96.9%, total cholesterol = 95.2%) in discriminating high values of the three blood variables analyzed. It can be concluded that, despite the tendency to overestimate glucose and triglyceride levels, the portable multi-analyzer is a valid alternative for the monitoring of metabolic disorders and cardiovascular risk factors. © The Author(s) 2013.

  9. Detection of "noisy" chaos in a time series

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Kanters, J. K.; Cohen, R. J.; Holstein-Rathlou, N. H.

    1997-01-01

    Time series from biological systems often display fluctuations in the measured variables. Much effort has been directed at determining whether this variability reflects deterministic chaos or whether it is merely "noise". The output from most biological systems is probably the result of both the internal dynamics of the system and the input to the system from the surroundings. This implies that the system should be viewed as a mixed system with both stochastic and deterministic components. We present a method that appears to be useful in deciding whether determinism is present in a time series and whether this determinism has chaotic attributes. The method relies on fitting a nonlinear autoregressive model to the time series, followed by an estimation of the characteristic exponents of the model over the observed probability distribution of states for the system. The method is tested by computer simulations and applied to heart rate variability data.
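
    A simplified 1-D sketch of this recipe: fit a polynomial (nonlinear autoregressive) model, then average the log of the local derivative magnitude over the observed distribution of states to estimate the largest characteristic exponent. The test series, model order, and noise level are illustrative choices, not those of the paper.

      # polynomial NAR fit + characteristic exponent (sketch, synthetic data)
      import numpy as np

      rng = np.random.default_rng(6)
      # logistic map plus measurement noise as a "noisy chaos" test series
      x = np.empty(2000)
      x[0] = 0.3
      for i in range(x.size - 1):
          x[i + 1] = 3.9 * x[i] * (1 - x[i])
      s = x + rng.normal(0, 0.01, x.size)

      # fit a cubic autoregressive model s[t+1] ~ P(s[t]) by least squares
      deg = 3
      A = np.vander(s[:-1], deg + 1)
      coef, *_ = np.linalg.lstsq(A, s[1:], rcond=None)
      P = np.poly1d(coef)
      dP = P.deriv()

      # characteristic exponent of the fitted model, averaged over the
      # observed distribution of states; > 0 suggests chaotic determinism
      lam = np.mean(np.log(np.abs(dP(s[:-1])) + 1e-12))
      print("estimated largest exponent:", round(float(lam), 3))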

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pichara, Karim; Protopapas, Pavlos

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.

  11. Realist identification of group-level latent variables for perinatal social epidemiology theory building.

    PubMed

    Eastwood, John Graeme; Jalaludin, Bin Badrudin; Kemp, Lynn Ann; Phung, Hai Ngoc

    2014-01-01

    We have previously reported in this journal on an ecological study of perinatal depressive symptoms in South Western Sydney. In that article, we briefly reported on a factor analysis that was utilized to identify empirical indicators for analysis. In this article, we report on the mixed method approach that was used to identify those latent variables. Social epidemiology has been slow to embrace a latent variable approach to the study of social, political, economic, and cultural structures and mechanisms, partly for philosophical reasons. Critical realist ontology and epistemology have been advocated as an appropriate methodological approach to both theory building and theory testing in the health sciences. We describe here an emergent mixed method approach that uses qualitative methods to identify latent constructs followed by factor analysis using empirical indicators chosen to measure identified qualitative codes. Comparative analysis of the findings is reported together with a limited description of realist approaches to abstract reasoning.

  12. Influence of engine coolant composition on the electrochemical degradation behavior of EPDM radiator hoses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vroomen, G.L.M.; Lievens, S.S.; Maes, J.P.

    1999-08-01

    EPDM (ethylene-propylene rubber) has been used for more than 25 years as the main elastomer in radiator hoses because it offers a well-balanced price/performance ratio in this field of application. Some years ago the automotive and rubber industry became aware of a problem called electrochemical degradation and cracking. Cooling systems broke down due to a typical cracking failure of some radiator hoses. Different test methods were developed to simulate and solve the problem on a laboratory scale. The influence of different variables with respect to the electrochemical degradation process has been investigated, but until recently the influence of the engine coolant was ignored. Using a test method developed by DSM Elastomers, the influence of the composition of the engine coolant as well as of the EPDM composition has now been evaluated. This paper gives an overview of test results with different coolant technologies and offers a plausible explanation of the degradation mechanisms as a function of the elastomer composition.

  13. From SOPs to Reports to Evaluations: Learning and Memory ...

    EPA Pesticide Factsheets

    In an era of global trade and regulatory cooperation, consistent and scientifically based interpretation of developmental neurotoxicity (DNT) studies is essential. Because there is flexibility in the selection of test method(s), consistency can be especially challenging for learning and memory tests required by EPA and OECD DNT guidelines (chemicals and pesticides) and recommended for ICH prenatal/postnatal guidelines (pharmaceuticals). A well-reasoned uniform approach is particularly important for variable endpoints and if non-standard tests are used. An understanding of the purpose behind the tests and expected outcomes is critical, and attention to elements of experimental design, conduct, and reporting can improve study design by the investigator as well as the accuracy and consistency of interpretation by evaluators. This understanding also directs which information must be clearly described in study reports. While missing information may be available in standard operating procedures (SOPs), if it is not clearly reflected in report submissions there may be questions and misunderstandings by evaluators that could impact risk assessments. A practical example will be presented to provide insights into important variables and reporting approaches. Cognitive functions most often tested in guideline studies include associative, positional, sequential, and spatial learning and memory in weanling and adult animals. These complex behaviors tap different bra

  14. VARIABLE SELECTION FOR QUALITATIVE INTERACTIONS IN PERSONALIZED MEDICINE WHILE CONTROLLING THE FAMILY-WISE ERROR RATE

    PubMed Central

    Gunter, Lacey; Zhu, Ji; Murphy, Susan

    2012-01-01

    For many years, subset analysis has been a popular topic in the biostatistics and clinical trials literature. In more recent years, the discussion has focused on finding subsets of genomes which play a role in the effect of treatment, often referred to as stratified or personalized medicine. Though highly sought after, methods for detecting subsets with differing treatment effects are limited and lacking in power. In this article we discuss variable selection for qualitative interactions with the aim of discovering these critical patient subsets. We propose a new technique designed specifically to find these interaction variables among a large set of variables while still controlling the number of false discoveries. We compare this new method against standard qualitative interaction tests using simulations and give an example of its use on data from a randomized controlled trial for the treatment of depression. PMID:22023676

  15. Research on damping properties optimization of variable-stiffness plate

    NASA Astrophysics Data System (ADS)

    Wen-kai, QI; Xian-tao, YIN; Cheng, SHEN

    2016-09-01

    This paper investigates the damping optimization design of a variable-stiffness composite laminated plate, in which fibre paths can be continuously curved and fibre angles differ from region to region. First, a damping prediction model is developed based on the modal dissipative energy principle and verified by comparison with modal testing results. Then, instead of fibre angles, the element stiffness and damping matrices are taken as design variables on the basis of the novel Discrete Material Optimization (DMO) formulation, thus greatly reducing computation time. Finally, the modal damping capacity of arbitrary order is optimized using the Method of Moving Asymptotes (MMA). Meanwhile, a mode tracking technique is employed to investigate the variation of the modal shapes. The convergence performance of the interpolation function, the first-order specific damping capacity (SDC) optimization results, and the variation of modal shape for different penalty factors are discussed. The results show that the damping properties of the variable-stiffness plate can be increased by 50%-70% after optimization.

  16. Variables affecting learning in a simulation experience: a mixed methods study.

    PubMed

    Beischel, Kelly P

    2013-02-01

    The primary purpose of this study was to test a hypothesized model describing the direct effects of learning variables on anxiety and cognitive learning outcomes in a high-fidelity simulation (HFS) experience. The secondary purpose was to explain and explore student perceptions concerning the qualities and context of HFS affecting anxiety and learning. This study used a mixed methods quantitative-dominant explanatory design with concurrent qualitative data collection to examine variables affecting learning in undergraduate, beginning nursing students (N = 124). Being ready to learn, having a strong auditory-verbal learning style, and being prepared for simulation directly affected anxiety, whereas learning outcomes were directly affected by having strong auditory-verbal and hands-on learning styles. Anxiety did not quantitatively mediate cognitive learning outcomes as theorized, although students qualitatively reported debilitating levels of anxiety. This study advances nursing education science by providing evidence concerning variables affecting learning outcomes in HFS.

  17. General Method for Constructing Local Hidden Variable Models for Entangled Quantum States

    NASA Astrophysics Data System (ADS)

    Cavalcanti, D.; Guerini, L.; Rabelo, R.; Skrzypczyk, P.

    2016-11-01

    Entanglement allows for the nonlocality of quantum theory, which is the resource behind device-independent quantum information protocols. However, not all entangled quantum states display nonlocality. A central question is to determine the precise relation between entanglement and nonlocality. Here we present the first general test to decide whether a quantum state is local, and show that the test can be implemented by semidefinite programming. This method can be applied to any given state and to the construction of new examples of states with local hidden variable models for both projective and general measurements. As applications, we provide a lower-bound estimate of the fraction of two-qubit local entangled states and present new explicit examples of such states, including those that arise from physical noise models, Bell-diagonal states, and noisy Greenberger-Horne-Zeilinger and W states.

  18. T-duality constraints on higher derivatives revisited

    NASA Astrophysics Data System (ADS)

    Hohm, Olaf; Zwiebach, Barton

    2016-04-01

    We ask to what extent the higher-derivative corrections of string theory are constrained by T-duality. The seminal early work by Meissner tests T-duality by reduction to one dimension using a distinguished choice of field variables in which the bosonic string action takes a Gauss-Bonnet-type form. By analyzing all field redefinitions that may or may not be duality covariant and may or may not be gauge covariant, we extend the procedure to test T-duality starting from an action expressed in arbitrary field variables. We illustrate the method by showing that it determines uniquely the first-order α' corrections of the bosonic string, up to terms that vanish in one dimension. We also use the method to glean information about the O(α'²) corrections in the double field theory with Green-Schwarz deformation.

  19. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    PubMed

    Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie

    2016-01-01

    In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and a modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient for large-scale nonsmooth problems; several problems are tested, with maximum dimension up to 100,000 variables.
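
    A minimal sketch of nonlinear CG with the Hager-Zhang beta on a smooth large-scale test function (the extended Rosenbrock function), to illustrate the HZ update itself; the nonsmooth MHZ modifications of the paper are not reproduced, and the line search and stopping rule here are simple illustrative choices.

      # nonlinear CG with the Hager-Zhang beta (sketch, smooth test problem)
      import numpy as np

      def f(x):
          return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

      def grad(x):
          g = np.zeros_like(x)
          g[:-1] = -400 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2 * (1 - x[:-1])
          g[1:] += 200 * (x[1:] - x[:-1] ** 2)
          return g

      x = np.full(1000, -1.0)              # large-scale starting point
      g = grad(x)
      d = -g
      for it in range(20000):
          if g @ d >= 0:                   # safeguard: restart along -gradient
              d = -g
          t, gTd = 1.0, g @ d              # backtracking (Armijo) line search
          while f(x + t * d) > f(x) + 1e-4 * t * gTd:
              t *= 0.5
          x_new = x + t * d
          g_new = grad(x_new)
          y = g_new - g
          dy = d @ y
          # Hager-Zhang beta: (y - 2 d ||y||^2 / d'y)' g_new / d'y
          beta = ((y - 2 * d * (y @ y) / dy) @ g_new) / dy
          x, g, d = x_new, g_new, -g_new + beta * d
          if np.linalg.norm(g) < 1e-6:
              break
      print("iterations:", it + 1, "f(x*) =", f(x))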

  20. Multigrid one shot methods for optimal control problems: Infinite dimensional control

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Taasan, Shlomo

    1994-01-01

    The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two level asymptotic convergence rate, to determine the amplitude of the minimization steps, and the choice of a high pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at the same cost as solving the corresponding analysis problems just a few times.

  1. Using structural equation modeling to investigate relationships among ecological variables

    USGS Publications Warehouse

    Malaeb, Z.A.; Summers, J. Kevin; Pugesek, B.H.

    2000-01-01

    Structural equation modeling is an advanced multivariate statistical process with which a researcher can construct theoretical concepts, test their measurement reliability, hypothesize and test a theory about their relationships, take into account measurement errors, and consider both direct and indirect effects of variables on one another. Latent variables are theoretical concepts that unite phenomena under a single term, e.g., ecosystem health, environmental condition, and pollution (Bollen, 1989). Latent variables are not measured directly but can be expressed in terms of one or more directly measurable variables called indicators. For some researchers, defining, constructing, and examining the validity of latent variables may be the end task in itself. For others, testing hypothesized relationships among latent variables may be of interest. We analyzed the correlation matrix of eleven environmental variables from the U.S. Environmental Protection Agency's (USEPA) Environmental Monitoring and Assessment Program for Estuaries (EMAP-E) using methods of structural equation modeling. We hypothesized and tested a conceptual model to characterize the interdependencies between four latent variables: sediment contamination, natural variability, biodiversity, and growth potential. In particular, we were interested in measuring the direct, indirect, and total effects of sediment contamination and natural variability on biodiversity and growth potential. The model fit the data well and accounted for 81% of the variability in biodiversity and 69% of the variability in growth potential. It revealed a positive total effect of natural variability on growth potential that would otherwise have been judged negative had we not considered indirect effects. That is, natural variability had a negative direct effect on growth potential of magnitude -0.3251 and a positive indirect effect mediated through biodiversity of magnitude 0.4509, yielding a net positive total effect of 0.1258. Natural variability had a positive direct effect on biodiversity of magnitude 0.5347 and a negative indirect effect mediated through growth potential of magnitude -0.1105, yielding a positive total effect of magnitude 0.4242. Sediment contamination had a negative direct effect on biodiversity of magnitude -0.1956 and a negative indirect effect on growth potential via biodiversity of magnitude -0.067. Biodiversity had a positive effect on growth potential of magnitude 0.8432, and growth potential had a positive effect on biodiversity of magnitude 0.3398. The correlation between biodiversity and growth potential was estimated at 0.7658 and that between sediment contamination and natural variability at -0.3769.
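
    The effect decomposition used above is additive; as a check in LaTeX notation (coefficients copied from the reported results):

      \text{total} = \text{direct} + \text{indirect}:\quad
      0.1258 = -0.3251 + 0.4509 \;\text{(growth potential)},\qquad
      0.4242 = 0.5347 + (-0.1105) \;\text{(biodiversity)}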

  2. Thrust modulation methods for a subsonic V/STOL aircraft

    NASA Technical Reports Server (NTRS)

    Woollett, R. R.

    1981-01-01

    Low speed wind tunnel tests were conducted to assess four methods for attaining thrust modulation for V/STOL aircraft. The four methods were: (1) fan speed change, (2) fan nozzle exit area change, (3) variable pitch rotor (VPR) fan, and (4) variable inlet guide vanes (VIGV). The interrelationships between the inlet and the thrust modulation system were also investigated using a double slotted inlet and a thick lip inlet. The results can be summarized as follows: (1) the VPR and VIGV systems were the most promising, (2) changes in blade angle to obtain changes in fan thrust have significant implications for the inlet, and (3) both systems attained the required level of thrust with acceptable levels of fan blade stress.

  3. Aileron controls for wind turbine applications

    NASA Technical Reports Server (NTRS)

    Miller, D. R.; Putoff, R. L.

    1984-01-01

    Horizontal axis wind turbines which utilize partial or full variable blade pitch to regulate rotor speed were examined. The weight and costs of these systems indicated a need for alternate methods of rotor control. Aileron control is an alternative which has the potential to meet this need. Aileron control rotors were tested on the Mod-O wind turbine to determine their power regulation and shutdown characteristics. Test results for 20% and 38% chord aileron control rotors are presented. Tests showed that aileron control is a viable method for safely controlling rotor speed following a loss of generator load.

  4. Aileron controls for wind turbine applications

    NASA Technical Reports Server (NTRS)

    Miller, D. R.; Puthoff, R. L.

    1984-01-01

    Horizontal axis wind turbines which utilize partial or full variable blade pitch to regulate rotor speed were examined. The weight and costs of these systems indicated a need for alternate methods of rotor control. Aileron control is an alternative which has the potential to meet this need. Aileron control rotors were tested on the Mod-O wind turbine to determine their power regulation and shutdown characteristics. Test results for 20 and 38 percent chord aileron control rotors are presented. Tests showed that aileron control is a viable method for safely controlling rotor speed following a loss of generator load.

  5. Non-invasive diagnosis of liver fibrosis in chronic hepatitis C

    PubMed Central

    Schiavon, Leonardo de Lucca; Narciso-Schiavon, Janaína Luz; de Carvalho-Filho, Roberto José

    2014-01-01

    Assessment of liver fibrosis in chronic hepatitis C virus (HCV) infection is considered a relevant part of patient care and key for decision making. Although liver biopsy has been considered the gold standard for staging liver fibrosis, it is an invasive technique and subject to sampling errors and significant intra- and inter-observer variability. Over the last decade, several noninvasive markers were proposed for liver fibrosis diagnosis in chronic HCV infection, with variable performance. Besides the clear advantage of being noninvasive, a more objective interpretation of test results may overcome the mentioned intra- and inter-observer variability of liver biopsy. In addition, these tests can theoretically offer a more accurate view of fibrogenic events occurring in the entire liver with the advantage of providing frequent fibrosis evaluation without additional risk. However, in general, these tests show low accuracy in discriminating between intermediate stages of fibrosis and may be influenced by several hepatic and extra-hepatic conditions. These methods are either serum markers (usually combined in a mathematical model) or imaging modalities that can be used separately or combined in algorithms to improve accuracy. In this review we will discuss the different noninvasive methods that are currently available for the evaluation of liver fibrosis in chronic hepatitis C, their advantages, limitations and application in clinical practice. PMID:24659877

  6. Inter-laboratory analysis of selected genetically modified plant reference materials with digital PCR.

    PubMed

    Dobnik, David; Demšar, Tina; Huber, Ingrid; Gerdes, Lars; Broeders, Sylvia; Roosens, Nancy; Debode, Frederic; Berben, Gilbert; Žel, Jana

    2018-01-01

    Digital PCR (dPCR), as a new technology in the field of genetically modified (GM) organism (GMO) testing, enables determination of absolute target copy numbers. The purpose of our study was to test the transferability of methods designed for quantitative PCR (qPCR) to dPCR and to carry out an inter-laboratory comparison of the performance of two different dPCR platforms when determining the absolute GM copy numbers and the GM copy number ratio in reference materials certified for GM content in mass fraction. Overall results in terms of measured GM% were within acceptable variation limits for both tested dPCR systems. However, the determined absolute copy numbers for individual genes or events showed higher variability between laboratories in one third of the cases, most likely due to variability in the technical work, droplet size variability, and analysis of the raw data. GMO quantification with dPCR and qPCR was comparable. As methods originally designed for qPCR performed well on dPCR systems, already validated qPCR assays can generally be used on dPCR platforms for the purpose of GMO detection. Graphical abstract: The output of three different PCR-based platforms was assessed in an inter-laboratory comparison.

  7. Comparison of four approaches to a rock facies classification problem

    USGS Publications Warehouse

    Dubois, M.K.; Bohling, Geoffrey C.; Chakrabarti, S.

    2007-01-01

    In this study, seven classifiers based on four different approaches were tested on a rock facies classification problem: classical parametric methods using Bayes' rule, and non-parametric methods using fuzzy logic, k-nearest neighbor, and feed-forward back-propagating artificial neural networks. The objective was to determine the most effective classifier for geologic facies prediction in wells without cores in the Panoma gas field in southwest Kansas. The study data include 3600 samples with known rock facies class (from core), each sample having either four or five measured properties (wire-line log curves) and two derived geologic properties (geologic constraining variables). The sample set was divided into two subsets, one for training and one for testing the ability of the trained classifier to correctly assign classes. Artificial neural networks clearly outperformed all other classifiers and are effective tools for this particular classification problem. Classical parametric models were inadequate due to the nature of the predictor variables (high dimensional and not linearly correlated) and the feature space of the classes (overlapping). The other non-parametric methods tested, k-nearest neighbor and fuzzy logic, would need considerable improvement to match the neural network effectiveness, but further work, possibly combining certain aspects of the three non-parametric methods, may be justified. © 2006 Elsevier Ltd. All rights reserved.

  8. A critical issue in model-based inference for studying trait-based community assembly and a solution.

    PubMed

    Ter Braak, Cajo J F; Peres-Neto, Pedro; Dray, Stéphane

    2017-01-01

    Statistical testing of trait-environment association from data is a challenge as there is no common unit of observation: the trait is observed on species, the environment on sites, and the mediating abundance on species-site combinations. A number of correlation-based methods, such as the community weighted trait means method (CWM), the fourth-corner correlation method, and the multivariate method RLQ, have been proposed to estimate such trait-environment associations. In these methods, valid statistical testing proceeds by performing two separate resampling tests, one site-based and the other species-based, and by assessing significance by the larger of the two p-values (the pmax test). Recently, regression-based methods using generalized linear models (GLM) have been proposed as a promising alternative with statistical inference via site-based resampling. We investigated the performance of this new approach along with approaches that mimicked the pmax test using GLM instead of the fourth-corner method. By simulation using models with additional random variation in the species response to the environment, the site-based resampling tests using GLM are shown to have severely inflated type I error, of up to 90%, when the nominal level is set at 5%. In addition, predictive modelling of such data using site-based cross-validation very often identified trait-environment interactions that had no predictive value. The problem that we identify is not an "omitted variable bias" problem, as it occurs even when the additional random variation is independent of the observed trait and environment data. Instead, it is a problem of ignoring a random effect. In the same simulations, the GLM-based pmax test controlled the type I error in all models proposed so far in this context, but still gave slightly inflated error in more complex models that included both missing (but important) traits and missing (but important) environmental variables. For screening the importance of single trait-environment combinations, the fourth-corner test is shown to give almost the same results as the GLM-based tests in far less computing time.
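
    A minimal sketch of the pmax idea on toy data: compute a fourth-corner-style trait-environment statistic, build two permutation null distributions (permuting sites and permuting species), and take the larger of the two p-values; the matrices, statistic details, and permutation counts are illustrative assumptions.

      # pmax test via site- and species-based permutations (sketch, toy data)
      import numpy as np

      rng = np.random.default_rng(7)
      n_sites, n_species = 30, 20
      L = rng.poisson(2, (n_sites, n_species))      # abundance matrix
      env = rng.normal(size=n_sites)                # one environmental variable
      trait = rng.normal(size=n_species)            # one trait

      def stat(L, env, trait):
          w = L / L.sum()                           # fourth-corner-style weights
          return env @ w @ trait

      obs = stat(L, env, trait)
      n_perm = 999
      # site-based null: shuffle the environment across sites
      p_site = (1 + sum(abs(stat(L, rng.permutation(env), trait)) >= abs(obs)
                        for _ in range(n_perm))) / (n_perm + 1)
      # species-based null: shuffle the trait across species
      p_spec = (1 + sum(abs(stat(L, env, rng.permutation(trait))) >= abs(obs)
                        for _ in range(n_perm))) / (n_perm + 1)
      print("p_max =", max(p_site, p_spec))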

  9. Attributing runoff changes to climate variability and human activities: uncertainty analysis using four monthly water balance models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Shuai; Xiong, Lihua; Li, Hong-Yi

    2015-05-26

    Hydrological simulations to delineate the impacts of climate variability and human activities are subject to uncertainties related to both the parameters and the structure of the hydrological models. To analyze the impact of these uncertainties on model performance and to yield more reliable simulation results, a global calibration and multimodel combination method that integrates the Shuffled Complex Evolution Metropolis (SCEM) and Bayesian Model Averaging (BMA) of four monthly water balance models was proposed. The method was applied to the Weihe River Basin (WRB), the largest tributary of the Yellow River, to determine the contribution of climate variability and human activities to runoff changes. The change point, which was used to determine the baseline period (1956-1990) and the human-impacted period (1991-2009), was derived using both the cumulative curve and Pettitt's test. Results show that the combination method from SCEM provides more skillful deterministic predictions than the best calibrated individual model, resulting in the smallest uncertainty interval of runoff changes attributed to climate variability and human activities. This combination methodology provides a practical and flexible tool for the attribution of runoff changes to climate variability and human activities using hydrological models.

  10. Inter-laboratory trial of a standardized sediment contact test with the aquatic plant Myriophyllum aquaticum (ISO 16191).

    PubMed

    Feiler, Ute; Ratte, Monika; Arts, Gertie; Bazin, Christine; Brauer, Frank; Casado, Carmen; Dören, Laszlo; Eklund, Britta; Gilberg, Daniel; Grote, Matthias; Gonsior, Guido; Hafner, Christoph; Kopf, Willi; Lemnitzer, Bernd; Liedtke, Anja; Matthias, Uwe; Okos, Ewa; Pandard, Pascal; Scheerbaum, Dirk; Schmitt-Jansen, Mechthild; Stewart, Kathleen; Teodorovic, Ivana; Wenzel, Andrea; Pluta, Hans-Jürgen

    2014-03-01

    A whole-sediment toxicity test with Myriophyllum aquaticum has been developed by the German Federal Institute of Hydrology and standardized within the International Organization for Standardization (ISO; ISO 16191). An international ring test was performed to evaluate the precision of the test method. Four sediments (artificial, natural) were tested. Test duration was 10 d, and the test endpoint was inhibition of growth rate (r) based on fresh weight data. Eighteen of 21 laboratories met the validity criterion of r ≥ 0.09 d⁻¹ in the control. Results from 4 tests that did not conform to test-performance criteria were excluded from statistical evaluation. The inter-laboratory variability of growth rates (20.6%-25.0%) and inhibition (26.6%-39.9%) was comparable with the variability of other standardized bioassays. The mean test-internal variability of the controls was low (7% [control], 9.7% [solvent control]), yielding a high discriminatory power of the given test design (median minimum detectable differences [MDD] of 13% to 15%). To ensure these MDDs, an additional validity criterion of CV ≤ 15% of the growth rate in the controls was recommended. As a positive control, 90 mg 3,5-dichlorophenol/kg sediment dry mass was tested. The range of the expected growth inhibition was proposed to be 35 ± 15%. The ring test results demonstrated the reliability of the ISO 16191 toxicity test and its suitability as a tool to assess the toxicity of sediment and dredged material. © 2013 SETAC.

  11. Empirical Histograms in Item Response Theory with Ordinal Data

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2007-01-01

    The purpose of this research is to describe, test, and illustrate a new implementation of the empirical histogram (EH) method for ordinal items. The EH method involves the estimation of item response model parameters simultaneously with the approximation of the distribution of the random latent variable (theta) as a histogram. Software for the EH…

  12. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.

  13. Automated Test Case Generator for Phishing Prevention Using Generative Grammars and Discriminative Methods

    ERIC Educational Resources Information Center

    Palka, Sean

    2015-01-01

    This research details a methodology designed for creating content in support of various phishing prevention tasks including live exercises and detection algorithm research. Our system uses probabilistic context-free grammars (PCFG) and variable interpolation as part of a multi-pass method to create diverse and consistent phishing email content on…
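
    A minimal sketch of generating varied but consistent text from a probabilistic context-free grammar with variable interpolation, the two mechanisms named above; the grammar, weights, and placeholder variables are invented for illustration, not the system's actual rules.

      # PCFG expansion with variable interpolation (sketch, invented grammar)
      import random

      GRAMMAR = {
          "EMAIL":    [(["GREETING", " ", "BODY"], 1.0)],
          "GREETING": [(["Dear {name},"], 0.7), (["Hello {name},"], 0.3)],
          "BODY":     [(["Your {service} account requires verification."], 0.5),
                       (["We detected unusual activity on {service}."], 0.5)],
      }

      def expand(symbol, rng):
          if symbol not in GRAMMAR:
              return symbol                       # terminal string
          options, weights = zip(*GRAMMAR[symbol])
          choice = rng.choices(options, weights=weights)[0]
          return "".join(expand(s, rng) for s in choice)

      rng = random.Random(0)
      variables = {"name": "Alex", "service": "ExampleBank"}
      print(expand("EMAIL", rng).format(**variables))  # variable interpolation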

  14. A computer-vision-based rotating speed estimation method for motor bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxian; Guo, Jie; Lu, Siliang; Shen, Changqing; He, Qingbo

    2017-06-01

    Diagnosis of motor bearing faults under variable speed is challenging. In this study, a new computer-vision-based order tracking method is proposed to address this problem. First, a video recorded by a high-speed camera is analyzed with the speeded-up robust feature extraction and matching algorithm to obtain the instantaneous rotating speed (IRS) of the motor. Subsequently, an audio signal recorded by a microphone is equi-angle resampled for order tracking in accordance with the IRS curve, through which the time-domain signal is transferred to an angular-domain one. The envelope order spectrum is then calculated to determine the fault characteristic order, and finally the bearing fault pattern is determined. The effectiveness and robustness of the proposed method are verified on two brushless direct-current motor test rigs, in which two defective bearings and a healthy bearing are tested separately. This study provides a new noninvasive measurement approach that simultaneously avoids the installation of a tachometer and overcomes the disadvantages of tacholess order tracking methods for motor bearing fault diagnosis under variable speed.
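
    A minimal sketch of the two signal-processing steps named above, equi-angle resampling driven by a known IRS curve and an envelope order spectrum, on a synthetic amplitude-modulated signal; the sampling rate, speed profile, and fault order are invented for illustration.

      # equi-angle resampling + envelope order spectrum (sketch, synthetic data)
      import numpy as np
      from scipy.signal import hilbert

      fs = 10_000
      t = np.arange(0, 2, 1 / fs)
      irs = 20 + 5 * t                          # IRS in rev/s, linearly rising
      angle = 2 * np.pi * np.cumsum(irs) / fs   # cumulative shaft angle (rad)
      rng = np.random.default_rng(8)
      # carrier at 25x shaft order, amplitude-modulated at the 3.5x fault order
      sig = (1 + 0.8 * np.sin(3.5 * angle)) * np.sin(25.0 * angle)
      sig += 0.1 * rng.normal(size=t.size)

      # equi-angle resampling: interpolate the signal onto a uniform angle grid
      spr = 64                                  # samples per revolution
      ua = np.arange(angle[0], angle[-1], 2 * np.pi / spr)
      sig_ang = np.interp(ua, angle, sig)

      # envelope order spectrum: in the angular domain, spectral "frequencies"
      # are shaft orders, independent of the varying speed
      env = np.abs(hilbert(sig_ang))
      spec = np.abs(np.fft.rfft(env - env.mean()))
      orders = np.fft.rfftfreq(env.size, d=1 / spr)
      print("fault characteristic order ≈", orders[spec.argmax()])  # ~3.5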

  15. Method for using polarization gating to measure a scattering sample

    DOEpatents

    Baba, Justin S.

    2015-08-04

    Described herein are systems, devices, and methods facilitating optical characterization of scattering samples. A polarized optical beam can be directed to pass through a sample to be tested. The optical beam exiting the sample can then be analyzed to determine its degree of polarization, from which other properties of the sample can be determined. In some cases, an apparatus can include a source of an optical beam, an input polarizer, a sample, an output polarizer, and a photodetector. In some cases, a signal from a photodetector can be processed through attenuation, variable offset, and variable gain.

  16. Reliability of the Cardiff Test of basic life support and automated external defibrillation version 3.1.

    PubMed

    Whitfield, Richard H; Newcombe, Robert G; Woollard, Malcolm

    2003-12-01

    The introduction of the European Resuscitation Guidelines (2000) for cardiopulmonary resuscitation (CPR) and automated external defibrillation (AED) prompted the development of an up-to-date and reliable method of assessing the quality of performance of CPR in combination with the use of an AED. The Cardiff Test of basic life support (BLS) and AED version 3.1 was developed to meet this need and uses standardised checklists to retrospectively evaluate performance from analyses of video recordings and data drawn from a laptop computer attached to a training manikin. This paper reports the inter- and intra-observer reliability of this test. Data used to assess reliability were obtained from an investigation of CPR and AED skill acquisition in a lay responder AED training programme. Six observers were recruited to evaluate performance in 33 data sets, repeating their evaluation after a minimum interval of 3 weeks. More than 70% of the 42 variables considered in this study had a kappa score of 0.70 or above for inter-observer reliability or were drawn from computer data and therefore not subject to evaluator variability. Of the 42 variables, 85% had kappa scores for intra-observer reliability of 0.70 or above or were drawn from computer data. The standard deviations for inter- and intra-observer measures of time to first shock were 11.6 and 7.7 s, respectively. The inter- and intra-observer reliability for the majority of the variables in the Cardiff Test of BLS and AED version 3.1 is satisfactory. However, reliability is less acceptable with respect to shaking when checking for responsiveness, initial check/clearing of the airway, checks for signs of circulation, time to first shock and performance of interventions in the correct sequence. Further research is required to determine if modifications to the method of assessing these variables can increase reliability.
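    Checklist agreement of the kind reported here is typically quantified with Cohen's kappa. A minimal sketch with hypothetical ratings, not the study's data:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical scores."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    p_obs = np.mean(r1 == r2)                      # observed agreement
    p_exp = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)           # chance-corrected

# Hypothetical pass/fail scores from two observers on 33 performances.
rng = np.random.default_rng(0)
obs1 = rng.integers(0, 2, 33)
obs2 = np.where(rng.random(33) < 0.9, obs1, 1 - obs1)  # ~90% raw agreement
print(f"kappa = {cohens_kappa(obs1, obs2):.2f}")
```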

  17. Evaluation of Visual Acuity Measurements after Autorefraction versus Manual Refraction in Eyes with and without Diabetic Macular Edema

    PubMed Central

    Sun, Jennifer K.; Qin, Haijing; Aiello, Lloyd Paul; Melia, Michele; Beck, Roy W.; Andreoli, Christopher M.; Edwards, Paul A.; Glassman, Adam R.; Pavlica, Michael R.

    2012-01-01

    Objective To compare visual acuity (VA) scores after autorefraction versus research protocol manual refraction in eyes of patients with diabetes and a wide range of VA. Methods Electronic Early Treatment Diabetic Retinopathy Study (E-ETDRS) VA Test© letter score (EVA) was measured after autorefraction (AR-EVA) and after Diabetic Retinopathy Clinical Research Network (DRCR.net) protocol manual refraction (MR-EVA). Testing order was randomized, study participants and VA examiners were masked to refraction source, and a second EVA utilizing an identical manual refraction (MR-EVAsupl) was performed to determine test-retest variability. Results In 878 eyes of 456 study participants, median MR-EVA was 74 (Snellen equivalent approximately 20/32). Spherical equivalent was often similar for manual and autorefraction (median difference: 0.00, 5th and 95th percentiles −1.75 to +1.13 Diopters). However, on average, MR-EVA results were slightly better than AR-EVA results across the entire VA range. Furthermore, variability between AR-EVA and MR-EVA was substantially greater than the test-retest variability of MR-EVA (P<0.001). Variability of differences was highly dependent on autorefractor model. Conclusions Across a wide range of VA at multiple sites using a variety of autorefractors, VA measurements tend to be worse with autorefraction than manual refraction. Differences between individual autorefractor models were identified. However, even among autorefractor models comparing most favorably to manual refraction, VA variability between autorefraction and manual refraction is higher than the test-retest variability of manual refraction. The results suggest that with current instruments, autorefraction is not an acceptable substitute for manual refraction for most clinical trials with primary outcomes dependent on best-corrected VA. PMID:22159173

  18. Thrust performance of a variable-geometry, divergent exhaust nozzle on a turbojet engine at altitude

    NASA Technical Reports Server (NTRS)

    Straight, D. M.; Collom, R. R.

    1983-01-01

    A variable geometry, low aspect ratio, nonaxisymmetric, two dimensional, convergent-divergent exhaust nozzle was tested at simulated altitude on a turbojet engine to obtain baseline axial, dry thrust performance over wide ranges of operating nozzle pressure ratios, throat areas, and internal expansion area ratios. The thrust data showed good agreement with theory and scale model test results after the data were corrected for seal leakage and coolant losses. Wall static pressure profile data were also obtained and compared with one dimensional theory and scale model data. The pressure data indicate greater three dimensional flow effects in the full scale tests than with models. The leakage and coolant penalties were substantial, and the method to determine them is included.

  19. Anaerobic Threshold by Mathematical Model in Healthy and Post-Myocardial Infarction Men.

    PubMed

    Novais, L D; Silva, E; Simões, R P; Sakabe, D I; Martins, L E B; Oliveira, L; Diniz, C A R; Gallo, L; Catai, A M

    2016-02-01

    The aim of this study was to determine the anaerobic threshold (AT) in a population of healthy and post-myocardial infarction men by applying Hinkley's mathematical method and comparing its performance to the ventilatory visual method. This mathematical model, in lieu of observer-dependent visual determination, can produce more reliable results due to the uniformity of the procedure. 17 middle-aged men (55±3 years) were studied in 2 groups: 9 healthy men (54±2 years) and 8 men with previous myocardial infarction (57±3 years). All subjects underwent an incremental ramp exercise test until physical exhaustion. Breath-by-breath ventilatory variables, heart rate (HR), and the vastus lateralis surface electromyography (sEMG) signal were collected throughout the test. Carbon dioxide output (V˙CO2), HR, and sEMG were studied, and the AT determination methods were compared using correlation coefficients and Bland-Altman plots. Parametric statistical tests were applied with the significance level set at 5%. No significant differences were found between the methods in the HR, sEMG, and ventilatory variables at AT, nor in the intensity of effort relative to AT. Moreover, substantial concordance and significant correlations were observed between the methods. We concluded that the mathematical model was suitable for detecting the AT in both healthy and myocardial infarction subjects. © Georg Thieme Verlag KG Stuttgart · New York.

  20. Evaluation of DuPont Qualicon Bax System PCR assay for yeast and mold.

    PubMed

    Wallace, F Morgan; Burns, Frank; Fleck, Lois; Andaloro, Bridget; Farnum, Andrew; Tice, George; Ruebl, Joanne

    2010-01-01

    Evaluations were conducted to test the performance of the BAX System PCR assay which was certified as Performance Tested Method 010902 for screening yeast and mold in yogurt, corn starch, and milk-based powdered infant formula. Method comparison studies performed on samples with low-level inoculates showed that the BAX System demonstrates a sensitivity equivalent to the U.S. Food and Drug Administration's Bacteriological Analytical Manual culture method, but with a significantly shorter time to obtain results. Tests to evaluate inclusivity and exclusivity returned no false-negative and no false-positive results on a diverse panel of isolates, and tests for lot-to-lot variability and tablet stability demonstrated consistent performance. Ruggedness studies determined that none of the factors examined affected the performance of the assay.

  1. The borderline range of toxicological methods: Quantification and implications for evaluating precision.

    PubMed

    Leontaridou, Maria; Urbisch, Daniel; Kolle, Susanne N; Ott, Katharina; Mulliner, Denis S; Gabbert, Silke; Landsiedel, Robert

    2017-01-01

    Test methods to assess the skin sensitization potential of a substance usually use threshold criteria to dichotomize continuous experimental read-outs into yes/no conclusions. The threshold criteria are prescribed in the respective OECD test guidelines and the conclusion is used for regulatory hazard assessment, i.e., classification and labelling of the substance. We can identify a borderline range (BR) around the classification threshold within which test results are inconclusive due to a test method's biological and technical variability. We quantified BRs in the prediction models of the non-animal test methods DPRA, LuSens and h-CLAT, and of the animal test LLNA, respectively. Depending on the size of the BR, we found that between 6% and 28% of the substances in the sets tested with these methods were considered borderline. When the results of individual non-animal test methods were combined into integrated testing strategies (ITS), borderline test results of individual tests also affected the overall assessment of the skin sensitization potential of the testing strategy. This was analyzed for the 2-out-of-3 ITS: Four out of 40 substances (10%) were considered borderline. Based on our findings we propose expanding the standard binary classification of substances into "positive"/"negative" or "hazardous"/"non-hazardous" by adding a "borderline" or "inconclusive" alert for cases where test results fall within the borderline range.
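    The borderline-range idea can be sketched as a band around the classification threshold whose half-width reflects the method's variability; results inside the band are reported as inconclusive rather than forced into yes/no. An illustrative sketch (the threshold and variability figures are invented, not the paper's derived BRs, which come from method-specific biological and technical variability data):

```python
def classify_with_borderline(readout, threshold, cv):
    """Dichotomize a continuous read-out, flagging results inside a
    borderline range of +/- one coefficient of variation (cv) around
    the classification threshold as inconclusive."""
    lo, hi = threshold * (1 - cv), threshold * (1 + cv)
    if lo <= readout <= hi:
        return "borderline"
    return "positive" if readout > threshold else "negative"

# Hypothetical read-outs against a threshold of 1.5 with 20% variability.
for value in (0.9, 1.4, 1.6, 2.5):
    print(value, classify_with_borderline(value, threshold=1.5, cv=0.20))
```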

  2. Direction dependence analysis: A framework to test the direction of effects in linear models with an implementation in SPSS.

    PubMed

    Wiedermann, Wolfgang; Li, Xintong

    2018-04-16

    In nonexperimental data, at least three possible explanations exist for the association of two variables x and y: (1) x is the cause of y, (2) y is the cause of x, or (3) an unmeasured confounder is present. Statistical tests that identify which of the three explanatory models fits best would be a useful adjunct to the use of theory alone. The present article introduces one such statistical method, direction dependence analysis (DDA), which assesses the relative plausibility of the three explanatory models on the basis of higher-moment information about the variables (i.e., skewness and kurtosis). DDA involves the evaluation of three properties of the data: (1) the observed distributions of the variables, (2) the residual distributions of the competing models, and (3) the independence properties of the predictors and residuals of the competing models. When the observed variables are nonnormally distributed, we show that DDA components can be used to uniquely identify each explanatory model. Statistical inference methods for model selection are presented, and macros to implement DDA in SPSS are provided. An empirical example is given to illustrate the approach. Conceptual and empirical considerations are discussed for best-practice applications in psychological data, and sample size recommendations based on previous simulation studies are provided.
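    The intuition behind one DDA component can be shown in a few lines: if the true model is x -> y with a normally distributed error and a skewed predictor, the residuals of the correctly specified regression are near-normal, while the residuals of the reversed regression inherit skewness from the true cause. A simulation sketch in Python (not the authors' SPSS macros):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 5000
x = rng.exponential(1.0, n)        # skewed "cause"
y = 0.6 * x + rng.normal(0, 1, n)  # y generated from x with normal error

def residual_skew(pred, outcome):
    slope, intercept = np.polyfit(pred, outcome, 1)
    return stats.skew(outcome - (intercept + slope * pred))

# Correct direction: residuals ~ normal error (skew near 0).
# Reversed direction: residuals inherit skewness from x.
print(f"skew of residuals, x->y: {residual_skew(x, y):.2f}")
print(f"skew of residuals, y->x: {residual_skew(y, x):.2f}")
```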

  3. Foreign Object Damage to Tires Operating in a Wartime Environment

    DTIC Science & Technology

    1991-11-01

    barriers were successfully overcome and the method of testing employed can now be confidently used for future test needs of this type. Data Analysis ...combined variable effects. Analysis consideration involved cut types, cut depths, number of cuts, cut/hit probabilities, tire failures, and aircraft...November 1988 with data reduction and analysis continuing into October 1989. All of the cutting tests reported in this report were conducted at the

  4. An Analysis of Effects of Variable Factors on Weapon Performance

    DTIC Science & Technology

    1993-03-01

    ALTERNATIVE ANALYSIS A. CATEGORICAL DATA ANALYSIS Statistical methodology for categorical data analysis traces its roots to the work of Francis Galton in the...choice of statistical tests. This thesis examines an analysis performed by Surface Warfare Development Group (SWDG). The SWDG analysis is shown to be...incorrect due to the misapplication of testing methods. A corrected analysis is presented and recommendations suggested for changes to the testing

  5. High Energy/LET Radiation EEE Parts Certification Handbook

    NASA Technical Reports Server (NTRS)

    Reddell, Brandon

    2012-01-01

    Certifying electronic components is a very involved process. It includes pre-coordination with the radiation test facility for time, schedule, and cost, as well as close work with designers to develop test procedures and hardware. It also involves work with radiation engineers to understand the effects of the radiation field on the test article and setup, as well as the analysis and production of a test report. The technical content of traditional ionizing radiation testing protocol is in wide use and generally follows established standards (ref. Appendix C). This document is not intended to cover all these areas but rather the methodology of using the Variable Depth Bragg Peak (VDBP) test method to characterize an electronic component. The VDBP test method is primarily used for deep space applications of electronics. However, it can be used on any part for any radiation environment, especially parts whose sensitive volume cannot otherwise be reached by the radiation beam. An example of this problem would be issues that arise in de-lidding of parts or in parts with flip-chip designs, etc. The VDBP method is ideally suited to testing modern avionics designs, which increasingly incorporate commercial off-the-shelf (COTS) parts and units. Software developed at Johnson Space Center (JSC) assists users in deriving the radiation characterization data from the raw test data.

  6. Assessment of resampling methods for causality testing: A note on the US inflation behavior

    PubMed Central

    Papana, Angeliki; Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However this relationship cannot be explained on the basis of traditional cost-push mechanisms. PMID:28708870
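    The time-shifted surrogate scheme can be illustrated compactly: circularly shift the driver to destroy the driving-response coupling while preserving its marginal and autocorrelation structure, recompute the statistic, and locate the original value in the surrogate distribution. The sketch below uses a lagged cross-correlation as a stand-in for the PTE, which requires a dedicated information-theoretic estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
y = 0.5 * np.roll(x, 1) + rng.normal(size=n)  # y driven by lagged x

def coupling_stat(a, b, lag=1):
    """Lagged cross-correlation; a stand-in for PTE in this sketch."""
    return abs(np.corrcoef(a[:-lag], b[lag:])[0, 1])

t0 = coupling_stat(x, y)
# Time-shifted surrogates: circular shifts of the driver destroy the
# driver-response coupling while preserving its autocorrelation.
shifts = rng.integers(50, n - 50, size=500)
surrogates = np.array([coupling_stat(np.roll(x, s), y) for s in shifts])
p = (1 + np.sum(surrogates >= t0)) / (1 + len(surrogates))
print(f"statistic = {t0:.3f}, surrogate p-value = {p:.4f}")
```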

  7. Assessment of resampling methods for causality testing: A note on the US inflation behavior.

    PubMed

    Papana, Angeliki; Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However this relationship cannot be explained on the basis of traditional cost-push mechanisms.

  8. Estimating, Testing, and Comparing Specific Effects in Structural Equation Models: The Phantom Model Approach

    ERIC Educational Resources Information Center

    Macho, Siegfried; Ledermann, Thomas

    2011-01-01

    The phantom model approach for estimating, testing, and comparing specific effects within structural equation models (SEMs) is presented. The rationale underlying this novel method consists in representing the specific effect to be assessed as a total effect within a separate latent variable model, the phantom model that is added to the main…

  9. Mothers Who Kill Their Offspring: Testing Evolutionary Hypothesis in a 110-Case Italian Sample

    ERIC Educational Resources Information Center

    Camperio Ciani, Andrea S.; Fontanesi, Lilybeth

    2012-01-01

    Objectives: This research aimed to identify incidents of mothers in Italy killing their own children and to test an adaptive evolutionary hypothesis to explain their occurrence. Methods: 110 cases of mothers killing 123 of their own offspring from 1976 to 2010 were analyzed. Each case was classified using 13 dichotomic variables. Descriptive…

  10. Effects of School-Based Point-of-Testing Counselling on Health Status Variables among Rural Adolescents

    ERIC Educational Resources Information Center

    Murimi, Mary W.; Chrisman, Matthew S.; Hughes, Kelly; Taylor, Chris; Kim, Yeonsoo; McAllister, Tiffany L.

    2015-01-01

    Objective: Rural areas may suffer from a lack of access to health care and programmes to promote behaviours such as healthy eating and physical activity. Point-of-testing counselling is a method of promoting a healthy lifestyle during an individual's most "teachable moment". Design/Setting: This longitudinal study examined the effects of…

  11. Examining Measurement Invariance and Differential Item Functioning with Discrete Latent Construct Indicators: A Note on a Multiple Testing Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Li, Tatyana; Menold, Natalja

    2018-01-01

    A latent variable modeling method for studying measurement invariance when evaluating latent constructs with multiple binary or binary scored items with no guessing is outlined. The approach extends the continuous indicator procedure described by Raykov and colleagues, utilizes similarly the false discovery rate approach to multiple testing, and…

  12. Probe-specific mixed-model approach to detect copy number differences using multiplex ligation-dependent probe amplification (MLPA)

    PubMed Central

    González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier

    2008-01-01

    Background: The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed-model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed-model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results: Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We illustrated the method using a controlled MLPA assay in which targeted regions are variable in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge, or autism; here, too, our method showed the best performance. Conclusion: Using the proposed mixed-model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual and incorporate experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760
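    A tolerance-interval threshold of the general kind described can be sketched as follows; this uses a normal-theory interval via Howe's approximation fitted to reference-probe variability, whereas the paper derives its thresholds from the mixed models, so the sketch is only illustrative:

```python
import numpy as np
from scipy import stats

def tolerance_interval(reference, coverage=0.99, confidence=0.95):
    """Two-sided normal tolerance interval (Howe's approximation) fitted
    to reference probe ratios that reflect random error variability only."""
    reference = np.asarray(reference, dtype=float)
    n = len(reference)
    mean, sd = reference.mean(), reference.std(ddof=1)
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, n - 1)
    k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)
    return mean - k * sd, mean + k * sd

# Hypothetical normalized probe ratios (~1.0 = two copies).
rng = np.random.default_rng(3)
reference = 1 + rng.normal(0, 0.03, 40)          # control-probe variability
lo, hi = tolerance_interval(reference)
test_probes = np.array([1.01, 0.97, 0.55, 1.48])  # last two look altered
print("interval:", (round(lo, 3), round(hi, 3)))
print("altered probe indices:", np.where((test_probes < lo) | (test_probes > hi))[0])
```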

  13. Impact of climate variability and anthropogenic activity on streamflow in the Three Rivers Headwater Region, Tibetan Plateau, China

    NASA Astrophysics Data System (ADS)

    Jiang, Chong; Li, Daiqing; Gao, Yanni; Liu, Wenfeng; Zhang, Linbo

    2017-07-01

    Under the impacts of climate variability and human activities, streamflow in China's large basins has fluctuated strongly. It is therefore crucial to separate the impacts of climate variability and human activities on streamflow fluctuation for better water resources planning and management. In this study, the Three Rivers Headwater Region (TRHR) was chosen as the study area. Long-term hydrological data for the TRHR were collected in order to investigate the changes in annual runoff during the period 1956-2012. The nonparametric Mann-Kendall test, moving t test, Pettitt test, Mann-Kendall-Sneyers test, and the cumulative anomaly curve were used to identify trends and change points in the hydro-meteorological variables. Change points in runoff were identified in the three basins, occurring around 1989 and 1993 and dividing the long-term runoff series into a natural period and a human-induced period. The hydrologic sensitivity analysis method was then employed to evaluate the effects of climate variability and human activities on mean annual runoff for the human-induced period based on precipitation and potential evapotranspiration. In the human-induced period, climate variability was the main factor that increased (reduced) runoff in the LRB and YARB (YRB), with a contribution of more than 90%, while the increase (decrease) due to human activities accounted for less than 10%, showing that runoff in the TRHR is more sensitive to climate variability than to human activities. The intra-annual distribution of runoff shifted gradually from a double-peak pattern to a single-peak pattern, mainly influenced by atmospheric circulation in the summer and autumn. The inter-annual variation in runoff was jointly controlled by the East Asian monsoon, the westerlies, and Tibetan Plateau monsoons.
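    Of the change-point tests listed, the Pettitt test is compact enough to sketch directly (synthetic runoff series, not the TRHR data):

```python
import numpy as np

def pettitt_test(x):
    """Pettitt's nonparametric change-point test. Returns the most
    probable change-point index and the approximate p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # U_t = sum over i <= t, j > t of sign(x_j - x_i)
    u = np.array([np.sign(x[t + 1:, None] - x[: t + 1]).sum()
                  for t in range(n - 1)])
    k = np.abs(u).max()
    t_hat = int(np.abs(u).argmax())
    p = 2.0 * np.exp(-6.0 * k**2 / (n**3 + n**2))
    return t_hat, min(p, 1.0)

# Hypothetical annual runoff with a downward shift after year 35.
rng = np.random.default_rng(4)
runoff = np.concatenate([rng.normal(100, 8, 35), rng.normal(88, 8, 22)])
t_hat, p = pettitt_test(runoff)
print(f"change point after index {t_hat}, p = {p:.4f}")
```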

  14. Old and New Ideas for Data Screening and Assumption Testing for Exploratory and Confirmatory Factor Analysis

    PubMed Central

    Flora, David B.; LaBrish, Cathy; Chalmers, R. Philip

    2011-01-01

    We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables. PMID:22403561

  15. Datamining approaches for modeling tumor control probability.

    PubMed

    Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D

    2010-11-01

    Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, the mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with rs=0.68 on leave-one-out testing, compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and the cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
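    The kernel-method-with-leave-one-out evaluation can be sketched with scikit-learn; the features and outcomes below are invented placeholders for the dosimetric variables (e.g., GTV volume, V75), and SVR is used since TCP is a continuous outcome:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical dosimetric features and TCP outcomes for 56 patients.
rng = np.random.default_rng(5)
X = rng.normal(size=(56, 2))
y = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.5, 56))))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
preds = np.empty_like(y)
for train, test in LeaveOneOut().split(X):
    model.fit(X[train], y[train])       # refit without the held-out patient
    preds[test] = model.predict(X[test])

rs, _ = spearmanr(y, preds)
print(f"leave-one-out Spearman rs = {rs:.2f}")
```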

  16. Assessment of Technologies for the Space Shuttle External Tank Thermal Protection System and Recommendations for Technology Improvement - Part III: Material Property Characterization, Analysis, and Test Methods

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Johnson, Theodore F.; Whitley, Karen S.

    2005-01-01

    The objective of this report is to contribute to the independent assessment of the Space Shuttle External Tank Foam Material. This report specifically addresses material modeling, characterization testing, data reduction methods, and data pedigree. A brief description of the External Tank foam materials, locations, and standard failure modes is provided to develop suitable background information. A review of mechanics based analysis methods from the open literature is used to provide an assessment of the state-of-the-art in material modeling of closed cell foams. Further, this report assesses the existing material property database and investigates sources of material property variability. The report presents identified deficiencies in testing methods and procedures, recommendations for additional testing as required, identification of near-term improvements that should be pursued, and long-term capabilities or enhancements that should be developed.

  17. Carryover effect of hip and knee exercises program on functional performance in individuals with patellofemoral pain syndrome

    PubMed Central

    Ahmed Hamada, Hamada; Hussein Draz, Amira; Koura, Ghada Mohamed; Saab, Ibtissam M.

    2017-01-01

    [Purpose] This study was carried out to investigate the carryover effect of a hip and knee exercise program on functional performance (the single legged hop test as a functional performance test and the Kujala score for functional activities). [Subjects and Methods] Thirty patients with patellofemoral pain syndrome were randomly assigned to two equal groups. Group A consisted of 15 patients undergoing hip strengthening exercises for four weeks, with all variables then measured, followed by an additional four weeks of the knee exercise program and a second measurement of all variables. Group B consisted of 15 patients undergoing the knee exercise program for four weeks, with all variables then measured, followed by an additional four weeks of hip strengthening exercises. Functional abilities and knee muscle performance were assessed using the Kujala questionnaire and the single legged hop test, respectively, before and after completion of the first 4 weeks and again after 8 weeks for both groups. [Results] A significantly greater increase in the Kujala questionnaire was observed in group A than in group B, whereas the increase in the single legged hop performance test was significantly greater in group B than in group A. [Conclusion] Starting with hip exercises improves the performance of subjects more than their functional activities, while starting with knee exercises improves the functional activities more than performance. PMID:28878459

  18. Carryover effect of hip and knee exercises program on functional performance in individuals with patellofemoral pain syndrome.

    PubMed

    Ahmed Hamada, Hamada; Hussein Draz, Amira; Koura, Ghada Mohamed; Saab, Ibtissam M

    2017-08-01

    [Purpose] This study was carried out to investigate the carryover effect of a hip and knee exercise program on functional performance (the single legged hop test as a functional performance test and the Kujala score for functional activities). [Subjects and Methods] Thirty patients with patellofemoral pain syndrome were randomly assigned to two equal groups. Group A consisted of 15 patients undergoing hip strengthening exercises for four weeks, with all variables then measured, followed by an additional four weeks of the knee exercise program and a second measurement of all variables. Group B consisted of 15 patients undergoing the knee exercise program for four weeks, with all variables then measured, followed by an additional four weeks of hip strengthening exercises. Functional abilities and knee muscle performance were assessed using the Kujala questionnaire and the single legged hop test, respectively, before and after completion of the first 4 weeks and again after 8 weeks for both groups. [Results] A significantly greater increase in the Kujala questionnaire was observed in group A than in group B, whereas the increase in the single legged hop performance test was significantly greater in group B than in group A. [Conclusion] Starting with hip exercises improves the performance of subjects more than their functional activities, while starting with knee exercises improves the functional activities more than performance.

  19. Comparative Performance of Reagents and Platforms for Quantitation of Cytomegalovirus DNA by Digital PCR

    PubMed Central

    Gu, Z.; Sam, S. S.; Sun, Y.; Tang, L.; Pounds, S.; Caliendo, A. M.

    2016-01-01

    A potential benefit of digital PCR is a reduction in result variability across assays and platforms. Three sets of PCR reagents were tested on two digital PCR systems (Bio-Rad and RainDance) for quantitation of cytomegalovirus (CMV). Both commercial quantitative viral standards and 16 patient samples were tested. Quantitative accuracy (compared to nominal values) and variability were determined based on viral standard testing results. Quantitative correlation and variability were assessed with pairwise comparisons across all reagent-platform combinations for clinical plasma sample results. The three reagent sets, when used to assay quantitative standards on the Bio-Rad system, all showed a high degree of accuracy, low variability, and close agreement with one another. When used on the RainDance system, one of the three reagent sets appeared to have a much better correlation to nominal values than did the other two. Quantitative results for patient samples showed good correlation in most pairwise comparisons, with poorer correlations in some comparisons involving samples with low viral loads. Digital PCR is a robust method for measuring CMV viral load. Some degree of result variation may be seen, depending on the platform and reagents used; this variation appears to be greater in samples with low viral load values. PMID:27535685

  20. Comparison of Immunohistochemical Expression of CD10 in Odontogenic Cysts

    PubMed Central

    Munisekhar, M.S.; Suri, Charu; Rajalbandi, Santosh Kumar; M.R., Pradeep; Gothe, Pavan

    2014-01-01

    Background: Expression of CD10 has been documented in various tumors such as nasopharyngeal carcinoma, gastric carcinoma, squamous cell carcinoma, and odontogenic tumors. Aim: To evaluate and compare CD10 expression in odontogenic cysts, namely radicular cyst, dentigerous cyst, and odontogenic keratocyst (OKC). Materials and Methods: A total of 60 cases were included in the study, comprising 20 cases each of radicular cyst, dentigerous cyst, and OKC. Each case was evaluated and compared for immunohistochemical expression of CD10. Results were statistically analysed using ANOVA followed by the Tukey-Kramer multiple comparisons post hoc test for continuous variables and the chi-square test for discrete variables. Results: Among the cysts, sub-epithelial stromal CD10 expression was found in more OKC cases. Conclusion: CD10 expression was greater in OKC compared to radicular and dentigerous cysts. PMID:25584313

  1. Effects of Visual Feedback-Induced Variability on Motor Learning of Handrim Wheelchair Propulsion

    PubMed Central

    Leving, Marika T.; Vegter, Riemer J. K.; Hartog, Johanneke; Lamoth, Claudine J. C.; de Groot, Sonja; van der Woude, Lucas H. V.

    2015-01-01

    Background It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. Methods 17 participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received an equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare the two groups, the pre- and post-tests were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables, with instruction to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and to optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration, the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables, with their respective coefficients of variation, were calculated to evaluate the amount of intra-individual variability. Results The feedback group, which practiced with higher intra-individual variability, improved propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. Conclusion These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that improvements in mechanical efficiency and propulsion technique do not always appear simultaneously during the motor learning process. Their relationship is most likely modified by other factors, such as the amount of intra-individual variability. PMID:25992626

  2. Comparison of local grid refinement methods for MODFLOW

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.; Leake, S.A.

    2006-01-01

    Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions. Copyright © 2006 The Author(s).

  3. Effect of various binning methods and ROI sizes on the accuracy of the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Joon Beom; Sung, Yu Sub; Park, Bum-Woo; Lee, Youngjoo; Park, Seong Hoon; Lee, Young Kyung; Kang, Suk-Ho

    2008-03-01

    To find the optimal binning method and ROI size for an automatic classification system that differentiates diffuse infiltrative lung diseases on the basis of textural analysis at HRCT, six hundred circular regions of interest (ROIs) with 10-, 20-, and 30-pixel diameters, comprising 100 ROIs for each of six regional disease patterns (normal, NL; ground-glass opacity, GGO; reticular opacity, RO; honeycombing, HC; emphysema, EMPH; and consolidation, CONS), were marked by an experienced radiologist on HRCT images. Histogram (mean) and co-occurrence matrix (mean and SD of angular second moment, contrast, correlation, entropy, and inverse difference momentum) features were employed to test binning and ROI effects. To find optimal binning, variable-bin-size linear binning (LB; bin size Q: 4-30, 32, 64, 128, 144, 196, 256, 384) and non-linear binning (NLB; Q: 4-30) methods (K-means and Fuzzy C-means clustering) were tested. For automated classification, an SVM classifier was implemented. To assess the system, five-fold cross-validation was used, and each test was repeated twenty times. Overall accuracies for every combination of ROI and binning sizes were statistically compared. For small binning sizes (Q <= 10), NLB showed significantly better accuracy than LB, and K-means NLB (Q = 26) was statistically significantly better than every LB. For the 30x30 ROI size and most binning sizes, the K-means method performed better than the other NLB and LB methods. With optimal binning and the other parameters set, the overall sensitivity of the classifier was 92.85%. The sensitivity and specificity of the system for each class were as follows: NL, 95%, 97.9%; GGO, 80%, 98.9%; RO, 85%, 96.9%; HC, 94.7%, 97%; EMPH, 100%, 100%; and CONS, 100%, 100%. We thus determined the optimal binning method and ROI size for the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT.
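    K-means non-linear binning, the best-performing quantization here, can be sketched as a one-dimensional clustering of ROI gray levels followed by ordinal relabeling of the clusters (illustrative parameters, not the study's pipeline):

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_nonlinear_binning(roi_pixels, q=26, seed=0):
    """Quantize gray levels into q non-linear bins via K-means, so that
    bin edges adapt to the ROI's intensity distribution (unlike
    equal-width linear binning). Returns per-pixel ordinal bin labels,
    ready for co-occurrence matrix computation."""
    values = np.asarray(roi_pixels, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=q, n_init=10, random_state=seed).fit(values)
    # Relabel clusters in order of ascending center so labels are ordinal.
    order = np.argsort(km.cluster_centers_.ravel())
    rank = np.empty(q, dtype=int)
    rank[order] = np.arange(q)
    return rank[km.labels_].reshape(np.shape(roi_pixels))

# Hypothetical 30x30 HRCT ROI (values loosely in the lung HU range).
roi = np.random.default_rng(6).normal(-700, 150, size=(30, 30))
binned = kmeans_nonlinear_binning(roi, q=8)
print(binned.min(), binned.max())  # ordinal labels 0..7
```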

  4. Validity and Reliability of the 8-Item Work Limitations Questionnaire.

    PubMed

    Walker, Timothy J; Tullar, Jessica M; Diamond, Pamela M; Kohl, Harold W; Amick, Benjamin C

    2017-12-01

    Purpose To evaluate the factorial validity, scale reliability, test-retest reliability, convergent validity, and discriminant validity of the 8-item Work Limitations Questionnaire (WLQ) among employees from a public university system. Methods A secondary analysis using de-identified data from employees who completed an annual Health Assessment between 2009 and 2015 addressed the research aims. Confirmatory factor analysis (CFA) (n = 10,165) tested the latent structure of the 8-item WLQ. Scale reliability was determined using a CFA-based approach, while test-retest reliability was determined using the intraclass correlation coefficient. Convergent/discriminant validity was tested by evaluating relations of the 8-item WLQ with health/performance variables for convergent validity (health-related work performance, number of chronic conditions, and general health) and demographic variables for discriminant validity (gender and institution type). Results A 1-factor model with three correlated residuals demonstrated excellent model fit (CFI = 0.99, TLI = 0.99, RMSEA = 0.03, and SRMR = 0.01). The scale reliability was acceptable (0.69, 95% CI 0.68-0.70) and the test-retest reliability was very good (ICC = 0.78). Low-to-moderate associations were observed between the 8-item WLQ and the health/performance variables, while weak associations were observed with the demographic variables. Conclusions The 8-item WLQ demonstrated sufficient reliability and validity among employees from a public university system. The results suggest the 8-item WLQ is a usable alternative for studies when the more comprehensive 25-item WLQ is not available.

  5. Valx: A System for Extracting and Structuring Numeric Lab Test Comparison Statements from Text.

    PubMed

    Hao, Tianyong; Liu, Hongfang; Weng, Chunhua

    2016-05-17

    To develop an automated method for extracting and structuring numeric lab test comparison statements from text and evaluate the method using clinical trial eligibility criteria text. Leveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes seven steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable - numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statements verification. Our reference standard was the consensus-based annotation among three raters for all comparison statements for two variables, i.e., HbA1c and glucose, identified from all of Type 1 and Type 2 diabetes trials in ClinicalTrials.gov. The precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, 97.8% for Type 2 diabetes trials, respectively. The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, 92.3% for Type 2 diabetes trials, respectively. Valx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement among the collaborative scientific community.
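    Steps 2-4, extracting the number, unit, and comparison operator and associating them with a variable, can be caricatured with a single regular expression; the real system layers UMLS-based variable identification, context filtering, and unit normalization on top of this. A toy sketch:

```python
import re

# Toy extractor for numeric lab test comparison statements. Valx itself
# uses UMLS and web-derived knowledge plus context-based filtering and
# measurement unit normalization; this only shows the core pattern idea.
PATTERN = re.compile(
    r"(?P<var>HbA1c|glucose)\s*"
    r"(?P<op><=|>=|<|>|=)\s*"
    r"(?P<num>\d+(?:\.\d+)?)\s*"
    r"(?P<unit>%|mg/dL|mmol/L)?",
    re.IGNORECASE,
)

criteria = "Inclusion: HbA1c >= 7.0% and fasting glucose < 240 mg/dL."
for m in PATTERN.finditer(criteria):
    print(m.group("var"), m.group("op"), float(m.group("num")),
          m.group("unit") or "")
```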

  6. Normalizing and scaling of data to derive human response corridors from impact tests.

    PubMed

    Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A

    2014-06-03

    It is well known that variability is inherent in any biological experiment. Human cadavers (Post-Mortem Human Subjects, PMHS) are routinely used to determine responses to impact loading for crashworthiness applications, including civilian (motor vehicle) and military environments. It is important to transform measured variables from PMHS tests (accelerations, forces, and deflections) to a standard or reference population, a process termed normalization. The transformation process should account for inter-specimen variations under some underlying assumptions used during normalization. Scaling is a process by which normalized responses are converted from one standard to another (for example, from the mid-size adult male to large-male and small-female adults, and to pediatric populations). These responses are used to derive corridors to assess the biofidelity of anthropomorphic test devices (crash dummies) used to predict injury in impact environments and to design injury-mitigating devices. This survey examines the pros and cons of different approaches for obtaining normalized and scaled responses and corridors used in biomechanical studies over four decades. Specifically, the equal-stress equal-velocity and impulse-momentum methods, along with their variations, are discussed in this review. Approaches ranging from subjective evaluation to quasi-static loading methods are discussed for deriving temporal mean ± one standard deviation human corridors of time-varying fundamental responses and cross variables (e.g., force-deflection). The survey offers some insights into the potential efficacy of these approaches with examples from recent impact tests and concludes with recommendations for future studies. The importance of considering various parameters during the experimental design of human impact tests is stressed. Published by Elsevier Ltd.
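    For the equal-stress equal-velocity approach, the commonly used scale factors all follow from a single geometric scale λ derived from the mass ratio. A sketch under the usual assumptions (geometric similarity, equal material density and modulus across subjects; the 75 kg standard mass and sample values are illustrative):

```python
# Equal-stress equal-velocity scaling: commonly used factors for
# normalizing a subject's response to a standard reference population.
def scale_factors(m_subject, m_standard=75.0):
    lam = (m_standard / m_subject) ** (1.0 / 3.0)  # geometric length scale
    return {
        "length/deflection": lam,   # deflections scale with length
        "time": lam,                # equal velocity => time ~ length
        "force": lam**2,            # equal stress => force ~ area
        "acceleration": 1.0 / lam,  # a = F/m ~ lam**2 / lam**3
    }

# Normalize a 62 kg subject's peak impact force and chest deflection.
f = scale_factors(62.0)
print(f"force scale {f['force']:.3f}, deflection scale {f['length/deflection']:.3f}")
print(f"normalized force: {5.1 * f['force']:.2f} kN (from 5.1 kN measured)")
```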

  7. The CO2 stimulus for cerebrovascular reactivity: Fixing inspired concentrations vs. targeting end-tidal partial pressures.

    PubMed

    Fisher, Joseph A

    2016-06-01

    Cerebrovascular reactivity (CVR) studies have elucidated the physiology and pathophysiology of cerebral blood flow regulation. A non-invasive, high spatial resolution approach uses carbon dioxide (CO2) as the vasoactive stimulus and magnetic resonance techniques to estimate the cerebral blood flow response. CVR is assessed as the ratio of response change to stimulus change. Precise control of the stimulus is sought to minimize CVR variability between tests and to show functional differences. Computerized methods targeting end-tidal CO2 partial pressures are precise, but expensive. Simpler, improvised methods that fix the inspired CO2 concentrations have been recommended as less expensive, and so more widely accessible. However, these methods have drawbacks that have not been previously presented by those that advocate their use, or those that employ them in their studies. As one of the developers of a computerized method, I provide my perspective on the trade-offs between these two methods. The main concern is that declaring the precision of a fixed inspired concentration of CO2 is misleading: it does not, as implied, translate to precise control of the actual vasoactive stimulus, the arterial partial pressure of CO2. The inherent test-to-test, and therefore subject-to-subject, variability precludes clinical application of findings. Moreover, improvised methods imply widespread duplication of development, assembly time, and costs, yet lack uniformity and quality control. A tabular comparison between approaches is provided. © The Author(s) 2016.

  8. Simplest chronoscope. III. Further comparisons between reaction times obtained by meterstick versus machine.

    PubMed

    Montare, Alberto

    2013-06-01

    The three classical Donders' reaction time (RT) tasks (simple, choice, and discriminative RTs) were employed to compare reaction time scores from college students obtained by use of Montare's simplest chronoscope (meterstick) methodology to scores obtained by use of a digital-readout multi-choice reaction timer (machine). Five hypotheses were tested. Simple RT, choice RT, and discriminative RT were faster when obtained by meterstick than by machine. The meterstick method showed higher reliability than the machine method and was less variable. The meterstick method of the simplest chronoscope may help to alleviate the longstanding problems of low reliability and high variability of reaction time performances; while at the same time producing faster performance on Donders' simple, choice and discriminative RT tasks than the machine method.
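    The meterstick method converts catch distance to reaction time through free-fall kinematics, d = (1/2) g t², which is all the "simplest chronoscope" needs:

```python
import math

# Ruler-drop conversion: the distance a meterstick falls before being
# caught gives the reaction time via t = sqrt(2 d / g).
def reaction_time_s(catch_distance_cm, g=9.81):
    return math.sqrt(2 * (catch_distance_cm / 100.0) / g)

for d_cm in (10, 20, 30):
    print(f"caught at {d_cm} cm -> RT = {reaction_time_s(d_cm) * 1000:.0f} ms")
```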

  9. Finite element implementation of state variable-based viscoplasticity models

    NASA Technical Reports Server (NTRS)

    Iskovitz, I.; Chang, T. Y. P.; Saleeb, A. F.

    1991-01-01

    The implementation of state variable-based viscoplasticity models is made in a general purpose finite element code for structural applications of metals deformed at elevated temperatures. Two constitutive models, Walker's and Robinson's models, are studied in conjunction with two implicit integration methods: the trapezoidal rule with Newton-Raphson iterations and an asymptotic integration algorithm. A comparison is made between the two integration methods, and the latter method appears to be computationally more appealing in terms of numerical accuracy and CPU time. However, in order to make the asymptotic algorithm robust, it is necessary to include a self adaptive scheme with subincremental step control and error checking of the Jacobian matrix at the integration points. Three examples are given to illustrate the numerical aspects of the integration methods tested.
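    The first integration scheme, the trapezoidal rule with Newton-Raphson iterations, can be shown on a generic scalar state-evolution equation; this is not Walker's or Robinson's actual constitutive model, only the numerical pattern:

```python
# Implicit trapezoidal rule with Newton-Raphson iteration on a generic
# stiff state-variable evolution equation ds/dt = f(s).
def f(s):
    return 1.0 - s**3          # state relaxes toward s = 1

def df(s):
    return -3.0 * s**2         # analytic Jacobian for Newton's method

def trapezoidal_step(s_n, h, tol=1e-12, max_iter=50):
    s = s_n                    # initial Newton guess: previous state
    for _ in range(max_iter):
        g = s - s_n - 0.5 * h * (f(s_n) + f(s))   # residual of the rule
        dg = 1.0 - 0.5 * h * df(s)
        step = g / dg
        s -= step
        if abs(step) < tol:
            break
    return s

s, h = 0.0, 0.1
for _ in range(100):
    s = trapezoidal_step(s, h)
print(f"steady state: s = {s:.6f}")   # -> 1.0
```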

  10. A method for the dynamic management of genetic variability in dairy cattle

    PubMed Central

    Colleau, Jean-Jacques; Moureaux, Sophie; Briend, Michèle; Bechu, Jérôme

    2004-01-01

    According to the general approach developed in this paper, dynamic management of genetic variability in selected populations of dairy cattle is carried out for three simultaneous purposes: procreation of young bulls to be further progeny-tested, use of service bulls already selected and approval of recently progeny-tested bulls for use. At each step, the objective is to minimize the average pairwise relationship coefficient in the future population born from programmed matings and the existing population. As a common constraint, the average estimated breeding value of the new population, for a selection goal including many important traits, is set to a desired value. For the procreation of young bulls, breeding costs are additionally constrained. Optimization is fully analytical and directly considers matings. Corresponding algorithms are presented in detail. The efficiency of these procedures was tested on the current Norman population. Comparisons between optimized and real matings, clearly showed that optimization would have saved substantial genetic variability without reducing short-term genetic gains. PMID:15231230

  11. Determination of the anaerobic threshold in the pre-operative assessment clinic: inter-observer measurement error.

    PubMed

    Sinclair, R C F; Danjoux, G R; Goodridge, V; Batterham, A M

    2009-11-01

    The variability between observers in the interpretation of cardiopulmonary exercise tests may impact upon clinical decision making and affect the risk stratification and peri-operative management of a patient. The purpose of this study was to quantify the inter-reader variability in the determination of the anaerobic threshold (V-slope method). A series of 21 cardiopulmonary exercise tests from patients attending a surgical pre-operative assessment clinic were read independently by nine experienced clinicians regularly involved in clinical decision making. The grand mean for the anaerobic threshold was 10.5 ml O(2).kg body mass(-1).min(-1). The technical error of measurement was 8.1% (circa 0.9 ml.kg(-1).min(-1); 90% confidence interval, 7.4-8.9%). The mean absolute difference between readers was 4.5% with a typical random error of 6.5% (6.0-7.2%). We conclude that the inter-observer variability for experienced clinicians determining the anaerobic threshold from cardiopulmonary exercise tests is acceptable.
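    The technical error of measurement (TEM) quoted here has a standard two-observer form, TEM = sqrt(Σd²/2n), usually also expressed as a percentage of the grand mean. A sketch with simulated readings, not the study's data:

```python
import numpy as np

def technical_error_of_measurement(a, b):
    """TEM for paired readings of the same tests by two observers:
    TEM = sqrt(sum(d^2) / (2n)); %TEM expresses it relative to the mean."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    d = a - b
    tem = np.sqrt(np.sum(d**2) / (2 * len(d)))
    return tem, 100 * tem / np.mean(np.concatenate([a, b]))

# Hypothetical AT readings (ml O2/kg/min) from two readers on 21 tests.
rng = np.random.default_rng(7)
true_at = rng.normal(10.5, 1.5, 21)
reader1 = true_at + rng.normal(0, 0.6, 21)
reader2 = true_at + rng.normal(0, 0.6, 21)
tem, pct = technical_error_of_measurement(reader1, reader2)
print(f"TEM = {tem:.2f} ml/kg/min ({pct:.1f}%)")
```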

  12. Connecting the dots between math and reality: A study of critical thinking in high school physics

    NASA Astrophysics Data System (ADS)

    Loper, Timothy K.

    The purpose of this mixed-method study was to discover whether training in understanding relationships between variables would help students read and interpret equations for the purposes of problem solving in physics. Twenty students from two physics classes at a private Catholic high school participated in a one-group pretest-posttest design, with the conceptually based mathematical intervention as the independent variable and the test results as the dependent variable for the quantitative portion of the study. A random sample of students was interviewed pre- and post-intervention for the qualitative portion of the study to determine both how their understanding of equations changed and how their approach to the problems changed. The paired-sample t test showed a significant improvement on the Physics Critical Thinking test at the p<.01 alpha level; furthermore, the interview data indicated that the students displayed a deeper understanding of equations and their purpose, as opposed to the superficial understanding they had before the intervention.

  13. Validity of self-reported mechanical demands for occupational epidemiologic research of musculoskeletal disorders

    PubMed Central

    Barrero, Lope H; Katz, Jeffrey N; Dennerlein, Jack T

    2012-01-01

    Objectives To describe the relation of the measured validity of self-reported mechanical demands (self-reports) with the quality of validity assessments and the variability of the assessed exposure in the study population. Methods We searched for original articles, published between 1990 and 2008, reporting the validity of self-reports in three major databases: EBSCOhost, Web of Science, and PubMed. Identified assessments were classified by methodological characteristics (eg, type of self-report and reference method) and exposure dimension was measured. We also classified assessments by the degree of comparability between the self-report and the employed reference method, and the variability of the assessed exposure in the study population. Finally, we examined the association of the published validity (r) with this degree of comparability, as well as with the variability of the exposure variable in the study population. Results Of the 490 assessments identified, 75% used observation-based reference measures and 55% tested self-reports of posture duration and movement frequency. Frequently, validity studies did not report demographic information (eg, education, age, and gender distribution). Among assessments reporting correlations as a measure of validity, studies with a better match between the self-report and the reference method, and studies conducted in more heterogeneous populations tended to report higher correlations [odds ratio (OR) 2.03, 95% confidence interval (95% CI) 0.89–4.65 and OR 1.60, 95% CI 0.96–2.61, respectively]. Conclusions The reported data support the hypothesis that validity depends on study-specific factors often not examined. Experimentally manipulating the testing setting could lead to a better understanding of the capabilities and limitations of self-reported information. PMID:19562235

  14. System Accuracy Evaluation of Four Systems for Self-Monitoring of Blood Glucose Following ISO 15197 Using a Glucose Oxidase and a Hexokinase-Based Comparison Method.

    PubMed

    Link, Manuela; Schmid, Christina; Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Haug, Cornelia; Freckmann, Guido

    2015-04-14

    The standard ISO (International Organization for Standardization) 15197 is widely accepted for the accuracy evaluation of systems for self-monitoring of blood glucose (SMBG). Accuracy evaluation was performed for 4 SMBG systems (Accu-Chek Aviva, ContourXT, GlucoCheck XL, GlucoMen LX PLUS) with 3 test strip lots each. To investigate a possible impact of the comparison method on system accuracy data, 2 different established methods were used. The evaluation was performed in a standardized manner following test procedures described in ISO 15197:2003 (section 7.3). System accuracy was assessed by applying ISO 15197:2003 and, in addition, ISO 15197:2013 criteria (section 6.3.3). For each system, comparison measurements were performed with a glucose oxidase (YSI 2300 STAT Plus glucose analyzer) and a hexokinase (cobas c111) method. All 4 systems fulfilled the accuracy requirements of ISO 15197:2003 with the tested lots. The more stringent accuracy criteria of ISO 15197:2013 were fulfilled by 3 systems (Accu-Chek Aviva, ContourXT, GlucoMen LX PLUS) when compared to the manufacturer's comparison method and by 2 systems (Accu-Chek Aviva, ContourXT) when compared to the alternative comparison method. All systems showed lot-to-lot variability to a certain degree; 2 systems (Accu-Chek Aviva, ContourXT), however, showed only minimal differences in relative bias between the 3 evaluated lots. In this study, all 4 systems, with the evaluated test strip lots, complied with the accuracy criteria of ISO 15197:2003. Applying ISO 15197:2013 accuracy limits, differences in the accuracy of the tested systems were observed, demonstrating that the applied comparison method/system and the lot-to-lot variability can have a decisive influence on the accuracy data obtained for an SMBG system. © 2015 Diabetes Technology Society.
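
    The accuracy criterion applied in such evaluations can be checked programmatically. The sketch below assumes the commonly cited ISO 15197:2013 limits (at least 95% of results within ±15 mg/dL of the comparison result below 100 mg/dL, or within ±15% at or above 100 mg/dL); the readings are invented.

        import numpy as np

        def iso_15197_2013_pass_rate(measured, reference):
            """Fraction of readings inside the assumed ISO 15197:2013 limits:
            within +/-15 mg/dL of the comparison result below 100 mg/dL,
            within +/-15% at or above 100 mg/dL (95% must comply)."""
            measured, reference = np.asarray(measured), np.asarray(reference)
            limit = np.where(reference < 100, 15.0, 0.15 * reference)
            return np.mean(np.abs(measured - reference) <= limit)

        # Hypothetical meter vs. comparison-method readings (mg/dL)
        ref  = np.array([65, 90, 110, 180, 250, 320])
        meas = np.array([60, 97, 120, 170, 260, 340])
        print(f"within limits: {iso_15197_2013_pass_rate(meas, ref):.0%}")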

  15. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

    PubMed

    Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

    2010-08-06

    Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution-of-the-product method, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended; in contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
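
    As a concrete reference point, the percentile bootstrap recommended here can be sketched in a few lines: resample cases with replacement, re-estimate the indirect effect a·b, and take percentiles of the bootstrap distribution. The simple two-regression estimator and the simulated data below are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        x = rng.normal(size=n)                 # hypothetical mediation data: X -> M -> Y
        m = 0.5 * x + rng.normal(size=n)
        y = 0.4 * m + rng.normal(size=n)

        def indirect_effect(x, m, y):
            a = np.polyfit(x, m, 1)[0]         # slope of M on X
            # slope of Y on M, controlling for X
            b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]),
                                y, rcond=None)[0][0]
            return a * b

        boots = []
        for _ in range(2000):
            idx = rng.integers(0, n, n)        # resample cases with replacement
            boots.append(indirect_effect(x[idx], m[idx], y[idx]))

        lo, hi = np.percentile(boots, [2.5, 97.5])   # percentile CI for a*b
        print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")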

  16. Development and evaluation of a contrast sensitivity perimetry test for patients with glaucoma.

    PubMed

    Hot, Aliya; Dul, Mitchell W; Swanson, William H

    2008-07-01

    To design a contrast sensitivity perimetry (CSP) protocol that decreases variability in glaucomatous defects while maintaining good sensitivity to glaucomatous loss. Twenty patients with glaucoma and 20 control subjects were tested with a CSP protocol implemented on a monitor-based testing station. In the protocol, 26 locations were tested over the central visual field with Gabor patches with a peak spatial frequency of 0.4 cyc/deg and a two-dimensional spatial Gaussian envelope, with most of the energy concentrated within a 4 degrees circular region. Threshold was estimated by a staircase method. Patients and 10 age-similar control subjects were also tested on conventional automated perimetry (CAP), with the 24-2 pattern and the SITA Standard testing strategy. The neuroretinal rim area of the patients was measured with a retinal tomograph (Retina Tomograph II [HRT]; Heidelberg Engineering, Heidelberg, Germany). A Bland-Altman analysis of agreement was used to assess test-retest variability, compare depth of defect shown by the two perimetric tests, and investigate the relations between contrast sensitivity and neuroretinal rim area. Variability showed less dependence on defect depth for CSP than for CAP (z = 9.3, P < 0.001). Defect depth was similar for CAP and CSP when averaged by quadrant (r = 0.26, P > 0.13). The relation between defect depth and rim area was more consistent with CSP than with CAP (z = 9, P < 0.001). The implementation of CSP was successful in reducing test-retest variability in glaucomatous defects. CSP was in general agreement with CAP in terms of depth of defect and was in better agreement than CAP with HRT-determined rim area.
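
    A minimal sketch of the Bland-Altman analysis used here: test-retest differences are plotted against means, with the bias and 95% limits of agreement as horizontal lines. The sensitivities are simulated, not the study's data.

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(0)
        test1 = rng.normal(28, 3, 50)            # sensitivities, session 1 (dB)
        test2 = test1 + rng.normal(0, 1.5, 50)   # retest, session 2 (dB)

        mean = (test1 + test2) / 2               # magnitude of the measurement
        diff = test1 - test2                     # test-retest difference
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)            # 95% limits of agreement

        plt.scatter(mean, diff, s=12)
        for y in (bias, bias - loa, bias + loa):
            plt.axhline(y, linestyle="--")
        plt.xlabel("Mean of test and retest (dB)")
        plt.ylabel("Test - retest difference (dB)")
        plt.show()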

  17. Grand average ERP-image plotting and statistics: A method for comparing variability in event-related single-trial EEG activities across subjects and conditions

    PubMed Central

    Delorme, Arnaud; Miyakoshi, Makoto; Jung, Tzyy-Ping; Makeig, Scott

    2014-01-01

    With the advent of modern computing methods, modeling trial-to-trial variability in biophysical recordings, including electroencephalography (EEG), has become of increasing interest. Yet no widely used method exists for comparing variability in ordered collections of single-trial data epochs across conditions and subjects. We have developed a method based on an ERP-image visualization tool in which potential, spectral power, or some other measure at each time point in a set of event-related single-trial data epochs is represented as a color-coded horizontal line; these lines are then stacked to form a 2-D colored image. Moving-window smoothing across trial epochs can make otherwise hidden event-related features in the data more perceptible. Stacking trials in different orders, for example ordered by subject reaction time, by context-related information such as inter-stimulus interval, or by some other characteristic of the data (e.g., latency-window mean power or phase of some EEG source), can reveal aspects of the multifold complexities of trial-to-trial EEG data variability. This study demonstrates new methods for computing and visualizing grand ERP-image plots across subjects and for performing robust statistical testing on the resulting images. These methods have been implemented and made freely available in the EEGLAB signal-processing environment that we maintain and distribute. PMID:25447029
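
    The core of the ERP-image construction described here (the authors' actual implementation lives in EEGLAB) can be sketched as sorting single-trial epochs by a variable of interest and smoothing vertically across trials before display; the data below are synthetic.

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.signal import convolve2d

        rng = np.random.default_rng(0)
        n_trials, n_times = 200, 300
        epochs = rng.normal(size=(n_trials, n_times))  # single-trial data epochs
        rt = rng.uniform(0.3, 0.8, n_trials)           # per-trial reaction times (s)

        order = np.argsort(rt)                         # stack trials ordered by RT
        image = epochs[order]

        # Moving-window smoothing across neighbouring trials makes event-related
        # features perceptible in the otherwise noisy single-trial image.
        win = 10
        smoothed = convolve2d(image, np.ones((win, 1)) / win, mode="valid")

        plt.imshow(smoothed, aspect="auto", origin="lower")
        plt.xlabel("Time (samples)")
        plt.ylabel("Trials (sorted by reaction time)")
        plt.colorbar(label="Potential (a.u.)")
        plt.show()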

  18. A simple, rapid and validated high-performance liquid chromatography method suitable for clinical measurements of human mercaptalbumin and non-mercaptalbumin.

    PubMed

    Yasukawa, Keiko; Shimosawa, Tatsuo; Okubo, Shigeo; Yatomi, Yutaka

    2018-01-01

    Background Human mercaptalbumin and human non-mercaptalbumin have been reported as markers for various pathological conditions, such as kidney and liver diseases. These markers play important roles in redox regulation throughout the body. Despite the recognition of these markers in various pathophysiologic conditions, measurements of human mercaptalbumin and non-mercaptalbumin have not been popular because of the technical complexity and long measurement time of conventional methods. Methods Based on previous reports, we explored the optimal analytical conditions for a high-performance liquid chromatography method using an anion-exchange column packed with a hydrophilic polyvinyl alcohol gel. The method was then validated using performance tests as well as measurements of various patients' serum samples. Results We successfully established a reliable high-performance liquid chromatography method with an analytical time of only 12 min per test. The repeatability (within-day variability) and reproducibility (day-to-day variability) were 0.30% and 0.27% (CV), respectively. A very good correlation was obtained with the results of the conventional method. Conclusions A practical method for the clinical measurement of human mercaptalbumin and non-mercaptalbumin was established. This high-performance liquid chromatography method is expected to be a powerful tool for expanding the clinical usefulness of these markers and for elucidating the roles of albumin in redox reactions throughout the human body.

  19. Design of a factorial experiment with randomization restrictions to assess medical device performance on vascular tissue

    PubMed Central

    2011-01-01

    Background Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. Methods The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. Results The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. Conclusions The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance. PMID:21599963
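
    The paper's analysis used SAS PROC MIXED; a rough Python analogue would fit a mixed model with a random intercept for the whole plot to respect the randomization restriction. The factor layout and burst-pressure values below are invented stand-ins for the study's data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        # Hypothetical long-format data: each whole plot is a session in which
        # temperature could not be re-randomized; sub-plot factors vary within it.
        n_plots, n_sub = 8, 6
        df = pd.DataFrame({
            "whole_plot": np.repeat(np.arange(n_plots), n_sub),
            "temp": np.repeat(np.repeat(["low", "high"], n_plots // 2), n_sub),
            "solution": np.tile(["saline", "albumin", "none"], n_plots * 2),
            "vessel": np.tile(["carotid", "renal"], n_plots * 3),
        })
        df["burst"] = 900 + rng.normal(0, 50, len(df))   # burst pressure (mmHg)

        # Random intercept for the whole plot approximates the split-plot error
        # structure (the role PROC MIXED played in the paper's SAS analysis).
        fit = smf.mixedlm("burst ~ temp * solution * vessel", df,
                          groups=df["whole_plot"]).fit()
        print(fit.summary())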

  20. Description of new dry granular materials of variable cohesion and friction coefficient: Implications for laboratory modeling of the brittle crust

    NASA Astrophysics Data System (ADS)

    Abdelmalak, M. M.; Bulois, C.; Mourgues, R.; Galland, O.; Legland, J.-B.; Gruber, C.

    2016-08-01

    Cohesion and friction coefficient are fundamental parameters for scaling brittle deformation in laboratory models of geological processes. However, they are commonly not experimental variables, whereas (1) rocks range from cohesion-less to strongly cohesive and from low friction to high friction and (2) strata exhibit substantial cohesion and friction contrasts. This brittle paradox implies that the effects of brittle properties on processes involving brittle deformation cannot be tested in laboratory models. Solving this paradox requires the use of dry granular materials of tunable and controllable brittle properties. In this paper, we describe dry mixtures of fine-grained cohesive, high-friction silica powder (SP) and low-cohesion, low-friction glass microspheres (GM) that fulfill this requirement. We systematically estimated the cohesions and friction coefficients of mixtures of variable proportions using two independent methods: (1) a classic Hubbert-type shear box to determine the extrapolated cohesion (C) and friction coefficient (μ), and (2) direct measurements of the tensile strength (T0) and the height (H) of open fractures to calculate the true cohesion (C0). The measured values of cohesion increase from 100 Pa for pure GM to 600 Pa for pure SP, with a sub-linear trend of the cohesion with the mixture GM content. The two independent cohesion measurement methods, from shear tests and tension/extensional tests, yield very similar results for the extrapolated cohesion (C) and show that both are robust and can be used independently. The measured values of friction coefficients increase from 0.5 for pure GM to 1.05 for pure SP. The use of these granular material mixtures now allows testing (1) the effects of cohesion and friction coefficient in homogeneous laboratory models and (2) the effect of brittle layering on brittle deformation, as demonstrated by preliminary experiments. Therefore, the brittle properties become, at last, experimental variables.
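
    A minimal sketch of the Hubbert-type shear-box analysis: failure data are fit to the Coulomb criterion τ = C + μσn, so the slope of the fit gives the friction coefficient and the intercept the extrapolated cohesion. The stress values below are invented for illustration.

        import numpy as np

        # Hypothetical shear-box data: normal stress vs. shear stress at failure (Pa)
        sigma_n = np.array([200., 400., 600., 800., 1000.])
        tau     = np.array([480., 700., 880., 1110., 1290.])

        # Coulomb failure criterion: tau = C + mu * sigma_n
        mu, C = np.polyfit(sigma_n, tau, 1)
        print(f"friction coefficient mu = {mu:.2f}, "
              f"extrapolated cohesion C = {C:.0f} Pa")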

  1. A Kernel Machine Method for Detecting Effects of Interaction Between Multidimensional Variable Sets: An Imaging Genetics Application

    PubMed Central

    Ge, Tian; Nichols, Thomas E.; Ghosh, Debashis; Mormino, Elizabeth C.

    2015-01-01

    Measurements derived from neuroimaging data can serve as markers of disease and/or healthy development, are largely heritable, and have been increasingly utilized as (intermediate) phenotypes in genetic association studies. To date, imaging genetic studies have mostly focused on discovering isolated genetic effects, typically ignoring potential interactions with non-genetic variables such as disease risk factors, environmental exposures, and epigenetic markers. However, identifying significant interaction effects is critical for revealing the true relationship between genetic and phenotypic variables, and shedding light on disease mechanisms. In this paper, we present a general kernel machine based method for detecting effects of interaction between multidimensional variable sets. This method can model the joint and epistatic effect of a collection of single nucleotide polymorphisms (SNPs), accommodate multiple factors that potentially moderate genetic influences, and test for nonlinear interactions between sets of variables in a flexible framework. As a demonstration of application, we applied the method to data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) to detect the effects of the interactions between candidate Alzheimer's disease (AD) risk genes and a collection of cardiovascular disease (CVD) risk factors, on hippocampal volume measurements derived from structural brain magnetic resonance imaging (MRI) scans. Our method identified that two genes, CR1 and EPHA1, demonstrate significant interactions with CVD risk factors on hippocampal volume, suggesting that CR1 and EPHA1 may play a role in influencing AD-related neurodegeneration in the presence of CVD risks. PMID:25600633

  2. Enabling Advanced Wind-Tunnel Research Methods Using the NASA Langley 12-Foot Low Speed Tunnel

    NASA Technical Reports Server (NTRS)

    Busan, Ronald C.; Rothhaar, Paul M.; Croom, Mark A.; Murphy, Patrick C.; Grafton, Sue B.; O-Neal, Anthony W.

    2014-01-01

    Design of Experiment (DOE) testing methods were used to gather wind tunnel data characterizing the aerodynamic and propulsion forces and moments acting on a complex vehicle configuration with 10 motor-driven propellers, 9 control surfaces, a tilt wing, and a tilt tail. This paper describes the potential benefits and practical implications of using DOE methods for wind tunnel testing, with an emphasis on describing how it can affect model hardware, facility hardware, and software for control and data acquisition. With up to 23 independent variables (19 model and 2 tunnel) for some vehicle configurations, this recent test also provides an excellent example of using DOE methods to assess critical coupling effects in a reasonable timeframe for complex vehicle configurations. Results for an exploratory test using conventional angle-of-attack sweeps to assess aerodynamic hysteresis are summarized, and DOE results are presented for an exploratory test used to set the data sampling time for the overall test. DOE results are also shown for one production test characterizing normal force in the Cruise mode for the vehicle.

  3. DuPont qualicon BAX system real-time PCR assay for Escherichia coli O157:H7.

    PubMed

    Burns, Frank; Fleck, Lois; Andaloro, Bridget; Davis, Eugene; Rohrbeck, Jeff; Tice, George; Wallace, Morgan

    2011-01-01

    Evaluations were conducted to test the performance of the BAX System Real-Time PCR assay, which was certified as Performance Tested Method 031002 for screening E. coli O157:H7 in ground beef, beef trim, spinach, and lettuce. Method comparison studies performed on samples with low-level inoculates showed that the BAX System demonstrates a sensitivity equivalent or superior to the FDA-BAM and the USDA-FSIS culture methods, but with a significantly shorter time to result. Tests to evaluate inclusivity and exclusivity returned no false-negative and no false-positive results on a diverse panel of isolates, and tests for lot-to-lot variability and tablet stability demonstrated consistent performance. Ruggedness studies determined that none of the factors examined affect the performance of the assay. An accelerated shelf life study determined an initial 36 month shelf life for the test kit.

  4. Development of a test method against hot alkaline chemical splashes.

    PubMed

    Mäkinen, Helena; Nieminen, Kalevi; Mäki, Susanna; Siiskonen, Sirkku

    2008-01-01

    High temperature alkaline chemical liquids have caused injuries and hazardous situations in Finnish pulp manufacturing mills. There are no requirements and/or test method standards concerning protection against high temperature alkaline chemical splashes. This paper describes the test method development process to test and identify materials appropriate for hot liquid chemical hazard protection. In the first phase, the liquid was spilled through a stainless steel funnel and the protection performance was evaluated using a polyvinyl chloride (PVC) film under the test material. After several tentative improvements, a graphite crucible was used for heating and spilling the chemical, and a copper-coated K-type thermometer with 4 independent measuring areas was designed to measure the temperature under the material samples. The thermometer was designed to respond quickly so that peak temperatures could be measured. The main problem was to keep the spilled amount of chemical constant, which unfortunately resulted in significant variability in data.

  5. Input Variability Facilitates Unguided Subcategory Learning in Adults

    PubMed Central

    Eidsvåg, Sunniva Sørhus; Austad, Margit; Asbjørnsen, Arve E.

    2015-01-01

    Purpose This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Method Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half of the participants were familiarized with 32 different root words in a high-variability condition. The other half were familiarized with 16 different root words, each repeated twice for a total of 32 presentations in a high-repetition condition. Participants were tested on untrained members of the category to assess generalization. Familiarization and testing was completed 2 additional times. Results Only participants in the high-variability group showed evidence of learning after an initial period of familiarization. Participants in the high-repetition group were able to learn after additional input. Both groups benefited when words included 2 cues to gender compared to a single cue. Conclusions The results demonstrate that the degree of input variability can influence learners' ability to generalize a grammatical subcategory (noun gender) from a natural language. In addition, the presence of multiple cues to linguistic subcategory facilitated learning independent of variability condition. PMID:25680081

  6. Spatial Variability of Snowpack Properties On Small Slopes

    NASA Astrophysics Data System (ADS)

    Pielmeier, C.; Kronholm, K.; Schneebeli, M.; Schweizer, J.

    The spatial variability of alpine snowpacks is created by a variety of parameters like deposition, wind erosion, sublimation, melting, temperature, radiation and metamorphism of the snow. Spatial variability is thought to strongly control the avalanche initiation and failure propagation processes. Local snowpack measurements are currently the basis for avalanche warning services, and there exist contradicting hypotheses about the spatial continuity of avalanche-active snow layers and interfaces. Very little is known so far about the spatial variability of the snowpack; therefore, we have developed a systematic and objective method to measure the spatial variability of snowpack properties and layering and its relation to stability. For a complete coverage, the analysis of the spatial variability has to entail all scales from mm to km. In this study the small to medium scale spatial variability is investigated, i.e., the range from centimeters to tens of meters. During the winter 2000/2001 we took systematic measurements in lines and grids on a flat snow test field with grid distances from 5 cm to 0.5 m. Furthermore, we measured systematic grids with grid distances between 0.5 m and 2 m in undisturbed flat fields and on small slopes above the tree line at the Choerbschhorn, in the region of Davos, Switzerland. On 13 days we measured the spatial pattern of the snowpack stratigraphy with more than 110 snow micro-penetrometer measurements at slopes and flat fields. Within this measuring grid we placed 1 rutschblock and 12 stuffblock tests to measure the stability of the snowpack. With the large number of measurements we are able to use geostatistical methods to analyse the spatial variability of the snowpack. Typical correlation lengths are calculated from semivariograms. Systematic trends are discerned from random spatial variability using statistical models. Scale dependencies are shown and recurring scaling patterns are outlined. The importance of the small and medium scale spatial variability for the larger (kilometer) scale spatial variability, as well as for avalanche formation, is discussed. Finally, an outlook on spatial models for the snowpack variability is given.

  7. Embedding of multidimensional time-dependent observations.

    PubMed

    Barnard, J P; Aldrich, C; Gerber, M

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.

  8. Embedding of multidimensional time-dependent observations

    NASA Astrophysics Data System (ADS)

    Barnard, Jakobus P.; Aldrich, Chris; Gerber, Marius

    2001-10-01

    A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
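
    A rough sketch of the pipeline both records describe: build a Takens delay embedding of the multivariate observations, then apply independent component analysis to obtain linearly independent phase variables. The embedding dimension, lag, and toy signal are assumptions for illustration.

        import numpy as np
        from sklearn.decomposition import FastICA

        def delay_embed(series, dim, tau):
            """Takens delay embedding: series is (n_samples, n_channels);
            returns (n_samples - (dim - 1) * tau, dim * n_channels)."""
            n = len(series) - (dim - 1) * tau
            return np.hstack([series[i * tau : i * tau + n] for i in range(dim)])

        rng = np.random.default_rng(0)
        t = np.arange(5000) * 0.01
        obs = np.column_stack([np.sin(t), np.cos(1.7 * t)])
        obs += 0.05 * rng.normal(size=obs.shape)   # noisy multivariate observations

        embedded = delay_embed(obs, dim=5, tau=10)   # assumed embedding parameters
        ica = FastICA(n_components=5, random_state=0)
        phase_vars = ica.fit_transform(embedded)   # linearly independent phase variables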

  9. 40 CFR 60.45c - Compliance and performance test methods and procedures for particulate matter.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... requested by the Administrator, to determine compliance with the standards using the following procedures... Administrator when necessitated by process variables or other factors. (5) For Method 5 or 5B of appendix A of... this section. (1) Notify the Administrator 1 month before starting use of the system. (2) Notify the...

  10. 40 CFR 60.45c - Compliance and performance test methods and procedures for particulate matter.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... requested by the Administrator, to determine compliance with the standards using the following procedures... Administrator when necessitated by process variables or other factors. (5) For Method 5 or 5B of appendix A of... this section. (1) Notify the Administrator 1 month before starting use of the system. (2) Notify the...

  11. 40 CFR 60.45c - Compliance and performance test methods and procedures for particulate matter.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... requested by the Administrator, to determine compliance with the standards using the following procedures... Administrator when necessitated by process variables or other factors. (5) For Method 5 or 5B of appendix A of... this section. (1) Notify the Administrator 1 month before starting use of the system. (2) Notify the...

  12. Automated Dissolution for Enteric-Coated Aspirin Tablets: A Case Study for Method Transfer to a RoboDis II.

    PubMed

    Ibrahim, Sarah A; Martini, Luigi

    2014-08-01

    Dissolution method transfer is a complicated yet common process in the pharmaceutical industry. With increased pharmaceutical product manufacturing and dissolution acceptance requirements, dissolution testing has become one of the most labor-intensive quality control testing methods. There is an increased trend for automation in dissolution testing, particularly for large pharmaceutical companies to reduce variability and increase personnel efficiency. There is no official guideline for dissolution testing method transfer from a manual, semi-automated, to automated dissolution tester. In this study, a manual multipoint dissolution testing procedure for an enteric-coated aspirin tablet was transferred effectively and reproducibly to a fully automated dissolution testing device, RoboDis II. Enteric-coated aspirin samples were used as a model formulation to assess the feasibility and accuracy of media pH change during continuous automated dissolution testing. Several RoboDis II parameters were evaluated to ensure the integrity and equivalency of dissolution method transfer from a manual dissolution tester. This current study provides a systematic outline for the transfer of the manual dissolution testing protocol to an automated dissolution tester. This study further supports that automated dissolution testers compliant with regulatory requirements and similar to manual dissolution testers facilitate method transfer. © 2014 Society for Laboratory Automation and Screening.

  13. Digital mapping of soil properties in Canadian managed forests at 250 m of resolution using the k-nearest neighbor method

    NASA Astrophysics Data System (ADS)

    Mansuy, N. R.; Paré, D.; Thiffault, E.

    2015-12-01

    Large-scale mapping of soil properties is increasingly important for environmental resource management. While forested areas play critical environmental roles at local and global scales, forest soil maps are typically at low resolution. The objective of this study was to generate continuous national maps of selected soil variables (C, N and soil texture) for the Canadian managed forest landbase at 250 m resolution. We produced these maps using the kNN method with a training dataset of 538 ground plots from the National Forest Inventory (NFI) across Canada, and 18 environmental predictor variables. The best predictor variables were selected (7 topographic and 5 climatic variables) using the Least Absolute Shrinkage and Selection Operator method. On average, for all soil variables, topographic predictors explained 37% of the total variance versus 64% for the climatic predictors. The relative root mean square error (RMSE%) calculated with the leave-one-out cross-validation method gave values ranging between 22% and 99%, depending on the soil variables tested. RMSE values below 40% can be considered a good imputation in light of the low density of points used in this study. The study demonstrates strong capabilities for mapping forest soil properties at 250 m resolution, compared with the current Soil Landscape of Canada System, which is largely oriented towards the agricultural landbase. The methodology used here can potentially contribute to the national and international need for spatially explicit soil information in resource management science.
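
    A toy version of the imputation workflow: a k-nearest-neighbour regressor trained on plot-level predictors, scored with leave-one-out cross-validation, and summarized as relative RMSE as in the study. The predictors, soil response, and k are simulated stand-ins.

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(0)
        # Simulated stand-in for the 538 NFI ground plots: 12 selected
        # topographic/climatic predictors and one soil property (e.g., C stock)
        X = rng.normal(size=(538, 12))
        y = 3.0 + X[:, 0] - 0.5 * X[:, 5] + rng.normal(scale=0.5, size=538)

        knn = KNeighborsRegressor(n_neighbors=5)
        pred = cross_val_predict(knn, X, y, cv=LeaveOneOut())   # leave-one-out CV

        rmse = np.sqrt(np.mean((pred - y) ** 2))
        print(f"relative RMSE = {100 * rmse / y.mean():.1f}%")  # RMSE% as in the study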

  14. Between-centre variability in transfer function analysis, a widely used method for linear quantification of the dynamic pressure–flow relation: The CARNet study

    PubMed Central

    Meel-van den Abeelen, Aisha S.S.; Simpson, David M.; Wang, Lotte J.Y.; Slump, Cornelis H.; Zhang, Rong; Tarumi, Takashi; Rickards, Caroline A.; Payne, Stephen; Mitsis, Georgios D.; Kostoglou, Kyriaki; Marmarelis, Vasilis; Shin, Dae; Tzeng, Yu-Chieh; Ainslie, Philip N.; Gommer, Erik; Müller, Martin; Dorado, Alexander C.; Smielewski, Peter; Yelicich, Bernardo; Puppo, Corina; Liu, Xiuyun; Czosnyka, Marek; Wang, Cheng-Yen; Novak, Vera; Panerai, Ronney B.; Claassen, Jurgen A.H.R.

    2014-01-01

    Transfer function analysis (TFA) is a frequently used method to assess dynamic cerebral autoregulation (CA) using spontaneous oscillations in blood pressure (BP) and cerebral blood flow velocity (CBFV). However, controversies and variations exist in how research groups utilise TFA, causing high variability in interpretation. The objective of this study was to evaluate between-centre variability in TFA outcome metrics. 15 centres analysed the same 70 BP and CBFV datasets from healthy subjects (n = 50 rest; n = 20 during hypercapnia); 10 additional datasets were computer-generated. Each centre used their in-house TFA methods; however, certain parameters were specified to reduce a priori between-centre variability. Hypercapnia was used to assess discriminatory performance and synthetic data to evaluate effects of parameter settings. Results were analysed using the Mann–Whitney test and logistic regression. A large non-homogeneous variation was found in TFA outcome metrics between the centres. Logistic regression demonstrated that 11 centres were able to distinguish between normal and impaired CA with an AUC > 0.85. Further analysis identified TFA settings that are associated with large variation in outcome measures. These results indicate the need for standardisation of TFA settings in order to reduce between-centre variability and to allow accurate comparison between studies. Suggestions on optimal signal processing methods are proposed. PMID:24725709
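
    For orientation, transfer function analysis reduces to estimating H(f) = S_xy(f) / S_xx(f) from the BP (input) and CBFV (output) spectra and reporting gain, phase, and coherence in a low-frequency band. The sketch below uses Welch-type estimates; the sampling rate, band limits, and signals are assumptions, not the standardized settings the study calls for.

        import numpy as np
        from scipy import signal

        rng = np.random.default_rng(0)
        fs = 10.0                                 # assumed resampling rate (Hz)
        bp = rng.normal(size=3000)                # blood pressure (input, a.u.)
        cbfv = 0.8 * bp + rng.normal(scale=0.5, size=3000)  # flow velocity (output)

        # Welch-type auto- and cross-spectra; transfer function H(f) = S_xy / S_xx
        f, sxx = signal.welch(bp, fs=fs, nperseg=512)
        _, sxy = signal.csd(bp, cbfv, fs=fs, nperseg=512)
        _, coh = signal.coherence(bp, cbfv, fs=fs, nperseg=512)

        gain = np.abs(sxy) / sxx
        phase = np.degrees(np.angle(sxy))

        # Outcome metrics are typically averaged within a low-frequency band
        lf = (f >= 0.07) & (f < 0.20)             # assumed band limits
        print(f"LF gain {gain[lf].mean():.2f}, phase {phase[lf].mean():.1f} deg, "
              f"coherence {coh[lf].mean():.2f}")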

  15. An evaluation of the effect of recent temperature variability on the prediction of coral bleaching events.

    PubMed

    Donner, Simon D

    2011-07-01

    Over the past 30 years, warm thermal disturbances have become commonplace on coral reefs worldwide. These periods of anomalous sea surface temperature (SST) can lead to coral bleaching, a breakdown of the symbiosis between the host coral and symbiotic dinoflagellates which reside in coral tissue. The onset of bleaching is typically predicted to occur when the SST exceeds a local climatological maximum by 1 °C for a month or more. However, recent evidence suggests that the threshold at which bleaching occurs may depend on thermal history. This study uses global SST data sets (HadISST and NOAA AVHRR) and mass coral bleaching reports (from Reefbase) to examine the effect of historical SST variability on the accuracy of bleaching prediction. Two variability-based bleaching prediction methods are developed from global analysis of seasonal and interannual SST variability. The first method employs a local bleaching threshold derived from the historical variability in maximum annual SST to account for spatial variability in past thermal disturbance frequency. The second method uses a different formula to estimate the local climatological maximum to account for the low seasonality of SST in the tropics. The new prediction methods are tested against the common globally fixed threshold method using the observed bleaching reports. The results indicate that estimating the bleaching threshold from local historical SST variability delivers the highest predictive power, but also a higher rate of Type I errors. The second method has the lowest predictive power globally, though regional analysis suggests that it may be applicable in equatorial regions. The historical data analysis suggests that the bleaching threshold may have appeared to be constant globally because the magnitude of interannual variability in maximum SST is similar for many of the world's coral reef ecosystems. For example, the results show that an SST anomaly of 1 °C is equivalent to 1.73-2.94 standard deviations of the maximum monthly SST for two-thirds of the world's coral reefs. Coral reefs in the few regions that experience anomalously high interannual SST variability like the equatorial Pacific could prove critical to understanding how coral communities acclimate or adapt to frequent and/or severe thermal disturbances.
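
    The two prediction rules compared in the study can be summarized in a few lines: a globally fixed offset above the climatological maximum versus a threshold scaled by local historical variability. The climatology and the multiplier z below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical history of annual maximum monthly SST for one reef cell (deg C)
        annual_max_sst = 29.5 + 0.4 * rng.normal(size=30)
        clim_max = annual_max_sst.mean()

        # Globally fixed rule: bleaching predicted ~1 deg C above the local maximum
        fixed_threshold = clim_max + 1.0

        # Variability-based rule: threshold scales with historical variability in
        # maximum SST; z is an assumed multiplier (the study relates 1 deg C to
        # roughly 1.7-2.9 standard deviations on most reefs)
        z = 2.0
        variability_threshold = clim_max + z * annual_max_sst.std(ddof=1)

        print(f"fixed: {fixed_threshold:.2f} C, "
              f"variability-based: {variability_threshold:.2f} C")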

  16. Testing of technology readiness index model based on exploratory factor analysis approach

    NASA Astrophysics Data System (ADS)

    Ariani, AF; Napitupulu, D.; Jati, RK; Kadar, JA; Syafrullah, M.

    2018-04-01

    SMEs' readiness in using ICT will determine their adoption of ICT in the future. This study aims to evaluate a model of technology readiness in order to apply the technology to SMEs. The model is tested to find whether the TRI model is relevant for measuring ICT adoption, especially for SMEs in Indonesia. The research method used in this paper is a survey of a group of SMEs in South Tangerang. The survey measures the readiness to adopt ICT based on four variables: Optimism, Innovativeness, Discomfort, and Insecurity. Each variable contains several indicators to make sure the variable is measured thoroughly. The data collected through the survey are analysed using the factor analysis method with the help of SPSS software. The result of this study shows that the TRI model gives more descendants on some indicators and variables. This result may be caused by the fact that SME owners' knowledge is not homogeneous regarding either the technology they use or the type of their business.

  17. Variable Coding and Modulation Experiment Using NASA's Space Communication and Navigation Testbed

    NASA Technical Reports Server (NTRS)

    Downey, Joseph A.; Mortensen, Dale J.; Evans, Michael A.; Tollis, Nicholas S.

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed on the International Space Station provides a unique opportunity to evaluate advanced communication techniques in an operational system. The experimental nature of the Testbed allows for rapid demonstrations while using flight hardware in a deployed system within NASA's networks. One example is variable coding and modulation, which is a method to increase data-throughput in a communication link. This paper describes recent flight testing with variable coding and modulation over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Performance of the variable coding and modulation system is evaluated and compared to the capacity of the link, as well as standard NASA waveforms.

  18. Calibrating the pixel-level Kepler imaging data with a causal data-driven model

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Foreman-Mackey, Daniel; Hogg, David W.; Schölkopf, Bernhard

    2015-01-01

    In general, astronomical observations are affected by several kinds of noise, each with its own causal source; there is photon noise, stochastic source variability, and residuals coming from imperfect calibration of the detector or telescope. In particular, the precision of NASA Kepler photometry for exoplanet science—the most precise photometric measurements of stars ever made—appears to be limited by unknown or untracked variations in spacecraft pointing and temperature, and unmodeled stellar variability. Here we present the Causal Pixel Model (CPM) for Kepler data, a data-driven model intended to capture variability but preserve transit signals. The CPM works at the pixel level (not the photometric measurement level); it can capture more fine-grained information about the variation of the spacecraft than is available in the pixel-summed aperture photometry. The basic idea is that CPM predicts each target pixel value from a large number of pixels of other stars sharing the instrument variabilities while not containing any information on possible transits at the target star. In addition, we use the target star's future and past (auto-regression). By appropriately separating the data into training and test sets, we ensure that information about any transit will be perfectly isolated from the fitting of the model. The method has four hyper-parameters (the number of predictor stars, the auto-regressive window size, and two L2-regularization amplitudes for model components), which we set by cross-validation. We determine a generic set of hyper-parameters that works well on most of the stars with 11≤V≤12 mag and apply the method to a corresponding set of target stars with known planet transits. We find that we can consistently outperform (for the purposes of exoplanet detection) the Kepler Pre-search Data Conditioning (PDC) method for exoplanet discovery, often improving the SNR by a factor of two. While we have not yet exhaustively tested the method at other magnitudes, we expect that it should be generally applicable, with positive consequences for subsequent exoplanet detection or stellar variability (in which case we must exclude the autoregressive part to preserve intrinsic variability).
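
    In spirit, the CPM is a regularized linear prediction of one pixel's time series from many other stars' pixels, fit on one time block and evaluated on another so transit signals cannot leak into the model. The sketch below shows that skeleton with ridge regression; the predictor count, regularization strength, and data are placeholders for the paper's cross-validated choices.

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        n_cadences = 2000
        target = rng.normal(size=n_cadences)               # target-pixel light curve
        predictors = rng.normal(size=(n_cadences, 150))    # pixels of other stars

        # Train/test split in time so a transit in the test block cannot leak
        # into the fit (the paper's train-and-test framework).
        train = np.arange(n_cadences) < 1500
        model = Ridge(alpha=1e3)                           # L2 regularization
        model.fit(predictors[train], target[train])

        # Residuals on the held-out block: the systematics-corrected signal
        # in which transits would appear
        residual = target[~train] - model.predict(predictors[~train])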

  19. A comparison of two microscale laboratory reporting methods in a secondary chemistry classroom

    NASA Astrophysics Data System (ADS)

    Martinez, Lance Michael

    This study attempted to determine if there was a difference between the laboratory achievement of students who used a modified reporting method and those who used traditional laboratory reporting. The study also determined the relationships between laboratory performance scores and the independent variables score on the Group Assessment of Logical Thinking (GALT) test, chronological age in months, gender, and ethnicity for each of the treatment groups. The study was conducted using 113 high school students who were enrolled in first-year general chemistry classes at Pueblo South High School in Colorado. The research design used was the quasi-experimental Nonequivalent Control Group Design. The statistical treatment consisted of the Multiple Regression Analysis and the Analysis of Covariance. Based on the GALT, students in the two groups were generally in the concrete and transitional stages of the Piagetian cognitive levels. The findings of the study revealed that the traditional and the modified methods of laboratory reporting did not have any effect on the laboratory performance outcome of the subjects. However, the students who used the traditional method of reporting showed a higher laboratory performance score when evaluation was conducted using the New Standards rubric recommended by the state. Multiple Regression Analysis revealed that there was a significant relationship between the criterion variable student laboratory performance outcome of individuals who employed traditional laboratory reporting methods and the composite set of predictor variables. On the contrary, there was no significant relationship between the criterion variable student laboratory performance outcome of individuals who employed modified laboratory reporting methods and the composite set of predictor variables.

  20. Capacitance variation measurement method with a continuously variable measuring range for a micro-capacitance sensor

    NASA Astrophysics Data System (ADS)

    Lü, Xiaozhou; Xie, Kai; Xue, Dongfeng; Zhang, Feng; Qi, Liang; Tao, Yebo; Li, Teng; Bao, Weimin; Wang, Songlin; Li, Xiaoping; Chen, Renjie

    2017-10-01

    Micro-capacitance sensors are widely applied in industrial applications for the measurement of mechanical variations. The measurement accuracy of micro-capacitance sensors is highly dependent on the capacitance measurement circuit. To overcome the inability of commonly used methods to directly measure capacitance variation and deal with the conflict between the measurement range and accuracy, this paper presents a capacitance variation measurement method which is able to measure the output capacitance variation (relative value) of the micro-capacitance sensor with a continuously variable measuring range. We present the principles and analyze the non-ideal factors affecting this method. To implement the method, we developed a capacitance variation measurement circuit and carried out experiments to test the circuit. The result shows that the circuit is able to measure a capacitance variation range of 0-700 pF linearly with a maximum relative accuracy of 0.05% and a capacitance range of 0-2 nF (with a baseline capacitance of 1 nF) with a constant resolution of 0.03%. The circuit is proposed as a new method to measure capacitance and is expected to have applications in micro-capacitance sensors for measuring capacitance variation with a continuously variable measuring range.

  1. Colorimetric determination of alkaline phosphatase as indicator of mammalian feces in corn meal: collaborative study.

    PubMed

    Gerber, H

    1986-01-01

    In the official method for rodent filth in corn meal, filth and corn meal are separated in organic solvents, and particles are identified by the presence of hair and a mucous coating. The solvents are toxic, poor separation yields low recoveries, and fecal characteristics are rarely present on all fragments, especially on small particles. The official AOAC alkaline phosphatase test for mammalian feces, 44.181-44.184, has therefore been adapted to determine the presence of mammalian feces in corn meal. The enzyme cleaves phosphate radicals from a test indicator/substrate, phenolphthalein diphosphate. As free phenolphthalein accumulates, a pink-to-red color develops in the gelled test agar medium. In a collaborative study conducted to compare the proposed method with the official method for corn meal, 44.049, the proposed method yielded 45.5% higher recoveries than the official method. Repeatability and reproducibility for the official method were roughly 1.8 times more variable than for the proposed method. The method has been adopted official first action.

  2. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of the methods with application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.

  3. New approach to estimating variability in visual field data using an image processing technique.

    PubMed Central

    Crabb, D P; Edgar, D F; Fitzke, F W; McNaught, A I; Wynn, H P

    1995-01-01

    AIMS--A new framework for evaluating pointwise sensitivity variation in computerised visual field data is demonstrated. METHODS--A measure of local spatial variability (LSV) is generated using an image processing technique. Fifty five eyes from a sample of normal and glaucomatous subjects, examined on the Humphrey field analyser (HFA), were used to illustrate the method. RESULTS--Significant correlation between LSV and conventional estimates--namely, HFA pattern standard deviation and short term fluctuation, were found. CONCLUSION--LSV is not dependent on normals' reference data or repeated threshold determinations, thus potentially reducing test time. Also, the illustrated pointwise maps of LSV could provide a method for identifying areas of fluctuation commonly found in early glaucomatous field loss. PMID:7703196
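
    Although the paper's exact image-processing operator is not spelled out in the abstract, a local spatial variability measure of this flavour can be sketched as a moving-window standard deviation over the sensitivity grid; the field values and window size below are assumptions.

        import numpy as np
        from scipy.ndimage import generic_filter

        # Hypothetical 8x8 grid of visual field sensitivities (dB)
        field = np.random.default_rng(0).normal(28, 2, size=(8, 8))

        # Local spatial variability: standard deviation within a 3x3
        # neighbourhood, computed with a moving window over the sensitivity map
        lsv_map = generic_filter(field, np.std, size=3, mode="nearest")
        lsv = lsv_map.mean()          # one summary value per field
        print(f"LSV = {lsv:.2f} dB")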

  4. Testing for purchasing power parity in the long-run for ASEAN-5

    NASA Astrophysics Data System (ADS)

    Choji, Niri Martha; Sek, Siok Kun

    2017-04-01

    For more than a decade, there has been substantial interest in empirically testing the validity of the purchasing power parity (PPP) hypothesis. This paper tests for long-run relative purchasing power parity for a group of ASEAN-5 countries for the period 1996-2016 using monthly data. For this purpose, we used the Pedroni co-integration method to test the long-run hypothesis of purchasing power parity. We first tested for the stationarity of the variables and found that the variables are non-stationary at levels but stationary at first difference. Results of the Pedroni test rejected the null hypothesis of no co-integration, meaning that we have enough evidence to support PPP in the long-run for the ASEAN-5 countries over the period 1996-2016. In other words, the rejection of the null hypothesis implies a long-run relation between nominal exchange rates and relative prices.

  5. A fourth-order box method for solving the boundary layer equations

    NASA Technical Reports Server (NTRS)

    Wornom, S. F.

    1977-01-01

    A fourth order box method for calculating high accuracy numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations is presented. The method is the natural extension of the second order Keller Box scheme to fourth order and is demonstrated with application to the incompressible, laminar and turbulent boundary layer equations. Numerical results for high accuracy test cases show the method to be significantly faster than other higher order and second order methods.

  6. Method for evaluating wind turbine wake effects on wind farm performance

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Spera, D. A.

    1985-01-01

    A method of testing the performance of a cluster of wind turbine units and data analysis equations are presented which together form a simple and direct procedure for determining the reduction in energy output caused by the wake of an upwind turbine. This method appears to solve the problems presented by data scatter and wind variability. Test data from the three-unit Mod-2 wind turbine cluster at Goldendale, Washington, are analyzed to illustrate the application of the proposed method. In this sample case the reduction in energy was found to be about 10 percent when the Mod-2 units were separated by a distance equal to seven diameters and winds were below rated.

  7. On quality control procedures for solar radiation and meteorological measures, from subhourly to monthly average time periods

    NASA Astrophysics Data System (ADS)

    Espinar, B.; Blanc, P.; Wald, L.; Hoyer-Klick, C.; Schroedter-Homscheidt, M.; Wanderer, T.

    2012-04-01

    Meteorological data measured by ground stations are often a key element in the development and validation of methods exploiting satellite images. These data are considered as a reference against which satellite-derived estimates are compared. Long-term radiation and meteorological measurements are available from a large number of measuring stations. However, close examination of the data often reveals a lack of quality, often for extended periods of time. This lack of quality has been the reason, in many cases, for the rejection of large amounts of available data. Data quality must be checked before use in order to guarantee the inputs for the methods used in modelling, monitoring, forecasting, etc. To control their quality, data should be submitted to several conditions or tests. After this checking, data not flagged by any of the tests are released as plausible data. In this work, a bibliographical survey was performed of quality control tests for the common meteorological variables (ambient temperature, relative humidity and wind speed) and for the usual solar radiometric variables (horizontal global and diffuse components of the solar radiation and the beam normal component). The different tests have been grouped according to the variable and the averaging period (sub-hourly, hourly, daily and monthly averages). The quality tests may be classified as follows: • Range checks: tests that verify values are within a specific range. There are two types of range checks, those based on extrema and those based on rare observations. • Step checks: tests aimed at detecting unrealistic jumps or stagnation in the time series. • Consistency checks: tests that verify the relationship between two or more time series. The gathered quality tests are applicable to all latitudes, as they have not been optimized regionally or seasonally, with the aim of remaining generic. They have been applied to ground measurements in several geographic locations, which resulted in the detection of some control tests that are no longer adequate, for different reasons. After the modification of some tests, based on our experience, a set of quality control tests is now presented, updated according to technological advances and classified. The presented set of quality tests allows radiation and meteorological data to be screened in order to assess their plausibility for use as inputs in theoretical or empirical methods for scientific research. The research leading to these results has partly received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement no. 262892 (ENDORSE project).
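
    A toy implementation of the three test families named above, with deliberately illustrative thresholds rather than the published limits:

        import numpy as np

        def qc_flags(ghi, dhi, dni, sza_cos):
            """Toy quality-control flags for hourly solar radiation data.
            ghi/dhi/dni in W/m^2; sza_cos = cosine of the solar zenith angle.
            All thresholds are illustrative, not the published limits."""
            flags = {}
            # Range check: values within physically possible limits
            flags["ghi_range"] = (ghi >= 0) & (ghi <= 1500)
            # Step check: unrealistic jumps between consecutive records
            step = np.abs(np.diff(ghi, prepend=ghi[0]))
            flags["ghi_step"] = step <= 800
            # Consistency check: closure GHI ~ DHI + DNI * cos(SZA)
            closure = dhi + dni * sza_cos
            with np.errstate(divide="ignore", invalid="ignore"):
                ratio = np.where(closure > 50, ghi / closure, 1.0)
            flags["closure"] = (ratio > 0.85) & (ratio < 1.15)
            # A record is plausible only if no test flags it
            ok = np.logical_and.reduce(list(flags.values()))
            return flags, ok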

  8. A goal attainment pain management program for older adults with arthritis.

    PubMed

    Davis, Gail C; White, Terri L

    2008-12-01

    The purpose of this study was to test a pain management intervention that integrates goal setting with older adults (age ≥65) living independently in residential settings. This preliminary testing of the Goal Attainment Pain Management Program (GAPMAP) included a sample of 17 adults (mean age 79.29 years) with self-reported pain related to arthritis. Specific study aims were to: 1) explore the use of individual goal setting; 2) determine participants' levels of goal attainment; 3) determine whether changes occurred in the pain management methods used and found to be helpful by GAPMAP participants; and 4) determine whether changes occurred in selected pain-related variables (i.e., the experience of living with persistent pain, the expected outcomes of pain management, pain management barriers, and global ratings of perceived pain intensity and success of pain management). Because of the small sample size, both parametric (t test) and nonparametric (Wilcoxon signed rank test) analyses were used to examine differences from pretest to posttest. Results showed that older individuals could successfully participate in setting and attaining individual goals. Thirteen of the 17 participants (76%) met their goals at the expected level or above. Two management methods (exercise and using a heated pool, tub, or shower) were used significantly more often after the intervention, and two methods (exercise and distraction) were identified as significantly more helpful. Two pain-related variables (experience of living with persistent pain and expected outcomes of pain management) revealed significant change, and all of those tested showed overall improvement.

  9. The Reliability of Teacher Decision-Making in Recommending Accommodations for Large-Scale Tests. Technical Report # 08-01

    ERIC Educational Resources Information Center

    Tindal, Gerald; Lee, Daesik; Geller, Leanne Ketterlin

    2008-01-01

    In this paper we review different methods for teachers to recommend accommodations in large scale tests. Then we present data on the stability of their judgments on variables relevant to this decision-making process. The outcomes from the judgments support the need for a more explicit model. Four general categories are presented: student…

  10. Impact of Sample Size and Variability on the Power and Type I Error Rates of Equivalence Tests: A Simulation Study

    ERIC Educational Resources Information Center

    Rusticus, Shayna A.; Lovato, Chris Y.

    2014-01-01

    The question of equivalence between two or more groups is frequently of interest to many applied researchers. Equivalence testing is a statistical method designed to provide evidence that groups are comparable by demonstrating that the mean differences found between groups are small enough that they are considered practically unimportant. Few…
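
    Equivalence testing of two group means is commonly operationalized as two one-sided tests (TOST); whether the simulation used exactly this variant is not stated in the abstract. A compact sketch, with an invented equivalence margin delta, follows.

        import numpy as np
        from scipy import stats

        def tost(a, b, delta):
            """Two one-sided tests for equivalence of two independent means:
            H0: |mean(a) - mean(b)| >= delta. Equivalence is claimed when both
            one-sided p-values (here, their maximum) fall below alpha."""
            a, b = np.asarray(a), np.asarray(b)
            diff = a.mean() - b.mean()
            se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
            df = len(a) + len(b) - 2                   # simple pooled-df choice
            p_lower = 1 - stats.t.cdf((diff + delta) / se, df)  # H0: diff <= -delta
            p_upper = stats.t.cdf((diff - delta) / se, df)      # H0: diff >= +delta
            return max(p_lower, p_upper)

        rng = np.random.default_rng(0)
        g1, g2 = rng.normal(70, 10, 40), rng.normal(71, 10, 40)
        print(f"TOST p = {tost(g1, g2, delta=5):.3f}")  # delta = equivalence margin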

  11. Measuring Rind Thickness on Polyurethane Foam

    NASA Technical Reports Server (NTRS)

    Johnson, C.; Miller, J.; Brown, H.

    1985-01-01

    Nondestructive test determines rind thickness of polyurethane foam. Surface hardness of foam is measured by Shore durometer method: hardness on Shore D scale correlates well with rind thickness. Shore D hardness of 20, for example, indicates rind thickness of 0.04 inch (1 millimeter). New hardness test makes it easy to determine rind thickness of sample nondestructively and to adjust fabrication variables accordingly.

  12. A Continuous Threshold Expectile Model.

    PubMed

    Zhang, Feipeng; Li, Qunhua

    2017-12-01

    Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties for all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, thus it is computationally more efficient than likelihood-ratio type tests. Simulation studies show that the proposed estimators and test have desirable finite sample performance in both homoscedastic and heteroscedastic cases. The application of the proposed method to a Dutch growth dataset and a baseball pitcher salary dataset reveals interesting insights. The proposed method is implemented in the R package cthreshER.

  13. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
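
    The finite difference validation mentioned above can be illustrated generically: compare an analytic sensitivity against a central-difference estimate. The toy response function below stands in for the finite element response; it is an assumption for illustration only.

      # Generic central-difference check of an analytic design sensitivity,
      # with a toy response function standing in for the FE response.
      import numpy as np

      def response(x):
          # toy stand-in for a structural response (e.g., a displacement)
          return x[0] ** 2 * x[1] + np.sin(x[1])

      def analytic_grad(x):
          return np.array([2 * x[0] * x[1], x[0] ** 2 + np.cos(x[1])])

      def fd_grad(f, x, h=1e-6):
          g = np.zeros_like(x)
          for i in range(len(x)):
              e = np.zeros_like(x)
              e[i] = h
              g[i] = (f(x + e) - f(x - e)) / (2 * h)   # central difference
          return g

      x0 = np.array([1.2, 0.7])
      print(analytic_grad(x0))
      print(fd_grad(response, x0))   # should agree to ~1e-8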

  14. Examination of the Chayes-Kruskal procedure for testing correlations between proportions

    USGS Publications Warehouse

    Kork, J.O.

    1977-01-01

    The Chayes-Kruskal procedure for testing correlations between proportions uses a linear approximation to the actual closure transformation to provide a null value, pij, against which an observed closed correlation coefficient, rij, can be tested. It has been suggested that a significant difference between pij and rij would indicate a nonzero covariance relationship between the ith and jth open variables. In this paper, the linear approximation to the closure transformation is described in terms of a matrix equation. Examination of the solution set of this equation shows that estimation of, or even the identification of, significant nonzero open correlations is essentially impossible even if the number of variables and the sample size are large. The method of solving the matrix equation is described in the appendix. © 1977 Plenum Publishing Corporation.

  15. Wheelchair Propulsion Biomechanics in Junior Basketball Players: A Method for the Evaluation of the Efficacy of a Specific Training Program

    PubMed Central

    Bergamini, Elena; Morelli, Francesca; Marchetti, Flavia; Vannozzi, Giuseppe; Polidori, Lorenzo; Paradisi, Francesco; Traballesi, Marco; Cappozzo, Aurelio

    2015-01-01

    As participation in wheelchair sports increases, the need for quantitative assessment of biomechanical performance indicators and of sports- and population-specific training protocols has become central. The present study focuses on junior wheelchair basketball and aims at (i) proposing a method to identify biomechanical performance indicators of wheelchair propulsion using an instrumented in-field test and (ii) developing a training program specific for the considered population and assessing its efficacy using the proposed method. Twelve athletes (10 M, 2 F, age = 17.1 ± 2.7 years, years of practice = 4.5 ± 1.8) equipped with wheelchair- and wrist-mounted inertial sensors performed a 20-metre sprint test. Biomechanical parameters related to propulsion timing, progression force, and coordination were estimated from the measured accelerations and used in a regression model where the time to complete the test was set as dependent variable. Force- and coordination-related parameters accounted for 80% of the dependent variable variance. Based on these results, a training program was designed and administered for three months to six of the athletes (the others acting as control group). The biomechanical indicators proved to be effective in providing additional information about the wheelchair propulsion technique with respect to the final test outcome and demonstrated the efficacy of the developed program. PMID:26543852

  16. Quantifying the process and outcomes of person-centered planning.

    PubMed

    Holburn, S; Jacobson, J W; Vietze, P M; Schwartz, A A; Sersen, E

    2000-09-01

    Although person-centered planning is a popular approach in the field of developmental disabilities, there has been little systematic assessment of its process and outcomes. To measure person-centered planning, we developed three instruments designed to assess its various aspects. We then constructed variables comprising both a Process and an Outcome Index using a combined rational-empirical method. Test-retest reliability and measures of internal consistency appeared adequate. Variable correlations and factor analysis were generally consistent with our conceptualization and resulting item and variable classifications. Practical implications for intervention integrity, program evaluation, and organizational performance are discussed.

  17. Determination of the dried product resistance variability and its influence on the product temperature in pharmaceutical freeze-drying.

    PubMed

    Scutellà, Bernadette; Trelea, Ioan Cristian; Bourlès, Erwan; Fonseca, Fernanda; Passot, Stephanie

    2018-07-01

    During the primary drying step of the freeze-drying process, mass transfer resistance strongly affects the product temperature, and consequently the final product quality. The main objective of this study was to evaluate the variability of the mass transfer resistance resulting from the dried product layer (Rp) in a manufacturing batch of vials, and its potential effect on the product temperature, from data obtained in a pilot scale freeze-dryer. Sublimation experiments were run at -25 °C and 10 Pa using two different freezing protocols: with spontaneous or controlled ice nucleation. Five repetitions of each condition were performed. Global (pressure rise test) and local (gravimetric) methods were applied as complementary approaches to estimate Rp. The global method made it possible to assess the variability of the evolution of Rp with the dried layer thickness between different experiments, whereas the local method provided information on Rp variability at a fixed time within the vial batch. A product temperature variability of approximately ±4.4 °C was estimated for a product dried layer thickness of 5 mm. The present approach can be used to estimate the risk of process failure due to mass transfer variability when designing freeze-drying cycles. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Interpreting findings from Mendelian randomization using the MR-Egger method.

    PubMed

    Burgess, Stephen; Thompson, Simon G

    2017-05-01

    Mendelian randomization-Egger (MR-Egger) is an analysis method for Mendelian randomization using summarized genetic data. MR-Egger consists of three parts: (1) a test for directional pleiotropy, (2) a test for a causal effect, and (3) an estimate of the causal effect. While conventional analysis methods for Mendelian randomization assume that all genetic variants satisfy the instrumental variable assumptions, the MR-Egger method is able to assess whether genetic variants have pleiotropic effects on the outcome that differ on average from zero (directional pleiotropy), as well as to provide a consistent estimate of the causal effect, under a weaker assumption: the InSIDE (INstrument Strength Independent of Direct Effect) assumption. In this paper, we provide a critical assessment of the MR-Egger method with regard to its implementation and interpretation. While the MR-Egger method is a worthwhile sensitivity analysis for detecting violations of the instrumental variable assumptions, there are several reasons why causal estimates from the MR-Egger method may be biased and have inflated Type 1 error rates in practice, including violations of the InSIDE assumption and the influence of outlying variants. The issues raised in this paper have potentially serious consequences for causal inferences from the MR-Egger approach. We give examples of scenarios in which the estimates from conventional Mendelian randomization methods and MR-Egger differ, and discuss how to interpret findings in such cases.
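
    At its core, MR-Egger is a weighted linear regression of variant-outcome associations on variant-exposure associations with an unconstrained intercept: the intercept tests for directional pleiotropy and the slope estimates the causal effect. A minimal sketch on simulated summary statistics (all values below are assumed):

      # Minimal MR-Egger sketch on simulated summary statistics: regress
      # variant-outcome betas on variant-exposure betas, weighting by
      # 1/se^2; intercept ~ directional pleiotropy, slope ~ causal effect.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n_snps = 30
      bx = rng.uniform(0.05, 0.3, n_snps)            # SNP-exposure betas
      pleio = rng.normal(0.02, 0.01, n_snps)         # directional pleiotropy
      se_y = np.full(n_snps, 0.02)
      by = 0.4 * bx + pleio + rng.normal(0, se_y)    # SNP-outcome betas

      X = sm.add_constant(bx)
      fit = sm.WLS(by, X, weights=1 / se_y**2).fit()
      print(fit.params)   # [intercept, slope]; intercept != 0 flags pleiotropy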

  19. A Correlation-Based Transition Model using Local Variables. Part 2; Test Cases and Industrial Applications

    NASA Technical Reports Server (NTRS)

    Langtry, R. B.; Menter, F. R.; Likki, S. R.; Suzen, Y. B.; Huang, P. G.; Volker, S.

    2006-01-01

    A new correlation-based transition model has been developed, which is built strictly on local variables. As a result, the transition model is compatible with modern computational fluid dynamics (CFD) methods using unstructured grids and massive parallel execution. The model is based on two transport equations, one for the intermittency and one for the transition onset criteria in terms of momentum thickness Reynolds number. The proposed transport equations do not attempt to model the physics of the transition process (unlike, e.g., turbulence models), but form a framework for the implementation of correlation-based models into general-purpose CFD methods.

  20. A boundary value approach for solving three-dimensional elliptic and hyperbolic partial differential equations.

    PubMed

    Biala, T A; Jator, S N

    2015-01-01

    In this article, the boundary value method is applied to solve three-dimensional elliptic and hyperbolic partial differential equations. The partial derivatives with respect to two of the spatial variables (y, z) are discretized using finite difference approximations to obtain a large system of ordinary differential equations (ODEs) in the third spatial variable (x). Using interpolation and collocation techniques, a continuous scheme is developed and used to obtain discrete methods which are applied via the block unification approach to obtain approximations to the resulting large system of ODEs. Several test problems are investigated to elucidate the solution process.
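
    The semi-discretization step can be sketched as follows: finite differences in (y, z) reduce, for example, u_xx + u_yy + u_zz = f to a large second-order ODE system in x, which the block (boundary value) methods then integrate. The grid sizes and Dirichlet boundary assumption below are illustrative.

      # Sketch of the semi-discretization step: second-order finite
      # differences in (y, z) reduce u_xx + u_yy + u_zz = f to the ODE
      # system U''(x) = F - A U, which a block (boundary value) method
      # then integrates in x. Grid sizes are illustrative assumptions.
      import numpy as np

      ny, nz, h = 20, 20, 1.0 / 21

      def d2(n, h):
          """1-D second-difference matrix with Dirichlet boundaries."""
          A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
               + np.diag(np.ones(n - 1), -1))
          return A / h**2

      Iy, Iz = np.eye(ny), np.eye(nz)
      A = np.kron(Iz, d2(ny, h)) + np.kron(d2(nz, h), Iy)  # discrete u_yy + u_zz

      def rhs(x, U, F):
          # U''(x) = F(x) - A @ U ; cast to first order for an ODE solver
          return F(x) - A @ U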

  1. Techniques used in the F-14 variable-sweep transition flight experiment

    NASA Technical Reports Server (NTRS)

    Anderson, Bianca Trujillo; Meyer, Robert R., Jr.; Chiles, Harry R.

    1988-01-01

    This paper discusses and evaluates the test measurement techniques used to determine the laminar-to-turbulent boundary layer transition location in the F-14 variable-sweep transition flight experiment (VSTFE). The main objective of the VSTFE was to determine the effects of wing sweep on the laminar-to-turbulent transition location at conditions representative of transport aircraft. Four methods were used to determine the transition location: (1) a hot-film anemometer system; (2) two boundary-layer rakes; (3) surface pitot tubes; and (4) liquid crystals for flow visualization. Of the four methods, the hot-film anemometer system was the most reliable indicator of transition.

  2. Mediation and moderation of treatment effects in randomised controlled trials of complex interventions.

    PubMed

    Emsley, Richard; Dunn, Graham; White, Ian R

    2010-06-01

    Complex intervention trials should be able to answer both pragmatic and explanatory questions in order to test the theories motivating the intervention and help understand the underlying nature of the clinical problem being tested. Key to this is the estimation of direct effects of treatment and indirect effects acting through intermediate variables which are measured post-randomisation. Using psychological treatment trials as an example of complex interventions, we review statistical methods which crucially evaluate both direct and indirect effects in the presence of hidden confounding between mediator and outcome. We review the historical literature on mediation and moderation of treatment effects. We introduce two methods from within the existing causal inference literature, principal stratification and structural mean models, and demonstrate how these can be applied in a mediation context before discussing approaches and assumptions necessary for attaining identifiability of key parameters of the basic causal model. Assuming that baseline covariates modify the effect of treatment (i.e. randomisation) on the mediator (covariate by treatment interactions), but that these interactions have no direct effect on the outcome, leads to the use of instrumental variable methods. We describe how moderation can occur through post-randomisation variables, and extend the principal stratification approach to multiple group methods with explanatory models nested within the principal strata. We illustrate the new methodology with motivating examples of randomised trials from the mental health literature.

  3. Fully moderated T-statistic for small sample size gene expression arrays.

    PubMed

    Yu, Lianbo; Gulati, Parul; Fernandez, Soledad; Pennell, Michael; Kirschner, Lawrence; Jarjoura, David

    2011-09-15

    Gene expression microarray experiments with few replications lead to great variability in estimates of gene variances. Several Bayesian methods have been developed to reduce this variability and to increase power. Thus far, moderated t methods assumed a constant coefficient of variation (CV) for the gene variances. We provide evidence against this assumption, and extend the method by allowing the CV to vary with gene expression. Our CV varying method, which we refer to as the fully moderated t-statistic, was compared to three other methods (ordinary t, and two moderated t predecessors). A simulation study and a familiar spike-in data set were used to assess the performance of the testing methods. The results showed that our CV varying method had higher power than the other three methods, identified a greater number of true positives in spike-in data, fit simulated data under varying assumptions very well, and in a real data set better identified higher expressing genes that were consistent with functional pathways associated with the experiments.
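
    The moderated t idea can be sketched in a few lines: each gene's sample variance is shrunk toward a prior variance, and the t-statistic is formed from the shrunken value with augmented degrees of freedom. The prior parameters below are assumed, and the sketch keeps the prior constant, whereas the paper's extension lets it vary with expression level.

      # Minimal moderated t sketch (limma-style): shrink per-gene variances
      # toward a prior, then form t statistics with augmented df. The prior
      # parameters d0, s0_2 are assumed; the paper's extension lets the
      # prior vary with expression level instead of being constant.
      import numpy as np

      rng = np.random.default_rng(3)
      n_genes, n1, n2 = 1000, 3, 3
      g1 = rng.normal(0, 1, (n_genes, n1))
      g2 = rng.normal(0, 1, (n_genes, n2))

      d = n1 + n2 - 2                                  # residual df per gene
      s2 = ((g1.var(axis=1, ddof=1) * (n1 - 1)
             + g2.var(axis=1, ddof=1) * (n2 - 1)) / d) # pooled variance

      d0, s0_2 = 4.0, 1.0                              # prior df and variance
      s2_post = (d0 * s0_2 + d * s2) / (d0 + d)        # shrunken variance
      t_mod = ((g1.mean(axis=1) - g2.mean(axis=1))
               / np.sqrt(s2_post * (1 / n1 + 1 / n2))) # reference df = d0 + d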

  4. LP-search and its use in analysis of the accuracy of control systems with acoustical models

    NASA Technical Reports Server (NTRS)

    Sergeyev, V. I.; Sobol, I. M.; Statnikov, R. B.; Statnikov, I. N.

    1973-01-01

    The LP-search is proposed as an analog of the Monte Carlo method for finding values in nonlinear statistical systems. It is concluded that, to attain the required accuracy in solving the control problem for a statistical system, the LP-search requires considerably fewer tests than the Monte Carlo method. The LP-search also permits multiple repetitions of tests under identical conditions and observation of the output variables of the system.

  5. Integrated Data Collection Analysis (IDCA) Program - SSST Testing Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.

    The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the methods used for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis during the IDCA program. These methods changed throughout the Proficiency Test and the reasons for these changes are documented in this report. The most significant modifications in standard testing methods are: 1) including one specified sandpaper in impact testing among all the participants, 2) diversifying liquid test methods for selected participants, and 3) including sealed sample holders for thermal testing by at least one participant. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study will suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent.

  6. SU-G-TeP2-04: Comprehensive Machine Isocenter Evaluation with Separation of Gantry, Collimator, and Table Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hancock, S; Clements, C; Hyer, D

    2016-06-15

    Purpose: To develop and demonstrate application of a method that characterizes deviation of linac x-ray beams from the centroid of the volumetric radiation isocenter as a function of gantry, collimator, and table variables. Methods: A set of Winston-Lutz ball-bearing images was used to determine the gantry radiation isocenter as the midrange of deviation values resulting from gantry and collimator rotation. Also determined were displacement of table axis from gantry isocenter and recommended table axis adjustment. The method, previously reported, has been extended to include the effect of collimator walkout by obtaining measurements with 0 and 180 degree collimator rotation for each gantry angle. Twelve images were used to characterize the volumetric isocenter for the full range of available gantry, collimator, and table rotations. Results: Three Varian True Beam, two Elekta Infinity and four Versa HD linacs at five institutions were tested using identical methodology. Varian linacs exhibited substantially less deviation due to head sag than Elekta linacs (0.4 mm vs. 1.2 mm on average). One linac from each manufacturer had additional isocenter deviation of 0.3 to 0.4 mm due to jaw instability with gantry and collimator rotation. For all linacs, the achievable isocenter tolerance was dependent on adjustment of collimator position offset, transverse position steering, and alignment of the table axis with gantry isocenter, facilitated by these test results. The pattern and magnitude of table axis wobble vs. table angle was reproducible and unique to each machine. Conclusion: This new method provides a comprehensive set of isocenter deviation values including all variables. It effectively facilitates minimization of deviation between beam center and target (ball-bearing) position. This method was used to quantify the effect of jaw instability on isocenter deviation and to identify the offending jaw. The test is suitable for incorporation into a routine machine QA program. Software development was performed by Radiological Imaging Technology, Inc.

  7. Multivariate models for prediction of human skin sensitization hazard.

    PubMed

    Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole

    2017-03-01

    One of the Interagency Coordinating Committee on the Validation of Alternative Method's (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays - the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay - six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy 88%), any of the alternative methods alone (accuracy 63-79%) or test batteries combining data from the individual methods (accuracy 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. Published 2016. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
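
    The modeling setup resembles the following sketch: standardized assay readouts feeding logistic regression and support vector machine classifiers, trained on 72 substances and scored on 24. Feature names and data here are hypothetical placeholders, not the ICCVAM variable groups.

      # Sketch of the modeling setup: logistic regression and an SVM
      # trained on assay readouts to predict sensitization hazard. The
      # features and data are hypothetical, not the ICCVAM data.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(4)
      # columns could be e.g. DPRA depletion, h-CLAT CV75, KeratinoSens, logP
      X_train, y_train = rng.normal(size=(72, 4)), rng.integers(0, 2, 72)
      X_test, y_test = rng.normal(size=(24, 4)), rng.integers(0, 2, 24)

      for model in (LogisticRegression(max_iter=1000), SVC(kernel="rbf")):
          clf = make_pipeline(StandardScaler(), model)
          clf.fit(X_train, y_train)
          print(type(model).__name__, clf.score(X_test, y_test))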

  8. Simulated Annealing in the Variable Landscape

    NASA Astrophysics Data System (ADS)

    Hasegawa, Manabu; Kim, Chang Ju

    An experimental analysis is conducted to test whether the appropriate introduction of the smoothness-temperature schedule enhances the optimizing ability of the MASSS method, the combination of the Metropolis algorithm (MA) and the search-space smoothing (SSS) method. The test is performed on two types of random traveling salesman problems. The results show that the optimization performance of the MA is substantially improved by a single smoothing alone and slightly more by a single smoothing with cooling and by a de-smoothing process with heating. The performance is compared to that of the parallel tempering method and a clear advantage of the idea of smoothing is observed depending on the problem.

  9. LP and NLP decomposition without a master problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuller, D.; Lan, B.

    We describe a new algorithm for decomposition of linear programs and a class of convex nonlinear programs, together with theoretical properties and some test results. Its most striking feature is the absence of a master problem; the subproblems pass primal and dual proposals directly to one another. The algorithm is defined for multi-stage LPs or NLPs, in which the constraints link the current stage's variables to earlier stages' variables. This problem class is general enough to include many problem structures that do not immediately suggest stages, such as block diagonal problems. The basic algorithm is derived for two-stage problems and extended to more than two stages through nested decomposition. The main theoretical result assures convergence, to within any preset tolerance of the optimal value, in a finite number of iterations. This asymptotic convergence result contrasts with the results of limited tests on LPs, in which the optimal solution is apparently found exactly, i.e., to machine accuracy, in a small number of iterations. The tests further suggest that for LPs, the new algorithm is faster than the simplex method applied to the whole problem, as long as the stages are linked loosely; that the speedup over the simplex method improves as the number of stages increases; and that the algorithm is more reliable than nested Dantzig-Wolfe or Benders' methods in its improvement over the simplex method.

  10. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    NASA Astrophysics Data System (ADS)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multiple stepwise regression is proposed for the General Expression of Nonlinear Autoregressive (GNAR) model; it converts the model-order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess the contributions of both the newly introduced and the previously included variables to the model characteristics, and these are used to determine which model variables to retain or eliminate. The optimal model is then obtained through goodness-of-fit measurement or significance testing. Simulation and classic time-series data experiments show that the proposed method is simple, reliable and applicable to practical engineering.
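
    The selection loop can be sketched as forward stepwise regression scored by AIC; the candidate lag terms, data, and stopping rule below are illustrative stand-ins for the GNAR linear and nonlinear terms.

      # Forward stepwise sketch: grow a linear model term by term, keeping
      # a candidate only if it lowers AIC. Candidate terms and data are
      # illustrative stand-ins for GNAR linear/nonlinear lag terms.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 300
      y = np.zeros(n)
      for t in range(2, n):                     # simulated nonlinear AR series
          y[t] = 0.5 * y[t-1] - 0.3 * y[t-1] * y[t-2] + rng.normal(0, 0.5)

      target = y[2:]                            # y_t
      terms = {                                 # candidate regressors
          "y1": y[1:-1], "y2": y[:-2],          # y_{t-1}, y_{t-2}
          "y1^2": y[1:-1] ** 2, "y1*y2": y[1:-1] * y[:-2],
      }

      selected, pool = [], dict(terms)
      best_aic = sm.OLS(target, np.ones_like(target)).fit().aic
      while pool:
          scores = {name: sm.OLS(target, sm.add_constant(np.column_stack(
                        [terms[s] for s in selected] + [col]))).fit().aic
                    for name, col in pool.items()}
          name = min(scores, key=scores.get)
          if scores[name] >= best_aic:          # stop: no AIC improvement
              break
          best_aic = scores[name]
          selected.append(name)
          del pool[name]
      print("selected terms:", selected)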

  11. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.

  12. Wind erosion in semiarid landscapes: Predictive models and remote sensing methods for the influence of vegetation

    NASA Technical Reports Server (NTRS)

    Musick, H. Brad

    1993-01-01

    The objectives of this research are: to develop and test predictive relations for the quantitative influence of vegetation canopy structure on wind erosion of semiarid rangeland soils, and to develop remote sensing methods for measuring the canopy structural parameters that determine sheltering against wind erosion. The influence of canopy structure on wind erosion will be investigated by means of wind-tunnel and field experiments using model roughness elements to simulate plant canopies. The canopy structural variables identified by the wind-tunnel and field experiments as important in determining vegetative sheltering against wind erosion will then be measured at a number of naturally vegetated field sites and compared with estimates of these variables derived from analysis of remotely sensed data.

  13. Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)

    NASA Astrophysics Data System (ADS)

    Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi

    2017-06-01

    Hurdle negative binomial regression is a method that can be used for a discrete dependent variable with excess zeros and under- or overdispersion. It uses a two-part approach. The first part, the zero hurdle model, models the zero elements of the dependent variable; the second part, the truncated negative binomial model, models the nonzero elements (positive integers). The discrete dependent variable in such cases is censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable. Parameters are estimated by Maximum Likelihood Estimation (MLE). The hurdle negative binomial regression model for a right-censored dependent variable is applied to the number of neonatorum tetanus cases in Indonesia. The data are count data that contain zero values for some observations and varying values for others. This study also aims to obtain the parameter estimator and test statistic of the censored hurdle negative binomial model. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
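
    The two-part structure can be sketched as a logistic model for zero versus positive counts plus a count model on the positives. For brevity, the sketch below fits an ordinary (untruncated) negative binomial to the positive counts and ignores censoring, both simplifications relative to the paper's estimator.

      # Two-part hurdle sketch: logistic model for zero vs. positive
      # counts, then a count model on the positives. The positive part
      # uses an ordinary negative binomial (no truncation correction)
      # and censoring is ignored -- simplifications vs. the paper.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      n = 500
      x = rng.normal(size=n)
      X = sm.add_constant(x)
      is_pos = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.8 * x))))
      counts = np.where(is_pos, 1 + rng.poisson(np.exp(0.3 + 0.5 * x)), 0)

      hurdle = sm.Logit((counts > 0).astype(int), X).fit(disp=0)
      pos = counts > 0
      nb = sm.NegativeBinomial(counts[pos], X[pos]).fit(disp=0)
      print(hurdle.params, nb.params)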

  14. Pathways from Autism Spectrum Disorder (ASD) Diagnosis to Genetic Testing

    PubMed Central

    Barton, Krysta S.; Tabor, Holly K.; Starks, Helene; Garrison, Nanibaa’ A.; Laurino, Mercy; Burke, Wylie

    2017-01-01

    Purpose: This study examines challenges faced by families and health providers related to genetic testing for autism spectrum disorder (ASD). Methods: This qualitative study of 14 parents and 15 health providers identified an unstandardized three-step process for families who pursue ASD genetic testing. Results: Step 1 is the clinical diagnosis of ASD, confirmed by providers practicing alone or in a team. Step 2 is the offer of genetic testing to find an etiology. For those offered testing, step 3 involves the parents’ decision whether to pursue testing. Despite professional guidelines and recommendations, interviews describe considerable variability in approaches to genetic testing for ASD, a lack of consensus among providers, and questions about clinical utility. Many families in our study were unaware of the option for genetic testing; testing decisions by parents appear to be influenced by both provider recommendations and insurance coverage. Conclusion: Consideration of genetic testing for ASD should take into account different views about the clinical utility of testing and variability in insurance coverage. Ideally, policy makers from the range of clinical specialties involved in ASD care should revisit policies to clarify the purpose of genetic testing for ASD and promote consensus about its appropriate use. PMID:29048417

  15. A Method for Calculating the Probability of Successfully Completing a Rocket Propulsion Ground Test

    NASA Technical Reports Server (NTRS)

    Messer, Bradley

    2007-01-01

    Propulsion ground test facilities face the daily challenge of scheduling multiple customers into limited facility space and successfully completing their propulsion test projects. Over the last decade NASA's propulsion test facilities have performed hundreds of tests, collected thousands of seconds of test data, and exceeded the capabilities of numerous test facility and test article components. A logistic regression mathematical modeling technique has been developed to predict the probability of successfully completing a rocket propulsion test. A logistic regression model is a mathematical modeling approach that can be used to describe the relationship of several independent predictor variables X_1, X_2, ..., X_k to a binary or dichotomous dependent variable Y, where Y can only be one of two possible outcomes, in this case Success or Failure of accomplishing a full duration test. The use of logistic regression modeling is not new; however, modeling propulsion ground test facilities using logistic regression is both a new and unique application of the statistical technique. Results from this type of model provide project managers with insight and confidence into the effectiveness of rocket propulsion ground testing.
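
    A minimal sketch of such a model: the binary full-duration outcome regressed on facility and test-article predictors. The predictor variables and data below are invented for illustration.

      # Logistic regression sketch: probability of completing a full-
      # duration test as a function of predictors. Data are invented.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 200
      duration = rng.uniform(10, 500, n)        # planned test duration, s
      pressure = rng.uniform(0.5, 1.5, n)       # normalized chamber pressure
      logit_p = 2.0 - 0.004 * duration - 0.5 * pressure
      success = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

      X = sm.add_constant(np.column_stack([duration, pressure]))
      model = sm.Logit(success, X).fit(disp=0)
      print(model.params)                       # log-odds effects
      print(model.predict(X[:5]))               # predicted success probability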

  16. Simulation and experimental design of a new advanced variable step size Incremental Conductance MPPT algorithm for PV systems.

    PubMed

    Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir

    2016-05-01

    Improvement of the efficiency of photovoltaic systems based on new maximum power point tracking (MPPT) algorithms is the most promising solution due to its low cost and its easy implementation without equipment updating. Many MPPT methods with fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm has been proposed. Modeling and simulation of different operational conditions of the conventional Incremental Conductance method and the proposed method are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a Flyback converter and a control circuit using a dsPIC30F4011. Both the simulation and the experimental design are presented in several aspects. A comparative study between the proposed variable step size and the fixed step size IC MPPT method under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of speed of MPP tracking and accuracy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
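
    The core of the variable step size rule can be sketched as follows: at the maximum power point dI/dV = -I/V, and the duty-cycle step is scaled by |dP/dV| so it shrinks as the operating point approaches the MPP. The scaling factor N and the converter sign convention below are assumptions for illustration.

      # Core of a variable-step incremental conductance MPPT update: at
      # the MPP dI/dV = -I/V; the duty-cycle step is scaled by |dP/dV|
      # so it shrinks near the MPP. The scaling factor N and the sign
      # convention are assumptions for illustration.
      def inc_cond_step(v, i, v_prev, i_prev, duty, N=0.01,
                        d_min=0.05, d_max=0.95):
          """One MPPT update. Sign convention (assumed): raising the duty
          cycle lowers the PV operating voltage (boost-type converter)."""
          dv, di = v - v_prev, i - i_prev
          dp = v * i - v_prev * i_prev
          if dv == 0:
              step = N * abs(di * v)            # fallback step when dV = 0
              if di > 0:
                  duty -= step                  # power rising: raise voltage
              elif di < 0:
                  duty += step
          else:
              step = N * abs(dp / dv)           # step scales with |dP/dV|
              g = di / dv + i / v               # dI/dV + I/V; zero at the MPP
              if g > 0:
                  duty -= step                  # left of MPP: raise voltage
              elif g < 0:
                  duty += step                  # right of MPP: lower voltage
          return min(max(duty, d_min), d_max)   # clamp the duty cycle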

  17. A method to isolate bacterial communities and characterize ecosystems from food products: Validation and utilization as a reproducible chicken meat model.

    PubMed

    Rouger, Amélie; Remenant, Benoit; Prévost, Hervé; Zagorec, Monique

    2017-04-17

    Influenced by production and storage processes and by seasonal changes, the diversity of meat product microbiotas can be highly variable. Because microbiotas influence meat quality and safety, characterizing and understanding their dynamics during processing and storage is important for proposing innovative and efficient storage conditions. Challenge tests are usually performed using meat from the same batch, inoculated at high levels with one or a few strains. Such experiments do not reflect the true microbial situation, and the global ecosystem is not taken into account. Our purpose was to constitute stocks of chicken meat microbiotas to create standard and reproducible ecosystems. We searched for the best method to collect contaminating bacterial communities from chicken cuts for storage as frozen aliquots. We tested several methods to extract DNA from these stored communities for subsequent PCR amplification. We determined the best moment during the product shelf life to collect bacteria in sufficient amounts. Results showed that the rinsing method combined with the Mobio DNA extraction kit was the most reliable way to collect bacteria and obtain DNA for subsequent PCR amplification. Twenty-three different chicken meat microbiotas were then collected using this procedure. Microbiota aliquots were stored at -80°C without substantial loss of viability. Their characterization by culture methods confirmed the large variability (richness and abundance) of the bacterial communities present on chicken cuts. Four of these bacterial communities were used to estimate their ability to regrow on meat matrices. Challenge tests performed on sterile matrices showed that these microbiotas were successfully inoculated and could overgrow the natural microbiota of chicken meat. They can therefore be used to perform reproducible challenge tests mimicking a true meat ecosystem, making it possible to test the influence of various processing or storage conditions on complex meat matrices. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Selection of a Geostatistical Method to Interpolate Soil Properties of the State Crop Testing Fields using Attributes of a Digital Terrain Model

    NASA Astrophysics Data System (ADS)

    Sahabiev, I. A.; Ryazanov, S. S.; Kolcova, T. G.; Grigoryan, B. R.

    2018-03-01

    The three most common techniques to interpolate soil properties at a field scale—ordinary kriging (OK), regression kriging with multiple linear regression drift model (RK + MLR), and regression kriging with principal component regression drift model (RK + PCR)—were examined. The results of the performed study were compiled into an algorithm of choosing the most appropriate soil mapping technique. Relief attributes were used as the auxiliary variables. When spatial dependence of a target variable was strong, the OK method showed more accurate interpolation results, and the inclusion of the auxiliary data resulted in an insignificant improvement in prediction accuracy. According to the algorithm, the RK + PCR method effectively eliminates multicollinearity of explanatory variables. However, if the number of predictors is less than ten, the probability of multicollinearity is reduced, and application of the PCR becomes irrational. In that case, the multiple linear regression should be used instead.

  19. Variability in testing policies and impact on reported Clostridium difficile infection rates: results from the pilot Longitudinal European Clostridium difficile Infection Diagnosis surveillance study (LuCID).

    PubMed

    Davies, K; Davis, G; Barbut, F; Eckert, C; Petrosillo, N; Wilcox, M H

    2016-12-01

    Lack of standardised Clostridium difficile testing is a potential confounder when comparing infection rates. We used an observational, systematic, prospective large-scale sampling approach to investigate variability in C. difficile sampling to understand C. difficile infection (CDI) incidence rates. In-patient and institutional data were gathered from 60 European hospitals (across three countries). Testing methodology, testing/CDI rates and case profiles were compared between countries and institution types. The mean annual CDI rate per hospital was lowest in the UK and highest in Italy (1.5 vs. 4.7 cases/10,000 patient bed days [pbds], p < 0.001). The testing rate was highest in the UK compared with Italy and France (50.7/10,000 pbds vs. 31.5 and 30.3, respectively, p < 0.001). Only 58.4% of diarrhoeal samples were tested for CDI across all countries. Overall, only 64% of hospitals used recommended testing algorithms for laboratory testing. Small hospitals were significantly more likely to use standalone toxin tests (SATTs). There was an inverse correlation between hospital size and CDI testing rate. Hospitals using SATT or assays not detecting toxin reported significantly higher CDI rates than those using recommended methods, despite similar testing frequencies. These data are consistent with higher false-positive rates in such (non-recommended) testing scenarios. Cases in Italy and those diagnosed by SATT or methods not detecting toxin were significantly older. Testing occurred significantly earlier in the UK. Assessment of testing practice is paramount to the accurate interpretation and comparison of CDI rates.

  20. A systematic approach to evaluate parameter consistency in the inlet stream of source separated biowaste composting facilities: A case study in Colombia.

    PubMed

    Oviedo-Ocaña, E R; Torres-Lozada, P; Marmolejo-Rebellon, L F; Torres-López, W A; Dominguez, I; Komilis, D; Sánchez, A

    2017-04-01

    Biowaste is commonly the largest fraction of municipal solid waste (MSW) in developing countries. Although composting is an effective method to treat source separated biowaste (SSB), there are certain limitations in terms of operation, partly due to insufficient control of the variability of SSB quality, which affects process kinetics and product quality. This study assesses the variability of the SSB physicochemical quality in a composting facility located in a small town of Colombia, in which SSB collection was performed twice a week. Likewise, the influence of the SSB physicochemical variability on the variability of compost parameters was assessed. Parametric and non-parametric tests (i.e. Student's t-test and the Mann-Whitney test) showed no significant differences in the quality parameters of SSB among collection days, and therefore, it was unnecessary to establish specific operation and maintenance regulations for each collection day. Significant variability was found in eight of the twelve quality parameters analyzed in the inlet stream, with corresponding coefficients of variation (CV) higher than 23%. The CVs for the eight parameters analyzed in the final compost (i.e. pH, moisture, total organic carbon, total nitrogen, C/N ratio, total phosphorus, total potassium and ash) ranged from 9.6% to 49.4%, with significant variations in five of those parameters (CV > 20%). These results indicate that variability in the inlet stream can affect the variability of the end product. They suggest the need to consider inlet stream variability in the performance of composting facilities to achieve a compost of consistent quality. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
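
    The method of least squares reduces to closed-form expressions for the slope and intercept; a tiny worked example with invented data:

      # Least-squares slope and intercept from their closed forms:
      # b1 = cov(x, y) / var(x),  b0 = mean(y) - b1 * mean(x).
      import numpy as np

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # predictor (invented data)
      y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])      # outcome

      b1 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean())**2).sum()
      b0 = y.mean() - b1 * x.mean()
      print(f"y ≈ {b0:.2f} + {b1:.2f} x")          # close to y = 0 + 2x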

  2. Entropy-Based Analysis and Bioinformatics-Inspired Integration of Global Economic Information Transfer

    PubMed Central

    An, Sungbae; Kwon, Young-Kyun; Yoon, Sungroh

    2013-01-01

    The assessment of information transfer in the global economic network helps to understand the current environment and the outlook of an economy. Most approaches on global networks extract information transfer based mainly on a single variable. This paper establishes an entirely new bioinformatics-inspired approach to integrating information transfer derived from multiple variables and develops an international economic network accordingly. In the proposed methodology, we first construct the transfer entropies (TEs) between various intra- and inter-country pairs of economic time series variables, test their significances, and then use a weighted sum approach to aggregate information captured in each TE. Through a simulation study, the new method is shown to deliver better information integration compared to existing integration methods in that it can be applied even when intra-country variables are correlated. Empirical investigation with the real world data reveals that Western countries are more influential in the global economic network and that Japan has become less influential following the Asian currency crisis. PMID:23300959

  3. Entropy-based analysis and bioinformatics-inspired integration of global economic information transfer.

    PubMed

    Kim, Jinkyu; Kim, Gunn; An, Sungbae; Kwon, Young-Kyun; Yoon, Sungroh

    2013-01-01

    The assessment of information transfer in the global economic network helps to understand the current environment and the outlook of an economy. Most approaches on global networks extract information transfer based mainly on a single variable. This paper establishes an entirely new bioinformatics-inspired approach to integrating information transfer derived from multiple variables and develops an international economic network accordingly. In the proposed methodology, we first construct the transfer entropies (TEs) between various intra- and inter-country pairs of economic time series variables, test their significances, and then use a weighted sum approach to aggregate information captured in each TE. Through a simulation study, the new method is shown to deliver better information integration compared to existing integration methods in that it can be applied even when intra-country variables are correlated. Empirical investigation with the real world data reveals that Western countries are more influential in the global economic network and that Japan has become less influential following the Asian currency crisis.
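
    A histogram-based sketch of pairwise transfer entropy with one-step histories is shown below; the binning, the simulated series, and the omission of a significance test (the paper tests TE significance) are all simplifications.

      # Histogram-based sketch of transfer entropy TE(X -> Y) with
      # one-step histories:
      #   TE = sum p(y+, y, x) * log[ p(y+ | y, x) / p(y+ | y) ].
      # Binning and data are illustrative; no significance test is done.
      import numpy as np

      def transfer_entropy(x, y, bins=4):
          x_b = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
          y_b = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
          yf, yp, xp = y_b[1:], y_b[:-1], x_b[:-1]   # future, past, past

          def prob(*cols):
              joint, counts = np.unique(np.column_stack(cols), axis=0,
                                        return_counts=True)
              return {tuple(k): c / len(cols[0])
                      for k, c in zip(joint, counts)}

          p_fpx, p_px, p_fp, p_p = (prob(yf, yp, xp), prob(yp, xp),
                                    prob(yf, yp), prob(yp))
          te = 0.0
          for (a, b, c), p in p_fpx.items():
              te += p * np.log2(p * p_p[(b,)] / (p_px[(b, c)] * p_fp[(a, b)]))
          return te

      rng = np.random.default_rng(8)
      x = rng.normal(size=2000)
      y = np.roll(x, 1) + 0.5 * rng.normal(size=2000)  # y driven by lagged x
      print(transfer_entropy(x, y), transfer_entropy(y, x))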

  4. Canonical Measure of Correlation (CMC) and Canonical Measure of Distance (CMD) between sets of data. Part 3. Variable selection in classification.

    PubMed

    Ballabio, Davide; Consonni, Viviana; Mauri, Andrea; Todeschini, Roberto

    2010-01-11

    In multivariate regression and classification issues variable selection is an important procedure used to select an optimal subset of variables with the aim of producing more parsimonious and eventually more predictive models. Variable selection is often necessary when dealing with methodologies that produce thousands of variables, such as Quantitative Structure-Activity Relationships (QSARs) and highly dimensional analytical procedures. In this paper a novel method for variable selection for classification purposes is introduced. This method exploits the recently proposed Canonical Measure of Correlation between two sets of variables (CMC index). The CMC index is in this case calculated for two specific sets of variables, the former being comprised of the independent variables and the latter of the unfolded class matrix. The CMC values, calculated by considering one variable at a time, can be sorted and a ranking of the variables on the basis of their class discrimination capabilities results. Alternatively, the CMC index can be calculated for all the possible combinations of variables and the variable subset with the maximal CMC can be selected, but this procedure is computationally more demanding and the classification performance of the selected subset is not always the best one. The effectiveness of the CMC index in selecting variables with discriminative ability was compared with that of other well-known strategies for variable selection, such as the Wilks' Lambda, the VIP index based on Partial Least Squares-Discriminant Analysis, and the selection provided by classification trees. A variable Forward Selection based on the CMC index was finally used in conjunction with Linear Discriminant Analysis. This approach was tested on several chemical data sets. The results obtained were encouraging.

  5. Nondestructive Techniques to Evaluate the Characteristics and Development of Engineered Cartilage

    PubMed Central

    Mansour, Joseph M.; Lee, Zhenghong; Welter, Jean F.

    2016-01-01

    In this review, methods for evaluating the properties of tissue engineered (TE) cartilage are described. Many of these have been developed for evaluating properties of native and osteoarthritic articular cartilage. However, with the increasing interest in engineering cartilage, specialized methods are needed for nondestructive evaluation of tissue while it is developing and after it is implanted. Such methods are needed, in part, due to the large inter- and intra-donor variability in the performance of the cellular component of the tissue, which remains a barrier to delivering reliable TE cartilage for implantation. Using conventional destructive tests, such variability makes it near-impossible to predict the timing and outcome of the tissue engineering process at the level of a specific piece of engineered tissue and also makes it difficult to assess the impact of changing tissue engineering regimens. While it is clear that the true test of engineered cartilage is its performance after it is implanted, correlation of pre- and post-implantation properties determined nondestructively in vitro and/or in vivo with performance should lead to predictive methods to improve quality control and to minimize the chances of implanting inferior tissue. PMID:26817458

  6. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
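
    The calibration step is essentially a regression of log load on functions of log streamflow and decimal time; a minimal sketch with invented data follows (LOADEST's predefined model forms and the AMLE/MLE censored-data machinery are not reproduced).

      # Minimal rating-curve sketch of the LOADEST calibration idea:
      # regress ln(load) on functions of ln(streamflow) and decimal time,
      # then estimate loads from the fitted model. Data are invented and
      # the AMLE/MLE censored-data machinery is not reproduced.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(9)
      n = 120
      t = np.linspace(0, 3, n)                      # decimal time, years
      lnQ = rng.normal(2.0, 0.6, n)                 # ln streamflow
      lnL = (0.2 + 1.1 * lnQ + 0.3 * np.sin(2 * np.pi * t)
             + rng.normal(0, 0.3, n))               # ln load (synthetic)

      X = sm.add_constant(np.column_stack(
          [lnQ, np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)]))
      fit = sm.OLS(lnL, X).fit()

      # naive back-transform; LOADEST applies a formal bias correction
      load_hat = np.exp(fit.predict(X) + fit.mse_resid / 2)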

  7. Test-retest reliability of computer-based video analysis of general movements in healthy term-born infants.

    PubMed

    Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars

    2015-10-01

    A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion (Qmean, QSD) and the variability of the spatial center of motion (CSD) of the infant. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively; and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. There were significantly lower CSD values in the recordings with continual FMs compared to the recordings with intermittent FMs (p<0.05). This study showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Multiscale power analysis for heart rate variability

    NASA Astrophysics Data System (ADS)

    Zeng, Peng; Liu, Hongxing; Ni, Huangjing; Zhou, Jing; Xia, Lan; Ning, Xinbao

    2015-06-01

    We introduce the multiscale power (MSP) method to assess the power distribution of physiological signals over multiple time scales. Simulations on synthetic data and experiments on heart rate variability (HRV) data support the approach. Results show that both physical and psychological changes influence the power distribution significantly. A quantitative parameter, termed the power difference (PD), is introduced to evaluate the degree of power distribution alteration. We find that the dynamical correlation of HRV is destroyed completely when PD > 0.7.
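
    The multiscale step typical of such analyses can be sketched by coarse-graining the series and computing the power at each scale; whether this matches the authors' exact MSP definition is an assumption, so treat it strictly as a sketch.

      # Coarse-graining sketch for a multiscale power profile: average the
      # series in non-overlapping windows of length `scale`, then take the
      # variance (power about the mean) at each scale. Whether this matches
      # the authors' exact MSP definition is an assumption.
      import numpy as np

      def multiscale_power(x, max_scale=10):
          powers = []
          for scale in range(1, max_scale + 1):
              m = len(x) // scale
              coarse = x[:m * scale].reshape(m, scale).mean(axis=1)
              powers.append(coarse.var())
          return np.array(powers)

      rng = np.random.default_rng(10)
      rr = 0.8 + 0.05 * rng.standard_normal(1024)   # toy RR-interval series
      profile = multiscale_power(rr)
      profile /= profile.sum()                      # power distribution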

  9. Unconditional or Conditional Logistic Regression Model for Age-Matched Case-Control Data?

    PubMed

    Kuo, Chia-Ling; Duan, Yinghui; Grady, James

    2018-01-01

    Matching on demographic variables is commonly used in case-control studies to adjust for confounding at the design stage. There is a presumption that matched data need to be analyzed by matched methods. Conditional logistic regression has become a standard for matched case-control data to tackle the sparse data problem. The sparse data problem, however, may not be a concern for loose-matching data when the matching between cases and controls is not unique, and one case can be matched to other controls without substantially changing the association. Data matched on a few demographic variables are clearly loose-matching data, and we hypothesize that unconditional logistic regression is a proper method to perform. To address the hypothesis, we compare unconditional and conditional logistic regression models by precision in estimates and hypothesis testing using simulated matched case-control data. Our results support our hypothesis; however, the unconditional model is not as robust as the conditional model to the matching distortion that the matching process not only makes cases and controls similar for matching variables but also for the exposure status. When the study design involves other complex features or the computational burden is high, matching in loose-matching data can be ignored for negligible loss in testing and estimation if the distributions of matching variables are not extremely different between cases and controls.

  10. Unconditional or Conditional Logistic Regression Model for Age-Matched Case–Control Data?

    PubMed Central

    Kuo, Chia-Ling; Duan, Yinghui; Grady, James

    2018-01-01

    Matching on demographic variables is commonly used in case–control studies to adjust for confounding at the design stage. There is a presumption that matched data need to be analyzed by matched methods. Conditional logistic regression has become a standard for matched case–control data to tackle the sparse data problem. The sparse data problem, however, may not be a concern for loose-matching data when the matching between cases and controls is not unique, and one case can be matched to other controls without substantially changing the association. Data matched on a few demographic variables are clearly loose-matching data, and we hypothesize that unconditional logistic regression is a proper method to perform. To address the hypothesis, we compare unconditional and conditional logistic regression models by precision in estimates and hypothesis testing using simulated matched case–control data. Our results support our hypothesis; however, the unconditional model is not as robust as the conditional model to the matching distortion that the matching process not only makes cases and controls similar for matching variables but also for the exposure status. When the study design involves other complex features or the computational burden is high, matching in loose-matching data can be ignored for negligible loss in testing and estimation if the distributions of matching variables are not extremely different between cases and controls. PMID:29552553
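
    Both models are available in statsmodels; the sketch below compares them on simulated 1:1 matched data, where the unconditional model adjusts for the matching variable directly and the conditional model conditions out each pair's stratum effect. The matching structure and effect sizes are invented.

      # Sketch comparing unconditional and conditional logistic regression
      # on simulated 1:1 matched case-control data (matching structure and
      # effect sizes are invented).
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.conditional_models import ConditionalLogit

      rng = np.random.default_rng(11)
      n_pairs = 300
      strata = np.repeat(np.arange(n_pairs), 2)         # matched-pair IDs
      age = np.repeat(rng.uniform(40, 80, n_pairs), 2)  # matching variable
      exposure = rng.binomial(1, 0.3, 2 * n_pairs)
      logit_p = -1.0 + 0.7 * exposure + 0.02 * (age - 60)
      case = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

      # unconditional: adjust for the matching variable directly
      X = sm.add_constant(np.column_stack([exposure, age]))
      print(sm.Logit(case, X).fit(disp=0).params[1])

      # conditional: condition out each matched pair's stratum effect
      print(ConditionalLogit(case, exposure[:, None],
                             groups=strata).fit().params[0])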

  11. Evaluation of a Serum Lung Cancer Biomarker Panel

    PubMed Central

    Mazzone, Peter J; Wang, Xiao-Feng; Han, Xiaozhen; Choi, Humberto; Seeley, Meredith; Scherer, Richard; Doseeva, Victoria

    2018-01-01

    Background: A panel of 3 serum proteins and 1 autoantibody has been developed to assist with the detection of lung cancer. We aimed to validate the accuracy of the biomarker panel in an independent test set and explore the impact of adding a fourth serum protein to the panel, as well as the impact of combining molecular and clinical variables. Methods: The training set of serum samples was purchased from commercially available biorepositories. The testing set was from a biorepository at the Cleveland Clinic. All lung cancer and control subjects were >50 years old and had smoked a minimum of 20 pack-years. A panel of biomarkers including CEA (carcinoembryonic antigen), CYFRA21-1 (cytokeratin-19 fragment 21-1), CA125 (carbohydrate antigen 125), HGF (hepatocyte growth factor), and NY-ESO-1 (New York esophageal cancer-1 antibody) was measured using immunoassay techniques. The multiple of the median method, multivariate logistic regression, and random forest modeling were used to analyze the results. Results: The training set consisted of 604 patient samples (268 with lung cancer and 336 controls) and the testing set of 400 patient samples (155 with lung cancer and 245 controls). With a threshold established from the training set, the sensitivity and specificity of both the 4- and 5-biomarker panels on the testing set was 49% and 96%, respectively. Models built on the testing set using only clinical variables had an area under the receiver operating characteristic curve of 0.68, using the biomarker panel 0.81, and by combining clinical and biomarker variables 0.86. Conclusions: This study validates the accuracy of a panel of proteins and an autoantibody in a population relevant to lung cancer detection and suggests a benefit to combining clinical features with the biomarker results. PMID:29371783
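
    The reported comparison can be sketched as three logistic models scored by area under the ROC curve: clinical variables only, the biomarker panel only, and both combined. Features and data below are hypothetical placeholders, not the study data.

      # Sketch of the AUC comparison: clinical variables only, biomarker
      # panel only, and both combined. Features and data are hypothetical
      # placeholders, not the study data.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(12)
      n = 600
      clinical = rng.normal(size=(n, 2))   # e.g. age, pack-years (assumed)
      markers = rng.normal(size=(n, 5))    # stand-ins for the 5 biomarkers
      risk = 0.6 * clinical[:, 0] + markers[:, :3].sum(axis=1)
      cancer = rng.binomial(1, 1 / (1 + np.exp(-risk)))

      for name, X in [("clinical", clinical), ("panel", markers),
                      ("combined", np.hstack([clinical, markers]))]:
          X_tr, X_te, y_tr, y_te = train_test_split(
              X, cancer, test_size=0.4, random_state=0)
          p = (LogisticRegression(max_iter=1000)
               .fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
          print(name, round(roc_auc_score(y_te, p), 2))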

  12. Improving machine learning reproducibility in genetic association studies with proportional instance cross validation (PICV).

    PubMed

    Piette, Elizabeth R; Moore, Jason H

    2018-01-01

    Machine learning methods and conventions are increasingly employed for the analysis of large, complex biomedical data sets, including genome-wide association studies (GWAS). Reproducibility of machine learning analyses of GWAS can be hampered by biological and statistical factors, particularly so for the investigation of non-additive genetic interactions. Application of traditional cross validation to a GWAS data set may result in poor consistency between the training and testing data set splits due to an imbalance of the interaction genotypes relative to the data as a whole. We propose a new cross validation method, proportional instance cross validation (PICV), that preserves the original distribution of an independent variable when splitting the data set into training and testing partitions. We apply PICV to simulated GWAS data with epistatic interactions of varying minor allele frequencies and prevalences and compare performance to that of a traditional cross validation procedure in which individuals are randomly allocated to training and testing partitions. Sensitivity and positive predictive value are significantly improved across all tested scenarios for PICV compared to traditional cross validation. We also apply PICV to GWAS data from a study of primary open-angle glaucoma to investigate a previously reported interaction; the interaction fails to replicate significantly, but PICV improves the consistency of the testing and training results. Application of traditional machine learning procedures to biomedical data may require modifications to better suit intrinsic characteristics of the data, such as the potential for highly imbalanced genotype distributions in the case of epistasis detection. The reproducibility of genetic interaction findings can be improved by considering this variable imbalance in cross validation implementation, such as with PICV. This approach may be extended to problems in other domains in which imbalanced variable distributions are a concern.
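
    PICV is described here only at a high level; the sketch below approximates the idea with scikit-learn's stratified folds, stratifying fold assignment on the imbalanced genotype variable instead of allocating individuals at random (the data and the stratification choice are hypothetical):

        import numpy as np
        from sklearn.model_selection import KFold, StratifiedKFold

        rng = np.random.default_rng(0)
        genotype = rng.choice([0, 1, 2], size=1000, p=[0.81, 0.18, 0.01])  # rare homozygote

        # Traditional CV: random allocation can leave a test fold nearly empty of
        # the rare genotype, so training and testing splits disagree.
        for _, test in KFold(5, shuffle=True, random_state=0).split(genotype):
            print("KFold test-fold counts:     ", np.bincount(genotype[test], minlength=3))

        # Proportion-preserving CV: every fold keeps the original genotype distribution.
        for _, test in StratifiedKFold(5, shuffle=True, random_state=0).split(genotype, genotype):
            print("stratified test-fold counts:", np.bincount(genotype[test], minlength=3))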

  13. The effect of explanations on mathematical reasoning tasks

    NASA Astrophysics Data System (ADS)

    Norqvist, Mathias

    2018-01-01

    Studies in mathematics education often point to the necessity for students to engage in more cognitively demanding activities than just solving tasks by applying given solution methods. Previous studies have shown that students who engage in creative, mathematically founded reasoning to construct a solution method perform significantly better in follow-up tests than students who are given a solution method and engage in algorithmic reasoning. However, teachers and textbooks, at least occasionally, provide explanations together with an algorithmic method, and this could possibly be more efficient than creative reasoning. In this study, three matched groups practiced with either creative, algorithmic, or explained algorithmic tasks. The main finding was that students who practiced with creative tasks outperformed students who practiced with explained algorithmic tasks in a post-test, despite a much lower practice score. The two groups that were given a solution method performed similarly in both practice and post-test, even though one group also received an explanation of the given solution method. Additionally, there were some differences between the groups in which variables predicted the post-test score.

  14. Boosting for detection of gene-environment interactions.

    PubMed

    Pashova, H; LeBlanc, M; Kooperberg, C

    2013-01-30

    In genetic association studies, it is typically thought that genetic variants and environmental variables jointly will explain more of the inheritance of a phenotype than either of these two components separately. Traditional methods to identify gene-environment interactions typically consider only one measured environmental variable at a time. However, in practice, multiple environmental factors may each be imprecise surrogates for the underlying physiological process that actually interacts with the genetic factors. In this paper, we develop a variant of L2 boosting that is specifically designed to identify combinations of environmental variables that jointly modify the effect of a gene on a phenotype. Because the effect modifiers might have a small signal compared with the main effects, working in a space that is orthogonal to the main predictors allows us to focus on the interaction space. In a simulation study that investigates some plausible underlying model assumptions, our method outperforms the least absolute shrinkage and selection operator (lasso) and the Akaike Information Criterion and Bayesian Information Criterion model selection procedures, achieving the lowest test error. In an example from the Women's Health Initiative-Population Architecture using Genomics and Epidemiology study, the dedicated boosting method was able to pick out two single-nucleotide polymorphisms for which effect modification appears present. The performance was evaluated on an independent test set, and the results are promising. Copyright © 2012 John Wiley & Sons, Ltd.
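
    A toy version of the orthogonalize-then-boost idea (not the authors' exact variant): main effects are projected out first, and componentwise L2 boosting then searches the gene-by-environment products in the residual space. All data below are simulated placeholders:

        import numpy as np

        rng = np.random.default_rng(2)
        n, p_env = 500, 6
        G = rng.binomial(2, 0.3, n).astype(float)        # genotype (0/1/2)
        E = rng.normal(size=(n, p_env))                  # several noisy environment surrogates
        y = 0.5 * G + 0.4 * E[:, 0] + 0.6 * G * E[:, 1] + rng.normal(size=n)

        # Step 1: project y and the candidate interactions onto the space
        # orthogonal to the main effects [1, G, E].
        M = np.column_stack([np.ones(n), G, E])
        P = np.eye(n) - M @ np.linalg.pinv(M)            # residual-maker matrix
        r = P @ y
        Z = P @ (G[:, None] * E)                         # orthogonalized G x E_j terms

        # Step 2: componentwise L2 boosting with a small step size.
        beta, nu = np.zeros(p_env), 0.1
        norms2 = (Z ** 2).sum(axis=0)
        for _ in range(200):
            c = Z.T @ (r - Z @ beta)                     # correlations with current residual
            j = np.argmax(c ** 2 / norms2)               # component giving largest RSS drop
            beta[j] += nu * c[j] / norms2[j]             # shrunken least-squares update
        print("selected interaction coefficients:", np.round(beta, 2))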

  15. Variability analysis of the bulk resistivity measured using concrete cylinders.

    DOT National Transportation Integrated Search

    2011-01-01

    "Many agencies are interested in using a rapid test method for measuring the electrical properties of concrete (i.e., the : resistivity or conductivity) since the electrical properties can be related to fluid transport (e.g., ion diffusion). The adva...

  16. How-to-Do-It: Using the Muscle Response Phenomenon to Enhance Biology Teaching.

    ERIC Educational Resources Information Center

    Kendler, Barry S.

    1989-01-01

    Presents a method which introduces biology students to some fundamental aspects of scientific methodology. Explains simple muscle testing procedures, a historical perspective, experimental variables, pedagogical uses, and possible mechanisms of the muscle response phenomenon action. (RT)

  17. Musk as a Pheromone? Didactic Exercise.

    ERIC Educational Resources Information Center

    Bersted, Chris T.

    A classroom/laboratory exercise has been used to introduce college students to factorial research designs, differentiate between interpretations for experimental and quasi-experimental variables, and exemplify application of laboratory research methods to test practical questions (advertising claims). The exercise involves having randomly divided…

  18. Vulnerability to alcohol consumption, spiritual transcendence and psychosocial well-being: test of a theory 1

    PubMed Central

    Heredia, Luz Patricia Díaz; Sanchez, Alba Idaly Muñoz

    2016-01-01

    Abstract Objective: to demonstrate the relations among vulnerability, self-transcendence and well-being in the young adult population and the effect of each of these variables on the adoption of low-risk consumption conducts. Method: quantitative and cross-sectional correlation study using structural equations analysis to test the relation among the variables. Results: an inverse relation was evidenced between vulnerability to alcohol consumption and spiritual transcendence (β = -0.123, p = 0.025) and a direct positive relation between spiritual transcendence and psychosocial well-being (β = 0.482, p < 0.001). Conclusions: the relations among the variables spiritual transcendence, vulnerability to alcohol consumption and psychosocial well-being, based on Reed's Theory, are confirmed in the population group of young college students, concluding that psychosocial well-being can be achieved when spiritual transcendence is enhanced, as the vulnerability to alcohol consumption drops. PMID:27276017

  19. Probabilistic liquefaction triggering based on the cone penetration test

    USGS Publications Warehouse

    Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Tokimatsu, K.

    2005-01-01

    Performance-based earthquake engineering requires a probabilistic treatment of potential failure modes in order to accurately quantify the overall stability of the system. This paper is a summary of the application portions of the probabilistic liquefaction triggering correlations recently proposed by Moss and co-workers. To enable probabilistic treatment of liquefaction triggering, the variables comprising the seismic load and the liquefaction resistance were treated as inherently uncertain. Supporting data from an extensive Cone Penetration Test (CPT)-based liquefaction case history database were used to develop a probabilistic correlation. The methods used to measure the uncertainty of the load and resistance variables, how the interactions of these variables were treated using Bayesian updating, and how reliability analysis was applied to produce curves of equal probability of liquefaction are presented. The normalization for effective overburden stress, the magnitude correlated duration weighting factor, and the non-linear shear mass participation factor used are also discussed.

  20. Manometric assessment of idiopathic megarectum in constipated children

    PubMed Central

    Chiarioni, Giuseppe; de Roberto, Giuseppe; Mazzocchi, Alessandro; Morelli, Antonio; Bassotti, Gabrio

    2005-01-01

    AIM: Chronic constipation is a frequent finding in children. In this age range, the concomitant occurrence of megarectum is not uncommon. However, the definition of megarectum is variable, and few data exist for Italy. We studied anorectal manometric variables and sensation in a group of constipated children with megarectum defined by radiologic criteria. Data from this group were compared with those obtained in a similar group of children with recurrent abdominal pain. METHODS: Anorectal testing was carried out in both groups by standard manometric technique and rectal balloon expulsion test. RESULTS: Megarectum patients displayed discrete abnormalities of anorectal variables and sensation with respect to controls. In particular, pelvic floor function appeared to be impaired in most patients. CONCLUSION: Constipated children with megarectum have abnormal anorectal function and sensation. These findings may be helpful for a better understanding of the pathophysiological basis of this condition. PMID:16273619

  1. Improved artificial neural networks in prediction of malignancy of lesions in contrast-enhanced MR-mammography.

    PubMed

    Vomweg, T W; Buscema, M; Kauczor, H U; Teifke, A; Intraligi, M; Terzi, S; Heussel, C P; Achenbach, T; Rieker, O; Mayer, D; Thelen, M

    2003-09-01

    The aim of this study was to evaluate the capability of improved artificial neural networks (ANN) and additional novel training methods in distinguishing between benign and malignant breast lesions in contrast-enhanced magnetic resonance-mammography (MRM). A total of 604 histologically proven cases of contrast-enhanced lesions of the female breast at MRI were analyzed. Morphological, dynamic and clinical parameters were collected and stored in a database. The data set was divided into several groups using random or experimental methods [Training & Testing (T&T) algorithm] to train and test different ANNs. An additional novel computer program for input variable selection was applied. Sensitivity and specificity were calculated and compared with a statistical method and an expert radiologist. After optimization of the distribution of cases among the training and testing sets by the T&T algorithm and the reduction of input variables by the Input Selection procedure, a highly sophisticated ANN achieved a sensitivity of 93.6% and a specificity of 91.9% in predicting malignancy of lesions within an independent prediction sample set. The best statistical method reached a sensitivity of 90.5% and a specificity of 68.9%. An expert radiologist performed better than the statistical method but worse than the ANN (sensitivity 92.1%, specificity 85.6%). Features extracted out of dynamic contrast-enhanced MRM and additional clinical data can be successfully analyzed by advanced ANNs. The quality of the resulting network strongly depends on the training methods, which are improved by the use of novel training tools. The best results of an improved ANN outperform expert radiologists.

  2. Information transfer across the scales of climate data variability

    NASA Astrophysics Data System (ADS)

    Palus, Milan; Jajcay, Nikola; Hartman, David; Hlinka, Jaroslav

    2015-04-01

    The multitude of scales characteristic of climate system variability requires innovative approaches to the analysis of instrumental time series. We present a methodology which starts with a wavelet decomposition of a multi-scale signal into quasi-oscillatory modes of limited bandwidth, described using their instantaneous phases and amplitudes. Their statistical associations are then tested in order to search for interactions across time scales. In particular, an information-theoretic formulation of the generalized, nonlinear Granger causality is applied together with surrogate data testing methods [1]. The method [2] uncovers causal influence (in the Granger sense) and information transfer from large-scale modes of climate variability, with characteristic time scales from years to almost a decade, to regional temperature variability on short time scales. In analyses of daily mean surface air temperature from various European locations, an information transfer from larger to smaller scales has been observed as the influence of the phase of slow oscillatory phenomena with periods around 7-8 years on amplitudes of the variability characterized by smaller temporal scales, from a few months to annual and quasi-biennial scales [3]. In sea surface temperature data from the tropical Pacific area, an influence of quasi-oscillatory phenomena with periods around 4-6 years on the variability on and near the annual scale has been observed. This study is supported by the Ministry of Education, Youth and Sports of the Czech Republic within the Program KONTAKT II, Project No. LH14001. [1] M. Palus, M. Vejmelka, Phys. Rev. E 75, 056211 (2007) [2] M. Palus, Entropy 16(10), 5263-5289 (2014) [3] M. Palus, Phys. Rev. Lett. 112, 078702 (2014)
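
    The phase/amplitude extraction step can be sketched as band-pass filtering followed by an analytic-signal transform, a common stand-in for the complex continuous wavelet used in the cited work (the series, band edges, and filter order below are illustrative):

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        rng = np.random.default_rng(0)
        fs = 1.0                                    # one sample per day
        t = np.arange(20 * 365)
        slow = np.sin(2 * np.pi * t / (7.8 * 365))  # ~7.8-year oscillatory mode
        temp = slow + 0.8 * rng.normal(size=t.size) # synthetic daily temperature

        # Band-pass around the 7-8 year period, then take the analytic signal.
        low, high = 1 / (9 * 365), 1 / (6 * 365)    # cycles per day
        sos = butter(3, [low, high], btype="band", fs=fs, output="sos")
        mode = sosfiltfilt(sos, temp)
        analytic = hilbert(mode)
        phase, amplitude = np.angle(analytic), np.abs(analytic)
        # phase/amplitude would then feed the cross-scale (Granger-type) measures
        print(phase[:3], amplitude[:3])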

  3. Optimal Combinations of Diagnostic Tests Based on AUC.

    PubMed

    Huang, Xin; Qin, Gengsheng; Fang, Yixin

    2011-06-01

    When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimation by re-substitution is too optimistic. To adjust for the upward bias, several methods are proposed. Among them the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, these proposed methods can be applied for variable selection to select important diagnostic tests. The proposed methods are examined through simulation studies and applications to three real examples. © 2010, The International Biometric Society.
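
    The re-substitution bias is easy to see in a toy version: estimate the AUC nonparametrically, maximize it over combination weights, then note the apparent AUC still needs cross-validation (the data and optimizer choice are illustrative):

        import numpy as np
        from scipy.optimize import minimize

        def auc(scores, y):
            # Nonparametric AUC: probability a random case outscores a random control.
            s1, s0 = scores[y == 1], scores[y == 0]
            gt = (s1[:, None] > s0[None, :]).mean()
            eq = (s1[:, None] == s0[None, :]).mean()
            return gt + 0.5 * eq

        rng = np.random.default_rng(0)
        n1, n0, p = 100, 150, 3
        X1 = rng.normal(0.6, 1.0, (n1, p))          # three diagnostic tests, cases
        X0 = rng.normal(0.0, 1.0, (n0, p))          # controls
        X, y = np.vstack([X1, X0]), np.r_[np.ones(n1), np.zeros(n0)]

        # Maximize AUC over combination weights (scale fixed on the unit sphere).
        def neg_auc(w):
            w = w / np.linalg.norm(w)
            return -auc(X @ w, y)

        res = minimize(neg_auc, x0=np.ones(p), method="Nelder-Mead")
        print("apparent AUC:", round(-res.fun, 3))   # optimistically biased; cross-validate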

  4. Normalized Rotational Multiple Yield Surface Framework (NRMYSF) stress-strain curve prediction method based on small strain triaxial test data on undisturbed Auckland residual clay soils

    NASA Astrophysics Data System (ADS)

    Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.

    2018-04-01

    Small strain triaxial test measurement is considered significantly more accurate than external strain measurement by the conventional method, which is prone to the systematic errors normally associated with the test. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, spaced at equal 120° intervals. The setup, using a 0.4 N resolution load cell and a 16-bit AD converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength that can reduce the cost and time of experimental laboratory testing.

  5. Thermal loading in the laser holography nondestructive testing of a composite structure

    NASA Technical Reports Server (NTRS)

    Liu, H. K.; Kurtz, R. L.

    1975-01-01

    A laser holographic interferometry method that has variable sensitivity to surface deformation was applied to the investigation of composite test samples under thermal loading. A successful attempt was made to detect debonds in a fiberglass-epoxy-ceramic plate. Experimental results are presented along with the mathematical analysis of the physical model of the thermal loading and current conduction in the composite material.

  6. The Impact of Childhood Obesity on Health and Health Service Use.

    PubMed

    Kinge, Jonas Minet; Morris, Stephen

    2018-06-01

    To test the impact of obesity on health and health care use in children, using various methods to account for reverse causality and omitted variables. Fifteen rounds of the Health Survey for England (1998-2013), which is representative of children and adolescents in England. We use three methods to account for reverse causality and omitted variables in the relationship between BMI and health/health service use: regression with individual, parent, and household control variables; sibling fixed effects; and instrumental variables based on genetic variation in weight. We include all children and adolescents aged 4-18 years old. We find that obesity has a statistically significant and negative impact on self-rated health and a positive impact on health service use in girls, boys, younger children (aged 4-12), and adolescents (aged 13-18). The findings are comparable in each model in both boys and girls. Using econometric methods, we have mitigated several confounding factors affecting the impact of obesity in childhood on health and health service use. Our findings suggest that obesity has severe consequences for health and health service use even among children. © Health Research and Educational Trust.

  7. A kernel machine method for detecting effects of interaction between multidimensional variable sets: an imaging genetics application.

    PubMed

    Ge, Tian; Nichols, Thomas E; Ghosh, Debashis; Mormino, Elizabeth C; Smoller, Jordan W; Sabuncu, Mert R

    2015-04-01

    Measurements derived from neuroimaging data can serve as markers of disease and/or healthy development, are largely heritable, and have been increasingly utilized as (intermediate) phenotypes in genetic association studies. To date, imaging genetic studies have mostly focused on discovering isolated genetic effects, typically ignoring potential interactions with non-genetic variables such as disease risk factors, environmental exposures, and epigenetic markers. However, identifying significant interaction effects is critical for revealing the true relationship between genetic and phenotypic variables, and shedding light on disease mechanisms. In this paper, we present a general kernel machine based method for detecting effects of the interaction between multidimensional variable sets. This method can model the joint and epistatic effect of a collection of single nucleotide polymorphisms (SNPs), accommodate multiple factors that potentially moderate genetic influences, and test for nonlinear interactions between sets of variables in a flexible framework. As a demonstration of application, we applied the method to the data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) to detect the effects of the interactions between candidate Alzheimer's disease (AD) risk genes and a collection of cardiovascular disease (CVD) risk factors, on hippocampal volume measurements derived from structural brain magnetic resonance imaging (MRI) scans. Our method identified that two genes, CR1 and EPHA1, demonstrate significant interactions with CVD risk factors on hippocampal volume, suggesting that CR1 and EPHA1 may play a role in influencing AD-related neurodegeneration in the presence of CVD risks. Copyright © 2015 Elsevier Inc. All rights reserved.
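
    The building blocks can be sketched as kernels on each variable set plus a Hadamard-product kernel for their interaction; here the authors' score test is replaced by a simple kernel ridge fit comparison on simulated data, purely for illustration:

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.metrics.pairwise import rbf_kernel

        rng = np.random.default_rng(0)
        n = 300
        G = rng.binomial(2, 0.3, (n, 10)).astype(float)   # SNP set
        C = rng.normal(size=(n, 4))                       # e.g., CVD risk factors
        y = G[:, 0] * C[:, 0] + rng.normal(size=n)        # pure interaction signal

        KG, KC = rbf_kernel(G), rbf_kernel(C)
        kernels = {"main effects only": KG + KC,
                   "with interaction ": KG + KC + KG * KC}  # Hadamard-product kernel

        tr, te = np.arange(200), np.arange(200, n)
        for name, K in kernels.items():
            model = KernelRidge(kernel="precomputed", alpha=1.0)
            model.fit(K[np.ix_(tr, tr)], y[tr])            # train-by-train kernel block
            pred = model.predict(K[np.ix_(te, tr)])        # test-by-train kernel block
            print(name, "test MSE:", round(np.mean((y[te] - pred) ** 2), 3))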

  8. Robust joint score tests in the application of DNA methylation data analysis.

    PubMed

    Li, Xuan; Fu, Yuejiao; Wang, Xiaogang; Qiu, Weiliang

    2018-05-18

    Recently, differential variability has been shown to be valuable in evaluating the association of DNA methylation with the risks of complex human diseases. Statistical tests based on both differential methylation level and differential variability can be more powerful than those based only on differential methylation level. Anh and Wang (2013) proposed a joint score test (AW) to simultaneously detect differential methylation and differential variability. However, AW's method seems to be quite conservative and has not been fully compared with existing joint tests. We proposed three improved joint score tests, namely iAW.Lev, iAW.BF, and iAW.TM, and have made extensive comparisons with the joint likelihood ratio test (jointLRT), the Kolmogorov-Smirnov (KS) test, and the AW test. Systematic simulation studies showed that: 1) the three improved tests performed better (i.e., larger power while keeping nominal Type I error rates) than the other three tests for data with outliers and with different variances between cases and controls; 2) for data from normal distributions, the three improved tests had slightly lower power than jointLRT and AW. The analyses of two Illumina HumanMethylation27 data sets, GSE37020 and GSE20080, and one Illumina Infinium MethylationEPIC data set, GSE107080, demonstrated that the three improved tests had higher true validation rates than jointLRT, KS, and AW. The three proposed joint score tests are robust against violation of the normality assumption and the presence of outlying observations, in comparison with the other three existing tests. Among the three proposed tests, iAW.BF seems to be the most robust and effective one across all simulated scenarios and also in the real data analyses.
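
    A simplified stand-in for a joint location-plus-scale test, not the AW score statistic itself: combine a Welch t-test for differential methylation level with a Brown-Forsythe test for differential variability via Fisher's method (which assumes the two p-values are independent):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        cases = rng.normal(0.35, 0.12, 60)    # methylation values, cases
        ctrls = rng.normal(0.30, 0.05, 60)    # controls: smaller mean and variance

        _, p_mean = stats.ttest_ind(cases, ctrls, equal_var=False)  # level difference
        _, p_var = stats.levene(cases, ctrls, center="median")      # Brown-Forsythe
        stat, p_joint = stats.combine_pvalues([p_mean, p_var], method="fisher")
        print(f"p(mean)={p_mean:.3g}  p(var)={p_var:.3g}  joint p={p_joint:.3g}")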

  9. Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies

    PubMed Central

    Zhang, Yu; Liu, Jun S.

    2011-01-01

    Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNP) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferroni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in their scopes. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can be easily adopted to estimate false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome, however, is well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach of using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288

  10. Cross analysis of knowledge and learning methods followed by French residents in cardiology.

    PubMed

    Menet, Aymeric; Assez, Nathalie; Lacroix, Dominique

    2015-01-01

    No scientific assessment of the theoretical teaching of cardiology in France is available. To analyse the impact of the available teaching modalities on the theoretical knowledge of French residents in cardiology. Electronic questionnaires were returned by 283 residents. In the first part, an inventory of the teaching/learning methods was taken, using 21 questions (Yes/No format). The second part was a knowledge test, comprising 15 multiple-choice questions, exploring the core curriculum. Of the 21 variables tested, four emerged as independent predictors of the score obtained in the knowledge test: access to self-assessment (P=0.0093); access to teaching methods other than lectures (P=0.036); systematic discussion about clinical decisions (P=0.013); and the opportunity to prepare and give lectures (P=0.039). The fifth variable was seniority in residency (P=0.0003). Each item of the knowledge test was analysed independently: the score was higher when teaching the item was driven by reading guidelines and was lower if the item had not been covered by the programme (P<0.001). Finally, 91% of students would find it useful to have a national source for each topic of the curriculum; 76% of them would often connect to an e-learning platform if available. It is necessary to rethink teaching in cardiology by involving students in the training, by using teaching methods other than lectures and by facilitating access to self-assessment. The use of digital tools may be a particularly effective approach. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  11. Variation of strain rate sensitivity index of a superplastic aluminum alloy in different testing methods

    NASA Astrophysics Data System (ADS)

    Majidi, Omid; Jahazi, Mohammad; Bombardier, Nicolas; Samuel, Ehab

    2017-10-01

    The strain rate sensitivity index, m-value, is commonly applied as a tool to evaluate the impact of strain rate on the viscoplastic behaviour of materials. The m-value, as a constant number, has frequently been used for modeling material behaviour in the numerical simulation of superplastic forming processes. However, the impact of the testing variables on the measured m-values has not been investigated comprehensively. In this study, the m-value for a superplastic grade of an aluminum alloy (i.e., AA5083) has been investigated. The conditions and the parameters that influence the strain rate sensitivity of the material are compared across three different testing methods, i.e., the monotonic uniaxial tension test, the strain rate jump test and the stress relaxation test. All tests were conducted at elevated temperature (470°C) and at strain rates up to 0.1 s-1. The results show that the m-value is not constant and is highly dependent on the applied strain rate, strain level and testing method.
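
    The index itself is the slope of log flow stress against log strain rate, so a jump-test estimate is a one-line computation (the stresses and rates below are illustrative, not the paper's measurements):

        import numpy as np

        def m_value(stress_1, stress_2, rate_1, rate_2):
            """Strain rate sensitivity m = d(ln sigma)/d(ln strain rate),
            estimated from a jump between two strain rates."""
            return np.log(stress_2 / stress_1) / np.log(rate_2 / rate_1)

        # Hypothetical flow stresses (MPa) before/after a jump from 1e-3 to 1e-2 1/s.
        print(round(m_value(10.0, 18.0, 1e-3, 1e-2), 3))   # ~0.255, superplastic regime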

  12. Six-minute walk test and heart rate variability: lack of association in advanced stages of heart failure.

    PubMed

    Woo, M A; Moser, D K; Stevenson, L W; Stevenson, W G

    1997-09-01

    The 6-minute walk and heart rate variability have been used to assess mortality risk in patients with heart failure, but their relationship to each other and their usefulness for predicting mortality at 1 year are unknown. To assess the relationships between the 6-minute walk test, heart rate variability, and 1-year mortality. A sample of 113 patients in advanced stages of heart failure (New York Heart Association Functional Class III-IV, left ventricular ejection fraction < 0.25) were studied. All 6-minute walks took place in an enclosed, level, measured corridor and were supervised by the same nurse. Heart rate variability was measured by using (1) a standard-deviation method and (2) Poincaré plots. Data on RR intervals obtained by using 24-hour Holter monitoring were analyzed. Survival was determined at 1 year after the Holter recording. The results showed no significant associations between the results of the 6-minute walk and the two measures of heart rate variability. The results of the walk were related to 1-year mortality but not to the risk of sudden death. Both measures of heart rate variability had significant associations with 1-year mortality and with sudden death. However, only heart rate variability measured by using Poincaré plots was a predictor of total mortality and risk of sudden death, independent of left ventricular ejection fraction, serum levels of sodium, results of the 6-minute walk test, and the standard-deviation measure of heart rate variability. Results of the 6-minute walk have poor association with mortality and the two measures of heart rate variability in patients with advanced-stage heart failure and a low ejection fraction. Further studies are needed to determine the optimal clinical usefulness of the 6-minute walk and heart rate variability in patients with advanced-stage heart failure.
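
    Both variability measures reduce to simple statistics on the RR-interval series; a sketch using the standard SDNN and Poincaré SD1/SD2 definitions on synthetic RR data:

        import numpy as np

        def hrv_measures(rr_ms):
            """SDNN plus Poincare SD1/SD2 from successive RR intervals (ms)."""
            sdnn = np.std(rr_ms, ddof=1)                    # standard-deviation method
            d = np.diff(rr_ms)
            sd1 = np.sqrt(0.5 * np.var(d, ddof=1))          # width of the Poincare cloud
            sd2 = np.sqrt(max(2 * sdnn**2 - sd1**2, 0.0))   # length along identity line
            return sdnn, sd1, sd2

        rng = np.random.default_rng(0)
        rr = 800 + 0.1 * np.cumsum(rng.normal(0, 5, 5000)) + rng.normal(0, 20, 5000)
        print([round(v, 1) for v in hrv_measures(rr)])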

  13. 99mTc-sestamibi scintigraphy used to evaluate tumor response to neoadjuvant chemotherapy in locally advanced breast cancer: A quantitative analysis

    PubMed Central

    KOGA, KATIA HIROMOTO; MORIGUCHI, SONIA MARTA; NETO, JORGE NAHÁS; PERES, STELA VERZINHASSE; SILVA, EDUARDO TINÓIS DA; SARRI, ALMIR JOSÉ; MICHELIN, ODAIR CARLITO; MARQUES, MARIANGELA ESTHER ALENCAR; GRIVA, BEATRIZ LOTUFO

    2010-01-01

    To evaluate the tumor response to neoadjuvant chemotherapy, 99mTc-sestamibi breast scintigraphy was proposed as a quantitative method. Fifty-five patients with ductal carcinoma were studied. They underwent breast scintigraphy before and after neoadjuvant chemotherapy, along with clinical assessment and surgical specimen analysis. The regions of interest on the lesion and contralateral breast were identified, and the pixel counts were used to evaluate lesion uptake in relation to background radiation. The ratio of these counts before and after neoadjuvant chemotherapy was assessed. The decrease in uptake rate due to chemotherapy characterized the scintigraphic tumor response. The Kruskal-Wallis test was used to compare the mean scintigraphic tumor response across histological types. Dunn's multiple comparison test was used to detect differences between histological types. The Mann-Whitney test was used to compare means between quantitative and qualitative variables: scintigraphic tumor response vs. clinical response and uptake before chemotherapy vs. scintigraphic tumor response. Spearman's test was used to correlate the quantitative variables of clinical reduction in tumor size and scintigraphic tumor response. All of the variables compared presented significant differences. The change in 99mTc-sestamibi uptake noted on breast scintigraphy, from before to after neoadjuvant chemotherapy, may be used as an effective method for evaluating the response to neoadjuvant chemotherapy, since this quantification reflects the biological behavior of the tumor towards the chemotherapy regimen. Furthermore, additional analysis of the uptake rate before chemotherapy may accurately predict treatment response. PMID:22966312

  14. Antifungal susceptibility testing of Malassezia yeast: comparison of two different methodologies.

    PubMed

    Rojas, Florencia D; Córdoba, Susana B; de Los Ángeles Sosa, María; Zalazar, Laura C; Fernández, Mariana S; Cattana, María E; Alegre, Liliana R; Carrillo-Muñoz, Alfonso J; Giusiano, Gustavo E

    2017-02-01

    All Malassezia species are lipophilic; thus, modifications are required in susceptibility testing methods to ensure their growth. Antifungal susceptibility of Malassezia species using agar and broth dilution methods has been studied. Currently, few tests using disc diffusion methods are being performed. The aim was to evaluate the in vitro susceptibility of Malassezia yeast against antifungal agents using broth microdilution and disc diffusion methods, then to compare both methodologies. Fifty Malassezia isolates were studied. Microdilution method was performed as described in reference document and agar diffusion test was performed using antifungal tablets and discs. To support growth, culture media were supplemented. To correlate methods, linear regression analysis and categorical agreement was determined. The strongest linear association was observed for fluconazole and miconazole. The highest agreement between both methods was observed for itraconazole and voriconazole and the lowest for amphotericin B and fluconazole. Although modifications made to disc diffusion method allowed to obtain susceptibility data for Malassezia yeast, variables cannot be associated through a linear correlation model, indicating that inhibition zone values cannot predict MIC value. According to the results, disc diffusion assay may not represent an alternative to determine antifungal susceptibility of Malassezia yeast. © 2016 Blackwell Verlag GmbH.

  15. Mendelian randomization analysis of a time-varying exposure for binary disease outcomes using functional data analysis methods.

    PubMed

    Cao, Ying; Rajan, Suja S; Wei, Peng

    2016-12-01

    A Mendelian randomization (MR) analysis is performed to analyze the causal effect of an exposure variable on a disease outcome in observational studies, by using genetic variants that affect the disease outcome only through the exposure variable. This method has recently gained popularity among epidemiologists given the success of genetic association studies. Many exposure variables of interest in epidemiological studies are time varying, for example, body mass index (BMI). Although longitudinal data have been collected in many cohort studies, current MR studies only use one measurement of a time-varying exposure variable, which cannot adequately capture the long-term time-varying information. We propose using the functional principal component analysis method to recover the underlying individual trajectory of the time-varying exposure from the sparsely and irregularly observed longitudinal data, and then conduct MR analysis using the recovered curves. We further propose two MR analysis methods. The first assumes a cumulative effect of the time-varying exposure variable on the disease risk, while the second assumes a time-varying genetic effect and employs functional regression models. We focus on statistical testing for a causal effect. Our simulation studies mimicking the real data show that the proposed functional data analysis based methods incorporating longitudinal data have substantial power gains compared to standard MR analysis using only one measurement. We used the Framingham Heart Study data to demonstrate the promising performance of the new methods as well as inconsistent results produced by the standard MR analysis that relies on a single measurement of the exposure at some arbitrary time point. © 2016 WILEY PERIODICALS, INC.

  16. New non-invasive method for early detection of metabolic syndrome in the working population.

    PubMed

    Romero-Saldaña, Manuel; Fuentes-Jiménez, Francisco J; Vaquero-Abellán, Manuel; Álvarez-Fernández, Carlos; Molina-Recio, Guillermo; López-Miranda, José

    2016-12-01

    We proposed a new method for the early detection of metabolic syndrome in the working population, free of biomarkers (non-invasive) and based on anthropometric variables, and validated it in a new working population. Prevalence studies and diagnostic test accuracy studies to determine the anthropometric variables associated with metabolic syndrome, as well as the screening validity of the new method proposed, were carried out between 2013 and 2015 on 636 and 550 workers, respectively. The anthropometric variables analysed were: blood pressure, body mass index, waist circumference, waist-height ratio, body fat percentage and waist-hip ratio. We performed a multivariate logistic regression analysis and obtained receiver operating curves to determine the predictive ability of the variables. The new method for the early detection of metabolic syndrome we present is based on a decision tree using chi-squared automatic interaction detection (CHAID) methodology. The overall prevalence of metabolic syndrome was 14.9%. The area under the curve for waist-height ratio and waist circumference was 0.91 and 0.90, respectively. The anthropometric variables associated with metabolic syndrome in the adjusted model were waist-height ratio, body mass index, blood pressure and body fat percentage. The decision tree was configured from the waist-height ratio (⩾0.55) and hypertension (blood pressure ⩾128/85 mmHg), with a sensitivity of 91.6% and a specificity of 95.7% obtained. The early detection of metabolic syndrome in a healthy population is possible through non-invasive methods, based on anthropometric indicators such as waist-height ratio and blood pressure. This method has a high degree of predictive validity and its use can be recommended in any healthcare context. © The European Society of Cardiology 2016.
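
    The reported tree reduces to two checks on the thresholds quoted in the abstract; the function below is one plausible transcription (the exact branch logic and the handling of systolic vs. diastolic pressure are not specified in the abstract, so those details are assumptions):

        def mets_screen(waist_cm, height_cm, sbp_mmhg, dbp_mmhg):
            """One plausible reading of the reported CHAID tree: flag when
            waist-height ratio >= 0.55 and blood pressure >= 128/85 mmHg.
            Branch order and either/both BP logic are assumed, not published here."""
            whtr_high = waist_cm / height_cm >= 0.55
            bp_high = sbp_mmhg >= 128 or dbp_mmhg >= 85
            return whtr_high and bp_high

        print(mets_screen(95, 165, 130, 80))   # WHtR 0.58, SBP 130 -> True
        print(mets_screen(80, 170, 130, 88))   # WHtR 0.47 -> False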

  17. Intelligent Distribution Voltage Control with Distributed Generation =

    NASA Astrophysics Data System (ADS)

    Castro Mendieta, Jose

    In this thesis, three methods for the optimal participation of the reactive power of distributed generations (DGs) in unbalanced distribution networks have been proposed, developed, and tested. These new methods were developed with the objectives of maintaining voltage within permissible limits and reducing losses. The first method proposes an optimal participation of the reactive power of all devices available in the network. The proposed approach is validated by comparing the results with other methods reported in the literature. The proposed method was implemented using Simulink of Matlab and OpenDSS. Optimization techniques and the presentation of results are handled in Matlab. The co-simulation with Electric Power Research Institute's (EPRI) OpenDSS program solves a three-phase optimal power flow problem in the unbalanced IEEE 13 and 34-node test feeders. The results from this work showed a better loss reduction compared to the Coordinated Voltage Control (CVC) method. The second method aims to minimize the voltage variation on the pilot bus of a distribution network using DGs. It uses Pareto and Fuzzy-PID logic to reduce the voltage variation. Results indicate that the proposed method reduces the voltage variation more than the other methods. Simulink of Matlab and OpenDSS are used in the development of the proposed approach. The performance of the method is evaluated on the IEEE 13-node test feeder with one and three DGs. Variable and unbalanced loads are used, based on real consumption data, over a time window of 48 hours. The third method aims to minimize the reactive losses using DGs on distribution networks. This method analyzes the problem using the IEEE 13-node test feeder with three different loads and the IEEE 123-node test feeder with four DGs. The DGs can be fixed or variable. Results indicate that integrating DGs to optimize the reactive power of the network helps to maintain the voltage within the allowed limits and to reduce the reactive power losses. The thesis is presented in the form of three articles. The first article is published in the journal Electrical Power and Energy System, the second is published in the international journal Energies and the third was submitted to the journal Electrical Power and Energy System. Two other articles have been published in conferences with review committees. This work is based on six chapters, which are detailed in the various sections of the thesis.

  18. Batch process fault detection and identification based on discriminant global preserving kernel slow feature analysis.

    PubMed

    Zhang, Hanyuan; Tian, Xuemin; Deng, Xiaogang; Cao, Yuping

    2018-05-16

    As an attractive nonlinear dynamic data analysis tool, global preserving kernel slow feature analysis (GKSFA) has achieved great success in extracting the high nonlinearity and inherently time-varying dynamics of batch processes. However, GKSFA is an unsupervised feature extraction method and lacks the ability to utilize batch process class label information, which may not offer the most effective means for dealing with batch process monitoring. To overcome this problem, we propose a novel batch process monitoring method based on the modified GKSFA, referred to as discriminant global preserving kernel slow feature analysis (DGKSFA), by closely integrating discriminant analysis and GKSFA. The proposed DGKSFA method can extract discriminant features of a batch process as well as preserve global and local geometrical structure information of the observed data. For the purpose of fault detection, a monitoring statistic is constructed based on the distance between the optimal kernel feature vectors of test data and normal data. To tackle the challenging issue of nonlinear fault variable identification, a new nonlinear contribution plot method is also developed to help identify the fault variable after a fault is detected, which is derived from the idea of variable pseudo-sample trajectory projection in the DGKSFA nonlinear biplot. Simulation results conducted on a numerical nonlinear dynamic system and the benchmark fed-batch penicillin fermentation process demonstrate that the proposed process monitoring and fault diagnosis approach can effectively detect faults and distinguish fault variables from normal variables. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  19. High dimensional model representation method for fuzzy structural dynamics

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
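
    A compact illustration of a first-order cut-HDMR expansion on a toy function (the anchor point, test function, and weak-coupling setup are arbitrary choices for the sketch):

        import numpy as np

        def cut_hdmr_first_order(f, x0, x):
            """First-order cut-HDMR: f(x) ~ f(x0) + sum_i [f(x0 with x_i) - f(x0)].
            Function-evaluation cost grows linearly in dimension, not exponentially."""
            f0 = f(x0)
            approx = f0
            for i in range(len(x0)):
                xi = x0.copy()
                xi[i] = x[i]               # vary one coordinate along the "cut"
                approx += f(xi) - f0
            return approx

        f = lambda v: np.sin(v[0]) + 0.5 * v[1] ** 2 + 0.1 * v[0] * v[2]  # weak coupling
        x0 = np.zeros(3)                   # cut point (e.g., membership-function core)
        x = np.array([0.3, -0.2, 0.5])
        print(cut_hdmr_first_order(f, x0, x), f(x))  # close when interactions are weak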

  20. Original method to compute epipoles using variable homography: application to measure emergent fibers on textile fabrics

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Cudel, Christophe; Kohler, Sophie; Fontaine, Stéphane; Haeberlé, Olivier; Klotz, Marie-Louise

    2012-04-01

    Fabric smoothness is a key factor in determining the quality of finished textile products and has great influence on the functionality of industrial textiles and high-end textile products. With the popularization of the zero-defect industrial concept, identifying and measuring defective material in the early stages of production is of great interest to the industry. In the current market, many systems are able to achieve automatic monitoring and control of fabric, paper, and nonwoven material during the entire production process; however, online measurement of hairiness is still an open topic and highly desirable for industrial applications. We propose a computer vision approach to compute epipoles by using variable homography, which can be used to measure emergent fiber length on textile fabrics. The main challenges addressed in this paper are the application of variable homography to textile monitoring and measurement, as well as the accuracy of the estimated calculation. We propose that a fibrous structure can be considered as a two-layer structure, and then we show how variable homography combined with epipolar geometry can estimate the length of the fiber defects. Simulations are carried out to show the effectiveness of this method. The true length of selected fibers is measured precisely using a digital optical microscope, and then the same fibers are tested by our method. Our experimental results suggest that smoothness monitored by variable homography is an accurate and robust method of quality control for important industrial fabrics.

  1. Exact test-based approach for equivalence test with parameter margin.

    PubMed

    Cassie Dong, Xiaoyu; Bian, Yuanyuan; Tsong, Yi; Wang, Tianhua

    2017-01-01

    The equivalence test has a wide range of applications in pharmaceutical statistics, where we need to test for the similarity between two groups. In recent years, the equivalence test has been used in assessing the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin in the form of ±f × σR, where the margin is a function of the reference variability σR. In practice, this margin is unknown and is estimated from the sample as ±f × SR. If we use this estimated margin with the classic t-test statistic for the equivalence test on the means, both Type I and Type II error rates may inflate. To resolve this issue, we develop an exact-based test method and compare this method with other proposed methods, such as the Wald test, the constrained Wald test, and the Generalized Pivotal Quantity (GPQ), in terms of Type I error rate and power. Application of these methods to data analysis is also provided in this paper. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
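
    A simulation sketch of the underlying problem: a two-one-sided-tests (TOST) style equivalence test whose margin ±f × σR is replaced by the plug-in estimate ±f × SR, which can distort the error rates the exact-based method is designed to control (f, sample sizes, and placing the true mean exactly on the margin are illustrative settings):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        f, nT, nR, sigma = 1.5, 10, 10, 1.0
        reps, rejections = 10000, 0

        for _ in range(reps):
            T = rng.normal(f * sigma, sigma, nT)      # true difference sits ON the margin
            R = rng.normal(0.0, sigma, nR)
            margin = f * R.std(ddof=1)                # plug-in margin, the problematic step
            se = np.sqrt(T.var(ddof=1) / nT + R.var(ddof=1) / nR)
            crit = stats.t.ppf(0.95, nT + nR - 2)
            t_low = (T.mean() - R.mean() + margin) / se
            t_upp = (T.mean() - R.mean() - margin) / se
            if t_low > crit and t_upp < -crit:        # TOST: reject both one-sided nulls
                rejections += 1

        print("empirical Type I error:", rejections / reps)  # can drift from nominal 0.05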

  2. Estimating variability in grain legume yields across Europe and the Americas

    NASA Astrophysics Data System (ADS)

    Cernay, Charles; Ben-Ari, Tamara; Pelzer, Elise; Meynard, Jean-Marc; Makowski, David

    2015-06-01

    Grain legume production in Europe has recently come under scrutiny. Although legume crops are often promoted to provide environmental services, European farmers tend to turn to non-legume crops. It is assumed that high variability in legume yields explains this aversion, but so far this hypothesis has not been tested. Here, we estimate the variability of major grain legume and non-legume yields in Europe and the Americas from yield time series over 1961-2013. Results show that grain legume yields are significantly more variable than non-legume yields in Europe. These differences are smaller in the Americas. Our results are robust to the choice of statistical method. In all regions, crops with high yield variability are allocated to less than 1% of cultivated areas. Although the expansion of grain legumes in Europe may be hindered by high yield variability, some species display risk levels compatible with the development of specialized supply chains.

  3. Latent mnemonic strengths are latent: a comment on Mickes, Wixted, and Wais (2007).

    PubMed

    Rouder, Jeffrey N; Pratte, Michael S; Morey, Richard D

    2010-06-01

    Mickes, Wixted, and Wais (2007) proposed a simple test of latent strength variability in recognition memory. They asked participants to rate their confidence using either a 20-point or a 99-point strength scale and plotted distributions of the resulting ratings. They found 25% more variability in ratings for studied than for new items, which they interpreted as providing evidence that latent mnemonic strength distributions are 25% more variable for studied than for new items. We show here that this conclusion is critically dependent on assumptions--so much so that these assumptions determine the conclusions. In fact, opposite conclusions, such that study does not affect the variability of latent strength, may be reached by making different but equally plausible assumptions. Because all measurements of mnemonic strength variability are critically dependent on untestable assumptions, all are arbitrary. Hence, there is no principled method for assessing the relative variability of latent mnemonic strength distributions.

  4. Integrated Site Investigation Methods and Modeling: Recent Developments at the BHRS (Invited)

    NASA Astrophysics Data System (ADS)

    Barrash, W.; Bradford, J. H.; Cardiff, M. A.; Dafflon, B.; Johnson, B. A.; Malama, B.; Thoma, M. J.

    2010-12-01

    The Boise Hydrogeophysical Research Site (BHRS) is a field-scale test facility in an unconfined aquifer with the goals of: developing cost-effective, non-invasive methods for quantitative characterization of heterogeneous aquifers using hydrologic and geophysical techniques; understanding fundamental relations and processes at multiple scales; and testing theories and models for groundwater flow and solute transport. The design of the BHRS supports a wide range of single-well, cross-hole, multiwell and multilevel hydrologic, geophysical, and combined hydrogeophysical experiments. New installations support direct and geophysical monitoring of hydrologic fluxes and states from the aquifer through the vadose zone to the atmosphere, including ET and river boundary behavior. Efforts to date have largely focused on establishing the 1D, 2D, and 3D distributions of geologic, hydrologic, and geophysical parameters which can then be used as the basis for testing methods to integrate direct and indirect data and invert for “known” parameter distributions, material boundaries, and tracer test or other system state behavior. Aquifer structure at the BHRS is hierarchical and includes layers and lenses that are recognized with geologic, hydrologic, radar, electrical, and seismic methods. Recent advances extend findings and method developments, but also highlight the need to examine assumptions and understand secular influences when designing and modeling field tests. Examples of advances and caveats include: New high-resolution 1D K profiles obtained from multi-level slug tests (inversion improves with priors for aquifer K, wellbore skin, and local presence of roots) show variable correlation with porosity and bring into question a Kozeny-Carman-type relation for much of the system. Modeling of 2D conservative tracer transport through a synthetic BHRS-like heterogeneous system shows the importance of including porosity heterogeneity (rather than assuming constant porosity for an aquifer) in addition to K heterogeneity. Similarly, 3D transient modeling of a conservative tracer test at the BHRS improves significantly with the use of prior geophysical information for layering and parameter structure and with use of both variable porosity and K. Joint inversion of multiple intersecting 2D radar tomograms gives well-resolved and consistent 3D distributions of porosity and unit boundaries that are largely correlated with neutron-porosity log and other site data, but the classic porosity-dielectric relation does not hold for one stratigraphic unit that also is recognized as anomalous with capacitive resistivity logs (i.e., cannot assume one petrophysical relation holds through a given aquifer system). Advances are being made in the new method of hydraulic tomography (2D with coincident electrical geophysics; 3D will be supplemented with priors); caveats here include the importance of boundary conditions and even ET effects. Also integrated data collection and modeling with multiple geophysical and hydrologic methods show promise for high-resolution quantification of vadose zone moisture and parameter distributions to improve variably saturated process models.

  5. Fatigue Behavior of AM60B Subjected to Variable Amplitude Loading

    NASA Astrophysics Data System (ADS)

    Kang, H.; Kari, K.; Khosrovaneh, A. K.; Nayaki, R.; Su, X.; Zhang, L.; Lee, Y.-L.

    Magnesium alloys are considered an alternative material for reducing vehicle weight, as they are roughly 33% lighter than aluminum alloys. There has been a significant expansion in the application of magnesium alloys to automotive components in an effort to improve fuel efficiency through vehicle mass reduction. In this project, a simplified front shock tower of a passenger vehicle was constructed with various magnesium alloys. To predict the fatigue behavior of the structure, fatigue properties of the magnesium alloy (AM60B) were determined from strain-controlled fatigue tests. Notched specimens were also tested with three different variable amplitude loading profiles obtained from the shock tower of a similar-sized vehicle. The test results were compared with various fatigue prediction results. The effects of mean stress and of the fatigue prediction method were discussed.

  6. Neural network models - a novel tool for predicting the efficacy of growth hormone (GH) therapy in children with short stature.

    PubMed

    Smyczynska, Joanna; Hilczer, Maciej; Smyczynska, Urszula; Stawerska, Renata; Tadeusiewicz, Ryszard; Lewinski, Andrzej

    2015-01-01

    The leading methods for predicting growth hormone (GH) therapy effectiveness are multiple linear regression (MLR) models. To the best of our knowledge, we are the first to apply artificial neural networks (ANN) to this problem. For ANNs, there is no need to assume the functional form linking independent and dependent variables. The aim of this study is to compare ANN and MLR models of GH therapy effectiveness. The analysis comprised the data of 245 GH-deficient children (170 boys) treated with GH up to final height (FH). Independent variables included: patients' height, pre-treatment height velocity, chronological age, bone age, gender, pubertal status, parental heights, GH peak in 2 stimulation tests, and IGF-I concentration. The output variable was FH. For the testing dataset, the MLR model predicted FH SDS with an average error (RMSE) of 0.64 SD, explaining 34.3% of its variability; the ANN model derived on the same pre-processed data predicted FH SDS with an RMSE of 0.60 SD, explaining 42.0% of its variability; the ANN model derived on raw data predicted FH with an RMSE of 3.9 cm (0.63 SD), explaining 78.7% of its variability. ANNs seem to be a valuable tool in the prediction of GH treatment effectiveness, especially since they can be applied to raw clinical data.
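
    A minimal sketch of the MLR-versus-ANN comparison on synthetic data with a mildly nonlinear predictor-outcome link (the feature set, network size, and data-generating model are placeholders, not the study's variables):

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        n = 245
        X = rng.normal(size=(n, 10))                 # stand-in auxological/hormonal inputs
        y = X[:, 0] + np.tanh(2 * X[:, 1]) + 0.3 * rng.normal(size=n)  # mild nonlinearity

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        mlr = LinearRegression().fit(X_tr, y_tr)
        ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                           random_state=0).fit(X_tr, y_tr)
        for name, m in [("MLR", mlr), ("ANN", ann)]:
            rmse = mean_squared_error(y_te, m.predict(X_te)) ** 0.5
            print(name, "test RMSE:", round(rmse, 3))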

  7. Verbal fluency in elderly with and without hypertension and diabetes from the FIBRA study in Ermelino Matarazzo

    PubMed Central

    Morelli, Nathalia Lais; Cachioni, Meire; Lopes, Andrea; Batistoni, Samila Sathler Tavares; Falcão, Deusivania Vieira da Silva; Neri, Anita Liberalesso; Yassuda, Monica Sanches

    2017-01-01

    ABSTRACT. Background: There are few studies on the qualitative variables derived from the animal category verbal fluency test (VF), especially with data originating from low-income samples of community-based studies. Objective: To compare elderly with and without hypertension (HTN) and diabetes mellitus (DM) regarding the total number of animals spoken, number of categories, groups and category switches on the VF test. Methods: We used the database of the FIBRA (Frailty in Brazilian Elderly) community-based study. The variables number of Categories, Groups and Category Switches were created for each participant. The total sample (n = 384) was divided into groups of elderly who reported having HTN, DM, both HTN and DM, or neither of these conditions. Results: There were no significant differences between the groups with and without these chronic diseases for VF total score or for the qualitative variables. Conclusion: Among independent community-dwelling elderly, the qualitative variables derived from the VF animal category may not add information regarding the cognitive profile of elderly with chronic diseases. Total VF score and the qualitative variables Category, Group and Switching did not differentiate elderly with and without HTN and DM. PMID:29354222

  8. Radio-Frequency Driven Dielectric Heaters for Non-Nuclear Testing in Nuclear Core Development

    NASA Technical Reports Server (NTRS)

    Sims, William Herbert, III (Inventor); Godfroy, Thomas J. (Inventor); Bitteker, Leo (Inventor)

    2006-01-01

    Apparatus and methods are provided through which a radiofrequency dielectric heater has a cylindrical form factor, a variable thermal energy deposition through variations in geometry and composition of a dielectric, and/or has a thermally isolated power input.

  9. Evaluation of a Two-Stage Neural Model of Glaucomatous Defect: An Approach to Reduce Test-Retest Variability

    PubMed Central

    PAN, FEI; SWANSON, WILLIAM H.; DUL, MITCHELL W.

    2006-01-01

    Purpose. The purpose of this study is to model perimetric defect and variability and identify stimulus conditions that can reduce variability while retaining good ability to detect glaucomatous defects. Methods. The two-stage neural model of Swanson et al. [1] was extended to explore relations among perimetric defect, response variability, and heterogeneous glaucomatous ganglion cell damage. Predictions of the model were evaluated by testing patients with glaucoma using a standard luminance increment 0.43° in diameter and two innovative stimuli designed to tap cortical mechanisms tuned to low spatial frequencies. The innovative stimuli were a luminance-modulated Gabor stimulus (0.5 c/deg) and circular equiluminant red-green chromatic stimuli whose sizes were close to normal Ricco’s areas for the chromatic mechanism. Seventeen patients with glaucoma were each tested twice within a 2-week period. Sensitivities were measured at eight locations at eccentricities from 10° to 21°, selected in terms of the retinal nerve fiber bundle patterns. Defect depth and response (test-retest) variability were compared for the innovative stimuli and the standard stimulus. Results. The model predicted that response variability in defective areas would be lower for our innovative stimuli than for the conventional perimetric stimulus with similar defect depths if detection of the chromatic and Gabor stimuli was mediated by spatial mechanisms tuned to low spatial frequencies. Experimental data were consistent with these predictions. Depth of defect was similar for all three stimuli (F = 1.67, p > 0.19). Mean response variability was lower for the chromatic stimulus than for the other stimuli (F = 5.58, p < 0.005) and was lower for the Gabor stimulus than for the standard stimulus in areas with more severe defects (t = 2.68, p < 0.005). Variability increased with defect depth for the standard and Gabor stimuli (p < 0.005) but not for the chromatic stimulus (slope less than zero). Conclusions. Use of large perimetric stimuli detected by cortical mechanisms tuned to low spatial frequencies can make it possible to lower response variability without compromising the ability to detect glaucomatous defects. PMID:16840874
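
    The core analysis pattern here, relating test-retest variability to defect depth, can be sketched as below. The perimetry values are synthetic placeholders, not the study's measurements, and the regression is a generic illustration rather than the authors' statistical model.

    ```python
    # Sketch: per-location test-retest variability regressed on defect depth
    # (synthetic data standing in for 8 loci x 17 patients).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    defect_depth = rng.uniform(0, 20, size=136)      # dB loss (placeholder)
    noise_sd = 0.5 + 0.1 * defect_depth              # variability grows with defect
    test1 = -defect_depth + rng.normal(scale=noise_sd)
    test2 = -defect_depth + rng.normal(scale=noise_sd)

    variability = np.abs(test1 - test2)              # test-retest spread per location
    slope, intercept, r, p, se = stats.linregress(defect_depth, variability)
    print(f"slope = {slope:.3f} dB/dB, p = {p:.2g}") # positive slope: variability
                                                     # rises with defect depth
    ```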

  10. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison

    PubMed Central

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S.; Sinha, Saurabh

    2011-01-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, ‘enhancers’), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for ‘motif-blind’ CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to ‘supervise’ the search. We propose a new statistical method, based on ‘Interpolated Markov Models’, for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. PMID:21821659
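
    A minimal sketch of the interpolated-Markov-model idea, in which higher-order conditional probabilities are blended with lower orders according to how well each context is sampled, is given below. It is a toy illustration of the general technique, not the authors' implementation; the interpolation weights and smoothing are simplified assumptions.

    ```python
    # Toy interpolated Markov model (IMM) for scoring candidate CRM windows.
    # Interpolation weights and +1 smoothing are simplifying assumptions.
    import math
    from collections import defaultdict

    def train_counts(seqs, max_order=3):
        """Count base occurrences after every context of length 0..max_order."""
        counts = defaultdict(lambda: defaultdict(int))
        for s in seqs:
            for k in range(max_order + 1):
                for i in range(k, len(s)):
                    counts[s[i - k:i]][s[i]] += 1
        return counts

    def imm_prob(counts, context, base, order):
        """P(base | context), backing off toward uniform with count-based weights."""
        if order < 0:
            return 0.25                               # uniform over A, C, G, T
        ctx = context[len(context) - order:] if order else ""
        total = sum(counts[ctx].values())
        lam = total / (total + 400.0)                 # trust long contexts only when sampled
        p_here = (counts[ctx][base] + 1) / (total + 4)
        return lam * p_here + (1 - lam) * imm_prob(counts, context, base, order - 1)

    def score_window(window, crm_counts, bg_counts, order=3):
        """Log-likelihood ratio of a window under CRM vs background models."""
        return sum(math.log(imm_prob(crm_counts, window[max(0, i - order):i], b, min(order, i))
                            / imm_prob(bg_counts, window[max(0, i - order):i], b, min(order, i)))
                   for i, b in enumerate(window))

    crm = train_counts(["ACGTACGTTTGACGT", "TTGACGTACGT"])     # toy known CRMs
    bg = train_counts(["AAAACCCCGGGGTTTT", "ACACACACGTGTGT"])  # toy background
    print(score_window("ACGTACG", crm, bg))
    ```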

  11. A general statistical test for correlations in a finite-length time series.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-06-07

    The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
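
    The distinction drawn above between the two estimators can be made concrete with the short sketch below, which computes the autocorrelation of synthetic i.i.d. data both directly (moving-average) and via the FFT (circular). This is a generic illustration, not the paper's derivation.

    ```python
    # Sketch: two autocorrelation estimators for an i.i.d. series.
    import numpy as np

    def acf_direct(x):
        """Moving-average (lag-by-lag) estimator."""
        x = x - x.mean()
        n = len(x)
        return np.array([np.dot(x[:n - k], x[k:]) / (n - k)
                         for k in range(n)]) / x.var()

    def acf_fft(x):
        """Fourier-transform (circular) estimator."""
        x = x - x.mean()
        f = np.fft.fft(x)
        acov = np.fft.ifft(f * np.conj(f)).real / len(x)
        return acov / acov[0]

    rng = np.random.default_rng(2)
    x = rng.normal(size=1024)            # i.i.d. series: true ACF is zero beyond lag 0
    print(acf_direct(x)[:3], acf_fft(x)[:3])
    ```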

  12. Evoked Cavernous Activity: Normal Values

    PubMed Central

    Yang, Claire C.; Yilmaz, Ugur; Vicars, Brenda G.

    2009-01-01

    Purpose We present normative data for evoked cavernous activity (ECA), an electrodiagnostic test that evaluates the autonomic innervation of the corpora cavernosa. Material and Methods We enrolled 37 healthy, sexually active and potent men for the study. Each subject completed an IIEF questionnaire and underwent simultaneous ECA and hand and foot sympathetic skin response (SSR) testing. The sympathetic skin response tests were performed as autonomic controls. Results Thirty six men had discernible ECA and SSRs. The mean IIEF erectile domain score was 27. ECA is a low frequency wave that is morphologically and temporally similar in both corpora. The amplitudes of the responses were highly variable. The latencies, although variable, always occurred after the hand SSR. There was no change in the quality or the latency of the ECA with age. Conclusions ECA is measurable in healthy, potent men in a wide range of ages. Similar to other evoked responses of the autonomic nervous system, the measured waveform is highly variable, but its presence is consistent. The association between ECA and erectile function is to be determined. PMID:18423763

  13. U.S. residential consumer product information: Validation of methods for post-stratification weighting of Amazon Mechanical Turk surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenblatt, Jeffery B.; Yang, Hung-Chia; Desroches, Louis-Benoit

    2013-04-01

    We present two post-stratification weighting methods to validate survey data collected using Amazon Mechanical Turk (AMT). Two surveys focused on appliance and consumer electronics devices were administered in the spring and summer of 2012 to each of approximately 3,000 U.S. households. Specifically, the surveys asked questions about residential refrigeration products, televisions (TVs) and set-top boxes (STBs). Filtered data were assigned weights using each of two weighting methods, termed “sequential” and “simultaneous,” by examining up to eight demographic variables (income, education, gender, race, Hispanic origin, number of occupants, ages of occupants, and geographic region) in comparison to reference U.S. demographic data from the 2009 Residential Energy Consumption Survey (RECS). Five key questions from the surveys (number of refrigerators, number of freezers, number of TVs, number of STBs and primary service provider) were evaluated with a set of statistical tests to determine whether either method improved the agreement of AMT with the reference data, and if so, which method was better. The statistical tests used were: differences in proportions, distributions of proportions (using Pearson’s chi-squared test), and differences in average numbers of devices as functions of all demographic variables. The results indicated that both methods generally improved the agreement between AMT and reference data, sometimes greatly, but that the simultaneous method was usually superior to the sequential method. Some differences in sample populations were found between the AMT and reference data. Differences in the proportion of STBs reflected large changes in the STB market since the time our reference data were acquired in 2009. Differences in the proportions of some primary service providers suggested real sample bias, with the possible explanation that AMT users are more likely to subscribe to providers who also provide home internet service. Differences in other variables, while statistically significant in some cases, were nonetheless considered to be minor. Depending on the intended purpose of the data collected using AMT, these biases may or may not be important; to correct them, additional questions and/or further post-survey adjustments could be employed. In general, based on the analysis methods and the sample datasets used in this study, AMT surveys appeared to provide useful data on appliance and consumer electronics devices.
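
    The general flavor of post-stratification weighting can be illustrated with a raking (iterative proportional fitting) sketch like the one below; the margins and variables are toy values, not the RECS reference data, and the paper's specific “sequential” and “simultaneous” methods are not reproduced here.

    ```python
    # Sketch: raking (iterative proportional fitting) of survey weights to
    # demographic margins. Toy data; not the paper's weighting methods.
    import numpy as np
    import pandas as pd

    def rake(df, margins, n_iter=50):
        w = np.ones(len(df))
        for _ in range(n_iter):
            for var, target in margins.items():          # cycle over margins
                total = w.sum()
                totals = pd.Series(w).groupby(df[var].values).sum()
                for level, share in target.items():
                    mask = (df[var] == level).to_numpy()
                    w[mask] *= share * total / totals[level]
        return w * (len(df) / w.sum())                    # preserve the sample size

    df = pd.DataFrame({"gender": ["m", "f", "f", "m", "f"],
                       "region": ["NE", "S", "W", "S", "NE"]})
    margins = {"gender": {"m": 0.49, "f": 0.51},
               "region": {"NE": 0.2, "S": 0.4, "W": 0.4}}
    df["weight"] = rake(df, margins)
    print(df)
    ```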

  14. Predictors and Moderators of Response to Cognitive Behavioral Therapy and Medication for the Treatment of Binge Eating Disorder

    PubMed Central

    Grilo, Carlos. M.; Masheb, Robin M.; Crosby, Ross D.

    2012-01-01

    Objective To examine predictors and moderators of response to cognitive-behavioral therapy (CBT) and medication treatments for binge-eating disorder (BED). Method 108 BED patients in a randomized double-blind placebo-controlled trial testing CBT and fluoxetine treatments were assessed prior to, throughout, and after treatment. Demographic factors, psychiatric and personality-disorder co-morbidity, eating-disorder psychopathology, psychological features, and two sub-typing methods (negative affect, overvaluation of shape/weight) were tested as predictors and moderators for the primary outcome of remission from binge eating and four secondary dimensional outcomes (binge-eating frequency, eating-disorder psychopathology, depression, and body mass index). Mixed-effects models analyzed all available data for each outcome variable. In each model, effects for baseline value and treatment were included with tests of both prediction and moderation effects. Results Several demographic and clinical variables significantly predicted and/or moderated outcomes. One demographic variable signaled a statistical advantage for medication-only (younger participants had greater binge-eating reductions), whereas several demographic and clinical variables (lower self-esteem, negative affect, and overvaluation of shape/weight) signaled better improvements for patients receiving CBT. Overvaluation was the most salient predictor/moderator of outcomes. Overvaluation significantly predicted binge-eating remission (29% of participants with versus 57% of participants without overvaluation remitted). Overvaluation was especially associated with lower remission rates among patients receiving medication-only (10% versus 42% for participants without overvaluation). Overvaluation moderated dimensional outcomes: participants with overvaluation had significantly greater reductions in eating-disorder psychopathology and depression levels if receiving CBT. Overvaluation predictor/moderator findings persisted after controlling for negative affect. Conclusions Our findings have clinical utility for the prescription of CBT and medication and implications for refinement of the BED diagnosis. PMID:22289130
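
    The moderator analyses described above follow a common pattern: moderation is tested as a treatment-by-predictor interaction in a mixed-effects model over repeated assessments. A minimal sketch, with hypothetical variable and file names rather than the trial's dataset:

    ```python
    # Sketch: moderator test via a treatment x predictor interaction in a
    # mixed-effects model. File and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("bed_trial.csv")    # long format: id, time, treatment,
                                         # overvaluation, binge_freq

    model = smf.mixedlm("binge_freq ~ time + treatment * overvaluation",
                        data=df, groups=df["id"])
    result = model.fit()
    print(result.summary())              # a significant treatment:overvaluation
                                         # coefficient indicates moderation
    ```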

  15. Combined use of field and laboratory testing to predict preferred flow paths in a heterogeneous aquifer.

    PubMed

    Gierczak, R F D; Devlin, J F; Rudolph, D L

    2006-01-05

    Elevated nitrate concentrations within a municipal water supply aquifer led to pilot testing of a field-scale, in situ denitrification technology based on carbon substrate injections. In advance of the pilot test, detailed characterization of the site was undertaken. The aquifer consisted of complex, discontinuous and interstratified silt, sand and gravel units, 15-40 m deep, similar to other well-studied aquifers of glaciofluvial origin. Laboratory and field tests, including a conservative tracer test, a pumping test, a borehole flowmeter test, grain-size analysis of drill cuttings and core material, and permeameter testing of core samples, were carried out over the most productive depth range (27-40 m), and the results were compared. The velocity profiles derived from the tracer tests served as the basis for comparison with the other methods. The spatial variation in K based on grain-size analysis, using the Hazen method, was poorly correlated with the breakthrough data. Trends in relative hydraulic conductivity (K/K(avg)) from permeameter testing compared somewhat better. However, the trends in transient drawdown with depth, measured in multilevel sampling points, corresponded particularly well with those of solute mass flux. Estimates of absolute K, based on standard pumping test analysis of the multilevel drawdown data, were inversely correlated with the tracer test data. The inverse nature of the correlation was attributed to assumptions in the transient drawdown packages that were inconsistent with the variable diffusivities encountered at the scale of the measurements. Collectively, the data showed that despite a relatively low variability in K within the aquifer under study (within a factor of 3), water and solute mass fluxes were concentrated in discrete intervals that could be targeted for later bioremediation.
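
    For reference, the Hazen grain-size estimate mentioned above is the empirical relation K ≈ C·d10², with K in cm/s, d10 the effective grain size in mm, and C an empirical coefficient commonly taken near 1.0 (roughly 0.4-1.2). A minimal sketch:

    ```python
    # Sketch: Hazen grain-size estimate of hydraulic conductivity.
    def hazen_k(d10_mm, c=1.0):
        """Hydraulic conductivity in cm/s from effective grain size d10 (mm)."""
        return c * d10_mm ** 2

    # Example: a medium sand with d10 = 0.3 mm
    print(f"K ~ {hazen_k(0.3):.3f} cm/s")   # about 0.09 cm/s
    ```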

  16. First-principles quantum transport method for disordered nanoelectronics: Disorder-averaged transmission, shot noise, and device-to-device variability

    NASA Astrophysics Data System (ADS)

    Yan, Jiawei; Wang, Shizhuo; Xia, Ke; Ke, Youqi

    2017-03-01

    Because disorders are inevitable in realistic nanodevices, the capability to quantitatively simulate disorder effects on electron transport is indispensable for quantum transport theory. Here, we report a unified and effective first-principles quantum transport method for analyzing the effects of chemical or substitutional disorder on the transport properties of nanoelectronics, including the averaged transmission coefficient, shot noise, and disorder-induced device-to-device variability. All our theoretical formulations and numerical implementations are worked out within the framework of the tight-binding linear muffin tin orbital method. In this method, we carry out the electronic structure calculation with density functional theory, treat the nonequilibrium statistics with the nonequilibrium Green's function method, and include the effects of multiple impurity scattering with the generalized nonequilibrium vertex correction (NVC) method in the coherent potential approximation (CPA). The generalized NVC equations are solved from first principles to obtain various disorder-averaged two-Green's-function correlators. This method provides a unified way to obtain different disorder-averaged transport properties of disordered nanoelectronics from first principles. To test our implementation, we apply the method to investigate shot noise in a disordered copper conductor, and find that our results for different disorder concentrations all approach the universal Fano factor 1/3. As a second test, we calculate the device-to-device variability in the spin-dependent transport through the disordered Cu/Co interface and find that the conductance fluctuation is very large in the minority spin channel and negligible in the majority spin channel. Our results agree well with experimental measurements and other theories. In both applications, we show that the generalized nonequilibrium vertex corrections play a decisive role in the electron transport simulation. Our results demonstrate the effectiveness of the first-principles generalized CPA-NVC for atomistic analysis of disordered nanoelectronics, extending the capability of quantum transport simulation.
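
    The universal 1/3 Fano factor quoted above follows from the standard scattering-theory expression F = Σ Tn(1 − Tn) / Σ Tn evaluated over the bimodal transmission distribution of a diffusive conductor. A small numerical sketch of that textbook result (generic scattering theory, not the authors' CPA-NVC implementation):

    ```python
    # Sketch: shot-noise Fano factor from transmission eigenvalues.
    import numpy as np

    def fano(T):
        T = np.asarray(T)
        return np.sum(T * (1 - T)) / np.sum(T)

    # Bimodal distribution of a diffusive wire: T = 1/cosh^2(x), x uniform
    rng = np.random.default_rng(3)
    x = rng.uniform(0, 8, size=200_000)
    T = 1 / np.cosh(x) ** 2
    print(f"F = {fano(T):.3f}")          # tends toward 1/3
    ```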

  17. Cloning and analysis of the gene for a major surface antigen of Mycoplasma gallisepticum.

    PubMed

    Spencer, Denise L; Kurth, Kathy Toohey; Menon, Sreekumar A; VanDyk, Tina; Minion, F Chris

    2002-01-01

    Mycoplasma gallisepticum infects a wide variety of gallinaceous birds including chickens, turkeys, and pheasants. Infection occurs both horizontally and vertically; thus, controlling the spread of M. gallisepticum to noninfected flocks is difficult. Continual monitoring is necessary to identify infected flocks even under the most stringent infection-control practices. Monitoring, however, is usually performed by measuring hemagglutination activity (HA) in serum, an insensitive and variable test. Variability in the HA test arises from differences in agglutination antigen, changes in antigenic profiles of the M. gallisepticum strain, and variability in reading the agglutination reaction. Enzyme-linked immunosorbent assays (ELISAs) are the preferred method of testing because of the ease of obtaining sera and the sensitivity and reproducibility of the assays, but the ELISA suffers from a lack of standardization in the test antigen. The ELISA test will be more easily accepted once the test antigen has been standardized. To this end, we have identified, cloned, and characterized the gene for an antigen that has potential as a species-specific antigen for M. gallisepticum. The gene encodes a 75-kD protein, P75, that is recognized during natural infections. Recombinant P75 is not recognized in immunoblots by convalescent sera produced in chickens infected with Mycoplasma synoviae, Mycoplasma gallinarum, and Mycoplasma gallinaceum or in turkeys infected with Mycoplasma meleagridis.

  18. A method for monitoring nuclear absorption coefficients of aviation fuels

    NASA Technical Reports Server (NTRS)

    Sprinkle, Danny R.; Shen, Chih-Ping

    1989-01-01

    A technique for monitoring variability in the nuclear absorption characteristics of aviation fuels has been developed. It is based on a highly collimated low energy gamma radiation source and a sodium iodide counter. The source and the counter assembly are separated by a geometrically well-defined test fuel cell. A computer program for determining the mass attenuation coefficient of the test fuel sample, based on the data acquired for a preset counting period, has been developed and tested on several types of aviation fuel.
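
    The data reduction implied above is the Beer-Lambert relation: the linear attenuation coefficient is μ = ln(I0/I)/x, and the mass attenuation coefficient is μ/ρ. A minimal sketch with placeholder count values:

    ```python
    # Sketch: mass attenuation coefficient from gamma count rates via
    # Beer-Lambert. Count values and density are illustrative only.
    import math

    def mass_attenuation(counts_empty, counts_fuel, path_cm, density_g_cm3):
        """Mass attenuation coefficient (cm^2/g) from gamma counts."""
        mu = math.log(counts_empty / counts_fuel) / path_cm   # linear coeff, 1/cm
        return mu / density_g_cm3

    # Example: 10 cm fuel cell, fuel density ~0.80 g/cm^3 (placeholders)
    print(f"mu/rho ~ {mass_attenuation(1.2e6, 9.0e5, 10.0, 0.80):.4f} cm^2/g")
    ```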

  19. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    NASA Astrophysics Data System (ADS)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and growth of calculation time with increasing problem size remain the major issues to be solved before the technique can be applied to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes variables approaching their upper or lower bounds and later releases them as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe numerical results on the commonly used benchmark problems of “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on large-scale ELD problems (Economic Load Dispatch problems in electric power supply scheduling) are also described as a practical industrial application.
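
    A toy rendering of the fix-and-release idea, applied to a simple bound-constrained convex QP rather than the authors' full interior point algorithm, is sketched below: variables at their bounds are held fixed, the reduced problem is solved on the free variables, and fixed variables whose gradients point back into the box are released.

    ```python
    # Sketch: fix-and-release loop for min 0.5 x'Qx + c'x, l <= x <= u
    # (Q assumed symmetric positive definite). A toy active-set-style loop,
    # not the paper's interior point method.
    import numpy as np

    def fix_and_release_qp(Q, c, l, u, max_iter=100):
        n = len(c)
        x = np.clip(np.zeros(n), l, u)
        for _ in range(max_iter):
            free = (x > l) & (x < u)
            grad = Q @ x + c
            # release bound-fixed variables whose gradient points inward
            free |= ((x <= l) & (grad < 0)) | ((x >= u) & (grad > 0))
            if not free.any():
                break                                   # optimal at the bounds
            step = np.zeros(n)
            step[free] = np.linalg.solve(Q[np.ix_(free, free)], -grad[free])
            x_new = np.clip(x + step, l, u)
            if np.linalg.norm(x_new - x) < 1e-12:
                break
            x = x_new
        return x

    Q = np.array([[2.0, 0.5], [0.5, 1.0]])
    c = np.array([-4.0, -1.0])
    print(fix_and_release_qp(Q, c, l=np.zeros(2), u=np.ones(2)))  # [1.0, 0.5]
    ```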

  20. Spectral multigrid methods for the solution of homogeneous turbulence problems

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Zang, T. A.; Hussaini, M. Y.

    1987-01-01

    New three-dimensional spectral multigrid algorithms are analyzed and implemented to solve the variable coefficient Helmholtz equation. Periodicity is assumed in all three directions which leads to a Fourier collocation representation. Convergence rates are theoretically predicted and confirmed through numerical tests. Residual averaging results in a spectral radius of 0.2 for the variable coefficient Poisson equation. In general, non-stationary Richardson must be used for the Helmholtz equation. The algorithms developed are applied to the large-eddy simulation of incompressible isotropic turbulence.
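
    One ingredient of such schemes, Fourier collocation combined with Richardson relaxation preconditioned by a frozen-coefficient solve, can be sketched in 1-D as below. This is a toy single-grid analogue under simplifying assumptions, not the paper's three-dimensional multigrid algorithm.

    ```python
    # Sketch: 1-D periodic variable-coefficient Helmholtz, (a(x) u')' - lam*u = f,
    # via Fourier collocation plus Richardson iteration preconditioned by the
    # constant-coefficient FFT solve.
    import numpy as np

    n, lam = 128, 1.0
    x = 2 * np.pi * np.arange(n) / n
    m = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers on [0, 2*pi)
    a = 1.0 + 0.3 * np.cos(x)                 # smooth variable coefficient

    def apply_L(u):                           # L u = (a u')' - lam * u
        du = np.fft.ifft(1j * m * np.fft.fft(u)).real
        return np.fft.ifft(1j * m * np.fft.fft(a * du)).real - lam * u

    u_true = np.sin(3 * x)
    f = apply_L(u_true)                       # manufactured right-hand side

    sym = -a.mean() * m**2 - lam              # symbol of the frozen-coefficient operator
    u = np.zeros(n)
    for _ in range(60):                       # preconditioned Richardson iteration
        r = f - apply_L(u)
        u += np.fft.ifft(np.fft.fft(r) / sym).real
    print("max error:", np.abs(u - u_true).max())
    ```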
