Sample records for statistical equivalence testing

  1. Establishment of an equivalence acceptance criterion for accelerated stability studies.

    PubMed

    Burdick, Richard K; Sidor, Leslie

    2013-01-01

    In this article, the use of statistical equivalence testing for providing evidence of process comparability in an accelerated stability study is advocated over the use of a test of differences. The objective of such a study is to demonstrate comparability by showing that the stability profiles under nonrecommended storage conditions of two processes are equivalent. Because it is difficult at accelerated conditions to find a direct link to product specifications, and hence product safety and efficacy, an equivalence acceptance criterion is proposed that is based on the statistical concept of effect size. As with all statistical tests of equivalence, it is important to collect input from appropriate subject-matter experts when defining the acceptance criterion.

  2. A statistical test to show negligible trend

    Treesearch

    Philip M. Dixon; Joseph H.K. Pechmann

    2005-01-01

    The usual statistical tests of trend are inappropriate for demonstrating the absence of trend. This is because failure to reject the null hypothesis of no trend does not prove that null hypothesis. The appropriate statistical method is based on an equivalence test. The null hypothesis is that the trend is not zero, i.e., outside an a priori specified equivalence region...
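
    The negligible-trend logic above can be sketched as a TOST on a regression slope: the null hypothesis is that the slope lies outside an a priori equivalence region, and rejecting it supports "no trend". This is a generic illustration with simulated data and an assumed margin delta, not the authors' exact procedure.

```python
# Equivalence (TOST) test for a negligible trend: H0 is that the OLS slope
# lies OUTSIDE the equivalence region [-delta, +delta]; rejecting H0
# supports negligible trend. Data and delta are illustrative assumptions.
import numpy as np
from scipy import stats

def tost_slope(x, y, delta):
    """Two one-sided tests that the OLS slope is within (-delta, +delta)."""
    res = stats.linregress(x, y)
    b, se = res.slope, res.stderr
    df = len(x) - 2
    t_lower = (b + delta) / se           # H0: slope <= -delta
    t_upper = (b - delta) / se           # H0: slope >= +delta
    p_lower = 1 - stats.t.cdf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    return max(p_lower, p_upper)         # equivalence declared if < alpha

rng = np.random.default_rng(1)
x = np.arange(30, dtype=float)
y = 5.0 + rng.normal(0.0, 0.2, size=30)  # flat series with noise
p = tost_slope(x, y, delta=0.05)         # "negligible" = |slope| < 0.05/unit
```

    Because the series is flat and the noise small, both one-sided nulls are rejected and the trend is declared negligible at the usual 5% level.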

  3. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
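
    The equivalent-ensemble construction described above can be sketched in a few lines: segment one long record into equal-length sub-records, treat them as an ensemble, and check that the ensemble average is approximately invariant over time within a segment. The signal, segment count, and tolerance below are illustrative assumptions, not the paper's variance tests.

```python
# Form an "equivalent ensemble" by segmenting one long time history into
# equal, finite sample records, then inspect time invariance of the
# ensemble average (a rough weak-stationarity check).
import numpy as np

def equivalent_ensemble(signal, n_segments):
    """Reshape one long record into an (n_segments, segment_len) ensemble."""
    seg_len = len(signal) // n_segments
    return np.asarray(signal[: n_segments * seg_len]).reshape(n_segments, seg_len)

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 20000)                # stationary white-noise record
ens = equivalent_ensemble(x, n_segments=40)    # 40 sample records of 500 points

ensemble_mean = ens.mean(axis=0)               # average across the ensemble
time_mean = ens[0].mean()                      # time average of one record
drift = ensemble_mean.max() - ensemble_mean.min()  # small if roughly invariant
```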

  4. Application of the modified chi-square ratio statistic in a stepwise procedure for cascade impactor equivalence testing.

    PubMed

    Weber, Benjamin; Lee, Sau L; Delvadia, Renishkumar; Lionberger, Robert; Li, Bing V; Tsong, Yi; Hochhaus, Guenther

    2015-03-01

    Equivalence testing of aerodynamic particle size distribution (APSD) through multi-stage cascade impactors (CIs) is important for establishing bioequivalence of orally inhaled drug products. Recent work demonstrated that the median of the modified chi-square ratio statistic (MmCSRS) is a promising metric for APSD equivalence testing of test (T) and reference (R) products as it can be applied to a reduced number of CI sites that are more relevant for lung deposition. This metric is also less sensitive to the increased variability often observed for low-deposition sites. A method to establish critical values for the MmCSRS is described here. This method considers the variability of the R product by employing a reference variance scaling approach that allows definition of critical values as a function of the observed variability of the R product. A stepwise CI equivalence test is proposed that integrates the MmCSRS as a method for comparing the relative shapes of CI profiles and incorporates statistical tests for assessing equivalence of single actuation content and impactor sized mass. This stepwise CI equivalence test was applied to 55 published CI profile scenarios, which were classified as equivalent or inequivalent by members of the Product Quality Research Institute working group (PQRI WG). With a 25% difference in MmCSRS as the acceptance criterion, the stepwise CI equivalence test matched the PQRI WG classifications best, with the two methods' decisions agreeing in 75% of the 55 CI profile scenarios.


  5. Exact test-based approach for equivalence test with parameter margin.

    PubMed

    Cassie Dong, Xiaoyu; Bian, Yuanyuan; Tsong, Yi; Wang, Tianhua

    2017-01-01

    The equivalence test has a wide range of applications in pharmaceutical statistics in which we need to test for similarity between two groups. In recent years, the equivalence test has been used in assessing the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin of the form ±f × σ_R, a function of the reference variability. In practice, this margin is unknown and is estimated from the sample as ±f × S_R. If we use this estimated margin with the classic t-test statistic on the equivalence test for the means, both Type I and Type II error rates may inflate. To resolve this issue, we develop an exact-based test method and compare this method with other proposed methods, such as the Wald test, the constrained Wald test, and the Generalized Pivotal Quantity (GPQ), in terms of Type I error rate and power. Application of these methods to data analysis is also provided in this paper. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
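
    The plug-in procedure the abstract warns about can be sketched directly: estimate the margin as ±f × S_R from the reference lots and run a classic TOST on the mean difference against that estimated margin. This illustrates the problem setup, not the authors' exact test; f = 1.5 (the Tier 1 choice cited elsewhere in these records) and the lot data are illustrative.

```python
# Naive plug-in mean-equivalence test: margin +/- f*sigma_R is estimated by
# +/- f*S_R, then a classic TOST is run against it. (The abstract notes this
# plug-in inflates Type I/II error; the paper's exact test corrects that.)
import numpy as np
from scipy import stats

def plugin_tost(test, ref, f=1.5, alpha=0.05):
    test, ref = np.asarray(test, float), np.asarray(ref, float)
    nT, nR = len(test), len(ref)
    margin = f * np.std(ref, ddof=1)             # estimated margin f * S_R
    diff = test.mean() - ref.mean()
    se = np.sqrt(test.var(ddof=1) / nT + ref.var(ddof=1) / nR)
    df = nT + nR - 2                             # simple pooled-style df
    t1 = (diff + margin) / se                    # H0: diff <= -margin
    t2 = (diff - margin) / se                    # H0: diff >= +margin
    p = max(1 - stats.t.cdf(t1, df), stats.t.cdf(t2, df))
    return p < alpha                             # True => declare equivalence

rng = np.random.default_rng(7)
ref_lots = rng.normal(100.0, 4.0, size=10)
test_lots = rng.normal(100.5, 4.0, size=10)
equivalent = plugin_tost(test_lots, ref_lots)
```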

  6. Grade Equivalents: We Report Them, You Should Too.

    ERIC Educational Resources Information Center

    Ligon, Glynn; Battaile, Richard

    In certain situations, grade equivalent scores are the most appropriate statistic available for reporting achievement test data. It is noted that testing practitioners have found that raw scores, normal curve equivalents, stanines, and standard scores are very useful. However, it is best to convert to either grade equivalents or percentiles before…

  7. Statistical Validation of Surrogate Endpoints: Another Look at the Prentice Criterion and Other Criteria.

    PubMed

    Saraf, Sanatan; Mathew, Thomas; Roy, Anindya

    2015-01-01

    For the statistical validation of surrogate endpoints, an alternative formulation is proposed for testing Prentice's fourth criterion, under a bivariate normal model. In such a setup, the criterion involves inference concerning an appropriate regression parameter, and the criterion holds if the regression parameter is zero. Testing such a null hypothesis has been criticized in the literature since it can only be used to reject a poor surrogate, and not to validate a good surrogate. In order to circumvent this, an equivalence hypothesis is formulated for the regression parameter, namely the hypothesis that the parameter is equivalent to zero. Such an equivalence hypothesis is formulated as an alternative hypothesis, so that the surrogate endpoint is statistically validated when the null hypothesis is rejected. Confidence intervals for the regression parameter and tests for the equivalence hypothesis are proposed using bootstrap methods and small sample asymptotics, and their performances are numerically evaluated and recommendations are made. The choice of the equivalence margin is a regulatory issue that needs to be addressed. The proposed equivalence testing formulation is also adopted for other parameters that have been proposed in the literature on surrogate endpoint validation, namely, the relative effect and proportion explained.

  8. Testing of Hypothesis in Equivalence and Non Inferiority Trials-A Concept.

    PubMed

    Juneja, Atul; Aggarwal, Abha R; Adhikari, Tulsi; Pandey, Arvind

    2016-04-01

    Establishing the appropriate hypothesis is one of the important steps in carrying out statistical tests and analyses, and understanding it is important for interpreting the results. The current communication explains the concept of hypothesis testing in non-inferiority and equivalence trials, where the null hypothesis is the reverse of what is set up for conventional superiority trials. As in superiority trials, the null hypothesis is set up to be rejected in order to establish what the researcher intends to prove. It is important to mention that equivalence or non-inferiority cannot be proved by accepting the null hypothesis of no difference. Hence, establishing the appropriate statistical hypothesis is extremely important to arrive at meaningful conclusions for the set objectives in research.

  9. Optimal Sample Size Determinations for the Heteroscedastic Two One-Sided Tests of Mean Equivalence: Design Schemes and Software Implementations

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2017-01-01

    Equivalence assessment is becoming an increasingly important topic in many application areas including behavioral and social sciences research. Although there exist more powerful tests, the two one-sided tests (TOST) procedure is a technically transparent and widely accepted method for establishing statistical equivalence. Alternatively, a direct…
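
    The heteroscedastic (Welch-type) TOST named in the record above, and the sample-size question it addresses, can be sketched with a Monte Carlo power estimate: simulate the Welch TOST at given sample sizes and count rejections. The margin, standard deviations, and sample sizes are illustrative, and this simulation is a generic stand-in for the paper's sample-size formulas.

```python
# Monte Carlo power of the heteroscedastic two one-sided tests (TOST) of
# mean equivalence, using Welch's standard error and degrees of freedom.
import numpy as np
from scipy import stats

def welch_tost(x, y, margin, alpha=0.05):
    d = x.mean() - y.mean()
    vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
    se = np.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx**2 / (len(x) - 1) + vy**2 / (len(y) - 1))
    p = max(1 - stats.t.cdf((d + margin) / se, df),
            stats.t.cdf((d - margin) / se, df))
    return p < alpha

def power(n1, n2, sd1, sd2, true_diff, margin, reps=1000, seed=0):
    rng = np.random.default_rng(seed)
    hits = sum(
        welch_tost(rng.normal(true_diff, sd1, n1),
                   rng.normal(0.0, sd2, n2), margin)
        for _ in range(reps)
    )
    return hits / reps

pw = power(n1=40, n2=60, sd1=1.0, sd2=2.0, true_diff=0.0, margin=1.0)
```

    Scanning `n1`/`n2` until `pw` reaches a target (say 0.80 or 0.90) is the brute-force analogue of the optimal sample-size schemes the record discusses.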

  10. Equivalence Testing of Complex Particle Size Distribution Profiles Based on Earth Mover's Distance.

    PubMed

    Hu, Meng; Jiang, Xiaohui; Absar, Mohammad; Choi, Stephanie; Kozak, Darby; Shen, Meiyu; Weng, Yu-Ting; Zhao, Liang; Lionberger, Robert

    2018-04-12

    Particle size distribution (PSD) is an important property of particulates in drug products. In the evaluation of generic drug products formulated as suspensions, emulsions, and liposomes, the PSD comparisons between a test product and the branded product can provide useful information regarding in vitro and in vivo performance. Historically, the FDA has recommended the population bioequivalence (PBE) statistical approach to compare the PSD descriptors D50 and SPAN from test and reference products to support product equivalence. In this study, the earth mover's distance (EMD) is proposed as a new metric for comparing PSDs, particularly when the PSD profile exhibits a complex distribution (e.g., multiple peaks) that is not accurately described by the D50 and SPAN descriptors. EMD is a statistical metric that measures the discrepancy (distance) between size distribution profiles without a prior assumption of the distribution. PBE is then adopted to perform a statistical test to establish equivalence based on the calculated EMD distances. Simulations show that the proposed EMD-based approach is effective in comparing test and reference profiles for equivalence testing and is superior to commonly used distance measures, e.g., Euclidean and Kolmogorov-Smirnov distances. The proposed approach was demonstrated by evaluating equivalence of cyclosporine ophthalmic emulsion PSDs that were manufactured under different conditions. Our results show that the proposed approach can effectively pass an equivalent product (e.g., reference product against itself) and reject an inequivalent product (e.g., reference product against negative control), thus suggesting its usefulness in supporting bioequivalence determination when both the test and reference products possess multimodal PSDs.
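
    The EMD comparison described above can be sketched with SciPy's one-dimensional Wasserstein distance, which for binned profiles is the earth mover's distance between the weighted supports. The bin centers and weights below are invented illustrative PSDs, and the subsequent PBE step is omitted.

```python
# Earth mover's distance between two binned particle size distributions,
# via SciPy's 1-D Wasserstein distance (values = bin centers, weights = PSD).
import numpy as np
from scipy.stats import wasserstein_distance

bins = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # particle size, um
ref  = np.array([0.05, 0.30, 0.40, 0.20, 0.05])   # reference PSD (sums to 1)
test = np.array([0.05, 0.25, 0.40, 0.25, 0.05])   # slightly shifted test PSD

emd_same = wasserstein_distance(bins, bins, ref, ref)   # identical profiles
emd_diff = wasserstein_distance(bins, bins, ref, test)  # shifted profile
```

    In the paper's procedure, such EMD values computed across lots would then feed the PBE statistical test; here only the distance metric itself is shown.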

  11. A statistical assessment of differences and equivalences between genetically modified and reference plant varieties

    PubMed Central

    2011-01-01

    Background: Safety assessment of genetically modified organisms is currently often performed by comparative evaluation. However, natural variation of plant characteristics between commercial varieties is usually not considered explicitly in the statistical computations underlying the assessment.

    Results: Statistical methods are described for the assessment of the difference between a genetically modified (GM) plant variety and a conventional non-GM counterpart, and for the assessment of the equivalence between the GM variety and a group of reference plant varieties which have a history of safe use. It is proposed to present the results of both difference and equivalence testing for all relevant plant characteristics simultaneously in one or a few graphs, as an aid for further interpretation in safety assessment. A procedure is suggested to derive equivalence limits from the observed results for the reference plant varieties using a specific implementation of the linear mixed model. Three different equivalence tests are defined to classify any result in one of four equivalence classes. The performance of the proposed methods is investigated by a simulation study, and the methods are illustrated on compositional data from a field study on maize grain.

    Conclusions: A clear distinction of practical relevance is shown between difference and equivalence testing. The proposed tests are shown to have appropriate performance characteristics by simulation, and the proposed simultaneous graphical representation of results was found to be helpful for the interpretation of results from a practical field trial data set. PMID:21324199

  12. Ensuring Positiveness of the Scaled Difference Chi-Square Test Statistic

    ERIC Educational Resources Information Center

    Satorra, Albert; Bentler, Peter M.

    2010-01-01

    A scaled difference test statistic T̃(d) that can be computed from standard software of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (Psychometrika 66:507-514, 2001). The statistic T̃(d) is asymptotically equivalent to the scaled difference test statistic T̄…

  13. Impact of Sample Size and Variability on the Power and Type I Error Rates of Equivalence Tests: A Simulation Study

    ERIC Educational Resources Information Center

    Rusticus, Shayna A.; Lovato, Chris Y.

    2014-01-01

    The question of equivalence between two or more groups is frequently of interest to many applied researchers. Equivalence testing is a statistical method designed to provide evidence that groups are comparable by demonstrating that the mean differences found between groups are small enough that they are considered practically unimportant. Few…

  14. An Inferential Confidence Interval Method of Establishing Statistical Equivalence that Corrects Tryon's (2001) Reduction Factor

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2008-01-01

    Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis, and failing to reject H₀ is not evidence…

  15. Statistical equivalence and test-retest reliability of delay and probability discounting using real and hypothetical rewards.

    PubMed

    Matusiewicz, Alexis K; Carter, Anne E; Landes, Reid D; Yi, Richard

    2013-11-01

    Delay discounting (DD) and probability discounting (PD) refer to the reduction in the subjective value of outcomes as a function of delay and uncertainty, respectively. Elevated measures of discounting are associated with a variety of maladaptive behaviors, and confidence in the validity of these measures is imperative. The present research examined (1) the statistical equivalence of discounting measures when rewards were hypothetical or real, and (2) their 1-week reliability. While previous research has partially explored these issues using the low threshold of nonsignificant difference, the present study fully addressed this issue using the more-compelling threshold of statistical equivalence. DD and PD measures were collected from 28 healthy adults using real and hypothetical $50 rewards during each of two experimental sessions, one week apart. Analyses using area-under-the-curve measures revealed a general pattern of statistical equivalence, indicating equivalence of real/hypothetical conditions as well as 1-week reliability. Exceptions are identified and discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
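
    The analyses above summarize discounting data with area-under-the-curve (AUC) measures. One common normalized AUC computation (trapezoidal rule over delay/value points rescaled to [0, 1]) can be sketched as follows; the indifference points are invented, and this is a generic AUC sketch rather than the study's exact pipeline.

```python
# Normalized area under the discounting curve: delays and subjective values
# are rescaled to [0, 1], then integrated by the trapezoidal rule.
# AUC near 1.0 means little discounting; near 0.0 means steep discounting.
import numpy as np

def discounting_auc(delays, values, amount):
    x = np.asarray(delays, float) / max(delays)   # normalize delays to [0, 1]
    y = np.asarray(values, float) / amount        # normalize values to [0, 1]
    # trapezoidal rule, written out explicitly
    return float(np.sum(np.diff(x) * (y[:-1] + y[1:]) / 2.0))

delays = [0, 7, 30, 90, 365]                      # days
values = [50.0, 45.0, 35.0, 25.0, 10.0]           # indifference points, $50 reward
auc = discounting_auc(delays, values, amount=50.0)
```

    Equivalence of real versus hypothetical rewards, as in the study, would then be assessed by applying a TOST to paired AUC values across participants.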

  16. The Skillings-Mack test (Friedman test when there are missing data).

    PubMed

    Chatfield, Mark; Mander, Adrian

    2009-04-01

    The Skillings-Mack statistic (Skillings and Mack, 1981, Technometrics 23: 171-177) is a general Friedman-type statistic that can be used in almost any block design with an arbitrary missing-data structure. The missing data can be either missing by design, for example, an incomplete block design, or missing completely at random. The Skillings-Mack test is equivalent to the Friedman test when there are no missing data in a balanced complete block design, and the Skillings-Mack test is equivalent to the test suggested in Durbin (1951, British Journal of Psychology, Statistical Section 4: 85-90) for a balanced incomplete block design. The Friedman test was implemented in Stata by Goldstein (1991, Stata Technical Bulletin 3: 26-27) and further developed in Goldstein (2005, Stata Journal 5: 285). This article introduces the skilmack command, which performs the Skillings-Mack test. The skilmack command is also useful when there are many ties or equal ranks (N.B. the Friedman statistic compared with the χ² distribution will give a conservative result), as well as for small samples; appropriate results can be obtained by simulating the distribution of the test statistic under the null hypothesis.
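
    The complete-data case the record mentions can be illustrated with SciPy's Friedman test (SciPy has no Skillings-Mack implementation; the Stata skilmack command covers the missing-data case). The data below are invented: one list per treatment, with positions across the lists forming six blocks.

```python
# Friedman test on a balanced complete block design: three treatments
# measured in each of six blocks (no missing data, so Skillings-Mack
# would reduce to this test).
from scipy.stats import friedmanchisquare

treatment_a = [10.2, 9.8, 11.1, 10.5, 9.9, 10.8]
treatment_b = [10.9, 10.4, 11.6, 11.0, 10.3, 11.2]
treatment_c = [10.1, 9.6, 10.9, 10.2, 9.8, 10.5]

stat, pvalue = friedmanchisquare(treatment_a, treatment_b, treatment_c)
```

    Here treatment b ranks highest and treatment c lowest in every block, so the rank sums are maximally separated and the test rejects the null of identical treatment effects.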

  17. Establishing Statistical Equivalence of Data from Different Sampling Approaches for Assessment of Bacterial Phenotypic Antimicrobial Resistance

    PubMed Central

    Shakeri, Heman; Volkova, Victoriya; Wen, Xuesong; Deters, Andrea; Cull, Charley; Drouillard, James; Müller, Christian; Moradijamei, Behnaz; Jaberi-Douraki, Majid

    2018-01-01

    To assess phenotypic bacterial antimicrobial resistance (AMR) in different strata (e.g., host populations, environmental areas, manure, or sewage effluents) for epidemiological purposes, isolates of target bacteria can be obtained from a stratum using various sample types. Also, different sample processing methods can be applied. The MIC of each target antimicrobial drug for each isolate is measured. Statistical equivalence testing of the MIC data for the isolates allows evaluation of whether different sample types or sample processing methods yield equivalent estimates of the bacterial antimicrobial susceptibility in the stratum. We demonstrate this approach on the antimicrobial susceptibility estimates for (i) nontyphoidal Salmonella spp. from ground or trimmed meat versus cecal content samples of cattle in processing plants in 2013-2014 and (ii) nontyphoidal Salmonella spp. from urine, fecal, and blood human samples in 2015 (U.S. National Antimicrobial Resistance Monitoring System data). We found that the sample types for cattle yielded nonequivalent susceptibility estimates for several antimicrobial drug classes and thus may gauge distinct subpopulations of salmonellae. The quinolone and fluoroquinolone susceptibility estimates for nontyphoidal salmonellae from human blood are nonequivalent to those from urine or feces, conjecturally due to the fluoroquinolone (ciprofloxacin) use to treat infections caused by nontyphoidal salmonellae. We also demonstrate statistical equivalence testing for comparing sample processing methods for fecal samples (culturing one versus multiple aliquots per sample) to assess AMR in fecal Escherichia coli. These methods yield equivalent results, except for tetracyclines. Importantly, statistical equivalence testing provides the MIC difference at which the data from two sample types or sample processing methods differ statistically. Data users (e.g., microbiologists and epidemiologists) may then interpret the practical relevance of the difference.

    IMPORTANCE: Bacterial antimicrobial resistance (AMR) needs to be assessed in different populations or strata for the purposes of surveillance and determination of the efficacy of interventions to halt AMR dissemination. To assess phenotypic antimicrobial susceptibility, isolates of target bacteria can be obtained from a stratum using different sample types or employing different sample processing methods in the laboratory. The MIC of each target antimicrobial drug for each of the isolates is measured, yielding the MIC distribution across the isolates from each sample type or sample processing method. We describe statistical equivalence testing for the MIC data for evaluating whether two sample types or sample processing methods yield equivalent estimates of the bacterial phenotypic antimicrobial susceptibility in the stratum. This includes estimating the MIC difference at which the data from the two approaches differ statistically. Data users (e.g., microbiologists, epidemiologists, and public health professionals) can then interpret whether that difference is practically relevant. PMID:29475868

  19. An Evaluation of Statistical Strategies for Making Equating Function Selections. Research Report. ETS RR-08-60

    ERIC Educational Resources Information Center

    Moses, Tim

    2008-01-01

    Nine statistical strategies for selecting equating functions in an equivalent groups design were evaluated. The strategies of interest were likelihood ratio chi-square tests, regression tests, Kolmogorov-Smirnov tests, and significance tests for equated score differences. The most accurate strategies in the study were the likelihood ratio tests…

  20. Telephone-Based Cognitive-Behavioral Screening for Frontotemporal Changes in Patients with Amyotrophic Lateral Sclerosis (ALS)

    PubMed Central

    Christodoulou, Georgia; Gennings, Chris; Hupf, Jonathan; Factor-Litvak, Pam; Murphy, Jennifer; Goetz, Raymond R.; Mitsumoto, Hiroshi

    2017-01-01

    Objective: To establish a valid and reliable battery of measures to evaluate frontotemporal dementia (FTD) in patients with ALS over the phone.

    Methods: Thirty-one subjects were administered either in-person or telephone-based screening followed by the opposite mode of testing two weeks later, using a modified version of the UCSF Cognitive Screening Battery.

    Results: Equivalence testing was performed for in-person and telephone-based tests. The standard ALS Cognitive Behavioral Screen (ALS-CBS) showed statistical equivalence at the 5% significance level when compared to a revised phone-version of the ALS-CBS. In addition, the Controlled Oral Word Association Test (COWAT) and Center for Neurologic Study-Lability Scale (CNS-LS) were also found to be equivalent at the 5% and 10% significance level respectively. Similarly, the Mini-Mental State Examination (MMSE) and the well-established Telephone Interview for Cognitive Status (TICS) were also statistically equivalent. Equivalence could not be claimed for the ALS-Frontal Behavioral Inventory (ALS-FBI) caregiver interview and the Written Verbal Fluency Index (WVFI).

    Conclusions: Our study suggests that telephone-based versions of the ALS-CBS, COWAT, and CNS-LS may offer clinicians valid tools to detect frontotemporal changes in the ALS population. Development of telephone-based cognitive testing for ALS could become an integral resource for population-based research in the future. PMID:27121545

  2. On Statistical Approaches for Demonstrating Analytical Similarity in the Presence of Correlation.

    PubMed

    Yang, Harry; Novick, Steven; Burdick, Richard K

    Analytical similarity is the foundation for demonstration of biosimilarity between a proposed product and a reference product. For this assessment, currently the U.S. Food and Drug Administration (FDA) recommends a tiered system in which quality attributes are categorized into three tiers commensurate with their risk, and approaches of varying statistical rigor are subsequently used for the three tiers of quality attributes. Key to the analyses of Tier 1 and Tier 2 quality attributes is the establishment of the equivalence acceptance criterion and the quality range. For particular licensure applications, the FDA has provided advice on statistical methods for demonstration of analytical similarity. For example, for Tier 1 assessment, an equivalence test can be used based on an equivalence margin of 1.5 × σ_R, where σ_R is the reference product variability estimated by the sample standard deviation S_R from a sample of reference lots. The quality range for demonstrating Tier 2 analytical similarity is of the form X̄_R ± K × σ_R, where the constant K is appropriately justified. To demonstrate Tier 2 analytical similarity, a large percentage (e.g., 90%) of test product lots must fall in the quality range. In this paper, through both theoretical derivations and simulations, we show that when the reference drug product lots are correlated, the sample standard deviation S_R underestimates the true reference product variability σ_R. As a result, substituting S_R for σ_R in the Tier 1 equivalence acceptance criterion and the Tier 2 quality range inappropriately reduces the statistical power and the ability to declare analytical similarity. Also explored is the impact of correlation among drug product lots on Type I error rate and power. Three methods based on generalized pivotal quantities are introduced, and their performance is compared against a two-one-sided tests (TOST) approach. Finally, strategies to mitigate the risk of correlation among the reference product lots are discussed.
A biosimilar is a generic version of the original biological drug product. A key component of a biosimilar development is the demonstration of analytical similarity between the biosimilar and the reference product. Such demonstration relies on application of statistical methods to establish a similarity margin and appropriate test for equivalence between the two products. This paper discusses statistical issues with demonstration of analytical similarity and provides alternate approaches to potentially mitigate these problems. © PDA, Inc. 2016.
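
    The Tier 2 quality-range rule described above can be sketched directly: the range is X̄_R ± K × S_R from the reference lots (S_R standing in for σ_R, exactly the substitution the paper examines), and similarity asks that a large share of test-product lots fall inside it. K, the 90% requirement, and the lot values below are illustrative assumptions.

```python
# Tier 2 "quality range" check: build Xbar_R +/- K*S_R from reference lots
# and require a given fraction of test lots to fall inside the range.
import numpy as np

def tier2_pass(ref_lots, test_lots, K=3.0, required_fraction=0.90):
    ref = np.asarray(ref_lots, float)
    lo = ref.mean() - K * ref.std(ddof=1)
    hi = ref.mean() + K * ref.std(ddof=1)
    t = np.asarray(test_lots, float)
    inside = np.mean((t >= lo) & (t <= hi))   # fraction of test lots in range
    return inside >= required_fraction

ref_lots = [98.1, 99.5, 100.2, 101.0, 99.0, 100.7, 98.8, 100.1]
test_lots = [99.2, 100.4, 98.7, 99.9, 100.8, 99.5]
ok = tier2_pass(ref_lots, test_lots)
```

    The paper's point is that correlated reference lots shrink S_R, and hence this range, making a truly similar product harder to pass.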

  4. Equivalence of the Color Trails Test and Trail Making Test in nonnative English-speakers.

    PubMed

    Dugbartey, A T; Townes, B D; Mahurin, R K

    2000-07-01

    The Color Trails Test (CTT) has been described as a culture-fair test of visual attention, graphomotor sequencing, and effortful executive processing abilities relative to the Trail Making Test (TMT). In this study, the equivalence of the TMT and the CTT among a group of 64 bilingual Turkish university students was examined. No difference in performance on the CTT-1 and TMT Part A was found, suggesting functionally equivalent performance across both tasks. In contrast, the statistically significant differences in performance on CTT-2 and TMT Part B, as well as the interference indices for both tests, were interpreted as providing evidence for task nonequivalence of the CTT-2 and TMT Part B. Results have implications for both psychometric test development and clinical cultural neuropsychology.

  5. Estimating equivalence with quantile regression

    USGS Publications Warehouse

    Cade, B.S.

    2011-01-01

    Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
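
    One way to examine equivalence at individual quantiles, in the spirit of the approach above but using a plain bootstrap rather than quantile regression, is to put a one-sided confidence bound on the quantile difference under an inequivalence null. The distributions, sample sizes, and margin below are invented for illustration:

```python
import numpy as np

def quantile_equivalence_ci(treat, control, q, delta, n_boot=2000, alpha=0.05, seed=0):
    """Bootstrap a one-sided upper confidence bound for the treatment-minus-
    control difference at quantile q. Under the inequivalence null hypothesis
    (difference >= delta), equivalence at that quantile is supported when the
    upper bound falls below delta."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        t = rng.choice(treat, size=len(treat), replace=True)
        c = rng.choice(control, size=len(control), replace=True)
        diffs[i] = np.quantile(t, q) - np.quantile(c, q)
    upper = np.quantile(diffs, 1 - alpha)
    return upper, upper < delta

rng = np.random.default_rng(5)
control = rng.lognormal(1.0, 0.6, 150)   # e.g., reference-site concentrations
treat = rng.lognormal(1.05, 0.9, 150)    # similar median, heavier upper tail
u50, ok50 = quantile_equivalence_ci(treat, control, 0.5, delta=2.0)
u90, ok90 = quantile_equivalence_ci(treat, control, 0.9, delta=2.0)
print(f"q=0.5 bound {u50:.2f} (equivalent: {ok50}); q=0.9 bound {u90:.2f} (equivalent: {ok90})")
```

    Because the simulated distributions share a median but differ in the upper tail, equivalence typically holds at the median while failing at the 90th percentile, which is exactly the kind of difference a means-only comparison would miss.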

  6. The Same or Not the Same: Equivalence as an Issue in Educational Research

    NASA Astrophysics Data System (ADS)

    Lewis, Scott E.; Lewis, Jennifer E.

    2005-09-01

    In educational research, particularly in the sciences, a common research design calls for the establishment of a control and experimental group to determine the effectiveness of an intervention. As part of this design, it is often desirable to illustrate that the two groups were equivalent at the start of the intervention, based on measures such as standardized cognitive tests or student grades in prior courses. In this article we use SAT and ACT scores to illustrate a more robust way of testing equivalence. The method incorporates two one-sided t tests evaluating two null hypotheses, providing a stronger claim for equivalence than the standard method, which often does not address the possible problem of low statistical power. The two null hypotheses are based on the construction of an equivalence interval particular to the data, so the article also provides a rationale for and illustration of a procedure for constructing equivalence intervals. Our consideration of equivalence using this method also underscores the need to include sample sizes, standard deviations, and group means in published quantitative studies.
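
    The two one-sided t tests procedure the authors describe is operationally the same as checking that a (1 − 2α) confidence interval for the group difference lies inside the equivalence interval. A minimal sketch with hypothetical SAT-like scores and an assumed ±50-point equivalence interval:

```python
import numpy as np
from scipy import stats

def equivalence_by_ci(a, b, delta, alpha=0.05):
    """Declare two groups equivalent when the (1 - 2*alpha) CI for the mean
    difference lies entirely inside (-delta, +delta). This is operationally
    identical to two one-sided t tests, each at level alpha."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a.mean() - b.mean()
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    se = np.sqrt(va + vb)
    # Welch-Satterthwaite degrees of freedom
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    tcrit = stats.t.ppf(1 - alpha, df)
    lo, hi = diff - tcrit * se, diff + tcrit * se
    return (lo, hi), (-delta < lo) and (hi < delta)

rng = np.random.default_rng(1)
control = rng.normal(1050, 100, 500)   # hypothetical SAT-like scores
treated = rng.normal(1045, 100, 500)
(lo, hi), equivalent = equivalence_by_ci(control, treated, delta=50)
print(f"90% CI for difference: ({lo:.1f}, {hi:.1f}); equivalent: {equivalent}")
```

    Declaring equivalence this way requires the interval, and hence the sample sizes and standard deviations, which is one reason the article urges reporting them.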

  7. Endurance and failure characteristics of modified Vasco X-2, CBS 600 and AISI 9310 spur gears. [aircraft construction materials

    NASA Technical Reports Server (NTRS)

    Townsend, D. P.; Zaretsky, E. V.

    1980-01-01

    Gear endurance tests and rolling-element fatigue tests were conducted to compare the performance of spur gears made from AISI 9310, CBS 600, and modified Vasco X-2 and to compare the pitting fatigue lives of these three materials. Gears manufactured from CBS 600 exhibited lives longer than those manufactured from AISI 9310. However, rolling-element fatigue tests resulted in statistically equivalent lives. Modified Vasco X-2 exhibited lives statistically equivalent to AISI 9310. CBS 600 and modified Vasco X-2 gears showed a potential for tooth fracture originating at a tooth-surface fatigue pit. Case carburization of all gear surfaces of the modified Vasco X-2 gears resulted in fracture at the tips of the gear teeth.

  8. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    PubMed

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.

  9. MICROSCOPE Mission: First Results of a Space Test of the Equivalence Principle.

    PubMed

    Touboul, Pierre; Métris, Gilles; Rodrigues, Manuel; André, Yves; Baghi, Quentin; Bergé, Joël; Boulanger, Damien; Bremer, Stefanie; Carle, Patrice; Chhun, Ratana; Christophe, Bruno; Cipolla, Valerio; Damour, Thibault; Danto, Pascale; Dittus, Hansjoerg; Fayet, Pierre; Foulon, Bernard; Gageant, Claude; Guidotti, Pierre-Yves; Hagedorn, Daniel; Hardy, Emilie; Huynh, Phuong-Anh; Inchauspe, Henri; Kayser, Patrick; Lala, Stéphanie; Lämmerzahl, Claus; Lebat, Vincent; Leseur, Pierre; Liorzou, Françoise; List, Meike; Löffler, Frank; Panet, Isabelle; Pouilloux, Benjamin; Prieur, Pascal; Rebray, Alexandre; Reynaud, Serge; Rievers, Benny; Robert, Alain; Selig, Hanns; Serron, Laura; Sumner, Timothy; Tanguy, Nicolas; Visser, Pieter

    2017-12-08

    According to the weak equivalence principle, all bodies should fall at the same rate in a gravitational field. The MICROSCOPE satellite, launched in April 2016, aims to test its validity at the 10^{-15} precision level, by measuring the force required to maintain two test masses (of titanium and platinum alloys) exactly in the same orbit. A nonvanishing result would correspond to a violation of the equivalence principle, or to the discovery of a new long-range force. Analysis of the first data gives δ(Ti,Pt)=[-1±9(stat)±9(syst)]×10^{-15} (1σ statistical uncertainty) for the titanium-platinum Eötvös parameter characterizing the relative difference in their free-fall accelerations.

  10. MICROSCOPE Mission: First Results of a Space Test of the Equivalence Principle

    NASA Astrophysics Data System (ADS)

    Touboul, Pierre; Métris, Gilles; Rodrigues, Manuel; André, Yves; Baghi, Quentin; Bergé, Joël; Boulanger, Damien; Bremer, Stefanie; Carle, Patrice; Chhun, Ratana; Christophe, Bruno; Cipolla, Valerio; Damour, Thibault; Danto, Pascale; Dittus, Hansjoerg; Fayet, Pierre; Foulon, Bernard; Gageant, Claude; Guidotti, Pierre-Yves; Hagedorn, Daniel; Hardy, Emilie; Huynh, Phuong-Anh; Inchauspe, Henri; Kayser, Patrick; Lala, Stéphanie; Lämmerzahl, Claus; Lebat, Vincent; Leseur, Pierre; Liorzou, Françoise; List, Meike; Löffler, Frank; Panet, Isabelle; Pouilloux, Benjamin; Prieur, Pascal; Rebray, Alexandre; Reynaud, Serge; Rievers, Benny; Robert, Alain; Selig, Hanns; Serron, Laura; Sumner, Timothy; Tanguy, Nicolas; Visser, Pieter

    2017-12-01

    According to the weak equivalence principle, all bodies should fall at the same rate in a gravitational field. The MICROSCOPE satellite, launched in April 2016, aims to test its validity at the 10^{-15} precision level, by measuring the force required to maintain two test masses (of titanium and platinum alloys) exactly in the same orbit. A nonvanishing result would correspond to a violation of the equivalence principle, or to the discovery of a new long-range force. Analysis of the first data gives δ(Ti,Pt)=[-1±9(stat)±9(syst)]×10^{-15} (1σ statistical uncertainty) for the titanium-platinum Eötvös parameter characterizing the relative difference in their free-fall accelerations.

  11. An empirical approach to sufficient similarity in dose-responsiveness: Utilization of statistical distance as a similarity measure.

    EPA Science Inventory

    Using statistical equivalence testing logic and mixed-model theory, an approach has been developed that extends the work of Stork et al. (JABES, 2008) to define sufficient similarity in dose-response for chemical mixtures containing the same chemicals with different ratios ...

  12. Equivalent statistics and data interpretation.

    PubMed

    Francis, Gregory

    2017-08-01

    Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
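
    The paper's first point, that for a two-sample t test with known sample sizes the common summary statistics are one-to-one transformations of each other, can be checked directly: Cohen's d is just t rescaled by sqrt(1/n1 + 1/n2). The simulated data are illustrative:

```python
import numpy as np
from scipy import stats

# With known group sizes n1, n2, the pooled two-sample t statistic and
# Cohen's d are one-to-one: d = t * sqrt(1/n1 + 1/n2). So t, p, and d
# carry the same information about the data; only the inference drawn
# from them differs.
rng = np.random.default_rng(2)
n1 = n2 = 40
x, y = rng.normal(0.4, 1, n1), rng.normal(0.0, 1, n2)

t, p = stats.ttest_ind(x, y)          # pooled-variance two-sample t test
sp = np.sqrt(((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2))
d = (x.mean() - y.mean()) / sp        # Cohen's d with pooled SD

print(f"t = {t:.3f}, p = {p:.4f}, d = {d:.3f}")
print("d recovered from t:", np.isclose(d, t * np.sqrt(1/n1 + 1/n2)))
```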

  13. Equivalence Testing as a Tool for Fatigue Risk Management in Aviation.

    PubMed

    Wu, Lora J; Gander, Philippa H; van den Berg, Margo; Signal, T Leigh

    2018-04-01

    Many civilian aviation regulators favor evidence-based strategies that go beyond hours-of-service approaches for managing fatigue risk. Several countries now allow operations to be flown outside of flight and duty hour limitations, provided airlines demonstrate an alternative method of compliance that yields safety levels "at least equivalent to" the prescriptive regulations. Here we discuss equivalence testing in occupational fatigue risk management. We present suggested ratios/margins of practical equivalence when comparing operations inside and outside of prescriptive regulations for two common aviation safety performance indicators: total in-flight sleep duration and psychomotor vigilance task reaction speed. Suggested levels of practical equivalence, based on expertise coupled with evidence from field and laboratory studies, are ≤ 30 min in-flight sleep and ± 15% of reference response speed. Equivalence testing is illustrated in analyses of a within-subjects field study during an out-and-back long-range trip. During both sectors of their trip, 41 pilots were monitored via actigraphy, sleep diary, and top of descent psychomotor vigilance task. Pilots were assigned to take rest breaks in a standard lie-flat bunk on one sector and in a bunk tapered 9 from hip to foot on the other sector. Total in-flight sleep duration (134 ± 53 vs. 135 ± 55 min) and mean reaction speed at top of descent (3.94 ± 0.58 vs. 3.77 ± 0.58) were equivalent after rest in the full vs. tapered bunk. Equivalence testing is a complementary statistical approach to difference testing when comparing levels of fatigue and performance in occupational settings and can be applied in transportation policy decision making. Wu LJ, Gander PH, van den Berg M, Signal TL. Equivalence testing as a tool for fatigue risk management in aviation. Aerosp Med Hum Perform. 2018; 89(4):383-388.
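
    For a within-subjects design like this one, the equivalence check reduces to a one-sample TOST on the paired differences, with the margin set to ±15% of the reference-condition mean. The response-speed values below are simulated around the reported means, not the study's data:

```python
import numpy as np
from scipy import stats

def paired_tost(reference, comparison, rel_margin=0.15, alpha=0.05):
    """One-sample TOST on paired differences with a practical-equivalence
    margin defined as a fraction of the reference-condition mean (here
    +/-15% of reference reaction speed). Returns the larger one-sided
    p-value; equivalence is declared when it falls below alpha."""
    reference = np.asarray(reference, float)
    d = np.asarray(comparison, float) - reference
    margin = rel_margin * reference.mean()
    se = d.std(ddof=1) / np.sqrt(len(d))
    df = len(d) - 1
    p_lower = 1 - stats.t.cdf((d.mean() + margin) / se, df)  # H0: mean diff <= -margin
    p_upper = stats.t.cdf((d.mean() - margin) / se, df)      # H0: mean diff >= +margin
    return max(p_lower, p_upper)

rng = np.random.default_rng(7)
full_bunk = rng.normal(3.94, 0.58, 41)            # response speed (1/s), simulated
tapered = full_bunk + rng.normal(-0.05, 0.3, 41)  # same pilots, tapered bunk
p = paired_tost(full_bunk, tapered)
print(f"paired TOST p = {p:.4f}")
```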

  14. Evaluating Equating Results in the Non-Equivalent Groups with Anchor Test Design Using Equipercentile and Equity Criteria

    ERIC Educational Resources Information Center

    Duong, Minh Quang

    2011-01-01

    Testing programs often use multiple test forms of the same test to control item exposure and to ensure test security. Although test forms are constructed to be as similar as possible, they often differ. Test equating techniques are those statistical methods used to adjust scores obtained on different test forms of the same test so that they are…

  15. Mapping cell populations in flow cytometry data for cross‐sample comparison using the Friedman–Rafsky test statistic as a distance measure

    PubMed Central

    Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu

    2015-01-01

    Abstract Flow cytometry (FCM) is a fluorescence‐based single‐cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap‐FR, a novel method for cell population mapping across FCM samples. FlowMap‐FR is based on the Friedman–Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap‐FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap‐FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap‐FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap‐FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap‐FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback–Leibler divergence measure used in a previous population matching method with both simulated and real data. 
The FR statistic outperforms the symmetric version of KL‐distance in distinguishing equivalent from nonequivalent cell populations. FlowMap‐FR was also employed as a distance metric to match cell populations delineated by manual gating across 30 FCM samples from a benchmark FlowCAP data set. An F‐measure of 0.88 was obtained, indicating high precision and recall of the FR‐based population matching results. FlowMap‐FR has been implemented as a standalone R/Bioconductor package so that it can be easily incorporated into current FCM data analytical workflows. © 2015 International Society for Advancement of Cytometry PMID:26274018
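
    A minimal version of the FR statistic counts, on the Euclidean minimum spanning tree of the pooled sample, the edges that join points from different samples; a deficit of such cross-sample edges signals a distributional difference. This sketch omits the normalization to a z-score that the full test uses, and the simulated "cell populations" are illustrative:

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def fr_cross_edges(a, b):
    """Friedman-Rafsky multivariate runs count: build the Euclidean minimum
    spanning tree on the pooled sample and count edges joining points from
    different samples. Few cross-sample edges indicate the two multivariate
    distributions differ."""
    pooled = np.vstack([a, b])
    labels = np.array([0] * len(a) + [1] * len(b))
    dist = distance_matrix(pooled, pooled)
    mst = minimum_spanning_tree(dist).tocoo()
    return int(np.sum(labels[mst.row] != labels[mst.col]))

rng = np.random.default_rng(3)
same = fr_cross_edges(rng.normal(0, 1, (60, 3)), rng.normal(0, 1, (60, 3)))
shifted = fr_cross_edges(rng.normal(0, 1, (60, 3)), rng.normal(3, 1, (60, 3)))
print(f"cross-sample MST edges: same distribution = {same}, shifted = {shifted}")
```

    Equivalent populations interleave in feature space and share many MST edges; a shifted population connects to the other through only a handful.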

  16. Mapping cell populations in flow cytometry data for cross-sample comparison using the Friedman-Rafsky test statistic as a distance measure.

    PubMed

    Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu; Scheuermann, Richard H

    2016-01-01

    Flow cytometry (FCM) is a fluorescence-based single-cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap-FR, a novel method for cell population mapping across FCM samples. FlowMap-FR is based on the Friedman-Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap-FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap-FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap-FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap-FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap-FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback-Leibler divergence measure used in a previous population matching method with both simulated and real data. 
The FR statistic outperforms the symmetric version of KL-distance in distinguishing equivalent from nonequivalent cell populations. FlowMap-FR was also employed as a distance metric to match cell populations delineated by manual gating across 30 FCM samples from a benchmark FlowCAP data set. An F-measure of 0.88 was obtained, indicating high precision and recall of the FR-based population matching results. FlowMap-FR has been implemented as a standalone R/Bioconductor package so that it can be easily incorporated into current FCM data analytical workflows. © The Authors. Published by Wiley Periodicals, Inc. on behalf of ISAC.

  17. Comment on the asymptotics of a distribution-free goodness of fit test statistic.

    PubMed

    Browne, Michael W; Shapiro, Alexander

    2015-03-01

    In a recent article Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness of fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed how Browne's proof can be completed satisfactorily but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions and consequently avoids all complicating issues associated with them.

  18. Multiple-Choice versus Constructed-Response Tests in the Assessment of Mathematics Computation Skills.

    ERIC Educational Resources Information Center

    Gadalla, Tahany M.

    The equivalence of multiple-choice (MC) and constructed response (discrete) (CR-D) response formats as applied to mathematics computation at grade levels two to six was tested. The difference between total scores from the two response formats was tested for statistical significance, and the factor structure of items in both response formats was…

  19. A Statistical Review of Alternative Zinc and Copper Extraction from Mineral Fertilizers and Industrial By-Products.

    PubMed

    Cenciani de Souza, Camila Prado; Aparecida de Abreu, Cleide; Coscione, Aline Renée; Alberto de Andrade, Cristiano; Teixeira, Luiz Antonio Junqueira; Consolini, Flavia

    2018-01-01

    Rapid, accurate, and low-cost alternative analytical methods for micronutrient quantification in fertilizers are fundamental in quality control (QC). The purpose of this study was to evaluate whether zinc (Zn) and copper (Cu) content in mineral fertilizers and industrial by-products determined by the alternative methods USEPA 3051a, 10% HCl, and 10% H2SO4 is statistically equivalent to the standard method, consisting of hot-plate digestion using concentrated HCl. The Zn and Cu sources commercially marketed in Brazil consisted of oxide, carbonate, and sulfate fertilizers and of by-products comprising galvanizing ash, galvanizing sludge, brass ash, and brass or scrap slag. The contents of the sources ranged from 15 to 82% for Zn and from 10 to 45% for Cu, as determined with the concentrated HCl method and shown in Table 1. A protocol based on the following criteria was used for the statistical assessment of the methods: the F-test modified by Graybill, the t-test for the mean error, and linear correlation coefficient analysis. For Zn, 10% HCl extraction was equivalent to the standard method; for Cu, both the USEPA 3051a and 10% HCl methods were equivalent to it. Therefore, these methods can be considered viable alternatives to the standard method for determination of Cu and Zn in mineral fertilizers and industrial by-products, pending future research for their complete validation.
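
    The protocol's first criterion, a Graybill-style F test, jointly tests intercept = 0 and slope = 1 in the regression of the alternative method on the standard method; the other two criteria are a t-test on the mean error and the correlation coefficient. The data below are simulated, not the study's measurements:

```python
import numpy as np
from scipy import stats

# Sketch of the three-criterion method-equivalence protocol on simulated data.
rng = np.random.default_rng(6)
standard = rng.uniform(10, 80, 30)                 # hypothetical Zn contents, %
alternative = standard + rng.normal(0, 1.5, 30)    # alternative extraction

n = len(standard)
X = np.column_stack([np.ones(n), standard])
beta, *_ = np.linalg.lstsq(X, alternative, rcond=None)
resid = alternative - X @ beta
rss = resid @ resid
# restricted model under H0: intercept = 0, slope = 1
rss0 = (alternative - standard) @ (alternative - standard)
F = ((rss0 - rss) / 2) / (rss / (n - 2))           # extra-sum-of-squares F test
p_joint = 1 - stats.f.cdf(F, 2, n - 2)

t_err, p_err = stats.ttest_1samp(alternative - standard, 0.0)  # mean-error t-test
r = np.corrcoef(standard, alternative)[0, 1]                   # correlation criterion
print(f"joint F-test p = {p_joint:.3f}, mean-error t p = {p_err:.3f}, r = {r:.3f}")
```

    Non-rejection on the joint F test and the mean-error t-test, together with a high correlation, is what supports declaring the alternative method equivalent to the standard one.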

  20. Evaluation of a Head-Worn Display System as an Equivalent Head-Up Display for Low Visibility Commercial Operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis (Trey) J., III; Shelton, Kevin J.; Prinzel, Lawrence J.; Nicholas, Stephanie N.; Williams, Steven P.; Ellis, Kyle E.; Jones, Denise R.; Bailey, Randall E.; Harrison, Stephanie J.; Barnes, James R.

    2017-01-01

    Research, development, test, and evaluation of flight deck interface technologies is being conducted by the National Aeronautics and Space Administration (NASA) to proactively identify, develop, and mature tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System (NextGen). One specific area of research was the use of small Head-Worn Displays (HWDs) to serve as a possible equivalent to a Head-Up Display (HUD). A simulation experiment and a flight test were conducted to evaluate whether the HWD can provide a level of performance equivalent to a HUD. For the simulation experiment, airline crews conducted simulated approach and landing, taxi, and departure operations during low visibility operations. In a follow-on flight test, highly experienced test pilots evaluated the same HWD during approach and surface operations. The results for both the simulation and flight tests showed that there were no statistical differences in the crews' performance in terms of approach, touchdown, and takeoff; but there are still technical hurdles to be overcome for complete display equivalence including, most notably, the end-to-end latency of the HWD system.

  1. Some challenges with statistical inference in adaptive designs.

    PubMed

    Hung, H M James; Wang, Sue-Jane; Yang, Peiling

    2014-01-01

    Adaptive designs have generated a great deal of attention in clinical trial communities. The literature contains many statistical methods to deal with the added statistical uncertainties concerning the adaptations. Increasingly encountered in regulatory applications are adaptive statistical information designs that allow modification of sample size or related statistical information, and adaptive selection designs that allow selection of doses or patient populations during the course of a clinical trial. For adaptive statistical information designs, a few statistical testing methods are mathematically equivalent, as a number of articles have stipulated, but arguably there are large differences in their practical ramifications. We pinpoint some undesirable features of these methods in this work. For adaptive selection designs, selection based on biomarker data for testing the correlated clinical endpoints may increase statistical uncertainty in terms of type I error probability, and most importantly the increased statistical uncertainty may be impossible to assess.

  2. A weighted generalized score statistic for comparison of predictive values of diagnostic tests.

    PubMed

    Kosinski, Andrzej S

    2013-03-15

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we presented, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.

  3. A weighted generalized score statistic for comparison of predictive values of diagnostic tests

    PubMed Central

    Kosinski, Andrzej S.

    2013-01-01

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations which are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we present, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic which incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, it always reduces to the score statistic in the independent samples situation, and it preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the weighted generalized score test statistic in a general GEE setting. PMID:22912343

  4. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power

    PubMed Central

    Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943
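
    The direct-truncation effect the authors quantify is easy to reproduce: selecting on the pretest shrinks its variance and attenuates the pretest-posttest correlation, which in turn reduces the variance the covariate explains. A quick simulation with illustrative values (not the paper's):

```python
import numpy as np

# Simulate direct range restriction: select the lowest-scoring quarter on a
# pretest and observe the attenuated pretest-posttest correlation.
rng = np.random.default_rng(4)
n, rho = 100_000, 0.7
pre = rng.normal(size=n)
post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(size=n)

full_r = np.corrcoef(pre, post)[0, 1]
selected = pre < np.quantile(pre, 0.25)       # keep lowest-scoring 25% (truncation)
restricted_r = np.corrcoef(pre[selected], post[selected])[0, 1]
print(f"full-range r = {full_r:.3f}, restricted r = {restricted_r:.3f}")
```

    With these settings the full-range correlation is near 0.70 while the truncated sample yields roughly 0.4, illustrating why restricted samples need substantially larger sizes to recover the same statistical power.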

  5. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power.

    PubMed

    Miciak, Jeremy; Taylor, W Pat; Stuebing, Karla K; Fletcher, Jack M; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%-155% (dependent on selection criteria) to achieve statistical power equivalent to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%-71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and the pretest-posttest correlations to guide the planning of experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power.
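    The attenuation mechanism described in this abstract is easy to reproduce in a short simulation. The sketch below is illustrative only; the population pretest-posttest correlation of 0.7 and the bottom-quartile selection rule are assumed values, not parameters from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    rho = 0.7  # assumed pretest-posttest correlation in the unrestricted population

    # Bivariate normal pretest/posttest scores
    pre = rng.standard_normal(n)
    post = rho * pre + np.sqrt(1 - rho**2) * rng.standard_normal(n)

    # Direct truncation: select examinees scoring in the bottom 25% of the pretest
    mask = pre < np.quantile(pre, 0.25)
    r_full = np.corrcoef(pre, post)[0, 1]
    r_restricted = np.corrcoef(pre[mask], post[mask])[0, 1]

    print(f"unrestricted r = {r_full:.2f}, truncated r = {r_restricted:.2f}")
    ```

    The truncated correlation comes out well below 0.7, which is exactly the loss of covariate-explained variance that drives the sample size increases reported above.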

  6. Testing for Additivity in Chemical Mixtures Using a Fixed-Ratio Ray Design and Statistical Equivalence Testing Methods

    EPA Science Inventory

    Fixed-ratio ray designs have been used for detecting and characterizing interactions of large numbers of chemicals in combination. Single chemical dose-response data are used to predict an “additivity curve” along an environmentally relevant ray. A “mixture curve” is estimated fr...

  7. 78 FR 255 - Resumption of the Population Estimates Challenge Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-03

    ... governmental unit. In those instances where a non-functioning county-level government or statistical equivalent...) A non-functioning county or statistical equivalent means a sub- state entity that does not function... represents a non-functioning county or statistical equivalent, the governor will serve as the chief executive...

  8. Primer of statistics in dental research: part I.

    PubMed

    Shintani, Ayumi

    2014-01-01

    Statistics play essential roles in evidence-based dentistry (EBD) practice and research, ranging widely from formulating scientific questions, designing studies, and collecting and analyzing data to interpreting, reporting, and presenting study findings. Mastering statistical concepts appears to be an unreachable goal for many dental researchers, in part due to statistical authorities' limitations in explaining statistical principles to health researchers without elaborating complex mathematical concepts. This series of 2 articles aims to introduce dental researchers to 9 essential topics in statistics for conducting EBD, with intuitive examples. Part I of the series covers the first 5 topics: (1) statistical graphs, (2) how to deal with outliers, (3) p-values and confidence intervals, (4) testing equivalence, and (5) multiplicity adjustment. Part II will follow, covering the remaining topics: (6) selecting the proper statistical tests, (7) repeated measures analysis, (8) epidemiological considerations for causal association, and (9) analysis of agreement. Copyright © 2014. Published by Elsevier Ltd.
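    For topic (4), testing equivalence, the standard workhorse is the two one-sided tests (TOST) procedure. The following is a minimal paired-data sketch with simulated values; the equivalence margin delta and the data are hypothetical, not from the article:

    ```python
    import numpy as np
    from scipy import stats

    def tost_paired(x, y, delta):
        """Two one-sided tests (TOST) for equivalence of paired measurements.

        Equivalence is declared at level alpha if both one-sided p-values for
        H0: |mean difference| >= delta fall below alpha.
        """
        d = np.asarray(x) - np.asarray(y)
        n = d.size
        se = d.std(ddof=1) / np.sqrt(n)
        t_lower = (d.mean() + delta) / se   # tests mean(d) <= -delta
        t_upper = (d.mean() - delta) / se   # tests mean(d) >= +delta
        p_lower = stats.t.sf(t_lower, n - 1)
        p_upper = stats.t.cdf(t_upper, n - 1)
        return max(p_lower, p_upper)

    rng = np.random.default_rng(1)
    x = rng.normal(10.0, 1.0, 50)
    y = x + rng.normal(0.05, 0.5, 50)   # nearly identical second measurement
    print(f"TOST p = {tost_paired(x, y, delta=0.5):.4f}")
    ```

    A small TOST p-value supports equivalence within ±delta, which is the reverse logic of an ordinary significance test.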

  9. Dividing the Force Concept Inventory into two equivalent half-length tests

    NASA Astrophysics Data System (ADS)

    Han, Jing; Bao, Lei; Chen, Li; Cai, Tianfang; Pi, Yuan; Zhou, Shaona; Tu, Yan; Koenig, Kathleen

    2015-06-01

    The Force Concept Inventory (FCI) is a 30-question multiple-choice assessment that has been a building block for much of the physics education research done today. In practice, there are often concerns regarding the length of the test and possible test-retest effects. Since many studies in the literature use the mean score of the FCI as the primary variable, it would be useful to have shorter tests that can produce FCI-equivalent scores while being quicker to administer and overcoming test-retest effects. In this study, we divide the 1995 version of the FCI into two half-length tests, each containing a different subset of the original FCI questions. The two new tests are shorter, still cover the same set of concepts, and produce mean scores equivalent to those of the FCI. Using a large quantitative data set collected at a large midwestern university, we statistically compare the assessment features of the two half-length tests and the full-length FCI. The results show that the mean error of equivalent scores between any two of the three tests is within 3%. Scores from all tests are well correlated. Based on the analysis, the two half-length tests appear to be a viable option for score-based assessments that must be administered quickly or that measure short-term gains where using identical pre- and post-test questions is a concern.

  10. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants.

    PubMed

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-04-01

    Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these tests, for example the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype-by-environment interaction by adding random variety effects, and includes repeated measures in time following a constant, linear, or quadratic pattern, possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail, and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.

  11. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants

    PubMed Central

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-01-01

    Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these tests, for example the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype-by-environment interaction by adding random variety effects, and includes repeated measures in time following a constant, linear, or quadratic pattern, possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail, and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided. PMID:24834325
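    The excess-zero count data mentioned above can be generated with a simple zero-inflated Poisson draw. This is a minimal sketch of the idea, not the authors' simulation model; the rate and zero-inflation values are arbitrary:

    ```python
    import numpy as np

    def zip_counts(rng, n, lam, pzero):
        """Draw n zero-inflated Poisson counts: with probability pzero the
        observation is a structural zero, otherwise Poisson(lam)."""
        structural = rng.random(n) < pzero
        counts = rng.poisson(lam, n)
        counts[structural] = 0
        return counts

    rng = np.random.default_rng(2)
    gm = zip_counts(rng, 1000, lam=4.0, pzero=0.3)   # e.g. counts on a GM plot
    ref = rng.poisson(4.0, 1000)                      # plain Poisson reference
    print(f"zero fraction: ZIP {np.mean(gm == 0):.2f} vs Poisson {np.mean(ref == 0):.2f}")
    ```

    Feeding such simulated counts through the planned difference or equivalence analysis is the essence of the prospective power analysis the paper advocates.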

  12. [Development and equivalence evaluation of spondee lists of mandarin speech test materials].

    PubMed

    Zhang, Hua; Wang, Shuo; Wang, Liang; Chen, Jing; Chen, Ai-ting; Guo, Lian-sheng; Zhao, Xiao-yan; Ji, Chen

    2006-06-01

    To edit spondee (disyllable) word lists as a part of the mandarin speech test materials (MSTM), to serve as basic speech materials for routine tests in clinics and laboratories. Two groups of professionals (audiologists, Chinese and Mandarin scientists, linguisticians, and statisticians) were first set up. The editing principles were established after 3 round-table meetings. Ten spondee lists, each with 50 words, were edited and recorded onto cassettes. All lists were phonemically balanced in 3 dimensions: vowels, consonants, and Chinese tones. Seventy-three normal-hearing college students were tested, with speech presented monaurally by earphone. Three statistical methods were used for the equivalence analysis. Correlation analysis showed that all lists were highly correlated, except List 5. Cluster analysis showed that the ten lists could be classified into two groups. However, the kappa test showed that the lists' homogeneity was poor. Spondee lists are among the most routine speech test materials. Their editing, recording, and equivalence evaluation are affected by many factors and require multidisciplinary cooperation. All lists edited in the present study need further modification in recording and testing before clinical and research use. The phonemic balance should be kept.

  13. The transfer of analytical procedures.

    PubMed

    Ermer, J; Limberger, M; Lis, K; Wätzig, H

    2013-11-01

    Analytical method transfers are certainly among the most discussed topics in the GMP regulated sector. However, they are surprisingly little regulated in detail. General information is provided by USP, WHO, and ISPE in particular. Most recently, the EU emphasized the importance of analytical transfer by including it in their draft of the revised GMP Guideline. In this article, an overview and comparison of these guidelines is provided. The key to success for method transfers is excellent communication between the sending and receiving units. In order to facilitate this communication, procedures, flow charts, and checklists for responsibilities, success factors, transfer categories, the transfer plan and report, strategies in case of failed transfers, and tables with acceptance limits are provided here, together with a comprehensive glossary. Potential pitfalls are described such that they can be avoided. In order to assure an efficient and sustainable transfer of analytical procedures, a practically relevant and scientifically sound evaluation with corresponding acceptance criteria is crucial. Various strategies and statistical tools such as significance tests, absolute acceptance criteria, and equivalence tests are thoroughly described and compared in detail, with examples. Significance tests should be avoided. The success criterion is not statistical significance, but rather analytical relevance. Depending on a risk assessment of the analytical procedure in question, statistical equivalence tests are recommended, because they include both a practically relevant acceptance limit and direct control of the statistical risks. However, for lower-risk procedures, a simple comparison of the transfer performance parameters to absolute limits is also regarded as sufficient. Copyright © 2013 Elsevier B.V. All rights reserved.
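    The article's distinction between statistical significance and analytical relevance can be illustrated numerically: with enough data, a trivially small inter-laboratory bias becomes "significant" in a difference test while still lying comfortably inside a practically relevant acceptance limit. All numbers below (bias, variability, the ±2 limit, the normal-approximation 90% CI) are hypothetical:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Sending vs receiving lab: a small, practically irrelevant bias of 0.3%
    sending = rng.normal(100.0, 0.4, 200)
    receiving = rng.normal(100.3, 0.4, 200)

    # Significance test: flags the irrelevant bias as "significant" at large n
    t, p = stats.ttest_ind(sending, receiving)

    # Equivalence view: 90% CI of the difference vs an acceptance limit of +/-2%
    diff = receiving.mean() - sending.mean()
    se = np.sqrt(sending.var(ddof=1) / 200 + receiving.var(ddof=1) / 200)
    ci = (diff - 1.645 * se, diff + 1.645 * se)
    print(f"t-test p = {p:.3g}; 90% CI = ({ci[0]:.2f}, {ci[1]:.2f}) vs limits +/-2.0")
    ```

    The difference test rejects, yet the whole confidence interval sits inside the acceptance limits, so the transfer would pass an equivalence criterion; this is the behavior that motivates the article's advice to avoid significance tests.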

  14. Statistical tests for detecting associations with groups of genetic variants: generalization, evaluation, and implementation

    PubMed Central

    Ferguson, John; Wheeler, William; Fu, YiPing; Prokunina-Olsson, Ludmila; Zhao, Hongyu; Sampson, Joshua

    2013-01-01

    With recent advances in sequencing, genotyping arrays, and imputation, GWAS now aim to identify associations with rare and uncommon genetic variants. Here, we describe and evaluate a class of statistics, generalized score statistics (GSS), that can test for an association between a group of genetic variants and a phenotype. GSS are a simple weighted sum of single-variant statistics and their cross-products. We show that the majority of statistics currently used to detect associations with rare variants are equivalent to choosing a specific set of weights within this framework. We then evaluate the power of various weighting schemes as a function of variant characteristics, such as MAF, the proportion associated with the phenotype, and the direction of effect. Ultimately, we find that two classical tests are robust and powerful, but details are provided as to when other GSS may perform favorably. The software package CRaVe is available at our website (http://dceg.cancer.gov/bb/tools/crave). PMID:23092956
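    The quadratic form the authors describe, a weighted sum of single-variant score statistics and their cross-products, can be written in a few lines. The statistics and weights below are made up for illustration; burden-style and SKAT-style tests correspond to rank-1 and diagonal weight matrices, respectively:

    ```python
    import numpy as np

    def gss(scores, weights):
        """Generalized score statistic Q = S^T W S: a weighted sum of
        single-variant score statistics and their cross-products."""
        s = np.asarray(scores, float)
        return float(s @ weights @ s)

    # Hypothetical single-variant score statistics for 4 rare variants
    s = np.array([1.8, 2.1, -0.3, 1.2])

    # Burden-style weights (rank-1, W = w w^T) vs SKAT-style diagonal W
    w = np.ones(4)
    q_burden = gss(s, np.outer(w, w))   # = (sum of statistics)^2
    q_skat = gss(s, np.diag(w**2))      # = sum of squared statistics

    print(q_burden, q_skat)  # → 23.04 and 9.18 (up to rounding)
    ```

    Choosing W is the whole game: the rank-1 form rewards variants with effects in the same direction, while the diagonal form is robust to mixed effect directions, matching the trade-offs the abstract evaluates.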

  15. Test of Equivalence Principle at 10(-8) Level by a Dual-Species Double-Diffraction Raman Atom Interferometer.

    PubMed

    Zhou, Lin; Long, Shitong; Tang, Biao; Chen, Xi; Gao, Fen; Peng, Wencui; Duan, Weitao; Zhong, Jiaqi; Xiong, Zongyuan; Wang, Jin; Zhang, Yuanzhong; Zhan, Mingsheng

    2015-07-03

    We report an improved test of the weak equivalence principle by using a simultaneous 85Rb-87Rb dual-species atom interferometer. We propose and implement a four-wave double-diffraction Raman transition scheme for the interferometer, and demonstrate its ability to suppress common-mode phase noise of Raman lasers after their frequencies and intensity ratios are optimized. The statistical uncertainty of the experimental data for the Eötvös parameter η is 0.8×10(-8) at 3200 s. With various systematic errors corrected, the final value is η=(2.8±3.0)×10(-8). The major uncertainty is attributed to the Coriolis effect.

  16. Measurement invariance via multigroup SEM: Issues and solutions with chi-square-difference tests.

    PubMed

    Yuan, Ke-Hai; Chan, Wai

    2016-09-01

    Multigroup structural equation modeling (SEM) plays a key role in studying measurement invariance and in group comparison. When population covariance matrices are deemed not equal across groups, the next step to substantiate measurement invariance is to see whether the sample covariance matrices in all the groups can be adequately fitted by the same factor model, called configural invariance. After configural invariance is established, cross-group equalities of factor loadings, error variances, and factor variances-covariances are then examined in sequence. With mean structures, cross-group equalities of intercepts and factor means are also examined. The established rule is that if the statistic at the current model is not significant at the level of .05, one then moves on to testing the next more restricted model using a chi-square-difference statistic. This article argues that such an established rule is unable to control either Type I or Type II errors. Analysis, an example, and Monte Carlo results show why and how chi-square-difference tests are easily misused. The fundamental issue is that chi-square-difference tests are developed under the assumption that the base model is sufficiently close to the population, and a nonsignificant chi-square statistic tells little about how good the model is. To overcome this issue, this article further proposes that null hypothesis testing in multigroup SEM be replaced by equivalence testing, which allows researchers to effectively control the size of misspecification before moving on to testing a more restricted model. R code is also provided to facilitate the applications of equivalence testing for multigroup SEM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
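    The established rule the authors critique, computing a chi-square-difference statistic between nested models and moving on when it is nonsignificant, looks like this in practice (the fit statistics below are hypothetical):

    ```python
    from scipy import stats

    # Hypothetical fit statistics for nested multigroup models:
    # configural model vs a model with loadings constrained equal across groups
    chi2_configural, df_configural = 52.3, 48
    chi2_metric, df_metric = 61.0, 54

    diff = chi2_metric - chi2_configural
    df_diff = df_metric - df_configural
    p = stats.chi2.sf(diff, df_diff)
    print(f"delta chi2 = {diff:.1f} on {df_diff} df, p = {p:.3f}")
    ```

    Under the conventional rule a p above .05 would license the more restricted model; the article's point is that this inference is only trustworthy if the base model is itself close to the population, which the nonsignificant statistic does not establish.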

  17. Single-row, double-row, and transosseous equivalent techniques for isolated supraspinatus tendon tears with minimal atrophy: A retrospective comparative outcome and radiographic analysis at minimum 2-year followup

    PubMed Central

    McCormick, Frank; Gupta, Anil; Bruce, Ben; Harris, Josh; Abrams, Geoff; Wilson, Hillary; Hussey, Kristen; Cole, Brian J.

    2014-01-01

    Purpose: The purpose of this study was to measure and compare the subjective, objective, and radiographic healing outcomes of single-row (SR), double-row (DR), and transosseous equivalent (TOE) suture techniques for arthroscopic rotator cuff repair. Materials and Methods: A retrospective comparative analysis of arthroscopic rotator cuff repairs by one surgeon from 2004 to 2010 at minimum 2-year followup was performed. Cohorts were matched for age, sex, and tear size. Subjective outcome variables included ASES, Constant, SST, UCLA, and SF-12 scores. Objective outcome variables included strength and active range of motion (ROM). Radiographic healing was assessed by magnetic resonance imaging (MRI). Statistical analysis was performed using analysis of variance (ANOVA), Mann-Whitney and Kruskal-Wallis tests, and the Fisher exact probability test, with significance set at P < 0.05. Results: Sixty-three patients completed the study requirements (20 SR, 21 DR, 22 TOE). There was a clinically and statistically significant improvement in outcomes with all repair techniques (ASES mean improvement, P < 0.0001). The mean final ASES scores were: SR 83 (SD 21.4); DR 87 (SD 18.2); TOE 87 (SD 13.2); (P = 0.73). There was a statistically significant improvement in strength for each repair technique (P < 0.001). There was no significant difference between techniques across all secondary outcome assessments: ASES improvement, Constant, SST, UCLA, SF-12, ROM, strength, and MRI re-tear rates. There was a decrease in re-tear rates from single-row (22%) to double-row (18%) to transosseous equivalent (11%); however, this difference was not statistically significant (P = 0.6). Conclusions: Compared to preoperatively, arthroscopic rotator cuff repair using SR, DR, or TOE techniques yielded a clinically and statistically significant improvement in subjective and objective outcomes at a minimum 2-year follow-up. Level of Evidence: Therapeutic level 3. PMID:24926159

  18. Statistical Approaches to Assess Biosimilarity from Analytical Data.

    PubMed

    Burdick, Richard; Coffey, Todd; Gutka, Hiten; Gratzl, Gyöngyi; Conlon, Hugh D; Huang, Chi-Ting; Boyne, Michael; Kuehne, Henriette

    2017-01-01

    Protein therapeutics have unique critical quality attributes (CQAs) that define their purity, potency, and safety. The analytical methods used to assess CQAs must be able to distinguish clinically meaningful differences in comparator products, and the most important CQAs should be evaluated with the most statistical rigor. High-risk CQA measurements assess the most important attributes that directly impact the clinical mechanism of action or have known implications for safety, while the moderate- to low-risk characteristics may have a lower direct impact and thereby may have a broader range to establish similarity. Statistical equivalence testing is applied for high-risk CQA measurements to establish the degree of similarity (e.g., highly similar fingerprint, highly similar, or similar) of selected attributes. Notably, some high-risk CQAs (e.g., primary sequence or disulfide bonding) are qualitative (e.g., the same as the originator or not the same) and therefore not amenable to equivalence testing. For biosimilars, an important step is the acquisition of a sufficient number of unique originator drug product lots to measure the variability in the originator drug manufacturing process and provide sufficient statistical power for the analytical data comparisons. Together, these analytical evaluations, along with PK/PD and safety data (immunogenicity), provide the data necessary to determine if the totality of the evidence warrants a designation of biosimilarity and subsequent licensure for marketing in the USA. In this paper, a case study approach is used to provide examples of analytical similarity exercises and the appropriateness of statistical approaches for the example data.

  19. Effects of Long-Term Thermal Exposure on Commercially Pure Titanium Grade 2 Elevated-Temperature Tensile Properties

    NASA Technical Reports Server (NTRS)

    Ellis, David L.

    2012-01-01

    Elevated-temperature tensile testing of commercially pure titanium (CP Ti) Grade 2 was conducted for as-received commercially produced sheet and following thermal exposure at 550 and 650 K (531 and 711 F) for times up to 5000 h. The tensile testing revealed some statistical differences between the 11 thermal treatments, but most thermal treatments were statistically equivalent. Previous data from room temperature tensile testing was combined with the new data to allow regression and development of mathematical models relating tensile properties to temperature and thermal exposure. The results indicate that thermal exposure temperature has a very small effect, whereas the thermal exposure duration has no statistically significant effects on the tensile properties. These results indicate that CP Ti Grade 2 will be thermally stable and suitable for long-duration space missions.

  20. A Comparison of the Effects of Non-Normal Distributions on Tests of Equivalence

    ERIC Educational Resources Information Center

    Ellington, Linda F.

    2011-01-01

    Statistical theory and its application provide the foundation to modern systematic inquiry in the behavioral, physical and social sciences disciplines (Fisher, 1958; Wilcox, 1996). It provides the tools for scholars and researchers to operationalize constructs, describe populations, and measure and interpret the relations between populations and…

  1. Test Vehicle Forebody Wake Effects on CPAS Parachutes

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.

    2017-01-01

    Parachute drag performance has been reconstructed for a large number of Capsule Parachute Assembly System (CPAS) flight tests. This allows for determining forebody wake effects indirectly through statistical means. When data are available in a "clean" wake, such as behind a slender test vehicle, the relative degradation in performance for other test vehicles can be computed as a Pressure Recovery Fraction (PRF). All four CPAS parachute types were evaluated: Forward Bay Cover Parachutes (FBCPs), Drogues, Pilots, and Mains. Many tests used the missile-shaped Parachute Compartment Drop Test Vehicle (PCDTV) to obtain data at high airspeeds. Other tests used the Orion "boilerplate" Parachute Test Vehicle (PTV) to evaluate parachute performance in a representative heatshield wake. Drag data from both vehicles are normalized to a "capsule" forebody equivalent for Orion simulations. A separate database of PCDTV-specific performance is maintained to accurately predict flight tests. Data are shared among analogous parachutes whenever possible to maximize statistical significance.

  2. Granger Causality Testing with Intensive Longitudinal Data.

    PubMed

    Molenaar, Peter C M

    2018-06-01

    The availability of intensive longitudinal data obtained by means of ambulatory assessment opens up new prospects for prevention research in that it allows the derivation of subject-specific dynamic networks of interacting variables by means of vector autoregressive (VAR) modeling. The dynamic networks thus obtained can be subjected to Granger causality testing in order to identify causal relations among the observed time-dependent variables. VARs have two equivalent representations: standard and structural. Results obtained with Granger causality testing depend upon which representation is chosen, yet no criteria exist on which this important choice can be based. A new equivalent representation is introduced called hybrid VARs with which the best representation can be chosen in a data-driven way. Partial directed coherence, a frequency-domain statistic for Granger causality testing, is shown to perform optimally when based on hybrid VARs. An application to real data is provided.
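    Granger causality testing in a standard VAR reduces to comparing restricted and unrestricted lag regressions with an F-test. Below is a minimal bivariate sketch on simulated data (not from the article): a VAR(1) in which x drives y but not vice versa:

    ```python
    import numpy as np

    def granger_f(y, x, p=1):
        """F-test that lags of x Granger-cause y in a bivariate VAR(p),
        via restricted vs unrestricted least-squares regressions."""
        n = len(y)
        Y = y[p:]
        own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
        other = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
        ones = np.ones((n - p, 1))
        Xr = np.hstack([ones, own])           # restricted: y's own lags only
        Xu = np.hstack([ones, own, other])    # unrestricted: plus x's lags
        rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
        rss_r, rss_u = rss(Xr), rss(Xu)
        df2 = (n - p) - Xu.shape[1]
        return ((rss_r - rss_u) / p) / (rss_u / df2)

    rng = np.random.default_rng(4)
    e = rng.standard_normal((2, 500))
    x = np.zeros(500); y = np.zeros(500)
    for t in range(1, 500):
        x[t] = 0.5 * x[t - 1] + e[0, t]
        y[t] = 0.3 * y[t - 1] + 0.4 * x[t - 1] + e[1, t]   # x drives y

    print(f"F(x->y) = {granger_f(y, x):.1f}, F(y->x) = {granger_f(x, y):.1f}")
    ```

    This is the time-domain analogue of the frequency-domain partial directed coherence statistic discussed in the abstract; the choice of VAR representation changes which coefficients enter such restrictions.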

  3. Colorimetric determination of nitrate plus nitrite in water by enzymatic reduction, automated discrete analyzer methods

    USGS Publications Warehouse

    Patton, Charles J.; Kryskalla, Jennifer R.

    2011-01-01

    In addition to operational details and performance benchmarks for these new DA-AtNaR2 nitrate + nitrite assays, this report also provides results of interference studies for common inorganic and organic matrix constituents at 1, 10, and 100 times their median concentrations in surface-water and groundwater samples submitted annually to the NWQL for nitrate + nitrite analyses. Paired t-test and Wilcoxon signed-rank statistical analyses of results determined by CFA-CdR methods and DA-AtNaR2 methods indicate that nitrate concentration differences between population means or sign ranks were either statistically equivalent to zero at the 95 percent confidence level (p ≥ 0.05) or analytically equivalent to zero; that is, when p < 0.05, concentration differences between population means or medians were less than MDLs.
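    The paired t-test and Wilcoxon signed-rank comparison used here can be sketched with scipy on simulated paired results; the concentration distribution and the size of the method offset below are invented for illustration:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    # Hypothetical paired nitrate + nitrite results (mg/L) from two methods
    cfa = rng.lognormal(0.0, 0.8, 60)         # CFA-CdR-style results
    da = cfa + rng.normal(0.0, 0.02, 60)      # DA-AtNaR2-style: tiny random offset

    t_stat, t_p = stats.ttest_rel(cfa, da)    # paired t-test on means
    w_stat, w_p = stats.wilcoxon(cfa - da)    # signed-rank test on medians
    print(f"paired t: p = {t_p:.2f}; Wilcoxon signed-rank: p = {w_p:.2f}")
    ```

    Running both tests, as the report does, guards against the skewed concentration distributions common in water-quality data: the t-test addresses mean differences while the signed-rank test does not assume normality.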

  4. Differential effects of oral reading to improve comprehension with severe learning disabled and educable mentally handicapped students.

    PubMed

    Chang, S Q; Williams, R L; McLaughlin, T F

    1983-01-01

    The purpose of this study was to evaluate the effectiveness of oral reading as a teaching technique for improving the reading comprehension of 11 Educable Mentally Handicapped or Severe Learning Disabled adolescents. Students were tested on their ability to answer comprehension questions from a short factual article. Comprehension improved following the oral reading for students with a reading grade equivalent of less than 5.5 (as measured by the Wide Range Achievement Test) but not for those students having a grade equivalent of greater than 5.5. This association was statistically significant (p < .01). Oral reading appeared to improve comprehension among the poorer readers but not among readers with moderately high ability.

  5. Does the month of birth influence the prevalence of refractive errors?

    PubMed

    Czepita, Maciej; Kuprjanowicz, Leszek; Safranow, Krzysztof; Mojsa, Artur; Majdanik, Ewa; Ustianowska, Maria; Czepita, Damian

    2015-01-01

    The aim of our study was to examine whether the month of birth influences the prevalence of refractive errors. A total of 5,601 schoolchildren were examined (2,688 boys and 2,913 girls, aged 6-18 years, mean age 11.9, SD 3.2 years). The children examined, students of elementary and secondary schools, were Polish and resided in and around Szczecin, Poland. Every examined subject underwent retinoscopy under cycloplegia using 1% tropicamide. Data analysis was performed using the Kruskal-Wallis test followed by the Siegel and Castellan post-hoc test or the Mann-Whitney U-test. P values of < 0.05 were considered statistically significant. Students born in June had significantly higher spherical equivalents than schoolchildren born in May (0.66 ± 1.17 and 0.39 ± 1.17 respectively, p = 0.0058). The Mann-Whitney U-test showed that students born in June had significantly higher spherical equivalents than schoolchildren born in any other month (0.66 ± 1.17 and 0.50 ± 1.17 respectively, p = 0.0033). Besides that, we did not observe any other association between refractive errors and the month of birth. Children born in Poland in June may have a higher spherical equivalent.

  6. Cultural adaptation of the Test of Narrative Language (TNL) into Brazilian Portuguese.

    PubMed

    Rossi, Natalia Freitas; Lindau, Tâmara de Andrade; Gillam, Ronald Bradley; Giacheti, Célia Maria

    To accomplish the translation and cultural adaptation of the Test of Narrative Language (TNL) into Brazilian Portuguese. The TNL is a formal instrument that assesses the narrative comprehension and oral narration of children between the ages of 5-0 and 11-11 (years-months). The TNL translation and adaptation process had the following steps: (1) translation into the target language; (2) synthesis of the translated versions; (3) back-translation; (4) verification of conceptual, semantic, and cultural equivalence; and (5) a pilot study (56 children within the test age range, of both genders). The adapted version maintained the same structure as the original version: number of tasks (three of comprehension and three of oral narration), narrative formats (no picture, sequenced pictures, and single picture), and scoring system. There were no adjustments to the pictures. The "McDonald's Story" was replaced by the "Snack Bar Story" to meet the semantic and experiential equivalence of the target population. The other stories had semantic and grammatical adjustments. A statistically significant difference was found when comparing the raw scores (comprehension, narration, and total) across age groups for the adapted version. Adjustments were required to achieve equivalence between the original and translated versions. The adapted version showed potential to identify differences in the oral narratives of children in the age range covered by the test. Evaluation of measurement equivalence for validation and test standardization is in progress and will supplement the study outcomes.

  7. Reducing Probabilistic Weather Forecasts to the Worst-Case Scenario: Anchoring Effects

    ERIC Educational Resources Information Center

    Joslyn, Susan; Savelli, Sonia; Nadav-Greenberg, Limor

    2011-01-01

    Many weather forecast providers believe that forecast uncertainty in the form of the worst-case scenario would be useful for general public end users. We tested this suggestion in 4 studies using realistic weather-related decision tasks involving high winds and low temperatures. College undergraduates, given the statistical equivalent of the…

  8. The Nature of Objectivity with the Rasch Model.

    ERIC Educational Resources Information Center

    Whitely, Susan E.; Dawis, Rene V.

    Although it has been claimed that the Rasch model leads to a higher degree of objectivity in measurement than has been previously possible, this model has had little impact on test development. Population-invariant item and ability calibrations along with the statistical equivalency of any two item subsets are supposedly possible if the item pool…

  9. Construct Equivalence of a National Certification Examination that Uses Dual Languages and Audio Assistance

    ERIC Educational Resources Information Center

    Wang, Shudong; Wang, Ning; Hoadley, David

    2007-01-01

    This study used confirmatory factor analysis (CFA) to examine the comparability of the National Nurse Aide Assessment Program (NNAAP[TM]) test scores across language and administration condition groups for calibration and validation samples that were randomly drawn from the same population. Fit statistics supported both the calibration and…

  10. Oxidative status and lipid profile in metabolic syndrome: gender differences.

    PubMed

    Kaya, Aysem; Uzunhasan, Isil; Baskurt, Murat; Ozkan, Alev; Ataoglu, Esra; Okcun, Baris; Yigit, Zerrin

    2010-02-01

    Metabolic syndrome is associated with cardiovascular disease and oxidative stress. The aim of this study was to investigate differences in novel oxidative stress parameters and lipid profiles between men and women with metabolic syndrome. The study population included 88 patients with metabolic syndrome, consisting of 48 postmenopausal women (group I) and 40 men (group II); premenopausal women were excluded. Plasma levels of total antioxidant status (TAS) and total oxidative status (TOS) were determined using the Erel automated measurement method, and the oxidative stress index (OSI) was calculated. To perform the calculation, the resulting unit of TAS, mmol Trolox equivalent/L, was converted to micromol equivalent/L, and the OSI value was calculated as OSI = [(TOS, micromol/L)/(TAS, micromol Trolox equivalent/L)] x 100. The Student t-test, Mann-Whitney U test, and chi-squared test were used for statistical analysis; the Pearson correlation coefficient and Spearman rank test were used for correlation analysis. P < or = 0.05 was considered statistically significant. Women and men had similar demographic characteristics and biochemical workups. Group II had significantly lower antioxidant (TAS) levels and significantly higher TOS and OSI levels compared with group I (P = 0.0001, P = 0.0035, and P = 0.0001, respectively). Apolipoprotein A (ApoA) levels were significantly higher in group I than in group II. Our findings indicate that women with metabolic syndrome have a better antioxidant status and higher ApoA levels than men, and suggest a higher oxidative stress index in men with metabolic syndrome. Considering the higher risk of atherosclerosis in men, these novel oxidative stress parameters may be valuable in the evaluation of patients with metabolic syndrome.
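
    The OSI calculation described above is simple enough to sketch. This minimal example (with illustrative values only, not data from the study) converts TAS from mmol to micromol Trolox equivalent/L before forming the ratio:

```python
def oxidative_stress_index(tos_umol_per_l: float, tas_mmol_per_l: float) -> float:
    """OSI in arbitrary units: TAS is converted from mmol to umol Trolox
    equivalent/L so numerator and denominator share the same unit."""
    tas_umol_per_l = tas_mmol_per_l * 1000.0  # mmol/L -> umol/L
    return tos_umol_per_l / tas_umol_per_l * 100.0

# Illustrative values only (not from the study):
print(oxidative_stress_index(12.0, 1.5))  # -> 0.8
```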

  11. An astronomer's guide to period searching

    NASA Astrophysics Data System (ADS)

    Schwarzenberg-Czerny, A.

    2003-03-01

    We concentrate on the analysis of unevenly sampled time series, interrupted by periodic gaps, as often encountered in astronomy. While some of our conclusions may appear surprising, all are based on the classical statistical principles of Fisher and his successors. Except for the discussion of resolution issues, it is best for the reader to forget temporarily about Fourier transforms and to concentrate on the problem of fitting a time series with a model curve. According to their statistical content, we divide the issues into several sections: (i) statistical and numerical aspects of model fitting; (ii) evaluation of fitted models as hypothesis testing; (iii) the role of orthogonal models in signal detection; (iv) conditions for equivalence of periodograms; and (v) rating sensitivity by test power. An experienced observer working with individual objects would benefit little from a formalized statistical approach. However, we demonstrate the usefulness of this approach in evaluating the performance of periodograms and in the quantitative design of large variability surveys.
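
    The model-fitting view of period search sketched above can be illustrated with a single-frequency least-squares periodogram point: fit a constant plus one harmonic to the unevenly sampled data and record the fraction of variance explained. The helper name, frequencies, and synthetic data below are illustrative, not taken from the paper:

```python
import numpy as np

def ls_periodogram_point(t, y, freq):
    """Fraction of variance explained by fitting y(t) with
    a + b*cos(2*pi*f*t) + c*sin(2*pi*f*t) at one trial frequency."""
    X = np.column_stack([np.ones_like(t),
                         np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

# Synthetic unevenly sampled series (illustrative only):
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 30.0, 120))
y = np.sin(2 * np.pi * 0.2 * t) + 0.1 * rng.normal(size=120)
print(ls_periodogram_point(t, y, 0.20))  # near 1: the true frequency
print(ls_periodogram_point(t, y, 0.31))  # much smaller: off-peak frequency
```

    Scanning `freq` over a grid turns this into a full periodogram; uneven sampling poses no difficulty because the fit is ordinary linear least squares.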

  12. [Generalization of money-handling through training in equivalence relationships].

    PubMed

    Vives-Montero, Carmen; Valero-Aguayo, Luis; Ascanio, Lourdes

    2011-02-01

    This research used a matching-to-sample procedure and an equivalence learning process with language and verbal tasks. In the study, equivalence relationships involving money were trained using several kinds of euro coins. The sample consisted of 16 children aged 5 years (8 in the experimental group and 8 in the control group). The prerequisite behaviors, the identification of coins, and the practical use of different euro coins were assessed in pre and post phases for both groups. The children in the experimental group performed an equivalence task using the matching-to-sample procedure, which consisted of a sample stimulus and four comparison stimuli, using a series of euro coins with equivalent value in each set. The children in the control group did not undergo this training. The results showed large variability in the children's performance on the equivalence tests. The experimental group showed statistically significant pre-post changes, as well as greater generalization in the identification of money and in the use of euro coins than the control group. The implications for educational training and the characteristics of the procedure used here for coin equivalence are discussed.

  13. Changes in Occupational Radiation Exposures after Incorporation of a Real-time Dosimetry System in the Interventional Radiology Suite.

    PubMed

    Poudel, Sashi; Weir, Lori; Dowling, Dawn; Medich, David C

    2016-08-01

    A statistical pilot study was performed retrospectively to analyze potential changes in occupational radiation exposures to Interventional Radiology (IR) staff at Lawrence General Hospital after implementation of the i2 Active Radiation Dosimetry System (Unfors RaySafe Inc, 6045 Cochran Road, Cleveland, OH 44139-3302). In this study, the monthly OSL dosimetry records obtained during the 8-mo period prior to i2 implementation were normalized to the number of procedures performed during each month and statistically compared to the normalized dosimetry records obtained for the 8-mo period after i2 implementation. The resulting statistics included the mean and standard deviation of the dose equivalent per procedure, along with appropriate hypothesis tests to assess for statistically significant differences between the pre- and post-i2 study periods. Hypothesis testing was performed on three groups of staff present during an IR procedure: the first group included all members of the IR staff, the second consisted of the IR radiologists, and the third consisted of the IR technologist staff. After implementing the i2 active dosimetry system, participating members of the Lawrence General IR staff had a reduction in the average dose equivalent per procedure of 43.1% ± 16.7% (p = 0.04). Similarly, Lawrence General IR radiologists had a 65.8% ± 33.6% (p = 0.01) reduction, while the technologists had a 45.0% ± 14.4% (p = 0.03) reduction.

  14. Validation of a modification to Performance-Tested Method 070601: Reveal Listeria Test for detection of Listeria spp. in selected foods and selected environmental samples.

    PubMed

    Alles, Susan; Peng, Linda X; Mozola, Mark A

    2009-01-01

    A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). 
An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.

  15. Coefficient alpha and interculture test selection.

    PubMed

    Thurber, Steven; Kishi, Yasuhiro

    2014-04-01

    The internal consistency reliability of a measure can be a focal point in evaluating the potential adequacy of an instrument for adaptation to another cultural setting. Cronbach's alpha (α) coefficient is often used as the statistical index for such a determination. However, alpha presumes a tau-equivalent test and may constitute an inaccurate population estimate for multidimensional tests. These notions are expanded and examined with a Japanese version of a questionnaire on nursing attitudes toward suicidal patients, originally constructed in Sweden using the English language. The English measure was reported to have acceptable internal consistency (α), although the dimensionality of the questionnaire was not addressed. The Japanese scale was found to lack tau-equivalence. An alternative to alpha, "composite reliability," was computed and found to be below acceptable standards in magnitude and precision. Implications for research application of the Japanese instrument are discussed. © The Author(s) 2012.
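
    Cronbach's alpha, discussed above, has a standard closed form: α = k/(k−1) · (1 − Σ item variances / variance of the total score). A minimal sketch (illustrative data, not from the study) for a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Five respondents, three highly consistent items (illustrative data):
items = np.array([[1, 1, 1],
                  [2, 2, 1],
                  [3, 3, 3],
                  [4, 4, 4],
                  [5, 5, 4]], dtype=float)
print(round(cronbach_alpha(items), 3))  # -> 0.986
```

    Note that a high alpha from this formula does not establish tau-equivalence or unidimensionality, which is precisely the caveat the abstract raises.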

  16. Missing data handling in non-inferiority and equivalence trials: A systematic review.

    PubMed

    Rabe, Brooke A; Day, Simon; Fiero, Mallorie H; Bell, Melanie L

    2018-05-25

    Non-inferiority (NI) and equivalence clinical trials test whether a new treatment is therapeutically no worse than, or equivalent to, an existing standard of care. Missing data in clinical trials have been shown to reduce statistical power and potentially bias estimates of effect size; however, in NI and equivalence trials, they present additional issues. For instance, they may decrease sensitivity to differences between treatment groups and bias toward the alternative hypothesis of NI (or equivalence). Our primary aim was to review the extent of and methods for handling missing data (model-based methods, single imputation, multiple imputation, complete case), the analysis sets used (Intention-To-Treat, Per-Protocol, or both), and whether sensitivity analyses were used to explore departures from assumptions about the missing data. We conducted a systematic review of NI and equivalence trials published between May 2015 and April 2016 by searching the PubMed database. Articles were reviewed primarily by 2 reviewers, with 6 articles reviewed by both reviewers to establish consensus. Of 109 selected articles, 93% reported some missing data in the primary outcome. Among those, 50% reported complete case analysis, and 28% reported single imputation approaches for handling missing data. Only 32% reported conducting analyses of both intention-to-treat and per-protocol populations. Only 11% conducted any sensitivity analyses to test assumptions with respect to missing data. Missing data are common in NI and equivalence trials, and they are often handled by methods which may bias estimates and lead to incorrect conclusions. Copyright © 2018 John Wiley & Sons, Ltd.
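
    The equivalence logic these trials rely on is commonly operationalized as two one-sided tests (TOST). A minimal sketch, assuming a pooled-variance Student t statistic and an illustrative margin `delta` (the function and data are hypothetical, not from the review):

```python
import numpy as np
from scipy import stats

def tost_equivalence(x, y, delta):
    """Two one-sided tests (TOST) for mean equivalence:
    H0: |mu_x - mu_y| >= delta  vs  H1: |mu_x - mu_y| < delta.
    Returns the larger one-sided p-value; equivalence is declared
    when it falls below the chosen alpha."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    # Pooled-variance standard error (equal variances assumed)
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
    df = nx + ny - 2
    p_lower = stats.t.sf((diff + delta) / se, df)   # tests diff > -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)  # tests diff < +delta
    return max(p_lower, p_upper)

# Illustrative data with identical means:
x = np.array([5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9])
y = np.array([5.0, 5.2, 4.8, 5.1, 4.9, 5.0, 4.9, 5.1])
print(tost_equivalence(x, y, delta=0.3))   # small: equivalent within +/-0.3
print(tost_equivalence(x, y, delta=0.01))  # large: cannot claim +/-0.01 equivalence
```

    Missing data enter exactly here: dropping cases (complete case analysis) or imputing them shifts `x` and `y`, and can shrink the apparent difference toward zero, biasing the test toward a claim of equivalence.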

  17. 15 CFR 90.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... non-functioning county or statistical equivalent means a sub-state entity that does not function as an... program, an eligible governmental unit also includes the District of Columbia and non-functioning counties or statistical equivalents represented by a FSCPE member agency. ...

  18. Indirect potentiometric titration of ascorbic acid in pharmaceutical preparations using copper based mercury film electrode.

    PubMed

    Abdul Kamal Nazer, Meeran Mohideen; Hameed, Abdul Rahman Shahul; Riyazuddin, Patel

    2004-01-01

    A simple and rapid potentiometric method for the estimation of ascorbic acid in pharmaceutical dosage forms has been developed. The method is based on treating ascorbic acid with iodine and titrating the iodide produced, equivalent to the ascorbic acid present, with silver nitrate using a Copper-Based Mercury Film Electrode (CBMFE) as the indicator electrode. An interference study was carried out to check for possible interference from common excipients and other vitamins. The precision and accuracy of the method were assessed by applying the lack-of-fit test and other statistical methods. The results of the proposed method and the British Pharmacopoeia method were compared using the F- and t-tests of significance.

  19. Conducted-Susceptibility Testing as an Alternative Approach to Unit-Level Radiated-Susceptibility Verifications

    NASA Astrophysics Data System (ADS)

    Badini, L.; Grassi, F.; Pignari, S. A.; Spadacini, G.; Bisognin, P.; Pelissou, P.; Marra, S.

    2016-05-01

    This work presents a theoretical rationale for substituting the radiated-susceptibility (RS) verifications defined in current aerospace standards with an equivalent conducted-susceptibility (CS) test procedure based on bulk current injection (BCI) up to 500 MHz. Statistics is used to overcome the lack of knowledge about uncontrolled or uncertain setup parameters, with particular reference to the common-mode impedance of equipment. The BCI test level is investigated so as to ensure correlation of the currents injected into the equipment under test via CS and RS. In particular, an over-testing probability quantifies the severity of the BCI test with respect to the RS test.

  20. Reliability and equivalence of alternate forms for the Symbol Digit Modalities Test: implications for multiple sclerosis clinical trials.

    PubMed

    Benedict, Ralph H B; Smerbeck, Audrey; Parikh, Rajavi; Rodgers, Jonathan; Cadavid, Diego; Erlanger, David

    2012-09-01

    Cognitive impairment is common in multiple sclerosis (MS), but is seldom assessed in clinical trials investigating the effects of disease-modifying therapies. The Symbol Digit Modalities Test (SDMT) is a particularly promising tool due to its sensitivity and robust correlation with brain magnetic resonance imaging (MRI) and vocational disability. Unfortunately, there are no validated alternate SDMT forms, which are needed to mitigate practice effects. The aim of the study was to assess the reliability and equivalence of SDMT alternate forms. Twenty-five healthy participants completed each of five alternate versions of the SDMT - the standard form, two versions from the Rao Brief Repeatable Battery, and two forms specifically designed for this study. Order effects were controlled using a Latin-square research design. All five versions of the SDMT produced mean values within 3 raw score points of one another. Three forms were very consistent, and not different by conservative statistical tests. The SDMT test-retest reliability using these forms was good to excellent, with all r values exceeding 0.80. For the first time, we find good evidence that at least three alternate versions of the SDMT are of equivalent difficulty in healthy adults. The forms are reliable, and can be implemented in clinical trials emphasizing cognitive outcomes.

  1. Validation of a modification to Performance-Tested Method 010403: microwell DNA hybridization assay for detection of Listeria spp. in selected foods and selected environmental surfaces.

    PubMed

    Alles, Susan; Peng, Linda X; Mozola, Mark A

    2009-01-01

    A modification to Performance-Tested Method 010403, GeneQuence Listeria Test (DNAH method), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C, and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there were statistically significant differences in method performance between the DNAH method and reference culture procedures for only 2 foods (pasteurized crab meat and lettuce) at the 27 h enrichment time point and for only a single food (pasteurized crab meat) in one trial at the 30 h enrichment time point. Independent laboratory testing with 3 foods showed statistical equivalence between the methods for all foods, and results support the findings of the internal trials. Overall, considering both internal and independent laboratory trials, sensitivity of the DNAH method relative to the reference culture procedures was 90.5%. Results of testing 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the DNAH method was more productive than the reference U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the DNAH method at the 24 h time point. Overall, sensitivity of the DNAH method at 24 h relative to that of the USDA-FSIS method was 152%. 
The DNAH method exhibited extremely high specificity, with only 1% false-positive reactions overall.

  2. Normative wideband reflectance, equivalent admittance at the tympanic membrane, and acoustic stapedius reflex threshold in adults

    PubMed Central

    Feeney, M. Patrick; Keefe, Douglas H.; Hunter, Lisa L.; Fitzpatrick, Denis F.; Garinis, Angela C.; Putterman, Daniel B.; McMillan, Garnett P.

    2016-01-01

    Objectives Wideband acoustic immittance (WAI) measures such as pressure reflectance, parameterized by absorbance and group delay, equivalent admittance at the tympanic membrane (TM), and acoustic stapedius reflex threshold (ASRT) describe middle-ear function across a wide frequency range, compared to traditional tests employing a single frequency. The objective of this study was to obtain normative data using these tests for a group of normal hearing adults and investigate test-retest reliability using a longitudinal design. Design A longitudinal prospective design was used to obtain normative test and retest data on clinical and WAI measures. Subjects were 13 males and 20 females (mean age = 25 y). Inclusion criteria included normal audiometry and clinical immittance. Subjects were tested on two separate visits approximately one month apart. Reflectance and equivalent admittance at the TM were measured from 0.25 to 8.0 kHz under three conditions: at ambient pressure in the ear canal and with pressure sweeps from positive to negative pressure (downswept) and negative to positive pressure (upswept). Equivalent admittance at the TM was calculated using admittance measurements at the probe tip which were adjusted using a model of sound transmission in the ear canal and acoustic estimates of ear-canal area and length. Wideband ASRTs were measured at tympanometric peak pressure (TPP) derived from the average TPP of downswept and upswept tympanograms. Descriptive statistics were obtained for all WAI responses, and wideband and clinical ASRTs were compared. Results Mean absorbance at ambient pressure and TPP demonstrated a broad band-pass pattern typical of previous studies. Test-retest differences were lower for absorbance at TPP for the downswept method compared to ambient pressure at frequencies between 1.0 and 1.26 kHz. Mean tympanometric peak-to-tail differences for absorbance were greatest around 1.0 to 2.0 kHz and similar for positive and negative tails. 
Mean group delay at ambient pressure and at TPP were greatest between 0.32 and 0.6 kHz at 200 to 300 μs, reduced at frequencies between 0.8 and 1.5 kHz, and increased above 1.5 kHz to around 150 μs. Mean equivalent admittance at the TM had a lower level for the ambient method than at TPP for both sweep directions below 1.2 kHz, but the difference between methods was only statistically significant for the comparison between the ambient method and TPP for the upswept tympanogram. Mean equivalent admittance phase was positive at all frequencies. Test-retest reliability of the equivalent admittance level ranged from 1 to 3 dB at frequencies below 1.0 kHz, but increased to 8 to 9 dB at higher frequencies. The mean wideband ASRT for an ipsilateral broadband noise activator was 12 dB lower than the clinical ASRT, but had poorer reliability. Conclusions Normative data for the WAI test battery revealed minor differences for results at ambient pressure compared to tympanometric methods at TPP for reflectance, group delay, and equivalent admittance level at the TM for subjects with middle-ear pressure within ±100 daPa. Test-retest reliability was better for absorbance at TPP for the downswept tympanogram compared to ambient pressure at frequencies around 1.0 kHz. Large peak-to-tail differences in absorbance combined with good reliability at frequencies between about 0.7 and 3.0 kHz suggest that this may be a sensitive frequency range for interpreting absorbance at TPP. The mean wideband ipsilateral ASRT was lower than the clinical ASRT, consistent with previous studies. Results are promising for the use of a wideband test battery to evaluate middle-ear function. PMID:28045835

  3. EFFICIENTLY ESTABLISHING CONCEPTS OF INFERENTIAL STATISTICS AND HYPOTHESIS DECISION MAKING THROUGH CONTEXTUALLY CONTROLLED EQUIVALENCE CLASSES

    PubMed Central

    Fienup, Daniel M; Critchfield, Thomas S

    2010-01-01

    Computerized lessons that reflect stimulus equivalence principles were used to teach college students concepts related to inferential statistics and hypothesis decision making. Lesson 1 taught participants concepts related to inferential statistics, and Lesson 2 taught them to base hypothesis decisions on a scientific hypothesis and the direction of an effect. Lesson 3 taught the conditional influence of inferential statistics over decisions regarding the scientific and null hypotheses. Participants entered the study with low scores on the targeted skills and left the study demonstrating a high level of accuracy on these skills, which involved mastering more relations than were taught formally. This study illustrates the efficiency of equivalence-based instruction in establishing academic skills in sophisticated learners. PMID:21358904

  4. Optimal study design with identical power: an application of power equivalence to latent growth curve models.

    PubMed

    von Oertzen, Timo; Brandmaier, Andreas M

    2013-06-01

    Structural equation models have become a broadly applied data-analytic framework. Among them, latent growth curve models have become a standard method in longitudinal research. However, researchers often rely solely on rules of thumb about statistical power in their study designs. The theory of power equivalence provides an analytical answer to the question of how design factors, for example, the number of observed indicators and the number of time points assessed in repeated measures, trade off against each other while holding the power for likelihood-ratio tests on the latent structure constant. In this article, we present applications of power-equivalent transformations on a model with data from a previously published study on cognitive aging, and highlight consequences of participant attrition on power. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  5. pacce: Perl algorithm to compute continuum and equivalent widths

    NASA Astrophysics Data System (ADS)

    Riffel, Rogério; Borges Vale, Tibério

    2011-08-01

    We present the Perl Algorithm to Compute Continuum and Equivalent widths (pacce). We describe the methods used in the computations and the requirements for its usage. We compare measurements made with pacce against "manual" ones made using the IRAF splot task. These tests show that for synthetic simple stellar population (SSP) models the equivalent width measurements are very similar (differences ≲0.2 Å) for both approaches. In real stellar spectra, the correlation between the two sets of values is still very good, but with differences of up to 0.5 Å. pacce is also able to determine the mean continuum and the continuum at line center, which are helpful in stellar population studies. In addition, it is able to compute the uncertainties in the equivalent widths using photon statistics. The code is made available to the community through the web at http://www.if.ufrgs.br/~riffel/software.html .
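
    The equivalent widths pacce measures follow the standard definition EW = ∫(1 − F/Fc) dλ. A minimal numerical sketch (trapezoidal integration over a synthetic Gaussian absorption line; this is an illustration of the definition, not pacce's actual code):

```python
import numpy as np

def equivalent_width(wave, flux, cont):
    """EW = integral of (1 - F/Fc) d(lambda) via the trapezoidal rule;
    positive for absorption by the usual sign convention."""
    depth = 1.0 - flux / cont
    return np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(wave))

# Synthetic Gaussian absorption line: depth 0.5, sigma 2 Angstrom.
# Analytic EW = 0.5 * 2 * sqrt(2*pi) ~ 2.507 Angstrom.
wave = np.linspace(6550.0, 6580.0, 601)
cont = np.ones_like(wave)
flux = cont - 0.5 * np.exp(-0.5 * ((wave - 6565.0) / 2.0) ** 2)
print(round(equivalent_width(wave, flux, cont), 3))  # -> 2.507
```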

  6. In-Situ Adhesive Bond Assessment

    DTIC Science & Technology

    2010-08-01

    a list of AR coefficients. The use of the VCC metric, with appropriate extreme value statistics models as described in detail below, allowed...equivalent PZT with thickness equal to the MFC electrode spacing, a, and length equal to the MFC net electrode length, (p le), where p is the number of...particular geometry of the test specimen and with MFC patches affixed to the

  7. The Relationship between the Rigor of a State's Proficiency Standard and Student Achievement in the State

    ERIC Educational Resources Information Center

    Stoneberg, Bert D.

    2015-01-01

    The National Center for Education Statistics conducted a mapping study that equated the percentage proficient or above on each state's NCLB reading and mathematics tests in grades 4 and 8 to the NAEP scale. Each "NAEP equivalent score" was labeled according to NAEP's achievement levels and used to compare state proficiency standards and…

  8. Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics

    PubMed Central

    Chen, Wenan; Larrabee, Beth R.; Ovsyannikova, Inna G.; Kennedy, Richard B.; Haralambieva, Iana H.; Poland, Gregory A.; Schaid, Daniel J.

    2015-01-01

    Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. PMID:25948564

  9. Dissolution curve comparisons through the F(2) parameter, a Bayesian extension of the f(2) statistic.

    PubMed

    Novick, Steven; Shen, Yan; Yang, Harry; Peterson, John; LeBlond, Dave; Altan, Stan

    2015-01-01

    Dissolution (or in vitro release) studies constitute an important aspect of pharmaceutical drug development. One important use of such studies is for justifying a biowaiver for post-approval changes which requires establishing equivalence between the new and old product. We propose a statistically rigorous modeling approach for this purpose based on the estimation of what we refer to as the F2 parameter, an extension of the commonly used f2 statistic. A Bayesian test procedure is proposed in relation to a set of composite hypotheses that capture the similarity requirement on the absolute mean differences between test and reference dissolution profiles. Several examples are provided to illustrate the application. Results of our simulation study comparing the performance of f2 and the proposed method show that our Bayesian approach is comparable to or in many cases superior to the f2 statistic as a decision rule. Further useful extensions of the method, such as the use of continuous-time dissolution modeling, are considered.
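
    The classical f2 statistic that the proposed F2 parameter extends has a standard closed form, f2 = 50·log10(100/√(1 + mean squared difference)), where the differences are between % dissolved at matched time points. A short sketch (illustrative dissolution percentages, not data from the paper):

```python
import numpy as np

def f2_similarity(ref, test):
    """Classical f2 similarity factor for two dissolution profiles
    (% dissolved at matched time points); f2 >= 50 is the usual
    similarity criterion."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    msd = np.mean((ref - test) ** 2)  # mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Illustrative % dissolved at five time points:
ref_profile  = [22.0, 41.0, 60.0, 79.0, 91.0]
test_profile = [20.0, 39.0, 62.0, 80.0, 90.0]
print(round(f2_similarity(ref_profile, test_profile), 1))  # -> 85.5
```

    Identical profiles give f2 = 100, and larger mean differences drive f2 down, which is why the paper treats f2 as a point-estimate decision rule that its Bayesian F2 procedure generalizes.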

  10. Part II: Biomechanical assessment for a footprint-restoring transosseous-equivalent rotator cuff repair technique compared with a double-row repair technique.

    PubMed

    Park, Maxwell C; Tibone, James E; ElAttrache, Neal S; Ahmad, Christopher S; Jun, Bong-Jae; Lee, Thay Q

    2007-01-01

    We hypothesized that a transosseous-equivalent repair would demonstrate improved tensile strength and gap formation between the tendon and tuberosity when compared with a double-row technique. In 6 fresh-frozen human shoulders, a transosseous-equivalent rotator cuff repair was performed: a suture limb from each of two medial anchors was bridged over the tendon and fixed laterally with an interference screw. In 6 contralateral matched-pair specimens, a double-row repair was performed. For all repairs, a materials testing machine was used to load each repair cyclically from 10 N to 180 N for 30 cycles; each repair underwent tensile testing to measure failure loads at a deformation rate of 1 mm/sec. Gap formation between the tendon edge and insertion was measured with a video digitizing system. The mean ultimate load to failure was significantly greater for the transosseous-equivalent technique (443.0 +/- 87.8 N) compared with the double-row technique (299.2 +/- 52.5 N) (P = .043). Gap formation during cyclic loading was not significantly different between the transosseous-equivalent and double-row techniques, with mean values of 3.74 +/- 1.51 mm and 3.79 +/- 0.68 mm, respectively (P = .95). Stiffness for all cycles was not statistically different between the two constructs (P > .40). The transosseous-equivalent rotator cuff repair technique improves ultimate failure loads when compared with a double-row technique. Gap formation is similar for both techniques. A transosseous-equivalent repair helps restore footprint dimensions and provides a stronger repair than the double-row technique, which may help optimize healing biology.

  11. Extending local canonical correlation analysis to handle general linear contrasts for FMRI data.

    PubMed

    Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar

    2012-01-01

    Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.

  12. Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data

    PubMed Central

    Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar

    2012-01-01

    Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic. PMID:22461786

  13. Evaluating Independent Proportions for Statistical Difference, Equivalence, Indeterminacy, and Trivial Difference Using Inferential Confidence Intervals

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2009-01-01

    Tryon presented a graphic inferential confidence interval (ICI) approach to analyzing two independent and dependent means for statistical difference, equivalence, replication, indeterminacy, and trivial difference. Tryon and Lewis corrected the reduction factor used to adjust descriptive confidence intervals (DCIs) to create ICIs and introduced…

  14. Little Green Lies: Dissecting the Hype of Renewables

    DTIC Science & Technology

    2011-05-11

    Sources: 2009 BP Statistical Energy Analysis, US Energy Information Administration. Per Capita Energy Use (kg oil equivalent): World 1,819; USA 7,766... Energy Trends (Source: 2006 BP Statistical Energy Analysis): Oil 37%, Nuclear 6%, Coal 25%, Gas 23%, Biomass 4%, Hydro 3%, Wind...

  15. No sex differences in use of dopaminergic medication in early Parkinson disease in the US and Canada - baseline findings of a multicenter trial.

    PubMed

    Umeh, Chizoba C; Pérez, Adriana; Augustine, Erika F; Dhall, Rohit; Dewey, Richard B; Mari, Zoltan; Simon, David K; Wills, Anne-Marie A; Christine, Chadwick W; Schneider, Jay S; Suchowersky, Oksana

    2014-01-01

    Sex differences in Parkinson disease clinical features have been reported, but few studies have examined sex influences on use of dopaminergic medication in early Parkinson disease. The objective of this study was to test if there are differences in the type of dopaminergic medication used and levodopa equivalent daily dose between men and women with early Parkinson disease enrolled in a large multicenter study of Creatine as a potential disease modifying therapy - the National Institute of Neurological Disorders and Stroke Exploratory Trials in Parkinson Disease Long-Term Study-1. Baseline data of 1,741 participants from 45 participating sites were analyzed. Participants from the United States and Canada were enrolled within five years of Parkinson Disease diagnosis. Two outcome variables were studied: type of dopaminergic medication used and levodopa equivalent daily dose at baseline in the Long-Term Study-1. Chi-square statistic and linear regression models were used for statistical analysis. There were no statistically significant differences in the frequency of use of different types of dopaminergic medications at baseline between men and women with Parkinson Disease. A small but statistically significant difference was observed in the median unadjusted levodopa equivalent daily dose at baseline between women (300 mg) and men (325 mg), but this was not observed after controlling for disease duration (years since Parkinson disease diagnosis), disease severity (Unified Parkinson's Disease Rating Scale Motor and Activities of Daily Living Scores), and body weight. In this large multicenter study, we did not observe sex differences in the type and dose of dopaminergic medications used in early Parkinson Disease. Further research is needed to evaluate the influence of male or female sex on use of dopaminergic medication in mid- and late-stage Parkinson Disease.

  16. PACCE: Perl Algorithm to Compute Continuum and Equivalent Widths

    NASA Astrophysics Data System (ADS)

    Riffel, Rogério; Borges Vale, Tibério

    2011-05-01

    PACCE (Perl Algorithm to Compute continuum and Equivalent Widths) computes continuum and equivalent widths. PACCE is able to determine mean continuum and continuum at line center values, which are helpful in stellar population studies, and is also able to compute the uncertainties in the equivalent widths using photon statistics.
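The equivalent width PACCE reports has the textbook definition EW = ∫(1 − F_λ/F_c) dλ over the line, with F_c the continuum level estimated from line-free windows. A minimal numerical sketch with a hypothetical helper (names and interface are illustrative, not PACCE's actual API):

```python
import numpy as np

def equivalent_width(wavelength, flux, line_window, cont_windows):
    """Equivalent width of a spectral line (illustrative helper).

    wavelength, flux : 1-D arrays of the spectrum
    line_window      : (lo, hi) wavelength range covering the line
    cont_windows     : list of (lo, hi) line-free ranges for the mean continuum
    """
    wavelength = np.asarray(wavelength, dtype=float)
    flux = np.asarray(flux, dtype=float)
    cont = np.zeros(wavelength.shape, dtype=bool)
    for lo, hi in cont_windows:
        cont |= (wavelength >= lo) & (wavelength <= hi)
    f_cont = flux[cont].mean()  # mean continuum level
    line = (wavelength >= line_window[0]) & (wavelength <= line_window[1])
    wl, depth = wavelength[line], 1.0 - flux[line] / f_cont
    # trapezoidal integration of (1 - F/Fc) over the line window
    return float(np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(wl)))
```

For a flat continuum with a rectangular 50%-deep dip spanning two wavelength units, the integral recovers an equivalent width of 1.0.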

  17. Functional equivalency inferred from "authoritative sources" in networks of homologous proteins.

    PubMed

    Natarajan, Shreedhar; Jakobsson, Eric

    2009-06-12

    A one-on-one mapping of protein functionality across different species is a critical component of comparative analysis. This paper presents a heuristic algorithm for discovering the Most Likely Functional Counterparts (MoLFunCs) of a protein, based on simple concepts from network theory. A key feature of our algorithm is utilization of the user's knowledge to assign high confidence to selected functional identification. We show use of the algorithm to retrieve functional equivalents for 7 membrane proteins, from an exploration of almost 40 genomes from multiple online resources. We verify the functional equivalency of our dataset through a series of tests that include sequence, structure and function comparisons. Comparison is made to the OMA methodology, which also identifies one-on-one mapping between proteins from different species. Based on that comparison, we believe that incorporation of the user's knowledge as a key aspect of the technique adds value to purely statistical formal methods.

  18. Functional Equivalency Inferred from “Authoritative Sources” in Networks of Homologous Proteins

    PubMed Central

    Natarajan, Shreedhar; Jakobsson, Eric

    2009-01-01

    A one-on-one mapping of protein functionality across different species is a critical component of comparative analysis. This paper presents a heuristic algorithm for discovering the Most Likely Functional Counterparts (MoLFunCs) of a protein, based on simple concepts from network theory. A key feature of our algorithm is utilization of the user's knowledge to assign high confidence to selected functional identification. We show use of the algorithm to retrieve functional equivalents for 7 membrane proteins, from an exploration of almost 40 genomes from multiple online resources. We verify the functional equivalency of our dataset through a series of tests that include sequence, structure and function comparisons. Comparison is made to the OMA methodology, which also identifies one-on-one mapping between proteins from different species. Based on that comparison, we believe that incorporation of the user's knowledge as a key aspect of the technique adds value to purely statistical formal methods. PMID:19521530

  19. [Dilemma of the null hypothesis in experimental tests of ecological hypotheses].

    PubMed

    Li, Ji

    2016-06-01

    Experimental testing is one of the major ways of testing ecological hypotheses, though the role of the null hypothesis in such tests remains contested. Quinn and Dunham (1983) analyzed the hypothesis-deduction model of Platt (1964) and argued that there is no null hypothesis in ecology that can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P) non-decisivity prevent a statistical null hypothesis from being strictly tested. Moreover, since the null hypothesis H0 (α=1, β=0) and the alternative hypothesis H1' (α'=1, β'=0) in ecological processes differ from those in classical physics, the ecological null hypothesis cannot be strictly tested experimentally either. These dilemmas of the null hypothesis can be relieved via reduction of the P value, careful selection of the null hypothesis, non-centralization of the non-null hypothesis, and two-tailed tests. However, statistical null hypothesis significance testing (NHST) should not be equated with a logical test of causality in ecological hypotheses. Hence, the findings and conclusions of methodological studies and experimental tests based on NHST are not always logically reliable.

  20. Statistical tests to compare motif count exceptionalities

    PubMed Central

    Robin, Stéphane; Schbath, Sophie; Vandewalle, Vincent

    2007-01-01

    Background Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculation for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one. A statistical test is required. Results We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with a special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion The exact binomial test is particularly adapted for small counts. For large counts, we advise to use the likelihood ratio test which is asymptotic but strongly correlated with the exact binomial test and very simple to use. PMID:17346349
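The exact binomial test described above can be illustrated by its core idea: conditionally on the total count of a motif in the two sequences, the count in the first sequence is binomial, with success probability fixed by the two expected counts under the null of equal exceptionality. A simplified sketch (the published test additionally handles overlapping occurrences and sequence composition):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of k successes out of n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def exact_binomial_pvalue(n1, n2, e1, e2):
    """Two-sided exact test comparing motif exceptionality in two sequences.

    n1, n2: observed motif counts; e1, e2: expected counts under the
    sequence models. Under H0 (equal exceptionality), n1 given n1 + n2
    is Binomial(n1 + n2, e1 / (e1 + e2)).
    """
    n = n1 + n2
    p0 = e1 / (e1 + e2)
    obs = binom_pmf(n1, n, p0)
    # two-sided p-value: total probability of outcomes no more likely
    # than the observed one
    p = sum(binom_pmf(k, n, p0) for k in range(n + 1)
            if binom_pmf(k, n, p0) <= obs + 1e-12)
    return min(1.0, p)
```

Equal counts with equal expectations give p = 1; a strongly unbalanced split (e.g. 10 vs 0 occurrences with equal expectations) gives a small two-sided p-value.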

  1. Comparative testing of pulse oximeter probes.

    PubMed

    van Oostrom, Johannes H; Melker, Richard J

    2004-05-01

    The testing of pulse oximeter probes is generally limited to the integrity of the electrical circuit and does not include the optical properties of the probes. Few pulse oximeter testers evaluate the accuracy of both the monitor and the probe. We designed a study to compare the accuracy of nonproprietary probes (OSS Medical) designed for use with Nellcor, Datex-Ohmeda, and Criticare pulse oximeter monitors with that of their corresponding proprietary probes by using a commercial off-the-shelf pulse oximeter tester (Index). The Index pulse oximeter tester does include testing of the optical properties of the pulse oximeter probes. The pulse oximeter tester was given a controlled input that simulated acute apnea. Desaturation curves were automatically recorded from the pulse oximeter monitors with a data-collection computer. Comparisons between equivalent proprietary and nonproprietary probes were performed. Data were analyzed by using univariate and multivariate general linear model analysis. Five OSS Medical probe models were statistically better than the equivalent proprietary probes. The remainder of the probes were statistically similar. Comparative and simulation studies can have significant advantages over human studies because they are cost-effective, evaluate equipment in a clinically relevant scenario, and pose no risk to patients, but they are limited by the realism of the simulation. We studied the performance of pulse oximeter probes in a simulated environment. Our results show significant differences between some probes that affect the accuracy of measurement.

  2. Stereotype threat? Effects of inquiring about test takers' gender on conceptual test performance in physics

    NASA Astrophysics Data System (ADS)

    Maries, Alexandru; Singh, Chandralekha

    2015-12-01

    It has been found that activation of a stereotype, for example by indicating one's gender before a test, typically alters performance in a way consistent with the stereotype, an effect called "stereotype threat." On a standardized conceptual physics assessment, we found that asking test takers to indicate their gender right before taking the test did not deteriorate performance compared to an equivalent group who did not provide gender information. Although a statistically significant gender gap was present on the standardized test whether or not students indicated their gender, no gender gap was observed on the multiple-choice final exam students took, which included both quantitative and conceptual questions on similar topics.

  3. Performance Comparison Between a Head-Worn Display System and a Head-Up Display for Low Visibility Commercial Operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Barnes, James R.; Williams, Steven P.; Jones, Denise R.; Harrison, Stephanie J.; Bailey, Randall E.

    2014-01-01

    Research, development, test, and evaluation of flight deck interface technologies is being conducted by NASA to proactively identify, develop, and mature tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System (NextGen). Under the Vehicle Systems Safety Technologies (VSST) project in the Aviation Safety Program, one specific area of research is the use of small Head-Worn Displays (HWDs) as an equivalent display to a Head-Up Display (HUD). Title 14 of the US Code of Federal Regulations (CFR) 91.175 describes a possible operational credit which can be obtained with airplane equipage of a HUD or an "equivalent" display combined with Enhanced Vision (EV). If successful, a HWD may provide the same safety and operational benefits as current HUD-equipped aircraft but for significantly more aircraft in which HUD installation is neither practical nor possible. A simulation experiment was conducted to evaluate whether the HWD, coupled with a head-tracker, can provide an equivalent display to a HUD. Comparative testing was performed in the Research Flight Deck (RFD) Cockpit Motion Facility (CMF) full-mission, motion-based simulator at NASA Langley. Twelve airline crews conducted approach and landing, taxi, and departure operations during low visibility operations (1000' Runway Visual Range (RVR), 300' RVR) at Memphis International Airport (Federal Aviation Administration (FAA) identifier: KMEM). The results showed that there were no statistical differences in the crews' performance in terms of touchdown and takeoff. Further, there were no statistical differences between the HUD and HWD in pilots' responses to questionnaires.

  4. Comment on "The effect of same-sex marriage laws on different-sex marriage: evidence from the Netherlands".

    PubMed

    Dinno, Alexis

    2014-12-01

    In the recent Demography article titled "The Effect of Same-Sex Marriage Laws on Different-Sex Marriage: Evidence From the Netherlands," Trandafir attempted to answer the question, Are rates of opposite-sex marriage affected by legal recognition of same-sex marriages? The results of his approach to statistical inference (looking for evidence of a difference in rates of opposite-sex marriage) provide an absence of evidence of such effects. However, the validity of his conclusion of no causal relationship between same-sex marriage laws and rates of opposite-sex marriage is threatened by the fact that Trandafir did not also look for equivalence in rates of opposite-sex marriage in order to provide evidence of an absence of such an effect. Equivalence tests in combination with difference tests are introduced and presented in this article as a more valid inferential approach to the substantive question Trandafir attempted to answer.
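Equivalence tests of the kind advocated here are commonly run as two one-sided tests (TOST): reject "the difference is at least as large as the margin" in both directions. A minimal normal-approximation sketch (not Dinno's specific procedure; the margins low/high must be pre-specified on substantive grounds):

```python
import math

def tost_equivalence(mean_diff, se, low, high):
    """Two one-sided tests (TOST) for equivalence of a difference.

    mean_diff  : observed difference (e.g. in marriage rates)
    se         : standard error of that difference
    (low, high): pre-specified equivalence margins, low < 0 < high
    Returns the TOST p-value; p < alpha supports equivalence.
    """
    def norm_sf(z):  # survival function of the standard normal
        return 0.5 * math.erfc(z / math.sqrt(2.0))
    p_lower = norm_sf((mean_diff - low) / se)   # H0: diff <= low
    p_upper = norm_sf((high - mean_diff) / se)  # H0: diff >= high
    return max(p_lower, p_upper)
```

A zero observed difference supports equivalence only when the margins are wide relative to the standard error; with narrow margins the same data are indeterminate, which is exactly the distinction the comment draws.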

  5. What can we learn from noise? — Mesoscopic nonequilibrium statistical physics —

    PubMed Central

    KOBAYASHI, Kensuke

    2016-01-01

    Mesoscopic systems — small electric circuits working in the quantum regime — offer us a unique experimental stage to explore quantum transport in a tunable and precise way. The purpose of this Review is to show how they can contribute to statistical physics. We introduce the significance of fluctuation, or equivalently noise, as noise measurement enables us to address the fundamental aspects of a physical system. The significance of the fluctuation theorem (FT) in statistical physics is noted. We explain what information can be deduced from the current noise measurement in mesoscopic systems. As an important application of the noise measurement to statistical physics, we describe our experimental work on the current and current noise in an electron interferometer, which is the first experimental test of FT in the quantum regime. Our attempt will shed new light on the research field of mesoscopic quantum statistical physics. PMID:27477456

  6. What can we learn from noise? - Mesoscopic nonequilibrium statistical physics.

    PubMed

    Kobayashi, Kensuke

    2016-01-01

    Mesoscopic systems - small electric circuits working in the quantum regime - offer us a unique experimental stage to explore quantum transport in a tunable and precise way. The purpose of this Review is to show how they can contribute to statistical physics. We introduce the significance of fluctuation, or equivalently noise, as noise measurement enables us to address the fundamental aspects of a physical system. The significance of the fluctuation theorem (FT) in statistical physics is noted. We explain what information can be deduced from the current noise measurement in mesoscopic systems. As an important application of the noise measurement to statistical physics, we describe our experimental work on the current and current noise in an electron interferometer, which is the first experimental test of FT in the quantum regime. Our attempt will shed new light on the research field of mesoscopic quantum statistical physics.

  7. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

    Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
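Horn's parallel analysis, discussed above, retains a component only if its sample eigenvalue exceeds what random data of the same dimensions would produce. A minimal sketch of the classic Monte Carlo procedure (the paper's point is that, for the first component, this is asymptotically a Tracy-Widom test):

```python
import numpy as np

def parallel_analysis(X, n_sims=200, quantile=0.95, seed=0):
    """Horn's parallel analysis for principal components (minimal sketch).

    Retains components whose sample covariance eigenvalues exceed the
    chosen quantile of eigenvalues from random normal data of the same
    n-by-p shape. Returns (number retained, observed eigenvalues,
    simulated thresholds), eigenvalues sorted in descending order.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]
    sim = np.empty((n_sims, p))
    for i in range(n_sims):
        R = rng.standard_normal((n, p))
        sim[i] = np.linalg.eigvalsh(np.cov(R, rowvar=False))[::-1]
    thresh = np.quantile(sim, quantile, axis=0)
    return int(np.sum(obs > thresh)), obs, thresh
```

With one strong common component in the data, the first observed eigenvalue clears the simulated threshold easily; the paper's caution applies to the higher-order comparisons, whose joint behavior this marginal procedure ignores.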

  8. Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.

    PubMed

    Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J

    2015-07-01

    Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. Copyright © 2015 by the Genetics Society of America.

  9. Filtration effects on ball bearing life and condition in a contaminated lubricant

    NASA Technical Reports Server (NTRS)

    Loewenthal, S. H.; Moyer, D. W.

    1978-01-01

    Ball bearings were fatigue tested with a noncontaminated lubricant and with a contaminated lubricant under four levels of filtration. The test filters had absolute particle removal ratings of 3, 30, 49, and 105 microns. Aircraft turbine engine contaminants were injected into the filter's supply line at a constant rate of 125 milligrams per bearing hour. Bearing life and running track condition generally improved with finer filtration. The experimental lives of 3 and 30 micron filter bearings were statistically equivalent, approaching those obtained with the noncontaminated lubricant bearings. Compared to these bearings, the lives of the 49 micron bearings were statistically lower. The 105 micron bearings experienced gross wear. The degree of surface distress, weight loss, and probable failure mode were dependent on filtration level, with finer filtration being clearly beneficial.

  10. An Evaluation of the Euroncap Crash Test Safety Ratings in the Real World

    PubMed Central

    Segui-Gomez, Maria; Lopez-Valdes, Francisco J.; Frampton, Richard

    2007-01-01

    We investigated whether the rating obtained in the EuroNCAP test procedures correlates with injury protection to vehicle occupants in real crashes using data in the UK Cooperative Crash Injury Study (CCIS) database from 1996 to 2005. Multivariate Poisson regression models were developed, using the Abbreviated Injury Scale (AIS) score by body region as the dependent variable and the EuroNCAP score for that particular body region, seat belt use, mass ratio and Equivalent Test Speed (ETS) as independent variables. Our models identified statistically significant relationships between injury severity and safety belt use, mass ratio and ETS. We could not identify any statistically significant relationships between the EuroNCAP body region scores and real injury outcome except for the protection to pelvis-femur-knee in frontal impacts where scoring “green” is significantly better than scoring “yellow” or “red”.

  11. Comparative statistical component analysis of transgenic, cyanophycin-producing potatoes in greenhouse and field trials.

    PubMed

    Schmidt, Kerstin; Schmidtke, Jörg; Mast, Yvonne; Waldvogel, Eva; Wohlleben, Wolfgang; Klemke, Friederike; Lockau, Wolfgang; Hausmann, Tina; Hühns, Maja; Broer, Inge

    2017-08-01

    Potatoes are a promising system for industrial production of the biopolymer cyanophycin as a second compound in addition to starch. To assess the efficiency in the field, we analysed the stability of the system, specifically its sensitivity to environmental factors. Field and greenhouse trials with transgenic potatoes (two independent events) were carried out for three years. The influence of environmental factors was measured and target compounds in the transgenic plants (cyanophycin, amino acids) were analysed for differences to control plants. Furthermore, non-target parameters (starch content, number, weight and size of tubers) were analysed for equivalence with control plants. The huge amount of data received was handled using modern statistical approaches to model the correlation between influencing environmental factors (year of cultivation, nitrogen fertilization, origin of plants, greenhouse or field cultivation) and key components (starch, amino acids, cyanophycin) and agronomic characteristics. General linear models were used for modelling, and standard effect sizes were applied to compare conventional and genetically modified plants. Altogether, the field trials prove that significant cyanophycin production is possible without reduction of starch content. Non-target compound composition seems to be equivalent under varying environmental conditions. Additionally, a quick test to measure cyanophycin content gives similar results compared to the extensive enzymatic test. This work facilitates the commercial cultivation of cyanophycin potatoes.

  12. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R2 of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R2 coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
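The Nash-Sutcliffe efficiency used in this comparison is one minus the ratio of residual error to the variance of the observations: 1 indicates a perfect fit, and 0 means the model does no better than predicting the observed mean. A minimal sketch:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency of simulated vs. observed values.

    NSE = 1 - SSE / SS_tot, where SSE is the sum of squared
    prediction errors and SS_tot is the total sum of squares of
    the observations about their mean.
    """
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot
```

A perfect simulation scores 1.0; simulating the observed mean at every time step scores exactly 0.0, which is the fixed reference point the abstract refers to.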

  13. The score statistic of the LD-lod analysis: detecting linkage adaptive to linkage disequilibrium.

    PubMed

    Huang, J; Jiang, Y

    2001-01-01

    We study the properties of a modified lod score method for testing linkage that incorporates linkage disequilibrium (LD-lod). By examination of its score statistic, we show that the LD-lod score method adaptively combines two sources of information: (a) the IBD sharing score which is informative for linkage regardless of the existence of LD and (b) the contrast between allele-specific IBD sharing scores which is informative for linkage only in the presence of LD. We also consider the connection between the LD-lod score method and the transmission-disequilibrium test (TDT) for triad data and the mean test for affected sib pair (ASP) data. We show that, for triad data, the recessive LD-lod test is asymptotically equivalent to the TDT; and for ASP data, it is an adaptive combination of the TDT and the ASP mean test. We demonstrate that the LD-lod score method has relatively good statistical efficiency in comparison with the ASP mean test and the TDT for a broad range of LD and the genetic models considered in this report. Therefore, the LD-lod score method is an interesting approach for detecting linkage when the extent of LD is unknown, such as in a genome-wide screen with a dense set of genetic markers. Copyright 2001 S. Karger AG, Basel
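The TDT referenced here is, in its basic form, a McNemar-type statistic over allele transmissions from heterozygous parents. A minimal sketch of that basic statistic (not the LD-lod score itself):

```python
def tdt_statistic(b, c):
    """Transmission-disequilibrium test statistic.

    b: number of heterozygous parents transmitting the candidate allele
    c: number of heterozygous parents not transmitting it
    Asymptotically chi-square with 1 df under the null of no linkage
    (or no association) at the marker.
    """
    if b + c == 0:
        raise ValueError("no informative (heterozygous) transmissions")
    return (b - c) ** 2 / (b + c)
```

Balanced transmissions give a statistic of 0; an excess of transmissions inflates it, e.g. 60 transmissions vs. 40 non-transmissions yields 4.0, significant at the 5% level for one degree of freedom.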

  14. Validation of Milliflex® Quantum for Bioburden Testing of Pharmaceutical Products.

    PubMed

    Gordon, Oliver; Goverde, Marcel; Staerk, Alexandra; Roesti, David

    2017-01-01

    This article reports the validation strategy used to demonstrate that the Milliflex ® Quantum yielded non-inferior results to the traditional bioburden method. It was validated according to USP <1223>, European Pharmacopoeia 5.1.6, and Parenteral Drug Association Technical Report No. 33 and comprised the validation parameters robustness, ruggedness, repeatability, specificity, limit of detection and quantification, accuracy, precision, linearity, range, and equivalence in routine operation. For the validation, a combination of pharmacopeial ATCC strains as well as a broad selection of in-house isolates were used. In-house isolates were used in stressed state. Results were statistically evaluated regarding the pharmacopeial acceptance criterion of ≥70% recovery compared to the traditional method. Post-hoc test power calculations verified the appropriateness of the used sample size to detect such a difference. Furthermore, equivalence tests verified non-inferiority of the rapid method as compared to the traditional method. In conclusion, the rapid bioburden on basis of the Milliflex ® Quantum was successfully validated as alternative method to the traditional bioburden test. LAY ABSTRACT: Pharmaceutical drug products must fulfill specified quality criteria regarding their microbial content in order to ensure patient safety. Drugs that are delivered into the body via injection, infusion, or implantation must be sterile (i.e., devoid of living microorganisms). Bioburden testing measures the levels of microbes present in the bulk solution of a drug before sterilization, and thus it provides important information for manufacturing a safe product. In general, bioburden testing has to be performed using the methods described in the pharmacopoeias (membrane filtration or plate count). These methods are well established and validated regarding their effectiveness; however, the incubation time required to visually identify microbial colonies is long. 
Thus, alternative methods that detect microbial contamination faster will improve control over the manufacturing process and speed up product release. Before alternative methods may be used, they must undergo a side-by-side comparison with pharmacopeial methods. In this comparison, referred to as validation, it must be shown in a statistically verified manner that the effectiveness of the alternative method is at least equivalent to that of the pharmacopeial methods. Here we describe the successful validation of an alternative bioburden testing method based on fluorescent staining of growing microorganisms using the Milliflex® Quantum system by MilliporeSigma. © PDA, Inc. 2017.

  15. Agreement Between VO2peak Predicted From PACER and One-Mile Run Time-Equated Laps.

    PubMed

    Saint-Maurice, Pedro F; Anderson, Katelin; Bai, Yang; Welk, Gregory J

    2016-12-01

This study examined the agreement between estimated peak oxygen consumption (VO2peak) obtained from the Progressive Aerobic Cardiovascular Endurance Run (PACER) fitness test and equated PACER laps derived from One-Mile Run time (MR). A sample of 680 participants (324 boys and 356 girls) in Grades 7 through 12 completed both the PACER and the MR assessments. MR time was converted to PACER laps (PACER-MEQ) using previously developed conversion algorithms. Agreement between PACER and PACER-MEQ VO2peak was examined using Pearson correlations, mean absolute percent error (MAPE), and equivalence testing procedures. Classification agreement based on health-related standards was examined using sensitivity, specificity, and Kappa statistics. Overall agreement between estimated VO2peak obtained from the PACER and PACER-MEQ was high in boys, r(324) = .79, R2 = .63, and moderate in girls, r(356) = .57, R2 = .33. The MAPE for estimates obtained from PACER-MEQ was 10.3% and estimates were deemed equivalent to the PACER (43.1 ± 6.9 mL/kg/min vs. 44.6 ± 0.3 mL/kg/min). Classification agreement as illustrated by sensitivity and specificity ranged from 20.4% to 90.2% and was higher for classifications in the Healthy Fitness Zone (HFZ). Kappa statistics ranged from .14 to .51 and were also higher for the HFZ. Equated PACER laps can be used to obtain equivalent estimates of PACER VO2peak in groups of adolescents, but some disparities can be found when students' scores are classified into the Needs Improvement Zone.
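    The MAPE agreement metric used in this record can be sketched in a few lines. This is an illustrative implementation, not the authors' code, and the paired VO2peak values below are made up for demonstration:

```python
def mape(reference, estimate):
    """Mean absolute percent error between paired measurements,
    expressed as a percentage of the reference values."""
    if len(reference) != len(estimate):
        raise ValueError("paired samples must have equal length")
    return 100.0 * sum(abs(e - r) / r for r, e in zip(reference, estimate)) / len(reference)

# hypothetical paired VO2peak estimates (mL/kg/min), for illustration only
pacer = [44.0, 46.5, 41.2, 48.0]
pacer_meq = [42.1, 45.0, 44.8, 46.3]
print(round(mape(pacer, pacer_meq), 1))
```

    In practice this per-pair error would be computed over the full sample and reported alongside the equivalence test, as in the abstract above.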

  16. A prognostic scoring system for arm exercise stress testing.

    PubMed

    Xie, Yan; Xian, Hong; Chandiramani, Pooja; Bainter, Emily; Wan, Leping; Martin, Wade H

    2016-01-01

Arm exercise stress testing may be an equivalent or better predictor of mortality outcome than pharmacological stress imaging for the ≥50% of patients unable to perform leg exercise. Thus, our objective was to develop an arm exercise ECG stress test scoring system, analogous to the Duke Treadmill Score, for predicting outcome in these individuals. In this retrospective observational cohort study, arm exercise ECG stress tests were performed in 443 consecutive veterans aged 64.1 (11.1) years (mean (SD)) between 1997 and 2002. From multivariate Cox models, arm exercise scores were developed for prediction of 5-year and 12-year all-cause and cardiovascular mortality and 5-year cardiovascular mortality or myocardial infarction (MI). Arm exercise capacity in resting metabolic equivalents (METs), 1 min heart rate recovery (HRR) and ST segment depression ≥1 mm were the stress test variables independently associated with all-cause and cardiovascular mortality by step-wise Cox analysis (all p<0.01). A score based on the relation HRR (bpm)+7.3×METs-10.5×ST depression (0=no; 1=yes) prognosticated 5-year cardiovascular mortality with a C-statistic of 0.81 before and 0.88 after adjustment for significant demographic and clinical covariates. Arm exercise scores for the other outcome end points yielded C-statistic values of 0.77-0.79 before and 0.82-0.86 after adjustment for significant covariates versus 0.64-0.72 for best fit pharmacological myocardial perfusion imaging models in a cohort of 1730 veterans who were evaluated over the same time period. Arm exercise scores, analogous to the Duke Treadmill Score, have good power for prediction of mortality or MI in patients who cannot perform leg exercise.
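    The score reported in this abstract (HRR + 7.3×METs − 10.5×ST depression) is simple enough to compute directly. A minimal sketch follows; the input values are illustrative and not taken from the study:

```python
def arm_exercise_score(hrr_bpm, mets, st_depression):
    """Prognostic arm-exercise score from the abstract:
    1-min heart rate recovery (bpm) + 7.3 * exercise capacity (METs)
    - 10.5 * ST depression flag (0 = no, 1 = ST depression >= 1 mm)."""
    if st_depression not in (0, 1):
        raise ValueError("st_depression must be 0 or 1")
    return hrr_bpm + 7.3 * mets - 10.5 * st_depression

# illustrative patient: HRR of 18 bpm, 4.5 METs, ST depression present
print(arm_exercise_score(18, 4.5, 1))
```

    Higher scores correspond to better prognosis, by analogy with the Duke Treadmill Score.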

  17. Filtration effects on ball bearing life and condition in a contaminated lubricant

    NASA Technical Reports Server (NTRS)

    Loewenthal, S. H.; Moyer, D. W.

    1978-01-01

    Ball bearings were fatigue tested with a noncontaminated MIL-L-23699 lubricant and with a contaminated MIL-L-23699 lubricant under four levels of filtration. The test filters had absolute particle removal ratings of 3, 30, 49, and 105 microns. Aircraft turbine engine contaminants were injected into the filter's supply line at a constant rate of 125 milligrams per bearing hour. Bearing life and running track condition generally improved with finer filtration. The experimental lives of 3- and 30-micron filter bearings were statistically equivalent, approaching those obtained with the noncontaminated lubricant bearings. Compared to these bearings, the lives of the 49-micron bearings were statistically lower. The 105-micron bearings experienced gross wear. The degree of surface distress, weight loss, and probable failure mode were dependent on filtration level, with finer filtration being clearly beneficial.

  18. Comparison of Internet-based and paper-based questionnaires in Taiwan using multisample invariance approach.

    PubMed

    Yu, Sen-Chi; Yu, Min-Ning

    2007-08-01

This study examines whether the Internet-based questionnaire is psychometrically equivalent to the paper-based questionnaire. A random sample of 2,400 teachers in Taiwan was divided into experimental and control groups. The experimental group was invited to complete the electronic form of the Chinese version of the Center for Epidemiologic Studies Depression Scale (CES-D) placed on the Internet, whereas the control group was invited to complete the paper-based CES-D, which they received by mail. The multisample invariance approach, derived from structural equation modeling (SEM), was applied to analyze the collected data. The analytical results show that the two groups have equivalent factor structures in the CES-D. That is, the items in the CES-D function equivalently in the two groups. A test of the equality of latent means was then performed. The latent means of "depressed mood," "positive affect," and "interpersonal problems" in the CES-D are not significantly different between the two groups. However, the difference in the "somatic symptoms" latent means between the two groups is statistically significant at alpha = 0.01. Nevertheless, the Cohen's d statistic indicates that this difference in latent means apparently does not correspond to a meaningful effect size in practice. Both CES-D questionnaires exhibit equal validity, reliability, and factor structures and exhibit little difference in latent means. Therefore, the Internet-based questionnaire represents a promising alternative to the paper-based questionnaire.
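    The abstract's distinction between statistical significance and practical effect size rests on Cohen's d. A minimal sketch of the pooled-SD version follows; the group summaries are hypothetical, chosen only to show that a difference can be significant in a large sample yet trivially small in d:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# hypothetical latent means for "somatic symptoms" in two large groups
print(round(cohens_d(10.4, 3.0, 800, 10.1, 3.0, 800), 3))
```

    With n = 800 per group, a mean difference of 0.3 points is easily significant, yet d = 0.1 is well below the conventional 0.2 threshold for even a small effect.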

  19. Comparing a single case to a control group - Applying linear mixed effects models to repeated measures data.

    PubMed

    Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus

    2015-10-01

In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy-coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension of this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference between two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
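    The stated equivalence between Crawford's modified t-test and the t-test on a dummy-coded regression predictor can be checked numerically. The pure-Python sketch below is illustrative (not the authors' code), and the case and control scores are invented:

```python
import math

def modified_t(case, controls):
    """Crawford-style modified t-test comparing one case to a small
    control sample (df = n - 1)."""
    n = len(controls)
    m = sum(controls) / n
    s = math.sqrt(sum((x - m) ** 2 for x in controls) / (n - 1))
    return (case - m) / (s * math.sqrt((n + 1) / n))

def dummy_regression_t(case, controls):
    """t statistic for the dummy predictor in y = b0 + b1*d, where
    d = 1 for the single case and 0 for every control participant."""
    y = list(controls) + [case]
    d = [0.0] * len(controls) + [1.0]
    n_total = len(y)
    # normal equations for the design matrix [1, d]
    s_d = sum(d)
    s_dd = sum(di * di for di in d)
    s_y = sum(y)
    s_dy = sum(di * yi for di, yi in zip(d, y))
    det = n_total * s_dd - s_d * s_d
    b0 = (s_dd * s_y - s_d * s_dy) / det
    b1 = (n_total * s_dy - s_d * s_y) / det
    sse = sum((yi - b0 - b1 * di) ** 2 for yi, di in zip(y, d))
    sigma2 = sse / (n_total - 2)              # residual variance
    se_b1 = math.sqrt(sigma2 * n_total / det)  # SE of the dummy coefficient
    return b1 / se_b1

controls = [10, 12, 11, 13, 9, 14, 10, 12]
case = 18
print(modified_t(case, controls), dummy_regression_t(case, controls))
```

    Both routes give the same t value (with df = n − 1), which is the equivalence the article builds on before generalizing to LMMs.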

  20. Transportability of Equivalence-Based Programmed Instruction: Efficacy and Efficiency in a College Classroom

    ERIC Educational Resources Information Center

    Fienup, Daniel M.; Critchfield, Thomas S.

    2011-01-01

    College students in a psychology research-methods course learned concepts related to inferential statistics and hypothesis decision making. One group received equivalence-based instruction on conditional discriminations that were expected to promote the emergence of many untaught, academically useful abilities (i.e., stimulus equivalence group). A…

  1. Teaching differential diagnosis to nurse practitioner students in a distance program.

    PubMed

    Colella, Christine L; Beery, Theresa A

    2014-08-01

    An interactive case study (ICS) is a novel way to enhance the teaching of differential diagnosis to distance learning nurse practitioner students. Distance education renders the use of many teaching strategies commonly used with face-to-face students difficult, if not impossible. To meet this new pedagogical dilemma and to provide excellence in education, the ICS was developed. Kolb's theory of experiential learning supported efforts to follow the utilization of the ICS. This study sought to determine whether learning outcomes for the distance learning students were equivalent to those of on-campus students who engaged in a live-patient encounter. Accuracy of differential diagnosis lists generated by onsite and online students was compared. Equivalency testing assessed clinical, rather than only statistical, significance in data from 291 students. The ICS responses from the distance learning and onsite students differed by 4.9%, which was within the a priori equivalence estimate of 10%. Narrative data supported the findings. Copyright 2014, SLACK Incorporated.

  2. Techniques for recognizing identity of several response functions from the data of visual inspection

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.

    1996-08-01

The purpose of this paper is to present some efficient techniques for recognizing from the observed data whether several response functions are identical to each other. For example, in an industrial setting the problem may be to determine whether the production coefficients established in a small-scale pilot study apply to each of several large-scale production facilities. The techniques proposed here combine sensor information from automated visual inspection of manufactured products which is carried out by means of pixel-by-pixel comparison of the sensed image of the product to be inspected with some reference pattern (or image). Let (a1, ..., am) be p-dimensional parameters associated with m response models of the same type. This study is concerned with the simultaneous comparison of a1, ..., am. A generalized maximum likelihood ratio (GMLR) test is derived for testing equality of these parameters, where each of the parameters represents a corresponding vector of regression coefficients. The GMLR test reduces to an equivalent test based on a statistic that has an F distribution. The main advantage of the test lies in its relative simplicity and the ease with which it can be applied. Another interesting test for the same problem is an application of Fisher's method of combining independent test statistics, which can be considered as a parallel procedure to the GMLR test. The combination of independent test statistics does not appear to have been used very much in applied statistics. There does, however, seem to be potential data analytic value in techniques for combining distributional assessments in relation to statistically independent samples which are of joint experimental relevance. In addition, a new iterated test for the problem defined above is presented. A rejection of the null hypothesis by this test provides some indication of why all the parameters are not equal.
A numerical example is discussed in the context of the proposed procedures for hypothesis testing.

  3. The use of statistical tools in field testing of putative effects of genetically modified plants on nontarget organisms

    PubMed Central

    Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F

    2013-01-01

To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTO's). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTO's. This review examines existing practices in GM plant field testing such as the way of randomization, replication, and pseudoreplication. Emphasis is placed on the importance of design features used for the field trials in which effects on NTO's are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed to decide on the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches – for example, analysis of variance (ANOVA) – are appropriate, for discontinuous data (counts) only generalized linear models (GLM) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and to assess whether such analyses were correctly applied.
We offer generic advice to risk assessors and applicants that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in field testing. PMID:24567836
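    Equivalence testing, which this review contrasts with difference testing, is commonly run as two one-sided tests (TOST). A large-sample z approximation can be sketched with only the standard library; the equivalence margin and summary numbers below are illustrative, not drawn from any trial:

```python
from statistics import NormalDist

def tost_z(mean_diff, se, margin, alpha=0.05):
    """Large-sample two one-sided tests (TOST) for equivalence.
    H0: |true difference| >= margin. Returns the larger of the two
    one-sided p-values; equivalence is declared if it is < alpha."""
    if margin <= 0 or se <= 0:
        raise ValueError("margin and se must be positive")
    z_lower = (mean_diff + margin) / se   # test against -margin
    z_upper = (mean_diff - margin) / se   # test against +margin
    p_lower = 1.0 - NormalDist().cdf(z_lower)
    p_upper = NormalDist().cdf(z_upper)
    return max(p_lower, p_upper)

# illustrative: observed GM-vs-control difference 0.2, SE 0.5, margin 1.5
p = tost_z(0.2, 0.5, 1.5)
print(p < 0.05)
```

    For count data, as the review notes, the difference and its standard error would come from a GLM fit rather than from a normal-theory comparison of means; the TOST decision rule itself is unchanged.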

  4. The use of statistical tools in field testing of putative effects of genetically modified plants on nontarget organisms.

    PubMed

    Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F

    2013-08-01

    To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTO's). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTO's. This review examines existing practices in GM plant field testing such as the way of randomization, replication, and pseudoreplication. Emphasis is placed on the importance of design features used for the field trials in which effects on NTO's are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed to decide on the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches - for example, analysis of variance (ANOVA) - are appropriate, for discontinuous data (counts) only generalized linear models (GLM) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and to assess whether such analyses were correctly applied. 
We offer generic advice to risk assessors and applicants that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in field testing.

  5. Examination of the equivalence of self-report survey-based paper-and-pencil and internet data collection methods.

    PubMed

    Weigold, Arne; Weigold, Ingrid K; Russell, Elizabeth J

    2013-03-01

    Self-report survey-based data collection is increasingly carried out using the Internet, as opposed to the traditional paper-and-pencil method. However, previous research on the equivalence of these methods has yielded inconsistent findings. This may be due to methodological and statistical issues present in much of the literature, such as nonequivalent samples in different conditions due to recruitment, participant self-selection to conditions, and data collection procedures, as well as incomplete or inappropriate statistical procedures for examining equivalence. We conducted 2 studies examining the equivalence of paper-and-pencil and Internet data collection that accounted for these issues. In both studies, we used measures of personality, social desirability, and computer self-efficacy, and, in Study 2, we used personal growth initiative to assess quantitative equivalence (i.e., mean equivalence), qualitative equivalence (i.e., internal consistency and intercorrelations), and auxiliary equivalence (i.e., response rates, missing data, completion time, and comfort completing questionnaires using paper-and-pencil and the Internet). Study 1 investigated the effects of completing surveys via paper-and-pencil or the Internet in both traditional (i.e., lab) and natural (i.e., take-home) settings. Results indicated equivalence across conditions, except for auxiliary equivalence aspects of missing data and completion time. Study 2 examined mailed paper-and-pencil and Internet surveys without contact between experimenter and participants. Results indicated equivalence between conditions, except for auxiliary equivalence aspects of response rate for providing an address and completion time. Overall, the findings show that paper-and-pencil and Internet data collection methods are generally equivalent, particularly for quantitative and qualitative equivalence, with nonequivalence only for some aspects of auxiliary equivalence. 
PsycINFO Database Record (c) 2013 APA, all rights reserved.

  6. Long-term strength of metals in complex stress state (a survey)

    NASA Astrophysics Data System (ADS)

    Lokoshchenko, A. M.

    2012-05-01

An analytic survey of experimental data and theoretical approaches characterizing the long-term strength of metals in complex stress state is given. In Sections 2 and 3, the results of plane stress tests (with opposite and equal signs of the nonzero principal stresses, respectively) are analyzed. In Section 4, the results of inhomogeneous stress tests (thick-walled tubes under the action of internal pressures and tensile forces) are considered. All known experimental data (35 test series) are analyzed by a criterion approach. An equivalent stress σe is introduced as a characteristic of the stress state. Attention is mainly paid to the dependence of σe on the principal stresses. Statistical methods are used to obtain an expression for σe, which can be used to study various types of the complex stress state. It is shown that for the long-term strength criterion one can use the power or power-fractional dependence of the time to rupture on the equivalent stress. The methods proposed to describe the test results give a good correspondence between the experimental and theoretical values of the time to rupture. In Section 5, the possibilities of complicating the expressions for σe by using additional material constants are considered.
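    The power-law form of the long-term strength criterion mentioned in this abstract can be illustrated in a few lines. The material constants below are purely hypothetical placeholders, not values from the survey:

```python
def time_to_rupture(sigma_e, A, n):
    """Power-law long-term strength criterion: t* = A * sigma_e**(-n),
    where sigma_e is the equivalent stress and A, n are material
    constants fitted to test data (illustrative values only)."""
    if sigma_e <= 0:
        raise ValueError("equivalent stress must be positive")
    return A * sigma_e ** (-n)

# with n = 4, doubling the equivalent stress cuts the predicted
# time to rupture by a factor of 2**4 = 16
ratio = time_to_rupture(100.0, 1e12, 4) / time_to_rupture(200.0, 1e12, 4)
print(ratio)
```

    The power-fractional variant discussed in the survey replaces this single power law with a ratio of such terms, but the fitting idea is the same.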

  7. Comparison of Clobetasol Propionate Generics Using Simplified in Vitro Bioequivalence Method for Topical Drug Products.

    PubMed

    Soares, Kelen Carine Costa; de Souza, Weidson Carlos; de Souza Texeira, Leonardo; da Cunha-Filho, Marcilio Sergio Soares; Gelfuso, Guilherme Martins; Gratieri, Tais

    2017-11-20

The aim of this paper is to propose a simple in vitro skin penetration experiment, in which the drug is extracted from the whole skin piece, as a valid test for formulation screening and optimization during the development process, equivalence assessment during quality control, or post-approval assessment after changes to the product. Twelve clobetasol propionate (CP) formulations (six creams and six ointments) from the local market were used as a model to challenge the proposed methodology in comparison to in vitro skin penetration following tape-stripping for drug extraction. To support the results, physicochemical tests for pH, viscosity, density and assay, as well as in vitro release were performed. Both protocols, extracting the drug from the skin using the tape-stripping technique or extracting from the full skin, were capable of differentiating CP formulations. Only one formulation did not present a statistical difference from the reference drug product in penetration tests, and only two other ointments presented release equivalent to the reference. The proposed protocol is straightforward and reproducible. Results suggest the bioinequivalence of the tested CP formulations, reinforcing the necessity of such evaluations. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  8. Accuracy of Orthognathic Surgical Outcomes Using 2- and 3-Dimensional Landmarks-The Case for Apples and Oranges?

    PubMed

    Borba, Alexandre Meireles; José da Silva, Everton; Fernandes da Silva, André Luis; Han, Michael D; da Graça Naclério-Homem, Maria; Miloro, Michael

    2018-01-12

To verify predicted versus obtained surgical movements in 2-dimensional (2D) and 3-dimensional (3D) measurements and compare the equivalence between these methods. A retrospective observational study of bimaxillary orthognathic surgeries was performed. Postoperative cone-beam computed tomographic (CBCT) scans were superimposed on preoperative scans and a lateral cephalometric radiograph was generated from each CBCT scan. After identification of the sella, nasion, and upper central incisor tip landmarks on 2D and 3D images, actual and planned movements were compared by cephalometric measurements. One-sample t test was used to statistically evaluate results, with expected mean discrepancy values ranging from 0 to 2 mm. Equivalence of 2D and 3D values was compared using paired t test. The final sample of 46 cases showed by 2D cephalometry that differences between actual and planned movements in the horizontal axis were statistically relevant for expected means of 0, 0.5, and 2 mm without relevance for expected means of 1 and 1.5 mm; vertical movements were statistically relevant for expected means of 0 and 0.5 mm without relevance for expected means of 1, 1.5, and 2 mm. For 3D cephalometry in the horizontal axis, there were statistically relevant differences for expected means of 0, 1.5, and 2 mm without relevance for expected means of 0.5 and 1 mm; vertical movements showed statistically relevant differences for expected means of 0, 0.5, 1.5, and 2 mm without relevance for the expected mean of 1 mm. Comparison of 2D and 3D values displayed statistical differences for the horizontal and vertical axes. Comparison of 2D and 3D surgical outcome assessments should be performed with caution because there seems to be a difference in acceptable levels of accuracy between these 2 methods of evaluation. Moreover, 3D accuracy studies should no longer rely on a 2-mm level of discrepancy but on a 1-mm level.
Copyright © 2018 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  9. Description and first application of a new technique to measure the gravitational mass of antihydrogen

    NASA Astrophysics Data System (ADS)

    Alpha Collaboration; Amole, C.; Ashkezari, M. D.; Baquero-Ruiz, M.; Bertsche, W.; Butler, E.; Capra, A.; Cesar, C. L.; Charlton, M.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayden, M. E.; Isaac, C. A.; Jonsell, S.; Kurchaninov, L.; Little, A.; Madsen, N.; McKenna, J. T. K.; Menary, S.; Napoli, S. C.; Nolan, P.; Olin, A.; Pusa, P.; Rasmussen, C. Ø.; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Zhmoginov, A. I.; Charman, A. E.

    2013-04-01

Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime.

  10. Description and first application of a new technique to measure the gravitational mass of antihydrogen

    PubMed Central

    Amole, C.; Ashkezari, M. D.; Baquero-Ruiz, M.; Bertsche, W.; Butler, E.; Capra, A.; Cesar, C. L.; Charlton, M.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayden, M. E.; Isaac, C. A.; Jonsell, S.; Kurchaninov, L.; Little, A.; Madsen, N.; McKenna, J. T. K.; Menary, S.; Napoli, S. C.; Nolan, P.; Olin, A.; Pusa, P.; Rasmussen, C. Ø; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Zhmoginov, A. I.; Charman, A. E.

    2013-01-01

    Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime. PMID:23653197

  11. Description and first application of a new technique to measure the gravitational mass of antihydrogen.

    PubMed

    Charman, A E; Amole, C; Ashkezari, M D; Baquero-Ruiz, M; Bertsche, W; Butler, E; Capra, A; Cesar, C L; Charlton, M; Eriksson, S; Fajans, J; Friesen, T; Fujiwara, M C; Gill, D R; Gutierrez, A; Hangst, J S; Hardy, W N; Hayden, M E; Isaac, C A; Jonsell, S; Kurchaninov, L; Little, A; Madsen, N; McKenna, J T K; Menary, S; Napoli, S C; Nolan, P; Olin, A; Pusa, P; Rasmussen, C Ø; Robicheaux, F; Sarid, E; Silveira, D M; So, C; Thompson, R I; van der Werf, D P; Wurtele, J S; Zhmoginov, A I

    2013-01-01

    Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime.

  12. Alignments of parity even/odd-only multipoles in CMB

    NASA Astrophysics Data System (ADS)

    Aluri, Pavan K.; Ralston, John P.; Weltman, Amanda

    2017-12-01

We compare the statistics of parity even and odd multipoles of the cosmic microwave background (CMB) sky from Planck full mission temperature measurements. An excess power in odd multipoles compared to even multipoles has previously been found on large angular scales. Motivated by this apparent parity asymmetry, we evaluate directional statistics associated with even compared to odd multipoles, along with their significances. Primary tools are the Power tensor and Alignment tensor statistics. We limit our analysis to the first 60 multipoles, i.e., l = [2, 61]. We find no evidence for statistically unusual alignments of even parity multipoles. More than one independent statistic finds evidence for alignments of anisotropy axes of odd multipoles, with a significance equivalent to ∼2σ or more. The robustness of alignment axes is tested by making Galactic cuts and varying the multipole range. Very interestingly, the region spanned by the (a)symmetry axes is found to broadly contain other parity (a)symmetry axes previously observed in the literature.

  13. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.

  14. The effect of statistical noise on IMRT plan quality and convergence for MC-based and MC-correction-based optimized treatment plans.

    PubMed

    Siebers, Jeffrey V

    2008-04-04

    Monte Carlo (MC) is rarely used for IMRT plan optimization outside of research centres due to the extensive computational resources or long computation times required to complete the process. Time can be reduced by degrading the statistical precision of the MC dose calculation used within the optimization loop; however, this eventually introduces optimization convergence errors (OCEs). This study determines the statistical noise levels that can be tolerated during MC-IMRT optimization under the condition that the optimized plan has OCEs <100 cGy (1.5% of the prescription dose). Seven-field prostate IMRT treatment plans for 10 prostate patients are used in this study. Pre-optimization is performed for deliverable beams with a pencil-beam (PB) dose algorithm. Further deliverable-based optimization proceeds using: (1) MC-based optimization, where dose is recomputed with MC after each intensity update, or (2) a once-corrected (OC) MC-hybrid optimization, where an MC dose computation defines beam-by-beam dose correction matrices that are used during a PB-based optimization. Optimizations are performed with nominal per-beam MC statistical precisions of 2, 5, 8, 10, 15, and 20%. Following optimizer convergence, beams are recomputed with MC using 2% per-beam nominal statistical precision, and the 2 PTV and 10 OAR dose indices used in the optimization objective function are tallied. For both the MC-optimization and OC-optimization methods, statistical equivalence tests found that OCEs are less than 1.5% of the prescription dose for plans optimized with nominal statistical uncertainties of up to 10% per beam. The achieved statistical uncertainty in the patient for the 10% per-beam simulations from the combination of the 7 beams is ~3% with respect to maximum dose for voxels with D>0.5D(max). The MC dose computation time for the OC-optimization is only 6.2 minutes on a single 3 GHz processor, with results clinically equivalent to high-precision MC computations.

  15. Parenchymal texture analysis in digital mammography: robust texture feature identification and equivalence across devices.

    PubMed

    Keller, Brad M; Oustimov, Andrew; Wang, Yan; Chen, Jinbo; Acciavatti, Raymond J; Zheng, Yuanjie; Ray, Shonket; Gee, James C; Maidment, Andrew D A; Kontos, Despina

    2015-04-01

    An analytical framework is presented for evaluating the equivalence of parenchymal texture features across different full-field digital mammography (FFDM) systems using a physical breast phantom. Phantom images (FOR PROCESSING) are acquired from three FFDM systems using their automated exposure control setting. A panel of texture features, including gray-level histogram, co-occurrence, run length, and structural descriptors, are extracted. To identify features that are robust across imaging systems, a series of equivalence tests are performed on the feature distributions, in which the extent of their intersystem variation is compared to their intrasystem variation via the Hodges-Lehmann test statistic. Overall, histogram and structural features tend to be most robust across all systems, and certain features, such as edge enhancement, tend to be more robust to intergenerational differences between detectors of a single vendor than to intervendor differences. Texture features extracted from larger regions of interest (i.e., [Formula: see text]) and with a larger offset length (i.e., [Formula: see text]), when applicable, also appear to be more robust across imaging systems. This framework and observations from our experiments may benefit applications utilizing mammographic texture analysis on images acquired in multivendor settings, such as in multicenter studies of computer-aided detection and breast cancer risk assessment.
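
    The Hodges-Lehmann statistic used above to compare intersystem with intrasystem variation is, at its core, the median of all pairwise differences between two samples. A minimal sketch follows, with invented feature values standing in for the phantom measurements; the paper's full equivalence-testing framework is not reproduced.

```python
import numpy as np

def hodges_lehmann(x, y):
    """Hodges-Lehmann location-shift estimate: median of all x_i - y_j."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.median(np.subtract.outer(x, y)))

# Toy "texture feature" distributions from two hypothetical FFDM systems
sys_a = [1.00, 1.20, 1.10, 1.30, 1.15]
sys_b = [1.05, 1.25, 1.10, 1.35, 1.20]
print(hodges_lehmann(sys_a, sys_b))
```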

  16. The effects of reminiscence in promoting mental health of Taiwanese elderly.

    PubMed

    Wang, Jing-Jy; Hsu, Ya-Chuan; Cheng, Su-Fen

    2005-01-01

    This study examined the effects of reminiscence on four selected mental health indicators: depressive symptoms, mood status, self-esteem, and self-health perception of elderly people residing in community care facilities and at home. A longitudinal quasi-experimental design was conducted, using two equivalent groups for pre-post testing and purposive sampling with random assignment. Each subject was administered pre- and post-tests at a 4-month interval, but subjects in the experimental group underwent weekly intervention. Ninety-four subjects completed the study, with 48 in the control group and 46 in the experimental group. In the experimental group, a statistically significant difference (p = 0.041) was found between the pre- and post-tests on the dependent variable, depressive symptoms. However, no statistically significant change was found in subjects' level of mood status, self-esteem, or self-health perception after the intervention in the experimental group, although slight improvement was observed. Reminiscence not only alleviates depression in the elderly but also empowers nurses to become proactive in their daily nursing care activities.


  17. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    PubMed

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the security and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method accounts for the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density corrections in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue-Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; then non-parametric statistical tests were performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's rank and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density correction methods yielded lower doses than PBC by, on average, −5 ± 4.4 (SD) for MB and −4.7 ± 5 (SD) for ETAR. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods. This paper illustrates and justifies the use of statistical tests and graphical representations for dosimetric comparisons in radiotherapy. The statistical analysis shows the significance of dose differences resulting from two or more techniques in radiotherapy.
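
    The testing pipeline described above (normality and variance checks, then non-parametric paired comparisons and rank correlations) maps directly onto SciPy. The sketch below uses synthetic monitor-unit doses with an approximately 5% downward shift, loosely mimicking the reported MB and ETAR differences; it is illustrative only, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical monitor-unit doses for the same 62 fields under three algorithms
pbc = rng.normal(200.0, 20.0, 62)             # reference: no density correction
mb = pbc * 0.950 + rng.normal(0.0, 2.0, 62)   # ~5% lower on average
etar = pbc * 0.953 + rng.normal(0.0, 2.0, 62) # ~4.7% lower on average

# 1. Normality (Shapiro-Wilk) and homogeneity of variance (Levene)
p_norm = stats.shapiro(pbc).pvalue
p_var = stats.levene(pbc, mb, etar).pvalue

# 2. Global paired comparison of the three methods (Friedman)
p_friedman = stats.friedmanchisquare(pbc, mb, etar).pvalue

# 3. Post-hoc paired comparison against the reference (Wilcoxon signed-rank)
p_wilcoxon = stats.wilcoxon(pbc, mb).pvalue

# 4. Correlation between methods (Spearman and Kendall)
rho, _ = stats.spearmanr(pbc, mb)
tau, _ = stats.kendalltau(pbc, mb)
print(p_friedman, p_wilcoxon, rho, tau)
```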

  18. A note on sample size calculation for mean comparisons based on noncentral t-statistics.

    PubMed

    Chow, Shein-Chung; Shao, Jun; Wang, Hansheng

    2002-11-01

    One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
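
    A sample size search of the kind the note derives in closed form can be sketched numerically: for a two-sided two-sample t-test of equality, iterate n until the exact power computed from the noncentral t distribution reaches the target. This is a generic illustration, not the authors' formulas or tables.

```python
from scipy import stats

def n_per_group_equality(delta, sigma, alpha=0.05, power=0.80, n_max=10_000):
    """Smallest n per group for a two-sided two-sample t-test of equality,
    using the exact noncentral t distribution of the test statistic."""
    for n in range(2, n_max):
        df = 2 * n - 2
        nc = (delta / sigma) * (n / 2) ** 0.5       # noncentrality parameter
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        achieved = (1 - stats.nct.cdf(t_crit, df, nc)
                    + stats.nct.cdf(-t_crit, df, nc))
        if achieved >= power:
            return n
    raise ValueError("n_max too small")

# Medium standardized effect (delta/sigma = 0.5), 80% power, alpha = 0.05
print(n_per_group_equality(delta=0.5, sigma=1.0))
```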

  19. A U-statistics based approach to sample size planning of two-arm trials with discrete outcome criterion aiming to establish either superiority or noninferiority.

    PubMed

    Wellek, Stefan

    2017-02-28

    In current practice, the most frequently applied approach to the handling of ties in the Mann-Whitney-Wilcoxon (MWW) test is based on the conditional distribution of the sum of mid-ranks, given the observed pattern of ties. Starting from this conditional version of the testing procedure, a sample size formula was derived and investigated by Zhao et al. (Stat Med 2008). In contrast, the approach we pursue here is a nonconditional one exploiting explicit representations for the variances of, and the covariance between, the two U-statistics estimators involved in the Mann-Whitney form of the test statistic. The accuracy of both ways of approximating the sample sizes required for attaining a prespecified level of power in the MWW test for superiority with arbitrarily tied data is comparatively evaluated by means of simulation. The key qualitative conclusions to be drawn from these numerical comparisons are as follows: With the sample sizes calculated by means of the respective formula, both versions of the test maintain the level and the prespecified power with about the same degree of accuracy. Despite this equivalence in terms of accuracy, the sample size estimates obtained by means of the new formula are in many cases markedly lower than those calculated for the conditional test. Perhaps a still more important advantage of the nonconditional approach based on U-statistics is that it can also be adopted for noninferiority trials. Copyright © 2016 John Wiley & Sons, Ltd.
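
    The conditional mid-rank version of the MWW test with tie correction, which serves as the baseline for the comparison above, is available directly in SciPy. The toy ordinal data below are invented; the sample size formulas themselves are not reproduced here.

```python
from scipy import stats

# Ordinal (tied) outcomes in two arms of a hypothetical two-arm trial
x = [1, 2, 2, 3]
y = [2, 3, 3, 4]

# Mid-ranks with tie correction via the asymptotic normal approximation
res = stats.mannwhitneyu(x, y, alternative="two-sided", method="asymptotic")
print(res.statistic, res.pvalue)
```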

  20. Relative Impact of Incorporating Pharmacokinetics on ...

    EPA Pesticide Factsheets

    The use of high-throughput in vitro assays has been proposed to play a significant role in the future of toxicity testing. In this study, rat hepatic metabolic clearance and plasma protein binding were measured for 59 ToxCast phase I chemicals. Computational in vitro-to-in vivo extrapolation was used to estimate the daily dose in a rat, called the oral equivalent dose, which would result in steady-state in vivo blood concentrations equivalent to the AC50 or lowest effective concentration (LEC) across more than 600 ToxCast phase I in vitro assays. Statistical classification analysis was performed using either oral equivalent doses or unadjusted AC50/LEC values for the in vitro assays to predict the in vivo effects of the 59 chemicals. Adjusting the in vitro assays for pharmacokinetics did not improve the ability to predict in vivo effects as either a discrete (yes or no) response or a low effect level (LEL) on a continuous dose scale. Interestingly, a comparison of the in vitro assay with the lowest oral equivalent dose with the in vivo endpoint with the lowest LEL suggested that the lowest oral equivalent dose may provide a conservative estimate of the point of departure for a chemical in a dose-response assessment. Furthermore, comparing the oral equivalent doses for the in vitro assays with the in vivo dose range that resulted in adverse effects identified more coincident in vitro assays across chemicals than expected by chance, suggesting that the approach ma

  1. Mycoprotein reduces glycemia and insulinemia when taken with an oral-glucose-tolerance test.

    PubMed

    Turnbull, W H; Ward, T

    1995-01-01

    This study investigated the effects of mycoprotein, a food produced by the continuous fermentation of Fusarium graminearum (Schwabe), on acute glycemia and insulinemia in normal healthy individuals. Subjects participated in two single-meal study periods in a crossover design. After an overnight fast, subjects were given milkshakes containing mycoprotein or a control substance, which were isoenergetic and nutrient balanced. Each milkshake contained 75 g carbohydrate, equivalent to a standard World Health Organization oral-glucose-tolerance test. Blood samples were taken fasting and at 30, 60, 90, and 120 min postprandially for the measurement of serum glucose and insulin. Glycemia was reduced postmeal after mycoprotein compared with the control; the reduction was statistically significant at 60 min (13% reduction). Insulinemia was likewise reduced after mycoprotein compared with the control, with statistically significant reductions at 30 min (19%) and 60 min (36%) postmeal. These results may be significant in the dietary treatment of diabetes.

  2. High School Equivalency Testing in Arizona. Forum: Responding to Changes in High School Equivalency Testing

    ERIC Educational Resources Information Center

    Hart, Sheryl

    2015-01-01

    For decades, the state of Arizona has used the General Educational Development (GED) Test to award the Arizona High School Equivalency (HSE) Diploma, as the GED Test was the only test available, recognized and accepted in the United States as the measure by which adults could demonstrate the educational attainment equivalent to high school…

  3. Transversus abdominis plane block in renal allotransplant recipients: A retrospective chart review.

    PubMed

    Gopwani, S R; Rosenblatt, M A

    2016-01-01

    The efficacy of the transversus abdominis plane (TAP) block appears to vary considerably, depending on the surgical procedure and block technique. This study aims to add to the existing literature and provide a clearer understanding of the TAP block's role as a postoperative analgesic technique, specifically in renal allotransplant recipients. A retrospective chart review was conducted by querying the intraoperative electronic medical record system of a 1200-bed tertiary academic hospital over a 5-month period and reviewing anesthetic techniques as well as postoperative morphine-equivalent consumption. Fifty renal allotransplant recipients were identified, 13 of whom received TAP blocks while 37 received no regional analgesic technique. All blocks were performed under ultrasound guidance, with 20 mL of 0.25% bupivacaine injected in the transversus abdominis fascial plane under direct visualization. The primary outcome was postoperative morphine-equivalent consumption. Morphine consumption was compared with the two-tailed Mann-Whitney U-test. Continuous variables of patient baseline characteristics were analyzed with the unpaired t-test and categorical variables with the Fisher exact test. P < 0.05 was considered statistically significant. A statistically significant decrease in cumulative morphine consumption was found in the group that received the TAP block at 6 h (2.46 mg vs. 7.27 mg, P = 0.0010), 12 h (3.88 mg vs. 10.20 mg, P = 0.0005), 24 h (6.96 mg vs. 14.75 mg, P = 0.0013), and 48 h (11 mg vs. 20.13 mg, P = 0.0092). The TAP block is a beneficial postoperative analgesic, opiate-sparing technique in renal allotransplant recipients.

  4. [Effectiveness of the Military Mental Health Promotion Program].

    PubMed

    Woo, Chung Hee; Kim, Sun Ah

    2014-12-01

    This study was done to evaluate the Military Mental Health Promotion Program. The program was an email based cognitive behavioral intervention. The research design was a quasi-experimental study with a non-equivalent control group pretest-posttest design. Participants were 32 soldiers who agreed to participate in the program. Data were collected at three different times from January 2012 to March 2012; pre-test, post-test, and a one-month follow-up test. The data were statistically analyzed using SPSS 18.0. The effectiveness of the program was tested by repeated measures ANOVA. The first hypothesis that the level of depression in the experimental group who participated in the program would decrease compared to the control group was not supported in that the difference in group-time interaction was not statistically significant (F=2.19, p=.121). The second and third hypothesis related to anxiety and self-esteem were supported in group-time interaction, respectively (F=7.41, p=.001, F=11.67, p<.001). Results indicate that the program is effective in improving soldiers' mental health status in areas of anxiety and self-esteem.

  5. Force Relaxation Characteristics of Medium Force Orthodontic Latex Elastics: A Pilot Study

    PubMed Central

    Fernandes, Daniel J.; Abrahão, Gisele M.; Elias, Carlos N.; Mendes, Alvaro M.

    2011-01-01

    To evaluate force relaxation of latex elastics of different brands and diameters subjected to static tensile testing in an apparatus designed to simulate the oral environment, samples of 5 elastics each from American Orthodontics (AO), Tp, and Morelli Orthodontics (Mo) of equivalent medium force (3/16, 1/4, and 5/16 inch sizes) were tested. Forces were read after 1-, 3-, 6-, 12-, and 24-hour periods in an Emic testing machine with a 30 mm/min cross-head speed and a 20 N load cell. Two-way ANOVA and Bonferroni tests were used to identify statistical significance. There were statistically significant differences among manufacturers at all observation intervals (P < 0.0001). The relationships among loads at the 24-hour time period were as follows: Morelli > AO > Tp for the 3/16, 1/4, and 5/16 elastics. The force decay pattern showed a notable drop-off in force until 3 hours, a slight increase in some groups from 3-6 hours, and a more homogeneous force pattern over 6-24 hours. PMID:21991478

  6. A new principle for the standardization of long paragraphs for reading speed analysis.

    PubMed

    Radner, Wolfgang; Radner, Stephan; Diendorfer, Gabriela

    2016-01-01

    To investigate the reliability, validity, and statistical comparability of long paragraphs that were developed to be equivalent in construction and difficulty. Seven long paragraphs were developed that were equal in syntax, morphology, and number and position of words (111), with the same number of syllables (179) and number of characters (660). For validity analyses, the paragraphs were compared with the mean reading speed of a set of seven sentence optotypes of the RADNER Reading Charts (mean of 7 × 14 = 98 words read). Reliability analyses were performed by calculating Cronbach's alpha and the corrected total item correlation. Sixty participants (aged 20-77 years) read the paragraphs and the sentences (distance: 40 cm; font: Times New Roman, 12 pt). Test items were presented randomly; reading time was measured with a stopwatch. Reliability analysis yielded a Cronbach's alpha of 0.988. When the long paragraphs were compared pairwise, significant differences were found in 13 of the 21 pairs (p < 0.05). In two sequences of three paragraphs each and in eight pairs of paragraphs, the paragraphs did not differ significantly, and these paragraph combinations are therefore suitable for comparative research studies. The mean reading speed was 173.34 ± 24.01 words per minute (wpm) for the long paragraphs and 198.26 ± 28.60 wpm for the sentence optotypes. The maximum difference in reading speed was 5.55% for the long paragraphs and 2.95% for the short sentence optotypes. The correlation between long paragraphs and sentence optotypes was high (r = 0.9243). Despite good reliability and equivalence in construction and degree of difficulty, statistically significant differences in reading speed can occur between long paragraphs. Since statistical significance should depend only on the persons tested, either standardizing long paragraphs for statistical equality of reading speed measurements or increasing the number of presented paragraphs is recommended for comparative investigations.
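
    Cronbach's alpha, the reliability measure reported above, can be computed directly from a persons-by-items score matrix. The reading times below are invented for illustration and are not the study's data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a persons x items score matrix."""
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical reading times (s) of 4 readers on 3 "equivalent" paragraphs
times = [[38.1, 39.0, 38.5],
         [45.2, 44.8, 46.0],
         [33.0, 34.1, 33.5],
         [50.3, 49.7, 50.9]]
print(round(cronbach_alpha(times), 3))
```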

  7. Assessing the P-wave attenuation and phase velocity characteristics of fractured media based on creep and relaxation tests

    NASA Astrophysics Data System (ADS)

    Milani, Marco; Germán Rubino, J.; Müller, Tobias M.; Quintal, Beatriz; Holliger, Klaus

    2014-05-01

    Fractures are present in most geological formations and they tend to dominate not only their mechanical but also, and in particular, their hydraulic properties. For these reasons, the detection and characterization of fractures are of great interest in several fields of Earth sciences. Seismic attenuation has been recognized as a key attribute for this purpose, as both laboratory and field experiments indicate that the presence of fractures typically produces significant energy dissipation and that this attribute tends to increase with increasing fracture density. This energy loss is generally considered to be primarily due to wave-induced pressure diffusion between the fractures and the embedding porous matrix. That is, due to the strong compressibility contrast between these two domains, the propagation of seismic waves can generate a strong fluid pressure gradient and associated pressure diffusion, which leads to fluid flow and in turn results in frictional energy dissipation. Numerical simulations based on Biot's poroelastic wave equations are computationally very expensive. Alternative approaches consist of performing numerical relaxation or creep tests on representative elementary volumes (REV) of the considered medium. These tests are typically based on Biot's consolidation equations. Assuming that the heterogeneous poroelastic medium can be replaced by an effective, homogeneous viscoelastic solid, these numerical creep and relaxation tests allow for computing the equivalent seismic P-wave attenuation and phase velocity. From a practical point of view, an REV is typically characterized by the smallest volume for which rock physical properties are statistically stationary and representative of the probed medium in its entirety. A more general definition in the context of wavefield attributes is to consider an REV as the smallest volume over which the P-wave attenuation and phase velocity dispersion are independent of the applied boundary conditions. That is, the corresponding results obtained from creep and relaxation tests must be equivalent. For most analyses of media characterized by patchy saturation or double-porosity-type structures, these two definitions are equivalent. It is, however, not clear whether this equivalence remains true in the presence of strong material contrasts such as those prevailing in fractured rocks. In this work, we explore this question for periodically fractured media. To this end, we build a medium composed of infinite replicas of a unit volume containing one fracture. This unit volume coincides with the smallest possible volume that is statistically representative of the whole. Then, we perform several creep and relaxation tests on samples composed of an increasing number of these unit volumes. We find that the wavefield signatures determined from relaxation tests are independent of the number of unit volumes. Conversely, the P-wave attenuation and phase velocity characteristics inferred from creep tests are different and vary with the number of unit volumes considered. Interestingly, the creep test results converge to those of the relaxation tests as the number of unit volumes increases. These findings are expected to have direct implications for corresponding laboratory measurements as well as for our understanding of seismic wave propagation in fractured media.

  8. Use of Statistical Heuristics in Everyday Inductive Reasoning.

    ERIC Educational Resources Information Center

    Nisbett, Richard E.; And Others

    1983-01-01

    In everyday reasoning, people use statistical heuristics (judgmental tools that are rough intuitive equivalents of statistical principles). Use of statistical heuristics is more likely when (1) sampling is clear, (2) the role of chance is clear, (3) statistical reasoning is normative for the event, or (4) the subject has had training in…

  9. Inference of median difference based on the Box-Cox model in randomized clinical trials.

    PubMed

    Maruo, K; Isogawa, N; Gosho, M

    2015-05-10

    In randomized clinical trials, many medical and biological measurements are not normally distributed and are often skewed. The Box-Cox transformation is a powerful procedure for comparing two treatment groups for skewed continuous variables in terms of a statistical test. However, it is difficult to directly estimate and interpret the location difference between the two groups on the original scale of the measurement. We propose a helpful method that infers the difference of the treatment effect on the original scale in a more easily interpretable form. We also provide statistical analysis packages that consistently include an estimate of the treatment effect, covariance adjustments, standard errors, and statistical hypothesis tests. The simulation study that focuses on randomized parallel group clinical trials with two treatment groups indicates that the performance of the proposed method is equivalent to or better than that of the existing non-parametric approaches in terms of the type-I error rate and power. We illustrate our method with cluster of differentiation 4 data in an acquired immune deficiency syndrome clinical trial. Copyright © 2015 John Wiley & Sons, Ltd.
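
    The general idea, comparing groups on a Box-Cox-transformed scale and then back-transforming to obtain an interpretable location (median) difference on the original scale, can be sketched with SciPy. This is a simplified illustration with simulated skewed data, not the authors' covariance-adjusted inference procedure or their analysis packages.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(1)
# Skewed outcomes in two hypothetical treatment groups
a = rng.lognormal(mean=1.0, sigma=0.5, size=100)
b = rng.lognormal(mean=1.3, sigma=0.5, size=100)

# Fit a common Box-Cox transformation, then compare on the transformed scale
pooled = np.concatenate([a, b])
_, lam = stats.boxcox(pooled)
ta, tb = stats.boxcox(a, lam), stats.boxcox(b, lam)
t_stat, p = stats.ttest_ind(ta, tb)

# Back-transform the transformed-scale means: for a monotone transformation
# of ~normal data, this approximates the group medians on the original scale
med_a, med_b = inv_boxcox(ta.mean(), lam), inv_boxcox(tb.mean(), lam)
print(p, med_b - med_a)
```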

  10. A Retrospective Analysis of Hemostatic Techniques in Primary Total Knee Arthroplasty: Traditional Electrocautery, Bipolar Sealer, and Argon Beam Coagulation.

    PubMed

    Rosenthal, Brett D; Haughom, Bryan D; Levine, Brett R

    2016-01-01

    In this retrospective cohort study of 280 primary total knee arthroplasties, clinical outcomes relevant to hemostasis were compared by electrocautery type: traditional electrocautery (TE), bipolar sealer (BS), and argon beam coagulation (ABC). Age, sex, and preoperative diagnosis were not significantly different among the TE, BS, and ABC cohorts. The 3 hemostasis systems were statistically equivalent with respect to estimated blood loss. Wound drainage during the first 48 hours after surgery was equivalent between the BS and ABC cohorts but less for the TE cohort. Transfusion requirements were not significantly different among the cohorts. The 3 hemostasis systems were statistically equivalent with respect to mean change in hemoglobin level during the early postoperative period (levels were measured on postoperative day 1 and on discharge). As BS and ABC are clinically equivalent to TE, their increased cost may not be justified.

  11. On the Equivalence of a Likelihood Ratio of Drasgow, Levine, and Zickar (1996) and the Statistic Based on the Neyman-Pearson Lemma of Belov (2016).

    PubMed

    Sinharay, Sandip

    2017-03-01

    Levine and Drasgow (1988) suggested an approach based on the Neyman-Pearson lemma to detect examinees whose response patterns are "aberrant" due to cheating, language issues, and so on. Belov (2016) used the approach of Levine and Drasgow (1988) to suggest a statistic based on the Neyman-Pearson Lemma (SBNPL) to detect item preknowledge when the investigator knows which items are compromised. This brief report proves that the SBNPL of Belov (2016) is equivalent to a statistic suggested for the same purpose by Drasgow, Levine, and Zickar 20 years ago.

  12. A Prospective, Matched Comparison Study of SUV Measurements From Time-of-Flight Versus Non-Time-of-Flight PET/CT Scanners.

    PubMed

    Thompson, Holly M; Minamimoto, Ryogo; Jamali, Mehran; Barkhodari, Amir; von Eyben, Rie; Iagaru, Andrei

    2016-07-01

    As quantitative 18F-FDG PET numbers and the pooling of results from different PET/CT scanners become more influential in the management of patients, it becomes imperative that we fully interrogate differences between scanners to understand the degree of scanner bias on the statistical power of studies. Participants with a body mass index (BMI) greater than 25, scheduled on a time-of-flight (TOF)-capable PET/CT scanner, had a consecutive scan on a non-TOF-capable PET/CT scanner, and vice versa. SUVmean in various tissues and SUVmax of malignant lesions were measured from both scans, matched to each subject. Data were analyzed using a mixed-effects model, and statistical significance was determined using equivalence testing, with P < 0.05 considered significant. Equivalence was established in all baseline organs, except the cerebellum, matched per patient between scanner types. Mixed-effects analysis of lesions, repeated between scan types and matched per patient, demonstrated good concordance between scanner types. Patients could be scanned on either a TOF- or non-TOF-capable PET/CT scanner without clinical compromise to quantitative SUV measurements.
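
    Equivalence testing of paired measurements of the sort used here is commonly carried out as two one-sided tests (TOST) against a prespecified margin. The sketch below is a generic paired TOST on invented SUV values with an arbitrary margin of 0.3; it is not the study's mixed-effects analysis.

```python
import numpy as np
from scipy import stats

def tost_paired(x, y, margin):
    """Two one-sided tests (TOST) for equivalence of paired measurements.

    Equivalence is declared at level alpha if both one-sided p-values fall
    below alpha, i.e. the mean difference lies within +/- margin.
    Returns the larger of the two one-sided p-values.
    """
    d = np.asarray(x, float) - np.asarray(y, float)
    p_lower = stats.ttest_1samp(d, -margin, alternative="greater").pvalue
    p_upper = stats.ttest_1samp(d, margin, alternative="less").pvalue
    return max(p_lower, p_upper)

# Hypothetical liver SUVmean from TOF vs non-TOF scans of the same patients
tof = [2.1, 2.4, 1.9, 2.2, 2.0, 2.3, 2.1, 2.5]
non_tof = [2.0, 2.5, 1.9, 2.1, 2.1, 2.2, 2.2, 2.4]
print(tost_paired(tof, non_tof, margin=0.3) < 0.05)
```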

  13. Can Propensity Score Analysis Approximate Randomized Experiments Using Pretest and Demographic Information in Pre-K Intervention Research?

    PubMed

    Dong, Nianbo; Lipsey, Mark W

    2017-01-01

    It is unclear whether propensity score analysis (PSA) based on pretest and demographic covariates will meet the ignorability assumption for replicating the results of randomized experiments. This study applies within-study comparisons to assess whether pre-Kindergarten (pre-K) treatment effects on achievement outcomes estimated using PSA based on a pretest and demographic covariates can approximate those found in a randomized experiment. Data: Four studies with samples of pre-K children each provided data on two math achievement outcome measures with baseline pretests and child demographic variables that included race, gender, age, language spoken at home, and mother's highest education. Research design and data analysis: A randomized study of a pre-K math curriculum provided benchmark estimates of effects on achievement measures. Comparison samples from other pre-K studies were then substituted for the original randomized control, and the effects were reestimated using PSA. The correspondence was evaluated using multiple criteria. The effect estimates using PSA were in the same direction as the benchmark estimates, had similar but not identical statistical significance, and did not differ from the benchmarks at statistically significant levels. However, the magnitude of the effect sizes differed and displayed both absolute and relative bias larger than required to show statistical equivalence with formal tests, although those results were not definitive because of the limited statistical power. We conclude that treatment effect estimates based on a single pretest and demographic covariates in PSA correspond to those from a randomized experiment on the most general criteria for equivalence.

  14. Improved score statistics for meta-analysis in single-variant and gene-level association studies.

    PubMed

    Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo

    2018-06-01

    Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with various case-control ratios. Here, we investigate the power loss problem of the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods that perform equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In simulated gene-level association studies under unbalanced settings, our method recovered up to 85% of the power loss caused by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.
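
    The basic fixed-effect combination of score statistics that such meta-analyses build on can be sketched compactly: sum the per-study score statistics and their variances, then refer the squared ratio to a chi-square distribution. This generic sketch omits the unbalanced-design and population-stratification corrections that are the paper's actual contribution; the numbers are invented.

```python
import numpy as np
from scipy import stats

def meta_score_test(scores, variances):
    """Fixed-effect combination of per-study score statistics U_k with
    variances V_k: the combined chi-square statistic is (sum U_k)^2 / (sum V_k),
    referred to a chi-square distribution with 1 degree of freedom."""
    u = float(np.sum(scores))
    v = float(np.sum(variances))
    chi2 = u ** 2 / v
    return chi2, stats.chi2.sf(chi2, df=1)

# Hypothetical per-study score statistics and variances for one variant
chi2, p = meta_score_test([4.1, 3.2, 5.0], [6.0, 5.5, 7.2])
print(round(chi2, 2), round(p, 4))
```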

  15. Relationship of Attention Deficit Hyperactivity Disorder and Postconcussion Recovery in Youth Athletes.

    PubMed

    Mautner, Kenneth; Sussman, Walter I; Axtman, Matthew; Al-Farsi, Yahya; Al-Adawi, Samir

    2015-07-01

To investigate whether attention deficit hyperactivity disorder (ADHD) influences postconcussion recovery, as measured by computerized neurocognitive testing. This was a retrospective case-control study. Computer laboratories across 10 high schools in the greater Atlanta, Georgia area. Immediate postconcussion assessment and cognitive testing (ImPACT) scores of 70 athletes with a self-reported diagnosis of ADHD who sustained a sport-related concussion were compared with those of a randomly selected age-matched control group. ImPACT scores over a 5-year interval were reviewed for inclusion. Postconcussion recovery was defined as a return to equivalent baseline neurocognitive scores on the ImPACT battery and a concussion symptom score of ≤7. Athletes with ADHD had on average a longer time to recovery than the control group (16.5 days compared with 13.5 days), although the difference was not statistically significant. The number of previous concussions did not have any effect on the rate of recovery in either the ADHD or the control group. In addition, baseline neurocognitive testing did not statistically differ between the 2 groups, except in verbal memory. Although the difference was not statistically significant, youth athletes with ADHD took on average 3 days longer to return to baseline neurocognitive testing than a control group without ADHD. Youth athletes with ADHD may have a marginally prolonged recovery as indexed by neurocognitive testing, and this should be considered when prognosticating time to recovery in this subset of student athletes.

  16. Permutational distribution of the log-rank statistic under random censorship with applications to carcinogenicity assays.

    PubMed

    Heimann, G; Neuhaus, G

    1998-03-01

    In the random censorship model, the log-rank test is often used for comparing a control group with different dose groups. If the number of tumors is small, so-called exact methods are often applied for computing critical values from a permutational distribution. Two of these exact methods are discussed and shown to be incorrect. The correct permutational distribution is derived and studied with respect to its behavior under unequal censoring in the light of recent results proving that the permutational version and the unconditional version of the log-rank test are asymptotically equivalent even under unequal censoring. The log-rank test is studied by simulations of a realistic scenario from a bioassay with small numbers of tumors.
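The core idea of the permutational approach discussed here — computing critical values from a permutational distribution — can be sketched generically: hold the observed data fixed and re-evaluate the statistic under every reassignment of group labels. The statistic below is a simple mean difference standing in for the log-rank statistic (which additionally accounts for censoring); the data are invented and everything is illustrative:

```python
import itertools
import numpy as np

def permutation_pvalue(values, labels, stat):
    """Exact permutational two-sided p-value: enumerate every relabeling
    of group membership and count statistics at least as extreme as the
    observed one. Full enumeration is feasible when counts are small,
    as with rare tumors in a carcinogenicity bioassay."""
    labels = np.asarray(labels, bool)
    observed = stat(values, labels)
    n, k = len(labels), int(labels.sum())
    extreme = total = 0
    for idx in itertools.combinations(range(n), k):
        perm = np.zeros(n, bool)
        perm[list(idx)] = True
        if abs(stat(values, perm)) >= abs(observed) - 1e-12:
            extreme += 1
        total += 1
    return extreme / total

# Toy stand-in statistic: difference in group means.
mean_diff = lambda v, g: v[g].mean() - v[~g].mean()
vals = np.array([1.0, 2.0, 3.0, 8.0, 9.0, 10.0])
grp = np.array([0, 0, 0, 1, 1, 1], bool)
p = permutation_pvalue(vals, grp, mean_diff)  # 2 of C(6,3)=20 relabelings are as extreme
```

With the maximally separated toy data, only the observed split and its mirror reach the observed |difference|, giving p = 2/20 = 0.1; the paper's point is that for the log-rank case the *correct* conditional distribution must be derived with care under unequal censoring.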

  17. [Interlaboratory Study on Evaporation Residue Test for Food Contact Products (Report 1)].

    PubMed

    Ohno, Hiroyuki; Mutsuga, Motoh; Abe, Tomoyuki; Abe, Yutaka; Amano, Homare; Ishihara, Kinuyo; Ohsaka, Ikue; Ohno, Haruka; Ohno, Yuichiro; Ozaki, Asako; Kakihara, Yoshiteru; Kobayashi, Hisashi; Sakuragi, Hiroshi; Shibata, Hiroshi; Shirono, Katsuhiro; Sekido, Haruko; Takasaka, Noriko; Takenaka, Yu; Tajima, Yoshiyasu; Tanaka, Aoi; Tanaka, Hideyuki; Tonooka, Hiroyuki; Nakanishi, Toru; Nomura, Chie; Haneishi, Nahoko; Hayakawa, Masato; Miura, Toshihiko; Yamaguchi, Miku; Watanabe, Kazunari; Sato, Kyoko

    2018-01-01

    An interlaboratory study was performed to evaluate the equivalence between an official method and a modified method of evaporation residue test using three food-simulating solvents (water, 4% acetic acid and 20% ethanol), based on the Japanese Food Sanitation Law for food contact products. Twenty-three laboratories participated, and tested the evaporation residues of nine test solutions as blind duplicates. For evaporation, a water bath was used in the official method, and a hot plate in the modified method. In most laboratories, the test solutions were heated until just prior to evaporation to dryness, and then allowed to dry under residual heat. Statistical analysis revealed that there was no significant difference between the two methods, regardless of the heating equipment used. Accordingly, the modified method provides performance equal to the official method, and is available as an alternative method.

  18. Toward "Constructing" the Concept of Statistical Power: An Optical Analogy.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

This paper presents a visual analogy that may be used by instructors to teach the concept of statistical power in statistics courses. Statistical power is mathematically defined as the probability of rejecting a null hypothesis when that null hypothesis is false or, equivalently, the probability of detecting a relationship when it exists. The analogy…

  19. Developing Statistical Knowledge for Teaching during Design-Based Research

    ERIC Educational Resources Information Center

    Groth, Randall E.

    2017-01-01

    Statistical knowledge for teaching is not precisely equivalent to statistics subject matter knowledge. Teachers must know how to make statistics understandable to others as well as understand the subject matter themselves. This dual demand on teachers calls for the development of viable teacher education models. This paper offers one such model,…

  20. 33 CFR 159.19 - Testing equivalency.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Testing equivalency. 159.19 Section 159.19 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) POLLUTION MARINE SANITATION DEVICES Certification Procedures § 159.19 Testing equivalency. (a) If a test...

  1. A minimum version of log-rank test for testing the existence of cancer cure using relative survival data.

    PubMed

    Yu, Binbing

    2012-01-01

Cancer survival is one of the most important measures of the effectiveness of treatment and early diagnosis. The ultimate goal of cancer research and patient care is the cure of cancer. As cancer treatments progress, cure becomes a reality for many cancers if patients are diagnosed early and receive effective treatment. If a cure does exist for a certain type of cancer, it is useful to estimate the time of cure. For cancers that impose excess risk of mortality, it is informative to understand the difference in survival between cancer patients and the general cancer-free population. In population-based cancer survival studies, relative survival is the standard measure of excess mortality due to cancer. Cure is achieved when the survival of cancer patients is equivalent to that of the general population. This definition of cure is usually called the statistical cure, which is an important measure of the burden due to cancer. In this paper, a minimum version of the log-rank test is proposed to test the equivalence of cancer patients' survival to that of the general population using relative survival data. Performance of the proposed test is evaluated by simulation. Relative survival data from population-based cancer registries in the SEER Program are used to examine patients' survival after diagnosis for various major cancer sites. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Bees Algorithm for Construction of Multiple Test Forms in E-Testing

    ERIC Educational Resources Information Center

    Songmuang, Pokpong; Ueno, Maomi

    2011-01-01

    The purpose of this research is to automatically construct multiple equivalent test forms that have equivalent qualities indicated by test information functions based on item response theory. There has been a trade-off in previous studies between the computational costs and the equivalent qualities of test forms. To alleviate this problem, we…

  3. 40 CFR 1066.805 - Road-load power, test weight, and inertia weight class determination.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (a) Simulate a vehicle's test weight on the dynamometer using the appropriate equivalent test weight shown in Table 1 of this section. Equivalent test weights are established according to each vehicle's... weight class corresponding to each equivalent test weight; the inertia weight class allows for grouping...

  4. Validation of a defibrillation lead ventricular volume measurement compared to three-dimensional echocardiography.

    PubMed

    Haines, David E; Wong, Wilson; Canby, Robert; Jewell, Coty; Houmsse, Mahmoud; Pederson, David; Sugeng, Lissa; Porterfield, John; Kottam, Anil; Pearce, John; Valvano, Jon; Michalek, Joel; Trevino, Aron; Sagar, Sandeep; Feldman, Marc D

    2017-10-01

There is increasing evidence that using frequent invasive measures of pressure in patients with heart failure results in improved outcomes compared to traditional measures. Admittance, a measure of volume derived from preexisting defibrillation leads, is proposed as a new technique to monitor cardiac hemodynamics in patients with an implantable defibrillator. The purpose of this study was to evaluate the accuracy of a new ventricular volume sensor (VVS, CardioVol) compared with 3-dimensional echocardiography (echo) in patients with an implantable defibrillator. Twenty-two patients referred for generator replacement had their defibrillation lead attached to VVS to determine the level of agreement with a volume measurement standard (echo). Two opposite hemodynamic challenges were sequentially applied to the heart (overdrive pacing and dobutamine administration) to determine whether real changes in hemodynamics could be reliably and repeatedly assessed with VVS. Equivalence of end-diastolic volume (EDV) and stroke volume (SV) determined by both methods was also assessed. EDV and SV were compared using VVS and echo. VVS tracked expected physiologic trends. EDV was modulated -10% by overdrive pacing (14 mL). SV was modulated -13.7% during overdrive pacing (-6 mL) and increased over baseline +14.6% (+8 mL) with dobutamine. VVS and echo mean EDVs were found statistically equivalent, with a margin of equivalence of 13.8 mL (P <.05). Likewise, mean SVs were found statistically equivalent with a margin of equivalence of 15.8 mL (P <.05). VVS provides an accurate method for ventricular volume assessment using chronically implanted defibrillator leads and is statistically equivalent to echo determination of mean EDV and SV. Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  5. Automated refraction is stable 1 week after uncomplicated cataract surgery.

    PubMed

    Ostri, Christoffer; Holfort, Stig K; Fich, Marianne S; Riise, Per

    2018-03-01

To compare automated refraction 1 week and 1 month after uncomplicated cataract surgery. In this prospective cohort study, we recruited patients over a 2-month period and included consecutive patients scheduled for bilateral small-incision phacoemulsification cataract surgery. The exclusion criteria were (i) corneal and/or retinal pathology that could lead to automated refraction miscalculation and (ii) surgery complications. Automated refraction was measured 1 week and 1 month after surgery. Ninety-five patients met the inclusion and exclusion criteria and completed follow-up. The mean refractive shift in spherical equivalent was -0.02 dioptre (D) between 1 week and 1 month after surgery and was not statistically significant (p = 0.78, paired t-test). The magnitude of refractive shift in either the myopic or hyperopic direction was not correlated with age, preoperative corneal astigmatism, axial length or phacoemulsification energy used during surgery (p > 0.05 for all variables, regression analysis). The refractive target was missed by 1.0 D or more in 11 (12%) patients. In this subgroup, the mean refractive shift in spherical equivalent was 0.49 D between 1 week and 1 month after surgery, with a trend towards statistical significance (p = 0.07, paired t-test). There was no difference in age, preoperative corneal astigmatism, axial length or phacoemulsification energy used during surgery compared to the remainder of the patients (p > 0.05 for all variables, unpaired t-test). Automated refraction is stable 1 week after uncomplicated cataract surgery, but there is a trend towards instability if the refractive target is missed by 1.0 D or more. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  6. [Carbon monoxide tests in a steady state. Uptake and transfer capacity, normal values and lower limits].

    PubMed

    Ramonatxo, M; Préfaut, C; Guerrero, H; Moutou, H; Bansard, X; Chardon, G

    1982-01-01

The aim of this study was to identify the data that best explain the variations of different tests using carbon monoxide as a tracer gas (total and partial fractional uptake coefficients and transfer capacity), and to establish mean values and lower limits of normal for these tests. Multivariate statistical analysis was used; in the first stage, a relationship was sought between the fractional uptake coefficients (partial and total) and other parameters, comparing subjects and data. In the second stage, the comparison was refined by eliminating the least useful data, seeking, despite a small loss of material, to reveal the most important relationships, linear or otherwise. The fractional uptake coefficients varied according to sex; in addition, the variation of the partial alveolar-expired fractional uptake equivalent (DuACO) was largely a function of respiratory rate and tidal volume. The alveolar-arterial partial fractional uptake equivalent (DuaCO) depended more on respiratory frequency and age. Finally, the total fractional uptake coefficient (DuCO) and the transfer capacity corrected per liter of ventilation (TLCO/V) were functions of these same parameters. The last stage of this work, taking account of the statistical observations consistent with these physiological hypotheses, was a search for a better way of approaching the laws linking the collected data to the fractional uptake coefficients. The lower limits of normal were defined arbitrarily, separating the 5% of subjects deviating most strongly from the mean. As a result, the ratio between the lower limit of normal and the theoretical mean value was 90% for the partial and total fractional uptake coefficients and 70% for the transfer capacity corrected per liter of ventilation.

  7. Statistical comparison of the pediatric versus adult IKDC subjective knee evaluation form in adolescents.

    PubMed

    Oak, Sameer R; O'Rourke, Colin; Strnad, Greg; Andrish, Jack T; Parker, Richard D; Saluan, Paul; Jones, Morgan H; Stegmeier, Nicole A; Spindler, Kurt P

    2015-09-01

The International Knee Documentation Committee (IKDC) Subjective Knee Evaluation Form is a patient-reported outcome with adult (1998) and pediatric (2011) versions validated at different ages. Prior longitudinal studies of patients aged 13 to 17 years who tore their anterior cruciate ligament (ACL) have used the only available adult IKDC, whereas currently the pediatric IKDC is the accepted form of choice. This study compared the adult and pediatric IKDC forms and tested whether the differences were clinically significant. The hypothesis was that the pediatric and adult IKDC questionnaires would show no clinically significant differences in score when completed by patients aged 13 to 17 years. Cohort study (diagnosis); Level of evidence, 2. A total of 100 participants aged 13 to 17 years with knee injuries were split into 2 groups using simple randomization. One group answered the adult IKDC form first and then the pediatric form. The second group answered the pediatric IKDC form first and then the adult form. A 10-minute break was given between form administrations to prevent rote repetition of answers. Study design was based on established methods for comparing 2 forms of patient-reported outcomes. A 5-point threshold for clinical significance was set below previously published minimum clinically important differences for the adult IKDC. Paired t tests were used to test both differences and equivalence between scores. Ordinary least-squares models were used to predict adult scores from pediatric scores and vice versa. Comparison between adult and pediatric IKDC scores showed a statistically significant difference of 1.5 points; however, the 95% CI (0.3-2.6) fell below the threshold of 5 points set for clinical significance. Further equivalence testing showed the 95% CI (0.5-2.4) between adult and pediatric scores to be within the defined 5-point equivalence region. 
The scores were highly correlated, with a linear relationship (R² = 92%). There was no clinically significant difference between the pediatric and adult IKDC form scores in adolescents aged 13 to 17 years. This result allows use of whichever form is most practical for long-term tracking of patients. A simple linear equation can convert one form's score into the other's. If the adult questionnaire is used at this age, it can be used consistently during follow-up. © 2015 The Author(s).
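The two-pronged analysis in this record — a paired t test for difference alongside an equivalence check against a ±5-point region — can be sketched with the two one-sided tests (TOST) procedure. A minimal sketch on synthetic scores; the distributions, the ~1.5-point offset, and the noise level are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy import stats

def paired_tost(x, y, margin):
    """Two one-sided tests (TOST) for equivalence of paired scores.

    Equivalence is declared at level alpha when both one-sided p-values
    fall below alpha, i.e. the (1 - 2*alpha) CI for the mean difference
    lies inside (-margin, +margin).
    """
    d = np.asarray(x, float) - np.asarray(y, float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    p_lower = stats.t.sf((d.mean() + margin) / se, n - 1)   # H0: diff <= -margin
    p_upper = stats.t.cdf((d.mean() - margin) / se, n - 1)  # H0: diff >= +margin
    return max(p_lower, p_upper)

# Synthetic paired scores with a small systematic offset (~1.5 points).
rng = np.random.default_rng(0)
pediatric = rng.normal(70.0, 15.0, size=100)
adult = pediatric - 1.5 + rng.normal(0.0, 4.0, size=100)

p_difference = stats.ttest_rel(adult, pediatric).pvalue     # test of difference
p_equivalence = paired_tost(adult, pediatric, margin=5.0)   # 5-point region
```

As in the abstract, a small offset can be statistically detectable as a difference while still testing as equivalent within the pre-specified 5-point clinical margin — the two questions are distinct.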

  8. Power and sample size evaluation for the Cochran-Mantel-Haenszel mean score (Wilcoxon rank sum) test and the Cochran-Armitage test for trend.

    PubMed

    Lachin, John M

    2011-11-10

The power of a chi-square test, and thus the required sample size, are a function of the noncentrality parameter that can be obtained as the limiting expectation of the test statistic under an alternative hypothesis specification. Herein, we apply this principle to derive simple expressions for two tests that are commonly applied to discrete ordinal data. The Wilcoxon rank sum test for the equality of distributions in two groups is algebraically equivalent to the Mann-Whitney test. The Kruskal-Wallis test applies to multiple groups. These tests are equivalent to a Cochran-Mantel-Haenszel mean score test using rank scores for a set of C discrete categories. Although various authors have assessed the power function of the Wilcoxon and Mann-Whitney tests, herein it is shown that the power of these tests with discrete observations, that is, with tied ranks, is readily provided by the power function of the corresponding Cochran-Mantel-Haenszel mean score test for two and R > 2 groups. These expressions yield results virtually identical to those derived previously for rank scores and also apply to other score functions. The Cochran-Armitage test for trend assesses whether there is a monotonically increasing or decreasing trend in the proportions with a positive outcome or response over the C ordered categories of an ordinal independent variable, for example, dose. Herein, it is shown that the power of the test is a function of the slope of the response probabilities over the ordinal scores assigned to the groups, which yields simple expressions for the power of the test. Copyright © 2011 John Wiley & Sons, Ltd.
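The noncentrality-based power calculation described here can be sketched for the Cochran-Armitage trend test. The expression below is one common large-sample approximation (noncentrality as the null-variance-standardized squared slope of the response probabilities over the group scores, scaled by total N); it is illustrative and not necessarily the exact formulation derived in the paper:

```python
import numpy as np
from scipy import stats

def ca_trend_power(probs, scores, fractions, n_total, alpha=0.05):
    """Approximate power of the 1-df Cochran-Armitage trend test.

    probs: response probability per group under the alternative
    scores: ordinal scores assigned to the groups (e.g., dose levels)
    fractions: fraction of the total sample allocated to each group
    """
    p = np.asarray(probs, float)
    x = np.asarray(scores, float)
    f = np.asarray(fractions, float)
    xbar = np.sum(f * x)
    pbar = np.sum(f * p)
    # Noncentrality: squared score-weighted slope over the null variance.
    lam = n_total * np.sum(f * (x - xbar) * p) ** 2 / (
        pbar * (1.0 - pbar) * np.sum(f * (x - xbar) ** 2))
    crit = stats.chi2.ppf(1.0 - alpha, df=1)
    return stats.ncx2.sf(crit, df=1, nc=lam)  # noncentral chi-square tail

# Four equal dose groups with response rising linearly from 10% to 25%:
power_400 = ca_trend_power([0.10, 0.15, 0.20, 0.25], [0, 1, 2, 3], [0.25] * 4, 400)
power_800 = ca_trend_power([0.10, 0.15, 0.20, 0.25], [0, 1, 2, 3], [0.25] * 4, 800)
```

Doubling the sample size raises the noncentrality proportionally, which is exactly the lever the paper's simple expressions expose for sample-size planning.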

  9. On the assessment of the added value of new predictive biomarkers.

    PubMed

    Chen, Weijie; Samuelson, Frank W; Gallas, Brandon D; Kang, Le; Sahiner, Berkman; Petrick, Nicholas

    2013-07-29

    The surge in biomarker development calls for research on statistical evaluation methodology to rigorously assess emerging biomarkers and classification models. Recently, several authors reported the puzzling observation that, in assessing the added value of new biomarkers to existing ones in a logistic regression model, statistical significance of new predictor variables does not necessarily translate into a statistically significant increase in the area under the ROC curve (AUC). Vickers et al. concluded that this inconsistency is because AUC "has vastly inferior statistical properties," i.e., it is extremely conservative. This statement is based on simulations that misuse the DeLong et al. method. Our purpose is to provide a fair comparison of the likelihood ratio (LR) test and the Wald test versus diagnostic accuracy (AUC) tests. We present a test to compare ideal AUCs of nested linear discriminant functions via an F test. We compare it with the LR test and the Wald test for the logistic regression model. The null hypotheses of these three tests are equivalent; however, the F test is an exact test whereas the LR test and the Wald test are asymptotic tests. Our simulation shows that the F test has the nominal type I error even with a small sample size. Our results also indicate that the LR test and the Wald test have inflated type I errors when the sample size is small, while the type I error converges to the nominal value asymptotically with increasing sample size as expected. We further show that the DeLong et al. method tests a different hypothesis and has the nominal type I error when it is used within its designed scope. Finally, we summarize the pros and cons of all four methods we consider in this paper. We show that there is nothing inherently less powerful or disagreeable about ROC analysis for showing the usefulness of new biomarkers or characterizing the performance of classification models. 
Each statistical method for assessing biomarkers and classification models has its own strengths and weaknesses. Investigators need to choose methods based on the assessment purpose, the biomarker development phase at which the assessment is being performed, the available patient data, and the validity of assumptions behind the methodologies.

  10. A statistical estimation of Snow Water Equivalent coupling ground data and MODIS images

    NASA Astrophysics Data System (ADS)

    Bavera, D.; Bocchiola, D.; de Michele, C.

    2007-12-01

The Snow Water Equivalent (SWE) is an important component of the hydrologic balance of mountain basins and snow-fed areas in general. The total cumulated snow water equivalent at the end of the accumulation season represents the water available at melt. Here, a statistical methodology to estimate the Snow Water Equivalent on April 1st is developed by coupling ground data (snow depth and snow density measurements) and MODIS images. The methodology is applied to the Mallero river basin (about 320 km²), located in the Central Alps, northern Italy, where 11 snow gauges and numerous scattered snow density measurements are available. The application covers the 7 years from 2001 to 2007. The analysis identified some problems in the MODIS information due to cloud cover and misclassification caused by orographic shadow. The study is performed in the framework of the AWARE (A tool for monitoring and forecasting Available WAter REsource in mountain environment) EU project, a STREP project in the VI F.P., GMES Initiative.

  11. Patellar Tendon Repair Augmentation With a Knotless Suture Anchor Internal Brace: A Biomechanical Cadaveric Study.

    PubMed

    Rothfeld, Alex; Pawlak, Amanda; Liebler, Stephenie A H; Morris, Michael; Paci, James M

    2018-04-01

    Patellar tendon repair with braided polyethylene suture alone is subject to knot slippage and failure. Several techniques to augment the primary repair have been described. Purpose/Hypothesis: The purpose was to evaluate a novel patellar tendon repair technique augmented with a knotless suture anchor internal brace with suture tape (SAIB). The hypothesis was that this technique would be biomechanically superior to a nonaugmented repair and equivalent to a standard augmentation with an 18-gauge steel wire. Controlled laboratory study. Midsubstance patellar tendon tears were created in 32 human cadaveric knees. Two comparison groups were created. Group 1 compared #2 supersuture repair without augmentation to #2 supersuture repair with SAIB augmentation. Group 2 compared #2 supersuture repair with an 18-gauge stainless steel cerclage wire augmentation to #2 supersuture repair with SAIB augmentation. The specimens were potted and biomechanically loaded on a materials testing machine. Yield load, maximum load, mode of failure, plastic displacement, elastic displacement, and total displacement were calculated for each sample. Standard statistical analysis was performed. There was a statistically significant increase in the mean ± SD yield load and maximum load in the SAIB augmentation group compared with supersuture alone (mean yield load: 646 ± 202 N vs 229 ± 60 N; mean maximum load: 868 ± 162 N vs 365 ± 54 N; P < .001). Group 2 showed no statistically significant differences between the augmented repairs (mean yield load: 495 ± 213 N vs 566 ± 172 N; P = .476; mean maximum load: 737 ± 210 N vs 697 ± 130 N; P = .721). Patellar tendon repair augmented with SAIB is biomechanically superior to repair without augmentation and is equivalent to repair with augmentation with an 18-gauge stainless steel cerclage wire. This novel patellar tendon repair augmentation is equivalent to standard 18-gauge wire augmentation at time zero. 
It does not require a second surgery for removal, and it is biomechanically superior to primary repair alone.

  12. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    PubMed Central

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2015-01-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. 
Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
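The record's preferred analysis — fitting a lognormal reference model to area-equivalent diameters and judging parameter uncertainty by relative standard errors — can be sketched with scipy. The synthetic sample below merely mimics the reported ~27.6 nm interlaboratory mean; it is invented for illustration and is not RM8012 data:

```python
import numpy as np
from scipy import stats

# Synthetic area-equivalent diameters (nm): a lognormal population that
# roughly mimics a ~27.6 nm mean with a narrow spread (assumed values).
rng = np.random.default_rng(42)
diameters = rng.lognormal(mean=np.log(27.6), sigma=0.09, size=500)

# Fit the lognormal reference model; location pinned at 0, as for sizes.
shape, loc, scale = stats.lognorm.fit(diameters, floc=0)
geo_mean = scale               # exp(mu): geometric mean diameter
geo_sd = np.exp(shape)         # exp(sigma): geometric standard deviation
arith_mean = stats.lognorm.mean(shape, loc=loc, scale=scale)

# Relative standard error of the fitted mu via the usual sqrt(n) scaling;
# the study found RSEs for the fitted spread far larger than for the mean.
rse_mu = (shape / np.sqrt(len(diameters))) / np.log(scale)
```

Pinning `floc=0` keeps the model a pure two-parameter lognormal in diameter, matching how size-distribution reference models are usually parameterized.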

  13. True external diameter better predicts hemodynamic performance of bioprosthetic aortic valves than the manufacturers' stated size.

    PubMed

    Cevasco, Marisa; Mick, Stephanie L; Kwon, Michael; Lee, Lawrence S; Chen, Edward P; Chen, Frederick Y

    2013-05-01

    Currently, there is no universal standard for sizing bioprosthetic aortic valves. Hence, a standardized comparison was performed to clarify this issue. Every size of four commercially available bioprosthetic aortic valves marketed in the United States (Biocor Supra; Mosaic Ultra; Magna Ease; Mitroflow) was obtained. Subsequently, custom sizers were created that were accurate to 0.0025 mm to represent aortic roots 18 mm through 32 mm, and these were used to measure the external diameter of each valve. Using the effective orifice area (EOA) and transvalvular pressure gradient (TPG) data submitted to the FDA, a comparison was made between the hemodynamic properties of valves with equivalent manufacturer stated sizes and valves with equivalent measured external diameters. Based on manufacturer size alone, the valves at first seemed to be hemodynamically different from each other, with Mitroflow valves appearing to be hemodynamically superior, having a large EOA and equivalent or superior TPG (p < 0.05). However, Mitroflow valves had a larger measured external diameter than the other valves of a given numerical manufacturer size. Valves with equivalent external diameters were then compared, regardless of the stated manufacturer sizes. For truly equivalently sized valves (i.e., by measured external diameter) there was no clear hemodynamic difference. There was no statistical difference in the EOAs between the Biocor Supra, Mosaic Ultra, and Mitroflow valves, and the Magna Ease valve had a statistically smaller EOA (p < 0.05). On comparing the mean TPG, the Biocor Supra and Mitroflow valves had statistically equivalent gradients to each other, as did the Mosaic Ultra and Magna Ease valves. When comparing valves of the same numerical manufacturer size, there appears to be a difference in hemodynamic performance across different manufacturers' valves according to FDA data. 
However, comparing equivalently measured valves eliminates the differences between valves produced by different manufacturers.

  14. Exposure of the surgeon's hands to radiation during hand surgery procedures.

    PubMed

    Żyluk, Andrzej; Puchalski, Piotr; Szlosser, Zbigniew; Dec, Paweł; Chrąchol, Joanna

    2014-01-01

The objective of the study was to assess the time of exposure of the surgeon's hands to radiation and to calculate the equivalent dose absorbed during surgery of hand and wrist fractures under C-arm fluoroscope guidance. The necessary data were acquired from operations on 287 patients with fractures of the fingers, metacarpals, wrist bones and distal radius. 218 operations (78%) were percutaneous procedures and 60 (22%) were performed by the open method. Data on the time of exposure and dose of radiation were acquired from the display of the fluoroscope, where they were automatically generated. These data were assigned to the individual patient, type of fracture, method of surgery and the operating surgeon. Fixations of distal radial fractures required longer times of radiation exposure (mean 61 sec.) than fractures of the wrist/metacarpals and fingers (38 and 32 sec., respectively), which was associated with absorption of significantly higher equivalent doses. Fixations of distal radial fractures by the open method were associated with statistically significantly higher equivalent doses (0.41 mSv) than percutaneous procedures (0.3 mSv). Fixations of wrist and metacarpal bone fractures by the open method were associated with lower equivalent doses (0.34 mSv) than percutaneous procedures (0.37 mSv), but the difference was not significant. Fixations of finger fractures by the open method were associated with lower equivalent doses (0.13 mSv) than percutaneous procedures (0.24 mSv), the difference being statistically non-significant. Statistically significant differences in exposure time and equivalent doses were noted between the 4 surgeons participating in the study, but no definitive relationship was found between these parameters and the surgeons' employment time. 1. Hand surgery procedures under fluoroscopic guidance are associated with mild exposure of the surgeons' hands to radiation. 2. 
The equivalent dose was related to the type of fracture, operative technique and - to some degree - to the time of employment of the surgeon.

  15. No Clinically Significant Difference Between Adult and Pediatric IKDC Subjective Knee Evaluation Scores in Adults.

    PubMed

    Stegmeier, Nicole; Oak, Sameer R; O'Rourke, Colin; Strnad, Greg; Spindler, Kurt P; Jones, Morgan; Farrow, Lutul D; Andrish, Jack; Saluan, Paul

    Two versions of the International Knee Documentation Committee (IKDC) Subjective Knee Evaluation form currently exist: the original version (1999) and a recently modified pediatric-specific version (2011). Comparison of the pediatric IKDC with the adult version in the adult population may reveal that either version could be used longitudinally. We hypothesize that the scores for the adult IKDC and pediatric IKDC will not be clinically different among adult patients aged 18 to 50 years. Randomized crossover study design. Level 2. The study consisted of 100 participants, aged 18 to 50 years, who presented to orthopaedic outpatient clinics with knee problems. All participants completed both adult and pediatric versions of the IKDC in random order with a 10-minute break in between. We used a paired t test to test for a difference between the scores and a Welch's 2-sample t test to test for equivalence. A least-squares regression model was used to model adult scores as a function of pediatric scores, and vice versa. A paired t test revealed a statistically significant 1.6-point difference between the mean adult and pediatric scores. However, the 95% confidence interval (0.54-2.66) for this difference did not exceed our a priori threshold of 5 points, indicating that this difference was not clinically important. Equivalence testing with an equivalence region of 5 points further supported this finding. The adult and pediatric scores had a linear relationship and were highly correlated with an R² of 92.6%. There is no clinically relevant difference between the scores of the adult and pediatric IKDC forms in adults, aged 18 to 50 years, with knee conditions. Either form, adult or pediatric, of the IKDC can be used in this population for longitudinal studies. If the pediatric version is administered in adolescence, it can be used for follow-up into adulthood.
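    The equivalence logic used here (a ±5-point margin checked via two one-sided tests, equivalently a 90% confidence interval inside the margin) can be sketched as follows. The scores are simulated to mimic the reported 1.6-point mean difference; they are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated paired IKDC scores (adult vs pediatric form) for 100 patients;
# the true mean difference is ~1.6 points to mimic the reported result.
pediatric = rng.normal(60.0, 15.0, size=100)
adult = pediatric + rng.normal(1.6, 5.0, size=100)

diff = adult - pediatric
margin = 5.0                       # a priori equivalence margin, IKDC points
se = diff.std(ddof=1) / np.sqrt(len(diff))
df = len(diff) - 1

# Two one-sided tests (TOST): reject both "mean diff <= -5" and
# "mean diff >= +5" to conclude equivalence.
p_lower = 1 - stats.t.cdf((diff.mean() + margin) / se, df)
p_upper = stats.t.cdf((diff.mean() - margin) / se, df)
p_tost = max(p_lower, p_upper)

# Equivalently: the 90% CI of the mean difference must lie inside +/-5.
ci = stats.t.interval(0.90, df, loc=diff.mean(), scale=se)
equivalent = ci[0] > -margin and ci[1] < margin
print(f"mean diff = {diff.mean():.2f}, TOST p = {p_tost:.2g}, "
      f"equivalent = {equivalent}")
```

    A statistically significant paired t test and a successful equivalence test can coexist, exactly as in the abstract: the difference is reliably nonzero yet reliably smaller than the clinical margin.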

  16. No Clinically Significant Difference Between Adult and Pediatric IKDC Subjective Knee Evaluation Scores in Adults

    PubMed Central

    Stegmeier, Nicole; Oak, Sameer R.; O’Rourke, Colin; Strnad, Greg; Spindler, Kurt P.; Jones, Morgan; Farrow, Lutul D.; Andrish, Jack; Saluan, Paul

    2017-01-01

    Background: Two versions of the International Knee Documentation Committee (IKDC) Subjective Knee Evaluation form currently exist: the original version (1999) and a recently modified pediatric-specific version (2011). Comparison of the pediatric IKDC with the adult version in the adult population may reveal that either version could be used longitudinally. Hypothesis: We hypothesize that the scores for the adult IKDC and pediatric IKDC will not be clinically different among adult patients aged 18 to 50 years. Study Design: Randomized crossover study design. Level of Evidence: Level 2. Methods: The study consisted of 100 participants, aged 18 to 50 years, who presented to orthopaedic outpatient clinics with knee problems. All participants completed both adult and pediatric versions of the IKDC in random order with a 10-minute break in between. We used a paired t test to test for a difference between the scores and a Welch’s 2-sample t test to test for equivalence. A least-squares regression model was used to model adult scores as a function of pediatric scores, and vice versa. Results: A paired t test revealed a statistically significant 1.6-point difference between the mean adult and pediatric scores. However, the 95% confidence interval (0.54-2.66) for this difference did not exceed our a priori threshold of 5 points, indicating that this difference was not clinically important. Equivalence testing with an equivalence region of 5 points further supported this finding. The adult and pediatric scores had a linear relationship and were highly correlated with an R2 of 92.6%. Conclusion: There is no clinically relevant difference between the scores of the adult and pediatric IKDC forms in adults, aged 18 to 50 years, with knee conditions. Clinical Relevance: Either form, adult or pediatric, of the IKDC can be used in this population for longitudinal studies. If the pediatric version is administered in adolescence, it can be used for follow-up into adulthood. 
PMID:28080306

  17. Using the Rasch Model to Determine Equivalence of Forms In the Trilingual Lollipop Readiness Test

    ERIC Educational Resources Information Center

    Lang, W. Steve; Chew, Alex L.; Crownover, Carol; Wilkerson, Judy R.

    2007-01-01

    Determining the cross-cultural equivalence of multilingual tests is a challenge that is more complex than simple horizontal equating of test forms. This study examines the functioning of a trilingual test of preschool readiness to determine the equivalence. Different forms of the test have previously been examined using classical statistical…

  18. Detection of coliform bacteria and Escherichia coli by multiplex polymerase chain reaction: comparison with defined substrate and plating methods for water quality monitoring.

    PubMed Central

    Bej, A K; McCarty, S C; Atlas, R M

    1991-01-01

    Multiplex polymerase chain reaction (PCR) and gene probe detection of target lacZ and uidA genes were used to detect total coliform bacteria and Escherichia coli, respectively, for determining water quality. In tests of environmental water samples, the lacZ PCR method gave results statistically equivalent to those of the plate count and defined substrate methods accepted by the U.S. Environmental Protection Agency for water quality monitoring and the uidA PCR method was more sensitive than 4-methylumbelliferyl-beta-D-glucuronide-based defined substrate tests for specific detection of E. coli. PMID:1768116

  19. A practical and systematic review of Weibull statistics for reporting strengths of dental materials

    PubMed Central

    Quinn, George D.; Quinn, Janet B.

    2011-01-01

    Objectives: To review the history, theory and current applications of Weibull analyses sufficient to make informed decisions regarding practical use of the analysis in dental material strength testing. Data: References are made to examples in the engineering and dental literature, but this paper also includes illustrative analyses of Weibull plots, fractographic interpretations, and Weibull distribution parameters obtained for a dense alumina, two feldspathic porcelains, and a zirconia. Sources: Informational sources include Weibull's original articles, later articles specific to applications and theoretical foundations of Weibull analysis, texts on statistics and fracture mechanics and the international standards literature. Study Selection: The chosen Weibull analyses are used to illustrate technique, the importance of flaw size distributions, physical meaning of Weibull parameters and concepts of “equivalent volumes” to compare measured strengths obtained from different test configurations. Conclusions: Weibull analysis has a strong theoretical basis and can be of particular value in dental applications, primarily because of test specimen size limitations and the use of different test configurations. Also endemic to dental materials, however, is increased difficulty in satisfying application requirements, such as confirming fracture origin type and diligence in obtaining quality strength data. PMID:19945745
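    A two-parameter Weibull fit of the kind the review describes, estimating the Weibull modulus and characteristic strength from a set of specimen strengths, can be sketched with simulated data (the sample size, modulus, and strength values below are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated flexural strengths (MPa) for 30 specimens drawn from a Weibull
# distribution with modulus m = 10 and characteristic strength 400 MPa.
m_true, sigma0_true = 10.0, 400.0
strengths = sigma0_true * rng.weibull(m_true, size=30)

# Two-parameter Weibull fit (location fixed at zero, as is usual for
# strength data): shape = Weibull modulus, scale = characteristic strength.
m_hat, loc, sigma0_hat = stats.weibull_min.fit(strengths, floc=0)
print(f"Weibull modulus ~ {m_hat:.1f}, "
      f"characteristic strength ~ {sigma0_hat:.0f} MPa")
```

    With only 30 specimens the modulus estimate carries substantial uncertainty, which is one reason the review stresses diligence in obtaining quality strength data.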

  20. Improving Project Performance through Implementation of Agile Methodologies in the Renewable Energy Construction Industry

    NASA Astrophysics Data System (ADS)

    Hernandez Mendez, Arturo

    Collaborative inquiry within undergraduate research experiences (UREs) is an effective curriculum tool to support student growth. This study seeks to understand how collaborative inquiry within undergraduate biology students' experiences is affected by faculty-mentored and non-mentored experiences at a large private southeastern university. Undergraduate biology students engaged in UREs (faculty-mentored and non-mentored experiences) were examined for statistically significant differences in student self-efficacy. Self-efficacy was measured in three subcomponents (thinking and working like a scientist, scientific self-efficacy, and scientific identity) from student responses obtained in an online survey. Responses were analyzed using a nonparametric equivalent of a t test (the Mann-Whitney U test) to make comparisons between the faculty-mentored and non-mentored student groups. The conclusions of this study highlight the statistically significant effect of faculty mentoring in all three subcomponents. Faculty and university policy makers can apply these findings to develop further support for effective faculty mentoring practices in UREs.
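    The group comparison described (a Mann-Whitney U test between mentored and non-mentored groups) can be sketched as follows; the score distributions and group sizes are invented for illustration, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated self-efficacy scores (1-5 scale) for faculty-mentored and
# non-mentored URE students; the mentored group is shifted upward.
mentored = rng.normal(4.2, 0.5, size=40).clip(1, 5)
non_mentored = rng.normal(3.6, 0.5, size=40).clip(1, 5)

# Mann-Whitney U: the rank-based (nonparametric) analogue of an
# independent-samples t test, appropriate for bounded survey scores.
u, p = stats.mannwhitneyu(mentored, non_mentored, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.2g}")
```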

  1. Difference of refraction values between standard autorefractometry and Plusoptix.

    PubMed

    Bogdănici, Camelia Margareta; Săndulache, Codrina Maria; Vasiliu, Rodica; Obadă, Otilia

    2016-01-01

    Aim: Comparison between the objective refraction measurement results determined with the Topcon KR-8900 standard autorefractometer and the Plusoptix A09 photo-refractometer in children. Material and methods: A prospective transversal study was performed in the Department of Ophthalmology of "Sf. Spiridon" Hospital in Iași on 90 eyes of 45 pediatric patients, with a mean age of 8.82 ± 3.52 years, examined with noncycloplegic measurements provided by the Plusoptix A09 and cycloplegic and noncycloplegic measurements provided by the Topcon KR-8900 standard autorefractometer. The clinical parameters compared were the following: spherical equivalent (SE), spherical and cylindrical values, and cylinder axis. Astigmatism was recorded and evaluated with the cylindrical value on minus after transposition. The statistical calculation was performed with paired t-tests and Pearson's correlation analysis. All the data were analyzed with the SPSS statistical package 19 (SPSS for Windows, Chicago, IL). Results: Plusoptix A09 noncycloplegic values were relatively equal between the eyes, with slightly lower values compared to noncycloplegic autorefractometry. Mean (± SD) measurements provided by the Plusoptix A09 were the following: spherical power 1.11 ± 1.52, cylindrical power 0.80 ± 0.80, and spherical equivalent 0.71 ± 1.39. The noncycloplegic autorefractometer mean (± SD) measurements were spherical power 1.12 ± 1.63, cylindrical power 0.79 ± 0.77 and spherical equivalent 0.71 ± 1.58. The cycloplegic autorefractometer mean (± SD) measurements were spherical power 2.08 ± 1.95, cylindrical power 0.82 ± 0.85 and spherical equivalent 1.68 ± 1.87. 32% of the eyes were hyperopic, 2.67% were myopic, 65.33% had astigmatism, and 30% of the eyes had amblyopia. Conclusions: Noncycloplegic objective refraction values were similar to those determined by autorefractometry. 
Plusoptix had an important role in the ophthalmological screening, but did not detect higher refractive errors, justifying the cycloplegic autorefractometry.

  2. Small-scale deflagration cylinder test with velocimetry wall-motion diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooks, Daniel E; Hill, Larry G; Pierce, Timothy H

    Predicting the likelihood and effects of outcomes resultant from thermal initiation of explosives remains a significant challenge. For certain explosive formulations, the general outcome can be broadly predicted given knowledge of certain conditions. However, there remain unexplained violent events, and increased statistical understanding of outcomes as a function of many variables, or 'violence categorization,' is needed. Additionally, the development of an equation of state equivalent for deflagration would be very useful in predicting possible detailed event consequences using traditional hydrodynamic detonation models. For violence categorization, it is desirable that testing be efficient, such that it is possible to statistically define outcomes reliant on the processes of initiation of deflagration, steady state deflagration, and deflagration to detonation transitions. If the test simultaneously acquires information to inform models of violent deflagration events, overall predictive capabilities for event likelihood and consequence might improve remarkably. In this paper we describe an economical scaled deflagration cylinder test. The cyclotetramethylene tetranitramine (HMX) based explosive formulation PBX 9501 was tested using different temperature profiles in a thick-walled copper cylindrical confiner. This test is a scaled version of a recently demonstrated deflagration cylinder test, and is similar to several other thermal explosion tests. The primary difference is the passive velocimetry diagnostic, which enables measurement of confinement vessel wall velocities at failure, regardless of the timing and location of ignition.

  3. 46 CFR 161.002-17 - Equivalents.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... test that provides a level of safety equivalent to that established by specific provisions of this... require engineering evaluations and tests to demonstrate the equivalence of the substitute. [CGD 94-108...

  4. 46 CFR 161.002-17 - Equivalents.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... test that provides a level of safety equivalent to that established by specific provisions of this... require engineering evaluations and tests to demonstrate the equivalence of the substitute. [CGD 94-108...

  5. 46 CFR 161.002-17 - Equivalents.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... test that provides a level of safety equivalent to that established by specific provisions of this... require engineering evaluations and tests to demonstrate the equivalence of the substitute. [CGD 94-108...

  6. 46 CFR 161.002-17 - Equivalents.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... test that provides a level of safety equivalent to that established by specific provisions of this... require engineering evaluations and tests to demonstrate the equivalence of the substitute. [CGD 94-108...

  7. 46 CFR 161.002-17 - Equivalents.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... test that provides a level of safety equivalent to that established by specific provisions of this... require engineering evaluations and tests to demonstrate the equivalence of the substitute. [CGD 94-108...

  8. [The relationship between Ridit analysis and rank sum test for one-way ordinal contingency table in medical research].

    PubMed

    Wang, Ling; Xia, Jie-lai; Yu, Li-li; Li, Chan-juan; Wang, Su-zhen

    2008-06-01

    To explore several numerical methods for ordinal variables in a one-way ordinal contingency table and their interrelationship, and to compare the corresponding statistical analysis methods, such as Ridit analysis and the rank sum test. Formula deduction was based on five simplified grading approaches including rank_r(i), ridit_r(i), ridit_r(ci), ridit_r(mi), and table scores. The methods were verified with SAS8.2 on a practical clinical data set (testing the effect of Shiwei solution in the treatment of chronic tracheitis). Because of the linear relationship of rank_r(i) = N ridit_r(i) + 1/2 = N ridit_r(ci) = (N + 1) ridit_r(mi), the exact χ² values in Ridit analysis based on ridit_r(i), ridit_r(ci), and ridit_r(mi) were completely the same, and they were equivalent to the Kruskal-Wallis H test. Traditional Ridit analysis was based on ridit_r(i), and its corresponding χ² value calculated with an approximate variance (1/12) was conservative. The exact χ² test of Ridit analysis should be used when comparing multiple groups in clinical research because of its special merits, such as the distribution of the mean ridit value on (0,1) and clear graphical expression. The exact χ² test of Ridit analysis can be output directly by proc freq of SAS8.2 with the ridit and modridit options (SCORES =). The exact χ² test of Ridit analysis is equivalent to the Kruskal-Wallis H test, and should be used when comparing multiple groups in clinical research.
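    The claimed equivalence, mean ridits and the Kruskal-Wallis H test drawing on the same rank information from a one-way ordinal table, can be illustrated numerically. The grade counts below are hypothetical, not the Shiwei trial's data:

```python
import numpy as np
from scipy import stats

# Illustrative one-way ordinal table: counts of outcomes
# (worse, unchanged, improved, cured) in two treatment groups.
counts = {"treatment": [3, 10, 25, 12], "control": [8, 18, 18, 6]}

# Expand the grade counts into per-subject ordinal codes 0..3.
groups = [np.repeat(np.arange(4), c) for c in counts.values()]

# Kruskal-Wallis H on the ordinal grades (ties handled via midranks);
# per the abstract, this matches the exact chi-square test of Ridit analysis.
h, p = stats.kruskal(*groups)

# Mean ridit per group, computed on the pooled grade distribution:
# ridit of grade i = (count below i + half the count at i) / N.
pooled = np.concatenate(groups)
n = len(pooled)
grade_totals = np.bincount(pooled, minlength=4)
ridit = (np.cumsum(grade_totals) - grade_totals / 2) / n
mean_ridits = {k: ridit[np.repeat(np.arange(4), c)].mean()
               for k, c in counts.items()}
print(f"H = {h:.2f}, p = {p:.4f}")
print({k: round(float(v), 3) for k, v in mean_ridits.items()})
```

    A mean ridit above 0.5 indicates a group whose outcomes tend to rank higher than the pooled distribution; here the treatment group sits above 0.5 and the control below it, consistent with the significant H statistic.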

  9. Performance Equivalence and Validation of the Soleris Automated System for Quantitative Microbial Content Testing Using Pure Suspension Cultures.

    PubMed

    Limberg, Brian J; Johnstone, Kevin; Filloon, Thomas; Catrenich, Carl

    2016-09-01

    Using United States Pharmacopeia-National Formulary (USP-NF) general method <1223> guidance, the Soleris® automated system and reagents (Nonfermenting Total Viable Count for bacteria and Direct Yeast and Mold for yeast and mold) were validated, using a performance equivalence approach, as an alternative to plate counting for total microbial content analysis using five representative microbes: Staphylococcus aureus, Bacillus subtilis, Pseudomonas aeruginosa, Candida albicans, and Aspergillus brasiliensis. Detection times (DTs) in the alternative automated system were linearly correlated to CFU/sample (R² = 0.94-0.97) with ≥70% accuracy per USP General Chapter <1223> guidance. The LOD and LOQ of the automated system were statistically similar to the traditional plate count method. This system was significantly more precise than plate counting (RSD 1.2-2.9% for DT, 7.8-40.6% for plate counts), was statistically comparable to plate counting with respect to variations in analyst, vial lots, and instruments, and was robust when variations in the operating detection thresholds (dTs; ±2 units) were used. The automated system produced accurate results, was more precise and less labor-intensive, and met or exceeded criteria for a valid alternative quantitative method, consistent with USP-NF general method <1223> guidance.
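    The linear relationship such validations rely on (detection time falling linearly with log10 CFU) can be sketched with an assumed calibration; the slope, intercept, and noise level here are illustrative, not Soleris reagent values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Assumed calibration: triplicate detection times (hours) at inocula of
# 10^1..10^6 CFU, with DT falling linearly in log10(CFU) plus noise.
log_cfu = np.repeat(np.arange(1, 7), 3).astype(float)
dt_hours = 20.0 - 2.5 * log_cfu + rng.normal(0.0, 0.4, size=log_cfu.size)

# Least-squares calibration line; R^2 summarizes the DT-to-log(CFU)
# correlation that the validation reports.
fit = stats.linregress(log_cfu, dt_hours)
r_squared = fit.rvalue ** 2
print(f"slope = {fit.slope:.2f} h per log10 CFU, R^2 = {r_squared:.3f}")
```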

  10. Evaluation of 3D-human skin equivalents for assessment of human dermal absorption of some brominated flame retardants.

    PubMed

    Abdallah, Mohamed Abou-Elwafa; Pawar, Gopal; Harrad, Stuart

    2015-11-01

    Ethical and technical difficulties inherent to studies in human tissues are impeding assessment of the dermal bioavailability of brominated flame retardants (BFRs). This is further complicated by increasing restrictions on the use of animals in toxicity testing, and the uncertainties associated with extrapolating data from animal studies to humans due to inter-species variations. To overcome these difficulties, we evaluate 3D-human skin equivalents (3D-HSE) as a novel in vitro alternative to human and animal testing for assessment of dermal absorption of BFRs. The percutaneous penetration of hexabromocyclododecanes (HBCD) and tetrabromobisphenol-A (TBBP-A) through two commercially available 3D-HSE models was studied and compared to data obtained for human ex vivo skin according to a standard protocol. No statistically significant differences were observed between the results obtained using 3D-HSE and human ex vivo skin at two exposure levels. The absorbed dose was low (less than 7%) and was significantly correlated with the log Kow of the tested BFR. Permeability coefficient values showed increasing dermal resistance to the penetration of γ-HBCD > β-HBCD > α-HBCD > TBBP-A. The estimated long lag times (>30 min) suggest that frequent hand washing may reduce human exposure to HBCDs and TBBP-A via dermal contact. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Comparison of the Pentacam equivalent keratometry reading and IOL Master keratometry measurement in intraocular lens power calculations.

    PubMed

    Karunaratne, Nicholas

    2013-12-01

    To compare the accuracy of the Pentacam Holladay equivalent keratometry readings with the IOL Master 500 keratometry in calculating intraocular lens power. Non-randomized, prospective clinical study conducted in private practice. Forty-five consecutive normal patients undergoing cataract surgery. Forty-five consecutive patients had Pentacam equivalent keratometry readings at the 2-, 3- and 4.5-mm corneal zones and IOL Master keratometry measurements prior to cataract surgery. For each Pentacam equivalent keratometry reading zone and IOL Master measurement, the difference between the observed and expected refractive error was calculated using the Holladay 2 and Sanders, Retzlaff and Kraff theoretic (SRKT) formulas. Mean keratometric value and mean absolute refractive error. There was a statistically significant difference between the mean keratometric values of the IOL Master and the Pentacam equivalent keratometry reading 2-, 3- and 4.5-mm measurements (P < 0.0001, analysis of variance). There was no statistically significant difference between the mean absolute refraction error for the IOL Master and equivalent keratometry readings at the 2 mm, 3 mm and 4.5 mm zones for either the Holladay 2 formula (P = 0.14) or the SRKT formula (P = 0.47). The lowest mean absolute refraction error for the Holladay 2 equivalent keratometry reading was at the 4.5 mm zone (mean 0.25 D ± 0.17 D). The lowest mean absolute refraction error for the SRKT equivalent keratometry reading was at the 4.5 mm zone (mean 0.25 D ± 0.19 D). Comparing the absolute refraction error of the IOL Master and Pentacam equivalent keratometry reading, the best agreement was with Holladay 2 and the equivalent keratometry reading at 4.5 mm, with a mean difference of 0.02 D and 95% limits of agreement of -0.35 and 0.39 D. The IOL Master keratometry and Pentacam equivalent keratometry reading were not equivalent when used only for corneal power measurements. 
However, the keratometry measurements of the IOL Master and Pentacam equivalent keratometry reading 4.5 mm may be similarly effective when used in intraocular lens power calculation formulas, following constant optimization. © 2013 Royal Australian and New Zealand College of Ophthalmologists.
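    The agreement summary reported above (a mean difference with 95% limits of agreement) is a Bland-Altman calculation, which can be sketched on simulated paired refraction errors; the means and spreads below are assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated paired absolute refraction errors (diopters) from two devices
# for 45 eyes; values are illustrative only.
iol_master = rng.normal(0.30, 0.15, size=45).clip(0)
pentacam = (iol_master + rng.normal(0.02, 0.18, size=45)).clip(0)

# Bland-Altman agreement: mean difference (bias) and 95% limits of
# agreement = bias +/- 1.96 * SD of the paired differences.
diff = iol_master - pentacam
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"mean difference = {bias:.2f} D, "
      f"95% limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f}) D")
```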

  12. A comparative appraisal of two equivalence tests for multiple standardized effects.

    PubMed

    Shieh, Gwowen

    2016-04-01

    Equivalence testing is recommended as a better alternative to the traditional difference-based methods for demonstrating the comparability of two or more treatment effects. Although equivalence tests for two groups are widely discussed, the natural extensions for assessing equivalence between several groups have not been well examined. This article provides a detailed and schematic comparison of the ANOVA F and the studentized range tests for evaluating the comparability of several standardized effects. Power and sample size appraisals of the two grossly distinct approaches are conducted in terms of a constraint on the range of the standardized means when the standard deviation of the standardized means is fixed. Although neither method is uniformly more powerful, the studentized range test has a clear advantage in the sample size required to achieve a given power when the underlying effect configurations are close to the a priori minimum difference for determining equivalence. For actual application of equivalence tests and advance planning of equivalence studies, both SAS and R computer codes are available as supplementary files to implement the calculations of critical values, p-values, power levels, and sample sizes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Testing the equivalence of modern human cranial covariance structure: Implications for bioarchaeological applications.

    PubMed

    von Cramon-Taubadel, Noreen; Schroeder, Lauren

    2016-10-01

    Estimation of the variance-covariance (V/CV) structure of fragmentary bioarchaeological populations requires the use of proxy extant V/CV parameters. However, it is currently unclear whether extant human populations exhibit equivalent V/CV structures. Random skewers (RS) and hierarchical analyses of common principal components (CPC) were applied to a modern human cranial dataset. Cranial V/CV similarity was assessed globally for samples of individual populations (jackknifed method) and for pairwise population sample contrasts. The results were examined in light of potential explanatory factors for covariance difference, such as geographic region, among-group distance, and sample size. RS analyses showed that population samples exhibited highly correlated multivariate responses to selection, and that differences in RS results were primarily a consequence of differences in sample size. The CPC method yielded mixed results, depending upon the statistical criterion used to evaluate the hierarchy. The hypothesis-testing (step-up) approach was deemed problematic due to sensitivity to low statistical power and elevated Type I errors. In contrast, the model-fitting (lowest AIC) approach suggested that V/CV matrices were proportional and/or shared a large number of CPCs. Pairwise population sample CPC results were correlated with cranial distance, suggesting that population history explains some of the variability in V/CV structure among groups. The results indicate that patterns of covariance in human craniometric samples are broadly similar but not identical. These findings have important implications for choosing extant covariance matrices to use as proxy V/CV parameters in evolutionary analyses of past populations. © 2016 Wiley Periodicals, Inc.
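    The random skewers (RS) method compares covariance matrices by correlating their responses to random selection gradients. A minimal sketch, with arbitrary illustrative matrices in place of craniometric V/CV estimates:

```python
import numpy as np

rng = np.random.default_rng(4)

# Random skewers: compare two covariance matrices by how similarly they
# translate random selection gradients beta into responses C @ beta.
def random_skewers(c1, c2, n_skewers=1000, rng=rng):
    p = c1.shape[0]
    beta = rng.normal(size=(n_skewers, p))
    beta /= np.linalg.norm(beta, axis=1, keepdims=True)  # unit-length skewers
    r1 = beta @ c1
    r2 = beta @ c2
    # Mean vector correlation (cosine) between the paired responses.
    cos = np.sum(r1 * r2, axis=1) / (
        np.linalg.norm(r1, axis=1) * np.linalg.norm(r2, axis=1))
    return cos.mean()

# Illustrative matrices: a random positive-definite matrix and a slightly
# perturbed copy of it.
a = rng.normal(size=(6, 6))
c1 = a @ a.T + 6 * np.eye(6)
c2 = c1 + 0.1 * np.diag(rng.uniform(size=6))

rs_same = random_skewers(c1, c2)
rs_self = random_skewers(c1, c1)
print(f"RS(similar) = {rs_same:.3f}, RS(identical) = {rs_self:.3f}")
```

    An RS value near 1 indicates highly correlated multivariate responses to selection, the pattern the abstract reports across human population samples.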

  14. Evaluation of nitrous oxide as a substitute for sulfur hexafluoride to reduce global warming impacts of ANSI/HPS N13.1 gaseous uniformity testing

    NASA Astrophysics Data System (ADS)

    Yu, Xiao-Ying; Barnett, J. Matthew; Amidan, Brett G.; Recknagle, Kurtis P.; Flaherty, Julia E.; Antonio, Ernest J.; Glissmeyer, John A.

    2018-03-01

    The ANSI/HPS N13.1-2011 standard requires gaseous tracer uniformity testing for sampling associated with stacks used in radioactive air emissions. Sulfur hexafluoride (SF6), a greenhouse gas with a high global warming potential, has long been the gas tracer used in such testing. To reduce the impact of gas tracer tests on the environment, nitrous oxide (N2O) was evaluated as a potential replacement for SF6. The physical evaluation included the development of a test plan to record the percent coefficient of variance and the percent maximum deviation between the two gases while considering variables such as fan configuration, injection position, and flow rate. Statistical power was calculated to determine how many sample sets were needed, and computational fluid dynamic modeling was utilized to estimate overall mixing in stacks. Results show there are no significant differences between the behaviors of the two gases, and SF6 modeling corroborated N2O test results. Although, in principle, all tracer gases should behave in an identical manner for measuring mixing within a stack, the series of physical tests guided by statistics was performed to demonstrate the equivalence of N2O testing to SF6 testing in the context of stack qualification tests. The results demonstrate that N2O is a viable choice, leading to a fourfold reduction in global warming impacts for future similar compliance-driven testing.
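    The two uniformity metrics named above (percent coefficient of variance and percent maximum deviation) reduce to a few lines. The concentrations and the ≤20%/≤30% acceptance limits below are illustrative assumptions about typical N13.1-style criteria, not values from the study:

```python
import numpy as np

# Tracer concentrations (arbitrary units) measured on a traverse grid at
# the sampling plane; illustrative numbers, not the study's data.
conc = np.array([98.0, 102.0, 101.0, 97.0, 103.0, 99.0, 100.0, 100.0])

mean = conc.mean()
pct_cov = 100 * conc.std(ddof=1) / mean                  # % coefficient of variance
pct_max_dev = 100 * np.max(np.abs(conc - mean)) / mean   # % maximum deviation

# Assumed acceptance criteria: COV <= 20% and maximum deviation <= 30%
# at the sampling location.
uniform = pct_cov <= 20.0 and pct_max_dev <= 30.0
print(f"%COV = {pct_cov:.1f}, %max deviation = {pct_max_dev:.1f}, "
      f"pass = {uniform}")
```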

  15. ROC curves in clinical chemistry: uses, misuses, and possible solutions.

    PubMed

    Obuchowski, Nancy A; Lieber, Michael L; Wians, Frank H

    2004-07-01

    ROC curves have become the standard for describing and comparing the accuracy of diagnostic tests. Not surprisingly, ROC curves are used often by clinical chemists. Our aims were to observe how the accuracy of clinical laboratory diagnostic tests is assessed, compared, and reported in the literature; to identify common problems with the use of ROC curves; and to offer some possible solutions. We reviewed every original work using ROC curves and published in Clinical Chemistry in 2001 or 2002. For each article we recorded phase of the research, prospective or retrospective design, sample size, presence/absence of confidence intervals (CIs), nature of the statistical analysis, and major analysis problems. Of 58 articles, 31% were phase I (exploratory), 50% were phase II (challenge), and 19% were phase III (advanced) studies. The studies increased in sample size from phase I to III and showed a progression in the use of prospective designs. Most phase I studies were powered to assess diagnostic tests with ROC areas ≥0.70. Thirty-eight percent of studies failed to include CIs for diagnostic test accuracy or the CIs were constructed inappropriately. Thirty-three percent of studies provided insufficient analysis for comparing diagnostic tests. Other problems included dichotomization of the gold standard scale and inappropriate analysis of the equivalence of two diagnostic tests. We identify available software and make some suggestions for sample size determination, testing for equivalence in diagnostic accuracy, and alternatives to a dichotomous classification of a continuous-scale gold standard. More methodologic research is needed in areas specific to clinical chemistry.
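    An ROC area with a confidence interval, the reporting practice the review calls for, can be sketched using the equivalence of the AUC to the scaled Mann-Whitney U statistic, with a simple bootstrap CI. The biomarker distributions are simulated assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative biomarker values: diseased cases shifted above controls.
controls = rng.normal(1.0, 1.0, size=200)
cases = rng.normal(2.0, 1.0, size=100)

# ROC area = probability that a random case exceeds a random control,
# i.e., the Mann-Whitney U statistic divided by n_cases * n_controls.
wins = (cases[:, None] > controls[None, :]).sum()
ties = (cases[:, None] == controls[None, :]).sum()
auc = (wins + 0.5 * ties) / (len(cases) * len(controls))

# A simple percentile-bootstrap 95% CI for the AUC.
boot = []
for _ in range(500):
    c = rng.choice(cases, size=len(cases))
    d = rng.choice(controls, size=len(controls))
    w = (c[:, None] > d[None, :]).sum()
    boot.append(w / (len(c) * len(d)))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.3f} (95% bootstrap CI {lo:.3f}-{hi:.3f})")
```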

  16. Using volcano plots and regularized-chi statistics in genetic association studies.

    PubMed

    Li, Wentian; Freudenberg, Jan; Suh, Young Ju; Yang, Yaning

    2014-02-01

    Labor intensive experiments are typically required to identify the causal disease variants from a list of disease-associated variants in the genome. For designing such experiments, candidate variants are ranked by their strength of genetic association with the disease. However, the two commonly used measures of genetic association, the odds ratio (OR) and the p-value, may rank variants in different orders. To integrate these two measures into a single analysis, here we transfer the volcano plot methodology from gene expression analysis to genetic association studies. In its original setting, volcano plots are scatter plots of fold-change and t-test statistic (or -log of the p-value), with the latter being more sensitive to sample size. In genetic association studies, the OR and Pearson's chi-square statistic (or equivalently its square root, chi; or the standardized log(OR)) can be analogously used in a volcano plot, allowing for their visual inspection. Moreover, the geometric interpretation of these plots leads to an intuitive method for filtering results by a combination of both OR and chi-square statistic, which we term "regularized-chi". This method selects associated markers by a smooth curve in the volcano plot instead of the right-angled lines which correspond to independent cutoffs for OR and chi-square statistic. The regularized-chi incorporates relatively more signal from variants with lower minor-allele frequencies than the chi-square test statistic does. As rare variants tend to have stronger functional effects, regularized-chi is better suited to the task of prioritization of candidate genes. Copyright © 2013 Elsevier Ltd. All rights reserved.
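    The volcano-plot coordinates (log OR against the signed square root of the chi-square) and the contrast between a rectangular filter and a smooth combined filter can be sketched as below. The counts are invented, and the smooth rule shown is a simplified stand-in for the paper's regularized-chi, not its actual formula:

```python
import numpy as np

# Allele counts for three illustrative variants, columns:
# case_minor, case_major, control_minor, control_major (invented data).
variants = np.array([
    [120.0, 880.0, 100.0, 900.0],   # common variant, modest OR
    [ 30.0, 970.0,  12.0, 988.0],   # rarer variant, larger OR
    [ 55.0, 945.0,  50.0, 950.0],   # near-null variant
])

a, b, c, d = variants.T
odds_ratio = (a * d) / (b * c)
log_or = np.log(odds_ratio)
se = np.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of log(OR)
chi = log_or / se                      # signed sqrt of the chi-square statistic

# Volcano-plot coordinates are (log OR, |chi|). A rectangular filter applies
# independent cutoffs on each axis; a smooth combined rule (hypothetical
# stand-in for regularized-chi) blends the two axes into one curve.
rectangular = (np.abs(log_or) > 0.3) & (np.abs(chi) > 2.0)
regularized = np.abs(log_or) * np.abs(chi) > 0.6   # hypothetical smooth rule
print("rectangular:", rectangular.tolist())
print("regularized:", regularized.tolist())
```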

  17. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons

    PubMed Central

    2014-01-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829

  18. Eruption patterns of the chilean volcanoes Villarrica, Llaima, and Tupungatito

    NASA Astrophysics Data System (ADS)

    Muñoz, Miguel

    1983-09-01

    The historical eruption records of three Chilean volcanoes have been subjected to many statistical tests, and none have been found to differ significantly from random, or Poissonian, behaviour. The statistical analysis shows rough conformity with the descriptions determined from the eruption rate functions. It is possible that a constant eruption rate describes the activity of Villarrica; Llaima and Tupungatito present complex eruption rate patterns that appear, however, to have no statistical significance. Questions related to loading and extinction processes and to the existence of shallow secondary magma chambers to which magma is supplied from a deeper system are also addressed. The analysis and the computation of the serial correlation coefficients indicate that the three series may be regarded as stationary renewal processes. None of the test statistics indicates rejection of the Poisson hypothesis at a level less than 5%, but the coefficient of variation for the eruption series at Llaima is significantly different from the value expected for a Poisson process. Also, the estimates of the normalized spectrum of the counting process for the three series suggest a departure from the random model, but the deviations are not found to be significant at the 5% level. Kolmogorov-Smirnov and chi-squared test statistics, applied directly to ascertain the probability P with which the random Poisson model fits the data, indicate that there is significant agreement in the case of Villarrica (P = 0.59) and Tupungatito (P = 0.3). Even though the P-value for Llaima is a marginally significant 0.1 (which is equivalent to rejecting the Poisson model at the 90% confidence level), the series suggests that nonrandom features are possibly present in the eruptive activity of this volcano.
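    The Kolmogorov-Smirnov step above can be sketched minimally, assuming the Poisson null is checked by comparing inter-eruption intervals against an exponential distribution whose rate is estimated from the sample mean. (Because the rate is estimated from the same data, the standard KS critical values are optimistic; a Lilliefors-type correction would be needed for a rigorous p-value.)

```python
import math

def ks_exponential(intervals):
    """One-sample Kolmogorov-Smirnov statistic comparing inter-event
    intervals with an exponential distribution (the interval law of a
    Poisson process), rate estimated from the sample mean."""
    xs = sorted(intervals)
    n = len(xs)
    mean = sum(xs) / n
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = 1.0 - math.exp(-x / mean)            # exponential CDF at x
        d = max(d, i / n - f, f - (i - 1) / n)   # one-sided empirical gaps
    return d
```

A small statistic is consistent with random (Poissonian) eruption timing; a large one suggests clustering or regularity in the renewal process.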

  19. The cross-cultural equivalence of participation instruments: a systematic review.

    PubMed

    Stevelink, S A M; van Brakel, W H

    2013-07-01

    Concepts such as health-related quality of life, disability and participation may differ across cultures. Consequently, when assessing such a concept using a measure developed elsewhere, it is important to test its cultural equivalence. Previous research suggested a lack of cultural equivalence testing in several areas of measurement. This paper reviews the process of cross-cultural equivalence testing of instruments to measure participation in society. An existing cultural equivalence framework was adapted and used to assess participation instruments on five categories of equivalence: conceptual, item, semantic, measurement and operational equivalence. For each category, several aspects were rated, resulting in an overall category rating of 'minimal/none', 'partial' or 'extensive'. The best possible overall study rating was five 'extensive' ratings. Articles were included if the instruments focussed explicitly on measuring 'participation' and were theoretically grounded in the ICIDH(-2) or ICF. Cross-validation articles were only included if it concerned an adaptation of an instrument developed in a high or middle-income country to a low-income country or vice versa. Eight cross-cultural validation studies were included in which five participation instruments were tested (Impact on Participation and Autonomy, London Handicap Scale, Perceived Impact and Problem Profile, Craig Handicap Assessment Reporting Technique, Participation Scale). Of these eight studies, only three received at least two 'extensive' ratings for the different categories of equivalence. The majority of the cultural equivalence ratings given were 'partial' and 'minimal/none'. The majority of the 'none/minimal' ratings were given for item and measurement equivalence. The cross-cultural equivalence testing of the participation instruments included leaves much to be desired. A detailed checklist is proposed for designing a cross-validation study. 
Once a study has been conducted, the checklist can be used to ensure comprehensive reporting of the validation (equivalence) testing process and its results.
• Participation instruments are often used in a different cultural setting than the one they were initially developed for.
• The conceptualization of participation may vary across cultures. Therefore, cultural equivalence, the extent to which an instrument is equally suitable for use in two or more cultures, is an important concept to address.
• This review showed that the process of cultural equivalence testing of the included participation instruments was often addressed insufficiently.
• Clinicians should be aware that applying participation instruments in a culture other than the one they were initially developed for requires prior testing of cultural validity in the new context.

  20. 40 CFR 790.85 - Submission of equivalence data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... sought. The exact type of identifying data required will be specified in the test rule, but may include... 40 Protection of Environment 32 2011-07-01 2011-07-01 false Submission of equivalence data. 790.85... Test Rules § 790.85 Submission of equivalence data. If EPA requires in a test rule promulgated under...

  1. 40 CFR 790.85 - Submission of equivalence data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... sought. The exact type of identifying data required will be specified in the test rule, but may include... 40 Protection of Environment 32 2014-07-01 2014-07-01 false Submission of equivalence data. 790.85... Test Rules § 790.85 Submission of equivalence data. If EPA requires in a test rule promulgated under...

  2. Cross cultural translation and adaptation to Brazilian Portuguese of the Hearing Implant Sound Quality Index Questionnaire - (HISQUI19).

    PubMed

    Caporali, Priscila Faissola; Caporali, Sueli Aparecida; Bucuvic, Érika Cristina; Vieira, Sheila de Souza; Santos, Zeila Maria; Chiari, Brasília Maria

    2016-01-01

    Translation and cross-cultural adaptation of the instrument Hearing Implant Sound Quality Index (HISQUI19), and characterization of the target population and auditory performance in Cochlear Implant (CI) users through the application of a synthesis version of this tool. Evaluations of conceptual, item, semantic and operational equivalences were performed. The synthesis version was applied as a pre-test to 33 individuals, whose final results characterized the final sample and performance of the questionnaire. The results were analyzed statistically. The final translation (FT) was back-translated and compared with the original version, revealing a minimum difference between items. The changes observed between the FT and the synthesis version were characterized by the application of simplified vocabulary used on a daily basis. For the pre-test, the average score of the interviewees was 90.2, and a high level of reliability was achieved (0.83). The translation and cross-cultural adaptation of the HISQUI19 questionnaire showed suitability for conceptual, item, semantic and operational equivalences. For the sample characterization, the sound quality was classified as good with better performance for the categories of location and distinction of sound/voices.

  3. High School Equivalency Testing in Washington. Forum: Responding to Changes in High School Equivalency Testing

    ERIC Educational Resources Information Center

    Kerr, Jon

    2015-01-01

    In 2013, as new high school equivalency exams were being developed and implemented across the nation and states were deciding which test was best for their population, Washington state identified the need to adopt the most rigorous test so that preparation to take it would equip students with the skills to be able to move directly from adult…

  4. Lateral cephalometric analysis for treatment planning in orthodontics based on MRI compared with radiographs: A feasibility study in children and adolescents

    PubMed Central

    Lazo Gonzalez, Eduardo; Hilgenfeld, Tim; Kickingereder, Philipp; Bendszus, Martin; Heiland, Sabine; Ozga, Ann-Kathrin; Sommer, Andreas; Lux, Christopher J.; Zingler, Sebastian

    2017-01-01

    Objective The objective of this prospective study was to evaluate whether magnetic resonance imaging (MRI) is equivalent to lateral cephalometric radiographs (LCR, “gold standard”) in cephalometric analysis. Methods The applied MRI technique was optimized for short scanning time, high resolution, high contrast and geometric accuracy. Prior to orthodontic treatment, 20 patients (mean age ± SD, 13.95 years ± 5.34) received MRI and LCR. MRI datasets were postprocessed into lateral cephalograms. Cephalometric analysis was performed twice by two independent observers for both modalities with an interval of 4 weeks. Eight bilateral and 10 midsagittal landmarks were identified, and 24 widely used measurements (14 angles, 10 distances) were calculated. Statistical analysis was performed by using intraclass correlation coefficient (ICC), Bland-Altman analysis and two one-sided tests (TOST) within the predefined equivalence margin of ± 2°/mm. Results Geometric accuracy of the MRI technique was confirmed by phantom measurements. Mean intraobserver ICC were 0.977/0.975 for MRI and 0.975/0.961 for LCR. Average interobserver ICC were 0.980 for MRI and 0.929 for LCR. Bland-Altman analysis showed high levels of agreement between the two modalities, bias range (mean ± SD) was -0.66 to 0.61 mm (0.06 ± 0.44) for distances and -1.33 to 1.14° (0.06 ± 0.71) for angles. Except for the interincisal angle (p = 0.17), all measurements were statistically equivalent (p < 0.05). Conclusions This study demonstrates feasibility of orthodontic treatment planning without radiation exposure based on MRI. High-resolution isotropic MRI datasets can be transformed into lateral cephalograms allowing reliable measurements as applied in orthodontic routine with high concordance to the corresponding measurements on LCR. PMID:28334054
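    The TOST procedure named above can be sketched as follows. This is a hypothetical illustration assuming paired MRI-minus-LCR differences and a large-sample normal approximation; the study's own implementation would use t-based one-sided tests, and the ±2°/mm margin is taken from the abstract.

```python
from statistics import NormalDist, mean, stdev

def tost_paired(diffs, margin=2.0):
    """Two one-sided tests (TOST) for equivalence of paired measurements.
    Tests H0: mean diff <= -margin and H0: mean diff >= +margin; rejecting
    both supports equivalence. Normal approximation (an assumption here).
    Returns the larger of the two one-sided p-values."""
    n = len(diffs)
    m = mean(diffs)
    se = stdev(diffs) / n ** 0.5
    nd = NormalDist()
    p_lower = 1 - nd.cdf((m + margin) / se)   # vs. lower equivalence bound
    p_upper = 1 - nd.cdf((margin - m) / se)   # vs. upper equivalence bound
    return max(p_lower, p_upper)
```

Equivalence is claimed when the returned p-value falls below alpha, i.e. when both one-sided nulls are rejected, which matches the abstract's reporting of per-measurement equivalence p-values.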

  5. Comparison of the visual results after SMILE and femtosecond laser-assisted LASIK for myopia.

    PubMed

    Lin, Fangyu; Xu, Yesheng; Yang, Yabo

    2014-04-01

    To perform a comparative clinical analysis of the safety, efficacy, and predictability of two surgical procedures (ie, small incision lenticule extraction [SMILE] and femtosecond laser-assisted LASIK [FS-LASIK]) to correct myopia. Sixty eyes of 31 patients with a mean spherical equivalent of -5.13 ± 1.75 diopters underwent myopia correction with the SMILE procedure. Fifty-one eyes of 27 patients with a mean spherical equivalent of -5.58 ± 2.41 diopters were treated with the FS-LASIK procedure. Postoperative uncorrected and corrected distance visual acuity, manifest refraction, and higher-order aberrations were analyzed statistically at 1 and 3 months postoperatively. No statistically significant differences were found at 1 and 3 months in parameters that included the percentage of eyes with an uncorrected distance visual acuity of 20/20 or better (P = .556, .920) and mean spherical equivalent refraction (P = .055, .335). At 1 month, 4 SMILE-treated eyes and 1 FS-LASIK-treated eye lost one or more lines of visual acuity (P = .214, chi-square test). At 3 months, 2 SMILE-treated eyes lost one or more lines of visual acuity, whereas all FS-LASIK-treated eyes had an unchanged or improved corrected distance visual acuity. Higher-order aberrations and spherical aberration were significantly lower in the SMILE group than in the FS-LASIK group at 1 (P = .007, .000) and 3 (P = .006, .000) months of follow-up. SMILE and FS-LASIK are safe, effective, and predictable surgical procedures to treat myopia. SMILE has a lower induction rate of higher-order aberrations and spherical aberration than the FS-LASIK procedure. Copyright 2014, SLACK Incorporated.

  6. Lateral cephalometric analysis for treatment planning in orthodontics based on MRI compared with radiographs: A feasibility study in children and adolescents.

    PubMed

    Heil, Alexander; Lazo Gonzalez, Eduardo; Hilgenfeld, Tim; Kickingereder, Philipp; Bendszus, Martin; Heiland, Sabine; Ozga, Ann-Kathrin; Sommer, Andreas; Lux, Christopher J; Zingler, Sebastian

    2017-01-01

    The objective of this prospective study was to evaluate whether magnetic resonance imaging (MRI) is equivalent to lateral cephalometric radiographs (LCR, "gold standard") in cephalometric analysis. The applied MRI technique was optimized for short scanning time, high resolution, high contrast and geometric accuracy. Prior to orthodontic treatment, 20 patients (mean age ± SD, 13.95 years ± 5.34) received MRI and LCR. MRI datasets were postprocessed into lateral cephalograms. Cephalometric analysis was performed twice by two independent observers for both modalities with an interval of 4 weeks. Eight bilateral and 10 midsagittal landmarks were identified, and 24 widely used measurements (14 angles, 10 distances) were calculated. Statistical analysis was performed by using intraclass correlation coefficient (ICC), Bland-Altman analysis and two one-sided tests (TOST) within the predefined equivalence margin of ± 2°/mm. Geometric accuracy of the MRI technique was confirmed by phantom measurements. Mean intraobserver ICC were 0.977/0.975 for MRI and 0.975/0.961 for LCR. Average interobserver ICC were 0.980 for MRI and 0.929 for LCR. Bland-Altman analysis showed high levels of agreement between the two modalities, bias range (mean ± SD) was -0.66 to 0.61 mm (0.06 ± 0.44) for distances and -1.33 to 1.14° (0.06 ± 0.71) for angles. Except for the interincisal angle (p = 0.17), all measurements were statistically equivalent (p < 0.05). This study demonstrates feasibility of orthodontic treatment planning without radiation exposure based on MRI. High-resolution isotropic MRI datasets can be transformed into lateral cephalograms allowing reliable measurements as applied in orthodontic routine with high concordance to the corresponding measurements on LCR.

  7. Posterior paramedian subrhomboidal analgesia versus thoracic epidural analgesia for pain control in patients with multiple rib fractures.

    PubMed

    Shelley, Casey L; Berry, Stepheny; Howard, James; De Ruyter, Martin; Thepthepha, Melissa; Nazir, Niaman; McDonald, Tracy; Dalton, Annemarie; Moncure, Michael

    2016-09-01

    Rib fractures are common in trauma admissions and are associated with an increased risk of pulmonary complications, intensive care unit admissions, and mortality. Providing adequate pain control in patients with multiple rib fractures decreases the risk of adverse events. Thoracic epidural analgesia is currently the preferred method for pain control. This study compared outcomes in patients with multiple acute rib fractures treated with posterior paramedian subrhomboidal (PoPS) analgesia versus thoracic epidural analgesia (TEA). This prospective study included 30 patients with three or more acute rib fractures admitted to a Level I trauma center. Thoracic epidural analgesia or PoPS catheters were placed, and local anesthesia was infused. Data were collected including patients' pain level, adjunct morphine equivalent use, adverse events, length of stay, lung volumes, and discharge disposition. Nonparametric tests were used and two-sided p < 0.05 were considered statistically significant. Nineteen (63%) of 30 patients received TEA and 11 (37%) of 30 patients received PoPS. Pain rating was lower in the PoPS group (2.5 vs. 5; p = 0.03) after initial placement. Overall, there was no other statistically significant difference in pain control or use of oral morphine adjuncts between the groups. Hypotension occurred in eight patients, 75% with TEA and only 25% with PoPS. No difference was found in adverse events, length of stay, lung volumes, or discharge disposition. In patients with rib fractures, PoPS analgesia may provide pain control equivalent to TEA while being less invasive and more readily placed by a variety of hospital staff. This pilot study is limited by its small sample size, and therefore additional studies are needed to prove equivalence of PoPS compared to TEA. Therapeutic study, level IV.

  8. Predictive modeling of altitude decompression sickness in humans

    NASA Technical Reports Server (NTRS)

    Kenyon, D. J.; Hamilton, R. W., Jr.; Colley, I. A.; Schreiner, H. R.

    1972-01-01

    The coding of data on 2,565 individual human altitude chamber tests is reported as part of a selection procedure designed to eliminate individuals who are highly susceptible to decompression sickness. Individual aircrew members were exposed to the pressure equivalent of 37,000 feet and observed for one hour. Many entries refer to subjects who were tested two or three times. These data contain a substantial body of statistical information important to the understanding of the mechanisms of altitude decompression sickness and to the computation of improved high-altitude operating procedures. Appropriate computer formats and encoding procedures were developed, and all 2,565 entries have been converted to these formats and stored on magnetic tape. A gas loading file was produced.

  9. Transfer Entropy as a Log-Likelihood Ratio

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
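    The nested-model form of the log-likelihood ratio statistic can be sketched for the simplest Gaussian case: order-1, zero-mean autoregressions fitted by least squares. Under the null of no influence, LR is asymptotically chi-square with 1 degree of freedom, and LR/(2n) estimates the transfer entropy in nats. This is a sketch of the general idea only, not the paper's estimator for finite Markov chains.

```python
import math

def granger_lr(x, y):
    """Log-likelihood-ratio statistic for 'x does not Granger-cause y',
    using order-1 zero-mean autoregressions fitted by least squares.
    Under Gaussian errors, LR = n * log(RSS_reduced / RSS_full)."""
    n = len(y) - 1
    yl, xl, yt = y[:-1], x[:-1], y[1:]
    # Reduced model: y_t = a * y_{t-1}
    a = sum(u * v for u, v in zip(yl, yt)) / sum(u * u for u in yl)
    rss_r = sum((v - a * u) ** 2 for u, v in zip(yl, yt))
    # Full model: y_t = b * y_{t-1} + c * x_{t-1}; solve 2x2 normal equations
    syy = sum(u * u for u in yl)
    sxx = sum(u * u for u in xl)
    sxy = sum(u * v for u, v in zip(yl, xl))
    sy = sum(u * v for u, v in zip(yl, yt))
    sx = sum(u * v for u, v in zip(xl, yt))
    det = syy * sxx - sxy * sxy
    b = (sy * sxx - sx * sxy) / det
    c = (syy * sx - sxy * sy) / det
    rss_f = sum((v - b * u - c * w) ** 2 for u, v, w in zip(yl, yt, xl))
    return n * math.log(rss_r / rss_f)
```

Because the reduced model is nested in the full one, the statistic is always nonnegative, mirroring the nonnegativity of transfer entropy itself.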

  10. Transfer entropy as a log-likelihood ratio.

    PubMed

    Barnett, Lionel; Bossomaier, Terry

    2012-09-28

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.

  11. Equivalent electron fluence for space qualification of shallow junction heteroface GaAs solar cells

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Stock, L. V.

    1984-01-01

    It is desirable to perform qualification tests prior to deployment of solar cells in space power applications. Such test procedures are complicated by the complex mixture of differing radiation components in space which are difficult to simulate in ground test facilities. Although it has been shown that an equivalent electron fluence ratio cannot be uniquely defined for monoenergetic proton exposure of GaAs shallow junction cells, an equivalent electron fluence test can be defined for common spectral components of protons found in space. Equivalent electron fluence levels for the geosynchronous environment are presented.

  12. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nedic, Vladimir, E-mail: vnedic@kg.ac.rs; Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs; Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs

    2014-11-15

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. It is therefore very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. The structure of the traffic flow and the average speed of the traffic flow are chosen as input variables of the neural network. The output variable of the network is the equivalent noise level in the given time period, Leq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed, user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to the other statistical methods in traffic noise level prediction. Highlights:
    • We propose an ANN model for the prediction of traffic noise.
    • We developed an originally designed, user-friendly software package.
    • The results are compared with classical statistical methods.
    • The ANN model shows much better predictive capability than the classical methods.
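    The network's target variable, the equivalent noise level Leq, is computed from interval measurements with the standard energy-equivalent formula; a minimal sketch (the Leq definition is standard acoustics, not code from this paper):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level over equal time intervals:
    Leq = 10 * log10(mean of 10^(L_i / 10)), with L_i in dB."""
    energy_mean = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(energy_mean)
```

Note that Leq averages acoustic energy, not decibel values: a period split evenly between 60 and 70 dB yields about 67.4 dB, not 65 dB.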

  13. A simple signaling rule for variable life-adjusted display derived from an equivalent risk-adjusted CUSUM chart.

    PubMed

    Wittenberg, Philipp; Gan, Fah Fatt; Knoth, Sven

    2018-04-17

    The variable life-adjusted display (VLAD) is the first risk-adjusted graphical procedure proposed in the literature for monitoring the performance of a surgeon. It displays the cumulative sum of expected minus observed deaths. It has since become highly popular because the statistic plotted is easy to understand. But it is also easy to misinterpret a surgeon's performance by utilizing the VLAD, potentially leading to grave consequences. The problem of misinterpretation is essentially caused by the variance of the VLAD's statistic that increases with sample size. In order for the VLAD to be truly useful, a simple signaling rule is desperately needed. Various forms of signaling rules have been developed, but they are usually quite complicated. Without signaling rules, making inferences using the VLAD alone is difficult if not misleading. In this paper, we establish an equivalence between a VLAD with V-mask and a risk-adjusted cumulative sum (RA-CUSUM) chart based on the difference between the estimated probability of death and surgical outcome. Average run length analysis based on simulation shows that this particular RA-CUSUM chart has similar performance as compared to the established RA-CUSUM chart based on the log-likelihood ratio statistic obtained by testing the odds ratio of death. We provide a simple design procedure for determining the V-mask parameters based on a resampling approach. Resampling from a real data set ensures that these parameters can be estimated appropriately. Finally, we illustrate the monitoring of a real surgeon's performance using VLAD with V-mask. Copyright © 2018 John Wiley & Sons, Ltd.
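    The VLAD statistic itself is simple to compute; a minimal sketch of the cumulative expected-minus-observed-deaths curve described above (the V-mask signaling rule and its RA-CUSUM equivalence are the paper's contribution and are not reproduced here):

```python
def vlad(expected_probs, outcomes):
    """Variable life-adjusted display: running cumulative sum of expected
    minus observed deaths. expected_probs[i] is the risk model's death
    probability for case i; outcomes[i] is 1 for death, 0 for survival.
    A rising curve means fewer deaths than the risk model predicts."""
    total, curve = 0.0, []
    for p, death in zip(expected_probs, outcomes):
        total += p - death
        curve.append(total)
    return curve
```

As the abstract notes, the variance of this statistic grows with the number of cases, which is why the raw curve invites misinterpretation without a signaling rule.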

  14. Analysis of the effect of numbers of aircraft operations on community annoyance

    NASA Technical Reports Server (NTRS)

    Connor, W. K.; Patterson, H. P.

    1976-01-01

    The general validity of the equivalent-energy concept as applied to community annoyance to aircraft noise has been recently questioned by investigators using a peak-dBA concept. Using data previously gathered around nine U.S. airports, empirical tests of both concepts are presented. Results show that annoyance response follows neither concept, that annoyance increases steadily with energy-mean level for constant daily operations and with numbers of operations up to 100-199 per day (then decreases for higher numbers), and that the behavior of certain response descriptors is dependent upon the statistical distributions of numbers and levels.

  15. Can electronic medical images replace hard-copy film? Defining and testing the equivalence of diagnostic tests.

    PubMed

    Obuchowski, N A

    2001-10-15

    Electronic medical images are an efficient and convenient format in which to display, store and transmit radiographic information. Before electronic images can be used routinely to screen and diagnose patients, however, it must be shown that readers have the same diagnostic performance with this new format as traditional hard-copy film. Currently, there exist no suitable definitions of diagnostic equivalence. In this paper we propose two criteria for diagnostic equivalence. The first criterion ('population equivalence') considers the variability between and within readers, as well as the mean reader performance. This criterion is useful for most applications. The second criterion ('individual equivalence') involves a comparison of the test results for individual patients and is necessary when patients are followed radiographically over time. We present methods for testing both individual and population equivalence. The properties of the proposed methods are assessed in a Monte Carlo simulation study. Data from a mammography screening study is used to illustrate the proposed methods and compare them with results from more conventional methods of assessing equivalence and inter-procedure agreement. Copyright 2001 John Wiley & Sons, Ltd.

  16. Military Research ColorDx and Printed Color Vision Tests.

    PubMed

    Almustanyir, Ali; Hovis, Jeffery K

    2015-10-01

    To determine the equivalence of the ColorDx Military Research version (mColorDx) test and three printed pseudoisochromatic tests (HRR, Ishihara, and PIPIC) for color vision testing. Participating in the study were 75 color-normals and 47 subjects with red-green color vision defects. Color vision was classified by an anomaloscope. The HRR (4th edition), Ishihara 38-plate edition, and PIPIC tests are printed color vision tests, whereas mColorDx test figures were displayed on a calibrated computer desktop monitor. All tests were repeated after about 1 wk. The kappa (κ) levels of agreement with the anomaloscope for screening were 0.96 or greater for each test, and the values were statistically identical. Specificity for each test was at least 0.99 and sensitivity was at least 0.95. The repeatability of the screening sections for all tests was very good, with κ values greater than 0.95. Deutans tended to miss the tritan screening plates on the HRR and mColorDx tests. The Spearman rank correlation coefficients between the severity of the defect and the anomaloscope range were moderate, with r = 0.45 for the mColorDx and r = 0.6 for the HRR. Both the mColorDx and HRR had perfect agreement with the anomaloscope in classifying the defects as either protan or deutan. The validity of the four tests for color vision screening was statistically identical; however, the HRR may be preferred because it had the highest sensitivity of 0.99, a specificity of 1.0, and a reasonable correlation between the severity rating of the defect and the anomaloscope range.
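    The screening agreement above is summarized with kappa; a minimal sketch of Cohen's kappa for a 2x2 agreement table (the standard formula, not code from the study; the counts in the usage are illustrative):

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for two binary classifications tabulated as
    [[a, b], [c, d]]: a = both positive, d = both negative,
    b and c = disagreements. Corrects observed agreement for chance."""
    n = a + b + c + d
    po = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)
```

Values near 1 indicate agreement well beyond chance, which is why κ ≥ 0.96 against the anomaloscope supports treating the screening tests as interchangeable.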

  17. Leishmania Infection: Laboratory Diagnosing in the Absence of a “Gold Standard”

    PubMed Central

    Rodríguez-Cortés, Alhelí; Ojeda, Ana; Francino, Olga; López-Fuertes, Laura; Timón, Marcos; Alberola, Jordi

    2010-01-01

    There is no gold standard for diagnosing leishmaniases. Our aim was to assess the operative validity of tests used in detecting Leishmania infection using samples from experimental infections, a reliable equivalent to the classic definition of a gold standard. Without statistical differences, the highest sensitivity was achieved by protein A (ProtA), immunoglobulin (Ig)G2, indirect fluorescence antibody test (IFAT), lymphocyte proliferation assay, quantitative real-time polymerase chain reaction of bone marrow (qPCR-BM), qPCR-Blood, and IgG; and the highest specificity by IgG1, IgM, IgA, qPCR-Blood, IgG, IgG2, and qPCR-BM. Maximum positive predictive value was obtained simultaneously by IgG2, qPCR-Blood, and IgG; and maximum negative predictive value by qPCR-BM. Best positive and negative likelihood ratios were obtained by IgG2. The test having the greatest, statistically significant, area under the receiver operating characteristic curve was the IgG2 enzyme-linked immunosorbent assay (ELISA). Thus, according to the gold standard used, IFAT and qPCR are far from fulfilling the requirements to be considered gold standards, and the test showing the highest potential to detect Leishmania infection is the Leishmania-specific ELISA IgG2. PMID:20134001
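    The operative-validity measures compared above all follow from a 2x2 table of test results against the reference standard (here, known experimental infection status); a minimal sketch using the standard definitions (the counts in the test below are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values, and likelihood
    ratios from true/false positives and negatives against a
    reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "lr_pos": sens / (1 - spec),    # positive likelihood ratio
        "lr_neg": (1 - sens) / spec,    # negative likelihood ratio
    }
```

Unlike predictive values, the likelihood ratios do not depend on prevalence, which is why the abstract singles them out for ranking the assays.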

  18. Leishmania infection: laboratory diagnosing in the absence of a "gold standard".

    PubMed

    Rodríguez-Cortés, Alhelí; Ojeda, Ana; Francino, Olga; López-Fuertes, Laura; Timón, Marcos; Alberola, Jordi

    2010-02-01

    There is no gold standard for diagnosing leishmaniases. Our aim was to assess the operative validity of tests used in detecting Leishmania infection using samples from experimental infections, a reliable equivalent to the classic definition of a gold standard. Without statistical differences, the highest sensitivity was achieved by protein A (ProtA), immunoglobulin (Ig)G2, indirect fluorescence antibody test (IFAT), lymphocyte proliferation assay, quantitative real-time polymerase chain reaction of bone marrow (qPCR-BM), qPCR-Blood, and IgG; and the highest specificity by IgG1, IgM, IgA, qPCR-Blood, IgG, IgG2, and qPCR-BM. Maximum positive predictive value was obtained simultaneously by IgG2, qPCR-Blood, and IgG; and maximum negative predictive value by qPCR-BM. Best positive and negative likelihood ratios were obtained by IgG2. The test having the greatest, statistically significant, area under the receiver operating characteristic curve was the IgG2 enzyme-linked immunosorbent assay (ELISA). Thus, according to the gold standard used, IFAT and qPCR are far from fulfilling the requirements to be considered gold standards, and the test showing the highest potential to detect Leishmania infection is the Leishmania-specific ELISA IgG2.

  19. Minimal sufficient positive-operator valued measure on a separable Hilbert space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuramochi, Yui, E-mail: kuramochi.yui.22c@st.kyoto-u.ac.jp

    We introduce a concept of a minimal sufficient positive-operator valued measure (POVM), which is the least redundant POVM among the POVMs that have the equivalent information about the measured quantum system. Assuming the system Hilbert space to be separable, we show that for a given POVM, a sufficient statistic called a Lehmann-Scheffé-Bahadur statistic induces a minimal sufficient POVM. We also show that every POVM has an equivalent minimal sufficient POVM and that such a minimal sufficient POVM is unique up to relabeling neglecting null sets. We apply these results to discrete POVMs and information conservation conditions proposed by the author.

  20. Xenogeneic collagen matrix with coronally advanced flap compared to connective tissue with coronally advanced flap for the treatment of dehiscence-type recession defects.

    PubMed

    McGuire, Michael K; Scheyer, E Todd

    2010-08-01

    For root coverage therapy, the connective tissue graft (CTG) plus coronally advanced flap (CAF) is considered the gold standard therapy against which alternative therapies are generally compared. When evaluating these therapies, in addition to traditional measures of root coverage, subject-reported, qualitative measures of esthetics, pain, and overall preferences for alternative procedures should also be considered. This study determines if a xenogeneic collagen matrix (CM) with CAF might be as effective as CTG+CAF in the treatment of recession defects. This study was a single-masked, randomized, controlled, split-mouth study of dehiscence-type recession defects in contralateral sites; one defect received CTG+CAF and the other defect received CM+CAF. A total of 25 subjects (8 male, 17 female; mean age: 43.7 +/- 12.2 years) were evaluated at 6 months and 1 year. The primary efficacy endpoint was recession depth at 6 months. Secondary endpoints included traditional periodontal measures, such as width of keratinized tissue and percentage of root coverage. Subject-reported values of pain, discomfort, and esthetic satisfaction were also recorded. At 6 months, recession depth was on average 0.52 mm for test sites and 0.10 mm for control sites. Recession depth change from baseline was statistically significant between test and control, with an average of 2.62 mm gained at test sites and 3.10 mm gained at control sites for a difference of 0.4 mm (P = 0.0062). At 1 year, test percentage of root coverage averaged 88.5%, and controls averaged 99.3% (P = 0.0313). Keratinized tissue width gains were equivalent for both therapies and averaged 1.34 mm for test sites and 1.26 mm for control sites (P = 0.9061). There were no statistically significant differences between subject-reported values for esthetic satisfaction, and subjects' assessments of pain and discomfort were also equivalent. 
When balanced with subject-reported esthetic values and compared to historical root coverage outcomes reported by other investigators, CM+CAF presents a viable alternative to CTG+CAF, without the morbidity of soft tissue graft harvest.

  1. Examining Equivalency of the Driver Risk Inventory Test Versions: Does It Matter Which Version I Use?

    ERIC Educational Resources Information Center

    Degiorgio, Lisa

    2015-01-01

    Equivalency of test versions is often assumed by counselors and evaluators. This study examined two versions, paper-pencil and computer based, of the Driver Risk Inventory, a DUI/DWI (driving under the influence/driving while intoxicated) risk assessment. An overview of computer-based testing and standards for equivalency is also provided. Results…

  2. Invariance levels across language versions of the PISA 2009 reading comprehension tests in Spain.

    PubMed

    Elosua Oliden, Paula; Mujika Lizaso, Josu

    2013-01-01

    The PISA project provides the basis for studying curriculum design and for comparing factors associated with school effectiveness. These studies are only valid if the different language versions are equivalent to each other. In Spain, the application of PISA in autonomous regions with their own languages means that equivalency must also be extended to the Spanish, Galician, Catalan and Basque versions of the test. The aim of this work was to analyse the equivalence among the four language versions of the Reading Comprehension Test (PISA 2009). After defining the testlet as the unit of analysis, equivalence among the language versions was analysed using two invariance testing procedures: multiple-group mean and covariance structure analyses for ordinal data and ordinal logistic regression. The procedures yielded concordant results supporting metric equivalence across all four language versions: Spanish, Basque, Galician and Catalan. The equivalence supports the estimated reading literacy score comparability among the language versions used in Spain.

  3. Equivalence relations in individuals with language limitations and mental retardation.

    PubMed Central

    O'Donnell, Jennifer; Saunders, Kathryn J

    2003-01-01

    The study of equivalence relations exhibited by individuals with mental retardation and language limitations holds the promise of providing information of both theoretical and practical significance. We reviewed the equivalence literature with this population, defined in terms of subjects having moderate, severe, or profound mental retardation. The literature includes 55 such individuals, most of whom showed positive outcomes on equivalence tests. The results to date suggest that naming skills are not necessary for positive equivalence test outcomes. Thus far, however, relatively few subjects with minimal language have been studied. Moreover, we suggest that the scientific contributions of studies in this area would be enhanced with better documentation of language skills and other subject characteristics. With recent advances in laboratory procedures for establishing the baseline performances necessary for equivalence tests, this research area is poised for rapid growth. PMID:13677612

  4. Design and experimental verification of an equivalent forebody to produce disturbances equivalent to those of a forebody with flowing inlets

    NASA Technical Reports Server (NTRS)

    Haynes, Davy A.; Miller, David S.; Klein, John R.; Louie, Check M.

    1988-01-01

    A method by which a simple equivalent faired body can be designed to replace a more complex body with flowing inlets has been demonstrated for supersonic flow. An analytically defined, geometrically simple faired inlet forebody has been designed using a linear potential code to generate flow perturbations equivalent to those produced by a much more complex forebody with inlets. An equivalent forebody wind-tunnel model was fabricated and a test was conducted in NASA Langley Research Center's Unitary Plan Wind Tunnel. The test Mach number range was 1.60 to 2.16 for angles of attack of -4 to 16 deg. Test results indicate that, for the purposes considered here, the equivalent forebody simulates the original flowfield disturbances to an acceptable degree of accuracy.

  5. Brain or strain? Symptoms alone do not distinguish physiologic concussion from cervical/vestibular injury.

    PubMed

    Leddy, John J; Baker, John G; Merchant, Asim; Picano, John; Gaile, Daniel; Matuszak, Jason; Willer, Barry

    2015-05-01

    To compare symptoms in patients with physiologic postconcussion disorder (PCD) versus cervicogenic/vestibular PCD. We hypothesized that most symptoms would not be equivalent. In particular, we hypothesized that cognitive symptoms would be more often associated with physiologic PCD. Retrospective review of symptom reports from patients who completed a 22-item symptom questionnaire. University-based concussion clinic. Convenience sample of 128 patients who had symptoms after head injury for more than 3 weeks and who had provocative treadmill exercise testing. Subjects were classified as either physiologic PCD (abnormal treadmill performance and a normal cervical/vestibular physical examination) or cervicogenic/vestibular PCD (CGV, normal treadmill performance, and an abnormal cervical/vestibular physical examination). Self-reported symptoms. Univariate and multivariate methods, including t-tests, tests of equivalence, a logistic regression model, k-nearest neighbor analysis, multidimensional scaling, and principal component analysis were used to see whether symptoms could distinguish PCD from CGV. None of the statistical methods used to analyze self-reported symptoms was able to adequately distinguish patients with PCD from patients with CGV. Symptoms after head injury, including cognitive symptoms, have traditionally been ascribed to brain injury, but they do not reliably discriminate between physiologic PCD and cervicogenic/vestibular PCD. Clinicians should consider specific testing of exercise tolerance and perform a physical examination of the cervical spine and the vestibular/ocular systems to determine the etiology of postconcussion symptoms. Symptoms after head injury, including cognitive symptoms, do not discriminate between concussion and cervical/vestibular injury.

  6. Quantifying low-frequency revertants in oral poliovirus vaccine using next generation sequencing.

    PubMed

    Sarcey, Eric; Serres, Aurélie; Tindy, Fabrice; Chareyre, Audrey; Ng, Siemon; Nicolas, Marine; Vetter, Emmanuelle; Bonnevay, Thierry; Abachin, Eric; Mallet, Laurent

    2017-08-01

    Spontaneous reversion to neurovirulence of live attenuated oral poliovirus vaccine (OPV) serotype 3 (chiefly involving the n.472U>C mutation), must be monitored during production to ensure vaccine safety and consistency. Mutant analysis by polymerase chain reaction and restriction enzyme cleavage (MAPREC) has long been endorsed by the World Health Organization as the preferred in vitro test for this purpose; however, it requires radiolabeling, which is no longer supported by many laboratories. We evaluated the performance and suitability of next generation sequencing (NGS) as an alternative to MAPREC. The linearity of NGS was demonstrated at revertant concentrations equivalent to the study range of 0.25%-1.5%. NGS repeatability and intermediate precision were comparable across all tested samples, and NGS was highly reproducible, irrespective of sequencing platform or analysis software used. NGS was performed on OPV serotype 3 working seed lots and monovalent bulks (n=21) that were previously tested using MAPREC, and which covered the representative range of vaccine production. Percentages of 472-C revertants identified by NGS and MAPREC were comparable and highly correlated (r≥0.80), with a Pearson correlation coefficient of 0.95585 (p<0.0001). NGS demonstrated statistically equivalent performance to that of MAPREC for quantifying low-frequency OPV serotype 3 revertants, and offers a valid alternative to MAPREC. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Evaluation of nitrous oxide as a substitute for sulfur hexafluoride to reduce global warming impacts of ANSI/HPS N13.1 gaseous uniformity testing

    DOE PAGES

    Yu, Xiao-Ying; Barnett, J. Matthew; Amidan, Brett G.; ...

    2017-12-12

    The ANSI/HPS N13.1–2011 standard requires gaseous tracer uniformity testing for sampling associated with stacks used in radioactive air emissions. Sulfur hexafluoride (SF6), a greenhouse gas with a high global warming potential, has long been the gas tracer used in such testing. To reduce the impact of gas tracer tests on the environment, nitrous oxide (N2O) was evaluated as a potential replacement for SF6. The physical evaluation included the development of a test plan to record the percent coefficient of variance and the percent maximum deviation between the two gases while considering variables such as fan configuration, injection position, and flow rate. Statistical power was calculated to determine how many sample sets were needed, and computational fluid dynamic modeling was utilized to estimate overall mixing in stacks. Results show there are no significant differences between the behaviors of the two gases, and SF6 modeling corroborated N2O test results. Although, in principle, all tracer gases should behave in an identical manner for measuring mixing within a stack, the series of physical tests guided by statistics was performed to demonstrate the equivalence of N2O testing to SF6 testing in the context of stack qualification tests. In conclusion, the results demonstrate that N2O is a viable choice, leading to a fourfold reduction in global warming impacts for future similar compliance-driven testing.
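
The two uniformity metrics recorded in the test plan can be sketched as follows. The traverse-point readings are hypothetical, and the acceptance limits in the standard are not reproduced here:

```python
import statistics

def uniformity_stats(concentrations):
    """Percent coefficient of variance and percent maximum deviation
    across traverse-point tracer concentrations (general form of the
    ANSI/HPS N13.1 uniformity metrics; limits not included)."""
    mean = statistics.mean(concentrations)
    cov = 100.0 * statistics.stdev(concentrations) / mean
    max_dev = 100.0 * max(abs(c - mean) for c in concentrations) / mean
    return cov, max_dev

# Hypothetical tracer readings at five traverse points:
cov, max_dev = uniformity_stats([98.0, 101.0, 100.0, 102.0, 99.0])
```

Both metrics are computed identically for either tracer gas, which is what makes the side-by-side SF6/N2O comparison straightforward.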

  9. Diffusion tensor imaging with tract-based spatial statistics reveals local white matter abnormalities in preterm infants.

    PubMed

    Anjari, Mustafa; Srinivasan, Latha; Allsop, Joanna M; Hajnal, Joseph V; Rutherford, Mary A; Edwards, A David; Counsell, Serena J

    2007-04-15

    Infants born preterm have a high incidence of neurodevelopmental impairment in later childhood, often associated with poorly defined cerebral white matter abnormalities. Diffusion tensor imaging quantifies the diffusion of water within tissues and can assess microstructural abnormalities in the developing preterm brain. Tract-based spatial statistics (TBSS) is an automated observer-independent method of aligning fractional anisotropy (FA) images from multiple subjects to allow groupwise comparisons of diffusion tensor imaging data. We applied TBSS to test the hypothesis that preterm infants have reduced fractional anisotropy in specific regions of white matter compared to term-born controls. We studied 26 preterm infants with no evidence of focal lesions on conventional magnetic resonance imaging (MRI) at term equivalent age and 6 healthy term-born control infants. We found that the centrum semiovale, frontal white matter and the genu of the corpus callosum showed significantly lower FA in the preterm group. Infants born at less than or equal to 28 weeks gestational age (n=11) displayed additional reductions in FA in the external capsule, the posterior aspect of the posterior limb of the internal capsule and the isthmus and middle portion of the body of the corpus callosum. This study demonstrates that TBSS provides an observer-independent method of identifying white matter abnormalities in the preterm brain at term equivalent age in the absence of focal lesions.

  10. Quantification of liver fat with respiratory-gated quantitative chemical shift encoded MRI.

    PubMed

    Motosugi, Utaroh; Hernando, Diego; Bannas, Peter; Holmes, James H; Wang, Kang; Shimakawa, Ann; Iwadate, Yuji; Taviani, Valentina; Rehm, Jennifer L; Reeder, Scott B

    2015-11-01

    To evaluate free-breathing chemical shift-encoded (CSE) magnetic resonance imaging (MRI) for quantification of hepatic proton density fat-fraction (PDFF). A secondary purpose was to evaluate hepatic R2* values measured using free-breathing quantitative CSE-MRI. Fifty patients (mean age, 56 years) were prospectively recruited and underwent the following four acquisitions to measure PDFF and R2*: 1) conventional breath-hold CSE-MRI (BH-CSE); 2) respiratory-gated CSE-MRI using respiratory bellows (BL-CSE); 3) respiratory-gated CSE-MRI using navigator echoes (NV-CSE); and 4) single voxel MR spectroscopy (MRS) as the reference standard for PDFF. Image quality was evaluated by two radiologists. MRI-PDFF measured from the three CSE-MRI methods were compared with MRS-PDFF using linear regression. The PDFF and R2* values were compared using two one-sided t-tests to evaluate statistical equivalence. There was no significant difference in the image quality scores among the three CSE-MRI methods for either PDFF (P = 1.000) or R2* maps (P = 0.359-1.000). Correlation coefficients (95% confidence interval [CI]) for the PDFF comparisons were 0.98 (0.96-0.99) for BH-, 0.99 (0.97-0.99) for BL-, and 0.99 (0.98-0.99) for NV-CSE. The statistical equivalence test revealed that the mean difference in PDFF and R2* between any two of the three CSE-MRI methods was less than ±1 percentage point (pp) and ±5 s⁻¹, respectively (P < 0.046). Respiratory-gated CSE-MRI with respiratory bellows or navigator echoes are feasible methods to quantify liver PDFF and R2* and are as valid as the standard breath-hold technique. © 2015 Wiley Periodicals, Inc.
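
The two one-sided t-test (TOST) procedure used here to declare equivalence within a margin can be sketched for paired differences. The differences and margin below are illustrative, not the study's data:

```python
import numpy as np
from scipy import stats

def tost_paired(diffs, margin):
    """Two one-sided t-tests (TOST) for equivalence of paired measurements.
    Equivalence is supported if the mean difference lies within +/- margin.
    Returns the TOST p-value (the larger of the two one-sided p-values)."""
    d = np.asarray(diffs, dtype=float)
    se = d.std(ddof=1) / np.sqrt(len(d))
    df = len(d) - 1
    t_lower = (d.mean() + margin) / se   # tests H0: mean <= -margin
    t_upper = (d.mean() - margin) / se   # tests H0: mean >= +margin
    p_lower = stats.t.sf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    return max(p_lower, p_upper)

# Hypothetical paired PDFF differences (percentage points), 1 pp margin:
p = tost_paired([0.1, -0.05, 0.02, 0.0, -0.1, 0.07, 0.03, -0.02], margin=1.0)
```

A small TOST p-value supports equivalence; note the logic is inverted relative to an ordinary difference test, where a small p-value argues against equality.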

  11. Using Optimal Test Assembly Methods for Shortening Patient-Reported Outcome Measures: Development and Validation of the Cochin Hand Function Scale-6: A Scleroderma Patient-Centered Intervention Network Cohort Study.

    PubMed

    Levis, Alexander W; Harel, Daphna; Kwakkenbos, Linda; Carrier, Marie-Eve; Mouthon, Luc; Poiraudeau, Serge; Bartlett, Susan J; Khanna, Dinesh; Malcarne, Vanessa L; Sauve, Maureen; van den Ende, Cornelia H M; Poole, Janet L; Schouffoer, Anne A; Welling, Joep; Thombs, Brett D

    2016-11-01

    To develop and validate a short form of the Cochin Hand Function Scale (CHFS), which measures hand disability, for use in systemic sclerosis, using objective criteria and reproducible techniques. Responses on the 18-item CHFS were obtained from English-speaking patients enrolled in the Scleroderma Patient-Centered Intervention Network Cohort. CHFS unidimensionality was verified using confirmatory factor analysis, and an item response theory model was fit to CHFS items. Optimal test assembly (OTA) methods identified a maximally precise short form for each possible form length between 1 and 17 items. The final short form selected was the form with the least number of items that maintained statistically equivalent convergent validity, compared to the full-length CHFS, with the Health Assessment Questionnaire (HAQ) disability index (DI) and the physical function domain of the 29-item Patient-Reported Outcomes Measurement Information System (PROMIS-29). There were 601 patients included. A 6-item short form of the CHFS (CHFS-6) was selected. The CHFS-6 had a Cronbach's alpha of 0.93. Correlations of the CHFS-6 summed score with HAQ DI (r = 0.79) and PROMIS-29 physical function (r = -0.54) were statistically equivalent to the CHFS (r = 0.81 and r = -0.56). The correlation with the full CHFS was high (r = 0.98). The OTA procedure generated a valid short form of the CHFS with minimal loss of information compared to the full-length form. The OTA method used was based on objective, prespecified criteria, but should be further studied for viability as a general procedure for shortening patient-reported outcome measures in health research. © 2016, American College of Rheumatology.

  12. Method modification of the Legipid® Legionella fast detection test kit.

    PubMed

    Albalat, Guillermo Rodríguez; Broch, Begoña Bedrina; Bono, Marisa Jiménez

    2014-01-01

    Legipid® Legionella Fast Detection is a test based on combined magnetic immunocapture and enzyme-immunoassay (CEIA) for the detection of Legionella in water. The test is based on the use of anti-Legionella antibodies immobilized on magnetic microspheres. The target microorganism is preconcentrated by filtration. Immunomagnetic analysis is applied on these preconcentrated water samples in a final test portion of 9 mL. The test kit was certified by the AOAC Research Institute as Performance Tested Method (PTM) No. 111101 in a PTM validation which certifies the performance claims of the test method in comparison to the ISO reference method 11731-1998 and the revision 11731-2004 "Water Quality: Detection and Enumeration of Legionella pneumophila" in potable water, industrial water, and waste water. The modification of this test kit has been approved. The modification includes increasing the target analyte from L. pneumophila to Legionella species and adding an optical reader to the test method. In this study, 71 strains of Legionella spp. other than L. pneumophila were tested to determine their reactivity with the kit based on CEIA. All the strains of Legionella spp. tested by the CEIA test were confirmed positive by reference standard method ISO 11731. This test (PTM 111101) has been modified to include a final optical reading. A methods comparison study was conducted to demonstrate the equivalence of this modification to the reference culture method. Two water matrices were analyzed. Results show no statistically detectable difference between the test method and the reference culture method for the enumeration of Legionella spp. The relative level of detection was 93 CFU/volume examined (LOD50). For optical reading, the LOD was 40 CFU/volume examined and the LOQ was 60 CFU/volume examined. Results showed that the test Legipid Legionella Fast Detection is equivalent to the reference culture method for the enumeration of Legionella spp.

  13. Monte Carlo study of out-of-field exposure in carbon-ion radiotherapy with a passive beam: Organ doses in prostate cancer treatment.

    PubMed

    Yonai, Shunsuke; Matsufuji, Naruhiro; Akahane, Keiichi

    2018-04-23

    The aim of this work was to estimate typical dose equivalents to out-of-field organs during carbon-ion radiotherapy (CIRT) with a passive beam for prostate cancer treatment. Additionally, sensitivity analyses of organ doses for various beam parameters and phantom sizes were performed. Because the CIRT out-of-field dose depends on the beam parameters, the typical values of those parameters were determined from statistical data on the target properties of patients who received CIRT at the Heavy-Ion Medical Accelerator in Chiba (HIMAC). Using these typical beam-parameter values, out-of-field organ dose equivalents during CIRT for typical prostate treatment were estimated by Monte Carlo simulations using the Particle and Heavy-Ion Transport Code System (PHITS) and the ICRP reference phantom. The results showed that the dose decreased with distance from the target, ranging from 116 mSv in the testes to 7 mSv in the brain. The organ dose equivalents per treatment dose were lower than those either in 6-MV intensity-modulated radiotherapy or in brachytherapy with an Ir-192 source for organs within 40 cm of the target. Sensitivity analyses established that the differences from typical values were within ∼30% for all organs, except the sigmoid colon. The typical out-of-field organ dose equivalents during passive-beam CIRT were shown. The low sensitivity of the dose equivalent in organs farther than 20 cm from the target indicated that individual dose assessments required for retrospective epidemiological studies may be limited to organs around the target in cases of passive-beam CIRT for prostate cancer. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  14. A global logrank test for adaptive treatment strategies based on observational studies.

    PubMed

    Li, Zhiguo; Valenstein, Marcia; Pfeiffer, Paul; Ganoczy, Dara

    2014-02-28

    In studying adaptive treatment strategies, a natural question that is of paramount interest is whether there is any significant difference among all possible treatment strategies. When the outcome variable of interest is time-to-event, we propose an inverse probability weighted logrank test for testing the equivalence of a fixed set of pre-specified adaptive treatment strategies based on data from an observational study. The weights take into account both the possible selection bias in an observational study and the fact that the same subject may be consistent with more than one treatment strategy. The asymptotic distribution of the weighted logrank statistic under the null hypothesis is obtained. We show that, in an observational study where the treatment selection probabilities need to be estimated, the estimation of these probabilities does not have an effect on the asymptotic distribution of the weighted logrank statistic, as long as the estimation of the parameters in the models for these probabilities is n-consistent. Finite sample performance of the test is assessed via a simulation study. The simulations also show that the test is fairly robust to misspecification of the models for the probabilities of treatment selection. The method is applied to analyze data on antidepressant adherence time from an observational database maintained at the Department of Veterans Affairs' Serious Mental Illness Treatment Research and Evaluation Center. Copyright © 2013 John Wiley & Sons, Ltd.
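
For orientation, a sketch of the two-group logrank statistic in its plain, unweighted form; the paper's method would additionally scale each subject's contribution by an inverse-probability-of-treatment weight. The survival data below are toy values:

```python
import numpy as np
from scipy.stats import chi2

def logrank(time, event, group):
    """Unweighted two-group logrank test. At each event time, compare
    observed deaths in group 1 with their expectation under the null of
    equal hazards. (The paper's weighted version would multiply each
    subject's contribution by an inverse-probability weight.)"""
    time, event, group = map(np.asarray, (time, event, group))
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        obs_minus_exp += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = obs_minus_exp**2 / var
    return stat, chi2.sf(stat, df=1)

# Two identical groups: the statistic should be exactly zero.
stat, p = logrank(time=[1, 2, 3, 1, 2, 3],
                  event=[1, 1, 1, 1, 1, 1],
                  group=[0, 0, 0, 1, 1, 1])
```

With identical survival experience in both groups the observed-minus-expected terms cancel at every event time, which is a useful sanity check on any logrank implementation.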

  15. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

    PubMed Central

    2013-01-01

    Background In statistical modeling, finding the most favorable coding for an explanatory quantitative variable involves many tests. This process involves multiple testing problems and requires the correction of the significance level. Methods For each coding, a test on the nullity of the coefficient associated with the new coded variable is computed. The selected coding corresponds to that associated with the largest statistical test (or equivalently the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probability Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performances of the proposed methods. These methods were illustrated using the data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented using R, and the associated CPMCGLM R package is available on the CRAN. PMID:23758852
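
The resampling idea — take the largest statistic over all candidate codings, then recompute that maximum under resampled null data to obtain a corrected p-value — can be sketched as follows. The codings and data are hypothetical, and absolute correlation stands in for the score test used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def maxtest_permutation_p(x, y, codings, n_perm=2000):
    """Permutation-corrected p-value for choosing the best of several
    codings of x. The observed statistic is the largest absolute
    correlation with y over all codings; its null distribution is the
    same maximum recomputed under permutations of y."""
    coded = np.column_stack([c(x) for c in codings])
    def max_abs_corr(yy):
        return max(abs(np.corrcoef(coded[:, j], yy)[0, 1])
                   for j in range(coded.shape[1]))
    observed = max_abs_corr(y)
    exceed = sum(max_abs_corr(rng.permutation(y)) >= observed
                 for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)

# Hypothetical codings: raw, dichotomized at the median, log-shifted.
x = rng.normal(size=60)
y = 0.5 * x + rng.normal(size=60)
codings = [lambda v: v,
           lambda v: (v > np.median(v)).astype(float),
           lambda v: np.log(v - v.min() + 1.0)]
p_corrected = maxtest_permutation_p(x, y, codings)
```

Because the maximum over codings is recomputed inside every permutation, the selection step is built into the null distribution, which is exactly what a naive per-coding p-value misses.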

  16. Development and implementation of an international proficiency testing program for a neutralizing antibody assay for HIV-1 in TZM-bl cells.

    PubMed

    Todd, Christopher A; Greene, Kelli M; Yu, Xuesong; Ozaki, Daniel A; Gao, Hongmei; Huang, Yunda; Wang, Maggie; Li, Gary; Brown, Ronald; Wood, Blake; D'Souza, M Patricia; Gilbert, Peter; Montefiori, David C; Sarzotti-Kelsoe, Marcella

    2012-01-31

    Recent advances in assay technology have led to major improvements in how HIV-1 neutralizing antibodies are measured. A luciferase reporter gene assay performed in TZM-bl (JC53bl-13) cells has been optimized and validated. Because this assay has been adopted by multiple laboratories worldwide, an external proficiency testing program was developed to ensure data equivalency across laboratories performing this neutralizing antibody assay for HIV/AIDS vaccine clinical trials. The program was optimized by conducting three independent rounds of testing, with an increased level of stringency from the first to third round. Results from the participating domestic and international laboratories improved each round as factors that contributed to inter-assay variability were identified and minimized. Key contributors to increased agreement were experience among laboratories and standardization of reagents. A statistical qualification rule was developed using a simulation procedure based on the three optimization rounds of testing, where a laboratory qualifies if at least 25 of the 30 ID50 values lie within the acceptance ranges. This ensures no more than a 20% risk that a participating laboratory fails to qualify when it should, as defined by the simulation procedure. Five experienced reference laboratories were identified and tested a series of standardized reagents to derive the acceptance ranges for pass-fail criteria. This Standardized Proficiency Testing Program is the first available for the evaluation and documentation of assay equivalency for laboratories performing HIV-1 neutralizing antibody assays and may provide guidance for the development of future proficiency testing programs for other assay platforms. Copyright © 2011 Elsevier B.V. All rights reserved.
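
The qualification rule (at least 25 of 30 ID50 values within their acceptance ranges) and its associated failure risk can be sketched under a simplifying assumption of independent, identically distributed in-range outcomes; the paper's simulation procedure is more involved than this binomial shortcut:

```python
from math import comb

def qualifies(id50_in_range):
    """Pass-fail rule: at least 25 of the 30 ID50 values must lie
    inside their acceptance ranges (one boolean per value)."""
    assert len(id50_in_range) == 30
    return sum(id50_in_range) >= 25

def risk_of_failing(p_in_range):
    """Probability of failing (fewer than 25 in range) if each ID50
    independently lands in range with probability p_in_range.
    Simplified binomial stand-in for the paper's simulation."""
    return sum(comb(30, k) * p_in_range**k * (1 - p_in_range)**(30 - k)
               for k in range(25))

lab = [True] * 26 + [False] * 4   # 26 of 30 in range
```

The acceptance ranges themselves were derived empirically from the five reference laboratories, so the per-value in-range probability is what the optimization rounds effectively tuned.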

  17. Is digital photography an accurate and precise method for measuring range of motion of the hip and knee?

    PubMed

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2017-09-07

    Accurate measurements of knee and hip motion are required for management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (difference of 1.4° for hip abduction and 1.7° for knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.
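
The study's definitions of accuracy (deviation from the motion-capture reference) and precision (proportion of measurements within 5° or 10°) can be sketched as follows; the readings are hypothetical:

```python
def accuracy_and_precision(measured, reference, tol=5.0):
    """Accuracy: mean absolute deviation from the reference standard.
    Precision (as defined in the study): proportion of measurements
    within +/- tol degrees of the reference."""
    errors = [abs(m - r) for m, r in zip(measured, reference)]
    accuracy = sum(errors) / len(errors)
    precision = sum(e <= tol for e in errors) / len(errors)
    return accuracy, precision

# Hypothetical goniometer readings vs. motion-capture reference (degrees):
acc, prec = accuracy_and_precision([121, 118, 125, 130],
                                   [120, 120, 120, 126], tol=5.0)
```

Note that under these definitions a technique can be accurate on average yet imprecise, which is why the two measures separated the techniques differently above.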

  18. Lepidopteran larva consumption of soybean foliage: basis for developing multiple-species economic thresholds for pest management decisions.

    PubMed

    Bueno, Regiane Cristina Oliveira de Freitas; Bueno, Adeney de Freitas; Moscardi, Flávio; Parra, José Roberto Postali; Hoffmann-Campo, Clara Beatriz

    2011-02-01

    Defoliation by Anticarsia gemmatalis (Hübner), Pseudoplusia includens (Walker), Spodoptera eridania (Cramer), S. cosmioides (Walker) and S. frugiperda (JE Smith) (Lepidoptera: Noctuidae) was evaluated in four soybean genotypes. A multiple-species economic threshold (ET), based upon the species' feeding capacity, is proposed with the aim of improving growers' management decisions on when to initiate control measures for the species complex. Consumption by A. gemmatalis, S. cosmioides or S. eridania on different genotypes was similar. The highest consumption of P. includens was 92.7 cm(2) on Codetec 219RR; that of S. frugiperda was 118 cm(2) on Codetec 219RR and 115.1 cm(2) on MSoy 8787RR. The insect injury equivalent for S. cosmioides, calculated on the basis of insect consumption, was double the standard consumption by A. gemmatalis, and statistically different from the other species tested, which were similar to each other. As S. cosmioides always defoliated nearly twice the leaf area of the other species, the injury equivalent would be 2 for this lepidopteran species and 1 for the other species. The recommended multiple-species ET to trigger the beginning of insect control would then be 20 insect equivalents per linear metre. Copyright © 2010 Society of Chemical Industry.
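
The proposed multiple-species threshold reduces to a weighted count of larvae, with S. cosmioides weighted double. A sketch with hypothetical field counts:

```python
def insect_equivalents(counts_per_metre, injury_equivalent):
    """Total insect equivalents per linear metre; control is triggered
    when the total reaches the multiple-species ET of 20."""
    total = sum(counts_per_metre[sp] * injury_equivalent[sp]
                for sp in counts_per_metre)
    return total, total >= 20

# Injury equivalents from the study; field counts below are hypothetical.
eq = {"A. gemmatalis": 1, "P. includens": 1, "S. cosmioides": 2,
      "S. eridania": 1, "S. frugiperda": 1}
counts = {"A. gemmatalis": 8, "P. includens": 4, "S. cosmioides": 5,
          "S. eridania": 0, "S. frugiperda": 0}
total, spray = insect_equivalents(counts, eq)
```

Here 8 + 4 + 2×5 = 22 equivalents exceeds the ET of 20, so control would be triggered even though only 17 larvae were counted.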

  19. Prediction/discussion-based learning cycle versus conceptual change text: comparative effects on students' understanding of genetics

    NASA Astrophysics Data System (ADS)

    Khawaldeh, Salem A. Al

    2013-07-01

    Background and purpose: The purpose of this study was to investigate the comparative effects of a prediction/discussion-based learning cycle (HPD-LC), conceptual change text (CCT) and traditional instruction on 10th grade students' understanding of genetics concepts. Sample: Participants were 112 10th basic grade male students in three classes of the same school located in an urban area. The three classes, taught by the same biology teacher, were randomly assigned as a prediction/discussion-based learning cycle class (n = 39), conceptual change text class (n = 37) and traditional class (n = 36). Design and method: A quasi-experimental pre-test/post-test non-equivalent control group design was adopted. Participants completed the Genetics Concept Test as both pre-test and post-test, to examine the effects of the instructional strategies on their genetics understanding. Pre-test scores and Test of Logical Thinking scores were used as covariates. Results: The analysis of covariance showed a statistically significant difference between the experimental and control groups, in favor of the experimental groups, after treatment. However, no statistically significant difference between the experimental groups (HPD-LC versus CCT instruction) was found. Conclusions: Overall, the findings of this study support the use of the prediction/discussion-based learning cycle and conceptual change text in both research and teaching. The findings may be useful for improving classroom practices in teaching science concepts and for the development of suitable materials promoting students' understanding of science.

  20. Minimally Invasive and Open Distal Chevron Osteotomy for Mild to Moderate Hallux Valgus.

    PubMed

    Brogan, Kit; Lindisfarne, Edward; Akehurst, Harold; Farook, Usama; Shrier, Will; Palmer, Simon

    2016-11-01

    Minimally invasive surgical (MIS) techniques are increasingly being used in foot and ankle surgery but it is important that they are adopted only once they have been shown to be equivalent or superior to open techniques. We believe that the main advantages of MIS are found in the early postoperative period, but in order to adopt it as a technique longer-term studies are required. The aim of this study was to compare the 2-year outcomes of a third-generation MIS distal chevron osteotomy with a comparable traditional open distal chevron osteotomy for mild-moderate hallux valgus. Our null hypothesis was that the 2 techniques would yield equivalent clinical and radiographic results at 2 years. This was a retrospective cohort study. Eighty-one consecutive feet (49 MIS and 32 open distal chevron osteotomies) were followed up for a minimum 24 months (range 24-58). All patients were clinically assessed using the Manchester-Oxford Foot Questionnaire. Radiographic measures included hallux valgus angle, the intermetatarsal angle, hallux interphalangeal angle, metatarsal phalangeal joint angle, distal metatarsal articular angle, tibial sesamoid position, shape of the first metatarsal head, and plantar offset. Statistical analysis was done using Student t test or Wilcoxon rank-sum test for continuous data and Pearson chi-square test for categorical data. Clinical and radiologic postoperative scores in all domains were substantially improved in both groups (P < .001), but there was no statistically significant difference in improvement of any domain between open and MIS groups (P > .05). There were no significant differences in complications between the 2 groups (P > .5). The midterm results of this third-generation technique show that it was a safe procedure with good clinical outcomes and comparable to traditional open techniques for symptomatic mild-moderate hallux valgus. Level III, retrospective comparative study. © The Author(s) 2016.

  1. Examination of the Equivalence of Self-Report Survey-Based Paper-and-Pencil and Internet Data Collection Methods

    ERIC Educational Resources Information Center

    Weigold, Arne; Weigold, Ingrid K.; Russell, Elizabeth J.

    2013-01-01

    Self-report survey-based data collection is increasingly carried out using the Internet, as opposed to the traditional paper-and-pencil method. However, previous research on the equivalence of these methods has yielded inconsistent findings. This may be due to methodological and statistical issues present in much of the literature, such as…

  2. The Equivalence of Regression Models Using Difference Scores and Models Using Separate Scores for Each Informant: Implications for the Study of Informant Discrepancies

    ERIC Educational Resources Information Center

    Laird, Robert D.; Weems, Carl F.

    2011-01-01

    Research on informant discrepancies has increasingly utilized difference scores. This article demonstrates the statistical equivalence of regression models using difference scores (raw or standardized) and regression models using separate scores for each informant to show that interpretations should be consistent with both models. First,…

  3. Comparison of traditional methods with 3D computer models in the instruction of hepatobiliary anatomy.

    PubMed

    Keedy, Alexander W; Durack, Jeremy C; Sandhu, Parmbir; Chen, Eric M; O'Sullivan, Patricia S; Breiman, Richard S

    2011-01-01

    This study was designed to determine whether an interactive three-dimensional presentation depicting liver and biliary anatomy is more effective for teaching medical students than a traditional textbook format presentation of the same material. Forty-six medical students volunteered for participation in this study. Baseline demographic information, spatial ability, and knowledge of relevant anatomy were measured. Participants were randomized into two groups and presented with a computer-based interactive learning module comprised of animations and still images to highlight various anatomical structures (3D group), or a computer-based text document containing the same images and text without animation or interactive features (2D group). Following each teaching module, students completed a satisfaction survey and nine-item anatomic knowledge post-test. The 3D group scored higher on the post-test than the 2D group, with a mean score of 74% and 64%, respectively; however, when baseline differences in pretest scores were accounted for, this difference was not statistically significant (P = 0.33). Spatial ability did not statistically significantly correlate with post-test scores for the 3D group or the 2D group. In the post-test satisfaction survey the 3D group expressed a statistically significantly higher overall satisfaction rating compared to students in the 2D control group (4.5 versus 3.7 out of 5, P = 0.02). While the interactive 3D multimedia module received higher satisfaction ratings from students, it neither enhanced nor inhibited learning of complex hepatobiliary anatomy compared to an informationally equivalent traditional textbook-style approach. Copyright © 2011 American Association of Anatomists.

  4. Integrated Assessment and Improvement of the Quality Assurance System for the Cosworth Casting Process

    NASA Astrophysics Data System (ADS)

    Yousif, Dilon

    The purpose of this study was to improve the Quality Assurance (QA) System at the Nemak Windsor Aluminum Plant (WAP). The project used the Six Sigma method based on Define, Measure, Analyze, Improve, and Control (DMAIC). Analysis of in-process melt at WAP was based on chemical, thermal, and mechanical testing. The control limits for the W319 Al Alloy were statistically recalculated using the composition measured under stable conditions. The "Chemistry Viewer" software was developed for statistical analysis of alloy composition. This software features the Silicon Equivalency (SiBQ) developed by the IRC. The Melt Sampling Device (MSD) was designed and evaluated at WAP to overcome traditional sampling limitations. The Thermal Analysis "Filters" software was developed for cooling curve analysis of the 3XX Al Alloy(s) using IRC techniques. The impact of low melting point impurities on the start of melting was evaluated using the Universal Metallurgical Simulator and Analyzer (UMSA).

  5. Equivalence principle and bound kinetic energy.

    PubMed

    Hohensee, Michael A; Müller, Holger; Wiringa, R B

    2013-10-11

    We consider the role of the internal kinetic energy of bound systems of matter in tests of the Einstein equivalence principle. Using the gravitational sector of the standard model extension, we show that stringent limits on equivalence principle violations in antimatter can be indirectly obtained from tests using bound systems of normal matter. We estimate the bound kinetic energy of nucleons in a range of light atomic species using Green's function Monte Carlo calculations, and for heavier species using a Woods-Saxon model. We survey the sensitivities of existing and planned experimental tests of the equivalence principle, and report new constraints at the level of between a few parts in 10⁶ and parts in 10⁸ on violations of the equivalence principle for matter and antimatter.

  6. 40 CFR 1037.615 - Hybrid vehicles and other advanced technologies.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... system by chassis testing a vehicle equipped with the advanced system and an equivalent conventional vehicle, or by testing the hybrid systems and the equivalent non-hybrid systems as described in § 1037.550... include regenerative braking (or the equivalent) and energy storage systems, fuel cell vehicles, and...

  7. 40 CFR 790.85 - Submission of equivalence data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 31 2010-07-01 2010-07-01 true Submission of equivalence data. 790.85... Test Rules § 790.85 Submission of equivalence data. If EPA requires in a test rule promulgated under... exemption applicant must submit the following data: (a) The chemical identity of each technical-grade...

  8. Beyond effective teaching: Enhancing students’ metacognitive skill through guided inquiry

    NASA Astrophysics Data System (ADS)

    Adnan; Bahri, Arsad

    2018-01-01

    This research was quasi-experimental, with a pretest-posttest non-equivalent control group design, and aimed to compare the metacognitive skill of students taught by guided inquiry with that of students taught by traditional teaching. The sample comprised first-year (even semester) students of the Department of Biology, Faculty of Mathematics and Natural Sciences, Universitas Negeri Makassar, Indonesia. Students' metacognitive skill was measured by an essay test, and the data were analyzed with an inferential ANCOVA test. The results showed an effect of the teaching model on students' metacognitive skill: students taught by guided inquiry had higher metacognitive skill than those taught by traditional teaching. Lecturers can use the guided inquiry model in other courses, taking the course materials and student characteristics into account.

  9. Extracellular matrix proteins as temporary coating for thin-film neural implants

    NASA Astrophysics Data System (ADS)

    Ceyssens, Frederik; Deprez, Marjolijn; Turner, Neill; Kil, Dries; van Kuyck, Kris; Welkenhuysen, Marleen; Nuttin, Bart; Badylak, Stephen; Puers, Robert

    2017-02-01

    Objective. This study investigates the suitability of a thin sheet of extracellular matrix (ECM) proteins as a resorbable coating for temporarily reinforcing fragile or ultra-low stiffness thin-film neural implants to be placed on the brain, i.e. microelectrocorticographic (µECOG) implants. Approach. Thin-film polyimide-based electrode arrays were fabricated using lithographic methods. ECM was harvested from porcine tissue by a decellularization method and coated around the arrays. Mechanical tests and an in vivo experiment on rats were conducted, followed by a histological tissue study combined with a statistical equivalence test (confidence interval approach, 0.05 significance level) to compare the test group with an uncoated control group. Main results. After 3 months, no significant damage was found based on GFAP and NeuN staining of the relevant brain areas. Significance. The study shows that ECM sheets are a suitable temporary coating for thin µECOG neural implants.
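
The confidence-interval approach to equivalence testing mentioned above can be sketched as below, using only the standard library and a normal approximation to the two-sample t interval (the study's exact procedure and margins are not specified here, and the data are invented): at the 0.05 significance level, equivalence holds when the 90% confidence interval for the group difference lies entirely inside the equivalence margin.

```python
from statistics import NormalDist, mean, stdev

def tost_ci_equivalence(a, b, margin, alpha=0.05):
    """Confidence-interval (TOST) equivalence check for two samples.
    Uses a normal approximation to the two-sample t interval, adequate for
    larger samples. Equivalence at level alpha holds iff the (1 - 2*alpha)
    CI for the mean difference lies inside (-margin, +margin)."""
    diff = mean(a) - mean(b)
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha)  # ~1.645 for a 90% CI at alpha = 0.05
    lo, hi = diff - z * se, diff + z * se
    return (lo, hi), (-margin < lo and hi < margin)

# Invented cell-count-like data for a test group and a control group.
group_a = [10.1, 9.9, 10.0, 10.2, 9.8]
group_b = [10.0, 10.1, 9.9, 10.05, 9.95]
ci, equivalent = tost_ci_equivalence(group_a, group_b, margin=1.0)
print(ci, equivalent)
```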

  10. From random microstructures to representative volume elements

    NASA Astrophysics Data System (ADS)

    Zeman, J.; Šejnoha, M.

    2007-06-01

    A unified treatment of random microstructures proposed in this contribution opens the way to efficient solutions of large-scale real world problems. The paper introduces a notion of statistically equivalent periodic unit cell (SEPUC) that replaces in a computational step the actual complex geometries on an arbitrary scale. A SEPUC is constructed such that its morphology conforms with images of real microstructures. Here, the widely used two-point probability function and the lineal path function are employed to classify, from the statistical point of view, the geometrical arrangement of various material systems. Examples of statistically equivalent unit cells constructed for a unidirectional fibre tow, a plain weave textile composite and an irregular-coursed masonry wall are given. A specific result promoting the applicability of the SEPUC as a tool for the derivation of homogenized effective properties that are subsequently used in an independent macroscopic analysis is also presented.
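
The two-point probability function named above can be estimated as below. This is an illustrative sketch (not the authors' code) on a synthetic 1D binary medium with a 40% "fibre" volume fraction, using periodic shifts as in a periodic unit cell.

```python
import numpy as np

# Two-point probability S2(r): the probability that two points a distance r
# apart both lie in the phase of interest. Synthetic, uncorrelated medium.
rng = np.random.default_rng(1)
micro = rng.random(10_000) < 0.4   # 1D binary field, ~40% phase fraction

def s2(field, r):
    """Two-point probability at separation r (periodic boundaries)."""
    return np.mean(field & np.roll(field, r))

print(s2(micro, 0))   # equals the volume fraction (about 0.4)
print(s2(micro, 50))  # near the volume fraction squared for an uncorrelated medium
```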

  11. 12 CFR 741.6 - Financial and statistical and other reports.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... accordance with the instructions in the notice. Insured credit unions must use NCUA's information management... equivalent, within 10 days after an election or appointment of senior management or volunteer officials or within 30 days of any change of the information in the profile. (2) Financial and statistical report...

  12. 7 CFR 1710.401 - Loan application documents.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., Checklist for Electric Loan Application, or a computer generated equivalent as this list. (1) Transmittal... beginning date of the loan period and shall be the same as the date on the Financial and Statistical Report... headquarters facilities, Form 740g need not be submitted. (5) Financial and statistical report. Distribution...

  13. 7 CFR 1710.401 - Loan application documents.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., Checklist for Electric Loan Application, or a computer generated equivalent as this list. (1) Transmittal... beginning date of the loan period and shall be the same as the date on the Financial and Statistical Report... headquarters facilities, Form 740g need not be submitted. (5) Financial and statistical report. Distribution...

  14. 7 CFR 1710.401 - Loan application documents.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., Checklist for Electric Loan Application, or a computer generated equivalent as this list. (1) Transmittal... beginning date of the loan period and shall be the same as the date on the Financial and Statistical Report... headquarters facilities, Form 740g need not be submitted. (5) Financial and statistical report. Distribution...

  15. 40 CFR 90.405 - Recorded information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... addition to a pre- and post-test measurement. (4) Recorder chart or equivalent. Identify for each test... applicable. (e) Test data; post-test. (1) Recorder chart or equivalent. Identify the hang-up check. (2...). (4) Barometric pressure, post-test segment. [60 FR 34598, July 13, 1995, as amended at 70 FR 40449...

  16. 40 CFR 90.405 - Recorded information.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... addition to a pre- and post-test measurement. (4) Recorder chart or equivalent. Identify for each test... applicable. (e) Test data; post-test. (1) Recorder chart or equivalent. Identify the hang-up check. (2...). (4) Barometric pressure, post-test segment. [60 FR 34598, July 13, 1995, as amended at 70 FR 40449...

  17. 40 CFR 90.405 - Recorded information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... addition to a pre- and post-test measurement. (4) Recorder chart or equivalent. Identify for each test... applicable. (e) Test data; post-test. (1) Recorder chart or equivalent. Identify the hang-up check. (2...). (4) Barometric pressure, post-test segment. [60 FR 34598, July 13, 1995, as amended at 70 FR 40449...

  18. 40 CFR 90.405 - Recorded information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... addition to a pre- and post-test measurement. (4) Recorder chart or equivalent. Identify for each test... applicable. (e) Test data; post-test. (1) Recorder chart or equivalent. Identify the hang-up check. (2...). (4) Barometric pressure, post-test segment. [60 FR 34598, July 13, 1995, as amended at 70 FR 40449...

  19. 40 CFR 90.405 - Recorded information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... addition to a pre- and post-test measurement. (4) Recorder chart or equivalent. Identify for each test... applicable. (e) Test data; post-test. (1) Recorder chart or equivalent. Identify the hang-up check. (2...). (4) Barometric pressure, post-test segment. [60 FR 34598, July 13, 1995, as amended at 70 FR 40449...

  20. Statistical Equilibria of Turbulence on Surfaces of Different Symmetry

    NASA Astrophysics Data System (ADS)

    Qi, Wanming; Marston, Brad

    2012-02-01

    We test the validity of statistical descriptions of freely decaying 2D turbulence by performing direct numerical simulations (DNS) of the Euler equation with hyperviscosity on a square torus and on a sphere. DNS shows, at long times, a dipolar coherent structure in the vorticity field on the torus but a quadrupole on the sphere [J. Y-K. Cho and L. Polvani, Phys. Fluids 8, 1531 (1996)]. A truncated Miller-Robert-Sommeria theory [A. J. Majda and X. Wang, Nonlinear Dynamics and Statistical Theories for Basic Geophysical Flows (Cambridge University Press, 2006)] can explain the difference. The theory conserves up to the second-order Casimir, while also respecting conservation laws that reflect the symmetry of the domain. We further show that it is equivalent to the phenomenological minimum-enstrophy principle by generalizing the work of Naso et al. [A. Naso, P. H. Chavanis, and B. Dubrulle, Eur. Phys. J. B 77, 284 (2010)] to the sphere. To explain finer structures of the coherent states seen in DNS, especially the phenomenon of confinement, we investigate the perturbative inclusion of the higher Casimir constraints.

  1. Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.

    PubMed

    Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C

    2015-02-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  2. Comparison of Amount of Primary Tooth Reduction Required for Anterior and Posterior Zirconia and Stainless Steel Crowns.

    PubMed

    Clark, Larkin; Wells, Martha H; Harris, Edward F; Lou, Jennifer

    2016-01-01

    To determine if aggressiveness of primary tooth preparation varied among different brands of zirconia and stainless steel (SSC) crowns. One hundred primary typodont teeth were divided into five groups (10 posterior and 10 anterior) and assigned to: Cheng Crowns (CC); EZ Pedo (EZP); Kinder Krowns (KKZ); NuSmile (NSZ); and SSC. Teeth were prepared, and assigned crowns were fitted. Teeth were weighed prior to and after preparation. Weight changes served as a surrogate measure of tooth reduction. Analysis of variance showed a significant difference in tooth reduction among brand/type for both the anterior and posterior. Tukey's honest significant difference test (HSD), when applied to anterior data, revealed that SSCs required significantly less tooth removal compared to the composite of the four zirconia brands, which showed no significant difference among them. Tukey's HSD test, applied to posterior data, revealed that CC required significantly greater removal of crown structure, while EZP, KKZ, and NSZ were statistically equivalent, and SSCs required significantly less removal. Zirconia crowns required more tooth reduction than stainless steel crowns for primary anterior and posterior teeth. Tooth reduction for anterior zirconia crowns was equivalent among brands. For posterior teeth, reduction for three brands (EZ Pedo, Kinder Krowns, NuSmile) did not differ, while Cheng Crowns required more reduction.

  3. Effective or ineffective: attribute framing and the human papillomavirus (HPV) vaccine.

    PubMed

    Bigman, Cabral A; Cappella, Joseph N; Hornik, Robert C

    2010-12-01

    To experimentally test whether presenting logically equivalent, but differently valenced effectiveness information (i.e. attribute framing) affects perceived effectiveness of the human papillomavirus (HPV) vaccine, vaccine-related intentions and policy opinions. A survey-based experiment (N=334) was fielded in August and September 2007 as part of a larger ongoing web-enabled monthly survey, the Annenberg National Health Communication Survey. Participants were randomly assigned to read a short passage about the HPV vaccine that framed vaccine effectiveness information in one of five ways. Afterward, they rated the vaccine and related opinion questions. Main statistical methods included ANOVA and t-tests. On average, respondents exposed to positive framing (70% effective) rated the HPV vaccine as more effective and were more supportive of vaccine mandate policy than those exposed to the negative frame (30% ineffective) or the control frame. Mixed valence frames showed some evidence for order effects; phrasing that ended by emphasizing vaccine ineffectiveness showed similar vaccine ratings to the negative frame. The experiment finds that logically equivalent information about vaccine effectiveness not only influences perceived effectiveness, but can in some cases influence support for policies mandating vaccine use. These framing effects should be considered when designing messages. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  4. Effective or ineffective: Attribute framing and the human papillomavirus (HPV) vaccine

    PubMed Central

    Bigman, Cabral A.; Cappella, Joseph N.; Hornik, Robert C.

    2010-01-01

    Objectives To experimentally test whether presenting logically equivalent, but differently valenced effectiveness information (i.e. attribute framing) affects perceived effectiveness of the human papillomavirus (HPV) vaccine, vaccine related intentions and policy opinions. Method A survey-based experiment (N= 334) was fielded in August and September 2007 as part of a larger ongoing web-enabled monthly survey, the Annenberg National Health Communication Survey. Participants were randomly assigned to read a short passage about the HPV vaccine that framed vaccine effectiveness information in one of five ways. Afterward, they rated the vaccine and related opinion questions. Main statistical methods included ANOVA and t-tests. Results On average, respondents exposed to positive framing (70% effective) rated the HPV vaccine as more effective and were more supportive of vaccine mandate policy than those exposed to the negative frame (30% ineffective) or the control frame. Mixed valence frames showed some evidence for order effects; phrasing that ended by emphasizing vaccine ineffectiveness showed similar vaccine ratings to the negative frame. Conclusions The experiment finds that logically equivalent information about vaccine effectiveness not only influences perceived effectiveness, but can in some cases influence support for policies mandating vaccine use. Practice implications These framing effects should be considered when designing messages. PMID:20851560

  5. Multiscale strain analysis of tissue equivalents using a custom-designed biaxial testing device.

    PubMed

    Bell, B J; Nauman, E; Voytik-Harbin, S L

    2012-03-21

    Mechanical signals transferred between a cell and its extracellular matrix play an important role in regulating fundamental cell behavior. To further define the complex mechanical interactions between cells and matrix from a multiscale perspective, a biaxial testing device was designed and built. Finite element analysis was used to optimize the cruciform specimen geometry so that stresses within the central region were concentrated and homogenous while minimizing shear and grip effects. This system was used to apply an equibiaxial loading and unloading regimen to fibroblast-seeded tissue equivalents. Digital image correlation and spot tracking were used to calculate three-dimensional strains and associated strain transfer ratios at macro (construct), meso, matrix (collagen fibril), cell (mitochondria), and nuclear levels. At meso and matrix levels, strains in the 1- and 2-direction were statistically similar throughout the loading-unloading cycle. Interestingly, a significant amplification of cellular and nuclear strains was observed in the direction perpendicular to the cell axis. Findings indicate that strain transfer is dependent upon local anisotropies generated by the cell-matrix force balance. Such multiscale approaches to tissue mechanics will assist in advancement of modern biomechanical theories as well as development and optimization of preconditioning regimens for functional engineered tissue constructs. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  6. Hispanic ethnicity and Caucasian race: Relations with posttraumatic stress disorder's factor structure in clinic-referred youth.

    PubMed

    Contractor, Ateka A; Claycomb, Meredith A; Byllesby, Brianna M; Layne, Christopher M; Kaplow, Julie B; Steinberg, Alan M; Elhai, Jon D

    2015-09-01

    The severity of posttraumatic stress disorder (PTSD) symptoms is linked to race and ethnicity, albeit with contradictory findings (reviewed in Alcántara, Casement, & Lewis-Fernández, 2013; Pole, Gone, & Kulkarni, 2008). We systematically examined Caucasian (n = 3,767) versus non-Caucasian race (n = 2,824) and Hispanic (n = 2,395) versus non-Hispanic ethnicity (n = 3,853) as candidate moderators of PTSD's 5-factor model structural parameters (Elhai et al., 2013). The sample was drawn from the National Child Traumatic Stress Network's Core Data Set, currently the largest national data set of clinic-referred children and adolescents exposed to potentially traumatic events. Using confirmatory factor analysis, we tested the invariance of PTSD symptom structural parameters by race and ethnicity. Chi-square difference tests and goodness-of-fit values showed statistical equivalence across racial and ethnic groups in the factor structure of PTSD and in mean item-level indicators of PTSD symptom severity. Results support the structural invariance of PTSD's 5-factor model across the compared racial and ethnic groups. Furthermore, results indicated equivalent item-level severity across racial and ethnic groups; this supports the use of item-level comparisons across these groups. (c) 2015 APA, all rights reserved.

  7. It Pays to Be Organized: Organizing Arithmetic Practice around Equivalent Values Facilitates Understanding of Math Equivalence

    ERIC Educational Resources Information Center

    McNeil, Nicole M.; Chesney, Dana L.; Matthews, Percival G.; Fyfe, Emily R.; Petersen, Lori A.; Dunwiddie, April E.; Wheeler, Mary C.

    2012-01-01

    This experiment tested the hypothesis that organizing arithmetic fact practice by equivalent values facilitates children's understanding of math equivalence. Children (M age = 8 years 6 months, N = 104) were randomly assigned to 1 of 3 practice conditions: (a) equivalent values, in which problems were grouped by equivalent sums (e.g., 3 + 4 = 7, 2…

  8. Statistical analogues of thermodynamic extremum principles

    NASA Astrophysics Data System (ADS)

    Ramshaw, John D.

    2018-05-01

    As shown by Jaynes, the canonical and grand canonical probability distributions of equilibrium statistical mechanics can be simply derived from the principle of maximum entropy, in which the statistical entropy S = -k_B Σ_i p_i log p_i is maximised subject to constraints on the mean values of the energy E and/or number of particles N in a system of fixed volume V. The Lagrange multipliers associated with those constraints are then found to be simply related to the temperature T and chemical potential μ. Here we show that the constrained maximisation of S is equivalent to, and can therefore be replaced by, the essentially unconstrained minimisation of the obvious statistical analogues of the Helmholtz free energy F = E - TS and the grand potential J = F - μN. Those minimisations are more easily performed than the maximisation of S because they formally eliminate the constraints on the mean values of E and N and their associated Lagrange multipliers. This procedure significantly simplifies the derivation of the canonical and grand canonical probability distributions, and shows that the well known extremum principles for the various thermodynamic potentials possess natural statistical analogues which are equivalent to the constrained maximisation of S.
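
The equivalence claimed above can be checked numerically. The sketch below uses a toy three-level system (invented, not from the paper) to verify that the canonical distribution p_i ∝ exp(-E_i/(kT)) minimises the statistical free energy F[p] = ⟨E⟩ - T·S[p], with minimum value -kT log Z.

```python
import math
import random

# Toy three-level system: energies E_i, Boltzmann constant k, temperature T.
k, T = 1.0, 2.0
E = [0.0, 1.0, 3.0]

def free_energy(p):
    """Statistical free energy F[p] = <E> - T*S[p], S[p] = -k sum p_i log p_i."""
    energy = sum(pi * Ei for pi, Ei in zip(p, E))
    entropy = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
    return energy - T * entropy

Z = sum(math.exp(-Ei / (k * T)) for Ei in E)
boltzmann = [math.exp(-Ei / (k * T)) / Z for Ei in E]
F_min = free_energy(boltzmann)

# Any other normalised distribution gives a strictly higher F.
random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in E]
    p = [wi / sum(w) for wi in w]
    assert free_energy(p) >= F_min - 1e-12

print(F_min, -k * T * math.log(Z))  # the two agree
```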

  9. Conditional equivalence testing: An alternative remedy for publication bias

    PubMed Central

    Gustafson, Paul

    2018-01-01

    We introduce a publication policy that incorporates “conditional equivalence testing” (CET), a two-stage testing scheme in which standard NHST is followed conditionally by testing for equivalence. The idea of CET is carefully considered as it has the potential to address recent concerns about reproducibility and the limited publication of null results. In this paper we detail the implementation of CET, investigate similarities with a Bayesian testing scheme, and outline the basis for how a scientific journal could proceed to reduce publication bias while remaining relevant. PMID:29652891
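
The two-stage scheme described above can be sketched as a decision rule: standard NHST first and, conditional on a non-rejection, two one-sided tests against an equivalence margin. A known-variance z-test is assumed for simplicity; the paper's exact procedure may differ.

```python
from statistics import NormalDist

def cet(diff, se, margin, alpha=0.05):
    """Conditional equivalence testing (CET) sketch for an effect estimate
    `diff` with standard error `se` and equivalence margin `margin`."""
    z = abs(diff) / se
    p_nhst = 2 * (1 - NormalDist().cdf(z))
    if p_nhst < alpha:
        return "positive"      # stage 1: standard NHST rejects H0
    # stage 2: two one-sided tests against the equivalence margin
    p_lower = 1 - NormalDist().cdf((diff + margin) / se)
    p_upper = NormalDist().cdf((diff - margin) / se)
    if max(p_lower, p_upper) < alpha:
        return "negative"      # equivalence shown: a publishable null result
    return "inconclusive"      # neither an effect nor equivalence demonstrated

print(cet(diff=0.5, se=0.1, margin=0.2))   # "positive"
print(cet(diff=0.05, se=0.1, margin=0.5))  # "negative"
print(cet(diff=0.1, se=0.2, margin=0.3))   # "inconclusive"
```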

  10. Testing Einstein's theory of gravity in a millisecond pulsar triple system

    NASA Astrophysics Data System (ADS)

    Archibald, Anne

    2015-04-01

    Einstein's theory of gravity depends on a key postulate, the strong equivalence principle. This principle says, among other things, that all objects fall the same way, even objects with strong self-gravity. Almost every metric theory of gravity other than Einstein's general relativity violates the strong equivalence principle at some level. While the weak equivalence principle--for objects with negligible self-gravity--has been tested in the laboratory, the strong equivalence principle requires astrophysical tests. Lunar laser ranging provides the best current tests by measuring whether the Earth and the Moon fall the same way in the gravitational field of the Sun. These tests are limited by the weak self-gravity of the Earth: the ratio of its gravitational binding energy (over c²) to its mass is only 4.6 × 10⁻¹⁰. By contrast, for neutron stars this same ratio is expected to be roughly 0.1. Thus the recently discovered system PSR J0337+17, a hierarchical triple consisting of a millisecond pulsar and two white dwarfs, offers the possibility of a test of the strong equivalence principle that is more sensitive by a factor of 20 to 100 than the best existing test. I will describe our observations of this system and our progress towards such a test.

  11. 40 CFR Table E-1 to Subpart E of... - Summary of Test Requirements for Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10-2.5 Pt. 53...

  12. Exploring Equivalent Forms Reliability Using a Key Stage 2 Reading Test

    ERIC Educational Resources Information Center

    Benton, Tom

    2013-01-01

    This article outlines an empirical investigation into equivalent forms reliability using a case study of a national curriculum reading test. Within the situation being studied, there has been a genuine attempt to create several equivalent forms and so it is of interest to compare the actual behaviour of the relationship between these forms to the…

  13. Testing for Measurement and Structural Equivalence in Large-Scale Cross-Cultural Studies: Addressing the Issue of Nonequivalence

    ERIC Educational Resources Information Center

    Byrne, Barbara M.; van de Vijver, Fons J. R.

    2010-01-01

    A critical assumption in cross-cultural comparative research is that the instrument measures the same construct(s) in exactly the same way across all groups (i.e., the instrument is measurement and structurally equivalent). Structural equation modeling (SEM) procedures are commonly used in testing these assumptions of multigroup equivalence.…

  14. [Effects of an educational program for the reduction of physical restraint use by caregivers in geriatric hospitals].

    PubMed

    Choi, Keumbong; Kim, Jinsun

    2009-12-01

    The purposes of this study were to develop an educational program to reduce the use of physical restraints for caregivers in geriatric hospitals and to evaluate the effects of the program on caregivers' knowledge, attitude and nursing practice related to the use of physical restraints. A quasi-experimental study with a non-equivalent control group pretest-posttest design was used. Participants were recruited from two geriatric hospitals. Eighteen caregivers were assigned to the experimental group and 20 to the control group. The data were collected prior to the intervention and at 6 weeks after the intervention through the use of self-administered questionnaires. Descriptive statistics, the χ² test, Fisher's exact probability test, and the Mann-Whitney U test were used to analyze the data. After the intervention, knowledge about physical restraints increased significantly in the experimental group compared to the control group. However, there were no statistically significant differences between the groups for attitude and nursing practice involving physical restraints. Findings indicate that it is necessary to apply knowledge acquired through educational programs to nursing practice to reduce the use of physical restraints. User-friendly guidelines for physical restraints, administrative support of institutions, and multidisciplinary approaches are required to achieve this goal.

  15. An experimental limit on the charge of antihydrogen

    PubMed Central

    Amole, C.; Ashkezari, M. D.; Baquero-Ruiz, M.; Bertsche, W.; Butler, E.; Capra, A.; Cesar, C. L.; Charlton, M.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayden, M. E.; Isaac, C. A.; Jonsell, S.; Kurchaninov, L.; Little, A.; Madsen, N.; McKenna, J. T. K.; Menary, S.; Napoli, S. C.; Nolan, P.; Olchanski, K.; Olin, A.; Povilus, A.; Pusa, P.; Rasmussen, C.Ø.; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Tharp, T. D.; Thompson, R. I.; van der Werf, D. P.; Vendeiro, Z.; Wurtele, J. S.; Zhmoginov, A. I.; Charman, A. E.

    2014-01-01

    The properties of antihydrogen are expected to be identical to those of hydrogen, and any differences would constitute a profound challenge to the fundamental theories of physics. The most commonly discussed antiatom-based tests of these theories are searches for antihydrogen-hydrogen spectral differences (tests of CPT (charge-parity-time) invariance) or gravitational differences (tests of the weak equivalence principle). Here we, the ALPHA Collaboration, report a different and somewhat unusual test of CPT and of quantum anomaly cancellation. A retrospective analysis of the influence of electric fields on antihydrogen atoms released from the ALPHA trap finds a mean axial deflection of 4.1±3.4 mm for an average axial electric field of 0.51 V mm⁻¹. Combined with extensive numerical modelling, this measurement leads to a bound on the charge Qe of antihydrogen of Q=(−1.3±1.1±0.4) × 10⁻⁸. Here, e is the unit charge, and the errors are from statistics and systematic effects. PMID:24892800

  16. Equivalence testing using existing reference data: An example with genetically modified and conventional crops in animal feeding studies.

    PubMed

    van der Voet, Hilko; Goedhart, Paul W; Schmidt, Kerstin

    2017-11-01

    An equivalence testing method is described to assess the safety of regulated products using relevant data obtained in historical studies with reference products assumed to be safe. The method is illustrated using data from a series of animal feeding studies with genetically modified and reference maize varieties. Several criteria for quantifying equivalence are discussed, and study-corrected distribution-wise equivalence is selected as appropriate for the example case study. An equivalence test is proposed based on a high probability of declaring equivalence in a simplified situation, where there is no between-group variation, where the historical and current studies have the same residual variance, and where the current study is assumed to have a sample size as set by a regulator. The method makes use of generalized fiducial inference methods to integrate uncertainties from both the historical and the current data. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Variation of indoor radon concentration and ambient dose equivalent rate in different outdoor and indoor environments.

    PubMed

    Stojanovska, Zdenka; Boev, Blazo; Zunic, Zora S; Ivanova, Kremena; Ristova, Mimoza; Tsenova, Martina; Ajka, Sorsa; Janevik, Emilija; Taleski, Vaso; Bossew, Peter

    2016-05-01

    This study investigates the variation of indoor radon concentration and ambient dose equivalent rate in the outdoor and indoor environments of 40 dwellings, 31 elementary schools and five kindergartens. The buildings are located in three municipalities of two geologically different areas of the Republic of Macedonia. Indoor radon concentrations were measured by nuclear track detectors, deployed in the most occupied room of each building, between June 2013 and May 2014. During the deployment campaign, indoor and outdoor ambient dose equivalent rates were measured simultaneously at the same location. The measured values varied from 22 to 990 Bq/m³ for indoor radon concentrations, from 50 to 195 nSv/h for outdoor ambient dose equivalent rates, and from 38 to 184 nSv/h for indoor ambient dose equivalent rates. The geometric mean of the ratio of indoor to outdoor ambient dose equivalent rates was found to be 0.88, i.e. the outdoor rates were on average higher than the indoor rates. All measured quantities are reasonably well described by log-normal distributions. A detailed statistical analysis of factors which influence the measured quantities is reported.

  18. The concept of equivalence and its application to the assessment of thrombolytic effects.

    PubMed

    Hampton, J R

    1997-12-01

    Very large clinical trials have become the norm in the evaluation of thrombolytic agents, and these 'megatrials' are administratively complex and expensive. It remains to be seen whether new thrombolytics will lead to further large reductions in fatality from an acute myocardial infarction, but new agents may well have advantages in areas such as safety and ease of administration, in addition to other clinical benefits (i.e. fewer cases of cardiac shock, heart failure and atrial fibrillation). The problem is how to introduce such new agents without a megatrial for each one. Endpoints other than fatality have some advantages and, in thrombolysis, angiographic studies are a necessary step in the development of new agents. However, such studies may not always correlate precisely with the results of mortality endpoint studies. Measurements of the resolution of ST segment elevation in myocardial infarction seem to provide a very useful method of assessing thrombolysis, but although such a technique can be applied to large numbers of patients, it cannot totally replace mortality endpoint trials. The 'equivalence' of two treatments is a clinical, not a statistical, concept, although statistical principles that allow equivalence to be investigated with medium-sized trials should be applied. Demonstrating equivalence in outcome between the new thrombolytic reteplase and streptokinase was the aim of the INJECT study.

  19. Systematic comparisons between PRISM version 1.0.0, BAP, and CSMIP ground-motion processing

    USGS Publications Warehouse

    Kalkan, Erol; Stephens, Christopher

    2017-02-23

    A series of benchmark tests was run by comparing results of the Processing and Review Interface for Strong Motion data (PRISM) software version 1.0.0 to Basic Strong-Motion Accelerogram Processing Software (BAP; Converse and Brady, 1992), and to California Strong Motion Instrumentation Program (CSMIP) processing (Shakal and others, 2003, 2004). These tests were performed by using the MATLAB implementation of PRISM, which is equivalent to its public release version in the Java language. Systematic comparisons were made in the time and frequency domains of records processed in PRISM and BAP, and in CSMIP, by using a set of representative input motions with varying resolutions, frequency content, and amplitudes. Although the details of strong-motion records vary among the processing procedures, there are only minor differences among the waveforms for each component and within the frequency passband common to these procedures. A comprehensive statistical evaluation considering more than 1,800 ground-motion components demonstrates that differences in peak amplitudes of acceleration, velocity, and displacement time series obtained from PRISM and CSMIP processing are equal to or less than 4 percent for 99 percent of the data, and equal to or less than 2 percent for 96 percent of the data. Other statistical measures, including the Euclidean distance (L2 norm) and the windowed root mean square level of processed time series, also indicate that both processing schemes produce statistically similar products.

  20. Effect of filtration on rolling-element-bearing life in contaminated lubricant environment

    NASA Technical Reports Server (NTRS)

    Loewenthal, S. H.; Moyer, D. W.; Sherlock, J. J.

    1978-01-01

    Fatigue tests were conducted on groups of 65 millimeter-bore ball bearings under four levels of filtration with and without a contaminated MIL-L-23699 lubricant. The baseline series used noncontaminated oil with 49 micron absolute filtration. In the remaining tests contaminants of the composition found in aircraft engine filters were injected into the filter's supply line at a constant rate of 125 milligrams per bearing-hour. The test filters had absolute particle removal ratings of 3, 30, 49, and 105 microns (0.45, 10, 30, and 70 microns nominal), respectively. Bearings were tested at 15,000 rpm under 4580 newtons radial load. Bearing life and running tract condition generally improved with finer filtration. The 3 and 30 micron filter bearings in a contaminated lubricant had statistically equivalent lives, approaching those from the baseline tests. The experimental lives of 49 micron bearings were approximately half the baseline bearing's lives. Bearings tested with the 105 micron filter experienced wear failures. The degree of surface distress, weight loss, and probable failure mode were found to be dependent on filtration level, with finer filtration being clearly beneficial.

  1. Assessment of image quality in soft tissue and bone visualization tasks for a dedicated extremity cone-beam CT system.

    PubMed

    Demehri, S; Muhit, A; Zbijewski, W; Stayman, J W; Yorkston, J; Packard, N; Senn, R; Yang, D; Foos, D; Thawait, G K; Fayad, L M; Chhabra, A; Carrino, J A; Siewerdsen, J H

    2015-06-01

    To assess visualization tasks using cone-beam CT (CBCT) compared to multi-detector CT (MDCT) for musculoskeletal extremity imaging. Ten cadaveric hands and ten knees were examined using a dedicated CBCT prototype and a clinical multi-detector CT using nominal protocols (80 kVp, 108 mAs for CBCT; 120 kVp, 300 mAs for MDCT). Soft tissue and bone visualization tasks were assessed by four radiologists using five-point satisfaction (for CBCT and MDCT individually) and five-point preference (side-by-side CBCT versus MDCT image quality comparison) rating tests. Ratings were analyzed using Kruskal-Wallis and Wilcoxon signed-rank tests, and observer agreement was assessed using the kappa statistic. Knee CBCT images were rated "excellent" or "good" (median scores 5 and 4) for "bone" and "soft tissue" visualization tasks. Hand CBCT images were rated "excellent" or "adequate" (median scores 5 and 3) for "bone" and "soft tissue" visualization tasks. Preference tests rated CBCT equivalent or superior to MDCT for bone visualization and favoured the MDCT for soft tissue visualization tasks. Intraobserver agreement for CBCT satisfaction tests was fair to almost perfect (κ ~ 0.26-0.92), and interobserver agreement was fair to moderate (κ ~ 0.27-0.54). CBCT provided excellent image quality for bone visualization and adequate image quality for soft tissue visualization tasks. • CBCT provided adequate image quality for diagnostic tasks in extremity imaging. • CBCT images were "excellent" for "bone" and "good/adequate" for "soft tissue" visualization tasks. • CBCT image quality was equivalent/superior to MDCT for bone visualization tasks.

  2. Development of Statistical Process Control Methodology for an Environmentally Compliant Surface Cleaning Process in a Bonding Laboratory

    NASA Technical Reports Server (NTRS)

    Hutchens, Dale E.; Doan, Patrick A.; Boothe, Richard E.

    1997-01-01

    Bonding labs at both MSFC and the northern Utah production plant prepare bond test specimens which simulate or witness the production of NASA's Reusable Solid Rocket Motor (RSRM). The current process for preparing the bonding surfaces employs 1,1,1-trichloroethane vapor degreasing, which simulates the current RSRM process. Government regulations (e.g., the 1990 Amendments to the Clean Air Act) have mandated a production phase-out of a number of ozone depleting compounds (ODC), including 1,1,1-trichloroethane. In order to comply with these regulations, the RSRM Program is qualifying a spray-in-air (SIA) precision cleaning process using Brulin 1990, an aqueous blend of surfactants. Accordingly, surface preparation prior to bonding process simulation test specimens must reflect the new production cleaning process. The Bonding Lab Statistical Process Control (SPC) program monitors the progress of the lab and its capabilities, as well as certifies the bonding technicians, by periodically preparing D6AC steel tensile adhesion panels with EA-913NA epoxy adhesive using a standardized process. SPC methods are then used to ensure the process is statistically in control, thus producing reliable data for bonding studies, and to identify any problems which might develop. Since the specimen cleaning process is being changed, new SPC limits must be established. This report summarizes side-by-side testing of D6AC steel tensile adhesion witness panels and tapered double cantilevered beams (TDCBs) using both the current baseline vapor degreasing process and a lab-scale spray-in-air process. A Proceco 26-inch Typhoon dishwasher cleaned both tensile adhesion witness panels and TDCBs in a process which simulates the new production process. The tests were performed six times during 1995; subsequent statistical analysis of the data established new upper control limits (UCL) and lower control limits (LCL). The data also demonstrated that the new process was equivalent to the vapor degreasing process.

  3. Examining an Alternative to Score Equating: A Randomly Equivalent Forms Approach. Research Report. ETS RR-08-14

    ERIC Educational Resources Information Center

    Liao, Chi-Wen; Livingston, Samuel A.

    2008-01-01

    Randomly equivalent forms (REF) of tests in listening and reading for nonnative speakers of English were created by stratified random assignment of items to forms, stratifying on item content and predicted difficulty. The study included 50 replications of the procedure for each test. Each replication generated 2 REFs. The equivalence of those 2…

  4. Evidence and Clinical Trials.

    NASA Astrophysics Data System (ADS)

    Goodman, Steven N.

    1989-11-01

    This dissertation explores the use of a mathematical measure of statistical evidence, the log likelihood ratio, in clinical trials. The methods and thinking behind the use of an evidential measure are contrasted with traditional methods of analyzing data, which depend primarily on a p-value as an estimate of the statistical strength of an observed data pattern. It is contended that neither the behavioral dictates of Neyman-Pearson hypothesis testing methods, nor the coherency dictates of Bayesian methods are realistic models on which to base inference. The use of the likelihood alone is applied to four aspects of trial design or conduct: the calculation of sample size, the monitoring of data, testing for the equivalence of two treatments, and meta-analysis--the combining of results from different trials. Finally, a more general model of statistical inference, using belief functions, is used to see if it is possible to separate the assessment of evidence from our background knowledge. It is shown that traditional and Bayesian methods can be modeled as two ends of a continuum of structured background knowledge, methods which summarize evidence at the point of maximum likelihood assuming no structure, and Bayesian methods assuming complete knowledge. Both schools are seen to be missing a concept of ignorance: uncommitted belief. This concept provides the key to understanding the problem of sampling to a foregone conclusion and the role of frequency properties in statistical inference. The conclusion is that statistical evidence cannot be defined independently of background knowledge, and that frequency properties of an estimator are an indirect measure of uncommitted belief. Several likelihood summaries need to be used in clinical trials, with the quantitative disparity between summaries being an indirect measure of our ignorance. This conclusion is linked with parallel ideas in the philosophy of science and cognitive psychology.
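The log likelihood ratio advocated here is simple to compute for two simple hypotheses. A minimal sketch for binomial data (the numbers are illustrative, not from the dissertation):

```python
import math

def log_likelihood_ratio(k, n, p1, p0):
    """Log-likelihood ratio for k successes in n Bernoulli trials,
    comparing two simple hypotheses p1 vs p0.
    The binomial coefficients cancel, so only the probability terms remain."""
    return k * math.log(p1 / p0) + (n - k) * math.log((1 - p1) / (1 - p0))

# 60 successes in 100 trials: evidence for p = 0.6 over p = 0.5
llr = log_likelihood_ratio(60, 100, 0.6, 0.5)   # about 2.01 nats
```

Unlike a p-value, this quantity measures relative support for two hypotheses directly and is unaffected by the monitoring schedule, which is what makes it attractive for interim looks at trial data.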

  5. Analysis of the color alteration and radiopacity promoted by bismuth oxide in calcium silicate cement.

    PubMed

    Marciano, Marina Angélica; Estrela, Carlos; Mondelli, Rafael Francisco Lia; Ordinola-Zapata, Ronald; Duarte, Marco Antonio Hungaro

    2013-01-01

    The aim of the study was to determine if the increase in radiopacity provided by bismuth oxide is related to the color alteration of calcium silicate-based cement. Calcium silicate cement (CSC) was mixed with 0%, 15%, 20%, 30% and 50% of bismuth oxide (BO), determined by weight. Mineral trioxide aggregate (MTA) was the control group. The radiopacity test was performed according to ISO 6876/2001. The color was evaluated using the CIE system. The assessments were performed after 24 hours, 7 and 30 days of setting time, using a spectrophotometer to obtain the ΔE, Δa, Δb and ΔL values. The statistical analyses were performed using the Kruskal-Wallis/Dunn and ANOVA/Tukey tests (p<0.05). The cements in which bismuth oxide was added showed radiopacity corresponding to the ISO recommendations (>3 mm equivalent of Al). The MTA group was statistically similar to the CSC/30% BO group (p>0.05). In regard to color, the increase of bismuth oxide resulted in a decrease in the ΔE value of the calcium silicate cement. The CSC group presented statistically higher ΔE values than the CSC/50% BO group (p<0.05). The comparison between 24 hours and 7 days showed higher ΔE for the MTA group, with statistical differences for the CSC/15% BO and CSC/50% BO groups (p<0.05). After 30 days, CSC showed statistically higher ΔE values than CSC/30% BO and CSC/50% BO (p<0.05). In conclusion, the increase in radiopacity provided by bismuth oxide has no relation to the color alteration of calcium silicate-based cements.

  6. Precision Tests of a Quantum Hall Effect Device DC Equivalent Circuit Using Double-Series and Triple-Series Connections

    PubMed Central

    Jeffery, A.; Elmquist, R. E.; Cage, M. E.

    1995-01-01

    Precision tests verify the dc equivalent circuit used by Ricketts and Kemeny to describe a quantum Hall effect device in terms of electrical circuit elements. The tests employ the use of cryogenic current comparators and the double-series and triple-series connection techniques of Delahaye. Verification of the dc equivalent circuit in double-series and triple-series connections is a necessary step in developing the ac quantum Hall effect as an intrinsic standard of resistance. PMID:29151768

  7. The difference between “equivalent” and “not different”

    DOE PAGES

    Anderson-Cook, Christine M.; Borror, Connie M.

    2015-10-27

    Often, experimenters wish to establish that populations of units can be considered equivalent to each other, in order to leverage improved knowledge about one population for characterizing the new population, or to establish the comparability of items. Equivalence tests have existed for many years, but their use in industry seems to have been largely restricted to biomedical applications, such as for assessing the equivalence of two drugs or protocols. We present the fundamentals of equivalence tests, compare them to traditional two-sample and ANOVA tests that are better suited to establishing differences in populations, and propose the use of a graphical summary to compare p-values across different thresholds of practically important differences.
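The distinction in the title can be made concrete with a small numeric sketch: a study can fail to show a difference (large p from a t-test) while also failing to show equivalence (large TOST p-value). The summary statistics and margin below are hypothetical:

```python
import math
from scipy import stats

def p_values_from_summary(m1, s1, n1, m2, s2, n2, margin):
    """Return (p_difference, p_equivalence) from summary statistics.
    p_difference: usual two-sided t-test; p_equivalence: max of the two
    one-sided TOST p-values. Pooled df is used for simplicity; `margin`
    is a hypothetical threshold of practically important difference."""
    diff = m1 - m2
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    df = n1 + n2 - 2
    p_diff = 2 * stats.t.sf(abs(diff) / se, df)
    p_lower = stats.t.sf((diff + margin) / se, df)   # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
    return p_diff, max(p_lower, p_upper)

# Small noisy study: "not different" (p_diff > 0.05), yet equivalence is
# NOT demonstrated either (p_equiv > 0.05).
p_diff, p_equiv = p_values_from_summary(10.0, 1.0, 6, 10.5, 1.0, 6, margin=0.6)

# Larger, tighter study: equivalence within the same margin IS demonstrated.
_, p_equiv2 = p_values_from_summary(10.0, 0.1, 20, 10.02, 0.1, 20, margin=0.6)
```

The first study is exactly the trap the authors warn about: absence of a significant difference is not evidence of equivalence.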

  8. [Five days ceftibuten versus 10 days penicillin in the treatment of 2099 patients with A-streptococcal tonsillopharyngitis].

    PubMed

    Adam, D; Scholz, H; Helmerking, M

    2001-07-19

    Group A streptococci have remained sensitive to penicillins and other betalactam antibiotics, e.g. cephalosporins. Since the beginning of the 1950s, oral penicillin V given three times daily in a dose of 50,000 IU daily has been the drug of choice against Group A streptococcal infection. The German Society for Pediatric Infectious Diseases (DGPI) undertook a large-scale multicenter randomized study of culture-proven A-streptococcal tonsillopharyngitis to compare the efficacy and safety of a five-day regimen of ceftibuten (9 mg/kg body weight, once daily) with 10 days of penicillin V (50,000 IU/kg body weight, divided into three doses), testing for equivalence of clinical and bacteriological efficacy. A one-year follow-up served to assess poststreptococcal sequelae like rheumatic fever or glomerulonephritis. The clinical efficacy at the clinical end-point 7-9 days after end of treatment was 86.9% (419/482) for ceftibuten and 88.6% (1,198/1,352) for penicillin V. This result is statistically equivalent (P = 0.0152). Resolution of clinical symptoms was significantly faster in the ceftibuten group (P = 0.043, Fisher's exact test) and compliance was significantly superior as well (P < 0.001). Eradication of group A streptococci at an early control 2-4 days after end of treatment was not equivalent, 78.49% for ceftibuten and 84.42% for penicillin V (P = 0.5713). Both eradication rates were comparable 7-8 weeks after end of treatment (84.65%, 375/443 ceftibuten vs. 86.82%, 1,067/1,229 penicillin V), the difference not being significant. No cases of poststreptococcal sequelae, e.g. rheumatic fever or glomerulonephritis, attributable to either ceftibuten or penicillin were observed in the course of the study.

  9. Addressing astronomy misconceptions and achieving national science standards utilizing aspects of multiple intelligences theory in the classroom and the planetarium

    NASA Astrophysics Data System (ADS)

    Sarrazine, Angela Renee

    The purpose of this study was to incorporate multiple intelligences techniques in both a classroom and planetarium setting to create a significant increase in student learning about the moon and lunar phases. Utilizing a free-response questionnaire and a 25-item multiple choice pre-test/post-test design, this study identified middle school students' misconceptions and measured increases in student learning about the moon and lunar phases. The study spanned two semesters and contained six treatment groups which consisted of both single and multiple interventions. One group only attended the planetarium program. Two groups attended one of two classes a week prior to the planetarium program, and two groups attended one of two classes a week after the planetarium program. The most rigorous treatment group attended a class both a week before and after the planetarium program. Utilizing Rasch analysis techniques and parametric statistical tests, all six groups exhibited statistically significant gains in knowledge at the 0.05 level. There were no significant differences between students who attended only a planetarium program versus a single classroom program. Also, subjects who attended either a pre-planetarium class or a post-planetarium class did not show a statistically significant gain over the planetarium only situation. Equivalent effects on student learning were exhibited by the pre-planetarium class groups and post-planetarium class groups. Therefore, it was determined that the placement of the second intervention does not have a significant impact on student learning. However, a decrease in learning was observed with the addition of a third intervention. Further instruction and testing appeared to hinder student learning. This is perhaps an effect of subject fatigue.

  10. The influence of control group reproduction on the statistical power of the Environmental Protection Agency's Medaka Extended One Generation Reproduction Test (MEOGRT).

    PubMed

    Flynn, Kevin; Swintek, Joe; Johnson, Rodney

    2017-02-01

    Because of various Congressional mandates to protect the environment from endocrine disrupting chemicals (EDCs), the United States Environmental Protection Agency (USEPA) initiated the Endocrine Disruptor Screening Program. In the context of this framework, the Office of Research and Development within the USEPA developed the Medaka Extended One Generation Reproduction Test (MEOGRT) to characterize the endocrine action of a suspected EDC. One important endpoint of the MEOGRT is fecundity of medaka breeding pairs. Power analyses were conducted to determine the number of replicates needed in proposed test designs and to determine the effects that varying reproductive parameters (e.g. mean fecundity, variance, and days with no egg production) would have on the statistical power of the test. The MEOGRT Reproduction Power Analysis Tool (MRPAT) is a software tool developed to expedite these power analyses by both calculating estimates of the needed reproductive parameters (e.g. population mean and variance) and performing the power analysis under user-specified scenarios. Example scenarios are detailed that highlight the importance of the reproductive parameters on statistical power. When control fecundity is increased from 21 to 38 eggs per pair per day and the variance decreased from 49 to 20, the gain in power is equivalent to increasing replication by 2.5 times. On the other hand, if 10% of the breeding pairs, including controls, do not spawn, the power to detect a 40% decrease in fecundity drops to 0.54 from nearly 0.98 when all pairs have some level of egg production. Perhaps most importantly, MRPAT was used to inform the decision making process that led to the final recommendation of the MEOGRT to have 24 control breeding pairs and 12 breeding pairs in each exposure group. Published by Elsevier Inc.
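The dependence of power on control mean and variance that MRPAT quantifies can be illustrated with a simple Monte Carlo sketch. This is a Gaussian approximation with a plain t-test, not MRPAT's actual model (which handles count data and zero-spawning days); the `mc_power` function and its defaults are this sketch's own:

```python
import numpy as np
from scipy import stats

def mc_power(mean_ctrl, var_ctrl, decline, n_ctrl, n_trt, alpha=0.05, nsim=1000):
    """Monte Carlo power to detect a fractional decline in mean fecundity
    using a two-sample t-test on simulated eggs-per-pair-per-day values."""
    rng = np.random.default_rng(0)
    sd = np.sqrt(var_ctrl)
    rejections = 0
    for _ in range(nsim):
        ctrl = rng.normal(mean_ctrl, sd, n_ctrl)
        trt = rng.normal(mean_ctrl * (1.0 - decline), sd, n_trt)
        if stats.ttest_ind(ctrl, trt).pvalue < alpha:
            rejections += 1
    return rejections / nsim

# 24 control vs 12 treatment pairs; control mean 21 eggs/pair/day, variance 49
power_40 = mc_power(21.0, 49.0, 0.40, 24, 12)   # large decline: high power
power_10 = mc_power(21.0, 49.0, 0.10, 24, 12)   # small decline: low power
```

Rerunning with mean 38 and variance 20 shows the large power gain the abstract describes, without changing the number of replicates.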

  11. Does endothelial cell density correlate with corneal diameter in a group of young adults?

    PubMed

    Giasson, Claude J; Gosselin, Lucie; Masella, Aviva; Forcier, Pierre

    2008-07-01

    In children, but not in the elderly, an association exists between corneal diameter and endothelial cell density (ECD). We tested whether such an association also held true in young adults. The eyes of 35 healthy subjects (mean age, 23.1 ± 3.1 years) were photographed by using a video camera and a noncontact endothelial microscope. Both sets of images were analyzed with image software and the contour method to measure corneal diameter, ECD, and endothelial coefficients. Axial lengths, refractive errors, and corneal curvatures were measured by using an A-scan ultrasonic biometer and kerato-refractometer. Measurements, averaged for the right and left eyes, were analyzed depending on (1) use of contact lenses, (2) ametropia, and on whether (3) axial length or (4) corneal diameter was above or below group means. Differences were tested for statistical significance with independent t tests and association with the Pearson correlation coefficient. ECD, corneal diameter, and spherical equivalent refraction were 3022 ± 262 cells/mm², 12.0 ± 0.5 mm, and -3.1 ± 2.5 D, respectively. The only significant differences between wearers and nonwearers of contact lenses were the spherical refractive equivalent and axial length. There was no correlation between ECD and corneal diameter or axial length. As opposed to previously reported results in children, but as found in the elderly, there is no correlation between ECD and corneal diameter in young adults. Therefore, corneal size cannot be considered a determinant of ECD in young adults.

  12. Transfer of analytical procedures: a panel of strategies selected for risk management, with emphasis on an integrated equivalence-based comparative testing approach.

    PubMed

    Agut, C; Caron, A; Giordano, C; Hoffman, D; Ségalini, A

    2011-09-10

    In 2001, a multidisciplinary team of analytical scientists and statisticians at Sanofi-aventis published a methodology which has governed, since that time, the transfer of release monographs from R&D sites to Manufacturing sites. This article provides an overview of the recent adaptations brought to this original methodology, taking advantage of our experience and the new regulatory framework, in particular the risk management perspective introduced by ICH Q9. Although some alternative strategies have been introduced in our practices, the comparative testing strategy, based on equivalence testing as the statistical approach, remains the standard for assays bearing on very critical quality attributes. This is conducted with the aim of controlling the most important consumer's risks involved at two levels of analytical decision within transfer studies: the risk, for the receiving laboratory, of making poor release decisions with the analytical method, and the risk, for the sending laboratory, of accrediting a receiving laboratory despite insufficient performance with the method. Among the enhancements to the comparative studies, the manuscript presents the process established within our company for better integration of the transfer study into the method life-cycle, as well as proposals of generic acceptance criteria and designs for assay and related-substances methods. While maintaining the rigor and selectivity of the original approach, these improvements tend toward increased efficiency in transfer operations. Copyright © 2011 Elsevier B.V. All rights reserved.
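    The equivalence-based comparison between sending and receiving laboratories can be illustrated with the standard two one-sided tests (TOST) procedure. The sketch below is generic: the potency values and the ±2% equivalence margin are hypothetical, not Sanofi-aventis's actual acceptance criteria.

```python
import numpy as np
from scipy import stats

def tost_equivalence(x, y, delta, alpha=0.05):
    """Two one-sided tests (TOST) for mean equivalence: the labs are
    declared equivalent if both one-sided null hypotheses
    (diff <= -delta, diff >= +delta) are rejected at level alpha."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
    diff = x.mean() - y.mean()
    se = np.sqrt(vx + vy)
    # Welch-Satterthwaite degrees of freedom
    df = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
    p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
    p_upper = stats.t.sf((delta - diff) / se, df)   # H0: diff >= +delta
    return diff, max(p_lower, p_upper)

# hypothetical assay results (% label claim) from the two laboratories
sending = [99.8, 100.2, 100.1, 99.9, 100.0, 100.3]
receiving = [100.1, 100.4, 99.9, 100.2, 100.0, 100.3]
diff, p = tost_equivalence(sending, receiving, delta=2.0)
equivalent = p < 0.05
```

    Note the asymmetry the abstract emphasizes: a plain difference test rewards small samples and noisy data, whereas TOST makes the receiving laboratory demonstrate comparability, directly controlling the consumer's risk.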

  13. Evaluation of PCR Systems for Field Screening of Bacillus anthracis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozanich, Richard M.; Colburn, Heather A.; Victry, Kristin D.

    There are few published data on the performance of hand-portable polymerase chain reaction (PCR) instruments that could be used by first responders to determine whether a suspicious powder contains a potential biothreat agent. We evaluated five commercially available hand-portable PCR instruments for detection of Bacillus anthracis (Ba). We designed a cost-effective, statistically based test plan that allows instruments to be evaluated at performance levels ranging from 0.85-0.95 lower confidence bound (LCB) on the probability of detection (POD) at confidence levels of 80-95%. We assessed specificity using purified genomic DNA from 13 Ba strains and 18 Bacillus near neighbors, interference with 22 common hoax powders encountered in the field, and PCR inhibition when Ba spores were spiked into these powders. Our results indicated that three of the five instruments achieved >0.95 LCB on the POD with 95% confidence at test concentrations of 2,000 genome equivalents/mL (comparable to 2,000 spores/mL), displaying more than sufficient sensitivity for screening suspicious powders. These instruments exhibited no false positive results or PCR inhibition with common hoax powders, and reliably detected Ba spores spiked into common hoax powders, though some issues with instrument controls were observed. Our approach enables efficient instrument performance testing under a statistically rigorous and cost-effective test plan, generating performance data that allow users to make informed decisions regarding the purchase and use of biodetection equipment in the field.
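    The LCB-on-POD criterion described can be computed with an exact one-sided (Clopper-Pearson) binomial bound. The sketch below is a generic illustration of that statistic, not the authors' actual test plan.

```python
from scipy import stats

def pod_lcb(detections, trials, confidence=0.95):
    """One-sided Clopper-Pearson lower confidence bound on the
    probability of detection (POD) from pass/fail detection trials."""
    if detections == 0:
        return 0.0
    # exact lower bound via the beta distribution:
    # Beta(1 - confidence; detections, trials - detections + 1)
    return stats.beta.ppf(1.0 - confidence, detections,
                          trials - detections + 1)

# with no misses allowed, 59 trials is the smallest design whose
# all-detect outcome yields an LCB above 0.95 at 95% confidence
lcb = pod_lcb(59, 59)
```

    This kind of calculation shows why the authors call their plan cost-effective: the number of spiked-powder trials needed follows directly from the target LCB and confidence level, so no more replicates are run than the claim requires.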

  14. Analytical and clinical performance characteristics of the Abbott RealTime MTB RIF/INH Resistance, an assay for the detection of rifampicin and isoniazid resistant Mycobacterium tuberculosis in pulmonary specimens.

    PubMed

    Kostera, Joshua; Leckie, Gregor; Tang, Ning; Lampinen, John; Szostak, Magdalena; Abravaya, Klara; Wang, Hong

    2016-12-01

    Clinical management of drug-resistant tuberculosis patients continues to present significant challenges to global health. To tackle these challenges, the Abbott RealTime MTB RIF/INH Resistance assay was developed to accelerate the diagnosis of rifampicin- and/or isoniazid-resistant tuberculosis to within a day. This article summarizes the performance of the Abbott RealTime MTB RIF/INH Resistance assay, including reliability, analytical sensitivity, and clinical sensitivity/specificity as compared to Cepheid GeneXpert MTB/RIF version 1.0 and Hain MTBDRplus version 2.0. The limit of detection (LOD) of the Abbott RealTime MTB RIF/INH Resistance assay was determined to be 32 colony forming units/milliliter (cfu/mL) using the Mycobacterium tuberculosis (MTB) strain H37Rv. For rifampicin resistance detection, the assay demonstrated statistically equivalent clinical sensitivity and specificity as compared to Cepheid GeneXpert MTB/RIF. For isoniazid resistance detection, the assay demonstrated statistically equivalent clinical sensitivity and specificity as compared to Hain MTBDRplus. The performance data presented herein demonstrate that the Abbott RealTime MTB RIF/INH Resistance assay is a sensitive, robust, and reliable test for real-time simultaneous detection of resistance to the first-line anti-tuberculosis antibiotics rifampicin and isoniazid in patient specimens. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  15. Validation of the tablet-administered Brief Assessment of Cognition (BAC App).

    PubMed

    Atkins, Alexandra S; Tseng, Tina; Vaughan, Adam; Twamley, Elizabeth W; Harvey, Philip; Patterson, Thomas; Narasimhan, Meera; Keefe, Richard S E

    2017-03-01

    Computerized tests benefit from automated scoring procedures and standardized administration instructions. These methods can reduce the potential for rater error. However, especially in patients with severe mental illnesses, the equivalency of traditional and tablet-based tests cannot be assumed. The Brief Assessment of Cognition in Schizophrenia (BACS) is a pen-and-paper cognitive assessment tool that has been used in hundreds of research studies and clinical trials, and has normative data available for generating age- and gender-corrected standardized scores. A tablet-based version of the BACS, called the BAC App, has been developed. This study compared performance on the BACS and the BAC App in patients with schizophrenia and healthy controls. Test equivalency was assessed, and the applicability of paper-based normative data was evaluated. Results demonstrated that the distributions of standardized composite scores for the tablet-based BAC App and the pen-and-paper BACS were indistinguishable, and the between-methods mean differences were not statistically significant. The discrimination between patients and controls was similarly robust. The between-methods correlations for individual measures in patients were r>0.70 for most subtests. When data from the Token Motor Test were omitted, the between-methods correlation of composite scores was r=0.88 (df=48; p<0.001) in healthy controls and r=0.89 (df=46; p<0.001) in patients, consistent with the test-retest reliability of each measure. Taken together, results indicate that the tablet-based BAC App generates results consistent with the traditional pen-and-paper BACS, and support the notion that the BAC App is appropriate for use in clinical trials and clinical practice. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  16. 40 CFR 53.58 - Operational field precision and blank test.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent... samplers are also subject to a test for possible deposition of particulate matter on inactive filters...

  17. 40 CFR 53.58 - Operational field precision and blank test.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent... samplers are also subject to a test for possible deposition of particulate matter on inactive filters...

  18. 40 CFR 53.58 - Operational field precision and blank test.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent... samplers are also subject to a test for possible deposition of particulate matter on inactive filters...

  19. 40 CFR 53.58 - Operational field precision and blank test.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent... samplers are also subject to a test for possible deposition of particulate matter on inactive filters...

  20. Turbulent premixed combustion in V-shaped flames: Characteristics of flame front

    NASA Astrophysics Data System (ADS)

    Kheirkhah, S.; Gülder, Ö. L.

    2013-05-01

    Flame front characteristics of turbulent premixed V-shaped flames were investigated experimentally using Mie scattering and particle image velocimetry techniques. The experiments were performed at mean streamwise exit velocities of 4.0, 6.2, and 8.6 m/s, along with fuel-air equivalence ratios of 0.7, 0.8, and 0.9. Effects of vertical distance from the flame-holder, mean streamwise exit velocity, and fuel-air equivalence ratio on statistics of the distance between the flame front and the vertical axis, flame brush thickness, flame front curvature, and the angle between the tangent to the flame front and the horizontal axis were studied. The results show that increasing the vertical distance from the flame-holder and the fuel-air equivalence ratio increases the mean and root-mean-square (RMS) of the distance between the flame front and the vertical axis, whereas increasing the mean streamwise exit velocity decreases these statistics. Spectral analysis of the fluctuations of the flame front position shows that the normalized and averaged power spectral densities collapse and follow a power-law relation with the normalized wave number. The flame brush thickness is linearly correlated with the RMS of the distance between the flame front and the vertical axis. Analysis of the flame front curvature data shows that the mean curvature is independent of the experimental conditions tested and equal to zero. Values of the inverse of the RMS of the flame front curvature are similar to those of the integral length scale, suggesting that the large eddies in the flow contribute significantly to the wrinkling of the flame front. Spectral analyses of the flame front curvature, as well as of the angle between the tangent to the flame front and the horizontal axis, show that the power spectral densities feature a peak; the inverse of the wave number at this peak is larger than the integral length scale.

  1. Causal inference in biology networks with integrated belief propagation.

    PubMed

    Chang, Rui; Karr, Jonathan R; Schadt, Eric E

    2015-01-01

    Inferring causal relationships among molecular and higher-order phenotypes is a critical step in elucidating the complexity of living systems. Here we propose a novel method for inferring causality that is no longer constrained by the conditional dependency arguments that limit the ability of statistical causal inference methods to resolve causal relationships within sets of graphical models that are Markov equivalent. Our method utilizes Bayesian belief propagation to infer the responses of perturbation events on molecular traits given a hypothesized graph structure. A distance measure between the inferred response distribution and the observed data is defined to assess the 'fitness' of the hypothesized causal relationships. To test our algorithm, we infer causal relationships within equivalence classes of gene networks, in which the possible functional interactions are assumed to be nonlinear, given synthetic microarray and RNA sequencing data. We also apply our method to infer causality in a real metabolic network containing a v-structure and a feedback loop. We show that our method can recapitulate the causal structure and recover the feedback loop from steady-state data alone, which conventional methods cannot.

  2. Linguistic steganography on Twitter: hierarchical language modeling with manual interaction

    NASA Astrophysics Data System (ADS)

    Wilson, Alex; Blunsom, Phil; Ker, Andrew D.

    2014-02-01

    This work proposes a natural language stegosystem for Twitter, modifying tweets as they are written to hide 4 bits of payload per tweet, which is a greater payload than previous systems have achieved. The system, CoverTweet, includes novel components, as well as some already developed in the literature. We believe that the task of transforming covers during embedding is equivalent to unilingual machine translation (paraphrasing), and we use this equivalence to define a distortion measure based on statistical machine translation methods. The system incorporates this measure of distortion to rank possible tweet paraphrases, using a hierarchical language model; we use human interaction as a second distortion measure to pick the best. The hierarchical language model is designed to model the specific language of the covers, which in this setting is the language of the Twitter user who is embedding. This is a change from previous work, where general-purpose language models have been used. We evaluate our system by testing the output against human judges, and show that humans are unable to distinguish stego tweets from cover tweets any better than random guessing.

  3. Effect of montelukast monotherapy on oxidative stress parameters and DNA damage in children with asthma.

    PubMed

    Dilek, Fatih; Ozkaya, Emin; Kocyigit, Abdurrahim; Yazici, Mebrure; Kesgin, Siddika; Gedik, Ahmet Hakan; Cakir, Erkan

    2015-01-01

    There is ample knowledge in the literature about the role of oxidative stress in asthma pathogenesis. It is also known that the interaction of reactive oxygen species with DNA may result in DNA strand breaks. The aim of this study was to investigate whether montelukast monotherapy affects oxidative stress and DNA damage parameters in a population of pediatric asthma patients. Group I consisted of 31 newly diagnosed asthmatic patients not taking any medication, and group II consisted of 32 patients who had been treated with montelukast for at least 6 months. Forty healthy control subjects were also enrolled in the study. Plasma total oxidant status (TOS) and total antioxidant status (TAS) were measured to assess oxidative stress. DNA damage was assessed by means of the alkaline comet assay. The patients in both group I and group II had statistically significantly higher plasma TOS (13.1 ± 4 and 11.1 ± 4.1 μmol H2O2 equivalent/liter, respectively) and lower TAS levels (1.4 ± 0.5 and 1.5 ± 0.5 mmol Trolox equivalent/liter, respectively) compared with the control group (TOS: 6.3 ± 3.5 μmol H2O2 equivalent/liter and TAS: 2.7 ± 0.6 mmol Trolox equivalent/liter; p < 0.05). DNA damage was 18.2 ± 1.0 arbitrary units (a.u.) in group I, 16.7 ± 8.2 a.u. in group II and 13.7 ± 3.4 a.u. in the control group. There were statistically significant differences only between group I and the control group (p < 0.05). According to these findings, montelukast therapy produces only minimal, statistically nonsignificant improvements in TOS, TAS and DNA damage parameters. © 2015 S. Karger AG, Basel.

  4. Clinical Comparison of Two Methods of Graft Preparation in Descemet Membrane Endothelial Keratoplasty.

    PubMed

    Rickmann, Annekatrin; Opitz, Natalia; Szurman, Peter; Boden, Karl Thomas; Jung, Sascha; Wahl, Silke; Haus, Arno; Damm, Lara-Jil; Januschowski, Kai

    2018-01-01

    Descemet membrane endothelial keratoplasty (DMEK) has been improved over the last decade. The aim of this study was to compare the clinical outcome of the recently introduced liquid bubble method with that of standard manual preparation. This retrospective study evaluated the outcome of 200 patients after DMEK surgery using two different graft preparation techniques. Ninety-six DMEK grafts were prepared by manual dissection and 104 by the novel liquid bubble technique. The mean follow-up time was 13.7 months (SD ± 8, range 6-36 months). Best corrected mean visual acuity (BCVA) increased statistically significantly for all patients, from 0.85 logMAR (SD ± 0.5) at baseline to 0.26 logMAR (SD ± 0.27) at the final follow-up (Wilcoxon, p = 0.001). Subgroup analyses of BCVA at the final follow-up between manual dissection and liquid bubble preparation showed no statistically significant difference (Mann-Whitney U test, p = 0.64). The mean central corneal thickness was not statistically different between the two groups (manual dissection: 539 µm, SD ± 68 µm; liquid bubble technique: 534 µm, SD ± 52 µm) (Mann-Whitney U test, p = 0.64). At the final follow-up, the mean endothelial cell count of donor grafts was not statistically significantly different: 1761 cells/mm² (-30.7%, SD ± 352) for manual dissection versus 1749 cells/mm² (-29.9%, SD ± 501) for the liquid bubble technique (Mann-Whitney U test, p = 0.73). The re-DMEK rate was comparable: 8 cases (8.3%) for manual dissection and 7 cases (6.7%) for liquid bubble dissection (p = 0.69, chi-square test). Regarding clinical outcome, we did not find a statistically significant difference between manual dissection and liquid bubble graft preparation. Both preparation techniques lead to an equivalent clinical outcome after DMEK surgery.

  5. [Development and evaluation of a program to promote self management in patients with chronic hepatitis B].

    PubMed

    Yang, Jin-Hyang

    2012-04-01

    The purpose of this study was to identify the effects of the program to promote self management for patients with chronic hepatitis B. The research was a quasi-experimental design using a non-equivalent control group pre-post test. The participants were 61 patients, 29 in the experimental group and 32 in the control group. A pretest and 2 posttests were conducted to measure main variables. For the experimental group, the self-management program, consisting of counseling-centered activities in small groups, was given for 6 weeks. Data were analyzed using χ², t-test, and repeated measures ANOVA with PASW statistics program. There were statistically significant increases in knowledge, self-efficacy, active ways of coping, and self-management compliance but not in passive ways of coping in the experimental group compared to the control group over two different times. The results of this study indicate that the self-management program is effective in increasing knowledge, self-efficacy, active ways of coping, and self-management compliance among patients with chronic hepatitis B. Therefore, it can be usefully utilized in the field of nursing for patients with chronic disease as a nursing intervention for people with chronic hepatitis B.

  6. Identification of Low Order Equivalent System Models From Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    Identification of low order equivalent system dynamic models from flight test data was studied. Inputs were pilot control deflections, and outputs were aircraft responses, so the models characterized the total aircraft response including bare airframe and flight control system. Theoretical investigations were conducted and related to results found in the literature. Low order equivalent system modeling techniques using output error and equation error parameter estimation in the frequency domain were developed and validated on simulation data. It was found that some common difficulties encountered in identifying closed loop low order equivalent system models from flight test data could be overcome using the developed techniques. Implications for data requirements and experiment design were discussed. The developed methods were demonstrated using realistic simulation cases, then applied to closed loop flight test data from the NASA F-18 High Alpha Research Vehicle.

  7. Unidimensional IRT Item Parameter Estimates across Equivalent Test Forms with Confounding Specifications within Dimensions

    ERIC Educational Resources Information Center

    Matlock, Ki Lynn; Turner, Ronna

    2016-01-01

    When constructing multiple test forms, the number of items and the total test difficulty are often equivalent. Not all test developers match the number of items and/or average item difficulty within subcontent areas. In this simulation study, six test forms were constructed having an equal number of items and average item difficulty overall.…

  8. Electroencephalography (EEG) in the Study of Equivalence Class Formation. An Explorative Study.

    PubMed

    Arntzen, Erik; Steingrimsdottir, Hanna S

    2017-01-01

    Teaching arbitrary conditional discriminations and testing for derived relations may be essential for understanding changes in cognitive skills. Such conditional discrimination procedures are often used within stimulus equivalence research. For example, the participant is taught AB and BC relations and tested for whether emergent relations such as BA, CB, AC and CA occur. The purpose of the current explorative experiment was to study stimulus equivalence class formation in older adults with electroencephalography (EEG) recordings as an additional measure. The EEG was used to assess whether there was an indication of cognitive changes such as those observed in neurocognitive disorders (NCD). The present study included four participants who underwent conditional discrimination training and testing. The experimental design employed pre-class formation sorting and post-class formation sorting of the stimuli used in the experiment. EEG recordings were conducted before training, after training and after testing. The results showed that two participants formed equivalence classes, one participant failed in one of the three test relations, and one participant failed in two of the three test relations. This fourth participant also failed to sort the stimuli in accordance with the experimenter-defined stimulus equivalence classes during post-class formation sorting. The EEG indicated no cognitive decline in the first three participants but possible mild cognitive impairment (MCI) in the fourth participant. The results suggest that equivalence class formation may provide information about cognitive impairments such as those that are likely to occur in the early stages of NCD. The study recommends replications with broader samples.

  9. 46 CFR 110.20-1 - Equivalents.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Equivalents. 110.20-1 Section 110.20-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING GENERAL PROVISIONS Equivalents... engineering evaluations and tests to demonstrate the equivalence of the substitute. [CGD 94-108, 61 FR 28275...

  10. 46 CFR 110.20-1 - Equivalents.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Equivalents. 110.20-1 Section 110.20-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING GENERAL PROVISIONS Equivalents... engineering evaluations and tests to demonstrate the equivalence of the substitute. [CGD 94-108, 61 FR 28275...

  11. 46 CFR 110.20-1 - Equivalents.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Equivalents. 110.20-1 Section 110.20-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING GENERAL PROVISIONS Equivalents... engineering evaluations and tests to demonstrate the equivalence of the substitute. [CGD 94-108, 61 FR 28275...

  12. 46 CFR 110.20-1 - Equivalents.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Equivalents. 110.20-1 Section 110.20-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING GENERAL PROVISIONS Equivalents... engineering evaluations and tests to demonstrate the equivalence of the substitute. [CGD 94-108, 61 FR 28275...

  13. 46 CFR 110.20-1 - Equivalents.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Equivalents. 110.20-1 Section 110.20-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING GENERAL PROVISIONS Equivalents... engineering evaluations and tests to demonstrate the equivalence of the substitute. [CGD 94-108, 61 FR 28275...

  14. The Effects of Different Training Structures in the Establishment of Conditional Discriminations and Subsequent Performance on Tests for Stimulus Equivalence

    ERIC Educational Resources Information Center

    Arntzen, Erik; Grondahl, Terje; Eilifsen, Christoffer

    2010-01-01

    Previous studies comparing groups of subjects have indicated differential probabilities of stimulus equivalence outcome as a function of training structures. One-to-Many (OTM) and Many-to-One (MTO) training structures seem to produce positive outcomes on tests for stimulus equivalence more often than a Linear Series (LS) training structure does.…

  15. 40 CFR Table F-1 to Subpart F of... - Performance Specifications for PM 2.5 Class II Equivalent Samplers

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... II Equivalent Samplers Performance test Specifications Acceptance criteria § 53.62 Full Wind Tunnel... Results: 95% ≤ Rc ≤ 105%. § 53.63 Wind Tunnel Inlet Aspiration Test Liquid VOAG produced aerosol at 2 km... Class II Equivalent Samplers F Table F-1 to Subpart F of Part 53 Protection of Environment ENVIRONMENTAL...

  16. 40 CFR Table F-1 to Subpart F of... - Performance Specifications for PM2.5 Class II Equivalent Samplers

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Equivalent Samplers Performance test Specifications Acceptance criteria § 53.62 Full Wind Tunnel Evaluation...% ≤ Rc ≤ 105%. § 53.63 Wind Tunnel Inlet Aspiration Test Liquid VOAG produced aerosol at 2 km/hr and 24... Class II Equivalent Samplers F Table F-1 to Subpart F of Part 53 Protection of Environment ENVIRONMENTAL...

  17. 40 CFR Table F-1 to Subpart F of... - Performance Specifications for PM 2.5 Class II Equivalent Samplers

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... II Equivalent Samplers Performance test Specifications Acceptance criteria § 53.62 Full Wind Tunnel... Results: 95% ≤Rc ≤105%. § 53.63 Wind Tunnel Inlet Aspiration Test Liquid VOAG produced aerosol at 2 km/hr... Class II Equivalent Samplers F Table F-1 to Subpart F of Part 53 Protection of Environment ENVIRONMENTAL...

  18. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. ?? 2007 National Ground Water Association.
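    As a generic illustration of the efficient model-discrimination statistics mentioned above (not the authors' code), the corrected Akaike and Bayesian information criteria can be computed directly from a model's residuals and parameter count; the residuals used here are hypothetical toy values.

```python
import numpy as np

def information_criteria(residuals, n_params):
    """AICc and BIC for a least-squares model fit, using the Gaussian
    log-likelihood expressed through the residual sum of squares."""
    r = np.asarray(residuals, dtype=float)
    n = r.size
    sse = np.sum(r ** 2)
    k = n_params + 1                            # +1 for the error variance
    aic = n * np.log(sse / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction
    bic = n * np.log(sse / n) + k * np.log(n)
    return aicc, bic

# toy check: unit residuals (SSE = n), a 3-parameter model, n = 40
aicc, bic = information_criteria([1.0, -1.0] * 20, n_params=3)
```

    Lower values favor a model; because BIC's log(n) penalty grows with sample size, it tends to select the sparser hydraulic-conductivity parameterization, which is consistent with the parsimonious models compared in this study.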

  19. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    Bootstrapping is distribution-independent and provides an indirect way to estimate the sample size for a clinical trial from a relatively small sample. In this paper, sample size estimates for comparing two parallel-design arms on continuous data are obtained by a bootstrap procedure for various test types (inequality, non-inferiority, superiority, and equivalence). Sample sizes for the same data are also calculated by mathematical formulas based on the normal distribution assumption. The power difference between the two calculation methods is acceptably small for all test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate the features of these data, the nonparametric Wilcoxon test was applied to compare the two groups during bootstrap power estimation. As a result, the power estimated by the normal-distribution formula is far larger than that estimated by bootstrap at each specific per-group sample size. Hence, for this type of data, it is preferable to apply the bootstrap method for sample size calculation from the outset, and to employ in each bootstrap sample the same statistical method as will be used in the subsequent statistical analysis, provided historical data are available that are well representative of the population to which the proposed trial intends to extrapolate.
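    The paper's procedure is not reproduced verbatim here, but its core idea, resampling pilot data and applying the planned analysis test to each bootstrap sample, can be sketched as follows. The lognormal pilot data and the use of the Wilcoxon rank-sum (Mann-Whitney) test are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy import stats

def bootstrap_power(pilot_a, pilot_b, n_per_arm, n_boot=500,
                    alpha=0.05, seed=7):
    """Estimate power at a candidate per-arm sample size by resampling
    pilot data with replacement and applying the same test (Wilcoxon
    rank-sum) planned for the final analysis."""
    rng = np.random.default_rng(seed)
    a = np.asarray(pilot_a, dtype=float)
    b = np.asarray(pilot_b, dtype=float)
    rejections = 0
    for _ in range(n_boot):
        xa = rng.choice(a, n_per_arm, replace=True)
        xb = rng.choice(b, n_per_arm, replace=True)
        res = stats.mannwhitneyu(xa, xb, alternative="two-sided")
        if res.pvalue < alpha:
            rejections += 1
    return rejections / n_boot

# skewed (lognormal) pilot data, where a normal-theory formula
# would tend to be optimistic about the required sample size
rng = np.random.default_rng(0)
pilot_a = rng.lognormal(0.0, 1.0, 30)
pilot_b = rng.lognormal(0.8, 1.0, 30)
power_40 = bootstrap_power(pilot_a, pilot_b, n_per_arm=40)
```

    In practice one would sweep `n_per_arm` upward until the estimated power reaches the target (e.g. 0.80), which is the sample-size search the paper describes.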

  20. How Framing Statistical Statements Affects Subjective Veracity: Validation and Application of a Multinomial Model for Judgments of Truth

    ERIC Educational Resources Information Center

    Hilbig, Benjamin E.

    2012-01-01

    Extending the well-established negativity bias in human cognition to truth judgments, it was recently shown that negatively framed statistical statements are more likely to be considered true than formally equivalent statements framed positively. However, the underlying processes responsible for this effect are insufficiently understood.…

  1. Digest of Adult Education Statistics--1998.

    ERIC Educational Resources Information Center

    Elliott, Barbara G.

    Information on literacy programs for adults in the United States was compiled from the annual statistical performance reports states submit to the U.S. Department of Education at the end of each program year (PY). Nearly 27 percent of adults had not completed a high school diploma or equivalent. In PY 1991, the nation's adult education (AE)…

  2. Analysis of equivalent parameters of two spinal cord injury devices: the New York University impactor versus the Infinite Horizon impactor.

    PubMed

    Park, Jin Hoon; Kim, Jeong Hoon; Oh, Sun-Kyu; Baek, Se Rim; Min, Joongkee; Kim, Yong Whan; Kim, Sang Tae; Woo, Chul-Woong; Jeon, Sang Ryong

    2016-11-01

    The New York University (NYU) impactor and the Infinite Horizon (IH) impactor are used to create spinal cord injury (SCI) models. However, the parameters of these two devices that yield equivalent SCI severity remain unclear. To identify equivalent parameters, rats with SCIs induced by either device set at various parameters were subjected to behavioral and histologic analyses. This is an animal laboratory study. Groups of eight rats acquired SCIs by dropping a 10 g rod from a height of 25 mm or 50 mm using the NYU device or by delivering a force of 150 kdyn, 175 kdyn, 200 kdyn, or 250 kdyn using the IH impactor. All injured rats were tested weekly for 8 weeks using the Basso, Beattie, and Bresnahan (BBB) test and the ladder rung test. On the 10th week, the lesion volume of each group was measured using 9.4 Tesla magnetic resonance imaging (MRI), and the spinal cords were subjected to histologic analysis using anterograde biotinylated dextran amine (BDA) tracing and immunofluorescence staining with an anti-protein kinase C-gamma (PKC-γ) antibody. Basso, Beattie, and Bresnahan test scores between the 25 mm and 200 kdyn groups, as well as between the 50 mm and 250 kdyn groups, were very similar. Although the difference was not statistically significant, the mean ladder rung test scores in the 200 kdyn group were higher than those in the 25 mm group at all assessment time points. A significantly different cavity volume was found only between the 50 mm and 200 kdyn groups. Midline sagittal MRI images of the spinal cord revealed that the 25 mm group predominantly had dorsal injuries, whereas the 200 kdyn group had deeper injuries. Anterograde tracing with BDA showed that in the 200 kdyn group, the dorsal corticospinal tract of the caudal area of the lesion was labeled. Similar labeling was not observed in the 25 mm group. 
Immunofluorescence staining of PKC-γ also revealed strong staining of the dorsal corticospinal tract in the 200 kdyn group but not in the 25 mm group. The 25 mm injuries generated by the NYU impactor are generally equivalent to the 200 kdyn injuries generated by using the IH impactor. However, differences in the ladder rung test scores, MRI images, BDA traces, and PKC-γ staining demonstrate that the two devices exert qualitatively different impacts on the spinal cord. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. SEDIDAT: A BASIC program for the collection and statistical analysis of particle settling velocity data

    NASA Astrophysics Data System (ADS)

    Wright, Robyn; Thornberg, Steven M.

    SEDIDAT is a series of compiled IBM-BASIC (version 2.0) programs that direct the collection, statistical calculation, and graphic presentation of particle settling velocity and equivalent spherical diameter for samples analyzed using the settling tube technique. The programs follow a menu-driven format that is understood easily by students and scientists with little previous computer experience. Settling velocity is measured directly (cm/sec) and also converted into Chi units. Equivalent spherical diameter (reported in Phi units) is calculated using a modified Gibbs equation for different particle densities. Input parameters, such as water temperature, settling distance, particle density, run time, and Phi/Chi interval, are changed easily at operator discretion. Optional output to a dot-matrix printer includes a summary of moment and graphic statistical parameters, a tabulation of individual and cumulative weight percents, a listing of major distribution modes, and cumulative and histogram plots of the raw time, settling velocity, Chi, and Phi data.
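As a simplified illustration of the conversions such a program performs, the sketch below derives an equivalent spherical diameter from settling velocity using Stokes' law (valid only at low Reynolds number; SEDIDAT itself uses a modified Gibbs equation with broader validity, not shown here) and converts diameter to Phi units on the standard Krumbein scale. The quartz density and water viscosity values are illustrative assumptions.

```python
from math import log2, sqrt

G = 981.0  # gravitational acceleration, cm/s^2

def stokes_diameter_cm(w_cms, rho_s=2.65, rho_f=1.0, mu=0.01):
    """Equivalent spherical diameter (cm) from settling velocity (cm/s)
    via Stokes' law: w = (rho_s - rho_f) * g * d^2 / (18 * mu).
    Defaults: quartz grains (2.65 g/cm^3) in water at ~20 C (0.01 poise)."""
    return sqrt(18.0 * mu * w_cms / ((rho_s - rho_f) * G))

def phi_units(d_cm):
    """Krumbein Phi scale: Phi = -log2(diameter in mm)."""
    return -log2(d_cm * 10.0)
```

For example, a grain settling at 1 cm/s in water corresponds (in the Stokes limit) to a diameter of roughly 0.1 mm, i.e., a Phi value a little above 3: fine sand.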

  4. On Probability Domains IV

    NASA Astrophysics Data System (ADS)

    Frič, Roman; Papčo, Martin

    2017-12-01

    Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one —a quantum phenomenon and, dually, an observable can map a crisp random event to a genuine fuzzy random event —a fuzzy phenomenon. The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.

  5. Radiation-induced changes in gustatory function comparison of effects of neutron and photon irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mossman, K.L.; Chencharick, J.D.; Scheer, A.C.

    1979-04-01

    Changes in gustatory function were measured in 51 patients with various forms of cancer who received radiation to the head and neck region. Forty patients (group I) were treated with conventional photon radiation (e.g. 66 Gy/7 weeks), and 11 patients (group II) were treated with cyclotron-produced fast neutrons (e.g. 22 Gy/7 weeks). Taste acuity was measured for four taste qualities (salt, sweet, sour, and bitter) by a forced-choice three-stimulus drop technique which measured detection and recognition thresholds and by a forced scaling technique which measured taste intensity responsiveness. Subjective complaints of anorexia, dysgeusia, taste loss, and xerostomia were also recorded. Patients were studied before, during and up to two months after therapy. Prior to therapy, detection and recognition thresholds, intensity responsiveness, and the frequency of subjective complaints in patients from groups I and II were statistically equivalent. During and up to 2 months after therapy, taste impairment and frequency of subjective complaints increased significantly in neutron and photon treated patients, but were statistically equivalent. Results of this study indicate that gustatory tissue response as measured by taste detection and recognition and intensity responsiveness, and the frequency of subjective complaints related to taste, are statistically equivalent in patients before, during, and up to 2 months after they were given either neutron or photon radiation for tumors of the head and neck.

  6. Section Preequating under the Equivalent Groups Design without IRT

    ERIC Educational Resources Information Center

    Guo, Hongwen; Puhan, Gautam

    2014-01-01

    In this article, we introduce a section preequating (SPE) method (linear and nonlinear) under the randomly equivalent groups design. In this equating design, sections of Test X (a future new form) and another existing Test Y (an old form already on scale) are administered. The sections of Test X are equated to Test Y, after adjusting for the…

  7. Satellite Test of the Equivalence Principle as a Probe of Modified Newtonian Dynamics.

    PubMed

    Pereira, Jonas P; Overduin, James M; Poyneer, Alexander J

    2016-08-12

    The proposed satellite test of the equivalence principle (STEP) will detect possible violations of the weak equivalence principle by measuring relative accelerations between test masses of different composition with a precision of one part in 10^{18}. A serendipitous by-product of the experimental design is that the absolute or common-mode acceleration of the test masses is also measured to high precision as they oscillate along a common axis under the influence of restoring forces produced by the position sensor currents, which in drag-free mode lead to Newtonian accelerations as small as 10^{-14}  g. This is deep inside the low-acceleration regime where modified Newtonian dynamics (MOND) diverges strongly from the Newtonian limit of general relativity. We show that MOND theories (including those based on the widely used "n family" of interpolating functions as well as the covariant tensor-vector-scalar formulation) predict an easily detectable increase in the frequency of oscillations of the STEP test masses if the strong equivalence principle holds. If it does not hold, MOND predicts a cumulative increase in oscillation amplitude which is also detectable. STEP thus provides a new and potentially decisive test of Newton's law of inertia, as well as the equivalence principle in both its strong and weak forms.

  8. Visual Survey of Infantry Troops. Part 1. Visual Acuity, Refractive Status, Interpupillary Distance and Visual Skills

    DTIC Science & Technology

    1989-06-01

    letters on one line and several letters on the next line, there is no accurate way to credit these extra letters for statistical analysis. The decimal and...contains the descriptive statistics of the objective refractive error components of infantrymen. Figures 8-11 show the frequency distributions for sphere...equivalents. Nonspectacle wearers Table 12 contains the descriptive statistics for non-spectacle wearers. Based on these refractive error data, about 30

  9. On the (In)Validity of Tests of Simple Mediation: Threats and Solutions

    PubMed Central

    Pek, Jolynn; Hoyle, Rick H.

    2015-01-01

    Mediation analysis is a popular framework for identifying underlying mechanisms in social psychology. In the context of simple mediation, we review and discuss the implications of three facets of mediation analysis: (a) conceptualization of the relations between the variables, (b) statistical approaches, and (c) relevant elements of design. We also highlight the issue of equivalent models that are inherent in simple mediation. The extent to which results are meaningful stem directly from choices regarding these three facets of mediation analysis. We conclude by discussing how mediation analysis can be better applied to examine causal processes, highlight the limits of simple mediation, and make recommendations for better practice. PMID:26985234

  10. Kernel Equating Under the Non-Equivalent Groups With Covariates Design

    PubMed Central

    Bränberg, Kenny

    2015-01-01

    When equating two tests, the traditional approach is to use common test takers and/or common items. Here, the idea is to use variables correlated with the test scores (e.g., school grades and other test scores) as a substitute for common items in a non-equivalent groups with covariates (NEC) design. This is performed in the framework of kernel equating and with an extension of the method developed for post-stratification equating in the non-equivalent groups with anchor test design. Real data from a college admissions test were used to illustrate the use of the design. The equated scores from the NEC design were compared with equated scores from the equivalent group (EG) design, that is, equating with no covariates as well as with equated scores when a constructed anchor test was used. The results indicate that the NEC design can produce lower standard errors compared with an EG design. When covariates were used together with an anchor test, the smallest standard errors were obtained over a large range of test scores. The results obtained, that an EG design equating can be improved by adjusting for differences in test score distributions caused by differences in the distribution of covariates, are useful in practice because not all standardized tests have anchor tests. PMID:29881012

  11. Kernel Equating Under the Non-Equivalent Groups With Covariates Design.

    PubMed

    Wiberg, Marie; Bränberg, Kenny

    2015-07-01

    When equating two tests, the traditional approach is to use common test takers and/or common items. Here, the idea is to use variables correlated with the test scores (e.g., school grades and other test scores) as a substitute for common items in a non-equivalent groups with covariates (NEC) design. This is performed in the framework of kernel equating and with an extension of the method developed for post-stratification equating in the non-equivalent groups with anchor test design. Real data from a college admissions test were used to illustrate the use of the design. The equated scores from the NEC design were compared with equated scores from the equivalent group (EG) design, that is, equating with no covariates as well as with equated scores when a constructed anchor test was used. The results indicate that the NEC design can produce lower standard errors compared with an EG design. When covariates were used together with an anchor test, the smallest standard errors were obtained over a large range of test scores. The results obtained, that an EG design equating can be improved by adjusting for differences in test score distributions caused by differences in the distribution of covariates, are useful in practice because not all standardized tests have anchor tests.

  12. The Remote Food Photography Method accurately estimates dry powdered foods—the source of calories for many infants

    PubMed Central

    Duhé, Abby F.; Gilmore, L. Anne; Burton, Jeffrey H.; Martin, Corby K.; Redman, Leanne M.

    2016-01-01

    Background Infant formula is a major source of nutrition for infants, with over half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used prior to reconstitution. Objective To assess the use of the Remote Food Photography Method (RFPM) to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. Methods For each serving size (1-scoop, 2-scoop, 3-scoop, and 4-scoop), a set of seven test bottles and photographs was prepared, including the gram weight of powdered formula recommended by the manufacturer for the respective serving size, three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended, and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared with standard photographs were obtained using standard RFPM analysis procedures. The ratio estimates and the United States Department of Agriculture (USDA) data tables were used to generate food and nutrient information to provide the RFPM estimates. Statistical Analyses Performed Equivalence testing using the two one-sided t-test (TOST) approach was used to determine equivalence between the actual gram weights and the RFPM estimated weights for all samples, within each serving size, and within under-prepared and over-prepared bottles. Results For all bottles, the gram weights estimated by the RFPM were within 5% equivalence bounds, with a slight under-estimation of 0.05 g (90% CI [−0.49, 0.40]; p<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. 
Conclusion The maximum observed mean error was an overestimation of 1.58% of powdered formula by the RFPM under controlled laboratory conditions indicating that the RFPM accurately estimated infant powdered formula. PMID:26947889
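The TOST decision rule used in this kind of analysis (equivalence is declared only when both one-sided tests reject at level alpha, which is the same as the 90% confidence interval of the difference falling inside the equivalence bounds) can be sketched as follows. This is an illustrative implementation, not the study's code, and it substitutes a normal approximation for the t distribution, which is adequate only for moderate-to-large samples.

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

def tost_equivalence(diffs, lower, upper, alpha=0.05):
    """Two one-sided tests (TOST) on paired differences (e.g., estimated
    minus actual weights). Equivalence requires the mean difference to be
    significantly above `lower` AND significantly below `upper`.
    Uses a normal approximation in place of the t distribution."""
    n = len(diffs)
    m = mean(diffs)
    se = stdev(diffs) / sqrt(n)
    z_lo = (m - lower) / se            # tests H0: true mean <= lower
    z_hi = (upper - m) / se            # tests H0: true mean >= upper
    p_lo = 1.0 - NormalDist().cdf(z_lo)
    p_hi = 1.0 - NormalDist().cdf(z_hi)
    p = max(p_lo, p_hi)                # both one-sided tests must reject
    return p, p < alpha
```

For the study above the bounds would be set at ±5% of the recommended gram weight; here `lower` and `upper` are left generic.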

  13. Emergence of relations and the essence of learning: a review of Sidman's Equivalence relations and behavior: a research story. Book review

    NASA Technical Reports Server (NTRS)

    Rumbaugh, D. M.

    1995-01-01

    The author reviews and comments on the book Equivalence relations and behavior: a research story by Murray Sidman. Sidman's book reports his research about equivalence relations and competencies in children with mental retardation and how it relates to behavior. Sidman used the idea of stimulus-stimulus relations among features of the environment to develop his theories about equivalence relations. Experimental work with children and animals demonstrated their ability to use equivalence relations to learn new tasks. The subject received feedback and reinforcement for specific choices made during training, then was presented with new choices during testing. Results of the tests indicate that subjects were able to establish relations and retrieve them in different situations.

  14. A statistical model to estimate refractivity turbulence structure constant C sub n sup 2 in the free atmosphere

    NASA Technical Reports Server (NTRS)

    Warnock, J. M.; Vanzandt, T. E.

    1986-01-01

    A computer program has been tested and documented (Warnock and VanZandt, 1985) that estimates mean values of the refractivity turbulence structure constant in the stable free atmosphere from standard National Weather Service balloon data or an equivalent data set. The program is based on the statistical model for the occurrence of turbulence developed by VanZandt et al. (1981). Height profiles of the estimated refractivity turbulence structure constant agree well with profiles measured by the Sunset radar with a height resolution of about 1 km. The program also estimates the energy dissipation rate (epsilon), but because of the lack of suitable observations of epsilon, the model for epsilon has not yet been evaluated sufficiently to be used in routine applications. Vertical profiles of the refractivity turbulence structure constant were compared with profiles measured by both radar and optical remote sensors and good agreement was found. However, at times the scintillometer measurements were less than both the radar and model values.

  15. Analysis Monthly Import of Palm Oil Products Using Box-Jenkins Model

    NASA Astrophysics Data System (ADS)

    Ahmad, Nurul F. Y.; Khalid, Kamil; Saifullah Rusiman, Mohd; Ghazali Kamardan, M.; Roslan, Rozaini; Che-Him, Norziha

    2018-04-01

    The palm oil industry has been an important component of the national economy, especially the agriculture sector. The aim of this study is to identify patterns in imports of palm oil products, to model the time series using Box-Jenkins models, and to forecast monthly imports of palm oil products. The approach includes statistical tests for verifying model adequacy and statistical comparison of three candidate models, namely the Autoregressive (AR) model, the Moving Average (MA) model and the Autoregressive Moving Average (ARMA) model. The identified model differs by product: AR(1) was found to be the best model for palm oil imports, MA(3) for palm kernel oil imports, and MA(4) for palm kernel imports. The forecasts for the next four months of palm oil, palm kernel oil and palm kernel imports showed the most significant decrease compared to the actual data.
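A minimal sketch of the simplest of these models, an AR(1) fitted by ordinary least squares with iterated one-step forecasts, is shown below. This is illustrative only; the study's actual Box-Jenkins identification, estimation, and diagnostic checking are more involved.

```python
import numpy as np

def fit_ar1(y):
    """Least-squares AR(1) fit of the model y[t] = c + phi * y[t-1] + e[t].
    Returns the intercept c and autoregressive coefficient phi."""
    x, target = y[:-1], y[1:]
    A = np.column_stack([np.ones_like(x), x])
    (c, phi), *_ = np.linalg.lstsq(A, target, rcond=None)
    return c, phi

def forecast_ar1(y, c, phi, steps):
    """Iterate the fitted recursion forward to forecast `steps` periods
    (e.g., the next four months of imports)."""
    out, last = [], y[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out
```

Multi-step AR(1) forecasts decay geometrically toward the process mean c / (1 - phi), which is one reason higher-order AR, MA, and ARMA candidates are compared before settling on a model.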

  16. Assessment of the Equivalence of Conventional versus Computer Administration of the Test of Workplace Essential Skills

    ERIC Educational Resources Information Center

    Whiting, Hal; Kline, Theresa J. B.

    2006-01-01

    This study examined the equivalency of computer and conventional versions of the Test of Workplace Essential Skills (TOWES), a test of adult literacy skills in Reading Text, Document Use and Numeracy. Seventy-three college students completed the computer version, and their scores were compared with those who had taken the test in the conventional…

  17. Electroencephalography (EEG) in the Study of Equivalence Class Formation. An Explorative Study

    PubMed Central

    Arntzen, Erik; Steingrimsdottir, Hanna S.

    2017-01-01

    Teaching arbitrary conditional discriminations and testing for derived relations may be essential for understanding changes in cognitive skills. Such conditional discrimination procedures are often used within stimulus equivalence research. For example, the participant is taught AB and BC relations and tested for whether emergent relations such as BA, CB, AC and CA occur. The purpose of the current explorative experiment was to study stimulus equivalence class formation in older adults with electroencephalography (EEG) recordings as an additional measure. The EEG was used to learn whether there was an indication of cognitive changes such as those observed in neurocognitive disorders (NCD). The present study included four participants who underwent conditional discrimination training and testing. The experimental design employed pre-class formation sorting and post-class formation sorting of the stimuli used in the experiment. EEG recordings were conducted before training, after training and after testing. The results showed that two participants formed equivalence classes, one participant failed in one of the three test relations, and one participant failed in two of the three test relations. This fourth participant also failed to sort the stimuli in accordance with the experimenter-defined stimulus equivalence classes during post-class formation sorting. The EEG indicated no cognitive decline in the first three participants but possible mild cognitive impairment (MCI) in the fourth participant. The results suggest that equivalence class formation may provide information about cognitive impairments such as those that are likely to occur in the early stages of NCD. The study recommends replications with broader samples. PMID:28377704

  18. Estimation of sample size and testing power (Part 3).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2011-12-01

    This article introduces the definition and sample size estimation of three special tests (namely, non-inferiority test, equivalence test and superiority test) for qualitative data with the design of one factor with two levels having a binary response variable. Non-inferiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. Equivalence test refers to the research design of which the objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. Superiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. By specific examples, this article introduces formulas of sample size estimation for the three special tests, and their SAS realization in detail.
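For illustration, the standard normal-approximation formulas for per-group sample size in two-proportion non-inferiority, superiority, and equivalence designs (the kind of formulas the article presents, though not necessarily in this exact parameterization) can be sketched as below. Sign conventions for the margin `delta` vary between texts, so this is a hedged sketch rather than a reproduction of the article's SAS code.

```python
from statistics import NormalDist
from math import ceil

def n_per_group(p1, p2, delta, alpha, beta, test):
    """Per-group sample size for two-proportion designs, normal approximation.
    p1, p2: anticipated response rates; delta: clinical margin;
    alpha: type I error (one-sided for these designs); beta: type II error.
    Follows the common textbook forms (e.g., Chow, Shao & Wang)."""
    z = NormalDist().inv_cdf
    var = p1 * (1 - p1) + p2 * (1 - p2)   # pooled binomial variance term
    d = p1 - p2                           # anticipated true difference
    if test in ("non-inferiority", "superiority"):
        # one-sided test: reject H0 when the effect clears the margin
        num = (z(1 - alpha) + z(1 - beta)) ** 2 * var
        den = (d - delta) ** 2
    elif test == "equivalence":
        # two one-sided tests: note z_{1-beta/2} in place of z_{1-beta}
        num = (z(1 - alpha) + z(1 - beta / 2)) ** 2 * var
        den = (delta - abs(d)) ** 2
    else:
        raise ValueError("unknown test type")
    return ceil(num / den)
```

For example, with p1 = p2 = 0.75, an equivalence margin of 0.2, alpha = 0.05 and 80% power, the formula gives 81 subjects per group; tightening the margin to 0.1 roughly quadruples the requirement.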

  19. Student Enrollment, Full-time Equivalent (FTE), Staff/Faculty Information, Annual Statistical Report. 1995-96. Volume 31.

    ERIC Educational Resources Information Center

    Ijames, Steve; Byers, Carl

    This document contains statistical information about the North Carolina Community College System for the academic year 1995-1996. It presents a summary of the 1995-1996 information collected from each of the 58 community colleges in North Carolina, as well as historical information for an 11-year period. This report is organized in sections that…

  20. Orthodontic soft-tissue parameters: a comparison of cone-beam computed tomography and the 3dMD imaging system.

    PubMed

    Metzger, Tasha E; Kula, Katherine S; Eckert, George J; Ghoneima, Ahmed A

    2013-11-01

    Orthodontists rely heavily on soft-tissue analysis to determine facial esthetics and treatment stability. The aim of this retrospective study was to determine the equivalence of soft-tissue measurements between the 3dMD imaging system (3dMD, Atlanta, Ga) and the segmented skin surface images derived from cone-beam computed tomography. Seventy preexisting 3dMD facial photographs and cone-beam computed tomography scans taken within minutes of each other for the same subjects were registered in 3 dimensions and superimposed using Vultus (3dMD) software. After reliability studies, 28 soft-tissue measurements were recorded with both imaging modalities and compared to analyze their equivalence. Intraclass correlation coefficients and Bland-Altman plots were used to assess interexaminer and intraexaminer repeatability and agreement. Summary statistics were calculated for all measurements. To demonstrate equivalence of the 2 methods, the 95% confidence interval of the difference needed to be contained entirely within the equivalence limits defined by the repeatability results. Statistically significant differences were reported for the vermilion height, mouth width, total facial width, mouth symmetry, soft-tissue lip thickness, and eye symmetry. There are areas of nonequivalence between the 2 imaging methods; however, the differences are clinically acceptable from the orthodontic point of view. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  1. Testing an automated method to estimate ground-water recharge from streamflow records

    USGS Publications Warehouse

    Rutledge, A.T.; Daniel, C.C.

    1994-01-01

    The computer program, RORA, allows automated analysis of streamflow hydrographs to estimate ground-water recharge. Output from the program, which is based on the recession-curve-displacement method (often referred to as the Rorabaugh method, for whom the program is named), was compared to estimates of recharge obtained from a manual analysis of 156 years of streamflow record from 15 streamflow-gaging stations in the eastern United States. Statistical tests showed that there was no significant difference between paired estimates of annual recharge by the two methods. Tests of results produced by the four workers who performed the manual method showed that results can differ significantly between workers. Twenty-two percent of the variation between manual and automated estimates could be attributed to having different workers perform the manual method. The program RORA will produce estimates of recharge equivalent to estimates produced manually, greatly increase the speed of analysis, and reduce the subjectivity inherent in manual analysis.

  2. Apparent Yield Strength of Hot-Pressed SiCs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daloz, William L; Wereszczak, Andrew A; Jadaan, Osama M.

    2008-01-01

    Apparent yield strengths (YApp) of four hot-pressed silicon carbides (SiC-B, SiC-N, SiC-HPN, and SiC-SC-1RN) were estimated using diamond spherical or Hertzian indentation. The von Mises and Tresca criteria were considered. The developed test method was robust, simple and quick to execute, and thus enabled the acquisition of confident sampling statistics. The choice of indenter size, test method, and method of analysis are described. The compressive force necessary to initiate apparent yielding was identified postmortem using differential interference contrast (or Nomarski) imaging with an optical microscope. It was found that the YApp of SiC-HPN (14.0 GPa) was approximately 10% higher than the equivalently valued YApp of SiC-B, SiC-N, and SiC-SC-1RN. This discrimination in YApp shows that the use of this test method could be insightful because there were no differences among the average Knoop hardnesses of the four SiC grades.

  3. 40 CFR 53.3 - General requirements for an equivalent method determination.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... other tests, full wind-tunnel tests similar to those described in § 53.62, or to special tests adapted... 40 CFR § 53.3, Protection of Environment (2012-07-01): General requirements for an equivalent method determination. ENVIRONMENTAL PROTECTION AGENCY...

  4. 40 CFR 53.3 - General requirements for an equivalent method determination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... other tests, full wind-tunnel tests similar to those described in § 53.62, or to special tests adapted... 40 CFR § 53.3, Protection of Environment (2011-07-01): General requirements for an equivalent method determination. ENVIRONMENTAL PROTECTION AGENCY...

  5. 40 CFR 53.3 - General requirements for an equivalent method determination.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... other tests, full wind-tunnel tests similar to those described in § 53.62, or to special tests adapted... 40 CFR § 53.3, Protection of Environment (2014-07-01): General requirements for an equivalent method determination. ENVIRONMENTAL PROTECTION AGENCY...

  6. 40 CFR 53.3 - General requirements for an equivalent method determination.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... other tests, full wind-tunnel tests similar to those described in § 53.62, or to special tests adapted... 40 CFR § 53.3, Protection of Environment (2013-07-01): General requirements for an equivalent method determination. ENVIRONMENTAL PROTECTION AGENCY...

  7. Testing for multigroup equivalence of a measuring instrument: a walk through the process.

    PubMed

    Byrne, Barbara M

    2008-11-01

    This article presents an overview and application of the steps taken in testing for the equivalence of a measuring instrument across one or more groups. Following a basic description of these steps and the rationale underlying them, the process is illustrated with data comprising response scores to four nonacademic subscales (Physical SC [Ability], Physical SC [Appearance], Social SC [Peers], and Social SC [Parents]) of the Self Description Questionnaire-I for Australian (N = 497) and Nigerian (N = 439) adolescents. All tests for validity and equivalence are based on the analysis of covariance structures within the framework of CFA models using the EQS 6 program. Prospective impediments to equivalence are suggested and additional caveats proposed in the special case where the groups under study represent different cultures.

  8. Theory and experiment in gravitational physics

    NASA Technical Reports Server (NTRS)

    Will, C. M.

    1981-01-01

    New technological advances have made it feasible to conduct measurements with precision levels which are suitable for experimental tests of the theory of general relativity. This book has been designed to fill a new need for a complete treatment of techniques for analyzing gravitation theory and experience. The Einstein equivalence principle and the foundations of gravitation theory are considered, taking into account the Dicke framework, basic criteria for the viability of a gravitation theory, experimental tests of the Einstein equivalence principle, Schiff's conjecture, and a model theory devised by Lightman and Lee (1973). Gravitation as a geometric phenomenon is considered along with the parametrized post-Newtonian formalism, the classical tests, tests of the strong equivalence principle, gravitational radiation as a tool for testing relativistic gravity, the binary pulsar, and cosmological tests.

  9. Theory and experiment in gravitational physics

    NASA Astrophysics Data System (ADS)

    Will, C. M.

New technological advances have made it feasible to conduct measurements with precision levels suitable for experimental tests of the theory of general relativity. This book has been designed to fill the need for a complete treatment of techniques for analyzing gravitation theory and experiment. The Einstein equivalence principle and the foundations of gravitation theory are considered, taking into account the Dicke framework, basic criteria for the viability of a gravitation theory, experimental tests of the Einstein equivalence principle, Schiff's conjecture, and a model theory devised by Lightman and Lee (1973). Gravitation as a geometric phenomenon is considered along with the parametrized post-Newtonian formalism, the classical tests, tests of the strong equivalence principle, gravitational radiation as a tool for testing relativistic gravity, the binary pulsar, and cosmological tests.

  10. Advancing Research on Racial–Ethnic Health Disparities: Improving Measurement Equivalence in Studies with Diverse Samples

    PubMed Central

    Landrine, Hope; Corral, Irma

    2014-01-01

    To conduct meaningful, epidemiologic research on racial–ethnic health disparities, racial–ethnic samples must be rendered equivalent on other social status and contextual variables via statistical controls of those extraneous factors. The racial–ethnic groups must also be equally familiar with and have similar responses to the methods and measures used to collect health data, must have equal opportunity to participate in the research, and must be equally representative of their respective populations. In the absence of such measurement equivalence, studies of racial–ethnic health disparities are confounded by a plethora of unmeasured, uncontrolled correlates of race–ethnicity. Those correlates render the samples, methods, and measures incomparable across racial–ethnic groups, and diminish the ability to attribute health differences discovered to race–ethnicity vs. to its correlates. This paper reviews the non-equivalent yet normative samples, methodologies and measures used in epidemiologic studies of racial–ethnic health disparities, and provides concrete suggestions for improving sample, method, and scalar measurement equivalence. PMID:25566524

  11. Skin integrated with perfusable vascular channels on a chip.

    PubMed

    Mori, Nobuhito; Morimoto, Yuya; Takeuchi, Shoji

    2017-02-01

    This paper describes a method for fabricating perfusable vascular channels coated with endothelial cells within a cultured skin-equivalent by fixing it to a culture device connected to an external pump and tubes. A histological analysis showed that vascular channels were constructed in the skin-equivalent, which showed a conventional dermal/epidermal morphology, and the endothelial cells formed tight junctions on the vascular channel wall. The barrier function of the skin-equivalent was also confirmed. Cell distribution analysis indicated that the vascular channels supplied nutrition to the skin-equivalent. Moreover, the feasibility of a skin-equivalent containing vascular channels as a model for studying vascular absorption was demonstrated by measuring test molecule permeation from the epidermal layer into the vascular channels. The results suggested that this skin-equivalent can be used for skin-on-a-chip applications including drug development, cosmetics testing, and studying skin biology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Dark matter and the equivalence principle

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gradwohl, Ben-Ami

    1993-01-01

A survey is presented of the current understanding of dark matter invoked by astrophysical theory and cosmology. Einstein's equivalence principle asserts that local measurements cannot distinguish a system at rest in a gravitational field from one in uniform acceleration in empty space. Recent tests of the equivalence principle are discussed as bases for testing dark matter scenarios involving long-range forces between either baryonic or nonbaryonic dark matter and ordinary matter.

  13. Effects of select and reject control on equivalence class formation and transfer of function.

    PubMed

    Perez, William F; Tomanari, Gerson Y; Vaidya, Manish

    2015-09-01

The present study used a single-subject design to evaluate the effects of select or reject control on equivalence class formation and transfer of function. Adults were exposed to a matching-to-sample task with observing requirements (MTS-OR) in order to bias the establishment of sample/S+ (select) or sample/S- (reject) relations. In Experiment 1, four sets of baseline conditional relations were taught: two under reject control (A1B2C1, A2B1C2) and two under select control (D1E1F1, D2E2F2). Participants were tested for transitivity, symmetry, equivalence, and reflexivity. They also learned a simple discrimination involving one of the stimuli from the equivalence classes and were tested for the transfer of the discriminative function. In general, participants performed with high accuracy on all equivalence-related probes as well as the transfer-of-function probes under select control. Under reject control, participants had high scores only on the symmetry test; transfer of function was attributed to stimuli programmed as S-. In Experiment 2, the equivalence class under reject control was expanded to four members (A1B2C1D2; A2B1C2D1). Participants had high scores only on symmetry and on transitivity and equivalence tests involving two nodes. Transfer of function was extended to the programmed S- added to each class. Results from both experiments suggest that select and reject control might differently affect the formation of equivalence classes and the transfer of stimulus functions. © Society for the Experimental Analysis of Behavior.

  14. Theoretical Conversions of Different Hardness and Tensile Strength for Ductile Materials Based on Stress-Strain Curves

    NASA Astrophysics Data System (ADS)

    Chen, Hui; Cai, Li-Xun

    2018-04-01

Based on the power-law stress-strain relation and the equivalent energy principle, theoretical equations for converting between Brinell hardness (HB), Rockwell hardness (HR), and Vickers hardness (HV) were established. Combining the pre-existing relation between the tensile strength (σb) and the Hollomon parameters (K, N), theoretical conversions between hardness (HB/HR/HV) and tensile strength (σb) were obtained as well. In addition, to confirm the pre-existing σb-(K, N) relation, a large number of uniaxial tensile tests were conducted on various ductile materials. Finally, to verify the theoretical conversions, extensive statistical data listed in ASTM and ISO standards were adopted to test the robustness of the conversion equations across various hardness and tensile strength values. The results show that both the hardness conversions and the hardness-strength conversions calculated from the theoretical equations accord well with the standard data.
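    One common form of the σb-(K, N) relation referenced above combines the Hollomon power law σ = K ε^N with the Considère necking criterion ε = N, giving σb = K N^N e^(-N). A minimal sketch of that conversion, assuming this standard form (the paper's own fitted constants are not reproduced here) and using purely illustrative parameter values:

    ```python
    import math

    def tensile_strength(K: float, n: float) -> float:
        """Engineering ultimate tensile strength from Hollomon parameters.

        Assumes power-law hardening sigma_true = K * eps_true**n with
        diffuse necking at eps_true = n (Considere criterion), so that
        sigma_b = K * n**n * exp(-n).  Illustrative values only.
        """
        return K * n ** n * math.exp(-n)

    # Hypothetical steel-like parameters: K = 800 MPa, n = 0.2
    print(round(tensile_strength(800.0, 0.2), 1))
    ```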

  15. [Determining the efficacy of a high-school life-skills' programme in Huancavelica, Peru].

    PubMed

    Choque-Larrauri, Raúl; Chirinos-Cáceres, Jesús Lorenzo

    2009-01-01

Determining the efficacy of a life-skills' programme within the context of a school health promotion programme using teenagers from a high school in the district of Huancavelica, Peru, during school year 2006. This was a non-equivalent-groups experimental study with pre-test and post-test. The subjects were 284 high school students. The variables analyzed were communication, self-esteem, assertiveness, decision-making, sex, and age. There was a significant increase in the experimental group's development of communication and assertiveness skills. There were no significant differences in decision-making and self-esteem skills. The life-skills' programme was effective during one school year, especially in terms of learning and developing communication and assertiveness skills. However, self-esteem and decision-making skills did not show a statistically significant difference. Programme implementation must thus be redirected, and the life-skills' programme should be implemented throughout all high school years.

  16. A controlled phantom study of a noise equalization algorithm for detecting microcalcifications in digital mammograms.

    PubMed

    Gürün, O O; Fatouros, P P; Kuhn, G M; de Paredes, E S

    2001-04-01

    We report on some extensions and further developments of a well-known microcalcification detection algorithm based on adaptive noise equalization. Tissue equivalent phantom images with and without labeled microcalcifications were subjected to this algorithm, and analyses of results revealed some shortcomings in the approach. Particularly, it was observed that the method of estimating the width of distributions in the feature space was based on assumptions which resulted in the loss of similarity preservation characteristics. A modification involving a change of estimator statistic was made, and the modified approach was tested on the same phantom images. Other modifications for improving detectability such as downsampling and use of alternate local contrast filters were also tested. The results indicate that these modifications yield improvements in detectability, while extending the generality of the approach. Extensions to real mammograms and further directions of research are discussed.

  17. Effect of case-based learning on the development of graduate nurses' problem-solving ability.

    PubMed

    Yoo, Moon-Sook; Park, Jin-Hee

    2014-01-01

Case-based learning (CBL) is a teaching strategy which promotes clinical problem-solving ability. This research was performed to investigate the effects of CBL on the problem-solving ability of graduate nurses. This research was a quasi-experimental design using pre-test, intervention, and post-test with a non-synchronized, non-equivalent control group. The study population was composed of 190 new graduate nurses from university hospital A in Korea. Results indicate that there was a statistically significant difference in objective problem-solving ability scores, with the CBL group demonstrating higher scores. Subjective problem-solving ability was also significantly higher in the CBL group than in the lecture-based group. These results suggest that CBL is a beneficial and effective instructional method for training graduate nurses to improve their clinical problem-solving ability. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Natural air leak test without submergence for spontaneous pneumothorax.

    PubMed

    Uramoto, Hidetaka; Tanaka, Fumihiro

    2011-12-24

    Postoperative air leaks are frequent complications after surgery for a spontaneous pneumothorax (SP). We herein describe a new method to test for air leaks by using a transparent film and thoracic tube in a closed system. Between 2005 and 2010, 35 patients underwent a novel method for evaluating air leaks without submergence, and their clinical records were retrospectively reviewed. The data on patient characteristics, surgical details, and perioperative outcomes were analyzed. The differences in the clinical background and intraoperative factors did not reach a statistically significant level between the new and classical methods. The incidence of recurrence was also equivalent to the standard method. However, the length of the operation and drainage periods were significantly shorter in patients evaluated using the new method than the conventional method. Further, no postoperative complications were observed in patients evaluated using the new method. This simple technique is satisfactorily effective and does not result in any complications.

  19. Quantitative assessment of Naegleria fowleri and Escherichia coli concentrations within a Texas reservoir.

    PubMed

    Painter, Stephanie M; Pfau, Russell S; Brady, Jeff A; McFarland, Anne M S

    2013-06-01

    Previous presence/absence studies have indicated a correlation between the presence of the pathogenic amoeba Naegleria fowleri and the presence of bacteria, such as the fecal indicator Escherichia coli, in environmental surface waters. The objective of this study was to use quantitative real-time polymerase chain reaction (qPCR) methodologies to measure N. fowleri and E. coli concentrations within a Texas reservoir in late summer, and to determine if concentrations of N. fowleri and E. coli were statistically correlated. N. fowleri was detected in water samples from 67% of the reservoir sites tested, with concentrations ranging up to an estimated 26 CE (cell equivalents)/100 mL. E. coli was detected in water samples from 60% of the reservoir sites tested, with concentrations ranging up to 427 CE/100 mL. In this study, E. coli concentrations were not indicative of N. fowleri concentrations.
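    The correlation question posed above (are paired concentration estimates statistically related?) can be sketched with a plain sample Pearson correlation. The CE/100 mL values below are hypothetical, not the study's data:

    ```python
    from statistics import mean, stdev

    def pearson_r(x, y):
        """Sample Pearson correlation coefficient for paired observations."""
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
        return cov / (stdev(x) * stdev(y))

    # Hypothetical paired site concentrations (CE/100 mL):
    n_fowleri = [0, 3, 26, 8, 0, 12]
    e_coli = [410, 15, 55, 2, 230, 90]
    print(round(pearson_r(n_fowleri, e_coli), 3))
    ```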

  20. Exchanging the liquidity hypothesis: Delay discounting of money and self-relevant non-money rewards

    PubMed Central

    Stuppy-Sullivan, Allison M.; Tormohlen, Kayla N.; Yi, Richard

    2015-01-01

    Evidence that primary rewards (e.g., food and drugs of abuse) are discounted more than money is frequently attributed to money's high degree of liquidity, or exchangeability for many commodities. The present study provides some evidence against this liquidity hypothesis by contrasting delay discounting of monetary rewards (liquid) and non-monetary commodities (non-liquid) that are self-relevant and utility-matched. Ninety-seven (97) undergraduate students initially completed a conventional binary-choice delay discounting of money task. Participants returned one week later and completed a self-relevant commodity delay discounting task. Both conventional hypothesis testing and more-conservative tests of statistical equivalence revealed correspondence in rate of delay discounting of money and self-relevant commodities, and in one magnitude condition, less discounting for the latter. The present results indicate that liquidity of money cannot fully account for the lower rate of delay discounting compared to non-money rewards. PMID:26556504
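    The "more-conservative tests of statistical equivalence" mentioned above are commonly implemented as two one-sided tests (TOST). A minimal sketch, using a large-sample normal approximation (z rather than t, to stay stdlib-only) and hypothetical log-discount-rate data rather than the study's:

    ```python
    import math
    from statistics import mean, stdev

    def tost_equivalence(x, y, delta):
        """Two one-sided tests (TOST) for mean equivalence.

        Returns the larger of the two one-sided p-values; a value below
        alpha supports |mu_x - mu_y| < delta.  Large-sample normal
        approximation (z instead of t) used for simplicity.
        """
        d = mean(x) - mean(y)
        se = math.sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))

        def norm_cdf(z):
            return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

        p_lower = 1.0 - norm_cdf((d + delta) / se)  # H0: d <= -delta
        p_upper = norm_cdf((d - delta) / se)        # H0: d >= +delta
        return max(p_lower, p_upper)

    # Hypothetical log10(k) discount rates (not the study's data):
    money = [-2.1, -1.8, -2.3, -2.0, -1.9, -2.2, -2.1, -2.0]
    goods = [-2.0, -1.9, -2.2, -2.1, -2.0, -2.1, -2.2, -1.9]
    print(round(tost_equivalence(money, goods, delta=0.5), 4))
    ```

    Rejecting both one-sided nulls is what licenses the equivalence claim; failing to reject a conventional difference test does not.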

  1. A comparison of the efficacy of ketotifen (HC 20-511) with sodium cromoglycate (SCG) in skin test positive asthma.

    PubMed Central

    Clarke, C W; May, C S

    1980-01-01

1 Ketotifen (HC 20-511 Sandoz) 1 mg twice daily for 12 weeks was found to be equivalent to sodium cromoglycate (SCG) 20 mg four times daily for 12 weeks in 35 skin test positive asthmatic patients in a randomised double-blind cross-over study. 2 No statistically significant difference between the two drugs in mean values for daily peak flow rates, diary card scores and spirometry at monthly visits was demonstrated. 3 Treatment failures as judged by severe asthma requiring withdrawal from the trial or addition of short courses of prednisone occurred in three patients on each drug. 4 Sedation was noted by 10 patients on HC 20-511 and 5 on SCG. 5 Weight loss was noted in those patients on SCG, but not those on HC 20-511. PMID:6108129

  2. Low and High Frequency Models of Response Statistics of a Cylindrical Orthogrid Vehicle Panel to Acoustic Excitation

    NASA Technical Reports Server (NTRS)

    Smith, Andrew; LaVerde, Bruce; Teague, David; Gardner, Bryce; Cotoni, Vincent

    2010-01-01

This presentation further develops the orthogrid vehicle panel work, employing Hybrid Module capabilities to assess both low/mid-frequency and high-frequency models in the VA One simulation environment. The response estimates from three modeling approaches are compared to ground test measurements. The first is a detailed finite element model of the test article, expected to capture both the global panel modes and the local pocket-mode response, but at considerable analysis expense (time and resources). The second is a composite layered construction equivalent global stiffness approximation using SEA, expected to capture the response of the global panel modes only. The third is an SEA approximation using the periodic subsystem formulation, in which a finite element model of a single periodic cell is used to derive the vibroacoustic properties of the entire periodic structure (modal density, radiation efficiency, etc.); this approach is expected to capture the response at various locations on the panel (on the skin and on the ribs) with less analysis expense.

  3. Long-range correlations, geometrical structure, and transport properties of macromolecular solutions. The equivalence of configurational statistics and geometrodynamics of large molecules.

    PubMed

    Mezzasalma, Stefano A

    2007-12-04

A special theory of Brownian relativity was previously proposed to describe the universal picture arising in ideal polymer solutions. In brief, it redefines a Gaussian macromolecule in a 4-dimensional diffusive spacetime, establishing a (weak) Lorentz-Poincaré invariance between liquid and polymer Einstein's laws for Brownian movement. Here, aimed at inquiring into the effect of correlations, we deepen the extension of the special theory to a general formulation. The previous statistical equivalence, for dynamic trajectories of liquid molecules and static configurations of macromolecules, rather obvious in uncorrelated systems, is enlarged by a more general principle of equivalence, for configurational statistics and geometrodynamics. Accordingly, the three equations of geodesic motion, continuity, and field could be rewritten, and a number of scaling behaviors were recovered in a spacetime endowed with a general static isotropic metric (i.e., for equilibrium polymer solutions). We also dealt with universality in the volume fraction and, unexpectedly, found that a hyperscaling relation of the form (average size) x (diffusivity) x (viscosity)^(1/2) ~ f(N^0, phi^0) is fulfilled in several regimes, both in the chain monomer number (N) and the polymer volume fraction (phi). Entangled macromolecular dynamics was treated as a geodesic light deflection, entanglements acting in close analogy to the field generated by a spherically symmetric mass source, where length fluctuations of the chain primitive path behave as azimuth fluctuations of its shape. Finally, the general transformation rule for translational and diffusive frames gives a coordinate gauge invariance, suggesting a widened Lorentz-Poincaré symmetry for Brownian statistics. We expect this approach to find effective applications to solutions of arbitrarily large molecules displaying a variety of structures, where the effect of geometry is more explicit and significant in itself (e.g., surfactants, lipids, proteins).

  4. Cross-cultural equivalence of the patient- and parent-reported quality of life in short stature youth (QoLISSY) questionnaire.

    PubMed

    Bullinger, Monika; Quitmann, Julia; Silva, Neuza; Rohenkohl, Anja; Chaplin, John E; DeBusk, Kendra; Mimoun, Emmanuelle; Feigerlova, Eva; Herdman, Michael; Sanz, Dolores; Wollmann, Hartmut; Pleil, Andreas; Power, Michael

    2014-01-01

Testing the cross-cultural equivalence of patient-reported outcomes requires sufficiently large samples per country, which is difficult to achieve in rare endocrine paediatric conditions. We describe a novel approach to cross-cultural testing of the Quality of Life in Short Stature Youth (QoLISSY) questionnaire in five countries by sequentially taking one country out (TOCO) from the total sample and iteratively comparing the resulting psychometric performance. Development of the QoLISSY proceeded from focus group discussions through pilot testing to field testing in 268 short-statured patients and their parents. To explore cross-cultural equivalence, the iterative TOCO technique was used to examine and compare the validity, reliability, and convergence of patient and parent responses on the QoLISSY in the field test dataset, and to predict QoLISSY scores from clinical, socio-demographic, and psychosocial variables. Validity and reliability indicators were satisfactory for each sample after iteratively omitting one country. Comparisons with the total sample revealed cross-cultural equivalence in internal consistency and construct validity for patients and parents, high inter-rater agreement, and a substantial proportion of QoLISSY variance explained by the predictors. The TOCO technique is a powerful method for overcoming the problems of country-specific testing of patient-reported outcome instruments. It provides empirical support for QoLISSY's cross-cultural equivalence and is recommended for future research.
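    The iterative take-one-country-out idea is straightforward to sketch: recompute a reliability index (here Cronbach's alpha) on the pooled sample with each country omitted in turn, and compare the results across omissions. The countries, three-item scale, and scores below are all hypothetical:

    ```python
    from statistics import variance

    def cronbach_alpha(rows):
        """Cronbach's alpha for respondent rows (items as columns)."""
        k = len(rows[0])
        item_var = sum(variance(col) for col in zip(*rows))
        total_var = variance([sum(r) for r in rows])
        return k / (k - 1) * (1.0 - item_var / total_var)

    # Hypothetical 3-item scores per country:
    data = {
        "DE": [(3, 4, 3), (4, 4, 5), (2, 3, 2), (5, 4, 4)],
        "FR": [(4, 5, 4), (3, 3, 3), (5, 5, 4), (2, 2, 3)],
        "UK": [(3, 3, 4), (4, 5, 5), (2, 2, 2), (4, 4, 3)],
    }
    # Take-one-country-out: alpha on the pooled sample minus each country.
    for omitted in data:
        pooled = [r for c, rows in data.items() if c != omitted for r in rows]
        print(omitted, round(cronbach_alpha(pooled), 3))
    ```

    Stable alpha (and, in the paper, validity indices) across omissions is what supports the cross-cultural equivalence claim.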

  5. Quadriceps tendon rupture: a biomechanical comparison of transosseous equivalent double-row suture anchor versus transosseous tunnel repair.

    PubMed

    Hart, Nathan D; Wallace, Matthew K; Scovell, J Field; Krupp, Ryan J; Cook, Chad; Wyland, Douglas J

    2012-09-01

Quadriceps rupture off the patella is traditionally repaired by a transosseous tunnel technique, although a single-row suture anchor repair has recently been described. This study biomechanically compared a new transosseous-equivalent (TE) double-row suture anchor technique with the transosseous tunnel repair for quadriceps rupture. After simulated quadriceps-patella avulsion in 10 matched cadaveric knees, repairs were completed by either a three-tunnel transosseous (TT = 5) or a TE suture anchor (TE = 5) technique. Double-row repairs were done using two 5.5 Bio-Corkscrew FT (fully threaded) (Arthrex, Inc., Naples, FL, USA) and two 3.5 Bio-PushLock anchors (Arthrex, Inc., Naples, FL, USA), with all 10 repairs done with #2 FiberWire suture (Arthrex, Inc., Naples, FL). Cyclic testing from 50 to 250 N for 250 cycles and pull-to-failure loading (1 mm/s) were undertaken. Gap formation and ultimate tensile load (N) were recorded, and stiffness data (N/mm) were calculated. Statistical analysis was performed using a Mann-Whitney U test, and survival characteristics were examined with a Kaplan-Meier test. No significant difference was found between the TE and TT groups in stiffness (TE = 134 +/- 15 N/mm, TT = 132 +/- 26 N/mm, p = 0.28). The TE group had significantly lower ultimate tensile load (N) than the TT group (TE = 447 +/- 86 N, TT = 591 +/- 84 N, p = 0.04), with all failures occurring at the suture eyelets. Although both quadriceps repairs were sufficiently strong, the transosseous repairs were stronger than the TE suture anchor repairs. Repair stiffness and gap formation were similar between the groups.
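    The Mann-Whitney U statistic used above can be computed directly from pairwise comparisons. A minimal sketch with hypothetical ultimate-load values (only the group means above come from the study; p-value lookup is omitted):

    ```python
    def mann_whitney_u(a, b):
        """Mann-Whitney U (smaller of U_a, U_b) via pairwise comparisons.

        Counts, over all (x, y) pairs, 1 for x > y and 0.5 for ties;
        this equals the rank-sum definition of U_a.
        """
        u_a = sum(1.0 if x > y else 0.5 if x == y else 0.0
                  for x in a for y in b)
        u_b = len(a) * len(b) - u_a
        return min(u_a, u_b)

    # Hypothetical ultimate-load data (N), five repairs per group:
    te = [447, 430, 520, 390, 460]   # transosseous-equivalent anchors
    tt = [591, 610, 540, 650, 560]   # transosseous tunnels
    print(mann_whitney_u(te, tt))    # small U suggests a group difference
    ```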

  6. Basic Concepts in Classical Test Theory: Tests Aren't Reliable, the Nature of Alpha, and Reliability Generalization as a Meta-analytic Method.

    ERIC Educational Resources Information Center

    Helms, LuAnn Sherbeck

    This paper discusses the fact that reliability is about scores and not tests and how reliability limits effect sizes. The paper also explores the classical reliability coefficients of stability, equivalence, and internal consistency. Stability is concerned with how stable test scores will be over time, while equivalence addresses the relationship…

  7. R-on-1 automatic mapping: A new tool for laser damage testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hue, J.; Garrec, P.; Dijon, J.

    1996-12-31

Laser damage threshold measurement is statistical in nature. For a commercial qualification or for a user, the threshold determined by the weakest point is a satisfactory characterization. When a new coating is designed, threshold mapping is very useful. It enables the technology to be improved and followed more accurately. Different statistical parameters such as the minimum, maximum, average, and standard deviation of the damage threshold, as well as spatial parameters such as the threshold uniformity of the coating, can be determined. Therefore, in order to achieve a mapping, all the tested sites should give data. This is the major interest of the R-on-1 test, in spite of the fact that the laser damage threshold obtained by this method may be different from the 1-on-1 test (smaller or greater). Moreover, on the damage laser test facility, the beam size is smaller (diameters of a few hundred micrometers) than the characteristic sizes of the components in use (diameters of several centimeters up to one meter). Hence, a laser damage threshold mapping appears very interesting, especially for applications linked to large optical components like the Megajoule project or the National Ignition Facility (NIF). On the test bench used, damage detection with a Nomarski microscope and scattered light measurement are almost equivalent. Therefore, it becomes possible to automatically detect on line the first defects induced by YAG irradiation. Scattered light mappings and laser damage threshold mappings can therefore be achieved using an X-Y automatic stage (where the test sample is located). The major difficulties due to the automatic capabilities are shown. These characterizations are illustrated at 355 nm. The numerous experiments performed show different kinds of scattering curves, which are discussed in relation with the damage mechanisms.

  8. Investigating the relationship between foveal morphology and refractive error in a population with infantile nystagmus syndrome.

    PubMed

    Healey, Natasha; McLoone, Eibhlin; Mahon, Gerald; Jackson, A Jonathan; Saunders, Kathryn J; McClelland, Julie F

    2013-04-26

    We explored associations between refractive error and foveal hypoplasia in infantile nystagmus syndrome (INS). We recruited 50 participants with INS (albinism n = 33, nonalbinism infantile nystagmus [NAIN] n = 17) aged 4 to 48 years. Cycloplegic refractive error and logMAR acuity were obtained. Spherical equivalent (SER), most ametropic meridian (MAM) refractive error, and better eye acuity (VA) were used for analyses. High resolution spectral-domain optical coherence tomography (SD-OCT) was used to obtain foveal scans, which were graded using the Foveal Hypoplasia Grading Scale. Associations between grades of severity of foveal hypoplasia, and refractive error and VA were explored. Participants with more severe foveal hypoplasia had significantly higher MAMs and SERs (Kruskal-Wallis H test P = 0.005 and P = 0.008, respectively). There were no statistically significant associations between foveal hypoplasia and cylindrical refractive error (Kruskal-Wallis H test P = 0.144). Analyses demonstrated significant differences between participants with albinism or NAIN in terms of SER and MAM (Mann-Whitney U test P = 0.001). There were no statistically significant differences between astigmatic errors between participants with albinism and NAIN. Controlling for the effects of albinism, results demonstrated no significant associations between SER, and MAM and foveal hypoplasia (partial correlation P > 0.05). Poorer visual acuity was associated statistically significantly with more severe foveal hypoplasia (Kruskal-Wallis H test P = 0.001) and with a diagnosis of albinism (Mann-Whitney U test P = 0.001). Increasing severity of foveal hypoplasia is associated with poorer VA, reflecting reduced cone density in INS. Individuals with INS also demonstrate a significant association between more severe foveal hypoplasia and increasing hyperopia. 
However, in the absence of albinism, there is no significant relation between refractive outcome and degree of foveal hypoplasia, suggesting that foveal maldevelopment in isolation does not significantly impair the emmetropization process. It is likely that the impaired emmetropization evidenced in the albinism group may be attributed to the whole-eye effect of albinism.
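    The Kruskal-Wallis H statistic used repeatedly above can be sketched in a few lines. The spherical-equivalent-by-hypoplasia-grade data below are hypothetical, and the chi-square p-value lookup is omitted:

    ```python
    def kruskal_h(*groups):
        """Kruskal-Wallis H statistic using midranks (no tie correction)."""
        pooled = sorted(v for g in groups for v in g)
        n = len(pooled)

        def midrank(v):
            # average 1-based rank over all positions where v occurs
            first = pooled.index(v) + 1
            return first + (pooled.count(v) - 1) / 2.0

        ss = sum(len(g) * (sum(midrank(v) for v in g) / len(g)
                           - (n + 1) / 2.0) ** 2
                 for g in groups)
        return 12.0 * ss / (n * (n + 1))

    # Hypothetical spherical equivalents (D) for three hypoplasia grades:
    grade1 = [0.5, 1.0, 0.75, 1.25]
    grade2 = [1.5, 2.0, 1.75, 2.5]
    grade3 = [3.0, 3.5, 2.75, 4.0]
    print(round(kruskal_h(grade1, grade2, grade3), 3))
    ```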

  9. High precision test of the equivalence principle

    NASA Astrophysics Data System (ADS)

    Schlamminger, Stephan; Wagner, Todd; Choi, Ki-Young; Gundlach, Jens; Adelberger, Eric

    2007-05-01

The equivalence principle is the underlying foundation of General Relativity. Many modern quantum theories of gravity predict violations of the equivalence principle. We are using a rotating torsion balance to search for a new equivalence-principle-violating, long-range interaction. A sensitive torsion balance is mounted on a turntable rotating with constant angular velocity. On the torsion pendulum, beryllium and titanium test bodies are installed in a composition dipole configuration. A violation of the equivalence principle would yield a differential acceleration of the two materials towards a source mass. I will present measurements with a differential acceleration sensitivity of 3x10^-15 m/s^2. To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2007.NWS07.B3.5

  10. Testing of an oral dosing technique for double-crested cormorants, Phalacrocorax auritus, laughing gulls, Leucophaeus atricilla, homing pigeons, Columba livia, and western sandpipers, Calidris mauri, with artificially weathered MC252 oil.

    PubMed

    Dean, K M; Cacela, D; Carney, M W; Cunningham, F L; Ellis, C; Gerson, A R; Guglielmo, C G; Hanson-Dorr, K C; Harr, K E; Healy, K A; Horak, K E; Isanhart, J P; Kennedy, L V; Link, J E; Lipton, I; McFadden, A K; Moye, J K; Perez, C R; Pritsos, C A; Pritsos, K L; Muthumalage, T; Shriner, S A; Bursian, S J

    2017-12-01

Scoping studies were designed to determine if double-crested cormorants (Phalacrocorax auritus), laughing gulls (Leucophaeus atricilla), homing pigeons (Columba livia), and western sandpipers (Calidris mauri) that were gavaged with a mixture of artificially weathered MC252 oil and food for either a single day or 4-5 consecutive days showed signs of oil toxicity. Where volume allowed, samples were collected for hematology, plasma protein electrophoresis, clinical chemistry and electrolytes, oxidative stress, and organ weight changes. Double-crested cormorants, laughing gulls, and western sandpipers all excreted oil within 30 min of dosing, while pigeons regurgitated less than one hour after dosing. There were species differences in the effectiveness of the dosing technique, with double-crested cormorants having the greatest number of responsive endpoints at the completion of the trial. Statistically significant changes in packed cell volume, white cell counts, alkaline phosphatase, alanine aminotransferase, creatine phosphokinase, gamma glutamyl transferase, uric acid, chloride, sodium, potassium, calcium, total glutathione, glutathione disulfide, reduced glutathione, and spleen and liver weights were measured in double-crested cormorants. Homing pigeons had statistically significant changes in creatine phosphokinase, total glutathione, glutathione disulfide, reduced glutathione, and Trolox equivalents. Laughing gulls exhibited statistically significant decreases in spleen and kidney weight, and no changes were observed in any measurement endpoints tested in western sandpipers. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Effectiveness of probiotic, chlorhexidine and fluoride mouthwash against Streptococcus mutans – Randomized, single-blind, in vivo study

    PubMed Central

    Jothika, Mohan; Vanajassun, P. Pranav; Someshwar, Battu

    2015-01-01

Aim: To determine the short-term efficiency of probiotic, chlorhexidine, and fluoride mouthwashes on plaque Streptococcus mutans levels at four periodic intervals. Materials and Methods: This was a single-blind, randomized control study in which each subject was tested with only one mouthwash regimen. Fifty-two healthy qualified adult patients were selected randomly for the study and were divided into the following groups: group 1: 10 ml of distilled water; group 2: 10 ml of 0.2% chlorhexidine mouthwash; group 3: 10 ml of 500 ppm F/400 ml sodium fluoride mouthwash; and group 4: 10 ml of probiotic mouthwash. Plaque samples were collected from the buccal surface of premolars and molars in the maxillary quadrant. Sampling was carried out by a single examiner at 7 days, 14 days, and 30 days, respectively, after the use of the mouthwash. All the samples were subjected to microbiological analysis and statistically analyzed with one-way analysis of variance (ANOVA) and a post-hoc test. Results: One-way ANOVA comparison among groups 2, 3, and 4 showed no statistical significance, whereas group 1 showed a statistically significant difference when compared with groups 2, 3, and 4 at the 7th, 14th, and 30th day. Conclusion: Chlorhexidine, sodium fluoride, and probiotic mouthwashes reduce plaque S. mutans levels. Probiotic mouthwash is effective and equivalent to chlorhexidine and sodium fluoride mouthwashes. Thus, probiotic mouthwash can also be considered an effective oral hygiene regimen. PMID:25984467
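    The one-way ANOVA reported above reduces to a ratio of between-group to within-group mean squares. A minimal sketch with hypothetical colony-count data (the four-group design mirrors the study; the values do not, and the F-distribution p-value lookup is omitted):

    ```python
    from statistics import mean

    def anova_f(*groups):
        """One-way ANOVA F statistic (between-group MS / within-group MS)."""
        grand = mean(v for g in groups for v in g)
        k = len(groups)
        n = sum(len(g) for g in groups)
        ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
        ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
        return (ss_between / (k - 1)) / (ss_within / (n - k))

    # Hypothetical S. mutans counts (CFU x 10^3) per mouthwash group:
    water = [120, 135, 128, 140]
    chlorhexidine = [40, 38, 45, 42]
    fluoride = [44, 41, 47, 43]
    probiotic = [42, 39, 46, 41]
    print(round(anova_f(water, chlorhexidine, fluoride, probiotic), 1))
    ```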

  12. MIDAS: Regionally linear multivariate discriminative statistical mapping.

    PubMed

    Varol, Erdem; Sotiras, Aristeidis; Davatzikos, Christos

    2018-07-01

    Statistical parametric maps formed via voxel-wise mass-univariate tests, such as the general linear model, are commonly used to test hypotheses about regionally specific effects in neuroimaging cross-sectional studies where each subject is represented by a single image. Despite being informative, these techniques remain limited as they ignore multivariate relationships in the data. Most importantly, the commonly employed local Gaussian smoothing, which is important for accounting for registration errors and making the data follow Gaussian distributions, is usually chosen in an ad hoc fashion. Thus, it is often suboptimal for the task of detecting group differences and correlations with non-imaging variables. Information mapping techniques, such as searchlight, which use pattern classifiers to exploit multivariate information and obtain more powerful statistical maps, have become increasingly popular in recent years. However, existing methods may lead to important interpretation errors in practice (i.e., misidentifying a cluster as informative, or failing to detect truly informative voxels), while often being computationally expensive. To address these issues, we introduce a novel efficient multivariate statistical framework for cross-sectional studies, termed MIDAS, seeking highly sensitive and specific voxel-wise brain maps, while leveraging the power of regional discriminant analysis. In MIDAS, locally linear discriminative learning is applied to estimate the pattern that best discriminates between two groups, or predicts a variable of interest. This pattern is equivalent to local filtering by an optimal kernel whose coefficients are the weights of the linear discriminant. By composing information from all neighborhoods that contain a given voxel, MIDAS produces a statistic that collectively reflects the contribution of the voxel to the regional classifiers as well as the discriminative power of the classifiers. 
Critically, MIDAS efficiently assesses the statistical significance of the derived statistic by analytically approximating its null distribution without the need for computationally expensive permutation tests. The proposed framework was extensively validated using simulated atrophy in structural magnetic resonance imaging (MRI) and further tested using data from a task-based functional MRI study as well as a structural MRI study of cognitive performance. The performance of the proposed framework was evaluated against standard voxel-wise general linear models and other information mapping methods. The experimental results showed that MIDAS achieves relatively higher sensitivity and specificity in detecting group differences. Together, our results demonstrate the potential of the proposed approach to efficiently map effects of interest in both structural and functional data. Copyright © 2018. Published by Elsevier Inc.

  13. A Proposal on the Validation Model of Equivalence between PBLT and CBLT

    ERIC Educational Resources Information Center

    Chen, Huilin

    2014-01-01

    The validity of the computer-based language test is possibly affected by three factors: computer familiarity, audio-visual cognitive competence, and other discrepancies in construct. Therefore, validating the equivalence between the paper-and-pencil language test and the computer-based language test is a key step in the procedure of designing a…

  14. Evaluation of PCR Systems for Field Screening of Bacillus anthracis

    PubMed Central

    Ozanich, Richard M.; Colburn, Heather A.; Victry, Kristin D.; Bartholomew, Rachel A.; Arce, Jennifer S.; Heredia-Langner, Alejandro; Jarman, Kristin; Kreuzer, Helen W.

    2017-01-01

    There is little published data on the performance of hand-portable polymerase chain reaction (PCR) systems that can be used by first responders to determine if a suspicious powder contains a potential biothreat agent. We evaluated 5 commercially available hand-portable PCR instruments for detection of Bacillus anthracis. We used a cost-effective, statistically based test plan to evaluate systems at performance levels ranging from 0.85-0.95 lower confidence bound (LCB) of the probability of detection (POD) at confidence levels of 80% to 95%. We assessed specificity using purified genomic DNA from 13 B. anthracis strains and 18 Bacillus near neighbors, potential interference with 22 suspicious powders that are commonly encountered in the field by first responders during suspected biothreat incidents, and the potential for PCR inhibition when B. anthracis spores were spiked into these powders. Our results indicate that 3 of the 5 systems achieved 0.95 LCB of the probability of detection with 95% confidence levels at test concentrations of 2,000 genome equivalents/mL (GE/mL), which is comparable to 2,000 spores/mL. This is more than sufficient sensitivity for screening visible suspicious powders. These systems exhibited no false-positive results or PCR inhibition with common suspicious powders and reliably detected B. anthracis spores spiked into these powders, though some issues with assay controls were observed. Our testing approach enables efficient performance testing using a statistically rigorous and cost-effective test plan to generate performance data that allow users to make informed decisions regarding the purchase and use of field biodetection equipment. PMID:28192050
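
    The performance criterion in this record, a lower confidence bound (LCB) on the probability of detection (POD), can be illustrated with a Clopper-Pearson lower bound. The bisection routine and trial counts below are a minimal sketch, not the authors' test plan; with zero misses the bound reduces to alpha**(1/n), so 59 perfect detections suffice for a 0.95 LCB at 95% confidence.

```python
import math

def binom_sf_geq(x, n, p):
    """P(X >= x) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(x, n + 1))

def pod_lower_bound(x, n, conf=0.95):
    """Clopper-Pearson lower confidence bound on the probability of
    detection after observing x detections in n trials."""
    if x == 0:
        return 0.0
    alpha = 1 - conf
    lo, hi = 0.0, 1.0
    for _ in range(100):                 # bisect P(X >= x | p) = alpha
        mid = (lo + hi) / 2
        if binom_sf_geq(x, n, mid) < alpha:
            lo = mid                     # bound is higher than mid
        else:
            hi = mid
    return lo

# 59 detections in 59 trials: LCB exceeds 0.95 at 95% confidence
lcb = pod_lower_bound(59, 59)
```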

  15. 49 CFR 391.33 - Equivalent of road test.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 5 2011-10-01 2011-10-01 false Equivalent of road test. 391.33 Section 391.33 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL MOTOR CARRIER SAFETY REGULATIONS QUALIFICATIONS OF DRIVERS...

  16. 49 CFR 391.33 - Equivalent of road test.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Equivalent of road test. 391.33 Section 391.33 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL MOTOR CARRIER SAFETY REGULATIONS QUALIFICATIONS OF DRIVERS...

  17. Is Conscious Stimulus Identification Dependent on Knowledge of the Perceptual Modality? Testing the “Source Misidentification Hypothesis”

    PubMed Central

    Overgaard, Morten; Lindeløv, Jonas; Svejstrup, Stinna; Døssing, Marianne; Hvid, Tanja; Kauffmann, Oliver; Mouridsen, Kim

    2013-01-01

    This paper reports an experiment intended to test a particular hypothesis derived from blindsight research, which we name the “source misidentification hypothesis.” According to this hypothesis, a subject may be correct about a stimulus without being correct about how she had access to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness Scale (PAS). We suggest that knowledge about perceptual modality may be a necessary precondition in order to issue correct reports of which stimulus was presented. Furthermore, we find that PAS ratings correlate with correctness, and that subjects are at chance level when reporting no conscious experience of the stimulus. To demonstrate that particular levels of reporting accuracy are obtained, we employ a statistical strategy, which operationally tests the hypothesis of non-equality, such that the usual rejection of the null-hypothesis admits the conclusion of equivalence. PMID:23508677

  18. Comparing the performance of a new disposable pneumatic tocodynamometer with a standard tocodynamometer.

    PubMed

    Eswaran, Hari; Wilson, James D; Murphy, Pam; Siegel, Eric R; Lowery, Curtis L

    2016-03-01

    The goal was to test a newly developed pneumatic tocodynamometer (pTOCO) that is disposable and lightweight, and evaluate its equivalence to the standard strain gauge-based tocodynamometer (TOCO). The equivalence between the devices was determined by both mechanical testing and recording of contractile events on women. The data were recorded simultaneously from a pTOCO prototype and standard TOCO that were in place on women who were undergoing routine contraction monitoring in the Labor and Delivery unit at the University of Arkansas for Medical Sciences. In this prospective equivalence study, the output from 31 recordings on 28 pregnant women that had 171 measurable contractions simultaneously in both types of TOCO were analyzed. The traces were scored for contraction start, peak and end times, and the duration of the event was computed from these times. The response curve to loaded weights and applied pressure were similar for both devices, indicating their mechanical equivalence. The paired differences in times and duration between devices were subjected to mixed-models analysis to test the pTOCO for equivalence with standard TOCOs using the two-one-sided tests procedure. The event times and duration analyzed simultaneously from both TOCO types were all found to be significantly equivalent to within ±10 s (all p-values ≤0.0001). pTOCO is equivalent to the standard TOCO in the detection of the timing and duration of uterine contractions. pTOCO would provide a lightweight, disposable alternative to commercially available standard TOCOs. © 2015 Nordic Federation of Societies of Obstetrics and Gynecology.
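
    The two-one-sided-tests (TOST) procedure used in this record can be sketched on paired differences. This is a large-sample normal approximation rather than the authors' mixed-models analysis, and the difference values below are hypothetical; equivalence within ±delta is concluded when both one-sided p-values (their maximum) fall below alpha.

```python
from statistics import NormalDist, mean, stdev

def tost_paired(diffs, delta):
    """TOST for equivalence of paired differences within +/-delta,
    using a large-sample normal approximation to the t test."""
    n = len(diffs)
    d, se = mean(diffs), stdev(diffs) / n ** 0.5
    z_lower = (d + delta) / se           # H0: mean diff <= -delta
    z_upper = (d - delta) / se           # H0: mean diff >= +delta
    p_lower = 1 - NormalDist().cdf(z_lower)
    p_upper = NormalDist().cdf(z_upper)
    return max(p_lower, p_upper)         # equivalence if < alpha

# Hypothetical paired start-time differences in seconds (pTOCO - TOCO)
diffs = [1.2, -0.8, 2.5, 0.3, -1.9, 0.7, 1.1, -0.4, 0.9, -1.2,
         0.5, 1.8, -0.6, 0.2, 1.4, -1.1, 0.8, 0.1, -0.3, 1.0]
p = tost_paired(diffs, delta=10.0)       # equivalence within +/-10 s
```

    Because the observed differences are tiny relative to the ±10 s margin, the TOST p-value is far below any conventional alpha, mirroring the record's p ≤ 0.0001.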

  19. Student Enrollment, Full-time Equivalent (FTE), Staff/Faculty Information. Annual Statistical Report, 1997-98. Volume 33.

    ERIC Educational Resources Information Center

    North Carolina Community Coll. System, Raleigh.

    This document contains statistical information for the academic year 1997-1998 collected from each of the 58 community colleges in North Carolina, as well as historical information for an 11-year period. This was the first year in which the North Carolina Community College System used the semester system. In addition, it was the first year of…

  20. Datamining approaches for modeling tumor control probability.

    PubMed

    Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D

    2010-11-01

    Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
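
    The evaluation metric in this record, the Spearman rank correlation rs between predicted and observed outcomes, can be sketched in a few lines: rank both series (ties get average ranks) and take the Pearson correlation of the ranks. The prediction and outcome values below are hypothetical, not the study's data.

```python
def ranks(xs):
    """1-based average ranks; tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                       # extend over a run of ties
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rs(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical model predictions vs. binary tumor-control outcomes
pred = [0.2, 0.5, 0.9, 0.4, 0.7, 0.1]
obs  = [0,   1,   1,   0,   1,   0]
rs = spearman_rs(pred, obs)
```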

  1. Preparation of fatty acid methyl esters for gas-chromatographic analysis of marine lipids: insight studies.

    PubMed

    Carvalho, Ana P; Malcata, F Xavier

    2005-06-29

    Assays for fatty acid composition in biological materials are commonly carried out by gas chromatography, after conversion of the lipid material into the corresponding methyl esters (FAME) via suitable derivatization reactions. Quantitative derivatization depends on the type of catalyst and processing conditions employed, as well as the solubility of said sample in the reaction medium. Most literature pertinent to derivatization has focused on differential comparison between alternative methods; although useful to find out the best method for a particular sample, additional studies on factors that may affect each step of FAME preparation are urged. In this work, the influence of various parameters in each step of derivatization reactions was studied, using both cod liver oil and microalgal biomass as model systems. The accuracies of said methodologies were tested via comparison with the AOCS standard method, whereas their reproducibility was assessed by analysis of variance of (replicated) data. Alkaline catalysts generated lower levels of long-chain unsaturated FAME than acidic ones. Among these, acetyl chloride and BF3 were statistically equivalent to each other. The standard method, which involves alkaline treatment of samples before acidic methylation with BF3, provided equivalent results when compared with acidic methylation with BF3 alone. Polarity of the reaction medium was found to be of the utmost importance in the process: intermediate values of polarity [e.g., obtained by a 1:1 (v/v) mixture of methanol with diethyl ether or toluene] provided amounts of extracted polyunsaturated fatty acids statistically higher than those obtained via the standard method.

  2. A Comparison of Fuzzy Models in Similarity Assessment of Misregistered Area Class Maps

    NASA Astrophysics Data System (ADS)

    Brown, Scott

    Spatial uncertainty refers to unknown error and vagueness in geographic data. It is relevant to land change and urban growth modelers, soil and biome scientists, geological surveyors and others, who must assess thematic maps for similarity, or categorical agreement. In this paper I build upon prior map comparison research, testing the effectiveness of similarity measures on misregistered data. Though several methods compare uncertain thematic maps, few methods have been tested on misregistration. My objective is to test five map comparison methods for sensitivity to misregistration, including sub-pixel errors in both position and rotation. Methods included four fuzzy categorical models: fuzzy kappa's model, fuzzy inference, cell aggregation, and the epsilon band. The fifth method used conventional crisp classification. I applied these methods to a case study map and simulated data in two sets: a test set with misregistration error, and a control set with equivalent uniform random error. For all five methods, I used raw accuracy or the kappa statistic to measure similarity. Rough-set epsilon bands report the most similarity increase in test maps relative to control data. Conversely, the fuzzy inference model reports a decrease in test map similarity.
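
    The crisp baseline in this record, categorical agreement measured with the kappa statistic, can be sketched as follows; the fuzzy variants build on the same idea with graded category memberships. The land-cover labels below are hypothetical, not the case-study data.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length categorical label sequences:
    observed agreement corrected for chance agreement."""
    n = len(a)
    cats = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n               # observed
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance
    return (po - pe) / (1 - pe)

# Hypothetical flattened area-class maps (one label per pixel)
map1 = ["urban", "urban", "forest", "water", "forest", "urban"]
map2 = ["urban", "forest", "forest", "water", "forest", "urban"]
kappa = cohens_kappa(map1, map2)
```

    Comparing kappa on a misregistered test map against a control map with equivalent random error, as the record does, isolates how much of the similarity loss each measure attributes to registration rather than to classification.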

  3. The Sensitivity of the Midlatitude Moist Isentropic Circulation on Both Sides of the Climate Model Hierarchy

    NASA Astrophysics Data System (ADS)

    Fajber, R. A.; Kushner, P. J.; Laliberte, F. B.

    2017-12-01

    In the midlatitude atmosphere, baroclinic eddies are able to raise warm, moist air from the surface into the midtroposphere where it condenses and warms the atmosphere through latent heating. This coupling between dynamics and moist thermodynamics motivates using a conserved moist thermodynamic variable, such as the equivalent potential temperature, to study the midlatitude circulation and associated heat transport since it implicitly accounts for latent heating. When the equivalent potential temperature is used to zonally average the circulation, the moist isentropic circulation takes the form of a single cell in each hemisphere. By utilising the statistical transformed Eulerian mean (STEM) circulation we are able to parametrize the moist isentropic circulation in terms of second order dynamic and moist thermodynamic statistics. The functional dependence of the STEM allows us to analytically calculate functional derivatives that reveal the spatially varying sensitivity of the moist isentropic circulation to perturbations in different statistics. Using the STEM functional derivatives as sensitivity kernels we interpret changes in the moist isentropic circulation from two experiments: surface heating in an idealised moist model, and a climate change scenario in a comprehensive atmospheric general circulation model. In both cases we find that the changes in the moist isentropic circulation are well predicted by the functional sensitivities, and that the total heat transport is more sensitive to changes in dynamical processes driving local changes in poleward heat transport than it is to thermodynamic and/or radiative processes driving changes to the distribution of equivalent potential temperature.

  4. Improving mass-univariate analysis of neuroimaging data by modelling important unknown covariates: Application to Epigenome-Wide Association Studies.

    PubMed

    Guillaume, Bryan; Wang, Changqing; Poh, Joann; Shen, Mo Jun; Ong, Mei Lyn; Tan, Pei Fang; Karnani, Neerja; Meaney, Michael; Qiu, Anqi

    2018-06-01

    Statistical inference on neuroimaging data is often conducted using a mass-univariate model, equivalent to fitting a linear model at every voxel with a known set of covariates. Due to the large number of linear models, it is challenging to check if the selection of covariates is appropriate and to modify this selection adequately. The use of standard diagnostics, such as residual plotting, is clearly not practical for neuroimaging data. However, the selection of covariates is crucial for linear regression to ensure valid statistical inference. In particular, the mean model of regression needs to be reasonably well specified. Unfortunately, this issue is often overlooked in the field of neuroimaging. This study aims to adopt the existing Confounder Adjusted Testing and Estimation (CATE) approach and to extend it for use with neuroimaging data. We propose a modification of CATE that can yield valid statistical inferences using Principal Component Analysis (PCA) estimators instead of Maximum Likelihood (ML) estimators. We then propose a non-parametric hypothesis testing procedure that can improve upon parametric testing. Monte Carlo simulations show that the modification of CATE allows for more accurate modelling of neuroimaging data and can in turn yield a better control of False Positive Rate (FPR) and Family-Wise Error Rate (FWER). We demonstrate its application to an Epigenome-Wide Association Study (EWAS) on neonatal brain imaging and umbilical cord DNA methylation data obtained as part of a longitudinal cohort study. Software for this CATE study is freely available at http://www.bioeng.nus.edu.sg/cfa/Imaging_Genetics2.html. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  5. The effects of a hardiness educational intervention on hardiness and perceived stress of junior baccalaureate nursing students.

    PubMed

    Jameson, Paula R

    2014-04-01

    Baccalaureate nursing education is stressful. The stress encompasses a range of academic, personal, clinical, and social reasons. A hardiness educational program, a tool for stress management, based on theory, research, and practice, exists to enhance the attitudes and coping strategies of hardiness (Maddi, 2007; Maddi et al., 2002). Research has shown that students who completed the hardiness educational program, subsequently improved in grade point average (GPA), college retention rates, and health (Maddi et al., 2002). Little research has been done to explore the effects of hardiness education with junior baccalaureate nursing students. Early identification of hardiness, the need for hardiness education, or stress management in this population may influence persistence in and completion of a nursing program (Hensel and Stoelting-Gettelfinger, 2011). Therefore, the aims were to determine if an increase in hardiness and a decrease in perceived stress in junior baccalaureate nursing students occurred in those who participated in a hardiness intervention. The application of the Hardiness Model and the Roy Adaptation Model established connections and conceptual collaboration among stress, stimuli, adaptation, and hardi-coping. A quasi-experimental non-equivalent control group with pre-test and post-test was used with a convenience sample of full-time junior level baccalaureate nursing students. Data were collected from August 2011 to December 2011. Results of statistical analyses by paired t-tests revealed that the hardiness intervention did not have a statistically significant effect on increasing hardiness scores. The hardiness intervention did have a statistically significant effect on decreasing perceived stress scores. The significant decrease in perceived stress was congruent with the Hardiness Model and the Roy Adaptation Model. 
Further hardiness research among junior baccalaureate nursing students, utilizing the entire hardiness intervention, was recommended. © 2013.

  6. Long-Term Follow-up to a Randomized Controlled Trial Comparing Peroneal Nerve Functional Electrical Stimulation to an Ankle Foot Orthosis for Patients With Chronic Stroke.

    PubMed

    Bethoux, Francois; Rogers, Helen L; Nolan, Karen J; Abrams, Gary M; Annaswamy, Thiru; Brandstater, Murray; Browne, Barbara; Burnfield, Judith M; Feng, Wuwei; Freed, Mitchell J; Geis, Carolyn; Greenberg, Jason; Gudesblatt, Mark; Ikramuddin, Farha; Jayaraman, Arun; Kautz, Steven A; Lutsep, Helmi L; Madhavan, Sangeetha; Meilahn, Jill; Pease, William S; Rao, Noel; Seetharama, Subramani; Sethi, Pramod; Turk, Margaret A; Wallis, Roi Ann; Kufta, Conrad

    2015-01-01

    Evidence supports peroneal nerve functional electrical stimulation (FES) as an effective alternative to ankle foot orthoses (AFO) for treatment of foot drop poststroke, but few long-term, randomized controlled comparisons exist. Compare changes in gait quality and function between FES and AFOs in individuals with foot drop poststroke over a 12-month period. Follow-up analysis of an unblinded randomized controlled trial (ClinicalTrials.gov #NCT01087957) conducted at 30 rehabilitation centers comparing FES to AFOs over 6 months. Subjects continued to wear their randomized device for another 6 months to final 12-month assessments. Subjects used study devices for all home and community ambulation. Multiply imputed intention-to-treat analyses were utilized; primary endpoints were tested for noninferiority and secondary endpoints for superiority. Primary endpoints: 10 Meter Walk Test (10MWT) and device-related serious adverse event rate. Secondary endpoints: 6-Minute Walk Test (6MWT), GaitRite Functional Ambulation Profile, and Modified Emory Functional Ambulation Profile (mEFAP). A total of 495 subjects were randomized, and 384 completed the 12-month follow-up. FES proved noninferior to AFOs for all primary endpoints. Both FES and AFO groups showed statistically and clinically significant improvement for 10MWT compared with initial measurement. No statistically significant between-group differences were found for primary or secondary endpoints. The FES group demonstrated statistically significant improvements for 6MWT and mEFAP Stair-time subscore. At 12 months, both FES and AFOs continue to demonstrate equivalent gains in gait speed. Results suggest that long-term FES use may lead to additional improvements in walking endurance and functional ambulation; further research is needed to confirm these findings. © The Author(s) 2015.

  7. Stability of therapeutic retreatment of corneal wavefront customized ablation with the SCHWIND CAM: 4-year data.

    PubMed

    Aslanides, Ioannis M; Kolli, Sai; Padroni, Sara; Arba Mosquera, Samuel

    2012-05-01

    To evaluate the long-term outcomes of aspheric corneal wavefront ablation profiles for excimer laser retreatment. Eighteen eyes that had previously undergone LASIK or photorefractive keratectomy (PRK) were retreated with LASIK using the corneal wavefront ablation profile. Custom Ablation Manager (SCHWIND eye-tech-solutions, Kleinostheim, Germany) software and the ESIRIS flying spot excimer laser system (SCHWIND) were used to perform the ablations. Refractive outcomes and wavefront data are reported up to 4 years after retreatment. Pre- and postoperative data were compared with Student t tests and (multivariate) correlation tests. P<.05 was considered statistically significant. A bilinear correlation of various postoperative wavefront aberrations versus planned correction and preoperative aberration was performed. Mean manifest refraction spherical equivalent (MRSE) before retreatment was -0.38±1.85 diopters (D) and -0.09±0.22 D at 6 months and -0.10±0.38 D at 4 years postoperatively. The reduction in MRSE was statistically significant at both postoperative time points (P<.005). Postoperative aberrations were statistically lower (spherical aberration P<.05; coma P<.005; root-mean-square higher order aberration P<.0001) at 4 years postoperatively. Distribution of the postoperative uncorrected distance visual acuity (P<.0001) and corrected distance visual acuity (P<.01) were statistically better than preoperative values. Aspheric corneal wavefront customization with the ESIRIS yields visual, optical, and refractive results comparable to those of other wavefront-guided customized techniques for the correction of myopia and myopic astigmatism. The corneal wavefront customized approach shows its strength in cases where abnormal optical systems are expected. Systematic wavefront customized corneal ablation appears safe and efficacious for retreatment cases. Copyright 2012, SLACK Incorporated.

  8. 40 CFR Table F-1 to Subpart F of... - Performance Specifications for PM2.5 Class II Equivalent Samplers

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... specified in the post-loading evaluation test (§ 53.62, § 53.63, or § 53.64). § 53.66 Volatility Test... Equivalent Samplers Performance test Specifications Acceptance criteria § 53.62 Full Wind Tunnel Evaluation...: 95% ≤ Rc ≤ 105%. § 53.63 Wind Tunnel Inlet Aspiration Test Liquid VOAG produced aerosol at 2 km/hr...

  9. 40 CFR Table F-1 to Subpart F of... - Performance Specifications for PM2.5 Class II Equivalent Samplers

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... specified in the post-loading evaluation test (§ 53.62, § 53.63, or § 53.64). § 53.66 Volatility Test... Equivalent Samplers Performance test Specifications Acceptance criteria § 53.62 Full Wind Tunnel Evaluation...: 95% ≤ Rc ≤ 105%. § 53.63 Wind Tunnel Inlet Aspiration Test Liquid VOAG produced aerosol at 2 km/hr...

  10. Potency Determination of Antidandruff Shampoos in Nystatin International Unit Equivalents

    PubMed Central

    Anusha Hewage, D. B. G.; Pathirana, W.; Pinnawela, Amara

    2008-01-01

    A convenient standard microbiological potency determination test for the antidandruff shampoos was developed by adopting the pharmacopoeial microbiological assay procedure of the drug nystatin. A standard curve was drawn consisting of the inhibition zone diameters vs. logarithm of nystatin concentrations in international units using the fungus Saccharomyces cerevisiae (yeast) strain National Collection of Type Culture (NCTC) 1071606 as the test organism. From the standard curve the yeast inhibitory potencies of the shampoos in nystatin international unit equivalents were determined from the respective inhibition zones of the test samples of the shampoos. Under test conditions four shampoo samples showed remarkable fungal inhibitory potencies of 10227, 10731, 12396 and 18211 nystatin international unit equivalents/ml while two shampoo samples had extremely feeble inhibitory potencies 4.07 and 4.37 nystatin international unit equivalents/ml although the latter two products claimed antifungal activity. The potency determination method could be applied to any antidandruff shampoo with any one or a combination of active ingredients. PMID:21394271
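
    The standard-curve step in this record, fitting inhibition-zone diameter against the logarithm of nystatin concentration and then inverting the line to read off a shampoo's potency, can be sketched as follows. The calibration points and zone diameters below are hypothetical illustration values, not the assay's data.

```python
import math

def fit_line(xs, ys):
    """Least-squares slope and intercept of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration: zone diameter (mm) vs. log10(nystatin IU/ml)
conc_iu = [1000, 3000, 10000, 30000]
zones   = [14.0, 16.5, 19.0, 21.5]
slope, intercept = fit_line([math.log10(c) for c in conc_iu], zones)

def potency_iu(zone_mm):
    """Invert the standard curve: inhibition zone diameter (mm) ->
    potency in nystatin international unit equivalents/ml."""
    return 10 ** ((zone_mm - intercept) / slope)
```

    A test shampoo producing a 19 mm zone would then read back a potency near the 10,000 IU-equivalents/ml calibration point.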

  11. Research on the time-temperature-damage superposition principle of NEPE propellant

    NASA Astrophysics Data System (ADS)

    Han, Long; Chen, Xiong; Xu, Jin-sheng; Zhou, Chang-sheng; Yu, Jia-quan

    2015-11-01

    To describe the relaxation behavior of NEPE (Nitrate Ester Plasticized Polyether) propellant, we analyzed the equivalent relationships between time, temperature, and damage. We conducted a series of uniaxial tensile tests and employed a cumulative damage model to calculate the damage values for relaxation tests at different strain levels. The damage evolution curve of the tensile test at 100 mm/min was obtained through numerical analysis. Relaxation tests were conducted over a range of temperature and strain levels, and the equivalent relationship between time, temperature, and damage was deduced based on free volume theory. The equivalent relationship was then used to generate predictions of the long-term relaxation behavior of the NEPE propellant. Subsequently, the equivalent relationship between time and damage was introduced into the linear viscoelastic model to establish a nonlinear model which is capable of describing the mechanical behavior of composite propellants under a uniaxial tensile load. The comparison between model prediction and experimental data shows that the presented model provides a reliable forecast of the mechanical behavior of propellants.

  12. Quantum test of the equivalence principle for atoms in coherent superposition of internal energy states

    PubMed Central

    Rosi, G.; D'Amico, G.; Cacciapuoti, L.; Sorrentino, F.; Prevedelli, M.; Zych, M.; Brukner, Č.; Tino, G. M.

    2017-01-01

    The Einstein equivalence principle (EEP) has a central role in the understanding of gravity and space–time. In its weak form, or weak equivalence principle (WEP), it directly implies equivalence between inertial and gravitational mass. Verifying this principle in a regime where the relevant properties of the test body must be described by quantum theory has profound implications. Here we report on a novel WEP test for atoms: a Bragg atom interferometer in a gravity gradiometer configuration compares the free fall of rubidium atoms prepared in two hyperfine states and in their coherent superposition. The use of the superposition state allows testing genuine quantum aspects of EEP with no classical analogue, which have remained completely unexplored so far. In addition, we measure the Eötvös ratio of atoms in two hyperfine levels with relative uncertainty in the low 10⁻⁹ range, improving previous results by almost two orders of magnitude. PMID:28569742

  13. 46 CFR 175.540 - Equivalents.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... safety management system is in place on board a vessel. The Commandant will consider the size and corporate structure of a vessel's company when determining the acceptability of an equivalent system... require engineering evaluations and tests to demonstrate the equivalence of the substitute. (b) The...

  14. Maximal exercise testing variables and 10-year survival: fitness risk score derivation from the FIT Project.

    PubMed

    Ahmed, Haitham M; Al-Mallah, Mouaz H; McEvoy, John W; Nasir, Khurram; Blumenthal, Roger S; Jones, Steven R; Brawner, Clinton A; Keteyian, Steven J; Blaha, Michael J

    2015-03-01

    To determine which routinely collected exercise test variables most strongly correlate with survival and to derive a fitness risk score that can be used to predict 10-year survival. This was a retrospective cohort study of 58,020 adults aged 18 to 96 years who were free of established heart disease and were referred for an exercise stress test from January 1, 1991, through May 31, 2009. Demographic, clinical, exercise, and mortality data were collected on all patients as part of the Henry Ford ExercIse Testing (FIT) Project. Cox proportional hazards models were used to identify exercise test variables most predictive of survival. A "FIT Treadmill Score" was then derived from the β coefficients of the model with the highest survival discrimination. The median age of the 58,020 participants was 53 years (interquartile range, 45-62 years), and 28,201 (49%) were female. Over a median of 10 years (interquartile range, 8-14 years), 6456 patients (11%) died. After age and sex, peak metabolic equivalents of task and percentage of maximum predicted heart rate achieved were most highly predictive of survival (P<.001). Subsequent addition of baseline blood pressure and heart rate, change in vital signs, double product, and risk factor data did not further improve survival discrimination. The FIT Treadmill Score, calculated as [percentage of maximum predicted heart rate + 12(metabolic equivalents of task) - 4(age) + 43 if female], ranged from -200 to 200 across the cohort, was near normally distributed, and was found to be highly predictive of 10-year survival (Harrell C statistic, 0.811). The FIT Treadmill Score is easily attainable from any standard exercise test and translates basic treadmill performance measures into a fitness-related mortality risk score. The FIT Treadmill Score should be validated in external populations. Copyright © 2015 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
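The scoring formula quoted above is simple arithmetic and can be transcribed directly; the function below is an illustrative rendering of the published formula, not code from the FIT Project.

```python
def fit_treadmill_score(pct_max_predicted_hr, mets, age, is_female):
    """FIT Treadmill Score as quoted in the abstract:
    %MPHR + 12*(METs) - 4*(age) + 43 if female."""
    score = pct_max_predicted_hr + 12 * mets - 4 * age
    if is_female:
        score += 43
    return score

# Example: a 53-year-old woman reaching 95% of predicted heart rate at 8 METs
print(fit_treadmill_score(95, 8, 53, True))  # 22
```

Scores in the study ranged from -200 to 200, with higher values corresponding to better predicted 10-year survival.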

  15. Develop real-time dosimetry concepts and instrumentation for long term missions

    NASA Technical Reports Server (NTRS)

    Braby, L. A.

    1982-01-01

    The development of a rugged portable instrument to evaluate dose and dose equivalent is described. A tissue-equivalent proportional counter simulating a 2 micrometer spherical tissue volume was operated satisfactorily for over a year. The basic elements of the electronic system were designed and tested. Finally, the most suitable mathematical technique for evaluating dose equivalent with a portable instrument was selected. Design and fabrication of a portable prototype, based on the previously tested circuits, is underway.

  16. 40 CFR Table E-1 to Subpart E of... - Summary of Test Requirements for Reference and Class I Equivalent Methods for PM 2.5 and PM 10-2.5

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... accuracy 3. Filter temp. control accuracy, sampling and non-sampling 1. 2 °C2. 2 °C 3. Not more than 5 °C... Reference and Class I Equivalent Methods for PM 2.5 and PM 10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance...

  17. 40 CFR Table E-1 to Subpart E of... - Summary of Test Requirements for Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... accuracy 3. Filter temp. control accuracy, sampling and non-sampling 1. 2 °C2. 2 °C 3. Not more than 5 °C... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance...

  18. 40 CFR Table E-1 to Subpart E of... - Summary of Test Requirements for Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... accuracy 3. Filter temp. control accuracy, sampling and non-sampling 1. 2 °C2. 2 °C 3. Not more than 5 °C... Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance...

  19. 40 CFR Table E-1 to Subpart E of... - Summary of Test Requirements for Reference and Class I Equivalent Methods for PM 2.5 and PM 10-2.5

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... accuracy 3. Filter temp. control accuracy, sampling and non-sampling 1. 2 °C2. 2 °C 3. Not more than 5 °C... Reference and Class I Equivalent Methods for PM 2.5 and PM 10-2.5 E Table E-1 to Subpart E of Part 53... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance...

  20. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed-parameter regularization was comparable to that of the L-curve method.
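The Tikhonov-regularized equivalent-sources reconstruction that performed best in the study above amounts to a damped least-squares solve for the source strengths. A minimal sketch, assuming a known transfer matrix `A` mapping source strengths to hologram pressures; the names and toy data are illustrative, not from the study:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A q - b||^2 + lam * ||q||^2 for source strengths q.
    Normal-equations form: (A^H A + lam I) q = A^H b."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ b)

# Toy demo: recover source strengths from noisy simulated measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))          # hypothetical transfer matrix
q_true = np.linspace(1.0, 2.0, 10)         # true source strengths
b = A @ q_true + 0.01 * rng.standard_normal(40)  # noisy hologram data
q_hat = tikhonov_solve(A, b, lam=1e-3)
print(np.max(np.abs(q_hat - q_true)) < 0.1)  # True
```

The regularization parameter `lam` is what the L-curve, generalized cross validation, and Morozov discrepancy methods compared in the study are designed to choose.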

  1. An experimental comparison of various methods of nearfield acoustic holography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed-parameter regularization was comparable to that of the L-curve method.

  2. Helmholtz and Gibbs ensembles, thermodynamic limit and bistability in polymer lattice models

    NASA Astrophysics Data System (ADS)

    Giordano, Stefano

    2017-12-01

    Representing polymers by random walks on a lattice is a fruitful approach largely exploited to study configurational statistics of polymer chains and to develop efficient Monte Carlo algorithms. Nevertheless, the stretching and the folding/unfolding of polymer chains within the Gibbs (isotensional) and the Helmholtz (isometric) ensembles of the statistical mechanics have not yet been thoroughly analysed by means of the lattice methodology. This topic, motivated by the recent introduction of several single-molecule force spectroscopy techniques, is investigated in the present paper. In particular, we analyse the force-extension curves under the Gibbs and Helmholtz conditions and we give a proof of the ensembles equivalence in the thermodynamic limit for polymers represented by a standard random walk on a lattice. Then, we generalize these concepts for lattice polymers that can undergo conformational transitions or, equivalently, for chains composed of bistable or two-state elements (that can be either folded or unfolded). In this case, the isotensional condition leads to a plateau-like force-extension response, whereas the isometric condition causes a sawtooth-like force-extension curve, as predicted by numerous experiments. The equivalence of the ensembles is finally proved also for lattice polymer systems exhibiting conformational transitions.

  3. Equivalence Reliability among the FITNESSGRAM[R] Upper-Body Tests of Muscular Strength and Endurance

    ERIC Educational Resources Information Center

    Sherman, Todd; Barfield, J. P.

    2006-01-01

    This study was designed to investigate the equivalence reliability between the suggested FITNESSGRAM[R] muscular strength and endurance test, the 90[degrees] push-up (PSU), and alternate FITNESSGRAM[R] tests of upper-body strength and endurance (i.e., modified pull-up [MPU], flexed-arm hang [FAH], and pull-up [PU]). Children (N = 383) in Grades 3…

  4. Experimental limit on the ratio of the gravitational mass to the inertial mass of antihydrogen

    NASA Astrophysics Data System (ADS)

    Fajans, Joel; Wurtele, Jonathan; Charman, Andrew; Zhmoginov, Andrey

    2012-10-01

    Physicists have long wondered if the gravitational interactions between matter and antimatter might be different from those between matter and itself. While there are many indirect indications that no such differences exist, i.e., that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. By searching for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap, we have determined that we can reject ratios of the gravitational mass to the inertial mass of antihydrogen greater than about 100 at a statistical significance level of 5%. A similar search places somewhat lower limits on a negative gravitational mass, i.e., on antigravity.

  5. Gender similarities and differences.

    PubMed

    Hyde, Janet Shibley

    2014-01-01

    Whether men and women are fundamentally different or similar has been debated for more than a century. This review summarizes major theories designed to explain gender differences: evolutionary theories, cognitive social learning theory, sociocultural theory, and expectancy-value theory. The gender similarities hypothesis raises the possibility of theorizing gender similarities. Statistical methods for the analysis of gender differences and similarities are reviewed, including effect sizes, meta-analysis, taxometric analysis, and equivalence testing. Then, relying mainly on evidence from meta-analyses, gender differences are reviewed in cognitive performance (e.g., math performance), personality and social behaviors (e.g., temperament, emotions, aggression, and leadership), and psychological well-being. The evidence on gender differences in variance is summarized. The final sections explore applications of intersectionality and directions for future research.
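Equivalence testing, listed above alongside effect sizes and meta-analysis, reverses the usual null hypothesis: two groups are declared similar only if their difference is shown to fall inside a pre-specified margin. A minimal two one-sided tests (TOST) sketch; the margin and the simulated data are illustrative, and a real analysis would justify the margin substantively:

```python
import numpy as np
from scipy import stats

def tost_two_sample(x, y, margin):
    """Two one-sided tests (TOST) for mean equivalence within +/- margin.
    Returns the larger of the two one-sided p-values; a small value
    supports equivalence."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    se = np.sqrt(np.var(x, ddof=1) / nx + np.var(y, ddof=1) / ny)
    df = nx + ny - 2  # simple approximation to the Welch df
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)  # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)      # H0: diff >= +margin
    return max(p_lower, p_upper)

# Two groups whose true means differ by far less than the margin
rng = np.random.default_rng(1)
x = rng.normal(0.00, 1.0, 2000)
y = rng.normal(0.05, 1.0, 2000)
print(tost_two_sample(x, y, margin=0.3) < 0.05)  # True: equivalent within 0.3
```

Note the asymmetry with a conventional t-test: failing to reject "no difference" is not evidence of similarity, whereas a small TOST p-value is.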

  6. Interacting steps with finite-range interactions: Analytical approximation and numerical results

    NASA Astrophysics Data System (ADS)

    Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.

    2013-05-01

    We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.

  7. Explosive materials equivalency, test methods and evaluation

    NASA Technical Reports Server (NTRS)

    Koger, D. M.; Mcintyre, F. L.

    1980-01-01

    Attention is given to concepts of explosive equivalency of energetic materials based on specific airblast parameters. A description is provided of a wide bandwidth high accuracy instrumentation system which has been used extensively in obtaining pressure time profiles of energetic materials. The object of the considered test method is to determine the maximum output from the detonation of explosive materials in terms of airblast overpressure and positive impulse. The measured pressure and impulse values are compared with known characteristics of hemispherical TNT data to determine the equivalency of the test material in relation to TNT. An investigation shows that meaningful comparisons between various explosives and a standard reference material such as TNT should be based upon the same parameters. The tests should be conducted under the same conditions.

  8. Retest effects in working memory capacity tests: A meta-analysis.

    PubMed

    Scharfen, Jana; Jansen, Katrin; Holling, Heinz

    2018-06-15

    The repeated administration of working memory capacity tests is common in clinical and research settings. For cognitive ability tests and different neuropsychological tests, meta-analyses have shown that they are prone to retest effects, which have to be accounted for when interpreting retest scores. Using a multilevel approach, this meta-analysis aims at showing the reproducibility of retest effects in working memory capacity tests for up to seven test administrations, and examines the impact of the length of the test-retest interval, test modality, equivalence of test forms and participant age on the size of retest effects. Furthermore, it is assessed whether the size of retest effects depends on the test paradigm. An extensive literature search revealed 234 effect sizes from 95 samples and 68 studies, in which healthy participants between 12 and 70 years repeatedly performed a working memory capacity test. Results yield a weighted average of g = 0.28 for retest effects from the first to the second test administration, and a significant increase in effect sizes was observed up to the fourth test administration. The length of the test-retest interval and publication year were found to moderate the size of retest effects. Retest effects differed between the paradigms of working memory capacity tests. These findings call for the development and use of appropriate experimental or statistical methods to address retest effects in working memory capacity tests.

  9. 49 CFR 391.33 - Equivalent of road test.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 5 2014-10-01 2014-10-01 false Equivalent of road test. 391.33 Section 391.33 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL MOTOR CARRIER SAFETY REGULATIONS QUALIFICATIONS OF DRIVERS AND LONGER COMBINATION VEHICLE (LCV)...

  10. 40 CFR 53.52 - Leak check test.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM 2.5 or PM 10-2.5 § 53.52... to include the facility, including components, instruments, operator controls, a written procedure...

  11. Confidence limits and sample size for determining nonhost status of fruits and vegetables to tephritid fruit flies as a quarantine measure.

    PubMed

    Follett, Peter A; Hennessey, Michael K

    2007-04-01

    Quarantine measures including treatments are applied to exported fruit and vegetable commodities to control regulatory fruit fly pests and to reduce the likelihood of their introduction into new areas. Nonhost status can be an effective measure used to achieve quarantine security. As with quarantine treatments, nonhost status can stand alone as a measure if there is high efficacy and statistical confidence. The numbers of insects or fruit tested during investigation of nonhost status will determine the level of statistical confidence. If the level of confidence of nonhost status is not high, then additional measures may be required to achieve quarantine security as part of a systems approach. Certain countries require that either 99.99 or 99.9968% mortality, as a measure of efficacy, at the 95% confidence level, be achieved by a quarantine treatment to meet quarantine security. This article outlines how the level of confidence in nonhost status can be quantified so that its equivalency to traditional quarantine treatments may be demonstrated. Incorporating sample size and confidence levels into host status testing protocols along with efficacy will lead to greater consistency by regulatory decision-makers in interpreting results and, therefore, to more technically sound decisions on host status.
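The sample-size side of the argument above follows from a standard binomial calculation: if all n treated insects die, the highest survival rate still consistent with the data at confidence C satisfies mortality^n = 1 - C. A sketch, using the common zero-survivor bound; the printed figures are derived from that formula, not taken from the article:

```python
import math

def required_sample_size(mortality, confidence):
    """Smallest n such that zero survivors among n treated insects
    demonstrates at least `mortality` efficacy at the given confidence,
    via the zero-survivor binomial bound: mortality**n <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(mortality))

# The two efficacy levels cited in the abstract, each at 95% confidence
print(required_sample_size(0.9999, 0.95))    # 29956
print(required_sample_size(0.999968, 0.95))  # 93616
```

Note the steep cost of the stricter criterion: tightening required efficacy from 99.99% to 99.9968% roughly triples the number of test insects needed.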

  12. Granularity refined by knowledge: contingency tables and rough sets as tools of discovery

    NASA Astrophysics Data System (ADS)

    Zytkow, Jan M.

    2000-04-01

    Contingency tables represent data in a granular way and are a well-established tool for inductive generalization of knowledge from data. We show that the basic concepts of rough sets, such as concept approximation, indiscernibility, and reduct can be expressed in the language of contingency tables. We further demonstrate the relevance to rough sets theory of additional probabilistic information available in contingency tables and in particular of statistical tests of significance and predictive strength applied to contingency tables. Tests of both types can help the evaluation mechanisms used in inductive generalization based on rough sets. Granularity of attributes can be improved in feedback with knowledge discovered in data. We demonstrate how 49er's facilities for (1) contingency table refinement, (2) column and row grouping based on correspondence analysis, and (3) the search for equivalence relations between attributes improve both the granularization of attributes and the quality of knowledge. Finally we demonstrate the limitations of knowledge viewed as concept approximation, which is the focus of rough sets. Transcending that focus and reorienting towards the predictive knowledge and towards the related distinction between possible and impossible (or statistically improbable) situations will be very useful in expanding the rough sets approach to more expressive forms of knowledge.

  13. RCT: Module 2.06, Air Sampling Program and Methods, Course 8772

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hillmer, Kurt T.

    The inhalation of radioactive particles is the largest cause of an internal radiation dose. Airborne radioactivity measurements are necessary to ensure that the control measures are and continue to be effective. Regulations govern the allowable effective dose equivalent to an individual. The effective dose equivalent is determined by combining the external and internal dose equivalent values. Typically, airborne radioactivity levels are maintained well below allowable levels to keep the total effective dose equivalent small. This course will prepare the student with the skills necessary for RCT qualification by passing quizzes, tests, and the RCT Comprehensive Phase 1, Unit 2 Examination (TEST 27566) and will provide in-the-field skills.

  14. Recommendations on evidence needed to support measurement equivalence between electronic and paper-based patient-reported outcome (PRO) measures: ISPOR ePRO Good Research Practices Task Force report.

    PubMed

    Coons, Stephen Joel; Gwaltney, Chad J; Hays, Ron D; Lundy, J Jason; Sloan, Jeff A; Revicki, Dennis A; Lenderking, William R; Cella, David; Basch, Ethan

    2009-06-01

    Patient-reported outcomes (PROs) are the consequences of disease and/or its treatment as reported by the patient. The importance of PRO measures in clinical trials for new drugs, biological agents, and devices was underscored by the release of the US Food and Drug Administration's draft guidance for industry titled "Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims." The intent of the guidance was to describe how the FDA will evaluate the appropriateness and adequacy of PRO measures used as effectiveness end points in clinical trials. In response to the expressed need of ISPOR members for further clarification of several aspects of the draft guidance, ISPOR's Health Science Policy Council created three task forces, one of which was charged with addressing the implications of the draft guidance for the collection of PRO data using electronic data capture modes of administration (ePRO). The objective of this report is to present recommendations from ISPOR's ePRO Good Research Practices Task Force regarding the evidence necessary to support the comparability, or measurement equivalence, of ePROs to the paper-based PRO measures from which they were adapted. The task force was composed of the leadership team of ISPOR's ePRO Working Group and members of another group (i.e., ePRO Consensus Development Working Group) that had already begun to develop recommendations regarding ePRO good research practices. The resulting task force membership reflected a broad array of backgrounds, perspectives, and expertise that enriched the development of this report. The prior work became the starting point for the Task Force report. A subset of the task force members became the writing team that prepared subsequent iterations of the report that were distributed to the full task force for review and feedback. In addition, review beyond the task force was sought and obtained. 
Along with a presentation and discussion period at an ISPOR meeting, a draft version of the full report was distributed to roughly 220 members of a reviewer group. The reviewer group comprised individuals who had responded to an emailed invitation to the full membership of ISPOR. This Task Force report reflects the extensive internal and external input received during the 16-month good research practices development process. RESULTS/RECOMMENDATIONS: An ePRO questionnaire that has been adapted from a paper-based questionnaire ought to produce data that are equivalent or superior (e.g., higher reliability) to the data produced from the original paper version. Measurement equivalence is a function of the comparability of the psychometric properties of the data obtained via the original and adapted administration mode. This comparability is driven by the amount of modification to the content and format of the original paper PRO questionnaire required during the migration process. The magnitude of a particular modification is defined with reference to its potential effect on the content, meaning, or interpretation of the measure's items and/or scales. Based on the magnitude of the modification, evidence for measurement equivalence can be generated through combinations of the following: cognitive debriefing/testing, usability testing, equivalence testing, or, if substantial modifications have been made, full psychometric testing. As long as only minor modifications were made to the measure during the migration process, a substantial body of existing evidence suggests that the psychometric properties of the original measure will still hold for the ePRO version. Hence, an evaluation limited to cognitive debriefing and usability testing only may be sufficient. 
However, where more substantive changes have occurred during the migration process, it is necessary to confirm that the adaptation to the ePRO format did not introduce significant response bias and that the two modes of administration produce essentially equivalent results. Recommendations regarding the study designs and statistical approaches for assessing measurement equivalence are provided. The electronic administration of PRO measures offers many advantages over paper administration. We provide a general framework for decisions regarding the level of evidence needed to support modifications that are made to PRO measures when they are migrated from paper to ePRO devices. The key issues include: 1) the determination of the extent of modification required to administer the PRO on the ePRO device and 2) the selection and implementation of an effective strategy for testing the measurement equivalence of the two modes of administration. We hope that these good research practice recommendations provide a path forward for researchers interested in migrating PRO measures to electronic data collection platforms.

  15. Flight Test of a Head-Worn Display as an Equivalent-HUD for Terminal Operations

    NASA Technical Reports Server (NTRS)

    Shelton, K. J.; Arthur, J. J., III; Prinzel, L. J., III; Nicholas, S. N.; Williams, S. P.; Bailey, R. E.

    2015-01-01

    Research, development, test, and evaluation of flight deck interface technologies is being conducted by NASA to proactively identify, develop, and mature tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System (NextGen). Under NASA's Aviation Safety Program, one specific area of research is the use of small Head-Worn Displays (HWDs) as a potential equivalent display to a Head-up Display (HUD). Title 14 of the US CFR 91.175 describes a possible operational credit which can be obtained with airplane equipage of a HUD or an "equivalent" display combined with Enhanced Vision (EV). A successful HWD implementation may provide the same safety and operational benefits as current HUD-equipped aircraft but for significantly more aircraft in which HUD installation is neither practical nor possible. A flight test was conducted to evaluate if the HWD, coupled with a head-tracker, can provide an equivalent display to a HUD. Approach and taxi testing was performed on-board NASA's experimental King Air aircraft in various visual conditions. Preliminary quantitative results indicate the HWD tested provided equivalent HUD performance, however operational issues were uncovered. The HWD showed significant potential as all of the pilots liked the increased situation awareness attributable to the HWD's unique capability of unlimited field-of-regard.

  16. Flight test of a head-worn display as an equivalent-HUD for terminal operations

    NASA Astrophysics Data System (ADS)

    Shelton, K. J.; Arthur, J. J.; Prinzel, L. J.; Nicholas, S. N.; Williams, S. P.; Bailey, R. E.

    2015-05-01

    Research, development, test, and evaluation of flight deck interface technologies is being conducted by NASA to proactively identify, develop, and mature tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System (NextGen). Under NASA's Aviation Safety Program, one specific area of research is the use of small Head-Worn Displays (HWDs) as a potential equivalent display to a Head-up Display (HUD). Title 14 of the US CFR 91.175 describes a possible operational credit which can be obtained with airplane equipage of a HUD or an "equivalent" display combined with Enhanced Vision (EV). A successful HWD implementation may provide the same safety and operational benefits as current HUD-equipped aircraft but for significantly more aircraft in which HUD installation is neither practical nor possible. A flight test was conducted to evaluate if the HWD, coupled with a head-tracker, can provide an equivalent display to a HUD. Approach and taxi testing was performed on-board NASA's experimental King Air aircraft in various visual conditions. Preliminary quantitative results indicate the HWD tested provided equivalent HUD performance, however operational issues were uncovered. The HWD showed significant potential as all of the pilots liked the increased situation awareness attributable to the HWD's unique capability of unlimited field-of-regard.

  17. Effect of yogurt and pH equivalent lemon juice on salivary flow rate in healthy volunteers - An experimental crossover study.

    PubMed

    Murugesh, Jeevitha; Annigeri, Rajeshwari G; Raheel, Syed Ahmed; Azzeghaiby, Saleh; Alshehri, Mohammad; Kujan, Omar

    2015-12-01

    Xerostomia is a common clinical problem, and different medications have been tried in its management. In the present study, routine dietary products are used to assess their effect on salivary flow. To assess the efficacy of yogurt and lemon juice on increase in salivation and its comparison with that of unstimulated saliva. A total of 40 volunteers (aged 19-48) were selected. The pH of yogurt was calculated, and lemon juice of equivalent pH was prepared. First, normal resting saliva was collected as a baseline, followed by collections every minute for 5 min. Patients were given lemon juice or yogurt and then crossed over to the other group to assess the impact of the stimulants on salivary flow from 1 to 5 min. The results were analyzed statistically. Comparisons between baseline saliva secretion and secretion stimulated by yogurt or lemon juice (using the ANOVA test) showed a significant increase after treatment at the end of the experiment for both yogurt and lemon juice. However, yogurt produced a significantly larger increase over baseline than lemon juice did. Our findings suggest that yogurt is a potential candidate for the treatment of dry mouth.

  18. Split-Face Comparison of an Advanced Non-Hydroquinone Lightening Solution to 4% Hydroquinone.

    PubMed

    Schlessinger, Joel; Saxena, Subhash; Mohr, Stuart

    2016-12-01

    Hyperpigmentation is a primary concern for many cosmetic patients because of its high rate of occurrence and significant impact on perceived age. While 4% hydroquinone has been the gold-standard of treatment, there is a growing interest in non-hydroquinone solutions; however, many of these newer solutions fail to deliver equivalent improvement. This double-blind, randomized, split-face study compares the effects of a new OTC non-hydroquinone lightening product (JM) to an available 4% hydroquinone lightening solution (OB) on the appearance of hyperpigmentation, texture, and fine lines and wrinkles. Comparisons were determined by both physician assessment and subject self-assessment at baseline, 4, 8, and 12 weeks. Physician assessment showed statistically equivalent improvement on both sides of the face with the JM side showing equivalent or superior average improvement in all assessed categories. Subject self-assessment showed a significant preference for the JM product over the 4% hydroquinone and a substantially higher perception of overall improvement over 4% hydroquinone (P=0.058). Physician assessment showed equal or superior average improvement in all measured categories with no statistically significant difference between the two sides. Subject self-assessment, however, showed a significant and growing preference toward the investigated JM product over the course of the study. Overall, the results of this study show the JM product to be equivalent if not superior to 4% hydroquinone for results and patient satisfaction. J Drugs Dermatol. 2016;15(12):1571-1577.

  19. EDS V25 containment vessel explosive qualification test report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rudolphi, John Joseph

    2012-04-01

    The V25 containment vessel was procured by the Project Manager, Non-Stockpile Chemical Materiel (PMNSCM) as a replacement vessel for use on the P2 Explosive Destruction Systems. It is the first EDS vessel to be fabricated under Code Case 2564 of the ASME Boiler and Pressure Vessel Code, which provides rules for the design of impulsively loaded vessels. The explosive rating for the vessel based on the Code Case is nine (9) pounds TNT-equivalent for up to 637 detonations. This limit is an increase over the 4.8-pound TNT-equivalency rating of previous vessels. This report describes the explosive qualification tests that were performed in the vessel as part of the process for qualifying the vessel for explosive use. The tests consisted of an 11.25-pound TNT-equivalent bare charge detonation followed by a 9-pound TNT-equivalent detonation.

  20. Development of flexible rotor balancing criteria

    NASA Technical Reports Server (NTRS)

    Walter, W. W.; Rieger, N. F.

    1979-01-01

    Several studies in which analytical procedures were used to obtain balancing criteria for flexible rotors are described. General response data for a uniform rotor in damped flexible supports were first obtained for plain cylindrical bearings, tilting-pad bearings, axial-groove bearings, and partial-arc bearings. These data formed the basis for the flexible rotor balance criteria presented. A procedure by which a practical rotor in bearings could be reduced to an equivalent uniform rotor was developed and tested. It was found that the equivalent rotor response always exceeded the practical rotor response by more than sixty percent for the cases tested. The equivalent rotor procedure was then tested against six practical rotor configurations for which data were available. It was found that the equivalent rotor method offers a procedure by which balance criteria can be selected for practical flexible rotors, using the charts given for the uniform rotor.

  1. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    PubMed

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently, as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate the sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous; they may be discrete. In this paper, the authors derive the power function and discuss sample size requirements for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size depends heavily on the dispersion parameter; misusing a Poisson model for negative binomial data can easily lose up to 20% power, depending on the value of the dispersion parameter.
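    The dispersion effect discussed above can be illustrated generically (this is a sketch, not the authors' derivation): negative binomial counts drawn as a gamma-Poisson mixture have variance mean + dispersion × mean², whereas a Poisson model forces variance = mean, so Poisson-based standard errors understate the true variability. All numbers below are invented.

```python
import math, random

rng = random.Random(42)

def poisson(lam, rng):
    # Knuth's inversion sampler; adequate for modest lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        p *= rng.random()
        k += 1
    return k - 1

def neg_binomial(mean, dispersion, rng):
    # gamma-Poisson mixture: Var(X) = mean + dispersion * mean**2
    lam = rng.gammavariate(1.0 / dispersion, mean * dispersion)
    return poisson(lam, rng)

mean_true, disp = 5.0, 0.5
draws = [neg_binomial(mean_true, disp, rng) for _ in range(20000)]
m = sum(draws) / len(draws)
var = sum((x - m) ** 2 for x in draws) / (len(draws) - 1)
poisson_var = m  # a Poisson model forces variance = mean
print(f"sample mean {m:.2f}, sample variance {var:.2f}, "
      f"Poisson-assumed variance {poisson_var:.2f}")
```

    Here the true variance is roughly 3.5 times the Poisson-assumed one, so Poisson-based standard errors are understated by a factor of about 1.9; an equivalence test sized under the Poisson assumption is correspondingly underpowered on negative binomial data, consistent with the loss reported above.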

  2. 40 CFR 86.1845-04 - Manufacturer in-use verification testing requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... group to determine the equivalent NMOG exhaust emission values for the test vehicle. The equivalent NMOG... AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Compliance Provisions for Control of Air Pollution From New and In-Use Light...

  3. 40 CFR 86.1845-04 - Manufacturer in-use verification testing requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... group to determine the equivalent NMOG exhaust emission values for the test vehicle. The equivalent NMOG... AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Compliance Provisions for Control of Air Pollution From New and In-Use Light...

  4. 40 CFR 86.1845-04 - Manufacturer in-use verification testing requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... group to determine the equivalent NMOG exhaust emission values for the test vehicle. The equivalent NMOG... AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Compliance Provisions for Control of Air Pollution From New and In-Use Light...

  5. 40 CFR 86.1845-04 - Manufacturer in-use verification testing requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... group to determine the equivalent NMOG exhaust emission values for the test vehicle. The equivalent NMOG... AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) General Compliance Provisions for Control of Air Pollution From New and In-Use Light...

  6. Maintaining Equivalent Cut Scores for Small Sample Test Forms

    ERIC Educational Resources Information Center

    Dwyer, Andrew C.

    2016-01-01

    This study examines the effectiveness of three approaches for maintaining equivalent performance standards across test forms with small samples: (1) common-item equating, (2) resetting the standard, and (3) rescaling the standard. Rescaling the standard (i.e., applying common-item equating methodology to standard setting ratings to account for…

  7. 40 CFR 53.52 - Leak check test.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10–2.5 § 53.52... to include the facility, including components, instruments, operator controls, a written procedure...

  8. 40 CFR 53.52 - Leak check test.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10–2.5 § 53.52... to include the facility, including components, instruments, operator controls, a written procedure...

  9. 40 CFR 53.52 - Leak check test.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent Methods for PM2.5 or PM10–2.5 § 53.52... to include the facility, including components, instruments, operator controls, a written procedure...

  10. 40 CFR 86.1804-01 - Acronyms and abbreviations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...—Nonmethane Hydrocarbons. NMHCE—Non-Methane Hydrocarbon Equivalent. NMOG—Non-methane organic gases. NO—nitric....—Degree(s). DNPH—2,4-dinitrophenylhydrazine. EDV—Emission Data Vehicle. EP—End point. ETW—Equivalent test...—dispensed fuel temperature. THC—Total Hydrocarbons. THCE—Total Hydrocarbon Equivalent. TLEV—Transitional Low...

  11. 40 CFR 86.1804-01 - Acronyms and abbreviations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...—Nonmethane Hydrocarbons. NMHCE—Non-Methane Hydrocarbon Equivalent. NMOG—Non-methane organic gases. NO—nitric....—Degree(s). DNPH—2,4-dinitrophenylhydrazine. EDV—Emission Data Vehicle. EP—End point. ETW—Equivalent test...—dispensed fuel temperature. THC—Total Hydrocarbons. THCE—Total Hydrocarbon Equivalent. TLEV—Transitional Low...

  12. [Interlaboratory Study on Evaporation Residue Test for Food Contact Products (Report 2)].

    PubMed

    Ohno, Hiroyuki; Mutsuga, Motoh; Abe, Tomoyuki; Abe, Yutaka; Amano, Homare; Ishihara, Kinuyo; Ohsaka, Ikue; Ohno, Haruka; Ohno, Yuichiro; Ozaki, Asako; Kakihara, Yoshiteru; Kobayashi, Hisashi; Sakuragi, Hiroshi; Shibata, Hiroshi; Shirono, Katsuhiro; Sekido, Haruko; Takasaka, Noriko; Takenaka, Yu; Tajima, Yoshiyasu; Tanaka, Aoi; Tanaka, Hideyuki; Nakanishi, Toru; Nomura, Chie; Haneishi, Nahoko; Hayakawa, Masato; Miura, Toshihiko; Yamaguchi, Miku; Yamada, Kyohei; Watanabe, Kazunari; Sato, Kyoko

    2018-01-01

    An interlaboratory study was performed to evaluate the equivalence between an official method and a modified method of evaporation residue test using heptane as a food-simulating solvent for oily or fatty foods, based on the Japanese Food Sanitation Law for food contact products. Twenty-three laboratories participated, and tested the evaporation residues of nine test solutions as blind duplicates. In the official method, heating for evaporation was done with a water bath. In the modified method, a hot plate was used for evaporation, and/or a vacuum concentration procedure was skipped. In most laboratories, the test solutions were heated until just prior to dryness, and then allowed to dry under residual heat. Statistical analysis revealed that there was no significant difference between the two methods. Accordingly, the modified method provides performance equal to the official method, and is available as an alternative method. Furthermore, an interlaboratory study was performed to evaluate and compare two leaching solutions (95% ethanol and isooctane) used as food-simulating solvents for oily or fatty foods in the EU. The results demonstrated that there was no significant difference between heptane and these two leaching solutions.

  13. The performance of the new enhanced-resolution satellite passive microwave dataset applied for snow water equivalent estimation

    NASA Astrophysics Data System (ADS)

    Pan, J.; Durand, M. T.; Jiang, L.; Liu, D.

    2017-12-01

    The newly processed NASA MEaSUREs Calibrated Enhanced-Resolution Brightness Temperature (CETB) dataset, reconstructed using the antenna measurement response function (MRF), offers significantly improved fine-resolution measurements, with better georegistration for time-series observations and an equivalent field of view (FOV) for frequencies with the same nominal spatial resolution. We look to its potential for global snow observation, and therefore aim to test its performance for characterizing snow properties, especially snow water equivalent (SWE), over large areas. In this research, two candidate SWE algorithms are tested in China for the years 2005 to 2010 using the reprocessed TB from the Advanced Microwave Scanning Radiometer for EOS (AMSR-E), with the results evaluated against daily snow depth measurements at over 700 national synoptic stations. The first algorithm is the SWE retrieval algorithm used for the FengYun (FY)-3 Microwave Radiation Imager. This algorithm uses multi-channel TB to calculate SWE for three major snow regions in China, with coefficients adapted for different land cover types. The second algorithm is the newly established Bayesian Algorithm for SWE Estimation with Passive Microwave measurements (BASE-PM). This algorithm uses a physically based snow radiative transfer model to find the histogram of the most likely snow properties that match the multi-frequency TB from 10.65 to 90 GHz. It provides a rough estimate of snow depth and grain size at the same time, and showed a 30 mm SWE RMS error against ground radiometer measurements at Sodankyla. This study is the first attempt to test it spatially with satellite data. The use of this algorithm benefits from the high resolution and the spatial consistency between frequencies embedded in the new dataset. This research will answer three questions. First, to what extent can CETB increase the heterogeneity in the mapped SWE? Second, will the SWE estimation error statistics be improved using this high-resolution dataset? Third, how will the SWE retrieval accuracy be improved using CETB and the new SWE retrieval techniques?

  14. Results of Li-Tho trial: a prospective randomized study on effectiveness of LigaSure® in lung resections.

    PubMed

    Bertolaccini, Luca; Viti, Andrea; Cavallo, Antonio; Terzi, Alberto

    2014-04-01

    The role of the electrothermal bipolar tissue sealing system (LigaSure(®) (LS); Covidien, Inc., CO, USA) in thoracic surgery is still undefined, and reports of its use are limited. The objective of the trial was to evaluate the costs and benefits of LS in major lung resection surgery. A randomized blinded study of a consecutive series of 100 patients undergoing lobectomy was undertaken. After muscle-sparing thoracotomy and classification of lung fissures according to Craig-Walker, patients with fissure Grade 2-4 were randomized to fissure completion in the Stapler group or the LS group. Recorded parameters were analysed for differences in selected intraoperative and postoperative outcomes. Statistical analysis was performed with the bootstrap method; Pearson's χ(2) test and Fisher's exact test were used to calculate probability values for comparisons of dichotomous variables, and cost-benefit evaluation was performed using Pareto optimal analysis. There were no significant differences between groups in demographic and baseline characteristics. No patient was withdrawn from the study, and no adverse effect was recorded. There were no deaths or major complications in either group, and no statistically significant differences in operative time or morbidity between the LS group and the Stapler group. The LS group showed a statistically nonsignificant increase in postoperative air leaks in the first 24 postoperative hours, and a statistically significant increase in drainage volume. No statistically significant difference in hospital length of stay was observed. Overall, the LS group had a favourable multi-criteria analysis of the cost/benefit ratio, with a good 'Pareto optimum'. LS is a safe device for thoracic surgery and can be a valid alternative to staplers. In this setting, LS allows preservation of functional lung tissue. As to costs, LS seems equivalent to staplers.

  15. Attention to Physical Activity-Equivalent Calorie Information on Nutrition Facts Labels: An Eye-Tracking Investigation.

    PubMed

    Wolfson, Julia A; Graham, Dan J; Bleich, Sara N

    2017-01-01

    To investigate attention to Nutrition Facts Labels (NFLs) with numeric-only versus both numeric and activity-equivalent calorie information, and attitudes toward activity-equivalent calories. An eye-tracking camera monitored participants' viewing of NFLs for 64 packaged foods with either standard or modified NFLs. Participants self-reported demographic information and diet-related attitudes and behaviors. Participants came to the Behavioral Medicine Lab at Colorado State University in spring 2015. The researchers randomized 234 participants to view NFLs with numeric calorie information only (n = 108) or numeric and activity-equivalent calorie information (n = 126). The outcomes were attention to and attitudes about activity-equivalent calorie information. Differences by experimental condition and weight loss intention (overall and within experimental condition) were assessed using t tests and Pearson's chi-square tests of independence. Overall, participants viewed numeric calorie information on 20% of NFLs, for 249 ms on average. Participants in the modified NFL condition viewed activity-equivalent information on 17% of NFLs, for 231 ms. Most participants indicated that activity-equivalent calorie information would help them decide whether to eat a food (69%) and that they preferred having both numeric and activity-equivalent calorie information on NFLs (70%). Participants used activity-equivalent calorie information on NFLs and found this information helpful for making food decisions. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  16. Virial Coefficients from Unified Statistical Thermodynamics of Quantum Gases Trapped under Generic Power Law Potential in d Dimension and Equivalence of Quantum Gases

    NASA Astrophysics Data System (ADS)

    Bahauddin, Shah Mohammad; Mehedi Faruk, Mir

    2016-09-01

    From the unified statistical thermodynamics of quantum gases, the virial coefficients of ideal Bose and Fermi gases trapped under a generic power law potential are derived systematically. From the general result one can reproduce the known virial coefficients in d = 3 and d = 2. More importantly, we find that the virial coefficients of Bose and Fermi gases become identical (except the second virial coefficient, whose sign differs) when the gases are trapped under a harmonic potential in d = 1. This result supports the equivalence between Bose and Fermi gases established in d = 1 (J. Stat. Phys. DOI 10.1007/s10955-015-1344-4). It is also found that the virial coefficients of a two-dimensional free Bose (Fermi) gas are equal to those of a one-dimensional harmonically trapped Bose (Fermi) gas.

  17. Statistics of equivalent width data and new oscillator strengths for Si II, Fe II, and Mn II. [in interstellar medium

    NASA Technical Reports Server (NTRS)

    Van Buren, Dave

    1986-01-01

    Equivalent width data from Copernicus and IUE appear to have an exponential, rather than a Gaussian distribution of errors. This is probably because there is one dominant source of error: the assignment of the background continuum shape. The maximum likelihood method of parameter estimation is presented for the case of exponential statistics, in enough generality for application to many problems. The method is applied to global fitting of Si II, Fe II, and Mn II oscillator strengths and interstellar gas parameters along many lines of sight. The new values agree in general with previous determinations but are usually much more tightly constrained. Finally, it is shown that care must be taken in deriving acceptable regions of parameter space because the probability contours are not generally ellipses whose axes are parallel to the coordinate axes.
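    A hedged illustration of estimation under an exponential error law (synthetic data, not the paper's measurements): for a two-sided exponential (Laplace) error distribution, maximizing the likelihood of a location parameter is equivalent to minimizing the sum of absolute residuals, so the maximum likelihood estimate is the sample median rather than the least-squares mean.

```python
import random, statistics

rng = random.Random(1)
truth, scale = 100.0, 5.0  # invented "true" equivalent width and error scale

# Laplace (two-sided exponential) noise as the difference of two exponentials
data = [truth + rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
        for _ in range(10001)]

mean_est = statistics.fmean(data)     # ML location estimate under Gaussian errors
median_est = statistics.median(data)  # ML location estimate under Laplace errors

def abs_loss(c):
    # the Laplace negative log-likelihood is linear in this L1 loss
    return sum(abs(x - c) for x in data)

print(f"mean {mean_est:.3f}, median {median_est:.3f}")
```

    Both estimates land near the true value, but the median attains the lower absolute-residual loss, which is exactly the quantity an exponential-error likelihood rewards.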

  18. Statistical analysis of loopy belief propagation in random fields

    NASA Astrophysics Data System (ADS)

    Yasuda, Muneki; Kataoka, Shun; Tanaka, Kazuyuki

    2015-10-01

    Loopy belief propagation (LBP), which is equivalent to the Bethe approximation in statistical mechanics, is a message-passing-type inference method that is widely used to analyze systems based on Markov random fields (MRFs). In this paper, we propose a message-passing-type method to analytically evaluate the quenched average of LBP in random fields by using the replica cluster variation method. The proposed analytical method is applicable to general pairwise MRFs with random fields whose distributions differ from each other and can give the quenched averages of the Bethe free energies over random fields, which are consistent with numerical results. The order of its computational cost is equivalent to that of standard LBP. In the latter part of this paper, we describe the application of the proposed method to Bayesian image restoration, in which we observed that our theoretical results are in good agreement with the numerical results for natural images.
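    A minimal sum-product sketch of the LBP scheme described above, on a toy pairwise binary MRF with invented potentials. On a tree, such as this three-node chain, LBP is exact, so its beliefs can be checked against brute-force marginals.

```python
import itertools, math

# unary potentials phi[i][x] and pairwise potentials psi[(i, j)][xi][xj];
# all values are invented for this toy example
phi = {0: [1.0, 2.0], 1: [1.5, 0.5], 2: [0.7, 1.3]}
psi = {(0, 1): [[1.2, 0.4], [0.4, 1.2]],
       (1, 2): [[0.9, 1.1], [1.1, 0.9]]}
edges = list(psi)
neighbors = {0: [1], 1: [0, 2], 2: [1]}

def edge_pot(i, j, xi, xj):
    return psi[(i, j)][xi][xj] if (i, j) in psi else psi[(j, i)][xj][xi]

# directed messages m[(i, j)][xj], initialised uniform, updated in parallel
m = {(i, j): [1.0, 1.0] for e in edges for (i, j) in (e, e[::-1])}
for _ in range(10):
    new = {}
    for (i, j) in m:
        msg = [sum(phi[i][xi] * edge_pot(i, j, xi, xj)
                   * math.prod(m[(k, i)][xi] for k in neighbors[i] if k != j)
                   for xi in (0, 1))
               for xj in (0, 1)]
        z = sum(msg)
        new[(i, j)] = [v / z for v in msg]
    m = new

def belief(i):
    b = [phi[i][x] * math.prod(m[(k, i)][x] for k in neighbors[i]) for x in (0, 1)]
    z = sum(b)
    return [v / z for v in b]

def exact(i):
    # brute-force marginal by enumerating all 2**3 configurations
    tot = [0.0, 0.0]
    for cfg in itertools.product((0, 1), repeat=3):
        w = math.prod(phi[v][cfg[v]] for v in phi)
        w *= math.prod(psi[e][cfg[e[0]]][cfg[e[1]]] for e in edges)
        tot[cfg[i]] += w
    z = sum(tot)
    return [v / z for v in tot]

for i in (0, 1, 2):
    print(i, [round(v, 4) for v in belief(i)], [round(v, 4) for v in exact(i)])
```

    On graphs with cycles the same message updates yield only the Bethe approximation, which is the regime the paper's replica analysis addresses.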

  19. Optimizing Equivalence-Based Instruction: Effects of Training Protocols on Equivalence Class Formation

    ERIC Educational Resources Information Center

    Fienup, Daniel M.; Wright, Nicole A.; Fields, Lanny

    2015-01-01

    Two experiments evaluated the effects of the simple-to-complex and simultaneous training protocols on the formation of academically relevant equivalence classes. The simple-to-complex protocol intersperses probes of derived relations with training of baseline relations. The simultaneous protocol conducts all training trials and test trials in separate…

  20. Students' Conceptions of Models of Fractions and Equivalence

    ERIC Educational Resources Information Center

    Jigyel, Karma; Afamasaga-Fuata'i, Karoline

    2007-01-01

    A solid understanding of equivalent fractions is considered a steppingstone towards a better understanding of operations with fractions. In this article, 55 rural Australian students' conceptions of equivalent fractions are presented. Data collected included students' responses to a short written test and follow-up interviews with three students…

  1. Oxygenation state and twilight vision at 2438 m.

    PubMed

    Connolly, Desmond M

    2011-01-01

    Under twilight viewing conditions, hypoxia, equivalent to breathing air at 3048 m (10,000 ft), compromises low contrast acuity, dynamic contrast sensitivity, and chromatic sensitivity. Selected past experiments have been repeated under milder hypoxia, equivalent to altitude exposure below 2438 m (8000 ft), to further define the influence of oxygenation state on mesopic vision. To assess photopic and mesopic visual function, 12 subjects each undertook three experiments using the Contrast Acuity Assessment test, the Frequency Doubling Perimeter, and the Color Assessment and Diagnosis (CAD) test. Experiments were conducted near sea level breathing 15.2% oxygen (balance nitrogen) and 100% oxygen, representing mild hypobaric hypoxia at 2438 m (8000 ft) and the benefit of supplementary oxygen, respectively. Oxygenation state was a statistically significant determinant of visual performance on all three visual parameters at mesopic, but not photopic, luminance. Mesopic sensitivity was greater with supplementary oxygen, but the magnitude of each hypoxic decrement was slight. Hypoxia elevated mesopic contrast acuity thresholds by approximately 4%; decreased mesopic dynamic contrast sensitivity by approximately 2 dB; and extended mean color ellipse axis length by approximately one CAD unit at mesopic luminance (that is, hypoxia decreased chromatic sensitivity). The results indicate that twilight vision may be susceptible to conditions of altered oxygenation at upper-to-mid mesopic luminance with relevance to contemporary night flying, including using night vision devices. Supplementary oxygen should be considered when optimal visual performance is mission-critical during flight above 2438 m (8000 ft) in dim light.

  2. Modified physiologically equivalent temperature—basics and applications for western European climate

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Chang; Matzarakis, Andreas

    2018-05-01

    A new thermal index, the modified physiologically equivalent temperature (mPET), has been developed for universal application in different climate zones. The mPET improves on the weaknesses of the original physiologically equivalent temperature (PET) by better accounting for humidity and clothing variability. The principles of mPET and the differences between the original PET and mPET are introduced and discussed in this study. Furthermore, this study demonstrates the usability of mPET with climatic data from Freiburg, in Western Europe. Comparisons of PET, mPET, and the Universal Thermal Climate Index (UTCI) show that mPET gives a more realistic estimation of human thermal sensation than the other two indices for the thermal conditions in Freiburg. Additionally, a comparison of physiological parameters between the mPET model and the PET model (the Munich Energy Balance Model for Individuals, MEMI) is presented. The core and skin temperatures of the PET model drop more sharply to low values during cold stress than those of the mPET model, suggesting that the mPET model gives more realistic core and mean skin temperatures. A statistical regression analysis of mPET on air temperature, mean radiant temperature, vapor pressure, and wind speed was carried out. The R-square value (0.995) indicates a strong relationship between the human-biometeorological factors and mPET, and the regression coefficient of each factor represents its influence on mPET (e.g., ±1 °C of Ta = ±0.54 °C of mPET). A first-order regression was considered to predict mPET at Freiburg during 2003 more realistically than higher-order regression models, because its predictions differed less from the mPET calculated from measurement data. Statistical tests confirm that mPET can effectively evaluate the influence of all human-biometeorological factors on thermal environments. Moreover, a first-order regression function can also predict the thermal evaluations of mPET from human-biometeorological factors in Freiburg.

  3. Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.

    PubMed

    Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E

    2014-02-28

    The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. By accounting for any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful for generalizing our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of the number of variables to the sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of the metabolic consequences of vitamin B6 deficiency illustrates the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one-group pre-intervention and post-intervention comparisons, multiple parallel-group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.

  4. Comparative bioavailability study of cefuroxime axetil (equivalent to 500 mg cefuroxime/tablet) tablets (Zednad® versus Zinnat®) in healthy male volunteers.

    PubMed

    Asiri, Y A; Al-Hadiya, B M; Kadi, A A; Al-Khamis, K I; Mowafy, H A; El-Sayed, Y M

    2011-09-01

    This study was performed to investigate the bioequivalence of two cefuroxime axetil tablet products: a generic test product (A), Zednad® Tablet (500 mg cefuroxime/tablet, Diamond Pharma, Syria), and the reference product (B), Zinnat® Tablet (500 mg cefuroxime/tablet, GlaxoSmithKline, Saudi Arabia). The bioavailability study was carried out in 24 healthy male volunteers. The subjects received one Zednad® Tablet (500 mg/tablet) and one Zinnat® Tablet (500 mg/tablet) in a randomized two-way crossover design on 2 treatment days, after an overnight fast of at least 10 h, with a washout period of 7 days. Twenty-four volunteers plus two alternates completed the crossover. The bioanalysis of clinical plasma samples was accomplished by an HPLC method developed and validated in accordance with international guidelines. Pharmacokinetic parameters, determined by standard non-compartmental methods, and ANOVA statistics were calculated using SAS Statistical Software. The significance of a sequence effect was tested using the subjects nested in sequence as the error term. The 90% confidence intervals for the ratios between the test and reference product pharmacokinetic parameters AUC0→t, AUC0→∞, and Cmax were calculated and found to be within the acceptance limits of 80.00 - 125.00%. The study demonstrated that the test product (A) was bioequivalent to the reference product (B) following an oral dose of a 500 mg tablet; therefore, the two formulations were considered bioequivalent.
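    The 80.00 - 125.00% acceptance rule can be sketched as follows; this is a simplified paired-ratio calculation on invented log-ratios, not the study's full crossover ANOVA.

```python
import math, statistics

# hypothetical ln(test/reference) ratios of AUC for 12 subjects
ln_ratio = [0.05, -0.02, 0.08, 0.01, -0.04, 0.06,
            0.03, -0.01, 0.02, 0.04, -0.03, 0.00]
n = len(ln_ratio)
mean = statistics.fmean(ln_ratio)
se = statistics.stdev(ln_ratio) / math.sqrt(n)
t90 = 1.796  # t quantile for a two-sided 90% CI with 11 degrees of freedom
lo, hi = math.exp(mean - t90 * se), math.exp(mean + t90 * se)
bioequivalent = 0.80 <= lo and hi <= 1.25
print(f"90% CI for the geometric mean ratio: {lo:.3f}-{hi:.3f}; "
      f"within 0.80-1.25: {bioequivalent}")
```

    Working on the log scale and back-transforming gives the confidence interval for the geometric mean ratio; bioequivalence is declared only if the whole 90% interval sits inside 0.80 - 1.25.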

  5. Ensemble inequivalence and Maxwell construction in the self-gravitating ring model

    NASA Astrophysics Data System (ADS)

    Rocha Filho, T. M.; Silvestre, C. H.; Amato, M. A.

    2018-06-01

    The statement that Gibbs equilibrium ensembles are equivalent is a baseline assumption in many approaches to equilibrium statistical mechanics. However, it is known that for some physical systems this equivalence does not hold. In this paper we illustrate from first principles the inequivalence between the canonical and microcanonical ensembles for a system with long-range interactions. We use molecular dynamics and Monte Carlo simulations to explore the thermodynamic properties of the self-gravitating ring model and discuss under what conditions the Maxwell construction is applicable.

  6. [Comparison between rapid detection method of enzyme substrate technique and multiple-tube fermentation technique in water coliform bacteria detection].

    PubMed

    Sun, Zong-ke; Wu, Rong; Ding, Pei; Xue, Jin-Rong

    2006-07-01

    The objective was to compare the rapid enzyme substrate technique with the multiple-tube fermentation technique for the detection of coliform bacteria in water. Inoculated and real water samples were used to compare the equivalence and false-positive rates of the two methods. The results demonstrate that the enzyme substrate technique is equivalent to the multiple-tube fermentation technique (P = 0.059), and the difference in false-positive rates between the two methods is not statistically significant. This suggests that the enzyme substrate technique can be used as a standard method for evaluating the microbiological safety of water.
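    Paired positive/negative calls from two methods on the same samples are often compared with McNemar's test on the discordant pairs; the counts below are invented, and this is not necessarily the statistic the authors used.

```python
import math

# discordant-pair counts (invented): b = positive by enzyme substrate only,
# c = positive by multiple-tube fermentation only
b, c = 3, 9
chi2 = (abs(b - c) - 1) ** 2 / (b + c)   # continuity-corrected McNemar statistic
p = math.erfc(math.sqrt(chi2 / 2))       # upper tail of chi-square with 1 df
print(f"McNemar chi2 = {chi2:.3f}, p = {p:.3f}")
```

    Concordant pairs carry no information about a systematic difference between the methods, which is why only the discordant counts enter the statistic.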

  7. Mars Seasonal Polar Caps as a Test of the Equivalence Principle

    NASA Technical Reports Server (NTRS)

    Rubincam, David Parry

    2011-01-01

    The seasonal polar caps of Mars can be used to test the equivalence principle in general relativity. The north and south caps, which are composed of carbon dioxide, wax and wane with the seasons. If the ratio of the inertial to gravitational masses of the caps differs from the same ratio for the rest of Mars, then the equivalence principle fails, Newton's third law fails, and the caps will pull Mars one way and then the other with a force aligned with the planet's spin axis. This leads to a secular change in Mars's along-track position in its orbit about the Sun, and to a secular change in the orbit's semimajor axis. The caps are a poor Eötvös test of the equivalence principle, being four orders of magnitude weaker than laboratory tests and seven orders of magnitude weaker than the limit found by lunar laser ranging; the reason is the small mass of the caps compared to Mars as a whole. The principal virtue of using Mars is that the caps contain carbon, an element not normally considered in such experiments. The Earth, with its seasonal snow cover, can also be used for a similar test.

  8. Open-label, randomized, single-dose, crossover study to evaluate the pharmacokinetics and safety differences between two docetaxel products, CKD-810 and Taxotere injection, in patients with advanced solid cancer.

    PubMed

    Cho, Eun Kyung; Park, Ji-Young; Lee, Kyung Hee; Song, Hong Suk; Min, Young Joo; Kim, Yeul Hong; Kang, Jin-Hyoung

    2014-01-01

    The aim of this study was to compare CKD-810 (test docetaxel) with Taxotere(®) (reference docetaxel) in terms of pharmacokinetics and safety in patients with advanced or metastatic carcinoma. A randomized, open-label, two-way crossover study was conducted in eligible patients. Patients received the reference or test drug (75 mg/m(2) docetaxel) by intravenous infusion over 60 min in the first period and the alternative drug in the second period, with a washout of 3 weeks. Plasma concentrations of docetaxel were determined by validated high-performance liquid chromatography coupled to tandem mass spectrometry detection. Pharmacokinetic parameters, including the maximum plasma concentration (C(max)) and the area under the concentration-time curve (AUC), were determined by non-compartmental analysis. A total of 44 patients were included in the study; 21 patients received the test drug and 23 the reference drug in the first cycle. The C(max) of docetaxel was 2,658.77 ng/mL for the test drug and 2,827.60 ng/mL for the reference drug, a difference that was not statistically significant. Time to reach C(max) (T(max)) for CKD-810 (0.94 h) versus reference docetaxel (0.97 h) was also not significantly different. Other pharmacokinetic parameters, including the plasma AUC, elimination half-life, and total body clearance, showed similar values without a significant difference. The most common grade 3 or 4 toxicity was neutropenia (CKD-810 19.5 or 29.3 %; reference docetaxel 14.6 or 41.5 %). Febrile neutropenia was experienced by only one patient in each group. Two patients died of disease progression during the study. In patients with advanced or metastatic solid malignancies, CKD-810 (docetaxel anhydrous) was equivalent to reference docetaxel in terms of pharmacokinetic parameters and safety profile. Additionally, the test and reference drugs met the regulatory criteria for pharmacokinetic equivalence.
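    The regulatory criterion for pharmacokinetic equivalence is typically average bioequivalence: the 90% confidence interval for the test/reference geometric mean ratio of C(max) and AUC must lie within 0.80-1.25. A minimal sketch of that check, using hypothetical log-scale summary statistics rather than the trial's data:

```python
# Average-bioequivalence check: the 90% CI for the test/reference
# geometric mean ratio (GMR) must fall within 0.80-1.25.
# The log-scale inputs below are hypothetical, not the trial's data.
import math
from scipy.stats import t

def be_interval(mean_log_ratio, se_log_ratio, df, level=0.90):
    """Confidence interval for the GMR from log-scale statistics."""
    tcrit = t.ppf(1 - (1 - level) / 2, df)
    lo = math.exp(mean_log_ratio - tcrit * se_log_ratio)
    hi = math.exp(mean_log_ratio + tcrit * se_log_ratio)
    return lo, hi

lo, hi = be_interval(mean_log_ratio=-0.05, se_log_ratio=0.04, df=42)
print(f"90% CI for GMR: ({lo:.3f}, {hi:.3f})")
print("bioequivalent" if 0.80 <= lo and hi <= 1.25 else "not demonstrated")
```

    The analysis is done on the log scale because C(max) and AUC are roughly log-normal, so the exponentiated interval applies to the geometric mean ratio.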

  9. Randomized controlled trial of web-based multimodal therapy for children with acquired brain injury to improve gross motor capacity and performance.

    PubMed

    Baque, Emmah; Barber, Lee; Sakzewski, Leanne; Boyd, Roslyn N

    2017-06-01

    To compare the efficacy of a web-based multimodal training programme, 'Move it to improve it' (Mitii™), with usual care on gross motor capacity and performance for children with an acquired brain injury. Randomized waitlist controlled trial. Home environment. A total of 60 independently ambulant children (30 in each group), a minimum of 12 months post-acquired brain injury, were recruited and randomly allocated to receive either 20 weeks of Mitii™ training (30 minutes/day, six days/week, total 60 hours) immediately, or to be waitlisted (usual-care control group) for 20 weeks. A total of 58 children completed baseline assessments (32 males; age 11 years 11 months ± 2 years 6 months; Gross Motor Function Classification System equivalent I = 29, II = 29). The Mitii™ programme comprised gross motor, upper limb and visual perception/cognitive activities. The primary outcome was 30-second repetition-maximum functional strength tests for the lower limb (sit-to-stand, step-ups, half-kneel to stand). Secondary outcomes were the 6-minute walk test, High-level Mobility Assessment Tool, Timed Up and Go Test and habitual physical activity as captured by four-day accelerometry. Groups were equivalent at baseline on demographic and clinical measures. The Mitii™ group demonstrated significantly greater improvements on the combined score of the functional strength tests (mean difference 10.19 repetitions; 95% confidence interval, 3.26-17.11; p = 0.006) compared with the control group. There were no other between-group differences on secondary outcomes. Although the Mitii™ programme demonstrated statistically significant improvements in the functional strength tests of the lower limb, results did not exceed the minimum detectable change and cannot be considered clinically relevant for children with an acquired brain injury. Australian New Zealand Clinical Trials Registration Number, ANZCTR12613000403730.

  10. Evaluation of the safety and durability of low-cost nonprogrammable electric powered wheelchairs.

    PubMed

    Pearlman, Jonathan L; Cooper, Rory A; Karnawat, Jaideep; Cooper, Rosemarie; Boninger, Michael L

    2005-12-01

    To evaluate whether a selection of low-cost, nonprogrammable electric-powered wheelchairs (EPWs) meets the American National Standards Institute (ANSI)/Rehabilitation Engineering and Assistive Technology Society of North America (RESNA) Wheelchair Standards requirements. Objective comparison tests of various aspects of power wheelchair design and performance of 4 EPW types. Three of each of the following EPWs: Pride Mobility Jet 10 (Pride), Invacare Pronto M50 (Invacare), Electric Mobility Rascal 250PC (Electric Mobility), and the Golden Technologies Alanté GP-201-F (Golden). Rehabilitation engineering research center. Not applicable. Static tipping angle; dynamic tipping score; braking distance; energy consumption; climatic conditioning; power and control systems integrity and safety; and static, impact, and fatigue life (equivalent cycles). Static tipping angle and dynamic tipping score were significantly different across manufacturers for each tipping direction (range, 6.6 degrees-35.6 degrees). Braking distances were significantly different across manufacturers (range, 7.4-117.3 cm). Significant differences among groups were found with analysis of variance (ANOVA). Energy consumption results show that all EPWs can travel over 17 km before the battery is expected to be exhausted under idealized conditions (range, 18.2-32.0 km). Significant differences among groups were found with ANOVA. All EPWs passed the climatic conditioning tests. Several adverse responses were found during the power and control systems testing, including motors smoking during the stalling condition (Electric Mobility), charger safety issues (Electric Mobility, Invacare), and controller failures (Golden). All EPWs passed static and impact testing; 9 of 12 failed fatigue testing (3 Invacare, 3 Golden, 1 Electric Mobility, 2 Pride). Equivalent cycles did not differ statistically across manufacturers (range, 9759-824,628 cycles). 
Large variability in the results, especially with respect to static tipping, power and control system failures, and fatigue life, suggests that design improvements must be made to make these low-cost, nonprogrammable EPWs safe and reliable for the consumer. Based on our results, these EPWs do not, in general, meet the ANSI/RESNA Wheelchair Standards requirements.

  11. Effectiveness of Test-Enhanced Learning (TEL) in lectures for undergraduate medical students

    PubMed Central

    Ayyub, Aisha; Mahboob, Usman

    2017-01-01

    Objective: To determine the effectiveness of Test-Enhanced Learning as a learning tool in lectures for undergraduate medical students. Method: This quantitative, randomized controlled trial included eighty-four students of 4th year MBBS from Yusra Medical & Dental College, Islamabad. The duration of the study was from March 2016 to August 2016. After obtaining informed consent, participants were equally assigned to interventional and non-interventional study groups through stratified randomization. Single-best-answer MCQs of special pathology were used as the data collection instrument after validation. A pre- and post-test was taken by both groups, before and after the intervention, respectively, and their results were compared using SPSS version 21. Results: There were 13 male (31%) and 29 female (69%) participants in each study group, who showed an equivalent baseline performance on the pre-test (p=0.95). A statistically significant difference was found between the mean scores of the interventional and non-interventional study groups at the exit exam (p=0.00). The interventional group also showed a significant improvement in post-test scores (mean: 17.17±1.59) compared with pre-test scores (mean: 6.19±1.81). Conclusions: Test-enhanced learning has a significant effect on improving the learning of course content delivered to undergraduate medical students through lectures. PMID:29492055

  12. Assessing the Practical Equivalence of Conversions when Measurement Conditions Change

    ERIC Educational Resources Information Center

    Liu, Jinghua; Dorans, Neil J.

    2012-01-01

    At times, the same set of test questions is administered under different measurement conditions that might affect the psychometric properties of the test scores enough to warrant different score conversions for the different conditions. We propose a procedure for assessing the practical equivalence of conversions developed for the same set of test…

  13. The Stanford equivalence principle program

    NASA Technical Reports Server (NTRS)

    Worden, Paul W., Jr.; Everitt, C. W. Francis; Bye, M.

    1989-01-01

    The Stanford Equivalence Principle Program (Worden, Jr. 1983) is intended to test the uniqueness of free fall to the ultimate possible accuracy. The program is being conducted in two phases: first, a ground-based version of the experiment, which should have a sensitivity to differences in rate of fall of one part in 10(exp 12); followed by an orbital experiment with a sensitivity of one part in 10(exp 17) or better. The ground-based experiment, although a sensitive equivalence principle test in its own right, is being used for technology development for the orbital experiment. A secondary goal of the experiment is a search for exotic forces. The instrument is very well suited for this search, which would be conducted mostly with the ground-based apparatus. The short range predicted for these forces means that forces originating in the Earth would not be detectable in orbit. But detection of Yukawa-type exotic forces from a nearby large satellite (such as Space Station) is feasible, and gives a very sensitive and controllable test for little more effort than the orbiting equivalence principle test itself.

  14. Comparison of two surgical procedures for use of the acellular dermal matrix graft in the treatment of gingival recessions: a randomized controlled clinical study.

    PubMed

    Felipe, Maria Emília M C; Andrade, Patrícia F; Grisi, Marcio F M; Souza, Sérgio L S; Taba, Mário; Palioto, Daniela B; Novaes, Arthur B

    2007-07-01

    The aim of this randomized, controlled, clinical investigation was to compare two surgical techniques for root coverage with the acellular dermal matrix graft to evaluate which technique provided better root coverage, a better esthetic result, and less postoperative discomfort. Fifteen patients with bilateral Miller Class I or II gingival recessions were selected. Fifteen pairs of recessions were treated and assigned randomly to the test group, and the contralateral recessions were assigned to the control group. The control group was treated with a broader flap and vertical releasing incisions; the test group was treated with the proposed surgical technique, without vertical releasing incisions. The clinical parameters evaluated were probing depth, relative clinical attachment level, gingival recession (GR), width of keratinized tissue, thickness of keratinized tissue, esthetic result, and pain evaluation. The measurements were taken before the surgeries and after 6 months. At baseline, all parameters were similar for both groups. At 6 months, a statistically significant greater reduction in GR favored the control group. The percentage of root coverage was 68.98% and 84.81% for the test and control groups, respectively. The esthetic result was equivalent between the groups, and all patients tolerated both procedures well. Both techniques provided significant root coverage, good esthetic results, and similar levels of postoperative discomfort. However, the control technique had statistically significantly better results for root coverage of localized gingival recessions.

  15. Quantitative Skills, Critical Thinking, and Writing Mechanics in Blended versus Face-to-Face Versions of a Research Methods and Statistics Course

    ERIC Educational Resources Information Center

    Goode, Christopher T.; Lamoreaux, Marika; Atchison, Kristin J.; Jeffress, Elizabeth C.; Lynch, Heather L.; Sheehan, Elizabeth

    2018-01-01

    Hybrid or blended learning (BL) has been shown to be equivalent to or better than face-to-face (FTF) instruction in a broad variety of contexts. We randomly assigned students to either 50/50 BL or 100% FTF versions of a research methods and statistics in psychology course. Students who took the BL version of the course scored significantly lower…

  16. High Dimensional Classification Using Features Annealed Independence Rules.

    PubMed

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies, such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is critically important to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
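    The core mechanism can be sketched in a few lines: rank features by the absolute two-sample t-statistic, keep only the top m, and classify with a centroid-based rule on the retained features. The sketch below uses synthetic data and a plain nearest-centroid rule for brevity; it is not the authors' implementation, which additionally standardizes by feature variances.

```python
# FAIR-style feature selection, minimal sketch on synthetic data:
# rank features by |two-sample t-statistic|, keep the top m, then use
# a simple nearest-centroid (independence-type) rule.
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 40, 500, 20            # samples per class, features, features kept
X0 = rng.normal(0.0, 1.0, (n, p))
X1 = rng.normal(0.0, 1.0, (n, p))
X1[:, :10] += 1.5                # only the first 10 features carry signal

def t_stats(a, b):
    """Two-sample t-statistic for every feature (column)."""
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    se = np.sqrt(va / len(a) + vb / len(b))
    return (a.mean(axis=0) - b.mean(axis=0)) / se

keep = np.argsort(-np.abs(t_stats(X0, X1)))[:m]   # indices of top-m features
c0 = X0[:, keep].mean(axis=0)                      # class centroids in the
c1 = X1[:, keep].mean(axis=0)                      # selected feature space

def classify(x):
    """Assign to the class with the nearer centroid (1 if class 1)."""
    return int(np.sum((x - c1) ** 2) < np.sum((x - c0) ** 2))

acc = np.mean([classify(x) == 0 for x in X0[:, keep]] +
              [classify(x) == 1 for x in X1[:, keep]])
print(f"training accuracy with top-{m} features: {acc:.2f}")
```

    Using all 500 features instead of the selected 20 would dilute the 10 informative coordinates with noise, which is exactly the accumulation effect the paper describes.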

  17. Simplified Approach to Predicting Rough Surface Transition

    NASA Technical Reports Server (NTRS)

    Boyle, Robert J.; Stripf, Matthias

    2009-01-01

    Turbine vane heat transfer predictions are given for smooth and rough vanes where the experimental data show transition moving forward on the vane as the physical roughness height increases. Consistent with smooth vane heat transfer, the transition moves forward for a fixed roughness height as the Reynolds number increases. Comparisons are presented with published experimental data. Some of the data are for a regular roughness geometry with a range of roughness heights, Reynolds numbers, and inlet turbulence intensities. The approach taken in this analysis is to treat the roughness in a statistical sense, consistent with what would be obtained from blades measured after exposure to actual engine environments. An approach is given to determine the equivalent sand grain roughness from the statistics of the regular geometry. This approach is guided by the experimental data. A roughness transition criterion is developed, and comparisons are made with experimental data over the entire range of experimental test conditions. Additional comparisons are made with experimental heat transfer data, where the roughness geometries are both regular as well as statistical. Using the developed analysis, heat transfer calculations are presented for the second stage vane of a high pressure turbine at hypothetical engine conditions.

  18. Practicality of Elementary Statistics Module Based on CTL Completed by Instructions on Using Software R

    NASA Astrophysics Data System (ADS)

    Delyana, H.; Rismen, S.; Handayani, S.

    2018-04-01

    This research is a development study using the 4-D design model (define, design, develop, and disseminate). The define stage comprised analyses of the following needs: syllabus analysis, textbook analysis, student characteristics analysis, and literature analysis. The textbook analysis indicated that students still had difficulty understanding the two required textbooks, that their presentation did not help students find concepts independently, and that the textbooks lacked data-processing guidance using the software R. The developed module was judged valid by the experts. Field trials were then conducted to determine its practicality and effectiveness. The trial was conducted with four randomly selected students of the Mathematics Education Study Program of STKIP PGRI who had not yet taken the Basic Statistics course. The practical aspects considered were ease of use, time efficiency, ease of interpretation, and equivalence, with practicality scores of 3.7, 3.79, 3.7, and 3.78, respectively. Based on the trial results, students considered the module very practical for learning. This means that the developed module can be used by students in Elementary Statistics learning.

  19. 14 CFR 23.621 - Casting factors.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... either magnetic particle, penetrant or other approved equivalent non-destructive inspection method; or... percent approved non-destructive inspection. When an approved quality control procedure is established and an acceptable statistical analysis supports reduction, non-destructive inspection may be reduced from...

  20. 14 CFR 23.621 - Casting factors.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... either magnetic particle, penetrant or other approved equivalent non-destructive inspection method; or... percent approved non-destructive inspection. When an approved quality control procedure is established and an acceptable statistical analysis supports reduction, non-destructive inspection may be reduced from...

  1. 14 CFR 23.621 - Casting factors.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... either magnetic particle, penetrant or other approved equivalent non-destructive inspection method; or... percent approved non-destructive inspection. When an approved quality control procedure is established and an acceptable statistical analysis supports reduction, non-destructive inspection may be reduced from...

  2. 14 CFR 23.621 - Casting factors.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... either magnetic particle, penetrant or other approved equivalent non-destructive inspection method; or... percent approved non-destructive inspection. When an approved quality control procedure is established and an acceptable statistical analysis supports reduction, non-destructive inspection may be reduced from...

  3. A Comparison of Online, Video Synchronous, and Traditional Learning Modes for an Introductory Undergraduate Physics Course

    NASA Astrophysics Data System (ADS)

    Faulconer, E. K.; Griffith, J.; Wood, B.; Acharyya, S.; Roberts, D.

    2018-05-01

    While the equivalence between online and traditional classrooms has been well researched, very little of this research includes college-level introductory physics. Only one study explored physics at the whole-class level rather than specific course components such as a single lab or a homework platform. In this work, we compared the failure rate, grade distribution, and withdrawal rate in an introductory undergraduate physics course across several learning modes, including traditional face-to-face instruction, synchronous video instruction, and online classes. Statistically significant differences were found for student failure rates, grade distribution, and withdrawal rates, but with small effect sizes. Post-hoc pairwise tests were run to determine differences between learning modes. Online students had a significantly lower failure rate than students who took the class via the synchronous video classroom. While statistically significant differences were found for grade distributions, the pairwise comparison yielded no statistically significant differences between learning modes when using the more conservative Bonferroni correction in post-hoc testing. Finally, in this study, student withdrawal rates were lower for students who took the class in person (in-person classroom and synchronous video classroom) than online. Students who persist in an online introductory physics class are more likely to achieve an A than in other modes. However, the withdrawal rate is higher for online physics courses. Further research is warranted to better understand the reasons for higher withdrawal rates in online courses. Finding the root cause to help eliminate differences in student performance across learning modes should remain a high priority for education researchers and the education community as a whole.
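    Post-hoc pairwise testing with a Bonferroni correction, as described above, can be sketched as follows. The failure counts per learning mode are hypothetical, and the study's own analysis may have used a different pairwise test; this is only to illustrate why the corrected threshold is more conservative.

```python
# Pairwise two-proportion z-tests across three learning modes with a
# Bonferroni-corrected significance threshold. Counts are hypothetical.
from math import sqrt, erf
from itertools import combinations

def two_prop_z(f1, n1, f2, n2):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_pool = (f1 + f2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (f1 / n1 - f2 / n2) / se
    # Normal CDF via erf: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# mode: (failures, enrolled) -- hypothetical counts
modes = {"online": (30, 400), "video": (55, 350), "in-person": (60, 500)}
alpha = 0.05 / 3                 # Bonferroni correction for 3 pairwise tests
for (ma, (fa, na)), (mb, (fb, nb)) in combinations(modes.items(), 2):
    pval = two_prop_z(fa, na, fb, nb)
    flag = "significant" if pval < alpha else "n.s."
    print(f"{ma} vs {mb}: p={pval:.4f} ({flag})")
```

    A comparison that clears the uncorrected 0.05 threshold can still fail the Bonferroni-corrected one, which is the pattern the grade-distribution result above shows.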

  4. Experimental and statistical study on fracture boundary of non-irradiated Zircaloy-4 cladding tube under LOCA conditions

    NASA Astrophysics Data System (ADS)

    Narukawa, Takafumi; Yamaguchi, Akira; Jang, Sunghyon; Amaya, Masaki

    2018-02-01

    To estimate the fracture probability of fuel cladding tubes under loss-of-coolant accident conditions in light-water reactors, laboratory-scale integral thermal shock tests were conducted on non-irradiated Zircaloy-4 cladding tube specimens. The resulting binary data on fracture or non-fracture of each cladding tube specimen were then analyzed statistically. A method was proposed to obtain the fracture probability curve as a function of equivalent cladding reacted (ECR) using Bayesian inference for generalized linear models: probit, logit, and log-probit models. Model selection was then performed in terms of physical characteristics and two information criteria, the widely applicable information criterion and the widely applicable Bayesian information criterion. As a result, the log-probit model proved the best of the three models for estimating the fracture probability, in terms of prediction accuracy both for future data and for the true model. Using the log-probit model, it was shown that 20% ECR corresponded to a 5% fracture probability level, with 95% confidence, for the cladding tube specimens.
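    A log-probit dose-response curve of the kind selected here, P(fracture) = Φ(a + b·log ECR), can be sketched with a simple maximum-likelihood fit; the paper itself used Bayesian inference, and the binary observations below are synthetic, not the test data.

```python
# Log-probit fracture-probability sketch: P(fracture) = Phi(a + b*log(ECR)),
# fit by maximum likelihood on synthetic binary data (not the paper's data).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic (ECR %, fractured?) observations
ecr = np.array([8, 10, 12, 15, 18, 20, 22, 25, 28, 30, 35, 40], float)
y   = np.array([0,  0,  0,  0,  0,  0,  1,  0,  1,  1,  1,  1])

def nll(theta):
    """Negative Bernoulli log-likelihood of the log-probit model."""
    a, b = theta
    prob = norm.cdf(a + b * np.log(ecr))
    prob = np.clip(prob, 1e-12, 1 - 1e-12)   # guard log(0)
    return -np.sum(y * np.log(prob) + (1 - y) * np.log(1 - prob))

a, b = minimize(nll, x0=[-5.0, 1.5], method="Nelder-Mead").x
# ECR at a 5% fracture probability: solve Phi(a + b*log(x)) = 0.05
ecr_5pct = np.exp((norm.ppf(0.05) - a) / b)
print(f"estimated 5% fracture-probability ECR: {ecr_5pct:.1f}%")
```

    A Bayesian fit as in the paper would put priors on a and b and report a credible band around the curve rather than a single point estimate.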

  5. Cross-Ethnicity Measurement Equivalence of Family Coping for Breast Cancer Survivors

    ERIC Educational Resources Information Center

    Lim, Jung-won; Townsend, Aloen

    2012-01-01

    Objective: The current study examines the equivalence of a measure of family coping, the Family Crisis Oriented Personal Evaluation scales (F-COPES), in Chinese American and Korean American breast cancer survivors (BCS). Methods: Factor structure and cross-ethnicity equivalence of the F-COPES were tested using structural equation modeling with 157…

  6. Cervical screening programmes: can automation help? Evidence from systematic reviews, an economic analysis and a simulation modelling exercise applied to the UK.

    PubMed

    Willis, B H; Barton, P; Pearmain, P; Bryan, S; Hyde, C

    2005-03-01

    To assess the effectiveness and cost-effectiveness of adding automated image analysis to cervical screening programmes. Searching of all major electronic databases to the end of 2000 was supplemented by a detailed survey for unpublished UK literature. Four systematic reviews were conducted according to recognised guidance. The review of 'clinical effectiveness' included studies assessing reproducibility and impact on health outcomes and processes in addition to evaluations of test accuracy. A discrete event simulation model was developed, although the economic evaluation ultimately relied on a cost-minimisation analysis. The predominant finding from the systematic reviews was the very limited amount of rigorous primary research. None of the included studies refers to the only commercially available automated image analysis device in 2002, the AutoPap Guided Screening (GS) System. The results of the included studies were debatably most compatible with automated image analysis being equivalent in test performance to manual screening. Concerning process, there was evidence that automation does lead to reductions in average slide processing times. In the PRISMATIC trial this was reduced from 10.4 to 3.9 minutes, a statistically significant and practically important difference. The economic evaluation tentatively suggested that the AutoPap GS System may be efficient. The key proviso is that credible data become available to support that the AutoPap GS System has test performance and processing times equivalent to those obtained for PAPNET. The available evidence is still insufficient to recommend implementation of automated image analysis systems. The priority for action remains further research, particularly the 'clinical effectiveness' of the AutoPap GS System. Assessing the cost-effectiveness of introducing automation alongside other approaches is also a priority.

  7. Bladder symptoms assessed with overactive bladder questionnaire in Parkinson's disease.

    PubMed

    Iacovelli, Elisa; Gilio, Francesca; Meco, Giuseppe; Fattapposta, Francesco; Vanacore, Nicola; Brusa, Livia; Giacomelli, Elena; Gabriele, Maria; Rubino, Alfonso; Locuratolo, Nicoletta; Iani, Cesare; Pichiorri, Floriana; Colosimo, Carlo; Carbone, Antonio; Palleschi, Giovanni; Inghilleri, Maurizio

    2010-07-15

    In Parkinson's disease (PD) the urinary dysfunction manifests primarily with symptoms of overactive bladder (OAB). The OAB questionnaire (OAB-q) is a measure designed to assess the impact of OAB symptoms on health-related quality of life. In this study, we quantified the urinary symptoms in a large cohort of PD patients by using the OAB-q short form. Possible correlations between the OAB-q and clinical features were tested. Three hundred and two PD patients were enrolled in the study. Correlations between the OAB-q and sex, age, Unified Parkinson's Disease Rating Scale part III (UPDRS-III), Hoehn-Yahr (H-Y) staging, disease duration, and treatment were analyzed. Data were compared with a large cohort of 303 age-matched healthy subjects. The OAB-q yielded significantly higher scores in PD patients than in healthy subjects. In the group of PD patients, all the variables tested were similar between men and women. Pearson's coefficient showed a significant correlation between mean age, disease duration, mean OAB-q scores, UPDRS-III scores, and H-Y staging. A multiple linear regression analysis showed that OAB-q values were significantly influenced by age and UPDRS-III. No statistical correlations were found between OAB-q scores and drug therapy or the equivalent levodopa dose, whilst the items relating to the nocturia symptoms were significantly associated with the equivalent levodopa dose. Our findings suggest that bladder dysfunction assessed by OAB-q mainly correlates with UPDRS-III scores for severity of motor impairment, possibly reflecting the known role of the decline in nigrostriatal dopaminergic function in bladder dysfunction associated with PD and patients' age. Our study also suggests that the OAB-q is a simple, easily administered test that can objectively evaluate bladder function in patients with PD.

  8. Experimental Study on Fatigue Performance of Foamed Lightweight Soil

    NASA Astrophysics Data System (ADS)

    Qiu, Youqiang; Yang, Ping; Li, Yongliang; Zhang, Liujun

    2017-12-01

    In order to study the fatigue performance of foamed lightweight soil and forecast its fatigue life in the supporting project, beam fatigue tests on foamed lightweight soil were conducted on the basis of preliminary tests using the UTM-100 test system. Fatigue equations for foamed lightweight soil were then obtained by mathematical statistics methods based on the Weibull and lognormal distributions. In addition, according to the traffic load on the real road surface of the supporting project, the fatigue life of foamed lightweight soil was analyzed and compared with the cumulative equivalent axle loads over the design period of the pavement. The results show that although the fatigue life of foamed lightweight soil is scattered, the linear relationship between logarithmic fatigue life and stress ratio holds well. In particular, at a 50% guarantee ratio the fatigue life derived from the Weibull distribution is close to that derived from the lognormal distribution. The results also demonstrate that foamed lightweight soil as a subgrade filler has good fatigue resistance and can be adopted by other projects in similar research domains.
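    A fatigue equation of this kind is typically a linear relationship between logarithmic fatigue life and stress ratio, lg N = a + b·S, fitted by least squares. A minimal sketch with hypothetical stress ratios and cycle counts (not the paper's test data):

```python
# S-N fatigue-equation sketch: fit lg(N) = a + b * S by least squares,
# where S is the stress ratio and N the cycles to failure.
# The data points below are hypothetical, not the paper's results.
import numpy as np

S = np.array([0.60, 0.65, 0.70, 0.75, 0.80])        # stress ratio
N = np.array([2.0e6, 6.5e5, 2.1e5, 7.0e4, 2.3e4])   # cycles to failure
b, a = np.polyfit(S, np.log10(N), 1)                 # slope, intercept
print(f"lg N = {a:.2f} {b:+.2f} * S")
# Mean-fit (roughly 50% guarantee ratio) life prediction at S = 0.72:
print(f"predicted life at S=0.72: {10 ** (a + b * 0.72):.3g} cycles")
```

    The mean fit corresponds roughly to a 50% guarantee ratio; a design equation at a higher guarantee ratio would shift the line down by a quantile of the assumed Weibull or lognormal scatter.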

  9. Cost of opioid-treated chronic low back pain: Findings from a pilot randomized controlled trial of mindfulness meditation-based intervention.

    PubMed

    Zgierska, Aleksandra E; Ircink, James; Burzinski, Cindy A; Mundt, Marlon P

    Opioid-treated chronic low back pain (CLBP) is debilitating, costly, and often refractory to existing treatments. This secondary analysis aims to pilot-test the hypothesis that mindfulness meditation (MM) can reduce the economic burden related to opioid-treated CLBP. Twenty-six-week unblinded pilot randomized controlled trial, comparing MM, adjunctive to usual care, to usual care alone. Outpatient. Thirty-five adults with opioid-treated CLBP (≥30 morphine-equivalent mg/day) for 3+ months were enrolled; none withdrew. Eight weekly therapist-led MM sessions and at-home practice. Costs related to self-reported healthcare utilization, medication use (direct costs), lost productivity (indirect costs), and total costs (direct + indirect costs) were calculated for the 6-month pre-enrollment and post-enrollment periods and compared within and between the groups. Participants (21 MM; 14 control) were 20 percent men, age 51.8 ± 9.7 years, with severe disability, an opioid dose of 148.3 ± 129.2 morphine-equivalent mg/d, and an individual annual income of $18,291 ± $19,345. At baseline, total costs were estimated at $15,497 ± 13,677 (direct: $10,635 ± 9,897; indirect: $4,862 ± 7,298) per participant. Although MM group participants, compared to controls, reduced their pain severity ratings and pain sensitivity to heat stimuli (p < 0.05), no statistically significant within-group changes or between-group differences in direct and indirect costs were noted. Adults with opioid-treated CLBP experience a high burden of disability despite the high costs of treatment. Although this pilot study did not show a statistically significant impact of MM on costs related to opioid-treated CLBP, MM can improve clinical outcomes and should be assessed in a larger trial with long-term follow-up.

  10. "Galileo Airborne Test Of Equivalence"-Gate

    NASA Astrophysics Data System (ADS)

    Nobili, A. M.; Unnikrishnan, C. S.; Suresh, D.

    A differential Galileo-type mass dropping experiment named GAL was proposed at the University of Pisa in 1986 and completed at CERN in 1992 (Carusotto et al., PRL 69, 1722) in order to test the Equivalence Principle by testing the Universality of Free Fall. The free-falling mass was a disk made of two half disks of different composition; a violation of equivalence would produce an angular acceleration of the disk around its symmetry axis, which was measured with a modified Michelson interferometer. GATE, "Galileo Airborne Test of Equivalence", is a variant of that experiment to be performed in parabolic flight on board the "Airbus A300 Zero-g" aircraft of the European Space Agency (ESA). The main advantages of GATE with respect to GAL are the longer time of free fall and the absence of weight in the final stage of unlocking. The longer time of fall makes the signal stronger (the signal grows quadratically with the time of fall); unlocking at zero-g can significantly reduce spurious angular accelerations of the disk due to inevitable imperfections in the locking/unlocking mechanism, which turned out to be the limiting factor in GAL. A preliminary estimate indicates that GATE should be able to achieve a sensitivity η ≡ Δg/g ≃ 10^-13, an improvement of about 3 orders of magnitude with respect to GAL and of about 1 order of magnitude with respect to the best result obtained with a slowly rotating torsion balance by the "Eöt-Wash" group at the University of Washington. Ground tests of the read-out and of the locking/unlocking disturbances can be carried out prior to the aircraft experiment. Locking/unlocking tests, retrieval tests, as well as tests of the aircraft environment can be performed on board the Airbus A-300 in preparation for the actual experiment.
The GATE experiment can be viewed as an Equivalence Principle test of intermediate sensitivity between torsion balance ground tests (10-12), balloon or micro-satellite (150 kg) tests (GREAT and μ SCOPE: ≃ 10-15), small-satellite (300 kg) room temperature tests (GG: ≃ 10-17), large-satellite (1 ton) cryogenic tests (STEP: ≃ 10-18)
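The quadratic growth of the signal with free-fall time can be illustrated with a minimal sketch; the fall durations below are assumptions for illustration, not figures from the record:

```python
# For a given equivalence violation, the disk's rotation angle accumulated
# during free fall grows quadratically with the fall time, so a longer fall
# yields a quadratically stronger signal.

def signal_gain(t_long, t_short):
    """Ratio of accumulated rotation angles for two free-fall durations."""
    return (t_long / t_short) ** 2

t_drop = 1.0       # s, assumed duration of a ground-based drop (GAL-like)
t_parabola = 20.0  # s, assumed weightless phase of one A300 parabola

gain = signal_gain(t_parabola, t_drop)
print(f"signal gain from the longer fall: {gain:.0f}x")  # 400x for these assumed times
```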

  11. Vacuum Microelectronic Field Emission Array Devices for Microwave Amplification.

    NASA Astrophysics Data System (ADS)

    Mancusi, Joseph Edward

    This dissertation presents the design, analysis, and measurement of vacuum microelectronic devices which use field emission to extract an electron current from arrays of silicon cones. The arrays of regularly spaced silicon cones, the field emission cathodes or emitters, are fabricated with an integrated gate electrode which controls the electric field at the tip of the cone, and thus the electron current. An anode or collector electrode is placed above the array to collect the emission current. These arrays, which are fabricated in a standard silicon processing facility, are developed for use as high-power microwave amplifiers. Field emission has been studied extensively since it was first characterized in 1928; however, because of the large electric fields required, practical field emission devices are difficult to make. With the development of the semiconductor industry came the development of fabrication equipment and techniques which allow for the manufacture of the precision micron-scale structures necessary for practical field emission devices. The active region of a field emission device is a vacuum; therefore, electron travel is ballistic. This analysis of field emission devices includes electric field and electron emission modeling, development of a device equivalent circuit, analysis of the parameters in the equivalent circuit, and device testing. Variations in device structure are taken into account using a statistical model based upon device measurements. Measurements of silicon field emitter arrays at DC and RF are presented and analyzed. In this dissertation, the equivalent circuit is developed from the analysis of the device structure. The circuit parameters are calculated from geometrical considerations and material properties, or are determined from device measurements. It is necessary to include the emitter resistance in the equivalent circuit model since relatively high-resistivity silicon wafers are used.
As is demonstrated, the circuit model accurately predicts the magnitude of the emission current at a number of typical bias current levels when the device is operating at frequencies within the range of 10 MHz to 1 GHz. At low frequencies and at high frequencies within this range, certain parameters are negligible, and simplifications may be made in the equivalent circuit model.

  12. Testing equivalence of mediating models of income, parenting, and school readiness for white, black, and Hispanic children in a national sample.

    PubMed

    Raver, C Cybele; Gershoff, Elizabeth T; Aber, J Lawrence

    2007-01-01

    This paper examines complex models of the associations between family income, material hardship, parenting, and school readiness among White, Black, and Hispanic 6-year-olds, using the Early Childhood Longitudinal Study-Kindergarten Cohort (ECLS-K). It is critical to test the universality of such complex models, particularly given their implications for intervention, prevention, and public policy. Therefore this study asks: Do measures and models of low income and early school readiness indicators fit differently or similarly for White, Black, and Hispanic children? Measurement equivalence of material hardship, parent stress, parenting behaviors, child cognitive skills, and child social competence is first tested. Model equivalence is then tested by examining whether category membership in a race/ethnic group moderates associations between predictors and young children's school readiness.

  13. Testing Equivalence of Mediating Models of Income, Parenting, and School Readiness for White, Black, and Hispanic Children in a National Sample

    PubMed Central

    Raver, C. Cybele; Gershoff, Elizabeth T.; Aber, J. Lawrence

    2010-01-01

    This paper examines complex models of the associations between family income, material hardship, parenting, and school readiness among White, Black, and Hispanic 6-year-olds, using the Early Childhood Longitudinal Study-Kindergarten Cohort (ECLS-K). It is critical to test the universality of such complex models, particularly given their implications for intervention, prevention, and public policy. Therefore this study asks: Do measures and models of low income and early school readiness indicators fit differently or similarly for White, Black, and Hispanic children? Measurement equivalence of material hardship, parent stress, parenting behaviors, child cognitive skills, and child social competence is first tested. Model equivalence is then tested by examining whether category membership in a race/ethnic group moderates associations between predictors and young children's school readiness. PMID:17328695

  14. Effects of metacognitive instruction on the academic achievement of students in the secondary sciences

    NASA Astrophysics Data System (ADS)

    Bianchi, Gregory A.

    The purpose of this study was to investigate the effects of reflective assessment in the form of situated metacognitive prompts on student achievement in the secondary sciences. A second goal was to determine whether specific gender differences existed in terms of student responsiveness to the metacognitive interventions. Participants in the study consisted of a convenience sample from a population of ninth-grade honors biology students in a large suburban school district located near Seattle, Washington. Beyond answering the specific research questions raised in this study, an additional aim was to broaden the growing body of research pertaining to the effect of metacognition on student achievement. A quasi-experimental, non-equivalent control group design was employed in this study. Descriptive and inferential statistics were computed to address the specific research questions raised. Specifically, a three-way repeated-measures ANOVA was performed. For this purpose, a single within-subjects factor, termed Testing, was defined. Three levels were allocated to this factor, and quantitative data from the Pretest, Posttest, and Retention Test were assigned to the levels, respectively. Group and Gender were defined as between-subjects factors, and both were allocated two levels; the two Group levels were Reflective and Non-Reflective. The effects of Group and Gender on each of the three quantitative measures were examined singly and in interaction with each other. Tests of statistical significance were evaluated at the .05 level. There was a statistically significant effect for Group (Reflective, Non-Reflective) by Testing (Pretest, Posttest, Retention Test). A three-way repeated-measures ANOVA procedure revealed that students in the Reflective group outperformed students in the Non-Reflective group (F = 10.258, p = .002, partial η² = .088).
According to the effect size estimate, almost 9% of the variance in the Testing variable was attributable to the Group variable. There was not a significant interaction effect for Gender. A three-way repeated-measures ANOVA procedure revealed that the Testing × Group × Gender interaction did not yield a statistically significant F ratio (F = 1.471, p = .228, partial η² = .014). Students in the Reflective group outperformed students in the Non-Reflective group, regardless of gender. The findings of this study offer modest evidence that reflective assessment in the form of situated metacognitive prompts may improve student academic outcomes at the secondary level. This study failed to provide a significant finding regarding gender-related variation in a metacognitive learning cycle.
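The reported effect size can be checked against the F ratio: partial η² equals F·df_effect / (F·df_effect + df_error). A minimal sketch, assuming df_effect = 1 and df_error = 106 for illustration (the record does not report the error degrees of freedom):

```python
# Partial eta squared recovered from a reported F ratio and its
# degrees of freedom: eta_p^2 = (F * df_effect) / (F * df_effect + df_error).

def partial_eta_squared(f_ratio, df_effect, df_error):
    return (f_ratio * df_effect) / (f_ratio * df_effect + df_error)

# Reported F = 10.258; with assumed df_effect = 1 and df_error = 106,
# this reproduces the published partial eta squared of about .088.
print(round(partial_eta_squared(10.258, 1, 106), 3))  # 0.088
```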

  15. Development, equivalence study, and normative data of version B of the Spanish-language Free and Cued Selective Reminding Test.

    PubMed

    Grau-Guinea, L; Pérez-Enríquez, C; García-Escobar, G; Arrondo-Elizarán, C; Pereira-Cutiño, B; Florido-Santiago, M; Piqué-Candini, J; Planas, A; Paez, M; Peña-Casanova, J; Sánchez-Benavides, G

    2018-05-08

    The Free and Cued Selective Reminding Test (FCSRT) is widely used for the assessment of verbal episodic memory, mainly in patients with Alzheimer disease. A Spanish version of the FCSRT and normative data were developed within the NEURONORMA project. Availability of alternative, equivalent versions is useful for following patients up in clinical settings. This study aimed to develop an alternative version of the original FCSRT (version B) and to study its equivalence to the original Spanish test (version A), and its performance in a sample of healthy individuals, in order to develop reference data. We evaluated 232 healthy participants of the NEURONORMA-Plus project, aged between 18 and 90. Thirty-three participants were assessed with both versions using a counterbalanced design. High intra-class correlation coefficients (between 0.8 and 0.9) were observed in the equivalence study. While no significant differences in performance were observed in total recall scores, free recall scores were significantly lower for version B. These preliminary results suggest that the newly developed FCSRT version B is equivalent to version A in the main variables tested. Further studies are necessary to ensure interchangeability between versions. We provide normative data for the new version. Copyright © 2018 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
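The alternate-forms agreement reported here rests on intra-class correlation; below is a minimal sketch of a consistency-type ICC(3,1) computed from a two-way ANOVA decomposition. The scores are invented, and the specific ICC formulation is an assumption (the record reports only that coefficients fell between 0.8 and 0.9):

```python
import numpy as np

def icc_3_1(scores):
    """Consistency-type ICC(3,1) for an (n_subjects, k_forms) score array."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between forms
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Perfect consistency up to an additive shift between the two versions:
forms = [[10, 12], [20, 22], [30, 32]]
print(round(icc_3_1(forms), 3))  # 1.0
```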

  16. Consequences of Violated Equating Assumptions under the Equivalent Groups Design

    ERIC Educational Resources Information Center

    Lyren, Per-Erik; Hambleton, Ronald K.

    2011-01-01

    The equal ability distribution assumption associated with the equivalent groups equating design was investigated in the context of a selection test for admission to higher education. The purpose was to assess the consequences for the test-takers in terms of receiving improperly high or low scores compared to their peers, and to find strong…

  17. Comparing fire spread algorithms using equivalence testing and neutral landscape models

    Treesearch

    Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson

    2009-01-01

    We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...

  18. 76 FR 50220 - Availability of Draft ICCVAM Recommendations on Using Fewer Animals to Identify Chemical Eye...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-12

    ... Fewer Animals to Identify Chemical Eye Hazards: Revised Criteria Necessary to Maintain Equivalent Hazard... criteria using results from 3-animal tests that would provide eye hazard classification equivalent to... least 1 positive animal in a 3-animal test to identify eye hazards will provide the same or greater...

  19. Processing Maple Syrup with a Vapor Compression Distiller: An Economic Analysis

    Treesearch

    Lawrence D. Garrett

    1977-01-01

    A test of vapor compression distillers for processing maple syrup revealed that: (1) vapor compression equipment tested evaporated 1 pound of water with .047 pounds of steam equivalent (electrical energy); open-pan evaporators of similar capacity required 1.5 pounds of steam equivalent (oil energy) to produce 1 pound of water; (2) vapor compression evaporation produced...

  20. Testing Response-Stimulus Equivalence Relations Using Differential Responses as a Sample

    ERIC Educational Resources Information Center

    Shimizu, Hirofumi

    2006-01-01

    This study tested the notion that an equivalence relation may include a response when differential responses are paired with stimuli presented during training. Eight normal adults learned three kinds of computer mouse movements as differential response topographies (R1, R2, and R3). Next, in matching-to-sample training, one of the response…

  1. Dual control and prevention of the turn-off phenomenon in a class of mimo systems

    NASA Technical Reports Server (NTRS)

    Mookerjee, P.; Bar-Shalom, Y.; Molusis, J. A.

    1985-01-01

    A recently developed methodology of adaptive dual control based upon sensitivity functions is applied here to a multivariable input-output model. The plant has constant but unknown parameters. It represents a simplified linear version of the relationship between the vibration output and the higher harmonic control input for a helicopter. The cautious and the new dual controller are examined. In many instances, the cautious controller is seen to turn off. The new dual controller modifies the cautious control design by numerator and denominator correction terms which depend upon the sensitivity functions of the expected future cost and avoids the turn-off and burst phenomena. Monte Carlo simulations and statistical tests of significance indicate the superiority of the dual controller over the cautious and the heuristic certainty equivalence controllers.

  2. Surface changes of enamel after brushing with charcoal toothpaste

    NASA Astrophysics Data System (ADS)

    Pertiwi, U. I.; Eriwati, Y. K.; Irawan, B.

    2017-08-01

    The aim of this study was to determine the surface roughness changes of tooth enamel after brushing with charcoal toothpaste. Thirty specimens were brushed using distilled water (the first group), Strong® Formula toothpaste (the second group), and Charcoal® Formula toothpaste for four minutes and 40 seconds (equivalent to one month) and for 14 minutes (equivalent to three months) using a soft fleece toothbrush with a mass of 150 g. The roughness was measured using a surface roughness tester, and the results were tested with a repeated-measures ANOVA and a one-way ANOVA. The value of the surface roughness of tooth enamel was significantly different (p<0.05) after brushing for an equivalent of one month and an equivalent of three months. Using toothpaste containing charcoal can increase the surface roughness of tooth enamel.

  3. Criterion 6, indicator 30 : value and volume in round wood equivalents of exports and imports of wood products

    Treesearch

    James Howard; Rebecca Westby; Kenneth Skog

    2010-01-01

    This report provides a wide range of specific and statistical information on forest products markets in terms of production, trade, prices and consumption, employment, and other factors influencing forest sustainability.

  4. Effectiveness of maternal counseling in reducing caries in Cree children.

    PubMed

    Harrison, R L; Veronneau, J; Leroux, B

    2012-11-01

    This cluster-randomized pragmatic (effectiveness) trial tested maternal counseling based on Motivational Interviewing (MI) as an approach to control caries in indigenous children. Nine Cree communities in Quebec, Canada were randomly allocated to test or control. MI-style counseling was delivered in test communities to mothers during pregnancy and at well-baby visits. Data on outcomes were collected when children were 30 months old. Two hundred seventy-two mothers were recruited from the 5 test and 4 control communities. Baseline characteristics were comparable but not equivalent for both groups. At trial's end, 241 children had follow-up. The primary analysis outcome was enamel caries with substance loss (d2); no statistically significant treatment effect was detected. Prevalence of treated and untreated caries at the d2 level was 76% in controls vs. 65% in test (p = 0.17). Exploratory analyses suggested a substantial preventive effect for untreated decay at or beyond the level of the dentin, d3 (prevalences: 60% controls vs. 35% test), and a particularly large treatment effect when mothers had 4 or more MI-style sessions. Overall, these results provide preliminary evidence that, for these young, indigenous children, an MI-style intervention has an impact on severity of caries (clinical trial registration ISRCTN41467632).

  5. Exchanging the liquidity hypothesis: Delay discounting of money and self-relevant non-money rewards.

    PubMed

    Stuppy-Sullivan, Allison M; Tormohlen, Kayla N; Yi, Richard

    2016-01-01

    Evidence that primary rewards (e.g., food and drugs of abuse) are discounted more than money is frequently attributed to money's high degree of liquidity, or exchangeability for many commodities. The present study provides some evidence against this liquidity hypothesis by contrasting delay discounting of monetary rewards (liquid) and non-monetary commodities (non-liquid) that are self-relevant and utility-matched. Ninety-seven (97) undergraduate students initially completed a conventional binary-choice delay discounting of money task. Participants returned one week later and completed a self-relevant commodity delay discounting task. Both conventional hypothesis testing and more-conservative tests of statistical equivalence revealed correspondence in rate of delay discounting of money and self-relevant commodities, and in one magnitude condition, less discounting for the latter. The present results indicate that liquidity of money cannot fully account for the lower rate of delay discounting compared to non-money rewards. Copyright © 2015 Elsevier B.V. All rights reserved.
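The "more-conservative tests of statistical equivalence" mentioned here are in the spirit of the two one-sided tests (TOST) procedure. A minimal sketch for paired data follows; the discounting-rate data and the equivalence margin are invented for illustration:

```python
import numpy as np
from scipy import stats

def tost_paired(x, y, margin):
    """TOST for paired samples: equivalence within +/- margin on the mean difference."""
    d = np.asarray(x, float) - np.asarray(y, float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + margin) / se   # tests H0: mean difference <= -margin
    t_upper = (d.mean() - margin) / se   # tests H0: mean difference >= +margin
    p_lower = stats.t.sf(t_lower, n - 1)
    p_upper = stats.t.cdf(t_upper, n - 1)
    return max(p_lower, p_upper)         # equivalence declared if this is < alpha

# Invented discounting rates for money and a matched commodity:
rng = np.random.default_rng(0)
money = rng.normal(0.0, 0.2, 40)
commodity = money + rng.normal(0.0, 0.1, 40)
print(tost_paired(money, commodity, margin=0.5) < 0.05)  # True: equivalent within margin
```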

  6. Beyond statistical inference: a decision theory for science.

    PubMed

    Killeen, Peter R

    2006-08-01

    Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests--which place all value on the replicability of an effect and none on its magnitude--as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute.

  7. Lot-to-lot consistency of a tetravalent dengue vaccine in healthy adults in Australia: a randomised study.

    PubMed

    Torresi, Joseph; Heron, Leon G; Qiao, Ming; Marjason, Joanne; Chambonneau, Laurent; Bouckenooghe, Alain; Boaz, Mark; van der Vliet, Diane; Wallace, Derek; Hutagalung, Yanee; Nissen, Michael D; Richmond, Peter C

    2015-09-22

    The recombinant yellow fever-17D-dengue virus, live, attenuated, tetravalent dengue vaccine (CYD-TDV) has undergone extensive clinical trials. Here safety and consistency of immunogenicity of phase III manufacturing lots of CYD-TDV were evaluated and compared with a phase II lot and placebo in a dengue-naïve population. Healthy 18-60 year-olds were randomly assigned in a 3:3:3:3:1 ratio to receive three subcutaneous doses of either CYD-TDV from any one of three phase III lots or a phase II lot, or placebo, respectively, in a 0, 6, 12 month dosing schedule. Neutralising antibody geometric mean titres (PRNT50 GMTs) for each of the four dengue serotypes were compared in sera collected 28 days after the third vaccination; equivalence among lots was demonstrated if the lower and upper limits of the two-sided 95% CIs of the GMT ratio were ≥0.5 and ≤2.0, respectively. 712 participants received vaccine or placebo and 614 (86%) completed the study; 17 (2.4%) participants withdrew after adverse events. Equivalence of phase III lots was demonstrated for 11 of 12 pairwise comparisons. One of three comparisons for serotype 2 was not statistically equivalent. GMTs for serotype 2 in phase III lots were close to each other (65.9, 44.1 and 58.1, respectively). Phase III lots can be produced in a consistent manner with predictable immune response and acceptable safety profile similar to previously characterised phase II lots. The phase III lots may be considered as not clinically different as statistical equivalence was shown for serotypes 1, 3 and 4 across the phase III lots. For serotype 2, although equivalence was not shown between two lots, the GMTs observed in the phase III lots were consistently higher than those for the phase II lot. As such, in our view, biological equivalence for all serotypes was demonstrated. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
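The lot-consistency criterion described here can be computed directly: the two-sided 95% CI for the ratio of geometric mean titres must lie within [0.5, 2.0]. A minimal sketch on the log-titre scale, assuming approximately normal log-titres; the GMT values echo two of the serotype 2 figures quoted, but the standard error is invented:

```python
import math

def gmt_ratio_ci(log_mean_a, log_mean_b, se_diff, z=1.96):
    """95% CI for the ratio of geometric means, computed on the log scale."""
    diff = log_mean_a - log_mean_b
    return math.exp(diff - z * se_diff), math.exp(diff + z * se_diff)

def lots_equivalent(ci, lower=0.5, upper=2.0):
    """Equivalence is declared when the whole CI sits inside [lower, upper]."""
    return ci[0] >= lower and ci[1] <= upper

# Hypothetical lots with GMTs of 65.9 and 58.1 and an assumed SE of the
# log-ratio of 0.15:
ci = gmt_ratio_ci(math.log(65.9), math.log(58.1), se_diff=0.15)
print(ci, lots_equivalent(ci))
```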

  8. Probability of Equivalence Formation: Familiar Stimuli and Training Sequence

    ERIC Educational Resources Information Center

    Arntzen, Erik

    2004-01-01

    The present study was conducted to show how responding in accord with equivalence relations changes as a function of position of familiar stimuli, pictures, and with the use of nonsense syllables in an MTO-training structure. Fifty college students were tested for responding in accord with equivalence in an AB, CB, DB, and EB training structure.…

  9. Comparison of a Stimulus Equivalence Protocol and Traditional Lecture for Teaching Single-Subject Designs

    ERIC Educational Resources Information Center

    Lovett, Sadie; Rehfeldt, Ruth Anne; Garcia, Yors; Dunning, Johnna

    2011-01-01

    This study compared the effects of a computer-based stimulus equivalence protocol to a traditional lecture format in teaching single-subject experimental design concepts to undergraduate students. Participants were assigned to either an equivalence or a lecture group, and performance on a paper-and-pencil test that targeted relations among the…

  10. Equivalence in Symbolic and Nonsymbolic Contexts: Benefits of Solving Problems with Manipulatives

    ERIC Educational Resources Information Center

    Sherman, Jody; Bisanz, Jeffrey

    2009-01-01

    Children's failure on equivalence problems (e.g., 5 + 4 = 7 + __) is believed to be the result of misunderstanding the equal sign and has been tested using symbolic problems (including "="). For Study 1 (N = 48), we designed a nonsymbolic method for presenting equivalence problems to determine whether Grade 2 children's difficulty is due…

  11. In vitro 3D full thickness skin equivalent tissue model using silk and collagen biomaterials

    PubMed Central

    Bellas, Evangelia; Seiberg, Miri; Garlick, Jonathan; Kaplan, David L.

    2013-01-01

    Current approaches to develop skin equivalents often only include the epidermal and dermal components. Yet, full thickness skin includes the hypodermis, a layer below the dermis of adipose tissue containing vasculature, nerves and fibroblasts, necessary to support the epidermis and dermis. In the present study, we developed a full thickness skin equivalent including an epidermis, dermis and hypodermis that could serve as an in vitro model for studying skin development, disease or as a platform for consumer product testing as a means to avoid animal testing. The full thickness skin equivalent was easy to handle and was maintained in culture for greater than 14 days while expressing physiologically relevant morphologies of both the epidermis and dermis, as seen by keratin 10, collagen I and collagen IV expression. The skin equivalent produced glycerol and leptin, markers of adipose tissue metabolism. This work serves as a foundation for our understanding of some of the necessary factors needed to develop a stable, functional model of full-thickness skin. PMID:23161763

  12. Measurement equivalence of the German Job Satisfaction Survey used in a multinational organization: implications of Schwartz's culture model.

    PubMed

    Liu, Cong; Borg, Ingwer; Spector, Paul E

    2004-12-01

    The authors tested measurement equivalence of the German Job Satisfaction Survey (GJSS) using structural equation modeling methodology. Employees from 18 countries and areas provided data on 5 job satisfaction facets. The effects of language and culture on measurement equivalence were examined. A cultural distance hypothesis, based on S. H. Schwartz's (1999) theory, was tested with 4 cultural groups: West Europe, English speaking, Latin America, and Far East. Findings indicated the robustness of the GJSS in terms of measurement equivalence across countries. The survey maintained high transportability across countries speaking the same language and countries sharing similar cultural backgrounds. Consistent with Schwartz's model, a cultural distance effect on scale transportability among scales used in maximally dissimilar cultures was detected. Scales used in the West Europe group showed greater equivalence to scales used in the English-speaking and Latin America groups than scales used in the Far East group. 2004 APA, all rights reserved

  13. Dose Equivalents for Antipsychotic Drugs: The DDD Method.

    PubMed

    Leucht, Stefan; Samara, Myrto; Heres, Stephan; Davis, John M

    2016-07-01

    Dose equivalents of antipsychotics are an important but difficult to define concept, because all methods have weaknesses and strengths. We calculated dose equivalents based on defined daily doses (DDDs) presented by the World Health Organisation's Collaborative Center for Drug Statistics Methodology. Doses equivalent to 1 mg olanzapine, 1 mg risperidone, 1 mg haloperidol, and 100 mg chlorpromazine were presented and compared with the results of 3 other methods to define dose equivalence (the "minimum effective dose method," the "classical mean dose method," and an international consensus statement). We presented dose equivalents for 57 first-generation and second-generation antipsychotic drugs, available as oral, parenteral, or depot formulations. Overall, the identified equivalent doses were comparable with those of the other methods, but there were also outliers. The major strength of this method of defining dose equivalence is that DDDs are available for most drugs, including old antipsychotics, that they are based on a variety of sources, and that DDDs are an internationally accepted measure. The major limitations are that the information used to estimate DDDs is likely to differ between the drugs. Moreover, this information is not publicly available, so that it cannot be reviewed. The WHO stresses that DDDs are mainly a standardized measure of drug consumption, and their use as a measure of dose equivalence can therefore be misleading. We, therefore, recommend that if alternative, more "scientific" dose equivalence methods are available for a drug they should be preferred to DDDs. Moreover, our summary can be a useful resource for pharmacovigilance studies. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.
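The DDD method reduces to a simple ratio: the dose of drug B equivalent to a dose of drug A is dose_A × DDD_B / DDD_A. A minimal sketch follows; the DDD values in the table are illustrative assumptions, and authoritative values should be taken from the WHO Collaborating Centre for Drug Statistics Methodology:

```python
# Assumed oral DDDs in mg/day, for illustration only.
DDD_MG = {
    "olanzapine": 10.0,
    "risperidone": 5.0,
    "haloperidol": 8.0,
    "chlorpromazine": 300.0,
}

def equivalent_dose(dose_mg, drug_from, drug_to):
    """Dose of drug_to occupying the same fraction of its DDD as dose_mg of drug_from."""
    return dose_mg * DDD_MG[drug_to] / DDD_MG[drug_from]

# Under these assumed DDDs, 1 mg olanzapine corresponds to 30 mg chlorpromazine:
print(equivalent_dose(1.0, "olanzapine", "chlorpromazine"))  # 30.0
```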

  14. Mars seasonal polar caps as a test of the equivalence principle

    NASA Astrophysics Data System (ADS)

    Rubincam, David Parry

    2011-08-01

    The seasonal polar caps of Mars can be used to test the equivalence principle in general relativity. The north and south caps, which are composed of carbon dioxide, wax and wane with the seasons. If the ratio of the inertial (passive) to gravitational (active) masses of the caps differs from the same ratio for the rest of Mars, then the equivalence principle fails, Newton’s third law fails, and the caps will pull Mars one way and then the other with a force aligned with the planet’s spin axis. This leads to a secular change in Mars’s along-track position in its orbit about the Sun, and to a secular change in the orbit’s semimajor axis. The caps are a poor Eötvös test of the equivalence principle, being 4 orders-of-magnitude weaker than laboratory tests and 7 orders-of-magnitude weaker than that found by lunar laser ranging; the reason is the small mass of the caps compared to Mars as a whole. The principal virtue of using Mars is that the caps contain carbon, an element not normally considered in such experiments. The Earth with its seasonal snow cover can also be used for a similar test.

  15. Development of a traffic noise prediction model for an urban environment.

    PubMed

    Sharma, Asheesh; Bodhe, G L; Schimak, G

    2014-01-01

    The objective of this study is to develop a traffic noise model under diverse traffic conditions in metropolitan cities. The model has been developed to calculate equivalent traffic noise based on four input variables: equivalent traffic flow (Qe), equivalent vehicle speed (Se), distance (d), and honking (h). The traffic data is collected and statistically analyzed in three different cases over 15-min intervals during morning and evening rush hours. Case I represents congested traffic where equivalent vehicle speed is <30 km/h, case II represents free-flowing traffic where equivalent vehicle speed is >30 km/h, and case III represents calm traffic where no honking is recorded. The noise model showed better results than an earlier noise model developed for Indian traffic conditions. A comparative assessment between the present and the earlier noise model has also been presented in the study. The model is validated with measured noise levels, and the correlation coefficients between measured and predicted noise levels were found to be 0.75, 0.83, and 0.86 for cases I, II, and III, respectively. The noise model performs reasonably well under different traffic conditions and could be implemented for traffic noise prediction at other regions as well.
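A model of the form described, Leq = f(Qe, Se, d, h), can be sketched with ordinary least squares; the functional form chosen below (linear in log10 Qe, Se, log10 d, and h) and the synthetic data are assumptions for illustration, not the published model:

```python
import numpy as np

# Synthetic 15-min traffic observations (all values invented):
rng = np.random.default_rng(1)
n = 200
Qe = rng.uniform(100, 3000, n)   # equivalent traffic flow, veh/h
Se = rng.uniform(10, 60, n)      # equivalent vehicle speed, km/h
d = rng.uniform(5, 50, n)        # distance to receiver, m
h = rng.integers(0, 30, n)       # honks per 15-min interval
Leq = 55 + 10*np.log10(Qe) + 0.05*Se - 8*np.log10(d) + 0.1*h + rng.normal(0, 1, n)

# Fit the assumed model by least squares and compare measured vs predicted:
X = np.column_stack([np.log10(Qe), Se, np.log10(d), h, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, Leq, rcond=None)
pred = X @ coef
r = np.corrcoef(Leq, pred)[0, 1]
print(f"correlation between measured and predicted Leq: {r:.2f}")
```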

  16. Statistics of Data Fitting: Flaws and Fixes of Polynomial Analysis of Channeled Spectra

    NASA Astrophysics Data System (ADS)

    Karstens, William; Smith, David

    2013-03-01

    Starting from general statistical principles, we have critically examined Baumeister's procedure* for determining the refractive index of thin films from channeled spectra. Briefly, the method assumes that the index and interference fringe order may be approximated by polynomials quadratic and cubic in photon energy, respectively. The coefficients of the polynomials are related by differentiation, which is equivalent to comparing energy differences between fringes. However, we find that when the fringe order is calculated from the published IR index for silicon* and then analyzed with Baumeister's procedure, the results do not reproduce the original index. This problem has been traced to 1. Use of unphysical powers in the polynomials (e.g., time-reversal invariance requires that the index is an even function of photon energy), and 2. Use of insufficient terms of the correct parity. Exclusion of unphysical terms and addition of quartic and quintic terms to the index and order polynomials yields significantly better fits with fewer parameters. This represents a specific example of using statistics to determine if the assumed fitting model adequately captures the physics contained in experimental data. The use of analysis of variance (ANOVA) and the Durbin-Watson statistic to test criteria for the validity of least-squares fitting will be discussed. *D.F. Edwards and E. Ochoa, Appl. Opt. 19, 4130 (1980). Supported in part by the US Department of Energy, Office of Nuclear Physics under contract DE-AC02-06CH11357.
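The parity argument can be demonstrated directly: fit a synthetic even-in-energy index with an even-power basis, then check the residuals for systematic lack of fit via the Durbin-Watson statistic. The index model below is invented, not the silicon data of the cited paper:

```python
import numpy as np

# Synthetic refractive index, even in photon energy E by construction:
rng = np.random.default_rng(2)
E = np.linspace(0.5, 2.0, 60)                  # photon energy, eV
n_true = 3.42 + 0.05 * E**2 + 0.01 * E**4
n_obs = n_true + rng.normal(0, 1e-3, E.size)   # add small measurement noise

# Even-parity fit with basis {1, E^2, E^4} only:
X_even = np.column_stack([np.ones_like(E), E**2, E**4])
c_even, *_ = np.linalg.lstsq(X_even, n_obs, rcond=None)
resid = n_obs - X_even @ c_even

# Durbin-Watson statistic on the residuals: values near 2 suggest
# uncorrelated residuals, i.e. no systematic lack of fit.
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print(f"Durbin-Watson: {dw:.2f}")
```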

  17. Tract-Based Spatial Statistics in Preterm-Born Neonates Predicts Cognitive and Motor Outcomes at 18 Months.

    PubMed

    Duerden, E G; Foong, J; Chau, V; Branson, H; Poskitt, K J; Grunau, R E; Synnes, A; Zwicker, J G; Miller, S P

    2015-08-01

    Adverse neurodevelopmental outcome is common in children born preterm. Early sensitive predictors of neurodevelopmental outcome such as MR imaging are needed. Tract-based spatial statistics, a diffusion MR imaging analysis method, performed at term-equivalent age (40 weeks) is a promising predictor of neurodevelopmental outcomes in children born very preterm. We sought to determine the association of tract-based spatial statistics findings before term-equivalent age with neurodevelopmental outcome at 18 months' corrected age. Of 180 neonates (born at 24-32 weeks' gestation) enrolled, 153 had DTI acquired early at 32 weeks' postmenstrual age and 105 had DTI acquired later at 39.6 weeks' postmenstrual age. Voxelwise statistics were calculated by performing tract-based spatial statistics on DTI that was aligned to age-appropriate templates. At 18 months' corrected age, 166 neonates underwent neurodevelopmental assessment by using the Bayley Scales of Infant Development, 3rd ed, and the Peabody Developmental Motor Scales, 2nd ed. Tract-based spatial statistics analysis applied to early-acquired scans (postmenstrual age of 30-33 weeks) indicated a limited significant positive association between motor skills and axial diffusivity and radial diffusivity values in the corpus callosum, internal and external/extreme capsules, and midbrain (P < .05, corrected). In contrast, for term scans (postmenstrual age of 37-41 weeks), tract-based spatial statistics analysis showed a significant relationship between both motor and cognitive scores with fractional anisotropy in the corpus callosum and corticospinal tracts (P < .05, corrected). Tract-based spatial statistics in a limited subset of neonates (n = 22) scanned at <30 weeks did not significantly predict neurodevelopmental outcomes. The strength of the association between fractional anisotropy values and neurodevelopmental outcome scores increased from early-to-late-acquired scans in preterm-born neonates, consistent with brain dysmaturation in this population. © 2015 by American Journal of Neuroradiology.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The NOXSO Process uses a regenerable sorbent that removes SO2 and NOx simultaneously from flue gas. The sorbent is a stabilized γ-alumina bed impregnated with sodium carbonate. The process was successfully tested at three different scales, equivalent to 0.017, 0.06 and 0.75 MW of flue gas generated from a coal-fired power plant. The Proof-of-Concept (POC) Test is the last test prior to a full-scale demonstration. A slip stream of flue gas equivalent to a 5 MW coal-fired power plant was used for the POC test. This paper summarizes the NOXSO POC plant and its test results.

  19. Assessing the equivalence of Web-based and paper-and-pencil questionnaires using differential item and test functioning (DIF and DTF) analysis: a case of the Four-Dimensional Symptom Questionnaire (4DSQ).

    PubMed

    Terluin, Berend; Brouwers, Evelien P M; Marchand, Miquelle A G; de Vet, Henrica C W

    2018-05-01

    Many paper-and-pencil (P&P) questionnaires have been migrated to electronic platforms. Differential item and test functioning (DIF and DTF) analysis constitutes a superior research design to assess measurement equivalence across modes of administration. The purpose of this study was to demonstrate an item response theory (IRT)-based DIF and DTF analysis to assess the measurement equivalence of a Web-based version and the original P&P format of the Four-Dimensional Symptom Questionnaire (4DSQ), measuring distress, depression, anxiety, and somatization. The P&P group (n = 2031) and the Web group (n = 958) consisted of primary care psychology clients. Unidimensionality and local independence of the 4DSQ scales were examined using IRT and Yen's Q3. Bifactor modeling was used to assess the scales' essential unidimensionality. Measurement equivalence was assessed using IRT-based DIF analysis using a 3-stage approach: linking on the latent mean and variance, selection of anchor items, and DIF testing using the Wald test. DTF was evaluated by comparing expected scale scores as a function of the latent trait. The 4DSQ scales proved to be essentially unidimensional in both modalities. Five items, belonging to the distress and somatization scales, displayed small amounts of DIF. DTF analysis revealed that the impact of DIF on the scale level was negligible. IRT-based DIF and DTF analysis is demonstrated as a way to assess the equivalence of Web-based and P&P questionnaire modalities. Data obtained with the Web-based 4DSQ are equivalent to data obtained with the P&P version.
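
The core of a DTF check is comparing expected scale scores as a function of the latent trait under the two calibrations. The sketch below uses invented item parameters for a dichotomous 2PL model purely to illustrate the idea; the 4DSQ itself uses polytomous items, and none of these numbers come from the study.

```python
import numpy as np

def expected_score(theta, a, b):
    """Expected number-correct score under a 2PL item response model."""
    theta = np.asarray(theta)[:, None]
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # item response probabilities
    return p.sum(axis=1)

# Hypothetical 5-item calibrations: P&P parameters, and a Web calibration
# in which two items show a small difficulty shift (DIF).
a_pp = np.array([1.2, 0.9, 1.5, 1.1, 0.8])
b_pp = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])
a_web = a_pp.copy()
b_web = b_pp + np.array([0.0, 0.1, 0.0, -0.1, 0.0])

# DTF view: the expected-score difference across the latent trait range.
theta = np.linspace(-3, 3, 61)
diff = expected_score(theta, a_pp, b_pp) - expected_score(theta, a_web, b_web)
print(f"max |expected-score difference| = {np.abs(diff).max():.3f}")
```

A small maximum difference across the trait range is what the study means by DIF whose impact at the scale level is negligible.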

  20. Satellite Test of the Equivalence Principle, Overview and Progress

    NASA Technical Reports Server (NTRS)

    Kolodziejczak, Jeffery

    2006-01-01

    An overview of STEP, the Satellite Test of the Equivalence Principle, will be presented. This space-based experiment will test the universality of free fall and is designed to advance the present state of knowledge by over 5 orders of magnitude. The international STEP collaboration is pursuing a development plan to improve and verify the technology readiness of key systems. We will discuss recent advances with an emphasis on accelerometer fabrication and testing. The transfer of critical technologies successfully demonstrated in flight by the Gravity Probe B mission will be described.
