Sample records for conducting statistical tests

  1. Selecting the most appropriate inferential statistical test for your quantitative research study.

    PubMed

    Bettany-Saltikov, Josette; Whittaker, Victoria Jane

    2014-06-01

    To discuss the issues and processes relating to the selection of the most appropriate statistical test. A review of the basic research concepts together with a number of clinical scenarios is used to illustrate this. Quantitative nursing research generally features the use of empirical data which necessitates the selection of both descriptive and statistical tests. Different types of research questions can be answered by different types of research designs, which in turn need to be matched to a specific statistical test(s). Discursive paper. This paper discusses the issues relating to the selection of the most appropriate statistical test and makes some recommendations as to how these might be dealt with. When conducting empirical quantitative studies, a number of key issues need to be considered. Considerations for selecting the most appropriate statistical tests are discussed and flow charts provided to facilitate this process. When nursing clinicians and researchers conduct quantitative research studies, it is crucial that the most appropriate statistical test is selected to enable valid conclusions to be made. © 2013 John Wiley & Sons Ltd.

  2. Multiple statistical tests: Lessons from a d20.

    PubMed

    Madan, Christopher R

    2016-01-01

    Statistical analyses are often conducted with α = .05. When multiple statistical tests are conducted, this procedure needs to be adjusted to compensate for the otherwise inflated Type I error. In tabletop gaming, it is sometimes desired to roll a 20-sided die (or 'd20') twice and take the greater outcome. Here I draw from probability theory and the case of a d20, where the probability of obtaining any specific outcome is 1/20, to determine the probability of obtaining a specific outcome (Type I error) at least once across repeated, independent statistical tests.
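
    As a brief illustration of the calculation this record describes: with α = .05 (the chance of any one d20 face), the probability of at least one Type I error across k independent tests is 1 − (1 − α)^k. A minimal sketch (the function name and printed values are illustrative, not from the paper):

      # Family-wise Type I error rate across k independent tests at level alpha,
      # mirroring the chance of rolling one particular d20 face at least once in k rolls.
      def familywise_error(alpha: float, k: int) -> float:
          return 1.0 - (1.0 - alpha) ** k

      for k in (1, 2, 5, 20):
          # With alpha = .05, two tests already give about .0975; twenty give about .64.
          print(k, round(familywise_error(0.05, k), 4))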

  3. Cycom 977-2 Composite Material: Impact Test Results (workshop presentation)

    NASA Technical Reports Server (NTRS)

    Engle, Carl; Herald, Stephen; Watkins, Casey

    2005-01-01

    Contents include the following: Ambient (13A) tests of Cycom 977-2 impact characteristics by the Bruceton and statistical method at MSFC and WSTF. Repeat (13A) tests of tested Cycom from phase I at MSFC to expand the testing statistical database. Conduct high-pressure tests (13B) in liquid oxygen (LOX) and GOX at MSFC and WSTF to determine Cycom reaction characteristics and batch effect. Conduct expanded ambient (13A) LOX tests at MSFC and high-pressure (13B) testing to determine pressure effects in LOX. Expand the 13B GOX database.

  4. Conducting tests for statistically significant differences using forest inventory data

    Treesearch

    James A. Westfall; Scott A. Pugh; John W. Coulston

    2013-01-01

    Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...

  5. Education Career Ladder, AFSC 751X0. Volume II.

    DTIC Science & Technology

    1981-02-01

    the functions listed in both documents, such as: testing; counseling; conducting educational surveys; collecting and analyzing statistical data; and...Inventory Development The instrument used for data collection for the occupational survey was USAF Job Inventory AFPT 90-751-408. As a starting point, the...testing; preparing, conducting, or evaluating educational surveys; collecting and analyzing statistical data; or organizing group study classes. This

  6. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  7. The Use of Meta-Analytic Statistical Significance Testing

    ERIC Educational Resources Information Center

    Polanin, Joshua R.; Pigott, Terri D.

    2015-01-01

    Meta-analysis multiplicity, the concept of conducting multiple tests of statistical significance within one review, is an underdeveloped literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…

  8. Chi-Square Statistics, Tests of Hypothesis and Technology.

    ERIC Educational Resources Information Center

    Rochowicz, John A.

    The use of technology such as computers and programmable calculators enables students to find p-values and conduct tests of hypotheses in many different ways. Comprehension and interpretation of a research problem become the focus for statistical analysis. This paper describes how to calculate chi-square statistics and p-values for statistical…

  9. Early Warning Signs of Suicide in Service Members Who Engage in Unauthorized Acts of Violence

    DTIC Science & Technology

    2016-06-01

    observable to military law enforcement personnel. Statistical analyses tested for differences in warning signs between cases of suicide, violence, or...indicators, (2) Behavioral Change indicators, (3) Social indicators, and (4) Occupational indicators. Statistical analyses were conducted to test for...

  10. Identifying Variations in Hydraulic Conductivity on the East River at Crested Butte, CO

    NASA Astrophysics Data System (ADS)

    Ulmer, K. N.; Malenda, H. F.; Singha, K.

    2016-12-01

    Slug tests are a widely used method to measure saturated hydraulic conductivity, or how easily water flows through an aquifer, by perturbing the piezometric surface and measuring the time the local groundwater table takes to re-equilibrate. Saturated hydraulic conductivity is crucial to calculating the speed and direction of groundwater movement. Therefore, it is important to document data variance from in situ slug tests. This study addresses two potential sources of data variability: different users and different types of slug used. To test for user variability, two individuals slugged the same six wells with water multiple times at a stream meander on the East River near Crested Butte, CO. To test for variations in type of slug test, multiple water and metal slug tests were performed at a single well in the same meander. The distributions of hydraulic conductivities of each test were then tested for variance using both the Kruskal-Wallis test and the Brown-Forsythe test. When comparing the hydraulic conductivity distributions gathered by the two individuals, we found that they were statistically similar. However, we found that the two types of slug tests produced hydraulic conductivity distributions for the same well that are statistically dissimilar. In conclusion, multiple people should be able to conduct slug tests without creating any considerable variations in the resulting hydraulic conductivity values, but only a single type of slug should be used for those tests.
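
    A hedged sketch of the kind of comparison described here, assuming SciPy is available (scipy.stats.levene with center='median' is the Brown-Forsythe variant); the conductivity values below are placeholders, not data from the study:

      # Compare hydraulic-conductivity distributions from two testers (or two slug types).
      from scipy.stats import kruskal, levene

      k_tester_a = [1.2e-4, 9.8e-5, 1.5e-4, 1.1e-4, 1.3e-4]   # m/s, illustrative
      k_tester_b = [1.1e-4, 1.0e-4, 1.4e-4, 1.2e-4, 1.2e-4]   # m/s, illustrative

      h_stat, p_kw = kruskal(k_tester_a, k_tester_b)                  # difference in location?
      w_stat, p_bf = levene(k_tester_a, k_tester_b, center="median")  # difference in spread?
      print(f"Kruskal-Wallis p = {p_kw:.3f}, Brown-Forsythe p = {p_bf:.3f}")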

  11. Assessment of statistical education in Indonesia: Preliminary results and initiation to simulation-based inference

    NASA Astrophysics Data System (ADS)

    Saputra, K. V. I.; Cahyadi, L.; Sembiring, U. A.

    2018-01-01

    In this paper, we assess our traditional elementary statistics education and also introduce elementary statistics with simulation-based inference. To assess our statistics class, we adapt the well-known CAOS (Comprehensive Assessment of Outcomes in Statistics) test, which serves as an external measure of students’ basic statistical literacy and is generally accepted as such. We also introduce a new teaching method for the elementary statistics class. Different from the traditional elementary statistics course, we introduce a simulation-based inference method to conduct hypothesis testing. The literature has shown that this new teaching method works very well in increasing students’ understanding of statistics.
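
    As a concrete example of simulation-based inference of the sort this record refers to (not taken from the paper), a permutation test builds the null distribution by reshuffling group labels; the scores and function name are illustrative:

      # Permutation test for a difference in means: simulate the null by shuffling labels.
      import random

      def permutation_p_value(group1, group2, n_perm=10_000, seed=0):
          rng = random.Random(seed)
          observed = abs(sum(group1) / len(group1) - sum(group2) / len(group2))
          pooled = list(group1) + list(group2)
          n1 = len(group1)
          hits = 0
          for _ in range(n_perm):
              rng.shuffle(pooled)
              diff = abs(sum(pooled[:n1]) / n1 - sum(pooled[n1:]) / (len(pooled) - n1))
              if diff >= observed:
                  hits += 1
          return hits / n_perm   # share of shuffles at least as extreme as the observed gap

      # Illustrative exam scores from two hypothetical class sections.
      print(permutation_p_value([72, 85, 90, 66, 78], [80, 88, 95, 91, 84]))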

  12. Using Cochran's Z Statistic to Test the Kernel-Smoothed Item Response Function Differences between Focal and Reference Groups

    ERIC Educational Resources Information Center

    Zheng, Yinggan; Gierl, Mark J.; Cui, Ying

    2010-01-01

    This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…

  13. Multiple comparison analysis testing in ANOVA.

    PubMed

    McHugh, Mary L

    2011-01-01

    The Analysis of Variance (ANOVA) test has long been an important tool for researchers conducting studies on multiple experimental groups and one or more control groups. However, ANOVA cannot provide detailed information on differences among the various study groups, or on complex combinations of study groups. To fully understand group differences in an ANOVA, researchers must conduct tests of the differences between particular pairs of experimental and control groups. Tests conducted on subsets of data tested previously in another analysis are called post hoc tests. A class of post hoc tests that provide this type of detailed information for ANOVA results are called "multiple comparison analysis" tests. The most commonly used multiple comparison analysis statistics include the following tests: Tukey, Newman-Keuls, Scheffé, Bonferroni and Dunnett. These statistical tools each have specific uses, advantages and disadvantages. Some are best used for testing theory while others are useful in generating new theory. Selection of the appropriate post hoc test will provide researchers with the most detailed information while limiting Type I errors due to alpha inflation.
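
    A minimal sketch of the omnibus-then-post-hoc workflow the abstract describes, assuming SciPy 1.8+ (for scipy.stats.tukey_hsd); the group measurements are invented:

      # One-way ANOVA followed by Tukey's HSD post hoc comparisons.
      from scipy.stats import f_oneway, tukey_hsd

      control = [5.1, 4.8, 5.3, 5.0, 4.9]
      treat_a = [5.9, 6.1, 5.8, 6.3, 6.0]
      treat_b = [5.2, 5.4, 5.1, 5.5, 5.3]

      f_stat, p_anova = f_oneway(control, treat_a, treat_b)    # omnibus test
      print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

      if p_anova < 0.05:
          # Pairwise comparisons with family-wise error control.
          print(tukey_hsd(control, treat_a, treat_b))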

  14. 49 CFR 40.111 - When and how must a laboratory disclose statistical summaries and other information it maintains?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.111 When and how must a laboratory disclose statistical summaries and other... a report indicating that not enough testing was conducted to warrant a summary. You may transmit the...

  15. 49 CFR 40.111 - When and how must a laboratory disclose statistical summaries and other information it maintains?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.111 When and how must a laboratory disclose statistical summaries and other... a report indicating that not enough testing was conducted to warrant a summary. You may transmit the...

  16. 49 CFR 40.111 - When and how must a laboratory disclose statistical summaries and other information it maintains?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.111 When and how must a laboratory disclose statistical summaries and other... a report indicating that not enough testing was conducted to warrant a summary. You may transmit the...

  17. 49 CFR 40.111 - When and how must a laboratory disclose statistical summaries and other information it maintains?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.111 When and how must a laboratory disclose statistical summaries and other... a report indicating that not enough testing was conducted to warrant a summary. You may transmit the...

  18. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    PubMed

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem for establishing noninferiority is discussed between a new treatment and a standard (control) treatment with ordinal categorical data. A measure of treatment effect is used and a method of specifying noninferiority margin for the measure is provided. Two Z-type test statistics are proposed where the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of the existing ones, and the results show that the proposed test statistics are better in terms of the deviation from nominal level and the power.

  19. Statistical and simulation analysis of hydraulic-conductivity data for Bear Creek and Melton Valleys, Oak Ridge Reservation, Tennessee

    USGS Publications Warehouse

    Connell, J.F.; Bailey, Z.C.

    1989-01-01

    A total of 338 single-well aquifer tests from Bear Creek and Melton Valley, Tennessee were statistically grouped to estimate hydraulic conductivities for the geologic formations in the valleys. A cross-sectional simulation model linked to a regression model was used to further refine the statistical estimates for each of the formations and to improve understanding of ground-water flow in Bear Creek Valley. Median hydraulic-conductivity values were used as initial values in the model. Model-calculated estimates of hydraulic conductivity were generally lower than the statistical estimates. Simulations indicate that (1) the Pumpkin Valley Shale controls groundwater flow between Pine Ridge and Bear Creek; (2) all the recharge on Chestnut Ridge discharges to the Maynardville Limestone; (3) the formations having smaller hydraulic gradients may have a greater tendency for flow along strike; (4) local hydraulic conditions in the Maynardville Limestone cause inaccurate model-calculated estimates of hydraulic conductivity; and (5) the conductivity of deep bedrock neither affects the results of the model nor does it add information on the flow system. Improved model performance would require: (1) more water level data for the Copper Ridge Dolomite; (2) improved estimates of hydraulic conductivity in the Copper Ridge Dolomite and Maynardville Limestone; and (3) more water level data and aquifer tests in deep bedrock. (USGS)

  20. CompareTests-R package

    Cancer.gov

    CompareTests is an R package to estimate agreement and diagnostic accuracy statistics for two diagnostic tests when one is conducted on only a subsample of specimens. A standard test is observed on all specimens.

  1. General Framework for Meta-analysis of Rare Variants in Sequencing Association Studies

    PubMed Central

    Lee, Seunggeun; Teslovich, Tanya M.; Boehnke, Michael; Lin, Xihong

    2013-01-01

    We propose a general statistical framework for meta-analysis of gene- or region-based multimarker rare variant association tests in sequencing association studies. In genome-wide association studies, single-marker meta-analysis has been widely used to increase statistical power by combining results via regression coefficients and standard errors from different studies. In analysis of rare variants in sequencing studies, region-based multimarker tests are often used to increase power. We propose meta-analysis methods for commonly used gene- or region-based rare variants tests, such as burden tests and variance component tests. Because estimation of regression coefficients of individual rare variants is often unstable or not feasible, the proposed method avoids this difficulty by calculating score statistics instead that only require fitting the null model for each study and then aggregating these score statistics across studies. Our proposed meta-analysis rare variant association tests are conducted based on study-specific summary statistics, specifically score statistics for each variant and between-variant covariance-type (linkage disequilibrium) relationship statistics for each gene or region. The proposed methods are able to incorporate different levels of heterogeneity of genetic effects across studies and are applicable to meta-analysis of multiple ancestry groups. We show that the proposed methods are essentially as powerful as joint analysis by directly pooling individual level genotype data. We conduct extensive simulations to evaluate the performance of our methods by varying levels of heterogeneity across studies, and we apply the proposed methods to meta-analysis of rare variant effects in a multicohort study of the genetics of blood lipid levels. PMID:23768515

  2. TRANSIT TIMING OBSERVATIONS FROM KEPLER. VI. POTENTIALLY INTERESTING CANDIDATE SYSTEMS FROM FOURIER-BASED STATISTICAL TESTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steffen, Jason H.; Ford, Eric B.; Rowe, Jason F.

    2012-09-10

    We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.

  3. Transit Timing Observations from Kepler: VII. Potentially interesting candidate systems from Fourier-based statistical tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steffen, Jason H. (Fermilab); Ford, Eric B.

    2012-01-01

    We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through Quarter six (Q6) of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.

  4. Robustness of Multiple Objective Decision Analysis Preference Functions

    DTIC Science & Technology

    2002-06-01

    p, p′: The probability of some event. p_i, q_i: The probability of event i. Π: An aggregation of proportional data used in calculating a test ...statistical tests of the significance of the term and also is conducted in a multivariate framework rather than the ROSA univariate approach. A...residual error is e = y − ŷ (Eq. 45). The coefficient provides a ready indicator of the contribution for the associated variable and statistical tests

  5. Decision Support Systems: Applications in Statistics and Hypothesis Testing.

    ERIC Educational Resources Information Center

    Olsen, Christopher R.; Bozeman, William C.

    1988-01-01

    Discussion of the selection of appropriate statistical procedures by educators highlights a study conducted to investigate the effectiveness of decision aids in facilitating the use of appropriate statistics. Experimental groups and a control group using a printed flow chart, a computer-based decision aid, and a standard text are described. (11…

  6. 78 FR 14060 - Migratory Bird Hunting; Revision of Language for Approval of Nontoxic Shot for Use in Waterfowl...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-04

    ... process, send the Tier 2 testing results and analyses to us. You must ensure that copies of all the raw..., you may conduct the Tier 3 testing. You must ensure that copies of the raw data and the statistical... deficient diet. Conduct a chronic exposure test under adverse conditions that complies with the following...

  7. Estimating biozone hydraulic conductivity in wastewater soil-infiltration systems using inverse numerical modeling.

    PubMed

    Bumgarner, Johnathan R; McCray, John E

    2007-06-01

    During operation of an onsite wastewater treatment system, a low-permeability biozone develops at the infiltrative surface (IS) during application of wastewater to soil. Inverse numerical-model simulations were used to estimate the biozone saturated hydraulic conductivity (K(biozone)) under variably saturated conditions for 29 wastewater infiltration test cells installed in a sandy loam field soil. Test cells employed two loading rates (4 and 8 cm/day) and 3 IS designs: open chamber, gravel, and synthetic bundles. The ratio of K(biozone) to the saturated hydraulic conductivity of the natural soil (K(s)) was used to quantify the reductions in the IS hydraulic conductivity. A smaller value of K(biozone)/K(s) reflects a greater reduction in hydraulic conductivity. The IS hydraulic conductivity was reduced by 1-3 orders of magnitude. The reduction in IS hydraulic conductivity was primarily influenced by wastewater loading rate and IS type and not by the K(s) of the native soil. The higher loading rate yielded greater reductions in IS hydraulic conductivity than the lower loading rate for bundle and gravel cells, but the difference was not statistically significant for chamber cells. Bundle and gravel cells exhibited a greater reduction in IS hydraulic conductivity than chamber cells at the higher loading rates, while the difference between gravel and bundle systems was not statistically significant. At the lower rate, bundle cells exhibited generally lower K(biozone)/K(s) values, but not at a statistically significant level, while gravel and chamber cells were statistically similar. Gravel cells exhibited the greatest variability in measured values, which may complicate design efforts based on K(biozone) evaluations for these systems. These results suggest that chamber systems may provide for a more robust design, particularly for high or variable wastewater infiltration rates.

  8. Pre-Deployment Stress, Mental Health, and Help-Seeking Behaviors Among Marines

    DTIC Science & Technology

    2014-01-01

    associations between two categorical variables, and Wald tests were conducted to compare mean scores on continuous variables across groups (e.g...Cluster-adjusted Wald tests were conducted to determine whether there were significant differences by rank on the average number of potentially...deployed to Iraq or Afghanistan in 2010 or 2011 of rank O6 or lower. a Omnibus Rao-Scott chi-square test or adjusted Wald test is statistically

  9. Middle School Students' Statistical Literacy: Role of Grade Level and Gender

    ERIC Educational Resources Information Center

    Yolcu, Ayse

    2014-01-01

    This study examined the role of gender and grade level on middle school students' statistical literacy. The study was conducted in the spring semester of the 2012-2013 academic year with 598 middle-school students (grades 6-8) from three public schools in Turkey. The data were collected using the Statistical Literacy Test, developed based on…

  10. An Inferentialist Perspective on the Coordination of Actions and Reasons Involved in Making a Statistical Inference

    ERIC Educational Resources Information Center

    Bakker, Arthur; Ben-Zvi, Dani; Makar, Katie

    2017-01-01

    To understand how statistical and other types of reasoning are coordinated with actions to reduce uncertainty, we conducted a case study in vocational education that involved statistical hypothesis testing. We analyzed an intern's research project in a hospital laboratory in which reducing uncertainties was crucial to make a valid statistical…

  11. Statistical analysis of an inter-laboratory comparison of small-scale safety and thermal testing of RDX

    DOE PAGES

    Brown, Geoffrey W.; Sandstrom, Mary M.; Preston, Daniel N.; ...

    2014-11-17

    In this study, the Integrated Data Collection Analysis (IDCA) program has conducted a proficiency test for small-scale safety and thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results from this test for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Class 5 Type II standard. The material was tested as a well-characterized standard several times during the proficiency test to assess differences among participants and the range of results that may arise for well-behaved explosive materials.

  12. Statistical methods for conducting agreement (comparison of clinical tests) and precision (repeatability or reproducibility) studies in optometry and ophthalmology.

    PubMed

    McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad

    2011-07-01

    The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.
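
    A short sketch of the limits-of-agreement calculation mentioned above (bias ± 1.96 × SD of the paired differences); the paired readings are invented, not from any study:

      # Bland-Altman limits of agreement for two instruments measuring the same eyes.
      import statistics

      instrument_a = [43.2, 44.1, 42.8, 45.0, 43.7, 44.4]   # illustrative readings
      instrument_b = [43.5, 44.0, 43.1, 44.7, 43.9, 44.8]

      diffs = [a - b for a, b in zip(instrument_a, instrument_b)]
      bias = statistics.mean(diffs)
      sd = statistics.stdev(diffs)
      lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
      print(f"bias = {bias:.3f}, 95% limits of agreement = ({lower:.3f}, {upper:.3f})")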

  13. Biostatistics primer: part I.

    PubMed

    Overholser, Brian R; Sowinski, Kevin M

    2007-12-01

    Biostatistics is the application of statistics to biologic data. The field of statistics can be broken down into 2 fundamental parts: descriptive and inferential. Descriptive statistics are commonly used to categorize, display, and summarize data. Inferential statistics can be used to make predictions based on a sample obtained from a population or some large body of information. It is these inferences that are used to test specific research hypotheses. This 2-part review will outline important features of descriptive and inferential statistics as they apply to commonly conducted research studies in the biomedical literature. Part 1 in this issue will discuss fundamental topics of statistics and data analysis. Additionally, some of the most commonly used statistical tests found in the biomedical literature will be reviewed in Part 2 in the February 2008 issue.

  14. Sediment bioaccumulation testing with fish

    USGS Publications Warehouse

    Mac, Michael J.; Schmitt, Christopher J.; Burton, G. Allen

    1992-01-01

    In this chapter, we discuss methods for conducting bioaccumulation bioassays with fish; the advantages and disadvantages of using fish rather than invertebrates; and problems associated with bioaccumulation testing, with a special emphasis on statistical treatment.

  15. Estimation of diagnostic test accuracy without full verification: a review of latent class methods

    PubMed Central

    Collins, John; Huynh, Minh

    2014-01-01

    The performance of a diagnostic test is best evaluated against a reference test that is without error. For many diseases, this is not possible, and an imperfect reference test must be used. However, diagnostic accuracy estimates may be biased if inaccurately verified status is used as the truth. Statistical models have been developed to handle this situation by treating disease as a latent variable. In this paper, we conduct a systematized review of statistical methods using latent class models for estimating test accuracy and disease prevalence in the absence of complete verification. PMID:24910172

  16. Primer of statistics in dental research: part I.

    PubMed

    Shintani, Ayumi

    2014-01-01

    Statistics play essential roles in evidence-based dentistry (EBD) practice and research. Their use ranges widely from formulating scientific questions, designing studies, and collecting and analyzing data to interpreting, reporting, and presenting study findings. Mastering statistical concepts appears to be an unreachable goal among many dental researchers, in part due to statistical authorities' limitations in explaining statistical principles to health researchers without elaborating complex mathematical concepts. This series of 2 articles aims to introduce dental researchers to 9 essential topics in statistics to conduct EBD, with intuitive examples. Part I of the series includes the first 5 topics: (1) statistical graphs, (2) how to deal with outliers, (3) p-values and confidence intervals, (4) testing equivalence, and (5) multiplicity adjustment. Part II will follow to cover the remaining topics, including (6) selecting the proper statistical tests, (7) repeated measures analysis, (8) epidemiological considerations for causal association, and (9) analysis of agreement. Copyright © 2014. Published by Elsevier Ltd.

  17. Further elucidation of nanofluid thermal conductivity measurement using a transient hot-wire method apparatus

    NASA Astrophysics Data System (ADS)

    Yoo, Donghoon; Lee, Joohyun; Lee, Byeongchan; Kwon, Suyong; Koo, Junemo

    2018-02-01

    The Transient Hot-Wire Method (THWM) was developed to measure the absolute thermal conductivity of gases, liquids, melts, and solids with low uncertainty. The majority of nanofluid researchers used THWM to measure the thermal conductivity of test fluids. Several reasons have been suggested for the discrepancies in these types of measurements, including nanofluid generation, nanofluid stability, and measurement challenges. The details of the transient hot-wire method such as the test cell size, the temperature coefficient of resistance (TCR) and the sampling number are further investigated to improve the accuracy and consistency of the measurements of different researchers. It was observed that smaller test apparatuses were better because they can delay the onset of natural convection. TCR values of a coated platinum wire were measured and statistically analyzed to reduce the uncertainty in thermal conductivity measurements. For validation, ethylene glycol (EG) and water thermal conductivity were measured and analyzed in the temperature range between 280 and 310 K. Furthermore, a detailed statistical analysis was conducted for such measurements, and the results confirmed the minimum number of samples required to achieve the desired resolution and precision of the measurements. It is further proposed that researchers fully report the information related to their measurements to validate the measurements and to avoid future inconsistent nanofluid data.
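
    For orientation only, the ideal line-source relation usually behind THWM data reduction is k = q·ln(t2/t1) / (4π·(ΔT2 − ΔT1)), where q is the heat dissipated per unit wire length and the (t, ΔT) pairs lie on the linear part of the temperature rise versus ln(time) curve; this is a textbook relation, not the apparatus-specific analysis in the paper, and the numbers below are made up:

      # Thermal conductivity from the slope of the wire's temperature rise vs ln(t).
      from math import log, pi

      def thermal_conductivity(q_per_length, t1, t2, dT1, dT2):
          # q_per_length in W/m; (t1, dT1) and (t2, dT2) on the linear region of dT vs ln(t)
          return q_per_length * log(t2 / t1) / (4.0 * pi * (dT2 - dT1))

      # Illustrative values giving a roughly water-like conductivity (~0.61 W/m/K).
      print(round(thermal_conductivity(0.5, 0.1, 1.0, 0.30, 0.45), 3))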

  18. [Interactive workshops as a dissemination strategy in psychology].

    PubMed

    Martínez-Martínez, Kalina Isela; Carrascosa-Venegas, César; Ayala-Velázquez, Héctor

    2003-01-01

    To assess whether interactive workshops are an effective strategy for promoting a psychological intervention model among healthcare providers, to treat problem drinkers. The study was conducted between the years 1999 and 2000, among 206 healthcare providers at seven Instituto Mexicano del Seguro Social (Mexican Institute of Social Security, IMSS) clinics. Study subjects were selected by hospital executive officers. The study design is a quasi-experimental pre-test/post-test study. Data on providers' attitudes, interests, and knowledge were collected using a questionnaire. After that, interactive workshops were conducted, and the same questionnaire was applied again at the end of the workshops. Statistical analysis was carried out using Student's t test for matched samples. Statistically significant differences were found in participants' knowledge on alcoholism t (206, 205) = -9.234, p = 0.001, as well as in their interest t (206, 205) = -2.318, p = 0.021. Interactive workshops are an effective tool to disseminate the Guided Self-Help Program conducted in IMSS clinics. Healthcare providers can become change-inducing/promoting agents of psychological innovations.

  19. Derivation and Applicability of Asymptotic Results for Multiple Subtests Person-Fit Statistics

    PubMed Central

    Albers, Casper J.; Meijer, Rob R.; Tendeiro, Jorge N.

    2016-01-01

    In high-stakes testing, it is important to check the validity of individual test scores. Although a test may, in general, result in valid test scores for most test takers, for some test takers, test scores may not provide a good description of a test taker’s proficiency level. Person-fit statistics have been proposed to check the validity of individual test scores. In this study, the theoretical asymptotic sampling distribution of two person-fit statistics that can be used for tests that consist of multiple subtests is first discussed. Second, simulation study was conducted to investigate the applicability of this asymptotic theory for tests of finite length, in which the correlation between subtests and number of items in the subtests was varied. The authors showed that these distributions provide reasonable approximations, even for tests consisting of subtests of only 10 items each. These results have practical value because researchers do not have to rely on extensive simulation studies to simulate sampling distributions. PMID:29881053

  20. Differences in Temperature Changes in Premature Infants During Invasive Procedures in Incubators and Radiant Warmers.

    PubMed

    Handhayanti, Ludwy; Rustina, Yeni; Budiati, Tri

    Premature infants tend to lose heat quickly. This loss can be aggravated when they undergo an invasive procedure involving a venous puncture. This research used a crossover design, conducting 2 intervention tests to compare 2 different treatments on the same sample. The research involved 2 groups with 18 premature infants in each. Data were analyzed with an independent t test. Interventions conducted in an open incubator showed a p value of .001, indicating a statistically significant heat loss in premature infants. In contrast, for the radiant warmer, a p value of .001 indicated a statistically significant difference in heat gain before and after the venous puncture. The radiant warmer protected the premature infants from hypothermia during the invasive procedure. However, it is inadvisable for routine care of newborn infants since it can increase insensible water loss.

  1. Statistical testing of baseline differences in sports medicine RCTs: a systematic evaluation.

    PubMed

    Peterson, Ross L; Tran, Matthew; Koffel, Jonathan; Stovitz, Steven D

    2017-01-01

    The CONSORT (Consolidated Standards of Reporting Trials) statement discourages reporting statistical tests of baseline differences between groups in randomised controlled trials (RCTs). However, this practice is still common in many medical fields. Our aim was to determine the prevalence of this practice in leading sports medicine journals. We conducted a comprehensive search in Medline through PubMed to identify RCTs published in the years 2005 and 2015 from 10 high-impact sports medicine journals. Two reviewers independently confirmed the trial design and reached consensus on which articles contained statistical tests of baseline differences. Our search strategy identified a total of 324 RCTs, with 85 from the year 2005 and 239 from the year 2015. Overall, 64.8% of studies (95% CI (59.6, 70.0)) reported statistical tests of baseline differences; broken down by year, this percentage was 67.1% in 2005 (95% CI (57.1, 77.1)) and 64.0% in 2015 (95% CI (57.9, 70.1)). Although discouraged by the CONSORT statement, statistical testing of baseline differences remains highly prevalent in sports medicine RCTs. Statistical testing of baseline differences can mislead authors; for example, by failing to identify meaningful baseline differences in small studies. Journals that ask authors to follow the CONSORT statement guidelines should recognise that many manuscripts are ignoring the recommendation against statistical testing of baseline differences.
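
    For context on the percentages quoted above, a normal-approximation (Wald) interval, p ± 1.96·sqrt(p(1 − p)/n), reproduces the overall bound (64.8% of 324 trials, about 210 studies, gives roughly 59.6% to 70.0%); the paper does not say which interval method it used, so this is only an illustrative check:

      # Wald 95% confidence interval for a proportion.
      from math import sqrt

      def wald_ci(successes: int, n: int, z: float = 1.96):
          p = successes / n
          half = z * sqrt(p * (1 - p) / n)
          return p - half, p + half

      lo, hi = wald_ci(210, 324)   # about 64.8% of 324 trials
      print(f"{210 / 324:.3f} (95% CI {lo:.3f}, {hi:.3f})")   # approx. 0.648 (0.596, 0.700)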

  2. Statistical inference, the bootstrap, and neural-network modeling with application to foreign exchange rates.

    PubMed

    White, H; Racine, J

    2001-01-01

    We propose tests for individual and joint irrelevance of network inputs. Such tests can be used to determine whether an input or group of inputs "belong" in a particular model, thus permitting valid statistical inference based on estimated feedforward neural-network models. The approaches employ well-known statistical resampling techniques. We conduct a small Monte Carlo experiment showing that our tests have reasonable level and power behavior, and we apply our methods to examine whether there are predictable regularities in foreign exchange rates. We find that exchange rates do appear to contain information that is exploitable for enhanced point prediction, but the nature of the predictive relations evolves through time.

  3. A Nonparametric Geostatistical Method For Estimating Species Importance

    Treesearch

    Andrew J. Lister; Rachel Riemann; Michael Hoppus

    2001-01-01

    Parametric statistical methods are not always appropriate for conducting spatial analyses of forest inventory data. Parametric geostatistical methods such as variography and kriging are essentially averaging procedures, and thus can be affected by extreme values. Furthermore, non-normal distributions violate the assumptions of analyses in which test statistics are...

  4. Analysis of Statistical Methods Currently used in Toxicology Journals

    PubMed Central

    Na, Jihye; Yang, Hyeri

    2014-01-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why those methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either a normality or an equal-variance test. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health. PMID:25343012

  5. Analysis of Statistical Methods Currently used in Toxicology Journals.

    PubMed

    Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min

    2014-09-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why those methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either a normality or an equal-variance test. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health.
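
    A sketch of the assumption checks the abstract reports were often omitted, assuming SciPy; the dose-group values are placeholders:

      # Check one-way ANOVA assumptions before the omnibus test:
      # Shapiro-Wilk for within-group normality, Levene for equal variances.
      from scipy.stats import shapiro, levene, f_oneway

      groups = {
          "vehicle":   [1.1, 1.3, 0.9, 1.2, 1.0, 1.4],
          "low dose":  [1.6, 1.8, 1.7, 1.5, 1.9, 1.6],
          "high dose": [2.4, 2.6, 2.2, 2.8, 2.5, 2.3],
      }

      for name, values in groups.items():
          _, p_norm = shapiro(values)
          print(f"{name}: Shapiro-Wilk p = {p_norm:.3f}")

      _, p_var = levene(*groups.values())        # equal-variance check
      _, p_anova = f_oneway(*groups.values())    # one-way ANOVA
      print(f"Levene p = {p_var:.3f}, ANOVA p = {p_anova:.4f}")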

  6. 49 CFR Appendix A to Part 665 - Tests To Be Performed at the Bus Testing Facility

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... Because the operator will not become familiar with the detailed design of all new bus models that are tested, tests to determine the time and skill required to remove and reinstall an engine, a transmission... feasible to conduct statistical reliability tests. The detected bus failures, repair time, and the actions...

  7. 49 CFR Appendix A to Part 665 - Tests To Be Performed at the Bus Testing Facility

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... Because the operator will not become familiar with the detailed design of all new bus models that are tested, tests to determine the time and skill required to remove and reinstall an engine, a transmission... feasible to conduct statistical reliability tests. The detected bus failures, repair time, and the actions...

  8. 49 CFR Appendix A to Part 665 - Tests To Be Performed at the Bus Testing Facility

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... Because the operator will not become familiar with the detailed design of all new bus models that are tested, tests to determine the time and skill required to remove and reinstall an engine, a transmission... feasible to conduct statistical reliability tests. The detected bus failures, repair time, and the actions...

  9. Different Tests for a Difference: How Do We Do Research?

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    Most biological scientists conduct experiments to look for effects, and test the results statistically. One of the commonly used tests is Student's t test. However, this test concentrates on a very limited question. The authors assume that there is no effect in the experiment, and then estimate the possibility that they could have obtained these…
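
    A minimal two-sample t test of the kind this record discusses, assuming SciPy; the measurements are invented:

      # Student's t test: assume no effect (equal means) and ask how surprising the data are.
      from scipy.stats import ttest_ind

      control   = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
      treatment = [13.0, 12.8, 13.4, 12.9, 13.1, 12.7]

      t_stat, p_value = ttest_ind(control, treatment)   # Student's (equal-variance) t by default
      print(f"t = {t_stat:.2f}, p = {p_value:.4f}")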

  10. Static renewal tests using Anodonta imbecillus (freshwater mussels). Anodonta imbecillis copper sulfate reference toxicant test, Clinch River-Environmental Restoration Program (CR-ERP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simbeck, D.J.

    1993-12-31

    Reference toxicant testing using juvenile freshwater mussels was conducted as part of the CR-ERP biomonitoring study of Clinch River sediments to assess the sensitivity of test organisms and the overall performance of the test. Tests were conducted using moderately hard synthetic water spiked with known concentrations of copper as copper sulfate. Toxicity testing of copper sulfate reference toxicant was conducted from May 12-21, 1993. The organisms used for testing were juvenile freshwater mussels (Anodonta imbecillis). Results from this test showed an LC50 value of 1.12 mg Cu/L which is lower than the value of 2.02 mg Cu/L obtained in a previous test. Too few tests have been conducted with copper as the toxicant to determine a normal range of values. Attachments to this report include: Toxicity test bench sheets and statistical analyses; Copper analysis request and results; and Personnel training documentation.

  11. Test-retest reliability of auditory brainstem responses to chirp stimuli in newborns.

    PubMed

    Cobb, Kensi M; Stuart, Andrew

    2014-11-01

    The purpose of this study was to examine the test-retest reliability of auditory brainstem responses (ABRs) to air- and bone-conducted chirp stimuli in newborns as a function of intensity. A repeated measures quasi-experimental design was employed. Thirty healthy newborns participated. ABRs were evoked using 60, 45, and 30 dB nHL air-conducted CE-Chirps and 45, 30, and 15 dB nHL bone-conducted CE-Chirps at a rate of 57.7/s. Measures were repeated by a second tester. Statistically significant correlations (p <.0001) and predictive linear relations (p <.0001) were found between testers for wave V latencies and amplitudes to air- and bone-conducted CE-Chirps. There were also no statistically significant differences between testers with wave V latencies and amplitudes to air- and bone-conducted CE-Chirps (p >.05). As expected, significant differences in wave V latencies and amplitudes were seen as a function of stimulus intensity for air- and bone-conducted CE-Chirps (p <.0001). These results suggest that ABRs to air- and bone-conducted CE-Chirps can be reliably repeated in newborns with different testers. The CE-Chirp may be valuable for both screening and diagnostic audiologic assessments of newborns.

  12. Statistical Analysis of CFD Solutions from the Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.

    2002-01-01

    A simple, graphical framework is presented for robust statistical evaluation of results obtained from N-Version testing of a series of RANS CFD codes. The solutions were obtained by a variety of code developers and users for the June 2001 Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration used for the computational tests is the DLR-F4 wing-body combination previously tested in several European wind tunnels and for which a previous N-Version test had been conducted. The statistical framework is used to evaluate code results for (1) a single cruise design point, (2) drag polars and (3) drag rise. The paper concludes with a discussion of the meaning of the results, especially with respect to predictability, Validation, and reporting of solutions.

  13. [Statistical validity of the Mexican Food Security Scale and the Latin American and Caribbean Food Security Scale].

    PubMed

    Villagómez-Ornelas, Paloma; Hernández-López, Pedro; Carrasco-Enríquez, Brenda; Barrios-Sánchez, Karina; Pérez-Escamilla, Rafael; Melgar-Quiñónez, Hugo

    2014-01-01

    This article validates the statistical consistency of two food security scales: the Mexican Food Security Scale (EMSA) and the Latin American and Caribbean Food Security Scale (ELCSA). Validity tests were conducted in order to verify that both scales were consistent instruments, conformed by independent, properly calibrated and adequately sorted items, arranged in a continuum of severity. The following tests were developed: sorting of items; Cronbach's alpha analysis; parallelism of prevalence curves; Rasch models; sensitivity analysis through mean differences' hypothesis test. The tests showed that both scales meet the required attributes and are robust statistical instruments for food security measurement. This is relevant given that the lack of access to food indicator, included in multidimensional poverty measurement in Mexico, is calculated with EMSA.
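
    One of the checks listed above, Cronbach's alpha, can be computed from the item and total-score variances; a sketch with invented yes/no responses (the scales' actual items are not reproduced here):

      # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
      import statistics

      def cronbach_alpha(items):
          # items: one list of responses per item, all the same length (one entry per household)
          k = len(items)
          item_var_sum = sum(statistics.variance(item) for item in items)
          totals = [sum(vals) for vals in zip(*items)]
          return (k / (k - 1)) * (1 - item_var_sum / statistics.variance(totals))

      # Illustrative 1/0 answers to 4 food-security items from 6 households.
      responses = [
          [1, 1, 0, 1, 0, 1],
          [1, 1, 0, 1, 0, 0],
          [1, 0, 0, 1, 0, 0],
          [0, 0, 0, 1, 0, 0],
      ]
      print(round(cronbach_alpha(responses), 3))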

  14. Principles and Practice of Scaled Difference Chi-Square Testing

    ERIC Educational Resources Information Center

    Bryant, Fred B.; Satorra, Albert

    2012-01-01

    We highlight critical conceptual and statistical issues and how to resolve them in conducting Satorra-Bentler (SB) scaled difference chi-square tests. Concerning the original (Satorra & Bentler, 2001) and new (Satorra & Bentler, 2010) scaled difference tests, a fundamental difference exists in how to compute properly a model's scaling correction…

  15. Evaluation of the ecological relevance of mysid toxicity tests using population modeling techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhn-Hines, A.; Munns, W.R. Jr.; Lussier, S.

    1995-12-31

    A number of acute and chronic bioassay statistics are used to evaluate the toxicity and risks of chemical stressors to the mysid shrimp, Mysidopsis bahia. These include LC50s from acute tests, NOECs from 7-day and life-cycle tests, and the US EPA Water Quality Criteria Criterion Continuous Concentrations (CCC). Because these statistics are generated from endpoints which focus upon the responses of individual organisms, their relationships to significant effects at higher levels of ecological organization are unknown. This study was conducted to evaluate the quantitative relationships between toxicity test statistics and a concentration-based statistic derived from exposure-response models relating population growth rate (λ) to stressor concentration. This statistic, C• (the concentration where λ = 1, zero population growth), describes the concentration above which mysid populations are projected to decline in abundance as determined using population modeling techniques. An analysis of M. bahia responses to 9 metals and 9 organic contaminants indicated the NOEC from life-cycle tests to be the best predictor of C•, although the acute LC50 predicted population-level response surprisingly well. These analyses provide useful information regarding uncertainties of extrapolation among test statistics in assessments of ecological risk.

  16. Conducted-Susceptibility Testing as an Alternative Approach to Unit-Level Radiated-Susceptibility Verifications

    NASA Astrophysics Data System (ADS)

    Badini, L.; Grassi, F.; Pignari, S. A.; Spadacini, G.; Bisognin, P.; Pelissou, P.; Marra, S.

    2016-05-01

    This work presents a theoretical rationale for substituting the radiated-susceptibility (RS) verifications defined in current aerospace standards with an equivalent conducted-susceptibility (CS) test procedure based on bulk current injection (BCI) up to 500 MHz. Statistics is used to overcome the lack of knowledge about uncontrolled or uncertain setup parameters, with particular reference to the common-mode impedance of equipment. The BCI test level is properly investigated so as to ensure correlation of the currents injected into the equipment under test via CS and RS. In particular, an over-testing probability quantifies the severity of the BCI test with respect to the RS test.

  17. A novel measure and significance testing in data analysis of cell image segmentation.

    PubMed

    Wu, Jin Chu; Halter, Michael; Kacker, Raghu N; Elliott, John T; Plant, Anne L

    2017-03-14

    Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing the CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods. However, computing the standard errors (SE) of the measures and their correlation coefficient is not described, and thus the statistical significance of performance differences between CIS algorithms cannot be assessed. We propose the total error rate (TER), a novel performance measure for segmenting all cells in the supervised evaluation. The TER statistically aggregates all misclassification error rates (MER) by taking cell sizes as weights. The MERs are for segmenting each single cell in the population. The TER is fully supported by the pairwise comparisons of MERs using 106 manually segmented ground-truth cells with different sizes and seven CIS algorithms taken from ImageJ. Further, the SE and 95% confidence interval (CI) of TER are computed based on the SE of MER that is calculated using the bootstrap method. An algorithm for computing the correlation coefficient of TERs between two CIS algorithms is also provided. Hence, the 95% CI error bars can be used to classify CIS algorithms. The SEs of TERs and their correlation coefficient can be employed to conduct hypothesis testing, when the CIs overlap, to determine the statistical significance of the performance differences between CIS algorithms. A novel measure TER of CIS is proposed. The TER's SEs and correlation coefficient are computed. Thereafter, CIS algorithms can be evaluated and compared statistically by conducting the significance testing.
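
    A hedged sketch of a size-weighted aggregate error rate with a bootstrap standard error, in the spirit of the TER described above; this is not the authors' implementation, and the per-cell error rates and cell sizes are invented:

      # Size-weighted aggregate of per-cell misclassification error rates, with a bootstrap SE.
      import random

      def weighted_error(error_rates, sizes):
          return sum(e * s for e, s in zip(error_rates, sizes)) / sum(sizes)

      def bootstrap_se(error_rates, sizes, n_boot=2000, seed=1):
          rng = random.Random(seed)
          n = len(error_rates)
          stats = []
          for _ in range(n_boot):
              idx = [rng.randrange(n) for _ in range(n)]   # resample cells with replacement
              stats.append(weighted_error([error_rates[i] for i in idx],
                                          [sizes[i] for i in idx]))
          mean = sum(stats) / n_boot
          return (sum((s - mean) ** 2 for s in stats) / (n_boot - 1)) ** 0.5

      errors = [0.02, 0.10, 0.05, 0.01, 0.08]   # per-cell error rates (illustrative)
      sizes  = [450, 120, 300, 800, 200]        # cell sizes in pixels (weights)
      se = bootstrap_se(errors, sizes)
      print(f"weighted error = {weighted_error(errors, sizes):.4f}, bootstrap SE = {se:.4f}")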

  18. Statistics For Success Statistical Analysis Of Student Data Is A Lot Easier Than You Think And More Useful Than You Imagine.

    ERIC Educational Resources Information Center

    Kadel, Robert

    2004-01-01

    To her surprise, Ms. Logan had just conducted a statistical analysis of her 10th grade biology students' quiz scores. The results indicated that she needed to reinforce mitosis before the students took the high-school proficiency test in three weeks, as required by the state. "Oh! That's easy!" She exclaimed. Teachers like Ms. Logan are…

  19. Comments on statistical issues in numerical modeling for underground nuclear test monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, W.L.; Anderson, K.K.

    1993-11-01

    The Symposium concluded with prepared summaries by four experts in the involved disciplines. These experts made no mention of statistics and/or the statistical content of issues. The first author contributed an extemporaneous statement at the Symposium because there are important issues associated with conducting and evaluating numerical modeling that are familiar to statisticians and often treated successfully by them. This note expands upon these extemporaneous remarks.

  20. Hypothesis Testing Using the Films of the Three Stooges

    ERIC Educational Resources Information Center

    Gardner, Robert; Davidson, Robert

    2010-01-01

    The use of The Three Stooges' films as a source of data in an introductory statistics class is described. The Stooges' films are separated into three populations. Using these populations, students may conduct hypothesis tests with data they collect.

  1. A Statistical Analysis of Brain Morphology Using Wild Bootstrapping

    PubMed Central

    Ibrahim, Joseph G.; Tang, Niansheng; Rowe, Daniel B.; Hao, Xuejun; Bansal, Ravi; Peterson, Bradley S.

    2008-01-01

    Methods for the analysis of brain morphology, including voxel-based morphology and surface-based morphometries, have been used to detect associations between brain structure and covariates of interest, such as diagnosis, severity of disease, age, IQ, and genotype. The statistical analysis of morphometric measures usually involves two statistical procedures: 1) invoking a statistical model at each voxel (or point) on the surface of the brain or brain subregion, followed by mapping test statistics (e.g., t test) or their associated p values at each of those voxels; 2) correction for the multiple statistical tests conducted across all voxels on the surface of the brain region under investigation. We propose the use of new statistical methods for each of these procedures. We first use a heteroscedastic linear model to test the associations between the morphological measures at each voxel on the surface of the specified subregion (e.g., cortical or subcortical surfaces) and the covariates of interest. Moreover, we develop a robust test procedure that is based on a resampling method, called wild bootstrapping. This procedure assesses the statistical significance of the associations between a measure of given brain structure and the covariates of interest. The value of this robust test procedure lies in its computational simplicity and in its applicability to a wide range of imaging data, including data from both anatomical and functional magnetic resonance imaging (fMRI). Simulation studies demonstrate that this robust test procedure can accurately control the family-wise error rate. We demonstrate the application of this robust test procedure to the detection of statistically significant differences in the morphology of the hippocampus over time across gender groups in a large sample of healthy subjects. PMID:17649909
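
    A compact sketch of a wild bootstrap test for a single regression slope, in the spirit of the resampling described above; it uses Rademacher multipliers on residuals fitted under the null and is illustrative, not the authors' procedure:

      # Wild bootstrap p-value for the slope in y = b0 + b1*x + error,
      # using Rademacher signs on residuals from the null model (b1 = 0).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 40
      x = rng.normal(size=n)
      y = 0.5 * x + rng.normal(size=n) * (1 + np.abs(x))   # heteroscedastic toy data

      def slope(x, y):
          X = np.column_stack([np.ones_like(x), x])
          return np.linalg.lstsq(X, y, rcond=None)[0][1]

      b1_obs = slope(x, y)
      resid_null = y - y.mean()                 # residuals under H0: slope = 0
      n_boot, count = 2000, 0
      for _ in range(n_boot):
          signs = rng.choice([-1.0, 1.0], size=n)          # Rademacher weights
          y_star = y.mean() + signs * resid_null           # wild-bootstrap response
          if abs(slope(x, y_star)) >= abs(b1_obs):
              count += 1
      print(f"wild bootstrap p-value for the slope: {count / n_boot:.3f}")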

  2. United States Middle School Students' Perspectives on Learning Statistics

    ERIC Educational Resources Information Center

    Dwyer, Jerry; Moorhouse, Kim; Colwell, Malinda J.

    2009-01-01

    This paper describes an intervention at the 8th grade level where university mathematics researchers presented a series of lessons on introductory concepts in probability and statistics. Pre- and post-tests, and interviews were conducted to examine whether or not students at this grade level can understand these concepts. Students showed a…

  3. Statistical Significance and Effect Size: Two Sides of a Coin.

    ERIC Educational Resources Information Center

    Fan, Xitao

    This paper suggests that statistical significance testing and effect size are two sides of the same coin; they complement each other, but do not substitute for one another. Good research practice requires that both should be taken into consideration to make sound quantitative decisions. A Monte Carlo simulation experiment was conducted, and a…

  4. Asset Attribution Stability and Portfolio Construction: An Educational Example

    ERIC Educational Resources Information Center

    Chong, James T.; Jennings, William P.; Phillips, G. Michael

    2014-01-01

    This paper illustrates how a third statistic from asset pricing models, the R-squared statistic, may have information that can help in portfolio construction. Using a traditional CAPM model in comparison to an 18-factor Arbitrage Pricing Style Model, a portfolio separation test is conducted. Portfolio returns and risk metrics are compared using…

  5. Static renewal tests using Anodonta imbecillis (freshwater mussels). Anodonta imbecillis QA test 2, Clinch River-Environmental Restoration Program (CR-ERP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simbeck, D.J.

    1993-12-31

    Toxicity testing of split whole sediment samples using juvenile freshwater mussels (Anodonta imbecillis) was conducted by TVA to provide a quality assurance mechanism for test organism quality and overall performance of the test being conducted by CR-ERP personnel as part of the CR-ERP biomonitoring study of Clinch River sediments. Testing of sediment samples collected August 14 from Poplar Creek Miles 6.0 and 4.3 was conducted from August 24--September 2, 1993. Results from this test showed no toxicity (survival effects) to fresh-water mussels during a 9-day exposure to the sediments. Attachments to this report include: Chain of custody form -- original; Toxicity test bench sheets and statistical analyses; and Ammonia analysis request and results.

  6. Central Tire Inflation: Demonstration Tests in the South

    Treesearch

    R.B. Rummer; C. Ashmore; D.L. Sirois; C.L. Rawlins

    1990-01-01

    Tests of prototype Central Tire Inflation (CTI) systems were conducted to quantify CTI performance, road wear, and truck vibration. The CTI systems were tested in both experimental and operational settings. Changes in the road surface that occurred during the tests could not be statistically attributed to reduced tire pressure. Vibration at the seat base, however,...

  7. A comparison of face to face and group education on informed choice and decisional conflict of pregnant women about screening tests of fetal abnormalities.

    PubMed

    Kordi, Masoumeh; Riyazi, Sahar; Lotfalizade, Marziyeh; Shakeri, Mohammad Taghi; Suny, Hoseyn Jafari

    2018-01-01

    Screening for fetal anomalies is considered a necessary measure in antenatal care. Screening programs aim to empower individuals to make an informed choice. This study was conducted to compare the effects of group and face-to-face education on informed choice and decisional conflict among pregnant women regarding screening for fetal abnormalities. This clinical trial was carried out on 240 pregnant women at <10 weeks of pregnancy in health care centers in Mashhad city in 2014. The individual-midwifery information form, the informed choice questionnaire, and the decisional conflict scale were used as data collection tools. The face-to-face and group education courses were held in two weekly sessions for the intervention groups during two consecutive weeks, and usual care was provided for the control group. Informed choice and decisional conflict were measured in the pregnant women before education and again at weeks 20-22 of pregnancy in all three groups. Data analysis was performed using SPSS statistical software (version 16), and the statistical tests applied included the Chi-square test, Kruskal-Wallis test, Wilcoxon test, Mann-Whitney U-test, one-way analysis of variance, and Tukey's range test. P < 0.05 was considered significant. The results showed a statistically significant difference between the three groups in the frequency of informed choice about screening for fetal abnormalities (P = 0.001): after the intervention, 62 participants (77.5%) in the face-to-face education group, 64 (80%) in the group education class, and 20 (25%) in the control group made an informed choice regarding the screening tests, but there was no statistically significant difference between the individual and group education classes. Similarly, after the intervention there was a statistically significant difference in the mean decisional conflict scale score among the pregnant women regarding the screening tests across the three groups (P = 0.001). Given the effectiveness of both group and face-to-face education in increasing informed choice and reducing decisional conflict in pregnant women regarding screening tests, either education method may be employed, according to clinical conditions and requirements, to encourage women to undertake the screening tests.

  8. Statistical Estimation of Heterogeneities: A New Frontier in Well Testing

    NASA Astrophysics Data System (ADS)

    Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.

    2001-12-01

    Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.

  9. Autoregressive statistical pattern recognition algorithms for damage detection in civil structures

    NASA Astrophysics Data System (ADS)

    Yao, Ruigen; Pakzad, Shamim N.

    2012-08-01

    Statistical pattern recognition has recently emerged as a promising set of complementary methods to system identification for automatic structural damage assessment. Its essence is to use well-known concepts in statistics for boundary definition of different pattern classes, such as those for damaged and undamaged structures. In this paper, several statistical pattern recognition algorithms using autoregressive models, including statistical control charts and hypothesis testing, are reviewed as potentially competitive damage detection techniques. To enhance the performance of statistical methods, new feature extraction techniques using model spectra and residual autocorrelation, together with resampling-based threshold construction methods, are proposed. Subsequently, simulated acceleration data from a multi degree-of-freedom system is generated to test and compare the efficiency of the existing and proposed algorithms. Data from laboratory experiments conducted on a truss and a large-scale bridge slab model are then used to further validate the damage detection methods and demonstrate the superior performance of proposed algorithms.
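
    A rough sketch of an autoregressive-residual damage feature, in the spirit of the methods reviewed above. The simulated signals, model order, and ratio-based feature are assumptions for illustration, not the authors' algorithm; in practice a control limit would be set from baseline segments (e.g., by resampling, as the paper proposes).

    ```python
    import numpy as np

    def ar_design(x, p):
        """Build the lagged design matrix and target for an AR(p) least-squares fit."""
        n = len(x)
        X = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
        return X, x[p:]

    rng = np.random.default_rng(1)
    n, p = 2000, 4
    e = rng.normal(0, 1, n)

    # Baseline ("undamaged") response: a stable AR(2)-like process.
    base = np.zeros(n)
    for t in range(2, n):
        base[t] = 1.2 * base[t - 1] - 0.5 * base[t - 2] + e[t]

    Xb, yb = ar_design(base, p)
    a_hat, *_ = np.linalg.lstsq(Xb, yb, rcond=None)     # baseline AR coefficients
    sigma0 = np.std(yb - Xb @ a_hat)                    # baseline residual scale

    def damage_feature(x):
        """Residual-scale ratio under the baseline model; values well above 1
        suggest a departure from the baseline condition (possible damage)."""
        X, y = ar_design(x, p)
        return np.std(y - X @ a_hat) / sigma0

    # "Damaged" response: slightly altered dynamics driven by the same noise.
    dmg = np.zeros(n)
    for t in range(2, n):
        dmg[t] = 0.8 * dmg[t - 1] - 0.5 * dmg[t - 2] + e[t]

    print("baseline feature:", round(damage_feature(base), 3),
          " damaged feature:", round(damage_feature(dmg), 3))
    ```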

  10. Statistical analysis of time transfer data from Timation 2. [US Naval Observatory and Australia

    NASA Technical Reports Server (NTRS)

    Luck, J. M.; Morgan, P.

    1974-01-01

    Between July 1973 and January 1974, three time transfer experiments using the Timation 2 satellite were conducted to measure time differences between the U.S. Naval Observatory and Australia. Statistical tests showed that the results are unaffected by the satellite's position with respect to the sunrise/sunset line or by its closest approach azimuth at the Australian station. Further tests revealed that forward predictions of time scale differences, based on the measurements, can be made with high confidence.

  11. Marketing of Personalized Cancer Care on the Web: An Analysis of Internet Websites

    PubMed Central

    Cronin, Angel; Bair, Elizabeth; Lindeman, Neal; Viswanath, Vish; Janeway, Katherine A.

    2015-01-01

    Internet marketing may accelerate the use of care based on genomic or tumor-derived data. However, online marketing may be detrimental if it endorses products of unproven benefit. We conducted an analysis of Internet websites to identify personalized cancer medicine (PCM) products and claims. A Delphi Panel categorized PCM as standard or nonstandard based on evidence of clinical utility. Fifty-five websites, sponsored by commercial entities, academic institutions, physicians, research institutes, and organizations, that marketed PCM included somatic (58%) and germline (20%) analysis, interpretive services (15%), and physicians/institutions offering personalized care (44%). Of 32 sites offering somatic analysis, 56% included specific test information (range 1–152 tests). All statistical tests were two-sided, and comparisons of website content were conducted using McNemar’s test. More websites contained information about the benefits than limitations of PCM (85% vs 27%, P < .001). Websites specifying somatic analysis were statistically significantly more likely to market one or more nonstandard tests as compared with standard tests (88% vs 44%, P = .04). PMID:25745021
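
    For context, a hedged sketch of McNemar's test for paired binary website attributes (e.g., mentions benefits vs. mentions limitations). The 2x2 counts are invented placeholders, not the study's data.

    ```python
    import numpy as np
    from scipy.stats import chi2

    # Paired binary attributes per website (hypothetical counts):
    # rows = mentions benefits (yes/no), columns = mentions limitations (yes/no).
    table = np.array([[15, 33],
                      [ 4,  8]])
    b, c = table[0, 1], table[1, 0]               # discordant pairs
    stat = (abs(b - c) - 1) ** 2 / (b + c)        # continuity-corrected McNemar statistic
    p = chi2.sf(stat, df=1)
    print(f"McNemar chi-square = {stat:.2f}, p = {p:.4f}")
    ```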

  12. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
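
    A short sketch of the equivalent-ensemble idea under stated assumptions: one long record is segmented into sample records, and the time invariance of segment averages is checked with a simple trend test. The signal, segment length, and trend-based check are illustrative choices, not the authors' exact variance tests.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    x = rng.normal(0.0, 1.0, 60_000)      # stand-in for a long random-process record
    m, L = 30, 2000                       # 30 sample records of length 2000
    segments = x[: m * L].reshape(m, L)
    seg_means = segments.mean(axis=1)

    # Time invariance of the equivalent-ensemble average: regress segment means on
    # segment index; a slope near zero (large p) is consistent with weak stationarity.
    slope, intercept, r, p_trend, se = stats.linregress(np.arange(m), seg_means)
    print(f"trend in segment means: slope = {slope:.2e}, p = {p_trend:.3f}")

    # Heuristic ergodicity check: equivalent-ensemble average vs. one record's time average.
    print("ensemble mean:", seg_means.mean(), "  time average of record 0:", segments[0].mean())
    ```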

  13. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers

    ERIC Educational Resources Information Center

    Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-01-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…

  14. From Research to Practice: Basic Mathematics Skills and Success in Introductory Statistics

    ERIC Educational Resources Information Center

    Lunsford, M. Leigh; Poplin, Phillip

    2011-01-01

    Based on previous research of Johnson and Kuennen (2006), we conducted a study to determine factors that would possibly predict student success in an introductory statistics course. Our results were similar to Johnson and Kuennen in that we found students' basic mathematical skills, as measured on a test created by Johnson and Kuennen, were a…

  15. Quality Assurance for Rapid Airfield Construction

    DTIC Science & Technology

    2008-05-01

    necessary to conduct a volume-replacement density test for in-place soil. This density test, which was developed during this investigation, involves...the test both simpler and quicker. The Clegg hammer results are the primary means of judging compaction; thus, the requirements for density tests are...minimized through a stepwise acceptance procedure. Statistical criteria for evaluating Clegg hammer and density measurements are also included

  16. Effective Thermal Conductivity of an Aluminum Foam + Water Two Phase System

    NASA Technical Reports Server (NTRS)

    Moskito, John

    1996-01-01

    This study examined the effect of volume fraction and pore size on the effective thermal conductivity of an aluminum foam and water system. Nine specimens of aluminum foam representing a matrix of three volume fractions (4-8% by vol.) and three pore sizes (2-4 mm) were tested with water to determine relationships to the effective thermal conductivity. It was determined that increases in volume fraction of the aluminum phase were correlated to increases in the effective thermal conductivity. It was not statistically possible to prove that changes in pore size of the aluminum foam correlated to changes in the effective thermal conductivity. However, interaction effects between the volume fraction and pore size of the foam were statistically significant. Ten theoretical models were selected from the published literature to compare against the experimental data. Models by Asaad, Hadley, and de Vries provided effective thermal conductivity predictions within a 95% confidence interval.

  17. Immersive Theater - a Proven Way to Enhance Learning Retention

    NASA Astrophysics Data System (ADS)

    Reiff, P. H.; Zimmerman, L.; Spillane, S.; Sumners, C.

    2014-12-01

    The portable immersive theater has gone from our first demonstration at fall AGU 2003 to a product offered by multiple companies in various versions to literally millions of users per year. As part of our NASA funded outreach program, we conducted a test of learning in a portable Discovery Dome as contrasted with learning the same materials (visuals and sound track) on a computer screen. We tested 200 middle school students (primarily underserved minorities). Paired t-tests and an independent t-test were used to compare the amount of learning that students achieved. Interest questionnaires were administered to participants in formal (public school) settings and focus groups were conducted in informal (museum camp and educational festival) settings. Overall results from the informal and formal educational setting indicated that there was a statistically significant increase in test scores after viewing We Choose Space. There was a statistically significant increase in test scores for students who viewed We Choose Space in the portable Discovery Dome (9.75) as well as with the computer (8.88). However, long-term retention of the material tested on the questionnaire indicated that for students who watched We Choose Space in the portable Discovery Dome, there was a statistically significant long-term increase in test scores (10.47), whereas, six weeks after learning on the computer, the improvements over the initial baseline (3.49) were far less and were not statistically significant. The test score improvement six weeks after learning in the dome was essentially the same as the post test immediately after watching the show, demonstrating virtually no loss of gained information in the six week interval. In the formal educational setting, approximately 34% of the respondents indicated that they wanted to learn more about becoming a scientist, while 35% expressed an interest in a career in space science. In the informal setting, 26% indicated that they were interested in pursuing a career in space science.

  18. Testing homogeneity of proportion ratios for stratified correlated bilateral data in two-arm randomized clinical trials.

    PubMed

    Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai

    2014-11-10

    Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data based on an equal correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on score statistic performs well generally and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least square estimate and logarithmic transformation with Mantel-Haenszel estimate are recommended as they do not involve any computation of maximum likelihood estimates requiring iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.

  19. Validation of contractor HMA testing data in the materials acceptance process - phase II : final report.

    DOT National Transportation Integrated Search

    2016-08-01

    This study conducted an analysis of the SCDOT HMA specification. A Research Steering Committee provided oversight of the process. The research process included extensive statistical analyses of test data supplied by SCDOT. A total of 2,789 AC tes...

  20. Aircraft Maneuvers for the Evaluation of Flying Qualities and Agility. Volume 1. Maneuver Development Process and Initial Maneuver Set

    DTIC Science & Technology

    1993-08-01

    subtitled "Simulation Data," consists of detailed infonrnation on the design parmneter variations tested, subsequent statistical analyses conducted...used with confidence during the design process. The data quality can be examined in various forms such as statistical analyses of measure of merit data...merit, such as time to capture or nmaximurn pitch rate, can be calculated from the simulation time history data. Statistical techniques are then used

  1. The stability of hydrogen ion and specific conductance in filtered wet-deposition samples stored at ambient temperatures

    USGS Publications Warehouse

    Gordon, J.D.; Schroder, L.J.; Morden-Moore, A. L.; Bowersox, V.C.

    1995-01-01

    Separate experiments by the U.S. Geological Survey (USGS) and the Illinois State Water Survey Central Analytical Laboratory (CAL) independently assessed the stability of hydrogen ion and specific conductance in filtered wet-deposition samples stored at ambient temperatures. The USGS experiment represented a test of sample stability under a diverse range of conditions, whereas the CAL experiment was a controlled test of sample stability. In the experiment by the USGS, a statistically significant (α = 0.05) relation between [H+] and time was found for the composited filtered, natural, wet-deposition solution when all reported values are included in the analysis. However, if two outlying pH values most likely representing measurement error are excluded from the analysis, the change in [H+] over time was not statistically significant. In the experiment by the CAL, randomly selected samples were reanalyzed between July 1984 and February 1991. The original analysis and reanalysis pairs revealed that [H+] differences, although very small, were statistically different from zero, whereas specific-conductance differences were not. Nevertheless, the results of the CAL reanalysis project indicate there appears to be no consistent, chemically significant degradation in sample integrity with regard to [H+] and specific conductance while samples are stored at room temperature at the CAL. Based on the results of the CAL and USGS studies, short-term (45-60 day) stability of [H+] and specific conductance in natural filtered wet-deposition samples that are shipped and stored unchilled at ambient temperatures was satisfactory.

  2. Compressor seal rub energetics study

    NASA Technical Reports Server (NTRS)

    Laverty, W. F.

    1978-01-01

    The rub mechanics of compressor abradable blade tip seals at simulated engine conditions were investigated. Twelve statistically planned, instrumented rub tests were conducted with titanium blades and Feltmetal fibermetal rubstrips. The tests were conducted with single stationary blades rubbing against seal material bonded to rotating test disks. The instantaneous rub torque, speed, incursion rate and blade temperatures were continuously measured and recorded. Basic rub parameters (incursion rate, rub depth, abradable density, blade thickness and rub velocity) were varied to determine the effects on rub energy and heat split between the blade, rubstrip surface and rub debris. The test data was reduced, energies were determined and statistical analyses were completed to determine the primary and interactive effects. Wear surface morphology, profile measurements and metallographic analysis were used to determine wear, glazing, melting and material transfer. The rub energies for these tests were most significantly affected by the incursion rate while rub velocity and blade thickness were of secondary importance. The ratios of blade wear to seal wear were representative of those experienced in engine operation of these seal system materials.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simbeck, D.J.

    Clinch River-Environmental Restoration Program (CR-ERP) personnel and Tennessee Valley Authority (TVA) personnel conducted a study during the week of October 21--28, 1993. The organisms specified for testing were larval fathead minnows, Pimephales promelas, and the daphnid, Ceriodaphnia dubia. Due to serious reproduction/embryo abortion problems with the TVA daphnid cultures, TVA conducted tests during this study period using only fathead minnows. Surface water samples were collected by TVA Field Engineering personnel from Poplar Creek Mile 2.9, Mile 4.3, and Mile 5.1 on October 20, 22, and 25. Samples were split and provided to the CR-ERP and TVA toxicology laboratories for testing. Exposure of test organisms to these samples resulted in no toxicity (survival or growth) in testing conducted by TVA. Attachments to this report include: Chain of custody forms -- originals; Toxicity test bench sheets and statistical analyses; and Reference toxicant test information.

  4. Monte Carlo based statistical power analysis for mediation models: methods and software.

    PubMed

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
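
    A minimal Monte Carlo power sketch for a simple mediation model (X -> M -> Y) using a percentile-bootstrap test of the indirect effect. The path coefficients, sample size, and replication counts are assumptions for illustration; the paper itself provides the R package bmem for this purpose.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    a, b, cp = 0.39, 0.39, 0.0            # assumed path coefficients (no direct effect)
    n, n_rep, n_boot = 100, 200, 500      # sample size, Monte Carlo reps, bootstrap reps

    def indirect_effect(x, m, y):
        a_hat = np.polyfit(x, m, 1)[0]                   # slope of M ~ X
        X = np.column_stack([np.ones(len(x)), m, x])     # Y ~ M + X
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return a_hat * beta[1]                           # a * b

    hits = 0
    for _ in range(n_rep):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + cp * x + rng.normal(size=n)
        boot = np.empty(n_boot)
        for j in range(n_boot):
            idx = rng.integers(0, n, n)                  # nonparametric bootstrap resample
            boot[j] = indirect_effect(x[idx], m[idx], y[idx])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        hits += (lo > 0) or (hi < 0)                     # CI excludes zero -> reject H0

    print("estimated power:", hits / n_rep)
    ```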

  5. Guidelines for Using the "Q" Test in Meta-Analysis

    ERIC Educational Resources Information Center

    Maeda, Yukiko; Harwell, Michael R.

    2016-01-01

    The "Q" test is regularly used in meta-analysis to examine variation in effect sizes. However, the assumptions of "Q" are unlikely to be satisfied in practice prompting methodological researchers to conduct computer simulation studies examining its statistical properties. Narrative summaries of this literature are available but…

  6. Exploiting excess sharing: a more powerful test of linkage for affected sib pairs than the transmission/disequilibrium test.

    PubMed Central

    Wicks, J

    2000-01-01

    The transmission/disequilibrium test (TDT) is a popular, simple, and powerful test of linkage, which can be used to analyze data consisting of transmissions to the affected members of families with any kind of pedigree structure, including affected sib pairs (ASPs). Although it is based on the preferential transmission of a particular marker allele across families, it is not a valid test of association for ASPs. Martin et al. devised a similar statistic for ASPs, Tsp, which is also based on preferential transmission of a marker allele but which is a valid test of both linkage and association for ASPs. It is, however, less powerful than the TDT as a test of linkage for ASPs. What I show is that the differences between the TDT and Tsp are due to the fact that, although both statistics are based on preferential transmission of a marker allele, the TDT also exploits excess sharing in identity-by-descent transmissions to ASPs. Furthermore, I show that both of these statistics are members of a family of "TDT-like" statistics for ASPs. The statistics in this family are based on preferential transmission but also, to varying extents, exploit excess sharing. From this family of statistics, we see that, although the TDT exploits excess sharing to some extent, it is possible to do so to a greater extent, and thus produce a more powerful test of linkage for ASPs than is provided by the TDT. Power simulations conducted under a number of disease models are used to verify that the most powerful member of this family of TDT-like statistics is more powerful than the TDT for ASPs. PMID:10788332

  7. Exploiting excess sharing: a more powerful test of linkage for affected sib pairs than the transmission/disequilibrium test.

    PubMed

    Wicks, J

    2000-06-01

    The transmission/disequilibrium test (TDT) is a popular, simple, and powerful test of linkage, which can be used to analyze data consisting of transmissions to the affected members of families with any kind of pedigree structure, including affected sib pairs (ASPs). Although it is based on the preferential transmission of a particular marker allele across families, it is not a valid test of association for ASPs. Martin et al. devised a similar statistic for ASPs, Tsp, which is also based on preferential transmission of a marker allele but which is a valid test of both linkage and association for ASPs. It is, however, less powerful than the TDT as a test of linkage for ASPs. What I show is that the differences between the TDT and Tsp are due to the fact that, although both statistics are based on preferential transmission of a marker allele, the TDT also exploits excess sharing in identity-by-descent transmissions to ASPs. Furthermore, I show that both of these statistics are members of a family of "TDT-like" statistics for ASPs. The statistics in this family are based on preferential transmission but also, to varying extents, exploit excess sharing. From this family of statistics, we see that, although the TDT exploits excess sharing to some extent, it is possible to do so to a greater extent, and thus produce a more powerful test of linkage for ASPs than is provided by the TDT. Power simulations conducted under a number of disease models are used to verify that the most powerful member of this family of TDT-like statistics is more powerful than the TDT for ASPs.

  8. An investigative comparison of purging and non-purging groundwater sampling methods in Karoo aquifer monitoring wells

    NASA Astrophysics Data System (ADS)

    Gomo, M.; Vermeulen, D.

    2015-03-01

    An investigation was conducted to statistically compare the influence of non-purging and purging groundwater sampling methods on analysed inorganic chemistry parameters and calculated saturation indices. Groundwater samples were collected from 15 monitoring wells drilled in Karoo aquifers before and after purging for the comparative study. For the non-purging method, samples were collected from groundwater flow zones located in the wells using electrical conductivity (EC) profiling. The two data sets of non-purged and purged groundwater samples were analysed for inorganic chemistry parameters at the Institute of Groundwater Studies (IGS) laboratory of the Free University in South Africa. Saturation indices for mineral phases found in the database of the PHREEQC hydrogeochemical model were calculated for each data set. Four one-way ANOVA tests were conducted using Microsoft Excel 2007 to investigate whether there was any statistically significant difference between: (1) all inorganic chemistry parameters measured in the non-purged and purged groundwater samples per each specific well, (2) all mineral saturation indices calculated for the non-purged and purged groundwater samples per each specific well, (3) individual inorganic chemistry parameters measured in the non-purged and purged groundwater samples across all wells, and (4) individual mineral saturation indices calculated for non-purged and purged groundwater samples across all wells. For all the ANOVA tests conducted, the calculated p values were greater than 0.05 (the significance level) and the test statistic (F) was less than the critical value (Fcrit) (F < Fcrit). The results imply that there was no statistically significant difference between the two data sets. With 95% confidence, it was therefore concluded that the variance between groups was due to random chance rather than to the influence of the sampling methods (the tested factor). It is therefore possible that in some hydrogeologic conditions, non-purged groundwater samples might be just as representative as the purged ones. The findings of this study can provide an important platform for future evidence-oriented research investigations to establish the necessity of purging prior to groundwater sampling in different aquifer systems.
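
    A sketch of one of the one-way ANOVA comparisons described above (non-purged vs. purged samples for a single parameter). The concentrations are fabricated placeholders, not the study's data.

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    non_purged = np.array([412, 398, 405, 421, 409, 415])   # hypothetical EC values, uS/cm
    purged     = np.array([408, 402, 399, 418, 411, 420])

    F, p = f_oneway(non_purged, purged)
    print(f"F = {F:.3f}, p = {p:.3f}")    # p > 0.05 -> no significant difference detected
    ```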

  9. Variability of streambed hydraulic conductivity in an intermittent stream reach regulated by Vented Dams: A case study

    NASA Astrophysics Data System (ADS)

    Naganna, Sujay Raghavendra; Deka, Paresh Chandra

    2018-07-01

    The hydro-geological properties of streambed together with the hydraulic gradients determine the fluxes of water, energy and solutes between the stream and underlying aquifer system. Dam induced sedimentation affects hyporheic processes and alters substrate pore space geometries in the course of progressive stabilization of the sediment layers. Uncertainty in stream-aquifer interactions arises from the inherent complex-nested flow paths and spatio-temporal variability of streambed hydraulic properties. A detailed field investigation of streambed hydraulic conductivity (Ks) using Guelph Permeameter was carried out in an intermittent stream reach of the Pavanje river basin located in the mountainous, forested tract of western ghats of India. The present study reports the spatial and temporal variability of streambed hydraulic conductivity along the stream reach obstructed by two Vented Dams in sequence. Statistical tests such as Levene's and Welch's t-tests were employed to check for various variability measures. The strength of spatial dependence and the presence of spatial autocorrelation among the streambed Ks samples were tested by using Moran's I statistic. The measures of central tendency and dispersion pointed out reasonable spatial variability in Ks distribution throughout the study reach during two consecutive years 2016 and 2017. The streambed was heterogeneous with regard to hydraulic conductivity distribution with high-Ks zones near the backwater areas of the vented dam and low-Ks zones particularly at the tail water section of vented dams. Dam operational strategies were responsible for seasonal fluctuations in sedimentation and modifications to streambed substrate characteristics (such as porosity, grain size, packing etc.), resulting in heterogeneous streambed Ks profiles. The channel downstream of vented dams contained significantly more cohesive deposits of fine sediment due to the overflow of surplus suspended sediment-laden water at low velocity and pressure head. The statistical test results accept the hypothesis of significant spatial variability of streambed Ks but refuse to accept the temporal variations. The deterministic and geo-statistical approaches of spatial interpolation provided virtuous surface maps of streambed Ks distribution.
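
    The Levene and Welch tests mentioned above can be illustrated with a short sketch; the hydraulic conductivity values below are hypothetical, not the field measurements.

    ```python
    import numpy as np
    from scipy import stats

    ks_2016 = np.array([2.1, 3.4, 1.8, 2.9, 4.2, 2.5, 3.1])   # hypothetical Ks, m/day
    ks_2017 = np.array([2.4, 3.0, 2.0, 3.3, 3.9, 2.2, 2.8])

    W, p_levene = stats.levene(ks_2016, ks_2017)                      # equality of variances
    t, p_welch = stats.ttest_ind(ks_2016, ks_2017, equal_var=False)   # Welch's t-test
    print(f"Levene: W = {W:.2f}, p = {p_levene:.3f}")
    print(f"Welch:  t = {t:.2f}, p = {p_welch:.3f}")
    ```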

  10. Mysid (Mysidopsis bahia) life-cycle test: Design comparisons and assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lussier, S.M.; Champlin, D.; Kuhn, A.

    1996-12-31

    This study examines ASTM Standard E1191-90, "Standard Guide for Conducting Life-cycle Toxicity Tests with Saltwater Mysids," 1990, using Mysidopsis bahia, by comparing several test designs to assess growth, reproduction, and survival. The primary objective was to determine the most labor efficient and statistically powerful test design for the measurement of statistically detectable effects on biologically sensitive endpoints. Five different test designs were evaluated varying compartment size, number of organisms per compartment and sex ratio. Results showed that while paired organisms in the ASTM design had the highest rate of reproduction among designs tested, no individual design had greater statistical power to detect differences in reproductive effects. Reproduction was not statistically different between organisms paired in the ASTM design and those with randomized sex ratios using larger test compartments. These treatments had numerically higher reproductive success and lower within tank replicate variance than treatments using smaller compartments where organisms were randomized, or had a specific sex ratio. In this study, survival and growth were not statistically different among designs tested. Within tank replicate variability can be reduced by using many exposure compartments with pairs, or few compartments with many organisms in each. While this improves variance within replicate chambers, it does not strengthen the power of detection among treatments in the test. An increase in the number of true replicates (exposure chambers) to eight will have the effect of reducing the percent detectable difference by a factor of two.

  11. Robust multivariate nonparametric tests for detection of two-sample location shift in clinical trials

    PubMed Central

    Jiang, Xuejun; Guo, Xu; Zhang, Ning; Wang, Bo

    2018-01-01

    This article presents and investigates performance of a series of robust multivariate nonparametric tests for detection of location shift between two multivariate samples in randomized controlled trials. The tests are built upon robust estimators of distribution locations (medians, Hodges-Lehmann estimators, and an extended U statistic) with both unscaled and scaled versions. The nonparametric tests are robust to outliers and do not assume that the two samples are drawn from multivariate normal distributions. Bootstrap and permutation approaches are introduced for determining the p-values of the proposed test statistics. Simulation studies are conducted and numerical results are reported to examine performance of the proposed statistical tests. The numerical results demonstrate that the robust multivariate nonparametric tests constructed from the Hodges-Lehmann estimators are more efficient than those based on medians and the extended U statistic. The permutation approach can provide a more stringent control of Type I error and is generally more powerful than the bootstrap procedure. The proposed robust nonparametric tests are applied to detect multivariate distributional difference between the intervention and control groups in the Thai Healthy Choices study and examine the intervention effect of a four-session motivational interviewing-based intervention developed in the study to reduce risk behaviors among youth living with HIV. PMID:29672555
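
    A hedged sketch of a permutation test for a multivariate location shift, using the Euclidean norm of the difference in component-wise medians as a robust statistic (a simplification of the estimators studied in the paper); the simulated data and the shift are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n1, n2, d = 40, 45, 3
    x = rng.standard_normal((n1, d))
    y = rng.standard_normal((n2, d)) + np.array([0.5, 0.0, 0.3])   # shifted second group

    def location_stat(a, b):
        """Euclidean norm of the difference in component-wise medians."""
        return np.linalg.norm(np.median(a, axis=0) - np.median(b, axis=0))

    obs = location_stat(x, y)
    pooled = np.vstack([x, y])
    B = 2000
    perm = np.empty(B)
    for i in range(B):
        idx = rng.permutation(n1 + n2)                 # relabel group membership
        perm[i] = location_stat(pooled[idx[:n1]], pooled[idx[n1:]])

    p = (1 + np.sum(perm >= obs)) / (B + 1)
    print(f"observed statistic = {obs:.3f}, permutation p = {p:.4f}")
    ```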

  12. Statistical methods for the beta-binomial model in teratology.

    PubMed Central

    Yamamoto, E; Yanagimoto, T

    1994-01-01

    The beta-binomial model is widely used for analyzing teratological data involving littermates. Recent developments in statistical analyses of teratological data are briefly reviewed with emphasis on the model. For statistical inference of the parameters in the beta-binomial distribution, separation of the likelihood introduces a likelihood inference. This leads to reducing biases of estimators and also to improving accuracy of empirical significance levels of tests. Separate inference of the parameters can be conducted in a unified way. PMID:8187716

  13. Gene-Based Association Analysis for Censored Traits Via Fixed Effect Functional Regressions.

    PubMed

    Fan, Ruzong; Wang, Yifan; Yan, Qi; Ding, Ying; Weeks, Daniel E; Lu, Zhaohui; Ren, Haobo; Cook, Richard J; Xiong, Momiao; Swaroop, Anand; Chew, Emily Y; Chen, Wei

    2016-02-01

    Genetic studies of survival outcomes have been proposed and conducted recently, but statistical methods for identifying genetic variants that affect disease progression are rarely developed. Motivated by our ongoing real studies, here we develop Cox proportional hazard models using functional regression (FR) to perform gene-based association analysis of survival traits while adjusting for covariates. The proposed Cox models are fixed effect models where the genetic effects of multiple genetic variants are assumed to be fixed. We introduce likelihood ratio test (LRT) statistics to test for associations between the survival traits and multiple genetic variants in a genetic region. Extensive simulation studies demonstrate that the proposed Cox FR LRT statistics have well-controlled type I error rates. To evaluate power, we compare the Cox FR LRT with the previously developed burden test (BT) in a Cox model and sequence kernel association test (SKAT), which is based on mixed effect Cox models. The Cox FR LRT statistics have higher power than or similar power to Cox SKAT LRT except when 50%/50% causal variants had negative/positive effects and all causal variants are rare. In addition, the Cox FR LRT statistics have higher power than Cox BT LRT. The models and related test statistics can be useful in the whole genome and whole exome association studies. An age-related macular degeneration dataset was analyzed as an example. © 2016 WILEY PERIODICALS, INC.

  14. Gene-based Association Analysis for Censored Traits Via Fixed Effect Functional Regressions

    PubMed Central

    Fan, Ruzong; Wang, Yifan; Yan, Qi; Ding, Ying; Weeks, Daniel E.; Lu, Zhaohui; Ren, Haobo; Cook, Richard J; Xiong, Momiao; Swaroop, Anand; Chew, Emily Y.; Chen, Wei

    2015-01-01

    Genetic studies of survival outcomes have been proposed and conducted recently, but statistical methods for identifying genetic variants that affect disease progression are rarely developed. Motivated by our ongoing real studies, we develop here Cox proportional hazard models using functional regression (FR) to perform gene-based association analysis of survival traits while adjusting for covariates. The proposed Cox models are fixed effect models where the genetic effects of multiple genetic variants are assumed to be fixed. We introduce likelihood ratio test (LRT) statistics to test for associations between the survival traits and multiple genetic variants in a genetic region. Extensive simulation studies demonstrate that the proposed Cox FR LRT statistics have well-controlled type I error rates. To evaluate power, we compare the Cox FR LRT with the previously developed burden test (BT) in a Cox model and sequence kernel association test (SKAT), which is based on mixed effect Cox models. The Cox FR LRT statistics have higher power than or similar power to Cox SKAT LRT except when 50%/50% causal variants had negative/positive effects and all causal variants are rare. In addition, the Cox FR LRT statistics have higher power than Cox BT LRT. The models and related test statistics can be useful in the whole genome and whole exome association studies. An age-related macular degeneration dataset was analyzed as an example. PMID:26782979

  15. A comparison of face to face and group education on informed choice and decisional conflict of pregnant women about screening tests of fetal abnormalities

    PubMed Central

    Kordi, Masoumeh; Riyazi, Sahar; Lotfalizade, Marziyeh; Shakeri, Mohammad Taghi; Suny, Hoseyn Jafari

    2018-01-01

    BACKGROUND AND GOAL: Screening for fetal anomalies is considered a necessary measure in antenatal care. Screening programs aim to empower individuals to make an informed choice. This study was conducted to compare the effects of group and face-to-face education on informed choice and decisional conflict among pregnant women regarding screening for fetal abnormalities. METHODS: This clinical trial was carried out on 240 pregnant women at <10 weeks of pregnancy in health care centers in Mashhad city in 2014. The individual-midwifery information form, the informed choice questionnaire, and the decisional conflict scale were used as data collection tools. The face-to-face and group education courses were held in two weekly sessions for the intervention groups during two consecutive weeks, and usual care was provided for the control group. Informed choice and decisional conflict were measured in the pregnant women before education and again at weeks 20–22 of pregnancy in all three groups. Data analysis was performed using SPSS statistical software (version 16), and the statistical tests applied included the Chi-square test, Kruskal–Wallis test, Wilcoxon test, Mann–Whitney U-test, one-way analysis of variance, and Tukey's range test. P < 0.05 was considered significant. RESULTS: The results showed a statistically significant difference between the three groups in the frequency of informed choice about screening for fetal abnormalities (P = 0.001): after the intervention, 62 participants (77.5%) in the face-to-face education group, 64 (80%) in the group education class, and 20 (25%) in the control group made an informed choice regarding the screening tests, but there was no statistically significant difference between the individual and group education classes. Similarly, after the intervention there was a statistically significant difference in the mean decisional conflict scale score among the pregnant women regarding the screening tests across the three groups (P = 0.001). DISCUSSION AND CONCLUSION: Given the effectiveness of both group and face-to-face education in increasing informed choice and reducing decisional conflict in pregnant women regarding screening tests, either education method may be employed, according to clinical conditions and requirements, to encourage women to undertake the screening tests. PMID:29417066

  16. Marketing of personalized cancer care on the web: an analysis of Internet websites.

    PubMed

    Gray, Stacy W; Cronin, Angel; Bair, Elizabeth; Lindeman, Neal; Viswanath, Vish; Janeway, Katherine A

    2015-05-01

    Internet marketing may accelerate the use of care based on genomic or tumor-derived data. However, online marketing may be detrimental if it endorses products of unproven benefit. We conducted an analysis of Internet websites to identify personalized cancer medicine (PCM) products and claims. A Delphi Panel categorized PCM as standard or nonstandard based on evidence of clinical utility. Fifty-five websites, sponsored by commercial entities, academic institutions, physicians, research institutes, and organizations, that marketed PCM included somatic (58%) and germline (20%) analysis, interpretive services (15%), and physicians/institutions offering personalized care (44%). Of 32 sites offering somatic analysis, 56% included specific test information (range 1-152 tests). All statistical tests were two-sided, and comparisons of website content were conducted using McNemar's test. More websites contained information about the benefits than limitations of PCM (85% vs 27%, P < .001). Websites specifying somatic analysis were statistically significantly more likely to market one or more nonstandard tests as compared with standard tests (88% vs 44%, P = .04). © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Key statistical and analytical issues for evaluating treatment effects in periodontal research.

    PubMed

    Tu, Yu-Kang; Gilthorpe, Mark S

    2012-06-01

    Statistics is an indispensible tool for evaluating treatment effects in clinical research. Due to the complexities of periodontal disease progression and data collection, statistical analyses for periodontal research have been a great challenge for both clinicians and statisticians. The aim of this article is to provide an overview of several basic, but important, statistical issues related to the evaluation of treatment effects and to clarify some common statistical misconceptions. Some of these issues are general, concerning many disciplines, and some are unique to periodontal research. We first discuss several statistical concepts that have sometimes been overlooked or misunderstood by periodontal researchers. For instance, decisions about whether to use the t-test or analysis of covariance, or whether to use parametric tests such as the t-test or its non-parametric counterpart, the Mann-Whitney U-test, have perplexed many periodontal researchers. We also describe more advanced methodological issues that have sometimes been overlooked by researchers. For instance, the phenomenon of regression to the mean is a fundamental issue to be considered when evaluating treatment effects, and collinearity amongst covariates is a conundrum that must be resolved when explaining and predicting treatment effects. Quick and easy solutions to these methodological and analytical issues are not always available in the literature, and careful statistical thinking is paramount when conducting useful and meaningful research. © 2012 John Wiley & Sons A/S.

  18. Effects of Long-Term Thermal Exposure on Commercially Pure Titanium Grade 2 Elevated-Temperature Tensile Properties

    NASA Technical Reports Server (NTRS)

    Ellis, David L.

    2012-01-01

    Elevated-temperature tensile testing of commercially pure titanium (CP Ti) Grade 2 was conducted for as-received commercially produced sheet and following thermal exposure at 550 and 650 K (531 and 711 F) for times up to 5000 h. The tensile testing revealed some statistical differences between the 11 thermal treatments, but most thermal treatments were statistically equivalent. Previous data from room temperature tensile testing was combined with the new data to allow regression and development of mathematical models relating tensile properties to temperature and thermal exposure. The results indicate that thermal exposure temperature has a very small effect, whereas the thermal exposure duration has no statistically significant effects on the tensile properties. These results indicate that CP Ti Grade 2 will be thermally stable and suitable for long-duration space missions.

  19. Learning and understanding the Kruskal-Wallis one-way analysis-of-variance-by-ranks test for differences among three or more independent groups.

    PubMed

    Chan, Y; Walmsley, R P

    1997-12-01

    When several treatment methods are available for the same problem, many clinicians are faced with the task of deciding which treatment to use. Many clinicians may have conducted informal "mini-experiments" on their own to determine which treatment is best suited for the problem. These results are usually not documented or reported in a formal manner because many clinicians feel that they are "statistically challenged." Another reason may be because clinicians do not feel they have controlled enough test conditions to warrant analysis. In this update, a statistic is described that does not involve complicated statistical assumptions, making it a simple and easy-to-use statistical method. This update examines the use of two statistics and does not deal with other issues that could affect clinical research such as issues affecting credibility. For readers who want a more in-depth examination of this topic, references have been provided. The Kruskal-Wallis one-way analysis-of-variance-by-ranks test (or H test) is used to determine whether three or more independent groups are the same or different on some variable of interest when an ordinal level of data or an interval or ratio level of data is available. A hypothetical example will be presented to explain when and how to use this statistic, how to interpret results using the statistic, the advantages and disadvantages of the statistic, and what to look for in a written report. This hypothetical example will involve the use of ratio data to demonstrate how to choose between using the nonparametric H test and the more powerful parametric F test.
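
    A short example of the Kruskal-Wallis H test with three hypothetical treatment groups (the scores are invented for illustration).

    ```python
    from scipy.stats import kruskal

    treatment_a = [14, 18, 20, 15, 17, 22]   # hypothetical outcome scores
    treatment_b = [12, 11, 15, 13, 16, 14]
    treatment_c = [19, 23, 21, 24, 20, 25]

    H, p = kruskal(treatment_a, treatment_b, treatment_c)
    print(f"H = {H:.2f}, p = {p:.4f}")       # small p -> at least one group differs
    ```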

  20. Semiquantitative determination of mesophilic, aerobic microorganisms in cocoa products using the Soleris NF-TVC method.

    PubMed

    Montei, Carolyn; McDougal, Susan; Mozola, Mark; Rice, Jennifer

    2014-01-01

    The Soleris Non-fermenting Total Viable Count method was previously validated for a wide variety of food products, including cocoa powder. A matrix extension study was conducted to validate the method for use with cocoa butter and cocoa liquor. Test samples included naturally contaminated cocoa liquor and cocoa butter inoculated with natural microbial flora derived from cocoa liquor. A probability of detection statistical model was used to compare Soleris results at multiple test thresholds (dilutions) with aerobic plate counts determined using the AOAC Official Method 966.23 dilution plating method. Results of the two methods were not statistically different at any dilution level in any of the three trials conducted. The Soleris method offers the advantage of results within 24 h, compared to the 48 h required by standard dilution plating methods.

  1. Measurements of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes.

    PubMed

    Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G

    2016-05-09

    The aim of this study was to evaluate the suitability of statistics as experimental precision degree measures for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics that were estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) to evaluate experimental precision in trials with cowpea genotypes.

  2. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

  3. Proceedings of the Conference on the Design of Experiments in Army Research, Development, and Testing (33rd)

    DTIC Science & Technology

    1988-05-01

    Evaluation Directorate (ARMTE) was tasked to conduct a "side-by-side" comparison of EMPS vs. DATMs and to conduct a human factors evaluation of the EMPS...performance ("side-by-side") comparison of EMPS vs. DATMs and to conduct a human factors evaluation. The performance evaluation was based on the speed... independent targets over time. To acquire data for this research, the BRL conducted a statistically designed experiment, the Firepower Control Experiment

  4. The Use of Peer Tutoring to Improve the Passing Rates in Mathematics Placement Exams of Engineering Students: A Success Story

    ERIC Educational Resources Information Center

    García, Rolando; Morales, Juan C.; Rivera, Gloribel

    2014-01-01

    This paper describes a highly successful peer tutoring program that has resulted in an improvement in the passing rates of mathematics placement exams from 16% to 42%, on average. Statistical analyses were conducted using a Chi-Squared (χ²) test for independence and the results were statistically significant (p-value much less than…
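
    A brief sketch of the chi-squared test of independence described above, comparing hypothetical pass/fail counts for tutored and non-tutored students; the counts are not the paper's data.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    #                  pass  fail        (hypothetical counts)
    table = np.array([[ 42,   58],       # with peer tutoring
                      [ 16,   84]])      # without peer tutoring
    chi2_stat, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2_stat:.2f}, dof = {dof}, p = {p:.4g}")
    ```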

  5. Statistical, graphical, and trend summaries of selected water-quality and streamflow data from the Trinity River near Crockett, Texas, 1964-85

    USGS Publications Warehouse

    Goss, Richard L.

    1987-01-01

    As part of the statistical summaries, trend tests were conducted. Several small uptrends were detected for total nitrogen, total organic nitrogen, total ammonia nitrogen, total nitrite nitrogen, total nitrate nitrogen, total organic plus ammonia nitrogen, total nitrite plus nitrate nitrogen, and total phosphorus. Small downtrends were detected for biochemical oxygen demand and dissolved magnesium.

  6. Hysteresis of unsaturated hydromechanical properties of a silty soil

    USGS Publications Warehouse

    Lu, Ning; Kaya, Murat; Collins, Brian D.; Godt, Jonathan W.

    2013-01-01

    Laboratory tests to examine hysteresis in the hydrologic and mechanical properties of partially saturated soils were conducted on six intact specimens collected from a landslide-prone area of Alameda County, California. The results reveal that the pore-size distribution parameter remains statistically unchanged between the wetting and drying paths; however, the wetting or drying state has a pronounced influence on the water-entry pressure, the water-filled porosity at zero suction, and the saturated hydraulic conductivity. The suction stress values obtained from the shear-strength tests under both natural moisture and resaturated conditions were mostly bounded by the suction stress characteristic curves (SSCCs) obtained from the hydrologic tests. This finding experimentally confirms that the soil-water retention curve, hydraulic conductivity function, and SSCC are intrinsically related.

  7. An investigation of a low-variability tire treadwear test procedure and of treadwear adjustment for ambient temperature. Volume 1 : the test procedures, statistical analyses, and the findings

    DOT National Transportation Integrated Search

    1985-01-01

    The program was conducted to evaluate the variation in tire treadwear rates as experienced on identical vehicles during the various environmental exposure conditions of the winter, spring, and summer seasons. The diurnal/nocturnal effect on the...

  8. Measurements in quantitative research: how to select and report on research instruments.

    PubMed

    Hagan, Teresa L

    2014-07-01

    Measures exist to numerically represent degrees of attributes. Quantitative research is based on measurement and is conducted in a systematic, controlled manner. These measures enable researchers to perform statistical tests, analyze differences between groups, and determine the effectiveness of treatments. If something is not measurable, it cannot be tested.

  9. Welding of AM350 and AM355 steel

    NASA Technical Reports Server (NTRS)

    Davis, R. J.; Wroth, R. S.

    1967-01-01

    A series of tests was conducted to establish optimum procedures for TIG welding and heat treating of AM350 and AM355 steel sheet in thicknesses ranging from 0.010 inch to 0.125 inch. Statistical analysis of the test data was performed to determine the anticipated minimum strength of the welded joints.

  10. Impact of clinical teaching on students' knowledge acquisition.

    PubMed

    Manzar, Shabih

    2003-08-01

    We are in the process of curriculum revision and for that we need to know the strengths and weaknesses of the current teaching program and the venue that may need more attention. To proceed with this aim, we conducted this study. The study was conducted on 2 groups of students rotating through nursery as a part of Pediatrics clerkship at King Faisal University, Dammam, KSA, during a 2-month study, April through May 2001. A 15-item questionnaire was developed for testing. By using a pre-test post-test model, we looked at the scores achieved by the students on the questionnaire before and after 2 weeks of intensive clinical teaching. In the first group of students, the mean percentage of correctly answered questions was higher in the post-test (78%) as compared to the pre-test (64%), which was statistically significant, p=0.02. A similar trend was noted in the second group, where the mean percentage of correctly answered questions was higher in the post-test (64%) as compared to the pre-test (78%), which was also statistically significant, p=0.004. We concluded that our method of clinical teaching followed during nursery rotation was effective in increasing students' knowledge. However, attention is needed on some topics in which students are noted to be relatively weak.
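
    The abstract compares mean pre- and post-test scores within each group but does not name the exact test used; a paired t test on simulated scores, shown below only as an illustration, is one common way such a pre/post comparison is made.

    # Illustrative paired pre-test/post-test comparison on simulated scores;
    # these are not the study's data, and the test choice is an assumption.
    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(0)
    pre = rng.normal(64, 10, size=30)          # hypothetical pre-test percentages
    post = pre + rng.normal(10, 8, size=30)    # hypothetical gain after teaching

    t_stat, p_value = ttest_rel(post, pre)
    print(f"mean pre = {pre.mean():.1f}, mean post = {post.mean():.1f}, p = {p_value:.3f}")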

  11. Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints.

    PubMed

    Tang, Nian-Sheng; Yu, Bin; Tang, Man-Lai

    2014-12-18

    A two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin due to ethical concerns. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation for two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods behave badly with small sample sizes in the three arms. This manuscript aims to develop some reliable small-sample methods to test three-arm non-inferiority. Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the prespecified nominal level than those of other test procedures. Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.
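
    As a rough illustration of the bootstrap-resampling idea for a three-arm binary-endpoint design, the sketch below resamples hypothetical arm-level outcomes and checks percentile lower bounds for non-inferiority and assay sensitivity; it is not the paper's saddlepoint, exact unconditional, or score-test procedure, and all counts and the margin are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    arms = {"experimental": (42, 50), "reference": (44, 50), "placebo": (20, 50)}  # hypothetical successes / n
    margin = 0.10   # hypothetical pre-specified non-inferiority margin

    samples = {name: np.r_[np.ones(x), np.zeros(n - x)] for name, (x, n) in arms.items()}

    def resampled_rates():
        # One bootstrap draw of the success rate in each arm (sampling with replacement)
        return {name: rng.choice(obs, obs.size).mean() for name, obs in samples.items()}

    draws = [resampled_rates() for _ in range(5000)]
    ni_lower = np.percentile([d["experimental"] - d["reference"] for d in draws], 2.5)
    as_lower = np.percentile([d["reference"] - d["placebo"] for d in draws], 2.5)
    print(f"lower bound p_exp - p_ref = {ni_lower:.3f} (non-inferior if > {-margin})")
    print(f"lower bound p_ref - p_plb = {as_lower:.3f} (assay sensitivity if > 0)")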

  12. Parametric Analysis to Study the Influence of Aerogel-Based Renders' Components on Thermal and Mechanical Performance.

    PubMed

    Ximenes, Sofia; Silva, Ana; Soares, António; Flores-Colen, Inês; de Brito, Jorge

    2016-05-04

    Statistical models using multiple linear regression are some of the most widely used methods to study the influence of independent variables in a given phenomenon. This study's objective is to understand the influence of the various components of aerogel-based renders on their thermal and mechanical performance, namely cement (three types), fly ash, aerial lime, silica sand, expanded clay, type of aerogel, expanded cork granules, expanded perlite, air entrainers, resins (two types), and rheological agent. The statistical analysis was performed using SPSS (Statistical Package for Social Sciences), based on 85 mortar mixes produced in the laboratory and on their values of thermal conductivity and compressive strength obtained using tests in small-scale samples. The results showed that aerial lime assumes the main role in improving the thermal conductivity of the mortars. Aerogel type, fly ash, expanded perlite and air entrainers are also relevant components for a good thermal conductivity. Expanded clay can improve the mechanical behavior and aerogel has the opposite effect.
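
    The study fitted its regressions in SPSS; the sketch below shows the same kind of multiple linear regression in Python on synthetic mix data, with made-up component names, dosages, and conductivity values standing in for the 85 laboratory mixes.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 85
    mixes = pd.DataFrame({
        "aerial_lime": rng.uniform(0, 30, n),      # hypothetical dosage (% of mix)
        "fly_ash": rng.uniform(0, 20, n),
        "expanded_perlite": rng.uniform(0, 10, n),
    })
    # Synthetic response loosely mimicking thermal conductivity (W/m.K)
    thermal_conductivity = (0.08 - 0.001 * mixes["aerial_lime"]
                            - 0.0005 * mixes["fly_ash"]
                            + rng.normal(0, 0.005, n))

    model = sm.OLS(thermal_conductivity, sm.add_constant(mixes)).fit()
    print(model.summary())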

  13. Parametric Analysis to Study the Influence of Aerogel-Based Renders’ Components on Thermal and Mechanical Performance

    PubMed Central

    Ximenes, Sofia; Silva, Ana; Soares, António; Flores-Colen, Inês; de Brito, Jorge

    2016-01-01

    Statistical models using multiple linear regression are some of the most widely used methods to study the influence of independent variables in a given phenomenon. This study’s objective is to understand the influence of the various components of aerogel-based renders on their thermal and mechanical performance, namely cement (three types), fly ash, aerial lime, silica sand, expanded clay, type of aerogel, expanded cork granules, expanded perlite, air entrainers, resins (two types), and rheological agent. The statistical analysis was performed using SPSS (Statistical Package for Social Sciences), based on 85 mortar mixes produced in the laboratory and on their values of thermal conductivity and compressive strength obtained using tests in small-scale samples. The results showed that aerial lime assumes the main role in improving the thermal conductivity of the mortars. Aerogel type, fly ash, expanded perlite and air entrainers are also relevant components for a good thermal conductivity. Expanded clay can improve the mechanical behavior and aerogel has the opposite effect. PMID:28773460

  14. Summary of Aquifer Test Data for Arkansas - 1940-2006

    USGS Publications Warehouse

    Pugh, Aaron L.

    2008-01-01

    As demands on Arkansas's ground water continue to increase, decision-makers need all available information to ensure the sustainability of this important natural resource. From 1940 through 2006, the U.S. Geological Survey conducted over 300 aquifer tests in Arkansas. Many of these data have never been published. This report presents the results from 206 of these aquifer tests from 21 different hydrogeologic units spread across 51 Arkansas counties. Ten of the hydrogeologic units are within the Atlantic Plain of Arkansas and consist mostly of unconsolidated and semi-consolidated deposits. The remaining 11 units are within the Interior Highlands consisting mainly of consolidated rock. Descriptive statistics are reported for each hydrogeologic unit with two or more tests, including the mean, minimum, median, maximum and standard deviation values for specific capacity, transmissivity, hydraulic conductivity, and storage coefficient. Hydraulic conductivity values for the major water-bearing hydrogeologic units are estimated because few conductivity values are recorded in the original records. Nearly all estimated hydraulic conductivity values agree with published hydraulic conductivity values based on the hydrogeologic unit material types. Similarly, because few specific capacity values were available in the original aquifer test records, specific capacity values are estimated for individual wells.
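
    The per-unit summary statistics described above can be reproduced on any table of test results with a simple grouped aggregation; the sketch below uses a made-up table, not the report's data.

    import pandas as pd

    tests = pd.DataFrame({
        "unit": ["A", "A", "A", "B", "B"],                       # hypothetical hydrogeologic units
        "hydraulic_conductivity_m_per_day": [35.0, 20.0, 60.0, 5.0, 9.0],
    })

    summary = (tests.groupby("unit")["hydraulic_conductivity_m_per_day"]
               .agg(["count", "mean", "min", "median", "max", "std"]))
    print(summary)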

  15. Static renewal tests using Anodonta imbecillis (freshwater mussels). Anodonta imbecillis copper sulfate reference toxicant/food test, Clinch River-Environmental Restoration Program (CR-ERP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simbeck, D.J.

    1993-12-31

    Reference toxicant testing using juvenile freshwater mussels was conducted as part of the CR-ERP biomonitoring study of Clinch River sediments to assess the sensitivity of test organisms and the overall performance of the test. Tests were conducted using moderately hard synthetic water spiked with known concentrations of copper as copper sulfate. Two different foods, phytoplankton and YCT-Selenastrum (YCT-S), were tested in side by side tests to compare food quality. Toxicity testing of copper sulfate reference toxicant was conducted from July 6--15, 1993. The organisms used for testing were juvenile fresh-water mussels (Anodonta imbecillis). Although significant reduction in growth, compared to the phytoplankton control, was seen in all treatments, including the YCT-S Control, the consequence of this observation has not been established. Ninety-day testing of juvenile mussels exhibited large variations in growth within treatment and replicate groups. Attachments to this report include: Toxicity test bench sheets and statistical analyses; and Copper analysis request and results.

  16. Systematic Field Study of NO(x) Emission Control Methods for Utility Boilers.

    ERIC Educational Resources Information Center

    Bartok, William; And Others

    A utility boiler field test program was conducted. The objectives were to determine new or improved NO (x) emission factors by fossil fuel type and boiler design, and to assess the scope of applicability of combustion modification techniques for controlling NO (x) emissions from such installations. A statistically designed test program was…

  17. metaCCA: summary statistics-based multivariate meta-analysis of genome-wide association studies using canonical correlation analysis.

    PubMed

    Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti

    2016-07-01

    A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts, and restricted availability of individual-level genotype-phenotype data across the cohorts limit conducting multivariate tests. We introduce metaCCA, a computational framework for summary statistics-based analysis of a single or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has a great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Code is available at https://github.com/aalto-ics-kepaco. Contact: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
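
    metaCCA itself works from summary statistics, but the underlying technique is canonical correlation analysis; the sketch below runs plain CCA on simulated individual-level genotype and trait matrices only to illustrate that core step, without the summary-statistics extension or covariance shrinkage.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(3)
    genotypes = rng.integers(0, 3, size=(500, 5)).astype(float)    # 5 hypothetical SNPs, 0/1/2 coding
    phenotypes = genotypes[:, :2] @ rng.normal(size=(2, 3)) + rng.normal(size=(500, 3))  # 3 correlated traits

    cca = CCA(n_components=2)
    cca.fit(genotypes, phenotypes)
    geno_scores, pheno_scores = cca.transform(genotypes, phenotypes)
    first_corr = np.corrcoef(geno_scores[:, 0], pheno_scores[:, 0])[0, 1]
    print(f"first canonical correlation = {first_corr:.3f}")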

  18. metaCCA: summary statistics-based multivariate meta-analysis of genome-wide association studies using canonical correlation analysis

    PubMed Central

    Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J.; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T.; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti

    2016-01-01

    Motivation: A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts, and restricted availability of individual-level genotype-phenotype data across the cohorts limit conducting multivariate tests. Results: We introduce metaCCA, a computational framework for summary statistics-based analysis of a single or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has a great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Availability and implementation: Code is available at https://github.com/aalto-ics-kepaco Contacts: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153689

  19. SPSS and SAS programs for determining the number of components using parallel analysis and velicer's MAP test.

    PubMed

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
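
    The cited programs are written for SPSS and SAS; as a language-neutral illustration of the idea behind parallel analysis, the sketch below retains leading components whose observed eigenvalues exceed the mean eigenvalues of random data of the same dimensions. It is a bare-bones version, not the published programs.

    import numpy as np

    def parallel_analysis(data, n_iter=100, seed=0):
        """Count leading components whose eigenvalues exceed those of random data."""
        rng = np.random.default_rng(seed)
        n, p = data.shape
        observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
        random_eigs = np.empty((n_iter, p))
        for i in range(n_iter):
            random_eigs[i] = np.linalg.eigvalsh(
                np.corrcoef(rng.normal(size=(n, p)), rowvar=False))[::-1]
        threshold = random_eigs.mean(axis=0)
        retain = 0
        for obs, thr in zip(observed, threshold):
            if obs <= thr:
                break
            retain += 1
        return retain

    demo = np.random.default_rng(1).normal(size=(300, 10))   # hypothetical 300 x 10 item data
    print("components to retain:", parallel_analysis(demo))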

  20. Reliability and validity of a nutrition and physical activity environmental self-assessment for child care

    PubMed Central

    Benjamin, Sara E; Neelon, Brian; Ball, Sarah C; Bangdiwala, Shrikant I; Ammerman, Alice S; Ward, Dianne S

    2007-01-01

    Background Few assessment instruments have examined the nutrition and physical activity environments in child care, and none are self-administered. Given the emerging focus on child care settings as a target for intervention, a valid and reliable measure of the nutrition and physical activity environment is needed. Methods To measure inter-rater reliability, 59 child care center directors and 109 staff completed the self-assessment concurrently, but independently. Three weeks later, a repeat self-assessment was completed by a sub-sample of 38 directors to assess test-retest reliability. To assess criterion validity, a researcher-administered environmental assessment was conducted at 69 centers and was compared to a self-assessment completed by the director. A weighted kappa test statistic and percent agreement were calculated to assess agreement for each question on the self-assessment. Results For inter-rater reliability, kappa statistics ranged from 0.20 to 1.00 across all questions. Test-retest reliability of the self-assessment yielded kappa statistics that ranged from 0.07 to 1.00. The inter-quartile kappa statistic ranges for inter-rater and test-retest reliability were 0.45 to 0.63 and 0.27 to 0.45, respectively. When percent agreement was calculated, questions ranged from 52.6% to 100% for inter-rater reliability and 34.3% to 100% for test-retest reliability. Kappa statistics for validity ranged from -0.01 to 0.79, with an inter-quartile range of 0.08 to 0.34. Percent agreement for validity ranged from 12.9% to 93.7%. Conclusion This study provides estimates of criterion validity, inter-rater reliability and test-retest reliability for an environmental nutrition and physical activity self-assessment instrument for child care. Results indicate that the self-assessment is a stable and reasonably accurate instrument for use with child care interventions. We therefore recommend the Nutrition and Physical Activity Self-Assessment for Child Care (NAP SACC) instrument to researchers and practitioners interested in conducting healthy weight intervention in child care. However, a more robust, less subjective measure would be more appropriate for researchers seeking an outcome measure to assess intervention impact. PMID:17615078
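
    The study's agreement statistic is a weighted kappa; the sketch below computes one on hypothetical director and staff ratings of a single 4-point self-assessment item, simply to show the calculation.

    from sklearn.metrics import cohen_kappa_score

    director = [3, 2, 4, 4, 1, 2, 3, 3, 2, 4]   # hypothetical ratings on one item
    staff    = [3, 2, 3, 4, 2, 2, 3, 4, 2, 4]

    kappa = cohen_kappa_score(director, staff, weights="linear")
    print(f"linearly weighted kappa = {kappa:.2f}")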

  1. Canadian Health Measures Survey pre-test: design, methods, results.

    PubMed

    Tremblay, Mark; Langlois, Renée; Bryan, Shirley; Esliger, Dale; Patterson, Julienne

    2007-01-01

    The Canadian Health Measures Survey (CHMS) pre-test was conducted to provide information about the challenges and costs associated with administering a physical health measures survey in Canada. To achieve the specific objectives of the pre-test, protocols were developed and tested, and methods for household interviewing and clinic testing were designed and revised. The cost, logistics and suitability of using fixed sites for the CHMS were assessed. Although data collection, transfer and storage procedures are complex, the pre-test experience confirmed Statistics Canada's ability to conduct a direct health measures survey and the willingness of Canadians to participate in such a health survey. Many operational and logistical procedures worked well and, with minor modifications, are being employed in the main survey. Fixed sites were problematic, and survey costs were higher than expected.

  2. Comparisons of modified Vasco X-2 and AISI 9310 gear steels

    NASA Technical Reports Server (NTRS)

    Townsend, D. P.; Zaretsky, E. V.

    1980-01-01

    Endurance tests were conducted with four groups of spur gears manufactured from three heats of consumable electrode vacuum melted (CVM) modified Vasco X-2. Endurance tests were also conducted with gears manufactured from CVM AISI 9310. Bench type rolling element fatigue tests were conducted with both materials. Hardness measurements were made to 811 K. There was no statistically significant life difference between the two materials. Life differences between the different heats of modified Vasco X-2 can be attributed to heat treat variation and resultant hardness. Carburization of gear flanks only can eliminate tooth fracture as a primary failure mode for modified Vasco X-2. However, a tooth surface fatigue spall can act as a nucleus of a tooth fracture failure for the modified Vasco X-2.

  3. Investigation of PACE™ software and VeriFax's Impairoscope device for quantitatively measuring the effects of stress

    NASA Astrophysics Data System (ADS)

    Morgenthaler, George W.; Nuñez, German R.; Botello, Aaron M.; Soto, Jose; Shrairman, Ruth; Landau, Alexander

    1998-01-01

    Many reaction time experiments have been conducted over the years to observe human responses. However, most of the experiments that were performed did not have quantitatively accurate instruments for measuring change in reaction time under stress. There is a great need for quantitative instruments to measure neuromuscular reaction responses under stressful conditions such as distraction, disorientation, disease, alcohol, drugs, etc. The two instruments used in the experiments reported in this paper are such devices. Their accuracy, portability, ease of use, and biometric character are what makes them very special. PACE™ is a software model used to measure reaction time. VeriFax's Impairoscope measures the deterioration of neuromuscular responses. During the 1997 Summer Semester, various reaction time experiments were conducted on University of Colorado faculty, staff, and students using the PACE™ system. The tests included both two-eye and one-eye unstressed trials and trials with various stresses such as fatigue, distractions in which subjects were asked to perform simple arithmetic during the PACE™ tests, and stress due to rotating-chair dizziness. Various VeriFax Impairoscope tests, both stressed and unstressed, were conducted to determine the Impairoscope's ability to quantitatively measure this impairment. In the 1997 Fall Semester, a Phase II effort was undertaken to increase test sample sizes in order to provide statistical precision and stability. More sophisticated statistical methods remain to be applied to better interpret the data.

  4. Comparison of hydraulic conductivities for a sand and gravel aquifer in southeastern Massachusetts, estimated by three methods

    USGS Publications Warehouse

    Warren, L.P.; Church, P.E.; Turtora, Michael

    1996-01-01

    Hydraulic conductivities of a sand and gravel aquifer were estimated by three methods: constant-head multiport-permeameter tests, grain-size analyses (with the Hazen approximation method), and slug tests. Sediment cores from 45 boreholes were undivided or divided into two or three vertical sections to estimate hydraulic conductivity based on permeameter tests and grain-size analyses. The cores were collected from depth intervals in the screened zone of the aquifer in each observation well. Slug tests were performed on 29 observation wells installed in the boreholes. Hydraulic conductivities of 35 sediment cores estimated by use of permeameter tests ranged from 0.9 to 86 meters per day, with a mean of 22.8 meters per day. Hydraulic conductivities of 45 sediment cores estimated by use of grain-size analyses ranged from 0.5 to 206 meters per day, with a mean of 40.7 meters per day. Hydraulic conductivities of aquifer material at 29 observation wells estimated by use of slug tests ranged from 0.6 to 79 meters per day, with a mean of 32.9 meters per day. The repeatability of the estimated hydraulic conductivities was within 30 percent for the permeameter method, 12 percent for the grain-size method, and 9.5 percent for the slug test method. Statistical tests determined that the medians of estimates resulting from the slug tests and grain-size analyses were not significantly different but were significantly higher than the median of estimates resulting from the permeameter tests. Because the permeameter test is the only method considered which estimates vertical hydraulic conductivity, the difference in estimates may be attributed to vertical or horizontal anisotropy. The difference in the average hydraulic conductivities estimated by use of each method was less than 55 percent when compared to the estimated hydraulic conductivity determined from an aquifer test conducted near the study area.
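
    For the grain-size method mentioned above, the Hazen approximation estimates hydraulic conductivity from the 10th-percentile grain diameter, roughly K = C * d10^2 with K in cm/s and d10 in cm; the coefficient and example value below are illustrative assumptions, not figures from this study.

    def hazen_conductivity_m_per_day(d10_cm, c=100.0):
        """Approximate hydraulic conductivity via Hazen: K (cm/s) = C * d10^2.

        d10_cm : grain diameter (cm) at which 10% of the sample is finer.
        c      : empirical coefficient, often taken near 100 for clean sands
                 (an assumed value here, not one from the report).
        """
        k_cm_per_s = c * d10_cm ** 2
        return k_cm_per_s * 864.0   # 1 cm/s = 864 m/day

    # e.g. a medium sand with d10 = 0.02 cm (0.2 mm)
    print(f"{hazen_conductivity_m_per_day(0.02):.1f} m/day")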

  5. Evaluation program for secondary spacecraft cells

    NASA Technical Reports Server (NTRS)

    Christy, D. E.; Harkness, J. D.

    1973-01-01

    A life cycle test of secondary electric batteries for spacecraft applications was conducted. A sample number of nickel cadmium batteries were subjected to general performance tests to determine the limit of their actual capabilities. Weaknesses discovered in cell design are reported and aid in research and development efforts toward improving the reliability of spacecraft batteries. A statistical analysis of the life cycle prediction and cause of failure versus test conditions is provided.

  6. Using the Bootstrap Method to Evaluate the Critical Range of Misfit for Polytomous Rasch Fit Statistics.

    PubMed

    Seol, Hyunsoo

    2016-06-01

    The purpose of this study was to apply the bootstrap procedure to evaluate how the bootstrapped confidence intervals (CIs) for polytomous Rasch fit statistics might differ according to sample sizes and test lengths in comparison with the rule-of-thumb critical value of misfit. A total of 25 simulated data sets were generated to fit the Rasch measurement and then a total of 1,000 replications were conducted to compute the bootstrapped CIs under each of 25 testing conditions. The results showed that rule-of-thumb critical values for assessing the magnitude of misfit were not applicable because the infit and outfit mean square error statistics showed different magnitudes of variability over testing conditions and the standardized fit statistics did not exactly follow the standard normal distribution. Further, they also do not share the same critical range for the item and person misfit. Based on the results of the study, the bootstrapped CIs can be used to identify misfitting items or persons as they offer a reasonable alternative solution, especially when the distributions of the infit and outfit statistics are not well known and depend on sample size. © The Author(s) 2016.
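
    The general mechanism used above is a percentile bootstrap of a fit statistic; the sketch below shows that mechanism on placeholder values rather than on actual Rasch infit/outfit output.

    import numpy as np

    rng = np.random.default_rng(4)
    fit_values = rng.normal(1.0, 0.2, size=200)    # placeholder item fit statistics

    boot_means = np.array([rng.choice(fit_values, size=fit_values.size).mean()
                           for _ in range(2000)])
    ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
    print(f"95% bootstrap CI for the mean fit statistic: ({ci_low:.3f}, {ci_high:.3f})")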

  7. One-dimensional statistical parametric mapping in Python.

    PubMed

    Pataky, Todd C

    2012-01-01

    Statistical parametric mapping (SPM) is a topological methodology for detecting field changes in smooth n-dimensional continua. Many classes of biomechanical data are smooth and contained within discrete bounds and as such are well suited to SPM analyses. The current paper accompanies release of 'SPM1D', a free and open-source Python package for conducting SPM analyses on a set of registered 1D curves. Three example applications are presented: (i) kinematics, (ii) ground reaction forces and (iii) contact pressure distribution in probabilistic finite element modelling. In addition to offering a high-level interface to a variety of common statistical tests like t tests, regression and ANOVA, SPM1D also emphasises fundamental concepts of SPM theory through stand-alone example scripts. Source code and documentation are available at: www.tpataky.net/spm1d/.
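
    Without reproducing the SPM1D package's own interface, the sketch below illustrates the core idea of 1D SPM: compute a test statistic at every node of a set of registered curves (here a two-sample t statistic on simulated data, with no random field correction applied).

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(5)
    nodes = 101
    group_a = rng.normal(0.0, 1.0, size=(10, nodes))   # 10 registered 1D curves
    group_b = rng.normal(0.3, 1.0, size=(12, nodes))   # 12 registered 1D curves

    t_curve, p_curve = ttest_ind(group_b, group_a, axis=0)
    print("largest pointwise |t|:", float(np.abs(t_curve).max()))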

  8. Nigerian pharmacists’ self-perceived competence and confidence to plan and conduct pharmacy practice research

    PubMed Central

    Usman, Mohammad N.; Umar, Muhammad D.

    2018-01-01

    Background: Recent studies have revealed that pharmacists have interest in conducting research. However, lack of confidence is a major barrier. Objective: This study evaluated pharmacists’ self-perceived competence and confidence to plan and conduct health-related research. Method: This cross sectional study was conducted during the 89th Annual National Conference of the Pharmaceutical Society of Nigeria in November 2016. An adapted questionnaire was validated and administered to 200 pharmacist delegates during the conference. Result: Overall, 127 questionnaires were included in the analysis. At least 80% of the pharmacists had previous health-related research experience. Pharmacist’s competence and confidence scores were lowest for research skills such as: using software for statistical analysis, choosing and applying appropriate inferential statistical test and method, and outlining detailed statistical plan to be used in data analysis. Highest competence and confidence scores were observed for conception of research idea, literature search and critical appraisal of literature. Pharmacists with previous research experience had higher competence and confidence scores than those with no previous research experience (p<0.05). The only predictor of moderate-to-extreme self-competence and confidence was having at least one journal article publication during the last 5 years. Conclusion: Nigerian pharmacists indicated interest to participate in health-related research. However, self-competence and confidence to plan and conduct research were low. This was particularly so for skills related to statistical analysis. Training programs and building of Pharmacy Practice Research Network are recommended to enhance pharmacist’s research capacity. PMID:29619141

  9. The Clinical Utility of the Proposed DSM-5 Callous-Unemotional Subtype of Conduct Disorder in Young Girls

    ERIC Educational Resources Information Center

    Pardini, Dustin; Stepp, Stephanie; Hipwell, Alison; Stouthamer-Loeber, Magda; Loeber, Rolf

    2012-01-01

    Objective: A callous-unemotional (CU) subtype of conduct disorder (CD) has been proposed as an addition to the fifth edition of the "Diagnostic and Statistical Manual of Mental Disorders (DSM-5)." This study tested the hypothesis that young girls with the CU subtype of CD would exhibit more severe antisocial behavior and less severe internalizing…

  10. Testing high SPF sunscreens: a demonstration of the accuracy and reproducibility of the results of testing high SPF formulations by two methods and at different testing sites.

    PubMed

    Agin, Patricia Poh; Edmonds, Susan H

    2002-08-01

    The goals of this study were (i) to demonstrate that existing and widely used sun protection factor (SPF) test methodologies can produce accurate and reproducible results for high SPF formulations and (ii) to provide data on the number of test-subjects needed, the variability of the data, and the appropriate exposure increments needed for testing high SPF formulations. Three high SPF formulations were tested, according to the Food and Drug Administration's (FDA) 1993 tentative final monograph (TFM) 'very water resistant' test method and/or the 1978 proposed monograph 'waterproof' test method, within one laboratory. A fourth high SPF formulation was tested at four independent SPF testing laboratories, using the 1978 waterproof SPF test method. All laboratories utilized xenon arc solar simulators. The data illustrate that the testing conducted within one laboratory, following either the 1978 proposed or the 1993 TFM SPF test method, was able to reproducibly determine the SPFs of the formulations tested, using either the statistical analysis method in the proposed monograph or the statistical method described in the TFM. When one formulation was tested at four different laboratories, the anticipated variation in the data owing to the equipment and other operational differences was minimized through the use of the statistical method described in the 1993 monograph. The data illustrate that either the 1978 proposed monograph SPF test method or the 1993 TFM SPF test method can provide accurate and reproducible results for high SPF formulations. Further, these results can be achieved with panels of 20-25 subjects with an acceptable level of variability. Utilization of the statistical controls from the 1993 sunscreen monograph can help to minimize lab-to-lab variability for well-formulated products.

  11. Evaluation of bone surrogates for indirect and direct ballistic fractures.

    PubMed

    Bir, Cynthia; Andrecovich, Chris; DeMaio, Marlene; Dougherty, Paul J

    2016-04-01

    The mechanism of injury for fractures to long bones has been studied for both direct ballistic loading as well as indirect. However, the majority of these studies have been conducted on both post-mortem human subjects (PMHS) and animal surrogates, which have constraints in terms of storage, preparation and testing. The identification of a validated bone surrogate for use in forensic, medical and engineering testing would provide the ability to investigate ballistic loading without these constraints. Two specific bone surrogates, Sawbones and Synbone, were evaluated in comparison to PMHS for both direct and indirect ballistic loading. For the direct loading, the mean velocity to produce fracture was 121 ± 19 m/s for the PMHS, which was statistically different from the Sawbones (140 ± 7 m/s) and Synbone (146 ± 3 m/s). The average distance to fracture in the indirect loading was .70 cm for the PMHS. The Synbone had a statistically similar average distance to fracture (.61 cm, p=0.54); however, the Sawbones average distance to fracture was statistically different (.41 cm, p<0.05). Fracture patterns were found to be comparable to the PMHS for tests conducted with Synbone; however, the input parameters were slightly varied to produce similar results. The fracture patterns with the Sawbones were not as comparable to the PMHS. An ideal bone surrogate for ballistic testing was not identified and future work is warranted. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Do You Catch Undersized Fish? Let's Go Fishing to Learn Some Important Concepts in Multiple Testing

    ERIC Educational Resources Information Center

    Zheng, Qiujie; Lu, Yonggang

    2016-01-01

    In the era of Big Data, because of the diminishing cost of data collection and storage, a large number of statistical tests may be conducted all together, even by a high school student, in search of some "exciting" new scientific findings. In this article, we propose an interesting approach to introduce students to some important…

  13. Recognizing the Signs and Symptoms of Youth and Adolescents That Experience Mental Health Problems and/or Crises

    ERIC Educational Resources Information Center

    Pritchett, Tierra M.

    2017-01-01

    Health First Aid at the Philadelphia Red Cross completed a survey with information pertaining to knowledge and confidence in recognizing the signs and symptoms of youth/adolescents that may be experiencing a mental health problem and or crisis. Descriptive statistics, independent t-tests, ANOVA, and Tukey tests were conducted to investigate the…

  14. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study.

    PubMed

    van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-08-07

    Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use.
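
    As a rough sketch of the modeling step that AutoVAR automates (not the application itself), the code below fits a small VAR to simulated EMA-style series with statsmodels, selects the lag order by AIC, and runs a Granger causality test; the variable names and data are invented.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(6)
    n = 90                                             # e.g. 90 daily EMA measurements
    activity = rng.normal(size=n)
    mood = 0.6 * np.roll(activity, 1) + rng.normal(scale=0.5, size=n)  # mood follows lagged activity
    data = pd.DataFrame({"activity": activity, "mood": mood})

    results = VAR(data).fit(maxlags=5, ic="aic")       # lag order chosen by AIC
    granger = results.test_causality("mood", ["activity"], kind="f")
    print(f"selected lag order: {results.k_ar}, AIC: {results.aic:.2f}")
    print(f"Granger test p-value (activity -> mood): {granger.pvalue:.3f}")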

  15. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study

    PubMed Central

    Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith GM; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-01-01

    Background Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. Objective This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. Methods We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher’s tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). Results An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Conclusions Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use. PMID:26254160

  16. Knowledge dimensions in hypothesis test problems

    NASA Astrophysics Data System (ADS)

    Krishnan, Saras; Idris, Noraini

    2012-05-01

    The reform in statistics education over the past two decades has predominantly shifted the focus of statistical teaching and learning from procedural understanding to conceptual understanding. The emphasis of procedural understanding is on formulas and calculation procedures. Meanwhile, conceptual understanding emphasizes students knowing why they are using a particular formula or executing a specific procedure. In addition, the Revised Bloom's Taxonomy offers a two-dimensional framework to describe learning objectives, comprising the six revised cognition levels of the original Bloom's taxonomy and four knowledge dimensions. Depending on the level of complexity, the four knowledge dimensions essentially distinguish basic understanding from more connected understanding. This study identifies the factual, procedural and conceptual knowledge dimensions in hypothesis test problems. Hypothesis testing, an important tool for making inferences about a population from sample information, is taught in many introductory statistics courses. However, researchers find that students in these courses still have difficulty in understanding the underlying concepts of hypothesis testing. Past studies also show that even though students can perform the hypothesis testing procedure, they may not understand the rationale of executing these steps or know how to apply them in novel contexts. Besides knowing the procedural steps in conducting a hypothesis test, students must have fundamental statistical knowledge and a deep understanding of the underlying inferential concepts such as the sampling distribution and the central limit theorem. By identifying the knowledge dimensions of hypothesis test problems in this study, suitable instructional and assessment strategies can be developed in the future to enhance students' learning of hypothesis testing as a valuable inferential tool.
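
    Purely to make the procedural steps the passage refers to concrete (state the hypotheses, compute the statistic, compare it with the reference distribution), here is a small worked one-sample t test on made-up values; it is an illustration, not material from the study.

    from scipy.stats import ttest_1samp

    sample = [5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 5.2, 4.9]   # hypothetical measurements
    mu_0 = 5.0                                          # H0: population mean equals 5.0

    t_stat, p_value = ttest_1samp(sample, popmean=mu_0)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f} (reject H0 if p < alpha)")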

  17. Effect of acupressure vs reflexology on pre-menstrual syndrome among adolescent girls--a pilot study.

    PubMed

    Padmavathi, P

    2014-01-01

    Premenstrual syndrome is the most common of gynaecologic complaints. It affects half of all female adolescents today and represents the leading cause of college/school absenteeism among that population. It was sought to assess the effectiveness of acupressure vs reflexology on premenstrual syndrome among adolescents. A two-group pre-test and post-test true experimental design was adopted for the study. Forty adolescent girls from Government Girls Secondary School, Erode, with pre-menstrual syndrome fulfilling the inclusion criteria were selected by simple random sampling. A pre-test was conducted using the premenstrual symptoms assessment scale. Immediately after the pre-test, acupressure or reflexology was given once a week for 6 weeks, and a post-test was then conducted to assess the effectiveness of treatment. Collected data were analysed using descriptive and inferential statistics. In the post-test, the mean score of the experimental group I sample was 97.3 (SD = 2.5) and the group II mean score was 70.8 (SD = 10.71), with paired 't' values of 19.2 and 31.9. This showed that reflexology was more effective than acupressure in enhancing the practice of the sample regarding pre-menstrual syndrome. No statistically significant association was found between the post-test scores of the sample and their demographic variables. The findings imply the need for educating adolescent girls on effective management of pre-menstrual syndrome.

  18. Prospective Study of Neuroendoscopy versus Microscopy: 213 Cases of Microvascular Decompression for Trigeminal Neuralgia Performed by One Neurosurgeon.

    PubMed

    Xiang, Hui; Wu, Guangyong; Ouyang, Jia; Liu, Ruen

    2018-03-01

    To compare the efficacy and complications of microvascular decompression (MVD) by complete neuroendoscopy versus microscopy for 213 cases of trigeminal neuralgia (TN). Between January 2014 and January 2016, 213 patients with TN were randomly assigned to the neuroendoscopy (n = 105) or microscopy (n = 114) group for MVD via the suboccipital retrosigmoid approach. All procedures were performed by the same neurosurgeon. Follow-up was conducted by telephone interview. Statistical data were analyzed with the chi-square test, and a probability (P) value of ≤0.05 was considered statistically significant. Chi-square test was conducted using SAS 9.4 software (SAS Institute, Cary, North Carolina, USA). There were no statistical differences between the 2 groups in pain-free condition immediately post procedure, pain-free condition 1 year post procedure, hearing loss, facial hypoesthesia, transient ataxia, aseptic meningitis, intracranial infections, and herpetic lesions of the lips. There were no instances of death, facial paralysis, cerebral hemorrhage, or cerebrospinal fluid leakage in either group. There were no significant differences in the cure rates or incidences of surgical complications between neuroendoscopic and microscopic MVD. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Static renewal tests using Pimephales promelas (fathead minnows) and Ceriodaphnia dubia (daphnids). Clinch River-Environmental Restoration Program (CR-ERP) study, ambient water toxicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, C.L.

    1993-12-31

    Clinch River-Environmental Restoration Program (CR-ERP) personnel and Tennessee Valley Authority (TVA) personnel conducted a study during the week of July 22--29, 1993. The organisms specified for testing were larval fathead minnows, Pimephales promelas, and the daphnid, Ceriodaphnia dubia. Surface water samples were collected by TVA Field engineering personnel from Clinch River Mile 19.0 and Mile 22.0 on July 21, 23, and 26. Samples were split and provided to the CR-ERP and TVA toxicology laboratories for testing. Exposure of test organisms to these samples resulted in no toxicity (survival, growth, or reproduction) to either species in testing conducted by TVA. Attachments to this report include: Chain of custody forms -- originals; Toxicity test bench sheets and statistical analyses; and Reference toxicant test information.

  20. Valid statistical inference methods for a case-control study with missing data.

    PubMed

    Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun

    2018-04-01

    The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion by the Wald test for testing independency under the two existing sampling distributions could be completely different (even contradictory) from the Wald test for testing the equality of the success probabilities in control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.

  1. Countermeasures for Reducing Unsteady Aerodynamic Force Acting on High-Speed Train in Tunnel by Use of Modifications of Train Shapes

    NASA Astrophysics Data System (ADS)

    Suzuki, Masahiro; Nakade, Koji; Ido, Atsushi

    As the maximum speed of high-speed trains increases, flow-induced vibration of trains in tunnels has become a subject of discussion in Japan. In this paper, we report the results of a study on the use of train-shape modifications as a countermeasure for reducing the unsteady aerodynamic force, based on on-track tests and a wind tunnel test. First, we conduct a statistical analysis of on-track test data to identify exterior parts of a train that cause the unsteady aerodynamic force. Next, we carry out a wind tunnel test to measure the unsteady aerodynamic force acting on a train in a tunnel and examine train shapes with a particular emphasis on the exterior parts identified by the statistical analysis. The wind tunnel test shows that fins under the car body are effective in reducing the unsteady aerodynamic force. Finally, we test the fins in an on-track test and confirm their effectiveness.

  2. Feasibility of TBI Assessment Measures in a Field Environment: A Pilot Study for the Environmental Sensors in Training (ESiT) Project

    DTIC Science & Technology

    2016-12-22

    included assessments and instruments, descriptive statistics were calculated. Independent-samples t-tests were conducted using participant survey scores...integrity tests within a multimodal system. Both conditions included the Military Acute Concussion Evaluation (MACE) and an Ease-of-Use survey. Mean scores...for the Ease-of-Use survey and mean test administration times for each measure were compared. Administrative feedback was also considered for

  3. Effect of Planned Follow-up on Married Women's Health Beliefs and Behaviors Concerning Breast and Cervical Cancer Screenings.

    PubMed

    Kolutek, Rahsan; Avci, Ilknur Aydin; Sevig, Umit

    2018-04-01

    The objective of this study was to identify the effect of planned follow-up visits on married women's health beliefs and behaviors concerning breast and cervical cancer screenings. The study was conducted using the single-group pre-test/post-test and quasi-experimental study designs. The sample of the study included 153 women. Data were collected using a Personal Information Form, the Health Belief Model (HBM) Scale for Breast Cancer Screening, the HBM Scale for Cervical Cancer Screening, and a Pap smear test. Data were collected using the aforementioned tools from September 2012 to March 2013. Four follow-up visits were conducted, nurses were educated, and telephone reminders were utilized. Friedman's test, McNemar's test, and descriptive statistics were used for data analysis. The frequency of performing breast self-examination (BSE) at the last visit increased to 84.3% compared with pre-training. A statistically significant difference was observed between the pre- and post-training median values in four subscales, the exception being the subscale of perceived seriousness of cervical cancer under the Health Belief Model Scale for Cervical Cancer and the Pap Smear Test (p < 0.001). The rate of performing BSE significantly increased after the training and follow-up visits. Also, the rate of having a Pap smear significantly increased after the follow-up visits.

  4. The Interaction of Conduct Problems and Depressed Mood in Relation to Adolescent Substance Involvement and Peer Substance Use

    PubMed Central

    Hitchings, Julia E.; Spoth, Richard L.

    2010-01-01

    Conduct problems are strong positive predictors of substance use and problem substance use among teens, whereas predictive associations of depressed mood with these outcomes are mixed. Conduct problems and depressed mood often co-occur, and such co-occurrence may heighten risk for negative outcomes. Thus, this study examined the interaction of conduct problems and depressed mood at age 11 in relation to substance use and problem use at age 18, and possible mediation through peer substance use at age 16. Analyses of multirater longitudinal data collected from 429 rural youths (222 girls) and their families were conducted using a methodology for testing latent variable interactions. The link between the conduct problems X depressed mood interaction and adolescent substance use was negative and statistically significant. Unexpectedly, positive associations of conduct problems with substance use were stronger at lower levels of depressed mood. A significant negative interaction in relation to peer substance use also was observed, and the estimated indirect effect of the interaction on adolescent use through peer use as a mediator was statistically significant. Findings illustrate the complexity of multiproblem youth. PMID:18455886

  5. Endurance and failure characteristics of modified Vasco X-2, CBS 600 and AISI 9310 spur gears. [aircraft construction materials

    NASA Technical Reports Server (NTRS)

    Townsend, D. P.; Zaretsky, E. V.

    1980-01-01

    Gear endurance tests and rolling-element fatigue tests were conducted to compare the performance of spur gears made from AISI 9310, CBS 600 and modified Vasco X-2 and to compare the pitting fatigue lives of these three materials. Gears manufactured from CBS 600 exhibited lives longer than those manufactured from AISI 9310. However, rolling-element fatigue tests resulted in statistically equivalent lives. Modified Vasco X-2 exhibited statistically equivalent lives to AISI 9310. CBS 600 and modified Vasco X-2 gears exhibited the potential of tooth fracture occurring at a tooth surface fatigue pit. Case carburization of all gear surfaces for the modified Vasco X-2 gears results in fracture at the tips of the gears.

  6. Improved silicon nitride for advanced heat engines

    NASA Technical Reports Server (NTRS)

    Yeh, Hun C.; Fang, Ho T.

    1987-01-01

    The technology base required to fabricate silicon nitride components with the strength, reliability, and reproducibility necessary for actual heat engine applications is presented. Task 2 was set up to develop test bars with high Weibull slope and greater high temperature strength, and to conduct an initial net shape component fabrication evaluation. Screening experiments were performed in Task 7 on advanced materials and processing for input to Task 2. The technical efforts performed in the second year of a 5-yr program are covered. The first iteration of Task 2 was completed as planned. Two half-replicated, fractional factorial (2⁵), statistically designed matrix experiments were conducted. These experiments have identified Denka 9FW Si3N4 as an alternate raw material to GTE SN502 Si3N4 for subsequent process evaluation. A detailed statistical analysis was conducted to correlate processing conditions with as-processed test bar properties. One processing condition produced a material with a 97 ksi average room temperature MOR (100 percent of goal) with 13.2 Weibull slope (83 percent of goal); another condition produced 86 ksi (6 percent over baseline) room temperature strength with a Weibull slope of 20 (125 percent of goal).

  7. Saccular function in otosclerosis patients: bone conducted-vestibular evoked myogenic potential analysis.

    PubMed

    Amali, Amin; Mahdi, Parvane; Karimi Yazdi, Alireza; Khorsandi Ashtiyani, Mohammad Taghi; Yazdani, Nasrin; Vakili, Varasteh; Pourbakht, Akram

    2014-01-01

    Vestibular involvements have long been observed in otosclerotic patients. Among vestibular structures, the saccule has the closest anatomical proximity to the sclerotic foci, so it is the vestibular structure most prone to be affected during the otosclerosis process. The aim of this study was to investigate saccular function in patients suffering from otosclerosis, by means of Vestibular Evoked Myogenic Potential (VEMP) testing. The material consisted of 30 otosclerosis patients and 20 control subjects. All participants underwent audiometric and VEMP testing. Analysis of test results revealed that the mean values of Air-Conducted Pure Tone Average (AC-PTA) and Bone-Conducted Pure Tone Average (BC-PTA) in patients were 45.28 ± 15.57 and 19.68 ± 10.91, respectively, and the calculated four-frequency Air-Bone Gap (ABG) was 25.64 ± 9.95. The VEMP response was absent in 14 (28.57%) otosclerotic ears. A statistically significant increase in p13 latency was found in the affected ears (P=0.004), whereas differences in n23 latency did not reach statistical significance (P=0.112). The difference in p13-n23 amplitude between the two study groups was statistically significant (P=0.009), indicating that the patients with otosclerosis had lower amplitudes. This study suggests that, due to the direct biotoxic effect of the materials released from the otosclerotic foci on saccular receptors, there might be a possibility of vestibular dysfunction in otosclerotic patients.

  8. Potential Mediators in Parenting and Family Intervention: Quality of Mediation Analyses

    PubMed Central

    Patel, Chandni C.; Fairchild, Amanda J.; Prinz, Ronald J.

    2017-01-01

    Parenting and family interventions have repeatedly shown effectiveness in preventing and treating a range of youth outcomes. Accordingly, investigators in this area have conducted a number of studies using statistical mediation to examine some of the potential mechanisms of action by which these interventions work. This review examined, from a methodological perspective, in what ways and how well family-based intervention studies tested statistical mediation. A systematic search identified 73 published outcome studies that tested mediation for family-based interventions across a wide range of child and adolescent outcomes (i.e., externalizing, internalizing, and substance-abuse problems; high-risk sexual activity; and academic achievement), for putative mediators pertaining to positive and negative parenting, family functioning, youth beliefs and coping skills, and peer relationships. Taken as a whole, the studies used designs that adequately addressed temporal precedence. The majority of studies used the product of coefficients approach to mediation, which is preferred and less limiting than the causal steps approach. Statistical significance testing did not always make use of the most recently developed approaches, which would better accommodate small sample sizes and more complex functions. Specific recommendations are offered for future mediation studies in this area with respect to full longitudinal design, mediation approach, significance testing method, documentation and reporting of statistics, testing of multiple mediators, and control for Type I error. PMID:28028654
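
    For readers unfamiliar with the product-of-coefficients approach discussed above, the sketch below estimates an indirect effect a*b and a percentile-bootstrap interval on simulated intervention, parenting (mediator), and outcome data; it is a generic illustration, not any of the reviewed analyses.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 300
    intervention = rng.integers(0, 2, size=n).astype(float)
    parenting = 0.5 * intervention + rng.normal(size=n)                    # mediator
    outcome = 0.4 * parenting + 0.1 * intervention + rng.normal(size=n)

    def indirect_effect(x, m, y):
        a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                  # x -> m path
        b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # m -> y, adjusting for x
        return a * b

    boot = [indirect_effect(intervention[idx], parenting[idx], outcome[idx])
            for idx in (rng.integers(0, n, size=n) for _ in range(2000))]
    print("indirect effect a*b:", round(indirect_effect(intervention, parenting, outcome), 3))
    print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]).round(3))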

  9. Assessment of the beryllium lymphocyte proliferation test using statistical process control.

    PubMed

    Cher, Daniel J; Deubner, David C; Kelsh, Michael A; Chapman, Pamela S; Ray, Rose M

    2006-10-01

    Despite more than 20 years of surveillance and epidemiologic studies using the beryllium blood lymphocyte proliferation test (BeBLPT) as a measure of beryllium sensitization (BeS) and as an aid for diagnosing subclinical chronic beryllium disease (CBD), improvements in specific understanding of the inhalation toxicology of CBD have been limited. Although epidemiologic data suggest that BeS and CBD risks vary by process/work activity, it has proven difficult to reach specific conclusions regarding the dose-response relationship between workplace beryllium exposure and BeS or subclinical CBD. One possible reason for this uncertainty could be misclassification of BeS resulting from variation in BeBLPT testing performance. The reliability of the BeBLPT, a biological assay that measures beryllium sensitization, is unknown. To assess the performance of four laboratories that conducted this test, we used data from a medical surveillance program that offered testing for beryllium sensitization with the BeBLPT. The study population was workers exposed to beryllium at various facilities over a 10-year period (1992-2001). Workers with abnormal results were offered diagnostic workups for CBD. Our analyses used a standard statistical technique, statistical process control (SPC), to evaluate test reliability. The study design involved a repeated measures analysis of BeBLPT results generated from the company-wide, longitudinal testing. Analytical methods included use of (1) statistical process control charts that examined temporal patterns of variation for the stimulation index, a measure of cell reactivity to beryllium; (2) correlation analysis that compared prior perceptions of BeBLPT instability to the statistical measures of test variation; and (3) assessment of the variation in the proportion of missing test results and how time periods with more missing data influenced SPC findings. During the period of this study, all laboratories displayed variation in test results that were beyond what would be expected due to chance alone. Patterns of test results suggested that variations were systematic. We conclude that laboratories performing the BeBLPT or other similar biological assays of immunological response could benefit from a statistical approach such as SPC to improve quality management.
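
    As a rough illustration of the statistical process control approach described above, the following Python sketch builds an individuals (Shewhart) control chart with 3-sigma limits estimated from the average moving range. The stimulation-index values are simulated stand-ins, not BeBLPT surveillance data; the chart constant (d2 = 1.128 for a moving range of span 2) is a standard SPC value.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated stand-in for a laboratory's stimulation-index results over time.
si = rng.lognormal(mean=0.3, sigma=0.25, size=60)

# Individuals (I) chart: center line at the mean, control limits at
# +/- 3 * (average moving range / d2), with d2 = 1.128 for span-2 ranges.
mr = np.abs(np.diff(si))                # moving ranges of successive results
sigma_hat = mr.mean() / 1.128
center = si.mean()
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

out_of_control = np.flatnonzero((si > ucl) | (si < lcl))
print(f"center={center:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")
print("points outside 3-sigma limits:", out_of_control)
```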

  10. Randomized trial of parent training to prevent adolescent problem behaviors during the high school transition.

    PubMed

    Mason, W Alex; Fleming, Charles B; Gross, Thomas J; Thompson, Ronald W; Parra, Gilbert R; Haggerty, Kevin P; Snyder, James J

    2016-12-01

    This randomized controlled trial tested a widely used general parent training program, Common Sense Parenting (CSP), with low-income 8th graders and their families to support a positive transition to high school. The program was tested in its original 6-session format and in a modified format (CSP-Plus), which added 2 sessions that included adolescents. Over 2 annual cohorts, 321 families were enrolled and randomly assigned to either the CSP, CSP-Plus, or minimal-contact control condition. Pretest, posttest, 1-year follow-up, and 2-year follow-up survey data on parenting as well as youth school bonding, social skills, and problem behaviors were collected from parents and youth (94% retention). Extending prior examinations of posttest outcomes, intent-to-treat regression analyses tested for intervention effects at the 2 follow-up assessments, and growth curve analyses examined experimental condition differences in yearly change across time. Separate exploratory tests of moderation by youth gender, youth conduct problems, and family economic hardship also were conducted. Out of 52 regression models predicting 1- and 2-year follow-up outcomes, only 2 out of 104 possible intervention effects were statistically significant. No statistically significant intervention effects were found in the growth curve analyses. Tests of moderation also showed few statistically significant effects. Because CSP already is in widespread use, findings have direct implications for practice. Specifically, findings suggest that the program may not be efficacious with parents of adolescents in a selective prevention context and may reveal the limits of brief, general parent training for achieving outcomes with parents of adolescents. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Satisfaction with cancer care among underserved racial-ethnic minorities and lower-income patients receiving patient navigation.

    PubMed

    Jean-Pierre, Pascal; Cheng, Ying; Wells, Kristen J; Freund, Karen M; Snyder, Frederick R; Fiscella, Kevin; Holden, Alan E; Paskett, Electra D; Dudley, Donald J; Simon, Melissa A; Valverde, Patricia A

    2016-04-01

    Patient navigation is a barrier-focused program of care coordination designed to achieve timely and high-quality cancer-related care for medically underserved racial-ethnic minorities and the poor. However, to the authors' knowledge, few studies to date have examined the relationship between satisfaction with navigators and cancer-related care. The authors included data from 1345 patients with abnormal cancer screening tests or a definitive cancer diagnosis who participated in the Patient Navigation Research Program to test the efficacy of patient navigation. Participants completed demographic questionnaires and measures of patient satisfaction with cancer-related care (PSCC) and patient satisfaction with the interpersonal relationship with the navigator (PSN-I). The authors obtained descriptive statistics to characterize the sample and conducted regression analyses to assess the degree of association between PSN-I and PSCC, controlling for demographic and clinical factors. Analyses of variance were conducted to examine group differences controlling for statistically significant covariates. Statistically significant relationships were found between the PSCC and PSN-I for patients with abnormal cancer screening tests (1040 patients; correlation coefficient (r), 0.4 [P<.001]) and those with a definitive cancer diagnosis (305 patients; correlation coefficient, 0.4 [P<.001]). The regression analysis indicated that having an abnormal colorectal cancer screening test in the abnormal screening test group and increased age and minority race-ethnicity status in the cancer diagnosis group were associated with a higher satisfaction with cancer care (P<.01). Satisfaction with navigators appears to be significantly associated with satisfaction with cancer-related care. Information regarding the patient-navigator relationship should be integrated into patient navigation programs to maximize the likelihood of reducing cancer disparities and mortality for medically underserved racial-ethnic minorities and the poor. © 2016 American Cancer Society.

  12. Visualizing the Bayesian 2-test case: The effect of tree diagrams on medical decision making.

    PubMed

    Binder, Karin; Krauss, Stefan; Bruckmaier, Georg; Marienhagen, Jörg

    2018-01-01

    In medicine, diagnoses based on medical test results are probabilistic by nature. Unfortunately, cognitive illusions regarding the statistical meaning of test results are well documented among patients, medical students, and even physicians. There are two effective strategies that can foster insight in what are known as Bayesian reasoning situations: (1) translating the statistical information on the prevalence of a disease and the sensitivity and the false-alarm rate of a specific test for that disease from probabilities into natural frequencies, and (2) illustrating the statistical information with tree diagrams, for instance, or with other pictorial representations. So far, such strategies have only been empirically tested in combination for "1-test cases", where one binary hypothesis ("disease" vs. "no disease") has to be diagnosed based on one binary test result ("positive" vs. "negative"). However, in reality, often more than one medical test is conducted to derive a diagnosis. In two studies, we examined a total of 388 medical students from the University of Regensburg (Germany) with medical "2-test scenarios". Each student had to work on two problems: diagnosing breast cancer with mammography and sonography test results, and diagnosing HIV infection with the ELISA and Western Blot tests. In Study 1 (N = 190 participants), we systematically varied the presentation of statistical information ("only textual information" vs. "only tree diagram" vs. "text and tree diagram in combination"), whereas in Study 2 (N = 198 participants), we varied the kinds of tree diagrams ("complete tree" vs. "highlighted tree" vs. "pruned tree"). All versions were implemented in probability format (including probability trees) and in natural frequency format (including frequency trees). We found that natural frequency trees, especially when the question-related branches were highlighted, improved performance, but that none of the corresponding probabilistic visualizations did.
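
    A minimal sketch of the computation underlying such 2-test scenarios, written in natural frequencies: the prevalence, sensitivities, and false-alarm rates below are illustrative placeholders (not the values used in the study materials), and the two tests are assumed conditionally independent given disease status.

```python
# Positive predictive value after two positive tests, via natural frequencies.
# All numbers below are illustrative placeholders, not the study's parameters.
population = 10_000
prevalence = 0.01          # P(disease)
sens1, fa1 = 0.90, 0.09    # test 1: sensitivity, false-alarm rate
sens2, fa2 = 0.93, 0.07    # test 2: sensitivity, false-alarm rate

sick = population * prevalence
healthy = population - sick

# Natural-frequency tree: follow only the branches with two positive results,
# assuming the tests are conditionally independent given disease status.
sick_pos_pos = sick * sens1 * sens2
healthy_pos_pos = healthy * fa1 * fa2

ppv = sick_pos_pos / (sick_pos_pos + healthy_pos_pos)
print(f"P(disease | both tests positive) = {ppv:.2%}")
```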

  13. Revised standards for statistical evidence.

    PubMed

    Johnson, Valen E

    2013-11-26

    Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggests that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25-50:1, and to 100-200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.
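
    A sketch of the scale of this correspondence: in the one-sided normal-mean case with known variance, the uniformly most powerful Bayesian test with evidence threshold gamma has the rejection region z > sqrt(2 ln gamma), so a classical level alpha maps to gamma = exp(z_alpha^2 / 2). The mapping below is taken from that framework as an illustration, not a reproduction of the paper's full development.

```python
from math import exp
from scipy.stats import norm

# For a one-sided test of a normal mean with known variance, the uniformly
# most powerful Bayesian test with evidence threshold gamma rejects when
# z > sqrt(2 * ln(gamma)); inverting this maps a classical significance
# level alpha to the Bayes-factor threshold gamma = exp(z_alpha**2 / 2).
for alpha in (0.05, 0.005, 0.001):
    z_alpha = norm.isf(alpha)          # upper-tail critical value
    gamma = exp(z_alpha ** 2 / 2)
    print(f"alpha = {alpha:<6} ->  evidence threshold ~ {gamma:6.1f} : 1")
```

    With these assumptions, alpha = 0.005 corresponds to roughly 28:1 and alpha = 0.001 to roughly 120:1, consistent with the ranges quoted above, while alpha = 0.05 corresponds to only about 4:1.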

  14. Reliability demonstration test for load-sharing systems with exponential and Weibull components

    PubMed Central

    Hu, Qingpei; Yu, Dan; Xie, Min

    2017-01-01

    Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn’t yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics. PMID:29284030
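
    As a rough illustration of treating the system MTTF as a sum of mean times between successive component failures, the following sketch considers a 1-out-of-2 load-sharing system with exponential components: both components fail at rate lam_half while sharing the load, and the survivor fails at a higher rate lam_full. The rates and the Monte Carlo check are illustrative assumptions, not the paper's test plan or test statistics.

```python
import numpy as np

# 1-out-of-2 load-sharing system with exponential components (illustrative rates).
lam_half = 0.002   # failure rate per hour while both components share the load
lam_full = 0.005   # failure rate of the survivor carrying the full load

# MTTF as a sum of mean times between successive component failures:
# time to first failure ~ Exp(2*lam_half), then time to second ~ Exp(lam_full).
mttf_analytic = 1 / (2 * lam_half) + 1 / lam_full

# Monte Carlo check of the same quantity.
rng = np.random.default_rng(2)
n = 100_000
t_first = rng.exponential(1 / (2 * lam_half), n)
t_second = rng.exponential(1 / lam_full, n)
mttf_mc = (t_first + t_second).mean()

print(f"analytic MTTF = {mttf_analytic:.1f} h, simulated = {mttf_mc:.1f} h")
```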

  15. Reliability demonstration test for load-sharing systems with exponential and Weibull components.

    PubMed

    Xu, Jianyu; Hu, Qingpei; Yu, Dan; Xie, Min

    2017-01-01

    Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn't yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics.

  16. Public health information and statistics dissemination efforts for Indonesia on the Internet.

    PubMed

    Hanani, Febiana; Kobayashi, Takashi; Jo, Eitetsu; Nakajima, Sawako; Oyama, Hiroshi

    2011-01-01

    To elucidate current issues related to health statistics dissemination efforts on the Internet in Indonesia and to propose a new dissemination website as a solution. A cross-sectional survey was conducted. Sources of statistics were identified using link relationships and Google™ searches. The menus used to locate statistics, the mode of presentation, the means of access to statistics, and the available statistics were assessed for each site. Assessment results were used to derive a design specification; a prototype system was developed and evaluated with a usability test. 49 sources were identified on 18 governmental, 8 international and 5 non-government websites. Of the 49 menus identified, 33% used non-intuitive titles and led to inefficient searches; 69% of them were on government websites. Of 31 websites, only 39% and 23% used graphs/charts and maps for presentation, respectively. Further, only 32%, 39% and 19% provided query, export and print features, respectively. While >50% of sources reported morbidity, risk factor and service provision statistics, <40% of sources reported health resource and mortality statistics. A statistics portal website was developed using the Joomla!™ content management system. The usability test demonstrated its potential to improve data accessibility. In this study, the government's efforts to disseminate statistics in Indonesia are supported by non-governmental and international organizations, but the existing information may not be very useful because it is: a) not widely distributed, b) difficult to locate, and c) not effectively communicated. Actions are needed to ensure information usability, and one such action is the development of a statistics portal website.

  17. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
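
    As a rough illustration of the two quantities assessed in this review, the following sketch computes the experiment-wise Type I error rate for k independent tests, 1 - (1 - alpha)^k, and the power of a two-sample t test at Cohen's small, medium, and large effect sizes. The number of tests and the group size are illustrative assumptions, not figures from the reviewed papers.

```python
import numpy as np
from scipy import stats

# Experiment-wise Type I error for k independent tests at alpha = .05.
alpha, k = 0.05, 15
experimentwise = 1 - (1 - alpha) ** k
print(f"k = {k} tests at alpha = {alpha}: experiment-wise error = {experimentwise:.2f}")

# Power of a two-sample t test (equal group sizes) at Cohen's d = 0.2 / 0.5 / 0.8.
def t_test_power(d, n_per_group, alpha=0.05):
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2)          # noncentrality parameter
    t_crit = stats.t.isf(alpha / 2, df)         # two-sided critical value
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

for d in (0.2, 0.5, 0.8):                        # small, medium, large effects
    print(f"d = {d}: power with n = 64 per group = {t_test_power(d, 64):.2f}")
```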

  18. Oklahoma and Texas Completion Policies for Community Colleges

    ERIC Educational Resources Information Center

    Rankin, David A.; Scott, Joyce A.; Kim, JoHyun

    2015-01-01

    This study measured the effectiveness of and differences between Oklahoma's Brain Gain and Texas' Closing the Gaps policies to enhance degree completion of students at community colleges. Descriptive statistics and independent-samples "t" tests were conducted utilizing data from community colleges' 3-year graduation rates from the…

  19. Evaluating the efficacy of a chemistry video game

    NASA Astrophysics Data System (ADS)

    Shapiro, Marina

    A quasi-experimental pre-test/post-test intervention study using a within-group analysis was conducted with 45 undergraduate college chemistry students. The study investigated the effect of implementing a game-based learning environment in an undergraduate college chemistry course, in order to learn whether serious educational games (SEGs) can be used to achieve knowledge gains on complex chemistry concepts and to increase students' positive attitude toward chemistry. To evaluate whether students learn chemistry concepts by participating in a chemistry game-based learning environment, a one-way repeated-measures analysis of variance (ANOVA) was conducted across three time points (pre-test, post-test, and delayed post-test, each a chemistry content exam). Results showed an increase in exam scores over time, and the ANOVA indicated a statistically significant time effect. To evaluate whether students' attitudes toward chemistry increased as a result of participating in the game-based learning environment, a paired-samples t-test was conducted using a chemistry attitudinal survey by Mahdi (2014) as the pre- and post-test. Results of the paired-samples t-test indicated no significant difference between pre- and post-attitudinal scores.
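
    A minimal sketch of the two analyses described above (a one-way repeated-measures ANOVA across three time points and a paired-samples t test), using simulated stand-in scores rather than the study's data; the score scales and effect sizes are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, k = 45, 3   # 45 students, 3 time points (pre, post, delayed post)

# Simulated stand-ins for the three chemistry content exams.
pre = rng.normal(55, 10, n)
scores = np.column_stack([pre, pre + rng.normal(8, 6, n), pre + rng.normal(6, 6, n)])

# One-way repeated-measures ANOVA, computed from its sums of squares.
grand = scores.mean()
ss_time = n * np.sum((scores.mean(axis=0) - grand) ** 2)
ss_subject = k * np.sum((scores.mean(axis=1) - grand) ** 2)
ss_error = np.sum((scores - grand) ** 2) - ss_time - ss_subject
df_time, df_error = k - 1, (k - 1) * (n - 1)
F = (ss_time / df_time) / (ss_error / df_error)
print(f"time effect: F({df_time}, {df_error}) = {F:.2f}, "
      f"p = {stats.f.sf(F, df_time, df_error):.4f}")

# Paired-samples t test for the pre/post attitudinal survey (also simulated).
att_pre = rng.normal(3.5, 0.6, n)
att_post = att_pre + rng.normal(0.05, 0.4, n)
print(stats.ttest_rel(att_pre, att_post))
```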

  20. Statistical analysis of Turbine Engine Diagnostic (TED) field test data

    NASA Astrophysics Data System (ADS)

    Taylor, Malcolm S.; Monyak, John T.

    1994-11-01

    During the summer of 1993, a field test of turbine engine diagnostic (TED) software, developed jointly by the U.S. Army Research Laboratory and the U.S. Army Ordnance Center and School, was conducted at Fort Stewart, GA. The data were collected in conformance with a cross-over design, some of whose considerations are detailed. The initial analysis of the field test data was exploratory and was followed by a more formal investigation. Technical aspects of the data analysis and the insights that were elicited are reported.

  1. Antimicrobial Analysis of an Antiseptic Made from Ethanol Crude Extracts of P. granatum and E. uniflora in Wistar Rats against Staphylococcus aureus and Staphylococcus epidermidis

    PubMed Central

    Bernardo, Thaís Honório Lins; Sales Santos Veríssimo, Regina Célia; Alvino, Valter; Silva Araujo, Maria Gabriella; Evangelista Pires dos Santos, Raíssa Fernanda; Maurício Viana, Max Denisson; de Assis Bastos, Maria Lysete; Alexandre-Moreira, Magna Suzana; de Araújo-Júnior, João Xavier

    2015-01-01

    Introduction. Surgical site infection remains a challenge for hospital infection control, especially with regard to skin antisepsis at the surgical site. Objective. To analyze the in vivo antimicrobial activity of an antiseptic made from ethanol crude extracts of P. granatum and E. uniflora against Gram-positive and Gram-negative bacteria. Methods. Agar drilling and minimal inhibitory tests were conducted for in vitro evaluation. The in vivo bioassay used Wistar rats and Staphylococcus aureus (ATCC 25923) and Staphylococcus epidermidis (ATCC 14990). Statistical analysis was performed using analysis of variance and the Scott-Knott clustering test at the 5% significance level. Results. In vitro, the ethanolic extracts of Punica granatum and Eugenia uniflora and their combination showed the best antimicrobial potential against S. epidermidis and S. aureus. In the in vivo bioassay against S. epidermidis, there was no statistically significant difference between the tested product and the standards used five minutes after applying the product. Conclusion. The results indicate that the resulting product is an alternative antiseptic against S. epidermidis compared with chlorhexidine gluconate. Further research should be conducted with different concentrations of the test product to evaluate its effectiveness and operational costs. PMID:26146655

  2. Process air quality data

    NASA Technical Reports Server (NTRS)

    Butler, C. M.; Hogge, J. E.

    1978-01-01

    Air quality sampling was conducted. Data for air quality parameters, recorded on written forms, punched cards, or magnetic tape, are available for 1972 through 1975. Computer software was developed to (1) calculate several daily statistical measures of location, (2) plot time histories of the data or the calculated daily statistics, (3) calculate simple correlation coefficients, and (4) plot scatter diagrams. Computer software was also developed for processing air quality data to include time series analysis and goodness-of-fit tests, specifically to (1) calculate a larger number of daily statistical measures of location, as well as daily, monthly, and yearly measures of location, dispersion, skewness, and kurtosis, (2) decompose the extended time series model, and (3) perform some goodness-of-fit tests. The computer program is described, documented, and illustrated by examples. Recommendations are made for continued research on processing air quality data.

  3. St. Paul Harbor, St. Paul Island, Alaska; Design for Wave and Shoaling Protection; Hydraulic Model Investigation

    DTIC Science & Technology

    1988-09-01

    ...selection of test waves. Measured prototype wave data, on which a comprehensive statistical analysis of wave conditions could be based, were... Tests of existing conditions: prior to testing of the various improvement plans, comprehensive tests were conducted for existing conditions (Plate 1

  4. Identifying Pleiotropic Genes in Genome-Wide Association Studies for Multivariate Phenotypes with Mixed Measurement Scales

    PubMed Central

    Williams, L. Keoki; Buu, Anne

    2017-01-01

    We propose a multivariate genome-wide association test for mixed continuous, binary, and ordinal phenotypes. A latent response model is used to estimate the correlation between phenotypes with different measurement scales so that the empirical distribution of the Fisher’s combination statistic under the null hypothesis is estimated efficiently. The simulation study shows that our proposed correlation estimation methods have high levels of accuracy. More importantly, our approach conservatively estimates the variance of the test statistic so that the type I error rate is controlled. The simulation also shows that the proposed test maintains the power at the level very close to that of the ideal analysis based on known latent phenotypes while controlling the type I error. In contrast, conventional approaches–dichotomizing all observed phenotypes or treating them as continuous variables–could either reduce the power or employ a linear regression model unfit for the data. Furthermore, the statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that conducting a multivariate test on multiple phenotypes can increase the power of identifying markers that may not be, otherwise, chosen using marginal tests. The proposed method also offers a new approach to analyzing the Fagerström Test for Nicotine Dependence as multivariate phenotypes in genome-wide association studies. PMID:28081206
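
    As a rough illustration of the combination idea, the following sketch computes Fisher's combination statistic T = -2 * sum(ln p_i) for a set of per-phenotype p-values and refers it to the chi-square distribution with 2k degrees of freedom that holds under independence. The p-values are illustrative, and the paper's contribution, estimating an empirical null that accounts for correlated mixed-scale phenotypes, is not reproduced here.

```python
import numpy as np
from scipy import stats

# Fisher's combination statistic for k per-phenotype association p-values:
# T = -2 * sum(log p_i). With independent tests, T ~ chi-square with 2k df;
# correlated phenotypes require an adjusted (empirical) null, as in the paper.
p_values = np.array([0.04, 0.20, 0.008])      # illustrative marker p-values
T = -2 * np.sum(np.log(p_values))
k = len(p_values)
p_combined = stats.chi2.sf(T, df=2 * k)       # independence-based reference
print(f"T = {T:.2f}, combined p (independence null) = {p_combined:.4f}")
```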

  5. Weighted Feature Significance: A Simple, Interpretable Model of Compound Toxicity Based on the Statistical Enrichment of Structural Features

    PubMed Central

    Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.

    2009-01-01

    In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high–throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) data from Salmonella typhimurium reverse mutagenicity assays conducted by the U.S. National Toxicology Program, and (3) hepatotoxicity data published in the Registry of Toxic Effects of Chemical Substances. Enrichments of structural features in toxic compounds are evaluated for their statistical significance and compiled into a simple additive model of toxicity and then used to score new compounds for potential toxicity. The predictive power of the model for cytotoxicity was validated using an independent set of compounds from the U.S. Environmental Protection Agency tested also at the National Institutes of Health Chemical Genomics Center. We compared the performance of our WFS approach with classical classification methods such as Naive Bayesian clustering and support vector machines. In most test cases, WFS showed similar or slightly better predictive power, especially in the prediction of hepatotoxic compounds, where WFS appeared to have the best performance among the three methods. The new algorithm has the important advantages of simplicity, power, interpretability, and ease of implementation. PMID:19805409
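
    The following sketch is not the WFS implementation itself, only an illustration of the general idea it describes: weight each structural feature by the statistical significance of its enrichment among toxic compounds (here via a one-sided Fisher's exact test) and score a new compound by summing the weights of the features it contains. The feature matrix and labels are randomly generated placeholders.

```python
import numpy as np
from scipy import stats

# Toy feature-by-compound data: rows are compounds, columns are structural
# features (1 = feature present); `toxic` flags the training-set actives.
rng = np.random.default_rng(4)
features = rng.integers(0, 2, size=(500, 20))
toxic = rng.integers(0, 2, size=500).astype(bool)

# Weight each feature by the significance of its enrichment in toxic compounds.
weights = np.zeros(features.shape[1])
for j in range(features.shape[1]):
    has = features[:, j].astype(bool)
    table = [[np.sum(has & toxic), np.sum(has & ~toxic)],
             [np.sum(~has & toxic), np.sum(~has & ~toxic)]]
    _, p = stats.fisher_exact(table, alternative="greater")
    weights[j] = -np.log10(p)

# Additive score for a new compound: sum the weights of its present features.
new_compound = rng.integers(0, 2, size=20)
print("toxicity score =", float(new_compound @ weights))
```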

  6. Contribution of Vestibular-Evoked Myogenic Potential (VEMP) testing in the assessment and the differential diagnosis of otosclerosis.

    PubMed

    Tramontani, Ourania; Gkoritsa, Eleni; Ferekidis, Eleftherios; Korres, Stavros G

    2014-02-07

    The aim of this prospective clinical study was to evaluate the clinical importance of Vestibular-Evoked Myogenic Potentials (VEMPs) in the assessment and differential diagnosis of otosclerosis and otologic diseases characterized by "pseudo-conductive" components. We also investigated the clinical appearance of balance disorders in patients with otosclerosis by correlating VEMP results with the findings of caloric testing and pure tone audiometry (PTA). Air-conducted (AC) 4-PTA, bone-conducted (BC) 4-PTA, air-bone gap (ABG), AC and BC tone-burst-evoked VEMPs, and calorics were measured preoperatively in 126 otosclerotic ears. The response rates of the AC-VEMPs and BC-VEMPs were 29.36% and 44.03%, respectively. Statistical differences were found between the means of ABG, AC 4-PTA, and BC 4-PTA in the otosclerotic ears in relation to AC-VEMP elicitability. About one-third of patients presented with disequilibrium. A statistically significant interaction was found between calorics and dizziness in relation to PTA thresholds. No relationship was found between either calorics or dizziness and VEMP responses. AC and BC VEMPs can be elicited in ears with otosclerosis. AC-VEMP is more vulnerable to conductive hearing loss. Evaluation of AC-VEMP thresholds can be added to the diagnostic work-up of otosclerosis in case of doubt, enhancing differential diagnosis in patients with air-bone gaps. Otosclerosis is not a cause of canal paresis or vertigo.

  7. Using Alien Coins to Test Whether Simple Inference Is Bayesian

    ERIC Educational Resources Information Center

    Cassey, Peter; Hawkins, Guy E.; Donkin, Chris; Brown, Scott D.

    2016-01-01

    Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we…

  8. The Development and Validation of a Teacher Preparation Program: Follow-Up Survey

    ERIC Educational Resources Information Center

    Schulte, Laura E.

    2008-01-01

    Students in my applied advanced statistics course for educational administration doctoral students developed a follow-up survey for teacher preparation programs, using the following scale development processes: adopting a framework; developing items; providing evidence of content validity; conducting a pilot test; and analyzing data. The students…

  9. An Examination of Master's Student Retention & Completion

    ERIC Educational Resources Information Center

    Barry, Melissa; Mathies, Charles

    2011-01-01

    This study was conducted at a research-extensive public university in the southeastern United States. It examined the retention and completion of master's degree students across numerous disciplines. Results were derived from a series of descriptive statistics, T-tests, and a series of binary logistic regression models. The findings from binary…

  10. RESEARCH REPORT ON THE RISK ASSESSMENT OF MIXTURES OF DISINFECTION BY-PRODUCTS (DBPS) IN DRINKING WATER

    EPA Science Inventory

    This report presents a number of manuscripts and progress reports on statistical and biological research pertaining to the health risk assessment of simple DBP mixtures. Research has been conducted to generate efficient experimental designs to test specific mixtures for departu...

  11. Learning Opportunities for Group Learning

    ERIC Educational Resources Information Center

    Gil, Alfonso J.; Mataveli, Mara

    2017-01-01

    Purpose: This paper aims to analyse the impact of organizational learning culture and learning facilitators in group learning. Design/methodology/approach: This study was conducted using a survey method applied to a statistically representative sample of employees from Rioja wine companies in Spain. A model was tested using a structural equation…

  12. Faculty Perceptions of Transition Personnel Preparation in Saudi Arabia

    ERIC Educational Resources Information Center

    Alhossan, Bandar A.; Trainor, Audrey A.

    2017-01-01

    This study investigated to what extent faculty members include and value transition curricula in special education preparation programs in Saudi Arabia. A web-based survey was conducted and sent to special education professors across 20 universities. Descriptive statistics and a t-test analysis generated three main findings: (a) Institutions…

  13. The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.

    PubMed

    Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R

    2013-01-01

    In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
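
    For context on the false discovery rate paradigm discussed above, the following sketch implements the standard Benjamini-Hochberg step-up procedure; it is offered only as background, not as the adjusted procedure the authors analyze for test statistics violating the monotone likelihood ratio condition. The p-values are illustrative.

```python
import numpy as np

def benjamini_hochberg(p, q=0.05):
    """Indices of rejected hypotheses under the Benjamini-Hochberg procedure."""
    p = np.asarray(p)
    m = p.size
    order = np.argsort(p)
    thresholds = q * (np.arange(1, m + 1) / m)
    below = np.nonzero(p[order] <= thresholds)[0]
    if below.size == 0:
        return np.array([], dtype=int)
    k = below.max()                       # largest i with p_(i) <= q*i/m
    return order[: k + 1]

p_values = [0.001, 0.008, 0.039, 0.041, 0.042, 0.20, 0.64]
print("rejected:", benjamini_hochberg(p_values, q=0.05))
```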

  14. Lubricant and additive effects on spur gear fatigue life

    NASA Technical Reports Server (NTRS)

    Townsend, D. P.; Zaretsky, E. V.; Scibbe, H. W.

    1985-01-01

    Spur gear endurance tests were conducted with six lubricants using a single lot of consumable-electrode vacuum melted (CVM) AISI 9310 spur gears. The sixth lubricant was divided into four batches, each of which had a different additive content. Lubricants tested with a phosphorus-type load-carrying additive showed a statistically significant improvement in life over lubricants without this type of additive. The presence of sulfur-type antiwear additives in the lubricant did not appear to affect the surface fatigue life of the gears. No statistical difference in life was found among lubricants of different base stocks but with similar viscosity, pressure-viscosity coefficients, and antiwear additives. Gears tested with 0.1 wt % sulfur and 0.1 wt % phosphorus EP additives in the lubricant had reactive films that were 200 to 400 (0.8 to 1.6 microns) thick.

  15. Does Assessing Eye Alignment along with Refractive Error or Visual Acuity Increase Sensitivity for Detection of Strabismus in Preschool Vision Screening?

    PubMed Central

    2007-01-01

    Purpose Preschool vision screenings often include refractive error or visual acuity (VA) testing to detect amblyopia, as well as alignment testing to detect strabismus. The purpose of this study was to determine the effect of combining screening for eye alignment with screening for refractive error or reduced VA on sensitivity for detection of strabismus, with specificity set at 90% and 94%. Methods Over 3 years, 4040 preschool children were screened in the Vision in Preschoolers (VIP) Study, with different screening tests administered each year. Examinations were performed to identify children with strabismus. The best screening tests for detecting children with any targeted condition were noncycloplegic retinoscopy (NCR), Retinomax autorefractor (Right Manufacturing, Virginia Beach, VA), SureSight Vision Screener (Welch-Allyn, Inc., Skaneateles, NY), and Lea Symbols (Precision Vision, LaSalle, IL and Good-Lite Co., Elgin, IL) and HOTV optotypes VA tests. Analyses were conducted with these tests of refractive error or VA paired with the best tests for detecting strabismus (unilateral cover testing, Random Dot “E” [RDE] and Stereo Smile Test II [Stereo Optical, Inc., Chicago, IL]; and MTI PhotoScreener [PhotoScreener, Inc., Palm Beach, FL]). The change in sensitivity that resulted from combining a test of eye alignment with a test of refractive error or VA was determined with specificity set at 90% and 94%. Results Among the 4040 children, 157 were identified as having strabismus. For screening tests conducted by eye care professionals, the addition of a unilateral cover test to a test of refraction generally resulted in a statistically significant increase (range, 15%–25%) in detection of strabismus. For screening tests administered by trained lay screeners, the addition of Stereo Smile II to SureSight resulted in a statistically significant increase (21%) in sensitivity for detection of strabismus. Conclusions The most efficient and low-cost ways to achieve a statistically significant increase in sensitivity for detection of strabismus were by combining the unilateral cover test with the autorefractor (Retinomax) administered by eye care professionals and by combining Stereo Smile II with SureSight administered by trained lay screeners. The decision of whether to include a test of alignment should be based on the screening program’s goals (e.g., targeted visual conditions) and resources. PMID:17591881

  16. Wavelet analysis in ecology and epidemiology: impact of statistical tests

    PubMed Central

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-01-01

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the ‘beta-surrogate’ method. PMID:24284892
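
    A minimal sketch of the surrogate-testing logic discussed above, using AR(1) ("red noise") surrogates and a simple periodogram peak as the test statistic; it is a stand-in for a full wavelet analysis and does not implement any of the seven resampling methods compared in the paper. The simulated series, surrogate model, and number of resamples are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Observed series: a noisy oscillation (stand-in for an epidemiological series).
n = 256
t = np.arange(n)
x = np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, n)

def ar1_surrogate(x, rng):
    """Red-noise surrogate matching the lag-1 autocorrelation and variance of x."""
    xc = x - x.mean()
    r1 = np.corrcoef(xc[:-1], xc[1:])[0, 1]
    innov_sd = xc.std() * np.sqrt(1 - r1 ** 2)
    s = np.zeros_like(xc)
    for i in range(1, len(xc)):
        s[i] = r1 * s[i - 1] + rng.normal(0, innov_sd)
    return s

def peak_power(x):
    """Maximum periodogram ordinate (simple stand-in for a wavelet statistic)."""
    return np.max(np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2)

observed = peak_power(x)
null = np.array([peak_power(ar1_surrogate(x, rng)) for _ in range(500)])
print("p-value vs AR(1) null:", float(np.mean(null >= observed)))
```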

  17. Wavelet analysis in ecology and epidemiology: impact of statistical tests.

    PubMed

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-02-06

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the 'beta-surrogate' method.

  18. Detecting trends in raptor counts: power and type I error rates of various statistical tests

    USGS Publications Warehouse

    Hatfield, J.S.; Gould, W.R.; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.

    1996-01-01

    We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
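
    In the spirit of the simulations described above, the following sketch estimates the power of simple linear regression on the log scale to detect a 5%/year trend in exponential count data with lognormal sampling error (CV = 40%), for 10 versus 50 years of counts. The starting count, replication number, and one-tailed testing details are illustrative assumptions, not the study's exact protocol.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def simulate_counts(n_years, trend, cv=0.40):
    """Exponential trend with lognormal sampling error of a given CV."""
    years = np.arange(n_years)
    expected = 100 * (1 + trend) ** years
    sigma = np.sqrt(np.log(1 + cv ** 2))          # lognormal sigma for this CV
    return years, expected * rng.lognormal(-sigma ** 2 / 2, sigma, n_years)

def trend_test(years, counts, alpha=0.05):
    """Simple linear regression of log(counts) on year (upper-tailed test)."""
    res = stats.linregress(years, np.log(counts))
    p_upper = res.pvalue / 2 if res.slope > 0 else 1 - res.pvalue / 2
    return p_upper < alpha

# Crude power estimate for detecting a +5%/year trend over 10 vs. 50 years.
for n_years in (10, 50):
    power = np.mean([trend_test(*simulate_counts(n_years, 0.05))
                     for _ in range(1000)])
    print(f"n = {n_years} years: estimated power = {power:.2f}")
```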

  19. Space Suit Joint Torque Testing

    NASA Technical Reports Server (NTRS)

    Valish, Dana J.

    2011-01-01

    In 2009 and early 2010, a test was performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design meets the requirements. However, because the original test was set up and conducted by a single test operator, there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future space suits. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data were compared using graphical and statistical analysis, and a variance in torque values for some of the tested joints was apparent. Potential variables that could have affected the data were identified, and re-testing was conducted in an attempt to eliminate these variables. The results of the retest will be used to determine if further testing and modification is necessary before the method can be validated.

  20. Version 2.0 Visual Sample Plan (VSP): UXO Module Code Description and Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Richard O.; Wilson, John E.; O'Brien, Robert F.

    2003-05-06

    The Pacific Northwest National Laboratory (PNNL) is developing statistical methods for determining the amount of geophysical surveying conducted along transects (swaths) that is needed to achieve specified levels of confidence of finding target areas (TAs) of anomalous readings and possibly unexploded ordnance (UXO) at closed, transferring, and transferred (CTT) Department of Defense (DoD) ranges and other sites. The statistical methods developed by PNNL have been coded into the UXO module of the Visual Sample Plan (VSP) software code that is being developed by PNNL with support from the DoD, the U.S. Department of Energy (DOE), and the U.S. Environmental Protection Agency (EPA). (The VSP software and VSP Users Guide (Hassig et al., 2002) may be downloaded from http://dqo.pnl.gov/vsp.) This report describes and documents the statistical methods developed and the calculations and verification testing that have been conducted to verify that VSP's implementation of these methods is correct and accurate.

  1. Impact of training of traditional birth attendants on the newborn care.

    PubMed

    Satishchandra, D M; Naik, V A; Wantamutte, A S; Mallapur, M D

    2009-01-01

    To study the impact of training of Traditional Birth Attendants (TBAs) on newborn care in a resource-poor rural setting. A community-based study in a Primary Health Center (PHC) area was conducted over a one-year period between March 2006 and February 2007. The study participants were 50 Traditional Birth Attendants (TBAs) who conduct home deliveries in the PHC area. Training was conducted over two days and covered techniques for conducting safe delivery and newborn care practices. A pre-test evaluation of knowledge and practices about newborn care was done. Post-test evaluations were done at the first month (early) and at the fifth month (late) after the training. Analysis was done using McNemar's test, the Chi-square test with Yates' correction, and Fisher's exact test. The pre-test evaluation showed that knowledge and practices about newborn care services provided by the previously trained TBAs and untrained TBAs were poor. The early and late post-test evaluations showed a progressive improvement in the newborn care provided by both groups. Comparison of the preintervention period (one year prior to the training) and the postintervention period (one year after the training) showed a statistically significant (p<0.05) reduction in perinatal deaths (11 to 3) and neonatal deaths (10 to 2) among the deliveries conducted by TBAs after the training. A training programme for TBAs with regular reinforcement in resource-poor settings will not only improve the quality of newborn care but also reduce perinatal deaths.
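
    A minimal sketch of the paired-proportions comparison named above (McNemar's test), using hypothetical pre/post counts for the 50 TBAs rather than the study's data; the table entries are invented for illustration.

```python
from scipy import stats

# Hypothetical paired pre/post results for the same 50 TBAs on one knowledge item:
#                 post correct   post incorrect
# pre correct           18              2
# pre incorrect         24              6
b, c = 2, 24                         # the discordant pairs drive McNemar's test
result = stats.binomtest(min(b, c), b + c, 0.5)   # exact (binomial) McNemar test
print(f"discordant pairs = {b + c}, exact p = {result.pvalue:.5f}")
```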

  2. Reaction times to weak test lights. [psychophysics biological model

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.; Ahumada, P.; Welsh, D.

    1984-01-01

    Maloney and Wandell (1984) describe a model of the response of a single visual channel to weak test lights. The initial channel response is a linearly filtered version of the stimulus. The filter output is randomly sampled over time. Each time a sample occurs, there is some probability, increasing with the magnitude of the sampled response, that a discrete detection event is generated. Maloney and Wandell derive the statistics of the detection events. In this paper a test is conducted of the hypothesis that reaction time responses to the presence of a weak test light are initiated at the first detection event. This makes it possible to extend the application of the model to lights that are slightly above threshold, but still within the linear operating range of the visual system. A parameter-free prediction of the model proposed by Maloney and Wandell for lights detected by this statistic is tested. The data are in agreement with the prediction.

  3. Interlaboratory round robin study on axial tensile properties of SiC-SiC CMC tubular test specimens [Interlaboratory round robin study on axial tensile properties of SiC/SiC tubes]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Gyanender P.; Gonczy, Steve T.; Deck, Christian P.

    An interlaboratory round robin study was conducted on the tensile strength of SiC-SiC ceramic matrix composite (CMC) tubular test specimens at room temperature with the objective of expanding the database of mechanical properties of nuclear grade SiC-SiC and establishing the precision and bias statement for standard test method ASTM C1773. The mechanical properties statistics from the round robin study and the precision statistics and precision statement are presented herein. The data show reasonable consistency across the laboratories, indicating that the current C1773-13 ASTM standard is adequate for testing ceramic fiber reinforced ceramic matrix composite tubular test specimens. Furthermore, it was found that the distribution of ultimate tensile strength data was best described with a two-parameter Weibull distribution, while a lognormal distribution provided a good description of the distribution of proportional limit stress data.
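
    As a rough illustration of the distribution fitting reported above, the following sketch fits a two-parameter Weibull distribution (location fixed at zero) to simulated ultimate tensile strength values and a lognormal distribution to simulated proportional limit stress values; all numbers are stand-ins, not round-robin measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Stand-in ultimate tensile strength data (MPa) for tubular specimens;
# the values are simulated, not the round-robin measurements.
uts = rng.weibull(8.0, 60) * 220.0

# Two-parameter Weibull fit (location fixed at zero), as used for UTS.
shape, _, scale = stats.weibull_min.fit(uts, floc=0)
print(f"Weibull shape (modulus) = {shape:.1f}, scale = {scale:.1f} MPa")

# Lognormal fit, reported as a good description of proportional limit stress.
pls = rng.lognormal(np.log(120.0), 0.10, 60)
sigma, _, med = stats.lognorm.fit(pls, floc=0)
print(f"lognormal sigma = {sigma:.2f}, median = {med:.1f} MPa")
```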

  4. Interlaboratory round robin study on axial tensile properties of SiC-SiC CMC tubular test specimens [Interlaboratory round robin study on axial tensile properties of SiC/SiC tubes]

    DOE PAGES

    Singh, Gyanender P.; Gonczy, Steve T.; Deck, Christian P.; ...

    2018-04-19

    An interlaboratory round robin study was conducted on the tensile strength of SiC-SiC ceramic matrix composite (CMC) tubular test specimens at room temperature with the objective of expanding the database of mechanical properties of nuclear grade SiC-SiC and establishing the precision and bias statement for standard test method ASTM C1773. The mechanical properties statistics from the round robin study and the precision statistics and precision statement are presented herein. The data show reasonable consistency across the laboratories, indicating that the current C1773-13 ASTM standard is adequate for testing ceramic fiber reinforced ceramic matrix composite tubular test specimens. Furthermore, it was found that the distribution of ultimate tensile strength data was best described with a two-parameter Weibull distribution, while a lognormal distribution provided a good description of the distribution of proportional limit stress data.

  5. Static renewal tests using Pimephales promelas (fathead minnows) and Ceriodaphnia dubia (daphnids). Clinch River-Environmental Restoration Program (CR-ERP) pilot study, ambient water toxicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simbeck, D.J.

    1993-12-31

    Clinch River-Environmental Restoration Program (CR-ERP) personnel and Tennessee Valley Authority (TVA) personnel conducted a pilot study during the week of April 22--29, 1993, prior to initiation of CR-ERP Phase 2 Sampling and Analysis activities. The organisms specified for testing were larval fathead minnows, Pimephales promelas, and the daphnid, Ceriodaphnia dubia. Surface water samples were collected by TVA Field Engineering personnel from Clinch River Mile 9.0 and Poplar Creek Kilometer 1.6 on April 21, 23, and 26. Samples were split and provided to the CR-ERP and TVA toxicology laboratories for testing. Exposure of test organisms to these samples resulted in no toxicity (survival, growth, or reproduction) to either species in testing conducted by TVA. Attachments to this report include: Chain of custody forms -- originals; Toxicity test bench sheets and statistical analyses; Reference toxicant test information; and Personnel training documentation.

  6. Static renewal tests using Anodonta imbecillis (freshwater mussels). Anodonta imbecillis QA test 1, Clinch River-Environmental Restoration Program (CR-ERP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simbeck, D.J.

    1993-12-31

    Toxicity testing of split whole sediment samples using juvenile freshwater mussels (Anodonta imbecillis) was conducted by TVA and CR-ERP personnel as part of the CR-ERP biomonitoring study of Clinch River sediments to provide a quality assurance mechanism for test organism quality and overall performance of the test. In addition, testing included procedures comparing daily renewal versus non-renewal of test sediments. Testing of sediment samples collected July 15 from Poplar Creek Miles 6.0 and 5.1 was conducted from July 21--30, 1993. Results from this test showed no toxicity (survival effects) to fresh-water mussels during a 9-day exposure to the sediments. Side by side testing of sediments with daily sediment renewal and no sediment renewal showed no differences between methods. This may be due to the absence of toxicity in both samples and may not reflect true differences between the two methods for toxic sediment. Attachments to this report include: Chain of custody forms -- originals; Toxicity test bench sheets and statistical analyses; and Ammonia analysis request and results.

  7. Comparison between reflectivity statistics at heights of 3 and 6 km and rain rate statistics at ground level

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1975-01-01

    An experiment was conducted to study the relations between the empirical distribution functions of reflectivity at specified locations above the surface and the corresponding functions at the surface. A bistatic radar system was used to measure continuously the scattering cross section per unit volume at heights of 3 and 6 km. A frequency of 3.7 GHz was used in the tests. It was found that the distribution functions for reflectivity may significantly change with height at heights below the level of the melting layer.

  8. Statistical classification approach to discrimination between weak earthquakes and quarry blasts recorded by the Israel Seismic Network

    NASA Astrophysics Data System (ADS)

    Kushnir, A. F.; Troitsky, E. V.; Haikin, L. M.; Dainty, A.

    1999-06-01

    A semi-automatic procedure has been developed to achieve statistically optimum discrimination between earthquakes and explosions at local or regional distances based on a learning set specific to a given region. The method is used for step-by-step testing of candidate discrimination features to find the optimum (combination) subset of features, with the decision taken on a rigorous statistical basis. Linear (LDF) and Quadratic (QDF) Discriminant Functions based on Gaussian distributions of the discrimination features are implemented and statistically grounded; the features may be transformed by the Box-Cox transformation z = (y^α - 1)/α to make them more Gaussian. Tests of the method were successfully conducted on seismograms from the Israel Seismic Network using features consisting of spectral ratios between and within phases. Results showed that the QDF was more effective than the LDF and required five features out of 18 candidates for the optimum set. It was found that discrimination improved with increasing distance within the local range, and that eliminating transformation of the features and failing to correct for noise led to degradation of discrimination.
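
    A minimal sketch of the classification recipe described above: Box-Cox transform the features, z = (y^alpha - 1)/alpha, then fit Gaussian linear and quadratic discriminant functions. The simulated spectral-ratio-like features, class sizes, and the use of training accuracy are illustrative assumptions; this is not the Israel Seismic Network processing chain.

```python
import numpy as np
from scipy import stats
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(8)

# Toy discrimination features (e.g., positive spectral ratios) for a learning
# set of earthquakes (class 0) and quarry blasts (class 1); values are simulated.
n = 200
X = np.vstack([rng.lognormal(0.0, 0.5, (n, 3)),
               rng.lognormal(0.6, 0.5, (n, 3))])
y = np.repeat([0, 1], n)

# Box-Cox transform each feature, z = (y**alpha - 1) / alpha, to make it
# closer to Gaussian before fitting the Gaussian discriminant functions.
Z = np.column_stack([stats.boxcox(X[:, j])[0] for j in range(X.shape[1])])

for name, clf in [("LDF", LinearDiscriminantAnalysis()),
                  ("QDF", QuadraticDiscriminantAnalysis())]:
    score = clf.fit(Z, y).score(Z, y)
    print(f"{name} training accuracy: {score:.2f}")
```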

  9. Approximations to the distribution of a test statistic in covariance structure analysis: A comprehensive study.

    PubMed

    Wu, Hao

    2018-05-01

    In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ² distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ² distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.

  10. A Test of the Effectiveness of Time Management Training in a Department of the Navy Program Management Office (PMO)

    DTIC Science & Technology

    1977-05-01

    An experiment, designed to introduce time management concepts, was conducted with 33 volunteers from a Department of the Navy PMO -- the experimental...group. The instruments used to conduct the experiment were a Time Management Survey and a Time Management Questionnaire. The survey was used to...data obtained from the experimental group were statistically compared with similar data from a control group. Time management principles and ’tips’ on

  11. Response of SiC{sub f}/Si{sub 3}N{sub 4} composites under static and cyclic loading -- An experimental and statistical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahfuz, H.; Maniruzzaman, M.; Vaidya, U.

    1997-04-01

    Monotonic tensile and fatigue response of continuous silicon carbide fiber reinforced silicon nitride (SiC{sub f}/Si{sub 3}N{sub 4}) composites has been investigated. The monotonic tensile tests have been performed at room and elevated temperatures. Fatigue tests have been conducted at room temperature (RT), at a stress ratio R = 0.1 and a frequency of 5 Hz. It is observed during the monotonic tests that the composites retain only 30% of their room-temperature strength at 1,600 °C, suggesting a substantial chemical degradation of the matrix at that temperature. The softening of the matrix at elevated temperature also causes a reduction in tensile modulus, and the total reduction in modulus is around 45%. Fatigue data have been generated at three load levels and the fatigue strength of the composite has been found to be considerably high, about 75% of the ultimate room-temperature strength. Extensive statistical analysis has been performed to understand the degree of scatter in the fatigue as well as in the static test data. Weibull shape factors and characteristic values have been determined for each set of tests and their relationship with the response of the composites has been discussed. A statistical fatigue life prediction method developed from the Weibull distribution is also presented. A maximum likelihood estimator with censoring techniques and data pooling schemes has been employed to determine the distribution parameters for the statistical analysis. These parameters have been used to generate the S-N diagram with the desired level of reliability. Details of the statistical analysis and the discussion of the static and fatigue behavior of the composites are presented in this paper.
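    A minimal sketch of the Weibull analysis mentioned above, under simplifying assumptions: the cycles-to-failure data are synthetic and uncensored, whereas the paper additionally uses censoring and data-pooling in its maximum likelihood estimation.

    ```python
    # Fit a two-parameter Weibull distribution to (fake) fatigue life data by maximum
    # likelihood and read off a life at a desired reliability level.
    from scipy.stats import weibull_min

    cycles = weibull_min.rvs(c=2.0, scale=5.0e4, size=30, random_state=1)  # fake data

    # Location fixed at zero gives the usual two-parameter Weibull fit.
    shape, _, scale = weibull_min.fit(cycles, floc=0)
    print(f"shape (beta) = {shape:.2f}, characteristic life (eta) = {scale:.0f} cycles")

    # Life with 90% reliability, i.e. the 10th percentile of the fitted distribution.
    n10 = weibull_min.ppf(0.10, shape, loc=0, scale=scale)
    print(f"B10 life (90% reliability): {n10:.0f} cycles")
    ```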

  12. Advanced Hydraulic Fracturing Technology for Unconventional Tight Gas Reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephen Holditch; A. Daniel Hill; D. Zhu

    2007-06-19

    The objectives of this project are to develop and test new techniques for creating extensive, conductive hydraulic fractures in unconventional tight gas reservoirs by statistically assessing the productivity achieved in hundreds of field treatments with a variety of current fracturing practices ranging from 'water fracs' to conventional gel fracture treatments; by laboratory measurements of the conductivity created with high rate proppant fracturing using an entirely new conductivity test - the 'dynamic fracture conductivity test'; and by developing design models to implement the optimal fracture treatments determined from the field assessment and the laboratory measurements. One of the tasks of this project is to create an 'advisor' or expert system for completion, production and stimulation of tight gas reservoirs. A central part of this study is an extensive survey of the productivity of hundreds of tight gas wells that have been hydraulically fractured. We have been conducting an extensive literature search of the SPE eLibrary, DOE, Gas Technology Institute (GTI), Bureau of Economic Geology and IHS Energy for publicly available technical reports about procedures for drilling, completion and production of tight gas wells. We have downloaded numerous papers and read and summarized the information to build a database that will contain field treatment data, organized by geographic location, and hydraulic fracture treatment design data, organized by treatment type. We have conducted an experimental study of the 'dynamic fracture conductivity' created when proppant slurries are pumped into hydraulic fractures in tight gas sands. Unlike conventional fracture conductivity tests, in which proppant is loaded into the fracture artificially, we pump proppant/frac fluid slurries into a fracture cell, dynamically placing the proppant just as it occurs in the field. From such tests, we expect to gain new insights into some of the critical issues in tight gas fracturing, in particular the roles of gel damage, polymer loading (water-frac versus gel frac), and proppant concentration on the created fracture conductivity. To achieve this objective, we have designed the experimental apparatus to conduct the dynamic fracture conductivity tests. The experimental apparatus has been built and some preliminary tests have been conducted to test the apparatus.

  13. Modeling 3-D permeability distribution in alluvial fans using facies architecture and geophysical acquisitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lin; Gong, Huili; Dai, Zhenxue

    Alluvial fans are highly heterogeneous in hydraulic properties due to complex depositional processes, which make it difficult to characterize the spatial distribution of the hydraulic conductivity (K). An original methodology is developed to identify the spatial statistical parameters (mean, variance, correlation range) of the hydraulic conductivity in a three-dimensional (3-D) setting by using geological and geophysical data. More specifically, a large number of inexpensive vertical electric soundings are integrated with a facies model developed from borehole lithologic data to simulate the log10(K) continuous distributions in multiple-zone heterogeneous alluvial megafans. The Chaobai River alluvial fan in the Beijing Plain, China, is used as an example to test the proposed approach. Due to the non-stationary property of the K distribution in the alluvial fan, a multiple-zone parameterization approach is applied to analyze the conductivity statistical properties of different hydrofacies in the various zones. The composite variance in each zone is computed to describe the evolution of the conductivity along the flow direction. Consistent with the scales of the sedimentary transport energy, the results show that conductivity variances of fine sand, medium-coarse sand, and gravel decrease from the upper (zone 1) to the lower (zone 3) portion along the flow direction. In zone 1, sediments were moved by higher-energy flooding, which induces poor sorting and larger conductivity variances. The composite variance confirms this feature with statistically different facies from zone 1 to zone 3. Lastly, the results of this study provide insights to improve our understanding of conductivity heterogeneity and a method for characterizing the spatial distribution of K in alluvial fans.

  14. Modeling 3-D permeability distribution in alluvial fans using facies architecture and geophysical acquisitions

    DOE PAGES

    Zhu, Lin; Gong, Huili; Dai, Zhenxue; ...

    2017-02-03

    Alluvial fans are highly heterogeneous in hydraulic properties due to complex depositional processes, which make it difficult to characterize the spatial distribution of the hydraulic conductivity (K). An original methodology is developed to identify the spatial statistical parameters (mean, variance, correlation range) of the hydraulic conductivity in a three-dimensional (3-D) setting by using geological and geophysical data. More specifically, a large number of inexpensive vertical electric soundings are integrated with a facies model developed from borehole lithologic data to simulate the log10(K) continuous distributions in multiple-zone heterogeneous alluvial megafans. The Chaobai River alluvial fan in the Beijing Plain, China, is used as an example to test the proposed approach. Due to the non-stationary property of the K distribution in the alluvial fan, a multiple-zone parameterization approach is applied to analyze the conductivity statistical properties of different hydrofacies in the various zones. The composite variance in each zone is computed to describe the evolution of the conductivity along the flow direction. Consistent with the scales of the sedimentary transport energy, the results show that conductivity variances of fine sand, medium-coarse sand, and gravel decrease from the upper (zone 1) to the lower (zone 3) portion along the flow direction. In zone 1, sediments were moved by higher-energy flooding, which induces poor sorting and larger conductivity variances. The composite variance confirms this feature with statistically different facies from zone 1 to zone 3. Lastly, the results of this study provide insights to improve our understanding of conductivity heterogeneity and a method for characterizing the spatial distribution of K in alluvial fans.

  15. Assessing the Preparedness Level of Incoming Principles of Accounting Students.

    ERIC Educational Resources Information Center

    Imel, Phillip W.

    2000-01-01

    Reports that the introductory level Principles of Accounting classes at Southwest Virginia Community College (SVCC) had high unsuccessful grade rates between 1989 and 1999. Describes a study conducted to determine whether there was a statistical difference in the test scores and GPA of successful versus unsuccessful accounting students. Finds that…

  16. Assessing Literacy: The Framework for the National Adult Literacy Survey.

    ERIC Educational Resources Information Center

    Campbell, Anne; And Others

    To satisfy federal requirements, the National Center for Education Statistics and the Division of Adult Education and Literacy planned a nationally representative household sample survey to assess the literacy skills of the adult population of the United States, to be conducted by the Educational Testing Service with the assistance of Westat, Inc.…

  17. The Analysis of Completely Randomized Factorial Experiments When Observations Are Lost at Random.

    ERIC Educational Resources Information Center

    Hummel, Thomas J.

    An investigation was conducted of the characteristics of two estimation procedures and corresponding test statistics used in the analysis of completely randomized factorial experiments when observations are lost at random. For one estimator, contrast coefficients for cell means did not involve the cell frequencies. For the other, contrast…

  18. On the Sensible Application of Familywise Alpha Adjustment.

    ERIC Educational Resources Information Center

    Tutzauer, Frank

    2003-01-01

    Responds to Daniel O'Keefe's "Against Familywise Alpha Adjustment," where O'Keefe maintains that one should never attempt to control Type I error introduced when many statistical tests are conducted. Argues that alpha adjustment should be applied only in the narrowly circumscribed instance when the researcher wants to make a strong claim…

  19. Learning Patterns as Criterion for Forming Work Groups in 3D Simulation Learning Environments

    ERIC Educational Resources Information Center

    Maria Cela-Ranilla, Jose; Molías, Luis Marqués; Cervera, Mercè Gisbert

    2016-01-01

    This study analyzes the relationship between the use of learning patterns as a grouping criterion to develop learning activities in the 3D simulation environment at University. Participants included 72 Spanish students from the Education and Marketing disciplines. Descriptive statistics and non-parametric tests were conducted. The process was…

  20. 75 FR 47592 - Final Test Guideline; Product Performance of Skin-applied Insect Repellents of Insect and Other...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-06

    ... considerations affecting the design and conduct of repellent studies when human subjects are involved. Any... recommendations for the design and execution of studies to evaluate the performance of pesticide products intended... recommends appropriate study designs and methods for selecting subjects, statistical analysis, and reporting...

  1. Avowed Happiness in Confucian Asia: Ascertaining Its Distribution, Patterns, and Sources

    ERIC Educational Resources Information Center

    Shin, Doh Chull; Inoguchi, Takashi

    2009-01-01

    This study reviewed three philosophical accounts of happiness, and then tested those accounts with the Asiabarometer surveys conducted in six Confucian societies during the summer of 2006. Statistical analyses of these surveys reveal that East Asians tend to experience happiness to a greater extent when they experience enjoyment together with…

  2. Effects of ozone (O3) therapy on cisplatin-induced ototoxicity in rats.

    PubMed

    Koçak, Hasan Emre; Taşkın, Ümit; Aydın, Salih; Oktay, Mehmet Faruk; Altınay, Serdar; Çelik, Duygu Sultan; Yücebaş, Kadir; Altaş, Bengül

    2016-12-01

    The aim of this study is to investigate the effect of rectal ozone and intratympanic ozone therapy on cisplatin-induced ototoxicity in rats. Eighteen female Wistar albino rats were included in our study. External auditory canal and tympanic membrane examinations were normal in all rats. The rats were randomly divided into three groups. Initially, all the rats were tested with distortion product otoacoustic emissions (DPOAE), and emissions were measured normally. All rats were injected with 5 mg/kg/day cisplatin for 3 days intraperitoneally. Ototoxicity had developed in all rats, as confirmed with DPOAE after 1 week. Group 1 received both rectal and intratympanic ozone therapy. No treatment was administered to the rats in Group 2, the control group. The rats in Group 3 were treated with rectal ozone. All the rats were tested with DPOAE under general anesthesia, and all were sacrificed for pathological examination 1 week after ozone administration. Their cochleas were removed. The outer hair cell damage and stria vascularis damage were examined. In the statistical analysis conducted, a statistically significant difference between Group 1 and Group 2 was observed at all frequencies according to the DPOAE test. In addition, between Group 2 and Group 3, a statistically significant difference was observed in the DPOAE test. However, a statistically significant difference was not observed between Group 1 and Group 3 according to the DPOAE test. According to histopathological scoring, the outer hair cell damage score was statistically significantly higher in Group 2 compared with Group 1. In addition, the outer hair cell damage score was also statistically significantly higher in Group 2 compared with Group 3. Outer hair cell damage scores were low in Group 1 and Group 3, but there was no statistically significant difference between these groups. There was no statistically significant difference between the groups in terms of stria vascularis damage score examinations. Systemic ozone gas therapy is effective in the treatment of cell damage in cisplatin-induced ototoxicity. The intratympanic administration of ozone gas does not have any additional advantage over rectal administration.

  3. Perception of Community Pharmacists towards Dispensing Errors in Community Pharmacy Setting in Gondar Town, Northwest Ethiopia

    PubMed Central

    2017-01-01

    Background Dispensing errors are inevitable occurrences in community pharmacies across the world. Objective This study aimed to identify the community pharmacists' perception towards dispensing errors in the community pharmacies in Gondar town, Northwest Ethiopia. Methods A cross-sectional study was conducted among 47 community pharmacists selected through convenience sampling. Data were analyzed using SPSS version 20. Descriptive statistics, Mann–Whitney U test, and Pearson's Chi-square test of independence were conducted with P ≤ 0.05 considered statistically significant. Result The majority of respondents were in the 23–28-year age group (N = 26, 55.3%) and with at least B.Pharm degree (N = 25, 53.2%). Poor prescription handwriting and similar/confusing names were perceived to be the main contributing factors while all the strategies and types of dispensing errors were highly acknowledged by the respondents. Group differences (P < 0.05) in opinions were largely due to educational level and age. Conclusion Dispensing errors were associated with prescribing quality and design of dispensary as well as dispensing procedures. Opinion differences relate to age and educational status of the respondents. PMID:28612023

  4. Perception of Community Pharmacists towards Dispensing Errors in Community Pharmacy Setting in Gondar Town, Northwest Ethiopia.

    PubMed

    Asmelashe Gelayee, Dessalegn; Binega Mekonnen, Gashaw

    2017-01-01

    Dispensing errors are inevitable occurrences in community pharmacies across the world. This study aimed to identify the community pharmacists' perception towards dispensing errors in the community pharmacies in Gondar town, Northwest Ethiopia. A cross-sectional study was conducted among 47 community pharmacists selected through convenience sampling. Data were analyzed using SPSS version 20. Descriptive statistics, Mann-Whitney U test, and Pearson's Chi-square test of independence were conducted with P ≤ 0.05 considered statistically significant. The majority of respondents were in the 23-28-year age group ( N = 26, 55.3%) and with at least B.Pharm degree ( N = 25, 53.2%). Poor prescription handwriting and similar/confusing names were perceived to be the main contributing factors while all the strategies and types of dispensing errors were highly acknowledged by the respondents. Group differences ( P < 0.05) in opinions were largely due to educational level and age. Dispensing errors were associated with prescribing quality and design of dispensary as well as dispensing procedures. Opinion differences relate to age and educational status of the respondents.

  5. Design and fabrication of composite wing panels containing a production splice

    NASA Technical Reports Server (NTRS)

    Reed, D. L.

    1975-01-01

    Bolted specimens representative of both upper and lower wing surface splices of a transport aircraft were designed and manufactured for static and random-load tension and compression fatigue testing, including ground-air-ground load reversals. The specimens were fabricated with graphite-epoxy composite material. Multiple tests were conducted at various load levels and the results were used as input to a statistical wearout model. The statically designed specimens performed very well under highly magnified fatigue loadings. Two large panels, one for tension and one for compression, were fabricated for testing by NASA-LRC.

  6. Evaluation program for secondary spacecraft cells: Seventeenth annual report of cycle life test

    NASA Technical Reports Server (NTRS)

    Harkness, J. D.

    1981-01-01

    Acceptance tests were conducted on nickel cadmium, silver cadmium, and silver zinc cells to ensure that all cells put into the life cycle program meet the specifications outlined in the respective purchase contracts. Statistical information is presented on cell performance characteristics and limitations. Weaknesses discovered in cell design are reported and aid in research and development efforts toward improving the reliability of space batteries. Battery weaknesses encountered in satellite programs such as IMP, NIMBUS, OGO, OAO, SAS, and TETR were studied and remedied through special tests.

  7. An educational program about premarital screening for unmarried female students in King Abdul-Aziz University, Jeddah.

    PubMed

    Ibrahim, Nahla Khamis Ragab; Al-Bar, Hussein; Al-Fakeeh, Ali; Al Ahmadi, Jawaher; Qadi, Mahdi; Al-Bar, Adnan; Milaat, Waleed

    2011-03-01

    The present study was conducted to assess the knowledge and attitudes of unmarried female students at King Abdul-Aziz University (KAU) towards the premarital screening (PMS) program, to determine predictors of high students' knowledge scores, and to improve their knowledge about PMS through an educational campaign. A multi-stage stratified random sampling method was used, with recruitment of 1563 students from all faculties of KAU during the educational year 2008-2009. The pre-test included 30 knowledge items and 14 attitude statements, with students responding on a 5-point Likert scale. Health education was conducted using audiovisual aids and pre-designed educational materials. Statistical analysis was done with SPSS version 16. Students' knowledge about the program was generally low before the educational campaign. The predictors of high knowledge scores were being a health science student (aOR=4.15; 95% CI: 2.97-5.81), age ≥20 years (aOR=2.78; 95% CI: 2.01-3.85), family history of hereditary diseases, and income ≥10,000 SR/month. Regarding attitude, almost all students (99.0%) agreed on the importance of PMS. After the educational program, students' knowledge about PMS was markedly improved. The mean students' knowledge score was 9.85 ± 5.36 in the pre-test and improved to 18.45 ± 4.96 in the post-test, a highly statistically significant difference (paired t=25.40, p<0.000). The educational program was successful in improving students' knowledge about PMS. Conducting similar educational programs and adding PMS to the curricula of secondary and university education are recommended. Copyright © 2010 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.

  8. Eddy current crack detection capability assessment approach using crack specimens with differing electrical conductivity

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2018-03-01

    Like other NDE methods, eddy current surface crack detectability is determined using a probability of detection (POD) demonstration. The POD demonstration involves eddy current testing of surface crack specimens with known crack sizes. The reliably detectable flaw size, denoted a90/95, is determined by statistical analysis of POD test data. The surface crack specimens shall be made from a similar material with electrical conductivity close to the part conductivity. A calibration standard with electro-discharge machined (EDM) notches is typically used in eddy current testing for surface crack detection. The calibration standard conductivity shall be within +/- 15% of the part conductivity. This condition is also applicable to the POD demonstration crack set. Here, a case is considered where the conductivity of the crack specimens available for POD testing differs by more than 15% from that of the part to be inspected. Therefore, a direct POD demonstration of the reliably detectable flaw size is not applicable, and additional testing is necessary to use the demonstrated POD test data. An approach is provided to estimate the reliably detectable flaw size in eddy current testing for a part made from material A using POD crack specimens made from material B with different conductivity. The approach uses additional test data obtained on EDM notch specimens made from materials A and B. EDM notch test data from the two materials are used to create a transfer function between the demonstrated a90/95 size on crack specimens made of material B and the estimated a90/95 size for a part made of material A. Two methods are given. For method A, the a90/95 crack size for material B is given and POD data are available; the objective is to determine the a90/95 crack size for material A using the same relative decision threshold that was used for material B. For method B, the target crack size a90/95 for material A is known; the objective is to determine the decision threshold for inspecting material A.
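    The record above defines a90/95 but gives no computation; the sketch below is a generic hit/miss POD illustration rather than the author's transfer-function procedure: fit a logistic POD curve on log crack size, invert it for a90, and bootstrap a rough 95% upper bound as a stand-in for a90/95. Crack sizes and hit/miss outcomes are synthetic.

    ```python
    # Hit/miss POD sketch: logistic regression on log crack size, a90 by inversion,
    # and a bootstrap percentile as a rough upper confidence bound.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    size = rng.uniform(0.2, 3.0, 200)                       # hypothetical crack sizes (mm)
    p_true = 1.0 / (1.0 + np.exp(-(np.log(size) - np.log(0.8)) / 0.25))
    hit = rng.binomial(1, p_true)                           # 1 = detected, 0 = missed

    def a90(sizes, hits):
        X = sm.add_constant(np.log(sizes))
        b0, b1 = sm.Logit(hits, X).fit(disp=0).params
        return np.exp((np.log(0.9 / 0.1) - b0) / b1)        # invert the logit at POD = 0.9

    boot = [a90(size[idx], hit[idx])
            for idx in (rng.integers(0, len(size), len(size)) for _ in range(500))]
    print(f"a90 = {a90(size, hit):.2f} mm, bootstrap upper bound ~ {np.percentile(boot, 95):.2f} mm")
    ```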

  9. Bayesian estimation of the transmissivity spatial structure from pumping test data

    NASA Astrophysics Data System (ADS)

    Demir, Mehmet Taner; Copty, Nadim K.; Trinchero, Paolo; Sanchez-Vila, Xavier

    2017-06-01

    Estimating the statistical parameters (mean, variance, and integral scale) that define the spatial structure of the transmissivity or hydraulic conductivity fields is a fundamental step for the accurate prediction of subsurface flow and contaminant transport. In practice, the determination of the spatial structure is a challenge because of spatial heterogeneity and data scarcity. In this paper, we describe a novel approach that uses time-drawdown data from multiple pumping tests to determine the transmissivity statistical spatial structure. The method builds on the pumping test interpretation procedure of Copty et al. (2011) (Continuous Derivation method, CD), which uses the time-drawdown data and its time derivative to estimate apparent transmissivity values as a function of radial distance from the pumping well. A Bayesian approach is then used to infer the statistical parameters of the transmissivity field by combining prior information about the parameters and the likelihood function expressed in terms of radially-dependent apparent transmissivities determined from pumping tests. A major advantage of the proposed Bayesian approach is that the likelihood function is readily determined from randomly generated multiple realizations of the transmissivity field, without the need to solve the groundwater flow equation. Applying the method to synthetically generated pumping test data, we demonstrate that, through a relatively simple procedure, information on the spatial structure of the transmissivity may be inferred from pumping test data. It is also shown that the prior parameter distribution has a significant influence on the estimation procedure, given its non-uniqueness. Results also indicate that the reliability of the estimated transmissivity statistical parameters increases with the number of available pumping tests.
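    The key idea above, building a likelihood from randomly generated realizations instead of solving the flow equation, can be caricatured in a few lines. The sketch below is heavily simplified and is not the authors' method: the radially dependent apparent transmissivity is replaced by i.i.d. normal draws of log10(T), the prior is flat on a coarse grid, and all numbers are synthetic; for this toy model the likelihood could of course be written in closed form.

    ```python
    # Grid-based Bayesian sketch with a simulation-based (KDE) likelihood.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(3)
    obs = rng.normal(-3.0, 0.5, size=12)              # "observed" apparent log10(T)

    mus = np.linspace(-4.0, -2.0, 41)                 # grid over the mean
    sigmas = np.linspace(0.1, 1.2, 34)                # grid over the std deviation
    prior = np.ones((len(mus), len(sigmas)))          # flat prior for the sketch

    post = np.zeros_like(prior)
    for i, mu in enumerate(mus):
        for j, sig in enumerate(sigmas):
            # Likelihood approximated from many simulated realizations via a KDE.
            sims = rng.normal(mu, sig, size=1000)
            post[i, j] = prior[i, j] * np.prod(gaussian_kde(sims)(obs))
    post /= post.sum()

    i_best, j_best = np.unravel_index(post.argmax(), post.shape)
    print(f"posterior mode: mean ~ {mus[i_best]:.2f}, std ~ {sigmas[j_best]:.2f}")
    ```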

  10. Evaluating collective significance of climatic trends: A comparison of methods on synthetic data

    NASA Astrophysics Data System (ADS)

    Huth, Radan; Dubrovský, Martin

    2017-04-01

    The common approach to determine whether climatic trends are significantly different from zero is to conduct individual (local) tests at each single site (station or gridpoint). Whether the number of sites where the trends are significantly non-zero could have occurred by chance alone is almost never evaluated in trend studies. That is, the collective (global) significance of trends is ignored. We compare three approaches to evaluating the collective statistical significance of trends at a network of sites, using the following statistics: (i) the number of successful local tests (a successful test means here a test in which the null hypothesis of no trend is rejected); this is a standard way of assessing collective significance in various applications in atmospheric sciences; (ii) the smallest p-value among the local tests (Walker test); and (iii) the counts of positive and negative trends regardless of their magnitudes and local significance. The third approach is a new procedure that we propose; the rationale behind it is that it is reasonable to assume that the prevalence of one sign of trends at individual sites is indicative of a high confidence in the trend not being zero, regardless of the (in)significance of individual local trends. A potentially large amount of information contained in trends that are not locally significant, which are typically deemed irrelevant and neglected, is thus not lost and is retained in the analysis. In this contribution we examine the feasibility of the proposed way of significance testing on synthetic data, produced by a multi-site stochastic generator, and compare it with the two other ways of assessing collective significance, which are now well established. The synthetic dataset, mimicking annual mean temperature on an array of stations (or gridpoints), is constructed assuming a given statistical structure characterized by (i) spatial separation (density of the station network), (ii) local variance, (iii) temporal and spatial autocorrelations, and (iv) the trend magnitude. The probabilistic distributions of the three test statistics (null distributions) and critical values of the tests are determined from multiple realizations of the synthetic dataset in which no trend is imposed at any site (that is, any trend is a result of random fluctuations only). The procedure is then evaluated by determining the type II error (the probability of failing to detect a true trend) in the presence of a trend with a known magnitude, for which the synthetic dataset with an imposed spatially uniform non-zero trend is used. A sensitivity analysis is conducted for various combinations of the trend magnitude and spatial autocorrelation.
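    A minimal sketch of the comparison described above, under the simplifying assumption of spatially and temporally independent sites (the paper's stochastic generator also imposes spatial and temporal correlation): Monte Carlo null distributions are built for the three collective statistics and an observed trend field is scored against them.

    ```python
    # Collective significance of trends at many sites: count of locally significant
    # trends, smallest local p-value (Walker test), and count of positive slopes.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n_sites, n_years, n_mc = 50, 60, 500
    years = np.arange(n_years)

    def collective_stats(field):
        """field: (n_sites, n_years) -> (count of p<0.05, min p, count of positive slopes)."""
        slopes, pvals = [], []
        for series in field:
            res = stats.linregress(years, series)
            slopes.append(res.slope)
            pvals.append(res.pvalue)
        slopes, pvals = np.array(slopes), np.array(pvals)
        return (pvals < 0.05).sum(), pvals.min(), (slopes > 0).sum()

    # Null distributions: no trend, independent Gaussian noise at each site.
    null = np.array([collective_stats(rng.normal(0, 1, (n_sites, n_years)))
                     for _ in range(n_mc)])

    # One realization with a weak common trend imposed at every site.
    obs = collective_stats(rng.normal(0, 1, (n_sites, n_years)) + 0.01 * years)

    labels = ["# significant local tests", "Walker (min p)", "# positive slopes"]
    for k, name in enumerate(labels):
        # Upper tail for the counts, lower tail for the minimum p-value.
        tail = (null[:, k] <= obs[k]) if k == 1 else (null[:, k] >= obs[k])
        print(f"{name}: observed = {obs[k]:.3g}, Monte Carlo p ~ {tail.mean():.3f}")
    ```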

  11. A compilation of nuclear weapons test detonation data for U.S. Pacific ocean tests.

    PubMed

    Simon, S L; Robison, W L

    1997-07-01

    Prior to December 1993, the explosive yields of 44 of 66 nuclear tests conducted by the United States in the Marshall Islands were still classified. Following a request from the Government of the Republic of the Marshall Islands to the U.S. Department of Energy to release this information, the Secretary of Energy declassified and released to the public the explosive yields of the Pacific nuclear tests. This paper presents a synopsis of information on nuclear test detonations in the Marshall Islands and other locations in the mid-Pacific including dates, explosive yields, locations, weapon placement, and summary statistics.

  12. An Improved Thermal Conductivity Polyurethane Composite for a Space Borne 20KV Power Supply

    NASA Technical Reports Server (NTRS)

    Shapiro, Andrew A.; Haque, Inam

    2005-01-01

    This effort was designed to find a way to reduce the temperature rise of critical components of a 20KV High Voltage Power Supply (HVPS) by improving the overall thermal conductivity of the encapsulated modules. Three strategies were evaluated by developing complete procedures, preparing samples, and performing tests. The three strategies were: 1. Improve the thermal conductivity of the polyurethane encapsulant through the addition of thermally conductive powder while minimizing impact on other characteristics of the encapsulant. 2. Improve the thermal conductivity of the polyurethane encapsulated assembly by the addition of a slab of thermally conductive, electrically insulating material, which is to act as a heat spreader. 3. Employ a more thermally conductive substrate (Al2O3) with the existing encapsulation scheme. The materials were chosen based on the following criteria: high dielectric breakdown strength, high thermal conductivity, ease of manufacturing, high compliance, and other standard space-qualified materials properties (low out-gassing, etc.). An optimized cure was determined by a statistical design of experiments for both filled and unfilled materials. The materials were characterized for the desired properties and a complete process was developed and tested. The thermal performance was substantially improved and the strategies may be used for space flight.

  13. Evaluation of tools used to measure calcium and/or dairy consumption in adults.

    PubMed

    Magarey, Anthea; Baulderstone, Lauren; Yaxley, Alison; Markow, Kylie; Miller, Michelle

    2015-05-01

    To identify and critique tools for the assessment of Ca and/or dairy intake in adults, in order to ascertain the most accurate and reliable tools available. A systematic review of the literature was conducted using defined inclusion and exclusion criteria. Articles reporting on originally developed tools or testing the reliability or validity of existing tools that measure Ca and/or dairy intake in adults were included. Author-defined criteria for reporting reliability and validity properties were applied. Studies conducted in Western countries. Adults. Thirty papers, utilising thirty-six tools assessing intake of dairy, Ca or both, were identified. Reliability testing was conducted on only two dairy and five Ca tools, with results indicating that only one dairy and two Ca tools were reliable. Validity testing was conducted for all but four Ca-only tools. There was high reliance in validity testing on lower-order tests such as correlation and failure to differentiate between statistical and clinically meaningful differences. Results of the validity testing suggest one dairy and five Ca tools are valid. Thus one tool was considered both reliable and valid for the assessment of dairy intake and only two tools proved reliable and valid for the assessment of Ca intake. While several tools are reliable and valid, their application across adult populations is limited by the populations in which they were tested. These results indicate a need for tools that assess Ca and/or dairy intake in adults to be rigorously tested for reliability and validity.

  14. Exocrine Dysfunction Correlates with Endocrinal Impairment of Pancreas in Type 2 Diabetes Mellitus.

    PubMed

    Prasanna Kumar, H R; Gowdappa, H Basavana; Hosmani, Tejashwi; Urs, Tejashri

    2018-01-01

    Diabetes mellitus (DM) is a chronic abnormal metabolic condition, which manifests as elevated blood sugar levels over a prolonged period. The pancreatic endocrine system generally gets affected during diabetes, but abnormal exocrine functions are also often manifested due to its proximity to the endocrine system. Fecal elastase-1 (FE-1) is found to be an ideal biomarker to reflect the exocrine insufficiency of the pancreas. The aim of this study was to assess exocrine dysfunction of the pancreas in patients with type-2 DM (T2DM) by measuring FE levels and to associate the level of hyperglycemia with exocrine pancreatic dysfunction. A prospective, cross-sectional comparative study was conducted on both T2DM patients and healthy nondiabetic volunteers. FE-1 levels were measured using a commercial kit (Human Pancreatic Elastase ELISA BS 86-01 from Bioserv Diagnostics). Data analysis was performed using statistical parameters such as the mean, standard deviation, and standard error, the independent-samples t-test, and the Chi-square test/cross tabulation in SPSS for Windows version 20.0. A statistically nonsignificant (P = 0.5051) relationship between FE-1 deficiency and age was obtained, implying that age is a noncontributing factor toward exocrine pancreatic insufficiency among diabetic patients. A statistically significant correlation (P = 0.003) between glycated hemoglobin and FE-1 levels was also noted. The associations of retinopathy (P = 0.001) and peripheral pulses (P = 0.001) with FE-1 levels were found to be statistically significant. This study validates the benefit of FE-1 estimation as a surrogate marker of exocrine pancreatic insufficiency, which remains unmanifest and subclinical.

  15. Effect of Chocobar Ice Cream Containing Bifidobacterium on Salivary Streptococcus mutans and Lactobacilli: A Randomised Controlled Trial.

    PubMed

    Nagarajappa, Ramesh; Daryani, Hemasha; Sharda, Archana J; Asawa, Kailash; Batra, Mehak; Sanadhya, Sudhanshu; Ramesh, Gayathri

    2015-01-01

    To examine the effect of chocobar ice cream containing bifidobacteria on salivary mutans streptococci and lactobacilli. A double-blind, randomised controlled trial was conducted with 30 subjects (18 to 22 years of age) divided into 2 groups, test (chocobar ice cream with probiotics) and control (chocobar ice cream without probiotics). The subjects were instructed to eat the allotted chocobar ice cream once daily for 18 days. Saliva samples collected at intervals were cultured on Mitis Salivarius agar and Rogosa agar and examined for salivary mutans streptococci and lactobacilli, respectively. The Mann-Whitney U-test, Friedman and Wilcoxon signed-rank tests were used for statistical analysis. After ingestion, a statistically significant reduction (p < 0.05) in salivary mutans streptococci was recorded in the test group, but a non-significant trend was seen for lactobacilli. Significant differences were also observed between follow-ups. Short-term daily ingestion of ice cream containing probiotic bifidobacteria may reduce salivary levels of mutans streptococci in young adults.

  16. Effect of Time on Gypsum-Impression Material Compatibility

    NASA Astrophysics Data System (ADS)

    Won, John Boram

    The purpose of this study was to evaluate the compatibility of dental gypsum with three recently introduced irreversible hydrocolloid (alginate) alternatives. The test materials were Alginot® (Kerr™), Position Penta Quick® (3M ESPE™) and Silgimix® (Sultan Dental™). The irreversible hydrocolloid impression material, Jeltrate Plus antimicrobial® (Dentsply Caulk™), served as the control. Materials and Methods: Testing of materials was conducted in accordance with ANSI/ADA Specification No. 18 for Alginate Impression Materials. Statistical Analysis: The 3-way ANOVA test was used to analyze measurements between different time points at a significance level of p < 0.05. Outcome: It was found that there was greater compatibility between gypsum and the alternative materials over time than with the traditional irreversible hydrocolloid material that was tested. A statistically significant amount of surface change/incompatibility was found over time with the combination of the dental gypsum products and the control impression material (Jeltrate Plus antimicrobial®).

  17. Oral health status of women with high-risk pregnancies.

    PubMed

    Merglova, Vlasta; Hecova, Hana; Stehlikova, Jaroslava; Chaloupka, Pavel

    2012-12-01

    The aim of this study was to investigate the oral health status of women with high-risk pregnancies. A case-control study of 142 pregnant women was conducted. The case group included 81 pregnant women with high-risk pregnancies, while 61 women with normal pregnancies served as controls. The following variables were recorded for each woman: age, general health status, DMF, CPITN, and PBI index, amounts of Streptococcus mutans in the saliva and dental treatment needs. The Mann-Whitney test, Kruskal-Wallis test, t-test and chi-squared test were used for statistical analyses. Statistically significant differences were detected between the PBI indices and dental treatment needs of the two groups. Out of the entire study cohort, 77% of the women in the case group and 52% of the women in the control group required dental treatment. In this study, women with complications during pregnancy had severe gingivitis and needed more frequent dental treatment than those in the control group.

  18. Results of the Verification of the Statistical Distribution Model of Microseismicity Emission Characteristics

    NASA Astrophysics Data System (ADS)

    Cianciara, Aleksander

    2016-09-01

    The paper presents the results of research aimed at verifying the hypothesis that the Weibull distribution is an appropriate statistical distribution model of microseismicity emission characteristics, namely the energy of phenomena and the inter-event time. It is understood that the emission under consideration is induced by natural rock mass fracturing. Because the recorded emission contains noise, it is subjected to appropriate filtering. The study has been conducted using the method of statistical verification of the null hypothesis that the Weibull distribution fits the empirical cumulative distribution function. As the model describing the cumulative distribution function is given in an analytical form, its verification may be performed using the Kolmogorov-Smirnov goodness-of-fit test. Interpretations by means of probabilistic methods require specifying the correct model describing the statistical distribution of the data, because in these methods the measurement data are not used directly, but rather their statistical distributions, e.g., in the method based on hazard analysis or in the one that uses maximum value statistics.
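    A minimal sketch of the goodness-of-fit check described above, assuming synthetic inter-event times and no noise filtering: fit a two-parameter Weibull distribution and apply the Kolmogorov-Smirnov test. Because the parameters are estimated from the same data, the standard KS p-value is only approximate (optimistic); a parametric bootstrap would tighten it.

    ```python
    # Weibull fit of (fake) inter-event times plus a Kolmogorov-Smirnov check.
    from scipy import stats

    inter_event = stats.weibull_min.rvs(c=0.9, scale=120.0, size=400, random_state=5)

    shape, _, scale = stats.weibull_min.fit(inter_event, floc=0)
    d_stat, p_value = stats.kstest(inter_event, "weibull_min", args=(shape, 0, scale))
    print(f"shape = {shape:.2f}, scale = {scale:.1f} s, KS D = {d_stat:.3f}, p ~ {p_value:.3f}")
    ```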

  19. Emotional reaction evaluation provoked by the vestibular caloric test through physiological variables monitoring.

    PubMed

    Barona-de-Guzmán, Rafael; Krstulovic-Roa, Claudio; Donderis-Malea, Elena; Barona-Lleó, Luz

    2018-03-08

    The emotional evaluation of the causes of vertigo is made using the clinical records and several subjective questionnaires. The aim of the present study is to evaluate the emotional response objectively, in normal subjects, during an induced vertigo crisis. A caloric vestibular test with cold water was performed on 30 healthy subjects. The following physiological parameters were monitored during the 60 seconds prior to and the 60 seconds after the stimulation: skin conductivity, peripheral pulse volume, body temperature, muscle contraction, heart rate, and respiratory rate. The maximum angular speed of the nystagmus slow phase at each stimulation was assessed. Skin conductance presented a statistically significant increase during the vertigo crisis in relation to the prior period, while the peripheral pulse volume presented a statistically significant decrease. There was no relationship between the angular speed of the provoked nystagmus slow phase and the changes in skin conductance and peripheral pulse volume. The decrease in peripheral pulse volume was significantly greater in the second vertigo crisis. Skin conductance and peripheral pulse volume changed significantly during a vertigo crisis. There was no relation between the intensity of the provoked vertiginous crisis and the changes produced in those variables. The stress generated by the caloric stimulation is higher in the second crisis, when the subject has experience of the vertigo caused by the stimulation. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Published by Elsevier España, S.L.U. All rights reserved.

  20. The efficacy of tamsulosin in lower ureteral calculi

    PubMed Central

    Griwan, M.S.; Singh, Santosh Kumar; Paul, Himanshu; Pawar, Devendra Singh; Verma, Manish

    2010-01-01

    Context: There has been a paradigm shift in the management of ureteral calculi in the last decade with the introduction of new less invasive methods, such as ureterorenoscopy and extracorporeal shock wave lithotripsy (ESWL). Aims: Recent studies have reported excellent results with medical expulsive therapy (MET) for distal ureteral calculi, both in terms of stone expulsion and control of ureteral colic pain. Settings and Design: We conducted a comparative study in between watchful waiting and MET with tamsulosin. Materials and Methods: We conducted a comparative study in between watchful waiting (Group I) and MET with tamsulosin (Group II) in 60 patients, with a follow up of 28 days. Statistical Analysis: Independent 't' test and chi-square test. Results: Group II showed a statistically significant advantage in terms of the stone expulsion rate. The mean number of episodes of pain, mean days to stone expulsion and mean amount of analgesic dosage used were statistically significantly lower in Group II (P value is 0.007, 0.01 and 0.007, respectively) as compared to Group I. Conclusions: It is concluded that MET should be considered for uncomplicated distal ureteral calculi before ureteroscopy or extracorporeal lithotripsy. Tamsulosin has been found to increase and hasten stone expulsion rates, decrease acute attacks by acting as a spasmolytic, reduces mean days to stone expulsion and decreases analgesic dose usage. PMID:20882156

  1. Using statistical process control for monitoring the prevalence of hospital-acquired pressure ulcers.

    PubMed

    Kottner, Jan; Halfens, Ruud

    2010-05-01

    Institutionally acquired pressure ulcers are used as outcome indicators to assess the quality of pressure ulcer prevention programs. Determining whether quality improvement projects that aim to decrease the proportions of institutionally acquired pressure ulcers lead to real changes in clinical practice depends on the measurement method and statistical analysis used. To examine whether nosocomial pressure ulcer prevalence rates in hospitals in the Netherlands changed, a secondary data analysis using different statistical approaches was conducted of annual (1998-2008) nationwide nursing-sensitive health problem prevalence studies in the Netherlands. Institutions that participated regularly in all survey years were identified. Risk-adjusted nosocomial pressure ulcers prevalence rates, grade 2 to 4 (European Pressure Ulcer Advisory Panel system) were calculated per year and hospital. Descriptive statistics, chi-square trend tests, and P charts based on statistical process control (SPC) were applied and compared. Six of the 905 healthcare institutions participated in every survey year and 11,444 patients in these six hospitals were identified as being at risk for pressure ulcers. Prevalence rates per year ranged from 0.05 to 0.22. Chi-square trend tests revealed statistically significant downward trends in four hospitals but based on SPC methods, prevalence rates of five hospitals varied by chance only. Results of chi-square trend tests and SPC methods were not comparable, making it impossible to decide which approach is more appropriate. P charts provide more valuable information than single P values and are more helpful for monitoring institutional performance. Empirical evidence about the decrease of nosocomial pressure ulcer prevalence rates in the Netherlands is contradictory and limited.
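    A minimal sketch of the P-chart logic referred to above, using invented annual counts: each year's proportion of at-risk patients with nosocomial pressure ulcers is compared with 3-sigma control limits around the pooled proportion, and points outside the limits are flagged as special-cause variation.

    ```python
    # P chart for yearly prevalence proportions with subgroup-size-dependent limits.
    import numpy as np

    years = np.arange(1998, 2009)
    n_at_risk = np.array([180, 175, 190, 170, 160, 185, 200, 195, 170, 165, 175])  # fake
    n_ulcer = np.array([32, 30, 35, 28, 25, 27, 26, 22, 19, 17, 15])                # fake

    p = n_ulcer / n_at_risk
    p_bar = n_ulcer.sum() / n_at_risk.sum()                 # pooled centre line
    sigma = np.sqrt(p_bar * (1 - p_bar) / n_at_risk)        # varies with subgroup size
    ucl, lcl = p_bar + 3 * sigma, np.clip(p_bar - 3 * sigma, 0, None)

    for yr, prop, lo, hi in zip(years, p, lcl, ucl):
        flag = "special cause" if (prop > hi or prop < lo) else "common cause"
        print(f"{yr}: p = {prop:.3f}  limits = [{lo:.3f}, {hi:.3f}]  -> {flag}")
    ```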

  2. Single-Item Measurement of Suicidal Behaviors: Validity and Consequences of Misclassification

    PubMed Central

    Millner, Alexander J.; Lee, Michael D.; Nock, Matthew K.

    2015-01-01

    Suicide is a leading cause of death worldwide. Although research has made strides in better defining suicidal behaviors, there has been less focus on accurate measurement. Currently, the widespread use of self-report, single-item questions to assess suicide ideation, plans and attempts may contribute to measurement problems and misclassification. We examined the validity of single-item measurement and the potential for statistical errors. Over 1,500 participants completed an online survey containing single-item questions regarding a history of suicidal behaviors, followed by questions with more precise language, multiple response options and narrative responses to examine the validity of single-item questions. We also conducted simulations to test whether common statistical tests are robust against the degree of misclassification produced by the use of single-items. We found that 11.3% of participants that endorsed a single-item suicide attempt measure engaged in behavior that would not meet the standard definition of a suicide attempt. Similarly, 8.8% of those who endorsed a single-item measure of suicide ideation endorsed thoughts that would not meet standard definitions of suicide ideation. Statistical simulations revealed that this level of misclassification substantially decreases statistical power and increases the likelihood of false conclusions from statistical tests. Providing a wider range of response options for each item reduced the misclassification rate by approximately half. Overall, the use of single-item, self-report questions to assess the presence of suicidal behaviors leads to misclassification, increasing the likelihood of statistical decision errors. Improving the measurement of suicidal behaviors is critical to increase understanding and prevention of suicide. PMID:26496707
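    A minimal sketch of the simulation idea in the record above, with an invented effect size and misclassification rate: a fraction of single-item responses is randomly flipped in a two-group comparison, and the rejection rate of a chi-square test is compared with and without the measurement error.

    ```python
    # Effect of response misclassification on the power of a two-group comparison.
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(6)
    n_per_group, p_control, p_risk, misclass, n_sims = 400, 0.05, 0.10, 0.11, 2000

    def rejection_rate(error_rate):
        hits = 0
        for _ in range(n_sims):
            control = rng.binomial(1, p_control, n_per_group)
            risk = rng.binomial(1, p_risk, n_per_group)
            # Flip a random subset of responses to mimic single-item misclassification.
            for arr in (control, risk):
                flip = rng.random(n_per_group) < error_rate
                arr[flip] = 1 - arr[flip]
            table = [[control.sum(), n_per_group - control.sum()],
                     [risk.sum(), n_per_group - risk.sum()]]
            hits += chi2_contingency(table)[1] < 0.05
        return hits / n_sims

    print(f"power, accurate measurement:    {rejection_rate(0.0):.2f}")
    print(f"power, {misclass:.0%} misclassification: {rejection_rate(misclass):.2f}")
    ```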

  3. Preparing for the first meeting with a statistician.

    PubMed

    De Muth, James E

    2008-12-15

    Practical statistical issues that should be considered when performing data collection and analysis are reviewed. The meeting with a statistician should take place early in the research development before any study data are collected. The process of statistical analysis involves establishing the research question, formulating a hypothesis, selecting an appropriate test, sampling correctly, collecting data, performing tests, and making decisions. Once the objectives are established, the researcher can determine the characteristics or demographics of the individuals required for the study, how to recruit volunteers, what type of data are needed to answer the research question(s), and the best methods for collecting the required information. There are two general types of statistics: descriptive and inferential. Presenting data in a more palatable format for the reader is called descriptive statistics. Inferential statistics involve making an inference or decision about a population based on results obtained from a sample of that population. In order for the results of a statistical test to be valid, the sample should be representative of the population from which it is drawn. When collecting information about volunteers, researchers should only collect information that is directly related to the study objectives. Important information that a statistician will require first is an understanding of the type of variables involved in the study and which variables can be controlled by researchers and which are beyond their control. Data can be presented in one of four different measurement scales: nominal, ordinal, interval, or ratio. Hypothesis testing involves two mutually exclusive and exhaustive statements related to the research question. Statisticians should not be replaced by computer software, and they should be consulted before any research data are collected. When preparing to meet with a statistician, the pharmacist researcher should be familiar with the steps of statistical analysis and consider several questions related to the study to be conducted.

  4. Structural texture similarity metrics for image analysis and retrieval.

    PubMed

    Zujovic, Jana; Pappas, Thrasyvoulos N; Neuhoff, David L

    2013-07-01

    We develop new metrics for texture similarity that account for human visual perception and the stochastic nature of textures. The metrics rely entirely on local image statistics and allow substantial point-by-point deviations between textures that, according to human judgment, are essentially identical. The proposed metrics extend the ideas of structural similarity and are guided by research in texture analysis-synthesis. They are implemented using a steerable filter decomposition and incorporate a concise set of subband statistics, computed globally or in sliding windows. We conduct systematic tests to investigate metric performance in the context of "known-item search," the retrieval of textures that are "identical" to the query texture. This eliminates the need for cumbersome subjective tests, thus enabling comparisons with human performance on a large database. Our experimental results indicate that the proposed metrics outperform the peak signal-to-noise ratio (PSNR), the structural similarity metric (SSIM) and its variations, as well as state-of-the-art texture classification metrics, using standard statistical measures.

  5. Effect of structural parameters on burning behavior of polyester fabrics having flame retardancy property

    NASA Astrophysics Data System (ADS)

    Çeven, E. K.; Günaydın, G. K.

    2017-10-01

    The aim of this study is to fill a gap in the literature by investigating the effect of yarn and fabric structural parameters on the burning behavior of polyester fabrics. According to the experimental design, three different fabric types, three different weft densities, and two different weave types were selected, and a total of eighteen different polyester drapery fabrics were produced. All statistical procedures were conducted using the SPSS statistical software package. The results of the Analysis of Variance (ANOVA) tests indicated that there were statistically significant (5% significance level) differences between the mass loss ratios (%) in the weft direction and the mass loss ratios (%) in the warp direction of the different fabrics, calculated after the flammability test. The Student-Newman-Keuls (SNK) results for mass loss ratios (%) in both the weft and warp directions revealed that the mass loss ratios (%) of fabrics containing Trevira CS type polyester were lower than the mass loss ratios of polyester fabrics subjected to washing treatment and flame retardancy treatment.

  6. A Framework for Establishing Standard Reference Scale of Texture by Multivariate Statistical Analysis Based on Instrumental Measurement and Sensory Evaluation.

    PubMed

    Zhi, Ruicong; Zhao, Lei; Xie, Nan; Wang, Houyin; Shi, Bolin; Shi, Jingye

    2016-01-13

    A framework for establishing a standard reference scale for texture is proposed, based on multivariate statistical analysis of instrumental measurements and sensory evaluation. Multivariate statistical analysis is conducted to rapidly select typical reference samples with characteristics of universality, representativeness, stability, substitutability, and traceability. The reasonableness of the framework is verified by establishing a standard reference scale for the texture attribute hardness using well-known Chinese foods. More than 100 food products in 16 categories were tested using instrumental measurement (TPA test), and the results were analyzed with cluster analysis, principal component analysis, relative standard deviation, and analysis of variance. As a result, nine kinds of foods were determined to construct the hardness standard reference scale. The results indicate that the regression coefficient between the estimated sensory value and the instrumentally measured value is significant (R² = 0.9765), which fits well with Stevens's theory. The research provides a reliable theoretical basis and practical guide for establishing quantitative standard reference scales for food texture characteristics.
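    A minimal sketch of the selection logic described above, using a random stand-in for the TPA measurements: standardize the instrumental data, reduce it with PCA, cluster the products, and take the sample nearest each cluster centre as a candidate reference point for the scale.

    ```python
    # Select cluster-representative samples from standardized, PCA-reduced texture data.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    tpa = rng.normal(size=(100, 6))           # hypothetical TPA features for 100 foods

    z = StandardScaler().fit_transform(tpa)
    scores = PCA(n_components=2).fit_transform(z)

    k = 9                                      # number of scale points sought
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(scores)
    reference_idx = [int(np.argmin(np.linalg.norm(scores - c, axis=1)))
                     for c in km.cluster_centers_]
    print("candidate reference samples (row indices):", sorted(reference_idx))
    ```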

  7. Assessment of variations in thermal cycle life data of thermal barrier coated rods

    NASA Astrophysics Data System (ADS)

    Hendricks, R. C.; McDonald, G.

    An analysis of thermal cycle life data for 22 thermal barrier coated (TBC) specimens was conducted. The ZrO2-8Y2O3/NiCrAlY plasma spray coated Rene 41 rods were tested in a Mach 0.3 Jet A/air burner flame. All specimens were subjected to the same coating and subsequent test procedures in an effort to control three parametric groups: material properties, geometry, and heat flux. Statistically, the data sample space had a mean of 1330 cycles with a standard deviation of 520 cycles. The data were described by normal or log-normal distributions, but other models could also apply; the sample size must be increased to clearly delineate a statistical failure model. The statistical methods were also applied to adhesive/cohesive strength data for 20 TBC discs of the same composition, with similar results. The sample space had a mean of 9 MPa with a standard deviation of 4.2 MPa.
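    A minimal sketch of the model comparison mentioned above, with synthetic cycle-life data of roughly the quoted mean and scatter: fit normal and log-normal models by maximum likelihood and compare Kolmogorov-Smirnov distances. With only about 22 specimens the two models are hard to distinguish, which is the record's point.

    ```python
    # Compare normal and log-normal fits to a small (fake) cycle-life sample.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    cycles = rng.normal(1330, 520, size=22).clip(min=100)   # fake cycle-life data

    mu, sd = stats.norm.fit(cycles)
    ks_norm = stats.kstest(cycles, "norm", args=(mu, sd)).statistic

    shape, loc, scale = stats.lognorm.fit(cycles, floc=0)
    ks_lognorm = stats.kstest(cycles, "lognorm", args=(shape, loc, scale)).statistic

    print(f"normal fit:     mean = {mu:.0f}, sd = {sd:.0f}, KS D = {ks_norm:.3f}")
    print(f"log-normal fit: KS D = {ks_lognorm:.3f}  (smaller D = closer fit)")
    ```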

  8. Assessment of variations in thermal cycle life data of thermal barrier coated rods

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.; Mcdonald, G.

    1981-01-01

    An analysis of thermal cycle life data for 22 thermal barrier coated (TBC) specimens was conducted. The ZrO2-8Y2O3/NiCrAlY plasma spray coated Rene 41 rods were tested in a Mach 0.3 Jet A/air burner flame. All specimens were subjected to the same coating and subsequent test procedures in an effort to control three parametric groups: material properties, geometry, and heat flux. Statistically, the data sample space had a mean of 1330 cycles with a standard deviation of 520 cycles. The data were described by normal or log-normal distributions, but other models could also apply; the sample size must be increased to clearly delineate a statistical failure model. The statistical methods were also applied to adhesive/cohesive strength data for 20 TBC discs of the same composition, with similar results. The sample space had a mean of 9 MPa with a standard deviation of 4.2 MPa.

  9. Evaluation of the effect of a home bleaching agent on surface characteristics of indirect esthetic restorative materials--part II microhardness.

    PubMed

    Torabi, Kianoosh; Rasaeipour, Sasan; Ghodsi, Safoura; Khaledi, Amir Ali Reza; Vojdani, Mahroo

    2014-07-01

    The growing use of esthetic restorative materials is driven by societal needs and desires. Interaction between bleaching agents and esthetic restorative materials is of critical importance. This in vitro study was conducted to evaluate the effect of a home bleaching agent, carbamide peroxide (CP) 38%, on the microhardness of fiber-reinforced composite (FRC) and overglazed, autoglazed, or polished porcelain specimens. Overglazed, autoglazed, and polished ceramic specimens, as well as FRC cylindrical specimens (n = 20 per group), were prepared. The specimens were stored in distilled water at 37°C for 48 hours prior to testing. Six samples from each group were selected randomly as negative controls, which were stored in distilled water at 37°C that was changed daily. CP 38% was applied to the test specimens for 15 minutes, twice a day for 14 days. Microhardness testing of the baseline, control, and test specimens was conducted using a Knoop microhardness tester. Data were statistically analyzed using the paired t-test, Mann-Whitney test, and Kruskal-Wallis test. Home bleaching significantly decreased the surface microhardness of all the test samples (p < 0.05), whereas the control groups did not show statistically significant changes after 2 weeks. The polished porcelain and polished composite specimens showed the most significant change in microhardness after the bleaching process (p < 0.05). Although the type of surface preparation affects the susceptibility of the porcelain surface to the bleaching agent, no special preparation can preclude such adverse effects. The contact of home bleaching agents with esthetic restorative materials is unavoidable. Therefore, protecting these restorations from bleaching agents and reglazing or at least polishing the restorations after bleaching is recommended.

  10. Brain imaging and cognition in young narcoleptic patients.

    PubMed

    Huang, Yu-Shu; Liu, Feng-Yuan; Lin, Chin-Yang; Hsiao, Ing-Tsung; Guilleminault, Christian

    2016-08-01

    The relationship between functional brain images and performances in narcoleptic patients and controls is a new field of investigation. We studied 71 young, type 1 narcoleptic patients and 20 sex- and age-matched control individuals using brain positron emission tomography (PET) images and neurocognitive testing. Clinical investigation was carried out using sleep-wake evaluation questionnaires; a sleep-wake study was conducted with actigraphy, polysomnography, multiple sleep latency test (MSLT), and blood tests (with human leukocyte antigen typing). The continuous performance test (CPT) and Wisconsin card sorting test (WCST) were administered on the same day as the PET study. PET data were analyzed using Statistical Parametric Mapping (version 8) software. Correlation of brain imaging and neurocognitive function was performed by Pearson's correlation. Statistical analyses (Student's t-test) were conducted with SPSS version-18. Seventy-one narcoleptic patients (mean age: 16.15 years, 41 boys (57.7%)) and 20 controls (mean age: 15.1 years, 12 boys (60%)) were studied. Results from the CPT and WCST showed significantly worse scores in narcoleptic patients than in controls (P < 0.05). Compared to controls, narcoleptic patients presented with hypometabolism in the right mid-frontal lobe and angular gyrus (P < 0.05) and significant hypermetabolism in the olfactory lobe, hippocampus, parahippocampus, amygdala, fusiform, left inferior parietal lobe, left superior temporal lobe, striatum, basal ganglia and thalamus, right hypothalamus, and pons (P < 0.05) in the PET study. Changes in brain metabolic activity in narcoleptic patients were positively correlated with results from the sleepiness scales and performance tests. Young, type 1 narcoleptic patients face a continuous cognitive handicap. Our imaging cognitive test protocol can be useful for investigating the effects of treatment trials in these patients. Copyright © 2016 Elsevier B.V. All rights reserved.
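
    A minimal sketch of the correlation step named in the record, using synthetic values in place of the PET and sleepiness-scale data (variable names are illustrative only):

```python
# Sketch with synthetic values: Pearson's correlation between a regional
# metabolic change score and a sleepiness-scale score, as in the record's design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
metabolic_change = rng.normal(0.0, 1.0, size=71)
sleepiness = 10 + 2.5 * metabolic_change + rng.normal(0, 2.0, size=71)

r, p = stats.pearsonr(metabolic_change, sleepiness)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```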

  11. Static renewal tests using Pimephales promelas (fathead minnows) and Ceriodaphnia dubia (daphnids). Clinch River-Environmental Restoration Program (CR-ERP) study, ambient water toxicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simbeck, D.J.

    1994-12-31

    Clinch River-Environmental Restoration Program (CR-ERP) personnel and Tennessee Valley Authority (TVA) personnel conducted a study during the week of January 25--February 1, 1994. The organisms specified for testing were larval fathead minnows, Pimephales promelas, and the daphnid, Ceriodaphnia dubia. Surface water samples were collected from Clinch River Mile 9.0, Poplar Creek Mile 1.0, and Poplar Creek Mile 2.9 on January 24, 26, and 28. Samples were partitioned and provided to the CR-ERP and TVA toxicology laboratories for testing. Exposure of test organisms to these samples resulted in no toxicity (survival or growth) to fathead minnows; however, toxicity to daphnids was demonstrated in undiluted samples from Poplar Creek Mile 1.0 in testing conducted by TVA based on hypothesis testing of data. Point estimation (IC25) analysis of the data, however, showed no toxicity in PCM 1.0 samples. Attachments to this report include: chain of custody forms (originals); toxicity test bench sheets and statistical analyses; meter calibrations; and reference toxicant test information.

  12. Flight investigation of a four-dimensional terminal area guidance system for STOL aircraft

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Hardy, G. H.

    1981-01-01

    A series of flight tests and fast-time simulations were conducted, using the augmentor wing jet STOL research aircraft and the STOLAND 4D-RNAV system to add to the growing data base of 4D-RNAV system performance capabilities. To obtain statistically meaningful data a limited amount of flight data were supplemented by a statistically significant amount of data obtained from fast-time simulation. The results of these tests are reported. Included are comparisons of the 4D-RNAV estimated winds with actual winds encountered in flight, as well as data on along-track navigation and guidance errors, and time-of-arrival errors at the final approach waypoint. In addition, a slight improvement of the STOLAND 4D-RNAV system is proposed and demonstrated, using the fast-time simulation.

  13. Evaluation of a Head-Worn Display System as an Equivalent Head-Up Display for Low Visibility Commercial Operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis (Trey) J., III; Shelton, Kevin J.; Prinzel, Lawrence J.; Nicholas, Stephanie N.; Williams, Steven P.; Ellis, Kyle E.; Jones, Denise R.; Bailey, Randall E.; Harrison, Stephanie J.; Barnes, James R.

    2017-01-01

    Research, development, test, and evaluation of flight deck interface technologies is being conducted by the National Aeronautics and Space Administration (NASA) to proactively identify, develop, and mature tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System (NextGen). One specific area of research was the use of small Head-Worn Displays (HWDs) to serve as a possible equivalent to a Head-Up Display (HUD). A simulation experiment and a flight test were conducted to evaluate whether the HWD can provide an equivalent level of performance to a HUD. For the simulation experiment, airline crews conducted simulated approach and landing, taxi, and departure operations during low visibility operations. In a follow-on flight test, highly experienced test pilots evaluated the same HWD during approach and surface operations. The results for both the simulation and flight tests showed no statistical differences in the crews' performance in terms of approach, touchdown, and takeoff; however, technical hurdles remain to be overcome for complete display equivalence, most notably the end-to-end latency of the HWD system.

  14. The effect of coloring liquid dipping time on the fracture load and color of zirconia ceramics

    PubMed Central

    2017-01-01

    PURPOSE The aims of the study were to evaluate the fracture load of zirconia core material after dipping in coloring liquid at different time intervals and to compare the color of dipped blocks with that of prefabricated shaded blocks. MATERIALS AND METHODS 3-unit bridge frameworks were designed digitally. Sixty frameworks were fabricated from uncolored zirconia blocks by CAD/CAM and divided randomly into 4 groups (n = 15). Group 2 (G2) was subjected to coloring liquid for 2 minutes, Group 4 (G4) for 4 minutes, and Group 6 (G6) for 6 minutes. The CFS group was not subjected to any coloring procedure. After coloring, color differences between the test groups and a prefabricated shaded zirconia group (CPZ, n = 15) were evaluated using a spectrophotometer. The fracture test was conducted immediately after shade evaluation with a Testometric test device at a cross-head speed of 1 mm/sec. Statistical analysis for evaluating color and fracture load was performed using one-way ANOVA followed by the Tukey HSD test (P ≤ .05). Weibull analysis was conducted for the distribution of fracture load. RESULTS There was no difference in fracture load between the CFS (1176.681 N) and G2 (985.638 N) groups or in color between the CPZ (81.340) and G2 (81.140) groups. Fracture load values of the G4 (779.340 N) and G6 (935.491 N) groups were statistically significantly lower than that of the CFS group (P ≤ .005). The color values of the G4 (79.340) and G6 (79.673) groups were statistically different from that of the CPZ group (P ≤ .005). CONCLUSION Prolonged immersion of zirconia in coloring liquid not only negatively affected the fracture load of the zirconia tested in the current study but also deteriorated the desired shade of the restoration. PMID:28243394
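
    The sketch below illustrates the Weibull analysis mentioned in the record on invented fracture loads (scipy's weibull_min with the location fixed at zero); it is not the study's data or code.

```python
# Illustrative sketch: two-parameter Weibull fit to synthetic fracture loads.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
loads_n = rng.weibull(a=8.0, size=15) * 1000.0  # synthetic fracture loads in N

# Fix the location at zero so only shape (Weibull modulus) and scale are fitted.
shape, loc, scale = stats.weibull_min.fit(loads_n, floc=0)
print(f"Weibull modulus (shape) = {shape:.2f}, characteristic load = {scale:.0f} N")

# Probability that a framework fails below, say, 800 N under the fitted model.
print(f"P(load < 800 N) = {stats.weibull_min.cdf(800, shape, loc, scale):.3f}")
```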

  15. The Statistical Analysis Techniques to Support the NGNP Fuel Performance Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bihn T. Pham; Jeffrey J. Einerson

    2010-06-01

    This paper describes the development and application of statistical analysis techniques to support the AGR experimental program on NGNP fuel performance. The experiments conducted in the Idaho National Laboratory’s Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel/graphite temperature) is regulated by the He-Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the SAS-based NGNP Data Management and Analysis System (NDMAS) for automated processing and qualification of the AGR measured data. The NDMAS also stores daily neutronic (power) and thermal (heat transfer) code simulation results along with the measurement data, allowing for their combined use and comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the target quantity (fuel temperature) within a given range.
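
    As a hedged sketch of two of the three techniques named (control charting and regression), the code below computes 3-sigma control limits for a synthetic thermocouple signal and regresses readings on stand-in calculated temperatures; all values and names are invented, not NDMAS output.

```python
# Hedged sketch on made-up data: simple 3-sigma control limits for a thermocouple
# signal, and an ordinary least-squares regression of measured readings on
# code-calculated temperatures.
import numpy as np

rng = np.random.default_rng(3)
tc_readings = rng.normal(1000.0, 5.0, size=200)       # synthetic daily readings, deg C

center = tc_readings.mean()
sigma = tc_readings.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma     # control chart limits
flagged = np.flatnonzero((tc_readings > ucl) | (tc_readings < lcl))
print(f"center={center:.1f}, UCL={ucl:.1f}, LCL={lcl:.1f}, flagged={flagged.size}")

# Regression of readings on calculated temperatures (intercept and slope via lstsq).
calculated = tc_readings + rng.normal(0.0, 2.0, size=200)   # stand-in simulation output
A = np.column_stack([np.ones_like(calculated), calculated])
(intercept, slope), *_ = np.linalg.lstsq(A, tc_readings, rcond=None)
print(f"reading ~ {intercept:.1f} + {slope:.3f} * calculated")
```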

  16. Landscape preference assessment of Louisiana river landscapes: a methodological study

    Treesearch

    Michael S. Lee

    1979-01-01

    The study pertains to the development of an assessment system for the analysis of visual preference attributed to Louisiana river landscapes. The assessment system was utilized in the evaluation of 20 Louisiana river scenes. Individuals were tested for their free choice preference for the same scenes. A statistical analysis was conducted to examine the relationship...

  17. Regional Environmental Monitoring and Assessment Program Data (REMAP)

    EPA Pesticide Factsheets

    The Regional Environmental Monitoring and Assessment Program (REMAP) was initiated to test the applicability of the Environmental Monitoring and Assessment Program (EMAP) approach to answer questions about ecological conditions at regional and local scales. Using EMAP's statistical design and indicator concepts, REMAP conducts projects at smaller geographic scales and in shorter time frames than the national EMAP program.

  18. Direct and Indirect Effects of Birth Order on Personality and Identity: Support for the Null Hypothesis

    ERIC Educational Resources Information Center

    Dunkel, Curtis S.; Harbke, Colin R.; Papini, Dennis R.

    2009-01-01

    The authors proposed that birth order affects psychosocial outcomes through differential investment from parent to child and differences in the degree of identification from child to parent. The authors conducted this study to test these 2 models. Despite the use of statistical and methodological procedures to increase sensitivity and reduce…

  19. Stochastic Price Models and Optimal Tree Cutting: Results for Loblolly Pine

    Treesearch

    Robert G. Haight; Thomas P. Holmes

    1991-01-01

    An empirical investigation of stumpage price models and optimal harvest policies is conducted for loblolly pine plantations in the southeastern United States. The stationarity of monthly and quarterly series of sawtimber prices is analyzed using a unit root test. The statistical evidence supports stationary autoregressive models for the monthly series and for the...
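
    A minimal sketch of a unit root check of the kind described, using a simulated stationary AR(1) price series (not the loblolly pine data) and the augmented Dickey-Fuller test from statsmodels:

```python
# Sketch with simulated prices: augmented Dickey-Fuller unit root test.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
n = 240  # 20 years of monthly observations
prices = np.empty(n)
prices[0] = 30.0
for t in range(1, n):                       # AR(1) with phi < 1 => stationary
    prices[t] = 15.0 + 0.5 * prices[t - 1] + rng.normal(0, 2.0)

stat, pvalue, *_ = adfuller(prices)
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.4f}")
# A small p-value rejects the unit-root null, consistent with stationarity.
```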

  20. Explore the Usefulness of Person-Fit Analysis on Large-Scale Assessment

    ERIC Educational Resources Information Center

    Cui, Ying; Mousavi, Amin

    2015-01-01

    The current study applied the person-fit statistic, l[subscript z], to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l[subscript z], were removed. The…
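
    The record does not give its computations, but the standardized log-likelihood person-fit statistic lz it refers to is commonly defined as in the sketch below; the item probabilities and response vector here are invented for illustration.

```python
# Hedged sketch of the standardized log-likelihood person-fit statistic lz
# for dichotomous responses; item parameters and responses are invented.
import numpy as np

def lz_statistic(responses, p_correct):
    """responses: 0/1 vector; p_correct: model-implied P(correct) per item."""
    p = np.asarray(p_correct, dtype=float)
    u = np.asarray(responses, dtype=float)
    logit = np.log(p / (1 - p))
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))          # observed log-likelihood
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))    # E[l0]
    variance = np.sum(p * (1 - p) * logit**2)                     # Var[l0]
    return (l0 - expected) / np.sqrt(variance)

p_model = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3])
aberrant = np.array([0, 0, 0, 1, 1, 1, 1])   # misses easy items, passes hard ones
print(f"lz = {lz_statistic(aberrant, p_model):.2f}")  # large negative lz => misfit
```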

  1. National Study of Postsecondary Faculty (NSOPF:04) Field Test Methodology Report, 2004. Working Paper Series. NCES 2004-01

    ERIC Educational Resources Information Center

    Heuer, R. E.; Cahalan, M.; Fahimi, M.; Curry-Tucker, J. L.; Carley-Baxter, L.; Curtin, T. R.; Hinsdale, M.; Jewell, D. M.; Kuhr, B. D.; McLean, L.

    2004-01-01

    The 2004 National Study of Postsecondary Faculty (NSOPF:04), conducted by RTI International (RTI) and sponsored by the U.S. Department of Education's National Center for Education Statistics (NCES), is a nationally representative study that collects data regarding the characteristics, workload, and career paths of full- and part-time…

  2. Academic Achievement and Perceived Peer Support among Turkish Students: Gender and Preschool Education Impact

    ERIC Educational Resources Information Center

    Bursal, Murat

    2017-01-01

    This study was conducted to investigate the academic achievement and perceived peer support levels of 4th-8th grade Turkish elementary and middle school students at low socio-economic status. Factorial design analyses were used to test the statistical effects of gender and preschool education variables on the dependent variables. The findings…

  3. Assessing Beaked Whale Reproduction and Stress Response Relative to Sonar Activity at the Atlantic Undersea Test and Evaluation Center (AUTEC)

    DTIC Science & Technology

    2015-09-30

    oil spills (unpublished data, Kellar). The second will be to conduct a more fine-scale analysis of the areas examined during this study. For this...REFERENCES Carlin BP, Chib S (1995) Bayesian model choice via Markov-chain Monte-Carlo methods. Journal of the Royal Statistical Society

  4. Testing Structural Models of DSM-IV Symptoms of Common Forms of Child and Adolescent Psychopathology

    ERIC Educational Resources Information Center

    Lahey, Benjamin B.; Rathouz, Paul J.; Van Hulle, Carol; Urbano, Richard C.; Krueger, Robert F.; Applegate, Brooks; Garriock, Holly A.; Chapman, Derek A.; Waldman, Irwin D.

    2008-01-01

    Confirmatory factor analyses were conducted of "Diagnostic and Statistical Manual of Mental Disorders", Fourth Edition (DSM-IV) symptoms of common mental disorders derived from structured interviews of a representative sample of 4,049 twin children and adolescents and their adult caretakers. A dimensional model based on the assignment of symptoms…

  5. Quantitative Methods for Analysing Joint Questionnaire Data: Exploring the Role of Joint in Force Design

    DTIC Science & Technology

    2015-08-01

    the nine questions. The Statistical Package for the Social Sciences (SPSS) [11] was used to conduct statistical analysis on the sample. Two types...constructs. SPSS was again used to conduct statistical analysis on the sample. This time factor analysis was conducted. Factor analysis attempts to...Business Research Methods and Statistics using SPSS. P432. 11 IBM SPSS Statistics. (2012) 12 Burns, R.B., Burns, R.A. (2008) ‘Business Research

  6. Evaluating innovative items for the NCLEX, part I: usability and pilot testing.

    PubMed

    Wendt, Anne; Harmes, J Christine

    2009-01-01

    National Council of State Boards of Nursing (NCSBN) has recently conducted preliminary research on the feasibility of including various types of innovative test questions (items) on the NCLEX. This article focuses on the participants' reactions to and their strategies for interacting with various types of innovative items. Part 2 in the May/June issue will focus on the innovative item templates and evaluation of the statistical characteristics and the level of cognitive processing required to answer the examination items.

  7. Cognition, comprehension and application of biostatistics in research by Indian postgraduate students in periodontics.

    PubMed

    Swetha, Jonnalagadda Laxmi; Arpita, Ramisetti; Srikanth, Chintalapani; Nutalapati, Rajasekhar

    2014-01-01

    Biostatistics is an integral part of research protocols. In any field of inquiry or investigation, the data obtained are subsequently classified, analyzed and tested for accuracy by statistical methods. Statistical analysis of collected data thus forms the basis for all evidence-based conclusions. The aim of this study is to evaluate the cognition, comprehension and application of biostatistics in research among postgraduate students in periodontics in India. A total of 391 postgraduate students registered for a master's course in periodontics at various dental colleges across India were included in the survey. Data regarding the level of knowledge, understanding and its application in the design and conduct of the research protocol were collected using a dichotomous questionnaire. Descriptive statistics were used for data analysis. Nearly 79.2% of students were aware of the importance of biostatistics in research, 55-65% were familiar with the MS-EXCEL spreadsheet for graphical representation of data and with the statistical software available on the internet, 26.0% had biostatistics as a mandatory subject in their curriculum, 9.5% tried to perform statistical analysis on their own, while 3.0% were successful in performing statistical analysis of their studies on their own. Biostatistics should play a central role in the planning, conduct, interim analysis, final analysis and reporting of periodontal research, especially by postgraduate students. Indian postgraduate students in periodontics are aware of the importance of biostatistics in research, but the level of understanding and application is still basic and needs to be addressed.

  8. Statistical Analysis of the Polarimetric Cloud Analysis and Seeding Test (POLCAST) Field Projects

    NASA Astrophysics Data System (ADS)

    Ekness, Jamie Lynn

    The North Dakota farming industry brings in more than $4.1 billion annually in cash receipts. Unfortunately, agriculture sales vary significantly from year to year, which is due in large part to weather events such as hail storms and droughts. One method to mitigate drought is to use hygroscopic seeding to increase the precipitation efficiency of clouds. The North Dakota Atmospheric Research Board (NDARB) sponsored the Polarimetric Cloud Analysis and Seeding Test (POLCAST) research project to determine the effectiveness of hygroscopic seeding in North Dakota. The POLCAST field projects obtained airborne and radar observations while conducting randomized cloud seeding. The Thunderstorm Identification Tracking and Nowcasting (TITAN) program is used to analyze radar data (33 usable cases) to determine differences in storm duration, rain rate, and total rain amount between seeded and non-seeded clouds. The single ratio of seeded to non-seeded cases is 1.56 (0.28 mm/0.18 mm), a 56% increase in average hourly rainfall during the first 60 minutes after target selection. A seeding effect is indicated, with the lifetime of the storms increasing by 41% between seeded and non-seeded clouds for the first 60 minutes past the seeding decision. A double ratio statistic, comparing the radar-derived rain amount of the last 40 minutes of a case (seed/non-seed) to the first 20 minutes (seed/non-seed), is used to account for the natural variability of the cloud system and gives a double ratio of 1.85. The Mann-Whitney test on the double ratio of seeded to non-seeded cases (33 cases) gives a significance (p-value) of 0.063. Bootstrapping analysis of the POLCAST data set indicates that 50 cases would provide statistically significant results based on the Mann-Whitney test of the double ratio. All the statistical analyses conducted on the POLCAST data set indicate that hygroscopic seeding in North Dakota does increase precipitation. While an additional POLCAST field project would be necessary to obtain conventionally accepted statistically significant results (p < 0.05) for the double ratio of precipitation amount, the obtained p-value of 0.063 is close and, considering the positive results from other hygroscopic seeding experiments, the North Dakota Cloud Modification Project should consider implementation of hygroscopic seeding.
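
    As a generic, hedged sketch of two ingredients named in the record (the Mann-Whitney comparison and a bootstrap assessment of sample size), the code below uses synthetic rain amounts, not the POLCAST radar data:

```python
# Generic sketch on synthetic rain amounts (mm): Mann-Whitney comparison of
# seeded vs. non-seeded cases, plus a bootstrap estimate of power at a larger n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
seeded = rng.gamma(shape=2.0, scale=0.16, size=17)       # synthetic, mean ~0.32 mm
non_seeded = rng.gamma(shape=2.0, scale=0.10, size=16)   # synthetic, mean ~0.20 mm

u_stat, p_value = stats.mannwhitneyu(seeded, non_seeded, alternative="greater")
print(f"Mann-Whitney U = {u_stat:.1f}, one-sided p = {p_value:.3f}")

# Bootstrap: resample up to n=25 cases per arm and count significant replicates.
n_target, reps, hits = 25, 2000, 0
for _ in range(reps):
    s = rng.choice(seeded, size=n_target, replace=True)
    ns = rng.choice(non_seeded, size=n_target, replace=True)
    hits += stats.mannwhitneyu(s, ns, alternative="greater").pvalue < 0.05
print(f"estimated power at n={n_target} per arm: {hits / reps:.2f}")
```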

  9. Space Suit Joint Torque Measurement Method Validation

    NASA Technical Reports Server (NTRS)

    Valish, Dana; Eversley, Karina

    2012-01-01

    In 2009 and early 2010, a test method was developed and performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits. This was done in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design met the requirements. However, because the original test was set up and conducted by a single test operator there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future development programs. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data was compared using graphical and statistical analysis; the results indicated a significant variance in values reported for a subset of the re-tested joints. Potential variables that could have affected the data were identified and a third round of testing was conducted in an attempt to eliminate and/or quantify the effects of these variables. The results of the third test effort will be used to determine whether or not the proposed joint torque methodology can be applied to future space suit development contracts.

  10. Explanation of Two Anomalous Results in Statistical Mediation Analysis.

    PubMed

    Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P

    2012-01-01

    Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.
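
    For readers unfamiliar with the resampling tests being compared, the sketch below runs a plain percentile bootstrap of an indirect effect a*b on simulated data; it deliberately omits the bias correction that the record identifies as problematic, and none of the values come from the cited program.

```python
# Minimal sketch: percentile bootstrap confidence interval for the indirect
# (mediated) effect a*b on simulated data; no bias correction is applied.
import numpy as np

rng = np.random.default_rng(6)
n = 100
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)             # a path
y = 0.3 * m + 0.1 * x + rng.normal(size=n)   # b and c' paths

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                                   # slope of M on X
    design = np.column_stack([np.ones_like(m), m, x])            # intercept, M, X
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]             # slope of Y on M | X
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)
    boot.append(indirect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
# The effect is deemed significant if the interval excludes zero.
```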

  11. An Exploratory Data Analysis System for Support in Medical Decision-Making

    PubMed Central

    Copeland, J. A.; Hamel, B.; Bourne, J. R.

    1979-01-01

    An experimental system was developed to allow retrieval and analysis of data collected during a study of neurobehavioral correlates of renal disease. After retrieving data organized in a relational data base, simple bivariate statistics of parametric and nonparametric nature could be conducted. An “exploratory” mode in which the system provided guidance in selection of appropriate statistical analyses was also available to the user. The system traversed a decision tree using the inherent qualities of the data (e.g., the identity and number of patients, tests, and time epochs) to search for the appropriate analyses to employ.
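
    The record's guidance feature is not specified in detail; the sketch below is a hypothetical, much-simplified rule-based selector of the same general kind, with invented function and argument names.

```python
# Hypothetical sketch of a rule-based "exploratory" helper that maps basic data
# characteristics to a candidate statistical test (not the system in the record).
def suggest_test(outcome, groups, paired, normal):
    """outcome: 'continuous' or 'categorical'; groups: number of groups compared."""
    if outcome == "categorical":
        return "chi-square test (or Fisher's exact test for small counts)"
    if groups == 2:
        if paired:
            return "paired t-test" if normal else "Wilcoxon signed-rank test"
        return "independent t-test" if normal else "Mann-Whitney U test"
    if paired:
        return "repeated-measures ANOVA" if normal else "Friedman test"
    return "one-way ANOVA" if normal else "Kruskal-Wallis test"

print(suggest_test("continuous", groups=3, paired=False, normal=False))
# -> Kruskal-Wallis test
```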

  12. Effects of Long Term Thermal Exposure on Chemically Pure (CP) Titanium Grade 2 Room Temperature Tensile Properties and Microstructure

    NASA Technical Reports Server (NTRS)

    Ellis, David L.

    2007-01-01

    Room temperature tensile testing of Chemically Pure (CP) Titanium Grade 2 was conducted for as-received commercially produced sheet and following thermal exposure at 550 and 650 K for times up to 5,000 h. No significant changes in microstructure or failure mechanism were observed. A statistical analysis of the data was performed. Small statistical differences were found, but all properties were well above minimum values for CP Ti Grade 2 as defined by ASTM standards and likely would fall within normal variation of the material.

  13. Incorporating an Interactive Statistics Workshop into an Introductory Biology Course-Based Undergraduate Research Experience (CURE) Enhances Students’ Statistical Reasoning and Quantitative Literacy Skills †

    PubMed Central

    Olimpo, Jeffrey T.; Pevey, Ryan S.; McCabe, Thomas M.

    2018-01-01

    Course-based undergraduate research experiences (CUREs) provide an avenue for student participation in authentic scientific opportunities. Within the context of such coursework, students are often expected to collect, analyze, and evaluate data obtained from their own investigations. Yet, limited research has been conducted that examines mechanisms for supporting students in these endeavors. In this article, we discuss the development and evaluation of an interactive statistics workshop that was expressly designed to provide students with an open platform for graduate teaching assistant (GTA)-mentored data processing, statistical testing, and synthesis of their own research findings. Mixed methods analyses of pre/post-intervention survey data indicated a statistically significant increase in students’ reasoning and quantitative literacy abilities in the domain, as well as enhancement of student self-reported confidence in and knowledge of the application of various statistical metrics to real-world contexts. Collectively, these data reify an important role for scaffolded instruction in statistics in preparing emergent scientists to be data-savvy researchers in a globally expansive STEM workforce. PMID:29904549

  14. Incorporating an Interactive Statistics Workshop into an Introductory Biology Course-Based Undergraduate Research Experience (CURE) Enhances Students' Statistical Reasoning and Quantitative Literacy Skills.

    PubMed

    Olimpo, Jeffrey T; Pevey, Ryan S; McCabe, Thomas M

    2018-01-01

    Course-based undergraduate research experiences (CUREs) provide an avenue for student participation in authentic scientific opportunities. Within the context of such coursework, students are often expected to collect, analyze, and evaluate data obtained from their own investigations. Yet, limited research has been conducted that examines mechanisms for supporting students in these endeavors. In this article, we discuss the development and evaluation of an interactive statistics workshop that was expressly designed to provide students with an open platform for graduate teaching assistant (GTA)-mentored data processing, statistical testing, and synthesis of their own research findings. Mixed methods analyses of pre/post-intervention survey data indicated a statistically significant increase in students' reasoning and quantitative literacy abilities in the domain, as well as enhancement of student self-reported confidence in and knowledge of the application of various statistical metrics to real-world contexts. Collectively, these data reify an important role for scaffolded instruction in statistics in preparing emergent scientists to be data-savvy researchers in a globally expansive STEM workforce.

  15. Stratigraphy and vertical hydraulic conductivity of the St. Francois Confining Unit in the Viburnum Trend and evaluation of the Unit in the Viburnum Trend and exploration areas, southeastern Missouri

    USGS Publications Warehouse

    Kleeschulte, Michael J.; Seeger, Cheryl M.

    2003-01-01

    The confining ability of the St. Francois confining unit (Derby-Doerun Dolomite and Davis Formation) was evaluated in ten townships (T. 31-35 N. and R. 01-02 W.) along the Viburnum Trend of southeastern Missouri. Vertical hydraulic conductivity data were compared to similar data collected during two previous studies 20 miles south of the Viburnum Trend, in two lead-zinc exploration areas that may be a southern extension of the Viburnum Trend. The surficial Ozark aquifer is the primary source of water for domestic and public-water supplies and major springs in southern Missouri. The St. Francois confining unit lies beneath the Ozark aquifer and impedes the movement of water between the Ozark aquifer and the underlying St. Francois aquifer (composed of the Bonneterre Formation and Lamotte Sandstone). The Bonneterre Formation is the primary host formation for lead-zinc ore deposits of the Viburnum Trend and potential host formation in the exploration areas. For most of the more than 40 years the mines have been in operation along the Viburnum Trend, about 27 million gallons per day were being pumped from the St. Francois aquifer for mine dewatering. Previous studies conducted along the Viburnum Trend have concluded that no large cones of depression have developed in the potentiometric surface of the Ozark aquifer as a result of mining activity. Because of similar geology, stratigraphy, and depositional environment between the Viburnum Trend and the exploration areas, the Viburnum Trend may be used as a pertinent, full-scale model to study and assess how mining may affect the exploration areas. Along the Viburnum Trend, the St. Francois confining unit is a complex series of dolostones, limestones, and shales that generally is 230 to 280 feet thick with a net shale thickness ranging from less than 25 to greater than 100 feet, with the thickness increasing toward the west. Vertical hydraulic conductivity values determined from laboratory permeability tests were used to represent the St. Francois confining unit along the Viburnum Trend. The Derby-Doerun Dolomite and Davis Formation are statistically similar, but the Davis Formation would be the more hydraulically restrictive medium. The shale and carbonate values were statistically different. The median vertical hydraulic conductivity value for the shale samples was 62 times less than that of the carbonate samples. Consequently, the net shale thickness of the confining unit along the Viburnum Trend significantly affects the effective vertical hydraulic conductivity. As the percent of shale increases in a given horizon, the vertical hydraulic conductivity decreases. The range of effective vertical hydraulic conductivity for the confining unit in the Viburnum Trend was estimated to be a minimum of 2 x 10^-13 ft/s (foot per second) and a maximum of 3 x 10^-12 ft/s. These vertical hydraulic conductivity values are considered small and verify conclusions of previous studies that the confining unit effectively impedes the flow of ground water between the Ozark aquifer and the St. Francois aquifer along the Viburnum Trend. Previously collected vertical hydraulic conductivity data for the two exploration areas from two earlier studies were combined with the data collected along the Viburnum Trend. The nonparametric Kruskal-Wallis statistical test shows that the vertical hydraulic conductivities of the St. Francois confining unit along the Viburnum Trend and in the west and east exploration areas are statistically different.
The vertical hydraulic conductivity values generally are the largest in the Viburnum Trend and are smallest in the west exploration area. The statistical differences in these values do not appear to be attributed strictly to either the Derby-Doerun Dolomite or Davis Formation, but instead they are caused by the differences in the carbonate vertical hydraulic conductivity values at the three locations. The calculated effective vertical hydraulic conductivity range for the St. Franc
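
    A brief sketch of the nonparametric comparison named in the record, applied to synthetic conductivity values for three hypothetical areas (the variable names and numbers are illustrative only, not the measured data):

```python
# Sketch on synthetic values: Kruskal-Wallis test comparing vertical hydraulic
# conductivity (ft/s) across three locations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Log-uniform synthetic Kv samples for three hypothetical areas.
viburnum = 10 ** rng.uniform(-12.5, -11.0, size=20)
west_area = 10 ** rng.uniform(-13.5, -12.0, size=20)
east_area = 10 ** rng.uniform(-13.0, -11.5, size=20)

h_stat, p_value = stats.kruskal(viburnum, west_area, east_area)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates at least one location differs in its Kv distribution.
```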

  16. p-hacking by post hoc selection with multiple opportunities: Detectability by skewness test?: Comment on Simonsohn, Nelson, and Simmons (2014).

    PubMed

    Ulrich, Rolf; Miller, Jeff

    2015-12-01

    Simonsohn, Nelson, and Simmons (2014) have suggested a novel test to detect p-hacking in research, that is, when researchers report excessive rates of "significant effects" that are truly false positives. Although this test is very useful for identifying true effects in some cases, it fails to identify false positives in several situations when researchers conduct multiple statistical tests (e.g., reporting the most significant result). In these cases, p-curves are right-skewed, thereby mimicking the existence of real effects even if no effect is actually present. (c) 2015 APA, all rights reserved.
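
    The simulation sketch below reproduces the qualitative point of the comment with made-up settings (5 tests per study, all null hypotheses true): reporting only the smallest p-value yields a right-skewed set of "significant" p-values.

```python
# Simulation sketch: selecting the minimum of several null p-values produces a
# right-skewed p-curve among reported "significant" results (pure false positives).
import numpy as np

rng = np.random.default_rng(8)
k_tests, n_studies, alpha = 5, 200_000, 0.05

p_min = rng.uniform(size=(n_studies, k_tests)).min(axis=1)   # best of k null tests
significant = p_min[p_min < alpha]

frac_small = np.mean(significant < alpha / 2)
print(f"share of reported p-values below .025: {frac_small:.3f}")
# > 0.5 means a right-skewed p-curve, mimicking evidential value under the null.
```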

  17. DETECTING UNSPECIFIED STRUCTURE IN LOW-COUNT IMAGES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Nathan M.; Dyk, David A. van; Kashyap, Vinay L.

    Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tail probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value that enables us to draw strong conclusions even when there are limited computational resources that can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.

  18. [The role of acoustic impedance test in the diagnosis for occupational noise induced deafness].

    PubMed

    Chen, H; Xue, L J; Yang, A C; Liang, X Y; Chen, Z Q; Zheng, Q L

    2018-01-20

    Objective: To investigate the characteristics of the acoustic impedance test and its diagnostic role in occupational noise-induced deafness, in order to provide an objective basis for differential diagnosis. Methods: A retrospective study was conducted on cases diagnosed with occupational noise-induced deafness at the Guangdong Province Hospital for Occupational Disease Prevention and Treatment from January 2016 to January 2017. A total of 198 cases (396 ears) were divided into an occupational disease group and a non-occupational disease group based on the 2014 edition of the diagnostic criteria for occupational noise-induced deafness. Acoustic conductivity test results of the two groups were compared, including tympanogram type, external auditory canal volume, tympanic pressure, static compliance, and slope. Results: In the occupational disease group, 187 of 204 ears (91.67%) showed type A tympanograms, significantly more than in the non-occupational disease group (143/192, 74.48%); the difference was statistically significant (χ(2)=21.038, P<0.01). The detection rates of Ad or As type (16/204, 7.84%) and of other types (3/204, 1.47%) in the occupational disease group were lower than those in the non-occupational disease group (15.63% and 9.38%, respectively); the differences were statistically significant (χ(2)=5.834, P<0.05; χ(2)=12.306, P<0.01). Mean external auditory canal volume in the occupational disease group (1.68±0.39 ml) was higher than in the non-occupational disease group (1.57±0.47 ml); the difference was statistically significant (t=2.756, P<0.01). Mean static compliance in the occupational disease group (1.06±0.82 ml) was also higher than in the non-occupational disease group (0.89±0.64 ml); the difference was statistically significant (t=2.59, P<0.01). Conclusion: The acoustic impedance test provides a clear auxiliary function in the differential diagnosis of occupational noise-induced deafness. More than 90% of confirmed cases showed a type A tympanogram, and the test is one of the objective examination methods that can be used in the differential diagnosis of pseudo-deafness.
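
    As an illustration, the uncorrected chi-square test below uses the type A counts quoted in the abstract (187/204 vs. 143/192); it should closely reproduce the reported statistic of about 21.0, though the original authors' exact settings (e.g., continuity correction) are not stated.

```python
# Sketch: chi-square test on a 2x2 table of type A vs. non-type A tympanograms,
# built from counts quoted in the abstract.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[187, 204 - 187],     # occupational disease group: type A / other
                  [143, 192 - 143]])    # non-occupational group:     type A / other
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.4g}")
```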

  19. Evaluation of Three Different Processing Techniques in the Fabrication of Complete Dentures

    PubMed Central

    Chintalacheruvu, Vamsi Krishna; Balraj, Rajasekaran Uttukuli; Putchala, Lavanya Sireesha; Pachalla, Sreelekha

    2017-01-01

    Aims and Objectives: The objective of the present study was to compare the effectiveness of three different processing techniques and to determine their accuracy through the number of occlusal interferences and the increase in vertical dimension after denture processing. Materials and Methods: A cross-sectional study was conducted on a sample of 18 patients indicated for complete denture fabrication, who were divided into three subgroups. Three processing techniques, compression molding and injection molding using prepolymerized resin and unpolymerized resin, were used to fabricate dentures for each of the groups. After processing, laboratory-remounted dentures were evaluated for the number of occlusal interferences in centric and eccentric relations and for the change in vertical dimension through vertical pin rise in the articulator. Data were analyzed by ANOVA using SPSS software version 19.0 (IBM). Results: Data obtained from the three groups were subjected to a one-way ANOVA test. Results with significant variations were then subjected to a post hoc test. The number of occlusal interferences with the compression molding technique was higher in both centric and eccentric positions compared to the two injection molding techniques, with statistical significance in centric, protrusive, right lateral nonworking, and left lateral working positions (P < 0.05). Mean vertical pin rise (0.52 mm) was greater with the compression molding technique than with the injection molding techniques, which was statistically significant (P < 0.001). Conclusions: Within the limitations of this study, injection molding techniques exhibited fewer processing errors than the compression molding technique, with statistical significance. There was no statistically significant difference in processing errors between the two injection molding systems. PMID:28713763

  20. Evaluation of Three Different Processing Techniques in the Fabrication of Complete Dentures.

    PubMed

    Chintalacheruvu, Vamsi Krishna; Balraj, Rajasekaran Uttukuli; Putchala, Lavanya Sireesha; Pachalla, Sreelekha

    2017-06-01

    The objective of the present study was to compare the effectiveness of three different processing techniques and to determine their accuracy through the number of occlusal interferences and the increase in vertical dimension after denture processing. A cross-sectional study was conducted on a sample of 18 patients indicated for complete denture fabrication, who were divided into three subgroups. Three processing techniques, compression molding and injection molding using prepolymerized resin and unpolymerized resin, were used to fabricate dentures for each of the groups. After processing, laboratory-remounted dentures were evaluated for the number of occlusal interferences in centric and eccentric relations and for the change in vertical dimension through vertical pin rise in the articulator. Data were analyzed by ANOVA using SPSS software version 19.0 (IBM). Data obtained from the three groups were subjected to a one-way ANOVA test. Results with significant variations were then subjected to a post hoc test. The number of occlusal interferences with the compression molding technique was higher in both centric and eccentric positions compared to the two injection molding techniques, with statistical significance in centric, protrusive, right lateral nonworking, and left lateral working positions (P < 0.05). Mean vertical pin rise (0.52 mm) was greater with the compression molding technique than with the injection molding techniques, which was statistically significant (P < 0.001). Within the limitations of this study, injection molding techniques exhibited fewer processing errors than the compression molding technique, with statistical significance. There was no statistically significant difference in processing errors between the two injection molding systems.
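
    A sketch of the analysis pattern described (one-way ANOVA followed by a post hoc comparison), run on invented interference counts with scipy; tukey_hsd is used here as a stand-in post hoc test and requires scipy 1.8 or later.

```python
# Sketch with invented occlusal-interference counts (not the study's data):
# one-way ANOVA across three processing techniques followed by Tukey's HSD.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
compression = rng.poisson(6, size=6).astype(float)
injection_prepoly = rng.poisson(3, size=6).astype(float)
injection_unpoly = rng.poisson(3, size=6).astype(float)

f_stat, p_anova = stats.f_oneway(compression, injection_prepoly, injection_unpoly)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

post_hoc = stats.tukey_hsd(compression, injection_prepoly, injection_unpoly)
print(post_hoc)   # pairwise confidence intervals and p-values
```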

  1. Significance of specificity of Tinetti B-POMA test and fall risk factor in third age of life.

    PubMed

    Avdić, Dijana; Pecar, Dzemal

    2006-02-01

    In the third age, psychophysical abilities gradually decrease, as does the ability to adapt to endogenous and exogenous burdens. In 1987, Harada et al. (1) found that 9.5 million persons in the USA had difficulties running daily activities, and that 59% of them (5.6 million) were older than 65 years. The study encompassed 77 respondents of both sexes with an average age of 71.73 +/- 5.63 years (range 65-90 years), chosen by random sampling. Each patient was interviewed in his or her own home and was familiarized with the methodology and aims of the questionnaire. Women made up 64.94% of respondents (50 patients) and men 35.06% (27 patients). For the risk factor score obtained from the questionnaire and for the B-POMA test, there were statistically significant differences between men and women, as well as between patients who had fallen and those who never had. With respect to way of life (living alone or in a community), there were no statistically significant differences. Average B-POMA test results in this study were statistically significantly higher in men and in patients who did not report falling, with no statistically significant difference by way of life. In relation to the percentage of the maximum number of positive answers to particular questions, regarding gender, way of life, and reported falls, there were no statistically significant differences between the value of the B-POMA test and the risk factor score (the questionnaire).

  2. Rodent Biocompatibility Test Using the NASA Foodbar and Epoxy EP21LV

    NASA Technical Reports Server (NTRS)

    Tillman, J.; Steele, M.; Dumars, P.; Vasques, M.; Girten, B.; Sun, S. (Technical Monitor)

    2002-01-01

    Epoxy has been used successfully to affix NASA foodbars to the inner walls of the Animal Enclosure Module for past space flight experiments utilizing rodents. The epoxy used on past missions was discontinued, making it necessary to identify a new epoxy for use on the STS-108 and STS-107 missions. This experiment was designed to test the basic biocompatibility of epoxy EP21LV with male rats (Sprague Dawley) and mice (Swiss Webster) when applied to NASA foodbars. For each species, the test was conducted with a control group fed untreated foodbars and an experimental group fed foodbars treated with EP21LV. For each species, there were no group differences in animal health and no statistical differences (P<0.05) in body weights throughout the study. In mice, there was a 16% increase in heart weight in the epoxy group; this result was not found in rats. For both species, there were no statistical differences found in other organ weights measured. In rats, blood glucose levels were 15% higher and both total protein and globulin were 10% lower in the epoxy group. Statistical differences in these parameters were not found in mice. For both species, no statistical differences were found in other blood parameters tested. Food consumption was not different in rats, but water consumption was significantly decreased, by 10 to 15%, in the epoxy group. The difference in water consumption is likely due to an increased water content of the epoxy-treated foodbars. Finally, both species avoided consumption of the epoxy material. Based on the global analysis of the results, the few parameters found to be statistically different do not appear to reflect a physiologically relevant effect of the epoxy material. We conclude that the EP21LV epoxy is biocompatible with rodents.

  3. Lower incisor inclination regarding different reference planes.

    PubMed

    Zataráin, Brenda; Avila, Josué; Moyaho, Angeles; Carrasco, Rosendo; Velasco, Carmen

    2016-09-01

    The purpose of this study was to assess the degree of lower incisor inclination with respect to different reference planes. It was an observational, analytical, longitudinal, prospective study conducted on 100 lateral cephalograms, which were corrected according to the photograph in natural head position in order to draw the true vertical plane (TVP). The incisor mandibular plane angle (IMPA) was compensated to eliminate the variation due to mandibular plane growth type with the formula "(patient's FMA - 25) + patient's IMPA = compensated IMPA (IMPACOM)". As the data followed a normal distribution, as determined by the Kolmogorov-Smirnov test, parametric tests were used for the statistical analysis: the t-test, ANOVA, and the Pearson correlation coefficient test. Statistical analysis was performed using a statistical significance level of p < 0.05. There is correlation between TVP and the NB line (NB) (0.8614), the Frankfort mandibular incisor angle (FMIA) (0.8894), IMPA (0.6351), the APo line (APo) (0.609), IMPACOM (0.8895), and the McHorris angle (MH) (0.7769). ANOVA showed statistically significant differences among the means for the 7 variables at the 95% confidence level, P=0.0001. The multiple range test showed no significant difference between the following pairs of means: APo-NB (0.88), IMPA-MH (0.36), IMPA-NB (0.65), FMIA-IMPACOM (0.01), FMIA-TVP (0.18), TVP-IMPACOM (0.17). There was correlation among all reference planes. There were statistically significant differences among the means of the planes measured, except for IMPACOM, FMIA and TVP. The IMPA differed significantly from the IMPACOM. The compensated IMPA and the FMIA did not differ significantly from the TVP. The true horizontal plane was mismatched with the Frankfort plane in 84% of the sample, with a range of 19°. The true vertical plane is adequate for measuring lower incisor inclination. Sociedad Argentina de Investigación Odontológica.

  4. Case Studies for the Statistical Design of Experiments Applied to Powered Rotor Wind Tunnel Tests

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.; Tanner, Philip E.; Martin, Preston B.; Commo, Sean A.

    2015-01-01

    The application of statistical Design of Experiments (DOE) to helicopter wind tunnel testing was explored during two powered rotor wind tunnel entries during the summers of 2012 and 2013. These tests were performed jointly by the U.S. Army Aviation Development Directorate Joint Research Program Office and NASA Rotary Wing Project Office, currently the Revolutionary Vertical Lift Project, at NASA Langley Research Center located in Hampton, Virginia. Both entries were conducted in the 14- by 22-Foot Subsonic Tunnel with a small portion of the overall tests devoted to developing case studies of the DOE approach as it applies to powered rotor testing. A 16-47 times reduction in the number of data points required was estimated by comparing the DOE approach to conventional testing methods. The average error for the DOE surface response model for the OH-58F test was 0.95 percent and 4.06 percent for drag and download, respectively. The DOE surface response model of the Active Flow Control test captured the drag within 4.1 percent of measured data. The operational differences between the two testing approaches are identified, but did not prevent the safe operation of the powered rotor model throughout the DOE test matrices.
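
    The response surface model itself is not given in the record; the sketch below fits a generic full-quadratic surface in two coded factors by least squares, with invented factor names and data, to show the kind of model a DOE test matrix supports.

```python
# Hedged sketch: fit a full quadratic response surface in two coded factors by
# least squares; the factors and "drag" response are invented for illustration.
import numpy as np

rng = np.random.default_rng(13)
x1 = rng.uniform(-1, 1, size=20)          # coded factor 1 (hypothetical)
x2 = rng.uniform(-1, 1, size=20)          # coded factor 2 (hypothetical)
drag = 5 + 2*x1 - 1.5*x2 + 0.8*x1*x2 + 1.2*x1**2 + rng.normal(0, 0.1, size=20)

# Design matrix for a full quadratic response surface.
X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, drag, rcond=None)
pred = X @ coef
pct_err = 100 * np.mean(np.abs(pred - drag) / np.abs(drag))
print("coefficients:", np.round(coef, 2), f"mean |error| = {pct_err:.2f}%")
```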

  5. Adolescent-onset alcohol abuse exacerbates the influence of childhood conduct disorder on late adolescent and early adult antisocial behaviour.

    PubMed

    Howard, Richard; Finn, Peter; Jose, Paul; Gallagher, Jennifer

    2011-12-16

    This study tested the hypothesis that adolescent-onset alcohol abuse (AOAA) would both mediate and moderate the effect of childhood conduct disorder on antisocial behaviour in late adolescence and early adulthood. A sample comprising 504 young men and women strategically recruited from the community were grouped using the criteria of the Diagnostic and Statistical Manual (DSM-IV, American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: APA), as follows: neither childhood conduct disorder (CCD) nor alcohol abuse/dependence; CCD but no alcohol abuse or dependence; alcohol abuse/dependence but no CCD; both CCD and alcohol abuse/dependence. The outcome measure was the sum of positive responses to 55 interview items capturing a variety of antisocial behaviours engaged in since age 15. Severity of lifetime alcohol-related and CCD problems served as predictor variables in regression analysis. Antisocial behaviour problems were greatest in individuals with a history of co-occurring conduct disorder (CD) and alcohol abuse/dependence. While CCD was strongly predictive of adult antisocial behaviour, this effect was both mediated and moderated (exacerbated) by AOAA.

  6. Adolescent-onset alcohol abuse exacerbates the influence of childhood conduct disorder on late adolescent and early adult antisocial behaviour

    PubMed Central

    Howard, Richard; Finn, Peter; Jose, Paul; Gallagher, Jennifer

    2012-01-01

    This study tested the hypothesis that adolescent-onset alcohol abuse (AOAA) would both mediate and moderate the effect of childhood conduct disorder on antisocial behaviour in late adolescence and early adulthood. A sample comprising 504 young men and women strategically recruited from the community were grouped using the criteria of the Diagnostic and Statistical Manual (DSM-IV, American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: APA), as follows: neither childhood conduct disorder (CCD) nor alcohol abuse/dependence; CCD but no alcohol abuse or dependence; alcohol abuse/dependence but no CCD; both CCD and alcohol abuse/dependence. The outcome measure was the sum of positive responses to 55 interview items capturing a variety of antisocial behaviours engaged in since age 15. Severity of lifetime alcohol-related and CCD problems served as predictor variables in regression analysis. Antisocial behaviour problems were greatest in individuals with a history of co-occurring conduct disorder (CD) and alcohol abuse/dependence. While CCD was strongly predictive of adult antisocial behaviour, this effect was both mediated and moderated (exacerbated) by AOAA. PMID:23459369

  7. gsSKAT: Rapid gene set analysis and multiple testing correction for rare-variant association studies using weighted linear kernels.

    PubMed

    Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J

    2017-05-01

    Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
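
    The record derives its own estimators, which are not reproduced here; the sketch below shows one common eigenvalue-based alternative (a Galwey-type effective number of tests) on a simulated correlation matrix, together with the resulting Bonferroni-style threshold.

```python
# Hedged sketch: Galwey-type effective number of tests from the eigenvalues of a
# correlation matrix of (simulated) gene-set test statistics; not the paper's estimator.
import numpy as np

def effective_tests_galwey(corr):
    eigvals = np.clip(np.linalg.eigvalsh(corr), 0.0, None)
    return (np.sqrt(eigvals).sum() ** 2) / eigvals.sum()

rng = np.random.default_rng(10)
# Correlated statistics from overlapping gene sets (simulated, 3 sets).
cov = [[1.0, 0.6, 0.3], [0.6, 1.0, 0.5], [0.3, 0.5, 1.0]]
z = rng.multivariate_normal(np.zeros(3), cov, size=500)
corr = np.corrcoef(z, rowvar=False)

m_eff = effective_tests_galwey(corr)
alpha = 0.05
print(f"M_eff = {m_eff:.2f}, adjusted threshold = {alpha / m_eff:.4f}")
```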

  8. A Study of Dielectric Properties of Proteinuria between 0.2 GHz and 50 GHz

    PubMed Central

    Mun, Peck Shen; Ting, Hua Nong; Ong, Teng Aik; Wong, Chew Ming; Ng, Kwan Hong; Chong, Yip Boon

    2015-01-01

    This paper investigates the dielectric properties of urine in normal subjects and subjects with chronic kidney disease (CKD) at microwave frequency of between 0.2 GHz and 50 GHz. The measurements were conducted using an open-ended coaxial probe at room temperature (25°C), at 30°C and at human body temperature (37°C). There were statistically significant differences in the dielectric properties of the CKD subjects compared to those of the normal subjects. Statistically significant differences in dielectric properties were observed across the temperatures for normal subjects and CKD subjects. Pearson correlation test showed the significant correlation between proteinuria and dielectric properties. The experimental data closely matched the single-pole Debye model. The relaxation dispersion and relaxation time increased with the proteinuria level, while decreasing with the temperature. As for static conductivity, it increased with proteinuria level and temperature. PMID:26066351
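
    As a hedged sketch of fitting a single-pole Debye model, the code below fits only the real (permittivity) part to synthetic, water-like data with scipy's curve_fit; the frequencies and parameter values are illustrative, not the urine measurements in the record.

```python
# Sketch: least-squares fit of the real part of a single-pole Debye model to
# synthetic permittivity data between 0.2 and 50 GHz.
import numpy as np
from scipy.optimize import curve_fit

def debye_real(freq_ghz, eps_inf, delta_eps, tau_ps):
    omega = 2 * np.pi * freq_ghz * 1e9            # angular frequency, rad/s
    return eps_inf + delta_eps / (1 + (omega * tau_ps * 1e-12) ** 2)

freq = np.logspace(np.log10(0.2), np.log10(50), 60)          # GHz
true = debye_real(freq, 5.0, 70.0, 8.3)                      # water-like values
rng = np.random.default_rng(11)
measured = true + rng.normal(0, 0.5, size=freq.size)         # add measurement noise

popt, _ = curve_fit(debye_real, freq, measured, p0=[4.0, 60.0, 10.0])
eps_inf, delta_eps, tau_ps = popt
print(f"eps_inf={eps_inf:.1f}, dispersion={delta_eps:.1f}, tau={tau_ps:.1f} ps")
```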

  9. Reveal Listeria 2.0 test for detection of Listeria spp. in foods and environmental samples.

    PubMed

    Alles, Susan; Curry, Stephanie; Almy, David; Jagadeesan, Balamurugan; Rice, Jennifer; Mozola, Mark

    2012-01-01

    A Performance Tested Method validation study was conducted for a new lateral flow immunoassay (Reveal Listeria 2.0) for detection of Listeria spp. in foods and environmental samples. Results of inclusivity testing showed that the test detects all species of Listeria, with the exception of L. grayi. In exclusivity testing conducted under nonselective growth conditions, all non-listeriae tested produced negative Reveal assay results, except for three strains of Lactobacillus spp. However, these lactobacilli are inhibited by the selective Listeria Enrichment Single Step broth enrichment medium used with the Reveal method. Six foods were tested in parallel by the Reveal method and the U.S. Food and Drug Administration/Bacteriological Analytical Manual (FDA/BAM) reference culture procedure. Considering data from both internal and independent laboratory trials, overall sensitivity of the Reveal method relative to that of the FDA/BAM procedure was 101%. Four foods were tested in parallel by the Reveal method and the U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) reference culture procedure. Overall sensitivity of the Reveal method relative to that of the USDA-FSIS procedure was 98.2%. There were no statistically significant differences in the number of positives obtained by the Reveal and reference culture procedures in any food trials. In testing of swab or sponge samples from four types of environmental surfaces, sensitivity of Reveal relative to that of the USDA-FSIS reference culture procedure was 127%. For two surface types, differences in the number of positives obtained by the Reveal and reference methods were statistically significant, with more positives by the Reveal method in both cases. Specificity of the Reveal assay was 100%, as there were no unconfirmed positive results obtained in any phase of the testing. Results of ruggedness experiments showed that the Reveal assay is tolerant of modest deviations in test sample volume and device incubation time.

  10. Halobetasol Propionate Lotion, 0.05% Provides Superior Hydration Compared to Halobetasol Propionate Cream, 0.05% in a Double-Blinded Study of Occlusivity and Hydration.

    PubMed

    Grove, Gary; Zerweck, Charles; Houser, Tim; Andrasfay, Anthony; Gauthier, Bob; Holland, Charles; Piacquadio, Daniel

    2017-02-01

    This study measured skin hydration and occlusivity of two test products [halobetasol propionate lotion, 0.05% (HBP Lotion) and Ultravate® (halobetasol propionate) cream, 0.05% (HBP Cream)] at 2, 4, and 6 hours after application to skin test sites previously challenged by dry shaving, which was performed to compromise the integrity of the stratum corneum barrier. Trans-epidermal water loss (TEWL), an indicator of skin barrier function, was measured using a cyberDERM, Inc. RG-1 evaporimeter. Skin hydration was evaluated using an IBS SkiCon-200 conductance meter. Test products were applied bilaterally on dry-shaved volar forearm sites, according to a randomization scheme, with two test sites untreated to serve as "dry-shaved" controls. TEWL and conductance were measured at 2, 4, and 6 hours post-treatment. HBP Lotion displayed a significant increase in skin hydration at 2, 4, and 6 hours post-treatment compared to the baseline values and dry-shaved controls (each, P less than 0.001). However, HBP Cream produced a statistically significant increase in skin hydration only after 6 hours (P less than 0.05). HBP Lotion was significantly more effective than HBP Cream in increasing skin hydration at 2 and 4 hours post-treatment (each, P less than 0.001), and had a directional advantage (not statistically significant) at 6 hours. Neither test product had a significant occlusive effect as measured by TEWL at 2, 4, and 6 hours post-application. Both formulations of HBP (Lotion and Cream) contributed to skin moisturization, as measured by skin conductance. HBP Lotion produced a significantly more rapid onset and higher level of moisturization at 2 and 4 hours post-application compared to HBP Cream. The TEWL results indicate that neither HBP Lotion nor HBP Cream provided any significant occlusivity to the skin.

    J Drugs Dermatol. 2017;16(2):140-144.


  11. Review of research designs and statistical methods employed in dental postgraduate dissertations.

    PubMed

    Shirahatti, Ravi V; Hegde-Shetiya, Sahana

    2015-01-01

    There is a need to evaluate the quality of postgraduate dissertations in dentistry submitted to the university in light of international reporting standards. We conducted the review with the objective of documenting the use of sampling methods, measurement standardization, blinding, methods to eliminate bias, appropriate use of statistical tests and appropriate data presentation in postgraduate dental research, and of suggesting and recommending modifications. The public-access database of dissertations from Rajiv Gandhi University of Health Sciences was reviewed. Three hundred and thirty-three eligible dissertations underwent preliminary evaluation, followed by detailed evaluation of 10% of randomly selected dissertations. The dissertations were assessed against international reporting guidelines such as Strengthening the Reporting of Observational Studies in Epidemiology (STROBE), Consolidated Standards of Reporting Trials (CONSORT), and other scholarly resources. The data were compiled using MS Excel and SPSS 10.0. Numbers and percentages were used for describing the data. "In vitro" studies were the most common type of research (39%), followed by observational (32%) and experimental studies (29%). The disciplines of conservative dentistry (92%) and prosthodontics (75%) reported high proportions of in vitro research, while oral surgery (80%) and periodontics (67%) had conducted experimental studies as the major share of their research. Lacunae in the studies included observational studies not following random sampling (70%), experimental studies not following random allocation (75%), failure to mention blinding, confounding variables and calibration of measurements, misrepresentation of data through inappropriate presentation, errors in reporting probability values and failure to report confidence intervals. A few studies showed grossly inappropriate choices of statistical tests, and many studies needed additional tests. Overall, the observations indicated the need to comply with standard guidelines for reporting research.

  12. Medical ethical standards in dermatology: an analytical study of knowledge, attitudes and practices.

    PubMed

    Mostafa, W Z; Abdel Hay, R M; El Lawindi, M I

    2015-01-01

    Dermatology practice has not always been ethically justified. The objective of the study was to assess dermatologists' knowledge of medical ethics, their attitudes towards regulatory measures and their practices, and to study the different factors influencing the knowledge, attitudes and practices of dermatologists. This cross-sectional comparative study was conducted among 214 dermatologists from five academic universities and among participants in two conferences. A 54-item structured anonymous questionnaire was designed to describe the demographic characteristics of the study group as well as their knowledge, attitudes and practices regarding medical ethics standards in clinical and research settings. Five scoring indices were estimated regarding knowledge, attitude and practice. Inferential statistics were used to test differences between groups as indicated. Student's t-test and analysis of variance were carried out for quantitative variables, and the chi-squared test was conducted for qualitative variables. Results were considered statistically significant at P < 0.05. Analysis of the possible factors having an impact on the overall scores revealed that the highest knowledge scores were among dermatologists who practice in an academic setting plus an additional place; however, this difference was statistically non-significant (P = 0.060). Female dermatologists showed a higher attitude score compared to males (P = 0.028). The highest significant attitude score (P = 0.019) regarding clinical practice was recorded among those practicing cosmetic dermatology. The different studied groups of dermatologists showed a significant impact on the attitude score (P = 0.049) and the evidence-practice score (P < 0.001). Ethical practices will improve the quality and integrity of dermatology research. © 2014 European Academy of Dermatology and Venereology.

  13. Exocrine Dysfunction Correlates with Endocrinal Impairment of Pancreas in Type 2 Diabetes Mellitus

    PubMed Central

    Prasanna Kumar, H. R.; Gowdappa, H. Basavana; Hosmani, Tejashwi; Urs, Tejashri

    2018-01-01

    Background: Diabetes mellitus (DM) is a chronic metabolic condition that manifests as an elevated blood sugar level over a prolonged period. The pancreatic endocrine system is generally affected during diabetes, but abnormal exocrine function also often manifests because of the exocrine tissue's proximity to the endocrine system. Fecal elastase-1 (FE-1) is considered an ideal biomarker of exocrine insufficiency of the pancreas. Aim: This study was conducted to assess exocrine dysfunction of the pancreas in patients with type-2 DM (T2DM) by measuring FE-1 levels and to associate the level of hyperglycemia with exocrine pancreatic dysfunction. Methodology: A prospective, cross-sectional comparative study was conducted on both T2DM patients and healthy nondiabetic volunteers. FE-1 levels were measured using a commercial kit (Human Pancreatic Elastase ELISA BS 86-01 from Bioserv Diagnostics). Data analysis was based on descriptive statistics (mean, standard deviation, standard error), the independent-samples t-test and the Chi-square test/cross-tabulation, using SPSS for Windows version 20.0. Results: A statistically nonsignificant (P = 0.5051) relationship between FE-1 deficiency and age was observed, implying that age is not a contributing factor toward exocrine pancreatic insufficiency among diabetic patients. A statistically significant correlation (P = 0.003) between glycated hemoglobin and FE-1 levels was also noted. The associations of retinopathy (P = 0.001) and peripheral pulses (P = 0.001) with FE-1 levels were found to be statistically significant. Conclusion: This study validates the benefit of FE-1 estimation as a surrogate marker of exocrine pancreatic insufficiency, which often remains unmanifest and subclinical. PMID:29535950

  14. The effect of using graphic organizers in the teaching of standard biology

    NASA Astrophysics Data System (ADS)

    Pepper, Wade Louis, Jr.

    This study was conducted to determine whether the use of graphic organizers in the teaching of standard biology would increase student achievement, involvement and the quality of activities. The subjects were 10th-grade standard biology students in a large southern inner-city high school. The study was conducted over a six-week period in an instructional setting using action research as the investigative format. After calculation of the homogeneity between classes, random selection was used to determine the graphic organizer class and the control class. The graphic organizer class was taught unit material through a variety of instructional methods along with the use of teacher-generated graphic organizers. The control class was taught the same unit material using the same instructional methods, but without the use of graphic organizers. Data for the study were gathered from in-class written assignments, teacher-generated tests and text-generated tests, and rubric scores of an out-of-class written assignment and project. Data were also gathered from student reactions, comments, observations and a teacher's research journal. Results were analyzed using descriptive statistics and qualitative interpretation. By comparing statistical results, it was determined that the use of graphic organizers did not make a statistically significant difference in the understanding of biological concepts and retention of factual information. Furthermore, the use of graphic organizers did not make a significant difference in motivating students to fulfill all class assignments with quality efforts and products. However, based upon student reactions and comments along with observations by the researcher, graphic organizers were viewed by the students as a favorable and helpful instructional tool. Notwithstanding the statistical results, student gains from instructional activities using graphic organizers were positive and merit the continued use of graphic organizers as an instructional tool.

  15. Assessment of dental caries and periodontal status in institutionalized hearing impaired children in Khordha District of Odisha.

    PubMed

    Jnaneswar, Avinash; Subramaniya, Goutham Bala; Pathi, Jayashree; Jha, Kunal; Suresan, Vinay; Kumar, Gunjan

    2017-01-01

    Over 5% of the world's population has disabling hearing loss. The oral health of the disabled may be neglected because of the disabling condition, a challenging disease or limited access to oral health care. The objectives of the study were to assess the prevalence of dental caries and the periodontal status of institutionalized hearing-impaired (HI) children in Khordha district of Odisha. A descriptive cross-sectional study of HI children was conducted in Khordha district, Odisha. A Type III examination procedure was used to assess the oral health status of the children. Statistical analysis was performed using the Chi-square test and Student's t-test, and the significance level was fixed at P < 0.05. The final population consisted of 540 HI children, of whom 262 (48.5%) were male and 278 (51.5%) were female; 285 (52.8%) children had severe hearing loss and 227 (42.0%) had profound hearing loss. Bleeding on probing was found in 72 (13.3%) female children as compared to 57 (10.6%) male children. While 131 (24.3%) female children had calculus, 124 (23.0%) male children had the same condition. Total caries prevalence was 19.3%. A statistically highly significant difference across age groups was found for mean decayed teeth (DT), missing teeth (MT) and decayed, missing, filled teeth (DMFT) (P < 0.001), while for mean filled teeth (FT) there was no statistically significant difference according to age group. A statistically highly significant difference was also found for mean DT, extracted teeth and decayed, extracted, filled teeth (P < 0.001). Improved accessibility to dental services as well as dental health education is necessary to ensure optimum dental health within the reach of these less fortunate children.

  16. Statistical inference methods for two crossing survival curves: a comparison of methods.

    PubMed

    Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng

    2015-01-01

    A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests.
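
    The setting discussed above is easy to reproduce in simulation. The sketch below generates two Weibull samples whose survival curves cross and applies the ordinary log-rank test using the lifelines package (assumed to be installed); the adaptive Neyman smooth test and the two-stage procedure recommended in the paper are not implemented here, so this only illustrates the data-generating situation in which the log-rank test can lose power.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(7)
n = 200

# Two Weibull groups whose survival curves cross (shape < 1 vs. > 1), illustrative only.
t_a = rng.weibull(0.7, n) * 10.0   # high early risk
t_b = rng.weibull(1.8, n) * 8.0    # high late risk

# Independent administrative censoring at a fixed time.
c = 12.0
obs_a, obs_b = np.minimum(t_a, c), np.minimum(t_b, c)
ev_a, ev_b = (t_a <= c).astype(int), (t_b <= c).astype(int)

res = logrank_test(obs_a, obs_b, event_observed_A=ev_a, event_observed_B=ev_b)
print(f"log-rank chi2 = {res.test_statistic:.2f}, p = {res.p_value:.3f}")
```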

  17. Statistical Inference Methods for Two Crossing Survival Curves: A Comparison of Methods

    PubMed Central

    Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng

    2015-01-01

    A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman’s smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér—von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman’s smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests. PMID:25615624

  18. Evaluation of interactive teaching for undergraduate medical students using a classroom interactive response system in India.

    PubMed

    Datta, Rakesh; Datta, Karuna; Venkatesh, M D

    2015-07-01

    The classical didactic lecture has been the cornerstone of theoretical undergraduate medical education. Its efficacy, however, is reduced by limited interaction and the short attention span of students. It is hypothesized that the interactive response pad obviates some of these drawbacks. The aim of this study was to evaluate the effectiveness of an interactive response system by comparing it with conventional classroom teaching. A prospective comparative longitudinal study was conducted on 192 students who were exposed to either conventional or interactive teaching over 20 classes. Pre-test, post-test and retention test (8-12 weeks later) scores were collated and statistically analysed. An independent observer measured the number of student interactions in each class. Pre-test scores from both groups were similar (p = 0.71). There was significant improvement in post-test scores compared to pre-test scores with either method (p < 0.001). The interactive post-test score was better than the conventional post-test score (p < 0.001) by 8-10% (difference of means 9.24%, 95% CI 8.2% to 10.3%). The interactive retention test score was better than the conventional retention test score (p < 0.001) by 15-18% (difference of means 16.64%, 95% CI 15.0% to 18.2%). There were 51 participative events in the interactive group vs 25 in the conventional group. The Interactive Response Pad method was efficacious in teaching. Students taught with the interactive method were likely to score 8-10% higher (statistically significant) in the immediate post-class period and 15-18% higher (statistically significant) after 8-12 weeks. The number of student-teacher interactions increases when using the interactive response pads.
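
    As a small illustration of how a difference of means and its 95% confidence interval (as reported above) can be computed, the sketch below uses Welch's unequal-variance formula on hypothetical score vectors; the numbers are invented and do not reproduce the study's results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical post-test percentage scores for the two teaching methods.
interactive = rng.normal(72, 10, 96)
conventional = rng.normal(63, 10, 96)

va = interactive.var(ddof=1) / len(interactive)
vb = conventional.var(ddof=1) / len(conventional)
diff = interactive.mean() - conventional.mean()
se = np.sqrt(va + vb)

# Welch-Satterthwaite degrees of freedom for unequal variances.
df = (va + vb) ** 2 / (va ** 2 / (len(interactive) - 1) + vb ** 2 / (len(conventional) - 1))
tcrit = stats.t.ppf(0.975, df)

print(f"difference of means = {diff:.2f}%, "
      f"95% CI {diff - tcrit * se:.2f}% to {diff + tcrit * se:.2f}%")
```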

  19. Evaluation of undergraduate nursing students' attitudes towards statistics courses, before and after a course in applied statistics.

    PubMed

    Hagen, Brad; Awosoga, Olu; Kellett, Peter; Dei, Samuel Ofori

    2013-09-01

    Undergraduate nursing students must often take a course in statistics, yet there is scant research to inform teaching pedagogy. The objectives of this study were to assess nursing students' overall attitudes towards statistics courses - including (among other things) overall fear and anxiety, preferred learning and teaching styles, and the perceived utility and benefit of taking a statistics course - before and after taking a mandatory course in applied statistics. The authors used a pre-experimental research design (a one-group pre-test/post-test research design), by administering a survey to nursing students at the beginning and end of the course. The study was conducted at a University in Western Canada that offers an undergraduate Bachelor of Nursing degree. Participants included 104 nursing students, in the third year of a four-year nursing program, taking a course in statistics. Although students only reported moderate anxiety towards statistics, student anxiety about statistics had dropped by approximately 40% by the end of the course. Students also reported a considerable and positive change in their attitudes towards learning in groups by the end of the course, a potential reflection of the team-based learning that was used. Students identified preferred learning and teaching approaches, including the use of real-life examples, visual teaching aids, clear explanations, timely feedback, and a well-paced course. Students also identified preferred instructor characteristics, such as patience, approachability, in-depth knowledge of statistics, and a sense of humor. Unfortunately, students only indicated moderate agreement with the idea that statistics would be useful and relevant to their careers, even by the end of the course. Our findings validate anecdotal reports on statistics teaching pedagogy, although more research is clearly needed, particularly on how to increase students' perceptions of the benefit and utility of statistics courses for their nursing careers. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  20. A Second Grade Study of the Effectiveness of Economy Keys to Reading Versus Macmillan Series r in Reading Comprehension.

    ERIC Educational Resources Information Center

    Olsen, Marilyn

    A study conducted in suburban central New Jersey using 218 second graders' California Achievement Test (CAT) scores from 1986-1988 compared the effectiveness of two well-known reading programs. Results indicated that although there was no statistically significant difference in the scores, the mean difference suggested that children who were…

  1. Modeling Antimicrobial Activity of Clorox(R) Using an Agar-Diffusion Test: A New Twist On an Old Experiment.

    ERIC Educational Resources Information Center

    Mitchell, James K.; Carter, William E.

    2000-01-01

    Describes using a computer statistical software package called Minitab to model the sensitivity of several microbes to the disinfectant NaOCl (Clorox®) using the Kirby-Bauer technique. Each group of students collects data from one microbe, conducts regression analyses, then chooses the best-fit model based on the highest r-values obtained.…

  2. Using a Model of Analysts' Judgments to Augment an Item Calibration Process

    ERIC Educational Resources Information Center

    Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling

    2015-01-01

    When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…

  3. The Effects of Conditioned Reinforcement for Reading on Reading Comprehension for 5th Graders

    ERIC Educational Resources Information Center

    Cumiskey Moore, Colleen

    2017-01-01

    In three experiments, I tested the effects of the conditioned reinforcement for reading (R+Reading) on reading comprehension with 5th graders. In Experiment 1, I conducted a series of statistical analyses with data from 18 participants for one year. I administered 4 pre/post measurements for reading repertoires which included: 1) state-wide…

  4. Stressful Life Events, Social Support, and Achievement: A Study of Three Grade Levels in a Multicultural Environment.

    ERIC Educational Resources Information Center

    Levitt, Mary J.; And Others

    This study assessed the extent to which support exerts direct or indirect effects on child and adolescent achievement (grade point average and Statistical Aptitude Test scores). Personal interviews were conducted with 120 African American, 101 Anglo-European American, and 112 Latin American students (151 males and 182 females) in grades 1-2,…

  5. Single-Participant Assessment of Treatment Mediators: Strategy Description and Examples from a Behavioral Activation Intervention for Depressed Adolescents

    ERIC Educational Resources Information Center

    Gaynor, Scott T.; Harris, Amanda

    2008-01-01

    Determining the means by which effective psychotherapy works is critical. A generally recommended strategy for identifying the potential causal variables is to conduct group-level statistical tests of treatment mediators. Herein the case is made for also assessing mediators of treatment outcome at the level of the individual participant.…

  6. The Relationship between the Rigor of a State's Proficiency Standard and Student Achievement in the State

    ERIC Educational Resources Information Center

    Stoneberg, Bert D.

    2015-01-01

    The National Center for Education Statistics conducted a mapping study that equated the percentage proficient or above on each state's NCLB reading and mathematics tests in grades 4 and 8 to the NAEP scale. Each "NAEP equivalent score" was labeled according to NAEP's achievement levels and used to compare state proficiency standards and…

  7. LSAT Dimensionality Analysis for the December 1991, June 1992, and October 1992 Administrations. Statistical Report. LSAC Research Report Series.

    ERIC Educational Resources Information Center

    Douglas, Jeff; Kim, Hae-Rim; Roussos, Louis; Stout, William; Zhang, Jinming

    An extensive nonparametric dimensionality analysis of latent structure was conducted on three forms of the Law School Admission Test (LSAT) (December 1991, June 1992, and October 1992) using the DIMTEST model in confirmatory analyses and using DIMTEST, FAC, DETECT, HCA, PROX, and a genetic algorithm in exploratory analyses. Results indicate that…

  8. Modification of Kolmogorov-Smirnov test for DNA content data analysis through distribution alignment.

    PubMed

    Huang, Shuguang; Yeo, Adeline A; Li, Shuyu Dan

    2007-10-01

    The Kolmogorov-Smirnov (K-S) test is a statistical method often used for comparing two distributions. In high-throughput screening (HTS) studies, such distributions usually arise from the phenotype of independent cell populations. However, the K-S test has been criticized for being overly sensitive in applications, and it often detects a statistically significant difference that is not biologically meaningful. One major reason is a common phenomenon in HTS studies: systematic drift exists among the distributions due to factors such as instrument variation, plate edge effects and accidental differences in sample handling. In particular, in high-content cellular imaging experiments, the location shift can be dramatic since some compounds themselves are fluorescent. This oversensitivity of the K-S test is particularly pronounced in cellular assays, where sample sizes are very large (usually several thousand). In this paper, a modified K-S test is proposed to deal with the nonspecific location-shift problem in HTS studies. Specifically, we propose that the distributions be "normalized" by density curve alignment before the K-S test is conducted. In applications to simulated data and real experimental data, the results show that the proposed method has improved specificity.
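
    The idea of removing a nonspecific location shift before testing can be sketched as follows. Median-centering is used here as a crude stand-in for the paper's density-curve alignment, the data are simulated, and scipy's two-sample K-S test is assumed available.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
# Two large cell populations with the same shape but a systematic location drift.
control = rng.normal(100, 15, 5000)
treated = rng.normal(104, 15, 5000)          # 4-unit shift only, no shape change

raw = ks_2samp(control, treated)

# Crude alignment: remove the location shift before testing (median-centering).
aligned = ks_2samp(control - np.median(control), treated - np.median(treated))

print(f"raw K-S p = {raw.pvalue:.2e}  (flags the nonspecific shift)")
print(f"aligned K-S p = {aligned.pvalue:.3f}  (shape difference only)")
```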

  9. Objective forensic analysis of striated, quasi-striated and impressed toolmarks

    NASA Astrophysics Data System (ADS)

    Spotts, Ryan E.

    Following the 1993 Daubert v. Merrell Dow Pharmaceuticals, Inc. court case and continuing to the 2010 National Academy of Sciences report, comparative forensic toolmark examination has received many challenges to its admissibility in court cases and its scientific foundations. Many of these challenges deal with the subjective nature of determining whether toolmarks are identifiable. This questioning of current identification methods has created a demand for objective methods of identification - "objective" implying known error rates and statistical reliability. The demand for objective methods has resulted in research that created a statistical algorithm capable of comparing toolmarks to determine their statistical similarity, and thus the ability to separate matching and nonmatching toolmarks. This was expanded to the creation of virtual toolmarking (characterization of a tool to predict the toolmark it will create). The statistical algorithm, originally designed for two-dimensional striated toolmarks, had been successfully applied to striated screwdriver and quasi-striated plier toolmarks. Following this success, a blind study was conducted to validate the virtual toolmarking capability using striated screwdriver marks created at various angles of incidence. Work was also performed to optimize the statistical algorithm by implementing means to ensure the algorithm operations were constrained to logical comparison regions (e.g. the opposite ends of two toolmarks do not need to be compared because they do not coincide with each other). This work was performed on quasi-striated shear cut marks made with pliers - a previously tested, more difficult application of the statistical algorithm that could demonstrate the difference in results due to optimization. The final research was conducted with pseudo-striated impression toolmarks made with chisels. Impression marks, which are more complex than striated marks, were analyzed using the algorithm to separate matching and nonmatching toolmarks. Results of the research are presented, along with evidence for the primary assumption of forensic toolmark examination: that all tools can create identifiably unique toolmarks.

  10. Robustness of the sequential lineup advantage.

    PubMed

    Gronlund, Scott D; Carlson, Curt A; Dailey, Sarah B; Goodsell, Charles A

    2009-06-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup advantages and 3 significant simultaneous advantages. Both sequential advantages occurred when the good photograph of the guilty suspect or either innocent suspect was in the fifth position in the sequential lineup; all 3 simultaneous advantages occurred when the poorer quality photograph of the guilty suspect or either innocent suspect was in the second position. Adjusting the statistical criterion to control for the multiple tests (.05/24) revealed no significant sequential advantages. Moreover, despite finding more conservative overall choosing for the sequential lineup, no support was found for the proposal that a sequential advantage was due to that conservative criterion shift. Unless lineups with particular characteristics predominate in the real world, there appears to be no strong preference for conducting lineups in either a sequential or a simultaneous manner. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
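
    The .05/24 adjustment mentioned above is a plain Bonferroni correction; a minimal sketch with hypothetical p-values (not the study's) using statsmodels is shown below.

```python
from statsmodels.stats.multitest import multipletests

# 24 hypothetical p-values standing in for the 24 sequential-vs-simultaneous comparisons.
pvals = [0.012, 0.034, 0.003, 0.08, 0.21, 0.47] + [0.50] * 18

reject, _, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
print(f"per-test threshold = {0.05 / len(pvals):.4f}; "
      f"comparisons still significant after correction: {reject.sum()}")
```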

  11. A bibliometric analysis of statistical terms used in American Physical Therapy Association journals (2011-2012): evidence for educating physical therapists.

    PubMed

    Tilson, Julie K; Marshall, Katie; Tam, Jodi J; Fetters, Linda

    2016-04-22

    A primary barrier to the implementation of evidence based practice (EBP) in physical therapy is therapists' limited ability to understand and interpret statistics. Physical therapists demonstrate limited skills and report low self-efficacy for interpreting results of statistical procedures. While standards for physical therapist education include statistics, little empirical evidence is available to inform what should constitute such curricula. The purpose of this study was to conduct a census of the statistical terms and study designs used in physical therapy literature and to use the results to make recommendations for curricular development in physical therapist education. We conducted a bibliometric analysis of 14 peer-reviewed journals associated with the American Physical Therapy Association over 12 months (Oct 2011-Sept 2012). Trained raters recorded every statistical term appearing in identified systematic reviews, primary research reports, and case series and case reports. Investigator-reported study design was also recorded. Terms representing the same statistical test or concept were combined into a single, representative term. Cumulative percentage was used to identify the most common representative statistical terms. Common representative terms were organized into eight categories to inform curricular design. Of 485 articles reviewed, 391 met the inclusion criteria. These 391 articles used 532 different terms which were combined into 321 representative terms; 13.1 (sd = 8.0) terms per article. Eighty-one representative terms constituted 90% of all representative term occurrences. Of the remaining 240 representative terms, 105 (44%) were used in only one article. The most common study design was prospective cohort (32.5%). Physical therapy literature contains a large number of statistical terms and concepts for readers to navigate. However, in the year sampled, 81 representative terms accounted for 90% of all occurrences. These "common representative terms" can be used to inform curricula to promote physical therapists' skills, competency, and confidence in interpreting statistics in their professional literature. We make specific recommendations for curriculum development informed by our findings.

  12. A Numerical Simulation and Statistical Modeling of High Intensity Radiated Fields Experiment Data

    NASA Technical Reports Server (NTRS)

    Smith, Laura J.

    2004-01-01

    Tests are conducted on a quad-redundant fault-tolerant flight control computer to establish the upset characteristics of an avionics system in an electromagnetic field. A numerical simulation and a statistical model are described in this work to analyze the open-loop experiment data collected in the reverberation chamber at NASA LaRC as part of an effort to examine the effects of electromagnetic interference on fly-by-wire aircraft control systems. By comparing thousands of simulation and model outputs, the models that best describe the data are first identified, and then a systematic statistical analysis is performed on the data. These efforts culminate in an extrapolation of values that are, in turn, used to support previous efforts to evaluate the data.

  13. Diagnostic potential of real-time elastography (RTE) and shear wave elastography (SWE) to differentiate benign and malignant thyroid nodules: A systematic review and meta-analysis.

    PubMed

    Hu, Xiangdong; Liu, Yujiang; Qian, Linxue

    2017-10-01

    Real-time elastography (RTE) and shear wave elastography (SWE) are noninvasive and easily available imaging techniques that measure tissue strain, and it has been reported that the sensitivity and specificity of elastography are better than those of conventional technologies in differentiating between benign and malignant thyroid nodules. Relevant articles were searched in multiple databases; the comparison of elasticity index (EI) was conducted with Review Manager 5.0. Forest plots of sensitivity and specificity and the SROC curves of RTE and SWE were produced with STATA 10.0 software. In addition, sensitivity analysis and bias analysis of the studies were conducted to examine the quality of the articles; to estimate possible publication bias, a funnel plot was used and the Egger test was conducted. Finally, 22 articles that satisfied the inclusion criteria were included in this study. After excluding ineligible data, 2106 benign and 613 malignant nodules remained. The meta-analysis suggested that the difference in EI between benign and malignant nodules was statistically significant (SMD = 2.11, 95% CI [1.67, 2.55], P < .00001). The overall sensitivities of RTE and SWE were roughly comparable, whereas the difference in specificities between these 2 methods was statistically significant. In addition, a statistically significant difference in AUC between RTE and SWE was observed (P < .01). The specificity of RTE was statistically higher than that of SWE, which suggests that, compared with SWE, RTE may be more accurate in differentiating benign and malignant thyroid nodules.

  14. Diagnostic potential of real-time elastography (RTE) and shear wave elastography (SWE) to differentiate benign and malignant thyroid nodules

    PubMed Central

    Hu, Xiangdong; Liu, Yujiang; Qian, Linxue

    2017-01-01

    Background: Real-time elastography (RTE) and shear wave elastography (SWE) are noninvasive and easily available imaging techniques that measure tissue strain, and it has been reported that the sensitivity and specificity of elastography are better than those of conventional technologies in differentiating between benign and malignant thyroid nodules. Methods: Relevant articles were searched in multiple databases; the comparison of elasticity index (EI) was conducted with Review Manager 5.0. Forest plots of sensitivity and specificity and the SROC curves of RTE and SWE were produced with STATA 10.0 software. In addition, sensitivity analysis and bias analysis of the studies were conducted to examine the quality of the articles; to estimate possible publication bias, a funnel plot was used and the Egger test was conducted. Results: Finally, 22 articles that satisfied the inclusion criteria were included in this study. After excluding ineligible data, 2106 benign and 613 malignant nodules remained. The meta-analysis suggested that the difference in EI between benign and malignant nodules was statistically significant (SMD = 2.11, 95% CI [1.67, 2.55], P < .00001). The overall sensitivities of RTE and SWE were roughly comparable, whereas the difference in specificities between these 2 methods was statistically significant. In addition, a statistically significant difference in AUC between RTE and SWE was observed (P < .01). Conclusion: The specificity of RTE was statistically higher than that of SWE, which suggests that, compared with SWE, RTE may be more accurate in differentiating benign and malignant thyroid nodules. PMID:29068996

  15. Knowledge level of effect size statistics, confidence intervals and meta-analysis in Spanish academic psychologists.

    PubMed

    Badenes-Ribera, Laura; Frias-Navarro, Dolores; Pascual-Soler, Marcos; Monterde-I-Bort, Héctor

    2016-11-01

    The statistical reform movement and the American Psychological Association (APA) defend the use of effect-size estimators and their confidence intervals, as well as the interpretation of the clinical significance of findings. A survey was conducted in which academic psychologists were asked about their behavior in designing and carrying out their studies. The sample was composed of 472 participants (45.8% men). The mean number of years as a university professor was 13.56 (SD = 9.27). The use of effect-size estimators is becoming widespread, as is the consideration of meta-analytic studies. However, several inadequate practices still persist. A traditional model of methodological behavior based on statistical significance tests is maintained, marked by the predominance of Cohen's d and the unadjusted R²/η², which are not immune to outliers, departures from normality or violations of statistical assumptions, and by the under-reporting of confidence intervals for effect-size statistics. The paper concludes with recommendations for improving statistical practice.
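
    As an illustration of reporting an effect size together with a confidence interval, as the paper recommends, the sketch below computes Cohen's d with a percentile-bootstrap interval on simulated data; it is only a generic example, not the authors' recommended estimator set.

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled_sd

rng = np.random.default_rng(42)
group1 = rng.normal(0.5, 1.0, 60)   # hypothetical scores
group2 = rng.normal(0.0, 1.0, 60)

d = cohens_d(group1, group2)

# Percentile bootstrap confidence interval for d.
boot = [cohens_d(rng.choice(group1, 60, replace=True),
                 rng.choice(group2, 60, replace=True)) for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"d = {d:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```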

  16. Testing for Granger Causality in the Frequency Domain: A Phase Resampling Method.

    PubMed

    Liu, Siwei; Molenaar, Peter

    2016-01-01

    This article introduces phase resampling, an existing but rarely used surrogate data method for making statistical inferences of Granger causality in frequency domain time series analysis. Granger causality testing is essential for establishing causal relations among variables in multivariate dynamic processes. However, testing for Granger causality in the frequency domain is challenging due to the nonlinear relation between frequency domain measures (e.g., partial directed coherence, generalized partial directed coherence) and time domain data. Through a simulation study, we demonstrate that phase resampling is a general and robust method for making statistical inferences even with short time series. With Gaussian data, phase resampling yields satisfactory type I and type II error rates in all but one condition we examine: when a small effect size is combined with an insufficient number of data points. Violations of normality lead to slightly higher error rates but are mostly within acceptable ranges. We illustrate the utility of phase resampling with two empirical examples involving multivariate electroencephalography (EEG) and skin conductance data.
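
    The core of phase resampling is Fourier phase randomization, which preserves a series' amplitude spectrum (and hence its autocovariance) while destroying the structure under test. The univariate sketch below illustrates the mechanics on a toy series; the article's actual procedure operates on multivariate time series and frequency-domain causality measures, which are not reproduced here.

```python
import numpy as np

def phase_randomize(x, rng):
    """Return a surrogate series with the same amplitude spectrum as x
    but with Fourier phases replaced by uniform random phases."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0                      # keep the mean (DC) component real
    if n % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)

rng = np.random.default_rng(11)
x = np.cumsum(rng.normal(size=512))      # an autocorrelated toy series
surr = phase_randomize(x, rng)

# The amplitude spectrum (and autocovariance) is preserved in the surrogate.
print(np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(surr))))
```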

  17. The effects of reminiscence in promoting mental health of Taiwanese elderly.

    PubMed

    Wang, Jing-Jy; Hsu, Ya-Chuan; Cheng, Su-Fen

    2005-01-01

    This study examined the effects of reminiscence on four selected mental health indicators (depressive symptoms, mood status, self-esteem, and self-health perception) in elderly people residing in community care facilities and at home. A longitudinal quasi-experimental design was used, with two equivalent groups for pre-post testing and purposive sampling with random assignment. Each subject was administered pre- and post-tests at a 4-month interval, but subjects in the experimental group underwent a weekly intervention. Ninety-four subjects completed the study, with 48 in the control group and 46 in the experimental group. In the experimental group, a statistically significant difference (p = 0.041) was found between the pre- and post-tests on the dependent variable, depressive symptoms. However, no statistically significant change was found in subjects' mood status, self-esteem, or self-health perception after the intervention in the experimental group, although slight improvement was observed. Reminiscence not only helps relieve depressive symptoms in the elderly but also empowers nurses to become proactive in their daily nursing care activities.

  18. Effect of long-term proton pump inhibitor administration on gastric mucosal atrophy: A meta-analysis

    PubMed Central

    Li, Zhong; Wu, Cong; Li, Ling; Wang, Zhaoming; Xie, Haibin; He, Xiaozhou; Feng, Jin

    2017-01-01

    Background/Aims: Proton pump inhibitors (PPIs) are widely used for the treatment of acid-related gastrointestinal diseases. Recently, some studies have reported that PPIs can alter the gastric mucosal architecture; however, the relationship remains controversial. This meta-analysis was designed to quantify the association between long-term PPI administration and gastric atrophy. Materials and Methods: A PubMed search was conducted to identify studies using the keywords proton pump inhibitors or PPI and gastric atrophy or atrophic gastritis; the timeframe of publication searched was up to May 2016. Heterogeneity among studies was tested with the Q test; odds ratios (OR) and 95% confidence intervals (CI) were calculated. P values were calculated by I² tests and regarded as statistically significant when <0.05. Results: We identified 13 studies that included 1465 patients under long-term PPI therapy and 1603 controls, with a total gastric atrophy rate of 14.50%. There was a higher presence of gastric atrophy (15.84%; statistically significant) in the PPI group compared to the control group (13.29%) (OR: 1.55, 95% CI: 1.00–2.41). Conclusions: The pooled data suggest that long-term PPI use is associated with increased rates of gastric atrophy. Large-scale multicenter studies should be conducted to further investigate the relationship between acid suppressants and precancerous diseases. PMID:28721975
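
    For context, pooling odds ratios across studies can be sketched with a simple inverse-variance (fixed-effect) combination of log odds ratios. The 2x2 counts below are invented placeholders, not the studies analyzed in this meta-analysis, and the paper's exact pooling model may differ.

```python
import numpy as np

# Hypothetical per-study counts: (atrophy_PPI, no_atrophy_PPI, atrophy_ctrl, no_atrophy_ctrl)
studies = [(20, 120, 15, 130), (35, 180, 25, 200), (12, 90, 10, 95)]

log_or, weights = [], []
for a, b, c, d in studies:
    log_or.append(np.log((a * d) / (b * c)))
    weights.append(1.0 / (1/a + 1/b + 1/c + 1/d))   # inverse of the log-OR variance

log_or, weights = np.array(log_or), np.array(weights)
pooled = np.sum(weights * log_or) / np.sum(weights)   # fixed-effect (inverse variance)
se = 1.0 / np.sqrt(np.sum(weights))

print(f"pooled OR = {np.exp(pooled):.2f}, "
      f"95% CI [{np.exp(pooled - 1.96 * se):.2f}, {np.exp(pooled + 1.96 * se):.2f}]")
```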

  19. Effect of VAPE about mother and infant health on knowledge among primary caregivers of patients with postpartum psychiatric illness:- A pre-experimental study.

    PubMed

    Gandhi, Sailaxmi; Thomas, Linsu; Desai, Geetha

    2017-08-01

    Postpartum psychiatric illnesses are quite common nowadays and can interfere with postnatal care of both mother and infant. The present study used a one-group pre-test/post-test design, adopted with the aim of enhancing knowledge of mother and infant health among primary caregivers of mothers with postpartum psychiatric illness, and was conducted in the mother-baby unit, NIMHANS, Bengaluru. Twenty-five subjects who met the inclusion criteria were recruited through convenience sampling. After the pilot study, data were collected with a researcher-developed tool. The Video-Assisted Psycho-Education [VAPE] intervention consisted of three thirty-minute sessions delivered over three consecutive days following the pre-test. The post-test was administered immediately after the last session. Effectiveness of the intervention was assessed with the McNemar test, paired t-test and Wilcoxon signed-rank test. Analysis revealed a statistically significant (p<0.001) increase in the post-test mean knowledge scores following the VAPE sessions. There was no statistically significant association between the pre-intervention knowledge score and the socio-demographic variables of the study subjects. The study findings revealed that the VAPE programme was effective in increasing the knowledge of the primary caregivers on mother and infant health. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Cognition, comprehension and application of biostatistics in research by Indian postgraduate students in periodontics

    PubMed Central

    Swetha, Jonnalagadda Laxmi; Arpita, Ramisetti; Srikanth, Chintalapani; Nutalapati, Rajasekhar

    2014-01-01

    Background: Biostatistics is an integral part of research protocols. In any field of inquiry or investigation, the data obtained are subsequently classified, analyzed and tested for accuracy by statistical methods. Statistical analysis of collected data thus forms the basis for all evidence-based conclusions. Aim: The aim of this study was to evaluate the cognition, comprehension and application of biostatistics in research among postgraduate students in periodontics in India. Materials and Methods: A total of 391 postgraduate students registered for a master's course in periodontics at various dental colleges across India were included in the survey. Data regarding the level of knowledge and understanding of biostatistics and its application in the design and conduct of research protocols were collected using a dichotomous questionnaire. Descriptive statistics were used for data analysis. Results: Nearly 79.2% of students were aware of the importance of biostatistics in research, 55-65% were familiar with MS-Excel spreadsheets for graphical representation of data and with the statistical software available on the internet, 26.0% had biostatistics as a mandatory subject in their curriculum, 9.5% tried to perform statistical analysis on their own, while 3.0% successfully performed the statistical analysis of their studies on their own. Conclusion: Biostatistics should play a central role in the planning, conduct, interim analysis, final analysis and reporting of periodontal research, especially by postgraduate students. Indian postgraduate students in periodontics are aware of the importance of biostatistics in research, but the level of understanding and application is still basic and needs to be addressed. PMID:24744547

  1. Repeated Sprint Ability with Inclusion of Changing Direction among Veteran Soccer Players

    NASA Astrophysics Data System (ADS)

    Salleh, Omar Md; Nadzalan, Ali Md; Ikhwan Mohamad, Nur; Rahmat, Azali; Azny Mustafa, Mirza; Tan, Kevin

    2018-05-01

    This study was conducted to determine repeated sprint ability (RSA) with the inclusion of changing direction among veteran soccer players. Twelve main players from a university veteran soccer team were recruited and were required to perform the RSA test in two conditions: i) without the ball and ii) with the ball. Descriptive statistics and the Wilcoxon signed-rank test were used to determine the mean scores and the differences in sprint time, percentage sprint decrement score (Sdec) and fatigue index (FI) between the two conditions. Both conditions demonstrated a significant drop in speed by the fifth sprint. Sprint time, Sdec and FI were all found to be significantly different between the two conditions. The findings of this study demonstrate the importance of sport-specific training for performance enhancement.
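
    The decrement score and fatigue index mentioned above are usually computed from the set of sprint times. The sketch below uses formulas that are common in the repeated-sprint literature; definitions vary between authors, so these are not necessarily the exact formulas used in this study, and the times are hypothetical.

```python
def sprint_decrement(times):
    """Percentage decrement score: how much total time exceeds the ideal total
    (the best time repeated on every sprint)."""
    best = min(times)
    return 100.0 * (sum(times) / (best * len(times)) - 1.0)

def fatigue_index(times):
    """Percentage slow-down from the fastest to the slowest sprint."""
    best, worst = min(times), max(times)
    return 100.0 * (worst - best) / best

# Hypothetical repeated-sprint times (seconds), with and without the ball.
without_ball = [3.10, 3.15, 3.22, 3.28, 3.41, 3.45]
with_ball    = [3.45, 3.52, 3.60, 3.71, 3.84, 3.90]

for label, t in [("without ball", without_ball), ("with ball", with_ball)]:
    print(f"{label}: Sdec = {sprint_decrement(t):.1f}%, FI = {fatigue_index(t):.1f}%")
```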

  2. Development of an auditory situation awareness test battery for advanced hearing protectors and TCAPS: detection subtest of DRILCOM (detection-recognition/identification-localization-communication).

    PubMed

    Lee, Kichol; Casali, John G

    2017-01-01

    To design a test battery and conduct a proof-of-concept experiment of a test method that can be used to measure the detection performance afforded by military advanced hearing protection devices (HPDs) and tactical communication and protective systems (TCAPS). The detection test was conducted with each of the four loudspeakers located at front, right, rear and left of the participant. Participants wore 2 in-ear-type TCAPS, 1 earmuff-type TCAPS, a passive Combat Arms Earplug in its "open" or pass-through setting and an EB-15LE™ electronic earplug. Devices with electronic gain systems were tested under two gain settings: "unity" and "max". Testing without any device (open ear) was conducted as a control. Ten participants with audiometric requirements of 25 dBHL or better at 500, 1000, 2000, 4000, 8000 Hz in both ears. Detection task performance varied with different signals and speaker locations. The test identified performance differences among certain TCAPS and protectors, and the open ear. A computer-controlled detection subtest of the Detection-Recognition/Identification-Localisation-Communication (DRILCOM) test battery was designed and implemented. Tested in a proof-of-concept experiment, it showed statistically-significant sensitivity to device differences in detection effects with the small sample of participants (10). This result has important implications for selection and deployment of TCAPS and HPDs on soldiers and workers in dynamic situations.

  3. An inferentialist perspective on the coordination of actions and reasons involved in making a statistical inference

    NASA Astrophysics Data System (ADS)

    Bakker, Arthur; Ben-Zvi, Dani; Makar, Katie

    2017-12-01

    To understand how statistical and other types of reasoning are coordinated with actions to reduce uncertainty, we conducted a case study in vocational education that involved statistical hypothesis testing. We analyzed an intern's research project in a hospital laboratory in which reducing uncertainties was crucial to make a valid statistical inference. In his project, the intern, Sam, investigated whether patients' blood could be sent through pneumatic post without influencing the measurement of particular blood components. We asked, in the process of making a statistical inference, how are reasons and actions coordinated to reduce uncertainty? For the analysis, we used the semantic theory of inferentialism, specifically, the concept of webs of reasons and actions—complexes of interconnected reasons for facts and actions; these reasons include premises and conclusions, inferential relations, implications, motives for action, and utility of tools for specific purposes in a particular context. Analysis of interviews with Sam, his supervisor and teacher as well as video data of Sam in the classroom showed that many of Sam's actions aimed to reduce variability, rule out errors, and thus reduce uncertainties so as to arrive at a valid inference. Interestingly, the decisive factor was not the outcome of a t test but of the reference change value, a clinical chemical measure of analytic and biological variability. With insights from this case study, we expect that students can be better supported in connecting statistics with context and in dealing with uncertainty.
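
    The reference change value (RCV) that proved decisive in Sam's project is, in its standard clinical-chemistry form, RCV = sqrt(2) * z * sqrt(CVa^2 + CVi^2), combining analytical (CVa) and within-subject biological (CVi) variation. The snippet below evaluates that textbook formula with placeholder coefficients of variation; the actual values used in the case study are not given in the abstract.

```python
import math

def reference_change_value(cv_analytical, cv_biological, z=1.96):
    """Reference change value (%): the smallest difference between two serial
    results that exceeds combined analytical and within-subject biological variation."""
    return math.sqrt(2) * z * math.sqrt(cv_analytical ** 2 + cv_biological ** 2)

# Hypothetical coefficients of variation (%) for one blood component.
print(f"RCV = {reference_change_value(cv_analytical=2.5, cv_biological=5.0):.1f}%")
```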

  4. Multisample adjusted U-statistics that account for confounding covariates.

    PubMed

    Satten, Glen A; Kong, Maiying; Datta, Somnath

    2018-06-19

    Multisample U-statistics encompass a wide class of test statistics that allow the comparison of 2 or more distributions. U-statistics are especially powerful because they can be applied to both numeric and nonnumeric data, eg, ordinal and categorical data where a pairwise similarity or distance-like measure between categories is available. However, when comparing the distribution of a variable across 2 or more groups, observed differences may be due to confounding covariates. For example, in a case-control study, the distribution of exposure in cases may differ from that in controls entirely because of variables that are related to both exposure and case status and are distributed differently among case and control participants. We propose to use individually reweighted data (ie, using the stratification score for retrospective data or the propensity score for prospective data) to construct adjusted U-statistics that can test the equality of distributions across 2 (or more) groups in the presence of confounding covariates. Asymptotic normality of our adjusted U-statistics is established and a closed form expression of their asymptotic variance is presented. The utility of our approach is demonstrated through simulation studies, as well as in an analysis of data from a case-control study conducted among African-Americans, comparing whether the similarity in haplotypes (ie, sets of adjacent genetic loci inherited from the same parent) occurring in a case and a control participant differs from the similarity in haplotypes occurring in 2 control participants. Copyright © 2018 John Wiley & Sons, Ltd.
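
    A generic sketch of the weighting idea is shown below: a two-sample U-statistic with the Mann-Whitney kernel in which each observation carries an individual weight (for example, an inverse propensity or stratification-score weight). This is only an illustrative estimator on simulated data; it is not the authors' exact adjusted U-statistic, and their asymptotic variance formula is not implemented.

```python
import numpy as np

def weighted_u_statistic(x, y, wx, wy):
    """Weighted two-sample U-statistic with the Mann-Whitney kernel
    h(x, y) = 1{x < y} + 0.5 * 1{x == y}; wx and wy are per-observation weights."""
    kernel = (x[:, None] < y[None, :]) + 0.5 * (x[:, None] == y[None, :])
    w = wx[:, None] * wy[None, :]
    return float(np.sum(w * kernel) / np.sum(w))

rng = np.random.default_rng(5)
cases, controls = rng.normal(0.3, 1, 80), rng.normal(0.0, 1, 120)
# Placeholder weights standing in for inverse propensity/stratification scores.
w_cases, w_controls = rng.uniform(0.5, 2.0, 80), rng.uniform(0.5, 2.0, 120)

u = weighted_u_statistic(cases, controls, w_cases, w_controls)
print(f"weighted U (estimate of P(X < Y)) = {u:.3f}")   # 0.5 under equal distributions
```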

  5. Guidelines for the Investigation of Mediating Variables in Business Research.

    PubMed

    MacKinnon, David P; Coxe, Stefany; Baraldi, Amanda N

    2012-03-01

    Business theories often specify the mediating mechanisms by which a predictor variable affects an outcome variable. In the last 30 years, investigations of mediating processes have become more widespread with corresponding developments in statistical methods to conduct these tests. The purpose of this article is to provide guidelines for mediation studies by focusing on decisions made prior to the research study that affect the clarity of conclusions from a mediation study, the statistical models for mediation analysis, and methods to improve interpretation of mediation results after the research study. Throughout this article, the importance of a program of experimental and observational research for investigating mediating mechanisms is emphasized.
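
    One of the statistical models commonly used in such mediation analyses is the product-of-coefficients estimate with a bootstrap confidence interval. The sketch below is a minimal single-mediator version on simulated data with hypothetical variable names; it is not the full set of models discussed in the article.

```python
import numpy as np

def ab_estimate(x, m, y):
    """Product-of-coefficients mediated effect: a from M ~ X, b from Y ~ X + M."""
    a = np.polyfit(x, m, 1)[0]                       # slope of M on X
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2] # coefficient of M in Y ~ 1 + X + M
    return a * b

rng = np.random.default_rng(8)
n = 300
x = rng.normal(size=n)                               # predictor
m = 0.5 * x + rng.normal(size=n)                     # mediator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)           # outcome

ab = ab_estimate(x, m, y)

# Percentile bootstrap interval for the mediated effect a*b.
idx = np.arange(n)
boot = []
for _ in range(2000):
    i = rng.choice(idx, n, replace=True)
    boot.append(ab_estimate(x[i], m[i], y[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"a*b = {ab:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```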

  6. Do physicians understand cancer screening statistics? A national survey of primary care physicians in the United States.

    PubMed

    Wegwarth, Odette; Schwartz, Lisa M; Woloshin, Steven; Gaissmaier, Wolfgang; Gigerenzer, Gerd

    2012-03-06

    Unlike reduced mortality rates, improved survival rates and increased early detection do not prove that cancer screening tests save lives. Nevertheless, these 2 statistics are often used to promote screening. To learn whether primary care physicians understand which statistics provide evidence about whether screening saves lives. Parallel-group, randomized trial (randomization controlled for order effect only), conducted by Internet survey. (ClinicalTrials.gov registration number: NCT00981019) National sample of U.S. primary care physicians from a research panel maintained by Harris Interactive (79% cooperation rate). 297 physicians who practiced both inpatient and outpatient medicine were surveyed in 2010, and 115 physicians who practiced exclusively outpatient medicine were surveyed in 2011. Physicians received scenarios about the effect of 2 hypothetical screening tests: The effect was described as improved 5-year survival and increased early detection in one scenario and as decreased cancer mortality and increased incidence in the other. Physicians' recommendation of screening and perception of its benefit in the scenarios and general knowledge of screening statistics. Primary care physicians were more enthusiastic about the screening test supported by irrelevant evidence (5-year survival increased from 68% to 99%) than about the test supported by relevant evidence (cancer mortality reduced from 2 to 1.6 in 1000 persons). When presented with irrelevant evidence, 69% of physicians recommended the test, compared with 23% when presented with relevant evidence (P < 0.001). When asked general knowledge questions about screening statistics, many physicians did not distinguish between irrelevant and relevant screening evidence; 76% versus 81%, respectively, stated that each of these statistics proves that screening saves lives (P = 0.39). About one half (47%) of the physicians incorrectly said that finding more cases of cancer in screened as opposed to unscreened populations "proves that screening saves lives." Physicians' recommendations for screening were based on hypothetical scenarios, not actual practice. Most primary care physicians mistakenly interpreted improved survival and increased detection with screening as evidence that screening saves lives. Few correctly recognized that only reduced mortality in a randomized trial constitutes evidence of the benefit of screening. Harding Center for Risk Literacy, Max Planck Institute for Human Development.

  7. Comparison of optimization strategy and similarity metric in atlas-to-subject registration using statistical deformation model

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Murphy, R. J.; Grupp, R. B.; Sato, Y.; Taylor, R. H.; Armand, M.

    2015-03-01

    A robust atlas-to-subject registration using a statistical deformation model (SDM) is presented. The SDM uses statistics of voxel-wise displacement learned from pre-computed deformation vectors of a training dataset. This allows an atlas instance to be directly translated into an intensity volume and compared with a patient's intensity volume. Rigid and nonrigid transformation parameters were simultaneously optimized via the Covariance Matrix Adaptation - Evolutionary Strategy (CMA-ES), with image similarity used as the objective function. The algorithm was tested on CT volumes of the pelvis from 55 female subjects. A performance comparison of the CMA-ES and Nelder-Mead downhill simplex optimization algorithms with the mutual information and normalized cross correlation similarity metrics was conducted. Simulation studies using synthetic subjects were performed, as well as leave-one-out cross validation studies. Both studies suggested that mutual information and CMA-ES achieved the best performance. The leave-one-out test demonstrated 4.13 mm error with respect to the true displacement field, and 26,102 function evaluations in 180 seconds, on average.
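
    The registration strategy compared above amounts to searching transformation parameters that maximize an image-similarity metric with a derivative-free optimizer. The sketch below is a deliberately reduced, translation-only illustration using normalized cross-correlation and the Nelder-Mead method (one of the optimizers compared in the study); it assumes NumPy/SciPy and synthetic images and is not the authors' statistical deformation model pipeline.

        # Toy intensity-based registration: recover a 2-D translation by maximizing
        # normalized cross-correlation (NCC) with a derivative-free optimizer.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.ndimage import shift as nd_shift, gaussian_filter

        rng = np.random.default_rng(1)
        fixed = gaussian_filter(rng.normal(size=(64, 64)), sigma=4)   # smooth synthetic "subject"
        moving = nd_shift(fixed, (3.5, -2.0), mode="nearest")         # "atlas" offset by a known shift

        def ncc(a, b):
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            return (a * b).mean()

        def objective(params):
            # params = (ty, tx): translate the moving image and score similarity to the fixed image.
            return -ncc(fixed, nd_shift(moving, params, mode="nearest"))

        res = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
        print("recovered translation:", res.x)   # expected to be near (-3.5, 2.0)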

  8. Comments on statistical issues in numerical modeling for underground nuclear test monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, W.L.; Anderson, K.K.

    1993-03-01

    The Symposium concluded with prepared summaries by four experts in the involved disciplines. These experts made no mention of statistics and/or the statistical content of issues. The first author contributed an extemporaneous statement at the Symposium because there are important issues associated with conducting and evaluating numerical modeling that are familiar to statisticians and often treated successfully by them. This note expands upon these extemporaneous remarks. Statistical ideas may be helpful in resolving some numerical modeling issues. Specifically, we comment first on the role of statistical design/analysis in the quantification process to answer the question "what do we know about the numerical modeling of underground nuclear tests?" and second on the peculiar nature of uncertainty analysis for situations involving numerical modeling. The simulations described in the workshop, though associated with topic areas, were basically sets of examples. Each simulation was tuned towards agreeing with either empirical evidence or an expert's opinion of what empirical evidence would be. While the discussions were reasonable, whether the embellishments were correct or a forced fitting of reality is unclear and illustrates that "simulation is easy." We also suggest that these examples of simulation are typical and the questions concerning the legitimacy and the role of knowing the reality are fair, in general, with respect to simulation. The answers will help us understand why "prediction is difficult."

  9. Crossmodal Statistical Binding of Temporal Information and Stimuli Properties Recalibrates Perception of Visual Apparent Motion

    PubMed Central

    Zhang, Yi; Chen, Lihan

    2016-01-01

    Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. We here examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimuli properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading and one lagging visual Ternus frame (VAAV) or dominantly inserted between two visual Ternus frames (AVVA). Participants were required to report which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair but with similar temporal configurations as in Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support that statistical binding of temporal information and stimuli properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910

  10. Assessing the significance of pedobarographic signals using random field theory.

    PubMed

    Pataky, Todd C

    2008-08-07

    Traditional pedobarographic statistical analyses are conducted over discrete regions. Recent studies have demonstrated that regionalization can corrupt pedobarographic field data through conflation when arbitrary dividing lines inappropriately delineate smooth field processes. An alternative is to register images such that homologous structures optimally overlap and then conduct statistical tests at each pixel to generate statistical parametric maps (SPMs). The significance of SPM processes may be assessed within the framework of random field theory (RFT). RFT is ideally suited to pedobarographic image analysis because its fundamental data unit is a lattice sampling of a smooth and continuous spatial field. To correct for the vast number of multiple comparisons inherent in such data, recent pedobarographic studies have employed a Bonferroni correction to retain a constant family-wise error rate. This approach unfortunately neglects the spatial correlation of neighbouring pixels, so provides an overly conservative (albeit valid) statistical threshold. RFT generally relaxes the threshold depending on field smoothness and on the geometry of the search area, but it also provides a framework for assigning p values to suprathreshold clusters based on their spatial extent. The current paper provides an overview of basic RFT concepts and uses simulated and experimental data to validate both RFT-relevant field smoothness estimations and RFT predictions regarding the topological characteristics of random pedobarographic fields. Finally, previously published experimental data are re-analysed using RFT inference procedures to demonstrate how RFT yields easily understandable statistical results that may be incorporated into routine clinical and laboratory analyses.
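
    As a point of reference for the Bonferroni approach described above as valid but conservative, the sketch below runs pixel-wise two-sample t-tests over a synthetic pressure image and applies a Bonferroni threshold; the random field theory correction itself, which depends on estimated field smoothness, is not reproduced here. Image size, group sizes and the effect location are invented.

        # Pixel-wise two-sample t tests over a small synthetic "pressure image" grid,
        # thresholded with the Bonferroni correction discussed above.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        h, w, n = 32, 16, 20                        # image size and subjects per group
        group_a = rng.normal(size=(n, h, w))
        group_b = rng.normal(size=(n, h, w))
        group_b[:, 10:15, 5:10] += 1.0              # a localized true difference

        t, p = stats.ttest_ind(group_a, group_b, axis=0)   # SPM of t values and p values
        alpha = 0.05
        bonferroni_alpha = alpha / (h * w)                  # one test per pixel
        print("pixels surviving Bonferroni:", int((p < bonferroni_alpha).sum()))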

  11. From medium heterogeneity to flow and transport: A time-domain random walk approach

    NASA Astrophysics Data System (ADS)

    Hakoun, V.; Comolli, A.; Dentz, M.

    2017-12-01

    The prediction of flow and transport processes in heterogeneous porous media is based on the qualitative and quantitative understanding of the interplay between 1) spatial variability of hydraulic conductivity, 2) groundwater flow and 3) solute transport. Using a stochastic modeling approach, we study this interplay through direct numerical simulations of Darcy flow and advective transport in heterogeneous media. First, we study flow in correlated hydraulic permeability fields and shed light on the relationship between the statistics of log-hydraulic conductivity, a medium attribute, and the flow statistics. Second, we determine relationships between Eulerian and Lagrangian velocity statistics, that is, between flow and transport attributes. We show how Lagrangian statistics and thus transport behaviors such as late particle arrival times are influenced by the medium heterogeneity on the one hand and the initial particle velocities on the other. We find that equidistantly sampled Lagrangian velocities can be described by a Markov process that evolves on the characteristic heterogeneity length scale. We employ a stochastic relaxation model for the equidistantly sampled particle velocities, which is parametrized by the velocity correlation length. This description results in a time-domain random walk model for the particle motion, whose spatial transitions are characterized by the velocity correlation length and temporal transitions by the particle velocities. This approach relates the statistical medium and flow properties to large scale transport, and allows for conditioning on the initial particle velocities and thus on the medium properties in the injection region. The approach is tested against direct numerical simulations.
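
    A minimal sketch of the time-domain random walk idea described above is given below: particles advance by fixed steps of one correlation length, each step takes a time of step length divided by velocity, and the log-velocity follows a simple first-order Markov (AR(1)) relaxation. All parameter values are illustrative; this is not the authors' calibrated model.

        # Minimal time-domain random walk sketch: fixed spatial steps of one correlation
        # length; the time per step is ell / v, with log-velocity following an AR(1)
        # (Markovian) relaxation toward a stationary distribution.
        import numpy as np

        rng = np.random.default_rng(3)
        n_particles, n_steps = 5000, 200
        ell = 1.0                 # velocity correlation length (spatial step)
        rho = 0.8                 # correlation of log-velocity between successive steps
        mu, sigma = 0.0, 1.0      # stationary mean / std of log-velocity

        logv = rng.normal(mu, sigma, size=n_particles)        # initial velocities (can be conditioned)
        time = np.zeros(n_particles)
        for _ in range(n_steps):
            time += ell / np.exp(logv)                        # temporal increment of each transition
            noise = rng.normal(size=n_particles)
            logv = mu + rho * (logv - mu) + sigma * np.sqrt(1 - rho**2) * noise  # Markov update

        # Arrival-time distribution at distance n_steps * ell; heavy tails reflect low velocities.
        print("median arrival time:", np.median(time), " 95th percentile:", np.percentile(time, 95))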

  12. Reliability of Computerized Neurocognitive Tests for Concussion Assessment: A Meta-Analysis.

    PubMed

    Farnsworth, James L; Dargo, Lucas; Ragan, Brian G; Kang, Minsoo

    2017-09-01

      Although widely used, computerized neurocognitive tests (CNTs) have been criticized because of low reliability and poor sensitivity. A systematic review was published summarizing the reliability of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) scores; however, this was limited to a single CNT. Expansion of the previous review to include additional CNTs and a meta-analysis is needed. Therefore, our purpose was to analyze reliability data for CNTs using meta-analysis and examine moderating factors that may influence reliability.   A systematic literature search (key terms: reliability, computerized neurocognitive test, concussion) of electronic databases (MEDLINE, PubMed, Google Scholar, and SPORTDiscus) was conducted to identify relevant studies.   Studies were included if they met all of the following criteria: used a test-retest design, involved at least 1 CNT, provided sufficient statistical data to allow for effect-size calculation, and were published in English.   Two independent reviewers investigated each article to assess inclusion criteria. Eighteen studies involving 2674 participants were retained. Intraclass correlation coefficients were extracted to calculate effect sizes and determine overall reliability. The Fisher Z transformation adjusted for sampling error associated with averaging correlations. Moderator analyses were conducted to evaluate the effects of the length of the test-retest interval, intraclass correlation coefficient model selection, participant demographics, and study design on reliability. Heterogeneity was evaluated using the Cochran Q statistic.   The proportion of acceptable outcomes was greatest for the Axon Sports CogState Test (75%) and lowest for the ImPACT (25%). Moderator analyses indicated that the type of intraclass correlation coefficient model used significantly influenced effect-size estimates, accounting for 17% of the variation in reliability.   The Axon Sports CogState Test, which has a higher proportion of acceptable outcomes and shorter test duration relative to other CNTs, may be a reliable option; however, future studies are needed to compare the diagnostic accuracy of these instruments.
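
    The Fisher Z step mentioned above can be illustrated with a short calculation; the ICC values and sample sizes below are hypothetical, and the n - 3 inverse-variance weighting is the usual textbook choice rather than anything taken from the review.

        # Averaging reliability coefficients via the Fisher Z transformation.
        import numpy as np

        iccs = np.array([0.55, 0.70, 0.62, 0.81])   # hypothetical test-retest ICCs
        ns = np.array([40, 95, 60, 120])            # hypothetical sample sizes

        z = np.arctanh(iccs)                        # Fisher r-to-z
        weights = ns - 3                            # inverse-variance weights for z
        z_bar = np.average(z, weights=weights)
        se = 1 / np.sqrt(weights.sum())
        lo, hi = np.tanh(z_bar - 1.96 * se), np.tanh(z_bar + 1.96 * se)
        print(f"pooled reliability = {np.tanh(z_bar):.3f}  (95% CI {lo:.3f} to {hi:.3f})")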

  13. The effect of acquiring life skills through humor on social adjustment rate of the female students.

    PubMed

    Maghsoudi, Jahangir; Sabour, Nazanin Hashemi; Yazdani, Mohsen; Mehrabi, Tayebeh

    2010-01-01

    Life skills have different effects on various aspects of mental health. Social adjustment prepares adolescents for entering adulthood. On the other hand, humor and joking in education are considered to reduce stress and enhance learning. Therefore, the present study was conducted to determine the effect of acquiring life skills through humor on the social adjustment of high school girls. This was a two-group semi-experimental study comprising three phases. The study population included 69 first-year high school female students of Isfahan Department of Education district 3 who were selected by simple random sampling. First, social adjustment was measured using the California Personality Inventory. Thereafter, life skills education was delivered using humor over five sessions. Finally, a test was administered to assess acquisition of the life skills; a passing score was required before re-completing the questionnaire. The data were analyzed using SPSS 10 software and independent and paired t-tests. The findings indicated that the mean social adjustment score in the intervention group differed significantly before and after the intervention. Furthermore, there was a statistically significant difference in the mean social adjustment score between the control and test groups after the intervention. The findings indicated that life skills education delivered through humor increased the social adjustment of the high school girl students. Considering the effect of learning life skills on social adjustment, and the results of other studies consistent with the present one, comprehensive implementation of such training with this new method is recommended in schools.

  14. Results and harmonization guidelines from two large-scale international Elispot proficiency panels conducted by the Cancer Vaccine Consortium (CVC/SVI).

    PubMed

    Janetzki, Sylvia; Panageas, Katherine S; Ben-Porat, Leah; Boyer, Jean; Britten, Cedrik M; Clay, Timothy M; Kalos, Michael; Maecker, Holden T; Romero, Pedro; Yuan, Jianda; Kast, W Martin; Hoos, Axel

    2008-03-01

    The Cancer Vaccine Consortium of the Sabin Vaccine Institute (CVC/SVI) is conducting an ongoing large-scale immune monitoring harmonization program through its members and affiliated associations. This effort was brought to life as an external validation program by conducting an international Elispot proficiency panel with 36 laboratories in 2005, and was followed by a second panel with 29 participating laboratories in 2006 allowing for application of learnings from the first panel. Critical protocol choices, as well as standardization and validation practices among laboratories were assessed through detailed surveys. Although panel participants had to follow general guidelines in order to allow comparison of results, each laboratory was able to use its own protocols, materials and reagents. The second panel recorded an overall significantly improved performance, as measured by the ability to detect all predefined responses correctly. Protocol choices and laboratory practices, which can have a dramatic effect on the overall assay outcome, were identified and led to the following recommendations: (A) Establish a laboratory SOP for Elispot testing procedures including (A1) a counting method for apoptotic cells for determining adequate cell dilution for plating, and (A2) overnight rest of cells prior to plating and incubation, (B) Use only pre-tested serum optimized for low background: high signal ratio, (C) Establish a laboratory SOP for plate reading including (C1) human auditing during the reading process and (C2) adequate adjustments for technical artifacts, and (D) Only allow trained personnel, certified per laboratory SOPs, to conduct assays. Recommendations described under (A) were found to make a statistically significant difference in assay performance, while the remaining recommendations are based on practical experiences confirmed by the panel results, which could not be statistically tested. These results provide initial harmonization guidelines to optimize Elispot assay performance for the immunotherapy community. Further optimization is in process with ongoing panels.

  15. An Econometric Model of External Labor Supply to the Establishment Within a Confined Geographic Market.

    ERIC Educational Resources Information Center

    Hines, Robert James

    The study, conducted in the Buffalo, New York standard metropolitan statistical area, was undertaken to formulate and test a simple model of labor supply for a local labor market. The principal variables to be examined to determine the external supply function of labor to the establishment are variants of the rate of change of the entry wage and…

  16. Study of Personnel Attrition and Revocation within U.S. Marine Corps Air Traffic Control Specialties

    DTIC Science & Technology

    2012-03-01

    Entrance Processing Stations (MEPS) and recruit depots, to include non-cognitive testing, such as Navy Computer Adaptive Personality Scales (NCAPS) ... Revocation, Selection, MOS, Regression, Probit, dProbit, STATA, Statistics, Marginal Effects, ASVAB, AFQT, Composite Scores, Screening, NCAPS ... Navy Computer Adaptive Personality Scales (NCAPS), during recruitment. It is also recommended that an economic analysis be conducted comparing the

  17. Comparative transition performance of several nosetip materials as defined by ballistics-range testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reda, D.C.

    1979-01-01

    Requirements and techniques for conducting aerothermodynamic tests of reentry body nosetips/materials in hypersonic ballistics-range environments (ISA 22nd IIS), and associated data interpretation/analyses methods using interactive graphics (ISA 24th IIS) have been outlined. Such testing, which centers on the utilization of electro-optical pyrometry for the measurement of nosetip surface temperature distributions, has provided both the aerothermodynamics and materials-development communities with valuable new capabilities. From an aerothermodynamics standpoint, experimental results serve to test the validity of existing computer codes/correlations, as well as to expand the data base necessary for the generation of improved predictive techniques. From a materials-development standpoint, results serve to define relationships between fabrication/processing methods and associated material thermal response as well as to provide for relative ranking of candidate materials under controlled reentry conditions. Following these multipurpose objectives, ballistic-range tests of preablated graphite and carbon/carbon composite nosetips have been conducted. Results are presented herein which illustrate the comparative transition performance of five nosetip materials from both mean and statistical (degree-of-asymmetry) viewpoints.

  18. Increased Surface Fatigue Lives of Spur Gears by Application of a Coating

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.; Cooper, Clark V.; Townsend, Dennis P.; Hansen, Bruce D.

    2003-01-01

    Hard coatings have potential for increasing gear surface fatigue lives. Experiments were conducted using gears both with and without a metal-containing, carbon-based coating. The gears were case-carburized AISI 9310 steel spur gears. Some gears were provided with the coating by magnetron sputtering. Lives were evaluated by accelerated life tests. For uncoated gears, all fifteen tests resulted in fatigue failure before completing 275 million revolutions. For coated gears, eleven of the fourteen tests were suspended with no fatigue failure after 275 million revolutions. The improved life owing to the coating, approximately a six-fold increase, was a statistically significant result.

  19. What do results from coordinate-based meta-analyses tell us?

    PubMed

    Albajes-Eizagirre, Anton; Radua, Joaquim

    2018-08-01

    Coordinate-based meta-analyses (CBMA) methods, such as Activation Likelihood Estimation (ALE) and Seed-based d Mapping (SDM), have become an invaluable tool for summarizing the findings of voxel-based neuroimaging studies. However, the progressive sophistication of these methods may have concealed two particularities of their statistical tests. Common univariate voxelwise tests (such as the t/z-tests used in SPM and FSL) detect voxels that activate, or voxels that show differences between groups. Conversely, the tests conducted in CBMA test for "spatial convergence" of findings, i.e., they detect regions where studies report "more peaks than in most regions", regions that activate "more than most regions do", or regions that show "larger differences between groups than most regions do". The first particularity is that these tests rely on two spatial assumptions (voxels are independent and have the same probability to have a "false" peak), whose violation may make their results either conservative or liberal, though fortunately current versions of ALE, SDM and some other methods consider these assumptions. The second particularity is that the use of these tests involves an important paradox: the statistical power to detect a given effect is higher if there are no other effects in the brain, whereas lower in presence of multiple effects. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. A sup-score test for the cure fraction in mixture models for long-term survivors.

    PubMed

    Hsu, Wei-Wen; Todem, David; Kim, KyungMann

    2016-12-01

    The evaluation of cure fractions in oncology research under the well known cure rate model has attracted considerable attention in the literature, but most of the existing testing procedures have relied on restrictive assumptions. A common assumption has been to restrict the cure fraction to a constant under alternatives to homogeneity, thereby neglecting any information from covariates. This article extends the literature by developing a score-based statistic that incorporates covariate information to detect cure fractions, with the existing testing procedure serving as a special case. A complication of this extension, however, is that the implied hypotheses are not typical and standard regularity conditions to conduct the test may not even hold. Using empirical processes arguments, we construct a sup-score test statistic for cure fractions and establish its limiting null distribution as a functional of mixtures of chi-square processes. In practice, we suggest a simple resampling procedure to approximate this limiting distribution. Our simulation results show that the proposed test can greatly improve efficiency over tests that neglect the heterogeneity of the cure fraction under the alternative. The practical utility of the methodology is illustrated using ovarian cancer survival data with long-term follow-up from the surveillance, epidemiology, and end results registry. © 2016, The International Biometric Society.

  1. Evaluation of dredged material proposed for ocean disposal from Hackensack River Project Area, New York

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruendell, B.D.; Barrows, E.S.; Borde, A.B.

    1997-01-01

    The objective of the bioassay reevaluation of the Hackensack River Federal Project was to reperform toxicity testing on proposed dredged material with current ammonia reduction protocols. Hackensack River was one of four waterways sampled and evaluated for dredging and disposal in April 1993. Sediment samples were re-collected from the Hackensack River Project area in August 1995. Tests and analyses were conducted according to the manual developed by the USACE and the U.S. Environmental Protection Agency (EPA), Evaluation of Dredged Material Proposed for Ocean Disposal (Testing Manual), commonly referred to as the "Green Book," and the regional manual developed by the USACE-NYD and EPA Region II, Guidance for Performing Tests on Dredged Material to be Disposed of in Ocean Waters. The reevaluation of proposed dredged material from the Hackensack River project area consisted of benthic acute toxicity tests. Thirty-three individual sediment core samples were collected from the Hackensack River project area. Three composite sediments, representing each reach of the area proposed for dredging, were used in benthic acute toxicity testing. Benthic acute toxicity tests were performed with the amphipod Ampelisca abdita and the mysid Mysidopsis bahia. The amphipod and mysid benthic toxicity test procedures followed EPA guidance for reduction of total ammonia concentrations in test systems prior to test initiation. Statistically significant acute toxicity was found in all three Hackensack River composites in the static renewal tests with A. abdita, but not in the static tests with M. bahia. Statistically significant acute toxicity and a greater than 20% increase in mortality over the reference sediment was found in the static renewal tests with A. abdita. Statistically significant mortality 10% over reference sediment was observed in the M. bahia static tests. 5 refs., 2 figs., 2 tabs.

  2. Evaluation of dredged material proposed for ocean disposal from Arthur Kill Project Area, New York

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruendell, B.D.; Barrows, E.S.; Borde, A.B.

    1997-01-01

    The objective of the bioassay reevaluation of Arthur Kill Federal Project was to reperform toxicity testing on proposed dredged material following current ammonia reduction protocols. Arthur Kill was one of four waterways sampled and evaluated for dredging and disposal in April 1993. Sediment samples were recollected from the Arthur Kill Project areas in August 1995. Tests and analyses were conducted according to the manual developed by the USACE and the U.S. Environmental Protection Agency (EPA), Evaluation of Dredged Material Proposed for Ocean Disposal (Testing Manual), commonly referred to as the "Green Book," and the regional manual developed by the USACE-NYD and EPA Region II, Guidance for Performing Tests on Dredged Material to be Disposed of in Ocean Waters. The reevaluation of proposed dredged material from the Arthur Kill project areas consisted of benthic acute toxicity tests. Thirty-three individual sediment core samples were collected from the Arthur Kill project area. Three composite sediments, representing each reach of the area proposed for dredging, were used in benthic acute toxicity testing. Benthic acute toxicity tests were performed with the amphipod Ampelisca abdita and the mysid Mysidopsis bahia. The amphipod and mysid benthic toxicity test procedures followed EPA guidance for reduction of total ammonia concentrations in test systems prior to test initiation. Statistically significant acute toxicity was found in all Arthur Kill composites in the static renewal tests with A. abdita, but not in the static tests with M. bahia. Statistically significant acute toxicity and a greater than 20% increase in mortality over the reference sediment was found in the static renewal tests with A. abdita. M. bahia did not show statistically significant acute toxicity or a greater than 10% increase in mortality over reference sediment in static tests. 5 refs., 2 figs., 2 tabs.

  3. Relationship between Semmes-Weinstein Monofilaments perception Test and sensory nerve conduction studies in Carpal Tunnel Syndrome.

    PubMed

    Raji, Parvin; Ansari, Noureddin Nakhostin; Naghdi, Soofia; Forogh, Bijan; Hasson, Scott

    2014-01-01

    The Semmes-Weinstein Monofilament Test (SWMT) is a widely used clinical test to quantify sensibility in patients with Carpal Tunnel Syndrome (CTS). No study has investigated the relationship between the SWMT and sensory nerve conduction studies (SNCS) in patients with CTS. To assess the relationship between the SWMT and SNCS findings in patients with CTS. This cross-sectional clinical measurement study included 35 patients with CTS (55 hands) with a mean age of 45 ± 12 years. The outcome measures were the SWMT and SNCS measures of distal latency (DLs), amplitude (AMPs), and nerve conduction velocity (NCV). The median-innervated fingers were tested using the SWMT and electrodiagnostic tests. The primary outcome was the correlations between the SWMTs and NCS measures. All of the patients/hands had abnormal NCS findings. When looking at the three digits of interest (thumb, index and middle), the thumb SWMTs had the highest number of abnormal findings (58.2%), with the middle digit having the lowest (45.5%). All NCS findings were statistically different between abnormal and normal thumb SWMTs and abnormal and normal total summed SWMTs. There were significant moderate correlations between thumb SWMT scores and all NCS outcomes. Although only approximately 50% of CTS cases diagnosed through NCS were corroborated by the SWMT, the significant associations between SWMT and NCS measures suggest that the SWMT is a valid test for assessing sensation in patients with CTS.

  4. The influence of control group reproduction on the statistical ...

    EPA Pesticide Factsheets

    Because of various Congressional mandates to protect the environment from endocrine disrupting chemicals (EDCs), the United States Environmental Protection Agency (USEPA) initiated the Endocrine Disruptor Screening Program. In the context of this framework, the Office of Research and Development within the USEPA developed the Medaka Extended One Generation Reproduction Test (MEOGRT) to characterize the endocrine action of a suspected EDC. One important endpoint of the MEOGRT is fecundity of breeding pairs of medaka. Power analyses were conducted to determine the number of replicates needed in proposed test designs and to determine the effects that varying reproductive parameters (e.g. mean fecundity, variance, and days with no egg production) will have on the statistical power of the test. A software tool, the MEOGRT Reproduction Power Analysis Tool, was developed to expedite these power analyses by both calculating estimates of the needed reproductive parameters (e.g. population mean and variance) and performing the power analysis under user specified scenarios. The manuscript illustrates how the reproductive performance of the control medaka that are used in a MEOGRT influences statistical power, and therefore the successful implementation of the protocol. Example scenarios, based upon medaka reproduction data collected at MED, are discussed that bolster the recommendation that facilities planning to implement the MEOGRT should have a culture of medaka with hi
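
    A generic, simulation-based version of such a power analysis is sketched below for comparing fecundity counts between control and treated replicates. It is not the MEOGRT Reproduction Power Analysis Tool; the negative-binomial parameters, effect size and test choice (Mann-Whitney) are illustrative assumptions only.

        # Generic simulation-based power estimate for detecting a reduction in mean
        # fecundity between control and treated replicates (hypothetical parameter values).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        n_reps = 12              # breeding pairs per group
        mu_control = 20.0        # mean eggs/day in controls
        effect = 0.75            # treated mean = 75% of control
        dispersion = 5.0         # negative-binomial dispersion ("size") parameter
        alpha, n_sim = 0.05, 2000

        def nb_sample(mean, size, n):
            # Parameterize numpy's negative binomial by mean and dispersion ("size").
            p = size / (size + mean)
            return rng.negative_binomial(size, p, n)

        hits = 0
        for _ in range(n_sim):
            control = nb_sample(mu_control, dispersion, n_reps)
            treated = nb_sample(mu_control * effect, dispersion, n_reps)
            if stats.mannwhitneyu(control, treated, alternative="two-sided").pvalue < alpha:
                hits += 1
        print(f"estimated power with {n_reps} replicates per group: {hits / n_sim:.2f}")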

  5. Statistical evaluation of metal fill widths for emulated metal fill in parasitic extraction methodology

    NASA Astrophysics Data System (ADS)

    J-Me, Teh; Noh, Norlaili Mohd.; Aziz, Zalina Abdul

    2015-05-01

    In the chip industry today, the key goal of a chip development organization is to develop and market chips within a short time frame to gain foothold on market share. This paper proposes a design flow around the area of parasitic extraction to improve the design cycle time. The proposed design flow utilizes the usage of metal fill emulation as opposed to the current flow which performs metal fill insertion directly. By replacing metal fill structures with an emulation methodology in earlier iterations of the design flow, this is targeted to help reduce runtime in fill insertion stage. Statistical design of experiments methodology utilizing the randomized complete block design was used to select an appropriate emulated metal fill width to improve emulation accuracy. The experiment was conducted on test cases of different sizes, ranging from 1000 gates to 21000 gates. The metal width was varied from 1 x minimum metal width to 6 x minimum metal width. Two-way analysis of variance and Fisher's least significant difference test were used to analyze the interconnect net capacitance values of the different test cases. This paper presents the results of the statistical analysis for the 45 nm process technology. The recommended emulated metal fill width was found to be 4 x the minimum metal width.
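
    The randomized complete block analysis described above can be sketched as a two-way ANOVA with metal fill width as the treatment and test case as the block, as below; the capacitance values are synthetic and the pandas/statsmodels code is a generic illustration, not the study's flow. Fisher's least significant difference comparisons would follow as pairwise t-tests using the ANOVA error term.

        # Two-way (randomized complete block) ANOVA sketch: metal-fill width as treatment,
        # test case as block, with synthetic capacitance values.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        widths = [1, 2, 3, 4, 5, 6]              # multiples of the minimum metal width
        blocks = [f"tc_{k}" for k in range(6)]   # test cases of different sizes
        rows = []
        for b_i, b in enumerate(blocks):
            for w in widths:
                cap = 10 + 0.3 * b_i - 0.1 * abs(w - 4) + rng.normal(scale=0.05)
                rows.append({"width": w, "block": b, "cap": cap})
        df = pd.DataFrame(rows)

        model = smf.ols("cap ~ C(width) + C(block)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))   # F tests for width and block effects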

  6. Learning Scene Categories from High Resolution Satellite Image for Aerial Video Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M

    2011-01-01

    Automatic scene categorization can benefit various aerial video processing applications. This paper addresses the problem of predicting the scene category from aerial video frames using a prior model learned from satellite imagery. We show that local and global features in the form of line statistics and 2-D power spectrum parameters respectively can characterize the aerial scene well. The line feature statistics and spatial frequency parameters are useful cues to distinguish between different urban scene categories. We learn the scene prediction model from high-resolution satellite imagery to test the model on the Columbus Surrogate Unmanned Aerial Vehicle (CSUAV) dataset collected by a high-altitude wide area UAV sensor platform. We compare the proposed features with the popular Scale Invariant Feature Transform (SIFT) features. Our experimental results show that the proposed approach outperforms the SIFT model when the training and testing are conducted on disparate data sources.

  7. The effect of group bibliotherapy on the self-esteem of female students living in dormitory.

    PubMed

    Salimi, Sepideh; Zare-Farashbandi, Firoozeh; Papi, Ahmad; Samouei, Rahele; Hassanzadeh, Akbar

    2014-01-01

    Bibliotherapy is a supplementary, simple, inexpensive and readily available treatment method performed in cooperation between librarians and psychologists or doctors. The aim of this study was to investigate the effect of group bibliotherapy on the self-esteem of female students of Isfahan University of Medical Sciences living in dormitories in 2012. The present study was an interventional semi-experimental study with pre-test, post-test and a control group. The statistical population consisted of 32 female students residing in Isfahan University of Medical Sciences dormitories, who were divided randomly into case and control groups. Data were collected using the Coopersmith Self-Esteem Inventory (Cronbach's alpha: 0.85). Both groups completed the questionnaire as a pre-test. The case group received group bibliotherapy for 2 months (8 sessions of 2 hours each), while the control group received no training. Both groups were then assessed in a post-test after 1 month. Descriptive statistics (means and frequency distributions) and inferential statistics (independent t-test, paired t-test and Mann-Whitney test) were used, and data were analyzed with SPSS 20 software. The findings showed that group bibliotherapy had a positive and significant effect on the general, family, professional and total self-esteem of female students living in dormitories, but had no effect on their social self-esteem. Group bibliotherapy can increase female students' self-esteem. Moreover, conducting such programs can not only improve mental health but also improve reading habits.

  8. Limited-information goodness-of-fit testing of diagnostic classification item response models.

    PubMed

    Hansen, Mark; Cai, Li; Monroe, Scott; Li, Zhen

    2016-11-01

    Despite the growing popularity of diagnostic classification models (e.g., Rupp et al., 2010, Diagnostic measurement: theory, methods, and applications, Guilford Press, New York, NY) in educational and psychological measurement, methods for testing their absolute goodness of fit to real data remain relatively underdeveloped. For tests of reasonable length and for realistic sample size, full-information test statistics such as Pearson's X² and the likelihood ratio statistic G² suffer from sparseness in the underlying contingency table from which they are computed. Recently, limited-information fit statistics such as Maydeu-Olivares and Joe's (2006, Psychometrika, 71, 713) M2 have been found to be quite useful in testing the overall goodness of fit of item response theory models. In this study, we applied Maydeu-Olivares and Joe's (2006, Psychometrika, 71, 713) M2 statistic to diagnostic classification models. Through a series of simulation studies, we found that M2 is well calibrated across a wide range of diagnostic model structures and was sensitive to certain misspecifications of the item model (e.g., fitting disjunctive models to data generated according to a conjunctive model), errors in the Q-matrix (adding or omitting paths, omitting a latent variable), and violations of local item independence due to unmodelled testlet effects. On the other hand, M2 was largely insensitive to misspecifications in the distribution of higher-order latent dimensions and to the specification of an extraneous attribute. To complement the analyses of the overall model goodness of fit using M2, we investigated the utility of the Chen and Thissen (1997, J. Educ. Behav. Stat., 22, 265) local dependence statistic X²LD for characterizing sources of misfit, an important aspect of model appraisal often overlooked in favour of overall statements. The X²LD statistic was found to be slightly conservative (with Type I error rates consistently below the nominal level) but still useful in pinpointing the sources of misfit. Patterns of local dependence arising due to specific model misspecifications are illustrated. Finally, we used the M2 and X²LD statistics to evaluate a diagnostic model fit to data from the Trends in Mathematics and Science Study, drawing upon analyses previously conducted by Lee et al. (2011, IJT, 11, 144). © 2016 The British Psychological Society.

  9. Reliability of a rating procedure to monitor industry self-regulation codes governing alcohol advertising content.

    PubMed

    Babor, Thomas F; Xuan, Ziming; Proctor, Dwayne

    2008-03-01

    The purposes of this study were to develop reliable procedures to monitor the content of alcohol advertisements broadcast on television and in other media, and to detect violations of the content guidelines of the alcohol industry's self-regulation codes. A set of rating-scale items was developed to measure the content guidelines of the 1997 version of the U.S. Beer Institute Code. Six focus groups were conducted with 60 college students to evaluate the face validity of the items and the feasibility of the procedure. A test-retest reliability study was then conducted with 74 participants, who rated five alcohol advertisements on two occasions separated by 1 week. Average correlations across all advertisements using three reliability statistics (r, rho, and kappa) were almost all statistically significant and the kappas were good for most items, which indicated high test-retest agreement. We also found high interrater reliabilities (intraclass correlations) among raters for item-level and guideline-level violations, indicating that regardless of the specific item, raters were consistent in their general evaluations of the advertisements. Naïve (untrained) raters can provide consistent (reliable) ratings of the main content guidelines proposed in the U.S. Beer Institute Code. The rating procedure may have future applications for monitoring compliance with industry self-regulation codes and for conducting research on the ways in which alcohol advertisements are perceived by young adults and other vulnerable populations.

  10. Contribution of Vestibular-Evoked Myogenic Potential (VEMP) testing in the assessment and the differential diagnosis of otosclerosis

    PubMed Central

    Tramontani, Ourania; Gkoritsa, Eleni; Ferekidis, Eleftherios; Korres, Stavros G.

    2014-01-01

    Background The aim of this prospective clinical study was to evaluate the clinical importance of Vestibular-Evoked Myogenic Potentials (VEMPs) in the assessment and differential diagnosis of otosclerosis and otologic diseases characterized by “pseudo-conductive” components. We also investigated the clinical appearance of balance disorders in patients with otosclerosis by correlating VEMP results with the findings of caloric testing and pure tone audiometry (PTA). Material/Methods Air-conducted (AC) 4-PTA, bone-conducted (BC) 4-PTA, air-bone gap (ABG), AC and BC tone burst evoked VEMPs, and calorics were measured preoperatively in 126 otosclerotic ears. Results The response rate of the AC-VEMPs and BC-VEMPs was 29.36% and 44.03%, respectively. Statistical differences were found between the means of ABG, AC 4-PTA, and BC 4-PTA in the otosclerotic ears in relation to AC-VEMP elicitability. About one-third of patients presented with disequilibrium. A statistically significant interaction was found between calorics and dizziness in relation to PTA thresholds. No relationship was found between calorics and dizziness with VEMP responses. Conclusions AC and BC VEMPs can be elicited in ears with otosclerosis. AC-VEMP is more vulnerable to conductive hearing loss. Evaluation of AC-VEMP thresholds can be added to the diagnostic work-up of otosclerosis in case of doubt, enhancing differential diagnosis in patients with air-bone gaps. Otosclerosis is not a cause of canal paresis or vertigo. PMID:24509900

  11. Prevalence and Predictors of Use of Home Sphygmomanometers Among Hypertensive Patients.

    PubMed

    Zahid, Hira; Amin, Aisha; Amin, Emaan; Waheed, Summaiya; Asad, Ameema; Faheem, Ariba; Jawaid, Samreen; Afzal, Adila; Misbah, Sarah; Majid, Kanza

    2017-04-11

    Few studies have looked at the predictors of use of home sphygmomanometers among hypertensive patients in low-income countries such as Pakistan. Considering the importance of home blood pressure monitoring (HBPM), a cross-sectional study was conducted to evaluate the prevalence and predictors of the use of all kinds of HBPM devices. This study was conducted in Karachi during January-February 2017. Adult patients previously diagnosed with hypertension visiting tertiary care hospitals were selected for the study. Interviews were conducted after verbal consent, using a pre-coded questionnaire. The data were analyzed using the Statistical Package for the Social Sciences v. 23.0 (SPSS, IBM Corporation, NY, USA). The chi-squared test was applied as the primary statistical test. More than half of the participants used a home sphygmomanometer (n=250, 61.7%). Age, level of education, family history of hypertension, compliance with medication, and blood pressure (BP) monitoring a few times a month at clinics were significant determinants of HBPM (P values < 0.001). More individuals owned a digital sphygmomanometer (n=128, 51.3%) than a manual type (n=122, 48.8%). Moreover, avoiding BP measurement in a noisy environment was the most common precaution taken (n=117, 46.8%). The study showed that around 40% of the hypertensive individuals did not own a sphygmomanometer and less than 25% performed HBPM regularly. Raising general awareness through healthcare professionals is a possible factor that could increase HBPM.
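
    The chi-squared test used as the primary statistical test above can be illustrated on a hypothetical 2x2 contingency table; the counts below are invented and do not come from the study.

        # Chi-squared test of independence on a hypothetical 2x2 table
        # (education level vs. ownership of a home BP monitor); counts are illustrative.
        from scipy.stats import chi2_contingency

        table = [[90, 40],    # higher education: owns / does not own
                 [60, 115]]   # lower education:  owns / does not own
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")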

  12. THE MEASUREMENT OF BONE QUALITY USING GRAY LEVEL CO-OCCURRENCE MATRIX TEXTURAL FEATURES.

    PubMed

    Shirvaikar, Mukul; Huang, Ning; Dong, Xuanliang Neil

    2016-10-01

    In this paper, statistical methods for the estimation of bone quality to predict the risk of fracture are reported. Bone mineral density and bone architecture properties are the main contributors to bone quality. Dual-energy X-ray Absorptiometry (DXA) is the traditional clinical measurement technique for bone mineral density, but does not include architectural information to enhance the prediction of bone fragility. Other modalities are not practical due to cost and access considerations. This study investigates statistical parameters based on the Gray Level Co-occurrence Matrix (GLCM) extracted from two-dimensional projection images and explores links with architectural properties and bone mechanics. Data analysis was conducted on Micro-CT images of 13 trabecular bones (with an in-plane spatial resolution of about 50μm). Ground truth data for bone volume fraction (BV/TV), bone strength and modulus were available based on complex 3D analysis and mechanical tests. Correlation between the statistical parameters and biomechanical test results was studied using regression analysis. The results showed Cluster-Shade was strongly correlated with the microarchitecture of the trabecular bone and related to mechanical properties. Once the principal thesis of utilizing second-order statistics is established, it can be extended to other modalities, providing cost and convenience advantages for patients and doctors.
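
    A gray level co-occurrence matrix and the Cluster-Shade feature highlighted above can be computed directly with NumPy, as in the minimal sketch below (single pixel offset, 8 gray levels, synthetic image); this is illustrative only and not the study's Micro-CT pipeline.

        # Minimal GLCM + cluster-shade sketch for a 2-D grayscale array
        # (single offset: one pixel to the right, then symmetrized).
        import numpy as np

        def glcm(img, levels=8):
            """Normalized gray-level co-occurrence matrix for the offset (0, +1)."""
            q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
            m = np.zeros((levels, levels))
            for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
                m[i, j] += 1
            m += m.T                      # make symmetric
            return m / m.sum()

        def cluster_shade(p):
            levels = p.shape[0]
            i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
            mu_i = (i * p).sum()
            mu_j = (j * p).sum()
            return (((i + j - mu_i - mu_j) ** 3) * p).sum()

        rng = np.random.default_rng(6)
        image = rng.integers(0, 256, size=(64, 64))   # stand-in for a projection image
        print("cluster shade:", cluster_shade(glcm(image)))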

  13. THE MEASUREMENT OF BONE QUALITY USING GRAY LEVEL CO-OCCURRENCE MATRIX TEXTURAL FEATURES

    PubMed Central

    Shirvaikar, Mukul; Huang, Ning; Dong, Xuanliang Neil

    2016-01-01

    In this paper, statistical methods for the estimation of bone quality to predict the risk of fracture are reported. Bone mineral density and bone architecture properties are the main contributors to bone quality. Dual-energy X-ray Absorptiometry (DXA) is the traditional clinical measurement technique for bone mineral density, but does not include architectural information to enhance the prediction of bone fragility. Other modalities are not practical due to cost and access considerations. This study investigates statistical parameters based on the Gray Level Co-occurrence Matrix (GLCM) extracted from two-dimensional projection images and explores links with architectural properties and bone mechanics. Data analysis was conducted on Micro-CT images of 13 trabecular bones (with an in-plane spatial resolution of about 50μm). Ground truth data for bone volume fraction (BV/TV), bone strength and modulus were available based on complex 3D analysis and mechanical tests. Correlation between the statistical parameters and biomechanical test results was studied using regression analysis. The results showed Cluster-Shade was strongly correlated with the microarchitecture of the trabecular bone and related to mechanical properties. Once the principal thesis of utilizing second-order statistics is established, it can be extended to other modalities, providing cost and convenience advantages for patients and doctors. PMID:28042512

  14. Random sampling of constrained phylogenies: conducting phylogenetic analyses when the phylogeny is partially known.

    PubMed

    Housworth, E A; Martins, E P

    2001-01-01

    Statistical randomization tests in evolutionary biology often require a set of random, computer-generated trees. For example, earlier studies have shown how large numbers of computer-generated trees can be used to conduct phylogenetic comparative analyses even when the phylogeny is uncertain or unknown. These methods were limited, however, in that (in the absence of molecular sequence or other data) they allowed users to assume that no phylogenetic information was available or that all possible trees were known. Intermediate situations where only a taxonomy or other limited phylogenetic information (e.g., polytomies) are available are technically more difficult. The current study describes a procedure for generating random samples of phylogenies while incorporating limited phylogenetic information (e.g., four taxa belong together in a subclade). The procedure can be used to conduct comparative analyses when the phylogeny is only partially resolved or can be used in other randomization tests in which large numbers of possible phylogenies are needed.
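
    A toy version of the idea, drawing a random bifurcating topology while forcing a specified set of taxa to remain monophyletic, is sketched below; it is not the authors' algorithm and makes no claim of sampling uniformly over the constrained tree space.

        # Toy sketch: draw a random bifurcating topology (as nested tuples) while
        # keeping a prescribed group of taxa together as a clade.
        import random

        def random_join(items, rng):
            """Repeatedly join two random subtrees until a single rooted tree remains."""
            items = list(items)
            while len(items) > 1:
                a, b = rng.sample(range(len(items)), 2)
                a, b = sorted((a, b), reverse=True)
                right, left = items.pop(a), items.pop(b)
                items.append((left, right))
            return items[0]

        def random_constrained_tree(taxa, clade, rng=random):
            """Random tree over `taxa` in which the taxa in `clade` form a monophyletic group."""
            subtree = random_join(clade, rng)                  # resolve the known clade first
            rest = [t for t in taxa if t not in clade]
            return random_join(rest + [subtree], rng)

        taxa = ["A", "B", "C", "D", "E", "F"]
        print(random_constrained_tree(taxa, clade=["A", "B", "C"]))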

  15. Prediction of thermal conductivity of polyvinylpyrrolidone (PVP) electrospun nanocomposite fibers using artificial neural network and prey-predator algorithm.

    PubMed

    Khan, Waseem S; Hamadneh, Nawaf N; Khan, Waqar A

    2017-01-01

    In this study, a multilayer perceptron neural network (MLPNN) was employed to predict the thermal conductivity of PVP electrospun nanocomposite fibers with multiwalled carbon nanotubes (MWCNTs) and Nickel Zinc ferrites [(Ni0.6Zn0.4) Fe2O4]. This is the second attempt at applying an MLPNN with the prey-predator algorithm to the prediction of thermal conductivity of PVP electrospun nanocomposite fibers. The prey-predator algorithm was used to train the neural networks to find the best models, i.e., those with the minimum sum of squared errors between the experimental testing data and the corresponding model outputs. The minimum error was found to be 0.0028 for the MWCNTs model and 0.00199 for the Ni-Zn ferrites model. The predicted artificial neural network (ANN) responses were analyzed statistically using the z-test, the correlation coefficient, and error functions for both inclusions. The predicted ANN responses for PVP electrospun nanocomposite fibers were compared with the experimental data and were found to be in good agreement.
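
    A small regression MLP of the kind described can be fitted with standard gradient-based training, as in the sketch below using scikit-learn on synthetic tabular data; this is not the prey-predator training scheme used in the study, and the inputs and target are invented.

        # Small MLP regression sketch on synthetic tabular data, trained with
        # scikit-learn's gradient-based optimizer (not the prey-predator algorithm).
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(7)
        X = rng.uniform(0, 1, size=(200, 2))                 # e.g., filler loading, fiber diameter (hypothetical)
        y = 0.25 + 0.8 * X[:, 0] + 0.3 * X[:, 0] * X[:, 1] + rng.normal(0, 0.02, 200)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
        model.fit(X_tr, y_tr)

        pred = model.predict(X_te)
        sse = float(((pred - y_te) ** 2).sum())              # sum of squared errors, as reported above
        print(f"test SSE = {sse:.4f}, R^2 = {model.score(X_te, y_te):.3f}")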

  16. Effect of streptokinase on reperfusion after acute myocardial infarction and its complications: an ex-post facto study.

    PubMed

    Taheri, Leila; Boroujeni, Ali Zargham; Kargar Jahromi, Marzieh; Charkhandaz, Maryam; Hojat, Mohsen

    2015-01-01

    Emergency treatment of patients with acute myocardial infarction is very important. In Iran, streptokinase is often used as the only clot-busting medication. The purpose of using streptokinase is to revive ischemic heart tissue, although it also has dangerous complications. Therefore, the present study was designed and conducted to determine the effect of streptokinase on reperfusion after acute myocardial infarction and its complications. This is an ex-post facto study. The study population included patients who suffered acute myocardial infarction. The sample size was 300 patients, and the 2 groups were matched on age, sex, underlying disease, and frequency and area of MI. Data were collected using a researcher-made questionnaire whose face and content validity were accepted by 10 expert researchers; reliability was established with Spearman's test (r=0.85) using the test-retest method. Data were analyzed with SPSS software version 12. Mean EF in the SK group was 46.15±8.11 and in the control group was 43.11±12.57. A significant relationship was seen between SK, arrhythmia occurrence, and improved EF reperfusion by chi-square test (p=0.028, p=0.020). The most common arrhythmia in the SK group was ventricular tachycardia (20.7%). A statistically significant relation between SK and mortality was found by chi-square test (p=0.001), but no statistically significant relation was found between SK and the incidence of pulmonary edema (p=0.071). Nurses in the CCU should be aware of SK complications such as hypotension, bleeding and arrhythmias. It is proposed to compare SK and tissue plasminogen activator with respect to reperfusion and complications.

  17. Biostatistics Series Module 2: Overview of Hypothesis Testing.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P is < 0.05 the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote oneself to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two or more than two) at a time. The same research question may be explored by more than one type of hypothesis test. While this may be of utility in highlighting different aspects of the problem, merely reapplying different tests to the same issue in the hope of finding a P < 0.05 is a wrong use of statistics. Finally, it is becoming the norm that an estimate of the size of any effect, expressed with its 95% confidence interval, is required for meaningful interpretation of results. A large study is likely to have a small (and therefore "statistically significant") P value, but a "real" estimate of the effect would be provided by the 95% confidence interval. If the intervals overlap between two interventions, then the difference between them is not so clear-cut even if P < 0.05. The two approaches are now considered complementary to one another.
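
    The complementary use of a P value and a 95% confidence interval recommended above can be shown with a short two-sample example on simulated data; the group means, standard deviations and sizes are arbitrary.

        # Two-sample t test plus a 95% confidence interval for the mean difference,
        # illustrating the P-value / CI pairing recommended above. Data are simulated.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        group_a = rng.normal(10.0, 2.0, 40)
        group_b = rng.normal(11.2, 2.0, 40)

        t, p = stats.ttest_ind(group_a, group_b)             # equal-variance t test
        diff = group_b.mean() - group_a.mean()
        n1, n2 = len(group_a), len(group_b)
        sp2 = ((n1 - 1) * group_a.var(ddof=1) + (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2)
        se = np.sqrt(sp2 * (1 / n1 + 1 / n2))                # pooled standard error
        dof = n1 + n2 - 2
        ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, dof) * se

        print(f"P = {p:.4f}; mean difference = {diff:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")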

  18. Biostatistics Series Module 2: Overview of Hypothesis Testing

    PubMed Central

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P is < 0.05 the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote oneself to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two or more than two) at a time. The same research question may be explored by more than one type of hypothesis test. While this may be of utility in highlighting different aspects of the problem, merely reapplying different tests to the same issue in the hope of finding a P < 0.05 is a wrong use of statistics. Finally, it is becoming the norm that an estimate of the size of any effect, expressed with its 95% confidence interval, is required for meaningful interpretation of results. A large study is likely to have a small (and therefore “statistically significant”) P value, but a “real” estimate of the effect would be provided by the 95% confidence interval. If the intervals overlap between two interventions, then the difference between them is not so clear-cut even if P < 0.05. The two approaches are now considered complementary to one another. PMID:27057011

  19. Steadiness of Spinal Regions during Single-Leg Standing in Older Adults with and without Chronic Low Back Pain

    PubMed Central

    Kuo, Yi-Liang; Huang, Kuo-Yuan; Chiang, Pei-Tzu; Lee, Pei-Yun; Tsai, Yi-Ju

    2015-01-01

    The aims of this study were to compare the steadiness index of spinal regions during single-leg standing in older adults with and without chronic low back pain (LBP) and to correlate measurements of steadiness index with the performance of clinical balance tests. Thirteen community-dwelling older adults (aged 55 years or above) with chronic LBP and 13 age- and gender-matched asymptomatic volunteers participated in this study. Data collection was conducted in a university research laboratory. Measurements were steadiness index of spinal regions (trunk, thoracic spine, lumbar spine, and pelvis) during single-leg standing including relative holding time (RHT) and relative standstill time (RST), and clinical balance tests (timed up and go test and 5-repetition sit to stand test). The LBP group had a statistically significantly smaller RHT than the control group, regardless of one leg stance on the painful or non-painful sides. The RSTs on the painful side leg in the LBP group were not statistically significantly different from the average RSTs of both legs in the control group; however, the RSTs on the non-painful side leg in the LBP group were statistically significantly smaller than those in the control group for the trunk, thoracic spine, and lumbar spine. No statistically significant intra-group differences were found in the RHTs and RSTs between the painful and non-painful side legs in the LBP group. Measurements of clinical balance tests also showed insignificant weak to moderate correlations with steadiness index. In conclusion, older adults with chronic LBP demonstrated decreased spinal steadiness not only in the symptomatic lumbar spine but also in the other spinal regions within the kinetic chain of the spine. When treating older adults with chronic LBP, clinicians may also need to examine their balance performance and spinal steadiness during balance challenging tests. PMID:26024534

  20. Fundamentals of Research Data and Variables: The Devil Is in the Details.

    PubMed

    Vetter, Thomas R

    2017-10-01

    Designing, conducting, analyzing, reporting, and interpreting the findings of a research study require an understanding of the types and characteristics of data and variables. Descriptive statistics are typically used simply to calculate, describe, and summarize the collected research data in a logical, meaningful, and efficient way. Inferential statistics allow researchers to make a valid estimate of the association between an intervention and the treatment effect in a specific population, based upon their randomly collected, representative sample data. Categorical data can be either dichotomous or polytomous. Dichotomous data have only 2 categories, and thus are considered binary. Polytomous data have more than 2 categories. Unlike dichotomous and polytomous data, ordinal data are rank ordered, typically based on a numerical scale that is composed of a small set of discrete classes or integers. Continuous data are measured on a continuum and can have any numeric value over this continuous range. Continuous data can be meaningfully divided into smaller and smaller or finer and finer increments, depending upon the precision of the measurement instrument. Interval data are a form of continuous data in which equal intervals represent equal differences in the property being measured. Ratio data are another form of continuous data, which have the same properties as interval data, plus a true definition of an absolute zero point, and the ratios of the values on the measurement scale make sense. The normal (Gaussian) distribution ("bell-shaped curve") is one of the most common statistical distributions. Many applied inferential statistical tests are predicated on the assumption that the analyzed data follow a normal distribution. The histogram and the Q-Q plot are 2 graphical methods to assess whether a set of data has a normal distribution (displays "normality"). The Shapiro-Wilk test and the Kolmogorov-Smirnov test are 2 well-known and historically widely applied quantitative methods to assess for data normality. Parametric statistical tests make certain assumptions about the characteristics and/or parameters of the underlying population distribution upon which the test is based, whereas nonparametric tests make fewer or less rigorous assumptions. If the normality test concludes that the study data deviate significantly from a Gaussian distribution, rather than applying a less robust nonparametric test, the problem can potentially be remedied by judiciously and openly: (1) performing a data transformation of all the data values; or (2) eliminating any obvious data outlier(s).
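
    The normality checks and the data-transformation remedy mentioned here can be sketched in a few lines. The simulated right-skewed sample, the specific SciPy functions (shapiro, kstest), and the log transformation are assumptions chosen for illustration rather than a prescription from the article.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    skewed = rng.lognormal(mean=1.0, sigma=0.8, size=200)  # hypothetical right-skewed data

    def normality_report(x, label):
        w, p_sw = stats.shapiro(x)  # Shapiro-Wilk test
        # Kolmogorov-Smirnov test against a normal fitted to the sample; because the
        # parameters are estimated from the data, the printed P value is only approximate.
        d, p_ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
        print(f"{label}: Shapiro-Wilk W = {w:.3f} (P = {p_sw:.4f}), KS D = {d:.3f} (P = {p_ks:.4f})")

    normality_report(skewed, "raw data")
    normality_report(np.log(skewed), "log-transformed data")
    ```

    A histogram or Q-Q plot of the raw and transformed samples (e.g., with matplotlib) would complement these tests, as the abstract notes.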

  1. [A retrospective study on the assessment of dysphagia after partial laryngectomy].

    PubMed

    Su, T T; Sun, Z F

    2017-11-07

    Objective: To retrospectively investigate the long-term swallowing function of patients with laryngeal carcinoma who underwent partial laryngectomy, to discuss the effectiveness and reliability of the Kubota drinking test in the assessment of dysphagia after partial laryngectomy, and to analyze the influence of different surgical procedures on swallowing function. Methods: Clinical data were retrospectively analyzed for 83 patients with laryngeal carcinoma who underwent partial laryngectomy between September 2012 and August 2015. A questionnaire survey, the Kubota drinking test and a video fluoroscopic swallowing study (VFSS) were conducted for each patient during a scheduled interview. Patients were grouped in two ways: by whether the epiglottis was retained, and by whether one or both arytenoids were preserved. The influence of the different surgical techniques on swallowing function was analyzed according to the results of the Kubota drinking test. The agreement and reliability of the Kubota drinking test were statistically analyzed with VFSS treated as the gold standard. SPSS 23.0 software was used to analyze the data. Results: The questionnaire revealed that, among the 83 patients who underwent partial laryngectomy, 32.53% suffered from eating difficulties and 43.37% experienced painful swallowing. The incidence of dysphagia was 40.96% according to the results of the Kubota drinking test. There was a statistically significant difference between the group in which the epiglottis was retained and the group in which it was removed with respect to the absence of dysphagia and its severity; the test statistics for normal, moderate and severe dysphagia were 18.160, 7.229 and 12.344, respectively (P < 0.05). A statistically significant difference also existed between the groups with one versus both arytenoids preserved with respect to the absence of dysphagia and dysphagia of intermediate severity, with test statistics of 4.790 and 9.110 (P < 0.05). A certain degree of agreement and reliability was present between the results of the Kubota drinking test and VFSS (Kappa = 0.551, r = 0.810). Conclusions: Preserving the epiglottis and arytenoids is of considerable significance for the retention of swallowing function after partial laryngectomy. There is a certain degree of agreement and reliability between the results of the Kubota drinking test and VFSS; the test could therefore be used as a tool for screening patients suffering from dysphagia after partial laryngectomy.
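
    The agreement coefficient reported above (Kappa = 0.551) is Cohen's kappa for two paired classifications. A minimal sketch of how such a coefficient can be computed from paired screening results is shown below; the binary outcomes are invented for illustration and do not reproduce the study data.

    ```python
    import numpy as np

    # Hypothetical paired results: 1 = dysphagia detected, 0 = not detected.
    kubota = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0])
    vfss   = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0])

    def cohens_kappa(a, b):
        """Cohen's kappa: chance-corrected agreement between two binary tests."""
        categories = np.union1d(a, b)
        observed = np.mean(a == b)                          # observed agreement
        expected = sum(np.mean(a == c) * np.mean(b == c)    # agreement expected by chance
                       for c in categories)
        return (observed - expected) / (1.0 - expected)

    print(f"kappa = {cohens_kappa(kubota, vfss):.3f}")
    ```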

  2. Evaluation of Resilient Modulus of Subgrade and Base Materials in Indiana and Its Implementation in MEPDG

    PubMed Central

    Siddiki, Nayyarzia; Nantung, Tommy; Kim, Daehyeon

    2014-01-01

    In order to implement MEPDG hierarchical inputs for unbound and subgrade soil, a database containing subgrade MR, index properties, standard Proctor, and laboratory MR values for 140 undisturbed roadbed soil samples from six different districts in Indiana was created. The MR data were categorized in accordance with the AASHTO soil classifications and divided into several groups. For each group, statistical analyses were performed and evaluation datasets were developed to validate the stress-based prediction models. Stress-based regression models were evaluated using analysis of variance (ANOVA) and the Z-test, and the pertinent material constants (k1, k2 and k3) were determined for different soil types. Reasonably good correlations of the material constants, as well as MR, with routine soil properties were established. Furthermore, FWD tests were conducted on several Indiana highways in different seasons, and laboratory resilient modulus tests were performed on the subgrade soils that were collected from the falling weight deflectometer (FWD) test sites. A comparison was made of the resilient moduli obtained from the laboratory resilient modulus tests with those from the FWD tests. Correlations between the laboratory resilient modulus and the FWD modulus were developed and are discussed in this paper. PMID:24701162

  3. Effects of chocolate intake on Perceived Stress; a Controlled Clinical Study

    PubMed Central

    Al Sunni, Ahmed; Latif, Rabia

    2014-01-01

    Background Cocoa polyphenols have been shown to reduce stress in highly stressed as well as normal healthy individuals. We therefore wondered whether commercially available chocolate could reduce perceived stress in medical students, and conducted this study to find out. Methods Sixty students were divided into 3 groups (10 males + 10 females/group): i) Dark chocolate (DC) ii) Milk chocolate (MC) iii) White chocolate (WC). Subjects answered a PSS-10 (Perceived Stress Scale) questionnaire at baseline and after consumption of chocolate (40 g/day) for 2 weeks. Data were analyzed by using Microsoft Excel and SPSS version 20. Descriptive analyses were conducted. Means were compared across the study groups by one-way ANOVA and within the same group by the paired t-test. Results Comparison of mean stress scores between the groups by ANOVA revealed no statistically significant differences before (F = 0.505; P = 0.606) or after chocolate consumption (F = 0.188; P = 0.829). Paired t-tests comparing mean stress scores before and after chocolate supplementation within each group showed a statistically significant decrease in the DC (t = 2.341; P = 0.03) and MC (t = 3.302; P = 0.004) groups. Mean stress scores decreased, on average, by approximately 2 and 3 points in the DC and MC groups, respectively (95% confidence intervals). The difference was more evident, and statistically significant, in female students as compared to the males. Conclusion Consumption of 40 g of dark or milk chocolate daily for a period of 2 weeks appears to be an effective way to reduce perceived stress in females. PMID:25780358

  4. Effects of chocolate intake on Perceived Stress; a Controlled Clinical Study.

    PubMed

    Al Sunni, Ahmed; Latif, Rabia

    2014-10-01

    Cocoa polyphenols have been shown to reduce stress in highly stressed as well as normal healthy individuals. We therefore wondered whether commercially available chocolate could reduce perceived stress in medical students, and conducted this study to find out. Sixty students were divided into 3 groups (10 males + 10 females/group): i) Dark chocolate (DC) ii) Milk chocolate (MC) iii) White chocolate (WC). Subjects answered a PSS-10 (Perceived Stress Scale) questionnaire at baseline and after consumption of chocolate (40 g/day) for 2 weeks. Data were analyzed by using Microsoft Excel and SPSS version 20. Descriptive analyses were conducted. Means were compared across the study groups by one-way ANOVA and within the same group by the paired t-test. Comparison of mean stress scores between the groups by ANOVA revealed no statistically significant differences before (F = 0.505; P = 0.606) or after chocolate consumption (F = 0.188; P = 0.829). Paired t-tests comparing mean stress scores before and after chocolate supplementation within each group showed a statistically significant decrease in the DC (t = 2.341; P = 0.03) and MC (t = 3.302; P = 0.004) groups. Mean stress scores decreased, on average, by approximately 2 and 3 points in the DC and MC groups, respectively (95% confidence intervals). The difference was more evident, and statistically significant, in female students as compared to the males. Consumption of 40 g of dark or milk chocolate daily for a period of 2 weeks appears to be an effective way to reduce perceived stress in females.
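
    The two analyses described in this record, a one-way ANOVA across the chocolate groups and paired t-tests within each group, can be sketched as follows. The before/after scores are simulated (they are not the study data), and SciPy availability is assumed.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    groups = ["dark", "milk", "white"]

    # Hypothetical PSS-10 scores before and after 2 weeks, 20 subjects per group.
    before = {g: rng.normal(20, 5, 20) for g in groups}
    after = {
        "dark":  before["dark"] - rng.normal(2, 3, 20),   # assumed small decrease
        "milk":  before["milk"] - rng.normal(3, 3, 20),
        "white": before["white"] - rng.normal(0, 3, 20),  # assumed no change
    }

    # Between-group comparison of baseline scores (one-way ANOVA).
    f_stat, p_anova = stats.f_oneway(*(before[g] for g in groups))
    print(f"baseline ANOVA: F = {f_stat:.3f}, P = {p_anova:.3f}")

    # Within-group before/after comparison (paired t-test).
    for g in groups:
        t_stat, p_paired = stats.ttest_rel(before[g], after[g])
        print(f"{g}: paired t = {t_stat:.2f}, P = {p_paired:.4f}")
    ```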

  5. Effectiveness of Sealed Double-Ring Infiltrometers™ and effects of changes in atmospheric pressure on hydraulic conductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMullin, S.R.

    The Savannah River Site is currently evaluating some 40 hazardous and radioactive-waste sites for remediation. Among the remedial alternatives considered is closure using a kaolin clay cap. The hydraulic conductivity suggested by the US Environmental Protection Agency is 1.0 × 10^-7 cm/sec. One instrument to measure this value is the Sealed Double-Ring Infiltrometer™ (SDRI). Six SDRIs were recently installed on a kaolin test cap. Test results demonstrated uniform performance of these instruments. However, the test data showed as much as an order of magnitude of variation over time. This variation is attributed to both internal structural heterogeneity and variable external boundary conditions. The internal heterogeneity is caused by construction variability within a specified range of moisture and density. The external influences considered are temperature and barometric pressure. Temperature was discounted as a source of heterogeneity because of a lack of correlation with test data and a negligible impact from the range of variability. However, a direct correlation was found between changes in barometric pressure and hydraulic conductivity. This correlation is most pronounced when pressure changes occur over a short period of time. Additionally, this correlation is related to a single soil layer. When the wetting front passes into a more porous foundation layer, the correlation with pressure changes disappears. Conclusions are that the SDRI performs adequately, with good repeatability of results. The duration of the test is critical to assuring a statistically valid data set. Data spikes resulting from pressure changes should be identified, and professional judgment used to determine the representative hydraulic conductivity. Further evaluation is recommended to determine the impact of pressure change on the actual hydraulic conductivity.

  6. Is it possible to shorten examination time in posture control studies?

    PubMed

    Faraldo García, Ana; Soto Varela, Andrés; Santos Pérez, Sofía

    2015-01-01

    The sensory organization test (SOT) is the gold-standard test for the study of postural control with posturographic platforms. Three recordings of Conditions 3, 4, 5 and 6 are conducted in order to compute the arithmetic mean of the 3, with the extra testing time that this entails. The aim of this study was to determine whether a single record for each SOT condition would give us the same information as the arithmetic mean of the 3 recordings used until now. The sample comprised 100 healthy individuals who performed the sensory organisation test on the Smart Balance Master® Neurocom platform. For the statistical analysis we used the Wilcoxon test for nonparametric variables and the paired-samples Student's t-test for parametric variables (P<.05). When comparing the scores on the first record with the average of the 3 records, we found statistically significant differences for the 4 conditions (P<.05). Comparing the first record to the second record also yielded statistically significant differences in the 4 conditions (P<.05). Upon comparing the second record with the third, however, we found differences in only Condition 5, with the significance being borderline (P=.04). Finally, comparing the average of the first and second record with the average of the 3 records, we also found statistically significant differences for the 4 conditions (P<.05). Using only 1 or 2 records from each of the conditions on the SOT does not give us the same information as the arithmetic average of the 3 records used until now. Copyright © 2014 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  7. SWToolbox: A surface-water tool-box for statistical analysis of streamflow time series

    USGS Publications Warehouse

    Kiang, Julie E.; Flynn, Kate; Zhai, Tong; Hummel, Paul; Granato, Gregory

    2018-03-07

    This report is a user guide for the low-flow analysis methods provided with version 1.0 of the Surface Water Toolbox (SWToolbox) computer program. The software combines functionality from two software programs—U.S. Geological Survey (USGS) SWSTAT and U.S. Environmental Protection Agency (EPA) DFLOW. Both of these programs have been used primarily for computation of critical low-flow statistics. The main analysis methods are the computation of hydrologic frequency statistics such as the 7-day minimum flow that occurs on average only once every 10 years (7Q10), computation of design flows including biologically based flows, and computation of flow-duration curves and duration hydrographs. Other annual, monthly, and seasonal statistics can also be computed. The interface facilitates retrieval of streamflow discharge data from the USGS National Water Information System and outputs text reports for a record of the analysis. Tools for graphing data and screening tests are available to assist the analyst in conducting the analysis.
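
    As a rough illustration of the kind of low-flow statistic the toolbox computes, the sketch below estimates a 7Q10-style value from a synthetic daily-flow series: each year's minimum 7-day average flow is extracted, and the 10% empirical quantile of that annual series is taken. The synthetic data, the use of pandas, and the empirical-quantile shortcut (SWToolbox fits a frequency distribution instead) are all assumptions for illustration.

    ```python
    import numpy as np
    import pandas as pd

    # Synthetic 20-year daily streamflow record, for illustration only.
    rng = np.random.default_rng(3)
    dates = pd.date_range("2000-01-01", "2019-12-31", freq="D")
    flow = pd.Series(np.exp(rng.normal(2.0, 0.6, len(dates))), index=dates)

    # Annual series of minimum 7-day average flow.
    rolling7 = flow.rolling(window=7).mean()
    annual_min7 = rolling7.groupby(rolling7.index.year).min().dropna()

    # Crude 7Q10 estimate: the annual 7-day minimum flow with a 10% chance of
    # not being exceeded in a given year (empirical quantile).
    q7_10 = annual_min7.quantile(0.1)
    print(f"approximate 7Q10 = {q7_10:.2f} (same units as the input flows)")
    ```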

  8. The new statistics: why and how.

    PubMed

    Cumming, Geoff

    2014-01-01

    We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.
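
    In the estimation spirit advocated here, the sketch below reports a standardized effect size (Cohen's d) with a percentile-bootstrap 95% confidence interval rather than a bare P value. The data are simulated and the bootstrap settings (5000 resamples) are arbitrary choices for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    control = rng.normal(100, 15, 50)   # hypothetical scores
    treated = rng.normal(108, 15, 50)

    def cohens_d(a, b):
        """Standardized mean difference using the pooled standard deviation."""
        pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
                     / (len(a) + len(b) - 2)
        return (b.mean() - a.mean()) / np.sqrt(pooled_var)

    # Percentile bootstrap for the 95% confidence interval of d.
    boot = [cohens_d(rng.choice(control, len(control)), rng.choice(treated, len(treated)))
            for _ in range(5000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"d = {cohens_d(control, treated):.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
    ```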

  9. Studying primate cognition in a social setting to improve validity and welfare: a literature review highlighting successful approaches.

    PubMed

    Cronin, Katherine A; Jacobson, Sarah L; Bonnie, Kristin E; Hopper, Lydia M

    2017-01-01

    Studying animal cognition in a social setting is associated with practical and statistical challenges. However, conducting cognitive research without disturbing species-typical social groups can increase ecological validity, minimize distress, and improve animal welfare. Here, we review the existing literature on cognitive research run with primates in a social setting in order to determine how widespread such testing is and highlight approaches that may guide future research planning. Using Google Scholar to search the terms "primate" "cognition" "experiment" and "social group," we conducted a systematic literature search covering 16 years (2000-2015 inclusive). We then conducted two supplemental searches within each journal that contained a publication meeting our criteria in the original search, using the terms "primate" and "playback" in one search and the terms "primate" "cognition" and "social group" in the second. The results were used to assess how frequently nonhuman primate cognition has been studied in a social setting (>3 individuals), to gain perspective on the species and topics that have been studied, and to extract successful approaches for social testing. Our search revealed 248 unique publications in 43 journals encompassing 71 species. The absolute number of publications has increased over years, suggesting viable strategies for studying cognition in social settings. While a wide range of species were studied they were not equally represented, with 19% of the publications reporting data for chimpanzees. Field sites were the most common environment for experiments run in social groups of primates, accounting for more than half of the results. Approaches to mitigating the practical and statistical challenges were identified. This analysis has revealed that the study of primate cognition in a social setting is increasing and taking place across a range of environments. This literature review calls attention to examples that may provide valuable models for researchers wishing to overcome potential practical and statistical challenges to studying cognition in a social setting, ultimately increasing validity and improving the welfare of the primates we study.

  10. Audiological comparison between two different clips prostheses in stapes surgery.

    PubMed

    Potena, M; Portmann, D; Guindi, S

    2015-01-01

    To compare audiometric results and complications of stapes surgery with two different types of piston prosthesis, the Portmann Clip Piston (Medtronic) (PCP) and the Soft Clip Piston (Kurz) (SCP). The study was conducted on 64 patients who underwent primary stapedotomy from 2008 to 2011. We matched each case of stapedotomy with the PCP (Medtronic Xomed Inc. Portmann Clip Piston Stainless Steel/Fluoroplastic) to a case with the SCP (Heinz Kurz GmbH Medizintechnik Soft Piston Clip Titanium). Each group consisted of 32 patients, and patients in both groups were matched with respect to gender, age, bilateral or unilateral otosclerosis, otological symptoms (tinnitus, vertigo or dizziness), family history, operated side and the Portmann grading for otosclerosis. The length of the prosthesis used was reported. Post-operative complications such as tinnitus, vertigo, hearing loss and altered taste were documented. Each patient underwent a preoperative and a postoperative audiogram (follow-up at the second month after surgery). We used Student's t-test for statistical analysis. Statistical significance was set at P < 0.01. None of the patients experienced a post-operative hearing loss and none required later revision surgery. No statistically significant difference was found between the two populations regarding demographic data (age, sex, side, bilaterality, family history, stage and length of piston) or hearing level (P > 0.01) in air conduction, bone conduction and the air-bone gap (ABG). Postoperative complications did not differ significantly between the two groups. Both groups showed a significant improvement (P < 0.01) in post-operative air conduction, bone conduction and air-bone gap. There was no statistically significant difference (P > 0.01) between the post-operative hearing results (bone conduction, air conduction, air-bone gap) obtained with the two pistons. The mean ABG improvement was 16.63 dB in the SCP group and 20.59 dB in the PCP group. The titanium Soft Clip Piston (SCP) is a good alternative to the Portmann Clip Piston (PCP). Nevertheless, there are some differences in how the two pistons are surgically fixed in the correct position.

  11. 3D Simulation as a Learning Environment for Acquiring the Skill of Self-Management: An Experience Involving Spanish University Students of Education

    ERIC Educational Resources Information Center

    Cela-Ranilla, Jose María; Esteve-Gonzalez, Vanessa; Esteve-Mon, Francesc; Gisbert-Cervera, Merce

    2014-01-01

    In this study we analyze how 57 Spanish university students of Education developed a learning process in a virtual world by conducting activities that involved the skill of self-management. The learning experience comprised a serious game designed in a 3D simulation environment. Descriptive statistics and non-parametric tests were used in the…

  12. The impact of obesity on specific airway resistance and conductance among schoolchildren.

    PubMed

    Parraguez Arévalo, Andrea; Rojas Navarro, Francisco; Ruz Céspedes, Macarena; Medina González, Paul; Escobar Cabello, Máximo; Muñoz Cofré, Rodrigo

    2018-04-01

    Child and adolescent obesity is an epidemiological problem in developing countries. Its prevalence among preschoolers and schoolchildren is over 30%. It has been associated with a wide range of health complications, including rapid loss of lung function leading to changes in physiology and ventilatory mechanics. The objective of this study was to analyze the association between obesity and the increase in specific airway resistance (sRaw) in a sample of obese children and adolescents from the district of Talca. In a sample of 36 subjects with an average age of 9.38 ± 1.99 years, divided into 2 groups (normal weight and obese), the tricipital, subscapular, and abdominal skinfolds and lung volumes were measured. For the statistical analysis, data normality was determined and then the Student's t test or the Mann-Whitney U test and Pearson's or Spearman's correlations were used, as applicable. A value of p < 0.05 was considered statistically significant. When comparing normal weight and obese subjects, a significant increase in sRaw and a significant reduction in specific airway conductance (sGaw) were observed in obese subjects. In addition, an adequate and significant correlation was observed between sRaw and fat percentage. Obese subjects showed an increased sRaw and a reduced sGaw. Sociedad Argentina de Pediatría.

  13. Evaluation of virtual environment as a form of interactive resuscitation exam

    NASA Astrophysics Data System (ADS)

    Leszczyński, Piotr; Charuta, Anna; Kołodziejczak, Barbara; Roszak, Magdalena

    2017-10-01

    There is scientific evidence confirming the effectiveness of e-learning within resuscitation training; however, there is not enough research on modern examination techniques in this area. The aim of this pilot research was to compare exam results in the field of Advanced Life Support in a traditional (paper) and an interactive (computer) form, as well as to evaluate the satisfaction of the participants. A survey was conducted to evaluate the satisfaction of exam participants. Statistical analysis of the collected data was conducted at a significance level of α = 0.05 using STATISTICS v. 12. Final results of the traditional exam (67.5% ± 15.8%) differed significantly (p < 0.001) from the results of the interactive exam (53.3% ± 13.7%). However, comparing the number of students who did not pass the exam (pass mark of 51%), no significant differences (p = 0.13) were observed between the two types of exams. The accuracy of feedback and the presence of well-prepared interactive questions may have influenced participants' satisfaction with the electronic test. The significant differences between the results of the traditional test and the one delivered through a computer-based learning system suggest that interactive solutions allow a more detailed verification of competence in the field of resuscitation.

  14. Heavy metals found in the breathing zone, toenails and lung function of welders working in an air-conditioned welding workplace.

    PubMed

    Hariri, Azian; Mohamad Noor, Noraishah; Paiman, Nuur Azreen; Ahmad Zaidi, Ahmad Mujahid; Zainal Bakri, Siti Farhana

    2017-09-22

    Welding operations are rarely conducted in an air-conditioned room. However, a company would set its welding operations in an air-conditioned room to maintain the humidity level needed to reduce hydrogen cracks in the specimen being welded. This study intended to assess the exposure to metal elements in the welders' breathing zone and toenail samples. Heavy metal concentration was analysed using inductively coupled plasma mass spectrometry. The lung function test was also conducted and analysed using statistical approaches. Chromium and manganese concentrations in the breathing zone exceeded the permissible exposure limit stipulated by Malaysian regulations. A similar trend was obtained in the concentration of heavy metals in the breathing zone air sampling and in the welders' toenails. Although there was no statistically significant decrease in the lung function of welders, it is suggested that exposure control through engineering and administrative approaches should be considered for workplace safety and health improvement.

  15. Single-electron thermal noise

    NASA Astrophysics Data System (ADS)

    Nishiguchi, Katsuhiko; Ono, Yukinori; Fujiwara, Akira

    2014-07-01

    We report the observation of thermal noise in the motion of single electrons in an ultimately small dynamic random access memory (DRAM). The nanometer-scale transistors that compose the DRAM resolve the thermal noise in single-electron motion. A complete set of fundamental tests conducted on this single-electron thermal noise shows that the noise perfectly follows all the aspects predicted by statistical mechanics, which include the occupation probability, the law of equipartition, a detailed balance, and the law of kT/C. In addition, the counting statistics on the directional motion (i.e., the current) of the single-electron thermal noise indicate that the individual electron motion follows the Poisson process, as it does in shot noise.

  16. Single-electron thermal noise.

    PubMed

    Nishiguchi, Katsuhiko; Ono, Yukinori; Fujiwara, Akira

    2014-07-11

    We report the observation of thermal noise in the motion of single electrons in an ultimately small dynamic random access memory (DRAM). The nanometer-scale transistors that compose the DRAM resolve the thermal noise in single-electron motion. A complete set of fundamental tests conducted on this single-electron thermal noise shows that the noise perfectly follows all the aspects predicted by statistical mechanics, which include the occupation probability, the law of equipartition, a detailed balance, and the law of kT/C. In addition, the counting statistics on the directional motion (i.e., the current) of the single-electron thermal noise indicate that the individual electron motion follows the Poisson process, as it does in shot noise.

  17. Guidelines for the Investigation of Mediating Variables in Business Research

    PubMed Central

    Coxe, Stefany; Baraldi, Amanda N.

    2013-01-01

    Business theories often specify the mediating mechanisms by which a predictor variable affects an outcome variable. In the last 30 years, investigations of mediating processes have become more widespread with corresponding developments in statistical methods to conduct these tests. The purpose of this article is to provide guidelines for mediation studies by focusing on decisions made prior to the research study that affect the clarity of conclusions from a mediation study, the statistical models for mediation analysis, and methods to improve interpretation of mediation results after the research study. Throughout this article, the importance of a program of experimental and observational research for investigating mediating mechanisms is emphasized. PMID:25237213
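
    A minimal sketch of the regression-based mediation analysis this article discusses, using simulated predictor, mediator, and outcome variables and assuming statsmodels is available: the indirect effect is estimated as the product of the a and b paths, with a percentile bootstrap confidence interval (one common approach among the statistical models the authors review, not necessarily their recommended one).

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(11)
    n = 300
    x = rng.normal(size=n)                        # predictor
    m = 0.5 * x + rng.normal(size=n)              # mediator (true a path = 0.5)
    y = 0.4 * m + 0.2 * x + rng.normal(size=n)    # outcome  (true b path = 0.4)

    def indirect_effect(x, m, y):
        a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                         # a path
        b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]   # b path
        return a * b

    # Percentile bootstrap for the indirect effect a*b.
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect a*b = {indirect_effect(x, m, y):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
    ```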

  18. Vertical integration of basic science in final year of medical education.

    PubMed

    Rajan, Sudha Jasmine; Jacob, Tripti Meriel; Sathyendra, Sowmya

    2016-01-01

    Development of health professionals with the ability to integrate, synthesize, and apply knowledge gained through medical college is greatly hampered by a system of delivery that is compartmentalized and piecemeal. There is a need to integrate basic sciences with clinical teaching to enable application in clinical care. To study the benefit and acceptance of vertical integration of basic science in the final year of the MBBS undergraduate curriculum. After Institutional Ethics Clearance, neuroanatomy refresher classes with clinical application to neurological diseases were held as part of the final year posting in two medical units. Feedback was collected. Pre- and post-tests which tested application and synthesis were conducted. Summative assessment was compared with that of the control group of students who had standard teaching in the other two medical units. In-depth interviews were conducted with 2 willing participants and 2 teachers who conducted the neurology bedside teaching. The majority (>80%) found the classes useful and interesting. There was a statistically significant improvement in the post-test scores. There was a statistically significant difference between the intervention and control groups' scores during summative assessment (76.2 vs. 61.8, P < 0.01). Students felt that the classes reinforced knowledge, motivated self-directed learning, enabled correlations, improved understanding, put things in perspective, gave confidence, aided application, and enabled them to follow discussions during clinical teaching. Vertical integration of basic science in the final year was beneficial and resulted in knowledge gain and improved summative scores. The classes were found to be useful and interesting, and were thought by the majority of students to help in clinical care and application.

  19. Diagnosis of cystic fibrosis with chloride meter (Sherwood M926S chloride analyzer®) and sweat test analysis system (CFΔ collection system®) compared to the Gibson Cooke method.

    PubMed

    Emiralioğlu, Nagehan; Özçelik, Uğur; Yalçın, Ebru; Doğru, Deniz; Kiper, Nural

    2016-01-01

    The sweat test with the Gibson Cooke (GC) method is the diagnostic gold standard for cystic fibrosis (CF). Recently, alternative methods have been introduced to simplify both the collection and analysis of sweat samples. Our aim was to compare sweat chloride values obtained by the GC method with those from other sweat test methods in patients diagnosed with CF and in patients in whom CF had been ruled out. We wanted to determine whether the other sweat test methods could reliably identify patients with CF and differentiate them from healthy subjects. Chloride concentration was measured with the GC method, a chloride meter and a sweat test analysis system; conductivity was also determined with the sweat test analysis system. Forty-eight patients with CF and 82 patients without CF underwent the sweat test, showing median sweat chloride values of 98.9 mEq/L with the GC method, 101 mmol/L with the chloride meter, and 87.8 mmol/L with the sweat test analysis system. In the non-CF group, median sweat chloride values were 16.8 mEq/L with the GC method, 10.5 mmol/L with the chloride meter, and 15.6 mmol/L with the sweat test analysis system. The median conductivity value was 107.3 mmol/L in the CF group and 32.1 mmol/L in the non-CF group. There was a statistically significant, strong positive correlation between the GC method and the other sweat test methods (r = 0.85) in all subjects. Sweat chloride concentration and conductivity measured by the other sweat test methods correlate highly with the GC method. We think that the other sweat test equipment can be used as reliably as the classic GC method to diagnose or exclude CF.

  20. Hierarchical statistical modeling of xylem vulnerability to cavitation.

    PubMed

    Ogle, Kiona; Barber, Jarrett J; Willson, Cynthia; Thompson, Brenda

    2009-01-01

    Cavitation of xylem elements diminishes the water transport capacity of plants, and quantifying xylem vulnerability to cavitation is important to understanding plant function. Current approaches to analyzing hydraulic conductivity (K) data to infer vulnerability to cavitation suffer from problems such as the use of potentially unrealistic vulnerability curves, difficulty interpreting parameters in these curves, a statistical framework that ignores sampling design, and an overly simplistic view of uncertainty. This study illustrates how two common curves (exponential-sigmoid and Weibull) can be reparameterized in terms of meaningful parameters: maximum conductivity (k(sat)), water potential (-P) at which percentage loss of conductivity (PLC) =X% (P(X)), and the slope of the PLC curve at P(X) (S(X)), a 'sensitivity' index. We provide a hierarchical Bayesian method for fitting the reparameterized curves to K(H) data. We illustrate the method using data for roots and stems of two populations of Juniperus scopulorum and test for differences in k(sat), P(X), and S(X) between different groups. Two important results emerge from this study. First, the Weibull model is preferred because it produces biologically realistic estimates of PLC near P = 0 MPa. Second, stochastic embolisms contribute an important source of uncertainty that should be included in such analyses.
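
    The reparameterization idea can be illustrated with the Weibull vulnerability curve. Under the commonly used form PLC(P) = 100 * (1 - exp(-(P/b)^c)), the water potential at X% loss (P_X) and the local slope (S_X) follow directly; the sketch below recovers P50 and S50 for invented parameter values. The exact parameterization and the hierarchical Bayesian fitting used by the authors are not reproduced here, so treat this purely as an assumed illustrative form.

    ```python
    import numpy as np

    def plc_weibull(p, b, c):
        """Percentage loss of conductivity at water potential -p (MPa), Weibull form."""
        return 100.0 * (1.0 - np.exp(-((p / b) ** c)))

    b, c = 3.0, 2.5   # hypothetical Weibull parameters
    x = 50.0          # target percentage loss

    # Water potential at which PLC = X% (closed form from inverting the Weibull curve).
    p_x = b * (-np.log(1.0 - x / 100.0)) ** (1.0 / c)

    # Sensitivity index: slope of the PLC curve at P_X (central-difference derivative).
    eps = 1e-4
    s_x = (plc_weibull(p_x + eps, b, c) - plc_weibull(p_x - eps, b, c)) / (2 * eps)

    print(f"P50 = {p_x:.2f} MPa, S50 = {s_x:.1f} % per MPa")
    ```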

  1. Pure-tone audiometry outside a sound booth using earphone attenuation, integrated noise monitoring, and automation.

    PubMed

    Swanepoel, De Wet; Matthysen, Cornelia; Eikelboom, Robert H; Clark, Jackie L; Hall, James W

    2015-01-01

    Accessibility of audiometry is hindered by the cost of sound booths and the shortage of hearing health personnel. This study investigated the validity of an automated mobile diagnostic audiometer with increased attenuation and real-time noise monitoring for clinical testing outside a sound booth. Attenuation characteristics and reference ambient noise levels for the computer-based audiometer (KUDUwave) were evaluated alongside the validity of environmental noise monitoring. Clinical validity was determined by comparing air- and bone-conduction thresholds obtained inside and outside the sound booth. Participants were 23 normal-hearing subjects (age range 20-75 years; average age 35.5 years), with a subgroup of 11 subjects retested to establish test-retest reliability. Improved passive attenuation and valid environmental noise monitoring were demonstrated. Clinically, air-conduction thresholds inside and outside the sound booth corresponded within 5 dB or less in more than 90% of instances (mean absolute difference 3.3 dB ± 3.2 SD). Bone-conduction thresholds corresponded within 5 dB or less in 80% of comparisons between test environments, with a mean absolute difference of 4.6 dB (3.7 SD). Threshold differences were not statistically significant. Mean absolute test-retest differences outside the sound booth were similar to those in the booth. Diagnostic pure-tone audiometry outside a sound booth, using automated testing, improved passive attenuation, and real-time environmental noise monitoring, yielded reliable hearing assessments.

  2. Geological modeling of submeter scale heterogeneity and its influence on tracer transport in a fluvial aquifer

    NASA Astrophysics Data System (ADS)

    Ronayne, Michael J.; Gorelick, Steven M.; Zheng, Chunmiao

    2010-10-01

    We developed a new model of aquifer heterogeneity to analyze data from a single-well injection-withdrawal tracer test conducted at the Macrodispersion Experiment (MADE) site on the Columbus Air Force Base in Mississippi (USA). The physical heterogeneity model is a hybrid that combines 3-D lithofacies to represent submeter scale, highly connected channels within a background matrix based on a correlated multivariate Gaussian hydraulic conductivity field. The modeled aquifer architecture is informed by a variety of field data, including geologic core sampling. Geostatistical properties of this hybrid heterogeneity model are consistent with the statistics of the hydraulic conductivity data set based on extensive borehole flowmeter testing at the MADE site. The representation of detailed, small-scale geologic heterogeneity allows for explicit simulation of local preferential flow and slow advection, processes that explain the complex tracer response from the injection-withdrawal test. Based on the new heterogeneity model, advective-dispersive transport reproduces key characteristics of the observed tracer recovery curve, including a delayed concentration peak and a low-concentration tail. Importantly, our results suggest that intrafacies heterogeneity is responsible for local-scale mass transfer.

  3. Mortality of veteran participants in the crossroads nuclear test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, J.C.; Thaul, S.; Page, W.F.

    1997-07-01

    Operation CROSSROADS, conducted at Bikini Atoll in 1946, was the first post World War II test of nuclear weapons. Mortality experience of 40,000 military veteran participants in CROSSROADS was compared to that of a similar cohort of nonparticipating veterans. All-cause mortality of the participants was slightly increased over nonparticipants by 5% (p < .001). Smaller increases in participant mortality for all malignancies (1.4%, p = 0.26) or leukemia (2.0%, p = 0.9) were not statistically significant. These results do not support a hypothesis that radiation had increased participant cancer mortality over that of nonparticipants. 8 refs.

  4. Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Landman, Drew

    2015-01-01

    Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.

  5. Field and laboratory analyses of water from the Columbia aquifer in Eastern Maryland

    USGS Publications Warehouse

    Bachman, L.J.

    1984-01-01

    Field and laboratory analyses of pH, alkalinity, and specific conductance from water samples collected from the Columbia aquifer on the Delmarva Peninsula in eastern Maryland were compared to determine if laboratory analyses could be used for making regional water-quality interpretations. Kruskal-Wallis tests of field and laboratory data indicate that the difference between field and laboratory values is usually not enough to affect the outcome of the statistical tests. Thus, laboratory measurements of these constituents may be adequate for making certain regional water-quality interpretations, although they may result in errors if used for geochemical interpretations.

  6. A time to be born: Variation in the hour of birth in a rural population of Northern Argentina.

    PubMed

    Chaney, Carlye; Goetz, Laura G; Valeggia, Claudia

    2018-04-17

    The present study aimed at investigating the timing of birth across the day in a rural population of indigenous and nonindigenous women in the province of Formosa, Argentina in order to explore the variation in patterns in a non-Western setting. This study utilized birth record data transcribed from delivery room records at a rural hospital in the province of Formosa, northern Argentina. The sample included data for Criollo, Wichí, and Toba/Qom women (n = 2421). Statistical analysis was conducted using directional statistics to identify a mean sample direction. Chi-square tests for homogeneity were also used to test for statistically significant differences between hours of the day. The mean sample direction was 81.04°, which equates to 5:24 AM when calculated as time on a 24-hr clock. Chi-squared analyses showed a statistically significant peak in births between 12:00 and 4:00 AM. Birth counts generally declined throughout the day until a statistically significant trough around 5:00 PM. This pattern may be associated with the circadian rhythms of hormone release, particularly melatonin, on a proximate level. At the ultimate level, giving birth in the early hours of the morning may have been selected to time births when the mother could benefit from the predator protection and support provided by her social group as well as increased mother-infant bonding from a more peaceful environment. © 2018 Wiley Periodicals, Inc.
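
    The directional-statistics step described above, converting each hour of birth to an angle on a 24-hour circle and taking a circular mean, can be sketched as follows. The hour values are invented for illustration and do not come from the study's n = 2421 records.

    ```python
    import numpy as np

    # Hypothetical birth hours (0-23) for a small sample.
    hours = np.array([1, 2, 3, 3, 4, 5, 5, 6, 23, 0, 2, 4, 13, 17, 3, 5])

    # Convert hours to angles on the 24-hour circle.
    angles = 2 * np.pi * hours / 24.0

    # Circular (directional) mean: direction of the mean resultant vector.
    mean_angle = np.arctan2(np.sin(angles).mean(), np.cos(angles).mean()) % (2 * np.pi)
    mean_hour = 24.0 * mean_angle / (2 * np.pi)

    total_min = int(round(mean_hour * 60)) % (24 * 60)
    print(f"mean direction = {np.degrees(mean_angle):.1f} degrees "
          f"= {total_min // 60:02d}:{total_min % 60:02d} on a 24-hr clock")
    ```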

  7. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, newer statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the range of available statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly both on the exploration of the data, to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests, and on the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.

  8. Domestic violence on children: development and validation of an instrument to evaluate knowledge of health professionals 1

    PubMed Central

    Oliveira, Lanuza Borges; Soares, Fernanda Amaral; Silveira, Marise Fagundes; de Pinho, Lucinéia; Caldeira, Antônio Prates; Leite, Maísa Tavares de Souza

    2016-01-01

    ABSTRACT Objective: to develop and validate an instrument to evaluate the knowledge of health professionals about domestic violence on children. Method: this was a study conducted with 194 physicians, nurses and dentists. A literature review was performed for preparation of the items and identification of the dimensions. Face (apparent) and content validation was performed using the analyses of three experts and 27 professors of the pediatric health discipline. For construct validation, Cronbach's alpha was used, and the Kappa test was applied to verify reproducibility. Criterion validation was conducted using Student's t-test. Results: the final instrument included 56 items; Cronbach's alpha was 0.734, the Kappa test showed a correlation greater than 0.6 for most items, and Student's t-test showed a statistically significant value at the 5% level for the two selected variables: years of education and working in the Family Health Strategy. Conclusion: the instrument is valid and can be used as a promising tool to develop or direct actions in public health and to evaluate knowledge about domestic violence on children. PMID:27556878
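
    Cronbach's alpha, used above for construct validation, can be computed directly from an items-by-respondents score matrix. The small response matrix below is invented for illustration and is unrelated to the 56-item instrument described in the record.

    ```python
    import numpy as np

    # Hypothetical responses: rows = respondents, columns = questionnaire items.
    scores = np.array([
        [3, 4, 3, 5, 4],
        [2, 2, 3, 2, 3],
        [4, 5, 4, 4, 5],
        [3, 3, 2, 3, 3],
        [5, 4, 5, 5, 4],
        [1, 2, 2, 1, 2],
    ])

    def cronbach_alpha(x):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        k = x.shape[1]
        item_vars = x.var(axis=0, ddof=1)        # variance of each item
        total_var = x.sum(axis=1).var(ddof=1)    # variance of the total score
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    print(f"alpha = {cronbach_alpha(scores):.3f}")
    ```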

  9. Influence of the heterogeneity on the hydraulic conductivity of a real aquifer

    NASA Astrophysics Data System (ADS)

    Carmine, Fallico; Aldo Pedro, Ferrante; Chiara, Vita Maria; Bartolo Samuele, De

    2010-05-01

    Many factors influence flow in porous media and therefore the values of the representative parameters of the aquifer, such as the hydraulic conductivity (k). Many studies have shown that this parameter increases with the portion of the aquifer tested. The main cause of this behaviour is the heterogeneity of the aquifer (Sánchez-Vila et al., 1996). It has also been verified that the scale dependence of hydraulic conductivity does not depend on the specific method of measurement (Schulze-Makuch and Cherkauer, 1998). An experimental approach to studying this phenomenon is based on sets of measurements carried out at different scales. However, one should consider that for the lower scale values k can be determined by direct measurements, performed in the laboratory using samples of different dimensions, while for large scales the measurement of the hydraulic conductivity requires indirect methods (Johnson and Sen, 1988; Katz and Thompson, 1986; Bernabé and Revil, 1995). In this study the confined aquifer of the Montalto Uffugo test field was examined. This aquifer has the geological characteristics of a recently formed valley, with conglomeratic and sandy alluvial deposits; specifically, the layer of sands and conglomerates, with a significant percentage of silt at various levels, lies about 55-60 m below the ground surface, where there is a heavy clay formation. Moreover, in the test field, for the considered confined aquifer, there are one completely penetrating well, five partially penetrating wells and two completely penetrating piezometers. Along two vertical lines a series of cylindrical samples (6.4 cm in diameter and 15 cm in height) was extracted, and for each of them the k value was measured in the laboratory by direct methods based on the use of flux cells. Indirect methods were also used; in fact, a series of slug tests was carried out, determining the corresponding k values and the radius of influence (R). Another series of pumping tests was then carried out, again determining the corresponding k values and radii of influence; in fact, changing the pumping rate also changes R. For the different sets of k values obtained by the different measurement methods, a statistical analysis was performed, determining the meaningful statistical parameters. All the obtained k values were then examined together, furnishing a scaling law of k for the considered aquifer. The equation describing this experimental trend is a power law, in agreement with Schulze-Makuch and Cherkauer (1998). These results, obtained for the Montalto Uffugo test field, show that the hydraulic conductivity grows with the radius of influence, i.e., with the volume of the aquifer involved in the measurement. Moreover, the threshold value to which k tends as R grows was determined. References: Bernabé, Y. and Revil, A. 1995. Pore-scale heterogeneity, energy dissipation and the transport properties of rocks. Geophys. Res. Lett. 22: 1529-1532. Johnson, D.L. and Sen, P.N. 1988. Dependence of the conductivity of a porous medium on electrolytic conductivity. Phys. Rev. B Condens. Matter 37: 3502-3510. Katz, A.J. and Thompson, A.H. 1986. Quantitative prediction of permeability in porous rock. Phys. Rev. B Condens. Matter 34: 8179-8181. Sánchez-Vila, X., Carrera, J. and Girardi, J.P. 1996. Scale effects in transmissivity. J. Hydrol. 183: 1-22. Schulze-Makuch, D. and Cherkauer, D.S. 1998. Variations in hydraulic conductivity with scale of measurement during aquifer tests in heterogeneous, porous, carbonate rocks. Hydrogeol. J. 6: 204-215.
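
    The scaling law described here, k growing with the radius of influence R, is commonly estimated as a power law by linear regression in log-log space. The sketch below fits k = a * R^m to invented (R, k) pairs, since the field values themselves are not given in the abstract; the numbers and the NumPy fitting routine are assumptions for illustration.

    ```python
    import numpy as np

    # Hypothetical (radius of influence [m], hydraulic conductivity [cm/s]) pairs.
    radius = np.array([0.05, 0.5, 2.0, 5.0, 15.0, 40.0])
    k = np.array([2e-5, 6e-5, 1.5e-4, 2.5e-4, 5e-4, 9e-4])

    # Fit log10(k) = log10(a) + m * log10(R); the slope m is the scaling exponent.
    m, log_a = np.polyfit(np.log10(radius), np.log10(k), 1)
    print(f"fitted scaling law: k = {10**log_a:.2e} * R^{m:.2f}")
    ```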

  10. Heat balance statistics derived from four-dimensional assimilations with a global circulation model

    NASA Technical Reports Server (NTRS)

    Schubert, S. D.; Herman, G. F.

    1981-01-01

    The reported investigation was conducted to develop a reliable procedure for obtaining the diabatic and vertical terms required for atmospheric heat balance studies. The method developed employs a four-dimensional assimilation mode in connection with the general circulation model of NASA's Goddard Laboratory for Atmospheric Sciences. The initial analysis was conducted with data obtained in connection with the 1976 Data Systems Test. On the basis of the results of the investigation, it appears possible to use the model's observationally constrained diagnostics to provide estimates of the global distribution of virtually all of the quantities which are needed to compute the atmosphere's heat and energy balance.

  11. Impact of a Dialectic Behavior Therapy - Corrections Modified (DBT-CM) Upon Behaviorally Challenged Incarcerated Male Adolescents

    PubMed Central

    Shelton, Deborah; Kesten, Karen; Zhang, Wanli; Trestman, Robert

    2011-01-01

    Purpose This article reports the findings of a Dialectical Behavioral Therapy- Corrections Modified (DBT-CM) intervention upon difficult to manage, impulsive and/or aggressive incarcerated male adolescents. Methods A secondary analysis of a sub-sample of 38 male adolescents who participated in the study was conducted. A one-group pretest-posttest design was used; descriptive statistics and t-tests were conducted. Results Significant changes were found in physical aggression, distancing coping methods and number of disciplinary tickets for behavior. Conclusion The study supports the value of DBT-CM for management of incarcerated male adolescents with difficult to manage aggressive behaviors. PMID:21501287

  12. Effect of promoting self-esteem by participatory learning process on emotional intelligence among early adolescents.

    PubMed

    Munsawaengsub, Chokchai; Yimklib, Somkid; Nanthamongkolchai, Sutham; Apinanthavech, Suporn

    2009-12-01

    To study the effect of promoting self-esteem by a participatory learning program on emotional intelligence among early adolescents. This quasi-experimental study was conducted in grade 9 students from two schools in Bangbuathong district, Nonthaburi province. The experimental and comparative groups each consisted of 34 students with the lowest scores of emotional intelligence. The instruments were questionnaires, the Program to Develop Emotional Intelligence and the Handbook of Emotional Intelligence Development. The experimental group attended 8 participatory learning activities over 4 weeks to develop emotional intelligence, while the comparative group received the handbook for self-study. The effectiveness of the program was assessed with a pre-test and with post-tests of emotional intelligence administered immediately after the program and 4 weeks later. Implementation and evaluation were done during May 24-August 12, 2005. Data were analyzed by frequency, percentage, mean, standard deviation, Chi-square, the independent-samples t-test and the paired-samples t-test. Before program implementation, the two groups showed no statistical difference in mean scores of emotional intelligence. After the intervention, the experimental group had a higher mean score of emotional intelligence both immediately and 4 weeks later, with statistical significance (p = 0.001 and < 0.001). At 4 weeks after the experiment, the mean score in the experimental group was higher than the mean score immediately after the experiment, with statistical significance (p < 0.001). The program to promote self-esteem by a participatory learning process could enhance emotional intelligence in early adolescents. This program could be modified and implemented for early adolescents in the community.

  13. The Impact of Team-Based Learning on Nervous System Examination Knowledge of Nursing Students.

    PubMed

    Hemmati Maslakpak, Masomeh; Parizad, Naser; Zareie, Farzad

    2015-12-01

    Team-based learning is one of the active learning approaches in which independent learning is combined with small group discussion in class. This study aimed to determine the impact of team-based learning on the nervous system examination knowledge of nursing students. This quasi-experimental study was conducted on 3rd-year nursing students, comprising 5th-semester students (intervention group) and 6th-semester students (control group). The team-based learning method and the traditional lecture method were used to teach the examination of the nervous system to the intervention and control groups, respectively. The data were collected with a 40-question test (multiple choice, matching, gap-filling and descriptive questions) before and after the intervention in both groups. The Individual Readiness Assurance Test (RAT) and Group Readiness Assurance Test (GRAT) were used to collect data in the intervention group. Finally, the collected data were analyzed in SPSS ver. 13 using descriptive and inferential statistical tests. In the team-based learning group, the mean (standard deviation) score was 13.39 (4.52) before the intervention and increased to 31.07 (3.20) after the intervention; this increase was statistically significant. There was also a statistically significant difference between the RAT and GRAT scores in the team-based learning group. Using the team-based learning approach resulted in much better improvement and stability of nervous system examination knowledge among nursing students compared to the traditional lecture method; therefore, this method could be used as an effective educational approach in nursing education.

  14. Dental enamel defect diagnosis through different technology-based devices.

    PubMed

    Kobayashi, Tatiana Yuriko; Vitor, Luciana Lourenço Ribeiro; Carrara, Cleide Felício Carvalho; Silva, Thiago Cruvinel; Rios, Daniela; Machado, Maria Aparecida Andrade Moreira; Oliveira, Thais Marchini

    2018-06-01

    Dental enamel defects (DEDs) are faulty or deficient enamel formations of primary and permanent teeth. Changes during tooth development result in hypoplasia (a quantitative defect) and/or hypomineralisation (a qualitative defect). To compare technology-based diagnostic methods for detecting DEDs. Two-hundred and nine dental surfaces of anterior permanent teeth were selected in patients, 6-11 years of age, with cleft lip with/without cleft palate. First, a conventional clinical examination was conducted according to the modified Developmental Defects of Enamel Index (DDE Index). Dental surfaces were evaluated using an operating microscope and a fluorescence-based device. Interexaminer reproducibility was determined using the kappa test. To compare groups, McNemar's test was used. Cramer's V test was used for comparing the distribution of index codes obtained after classification of all dental surfaces. Cramer's V test revealed statistically significant differences (P < .0001) in the distribution of index codes obtained using the different methods; the coefficients were 0.365 for conventional clinical examination versus fluorescence, 0.961 for conventional clinical examination versus operating microscope and 0.358 for operating microscope versus fluorescence. The sensitivity of the operating microscope and fluorescence method was statistically significant (P = .008 and P < .0001, respectively). Otherwise, the results did not show statistically significant differences in accuracy and specificity for either the operating microscope or the fluorescence methods. This study suggests that the operating microscope performed better than the fluorescence-based device and could be an auxiliary method for the detection of DEDs. © 2017 FDI World Dental Federation.
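
    McNemar's test, used above to compare paired detections by two methods on the same dental surfaces, operates on a 2x2 table of concordant and discordant pairs. The table below is invented for illustration, and statsmodels availability is assumed.

    ```python
    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    # Hypothetical paired detections on the same surfaces:
    # rows = method A (defect / no defect), columns = method B (defect / no defect).
    table = np.array([
        [52, 4],    # both detect     / only method A detects
        [15, 138],  # only B detects  / neither detects
    ])

    result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
    print(f"McNemar statistic = {result.statistic:.0f}, P = {result.pvalue:.4f}")
    ```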

  15. Active Female Maximal and Anaerobic Threshold Cardiorespiratory Responses to Six Different Water Aerobics Exercises.

    PubMed

    Antunes, Amanda H; Alberton, Cristine L; Finatto, Paula; Pinto, Stephanie S; Cadore, Eduardo L; Zaffari, Paula; Kruel, Luiz F M

    2015-01-01

    Maximal tests conducted on land are not suitable for the prescription of aquatic exercises, which makes it difficult to optimize the intensity of water aerobics classes. The aim of the present study was to evaluate the maximal and anaerobic threshold cardiorespiratory responses to 6 water aerobics exercises. Volunteers performed 3 of the exercises in the sagittal plane and 3 in the frontal plane. Twelve active female volunteers (aged 24 ± 2 years) performed 6 maximal progressive test sessions. Throughout the exercise tests, we measured heart rate (HR) and oxygen consumption (VO2). We randomized all sessions with a minimum interval of 48 hr between each session. For statistical analysis, we used repeated-measures 1-way analysis of variance. Regarding the maximal responses, for the peak VO2, abductor hop and jumping jacks (JJ) showed significantly lower values than frontal kick and cross-country skiing (CCS; p < .001; partial η² = .509), while for the peak HR, JJ showed statistically significantly lower responses compared with stationary running and CCS (p < .001; partial η² = .401). At anaerobic threshold intensity expressed as the percentage of the maximum values, no statistically significant differences were found among exercises. Cardiorespiratory responses are directly associated with the muscle mass involved in the exercise. Thus, it is worth emphasizing the importance of performing a maximal test that is specific to the analyzed exercise so the prescription of the intensity can be safer and valid.

  16. Statistics of indicated pressure in combustion engine.

    NASA Astrophysics Data System (ADS)

    Sitnik, L. J.; Andrych-Zalewska, M.

    2016-09-01

    The paper presents the classic form of pressure waveforms in the combustion chamber of a diesel engine, placed on a strict analytical basis by correcting for the displacement volume. The pressure measurements were obtained with the engine running on a dynamometer stand during a 13-phase ESC (European Stationary Cycle) test. In each test phase, 90 pressure waveforms were archived. Extensive statistical analysis showed that while the engine is idling, the distribution of the 90 pressure values at any given crankshaft angle can be described by a uniform distribution. At each point of the engine characteristic corresponding to the individual ESC test phases, the 90 pressure values at any crankshaft angle can be described by a normal distribution. These relationships were verified using the Shapiro-Wilk, Jarque-Bera, Lilliefors, and Anderson-Darling tests. Descriptive statistics of the pressure data were then obtained for each value of the crank angle. In essence, this provides a new way to approach the analysis of pressure waveforms in the combustion chamber, which can be used for further analysis, especially of the combustion process. For example, very large variances of pressure were found near the transition from the compression to the expansion stroke. This lack of stationarity can be important both for exhaust gas emissions and for engine fuel consumption.
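
    All four normality tests named above are available in standard statistical libraries. A minimal Python sketch (with simulated pressure samples standing in for the archived waveforms, keeping the 90-cycle sample size from the abstract) might look like this:

    ```python
    # Minimal sketch (not the authors' code): testing whether, at a fixed crank angle,
    # repeated in-cylinder pressure samples follow a normal distribution, using the same
    # four tests named in the abstract. Data here are simulated for illustration.
    import numpy as np
    from scipy import stats
    from statsmodels.stats.diagnostic import lilliefors

    rng = np.random.default_rng(0)
    pressure_at_angle = rng.normal(loc=5.2, scale=0.15, size=90)  # 90 cycles, MPa (hypothetical)

    sw_stat, sw_p = stats.shapiro(pressure_at_angle)            # Shapiro-Wilk
    jb_stat, jb_p = stats.jarque_bera(pressure_at_angle)        # Jarque-Bera
    lf_stat, lf_p = lilliefors(pressure_at_angle)               # Lilliefors (KS with estimated params)
    ad_result = stats.anderson(pressure_at_angle, dist="norm")  # Anderson-Darling (critical values)

    print(f"Shapiro-Wilk p={sw_p:.3f}, Jarque-Bera p={jb_p:.3f}, Lilliefors p={lf_p:.3f}")
    print("Anderson-Darling stat:", round(ad_result.statistic, 3),
          "5% critical value:", ad_result.critical_values[2])
    ```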

  17. Materials property definition and generation for carbon-carbon and carbon phenolic materials

    NASA Technical Reports Server (NTRS)

    Canfield, A. R.; Mathis, J. R.; Starrett, H. S.; Koenig, J. R.

    1987-01-01

    A data base program to generate statistically significant material-property data for carbon-carbon and carbon phenolic materials to be used in designs of Space Shuttle is described. The program, which will provide data necessary for thermal and stress modeling of Shuttle nozzle and exit cone structures, includes evaluation of tension, compression, shear strength, shear modulus, thermal expansion, thermal conductivity, permeability, and emittance for both materials; the testing of carbon phenolic materials also includes CTE, off-gassing, pyrolysis, and RTG. Materials to be tested will be excised from Space Shuttle inlet, throat, and exit cone billets and modified involute carbon-carbon exit cones; coprocessed blocks, panels, and cylinders will also be tested.

  18. Empirical study of alginate impression materials by customized proportioning system

    PubMed Central

    2016-01-01

    PURPOSE Alginate mixers available on the market do not have an automatic proportioning unit. In this study, an automatic proportioning unit for an alginate mixer and the corresponding controller software were designed and produced. With this device, the proportioning operation could dose alginate impression materials by weight. MATERIALS AND METHODS The coefficient of variation in the tested groups was compared with that of manual proportioning. Compression, tension, and tear tests were conducted to determine the mechanical properties of the alginate impression materials. The experimental data were statistically analyzed using one-way ANOVA and Tukey's test at the 0.05 level of significance. RESULTS No statistically significant differences were seen in the modulus of elasticity (P>0.3), tensile/compressive strength (P>0.3), resilience (P>0.2), strain at failure (P>0.4), or tear energy (P>0.7) of the alginate impression materials. However, a decrease in the standard deviation of the tested groups was observed when the customized machine was used. To verify the efficiency of the system, the powder and powder/water mixes were weighed, and a significant decrease in variability was observed. CONCLUSION More mechanically stable alginate impression materials could be obtained by using the custom-made proportioning unit. PMID:27826387
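
    As a rough illustration of the analysis described above (one-way ANOVA followed by Tukey's test at the 0.05 level, plus the coefficient of variation used to compare scatter), the following Python sketch uses invented placeholder measurements rather than the study data:

    ```python
    # Hedged sketch of the statistical comparison described above (one-way ANOVA followed
    # by Tukey's test at alpha = 0.05); group names and values are invented placeholders.
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    manual = np.array([1.02, 0.98, 1.10, 0.95, 1.05])   # e.g. tear energy, manual proportioning
    custom = np.array([1.04, 1.01, 1.03, 1.02, 1.00])   # customized proportioning unit

    f_stat, p_value = stats.f_oneway(manual, custom)
    print(f"one-way ANOVA: F={f_stat:.2f}, p={p_value:.3f}")

    values = np.concatenate([manual, custom])
    groups = ["manual"] * len(manual) + ["custom"] * len(custom)
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))

    # The abstract's observation about reduced scatter corresponds to the coefficient of variation:
    for name, x in (("manual", manual), ("custom", custom)):
        print(name, "CV =", round(100 * x.std(ddof=1) / x.mean(), 1), "%")
    ```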

  19. Comparison of the dynamics of neural interactions between current-based and conductance-based integrate-and-fire recurrent networks

    PubMed Central

    Cavallari, Stefano; Panzeri, Stefano; Mazzoni, Alberto

    2014-01-01

    Models of networks of Leaky Integrate-and-Fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used both with current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have been so far mainly studied at the single neuron level. To investigate how these synaptic models affect network activity, we compared the single neuron and neural population dynamics of conductance-based networks (COBNs) and current-based networks (CUBNs) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections, and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The comparable networks defined in this way displayed an excellent and robust match of first order statistics (average single neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second order statistics of neural population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents and the cross-neuron correlation between synaptic inputs, membrane potentials and spike trains were stronger and more stimulus-modulated in the COBN. Because of these properties, the spike train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, the network activity of COBN showed stronger synchronization in the gamma band, and the spectral information about the input was higher and spread over a broader range of frequencies. These results suggest that the second order statistics of network dynamics depend strongly on the choice of synaptic model. PMID:24634645

  20. Comparison of the dynamics of neural interactions between current-based and conductance-based integrate-and-fire recurrent networks.

    PubMed

    Cavallari, Stefano; Panzeri, Stefano; Mazzoni, Alberto

    2014-01-01

    Models of networks of Leaky Integrate-and-Fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used both with current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have been so far mainly studied at the single neuron level. To investigate how these synaptic models affect network activity, we compared the single neuron and neural population dynamics of conductance-based networks (COBNs) and current-based networks (CUBNs) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections, and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The comparable networks defined in this way displayed an excellent and robust match of first order statistics (average single neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second order statistics of neural population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents and the cross-neuron correlation between synaptic inputs, membrane potentials and spike trains were stronger and more stimulus-modulated in the COBN. Because of these properties, the spike train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, the network activity of COBN showed stronger synchronization in the gamma band, and the spectral information about the input was higher and spread over a broader range of frequencies. These results suggest that the second order statistics of network dynamics depend strongly on the choice of synaptic model.
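
    The distinction between the two synaptic schemes can be made concrete with a toy single-neuron sketch. This is not the authors' network model, and all parameter values below are illustrative assumptions:

    ```python
    # Minimal sketch (not the authors' model) contrasting a current-based and a
    # conductance-based synaptic drive on a single leaky integrate-and-fire neuron,
    # integrated with forward Euler. Parameter values are illustrative only.
    import numpy as np

    dt, T = 1e-4, 1.0                      # time step and duration (s)
    t = np.arange(0, T, dt)
    tau_m, v_rest, v_thr, v_reset = 20e-3, -70e-3, -54e-3, -60e-3
    E_exc = 0.0                            # excitatory reversal potential (V)

    rate = 800.0                           # total presynaptic rate (Hz)
    spikes = np.random.default_rng(1).random(t.size) < rate * dt

    def run(conductance_based):
        v, s = v_rest, 0.0                 # s: synaptic activation, decays exponentially
        tau_s = 5e-3
        vs = np.empty_like(t)
        for i in range(t.size):
            s += -s * dt / tau_s + float(spikes[i])
            if conductance_based:
                syn = 0.05 * s * (E_exc - v)        # drive scales with instantaneous driving force
            else:
                syn = 0.05 * s * (E_exc - v_rest)   # fixed driving force -> pure current input
            v += dt / tau_m * (v_rest - v + syn)
            if v >= v_thr:                          # spike-and-reset
                v = v_reset
            vs[i] = v
        return vs

    v_cob, v_cub = run(True), run(False)
    print("mean Vm  COBN-like:", round(v_cob.mean() * 1e3, 1), "mV,",
          "CUBN-like:", round(v_cub.mean() * 1e3, 1), "mV")
    ```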

  1. Improving Education in Medical Statistics: Implementing a Blended Learning Model in the Existing Curriculum

    PubMed Central

    Milic, Natasa M.; Trajkovic, Goran Z.; Bukumiric, Zoran M.; Cirkovic, Andja; Nikolic, Ivan M.; Milin, Jelena S.; Milic, Nikola V.; Savic, Marko D.; Corac, Aleksandar M.; Marinkovic, Jelena M.; Stanisavljevic, Dejana M.

    2016-01-01

    Background Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. Methods This was a prospective study conducted with third year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed (440 of 545) the final exam of the obligatory introductory statistics course during 2013–14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Results Mean exam scores for the blended learning student group were higher than for the on-site student group for both final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023) with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality (p<0.001). Conclusion This study provides empirical evidence to support educator decisions to implement different learning environments for teaching medical statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient alternative to traditional classroom training in medical statistics. PMID:26859832

  2. Improving Education in Medical Statistics: Implementing a Blended Learning Model in the Existing Curriculum.

    PubMed

    Milic, Natasa M; Trajkovic, Goran Z; Bukumiric, Zoran M; Cirkovic, Andja; Nikolic, Ivan M; Milin, Jelena S; Milic, Nikola V; Savic, Marko D; Corac, Aleksandar M; Marinkovic, Jelena M; Stanisavljevic, Dejana M

    2016-01-01

    Although recent studies report on the benefits of blended learning in improving medical student education, there is still no empirical evidence on the relative effectiveness of blended over traditional learning approaches in medical statistics. We implemented blended along with on-site (i.e. face-to-face) learning to further assess the potential value of web-based learning in medical statistics. This was a prospective study conducted with third year medical undergraduate students attending the Faculty of Medicine, University of Belgrade, who passed (440 of 545) the final exam of the obligatory introductory statistics course during 2013-14. Student statistics achievements were stratified based on the two methods of education delivery: blended learning and on-site learning. Blended learning included a combination of face-to-face and distance learning methodologies integrated into a single course. Mean exam scores for the blended learning student group were higher than for the on-site student group for both final statistics score (89.36±6.60 vs. 86.06±8.48; p = 0.001) and knowledge test score (7.88±1.30 vs. 7.51±1.36; p = 0.023) with a medium effect size. There were no differences in sex or study duration between the groups. Current grade point average (GPA) was higher in the blended group. In a multivariable regression model, current GPA and knowledge test scores were associated with the final statistics score after adjusting for study duration and learning modality (p<0.001). This study provides empirical evidence to support educator decisions to implement different learning environments for teaching medical statistics to undergraduate medical students. Blended and on-site training formats led to similar knowledge acquisition; however, students with higher GPA preferred the technology assisted learning format. Implementation of blended learning approaches can be considered an attractive, cost-effective, and efficient alternative to traditional classroom training in medical statistics.

  3. National Aquatic Resource Surveys & Statistics: Role of statistics in the development of a national monitoring program

    EPA Science Inventory

    The National Aquatic Resource Surveys (NARS) are a series of four statistical surveys conducted by the U.S. Environmental Protection Agency working in collaboration with states, tribal nations and other federal agencies. The surveys are conducted for lakes and reservoirs, streams...

  4. 78 FR 26611 - Notice of Intent To Seek Approval To Conduct an Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-07

    ... Statistics Service Notice of Intent To Seek Approval To Conduct an Information Collection AGENCY: National Agricultural Statistics Service, USDA. ACTION: Notice and request for comments. SUMMARY: In accordance with the Paperwork Reduction Act of 1995, this notice announces the intention of the National Agricultural Statistics...

  5. Patient perceptions of receiving test results via online portals: a mixed-methods study.

    PubMed

    Giardina, Traber D; Baldwin, Jessica; Nystrom, Daniel T; Sittig, Dean F; Singh, Hardeep

    2018-04-01

    Online portals provide patients with access to their test results, but it is unknown how patients use these tools to manage results and what information is available to promote understanding. We conducted a mixed-methods study to explore patients' experiences and preferences when accessing their test results via portals. We conducted 95 interviews (13 semistructured and 82 structured) with adults who viewed a test result in their portal between April 2015 and September 2016 at 4 large outpatient clinics in Houston, Texas. Semistructured interviews were coded using content analysis and transformed into quantitative data and integrated with the structured interview data. Descriptive statistics were used to summarize the structured data. Nearly two-thirds (63%) did not receive any explanatory information or test result interpretation at the time they received the result, and 46% conducted online searches for further information about their result. Patients who received an abnormal result were more likely to experience negative emotions (56% vs 21%; P = .003) and more likely to call their physician (44% vs 15%; P = .002) compared with those who received normal results. Study findings suggest that online portals are not currently designed to present test results to patients in a meaningful way. Patients experienced negative emotions often with abnormal results, but sometimes even with normal results. Simply providing access via portals is insufficient; additional strategies are needed to help patients interpret and manage their online test results. Given the absence of national guidance, our findings could help strengthen policy and practice in this area and inform innovations that promote patient understanding of test results.
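
    Comparisons such as "56% vs 21%; P = .003" are typically chi-squared tests on 2x2 tables; a minimal sketch with invented counts (not the study data) is shown below.

    ```python
    # Sketch of the kind of comparison reported above (e.g. negative emotions for
    # abnormal vs normal results) using a chi-squared test on a 2x2 table.
    # Counts below are invented for illustration; they are not the study data.
    from scipy.stats import chi2_contingency

    #         negative emotions   no negative emotions
    table = [[28, 22],    # abnormal result group (hypothetical n = 50)
             [ 9, 36]]    # normal result group   (hypothetical n = 45)

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
    ```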

  6. Improving the analysis of slug tests

    USGS Publications Warehouse

    McElwee, C.D.

    2002-01-01

    This paper examines several techniques that have the potential to improve the quality of slug test analysis. These techniques are applicable in the range from low hydraulic conductivities with overdamped responses to high hydraulic conductivities with nonlinear oscillatory responses. Four techniques for improving slug test analysis will be discussed: use of an extended capability nonlinear model, sensitivity analysis, correction for acceleration and velocity effects, and use of multiple slug tests. The four-parameter nonlinear slug test model used in this work is shown to allow accurate analysis of slug tests with widely differing character. The first parameter represents a correction to the water column length caused primarily by radius variations in the wellbore and is most useful in matching the oscillation frequency and amplitude. The water column velocity at slug initiation (V0) is an additional model parameter, which would ideally be zero but may not be due to the initiation mechanism. The remaining two model parameters are A (parameter for nonlinear effects) and K (hydraulic conductivity). Sensitivity analysis shows that in general the water-column-length correction and V0 have the lowest sensitivity and K usually has the highest. However, for very high K values the sensitivity to A may surpass the sensitivity to K. Oscillatory slug tests involve higher accelerations and velocities of the water column; thus, the pressure transducer responses are affected by these factors and the model response must be corrected to allow maximum accuracy for the analysis. The performance of multiple slug tests will allow some statistical measure of the experimental accuracy and of the reliability of the resulting aquifer parameters. © 2002 Elsevier Science B.V. All rights reserved.
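
    The parameter-estimation step can be illustrated with a deliberately simplified stand-in: fitting a linear damped-oscillation response by nonlinear least squares. This is not McElwee's four-parameter nonlinear model; the functional form, parameter names, and data below are assumptions for illustration only.

    ```python
    # Simplified sketch only: fitting an oscillatory slug-test response with a linear
    # damped-oscillation model via nonlinear least squares. This is NOT the paper's
    # four-parameter nonlinear model; it just illustrates the parameter-estimation step.
    import numpy as np
    from scipy.optimize import curve_fit

    def damped_response(t, h0, decay, freq, phase):
        """Water-level deviation for an underdamped well response (illustrative form)."""
        return h0 * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t + phase)

    # Synthetic "measured" data standing in for pressure-transducer readings
    t = np.linspace(0, 10, 200)
    rng = np.random.default_rng(2)
    h_obs = damped_response(t, 0.5, 0.4, 0.3, 0.0) + rng.normal(0, 0.01, t.size)

    popt, pcov = curve_fit(damped_response, t, h_obs, p0=[0.4, 0.3, 0.25, 0.0])
    perr = np.sqrt(np.diag(pcov))          # rough 1-sigma parameter uncertainties
    for name, val, err in zip(["h0", "decay", "freq", "phase"], popt, perr):
        print(f"{name} = {val:.3f} ± {err:.3f}")
    ```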

  7. The effect of ethics training on students recognizing ethical violations and developing moral sensitivity.

    PubMed

    Baykara, Zehra Gocmen; Demir, Sevil Guler; Yaman, Sengul

    2015-09-01

    Moral sensitivity is a life-long cognitive ability. Nurses, whose professional purpose is "curing human beings", are expected to have highly developed moral sensitivity. The general opinion is that ethics education plays a significant role in enhancing moral sensitivity in nurses' professional behavior and in helping them distinguish ethical violations. This study was conducted as intervention research to determine the effect of ethics training on fourth-year nursing students' recognition of ethical violations experienced in the hospital and on the development of their ethical sensitivity. The study was conducted with 50 students, with 25 students each in the experiment and control groups. Students in the experiment group were provided ethics training and consultancy services. The data were collected through a data collection form consisting of questions on the socio-demographic characteristics and ethical sensitivity of the students, the Moral Sensitivity Questionnaire, and an observation form on ethical principle violations/protection in the clinical environment. The data were digitized with the SPSS for Windows 13.0 program and evaluated using numbers, percentages, the paired samples t-test, the Wilcoxon test, and the McNemar test. The total Moral Sensitivity Questionnaire pre-test score average of students in the experiment group was 93.88 ± 13.57, and their total post-test score average was 89.24 ± 15.90. The total pre-test score average of students in the control group was 91.48 ± 17.59, and their total post-test score average was 97.72 ± 19.91. The post-training ethical sensitivity of students in the experiment group increased; however, this change was not statistically significant. Furthermore, the number of ethical principle protection/violation observations and correct examples provided by students in the experiment group was higher than in the control group, and the difference was statistically significant. Written permission and ethical approval were obtained from the university where the study was conducted. Written consent was received from students accepting to participate in the study. As a result, the ethics education given to students enables them to distinguish ethical violations in a hospital and make proper observations on this issue. © The Author(s) 2014.

  8. A small punch test technique for characterizing the elastic modulus and fracture behavior of PMMA bone cement used in total joint replacement.

    PubMed

    Giddings, V L; Kurtz, S M; Jewett, C W; Foulds, J R; Edidin, A A

    2001-07-01

    Polymethylmethacrylate (PMMA) bone cement is used in total joint replacements to anchor implants to the underlying bone. Establishing and maintaining the integrity of bone cement is thus of critical importance to the long-term outcome of joint replacement surgery. The goal of the present study was to evaluate the suitability of a novel testing technique, the small punch or miniaturized disk bend test, to characterize the elastic modulus and fracture behavior of PMMA. We investigated the hypothesis that the crack initiation behavior of PMMA during the small punch test was sensitive to the test temperature. Miniature disk-shaped specimens, 0.5 mm thick and 6.4 mm in diameter, were prepared from PMMA and Simplex-P bone cement according to manufacturers' instructions. Testing was conducted at ambient and body temperatures, and the effect of test temperature on the elastic modulus and fracture behavior was statistically evaluated using analysis of variance. For both PMMA materials, the test temperature had a significant effect on elastic modulus and crack initiation behavior. At body temperature, the specimens exhibited "ductile" crack initiation, whereas at room temperature "brittle" crack initiation was observed. The small punch test was found to be a sensitive and repeatable test method for evaluating the mechanical behavior of PMMA. In light of the results of this study, future small punch testing should be conducted at body temperature.

  9. PSYCHOLOGY. Estimating the reproducibility of psychological science.

    PubMed

    2015-08-28

    Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams. Copyright © 2015, American Association for the Advancement of Science.

  10. Explorations in statistics: hypothesis tests and P values.

    PubMed

    Curran-Everett, Douglas

    2009-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of Explorations in Statistics delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what we observe in the experiment to what we expect to see if the null hypothesis is true. The P value associated with the magnitude of that test statistic answers this question: if the null hypothesis is true, what proportion of possible values of the test statistic are at least as extreme as the one I got? Although statisticians continue to stress the limitations of hypothesis tests, there are two realities we must acknowledge: hypothesis tests are ingrained within science, and the simple test of a null hypothesis can be useful. As a result, it behooves us to explore the notions of hypothesis tests, test statistics, and P values.
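
    The definition quoted above can be turned into a small simulation: generate the null distribution of a test statistic and compute the proportion of values at least as extreme as the observed one. The numbers below are arbitrary.

    ```python
    # Numerical illustration of the P value definition: simulate the null distribution
    # of a t-like statistic and ask what fraction of its values are at least as extreme
    # as the one observed in the "experiment".
    import numpy as np

    rng = np.random.default_rng(3)
    n, n_sim = 20, 100_000

    # Observed experiment: a one-sample t-like statistic on data with a true shift of 0.6
    observed = rng.normal(0.6, 1.0, n)
    t_obs = observed.mean() / (observed.std(ddof=1) / np.sqrt(n))

    # Null world: no shift; recompute the same statistic many times
    null = rng.normal(0.0, 1.0, (n_sim, n))
    null_t = null.mean(axis=1) / (null.std(axis=1, ddof=1) / np.sqrt(n))

    p_two_sided = np.mean(np.abs(null_t) >= abs(t_obs))
    print(f"t_obs = {t_obs:.2f}, simulated two-sided P = {p_two_sided:.4f}")
    ```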

  11. The effect of group bibliotherapy on the self-esteem of female students living in dormitory

    PubMed Central

    Salimi, Sepideh; Zare-Farashbandi, Firoozeh; Papi, Ahmad; Samouei, Rahele; Hassanzadeh, Akbar

    2014-01-01

    Introduction: Bibliotherapy is a supplementary, simple, inexpensive, and readily available treatment method carried out in cooperation between librarians and psychologists or doctors. The aim of this study was to investigate the effect of group bibliotherapy on the self-esteem of female students of Isfahan University of Medical Sciences living in dormitories in 2012. Materials and Methods: The present study was an interventional semi-experimental study with pre-test, post-test, and control group. The statistical population consisted of 32 female students residing in Isfahan University of Medical Sciences dormitories, who were divided randomly into case and control groups. Data were collected with the Coopersmith Self-Esteem Inventory (Cronbach's alpha: 0.85). Both groups completed the questionnaire as a pre-test. The case group received group bibliotherapy for 2 months (8 sessions of 2 hours each), while the control group received no training at all. Both groups were then assessed with a post-test after 1 month. Descriptive statistics (means and frequency distributions) and inferential statistics (independent t-test, paired t-test, and Mann-Whitney test) were used, and the data were analyzed with SPSS 20 software. Results: The findings showed that group bibliotherapy had a positive and significant effect on the general, family, professional, and total self-esteem of female students living in dormitories, but it had no effect on their social self-esteem. Conclusion: Group bibliotherapy can increase female students' self-esteem levels. In addition, conducting such studies can not only improve people's mental health but also improve their reading habits. PMID:25250355

  12. Targeting regional pediatric congenital hearing loss using a spatial scan statistic.

    PubMed

    Bush, Matthew L; Christian, Warren Jay; Bianchi, Kristin; Lester, Cathy; Schoenberg, Nancy

    2015-01-01

    Congenital hearing loss is a common problem, and timely identification and intervention are paramount for language development. Patients from rural regions may have many barriers to timely diagnosis and intervention. The purpose of this study was to examine the spatial and hospital-based distribution of failed infant hearing screening testing and pediatric congenital hearing loss throughout Kentucky. Data on live births and audiological reporting of infant hearing loss results in Kentucky from 2009 to 2011 were analyzed. The authors used spatial scan statistics to identify high-rate clusters of failed newborn screening tests and permanent congenital hearing loss (PCHL), based on the total number of live births per county. The authors conducted further analyses on PCHL and failed newborn hearing screening tests, based on birth hospital data and method of screening. The authors observed four statistically significant (p < 0.05) high-rate clusters with failed newborn hearing screenings in Kentucky, including two in the Appalachian region. Hospitals using two-stage otoacoustic emission testing demonstrated higher rates of failed screening (p = 0.009) than those using two-stage automated auditory brainstem response testing. A significant cluster of high rate of PCHL was observed in Western Kentucky. Five of the 54 birthing hospitals were found to have higher relative risk of PCHL, and two of those hospitals are located in a very rural region of Western Kentucky within the cluster. This spatial analysis in children in Kentucky has identified specific regions throughout the state with high rates of congenital hearing loss and failed newborn hearing screening tests. Further investigation regarding causative factors is warranted. This method of analysis can be useful in the setting of hearing health disparities to focus efforts on regions facing high incidence of congenital hearing loss.

  13. Clinical assessment of pitch perception.

    PubMed

    Vaerenberg, Bart; Pascu, Alexandru; Del Bo, Luca; Schauwers, Karen; De Ceulaer, Geert; Daemers, Kristin; Coene, Martine; Govaerts, Paul J

    2011-07-01

    The perception of pitch has recently gained attention. At present, clinical audiologic tests to assess this are hardly available. This article reports on the development of a clinical test using harmonic intonation (HI) and disharmonic intonation (DI). Prospective collection of normative data and pilot study in hearing-impaired subjects. Tertiary referral center. Normative data were collected from 90 normal-hearing subjects recruited from 3 different language backgrounds. The pilot study was conducted on 18 hearing-impaired individuals who were selected into 3 pathologic groups: high-frequency hearing loss (HF), low-frequency hearing loss (LF), and cochlear implant users (CI). Normative data collection and exploratory diagnostics by means of the newly constructed HI/DI tests using intonation patterns to find the just noticeable difference (JND) for pitch discrimination in low-frequency harmonic complex sounds presented in a same-different task. JND for pitch discrimination using HI/DI tests in the hearing population and pathologic groups. Normative data are presented in 5 parameter statistics and box-and-whisker plots showing median JNDs of 2 (HI) and 3 Hz (DI). The results on both tests are statistically abnormal in LF and CI subjects, whereas they are not significantly abnormal in the HF group. The HI and DI tests allow the clinical assessment of low-frequency pitch perception. The data obtained in this study define the normal zone for both tests. Preliminary results indicate possible abnormal TFS perception in some hearing-impaired subjects.

  14. Evaluation of the Thermo Scientific SureTect Listeria species assay. AOAC Performance Tested Method 071304.

    PubMed

    Cloke, Jonathan; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron

    2014-01-01

    The Thermo Scientific SureTect Listeria species Assay is a new real-time PCR assay for the detection of all species of Listeria in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods program to validate the SureTect Listeria species Assay in comparison to the reference method detailed in International Organization for Standardization 11290-1:1996 including amendment 1:2004 in a variety of foods plus plastic and stainless steel. The food matrixes validated were smoked salmon, processed cheese, fresh bagged spinach, cantaloupe, cooked prawns, cooked sliced turkey meat, cooked sliced ham, salami, pork frankfurters, and raw ground beef. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, fresh bagged spinach, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled independent laboratory study by the University of Guelph, Canada. Using probability of detection statistical analysis, a significant difference in favour of the SureTect assay was demonstrated between the SureTect and reference method for high-level spiked samples of pork frankfurters, smoked salmon, cooked prawns, stainless steel, and low-spiked samples of salami. For all other matrixes, no significant difference was seen between the two methods during the study. Inclusivity testing was conducted with 68 different isolates of Listeria species, all of which were detected by the SureTect Listeria species Assay. None of the 33 exclusivity isolates were detected by the SureTect Listeria species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation, which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay shelf life.

  15. A weighted generalized score statistic for comparison of predictive values of diagnostic tests.

    PubMed

    Kosinski, Andrzej S

    2013-03-15

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we presented, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.

  16. A weighted generalized score statistic for comparison of predictive values of diagnostic tests

    PubMed Central

    Kosinski, Andrzej S.

    2013-01-01

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations which are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we present, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic which incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, it always reduces to the score statistic in the independent samples situation, and it preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the weighted generalized score test statistic in a general GEE setting. PMID:22912343
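
    For orientation, the quantities being compared can be computed directly from a paired data set. The sketch below (with invented data) shows only the point estimates of PPV and NPV for two tests, not the weighted generalized score inference proposed in the paper.

    ```python
    # Hedged sketch: computing positive and negative predictive values for two diagnostic
    # tests applied to the same patients (paired design). Data are invented. The weighted
    # generalized score statistic described in the abstract is what would then be used
    # for formal comparison; only the point estimates are shown here.
    import numpy as np

    disease = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1])   # gold standard
    test_a  = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1])
    test_b  = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0])

    def ppv_npv(test, truth):
        ppv = truth[test == 1].mean()          # P(disease | test positive)
        npv = 1 - truth[test == 0].mean()      # P(no disease | test negative)
        return ppv, npv

    for name, t in (("A", test_a), ("B", test_b)):
        ppv, npv = ppv_npv(t, disease)
        print(f"test {name}: PPV={ppv:.2f}, NPV={npv:.2f}")
    ```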

  17. Gauging Skills of Hospital Security Personnel: a Statistically-driven, Questionnaire-based Approach.

    PubMed

    Rinkoo, Arvind Vashishta; Mishra, Shubhra; Rahesuddin; Nabi, Tauqeer; Chandra, Vidha; Chandra, Hem

    2013-01-01

    This study aims to gauge the technical and soft skills of the hospital security personnel so as to enable prioritization of their training needs. A cross-sectional questionnaire-based study was conducted in December 2011. Two separate predesigned and pretested questionnaires were used for gauging soft skills and technical skills of the security personnel. Extensive statistical analysis, including Multivariate Analysis (Pillai-Bartlett trace along with Multi-factorial ANOVA) and Post-hoc Tests (Bonferroni Test), was applied. The 143 participants performed better on the soft skills front with an average score of 6.43 and standard deviation of 1.40. The average technical skills score was 5.09 with a standard deviation of 1.44. The study revealed a need for formal hands-on training with greater emphasis on technical skills. Multivariate analysis of the available data further helped in identifying 20 security personnel who should be prioritized for soft skills training and a group of 36 security personnel who should receive maximum attention during technical skills training. This statistically driven approach can be used as a prototype by healthcare delivery institutions worldwide, after situation-specific customizations, to identify the training needs of any category of healthcare staff.

  18. Association among Working Hours, Occupational Stress, and Presenteeism among Wage Workers: Results from the Second Korean Working Conditions Survey.

    PubMed

    Jeon, Sung-Hwan; Leem, Jong-Han; Park, Shin-Goo; Heo, Yong-Seok; Lee, Bum-Joon; Moon, So-Hyun; Jung, Dal-Young; Kim, Hwan-Cheol

    2014-03-24

    The purpose of the present study was to identify the association between presenteeism and long working hours, shiftwork, and occupational stress using representative national survey data on Korean workers. We analyzed data from the second Korean Working Conditions Survey (KWCS), which was conducted in 2010, in which a total of 6,220 wage workers were analyzed. The study population included the economically active population aged above 15 years, and living in the Republic of Korea. We used the chi-squared test and multivariate logistic regression to test the statistical association between presenteeism and working hours, shiftwork, and occupational stress. Approximately 19% of the workers experienced presenteeism during the previous 12 months. Women had higher rates of presenteeism than men. We found a statistically significant dose-response relationship between working hours and presenteeism. Shift workers had a slightly higher rate of presenteeism than non-shift workers, but the difference was not statistically significant. Occupational stress, such as high job demand, lack of rewards, and inadequate social support, had a significant association with presenteeism. The present study suggests that long working hours and occupational stress are significantly related to presenteeism.

  19. Gauging Skills of Hospital Security Personnel: a Statistically-driven, Questionnaire-based Approach

    PubMed Central

    Rinkoo, Arvind Vashishta; Mishra, Shubhra; Rahesuddin; Nabi, Tauqeer; Chandra, Vidha; Chandra, Hem

    2013-01-01

    Objectives This study aims to gauge the technical and soft skills of the hospital security personnel so as to enable prioritization of their training needs. Methodology A cross-sectional questionnaire-based study was conducted in December 2011. Two separate predesigned and pretested questionnaires were used for gauging soft skills and technical skills of the security personnel. Extensive statistical analysis, including Multivariate Analysis (Pillai-Bartlett trace along with Multi-factorial ANOVA) and Post-hoc Tests (Bonferroni Test), was applied. Results The 143 participants performed better on the soft skills front with an average score of 6.43 and standard deviation of 1.40. The average technical skills score was 5.09 with a standard deviation of 1.44. The study revealed a need for formal hands-on training with greater emphasis on technical skills. Multivariate analysis of the available data further helped in identifying 20 security personnel who should be prioritized for soft skills training and a group of 36 security personnel who should receive maximum attention during technical skills training. Conclusion This statistically driven approach can be used as a prototype by healthcare delivery institutions worldwide, after situation-specific customizations, to identify the training needs of any category of healthcare staff. PMID:23559904

  20. Association among Working Hours, Occupational Stress, and Presenteeism among Wage Workers: Results from the Second Korean Working Conditions Survey

    PubMed Central

    2014-01-01

    Objectives The purpose of the present study was to identify the association between presenteeism and long working hours, shiftwork, and occupational stress using representative national survey data on Korean workers. Methods We analyzed data from the second Korean Working Conditions Survey (KWCS), which was conducted in 2010, in which a total of 6,220 wage workers were analyzed. The study population included the economically active population aged above 15 years, and living in the Republic of Korea. We used the chi-squared test and multivariate logistic regression to test the statistical association between presenteeism and working hours, shiftwork, and occupational stress. Results Approximately 19% of the workers experienced presenteeism during the previous 12 months. Women had higher rates of presenteeism than men. We found a statistically significant dose–response relationship between working hours and presenteeism. Shift workers had a slightly higher rate of presenteeism than non-shift workers, but the difference was not statistically significant. Occupational stress, such as high job demand, lack of rewards, and inadequate social support, had a significant association with presenteeism. Conclusions The present study suggests that long working hours and occupational stress are significantly related to presenteeism. PMID:24661575
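
    The analysis strategy described (bivariate chi-squared tests plus multivariate logistic regression) can be sketched as follows; the data frame is simulated and the variable names are invented, so this is not the KWCS analysis itself.

    ```python
    # Sketch of a chi-squared association test followed by a multivariate logistic
    # regression for a binary outcome such as presenteeism. Simulated data only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(4)
    n = 2000
    df = pd.DataFrame({
        "hours": rng.choice([35, 45, 55, 65], size=n),       # weekly working hours
        "shift_work": rng.integers(0, 2, size=n),
        "high_job_demand": rng.integers(0, 2, size=n),
    })
    logit_p = -2.5 + 0.02 * (df["hours"] - 40) + 0.5 * df["high_job_demand"]
    df["presenteeism"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    # Bivariate association (chi-squared), then adjusted odds ratios from logistic regression
    print("chi2 p-value:", chi2_contingency(pd.crosstab(df["shift_work"], df["presenteeism"]))[1])
    model = smf.logit("presenteeism ~ hours + shift_work + high_job_demand", data=df).fit(disp=0)
    print(np.exp(model.params))            # odds ratios
    ```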

  1. Automated training site selection for large-area remote-sensing image analysis

    NASA Astrophysics Data System (ADS)

    McCaffrey, Thomas M.; Franklin, Steven E.

    1993-11-01

    A computer program is presented to select training sites automatically from remotely sensed digital imagery. The basic ideas are to guide the image analyst through the process of selecting typical and representative areas for large-area image classifications by minimizing bias, and to provide an initial list of potential classes for which training sites are required to develop a classification scheme or to verify classification accuracy. Reducing subjectivity in training site selection is achieved by using a purely statistical selection of homogeneous sites, which can then be compared with field knowledge, aerial photography, or other remote-sensing imagery and ancillary data to arrive at a final selection of sites to be used to train the classification decision rules. The selection of the homogeneous sites uses simple tests based on the coefficient of variation, the F-statistic, and Student's t-statistic. Comparisons of site means are conducted against a linearly growing list of previously located homogeneous pixels. The program supports a common pixel-interleaved digital image format and has been tested on aerial and satellite optical imagery. The program is coded efficiently in the C programming language and was developed under AIX-Unix on an IBM RISC 6000 24-bit color workstation.
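
    The statistical screening described above can be sketched as follows; the thresholds, window size, and data are illustrative assumptions, not those of the original C program.

    ```python
    # Minimal sketch of the screening idea: accept a candidate window of pixels as a
    # homogeneous training site only if its coefficient of variation is low, then compare
    # it against an already accepted site using F- and t-tests. Thresholds are invented.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    candidate = rng.normal(120, 3, 49)        # 7x7 window of digital numbers, flattened
    existing  = rng.normal(121, 3, 49)        # previously accepted homogeneous site

    cv = candidate.std(ddof=1) / candidate.mean()
    if cv < 0.05:                                             # homogeneity screen
        f = candidate.var(ddof=1) / existing.var(ddof=1)      # F-test on variances
        p_f = 2 * min(stats.f.cdf(f, 48, 48), stats.f.sf(f, 48, 48))
        t, p_t = stats.ttest_ind(candidate, existing, equal_var=(p_f > 0.05))
        same_class = p_t > 0.05
        print(f"CV={cv:.3f}, F p={p_f:.3f}, t p={p_t:.3f}, merge with existing site: {same_class}")
    else:
        print("window rejected: not homogeneous")
    ```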

  2. Optimizing α for better statistical decisions: a case study involving the pace-of-life syndrome hypothesis: optimal α levels set to minimize Type I and II errors frequently result in different conclusions from those using α = 0.05.

    PubMed

    Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E

    2012-12-01

    Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, over different scales of biological organization. While some of the conclusions reached using optimal α were consistent to those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced probabilities of Type I and Type II errors, and ensured statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages with consistent reliance on the traditional but arbitrary α = 0.05 significance level. Copyright © 2012 WILEY Periodicals, Inc.
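
    A toy version of the optimal-α idea, under simple assumptions (two-sample z-test, known standardized effect size, equal weighting of the two error types), can be written as follows; this is not the authors' procedure.

    ```python
    # Sketch: for a two-sample z-test with an assumed standardized effect size and sample
    # size, find the alpha that minimizes the equally weighted sum of Type I and Type II
    # error probabilities, and compare with the conventional alpha = 0.05.
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize_scalar

    effect, n_per_group = 0.5, 30                    # assumed effect size and group size
    noncentrality = effect * np.sqrt(n_per_group / 2)

    def total_error(alpha):
        z_crit = norm.ppf(1 - alpha / 2)             # two-sided critical value
        power = norm.sf(z_crit - noncentrality) + norm.cdf(-z_crit - noncentrality)
        return alpha + (1 - power)                   # Type I + Type II

    res = minimize_scalar(total_error, bounds=(1e-6, 0.5), method="bounded")
    print(f"optimal alpha ≈ {res.x:.3f}, total error ≈ {res.fun:.3f} "
          f"(vs {total_error(0.05):.3f} at alpha = 0.05)")
    ```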

  3. DnaSAM: Software to perform neutrality testing for large datasets with complex null models.

    PubMed

    Eckert, Andrew J; Liechty, John D; Tearse, Brandon R; Pande, Barnaly; Neale, David B

    2010-05-01

    Patterns of DNA sequence polymorphisms can be used to understand the processes of demography and adaptation within natural populations. High-throughput generation of DNA sequence data has historically been the bottleneck with respect to data processing and experimental inference. Advances in marker technologies have largely solved this problem. Currently, the limiting step is computational, with most molecular population genetic software allowing only a gene-by-gene analysis through a graphical user interface. An easy-to-use analysis program that allows high-throughput processing of multiple sequence alignments along with the flexibility to simulate data under complex demographic scenarios is currently lacking. We introduce a new program, named DnaSAM, which allows high-throughput estimation of DNA sequence diversity and neutrality statistics from experimental data along with the ability to test those statistics via Monte Carlo coalescent simulations. These simulations are conducted using the ms program, which is able to incorporate several genetic parameters (e.g. recombination) and demographic scenarios (e.g. population bottlenecks). The output is a set of diversity and neutrality statistics with associated probability values under a user-specified null model, stored in an easy-to-manipulate text file. © 2009 Blackwell Publishing Ltd.

  4. Aronia melanocarpa Treatment and Antioxidant Status in Selected Tissues in Wistar Rats

    PubMed Central

    Krośniak, Mirosław; Sanocka, Ilona; Bartoń, Henryk; Hebda, Tomasz; Francik, Sławomir

    2014-01-01

    Aronia juice is considered to be a source of compounds with high antioxidative potential. We conducted a study on the impact of compounds in Aronia juice on oxidative stress in plasma and brain tissue. The influence of Aronia juice on oxidative stress parameters was tested using a model with a high content of fructose and unsaturated fats. The activity of enzymatic (catalase, CAT, and paraoxonase, PON) and nonenzymatic (thiol groups, SH, and protein carbonyl groups, PCG) oxidative stress markers, which indicate changes in the carbohydrate and protein profiles, was measured in brain tissue homogenates. Adding Aronia caused a statistically significant increase in CAT activity in plasma for all tested diets, while PON activity showed a statistically significant increase only in the case of the high-fat diet. In animals fed Aronia juice supplemented with carbohydrates or fat, a statistically significant increase in PON activity and a decrease in CAT activity in brain tissue were observed. In the case of the high-fat diet, an increase in the number of SH groups and a decrease in the number of PCG groups in brain tissue were observed. PMID:25057488

  5. Aronia melanocarpa treatment and antioxidant status in selected tissues in Wistar rats.

    PubMed

    Francik, Renata; Krośniak, Mirosław; Sanocka, Ilona; Bartoń, Henryk; Hebda, Tomasz; Francik, Sławomir

    2014-01-01

    Aronia juice is considered to be a source of compounds with high antioxidative potential. We conducted a study on the impact of compounds in Aronia juice on oxidative stress in plasma and brain tissue. The influence of Aronia juice on oxidative stress parameters was tested using a model with a high content of fructose and unsaturated fats. The activity of enzymatic (catalase, CAT, and paraoxonase, PON) and nonenzymatic (thiol groups, SH, and protein carbonyl groups, PCG) oxidative stress markers, which indicate changes in the carbohydrate and protein profiles, was measured in brain tissue homogenates. Adding Aronia caused a statistically significant increase in CAT activity in plasma for all tested diets, while PON activity showed a statistically significant increase only in the case of the high-fat diet. In animals fed Aronia juice supplemented with carbohydrates or fat, a statistically significant increase in PON activity and a decrease in CAT activity in brain tissue were observed. In the case of the high-fat diet, an increase in the number of SH groups and a decrease in the number of PCG groups in brain tissue were observed.

  6. A study conducted on the demographic factors of victims of violence in support and administrative departments of hospital in 2013.

    PubMed

    Keyvanara, Mahmoud; Maracy, Mohammad Reza; Ziari, Najmeh Bahman

    2015-01-01

    Violence is now regarded as a serious problem, and its complications impose heavy costs on healthcare systems. The present study aimed to investigate the correlation between selected demographic characteristics and confrontation with violence. Since there is no study on the prevalence of violence among the support and administrative staff of hospitals in Iran, this study was conducted to investigate violence in these departments. This descriptive-analytical correlation survey was carried out as a census among the support and administrative staff interacting with patients and their companions at Al-Zahra University Hospital of Isfahan in 2013. The research tool was a researcher-made questionnaire covering five domains: personal information, workplace information, verbal violence, physical violence, and other violent acts. Its validity was evaluated by expert review and its reliability by test-retest (r = 0.9). Data were analyzed in SPSS version 20 using descriptive statistical indicators and statistical tests, namely the Chi-square test for sex, marital status, and work department and the Mann-Whitney U test for age, level of education, work experience, and violence types. According to the results, 81% of subjects had been abused at least once, and the most frequently reported violence was verbal (78.4%). There was a significant correlation between sex and violence, with men being the main victims, but there was no relation between marital status or age and violence. Work experience was inversely correlated with physical violence and other violent acts. There was also an inverse correlation between physical violence and education, and security staff faced more violence than others. As a high prevalence of violence was found, especially among security staff and personnel with less education and work experience, actions are suggested such as providing education on patient accompaniment and visiting conditions, holding training workshops on confronting violence and communicating appropriately with patients and families, and assigning experienced and patient staff to interact with clients.

  7. Cluster Detection Tests in Spatial Epidemiology: A Global Indicator for Performance Assessment

    PubMed Central

    Guttmann, Aline; Li, Xinran; Feschet, Fabien; Gaudart, Jean; Demongeot, Jacques; Boire, Jean-Yves; Ouchchane, Lemlih

    2015-01-01

    In disease cluster detection, local cluster detection tests (CDTs) are in common use. These methods aim both to locate likely clusters and to test their statistical significance. New or improved CDTs are regularly proposed to epidemiologists and must be subjected to performance assessment. Because location accuracy has to be considered, performance assessment goes beyond the raw estimation of Type I or II errors. As no consensus exists for performance evaluations, heterogeneous methods are used, and studies are therefore rarely comparable. A global indicator of performance, assessing both spatial accuracy and usual power, would facilitate the exploration of CDT behaviour and help between-study comparisons. The Tanimoto coefficient (TC) is a well-known measure of similarity that can assess location accuracy, but only for a single detected cluster, whereas in a simulation study performance is measured over many tests. From the TC, we propose two statistics, the averaged TC and the cumulated TC, as indicators able to provide a global overview of CDT performance for both usual power and location accuracy. We demonstrate the properties of these two indicators and the superiority of the cumulated TC for assessing performance. We applied these indicators to conduct a systematic spatial assessment displayed through performance maps. PMID:26086911
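
    The Tanimoto coefficient itself is straightforward to compute for a single detected cluster; a minimal sketch with placeholder spatial units:

    ```python
    # Sketch of the Tanimoto coefficient as a location-accuracy measure: overlap between
    # the set of spatial units in the true cluster and in the detected cluster.
    # Unit identifiers are arbitrary placeholders.
    def tanimoto(true_cluster, detected_cluster):
        a, b = set(true_cluster), set(detected_cluster)
        if not a and not b:
            return 1.0
        return len(a & b) / len(a | b)            # |intersection| / |union|

    true_units = {"u3", "u4", "u5", "u8"}
    detected_units = {"u4", "u5", "u8", "u9", "u12"}
    print("Tanimoto coefficient:", round(tanimoto(true_units, detected_units), 3))
    # Averaging (or cumulating) this value over many simulated datasets gives the kind of
    # global indicator proposed in the abstract.
    ```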

  8. A new method to address verification bias in studies of clinical screening tests: cervical cancer screening assays as an example.

    PubMed

    Xue, Xiaonan; Kim, Mimi Y; Castle, Philip E; Strickler, Howard D

    2014-03-01

    Studies to evaluate clinical screening tests often face the problem that the "gold standard" diagnostic approach is costly and/or invasive. It is therefore common to verify only a subset of negative screening tests using the gold standard method. However, undersampling the screen negatives can lead to substantial overestimation of the sensitivity and underestimation of the specificity of the diagnostic test. Our objective was to develop a simple and accurate statistical method to address this "verification bias." We developed a weighted generalized estimating equation approach to estimate, in a single model, the accuracy (e.g., sensitivity/specificity) of multiple assays and simultaneously compare results between assays while addressing verification bias. This approach can be implemented using standard statistical software. Simulations were conducted to assess the proposed method. An example is provided using a cervical cancer screening trial that compared the accuracy of human papillomavirus and Pap tests, with histologic data as the gold standard. The proposed approach performed well in estimating and comparing the accuracy of multiple assays in the presence of verification bias. The proposed approach is an easy-to-apply and accurate method for addressing verification bias in studies of multiple screening methods. Copyright © 2014 Elsevier Inc. All rights reserved.
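
    The authors' method is a weighted generalized estimating equation model; as a much simpler illustration of the underlying weighting idea only, the sketch below corrects sensitivity and specificity for verification bias using inverse-probability-of-verification weights on simulated data. All numbers are hypothetical.

    ```python
    # Simplified sketch of inverse-probability-of-verification weighting when
    # only a fraction of screen-negatives receive the gold standard. This is
    # not the authors' weighted GEE model, only an illustration of the
    # weighting idea; all quantities are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    disease = rng.random(n) < 0.05                     # gold-standard status
    screen = np.where(disease, rng.random(n) < 0.85,   # imperfect screening test
                      rng.random(n) < 0.10)

    # Verify all screen-positives, but only 10% of screen-negatives.
    p_verify = np.where(screen, 1.0, 0.10)
    verified = rng.random(n) < p_verify
    w = 1.0 / p_verify                                 # inverse-probability weights

    v, d, s = verified, disease, screen
    naive_sens = (s & d & v).sum() / (d & v).sum()           # biased upward
    ipw_sens = (w * (s & d & v)).sum() / (w * (d & v)).sum()
    ipw_spec = (w * (~s & ~d & v)).sum() / (w * (~d & v)).sum()
    print(naive_sens, ipw_sens, ipw_spec)
    ```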

  9. Examining the effect of the computer-based educational package on quality of life and severity of hypogonadism symptoms in males.

    PubMed

    Afsharnia, Elahe; Pakgohar, Minoo; Khosravi, Shahla; Haghani, Hamid

    2018-06-01

    The objective of this study was to determine the effect of the computer-based educational package on men's QoL and the severity of their hypogonadism symptoms. A quasi-experimental study was conducted on 80 male employees. The data collection tool included the 'Aging Male Symptoms' (AMS) and 'Short Form-36' (SF36) questionnaires. Four sessions were held for the intervention group over a period of 4 weeks. Two months after training, QoL and the severity of hypogonadism symptoms were measured in both the intervention and control groups. The data were analyzed with SPSS 22 software and statistical tests such as χ², the independent t-test, Fisher's exact test, and paired t-tests. Statistically significant changes between baseline and 2 months after the training were observed in the intervention group in the QoL score for the overall physical-psychological health dimensions and all their domains, except for the three domains of emotional role, social function, and pain. Furthermore, the paired t-tests showed significant differences between baseline and 2 months after the training in all the domains and in the overall hypogonadism score in the intervention group. Based on our findings, the computer-based educational package has a positive effect on QoL and on reducing hypogonadism symptoms.

  10. [Personality traits of perpetrators of various types of crimes].

    PubMed

    Skoczek, Adrianna; Gancarczyk, Urszula; Prochownik, Paweł; Sobień, Bartosz; Podolec, Piotr; Komar, Monika

    2018-01-01

    This study was conducted in Nowy Wiśnicz with prisoners sentenced for murder, sex crimes, theft and robbery, maintenance offences, and bullying. A Polish adaptation of the PAI test, prepared by the author of the study, was used. The study results and their statistical analysis showed characteristic personality features of the particular criminal groups; these can be used in the rehabilitation of disturbed people and addicts and can become the basis for preparing actions to reduce the frequency of committing crimes.

  11. Prevalence of orthorexia nervosa in resident medical doctors in the faculty of medicine (Ankara, Turkey).

    PubMed

    Bağci Bosi, A Tülay; Camur, Derya; Güler, Cağatay

    2007-11-01

    This study was carried out to "identify highly sensitive behavior on healthy nutrition (orthorexia nervosa-ON)" in resident medical doctors (MDs) in the Faculty of Medicine. Diagnosis of ON was based on the presence of a disorder with obsessive-compulsive personality features. The study is a cross-sectional survey that reached the entire population of 318 MDs. The ORTO-15 test was used to propose a diagnostic procedure and to estimate the prevalence of ON. Subjects who scored below 40 on the ORTO-15 test were considered to have ON. The Chi-square test, ANOVA (univariate), and logistic regression were used for the analyses of the data. The mean ORTO-15 score of the participants was 39.8+/-0.22, with no statistical difference between women and men. A total of 45.5% of the resident MDs involved in the research scored below 40 on the ORTO-15 test. Those who do their food shopping themselves, skip a meal in favor of salad/fruit, care about the quality of what they eat, think that eating out is healthy, look at the contents of what they eat, and consider the content of food important when selecting a product had lower average ORTO-15 scores, and the differences among the groups were statistically significant. Food selection of 20.1% of the male participants and 38.9% of the female participants among the resident MDs was influenced by programs on nutrition/health in the mass media; the difference between the groups was statistically significant (p<0.05). Female medical doctors were more careful than men about their physical appearance and weight control and consumed less caloric food, which was statistically significant. Since those who exhibit "healthy fanatic" eating habits may be at risk of ON in the future, it would be useful to conduct studies that identify the prevalence of ON in the general public.

  12. Performance Comparison Between a Head-Worn Display System and a Head-Up Display for Low Visibility Commercial Operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Barnes, James R.; Williams, Steven P.; Jones, Denise R.; Harrison, Stephanie J.; Bailey, Randall E.

    2014-01-01

    Research, development, test, and evaluation of flight deck interface technologies is being conducted by NASA to proactively identify, develop, and mature tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System (NextGen). Under the Vehicle Systems Safety Technologies (VSST) project in the Aviation Safety Program, one specific area of research is the use of small Head-Worn Displays (HWDs) as an equivalent display to a Head-Up Display (HUD). Title 14 of the US Code of Federal Regulations (CFR) 91.175 describes a possible operational credit which can be obtained with airplane equipage of a HUD or an "equivalent" display combined with Enhanced Vision (EV). If successful, a HWD may provide the same safety and operational benefits as current HUD-equipped aircraft but for significantly more aircraft in which HUD installation is neither practical nor possible. A simulation experiment was conducted to evaluate whether the HWD, coupled with a head-tracker, can provide an equivalent display to a HUD. Comparative testing was performed in the Research Flight Deck (RFD) Cockpit Motion Facility (CMF) full-mission, motion-based simulator at NASA Langley. Twelve airline crews conducted approach and landing, taxi, and departure operations during low visibility operations (1000' Runway Visual Range (RVR), 300' RVR) at Memphis International Airport (Federal Aviation Administration (FAA) identifier: KMEM). The results showed that there were no statistical differences in the crews' performance in terms of touchdown and takeoff. Further, there were no statistical differences between the HUD and HWD in pilots' responses to questionnaires.

  13. Impact of genotyping errors on statistical power of association tests in genomic analyses: A case study

    PubMed Central

    Hou, Lin; Sun, Ning; Mane, Shrikant; Sayward, Fred; Rajeevan, Nallakkandi; Cheung, Kei-Hoi; Cho, Kelly; Pyarajan, Saiju; Aslan, Mihaela; Miller, Perry; Harvey, Philip D.; Gaziano, J. Michael; Concato, John; Zhao, Hongyu

    2017-01-01

    A key step in genomic studies is to assess high throughput measurements across millions of markers for each participant’s DNA, either using microarrays or sequencing techniques. Accurate genotype calling is essential for downstream statistical analysis of genotype-phenotype associations, and next generation sequencing (NGS) has recently become a more common approach in genomic studies. How the accuracy of variant calling in NGS-based studies affects downstream association analysis has not, however, been studied using empirical data in which both microarrays and NGS were available. In this article, we investigate the impact of variant calling errors on the statistical power to identify associations between single nucleotides and disease, and on associations between multiple rare variants and disease. Both differential and nondifferential genotyping errors are considered. Our results show that the power of burden tests for rare variants is strongly influenced by the specificity in variant calling, but is rather robust with regard to sensitivity. By using the variant calling accuracies estimated from a substudy of a Cooperative Studies Program project conducted by the Department of Veterans Affairs, we show that the power of association tests is mostly retained with commonly adopted variant calling pipelines. An R package, GWAS.PC, is provided to accommodate power analysis that takes account of genotyping errors (http://zhaocenter.org/software/). PMID:28019059
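
    As an illustration of the general idea (not the paper's pipeline or its R package), the sketch below simulates how nondifferential genotype-calling errors can erode the power of a simple case-control association test at a single variant; all parameters are arbitrary.

    ```python
    # Toy power simulation: simulate carrier status for cases and controls,
    # apply nondifferential calling errors, and test association with a
    # chi-square test. Illustrative only; not the paper's analysis pipeline.
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(1)

    def miscall(carrier, sens, spec):
        """Apply nondifferential calling errors to a boolean carrier vector."""
        return np.where(carrier, rng.random(carrier.size) < sens,
                        rng.random(carrier.size) >= spec)

    def power(n=2000, p_ctrl=0.02, odds_ratio=2.0, sens=0.95, spec=0.98,
              n_sim=500, alpha=0.05):
        odds = odds_ratio * p_ctrl / (1 - p_ctrl)
        p_case = odds / (1 + odds)
        hits = 0
        for _ in range(n_sim):
            case = miscall(rng.random(n) < p_case, sens, spec)
            ctrl = miscall(rng.random(n) < p_ctrl, sens, spec)
            table = [[case.sum(), n - case.sum()], [ctrl.sum(), n - ctrl.sum()]]
            hits += chi2_contingency(table)[1] < alpha
        return hits / n_sim

    print(power())                        # with calling errors
    print(power(sens=1.0, spec=1.0))      # error-free reference
    print(power(sens=0.80, spec=1.0))     # poor sensitivity only
    print(power(sens=1.0, spec=0.98))     # poor specificity only
    ```

    With settings like these, reduced specificity tends to cost more power than reduced sensitivity, which is broadly in line with the qualitative finding reported above for rare-variant burden tests.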

  14. Results of PBX 9501 and PBX 9502 Round-Robin Quasi-Static Tension Tests from JOWOG-9/39 Focused Exchange.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, D. G.

    2002-01-01

    A round-robin study was conducted with the participation of three laboratory facilities: Los Alamos National Laboratory (LANL), BWXT Pantex Plant (PX), and Lawrence Livermore National Laboratory (LLNL). The study involved the machining and quasi-static tension testing of two plastic-bonded high explosive (PBX) composites, PBX 9501 and PBX 9502. Nine tensile specimens for each type of PBX were to be machined at each of the three facilities; 3 of these specimens were to be sent to each of the participating materials testing facilities for tensile testing. The resultant data were analyzed to look for trends associated with specimen machining location and/or trends associated with materials testing location. The analysis provides interesting insights into the variability and statistical nature of mechanical properties testing on PBX composites. Caution is warranted when results are compared/exchanged between testing facilities.

  15. [The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].

    PubMed

    Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel

    2017-01-01

    The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference is the process of drawing conclusions from tests performed on data obtained from a sample of a population. Statistical tests are used in order to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was obtained. However, choosing the appropriate statistical test generally poses a challenge for novice researchers. To choose the statistical test it is necessary to take into account three aspects: the research design, the number of measurements, and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.
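
    As a minimal sketch of the parametric/nonparametric branch of such a flow chart (two independent groups, continuous outcome), the code below screens each group for normality and then picks Student's t test or the Mann-Whitney U test. Real test selection should also weigh the research design, the number of measurements, and the measurement scale, as the paper stresses; the data are illustrative.

    ```python
    # Crude decision helper for two independent groups: screen each group for
    # normality (Shapiro-Wilk), then choose Student's t test or the
    # Mann-Whitney U test. Illustrative only; not a substitute for
    # considering design, sample size, and measurement scale.
    from scipy import stats

    def compare_two_groups(a, b, alpha=0.05):
        normal = (stats.shapiro(a)[1] > alpha and stats.shapiro(b)[1] > alpha)
        if normal:
            return "t-test", stats.ttest_ind(a, b)
        return "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided")

    group1 = [3.1, 2.9, 3.4, 3.8, 3.0, 3.6, 2.7, 3.3]   # illustrative data
    group2 = [2.4, 2.8, 2.6, 3.0, 2.2, 2.9, 2.5, 2.7]
    print(compare_two_groups(group1, group2))
    ```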

  16. The Impact of 200 Meter Breast Stroke Swimming Activity on Blood Glucose Level of The Student

    NASA Astrophysics Data System (ADS)

    Syahrastani, S.; Badri, H.; Argantos, A.; Yuniarti, E.

    2018-04-01

    Blood sugar in the human body is very important, as it is a source of energy for humans. Fasting blood sugar needs to be known, because it is an indicator of a person's health. This research aimed to determine the effect of physical activity on fasting blood sugar. This is a quasi-experimental study conducted on 15 students of FIK UNP Padang who had passed the swimming course. Blood was taken before and after a physical exercise activity of 200 meter breaststroke swimming. Data were collected with tests and measurements and analyzed using inferential statistics with the t test, at α = 0.05. The results show a highly significant effect on blood sugar levels after the 200 meter breaststroke swimming activity (p < 0.05).

  17. Enhancing self-report assessment of PTSD: development of an item bank.

    PubMed

    Del Vecchio, Nicole; Elwy, A Rani; Smith, Eric; Bottonari, Kathryn A; Eisen, Susan V

    2011-04-01

    The authors report results of work to enhance self-report posttraumatic stress disorder (PTSD) assessment by developing an item bank for use in a computer-adapted test. Computer-adapted tests have great potential to decrease the burden of PTSD assessment and outcomes monitoring. The authors conducted a systematic literature review of PTSD instruments, created a database of items, performed qualitative review and readability analysis, and conducted cognitive interviews with veterans diagnosed with PTSD. The systematic review yielded 480 studies in which 41 PTSD instruments comprising 993 items met inclusion criteria. The final PTSD item bank includes 104 items representing each of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV; American Psychiatric Association [APA], 1994), PTSD symptom clusters (reexperiencing, avoidance, and hyperarousal), and 3 additional subdomains (depersonalization, guilt, and sexual problems) that expanded the assessment item pool. Copyright © 2011 International Society for Traumatic Stress Studies.

  18. Correlation between the Quality of Attention and Cognitive Competence with Motor Action in Stroke Patients.

    PubMed

    Arsic, S; Konstantinovic, Lj; Eminovic, F; Pavlovic, D; Popovic, M B; Arsic, V

    2015-01-01

    It is considered that cognitive function and attention could affect walking, motion control, and proper conduct during the walk. The aims were to determine whether there is a difference in the quality of attention and cognitive ability between stroke patients and persons without neurological damage of similar age and education, and to determine whether the link between attention and cognition affects motor skills. The sample consisted of 50 stroke patients with hemiparesis, involved in the process of rehabilitation, and 50 randomly chosen persons without neurological damage. The survey used the following tests: the Trail Making Test (TMT A&B) for assessing the flexibility of attention; the Mini-Mental State Examination (MMSE) for cognitive status; the Functional Ambulation Category (FAC) test to assess functional status and the parameters of walking (speed, stride frequency, and stride length); and the STEP test for assessing precision of movement and balance. In stroke patients, the relationship between age and performance on the MMSE was marginally significant. The relation between TMT A&B performance and age did not reach statistical significance, whereas the relation between MMSE performance and education did. In stroke patients, performance on the MMSE was correlated with stride frequency and stride length during walking. The quality of cognitive function and attention is associated with motor skills but differs between stroke patients and people without neurological damage of similar age. The significance of this correlation can inform research in neurorehabilitation, improve the quality of medical rehabilitation, and contribute to the efficient recovery of these patients.

  19. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography.

    PubMed

    Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-06-01

    Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but has never been validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is a viable approach for EIT images of neural activity.
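
    A common non-parametric validation of this kind relies on permuting condition labels and using the maximum statistic across voxels to control the family-wise error rate. The sketch below shows that idea on random placeholder data; it is not the authors' EIT pipeline, and it treats the two conditions as independent groups for simplicity.

    ```python
    # Toy permutation (max-statistic) correction across many voxels: permute
    # group labels, recompute a t statistic per voxel, and threshold observed
    # statistics against the null distribution of the maximum. Data are random
    # placeholders, not EIT reconstructions.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(2)
    n_subj, n_vox = 22, 500
    active = rng.normal(0.0, 1.0, (n_subj, n_vox))
    active[:, :10] += 1.5                     # a small "activated" region
    control = rng.normal(0.0, 1.0, (n_subj, n_vox))

    obs_t = ttest_ind(active, control, axis=0).statistic

    data = np.vstack([active, control])
    max_null = []
    for _ in range(1000):
        perm = rng.permutation(data.shape[0])
        a, b = data[perm[:n_subj]], data[perm[n_subj:]]
        max_null.append(np.abs(ttest_ind(a, b, axis=0).statistic).max())

    threshold = np.quantile(max_null, 0.95)   # FWE-corrected p < 0.05
    print("significant voxels:", np.flatnonzero(np.abs(obs_t) > threshold))
    ```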

  20. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography

    PubMed Central

    Packham, B; Barnes, G; dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-01-01

    Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but has never been validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is a viable approach for EIT images of neural activity. PMID:27203477

  1. Practicality of Elementary Statistics Module Based on CTL Completed by Instructions on Using Software R

    NASA Astrophysics Data System (ADS)

    Delyana, H.; Rismen, S.; Handayani, S.

    2018-04-01

    This research is development research using the 4-D design model (define, design, develop, and disseminate). The define stage comprised the following needs analyses: syllabus analysis, textbook analysis, student characteristics analysis, and literature analysis. The textbook analysis showed that students still had difficulty understanding the two textbooks they are required to own, that the form of presentation did not yet help students learn independently and discover concepts on their own, and that the textbooks were not equipped with guidance on data processing using the software R. The developed module was considered valid by the experts. Field trials were then conducted to determine its practicality and effectiveness. The trial was conducted with 4 randomly selected students of the Mathematics Education Study Program of STKIP PGRI who had not yet taken the Basic Statistics course. The practical aspects considered were ease of use, time efficiency, ease of interpretation, and equivalence; the practicality scores on these aspects were 3.7, 3.79, 3.7, and 3.78, respectively. Based on the trial results, students considered the module very practical for use in learning. This means that the developed module can be used by students in Elementary Statistics learning.

  2. AGR-1 Thermocouple Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeff Einerson

    2012-05-01

    This report documents an effort to analyze measured and simulated data obtained in the Advanced Gas Reactor (AGR) fuel irradiation test program conducted in the INL's Advanced Test Reactor (ATR) to support the Next Generation Nuclear Plant (NGNP) R&D program. The work follows up on a previous study (Pham and Einerson, 2010), in which statistical analysis methods were applied for AGR-1 thermocouple data qualification. The present work exercises the idea that, while recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, results of the numerical simulations can be used in combination with the statistical analysis methods to further improve qualification of measured data. Additionally, the combined analysis of measured and simulation data can generate insights about simulation model uncertainty that can be useful for model improvement. This report also describes an experimental control procedure to maintain fuel target temperature in the future AGR tests using regression relationships that include simulation results. The report is organized into four chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program, AGR-1 test configuration and test procedure, overview of AGR-1 measured data, and overview of physics and thermal simulation, including modeling assumptions and uncertainties. A brief summary of statistical analysis methods developed in (Pham and Einerson 2010) for AGR-1 measured data qualification within NGNP Data Management and Analysis System (NDMAS) is also included for completeness. Chapters 2-3 describe and discuss cases, in which the combined use of experimental and simulation data is realized. A set of issues associated with measurement and modeling uncertainties resulted from the combined analysis are identified. This includes demonstration that such a combined analysis led to important insights for reducing uncertainty in presentation of AGR-1 measured data (Chapter 2) and interpretation of simulation results (Chapter 3). The statistics-based simulation-aided experimental control procedure described for the future AGR tests is developed and demonstrated in Chapter 4. The procedure for controlling the target fuel temperature (capsule peak or average) is based on regression functions of thermocouple readings and other relevant parameters and accounting for possible changes in both physical and thermal conditions and in instrument performance.

  3. New heterogeneous test statistics for the unbalanced fixed-effect nested design.

    PubMed

    Guo, Jiin-Huarng; Billard, L; Luh, Wei-Ming

    2011-05-01

    When the underlying variances are unknown and/or unequal, using the conventional F test is problematic in the two-factor hierarchical data structure. Prompted by the approximate test statistics (Welch and Alexander-Govern methods), the authors develop four new heterogeneous test statistics to test factor A and factor B nested within A for the unbalanced fixed-effect two-stage nested design under variance heterogeneity. The actual significance levels and statistical power of the test statistics were compared in a simulation study. The results show that the proposed procedures maintain better Type I error rate control and have greater statistical power than those obtained by the conventional F test in various conditions. Therefore, the proposed test statistics are recommended in terms of robustness and easy implementation. ©2010 The British Psychological Society.
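
    For orientation, the sketch below implements the classic one-way Welch test from its standard formulas; the paper's statistics extend this weighting idea (group weights n_j / s_j^2) to the two-factor nested design, which is not reproduced here. The example groups are arbitrary.

    ```python
    # One-way Welch test for means under unequal variances, written from the
    # standard formulas (weights w_j = n_j / s_j^2). Illustrative sketch only;
    # the nested-design statistics in the paper are more involved.
    import numpy as np
    from scipy.stats import f as f_dist

    def welch_anova(*groups):
        k = len(groups)
        n = np.array([len(g) for g in groups], dtype=float)
        m = np.array([np.mean(g) for g in groups])
        v = np.array([np.var(g, ddof=1) for g in groups])
        w = n / v
        mw = np.sum(w * m) / np.sum(w)                 # weighted grand mean
        tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
        stat = (np.sum(w * (m - mw) ** 2) / (k - 1)) / (
            1 + 2 * (k - 2) / (k ** 2 - 1) * tmp)
        df1, df2 = k - 1, (k ** 2 - 1) / (3 * tmp)
        return stat, df1, df2, f_dist.sf(stat, df1, df2)

    g1 = [23, 25, 21, 30, 28, 26]                      # unbalanced example groups
    g2 = [31, 33, 35, 30]
    g3 = [22, 24, 20, 27, 25, 23, 26, 21]
    print(welch_anova(g1, g2, g3))
    ```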

  4. Effect of Streptokinase on Reperfusion After Acute Myocardial Infarction and Its Complications: An Ex-Post Facto Study

    PubMed Central

    Taheri, Leila; Zargham-Boroujeni, Ali; Jahromi, Marzieh Kargar; Charkhandaz, Maryam; Hojat, Mohsen

    2015-01-01

    Introduction: Emergency treatment of patients with acute myocardial infarction is very important. In Iran, streptokinase is often used as the only clot-busting medication. The purpose of using streptokinase is to revive the ischemic heart tissue, although it has dangerous complications as well. Therefore, the present study was designed and conducted to determine the effect of streptokinase on reperfusion after acute myocardial infarction and on its complications. Materials and Methods: This is an ex-post facto study. The study population included patients suffering from acute myocardial infarction. The sample size was 300 patients, and the 2 groups were matched on the variables of age, sex, underlying disease, and frequency and area of MI. Data were collected with a researcher-made questionnaire whose face and content validity was accepted by 10 expert researchers; its reliability was established with Spearman's test (r=0.85) using the test-retest method. Data were analyzed with SPSS software, version 12. Findings: Mean EF in the SK group was 46.15±8.11 and in the control group 43.11±12.57. A significant relationship was seen between SK and the occurrence of arrhythmia and between SK and improved EF on reperfusion by the chi-square test (p=0.028 and p=0.020, respectively). The most common arrhythmia in the SK group was ventricular tachycardia (20.7%). A statistically significant relation between SK and mortality was found by the chi-square test (p=0.001), but no statistically significant relation was found between SK and the incidence of pulmonary edema (p=0.071). Conclusions: Nurses in the CCU should be aware of SK complications such as hypotension, bleeding, and arrhythmias. A comparison of SK and tissue plasminogen activator with respect to reperfusion and complications is proposed. PMID:25946921

  5. Safe summers: Adapting evidence-based injury prevention into a summer curriculum.

    PubMed

    Schaeffer, Melody; Cioni, Claire; Kozma, Nicole; Rains, Catherine; Todd, Greta

    2017-11-01

    Unintentional injury is the leading cause of death for those aged 0 years to 19 years. St. Louis Children's Hospital created Safety Land, a comprehensive injury prevention intervention which is provided during summer months. This program uses a life-size board game to teach safety education to children aged 5 years to 11 years. The purpose of this study was to evaluate the effect of Safety Land on safety knowledge in children who participated in the intervention. St. Louis Children's Hospital identified ZIP codes with the highest use of the emergency room for injury. Daycares and summer camps within these ZIP codes were targeted for the Safety Land intervention. A multiple choice pretest and posttest survey was designed to measure knowledge change within program participants. Students were selected for testing based on site availability. Within these sites, a convenience sample of children was selected for pretesting and posttesting. Safety Land staff conducted the pretest a week before the intervention, and the posttest was administered the week after the intervention. A total knowledge score was calculated to determine overall knowledge change. Descriptive statistics and independent-samples t tests were conducted to determine statistical significance of change in knowledge (p < 0.05) for each question. Between May 2014 and August 2016, 3,866 children participated in Safety Land. A total of 310 children completed the pretest and 274 completed the posttest. Mean test scores increased from 66.7% to 85.1%, and the independent-samples t test of the total knowledge score was significant (p < 0.05) between pretest and posttest values. Findings suggest that this intervention is effective in increasing the knowledge of safety behaviors for children receiving the curriculum during the summer months. Further research should focus on long-term behavior changes in these youth.

  6. Streaking into middle school science: The Dell Streak pilot project

    NASA Astrophysics Data System (ADS)

    Austin, Susan Eudy

    A case study was conducted implementing the Dell Streak, a seven-inch Android device, in the eighth-grade science classes of one teacher at a rural middle school in the Piedmont region of North Carolina. The purpose of the study was to determine whether the use of the Dell Streaks would increase student achievement on standardized subject testing, whether the Streak could be used as an effective instructional tool, and whether it could be considered an effective instructional resource for reviewing and preparing for the science assessments. A mixed-method research design was used to analyze both quantitative and qualitative results and to determine whether, with the Dell Streaks in use, 1. instructional strategies would change, 2. the device would be an effective instructional tool, and 3. a comparison of the students' test scores and benchmark assessment scores would show a statistically significant difference. An ANOVA determined that a statistically significant difference had occurred, and a post hoc analysis was conducted to identify where the difference occurred. Finally, a t-test determined that there was no statistically significant difference between the mean End-of-Grade test and four quarterly benchmark scores of the control and experimental groups. Qualitative research methods were used to determine whether the Streaks were an effective instructional tool. Classroom observations showed that the teacher's teaching style changed and new instructional strategies were implemented throughout the pilot project. Students completed a questionnaire three times during the pilot project; the results revealed what they liked about using the devices and the challenges they faced. The teacher completed a reflective questionnaire throughout the pilot project and offered valuable reflections on the use of the devices in an educational setting. The reflection data supporting the case study were drawn from the teacher's statements regarding the change in instructional delivery resulting from the students' use of the device. The results section of the study elaborates on these findings, and the study's recommendations address further use of the Streak technology in the classroom.

  7. 'Test n Treat (TnT)': a cluster-randomised feasibility trial of frequent, rapid-testing and same-day, on-site treatment to reduce rates of chlamydia in high-risk further education college students: statistical analysis plan.

    PubMed

    Phillips, Rachel; Oakeshott, Pippa; Kerry-Barnard, Sarah; Reid, Fiona

    2018-06-05

    There are high rates of sexually transmitted infections (STIs) in ethnically diverse, sexually active students aged 16-24 years attending London further education (FE) colleges. However, uptake of chlamydia screening remains low. The TnT study aims to assess the feasibility of conducting a future trial in FE colleges to investigate whether frequent, rapid, on-site testing and treatment (TnT) reduces chlamydia rates. This article presents the statistical analysis plan for the main study publication as approved and signed off by the Trial Management Group prior to the first data extraction for the final report. TnT is a cluster-randomised feasibility trial conducted over 7 months with parallel qualitative and economic assessments. Colleges will be randomly allocated into the intervention (TnT) or the control group (no TnT). Six FE colleges in London will be included. At each college for 2 days, 80 consecutive sexually active students aged 16-24 years (total 480 students across all six colleges) will be recruited from public areas and asked to provide baseline samples. One and 4 months after recruitment, intervention colleges will be visited on two consecutive days by the TnT team, where participating students will be texted and invited to come for same-day, on-site, rapid chlamydia testing and, if positive, treatment. Participants in the control colleges will receive 'thank you' texts 1 and 4 months after recruitment. Seven months after recruitment, participants from both groups will be invited to complete questionnaires and provide samples for TnT. All samples will be tested, and same-day treatment offered to participants with positive results. Key feasibility outcomes include: recruitment rates, testing and treatment uptake rates (at 1 and 4 months) and follow-up rates (at 7 months). ISRCTN 58038795. Registered on 31 August 2016.

  8. Periodontal disease and carotid atherosclerosis: A meta-analysis of 17,330 participants.

    PubMed

    Zeng, Xian-Tao; Leng, Wei-Dong; Lam, Yat-Yin; Yan, Bryan P; Wei, Xue-Mei; Weng, Hong; Kwong, Joey S W

    2016-01-15

    The association between periodontal disease and carotid atherosclerosis has been evaluated primarily in single-center studies, and whether periodontal disease is an independent risk factor for carotid atherosclerosis remains uncertain. This meta-analysis aimed to evaluate the association between periodontal disease and carotid atherosclerosis. We searched PubMed and Embase for relevant observational studies up to February 20, 2015. Two authors independently extracted data from included studies, and odds ratios (ORs) with 95% confidence intervals (CIs) were calculated for overall and subgroup meta-analyses. Statistical heterogeneity was assessed by the chi-squared test (P<0.1 for statistical significance) and quantified by the I² statistic. Data analysis was conducted using the Comprehensive Meta-Analysis (CMA) software. Fifteen observational studies involving 17,330 participants were included in the meta-analysis. The overall pooled result showed that periodontal disease was associated with carotid atherosclerosis (OR: 1.27, 95% CI: 1.14-1.41; P<0.001), but statistical heterogeneity was substantial (I² = 78.90%). Subgroup analysis of studies adjusted for smoking and diabetes mellitus showed borderline significance (OR: 1.08; 95% CI: 1.00-1.18; P=0.05). Sensitivity and cumulative analyses both indicated that our results were robust. Findings of our meta-analysis indicated that the presence of periodontal disease was associated with carotid atherosclerosis; however, further large-scale, well-conducted clinical studies are needed to explore the precise risk of developing carotid atherosclerosis in patients with periodontal disease. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
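
    As an illustration of the pooling and heterogeneity computations behind such a meta-analysis (the authors used the CMA software; this is not that software), the sketch below pools hypothetical study odds ratios by fixed-effect inverse-variance weighting and computes Cochran's Q and I².

    ```python
    # Fixed-effect inverse-variance pooling of study odds ratios, with
    # Cochran's Q and I^2. The study-level ORs and CIs are invented
    # placeholders, not the studies included in the meta-analysis.
    import numpy as np
    from scipy.stats import chi2

    or_ci = [(1.5, 1.1, 2.0), (1.2, 0.9, 1.6), (1.4, 1.0, 2.0), (1.1, 0.8, 1.5)]
    log_or = np.log([x[0] for x in or_ci])
    se = (np.log([x[2] for x in or_ci]) - np.log([x[1] for x in or_ci])) / (2 * 1.96)
    w = 1 / se ** 2                                   # inverse-variance weights

    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    q = np.sum(w * (log_or - pooled) ** 2)            # Cochran's Q
    df = len(or_ci) - 1
    i2 = max(0.0, (q - df) / q) * 100                 # I^2 in percent

    print(f"pooled OR = {np.exp(pooled):.2f} "
          f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f}-"
          f"{np.exp(pooled + 1.96 * pooled_se):.2f})")
    print(f"Q = {q:.2f}, p = {chi2.sf(q, df):.3f}, I^2 = {i2:.1f}%")
    ```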

  9. [Analysis on willingness to pay for HIV antibody saliva rapid test and related factors].

    PubMed

    Li, Junjie; Huo, Junli; Cui, Wenqing; Zhang, Xiujie; Hu, Yi; Su, Xingfang; Zhang, Wanyue; Li, Youfang; Shi, Yuhua; Jia, Manhong

    2015-02-01

    To understand the willingness to pay for an HIV antibody saliva rapid test and its influencing factors among people seeking counseling and HIV testing, STD clinic patients, university students, migrants, female sex workers (FSWs), men who have sex with men (MSM), and injecting drug users (IDUs). An anonymous questionnaire survey was conducted among 511 subjects in the 7 groups, selected by different sampling methods, and 509 valid questionnaires were collected. The majority of subjects were male (54.8%) and aged 20-29 years (41.5%). Among the subjects, 60.3% had an education level of high school or above, 55.4% were unmarried, 37.3% were unemployed, 73.3% had a monthly expenditure <2 000 Yuan RMB, 44.2% had received an HIV test, 28.3% knew of the HIV saliva test, 21.0% were willing to receive an HIV saliva test, 2.0% had received an HIV saliva test, only 1.0% had bought an HIV test kit for self-testing, and 84.1% were willing to pay for an HIV antibody saliva rapid test. Univariate logistic regression analysis indicated that subject group, age, education level, employment status, monthly expenditure level, HIV test experience, and willingness to receive an HIV saliva test were statistically correlated with willingness to pay for the test. Multivariate logistic regression analysis showed that subject group and monthly expenditure level were statistically correlated with willingness to pay. The willingness to pay for, and the acceptable price of, an HIV antibody saliva rapid test varied across areas and populations; different populations may have different willingness to pay, and the affordability of the test could influence that willingness.

  10. A review of mammalian carcinogenicity study design and potential effects of alternate test procedures on the safety evaluation of food ingredients.

    PubMed

    Hayes, A W; Dayan, A D; Hall, W C; Kodell, R L; Williams, G M; Waddell, W D; Slesinski, R S; Kruger, C L

    2011-06-01

    Extensive experience in conducting long-term cancer bioassays has been gained over the past 50 years of animal testing on drugs, pesticides, industrial chemicals, food additives and consumer products. Testing protocols for the conduct of carcinogenicity studies in rodents have been developed in Guidelines promulgated by regulatory agencies, including the US EPA (Environmental Protection Agency), the US FDA (Food and Drug Administration), the OECD (Organization for Economic Co-operation and Development) for the EU member states, and the MAFF (Ministry of Agriculture, Forestry and Fisheries) and MHW (Ministry of Health and Welfare) in Japan. The basis of critical elements of the study design that lead to an accepted identification of the carcinogenic hazard of substances in food and beverages is the focus of this review. The approaches used by entities well-known for carcinogenicity testing and/or guideline development are discussed. Particular focus is placed on comparison of testing programs used by the US National Toxicology Program (NTP) and advocated in OECD guidelines to the testing programs of the European Ramazzini Foundation (ERF), an organization with numerous published carcinogenicity studies. This focus allows for a good comparison of differences in approaches to carcinogenicity testing and allows for a critical consideration of elements important to appropriate carcinogenicity study designs and practices. OECD protocols serve as good standard models for carcinogenicity testing protocol design. Additionally, the detailed design of any protocol should include attention to the rationale for inclusion of particular elements, including the impact of those elements on study interpretations. Appropriate interpretation of study results is dependent on rigorous evaluation of the study design and conduct, including differences from standard practices. Important considerations are differences in the strain of animal used, diet and housing practices, rigorousness of test procedures, dose selection, histopathology procedures, application of historical control data, statistical evaluations and whether statistical extrapolations are supported by, or are beyond the limits of, the data generated. Without due consideration, conflicting data interpretations and uncertainty about the relevance of a study's results to human risk can result. This paper discusses the critical elements of rodent (rat) carcinogenicity studies, particularly with respect to the study of food ingredients. It also highlights study practices and procedures that can detract from the appropriate evaluation of human relevance of results, indicating the importance of adherence to international consensus protocols, such as those detailed by OECD. Copyright © 2010. Published by Elsevier Inc.

  11. Explorations in Statistics: Hypothesis Tests and P Values

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…
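
    As a concrete illustration of the two concepts, the sketch below computes a pooled two-sample t statistic from its formula, converts it to a two-sided P value, and cross-checks the result against SciPy; the data are arbitrary.

    ```python
    # A test statistic and its P value, made concrete: the pooled two-sample
    # t statistic computed from its formula and cross-checked against SciPy.
    # The data are arbitrary illustrative numbers.
    import numpy as np
    from scipy import stats

    a = np.array([5.1, 4.8, 5.6, 5.0, 5.3, 4.9])
    b = np.array([4.4, 4.7, 4.2, 4.9, 4.5, 4.3])

    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    t = (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
    p = 2 * stats.t.sf(abs(t), df=n1 + n2 - 2)   # two-sided P value

    print(t, p)
    print(stats.ttest_ind(a, b))                 # should agree
    ```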

  12. Getting a head start: high-fidelity, simulation-based operating room team training of interprofessional students.

    PubMed

    Paige, John T; Garbee, Deborah D; Kozmenko, Valeriy; Yu, Qingzhao; Kozmenko, Lyubov; Yang, Tong; Bonanno, Laura; Swartz, William

    2014-01-01

    Effective teamwork in the operating room (OR) is often undermined by the "silo mentality" of the differing professions. Such thinking is formed early in one's professional experience and is fostered by undergraduate medical and nursing curricula lacking interprofessional education. We investigated the immediate impact of conducting interprofessional student OR team training using high-fidelity simulation (HFS) on students' team-related attitudes and behaviors. Ten HFS OR interprofessional student team training sessions were conducted involving 2 standardized HFS scenarios, each of which was followed by a structured debriefing that targeted team-based competencies. Pre- and post-session mean scores were calculated and analyzed for 15 Likert-type items measuring self-efficacy in teamwork competencies using the t-test. Additionally, mean scores of observer ratings of team performance after each scenario and participant ratings after the second scenario for an 11-item Likert-type teamwork scale were calculated and analyzed using one-way ANOVA and t-test. Eighteen nursing students, 20 nurse anesthetist students, and 28 medical students participated in the training. Statistically significant gains from mean pre- to post-training scores occurred on 11 of the 15 self-efficacy items. Statistically significant gains in mean observer performance scores were present on all 3 subscales of the teamwork scale from the first scenario to the second. A statistically significant difference was found in comparisons of mean observer scores with mean participant scores for the team-based behaviors subscale. High-fidelity simulation OR interprofessional student team training improves students' team-based attitudes and behaviors. Students tend to overestimate their team-based behaviors. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  13. Comparative analysis on the selection of number of clusters in community detection

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro; Kabashima, Yoshiyuki

    2018-02-01

    We conduct a comparative analysis on various estimates of the number of clusters in community detection. An exhaustive comparison requires testing of all possible combinations of frameworks, algorithms, and assessment criteria. In this paper we focus on the framework based on a stochastic block model, and investigate the performance of greedy algorithms, statistical inference, and spectral methods. For the assessment criteria, we consider modularity, the map equation, Bethe free energy, prediction errors, and isolated eigenvalues. From the analysis, the tendencies of the assessment criteria and algorithms to overfit or underfit become apparent. In addition, we propose that the alluvial diagram is a suitable tool to visualize statistical inference results and can be useful to determine the number of clusters.
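
    As a small, generic illustration of scoring partitions with different numbers of clusters against one of the criteria discussed (modularity), the sketch below uses networkx on a standard benchmark graph. The paper's stochastic-block-model framework and its other criteria are not reproduced here.

    ```python
    # Score candidate partitions of a benchmark graph by modularity, for
    # different numbers of communities. Illustrative only; the paper's SBM
    # framework, Bethe free energy, map equation, etc. are not shown.
    import networkx as nx
    from networkx.algorithms import community

    G = nx.karate_club_graph()

    # Greedy modularity maximization gives one partition and its score.
    greedy = community.greedy_modularity_communities(G)
    print(len(greedy), community.modularity(G, greedy))

    # Girvan-Newman yields partitions with 2, 3, 4, ... communities; compare
    # their modularity scores to see where the criterion peaks.
    gn = community.girvan_newman(G)
    for parts in (next(gn) for _ in range(4)):
        print(len(parts), round(community.modularity(G, parts), 3))
    ```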

  14. Integrated Data Collection Analysis (IDCA) Program - Statistical Analysis of RDX Standard Data Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.

    2015-10-30

    The Integrated Data Collection Analysis (IDCA) program is conducting a Proficiency Test for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Type II Class 5 standard. The material was tested as a well-characterized standard several times during the proficiency study to assess differences among participants and the range of results that may arise for well-behaved explosive materials. The analyses show that there are detectable differences among the results from IDCA participants. While these differences are statistically significant, most of them can be disregarded for comparison purposes to assess potential variability when laboratories attempt to measure identical samples using methods assumed to be nominally the same. The results presented in this report include the average sensitivity results for the IDCA participants and the ranges of values obtained. The ranges represent variation about the mean values of the tests of between 26% and 42%. The magnitude of this variation is attributed to differences in operator, method, and environment as well as the use of different instruments that are also of varying age. The results appear to be a good representation of the broader safety testing community based on the range of methods, instruments, and environments included in the IDCA Proficiency Test.

  15. Graphical Tests for Power Comparison of Competing Designs.

    PubMed

    Hofmann, H; Follett, L; Majumder, M; Cook, D

    2012-12-01

    Lineups have been established as tools for visual testing similar to standard statistical inference tests, allowing us to evaluate the validity of graphical findings in an objective manner. In simulation studies lineups have been shown to be efficient: the power of visual tests is comparable to that of classical tests while being much less stringent in terms of the distributional assumptions made. This makes lineups versatile, yet powerful, tools in situations where conditions for regular statistical tests are not or cannot be met. In this paper we introduce lineups as a tool for evaluating the power of competing graphical designs. We highlight some of the theoretical properties and then show results from two studies evaluating competing designs: both studies are designed to go to the limits of our perceptual abilities to highlight differences between designs. We use both accuracy and speed of evaluation as measures of a successful design. The first study compares the choice of coordinate system: polar versus cartesian coordinates. The results show strong support in favor of cartesian coordinates for finding fast and accurate answers when spotting patterns. The second study is aimed at finding shift differences between distributions. Both studies are motivated by data problems that we have recently encountered, and explore using simulated data to evaluate the plot designs under controlled conditions. Amazon Mechanical Turk (MTurk) is used to conduct the studies. The lineups provide an effective mechanism for objectively evaluating plot designs.
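
    A bare-bones sketch of the lineup protocol is shown below: the panel of observed data is hidden among null panels generated by permuting the response, and viewers are asked to pick the panel that looks different. The data are simulated placeholders; the actual studies used purpose-built designs evaluated on Amazon Mechanical Turk.

    ```python
    # Bare-bones lineup: hide the observed-data panel among null panels
    # generated by permuting y. Simulated placeholder data, not the studies
    # described in the paper.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(3)
    x = rng.uniform(0, 1, 80)
    y = 0.6 * x + rng.normal(0, 0.3, 80)       # weak real association

    n_panels = 12
    real_pos = rng.integers(n_panels)          # where the real plot is hidden
    fig, axes = plt.subplots(3, 4, figsize=(8, 6), sharex=True, sharey=True)
    for i, ax in enumerate(axes.flat):
        yy = y if i == real_pos else rng.permutation(y)   # null = permuted y
        ax.scatter(x, yy, s=8)
        ax.set_title(str(i + 1), fontsize=8)
    fig.suptitle("Which panel is different?")
    plt.show()
    print("real data shown in panel", real_pos + 1)
    ```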

  16. A generalized plate method for estimating total aerobic microbial count.

    PubMed

    Ho, Kai Fai

    2004-01-01

    The plate method outlined in Chapter 61: Microbial Limit Tests of the U.S. Pharmacopeia (USP 61) provides very specific guidance for assessing total aerobic bioburden in pharmaceutical articles. This methodology, while comprehensive, lacks the flexibility to be useful in all situations. By studying the plate method as a special case within a more general family of assays, the effects of each parameter in the guidance can be understood. Using a mathematical model to describe the plate counting procedure, a statistical framework for making more definitive statements about total aerobic bioburden is developed. Such a framework allows the laboratory scientist to adjust the USP 61 methods to satisfy specific practical constraints. In particular, it is shown that the plate method can be conducted, albeit with stricter acceptance criteria, using a test specimen quantity that is smaller than the 10 g or 10 mL prescribed in the guidance. Finally, the interpretation of results proffered by the guidance is re-examined within this statistical framework and shown to be overly aggressive.
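
    As a hedged illustration of the kind of Poisson reasoning such a framework rests on (not the paper's actual model or the USP 61 criteria), the sketch below treats colony counts as Poisson with mean equal to the true CFU/g times the quantity plated, and shows how the chance of falsely passing a fixed per-gram limit grows as the specimen quantity shrinks, which is why stricter acceptance criteria are needed for smaller specimens.

    ```python
    # Poisson sketch of plate counting: as the specimen quantity shrinks, the
    # count becomes noisier relative to its mean and the probability of a
    # falsely acceptable result rises. Limits and quantities are illustrative
    # only, not the USP 61 requirements or the paper's framework.
    from scipy.stats import poisson

    true_cfu_per_g = 120      # hypothetical true bioburden
    limit_cfu_per_g = 100     # hypothetical acceptance limit

    for grams in (10, 5, 1):
        mean_count = true_cfu_per_g * grams
        # probability the observed count still looks acceptable (<= limit x grams)
        p_pass = poisson.cdf(limit_cfu_per_g * grams, mean_count)
        print(f"{grams:>2} g specimen: P(falsely pass) = {p_pass:.3f}")
    ```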

  17. Religious Practices and Self-Care in Iranian Patients with Type 2 Diabetes.

    PubMed

    Heidari, Saeide; Rezaei, Mahboubeh; Sajadi, Mahbobeh; Ajorpaz, Neda Mirbagher; Koenig, Harold G

    2017-04-01

    This study aimed to examine the relationship between religious practices and self-care of patients with type 2 diabetes. A descriptive cross-sectional survey was conducted on 154 diabetic patients who were referred to two general teaching hospitals in Qom City (Iran). Data were collected using a demographic questionnaire and the private and public religious practices and summary of diabetes self-care activities questionnaires. Data were analyzed using descriptive statistics and statistical tests, including the independent t test and the Pearson correlation coefficient. Significant positive correlations were observed between religious practices and self-care activities in diabetic patients (p < 0.05). Significant positive correlations were also found between some religious practices and self-care activities subscales (p < 0.05). Healthcare providers should be aware of the role that religion plays in the lives of diabetic patients and be able to take religious factors into account when developing care plans. Doing so will foster a more patient-centered approach and thereby support patients in their role as self-care decision-makers.

  18. A study of tensile test on open-cell aluminum foam sandwich

    NASA Astrophysics Data System (ADS)

    Ibrahim, N. A.; Hazza, M. H. F. Al; Adesta, E. Y. T.; Abdullah Sidek, Atiah Bt.; Endut, N. A.

    2018-01-01

    Aluminum foam sandwich (AFS) panels are among the growing materials in various industries because of their lightweight behavior. AFS is also known for having an excellent stiffness-to-weight ratio and high energy absorption. Because of these advantages, many researchers show an interest in aluminum foam materials for expanding the use of foam structures. However, there is still a gap to be filled in order to develop reliable data on the mechanical behavior of AFS with different parameters and analysis method approaches, and few researchers have focused on open-cell aluminum foam and statistical analysis. Thus, this research was conducted using an open-cell aluminum foam core of grade 6101 with aluminum sheet skins, tested under tension. The data were analyzed using a full factorial design in the JMP statistical analysis software (version 11). The ANOVA results showed a significant value for the model, which was less than 0.500, while the scatter diagram and 3D surface profiler plot showed that skin thickness has a significant impact on the stress/strain values compared to core thickness.

  19. Comparison of wear behaviour and mechanical properties of as-cast Al6082 and Al6082-T6 using statistical analysis

    NASA Astrophysics Data System (ADS)

    Rani Rana, Sandhya; Pattnaik, A. B.; Patnaik, S. C.

    2018-03-01

    In the present work the wear behavior and mechanical properties of as-cast Al6082 and Al6082-T6 were compared and analyzed using statistical analysis. The as-cast Al6082 alloy was solutionized at 550°C, quenched, and artificially aged at 170°C for 8 h. Metallographic examination and XRD analysis revealed the presence of the intermetallic compound Al6Mn. The hardness of the heat-treated Al6082 was found to be higher than that of the as-cast sample. Wear tests were carried out using a pin-on-disc wear testing machine according to a Taguchi L9 orthogonal array. Experiments were conducted under normal loads of 10-30 N, sliding speeds of 1-3 m/s, and sliding distances of 400, 800, and 1200 m, respectively. Sliding speed was found to be the dominant factor for wear in both the as-cast and aged Al6082 alloys. The wear rate increases with sliding distance up to 800 m and decreases thereafter.

  20. Effects of temporal variability in ground data collection on classification accuracy

    USGS Publications Warehouse

    Hoch, G.A.; Cully, J.F.

    1999-01-01

    This research tested whether the timing of ground data collection can significantly impact the accuracy of land cover classification. Ft. Riley Military Reservation, Kansas, USA was used to test this hypothesis. The U.S. Army's Land Condition Trend Analysis (LCTA) data annually collected at military bases was used to ground truth disturbance patterns. Ground data collected over an entire growing season and data collected one year after the imagery had a kappa statistic of 0.33. When using ground data from only within two weeks of image acquisition the kappa statistic improved to 0.55. Potential sources of this discrepancy are identified. These data demonstrate that there can be significant amounts of land cover change within a narrow time window on military reservations. To accurately conduct land cover classification at military reservations, ground data need to be collected in as narrow a window of time as possible and be closely synchronized with the date of the satellite imagery.
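
    For reference, the kappa statistic quoted above measures agreement between mapped classes and ground data beyond chance. The sketch below computes it from a made-up confusion matrix (rows = ground data, columns = mapped class); it is not the Ft. Riley accuracy assessment.

    ```python
    # Cohen's kappa from a classification confusion matrix. The matrix is a
    # made-up example, not the Ft. Riley accuracy assessment.
    import numpy as np

    cm = np.array([[45, 10, 5],
                   [12, 38, 10],
                   [8,   9, 33]])

    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n ** 2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    print(round(kappa, 3))
    ```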

  1. Evaluation of the Thermo Scientific SureTect Salmonella species assay. AOAC Performance Tested Method 051303.

    PubMed

    Cloke, Jonathan; Clark, Dorn; Radcliff, Roy; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko

    2014-01-01

    The Thermo Scientific SureTect Salmonella species Assay is a new real-time PCR assay for the detection of Salmonellae in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods program to validate the SureTect Salmonella species Assay in comparison to the reference method detailed in International Organization for Standardization 6579:2002 in a variety of food matrixes, namely, raw ground beef, raw chicken breast, raw ground pork, fresh bagged lettuce, pork frankfurters, nonfat dried milk powder, cooked peeled shrimp, pasteurized liquid whole egg, ready-to-eat meal containing beef, and stainless steel surface samples. With the exception of liquid whole egg and fresh bagged lettuce, which were tested in-house, all matrixes were tested by Marshfield Food Safety, Marshfield, WI, on behalf of Thermo Fisher Scientific. In addition, three matrixes (pork frankfurters, lettuce, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled laboratory study by the University of Guelph, Canada. No significant difference by probability of detection or McNemar's Chi-squared statistical analysis was found between the candidate and reference methods for any of the food matrixes or environmental surface samples tested during the validation study. Inclusivity and exclusivity testing was conducted with 117 and 36 isolates, respectively, which demonstrated that the SureTect Salmonella species Assay was able to detect all the major groups of Salmonella enterica subspecies enterica (e.g., Typhimurium) and the less common subspecies of S. enterica (e.g., arizoniae) and the rarely encountered S. bongori. None of the exclusivity isolates analyzed were detected by the SureTect Salmonella species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation (enrichment time and temperature, and lysis temperature), which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay shelf life.

  2. Development and validation of challenge materials for double-blind, placebo-controlled food challenges in children.

    PubMed

    Vlieg-Boerstra, Berber J; Bijleveld, Charles M A; van der Heide, Sicco; Beusekamp, Berta J; Wolt-Plompen, Saskia A A; Kukler, Jeanet; Brinkman, Joep; Duiverman, Eric J; Dubois, Anthony E J

    2004-02-01

    The use of double-blind, placebo-controlled food challenges (DBPCFCs) is considered the gold standard for the diagnosis of food allergy. Despite this, materials and methods used in DBPCFCs have not been standardized. The purpose of this study was to develop and validate recipes for use in DBPCFCs in children by using allergenic foods, preferably in their usual edible form. Recipes containing milk, soy, cooked egg, raw whole egg, peanut, hazelnut, and wheat were developed. For each food, placebo and active test food recipes were developed that met the requirements of acceptable taste, allowance of a challenge dose high enough to elicit reactions in an acceptable volume, optimal matrix ingredients, and good matching of sensory properties of placebo and active test food recipes. Validation was conducted on the basis of sensory tests for difference by using the triangle test and the paired comparison test. Recipes were first tested by volunteers from the hospital staff and subsequently by a professional panel of food tasters in a food laboratory designed for sensory testing. Recipes were considered to be validated if no statistically significant differences were found. Twenty-seven recipes were developed and found to be valid by the volunteer panel. Of these 27 recipes, 17 could be validated by the professional panel. Sensory testing with appropriate statistical analysis allows for objective validation of challenge materials. We recommend the use of professional tasters in the setting of a food laboratory for best results.
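
    As an illustration of how a triangle test for difference can be judged (not necessarily the statistical procedure used in the study), the sketch below applies an exact binomial test: under the null of no perceptible difference, each assessor picks the odd sample with probability 1/3. Panel size and counts are invented.

    ```python
    # Exact binomial test for a triangle test: under the null each assessor
    # identifies the odd sample with probability 1/3. Counts are invented.
    # Uses scipy.stats.binomtest (SciPy >= 1.7).
    from scipy.stats import binomtest

    n_assessors = 18
    n_correct = 9
    result = binomtest(n_correct, n_assessors, p=1/3, alternative="greater")
    print(result.pvalue)   # a non-significant result = no detectable difference
    ```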

  3. Investigation of the Role of Training Health Volunteers in Promoting Pap Smear Test Use among Iranian Women Based on the Protection Motivation Theory.

    PubMed

    Ghahremani, Leila; Harami, Zahra Khiyali; Kaveh, Mohammad Hossein; Keshavarzi, Sareh

    2016-01-01

    Cervical cancer is known as one of the most prevalent types of cancers and a major public health problem in developing countries which can be detected by Pap test, prevented, and treated. Despite the effective role of Pap test in decreasing the incidence and mortality due to cervical cancer, it is still one of the most common causes of cancer-related deaths among women, especially in developing countries. Thus, this study aimed to examine the effect of educational interventions implemented by health volunteers based on protection motivation theory (PMT) on promoting Pap test use among women. This quasi-experimental study was conducted on 60 health volunteers and 420 women. The study participants were divided into an intervention and a control group. Data were collected using a valid self-reported questionnaire including demographic variables and PMT constructs which was completed by both groups before and 2 months after the intervention. Then, the data were entered into the SPSS statistical software, version 19 and were analyzed using Chi-square test, independent t-test, and descriptive statistical methods. P<0.05 was considered as statistically significant. The findings of this study showed that the mean scores of PMT constructs (i.e. perceived vulnerability, perceived severity, fear, response-costs, self-efficacy, and intention) increased in the intervention group after the intervention (P<0.001). However, no significant difference was found between the two groups regarding response efficacy after the intervention (P=0.06). The rate of Pap test use also increased by about 62.9% among the study women. This study showed a significant positive relationship between PMT-based training and Pap test use. The results also revealed the successful contribution of health volunteers to training in cervical cancer screening. Thus, training interventions based on PMT are suggested to be designed and implemented and health volunteers are recommended to be employed for educational purposes and promoting the community's, especially women's, health.

  4. EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.

    PubMed

    Tong, Xiaoxiao; Bentler, Peter M

    2013-01-01

    Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.
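
    For readers unfamiliar with the statistics being compared, the mean-scaled (Satorra-Bentler type) statistic referenced here has the general form below; the notation is generic rather than taken from the paper.

    ```latex
    % General form of the mean-scaled (Satorra-Bentler) test statistic.
    % T_{ML}: normal-theory likelihood-ratio statistic, d: model degrees of
    % freedom, \hat{c}: scaling factor estimated from the data's multivariate
    % kurtosis (via the residual weight matrix \hat{U} and the asymptotic
    % covariance matrix \hat{\Gamma} of the sample variances and covariances).
    \[
      T_{SB} = \frac{T_{ML}}{\hat{c}},
      \qquad
      \hat{c} = \frac{\operatorname{tr}(\hat{U}\hat{\Gamma})}{d},
    \]
    % so that E(T_{SB}) \approx d under nonnormality; mean-and-variance adjusted
    % variants additionally rescale the statistic so its variance matches that of
    % the chi-squared reference distribution.
    ```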

  5. Sample size determination for disease prevalence studies with partially validated data.

    PubMed

    Qiu, Shi-Fang; Poon, Wai-Yin; Tang, Man-Lai

    2016-02-01

    Disease prevalence is an important topic in medical research, and its study is based on data that are obtained by classifying subjects according to whether a disease has been contracted. Classification can be conducted with high-cost gold standard tests or low-cost screening tests, but the latter are subject to the misclassification of subjects. As a compromise between the two, many research studies use partially validated datasets in which all data points are classified by fallible tests, and some of the data points are validated in the sense that they are also classified by the completely accurate gold-standard test. In this article, we investigate the determination of sample sizes for disease prevalence studies with partially validated data. We use two approaches. The first is to find sample sizes that can achieve a pre-specified power of a statistical test at a chosen significance level, and the second is to find sample sizes that can control the width of a confidence interval with a pre-specified confidence level. Empirical studies have been conducted to demonstrate the performance of various testing procedures with the proposed sample sizes. The applicability of the proposed methods is illustrated by a real-data example. © The Author(s) 2012.
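
    The paper's formulas account for partial validation and misclassification; as a point of reference only, the familiar fully validated special case, where the sample size is chosen to control the half-width of a Wald confidence interval for the prevalence, can be sketched as follows. The anticipated prevalence and target half-width below are hypothetical.

    ```python
    # Baseline sketch (fully validated case): sample size needed so that a 95%
    # Wald confidence interval for a prevalence p has half-width d. The partially
    # validated setting studied in the paper requires more elaborate formulas.
    import math
    from scipy.stats import norm

    p_guess = 0.15   # anticipated prevalence (hypothetical)
    d = 0.03         # desired half-width of the confidence interval
    alpha = 0.05

    z = norm.ppf(1 - alpha / 2)
    n = math.ceil(z**2 * p_guess * (1 - p_guess) / d**2)
    print(f"required n = {n}")   # about 545 subjects with these inputs
    ```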

  6. Comparative evaluation of statistical and mechanistic models of Escherichia coli at beaches in southern Lake Michigan

    USGS Publications Warehouse

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.

    2016-01-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models, which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes, including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions, resulting in more accurate estimates of beach closures.

  7. Spatial scan statistics for detection of multiple clusters with arbitrary shapes.

    PubMed

    Lin, Pei-Sheng; Kung, Yi-Hung; Clayton, Murray

    2016-12-01

    In applying scan statistics for public health research, it would be valuable to develop a detection method for multiple clusters that accommodates spatial correlation and covariate effects in an integrated model. In this article, we connect the concepts of the likelihood ratio (LR) scan statistic and the quasi-likelihood (QL) scan statistic to provide a series of detection procedures sufficiently flexible to apply to clusters of arbitrary shape. First, we use an independent scan model for detection of clusters and then a variogram tool to examine the existence of spatial correlation and regional variation based on residuals of the independent scan model. When the estimate of regional variation is significantly different from zero, a mixed QL estimating equation is developed to estimate coefficients of geographic clusters and covariates. We use the Benjamini-Hochberg procedure (1995) to find a threshold for p-values to address the multiple testing problem. A quasi-deviance criterion is used to regroup the estimated clusters to find geographic clusters with arbitrary shapes. We conduct simulations to compare the performance of the proposed method with other scan statistics. For illustration, the method is applied to enterovirus data from Taiwan. © 2016, The International Biometric Society.
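
    The Benjamini-Hochberg step-up procedure mentioned here chooses a p-value threshold that controls the false discovery rate across the candidate clusters. A minimal sketch of the procedure is given below; the p-values are hypothetical.

    ```python
    # Minimal sketch of the Benjamini-Hochberg step-up procedure used to choose a
    # p-value threshold across candidate clusters. The p-values are hypothetical.
    import numpy as np

    def benjamini_hochberg(p_values, q=0.05):
        """Return a boolean array marking p-values rejected at FDR level q."""
        p = np.asarray(p_values)
        m = len(p)
        order = np.argsort(p)
        ranked = p[order]
        # Find the largest k with p_(k) <= (k/m)*q and reject hypotheses 1..k
        below = ranked <= (np.arange(1, m + 1) / m) * q
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.nonzero(below)[0].max()
            reject[order[: k + 1]] = True
        return reject

    print(benjamini_hochberg([0.001, 0.008, 0.012, 0.04, 0.2, 0.6]))
    # -> [ True  True  True False False False]
    ```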

  8. Towards a web-based decision support tool for selecting appropriate statistical test in medical and biological sciences.

    PubMed

    Suner, Aslı; Karakülah, Gökhan; Dicle, Oğuz

    2014-01-01

    Statistical hypothesis testing is an essential component of biological and medical studies for making inferences and estimations from the data collected in the study; however, the misuse of statistical tests is widespread. In order to prevent possible errors in statistical test selection, it is currently possible to consult available test selection algorithms developed for various purposes. However, the lack of an algorithm presenting the most common statistical tests used in biomedical research in a single flowchart causes several problems, such as shifting users among the algorithms, poor decision support in test selection, and lack of satisfaction of potential users. Herein, we demonstrate a unified flowchart that covers the most commonly used statistical tests in the biomedical domain, to provide decision aid to non-statistician users when choosing the appropriate statistical test for their hypothesis. We also discuss some of the findings made while integrating the flowcharts into a single, more comprehensive decision algorithm.
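
    The unified flowchart itself is not reproduced in this record. As a highly simplified illustration of how such a decision aid can be encoded, the sketch below branches on a few common questions (outcome type, number of groups, pairing, normality); the branching rules are illustrative assumptions only and far coarser than the authors' flowchart.

    ```python
    # Highly simplified sketch of encoding a test-selection flowchart as nested
    # rules. The branching logic is illustrative only.
    def suggest_test(outcome, groups, paired, normal):
        """outcome: 'continuous' or 'categorical'; groups: number of groups."""
        if outcome == "categorical":
            return "McNemar's test" if paired else "Chi-squared test"
        if groups == 2:
            if paired:
                return "Paired t-test" if normal else "Wilcoxon signed-rank test"
            return "Independent t-test" if normal else "Mann-Whitney U test"
        # Three or more groups
        if paired:
            return "Repeated-measures ANOVA" if normal else "Friedman test"
        return "One-way ANOVA" if normal else "Kruskal-Wallis test"

    print(suggest_test("continuous", groups=2, paired=True, normal=False))
    # -> Wilcoxon signed-rank test
    ```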

  9. NASA Bioculture System: From Experiment Definition to Flight Payload

    NASA Technical Reports Server (NTRS)

    Sato, Kevin Y.; Almeida, Eduardo; Austin, Edward M.

    2014-01-01

    Starting in 2015, the NASA Bioculture System will be available to the science community to conduct cell biology and microbiology experiments on ISS. The Bioculture System carries ten environmentally independent Cassettes, which house the experiments. The closed-loop fluid flow path subsystem in each Cassette provides a perfusion-based method for maintaining specimen cultures in a shear-free environment by using a biochamber based on porous hollow fiber bioreactor technology. Each Cassette contains an incubator and separate insulated refrigerator compartment for storage of media, samples, nutrients and additives. The hardware is capable of fully automated or manual specimen culturing and processing, including in-flight experiment initiation, sampling and fixation, up to BSL-2 specimen culturing, and the ability to run up to 10 independent cultures in parallel for statistical analysis. The incubation and culturing of specimens in the Bioculture System is a departure from standard laboratory culturing methods. Therefore, it is critical that the PI has an understanding of the pre-flight testing required for successfully using the Bioculture System to conduct an on-orbit experiment. Overall, the PI will conduct a series of ground tests to define flight experiment and on-orbit implementation requirements, verify biocompatibility, and determine base bioreactor conditions. The ground test processes for the utilization of the Bioculture System, from experiment selection to flight, will be reviewed. Also, pre-flight test schedules and use of COTS ground test equipment (CellMax and FiberCell systems) and the Bioculture System will be discussed.

  10. The impact of the Virtual Ophthalmology Clinic on medical students' learning: a randomised controlled trial

    PubMed Central

    Succar, T; Zebington, G; Billson, F; Byth, K; Barrie, S; McCluskey, P; Grigg, J

    2013-01-01

    Aim The Virtual Ophthalmology Clinic (VOC) is an interactive web-based teaching module, with special emphasis on history taking and clinical reasoning skills. The purpose of this study was to determine the impact of VOC on medical students' learning. Methods A randomised controlled trial (RCT) was conducted with medical students from the University of Sydney (n=188) who were randomly assigned into either an experimental (n=93) or a control group (n=95). A pre- and post-test and student satisfaction questionnaire were administered. Twelve months later a follow-up test was conducted to determine the long-term retention rate of graduates. Results There was a statistically significant (P<0.001) within-subject improvement pre- to post-rotation in the number of correctly answered questions for both the control and experimental groups (mean improvement for control 10%, 95% CI 1.3–2.6, and for experimental 17.5%, 95% CI 3.0–4.0). The improvement was significantly greater in the experimental group (mean difference in improvement between groups 7.5%, 95% CI 0.8–2.3, P<0.001). At 12-month follow-up testing, the experimental group scored on average 1.6 (8%) (95% CI 0.4 to 2.7, P=0.007) higher than the controls. Conclusion On the basis of a statistically significant improvement in academic performance and highly positive student feedback, the implementation of VOC may provide a means to address challenges to ophthalmic learning outcomes in an already crowded medical curriculum. PMID:23867718

  11. Comparing the effect of mefenamic Acid and vitex agnus on intrauterine device induced bleeding.

    PubMed

    Yavarikia, Parisa; Shahnazi, Mahnaz; Hadavand Mirzaie, Samira; Javadzadeh, Yousef; Lutfi, Razieh

    2013-09-01

    Increased bleeding is the most common cause of intrauterine device (IUD) removal. The use of alternative therapies to treat bleeding has increased due to the complications of medications. But most alternative therapies are not accepted by women. Therefore, conducting studies to find the right treatment, with fewer complications and better acceptability, is necessary. This study aimed to compare the effect of mefenamic acid and vitex agnus castus on IUD induced bleeding. This was a double blinded randomized controlled clinical trial. It was conducted on 84 women with random allocation into two groups of 42, treated with mefenamic acid or vitex agnus capsules taken three times a day during menstruation for four months. Data were collected by demographic questionnaire and Higham 5 stage chart (1 month before the treatment and 4 months during the treatment). Paired t-test, independent t-test, chi-square test, analysis of variance (ANOVA) with repeated measurements, and SPSS software were used to determine the results. Mefenamic acid and vitex agnus significantly decreased bleeding. This decrease in month 4 was 52% in the mefenamic acid group and 47.6% in the vitex agnus group. The change in mean bleeding score was statistically significant between the two groups in the first three months and before the intervention. In the mefenamic acid group, the decrease in bleeding was significantly greater than in the vitex agnus group. However, during the 4th month, the mean change was not statistically significant. Mefenamic acid and vitex agnus were both effective on IUD induced bleeding; however, mefenamic acid was more effective.

  12. Vertical integration of basic science in final year of medical education

    PubMed Central

    Rajan, Sudha Jasmine; Jacob, Tripti Meriel; Sathyendra, Sowmya

    2016-01-01

    Background: Development of health professionals with the ability to integrate, synthesize, and apply knowledge gained through medical college is greatly hampered by the system of delivery that is compartmentalized and piecemeal. There is a need to integrate basic sciences with clinical teaching to enable application in clinical care. Aim: To study the benefit and acceptance of vertical integration of basic science in the final year MBBS undergraduate curriculum. Materials and Methods: After Institutional Ethics Clearance, neuroanatomy refresher classes with clinical application to neurological diseases were held as part of the final year posting in two medical units. Feedback was collected. Pre- and post-tests which tested application and synthesis were conducted. Summative assessment was compared with the control group of students who had standard teaching in the other two medical units. In-depth interviews were conducted with 2 willing participants and 2 teachers who did neurology bedside teaching. Results: The majority (>80%) found the classes useful and interesting. There was a statistically significant improvement in the post-test scores. There was a statistically significant difference between the intervention and control groups' scores during summative assessment (76.2 vs. 61.8, P < 0.01). Students felt that it reinforced, motivated self-directed learning, enabled correlations, improved understanding, put things in perspective, gave confidence, aided application, and enabled them to follow discussions during clinical teaching. Conclusion: Vertical integration of basic science in the final year was beneficial and resulted in knowledge gain and improved summative scores. The classes were found to be useful, interesting and thought to help in clinical care and application by the majority of students. PMID:27563584

  13. Comparing the Effect of Mefenamic Acid and Vitex Agnus on Intrauterine Device Induced Bleeding

    PubMed Central

    Yavarikia, Parisa; Shahnazi, Mahnaz; Hadavand Mirzaie, Samira; Javadzadeh, Yousef; Lutfi, Razieh

    2013-01-01

    Introduction: Increased bleeding is the most common cause of intrauterine device (IUD) removal. The use of alternative therapies to treat bleeding has increased due to the complications of medications. But most alternative therapies are not accepted by women. Therefore, conducting studies to find the right treatment, with fewer complications and better acceptability, is necessary. This study aimed to compare the effect of mefenamic acid and vitex agnus castus on IUD induced bleeding. Methods: This was a double blinded randomized controlled clinical trial. It was conducted on 84 women with random allocation into two groups of 42, treated with mefenamic acid or vitex agnus capsules taken three times a day during menstruation for four months. Data were collected by demographic questionnaire and Higham 5 stage chart (1 month before the treatment and 4 months during the treatment). Paired t-test, independent t-test, chi-square test, analysis of variance (ANOVA) with repeated measurements, and SPSS software were used to determine the results. Results: Mefenamic acid and vitex agnus significantly decreased bleeding. This decrease in month 4 was 52% in the mefenamic acid group and 47.6% in the vitex agnus group. The change in mean bleeding score was statistically significant between the two groups in the first three months and before the intervention. In the mefenamic acid group, the decrease in bleeding was significantly greater than in the vitex agnus group. However, during the 4th month, the mean change was not statistically significant. Conclusion: Mefenamic acid and vitex agnus were both effective on IUD induced bleeding; however, mefenamic acid was more effective. PMID:25276733

  14. Wilderness First Aid Training as a Tool for Improving Basic Medical Knowledge in South Sudan.

    PubMed

    Katona, Lindsay B; Douglas, William S; Lena, Sean R; Ratner, Kyle G; Crothers, Daniel; Zondervan, Robert L; Radis, Charles D

    2015-12-01

    The challenges presented by traumatic injuries in low-resource communities are especially relevant in South Sudan. This study was conducted to assess whether a 3-day wilderness first aid (WFA) training course taught in South Sudan improved first aid knowledge. Stonehearth Open Learning Opportunities (SOLO) Schools designed the course to teach people with limited medical knowledge to use materials from their environment to provide life-saving care in the event of an emergency. A pre-test/post-test study design was used to assess first aid knowledge of 46 community members in Kit, South Sudan, according to a protocol approved by the University of New England Institutional Review Board. The course and assessments were administered in English and translated in real-time to Acholi and Arabic, the two primary languages spoken in the Kit region. Descriptive statistics, t-test, ANOVA, and correlation analyses were conducted. Results included a statistically significant improvement in first aid knowledge after the 3-day training course: t(38)=3.94; P<.001. Although men started with more health care knowledge (t(37)=2.79; P=.008), men and women demonstrated equal levels of knowledge upon course completion: t(37)=1.56; P=.88. This research, which may be the first of its kind in South Sudan, provides evidence that a WFA training course in South Sudan is efficacious. These findings suggest that similar training opportunities could be used in other parts of the world to improve basic medical knowledge in communities with limited access to medical resources and varying levels of education and professional experiences.

  15. An advanced simulator for orthopedic surgical training.

    PubMed

    Cecil, J; Gupta, Avinash; Pirela-Cruz, Miguel

    2018-02-01

    The purpose of creating the virtual reality (VR) simulator is to facilitate and supplement the training opportunities provided to orthopedic residents. The use of VR simulators has increased rapidly in the field of medical surgery for training purposes. This paper discusses the creation of the virtual surgical environment (VSE) for training residents in an orthopedic surgical process called less invasive stabilization system (LISS) surgery, which is used to address fractures of the femur. The overall methodology included first obtaining an understanding of the LISS plating process through interactions with expert orthopedic surgeons and developing the information centric models. The information centric models provided a structured basis to design and build the simulator. Subsequently, the haptic-based simulator was built. Finally, the learning assessments were conducted in a medical school. The results from the learning assessments confirm the effectiveness of the VSE for teaching medical residents and students. The scope of the assessment was to ensure (1) the correctness and (2) the usefulness of the VSE. Out of 37 residents/students who participated in the test, 32 showed improvements in their understanding of the LISS plating surgical process. A majority of participants were satisfied with the use of teaching Avatars and haptic technology. A paired t-test was conducted to test the statistical significance of the assessment data, which showed that the observed improvements were statistically significant. This paper demonstrates the usefulness of adopting an information centric modeling approach in the design and development of the simulator. The assessment results underscore the potential of using VR-based simulators in medical education, especially in orthopedic surgery.

  16. Performance evaluation of rapid diagnostic test for malaria in high malarious districts of Amhara region, Ethiopia.

    PubMed

    Beyene, Belay Bezabih; Yalew, Woyneshet Gelaye; Demilew, Ermias; Abie, Getent; Tewabe, Tsehaye; Abera, Bayeh

    2016-03-01

    Malaria is one of the leading public health challenges in Ethiopia. To address this, the Federal Ministry of Ethiopia launched a laboratory diagnosis programme for promoting use of either rapid diagnostic tests (RDTs) or Giemsa microscopy for all suspected malaria cases. This study was conducted to assess the performance of RDT and influencing factors for Giemsa microscopic diagnosis in Amhara region. A cross-sectional study was conducted in 10 high burden malaria districts of Amhara region from 15 May to 15 June 2014. Data were collected using a structured questionnaire. Blood samples were collected from 1000 malaria suspected cases in 10 health centers. RDT (SD BIOLINE) and Giemsa microscopy were performed as per standard procedures. Kappa value, logistic regression and chi-square test were used for statistical analysis. The overall positivity rate (PR) of malaria parasites by RDT and Giemsa microscopy was 17.1 and 16.5%, respectively. Compared to Giemsa microscopy as "gold standard", RDT showed 83.9% sensitivity and 96% specificity. The level of agreement between the first and second readers for blood film microscopy was moderate (Kappa value = 0.74). Logistic regression showed that being male, being under five years of age, and having fever for more than 24 h prior to malaria diagnosis had a statistically significant association with malaria parasite positivity. The overall specificity and negative predictive values of RDT for malaria diagnosis were excellent. However, the sensitivity and positive predictive values of RDT were low. Therefore, in-service training, quality monitoring of RDTs, and adequate laboratory supplies for diagnostic services of malaria would be crucial for effective intervention measures.
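
    The reported sensitivity (83.9%) and specificity (96%) come from cross-tabulating RDT results against microscopy as the gold standard. The sketch below shows those calculations on a 2x2 table; the counts are hypothetical, chosen only to be roughly consistent with the reported rates.

    ```python
    # Sketch of diagnostic-accuracy metrics from a 2x2 table of RDT results
    # against Giemsa microscopy as the gold standard. Counts are hypothetical.
    tp, fp = 138, 33   # RDT positive: microscopy positive / negative
    fn, tn = 27, 802   # RDT negative: microscopy positive / negative

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value

    print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
    print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
    ```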

  17. High-resolution vertical profiles of groundwater electrical conductivity (EC) and chloride from direct-push EC logs

    NASA Astrophysics Data System (ADS)

    Bourke, Sarah A.; Hermann, Kristian J.; Hendry, M. Jim

    2017-11-01

    Elevated groundwater salinity associated with produced water, leaching from landfills or secondary salinity can degrade arable soils and potable water resources. Direct-push electrical conductivity (EC) profiling enables rapid, relatively inexpensive, high-resolution in-situ measurements of subsurface salinity, without requiring core collection or installation of groundwater wells. However, because the direct-push tool measures the bulk EC of both solid and liquid phases (ECa), incorporation of ECa data into regional or historical groundwater data sets requires the prediction of pore water EC (ECw) or chloride (Cl-) concentrations from measured ECa. Statistical linear regression and physically based models for predicting ECw and Cl- from ECa profiles were tested on a brine plume in central Saskatchewan, Canada. A linear relationship between ECa/ECw and porosity was more accurate for predicting ECw and Cl- concentrations than a power-law relationship (Archie's Law). Despite clay contents of up to 96%, the addition of terms to account for electrical conductance in the solid phase did not improve model predictions. In the absence of porosity data, statistical linear regression models adequately predicted ECw and Cl- concentrations from direct-push ECa profiles (ECw = 5.48 ECa + 0.78, R² = 0.87; Cl- = 1,978 ECa - 1,398, R² = 0.73). These statistical models can be used to predict ECw in the absence of lithologic data and will be particularly useful for initial site assessments. The more accurate linear physically based model can be used to predict ECw and Cl- as porosity data become available and the site-specific ECw-Cl- relationship is determined.
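
    Applying the site-specific regressions quoted above (ECw = 5.48 ECa + 0.78 and Cl- = 1,978 ECa - 1,398) to a direct-push ECa profile is a simple element-wise calculation, sketched below. The example ECa values are hypothetical, and the coefficients are only meaningful in the units and at the site used in the original study.

    ```python
    # Sketch applying the reported site-specific regressions to a direct-push
    # bulk-EC (ECa) profile. ECa values are hypothetical; units follow the study.
    import numpy as np

    eca_profile = np.array([0.8, 1.5, 2.9, 4.2])   # hypothetical ECa readings

    ecw_predicted = 5.48 * eca_profile + 0.78       # pore-water EC
    cl_predicted = 1978 * eca_profile - 1398        # chloride concentration

    for eca, ecw, cl in zip(eca_profile, ecw_predicted, cl_predicted):
        print(f"ECa = {eca:4.1f} -> ECw = {ecw:5.1f}, Cl- = {cl:7.0f}")
    ```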

  18. Reliability of clinical guideline development using mail-only versus in-person expert panels.

    PubMed

    Washington, Donna L; Bernstein, Steven J; Kahan, James P; Leape, Lucian L; Kamberg, Caren J; Shekelle, Paul G

    2003-12-01

    Clinical practice guidelines quickly become outdated. One reason they might not be updated as often as needed is the expense of collecting expert judgment regarding the evidence. The RAND-UCLA Appropriateness Method is one commonly used method for collecting expert opinion. We tested whether a less expensive, mail-only process could substitute for the standard in-person process normally used. We performed a 4-way replication of the appropriateness panel process for coronary revascularization and hysterectomy, conducting 3 panels using the conventional in-person method and 1 panel entirely by mail. All indications were classified as inappropriate or not (to evaluate overuse), and coronary revascularization indications were classified as necessary or not (to evaluate underuse). Kappa statistics were calculated for the comparison in ratings from the 2 methods. Agreement beyond chance between the 2 panel methods ranged from moderate to substantial. The kappa statistic to detect overuse was 0.57 for coronary revascularization and 0.70 for hysterectomy. The kappa statistic to detect coronary revascularization underuse was 0.76. There were no cases in which coronary revascularization was considered inappropriate by 1 method, but necessary or appropriate by the other. Three of 636 (0.5%) hysterectomy cases were categorized as inappropriate by 1 method but appropriate by the other. The reproducibility of the overuse and underuse assessments from the mail-only compared with the conventional in-person conduct of expert panels in this application was similar to the underlying reproducibility of the process. This suggests a potential role for updating guidelines using an expert judgment process conducted entirely through the mail.
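
    Agreement beyond chance between the two panel methods is summarized here with kappa statistics. As a rough illustration of that calculation (the appropriate/inappropriate ratings below are hypothetical, and scikit-learn is used only for convenience):

    ```python
    # Sketch of a kappa calculation for agreement between two panel methods on an
    # appropriate/inappropriate classification. Ratings are hypothetical.
    from sklearn.metrics import cohen_kappa_score

    in_person = ["appropriate", "inappropriate", "appropriate", "appropriate",
                 "inappropriate", "appropriate", "inappropriate", "appropriate"]
    mail_only = ["appropriate", "inappropriate", "appropriate", "inappropriate",
                 "inappropriate", "appropriate", "inappropriate", "appropriate"]

    kappa = cohen_kappa_score(in_person, mail_only)
    print(f"kappa = {kappa:.2f}")   # 0.61-0.80 is conventionally "substantial" agreement
    ```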

  19. Gas-turbine critical research and advanced technology support project

    NASA Technical Reports Server (NTRS)

    Clark, J. S.; Hodge, P. E.; Lowell, C. E.; Anderson, D. N.; Schultz, D. F.

    1981-01-01

    A technology data base for utility gas turbine systems capable of burning coal derived fuels was developed. The following areas are investigated: combustion; materials; and system studies. A two stage test rig is designed to study the conversion of fuel bound nitrogen to NOx. The feasibility of using heavy fuels in catalytic combustors is evaluated. A statistically designed series of hot corrosion burner rig tests was conducted to measure the corrosion rates of typical gas turbine alloys with several fuel contaminants. Fuel additives and several advanced thermal barrier coatings are tested. Thermal barrier coatings used in conjunction with low critical alloys and those used in a combined cycle system in which the stack temperature was maintained above the acid corrosion temperature are also studied.

  20. An Analysis of the Tracking Performances of Two Straight-wing and Two Swept-wing Fighter Airplanes with Fixed Sights in a Standardized Test Maneuver

    NASA Technical Reports Server (NTRS)

    Ziff, Howard L; Rathert, George A; Gadeberg, Burnett L

    1953-01-01

    Standard air-to-air-gunnery tracking runs were conducted with F-51H, F8F-1, F-86A, and F-86E airplanes equipped with fixed gunsights. The tracking performances were documented over the normal operating range of altitude, Mach number, and normal acceleration factor for each airplane. The sources of error were studied by statistical analyses of the aim wander.

  1. Research to develop guidelines for cathodic protection of concentric neutral cables, volume 3

    NASA Astrophysics Data System (ADS)

    Hanck, J. A.; Nekoksa, G.

    1982-08-01

    Data associated with the corrosion of concentric neutral (CN) wires of direct buried primary cables were statistically analyzed, and guidelines for cathodic protection of CN wires for the electric utility industry were developed. The resulting cathodic protection guidelines are reported. Field tests conducted at 36 bellholes excavated in California, Oklahoma, and North Carolina are described. Details of the electrochemical, chemical, bacteriological, and sieve analyses of native soil and imported backfill samples are also included.

  2. Effectiveness of Educational Technology in Promoting Quality of Life and Treatment Adherence in Hypertensive People

    PubMed Central

    de Souza, Ana Célia Caetano; Moreira, Thereza Maria Magalhaes; de Oliveira, Edmar Souza; de Menezes, Anaíze Viana Bezerra; Loureiro, Aline Maria Oliveira; Silva, Camila Brasileiro de Araújo; Linard, Jair Gomes; de Almeida, Italo Lennon Sales; Mattos, Samuel Miranda; Borges, José Wicto Pereira

    2016-01-01

    The objective of this study was to test the effectiveness of an educational intervention with the use of educational technology (flipchart) to promote quality of life (QOL) and treatment adherence in people with hypertension. It was a before-and-after intervention study conducted with 116 hypertensive people registered in Primary Health Care Units. The educational interventions were conducted using the flipchart educational technology. Quality of life was assessed through the MINICHAL (lowest score = better QOL), and the QATSH (higher score = better adherence) was used to assess adherence to hypertension treatment. Both were measured before and after applying the intervention. In the analysis, we used Student's t-test for paired data. The average quality of life score was 11.66 ± 7.55 at baseline and 7.71 ± 5.72 two months after the intervention, showing a statistically significant reduction (p < 0.001) with a mean difference of 3.95. The average adherence to treatment was 98.03 ± 7.08 at baseline and 100.71 ± 6.88 two months after the intervention, a statistically significant increase (p < 0.001) with a mean difference of 2.68. The conclusion was that the educational intervention using the flipchart improved the total quality of life score and the scores of the physical and mental domains, and increased adherence to hypertension treatment in people with the disease. PMID:27851752
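
    The before/after comparison of questionnaire scores relies on Student's t-test for paired data. A minimal sketch of that test is shown below; the scores are hypothetical, not taken from the study.

    ```python
    # Sketch of the paired (Student's) t-test used for a before/after comparison
    # of questionnaire scores. The scores below are hypothetical.
    import numpy as np
    from scipy.stats import ttest_rel

    qol_before = np.array([14, 9, 18, 11, 7, 16, 12, 10])
    qol_after = np.array([10, 6, 12, 8, 5, 11, 9, 7])   # lower MINICHAL = better QOL

    t_stat, p_value = ttest_rel(qol_before, qol_after)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}, "
          f"mean difference = {np.mean(qol_before - qol_after):.2f}")
    ```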

  3. Effectiveness of Educational Technology in Promoting Quality of Life and Treatment Adherence in Hypertensive People.

    PubMed

    de Souza, Ana Célia Caetano; Moreira, Thereza Maria Magalhaes; Oliveira, Edmar Souza de; Menezes, Anaíze Viana Bezerra de; Loureiro, Aline Maria Oliveira; Silva, Camila Brasileiro de Araújo; Linard, Jair Gomes; Almeida, Italo Lennon Sales de; Mattos, Samuel Miranda; Borges, José Wicto Pereira

    2016-01-01

    The objective of this study was to test the effectiveness of an educational intervention with the use of educational technology (flipchart) to promote quality of life (QOL) and treatment adherence in people with hypertension. It was a before-and-after intervention study conducted with 116 hypertensive people registered in Primary Health Care Units. The educational interventions were conducted using the flipchart educational technology. Quality of life was assessed through the MINICHAL (lowest score = better QOL), and the QATSH (higher score = better adherence) was used to assess adherence to hypertension treatment. Both were measured before and after applying the intervention. In the analysis, we used Student's t-test for paired data. The average quality of life score was 11.66 ± 7.55 at baseline and 7.71 ± 5.72 two months after the intervention, showing a statistically significant reduction (p < 0.001) with a mean difference of 3.95. The average adherence to treatment was 98.03 ± 7.08 at baseline and 100.71 ± 6.88 two months after the intervention, a statistically significant increase (p < 0.001) with a mean difference of 2.68. The conclusion was that the educational intervention using the flipchart improved the total quality of life score and the scores of the physical and mental domains, and increased adherence to hypertension treatment in people with the disease.

  4. Effects of structured written feedback by cards on medical students' performance at Mini Clinical Evaluation Exercise (Mini-CEX) in an outpatient clinic.

    PubMed

    Haghani, Fariba; Hatef Khorami, Mohammad; Fakhari, Mohammad

    2016-07-01

    Feedback cards are recommended as a feasible tool for structured written feedback delivery in clinical education, while the effectiveness of this tool on medical students' performance is still questionable. The purpose of this study was to compare the effects of structured written feedback by cards as well as verbal feedback versus verbal feedback alone on the clinical performance of medical students at the Mini Clinical Evaluation Exercise (Mini-CEX) test in an outpatient clinic. This was a quasi-experimental study with pre- and post-test comprising four groups in two terms of medical students' externship. The students' performance was assessed through the Mini-Clinical Evaluation Exercise (Mini-CEX) as a clinical performance evaluation tool. Structured written feedback was given to the two experimental groups using the designed feedback cards as well as verbal feedback, while in the two control groups feedback was delivered verbally as the routine approach in clinical education. Using a consecutive sampling method, 62 externship students were enrolled in this study, and seven students were excluded from the final analysis due to their absence for three days. According to the ANOVA and post hoc Tukey test, no statistically significant difference was observed among the four groups at the pre-test, whereas a statistically significant difference was observed between the experimental and control groups at the post-test (F = 4.023, p = 0.012). The effect size of the structured written feedback on clinical performance was 0.19. Structured written feedback by cards could improve the performance of medical students in a statistical sense. Further studies must be conducted in other clinical courses with longer durations.
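
    The analysis here is a one-way ANOVA across the four groups followed by a post hoc Tukey test. A minimal sketch of that pair of steps is shown below; the post-test scores are hypothetical, and scipy >= 1.8 is assumed for tukey_hsd.

    ```python
    # Sketch of a one-way ANOVA followed by a post hoc Tukey HSD test across
    # four groups' post-test scores. Scores are hypothetical.
    from scipy.stats import f_oneway, tukey_hsd

    exp_1 = [7.5, 8.0, 7.0, 8.5, 7.8]
    exp_2 = [7.2, 7.9, 8.1, 7.6, 8.3]
    ctrl_1 = [6.1, 6.8, 6.4, 7.0, 6.2]
    ctrl_2 = [6.5, 6.0, 6.9, 6.3, 6.7]

    f_stat, p_value = f_oneway(exp_1, exp_2, ctrl_1, ctrl_2)
    print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.4f}")

    # Pairwise comparisons controlling the family-wise error rate
    print(tukey_hsd(exp_1, exp_2, ctrl_1, ctrl_2))
    ```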

  5. Is generic physical activity or specific exercise associated with motor abilities?

    PubMed

    Rinne, Marjo; Pasanen, Matti; Miilunpalo, Seppo; Mälkiä, Esko

    2010-09-01

    Evidence of the effect of leisure time physical activity (LTPA) modes on the motor abilities of a mature population is scarce. The purpose of this study was to compare the motor abilities of physically active and inactive men and women and to examine the associations of different exercise modes and former and recent LTPA (R-LTPA) with motor ability and various physical tests. The LTPA of the participants (men n = 69, women n = 79; aged 41-47 yr) was ascertained by a modified Physical Activity Readiness Questionnaire, including questions on the frequency, duration, and intensity of R-LTPA and former LTPA and on exercise modes. Motor abilities in terms of balance, agility, and coordination were assessed with a battery of nine tests supplemented with five physical fitness tests. Multiple statistical methods were used in analyses that were conducted separately for men and women. The MET-hours per week of R-LTPA correlated statistically significantly with the tests of agility and static balance (rs = -0.28, P = 0.022; rs = -0.25, P = 0.043, respectively) among men and with the static balance (rs = 0.41), 2-km walking (rs = 0.36), step squat (rs = 0.36) (P ≤ 0.001 for each), and static back endurance (rs = 0.25, P = 0.024) among women. In the stepwise regression among men, the most frequent statistically significant predictor was the playing of several games. For women, a history of LTPA for more than 3 yr was the strongest predictor for good results in almost all tests. Participants with long-term and regular LTPA had better motor performance, and a variety of games in particular improved components of motor ability.

  6. Awareness and Attitude of the General Public Toward HIV/AIDS in Coastal Karnataka

    PubMed Central

    Unnikrishnan, B; Mithra, Prasanna P; T, Rekha; B, Reshmi

    2010-01-01

    Objective: To assess the awareness and attitude of the general public toward people living with HIV/AIDS (PLWHA) in Mangalore, a city in Coastal Karnataka. Design: Community-based cross-sectional study. Materials and Methods: The study population included 630 individuals aged 18 years and above. The information was collected using a semi-structured, pre-tested questionnaire. The questionnaire consisted of 24 questions regarding awareness of the modes of transmission of HIV/AIDS (nine questions) and questions to assess the attitude toward People Living With HIV/AIDS (PLWHA) (15 questions). The statistical package SPSS version 11.5 was used, the Chi-square test was conducted, and P < 0.05 was considered statistically significant. Results: About one-third of the study population thought that one could get infected by merely touching an HIV positive individual. Approximately 45% stated that they would dismiss their maid on finding out her HIV positive status. About 54% were willing to undergo the HIV test. The respondents with less than secondary school education had a discriminatory attitude toward HIV positive people, with regard to them deserving to suffer, dismissing an HIV positive maid, hesitating to sit next to an HIV positive person on the bus, divorcing the infected spouse, and willingness to get tested for HIV, which was found to be statistically significant. Conclusion: Stigma among the general public was mostly due to fear of contracting the illness. Stigma does exist to significant degrees among the educated people, which was suggested by about 45% of the participants being willing to undergo the HIV test. There is a need for greater attempts toward making information regarding HIV/AIDS available to every individual of the society. PMID:20606940

  7. Power of mental health nursing research: a statistical analysis of studies in the International Journal of Mental Health Nursing.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2013-02-01

    Having sufficient power to detect effect sizes of an expected magnitude is a core consideration when designing studies in which inferential statistics will be used. The main aim of this study was to investigate the statistical power in studies published in the International Journal of Mental Health Nursing. From volumes 19 (2010) and 20 (2011) of the journal, studies were analysed for their power to detect small, medium, and large effect sizes, according to Cohen's guidelines. The power of the 23 studies included in this review to detect small, medium, and large effects was 0.34, 0.79, and 0.94, respectively. In 90% of papers, no adjustments for experiment-wise error were reported. With a median of nine inferential tests per paper, the mean experiment-wise error rate was 0.51. A priori power analyses were only reported in 17% of studies. Although effect sizes for correlations and regressions were routinely reported, effect sizes for other tests (χ²-tests, t-tests, ANOVA/MANOVA) were largely absent from the papers. All types of effect sizes were infrequently interpreted. Researchers are strongly encouraged to conduct power analyses when designing studies, and to avoid scattergun approaches to data analysis (i.e. undertaking large numbers of tests in the hope of finding 'significant' results). Because reviewing effect sizes is essential for determining the clinical significance of study findings, researchers would better serve the field of mental health nursing if they reported and interpreted effect sizes. © 2012 The Authors. International Journal of Mental Health Nursing © 2012 Australian College of Mental Health Nurses Inc.
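
    The experiment-wise (family-wise) error rate discussed here follows directly from running k independent tests at α = .05: it is 1 − (1 − α)^k. A short worked calculation (generic, not taken from the paper's data) illustrates how quickly it grows and what a Bonferroni correction would do:

    ```python
    # Worked calculation of the family-wise Type I error rate when k independent
    # tests are each run at alpha = .05, plus the Bonferroni per-test alpha that
    # would keep the family-wise rate at .05.
    alpha = 0.05
    for k in (1, 5, 9, 14):
        familywise = 1 - (1 - alpha) ** k
        print(f"k = {k:2d}: family-wise error = {familywise:.2f}, "
              f"Bonferroni per-test alpha = {alpha / k:.4f}")
    # Nine independent tests already push the family-wise rate to about 0.37,
    # and it reaches roughly 0.5 at around 14 tests.
    ```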

  8. Double Blind Test For Bio-Stimulation Effects On Pain Relief By Diode Laser

    NASA Astrophysics Data System (ADS)

    Saeki, Norio; Sembokuya, Iwajiro; Arakawa, Kazuo; Fujimasa, Iwao; Mabuchi, Kunihiko; Abe, Yuusuke; Atsumi, Kazuhiko

    1989-09-01

    The bio-stimulation effect of semiconductor laser on therapeutic pain relief was investigated by conducting a double blind test performed on more than one hundred patient subjects suffering from various neuralgias. A compact therapeutic laser unit with two laser probes, each having 60 mW of power, was developed and utilized for the experiment. Each probe was driven by either the active or the dummy source selected randomly, and the results were stored in memory for statistical processing. The therapeutic treatments, including active and dummy treatments, were performed on 102 subjects. The pain relief effects were confirmed for 85.5% of the subjects.

  9. Effects of size on three-cone bit performance in laboratory drilled shale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, A.D.; DiBona, B.G.; Sandstrom, J.L.

    1982-09-01

    The effects of size on the performance of 3-cone bits were measured during laboratory drilling tests in shale at simulated downhole conditions. Four Reed HP-SM 3-cone bits with diameters of 6 1/2, 7 7/8, 9 1/2 and 11 inches were used to drill Mancos shale with water-based mud. The tests were conducted at constant borehole pressure, two conditions of hydraulic horsepower per square inch of bit area, three conditions of rotary speed and four conditions of weight-on-bit per inch of bit diameter. The resulting penetration rates and torques were measured. Statistical techniques were used to analyze the data.

  10. Performance Review of Harmony Search, Differential Evolution and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Mohan Pandey, Hari

    2017-08-01

    Metaheuristic algorithms are effective in the design of intelligent systems. These algorithms are widely applied to solve complex optimization problems, including image processing, big data analytics, language processing, pattern recognition and others. This paper presents a performance comparison of three meta-heuristic algorithms, namely Harmony Search, Differential Evolution, and Particle Swarm Optimization. These algorithms originated from different fields of meta-heuristics yet share a common objective. The standard benchmark functions are used for the simulation. Statistical tests are conducted to derive a conclusion on the performance. The key motivation for conducting this research is to categorize the algorithms' computational capabilities, which might be useful to researchers.
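
    The record does not state which statistical tests were applied. One common choice when comparing two stochastic optimizers across a set of benchmark functions is the Wilcoxon signed-rank test on their paired best objective values; the sketch below is offered only under that assumption, with hypothetical values.

    ```python
    # Hedged sketch: comparing two optimizers' best objective values across a set
    # of benchmark functions with the Wilcoxon signed-rank test. The values are
    # hypothetical, and the choice of test is an assumption, not the paper's method.
    from scipy.stats import wilcoxon

    best_hs = [1.2e-3, 4.5e-2, 3.1e-1, 2.2e-4, 7.8e-2, 5.5e-3, 1.9e-1, 6.4e-2]
    best_de = [8.9e-4, 3.2e-2, 2.7e-1, 1.5e-4, 8.1e-2, 4.0e-3, 1.6e-1, 5.1e-2]

    stat, p_value = wilcoxon(best_hs, best_de)
    print(f"Wilcoxon W = {stat:.1f}, p = {p_value:.3f}")
    ```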

  11. Anatomy of a Jam

    NASA Astrophysics Data System (ADS)

    Tang, Junyao; Sagdighpour, Sepehr; Behringer, Robert

    2008-11-01

    Flow in a hopper is both a fertile testing ground for understanding models for granular flow and industrially highly relevant. However, the formation of arches in the hopper opening, which halts the hopper flow unpredictably, is still poorly understood. In this work, we conduct two-dimensional hopper experiments using photoelastic particles, and characterize these experiments in terms of a statistical model that considers the probability of jamming. The distribution of the hopper flow times exhibits an exponential decay, which shows the existence of a characteristic "mean flow time." We then conduct further experiments to examine the connection between the mean flow time, the hopper geometry, the local density, and geometric structures and forces at the particle scale.
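
    If flow durations before jamming are exponentially distributed, their sample mean estimates the characteristic mean flow time. A small sketch of that estimate, with a crude goodness-of-fit check, is shown below; the durations are hypothetical.

    ```python
    # Sketch of estimating the characteristic mean flow time from flow durations,
    # assuming (as reported) an exponential distribution. Durations are hypothetical.
    import numpy as np
    from scipy.stats import expon, kstest

    flow_times = np.array([3.1, 7.4, 1.2, 12.9, 5.6, 2.3, 9.8, 4.4, 6.1, 0.9])

    mean_flow_time = flow_times.mean()   # MLE of the exponential mean (loc fixed at 0)
    print(f"mean flow time = {mean_flow_time:.2f} s")

    # Crude goodness-of-fit check against the fitted exponential
    print(kstest(flow_times, expon(scale=mean_flow_time).cdf))
    ```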

  12. Statistical Analysis of CO2 Exposed Wells to Predict Long Term Leakage through the Development of an Integrated Neural-Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Boyun; Duguid, Andrew; Nygaard, Ronar

    The objective of this project is to develop a computerized statistical model with the Integrated Neural-Genetic Algorithm (INGA) for predicting the probability of long-term leak of wells in CO2 sequestration operations. This objective has been accomplished by conducting research in three phases: 1) data mining of CO2-exposed wells, 2) INGA computer model development, and 3) evaluation of the predictive performance of the computer model with data from field tests. Data mining was conducted for 510 wells in two CO2 sequestration projects in the Texas Gulf Coast region: the Hasting West field and the Oyster Bayou field in southern Texas. Missing wellbore integrity data were estimated using an analytical and Finite Element Method (FEM) model. The INGA was first tested for convergence and computing efficiency on the obtained high-dimensional data set. It was concluded that the INGA can handle the gathered data set with good accuracy and reasonable computing time after a reduction of dimension with a grouping mechanism. A computerized statistical model with the INGA was then developed based on data pre-processing and grouping. Comprehensive training and testing of the model were carried out to ensure that the model is accurate and efficient enough for predicting the probability of long-term leak of wells in CO2 sequestration operations. The Cranfield site in southern Mississippi was selected as the test site. Observation wells CFU31F2 and CFU31F3 were used for pressure-testing, formation-logging, and cement-sampling. Tools run in the wells include Isolation Scanner, Slim Cement Mapping Tool (SCMT), Cased Hole Formation Dynamics Tester (CHDT), and Mechanical Sidewall Coring Tool (MSCT). Analyses of the obtained data indicate no leak of CO2 across the cap zone, while it is evident that the well cement sheath was invaded by the CO2 from the storage zone. This observation is consistent with the result predicted by the INGA model, which indicates the well has a CO2 leak-safe probability of 72%. This comparison implies that the developed INGA model is valid for future use in predicting well leak probability.

  13. Evaluation of the Thermo Scientific™ SureTect™ Listeria species Assay.

    PubMed

    Cloke, Jonathan; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko

    2014-03-01

    The Thermo Scientific™ SureTect™ Listeria species Assay is a new real-time PCR assay for the detection of all species of Listeria in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods℠ program to validate the SureTect Listeria species Assay in comparison to the reference method detailed in International Organization for Standardization 11290-1:1996 including amendment 1:2004 in a variety of foods plus plastic and stainless steel. The food matrixes validated were smoked salmon, processed cheese, fresh bagged spinach, cantaloupe, cooked prawns, cooked sliced turkey meat, cooked sliced ham, salami, pork frankfurters, and raw ground beef. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, fresh bagged spinach, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled independent laboratory study by the University of Guelph, Canada. Using probability of detection statistical analysis, a significant difference in favour of the SureTect assay was demonstrated between the SureTect and reference method for high-level spiked samples of pork frankfurters, smoked salmon, cooked prawns, and stainless steel, and for low-level spiked samples of salami. For all other matrixes, no significant difference was seen between the two methods during the study. Inclusivity testing was conducted with 68 different isolates of Listeria species, all of which were detected by the SureTect Listeria species Assay. None of the 33 exclusivity isolates were detected by the SureTect Listeria species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation, which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay shelf life.

  14. A model for predicting thermal properties of asphalt mixtures from their constituents

    NASA Astrophysics Data System (ADS)

    Keller, Merlin; Roche, Alexis; Lavielle, Marc

    Numerous theoretical and experimental approaches have been developed to predict the effective thermal conductivity of composite materials such as polymers, foams, epoxies, soils and concrete. None of these models has been applied to asphalt concrete. This study attempts to develop a model to predict the thermal conductivity of asphalt concrete from its constituents, which would benefit the asphalt industry by reducing costs and saving time on laboratory testing. Laboratory testing would no longer be required if a pavement mix with the desired thermal properties could be created at the design stage by selecting the correct constituents. This study investigated six existing predictive models for applicability to asphalt mixtures, and four standard mathematical techniques were used to develop a regression model to predict the effective thermal conductivity. The effective thermal conductivities of 81 asphalt specimens were used as the response variables, and the thermal conductivities and volume fractions of their constituents were used as the predictors. The statistical analyses conducted showed that the measured thermal conductivities of the mixtures are affected by the bitumen and aggregate content, but not by the air content. Contrarily, the predicted data for some investigated models are highly sensitive to air voids, but not to bitumen and/or aggregate content. Additionally, the comparison of the experimental data with the analytical data showed that none of the existing models gave satisfactory results; on the other hand, two regression models (Exponential 1* and Linear 3*) are promising for asphalt concrete.

  15. Assessing the applicability of the Taguchi design method to an interrill erosion study

    NASA Astrophysics Data System (ADS)

    Zhang, F. B.; Wang, Z. L.; Yang, M. Y.

    2015-02-01

    Full-factorial experimental designs have been used in soil erosion studies, but are time-, cost- and labor-intensive, and sometimes they are impossible to conduct due to the increasing number of factors and their levels to consider. The Taguchi design is a simple, economical and efficient statistical tool that only uses a portion of the total possible factorial combinations to obtain the results of a study. Soil erosion studies that use the Taguchi design are scarce and no comparisons with full-factorial designs have been made. In this paper, a series of simulated rainfall experiments using a full-factorial design of five slope lengths (0.4, 0.8, 1.2, 1.6, and 2 m), five slope gradients (18%, 27%, 36%, 48%, and 58%), and five rainfall intensities (48, 62.4, 102, 149, and 170 mm h⁻¹) were conducted. Validation of the applicability of a Taguchi design to interrill erosion experiments was achieved by extracting data from the full dataset according to a theoretical Taguchi design. The statistical parameters for the mean quasi-steady state erosion and runoff rates of each test, the optimum conditions for producing maximum erosion and runoff, and the main effect and percentage contribution of each factor obtained from the full-factorial and Taguchi designs were compared. Both designs generated almost identical results. Using the experimental data from the Taguchi design, it was possible to accurately predict the erosion and runoff rates under the conditions that had been excluded from the Taguchi design. All of the results obtained from analyzing the experimental data for both designs indicated that the Taguchi design could be applied to interrill erosion studies and could replace full-factorial designs. This would save time, labor and costs by generally reducing the number of tests to be conducted.
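
    For three factors at five levels, a Taguchi-style L25 orthogonal array needs only 25 runs instead of the 125 of the full factorial. The sketch below uses one standard modular-arithmetic construction to contrast the two; this generic array is not necessarily the exact design used in the study.

    ```python
    # Contrast the full factorial (5 x 5 x 5 = 125 runs) with a 25-run orthogonal
    # array for three five-level factors, built by a standard modular construction.
    # This generic array is illustrative, not the study's exact design.
    from itertools import product

    levels = range(5)   # indices into the 5 slope lengths / gradients / intensities

    full_factorial = list(product(levels, repeat=3))
    taguchi_l25 = [(a, b, (a + b) % 5) for a, b in product(levels, repeat=2)]

    print(len(full_factorial), "runs in the full factorial")
    print(len(taguchi_l25), "runs in the orthogonal-array subset")
    # In the 25-run subset, every pair of factors still sees each of its 25 level
    # combinations exactly once, which is what keeps main effects estimable.
    ```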

  16. A comparison of body image concern in candidates for rhinoplasty and therapeutic surgery.

    PubMed

    Hashemi, Seyed Amirhosein Ghazizadeh; Edalatnoor, Behnoosh; Edalatnoor, Behnaz; Niksun, Omid

    2017-09-01

    Body dysmorphic disorder among patients referring for cosmetic surgeries is a disorder that, if not diagnosed by a physician, can cause irreparable damage to the doctor and the patient. The aim of this study was to compare body image concern in candidates for rhinoplasty and therapeutic surgery. This was a cross-sectional study conducted on 212 patients referring to Loghman Hospital of Tehran for rhinoplasty and therapeutic surgery during the period from 2014 through 2016. For each person in a cosmetic surgery group, a person of the same sex and age in a therapeutic surgery group was matched, and the study was conducted on 60 subjects in the rhinoplasty group and 62 patients in the therapeutic surgery group. Then, the Body Image Concern Inventory and demographic data were completed by all patients, and the level of body image concern in both groups was compared. Statistical analysis was conducted using SPSS 16, with the Chi-square test as well as the paired-samples t-test. A P-value of less than 0.05 was considered statistically significant. In this study, 122 patients (49 males and 73 females) with a mean age of 27.1±7.3 years (range, 18 to 55 years) were investigated. Sixty subjects were candidates for rhinoplasty and 62 subjects for therapeutic surgery. Candidates for rhinoplasty were mostly male (60%) and single (63.3%). Results of the t-test demonstrated that body image concern and body dysmorphic disorder were higher in the rhinoplasty group compared to the therapeutic group (p<0.05). Results of this study showed that the frequency of rhinoplasty candidates is higher in single male subjects. In addition, body image concern was higher in rhinoplasty candidates compared to candidates for other surgeries. Careful visiting and interviewing of people referred for rhinoplasty is very important in order to measure their level of body image concern, diagnose any existing disorder, and consider the required treatment.

  17. Analysis of repeated measurement data in the clinical trials

    PubMed Central

    Singh, Vineeta; Rana, Rakesh Kumar; Singhal, Richa

    2013-01-01

    Statistics is an integral part of clinical trials: statistical elements span trial design, data monitoring, analysis, and reporting, and a solid understanding of statistical concepts by clinicians improves the comprehension and the resulting quality of clinical trials. In biomedical research, researchers frequently use the t-test and ANOVA to compare means between groups of interest irrespective of the nature of the data. In clinical trials, however, data are often recorded on the same patients at more than two time points. In such situations the standard ANOVA procedures are not appropriate, because they do not account for the dependencies between observations within subjects; repeated-measures ANOVA should be used instead. In this article the application of one-way repeated-measures ANOVA is demonstrated using SPSS (Statistical Package for the Social Sciences) Version 15.0 on data collected at four time points (day 0, day 15, day 30, and day 45) of a multicentre clinical trial conducted on Pandu Roga (~Iron Deficiency Anemia) with the Ayurvedic formulation Dhatrilauha. PMID:23930038
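
    As a companion to the SPSS demonstration described above, the sketch below runs a one-way repeated-measures ANOVA in Python with statsmodels' AnovaRM; the subjects, time points, and values are simulated stand-ins, not the trial data.

        # One-way repeated-measures ANOVA with one within-subject factor (day).
        # AnovaRM accounts for the correlation of repeated measures on a subject;
        # the haemoglobin-like values below are simulated, not the trial data.
        import numpy as np
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        rng = np.random.default_rng(1)
        days = [0, 15, 30, 45]

        rows = []
        for subject in range(1, 21):
            base = rng.normal(9.5, 1.0)            # subject-specific baseline
            for t, day in enumerate(days):
                rows.append({"subject": subject, "day": day,
                             "hb": base + 0.4 * t + rng.normal(0, 0.3)})
        df = pd.DataFrame(rows)

        res = AnovaRM(data=df, depvar="hb", subject="subject", within=["day"]).fit()
        print(res)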

  18. Patient perceptions of receiving test results via online portals: a mixed-methods study

    PubMed Central

    Giardina, Traber D; Baldwin, Jessica; Nystrom, Daniel T; Sittig, Dean F; Singh, Hardeep

    2018-01-01

    Abstract Objective Online portals provide patients with access to their test results, but it is unknown how patients use these tools to manage results and what information is available to promote understanding. We conducted a mixed-methods study to explore patients’ experiences and preferences when accessing their test results via portals. Materials and Methods We conducted 95 interviews (13 semistructured and 82 structured) with adults who viewed a test result in their portal between April 2015 and September 2016 at 4 large outpatient clinics in Houston, Texas. Semistructured interviews were coded using content analysis and transformed into quantitative data and integrated with the structured interview data. Descriptive statistics were used to summarize the structured data. Results Nearly two-thirds (63%) did not receive any explanatory information or test result interpretation at the time they received the result, and 46% conducted online searches for further information about their result. Patients who received an abnormal result were more likely to experience negative emotions (56% vs 21%; P = .003) and more likely to call their physician (44% vs 15%; P = .002) compared with those who received normal results. Discussion Study findings suggest that online portals are not currently designed to present test results to patients in a meaningful way. Patients experienced negative emotions often with abnormal results, but sometimes even with normal results. Simply providing access via portals is insufficient; additional strategies are needed to help patients interpret and manage their online test results. Conclusion Given the absence of national guidance, our findings could help strengthen policy and practice in this area and inform innovations that promote patient understanding of test results. PMID:29240899
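
    The abnormal-versus-normal comparisons reported above (for example, 56% vs 21% experiencing negative emotions) are comparisons of proportions. The sketch below shows such a test with a chi-square statistic; the cell counts are invented, since the underlying frequencies are not given in the abstract.

        # Hedged sketch of a two-group comparison of proportions via chi-square;
        # the 2x2 counts are hypothetical, not the study data.
        from scipy.stats import chi2_contingency

        # rows: abnormal result, normal result; columns: negative emotion yes / no
        table = [[28, 22],
                 [11, 41]]
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")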

  19. Evaluation of the 3M™ Molecular Detection Assay (MDA) 2 - Salmonella for the Detection of Salmonella spp. in Select Foods and Environmental Surfaces: Collaborative Study, First Action 2016.01.

    PubMed

    Bird, Patrick; Flannery, Jonathan; Crowley, Erin; Agin, James R; Goins, David; Monteroso, Lisa

    2016-07-01

    The 3M™ Molecular Detection Assay (MDA) 2 - Salmonella uses real-time isothermal technology for the rapid and accurate detection of Salmonella spp. from enriched select food, feed, and food-process environmental samples. The 3M MDA 2 - Salmonella was evaluated in a multilaboratory collaborative study using an unpaired study design. The 3M MDA 2 - Salmonella was compared to the U.S. Food and Drug Administration Bacteriological Analytical Manual Chapter 5 reference method for the detection of Salmonella in creamy peanut butter, and to the U.S. Department of Agriculture, Food Safety and Inspection Service Microbiology Laboratory Guidebook Chapter 4.08 reference method "Isolation and Identification of Salmonella from Meat, Poultry, Pasteurized Egg and Catfish Products and Carcass and Environmental Samples" for the detection of Salmonella in raw ground beef (73% lean). Technicians from 16 laboratories located within the continental United States participated. Each matrix was evaluated at three levels of contamination: an uninoculated control level (0 CFU/test portion), a low inoculum level (0.2-2 CFU/test portion), and a high inoculum level (2-5 CFU/test portion). Statistical analysis was conducted according to the probability of detection (POD) statistical model. Results obtained for the low inoculum level test portions produced difference in collaborator POD values of 0.03 (95% confidence interval, -0.10 to 0.16) for raw ground beef and 0.06 (95% confidence interval, -0.06 to 0.18) for creamy peanut butter, indicating no statistically significant difference between the candidate and reference methods.
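
    The probability of detection (POD) is the fraction of test portions a method reports as positive, and dPOD is the difference between the candidate and reference methods; a confidence interval for dPOD that spans zero indicates no statistically significant difference. The sketch below is a simplified, single-pool version with a Wald interval and invented counts; the full AOAC POD model additionally accounts for variation across collaborating laboratories.

        # Simplified POD / dPOD sketch (illustrative counts, Wald interval only).
        import math

        def dpod_ci(x1, n1, x2, n2, z=1.96):
            p1, p2 = x1 / n1, x2 / n2              # POD of each method
            d = p1 - p2
            se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
            return d, d - z * se, d + z * se

        # hypothetical low-level counts: 45/96 positives (candidate) vs 42/96 (reference)
        d, lo, hi = dpod_ci(45, 96, 42, 96)
        print(f"dPOD = {d:.2f} (95% CI {lo:.2f} to {hi:.2f})")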

  20. The effect of four-phase teaching method on midwifery students’ emotional intelligence in managing the childbirth

    PubMed Central

    Mohamadirizi, Soheila; Fahami, Fariba; Bahadoran, Parvin; Ehsanpour, Soheila

    2015-01-01

    Background: Active teaching methods are widely used in medical education. The aim of this study was to determine the effect of the four-phase teaching method on midwifery students’ emotional intelligence (EQ) in managing childbirth. Materials and Methods: This experimental study was performed in 2013 at Isfahan University of Medical Sciences. Thirty midwifery students were selected through a random sampling method. The EQ questionnaire (43Q) was completed by both groups before and after the education. The collected data were analyzed using SPSS 14 with the independent t-test and the paired t-test; the significance level was set at <0.05. Results: The independent t-test showed no significant difference between the EQ scores of the experimental and control groups before the intervention, whereas a statistically significant difference was observed between the two groups after the intervention (P = 0.009). The paired t-test showed a statistically significant within-group change in EQ scores after the intervention in both the four-phase group (P = 0.005) and the control group (P = 0.018). Furthermore, self-efficacy increased by 66% in the experimental group and by 13% in the control group (P = 0.024). Conclusion: The four-phase teaching method can increase the EQ levels of midwifery students; its use is therefore recommended as an effective learning method. PMID:26097861

  1. 41 CFR 105-50.202-2 - Preparation of or assistance in the conduct of statistical or other studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Preparation of or... Available From General Services Administration § 105-50.202-2 Preparation of or assistance in the conduct of statistical or other studies. (a) This service includes preparation of statistical or other studies and...

  2. 41 CFR 105-50.202-2 - Preparation of or assistance in the conduct of statistical or other studies.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false Preparation of or... Available From General Services Administration § 105-50.202-2 Preparation of or assistance in the conduct of statistical or other studies. (a) This service includes preparation of statistical or other studies and...

  3. Dental Composite Restorations and Neuropsychological Development in Children: Treatment Level Analysis from a Randomized Clinical Trial

    PubMed Central

    Maserejian, Nancy N.; Trachtenberg, Felicia L.; Hauser, Russ; McKinlay, Sonja; Shrader, Peter; Bellinger, David C.

    2012-01-01

    Background Resin-based dental restorations may intra-orally release their components and bisphenol A. Gestational bisphenol A exposure has been associated with poorer executive functioning in children. Objectives To examine whether exposure to resin-based composite restorations is associated with neuropsychological development in children. Methods Secondary analysis of treatment level data from the New England Children’s Amalgam Trial, a 2-group randomized safety trial conducted from 1997–2006. Children (N=534) aged 6–10 y with >2 posterior tooth caries were randomized to treatment with amalgam or resin-based composites (bisphenol-A-diglycidyl-dimethacrylate-composite for permanent teeth; urethane dimethacrylate-based polyacid-modified compomer for primary teeth). Neuropsychological function at 4- and 5-year follow-up (N=444) was measured by a battery of tests of executive function, intelligence, memory, visual-spatial skills, verbal fluency, and problem-solving. Multivariable generalized linear regression models were used to examine the association between composite exposure levels and changes in neuropsychological test scores from baseline to follow-up. For comparison, data on children randomized to amalgam treatment were similarly analyzed. Results With greater exposure to either dental composite material, results were generally consistent in the direction of slightly poorer changes in tests of intelligence, achievement or memory, but there were no statistically significant associations. For the four primary measures of executive function, scores were slightly worse with greater total composite exposure, but statistically significant only for the test of Letter Fluency (10-surface-years β= −0.8, SE=0.4, P=0.035), and the subtest of color naming (β= −1.5, SE=0.5, P=0.004) in the Stroop Color-Word Interference Test. Multivariate analysis of variance confirmed that the negative associations between composite level and executive function were not statistically significant (MANOVA P=0.18). Results for greater amalgam exposure were mostly nonsignificant in the opposite direction of slightly improved scores over follow-up. Conclusions Dental composite restorations had statistically insignificant associations of small magnitude with impairments in neuropsychological test change scores over 4- or 5-years of follow-up in this trial. PMID:22906860

  4. Postoperative improvement in DASH score, clinical findings, and nerve conduction velocity in patients with cubital tunnel syndrome.

    PubMed

    Ido, Yoshikazu; Uchiyama, Shigeharu; Nakamura, Koichi; Itsubo, Toshiro; Hayashi, Masanori; Hata, Yukihiko; Imaeda, Toshihiko; Kato, Hiroyuki

    2016-06-06

    We investigated a recovery pattern in subjective and objective measures among 52 patients with cubital tunnel syndrome after anterior subcutaneous transposition of the ulnar nerve. Disabilities of the Arm, Shoulder and Hand (DASH) score (primary outcome), numbness score, grip and pinch strength, Semmes-Weinstein (SW) score, static 2-point discrimination (2PD) score, and motor conduction velocity (MCV) stage were examined preoperatively and 1, 3, 6, 12, and ≥24 months postoperatively. Statistical analyses were conducted to evaluate how each variable improved after surgery. A linear mixed-effects model was used for continuous variables (DASH score, numbness, grip and pinch strength), and a proportional odds model was used for categorical variables (SW and 2PD tests and MCV stages). DASH score significantly improved by 6 months. Significant recovery in numbness and SW test scores occurred at 1 month. Grip and pinch strength, 2PD test scores, and MCV stage improved by 3 months. DASH scores and numbness recovered regardless of age, sex, or disease severity. It was still unclear if both subjective and objective measures improved beyond 1-year postoperatively. These data are helpful for predicting postoperative recovery patterns and tend to be most important for patients prior to surgery.
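
    For the continuous outcomes, the abstract refers to a linear mixed-effects model over repeated postoperative visits. The sketch below shows that kind of model (a repeatedly measured score with a random intercept per patient) using statsmodels; the data and the logarithmic time trend are assumptions for illustration, not the study's fitted model.

        # Linear mixed-effects sketch: DASH-like score over follow-up months with
        # a random intercept per patient; all values are simulated.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        months = [0, 1, 3, 6, 12, 24]

        rows = []
        for pid in range(52):
            intercept = rng.normal(40, 8)          # patient-specific baseline
            for m in months:
                rows.append({"patient": pid, "month": m,
                             "dash": intercept - 1.2 * np.log1p(m) + rng.normal(0, 3)})
        df = pd.DataFrame(rows)

        model = smf.mixedlm("dash ~ np.log1p(month)", data=df, groups=df["patient"])
        print(model.fit().summary())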

  5. Usability Assessment of the Missouri Cancer Registry's Published Interactive Mapping Reports: Round One.

    PubMed

    Ben Ramadan, Awatef Ahmed; Jackson-Thompson, Jeannette; Schmaltz, Chester Lee

    2017-08-04

    Many users of spatial data have difficulty interpreting information in health-related spatial reports. The Missouri Cancer Registry and Research Center (MCR-ARC) has produced interactive reports for several years. These reports have never been tested for usability. The aims of this study were to: (1) conduct a multi-approach usability testing study to understand ease of use (user friendliness) and user satisfaction; and (2) evaluate the usability of MCR-ARC's published InstantAtlas reports. An institutional review board (IRB)-approved, mixed-methodology usability testing study was conducted using a convenience sample of health professionals. A recruiting email was sent to faculty in the Master of Public Health program and to faculty and staff in the Department of Health Management and Informatics at the University of Missouri-Columbia. The study included 7 participants. The test included a pretest questionnaire, a multi-task usability test, and the System Usability Scale (SUS). Also, the researchers collected participants' comments about the tested maps immediately after every trial. Software was used to record the computer screen during the trial and the participants' spoken comments. Several performance and usability metrics were measured to evaluate the usability of MCR-ARC's published mapping reports. Of the 10 assigned tasks, 6 reached a 100% completion success rate, and this outcome was relative to the complexity of the tasks. The simple tasks were handled more efficiently than the complicated tasks. The SUS score ranged from 20 to 100 points, with an average of 62.7 points and a median of 50.5 points. The tested maps' effectiveness outcomes were better than the efficiency and satisfaction outcomes. There was a statistically significant relationship between the subjects' performance on the study test and the users' previous experience with geographic information system (GIS) tools (P=.03). There were no statistically significant relationships between users' performance and satisfaction and their education level, work type, or previous experience in health care (P>.05). There were strong positive correlations between the three measured usability elements. The tested maps should undergo extensive refining and updating to overcome all the discovered usability issues and to meet the perspectives and needs of the tested maps' potential users. The study results might convey the perspectives of academic health professionals toward GIS health data. We need to conduct a second-round usability study with public health practitioners and cancer professionals who use GIS tools on a routine basis. Usability testing should be conducted before and after releasing MCR-ARC's maps in the future. ©Awatef Ahmed Ben Ramadan, Jeannette Jackson-Thompson, Chester Lee Schmaltz. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 04.08.2017.
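
    The SUS values reported here follow a fixed scoring rule: ten items rated 1-5, with odd items contributing (rating - 1) and even items contributing (5 - rating), and the sum scaled by 2.5 onto a 0-100 range. A brief sketch with one invented set of ratings:

        # System Usability Scale (SUS) scoring; the ratings are hypothetical.
        def sus_score(ratings):
            assert len(ratings) == 10, "SUS has exactly ten items"
            total = 0
            for i, r in enumerate(ratings, start=1):
                total += (r - 1) if i % 2 == 1 else (5 - r)
            return total * 2.5   # scale the 0-40 sum onto 0-100

        print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))   # one hypothetical participant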

  6. Usability Assessment of the Missouri Cancer Registry’s Published Interactive Mapping Reports: Round One

    PubMed Central

    Ben Ramadan, Awatef Ahmed; Jackson-Thompson, Jeannette; Schmaltz, Chester Lee

    2017-01-01

    Background Many users of spatial data have difficulty interpreting information in health-related spatial reports. The Missouri Cancer Registry and Research Center (MCR-ARC) has produced interactive reports for several years. These reports have never been tested for usability. Objective The aims of this study were to: (1) conduct a multi-approach usability testing study to understand ease of use (user friendliness) and user satisfaction; and (2) evaluate the usability of MCR-ARC’s published InstantAtlas reports. Methods An institutional review board (IRB)-approved, mixed-methodology usability testing study was conducted using a convenience sample of health professionals. A recruiting email was sent to faculty in the Master of Public Health program and to faculty and staff in the Department of Health Management and Informatics at the University of Missouri-Columbia. The study included 7 participants. The test included a pretest questionnaire, a multi-task usability test, and the System Usability Scale (SUS). Also, the researchers collected participants’ comments about the tested maps immediately after every trial. Software was used to record the computer screen during the trial and the participants’ spoken comments. Several performance and usability metrics were measured to evaluate the usability of MCR-ARC’s published mapping reports. Results Of the 10 assigned tasks, 6 reached a 100% completion success rate, and this outcome was relative to the complexity of the tasks. The simple tasks were handled more efficiently than the complicated tasks. The SUS score ranged from 20 to 100 points, with an average of 62.7 points and a median of 50.5 points. The tested maps’ effectiveness outcomes were better than the efficiency and satisfaction outcomes. There was a statistically significant relationship between the subjects’ performance on the study test and the users’ previous experience with geographic information system (GIS) tools (P=.03). There were no statistically significant relationships between users’ performance and satisfaction and their education level, work type, or previous experience in health care (P>.05). There were strong positive correlations between the three measured usability elements. Conclusions The tested maps should undergo extensive refining and updating to overcome all the discovered usability issues and to meet the perspectives and needs of the tested maps’ potential users. The study results might convey the perspectives of academic health professionals toward GIS health data. We need to conduct a second-round usability study with public health practitioners and cancer professionals who use GIS tools on a routine basis. Usability testing should be conducted before and after releasing MCR-ARC’s maps in the future. PMID:28778842

  7. Prevalence of premenstrual syndrome and its relationship to depressive symptoms in first-year university students

    PubMed Central

    Acikgoz, Ayla; Dayi, Ayfer; Binbay, Tolga

    2017-01-01

    Objectives: To determine the prevalence of and factors influencing premenstrual syndrome (PMS) in first-year students at a university health campus and to evaluate the relationship between depression and PMS. Methods: This cross-sectional study was conducted on a population of 618 university students from March to June 2016 at Dokuz Eylül University, Izmir, Turkey. Data were collected using the Premenstrual Syndrome Scale (PMSS), the Beck Depression Inventory, and a Student Identification Form, and were analyzed with Version 20.0 of the Statistical Package for the Social Sciences. Descriptive statistics, Pearson’s chi-square test, the chi-square test for trend, the independent-samples t-test, and logistic regression analysis were used. Results: The prevalence of PMS in the university students was 58.1%. Premenstrual syndrome was significantly more common in students who smoked, drank alcohol, or consumed a large amount of fatty and high-calorie foods, in students who had a bad to very bad perception of their economic situation, and in those who had any chronic disease or anemia (p<0.05). Premenstrual syndrome was also significantly more common in students at risk of depression (p<0.01). A statistically significant relationship was found between the risk of depression and the PMSS total score and all PMSS subscale scores except appetite changes (p<0.01). Conclusion: Premenstrual syndrome was found in more than half of the students who participated in the study, and was more common in students who had a chronic disease and/or an unhealthy lifestyle. There was a statistically significant relationship between PMS and the risk of depression; students who have PMS symptoms should therefore be evaluated for the risk of depression. PMID:29114701

  8. Evaluation of the flame propagation within an SI engine using flame imaging and LES

    NASA Astrophysics Data System (ADS)

    He, Chao; Kuenne, Guido; Yildar, Esra; van Oijen, Jeroen; di Mare, Francesca; Sadiki, Amsini; Ding, Carl-Philipp; Baum, Elias; Peterson, Brian; Böhm, Benjamin; Janicka, Johannes

    2017-11-01

    This work shows experiments and simulations of the fired operation of a spark ignition engine with port-fuelled injection. The test rig considered is an optically accessible single cylinder engine specifically designed at TU Darmstadt for the detailed investigation of in-cylinder processes and model validation. The engine was operated under lean conditions using iso-octane as a substitute for gasoline. Experiments have been conducted to provide a sound database of the combustion process. A planar flame imaging technique has been applied within the swirl- and tumble-planes to provide statistical information on the combustion process to complement a pressure-based comparison between simulation and experiments. This data is then analysed and used to assess the large eddy simulation performed within this work. For the simulation, the engine code KIVA has been extended by the dynamically thickened flame model combined with chemistry reduction by means of pressure dependent tabulation. Sixty cycles have been simulated to perform a statistical evaluation. Based on a detailed comparison with the experimental data, a systematic study has been conducted to obtain insight into the most crucial modelling uncertainties.

  9. Scalability of Several Asynchronous Many-Task Models for In Situ Statistical Analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe Pierre; Bennett, Janine Camille; Kolla, Hemanth

    This report is a sequel to [PB16], in which we provided a first progress report on research and development towards a scalable, asynchronous many-task, in situ statistical analysis engine using the Legion runtime system. This earlier work included a prototype implementation of a proposed solution, using a proxy mini-application as a surrogate for a full-scale scientific simulation code. The first scalability studies were conducted with the above on modestly-sized experimental clusters. In contrast, in the current work we have integrated our in situ analysis engines with a full-size scientific application (S3D, using the Legion-SPMD model), and have conducted numerical tests on the largest computational platform currently available for DOE science applications. We also provide details regarding the design and development of a light-weight asynchronous collectives library. We describe how this library is utilized within our SPMD-Legion S3D workflow, and compare the data aggregation technique deployed herein to the approach taken within our previous work.

  10. A model for generating Surface EMG signal of m. Tibialis Anterior.

    PubMed

    Siddiqi, Ariba; Kumar, Dinesh; Arjunan, Sridhar P

    2014-01-01

    A model that simulates the surface electromyogram (sEMG) signal of m. Tibialis Anterior has been developed and tested. It uses a firing-rate equation based on experimental findings and a recruitment threshold based on an observed statistical distribution. Importantly, it considers both slow and fast types, which are distinguished by their conduction velocities. The model assumes that the deeper unipennate half of the muscle does not contribute significantly to the potential induced on the surface of the muscle, and approximates the muscle as having a parallel structure. The model was validated by comparing simulated sEMG signals with experimental recordings. Experiments were conducted on eight subjects who performed isometric dorsiflexion at 10, 20, 30, 50, 75, and 100% of maximal voluntary contraction. The normalized root mean square and median frequency of the experimental and simulated EMG signals were computed, and the slopes of their linear relationships with force were statistically analyzed. The gradients were found to be similar (p>0.05) for the experimental and simulated sEMG signals, validating the proposed model.
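
    The two validation features named above can be computed as in the short sketch below: RMS amplitude directly from the signal, and median frequency as the point that splits the Welch power spectrum into two equal-power halves. The sampling rate, band limits, and signal here are assumptions for illustration, not the study's recordings.

        # RMS and median frequency of an sEMG-like epoch; the signal is synthetic
        # band-limited noise standing in for a recorded sEMG segment.
        import numpy as np
        from scipy.signal import butter, filtfilt, welch

        fs = 2000.0                                    # assumed sampling rate, Hz
        rng = np.random.default_rng(3)
        raw = rng.normal(0, 1, int(2 * fs))            # 2 s of white noise
        b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
        emg = filtfilt(b, a, raw)                      # crude sEMG-like band limitation

        rms = np.sqrt(np.mean(emg ** 2))

        f, pxx = welch(emg, fs=fs, nperseg=1024)
        cum = np.cumsum(pxx)
        median_freq = f[np.searchsorted(cum, cum[-1] / 2.0)]

        print(f"RMS = {rms:.3f} (a.u.), median frequency = {median_freq:.1f} Hz")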

  11. Designing Human Immunodeficiency Virus Counselling and Testing Services to Maximize Uptake Among High School Learners in South Africa: What Matters?

    PubMed

    Strauss, Michael; George, Gavin; Rhodes, Bruce

    2017-05-01

    Increasing human immunodeficiency virus (HIV) testing in South Africa is vital for the HIV response. Targeting young people is important as they become sexually active and because HIV risk rapidly increases as youth enter their 20s. This study aims to increase the understanding of high school learners' preferences regarding the characteristics of HIV testing service delivery models and to inform policy makers and implementers regarding potential barriers to and facilitators of HIV testing. An attitudinal survey was used to examine HIV testing preferences among 248 high school learners in KwaZulu-Natal. Statistical tests were used to identify the most favored characteristics of testing service delivery models and examine key differences in preferences based on demographic characteristics and testing history. Most learners were found to prefer testing offered at a clinic on a Saturday (43%), using a finger prick test (59%), conducted by a doctor (61%) who also provides individual counselling (60%). Shorter testing times were preferred, as well as a monetary incentive to cover any associated expenses. Time, location, the type of test, and who conducts the test were most important. However, stratified analysis suggests that preferences diverge, particularly around gender, grade, but also sexual history and previous testing experience. Human immunodeficiency virus testing services can be improved in line with preferences, but there is no single optimal design that caters to the preferences of all learners. It is unlikely that a "one-size-fits-all" approach will be effective to reach HIV testing targets. A range of options may be required to maximize coverage.

  12. Effects of progressive relaxation exercises on anxiety and comfort of Turkish breast cancer patients receiving chemotherapy.

    PubMed

    Yilmaz, Seher Gurdil; Arslan, Sevban

    2015-01-01

    Breast cancer is the second most common cancer in the world and by far the most frequent cancer among women. This study was conducted to examine the effect of progressive relaxation exercises on the anxiety and comfort levels of breast cancer patients receiving chemotherapy. A pre-test/post-test quasi-experimental design with a control group was applied, with 30 patients in the experimental group and 30 in the control group, all of whom agreed to participate in the study. Data were collected with the Personnel Information Form, the State-Trait Anxiety Inventory, and the General Comfort Scale. The average age of the participating patients was 49.1±7.96 years; 83.3% (n=25) of the patients in the experimental group and 86.7% (n=26) in the control group were married. The post-test mean state anxiety scores were 36.2±8.21 in the experimental group and 43.4±7.96 in the control group, a statistically significant difference (p<0.05). The post-test mean General Comfort Scale scores were 149.5±13.9 in the experimental group and 137.7±15.0 in the control group, again statistically significant (p<0.05). Progressive relaxation exercises positively affected the comfort and anxiety levels of these Turkish breast cancer patients.

  13. The effect of shunt surgery on neuropsychological performance in normal pressure hydrocephalus: a systematic review and meta-analysis.

    PubMed

    Peterson, Katie A; Savulich, George; Jackson, Dan; Killikelly, Clare; Pickard, John D; Sahakian, Barbara J

    2016-08-01

    We conducted a systematic review of the literature and used meta-analytic techniques to evaluate the impact of shunt surgery on neuropsychological performance in patients with normal pressure hydrocephalus (NPH). Twenty-three studies with 1059 patients were identified for review using PubMed, Web of Science, Google scholar and manual searching. Inclusion criteria were prospective, within-subject investigations of cognitive outcome using neuropsychological assessment before and after shunt surgery in patients with NPH. There were statistically significant effects of shunt surgery on cognition (Mini-Mental State Examination; MMSE), learning and memory (Rey Auditory Verbal Learning Test; RAVLT, total and delayed subtests), executive function (backwards digit span, phonemic verbal fluency, trail making test B) and psychomotor speed (trail making test A) all in the direction of improvement following shunt surgery, but with considerable heterogeneity across all measures. A more detailed examination of the data suggested robust evidence for improved MMSE, RAVLT total, RAVLT delayed, phonemic verbal fluency and trail making test A only. Meta-regressions revealed no statistically significant effect of age, sex or follow-up interval on improvement in the MMSE. Our results suggest that shunt surgery is most sensitive for improving global cognition, learning and memory and psychomotor speed in patients with NPH.
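
    The pooling step behind such estimates can be sketched briefly: a DerSimonian-Laird random-effects model weights each study by the inverse of its within-study variance plus an estimated between-study variance (tau squared). The effect sizes and variances below are invented for illustration, not the review's data.

        # DerSimonian-Laird random-effects pooling on invented study-level data.
        import numpy as np

        y = np.array([0.45, 0.30, 0.60, 0.20, 0.55])   # per-study effect sizes (invented)
        v = np.array([0.04, 0.06, 0.05, 0.09, 0.07])   # their within-study variances

        w = 1.0 / v                                    # fixed-effect weights
        y_fixed = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q (heterogeneity)
        k = len(y)
        tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

        w_re = 1.0 / (v + tau2)                        # random-effects weights
        y_re = np.sum(w_re * y) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        print(f"pooled effect = {y_re:.2f} "
              f"(95% CI {y_re - 1.96 * se:.2f} to {y_re + 1.96 * se:.2f}), tau2 = {tau2:.3f}")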

  14. Blood lead level analysis among refugee children resettled in New Hampshire and Rhode Island.

    PubMed

    Raymond, Jaime S; Kennedy, Chinaro; Brown, Mary Jean

    2013-01-01

    To examine the association between refugee status and elevated blood lead levels (EBLLs) among children living in two U.S. cities and to assess the effect of the Centers for Disease Control and Prevention recommendations for BLL testing of newly emigrated refugee children for EBLLs. A longitudinal study was conducted of 1,007 refugee children and 953 nonrefugee children living, when blood testing occurred, in the same buildings in Manchester, New Hampshire and Providence, Rhode Island. Surveillance and blood lead data were collected from both sites, including demographic information, BLLs, sample type, refugee status, and age of housing. Refugee children living in Manchester were statistically significantly more likely to have an EBLL compared with nonrefugee children even after controlling for potential confounders. We did not find this association in Providence. Compared with before enactment, the mean time of refugee children to fall below 10 μg/dL was significantly shorter after the recommendations to test newly emigrated children were enacted. Refugee children living in Manchester were significantly more likely to have an EBLL compared with nonrefugee children. And among refugee children, we found a statistically significant difference in the mean days to BLL decline <10 μg/dL before and after recommendations to test newly emigrated children. © 2012 Wiley Periodicals, Inc.

  15. Bovine origin Staphylococcus aureus: A new zoonotic agent?

    PubMed

    Rao, Relangi Tulasi; Jayakumar, Kannan; Kumar, Pavitra

    2017-10-01

    The study aimed to assess the nature of Staphylococcus aureus strains of animal origin and, with zoonotic implications in mind, to compare virulence between strains from two different hosts, i.e., bovine and ovine origin. Conventional polymerase chain reaction-based methods were used for the characterization of S. aureus strains, and a chick embryo model was employed to assess the virulence capacity of the strains. All statistical tests were carried out in the R program, version 3.0.4. After initial screening and molecular characterization, the prevalence of S. aureus was found to be 42.62% in samples of bovine origin and 28.35% in samples of ovine origin, whereas the prevalence of methicillin-resistant S. aureus was meager in both hosts: only 6.8% of isolates tested positive for methicillin resistance. Biofilm formation was quantified and the variation between hosts was compared; a Welch two-sample t-test was statistically significant (t=2.3179, df=28.103, p=0.02795). The chicken embryo model was found to be effective for testing the pathogenicity of the strains. The study supports the conclusions that healthy bovines can act as S. aureus reservoirs, that bovine origin S. aureus strains are more virulent than ovine origin strains, and that bovine origin strains have a high probability of becoming zoonotic pathogens. Further gene knockout studies may be conducted to confirm the zoonotic potential of the bovine origin strains.
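
    The Welch two-sample t-test reported above is the unequal-variances form of the t-test; a minimal sketch with invented biofilm values (not the study's measurements) is shown below.

        # Welch's t-test (equal_var=False) on invented biofilm measurements.
        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(4)
        bovine = rng.normal(1.2, 0.4, 26)     # hypothetical biofilm OD values
        ovine = rng.normal(0.9, 0.3, 19)

        t, p = ttest_ind(bovine, ovine, equal_var=False)
        print(f"t = {t:.4f}, p = {p:.5f}")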

  16. TiO2-Nanofillers Effects on Some Properties of Highly-Impact Resin Using Different Processing Techniques.

    PubMed

    Aziz, Hawraa Khalid

    2018-01-01

    The conventional curing of polymethyl methacrylate does not fully satisfy the standard properties required of denture base materials. This research was conducted to investigate the effect of adding TiO2 nanoparticles on the impact strength, thermal conductivity, and color stability of acrylic resin cured by microwave in comparison with conventionally cured (water bath) heat-polymerized acrylic resin. A total of 120 specimens made of high-impact acrylic resin were divided into two main groups according to the type of curing (water bath or microwave); each group was then subdivided according to the addition of 3% TiO2 nanofillers or no addition (0% TiO2 control), and further subdivided by type of test into 3 groups of 10 specimens each. Data were statistically analyzed using Student's t-test to detect significant differences between the tested and control groups at a significance level of P<0.05. With respect to the curing method, the results showed a significant decrease in the impact strength of the microwave-cured resin, but no significant difference in its thermal conductivity or color stability. With respect to the nanofillers, the addition of 3% TiO2 significantly increased the impact strength and color stability, but did not significantly change the thermal conductivity of the acrylic resin. In conclusion, microwave curing of acrylic resin did not change the color stability or thermal conductivity in comparison with the water bath, but it decreased the impact strength; the addition of 3% TiO2 improved the impact strength and color stability, while the thermal conductivity did not change.

  17. Evaluation of the Thermo Scientific™ SureTect™ Salmonella species Assay.

    PubMed

    Cloke, Jonathan; Clark, Dorn; Radcliff, Roy; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko

    2014-03-01

    The Thermo Scientific™ SureTect™ Salmonella species Assay is a new real-time PCR assay for the detection of Salmonellae in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods℠ program to validate the SureTect Salmonella species Assay in comparison to the reference method detailed in International Organization for Standardization 6579:2002 in a variety of food matrixes, namely, raw ground beef, raw chicken breast, raw ground pork, fresh bagged lettuce, pork frankfurters, nonfat dried milk powder, cooked peeled shrimp, pasteurized liquid whole egg, a ready-to-eat meal containing beef, and stainless steel surface samples. With the exception of liquid whole egg and fresh bagged lettuce, which were tested in-house, all matrixes were tested by Marshfield Food Safety, Marshfield, WI, on behalf of Thermo Fisher Scientific. In addition, three matrixes (pork frankfurters, lettuce, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled laboratory study by the University of Guelph, Canada. No significant difference by probability of detection or McNemar's chi-squared statistical analysis was found between the candidate and reference methods for any of the food matrixes or environmental surface samples tested during the validation study. Inclusivity and exclusivity testing was conducted with 117 and 36 isolates, respectively, which demonstrated that the SureTect Salmonella species Assay was able to detect all the major groups of Salmonella enterica subspecies enterica (e.g., Typhimurium) as well as the less common subspecies of S. enterica (e.g., arizonae) and the rarely encountered S. bongori. None of the exclusivity isolates analyzed were detected by the SureTect Salmonella species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation (enrichment time and temperature, and lysis temperature), which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay shelf life.
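
    McNemar's chi-squared test, cited above, compares two methods on paired outcomes using only the discordant cells of a 2x2 agreement table. A brief sketch with hypothetical counts:

        # McNemar's test on a hypothetical candidate-vs-reference agreement table.
        from statsmodels.stats.contingency_tables import mcnemar

        #                 reference +   reference -
        table = [[38, 4],      # candidate +
                 [2, 56]]      # candidate -
        result = mcnemar(table, exact=False, correction=True)
        print(f"statistic = {result.statistic:.3f}, p = {result.pvalue:.3f}")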

  18. SPSS and SAS procedures for estimating indirect effects in simple mediation models.

    PubMed

    Preacher, Kristopher J; Hayes, Andrew F

    2004-11-01

    Researchers often conduct mediation analysis in order to indirectly assess the effect of a proposed cause on some outcome through a proposed mediator. The utility of mediation analysis stems from its ability to go beyond the merely descriptive to a more functional understanding of the relationships among variables. A necessary component of mediation is a statistically and practically significant indirect effect. Although mediation hypotheses are frequently explored in psychological research, formal significance tests of indirect effects are rarely conducted. After a brief overview of mediation, we argue the importance of directly testing the significance of indirect effects and provide SPSS and SAS macros that facilitate estimation of the indirect effect with a normal theory approach and a bootstrap approach to obtaining confidence intervals, as well as the traditional approach advocated by Baron and Kenny (1986). We hope that this discussion and the macros will enhance the frequency of formal mediation tests in the psychology literature. Electronic copies of these macros may be downloaded from the Psychonomic Society's Web archive at www.psychonomic.org/archive/.
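
    The indirect effect these macros estimate is the product a*b, where a is the effect of X on the mediator M and b is the effect of M on Y controlling for X, with a bootstrap confidence interval around the product. The sketch below reproduces that logic in Python on simulated data; it illustrates the approach and is not the authors' macro.

        # Percentile-bootstrap test of the indirect effect a*b in simple mediation
        # (X -> M -> Y); the data are simulated.
        import numpy as np

        rng = np.random.default_rng(5)
        n = 200
        x = rng.normal(size=n)
        m = 0.5 * x + rng.normal(size=n)            # mediator depends on X
        y = 0.4 * m + 0.1 * x + rng.normal(size=n)  # outcome depends on M (and X)

        def indirect(x, m, y):
            a = np.polyfit(x, m, 1)[0]              # slope of M on X
            design = np.column_stack([np.ones(len(x)), m, x])
            b = np.linalg.lstsq(design, y, rcond=None)[0][1]   # slope of Y on M given X
            return a * b

        idx = np.arange(n)
        boot = [indirect(x[s], m[s], y[s])
                for s in (rng.choice(idx, size=n, replace=True) for _ in range(5000))]

        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"indirect effect = {indirect(x, m, y):.3f}, "
              f"95% bootstrap CI ({lo:.3f}, {hi:.3f})")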

  19. Statistical methods used to test for agreement of medical instruments measuring continuous variables in method comparison studies: a systematic review.

    PubMed

    Zaki, Rafdzah; Bulgiba, Awang; Ismail, Roshidi; Ismail, Noor Azina

    2012-01-01

    Accurate values are a must in medicine. An important parameter in determining the quality of a medical instrument is agreement with a gold standard. Various statistical methods have been used to test for agreement. Some of these methods have been shown to be inappropriate. This can result in misleading conclusions about the validity of an instrument. The Bland-Altman method is the most popular method judging by the many citations of the article proposing this method. However, the number of citations does not necessarily mean that this method has been applied in agreement research. No previous study has been conducted to look into this. This is the first systematic review to identify statistical methods used to test for agreement of medical instruments. The proportion of various statistical methods found in this review will also reflect the proportion of medical instruments that have been validated using those particular methods in current clinical practice. Five electronic databases were searched between 2007 and 2009 to look for agreement studies. A total of 3,260 titles were initially identified. Only 412 titles were potentially related, and finally 210 fitted the inclusion criteria. The Bland-Altman method is the most popular method with 178 (85%) studies having used this method, followed by the correlation coefficient (27%) and means comparison (18%). Some of the inappropriate methods highlighted by Altman and Bland since the 1980s are still in use. This study finds that the Bland-Altman method is the most popular method used in agreement research. There are still inappropriate applications of statistical methods in some studies. It is important for a clinician or medical researcher to be aware of this issue because misleading conclusions from inappropriate analyses will jeopardize the quality of the evidence, which in turn will influence quality of care given to patients in the future.
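
    The Bland-Altman analysis referred to throughout this review summarizes agreement by the mean difference (bias) between paired measurements and the 95% limits of agreement, i.e., the bias plus or minus 1.96 standard deviations of the differences. A short sketch on simulated paired measurements:

        # Bland-Altman bias and limits of agreement on simulated paired readings.
        import numpy as np

        rng = np.random.default_rng(6)
        truth = rng.normal(100, 15, 80)
        method_a = truth + rng.normal(0, 4, 80)
        method_b = truth + 2 + rng.normal(0, 4, 80)   # small simulated bias

        diff = method_a - method_b
        bias = diff.mean()
        sd = diff.std(ddof=1)
        lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
        print(f"bias = {bias:.2f}, limits of agreement = {lower:.2f} to {upper:.2f}")
        # a Bland-Altman plot shows diff against (method_a + method_b) / 2
        # with horizontal lines at the bias and the two limits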

  20. Statistical Methods Used to Test for Agreement of Medical Instruments Measuring Continuous Variables in Method Comparison Studies: A Systematic Review

    PubMed Central

    Zaki, Rafdzah; Bulgiba, Awang; Ismail, Roshidi; Ismail, Noor Azina

    2012-01-01

    Background Accurate values are a must in medicine. An important parameter in determining the quality of a medical instrument is agreement with a gold standard. Various statistical methods have been used to test for agreement. Some of these methods have been shown to be inappropriate. This can result in misleading conclusions about the validity of an instrument. The Bland-Altman method is the most popular method judging by the many citations of the article proposing this method. However, the number of citations does not necessarily mean that this method has been applied in agreement research. No previous study has been conducted to look into this. This is the first systematic review to identify statistical methods used to test for agreement of medical instruments. The proportion of various statistical methods found in this review will also reflect the proportion of medical instruments that have been validated using those particular methods in current clinical practice. Methodology/Findings Five electronic databases were searched between 2007 and 2009 to look for agreement studies. A total of 3,260 titles were initially identified. Only 412 titles were potentially related, and finally 210 fitted the inclusion criteria. The Bland-Altman method is the most popular method with 178 (85%) studies having used this method, followed by the correlation coefficient (27%) and means comparison (18%). Some of the inappropriate methods highlighted by Altman and Bland since the 1980s are still in use. Conclusions This study finds that the Bland-Altman method is the most popular method used in agreement research. There are still inappropriate applications of statistical methods in some studies. It is important for a clinician or medical researcher to be aware of this issue because misleading conclusions from inappropriate analyses will jeopardize the quality of the evidence, which in turn will influence quality of care given to patients in the future. PMID:22662248
