ERIC Educational Resources Information Center
Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.
2017-01-01
Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
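A minimal simulation sketch of the kind of Type I error study described above (illustrative only, not the authors' code; the group sizes, the skewed chi-square population, and the choice of Levene and Bartlett tests are assumptions for the example):

# Estimate the empirical Type I error of two homogeneity-of-variance tests
# when the normality assumption fails but the variances really are equal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, k, n, alpha = 2000, 3, 20, 0.05
rejections = {"levene": 0, "bartlett": 0}

for _ in range(n_sims):
    # Equal variances but skewed (chi-square) populations, so any rejection is a Type I error.
    groups = [rng.chisquare(df=3, size=n) for _ in range(k)]
    if stats.levene(*groups).pvalue < alpha:
        rejections["levene"] += 1
    if stats.bartlett(*groups).pvalue < alpha:
        rejections["bartlett"] += 1

for test, count in rejections.items():
    print(f"{test}: empirical Type I error = {count / n_sims:.3f}")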
[Analysis of variance of repeated data measured by water maze with SPSS].
Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang
2007-01-01
To introduce a method for analyzing repeated water-maze data with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who use repeated-measures designs. The repeated measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among different groups and different measurement times. Firstly, Mauchly's test of sphericity should be used to judge whether there are relations among the repeatedly measured data. If any (P
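For readers working outside SPSS, a hedged Python sketch of the same repeated-measures workflow follows, using statsmodels' AnovaRM on hypothetical long-format escape-latency data (the column names and values are invented for illustration):

# Within-subject (repeated-measures) ANOVA across three measurement times.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one escape-latency row per subject and time point.
data = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "time":    ["d1", "d2", "d3"] * 4,
    "latency": [52, 40, 31, 60, 45, 30, 55, 43, 35, 58, 41, 33],
})

res = AnovaRM(data, depvar="latency", subject="subject", within=["time"]).fit()
print(res)
# Mauchly's sphericity test is not part of statsmodels; if sphericity is doubtful,
# apply a Greenhouse-Geisser correction or fall back on a multivariate (MANOVA) approach.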
The use of analysis of variance procedures in biological studies
Williams, B.K.
1987-01-01
The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
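A minimal sketch of the ANOVA-based variance-component computation the note describes, assuming a balanced patients-by-fractions layout (the data, layout and function name are illustrative, not the paper's):

import numpy as np

def setup_error_components(errors):
    """errors: 2-D array, shape (patients, fractions), balanced design."""
    k, n = errors.shape
    patient_means = errors.mean(axis=1)
    grand_mean = errors.mean()
    ms_between = n * np.sum((patient_means - grand_mean) ** 2) / (k - 1)
    ms_within = np.sum((errors - patient_means[:, None]) ** 2) / (k * (n - 1))
    sigma_random = np.sqrt(ms_within)                                 # random (fraction-to-fraction) SD
    sigma_systematic = np.sqrt(max(ms_between - ms_within, 0.0) / n)  # interpatient (systematic) SD
    return grand_mean, sigma_systematic, sigma_random

# Synthetic demo: 10 patients, 5 fractions each.
rng = np.random.default_rng(1)
demo = rng.normal(0.5, 1.0, size=(10, 1)) + rng.normal(0.0, 2.0, size=(10, 5))
print(setup_error_components(demo))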
A LISREL Model for the Analysis of Repeated Measures with a Patterned Covariance Matrix.
ERIC Educational Resources Information Center
Rovine, Michael J.; Molenaar, Peter C. M.
1998-01-01
Presents a LISREL model for the estimation of the repeated measures analysis of variance (ANOVA) with a patterned covariance matrix. The model is demonstrated for a 5 x 2 (Time x Group) ANOVA in which the data are assumed to be serially correlated. Similarities with the Statistical Analysis System PROC MIXED model are discussed. (SLD)
Teaching Principles of Inference with ANOVA
ERIC Educational Resources Information Center
Tarlow, Kevin R.
2016-01-01
Analysis of variance (ANOVA) is a test of "mean" differences, but the reference to "variances" in the name is often overlooked. Classroom activities are presented to illustrate how ANOVA works with emphasis on how to think critically about inferential reasoning.
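A short illustration of the point emphasised above, that the F statistic is itself a ratio of variances (the data are invented; the hand computation is checked against scipy):

import numpy as np
from scipy import stats

groups = [np.array([4.0, 5.0, 6.0]), np.array([6.0, 7.0, 8.0]), np.array([9.0, 10.0, 11.0])]
k = len(groups)
n_total = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

# Between-group and within-group mean squares: F is their ratio.
ms_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / (k - 1)
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n_total - k)
print("F by hand:", ms_between / ms_within)
print("F by scipy:", stats.f_oneway(*groups).statistic)  # the two values agree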
Genetic variance of tolerance and the toxicant threshold model.
Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki
2012-04-01
A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change. Copyright © 2012 SETAC.
Testing Interaction Effects without Discarding Variance.
ERIC Educational Resources Information Center
Lopez, Kay A.
Analysis of variance (ANOVA) and multiple regression are two of the most commonly used methods of data analysis in behavioral science research. Although ANOVA was intended for use with experimental designs, educational researchers have used ANOVA extensively in aptitude-treatment interaction (ATI) research. This practice tends to make researchers…
Simulation of Autonomic Logistics System (ALS) Sortie Generation
2003-03-01
Appendix B (ANOVA Assumptions) reports checks of the ANOVA assumptions for the Mission Capable Rate and Flying Scheduling Effectiveness responses: constant variance is assessed with Breusch-Pagan chi-square statistics (computed from the regression SSR, the number of X columns, SSE, and n), and independence is assessed with the Durbin-Watson statistic.
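A hedged sketch of how the two assumption checks named in the appendix are commonly run (synthetic data, not the report's; the regression setup and predictor count are assumptions):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(270, 3)))      # 3 predictors, echoing the report's "# X cols"
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(scale=0.05, size=270)

fit = sm.OLS(y, X).fit()
bp_lm, bp_pvalue, _, _ = het_breuschpagan(fit.resid, X)   # constant-variance check
print("Breusch-Pagan chi-square:", bp_lm, "p =", bp_pvalue)
print("Durbin-Watson:", durbin_watson(fit.resid))         # ~2 indicates no serial correlation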
Analysis of Variance: What Is Your Statistical Software Actually Doing?
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for three analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
Analysis of Variance in Statistical Image Processing
NASA Astrophysics Data System (ADS)
Kurz, Ludwik; Hafed Benteftifa, M.
1997-04-01
A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
Analysis of Variance: Variably Complex
ERIC Educational Resources Information Center
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution…
40 CFR 264.97 - General ground-water monitoring requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... paragraph (i) of this section. (1) A parametric analysis of variance (ANOVA) followed by multiple... mean levels for each constituent. (2) An analysis of variance (ANOVA) based on ranks followed by...
ERIC Educational Resources Information Center
Thompson, Bruce
The relationship between analysis of variance (ANOVA) methods and their analogs (analysis of covariance and multiple analyses of variance and covariance--collectively referred to as OVA methods) and the more general analytic case is explored. A small heuristic data set is used, with a hypothetical sample of 20 subjects, randomly assigned to five…
WASP (Write a Scientific Paper) using Excel 9: Analysis of variance.
Grech, Victor
2018-06-01
Analysis of variance (ANOVA) may be required by researchers as an inferential statistical test when more than two means require comparison. This paper explains how to perform ANOVA in Microsoft Excel. Copyright © 2018 Elsevier B.V. All rights reserved.
An Investigation of Collaborative Leadership
2013-01-01
businesses. The second will use an analysis of variance (ANOVA) to statistically compare the variance among organizations. Research regarding collaborative... horizontal column of the “T,” a leader is networking across the larger business model to understand how their organization’s core skills can be used in... the exchange of information or services among individuals, groups, or institutions in order to cultivate productive business relationships
Analysis of Variance in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
NASA Technical Reports Server (NTRS)
Ploutz-Snyder, Robert
2011-01-01
This slide presentation is a series of educational presentations that are on the statistical function of analysis of variance (ANOVA). Analysis of Variance (ANOVA) examines variability between groups, relative to within groups, to determine whether there's evidence that the groups are not from the same population. One other presentation reviews hypothesis testing.
Formative Use of Intuitive Analysis of Variance
ERIC Educational Resources Information Center
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In both…
A Demonstration of the Analysis of Variance Using Physical Movement and Space
ERIC Educational Resources Information Center
Owen, William J.; Siakaluk, Paul D.
2011-01-01
Classroom demonstrations help students better understand challenging concepts. This article introduces an activity that demonstrates the basic concepts involved in analysis of variance (ANOVA). Students who physically participated in the activity had a better understanding of ANOVA concepts (i.e., higher scores on an exam question answered 2…
Teaching Principles of One-Way Analysis of Variance Using M&M's Candy
ERIC Educational Resources Information Center
Schwartz, Todd A.
2013-01-01
I present an active learning classroom exercise illustrating essential principles of one-way analysis of variance (ANOVA) methods. The exercise is easily conducted by the instructor and is instructive (as well as enjoyable) for the students. It is conducive to demonstrating many theoretical and practical issues related to ANOVA and lends itself…
ERIC Educational Resources Information Center
Proger, Barton B.; And Others
Many researchers assume that unequal cell frequencies in analysis of variance (ANOVA) designs result from poor planning. However, there are several valid reasons why one might have to analyze an unequal-n data matrix. The present study reviewed four categories of methods for treating unequal-n matrices by ANOVA: (a) unaltered data (least-squares…
On statistical analysis of factors affecting anthocyanin extraction from Ixora siamensis
NASA Astrophysics Data System (ADS)
Mat Nor, N. A.; Arof, A. K.
2016-10-01
This study focused on designing an experimental model in order to evaluate the influence of operative extraction parameters employed for anthocyanin extraction from Ixora siamensis on CIE color measurements (a*, b* and color saturation). Extractions were conducted at temperatures of 30, 55 and 80°C, soaking times of 60, 120 and 180 min using acidified methanol solvent with different trifluoroacetic acid (TFA) contents of 0.5, 1.75 and 3% (v/v). The statistical evaluation was performed by running analysis of variance (ANOVA) and regression calculations to investigate the significance of the generated model. Results show that the generated regression models adequately explained the data variation and significantly represented the actual relationship between the independent variables and the responses. Analysis of variance (ANOVA) showed high coefficient of determination values (R2) of 0.9687 for a*, 0.9621 for b* and 0.9758 for color saturation, thus ensuring a satisfactory fit of the developed models with the experimental data. The interaction between TFA content and extraction temperature exhibited the most significant influence on the CIE color parameters.
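A minimal sketch of a factorial ANOVA with an interaction term of the kind reported above (the data frame, factor levels and response values are hypothetical, not the study's measurements):

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical CIE a* readings for a 3 x 2 temperature-by-TFA design with 2 replicates per cell.
df = pd.DataFrame({
    "temp":   [30, 30, 55, 55, 80, 80] * 2,
    "tfa":    [0.5, 3.0, 0.5, 3.0, 0.5, 3.0] * 2,
    "a_star": [12.1, 13.4, 15.2, 17.8, 18.9, 23.5,
               12.4, 13.1, 15.6, 18.2, 19.3, 23.0],
})

model = smf.ols("a_star ~ C(temp) * C(tfa)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # the C(temp):C(tfa) row is the interaction effect
print("R-squared:", model.rsquared)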
Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-11-01
This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE) and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes alike' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term. Spatial discretization is performed by employing a finite difference scheme. Implementation of the proposed approach has been illustrated by two examples. In the first example, a stochastic ordinary differential equation has been considered. This example illustrates the performance of the proposed approach with changes in the nature of the random variable. Furthermore, the convergence characteristics of GG-ANOVA have also been demonstrated. The second example investigates flow through a microchannel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, have been investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.
ERIC Educational Resources Information Center
Richter, Tobias
2006-01-01
Most reading time studies using naturalistic texts yield data sets characterized by a multilevel structure: Sentences (sentence level) are nested within persons (person level). In contrast to analysis of variance and multiple regression techniques, hierarchical linear models take the multilevel structure of reading time data into account. They…
Bias and robustness of uncertainty components estimates in transient climate projections
NASA Astrophysics Data System (ADS)
Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal
2016-04-01
A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty and internal variability. Quantifying the different uncertainty sources actually faces different problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of one given chain and the effect of its internal variability may be difficult if not impossible to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus likely to be not really robust. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based on the only data available for the considered projection lead time, and a time series based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two single sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias is always positive. It can be especially high with STANOVA. In the most critical configurations, when the number of members available for each modeling chain is small (< 3) and when internal variability explains most of the total uncertainty variance (75% or more), the overestimation is higher than 100% of the true model uncertainty variance. The bias can be considerably reduced with a time series ANOVA approach, owing to the multiple time steps accounted for. The longer the transient time period used for the analysis, the larger the reduction. When a quasi-ergodic ANOVA approach is applied to decadal data for the whole 1980-2100 period, the bias is reduced by a factor of 2.5 to 20 depending on the projection lead time. In all cases, the bias is likely to be non-negligible for a large number of climate impact studies, resulting in a likely large overestimation of the contribution of model uncertainty to total variance. For both approaches, the robustness of all uncertainty estimates is higher when more members are available, when internal variability is smaller and/or when the response-to-uncertainty ratio is higher. QEANOVA estimates are much more robust than STANOVA ones: QEANOVA simulated confidence intervals are roughly 3 to 5 times smaller than STANOVA ones. Except for STANOVA when fewer than 3 members are available, the robustness is rather high for total uncertainty and moderate for internal variability estimates.
For model uncertainty or response-to-uncertainty ratio estimates, the robustness is conversely low for QEANOVA to very low for STANOVA. In the most critical configurations (small number of members, large internal variability), large over- or underestimation of uncertainty components is thus very likely. To propose relevant uncertainty analyses and avoid misleading interpretations, estimates of uncertainty components should therefore be bias corrected and ideally come with estimates of their robustness. This work is part of the COMPLEX Project (European Collaborative Project FP7-ENV-2012 number: 308601; http://www.complex.ac.uk/). Hingray, B., Saïd, M., 2014. Partitioning internal variability and model uncertainty components in a multimodel multireplicate ensemble of climate projections. J. Climate. doi:10.1175/JCLI-D-13-00629.1 Hingray, B., Blanchet, J. (in revision) Unbiased estimators for uncertainty components in transient climate projections. J. Climate. Hingray, B., Blanchet, J., Vidal, J.P. (in revision) Robustness of uncertainty components estimates in climate projections. J. Climate.
Barth, Amy E.; Barnes, Marcia; Francis, David J.; Vaughn, Sharon; York, Mary
2015-01-01
Separate mixed model analyses of variance (ANOVA) were conducted to examine the effect of textual distance on the accuracy and speed of text consistency judgments among adequate and struggling comprehenders across grades 6–12 (n = 1203). Multiple regressions examined whether accuracy in text consistency judgments uniquely accounted for variance in comprehension. Results suggest that there is considerable growth across the middle and high school years, particularly for adequate comprehenders in those text integration processes that maintain local coherence. Accuracy in text consistency judgments accounted for significant unique variance for passage-level, but not sentence-level comprehension, particularly for adequate comprehenders. PMID:26166946
Smoothing spline ANOVA frailty model for recurrent event data.
Du, Pang; Jiang, Yihua; Wang, Yuedong
2011-12-01
Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of parameter update and/or increasing the MCMC sample size along iterations. Model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate its use through the analysis of bladder tumor data. © 2011, The International Biometric Society.
McNamee, R L; Eddy, W F
2001-12-01
Analysis of variance (ANOVA) is widely used for the study of experimental data. Here, the reach of this tool is extended to cover the preprocessing of functional magnetic resonance imaging (fMRI) data. This technique, termed visual ANOVA (VANOVA), provides both numerical and pictorial information to aid the user in understanding the effects of various parts of the data analysis. Unlike a formal ANOVA, this method does not depend on the mathematics of orthogonal projections or strictly additive decompositions. An illustrative example is presented and the application of the method to a large number of fMRI experiments is discussed. Copyright 2001 Wiley-Liss, Inc.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Interpreting Regression Results: beta Weights and Structure Coefficients are Both Important.
ERIC Educational Resources Information Center
Thompson, Bruce
Various realizations have led to less frequent use of the "OVA" methods (analysis of variance--ANOVA--among others) and to more frequent use of general linear model approaches such as regression. However, too few researchers understand all the various coefficients produced in regression. This paper explains these coefficients and their…
Role of Adenosine Receptor A2A in Traumatic Optic Neuropathies (Addendum)
2016-03-01
inflammation was evaluated using Western blot, Real-Time PCR and immuno-staining analyses. Role of A2AAR signaling in the anti-inflammation effect of ABT... were evaluated by analysis of variance (one-way ANOVA), and the significance of differences between groups was assessed by the...
Multiple Regression as a Flexible Alternative to ANOVA in L2 Research
ERIC Educational Resources Information Center
Plonsky, Luke; Oswald, Frederick L.
2017-01-01
Second language (L2) research relies heavily and increasingly on ANOVA (analysis of variance)-based results as a means to advance theory and practice. This fact alone should merit some reflection on the utility and value of ANOVA. It is possible that we could use this procedure more appropriately and, as argued here, other analyses such as…
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi
2016-06-01
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
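The connection the abstract invokes between the ANOVA decomposition and the Sobol' indices can be stated compactly; the following is the standard textbook formulation (general notation, not the paper's specific PDD development):

f(\mathbf{x}) = f_0 + \sum_i f_i(x_i) + \sum_{i<j} f_{ij}(x_i, x_j) + \cdots, \qquad
\operatorname{Var}[f] = \sum_i V_i + \sum_{i<j} V_{ij} + \cdots, \qquad
S_i = \frac{V_i}{\operatorname{Var}[f]}, \quad S_{ij} = \frac{V_{ij}}{\operatorname{Var}[f]},

where each V term is the variance of the corresponding orthogonal component function, so a truncated PDD/ANOVA expansion maps directly onto estimates of the Sobol' indices.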
Lampa, Erik G; Nilsson, Leif; Liljelind, Ingrid E; Bergdahl, Ingvar A
2006-06-01
When assessing occupational exposures, repeated measurements are in most cases required. Repeated measurements are more resource intensive than a single measurement, so careful planning of the measurement strategy is necessary to assure that resources are spent wisely. The optimal strategy depends on the objectives of the measurements. Here, two different models of random effects analysis of variance (ANOVA) are proposed for the optimization of measurement strategies by the minimization of the variance of the estimated log-transformed arithmetic mean value of a worker group, i.e. the strategies are optimized for precise estimation of that value. The first model is a one-way random effects ANOVA model. For that model it is shown that the best precision in the estimated mean value is always obtained by including as many workers as possible in the sample while restricting the number of replicates to two or at most three regardless of the size of the variance components. The second model introduces the 'shared temporal variation' which accounts for those random temporal fluctuations of the exposure that the workers have in common. It is shown for that model that the optimal sample allocation depends on the relative sizes of the between-worker component and the shared temporal component, so that if the between-worker component is larger than the shared temporal component more workers should be included in the sample and vice versa. The results are illustrated graphically with an example from the reinforced plastics industry. If there exists a shared temporal variation at a workplace, that variability needs to be accounted for in the sampling design and the more complex model is recommended.
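A hedged numerical sketch of the allocation trade-off described above, under the simple one-way random-effects model where the variance of the estimated group mean is sB²/k + sW²/(k·n) for k workers with n repeats each (the variance components and measurement budget are invented; the shared-temporal-variation term of the second model is omitted):

def var_of_group_mean(s_between2, s_within2, k_workers, n_repeats):
    # Variance of the estimated group mean under the one-way random-effects ANOVA model.
    return s_between2 / k_workers + s_within2 / (k_workers * n_repeats)

budget = 24  # total number of measurements we can afford
for k, n in [(24, 1), (12, 2), (8, 3), (6, 4), (4, 6)]:
    assert k * n == budget
    v = var_of_group_mean(s_between2=1.0, s_within2=2.0, k_workers=k, n_repeats=n)
    print(f"{k:2d} workers x {n} repeats -> variance of estimated mean = {v:.3f}")
# With no shared temporal component, spreading the budget over many workers wins,
# consistent with the first model's recommendation of at most 2-3 repeats per worker.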
Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2014-01-01
The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
2010-06-28
Tables include an analysis of variance (ANOVA) table for serum corticosterone and repeated-measures ANOVA tables (within-subject effects) for body weight, serum ethanol concentration, and rotarod performance. ...and that this social environment seems to attenuate the effects of predator and unpredictable stressors on serum corticosterone. In summary, stressed
NASA Astrophysics Data System (ADS)
Mathivanan, N. Rajesh; Mouli, Chandra
2012-12-01
In this work, a new methodology based on artificial neural networks (ANN) has been developed to study the low-velocity impact characteristics of woven glass epoxy laminates of EP3 grade. To train and test the networks, multiple impact cases have been generated using statistical analysis of variance (ANOVA). Experimental tests were performed using an instrumented falling-weight impact-testing machine. Different impact velocities and impact energies on different thicknesses of laminates were considered as the input parameters of the ANN model. This model is a feed-forward back-propagation neural network. Using the input/output data of the experiments, the model was trained and tested. Further, the effects of the low-velocity impact response of the laminates at different energy levels were investigated by studying the cause-effect relationship among the influential factors using response surface methodology. The most significant parameter is determined from the other input variables through ANOVA.
The Peter Effect in Early Experimental Education Research.
ERIC Educational Resources Information Center
Little, Joseph
2003-01-01
Traces the ways in which educational researchers referred to Ronald A. Fisher's analysis of variance (ANOVA) between 1932 and 1944 in the "Journal of Experimental Education" (JXE). Shows how the changes in citational practices served to separate the ANOVA from its affiliation with Fisher, essentially effacing the memory of its human…
Inhibition of Orthopaedic Implant Infections by Immunomodulatory Effects of Host Defense Peptides
2014-12-01
significance was determined by t-tests or by one-way analysis of variance (ANOVA) followed by Bonferroni post hoc tests in experiments with multiple... groups. Non-parametric Mann-Whitney tests, Kruskal-Wallis ANOVA followed by Newman-Keuls post hoc tests, or van Elteren’s two-way tests were applied to... in D, and black symbols in A), statistical analysis was by one-way ANOVA followed by Bonferroni versus control, post hoc tests. Otherwise, statistical
Coding and Commonality Analysis: Non-ANOVA Methods for Analyzing Data from Experiments.
ERIC Educational Resources Information Center
Thompson, Bruce
The advantages and disadvantages of three analytic methods used to analyze experimental data in educational research are discussed. The same hypothetical data set is used with all methods for a direct comparison. The Analysis of Variance (ANOVA) method and its several analogs are collectively labeled OVA methods and are evaluated. Regression…
Cautionary Note on Reporting Eta-Squared Values from Multifactor ANOVA Designs
ERIC Educational Resources Information Center
Pierce, Charles A.; Block, Richard A.; Aguinis, Herman
2004-01-01
The authors provide a cautionary note on reporting accurate eta-squared values from multifactor analysis of variance (ANOVA) designs. They reinforce the distinction between classical and partial eta-squared as measures of strength of association. They provide examples from articles published in premier psychology journals in which the authors…
Should a First Course in ANOVA Be Taught Through MLR?
ERIC Educational Resources Information Center
Williams, John D.
Before implementing a course in the analysis of variance (ANOVA) taught through multiple linear regression, several concerns must be addressed. Adequate computer facilities that are available to students on a low-cost or cost-free basis are necessary; also students must be able to meaningfully communicate with their major advisor regarding their…
Planned Comparisons as Better Alternatives to ANOVA Omnibus Tests.
ERIC Educational Resources Information Center
Benton, Roberta L.
Analyses of data are presented to illustrate the advantages of using a priori or planned comparisons rather than omnibus analysis of variance (ANOVA) tests followed by post hoc or posteriori testing. The two types of planned comparisons considered are planned orthogonal non-trend coding contrasts and orthogonal polynomial or trend contrast coding.…
Use of "t"-Test and ANOVA in Career-Technical Education Research
ERIC Educational Resources Information Center
Rojewski, Jay W.; Lee, In Heok; Gemici, Sinan
2012-01-01
Use of t-tests and analysis of variance (ANOVA) procedures in published research from three scholarly journals in career and technical education (CTE) during a recent 5-year period was examined. Information on post hoc analyses, reporting of effect size, alpha adjustments to account for multiple tests, power, and examination of assumptions…
Assessment of Adolescent Perceptions on Parental Attitudes on Different Variables
ERIC Educational Resources Information Center
Ersoy, Evren
2015-01-01
The purpose of this study is to examine secondary school student perceptions of parental attitudes with regard to specific variables. Independent samples t tests for parametric distributions and one-way analysis of variance (ANOVA) were used for analyzing the data; when the ANOVA analyses were significant, the Scheffe test was conducted on homogeneous…
The Statistical Power of Planned Comparisons.
ERIC Educational Resources Information Center
Benton, Roberta L.
Basic principles underlying statistical power are examined; and issues pertaining to effect size, sample size, error variance, and significance level are highlighted via the use of specific hypothetical examples. Analysis of variance (ANOVA) and related methods remain popular, although other procedures sometimes have more statistical power against…
ERIC Educational Resources Information Center
Shieh, Gwowen; Jan, Show-Li
2015-01-01
The general formulation of a linear combination of population means permits a wide range of research questions to be tested within the context of ANOVA. However, it has been stressed in many research areas that the homogeneous variances assumption is frequently violated. To accommodate the heterogeneity of variance structure, the…
ERIC Educational Resources Information Center
Suen, Hoi K.; And Others
The applicability of the Bayesian random-effect analysis of variance (ANOVA) model developed by G. C. Tiao and W. Y. Tan (1966) and of a method suggested by H. K. Suen and P. S. Lee (1987) is explored for the generalizability analysis of autocorrelated data. According to Tiao and Tan, if time series data could be described as a first-order…
Eta Squared, Partial Eta Squared, and Misreporting of Effect Size in Communication Research.
ERIC Educational Resources Information Center
Levine, Timothy R.; Hullett, Craig R.
2002-01-01
Alerts communication researchers to potential errors stemming from the use of SPSS (Statistical Package for the Social Sciences) to obtain estimates of eta squared in analysis of variance (ANOVA). Strives to clarify issues concerning the development and appropriate use of eta squared and partial eta squared in ANOVA. Discusses the reporting of…
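A small worked example of the distinction at issue, classical eta-squared against partial eta-squared (the sums of squares are hypothetical):

def eta_squared(ss_effect, ss_total):
    return ss_effect / ss_total

def partial_eta_squared(ss_effect, ss_error):
    return ss_effect / (ss_effect + ss_error)

# Hypothetical two-way ANOVA sums of squares.
ss_a, ss_b, ss_ab, ss_error = 40.0, 25.0, 10.0, 125.0
ss_total = ss_a + ss_b + ss_ab + ss_error

print("eta^2 (A):        ", eta_squared(ss_a, ss_total))          # 0.20
print("partial eta^2 (A):", partial_eta_squared(ss_a, ss_error))  # ~0.24
# In multifactor designs partial eta-squared is at least as large as classical
# eta-squared, which is why the two must not be reported interchangeably.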
Finding P-Values for F Tests of Hypothesis on a Spreadsheet.
ERIC Educational Resources Information Center
Rochowicz, John A., Jr.
The calculation of the F statistic for a one-factor analysis of variance (ANOVA) and the construction of an ANOVA tables are easily implemented on a spreadsheet. This paper describes how to compute the p-value (observed significance level) for a particular F statistic on a spreadsheet. Decision making on a spreadsheet and applications to the…
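The same p-value computation, expressed here in Python rather than spreadsheet functions, is the upper-tail probability of the F distribution at the observed statistic (the F value and degrees of freedom below are hypothetical):

from scipy import stats

f_stat, df_between, df_within = 4.26, 2, 27           # hypothetical one-way ANOVA results
p_value = stats.f.sf(f_stat, df_between, df_within)   # survival function = 1 - CDF
print(f"p = {p_value:.4f}")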
The analysis of morphometric data on rocky mountain wolves and arctic wolves using statistical methods
NASA Astrophysics Data System (ADS)
Ammar Shafi, Muhammad; Saifullah Rusiman, Mohd; Hamzah, Nor Shamsidah Amir; Nor, Maria Elena; Ahmad, Noor’ani; Azia Hazida Mohamad Azmi, Nur; Latip, Muhammad Faez Ab; Hilmi Azman, Ahmad
2018-04-01
Morphometrics is a quantitative analysis depending on the shape and size of several specimens. Morphometric quantitative analyses are commonly used to analyse the fossil record, the shape and size of specimens, and other features. The aim of the study is to find the differences between rocky mountain wolves and arctic wolves based on gender. The sample utilised secondary data comprising seven independent variables and two dependent variables. Statistical modelling such as analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA) was used in the analysis. The results showed that differences exist between arctic wolves and rocky mountain wolves based on the independent factors and gender.
ERIC Educational Resources Information Center
Santos-Delgado, M. J.; Larrea-Tarruella, L.
2004-01-01
The back-titration methods are compared statistically to establish glycine in a nonaqueous medium of acetic acid. Important variations in the mean values of glycine are observed due to the interaction effects between the analysis of variance (ANOVA) technique and a statistical study through computer software.
ERIC Educational Resources Information Center
Krus, David J.; Krus, Patricia H.
1978-01-01
The conceptual differences between coded regression analysis and traditional analysis of variance are discussed. Also, a modification of several SPSS routines is proposed which allows for direct interpretation of ANOVA and ANCOVA results in a form stressing the strength and significance of scrutinized relationships. (Author)
Robustness of S1 statistic with Hodges-Lehmann for skewed distributions
NASA Astrophysics Data System (ADS)
Ahad, Nor Aishah; Yahaya, Sharipah Soaad Syed; Yin, Lee Ping
2016-10-01
Analysis of variance (ANOVA) is a commonly used parametric method to test differences in means for more than two groups when the populations are normally distributed. ANOVA is highly inefficient under non-normal and heteroscedastic settings. When the assumptions are violated, researchers look for alternatives such as the nonparametric Kruskal-Wallis test or robust methods. This study focused on a flexible method, the S1 statistic, for comparing groups using the median as the location estimator. The S1 statistic was modified by substituting the median with the Hodges-Lehmann estimator and the default scale estimator with the variance of Hodges-Lehmann and MADn to produce two different test statistics for comparing groups. The bootstrap method was used for testing the hypotheses since the sampling distributions of these modified S1 statistics are unknown. The performance of the proposed statistics in terms of Type I error was measured and compared against the original S1 statistic, ANOVA and Kruskal-Wallis. The proposed procedures show improvement over the original statistic, especially under extremely skewed distributions.
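A hedged sketch of the Hodges-Lehmann location estimator that the modification relies on, the median of all pairwise Walsh averages (illustrative code, not the authors'; the bootstrap step is only sketched in a comment):

import numpy as np

def hodges_lehmann(x):
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x))            # include i == j pairs (Walsh averages)
    return np.median((x[i] + x[j]) / 2.0)

rng = np.random.default_rng(3)
skewed = rng.chisquare(df=2, size=30)
print("mean:", skewed.mean(), "median:", np.median(skewed), "HL:", hodges_lehmann(skewed))

# Because the sampling distribution of the modified statistic is unknown, the paper
# bootstraps it: resample each group with replacement and recompute the test statistic
# on every resample to build a reference distribution for the hypothesis test.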
Sequential experimental design based generalised ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-07-01
Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.
Sequential experimental design based generalised ANOVA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in
Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.
USDA-ARS?s Scientific Manuscript database
Forty-one samples of skim milk powder (SMP) and non-fat dry milk (NFDM) from 8 suppliers, 13 production sites, and 3 processing temperatures were analyzed by NIR diffuse reflectance spectrometry over a period of three days. NIR reflectance spectra (1700-2500 nm) were converted to pseudo-absorbance ...
Technical note: Application of the Box-Cox data transformation to animal science experiments.
Peltier, M R; Wilcox, C J; Sharp, D C
1998-03-01
In the use of ANOVA for hypothesis testing in animal science experiments, the assumption of homogeneity of errors often is violated because of scale effects and the nature of the measurements. We demonstrate a method for transforming data so that the assumptions of ANOVA are met (or violated to a lesser degree) and apply it in analysis of data from a physiology experiment. Our study examined whether melatonin implantation would affect progesterone secretion in cycling pony mares. Overall treatment variances were greater in the melatonin-treated group, and several common transformation procedures failed. Application of the Box-Cox transformation algorithm reduced the heterogeneity of error and permitted the assumption of equal variance to be met.
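A brief sketch of the same idea using library tools rather than the authors' procedure: scipy estimates the Box-Cox lambda by maximum likelihood, after which the ANOVA assumptions can be re-checked on the transformed response (the data below are hypothetical):

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
progesterone = rng.lognormal(mean=1.0, sigma=0.6, size=40)  # hypothetical right-skewed, positive response

transformed, lam = stats.boxcox(progesterone)   # lambda chosen by maximum likelihood
print("estimated lambda:", lam)
print("variance before:", progesterone.var(), "after:", transformed.var())
# Levene's test on the transformed treatment groups would then verify that the
# heterogeneity of error variance has been reduced before running the ANOVA.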
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Kunkun, E-mail: ktg@illinois.edu; Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence; Congedo, Pietro M.
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
Boccard, Julien; Rudaz, Serge
2016-05-12
Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences remains often a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework, is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. Copyright © 2016 Elsevier B.V. All rights reserved.
Sabouhi, Mahmoud; Bajoghli, Farshad; Abolhasani, Majid
2015-01-01
The success of an implant-supported prosthesis is dependent on the passive fit of its framework fabricated on a precise cast. The aim of this in vitro study was to digitally compare the three-dimensional accuracy of implant impression techniques in partially and completely edentulous conditions. The master model simulated two clinical conditions. The first condition was a partially edentulous mandibular arch with an anterior edentulous space (D condition). Two implant analogs were inserted in bilateral canine sites. After elimination of the teeth, the model was converted to a completely edentulous condition (E condition). Three different impression techniques were performed (open splinted [OS], open unsplinted [OU], closed [C]) for each condition. Six groups of casts (DOS, DOU, DC, EOS, EOU, EC) (n = 8), totaling 48 casts, were made. Two scan bodies were secured onto the master edentulous model and onto each test cast and digitized by an optical scanning system. The related scans were superimposed, and the mean discrepancy for each cast was determined. The statistical analysis showed no significant difference in the accuracy of casts as a function of model status (P = .78, analysis of variance [ANOVA] test), impression technique (P = .57, ANOVA test), or as the combination of both (P = .29, ANOVA test). The distribution of data was normal (Kolmogorov-Smirnov test). Model status (dentate or edentulous) and impression technique did not influence the precision of the casts. There is no difference among any of the impression techniques in either simulated clinical condition.
NASA Astrophysics Data System (ADS)
Arif, Sajjad; Tanwir Alam, Md; Ansari, Akhter H.; Bilal Naim Shaikh, Mohd; Arif Siddiqui, M.
2018-05-01
The tribological performance of aluminium hybrid composites reinforced with micro SiC (5 wt%) and nano zirconia (0, 3, 6 and 9 wt%) fabricated through the powder metallurgy technique was investigated using statistical and artificial neural network (ANN) approaches. The influence of zirconia reinforcement, sliding distance and applied load was analyzed with tests based on a full factorial design of experiments. Analysis of variance (ANOVA) was used to evaluate the percentage contribution of each process parameter to wear loss. The ANOVA approach suggested that wear loss is mainly influenced by sliding distance, followed by zirconia reinforcement and applied load. Further, a feed forward back propagation neural network was applied to the input/output data for predicting and analyzing the wear behaviour of the fabricated composite. A very close correlation between experimental and ANN outputs was achieved by implementing the model. Finally, the ANN model was effectively used to find the influence of various control factors on the wear behaviour of the hybrid composites.
Split-plot microarray experiments: issues of design, power and sample size.
Tsai, Pi-Wen; Lee, Mei-Ling Ting
2005-01-01
This article focuses on microarray experiments with two or more factors in which treatment combinations of the factors corresponding to the samples paired together onto arrays are not completely random. A main effect of one (or more) factor(s) is confounded with arrays (the experimental blocks). This is called a split-plot microarray experiment. We utilise an analysis of variance (ANOVA) model to assess differentially expressed genes for between-array and within-array comparisons that are generic under a split-plot microarray experiment. Instead of standard t- or F-test statistics that rely on mean square errors of the ANOVA model, we use a robust method, referred to as 'a pooled percentile estimator', to identify genes that are differentially expressed across different treatment conditions. We illustrate the design and analysis of split-plot microarray experiments based on a case application described by Jin et al. A brief discussion of power and sample size for split-plot microarray experiments is also presented.
Tensiomyographical responses to accelerometer loads in female collegiate basketball players.
Peterson, Kyle D; Quiggle, Gabriela T
2017-12-01
The purpose of the present study was to characterise the relationship between relative versus absolute internal and external loads in collegiate basketball players throughout the course of a season. Five Division I basketball players wore triaxial accelerometers throughout the 2015-2016 season and were tensiomyographically assessed weekly. One-way repeated-measures analysis of variance (RM ANOVA) with least-significant-difference (LSD) pairwise comparisons was used to determine which absolute weekly loads differed across the season. Cohen's d was used to supplement the determination of meaningful relative load changes. Overall RM ANOVA models suggested that absolute external load differences occurred (PlayerLoad™ F = 17.63; IMA™ F = 31.63). Two-way RM ANOVA models revealed main effect differences between muscle groups for Tc (F = 9.11) and Dm (F = 3.25). Meaningful relative load changes between weeks were observed for both external and internal loads. The present study observed that tensiomyography, utilised as a tool to monitor internal load, may be more suitable for detecting fatigue from relative external load changes than from the absolute load attained. Limiting weekly training volume changes to ≤10% may maintain appropriate adaptation. Mediolateral-plane IMA™ and the adductor longus muscle group may be pertinent metrics when monitoring female collegiate basketball athletes.
Colombo, Lisa M; Perla, Rocco J; Carifio, James; Bernhardt, Jean M; Slayton, Val W
2011-01-01
Combining the use of employee perception surveys with sound analytical techniques and models is critical to capturing high quality data from which effective decisions can be made in complex healthcare settings. This study used the Baldrige Award companion surveys with an analysis of variance (ANOVA) framework to identify discordant perceptions of hospital staff and leadership in the areas of customer focus, knowledge management, and results that were significant at the 0.05 and 0.01 levels. Senior leaders in the organization found the ANOVA framework helpful as they interpreted results from the Baldrige companion surveys and planned future improvement activities. During the execution of our study a number of difficulties and challenges arose that are not uncommon to survey administration in smaller settings, such as community hospitals, or in larger hospital settings with no research staff or research staff with relevant psychometric expertise. Our experience suggests that the Baldrige companion survey process would be enhanced by providing organizations with general guidance and protocols for optimal survey administration and data analysis. The purpose of this article is to outline the ANOVA model we used with the Baldrige companion surveys and to provide guidance related to the administration and analysis of these companion surveys for those that use them. © 2010 National Association for Healthcare Quality.
ERIC Educational Resources Information Center
Guo, Jiin-Huarng; Luh, Wei-Ming
2008-01-01
This study proposes an approach for determining appropriate sample size for Welch's F test when unequal variances are expected. Given a certain maximum deviation in population means and using the quantile of F and t distributions, there is no need to specify a noncentrality parameter and it is easy to estimate the approximate sample size needed…
NASA Astrophysics Data System (ADS)
Nazir, Mohd Yusuf Mohd; Al-Shorgani, Najeeb Kaid Nasser; Kalil, Mohd Sahaid; Hamid, Aidil Abdul
2015-09-01
In this study, three factors (fructose concentration, agitation speed and monosodium glutamate (MSG) concentration) were optimized to enhance DHA production by Schizochytrium SW1 using response surface methodology (RSM). A central composite design was applied as the experimental design and analysis of variance (ANOVA) was used to analyze the data. The experiments were conducted in 500 mL flasks with a 100 mL working volume at 30°C for 96 hours. ANOVA revealed that the process was adequately and significantly represented by the quadratic model (p<0.0001) and that two of the factors, namely agitation speed and MSG concentration, significantly affected DHA production (p<0.005). The level of influence of each variable and a quadratic polynomial equation for DHA production were obtained by multiple regression analyses. The estimated optimum conditions for maximizing DHA production by SW1 were 70 g/L fructose, 250 rpm agitation speed and 12 g/L MSG. Consequently, the quadratic model was validated by applying the estimated optimum conditions, which confirmed the model validity, and 52.86% of DHA was produced.
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.
2018-05-01
We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, as is widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computational expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. The mCov-SI are therefore proposed in order to bound this magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high predicted R² values (94.10% for mass loss and 95.37% for reaction heat), which correlated the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at a 95% confidence level; the F-test, lack-of-fit test and normal probability plots of the residuals implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were computed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
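As a rough illustration of the Sobol' variance decomposition mentioned above (not the authors' code), the sketch below estimates first-order sensitivity indices for a placeholder response function by the pick-and-freeze method; the factor names, ranges and the quadratic response are assumptions standing in for the fitted BBD regression models.

import numpy as np

def first_order_sobol(model, bounds, n=10_000, seed=0):
    """Estimate first-order Sobol' indices by the pick-and-freeze method.

    model  : callable mapping an (n, d) array of inputs to an (n,) output
    bounds : list of (low, high) tuples, one per input factor
    """
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])

    # Two independent input matrices scaled to the factor ranges
    A = lo + (hi - lo) * rng.random((n, d))
    B = lo + (hi - lo) * rng.random((n, d))

    fA, fB = model(A), model(B)
    var_y = np.var(np.concatenate([fA, fB]), ddof=1)

    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # vary only the i-th factor between fA and fABi
        fABi = model(ABi)
        # Saltelli-type estimator of the first-order effect of factor i
        S[i] = np.mean(fB * (fABi - fA)) / var_y
    return S

# Hypothetical quadratic response, standing in for the fitted regression model
def response(x):
    t, w, r = x[:, 0], x[:, 1], x[:, 2]   # e.g. temperature, tire fraction, heating rate
    return 2.0 * t + 0.5 * w + 0.1 * r + 0.8 * t * w + 0.3 * t ** 2

print(first_order_sobol(response, bounds=[(0, 1)] * 3))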
Multiple comparison analysis testing in ANOVA.
McHugh, Mary L
2011-01-01
The Analysis of Variance (ANOVA) test has long been an important tool for researchers conducting studies on multiple experimental groups and one or more control groups. However, ANOVA cannot provide detailed information on differences among the various study groups, or on complex combinations of study groups. To fully understand group differences in an ANOVA, researchers must conduct tests of the differences between particular pairs of experimental and control groups. Tests conducted on subsets of data tested previously in another analysis are called post hoc tests. A class of post hoc tests that provide this type of detailed information for ANOVA results are called "multiple comparison analysis" tests. The most commonly used multiple comparison analysis statistics include the following tests: Tukey, Newman-Keuls, Scheffé, Bonferroni and Dunnett. These statistical tools each have specific uses, advantages and disadvantages. Some are best used for testing theory while others are useful in generating new theory. Selection of the appropriate post hoc test will provide researchers with the most detailed information while limiting Type I errors due to alpha inflation.
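A minimal sketch of the post hoc workflow described above, assuming Python with SciPy and statsmodels: an omnibus one-way ANOVA followed by Tukey's HSD multiple comparison test. The three groups and their means are synthetic placeholders, not data from the article.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Hypothetical outcome for one control and two experimental groups
control = rng.normal(10.0, 2.0, 30)
treat_a = rng.normal(12.0, 2.0, 30)
treat_b = rng.normal(12.5, 2.0, 30)

# Omnibus one-way ANOVA: are any group means different?
F, p = f_oneway(control, treat_a, treat_b)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

# Tukey's HSD post hoc test: which particular pairs differ,
# with the family-wise error rate held at alpha = 0.05
values = np.concatenate([control, treat_a, treat_b])
groups = ["control"] * 30 + ["treat_a"] * 30 + ["treat_b"] * 30
print(pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05))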
Andridge, Rebecca R.
2011-01-01
In cluster randomized trials (CRTs), identifiable clusters rather than individuals are randomized to study groups. Resulting data often consist of a small number of clusters with correlated observations within a treatment group. Missing data often present a problem in the analysis of such trials, and multiple imputation (MI) has been used to create complete data sets, enabling subsequent analysis with well-established analysis methods for CRTs. We discuss strategies for accounting for clustering when multiply imputing a missing continuous outcome, focusing on estimation of the variance of group means as used in an adjusted t-test or ANOVA. These analysis procedures are congenial to (can be derived from) a mixed effects imputation model; however, this imputation procedure is not yet available in commercial statistical software. An alternative approach that is readily available and has been used in recent studies is to include fixed effects for cluster, but the impact of using this convenient method has not been studied. We show that under this imputation model the MI variance estimator is positively biased and that smaller ICCs lead to larger overestimation of the MI variance. Analytical expressions for the bias of the variance estimator are derived in the case of data missing completely at random (MCAR), and cases in which data are missing at random (MAR) are illustrated through simulation. Finally, various imputation methods are applied to data from the Detroit Middle School Asthma Project, a recent school-based CRT, and differences in inference are compared. PMID:21259309
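For context, the MI variance of a group mean referred to above is obtained by combining the m completed-data estimates with Rubin's rules; the sketch below shows that combination with hypothetical per-imputation estimates (it does not reproduce the paper's mixed-effects or fixed-effects imputation models).

import numpy as np

def rubin_combine(estimates, variances):
    """Combine m completed-data estimates of a group mean via Rubin's rules.

    estimates : per-imputation point estimates (length m)
    variances : per-imputation variance estimates of those points (length m)
    Returns the pooled estimate and its MI variance T = W + (1 + 1/m) * B.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()                       # pooled point estimate
    W = variances.mean()                           # within-imputation variance
    B = estimates.var(ddof=1)                      # between-imputation variance
    T = W + (1.0 + 1.0 / m) * B                    # total MI variance
    return q_bar, T

# Hypothetical group-mean estimates from m = 5 imputed data sets
q_hat = [3.10, 3.25, 2.98, 3.18, 3.05]
u_hat = [0.040, 0.042, 0.039, 0.041, 0.040]
print(rubin_combine(q_hat, u_hat))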
Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan
2016-01-01
The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.
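A hedged sketch of the analysis step described above (not the NASA tool itself): a distributed set of single observations over two factors is fitted with a quadratic response surface by least squares, and ANOVA is run on the fitted terms using statsmodels. The factors, response and noise level are invented for illustration, and the D-optimal point selection is not shown.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)

# Hypothetical "distributed" test plan: one observation at each of many
# scattered conditions of two factors, rather than many repeats at a few points
n = 60
df = pd.DataFrame({"x1": rng.uniform(-1, 1, n), "x2": rng.uniform(-1, 1, n)})
df["y"] = 3 + 2 * df.x1 - 1.5 * df.x2 + 0.8 * df.x1 * df.x2 + rng.normal(0, 0.3, n)

# Response-surface model fit by least squares, then ANOVA on the fitted terms
rsm = smf.ols("y ~ x1 + x2 + x1:x2 + I(x1**2) + I(x2**2)", data=df).fit()
print(anova_lm(rsm, typ=2))        # which terms carry significant variance
print(rsm.params)                  # parametric surrogate for prediction/augmentation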
One-way ANOVA based on interval information
NASA Astrophysics Data System (ADS)
Hesamian, Gholamreza
2016-08-01
This paper deals with extending the one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, first a notion of interval random variable is introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity of interval variances assumption. Moreover, the least significant difference (LSD) method for investigating multiple comparisons of interval means is developed for when the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision-making method leads to degrees of acceptance or rejection of the interval hypotheses. An applied example is used to show the performance of this method.
A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation
NASA Astrophysics Data System (ADS)
Qiang, Z.; Zeng, L.; Wu, L.
2016-12-01
Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process in a landfill. To accurately characterize landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements, e.g., the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained in the PCE. Illustrated with numerical case studies, the proposed method shows significant superiority in computational efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.
Rahman, A.; Tsai, F.T.-C.; White, C.D.; Carlson, D.A.; Willson, C.S.
2008-01-01
Data integration is challenging where there are different levels of support between primary and secondary data that need to be correlated in various ways. A geostatistical method is described, which integrates the hydraulic conductivity (K) measurements and electrical resistivity data to better estimate the K distribution in the Upper Chicot Aquifer of southwestern Louisiana, USA. The K measurements were obtained from pumping tests and represent the primary (hard) data. Borehole electrical resistivity data from electrical logs were regarded as the secondary (soft) data, and were used to infer K values through Archie's law and the Kozeny-Carman equation. A pseudo cross-semivariogram was developed to cope with the resistivity data non-collocation. Uncertainties in the auto-semivariograms and pseudo cross-semivariogram were quantified. The groundwater flow model responses by the regionalized and coregionalized models of K were compared using analysis of variance (ANOVA). The results indicate that non-collocated secondary data may improve estimates of K and affect groundwater flow responses of practical interest, including specific capacity and drawdown. © Springer-Verlag 2007.
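As an illustration of the soft-data transform described above, the sketch below maps resistivity to porosity via Archie's law and porosity to hydraulic conductivity via a Kozeny-Carman relation; the Archie constants, water resistivity and grain diameter are placeholder values, not the calibration used for the Chicot Aquifer.

import numpy as np

def porosity_from_resistivity(R0, Rw=2.0, a=1.0, m=2.0):
    """Archie's law: formation factor F = R0/Rw = a * phi**(-m), solved for phi.
    Rw, a and m are placeholder values; in practice they are calibrated."""
    F = R0 / Rw
    return (a / F) ** (1.0 / m)

def kozeny_carman_K(phi, d=2e-4, rho=1000.0, g=9.81, mu=1e-3):
    """Kozeny-Carman hydraulic conductivity (m/s) from porosity phi and a
    representative grain diameter d (m); fluid properties are for water."""
    k = (d ** 2 / 180.0) * phi ** 3 / (1.0 - phi) ** 2   # intrinsic permeability (m^2)
    return rho * g * k / mu

# Hypothetical borehole resistivities (ohm-m) converted to soft K estimates
R0 = np.array([20.0, 35.0, 60.0])
phi = porosity_from_resistivity(R0)
print(phi, kozeny_carman_K(phi))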
LED traffic signal replacement schedules : facilitating smooth freight flows.
DOT National Transportation Integrated Search
2011-11-01
This research details a field study of LED traffic signals in Missouri and develops a replacement schedule based on key findings. : Rates of degradation were statistically analyzed using Analysis of Variance (ANOVA). Results of this research will pro...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Yuanshun; Baek, Seung H.; Garcia-Diza, Alberto
2012-01-01
This paper designs a comprehensive approach based on the engineering machine/system concept, to model, analyze, and assess the level of CO2 exchange between the atmosphere and terrestrial ecosystems, which is an important factor in understanding changes in global climate. The focus of this article is on spatial patterns and on the correlation between levels of CO2 fluxes and a variety of influencing factors in eco-environments. The engineering/machine concept used is a system protocol that includes the sequential activities of design, test, observe, and model. This concept is applied to explicitly include various influencing factors and interactions associated with CO2 fluxes. To formulate effective models of a large and complex climate system, this article introduces a modeling technique that will be referred to as Stochastic Filtering Analysis of Variance (SF-ANOVA). The CO2 flux data observed from some sites of AmeriFlux are used to illustrate and validate the analysis, prediction and globalization capabilities of the proposed engineering approach and the SF-ANOVA technology. The SF-ANOVA modeling approach was compared to stepwise regression, ridge regression, and neural networks. The comparison indicated that the proposed approach is a valid and effective tool with similar accuracy and less complexity than the other procedures.
ANALYSES OF NEUROBEHAVIORAL SCREENING DATA: BENCHMARK DOSE ESTIMATION.
Analysis of neurotoxicological screening data such as those of the functional observational battery (FOB) traditionally relies on analysis of variance (ANOVA) with repeated measurements, followed by determination of a no-adverse-effect level (NOAEL). The US EPA has proposed the ...
Ghosh, Debasree; Chattopadhyay, Parimal
2012-06-01
The objective of the work was to use the method of quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of fermented food products such as cow milk curd and soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The results from the PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
Xu, Jing; Li, Wenlong; Zhang, Chunhui; Liu, Wei; Du, Guozhen
2014-01-01
Seed germination is a crucial stage in the life history of a species because it represents the pathway from adult to offspring, and it can affect the distribution and abundance of species in communities. In this study, we examined the effects of phylogenetic, life history and environmental factors on seed germination of 134 common species from an alpine/subalpine meadow on the eastern Tibetan Plateau. In one-way ANOVAs, phylogenetic groups (at or above order) explained 13.0% and 25.9% of the variance in germination percentage and mean germination time, respectively; life history attributes, such as seed size and dispersal mode, explained 3.7% and 2.1% of the variance in germination percentage and 6.3% and 8.7% of the variance in mean germination time, respectively; the environmental factors temperature and habitat explained 4.7% and 1.0% of the variance in germination percentage and 13.5% and 1.7% of the variance in mean germination time, respectively. Our results demonstrated that elevated temperature would lead to a significant increase in germination percentage and an accelerated germination. Multi-factorial ANOVAs showed that the three major factors contributing to differences in germination percentage and mean germination time in this alpine/subalpine meadow were phylogenetic attributes, temperature and seed size (which independently explained 10.5%, 4.7% and 1.4% of the variance in germination percentage, respectively, and 14.9%, 13.5% and 2.7% of the variance in mean germination time, respectively). In addition, there were strong associations between phylogenetic group and life history attributes, and between life history attributes and environmental factors. Therefore, germination variation is constrained mainly by phylogenetic inertia in a community, and seed germination variation correlated with phylogeny is also associated with life history attributes, suggesting a role of niche adaptation in the conservation of germination variation within lineages. Meanwhile, selection can maintain the association between germination behavior and the environmental conditions within a lineage. PMID:24893308
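The percentages of variance reported above are effect sizes from ANOVA decompositions; a minimal sketch of that computation (eta-squared for a single categorical factor) is given below with synthetic germination data and invented group labels.

import numpy as np

def eta_squared(values, groups):
    """Fraction of total variance in `values` explained by a categorical factor
    (eta-squared = SS_between / SS_total from a one-way ANOVA decomposition)."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    grand_mean = values.mean()
    ss_total = ((values - grand_mean) ** 2).sum()
    ss_between = sum(
        len(values[groups == g]) * (values[groups == g].mean() - grand_mean) ** 2
        for g in np.unique(groups)
    )
    return ss_between / ss_total

# Hypothetical germination percentages grouped by phylogenetic group
rng = np.random.default_rng(3)
germ = np.concatenate([rng.normal(60, 15, 40), rng.normal(45, 15, 50), rng.normal(70, 15, 44)])
grp = np.array(["GroupA"] * 40 + ["GroupB"] * 50 + ["GroupC"] * 44)
print(f"variance explained by phylogenetic group: {eta_squared(germ, grp):.1%}")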
Life expectancy evaluation and development of a replacement schedule for LED traffic signals.
DOT National Transportation Integrated Search
2011-03-01
This research details a field study of LED traffic signals in Missouri and develops a replacement schedule : based on key findings. Rates of degradation were statistically analyzed using Analysis of Variance : (ANOVA). Results of this research will p...
Uncertainty Analysis for a Jet Flap Airfoil
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Cruz, Josue
2006-01-01
An analysis of variance (ANOVA) study was performed to quantify the potential uncertainties of lift and pitching moment coefficient calculations from a computational fluid dynamics code, relative to an experiment, for a jet flap airfoil configuration. Uncertainties due to a number of factors including grid density, angle of attack and jet flap blowing coefficient were examined. The ANOVA software produced a numerical model of the input coefficient data, as functions of the selected factors, to a user-specified order (linear, 2-factor interaction, quadratic, or cubic). Residuals between the model and actual data were also produced at each of the input conditions, and uncertainty confidence intervals (in the form of Least Significant Differences or LSD) for experimental, computational, and combined experimental / computational data sets were computed. The LSD bars indicate the smallest resolvable differences in the functional values (lift or pitching moment coefficient) attributable solely to changes in the independent variable, given just the input data points from selected data sets. The software also provided a collection of diagnostics which evaluate the suitability of the input data set for use within the ANOVA process, and which examine the behavior of the resultant data, possibly suggesting transformations which should be applied to the data to reduce the LSD. The results illustrate some of the key features of, and results from, the uncertainty analysis studies, including the use of both numerical (continuous) and categorical (discrete) factors, the effects of the number and range of the input data points, and the effects of the number of factors considered simultaneously.
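For reference, the LSD interval mentioned above can be computed from the ANOVA error mean square; the sketch below assumes a balanced design and placeholder values for the mean squared error, replicates per level and error degrees of freedom.

import numpy as np
from scipy import stats

def least_significant_difference(mse, n_per_level, df_error, alpha=0.05):
    """Smallest difference between two level means that is resolvable at the
    given alpha, for a balanced design: LSD = t * sqrt(2 * MSE / n)."""
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df_error)
    return t_crit * np.sqrt(2.0 * mse / n_per_level)

# Hypothetical values standing in for the ANOVA of the lift coefficient:
# residual mean square, replicates per factor level, error degrees of freedom
print(least_significant_difference(mse=4.0e-4, n_per_level=5, df_error=36))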
Tools for Basic Statistical Analysis
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curve fit data to the linear equation y=f(x) and will do an ANOVA to check its significance.
NASA Astrophysics Data System (ADS)
Gong, Maozhen
Selecting an appropriate prior distribution is a fundamental issue in Bayesian Statistics. In this dissertation, under the framework provided by Berger and Bernardo, I derive the reference priors for several models which include: Analysis of Variance (ANOVA)/Analysis of Covariance (ANCOVA) models with a categorical variable under common ordering constraints, the conditionally autoregressive (CAR) models and the simultaneous autoregressive (SAR) models with a spatial autoregression parameter rho considered. The performances of reference priors for ANOVA/ANCOVA models are evaluated by simulation studies with comparisons to Jeffreys' prior and Least Squares Estimation (LSE). The priors are then illustrated in a Bayesian model of the "Risk of Type 2 Diabetes in New Mexico" data, where the relationship between the type 2 diabetes risk (through Hemoglobin A1c) and different smoking levels is investigated. In both simulation studies and real data set modeling, the reference priors that incorporate internal order information show good performances and can be used as default priors. The reference priors for the CAR and SAR models are also illustrated in the "1999 SAT State Average Verbal Scores" data with a comparison to a Uniform prior distribution. Due to the complexity of the reference priors for both CAR and SAR models, only a portion (12 states in the Midwest) of the original data set is considered. The reference priors can give a different marginal posterior distribution compared to a Uniform prior, which provides an alternative for prior specifications for areal data in Spatial statistics.
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
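A toy sketch of the two criteria discussed above, under the simplifying assumption that the squared bias is taken from the ensemble-mean error against hindcast observations and the model variance from the spread of an ensemble over structures, parameters and inputs; all arrays are synthetic.

import numpy as np

rng = np.random.default_rng(4)
obs = rng.normal(8.0, 1.0, 25)                      # observed yields (hindcast sites/years)
pred_fixed = obs + rng.normal(0.5, 0.8, 25)         # predictions from one fixed model

# MSEP_fixed: mean squared error of the single fixed model against data
msep_fixed = np.mean((pred_fixed - obs) ** 2)

# Ensemble of predictions sampled over model structure, parameters and inputs
ensemble = pred_fixed[None, :] + rng.normal(0.0, 0.6, (40, 25))

# MSEP_uncertain(X) approximated as squared bias of the ensemble mean (from
# hindcasts) plus the model variance across the ensemble
bias_sq = np.mean((ensemble.mean(axis=0) - obs) ** 2)
model_var = ensemble.var(axis=0, ddof=1).mean()
print(msep_fixed, bias_sq + model_var)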
Nazir, Yusuf; Shuib, Shuwahida; Kalil, Mohd Sahaid; Song, Yuanda; Hamid, Aidil Abdul
2018-06-11
In this study, optimization of the growth, lipid and DHA production of Aurantiochytrium SW1 was carried out using response surface methodology (RSM), optimizing the initial fructose concentration, agitation speed and monosodium glutamate (MSG) concentration. A central composite design was applied as the experimental design and analysis of variance (ANOVA) was used to analyze the data. The ANOVA revealed that the process, which was adequately represented by the quadratic model, was significant (p < 0.0001) for all the responses. All three factors were significant (p < 0.005) in influencing the biomass and lipid data, while only two factors (agitation speed and MSG) had a significant effect on DHA production (p < 0.005). The estimated optimal conditions for enhanced growth, lipid and DHA production were 70 g/L fructose, 250 rpm agitation speed and 10 g/L MSG. Consequently, the quadratic model was validated by applying the estimated optimum conditions, which confirmed the model validity, with 19.0 g/L biomass, 9.13 g/L lipid and 4.75 g/L of DHA produced. The growth, lipid and DHA were 28, 36 and 35% higher, respectively, than those produced in the original medium prior to optimization.
Bouchard, C; An, P; Rice, T; Skinner, J S; Wilmore, J H; Gagnon, J; Pérusse, L; Leon, A S; Rao, D C
1999-09-01
The aim of this study was to test the hypothesis that individual differences in the response of maximal O2 uptake (VO2max) to a standardized training program are characterized by familial aggregation. A total of 481 sedentary adult Caucasians from 98 two-generation families was exercise trained for 20 wk and was tested for VO2max on a cycle ergometer twice before and twice after the training program. The mean increase in VO2max reached approximately 400 ml/min, but there was considerable heterogeneity in responsiveness, with some individuals experiencing little or no gain, whereas others gained >1.0 l/min. An ANOVA revealed that there was 2.5 times more variance between families than within families in the VO2max response. With the use of a model-fitting procedure, the most parsimonious models yielded a maximal heritability estimate of 47% for the VO2max response, which was adjusted for age and sex, with a maternal transmission of 28% in one of the models. We conclude that the trainability of VO2max is highly familial and includes a significant genetic component.
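The between- versus within-family variance comparison reported above comes from a one-way random-effects ANOVA with families as groups; a method-of-moments sketch with hypothetical training-response data is shown below (it is not the model-fitting procedure used to estimate heritability).

import numpy as np

def family_variance_components(response, family):
    """One-way random-effects ANOVA estimates of between- and within-family
    variance for a (roughly balanced) design, via the method of moments."""
    response = np.asarray(response, dtype=float)
    family = np.asarray(family)
    fams = np.unique(family)
    k = len(fams)
    n_i = np.array([np.sum(family == f) for f in fams])
    grand = response.mean()

    ss_between = sum(n * (response[family == f].mean() - grand) ** 2
                     for f, n in zip(fams, n_i))
    ss_within = sum(((response[family == f] - response[family == f].mean()) ** 2).sum()
                    for f in fams)
    msb = ss_between / (k - 1)
    msw = ss_within / (len(response) - k)
    n0 = (n_i.sum() - (n_i ** 2).sum() / n_i.sum()) / (k - 1)  # effective family size
    sigma2_between = max((msb - msw) / n0, 0.0)
    return sigma2_between, msw

# Hypothetical VO2max training responses (ml/min) for members of 3 families
resp = np.array([520, 480, 505, 300, 280, 310, 650, 700, 640, 675])
fam = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 3])
sb, sw = family_variance_components(resp, fam)
print(sb, sw, sb / sw)   # ratio of between- to within-family variance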
Predicting Presynaptic and Postsynaptic Neurotoxins by Developing Feature Selection Technique
Yang, Yunchun; Zhang, Chunmei; Chen, Rong; Huang, Po
2017-01-01
Presynaptic and postsynaptic neurotoxins are proteins which act at the presynaptic and postsynaptic membrane. Correctly predicting presynaptic and postsynaptic neurotoxins will provide important clues for drug-target discovery and drug design. In this study, we developed a theoretical method to discriminate presynaptic neurotoxins from postsynaptic neurotoxins. A strict and objective benchmark dataset was constructed to train and test our proposed model. The dipeptide composition was used to formulate neurotoxin samples. The analysis of variance (ANOVA) was proposed to find out the optimal feature set which can produce the maximum accuracy. In the jackknife cross-validation test, the overall accuracy of 94.9% was achieved. We believe that the proposed model will provide important information to study neurotoxins. PMID:28303250
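A hedged sketch of ANOVA-based feature selection in the spirit of the study above, using scikit-learn's per-feature F-test on a synthetic stand-in for the dipeptide composition matrix; the class labels, feature count and the fixed k are assumptions, whereas the paper selects the subset size by maximizing jackknife accuracy.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(5)
# Hypothetical stand-in for dipeptide composition: 120 sequences x 400 features
X = rng.random((120, 400))
y = rng.integers(0, 2, 120)          # 0 = presynaptic, 1 = postsynaptic (labels assumed)
X[y == 1, :10] += 0.15               # make a few features weakly informative

# ANOVA F-test per feature; keep the subset with the largest F statistics.
selector = SelectKBest(score_func=f_classif, k=50).fit(X, y)
print(selector.get_support(indices=True)[:10])   # indices of retained dipeptide features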
Wheat crown rot pathogens Fusarium graminearum and F. pseudograminearum lack specialization.
Chakraborty, Sukumar; Obanor, Friday; Westecott, Rhyannyn; Abeywickrama, Krishanthi
2010-10-01
This article reports a lack of pathogenic specialization among Australian Fusarium graminearum and F. pseudograminearum causing crown rot (CR) of wheat using analysis of variance (ANOVA), principal component and biplot analysis, Kendall's coefficient of concordance (W), and κ statistics. Overall, F. pseudograminearum was more aggressive than F. graminearum, supporting earlier delineation of the crown-infecting group as a new species. Although significant wheat line-pathogen isolate interaction in ANOVA suggested putative specialization when seedlings of 60 wheat lines were inoculated with 4 pathogen isolates or 26 wheat lines were inoculated with 10 isolates, significant W and κ showed agreement in rank order of wheat lines, indicating a lack of specialization. The first principal component representing nondifferential aggressiveness explained a large part (up to 65%) of the variation in CR severity. The differential components were small and more pronounced in seedlings than in adult plants. By maximizing variance on the first two principal components, biplots were useful for highlighting the association between isolates and wheat lines. A key finding of this work is that a range of analytical tools are needed to explore pathogenic specialization, and a statistically significant interaction in an ANOVA cannot be taken as conclusive evidence of specialization. With no highly resistant wheat cultivars, Fusarium isolates mostly differ in aggressiveness; however, specialization may appear as more resistant cultivars become widespread.
Functional vs. Traditional Analysis in Biomechanical Gait Data: An Alternative Statistical Approach
Seeley, Matthew K.; Francom, Devin; Reese, C. Shane; Hopkins, J. Ty
2017-01-01
In human motion studies, discrete points such as peak or average kinematic values are commonly selected to test hypotheses. The purpose of this study was to describe a functional data analysis and describe the advantages of using functional data analyses when compared with a traditional analysis of variance (ANOVA) approach. Nineteen healthy participants (age: 22 ± 2 yrs, body height: 1.7 ± 0.1 m, body mass: 73 ± 16 kg) walked under two different conditions: control and pain+effusion. Pain+effusion was induced by injection of sterile saline into the joint capsule and hypertonic saline into the infrapatellar fat pad. Sagittal-plane ankle, knee, and hip joint kinematics were recorded and compared following injections using 2×2 mixed model ANOVAs and FANOVAs. The results of ANOVAs detected a condition × time interaction for the peak ankle (F1,18 = 8.56, p = 0.01) and hip joint angle (F1,18 = 5.77, p = 0.03), but did not for the knee joint angle (F1,18 = 0.36, p = 0.56). The functional data analysis, however, found several differences at initial contact (ankle and knee joint), in the mid-stance (each joint) and at toe off (ankle). Although a traditional ANOVA is often appropriate for discrete or summary data, in biomechanical applications, the functional data analysis could be a beneficial alternative. When using the functional data analysis approach, a researcher can (1) evaluate the entire data as a function, and (2) detect the location and magnitude of differences within the evaluated function. PMID:29339984
Functional vs. Traditional Analysis in Biomechanical Gait Data: An Alternative Statistical Approach.
Park, Jihong; Seeley, Matthew K; Francom, Devin; Reese, C Shane; Hopkins, J Ty
2017-12-01
In human motion studies, discrete points such as peak or average kinematic values are commonly selected to test hypotheses. The purpose of this study was to describe a functional data analysis and describe the advantages of using functional data analyses when compared with a traditional analysis of variance (ANOVA) approach. Nineteen healthy participants (age: 22 ± 2 yrs, body height: 1.7 ± 0.1 m, body mass: 73 ± 16 kg) walked under two different conditions: control and pain+effusion. Pain+effusion was induced by injection of sterile saline into the joint capsule and hypertonic saline into the infrapatellar fat pad. Sagittal-plane ankle, knee, and hip joint kinematics were recorded and compared following injections using 2×2 mixed model ANOVAs and FANOVAs. The results of ANOVAs detected a condition × time interaction for the peak ankle (F1,18 = 8.56, p = 0.01) and hip joint angle (F1,18 = 5.77, p = 0.03), but did not for the knee joint angle (F1,18 = 0.36, p = 0.56). The functional data analysis, however, found several differences at initial contact (ankle and knee joint), in the mid-stance (each joint) and at toe off (ankle). Although a traditional ANOVA is often appropriate for discrete or summary data, in biomechanical applications, the functional data analysis could be a beneficial alternative. When using the functional data analysis approach, a researcher can (1) evaluate the entire data as a function, and (2) detect the location and magnitude of differences within the evaluated function.
Functional vs. Traditional Analysis in Biomechanical Gait Data: An Alternative Statistical Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Jihong; Seeley, Matthew K.; Francom, Devin
In human motion studies, discrete points such as peak or average kinematic values are commonly selected to test hypotheses. The purpose of this study was to describe a functional data analysis and describe the advantages of using functional data analyses when compared with a traditional analysis of variance (ANOVA) approach. Nineteen healthy participants (age: 22 ± 2 yrs, body height: 1.7 ± 0.1 m, body mass: 73 ± 16 kg) walked under two different conditions: control and pain+effusion. Pain+effusion was induced by injection of sterile saline into the joint capsule and hypertonic saline into the infrapatellar fat pad. Sagittal-plane ankle, knee, and hip joint kinematics were recorded and compared following injections using 2×2 mixed model ANOVAs and FANOVAs. The results of ANOVAs detected a condition × time interaction for the peak ankle (F1,18 = 8.56, p = 0.01) and hip joint angle (F1,18 = 5.77, p = 0.03), but did not for the knee joint angle (F1,18 = 0.36, p = 0.56). The functional data analysis, however, found several differences at initial contact (ankle and knee joint), in the mid-stance (each joint) and at toe off (ankle). Although a traditional ANOVA is often appropriate for discrete or summary data, in biomechanical applications, the functional data analysis could be a beneficial alternative. Thus when using the functional data analysis approach, a researcher can (1) evaluate the entire data as a function, and (2) detect the location and magnitude of differences within the evaluated function.
Functional vs. Traditional Analysis in Biomechanical Gait Data: An Alternative Statistical Approach
Park, Jihong; Seeley, Matthew K.; Francom, Devin; ...
2017-12-28
In human motion studies, discrete points such as peak or average kinematic values are commonly selected to test hypotheses. The purpose of this study was to describe a functional data analysis and describe the advantages of using functional data analyses when compared with a traditional analysis of variance (ANOVA) approach. Nineteen healthy participants (age: 22 ± 2 yrs, body height: 1.7 ± 0.1 m, body mass: 73 ± 16 kg) walked under two different conditions: control and pain+effusion. Pain+effusion was induced by injection of sterile saline into the joint capsule and hypertonic saline into the infrapatellar fat pad. Sagittal-plane ankle, knee, and hip joint kinematics were recorded and compared following injections using 2×2 mixed model ANOVAs and FANOVAs. The results of ANOVAs detected a condition × time interaction for the peak ankle (F1,18 = 8.56, p = 0.01) and hip joint angle (F1,18 = 5.77, p = 0.03), but did not for the knee joint angle (F1,18 = 0.36, p = 0.56). The functional data analysis, however, found several differences at initial contact (ankle and knee joint), in the mid-stance (each joint) and at toe off (ankle). Although a traditional ANOVA is often appropriate for discrete or summary data, in biomechanical applications, the functional data analysis could be a beneficial alternative. Thus when using the functional data analysis approach, a researcher can (1) evaluate the entire data as a function, and (2) detect the location and magnitude of differences within the evaluated function.
Factors Influencing Stress, Burnout, and Retention of Secondary Teachers
ERIC Educational Resources Information Center
Fisher, Molly H.
2011-01-01
This study examines the stress, burnout, satisfaction, and preventive coping skills of nearly 400 secondary teachers to determine variables contributing to these major factors influencing teachers. Analysis of Variance (ANOVA) statistics were conducted that found the burnout levels between new and experienced teachers are significantly different,…
ERIC Educational Resources Information Center
Linacre, John Michael
Various methods of estimating main effects from ordinal data are presented and contrasted. Problems discussed include: (1) at what level to accumulate ordinal data into linear measures; (2) how to maintain scaling across analyses; and (3) the inevitable confounding of within cell variance with measurement error. An example shows three methods of…
The Impact of Bullying and Victimization on Students' Relationships
ERIC Educational Resources Information Center
Demanet, Jannick; Van Houtte, Mieke
2012-01-01
…school, in Flemish secondary schools. Methods: We use data from the Flemish Educational Assessment (FlEA), consisting of 11,872 students in 85 schools. Multivariate analyses of variance (ANOVA) were performed. Results: Non-involved students felt most attached to peers, parents, teachers, and school. Bullies matched the level of parental…
Career Development in Language Education Programs
ERIC Educational Resources Information Center
Shawer, Saad Fathy; Alkahtani, Saad Ali
2013-01-01
This study assesses the influence of a two-year language program evaluation on program directors and faculty career development. The study makes use of mixed-paradigms (positivism and qualitative interpretive), mixed-strategies (survey research and qualitative evaluation), one-way analysis of variance (ANOVA) and a post-hoc test of multiple…
Time-frequency analysis of human motion during rhythmic exercises.
Omkar, S N; Vyas, Khushi; Vikranth, H N
2011-01-01
Biomechanical signals due to human movements during exercise are represented in the time-frequency domain using the Wigner Distribution Function (WDF). Analysis based on the WDF reveals instantaneous spectral and power changes during a rhythmic exercise. Investigations were carried out on 11 healthy subjects who performed 5 cycles of sun salutation, with a body-mounted Inertial Measurement Unit (IMU) as a motion sensor. The variances of the Instantaneous Frequency (IF) and Instantaneous Power (IP), used for performance analysis of the subjects, were estimated using a one-way ANOVA model. The results reveal that joint time-frequency analysis of biomechanical signals during motion facilitates a better understanding of grace and consistency during rhythmic exercise.
Tao, Fulu; Rötter, Reimund P; Palosuo, Taru; Gregorio Hernández Díaz-Ambrona, Carlos; Mínguez, M Inés; Semenov, Mikhail A; Kersebaum, Kurt Christian; Nendel, Claas; Specka, Xenia; Hoffmann, Holger; Ewert, Frank; Dambreville, Anaelle; Martre, Pierre; Rodríguez, Lucía; Ruiz-Ramos, Margarita; Gaiser, Thomas; Höhn, Jukka G; Salo, Tapio; Ferrise, Roberto; Bindi, Marco; Cammarano, Davide; Schulman, Alan H
2018-03-01
Climate change impact assessments are plagued with uncertainties from many sources, such as climate projections or the inadequacies in structure and parameters of the impact model. Previous studies tried to account for the uncertainty from one or two of these. Here, we developed a triple-ensemble probabilistic assessment using seven crop models, multiple sets of model parameters and eight contrasting climate projections together to comprehensively account for uncertainties from these three important sources. We demonstrated the approach in assessing climate change impact on barley growth and yield at Jokioinen, Finland in the Boreal climatic zone and Lleida, Spain in the Mediterranean climatic zone, for the 2050s. We further quantified and compared the contribution of crop model structure, crop model parameters and climate projections to the total variance of ensemble output using Analysis of Variance (ANOVA). Based on the triple-ensemble probabilistic assessment, the median of simulated yield change was -4% and +16%, and the probability of decreasing yield was 63% and 31% in the 2050s, at Jokioinen and Lleida, respectively, relative to 1981-2010. The contribution of crop model structure to the total variance of ensemble output was larger than that from downscaled climate projections and model parameters. The relative contribution of crop model parameters and downscaled climate projections to the total variance of ensemble output varied greatly among the seven crop models and between the two sites. The contribution of downscaled climate projections was on average larger than that of crop model parameters. This information on the uncertainty from different sources can be quite useful for model users to decide where to put the most effort when preparing or choosing models or parameters for impact analyses. We concluded that the triple-ensemble probabilistic approach that accounts for the uncertainties from multiple important sources provides more comprehensive information for quantifying uncertainties in climate change impact assessments as compared to the conventional approaches that are deterministic or only account for the uncertainties from one or two of the uncertainty sources. © 2017 John Wiley & Sons Ltd.
Sim, Ji-Young; Jang, Yeon; Kim, Woong-Chul; Kim, Hae-Young; Lee, Dong-Hwan; Kim, Ji-Hwan
2018-03-31
This study aimed to evaluate and compare the accuracy of stone, digital, and 3D printed models. A reference model was prepared with three prepared teeth for three types of restorations: single crown, 3-unit bridge, and inlay. Stone models were fabricated from conventional impressions. Digital impressions of the reference model were created using an intraoral scanner (digital models). Physical models were fabricated using a three-dimensional (3D) printer. Reference, stone, and 3D printed models were subsequently scanned using an industrial optical scanner; files were exported in a stereolithography file format. All datasets were superimposed using 3D analysis software to evaluate the accuracy of the complete arch and the trueness of the preparations. One-way and two-way analyses of variance (ANOVA) were performed to compare the accuracy among the three model groups and to evaluate the trueness among the three types of preparation. For the complete arch, significant intergroup differences in precision were observed for the three groups (p<.001). However, no significant difference in trueness was found between the stone and digital models (p>.05). 3D printed models had the poorest accuracy. A two-way ANOVA revealed significant differences in trueness among the model groups (p<.001) and types of preparation (p<.001). Digital models had smaller root mean square values of trueness of the complete arch and preparations than stone models. However, the accuracy of the complete arch and the trueness of the preparations of 3D printed models were inferior to those of the other groups. Copyright © 2018 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
Uncertainty Analysis of Historical Hurricane Data
NASA Technical Reports Server (NTRS)
Green, Lawrence L.
2007-01-01
An analysis of variance (ANOVA) study was conducted for historical hurricane data dating back to 1851 that was obtained from the U. S. Department of Commerce National Oceanic and Atmospheric Administration (NOAA). The data set was chosen because it is a large, publicly available collection of information, exhibiting great variability which has made the forecasting of future states, from current and previous states, difficult. The availability of substantial, high-fidelity validation data, however, made for an excellent uncertainty assessment study. Several factors (independent variables) were identified from the data set, which could potentially influence the track and intensity of the storms. The values of these factors, along with the values of responses of interest (dependent variables) were extracted from the data base, and provided to a commercial software package for processing via the ANOVA technique. The primary goal of the study was to document the ANOVA modeling uncertainty and predictive errors in making predictions about hurricane location and intensity 24 to 120 hours beyond known conditions, as reported by the data set. A secondary goal was to expose the ANOVA technique to a broader community within NASA. The independent factors considered to have an influence on the hurricane track included the current and starting longitudes and latitudes (measured in degrees), and current and starting maximum sustained wind speeds (measured in knots), and the storm starting date, its current duration from its first appearance, and the current year fraction of each reading, all measured in years. The year fraction and starting date were included in order to attempt to account for long duration cyclic behaviors, such as seasonal weather patterns, and years in which the sea or atmosphere were unusually warm or cold. The effect of short duration weather patterns and ocean conditions could not be examined with the current data set. The responses analyzed were the storm latitude, longitude and intensity, as recorded in the data set, 24 or 120 hours beyond the current state. Several ANOVA modeling schemes were examined. Two forms of validation were used: 1) comparison with official hurricane prediction performance metrics and 2) cases studies conducted on hurricanes from the 2005 season, which were not included within the model construction and ANOVA assessment. In general, the ANOVA technique did not perform as well as the established official prediction performance metrics published by NOAA; still, the technique did remarkably well in this demonstration with a difficult data set and could probably be made to perform better with more knowledge of hurricane development and dynamics applied to the problem. The technique provides a repeatable prediction process that eliminates the need for judgment in the forecast.
Tin, L N W; Lui, S S Y; Ho, K K Y; Hung, K S Y; Wang, Y; Yeung, H K H; Wong, T Y; Lam, S M; Chan, R C K; Cheung, E F C
2018-06-01
Evidence suggests that autism and schizophrenia share similarities in genetic, neuropsychological and behavioural aspects. Although both disorders are associated with theory of mind (ToM) impairments, few studies have directly compared ToM between autism patients and schizophrenia patients. This study aimed to investigate to what extent high-functioning autism patients and schizophrenia patients share and differ in ToM performance. Thirty high-functioning autism patients, 30 schizophrenia patients and 30 healthy individuals were recruited. Participants were matched in age, gender and estimated intelligence quotient. The verbal-based Faux Pas Task and the visual-based Yoni Task were utilised to examine first- and higher-order, affective and cognitive ToM. The task/item difficulty of the two paradigms was examined using mixed model analyses of variance (ANOVAs). Multiple ANOVAs and mixed model ANOVAs were used to examine group differences in ToM. The Faux Pas Task was more difficult than the Yoni Task. High-functioning autism patients showed more severely impaired verbal-based ToM in the Faux Pas Task, but shared similar visual-based ToM impairments in the Yoni Task with schizophrenia patients. The findings that individuals with high-functioning autism shared similar but more severe impairments in verbal ToM than individuals with schizophrenia support the autism-schizophrenia continuum. The finding that verbal-based but not visual-based ToM was more impaired in high-functioning autism patients than schizophrenia patients could be attributable to the varied task/item difficulty between the two paradigms.
NASA Astrophysics Data System (ADS)
Wibowo, Wahyu; Sinu, Elisabeth B.; Setiawan
2017-03-01
The condition of East Nusa Tenggara Province, which has recently developed new districts, can cause the information or data collected to become unbalanced. One of the consequences of ignoring this data incompleteness is that the estimators become invalid. Therefore, the analysis of unbalanced panel data is very crucial. The aim of this paper is to estimate Gross Regional Domestic Product in East Nusa Tenggara Province using an unbalanced panel data regression model with a two-way error component, assuming a random effects model (REM). In this research, we employ Feasible Generalized Least Squares (FGLS) as the regression coefficient estimation method. Since the variance components of the model are unknown, the ANOVA method is used to obtain them in order to construct the variance-covariance matrix. The data used in this research are secondary data taken from the Central Bureau of Statistics of East Nusa Tenggara Province for 21 districts over the period 2004-2013. The predictors are the labor force aged over 15 years (X1), the electrification ratio (X2), and local revenues (X3), while Gross Regional Domestic Product at constant 2000 prices is the response (Y). The FGLS estimation results show that the value of R2 is 80.539% and all the chosen predictors significantly affect (α = 5%) Gross Regional Domestic Product in all districts of East Nusa Tenggara Province. These variables, the labor force aged over 15 years (X1), the electrification ratio (X2), and local revenues (X3), have elasticities of 0.22986, 0.090476, and 0.14749, respectively.
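The FGLS step described above reduces to generalized least squares with an error covariance assembled from the ANOVA variance-component estimates; the sketch below shows that step for a small balanced one-way random-effects example (the paper's two-way, unbalanced case differs only in how Omega is built), with invented data.

import numpy as np

def fgls(X, y, Omega):
    """Feasible GLS: beta_hat = (X' Omega^{-1} X)^{-1} X' Omega^{-1} y,
    where Omega is the error covariance assembled from ANOVA-type estimates
    of the variance components."""
    Oi = np.linalg.inv(Omega)
    XtOi = X.T @ Oi
    return np.linalg.solve(XtOi @ X, XtOi @ y)

# Toy balanced example with a random-effects-style covariance
rng = np.random.default_rng(6)
n_units, n_periods, k = 5, 4, 3
N = n_units * n_periods
X = np.column_stack([np.ones(N), rng.random((N, k - 1))])
beta_true = np.array([1.0, 0.5, -0.3])

sigma_u2, sigma_e2 = 0.4, 0.1            # unit-effect and idiosyncratic variances (assumed estimated)
unit = np.repeat(np.arange(n_units), n_periods)
Omega = sigma_e2 * np.eye(N) + sigma_u2 * (unit[:, None] == unit[None, :])

u = rng.multivariate_normal(np.zeros(N), Omega)
y = X @ beta_true + u
print(fgls(X, y, Omega))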
Matuschek, Hannes; Kliegl, Reinhold; Holschneider, Matthias
2015-01-01
The Smoothing Spline ANOVA (SS-ANOVA) requires a specialized construction of basis and penalty terms in order to incorporate prior knowledge about the data to be fitted. Typically, one resorts to the most general approach using tensor product splines. This implies severe constraints on the correlation structure, i.e. the assumption of isotropy of smoothness can not be incorporated in general. This may increase the variance of the spline fit, especially if only a relatively small set of observations are given. In this article, we propose an alternative method that allows to incorporate prior knowledge without the need to construct specialized bases and penalties, allowing the researcher to choose the spline basis and penalty according to the prior knowledge of the observations rather than choosing them according to the analysis to be done. The two approaches are compared with an artificial example and with analyses of fixation durations during reading. PMID:25816246
Racial/ethnic variation in mental health correlates of substance use among college students.
Sumstine, Stephanie; Cruz, Sheena; Schroeder, Cassandra; Takeda, Summer; Bavarian, Niloofar
2018-01-01
This study investigated mental health indicators, substance use, and their relationships, by race/ethnicity. A probability sample of 1,053 students at two California universities self-reported their frequency of substance use and rated their experience with indicators of mental health. One-way analysis of variance (ANOVA), chi-square tests, and multivariate censored regression models were estimated to examine which indicators of mental health were associated with each substance use form by race/ethnicity. Results from the one-way ANOVA and chi-square tests showed differences in substance use prevalence and mental health by race/ethnicity. For example, students who identified as White demonstrate a higher prevalence for every form of substance use in comparison to the Asian, Latino, and "All other" categories. Results from the regression showed, among Whites, inattention was associated with prescription stimulant misuse, and psychological distress was associated with marijuana use. Among Latinos, inattention was associated with cocaine and prescription stimulant use. Among Asians, psychological distress was associated with tobacco use and the misuse of prescription painkillers. Findings highlight the need to ensure subpopulations receive needed services.
New Methods for Estimating Seasonal Potential Climate Predictability
NASA Astrophysics Data System (ADS)
Feng, Xia
This study develops two new statistical approaches to assess the seasonal potential predictability of observed climate variables. One is the univariate analysis of covariance (ANOCOVA) model, a combination of an autoregressive (AR) model and analysis of variance (ANOVA). It has the advantage of taking into account the uncertainty of the estimated parameter due to sampling errors in the statistical test, which is often neglected in AR based methods, and of accounting for daily autocorrelation, which is not considered in traditional ANOVA. In the ANOCOVA model, the seasonal signals arising from external forcing are tested for being identical or not, to assess whether any interannual variability that may exist is potentially predictable. The bootstrap is an attractive alternative method that requires no hypothesis model and is available no matter how mathematically complicated the parameter estimator. This method builds up the empirical distribution of the interannual variance from resamplings drawn with replacement from the given sample, in which the only predictability in seasonal means arises from the weather noise. These two methods are applied to temperature and water cycle components including precipitation and evaporation, to measure the extent to which the interannual variance of seasonal means exceeds the unpredictable weather noise, compared with the previous methods, including Leith-Shukla-Gutzler (LSG), Madden, and Katz. The potential predictability of temperature from the ANOCOVA model, bootstrap, LSG and Madden exhibits a pronounced tropical-extratropical contrast with much larger predictability in the tropics, dominated by El Nino/Southern Oscillation (ENSO), than in higher latitudes where strong internal variability lowers predictability. Bootstrap tends to display the highest predictability of the four methods, ANOCOVA lies in the middle, while LSG and Madden appear to generate lower predictability. Seasonal precipitation from ANOCOVA, bootstrap, and Katz, resembling that for temperature, is more predictable over the tropical regions, and less predictable in the extratropics. Bootstrap and ANOCOVA are in good agreement with each other, both methods generating larger predictability than Katz. The seasonal predictability of evaporation over land bears considerable similarity with that of temperature using ANOCOVA, bootstrap, LSG and Madden. The remote SST forcing and soil moisture reveal substantial seasonality in their relations with the potentially predictable seasonal signals. For selected regions, either SST or soil moisture or both show significant relationships with predictable signals, hence providing indirect insight into the slowly varying boundary processes involved, to enable useful seasonal climate prediction. A multivariate analysis of covariance (MANOCOVA) model is established to identify distinctive predictable patterns, which are uncorrelated with each other. Generally speaking, the seasonal predictability from the multivariate model is consistent with that from ANOCOVA. Besides unveiling the spatial variability of predictability, the MANOCOVA model also reveals the temporal variability of each predictable pattern, which could be linked to periodic oscillations.
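A rough sketch of the bootstrap idea described above: compare the observed interannual variance of seasonal means against a null distribution in which seasonal means are formed from daily values resampled with replacement, so that only weather noise remains. The synthetic data and the i.i.d. resampling (which ignores daily autocorrelation; a block bootstrap would preserve it) are simplifying assumptions.

import numpy as np

def potential_predictability_pvalue(daily, n_boot=2000, seed=0):
    """Bootstrap test of whether the interannual variance of seasonal means
    exceeds what unpredictable daily weather noise alone would produce.

    daily : array of shape (n_years, n_days) of daily anomalies for one season
    """
    rng = np.random.default_rng(seed)
    n_years, n_days = daily.shape
    obs_var = daily.mean(axis=1).var(ddof=1)       # interannual variance of seasonal means

    pooled = daily.ravel()                         # null: all days exchangeable (noise only)
    null_var = np.empty(n_boot)
    for b in range(n_boot):
        resampled = rng.choice(pooled, size=(n_years, n_days), replace=True)
        null_var[b] = resampled.mean(axis=1).var(ddof=1)
    return obs_var, np.mean(null_var >= obs_var)   # exceedance p-value

# Synthetic example: a weak year-to-year signal on top of daily noise
rng = np.random.default_rng(7)
signal = rng.normal(0.0, 0.5, (30, 1))             # predictable seasonal signal
daily = signal + rng.normal(0.0, 2.0, (30, 90))    # plus daily weather noise
print(potential_predictability_pvalue(daily))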
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
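The functional ANOVA (HDMR) idea at the heart of the adaptive algorithm can be illustrated with a small hypothetical sketch: a high-dimensional function is approximated by a constant plus a sum of one-dimensional component functions anchored at a reference point (a first-order cut-HDMR). The test function and anchor point below are invented for illustration; the actual PCKF algorithm further expands each low-dimensional component in polynomial chaos, which is omitted here.

```python
import numpy as np

def f(x):
    """Hypothetical 'expensive' model with 10 random inputs and a weak interaction."""
    return np.exp(0.1 * x[0]) + np.sin(x[1]) + 0.05 * np.sum(x[2:] ** 2) + 0.05 * x[0] * x[1]

d = 10
anchor = np.zeros(d)                 # reference (anchor) point of the cut-HDMR
f0 = f(anchor)                       # zeroth-order term

def component(i, xi):
    """First-order component f_i(x_i): vary only dimension i, subtract f0."""
    x = anchor.copy()
    x[i] = xi
    return f(x) - f0

def hdmr1(x):
    """First-order cut-HDMR approximation of f at point x."""
    return f0 + sum(component(i, x[i]) for i in range(d))

x_test = np.full(d, 0.3)
print("true f          :", f(x_test))
print("1st-order HDMR  :", hdmr1(x_test))   # close, since interactions are weak
```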
An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
LI, Weixuan; Lin, Guang; Zhang, Dongxiao
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. Basis selection is particularly important for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users' experience. Also, for sequential data assimilation problems, the bases kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
NASA Astrophysics Data System (ADS)
Idris, M. A.; Jami, M. S.; Hammed, A. M.
2017-05-01
This paper presents a statistical optimization study of the disinfection inactivation parameters of defatted Moringa oleifera seed extract against Pseudomonas aeruginosa cells. A three-level factorial design was used to estimate the optimum operating range, and the kinetics of the inactivation process were also studied. The kinetic analysis compared the Chick-Watson, Collins-Selleck, and Hom disinfection models. The analysis of variance (ANOVA) of the statistical optimization revealed that only contact time was significant. The optimum disinfection conditions for the seed extract were 125 mg/L, 30 minutes of contact time, and 120 rpm agitation. At the optimum dose, the inactivation kinetics followed the Collins-Selleck model with a coefficient of determination (R2) of 0.6320. This study is the first of its kind to determine the inactivation kinetics of Pseudomonas aeruginosa using the defatted seed extract.
ERIC Educational Resources Information Center
Posey-Goodwin, Patricia Ann
2013-01-01
The purpose of this study was to explore differences in perceptions of mentoring activities from four generations of registered nurses in Florida, using the Alleman Mentoring Activities Questionnaire ® (AMAQ ®). Statistical procedures of analysis of variance (ANOVA) were employed to explore differences among 65 registered nurses in Florida from…
ERIC Educational Resources Information Center
Yung-Kuan, Chan; Hsieh, Ming-Yuan; Lee, Chin-Feng; Huang, Chih-Cheng; Ho, Li-Chih
2017-01-01
Under the hyper-dynamic education situation, this research, in order to comprehensively explore the interplays between Teacher Competence Demands (TCD) and Learning Organization Requests (LOR), cross-employs the data refinement methods of Descriptive Statistics (DS), Analysis of Variance (ANOVA), and Principal Components Analysis (PCA)…
The Effect of Flow Frequency on Internet Addiction to Different Internet Usage Activities
ERIC Educational Resources Information Center
Yang, Hui-Ling; Wu, Wei-Pang
2017-01-01
This study investigated the online flow frequency among college students in regard to different internet activities, and analyzed the effect of flow frequency on internet addiction. This study surveyed 525 undergraduate internet users in Taiwan, recruited through convenience sampling. In this paper, analysis of variance (ANOVA) was…
The Influence of Ability Grouping on Math Achievement in a Rural Middle School
ERIC Educational Resources Information Center
Pritchard, Robert R.
2012-01-01
The researcher examined the academic performance of low-tracked students (n = 156) using standardized math test scores to determine whether there is a statistically significant difference in achievement depending on academic environment, tracked or nontracked. An analysis of variance (ANOVA) was calculated, using a paired samples t-test for a…
Music Teachers' Computer Anxiety and Self-Efficacy
ERIC Educational Resources Information Center
Kiliç, Deniz Beste Çevik
2015-01-01
This study aims to examine the computer anxiety and self-efficacy of music teachers in terms of different variables. The research was conducted with 124 music teachers. A personal information form and the Computer Anxiety and Self-Efficacy scales were administered to the participants. Data were analyzed with one-way analysis of variance (ANOVA) and…
Marini, Federico; de Beer, Dalene; Walters, Nico A; de Villiers, André; Joubert, Elizabeth; Walczak, Beata
2017-03-17
An ultimate goal of investigations of rooibos plant material subjected to different stages of fermentation is to identify the chemical changes taking place in the phenolic composition, using an untargeted approach and chromatographic fingerprints. Realization of this goal requires, among others, identification of the main components of the plant material involved in chemical reactions during the fermentation process. Quantitative chromatographic data for compounds in extracts of green, semi-fermented, and fermented rooibos form the basis of a preliminary study following a targeted approach. The aim is to estimate whether the treatment has a significant effect based on all quantified compounds and to identify the compounds that contribute significantly to it. Analysis of variance is performed using modern multivariate methods such as ANOVA-Simultaneous Component Analysis, ANOVA-Target Projection, and regularized MANOVA. This study is the first in which all three approaches are compared and evaluated. For the data studied, all three methods reveal the same significance of the fermentation effect on the extract compositions, but they lead to different interpretations of it. Copyright © 2017 Elsevier B.V. All rights reserved.
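A minimal sketch of the ANOVA-Simultaneous Component Analysis idea referred to above, on made-up data: the column-centered data matrix is split into a factor-effect matrix of group-mean profiles plus residuals, the effect matrix is summarized by PCA (via SVD), and a simple permutation test gauges the significance of the fermentation effect. The group labels, matrix sizes, and permutation scheme are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 30 samples x 12 quantified compounds, 3 fermentation levels.
groups = np.repeat([0, 1, 2], 10)
X = rng.normal(size=(30, 12))
X[groups == 2, :3] += 1.0                      # fabricated fermentation effect

Xc = X - X.mean(axis=0)                        # column-centre the data

# Effect matrix: each row replaced by its group-mean profile; the rest is residual.
M = np.vstack([Xc[groups == g].mean(axis=0) for g in groups])
E = Xc - M

# PCA of the effect matrix via SVD: scores/loadings of the fermentation effect.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
scores, loadings = U * s, Vt.T
effect_ss = np.sum(M ** 2)

# Simple permutation test of the effect size (sum of squares of the effect matrix).
n_perm, count = 1000, 0
for _ in range(n_perm):
    gp = rng.permutation(groups)
    Mp = np.vstack([Xc[gp == g].mean(axis=0) for g in gp])
    count += np.sum(Mp ** 2) >= effect_ss
print(f"effect SS = {effect_ss:.1f}, permutation p = {count / n_perm:.3f}")
```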
Analytic methods for questions pertaining to a randomized pretest, posttest, follow-up design.
Rausch, Joseph R; Maxwell, Scott E; Kelley, Ken
2003-09-01
Delineates 5 questions regarding group differences that are likely to be of interest to researchers within the framework of a randomized pretest, posttest, follow-up (PPF) design. These 5 questions are examined from a methodological perspective by comparing and discussing analysis of variance (ANOVA) and analysis of covariance (ANCOVA) methods and briefly discussing hierarchical linear modeling (HLM) for these questions. This article demonstrates that the pretest should be utilized as a covariate in the model rather than as a level of the time factor or as part of the dependent variable within the analysis of group differences. It is also demonstrated that how the posttest and the follow-up are utilized in the analysis of group differences is determined by the specific question asked by the researcher.
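To make the recommendation concrete, a hedged sketch in Python shows the pretest entering the model as a covariate rather than as a level of a time factor. The data frame, column names, and group coding are invented for the example and are not drawn from the article.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n = 120
group = rng.integers(0, 2, n)                       # 0 = control, 1 = treatment
pre = rng.normal(50, 10, n)                         # pretest score
post = 0.6 * pre + 5 * group + rng.normal(0, 8, n)  # posttest score

df = pd.DataFrame({"group": group, "pre": pre, "post": post})

# ANCOVA: posttest as outcome, pretest as covariate, group as factor.
model = smf.ols("post ~ C(group) + pre", data=df).fit()
print(anova_lm(model, typ=2))
print(model.params["C(group)[T.1]"])   # adjusted treatment effect at posttest
```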
Arabi, Simin; Sohrabi, Mahmoud Reza
2013-01-01
In this study, NZVI particles were prepared and studied for the removal of vat green 1 dye from aqueous solution. A four-factor central composite design (CCD) combined with response surface modeling (RSM) was employed to evaluate the combined effects of the variables and to maximize dye removal by the prepared NZVI, based on 30 different experimental data points obtained in a batch study. Four independent variables, viz. NZVI dose (0.1-0.9 g/L), pH (1.5-9.5), contact time (20-100 s), and initial dye concentration (10-50 mg/L), were transformed to coded values, and a quadratic model was built to predict the responses. The significance of the independent variables and their interactions was tested by analysis of variance (ANOVA). The adequacy of the model was tested by the correlation between experimental and predicted values of the response and by enumeration of prediction errors. The ANOVA results indicated that the proposed model can be used to navigate the design space. Optimization of the variables for maximum adsorption of dye by NZVI particles was performed using the quadratic model. The predicted maximum adsorption efficiency (96.97%) under the optimum conditions of the process variables (NZVI dose 0.5 g/L, pH 4, contact time 60 s, and initial dye concentration 30 mg/L) was very close to the experimental value (96.16%) determined in the batch experiment. In the optimization, the R2 and R2adj correlation coefficients for the model were evaluated as 0.95 and 0.90, respectively.
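A hedged sketch of fitting such a second-order (quadratic) response-surface model and inspecting term significance in Python; the coded factor names echo the abstract, but the simulated responses and coefficients are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 30
df = pd.DataFrame({
    "dose": rng.uniform(-1, 1, n),   # coded NZVI dose
    "pH": rng.uniform(-1, 1, n),     # coded pH
    "time": rng.uniform(-1, 1, n),   # coded contact time
    "conc": rng.uniform(-1, 1, n),   # coded initial dye concentration
})
# Fabricated removal efficiency with a curved optimum.
df["removal"] = 90 + 4 * df.dose - 3 * df.dose**2 - 2 * df.pH**2 + rng.normal(0, 1, n)

# Full quadratic (second-order) response-surface model: main effects,
# two-way interactions, and squared terms.
formula = ("removal ~ (dose + pH + time + conc)**2 "
           "+ I(dose**2) + I(pH**2) + I(time**2) + I(conc**2)")
fit = smf.ols(formula, data=df).fit()
print(fit.summary().tables[1])       # term-by-term significance
print("R2 =", round(fit.rsquared, 3), "R2adj =", round(fit.rsquared_adj, 3))
```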
Improving Your Data Transformations: Applying the Box-Cox Transformation
ERIC Educational Resources Information Center
Osborne, Jason W.
2010-01-01
Many of us in the social sciences deal with data that do not conform to assumptions of normality and/or homoscedasticity/homogeneity of variance. Some research has shown that parametric tests (e.g., multiple regression, ANOVA) can be robust to modest violations of these assumptions. Yet the reality is that almost all analyses (even nonparametric…
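As a quick illustration of the transformation discussed in this abstract, a sketch using SciPy's boxcox on skewed synthetic data; the generated sample and the before/after skewness check are assumptions for demonstration, not material from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y = rng.lognormal(mean=1.0, sigma=0.8, size=200)   # positively skewed data

# Box-Cox estimates lambda by maximum likelihood; y must be strictly positive.
y_bc, lam = stats.boxcox(y)
print(f"estimated lambda = {lam:.2f}")
print("skewness before :", round(stats.skew(y), 2))
print("skewness after  :", round(stats.skew(y_bc), 2))
```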
ERIC Educational Resources Information Center
Crits-Christoph, Paul; Mintz, Jim
1991-01-01
Presents reasons therapist should be included as random design factor in nested analysis of (co)variance (AN[C]OVA) design used in psychotherapy research. Reviews studies which indicate majority of investigators ignore issue of effects from incorrect specification of ANOVA design. Presents reanalysis of data from 10 psychotherapy outcome studies…
Estimation of the KR20 Reliability Coefficient When Data Are Incomplete.
ERIC Educational Resources Information Center
Huynh, Huynh
Three techniques for estimating Kuder Richardson reliability (KR20) coefficients for incomplete data are contrasted. The methods are: (1) Henderson's Method 1 (analysis of variance, or ANOVA); (2) Henderson's Method 3 (FITCO); and (3) Koch's method of symmetric sums (SYSUM). A Monte Carlo simulation was used to assess the precision of the three…
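For context, KR-20 on complete dichotomous data reduces to a short formula, computed below on a fabricated item-response matrix; the incomplete-data estimators compared in the report (ANOVA, FITCO, SYSUM) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
# Fabricated 0/1 responses: 200 examinees x 20 items.
X = (rng.random((200, 20)) < rng.uniform(0.3, 0.9, 20)).astype(int)

k = X.shape[1]
p = X.mean(axis=0)                       # item proportion-correct values
q = 1 - p
total_var = X.sum(axis=1).var(ddof=1)    # variance of total scores

# KR-20 = (k / (k - 1)) * (1 - sum(p*q) / total score variance)
kr20 = (k / (k - 1)) * (1 - np.sum(p * q) / total_var)
print(f"KR-20 = {kr20:.3f}")
```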
Wives Domain-Specific "Marriage Work" with Friends and Spouses: Links to Marital Quality
ERIC Educational Resources Information Center
Proulx, Christine M.; Helms, Heather M.; Payne, C. Chris
2004-01-01
This study examined the friendship experiences of 52 wives and mothers, with particular attention given to wives' marriage work (discussions about concerns and problems in the marriage) in 10 domains with friends and spouses. A series of within-subjects repeated measures analyses of variance (ANOVAs) indicated that in all but two domains, wives…
ERIC Educational Resources Information Center
Thomas, L. M.; Thomas, Suzanne G.
This obtrusive post-hoc quasi-experimental study investigated Scholastic Assessment Test (SAT) scores of 111 high school students in grades 10 through 12. Fifty-three students were enrolled in at least one Advanced Placement (AP) course at the time of the study. General factorial analysis of variance (ANOVA) tested for significant differences…
Stability of Playfulness across Environmental Settings: A Pilot Study
ERIC Educational Resources Information Center
Rigby, Patricia; Gaik, Sandy
2007-01-01
The Test of Playfulness (ToP) was used in this pilot study to examine the stability of playfulness of 16 children with cerebral palsy (CP), aged 4-8 years, across three environmental settings: home, community, and school. Each videotaped play segment was scored using the ToP. The ANOVA statistic demonstrated a significant variance (p less than…
de Farias Jucá, Adriana; Faveri, Juliana Cantos; Melo Filho, Geraldo Magalhães; de Lisboa Ribeiro Filho, Antônio; Azevedo, Hymerson Costa; Muniz, Evandro Neves; Pinto, Luís Fernando Batista
2014-10-01
This study aimed to evaluate the effects of sex, the number of lambs per birth, and family on production traits in the Santa Ines breed of sheep by estimating the least squares means and coefficients of variation for those traits. A total of 484 lambs were evaluated for the following traits: weight at birth, at weaning, and at 240 days of age; weight gain during the pre-weaning and post-weaning periods; height, width, and length of different body regions; and rib eye area and fat thickness between the 12th and 13th ribs. We observed coefficients of variation higher than 10 % for several traits. Generally, males were larger than females (P < 0.05), while lambs from single and double births were larger than lambs from triple births (P < 0.05). The family effect was significant (P < 0.05) for most traits and explained the highest percentage of residual variance. The results showed good development of Santa Ines sheep, especially during the pre-weaning period, but not in the post-weaning period. Our study also showed that there are effects of sex, birth type, and family, which must be included in any statistical model for the estimation of least squares means and residual variance in ANOVA.
Plasma iron levels appraised 15 days after spinal cord injury in a limb movement animal model.
Reis, F M; Esteves, A M; Tufik, S; de Mello, M T
2011-03-01
Experimental, controlled trial. The purpose of this study was to evaluate plasma iron and transferrin levels in a limb movement animal model with spinal cord injury (SCI). Universidade Federal de São Paulo, Departamento de Psicobiologia. In all, 72 male Wistar rats aged 90 days were divided into four groups: (1) acute SCI (1 day, SCI1), (2) 3 days post-SCI (SCI3), (3) 7 days post-SCI (SCI7) and (4) 15 days post-SCI (SCI15). Each of these groups had corresponding control (CTRL) and SHAM groups. Plasma iron and transferrin levels of the different groups were analyzed using a one-way analysis of variance (ANOVA) followed by Tukey's test. We found a significant reduction in iron plasma levels after SCI compared with the CTRL group: SCI1 (CTRL: 175±10.58 μg dl(-1); SCI: 108.28±11.7 μg dl(-1)), SCI3 (CTRL: 195.5±11.00 μg dl(-1); SCI: 127.88±12.63 μg dl(-1)), SCI7 (CTRL: 186±2.97 μg dl(-1); SCI: 89.2±15.39 μg dl(-1)) and SCI15 (CTRL: 163±5.48 μg dl(-1); SCI: 124.44±10.30 μg dl(-1)) (P<0.05; ANOVA). The SHAM1 group demonstrated a reduction in iron plasma after acute SCI (CTRL: 175±10.58 μg dl(-1); SHAM: 114.60±7.81 μg dl(-1)) (P<0.05; ANOVA). Reduced iron metabolism after SCI may be one of the mechanisms involved in the pathogenesis of sleep-related movement disorders.
Schistad, Elina Iordanova; Espeland, Ansgar; Rygh, Lars Jørgen; Røe, Cecilie; Gjerstad, Johannes
2014-09-01
To examine whether Modic changes influence pain during a 1-year follow-up in patients with lumbar radicular pain. A total of 243 patients with lumbar radicular pain due to disc herniation were recruited from two hospitals in Norway and followed up at 6 weeks, 6 months, and 12 months. On baseline lumbar magnetic resonance images, two observers independently evaluated Modic changes (types I-III; craniocaudal size 0-3). Outcomes were sensory pain (McGill Pain Questionnaire), back and leg pain (visual analogue scale, VAS). Association between Modic type and outcomes was explored with a mixed model and then by two-way analysis of variance (ANOVA) at each time point with Modic and treatment groups (surgical, n = 126; nonsurgical, n = 117) as fixed factors, adjusted for disc degeneration, age, sex, smoking, and duration of radicular pain. Modic size was also analyzed using ANOVA. Pain scores had decreased significantly at 1-year follow-up. Modic type was significantly related to McGill sensory scores (mixed model: p = 0.014-0.026; ANOVA: p = 0.007 at 6 weeks), but not to VAS back pain or VAS leg pain scores. At 6 weeks, the mean McGill sensory score was higher in Modic I than in Modic II-III patients (p = 0.003) and in patients without Modic changes (p = 0.018). Modic size L1-S1 was not associated with pain outcomes. Patients with lumbar radicular pain have a substantial pain reduction during 1-year follow-up, but Modic type I changes may imply a slower initial decrease in sensory pain.
Useche, Sergio; Montoro, Luis; Cendales, Boris; Gómez, Viviola
2018-08-01
This Data in Brief (DiB) article examines the association between the Job Demand-Control (JDC) model of stress and traffic safety outcomes (accidents and sanctions) in public transport drivers (n = 780). The data were collected using a structured self-administrable questionnaire composed of measurements of work stress (Job Content Questionnaire) and demographics (professional driving experience, hours and days working/driving per week). The data contain 4 parts: descriptive statistics, bivariate correlations between the study variables, analysis of variance (ANOVA), and post-hoc comparisons between drivers classified into the different quadrants of the JDC model. For further information, it is convenient to read the full article entitled "Working conditions, job strain and traffic safety among three groups of public transport drivers", published in Safety and Health at Work (SHAW) [1] (Useche et al., 2018).
Hummig, Wagner; Kopruszinski, Caroline Machado; Chichorro, Juliana Geremias
2014-01-01
To assess the analgesic effect of pregabalin in orofacial models of acute inflammatory pain and of persistent pain associated with nerve injury and cancer, and so determine its effectiveness in controlling orofacial pains having different underlying mechanisms. Orofacial capsaicin and formalin tests were employed in male Wistar rats to assess the influence of pregabalin (or vehicle) pretreatment in acute pain models, and the results from these experiments were analyzed by one-way analysis of variance (ANOVA) followed by Newman Keuls post-hoc test. Pregabalin (or vehicle) treatment was also tested on the facial heat hyperalgesia that was evaluated in rats receiving injection of the inflammatory irritant carrageenan into the upper lip, as well as after constriction of the infraorbital nerve (a model of trigeminal neuropathic pain), or after inoculation of tumor cells into the facial vibrissal pad; two-way repeated measures ANOVA followed by Newman-Keuls post-hoc test was used to analyze data from these experiments. Facial grooming induced by capsaicin was abolished by pretreatment with pregabalin at 10 and 30 mg/kg. However, pregabalin failed to modify the first phase of the formalin response, but reduced the second phase at both doses (10 and 30 mg/kg). In addition, treatment of rats with pregabalin reduced the heat hyperalgesia induced by carrageenan, as well as by nerve injury and facial cancer. Pregabalin produced a marked antinociceptive effect in rat models of facial inflammatory pain as well as in facial neuropathic and cancer pain models, suggesting that it may represent an important agent for the clinical control of orofacial pain.
Hypothesis exploration with visualization of variance
2014-01-01
Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes, exploring whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was to move from traditional categorical approaches to psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics, the wide-scale, systematic study of phenotypes, to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles, that is, patterns of values across phenotypes that characterize groups. Visualization enables screening and refinement of hypotheses about the variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666
Biostatistics Series Module 10: Brief Overview of Multivariate Methods.
Hazra, Avijit; Gogtay, Nithya
2017-01-01
Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, which make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count-type data and can be used to analyze cross-tabulations where more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA), in which an additional independent variable of interest, the covariate, is brought into the analysis. It tries to examine whether a difference persists after "controlling" for the effect of the covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied to psychometrics, social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract, from a larger number of metric variables, a smaller number of composite factors or components that are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. The calculation-intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with the wider availability and increasing sophistication of statistical software, and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.
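As a small illustration of one of the dependence techniques described above, a hedged sketch of a one-factor MANOVA in statsmodels with two numerical outcomes; the data frame, group labels, and effect sizes are invented.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(6)
n = 90
group = np.repeat(["A", "B", "C"], n // 3)
y1 = rng.normal(0, 1, n) + (group == "C") * 0.8    # two related numerical outcomes
y2 = 0.5 * y1 + rng.normal(0, 1, n)

df = pd.DataFrame({"group": group, "y1": y1, "y2": y2})

# MANOVA: both outcomes modelled jointly against the grouping factor.
fit = MANOVA.from_formula("y1 + y2 ~ group", data=df)
print(fit.mv_test())                                # Wilks' lambda, Pillai's trace, etc.
```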
Wygant, Dustin B; Arbisi, Paul A; Bianchini, Kevin J; Umlauf, Robert L
2017-04-01
Waddell et al. identified a set of eight non-organic signs in 1980. There has been controversy about their meaning, particularly with respect to their use as validity indicators. The current study examined the Waddell signs in relation to measures of somatic amplification or over-reporting in a sample of outpatient chronic pain patients. We examined the degree to which these signs were associated with measures of over-reporting. This study examined scores on the Waddell signs in relation to over-reporting indicators in an outpatient chronic pain sample. We examined 230 chronic pain patients treated at a multidisciplinary pain clinic. The majority of these patients presented with primary back or spinal injuries. The outcome measures used in the study were Waddell signs, Modified Somatic Perception Questionnaire, Pain Disability Index, and the Minnesota Multiphasic Personality Inventory-2 Restructured Form. We examined Waddell signs using multivariate analysis of variance (MANOVA) and analysis of variance (ANOVA), receiver operating characteristic analysis, classification accuracy, and relative risk ratios. Multivariate analysis of variance and ANOVA showed a significant association between Waddell signs and somatic amplification. Classification analyses showed increased odds of somatic amplification at a Waddell score of 2 or 3. Our results found significant evidence of an association between Waddell signs and somatic over-reporting. Elevated scores on the Waddell signs (particularly scores higher than 2 and 3) were associated with increased odds of exhibiting somatic over-reporting. Copyright © 2016 Elsevier Inc. All rights reserved.
Web-based tools for modelling and analysis of multivariate data: California ozone pollution activity
Dinov, Ivo D.; Christou, Nicolas
2014-01-01
This article presents a hands-on web-based activity motivated by the relation between human health and ozone pollution in California. This case study is based on multivariate data collected monthly at 20 locations in California between 1980 and 2006. Several strategies and tools for data interrogation and exploratory data analysis, model fitting and statistical inference on these data are presented. All components of this case study (data, tools, activity) are freely available online at: http://wiki.stat.ucla.edu/socr/index.php/SOCR_MotionCharts_CAOzoneData. Several types of exploratory (motion charts, box-and-whisker plots, spider charts) and quantitative (inference, regression, analysis of variance (ANOVA)) data analyses tools are demonstrated. Two specific human health related questions (temporal and geographic effects of ozone pollution) are discussed as motivational challenges. PMID:24465054
Dinov, Ivo D; Christou, Nicolas
2011-09-01
This article presents a hands-on web-based activity motivated by the relation between human health and ozone pollution in California. This case study is based on multivariate data collected monthly at 20 locations in California between 1980 and 2006. Several strategies and tools for data interrogation and exploratory data analysis, model fitting and statistical inference on these data are presented. All components of this case study (data, tools, activity) are freely available online at: http://wiki.stat.ucla.edu/socr/index.php/SOCR_MotionCharts_CAOzoneData. Several types of exploratory (motion charts, box-and-whisker plots, spider charts) and quantitative (inference, regression, analysis of variance (ANOVA)) data analyses tools are demonstrated. Two specific human health related questions (temporal and geographic effects of ozone pollution) are discussed as motivational challenges.
Palumbo, Biagio; Del Re, Francesco; Martorelli, Massimo; Lanzotti, Antonio; Corrado, Pasquale
2017-02-08
A statistical approach for the characterization of Additive Manufacturing (AM) processes is presented in this paper. Design of Experiments (DOE) and ANalysis of VAriance (ANOVA), both based on Nested Effects Modeling (NEM) technique, are adopted to assess the effect of different laser exposure strategies on physical and mechanical properties of AlSi10Mg parts produced by Direct Metal Laser Sintering (DMLS). Due to the wide industrial interest in AM technologies in many different fields, it is extremely important to ensure high parts performances and productivity. For this aim, the present paper focuses on the evaluation of tensile properties of specimens built with different laser exposure strategies. Two optimal laser parameters settings, in terms of both process quality (part performances) and productivity (part build rate), are identified.
Palumbo, Biagio; Del Re, Francesco; Martorelli, Massimo; Lanzotti, Antonio; Corrado, Pasquale
2017-01-01
A statistical approach for the characterization of Additive Manufacturing (AM) processes is presented in this paper. Design of Experiments (DOE) and ANalysis of VAriance (ANOVA), both based on Nested Effects Modeling (NEM) technique, are adopted to assess the effect of different laser exposure strategies on physical and mechanical properties of AlSi10Mg parts produced by Direct Metal Laser Sintering (DMLS). Due to the wide industrial interest in AM technologies in many different fields, it is extremely important to ensure high parts performances and productivity. For this aim, the present paper focuses on the evaluation of tensile properties of specimens built with different laser exposure strategies. Two optimal laser parameters settings, in terms of both process quality (part performances) and productivity (part build rate), are identified. PMID:28772505
Eta- and Partial Eta-Squared in L2 Research: A Cautionary Review and Guide to More Appropriate Usage
ERIC Educational Resources Information Center
Norouzian, Reza; Plonsky, Luke
2018-01-01
Eta-squared (η[superscript 2]) and partial eta-squared (η[subscript p][superscript 2]) are effect sizes that express the amount of variance accounted for by one or more independent variables. These indices are generally used in conjunction with ANOVA, the most commonly used statistical test in second language (L2) research (Plonsky, 2013).…
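A brief hedged sketch of how the two effect sizes are obtained from an ANOVA table in Python. The two-factor data set is fabricated, and the formulas used (SS_effect/SS_total for eta-squared, SS_effect/(SS_effect + SS_error) for partial eta-squared) are the standard definitions rather than anything specific to the article.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)
n = 120
df = pd.DataFrame({
    "task": rng.choice(["recall", "cloze"], n),
    "group": rng.choice(["L1", "L2"], n),
})
df["score"] = (rng.normal(70, 8, n)
               + 4 * (df.group == "L2") + 2 * (df.task == "cloze"))

aov = anova_lm(smf.ols("score ~ C(task) * C(group)", data=df).fit(), typ=2)

ss_error = aov.loc["Residual", "sum_sq"]
ss_total = aov["sum_sq"].sum()
effects = aov.drop(index="Residual")
eta2 = effects["sum_sq"] / ss_total                             # eta-squared
partial_eta2 = effects["sum_sq"] / (effects["sum_sq"] + ss_error)
print(pd.DataFrame({"eta2": eta2, "partial_eta2": partial_eta2}).round(3))
```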
ERIC Educational Resources Information Center
Gunter, Nancy C.; Gunter, B. G.
This study examined the relationship of gender, sex role orientation, and work attitudes to the domestic division of labor in 141 working couples. Couples completed the Bem Sex Role Inventory and a questionnaire on the performance of household tasks. Analysis of variance (ANOVA) confirmed that working women performed a disproportionately larger…
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2013-01-01
Large-scale experiments that involve nested structures may assign treatment conditions either to subgroups such as classrooms or to individuals such as students within subgroups. Key aspects of the design of such experiments include knowledge of the variance structure in higher levels and the sample sizes necessary to reach sufficient power to…
Analysis of half diallel mating designs I: a practical analysis procedure for ANOVA approximation.
G.R. Johnson; J.N. King
1998-01-01
Procedures to analyze half-diallel mating designs using the SAS statistical package are presented. The procedure requires two runs of PROC VARCOMP and results in estimates of additive and non-additive genetic variation. The procedures described can be modified to work on most statistical software packages which can compute variance component estimates. The...
Krüger, Stephanie; Bagby, R Michael; Höffler, Jürgen; Bräunig, Peter
2003-01-01
Catatonia is a frequent psychomotor syndrome, which has received increasing recognition over the last decade. The assessment of the catatonic syndrome requires systematic rating scales that cover the complex spectrum of catatonic motor signs and behaviors. The Catatonia Rating Scale (CRS) is such an instrument, which has been validated and which has undergone extensive reliability testing. In the present study, to further validate the CRS, the items composing this scale were submitted to principal components factor extraction followed by a varimax rotation. An analysis of variance (ANOVA) was performed to assess group differences on the extracted factors in patients with schizophrenia, pure mania, mixed mania, and major depression (N=165). Four factors were extracted, which accounted for 71.5% of the variance. The factors corresponded to the clinical syndromes of (1) catatonic excitement, (2) abnormal involuntary movements/mannerisms, (3) disturbance of volition/catalepsy, and (4) catatonic inhibition. The ANOVA revealed that each of the groups showed a distinctive catatonic symptom pattern and that the overlap between diagnostic groups was minimal. We conclude that this four-factor symptom structure of catatonia challenges the current conceptualization, which proposes only two symptom subtypes.
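A hedged sketch of principal-components extraction followed by a varimax rotation, the general approach named above; the item-response matrix and number of items are simulated placeholders, the number of retained factors is fixed at four only to mirror the abstract, and the varimax routine is a generic textbook implementation rather than the authors' software.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a p x k loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0))))
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return loadings @ R

rng = np.random.default_rng(8)
X = rng.normal(size=(165, 21))                 # fabricated ratings: 165 patients x 21 items
Xs = (X - X.mean(0)) / X.std(0)                # standardize items

# Principal components from the correlation matrix; keep four components.
eigval, eigvec = np.linalg.eigh(np.corrcoef(Xs, rowvar=False))
order = np.argsort(eigval)[::-1][:4]
loadings = eigvec[:, order] * np.sqrt(eigval[order])

rotated = varimax(loadings)
print("variance explained:", round(eigval[order].sum() / X.shape[1] * 100, 1), "%")
print(np.round(rotated[:5], 2))                # rotated loadings of the first five items
```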
3D facial landmarks: Inter-operator variability of manual annotation
2014-01-01
Background Manual annotation of landmarks is a known source of variance in all fields of medical imaging, influencing the accuracy and interpretation of results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, in contrast to research fields such as orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize variance. Method Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with minimum point variance. Results The anatomical landmarks of the eye were associated with the lowest variance, particularly the center of the pupils, whereas points on the jaw and eyebrows had the highest variation. Intra-operator differences and the portraits themselves contributed only marginally to the variability. Using a sparse set of landmarks (n=14) that captures the whole face, the dense point mean variance was reduced from 1.92 to 0.54 mm. Conclusion The inter-operator variability was primarily associated with particular landmarks, with more leniently defined landmarks showing the highest variability. The variables embedded in the portrait and the reliability of a trained operator had only a marginal influence on the variability. Furthermore, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh capturing all facial features. PMID:25306436
Accuracy of different impression materials in parallel and nonparallel implants
Vojdani, Mahroo; Torabi, Kianoosh; Ansarifard, Elham
2015-01-01
Background: A precise impression is mandatory to obtain passive fit in implant-supported prostheses. The aim of this study was to compare the accuracy of three impression materials in both parallel and nonparallel implant positions. Materials and Methods: In this experimental study, two partially dentate maxillary acrylic models with four implant analogues in the canine and lateral incisor areas were used. One model simulated the parallel condition and the other the nonparallel one, in which implants were tilted 30° buccally and 20° in either mesial or distal directions. Thirty stone casts were made from each model using polyether (Impregum), addition silicone (Monopren) and vinyl siloxanether (Identium), with an open-tray technique. The distortion values in three dimensions (X-, Y- and Z-axes) were measured with a coordinate measuring machine. Two-way analysis of variance (ANOVA), one-way ANOVA and Tukey tests were used for data analysis (α = 0.05). Results: Under the parallel condition, all the materials produced comparable, accurate casts (P = 0.74). In the presence of angulated implants, while Monopren showed more accurate results compared to Impregum (P = 0.01), Identium yielded results similar to those produced by Impregum (P = 0.27) and Monopren (P = 0.26). Conclusion: Within the limitations of this study, in parallel conditions the type of impression material did not affect the accuracy of the implant impressions; however, in nonparallel conditions, polyvinyl siloxane is shown to be a better choice, followed by vinyl siloxanether and polyether, respectively. PMID:26288620
Order-constrained linear optimization.
Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P
2017-11-01
Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
Proximity of fast food restaurants to schools: do neighborhood income and type of school matter?
Simon, Paul A; Kwan, David; Angelescu, Aida; Shih, Margaret; Fielding, Jonathan E
2008-09-01
To investigate the proximity of fast food restaurants to public schools and examine proximity by neighborhood income and school level (elementary, middle, or high school). Geocoded school and restaurant databases from 2005 and 2003, respectively, were used to determine the percentage of schools with one or more fast food restaurants within 400 m and 800 m of all public schools in Los Angeles County, California. Single-factor analysis of variance (ANOVA) models were run to examine fast food restaurant proximity to schools by median household income of the surrounding census tract and by school level. Two-factor ANOVA models were run to assess the additional influence of neighborhood level of commercialization. Overall, 23.3% and 64.8% of schools had one or more fast food restaurants located within 400 m and 800 m, respectively. Fast food restaurant proximity was greater for high schools than for middle and elementary schools, and was inversely related to neighborhood income for schools in the highest commercial areas. No association with income was observed in less commercial areas. Fast food restaurants are located in close proximity to many schools in this large metropolitan area, especially high schools and schools located in low income highly commercial neighborhoods. Further research is needed to assess the relationship between fast food proximity and student dietary practices and obesity risk.
ERIC Educational Resources Information Center
Badri, Masood A.; Abdulla, Mohamed; Kamali, Mohammed A.; Dodeen, Hamzeh
2006-01-01
Purpose: The purpose of this paper is to investigate the effect of many factors on student evaluation of teaching. Design/methodology/approach: The study analyzed 3,185 student evaluations of faculty from a newly accredited business program at the United Arab Emirates University using univariate and multi-analysis of variance (ANOVA and MANOVA).…
The Effect of Integrated Learning Model and Critical Thinking Skill of Science Learning Outcomes
NASA Astrophysics Data System (ADS)
Fazriyah, N.; Supriyati, Y.; Rahayu, W.
2017-02-01
This study aimed to determine the effect of an integrated learning model and critical thinking skill on science learning outcomes. The study was conducted at SDN Kemiri Muka 1 Depok with fifth-grade students in the 2014/2015 school year; cluster random sampling was used to select 80 students. Data were obtained through tests and analyzed by two-way analysis of variance (ANOVA) with a 2x2 treatment-by-level design. The results showed that: (1) the science learning outcomes of students given the thematic integrated learning model are higher than those of students given the fragmented learning model, (2) there is an interaction effect between critical thinking skills and the learning model, (3) for students with high critical thinking skills, science learning outcomes are higher with the thematic integrated learning model than with the fragmented learning model, and (4) for students with low critical thinking skills, science learning outcomes are higher with the fragmented learning model. These results indicate that a thematic learning model combined with critical thinking skills can improve students' science learning outcomes.
NASA Astrophysics Data System (ADS)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-02-01
Weather forecasting is an important issue in the field of meteorology all over the world. The pattern and amount of rainfall are essential factors that affect agricultural systems. India experiences the precious Southwest monsoon season for four months from June to September. The present paper describes an empirical study for modeling and forecasting the time series of Southwest monsoon rainfall patterns in North-East India. The Box-Jenkins Seasonal Autoregressive Integrated Moving Average (SARIMA) methodology has been adopted for model identification, diagnostic checking and forecasting for this region. The study has shown that the SARIMA (0, 1, 1) (1, 0, 1)4 model is appropriate for analyzing and forecasting the future rainfall patterns. The Analysis of Means (ANOM) is a useful alternative to analysis of variance (ANOVA) for comparing groups of treatments to study the variations and critical comparisons of rainfall patterns in different months of the season.
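A hedged sketch of fitting the seasonal model identified above with statsmodels; the quarterly synthetic series stands in for the actual monsoon rainfall data, so the estimates and forecasts are illustrative only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(9)
# Synthetic stand-in for a seasonal rainfall series (4 observations per year).
n = 160
seasonal = np.tile([20.0, 60.0, 90.0, 40.0], n // 4)
y = pd.Series(seasonal + rng.normal(0, 10, n),
              index=pd.date_range("1978-01-01", periods=n, freq="QS"))

# SARIMA(0,1,1)(1,0,1) with seasonal period 4, as identified in the study.
model = SARIMAX(y, order=(0, 1, 1), seasonal_order=(1, 0, 1, 4))
fit = model.fit(disp=False)
print(fit.summary().tables[1])
print(fit.forecast(steps=4))        # forecast the next four seasons
```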
NASA Astrophysics Data System (ADS)
POP, A. B.; ȚÎȚU, M. A.
2016-11-01
In the metal cutting process, surface quality is intrinsically related to the cutting parameters and to the cutting tool geometry. At the same time, metal cutting processes are closely related to machining costs. The purpose of this paper is to reduce manufacturing costs and processing time. A study was made based on the mathematical modelling of the average absolute deviation of the roughness profile (Ra) resulting from end milling of 7136 aluminium alloy, as a function of the cutting process parameters. The novel element brought by this paper is the 7136 aluminium alloy chosen for the experiments, a material developed and patented by Universal Alloy Corporation. This aluminium alloy is used in the aircraft industry to make parts from extruded profiles, and it has not been studied for the proposed research direction. Based on this research, a mathematical model of surface roughness Ra was established according to the cutting parameters studied within a defined experimental field. A regression analysis was performed, which identified the quantitative relationships between the cutting parameters and the surface roughness. Using analysis of variance (ANOVA), the degree of confidence in the results obtained from the regression equation was determined, along with the suitability of the equation at every point of the experimental field.
Liebenberg, Leandi; L'Abbé, Ericka N; Stull, Kyra E
2015-12-01
The cranium is widely recognized as the most important skeletal element to use when evaluating population differences and estimating ancestry. However, the cranium is not always intact or available for analysis, which emphasizes the need for postcranial alternatives. The purpose of this study was to quantify postcraniometric differences among South Africans that can be used to estimate ancestry. Thirty-nine standard measurements from 11 postcranial bones were collected from 360 modern black, white and coloured South Africans; the sex and ancestry distribution were equal. Group differences were explored with analysis of variance (ANOVA) and Tukey's honestly significant difference (HSD) test. Linear and flexible discriminant analysis (LDA and FDA, respectively) were conducted with bone models as well as numerous multivariate subsets to identify the model and method that yielded the highest correct classifications. Leave-one-out (LDA) and k-fold (k=10; FDA) cross-validation with equal priors were used for all models. ANOVA and Tukey's HSD results reveal statistically significant differences between at least two of the three groups for the majority of the variables, with varying degrees of group overlap. Bone models, which consisted of all measurements per bone, resulted in low accuracies that ranged from 46% to 63% (LDA) and 41% to 66% (FDA). In contrast, the multivariate subsets, which consisted of different variable combinations from all elements, achieved accuracies as high as 85% (LDA) and 87% (FDA). Thus, when using a multivariate approach, the postcranial skeleton can distinguish among three modern South African groups with high accuracy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
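A hedged sketch of the classification step described above, restricted to linear discriminant analysis with leave-one-out cross-validation and equal priors in scikit-learn; the measurement matrix and group labels are simulated placeholders, and the flexible discriminant analysis variant is not covered.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(10)
# Fabricated postcranial measurements: 360 individuals x 39 variables, 3 groups.
y = np.repeat([0, 1, 2], 120)
X = rng.normal(size=(360, 39)) + y[:, None] * 0.25   # small built-in group separation

lda = LinearDiscriminantAnalysis(priors=[1 / 3, 1 / 3, 1 / 3])   # equal priors
acc = cross_val_score(lda, X, y, cv=LeaveOneOut())
print(f"leave-one-out correct classification: {acc.mean():.1%}")
```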
Lindqvist, R
2006-07-01
Turbidity methods offer possibilities for generating data required for addressing microorganism variability in risk modeling given that the results of these methods correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters based on turbidity data and use of a Bioscreen instrument and to characterize variability in growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 degrees C estimated by time to detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time to detection methods were selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. It is suggested to apply a time to detection (ANOVA) approach using turbidity measurements for convenient and accurate estimation of growth parameters. The results emphasize the need to consider implications of strain variability for predictive modeling and risk assessment.
Almalki, Mohammed J; FitzGerald, Gerry; Clark, Michele
2012-09-12
Quality of work life (QWL) has been found to influence the commitment of health professionals, including nurses. However, reliable information on QWL and turnover intention of primary health care (PHC) nurses is limited. The aim of this study was to examine the relationship between QWL and turnover intention of PHC nurses in Saudi Arabia. A cross-sectional survey was used in this study. Data were collected using Brooks' survey of Quality of Nursing Work Life, the Anticipated Turnover Scale and demographic data questions. A total of 508 PHC nurses in the Jazan Region, Saudi Arabia, completed the questionnaire (RR = 87%). Descriptive statistics, t-test, ANOVA, General Linear Model (GLM) univariate analysis, standard multiple regression, and hierarchical multiple regression were applied for analysis using SPSS v17 for Windows. Findings suggested that the respondents were dissatisfied with their work life, with almost 40% indicating a turnover intention from their current PHC centres. Turnover intention was significantly related to QWL. Using standard multiple regression, 26% of the variance in turnover intention was explained by QWL, p < 0.001, with R2 = .263. Further analysis using hierarchical multiple regression found that the total variance explained by the model as a whole (demographics and QWL) was 32.1%, p < 0.001. QWL explained an additional 19% of the variance in turnover intention, after controlling for demographic variables. Creating and maintaining a healthy work life for PHC nurses is very important to improve their work satisfaction, reduce turnover, enhance productivity and improve nursing care outcomes.
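A hedged sketch of the hierarchical multiple regression reported above: demographic variables are entered in step 1, QWL in step 2, and the change in R-squared attributable to QWL is computed. The variable names and simulated values are placeholders, not the survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 508
df = pd.DataFrame({
    "age": rng.normal(35, 8, n),
    "years_experience": rng.normal(10, 5, n),
    "qwl": rng.normal(150, 25, n),
})
df["turnover_intention"] = 60 - 0.12 * df.qwl + 0.1 * df.age + rng.normal(0, 6, n)

# Step 1: demographics only; Step 2: demographics plus QWL.
step1 = smf.ols("turnover_intention ~ age + years_experience", data=df).fit()
step2 = smf.ols("turnover_intention ~ age + years_experience + qwl", data=df).fit()

r2_change = step2.rsquared - step1.rsquared
print(f"Step 1 R2 = {step1.rsquared:.3f}")
print(f"Step 2 R2 = {step2.rsquared:.3f}, R2 change due to QWL = {r2_change:.3f}")
```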
Error in geometric morphometric data collection: Combining data from multiple sources.
Robinson, Chris; Terhune, Claire E
2017-09-01
This study compares two- and three-dimensional morphometric data to determine the extent to which intra- and interobserver and intermethod error influence the outcomes of statistical analyses. Data were collected five times for each method and observer on 14 anthropoid crania using calipers, a MicroScribe, and 3D models created from NextEngine and microCT scans. ANOVA models were used to examine variance in the linear data at the level of genus, species, specimen, observer, method, and trial. Three-dimensional data were analyzed using geometric morphometric methods; principal components analysis was employed to examine how trials of all specimens were distributed in morphospace and Procrustes distances among trials were calculated and used to generate UPGMA trees to explore whether all trials of the same individual grouped together regardless of observer or method. Most variance in the linear data was at the genus level, with greater variance at the observer than method levels. In the 3D data, interobserver and intermethod error were similar to intraspecific distances among Callicebus cupreus individuals, with interobserver error being higher than intermethod error. Generally, taxa separate well in morphospace, with different trials of the same specimen typically grouping together. However, trials of individuals in the same species overlapped substantially with one another. Researchers should be cautious when compiling data from multiple methods and/or observers, especially if analyses are focused on intraspecific variation or closely related species, as in these cases, patterns among individuals may be obscured by interobserver and intermethod error. Conducting interobserver and intermethod reliability assessments prior to the collection of data is recommended. © 2017 Wiley Periodicals, Inc.
2012-01-01
Background Quality of work life (QWL) has been found to influence the commitment of health professionals, including nurses. However, reliable information on QWL and turnover intention of primary health care (PHC) nurses is limited. The aim of this study was to examine the relationship between QWL and turnover intention of PHC nurses in Saudi Arabia. Methods A cross-sectional survey was used in this study. Data were collected using Brooks’ survey of Quality of Nursing Work Life, the Anticipated Turnover Scale and demographic data questions. A total of 508 PHC nurses in the Jazan Region, Saudi Arabia, completed the questionnaire (RR = 87%). Descriptive statistics, t-test, ANOVA, General Linear Model (GLM) univariate analysis, standard multiple regression, and hierarchical multiple regression were applied for analysis using SPSS v17 for Windows. Results Findings suggested that the respondents were dissatisfied with their work life, with almost 40% indicating a turnover intention from their current PHC centres. Turnover intention was significantly related to QWL. Using standard multiple regression, 26% of the variance in turnover intention was explained by QWL, p < 0.001, with R2 = .263. Further analysis using hierarchical multiple regression found that the total variance explained by the model as a whole (demographics and QWL) was 32.1%, p < 0.001. QWL explained an additional 19% of the variance in turnover intention, after controlling for demographic variables. Conclusions Creating and maintaining a healthy work life for PHC nurses is very important to improve their work satisfaction, reduce turnover, enhance productivity and improve nursing care outcomes. PMID:22970764
Haverkamp, Nicolas; Beauducel, André
2017-01-01
We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodological approaches to repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered. Effects of the level of inter-correlations between measurement occasions on Type I error rates were considered for the first time. Two populations with non-violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations, none of which contained a between-group or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates were computed for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction). To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as m = 3, 6, and 9 measurement occasions. With respect to rANOVA, the results argue for the use of rANOVA with the Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results illustrate a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. The proportionality of bias and number of measurement occasions should be considered when MLM-UN is used. The good news is that this proportionality can be compensated for by large sample sizes. Accordingly, MLM-UN can be recommended for small sample sizes with about three measurement occasions, and for large sample sizes with up to about nine measurement occasions.
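For readers applying the Greenhouse-Geisser or Huynh-Feldt correction outside a packaged routine, a hedged sketch of estimating the Greenhouse-Geisser epsilon from the sample covariance matrix of the repeated measures; the correction multiplies both degrees of freedom of the within-subject F test. The simulated data and the epsilon formula are textbook material, not code from the study.

```python
import numpy as np

rng = np.random.default_rng(12)
n, m = 40, 6                              # subjects x measurement occasions
# AR(1)-like covariance across occasions, which violates sphericity.
cov = 0.9 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
data = rng.multivariate_normal(np.zeros(m), cov, size=n)

S = np.cov(data, rowvar=False)            # sample covariance of the occasions

# Double-centre S and estimate Greenhouse-Geisser epsilon:
# eps = (tr S*)^2 / ((m - 1) * sum(S*_ij^2)), bounded by 1/(m-1) <= eps <= 1.
J = np.eye(m) - np.ones((m, m)) / m
S_star = J @ S @ J
eps_gg = np.trace(S_star) ** 2 / ((m - 1) * np.sum(S_star ** 2))
print(f"Greenhouse-Geisser epsilon = {eps_gg:.3f}")

# Corrected degrees of freedom for the within-subject F test.
df1, df2 = eps_gg * (m - 1), eps_gg * (m - 1) * (n - 1)
print(f"corrected df = ({df1:.2f}, {df2:.2f})")
```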
Ullattuthodi, Sujana; Cherian, Kandathil Phillip; Anandkumar, R; Nambiar, M Sreedevi
2017-01-01
This in vitro study sought to evaluate and compare the marginal and internal fit of cobalt-chromium copings fabricated using conventional and direct metal laser sintering (DMLS) techniques. A master model of a prepared molar tooth was made from cobalt-chromium alloy. A silicone impression of the master model was made, and thirty standardized working models were then produced: twenty working models for the conventional lost-wax technique and ten working models for the DMLS technique. A total of twenty metal copings were fabricated using the two production techniques, conventional lost-wax and DMLS, with ten samples in each group. The conventional and DMLS copings were cemented to the working models using glass ionomer cement. The marginal gap of each coping was measured at four predetermined points. The dies with the cemented copings were sectioned in a standardized manner with a heavy-duty lathe, and each sectioned sample was then analyzed for the internal gap between the die and the metal coping using a metallurgical microscope. Digital photographs were taken at ×50 magnification and analyzed using measurement software. Statistical analysis was done by unpaired t-test and analysis of variance (ANOVA). ANOVA revealed no significant difference in the marginal gap between conventional and DMLS copings (P > 0.05). The mean internal gap of the DMLS copings was significantly greater than that of the conventional copings (P < 0.05). Within the limitations of this in vitro study, it was concluded that the internal fit of the conventional copings was superior to that of the DMLS copings, while the marginal fit of copings fabricated by the two techniques showed no significant difference.
Oetjen, Janina; Lachmund, Delf; Palmer, Andrew; Alexandrov, Theodore; Becker, Michael; Boskamp, Tobias; Maass, Peter
2016-09-01
A standardized workflow for matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI imaging MS) is a prerequisite for the routine use of this promising technology in clinical applications. We present an approach to develop standard operating procedures for MALDI imaging MS sample preparation of formalin-fixed and paraffin-embedded (FFPE) tissue sections based on a novel quantitative measure of dataset quality. To cover many parts of the complex workflow and simultaneously test several parameters, experiments were planned according to a fractional factorial design of experiments (DoE). The effect of ten different experiment parameters was investigated in two distinct DoE sets, each consisting of eight experiments. FFPE rat brain sections were used as standard material because of low biological variance. The mean peak intensity and a recently proposed spatial complexity measure were calculated for a list of 26 predefined peptides obtained by in silico digestion of five different proteins and served as quality criteria. A five-way analysis of variance (ANOVA) was applied on the final scores to retrieve a ranking of experiment parameters with increasing impact on data variance. Graphical abstract MALDI imaging experiments were planned according to fractional factorial design of experiments for the parameters under study. Selected peptide images were evaluated by the chosen quality metric (structure and intensity for a given peak list), and the calculated values were used as an input for the ANOVA. The parameters with the highest impact on the quality were deduced and SOPs recommended.
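As a hedged illustration of the fractional factorial screening plus ANOVA ranking described above (the factors, generators and simulated quality score below are invented and do not correspond to the actual sample-preparation parameters), the sketch builds a two-level 2^(5-2) design from three base factors and two generators, fits a main-effects model, and ranks the factors by their ANOVA sums of squares.

    import itertools
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    # Full 2^3 design in coded units for base factors A, B, C
    base = pd.DataFrame(list(itertools.product([-1, 1], repeat=3)), columns=list("ABC"))
    design = base.copy()
    design["D"] = design.A * design.B          # generator D = AB
    design["E"] = design.A * design.C          # generator E = AC

    # Two replicate runs per design point with a simulated quality score (illustrative only)
    runs = pd.concat([design, design], ignore_index=True)
    runs["score"] = (2.0 * runs.A - 1.2 * runs.C + 0.5 * runs.E
                     + rng.normal(scale=0.8, size=len(runs)))

    model = smf.ols("score ~ A + B + C + D + E", data=runs).fit()
    anova = sm.stats.anova_lm(model, typ=2)
    print(anova.drop("Residual").sort_values("sum_sq", ascending=False))  # impact ranking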
Uncertainty of Videogrammetric Techniques used for Aerodynamic Testing
NASA Technical Reports Server (NTRS)
Burner, A. W.; Liu, Tianshu; DeLoach, Richard
2002-01-01
The uncertainty of videogrammetric techniques used for the measurement of static aeroelastic wind tunnel model deformation and wind tunnel model pitch angle is discussed. Sensitivity analyses and geometrical considerations of uncertainty are augmented by analyses of experimental data in which videogrammetric angle measurements were taken simultaneously with precision servo accelerometers corrected for dynamics. An analysis of variance (ANOVA) to examine error dependence on angle of attack, sensor used (inertial or optical), and on tunnel state variables such as Mach number is presented. Experimental comparisons with a high-accuracy indexing table are presented. Small roll angles are found to introduce a zero-shift in the measured angles. It is shown experimentally that, provided the proper constraints necessary for a solution are met, a single-camera solution can be comparable to a 2-camera intersection result. The relative immunity of optical techniques to dynamics is illustrated.
NASA Astrophysics Data System (ADS)
Wang, Xiao; Zhang, Cheng; Li, Pin; Wang, Kai; Hu, Yang; Zhang, Peng; Liu, Huixia
2012-11-01
A central composite rotatable experimental design (CCRD) was used to design experiments for laser transmission joining of thermoplastic polycarbonate (PC). An artificial neural network was used to establish the relationships between the laser transmission joining process parameters (laser power, velocity, clamp pressure, and scanning number) and the joint strength and joint seam width. The developed mathematical models were tested by the analysis of variance (ANOVA) method to check their adequacy, and the effects of the process parameters on the responses, together with the interaction effects of key process parameters on joint quality, were analyzed and discussed. Finally, the desirability function coupled with a genetic algorithm was used to optimize the joint strength and joint seam width. The results show that the predicted results of the optimization are in good agreement with the experimental results, so this study provides an effective method to enhance joint quality.
Measurement and analysis of thrust force in drilling sisal-glass fiber reinforced polymer composites
NASA Astrophysics Data System (ADS)
Ramesh, M.; Gopinath, A.
2017-05-01
Drilling of composite materials is difficult compared to conventional materials because of their inhomogeneous nature. The forces developed during drilling play a major role in the surface quality of the hole and in minimizing damage around it. This paper focuses on the effect of drilling parameters on the thrust force in drilling of sisal-glass fiber reinforced polymer composite laminates. Quadratic response models are developed using response surface methodology (RSM) to predict the influence of cutting parameters on thrust force. The adequacy of the models is checked using analysis of variance (ANOVA). A scanning electron microscope (SEM) analysis is carried out to assess the quality of the drilled surface. From the results, it is found that the feed rate is the most influential parameter, followed by spindle speed, and that the drill diameter is the least influential parameter on the thrust force.
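A minimal sketch of the RSM workflow described above, with synthetic data standing in for the drilling measurements (the parameter ranges and coefficients are assumptions, not the paper's): a full second-order model in feed rate, spindle speed and drill diameter is fitted with statsmodels and its adequacy is summarized by the overall ANOVA F test and R².

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 30
    df = pd.DataFrame({
        "feed":  rng.uniform(0.05, 0.3, n),    # mm/rev (assumed range)
        "speed": rng.uniform(500, 3000, n),    # rpm (assumed range)
        "dia":   rng.uniform(4, 10, n),        # mm (assumed range)
    })
    # Synthetic thrust force dominated by feed rate, as the study reports
    df["thrust"] = (60 * df.feed + 0.01 * df.speed + 2 * df.dia
                    + 80 * df.feed ** 2 + rng.normal(scale=3, size=n))

    quad = smf.ols(
        "thrust ~ feed + speed + dia"
        " + I(feed**2) + I(speed**2) + I(dia**2)"
        " + feed:speed + feed:dia + speed:dia",
        data=df,
    ).fit()

    print(quad.summary())                            # coefficient table, R^2, adjusted R^2
    print("model F-test p-value:", quad.f_pvalue)    # overall adequacy check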
Breaking good: breaking ties with social groups may be good for recovery from substance misuse.
Dingle, Genevieve A; Stark, Claire; Cruwys, Tegan; Best, David
2015-06-01
According to the Social Identity Model of Identity Change, maintaining social identities and support over time is good for health and well-being, particularly during stressful transitions. However, in this study we explore the circumstances under which maintaining social identities - such as 'substance user' - may be harmful to health, and when a successful transition constitutes identity change, rather than maintenance. This prospective study examined social identities of 132 adults entering a drug and alcohol therapeutic community (TC) at admission, three fortnightly intervals and exit, as well as a representative subsample of 60 participants at follow-up. Repeated measures ANOVA results showed that user identity decreased significantly over time, such that 76% of the sample decreased in user identity strength over the first month in the TC. At the same time, recovery identity ratings increased significantly over time, with 64% of the sample staying the same or increasing their recovery identity ratings over the first month. Identity change, indexed by the change in the difference score between user identity and recovery identity over the treatment period, accounted for 34% of the variance in drinking quantity, 41% of the variance in drinking frequency, 5% of the variance in other drug use frequency, and 49% of the variance in life satisfaction at follow-up, after accounting for initial substance abuse severity and social identity ratings at entry to the TC. The findings indicate that moving from a substance using identity towards a recovery identity constitutes an important step in substance abuse treatment. © 2014 The British Psychological Society.
Ringus, Daina L; Ivy, Reid A; Wiedmann, Martin; Boor, Kathryn J
2012-03-01
Listeria monocytogenes is a foodborne pathogen that can persist in food processing environments. Six persistent and six non-persistent strains from fish processing plants and one persistent strain from a meat plant were selected to determine if expression of genes in the regulons of two stress response regulators, σ(B) and CtsR, under salt stress conditions is associated with the ability of L. monocytogenes to persist in food processing environments. Subtype data were also used to categorize the strains into genetic lineages I or II. Quantitative reverse transcriptase-polymerase chain reaction (qRT-PCR) was used to measure transcript levels for two σ(B)-regulated genes, inlA and gadD3, and two CtsR-regulated genes, lmo1138 and clpB, before and after (t=10 min) salt shock (i.e., exposure of exponential phase cells to BHI+6% NaCl for 10 min at 37°C). Exposure to salt stress induced higher transcript levels relative to levels under non-stress conditions for all four stress and virulence genes across all wildtype strains tested. Analysis of variance (ANOVA) of induction data revealed that transcript levels for one gene (clpB) were induced at significantly higher levels in non-persistent strains compared to persistent strains (p=0.020; two-way ANOVA). Significantly higher transcript levels of gadD3 (p=0.024; two-way ANOVA) and clpB (p=0.053; two-way ANOVA) were observed after salt shock in lineage I strains compared to lineage II strains. No clear association between stress gene transcript levels and persistence was detected. Our data are consistent with an emerging model that proposes that establishment of L. monocytogenes persistence in a specific environment occurs as a random, stochastic event, rather than as a consequence of specific bacterial strain characteristics.
Subtle Cognitive Effects of Moderate Hypoxia
2009-08-01
using SPSS® 13.0 with significance set at an alpha level of .05 for all statistical tests. A repeated measures analysis of variance (ANOVA) was ... there was no statistically significant change in reaction time (p = .781), accuracy (p = .152), or throughput (p = .967) with increasing altitude. The ... results indicate that healthy individuals aged 19 to 45 years do not experience significant cognitive deficit, as measured by the CogScreen®-HE, when
2010-01-01
Seemingly not. Repeated measures analysis of variance (ANOVA) for the posttest - pretest score gain x training product interaction yielded a non-significant ... Work was accomplished under approved task AM-A-07-HRR-521. This research has two main ...
Clark, S; Rose, D J
2001-04-01
To establish reliability estimates of the 75% Limits of Stability Test (75% LOS test) when administered to community-dwelling older adults with a history of falls. Generalizability theory was used to estimate both the relative contribution of identified error sources to the total measurement error and generalizability coefficients. A random effects repeated-measures analysis of variance (ANOVA) was used to assess consistency of LOS test movement variables across both days and targets. A motor control research laboratory in a university setting. Fifty community-dwelling older adults with 2 or more falls in the previous year. Spatial and temporal measures of dynamic balance derived from the 75% LOS test included average movement velocity, maximum center of gravity (COG) excursion, end-point COG excursion, and directional control. Estimated generalizability coefficients for 2 testing days ranged from .58 to .87. Total variance in LOS test measures attributable to inconsistencies in day-to-day test performance (Day and Subject x Day facets) ranged from 2.5% to 8.4%. The ANOVA results indicated that no significant differences were observed in the LOS test variables across the 2 testing days. The 75% LOS test administered to older adult fallers on 2 consecutive days provides consistent and reliable measures of dynamic balance.
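As a rough illustration of the generalizability analysis described above, the sketch below uses simulated scores and a single facet (persons crossed with two testing days) rather than the study's full persons x days x targets design; variance components are estimated from the ANOVA mean squares and combined into a relative G coefficient for a two-day average.

    import numpy as np

    rng = np.random.default_rng(7)
    n_p, n_d = 50, 2                                         # persons, days (assumed)
    true_person = rng.normal(scale=4.0, size=(n_p, 1))       # stable person differences
    scores = 60 + true_person + rng.normal(scale=2.0, size=(n_p, n_d))

    grand = scores.mean()
    person_means = scores.mean(axis=1)
    day_means = scores.mean(axis=0)

    ms_p = n_d * ((person_means - grand) ** 2).sum() / (n_p - 1)
    ms_d = n_p * ((day_means - grand) ** 2).sum() / (n_d - 1)
    ms_res = (((scores - person_means[:, None] - day_means[None, :] + grand) ** 2).sum()
              / ((n_p - 1) * (n_d - 1)))

    var_res = ms_res                              # person x day interaction + error
    var_p = max((ms_p - ms_res) / n_d, 0.0)       # person (universe score) variance
    var_d = max((ms_d - ms_res) / n_p, 0.0)       # day variance

    g_coef = var_p / (var_p + var_res / n_d)      # relative G coefficient, 2-day average
    print(f"sigma2_person = {var_p:.2f}, sigma2_day = {var_d:.2f}, sigma2_residual = {var_res:.2f}")
    print(f"G coefficient for the 2-day average: {g_coef:.2f}")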
Pacifici, Edoardo; Bossù, Maurizio; Giovannetti, Agostino; La Torre, Giuseppe; Guerra, Fabrizio; Polimeni, Antonella
2013-01-01
Summary Background Even today, use of Glass Ionomer Cements (GIC) as restorative material is indicated for uncooperative patients. Aim The study aimed at estimating the surface roughness of different GICs using or not their proprietary surface coatings and at observing the interfaces between cement and coating through SEM. Materials and methods Forty specimens have been obtained and divided into 4 groups: Fuji IX (IX), Fuji IX/G-Coat Plus (IXC), Vitremer (V), Vitremer/Finishing Gloss (VFG). Samples were obtained using silicone moulds to simulate class I restorations. All specimens were processed for profilometric evaluation. The statistical differences of surface roughness between groups were assessed using One-Way Analysis of Variance (One-Way ANOVA) (p<0.05). The Two-Way Analysis of Variance (Two-Way ANOVA) was used to evaluate the influence of two factors: restoration material and presence of coating. Coated restoration specimens (IXC and VFG) were sectioned perpendicular to the restoration surface and processed for SEM evaluation. Results No statistical differences in roughness could be noticed between groups or factors. Following microscopic observation, interfaces between restoration material and coating were better for group IXC than for group VFG. Conclusions When specimens are obtained simulating normal clinical procedures, the presence of surface protection does not significantly improve the surface roughness of GICs. PMID:24611090
Chitosan based grey wastewater treatment--a statistical design approach.
Thirugnanasambandham, K; Sivakumar, V; Prakash Maran, J; Kandasamy, S
2014-01-01
In the present study, grey wastewater was treated under different operating conditions, namely agitation time (1-3 min), pH (2.5-5.5), chitosan dose (0.3-0.6 g/l) and settling time (10-20 min), using response surface methodology (RSM). A four-factor, three-level Box-Behnken response surface design (BBD) was employed to optimize and investigate the effect of the process variables on the responses: turbidity, BOD and COD removal. The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models were developed to predict the responses. Under the optimum conditions, the experimental removals of turbidity (96%), BOD (91%) and COD (73%) closely agreed with the predicted values. Copyright © 2013 Elsevier Ltd. All rights reserved.
Guided discovery learning in geometry learning
NASA Astrophysics Data System (ADS)
Khasanah, V. N.; Usodo, B.; Subanti, S.
2018-03-01
Geometry is a part of mathematics that must be learned in school. The purpose of this research was to determine the effect of Guided Discovery Learning (GDL) on geometry learning achievement. The research was conducted at a junior high school in Sukoharjo in the 2016/2017 academic year. Data were collected from students' test work and documentation. Hypothesis testing used two-way analysis of variance (ANOVA) with unequal cells. The results showed that GDL had a positive effect on mathematics learning achievement: GDL produced better mathematics learning achievement than direct learning. There was no difference in mathematics learning achievement between male and female students, and there was no interaction between sex and learning model on students' mathematics learning achievement. GDL can be used to improve students' mathematics learning achievement in geometry.
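The sketch below illustrates, with invented scores and group sizes, one common way to run a two-way ANOVA with unequal cells like the one mentioned above: Type III sums of squares with sum-to-zero contrasts in statsmodels.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    rows = []
    for method, shift, n in [("GDL", 6, 34), ("direct", 0, 41)]:   # unequal group sizes
        for sex in ("male", "female"):
            for s in 70 + shift + rng.normal(scale=8, size=n):
                rows.append({"method": method, "sex": sex, "score": s})
    df = pd.DataFrame(rows)

    fit = smf.ols("score ~ C(method, Sum) * C(sex, Sum)", data=df).fit()
    print(sm.stats.anova_lm(fit, typ=3))   # main effects and the interaction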
Optimization of Photooxidative Removal of Phenazopyridine from Water
NASA Astrophysics Data System (ADS)
Saeid, Soudabeh; Behnajady, Mohammad A.; Tolvanen, Pasi; Salmi, Tapio
2018-05-01
The photooxidative removal of the analgesic pharmaceutical compound phenazopyridine (PhP) from aqueous solutions by a UV/H2O2 system with a re-circulated photoreactor was investigated. Response surface methodology (RSM) was employed to optimize the effect of operational parameters on the photooxidative removal efficiency. The investigated variables were the initial PhP and H2O2 concentrations, irradiation time, volume of solution and pH. The analysis of variance (ANOVA) of the quadratic model demonstrated that the model was highly significant. The predicted values of the photooxidative removal efficiency were in fair agreement with the experimental values (R² = 0.9832, adjusted R² = 0.9716). The model predicted that the optimal reaction conditions for maximum removal of PhP (>98%) were: initial PhP concentration less than 23 mg L-1, initial H2O2 concentration higher than 470 mg L-1, solution volume less than 500 mL, pH close to 2 and irradiation time longer than 6 min.
Singh, R K Ratankumar; Majumdar, Ranendra K; Venkateshwarlu, G
2014-09-01
To establish the effect of barrel temperature, screw speed, total moisture and fish flour content on the expansion ratio and bulk density of fish-based extrudates, response surface methodology was adopted in this study. The experiments were designed using a five-level, four-factor central composite design. Analysis of variance was carried out to study the main and interaction effects of the factors, and regression analysis was carried out to explain the variability. A second-order model in the coded variables was fitted for each response. Response surface plots were developed as a function of two independent variables while keeping the other two independent variables at their optimal values. Based on the ANOVA, the fitted models were adequate for both dependent variables. The highest organoleptic score was obtained with the combination of a temperature of 110°C, screw speed of 480 rpm, moisture of 18% and fish flour content of 20%.
Teng, Hui; Choi, Yong Hee
2014-01-01
The optimum extraction conditions for the maximum recovery of total alkaloid content (TAC), berberine content (BC), palmatine content (PC), and the highest antioxidant capacity (AC) from rhizoma coptidis subjected to ultrasonic-assisted extraction (UAE) were determined using response surface methodology (RSM). A central composite design (CCD) with three variables and five levels was employed, and response surface plots were constructed in accordance with a second-order polynomial model. Analysis of variance (ANOVA) showed that the quadratic model was well fitted and significant for the responses TAC, BC, PC, and AC. The optimum conditions obtained through the overlapped contour plots were as follows: ethanol concentration of 59%, extraction time of 46.57 min, and temperature of 66.22°C. A verification experiment was carried out, and no significant difference was found between the observed and estimated values for each response, suggesting that the estimated models were reliable and valid for UAE of alkaloids. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Larouci, M; Safa, M; Meddah, B; Aoues, A; Sonnet, P
2015-03-01
The optimum conditions for acid activation of diatomite to maximize its bleaching efficiency in sunflower oil treatment were studied. A Box-Behnken experimental design combined with response surface modeling (RSM) and quadratic programming (QP) was employed to obtain the optimum values of three independent variables (acid concentration, activation time and solid-to-liquid ratio) for acid activation of the diatomite. The significance of the independent variables and their interactions was tested by means of analysis of variance (ANOVA) with 95% confidence limits (α = 0.05). The optimum values of the selected variables were obtained by solving the quadratic regression model, as well as by analyzing the response surface contour plots. The experimental conditions at this global optimum were determined to be an acid concentration of 8.963 N, an activation time of 11.9878 h, and a solid-to-liquid ratio of 221.2113 g/l; the corresponding bleaching efficiency was about 99%.
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.
2015-10-01
In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.
Jami, Mohammed S; Rosli, Nurul-Shafiqah; Amosa, Mutiu K
2016-06-01
The availability of quality-certified water is pertinent to the production of food and pharmaceutical products. The adverse effects of the manganese content of water on the corrosion of vessels and reactors necessitate that process water be scrutinized for allowable concentration levels before being used in production processes. In this research, optimization of the adsorption process conditions for the removal of manganese from biotreated palm oil mill effluent (BPOME) using zeolite 3A, following a comparative adsorption study with clinoptilolite, was carried out. A face-centered central composite design (FCCCD) of the response surface methodology (RSM) was adopted for the study. Analysis of variance (ANOVA) for the response surface quadratic model revealed that the model was significant, with dosage and agitation speed being the main significant process factors for the optimization. The R² of 0.9478 yielded by the model was in agreement with the predicted R². The Langmuir isotherm and pseudo-second-order kinetic models suggest that the adsorption mechanism involved monolayer adsorption and cation exchange.
Roberts, M A; Milich, R; Loney, J; Caputo, J
1981-09-01
The convergent and discriminant validities of three teacher rating scale measures of the traits of hyperactivity, aggression, and inattention were explored, using the multitrait-multimethod matrix approach of Campbell and Fiske (1959), as well as an analysis of variance procedure (Stanley, 1961). In the present study, teachers rated children from their elementary school classrooms on the above traits. The results provided strong evidence for convergent validity. The data also indicated that these traits can be reliably differentiated by teachers, suggesting that research aimed at better understanding the unique contributions of hyperactivity, aggression, and inattention is warranted. The respective benefits of analyzing multitrait-multimethod matrices by employing the ANOVA procedure or by using the Campbell and Fiske (1959) criteria were discussed.
Doctors' trustworthiness, practice orientation, performance and patient satisfaction.
Van Den Assem, Barend; Dulewicz, Victor
2015-01-01
The purpose of this paper is to provide a greater understanding of the general practitioner (GP)-patient relationship for academics and practitioners. A new model for dyadic professional relationships, specifically designed for research into the doctor-patient relationship, was developed and tested. Various conceptual models of trust and related constructs in the literature were considered and assessed for their relevance, as were various related scales. The model was designed and tested using purposefully designed scales measuring doctors' trustworthiness, practice orientation, performance and patient satisfaction. A quantitative survey used closed-ended questions, and 372 patients responded from seven GP practices. The sample closely reflected the profile of the patients who responded to the DoH/NHS GP Patient Survey for England, 2010. Hierarchical regression and partial least squares both accounted for 74 per cent of the variance in "overall patient satisfaction", the dependent variable. Trust accounted for 39 per cent of the variance explained, with the other independent variables accounting for the other 35 per cent. ANOVA showed good model fit. The findings on the factors which affect patient satisfaction and the doctor-patient relationship have direct implications for GPs and other health professionals. They are of particular relevance at a time of health reform and change. The paper provides: a new model of the doctor-patient relationship and specifically designed scales to test it; a greater understanding of the effects of doctors' trustworthiness, practice orientation and performance on patient satisfaction; and a new framework for examining the breadth and meaning of the doctor-patient relationship and the management of care from the patient's viewpoint.
Chitinosans as tableting excipients for modified release delivery systems.
Rege, P R; Shukla, D J; Block, L H
1999-04-20
The term 'chitinosans' embraces the spectrum of acetylated poly(N-glucosamines) ranging from chitin to chitosan. Chitinosans (I), at acidic pH, have protonated amines which can interact with oppositely charged drug ions and, thereby, modify drug release from drug delivery systems. Tablets were compressed from a physical mixture containing salicylic acid (II) as the model drug, I, and magnesium stearate. Five commercial I compounds, varying in degree of deacetylation and molecular weight, were selected. Tablets were compressed at 5000, 10 000, and 15 000 psig using a Carver press and a single-punch tablet press. The differential scanning calorimetry thermograms provided evidence of I-II interaction in the powder blend. Analysis of variance (ANOVA) indicated that the compression pressure did not significantly affect the crushing strength (CS) or the release profile of II from the I-matrix tablets (P > 0.05). Furthermore, the ANOVA also indicated that the tablet press used during manufacture did not affect the above properties (P > 0.05); however, the chitinosans significantly affected the CS as well as the release profile of II from I-matrix tablets (P < 0.05). This study provides further evidence for the use of commercial I compounds as excipients in modified release drug delivery systems.
2013-01-01
Background The prevalence of obese and overweight patients has increased dramatically worldwide. Both are common risk factors for chronic kidney disease (CKD) as indicated by a diminished estimated glomerular filtration rate (eGFR) or microalbuminuria. This study aimed to investigate whether anthropometric parameters [waist circumference (WC), waist-to-height ratio (WHtR) and body mass index (BMI)] are associated with renal function in a population-based study of Caucasian subjects. Methods Data from 3749 subjects (1825 women) aged 20 to 81 years from the Study of Health in Pomerania (SHIP) were analysed. Renal indices, including the urinary albumin-to-creatinine ratio (uACR), microalbuminuria, eGFR and CKD, were studied. Parameters of anthropometry (WC, WHtR and BMI) were categorised into sex-specific quintiles. Results Analysis of variance (ANOVA) models, adjusting for age, sex, type 2 diabetes mellitus and hypertension, revealed that a high and low WC or WHtR and low BMI were independently related to a higher uACR. Logistic regression models confirmed these results with respect to uACR and showed that subjects with a high or low WC or a high WHtR had increased odds of microalbuminuria. The ANOVA models revealed no relations of the investigated anthropometric parameters with eGFR. However, subjects with high values for these parameters had increased odds of CKD. Conclusions Our results demonstrate U-shaped associations between markers of central fat distribution and uACR or microalbuminuria in the general population, suggesting that both obese and very thin subjects have a higher risk of renal impairment. PMID:23594567
Unser, C U; Bruland, G L; Hood, A; Duin, K
2010-01-01
Accumulation of nitrogen (N) by native Hawaiian riparian plants from surface water was measured under a controlled experimental mesocosm setting. Four species, Cladium jamaicense, Cyperus javanicus, Cyperus laevigatus, and Cyperus polystachyos, were tested for their ability to survive in coconut fiber coir log media with exposure to differing N concentrations. It was hypothesized that the selected species would have significantly different tissue total nitrogen (TN) concentrations, aboveground biomass, and TN accumulation rates because of habitat preference and physiological growth differences. A general linear model (GLM) analysis of variance (ANOVA) determined that species differences accounted for the greatest proportion of variance in tissue TN concentration, aboveground biomass growth, and accumulation rates, when compared with the other main effects (i.e., N concentration, time) and their interactions. A post hoc test of means demonstrated that C. jamaicense had significantly higher tissue TN concentration, aboveground biomass growth, and accumulation rates than the other species under all N concentrations. It was also hypothesized that tissue TN concentrations and biomass growth would increase in plants exposed to elevated N concentrations; however, the data did not support this hypothesis. Nitrogen accumulation rates by species were controlled by differences in plant biomass growth.
Nasal airway and septal variation in unilateral and bilateral cleft lip and palate.
Starbuck, John M; Friel, Michael T; Ghoneima, Ahmed; Flores, Roberto L; Tholpady, Sunil; Kula, Katherine
2014-10-01
Cleft lip and palate (CLP) affects the dentoalveolar and nasolabial facial regions. Internal and external nasal dysmorphology may persist in individuals born with CLP despite surgical interventions. Individuals aged 7-18 years born with unilateral or bilateral CLP (n = 50) were retrospectively assessed using cone beam computed tomography. Anterior, middle, and posterior nasal airway volumes were measured on each facial side. Septal deviation was measured at the anterior and posterior nasal spine and at the midpoint between these two locations. Data were evaluated using principal components analysis (PCA), multivariate analysis of variance (MANOVA), and post-hoc ANOVA tests. PCA results show partial separation in high dimensional space along PC1 (48.5% variance) based on age groups and partial separation along PC2 (29.8% variance) based on CLP type and septal deviation patterns. MANOVA results indicate that age (P = 0.007) and CLP type (P ≤ 0.001) significantly affect nasal airway volume and septal deviation. ANOVA results indicate that anterior nasal volume is significantly affected by age (P ≤ 0.001), whereas septal deviation patterns are significantly affected by CLP type (P ≤ 0.001). Age and CLP type affect nasal airway volume and septal deviation patterns. Nasal airway volumes tend to be reduced on the clefted sides of the face relative to non-clefted sides of the face. Nasal airway volumes tend to strongly increase with age, whereas septal deviation values tend to increase only slightly with age. These results suggest that functional nasal breathing may be impaired in individuals born with the unilateral and bilateral CLP deformity. © 2014 Wiley Periodicals, Inc.
Physiological Efficacy of a Lightweight Ambient Air Cooling Unit for Various Applications
1993-10-01
[Report excerpt; figure titles: Mean skin temperature during continuous work; Thermal comfort rate during continuous work.] ... perceived exertion (RPE) and thermal comfort (TC) were taken every 10 min. Statistical analysis using a 3-way analysis of variance (ANOVA) was conducted ... may account for the fact that no statistically significant differences were seen for thermal comfort and ratings of perceived exertion between the IC
ERIC Educational Resources Information Center
Gkouvatzi, Anastasia N.; Mantis, Konstantinos; Kambas, Antonis
2010-01-01
Using the Bruininks-Oseretsky Test, the motor performance of 34 deaf and hard-of-hearing pupils, aged 6-14 years, was evaluated in reaction time, visual-motor control, and upper limb speed and dexterity. A two-way ANOVA with two independent variables (group and age) and post hoc Scheffé tests for multiple comparisons were used. The…
Effects of a Network-Centric Multi-Modal Communication Tool on a Communication Monitoring Task
2012-03-01
replaced (Nelson, Bolia, Vidulich, & Langhorne, 2004). Communication will continue to be the central tool for Command and Control (C2) operators. However ... Nelson, Bolia, Vidulich, & Langhorne, 2004). The two highest ratings for most potential technologies were data capture/replay tools and chat ... analysis of variance (ANOVA). A significant main effect was found for Difficulty, F(1, 13) = 21.11, p < .05; the overall level of detections was
Bouguecha, Salah T; Boubakri, Ali; Aly, Samir E; Al-Beirutty, Mohammad H; Hamdi, Mohamed M
2016-01-01
Membrane distillation (MD) is considered a process with a relatively high energy requirement. To overcome this drawback, it is recommended to couple the MD process with solar energy as a renewable source of the heat required to drive permeate flux. In the present work, an original solar-energy-driven direct contact membrane distillation (DCMD) pilot plant was built and tested under actual weather conditions at Jeddah, KSA, in order to model and optimize the permeate flux. The dependency of permeate flux on the operating parameters feed temperature (46.6-63.4°C), permeate temperature (6.6-23.4°C), feed flow rate (199-451 L/h) and permeate flow rate (199-451 L/h) was studied by response surface methodology based on a central composite design approach. The analysis of variance (ANOVA) confirmed that all independent variables had a significant influence on the model (P < 0.05). The high coefficient of determination (R² = 0.9644, adjusted R² = 0.9261) obtained by ANOVA demonstrated good correlation between the experimental and predicted values of the response. The optimized conditions, determined using the desirability function, were Tf = 63.4°C, Tp = 6.6°C, Qf = 451 L/h and Qp = 451 L/h. Under these conditions, a maximum permeate flux of 6.122 kg/m²·h was achieved, which was close to the predicted value of 6.398 kg/m²·h.
The Grapefruit: An Alternative Arthroscopic Tool Skill Platform.
Molho, David A; Sylvia, Stephen M; Schwartz, Daniel L; Merwin, Sara L; Levy, I Martin
2017-08-01
To establish the construct validity of an arthroscopic training model that teaches arthroscopic tool skills including triangulation, grasping, precision biting, implant delivery and ambidexterity and uses a whole grapefruit for its training platform. For the grapefruit training model (GTM), an arthroscope and arthroscopic instruments were introduced through portals cut in the grapefruit skin of a whole prepared grapefruit. After institutional review board approval, participants performed a set of tasks inside the grapefruit. Performance for each component was assessed by recording errors, achievement of criteria, and time to completion. A total of 19 medical students, orthopaedic surgery residents, and fellowship-trained orthopaedic surgeons were included in the analysis and were divided into 3 groups based on arthroscopic experience. One-way analysis of variance (ANOVA) and the post hoc Tukey test were used for statistical analysis. One-way ANOVA showed significant differences in both time to completion and errors between groups, F(2, 16) = 16.10, P < .001; F(2, 16) = 17.43, P < .001. Group A had a longer time to completion and more errors than group B (P = .025, P = .019), and group B had a longer time to completion and more errors than group C (P = .023, P = .018). The GTM is an easily assembled and an alternative arthroscopic training model that bridges the gap between box trainers, cadavers, and virtual reality simulators. Our findings suggest construct validity when evaluating its use for teaching the basic arthroscopic tool skills. As such, it is a useful addition to the arthroscopic training toolbox. There is a need for validated low-cost arthroscopic training models that are easily accessible. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
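For readers unfamiliar with the analysis named above, the following sketch runs a one-way ANOVA across three experience groups followed by the post hoc Tukey HSD test on simulated completion times; the group sizes and times are invented, not the study's data.

    import numpy as np
    import pandas as pd
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(2)
    times = {
        "students":   rng.normal(480, 60, 7),   # seconds to complete the tasks (assumed)
        "residents":  rng.normal(360, 50, 7),
        "attendings": rng.normal(250, 40, 5),
    }

    f_stat, p_val = stats.f_oneway(*times.values())
    print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

    long = pd.DataFrame([(g, t) for g, vals in times.items() for t in vals],
                        columns=["group", "time"])
    print(pairwise_tukeyhsd(long["time"], long["group"], alpha=0.05))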
4D MRI of polycystic kidneys from rapamycin-treated Glis3-deficient mice
Xie, Luke; Qi, Yi; Subashi, Ergys; Liao, Grace; Miller DeGraff, Laura; Jetten, Anton M.; Johnson, G. Allan
2015-01-01
Polycystic kidney disease (PKD) is a life-threatening disease that leads to a grotesque enlargement of the kidney and significant loss of function. Several imaging studies with MRI have demonstrated that cyst size in polycystic kidneys can determine disease severity and progression. In the present study, we found that while kidney volume and cyst volume decreased with drug treatment, renal function did not improve with treatment. Here, we applied dynamic contrast-enhanced MRI to study PKD in a Glis3-deficient mouse model. Cysts from this model have a wide range of sizes and develop at an early age. To capture this crucial stage and assess cysts in detail, we imaged during early development (3 to 17 weeks) and applied high spatiotemporal resolution MRI (125×125×125 cubic microns every 7.7 seconds). A drug treatment with rapamycin (also known as sirolimus) was applied to determine whether disease progression could be halted. The effect and synergy (interaction) of aging and treatment were evaluated using an analysis of variance (ANOVA). Structural measurements including kidney volume, cyst volume, and cyst-kidney volume ratio changed significantly with age. Drug treatment significantly decreased these metrics. Functional measurements of time-to-peak (TTP) mean and TTP variance were determined. TTP mean did not change with age, while TTP variance increased with age. Treatment with rapamycin generally did not affect these functional metrics. Synergistic effects of treatment and age were not found for any measurements. Together, the size and volume ratio of cysts decreased with drug treatment, while renal function remained the same. Quantifying renal structure and function with MRI can comprehensively assess the pathophysiology of PKD and response to treatment. PMID:25810360
Repeatability and reproducibility of ribotyping and its computer interpretation.
Lefresne, Gwénola; Latrille, Eric; Irlinger, Françoise; Grimont, Patrick A D
2004-04-01
Many molecular typing methods are difficult to interpret because their repeatability (within-laboratory variance) and reproducibility (between-laboratory variance) have not been thoroughly studied. In the present work, ribotyping of coryneform bacteria was the basis of a study involving within-gel and between-gel repeatability and between-laboratory reproducibility (two laboratories involved). The effect of different technical protocols, different algorithms, and different software for fragment size determination was studied. Analysis of variance (ANOVA) showed, within a laboratory, that there was no significant added variance between gels. However, between-laboratory variance was significantly higher than within-laboratory variance. This may be due to the use of different protocols. An experimental function was calculated to transform the data and make them compatible (i.e., erase the between-laboratory variance). The use of different interpolation algorithms (spline, Schaffer and Sederoff) was a significant source of variation in one laboratory only. The use of either Taxotron (Institut Pasteur) or GelCompar (Applied Maths) was not a significant source of added variation when the same algorithm (spline) was used. However, the use of Bio-Gene (Vilber Lourmat) dramatically increased the error (within laboratory, within gel) in one laboratory, while decreasing the error in the other laboratory; this might be due to automatic normalization attempts. These results were taken into account for building a database and performing automatic pattern identification using Taxotron. Conversion of the data considerably improved the identification of patterns irrespective of the laboratory in which the data were obtained.
NASA Astrophysics Data System (ADS)
Shoko, Cletah; Clark, David; Mengistu, Michael; Dube, Timothy; Bulcock, Hartley
2015-01-01
This study evaluated the effect of two readily available multispectral sensors: the newly launched 30 m spatial resolution Landsat 8 and the long-serving 1000 m moderate resolution imaging spectroradiometer (MODIS) datasets in the spatial representation of total evaporation in the heterogeneous uMngeni catchment, South Africa, using the surface energy balance system model. The results showed that sensor spatial resolution plays a critical role in the accurate estimation of energy fluxes and total evaporation across a heterogeneous catchment. Landsat 8 estimates showed better spatial representation of the biophysical parameters and total evaporation for different land cover types, due to the relatively higher spatial resolution compared to the coarse spatial resolution MODIS sensor. Moreover, MODIS failed to capture the spatial variations of total evaporation estimates across the catchment. Analysis of variance (ANOVA) results showed that MODIS-based total evaporation estimates did not show any significant differences across different land cover types (one-way ANOVA; F1.924=1.412, p=0.186). However, Landsat 8 images yielded significantly different estimates between different land cover types (one-way ANOVA; F1.993=5.185, p<0.001). The validation results showed that Landsat 8 estimates were more comparable to eddy covariance (EC) measurements than the MODIS-based total evaporation estimates. EC measurement on May 23, 2013, was 3.8 mm/day, whereas the Landsat 8 estimate on the same day was 3.6 mm/day, with MODIS showing significantly lower estimates of 2.3 mm/day. The findings of this study underscore the importance of spatial resolution in estimating spatial variations of total evaporation at the catchment scale, thus, they provide critical information on the relevance of the readily available remote sensing products in water resources management in data-scarce environments.
NASA Astrophysics Data System (ADS)
Razak, Jeefferie Abd; Ahmad, Sahrim Haji; Ratnam, Chantara Thevy; Mahamood, Mazlin Aida; Yaakub, Juliana; Mohamad, Noraiham
2014-09-01
A fractional 2⁵ two-level factorial design of experiments (DOE) was applied to systematically prepare the NR/EPDM blend using a Haake internal mixer set-up. A process model of rubber blend preparation that correlates the mixer process input parameters with the output response of blend compatibility was developed. Model analysis of variance (ANOVA) and model fitting through curve evaluation gave an R² of 99.60%, with a proposed parametric combination of A = 30/70 NR/EPDM blend ratio, B = 70°C mixing temperature, C = 70 rpm rotor speed, D = 5 minutes mixing period and E = 1.30 phr EPDM-g-MAH compatibilizer addition, with an overall desirability of 0.966. Model validation, with a small deviation of +2.09%, confirmed the repeatability of the mixing strategy, with the maximum tensile strength output representing blend miscibility. A theoretical calculation of NR/EPDM blend compatibility is also included and compared. In short, this study provides a brief insight into the utilization of DOE for experimental simplification and parameter inter-correlation studies, especially when dealing with multiple variables during elastomeric rubber blend preparation.
NASA Astrophysics Data System (ADS)
Tibin, El Mubarak Musa; Al-Shorgani, Najeeb Kaid Naseer; Abuelhassan, Nawal Noureldaim; Hamid, Aidil Abdul; Kalil, Mohd Sahaid; Yusoff, Wan Mohtar Wan
2013-11-01
Cellulase production by a fungal culture of Aspergillus terreus SUK-1 using sorghum straw as substrate was investigated in solid substrate fermentation (SSF). The optimum CMCase production was obtained by testing the most influential fermentation parameters, namely incubation temperature, pH and moisture content, using response surface methodology (RSM) based on a central composite design (CCD). The carboxymethyl cellulase activity (CMCase) was measured as the response. The results were analysed by analysis of variance (ANOVA) and a quadratic regression model was obtained. The model was found to be significant (p < 0.05); the effects of temperature (25-40°C) and pH (4-7) on CMCase activity were not significant, whereas the effect of moisture content was significant under the SSF conditions employed. A high predicted CMCase activity (0.2 U/ml) was obtained under the optimized conditions (temperature of 40°C, pH 5.4 and moisture content of 80%). The model was validated by applying the optimized conditions, and it was found to be valid.
Chen, Chunhui; Chen, Chuansheng; Moyzis, Robert; Stern, Hal; He, Qinghua; Li, He; Li, Jin; Zhu, Bi; Dong, Qi
2011-01-01
Traditional behavioral genetic studies (e.g., twin, adoption studies) have shown that human personality has moderate to high heritability, but recent molecular behavioral genetic studies have failed to identify quantitative trait loci (QTL) with consistent effects. The current study adopted a multi-step approach (ANOVA followed by multiple regression and permutation) to assess the cumulative effects of multiple QTLs. Using a system-level (dopamine system) genetic approach, we investigated a personality trait deeply rooted in the nervous system (the Highly Sensitive Personality, HSP). 480 healthy Chinese college students were given the HSP scale and genotyped for 98 representative polymorphisms in all major dopamine neurotransmitter genes. In addition, two environment factors (stressful life events and parental warmth) that have been implicated for their contributions to personality development were included to investigate their relative contributions as compared to genetic factors. In Step 1, using ANOVA, we identified 10 polymorphisms that made statistically significant contributions to HSP. In Step 2, these polymorphism's main effects and interactions were assessed using multiple regression. This model accounted for 15% of the variance of HSP (p<0.001). Recent stressful life events accounted for an additional 2% of the variance. Finally, permutation analyses ascertained the probability of obtaining these findings by chance to be very low, p ranging from 0.001 to 0.006. Dividing these loci by the subsystems of dopamine synthesis, degradation/transport, receptor and modulation, we found that the modulation and receptor subsystems made the most significant contribution to HSP. The results of this study demonstrate the utility of a multi-step neuronal system-level approach in assessing genetic contributions to individual differences in human behavior. It can potentially bridge the gap between the high heritability estimates based on traditional behavioral genetics and the lack of reproducible genetic effects observed currently from molecular genetic studies.
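A hedged sketch of the multi-step strategy above (ANOVA screening, then joint multiple regression, then permutation), applied to simulated genotype and trait data; none of the numbers correspond to the study's SNPs or effect sizes. The permutation step re-runs the whole pipeline on each shuffled trait so that the selection step does not inflate the apparent significance.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    n, n_snp = 480, 98
    geno = rng.integers(0, 3, size=(n, n_snp)).astype(float)     # 0/1/2 allele counts
    trait = geno[:, :5] @ np.array([0.30, 0.25, 0.20, 0.20, 0.15]) + rng.normal(size=n)

    def pipeline_r2(y, geno):
        """ANOVA-screen each SNP, then regress y on the retained SNPs jointly."""
        keep = []
        for j in range(geno.shape[1]):
            groups = [y[geno[:, j] == g] for g in (0, 1, 2)]
            groups = [grp for grp in groups if len(grp) > 1]
            if len(groups) > 1 and stats.f_oneway(*groups).pvalue < 0.05:
                keep.append(j)
        if not keep:
            return 0.0, keep
        X = np.column_stack([np.ones(len(y)), geno[:, keep]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1.0 - (y - X @ beta).var() / y.var(), keep

    r2_obs, kept = pipeline_r2(trait, geno)

    n_perm = 200                                   # kept small for illustration
    perm_r2 = [pipeline_r2(rng.permutation(trait), geno)[0] for _ in range(n_perm)]
    p_perm = float(np.mean([r >= r2_obs for r in perm_r2]))
    print(f"{len(kept)} SNPs retained, R^2 = {r2_obs:.3f}, permutation p ~ {p_perm:.3f}")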
Analysis and optimization of machining parameters of laser cutting for polypropylene composite
NASA Astrophysics Data System (ADS)
Deepa, A.; Padmanabhan, K.; Kuppan, P.
2017-11-01
The present work describes the machining of a self-reinforced polypropylene composite fabricated using the hot compaction method. The objective of the experiments was to find the optimum machining parameters for polypropylene (PP). Laser power and machining speed were the parameters considered, with tensile and flexural tests providing the responses. The Taguchi method was used for experimentation, Grey Relational Analysis (GRA) was used for multiple process parameter optimization, and ANOVA (analysis of variance) was used to determine the impact of each process parameter. Polypropylene has a wide range of applications: it is used as foam in model aircraft and other radio-controlled vehicles, as thin sheets (∼2-20 μm) for dielectrics, in piping systems, and in hernia and pelvic organ repair or to prevent new hernias at the same location.
NASA Astrophysics Data System (ADS)
Babagowda; Kadadevara Math, R. S.; Goutham, R.; Srinivas Prasad, K. R.
2018-02-01
Fused deposition modeling is a rapidly growing additive manufacturing technology due to its ability to build functional parts having complex geometry. The mechanical properties of the built part depend on several process parameters and on the build material of the printed specimen. The aim of this study is to characterize and optimize parameters such as the layer thickness and the PLA build material, which is mixed with recycled PLA material. Tensile and flexural (bending) tests are carried out to determine the mechanical response characteristics of the printed specimens. The Taguchi method is used to plan the experiments, the Taguchi S/N ratio is used to identify the parameter settings that give good results for the respective response characteristics, and the effectiveness of each parameter is investigated using analysis of variance (ANOVA).
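To illustrate the Taguchi signal-to-noise step described above (the runs, levels and tensile values below are invented, not the study's data), this sketch computes the larger-is-better S/N ratio for each run and ranks two factors by the range (delta) of their mean S/N across levels; ANOVA on the same table could follow as in the earlier examples.

    import numpy as np
    import pandas as pd

    # Assumed design: 3 layer thicknesses x 3 recycled-PLA fractions, one run each
    runs = pd.DataFrame({
        "layer_mm":     [0.1, 0.1, 0.1, 0.2, 0.2, 0.2, 0.3, 0.3, 0.3],
        "recycled_pct": [0,   25,  50,  0,   25,  50,  0,   25,  50],
        "tensile_mpa":  [48.2, 45.1, 41.0, 46.5, 43.8, 39.9, 44.0, 41.2, 37.5],
    })

    # Larger-is-better S/N ratio: -10 * log10( mean(1 / y^2) )
    runs["sn"] = -10 * np.log10(1.0 / runs["tensile_mpa"] ** 2)

    for factor in ("layer_mm", "recycled_pct"):
        means = runs.groupby(factor)["sn"].mean()
        delta = means.max() - means.min()
        print(f"{factor}: mean S/N by level\n{means}\n  delta = {delta:.2f}")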
2009-08-17
[Report excerpt; section headings: Weight: Repeated Measures (Between-Subjects Effects); Body Weight: Analysis of Variance (ANOVA) - Males Only; Body Weight: Last Measurement; Corticosterone.] ... also decreased the effects of stress on freezing behavior and corticosterone levels in mice (Benaroya-Milshtein et al., 2004). Summary of Social ... rats (e.g., Faraday et al., 2005; Kalinichev et al., 2002; Kant et al., 1987; Hayley et al., 2001). Corticosterone responses to stressors are
ERP evidence suggests executive dysfunction in ecstasy polydrug users.
Roberts, C A; Fairclough, S H; Fisk, J E; Tames, F; Montgomery, C
2013-08-01
Deficits in executive functions such as access to semantic/long-term memory have been shown in ecstasy users in previous research. Equally, there have been many reports of equivocal findings in this area. The current study sought to further investigate behavioural and electro-physiological measures of this executive function in ecstasy users. Twenty ecstasy-polydrug users, 20 non-ecstasy-polydrug users and 20 drug-naïve controls were recruited. Participants completed background questionnaires about their drug use, sleep quality, fluid intelligence and mood state. Each individual also completed a semantic retrieval task whilst 64 channel Electroencephalography (EEG) measures were recorded. Analysis of Variance (ANOVA) revealed no between-group differences in behavioural performance on the task. Mixed ANOVA on event-related potential (ERP) components P2, N2 and P3 revealed significant between-group differences in the N2 component. Subsequent exploratory univariate ANOVAs on the N2 component revealed marginally significant between-group differences, generally showing greater negativity at occipito-parietal electrodes in ecstasy users compared to drug-naïve controls. Despite absence of behavioural differences, differences in N2 magnitude are evidence of abnormal executive functioning in ecstasy-polydrug users.
Measurement Consistency from Magnetic Resonance Images
Chung, Dongjun; Chung, Moo K.; Durtschi, Reid B.; Lindell, R. Gentry; Vorperian, Houri K.
2010-01-01
Rationale and Objectives In quantifying medical images, length-based measurements are still obtained manually. Due to possible human error, a measurement protocol is required to guarantee the consistency of measurements. In this paper, we review various statistical techniques that can be used in determining measurement consistency. The focus is on detecting a possible measurement bias and determining the robustness of the procedures to outliers. Materials and Methods We review correlation analysis, linear regression, Bland-Altman method, paired t-test, and analysis of variance (ANOVA). These techniques were applied to measurements, obtained by two raters, of head and neck structures from magnetic resonance images (MRI). Results The correlation analysis and the linear regression were shown to be insufficient for detecting measurement inconsistency. They are also very sensitive to outliers. The widely used Bland-Altman method is a visualization technique so it lacks the numerical quantification. The paired t-test tends to be sensitive to small measurement bias. On the other hand, ANOVA performs well even under small measurement bias. Conclusion In almost all cases, using only one method is insufficient and it is recommended to use several methods simultaneously. In general, ANOVA performs the best. PMID:18790405
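As a small companion to the review above, the following sketch (simulated two-rater measurements in millimetres; the bias and noise levels are assumptions) computes the paired t-test for a systematic rater bias together with the Bland-Altman bias and limits of agreement.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    truth = rng.uniform(20, 60, size=40)                  # 40 structures, assumed sizes (mm)
    rater1 = truth + rng.normal(0.0, 1.0, size=40)
    rater2 = truth + 0.8 + rng.normal(0.0, 1.0, size=40)  # rater 2 reads 0.8 mm high

    diff = rater2 - rater1
    t_stat, p_val = stats.ttest_rel(rater2, rater1)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)

    print(f"Paired t-test: t = {t_stat:.2f}, p = {p_val:.4f}")
    print(f"Bland-Altman bias = {bias:.2f} mm, "
          f"limits of agreement = {bias - half_width:.2f} to {bias + half_width:.2f} mm")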
Untargeted Identification of Wood Type-Specific Markers in Particulate Matter from Wood Combustion.
Weggler, Benedikt A; Ly-Verdu, Saray; Jennerwein, Maximilian; Sippula, Olli; Reda, Ahmed A; Orasche, Jürgen; Gröger, Thomas; Jokiniemi, Jorma; Zimmermann, Ralf
2016-09-20
Residential wood combustion emissions are one of the major global sources of particulate and gaseous organic pollutants. However, the detailed chemical compositions of these emissions are poorly characterized due to their highly complex molecular compositions, nonideal combustion conditions, and sample preparation steps. In this study, the particulate organic emissions from a masonry heater using three types of wood logs, namely, beech, birch, and spruce, were chemically characterized using thermal desorption in situ derivatization coupled to a GCxGC-ToF/MS system. Untargeted data analyses were performed using the comprehensive measurements. Univariate and multivariate chemometric tools, such as analysis of variance (ANOVA), principal component analysis (PCA), and ANOVA simultaneous component analysis (ASCA), were used to reduce the data to highly significant and wood type-specific features. This study reveals substances not previously considered in the literature as meaningful markers for differentiation among wood types.
Chen, Pei; Harnly, James M.; Lester, Gene E.
2013-01-01
Spectral fingerprints were acquired for Rio Red grapefruit using flow injection electrospray ionization with ion trap and time-of-flight mass spectrometry (FI-ESI-IT-MS and FI-ESI-TOF-MS). Rio Red grapefruits were harvested 3 times a year (early, mid, and late harvests) in 2005 and 2006 from conventionally and organically grown trees. Data analysis using analysis of variance principal component analysis (ANOVA-PCA) demonstrated that, for both MS systems, the chemical patterns were different as a function of farming mode (conventional vs organic), as well as growing year and time of harvest. This was visually obvious with PCA and was shown to be statistically significant using ANOVA. The spectral fingerprints provided a more inclusive view of the chemical composition of the grapefruit and extended previous conclusions regarding the chemical differences between conventionally and organically grown Rio Red grapefruit. PMID:20337420
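A minimal sketch of the ANOVA-PCA idea referenced above is given below: the mean-centred data matrix is split into factor-level mean matrices and residuals, and PCA is then run on one factor's effect matrix plus the residuals. The design, feature count and effect size are invented for illustration, and the interaction term is folded into the residuals for brevity.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
# Hypothetical fingerprint matrix: 24 samples x 40 features, crossed design of
# farming mode (conventional/organic) x harvest (early/mid/late), 4 reps per cell.
mode    = np.repeat(["conv", "org"], 12)
harvest = np.tile(np.repeat(["early", "mid", "late"], 4), 2)
X = rng.normal(0, 1, (24, 40))
X[mode == "org", :4] += 1.2            # small farming-mode effect on a few features

Xc = X - X.mean(axis=0)                # remove the grand mean

def factor_effect(labels):
    """Matrix of level means, one row per sample (the ANOVA 'effect' for a factor)."""
    out = np.zeros_like(Xc)
    for g in np.unique(labels):
        out[labels == g] = Xc[labels == g].mean(axis=0)
    return out

A = factor_effect(mode)                # farming-mode effect matrix
B = factor_effect(harvest)             # harvest-time effect matrix
E = Xc - A - B                         # residuals (interaction folded in, for brevity)

# ANOVA-PCA for the farming-mode factor: PCA of its effect matrix plus the residuals.
scores = PCA(n_components=2).fit_transform(A + E)
print("mean PC1, conventional:", scores[mode == "conv", 0].mean())
print("mean PC1, organic:     ", scores[mode == "org", 0].mean())
```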
Boruah, Sourabh; Subit, Damien L; Paskoff, Glenn R; Shender, Barry S; Crandall, Jeff R; Salzar, Robert S
2017-01-01
The strength and compliance of the dense cortical layers of the human skull have been examined since the beginning of the 20th century, with the wide range in observed mechanical properties attributed to natural biological variance. Since this variance may be explained by differences in the structural arrangement of bone tissue, micro-computed tomography (µCT) was used in conjunction with mechanical testing to study the relationship between the microstructure of human skull cortical coupons and their mechanical response. Ninety-seven bone samples were machined from the cortical tables of the calvaria of ten fresh post-mortem human surrogates and tested in dynamic tension until failure. A linear response between stress and strain was observed until close to failure, which occurred at 0.6% strain on average. The effective modulus of elasticity for the coupons was 12.01 ± 3.28 GPa. Porosity of the test specimens, determined from µCT, could explain only 51% of the variation of their effective elastic modulus. Finite element (FE) models of the tested specimens built from µCT images indicated that modeling the microstructural arrangement of the bone, in addition to the porosity, led to a marginal improvement of the coefficient of determination to 54%. The modulus of skull cortical bone for an element size of 50 µm was estimated to be 19 GPa on average. Unlike the load-bearing bones of the body, almost half of the variance in the mechanical properties of cortical bone from the skull may be attributed to differences at the sub-osteon (< 50 µm) level. ANOVA tests indicated that effective failure stress and strain varied significantly between the frontal and parietal bones, while the bone phase modulus was different for the superior and inferior aspects of the calvarium. The micro FE models did not indicate any anisotropy attributable to the pores observable under µCT. Published by Elsevier Ltd.
Zhang, Yun-jian; Li, Qiang; Zhang, Yu-xiu; Wang, Dan; Xing, Jian-min
2012-01-01
Succinic acid is considered as an important platform chemical. Succinic acid fermentation with Actinobacillus succinogenes strain BE-1 was optimized by central composite design (CCD) using a response surface methodology (RSM). The optimized production of succinic acid was predicted and the interactive effects between glucose, yeast extract, and magnesium carbonate were investigated. As a result, a model for predicting the concentration of succinic acid production was developed. The accuracy of the model was confirmed by the analysis of variance (ANOVA), and the validity was further proved by verification experiments showing that percentage errors between actual and predicted values varied from 3.02% to 6.38%. In addition, it was observed that the interactive effect between yeast extract and magnesium carbonate was statistically significant. In conclusion, RSM is an effective and useful method for optimizing the medium components and investigating the interactive effects, and can provide valuable information for succinic acid scale-up fermentation using A. succinogenes strain BE-1. PMID:22302423
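The CCD/RSM analysis referred to above amounts to fitting a full quadratic response surface and testing its terms with ANOVA. The sketch below uses a hypothetical coded three-factor central composite design; the factor names mirror the abstract, but the coded runs and titres are illustrative rather than the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical coded CCD runs for glucose, yeast extract and MgCO3, with the
# measured succinic acid titre (g/L). Values are illustrative only.
data = pd.DataFrame({
    "glc": [-1, 1, -1, 1, -1, 1, -1, 1, 0, 0, 0, 0, -1.68, 1.68, 0],
    "ye":  [-1, -1, 1, 1, -1, -1, 1, 1, 0, 0, -1.68, 1.68, 0, 0, 0],
    "mg":  [-1, -1, -1, -1, 1, 1, 1, 1, -1.68, 1.68, 0, 0, 0, 0, 0],
    "sa":  [28, 35, 30, 41, 33, 44, 38, 52, 36, 47, 34, 45, 31, 46, 49],
})

# Full quadratic response surface: linear, two-way interaction and squared terms.
model = smf.ols("sa ~ (glc + ye + mg)**2 + I(glc**2) + I(ye**2) + I(mg**2)",
                data=data).fit()
print(anova_lm(model, typ=2))   # significance of each term
print(model.params)             # coefficients of the fitted surface
```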
Exploring the Theory of Planned Behavior to Explain Sugar-Sweetened Beverage Consumption
Estabrooks, Paul; Davy, Brenda; Chen, Yvonnes; You, Wendy
2011-01-01
Objective To describe sugar-sweetened beverage (SSB) consumption, establish psychometric properties and utility of a Theory of Planned Behavior (TPB) instrument for SSB consumption. Methods This cross-sectional survey included 119 southwest Virginia participants. Respondents were majority female (66%), white (89%), ≤ high school education (79%), and averaged 41.4 (±13.5) years. A validated beverage questionnaire was used to measure SSB. Eleven TPB constructs were assessed with a 56-item instrument. Analyses included descriptive statistics, one-way ANOVAs, Cronbach alphas, and multiple regressions. Results Sugar-sweetened beverage intake averaged 457 (±430) kilocalories/day. The TPB model provided a moderate explanation of SSB intake (R2=0.38; F=13.10, P<0.01). Behavioral intentions had the strongest relationships with SSB consumption, followed by attitudes, perceived behavioral control, and subjective norms. The six belief constructs did not predict significant variance in the models. Conclusions and Implications Future efforts to comprehensively develop and implement interventions guided by the TPB hold promise for reducing SSB intake. PMID:22154130
Prakash Maran, J; Sivakumar, V; Thirugnanasambandham, K; Kandasamy, S
2013-11-01
The present study investigates the influence of composition (content of maize starch (1-3 g), sorbitol (0.5-1.0 ml), agar (0.5-1.0 g) and tween-80 (0.1-0.5 ml)) on the mechanical properties (tensile strength, elongation, Young's modulus, puncture force and puncture deformation) of maize starch-based edible films using a four-factor, three-level Box-Behnken design. The edible films were obtained by the casting method. The results showed that tween-80 increases the permeation of sorbitol into the polymer matrix. An increasing concentration of sorbitol (owing to its hydrophilic nature and plasticizing effect) decreases the tensile strength, Young's modulus and puncture force of the films. The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models were obtained for all responses with high R(2) values (R(2)>0.95). 3D response surface plots were constructed to study the relationship between process variables and the responses. Copyright © 2013 Elsevier B.V. All rights reserved.
Rainfall Threshold Assessment Corresponding to the Maximum Allowable Turbidity for Source Water.
Fan, Shu-Kai S; Kuan, Wen-Hui; Fan, Chihhao; Chen, Chiu-Yang
2016-12-01
This study aims to assess the upstream rainfall thresholds corresponding to the maximum allowable turbidity of source water, using monitoring data and artificial neural network computation. The Taipei Water Source Domain was selected as the study area, and the upstream rainfall records were collected for statistical analysis. Using analysis of variance (ANOVA), the cumulative rainfall records of one-day Ping-lin, two-day Ping-lin, two-day Tong-hou, one-day Guie-shan, and one-day Tai-ping (rainfall in the previous 24 or 48 hours at the named weather stations) were found to be the five most significant parameters for downstream turbidity development. An artificial neural network model was constructed to predict the downstream turbidity in the area investigated. The observed and model-calculated turbidity data were applied to assess the rainfall thresholds in the studied area. By setting preselected turbidity criteria, the upstream rainfall thresholds for these statistically determined rain gauge stations were calculated.
Chemometric simultaneous determination of Sofosbuvir and Ledipasvir in pharmaceutical dosage form
NASA Astrophysics Data System (ADS)
Khalili, Mahsa; Sohrabi, Mahmoud Reza; Mirzabeygi, Vahid; Torabi Ziaratgahi, Nahid
2018-04-01
Partial least squares (PLS), different families of continuous wavelet transform (CWT), and first derivative spectrophotometry (DS) techniques were studied for quantification of Sofosbuvir (SFB) and Ledipasvir (LDV) simultaneously without separation step. The components were dissolved in Acetonitrile and the spectral behaviors were evaluated in the range of 200 to 400 nm. The ultraviolet (UV) absorbance of LDV exhibits no interferences between 300 and 400 nm and it was decided to predict the LDV amount through the classic spectrophotometry (CS) method in this spectral region as well. Data matrix of concentrations and calibrated models were developed, and then by applying a validation set the accuracy and precision of each model were studied. Actual concentrations versus predicted concentrations plotted and good correlation coefficients by each method resulted. Pharmaceutical dosage form was quantified by developed methods and the results were compared with the High Performance Liquid Chromatography (HPLC) reference method. Analysis Of Variance (ANOVA) in 95% confidence level showed no significant differences among methods.
Javadi, Najvan; Ashtiani, Farzin Zokaee; Fouladitajar, Amir; Zenooz, Alireza Moosavi
2014-06-01
Response surface methodology (RSM) and central composite design (CCD) were applied for modeling and optimization of cross-flow microfiltration of Chlorella sp. suspension. The effects of operating conditions, namely transmembrane pressure (TMP), feed flow rate (Qf) and optical density of feed suspension (ODf), on the permeate flux and their interactions were determined. Analysis of variance (ANOVA) was performed to test the significance of response surface model. The effect of gas sparging technique and different gas-liquid two phase flow regimes on the permeate flux was also investigated. Maximum flux enhancement was 61% and 15% for Chlorella sp. with optical densities of 1.0 and 3.0, respectively. These results indicated that gas sparging technique was more efficient in low concentration microalgae microfiltration in which up to 60% enhancement was achieved in slug flow pattern. Additionally, variations in the transmission of exopolysaccharides (EPS) and its effects on the fouling phenomenon were evaluated. Copyright © 2014 Elsevier Ltd. All rights reserved.
2011-01-01
Background Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Methods Simulation data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Results Linear model ANOVA with Monte Carlo simulation and mixed models with correlated error terms with NDNQI examples showed no substantial differences on statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. Conclusions The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects. PMID:21854614
Hou, Qingjiang; Mahnken, Jonathan D; Gajewski, Byron J; Dunton, Nancy
2011-08-19
Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Simulation data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Linear model ANOVA with Monte Carlo simulation and mixed models with correlated error terms with NDNQI examples showed no substantial differences on statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects.
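As a compact illustration of the workflow both records describe, the sketch below estimates the Box-Cox transformation parameter from a skewed outcome first and then runs the factorial ANOVA on the transformed values; the 3 x 4 design, effect size and distributions are simulated assumptions, not NDNQI data.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
# Hypothetical right-skewed outcome (e.g., a fall-rate-like measure) in a 3 x 4 design.
df = pd.DataFrame({
    "a": np.repeat(["a1", "a2", "a3"], 40),
    "b": np.tile(np.repeat(["b1", "b2", "b3", "b4"], 10), 3),
})
df["y"] = (rng.lognormal(mean=1.0, sigma=0.6, size=len(df))
           + (df["a"] == "a3") * 0.8)        # small structural effect on one level

# (a) Estimate the Box-Cox lambda from the outcome alone, ignoring the factors...
y_t, lam = stats.boxcox(df["y"])
df["y_t"] = y_t
print("lambda =", lam)

# ...then (b) run the factorial ANOVA on the transformed outcome.
model = smf.ols("y_t ~ C(a) * C(b)", data=df).fit()
print(anova_lm(model, typ=2))
```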
Statistical modeling of an integrated boiler for coal fired thermal power plant.
Chandrasekharan, Sreepradha; Panda, Rames Chandra; Swaminathan, Bhuvaneswari Natrajan
2017-06-01
Coal-fired thermal power plants play a major role in power production worldwide because coal is available in abundance. Many of the existing plants are based on subcritical technology, which produces power with an efficiency of around 33%, whereas newer plants are built on either supercritical or ultra-supercritical technology, whose efficiency can reach 50%. The main objective of this work is to enhance the efficiency of existing subcritical power plants to compensate for the increasing demand. To achieve this objective, statistical models of the boiler units, namely the economizer, drum and superheater, are first developed. The effectiveness of the developed models is tested using analysis methods such as R2 analysis and ANOVA (analysis of variance). The dependence of the process variable (temperature) on the different manipulated variables is analyzed in the paper. Validation of the models is provided together with an error analysis. Response surface methodology (RSM) supported by design of experiments (DOE) is implemented to optimize the operating parameters. The individual models, along with the integrated model, are used to study and design predictive control for the coal-fired thermal power plant.
Al-Farhan, Haya M; Al-Otaibi, Wafa’a Majed
2012-01-01
Purpose To compare the precision of central corneal thickness (CCT) measurements taken with the handheld ultrasound pachymeter (USP), ultrasound biomicroscopy (UBM), and the Artemis-2 very high frequency ultrasound scanner (VHFUS) on normal subjects. Design Prospective study. Methods One eye from each of 61 normal subjects was randomly selected for this study. The measurements of the CCT were taken with the USP, VHFUS, and UBM. Results were compared statistically using repeated-measures analysis of variance (ANOVA), Pearson's correlation coefficient, and limits of agreement. Results The average CCT (± standard deviation) was 530.1 ± 30.5 μm, 554.9 ± 31.7 μm, and 559.5 ± 30.7 μm for UBM, VHFUS, and USP respectively. The intraobserver repeatability analyses of variance were not significant for USP, UBM, and VHFUS (P = 0.17, 0.19, and 0.37, respectively). Repeated-measures ANOVA showed a significant difference between the three different methods of measuring CCT (P = 0.0001). The ANOVA test revealed no statistically significant difference between USP and VHFUS (P > 0.05), yet statistically significant differences for UBM versus USP and UBM versus VHFUS (P < 0.001). There were high correlations between the three instruments (P < 0.0001). The mean differences (and upper/lower limits of agreement) for CCT measurements were 29.4 ± 14.3 (2.7/56), 4.6 ± 8.6 (−14.7/23.8), and −24.8 ± 13.1 (−50.4/0.8) for USP versus UBM, USP versus VHFUS, and UBM versus VHFUS, respectively. Conclusion The UBM produces CCT measurements that vary significantly from those returned by the USP and the VHFUS, suggesting that the UBM may not be used interchangeably with either instrument for monitoring CCT in the clinical setting. PMID:22848145
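The within-subject device comparison can be outlined with a one-factor repeated-measures ANOVA; the sketch below uses simulated CCT readings for three devices, with sample size, offsets and noise chosen only for illustration, not taken from the study.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
# Hypothetical CCT readings (micrometres) for 20 eyes measured with three devices.
n = 20
base = rng.normal(545, 30, n)
long_df = pd.DataFrame({
    "eye":    np.tile(np.arange(n), 3),
    "device": np.repeat(["UBM", "VHFUS", "USP"], n),
    "cct":    np.concatenate([base - 25 + rng.normal(0, 8, n),
                              base + rng.normal(0, 8, n),
                              base + 4 + rng.normal(0, 8, n)]),
})

# One-within-factor repeated-measures ANOVA (device as the within-subject factor).
print(AnovaRM(long_df, depvar="cct", subject="eye", within=["device"]).fit())
```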
Geurts, Brigitte P; Neerincx, Anne H; Bertrand, Samuel; Leemans, Manja A A P; Postma, Geert J; Wolfender, Jean-Luc; Cristescu, Simona M; Buydens, Lutgarde M C; Jansen, Jeroen J
2017-04-22
Revealing the biochemistry associated to micro-organismal interspecies interactions is highly relevant for many purposes. Each pathogen has a characteristic metabolic fingerprint that allows identification based on their unique multivariate biochemistry. When pathogen species come into mutual contact, their co-culture will display a chemistry that may be attributed both to mixing of the characteristic chemistries of the mono-cultures and to competition between the pathogens. Therefore, investigating pathogen development in a polymicrobial environment requires dedicated chemometric methods to untangle and focus upon these sources of variation. The multivariate data analysis method Projected Orthogonalised Chemical Encounter Monitoring (POCHEMON) is dedicated to highlight metabolites characteristic for the interaction of two micro-organisms in co-culture. However, this approach is currently limited to a single time-point, while development of polymicrobial interactions may be highly dynamic. A well-known multivariate implementation of Analysis of Variance (ANOVA) uses Principal Component Analysis (ANOVA-PCA). This allows the overall dynamics to be separated from the pathogen-specific chemistry to analyse the contributions of both aspects separately. For this reason, we propose to integrate ANOVA-PCA with the POCHEMON approach to disentangle the pathogen dynamics and the specific biochemistry in interspecies interactions. Two complementary case studies show great potential for both liquid and gas chromatography - mass spectrometry to reveal novel information on chemistry specific to interspecies interaction during pathogen development. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
Randomized trial of the effect of contact lens wear on self-perception in children.
Walline, Jeffrey J; Jones, Lisa A; Sinnott, Loraine; Chitkara, Monica; Coffey, Bradley; Jackson, John Mark; Manny, Ruth E; Rah, Marjorie J; Prinstein, Mitchell J
2009-03-01
To determine whether contact lens wear affects children's self-perceptions. The Adolescent and Child Health Initiative to Encourage Vision Empowerment Study was a randomized, single-masked trial conducted at five clinical centers in the United States. Subjects were 8- to 11-year-old myopic children randomly assigned to wear spectacles (n = 237) or soft contact lenses (n = 247) for 3 years. The primary endpoint was the Self-Perception Profile for Children Global Self-Worth scale. Secondary outcomes included the Physical Appearance, Athletic Competence, Scholastic Competence, Behavioral Conduct, and Social Acceptance Self-Perception Profile for Children scales. Global self-worth was not affected by contact lens wear [analysis of variance (ANOVA), difference = 0.06; 95% CI, -0.004 to 0.117]. Physical appearance (ANOVA, difference = 0.15; 95% CI, 0.07 to 0.22), athletic competence (ANOVA, difference = 0.08; 95% CI, 0.01 to 0.15), and social acceptance (ANOVA, difference = 0.10; 95% CI, 0.03 to 0.17) were all greater for contact lens wearers. Although contact lens wear does not affect the global self-perceptions of 8- to 11-year-old myopic children, their physical appearance, athletic competence, and social acceptance self-perceptions are likely to improve with contact lens wear. Eye care practitioners should consider the social and visual benefits of contact lens wear when choosing the most appropriate vision correction modality for children as young as 8 years of age.
Seidel, Clemens; Lautenschläger, Christine; Dunst, Jürgen; Müller, Arndt-Christian
2012-04-20
To investigate whether different conditions of DNA structure and radiation treatment could modify heterogeneity of response, and additionally to study variance as a potential parameter of heterogeneity for radiosensitivity testing. Two hundred leukocytes per sample from healthy donors were split into four groups. I: Intact chromatin structure; II: Nucleoids of histone-depleted DNA; III: Nucleoids of histone-depleted DNA with 90 mM DMSO as antioxidant. Responses to single (I-III) and double (IV) irradiation with 4 Gy, together with repair kinetics, were evaluated using %Tail-DNA. Heterogeneity of DNA damage was determined by calculation of the variance of DNA damage (V) and the mean variance (Mvar); mutual comparisons were done by one-way analysis of variance (ANOVA). Heterogeneity of initial DNA damage (I, 0 min repair) increased without histones (II). The absence of histones was balanced by the addition of antioxidants (III). Repair reduced heterogeneity of all samples (with and without irradiation). However, double irradiation plus repair led to a higher level of heterogeneity, distinguishable from single irradiation and repair in intact cells. An increase of mean DNA damage was associated with a similarly elevated variance of DNA damage (r = +0.88). Heterogeneity of DNA damage can be modified by histone level, antioxidant concentration, repair and radiation dose, and was positively correlated with DNA damage. Experimental conditions might be optimized by reducing the scatter of comet assay data through repair and antioxidants, potentially allowing better discrimination of small differences. The amount of heterogeneity measured by variance might be an additional useful parameter to characterize radiosensitivity.
A comparative study of inelastic scattering models at energy levels ranging from 0.5 keV to 10 keV
NASA Astrophysics Data System (ADS)
Hu, Chia-Yu; Lin, Chun-Hung
2017-03-01
Six models, including a single-scattering model, four hybrid models, and one dielectric function model, were evaluated using Monte Carlo simulations for aluminum and copper at incident beam energies ranging from 0.5 keV to 10 keV. The inelastic mean free path, mean energy loss per unit path length, and backscattering coefficients obtained by these models are compared and discussed to understand the merits of the various models. ANOVA (analysis of variance) statistical models were used to quantify the effects of inelastic cross section and energy loss models on the basis of the deviation of the simulated results from the experimental data for the inelastic mean free path, the mean energy loss per unit path length, and the backscattering coefficient, as well as their correlations. This is believed to be the first application of ANOVA models to evaluating inelastic electron beam scattering models. The approach is an improvement over the traditional approach, which involves only visual estimation of the difference between the experimental data and simulated results. The data suggest that the optimization of the effective electron number per atom, binding energy, and cut-off energy of an inelastic model for different materials at different beam energies is more important than the selection of inelastic models for Monte Carlo electron scattering simulation. During the simulations, parameters in the equations should be tuned according to different materials for different beam energies rather than merely employing default parameters for an arbitrary material. Energy loss models and cross-section formulas are not the main factors influencing energy loss. Comparison of the deviation of the simulated results from the experimental data shows a significant correlation (p < 0.05) between the backscattering coefficient and energy loss per unit path length. The inclusion of backscattering electrons generated by both primary and secondary electrons for backscattering coefficient simulation is recommended for elements with high atomic numbers. In hybrid models, introducing the inner shell ionization model improves the accuracy of simulated results.
Crock, J.G.; Severson, R.C.; Gough, L.P.
1992-01-01
Recent investigations on the Kenai Peninsula had two major objectives: (1) to establish elemental baseline concentrations ranges for native vegetation and soils; and, (2) to determine the sampling density required for preparing stable regional geochemical maps for various elements in native plants and soils. These objectives were accomplished using an unbalanced, nested analysis-of-variance (ANOVA) barbell sampling design. Hylocomium splendens (Hedw.) BSG (feather moss, whole plant), Picea glauca (Moench) Voss (white spruce, twigs and needles), and soil horizons (02 and C) were collected and analyzed for major and trace total element concentrations. Using geometric means and geometric deviations, expected baseline ranges for elements were calculated. Results of the ANOVA show that intensive soil or plant sampling is needed to reliably map the geochemistry of the area, due to large local variability. For example, producing reliable element maps of feather moss using a 50 km cell (at 95% probability) would require sampling densities of from 4 samples per cell for Al, Co, Fe, La, Li, and V, to more than 15 samples per cell for Cu, Pb, Se, and Zn.
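The baseline-range calculation described above can be sketched as follows, assuming the common convention for lognormally distributed geochemical data of reporting the expected 95% range as GM/GSD^2 to GM x GSD^2; the concentrations are invented, not Kenai Peninsula data.

```python
import numpy as np

# Hypothetical trace-element concentrations (ppm) in moss samples; values illustrative.
conc = np.array([3.1, 4.8, 2.6, 5.5, 3.9, 4.2, 2.9, 6.1, 3.4, 4.6])

log_c = np.log(conc)
gm  = np.exp(log_c.mean())          # geometric mean
gsd = np.exp(log_c.std(ddof=1))     # geometric standard deviation

# Expected 95% baseline range under the GM / GSD^2 to GM * GSD^2 convention.
print("baseline range: %.2f - %.2f ppm" % (gm / gsd**2, gm * gsd**2))
```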
2007-05-01
and post-synaptic dopamine biosynthesis, uptake and receptor expression as well as glutamatergic synapses. This hypothesis will be tested through...0.05) compared to mice at 7 days (9.6 ± 3.2%) or 30 days post-MPTP (16.5 ± 7.3%). The tail suspension test showed a significant increase in percent of...were compared using one-way analysis of variance (ANOVA), followed by the Fisher post hoc test for comparison of multiple means for the following
Reisenwitz, T H; Wimbish, G J
1996-01-01
The capsule dosage form in nonprescription pharmaceuticals remains one of the most vulnerable to product tampering. This study examines consumer preference toward three solid oral dosage forms (capsules, caplets, and tablets) in nonprescription products. Thirteen independent variables representing dosage form attributes are measured on semantic differential scales. The data are analyzed using analysis of variance (ANOVA) and factor analysis. Implications for the pharmaceutical marketer are noted. Future directions for research are also outlined.
GLIMMPSE Lite: Calculating Power and Sample Size on Smartphone Devices
Munjal, Aarti; Sakhadeo, Uttara R.; Muller, Keith E.; Glueck, Deborah H.; Kreidler, Sarah M.
2014-01-01
Researchers seeking to develop complex statistical applications for mobile devices face a common set of difficult implementation issues. In this work, we discuss general solutions to the design challenges. We demonstrate the utility of the solutions for a free mobile application designed to provide power and sample size calculations for univariate, one-way analysis of variance (ANOVA), GLIMMPSE Lite. Our design decisions provide a guide for other scientists seeking to produce statistical software for mobile platforms. PMID:25541688
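GLIMMPSE Lite itself is not shown here, but a roughly equivalent one-way ANOVA power and sample-size calculation can be sketched with statsmodels; the effect size, alpha and power below are conventional placeholder values rather than outputs of the application.

```python
from statsmodels.stats.power import FTestAnovaPower

# Total sample size for a one-way ANOVA with 4 groups, medium effect size
# (Cohen's f = 0.25), alpha = 0.05 and 80% power.
analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=0.25, k_groups=4, alpha=0.05, power=0.80)
print("total N (approx.):", round(n_total))

# Conversely, the power achieved with 100 participants in total:
print("power (approx.):", analysis.power(effect_size=0.25, nobs=100,
                                          alpha=0.05, k_groups=4))
```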
Valois, Robert F; Zullig, Keith J
2013-10-01
Preliminary data were collected to evaluate the psychometric properties of an emotional self-efficacy (ESE) measure in a sample of 3836 public high school adolescents who completed the Centers for Disease Control and Prevention (CDC) Youth Risk Behavior Survey in South Carolina. Principal axis factor analysis was followed by a 4-way between-groups analysis of variance (ANOVA) testing for differences in total score means on selected demographic estimates and their interactions. Relationships between total score and selected risk behaviors were examined through a series of 1-way ANOVA procedures and subsequent Tukey Honest Significant Difference (HSD) tests. Factor analysis results suggested that a 1-factor model best explained factor structure of the scale items (factor loadings .64 to .71, eigenvalue = 3.24, h(2) = .46). Girls reported a significantly higher mean total ESE rating than boys; White students reported a significantly higher mean total ESE rating than Black students. Statistically significant lower mean total ESE ratings were also noted for those who reported physical fighting, lifetime alcohol use, and sexual intercourse. This 7-item scale is a reliable measure and could aid school health researchers and mental health practitioners in psychosocial screening and as an outcome of social and emotional learning as a brief measure of adolescent ESE. © 2013, American School Health Association.
Rahman, A.; Tsai, F.T.-C.; White, C.D.; Willson, C.S.
2008-01-01
This study investigates capture zone uncertainty that relates to the coupled semivariogram uncertainty of hydrogeological and geophysical data. Semivariogram uncertainty is represented by the uncertainty in structural parameters (range, sill, and nugget). We used the beta distribution function to derive the prior distributions of structural parameters. The probability distributions of structural parameters were further updated through the Bayesian approach with the Gaussian likelihood functions. Cokriging of noncollocated pumping test data and electrical resistivity data was conducted to better estimate hydraulic conductivity through autosemivariograms and pseudo-cross-semivariogram. Sensitivities of capture zone variability with respect to the spatial variability of hydraulic conductivity, porosity and aquifer thickness were analyzed using ANOVA. The proposed methodology was applied to the analysis of capture zone uncertainty at the Chicot aquifer in Southwestern Louisiana, where a regional groundwater flow model was developed. MODFLOW-MODPATH was adopted to delineate the capture zone. The ANOVA results showed that both capture zone area and compactness were sensitive to hydraulic conductivity variation. We concluded that the capture zone uncertainty due to the semivariogram uncertainty is much higher than that due to the kriging uncertainty for given semivariograms. In other words, the sole use of conditional variances of kriging may greatly underestimate the flow response uncertainty. Semivariogram uncertainty should also be taken into account in the uncertainty analysis. © 2008 ASCE.
NASA Astrophysics Data System (ADS)
Naik, Deepak kumar; Maity, K. P.
2018-03-01
Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting extremely high-strength materials that are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with better dimensional accuracy in less time. This research work presents the effect of process parameters on the dimensional accuracy of the PAC process. The input process parameters selected were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness was used as the workpiece; stainless steel is a very extensively used material in the manufacturing industries. Linear dimensions were measured following Taguchi's L16 orthogonal array design approach. Three levels were selected for each process parameter, and a clockwise cut direction was followed in all experiments. The measurements obtained were then analyzed. Analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter. The ANOVA reveals the effect of each input process parameter on the linear dimension along the X axis, and the results identify the optimal process parameter settings for this dimension. The investigation clearly shows that a specific range of the input process parameters achieves improved machinability.
Lazic, Stanley E
2008-07-21
Analysis of variance (ANOVA) is a common statistical technique in physiological research, and often one or more of the independent/predictor variables such as dose, time, or age, can be treated as a continuous, rather than a categorical variable during analysis - even if subjects were randomly assigned to treatment groups. While this is not common, there are a number of advantages of such an approach, including greater statistical power due to increased precision, a simpler and more informative interpretation of the results, greater parsimony, and transformation of the predictor variable is possible. An example is given from an experiment where rats were randomly assigned to receive either 0, 60, 180, or 240 mg/L of fluoxetine in their drinking water, with performance on the forced swim test as the outcome measure. Dose was treated as either a categorical or continuous variable during analysis, with the latter analysis leading to a more powerful test (p = 0.021 vs. p = 0.159). This will be true in general, and the reasons for this are discussed. There are many advantages to treating variables as continuous numeric variables if the data allow this, and this should be employed more often in experimental biology. Failure to use the optimal analysis runs the risk of missing significant effects or relationships.
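A simulated illustration of the comparison the author describes is given below: the same dose levels are analysed once as a categorical factor (a 3-df omnibus test) and once as a continuous covariate (a 1-df trend test, usually more powerful). The dose levels mirror the abstract, but the immobility values are invented, so the p-values will not reproduce those reported.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
# Hypothetical forced-swim immobility times (s) at four fluoxetine doses (mg/L).
df = pd.DataFrame({"dose": np.repeat([0, 60, 180, 240], 10)})
df["immobility"] = 180 - 0.08 * df["dose"] + rng.normal(0, 25, len(df))

# Dose as a categorical factor: omnibus test of "any difference between groups".
cat_model = smf.ols("immobility ~ C(dose)", data=df).fit()
print(anova_lm(cat_model, typ=2))

# Dose as a continuous covariate: single-df test of the linear trend.
num_model = smf.ols("immobility ~ dose", data=df).fit()
print(num_model.summary().tables[1])
```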
Systematic methods for knowledge acquisition and expert system development
NASA Technical Reports Server (NTRS)
Belkin, Brenda L.; Stengel, Robert F.
1991-01-01
Nine cooperating rule-based systems, collectively called AUTOCREW, were designed to automate functions and decisions associated with a combat aircraft's subsystem. The organization of tasks within each system is described; performance metrics were developed to evaluate the workload of each rule base, and to assess the cooperation between the rule-bases. Each AUTOCREW subsystem is composed of several expert systems that perform specific tasks. AUTOCREW's NAVIGATOR was analyzed in detail to understand the difficulties involved in designing the system and to identify tools and methodologies that ease development. The NAVIGATOR determines optimal navigation strategies from a set of available sensors. A Navigation Sensor Management (NSM) expert system was systematically designed from Kalman filter covariance data; four ground-based, a satellite-based, and two on-board INS-aiding sensors were modeled and simulated to aid an INS. The NSM Expert was developed using the Analysis of Variance (ANOVA) and the ID3 algorithm. Navigation strategy selection is based on an RSS position error decision metric, which is computed from the covariance data. Results show that the NSM Expert predicts position error correctly between 45 and 100 percent of the time for a specified navaid configuration and aircraft trajectory. The NSM Expert adapts to new situations, and provides reasonable estimates of hybrid performance. The systematic nature of the ANOVA/ID3 method makes it broadly applicable to expert system design when experimental or simulation data is available.
Bureau, A; Lahet, J-J; Lenfant, F; Bouyer, F; Petitjean, M; Chaillot, B; Freysz, M
2005-08-01
The aggression of erythrocytes by an oxidative stress induces hemolysis. This paper aims to validate a model of erythrocytes in terms of the composition of the phosphate buffer solution and the concentration of a well-known oxidant, AAPH. Three compositions of phosphate buffer solution were combined with three concentrations of oxidant. The influence of these two parameters on hemolysis was studied independently by analysis of variance, or by a Kruskal-Wallis test when ANOVA was not applicable. The hemolysis rate increases with time at a fixed oxidant concentration, but is not influenced by the composition of the buffer solution. The highest hemolysis rate, 90%, was only reached within 2 h with the highest oxidant concentration. If this oxidant concentration is retained, the lowest buffer concentration can be eliminated because it gives significantly less hemolysis, and the highest buffer concentration can be chosen on account of its better precision for a similar hemolysis compared with the intermediate buffer. We hope to study the effect of antioxidant agents with such a model of erythrocytes.
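A minimal sketch of the parametric/non-parametric pairing the paper relies on (ANOVA where its assumptions hold, Kruskal-Wallis otherwise), with simulated hemolysis rates for three oxidant concentrations; the values are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical hemolysis rates (%) under three oxidant (AAPH) concentrations.
low, mid, high = (rng.normal(m, 6, 12) for m in (20, 45, 85))

# Parametric one-way ANOVA, used when its assumptions hold...
print("ANOVA:", stats.f_oneway(low, mid, high))

# ...and the Kruskal-Wallis test as the non-parametric fallback.
print("Kruskal-Wallis:", stats.kruskal(low, mid, high))
```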
NASA Astrophysics Data System (ADS)
Varun, Sajja; Reddy, Kalakada Bhargav Bal; Vardhan Reddy, R. R. Vishnu
2016-09-01
In this research work, a multi-response optimization technique has been developed using traditional desirability analysis and a non-traditional particle swarm optimization technique (for different customers' priorities) in wire electrical discharge machining (WEDM). Monel 400 was selected as the work material for experimentation. The effects of key process parameters such as pulse-on time (TON), pulse-off time (TOFF), peak current (IP) and wire feed (WF) on material removal rate (MRR) and surface roughness (SR) in the WEDM operation were investigated. Further, the responses MRR and SR were modelled empirically through regression analysis. The developed models can be used by machinists to predict MRR and SR over a wide range of input parameters. The optimization of the multiple responses was carried out to satisfy the priorities of multiple users using the Taguchi-desirability function method and the particle swarm optimization technique. Analysis of variance (ANOVA) was also applied to investigate the effect of the influential parameters. Finally, confirmation experiments were conducted for the optimal set of machining parameters, and the improvement was verified.
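The desirability side of the multi-response optimization can be illustrated with a generic Derringer-style composite desirability; the response values, acceptable ranges and weights below are assumptions for illustration, and the particle swarm step is not shown.

```python
import numpy as np

# Hypothetical predicted responses for a set of candidate parameter settings.
mrr = np.array([4.2, 5.6, 6.1, 3.8])    # material removal rate (larger is better)
sr  = np.array([2.9, 3.4, 2.2, 1.9])    # surface roughness (smaller is better)

def d_larger(y, lo, hi):   # individual desirability, larger-the-better
    return np.clip((y - lo) / (hi - lo), 0, 1)

def d_smaller(y, lo, hi):  # individual desirability, smaller-the-better
    return np.clip((hi - y) / (hi - lo), 0, 1)

# Composite desirability as a weighted geometric mean; weights reflect user priority.
w_mrr, w_sr = 0.6, 0.4
D = d_larger(mrr, 3.5, 6.5) ** w_mrr * d_smaller(sr, 1.5, 3.5) ** w_sr
print("best setting index:", int(D.argmax()), "D =", round(float(D.max()), 3))
```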
Sarrai, Abd Elaziz; Hanini, Salah; Merzouk, Nachida Kasbadji; Tassalit, Djilali; Szabó, Tibor; Hernádi, Klára; Nagy, László
2016-01-01
The feasibility of the application of the Photo-Fenton process in the treatment of aqueous solution contaminated by Tylosin antibiotic was evaluated. The Response Surface Methodology (RSM) based on Central Composite Design (CCD) was used to evaluate and optimize the effect of hydrogen peroxide, ferrous ion concentration and initial pH as independent variables on the total organic carbon (TOC) removal as the response function. The interaction effects and optimal parameters were obtained by using MODDE software. The significance of the independent variables and their interactions was tested by means of analysis of variance (ANOVA) with a 95% confidence level. Results show that the concentration of the ferrous ion and pH were the main parameters affecting TOC removal, while peroxide concentration had a slight effect on the reaction. The optimum operating conditions to achieve maximum TOC removal were determined. The model prediction for maximum TOC removal was compared to the experimental result at optimal operating conditions. A good agreement between the model prediction and experimental results confirms the soundness of the developed model. PMID:28773551
Fanning, J; Porter, G; Awick, E A; Wójcicki, T R; Gothe, N P; Roberts, S A; Ehlers, D K; Motl, R W; McAuley, E
2016-06-01
In the present study, we examined the influence of a home-based, DVD-delivered exercise intervention on daily sedentary time and breaks in sedentary time in older adults. Between 2010 and 2012, older adults (i.e., aged 65 or older) residing in Illinois (N = 307) were randomized into a 6-month home-based, DVD-delivered exercise program (i.e., FlexToBa; FTB) or a waitlist control. Participants completed measurements prior to the first week (baseline), following the intervention period (month 6), and after a 6 month no-contact follow-up (month 12). Sedentary behavior was measured objectively using accelerometers for 7 consecutive days at each time point. Differences in daily sedentary time and breaks between groups and across the three time points were examined using mixed-factor analysis of variance (mixed ANOVA) and analysis of covariance (ANCOVA). Mixed ANOVA models revealed that daily minutes of sedentary time did not differ by group or time. The FTB condition, however, demonstrated a greater number of daily breaks in sedentary time relative to the control condition (p = .02). ANCOVA models revealed a non-significant effect favoring FTB at month 6, and a significant difference between groups at month 12 (p = .02). While overall sedentary time did not differ between groups, the DVD-delivered exercise intervention was effective for maintaining a greater number of breaks when compared with the control condition. Given the accumulating evidence emphasizing the importance of breaking up sedentary time, these findings have important implications for the design of future health behavior interventions.
Vina, Andres; Peters, Albert J.; Ji, Lei
2003-01-01
There is a global concern about the increase in atmospheric concentrations of greenhouse gases. One method being discussed to encourage greenhouse gas mitigation efforts is based on a trading system whereby carbon emitters can buy effective mitigation efforts from farmers implementing conservation tillage practices. These practices sequester carbon from the atmosphere, and such a trading system would require a low-cost and accurate method of verification. Remote sensing technology can offer such a verification technique. This paper is focused on the use of standard image processing procedures applied to a multispectral Ikonos image, to determine whether it is possible to validate that farmers have complied with agreements to implement conservation tillage practices. A principal component analysis (PCA) was performed in order to isolate image variance in cropped fields. Analyses of variance (ANOVA) statistical procedures were used to evaluate the capability of each Ikonos band and each principal component to discriminate between conventional and conservation tillage practices. A logistic regression model was implemented on the principal component most effective in discriminating between conventional and conservation tillage, in order to produce a map of the probability of conventional tillage. The Ikonos imagery, in combination with ground-reference information, proved to be a useful tool for verification of conservation tillage practices.
Perception of and satisfaction with the clinical learning environment among nursing students.
D'Souza, Melba Sheila; Karkada, Subrahmanya Nairy; Parahoo, Kader; Venkatesaperumal, Ramesh
2015-06-01
Clinical nursing education provides baccalaureate nursing students an opportunity to combine cognitive, psychomotor, and affective skills in the Middle East. The aim of the paper is to assess the satisfaction with and effectiveness of the clinical learning environment among nursing students in Oman. A cross-sectional descriptive design was used. A convenience sample consisting of 310 undergraduate nursing students was selected in a public school of nursing in Oman. Ethical approval was obtained from the Research and Ethics Committee, College of Nursing in 2011. A standardized, structured, validated and reliable Clinical Learning Environment Supervision Teacher Evaluation instrument was used. Informed consent was obtained from all the students. Data were analyzed with ANOVA and structural equation modeling. Satisfaction with the clinical learning environment (CLE) sub-dimensions was highly significant and had a positive relationship with the total clinical learning environment. In the path model, leadership style accounted for 35% of the total variance in satisfaction with the CLE, followed by clinical nurse commitment (variance = 28%) and patient relationships (R(2) = 27%). Higher age, GPA and completion of a greater number of clinical courses were significantly associated with satisfaction with the CLE among these students. Nurse educators can improve clinical learning placements by focusing on leadership style, the premises of learning and nursing care, the nurse teacher, and supervision, while integrating student, teacher and environmental factors. Hence the clinical learning environment is integral to students' learning and valuable in providing educational experiences. The CLE model provides information to nurse educators regarding best clinical practices for improving the CLE for BSN students. Copyright © 2015 Elsevier Ltd. All rights reserved.
Jha, Dilip Kumar; Vinithkumar, Nambali Valsalan; Sahu, Biraja Kumar; Dheenan, Palaiya Sukumaran; Das, Apurba Kumar; Begum, Mehmuna; Devi, Marimuthu Prashanthi; Kirubagaran, Ramalingam
2015-07-15
Chidiyatappu Bay is one of the least disturbed marine environments of Andaman & Nicobar Islands, the union territory of India. Oceanic flushing from southeast and northwest direction is prevalent in this bay. Further, anthropogenic activity is minimal in the adjoining environment. Considering the pristine nature of this bay, seawater samples collected from 12 sampling stations covering three seasons were analyzed. Principal Component Analysis (PCA) revealed 69.9% of total variance and exhibited strong factor loading for nitrite, chlorophyll a and phaeophytin. In addition, analysis of variance (ANOVA-one way), regression analysis, box-whisker plots and Geographical Information System based hot spot analysis further simplified and supported multivariate results. The results obtained are important to establish reference conditions for comparative study with other similar ecosystems in the region. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bone shape difference between control and osteochondral defect groups of the ankle joint.
Tümer, N; Blankevoort, L; van de Giessen, M; Terra, M P; de Jong, P A; Weinans, H; Tuijthof, G J M; Zadpoor, A A
2016-12-01
The etiology of osteochondral defects (OCDs), for which the ankle (talocrural) joint is one of the common sites, is not yet fully understood. In this study, we hypothesized that bone shape plays a role in development of OCDs. Therefore, we quantitatively compared the morphology of the talus and the distal tibia between an OCD group and a control group. The shape variations of the talus and distal tibia were described separately by constructing two statistical shape models (SSMs) based on the segmentation of the bones from ankle computed tomography (CT) scans obtained from control (i.e., 35 CT scans) and OCD (i.e., 37 CT scans) groups. The first five modes of shape variation for the SSM corresponding to each bone were statistically compared between control and OCD groups using analysis of variance (ANOVA) with Bonferroni correction for multiple comparisons. The first five modes of variation in the SSMs respectively represented 49% and 40% of the total variance of talus and tibia. Less than 5% of the variance per mode was described by the higher modes. Mode 5 of the talus (P = 0.004), primarily describing changes in the vertical neck angle, and Mode 1 of the tibia (P < 0.0001), representing variations at the medial malleolus, showed statistically significant differences between the control and OCD groups. Shape differences exist between control and OCD groups. This indicates that a geometry-modulated biomechanical behavior of the talocrural joint may be a risk factor for OCD. Copyright © 2016. Published by Elsevier Ltd.
Brauer, Chris J; Unmack, Peter J; Beheregaray, Luciano B
2017-12-01
Understanding whether small populations with low genetic diversity can respond to rapid environmental change via phenotypic plasticity is an outstanding research question in biology. RNA sequencing (RNA-seq) has recently provided the opportunity to examine variation in gene expression, a surrogate for phenotypic variation, in nonmodel species. We used a comparative RNA-seq approach to assess expression variation within and among adaptively divergent populations of a threatened freshwater fish, Nannoperca australis, found across a steep hydroclimatic gradient in the Murray-Darling Basin, Australia. These populations evolved under contrasting selective environments (e.g., dry/hot lowland; wet/cold upland) and represent opposite ends of the species' spectrum of genetic diversity and population size. We tested the hypothesis that environmental variation among isolated populations has driven the evolution of divergent expression at ecologically important genes using differential expression (DE) analysis and an anova-based comparative phylogenetic expression variance and evolution model framework based on 27,425 de novo assembled transcripts. Additionally, we tested whether gene expression variance within populations was correlated with levels of standing genetic diversity. We identified 290 DE candidate transcripts, 33 transcripts with evidence for high expression plasticity, and 50 candidates for divergent selection on gene expression after accounting for phylogenetic structure. Variance in gene expression appeared unrelated to levels of genetic diversity. Functional annotation of the candidate transcripts revealed that variation in water quality is an important factor influencing expression variation for N. australis. Our findings suggest that gene expression variation can contribute to the evolutionary potential of small populations. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Khed, Veerendrakumar C.; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd
2018-04-01
Different sizes of crumb rubber were used to investigate their effects on the flowability and compressive strength of a hybrid fibre-reinforced engineered cementitious composite. Two sizes of crumb rubber, 30 mesh and 1-3 mm, were used as partial replacement of the fine aggregate, up to 60%. The experimental study was carried out through mathematical and statistical analysis by response surface methodology (RSM) using the Design Expert software. Response models were developed and the results were validated by analysis of variance (ANOVA). It was found that the finer crumb rubber produced better workability and higher compressive strength than the larger size, and it was concluded that crumb rubber has a negative effect on compressive strength and a positive effect on workability. The optimization results are in approximately good agreement with the experimental results.
Tactile sensitivity of gloved hands in the cold operation.
Geng, Q; Kuklane, K; Holmér, I
1997-11-01
In this study, the tactile sensitivity of gloved hands during cold operations was investigated. The relationships among the physical properties of protective gloves, hand tactile sensitivity, and cold protection were also analysed both objectively and subjectively. Subjects wearing various gloves participated in the experimental study during cold exposure at ambient temperatures of -12 degrees C and -25 degrees C. Tactual performance was measured using an identification task with objects of various sizes, quantified as the percentage of misjudgments. Forearm, hand and finger skin temperatures were also recorded throughout. The experimental data were analysed using an analysis of variance (ANOVA) model and Tukey's multiple range test. The results indicated that tactual performance was affected both by the gloves and by hand/finger cooling. The effect of object size on tactile discrimination was significant, and misjudgments increased when objects of similar sizes had to be identified, especially at -25 degrees C.
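The ANOVA-plus-Tukey analysis mentioned above can be sketched as follows; the glove labels and misjudgment percentages are simulated for illustration and do not come from the experiment.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(5)
# Hypothetical misjudgment percentages for three glove types at one ambient temperature.
df = pd.DataFrame({
    "glove": np.repeat(["A", "B", "C"], 12),
    "misjudged": np.concatenate([rng.normal(12, 4, 12),
                                 rng.normal(18, 4, 12),
                                 rng.normal(25, 4, 12)]),
})

# Omnibus one-way ANOVA across glove types...
groups = [g["misjudged"].values for _, g in df.groupby("glove")]
print("one-way ANOVA:", stats.f_oneway(*groups))

# ...followed by Tukey's multiple range comparisons between pairs of gloves.
print(pairwise_tukeyhsd(df["misjudged"], df["glove"]))
```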
Mental Health Correlates of Cigarette Use in LGBT Individuals in the Southeastern United States.
Drescher, Christopher F; Lopez, Eliot J; Griffin, James A; Toomey, Thomas M; Eldridge, Elizabeth D; Stepleman, Lara M
2018-05-12
Smoking prevalence for lesbian, gay, bisexual, and transgender (LGBT) individuals is higher than for heterosexual, cisgender individuals. Elevated smoking rates have been linked to psychiatric comorbidities, substance use, poverty, low education levels, and stress. This study examined mental health (MH) correlates of cigarette use in LGBT individuals residing in a metropolitan area in the southeastern United States. Participants were 335 individuals from an LGBT health needs assessment (mean age 34.7; SD = 13.5; 63% gay/lesbian; 66% Caucasian; 81% cisgender). Demographics, current/past psychiatric diagnoses, number of poor MH days in the last 30, the Patient Health Questionnaire (PHQ) 2 depression screener, the Three-Item Loneliness Scale, and frequency of cigarette use were included. Analyses included bivariate correlations, analysis of variance (ANOVA), and regression. Multiple demographic and MH factors were associated with smoker status and frequency of smoking. A logistic regression indicated that lower education and bipolar disorder were most strongly associated with being a smoker. For smokers, a hierarchical regression model including demographic and MH variables accounted for 17.6% of the variance in frequency of cigarette use. Only education, bipolar disorder, and the number of poor MH days were significant contributors in the overall model. Conclusions/Importance: Less education, bipolar disorder, and recurrent poor MH increase LGBT vulnerability to cigarette use. Access to LGBT-competent MH providers who can address culturally specific factors in tobacco cessation is crucial to reducing this health disparities.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
Waldmann, P; García-Gil, M R; Sillanpää, M J
2005-06-01
Comparison of the level of differentiation at neutral molecular markers (estimated as F(ST) or G(ST)) with the level of differentiation at quantitative traits (estimated as Q(ST)) has become a standard tool for inferring that there is differential selection between populations. We estimated Q(ST) of timing of bud set from a latitudinal cline of Pinus sylvestris with a Bayesian hierarchical variance component method utilizing information on the pre-estimated population structure from neutral molecular markers. Unfortunately, the between-family variances differed substantially between populations, which resulted in a bimodal posterior of Q(ST) that could not be compared in any sensible way with the unimodal posterior of the microsatellite F(ST). In order to avoid publishing studies with flawed Q(ST) estimates, we recommend that future studies present heritability estimates for each trait and population. Moreover, to detect variance heterogeneity in frequentist methods (ANOVA and REML), it is essential to also check that the residuals are normally distributed and do not follow any systematically deviating trends.
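For reference, the conventional frequentist route to Q(ST), estimating between- and within-population variance components from a balanced one-way ANOVA, is sketched below on simulated placeholder data; it ignores heritability scaling and does not reproduce the paper's Bayesian hierarchical model.

# Sketch: Q_ST = sigma^2_between / (sigma^2_between + 2 * sigma^2_within),
# with variance components from balanced one-way ANOVA mean squares.
import numpy as np

rng = np.random.default_rng(2)
n_pop, n_fam = 8, 20                      # populations, families per population
pop_effect = rng.normal(0, 1.0, n_pop)    # between-population genetic effects
data = pop_effect[:, None] + rng.normal(0, 2.0, (n_pop, n_fam))  # family means

grand = data.mean()
ms_between = n_fam * ((data.mean(axis=1) - grand) ** 2).sum() / (n_pop - 1)
ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n_pop * (n_fam - 1))

var_between = max((ms_between - ms_within) / n_fam, 0.0)
var_within = ms_within                    # taken here as the within-population variance
q_st = var_between / (var_between + 2 * var_within)
print(f"sigma2_B = {var_between:.3f}, sigma2_W = {var_within:.3f}, Q_ST = {q_st:.3f}")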
NASA Astrophysics Data System (ADS)
Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.
2018-03-01
The internal resistance of a PEM fuel cell depends on the operation conditions and on the current delivered by the cell. This work's goal is to obtain a semiempirical model able to reproduce the effect of the operation current on the internal resistance of an individual cell of a commercial PEM fuel cell stack; and to perform a statistical analysis in order to study the effect of the operation temperature and the inlet humidities on the parameters of the model. First, the internal resistance of the individual fuel cell operating in different operation conditions was experimentally measured for different DC currents, using the high frequency intercept of the impedance spectra. Then, a semiempirical model based on Springer and co-workers' model was proposed. This model is able to successfully reproduce the experimental trends. Subsequently, the curves of resistance versus DC current obtained for different operation conditions were fitted to the semiempirical model, and an analysis of variance (ANOVA) was performed in order to determine which factors have a statistically significant effect on each model parameter. Finally, a response surface method was applied in order to obtain a regression model.
Minimum number of measurements for evaluating Bertholletia excelsa.
Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E
2017-09-27
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
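The repeatability arithmetic behind this conclusion can be sketched as follows; the variance components are hypothetical placeholders, and the formulas are the standard ANOVA-based ones rather than the principal-component estimators the authors found preferable.

# Sketch: repeatability coefficient and the minimum number of measurements
# needed to reach a target accuracy (coefficient of determination).
import math

def repeatability(var_genotype, var_residual):
    # r = sigma^2_g / (sigma^2_g + sigma^2_e)
    return var_genotype / (var_genotype + var_residual)

def min_measurements(r, target_r2=0.85):
    # m = R2 (1 - r) / [(1 - R2) r], rounded up to a whole number of measurements
    return math.ceil(target_r2 * (1 - r) / ((1 - target_r2) * r))

def accuracy(r, m):
    # Coefficient of determination achieved with m measurements
    return m * r / (1 + (m - 1) * r)

r = repeatability(var_genotype=4.0, var_residual=2.5)   # placeholder components
m = min_measurements(r, target_r2=0.85)
print(f"r = {r:.2f}, measurements needed for R2 >= 0.85: {m}, achieved R2 = {accuracy(r, m):.2f}")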
2012-01-01
Background To investigate whether different conditions of DNA structure and radiation treatment could modify the heterogeneity of response, and additionally to study variance as a potential parameter of heterogeneity for radiosensitivity testing. Methods Two hundred leukocytes per sample from healthy donors were split into four groups. I: intact chromatin structure; II: nucleoids of histone-depleted DNA; III: nucleoids of histone-depleted DNA with 90 mM DMSO as antioxidant. Responses to single (I-III) and double (IV) irradiation with 4 Gy and repair kinetics were evaluated using %Tail-DNA. Heterogeneity of DNA damage was determined by calculating the variance of DNA damage (V) and the mean variance (Mvar); mutual comparisons were done by one-way analysis of variance (ANOVA). Results Heterogeneity of initial DNA damage (I, 0 min repair) increased without histones (II). The absence of histones was balanced by the addition of antioxidants (III). Repair reduced the heterogeneity of all samples (with and without irradiation). However, double irradiation plus repair led to a higher level of heterogeneity, distinguishable from single irradiation and repair in intact cells. The increase in mean DNA damage was associated with a similarly elevated variance of DNA damage (r = +0.88). Conclusions Heterogeneity of DNA damage can be modified by histone level, antioxidant concentration, repair and radiation dose and was positively correlated with DNA damage. Experimental conditions might be optimized by reducing the scatter of comet assay data through repair and antioxidants, potentially allowing better discrimination of small differences. The amount of heterogeneity measured by variance might be an additional useful parameter to characterize radiosensitivity. PMID:22520045
Sharma, Praveen; Singh, Lakhvinder; Dilbaghi, Neeraj
2009-05-30
Decolorization of the textile azo dye Disperse Yellow 211 (DY 211) in simulated aqueous solution by the bacterial strain Bacillus subtilis was carried out. Response surface methodology (RSM) with a Box-Behnken design in the three most important operating variables (temperature, pH and initial dye concentration) was successfully employed for the study and optimization of the decolorization process. A total of 17 experiments were conducted to construct a quadratic model. According to the analysis of variance (ANOVA) results, the proposed model can be used to navigate the design space. Under optimized conditions the bacterial strain was able to decolorize DY 211 by up to 80%. The model indicated that an initial dye concentration of 100 mgl(-1), pH 7 and a temperature of 32.5 degrees C were optimum for maximum percentage decolorization. The very high regression coefficient between the variables and the response (R(2)=0.9930) indicated an excellent fit of the polynomial regression model to the experimental data. The combination of the three variables predicted through RSM was confirmed through confirmatory experiments; hence, the bacterial strain holds great potential for the treatment of colored textile effluents.
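A sketch of the Box-Behnken quadratic-model fit is given below using statsmodels; the coded factor levels and simulated responses are placeholders with hypothetical coefficients, not the reported decolorization data.

# Sketch: 17-run Box-Behnken design (12 edge runs + 5 centre points) for three
# factors, fitted with a full quadratic response-surface model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
edges = [(a, b, 0) for a in (-1, 1) for b in (-1, 1)] \
      + [(a, 0, c) for a in (-1, 1) for c in (-1, 1)] \
      + [(0, b, c) for b in (-1, 1) for c in (-1, 1)]
runs = edges + [(0, 0, 0)] * 5
df = pd.DataFrame(runs, columns=["temp", "pH", "dye"])
# Hypothetical response surface with curvature in temperature and pH.
df["decolor"] = (70 - 5 * df.temp**2 - 8 * df.pH**2 - 4 * df.dye
                 + 3 * df.temp * df.pH + rng.normal(0, 1.5, len(df)))

quad = smf.ols("decolor ~ temp + pH + dye + I(temp**2) + I(pH**2) + I(dye**2)"
               " + temp:pH + temp:dye + pH:dye", data=df).fit()
print(quad.summary())   # coefficients and R^2 for the quadratic model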
Vats, Siddharth; Maurya, Devendra Prasad; Jain, Ayushi; Mall, Varija; Negi, Sangeeta
2013-11-01
The objective of this study was to optimize the physico-enzymatic pretreatment of P. roxburghii fallen foliage (needles) to produce reducing sugars through response surface methodology (RSM) with a central composite face-centered design (CCD). Five parameters, i.e., the concentrations of laccase, cellulase and xylanase, steam explosion pressure and incubation period, were considered at three levels with twenty-six runs. Cellulase, xylanase and laccase enzymes with activities of 4.563, 38.32 and 0.05 IU/mL, respectively, were produced from locally isolated microbial strains. Analysis of variance (ANOVA) was applied for the validation of the predicted model at the 95% confidence level. This model predicted a release of 334 mg/g of reducing sugars on treating P. roxburghii fallen foliage with 1.18 mL of cellulase, 0.31 mL of xylanase and 0.01 mL of laccase, 14.39 psi steam explosion pressure and 24 h of incubation time. The experimental results obtained were in good agreement with the predicted values, making it a reliable optimized five-factor model for predicting reducing sugar yield for ethanol production in the bio-fuel industry.
Egbewale, Bolaji E; Lewis, Martyn; Sim, Julius
2014-04-09
Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. 126 hypothetical trial scenarios were evaluated (126,000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when the pretest-posttest correlation is ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. The apparently greater power of ANOVA and CSA under certain imbalances is achieved at the expense of a biased treatment effect. Across a range of correlations between pre- and post-treatment scores and at varying levels and directions of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power.
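The comparison can be reproduced in miniature with the sketch below, which simulates one imbalanced two-arm trial and fits the three analyses; the effect size, correlation and imbalance values are illustrative placeholders, not the scenarios of the study.

# Sketch: ANOVA on post-scores, change-score analysis and ANCOVA applied to
# the same simulated trial with baseline imbalance (true effect = 0.3).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n, rho, effect, imbalance = 200, 0.6, 0.3, 0.4
group = np.repeat([0, 1], n // 2)
pre = rng.normal(0, 1, n) + imbalance * group          # baseline imbalance
post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(0, 1, n) + effect * group
df = pd.DataFrame({"group": group, "pre": pre, "post": post, "change": post - pre})

anova  = smf.ols("post ~ group", data=df).fit()          # ignores baseline
csa    = smf.ols("change ~ group", data=df).fit()        # change-score analysis
ancova = smf.ols("post ~ group + pre", data=df).fit()    # adjusts for baseline

for name, fit in [("ANOVA", anova), ("CSA", csa), ("ANCOVA", ancova)]:
    print(f"{name:7s} treatment estimate = {fit.params['group']:+.3f}"
          f"  (SE {fit.bse['group']:.3f})")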
NASA Astrophysics Data System (ADS)
Öktem, H.
2012-01-01
Plastic injection molding plays a key role in the production of high-quality plastic parts. Shrinkage is one of the most significant quality problems of plastic parts in plastic injection molding. This article focuses on modeling and analysing the effects of process parameters on shrinkage by evaluating the quality of a plastic part, a DVD-ROM cover made of acrylonitrile butadiene styrene (ABS) polymer. An effective regression model was developed to determine the mathematical relationship between the process parameters (mold temperature, melt temperature, injection pressure, injection time, and cooling time) and the volumetric shrinkage by utilizing the analysis data. Finite element (FE) analyses designed by a Taguchi L27 orthogonal array were run in the Moldflow simulation program. Analysis of variance (ANOVA) was then performed to check the adequacy of the regression model and to determine the effect of the process parameters on the shrinkage. Experiments were conducted to verify the accuracy of the regression model against the FE analyses obtained from Moldflow. The results show that the regression model agrees very well with the FE analyses and the experiments. From this, it can be concluded that this study succeeded in modeling the shrinkage problem in this application.
Research, science and technology parks: A global comparison of best practices
NASA Astrophysics Data System (ADS)
Ruiz Villacres, Hugo D.
The purpose of this study was to determine whether significant differences exist in the evaluation of effectiveness and efficiency among North American, European, and Asian research parks (RPs). Park directors and staff responded to 25 questions from the Survey for Research, Science and Technology Parks. Effectiveness was measured by directors' perceptions of the RP's contribution to economic growth and job creation. Efficiency was evaluated by the interactions between local universities and research parks, assessment of the ecosystem's basic characteristics, and the culture of innovation in the ecosystem. A stratified sampling procedure from a population of 793 parks was used; analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA) were used to test for significance. A total of 130 RPs from three continents participated in this study. No significant differences were found among the regions in directors' evaluations of RP effectiveness and efficiency.
Prediction of Cutting Force in Turning Process-an Experimental Approach
NASA Astrophysics Data System (ADS)
Thangarasu, S. K.; Shankar, S.; Thomas, A. Tony; Sridhar, G.
2018-02-01
This paper deals with the prediction of cutting forces in a turning process. Turning with advanced cutting tools has several advantages over grinding, such as short cycle times, process flexibility, comparable surface roughness, high material removal rates, and fewer environmental problems owing to the absence of cutting fluid. A full-bridge dynamometer was used to measure the cutting forces on a mild steel workpiece machined with a cemented carbide insert tool for different combinations of cutting speed, feed rate and depth of cut. The experiments were planned based on a Taguchi design, and the measured cutting forces were compared with the predicted forces in order to validate the feasibility of the proposed design. The percentage contribution of each process parameter was analyzed using analysis of variance (ANOVA). Both the experimental results taken from the lathe tool dynamometer and those from the designed full-bridge dynamometer were analyzed using the Taguchi design of experiments and ANOVA.
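The percentage-contribution step commonly paired with such designs, each factor's share of the total ANOVA sum of squares, can be sketched as follows on hypothetical cutting-force data (placeholder levels and forces, not the measured values).

# Sketch: ANOVA table for a three-factor cutting-force experiment, with each
# factor's percentage contribution computed from its sum of squares.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
levels = [-1, 0, 1]
runs = [(s, f, d) for s in levels for f in levels for d in levels]   # full 3^3 grid
df = pd.DataFrame(runs, columns=["speed", "feed", "doc"])            # doc = depth of cut
df["force"] = 200 + 15 * df.feed + 25 * df.doc - 8 * df.speed + rng.normal(0, 5, len(df))

fit = smf.ols("force ~ C(speed) + C(feed) + C(doc)", data=df).fit()
table = sm.stats.anova_lm(fit, typ=2)
table["contribution_%"] = 100 * table["sum_sq"] / table["sum_sq"].sum()
print(table.round(2))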
The Impact of Face Skin Tone vs. Face Symmetry on Perceived Facial Attractiveness.
Vera Cruz, Germano
2018-01-01
The purpose of this study was to assess and compare the relative contributions of skin tone and symmetry to judgments of attractiveness regarding female faces. Two hundred and fifteen Mozambican adults were presented with a set of faces and instructed to rate their degree of attractiveness along a continuous scale. Chi-square, factorial weight analyses and ANOVA were used to analyze the data. Face skin tone had a significant impact on the participants' attractiveness judgments of target faces. However, the contribution of target face skin tone to the participants' attractiveness judgments (5% of the total variance) was much weaker than the contribution of target face symmetry (85% of the total variance). These results imply that skin bleaching, common among Black people across sub-Saharan African countries, is not only dangerous to the health of those who practice it but also unlikely to make them appear much more attractive.
NASA Astrophysics Data System (ADS)
Lin, Lianghua; Liu, Zhiyi; Ying, Puyou; Liu, Meng
2015-12-01
Multi-step heat treatment effectively enhances the stress corrosion cracking (SCC) resistance but usually degrades the mechanical properties of Al-Zn-Mg-Cu alloys. With the aim of enhancing both the SCC resistance and the strength of Al-Zn-Mg-Cu alloys, we optimized the process parameters during two-step aging of an Al-6.1Zn-2.8Mg-1.9Cu alloy using Taguchi's L9 orthogonal array. In this work, analysis of variance (ANOVA) was performed to find out the significant heat treatment parameters. Slow strain rate testing combined with scanning electron microscopy and transmission electron microscopy was employed to study the SCC behaviors of the Al-Zn-Mg-Cu alloy. Results showed that the contour map produced by ANOVA offered a reliable reference for the selection of optimum heat treatment parameters. By using this method, a desired combination of mechanical performance and SCC resistance was obtained.
An ANOVA approach for statistical comparisons of brain networks.
Fraiman, Daniel; Fraiman, Ricardo
2018-03-16
The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test that we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify differing subnetworks. As an example, we show the application of this tool to resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep in the days before the scan is a relevant variable that must be controlled for. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain-structure variables. Our method can also be applied to other kinds of networks, such as protein interaction networks, gene networks or social networks.
An investigation of thermal changes of various permanent dental cements.
Duymus, Zeynep Yesil; Yilmaz, Baykal; Karaalioglu, F Osman
2009-05-01
The aim of this study was to investigate and compare the temperature rises occurring during the setting reactions of different permanent cements used to lute fixed partial prosthodontics. Four cements were used in this study. They were mixed in three different proportions: according to the manufacturers' recommendations, at a doubled powder ratio, and at a doubled liquid ratio. The temperature rise during the setting reaction was measured with a thermocouple. For each proportion, the measurement was repeated five times, so that a total of 60 measurements were made for the four different cements. Data were analyzed using analysis of variance (ANOVA). The ANOVA results showed that cement type and the interaction between cement type and powder-liquid ratio were statistically significant factors (p<0.001). Similarly, the powder-liquid ratio was a statistically significant factor (p<0.01). Among the dental cements tested, zinc phosphate cement showed the highest temperature rise during the setting reaction, whereas glass ionomer cement showed the lowest.
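The two-way ANOVA with interaction described here (cement type × powder-liquid ratio, five repeats per cell) can be sketched in Python; the cement labels and temperature rises below are hypothetical placeholders.

# Sketch: two-way ANOVA with interaction on a 4 x 3 design with 5 repeats
# per cell (60 simulated temperature-rise measurements).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
cements = ["A", "B", "C", "D"]
ratios = ["recommended", "double_powder", "double_liquid"]
base = {"A": 6.0, "B": 2.0, "C": 3.5, "D": 4.0}                       # hypothetical means
shift = {"recommended": 0.0, "double_powder": 1.0, "double_liquid": -0.5}
rows = [{"cement": c, "ratio": r, "temp_rise": base[c] + shift[r] + rng.normal(0, 0.4)}
        for c in cements for r in ratios for _ in range(5)]
df = pd.DataFrame(rows)

fit = smf.ols("temp_rise ~ C(cement) * C(ratio)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2).round(3))    # main effects and interaction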
The effect of a family-based intervention with a cognitive-behavioral approach on elder abuse.
Khanlary, Zahra; Maarefvand, Masoomeh; Biglarian, Akbar; Heravi-Karimooi, Majideh
2016-01-01
Elder abuse may become a health issue in developing countries, including Iran. The purpose of this investigation was to study the effectiveness of Family-Based Cognitive-Behavioral Social Work (FBCBSW) in reducing elder abuse. In a randomized clinical trial in Iran, 27 elders participated in intervention and control groups. The intervention groups received a five-session FBCBSW intervention and completed the Domestic-Elder-Abuse-Questionnaire (DEAQ), which evaluates elder abuse, at baseline and follow-ups. Repeated-measures analysis of variance (ANOVA) and the Wilcoxon test were used to analyze the data. The repeated-measures ANOVA revealed that FBCBSW was successful in reducing elder abuse. The Wilcoxon test indicated that emotional neglect, care neglect, financial neglect, curtailment of personal autonomy, psychological abuse, and financial abuse significantly decreased over time, but there was no statistically significant difference in physical abuse before and after the intervention. The findings from this study suggest that FBCBSW is a promising approach to reducing elder abuse and warrants further study with larger samples.
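A sketch of the repeated-measures ANOVA step is shown below using statsmodels' AnovaRM, with simulated abuse scores at baseline and two follow-ups standing in for the DEAQ data.

# Sketch: within-subject (repeated-measures) ANOVA on a declining abuse score.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(7)
n_subjects = 27
times = ["baseline", "follow_up_1", "follow_up_2"]
means = {"baseline": 20.0, "follow_up_1": 14.0, "follow_up_2": 11.0}   # hypothetical means
rows = [{"subject": s, "time": t, "score": means[t] + rng.normal(0, 3)}
        for s in range(n_subjects) for t in times]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="score", subject="subject", within=["time"]).fit()
print(res)    # within-subject effect of time on the abuse score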
Biostatistics Series Module 3: Comparing Groups: Numerical Variables.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Numerical data that are normally distributed can be analyzed with parametric tests, that is, tests which are based on the parameters that define a normal distribution curve. If the distribution is uncertain, the data can be plotted as a normal probability plot and visually inspected, or tested for normality using one of a number of goodness of fit tests, such as the Kolmogorov-Smirnov test. The widely used Student's t-test has three variants. The one-sample t-test is used to assess if a sample mean (as an estimate of the population mean) differs significantly from a given population mean. The means of two independent samples may be compared for a statistically significant difference by the unpaired or independent samples t-test. If the data sets are related in some way, their means may be compared by the paired or dependent samples t-test. The t-test should not be used to compare the means of more than two groups. Although it is possible to compare groups in pairs, when there are more than two groups, this will increase the probability of a Type I error. The one-way analysis of variance (ANOVA) is employed to compare the means of three or more independent data sets that are normally distributed. Multiple measurements from the same set of subjects cannot be treated as separate, unrelated data sets. Comparison of means in such a situation requires repeated measures ANOVA. It is to be noted that while a multiple group comparison test such as ANOVA can point to a significant difference, it does not identify exactly between which two groups the difference lies. To do this, multiple group comparison needs to be followed up by an appropriate post hoc test. An example is Tukey's honestly significant difference test following ANOVA. If the assumptions for parametric tests are not met, there are nonparametric alternatives for comparing data sets. These include the Mann-Whitney U-test as the nonparametric counterpart of the unpaired Student's t-test, the Wilcoxon signed-rank test as the counterpart of the paired Student's t-test, the Kruskal-Wallis test as the nonparametric equivalent of ANOVA and the Friedman's test as the counterpart of repeated measures ANOVA.
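As a quick companion to the tests listed above, the sketch below maps each one to its SciPy call on small simulated samples; it is only a reference sketch on placeholder data, not part of the module itself.

# Quick reference: SciPy counterparts of the tests named in the module.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
a, b, c = rng.normal(10, 2, 15), rng.normal(11, 2, 15), rng.normal(12, 2, 15)

print(stats.ttest_1samp(a, popmean=10))        # one-sample t-test
print(stats.ttest_ind(a, b))                   # unpaired (independent samples) t-test
print(stats.ttest_rel(a, b))                   # paired (dependent samples) t-test
print(stats.f_oneway(a, b, c))                 # one-way ANOVA (3+ groups)
print(stats.mannwhitneyu(a, b))                # nonparametric: Mann-Whitney U
print(stats.wilcoxon(a, b))                    # nonparametric: Wilcoxon signed-rank
print(stats.kruskal(a, b, c))                  # nonparametric: Kruskal-Wallis
print(stats.friedmanchisquare(a, b, c))        # nonparametric: Friedman test
# Kolmogorov-Smirnov normality check (parameters estimated from the sample).
print(stats.kstest(a, "norm", args=(a.mean(), a.std(ddof=1))))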
Effects of changes in dietary fatty acids on isolated skeletal muscle functions in rats.
Ayre, K J; Hulbert, A J
1996-02-01
The effects of manipulating dietary levels of essential polyunsaturated fatty acids on the function of isolated skeletal muscles in male Wistar rats were examined. Three isoenergetic diets were used: an essential fatty acid-deficient diet (EFAD), a diet high in essential (n-6) fatty acids [High (n-6)], and a diet enriched with essential (n-3) fatty acids [High (n-3)]. After 9 wk, groups of rats on each test diet were fed a stock diet of laboratory chow for a further 6 wk. Muscle function was examined by using a battery of five tests for soleus (slow twitch) and extensor digitorum longus (EDL; fast twitch) muscles. Tests included single muscle twitches, sustained tetanic contractions, posttetanic potentiation, sustained high-frequency stimulation, and intermittent low-frequency stimulation. Results for muscles from the High (n-6) and High (n-3) groups were very similar. However, the EFAD diet resulted in significantly lower muscular tensions and reduced response times compared with the High (n-6) and High (n-3) diets. Peak twitch tension in soleus muscles was 16-21% less in the EFAD group than in the High (n-6) and High (n-3) groups, respectively [analysis of variance (ANOVA), P < 0.01]. During high-frequency stimulation, EDL muscles from the EFAD rats fatigued 32% more quickly (ANOVA, P < 0.01). Also, twitch contraction and half-relaxation times were significantly reduced, by 5-7%, in the EFAD group (ANOVA, P < 0.01). During intermittent low-frequency stimulation, soleus muscles from the EFAD group generated 25-28% less tension than did the other groups (ANOVA, P < 0.01), but in EDL muscles from the EFAD group, endurance was 20% greater than in the High (n-6) group (ANOVA, P < 0.05). After 6 wk on the stock diet, there were no longer any differences between the dietary groups. Manipulation of dietary fatty acids results in significant, but reversible, effects in muscles of rats fed an EFAD diet.
The assessment of a structured online formative assessment program: a randomised controlled trial
2014-01-01
Background Online formative assessment continues to be an important area of research, and methods which actively engage the learner and provide useful learning outcomes are of particular interest. This study reports on the outcomes of a two-year study of medical students using formative assessment tools. Method The study was conducted over two consecutive years using two different strategies for engaging students. The Year 1 strategy involved voluntary use of the formative assessment tool by 129 students. In Year 2, a second cohort of 130 students was encouraged to complete the formative assessment by incorporating summative assessment elements into it. Outcomes from pre- and post-testing of students around the formative assessment intervention were used as measures of learning. To compare improvement scores between the two years, a two-way analysis of variance (ANOVA) model was fitted to the data. Results The ANOVA model showed that there was a significant difference in improvement scores between students in the two years (mean improvement percentage 19% vs. 38.5%, p < 0.0001). Students were more likely to complete formative assessment items if they had a summative component. In Year 2, the time spent using the formative assessment tool had no impact on student improvement, nor did the number of assessment items completed. Conclusion The online medium is a valuable learning resource, capable of providing timely formative feedback and stimulating student-centered learning. However, the production of quality content is a time-consuming task and careful consideration must be given to the strategies employed to ensure its efficacy. Course designers should consider the potential positive impact summative components to formative assessment may have on student engagement and outcomes. PMID:24400883
Volume analysis of heat-induced cracks in human molars: A preliminary study
Sandholzer, Michael A.; Baron, Katharina; Heimel, Patrick; Metscher, Brian D.
2014-01-01
Context: Only a few methods have been published dealing with the visualization of heat-induced cracks inside bones and teeth. Aims: As a novel approach this study used nondestructive X-ray microtomography (micro-CT) for volume analysis of heat-induced cracks to observe the reaction of human molars to various levels of thermal stress. Materials and Methods: Eighteen clinically extracted third molars were rehydrated and burned under controlled temperatures (400, 650, and 800°C) using an electric furnace adjusted with a 25°C increase/min. The subsequent high-resolution scans (voxel-size 17.7 μm) were made with a compact micro-CT scanner (SkyScan 1174). In total, 14 scans were automatically segmented with Definiens XD Developer 1.2 and three-dimensional (3D) models were computed with Visage Imaging Amira 5.2.2. The results of the automated segmentation were analyzed with an analysis of variance (ANOVA) and uncorrected post hoc least significant difference (LSD) tests using Statistical Package for Social Sciences (SPSS) 17. A probability level of P < 0.05 was used as an index of statistical significance. Results: A temperature-dependent increase of heat-induced cracks was observed between the three temperature groups (P < 0.05, ANOVA post hoc LSD). In addition, the distributions and shape of the heat-induced changes could be classified using the computed 3D models. Conclusion: The macroscopic heat-induced changes observed in this preliminary study correspond with previous observations of unrestored human teeth, yet the current observations also take into account the entire microscopic 3D expansions of heat-induced cracks within the dental hard tissues. Using the same experimental conditions proposed in the literature, this study confirms previous results, adds new observations, and offers new perspectives in the investigation of forensic evidence. PMID:25125923
McAllister, Lisa; Gurven, Michael; Kaplan, Hillard; Stieglitz, Jonathan
2013-01-01
Objectives We develop and test a conceptual model of factors influencing women's ideal family size (IFS) in a natural fertility population, the Tsimane of Bolivia. The model posits effects of socioecology, reproductive history, maternal condition, and men's IFS. We test three hypotheses for why women may exceed their IFS despite experiencing socioeconomic development: (H1) limited autonomy; (H2) improved maternal condition; and (H3) low returns on investments in embodied capital. Methods Women's reproductive histories and prospective fertility data were collected from 2002 to 2008 (n = 305 women). Semistructured interviews were conducted with Tsimane women to study the perceived value of parental investment (n = 76). Multiple regression, t-tests, and analysis of variance (ANOVA) are used to test model predictions. Results Women's IFS is predicted by their socioecology, reproductive history, maternal condition, and husband's IFS. Hypotheses 2 and 3 are supported. Couples residing near town have smaller IFS (women = 3.75 ± 1.64; men = 3.87 ± 2.64) and less variance in IFS. However, the degree to which fertility exceeds IFS is inversely correlated with distance to town (partial r = −0.189, df = 156, P = 0.018). Women living near town have better maternal condition, but 64% value traditional skills over formal schooling and 88% believe living in town is unfeasible. Conclusions While reduced IFS is evident with socioeconomic development, fertility decline may not immediately follow. When perceived benefits of investment in novel forms of embodied capital are low, and somatic wealth and large kin networks persist as important components of fitness, fertility may remain high and increase if maternal condition improves. PMID:22987773
Evaluation of sleep quality in subjects with chronic nononcologic pain.
Covarrubias-Gomez, Alfredo; Mendoza-Reyes, Jonathan J
2013-08-01
A survey conducted by the National Sleep Foundation found that 20% of Americans have sleep disorders and 45% experience chronic pain. Several authors have evaluated the interrelationship between these functions using various instruments such as the Pittsburgh Sleep Quality Index (PSQI) and identified that 34% of subjects in the general population have a poor quality of sleep, but there are few studies that assess the quality of sleep in patients with chronic pain of nonmalignant origin. We undertook this study to evaluate the quality of sleep using the PSQI in patients with chronic pain unrelated to cancer. We conducted a clinical, nonrandomized, uncontrolled, descriptive, and prospective study, applying the PSQI through a direct one-time interview to 311 subjects with chronic pain unrelated to cancer. According to the categorization of the PSQI into good and poor sleepers, 89% of the subjects were poor sleepers (n = 276). There were significant differences in pain intensity according to the categorization of the PSQI, with higher intensity in the "poor sleepers" (analysis of variance [ANOVA], P = .030). A linear regression curve-estimation model showed that the PSQI global score increased with pain intensity (ANOVA, P < .001, R(2) = .46). We conclude that "poor sleepers", that is, those who considered their sleep to be of poor quality, have significantly higher pain intensity. This suggests that pain intensity plays a role in the subjective perception of sleep quality and should be taken into account by instruments that assess sleep quality.
Murado, M A; Prieto, M A
2013-09-01
NOEC and LOEC (no and lowest observed effect concentrations, respectively) are toxicological concepts derived from analysis of variance (ANOVA), a not very sensitive method that produces ambiguous results and does not provide confidence intervals (CI) for its estimates. For a long time, despite the abundant criticism that such concepts have raised, the field of ecotoxicology has been reluctant to abandon them (two possible reasons are discussed), citing the difficulty of finding clear alternatives. However, this work shows that careful dose-response (DR) modeling, through explicit algebraic equations, enables two simple options to accurately calculate the CI of substantially lower doses than the NOEC. Both ANOVA and DR analyses are affected by the experimental error, response profile, number of observations and experimental design. The study of these effects, analytically complex and experimentally unfeasible, was carried out using systematic simulations with realistic data, including different error levels. The results revealed the weakness of the NOEC and LOEC notions, confirmed the feasibility of the proposed alternatives and allowed discussion of the (often violated) conditions that minimize the CI of the parametric estimates from DR assays. In addition, a table was developed providing the experimental design that minimizes the parametric CI for a given set of working conditions. This makes it possible to reduce the experimental effort and to avoid the inconclusive results that are frequently obtained from intuitive experimental plans. Copyright © 2013 Elsevier B.V. All rights reserved.
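The dose-response alternative argued for here can be sketched as an explicit log-logistic fit with a confidence interval for a low-effect dose (EC10); the model form, doses and responses below are hypothetical placeholders, not the authors' DR equations.

# Sketch: fit a log-logistic dose-response model and bootstrap an approximate
# 95% CI for EC10 from the fitted parameter covariance.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, top, ec50, slope):
    # Response declining from `top` towards 0 as dose increases.
    return top / (1.0 + (dose / ec50) ** slope)

rng = np.random.default_rng(9)
doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
dose_rep = np.tile(doses, 4)                                    # 4 replicates per dose
response = log_logistic(dose_rep, 100, 10, 1.5) + rng.normal(0, 4, dose_rep.size)

params, cov = curve_fit(log_logistic, dose_rep, response,
                        p0=[100, 5, 1], bounds=(0, np.inf))
top, ec50, slope = params
ec10 = ec50 * 9.0 ** (-1.0 / slope)         # dose producing a 10% drop from `top`

draws = rng.multivariate_normal(params, cov, size=4000)
draws = draws[(draws[:, 1] > 0) & (draws[:, 2] > 0)]            # keep sensible draws
ec10_draws = draws[:, 1] * 9.0 ** (-1.0 / draws[:, 2])
lo, hi = np.percentile(ec10_draws, [2.5, 97.5])
print(f"EC10 = {ec10:.2f} with approximate 95% CI [{lo:.2f}, {hi:.2f}]")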
[How medical students perform academically by admission types?].
Kim, Se-Hoon; Lee, Keumho; Hur, Yera; Kim, Ji-Ha
2013-09-01
Despite the importance of selecting students who are capable of completing medical education and becoming good doctors, not enough studies have been done in this area. This study focused on analysing differences in medical students' academic performance (grade point average, GPA) and in flunk and dropout rates by admission type. We gathered the admission data of 369 students of Konyang University College of Medicine from 2004 to 2010 and analyzed the differences between admission method and academic achievement, as well as differences in flunk and dropout rates. Analysis of variance (ANOVA), ordinary least squares regression, and logistic regression were used. Students admitted by rolling admission showed higher academic achievement from year 1 to 3 than regularly admitted students (p < 0.01). Similar results were obtained from a multiple regression model using admission type as a control variable. However, unlike the ANOVA results, GPA differences by admission type were shown not only in the lower academic years but also in year 6 (p < 0.01). The regression analysis of flunk and dropout rates by admission type showed that regular-admission students had a higher dropout rate than rolling-admission students, which demonstrates that admission type has a significant effect on flunk and dropout rates in medical students (p < 0.01). Students admitted by rolling admission tend to show lower flunk and dropout rates and to perform better academically. This implies that selecting students primarily by the Korean College Scholastic Ability Test does not guarantee their academic success in medical education. Thus we suggest a more in-depth, comprehensive method of selecting students that is appropriate to each medical school's educational goal.
NASA Astrophysics Data System (ADS)
Muti Mohamed, Norani; Bashiri, Robabeh; Kait, Chong Fai; Sufian, Suriati
2018-04-01
We investigated the influence of varying the preparation variables of TiO2 on the efficiency of photocatalytic water splitting in a photoelectrochemical (PEC) cell. A hydrothermal-assisted sol-gel technique was applied to synthesize TiO2 modified with nickel and copper oxide. The variation of water (mL), acid (mL) and total metal loading (%) was mathematically modelled using a central composite design (CCD) from response surface methodology (RSM) to explore the single and combined effects of the parameters on system performance. The experimental data were fitted using a quadratic polynomial regression model evaluated by analysis of variance (ANOVA). A coefficient of determination of 98% confirms the strong linear relationship between the experimental and predicted values. The amount of water had the largest effect on the photoconversion efficiency because of its direct effect on the crystallinity and the number of defects on the surface of the photocatalyst. The optimal parameter values for maximum photoconversion efficiency were 16 mL, 3 mL and 5% for water, acid and total metal loading, respectively.
Gupta, Vishal; Pandey, Pulak M
2016-11-01
Thermal necrosis is one of the major problems associated with the bone drilling process in orthopedic/trauma surgical operations. To overcome this problem, a new bone drilling method has been introduced recently. Studies have been carried out with rotary ultrasonic drilling (RUD) on pig bones using diamond-coated abrasive hollow tools. In the present work, the influence of process parameters (rotational speed, feed rate, drill diameter and vibrational amplitude) on the change in temperature was studied using a design-of-experiments technique, i.e., response surface methodology (RSM), and data analysis was carried out using analysis of variance (ANOVA). Temperature was recorded and measured using the embedded thermocouple technique at distances of 0.5 mm, 1.0 mm, 1.5 mm and 2.0 mm from the drill site. A statistical model was developed to predict the maximum temperature at the drill tool-bone interface. It was observed that temperature increased with increases in the rotational speed, feed rate and drill diameter and decreased with an increase in the vibrational amplitude. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Nasef, Mohamed Mahmoud; Aly, Amgad Ahmed; Saidi, Hamdani; Ahmad, Arshad
2011-11-01
Radiation-induced grafting of 1-vinylimidazole (1-VIm) onto poly(ethylene-co-tetrafluoroethene) (ETFE) was investigated. The grafting parameters, such as absorbed dose, monomer concentration, grafting time and temperature, were optimized using the response surface method (RSM). The Box-Behnken module available in the Design-Expert software was used to investigate the effect of the reaction conditions (independent parameters), varied at four levels, on the degree of grafting (G%) (response parameter). The model yielded a polynomial equation that relates the linear, quadratic and interaction effects of the independent parameters to the response parameter. Analysis of variance (ANOVA) was used to evaluate the results of the model and detect the significant values for the independent parameters. The optimum parameters to achieve a maximum G% were found to be a monomer concentration of 55 vol%, an absorbed dose of 100 kGy, a time in the range of 14-20 h and a temperature of 61 °C. Fourier transform infrared (FTIR) spectroscopy, thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) were used to investigate the properties of the obtained films and provide evidence for grafting.
Xu, Mingyu; Yin, Ping; Liu, Xiguang; Tang, Qinghua; Qu, Rongjun; Xu, Qiang
2013-12-01
Novel biosorbent materials (RH-2 and RH-3) were successfully developed from the agricultural waste material rice husks (RH-1) through fast and facile esterification reactions with hydroxylethylidenediphosphonic acid and nitrilotrimethylenetriphosphonic acid, respectively. The present paper reports the feasibility of using RH-1, RH-2 and RH-3 for the removal of heavy metals from simulated wastewater. The results revealed that the adsorption performance of the rice husks functionalized with the organotriphosphonic acid (RH-3) was excellent, especially for gold ions (Au(III)). The combined effect of initial solution pH, RH-3 dosage and initial Au(III) concentration was investigated using response surface methodology (RSM); the results showed that the initial Au(III) concentration exerted a stronger influence on Au(III) uptake than the initial pH and biomass dosage. The analysis of variance (ANOVA) of the quadratic model demonstrated that the model was highly significant, and under the optimum process conditions the maximum adsorption capacity could reach 3.25 ± 0.07 mmol/g, which is higher than that of other reported adsorbents. Copyright © 2013 Elsevier Ltd. All rights reserved.
A statistical approach to optimizing concrete mixture design.
Ahmad, Shamsad; Alghamdi, Saeid A
2014-01-01
A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3(3)). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m(3)), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.
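The final step, searching the fitted polynomial for the best mix proportions, can be sketched as follows; the factor levels match those listed, but the strength values and regression surface are simulated placeholders rather than the measured data.

# Sketch: 3^3 full factorial with 3 replicates, quadratic regression for
# compressive strength, and a grid search over the fitted surface.
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
wc_levels, cm_levels, fa_levels = [0.38, 0.43, 0.48], [350, 375, 400], [0.35, 0.40, 0.45]
design = list(itertools.product(wc_levels, cm_levels, fa_levels)) * 3   # 27 mixes x 3 replicates
df = pd.DataFrame(design, columns=["wc", "cm", "fa"])
df["strength"] = (120 - 150 * df.wc + 0.05 * df.cm - 20 * (df.fa - 0.40) ** 2
                  + rng.normal(0, 2, len(df)))                          # hypothetical MPa values

model = smf.ols("strength ~ wc + cm + fa + I(wc**2) + I(cm**2) + I(fa**2)"
                " + wc:cm + wc:fa + cm:fa", data=df).fit()

grid = pd.DataFrame(list(itertools.product(np.linspace(0.38, 0.48, 21),
                                           np.linspace(350, 400, 21),
                                           np.linspace(0.35, 0.45, 21))),
                    columns=["wc", "cm", "fa"])
best = grid.assign(pred=model.predict(grid)).nlargest(1, "pred")
print(best)   # mix proportions maximizing predicted strength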
NASA Astrophysics Data System (ADS)
Azmi, H.; Haron, C. H. C.; Ghani, J. A.; Suhaily, M.; Yuzairi, A. R.
2018-04-01
The surface roughness (Ra) and delamination factor (Fd) of milled kenaf-reinforced plastic composite materials depend on the milling parameters (spindle speed, feed rate and depth of cut). Therefore, a study was carried out to investigate the relationship between the milling parameters and their effects on kenaf-reinforced plastic composite materials. The composite panels were fabricated using the vacuum assisted resin transfer moulding (VARTM) method. A full factorial design of experiments was used as an initial step to screen the significance of the parameters on the defects using analysis of variance (ANOVA). If the curvature of the collected data is significant, response surface methodology (RSM) is then applied to obtain a quadratic model equation that expresses the optimization more reliably. Thus, the objective of this research is to obtain optimum settings of the milling parameters and modelling equations that minimize the surface roughness (Ra) and delamination factor (Fd) of milled kenaf-reinforced plastic composite materials. The spindle speed and feed rate contributed the most to the surface roughness and the delamination factor of the kenaf composite materials.
Msimanga, Huggins Z; Ollis, Robert J
2010-06-01
Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were used to classify acetaminophen-containing medicines using their attenuated total reflection Fourier transform infrared (ATR-FT-IR) spectra. Four formulations of Tylenol (Arthritis Pain Relief, Extra Strength Pain Relief, 8 Hour Pain Relief, and Extra Strength Pain Relief Rapid Release) along with 98% pure acetaminophen were selected for this study because of the similarity of their spectral features, with correlation coefficients ranging from 0.9857 to 0.9988. Before acquiring spectra for the predictor matrix, the effects on spectral precision with respect to sample particle size (determined by sieve size opening), force gauge of the ATR accessory, sample reloading, and between-tablet variation were examined. Spectra were baseline corrected and normalized to unity before multivariate analysis. Analysis of variance (ANOVA) was used to study spectral precision. The large particles (35 mesh) showed large variance between spectra, while fine particles (120 mesh) indicated good spectral precision based on the F-test. Force gauge setting did not significantly affect precision. Sample reloading using the fine particle size and a constant force gauge setting of 50 units also did not compromise precision. Based on these observations, data acquisition for the predictor matrix was carried out with the fine particles (sieve size opening of 120 mesh) at a constant force gauge setting of 50 units. After removing outliers, PCA successfully classified the five samples in the first and second components, accounting for 45.0% and 24.5% of the variances, respectively. The four-component PLS-DA model (R(2)=0.925 and Q(2)=0.906) gave good test spectra predictions with an overall average of 0.961 +/- 7.1% RSD versus the expected 1.0 prediction for the 20 test spectra used.
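A minimal sketch of the chemometric step, PCA for an unsupervised view and PLS-DA (PLS regression against one-hot class labels) for classification, is given below on simulated spectra standing in for the normalized ATR-FT-IR data.

# Sketch: PCA scores and a PLS-DA classifier on simulated "spectra".
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(11)
n_per_class, n_wavenumbers, n_classes = 20, 300, 5
base = rng.normal(0, 1, (n_classes, n_wavenumbers))          # class-specific spectral profiles
X = np.vstack([base[k] + 0.2 * rng.normal(0, 1, (n_per_class, n_wavenumbers))
               for k in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# PCA: unsupervised grouping in the first two components.
scores = PCA(n_components=2).fit_transform(X)
print("mean PC1/PC2 score per class:",
      [scores[y == k].mean(axis=0).round(2) for k in range(n_classes)])

# PLS-DA: PLS regression against one-hot class membership, predicted by argmax.
Y = np.eye(n_classes)[y]
pls = PLSRegression(n_components=4).fit(X, Y)
pred = pls.predict(X).argmax(axis=1)
print("training accuracy:", (pred == y).mean())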
Statistical analysis of fNIRS data: a comprehensive review.
Tak, Sungho; Ye, Jong Chul
2014-01-15
Functional near-infrared spectroscopy (fNIRS) is a non-invasive method to measure brain activities using the changes of optical absorption in the brain through the intact skull. fNIRS has many advantages over other neuroimaging modalities such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or magnetoencephalography (MEG), since it can directly measure blood oxygenation level changes related to neural activation with high temporal resolution. However, fNIRS signals are highly corrupted by measurement noises and physiology-based systemic interference. Careful statistical analyses are therefore required to extract neuronal activity-related signals from fNIRS data. In this paper, we provide an extensive review of historical developments of statistical analyses of fNIRS signal, which include motion artifact correction, short source-detector separation correction, principal component analysis (PCA)/independent component analysis (ICA), false discovery rate (FDR), serially-correlated errors, as well as inference techniques such as the standard t-test, F-test, analysis of variance (ANOVA), and statistical parameter mapping (SPM) framework. In addition, to provide a unified view of various existing inference techniques, we explain a linear mixed effect model with restricted maximum likelihood (ReML) variance estimation, and show that most of the existing inference methods for fNIRS analysis can be derived as special cases. Some of the open issues in statistical analysis are also described. Copyright © 2013 Elsevier Inc. All rights reserved.
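The linear mixed-effects formulation mentioned in the review (a fixed task effect with a random subject effect, estimated by REML) can be sketched with statsmodels as below; the simulated HbO-like signal is a placeholder, not fNIRS data, and the full SPM-style pipeline is not reproduced.

# Sketch: random-intercept mixed model with REML estimation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(12)
n_subj, n_trials = 12, 40
rows = []
for s in range(n_subj):
    subj_offset = rng.normal(0, 0.5)              # random subject effect
    for t in range(n_trials):
        task = t % 2                              # alternate rest/task conditions
        rows.append({"subject": s, "task": task,
                     "hbo": 0.2 * task + subj_offset + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# Random intercept per subject; REML is the estimation method used here.
mixed = smf.mixedlm("hbo ~ task", data=df, groups=df["subject"]).fit(reml=True)
print(mixed.summary())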
Contribution of attachment insecurity to health-related quality of life in depressed patients.
Ponizovsky, Alexander M; Drannikov, Angela
2013-06-22
To examine the individual contributions of insecure attachment styles and depression symptom severity to health-related quality of life (HRQoL) in patients diagnosed with adjustment disorder (AJD) with depressed mood. Participants were 67 patients diagnosed with International Classification of Diseases, Tenth edition AJD with depressed mood, who completed standardised self-report questionnaires measuring study variables. Mean scores and SDs were computed for the outcome and predictor measures. Pearson correlations among the measures were computed. The study hypotheses were tested using analysis of variance (ANOVA) and multiple regression analyses. All analyses were performed using the SPSS-17 software package (SPSS Inc., Chicago, IL, United States). ANOVA showed a significant main effect of the insecure attachment styles on depression symptom severity and life satisfaction scores. The results suggest that depressive symptoms were more severe (F = 4.13, df = 2.67, P < 0.05) and life satisfaction was poorer (F = 5.69, df = 2.67, P < 0.01) in both anxious-ambivalently and avoidantly attached patients compared with their securely attached counterparts, whereas the two insecure groups did not significantly differ by these variables. The anxious/ambivalent attachment style and depression symptom severity significantly contributed to HRQoL, accounting for 21.4% and 29.7% of the total variance, respectively [R(2) = 0.79; Adjusted R(2) = 0.77; F (5, 67) = 33.68, P < 0.0001], even after controlling for gender, marital and employment status confounders. The results show that the anxious/ambivalent attachment style together with depression symptom severity substantially and independently predict the HRQoL outcome in AJD with depressed mood.
Parathyras, John; Gebhardt, Stefan; Hillermann-Rebello, Renate; Grobbelaar, Nelis; Venter, Mauritz; Warnich, Louise
2009-05-01
South Africa, like many other Southern African countries, has one of the highest HIV infection rates in the world and many individuals consequently receive antiretroviral therapy (ART). However, knowledge regarding (i) the prevalence of functional single nucleotide polymorphisms (SNPs) in pharmacologically relevant genes, and (ii) variance in pharmacotherapy both within and between different populations and ethnic groups is limited. The aim of this study was to determine whether selected polymorphisms in cytochrome P450 (CYP) genes (CYP2B6 and CYP3A4) and the multidrug-resistance 1 (ABCB1) gene underlie altered antiretroviral (ARV) drug response in two South African populations. DNA samples from 182 HIV-positive individuals of Mixed-Ancestry and Xhosa ethnicity on ART were genotyped for the A-392G SNP in CYP3A4, the G516T and A785G SNPs in CYP2B6, and the T-129C, C1236T, G2677T/A and C3435T SNPs in ABCB1. Univariate two-way analysis of variance (ANOVA) testing revealed no apparent effect of ethnicity on immune recovery (in terms of CD4-cell count) in response to ART. Univariate one-way ANOVA testing revealed a discernible effect of genotype on immune recovery in the cases of the T-129C (P=0.03) and G2677A (P<0.01) polymorphisms in the ABCB1 gene. This study serves as a basis for better understanding and possible prediction of pharmacogenetic risk profiles and drug response in individuals and ethnic groups in South Africa.
T-cell homeostasis in breast cancer survivors with persistent fatigue.
Bower, Julienne E; Ganz, Patricia A; Aziz, Najib; Fahey, John L; Cole, Steve W
2003-08-06
Approximately 30% of women successfully treated for breast cancer suffer persistent fatigue of unknown origin. Recent studies linking inflammatory processes to central nervous system-mediated fatigue led us to examine cellular immune system status in 20 fatigued breast cancer survivors and 19 matched non-fatigued breast cancer survivors. Fatigued survivors, compared with non-fatigued survivors, had statistically significantly increased numbers of circulating T lymphocytes (mean 31% increase, 95% confidence interval [CI] = 6% to 56%; P =.015 by two-sided analysis of variance [ANOVA]), with pronounced elevation in the numbers of CD4+ T lymphocytes (mean 41% increase, 95% CI = 15% to 68%; P =.003 by two-sided ANOVA) and CD56+ effector T lymphocytes (mean 52% increase, 95% CI = 4% to 99%; P =.027 by two-sided ANOVA). These changes were independent of patient demographic and treatment characteristics. Absolute numbers of B cells, natural killer cells, granulocytes, and monocytes were not altered. The increased numbers of circulating T cells correlated with elevations in the level of serum interleukin 1 receptor antagonist (for CD3+ cells, r =.56 and P =.001; for CD3+/CD4+ cells, r =.68 and P<.001, by Spearman rank correlation). Results of this study suggest that persistent fatigue in breast cancer survivors might be associated with a chronic inflammatory process involving the T-cell compartment. These results require confirmation in a larger study that is specifically designed to address this hypothesis.
Race, Socioeconomic Status, and Implicit Bias: Implications for Closing the Achievement Gap
NASA Astrophysics Data System (ADS)
Schlosser, Elizabeth Auretta Cox
This study assessed the relationship between race, socioeconomic status, age and the race implicit bias held by middle and high school science teachers in the Mobile and Baldwin County Public School Systems. Seventy-nine participants were administered the race Implicit Association Test (race IAT), created by Greenwald, A. G., Nosek, B. A., & Banaji, M. R. (2003), and a demographic survey. Quantitative analyses using analysis of variance (ANOVA) and t-tests were used in this study. An ANOVA was performed comparing the race IAT scores of African American science teachers and their Caucasian counterparts. A statistically significant difference was found (F = 4.56, p = .01). An ANOVA was also performed using the race IAT scores comparing the ages of the participants; the analysis yielded no statistical difference based on age. A t-test was performed comparing the race IAT scores of African American teachers who taught at either Title I or non-Title I schools; no statistical difference was found between groups (t = -17.985, p < .001). A t-test was also performed comparing the race IAT scores of Caucasian teachers who taught at either Title I or non-Title I schools; a statistically significant difference was found between groups (t = 2.44, p > .001). This research examines the implications for the achievement gap between African American and Caucasian students in science.
Burgert, James M; Johnson, Arthur D; Garcia-Blanco, Jose; Fulton, Lawrence V; Loughren, Michael J
2017-06-01
Introduction The American Heart Association (AHA; Dallas, Texas USA) and European Resuscitation Council (Niel, Belgium) cardiac arrest (CA) guidelines recommend the intraosseous (IO) route when intravenous (IV) access cannot be obtained. Vasopressin has been used as an alternative to epinephrine to treat ventricular fibrillation (VF). Hypothesis/Problem Limited data exist on the pharmacokinetics and resuscitative effects of vasopressin administered by the humeral IO (HIO) route for treatment of VF. The purpose of this study was to evaluate the effects of HIO and IV vasopressin, on the occurrence, odds, and time of return of spontaneous circulation (ROSC) and pharmacokinetic measures in a swine model of VF. Twenty-seven Yorkshire-cross swine (60 to 80 kg) were assigned randomly to three groups: HIO (n=9), IV (n=9), and a control group (n=9). Ventricular fibrillation was induced and untreated for two minutes. Chest compressions began at two minutes post-arrest and vasopressin (40 U) administered at four minutes post-arrest. Serial blood specimens were collected for four minutes, then the swine were resuscitated until ROSC or 29 post-arrest minutes elapsed. Fisher's Exact test determined ROSC was significantly higher in the HIO 5/7 (71.5%) and IV 8/11 (72.7%) groups compared to the control 0/9 (0.0%; P=.001). Odds ratios of ROSC indicated no significant difference between the treatment groups (P=.68) but significant differences between the HIO and control, and the IV and control groups (P=.03 and .01, respectively). Analysis of Variance (ANOVA) indicated the mean time to ROSC for HIO and IV was 621.20 seconds (SD=204.21 seconds) and 554.50 seconds (SD=213.96 seconds), respectively, with no significant difference between the groups (U=11; P=.22). Multivariate Analysis of Variance (MANOVA) revealed the maximum plasma concentration (Cmax) and time to maximum concentration (Tmax) of vasopressin in the HIO and IV groups was 71753.9 pg/mL (SD=26744.58 pg/mL) and 61853.7 pg/mL (SD=22745.04 pg/mL); 111.42 seconds (SD=51.3 seconds) and 114.55 seconds (SD=55.02 seconds), respectively. Repeated measures ANOVA indicated no significant difference in plasma vasopressin concentrations between the treatment groups over four minutes (P=.48). The HIO route delivered vasopressin effectively in a swine model of VF. Occurrence, time, and odds of ROSC, as well as pharmacokinetic measurements of HIO vasopressin, were comparable to IV. Burgert JM , Johnson AD , Garcia-Blanco J , Fulton LV , Loughren MJ . The resuscitative and pharmacokinetic effects of humeral intraosseous vasopressin in a swine model of ventricular fibrillation. Prehosp Disaster Med. 2017;32(3):305-310.
Antigen-loaded dendritic cell migration: MR imaging in a pancreatic carcinoma model.
Zhang, Zhuoli; Li, Weiguo; Procissi, Daniele; Li, Kangan; Sheu, Alexander Y; Gordon, Andrew C; Guo, Yang; Khazaie, Khashayarsha; Huan, Yi; Han, Guohong; Larson, Andrew C
2015-01-01
To test the following hypotheses in a murine model of pancreatic cancer: (a) Vaccination with antigen-loaded iron-labeled dendritic cells reduces T2-weighted signal intensity at magnetic resonance (MR) imaging within peripheral draining lymph nodes (LNs) and (b) such signal intensity reductions are associated with tumor size changes after dendritic cell vaccination. The institutional animal care and use committee approved this study. Panc02 cells were implanted into the flanks of 27 C57BL/6 mice bilaterally. After tumors reached 10 mm, cell viability was evaluated, and iron-labeled dendritic cell vaccines were injected into the left hind footpad. The mice were randomly separated into the following three groups (n = 9 in each): Group 1 was injected with 1 million iron-labeled dendritic cells; group 2, with 2 million cells; and control mice, with 200 mL of phosphate-buffered saline. T1- and T2-weighted MR imaging of labeled dendritic cell migration to draining LNs was performed before cell injection and 6 and 24 hours after injection. The signal-to-noise ratio (SNR) of the draining LNs was measured. One-way analysis of variance (ANOVA) was used to compare Prussian blue-positive dendritic cell measurements in LNs. Repeated-measures ANOVA was used to compare in vivo T2-weighted SNR LN measurements between groups over the observation time points. Trypan blue assays showed no significant difference in mean viability indexes (unlabeled vs labeled dendritic cells, 4.32% ± 0.69 [standard deviation] vs 4.83% ± 0.76; P = .385). Thirty-five days after injection, the mean left and right flank tumor sizes, respectively, were 112.7 mm(2) ± 16.4 and 109 mm(2) ± 24.3 for the 1-million dendritic cell group, 92.2 mm(2) ± 9.9 and 90.4 mm(2) ± 12.8 for the 2-million dendritic cell group, and 193.7 mm(2) ± 20.9 and 189.4 mm(2) ± 17.8 for the control group (P = .0001 for control group vs 1-million cell group; P = .00007 for control group vs 2-million cell group). There was a correlation between postinjection T2-weighted SNR decreases in the left popliteal LN 24 hours after injection and size changes at follow-up for tumors in both flanks (R = 0.81 and R = 0.76 for left and right tumors, respectively). MR imaging approaches can be used for quantitative measurement of accumulated iron-labeled dendritic cell-based vaccines in draining LNs. The amount of dendritic cell-based vaccine in draining LNs correlates well with observed protective effects.
Sargazi, Ghasem; Afzali, Daryoush; Mostafavi, Ali
2018-03-01
Reverse micelle (RM) and ultrasound-assisted reverse micelle (UARM) methods were applied to the synthesis of novel thorium nanostructures as metal organic frameworks (MOFs). Characterization with different techniques showed that the Th-MOF sample synthesized by the UARM method had higher thermal stability (354°C), smaller mean particle size (27 nm), and larger surface area (2.02×10³ m²/g). Besides, in this novel approach, the nucleation of crystals was found to occur in a shorter time. The synthesis parameters of the UARM method were designed by a 2^(k-1) factorial design, and the process control was systematically studied using analysis of variance (ANOVA) and response surface methodology (RSM). ANOVA showed that various factors, including surfactant content, ultrasound duration, temperature, ultrasound power, and the interactions between these factors, considerably affected different properties of the Th-MOF samples. According to the 2^(k-1) factorial design, the determination coefficient (R²) of the model is 0.999, with no significant lack of fit. The F value of 5432 implied that the model was highly significant and adequate to represent the relationship between the responses and the independent variables, and the large adjusted R² value indicates a good agreement between the experimental data and the fitted model. RSM predicted that it would be possible to produce Th-MOF samples with a thermal stability of 407°C, mean particle size of 13 nm, and surface area of 2.20×10³ m²/g. The mechanism controlling the Th-MOF properties was considerably different from the conventional mechanisms. Moreover, the MOF sample synthesized using UARM exhibited higher capacity for nitrogen adsorption as a result of larger pore sizes. It is believed that the UARM method and the systematic studies developed in the present work can be considered a new strategy applicable to other nanoscale MOF samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Horneff, Gerd; Becker, Ingrid
2014-07-01
The aim of this study was to define improvement thresholds for the Juvenile Arthritis Disease Activity Score (JADAS). Physicians' and parents' judgements on treatment efficacy, the ACR paediatric response measure (PedACR) and JADAS were extracted from BIKER. Patients were categorized by baseline classes in the 10-joint JADAS (JADAS10) as low (5 to <15), moderate (15 to <25) and high (25 to ≤40). Cut-offs for defining improvement following treatment with biologics or MTX were chosen by calculating the interquartile ranges (IQRs) of the judgement groups and considering the accuracy, sensitivity and specificity of the resulting model. Differences in the change of JADAS10 by JIA category were also analysed by analysis of variance (ANOVA). Sensitivity, specificity and accuracy were calculated. A total of 1315 treatment courses were analysed. The ANOVA of the JIA categories showed no significant differences in the mean JADAS10 in any baseline class, and the IQRs also showed good overall limits. Therefore, all JIA categories were combined for a collective cut-off. Analysis by baseline class revealed clear cut-off points. Improvement could be defined by a minimal decrease in the JADAS10 of 4 (41%) in the low baseline class, 10 (53%) in the moderate class, and 17 (57%) in the high class. The model shows accuracy values from 75.6% to 85.5% and comparable values for sensitivity and specificity. Improvement after 3 months can be defined efficiently by the decrease of the JADAS10, depending on the baseline JADAS10 score, which specifies low, moderate or high disease activity. Our model demonstrates clear cut-off values. The JADAS10 may be used in addition to ACR criteria in clinical trials. Since the JADAS10 can easily be calculated at each patient visit, it can also be used for clinical decisions. © The Author 2014. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Tahari, M.; Ghorbanian, A.; Hatami, M.; Jing, D.
2017-12-01
In this paper, the physical effect of a variable magnetic field on a nanofluid-based concentrating parabolic solar collector (NCPSC) is demonstrated. A section of the reservoir is modeled as a semi-circular cavity under solar radiation with the magnetic source located in the center or outside of the cavity, and the governing equations were solved with the FlexPDE numerical software. The effects of four physical parameters, i.e., the Hartmann number (Ha), nanoparticle volume fraction (φ), magnetic field strength (γ), and magnetic source location (b), on the Nusselt number are discussed. To find the interactions of these parameters and their effect on the heat transfer, a central composite design (CCD) is used and the analysis is performed on the 25 experiments proposed by the CCD. Analysis of variance (ANOVA) of the results reveals that increasing the Hartmann number decreases the Nusselt number due to the Lorentz force resulting from the presence of a stronger magnetic field.
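To make the CCD/ANOVA step above concrete, the sketch below builds the 25-run, four-factor central composite design implied by the abstract (16 corner, 8 axial, 1 centre point) and fits a quadratic response-surface model for the Nusselt number with statsmodels. The response values are synthetic and the coefficient signs merely mimic the reported trend (Nu falling with Ha); this is an illustration of the workflow, not the authors' FlexPDE model or data.

```python
# Sketch of a 25-run, 4-factor face-centred central composite design and a quadratic
# response-surface fit. The Nusselt-number values are synthetic, invented for illustration.
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

corners = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)   # 16 runs
axial = np.vstack([v * np.eye(4)[i] for i in range(4) for v in (-1.0, 1.0)])  # 8 runs
centre = np.zeros((1, 4))                                                     # 1 run
design = pd.DataFrame(np.vstack([corners, axial, centre]),
                      columns=["Ha", "phi", "gamma", "b"])

# hypothetical response: Nu falls with Ha (Lorentz-force damping), rises with phi
design["Nu"] = (5.0 - 1.2 * design.Ha + 0.8 * design.phi + 0.4 * design.gamma
                - 0.3 * design.b - 0.5 * design.Ha * design.phi
                + rng.normal(0, 0.1, len(design)))

model = smf.ols("Nu ~ (Ha + phi + gamma + b)**2 + I(Ha**2) + I(phi**2)"
                " + I(gamma**2) + I(b**2)", data=design).fit()
print(model.summary())   # the coefficient t-tests play the role of the ANOVA screen
```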
Optimisation of process parameters on thin shell part using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.
2017-09-01
This study focuses on the optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the input in order to analyse the warpage value, which is the output in this study. The significant parameters used are melt temperature, mould temperature, packing pressure, and cooling time. A plastic part made of polypropylene (PP) was selected as the study part. Optimisation of the process parameters is carried out in Design Expert software with the aim of minimising the warpage value. Response Surface Methodology (RSM) has been applied in this study together with Analysis of Variance (ANOVA) in order to investigate the interactions between the parameters that are significant to the warpage value. Thus, the optimised warpage value can be obtained from the model designed using RSM owing to its minimum error value. This study shows that the warpage value is improved by using RSM.
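As a small illustration of the optimisation step described above, the sketch below minimises a fitted quadratic warpage model over the process window with scipy. The coefficients, parameter ranges and the warpage function itself are invented placeholders, not values from the Moldflow/Design Expert study.

```python
# Sketch of minimising a fitted quadratic warpage model over the process window.
# All coefficients and bounds are invented placeholders, not the AMI/Design Expert results.
import numpy as np
from scipy.optimize import minimize

def warpage(x):
    # x = [melt temp (degC), mould temp (degC), packing pressure (MPa), cooling time (s)]
    melt, mould, pack, cool = x
    return (0.9
            - 0.004 * (melt - 230) - 0.003 * (mould - 50)
            - 0.010 * (pack - 80) - 0.008 * (cool - 15)
            + 1e-4 * (melt - 230) ** 2 + 2e-4 * (mould - 50) ** 2
            + 5e-4 * (pack - 80) ** 2 + 6e-4 * (cool - 15) ** 2)

bounds = [(220, 260), (40, 70), (60, 100), (10, 25)]      # assumed process window
res = minimize(warpage, x0=[240, 55, 80, 18], bounds=bounds, method="L-BFGS-B")
print("optimal settings:", np.round(res.x, 1), "predicted warpage (mm):", round(res.fun, 4))
```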
NASA Astrophysics Data System (ADS)
Fakir, Rachid; Barka, Noureddine; Brousseau, Jean
2018-03-01
This paper proposes a statistical approach to analyze the mechanical properties of a standard test specimen of cylindrical geometry, made of 4340 steel with a diameter of 6 mm, heat-treated and quenched in three different fluids. Samples were evaluated in a standard tensile test to assess their characteristic quantities: hardness, modulus of elasticity, yield strength, tensile strength and ultimate deformation. The proposed approach is built up gradually through (a) a presentation of the experimental device, (b) a presentation of the experimental plan and the results of the mechanical tests, (c) an analysis of variance (ANOVA) and a representation of the output responses using the response surface method (RSM), and (d) an analysis of the results and discussion. The feasibility and effectiveness of the proposed approach lead to a precise and reliable model capable of predicting the variation of mechanical properties as a function of the tempering temperature, the tempering time and the cooling capacity of the quenching medium.
Extraction of natural anthocyanin and colors from pulp of jamun fruit.
Maran, J Prakash; Sivakumar, V; Thirugnanasambandham, K; Sridhar, R
2015-06-01
In the present study, natural pigment and colors were extracted from the pulp of jamun fruit by an aqueous extraction method under different extraction conditions, namely extraction temperature (40-60 °C), time (20-100 min) and solid-liquid ratio (1:10-1:15 g/ml). A three-factor, three-level Box-Behnken response surface design was employed to optimize and investigate the effect of the process variables on the responses (total anthocyanin and color). The results were analyzed by Pareto analysis of variance (ANOVA), and second order polynomial models were developed to predict the responses. Optimum extraction conditions for maximizing the extraction yield of total anthocyanin (10.58 mg/100 g) and colors (10618.3 mg/l) were found to be: extraction temperature of 44 °C, extraction time of 93 min and solid-liquid ratio of 1:15 g/ml. Under these conditions, the experimental values closely agreed with the predicted values.
Lee, Seung Hee; Jang, Hyung Suk; Yang, Young Hee
2016-10-01
This study was done to investigate factors influencing successful aging in middle-aged women. A convenience sample of 103 middle-aged women was selected from the community. Data were collected using a structured questionnaire and analyzed using descriptive statistics, two-sample t-test, one-way ANOVA, Kruskal Wallis test, Pearson correlations, Spearman correlations and multiple regression analysis with the SPSS/WIN 22.0 program. Results of regression analysis showed that significant factors influencing successful aging were post-traumatic growth and social support. This regression model explained 48% of the variance in successful aging. Findings show that the concept 'post-traumatic growth' is an important factor influencing successful aging in middle-aged women. In addition, social support from friends/co-workers had greater influence on successful aging than social support from family. Thus, we need to consider the positive impact of post-traumatic growth and increase the chances of social participation in a successful aging program for middle-aged women.
Khanmohammadi, Mohammadreza; Bagheri Garmarudi, Amir; Samani, Simin; Ghasemi, Keyvan; Ashuri, Ahmad
2011-06-01
Attenuated Total Reflectance Fourier Transform Infrared (ATR-FTIR) microspectroscopy was applied for detection of colon cancer according to the spectral features of colon tissues. Supervised classification models can be trained to identify the tissue type based on the spectroscopic fingerprint. A total of 78 colon tissues were used in the spectroscopy studies. Major spectral differences were observed in the 1,740-900 cm(-1) spectral region. Several chemometric methods such as analysis of variance (ANOVA), cluster analysis (CA) and linear discriminant analysis (LDA) were applied for classification of the IR spectra. Utilizing the chemometric techniques, clear and reproducible differences were observed between the spectra of normal and cancer cases, suggesting that infrared microspectroscopy in conjunction with spectral data processing would be useful for diagnostic classification. Using the LDA technique, the spectra were classified into cancer and normal tissue classes with an accuracy of 95.8%. The sensitivity and specificity were 100% and 93.1%, respectively.
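The classification step can be sketched with scikit-learn as a PCA-plus-LDA pipeline evaluated by cross-validation, reporting accuracy, sensitivity and specificity as in the abstract. The spectra here are synthetic random features, and the PCA reduction is a common chemometric choice assumed for illustration, not necessarily the authors' exact preprocessing.

```python
# Minimal sketch: classify synthetic "spectra" into normal vs cancer with PCA + LDA
# and report accuracy, sensitivity and specificity from cross-validated predictions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n_normal, n_cancer, n_wavenumbers = 40, 38, 420            # 78 tissues in total
X_normal = rng.normal(0.0, 1.0, (n_normal, n_wavenumbers))
X_cancer = rng.normal(0.3, 1.0, (n_cancer, n_wavenumbers))  # shifted band intensities
X = np.vstack([X_normal, X_cancer])
y = np.array([0] * n_normal + [1] * n_cancer)               # 0 = normal, 1 = cancer

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
y_pred = cross_val_predict(clf, X, y, cv=5)

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print("accuracy   :", (tp + tn) / len(y))
print("sensitivity:", tp / (tp + fn))     # cancer correctly flagged
print("specificity:", tn / (tn + fp))     # normal correctly cleared
```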
Mohamadzadeh Shirazi, Hamed; Karimi-Sabet, Javad; Ghotbi, Cyrus
2017-09-01
Microalgae, a candidate feedstock for biodiesel production, possess a hard cell wall that prevents intracellular lipids from leaving the cells. Direct or in situ supercritical transesterification has the potential to destroy the hard microalgal cell wall and convert the extracted lipids to biodiesel, which consequently reduces the total energy consumption. Response surface methodology combined with a central composite design was applied to investigate the process parameters, including temperature, time, methanol-to-dry-algae ratio, hexane-to-dry-algae ratio, and moisture content. Thirty-two experiments were designed and performed in a batch reactor, and biodiesel efficiencies between 0.44% and 99.32% were obtained. According to the fatty acid methyl ester yields, a quadratic experimental model was fitted and the significance of the parameters was evaluated using analysis of variance (ANOVA). The effects of single parameters and their interactions were also interpreted. In addition, the effect of the supercritical process on the ultrastructure of the microalgal cell wall was examined using scanning electron microscopy (SEM). Copyright © 2017 Elsevier Ltd. All rights reserved.
An application of Six Sigma methodology to turnover intentions in health care.
Taner, Mehmet
2009-01-01
The purpose of this study is to show how the principles of Six Sigma can be applied to the high turnover problem of doctors in medical emergency services and paramedic backup. Six Sigma's define-measure-analyse-improve-control (DMAIC) cycle is applied for reducing the turnover rate of doctors in an organisation operating in emergency services. Variables of the model are determined. Exploratory factor analysis, multiple regression, analysis of variance (ANOVA) and Gage R&R are employed for the analysis. Personal burnout/stress and dissatisfaction with salary were found to be the "vital few" variables. The organisation took a new approach by improving its initiatives addressing doctors' working conditions. The sigma level of the process was increased. New policy and process changes have been found to effectively decrease the incidence of turnover intentions. The improved process has been standardised and institutionalised. This study is one of the few papers in the literature that elaborates on the turnover problem of doctors working in the emergency and paramedic backup services.
Effect of preparation conditions of activated carbon from bamboo waste for real textile wastewater.
Ahmad, A A; Hameed, B H
2010-01-15
This study deals with the use of activated carbon prepared from bamboo waste (BMAC) as an adsorbent for the removal of chemical oxygen demand (COD) and color from cotton textile mill wastewater. Bamboo waste was used to prepare activated carbon by chemical activation using phosphoric acid (H(3)PO(4)) as the chemical agent. The effects of three preparation variables, namely activation temperature, activation time and H(3)PO(4):precursor impregnation ratio (wt%), on the color and COD removal were investigated. Based on a central composite design (CCD), quadratic models were developed to correlate the preparation variables with the color and COD removal. From the analysis of variance (ANOVA), the most influential factor on each experimental design response was identified. The optimum conditions were an activation temperature of 556 degrees C, an activation time of 2.33 h and a chemical impregnation ratio of 5.24, which resulted in 93.08% color removal and 73.98% COD removal.
Estimating linear effects in ANOVA designs: the easy way.
Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana
2012-09-01
Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
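A minimal sketch of the general idea, assuming synthetic reaction-time data: apply orthogonal linear contrast weights to each participant's condition means and test the resulting contrast scores against zero, which is equivalent to the linear-trend term of a repeated-measures ANOVA and yields a per-step slope as an effect size. This is the textbook construction of a linear contrast, not necessarily the authors' exact computation.

```python
# Linear-trend contrast inside a repeated-measures design, on synthetic RT data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subj, levels = 20, np.array([1, 2, 3, 4])        # e.g. four numerical distances
# synthetic RTs: a negative linear distance effect plus within- and between-subject noise
rt = 600 - 15 * levels + rng.normal(0, 20, (n_subj, levels.size)) \
     + rng.normal(0, 30, (n_subj, 1))              # between-subject intercepts

weights = levels - levels.mean()                   # orthogonal linear contrast weights
contrast = rt @ weights                            # one linear-trend score per participant
t, p = stats.ttest_1samp(contrast, 0.0)            # equals the RM-ANOVA linear-trend test (F = t^2)

slope = contrast.mean() / np.sum(weights ** 2)     # average within-subject slope (ms per step)
print(f"linear trend: t({n_subj - 1}) = {t:.2f}, p = {p:.4f}, slope = {slope:.1f} ms/step")
```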
Kaploun, Kristen A; Abeare, Christopher A
2010-09-01
Four classification systems were examined using lateralised semantic priming in order to investigate whether degree or direction of handedness better captures the pattern of lateralised semantic priming. A total of 85 participants completed a lateralised semantic priming task and three handedness questionnaires. The classification systems tested were: (1) the traditional right- vs left-handed (RHs vs LHs); (2) a four-factor model of strong and weak right- and left-handers (SRHs, WRHs, SLHs, WLHs); (3) strong- vs mixed-handed (SHs vs MHs); and (4) a three-factor model of consistent left- (CLHs), inconsistent left- (ILHs), and consistent right-handers (CRHs). Mixed-factorial ANOVAs demonstrated significant visual field (VF) by handedness interactions for all but the third model. Results show that LHs, SLHs, CLHs, and ILHs responded faster to LVF targets, whereas RHs, SRHs, and CRHs responded faster to RVF targets; no significant VF by handedness interaction was found between SHs and MHs. The three-factor model better captures handedness group divergence on lateralised semantic priming by incorporating the direction of handedness as well as the degree. These findings help explain some of the variance in language lateralisation, demonstrating that direction of handedness is as important as degree. The need for greater consideration of handedness subgroups in laterality research is highlighted.
Cauwels, Rita G E C; Pieters, Ilse Y; Martens, Luc C; Verbeeck, Ronald M H
2010-04-01
Endodontic treatment of immature teeth is often complicated because of flaring root canals and open apices for which apexification is needed. Long-term prognosis for these teeth is surprisingly low because of cervical root fractures occurring after an impact of weak forces. In this study, an experimental model was developed to determine the fracture resistance of immature teeth and to test the hypothesis that endodontic materials succeed in reinforcing them. Compact and hollow bone cylinders from bovine femurs were used as standardized samples. In order to evaluate the experimental model, fracture resistance in both groups was evaluated by determining the ultimate force to fracture (UFF) under diametral tensile stress. Analysis of variance (ANOVA) revealed a statistically significant difference between the mean values of UFF for both groups, independently of the sampling location or subject. In a following setting, the hypothesis that obturation with gutta percha (GP), mineral trioxide aggregate (MTA), or calcium phosphate bone cement (CPBC) reinforces the hollow bone samples was investigated. Obturation resulted in a significant reinforcement for all materials, but the degree of reinforcement depended on the material. The experimental model appeared to be suitable for in vitro investigation of reinforcement and fracture resistance in a standardized way.
A dimensional comparison between delusional disorder, schizophrenia and schizoaffective disorder.
Muñoz-Negro, José E; Ibanez-Casas, Inmaculada; de Portugal, Enrique; Ochoa, Susana; Dolz, Montserrat; Haro, Josep M; Ruiz-Veguilla, Miguel; de Dios Luna Del Castillo, Juan; Cervilla, Jorge A
2015-12-01
Since the early description of paranoia, the nosology of delusional disorder has always been controversial. The old idea of unitary psychosis has now gained some renewed value from the dimensional continuum model of psychotic symptoms. The aims were: 1. to study the psychopathological dimensions of the psychosis spectrum; 2. to explore the association between psychotic dimensions and categorical diagnoses; 3. to compare the different psychotic disorders from a psychopathological and functional point of view. This is an observational study utilizing a sample of some 550 patients with a psychotic disorder: 373 participants had a diagnosis of schizophrenia, 137 had delusional disorder and 40 had a diagnosis of schizoaffective disorder. The PANSS was used to elicit psychopathology, and global functioning was ascertained using the GAF measure. Both exploratory and confirmatory factor analyses of the PANSS items were performed to extract psychopathological dimensions. Associations between diagnostic categories and dimensions were subsequently studied using ANOVA tests. Five dimensions emerged: manic, negative symptoms, depression, positive symptoms and cognitive. The model explained 57.27% of the total variance. The dimensional model was useful for explaining differences and similarities between all three psychosis spectrum categories. The potential clinical usefulness of this dimensional model within and between clinical psychosis spectrum categories is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Bruesewitz, Denise A.; Tank, Jennifer L.; Bernot, Melody J.; Richardson, William B.; Strauss, Eric A.
2006-01-01
Zebra mussels (Dreissena polymorpha) have altered the structure of invaded ecosystems and exhibit characteristics that suggest they may influence ecosystem processes such as nitrogen (N) cycling. We measured denitrification rates seasonally on sediments underlying zebra mussel beds collected from the impounded zone of Navigation Pool 8 of the Upper Mississippi River. Denitrification assays were amended with nutrients to characterize variation in nutrient limitation of denitrification in the presence or absence of zebra mussels. Denitrification rates at zebra mussel sites were high relative to sites without zebra mussels in February 2004 (repeated measures analysis of variance (RM ANOVA), p = 0.005), potentially because of high NO3-N variability from nitrification of high NH4+ zebra mussel waste. Denitrification rates were highest in June 2003 (RM ANOVA) and were related to NO3-N concentrations during the study (linear regression, R2 = 0.72, p ≤ 0.01). Examining how zebra mussels influence denitrification rates will aid in developing a more complete understanding of the impact of zebra mussels and more effective management strategies of eutrophic waters.
Quantitative knowledge acquisition for expert systems
NASA Technical Reports Server (NTRS)
Belkin, Brenda L.; Stengel, Robert F.
1991-01-01
A common problem in the design of expert systems is the definition of rules from data obtained in system operation or simulation. While it is relatively easy to collect data and to log the comments of human operators engaged in experiments, generalizing such information to a set of rules has not previously been a direct task. A statistical method is presented for generating rule bases from numerical data, motivated by an example based on aircraft navigation with multiple sensors. The specific objective is to design an expert system that selects a satisfactory suite of measurements from a dissimilar, redundant set, given an arbitrary navigation geometry and possible sensor failures. The systematic development of a Navigation Sensor Management (NSM) expert system from Kalman filter covariance data is described. The method invokes two statistical techniques: Analysis of Variance (ANOVA) and the ID3 algorithm. The ANOVA technique indicates whether variations of problem parameters give statistically different covariance results, and the ID3 algorithm identifies the relationships between the problem parameters using probabilistic knowledge extracted from a simulation example set. Both are detailed.
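The two-stage idea can be sketched as follows: a one-way ANOVA screens whether a problem parameter changes the filter covariance at all, and an entropy-criterion decision tree (used here as an ID3-like stand-in from scikit-learn) turns example cases into readable rules. The covariance values, parameter names and the selection rule are synthetic placeholders, not the NSM system's data.

```python
# Sketch of the two-stage rule-extraction idea: ANOVA screening, then an entropy-based tree.
import numpy as np
from scipy.stats import f_oneway
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)

# step 1: does sensor geometry change the steady-state position covariance?
cov_geometry_A = rng.normal(4.0, 0.5, 30)
cov_geometry_B = rng.normal(5.5, 0.5, 30)
cov_geometry_C = rng.normal(4.1, 0.5, 30)
F, p = f_oneway(cov_geometry_A, cov_geometry_B, cov_geometry_C)
print(f"ANOVA over geometries: F = {F:.1f}, p = {p:.3g}")   # significant -> keep this parameter

# step 2: learn selection rules from (parameters -> chosen sensor suite) examples
X = rng.integers(0, 3, size=(200, 2))        # columns: geometry class, failed-sensor id
y = (X[:, 0] == 1) | (X[:, 1] == 2)          # hypothetical "use backup suite" rule
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["geometry", "failed_sensor"]))
```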
Monte Carlo simulation of edge placement error
NASA Astrophysics Data System (ADS)
Kobayashi, Shinji; Okada, Soichiro; Shimura, Satoru; Nafus, Kathleen; Fonseca, Carlos; Estrella, Joel; Enomoto, Masashi
2018-03-01
In the discussion of edge placement error (EPE), we proposed interactive pattern fidelity error (IPFE) as an indicator to judge pass/fail of integrated patterns. IPFE consists of the lower- and upper-layer EPEs (CD and center of gravity, COG) and overlay, and is determined from the combination of the maximum variation of each. We succeeded in obtaining the IPFE density function by Monte Carlo simulation. The results also showed that each indicator should be controlled at the 4.0σ level for semiconductor-grade pattern counts, such as 100 billion patterns per die. Moreover, CD, COG and overlay were analyzed by analysis of variance (ANOVA), so that all variations from wafer to wafer (WTW), pattern to pattern (PTP), line edge roughness (LWR) and stochastic pattern noise (SPN) can be discussed on an equal footing. From the analysis results, we can determine which process and tools each of these variations belongs to. Furthermore, the measurement length for LWR is also discussed within the ANOVA. We propose that the measurement length for IPFE analysis should not be fixed at the micrometer order, such as >2 μm, but should instead be chosen according to the device actually being designed.
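A minimal Monte Carlo sketch of the IPFE construction, under assumed (invented) sigmas: draw CD and COG errors for the lower and upper layers plus overlay, combine them into an edge placement error, and count how often a 4.0σ margin is exceeded over a large pattern population. The numbers are placeholders, not the paper's values.

```python
# Monte Carlo sketch: combine layer CD, COG and overlay variations into an EPE
# and estimate the exceedance rate of a 4-sigma margin.
import numpy as np

rng = np.random.default_rng(4)
n = 2_000_000                              # patterns simulated (the paper discusses ~1e11/die)

sigma_cd_low, sigma_cd_up = 1.0, 1.2       # nm, half-CD variation of each layer (assumed)
sigma_cog_low, sigma_cog_up = 0.8, 0.9     # nm, pattern-centre placement (assumed)
sigma_ovl = 1.5                            # nm, layer-to-layer overlay (assumed)
margin = 4.0 * np.sqrt(sigma_cd_low**2 + sigma_cd_up**2 +
                       sigma_cog_low**2 + sigma_cog_up**2 + sigma_ovl**2)

epe = (rng.normal(0, sigma_cd_low, n) + rng.normal(0, sigma_cd_up, n)
       + rng.normal(0, sigma_cog_low, n) + rng.normal(0, sigma_cog_up, n)
       + rng.normal(0, sigma_ovl, n))
fail_rate = np.mean(np.abs(epe) > margin)
print(f"4-sigma margin = {margin:.2f} nm, failure rate = {fail_rate:.2e} per pattern")
```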
Suresh, S; Srivastava, V C; Mishrab, I M
2012-01-01
In the present paper, the removal of aniline from aqueous solutions containing catechol or resorcinol by adsorption onto granular activated carbon (GAC) is reported. The Taguchi experimental design was applied to study the effect of such parameters as the initial component concentrations (C(0,i)) of the two solutes (aniline and catechol, or aniline and resorcinol) in the solution, temperature (T), adsorbent dosage (m) and contact time (t). An L27 orthogonal array consisting of five parameters, each with three levels, was used to determine the total amount of solutes adsorbed on GAC (q(tot), mmol/g) and the signal-to-noise ratio. Analysis of variance (ANOVA) was used to determine the optimum conditions; the ANOVA shows that m is the most important parameter in the adsorption process. The most favourable levels of the process parameters were T = 303 K, m = 10 g/l and t = 660 min for both systems. The q(tot) values in the confirmation experiments carried out at the optimum conditions were 0.73 and 0.95 mmol/g for the aniline-catechol and aniline-resorcinol systems, respectively.
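The Taguchi-style analysis can be sketched as follows: compute a larger-the-better signal-to-noise ratio for each run and average it per level of one factor (here the dosage m). The nine-run fragment and q(tot) values are invented for illustration; the actual study used the full L27 array with five three-level factors.

```python
# Larger-the-better S/N ratio per run, averaged per dosage level (toy fragment of a design).
import numpy as np
import pandas as pd

runs = pd.DataFrame({
    "m_g_per_l": [5, 5, 5, 10, 10, 10, 15, 15, 15],        # dosage level per run (assumed)
    "q_tot":     [0.51, 0.55, 0.49, 0.72, 0.74, 0.70, 0.61, 0.64, 0.60],  # mmol/g (assumed)
})

# larger-the-better S/N ratio: -10 log10( mean(1/y^2) ); here one replicate per run
runs["sn"] = -10 * np.log10(1.0 / runs["q_tot"] ** 2)

level_means = runs.groupby("m_g_per_l")["sn"].mean()
print(level_means)                      # the level with the highest mean S/N is preferred
print("effect range (delta):", level_means.max() - level_means.min())
```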
Goals and potential career advancement of licensed practical nurses in Japan.
Ikeda, Mari; Inoue, Katsuya; Kamibeppu, Kiyoko
2008-10-01
To investigate the effects of personal and professional variables on the career advancement intentions of working Licensed Practical Nurses (LPNs). In Japan, the two levels of professional nursing licensure, the LPN and the registered nurse (RN), are likely to be integrated in the future. Therefore, it is important to know the career advancement intentions of LPNs. Questionnaires were sent to a sample of 356 LPNs. Analysis of variance (ANOVA) and discriminant analysis were used. We found that those who had a positive image of LPNs along with a positive image of RNs were identified as showing interest in career advancement. The results of the ANOVA showed that age had a negative effect; however, the discriminant analysis suggested that age is not as significant compared with other variables. Our results indicate that the 'image of RNs' and 'role-acceptance' factors have an effect on the career advancement intentions of LPNs. Our results suggest that nursing managers should create a supportive working environment where LPNs would feel encouraged to carry out the nursing role, thereby creating a positive image of nursing in general, which would lead to career motivation and the pursuit of RN status.
Thorlund, Kristian; Thabane, Lehana; Mills, Edward J
2013-01-11
Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons is equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures, and two approaches for the use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.
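As an illustration of the model structures discussed above, the sketch below fits a toy two-comparison random-effects network in PyMC, once with a single common between-trial standard deviation and once with per-comparison standard deviations under a moderately informative HalfNormal prior. The data, prior scales and the choice of PyMC are all assumptions for illustration, not the authors' code, priors or data sets.

```python
# Toy two-comparison network: common heterogeneity SD vs per-comparison SDs with
# moderately informative priors. Data and prior scales are invented for illustration.
import numpy as np
import arviz as az
import pymc as pm

y = np.array([0.20, 0.35, 0.10, 0.80, 0.95, 0.60])    # trial log odds ratios (toy)
se = np.array([0.15, 0.20, 0.18, 0.25, 0.22, 0.30])   # their standard errors (toy)
comp = np.array([0, 0, 0, 1, 1, 1])                   # which comparison each trial informs

with pm.Model() as common_variance:
    d = pm.Normal("d", mu=0.0, sigma=10.0, shape=2)         # pooled comparison effects
    tau = pm.HalfNormal("tau", sigma=2.0)                   # single between-trial SD
    theta = pm.Normal("theta", mu=d[comp], sigma=tau, shape=len(y))
    pm.Normal("obs", mu=theta, sigma=se, observed=y)
    trace_common = pm.sample(1000, tune=1000, chains=2, progressbar=False)

with pm.Model() as per_comparison_informative:
    d = pm.Normal("d", mu=0.0, sigma=10.0, shape=2)
    tau = pm.HalfNormal("tau", sigma=0.5, shape=2)          # moderately informative, one SD per comparison
    theta = pm.Normal("theta", mu=d[comp], sigma=tau[comp], shape=len(y))
    pm.Normal("obs", mu=theta, sigma=se, observed=y)
    trace_info = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(az.summary(trace_common, var_names=["d", "tau"]))
print(az.summary(trace_info, var_names=["d", "tau"]))
```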
NASA Astrophysics Data System (ADS)
Hastuti, S.; Harijono; Murtini, E. S.; Fibrianto, K.
2018-03-01
This study aims to investigate the use of parametric and non-parametric approaches for the sensory RATA (Rate-All-That-Apply) method. Ledre, a unique local food product of Bojonegoro, was used as the point of interest, and 319 panelists were involved in the study. The results showed that ledre is characterized by an easily crushed texture, stickiness in the mouth, a stingy sensation and ease of swallowing. It also has a strong banana flavour and a brown colour. Compared with eggroll and semprong, ledre shows more variation in taste as well as in roll length. As the RATA questionnaire is designed to collect categorical data, a non-parametric approach is the common statistical procedure. However, similar results were also obtained with the parametric approach, despite the non-normally distributed data. Thus, it is suggested that the parametric approach can be applicable for consumer studies with a large number of respondents, even though the data may not satisfy the assumptions of ANOVA (analysis of variance).
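A minimal sketch of the parametric versus non-parametric comparison made above: run a one-way ANOVA and a Kruskal-Wallis test on the same ordinal ratings of one attribute across three products and compare the conclusions. The rating data are synthetic.

```python
# Compare a one-way ANOVA with a Kruskal-Wallis test on the same ordinal ratings.
import numpy as np
from scipy.stats import f_oneway, kruskal

rng = np.random.default_rng(5)
ledre    = rng.integers(3, 6, 100)     # toy 0-5 style intensity ratings for one attribute
eggroll  = rng.integers(1, 4, 100)
semprong = rng.integers(1, 5, 100)

F, p_anova = f_oneway(ledre, eggroll, semprong)
H, p_kw = kruskal(ledre, eggroll, semprong)
print(f"ANOVA:          F = {F:.2f}, p = {p_anova:.3g}")
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p_kw:.3g}")
# With large samples the two tests usually agree, which is the point made in the abstract.
```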
Coping and experiential avoidance: unique or overlapping constructs?
Karekla, Maria; Panayiotou, Georgia
2011-06-01
The present study examined associations between coping as measured by the Brief COPE and experiential avoidance as measured by the AAQ-II, and the role of both constructs in predicting psychological distress and well-being. Specifically, associations between experiential avoidance and other types of coping were examined, and factor analysis addressed the question of whether experiential avoidance is part of coping or a related but independent construct. Results showed that experiential avoidance loads on the same factor as other emotion-focused and avoidant types of coping. The higher people are in experiential avoidance, the more they tend to utilize these types of coping strategies. Both experiential avoidance and coping predicted psychological distress and well-being, with most variance explained by coping but some additional variance explained by experiential avoidance. ANOVAs also showed gender differences in experiential avoidance and coping approaches. Results are discussed in light of previous relevant findings and future treatment-relevant implications. Copyright © 2010 Elsevier Ltd. All rights reserved.
Kocalevent, Rüya-Daniela; Mierke, Annett; Danzer, Gerhard; Klapp, Burghard F.
2014-01-01
Objective: Adjustment disorders are re-conceptualized in the DSM-5 as a stress-related disorder; however, besides the impact of an identifiable stressor, the specification of a stress concept remains unclear. This study is the first to examine an existing stress model from the general population in patients diagnosed with adjustment disorders, using a longitudinal design. Methods: The study sample consisted of 108 patients consecutively admitted for adjustment disorders. Associations of stress perception, emotional distress, resources, and mental health were measured at three time points: the outpatients' presentation, admission for inpatient treatment, and discharge from the hospital. To evaluate a longitudinal stress model of ADs, we examined whether stress at admission predicted mental health at each of the three time points using multiple linear regressions and structural equation modeling. A series of repeated-measures one-way analyses of variance (rANOVAs) was performed to assess change over time. Results: Significant within-participant changes from baseline were observed between hospital admission and discharge with regard to mental health, stress perception, and emotional distress (p<0.001). Stress perception explained nearly half of the total variance (44%) of mental health at baseline; the adjusted R2 increased (0.48) when taking emotional distress (i.e., depressive symptoms) into account. The best predictor of mental health at discharge was the level of emotional distress (i.e., anxiety level) at baseline (β = −0.23, R2 corr = 0.56, p<0.001). With a CFI of 0.86 and an NFI of 0.86, the fit indices did not allow for acceptance of the stress model (Cmin/df = 15.26; RMSEA = 0.21). Conclusions: Stress perception is an important predictor in adjustment disorders, and mental health-related treatment goals are dependent on and significantly impacted by stress perception and emotional distress. PMID:24825165
Hakamata, Yuko; Izawa, Shuhei; Sato, Eisuke; Komi, Shotaro; Murayama, Norio; Moriguchi, Yoshiya; Hanakawa, Takashi; Inoue, Yusuke; Tagaya, Hirokuni
2013-11-01
Attentional bias (AB), selective information processing towards threat, can exacerbate anxiety and depression. Despite growing interest, physiological determinants of AB are yet to be understood. We examined whether stress hormone cortisol and its diurnal variation pattern contribute to AB. Eighty-seven healthy young adults underwent assessments for AB, anxious personality traits, depressive symptoms, and attentional function. Salivary cortisol was collected at three time points daily (at awakening, 30 min after awakening, and bedtime) for 2 consecutive days. We performed: (1) multiple regression analysis to examine the relationships between AB and the other measures and (2) analysis of variance (ANOVA) between groups with different cortisol variation patterns for the other measures. Multiple regression analysis revealed that higher cortisol levels at bedtime (p<0.001), an anxious personality trait (p=0.011), and years of education (p=0.036) were included in the optimal model to predict AB (adjusted R(2)=0.234, p<0.001). ANOVA further demonstrated significant mean differences in AB and depressive symptoms; individuals with blunted cortisol variation exhibited significantly greater AB and depression than those with moderate variation (p=0.037 and p=0.009, respectively). Neuropsychological assessment focused on attention and cortisol measurement at three time points daily. We showed that higher cortisol levels at bedtime and blunted cortisol variation are associated with greater AB. Individuals who have higher cortisol levels at diurnal trough might be at risk of clinical anxiety or depression but could also derive more benefits from the attentional-bias-modification program. © 2013 Elsevier B.V. All rights reserved.
Anthelmintic activity of Spigelia anthelmia extract against gastrointestinal nematodes of sheep.
Ademola, I O; Fagbemi, B O; Idowu, S O
2007-06-01
In vitro (larval development assay) and in vivo studies were conducted to determine the possible direct anthelmintic effect of ethanolic and aqueous extracts of Spigelia anthelmia towards different ovine gastrointestinal nematodes. The effect of the extracts on the development and survival of the infective larval stage (L(3)) was assessed. Best-fit LC(50) values were computed by a global model of non-linear regression curve fitting (95% confidence interval). Therapeutic efficacy of the ethanolic extracts administered orally at a dose rate of 125, 250, and 500 mg/kg, relative to a non-medicated control group of sheep harbouring naturally acquired infection of gastrointestinal nematodes, was evaluated in vivo. The presence of S. anthelmia extracts in the cultures decreased the survival of L(3) larvae. The LC(50) of the aqueous extract (0.714 mg/ml) differed significantly from the LC(50) of the ethanolic extract (0.628 mg/ml) against the strongyles (p < 0.05, paired t-test). Faecal egg counts on day 12 after treatment showed that the extract is effective, relative to control (one-way analysis of variance [ANOVA], Dunnett's multiple comparison test), at 500 mg/kg against Strongyloides spp. (p < 0.01), 250 mg/kg against Oesophagostomum spp. and Trichuris spp. (p < 0.05), and 125 mg/kg against Haemonchus spp. and Trichostrongylus spp. (p < 0.01). The effect of dose was significant in all cases, the effect of day after treatment was also highly significant in most cases, and the interaction between dose and day after treatment was significant (two-way ANOVA). S. anthelmia extract could, therefore, find application in the control of helminths in livestock through the ethnoveterinary medicine approach.
Mittal, Nitika; Xia, Zeyang; Chen, Jie; Stewart, Kelton T; Liu, Sean Shih-Yao
2013-05-01
To quantify the three-dimensional moments and forces produced by pretorqued nickel-titanium (NiTi) rectangular archwires fully engaged in 0.018- and 0.022-inch slots of central incisor and molar edgewise and prescription brackets. Ten identical acrylic dental models with retroclined maxillary incisors were fabricated for bonding with various bracket-wire combinations. Edgewise, Roth, and MBT brackets with 0.018- and 0.022-inch slots were bonded in a simulated 2 × 4 clinical scenario. The left central incisor and molar were sectioned and attached to load cells. Correspondingly sized straight and pretorqued NiTi archwires were ligated to the brackets using 0.010-inch ligatures. Each load cell simultaneously measured three force (Fx, Fy, Fz) and three moment (Mx, My, Mz) components. The faciolingual, mesiodistal, and inciso-occluso/apical axes of the teeth corresponded to the x, y, and z axes of the load cells, respectively. Each wire was removed and retested seven times. Three-way analysis of variance (ANOVA) examined the effects of wire type, wire size, and bracket type on the measured orthodontic load systems. Interactions among the three effects were examined and pair-wise comparisons between significant combinations were performed. The force and moment components on each tooth were quantified according to their local coordinate axes. The three-way ANOVA interaction terms were significant for all force and moment measurements (P < .05), except for Fy (P > .05). The pretorqued wire generates a significantly larger incisor facial crown torquing moment in the MBT prescription compared to Roth, edgewise, and the straight NiTi wire.
Chevalier, Thérèse M.; Stewart, Garth; Nelson, Monty; McInerney, Robert J.; Brodie, Norman
2016-01-01
It has been well documented that IQ scores calculated using Canadian norms are generally 2–5 points lower than those calculated using American norms on the Wechsler IQ scales. However, recent findings have demonstrated that the difference may be significantly larger for individuals with certain demographic characteristics, and this has prompted discussion about the appropriateness of using the Canadian normative system with a clinical population in Canada. This study compared the interpretive effects of applying the American and Canadian normative systems in a clinical sample. We used a multivariate analysis of variance (MANOVA) to calculate differences between IQ and Index scores in a clinical sample, and mixed-model ANOVAs to assess the pattern of differences across age and ability level. As expected, Full Scale IQ scores calculated using Canadian norms were systematically lower than those calculated using American norms, but differences were significantly larger for individuals classified as having extremely low or borderline intellectual functioning when compared with those who scored in the average range. Implications of clinically different conclusions for up to 52.8% of patients based on these discrepancies highlight a unique dilemma facing Canadian clinicians, and underscore the need for caution when choosing a normative system with which to interpret WAIS-IV results in the context of a neuropsychological test battery in Canada. Based on these findings, we offer guidelines for best practice for Canadian clinicians when interpreting data from neuropsychological test batteries that include different normative systems, and suggestions to assist with future test development. PMID:27246955
NASA Astrophysics Data System (ADS)
Lührs, Nikolas; Jager, Nicolas W.; Challies, Edward; Newig, Jens
2018-02-01
Public participation is potentially useful to improve public environmental decision-making and management processes. In corporate management, the Vroom-Yetton-Jago normative decision-making model has served as a tool to help managers choose appropriate degrees of subordinate participation for effective decision-making given varying decision-making contexts. But does the model recommend participatory mechanisms that would actually benefit environmental management? This study empirically tests the improved Vroom-Jago version of the model in the public environmental decision-making context. To this end, the key variables of the Vroom-Jago model are operationalized and adapted to a public environmental governance context. The model is tested using data from a meta-analysis of 241 published cases of public environmental decision-making, yielding three main sets of findings: (1) The Vroom-Jago model proves limited in its applicability to public environmental governance due to limited variance in its recommendations. We show that adjustments to key model equations make it more likely to produce meaningful recommendations. (2) We find that in most of the studied cases, public environmental managers (implicitly) employ levels of participation close to those that would have been recommended by the model. (3) An ANOVA revealed that such cases, which conform to model recommendations, generally perform better on stakeholder acceptance and environmental standards of outputs than those that diverge from the model. Public environmental management thus benefits from carefully selected and context-sensitive modes of participation.
Portfolio optimization with skewness and kurtosis
NASA Astrophysics Data System (ADS)
Lam, Weng Hoe; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi
2013-04-01
Mean and variance of return distributions are two important parameters of the mean-variance model in portfolio optimization. However, the mean-variance model will become inadequate if the returns of assets are not normally distributed. Therefore, higher moments such as skewness and kurtosis cannot be ignored. Risk averse investors prefer portfolios with high skewness and low kurtosis so that the probability of getting negative rates of return will be reduced. The objective of this study is to compare the portfolio compositions as well as performances between the mean-variance model and mean-variance-skewness-kurtosis model by using the polynomial goal programming approach. The results show that the incorporation of skewness and kurtosis will change the optimal portfolio compositions. The mean-variance-skewness-kurtosis model outperforms the mean-variance model because the mean-variance-skewness-kurtosis model takes skewness and kurtosis into consideration. Therefore, the mean-variance-skewness-kurtosis model is more appropriate for the investors of Malaysia in portfolio optimization.
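A minimal sketch of optimising over the first four moments with scipy. For brevity it uses a single weighted objective (a stand-in for the two-stage polynomial goal programming of the paper), synthetic heavy-tailed return series, and arbitrary preference weights.

```python
# Long-only, fully invested portfolio chosen by a weighted mean-variance-skewness-kurtosis
# objective. Return data and preference weights are synthetic, for illustration only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(6)
R = rng.standard_t(df=5, size=(1000, 4)) * 0.01 + np.array([0.0005, 0.0008, 0.0003, 0.0010])

def objective(w, l1=1.0, l2=1.0, l3=0.5, l4=0.5):
    port = R @ w
    # prefer high mean and skewness, low variance and kurtosis
    return -l1 * port.mean() + l2 * port.var() - l3 * skew(port) + l4 * kurtosis(port)

n = R.shape[1]
cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)   # fully invested
bounds = [(0.0, 1.0)] * n                                  # long-only
res = minimize(objective, np.full(n, 1.0 / n), bounds=bounds, constraints=cons)
print("weights:", np.round(res.x, 3))
```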
The relative age effect among elite youth competitive swimmers.
Costa, Aldo M; Marques, Mário C; Louro, Hugo; Ferreira, Sandra S; Marinho, Daniel A
2013-01-01
The aim of this study was to analyse the relative age effect (RAE) in competitive swimming. The best 50 Portuguese swimmers (12- to 18-year-olds) for the main individual swimming pool events of both genders were considered. Analysis was conducted on 7813 swimming event participants, taking account of respective swimmer birth dates and the Fédération Internationale de Natation points gained. Differences in the distribution of birth dates by quarter year were determined using the chi-square test. A one-way analysis of variance (ANOVA) was used to test for differences measured in points between individuals by quarterly birth year intervals. A two-way analysis of variance (ANOVA) was also conducted to test the interaction between gender and seasonal birth date with regard to performance. The results show an inequitable distribution (p<0.01) of birth dates by quarter for almost all age groups and both genders. However, the distribution of birth dates by quarter for each considered swim event shows that RAE seems to exist only for 12-year-old females and 12- to 15-year-old males. Analysing mean swimming performance, post-hoc results (p<0.01) show no consistency in RAE. Higher performance occurs among older swimmers only in the 100 m butterfly (female 1998, 1st≠2nd quarter, p=0.003). The results also show no interaction between gender and seasonal birth date (p<0.01). Findings of this study show that a higher number of swimmers, particularly males, are born in the first two quarters of the year, although there is mostly no effect of seasonal birth date on performance differences within the top 50 swimmers.
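The quarter-of-birth analysis can be sketched as follows: a chi-square goodness-of-fit test of observed birth-quarter counts against a uniform expectation, followed by a one-way ANOVA of FINA points across quarters. The counts and point values below are invented for illustration.

```python
# Chi-square test of birth-quarter counts plus a one-way ANOVA of points across quarters.
import numpy as np
from scipy.stats import chisquare, f_oneway

observed = np.array([68, 55, 42, 35])            # swimmers born in Q1..Q4 (toy counts)
chi2, p = chisquare(observed)                     # default expectation: equal counts per quarter
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

rng = np.random.default_rng(7)
points_by_quarter = [rng.normal(600 + d, 60, n) for d, n in zip([10, 5, 0, -5], observed)]
F, p_anova = f_oneway(*points_by_quarter)
print(f"ANOVA on FINA points across quarters: F = {F:.2f}, p = {p_anova:.4f}")
```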
Wu, Wenzheng; Ye, Wenli; Wu, Zichao; Geng, Peng; Wang, Yulei; Zhao, Ji
2017-01-01
The success of the 3D-printing process depends upon the proper selection of process parameters. However, the majority of current related studies focus on the influence of process parameters on the mechanical properties of the parts; the influence of process parameters on the shape-memory effect has been little studied. This study used the orthogonal experimental design method to evaluate the influence of the layer thickness H, raster angle θ, deformation temperature Td and recovery temperature Tr on the shape-recovery ratio Rr and maximum shape-recovery rate Vm of 3D-printed polylactic acid (PLA). The order of importance and the contribution of each experimental factor to the target index were determined by range analysis and ANOVA, respectively. The experimental results indicated that the recovery temperature exerted the greatest effect on the shape-recovery ratio, with a variance ratio of 416.10, whereas the layer thickness exerted the smallest effect, with a variance ratio of 4.902. The recovery temperature exerted the most significant effect on the maximum shape-recovery rate, with the highest variance ratio of 1049.50, whereas the raster angle exerted the minimum effect, with a variance ratio of 27.163. The results showed that the shape-memory effect of 3D-printed PLA parts depends strongly on the recovery temperature and more weakly on the deformation temperature and the 3D-printing parameters. PMID:28825617
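A minimal sketch of the range analysis used above, on an invented nine-run fragment with two factors: average the response over the runs at each level of a factor and take the spread (range) of those level means; the factor with the larger range is the more influential one.

```python
# Range analysis on a toy orthogonal-experiment fragment (values invented for illustration).
import pandas as pd

runs = pd.DataFrame({
    "layer_thickness_mm": [0.1, 0.1, 0.1, 0.2, 0.2, 0.2, 0.3, 0.3, 0.3],
    "recovery_temp_C":    [60, 70, 80, 60, 70, 80, 60, 70, 80],
    "Rr_percent":         [91, 95, 97, 90, 94, 96, 89, 93, 95],
})

for factor in ["layer_thickness_mm", "recovery_temp_C"]:
    level_means = runs.groupby(factor)["Rr_percent"].mean()
    print(factor, "level means:")
    print(level_means)
    print(factor, "range R =", level_means.max() - level_means.min())
# Here recovery temperature has the larger range, matching the ANOVA ranking in the abstract.
```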
Hernández-Jiménez, Claudia; García-Torrentera, Rogelio; Olmos-Zúñiga, J. Raúl; Jasso-Victoria, Rogelio; Gaxiola-Gaxiola, Miguel O.; Baltazares-Lipp, Matilde; Gutiérrez-González, Luis H.
2014-01-01
The use of dry gases during mechanical ventilation has been associated with the risk of serious airway complications. The goal of the present study was to quantify the plasma levels of TNF-alpha and IL-6 and to determine the radiological, hemodynamic, gasometric, and microscopic changes in lung mechanics in dogs subjected to short-term mechanical ventilation with and without humidification of the inhaled gas. The experiment was conducted for 24 hours in 10 dogs divided into two groups: Group I (n = 5), mechanical ventilation with dry oxygen dispensation, and Group II (n = 5), mechanical ventilation with oxygen dispensation using a moisture chamber. Variance analysis was used. No changes in physiological, hemodynamic, or gasometric, and radiographic constants were observed. Plasma TNF-alpha levels increased in group I, reaching a maximum 24 hours after mechanical ventilation was initiated (ANOVA p = 0.77). This increase was correlated to changes in mechanical ventilation. Plasma IL-6 levels decreased at 12 hours and increased again towards the end of the study (ANOVA p>0.05). Both groups exhibited a decrease in lung compliance and functional residual capacity values, but this was more pronounced in group I. Pplat increased in group I (ANOVA p = 0.02). Inhalation of dry gas caused histological lesions in the entire respiratory tract, including pulmonary parenchyma, to a greater extent than humidified gas. Humidification of inspired gases can attenuate damage associated with mechanical ventilation. PMID:25036811
Anterior Tibial Translation in Collegiate Athletes with Normal Anterior Cruciate Ligament Integrity
Rosene, John M.; Fogarty, Tracey D.
1999-01-01
Objective: To examine differences in anterior tibial translation (ATT) among sports, sex, and leg dominance in collegiate athletes with normal anterior cruciate ligament integrity. Design and Setting: Subjects from various athletic teams were measured for ATT in right and left knees. Subjects: Sixty subjects were measured for ATT with a KT-1000 knee arthrometer. Measurements: Statistical analyses were computed for each sex and included a 2 × 3 × 4 mixed-factorial analysis of variance (ANOVA) for anterior cruciate ligament displacement, right and left sides, and force and sport. A 2 × 2 × 3 mixed-factorial ANOVA was computed to compare means for sex and force. A 2 × 3 mixed-factorial ANOVA was computed to compare sex differences across 3 forces. Results: For males and females, no significant interactions were found among leg, force, and sport for mean ATT, for leg and sport or leg and force, or for translation values between dominant and nondominant legs. Males had a significant interaction for force and sport, and a significant difference was found for side of body, since the right side had less translation than the left side. Females had greater ATT than males at all forces. Conclusions: Sex differences exist for ATT, and differences in ATT exist among sports for both sexes. Differences between the right and left sides of the body should be expected when making comparisons of ligamentous laxity. PMID:16558565
Liu, Yong; Su, Chao; Zhang, Hong; Li, Xiaoting; Pei, Jingfei
2014-01-01
Many studies have indicated that industrialisation and urbanisation have caused serious soil heavy metal pollution since the industrial age. However, few previous studies have conducted a combined analysis of landscape pattern, urbanisation, industrialisation, and heavy metal pollution. This paper aimed to explore the relationships of heavy metals in the soil (Pb, Cu, Ni, As, Cd, Cr, Hg, and Zn) with landscape pattern, industrialisation, and urbanisation in Taiyuan city using multivariate analysis. The multivariate analysis included correlation analysis, analysis of variance (ANOVA), independent-sample T test, and principal component analysis (PCA). A geographic information system (GIS) was also applied to determine the spatial distribution of the heavy metals. The spatial distribution maps showed that the heavy metal pollution of the soil was more serious in the centre of the study area. The results of the multivariate analysis indicated that the correlations among heavy metals were significant, and industrialisation could significantly affect the concentrations of some heavy metals. Landscape diversity showed a significant negative correlation with the heavy metal concentrations. The PCA showed that a two-factor model for heavy metal pollution, industrialisation, and the landscape pattern could effectively demonstrate the relationships between these variables. The model explained 86.71% of the total variance of the data. Moreover, the first factor was mainly loaded with the comprehensive pollution index (P), and the second factor was primarily loaded with landscape diversity and dominance (H and D). An ordination of 80 samples could show the pollution pattern of all the samples. The results revealed that local industrialisation caused heavy metal pollution of the soil, but such pollution could respond negatively to the landscape pattern. The results of the study could provide a basis for agricultural, suburban, and urban planning. PMID:25251460
NASA Astrophysics Data System (ADS)
Sandrini-Neto, L.; Lana, P. C.
2012-06-01
Heterogeneity in the distribution of organisms occurs at a range of spatial scales, which may vary from a few centimeters to hundreds of kilometers. The exclusion of small-scale variability from routine sampling designs may confound comparisons at larger scales and lead to inconsistent interpretation of data. Despite its ecological and socio-economic importance, little is known about the spatial structure of the mangrove crab Ucides cordatus in the southwest Atlantic. Previous studies have commonly compared densities at relatively broad scales, relying on alleged distribution patterns (e.g., mangroves of distinct composition and structure). We have assessed variability patterns of U. cordatus in mangroves of Paranaguá Bay at four levels of spatial hierarchy (tens of km, km, tens of m, and m) using a nested ANOVA and variance components measures. The potential role of sediment parameters, pneumatophore density, and organic matter content in regulating observed patterns was assessed by multiple regression models. Densities of total and non-commercial size crabs varied mostly at the tens-of-m to km scales. Densities of commercial size crabs differed at the tens-of-m and tens-of-km scales. Variance components indicated that small-scale variation was the most important, contributing up to 70% of the crab density variability. Multiple regression models could not explain the observed variations. Processes driving differences in crab abundance were not related to the measured variables. Small-scale patchy distribution has direct implications for current management practices of U. cordatus. Future studies should consider processes operating at smaller scales, which are responsible for a complex mosaic of patches within previously described patterns.
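To make the nested design concrete, the short sketch below (not the authors' data or code) shows how variance components for a balanced, fully nested sampling hierarchy can be estimated from the nested-ANOVA mean squares; the level counts, simulated effects, and scale labels are hypothetical stand-ins for the regions/sites/patches/quadrats structure described above.

```python
# Minimal sketch: variance components for a balanced, fully nested design
# with three random spatial factors (regions ~tens of km, sites ~km,
# patches ~tens of m) and replicate quadrats (~m) as the residual level.
import numpy as np

rng = np.random.default_rng(0)
a, b, c, n = 3, 4, 5, 6          # regions, sites/region, patches/site, quadrats/patch
s2 = dict(region=0.5, site=1.0, patch=3.0, resid=2.0)   # hypothetical "true" components

# Simulate crab densities (log scale) under the nested random-effects model
y = (rng.normal(0, np.sqrt(s2["region"]), (a, 1, 1, 1))
     + rng.normal(0, np.sqrt(s2["site"]),   (a, b, 1, 1))
     + rng.normal(0, np.sqrt(s2["patch"]),  (a, b, c, 1))
     + rng.normal(0, np.sqrt(s2["resid"]),  (a, b, c, n)))

grand = y.mean()
m_a   = y.mean(axis=(1, 2, 3))    # region means, shape (a,)
m_ab  = y.mean(axis=(2, 3))       # site means,   shape (a, b)
m_abc = y.mean(axis=3)            # patch means,  shape (a, b, c)

# Mean squares from the nested ANOVA decomposition
ms_a = b * c * n * ((m_a - grand) ** 2).sum() / (a - 1)
ms_b = c * n * ((m_ab - m_a[:, None]) ** 2).sum() / (a * (b - 1))
ms_c = n * ((m_abc - m_ab[:, :, None]) ** 2).sum() / (a * b * (c - 1))
ms_e = ((y - m_abc[:, :, :, None]) ** 2).sum() / (a * b * c * (n - 1))

# Method-of-moments (expected mean squares) estimators of the components
comp = {
    "resid (m)":       ms_e,
    "patch (10s m)":   max((ms_c - ms_e) / n, 0.0),
    "site (km)":       max((ms_b - ms_c) / (c * n), 0.0),
    "region (10s km)": max((ms_a - ms_b) / (b * c * n), 0.0),
}
total = sum(comp.values())
for k, v in comp.items():
    print(f"{k:>16}: {v:5.2f}  ({100 * v / total:4.1f}% of total)")
```

The percentage breakdown printed at the end is the kind of summary used to judge which spatial scale contributes most to the total variability.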
Oily wastewater treatment by ultrafiltration using Taguchi experimental design.
Salahi, A; Mohammadi, T
2011-01-01
In this research, results of an experimental investigation on separation of oil from a real oily wastewater using an ultrafiltration (UF) polymeric membrane are presented. In order to enhance the performance of UF in API separator effluent treatment and to obtain a higher permeation flux (PF), the effects of operating factors on PF were studied. Five factors at four levels were investigated: trans-membrane pressure (TMP), temperature (T), cross-flow velocity (CFV), pH and salt concentration (SC). The Taguchi method (an L16 orthogonal array (OA)) was used. Analysis of variance (ANOVA) was applied to calculate the sum of squares, variance, error variance and percentage contribution of each factor to the response. The optimal levels thus determined for the five influential factors were: TMP, 3 bar; T, 40°C; CFV, 1.0 m/s; SC, 25 g/L; and pH, 8. The results showed that CFV and SC were the most and the least effective factors on PF, respectively. Increasing CFV, TMP, T and pH improved the performance of the UF membrane process due to enhancement of the driving force and reduction of fouling resistance. Also, the effects of oil concentration (OC) in the wastewater on PF and total organic carbon (TOC) rejection were investigated. Finally, the highest TOC rejection was found to be 85%.
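As a hedged illustration of the Taguchi ANOVA step described above, the sketch below computes factor sums of squares and percentage contributions from an orthogonal-array experiment. For brevity it uses the standard L9(3^4) array rather than the paper's L16(4^5); the factor names follow the abstract, but the flux values are invented.

```python
# Taguchi-style ANOVA decomposition: rank factors by their percentage
# contribution to the total sum of squares of the response (permeate flux).
import numpy as np

L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])   # standard L9(3^4) array
factors = ["TMP", "T", "CFV", "pH"]
flux = np.array([18., 22., 25., 21., 27., 24., 26., 23., 28.])  # hypothetical PF values

grand = flux.mean()
ss_total = ((flux - grand) ** 2).sum()

ss = {}
for j, name in enumerate(factors):
    # factor sum of squares: weighted deviation of its level means from the grand mean
    ss[name] = sum(((flux[L9[:, j] == lev]).mean() - grand) ** 2 * (L9[:, j] == lev).sum()
                   for lev in range(3))

for name in factors:
    print(f"{name:>4}: SS = {ss[name]:6.2f}, contribution = {100 * ss[name] / ss_total:5.1f}%")
```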
Chiu, Marcus Y L; Wei, Grace F W; Lee, Sing; Choovanichvong, Somrak; Wong, Frank H T
2013-02-01
Education and support for caregivers is lacking in Asia, and the peer-led FamilyLink Education Programme (FLEP) is one of the few provisions to address this service gap. This study aims to evaluate quantitatively its efficacy in reducing subjective burdens and empowering the participants. One hundred and nine caregiver participants in three Asian cities were successfully surveyed at pre-intervention, post-intervention and six-month intervals with a number of standard inventories. Mixed analysis of variance (ANOVA) procedures showed significant programme impact over time intervals for all sites, and subsequently an empowerment measurement model was tested. FLEP was found effective in reducing worry and displeasure, significantly improving intra-psychic strain, depression and all empowerment measures. The measurement model had an acceptably good fit. Baseline differences showed no interference with the programme efficacy. Apart from the initial support for FLEP, the current study also provides some insight into empowerment practice in mental health for Asia, whose sociocultural and political contexts are vastly different from those of developed countries. It remains to be seen whether qualitative data or more stringent research designs will yield consistent results and whether FLEP can also work in rural areas.
Al-Anzi, Bader S; Al-Burait, Abdul Aziz; Thomas, Ashly; Ong, Chi Siang
2017-12-01
The present work assesses the production rate of cell phone e-waste in Kuwait by comparing the number of clients of three telecommunication service providers, namely Zain, Ooredoo, and Viva, in the State of Kuwait over a period of 7 years from 2008 to 2015. An online survey was conducted to evaluate the growth in the number of clients of the three cell phone companies, and the data analysis was carried out using Statistical Package for the Social Sciences (SPSS) software. The growth percentage of the number of clients in each telecommunication company was predicted using an analysis of variance (ANOVA) test followed by a regression model. The study shows that there was an increase in the number of clients in all three companies (Zain, Ooredoo, and Viva) between 2008 and 2015, and it was estimated that approximately 7.9 million cell phone users would be reached by the first quarter of 2015. Based on this predicted number of cell phone users, the production of e-waste would be 3 kt per year with an average growth of 12.7%.
Exploring the theory of planned behavior to explain sugar-sweetened beverage consumption.
Zoellner, Jamie; Estabrooks, Paul A; Davy, Brenda M; Chen, Yi-Chun Yvonnes; You, Wen
2012-01-01
To describe sugar-sweetened beverage (SSB) consumption and to establish psychometric properties and utility of a Theory of Planned Behavior (TPB) instrument for SSB consumption. This cross-sectional survey included 119 southwest Virginia participants. Most of the respondents were female (66%), white (89%), and had at least a high school education (79%), and their average age was 41.4 ± 13.5 years. A validated beverage questionnaire was used to measure SSB. Eleven TPB constructs were assessed with a 56-item instrument. Analyses included descriptive statistics, 1-way ANOVA, Cronbach α, and multiple regression. Sugar-sweetened beverage intake averaged 457 ± 430 kcal/d. The TPB model provided a moderate explanation of SSB intake (R(2) = 0.38; F = 13.10, P < .01). Behavioral intentions had the strongest relationships with SSB consumption, followed by attitudes, perceived behavioral control, and subjective norms. The 6 belief constructs did not predict significant variance in the models. Future efforts to comprehensively develop and implement interventions guided by the TPB hold promise for reducing SSB intake.
Zhou, Yan; Cao, Hui
2013-01-01
We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as the substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment of analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the predictive power difference between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss and that its predictive power is comparable to that of PLS or PCR.
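The following sketch (simulated data, not the paper's spectra) illustrates the basic CLS calibration step and the augmentation idea: extra columns appended to the concentration matrix absorb spectral variation from components whose concentrations are unknown. The choice of the surrogate channel is purely hypothetical.

```python
# Classical least squares (CLS) calibration for spectra X = C @ S, plus an
# ACLS-style augmentation of the concentration matrix with a surrogate column.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 20, 200
# Two known analytes plus one interferent whose concentration is unknown
S_true = np.abs(rng.normal(size=(3, n_wavelengths)))          # pure-component spectra
C_full = np.abs(rng.normal(size=(n_samples, 3)))              # true concentrations
X = C_full @ S_true + 0.01 * rng.normal(size=(n_samples, n_wavelengths))

C_known = C_full[:, :2]                                       # only analytes 1-2 are known

# Plain CLS: estimate pure spectra from the known concentrations only
S_cls = np.linalg.lstsq(C_known, X, rcond=None)[0]

# ACLS-style augmentation: append a surrogate column (here a single selected
# spectral channel, a stand-in for the paper's low-correlation signals)
surrogate = X[:, [157]]                                       # hypothetical channel choice
C_aug = np.hstack([C_known, surrogate])
S_acls = np.linalg.lstsq(C_aug, X, rcond=None)[0]

# Predict concentrations of a new spectrum with both calibrations
x_new = np.abs(rng.normal(size=3)) @ S_true
c_cls  = np.linalg.lstsq(S_cls.T,  x_new, rcond=None)[0][:2]
c_acls = np.linalg.lstsq(S_acls.T, x_new, rcond=None)[0][:2]
print("CLS estimate :", c_cls)
print("ACLS estimate:", c_acls)
```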
2013-01-01
Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons is equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298
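As a rough frequentist analogue of the issue discussed above (invented data, not the paper's Bayesian models), the sketch below contrasts per-comparison heterogeneity estimates with a pooled "common variance" estimate using the DerSimonian-Laird moment estimator; pooling the Q statistics across comparisons is what "borrowing strength" amounts to here.

```python
# Per-comparison vs pooled ("common variance") between-trial heterogeneity,
# estimated with the DerSimonian-Laird moment estimator.
import numpy as np

def q_and_c(effects, variances):
    """Cochran's Q and the DerSimonian-Laird scaling constant C for one comparison."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return q, c, len(y)

# Hypothetical log-odds-ratios and within-trial variances for two comparisons
comparisons = {
    "A vs B": ([0.10, 0.35, -0.05, 0.60], [0.04, 0.06, 0.05, 0.08]),
    "A vs C": ([0.80, 0.20],              [0.09, 0.07]),   # sparse comparison
}

stats = {name: q_and_c(y, v) for name, (y, v) in comparisons.items()}
for name, (q, c, k) in stats.items():
    print(f"{name}: tau^2 = {max((q - (k - 1)) / c, 0.0):.3f}  (k = {k} trials)")

# Common-variance analogue: pool Q, degrees of freedom and C across comparisons
Q = sum(q for q, _, _ in stats.values())
C = sum(c for _, c, _ in stats.values())
df = sum(k - 1 for _, _, k in stats.values())
print(f"pooled tau^2 = {max((Q - df) / C, 0.0):.3f}")
```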
NASA Astrophysics Data System (ADS)
Ahmad, Mohd Azmier; Afandi, Nur Syahidah; Bello, Olugbenga Solomon
2017-05-01
This study investigates the adsorptive removal of malachite green (MG) dye from aqueous solutions using chemically modified lime-peel-based activated carbon (LPAC). The adsorbent prepared was characterized using FTIR, SEM, proximate analysis and BET techniques. Central composite design (CCD) in response surface methodology (RSM) was used to optimize the adsorption process. The effects of three variables, namely activation temperature, activation time and KOH impregnation ratio (IR), on the percentage of dye removal and LPAC yield were investigated. Based on the CCD design, quadratic and two-factor interaction (2FI) models were developed correlating the adsorption variables to the two responses. Analysis of variance (ANOVA) was used to judge the adequacy of the models. The optimum conditions for MG dye removal using LPAC are: activation temperature of 796 °C, activation time of 1.0 h and impregnation ratio of 2.6. The percentage of MG dye removal obtained was 94.68%, with an LPAC yield of 17.88%. The percentage error between predicted and experimental results for the removal of MG dye was 0.4%. Model predictions were in good agreement with experimental results, and LPAC was found to be effective in removing MG dye from aqueous solution.
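A minimal sketch of the response-surface step (invented design points and removals, variable names mirroring the abstract) is given below: a full quadratic model is fitted by least squares and evaluated on a grid to locate a predicted optimum.

```python
# Fit a full quadratic response-surface model in activation temperature (x1),
# activation time (x2) and impregnation ratio (x3), then grid-search the
# predicted optimum. Data are simulated, not the paper's experiments.
import numpy as np
from itertools import combinations

def quad_design(X):
    """Design matrix columns: 1, linear terms, two-factor interactions, squares."""
    cols = [np.ones(len(X))] + [X[:, j] for j in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    cols += [X[:, j] ** 2 for j in range(X.shape[1])]
    return np.column_stack(cols)

rng = np.random.default_rng(2)
X = rng.uniform([600, 1, 1], [850, 3, 3], size=(20, 3))        # temp (C), time (h), IR
removal = (95 - 0.0002 * (X[:, 0] - 800) ** 2 - 3 * (X[:, 1] - 1.2) ** 2
           - 2 * (X[:, 2] - 2.5) ** 2 + rng.normal(0, 0.5, 20))  # hypothetical % removal

beta, *_ = np.linalg.lstsq(quad_design(X), removal, rcond=None)

# Evaluate the fitted surface on a grid over the experimental region
g = np.array(np.meshgrid(np.linspace(600, 850, 26),
                         np.linspace(1, 3, 21),
                         np.linspace(1, 3, 21))).reshape(3, -1).T
pred = quad_design(g) @ beta
best = g[np.argmax(pred)]
print(f"predicted optimum: T = {best[0]:.0f} C, t = {best[1]:.1f} h, IR = {best[2]:.1f}")
```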
Surveying the comfort perception of the ergonomic design of bluetooth earphones.
Chiu, Hsiao-Ping; Chiang, Hsin-Yu; Liu, Chien-Hsiou; Wang, Ming-Hsu; Chiou, Wen-Ko
2014-01-01
Bluetooth earphones can facilitate communications among workers engaged in manual (e.g., professional drivers) or visual tasks (e.g., security guards). If workers remove their Bluetooth earphones due to poor fit, then communication effectiveness will decline, especially during manual or visual tasks. The objectives were (1) to identify which design properties of Bluetooth earphones contribute to user comfort, and (2) to identify whether personal characteristics (i.e., gender and ear shape in this study) are related to differences in comfort perception during earphone use. 198 participants were recruited for this study. Individuals used four models of Bluetooth earphones in randomized order while performing computer tasks and then completed questionnaires on comfort perception specifically designed for this study. A 2 × 3 × 4 mixed-design analysis of variance (ANOVA) was conducted to investigate the effects of gender, ear shape, and model condition. The results indicated that there were significant effects of model on comfort perception. For earplugs, the shape of the earphone and the elasticity of the material are important ergonomic concerns for improving comfort perception. In addition, an adjustable tail length is an important ergonomic design property for the ear-hook. The information gained in this study should be useful in improving the ergonomic fit of Bluetooth earphones.
Sánchez, Alberto; García, Manuel; Sebastián, Miguel Angel; Camacho, Ana María
2014-01-01
This work presents a hybrid (experimental-computational) application for improving the vibration behavior of structural components using a lightweight multilayer composite. The vibration behavior of a flat steel plate has been improved by gluing on a lightweight composite formed by a core of polyurethane foam and two paper mats placed on its faces. This composite enables the natural frequencies to be increased and the modal density of the plate to be reduced, moving the natural frequencies of the plate out of the excitation range and thereby improving the vibration behavior of the plate. A specific experimental model for measuring the Operating Deflection Shape (ODS) has been developed, which enables an evaluation of the goodness of the natural frequencies obtained with the computational model simulated by the finite element method (FEM). The FEM model of the composite + flat steel plate was used to conduct a parametric study, and the most influential factors for the 1st, 2nd and 3rd modes were identified using a multifactor analysis of variance (Multifactor-ANOVA). The presented results can be easily particularized for other cases, as they may be used in cycles of continuous improvement as well as in product development at the material, piece, and complete-system levels. PMID:24618779
A COMPARISON OF GREEN SUPPLY CHAIN MANAGEMENT PRACTICES AMONG INDUSTRIES SECTORS IN CHINA
NASA Astrophysics Data System (ADS)
Sun, Ying; Miyadera, Tetsuhiko; Fujita, Tsuyoshi
This paper aims to examine the differences in green supply chain management (GSCM) implementation among the chemical, automobile and machinery industries in China, based on a questionnaire survey, designed from an industrial ecology perspective, conducted at two industrial parks in Shenyang city. Exploratory factor analysis and one-way analysis of variance (ANOVA) were used to analyze the data. The main result was that the GSCM practices of the three industries are still at an early stage. The level of GSCM practices of the automobile industry (promoted by international market competition) was higher than those of the chemical and machinery industries (promoted by domestic laws and policies).
Increasing selection response by Bayesian modeling of heterogeneous environmental variances
USDA-ARS?s Scientific Manuscript database
Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...
Davies, Craig; Coetzee, Maureen; Lyons, Candice L
2016-06-14
Constant and fluctuating temperatures influence important life-history parameters of malaria vectors, which has implications for community organization and the malaria disease burden. The effects of environmental temperature on the hatch rate, survivorship and development rate of Anopheles arabiensis and An. quadriannulatus under conditions of inter- and intra-specific competition were studied. The eggs and larvae of laboratory-established colonies were reared under controlled conditions at one constant (25 °C) and two fluctuating (20-30 °C and 18-35 °C) temperature treatments at a ratio of 1:0 or 1:1 (An. arabiensis: An. quadriannulatus). Monitoring of hatch rate, development rate and survival was done at three intervals, 6 to 8 h apart, depending on developmental stage. Parametric ANOVAs were used where the assumptions of equal variances and normality were met, and a Welch ANOVA where the assumption of equal variance was violated (α = 0.05). Temperature significantly influenced the measured life-history traits and, importantly, this was evident when these species co-occurred. A constant temperature resulted in a higher hatch rate in single-species larval treatments (P < 0.05). The 18-35 °C treatment generally reduced survivorship, except for An. arabiensis in mixed-species larval treatments, where it was similar to the values reported for 25 °C. Survivorship of both species at 20-30 °C was not significantly impacted, and adult production was high across species treatments. The development rates at 25 °C and 20-30 °C were significantly different between species when reared alone and in mixed species, from larvae and from eggs. The effect of temperature was more pronounced at 18-35 °C, with An. arabiensis developing faster under both competitive scenarios and An. quadriannulatus slower, notably in the presence of its competitor (P < 0.05). The influence of temperature treatment on the development rate and survival from egg/larvae to adult differed across species treatments. Fluctuating temperatures incorporating the extremes influence the key life-history parameters measured here, with An. arabiensis outcompeting An. quadriannulatus under these conditions. The quantification of the response variables measured here improves our knowledge of the link between temperature and species interactions and provides valuable information for modelling of vector population dynamics.
Modeling Heterogeneous Variance-Covariance Components in Two-Level Models
ERIC Educational Resources Information Center
Leckie, George; French, Robert; Charlton, Chris; Browne, William
2014-01-01
Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…
Hargitai, János; Vezendi, László; Vigstrup, Jørgen; Eisgart, Finn; Lundbye-Christensen, Søren; Hargitai, Bálint; Vorum, Henrik
2013-12-20
A strong association exists between the use of tamsulosin and the occurrence of intraoperative floppy iris syndrome. Several methods have been advocated to overcome the progressive intraoperative miosis. Our purpose was to investigate the effect of a mydriatic-cocktail-soaked cellulose sponge on perioperative pupil diameter in tamsulosin-treated patients undergoing elective cataract surgery. Patients using tamsulosin were dilated either with a mydriatic-cocktail-soaked sponge (group 1) or with a conventional eyedrop regimen (group 2). Control patients not taking any α1 adrenergic receptor inhibitors were also dilated with the mydriatic sponge (group 3). In all groups oxybuprocaine 0.4%, cocaine 4%, tropicamide 1%, phenylephrine 10%, and diclofenac 0.1% along with chloramphenicol 0.5% were used preoperatively. Pupil diameter (mm) was measured preoperatively, after nucleus delivery, and before IOL implantation. Adverse effects associated with the use of the sponge, minor and major intraoperative complications, the use of iris retractors and operation time were recorded. Differences in general between groups were analyzed with a one-way analysis of variance (ANOVA); differences between groups in proportions were assessed by Fisher's exact test. Mean pupil diameter (mm) was, preoperatively: 7.52 ± 1.21, 7.30 ± 1.55 and 7.99 ± 0.96 (ANOVA: p = 0.079); after nucleus delivery: 6 ± 1.20, 6.29 ± 1.12 and 6.52 ± 0.81 (ANOVA: p = 0.123); before IOL implantation: 5.46 ± 1.06, 5.83 ± 1.09 and 6.17 ± 0.89 (ANOVA: p = 0.0291). No adverse effect related to sponge use was detected. The frequency of minor complications and iris hook use was similar in the two tamsulosin-treated groups. Operation time did not differ significantly in the three groups. We found that using a mydriatic-cocktail-soaked wick - an alternative way to achieve intraoperative mydriasis for cataract surgery - was as effective and safe as the conventional repeated eyedrop regimen for tamsulosin-treated patients. Current Controlled Trials ISRCTN37834752.
Custom-Molded Foot-Orthosis Intervention and Multisegment Medial Foot Kinematics During Walking
Cobb, Stephen C.; Tis, Laurie L.; Johnson, Jeffrey T.; Wang, Yong “Tai”; Geil, Mark D.
2011-01-01
Context: Foot-orthosis (FO) intervention to prevent and treat numerous lower extremity injuries is widely accepted clinically. However, the results of quantitative gait analyses have been equivocal. The foot models used, participants receiving intervention, and orthoses used might contribute to the variability. Objective: To investigate the effect of a custom-molded FO intervention on multisegment medial foot kinematics during walking in participants with low-mobile foot posture. Design: Crossover study. Setting: University biomechanics and ergonomics laboratory. Patients or Other Participants: Sixteen participants with low-mobile foot posture (7 men, 9 women) were assigned randomly to 1 of 2 FO groups. Interventions: After a 2-week period to break in the FOs, individuals participated in a gait analysis that consisted of 5 successful walking trials (1.3 to 1.4 m/s) during no-FO and FO conditions. Main Outcome Measure(s): Three-dimensional displacements during 4 subphases of stance (loading response, midstance, terminal stance, preswing) were computed for each multisegment foot model articulation. Results: Repeated-measures analyses of variance (ANOVAs) revealed that rearfoot complex dorsiflexion displacement during midstance was greater in the FO than the no-FO condition (F1,14 = 5.24, P = .04, partial η2 = 0.27). Terminal stance repeated-measures ANOVA results revealed insert-by-insert condition interactions for the first metatarsophalangeal joint complex (F1,14 = 7.87, P = .01, partial η2 = 0.36). However, additional follow-up analysis did not reveal differences between the no-FO and FO conditions for the balanced traditional orthosis (F1,14 = 4.32, P = .08, partial η2 = 0.38) or the full-contact orthosis (F1,14 = 4.10, P = .08, partial η2 = 0.37). Conclusions: Greater rearfoot complex dorsiflexion during midstance associated with FO intervention may represent improved foot kinematics in people with low-mobile foot postures. Furthermore, FO intervention might partially correct dysfunctional kinematic patterns associated with low-mobile foot postures. PMID:21944067
Planetarium instructional efficacy: A research synthesis
NASA Astrophysics Data System (ADS)
Brazell, Bruce D.
The purpose of the current study was to explore the instructional effectiveness of the planetarium in astronomy education using meta-analysis. A review of the literature revealed 46 studies related to planetarium efficacy. However, only 19 of the studies satisfied selection criteria for inclusion in the meta-analysis. Selected studies were then subjected to coding procedures, which extracted information such as subject characteristics, experimental design, and outcome measures. From these data, 24 effect sizes were calculated in the area of student achievement and five effect sizes were determined in the area of student attitudes using reported statistical information. Mean effect sizes were calculated for both the achievement and the attitude distributions. Additionally, each effect size distribution was subjected to homogeneity analysis. The attitude distribution was found to be homogeneous with a mean effect size of -0.09, which was not significant, p = .2535. The achievement distribution was found to be heterogeneous with a statistically significant mean effect size of +0.28, p < .05. Since the achievement distribution was heterogeneous, the analog to the ANOVA procedure was employed to explore variability in this distribution in terms of the coded variables. The analog to the ANOVA procedure revealed that the variability introduced by the coded variables did not fully explain the variability in the achievement distribution beyond subject-level sampling error under a fixed effects model. Therefore, a random effects model analysis was performed which resulted in a mean effect size of +0.18, which was not significant, p = .2363. However, a large random effect variance component was determined indicating that the differences between studies were systematic and yet to be revealed. The findings of this meta-analysis showed that the planetarium has been an effective instructional tool in astronomy education in terms of student achievement. However, the meta-analysis revealed that the planetarium has not been a very effective tool for improving student attitudes towards astronomy.
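For readers unfamiliar with the synthesis steps mentioned above, the sketch below (invented effect sizes) shows the fixed-effect weighted mean and the Q homogeneity statistic whose significance motivates moderator analyses (the "analog to the ANOVA") or a random-effects model.

```python
# Fixed-effect mean effect size and Cochran's Q homogeneity test,
# the basic building blocks of the meta-analytic synthesis described above.
import numpy as np
from scipy import stats

d = np.array([0.45, 0.10, 0.62, 0.05, 0.30, 0.55])   # hypothetical study effect sizes
v = np.array([0.02, 0.04, 0.03, 0.05, 0.02, 0.06])   # their sampling variances

w = 1.0 / v
d_bar = np.sum(w * d) / np.sum(w)                     # inverse-variance weighted mean
se = np.sqrt(1.0 / np.sum(w))
Q = np.sum(w * (d - d_bar) ** 2)                      # Cochran's Q homogeneity statistic
p = stats.chi2.sf(Q, df=len(d) - 1)

print(f"mean effect size = {d_bar:.2f} (SE {se:.2f}), Q = {Q:.2f}, p = {p:.3f}")
# A significant Q (heterogeneous distribution) is what motivates grouping the
# studies by coded variables or switching to a random-effects model.
```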
Hagquist, Curt; Andrich, David
2017-09-19
Rasch analysis with a focus on Differential Item Functioning (DIF) is increasingly used for the examination of psychometric properties of health outcome measures. To take account of DIF in order to retain precision of measurement, splitting DIF items into separate sample-specific items has become a frequently used technique. The purpose of the paper is to present and summarise recent advances in the analysis of DIF in a unified methodology. In particular, the paper focuses on the use of analysis of variance (ANOVA) as a method to simultaneously detect uniform and non-uniform DIF, the need to distinguish between real and artificial DIF, and the trade-off between reliability and validity. An illustrative example from health research is used to demonstrate how DIF, in this case between genders, can be identified, quantified and, under specific circumstances, accounted for using the Rasch model. Rasch analyses of DIF were conducted on a composite measure of psychosomatic problems using Swedish data from the Health Behaviour in School-aged Children study for grade 9 students collected during the 1985-2014 time period. The procedures demonstrate how DIF can be identified efficiently by ANOVA of residuals, and how the magnitude of DIF can be quantified and potentially accounted for by resolving items according to identifiable groups and using principles of test equating on the resolved items. The results of the analysis also show that the real DIF in some items does affect person measurement estimates. Firstly, in order to distinguish between real and artificial DIF, the items showing DIF initially should not be resolved simultaneously but sequentially. Secondly, while resolving instead of deleting a DIF item may retain reliability, both options may affect the content validity negatively. Resolving items with DIF is not justified if the source of the DIF is relevant to the content of the variable; in that case, resolving DIF may deteriorate the validity of the instrument. Generally, decisions on resolving items to deal with DIF should also rely on external information.
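The sketch below (simulated residuals, not the HBSC data) illustrates the ANOVA-of-residuals DIF screen in its usual two-way form: item residuals are analysed by group and by class interval along the latent trait, with the group main effect pointing to uniform DIF and the group-by-class-interval interaction pointing to non-uniform DIF. All variable names are hypothetical.

```python
# Two-way ANOVA screen for DIF on a single item using standardized residuals.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
n = 600
df = pd.DataFrame({
    "gender": rng.choice(["girl", "boy"], n),
    "class_interval": rng.integers(1, 6, n).astype(str),   # 5 strata along the trait
})
# Simulate residuals with a small uniform-DIF shift for one group
df["residual"] = rng.normal(0, 1, n) + np.where(df["gender"] == "girl", 0.3, 0.0)

model = ols("residual ~ C(gender) * C(class_interval)", data=df).fit()
print(anova_lm(model, typ=2))   # gender row ~ uniform DIF, interaction ~ non-uniform DIF
```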
Juntavee, Niwut; Sirisathit, Issarawas
2018-01-01
This study evaluated the marginal accuracy of full-arch zirconia restorations fabricated from two digital computer-aided design and computer-aided manufacturing (CAD-CAM) systems (Trios-3 and CS3500) in comparison to conventional cast metal restorations. A stainless steel model comprising two canine and two molar abutments was used as a master model for full-arch reconstruction. The canine and molar abutments were machined in a cylindrical shape with a 5° taper and a chamfer margin. The CAD-CAM systems based on the digital approach were used to construct the full-arch zirconia restorations. The conventional cast metal restorations were fabricated according to a conventional lost-wax technique using nickel-chromium alloys. Ten restorations were fabricated with each system. The marginal accuracy of each restoration was determined at four locations for each abutment. An analysis of variance (ANOVA) and Tukey's honest significant difference (HSD) multiple comparisons were used to determine statistically significant differences at a 95% confidence level. The mean values of marginal accuracy of restorations fabricated from conventional casting, Trios-3, and CS3500 were 48.59±4.16 μm, 53.50±5.66 μm, and 56.47±5.52 μm, respectively. ANOVA indicated a significant difference in marginal fit of restorations among the various systems. The marginal discrepancy of zirconia restorations fabricated with the CS3500 system demonstrated a significantly larger gap than that fabricated with the Trios-3 (3Shape) system (p < 0.05). Tukey's HSD multiple comparisons indicated that the zirconia restorations fabricated with either CS3500 or Trios-3 demonstrated a significantly larger marginal gap than the conventional cast metal restorations (p < 0.05). Full-arch zirconia restorations fabricated with the Trios-3 showed better marginal fit than those from the CS3500, although both were slightly less accurate than the conventional cast restorations. However, the marginal discrepancies of restorations produced by both CAD-CAM systems were within the clinically acceptable range and sufficiently precise for the construction of full-arch zirconia restorations.
Khan, Mohammad Jakir Hossain; Hussain, Mohd Azlan; Mujtaba, Iqbal Mohammed
2014-01-01
Polypropylene is one type of plastic that is widely used in our everyday life. This study focuses on the identification and justification of the optimum process parameters for polypropylene production in a novel pilot-plant-based fluidized bed reactor. This first-of-its-kind statistical modeling with experimental validation for the process parameters of polypropylene production was conducted by applying the ANOVA (analysis of variance) method to response surface methodology (RSM). Three important process variables, i.e., reaction temperature, system pressure and hydrogen percentage, were considered as the important input factors for the polypropylene production in the analysis performed. In order to examine the effect of process parameters and their interactions, the ANOVA method was utilized among a range of other statistical diagnostic tools such as the correlation between actual and predicted values, the residuals and predicted response, outlier t plot, 3D response surface and contour analysis plots. The statistical analysis showed that the proposed quadratic model had a good fit with the experimental results. At optimum conditions with a temperature of 75°C, system pressure of 25 bar and hydrogen percentage of 2%, the highest polypropylene production obtained is 5.82% per pass. Hence it is concluded that the developed experimental design and proposed model can be successfully employed with over a 95% confidence level for optimum polypropylene production in a fluidized bed catalytic reactor (FBCR). PMID:28788576
NASA Astrophysics Data System (ADS)
Rout, Sachindra K.; Choudhury, Balaji K.; Sahoo, Ranjit K.; Sarangi, Sunil K.
2014-07-01
The modeling and optimization of a pulse tube refrigerator is a complicated task due to its geometric complexity and nature. The aim of the present work is to optimize the dimensions of the pulse tube and regenerator for an Inertance-Type Pulse Tube Refrigerator (ITPTR) by using Response Surface Methodology (RSM) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The Box-Behnken design of the response surface methodology is used in an experimental matrix, with four factors and two levels. The diameter and length of the pulse tube and regenerator are chosen as the design variables, where the rest of the dimensions and operating conditions of the ITPTR are constant. The required output responses are the cold head temperature (Tcold) and compressor input power (Wcomp). Computational fluid dynamics (CFD) has been used to model and solve the ITPTR. The CFD results agreed well with those of a previously published paper. Also, using the results from the 1-D simulation, RSM is conducted to analyse the effect of the independent variables on the responses. To check the accuracy of the model, the analysis of variance (ANOVA) method has been used. Based on the proposed mathematical RSM models, a multi-objective optimization study using NSGA-II has been performed to optimize the responses.
A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.
Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio
2017-11-01
Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
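A minimal sketch of the generative model (not the authors' estimator) is shown below: EMG-like windows are drawn as zero-mean Gaussian noise whose variance follows an inverse gamma distribution, and the distribution parameters are recovered from the per-window sample variances by simple moment matching rather than the marginal-likelihood procedure used in the paper.

```python
# Simulate EMG-like signals with inverse-gamma distributed variance and
# recover (alpha, beta) from the observed per-window variances.
import numpy as np

rng = np.random.default_rng(4)
alpha, beta = 4.0, 3.0                 # hypothetical inverse-gamma parameters
n_windows, n_samples = 2000, 200

# variance ~ InvGamma(alpha, beta); signal | variance ~ N(0, variance)
variances = 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta, size=n_windows)
emg = rng.normal(0.0, np.sqrt(variances)[:, None], size=(n_windows, n_samples))

# Per-window sample variances stand in for the rectified/smoothed EMG envelope
v_hat = emg.var(axis=1)

# Moment-matching estimates of (alpha, beta) from the mean/variance of v_hat
m, s2 = v_hat.mean(), v_hat.var()
alpha_hat = m ** 2 / s2 + 2.0          # from Var = mean^2 / (alpha - 2)
beta_hat = m * (alpha_hat - 1.0)       # from mean = beta / (alpha - 1)
print(f"alpha ~ {alpha_hat:.2f} (true {alpha}), beta ~ {beta_hat:.2f} (true {beta})")
```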
Egilmez, Ferhan; Ergun, Gulfem; Kaya, Bekir M.
2013-01-01
Objective: The objective of this study was to compare the microhardness of resin cements under different thicknesses of zirconia and the light transmittance of zirconia as a function of thickness. Study design: A total of 126 disc-shaped specimens (2 mm in height and 5 mm in diameter) were prepared from dual-cured resin cements (RelyX Unicem, Panavia F and Clearfil SA cement). Photoactivation was performed by using quartz tungsten halogen and light emitting diode light curing units under different thicknesses of zirconia. Then the specimens (n = 7 per group) were stored in dry conditions in total darkness at 37°C for 24 h. The Vickers hardness test was performed on the resin cement layer with a microhardness tester. Statistical significance was determined using multifactorial analysis of variance (ANOVA) (alpha=.05). Light transmittance of different thicknesses of zirconia (0.3, 0.5 and 0.8 mm) was measured using a hand-held radiometer (Demetron, Kerr). Data were analyzed using a one-way ANOVA test (alpha=.05). Results: ANOVA revealed that resin cement and light curing unit had significant effects on microhardness (p < 0.001). Additionally, greater zirconia thickness resulted in lower transmittance. There was no correlation between the amount of light transmitted and the microhardness of the dual-cured resin cements (r = 0.073, p = 0.295). Conclusion: Although different zirconia thicknesses might result in insufficient light transmission, dual-cured resin cements under zirconia restorations could have adequate microhardness. Key words: Zirconia, microhardness, light transmittance, resin cement. PMID:23385497
The effects of succinylcholine or low-dose rocuronium to aid endotracheal intubation of adult sows
Duke-Novakovski, Tanya; Ambros, Barbara; Auckland, Crissie D.; Harding, John C.S.
2012-01-01
This randomized, prospective, blinded study compared the use of succinylcholine or rocuronium to aid endotracheal intubation of 27 adult sows [mean body weight 261 ± 28 (standard deviation) kg]. Preliminary trials allowed development of the intubation technique and skills. The sows were premedicated with azaperone, atropine, and morphine, and anesthesia was induced with thiopental [6 mg/kg body weight (BW)]. Nine sows each received succinylcholine (1.0 mg/kg BW), rocuronium (0.5 mg/kg BW), or saline (15 mL) after induction. Increments of thiopental (1 mg/kg BW) were used if swallowing impaired intubation. Intubation was performed 45 s after injection of the test drug and was timed and scored. The intubation scores were analyzed with Kruskal-Wallis analysis of variance (ANOVA). Time taken for intubation, body weight, and total dose of thiopental were analyzed with ANOVA and Bonferroni’s multiple-comparisons test. No significant differences (at P < 0.05) were found between the groups with regard to intubation score, time taken for intubation, or total thiopental dose. Thus, neuromuscular blocking agents did not aid endotracheal intubation of adult sows anesthetized with thiopental. PMID:22754096
Evaluating the efficacy of a chemistry video game
NASA Astrophysics Data System (ADS)
Shapiro, Marina
A quasi-experimental pre-test/post-test intervention study using a within-group analysis was conducted with 45 undergraduate college chemistry students. The study investigated the effect of implementing a game-based learning environment in an undergraduate college chemistry course, in order to learn whether serious educational games (SEGs) can be used to achieve knowledge gains in complex chemistry concepts and to increase students' positive attitudes toward chemistry. To evaluate whether students learn chemistry concepts by participating in a chemistry game-based learning environment, a one-way repeated measures analysis of variance (ANOVA) was conducted across three time points (pre-test, post-test, and delayed post-test, all chemistry content exams). Results showed that there was an increase in exam scores over time, and the ANOVA indicated a statistically significant time effect. To evaluate whether students' attitudes towards chemistry increased as a result of participating in a chemistry game-based learning environment, a paired-samples t-test was conducted using a chemistry attitudinal survey by Mahdi (2014) as the pre- and post-test. Results of the paired-samples t-test indicated that there was no significant difference between pre-attitudinal scores and post-attitudinal scores.
Synthesis, structure characterization and catalytic activity of nickel tungstate nanoparticles
NASA Astrophysics Data System (ADS)
Pourmortazavi, Seied Mahdi; Rahimi-Nasrabadi, Mehdi; Khalilian-Shalamzari, Morteza; Zahedi, Mir Mahdi; Hajimirsadeghi, Seiedeh Somayyeh; Omrani, Ismail
2012-12-01
Taguchi robust design was applied to optimize the experimental parameters for controllable, simple and fast synthesis of nickel tungstate nanoparticles. NiWO4 nanoparticles were synthesized by a precipitation reaction involving addition of a nickel ion solution to the tungstate aqueous reagent, followed by formation of nickel tungstate nuclei, which are insoluble in aqueous media. The effects of various parameters such as nickel and tungstate concentrations, flow rate of reagent addition and reactor temperature on the diameter of the synthesized nickel tungstate nanoparticles were investigated experimentally with the aid of an orthogonal array design. The results of the analysis of variance (ANOVA) showed that the particle size of nickel tungstate can be effectively tuned by controlling the significant variables, namely nickel and tungstate concentrations and flow rate, while the temperature of the reactor has no considerable effect on the size of the NiWO4 particles. The ANOVA results proposed the optimum conditions for the synthesis of nickel tungstate nanoparticles via this technique. Also, under optimum conditions, nanoparticles of NiWO4 were prepared and their structure and chemical composition were characterized by means of EDAX, XRD, SEM, FT-IR spectroscopy, UV-vis spectroscopy, and photoluminescence. Finally, the catalytic activity of the nanoparticles in a cycloaddition reaction was examined.
LED Curing Lights and Temperature Changes in Different Tooth Sites
Armellin, E.; Bovesecchi, G.; Coppa, P.; Pasquantonio, G.; Cerroni, L.
2016-01-01
Objectives. The aim of this in vitro study was to assess thermal changes in tooth tissues during light exposure using two different LED curing units. The hypothesis was that no temperature increase could be detected within the dental pulp during polymerization irrespective of the use of a composite resin or a light-curing unit. Methods. Caries-free human first molars were selected, pulp residues were removed after root resection, and four calibrated type-J thermocouples were positioned. Two LED lamps were tested; temperature measurements were made on intact teeth and on the same tooth during curing of composite restorations. The data were analyzed by one-way analysis of variance (ANOVA), the Wilcoxon test, the Kruskal-Wallis test, and Pearson's χ² test. After ANOVA, the Bonferroni multiple comparison test was performed. Results. Polymerization data analysis showed that the temperature increase in the pulp chamber was higher than that measured without resin. Starlight PRO, under the same conditions as the Valo lamp, showed a lower temperature increase in pre- and intrapolymerization. A control group (without composite resin) was evaluated. Significance. Temperature increase during resin curing is a function of the rate of polymerization, due to the exothermic polymerization reaction, the energy from the light unit, and the time of exposure. PMID:27195282
Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.
Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank
2017-12-01
Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over a large thermal gradient and thus provide better performance (reported efficiency up to 11%) as compared to traditional TEGs comprising a single thermoelectric (TE) material. However, segmented TEGs are still in the early stages of development due to the inherent complexity in their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques along with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We have considered a comprehensive set of design parameters, such as the geometrical dimensions of the p-n legs, height of segmentation, hot-side temperature, and load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments as compared to the 3125 experiments needed by conventional optimization methods. The effect of environmental factors on the optimization of segmented TEGs is also studied. Taguchi results are validated against the results obtained using the traditional full factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.
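As a small illustration of the Taguchi machinery referred to above (invented power values, not the paper's data), the sketch below computes the "larger-the-better" signal-to-noise ratio used to rank factor levels before ANOVA apportions their contributions.

```python
# "Larger-the-better" Taguchi signal-to-noise ratio for a hypothetical TEG factor.
import numpy as np

def sn_larger_the_better(y):
    """S/N = -10 log10( mean(1 / y^2) ); higher is better."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Output power (mW) from three repeated trials at each of three leg-height levels
trials = {"leg height L1": [42.0, 45.0, 43.5],
          "leg height L2": [51.0, 49.5, 50.2],
          "leg height L3": [47.0, 46.2, 48.1]}

for level, y in trials.items():
    print(f"{level}: S/N = {sn_larger_the_better(y):.2f} dB")
# The level with the highest mean S/N across the orthogonal-array runs is taken
# as near-optimal for that factor; ANOVA on the S/N values then quantifies how
# much each factor contributes to the response.
```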
Influence of translucence/opacity and shade in the flexural strength of lithium disilicate ceramics
Santos, Mila Oliveira; do Amaral, Flávia Lucisano Botelho; França, Fabiana Mantovani Gomes; Basting, Roberta Tarkany
2015-01-01
Background: The lithium disilicate ceramic system consists of glass-ceramic ingots with different grades of translucence, which may influence their flexural strength. Aims: To assess the three-point flexural strength of different lithium disilicate-based ceramic ingots (IPS e.max Press/Ivoclar Vivadent) with different levels of translucence and shade. Materials and Methods: Six groups of ceramic ingots were selected to represent different levels of translucence and shade (HTA1, HTBL2, LTA2, LTB2, MO2, and HO). The specimens measured 25 mm × 5 mm × 2 mm (n = 10), according to ISO 6872 specifications, and were tested on a universal test machine (EMIC). Statistical Analysis Used: A one-way analysis of variance (ANOVA) was used (α = 0.05). Results: The results (in MPa) were: HTA1 = 392.98; HTBL2 = 390.74; LTA2 = 390.46; LTB2 = 389.92; MO2 = 390.43; HO = 391.96. ANOVA showed no significant difference among groups (P = 0.1528). Conclusions: Different levels of translucence, opacity and shade of the ingots did not affect their mechanical strength, and the use of these ceramics should be guided by the esthetic demands of each clinical situation. PMID:26430304
Detection of burst suppression patterns in EEG using recurrence rate.
Liang, Zhenhu; Wang, Yinghua; Ren, Yongshao; Li, Duan; Voss, Logan; Sleigh, Jamie; Li, Xiaoli
2014-01-01
Burst suppression is a unique electroencephalogram (EEG) pattern commonly seen in cases of severely reduced brain activity such as overdose of general anesthesia. It is important to detect burst suppression reliably during the administration of anesthetic or sedative agents, especially for cerebral-protective treatments in various neurosurgical diseases. This study investigates recurrence plot (RP) analysis for the detection of the burst suppression pattern (BSP) in EEG. The RP analysis is applied to EEG data containing BSPs collected from 14 patients. First, we obtained the best selection of parameters for the RP analysis. Then, the recurrence rate (RR), determinism (DET), and entropy (ENTR) were calculated. RR was then selected as the best BSP index using one-way analysis of variance (ANOVA) and multiple comparison tests. Finally, the performance of the RR analysis was compared with spectral analysis, bispectral analysis, approximate entropy, and the nonlinear energy operator (NLEO). ANOVA and multiple comparison tests showed that the RR could detect BSP and that it was superior to the other measures, with the highest sensitivity of suppression detection (96.49%, P = 0.03). Tracking BSP patterns is essential for clinical monitoring in critically ill and anesthetized patients. The proposed RR may provide an effective burst suppression detector for developing new patient monitoring systems.
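The sketch below (synthetic signals, illustrative embedding parameters rather than the paper's tuned values) shows how the recurrence rate is computed from a time-delay embedding, and why near-isoelectric, suppression-like segments yield a higher RR than bursting activity.

```python
# Recurrence rate (RR): fraction of point pairs in a time-delay embedding
# whose mutual distance falls below a threshold.
import numpy as np

def recurrence_rate(x, dim=3, delay=2, eps=0.2):
    """RR of the recurrence plot of signal x with the given embedding."""
    x = np.asarray(x, float)
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return np.mean(d < eps)

rng = np.random.default_rng(5)
t = np.arange(1000) / 100.0
active = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)   # burst-like activity
suppressed = 0.05 * rng.normal(size=t.size)                            # near-isoelectric EEG

print(f"RR (active)     = {recurrence_rate(active):.3f}")
print(f"RR (suppressed) = {recurrence_rate(suppressed):.3f}")
```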
NASA Astrophysics Data System (ADS)
Pourmortazavi, Seied Mahdi; Rahimi-Nasrabadi, Mehdi; Aghazadeh, Mustafa; Ganjali, Mohammad Reza; Karimi, Meisam Sadeghpour; Norouzi, Parviz
2017-12-01
This work focuses on the application of an orthogonal array design to the optimization of a facile direct carbonization reaction for the synthesis of neodymium carbonate nanoparticles, where the product particles are prepared by direct precipitation of their ingredients. To optimize the method, the influences of the major operating conditions on the dimensions of the neodymium carbonate particles were quantitatively evaluated through analysis of variance (ANOVA). It was observed that the crystals of the carbonate salt can be synthesized by controlling the neodymium concentration and flow rate, as well as the reactor temperature. Based on the results of the ANOVA, 0.03 M, 2.5 mL/min and 30 °C are the optimum values for the above-mentioned parameters, and controlling the parameters at these values yields nanoparticles with sizes of about 31 ± 2 nm. The product of this former stage was next used as the feed for a thermal decomposition procedure yielding neodymium oxide nanoparticles. The products were studied through X-ray diffraction (XRD), SEM, TEM, FT-IR and thermal analysis techniques. In addition, the photocatalytic activity of dysprosium carbonate and dysprosium oxide nanoparticles was investigated using degradation of methyl orange (MO) under ultraviolet light.
Beverage consumption in low income, "milk-friendly" families.
Glanville, N Theresa; McIntyre, Lynn
2009-01-01
Beverage consumption by poor, lone mother-led, "milk-friendly" families living in Atlantic Canada was characterized over a one-month income cycle. Beverage intake and food security status were assessed weekly, using a 24-hour dietary recall and the Cornell-Radimer food insecurity questionnaire. Families were classified as "milk friendly" if total consumption of milk was at least 720 mL on a single day during the month. Beverage intake was assessed using t-tests, analysis of variance (ANOVA), repeated measures ANOVA with post hoc comparisons, and chi-square analysis. Milk consumption by milk-friendly families (76; total sample, 129) was highest at the time of the month when they had the most money to spend. During all time intervals, mothers consumed the least amount of milk and children aged one to three years consumed the most. Mothers consumed carbonated beverages disproportionately, while children of all ages consumed more fruit juice/drink. Mothers' coffee consumption was profoundly increased when either they or their children were hungry. The quality of beverage intake by members of low-income households fluctuates in accordance with the financial resources available to purchase foods. Mothers' beverage intake is compromised by the degree of food insecurity the family experiences.
The effect of country wealth on incidence of breast cancer.
Coccia, Mario
2013-09-01
The aim of this study is to analyze the relationship between the incidence of breast cancer and income per capita across countries. Data on breast cancer incidence in 52 countries were obtained from GLOBOCAN, along with economic indicators of gross domestic product per capita from the World Bank. The numbers of computed tomography scanners and magnetic resonance imaging units (from the World Health Organization) were used as a surrogate for technology and access to screening for cancer diagnosis. Statistical analyses for correlation and regression were performed, along with an analysis of variance (ANOVA). A strong positive association between breast cancer incidence and gross domestic product per capita (Pearson's r = 65.4%), controlling for latitude and the density of computed tomography scanners and magnetic resonance imaging units, was found in countries of temperate zones. The estimated relationship suggests that a 1% higher gross domestic product per capita, within the temperate zones (latitudes), increases the expected age-standardized breast cancer incidence by about 35.6% (p < 0.001). ANOVA confirms these vital results. While some have argued that latitude and seasonality may affect breast cancer incidence, these findings suggest that wealthier nations may have a higher incidence of breast cancer independent of geographic location and screening technology.
LED Curing Lights and Temperature Changes in Different Tooth Sites.
Armellin, E; Bovesecchi, G; Coppa, P; Pasquantonio, G; Cerroni, L
2016-01-01
Objectives. The aim of this in vitro study was to assess thermal changes in tooth tissues during light exposure using two different LED curing units. The hypothesis was that no temperature increase could be detected within the dental pulp during polymerization, irrespective of the use of a composite resin or a light-curing unit. Methods. Caries-free human first molars were selected, pulp residues were removed after root resection, and four calibrated type-J thermocouples were positioned. Two LED lamps were tested; temperature measurements were made on intact teeth and on the same tooth during curing of composite restorations. A control group (without composite resin) was also evaluated. The data were analyzed by one-way analysis of variance (ANOVA), the Wilcoxon test, the Kruskal-Wallis test, and Pearson's χ². After ANOVA, the Bonferroni multiple comparison test was performed. Results. Analysis of the polymerization data showed that the temperature increase in the pulp chamber was higher with resin than without it. Starlight PRO, under the same conditions as the Valo lamp, showed a lower temperature increase in pre- and intrapolymerization. Significance. The temperature increase during resin curing is a function of the rate of polymerization, due to the exothermic polymerization reaction, the energy from the light unit, and the time of exposure.
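A hedged sketch of the one-way ANOVA followed by Bonferroni-corrected pairwise comparisons described above, using scipy (an assumed tool choice); the temperature-rise values for the control and the two lamps are invented.

# Hedged sketch: one-way ANOVA followed by Bonferroni-corrected pairwise t-tests.
# Temperature-rise values (deg C) for the groups are invented.
from itertools import combinations
import numpy as np
from scipy import stats

groups = {
    "control":       np.array([1.1, 1.3, 0.9, 1.2, 1.0]),
    "starlight_pro": np.array([2.4, 2.1, 2.6, 2.3, 2.5]),
    "valo":          np.array([3.1, 3.4, 2.9, 3.3, 3.0]),
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Bonferroni: multiply each pairwise p-value by the number of comparisons.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: corrected p = {min(p * len(pairs), 1.0):.4f}")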
Ramezaninia, Javad; Naghibi Sistani, Mohammad Mehdi; Ahangari, Zohreh; Gholinia, Hemmat; Jahanian, Iman; Gharekhani, Samaneh
2018-04-11
The aim of this study was to compare the effect of different modes of toothbrushing education (lecture, video and pamphlet) on the dental plaque index (PI) of adolescents. The cluster randomized intervention was performed on 128 participants aged 12 years, who were allocated into four groups based on the type of intervention: group 1, no intervention; groups 2, 3 and 4, education via lecture, video, and pamphlet, respectively (n = 32 each). Their plaque index was measured at baseline, at 24 h and two months later. Data were analyzed by repeated measures analysis of variance (ANOVA), one-way ANOVA, and independent and paired t-tests. The plaque indices of groups 2, 3 and 4 at 24 h (p values < 0.001) and two months (p values < 0.001) showed a significant reduction when compared to baseline. At 24 h, the lowest PI score was observed in the pamphlet group, followed by the video and lecture groups. After two months, the lowest PI score was measured in the lecture group, followed by the video and pamphlet groups; however, these differences were non-significant. Therefore, toothbrushing education via lecture, video and pamphlet reduced the dental plaque index with the same effectiveness.
Silveira, Anelise; Armijo-Olivo, Susan; Gadotti, Inae C; Magee, David
2014-01-01
To compare masticatory and cervical muscle tenderness and pain sensitivity in the hand (a remote region) between patients with temporomandibular disorders (TMD) and healthy controls. Twenty female subjects were diagnosed with chronic TMD, and 20 were considered healthy. Subjects completed the Neck Disability Index and the Limitations of Daily Functions in TMD questionnaire. Tenderness of the masticatory and cervical muscles and pain sensitivity in the hand were measured using an algometer. A three-way mixed analysis of variance (ANOVA) evaluated differences in muscle tenderness between groups. One-way ANOVA compared pain sensitivity in the hand between groups. Effect sizes were assessed using Cohen's guidelines. Significantly increased masticatory and cervical muscle tenderness and pain sensitivity in the hand were found in subjects with TMD when compared with healthy subjects. Moderate to high effect sizes showed the clinical relevance of the findings. The results of this study highlight the importance of assessing TMD patients not only in the craniofacial region but also in the neck and other parts of the body. Future studies should focus on testing the effectiveness of treatments addressing the neck and the pain sensitivity in the hand in patients with TMD.
Shukor, Hafiza; Abdeshahian, Peyman; Al-Shorgani, Najeeb Kaid Nasser; Hamid, Aidil Abdul; Rahman, Norliza A; Kalil, Mohd Sahaid
2016-10-01
Catalytic depolymerization of the mannan fraction of palm kernel cake (PKC) by mannanase was optimized to enhance the release of mannan-derived monomeric sugars for further application in acetone-butanol-ethanol (ABE) fermentation. The efficiency of enzymatic hydrolysis of PKC was studied by evaluating the effects of PKC concentration, mannanase loading, hydrolysis pH, reaction temperature and hydrolysis time on the production of fermentable sugars using one-way analysis of variance (ANOVA). The ANOVA results revealed that all factors studied had highly significant effects on total sugar liberated (P < 0.01). The optimum conditions for PKC hydrolysis were 20% (w/v) PKC concentration, 5% (w/w) mannanase loading, hydrolysis pH 4.5, 45 °C temperature and 72 h hydrolysis time. Enzymatic experiments under optimum conditions showed that 71.54 ± 2.54 g/L of total fermentable sugars were produced, including 67.47 ± 2.51 g/L mannose and 2.94 ± 0.03 g/L glucose. ABE fermentation of the sugar hydrolysate by Clostridium saccharoperbutylacetonicum N1-4 resulted in 3.27 ± 1.003 g/L biobutanol. Copyright © 2016 Elsevier Ltd. All rights reserved.
Rapid development of xylanase assay conditions using Taguchi methodology.
Prasad Uday, Uma Shankar; Bandyopadhyay, Tarun Kanti; Bhunia, Biswanath
2016-11-01
The present investigation is mainly concerned with the rapid development of extracellular xylanase assay conditions by using the Taguchi methodology. The extracellular xylanase was produced from Aspergillus niger (KP874102.1), a new strain isolated from a soil sample of the Baramura forest, Tripura West, India. Four physical parameters, including temperature, pH, buffer concentration and incubation time, were considered as key factors for xylanase activity and were optimized using the Taguchi robust design methodology for enhanced xylanase activity. The main effects, interaction effects and optimal levels of the process factors were determined using the signal-to-noise (S/N) ratio. The Taguchi method recommends the use of the S/N ratio to measure quality characteristics. Based on analysis of the S/N ratio, optimal levels of the process factors were determined. Analysis of variance (ANOVA) was performed to evaluate the statistically significant process factors. The ANOVA results showed that temperature contributed the maximum impact (62.58%) on xylanase activity, followed by pH (22.69%), buffer concentration (9.55%) and incubation time (5.16%). Predicted results showed that enhanced xylanase activity (81.47%) can be achieved with pH 2, temperature 50°C, buffer concentration 50 mM and incubation time 10 min.
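The Taguchi signal-to-noise calculation and ANOVA-style percent contributions described above can be sketched as follows; the L9 array, factor levels and activity values are hypothetical, not the reported data.

# Hedged sketch: larger-the-better S/N ratios and ANOVA-style percent
# contributions for a Taguchi screen. The L9 array and activities are invented.
import numpy as np
import pandas as pd

# Hypothetical L9 design: 4 factors at 3 levels, response = xylanase activity (%).
runs = pd.DataFrame({
    "temperature": [40, 40, 40, 50, 50, 50, 60, 60, 60],
    "pH":          [2, 4, 6, 2, 4, 6, 2, 4, 6],
    "buffer_mM":   [25, 50, 75, 50, 75, 25, 75, 25, 50],
    "time_min":    [10, 20, 30, 30, 10, 20, 20, 30, 10],
    "activity":    [70, 55, 42, 81, 62, 50, 64, 48, 39],
})

# Larger-the-better S/N ratio; with one replicate per run it reduces to 20*log10(y).
runs["sn"] = -10 * np.log10(1.0 / runs["activity"] ** 2)

grand = runs["sn"].mean()
total_ss = ((runs["sn"] - grand) ** 2).sum()
for factor in ["temperature", "pH", "buffer_mM", "time_min"]:
    level_means = runs.groupby(factor)["sn"].mean()
    ss = (3 * (level_means - grand) ** 2).sum()  # 3 runs per level
    print(f"{factor}: contribution = {100 * ss / total_ss:.1f}%")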
Radiation shielding quality assurance
NASA Astrophysics Data System (ADS)
Um, Dallsun
For radiation shielding quality assurance, the validity and reliability of the neutron transport code MCNP, now one of the most widely used radiation shielding analysis codes, were checked against many benchmark experiments. In addition, the following work was performed in this thesis as a practical example. An integral neutron transport experiment to measure the effect of neutron streaming in iron and void was performed with the Dog-Legged Void Assembly at Knolls Atomic Power Laboratory in 1991. Neutron flux was measured at six different locations with methane detectors and a BF-3 detector. The main purpose of the measurements was to provide a benchmark against which various neutron transport calculation tools could be compared. Those data were used in the verification of the Monte Carlo Neutron & Photon Transport Code, MCNP, with a corresponding model of the assembly. Experimental and calculated results were compared in two ways: as the total integrated neutron flux over the energy range from 10 keV to 2 MeV, and as the neutron spectrum across that energy range. Both results agree within the statistical error of +/-20%. MCNP results were also compared with those of TORT, a three-dimensional discrete ordinates code developed by Oak Ridge National Laboratory. MCNP results are superior to the TORT results at all detector locations except one. This shows that MCNP is a very powerful tool for the analysis of neutron transport through iron and air and, further, that it can be used as a powerful tool for radiation shielding analysis. As one application of the analysis of variance (ANOVA) to neutron and gamma transport problems, uncertainties in the calculated values of critical K were evaluated using ANOVA on the statistical data.
Christ, George J; Hsieh, Yi; Zhao, Weixin; Schenk, Gregory; Venkateswarlu, Karicheti; Wang, Hong-Zhan; Tar, Moses T; Melman, Arnold
2006-05-01
To establish the methods, feasibility and utility of evaluating the impact of diabetes on bladder and erectile function in the same rat, as more than half of diabetic patients have bladder dysfunction, and half of diabetic men have erectile dysfunction, but the severity of coincident disease has not been rigorously assessed. In all, 16 F-344 rats had diabetes induced by streptozotocin (STZ), and were divided into insulin-treated (five) and untreated (11), and compared with age-matched controls (10), all assessed in parallel. All STZ rats were diabetic for 8-11 weeks. Cystometric studies were conducted on all rats, with cavernosometric studies conducted on a subset of rats. There were insulin-reversible increases in the following cystometric variables; bladder weight, bladder capacity, micturition volume, residual volume, micturition pressure and spontaneous activity (P < 0.05, in all, one-way analysis of variance, anova). Cavernosometry showed a diabetes-related, insulin-reversible decline in the cavernosal nerve-stimulated intracavernosal pressure (ICP) response at all levels of current stimulation (P < 0.05, in all one-way anova). Plotting erectile capacity (i.e. ICP) against bladder capacity showed no correlation between the extent of the decline in erectile capacity and the magnitude of the increase in bladder capacity. These studies extend previous work to indicate that the extent of diabetes-related bladder and erectile dysfunction can vary in the same rat. As such, these findings highlight the importance of evaluating the impact of diabetes on multiple organ systems in the lower urinary tract. Future studies using this model system should lead to a better understanding of the initiation, development, progression and coincidence of these common diabetic complications.
Alqahtani, Fawaz
2017-01-01
The purpose of this study was to determine the effect of two extraoral computer-aided design (CAD) and computer-aided manufacturing (CAM) systems, in comparison with conventional techniques, on the marginal fit of monolithic CAD/CAM lithium disilicate ceramic crowns. This is an in vitro interventional study. The study was carried out at the Department of Prosthodontics, School of Dentistry, Prince Sattam Bin Abdul-Aziz University, Saudi Arabia, from December 2015 to April 2016. A marginal gap of 60 lithium disilicate crowns was evaluated by scanning electron microscopy. In total, 20 pressable lithium disilicate (IPS e.max Press [Ivoclar Vivadent]) ceramic crowns were fabricated using the conventional lost-wax technique as a control group. The experimental all-ceramic crowns were produced based on a scan stone model and milled using two extraoral CAD/CAM systems: the Cerec group was fabricated using the Cerec CAD/CAM system, and the Trios group was fabricated using Trios CAD and milled using Wieland Zenotec CAM. One-way analysis of variance (ANOVA) and the Scheffe post hoc test were used for statistical comparison of the groups (α=0.05). The mean (±standard deviation) of the marginal gap of each group was as follows: the Control group was 91.15 (±15.35) µm, the Cerec group was 111.07 (±6.33) µm, and the Trios group was 60.17 (±11.09) µm. One-way ANOVA and the Scheffe post hoc test showed a statistically significant difference in the marginal gap between all groups. It can be concluded from the current study that all-ceramic crowns, fabricated using the CAD/CAM system, show a marginal accuracy that is acceptable in clinical environments. The Trios CAD group displayed the smallest marginal gap.
NASA Astrophysics Data System (ADS)
Bisyri Husin Musawi Maliki, Ahmad; Razali Abdullah, Mohamad; Juahir, Hafizan; Muhamad, Wan Siti Amalina Wan; Afiqah Mohamad Nasir, Nur; Muazu Musa, Rabiu; Musliha Mat-Rasid, Siti; Adnan, Aleesha; Azura Kosni, Norlaila; Abdullah, Farhana; Ain Shahirah Abdullah, Nurul
2018-04-01
The main purpose of this study was to develop an Anthropometric, Growth and Maturity Index (AGaMI) in soccer and explore its differences with respect to soccer players' physical attributes, fitness, motivation and skills. A total of 223 adolescent soccer athletes aged 12 to 18 years were selected as respondents. AGaMI was developed based on anthropometric components (biceps, triceps, subscapular, suprailiac, calf circumference and mid-upper arm circumference) together with growth and maturity components using the Tanner scale. Meanwhile, relative performance, namely the physical, fitness, motivation and skills attributes of soccer, was measured as the dependent variables. Principal Component Analysis (PCA) and Analysis of Variance (ANOVA) were used to achieve the objective of this study. AGaMI categorized players into three different groups, namely high (5 players), moderate (88 players) and low (91 players). PCA revealed moderate to very strong dominant factor loadings on AGaMI, ranging from 0.69 to 0.90. In further analysis, the AGaMI groups were treated as independent variables (IV) and the physical, fitness, motivation and skills attributes were treated as dependent variables (DV). Finally, ANOVA showed that flexibility, leg power, age, weight, height, sitting height, and short and long pass were the parameters statistically differentiated by the AGaMI groups (p<0.05). In summary, body fat mass, growth and maturity are essential components differentiating soccer players' relative performance. In the future, information from the AGaMI model will be useful to coaches and players for identifying the suitable biological and physiological demands, reflecting a more comprehensive means of assessing youth soccer relative performance. This study further highlights the importance of assessing AGaMI when identifying soccer relative performance.
Enzymatic Hydrolysis of Pretreated Fibre Pressed Oil Palm Frond by using Sacchariseb C6
NASA Astrophysics Data System (ADS)
Hashim, F. S.; Yussof, H. W.; Zahari, M. A. K. M.; Rahman, R. A.; Illias, R. M.
2017-06-01
Enzymatic hydrolysis has become a prominent technology for the conversion of cellulosic biomass to its glucose monomers, which requires the action of cellulolytic enzymes in a sequential and synergistic manner. In this study, the effects of agitation speed, glucan loading, enzyme loading, temperature and reaction time on the production of glucose from fibre pressed oil palm frond (FPOPF) during enzymatic hydrolysis were screened by a half-fraction factorial design 2^(5-1) using Response Surface Methodology (RSM). The FPOPF sample was first delignified by alkaline pretreatment with 4.42% (w/v) sodium hydroxide for an hour prior to enzymatic hydrolysis using a commercial cellulase enzyme, Sacchariseb C6. The effect of enzymatic hydrolysis on the structure of FPOPF was evaluated by Scanning Electron Microscopy (SEM) analysis. Characterization of raw FPOPF gave 4.5% extractives, 40.7% glucan, 26.1% xylan, 26.2% lignin and 1.8% ash, whereas pretreated FPOPF gave 0.3% extractives, 61.4% glucan, 20.4% xylan, 13.3% lignin and 1.3% ash. From this study, it was found that the best enzymatic hydrolysis condition yielded 33.01 ± 0.73 g/L of glucose when performed at an agitation speed of 200 rpm, an enzyme loading of 60 FPU/mL, a glucan loading of 4% (w/w), a temperature of 55 °C and a reaction time of 72 hours. The model obtained was significant with a p-value < 0.0001, as verified by the analysis of variance (ANOVA). The coefficient of determination (R2) from the ANOVA study was 0.9959. Overall, it can be concluded that the addition of Sacchariseb C6 during enzymatic hydrolysis of pretreated FPOPF produces a high amount of glucose, which enhances its potential for industrial application. This glucose can be further used to produce high-value products.
Aalizadeh, Bahman; Mohammadzadeh, Hassan; Khazani, Ali; Dadras, Ali
2016-01-01
Background: Physical exercises can influence some anthropometric and fitness components differently. The aim of the present study was to evaluate how a relatively long-term training program in 11-14-year-old male Iranian students affects their anthropometric and motor performance measures. Methods: Measurements were conducted on the anthropometric and fitness components of participants (n = 28) prior to and following the program. They trained for 20 weeks, 1.5 h/session with 10 min rest, in four trampoline training sessions per week. Motor performance of all participants was assessed using the standing long jump and vertical jump based on the Eurofit Test Battery. Results: Repeated measures analysis of variance (ANOVA) showed a statistically significant main effect of time in calf girth (P = 0.001), fat% (P = 0.01), vertical jump (P = 0.001), and long jump (P = 0.001). Repeated measures ANOVA also revealed a statistically significant main effect of group in fat% (P = 0.001). Post hoc paired t-tests indicated statistically significant differences in the trampoline group between the two measurements for calf girth (t = −4.35, P = 0.001), fat% (t = 5.87, P = 0.001), vertical jump (t = −5.53, P = 0.001), and long jump (t = −10.00, P = 0.001). Conclusions: We can conclude that 20-week trampoline training with four physical activity sessions/week in 11–14-year-old students seems to have a significant effect on body fat% reduction and effective results in terms of anaerobic physical fitness. Therefore, it is suggested that different training approaches, such as trampoline exercises, can help students to promote their level of health and motor performance. PMID:27512557
2012-01-01
The optimization of photodegradation processes is complicated and expensive when performed with traditional methods such as one-variable-at-a-time. In this research, the conditions for ortho-cresol (o-cresol) photodegradation were optimized by using a semi-empirical method. First, the experiments were designed by response surface methodology (RSM) with four effective factors (irradiation time, pH, photocatalyst amount and o-cresol concentration) and photodegradation percentage as the response. The RSM used a central composite design (CCD) consisting of 30 runs to obtain the actual responses. The actual responses were fitted with a second-order algebraic polynomial equation to select a model (the suggested model). The suggested model was validated by several strong statistical indicators from the analysis of variance (ANOVA). These indicators include a high F-value (143.12), a very low P-value (<0.0001), a non-significant lack of fit, a high coefficient of determination (R2 = 0.99) and adequate precision (47.067). To visualize the optimum, the validated model was used to simulate the behaviour of the variables and the response (photodegradation %) using several three-dimensional (3D) plots. To confirm the model, experiments at the optimum conditions were performed in the laboratory. The results of these experiments were quite close to the predicted values. In conclusion, the study indicated that the model successfully simulates the optimum conditions for o-cresol photodegradation under visible-light irradiation by manganese-doped ZnO nanoparticles. PMID:22909072
Francy, D.S.; Hart, T.L.; Virosteck, C.M.
1996-01-01
Bacterial injury, survival, and regrowth were investigated by use of replicate flow-through incubation chambers placed in the Cuyahoga River or Lake Erie in the greater Cleveland metropolitan area during seven 4-day field studies. The chambers contained wastewater or combined-sewer-overflow (CSO) effluents treated three ways-unchlorinated, chlorinated, and dechlorinated. At timestep intervals, the chamber contents were analyzed for concentrations of injured and healthy fecal coliforms by use of standard selective and enhanced-recovery membrane-filtration methods. Mean percent injuries and survivals were calculated from the fecal-coliform concentration data for each field study. The results of analysis of variance (ANOVA) indicated that treatment affected mean percent injury and survival, whereas site did not. In the warm-weather Lake Erie field study, but not in the warm-weather Cuyahoga River studies, the results of ANOVA indicated that dechlorination enhanced the repair of injuries and regrowth of chlorine-injured fecal coliforms on culture media over chlorination alone. The results of ANOVA on the percent injury from CSO effluent field studies indicated that dechlorination reduced the ability of organisms to recover and regrow on culture media over chlorination alone. However, because of atypical patterns of concentration increases and decreases in some CSO effluent samples, more work needs to be done before the effect of dechlorination and chlorination on reducing fecal-coliform concentrations in CSO effluents can be confirmed. The results of ANOVA on percent survivals found statistically significant differences among the three treatment methods for all but one study. Dechlorination was found to be less effective than chlorination alone in reducing the survival of fecal coliforms in wastewater effluent, but not in CSO effluent. If the concentration of fecal coliforms determined by use of the enhanced-recovery method can be predicted accurately from the concentration found by use of the standard method, then increased monitoring and expense to detect chlorine-injured organisms would be unnecessary. The results of linear regression analysis, however, indicated that the relation between enhanced-recovery and standard-method concentrations was best represented when the data were grouped by treatment. The model generated from linear regression of the unchlorinated data set provided an accurate estimate of enhanced-recovery concentrations from standard-method concentrations, whereas the models generated from the chlorinated and dechlorinated data sets did not. In addition, evaluation of fecal-coliform concentrations found in field studies in terms of Ohio recreational water-quality standards showed that concentrations obtained by standard and enhanced-recovery methods were not comparable. Sample treatment and analysis methods were found to affect the percentage of samples meeting and exceeding Ohio's bathing-water, primary-contact, and secondary-contact standards. Therefore, determining the health risk of swimming in receiving waters was often difficult without information on enhanced-recovery method concentrations and was especially difficult in waters receiving high proportions of chlorinated or dechlorinated effluents.
Optimisation of warpage on plastic injection moulding part using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.
2017-09-01
Warpage is often encountered during the injection moulding of thin-shell parts, depending on the process conditions. A statistical design-of-experiments approach integrating finite element (FE) analysis, Moldflow analysis and response surface methodology (RSM) was used to minimize the warpage values in x, y and z of the thin-shell plastic part investigated. A battery cover of a remote controller, a thin-shell plastic part produced by the injection moulding process, was used as the case study. The optimum process parameters were determined so as to minimize the warpage. Packing pressure, cooling time, melt temperature and mould temperature are the four parameters considered in this study. A two-level full factorial experimental design was conducted in Design Expert as part of the RSM analysis to combine the study of all these parameters. Analysis of variance (ANOVA) of the FE analysis results identified the most important process parameters influencing warpage. Using RSM, a predictive response surface model for the warpage data is presented.
Zembura, Paweł; Żyśko, Jolanta
2015-01-01
The study attempted to analyse the concept of spectators' motives at mixed martial arts (MMA) events in Poland. In addition, we investigated the relation between motives and sports media consumption. The sample consisted of 273 people attending three similar, regional MMA events. Exploratory factor analysis was used to refine the structure of motives. Confirmatory factor analysis showed a reasonable fit of the obtained model (RMSEA = 0.41). Using ANOVA, we found three significant differences in the assessment of motives based on gender. The factor of aesthetics and knowledge was ranked the highest by both men and women. Men ranked drama and violence as the next factors, while women ranked socializing and crowd experience, and drama, next. Path analysis indicated that these motives explained 56% of the variance in media consumption for men and 57% for women. The findings showed that the motive of vicarious achievement was the main predictor of media consumption for men, while aesthetics and knowledge were the key predictors for women. The results and ideas for further research are discussed. PMID:26240663
Drabik-Markiewicz, G; Dejaegher, B; De Mey, E; Kowalska, T; Paelinck, H; Vander Heyden, Y
2011-06-15
The influence of biogenic amines (i.e. putrescine, cadaverine, spermidine and spermine) on N-nitrosamine formation in heated cured lean meat was studied in the presence or absence of sodium nitrite and at different meat processing temperatures. Experimental evidence was produced using gas chromatography with thermal energy analysis detection (GC-TEA). The concentration of N-nitrosamines was modelled as a function of the temperature and the nitrite concentration for two situations, i.e. presence or absence of biogenic amines added to the meat. The significance of the influence of the changing parameters was evaluated by ANOVA (analysis of variance). It was found that higher processing temperatures and higher added amounts of sodium nitrite increase the yields of N-nitrosodimethylamine (NDMA) and N-nitrosopiperidine (NPIP). Spermidine and putrescine amplify the formation of NDMA, but spermine and cadaverine do not influence the formation of this N-nitrosamine. Spermidine and cadaverine cause a significant increase of NPIP. Apart from N-nitrosopyrrolidine (NPYR) in some rare cases, no other volatile N-nitrosamines were detected. Copyright © 2010 Elsevier Ltd. All rights reserved.
Shen, Fei; Wu, Jian; Ying, Yibin; Li, Bobin; Jiang, Tao
2013-12-15
Discrimination of Chinese rice wines from three well-known wineries ("Guyuelongshan", "Kuaijishan", and "Pagoda") in China was carried out according to mineral element contents in this study. Nineteen macro and trace mineral elements (Na, Mg, Al, K, Ca, Mn, Fe, Cu, Zn, V, Cr, Co, Ni, As, Se, Mo, Cd, Ba and Pb) were determined by inductively coupled plasma mass spectrometry (ICP-MS) in 117 samples. The experimental data were then subjected to analysis of variance (ANOVA) and principal component analysis (PCA) to reveal significant differences and potential patterns between samples. Stepwise linear discriminant analysis (LDA) and partial least squares discriminant analysis (PLS-DA) were applied to develop classification models and achieved correct classification rates of 100% and 97.4% for the prediction sample set, respectively. The discrimination could be attributed to the different raw materials (mainly water) and elaboration processes employed. The results indicate that element compositions combined with multivariate analysis can be used as a fingerprinting technique to protect prestigious wineries and verify the authenticity of Chinese rice wine. Copyright © 2013 Elsevier Ltd. All rights reserved.
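A minimal sketch of the classification step (PCA screening followed by LDA) using scikit-learn (an assumed tool choice); the element matrix is simulated rather than the paper's ICP-MS measurements, so the resulting accuracy is purely illustrative.

# Hedged sketch: PCA screening then LDA classification of wine origin from
# element concentrations. The 120 x 19 element matrix below is simulated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
wineries = ["Guyuelongshan", "Kuaijishan", "Pagoda"]
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(40, 19)) for i in range(3)])
y = np.repeat(wineries, 40)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)
pca = PCA(n_components=5).fit(scaler.transform(X_train))  # pattern screening
print("explained variance of PC1-PC5:", pca.explained_variance_ratio_.round(2))

lda = LinearDiscriminantAnalysis().fit(scaler.transform(X_train), y_train)
print("correct classification rate:", lda.score(scaler.transform(X_test), y_test))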
Joyce, Christopher; Burnett, Angus; Cochrane, Jodie; Reyes, Alvaro
2016-01-01
It is unknown whether skilled golfers will modify their kinematics when using drivers of different shaft properties. This study aimed to firstly determine if golf swing kinematics and swing parameters and related launch conditions differed when using modified drivers, then secondly, determine which kinematics were associated with clubhead speed. Twenty high level amateur male golfers (M ± SD: handicap = 1.9 ± 1.9 score) had their three-dimensional (3D) trunk and wrist kinematics collected for two driver trials. Swing parameters and related launch conditions were collected using a launch monitor. A one-way repeated measures ANOVA revealed significant (p ≤ 0.003) between driver differences; specifically, faster trunk axial rotation velocity and an early wrist release for the low kick point driver. Launch angle was shown to be 2° lower for the high kick point driver. Regression models for both drivers explained a significant amount of variance (60-67%) in clubhead speed. Wrist kinematics were most associated with clubhead speed, indicating the importance of the wrists in producing clubhead speed regardless of driver shaft properties.
NASA Astrophysics Data System (ADS)
Nasr, M.; Anwar, S.; El-Tamimi, A.; Pervaiz, S.
2018-04-01
Titanium and its alloys, e.g. Ti6Al4V, have widespread applications in the aerospace, automotive and medical industries. At the same time, titanium and its alloys are regarded as difficult-to-machine materials due to their high strength and low thermal conductivity. Significant efforts have been expended to improve the accuracy of machining processes for Ti6Al4V. The current study presents the use of the rotary ultrasonic drilling (RUD) process for machining high-quality holes in Ti6Al4V. The study takes into account the effects of the main RUD input parameters, including spindle speed, ultrasonic power, feed rate and tool diameter, on the key output responses related to the accuracy of the drilled holes, namely the cylindricity and overcut errors. Analysis of variance (ANOVA) was employed to study the influence of the input parameters on the cylindricity and overcut errors. Later, regression models were developed to find the optimal set of input parameters to minimize the cylindricity and overcut errors.
NASA Astrophysics Data System (ADS)
Yin, Shaohua; Lin, Guo; Li, Shiwei; Peng, Jinhui; Zhang, Libo
2016-09-01
Microwave heating has been applied in the field of drying rare earth carbonates to improve drying efficiency and reduce energy consumption. The effects of power density, material thickness and drying time on the weight reduction (WR) are studied using response surface methodology (RSM). The results show that RSM is able to describe the relationship between the independent variables and the weight reduction. Based on the analysis of variance (ANOVA), the model is in accordance with the experimental data. The optimum experimental conditions are a power density of 6 W/g, a material thickness of 15 mm and a drying time of 15 min, resulting in an experimental weight reduction of 73%. Comparative experiments show that microwave drying has the advantages of rapid dehydration and energy conservation. Particle analysis shows that the size distribution of the rare earth carbonates after microwave drying is more even than that after oven drying. Based on these findings, microwave heating technology is of practical importance for energy saving and improved production efficiency in rare earth smelting enterprises and is a green heating process.
Organizational variables on nurses' job performance in Turkey: nursing assessments.
Top, Mehmet
2013-01-01
The purpose of this study was to describe the influence of organizational variables on hospital staff nurses' job performance as reported by staff nurses in two cities in Turkey. Hospital ownership status and employment status were examined for their effect on this influence. The reported influence of organizational variables on job performance was measured by a questionnaire developed for this study. Nurses were asked to evaluate the influence of 28 organizational variables on their job performance using a five-point Likert-type scale (1 = never effective, 5 = very effective). The study used a comparative and descriptive design. A total of 831 hospital staff nurses were included in this study. Descriptive statistics, frequencies, t-tests, ANOVA and factor analysis were used for data analysis. The study showed the relative importance of the 28 organizational variables in influencing nurses' job performance. Nurses in this study reported that workload and technological support are the most influential organizational variables on their job performance. Factor analysis yielded a five-factor model that explained 53.99% of the total variance. Administratively controllable organizational variables influence nurses' job performance to differing degrees.
Optimization of Robotic Spray Painting process Parameters using Taguchi Method
NASA Astrophysics Data System (ADS)
Chidhambara, K. V.; Latha Shankar, B.; Vijaykumar
2018-02-01
Automated spray painting is gaining interest in industry and research due to the extensive application of spray painting in the automobile industry. Automating the spray painting process has the advantages of improved quality, productivity, reduced labor, a clean environment and, particularly, cost effectiveness. This study investigates the performance characteristics of an industrial robot, Fanuc 250ib, for an automated painting process using the statistical tool of Taguchi's Design of Experiment technique. The experiment is designed using Taguchi's L25 orthogonal array by considering three factors and five levels for each factor. The objective of this work is to explore the major control parameters and to optimize them for improved quality of the paint coating, measured in terms of dry film thickness (DFT), which also results in reduced rejection. Further, Analysis of Variance (ANOVA) is performed to determine the influence of individual factors on DFT. It is observed that shaping air and paint flow are the most influential parameters. A multiple regression model is formulated for estimating predicted values of DFT. A confirmation test is then conducted, and the comparison results show that the error is within an acceptable level.
NASA Astrophysics Data System (ADS)
Abd Kadir, N.; Aminanda, Y.; Ibrahim, M. S.; Mokhtar, H.
2016-10-01
A statistical analysis was performed to evaluate the effects of the factors and to obtain the optimum configuration of Kraft paper honeycomb. The factors considered in this study include the density of the paper, the thickness of the paper and the cell size of the honeycomb. Based on a three-level factorial design, a two-factor interaction (2FI) model was developed to correlate the factors with specific energy absorption and specific compression strength. From the analysis of variance (ANOVA), the most influential factors on the responses were identified and the optimum configuration was determined. After that, Kraft paper honeycomb with the optimum configuration was used to fabricate foam-filled paper honeycomb with five different densities of polyurethane foam as filler (31.8, 32.7, 44.5, 45.7 and 52 kg/m3). The foam-filled paper honeycomb was subjected to quasi-static compression loading. The failure mechanisms of the foam-filled honeycomb were identified, analyzed and compared with those of the unfilled paper honeycomb. The peak force and energy absorption capability of the foam-filled paper honeycomb are increased by up to 32% and 30%, respectively, compared to the summation of the individual components.
NASA Astrophysics Data System (ADS)
Valizadeh, Maryam; Sohrabi, Mahmoud Reza
2018-03-01
In the present study, artificial neural networks (ANNs) and support vector regression (SVR) were applied as intelligent methods coupled with UV spectroscopy for the simultaneous quantitative determination of Dorzolamide (DOR) and Timolol (TIM) in eye drops. Several synthetic mixtures were analyzed to validate the proposed methods. At first, a time-series neural network, one type of artificial neural network, was employed and its efficiency was evaluated. Afterwards, a radial basis function network was applied as another neural network. Results showed that the performance of this method is suitable for prediction. Finally, support vector regression was proposed to construct the Zilomole prediction model. Root mean square error (RMSE) and mean recovery (%) were also calculated for the SVR method. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the comparison of the suggested and reference methods, showed that there were no significant differences between them. The effect of interferences was also investigated in spiked solutions.
Electrocoagulation efficiency of the tannery effluent treatment using aluminium electrodes.
Espinoza-Quiñones, Fernando R; Fornari, Marilda M T; Módenes, Aparecido N; Palácio, Soraya M; Trigueros, Daniela E G; Borba, Fernando H; Kroumov, Alexander D
2009-01-01
An electro-coagulation laboratory-scale system using aluminium plate electrodes was studied for the removal of organic and inorganic pollutants from the effluent of a leather-finishing industrial process. A fractional factorial 2^3 experimental design was applied in order to obtain optimal values of the system state variables. The electro-coagulation (EC) process efficiency was assessed based on the chemical oxygen demand (COD), turbidity, total suspended solids, total fixed solids, total volatile solids, and chemical element concentration values. Analysis of variance (ANOVA) for final pH, total fixed solids (TFS), turbidity and Ca concentration confirmed the models predicted by the experimental design within a 95% confidence level. Reactor working conditions close to the real effluent pH (7.6) and an electrolysis time in the range of 30-45 min were enough to achieve cost-effective reduction factors for the organic and inorganic pollutants' concentrations. An appreciable improvement in COD removal efficiency was obtained with the electro-coagulation treatment. Finally, the technical-economical analysis results clearly showed that the electro-coagulation method is very promising for industrial application.
Hohn, M. Ed; Nuhfer, E.B.; Vinopal, R.J.; Klanderman, D.S.
1980-01-01
Classifying very fine-grained rocks through fabric elements provides information about depositional environments, but is subject to the biases of visual taxonomy. To evaluate the statistical significance of an empirical classification of very fine-grained rocks, samples from Devonian shales in four cored wells in West Virginia and Virginia were measured for 15 variables: quartz, illite, pyrite and expandable clays determined by X-ray diffraction; total sulfur, organic content, inorganic carbon, matrix density, bulk density, porosity, silt, as well as density, sonic travel time, resistivity, and gamma-ray response measured from well logs. The four lithologic types comprised: (1) sharply banded shale, (2) thinly laminated shale, (3) lenticularly laminated shale, and (4) nonbanded shale. Univariate and multivariate analyses of variance showed that the lithologic classification reflects significant differences for the variables measured, differences that can be detected independently of stratigraphic effects. Little-known statistical methods found useful in this work included: the multivariate analysis of variance with more than one effect, simultaneous plotting of samples and variables on canonical variates, and the use of parametric ANOVA and MANOVA on ranked data. © 1980 Plenum Publishing Corporation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomassen, Mads; Skov, Vibe; Eiriksdottir, Freyja
2006-06-16
The quality of DNA microarray based gene expression data relies on the reproducibility of several steps in a microarray experiment. We have developed a spotted genome wide microarray chip with oligonucleotides printed in duplicate in order to minimise undesirable biases, thereby optimising detection of true differential expression. The validation study design consisted of an assessment of the microarray chip performance using the MessageAmp and FairPlay labelling kits. Intraclass correlation coefficient (ICC) was used to demonstrate that MessageAmp was significantly more reproducible than FairPlay. Further examinations with MessageAmp revealed the applicability of the system. The linear range of the chips was three orders of magnitude, the precision was high, as 95% of measurements deviated less than 1.24-fold from the expected value, and the coefficient of variation for relative expression was 13.6%. Relative quantitation was more reproducible than absolute quantitation and substantial reduction of variance was attained with duplicate spotting. An analysis of variance (ANOVA) demonstrated no significant day-to-day variation.
Naftz, D.L.; Schuster, P.F.; Reddy, M.M.
1994-01-01
One hundred samples were collected from the surface of the Upper Fremont Glacier at equally spaced intervals defined by an 8100 m2 snow grid to assess the significance of lateral variability in major-ion concentrations and δ18O values. Comparison of the observed variability of each chemical constituent with the variability expected from measurement error indicated substantial lateral variability within the surface-snow layer. Results of the nested ANOVA indicate that most of the variance for every constituent is in the values grouped at the two smaller geographic scales (between 506 m2 sections and within 506 m2 sections). The variance data from the snow grid were used to develop equations to evaluate the significance of both positive and negative concentration/value peaks of nitrate and δ18O with depth in a 160 m ice core. Values of δ18O in the section from 110-150 m below the surface consistently vary outside the expected limits and possibly represent cooler temperatures during the Little Ice Age from about 1810 back to 1725 A.D. -from Authors
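A hedged sketch of the ANOVA-based variance component estimation underlying such a nested analysis, assuming a balanced one-factor random-effects layout; the section groupings and concentrations below are invented, not the glacier data.

# Hedged sketch: ANOVA-based variance components for a balanced one-factor
# random-effects layout (grid sections as groups, samples within sections).
import numpy as np

data = np.array([   # rows = sections (groups), columns = samples within a section
    [1.8, 2.1, 1.9, 2.0],
    [2.6, 2.4, 2.7, 2.5],
    [1.5, 1.7, 1.6, 1.4],
    [2.2, 2.0, 2.3, 2.1],
])
k, n = data.shape  # k groups, n replicates per group

group_means = data.mean(axis=1)
grand_mean = data.mean()
ms_between = n * ((group_means - grand_mean) ** 2).sum() / (k - 1)
ms_within = ((data - group_means[:, None]) ** 2).sum() / (k * (n - 1))

sigma2_within = ms_within                                  # within-section variance
sigma2_between = max((ms_between - ms_within) / n, 0.0)    # between-section variance
print(f"within-section variance:  {sigma2_within:.4f}")
print(f"between-section variance: {sigma2_between:.4f}")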
Risk modelling in portfolio optimization
NASA Astrophysics Data System (ADS)
Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi
2013-09-01
Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize the investment risk. The objective of the mean-variance model is to minimize the portfolio risk and achieve the target rate of return. Variance is used as risk measure in the mean-variance model. The purpose of this study is to compare the portfolio composition as well as performance between the optimal portfolio of mean-variance model and equally weighted portfolio. Equally weighted portfolio means the proportions that are invested in each asset are equal. The results show that the portfolio composition of the mean-variance optimal portfolio and equally weighted portfolio are different. Besides that, the mean-variance optimal portfolio gives better performance because it gives higher performance ratio than the equally weighted portfolio.
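As a worked illustration of the comparison described above, the following sketch computes the global minimum-variance weights in closed form and compares them with an equally weighted portfolio; the expected returns and covariance matrix are invented, and the "performance ratio" is simply return divided by risk.

# Hedged sketch: minimum-variance weights (fully invested, short sales allowed)
# versus an equally weighted portfolio. Returns and covariances are invented.
import numpy as np

mu = np.array([0.08, 0.12, 0.10])              # expected annual returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])           # covariance matrix
ones = np.ones(len(mu))

# Closed-form global minimum-variance portfolio: w = C^-1 1 / (1' C^-1 1).
inv_c = np.linalg.inv(cov)
w_mv = inv_c @ ones / (ones @ inv_c @ ones)
w_eq = ones / len(mu)

def perf(w):
    ret = w @ mu
    vol = np.sqrt(w @ cov @ w)
    return ret, vol, ret / vol                 # simple performance ratio

for name, w in [("min-variance", w_mv), ("equal-weight", w_eq)]:
    ret, vol, ratio = perf(w)
    print(f"{name}: weights={np.round(w, 3)}, return={ret:.3f}, "
          f"risk={vol:.3f}, ratio={ratio:.2f}")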
Cognitive functioning and insight in schizophrenia and in schizoaffective disorder.
Birindelli, Nadia; Montemagni, Cristiana; Crivelli, Barbara; Bava, Irene; Mancini, Irene; Rocca, Paola
2014-01-01
The aim of this study was to investigate cognitive functioning and insight of illness in two groups of patients during their stable phases, one with schizophrenia and one with schizoaffective disorder. We recruited 104 consecutive outpatients, 64 with schizophrenia and 40 with schizoaffective disorder, in the period between July 2010 and July 2011. They all fulfilled formal Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) diagnostic criteria for schizophrenia and schizoaffective disorder. Psychiatric assessment included the Clinical Global Impression Scale-Severity (CGI-S), the Positive and Negative Syndrome Scale (PANSS), the Calgary Depression Scale for Schizophrenia (CDSS) and the Global Assessment of Functioning (GAF). Insight of illness was evaluated using the SUMD. Neuropsychological assessment included the Wisconsin Card Sorting Test (WCST), the California Verbal Learning Test (CVLT), the Stroop Test and the Trail Making Test (TMT). Differences between the groups were tested using the chi-square test for categorical variables and one-way analysis of variance (ANOVA) for continuous variables. All variables significantly different between the two groups of subjects were subsequently analysed using logistic regression with a backward stepwise procedure, with diagnosis (schizophrenia/schizoaffective disorder) as the dependent variable. After backward selection of variables, four variables predicted a schizoaffective disorder diagnosis: marital status, a higher number of admissions, better attentive functions and awareness of specific signs or symptoms of the disease. The prediction model accounted for 55% of the variance of the schizoaffective disorder diagnosis. With replication, our findings would allow higher diagnostic accuracy and have an impact on clinical decision making, in light of an amelioration of vocational functioning.
Exertional heat illness incidence and on-site medical team preparedness in warm weather
NASA Astrophysics Data System (ADS)
Hosokawa, Yuri; Adams, William M.; Belval, Luke N.; Davis, Robert J.; Huggins, Robert A.; Jardine, John F.; Katch, Rachel K.; Stearns, Rebecca L.; Casa, Douglas J.
2018-03-01
To investigate the influence of estimated wet bulb globe temperature (WBGT) and the International Institute of Race Medicine (IIRM) activity modification guidelines on the incidence of exertional heat stroke (EHS) and heat exhaustion (HEx) and the ability of an on-site medical team to treat those afflicted. Medical records of EHS and HEx patients over a 17-year period from the New Balance Falmouth Road Race were examined. Climatologic data from nearby weather stations were obtained to calculate WBGT with the Australian Bureau of Meteorology (WBGTA) and Liljegren (WBGTL) models. Incidence rate (IR) of EHS, HEx, and combined total of EHS and HEx (COM) were calculated, and linear regression analyses were performed to assess the relationship between IR and WBGTA or WBGTL. One-way ANOVA was performed to compare differences in EHS, HEx, and COM incidence to four alert levels in the IIRM guidelines. Incidence of EHS, HEx, and COM was 2.12, 0.98, and 3.10 cases per 1000 finishers. WBGTA explained 48, 4, and 46% of the variance in EHS, HEx, and COM IR; WBGTL explained 63, 13, and 69% of the variance in EHS, HEx, and COM IR. Main effect of WBGTA and WBGTL on the alert levels were observed in EHS and COM IR (p < 0.05). The cumulative number of EHS patients treated did not exceed the number of cold water immersion tubs available to treat them. EHS IR increased as WBGT and IIRM alert level increased, indicating the need for appropriate risk mitigation strategies and on-site medical treatment.
Association between ADH1C and ALDH2 polymorphisms and alcoholism in a Turkish sample.
Ayhan, Yavuz; Gürel, Şeref Can; Karaca, Özgür; Zoto, Teuta; Hayran, Mutlu; Babaoğlu, Melih; Yaşar, Ümit; Bozkurt, Atilla; Dilbaz, Nesrin; Uluğ, Berna Diclenur; Demir, Başaran
2015-04-01
Polymorphisms in the genes encoding alcohol metabolizing enzymes are associated with alcohol dependence. To evaluate the association between the alcohol dehydrogenase 1C (ADH1C) Ile350Val and aldehyde dehydrogenase 2 (ALDH2) Glu504Lys polymorphisms and alcohol dependence in a Turkish sample. 235 individuals (115 alcohol-dependent patients and 120 controls) were genotyped for ADH1C and ALDH2 with PCR-RFLP (polymerase chain reaction-restriction fragment length polymorphism). Association between the polymorphisms and family history, daily and maximum amount of alcohol consumed was investigated. The associations between alcohol dependence, severity of consumption and family history and the polymorphisms were analyzed by chi-square or Fisher's exact test where necessary. Relationship between genotypes and dependence related features was evaluated using analysis of variance (ANOVA). The -350Val allele for ADH1C (ADH1C*2) was increased in alcohol-dependent patients (P = 0.05). In individuals with a positive family history, the genotype distribution differed significantly (P = 0.031) and more patients carried the Val allele compared with controls (P = 0.025). Genotyping of 162 participants did not reveal the -504Lys allele in ALDH2. These findings suggest that ADH1C*2 is associated with alcohol dependence in the Turkish population displaying a dominant inheritance model. ADH1C*2 allele may contribute to the variance in heritability of alcohol dependence. The ALDH2 -504Lys/Lys or Glu/Lys genotypes were not present in alcohol-dependent patients, similar to that seen in European populations and in contrast to the findings in the Asian populations.
NASA Astrophysics Data System (ADS)
Kondapalli, S. P.
2017-12-01
In the present work, pulsed current microplasma arc welding is carried out on AISI 321 austenitic stainless steel of 0.3 mm thickness. Peak current, base current, pulse rate and pulse width are chosen as the input variables, whereas grain size and hardness are considered as the output responses. The response surface method is adopted using a Box-Behnken design, and in total 27 experiments are performed. Empirical relations between the input variables and output responses are developed using statistical software, and analysis of variance (ANOVA) at the 95% confidence level is used to check their adequacy. The main effects and interaction effects of the input variables on the output responses are also studied.
NASA Astrophysics Data System (ADS)
Hashim, S. H. A.; Hamid, F. A.; Kiram, J. J.; Sulaiman, J.
2017-09-01
This paper aims to investigate the relationship between the factors affecting the demand for broadband and the level of satisfaction. Previous researchers have found that the adoption of broadband is greatly influenced by many factors. Thus, in this study, a self-administered questionnaire was developed to obtain the factors affecting the demand for broadband among broadband customers as well as their level of satisfaction. Pearson correlation, one-way analysis of variance (ANOVA) and t-tests were used for statistical interpretation of the relationships. This study shows that several of these factors have meaningful relationships with broadband demand and satisfaction level.
Nickson, Dennis; Timming, Andrew R; Re, Daniel; Perrett, David I
2016-01-01
Using mixed design analysis of variance (ANOVA), this paper investigates the effects of a subtle simulated increase in adiposity on women's employment chances in the service sector. Employing a unique simulation of altering individuals' BMIs and the literature on "aesthetic labour", the study suggests that, especially for women, being heavier, but still within a healthy BMI, deleteriously impacts on hireability ratings. The paper explores the gendered dimension of this prejudice by asking whether female employees at the upper end of a healthy BMI range are likely to be viewed more negatively than their overtly overweight male counterparts. The paper concludes by considering the implications of these findings.
Validation of a Task Network Human Performance Model of Driving
2007-04-01
[Excerpt from the report's list of tables: Table 23, NASA-TLX scores for study conditions; Table 24, ANOVA for NASA-TLX scores for study conditions (α = 0.05); Table 25, significant difference between conditions for NASA-TLX in the simulator study; Table 26, ANOVA table for the mental demand subscale of NASA-TLX.]
Renan-Ordine, Rômulo; Alburquerque-Sendín, Francisco; de Souza, Daiana Priscila Rodrigues; Cleland, Joshua A; Fernández-de-Las-Peñas, César
2011-02-01
A randomized controlled clinical trial. To investigate the effects of trigger point (TrP) manual therapy combined with a self-stretching program for the management of patients with plantar heel pain. Previous studies have reported that stretching of the calf musculature and the plantar fascia are effective management strategies for plantar heel pain. However, it is not known if the inclusion of soft tissue therapy can further improve the outcomes in this population. Sixty patients, 15 men and 45 women (mean ± SD age, 44 ± 10 years) with a clinical diagnosis of plantar heel pain were randomly divided into 2 groups: a self-stretching (Str) group who received a stretching protocol, and a self-stretching and soft tissue TrP manual therapy (Str-ST) group who received TrP manual interventions (TrP pressure release and neuromuscular approach) in addition to the same self-stretching protocol. The primary outcomes were physical function and bodily pain domains of the quality of life SF-36 questionnaire. Additionally, pressure pain thresholds (PPT) were assessed over the affected gastrocnemii and soleus muscles, and over the calcaneus, by an assessor blinded to the treatment allocation. Outcomes of interest were captured at baseline and at a 1-month follow-up (end of treatment period). Mixed-model ANOVAs were used to examine the effects of the interventions on each outcome, with group as the between-subjects variable and time as the within-subjects variable. The primary analysis was the group-by-time interaction. The 2 × 2 mixed-model analysis of variance (ANOVA) revealed a significant group-by-time interaction for the main outcomes of the study: physical function (P = .001) and bodily pain (P = .005); patients receiving a combination of self-stretching and TrP tissue intervention experienced a greater improvement in physical function and a greater reduction in pain, as compared to those receiving the self-stretching protocol. The mixed ANOVA also revealed significant group-by-time interactions for changes in PPT over the gastrocnemii and soleus muscles, and the calcaneus (all P<.001). Patients receiving a combination of self-stretching and TrP tissue intervention showed a greater improvement in PPT, as compared to those who received only the self-stretching protocol. This study provides evidence that the addition of TrP manual therapies to a self-stretching protocol resulted in superior short-term outcomes as compared to a self-stretching program alone in the treatment of patients with plantar heel pain. Therapy, level 1b.
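A hedged sketch of a 2 (group) x 2 (time) mixed ANOVA focusing on the group-by-time interaction, using the pingouin package (an assumed dependency); the subjects and pain scores are simulated, not the trial data, and the "Str"/"Str-ST" labels simply mirror the group names above.

# Hedged sketch: mixed ANOVA (between = group, within = time) on simulated data.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
n_per_group = 10
rows = []
for g, (change, label) in enumerate([(-5, "Str"), (-15, "Str-ST")]):
    for s in range(n_per_group):
        subj = g * n_per_group + s
        baseline = rng.normal(60, 8)                      # invented pain score
        followup = baseline + change + rng.normal(0, 5)   # larger drop in Str-ST
        rows.append({"subject": subj, "group": label, "time": "baseline", "pain": baseline})
        rows.append({"subject": subj, "group": label, "time": "1month", "pain": followup})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="pain", within="time", subject="subject",
                     between="group")
print(aov[["Source", "F", "p-unc"]])  # the 'Interaction' row is the key test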
Normality of raw data in general linear models: The most widespread myth in statistics
Kery, Marc; Hatfield, Jeff S.
2003-01-01
In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
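A short sketch of the point made above, using simulated data (not the paper's examples): the raw response can look decidedly non-normal while the residuals of the fitted linear model, which are what t and F tests actually assume to be normal, are well behaved.

```python
# Minimal illustration: test normality on model residuals, not on the raw response.
# Data are simulated, not taken from the paper's examples.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=200)        # strongly skewed covariate
y = 3.0 + 1.5 * x + rng.normal(0, 1, size=200)  # normal errors around the line

# The raw response inherits the skew of x and "fails" a normality test...
print("raw y:      Shapiro-Wilk p =", stats.shapiro(y).pvalue)

# ...but the residuals of the fitted linear model are what t and F tests assume
# to be normal, and here they are.
fit = sm.OLS(y, sm.add_constant(x)).fit()
print("residuals:  Shapiro-Wilk p =", stats.shapiro(fit.resid).pvalue)
```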
NASA Astrophysics Data System (ADS)
Zuo, Xue; Zhu, Hua; Zhou, Yuankai; Ding, Cong; Sun, Guodong
2016-08-01
Relationships between material hardness, turning parameters (spindle speed and feed rate) and surface parameters (surface roughness Ra, fractal dimension D and characteristic roughness τ∗) are studied and modeled using response surface methodology (RSM). The experiments are carried out on a CNC lathe for six carbon steel materials: AISI 1010, AISI 1020, AISI 1030, AISI 1045, AISI 1050 and AISI 1060. The profile of the turned surface and the surface roughness value are measured by a JB-5C profilometer. Based on the profile data, D and τ∗ are computed through the root-mean-square method. The analysis of variance (ANOVA) reveals that spindle speed is the most significant factor affecting Ra, while material hardness is the most dominant parameter affecting τ∗. Material hardness and spindle speed have the same influence on D. Feed rate has less effect on the three surface parameters than spindle speed and material hardness. Second-order RSM models are established for estimating Ra, D and τ∗. The validity of the developed models is approximately 80%. The response surfaces show that a surface with small Ra and large D and τ∗ can be obtained by selecting a high speed and a high-hardness material. According to the established models, Ra, D and τ∗ of the six carbon steel surfaces can be predicted under the cutting conditions studied in this paper. The results provide practical guidance for estimating surface quality prior to turning.
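A sketch of fitting a second-order response surface of the kind described above, with Ra modeled as a quadratic function of hardness, spindle speed and feed rate; the column names, coefficients and simulated data are assumptions, not the study's measurements.

```python
# Quadratic RSM fit sketch: Ra ~ hardness (H), spindle speed (S), feed rate (F).
# Simulated data; the "true" coefficients below are arbitrary placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 60
df = pd.DataFrame({
    "H": rng.uniform(100, 250, n),     # material hardness (HB)
    "S": rng.uniform(500, 2000, n),    # spindle speed (rpm)
    "F": rng.uniform(0.05, 0.3, n),    # feed rate (mm/rev)
})
df["Ra"] = (2.0 - 0.002 * df["S"] + 5e-7 * df["S"] ** 2
            + 8.0 * df["F"] + 0.004 * df["H"] + rng.normal(0, 0.05, n))

# Full second-order model: linear terms, two-way interactions and pure quadratics.
quad = smf.ols("Ra ~ (H + S + F)**2 + I(H**2) + I(S**2) + I(F**2)", data=df).fit()
print(quad.summary())

# ANOVA table for the fitted response-surface terms.
print(sm.stats.anova_lm(quad, typ=2))
```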
Li, Lin; Dai, Jia-Xi; Xu, Le; Huang, Zhen-Xia; Pan, Qiong; Zhang, Xi; Jiang, Mei-Yun; Chen, Zhao-Hong
2017-06-01
To observe the effect of a rehabilitation intervention on the comprehensive health status of patients with hand burns. Most studies of hand-burn patients have focused on functional recovery; there have been no studies involving a biological-psychological-social rehabilitation model of hand-burn patients. A randomized controlled design was used. Patients with hand burns were recruited to the study, and sixty patients participated. Participants were separated into two groups: (1) the rehabilitation intervention model group (n=30) completed the rehabilitation intervention model, which included the following measures: enhanced social support, intensive health education, comprehensive psychological intervention, and graded exercise; (2) the control group (n=30) completed routine treatment. The intervention lasted 5 weeks. Analysis of variance (ANOVA) and Student's t test were conducted. The rehabilitation intervention group had significantly better scores than the control group for comprehensive health, physical function, psychological function, social function, and general health. The differences between the index scores of the two groups were statistically significant. The rehabilitation intervention improved the comprehensive health status of patients with hand burns and has favorable clinical application. The comprehensive rehabilitation intervention model used here provides scientific guidance for medical staff aiming to improve the integrated health status of hand-burn patients and accelerate their recovery. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Does pressure matter in creating burns in a porcine model?
Singer, Adam J; Taira, Breena R; Anderson, Ryon; McClain, Steve A; Rosenberg, Lior
2010-01-01
Multiple animal models of burn injury have been reported, but only some of these have been fully validated. One of the most popular approaches is burn infliction by direct contact with the heat source. Previous investigators have reported that the pressure of application of the contact burn infliction device does not affect the depth of injury. We hypothesized that the depth of injury would increase with increasing pressure of application in a porcine burn model. Forty mid-dermal contact burns measuring 25 x 25 mm were created on the back and flanks of an anesthetized domestic pig (50 kg) using a brass bar preheated in 80°C water and applied for 20 or 30 seconds. The bars were applied using a spring-loaded device designed to control the amount of pressure applied to the skin. The pressures applied by the brass bar were gravity (0.2 kg), 2.0, 2.7, 3.8, and 4.5 kg in replicates of eight. One hour later, 8-mm full-thickness biopsies were obtained for histologic analysis using Elastic Van Gieson staining by a board-certified dermatopathologist masked to burn conditions. The depth of complete and partial collagen injury was measured from the level of the basement membrane using a microscopic micrometer measuring lens. Groups were compared with analysis of variance (ANOVA). The association between depth of injury and pressure was determined with Pearson correlations. The mean (95% confidence interval) depths of complete collagen injury with 30-second exposures were as follows: gravity only, 0.51 (0.39-0.66) mm; 2.0 kg, 0.72 (0.55-0.88) mm; 2.7 kg, 0.68 (0.55-1.00) mm; 3.8 kg, 0.92 (0.80-1.00) mm; and 4.5 kg, 1.65 (1.55-1.75) mm. The differences in depth of injury between the various pressure groups were significant (ANOVA, P < .001). The mean (95% confidence interval) depths of partial collagen injury were as follows: gravity only, 1.10 (0.92-1.30) mm; 2.0 kg, 1.46 (1.28-1.63) mm; 2.7 kg, 1.51 (1.34-1.64) mm; 3.8 kg, 1.82 (1.71-1.94) mm; and 4.5 kg, 2.50 (2.39-2.62) mm (ANOVA, P = .001). The correlations between pressure of application and depth of complete and partial collagen injury were 0.73 (P < .001) and 0.65 (P < .001), respectively. There is a direct association between the pressure of burn device application and depth of injury. Future studies should standardize and specify the amount of pressure applied using the burn infliction device.
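A minimal sketch of the two analyses described above: a one-way ANOVA across pressure groups followed by a Pearson correlation between applied pressure and injury depth. The group means, noise level and replicate counts below are simulated placeholders, not the study's measurements.

```python
# One-way ANOVA across pressure groups plus pressure-depth Pearson correlation.
# Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pressures_kg = [0.2, 2.0, 2.7, 3.8, 4.5]
assumed_means_mm = [0.5, 0.7, 0.7, 0.9, 1.6]                   # hypothetical group means
groups = [rng.normal(m, 0.1, size=8) for m in assumed_means_mm]  # 8 replicates per group

# Does injury depth differ between pressure groups?
f_stat, p_anova = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

# How strongly is depth associated with pressure across all burns?
pressure = np.repeat(pressures_kg, 8)
depth = np.concatenate(groups)
r, p_corr = stats.pearsonr(pressure, depth)
print(f"Pearson r = {r:.2f}, P = {p_corr:.4f}")
```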
Markowitz portfolio optimization model employing fuzzy measure
NASA Astrophysics Data System (ADS)
Ramli, Suhailywati; Jaaman, Saiful Hafizah
2017-04-01
In 1952, Markowitz introduced the mean-variance methodology for portfolio selection problems. His pioneering research shaped the portfolio risk-return model and became one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying a fuzzy measure to determine risk and return. The original mean-variance model is used as a benchmark and compared with fuzzy mean-variance models in which returns are modeled by specific types of fuzzy numbers. The fuzzy approach gives better performance than the crisp mean-variance approach. Numerical examples employing Malaysian share market data are included to illustrate these models.
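A sketch of the classical (crisp) mean-variance benchmark referred to above: minimum-variance weights subject to a target expected return, solved as a small quadratic program. The return series is simulated, not Malaysian share market data, and the target return is an arbitrary assumption.

```python
# Classical mean-variance portfolio: minimize variance subject to a target return.
# Simulated returns; long-only, fully invested.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
returns = rng.normal(0.001, 0.02, size=(500, 5))   # 500 periods, 5 assets (assumed)
mu = returns.mean(axis=0)                           # expected returns
cov = np.cov(returns, rowvar=False)                 # covariance (risk) matrix
target = mu.mean()                                  # assumed target return

def portfolio_variance(w):
    return w @ cov @ w

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},        # fully invested
    {"type": "eq", "fun": lambda w: w @ mu - target},      # hit the target return
]
bounds = [(0.0, 1.0)] * len(mu)                            # long-only
w0 = np.full(len(mu), 1.0 / len(mu))

res = minimize(portfolio_variance, w0, bounds=bounds, constraints=constraints)
print("weights:", np.round(res.x, 3))
print("portfolio std dev:", np.sqrt(res.fun))
```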
Wei, Grace F. W.; Lee, Sing; Choovanichvong, Somrak; Wong, Frank H. T.
2013-01-01
Background: Education and support for caregivers is lacking in Asia, and the peer-led FamilyLink Education Programme (FLEP) is one of the few provisions that address this service gap. This study aims to evaluate quantitatively its efficacy in reducing subjective burdens and empowering the participants. Method: One hundred and nine caregiver participants in three Asian cities were surveyed at pre-intervention, post-intervention and six-month follow-up with a number of standard inventories. Mixed analysis of variance (ANOVA) procedures showed significant programme impact over time for all sites, and subsequently an empowerment measurement model was tested. Results: FLEP was found effective in reducing worry and displeasure and significantly improving intra-psychic strain, depression and all empowerment measures. The measurement model had an acceptably good fit. Baseline differences did not interfere with programme efficacy. Conclusions: Apart from providing initial support for FLEP, the current study also offers some insight into empowerment practice in mental health for Asia, whose sociocultural and political contexts are vastly different from those of developed countries. It remains to be seen whether qualitative data or a more stringent research design will yield consistent results and whether FLEP can also work in rural areas. PMID:21971981
Buratti, C; Barbanera, M; Lascaro, E; Cotana, F
2018-03-01
The aim of the present study is to analyze the influence of independent process variables such as temperature, residence time, and heating rate on the torrefaction process of coffee chaff (CC) and spent coffee grounds (SCGs). Response surface methodology and a three-factor, three-level Box-Behnken design were used in order to evaluate the effects of the process variables on the weight loss (WL) and the Higher Heating Value (HHV) of the torrefied materials. Results showed that the effects of the three factors on both responses were sequenced as follows: temperature > residence time > heating rate. Data obtained from the experiments were analyzed by analysis of variance (ANOVA) and fitted to second-order polynomial models by using multiple regression analysis. Predictive models were determined that provide satisfactory fittings of the experimental data, with coefficient of determination (R²) values higher than 0.95. An optimization study using Derringer's desirability function methodology was also carried out, and the optimal torrefaction conditions were found to be: temperature 271.7°C, residence time 20 min, and heating rate 5°C/min for CC; and 256.0°C, 20 min, and 25°C/min for SCGs. The experimental values closely agree with the corresponding predicted values. Copyright © 2017 Elsevier Ltd. All rights reserved.
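A minimal sketch of Derringer's desirability approach mentioned above: each response is mapped to an individual desirability and the overall score is their geometric mean. The bounds, weights and example predictions are illustrative assumptions, not values from the torrefaction study.

```python
# Derringer desirability sketch: combine individual response desirabilities
# into an overall score via a geometric mean. All numbers are hypothetical.
import numpy as np

def d_larger_is_better(y, low, high, weight=1.0):
    """Desirability for a response to maximize (e.g. HHV)."""
    return np.clip((y - low) / (high - low), 0.0, 1.0) ** weight

def d_smaller_is_better(y, low, high, weight=1.0):
    """Desirability for a response to minimize (e.g. weight loss)."""
    return np.clip((high - y) / (high - low), 0.0, 1.0) ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return ds.prod() ** (1.0 / len(ds))

# Example candidate condition: predicted HHV = 24 MJ/kg, weight loss = 35%
# (hypothetical model predictions and acceptability bounds).
d_hhv = d_larger_is_better(24.0, low=18.0, high=27.0)
d_wl = d_smaller_is_better(35.0, low=10.0, high=60.0)
print("overall D =", round(overall_desirability([d_hhv, d_wl]), 3))
```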
Xu, Dongjuan; Gao, Jie; Wang, Xiaojuan; Huang, Liqun; Wang, Kefang
2017-08-01
This study examined the prevalence of overactive bladder (OAB) and investigated the impact of OAB on quality of life (QOL) in patients with type 2 diabetes in Mainland China. A total of 1025 patients with type 2 diabetes were surveyed. Patients were grouped into no OAB, dry OAB, and wet OAB groups according to the presence of OAB and urge incontinence. Descriptive analyses, one-way analysis of variance (ANOVA) and multivariable regression models were conducted to assess the prevalence of OAB and the effect of OAB on QOL. The prevalence of OAB among patients with type 2 diabetes was 13.9% (with dry OAB, 6.1%; with wet OAB, 7.8%). Multivariable regression models showed that OAB symptoms caused significant deterioration of the physical and mental aspects of QOL. Compared with dry OAB, wet OAB further decreased the mental aspect of QOL. Moreover, the effect sizes of the impacts of dry and wet OAB on QOL were larger than those of diabetic neuropathy or retinopathy, diabetes duration, or urinary tract infection history. OAB is more common in patients with type 2 diabetes than in the general population and substantially decreases patient QOL. Copyright © 2017 Elsevier Inc. All rights reserved.
Najafpoor, Ali Asghar; Jonidi Jafari, Ahmad; Hosseinzadeh, Ahmad; Khani Jazani, Reza; Bargozin, Hasan
2018-01-01
Treatment with a non-thermal plasma (NTP) is a new and effective technology recently applied to gas conversion for air pollution control. This research was initiated to optimize the application of the NTP process for benzene, toluene, ethylbenzene, and xylene (BTEX) removal. The effects of four variables, including temperature, initial BTEX concentration, voltage, and flow rate, on the BTEX elimination efficiency were investigated using response surface methodology (RSM). The constructed model was evaluated by analysis of variance (ANOVA). The model goodness-of-fit and statistical significance were assessed using determination coefficients (R² and adjusted R²) and the F-test. The results revealed that R² was greater than 0.96 for BTEX removal efficiency. The statistical analysis demonstrated that the BTEX removal efficiency was significantly correlated with the temperature, BTEX concentration, voltage, and flow rate. Voltage was the most influential variable, exerting a significant effect (p < 0.0001) on the response variable. According to the results, NTP can be applied as a progressive, cost-effective, and practical process for treating airstreams polluted with BTEX under conditions of low residence time and high pollutant concentrations.
Pattern uniformity control in integrated structures
NASA Astrophysics Data System (ADS)
Kobayashi, Shinji; Okada, Soichiro; Shimura, Satoru; Nafus, Kathleen; Fonseca, Carlos; Biesemans, Serge; Enomoto, Masashi
2017-03-01
In our previous paper dealing with multi-patterning, we proposed a new indicator to quantify the quality of final wafer pattern transfer, called interactive pattern fidelity error (IPFE). It detects patterning failures resulting from any source of variation in creating integrated patterns. IPFE is a function of overlay and edge placement error (EPE) of all layers comprising the final pattern (i.e. lower and upper layers). In this paper, we extend the use cases to Via in addition to the bridge case (Block on Spacer). We propose an IPFE budget and CD budget using simple geometric and statistical models with analysis of variance (ANOVA). In addition, we validate the model with experimental data. From the experimental results, improvements in overlay, local CDU (LCDU) of contact hole (CH) or pillar patterns (especially stochastic pattern noise (SPN)) and pitch walking are all critical to meet budget requirements. We also provide a special note about the importance of the line length used in analyzing LWR. We find that the IPFE and CD budget requirements are consistent with the ITRS technical requirements table. Therefore the IPFE concept can be adopted for a variety of integrated structures comprising digital logic circuits. Finally, we suggest how to use IPFE for yield management and how to optimize requirements for each process.
Siddiki, Nayyarzia; Nantung, Tommy; Kim, Daehyeon
2014-01-01
In order to implement MEPDG hierarchical inputs for unbound and subgrade soils, a database containing subgrade MR, index properties, standard Proctor results, and laboratory MR for 140 undisturbed roadbed soil samples from six different districts in Indiana was created. The MR data were categorized in accordance with the AASHTO soil classifications and divided into several groups. Based on these groups, this study develops statistical analyses and evaluation datasets to validate the models. Stress-based regression models were evaluated using analysis of variance (ANOVA) and Z-tests, and pertinent material constants (k1, k2, and k3) were determined for different soil types. Reasonably good correlations of the material constants and MR with routine soil properties were established. Furthermore, falling weight deflectometer (FWD) tests were conducted on several Indiana highways in different seasons, and laboratory resilient modulus tests were performed on the subgrade soils collected from the FWD test sites. The resilient moduli obtained from the laboratory tests were compared with those from the FWD tests. Correlations between the laboratory resilient modulus and the FWD modulus were developed and are discussed in this paper. PMID:24701162
Comparing estimates of genetic variance across different relationship models.
Legarra, Andres
2016-02-01
Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
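A minimal sketch of the Dk statistic defined above (average self-relationship minus the average of all relationships) and of rescaling an estimated variance component to a chosen reference population; the relationship matrix and variance estimate below are simulated assumptions, not data from the paper.

```python
# Dk = mean(diag(K)) - mean(K); reference genetic variance = estimate * Dk.
# Simulated relationship matrix; any genomic or pedigree K could be plugged in.
import numpy as np

rng = np.random.default_rng(5)
Z = rng.normal(size=(100, 400))        # hypothetical marker matrix, 100 individuals
K = Z @ Z.T / Z.shape[1]               # genomic-style relationship matrix

def dk(K):
    """Average self-relationship minus the average (self- and across-) relationship."""
    K = np.asarray(K)
    return K.diagonal().mean() - K.mean()

sigma2_estimated = 2.5                 # variance component from a mixed model (assumed)
sigma2_reference = sigma2_estimated * dk(K)
print(f"Dk = {dk(K):.3f}, genetic variance in reference population = {sigma2_reference:.3f}")
```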
Shivakumar, Hagalavadi Nanjappa; Patel, Pragnesh Bharat; Desai, Bapusaheb Gangadhar; Ashok, Purnima; Arulmozhi, Sinnathambi
2007-09-01
A 3² factorial design was employed to produce glipizide lipospheres by the emulsification phase separation technique using paraffin wax and stearic acid as retardants. The effect of critical formulation variables, namely the level of paraffin wax (X1) and the proportion of stearic acid in the wax (X2), on geometric mean diameter (dg), percent encapsulation efficiency (% EE), release at the end of 12 h (rel12) and time taken for 50% of drug release (t50) was evaluated using the F-test. Mathematical models containing only the significant terms were generated for each response parameter using multiple linear regression analysis (MLRA) and analysis of variance (ANOVA). Both formulation variables studied exerted a significant influence (p < 0.05) on the response parameters. Numerical optimization using the desirability approach was employed to develop an optimized formulation by setting constraints on the dependent and independent variables. The experimental values of dg, % EE, rel12 and t50 for the optimized formulation were found to be 57.54 ± 1.38 μm, 86.28 ± 1.32%, 77.23 ± 2.78% and 5.60 ± 0.32 h, respectively, which were in close agreement with those predicted by the mathematical models. The drug release from lipospheres followed first-order kinetics and was characterized by the Higuchi diffusion model. The optimized liposphere formulation developed was found to produce sustained anti-diabetic activity following oral administration in rats.
Student Expenses in Residency Interviewing
Walling, Anne; Nilsen, Kari; Callaway, Paul; Grothusen, Jill; Gillenwater, Cole; King, Samantha; Unruh, Gregory
2017-01-01
Background: The student costs of residency interviewing are of increasing concern, but limited current information is available. Updated, more detailed information would assist students and residency programs in decisions about residency selection. The study objective was to measure the expenses and time spent in residency interviewing by the 2016 graduating class of the University of Kansas School of Medicine and assess the impact of gender, regional campus location, and primary care application. Methods: All 195 students who participated in the 2016 National Residency Matching Program (NRMP) received a 33-item questionnaire addressing interviewing activity, expenses incurred, time invested and related factors. Main measures were self-reported estimates of expenses and time spent interviewing. Descriptive analyses were applied to participant characteristics and responses. Multivariate analysis of variance (MANOVA) and chi-square tests compared students by gender, campus (main/regional), and primary care/other specialties. Analyses of variance (ANOVA) on the dependent variables provided follow-up tests on significant MANOVA results. Results: A total of 163 students (84%) completed the survey. The average student reported 38 (1–124) applications, 16 (1–54) invitations, 11 (1–28) completed interviews, and spent $3,500 ($20–$12,000) and 26 (1–90) days interviewing. No significant differences were found by gender. After MANOVA and ANOVA analyses, non-primary care applicants reported significantly more applications, interviews, and expenditures, but less program financial support. Regional campus students reported significantly fewer invitations, interviews, and days interviewing, but equivalent costs when controlled for primary care application. Cost was a limiting factor in accepting interviews for 63% and time for 53% of study respondents. Conclusions: Students reported investing significant time and money in interviewing. After controlling for other variables, primary care was associated with significantly lower expenses. Regional campus location was associated with fewer interviews and less time interviewing. Gender had no significant impact on any aspect studied. PMID:29472969
Unraveling bacterial fingerprints of city subways from microbiome 16S gene profiles.
Walker, Alejandro R; Grimes, Tyler L; Datta, Somnath; Datta, Susmita
2018-05-22
Microbial communities can be location specific, and the abundance of species within locations can influence our ability to determine whether a sample belongs to one city or another. As part of the 2017 CAMDA MetaSUB Inter-City Challenge, next generation sequencing (NGS) data were generated from swipe samples collected from subway stations in Boston, New York City (hereafter New York), and Sacramento. DNA was extracted and Illumina sequenced. Sequencing data were provided for all cities as part of the 2017 CAMDA challenge dataset. Principal component analysis (PCA) showed clear clustering of the samples for the three cities, with a substantial proportion of the variance explained by the first three components. We ran two different classifiers, and the results were robust with respect to error rate (< 6%) and accuracy (> 95%). The analysis of variance (ANOVA) demonstrated that, overall, bacterial composition across the three cities is significantly different. A similar conclusion was reached using a novel bootstrap-based test based on diversity indices. Last but not least, a co-abundance association network analysis for the taxonomic levels "order", "family", and "genus" found different patterns of bacterial networks for the three cities. Bacterial fingerprints can be useful for predicting sample provenance; in this work, provenance was predicted with over 95% accuracy. The association-based network analysis emphasized similarities between the closest cities, which share a common bacterial composition. ANOVA showed different patterns of bacterial composition amongst cities, and these findings strongly suggest that bacterial signatures differ across cities. This work advocates a data analysis pipeline that could be followed in order to gain biological insight from these data. However, the biological conclusions from this analysis are only an early indication from pilot microbiome data provided through the CAMDA 2017 challenge and will be subject to change as more complete datasets become available in the near future. These microbiome data have potential applications in forensics, ecology, and other sciences. This article was reviewed by Klas Udekwu, Alexandra Graf, and Rafal Mostowy.
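A minimal sketch of the provenance-prediction idea described above: reduce taxa abundance profiles with PCA and classify the city of origin with a standard classifier. The abundance table, city signatures and sample counts below are simulated; this is not the CAMDA MetaSUB dataset or the authors' pipeline.

```python
# PCA + random forest sketch for predicting city of origin from taxa abundances.
# Simulated abundance counts only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
n_taxa = 200
cities = ["Boston", "New York", "Sacramento"]
X, y = [], []
for city in cities:
    city_profile = rng.gamma(2.0, 1.0, size=n_taxa)   # city-specific abundance signature
    for _ in range(40):                               # 40 samples per city (assumed)
        X.append(rng.poisson(city_profile))           # noisy abundance counts
        y.append(city)
X, y = np.array(X), np.array(y)

clf = make_pipeline(PCA(n_components=10), RandomForestClassifier(random_state=0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```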
The critical role of uncertainty in projections of hydrological extremes
NASA Astrophysics Data System (ADS)
Meresa, Hadush K.; Romanowicz, Renata J.
2017-08-01
This paper aims to quantify the uncertainty in projections of future hydrological extremes in the Biala Tarnowska River at Koszyce gauging station, south Poland. The approach followed is based on several climate projections obtained from the EURO-CORDEX initiative, raw and bias-corrected realizations of catchment precipitation, and flow simulations derived using multiple hydrological model parameter sets. The projections cover the 21st century. Three sources of uncertainty are considered: one related to climate projection ensemble spread, the second related to the uncertainty in hydrological model parameters and the third related to the error in fitting theoretical distribution models to annual extreme flow series. The uncertainty of projected extreme indices related to hydrological model parameters was conditioned on flow observations from the reference period using the generalized likelihood uncertainty estimation (GLUE) approach, with separate criteria for high- and low-flow extremes. Extreme (low and high) flow quantiles were estimated using the generalized extreme value (GEV) distribution at different return periods and were based on two different lengths of the flow time series. A sensitivity analysis based on the analysis of variance (ANOVA) shows that the uncertainty introduced by the hydrological model parameters can be larger than the climate model variability and the distribution fit uncertainty for the low-flow extremes whilst for the high-flow extremes higher uncertainty is observed from climate models than from hydrological parameter and distribution fit uncertainties. This implies that ignoring one of the three uncertainty sources may cause great risk to future hydrological extreme adaptations and water resource planning and management.
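A minimal sketch of the extreme-value step described above: fit a GEV distribution to a series of annual flow maxima and compute flow quantiles for chosen return periods. The annual maxima series and GEV parameters are simulated, not the Biala Tarnowska record.

```python
# GEV fit and return levels for annual streamflow maxima. Simulated series only.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
annual_maxima = genextreme.rvs(c=-0.1, loc=100.0, scale=30.0, size=50,
                               random_state=rng)      # 50 years of maxima (assumed)

# Fit GEV parameters by maximum likelihood.
shape, loc, scale = genextreme.fit(annual_maxima)

# Return level for return period T: the (1 - 1/T) quantile of the fitted GEV.
for T in (2, 20, 100):
    q = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T:3d}-year flow: {q:.1f}")
```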
Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.
2014-01-01
Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s–1 were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.
Gloss and Stain Resistance of Ceramic-Polymer CAD/CAM Restorative Blocks.
Lawson, Nathaniel C; Burgess, John O
2016-03-01
To evaluate the gloss and stain resistance of several new ceramic-polymer CAD/CAM blocks. Specimens (4 mm) were sectioned from Enamic (polymer-infused ceramic), LAVA Ultimate (nano-ceramic reinforced polymer), e.max (lithium disilicate), Paradigm C (porcelain), and Paradigm MZ100 (composite). Specimens were wet polished on a polishing wheel to either 320-grit silicon carbide paper (un-polished, N = 8) or 2000-grit silicon carbide paper followed by a 0.05 μm alumina slurry (polished, N = 8). Initial gloss and color (L*a*b*) values were measured. Specimens were stored in a staining solution at 37°C in darkness for 12 days (simulating 1 year). After storage, L*a*b* values were re-measured. Change in color was reported as ΔE00 based on the CIEDE2000 formula. Gloss and ΔE00 were analyzed by two-way analysis of variance (ANOVA) (α = .05). Separate one-way ANOVA and Tukey post-hoc analyses were performed for both polish conditions and all materials. Two-way ANOVA showed that the factors material, polish and their interaction were significant for both gloss and ΔE00 (p < .01). Post-hoc analysis revealed that polished specimens had significantly less color change than un-polished specimens for Paradigm C and LAVA Ultimate. e.max had significantly higher gloss and less color change than all other materials. The composition and polish of CAD/CAM materials affect gloss and stain resistance. Ceramic-polymer hybrid materials can achieve the high gloss required for esthetic restorations. These materials should be polished in order to minimize staining. If polished, all of the tested materials exhibited clinically acceptable color changes at 1 year of simulated staining. (J Esthet Restor Dent 28:S40-S45, 2016). © 2015 Wiley Periodicals, Inc.
Araújo Oliveira Ferreira, Dyna Mara; Costa, Yuri Martins; de Quevedo, Henrique Müller; Bonjardim, Leonardo Rigoldi; Rodrigues Conti, Paulo César
2018-05-15
To assess the modulatory effects of experimental psychological stress on the somatosensory evaluation of myofascial temporomandibular disorder (TMD) patients. A total of 20 women with myofascial TMD and 20 age-matched healthy women were assessed by means of a standardized battery of quantitative sensory testing. Cold detection threshold (CDT), warm detection threshold (WDT), cold pain threshold (CPT), heat pain threshold (HPT), mechanical pain threshold (MPT), wind-up ratio (WUR), and pressure pain threshold (PPT) were performed on the facial skin overlying the masseter muscle. The variables were measured in three sessions: before (baseline) and immediately after the Paced Auditory Serial Addition Task (PASAT) (stress) and then after a washout period of 20 to 30 minutes (poststress). Mixed analysis of variance (ANOVA) was applied to the data, and the significance level was set at P = .050. A significant main effect of the experimental session on all thermal tests was found (ANOVA: F > 4.10, P < .017), where detection tests presented an increase in thresholds in the poststress session compared to baseline (CDT, P = .012; WDT, P = .040) and pain thresholds were reduced in the stress (CPT, P < .001; HPT, P = .001) and poststress sessions (CPT, P = .005; HPT, P = .006) compared to baseline. In addition, a significant main effect of the study group on all mechanical tests (MPT, WUR, and PPT) was found (ANOVA: F > 4.65, P < .037), where TMD patients were more sensitive than healthy volunteers. Acute mental stress conditioning can modulate thermal sensitivity of the skin overlying the masseter in myofascial TMD patients and healthy volunteers. Therefore, psychological stress should be considered in order to perform an unbiased somatosensory assessment of TMD patients.
Haloi, Anjali; Limbu, Dhruba Kumar
2013-01-01
In the present study an attempt has been made to report on the nutritional status of the Assamese Muslim women of Dadara and Agyathuri villages of the Kamrup district in Assam, India, on the basis of body mass index (BMI) and haemoglobin (Hb) content. Cross-sectional data on 1034 women belonging to the age group of 19 years and above were collected following internationally accepted standards. Fertility was highest among underweight mothers (mean 6.50 ± 0.14 SE; range 1-11). The one-way analysis of variance (ANOVA) test of BMI and fertility showed a significant difference between BMI groups (p < 0.01). The highest haemoglobin levels were recorded in the age group of ≤ 23 years, with a mean of 11.61 ± 0.06 g/dl and a range of 9.8-13.9 g/dl, whereas the lowest levels were found in the 44+ years age group, with a mean of 10.26 ± 0.04 g/dl and a range of 9.2-11.8 g/dl. The ANOVA for haemoglobin content and corresponding fertility showed a significant difference in live births between haemoglobin levels (p < 0.01). The ANOVA for haemoglobin and BMI range showed a significant difference between the normal, overweight and underweight groups; the t-value and F-ratio were 118.61 and 14068.42, respectively, significant at the 1% level of probability. The authors conclude that, in this study population, women with high fertility tend to have poor nutritional status. These findings might be important in formulating responsive health policies in an underdeveloped region.
Students' perceptions of vertical and horizontal integration in a discipline-based dental school.
Postma, T C; White, J G
2017-05-01
Integration is a key concern in discipline-based undergraduate dental curricula. Therefore, this study compared feedback on integration from students who participated in different instructional designs in a Comprehensive Patient Care course. The study was conducted at the University of Pretoria (2009-2011). Third-year cohorts (Cohorts A, B and C) participated in pre-clinical case-based learning, whilst fourth-year cohorts (Cohorts D and E) received didactic teaching in Comprehensive Patient Care. Cohorts A, D and E practised clinical Comprehensive Patient Care in a discipline-based clinic. Cohort B conducted their Comprehensive Patient Care patient examinations in a dedicated facility supervised by dedicated faculty responsible to teach integration. Students had to indicate on visual analogue scales whether the way they were taught at the school helped them to integrate knowledge from the same (horizontal integration) and preceding (vertical integration) year of study. The end-points of the scales were defined as 'definitely' and 'not at all'. Analysis of variance (ANOVA) was employed to measure the differences between cohorts according to the year of study. Third-year case-based learning cohorts rated the horizontal integration close to 80/100 and vertical integration ranging from 64 to 71/100. In year four, Cohort B rated vertical and horizontal integration 9-15% higher (ANOVA, P < 0.05) than Cohorts A and D. In year five, Cohort A rated vertical and horizontal integration 11-18% higher (ANOVA, P < 0.05) than Cohorts D and E. Pre-clinical case-based learning and Comprehensive Patient Care supervised by dedicated faculty were associated with more favourable perceptions about integration in the discipline-based undergraduate dental curriculum. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
The use of reinforced composite resin cement as compensation for reduced post length.
Nissan, J; Dmitry, Y; Assif, D
2001-09-01
Cements that yield high retentive values are believed to allow the use of shorter posts. This study investigated the use of reinforced composite resin cement as compensation for reduced dowel length. The retention values of stainless steel posts (parallel-sided ParaPost and tapered Dentatus in 5-, 8-, and 10-mm lengths) luted with Flexi-Flow titanium-reinforced composite resin and zinc phosphate cements were evaluated. Single-rooted extracted human teeth with crowns (n = 120), removed at the cementoenamel junction, were randomly divided into 4 groups of 30 samples each. Different post lengths were luted with either Flexi-Flow or zinc phosphate. Each sample was placed into a specialized jig on a tensile testing machine, and load was applied at a crosshead speed of 2 mm/min until failure. The effect of different posts and cements on the force required to dislodge the dowels was evaluated with multiple analyses of variance (ANOVA). One-way ANOVA with Scheffé contrast was applied to determine the effect of different post lengths on the retentive failure of posts luted with the 2 agents. Flexi-Flow reinforced composite resin cement significantly increased retention of ParaPost and Dentatus dowels (P<.001) compared with zinc phosphate. One-way ANOVA revealed no statistically significant difference (P>.05) between the mean retention of dowels luted with Flexi-Flow for all post lengths used (5 mm = 8 mm = 10 mm). Mean retention values of the groups luted with zinc phosphate showed a statistically significant difference (P<.001) for the different post lengths (10 > 8 > 5 mm). Parallel-sided ParaPost dowels demonstrated higher mean retention than tapered Dentatus dowels (P<.001). In this study, Flexi-Flow reinforced composite resin cement compensated for the reduced length of shorter parallel-sided ParaPost and tapered Dentatus dowels.
A class of multi-period semi-variance portfolio for petroleum exploration and development
NASA Astrophysics Data System (ADS)
Guo, Qiulin; Li, Jianzhong; Zou, Caineng; Guo, Yujuan; Yan, Wei
2012-10-01
Variance is substituted by semi-variance in Markowitz's portfolio selection model. For dynamic valuation of exploration and development projects, one-period portfolio selection is extended to multiple periods. In this article, a class of multi-period semi-variance exploration and development portfolio models is formulated for the first time. In addition, a hybrid genetic algorithm, which makes use of the position displacement strategy of the particle swarm optimiser as a mutation operation, is applied to solve the multi-period semi-variance model. For this class of portfolio model, numerical results show that the model is effective and feasible.
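A minimal sketch of the semi-variance risk measure substituted for variance above: only returns falling below a benchmark (here, the mean) contribute to risk. The return series and benchmark choice are assumptions for illustration.

```python
# Semi-variance: mean squared shortfall of returns below a benchmark.
# Simulated project returns only.
import numpy as np

def semi_variance(returns, benchmark=None):
    """Mean squared shortfall of returns below the benchmark (default: the mean)."""
    returns = np.asarray(returns, dtype=float)
    if benchmark is None:
        benchmark = returns.mean()
    shortfall = np.minimum(returns - benchmark, 0.0)
    return np.mean(shortfall ** 2)

rng = np.random.default_rng(8)
project_returns = rng.normal(0.08, 0.2, size=1000)   # hypothetical project returns

print("variance:      ", round(np.var(project_returns), 5))
print("semi-variance: ", round(semi_variance(project_returns), 5))
```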
Cojocaru, C; Khayet, M; Zakrzewska-Trznadel, G; Jaworska, A
2009-08-15
The factorial design of experiments and the desirability function approach have been applied for multi-response optimization of a pervaporation separation process. Two organic aqueous solutions were considered as model mixtures: water/acetonitrile and water/ethanol. Two responses were employed in the multi-response optimization of pervaporation: total permeate flux and organic selectivity. The effects of three experimental factors (feed temperature, initial concentration of the organic compound in the feed solution, and downstream pressure) on the pervaporation responses were investigated. The experiments were performed according to a 2³ full factorial experimental design. The factorial models were obtained from the experimental design and validated statistically by analysis of variance (ANOVA). The spatial representations of the response functions were drawn together with the corresponding contour line plots. The factorial models were used to develop the overall desirability function. In addition, overlap contour plots were presented to identify the desirability zone and to determine the optimum point. The optimal operating conditions were found to be, in the case of the water/acetonitrile mixture, a feed temperature of 55°C, an initial concentration of 6.58% and a downstream pressure of 13.99 kPa, while for the water/ethanol mixture they were a feed temperature of 55°C, an initial concentration of 4.53% and a downstream pressure of 9.57 kPa. Under these optimum conditions, an improvement in both the total permeate flux and the selectivity was observed experimentally.
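A minimal sketch of a 2³ full factorial analysis like the one described above: three coded factors at two levels, a fitted factorial model and its ANOVA table. The factor names, coefficients and responses are simulated assumptions, not the pervaporation data.

```python
# 2^3 full factorial sketch: coded factors, replicated runs, factorial model, ANOVA.
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
# Coded design: all 8 combinations of -1/+1, replicated twice (16 runs).
levels = list(itertools.product((-1, 1), repeat=3)) * 2
df = pd.DataFrame(levels, columns=["temp", "conc", "pres"])

# Hypothetical permeate-flux response with main effects and one interaction.
df["flux"] = (10 + 3 * df["temp"] + 1.5 * df["conc"] - 2 * df["pres"]
              + 0.8 * df["temp"] * df["pres"] + rng.normal(0, 0.5, len(df)))

# Factorial model with all main effects and interactions, then its ANOVA table.
model = smf.ols("flux ~ temp * conc * pres", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```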
NASA Astrophysics Data System (ADS)
Kim, Yura; Jun, Mikyoung; Min, Seung-Ki; Suh, Myoung-Seok; Kang, Hyun-Suk
2016-05-01
CORDEX-East Asia, a branch of the coordinated regional climate downscaling experiment (CORDEX) initiative, provides high-resolution climate simulations for the domain covering East Asia. This study analyzes temperature data from regional climate models (RCMs) participating in the CORDEX - East Asia region, accounting for the spatial dependence structure of the data. In particular, we assess similarities and dissimilarities of the outputs from two RCMs, HadGEM3-RA and RegCM4, over the region and over time. A Bayesian functional analysis of variance (ANOVA) approach is used to simultaneously model the temperature patterns from the two RCMs for the current and future climate. We exploit nonstationary spatial models to handle the spatial dependence structure of the temperature variable, which depends heavily on latitude and altitude. For a seasonal comparison, we examine changes in the winter temperature in addition to the summer temperature data. We find that the temperature increase projected by RegCM4 tends to be smaller than the projection of HadGEM3-RA for summers, and that the future warming projected by HadGEM3-RA tends to be weaker for winters. Also, the results show that there will be a warming of 1-3°C over the region in 45 years. More specifically, the warming pattern clearly depends on the latitude, with greater temperature increases in higher latitude areas, which implies that warming may be more severe in the northern part of the domain.
Variance analysis of forecasted streamflow maxima in a wet temperate climate
NASA Astrophysics Data System (ADS)
Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.
2018-05-01
Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers should strictly adhere to the rules of extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30 (±21), +38 (±34) and +51 (±85)% for 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of the maxima projections was dominated by climate model factors and extreme value analyses.
Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve a target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition differs across the stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
Whisson, Desley A.; Takekawa, John Y.
2000-01-01
Aquatic hazing devices recently have been developed as a possible means of deterring waterbirds from oil spills, thereby reducing casualties. However, the effectiveness of these devices has not been examined with rigorous statistical tests. We conducted a study in the San Francisco Bay estuary to develop a design for testing the effectiveness of an aquatic hazing device on waterbirds in open water. Transects marked with poles at 100-m intervals up to 800 m from the hazing device were established at two sites separated by three km in the north bay. Alternating two-day test and control periods were conducted at each site. Observers in over-water blinds counted the number, species and behavior (swimming, diving, or preening) of birds on transects each day. Aerial surveys of birds within four km of the device were conducted at the beginning of each test. For both aerial and ground surveys, a three-way mixed model analysis of variance test was used to examine trial, distance from the device, and treatment (device on or off) fixed effects, and site as a random effect, on numbers of Greater and Lesser Scaup (Aythya marila and A. affinis), Surf Scoter (Melanitta perspicillata), and all other waterbirds. We could not detect a significant deterrent effect of the hazing device in either aerial surveys of all ducks or scaup (all ducks, F = 1.1; scaup, F(28,230) = 0.9; all n.s.; three-factor ANOVA) or ground surveys of all ducks or scaup (all ducks, F(28,23) = 1.0; scaup, F(28,230) = 0.9; all n.s.; three-factor ANOVA). There was a significant trial-by-treatment interaction for Surf Scoters (F(4,9) = 5.4, P = 0.02; three-factor ANOVA), but Surf Scoter numbers fluctuated greatly among trials, so the effect of the device on this species was not clear. Birds did not alter their behavior when the device was active. In general, although aquatic hazing devices have potential to reduce waterbird mortality in oil spills, the tested device was not effective as a deterrent for waterfowl in experimental trials on the estuary. Received 27 September 1999, accepted 3 January 2000.
Virkutyte, Jurate; Rokhina, Ekaterina; Jegatheesan, Veeriah
2010-03-01
Electro-Fenton denitrification of a model wastewater was studied using platinized titanium electrodes in a batch electrochemical reactor. The model wastewater was prepared from components based on a real aquaculture effluent, with nitrate concentrations varying from 200 to 800 mg L⁻¹. The technical and scientific feasibility of the method was assessed by examining the relationship between the most significant process variables, namely the Fenton's reagent to hydrogen peroxide ratio (1:5, 1:20 and 1:50) and the current density (0.17, 0.34 and 0.69 mA cm⁻²), and their effect on denitrification efficiency in terms of nitrate degradation, using a central composite Box-Behnken experimental design. The goodness of the model was checked by the coefficient of determination R² (0.9775), the corresponding analysis of variance (P > F) and a parity plot. The ANOVA results indicated that the proposed model was significant and can therefore be used to optimize the denitrification of a model wastewater. The optimum reaction conditions were found to be a 1:20 Fenton's reagent/hydrogen peroxide ratio, a 400 mg L⁻¹ initial nitrate concentration and a 0.34 mA cm⁻² current density. Treatment costs in terms of electricity expenditure at 0.17, 0.34 and 0.69 mA cm⁻² were 7.6, 16 and 41.8 euro per kilogram of nitrate, respectively, and 1, 2 and 4 euro per cubic meter of wastewater, respectively. 2009 Elsevier Ltd. All rights reserved.
Pull out strength calculator for pedicle screws using a surrogate ensemble approach.
Varghese, Vicky; Ramu, Palaniappan; Krishnan, Venkatesh; Saravana Kumar, Gurunathan
2016-12-01
Pedicle screw instrumentation is widely used in the treatment of spinal disorders and deformities. Currently, the surgeon decides on the holding power of the instrumentation based on perioperative feel, which is subjective in nature. The objective of this paper is to develop a surrogate model that predicts the pullout strength of a pedicle screw based on density, insertion angle, insertion depth and reinsertion. A Taguchi orthogonal array was used to design an experiment to find the factors affecting the pullout strength of pedicle screws. The pullout studies were carried out using polyaxial pedicle screws on rigid polyurethane foam blocks according to ASTM F543 (American Society for Testing and Materials). Analysis of variance (ANOVA) and Tukey's honestly significant difference multiple comparison tests were done to find factor effects. Based on the experimental results, surrogate models based on Kriging, polynomial response surface and radial basis functions were developed for predicting the pullout strength for different combinations of factors. An ensemble of these surrogates based on a weighted average surrogate model was also evaluated for prediction. Density, insertion depth, insertion angle and reinsertion have a significant effect (p < 0.05) on the pullout strength of pedicle screws. The weighted average surrogate performed best among the surrogate models considered in this study at predicting pullout strength and acted as insurance against poor predictions. A predictive model for the pullout strength of pedicle screws was developed using experimental values and surrogate models. This can be used in pre-surgical planning and in decision support systems for spine surgeons. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
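A sketch of a weighted-average surrogate ensemble in the spirit described above: two surrogates (a Gaussian process standing in for Kriging, and a quadratic polynomial response surface) are combined with weights inversely proportional to their cross-validation error. The design factors, responses and weighting rule are simulated assumptions, not the paper's exact ensemble.

```python
# Weighted-average surrogate sketch: GP + quadratic RSM, inverse-error weights.
# Simulated design data only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(10)
# Factors (coded units): foam density, insertion depth, insertion angle.
X = rng.uniform(-1, 1, size=(40, 3))
y = 800 + 300 * X[:, 0] + 150 * X[:, 1] - 60 * X[:, 2] ** 2 + rng.normal(0, 20, 40)

surrogates = {
    "kriging-like GP": GaussianProcessRegressor(normalize_y=True),
    "quadratic RSM": make_pipeline(PolynomialFeatures(2), LinearRegression()),
}

# Weight each surrogate by the inverse of its cross-validated MSE.
errors = {name: -cross_val_score(m, X, y, cv=5,
                                 scoring="neg_mean_squared_error").mean()
          for name, m in surrogates.items()}
total = sum(1.0 / e for e in errors.values())
weights = {name: (1.0 / e) / total for name, e in errors.items()}

# Ensemble prediction at a new design point.
x_new = np.array([[0.2, -0.5, 0.1]])
pred = sum(w * surrogates[name].fit(X, y).predict(x_new)[0]
           for name, w in weights.items())
print("weights:", {k: round(v, 2) for k, v in weights.items()})
print("ensemble pullout-strength prediction:", round(pred, 1))
```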
Temporal variation of out-of-hospital cardiac arrests in an equatorial climate.
Ong, Marcus Eh; Ng, Faith Sp; Yap, Susan; Yong, Kok Leong; Peberdy, Mary A; Ornato, Joseph P
2010-01-01
We aimed to determine whether there is a seasonal variation of out-of-hospital cardiac arrests (OHCA) in an equatorial climate, which does not experience seasonal environmental change. We conducted an observational prospective study looking at the occurrence of OHCA in Singapore. Included were all patients with OHCA presenting to Emergency Departments across the country. We examined the monthly, daily, and hourly number of cases over a three-year period. Data were analyzed using analysis of variance (ANOVA). From October 1st, 2001 to October 14th, 2004, 2428 patients were enrolled in the study. The mean age at cardiac arrest was 60.6 years, and 68.0% of patients were male. Ethnic distribution was 69.5% Chinese, 15.0% Malay, 11.0% Indian, and 4.4% others. There was no significant seasonal variation (spring/summer/fall/winter) of events (ANOVA P = 0.71), monthly variation (P = 0.88) or yearly variation (P = 0.26). We did find weekly peaks on Mondays and a circadian pattern with daily peaks from 9-10 am. We did not find any discernible seasonal pattern of cardiac arrests. This contrasts with findings from temperate countries and suggests a climatic influence on cardiac arrest occurrence. We also found that sudden cardiac arrests follow a circadian pattern.
Efficacy of Miswak toothpaste and mouthwash on cariogenic bacteria
Al-Dabbagh, Samim A.; Qasim, Huda J.; Al-Derzi, Nadia A.
2016-01-01
Objectives: To evaluate the efficacy of Salvadora persica (Miswak) products on cariogenic bacteria in comparison with ordinary toothpaste. Methods: The study was conducted in Zakho city, Kurdistan region, Iraq during the period from October 2013 to January 2014. This was a randomized controlled clinical trial of 40 students randomly allocated into 4 groups. They were instructed to use Miswak toothpaste, Miswak mouthwash, or ordinary toothpaste with water or with normal saline. Salivary samples were collected at three time intervals: before use, immediately after use, and after 2 weeks of use. The effect of each method on Streptococcus mutans and Lactobacilli was evaluated using a caries risk test. Results: One-way repeated measures analysis of variance (ANOVA), one-way ANOVA, and least significant difference tests were used. Miswak mouthwash had a significant reducing effect on both bacteria immediately and after 2 weeks of use. Miswak toothpaste had a similar effect on Lactobacilli, while Streptococcus mutans showed a significant decrease only after 2 weeks of use. Ordinary toothpaste showed a non-significant effect on both bacteria at both time intervals, while the addition of normal saline showed a significant effect on both bacteria only after 2 weeks of use. Conclusion: Miswak products, especially the mouthwash, were more effective in reducing the growth of cariogenic bacteria than ordinary toothpaste. PMID:27570858
NASA Astrophysics Data System (ADS)
Sazali, Siti Nurlydia; Hazmi, Izfa Riza; Rahim, Faszly; Abang, Fatimah; Jemain, Abdul Aziz
2018-04-01
The recognition of intraspecific variation could enhance knowledge and understanding of population divergence that might result from different geographical areas. To study the possible effect of location, a morphometric study of the red stripe weevil, Rhynchophorus vulneratus, from different localities in Malaysia was conducted using field and voucher specimens. A total of twenty-three morphological characters were examined from 108 individuals of R. vulneratus representing populations from Kota Samarahan, Mukah and central Peninsular Malaysia. The data were subjected to univariate one-way (single-factor) analysis of variance (ANOVA) and to factor analysis using SPSS version 22.0 software. Univariate ANOVA showed that all tested variables differed significantly (p < 0.05) except for mesocoxal distance (MSD). In the factor analysis, the first three factors with eigenvalues greater than 1.0 were extracted, together explaining a high proportion of the variation (82.687%). Factor 1 accounted for 39.213% of the total variation, factor 2 for 34.096% and factor 3 for 9.377%. The mixed plotting of the twenty-three morphological characters suggests a strong correlation among the parameters examined, and further statistical analysis should be conducted that includes environmental factors such as habitat type, food availability and predation.
Visuoconstructional Impairment in Subtypes of Mild Cognitive Impairment
Ahmed, Samrah; Brennan, Laura; Eppig, Joel; Price, Catherine C.; Lamar, Melissa; Delano-Wood, Lisa; Bangen, Katherine J.; Edmonds, Emily C.; Clark, Lindsey; Nation, Daniel A.; Jak, Amy; Au, Rhoda; Swenson, Rodney; Bondi, Mark W.; Libon, David J.
2018-01-01
Clock Drawing Test performance was examined alongside other neuropsychological tests in mild cognitive impairment (MCI). We tested the hypothesis that clock-drawing errors are related to executive impairment. The current research examined 86 patients with MCI for whom, in prior research, cluster analysis was used to sort patients into dysexecutive (dMCI, n=22), amnestic (aMCI, n=13), and multi-domain (mMCI, n=51) subtypes. First, principal components analysis (PCA) and linear regression examined relations between clock-drawing errors and neuropsychological test performance independent of MCI subtype. Second, between-group differences were assessed with analysis of variance (ANOVA) where MCI subgroups were compared to normal controls (NC). PCA yielded a 3-group solution. Contrary to expectations, clock-drawing errors loaded with lower performance on naming/lexical retrieval, rather than with executive tests. Regression analyses found increasing clock-drawing errors to command were associated with worse performance only on naming/lexical retrieval tests. ANOVAs revealed no differences in clock-drawing errors between dMCI versus mMCI or aMCI versus NCs. Both the dMCI and mMCI groups generated more clock-drawing errors than the aMCI and NC groups in the command condition. In MCI, language-related skills contribute to clock-drawing impairment. PMID:26397732
Application of texture analysis method for mammogram density classification
NASA Astrophysics Data System (ADS)
Nithya, R.; Santhi, B.
2017-07-01
Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classify breast tissue types in digital mammogram. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve the diagnostic accuracy in mammogram density classification. Texture analysis methods are used to extract the features from the mammogram. Texture features are extracted by using histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), Entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. These extracted features are selected using Analysis of Variance (ANOVA). The features selected by ANOVA are fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density classification. This work has been carried out by using the mini-Mammographic Image Analysis Society (MIAS) database. Five classifiers are employed namely, Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than LDA, NB, KNN and SVM classifiers. The proposed methodology has achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
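As a rough illustration of the ANOVA-based feature selection step described above (and not the authors' implementation), the following sketch selects the texture features with the largest ANOVA F-statistics and feeds them to a classifier; the feature matrix, labels and parameter choices are placeholders.

```python
# Illustrative sketch: ANOVA F-test feature selection followed by classification.
# X (texture features per mammogram) and y (density class labels) are random
# placeholders standing in for precomputed GLCM/LBP/wavelet features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(322, 60))          # placeholder texture feature matrix
y = rng.integers(0, 3, size=322)        # placeholder 3-class density labels

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=20),       # keep features with the largest ANOVA F
    MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```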
Foley, Jessica M.; Ettenhofer, Mark L.; Kim, Michelle S.; Behdin, Nina; Castellon, Steven A.; Hinkin, Charles H.
2013-01-01
The present study examined the impact of cognitive reserve in maintaining intact neuropsychological (NP) function among older HIV-positive individuals, a uniquely at-risk subgroup. Participants included 129 individuals classified by HIV serostatus, age group, and NP impairment. A three-way analysis of variance (ANOVA) followed by a series of within-group ANOVA and multiple regression analyses were conducted to investigate the pattern of cognitive reserve (vs. other protective) influence among groups with varying risks of NP impairment. Results indicated a significant age × HIV status interaction, with older HIV-positive individuals demonstrating higher cognitive reserve than subgroups with less risk for NP compromise (younger age and/or HIV-negative). Results demonstrated higher cognitive reserve specific to NP-intact older HIV-positive individuals. Within this group, the interaction of younger age and higher cognitive reserve independently contributed to cognitive status when controlling for psychiatric, immunological, and psychosocial protective mechanisms, suggesting the importance of cognitive reserve beyond other protective mechanisms in maintaining optimal NP functioning in those individuals most at risk. Alongside younger age, factors contributing to cognitive reserve (i.e., education and estimated premorbid intelligence) may provide substantial benefit for older HIV-positive adults who are at high risk for NP compromise. PMID:22385375
Alemu, Sisay Mulugeta; Habtewold, Tesfa Dejenie; Haile, Yohannes Gebreegziabhere
2017-01-01
Globally, 3 to 8% of reproductive-age women suffer from premenstrual dysphoric disorder (PMDD). Several mental and reproductive health-related factors cause low academic achievement during university education. However, limited data exist in Ethiopia. The aim of the study was to investigate mental and reproductive health correlates of academic performance. An institution-based cross-sectional study was conducted with 667 Debre Berhan University female students from April to June 2015. Academic performance was the outcome variable, and mental and reproductive health characteristics were the explanatory variables. A two-way analysis of variance (ANOVA) test of association was applied to examine group differences in academic performance. Among the 529 students who participated, 49.3% reported mild premenstrual syndrome (PMS), 36.9% reported moderate/severe PMS, and 13.8% fulfilled PMDD diagnostic criteria. The ANOVA test of association revealed no significant difference in academic performance between students with different levels of PMS experience (F-statistic = 0.08, p value = 0.93). Nevertheless, there was a significant difference in academic performance between students with different lengths of menses (F-statistic = 5.15, p value = 0.006). There was no significant association between PMS experience and academic performance; on the other hand, the length of menses was significantly associated with academic performance.
NASA Astrophysics Data System (ADS)
Odume, O. N.; Muller, W. J.; Palmer, C. G.; Arimoro, F. O.
The Swartkops River is located in the Eastern Cape of South Africa; it drains a heavily industrialised catchment and has suffered deterioration in water quality due to pollution. Water quality impairment in the Swartkops River has impacted its biota. Deformities in the mouthparts of larval Chironomidae, particularly of the mentum, represent sub-lethal effects of exposure to pollutants and were therefore employed as indicators of pollution in the Swartkops River. Chironomid larvae were collected using the South African Scoring System version 5 (SASS5) protocol. A total of 4838 larvae, representing 26 taxa from four sampling sites during four seasons, were screened for mentum deformities. The community incidences of mentum deformity were consistently higher than 8% at Sites 2-4, indicating pollution stress in the river. Analysis of variance (ANOVA) conducted on arcsine-transformed data revealed that the mean community incidence of mentum deformity was significantly higher (p < 0.05) at Site 3. ANOVA did not reveal statistically significant differences (p > 0.05) between seasons across sites. Severe deformities were consistently higher at Site 3. Strong correlations were found between deformity indices and the concentrations of dissolved oxygen (DO), total inorganic nitrogen (TIN), orthophosphate-phosphorus (PO4-P), electrical conductivity (EC) and turbidity.
Salmani, M H; Mokhtari, M; Raeisi, Z; Ehrampoush, M H; Sadeghian, H A
2017-09-01
Wastewater containing residual pharmaceutical components must be treated before being discharged to the environment. This study was conducted to investigate the efficiency of a tungsten-carbon nanocomposite in diclofenac removal using design of experiments (DOE). Twenty-seven batch adsorption experiments were performed by varying three effective parameters (pH, adsorbent dose, and initial concentration) at three levels. The nanocomposite was prepared from tungsten oxide and activated carbon powder in a 1:4 mass ratio. The remaining concentration of diclofenac was measured with a spectrometer after adding 2,2'-bipyridine and ferric chloride reagents. Analysis of variance (ANOVA) was applied to determine the main and interaction effects. The equilibrium time for the removal process was determined to be 30 min. It was observed that pH had the lowest influence on the removal efficiency of diclofenac. The nanocomposite gave high removal at a low initial concentration of 5.0 mg/L; the maximum removal for an initial concentration of 5.0 mg/L was 88.0% at a contact time of 30 min. The results of the ANOVA showed that adsorbent mass was among the most effective variables. Using DOE as an efficient method revealed that the tungsten-carbon nanocomposite has high efficiency in the removal of residual diclofenac from aqueous solution.
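The factorial analysis described above can be sketched as follows; the factor levels and response values are hypothetical stand-ins for the study's measurements, and the model keeps main effects and two-way interactions so that the 27 unreplicated runs retain residual degrees of freedom.

```python
# Minimal sketch of a 3-factor, 3-level DOE analysed with ANOVA for main and
# interaction effects. All numbers below are simulated placeholders.
from itertools import product
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
runs = list(product([3, 5, 7], [0.1, 0.2, 0.3], [5.0, 10.0, 20.0]))  # pH, dose, conc
df = pd.DataFrame(runs, columns=["pH", "dose", "conc"])
df["removal"] = 80 - 2.0 * df["conc"] + 30 * df["dose"] + rng.normal(0, 2, len(df))

# Main effects plus two-way interactions; the three-way interaction is omitted
# so that the unreplicated design still has residual degrees of freedom.
model = ols("removal ~ (C(pH) + C(dose) + C(conc))**2", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```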
Friction Forces during Sliding of Various Brackets for Malaligned Teeth: An In Vitro Study
Crincoli, Vito; Di Bisceglie, Maria Beatrice; Balsamo, Antonio; Serpico, Vitaliano; Chiatante, Francesco; Pappalettere, Carmine; Boccaccio, Antonio
2013-01-01
Aims. To measure the friction force generated during sliding mechanics with conventional, self-ligating (Damon 3 mx, Smart Clip, and Time 3) and low-friction (Synergy) brackets using different archwire diameters and ligating systems in the presence of apical and buccal malalignments of the canine. Methods. An experimental setup reproducing the right buccal segment of the maxillary arch was designed to measure the friction force generated at the bracket/wire and wire/ligature interfaces of different brackets. A complete factorial plan was drawn up and a three-way analysis of variance (ANOVA) was carried out to investigate whether the following factors affect the values of friction force: (i) degree of malalignment, (ii) diameter of the orthodontic wire, and (iii) bracket/ligature combination. Tukey's post hoc test was also conducted to evaluate any statistically significant differences between the bracket/ligature combinations analyzed. Results. ANOVA showed that all the above factors affect the friction force values. The friction force released during sliding mechanics with conventional brackets is about 5-6 times higher than that released with the other investigated brackets. A quasi-linear increase of the frictional forces was observed for increasing amounts of apical and buccal malalignment. Conclusion. The Synergy bracket with a silicone ligature placed around the inner tie-wings appears to yield the best performance. PMID:23533364
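A hedged sketch of the same analysis pattern, a three-way ANOVA followed by Tukey's post hoc test, with made-up friction data and factor labels:

```python
# Three-way ANOVA on friction force plus Tukey HSD over bracket types.
# Data frame columns and values are hypothetical, not the authors' measurements.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
n = 240
df = pd.DataFrame({
    "malalignment": rng.choice(["0mm", "1mm", "2mm"], n),
    "wire": rng.choice(["0.014", "0.018"], n),
    "bracket": rng.choice(["conventional", "self-ligating", "low-friction"], n),
})
df["friction"] = rng.normal(2.0, 0.5, n) + (df["bracket"] == "conventional") * 5

anova = sm.stats.anova_lm(
    ols("friction ~ C(malalignment) * C(wire) * C(bracket)", data=df).fit(), typ=2)
print(anova)
print(pairwise_tukeyhsd(df["friction"], df["bracket"]))
```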
The Heart Rate Response to Nintendo Wii Boxing in Young Adults
Bosch, Pamela R.; Poloni, Joseph; Thornton, Andrew; Lynskey, James V.
2012-01-01
Purpose To determine if 30 minutes of Nintendo Wii Sports boxing provides cardiorespiratory benefits and contributes to the daily exercise recommendations for healthy young adults. Methods Twenty healthy 23- to 27-year-olds participated in two sessions to measure maximum heart rate (HRmax) via a treadmill test and heart rate (HR) response to 30 minutes of Wii Sports boxing. Heart rate in beats per minute (bpm) was measured continuously, and exercise intensity during each minute of play was stratified as a percentage of HRmax. Mixed designs analysis of variance (ANOVA) and Pearson product moment correlations were used to analyze the data. Results Mean (SD) HR response to boxing was 143 (15) bpm or 77.5% (10.0%) of HRmax. The mean HR response for experienced participants was significantly lower than inexperienced participants, P = .007. The ANOVA revealed a significant interaction between experience and time spent at various intensities, P = .009. Experienced participants spent more time in light to vigorous intensities, inexperienced participants in moderate to very hard intensities. Fitness was not correlated with mean HR response to boxing, P = .49. Conclusion Thirty minutes of Nintendo Wii Sports boxing provides a moderate to vigorous aerobic response in healthy young adults and can contribute to daily recommendations for physical activity. PMID:22833705
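For readers who want to reproduce a mixed-design ANOVA of this kind (one between-subjects factor, one within-subjects factor), a minimal sketch is shown below, assuming the pingouin package and using hypothetical long-format data in place of the study's minute-by-minute heart-rate records.

```python
# Sketch of a mixed-design ANOVA: between factor = experience, within factor =
# intensity band. Columns and values are simulated stand-ins.
import numpy as np
import pandas as pd
import pingouin as pg  # assumed dependency for the mixed-design ANOVA

rng = np.random.default_rng(3)
bands = ["light", "moderate", "vigorous"]
rows = []
for s in range(20):
    exp = "experienced" if s < 10 else "inexperienced"
    for b in bands:
        rows.append({"id": s, "experience": exp, "band": b,
                     "minutes": rng.normal(10, 2)})
df = pd.DataFrame(rows)

print(pg.mixed_anova(data=df, dv="minutes", within="band",
                     between="experience", subject="id"))
```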
Song, Minju; Shin, Yooseok; Park, Jeong-Won; Roh, Byoung-Duck
2015-02-01
This study was performed to determine whether the combined use of one-bottle self-etch adhesives and composite resins from the same manufacturer gives better bond strengths than combinations of adhesives and resins from different manufacturers. Twenty-five experimental micro-shear bond test groups were made from combinations of five dentin adhesives and five composite resins with extracted human molars stored in saline for 24 hr. Testing was performed using the wire-loop method and a universal testing machine. Bond strength data were statistically analyzed using two-way analysis of variance (ANOVA) and Tukey's post hoc test. Two-way ANOVA revealed significant differences for the factors of dentin adhesive and composite resin, and a significant interaction effect (p < 0.001). All combinations with Xeno V (Dentsply De Trey) and Clearfil S(3) Bond (Kuraray Dental) adhesives showed no significant differences in micro-shear bond strength, but the other adhesives showed significant differences depending on the composite resin (p < 0.05). In contrast to the other adhesives, Xeno V and BondForce (Tokuyama Dental) had higher bond strengths with the same manufacturer's composite resin than with other manufacturers' composite resins. Thus, combinations of adhesive and composite resin from the same manufacturer did not consistently show significantly higher bond strengths than mixed-manufacturer combinations.
McEvoy, Maureen Patricia; Williams, Marie T; Olds, Timothy Stephen
2010-01-01
Previous survey tools operationalising knowledge, attitudes or beliefs about evidence-based practice (EBP) have shortcomings in content, psychometric properties and target audience. This study developed and psychometrically assessed a self-report trans-professional questionnaire to describe an EBP profile. Sixty-six items were collated from existing EBP questionnaires and administered to 526 academics and students from health and non-health backgrounds. Principal component factor analysis revealed the presence of five factors (Relevance, Terminology, Confidence, Practice and Sympathy). Following expert panel review and pilot testing, the 58-item final questionnaire was disseminated to 105 subjects on two occasions. Test-retest and internal reliability were quantified using intra-class correlation coefficients (ICCs) and Cronbach's alpha, convergent validity against a commonly used EBP questionnaire by Pearson's correlation coefficient and discriminative validity via analysis of variance (ANOVA) based on exposure to EBP training. The final questionnaire demonstrated acceptable internal consistency (Cronbach's alpha 0.96), test-retest reliability (ICCs range 0.77-0.94) and convergent validity (Practice 0.66, Confidence 0.80 and Sympathy 0.54). Three factors (Relevance, Terminology and Confidence) distinguished EBP exposure groups (ANOVA p < 0.001-0.004). The evidence-based practice profile (EBP(2)) questionnaire is a reliable instrument with the ability to discriminate for three factors, between respondents with differing EBP exposures.
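A from-scratch illustration of two of the reliability statistics named above, using random placeholder scores rather than the EBP(2) data: Cronbach's alpha for internal consistency and a Pearson correlation between two administrations as a simple test-retest index.

```python
# Reliability sketch with simulated questionnaire scores (not the EBP(2) data).
import numpy as np

def cronbach_alpha(items):
    """Internal consistency; items is an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(4)
common = rng.normal(size=(105, 1))                   # shared trait raises consistency
scores = common + rng.normal(0, 0.8, size=(105, 58)) # 105 respondents x 58 items
print(round(cronbach_alpha(scores), 2))

# Test-retest reliability of the total score across two occasions (Pearson r here;
# an intraclass correlation would instead use ANOVA variance components).
test = scores.sum(axis=1)
retest = test + rng.normal(0, test.std() * 0.3, size=105)
print(round(np.corrcoef(test, retest)[0, 1], 2))
```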
NASA Astrophysics Data System (ADS)
Maleki, Mahnam; Farzin, Mahmud; Mosaddegh, Peiman
2018-06-01
In this study, the effect of adding high density polyethylene (HDPE) and calcium carbonate (CaCO3) to a constant amount of low density polyethylene/linear low density polyethylene (LDPE/LLDPE) matrix was investigated using different mechanical and thermal parameters. Analysis of variance (ANOVA) was then used to investigate the normal distribution of the obtained data. The sample containing 50 phr of HDPE and 7 phr of CaCO3 microparticles was determined to be the optimized sample. The effect of different process parameters, such as injection back pressure and cooling and retention times, on the mechanical and thermal properties of the optimized sample was investigated as well. To explain the effect of the number of recycling processes on the mechanical and thermal properties, two dominant degradation mechanisms were suggested: the first was the decrease of chain molecular weight and formation of short chains, and the latter was the formation of crosslinks and three-dimensional networks. Results indicated that with an increasing number of recycling processes, crystallinity, melting point, modulus, strength at the yield point and toughness first decreased in comparison with the pristine sample and then showed an ascending trend. Elongation at break generally increased with the number of recycling processes in comparison with the initial sample.
Impact of bleaching agents on water sorption and solubility of resin luting cements.
Torabi Ardakani, Mahshid; Atashkar, Berivan; Bagheri, Rafat; Burrow, Michael F
2017-08-01
The aim of the present study was to evaluate the effect of distilled water and home and office bleaching agents on the sorption and solubility of resin luting cements. A total of 18 disc-shaped specimens were prepared from each of four resin cements: G-CEM LinkAce, Panavia F, Rely X Unicem, and seT. Specimens were cured according to the manufacturers' instructions and randomly divided into three groups of six, where they were treated with either an office or home bleaching agent or immersed in distilled water (control). Water sorption and solubility were measured by weighing the specimens before and after immersion and desiccation. Data were analyzed using the Pearson correlation coefficient, two-way analysis of variance (ANOVA) and Tukey's test. There was a significant, positive correlation between sorption and solubility. Two-way ANOVA showed significant differences among all resin cements tested for both sorption and solubility. Water sorption and solubility of all cements were affected significantly by office bleaching, and even more by home bleaching agents. Sorption and solubility behavior of the studied cements were highly correlated and significantly affected by applying either office or home bleaching agents; seT showed the highest sorption and solubility, whereas Rely X Unicem revealed the lowest.
Local and systemic effect of transfection-reagent formulated DNA vectors on equine melanoma.
Mählmann, Kathrin; Feige, Karsten; Juhls, Christiane; Endmann, Anne; Schuberth, Hans-Joachim; Oswald, Detlef; Hellige, Maren; Doherr, Marcus; Cavalleri, Jessika-M V
2015-06-11
Equine melanoma has a high incidence in grey horses. Xenogenic DNA vaccination may represent a promising therapeutic approach against equine melanoma, as it successfully induced an immunological response in other species suffering from melanoma and in healthy horses. In a clinical study, twenty-seven grey, melanoma-bearing horses were assigned to three groups (n = 9) and vaccinated on days 1, 22, and 78 with DNA vectors encoding equine (eq) IL-12 and IL-18 alone or in combination with either human glycoprotein (hgp) 100 or human tyrosinase (htyr). Horses were vaccinated intramuscularly, and one selected melanoma was locally treated by intradermal peritumoral injection. Prior to each injection and on day 120, the sizes of up to nine melanoma lesions per horse were measured by caliper and ultrasound. Specific serum antibodies against hgp100 and htyr were measured using cell-based flow-cytometric assays. An Analysis of Variance (ANOVA) for repeated measurements was performed to identify statistically significant influences on the relative tumor volume. For post-hoc testing a Tukey-Kramer Multiple-Comparison Test was performed to compare the relative volumes on the different examination days. An ANOVA for repeated measurements was performed to analyse changes in body temperature over time. A one-way ANOVA was used to evaluate differences in body temperature between the groups. A p-value < 0.05 was considered significant for all statistical tests applied. In all groups, the relative tumor volume decreased significantly to 79.1 ± 26.91% by day 120 (p < 0.0001, Tukey-Kramer Multiple-Comparison Test). Affiliation to treatment group, local treatment and examination modality had no significant influence on the results (ANOVA for repeated measurements). Neither a cellular nor a humoral immune response directed against htyr or hgp100 was detected. Horses had an increased body temperature on the day after vaccination. This is the first clinical report on a systemic effect against equine melanoma following treatment with DNA vectors encoding eqIL12 and eqIL18 and formulated with a transfection reagent. Addition of DNA vectors encoding hgp100 or htyr, respectively, did not potentiate this effect.
Yoo, Ji Won; Lee, Dong Ryul; Cha, Young Joo; You, Sung Hyun
2017-01-01
The purpose of the present study was to compare the therapeutic effects of electromyography (EMG) biofeedback augmented by virtual reality (VR) and EMG biofeedback alone on triceps and biceps (T:B) muscle activity imbalance and elbow joint movement coordination during a reaching motor task in normal children and children with spastic cerebral palsy (CP). Eighteen children with spastic CP (2 females; mean ± standard deviation = 9.5 ± 1.96 years) and 8 normal children (3 females; mean ± standard deviation = 9.75 ± 2.55 years) were recruited from a local community center. All children with CP first underwent one intensive session of EMG feedback (30 minutes), followed by one session of EMG-VR feedback (30 minutes) after a 1-week washout period. Clinical tests included elbow extension range of motion (ROM), biceps muscle strength, and the box and block test. EMG T:B muscle activity imbalance and reaching movement acceleration coordination were concurrently determined by EMG and 3-axis accelerometer measurements, respectively. Independent t-tests and one-way repeated measures analysis of variance (ANOVA) were performed at p < 0.05. The one-way repeated measures ANOVA revealed significant effects on elbow extension ROM (p = 0.01), biceps muscle strength (p = 0.01), and the box and block test (p = 0.03), as well as on peak triceps muscle activity (p = 0.01). However, the one-way repeated measures ANOVA produced no statistical significance for the composite 3-dimensional movement acceleration coordination data (p = 0.12). The present study is the first clinical trial to demonstrate the superior benefits of EMG biofeedback when augmented by virtual reality exercise games in children with spastic CP. The augmented EMG and VR feedback produced better neuromuscular balance control in the elbow joint than EMG biofeedback alone.
2013-01-01
Background A strong association exists between the use of tamsulosin and the occurrence of intraoperative floppy iris syndrome. Several methods have been advocated to overcome the progressive intraoperative miosis. Our purpose was to investigate the effect of a mydriatic-cocktail-soaked cellulose sponge on perioperative pupil diameter in tamsulosin-treated patients undergoing elective cataract surgery. Methods Patients using tamsulosin were dilated either with a mydriatic-cocktail-soaked sponge (group 1) or with a conventional eyedrop regimen (group 2). Control patients not taking any α1 adrenergic receptor inhibitors were also dilated with the mydriatic sponge (group 3). In all groups oxybuprocaine 0.4%, cocaine 4%, tropicamide 1%, phenylephrine 10%, and diclofenac 0.1% along with chloramphenicol 0.5% were used preoperatively. Pupil diameter (mm) was measured preoperatively, after nucleus delivery, and before IOL implantation. Adverse effects associated with the use of the sponge, minor and major intraoperative complications, the use of iris retractors and operation time were recorded. Overall differences between groups were analyzed with a one-way analysis of variance (ANOVA); differences between groups in proportions were assessed by Fisher's exact test. Results Mean pupil diameter (mm) was, preoperatively: 7.52 ± 1.21, 7.30 ± 1.55 and 7.99 ± 0.96 (ANOVA: p = 0.079); after nucleus delivery: 6 ± 1.20, 6.29 ± 1.12 and 6.52 ± 0.81 (ANOVA: p = 0.123); before IOL implantation: 5.46 ± 1.06, 5.83 ± 1.09 and 6.17 ± 0.89 (ANOVA: p = 0.0291). No adverse effect related to sponge use was detected. The frequency of minor complications and iris hook use was similar in the two tamsulosin-treated groups. Operation time did not differ significantly among the three groups. Conclusion We found that using a mydriatic-cocktail-soaked wick, an alternative way to achieve intraoperative mydriasis for cataract surgery, was as effective and safe as the conventional repeated eyedrop regimen for tamsulosin-treated patients. Trial registration Current Controlled Trials ISRCTN37834752 PMID:24359572
Timming, Andrew R.; Re, Daniel; Perrett, David I.
2016-01-01
Using mixed design analysis of variance (ANOVA), this paper investigates the effects of a subtle simulated increase in adiposity on women's employment chances in the service sector. Employing a unique simulation of altering individuals' BMIs and the literature on "aesthetic labour", the study suggests that, especially for women, being heavier, but still within a healthy BMI, deleteriously impacts hireability ratings. The paper explores the gendered dimension of this prejudice by asking whether female employees at the upper end of a healthy BMI range are likely to be viewed more negatively than their overtly overweight male counterparts. The paper concludes by considering the implications of these findings. PMID:27603519
Statistical design of quantitative mass spectrometry-based proteomic experiments.
Oberg, Ann L; Vitek, Olga
2009-05-01
We review the fundamental principles of statistical experimental design, and their application to quantitative mass spectrometry-based proteomics. We focus on class comparison using Analysis of Variance (ANOVA), and discuss how randomization, replication and blocking help avoid systematic biases due to the experimental procedure, and help optimize our ability to detect true quantitative changes between groups. We also discuss the issues of pooling multiple biological specimens for a single mass analysis, and calculation of the number of replicates in a future study. When applicable, we emphasize the parallels between designing quantitative proteomic experiments and experiments with gene expression microarrays, and give examples from that area of research. We illustrate the discussion using theoretical considerations, and using real-data examples of profiling of disease.
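The value of blocking discussed above can be illustrated with a small simulated randomized complete block design, where each mass-spectrometry run is treated as a block and the group effect is tested after the block effect is removed; all numbers are synthetic.

```python
# Randomized complete block sketch: each MS run (block) contains one sample from
# every group; a two-way ANOVA (group + block) tests groups against a cleaner
# error term. Abundances are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(5)
groups = ["control", "disease"]
rows = []
for b in [f"run{i}" for i in range(8)]:            # each MS run is a block
    run_shift = rng.normal(0, 0.4)                 # systematic run-to-run bias
    for g in groups:
        rows.append({"group": g, "block": b,
                     "abundance": 10 + 0.5 * (g == "disease") + run_shift
                                  + rng.normal(0, 0.2)})
df = pd.DataFrame(rows)

# Blocking moves run-to-run variation out of the error term before testing groups.
print(sm.stats.anova_lm(ols("abundance ~ C(group) + C(block)", data=df).fit(), typ=2))
```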
Blood lead levels and risk factors in pregnant women from Durango, Mexico.
La-Llave-León, Osmel; Estrada-Martínez, Sergio; Manuel Salas-Pacheco, José; Peña-Elósegui, Rocío; Duarte-Sustaita, Jaime; Candelas Rangel, Jorge-Luís; García Vargas, Gonzalo
2011-01-01
In this cross-sectional study the authors determined blood lead levels (BLLs) and some risk factors for lead exposure in pregnant women. Two hundred ninety-nine pregnant women receiving medical attention from the Secretary of Health, State of Durango, Mexico, participated in this study between 2007 and 2008. BLLs were evaluated with graphite furnace atomic absorption spectrometry. The authors used Student's t test, one-way analysis of variance (ANOVA), and linear regression as statistical treatments. BLLs ranged from 0.36 to 23.6 μg/dL (mean = 2.79 μg/dL, standard deviation = 2.14). Multivariate analysis showed that the main predictors of BLLs were working in a place where lead is used, using lead-glazed pottery, and eating soil.
The Outlier Detection for Ordinal Data Using Scaling Technique of Regression Coefficients
NASA Astrophysics Data System (ADS)
Adnan, Arisman; Sugiarto, Sigit
2017-06-01
The aim of this study is to detect outliers by using the coefficients of Ordinal Logistic Regression (OLR) for the case of k category responses, where scores range from 1 (best) to 8 (worst). We detect them by using the sum of moduli of the ordinal regression coefficients calculated by the jackknife technique. This technique is improved by scaling the regression coefficients to their means. The R language was used on a set of ordinal data from a reference distribution. Furthermore, we compare this approach with studentised residual plots from the jackknife technique for ANOVA (Analysis of Variance) and OLR. This study shows that the jackknife technique, along with the proper scaling, can reveal outliers in ordinal regression reasonably well.
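A rough sketch of the jackknife-and-scale idea follows, with statsmodels' OrderedModel assumed as one possible ordinal-logit fitter (any ordinal regression routine could be substituted): each observation is left out in turn, the coefficients are re-estimated and scaled to their means, and the sum of moduli flags unusually influential cases.

```python
# Jackknifed, mean-scaled ordinal-regression coefficients as an outlier screen.
# Data are simulated; OrderedModel is an assumed stand-in for the OLR fitter.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(6)
n = 120
X = pd.DataFrame(rng.normal(size=(n, 3)), columns=["x1", "x2", "x3"])
latent = X.to_numpy() @ np.array([1.0, -0.5, 0.2]) + rng.logistic(size=n)
y = pd.qcut(latent, 8, labels=False) + 1              # ordinal scores 1..8

def slopes(Xs, ys):
    res = OrderedModel(ys, Xs, distr="logit").fit(method="bfgs", disp=False)
    return res.params[Xs.columns].to_numpy()           # keep only slope coefficients

jack = np.array([slopes(X.drop(i).reset_index(drop=True), np.delete(y, i))
                 for i in range(n)])
scaled = jack / jack.mean(axis=0)                      # scale coefficients to their means
influence = np.abs(scaled).sum(axis=1)                 # sum of moduli per left-out case
print(np.argsort(influence)[-5:])                      # most influential (candidate outliers)
```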
NASA Astrophysics Data System (ADS)
Rahman, Mohamed Abd; Yeakub Ali, Mohammad; Saddam Khairuddin, Amir
2017-03-01
This paper presents a study on the vibration and surface roughness of an Inconel 718 workpiece produced by micro end-milling using a Mikrotools Integrated Multi-Process machine tool DT-110, with control parameters: spindle speed (15000 rpm and 30000 rpm), feed rate (2 mm/min and 4 mm/min) and depth of cut (0.10 mm and 0.15 mm). The vibration was measured using a DYTRAN accelerometer and the average surface roughness Ra was measured using a Wyko NT1100. The analysis of variance (ANOVA), performed using Design Expert software, revealed that feed rate and depth of cut are the most significant factors for vibration, while for the average surface roughness Ra, spindle speed is the most significant factor.
NASA Astrophysics Data System (ADS)
Venkata Subbaiah, K.; Raju, Ch.; Suresh, Ch.
2017-08-01
The present study aims to compare conventional cutting inserts with wiper cutting inserts during the hard turning of AISI 4340 steel at different workpiece hardness levels. Type of insert, hardness, cutting speed, feed, and depth of cut are taken as process parameters. Taguchi's L18 orthogonal array was used to conduct the experimental tests. Parametric analysis was carried out to determine the influence of each process parameter on three important surface roughness characteristics (Ra, Rz, and Rt) and the material removal rate. Taguchi-based Grey Relational Analysis (GRA) was used to optimize the process parameters for the individual responses and the multi-response output. Additionally, analysis of variance (ANOVA) was applied to identify the most significant factor.
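A small numerical sketch of the grey relational analysis step, with a made-up response matrix: responses are normalised (smaller-the-better for roughness, larger-the-better for material removal rate), grey relational coefficients are computed, and their average grade ranks the runs.

```python
# Grey relational analysis of multi-response DOE results (synthetic responses).
import numpy as np

rng = np.random.default_rng(7)
runs = 18
Ra, Rz, Rt = rng.uniform(0.4, 1.6, runs), rng.uniform(2, 8, runs), rng.uniform(3, 10, runs)
MRR = rng.uniform(20, 90, runs)

def smaller_better(x):   # normalise so that 1 is best (minimum)
    return (x.max() - x) / (x.max() - x.min())

def larger_better(x):    # normalise so that 1 is best (maximum)
    return (x - x.min()) / (x.max() - x.min())

norm = np.column_stack([smaller_better(Ra), smaller_better(Rz),
                        smaller_better(Rt), larger_better(MRR)])
delta = 1.0 - norm                       # deviation from the ideal sequence
zeta = 0.5                               # distinguishing coefficient
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc.mean(axis=1)                 # grey relational grade per run
print(np.argsort(grade)[::-1][:3])       # best three parameter combinations
```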
A SAS® macro implementation of a multiple comparison post hoc test for a Kruskal-Wallis analysis.
Elliott, Alan C; Hynan, Linda S
2011-04-01
The Kruskal-Wallis (KW) nonparametric analysis of variance is often used instead of a standard one-way ANOVA when data are from a suspected non-normal population. The KW omnibus procedure tests for some differences between groups, but provides no specific post hoc pairwise comparisons. This paper provides a SAS® macro implementation of a multiple comparison test based on significant Kruskal-Wallis results from the SAS NPAR1WAY procedure. The implementation is designed for up to 20 groups at a user-specified alpha significance level. A Monte Carlo simulation compared this nonparametric procedure to commonly used parametric multiple comparison tests.
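A Python analogue of the idea, not the SAS macro itself: a Kruskal-Wallis omnibus test followed by Bonferroni-adjusted pairwise Mann-Whitney comparisons as a simple post hoc procedure.

```python
# Kruskal-Wallis omnibus test plus a simple pairwise post hoc step (simulated data).
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
groups = {"A": rng.normal(0, 1, 25),
          "B": rng.normal(0.8, 1, 25),
          "C": rng.normal(0, 1, 25)}

h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis H={h:.2f}, p={p:.4f}")

if p < 0.05:
    pairs = list(combinations(groups, 2))
    for g1, g2 in pairs:
        u, p_pair = stats.mannwhitneyu(groups[g1], groups[g2], alternative="two-sided")
        print(g1, g2, "Bonferroni-adjusted p =", min(1.0, p_pair * len(pairs)))
```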
Whole-animal metabolic rate is a repeatable trait: a meta-analysis.
Nespolo, Roberto F; Franco, Marcela
2007-06-01
Repeatability studies are gaining considerable interest among physiological ecologists, particularly for traits affected by high environmental/residual variance, such as whole-animal metabolic rate (MR). The original definition of repeatability, known as the intraclass correlation coefficient, is computed from the components of variance obtained in a one-way ANOVA on several individuals from which two or more measurements are taken. An alternative estimation of repeatability, popular among physiological ecologists, is the Pearson product-moment correlation between two consecutive measurements. However, despite the more than 30 studies reporting repeatability of MR, so far there is no definitive synthesis indicating: (1) whether repeatability changes in different types of animals; (2) whether some kinds of metabolism are more repeatable than others; and, most importantly, (3) whether metabolic rate is significantly repeatable. We performed a meta-analysis to address these questions, as well as to explore the historical trend in repeatability studies. Our results show that metabolic rate is significantly repeatable and that its effect size is not statistically affected by any of the mentioned factors (i.e., repeatability of MR does not change with species, type of metabolism, time between measurements, or number of individuals). The cumulative meta-analysis revealed that repeatability studies of MR have already reached an asymptotic effect size, with no further change in either its magnitude or variance (i.e., additional studies will not contribute significantly to the estimator). There was no evidence of strong publication bias.
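The ANOVA-based definition of repeatability mentioned above can be written out directly; the sketch below simulates two metabolic-rate measurements per individual and computes the intraclass correlation from the variance components.

```python
# Repeatability (intraclass correlation) from one-way ANOVA variance components,
# using simulated repeated metabolic-rate measurements.
import numpy as np

rng = np.random.default_rng(9)
n_ind, k = 30, 2
true = rng.normal(0, 1.0, n_ind)                           # among-individual variation
mr = true[:, None] + rng.normal(0, 0.7, (n_ind, k))        # k measurements per individual

grand = mr.mean()
ms_among = k * ((mr.mean(axis=1) - grand) ** 2).sum() / (n_ind - 1)
ms_within = ((mr - mr.mean(axis=1, keepdims=True)) ** 2).sum() / (n_ind * (k - 1))

s2_among = (ms_among - ms_within) / k        # among-individual variance component
repeatability = s2_among / (s2_among + ms_within)
print(round(repeatability, 3))               # intraclass correlation coefficient
```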
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seong W. Lee
During this reporting period, the literature survey, covering the gasifier temperature measurement literature, ultrasonic applications and their background in cleaning, and the spray coating process, was completed. The gasifier simulator (cold model) testing has been successfully conducted. Four factors (blower voltage, ultrasonic application, injection time intervals, particle weight) were considered as significant factors that affect the temperature measurement. Analysis of Variance (ANOVA) was applied to analyze the test data. The analysis shows that all four factors are significant for the temperature measurements in the gasifier simulator (cold model). The regression analysis for the case with the normalized room temperature shows that a linear model fits the temperature data with 82% accuracy (18% error); for the case without the normalized room temperature, the accuracy is 72.5% (27.5% error). The nonlinear regression analysis indicates a better fit than the linear regression: the nonlinear model's accuracy is 88.7% (11.3% error) for the normalized room temperature case. The hot model thermocouple sleeve design and fabrication are completed. The gasifier simulator (hot model) design and fabrication are completed. System tests of the gasifier simulator (hot model) have been conducted and some modifications have been made. Based on the system tests and results analysis, the gasifier simulator (hot model) has met the proposed design requirements and is ready for system testing. The ultrasonic cleaning method is under evaluation and will be further studied for the gasifier simulator (hot model) application. The progress of this project has been on schedule.
NASA Astrophysics Data System (ADS)
Sahu, Neelesh Kumar; Andhare, Atul B.; Andhale, Sandip; Raju Abraham, Roja
2018-04-01
The present work deals with prediction of surface roughness using cutting parameters along with in-process measured cutting force and tool vibration (acceleration) during turning of Ti-6Al-4V with cubic boron nitride (CBN) inserts. A full factorial design is used for the design of experiments, with cutting speed, feed rate and depth of cut as design variables. A prediction model for surface roughness is developed using response surface methodology (RSM) with cutting speed, feed rate, depth of cut, resultant cutting force and acceleration as control variables. Analysis of variance (ANOVA) is performed to find the significant terms in the model, and insignificant terms are removed after statistical testing using a backward elimination approach. The effect of each control variable on surface roughness is also studied. A prediction correlation coefficient (R²pred) of 99.4% shows that the model correctly explains the experimental results and behaves well even when factors are adjusted, added or eliminated. The model is validated with five fresh experiments using the measured force and acceleration values. The average absolute error between the RSM model and the experimentally measured surface roughness is found to be 10.2%. Additionally, an artificial neural network (ANN) model is also developed for prediction of surface roughness, and the prediction results of the modified regression model are compared with the ANN. Both the RSM model and the ANN (average absolute error 7.5%) predict roughness with more than 90% accuracy. From the results it is found that including cutting force and vibration in the prediction of surface roughness gives better predictions than considering only cutting parameters; the ANN also gives better predictions than the RSM model.
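A hedged sketch of the response-surface step, using simulated data and hypothetical candidate terms: a quadratic model of roughness is fitted and the least significant term is removed repeatedly, mirroring the backward elimination approach mentioned above.

```python
# Quadratic response-surface model of Ra with p-value-based backward elimination.
# The data frame and candidate terms are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 60
df = pd.DataFrame({"speed": rng.uniform(60, 120, n), "feed": rng.uniform(0.05, 0.2, n),
                   "doc": rng.uniform(0.2, 0.6, n), "force": rng.uniform(50, 250, n),
                   "accel": rng.uniform(0.5, 3.0, n)})
df["Ra"] = 0.2 + 4 * df["feed"] + 0.002 * df["force"] + rng.normal(0, 0.05, n)
df["feed_sq"] = df["feed"] ** 2                    # candidate quadratic term
df["speed_feed"] = df["speed"] * df["feed"]        # candidate interaction terms
df["force_accel"] = df["force"] * df["accel"]

terms = ["speed", "feed", "doc", "force", "accel", "feed_sq", "speed_feed", "force_accel"]
while True:
    model = smf.ols("Ra ~ " + " + ".join(terms), data=df).fit()
    pvals = model.pvalues.drop("Intercept")
    if pvals.max() <= 0.05 or len(terms) == 1:
        break
    terms.remove(pvals.idxmax())                   # drop the least significant term
print(model.params)
```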
Sources of uncertainty in hydrological climate impact assessment: a cross-scale study
NASA Astrophysics Data System (ADS)
Hattermann, F. F.; Vetter, T.; Breuer, L.; Su, Buda; Daggupati, P.; Donnelly, C.; Fekete, B.; Flörke, F.; Gosling, S. N.; Hoffmann, P.; Liersch, S.; Masaki, Y.; Motovilov, Y.; Müller, C.; Samaniego, L.; Stacke, T.; Wada, Y.; Yang, T.; Krysnaova, V.
2018-01-01
Climate change impacts on water availability and hydrological extremes are major concerns as regards the Sustainable Development Goals. Impacts on hydrology are normally investigated as part of a modelling chain, in which climate projections from multiple climate models are used as inputs to multiple impact models, under different greenhouse gas emissions scenarios, which result in different amounts of global temperature rise. While the goal is generally to investigate the relevance of changes in climate for the water cycle, water resources or hydrological extremes, it is often the case that variations in other components of the model chain obscure the effect of climate scenario variation. This is particularly important when assessing the impacts of relatively lower magnitudes of global warming, such as those associated with the aspirational goals of the Paris Agreement. In our study, we use ANOVA (analyses of variance) to allocate and quantify the main sources of uncertainty in the hydrological impact modelling chain. In turn we determine the statistical significance of different sources of uncertainty. We achieve this by using a set of five climate models and up to 13 hydrological models, for nine large scale river basins across the globe, under four emissions scenarios. The impact variable we consider in our analysis is daily river discharge. We analyze overall water availability and flow regime, including seasonality, high flows and low flows. Scaling effects are investigated by separately looking at discharge generated by global and regional hydrological models respectively. Finally, we compare our results with other recently published studies. We find that small differences in global temperature rise associated with some emissions scenarios have mostly significant impacts on river discharge—however, climate model related uncertainty is so large that it obscures the sensitivity of the hydrological system.
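The ANOVA-based allocation of uncertainty can be sketched as follows, with synthetic discharge changes indexed by climate model, hydrological model and scenario; the share of the total sum of squares attributable to each factor is reported.

```python
# ANOVA allocation of uncertainty across a simulated impact-modelling chain.
from itertools import product
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(11)
gcms = [f"GCM{i}" for i in range(5)]
hms = [f"HM{i}" for i in range(13)]
rcps = ["RCP2.6", "RCP4.5", "RCP6.0", "RCP8.5"]

gcm_eff = dict(zip(gcms, rng.normal(0, 10, len(gcms))))   # large climate-model spread
hm_eff = dict(zip(hms, rng.normal(0, 4, len(hms))))       # smaller impact-model spread
rcp_eff = dict(zip(rcps, [0.0, 3.0, 5.0, 9.0]))           # scenario signal

rows = [{"gcm": g, "hm": h, "rcp": r,
         "dQ": gcm_eff[g] + hm_eff[h] + rcp_eff[r] + rng.normal(0, 2)}
        for g, h, r in product(gcms, hms, rcps)]
df = pd.DataFrame(rows)

table = sm.stats.anova_lm(ols("dQ ~ C(gcm) + C(hm) + C(rcp)", data=df).fit(), typ=2)
print(table.assign(share=table["sum_sq"] / table["sum_sq"].sum()))
```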
Fitts, Douglas A
2017-09-21
The variable criteria sequential stopping rule (vcSSR) is an efficient way to add sample size to planned ANOVA tests while holding the observed rate of Type I errors, α_o, constant. The only difference from regular null hypothesis testing is that criteria for stopping the experiment are obtained from a table based on the desired power, rate of Type I errors, and beginning sample size. The vcSSR was developed using between-subjects ANOVAs, but it should work with p values from any type of F test. In the present study, α_o remained constant at the nominal level when using the previously published table of criteria with repeated measures designs with various numbers of treatments per subject, Type I error rates, values of ρ, and four different sample size models. New power curves allow researchers to select the optimal sample size model for a repeated measures experiment. The criteria held α_o constant either when used with a multiple correlation that varied the sample size model and the number of predictor variables, or when used with MANOVA with multiple groups and two levels of a within-subject variable at various levels of ρ. Although not recommended for use with χ² tests such as the Friedman rank ANOVA test, the vcSSR produces predictable results based on the relation between F and χ². Together, the data confirm the view that the vcSSR can be used to control Type I errors during sequential sampling with any t- or F-statistic rather than being restricted to certain ANOVA designs.
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical for the robustness of the structural model updating, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies for dealing with the prediction error variances on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
Beer consumers' perceptions of the health aspects of alcoholic beverages.
Wright, C A; Bruhn, C M; Heymann, H; Bamforth, C W
2008-01-01
Consumers' perceptions about alcohol are shaped by numerous factors. This environment includes advertisements, public service announcements, product labels, various health claims, and warnings about the dangers of alcohol consumption. This study used focus groups and questionnaires to examine consumers' perceptions of alcoholic beverages based on their nutritional value and health benefits. The overall purpose of this study was to examine beer consumers' perceptions of the health attributes and content of alcoholic beverages. Volunteers were surveyed at large commercial breweries in California, Missouri, and New Hampshire. The anonymous, written survey was presented in a self-explanatory format and was completed in 5 to 10 min. The content and style of the survey were derived from focus groups conducted in California. The data are separated by location, gender, and age (over or under 30). Parametric data on beverage rating were analyzed using analysis of variance (ANOVA), while the nonparametric data from True/False or Yes/No questions were analyzed using chi-square. Although statistically significant differences existed by survey location, gender, and age, general trends emerged across areas of inquiry. The findings indicate that a great opportunity exists to inform consumers about the health benefits derived from the moderate consumption of all alcoholic beverages.
Ergonomics and musculoskeletal disorder: as an occupational hazard in dentistry.
Gopinadh, Anne; Devi, Kolli Naga Neelima; Chiramana, Sandeep; Manne, Prakash; Sampath, Anche; Babu, Muvva Suresh
2013-03-01
Musculoskeletal disorders (MSDs) are commonly experienced in dentistry. The objective of this study was to determine the prevalence of ergonomics awareness and MSDs among dental professionals. A cross-sectional survey was conducted among 170 dentists of different specialties. The questionnaire gathered information regarding demographic details, MSDs, work duration, working status, awareness of ergonomics, etc. Data were analyzed using SPSS version 15.0. Student's t-test and analysis of variance (ANOVA) were used for comparison of mean scores. Stepwise multiple linear regression analysis was used to assess the independent variables that significantly influenced the variance in the dependent variable (pain). It was found that 73.9% of the participants reported musculoskeletal pain, and the most common painful sites were the neck and back. More than half of the participants, i.e. 232 (59.3%), were aware of correct ergonomic posture in dental practice. The percentage reporting pain increased significantly with age and working time. Among all specialties, prosthodontists were found to have the highest prevalence of MSDs. The appearance of musculoskeletal symptoms among dental professionals was quite common, suggesting that ergonomics should be covered in the educational system to reduce risks to dental practitioners.
Hoy, Madita; Strauß, Bernhard; Kröger, Christoph; Brenk-Franz, Katja
2018-06-22
The New Sexual Satisfaction Scale (NSSS) is an internationally established questionnaire for assessing sexual satisfaction. It is based on 2 subscales (ego-centered and partner- and sexual activity-centered sexual satisfaction). The aim of the study was to evaluate the German short version of the questionnaire (NSSS-SD) in a representative sample (N=2524). In addition, relationships between sexual satisfaction and sociodemographic factors (age, sex, education) and characteristics of partnership and sexuality (relationship satisfaction, coitus frequency, number of sexual partners) were examined. The internal consistency of the NSSS-SD was excellent (Cronbach's Alpha = 0.96). The 2-dimensional structure of the long version could not be confirmed for the short version. One factor could be extracted, which explains 68.94% of the variance. An analysis of variance (ANOVA) revealed statistically significant differences in sexual satisfaction with respect to age, education, relationship satisfaction and coitus frequency. Sex and number of sexual partners did not influence sexual satisfaction. The NSSS-SD is a reliable questionnaire of sexual satisfaction for sexually active individuals. For sexually inactive individuals, a change of the instruction or a visual analogue scale might be useful.
Balkam, Jane A Johnston; Cadwell, Karin; Fein, Sara B
2011-07-01
The purpose of this study was to evaluate the impact of the individual services offered via a workplace lactation program of one large public-sector employer on the duration of any breastfeeding and of exclusive breastfeeding. Exclusive breastfeeding was defined as feeding only human milk for milk feedings. A cross-sectional mailed survey approach was used. The sample (n = 128) consisted of women who had used at least one component of the lactation program in the past 3 years and who were still employed at the same organization when data were collected. Descriptive statistics included frequency distributions and contingency table analysis. Chi-square analysis was used for comparison of groups, and both analysis of variance (ANOVA) and univariate analysis of variance from a general linear model were used for comparison of means. The survey respondents were primarily older, white, married, well-educated, high-income women. More of the women who received each lactation program service were exclusively breastfeeding at 6 months of infant age in all categories of services, with significant differences in the categories of telephone support and return-to-work consultation. After adjusting for race and work status, logistic regression analysis showed that the number of services received was positively related to exclusive breastfeeding at 6 months and that participation in a return-to-work consultation was positively related to any breastfeeding at 6 months. The study demonstrated that the workplace lactation program had a positive impact on duration of breastfeeding for the women who participated. Participation in the telephone support and return-to-work consultation services, and the total number of services used, were related to longer duration of exclusive and/or any breastfeeding.
Mohammadi, Mahboobeh; Alavi, Mousa; Bahrami, Masoud; Zandieh, Zahra
2017-01-01
Promotion of self-care ability among older people is an essential means to help maintain and improve their health. However, the role of spiritual and social health has not yet been considered in detail in the context of self-care ability among the elderly. The aim of this study was to assess the relationship between spiritual and social health and the self-care ability of older people referred to community health centers in Isfahan. In this cross-sectional correlational study, 200 people aged 60 years and older, referred to healthcare centers in 2016, were recruited through a convenience sampling method. Data were collected using a four-part tool comprising (a) demographics, (b) Ellison and Palotzin's spiritual well-being scale, (c) Kees's social health scale, and (d) Soderhamn's self-care ability scale for the elderly; data were analyzed with descriptive and inferential statistics (independent t-test, analysis of variance - ANOVA, Pearson's correlation coefficient, and multiple regression analysis) in SPSS 16. Findings showed that the entered predictor variables accounted for 41% of the total variance (R²) in self-care ability in the model (p < 0.001, F(3, 199) = 46.02). Two of the three predictor variables, religious well-being and social health, significantly predicted the self-care ability of older people. The results of this study emphasize the relationship between the spiritual and social health of elderly people and their ability to self-care. It is therefore recommended to focus service resources on improving social and spiritual health to improve self-care ability in elderly people.
Contrast agent comparison for three-dimensional micro-CT angiography: A cadaveric study.
Kingston, Mitchell J; Perriman, Diana M; Neeman, Teresa; Smith, Paul N; Webb, Alexandra L
2016-07-01
Barium sulfate and lead oxide contrast media are frequently used for cadaver-based angiography studies. These contrast media have not previously been compared to determine which is optimal for the visualisation and measurement of blood vessels. In this study, the lower limb vessels of 16 embalmed Wistar rats, and four sets of cannulae of known diameter, were injected with one of three different contrast agents (barium sulfate and resin, barium sulfate and gelatin, and lead oxide combined with milk powder). All were then scanned using micro-computed tomography (CT) angiography and 3-D reconstructions were generated. The number of branching generations of the rat lower limb vessels was counted and compared between the contrast agents using ANOVA. The diameters of the contrast-filled cannulae were measured and used to calculate the accuracy of the measurements by comparing the bias and variance of the estimates. Intra- and inter-observer reliability were calculated using intra-class correlation coefficients. There was no significant difference (mean difference [MD] 0.05; MD 95% confidence interval [CI] -0.83 to 0.93) in the number of branching generations between barium sulfate-resin and lead oxide-milk powder. Barium sulfate-resin demonstrated less bias and less variance of the estimates (MD 0.03; standard deviation [SD] 1.96 mm) compared to lead oxide-milk powder (MD 0.11; SD 1.96 mm) for measurements of contrast-filled cannulae scanned at high resolution. Barium sulfate-resin proved to be more accurate than lead oxide-milk powder for high resolution micro-CT scans and is preferred due to its non-toxicity. This technique could be applied to any embalmed specimen model.
Manoj, Smita Sara; Cherian, K P; Chitre, Vidya; Aras, Meena
2013-12-01
There is much discussion in the dental literature regarding the superiority of one impression technique over another using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregum Soft™ (3M ESPE) and the four impression techniques used were (1) a monophase impression technique using medium body impression material; (2) a one-step double-mix impression technique using heavy body and light body impression materials simultaneously; (3) a two-step double-mix impression technique using a cellophane spacer (heavy body material used as a preliminary impression to create a wash space with a cellophane spacer, followed by the use of light body material); and (4) a matrix impression using a matrix of polyether occlusal registration material, in which the matrix is loaded with heavy body material followed by a pick-up impression in medium body material. For each technique, thirty impressions were made of a stainless steel master model that contained three complete crown abutment preparations and was used as the positive control. Accuracy was assessed by measuring eight dimensions (mesiodistal, faciolingual and inter-abutment) on stone dies poured from impressions of the master model. A two-tailed t test was carried out to test the significance of the differences in distances between the master model and the stone models. One-way analysis of variance (ANOVA) was used for multiple group comparison, followed by Bonferroni's test for pairwise comparison. Accuracy was tested at α = 0.05. In general, the polyether impression material produced stone dies that were smaller, except for the dies produced from the one-step double-mix impression technique. The ANOVA revealed a highly significant difference for each dimension measured (except for the inter-abutment distance between the first and second die) between any two groups of stone models obtained from the four impression techniques. Pairwise comparison for each measurement did not reveal any significant difference (except for the faciolingual distance of the third die) between the casts produced using the two-step double-mix impression technique and the matrix impression system. The two-step double-mix impression technique produced stone dies that showed the least dimensional variation. During fabrication of a cast restoration, laboratory procedures should compensate not only for the cement thickness, but also for the increase or decrease in die dimensions.
Structural changes and out-of-sample prediction of realized range-based variance in the stock market
NASA Astrophysics Data System (ADS)
Gong, Xu; Lin, Boqiang
2018-03-01
This paper aims to examine the effects of structural changes on forecasting the realized range-based variance in the stock market. Considering structural changes in variance in the stock market, we develop the HAR-RRV-SC model on the basis of the HAR-RRV model. Subsequently, the HAR-RRV and HAR-RRV-SC models are used to forecast the realized range-based variance of the S&P 500 Index. We find that there are many structural changes in variance in the U.S. stock market, and the period after the financial crisis contains more structural change points than the period before the financial crisis. The out-of-sample results show that the HAR-RRV-SC model significantly outperforms the HAR-BV model when they are employed to forecast the 1-day, 1-week, and 1-month realized range-based variances, which means that structural changes can improve out-of-sample prediction of realized range-based variance. The out-of-sample results remain robust across the alternative rolling fixed window, the alternative threshold value in the ICSS algorithm, and the alternative benchmark models. More importantly, we believe that considering structural changes can help improve the out-of-sample performances of most other existing HAR-RRV-type models in addition to the models used in this paper.
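For readers unfamiliar with the HAR family, the sketch below shows the plain HAR regression structure on a synthetic realized range-based variance series; the structural-change ("SC") dummies developed in the paper are not reproduced, and all data are placeholders:

```python
# Sketch of a plain HAR-type regression for realized range-based variance:
# RRV_{t} is regressed on daily, weekly (5-day) and monthly (22-day)
# averages of past RRV. The structural-change dummies (the "SC" part)
# would enter as extra regressors; they are omitted here for brevity.
import numpy as np

rng = np.random.default_rng(1)
rrv = np.abs(rng.normal(0.0, 1.0, 1000)) ** 2  # synthetic RRV series

def lagged_mean(x, window, t):
    return x[t - window:t].mean()

rows, target = [], []
for t in range(22, len(rrv)):
    rows.append([1.0,                     # intercept
                 rrv[t - 1],              # daily component
                 lagged_mean(rrv, 5, t),  # weekly component
                 lagged_mean(rrv, 22, t)  # monthly component
                 ])
    target.append(rrv[t])

X, y = np.asarray(rows), np.asarray(target)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("HAR coefficients (const, daily, weekly, monthly):", beta.round(3))
```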
Modeling rainfall-runoff relationship using multivariate GARCH model
NASA Astrophysics Data System (ADS)
Modarres, R.; Ouarda, T. B. M. J.
2013-08-01
The traditional hydrologic time series approaches are used for modeling, simulating and forecasting the conditional mean of hydrologic variables but neglect their time-varying variance, or second-order moment. This paper introduces the multivariate Generalized Autoregressive Conditional Heteroscedasticity (MGARCH) modeling approach to show how the variance-covariance relationship between hydrologic variables varies in time. These approaches are also useful to estimate the dynamic conditional correlation between hydrologic variables. To illustrate the novelty and usefulness of MGARCH models in hydrology, two major types of MGARCH models, the bivariate diagonal VECH and constant conditional correlation (CCC) models, are applied to show the variance-covariance structure and dynamic correlation in a rainfall-runoff process. The bivariate diagonal VECH-GARCH(1,1) and CCC-GARCH(1,1) models indicated both short-run and long-run persistency in the conditional variance-covariance matrix of the rainfall-runoff process. The conditional variance of rainfall appears to have a stronger persistency, especially long-run persistency, than the conditional variance of streamflow, which shows a short-lived drastic increasing pattern and a stronger short-run persistency. The conditional covariance and conditional correlation coefficients have different features for each bivariate rainfall-runoff process with different degrees of stationarity and dynamic nonlinearity. The spatial and temporal pattern of variance-covariance features may reflect the signature of different physical and hydrological variables such as drainage area, topography, soil moisture and groundwater fluctuations on the strength, stationarity and nonlinearity of the conditional variance-covariance for a rainfall-runoff process.
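As a simplified, univariate analogue of the conditional-variance modeling described above, the sketch below implements the GARCH(1,1) variance recursion with assumed (not estimated) parameter values; the bivariate VECH and CCC models add cross-equation covariance terms to this basic recursion:

```python
# Univariate GARCH(1,1) conditional-variance recursion, shown as a
# simplified analogue of the conditional (co)variance modeling above.
# Parameter values (omega, alpha, beta) are illustrative, not estimated.
import numpy as np

def garch11_variance(residuals, omega=0.05, alpha=0.10, beta=0.85):
    """Return the conditional variance series h_t implied by GARCH(1,1):
    h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}."""
    h = np.empty_like(residuals, dtype=float)
    h[0] = residuals.var()  # initialise at the unconditional sample variance
    for t in range(1, len(residuals)):
        h[t] = omega + alpha * residuals[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(2)
e = rng.normal(size=500)          # e.g. rainfall or runoff residuals
h = garch11_variance(e)
print("mean conditional variance:", h.mean().round(3))
```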
Vitezica, Zulma G; Varona, Luis; Legarra, Andres
2013-12-01
Genomic evaluation models can fit additive and dominant SNP effects. Under quantitative genetics theory, additive or "breeding" values of individuals are generated by substitution effects, which involve both "biological" additive and dominant effects of the markers. Dominance deviations include only a portion of the biological dominant effects of the markers. Additive variance includes variation due to the additive and dominant effects of the markers. We describe a matrix of dominant genomic relationships across individuals, D, which is similar to the G matrix used in genomic best linear unbiased prediction. This matrix can be used in a mixed-model context for genomic evaluations or to estimate dominant and additive variances in the population. From the "genotypic" value of individuals, an alternative parameterization defines additive and dominance as the parts attributable to the additive and dominant effect of the markers. This approach underestimates the additive genetic variance and overestimates the dominance variance. Transforming the variances from one model into the other is trivial if the distribution of allelic frequencies is known. We illustrate these results with mouse data (four traits, 1884 mice, and 10,946 markers) and simulated data (2100 individuals and 10,000 markers). Variance components were estimated correctly in the model, considering breeding values and dominance deviations. For the model considering genotypic values, the inclusion of dominant effects biased the estimate of additive variance. Genomic models were more accurate for the estimation of variance components than their pedigree-based counterparts.
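The sketch below illustrates how marker-based additive (G) and dominance (D) relationship matrices can be built from a 0/1/2 genotype matrix. The G matrix follows the common VanRaden-type construction; the dominance coding and scaling shown are one widely used parameterization and are assumptions of this sketch, not a transcription of the paper's equations:

```python
# Sketch of genomic additive (G) and dominance (D) relationship matrices
# from a 0/1/2 SNP genotype matrix M (individuals x markers). G follows
# the widely used VanRaden-type construction; the dominance coding below
# (-2q^2, 2pq, -2p^2 for the three genotypes) and its scaling are one
# common parameterization and are assumptions of this sketch.
import numpy as np

rng = np.random.default_rng(3)
M = rng.integers(0, 3, size=(50, 500)).astype(float)   # toy genotypes
p = M.mean(axis=0) / 2.0                                # allele frequencies
q = 1.0 - p

# Additive relationship matrix G
Z = M - 2.0 * p
G = Z @ Z.T / np.sum(2.0 * p * q)

# Dominance relationship matrix D
W = np.where(M == 2, -2.0 * q**2, np.where(M == 1, 2.0 * p * q, -2.0 * p**2))
D = W @ W.T / np.sum((2.0 * p * q) ** 2)

print("G diagonal mean:", np.round(G.diagonal().mean(), 3))
print("D diagonal mean:", np.round(D.diagonal().mean(), 3))
```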
Idowu, Sunday Olakunle; Adeyemo, Morenikeji Ambali; Ogbonna, Udochi Ihechiluru
2009-01-01
Background Determination of lipophilicity as a tool for predicting pharmacokinetic molecular behavior is limited by the predictive power of available experimental models of the biomembrane. There is current interest, therefore, in models that accurately simulate biomembrane structure and function. A novel bio-device, a lipid thin film, was engineered as an alternative approach to the previous use of hydrocarbon thin films in biomembrane modeling. Results Retention behavior of four structurally diverse model compounds, 4-amino-3,5-dinitrobenzoic acid (ADBA), naproxen (NPX), nabumetone (NBT) and halofantrine (HF), representing four broad classes of varying molecular polarity and aqueous solubility behavior, was investigated on the lipid film, liquid paraffin, and octadecylsilane layers. Computational, thermodynamic and image analyses confirm the peculiar amphiphilic configuration of the lipid film. The effects of solute type, layer type and their interaction on retention behavior were delineated by two-way analysis of variance (ANOVA) and quantitative structure-property relationships (QSPR). Validation of the lipid film was implemented by statistical correlation of a unique chromatographic metric with Log P (octanol/water) and several calculated molecular descriptors of bulk and solubility properties. Conclusion The lipid film constitutes a biomimetic artificial biological interface capable of both hydrophobic and specific electrostatic interactions. It captures the hydrophilic-lipophilic balance (HLB) in the determination of the lipophilicity of molecules, unlike the pure hydrocarbon film of the prior art. The potential and performance of the bio-device give promise of its utility as a predictive analytical tool for early-stage drug discovery science. PMID:19735551
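A sketch of the two-way ANOVA (solute type x layer type, with interaction) used to delineate effects on retention behavior; the data frame and column names below are synthetic placeholders:

```python
# Sketch of a two-way ANOVA (solute type x layer type, with interaction)
# on retention data. The data frame is synthetic and the column names
# are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(4)
solutes = ["ADBA", "NPX", "NBT", "HF"]
layers = ["lipid_film", "paraffin", "octadecylsilane"]
records = [
    {"solute": s, "layer": l, "retention": rng.normal(loc=i + j, scale=0.2)}
    for i, s in enumerate(solutes)
    for j, l in enumerate(layers)
    for _ in range(5)  # five replicate measurements per cell
]
df = pd.DataFrame(records)

model = ols("retention ~ C(solute) * C(layer)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```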
NASA Astrophysics Data System (ADS)
Singh, Jaswinder; Chauhan, Amit
2017-12-01
This study investigates the mechanical behavior of aluminum 2024 matrix composites reinforced with silicon carbide and red mud particles. The hybrid reinforcements were successfully incorporated into the alloy matrix using the stir casting process. An orthogonal array based on Taguchi's technique was used to acquire experimental data for mechanical properties (hardness and impact energy) of the composites. The analysis of variance (ANOVA) and response surface methodology (RSM) techniques were used to evaluate the influence of test parameters (reinforcement ratio, particle size and ageing time). The morphological analysis of the surfaces (fractured during impact tests) was conducted to identify the failure mechanism. Finally, a confirmation experiment was performed to check the adequacy of the developed model. The results indicate that the ageing time is the most effective parameter as far as the hardness of the hybrid composites is concerned. It has also been revealed that red mud wt.% has maximum influence on the impact energy characteristics of the hybrid composites. The study concludes that Al2024/SiC/red mud hybrid composites possess superior mechanical performance in comparison to pure alloy under optimized conditions.
Attitudes towards and knowledge about homosexuality among medical students in Zagreb.
Grabovac, Igor; Abramović, Marija; Komlenović, Gordana; Milosević, Milan; Mustajbegović, Jadranka
2014-03-01
The aim of the study was to investigate whether students in their fifth and sixth years of medical school in Zagreb have homophobic attitudes and to assess their knowledge about homosexuality. A survey was conducted among fifth- and sixth-year medical students during the 2009/2010 academic year. The survey consisted of general demographic data, two validated questionnaires (the "Knowledge about Homosexuality Questionnaire" and the "Heterosexual Attitudes towards Homosexuality Scale"), and questions about personal experiences created for this study. The mean knowledge score was 14.8 out of 20. Furthermore, gender differences in attitudes were observed, indicating less negative attitudes among the female participants. The regression model was significant (ANOVA: sum of squares = 38.065; df = 17; mean square = 2.239; F = 10.6; p < 0.001), with 38% of variance explained. The significant predictor variables associated with lower attitude scores were female gender (beta = -0.14, p = 0.015), sixth year of study (beta = -0.16, p = 0.009) and greater knowledge about homosexuality (beta = -0.48, p < 0.001). Negative attitudes are present among the students; therefore, educational efforts should be included in the curricula of medical schools to diminish negative perceptions of the lesbian, gay, bisexual and transgender community.
NASA Astrophysics Data System (ADS)
Ibrahim, M. Z.; Alrozi, R.; Zubir, N. A.; Bashah, N. A.; Ali, S. A. Md; Ibrahim, N.
2018-05-01
Oxidation processes such as heterogeneous Fenton and Fenton-like reactions are considered effective and efficient methods for the treatment of dyes by degradation. In this study, the degradation of Acid Orange 7 (AO7) was investigated using Fe3-xCoxO4 as a heterogeneous Fenton-like catalyst. Response surface methodology (RSM) was used to optimize the operational conditions and to assess the interactions between two or more parameters. The parameters studied were catalyst dosage (X1), pH (X2) and H2O2 concentration (X3) with respect to AO7 degradation. Based on analysis of variance (ANOVA), the derived quadratic polynomial model was significant, and the predicted values matched the experimental values with a regression coefficient of R2 = 0.9399. The optimum condition for AO7 degradation was obtained at a catalyst dosage of 0.84 g/L, pH of 3 and H2O2 concentration of 46.70 mM, which resulted in 86.30% removal of AO7 dye. These findings present new insights into the influence of operational parameters in the heterogeneous Fenton-like oxidation of AO7 using the Fe3-xCoxO4 catalyst.
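A sketch of fitting a second-order (quadratic) response surface of the kind described above, with catalyst dosage, pH and H2O2 concentration as factors; the design points and responses are synthetic placeholders, not the study's data:

```python
# Sketch of fitting a quadratic (second-order) response surface for dye
# removal as a function of catalyst dosage (X1), pH (X2) and H2O2
# concentration (X3). Design points and responses are synthetic.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

rng = np.random.default_rng(5)
n = 20
df = pd.DataFrame({
    "X1": rng.uniform(0.2, 1.5, n),    # catalyst dosage, g/L
    "X2": rng.uniform(2.0, 6.0, n),    # pH
    "X3": rng.uniform(10.0, 80.0, n),  # H2O2, mM
})
# Synthetic response with a quadratic optimum plus noise.
df["removal"] = (86 - 30 * (df.X1 - 0.84) ** 2 - 5 * (df.X2 - 3) ** 2
                 - 0.01 * (df.X3 - 47) ** 2 + rng.normal(0, 1, n))

quadratic = ("removal ~ X1 + X2 + X3 + I(X1**2) + I(X2**2) + I(X3**2)"
             " + X1:X2 + X1:X3 + X2:X3")
fit = ols(quadratic, data=df).fit()
print("R-squared of the fitted surface:", round(fit.rsquared, 3))
```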
Skromanis, Sarah; Cooling, Nick; Rodgers, Bryan; Purton, Terry; Fan, Frances; Bridgman, Heather; Harris, Keith; Presser, Jennifer; Mond, Jonathan
2018-06-01
International students comprise an increasingly larger proportion of higher education students globally. Empirical evidence about the health and well-being of these students is, however, limited. We sought to examine the health and well-being of international students, primarily from Asian countries, attending the University of Tasmania, Australia, using domestic students as a comparison group. Ethics approval was given to invite (via email) all currently enrolled students to participate in the study by completing a pilot-tested, online survey. The survey was completed by 382 international students (response rate = 8.9%) and 1013 domestic students (9.2%). Independent samples t-tests, analysis of variance (ANOVA) and chi-square tests were used for bivariate comparisons between international and domestic students, and between subgroups of international students. Regression models were used to examine the associations between student status (international vs. domestic) and health outcomes, controlling for demographic and enrolment variables. International students, particularly male students, were found to be at increased risk of several adverse health outcomes while also being less likely to seek help for mental health and related problems. The findings indicate the need for accessible, targeted, culturally-sensitive health promotion and early intervention programs.
Active commuting among K-12 educators: a study examining walking and biking to work.
Bopp, Melissa; Hastmann, Tanis J; Norton, Alyssa N
2013-01-01
Walking and biking to work (active commuting; AC) is associated with many health benefits, though rates of AC remain low in the US. K-12 educators represent a significant portion of the workforce, and employee health and associated costs may have significant economic impact. Therefore, the purpose of this study was to examine the current rates of AC and factors associated with AC among K-12 educators. A volunteer sample of K-12 educators (n = 437) was recruited to participate in an online survey. Participants responded about AC patterns and social ecological influences on AC (individual, interpersonal, institutional, community, and environmental factors). t-tests and ANOVAs examined trends in AC, and Pearson correlations examined the relationship between AC and dependent variables. Multiple regression analysis determined the relative influence of individual, interpersonal, institutional, community, and environmental levels on AC. Participants actively commuted 0.51 ± 1.93 times/week. There were several individual, interpersonal, institutional, community, and environmental factors significantly related to AC. The full model explained 60.8% of the variance in AC behavior. This study provides insight into the factors that determine K-12 educators' mode of commute and informs employee wellness efforts among this population.
Mollah, Mohammad Manir Hossain; Jamal, Rahman; Mokhtar, Norfilza Mohd; Harun, Roslan; Mollah, Md. Nurul Haque
2015-01-01
Background Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. Results The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Conclusion Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and proposed) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large-sample cases in the presence of more than 50% outlying genes. The proposed method also exhibited better performance than the other methods for m > 2 conditions with multiple patterns of expression, where BetaEB has not been extended to this condition. Therefore, the proposed approach would be more suitable and reliable on average for the identification of DE genes between two or more conditions with multiple patterns of expression. PMID:26413858
Jongerling, Joran; Laurenceau, Jean-Philippe; Hamaker, Ellen L
2015-01-01
In this article we consider a multilevel first-order autoregressive [AR(1)] model with random intercepts, random autoregression, and random innovation variance (i.e., the level 1 residual variance). Including random innovation variance is an important extension of the multilevel AR(1) model for two reasons. First, between-person differences in innovation variance are important from a substantive point of view, in that they capture differences in sensitivity and/or exposure to unmeasured internal and external factors that influence the process. Second, using simulation methods we show that modeling the innovation variance as fixed across individuals, when it should be modeled as a random effect, leads to biased parameter estimates. Additionally, we use simulation methods to compare maximum likelihood estimation to Bayesian estimation of the multilevel AR(1) model and investigate the trade-off between the number of individuals and the number of time points. We provide an empirical illustration by applying the extended multilevel AR(1) model to daily positive affect ratings from 89 married women over the course of 42 consecutive days.
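A simulation sketch of the extended multilevel AR(1) model described above, in which each person receives a random intercept, a random autoregressive coefficient and a random innovation variance; all parameter values are illustrative assumptions:

```python
# Simulation sketch of a multilevel AR(1) model with random intercepts,
# random autoregression and random (log-normal) innovation variance.
# Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(6)
n_persons, n_days = 89, 42

mu = rng.normal(3.0, 0.5, n_persons)                         # random intercepts
phi = np.clip(rng.normal(0.3, 0.1, n_persons), -0.9, 0.9)    # random AR(1) coefficients
sigma2 = np.exp(rng.normal(-0.5, 0.4, n_persons))            # random innovation variances

y = np.zeros((n_persons, n_days))
y[:, 0] = mu
for t in range(1, n_days):
    innovation = rng.normal(0.0, np.sqrt(sigma2))
    y[:, t] = mu + phi * (y[:, t - 1] - mu) + innovation

print("example person-level innovation variances:", sigma2[:3].round(3))
```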
Sirisathit, Issarawas
2018-01-01
Objective This study evaluated the marginal accuracy of full-arch zirconia restorations fabricated from two digital computer-aided design and computer-aided manufacturing (CAD-CAM) systems (Trios-3 and CS3500) in comparison to conventional cast metal restorations. Materials and methods A stainless steel model comprising two canine and two molar abutments was used as a master model for full-arch reconstruction. The canine and molar abutments were machined in a cylindrical shape with 5° taper and chamfer margin. The CAD-CAM systems based on the digital approach were used to construct the full-arch zirconia restorations. The conventional cast metal restorations were fabricated according to a conventional lost-wax technique using nickel–chromium alloys. Ten restorations were fabricated from each system. The marginal accuracy of each restoration was determined at four locations for each abutment. An analysis of variance (ANOVA) and Tukey’s honest significant difference (HSD) multiple comparisons were used to determine statistically significant differences at the 95% confidence interval. Results The mean values of marginal accuracy of restorations fabricated from conventional casting, Trios-3, and CS3500 were 48.59±4.16 μm, 53.50±5.66 μm, and 56.47±5.52 μm, respectively. ANOVA indicated a significant difference in marginal fit of restorations among the systems. The marginal discrepancy of the zirconia restoration fabricated from the CS3500 system was significantly larger than that fabricated from the Trios-3 (3Shape) system (p<0.05). Tukey’s HSD multiple comparisons indicated that the zirconia restorations fabricated from either CS3500 or Trios-3 demonstrated a significantly larger marginal gap than the conventional cast metal restoration (p<0.05). Conclusion Full-arch zirconia restorations fabricated from the Trios-3 showed better marginal fit than those from the CS3500, although both were slightly less accurate than the conventional cast restoration. However, the marginal discrepancies of restorations produced by both CAD-CAM systems were within the clinically acceptable range and sufficiently precise to be recommended for the construction of full-arch zirconia restorations. PMID:29497334
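A sketch of the one-way ANOVA with Tukey HSD follow-up used to compare marginal gaps, simulating group data around the means and standard deviations reported above:

```python
# Sketch of a one-way ANOVA with Tukey HSD follow-up on marginal gaps
# (micrometres), simulated around the group means reported above.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)
gaps = np.concatenate([
    rng.normal(48.59, 4.16, 10),   # conventional casting
    rng.normal(53.50, 5.66, 10),   # Trios-3
    rng.normal(56.47, 5.52, 10),   # CS3500
])
groups = np.repeat(["casting", "Trios-3", "CS3500"], 10)

f_stat, p_value = stats.f_oneway(gaps[:10], gaps[10:20], gaps[20:])
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
print(pairwise_tukeyhsd(gaps, groups, alpha=0.05))
```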
Structural analysis of muscles elevating the hyolaryngeal complex.
Pearson, William G; Langmore, Susan E; Yu, Louis B; Zumwalt, Ann C
2012-12-01
A critical event of pharyngeal swallowing is the elevation of the hyolaryngeal complex to open the upper esophageal sphincter. Current swallowing theory assigns this function to the submental and thyrohyoid muscles. However, the attachments of the long pharyngeal muscles indicate that they could contribute to this function, yet their role is uninvestigated in humans. In addition, there is evidence the posterior digastric and stylohyoid contribute to hyoid elevation. A cadaver model was used to document the structural properties of muscles. These properties were used to model muscle groups as force vectors and analyze their potential for hyolaryngeal elevation. Vector magnitude was determined using physiological cross-sectional areas (PCSAs) of muscles calculated from structural properties of muscle taken from 12 hemisected cadaver specimens. Vector direction (lines of action) was calculated from the three-dimensional coordinates of muscle attachment sites. Unit force vectors in the superior direction of submental, suprahyoid (which includes the submental muscles), long pharyngeal, and thyrohyoid muscles were derived and compared by an analysis of variance (ANOVA) to document each muscle's potential contribution to hyolaryngeal elevation. An ANOVA with Tukey HSD post hoc analysis of unit force vectors showed no statistically significant difference between the submental (0.92 ± 0.24 cm(2)) and long pharyngeal (0.73 ± 0.20 cm(2)) muscles. Both demonstrated greater potential to elevate the hyolaryngeal complex than the thyrohyoid (0.49 ± 0.18 cm(2)), with P < 0.01 and P < 0.05, respectively. The suprahyoid muscles (1.52 ± 0.35 cm(2)) demonstrated the greatest potential to elevate the hyolaryngeal complex: greater than both the long pharyngeal muscles (P < 0.01) and the thyrohyoid (P < 0.01). The submental and thyrohyoid muscles by convention are thought to elevate the hyolaryngeal complex. This study demonstrates that structurally the long pharyngeal muscles have similar potential to contribute to this critical function, with the suprahyoid muscles having the greatest potential. If verified by functional data, these findings would amend current swallowing theory.
Knaak, Stephanie; Szeto, Andrew Ch; Fitch, Kathryn; Modgill, Geeta; Patten, Scott
2015-01-01
Stigmatization among healthcare providers towards mental illnesses can present obstacles to effective caregiving. This may be especially the case for borderline personality disorder (BPD). Our study measured the impact of a three-hour workshop on BPD and dialectical behavior therapy (DBT) on attitudes and behavioral intentions of healthcare providers towards persons with BPD as well as mental illness more generally. The intervention involved educational and social contact elements, all focused on BPD. The study employed a pre-post design. We adopted the approach of measuring stigmatization towards persons with BPD in one half of the attendees and stigmatization towards persons with a mental illness in the other half. The stigma-assessment tool was the Opening Minds Scale for Healthcare Providers (OMS-HC). Two versions of the scale were employed - the original version and a 'BPD-specific' version. A 2x2 mixed model factorial analysis of variance (ANOVA) was conducted on the dependent variable, stigma score. The between-subject factor was survey type. The within-subject factor was time. The mixed-model ANOVA produced a significant between-subject main effect for survey type, with stigma towards persons with BPD being greater than that towards persons with a mental illness more generally. A significant within-subject main effect for time was also observed, with participants showing significant improvement in stigma scores at Time 2. The main effects were subsumed by a significant interaction between time and survey type. Bonferroni post hoc tests indicated significant improvement in attitudes towards BPD and mental illness more generally, although there was a greater improvement in attitudes towards BPD. Although effectiveness cannot be conclusively demonstrated with the current research design, results are encouraging that the intervention was successful at improving healthcare provider attitudes and behavioral intentions towards persons with BPD. The results further suggest that anti-stigma interventions effective at combating stigma against a specific disorder may also have positive generalizable effects towards a broader set of mental illnesses, albeit to a lesser degree.
NASA Astrophysics Data System (ADS)
Rexer, Moritz; Hirt, Christian
2015-09-01
Classical degree variance models (such as Kaula's rule or the Tscherning-Rapp model) often rely on low-resolution gravity data and so are subject to extrapolation when used to describe the decay of the gravity field at short spatial scales. This paper presents a new degree variance model based on the recently published GGMplus near-global land areas 220 m resolution gravity maps (Geophys Res Lett 40(16):4279-4283, 2013). We investigate and use a 2D-DFT (discrete Fourier transform) approach to transform GGMplus gravity grids into degree variances. The method is described in detail and its approximation errors are studied using closed-loop experiments. Focus is placed on tiling, azimuth averaging, and windowing effects in the 2D-DFT method and on analytical fitting of degree variances. Approximation errors of the 2D-DFT procedure on the (spherical harmonic) degree variance are found to be at the 10-20 % level. The importance of the reference surface (sphere, ellipsoid or topography) of the gravity data for correct interpretation of degree variance spectra is highlighted. The effect of the underlying mass arrangement (spherical or ellipsoidal approximation) on the degree variances is found to be crucial at short spatial scales. A rule-of-thumb for transformation of spectra between spherical and ellipsoidal approximation is derived. Application of the 2D-DFT on GGMplus gravity maps yields a new degree variance model to degree 90,000. The model is supported by GRACE, GOCE, EGM2008 and forward-modelled gravity at 3 billion land points over all land areas within the SRTM data coverage and provides gravity signal variances at the surface of the topography. The model yields omission errors of 9 mGal for gravity (1.5 cm for geoid effects) at scales of 10 km, 4 mGal (1 mm) at 2-km scales, and 2 mGal (0.2 mm) at 1-km scales.
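The sketch below shows only the core 2D-DFT step assumed from the description above: transform a gravity grid and azimuthally average its power into an isotropic spectrum. Tiling, windowing and the conversion to spherical-harmonic degree variances are not reproduced:

```python
# Sketch of the core 2D-DFT step: take the discrete Fourier transform of a
# (detrended, windowed) gravity grid and azimuthally average its power into
# an isotropic 1-D spectrum. The grid below is a random placeholder.
import numpy as np

rng = np.random.default_rng(8)
grid = rng.normal(size=(256, 256))          # placeholder gravity anomalies

power = np.abs(np.fft.fftshift(np.fft.fft2(grid))) ** 2

# Radial wavenumber of every FFT cell, measured from the spectrum centre.
ny, nx = power.shape
ky, kx = np.indices(power.shape)
r = np.hypot(ky - ny // 2, kx - nx // 2).astype(int)

# Azimuthal (ring) average: mean power at each integer radius.
radial_power = (np.bincount(r.ravel(), weights=power.ravel())
                / np.bincount(r.ravel()))
print("isotropic power spectrum length:", radial_power.size)
```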
Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G
2009-09-01
The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
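A sketch of how the orthogonal Legendre covariates used in such random regression models can be built: days in milk are rescaled to [-1, 1] and evaluated with Legendre polynomials up to fourth order (some implementations additionally normalize the polynomials; that scaling is omitted here):

```python
# Sketch of building Legendre covariates for a random regression model:
# days in milk are rescaled to [-1, 1] and column j of the design matrix
# holds the j-th Legendre polynomial evaluated at each test day.
import numpy as np
from numpy.polynomial import legendre

days_in_milk = np.arange(5, 306)             # toy range of test days
x = (2.0 * (days_in_milk - days_in_milk.min())
     / (days_in_milk.max() - days_in_milk.min()) - 1.0)

order = 4
Phi = np.column_stack([
    legendre.legval(x, np.eye(order + 1)[j]) for j in range(order + 1)
])
print("covariate matrix shape:", Phi.shape)  # (n_days, order + 1)
```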
A stochastic hybrid model for pricing forward-start variance swaps
NASA Astrophysics Data System (ADS)
Roslan, Teh Raihana Nazirah
2017-11-01
Recently, market players have been exposed to an astounding increase in the trading volume of variance swaps. In this paper, the forward-start nature of a variance swap is inspected, where hybridizations of equity and interest rate models are used to evaluate the price of discretely-sampled forward-start variance swaps. The Heston stochastic volatility model is extended to incorporate the dynamics of the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. This is essential since previous studies on variance swaps mainly focused on instantaneous-start variance swaps without considering interest rate effects. This hybrid model produces an efficient semi-closed form pricing formula through the development of forward characteristic functions. The performance of this formula is investigated via simulations to demonstrate how the formula performs for different sampling times and against the real market scenario. A comparison with the Monte Carlo simulation, which was set as our main reference point, reveals that our pricing formula attains almost the same precision in a shorter execution time.
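As a rough stand-in for the Monte Carlo reference point mentioned above, the sketch below prices a discretely sampled (spot-start) variance swap under Heston dynamics with a constant interest rate; the CIR rate process and the forward-start feature are omitted, and all parameters are illustrative:

```python
# Monte Carlo sketch: fair strike of a discretely sampled variance swap
# under Heston stochastic volatility with a constant interest rate.
# The CIR rate dynamics and the forward-start feature are not reproduced,
# and all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(9)

s0, v0, kappa, theta, sigma_v, rho, r = 100.0, 0.04, 2.0, 0.04, 0.3, -0.7, 0.02
T, n_obs, n_paths = 1.0, 252, 20000
dt = T / n_obs

s = np.full(n_paths, s0)
v = np.full(n_paths, v0)
sum_sq_returns = np.zeros(n_paths)

for _ in range(n_obs):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    v_pos = np.maximum(v, 0.0)                      # full-truncation Euler
    s_new = s * np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v = v + kappa * (theta - v_pos) * dt + sigma_v * np.sqrt(v_pos * dt) * z2
    sum_sq_returns += np.log(s_new / s) ** 2
    s = s_new

# Fair strike: expected annualised realised variance, often quoted in
# "variance points" (variance x 10^4).
fair_strike = (sum_sq_returns / T).mean() * 1e4
print(f"Monte Carlo fair variance strike: {fair_strike:.1f} variance points")
```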
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which has parameter values for the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
Bernard R. Parresol
1993-01-01
In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
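A minimal sketch of the general idea, assuming a power-of-the-predictor variance function rather than the specific error variance model developed in the paper; weighted least squares replaces ordinary least squares once the variance function is specified:

```python
# Minimal sketch of weighting under multiplicative heteroscedasticity:
# the error variance is modeled as proportional to a power of the
# predictor (an assumed variance function), and weighted least squares
# is used in place of ordinary least squares.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
x = rng.uniform(5, 50, 200)                       # e.g. tree diameters
y = 2.0 + 0.8 * x + rng.normal(0, 0.05 * x**1.5)  # error SD grows with x

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()

# Assumed variance function: Var(e_i) proportional to x_i^3
# (i.e. SD proportional to x^1.5); weights are the reciprocal variances.
weights = 1.0 / x**3
wls_fit = sm.WLS(y, X, weights=weights).fit()

print("OLS slope SE:", round(ols_fit.bse[1], 4))
print("WLS slope SE:", round(wls_fit.bse[1], 4))
```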
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
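To illustrate the partial-variance quantity that FAST estimates, the sketch below computes a first-order (main-effect) sensitivity index by brute-force Monte Carlo conditioning for a toy model; this is not the FAST periodic-sampling scheme itself:

```python
# Illustration of the partial-variance idea behind FAST: a brute-force
# Monte Carlo estimate of the first-order sensitivity index
# S_i = Var(E[Y | X_i]) / Var(Y) for a toy model with an interaction term.
import numpy as np

rng = np.random.default_rng(11)

def model(x1, x2, x3):
    # Toy model with unequal main effects and an x1*x3 interaction.
    return 2.0 * x1 + 0.5 * x2 + x1 * x3

def first_order_index(i, n_outer=200, n_inner=2000):
    """Estimate S_i by conditioning on X_i over many fixed values."""
    cond_means = []
    for _ in range(n_outer):
        xi = rng.uniform(-1, 1)                     # fixed value of X_i
        others = rng.uniform(-1, 1, size=(n_inner, 3))
        others[:, i] = xi
        cond_means.append(model(*others.T).mean())  # E[Y | X_i = xi]
    full = rng.uniform(-1, 1, size=(100000, 3))
    total_var = model(*full.T).var()
    return np.var(cond_means) / total_var

for i in range(3):
    print(f"S_{i + 1} ~ {first_order_index(i):.2f}")
```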
Narayanan, Sarath Kumar; Cohen, Ralph Clinton; Shun, Albert
2014-06-01
Minimal access techniques have transformed the way pediatric surgery is practiced. Due to various constraints, surgical residency programs have not been able to tutor adequate training skills in the routine setting. The advent of new technology and methods in minimally invasive surgery (MIS), has similarly contributed to the need for systematic skills' training in a safe, simulated environment. To enable the training of the proper technique among pediatric surgery trainees, we have advanced a porcine non-survival model for endoscopic surgery. The technical advancements over the past 3 years and a subjective validation of the porcine model from 114 participating trainees using a standard questionnaire and a 5-point Likert scale have been described here. Mean attitude scores and analysis of variance (ANOVA) were used for statistical analysis of the data. Almost all trainees agreed or strongly agreed that the animal-based model was appropriate (98.35%) and also acknowledged that such workshops provided adequate practical experience before attempting on human subjects (96.6%). Mean attitude score for respondents was 19.08 (SD 3.4, range 4-20). Attitude scores showed no statistical association with years of experience or the level of seniority, indicating a positive attitude among all groups of respondents. Structured porcine-based MIS training should be an integral part of skill acquisition for pediatric surgery trainees and the experience gained can be transferred into clinical practice. We advocate that laparoscopic training should begin in a controlled workshop setting before procedures are attempted on human patients.
Morningness-Eveningness, Chronotypes and Health-Impairing Behaviors in Adolescents
Urbán, Róbert; Magyaródi, Tímea; Rigó, Adrien
2013-01-01
The impact of diurnal preferences on health-related behaviors is acknowledged but relatively understudied. The aim of this study was threefold: (1) testing the measurement model of the Hungarian version of the reduced Horne-Östberg Morningness-Eveningness Questionnaire (Hungarian Version of the rMEQ); (2) estimating chronotypes and their prevalence; and (3) analyzing the relationship between morningness-eveningness/chronotypes and health-impairing behaviors, including smoking, alcohol use, and physical inactivity in adolescents. Self-reported data on the Hungarian version of the rMEQ, smoking, alcohol use, and physical inactivity obtained from Hungarian high-school students (ninth grade, N = 2565) were analyzed with confirmatory factor analysis (CFA), latent profile analysis (LPA), structural equation modeling, and analysis of variance (ANOVA). A one-factor model of morningness was supported, which included rising time, peak time, retiring time, and self-evaluation of chronotype. Morningness was significantly associated with a lower likelihood of smoking and alcohol use, and also with a lower level of physical inactivity. Using LPA, the authors identified three chronotypes: intermediate type (50.7%), morning type (30.5%), and evening type (18.8%). Compared to the evening-type participants, intermediate- and morning-type participants were significantly less likely to experiment with smoking, to smoke nondaily, and to smoke daily. Moreover, both intermediate- and morning-type students reported less lifetime alcohol use and less physical inactivity than evening-type students. Chronopsychological research can help to understand the relatively unexplored determinants of health-impairing behaviors in adolescents associated with chronotype. PMID:21452919
Molan, Amirarsalan Mehrara; Hummer, Joseph E
2017-12-01
Interchanges have high crash rates and large impacts on traffic operations. The main objective of this research is to analyze the safety performance of two new interchanges, the synchronized interchange and the Milwaukee B interchange. The primary method of study was microscopic simulation modeling using the Surrogate Safety Assessment Model (SSAM) program to estimate the quantity and type of conflicting interactions in each interchange. A comprehensive series of simulation scenarios were considered to include different conditions of traffic volumes, traffic turning ratios, traffic distribution, and heavy vehicles percentages. Afterward, outcomes were analyzed with two-way Analyses of Variance (ANOVAs) to compare the mean values of conflicts. Based on the results, the diverging diamond interchange (DDI) and Milwaukee B were the safest designs regarding observed conflicting interactions in the simulation models; however, the DDI did not seem as reliable from the viewpoint of wrong way movements. The new synchronized interchange, the parclo B, and the Milwaukee A (an existing interchange in Milwaukee, WI) showed the same rate of conflicts. The synchronized interchange may be advantageous because it was estimated to reduce the severity of crashes due to fewer crossing conflicts, a lower speed of conflicts, and a higher time to collision. The conventional diamond was the most dangerous design based on our measures. The DDI and the synchronized interchange look like plausible substitutes for reconstructing an unsafe diamond interchange due to the similarities in their required space. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Mofavvaz, Shirin; Sohrabi, Mahmoud Reza; Nezamzadeh-Ejhieh, Alireza
2017-07-01
In the present study, artificial neural networks (ANNs) and least squares support vector machines (LS-SVM), as intelligent methods based on absorption spectra in the range of 230-300 nm, have been used for the determination of antihistamine decongestant contents. In the first step, one type of network (feed-forward back-propagation) from the artificial neural network family with two different training algorithms, Levenberg-Marquardt (LM) and gradient descent with momentum and adaptive learning rate back-propagation (GDX), was employed and its performance was evaluated. The performance of the LM algorithm was better than that of the GDX algorithm. In the second step, a radial basis network was utilized and the results were compared with those of the previous network. In the last step, the other intelligent method, the least squares support vector machine, was used to construct the antihistamine decongestant prediction model and the results were compared with those of the two aforementioned networks. The values of the statistical parameters mean square error (MSE), regression coefficient (R2), correlation coefficient (r), mean recovery (%) and relative standard deviation (RSD) were used for selecting the best model among these methods. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level applied to the results of the suggested and reference methods showed that there were no significant differences between them.
Comparing Mapped Plot Estimators
Paul C. Van Deusen
2006-01-01
Two alternative derivations of estimators for mean and variance from mapped plots are compared by considering the models that support the estimators and by simulation. It turns out that both models lead to the same estimator for the mean but lead to very different variance estimators. The variance estimators based on the least valid model assumptions are shown to...
Khani, Rouhollah; Sobhani, Sara; Beyki, Mostafa Hossein
2016-03-15
2-Hydroxyethylammonium sulfonate immobilized on γ-Fe2O3 nanoparticles (γ-Fe2O3-2-HEAS) was synthesized by the reaction of n-butylsulfonated γ-Fe2O3 with ethanolamine. The structure of the resulting product was confirmed by Fourier transform infrared (FT-IR) spectra, X-ray diffraction (XRD) spectrometry, transmission electron microscopy (TEM), thermogravimetric analysis (TGA), elemental analysis, N2 adsorption-desorption and vibrating sample magnetometer (VSM) techniques. The supported ionic liquid on γ-Fe2O3 was applied as a new and green adsorbent to remove Pb(II) from aqueous solution. The effects of adsorption parameters such as pH, shaking time and amount of the adsorbent were investigated using a two-level, three-factor (2³) full factorial central composite design with the help of Design-Expert (Stat-Ease Inc., version 9.0) software. The significance of independent variables and their interactions was tested by means of analysis of variance (ANOVA) with 95% confidence limits (α = 0.05). The thermodynamic parameters of the adsorption process were estimated; it was found that the process is exothermic and spontaneous. The Langmuir and Freundlich models were also applied to evaluate the removal efficiency, and the data correlated well with the Freundlich model. Copyright © 2015 Elsevier Inc. All rights reserved.
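A sketch of fitting the Langmuir and Freundlich isotherm models mentioned above to equilibrium adsorption data; the data points below are hypothetical placeholders:

```python
# Sketch of fitting Langmuir and Freundlich isotherms to equilibrium
# adsorption data (Ce: equilibrium concentration, qe: adsorbed amount).
# The data points are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])    # mg/L
qe = np.array([12.0, 19.0, 30.0, 46.0, 70.0, 105.0])   # mg/g

def langmuir(c, qmax, kl):
    return qmax * kl * c / (1.0 + kl * c)

def freundlich(c, kf, n):
    return kf * c ** (1.0 / n)

for name, func, p0 in [("Langmuir", langmuir, (150.0, 0.01)),
                       ("Freundlich", freundlich, (5.0, 1.5))]:
    params, _ = curve_fit(func, Ce, qe, p0=p0)
    residuals = qe - func(Ce, *params)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
    print(f"{name}: parameters = {np.round(params, 3)}, R^2 = {r2:.3f}")
```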
Using food as a reward: An examination of parental reward practices.
Roberts, Lindsey; Marx, Jenna M; Musher-Eizenman, Dara R
2018-01-01
Eating patterns and taste preferences are often established early in life. Many studies have examined how parental feeding practices may affect children's outcomes, including food intake and preference. The current study focused on a common food parenting practice, using food as a reward, and used Latent Profile Analysis (LPA) to examine whether mothers (n = 376) and fathers (n = 117) of children ages 2.8 to 7.5 (M = 4.7; SD = 1.1) grouped into profiles (i.e., subgroups) based on how they use food as a reward. The 4-class model was the best-fitting LPA model, with resulting classes based on both the frequency and type of reward used. Classes were: infrequent reward (33%), tangible reward (21%), food reward (27%), and frequent reward (19%). The current study also explored whether children's eating styles (emotional overeating, food fussiness, food responsiveness, and satiety responsiveness) and parenting style (Authoritative, Authoritarian, and Permissive) varied by reward profile. Analyses of Variance (ANOVA) revealed that the four profiles differed significantly for all outcome variables except satiety responsiveness. It appears that the use of tangible and food-based rewards has important implications in food parenting. More research is needed to better understand how the different rewarding practices affect additional child outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Panić, Sanja; Rakić, Dušan; Guzsvány, Valéria; Kiss, Erne; Boskovic, Goran; Kónya, Zoltán; Kukovecz, Ákos
2015-12-01
The aim of this work was to evaluate significant factors affecting the thiamethoxam adsorption efficiency using oxidized multi-walled carbon nanotubes (MWCNTs) as adsorbents. Five factors (initial solution concentration of thiamethoxam in water, temperature, solution pH, MWCNTs weight and contact time) were investigated using a 2^(5-1) fractional factorial design (resolution V). The obtained linear model was statistically tested using analysis of variance (ANOVA), and the analysis of residuals was used to investigate the model validity. It was observed that the factors and their second-order interactions affecting the thiamethoxam removal can be divided into three groups: very important, moderately important and insignificant ones. The initial solution concentration was found to be the most influential parameter in thiamethoxam adsorption from water. Optimization of the factor levels was carried out by minimizing those parameters which are usually critical in real life: the temperature (energy), contact time (money) and weight of MWCNTs (potential health hazard), in order to maximize the adsorbed amount of the pollutant. The results of maximal adsorbed thiamethoxam amount in both real and optimized experiments indicate that among the minimized parameters the adsorption time is the one that makes the largest difference. The results of this study indicate that fractional factorial design is a very useful tool for screening a large number of parameters and reducing the number of adsorption experiments. Copyright © 2015 Elsevier Ltd. All rights reserved.
Maliki, Raphiou; Sinsin, Brice; Floquet, Anne; Cornet, Denis; Malezieux, Eric; Vernier, Philippe
2016-01-01
Traditional yam-based cropping systems (shifting cultivation, slash-and-burn, and short fallow) often result in deforestation and soil nutrient depletion. The objective of this study was to determine the impact of yam-based systems with herbaceous legumes on dry matter (DM) production (tubers, shoots), nutrients removed and recycled, and the soil fertility changes. We compared smallholders' traditional systems (1-year fallow of Andropogon gayanus-yam rotation, maize-yam rotation) with yam-based systems integrated herbaceous legumes (Aeschynomene histrix/maize intercropping-yam rotation, Mucuna pruriens/maize intercropping-yam rotation). The experiment was conducted during the 2002 and 2004 cropping seasons with 32 farmers, eight in each site. For each of them, a randomized complete block design with four treatments and four replicates was carried out using a partial nested model with five factors: Year, Replicate, Farmer, Site, and Treatment. Analysis of variance (ANOVA) using the general linear model (GLM) procedure was applied to the dry matter (DM) production (tubers, shoots), nutrient contribution to the systems, and soil properties at depths 0-10 and 10-20 cm. DM removed and recycled, total N, P, and K recycled or removed, and soil chemical properties (SOM, N, P, K, and pH water) were significantly improved on yam-based systems with legumes in comparison with traditional systems.
Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.
Weaver, Bruce; Black, Ryan A
2015-06-01
Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seong W. Lee
The project entitled "Innovative Instrumentation and Analysis of the Temperature Measurement for High Temperature Gasification" was successfully completed by the Principal Investigator, Dr. S. Lee, and his research team in the Center for Advanced Energy Systems and Environmental Control Technologies at Morgan State University. The major results and outcomes were presented in semi-annual progress reports and annual project review meetings/presentations. Specifically, a literature survey covering gasifier temperature measurement, ultrasonic cleaning applications and spray coating processes, together with gasifier simulator (cold model) testing, was conducted during the first year. The results show that four factors (blower voltage, ultrasonic application, injection time intervals, particle weight) were identified as significant factors affecting the temperature measurement. In the second year, the gasifier simulator (hot model) was designed and fabricated, and systematic tests on the hot model were completed to examine the significant factors affecting temperature measurement. Advanced industrial analytic methods such as statistics-based experimental design, analysis of variance (ANOVA) and regression methods were applied in the hot model tests. The results show that operational parameters (i.e., air flow rate, water flow rate, fine dust particle amount, ammonia addition) had a significant impact on the temperature measurement inside the gasifier simulator. Experimental design and ANOVA proved to be efficient ways to design and analyze the experiments. The results show that the air flow rate and fine dust particle amount have statistically significant effects on the temperature measurement. The regression model provided the functional relation between the temperature and these factors with substantial accuracy. In the last year of the project period, the ultrasonic and subsonic cleaning methods and coating materials were tested and applied to thermocouple cleaning according to the proposed approach. Different frequencies, application times and power levels of the ultrasonic/subsonic output were tested. The results show that the ultrasonic approach is one of the best methods to clean the thermocouple tips during routine operation of the gasifier. In addition, a real-time data acquisition system was designed and applied in the experiments. This advanced instrumentation provided efficient and accurate data acquisition for the project. In summary, the project provided useful information on the ultrasonic cleaning method applied to thermocouple tip cleaning. Temperature measurement could be much improved in both accuracy and duration provided that the proposed approach is widely used in gasification facilities.
Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph
2016-01-01
Hybrids are broadly used in plant breeding and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and test their prediction accuracy for genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, showing an absence of singularities, lower AIC, higher goodness-of-fit and accuracy, and smaller MSE. However, the AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, the DD variance could not be separated from the dominance variance and null estimates were obtained for the AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760
A two step Bayesian approach for genomic prediction of breeding values.
Shariati, Mohammad M; Sørensen, Peter; Janss, Luc
2012-05-21
In genomic models that assign an individual variance to each marker, the contribution of one marker to the posterior distribution of the marker variance is only one degree of freedom (df), which introduces many variance parameters with only little information per variance parameter. A better alternative could be to form clusters of markers with similar effects, where markers in a cluster have a common variance. Therefore, the influence of each marker group of size p on the posterior distribution of the marker variances will be p df. The simulated data from the 15th QTL-MAS workshop were analyzed such that SNP markers were ranked based on their effects and markers with similar estimated effects were grouped together. In step 1, all markers with minor allele frequency more than 0.01 were included in a SNP-BLUP prediction model. In step 2, markers were ranked based on their estimated variance on the trait in step 1 and assigned to groups of 150 markers, each group sharing a common variance. In further analyses, subsets of the 1500 and 450 markers with the largest effects in step 2 were kept in the prediction model. Grouping markers outperformed the SNP-BLUP model in terms of accuracy of predicted breeding values. However, the accuracies of predicted breeding values were lower than those of Bayesian methods with marker-specific variances. Grouping markers is less flexible than allowing each marker to have a specific marker variance but, by grouping, the power to estimate marker variances increases. Prior knowledge of the genetic architecture of the trait is necessary for clustering markers and appropriate prior parameterization.
Experimental study on behaviors of dielectric elastomer based on acrylonitrile butadiene rubber
NASA Astrophysics Data System (ADS)
An, Kuangjun; Chuc, Nguyen Huu; Kwon, Hyeok Yong; Phuc, Vuong Hong; Koo, Jachoon; Lee, Youngkwan; Nam, Jaedo; Choi, Hyouk Ryeol
2010-04-01
Previously, a dielectric elastomer based on acrylonitrile butadiene rubber (NBR), called the synthetic elastomer, was reported by our group. Its characteristics can be modified according to performance requirements, which makes it applicable to a wide variety of applications. In this paper, we address the effects of additives and vulcanization conditions on the overall performance of the synthetic elastomer. Factors affecting performance are identified, e.g., additives such as dioctyl phthalate (DOP) and barium titanate (BaTiO3), and vulcanization conditions such as dicumyl peroxide (DCP) content and cross-linking time. We also describe how the performance can be optimized using the design of experiments (DOE) technique, and the experimental results are analyzed by analysis of variance (ANOVA).
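A hedged illustration of the DOE-plus-ANOVA workflow described above, using a hypothetical two-level factorial design and simulated responses rather than the authors' measured data (the factor names and the "strain" response are assumptions for demonstration only):

```python
import itertools
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Full two-level factorial design over four coded factors
design = pd.DataFrame(list(itertools.product([-1, 1], repeat=4)),
                      columns=["DOP", "BaTiO3", "DCP", "cure_time"])

# Simulated response (e.g. actuation strain); real values would come from experiments
rng = np.random.default_rng(42)
design["strain"] = (5.0 + 1.5 * design["DOP"] - 0.8 * design["DCP"]
                    + 0.5 * design["DOP"] * design["BaTiO3"]
                    + rng.normal(0, 0.3, len(design)))

# ANOVA table shows which additives / cure conditions significantly affect the response
model = ols("strain ~ DOP + BaTiO3 + DCP + cure_time + DOP:BaTiO3", data=design).fit()
print(anova_lm(model, typ=2))
```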
NASA Astrophysics Data System (ADS)
Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
This study simulates the optimisation of injection moulding process parameters using Autodesk Moldflow Insight (AMI) software. Four process parameters, namely melt temperature, mould temperature, packing pressure, and cooling time, are varied to analyse the warpage of the part, which is made of polypropylene (PP). The combinations of process parameters are analysed using Analysis of Variance (ANOVA), and the optimised values are obtained using Response Surface Methodology (RSM). RSM and a Genetic Algorithm (GA) are applied in Design Expert software to minimise warpage. The results show that the warpage is reduced by using RSM and GA.
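A sketch of the RSM-plus-evolutionary-optimisation idea, with hypothetical design points and simulated warpage values standing in for the Moldflow outputs, and scipy's differential evolution standing in for the GA step performed in Design Expert:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from scipy.optimize import differential_evolution

# Hypothetical design points: melt temperature (deg C) and packing pressure (MPa)
X = np.array([[t, p] for t in (200, 220, 240) for p in (60, 80, 100)], dtype=float)

# Simulated warpage responses; in the study these come from Moldflow simulations
rng = np.random.default_rng(7)
warpage = (0.5 + 0.002 * (X[:, 0] - 225) ** 2 + 0.001 * (X[:, 1] - 85) ** 2
           + rng.normal(0, 0.01, len(X)))

# Second-order (quadratic) response surface model
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, warpage)

# Evolutionary search for the parameter combination minimising predicted warpage
result = differential_evolution(lambda x: float(rsm.predict(x.reshape(1, -1))[0]),
                                bounds=[(200, 240), (60, 100)], seed=7)
print("optimum (melt T, packing P):", result.x, "predicted warpage:", result.fun)
```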
Brown, Angus M
2010-04-01
The objective of the method described in this paper is to develop a spreadsheet template for comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F, the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means.
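A minimal scripted counterpart to the spreadsheet workflow, assuming Tukey's HSD as the multiple-comparison method (one of several such methods the paper describes) and illustrative data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Three illustrative samples
rng = np.random.default_rng(3)
groups = {"A": rng.normal(10, 2, 12), "B": rng.normal(12, 2, 12), "C": rng.normal(15, 2, 12)}

# Omnibus one-way ANOVA returns F and its p-value
F, p = stats.f_oneway(*groups.values())
print(f"F = {F:.2f}, p = {p:.4f}")

# If the null hypothesis is rejected, follow up with pairwise comparisons;
# Tukey's HSD reports 95% confidence intervals for each difference of means
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```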
Relationships between compassion fatigue, burnout, and turnover intention in Korean hospital nurses.
Sung, Kiwol; Seo, Youngsook; Kim, Jee Hee
2012-12-01
This study aimed to identify relationships between compassion fatigue, burnout, and turnover intention in Korean hospital nurses. In total, 142 hospital nurses were surveyed. Data on compassion fatigue, burnout, and turnover intention were collected with a questionnaire between May 2011 and September 2011. Data analysis was performed using the PASW 19.0 program and included one-way ANOVA, independent t-tests, Pearson's correlation coefficients, and hierarchical regression analysis. The study detected positive correlations between compassion fatigue and burnout (r=.37, p<.001) and between compassion fatigue and turnover intention (r=.55, p<.001). Compassion fatigue accounted for 29.6% of the variance in turnover intention among Korean hospital nurses. The results indicate that it is necessary to reduce compassion fatigue and turnover intention among Korean hospital nurses.
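For readers unfamiliar with how a variance-explained figure like 29.6% arises from hierarchical regression, here is a hedged sketch with entirely hypothetical variables and data (the control variables are assumptions, not the study's actual blocks):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Entirely hypothetical data standing in for the survey variables
rng = np.random.default_rng(2)
n = 142
df = pd.DataFrame({
    "age": rng.normal(32, 7, n),
    "years_experience": rng.normal(8, 5, n),
    "compassion_fatigue": rng.normal(25, 6, n),
})
df["turnover_intention"] = 0.3 * df["compassion_fatigue"] + rng.normal(0, 5, n)

# Block 1: assumed demographic controls; Block 2: add compassion fatigue
block1 = smf.ols("turnover_intention ~ age + years_experience", data=df).fit()
block2 = smf.ols("turnover_intention ~ age + years_experience + compassion_fatigue",
                 data=df).fit()
print("R-squared change attributable to compassion fatigue:",
      round(block2.rsquared - block1.rsquared, 3))
```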
Effect of surface treatments on the bond strengths of facing composite resins to zirconia copings.
Tsumita, M; Kokubo, Y; Kano, T
2012-09-01
The present study evaluated and compared the bond strength between zirconia and facing composite resin using different surface conditioning methods before and after thermocycling. Four primers, three opaque resins, and two facing composite resins were used, and 10 surface treatment procedures were conducted. The bond strength was measured before and after 4,000 cycles of thermocycling. The mean values of each group were statistically analyzed using one-way analysis of variance (ANOVA). The bond strengths of facing composite resins to zirconia after the various treatments varied depending on the primers, opaque resins, body resins, and thermocycling. The application of primers and opaque resins to the zirconia surface after sandblasting is expected to yield a strong bond with the facing composite resin (Estenia CG&B) even after thermocycling.
Analytical methods development for supramolecular design in solar hydrogen production
NASA Astrophysics Data System (ADS)
Brown, J. R.; Elvington, M.; Mongelli, M. T.; Zigler, D. F.; Brewer, K. J.
2006-08-01
In the investigation of alternative energy sources, specifically solar hydrogen production from water, the ability to perform experiments with a consistent and reproducible light source is key to meaningful photochemistry. The design, construction, and evaluation of a series of LED array photolysis systems for high-throughput photochemistry have been performed. Three array systems of increasing sophistication are evaluated using calorimetric measurements and potassium tris(oxalato)ferrate(II) chemical actinometry and compared with a traditional 1000 W Xe arc lamp source. The results are analyzed using descriptive statistics and analysis of variance (ANOVA). The third-generation array is modular and controllable in design. Furthermore, the third-generation array system is shown to be comparable in both precision and photonic output to a 1000 W Xe arc lamp.
Oliveira, Ana; Cruz, Joana; Jácome, Cristina; Marques, Alda
2018-01-01
Purpose: To estimate the within-day test-retest reliability and standard error of measurement (SEM) of the unsupported upper limb exercise test (UULEX) in adults without disabilities and to determine the effects of age and gender on performance of the UULEX. Method: A cross-sectional study was conducted with 100 adults without disabilities (44 men, mean age 44.2 [SD 26] y; 56 women, mean age 38.1 [SD 24.1] y). Participants performed three UULEX tests to establish within-day reliability, measured using an intra-class correlation coefficient (ICC) model 2 (two-way random effects) with a single rater (ICC[2,1]) and SEM. The effects of age and gender were examined using two-factor mixed-design analysis of variance (ANOVA) and one-way repeated-measures ANOVA. For analysis purposes, four sub-groups were created: younger adults, older adults, men, and women. Results: Excellent within-day reliability and a small SEM were found in the four sub-groups (younger adults: ICC[2,1]=0.88; 95% CI: 0.82, 0.92; SEM ∼40 s; older adults: ICC[2,1]=0.82; 95% CI: 0.72, 0.90; SEM ∼50 s; men: ICC[2,1]=0.93; 95% CI: 0.88, 0.96; SEM ∼30 s; women: ICC[2,1]=0.85; 95% CI: 0.78, 0.91; SEM ∼45 s). Younger adults took, on average, 308.24 seconds longer than older adults to perform the test; older adults performed significantly better on the third test (p<0.0001; η²=0.096). Gender effects were not found (p>0.05). Conclusion: The within-day test-retest reliability and SEM values of the UULEX may be used to define the magnitude of the error obtained with repeated measures. One UULEX test seems to be adequate for younger adults to achieve reliable results, whereas three tests seem to be needed for older adults.
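A hedged sketch of how ICC(2,1) and the SEM can be computed from a subjects-by-trials matrix, using the Shrout and Fleiss mean-squares formulation and one common SEM definition (SD × sqrt(1 − ICC)); the data below are simulated, not the study's:

```python
import numpy as np

def icc_2_1_and_sem(data):
    """ICC(2,1) (two-way random effects, single measurement) and SEM for an
    n_subjects x k_trials matrix, via the Shrout and Fleiss mean squares."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)                  # subjects
    col_means = data.mean(axis=0)                  # trials
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    sem = data.std(ddof=1) * np.sqrt(1 - icc)      # one common SEM definition
    return icc, sem

# Simulated data: 10 subjects, 3 repeated test times in seconds
rng = np.random.default_rng(5)
scores = rng.normal(600, 120, size=(10, 1)) + rng.normal(0, 40, size=(10, 3))
icc, sem = icc_2_1_and_sem(scores)
print(f"ICC(2,1) = {icc:.2f}, SEM = {sem:.1f} s")
```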
Liu, Quan; Ma, Li; Fan, Shou-Zen; Abbod, Maysam F; Shieh, Jiann-Shing
2018-01-01
Estimating the depth of anaesthesia (DoA) during operations has always been a challenging issue because of the underlying complexity of the brain mechanisms. Electroencephalogram (EEG) signals are the most widely used signals for measuring DoA. In this paper, a novel EEG-based index is proposed to evaluate DoA for 24 patients receiving general anaesthesia at different levels of unconsciousness. The Sample Entropy (SampEn) algorithm was used to capture the chaotic features of the signals. After calculating SampEn from the EEG signals, Random Forest was used to develop regression models with the Bispectral index (BIS) as the target. Correlation coefficient, mean absolute error, and area under the curve (AUC) were used to verify the perioperative performance of the proposed method. Validation comparisons with typical nonstationary signal analysis methods (i.e., recurrence analysis and permutation entropy) and regression methods (i.e., neural network and support vector machine) were conducted. To further verify the accuracy and validity of the proposed methodology, the data were divided into four unconsciousness-level groups on the basis of BIS levels, and analysis of variance (ANOVA) was applied to the corresponding index (i.e., the regression output). Results indicate that the correlation coefficient improved to 0.72 ± 0.09 after filtering and to 0.90 ± 0.05 after regression from the initial value of 0.51 ± 0.17. Similarly, the final mean absolute error declined to 5.22 ± 2.12. In addition, the AUC increased to 0.98 ± 0.02, and the ANOVA indicates that each of the four anaesthetic-level groups differed significantly from its neighbouring levels. Furthermore, the Random Forest output was highly linear in relation to BIS, yielding better DoA prediction accuracy. In conclusion, the proposed method provides a concrete basis for monitoring patients' anaesthetic level during surgeries.
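A simplified, hedged sketch of the SampEn-plus-Random-Forest pipeline; the sample entropy implementation is a basic textbook version, and the EEG epochs and BIS targets are placeholders, so this is not the authors' code:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def sample_entropy(x, m=2, r_frac=0.2):
    """Basic sample entropy: -ln(A/B), where B counts template pairs of length m
    within tolerance r (Chebyshev distance) and A counts pairs of length m + 1."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def match_pairs(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        n = len(templates)
        return (np.sum(d <= r) - n) / 2            # drop self-matches, count each pair once

    B, A = match_pairs(m), match_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Placeholder EEG epochs and BIS targets (real data: per-epoch EEG and recorded BIS)
rng = np.random.default_rng(11)
epochs = rng.normal(size=(100, 500))
features = np.array([[sample_entropy(e)] for e in epochs])
bis = 40 + 20 * (features[:, 0] - features[:, 0].mean()) + rng.normal(0, 5, 100)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(features[:80], bis[:80])
print(np.corrcoef(model.predict(features[80:]), bis[80:])[0, 1])
```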