Abdia, Younathan; Kulasekera, K B; Datta, Somnath; Boakye, Maxwell; Kong, Maiying
2017-09-01
Propensity score based statistical methods, such as matching, regression, stratification, inverse probability weighting (IPW), and doubly robust (DR) estimating equations, have become popular for estimating the average treatment effect (ATE) and the average treatment effect among the treated (ATT) in observational studies. The propensity score is the conditional probability of receiving treatment given the observed covariates, and it is usually estimated by logistic regression. However, a misspecified propensity score model may result in biased estimates of ATT and ATE. As an alternative, the generalized boosting method (GBM) has been proposed for estimating the propensity score. GBM uses regression trees as weak predictors and captures nonlinear and interactive effects of the covariates. For GBM-based propensity scores, only IPW methods have been investigated in the literature. In this article, we provide a comparative study of the commonly used propensity score based methods for estimating ATT and ATE, and examine their performance when the propensity score is estimated by logistic regression and by GBM, respectively. Extensive simulation results indicate that the estimators of ATE and ATT may vary greatly across methods. We conclude that (i) regression may not be suitable for estimating ATE or ATT regardless of how the propensity score is estimated; (ii) IPW and stratification usually provide reliable estimates of ATT when the propensity score model is correctly specified; and (iii) the estimators of ATE based on stratification, IPW, and DR are close to the true ATE when the propensity score is either correctly specified by logistic regression or estimated using GBM.
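The IPW estimators of ATE and ATT compared in this abstract can be sketched in a minimal simulation with a single confounder and a known true effect of 2.0. All variable names and the data-generating model below are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy observational data: one confounder x, true treatment effect of 2.0.
x = rng.normal(size=n)
e = 1 / (1 + np.exp(-x))                 # true propensity score
t = rng.binomial(1, e)                   # treatment assignment
y = 2.0 * t + x + rng.normal(size=n)

# IPW estimate of ATE: weight treated by 1/e, controls by 1/(1-e).
ate_ipw = np.mean(t * y / e - (1 - t) * y / (1 - e))

# IPW estimate of ATT: treated keep weight 1; controls get the odds e/(1-e).
w = (1 - t) * e / (1 - e)
att_ipw = y[t == 1].mean() - np.sum(w * y) / np.sum(w)
```

Here the true propensity score is plugged in; in practice e would itself be estimated (by logistic regression or GBM), and misspecification at that step is exactly what drives the biases the study quantifies.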
Carneiro, Pedro; Heckman, James J; Vytlacil, Edward
2010-01-01
This paper develops methods for evaluating marginal policy changes. We characterize how the effects of marginal policy changes depend on the direction of the policy change, and show that marginal policy effects are fundamentally easier to identify and to estimate than conventional treatment parameters. We develop the connection between marginal policy effects and the average effect of treatment for persons on the margin of indifference between participation in treatment and nonparticipation, and use this connection to analyze both parameters. We apply our analysis to estimate the effect of marginal changes in tuition on the return to going to college.
Evidence-Based Medicine, Heterogeneity of Treatment Effects, and the Trouble with Averages
Kravitz, Richard L; Duan, Naihua; Braslow, Joel
2004-01-01
Evidence-based medicine is the application of scientific evidence to clinical practice. This article discusses the difficulties of applying global evidence (“average effects” measured as population means) to local problems (individual patients or groups who might depart from the population average). It argues that trial-based estimates of the average benefit or harm of a treatment can be misleading, failing to reveal the potentially complex mixture of substantial benefits for some, little benefit for many, and harm for a few. Heterogeneity of treatment effects reflects patient diversity in risk of disease, responsiveness to treatment, vulnerability to adverse effects, and utility for different outcomes. Recognizing these factors, researchers can design studies that better characterize who will benefit from medical treatments, and clinicians and policymakers can make better use of the results. PMID:15595946
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2015-01-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function, or both, but their performance can be poor at practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of the observed covariates among the treated, the controls, and the combined group. This class includes exponential tilting, empirical likelihood, and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by balancing the covariate distributions alone, without resorting to direct estimation of the propensity score or the outcome regression function. We also propose a consistent estimator of the efficient asymptotic variance that does not involve additional functional estimation of either the propensity score or the outcome regression function. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID:27346982
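The moment-balancing idea can be illustrated with the exponential-tilting member of this class in a simplified two-group (ATT-style) form: minimize a convex dual so that weighted control covariate means exactly match the treated means. This is only a sketch of the balancing principle, not the paper's three-way-balanced, globally efficient estimator; all names and the simulated data are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
t = rng.binomial(1, p)

x_ctrl = x[t == 0]
target = x[t == 1].mean(axis=0)      # treated covariate means to reproduce

# Convex dual of the exponential-tilting (entropy-balancing) problem; its
# minimizer yields control weights whose weighted means equal `target`.
def dual(lam):
    return np.log(np.exp(x_ctrl @ lam).sum()) - lam @ target

lam = minimize(dual, np.zeros(x.shape[1]), method="BFGS").x
w = np.exp(x_ctrl @ lam)
w /= w.sum()

balanced_means = w @ x_ctrl          # matches the treated means at convergence
```

The gradient of the dual is exactly (weighted control means − treated means), so covariate balance is the first-order optimality condition rather than a by-product of propensity score prediction.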
Gruber, Joshua S; Arnold, Benjamin F; Reygadas, Fermin; Hubbard, Alan E; Colford, John M
2014-05-01
Complier average causal effects (CACE) estimate the impact of an intervention among treatment compliers in randomized trials. Methods used to estimate CACE have been outlined for parallel-arm trials (e.g., using an instrumental variables (IV) estimator) but not for other randomized study designs. Here, we propose a method for estimating CACE in randomized stepped wedge trials, where experimental units cross over from control conditions to intervention conditions in a randomized sequence. We illustrate the approach with a cluster-randomized drinking water trial conducted in rural Mexico from 2009 to 2011. Additionally, we evaluated the plausibility of assumptions required to estimate CACE using the IV approach, which are testable in stepped wedge trials but not in parallel-arm trials. We observed small increases in the magnitude of CACE risk differences compared with intention-to-treat estimates for drinking water contamination (risk difference (RD) = -22% (95% confidence interval (CI): -33, -11) vs. RD = -19% (95% CI: -26, -12)) and diarrhea (RD = -0.8% (95% CI: -2.1, 0.4) vs. RD = -0.1% (95% CI: -1.1, 0.9)). Assumptions required for IV analysis were probably violated. Stepped wedge trials allow investigators to estimate CACE with an approach that avoids the stronger assumptions required for CACE estimation in parallel-arm trials. Inclusion of CACE estimates in stepped wedge trials with imperfect compliance could enhance reporting and interpretation of the results of such trials.
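For the parallel-arm case the abstract contrasts against, the IV estimator of CACE reduces, under one-sided noncompliance, to the Wald ratio of the two intention-to-treat effects. A toy sketch under an assumed 60% compliance rate and a true complier effect of 1.5 (all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

z = rng.binomial(1, 0.5, n)              # randomized assignment
complier = rng.binomial(1, 0.6, n)       # latent complier status
d = z * complier                         # uptake: one-sided noncompliance
y = 1.5 * d + rng.normal(size=n)         # effect operates only through uptake

itt_y = y[z == 1].mean() - y[z == 0].mean()   # ITT effect on the outcome
itt_d = d[z == 1].mean() - d[z == 0].mean()   # ITT effect on uptake
cace = itt_y / itt_d                          # IV (Wald) estimator of CACE
```

The ITT effect (here about 0.9) understates the effect among compliers; dividing by the compliance rate recovers it, at the price of the exclusion-restriction and monotonicity assumptions the paper shows are partially testable in stepped wedge designs.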
Weirich, Scott R; Silverstein, Joann; Rajagopalan, Balaji
2011-08-01
There is increasing interest in decentralization of wastewater collection and treatment systems. However, there have been no systematic studies of the performance of small treatment facilities compared with larger plants. A statistical analysis of 4 years of discharge monitoring report (DMR) data from 210 operating wastewater treatment facilities was conducted to determine the effect of average flow rate and capacity utilization on effluent biochemical oxygen demand (BOD), total suspended solids (TSS), ammonia, and fecal coliforms relative to permitted values. Relationships were quantified using generalized linear models (GLMs). Small facilities (40 m³/d) had violation rates greater than 10 times those of the largest facilities (400,000 m³/d) for BOD, TSS, and ammonia. For facilities with average flows less than 40,000 m³/d, increasing capacity utilization was correlated with increased effluent levels of BOD and TSS. Larger facilities tended to operate at flows closer to their design capacity while maintaining treatment, suggesting greater efficiency.
What Is the Minimum Information Needed to Estimate Average Treatment Effects in Education RCTs?
ERIC Educational Resources Information Center
Schochet, Peter Z.
2014-01-01
Randomized controlled trials (RCTs) are considered the "gold standard" for evaluating an intervention's effectiveness. Recently, the federal government has placed increased emphasis on the use of opportunistic experiments. A key criterion for conducting opportunistic experiments, however, is that there is relatively easy access to data…
ERIC Educational Resources Information Center
Wanzek, Jeanne; Petscher, Yaacov; Al Otaiba, Stephanie; Kent, Shawn C.; Schatschneider, Christopher; Haynes, Martha; Rivas, Brenna K.; Jones, Francesca G.
2016-01-01
The present study used a randomized control trial to examine the effects of a widely used multicomponent Tier 2-type intervention, Passport to Literacy, on the reading ability of 221 fourth graders who initially scored at or below the 30th percentile in reading comprehension. Intervention was provided by research staff to groups of 4-7 students…
Lenis, David; Ebnesajjad, Cyrus F; Stuart, Elizabeth A
2017-04-01
One of the main limitations of causal inference methods is that they rely on the assumption that all variables are measured without error. A popular approach for handling measurement error is simulation-extrapolation (SIMEX). However, its use for estimating causal effects has been examined only in the context of an additive, non-differential, and homoscedastic classical measurement error structure. In this article we extend the SIMEX methodology, in the context of a mean-reverting measurement error structure, to a doubly robust estimator of the average treatment effect when a single covariate is measured with error but the outcome and treatment indicator are not. Throughout this article we assume that an independent validation sample is available. Simulation studies suggest that our method performs better than a naive approach that simply uses the covariate measured with error.
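The SIMEX idea itself, in its classical additive-error form rather than the mean-reverting doubly robust extension this article develops, can be sketched for a simple regression slope: deliberately add extra measurement error at several levels λ, track how the estimate degrades, and extrapolate back to λ = −1 (no error). All quantities below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
sigma_u = 0.8                              # measurement-error SD, assumed known

x = rng.normal(size=n)                     # true covariate
w = x + sigma_u * rng.normal(size=n)       # error-prone observed version
y = x + rng.normal(size=n)                 # true slope on x is 1.0

def slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a)

# SIMEX step 1: refit with extra noise added at each contamination level.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
betas = [np.mean([slope(w + np.sqrt(lv) * sigma_u * rng.normal(size=n), y)
                  for _ in range(20)]) for lv in lams]

# SIMEX step 2: quadratic extrapolation of the slope back to lambda = -1.
beta_simex = np.polyval(np.polyfit(lams, betas, 2), -1.0)
naive = betas[0]                           # attenuated estimate at lambda = 0
```

The quadratic extrapolant only partially corrects the attenuation here (the exact bias function is a hyperbola in λ), which is the standard caveat with SIMEX regardless of the downstream estimator.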
Balzer, Laura B.; Petersen, Maya L.; van der Laan, Mark J.
2016-01-01
In cluster randomized trials, the study units usually are not a simple random sample from some clearly defined target population. Instead, the target population tends to be hypothetical or ill-defined, and the selection of study units tends to be systematic, driven by logistical and practical considerations. As a result, the population average treatment effect (PATE) may be neither well-defined nor easily interpretable. In contrast, the sample average treatment effect (SATE) is the mean difference in the counterfactual outcomes for the study units. The sample parameter is easily interpretable and arguably the most relevant when the study units are not sampled from some specific super-population of interest. Furthermore, in most settings the sample parameter will be estimated more efficiently than the population parameter. To the best of our knowledge, this is the first paper to propose using targeted maximum likelihood estimation (TMLE) for estimation and inference of the sample effect in trials with and without pair-matching. We study the asymptotic and finite sample properties of the TMLE for the sample effect and provide a conservative variance estimator. Finite sample simulations illustrate the potential gains in precision and power from selecting the sample effect as the target of inference. This work is motivated by the Sustainable East Africa Research in Community Health (SEARCH) study, a pair-matched, community randomized trial to estimate the effect of population-based HIV testing and streamlined ART on the five-year cumulative HIV incidence (NCT01864603). The proposed methodology will be used in the primary analysis for the SEARCH trial. PMID:27087478
NASA Astrophysics Data System (ADS)
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to…
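The core convolution relation (measured profile = real profile ⊗ detector response) is easy to demonstrate numerically. The sketch below uses an idealized logistic field edge and a Gaussian response with σ equal to a rough, CC13-like chamber radius of 3 mm; the profile shape, field size, and kernel are assumptions for illustration, not the commissioning data:

```python
import numpy as np

xx = np.arange(-50.0, 50.0, 0.1)     # off-axis position (mm), 0.1 mm grid

def edge_profile(pos, penumbra):
    # Idealized field-edge profile: field edges at +/-30 mm.
    return 1 / (1 + np.exp((np.abs(pos) - 30.0) / penumbra))

# Detector response of a finite-size chamber, modeled as a Gaussian with
# sigma ~ chamber radius (3 mm is an assumed, CC13-like value).
kernel = np.exp(-0.5 * (np.arange(-15.0, 15.1, 0.1) / 3.0) ** 2)
kernel /= kernel.sum()

true_profile = edge_profile(xx, 1.0)
measured = np.convolve(true_profile, kernel, mode="same")  # what the chamber sees

def penumbra_80_20(profile):
    """80%-20% penumbra width (mm) of the right-hand field edge."""
    pos, vals = xx[xx > 0], profile[xx > 0]
    return pos[vals < 0.2][0] - pos[vals < 0.8][0]
```

Volume averaging broadens the apparent penumbra (here from about 2.7 mm to roughly 6 mm); the paper's method exploits the same relation in reverse, convolving TPS-calculated profiles with the detector response before matching them to measurement, so both sides carry the identical blur.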
Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B
2017-04-01
Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is unethical to assign placebo and logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models in that (1) the population-average parameters have an important interpretation for public health applications and (2) it avoids untestable assumptions on latent variable distributions and parametric assumptions about error distributions, thereby providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equations for stepped wedge cluster randomized trials, with parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
ERIC Educational Resources Information Center
Park, Soojin
2015-01-01
Identifying causal mechanisms is becoming more essential in the social and medical sciences. In the presence of treatment non-compliance, the Intent-To-Treat effect (hereafter, ITT effect) is identified as long as the treatment is randomized (Angrist et al., 1996). However, the mediated portion of the effect is not identified without additional…
ERIC Educational Resources Information Center
Schochet, Peter Z.
2009-01-01
This paper examines the estimation of two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for the study population (the…
ERIC Educational Resources Information Center
Gage, Nicholas A.; Leite, Walter; Childs, Karen; Kincaid, Don
2017-01-01
The relationship between school-wide positive behavioral interventions and supports (SWPBIS) and school-level academic achievement has not been established. Most experimental research has found little to no evidence that SWPBIS has a distal effect on school-level achievement. Yet, an underlying assumption of SWPBIS is that improving social…
The balanced survivor average causal effect.
Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken
2013-05-07
Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
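The proposed estimator, comparing outcomes between equivalent fractions of the longest survivors in each arm, can be sketched directly. The simulation below assumes, for simplicity, a survival-independent treatment effect of 1.0 (so any fraction should recover roughly the same contrast) and generates the outcome for everyone, whereas in a real truncation-by-death setting it is observed only for survivors. All names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20_000

trt = rng.binomial(1, 0.5, n)
surv = rng.exponential(10 + 2 * trt)      # treatment also prolongs survival
outcome = 1.0 * trt + rng.normal(size=n)  # longitudinal outcome (true effect 1.0)

def balanced_sace(frac):
    """Mean-outcome contrast between the top `frac` longest survivors per arm."""
    means = []
    for a in (0, 1):
        s, y = surv[trt == a], outcome[trt == a]
        means.append(y[s >= np.quantile(s, 1 - frac)].mean())
    return means[1] - means[0]

effect = balanced_sace(0.4)               # compare top 40% of survivors in each arm
```

No monotonicity assumption enters this computation; the paper's bias expressions and sensitivity analyses address what happens when the treatment effect does vary with potential survival time, which this toy setup deliberately rules out.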
Leacy, Finbarr P.; Stuart, Elizabeth A.
2013-01-01
Propensity and prognostic score methods seek to improve the quality of causal inference in non-randomized or observational studies by replicating the conditions found in a controlled experiment, at least with respect to observed characteristics. Propensity scores model receipt of the treatment of interest; prognostic scores model the potential outcome under a single treatment condition. While the popularity of propensity score methods continues to grow, prognostic score methods and methods combining propensity and prognostic scores have thus far received little attention. To this end, we performed a simulation study that compared subclassification and full matching on a single estimated propensity or prognostic score with three approaches combining estimated propensity and prognostic scores: full matching on a Mahalanobis distance combining the estimated propensity and prognostic scores (FULL-MAHAL); full matching on the estimated prognostic propensity score within propensity score calipers (FULL-PGPPTY); and subclassification on an estimated propensity and prognostic score grid with 5 × 5 subclasses (SUBCLASS(5*5)). We considered settings in which one, both or neither score model was misspecified. The data generating mechanisms varied in the degree of linearity and additivity in the true treatment assignment and outcome models. FULL-MAHAL and FULL-PGPPTY exhibited strong to superior performance in root mean square error terms across all simulation settings and scenarios. Methods combining propensity and prognostic scores were no less robust to model misspecification than single-score methods even when both score models were incorrectly specified. Our findings support the joint use of propensity and prognostic scores in estimation of the average treatment effect on the treated. PMID:24151187
Barraclough, B; Li, J; Liu, C; Yan, G
2014-06-15
Purpose: Fourier-based deconvolution approaches used to eliminate the ion chamber volume averaging effect (VAE) suffer from measurement noise. This work aims to investigate a novel method to account for ion chamber VAE through convolution in a commercial treatment planning system (TPS). Methods: Beam profiles of various field sizes and depths of an Elekta Synergy were collected with a finite-size ion chamber (CC13) to derive a clinically acceptable beam model for a commercial TPS (Pinnacle³), following the vendor-recommended modeling process. The TPS-calculated profiles were then externally convolved with a Gaussian function representing the chamber (σ = chamber radius). The agreement between the convolved profiles and measured profiles was evaluated with a one-dimensional Gamma analysis (1%/1 mm) as an objective function for optimization. TPS beam model parameters for focal and extra-focal sources were optimized and loaded back into the TPS for a new calculation. This process was repeated until the objective function converged using a Simplex optimization method. Planar doses of 30 IMRT beams were calculated with both the clinical and the re-optimized beam models and compared with MapCHECK™ measurements to evaluate the new beam model. Results: After re-optimization, the two orthogonal source sizes for the focal source reduced from 0.20/0.16 cm to 0.01/0.01 cm, the minimal allowed values in Pinnacle³. No significant change in the parameters for the extra-focal source was observed. With the re-optimized beam model, the average Gamma passing rate for the 30 IMRT beams increased from 92.1% to 99.5% with a 3%/3 mm criterion and from 82.6% to 97.2% with a 2%/2 mm criterion. Conclusion: We propose a novel method to account for ion chamber VAE in a commercial TPS through convolution. The re-optimized beam model, with VAE accounted for through a reliable and easy-to-implement convolution and optimization approach, outperforms the original beam model in standard IMRT QA.
Pirracchio, Romain; Carone, Marco
2016-01-01
Consistency of propensity score estimators relies on correct specification of the propensity score model. The propensity score is frequently estimated using a main-effects logistic regression. It has recently been shown that the use of ensemble machine learning algorithms, such as the Super Learner, can improve covariate balance and reduce bias in a meaningful manner in the case of serious model misspecification for treatment assignment. However, the loss functions normally used by the Super Learner may not be appropriate for propensity score estimation, since the goal in this problem is not to optimize propensity score prediction but rather to achieve the best possible balance in the covariate distribution between treatment groups. In a simulation study, we evaluated the benefit of a modification of the Super Learner for propensity score estimation geared toward achieving covariate balance between the treated and untreated after matching on the propensity score. Our simulation study included six different scenarios characterized by various degrees of deviation from the usual main-terms logistic model for the true propensity score and outcome, as well as the presence (or not) of instrumental variables. Our results suggest that the use of this adapted Super Learner to estimate the propensity score can further improve the robustness of propensity score matching estimators.
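The balance-targeted loss can be illustrated in miniature: instead of the full Super Learner, select between two candidate propensity models by the covariate balance each achieves after 1:1 nearest-neighbor matching (smaller post-matching standardized mean differences win). The data-generating model, with an interaction the main-terms model misses, is an assumption for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 4000
x = rng.normal(size=(n, 2))
t = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] + x[:, 0] * x[:, 1]))))

# Balance is checked on the main terms and the interaction.
feats = np.column_stack([x, x[:, 0] * x[:, 1]])

def imbalance(ps):
    """Mean |standardized mean difference| after 1:1 NN matching with replacement."""
    tr, ct = np.where(t == 1)[0], np.where(t == 0)[0]
    match = ct[np.abs(ps[ct][None, :] - ps[tr][:, None]).argmin(axis=1)]
    diff = feats[tr].mean(axis=0) - feats[match].mean(axis=0)
    return np.mean(np.abs(diff / feats.std(axis=0)))

designs = {"main_terms": x,
           "with_interaction": np.column_stack([x, x[:, 0] * x[:, 1]])}
scores = {k: imbalance(LogisticRegression().fit(d, t).predict_proba(d)[:, 1])
          for k, d in designs.items()}
best = min(scores, key=scores.get)       # model selected by post-matching balance
```

A prediction-loss criterion could favor either model on held-out likelihood, but the balance criterion directly scores the property that matters for the downstream matching estimator, which is the adaptation this paper studies within the Super Learner framework.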
Identification and estimation of survivor average causal effects
Tchetgen, Eric J Tchetgen
2014-01-01
In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death, and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if, as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies the SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of the SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. PMID:24889022
Lagrangian averages, averaged Lagrangians, and the mean effects of fluctuations in fluid dynamics.
Holm, Darryl D.
2002-06-01
We begin by placing the generalized Lagrangian mean (GLM) equations for a compressible adiabatic fluid into the Euler-Poincare (EP) variational framework of fluid dynamics, for an averaged Lagrangian. This is the Lagrangian averaged Euler-Poincare (LAEP) theorem. Next, we derive a set of approximate small amplitude GLM equations (glm equations) at second order in the fluctuating displacement of a Lagrangian trajectory from its mean position. These equations express the linear and nonlinear back-reaction effects on the Eulerian mean fluid quantities by the fluctuating displacements of the Lagrangian trajectories in terms of their Eulerian second moments. The derivation of the glm equations uses the linearized relations between Eulerian and Lagrangian fluctuations, in the tradition of Lagrangian stability analysis for fluids. The glm derivation also uses the method of averaged Lagrangians, in the tradition of wave, mean flow interaction. Next, the new glm EP motion equations for incompressible ideal fluids are compared with the Euler-alpha turbulence closure equations. An alpha model is a GLM (or glm) fluid theory with a Taylor hypothesis closure. Such closures are based on the linearized fluctuation relations that determine the dynamics of the Lagrangian statistical quantities in the Euler-alpha equations. Thus, by using the LAEP theorem, we bridge between the GLM equations and the Euler-alpha closure equations, through the small-amplitude glm approximation in the EP variational framework. We conclude by highlighting a new application of the GLM, glm, and alpha-model results for Lagrangian averaged ideal magnetohydrodynamics. (c) 2002 American Institute of Physics.
The EffectLiteR Approach for Analyzing Average and Conditional Effects.
Mayer, Axel; Dietzfelbinger, Lisa; Rosseel, Yves; Steyer, Rolf
2016-01-01
We present a framework for estimating average and conditional effects of a discrete treatment variable on a continuous outcome variable, conditioning on categorical and continuous covariates. Using the new approach, termed the EffectLiteR approach, researchers can consider conditional treatment effects given values of all covariates in the analysis, as well as various aggregates of these conditional treatment effects, such as average effects, effects on the treated, or aggregated conditional effects given values of a subset of covariates. Building on structural equation modeling, key advantages of the new approach are that (1) it allows for latent covariates and outcome variables; (2) it permits (higher order) interactions between the treatment variable and categorical and (latent) continuous covariates; and (3) covariates can be treated as stochastic or fixed. The approach is illustrated by an example, and open source software EffectLiteR is provided, which makes a detailed analysis of effects conveniently accessible for applied researchers.
The causal meaning of Fisher’s average effect
LEE, JAMES J.; CHOW, CARSON C.
2013-01-01
Summary In order to formulate the Fundamental Theorem of Natural Selection, Fisher defined the average excess and average effect of a gene substitution. Finding these notions to be somewhat opaque, some authors have recommended reformulating Fisher’s ideas in terms of covariance and regression, which are classical concepts of statistics. We argue that Fisher intended his two averages to express a distinction between correlation and causation. On this view, the average effect is a specific weighted average of the actual phenotypic changes that result from physically changing the allelic states of homologous genes. We show that the statistical and causal conceptions of the average effect, perceived as inconsistent by Falconer, can be reconciled if certain relationships between the genotype frequencies and non-additive residuals are conserved. There are certain theory-internal considerations favouring Fisher’s original formulation in terms of causality; for example, the frequency-weighted mean of the average effects equaling zero at each locus becomes a derivable consequence rather than an arbitrary constraint. More broadly, Fisher’s distinction between correlation and causation is of critical importance to gene-trait mapping studies and the foundations of evolutionary biology. PMID:23938113
Effects of spatial variability and scale on areal -average evapotranspiration
NASA Technical Reports Server (NTRS)
Famiglietti, J. S.; Wood, Eric F.
1993-01-01
This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.
The Health Effects of Income Inequality: Averages and Disparities.
Truesdale, Beth C; Jencks, Christopher
2016-01-01
Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.
27 CFR 19.249 - Average effective tax rate.
Code of Federal Regulations, 2011 CFR
2011-04-01
Title 27 (Alcohol, Tobacco Products and Firearms), § 19.249 Average effective tax rate. ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY, LIQUORS, DISTILLED SPIRITS PLANTS, Distilled Spirits Taxes, Effective Tax Rates § 19...
Collision and average velocity effects on the ratchet pinch
Vlad, M.; Benkadda, S.
2008-03-15
A ratchet-type average velocity V^R appears for test particles moving in a stochastic potential and a space-dependent magnetic field. This model is developed by including particle collisions and an average velocity. We show that these components of the motion can destroy the ratchet velocity, but they can also produce a significant increase of V^R, depending on the parameters. The amplification of the ratchet pinch is a nonlinear effect that appears in the presence of trajectory eddying.
Effects of velocity averaging on the shapes of absorption lines
NASA Technical Reports Server (NTRS)
Pickett, H. M.
1980-01-01
The velocity averaging of collision cross sections produces non-Lorentz line shapes, even at densities where Doppler broadening is not apparent. The magnitude of the effects will be described using a model in which the collision broadening depends on a simple velocity power law. The effect of the modified profile on experimental measures of linewidth, shift and amplitude will be examined and an improved approximate line shape will be derived.
The Lake Wobegon Effect: Are All Cancer Patients above Average?
Wolf, Jacqueline H; Wolf, Kevin S
2013-01-01
Context When elderly patients face a terminal illness such as lung cancer, most are unaware that what we term in this article “the Lake Wobegon effect” taints the treatment advice imparted to them by their oncologists. In framing treatment plans, cancer specialists tend to intimate that elderly patients are like the children living in Garrison Keillor's mythical Lake Wobegon: above average and thus likely to exceed expectations. In this article, we use the story of our mother's death from lung cancer to investigate the consequences of elderly people's inability to reconcile the grave reality of their illness with the overly optimistic predictions of their physicians. Methods In this narrative analysis, we examine the routine treatment of elderly, terminally ill cancer patients through alternating lenses: the lens of a historian of medicine who also teaches ethics to medical students and the lens of an actuary who is able to assess physicians’ claims for the outcome of medical treatments. Findings We recognize that a desire to instill hope in patients shapes physicians’ messages. We argue, however, that the automatic optimism conveyed to elderly, dying patients by cancer specialists prompts those patients to choose treatment that is ineffective and debilitating. Rather than primarily prolong life, treatments most notably diminish patients’ quality of life, weaken the ability of patients and their families to prepare for their deaths, and contribute significantly to the unsustainable costs of the U.S. health care system. Conclusions The case described in this article suggests how physicians can better help elderly, terminally ill patients make medical decisions that are less damaging to them and less costly to the health care system. PMID:24320166
Thermal effects in high average power optical parametric amplifiers.
Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas
2013-03-01
Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.
Average power effects in parametric oscillators and amplifiers
NASA Technical Reports Server (NTRS)
Barnes, Norman P.; Williams-Byrd, Julie A.
1995-01-01
Average power effects relative to the operation of parametric oscillators and amplifiers have been calculated. Temperature gradients have been calculated for both radial and longitudinal heat extraction. In many instances, the thermal load on a parametric oscillator is higher than the thermal load on a parametric amplifier with the same pump power. Having one or both of these wavelengths resonant increases the chances that a generated photon will be absorbed by the nonlinear crystal. Temperature profiles and thermal diffusion time constants have been calculated for Gaussian beams, given the heat-deposition rate. With radial heat extraction the temperature profile can be expressed as a power series or approximated by a Gaussian distribution function.
NASA Astrophysics Data System (ADS)
Wood, Brian D.; Cherblanc, Fabien; Quintard, Michel; Whitaker, Stephen
2003-08-01
In this work, we use the method of volume averaging to determine the effective dispersion tensor for a heterogeneous porous medium; closure for the averaged equation is obtained by solution of a concentration deviation equation over a periodic unit cell. Our purpose is to show how the method of volume averaging with closure can be rectified with the results obtained by other upscaling methods under particular conditions. Although this rectification is something that is generally believed to be true, there has been very little research that explores this issue explicitly. We show that under certain limiting (but mild) assumptions, the closure problem provides a Fourier series solution for the effective dispersion tensor. When second-order spatial stationarity is imposed on the velocity field, the method yields a simple Fourier series that converges to an integral form in the limit as the period of the unit cell approaches infinity. This limiting result is identical to the quasi-Fickian forms that have been developed previously via ensemble averaging by [1993] and recently by [2000], except in the definition of the averaging operation. As a second objective, we have conducted a numerical study to evaluate the influence of the size of the averaging volume on the effective dispersion tensor and its volume averaged statistics. This second objective is complementary in many ways to recent research reported by [1999] (via ensemble averaging) and by [1999] (via volume averaging) on the block-averaged effective dispersion tensor. The variability of the effective dispersion tensor from realization to realization is assessed by computing the volume-averaged effective dispersion tensor for an ensemble of finite fields with the same (ensemble) statistics. Ensembles were generated using three different sizes of unit cells. All three unit cell sizes yield similar results for the value of the mean effective dispersion tensor. However, the coefficient of variation depends strongly
Microstructural effects on the average properties in porous battery electrodes
NASA Astrophysics Data System (ADS)
García-García, Ramiro; García, R. Edwin
2016-03-01
A theoretical framework is formulated to analytically quantify the effects of the microstructure on the average properties of porous electrodes, including reactive area density and the through-thickness tortuosity as observed in experimentally-determined tomographic sections. The proposed formulation includes the microstructural non-idealities but also captures the well-known perfectly spherical limit. Results demonstrate that in the absence of any particle alignment, the through-thickness Bruggeman exponent α reaches an asymptotic value of α ∼ 2/3 as the shape of the particles becomes increasingly prolate (needle- or fiber-like). In contrast, the Bruggeman exponent diverges as the shape of the particles becomes increasingly oblate, regardless of the degree of particle alignment. For aligned particles, tortuosity can be dramatically suppressed, e.g., α → 1/10 for ra → 1/10 and MRD ∼ 40. Particle size polydispersity impacts the porosity-tortuosity relation when the average particle size is comparable to the thickness of the electrode layers. Electrode reactivity density can be arbitrarily increased as the particles become increasingly oblate, but asymptotically reaches a minimum value as the particles become increasingly prolate. In the limit of a porous electrode comprised of fiber-like particles, the area density decreases by 24% with respect to a distribution of perfectly spherical particles.
Estimation and Identification of the Complier Average Causal Effect Parameter in Education RCTs
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2011-01-01
In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the…
Effects of Polynomial Trends on Detrending Moving Average Analysis
NASA Astrophysics Data System (ADS)
Shao, Ying-Hui; Gu, Gao-Feng; Jiang, Zhi-Qiang; Zhou, Wei-Xing
2015-07-01
The detrending moving average (DMA) algorithm is one of the best performing methods to quantify the long-term correlations in nonstationary time series. As many long-term correlated time series in real systems contain various trends, we investigate the effects of polynomial trends on the scaling behaviors and the performances of three widely used DMA methods including backward algorithm (BDMA), centered algorithm (CDMA) and forward algorithm (FDMA). We derive a general framework for polynomial trends and obtain analytical results for constant shifts and linear trends. We find that the behavior of the CDMA method is not influenced by constant shifts. In contrast, linear trends cause a crossover in the CDMA fluctuation functions. We also find that constant shifts and linear trends cause crossovers in the fluctuation functions obtained from the BDMA and FDMA methods. When a crossover exists, the scaling behavior at small scales comes from the intrinsic time series while that at large scales is dominated by the constant shifts or linear trends. We also derive analytically the expressions of crossover scales and show that the crossover scale depends on the strength of the polynomial trends, the Hurst index, and in some cases (linear trends for BDMA and FDMA) the length of the time series. In all cases, the BDMA and the FDMA behave almost the same under the influence of constant shifts or linear trends. Extensive numerical experiments confirm excellently the analytical derivations. We conclude that the CDMA method outperforms the BDMA and FDMA methods in the presence of polynomial trends.
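As a concrete illustration of the three DMA variants compared above, a minimal fluctuation function F(n) can be written in a few lines of plain Python, with a position parameter θ selecting the backward (θ = 0), centered (θ = 0.5), or forward (θ = 1) algorithm. The function name and interface are ours, not the paper's:

```python
def dma_fluctuation(x, n, theta=0.5):
    """Detrending-moving-average fluctuation F(n) of a series x.
    theta = 0 -> backward (BDMA), 0.5 -> centered (CDMA), 1 -> forward (FDMA)."""
    y, s = [], 0.0
    for xi in x:          # profile: cumulative sum of the series
        s += xi
        y.append(s)
    back = int(round((n - 1) * (1 - theta)))  # window points behind index i
    fwd = n - 1 - back                        # window points ahead of index i
    resid2, count = 0.0, 0
    for i in range(back, len(y) - fwd):
        trend = sum(y[i - back:i + fwd + 1]) / n  # moving-average "trend"
        resid2 += (y[i] - trend) ** 2
        count += 1
    return (resid2 / count) ** 0.5
```

Plotting log F(n) against log n over a range of window sizes n then exposes the scaling exponent and any trend-induced crossovers of the kind analyzed in the paper.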
Omura, Yoshiaki; Lu, Dominic; Jones, Marilyn K; Nihrane, Abdallah; Duvvi, Harsha; Yapor, Dario; Shimotsuura, Yasuhiro; Ohki, Motomu
2016-01-01
During the past 10 years, the author found that an optimal Vitamin D3 dose of 400 I.U. has safe and effective anticancer effects, while the commonly used 2000-5000 I.U. of Vit. D3 often creates a 2- to 3-fold increase in cancer markers. We examined the concentration of Taurine in normal internal organs and in cancer using the Bi-Digital O-Ring Test. We found that Taurine levels in normal tissue are 4-6 ng. However, the average normal Taurine value of 5.0-5.25 ng was strikingly reduced to 0.0025-0.0028 ng in this study of several examples of adenocarcinomas of the esophagus, stomach, pancreas, colon, prostate, and lung, as well as breast cancer. The lowest Taurine levels of 0.0002-0.0005 ng were found in so-called Zika virus infected babies from Brazil with microcephaly. While the Vitamin D3 receptor stimulant 1α,25(OH)2D3 in normal tissues was 0.45-0.53 ng, it was reduced to 0.025-0.006 ng in cancers (1/100th-1/200th of the normal value), particularly in various adenocarcinomas. All of these adenocarcinomas had about 1500 ng HPV-16 viral infection. In 500 breast cancers, about 97% had HPV-16. The optimal dose of Taurine for the average adult has been found to be about 175 mg, rather than the widely used 500 mg. In addition, since Taurine is markedly reduced to close to 1/1000th-1/2000th of its normal value in these cancer tissues, we examined the effect of the optimal dose of Taurine on cancer patients. The optimal dose of Taurine produced a very significant decrease in cancer-associated parameters, such as Oncogene C-fos Ab2 and Integrin α5β1 being reduced to less than 1/1,000th, and 8-OH-dG (which increases in the presence of DNA mutation) reduced to less than 1/10th. The optimal dose of Taurine, 175 mg 3 times a day, alone provided very significant anti-cancer effects in various adult cancer patients, with strikingly increased urinary excretion of bacteria, viruses, and funguses, asbestos, toxic metals, and other toxic substances. However, optimal doses of
The Effect of Honors Courses on Grade Point Averages
ERIC Educational Resources Information Center
Spisak, Art L.; Squires, Suzanne Carter
2016-01-01
High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges; they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…
George, Rohini; Suh, Yelin; Murphy, Martin; Williamson, Jeffrey; Weiss, Elizabeth; Keall, Paul
2008-01-01
Real-time tumor targeting involves the continuous realignment of the radiation beam with the tumor. Real-time tumor targeting offers several advantages such as improved accuracy of tumor treatment and reduced dose to surrounding tissue. Current limitations to this technique include mechanical motion constraints. The purpose of this study was to investigate an alternative treatment scenario using a moving average algorithm. The algorithm, using a suitable averaging period, accounts for variations in the average tumor position, but respiratory induced target position variations about this average are ignored during delivery and can be treated as a random error during planning. In order to test the method a comparison between five different treatment techniques was performed: (1) moving average algorithm, (2) real-time motion tracking, (3) respiration motion gating (at both inhale and exhale), (4) moving average gating (at both inhale and exhale) and (5) static beam delivery. Two data sets were used for the purpose of this analysis: (a) external respiratory-motion traces using different coaching techniques included 331 respiration motion traces from 24 lung-cancer patients acquired using three different breathing types [free breathing (FB), audio coaching (A) and audio-visual biofeedback (AV)]; (b) 3D tumor motion included implanted fiducial motion data for over 160 treatment fractions for 46 thoracic and abdominal cancer patients obtained from the Cyberknife Synchrony. The metrics used for comparison were the group systematic error (M), the standard deviation (SD) of the systematic error (Σ) and the root mean square of the random error (σ). Margins were calculated using the formula by Stroom et al. [Int. J. Radiat. Oncol., Biol., Phys. 43(4), 905–919 (1999)]: 2Σ+0.7σ. The resultant calculations for implanted fiducial motion traces (all values in cm) show that M and Σ are negligible for moving average algorithm, moving average gating, and real-time tracking (i
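The margin recipe quoted in the abstract above (Stroom et al.: 2Σ + 0.7σ) can be computed directly from per-fraction position errors. The sketch below is our own schematic decomposition into group mean M, systematic spread Σ, and random component σ, not code from the study; the error-partitioning convention (per-patient mean = systematic, per-patient SD = random) is an assumption:

```python
import math

def margin_from_errors(errors_by_patient):
    """Schematic margin from per-fraction position errors (cm) for a
    group of patients, using the recipe 2*Sigma + 0.7*sigma."""
    # per-patient mean error = that patient's systematic error
    means = [sum(e) / len(e) for e in errors_by_patient]
    M = sum(means) / len(means)  # group systematic error
    Sigma = math.sqrt(sum((m - M) ** 2 for m in means) / (len(means) - 1))
    # random component: RMS of the per-patient standard deviations
    sds = [math.sqrt(sum((x - m) ** 2 for x in e) / (len(e) - 1))
           for e, m in zip(errors_by_patient, means)]
    sigma = math.sqrt(sum(s * s for s in sds) / len(sds))
    return M, Sigma, sigma, 2 * Sigma + 0.7 * sigma
```

Applied to the residual errors left by each delivery technique, this yields the per-technique margins the study compares.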
Zigler, Corwin Matthew; Dominici, Francesca
2014-01-01
Causal inference with observational data frequently relies on the notion of the propensity score (PS) to adjust treatment comparisons for observed confounding factors. As decisions in the era of “big data” are increasingly reliant on large and complex collections of digital data, researchers are frequently confronted with decisions regarding which of a high-dimensional covariate set to include in the PS model in order to satisfy the assumptions necessary for estimating average causal effects. Typically, simple or ad-hoc methods are employed to arrive at a single PS model, without acknowledging the uncertainty associated with the model selection. We propose three Bayesian methods for PS variable selection and model averaging that 1) select relevant variables from a set of candidate variables to include in the PS model and 2) estimate causal treatment effects as weighted averages of estimates under different PS models. The associated weight for each PS model reflects the data-driven support for that model’s ability to adjust for the necessary variables. We illustrate features of our proposed approaches with a simulation study, and ultimately use our methods to compare the effectiveness of surgical vs. nonsurgical treatment for brain tumors among 2,606 Medicare beneficiaries. Supplementary materials are available online. PMID:24696528
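The final averaging step described here, combining effect estimates across candidate PS models with data-driven weights, can be illustrated with a minimal sketch. This is plain Python with names of our own choosing, and the use of log-scale model support as the weight input is our assumption, not the authors' implementation:

```python
import math

def model_averaged_effect(estimates, log_support):
    """Weighted average of per-PS-model causal effect estimates, with
    weights proportional to each model's (log-scale) data-driven support."""
    mx = max(log_support)
    w = [math.exp(l - mx) for l in log_support]  # numerically stable softmax
    total = sum(w)
    w = [wi / total for wi in w]
    effect = sum(wi * ei for wi, ei in zip(w, estimates))
    return effect, w
```

Subtracting the maximum before exponentiating keeps the weights well-defined even when the log supports are large negative numbers, as marginal likelihoods typically are.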
Effect of brotizolam on the averaged photopalpebral reflex in man
Tanaka, M.; Isozaki, H.; Mizuki, Y.; Inanaga, K.
1983-01-01
1 The photopalpebral reflex (PPR) is a useful method for assessing the level of arousal. Healthy males were given either brotizolam (0.0625, 0.125, 0.25 or 0.5 mg) or placebo within a double-blind, crossover design. Changes in PPR and subjective assessments were observed for 5 h after medication. 2 Prolongation of the latencies of the PPR was dose dependent, and the amplitude tended to be reduced. These effects appeared within 30 min and lasted about 4 h. 3 The dose-response curve of the maximum prolongation of the latencies was linear. 4 Sleepiness and slight ataxia were observed after drug ingestion. Sleepiness was correlated with the prolongation of the PPR latencies. 5 Brotizolam could be a potent hypnotic, with rapid onset and moderate duration of action, and it has no severe side-effects. PMID:6661378
Velocity Averaging, Kinetic Formulations and Regularizing Effects in Quasilinear PDEs
2005-10-31
nonlinear conservation laws. In [LPT94a], Lions, Perthame & Tadmor have shown that entropy solutions of such laws admit a regularizing effect of a fractional...one augments (1.1) with additional conditions on the behavior of Φ(ρ) for a large enough family of entropies Φ’s. These additional entropy conditions...imply that g is in fact a positive distribution, g = m ∈ M+, measuring the entropy dissipation of the nonlinear equation. We arrive at the kinetic
Effects of Spatial Variability on Annual Average Water Balance
NASA Astrophysics Data System (ADS)
Milly, P. C. D.; Eagleson, P. S.
1987-11-01
Spatial variability of soil and vegetation causes spatial variability of the water balance. For an area in which the water balance is not affected by lateral water flow, the frequency distributions of storm surface runoff, evapotranspiration, and drainage to groundwater are derivable from distributions of soil hydraulic parameters by means of a point water balance model and local application of the vegetal equilibrium hypothesis. Means and variances of the components of the budget can be found by Monte Carlo simulation or by approximate local expansions. For a fixed set of mean soil parameters, soil spatial variability may induce significant changes in the areal mean water balance, particularly if storm surface runoff occurs. Variability of the pore size distribution index and permeability has a much larger effect than that of effective porosity on the means and variances of water balance variables. The importance of the pore size distribution index implies that the microscopic similarity assumption may underestimate the effects of soil spatial variability. In general, the presence of soil variability reduces the sensitivity of water balance to mean properties. For small levels of soil variability, there exists a unique equivalent homogeneous soil type that reproduces the budget components and the mean soil moisture saturation of an inhomogeneous area.
Mitsuyoshi, Takamasa; Nakamura, Mitsuhiro; Matsuo, Yukinori; Ueki, Nami; Nakamura, Akira; Iizuka, Yusuke; Mampuya, Wambaka Ange; Mizowaki, Takashi; Hiraoka, Masahiro
2016-01-01
The purpose of this article is to quantitatively evaluate differences in dose distributions calculated using various computed tomography (CT) datasets, dose-calculation algorithms, and prescription methods in stereotactic body radiotherapy (SBRT) for patients with early-stage lung cancer. Data on 29 patients with early-stage lung cancer treated with SBRT were retrospectively analyzed. Averaged CT (Ave-CT) and expiratory CT (Ex-CT) images were reconstructed for each patient using 4-dimensional CT data. Dose distributions were initially calculated using the Ave-CT images and recalculated (in the same monitor units [MUs]) by employing Ex-CT images with the same beam arrangements. The dose-volume parameters, including D95, D90, D50, and D2 of the planning target volume (PTV), were compared between the 2 image sets. To explore the influence of dose-calculation algorithms and prescription methods on the differences in dose distributions evident between Ave-CT and Ex-CT images, we calculated dose distributions using the following 3 different algorithms: x-ray Voxel Monte Carlo (XVMC), Acuros XB (AXB), and the anisotropic analytical algorithm (AAA). We also used 2 different dose-prescription methods; the isocenter prescription and the PTV periphery prescription methods. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data were within 3 percentage points (%pts) employing the isocenter prescription method, and within 1.5%pts using the PTV periphery prescription method, irrespective of which of the 3 algorithms (XVMC, AXB, and AAA) was employed. The frequencies of dose-volume parameters differing by >1%pt when the XVMC and AXB were used were greater than those associated with the use of the AAA, regardless of the dose-prescription method employed. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data on patients who underwent lung SBRT were within 3%pts, regardless of the dose-calculation algorithm or the dose
NASA Astrophysics Data System (ADS)
Aarthi, G.; Prabu, K.; Reddy, G. Ramachandra
2017-02-01
The average spectral efficiency (ASE) is investigated for free space optical (FSO) communications employing On-Off keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems with and without pointing errors over Gamma-Gamma (GG) channels. Additionally, the impact of aperture averaging on the ASE is explored. The influence of different turbulence conditions along with varying receiver aperture is studied and analyzed. For the considered system, exact average channel capacity (ACC) expressions are derived using the Meijer G function. Results reveal that introducing pointing errors significantly reduces ASE performance. The ASE can be enhanced by increasing the receiver aperture across various turbulence regimes and by reducing the beam radius in the presence of pointing errors, but the rate of improvement diminishes with larger diameters and eventually saturates. Under strong turbulence, the coherent OWC system provides the best ASE performance, achieving 49 bits/s/Hz without pointing errors and 34 bits/s/Hz with pointing errors at an average transmitted optical power of 5 dBm and an aperture diameter of 10 cm.
Shan, Na; Xu, Ping-Feng
2016-11-01
In randomized trials with noncompliance, causal effects cannot be identified without strong assumptions. Therefore, several authors have considered bounds on the causal effects. Applying an idea of VanderWeele, Chiba gave bounds on the average causal effects in randomized trials with noncompliance using the information on the randomized assignment, the treatment received, and the outcome under monotonicity assumptions about covariates, but he did not consider any observed covariates. If a trial includes observed covariates such as age, gender, and race, we propose new bounds using the observed covariate information under monotonicity assumptions similar to those of VanderWeele and Chiba. We compare the three bounds in a real example. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Goodman, L. S.; Salter, F. O.
1968-01-01
Digital filter suppresses the effects of nonstatistical noise bursts on data averaged over a multichannel scaler. Interposed between the sampled channels and the digital averaging system, it uses binary logic circuitry to compare the number of counts per channel with the average number of counts per channel.
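The comparison logic described above can be sketched in software. The original instrument used binary logic circuitry; the median-based threshold and Poisson-statistics assumption below are illustrative choices, not details from the record:

```python
import statistics

def filter_noise_bursts(channel_counts, threshold=3.0):
    """Reject channels whose counts deviate from the typical channel count
    by more than `threshold` standard deviations. The median serves as a
    robust typical value, and Poisson counting gives sigma ~ sqrt(median)."""
    typical = statistics.median(channel_counts)
    sigma = typical ** 0.5
    return [c for c in channel_counts if abs(c - typical) <= threshold * sigma]

counts = [100, 103, 98, 101, 97, 500, 102]   # one non-statistical burst (500)
clean = filter_noise_bursts(counts)          # burst channel is dropped before averaging
```

Using the median rather than the mean keeps a single large burst from dragging the reference value toward itself and rejecting the good channels.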
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley
2009-01-01
In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This report uses a causal inference and instrumental variables framework to examine the…
Diedrichs, Phillippa C; Lee, Christina
2010-06-01
Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers.
Beyond intent to treat (ITT): A complier average causal effect (CACE) estimation primer.
Peugh, James L; Strotman, Daniel; McGrady, Meghan; Rausch, Joseph; Kashikar-Zuck, Susmita
2017-02-01
Randomized control trials (RCTs) have long been the gold standard for allowing causal inferences to be made regarding the efficacy of a treatment under investigation, but traditional RCT data analysis perspectives do not take into account a common reality: imperfect participant compliance to treatment. Recent advances in both maximum likelihood parameter estimation and mixture modeling methodology have enabled treatment effects to be estimated, in the presence of less than ideal levels of participant compliance, via a Complier Average Causal Effect (CACE) structural equation mixture model. CACE is described in contrast to "intent to treat" (ITT), "per protocol", and "as treated" RCT data analysis perspectives. CACE model assumptions, specification, estimation, and interpretation will all be demonstrated with simulated data generated from a randomized controlled trial of cognitive-behavioral therapy for Juvenile Fibromyalgia. CACE analysis model figures, linear model equations, and Mplus estimation syntax examples are all provided. Data needed to reproduce analyses in this article are available as supplemental materials (online only) in the Appendix of this article.
Analysis of average density difference effect in a new two-lane lattice model
NASA Astrophysics Data System (ADS)
Zhang, Geng; Sun, Di-Hua; Zhao, Min; Liu, Wei-Ning; Cheng, Sen-Lin
2015-11-01
A new lattice model is proposed by taking the average density difference effect into account for two-lane traffic system according to Transportation Cyber-physical Systems. The influence of average density difference effect on the stability of traffic flow is investigated through linear stability theory and nonlinear reductive perturbation method. The linear analysis results reveal that the unstable region would be reduced by considering the average density difference effect. The nonlinear kink-antikink soliton solution derived from the mKdV equation is analyzed to describe the properties of traffic jamming transition near the critical point. Numerical simulations confirm the analytical results showing that traffic jam can be suppressed efficiently by considering the average density difference effect for two-lane traffic system.
Wang, Chi; Dominici, Francesca; Parmigiani, Giovanni; Zigler, Corwin Matthew
2015-09-01
Confounder selection and adjustment are essential elements of assessing the causal effect of an exposure or treatment in observational studies. Building upon work by Wang et al. (2012, Biometrics 68, 661-671) and Lefebvre et al. (2014, Statistics in Medicine 33, 2797-2813), we propose and evaluate a Bayesian method to estimate average causal effects in studies with a large number of potential confounders, relatively few observations, likely interactions between confounders and the exposure of interest, and uncertainty on which confounders and interaction terms should be included. Our method is applicable across all exposures and outcomes that can be handled through generalized linear models. In this general setting, estimation of the average causal effect is different from estimation of the exposure coefficient in the outcome model due to noncollapsibility. We implement a Bayesian bootstrap procedure to integrate over the distribution of potential confounders and to estimate the causal effect. Our method permits estimation of both the overall population causal effect and effects in specified subpopulations, providing clear characterization of heterogeneous exposure effects that may vary considerably across different covariate profiles. Simulation studies demonstrate that the proposed method performs well in small sample size situations with 100-150 observations and 50 covariates. The method is applied to data on 15,060 US Medicare beneficiaries diagnosed with a malignant brain tumor between 2000 and 2009 to evaluate whether surgery reduces hospital readmissions within 30 days of diagnosis.
Thermal motion in proteins: Large effects on the time-averaged interaction energies
Goethe, Martin; Rubi, J. Miguel; Fita, Ignacio
2016-03-15
As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence the interaction energies (i.e., the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. Because of these fluctuations, time-averaged interaction energies generally do not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies typically vary more smoothly with the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structures on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be increased substantially by introducing environment-specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
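The core observation that the time average of the potential differs from the potential at the average distance can be reproduced with a quick Monte Carlo sketch. The Lennard-Jones parameters, the 0.3 Å rms fluctuation, and the Gaussian fluctuation model are all illustrative assumptions, not values from the paper:

```python
import random

def lj(r, eps=0.2, sigma=3.4):
    """Lennard-Jones pair potential; eps in kcal/mol, r in angstroms.
    Parameter values are illustrative, not taken from the paper."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

random.seed(1)
r_avg, r_rms = 4.0, 0.3   # average distance and rms thermal fluctuation
samples = [lj(r_avg + random.gauss(0.0, r_rms)) for _ in range(200_000)]
e_time_avg = sum(samples) / len(samples)   # time-averaged energy <V(r)>
e_at_avg = lj(r_avg)                       # potential at the average distance V(<r>)
shift_cal_per_mol = (e_time_avg - e_at_avg) * 1000.0
# The smoothing shift is on the order of tens of cal/mol, as the abstract reports.
```

Because the potential is locally convex around its minimum, Jensen's inequality makes the time-averaged energy sit above the potential evaluated at the mean distance.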
The Better-than-Average Effect and 1 Corinthians 13: A Classroom Exercise
ERIC Educational Resources Information Center
Swenson, John Eric, III; Schneller, Gregory R.; Henderson, Joy Ann
2014-01-01
People tend to evaluate themselves more favorably than they evaluate others, a tendency that is known as the better-than-average effect (BTA effect; Alicke, 1985; Brown, 1986). In an attempt to demonstrate the concept of the BTA effect, a classroom exercise was conducted with 78 undergraduate students in an "Introduction to Psychology"…
The Dopaminergic Midbrain Mediates an Effect of Average Reward on Pavlovian Vigor.
Rigoli, Francesco; Chew, Benjamin; Dayan, Peter; Dolan, Raymond J
2016-09-01
Dopamine plays a key role in motivation. Phasic dopamine response reflects a reinforcement prediction error (RPE), whereas tonic dopamine activity is postulated to represent an average reward that mediates motivational vigor. However, it has been hard to find evidence concerning the neural encoding of average reward that is uncorrupted by influences of RPEs. We circumvented this difficulty in a novel visual search task where we measured participants' button pressing vigor in a context where information (underlying an RPE) about future average reward was provided well before the average reward itself. Despite no instrumental consequence, participants' pressing force increased for greater current average reward, consistent with a form of Pavlovian effect on motivational vigor. We recorded participants' brain activity during task performance with fMRI. Greater average reward was associated with enhanced activity in dopaminergic midbrain to a degree that correlated with the relationship between average reward and pressing vigor. Interestingly, an opposite pattern was observed in subgenual cingulate cortex, a region implicated in negative mood and motivational inhibition. These findings highlight a crucial role for dopaminergic midbrain in representing aspects of average reward and motivational vigor.
NASA Technical Reports Server (NTRS)
Merceret, Francis J.
1995-01-01
This document presents results of a field study of the effect of sheltering of wind sensors by nearby foliage on the validity of wind measurements at the Space Shuttle Landing Facility (SLF). Standard measurements are made at one second intervals from 30-feet (9.1-m) towers located 500 feet (152 m) from the SLF centerline. The centerline winds are not exactly the same as those measured by the towers. A companion study, Merceret (1995), quantifies the differences as a function of statistics of the observed winds and distance between the measurements and points of interest. This work examines the effect of nearby foliage on the accuracy of the measurements made by any one sensor, and the effects of averaging on interpretation of the measurements. The field program used logarithmically spaced portable wind towers to measure wind speed and direction over a range of conditions as a function of distance from the obstructing foliage. Appropriate statistics were computed. The results suggest that accurate measurements require foliage be cut back to OFCM standards. Analysis of averaging techniques showed that there is no significant difference between vector and scalar averages. Longer averaging periods reduce measurement error but do not otherwise change the measurement in reasonably steady flow regimes. In rapidly changing conditions, shorter averaging periods may be required to capture trends.
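The vector-versus-scalar averaging comparison can be made concrete with a short sketch; the wind samples below are invented for illustration, not taken from the SLF field data:

```python
import math

def scalar_avg(speeds):
    """Scalar average: mean of the speed magnitudes, ignoring direction."""
    return sum(speeds) / len(speeds)

def vector_avg(speeds, directions_deg):
    """Vector average: mean the u/v wind components, then take the magnitude."""
    n = len(speeds)
    u = sum(s * math.sin(math.radians(d)) for s, d in zip(speeds, directions_deg)) / n
    v = sum(s * math.cos(math.radians(d)) for s, d in zip(speeds, directions_deg)) / n
    return math.hypot(u, v)

# Reasonably steady flow: the two averages agree to well under 1%.
steady_speeds = [5.0, 5.2, 4.8, 5.1]
steady_dirs = [270.0, 272.0, 268.0, 271.0]
# Rapidly varying direction: the vector average collapses toward zero.
variable_dirs = [0.0, 90.0, 180.0, 270.0]
```

This illustrates why the study found no significant vector/scalar difference in steady regimes: the two estimates only diverge when direction varies strongly within the averaging period.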
Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin; Grimm, Andrew J; Cooper, David A; Lewin, Sharon R; Søgaard, Ole S; Rasmussen, Thomas A; Kent, Stephen J; Kelleher, Anthony D; Davenport, Miles P
2015-07-01
HIV infection can be effectively controlled by anti-retroviral therapy (ART) in most patients. However, therapy must be continued for life, because interruption of ART leads to rapid recrudescence of infection from long-lived latently infected cells. A number of approaches are currently being developed to 'purge' the reservoir of latently infected cells in order to either eliminate infection completely, or significantly delay the time to viral recrudescence after therapy interruption. A fundamental question in HIV research is how frequently the virus reactivates from latency, and thus how much the reservoir might need to be reduced to produce a prolonged antiretroviral-free HIV remission. Here we provide the first direct estimates of the frequency of viral recrudescence after ART interruption, combining data from four independent cohorts of patients undergoing treatment interruption, comprising 100 patients in total. We estimate that viral replication is initiated on average once every ≈6 days (range 5.1-7.6 days). This rate is around 24 times lower than previously thought, and is very similar across the cohorts. In addition, we analyse data on the ratios of different 'reactivation founder' viruses in a separate cohort of patients undergoing ART-interruption, and estimate the frequency of successful reactivation to be once every 3.6 days. This suggests that a reduction in the reservoir size of around 50-70-fold would be required to increase the average time-to-recrudescence to about one year, and thus achieve at least a short period of antiretroviral-free HIV remission. Our analyses suggest that time-to-recrudescence studies will need to be large in order to detect modest changes in the reservoir, and that macaque models of SIV latency may have much higher frequencies of viral recrudescence after ART interruption than seen in human HIV infection. Understanding the mean frequency of recrudescence from latency is an important first step in approaches to prolong
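The quoted 50-70-fold figure follows from simple arithmetic, under the assumption (implicit in the abstract) that the recrudescence rate scales linearly with reservoir size:

```python
# Back-of-the-envelope check of the reservoir-reduction estimate.
avg_interval_days = 6.0   # one reactivation initiated every ~6 days on ART interruption
target_days = 365.0       # desired average time to recrudescence (~one year)
# A k-fold smaller reservoir reactivates k times less often, so:
fold_reduction = target_days / avg_interval_days   # ~61-fold, inside the 50-70 range
```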
Effects of average speed enforcement on speed compliance and crashes: a review of the literature.
Soole, David W; Watson, Barry C; Fleiter, Judy J
2013-05-01
Average speed enforcement is a relatively new approach gaining popularity throughout Europe and Australia. This paper reviews the evidence regarding the impact of this approach on vehicle speeds, crash rates and a number of additional road safety and public health outcomes. The economic and practical viability of the approach as a road safety countermeasure is also explored. A literature review, with an international scope, of both published and grey literature was conducted. There is a growing body of evidence to suggest a number of road safety benefits associated with average speed enforcement, including high rates of compliance with speed limits, reductions in average and 85th percentile speeds and reduced speed variability between vehicles. Moreover, the approach has been demonstrated to be particularly effective in reducing excessive speeding behaviour. Reductions in crash rates have also been reported in association with average speed enforcement, particularly in relation to fatal and serious injury crashes. In addition, the approach has been shown to improve traffic flow, reduce vehicle emissions and has also been associated with high levels of public acceptance. Average speed enforcement offers a greater network-wide approach to managing speeds that reduces the impact of time and distance halo effects associated with other automated speed enforcement approaches. Although comparatively expensive it represents a highly reliable approach to speed enforcement that produces considerable returns on investment through reduced social and economic costs associated with crashes.
Estimation of genetic parameters for average daily gain using models with competition effects
USDA-ARS's Scientific Manuscript database
Components of variance for ADG with models including competition effects were estimated from data provided by Pig Improvement Company on 11,235 pigs from 4 selected lines of swine. Fifteen pigs with average age of 71 d were randomly assigned to a pen by line and sex and taken off test after approxi...
The Nexus between the Above-Average Effect and Cooperative Learning in the Classroom
ERIC Educational Resources Information Center
Breneiser, Jennifer E.; Monetti, David M.; Adams, Katharine S.
2012-01-01
The present study examines the above-average effect (Chambers & Windschitl, 2004; Moore & Small, 2007) in assessments of task performance. Participants completed self-estimates of performance and group estimates of performance, before and after completing a task. Participants completed a task individually and in groups. Groups were…
Optimal transformation for correcting partial volume averaging effects in magnetic resonance imaging
Soltanian-Zadeh, H. (Henry Ford Hospital, Detroit, MI); Windham, J.P.; Yagle, A.E.
1993-08-01
Segmentation of a feature of interest while correcting for partial volume averaging effects is a major tool for identification of hidden abnormalities, fast and accurate volume calculation, and three-dimensional visualization in the field of magnetic resonance imaging (MRI). The authors present the optimal transformation for simultaneous segmentation of a desired feature and correction of partial volume averaging effects, while maximizing the signal-to-noise ratio (SNR) of the desired feature. It is proved that correction of partial volume averaging effects requires the removal of the interfering features from the scene. It is also proved that correction of partial volume averaging effects can be achieved merely by a linear transformation. It is finally shown that the optimal transformation matrix is easily obtained using the Gram-Schmidt orthogonalization procedure, which is numerically stable. Applications of the technique to MRI simulation, phantom, and brain images are shown. They show that in all cases the desired feature is segmented from the interfering features and partial volume information is visualized in the resulting transformed images.
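The Gram-Schmidt step the authors rely on can be sketched in a few lines. The three signature vectors below are hypothetical stand-ins for the MRI feature signatures the method would actually use:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize the input vectors in order."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            proj = dot(w, q)                       # component of w along q
            w = [wi - proj * qi for wi, qi in zip(w, q)]
        norm = dot(w, w) ** 0.5
        if norm > 1e-12:                           # skip linearly dependent inputs
            basis.append([wi / norm for wi in w])
    return basis

# Interfering-feature signatures first, desired-feature signature last: the
# final basis vector is orthogonal to every interfering signature, so
# projecting voxel vectors onto it suppresses the interfering features.
signatures = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
basis = gram_schmidt(signatures)
```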
ERIC Educational Resources Information Center
Wicherts, Jelte M.; Dolan, Conor V.; Carlson, Jerry S.; van der Maas, Han L. J.
2010-01-01
This paper presents a systematic review of published data on the performance of sub-Saharan Africans on Raven's Progressive Matrices. The specific goals were to estimate the average level of performance, to study the Flynn Effect in African samples, and to examine the psychometric meaning of Raven's test scores as measures of general intelligence.…
The effect of surface roughness on the average film thickness between lubricated rollers
NASA Technical Reports Server (NTRS)
Chow, L. S. H.; Cheng, H. S.
1976-01-01
The Christensen theory of stochastic models for hydrodynamic lubrication of rough surfaces is extended to elastohydrodynamic lubrication between two rollers. The Grubin-type equation including asperity effects in the inlet region is derived. Solutions for the reduced pressure at the entrance as a function of the ratio of the average nominal film thickness to the rms surface roughness (in terms of standard deviation), have been obtained numerically. Results were obtained for purely transverse and purely longitudinal surface roughness for cases with or without slip. The reduced pressure is shown to decrease slightly by considering longitudinal surface roughness. Transverse surface roughness has a slight beneficial effect on the average film thickness at the inlet. The same approach was used to study the effect of surface roughness on lubrication between rigid rollers and lubrication of an infinitely-wide slider bearing. The effects of surface roughness are shown to be similar to those found in elastohydrodynamic contacts.
Phillips, Erika; Krawitz, Brian D.; Garg, Reena; Salim, Sarwat; Geyman, Lawrence S.; Efstathiadis, Eleni; Carroll, Joseph; Rosen, Richard B.; Chui, Toco Y. P.
2017-01-01
Objectives: To assess the effect of image registration and averaging on the visualization and quantification of the radial peripapillary capillary (RPC) network on optical coherence tomography angiography (OCTA). Methods: Twenty-two healthy controls were imaged with a commercial OCTA system (AngioVue, Optovue, Inc.). Ten 10x10° scans of the optic disc were obtained, and the most superficial layer (50-μm slab extending from the inner limiting membrane) was extracted for analysis. Rigid registration was achieved using ImageJ, and averaging of each 2 to 10 frames was performed in five ~2x2° regions of interest (ROI) located 1° from the optic disc margin. The ROI were automatically skeletonized. Signal-to-noise ratio (SNR), number of endpoints and mean capillary length from the skeleton, capillary density, and mean intercapillary distance (ICD) were measured for the reference and each averaged ROI. Repeated measures analysis of variance was used to assess statistical significance. Three patients with primary open angle glaucoma were also imaged to compare RPC density to controls. Results: Qualitatively, vessels appeared smoother and closer to histologic descriptions with increasing number of averaged frames. Quantitatively, number of endpoints decreased by 51%, and SNR, mean capillary length, capillary density, and ICD increased by 44%, 91%, 11%, and 4.5% from single frame to 10-frame averaged, respectively. The 10-frame averaged images from the glaucomatous eyes revealed decreased density correlating to visual field defects and retinal nerve fiber layer thinning. Conclusions: OCTA image registration and averaging is a viable and accessible method to enhance the visualization of RPCs, with significant improvements in image quality and RPC quantitative parameters. With this technique, we will be able to non-invasively and reliably study RPC involvement in diseases such as glaucoma. PMID:28068370
A second-order closure model for the effect of averaging time on turbulent plume dispersion
Sykes, R.I.; Gabruk, R.S.
1996-12-31
Turbulent dispersion in the atmosphere is a result of chaotic advection by a wide spectrum of eddy motions. In general, the larger scale motions behave like a time-dependent, spatially inhomogeneous mean wind and produce coherent meandering of a pollutant cloud or plume, while the smaller scale motions act to diffuse the pollutant and mix it with the ambient air. The distinction between the two types of motion is dependent on both the sampling procedure and the scale of the pollutant cloud. For the case of a continuous plume of material, the duration of the sampling time (the time average period) determines the effective size of the plume. The objective is the development of a practical scheme for representing the effect of time-averaging on plume width. The model must describe relative dispersion in the limit of short-term averages, and give the absolute, or ensemble, dispersion rate for long-term sampling. The authors shall generalize the second-order closure ensemble dispersion model of Sykes et al. to include the effect of time-averaging, so they first briefly review the basic model.
NASA Astrophysics Data System (ADS)
Kuang, Hua; Xu, Zhi-Peng; Li, Xing-Li; Lo, Siu-Ming
2017-04-01
In this paper, an extended car-following model is proposed to simulate traffic flow by considering average headway of preceding vehicles group in intelligent transportation systems environment. The stability condition of this model is obtained by using the linear stability analysis. The phase diagram can be divided into three regions classified as the stable, the metastable and the unstable ones. The theoretical result shows that the average headway plays an important role in improving the stabilization of traffic system. The mKdV equation near the critical point is derived to describe the evolution properties of traffic density waves by applying the reductive perturbation method. Furthermore, through the simulation of space-time evolution of the vehicle headway, it is shown that the traffic jam can be suppressed efficiently with taking into account the average headway effect, and the analytical result is consistent with the simulation one.
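A minimal sketch of how an average-headway term might enter such a car-following model, using the Bando optimal-velocity form. The functional form, the weight p on the average-headway term, and all parameter values are illustrative assumptions rather than the paper's exact model:

```python
import math

def optimal_velocity(h, v_max=2.0, h_c=4.0):
    """Bando-type optimal velocity as a function of headway h (parameters illustrative)."""
    return (v_max / 2.0) * (math.tanh(h - h_c) + math.tanh(h_c))

def acceleration(v_n, headways_ahead, kappa=0.4, p=0.3):
    """Acceleration of car n: relax toward a target velocity built from its own
    headway and, with weight p, the average headway of the preceding group."""
    own = headways_ahead[0]                       # headway to the car directly ahead
    avg = sum(headways_ahead) / len(headways_ahead)
    target = (1.0 - p) * optimal_velocity(own) + p * optimal_velocity(avg)
    return kappa * (target - v_n)
```

The averaged term lets a vehicle respond to the state of the whole preceding group rather than only its immediate leader, which is the mechanism the stability analysis credits with suppressing jams.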
NASA Astrophysics Data System (ADS)
Chakraborty, Mousumi; Bawuah, Prince; Tan, Nicholas; Ervasti, Tuomas; Pääkkönen, Pertti; Zeitler, J. Axel; Ketolainen, Jarkko; Peiponen, Kai-Erik
2016-08-01
In this paper, we have studied terahertz (THz) pulse time delay of porous pharmaceutical microcrystalline compacts and also pharmaceutical tablets that contain indomethacin (painkiller) as an active pharmaceutical ingredient (API) and microcrystalline cellulose as the matrix of the tablet. The porosity of a pharmaceutical tablet is important because it affects the release of the drug substance. In addition, the surface roughness of the tablet has much importance regarding dissolution of the tablet and hence the rate of drug release. Here, we show, using a training set of tablets containing API with a priori known quality parameters, that the effective refractive index (obtained from THz time delay data) of such porous tablets correlates with the average surface roughness of a tablet. Hence, THz pulse time delay measurement in the transmission mode provides information on both the porosity and the average surface roughness of a compact. This is demonstrated for two different sets of pharmaceutical tablets having different porosity and average surface roughness values.
Colorectal cancer outcomes and treatment patterns in patients too young for average-risk screening.
Abdelsattar, Zaid M; Wong, Sandra L; Regenbogen, Scott E; Jomaa, Diana M; Hardiman, Karin M; Hendren, Samantha
2016-03-15
Although colorectal cancer (CRC) screening guidelines recommend initiating screening at age 50 years, the percentage of cancer cases in younger patients is increasing. To the authors' knowledge, the national treatment patterns and outcomes of these patients are largely unknown. The current study was a population-based, retrospective cohort study of the nationally representative Surveillance, Epidemiology, and End Results registry for patients diagnosed with CRC from 1998 through 2011. Patients were categorized as being younger or older than the recommended screening age. Differences with regard to stage of disease at diagnosis, patterns of therapy, and disease-specific survival were compared between age groups using multinomial regression, multiple regression, Cox proportional hazards regression, and Weibull survival analysis. Of 258,024 patients with CRC, 37,847 (15%) were aged <50 years. Young patients were more likely to present with regional (relative risk ratio, 1.3; P<.001) or distant (relative risk ratio, 1.5; P<.001) disease. Patients with CRC with distant metastasis in the younger age group were more likely to receive surgical therapy for their primary tumor (adjusted probability: 72% vs 63%; P<.001), and radiotherapy also was more likely in younger patients with CRC (adjusted probability: 53% vs 48%; P<.001). Patients younger than the recommended screening age had better overall disease-specific survival (hazards ratio, 0.77; P<.001), despite a larger percentage of these individuals presenting with advanced disease. Patients with CRC diagnosed at age <50 years are more likely to present with advanced-stage disease. However, they receive more aggressive therapy and achieve longer disease-specific survival, despite the greater percentage of patients with advanced-stage disease. These findings suggest the need for improved risk assessment and screening decisions for younger adults. © 2016 American Cancer Society.
Coyle, Thomas R; Rindermann, Heiner; Hancock, Dale
2016-10-01
Cognitive ability stimulates economic productivity. However, the effects of cognitive ability may be stronger in free and open economies, where competition rewards merit and achievement. To test this hypothesis, ability levels of intellectual classes (top 5%) and average classes (country averages) were estimated using international student assessments (Programme for International Student Assessment; Trends in International Mathematics and Science Study; and Progress in International Reading Literacy Study) (N = 99 countries). The ability levels were correlated with indicators of economic freedom (Fraser Institute), scientific achievement (patent rates), innovation (Global Innovation Index), competitiveness (Global Competitiveness Index), and wealth (gross domestic product). Ability levels of intellectual and average classes strongly predicted all economic criteria. In addition, economic freedom moderated the effects of cognitive ability (for both classes), with stronger effects at higher levels of freedom. Effects were particularly robust for scientific achievements when the full range of freedom was analyzed. The results support cognitive capitalism theory: cognitive ability stimulates economic productivity, and its effects are enhanced by economic freedom. © The Author(s) 2016.
Xing, Guangzhen; Yang, Ping; He, Longbiao; Feng, Xiujuan
2016-09-01
The purpose of this work was to improve the existing models that allow spatial averaging effects of piezoelectric hydrophones to be accounted for. The model derived in the present study is valid for a planar source and was verified using transducers operating at 5 and 20 MHz. It is based on the Fresnel approximation and enables corrections for both on-axis and off-axis measurements. A single-integral approximate formula for the axial acoustic pressure was derived, and the validity of the Fresnel approximation in the near field of the planar transducer was examined. The numerical results obtained using 5 and 20 MHz planar transmitters with an effective diameter of 12.7 mm showed that the derived model could account for spatial averaging effects to within 0.2% of Beissner's exact integral (Beissner, 1981), for k(a+b)² ≫ π (where k is the circular wavenumber, and a and b are the effective radii of the transmitter and hydrophone, respectively). The field distributions along the acoustic axis and the beam directivity patterns are also included in the model. The spatial averaging effects of the hydrophone were generally observed to cause underestimation of the absolute pressure amplitudes of the acoustic beam, and overestimation of the cross-sectional size of the beam directivity pattern. However, the cross-sectional size of the directivity pattern was also found to be underestimated in the "far zone" (beyond Y₀ = a²/λ) of the transmitter. The results of this study indicate that the spatial averaging effect on the beam directivity pattern is negligible for π(γ² + 4γ)s ≪ 1 (where γ = b/a, and s is the normalized distance to the planar transducer).
Is Scientifically Based Reading Instruction Effective for Students with Below-Average IQs?
ERIC Educational Resources Information Center
Allor, Jill H.; Mathes, Patricia G.; Roberts, J. Kyle; Cheatham, Jennifer P.; Al Otaiba, Stephanie
2014-01-01
This longitudinal randomized-control trial investigated the effectiveness of scientifically based reading instruction for students with IQs ranging from 40 to 80, including students with intellectual disability (ID). Students were randomly assigned into treatment (n = 76) and contrast (n = 65) groups. Students in the treatment group received…
Comparing effects in spike-triggered averages of rectified EMG across different behaviors
Davidson, Adam G.; O’Dell, Ryan; Chan, Vanessa; Schieber, Marc H.
2007-01-01
Effects in spike-triggered averages (SpikeTAs) of rectified electromyographic activity (EMG) compiled for the same neuron-muscle pair during various behaviors often appear different. Do these differences represent significant changes in the effect of the neuron on the muscle activity? Quantitative comparison of such differences has been limited by two methodological problems, which we address here. First, although the linear baseline trend of many SpikeTAs can be adjusted with ramp subtraction, the curvilinear baseline trend of other SpikeTAs cannot. To address this problem, we estimated baseline trends using a form of moving average. Artificial triggers were created in 1 ms increments from 40 ms before to 40 ms after each spike used to compile the SpikeTA. These 81 triggers were used to compile another average of rectified EMG, which we call a single-spike increment shifted average (single-spike ISA). Single-spike ISAs were averaged to produce an overall ISA, which captured slow trends in the baseline EMG while distributing any spike-locked features evenly throughout the 80 ms analysis window. The overall ISA then was subtracted from the initial SpikeTA, removing any slow baseline trends for more accurate measurement of SpikeTA effects. Second, the measured amplitude and temporal characteristics of SpikeTA effects produced by the same neuron-muscle pair may vary during different behaviors. But whether or not such variation is significant has been difficult to ascertain. We therefore applied a multiple fragment approach to permit statistical comparison of the measured features of SpikeTA effects for the same neuron-muscle pair during different behavioral epochs. Spike trains recorded in each task were divided into non-overlapping fragments of 100 spikes each, and a separate, ISA-corrected, SpikeTA was compiled for each fragment. Measurements made on these fragment SpikeTAs then were used as test statistics for comparison of peak percent increase, mean percent
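The ISA baseline correction described above can be sketched in a few lines. This is a minimal reconstruction under stated assumptions (1 kHz sampling, so 1 sample = 1 ms; spike times given as sample indices), not the authors' implementation.

```python
import numpy as np

def spike_ta(emg, spikes, pre, post):
    """Average of rectified-EMG segments around each trigger sample."""
    segs = [emg[s - pre:s + post] for s in spikes
            if s - pre >= 0 and s + post <= len(emg)]
    return np.mean(segs, axis=0)

def isa_corrected_spike_ta(emg, spikes, pre, post, shift=40):
    """SpikeTA minus the increment-shifted average (ISA) baseline.
    Artificial triggers at every 1-sample offset from -shift to +shift
    (81 triggers for shift=40, i.e. +/-40 ms at 1 kHz) give shifted
    averages whose mean captures slow baseline trends while smearing
    spike-locked features evenly across the analysis window."""
    emg = np.abs(np.asarray(emg, dtype=float))   # rectify
    sta = spike_ta(emg, spikes, pre, post)
    isa = np.mean([spike_ta(emg, [s + d for s in spikes], pre, post)
                   for d in range(-shift, shift + 1)], axis=0)
    return sta - isa

# Demo: a pure linear drift with no spike-locked feature should be
# removed entirely by the ISA subtraction (values are illustrative).
drift = np.linspace(0.0, 1.0, 2000)              # 2 s at 1 kHz
corrected = isa_corrected_spike_ta(drift, [200, 500, 800, 1100], 30, 30)
```

For a signal containing only a slow drift and no spike-locked feature, the corrected SpikeTA is flat at zero, which is exactly the behavior the ISA subtraction is designed to produce.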
Effects of stream-associated fluctuations upon the radial variation of average solar-wind parameters
NASA Technical Reports Server (NTRS)
Goldstein, B. E.; Jokipii, J. R.
1976-01-01
The effects of nonlinear fluctuations due to solar wind streams upon radial gradients of average solar wind parameters are computed using a numerical MHD model, for both spherically symmetric time-dependent and corotating equatorial flow approximations. Significant effects of correlations between fluctuations are found upon the gradients of the azimuthal magnetic field, radial velocity, density, and azimuthal velocity. Between 400 and 900 solar radii, stream interactions have transferred the major portion of the angular momentum flux to the magnetic field; at even greater distances the plasma again carries the bulk of the angular momentum flux. The average azimuthal component of the magnetic field may decrease as much as 10% faster than the Archimedean spiral out to 6 AU due to stream interactions, but this result is dependent upon inner boundary conditions.
Luciano, Margaret M; Mathieu, John E; Ruddy, Thomas M
2014-03-01
External leaders continue to be an important source of influence even when teams are empowered, but it is not always clear how they do so. Extending research on structurally empowered teams, we recognize that teams' external leaders are often responsible for multiple teams. We adopt a multilevel approach to model external leader influences at both the team level and the external leader level of analysis. In doing so, we distinguish the influence of general external leader behaviors (i.e., average external leadership) from those that are directed differently toward the teams that they lead (i.e., relative external leadership). Analysis of data collected from 451 individuals, in 101 teams, reporting to 25 external leaders, revealed that both relative and average external leadership related positively to team empowerment. In turn, team empowerment related positively to team performance and member job satisfaction. However, while the indirect effects were all positive, we found that relative external leadership was not directly related to team performance, and average external leadership evidenced a significant negative direct influence. Additionally, relative external leadership exhibited a significant direct positive influence on member job satisfaction as anticipated, whereas average external leadership did not. These findings attest to the value in distinguishing external leaders' behaviors that are exhibited consistently versus differentially across empowered teams. Implications and future directions for the study and management of external leaders overseeing multiple teams are discussed.
Janesko, Benjamin G; Scuseria, Gustavo E
2006-09-28
We present a model for electromagnetic enhancements in surface enhanced Raman optical activity (SEROA) spectroscopy. The model extends previous treatments of SEROA to substrates, such as metal nanoparticles in solution, that are orientationally averaged with respect to the laboratory frame. Our theoretical treatment combines analytical expressions for unenhanced Raman optical activity with molecular polarizability tensors that are dressed by the substrate's electromagnetic enhancements. We evaluate enhancements from model substrates to determine preliminary scaling laws and selection rules for SEROA. We find that dipolar substrates enhance Raman optical activity (ROA) scattering less than Raman scattering. Evanescent gradient contributions to orientationally averaged ROA scale to first or higher orders in the gradient of the incident plane-wave field. These evanescent gradient contributions may be large for substrates with quadrupolar responses to the plane-wave field gradient. Some substrates may also show a ROA contribution that depends only on the molecular electric dipole-electric dipole polarizability. These conclusions are illustrated via numerical calculations of surface enhanced Raman and ROA spectra from (R)-(-)-bromochlorofluoromethane on various model substrates.
Lammel, Gerhard
2004-01-01
To investigate the justification of time-averaging climate parameters in multicompartment modelling, the effects of various climate parameters and different modes of entry on the predicted substances' total environmental burdens and compartmental fractions were studied. A simple, non-steady-state, zero-dimensional (box) mass-balance model of intercompartmental mass exchange comprising four compartments was used for this purpose. Two runs were performed in each case: one temporally unresolved (time-averaged conditions) and one time-resolved (hourly or finer) control run. In many cases significant discrepancies are predicted, depending on the substance and on the parameter. We find discrepancies exceeding 10% relative to the control run, and up to an order of magnitude for prediction of the total environmental burden, from neglecting seasonalities of the soil and ocean temperatures and the hydroxyl radical concentration in the atmosphere, and diurnalities of the atmospheric mixing depth and the hydroxyl radical concentration in the atmosphere. Under some conditions substance sensitivity could be explained by the magnitude of the sink terms in the compartment(s) whose parameters were varied. In general, however, no key for understanding substance sensitivity seems to be linked in an easy manner to the properties of the substance, to the fractions of its burden, or to the sink terms in any of the compartments whose parameters were varied. Averaging of diurnal variability was found to cause errors of total environmental residence time of different sign for different substances. The effects of time-averaging of several parameters are in general not additive; both synergistic and compensatory effects occur. An implication of these findings is that the ranking of substances according to persistence is sensitive to time resolution on the scale of hours to months. As a conclusion it is recommended to use high temporal resolution in multi
NASA Astrophysics Data System (ADS)
Guo, Xiaofeng; Yang, Ting; Sun, Yele
2015-08-01
Based on observations at the heights of 140 and 280 m on the Beijing 325-m meteorological tower, this study presents an assessment of the averaging period effects on eddy-covariance measurements of the momentum/scalar flux and transport efficiency during wintertime haze pollution. The study period, namely from January 6 to February 28, 2013, is divided into different episodes of particulate pollution, as featured by varied amounts of the turbulent exchange and conditions of the atmospheric stability. Overall, turbulent fluxes of the momentum and scalars (heat, water vapor, and CO2) increase with the averaging period, namely from 5, 15, and 30 up to 60 min, an outcome most evident during the `transient' episodes (each lasting for 2-3 days, i.e., preceded and followed by clean-air days with mean concentrations of PM1 less than 40 μg m-3). The conventional choice of 30 min is deemed to be appropriate for calculating the momentum flux and its transport efficiency. By comparison, scalar fluxes and their transport efficiencies appear more sensitive to the choice of an averaging period, particularly at the upper level (i.e., 280 m). It is proposed that, for urban environments, calculating the momentum and scalar fluxes could invoke separate averaging periods, rather than relying on a single prescription (e.g., 30 min). Furthermore, certain characteristics of urban turbulence are found less sensitive to the choice of an averaging period, such as the relationship between the heat-to-momentum transport efficiency and the local stability parameter.
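The dependence of an eddy-covariance flux on the averaging period comes from Reynolds block averaging: fluctuations are defined relative to each block's mean, so longer blocks retain more low-frequency covariance. A minimal sketch under assumed conditions (1 Hz sampling and a synthetic slow oscillation, not the tower data):

```python
import numpy as np

def block_flux(w, c, fs_hz, period_s):
    """Eddy-covariance flux <w'c'> with Reynolds block averaging:
    split the series into non-overlapping blocks of `period_s` seconds,
    take fluctuations about each block mean, and average the
    per-block covariances."""
    n = int(fs_hz * period_s)
    covs = []
    for i in range(len(w) // n):
        wb, cb = w[i*n:(i+1)*n], c[i*n:(i+1)*n]
        covs.append(np.mean((wb - wb.mean()) * (cb - cb.mean())))
    return float(np.mean(covs))

# Synthetic 2-h series at 1 Hz: a slow 20-min oscillation common to w and c.
t = np.arange(7200)
s = np.sin(2.0 * np.pi * t / 1200.0)
flux_5min = block_flux(s, s, fs_hz=1.0, period_s=300.0)
flux_60min = block_flux(s, s, fs_hz=1.0, period_s=3600.0)
```

The 60-min average retains the slow covariance that 5-min block means remove, so `flux_60min` exceeds `flux_5min`: the same mechanism by which the measured fluxes above grow with the averaging period.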
Fractional averaging of repetitive waveforms induced by self-imaging effects
NASA Astrophysics Data System (ADS)
Romero Cortés, Luis; Maram, Reza; Azaña, José
2015-10-01
We report the theoretical prediction and experimental observation of averaging of stochastic events with an equivalent result of calculating the arithmetic mean (or sum) of a rational number of realizations of the process under test, not necessarily limited to an integer record of realizations, as discrete statistical theory dictates. This concept is enabled by a passive amplification process, induced by self-imaging (Talbot) effects. In the specific implementation reported here, a combined spectral-temporal Talbot operation is shown to achieve undistorted, lossless repetition-rate division of a periodic train of noisy waveforms by a rational factor, leading to local amplification, and the associated averaging process, by the fractional rate-division factor.
Mental health care and average happiness: strong effect in developed nations.
Touburg, Giorgio; Veenhoven, Ruut
2015-07-01
Mental disorder is a main cause of unhappiness in modern society and investment in mental health care is therefore likely to add to average happiness. This prediction was checked in a comparison of 143 nations around 2005. Absolute investment in mental health care was measured using the per capita number of psychiatrists and psychologists working in mental health care. Relative investment was measured using the share of mental health care in the total health budget. Average happiness in nations was measured with responses to survey questions about life-satisfaction. Average happiness appeared to be higher in countries that invest more in mental health care, both absolutely and relative to investment in somatic medicine. A data split by level of development shows that this difference exists only among developed nations. Among these nations the link between mental health care and happiness is quite strong, both in an absolute sense and compared to other known societal determinants of happiness. The correlation between happiness and share of mental health care in the total health budget is twice as strong as the correlation between happiness and size of the health budget. A causal effect is likely, but cannot be proved in this cross-sectional analysis.
Effects of an absorbing boundary on the average volume visited by N spherical Brownian particles
NASA Astrophysics Data System (ADS)
Larralde, Hernan; M. Berezhkovskii, Alexander; Weiss, George H.
2003-12-01
The number of distinct sites visited by a lattice random walk, and its continuum analog, the volume swept out by a diffusing spherical particle, are used to model different phenomena in physics, chemistry, and biology. Therefore the problem of finding statistical properties of these random variables is of importance. There have been several studies of the more general problem of the volume of the region explored by N random walks or Brownian particles in an unbounded space. We here study the effects of a planar absorbing boundary on the average of this volume. The boundary breaks the translational invariance of the space, and introduces an additional spatial parameter, the initial distance of the Brownian particles from the surface. We derive expressions for the average volume visited in three dimensions and the average span in one dimension as functions of the time for given values of the initial distance to the absorbing boundary and N. The results can be transformed to those for N lattice random walks by appropriately choosing the radius and diffusion constant of the spheres.
Effects of surface roughness on the average heat transfer of an impinging air jet
Beitelmal, A.H.; Saad, M.A.; Patel, C.D.
2000-01-01
Localized cooling by impinging flow has been used in many industrial applications such as in cooling of gas turbine blades and drying processes. Here, the effect of surface roughness of a uniformly heated plate on the average heat transfer characteristics of an impinging air jet was experimentally investigated. Two aluminum plates were fabricated, one with a flat surface and the second with roughness added to the surface. The roughness took the shape of a circular array of protrusions of 0.5 mm base and 0.5 mm height. A circular Kapton heater of the same diameter as the plates (70 mm) supplied the necessary power. The surfaces of the plates were polished to reduce radiation heat losses, and the back and sides were insulated to reduce conduction heat losses. Temperatures were measured over a Reynolds number range from 9,600 to 38,500 based on flow rate through a 6.85 mm diameter nozzle. The temperature measurements were repeated for nozzle exit-to-plate spacing, z/d, ranging from 1 to 10. The average Nusselt number for both cases was plotted versus the Reynolds number and their functional correlation was determined. The results indicate an increase of up to 6.0% of the average Nusselt number due to surface roughness. This modest increase provides evidence to encourage further investigation and characterization of the surface roughness as a parameter for enhancing heat transfer.
Stafford, P. J.; Cooper, J.; de Bono, D. P.; Vincent, R.; Garratt, C. J.
1995-01-01
OBJECTIVE--To investigate the effects of low dose sotalol on the signal averaged surface P wave in patients with paroxysmal atrial fibrillation. DESIGN--A longitudinal within patient crossover study. SETTING--Cardiac departments of a regional cardiothoracic centre and a district general hospital. PATIENTS--Sixteen patients with documented paroxysmal atrial fibrillation. The median (range) age of the patients was 65.5 (36-70) years; 11 were men. MAIN OUTCOME MEASURES--Analysis of the signal averaged P wave recorded from patients not receiving antiarrhythmic medication and after 4-6 weeks' treatment with sotalol. P wave limits were defined automatically by a computer algorithm. Filtered P wave duration and energies contained in frequency bands from 20, 30, 40, 60, and 80 to 150 Hz of the P wave spectrum expressed as absolute values (P20, P30, etc) and as ratios of high to low frequency energy (PR20, PR30, etc) were measured. RESULTS--No difference in P wave duration was observed between the groups studied (mean (SEM) 149 (4) without medication and 152 (3) ms with sotalol). Significant decreases in high frequency P wave energy (for example P60: 4.3 (0.4) v 3.3 (0.3) microV2.s, P = 0.003) and energy ratio (PR60: 5.6 (0.5) v 4.7 (0.6), P = 0.03) were observed during sotalol treatment. These changes were independent of heart rate. CONCLUSIONS--Treatment with low dose sotalol reduces high frequency P wave energy but does not change P wave duration. These results are consistent with the class III effect of the drug and suggest that signal averaging of the surface P wave may be a useful non-invasive measure of drug induced changes in atrial electrophysiology. PMID:8541169
Time-averages for Plane Travelling Waves—The Effect of Attenuation: I, Adiabatic Approach
NASA Astrophysics Data System (ADS)
Makarov, S. N.
1993-05-01
The analysis of the effect of attenuation on the time-averages for a plane travelling wave is presented. The barotropic equation of state is considered: i.e., acoustic heating is assumed to be negligible. The problem statement consists of calculating means in a finite region bounded by a transducer surface as well as by a perfectly absorbing surface, respectively. Although the simple wave approximation cannot be used throughout the field it is still valid near the perfect absorber. The result for radiation pressure is different from the conclusions given previously by Beyer and Livett, Emery and Leeman.
NASA Astrophysics Data System (ADS)
Zha, Guofeng; Wang, Hongqiang; Cheng, Yongqiang; Qin, Yuliang
2016-03-01
For analyzing the three-dimensional (3D) spatial resolving performance of Multi-Transmitter Single-Receiver (MTSR) array radar with stochastic signals, the spatial average ambiguity function (SAAF) is introduced, and its analytic expression is derived. To analyze the effects of array geometry, comparisons are implemented for three typical array geometries: circular, decussate, and planar configurations. Simulated results illustrate that the spatial resolving performance of the circular array is better than that of the others. Furthermore, it is shown that the array aperture size and the target's radial range are the main factors impacting the resolving performance.
The effect of three-dimensional fields on bounce averaged particle drifts in a tokamak
Hegna, C. C.
2015-07-15
The impact of applied 3D magnetic fields on the bounce-averaged precessional drifts in a tokamak plasma is calculated. Local 3D MHD equilibrium theory is used to construct solutions to the equilibrium equations in the vicinity of a magnetic surface for a large aspect ratio circular tokamak perturbed by applied 3D fields. Due to modulations of the local shear caused by near-resonant Pfirsch-Schlüter currents, relatively weak applied 3D fields can have a large effect on trapped particle precessional drifts.
Effective Block-Scale Dispersion and Its Self-Averaging Behavior in Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
de Barros, Felipe; Dentz, Marco
2015-04-01
Upscaled (effective) dispersion coefficients in spatially heterogeneous flow fields must (1) account for the sub-scale variability that is filtered out by homogenization and (2) be modeled as a random function to incorporate the uncertainty associated with non-ergodic solute bodies. In this study, we use the framework developed in de Barros and Rubin (2011) [de Barros F.P.J. and Rubin Y., Modelling of block-scale macrodispersion as a random function. Journal of Fluid Mechanics 676 (2011): 514-545] to develop novel semi-analytical expressions for the first two statistical moments of the block-effective dispersion coefficients in three-dimensional spatially random flow fields as a function of the key characteristic length scales defining the transport problem. The derived expressions are based on perturbation theory and limited to weak-to-mild heterogeneity and uniform-in-the-mean steady state flow fields. The semi-analytical solutions provide physical insights of the main controlling factors influencing the temporal scaling of the dispersion coefficient of the solute body and its self-averaging dispersion behavior. Our results illustrate the relevance of the joint influence of the block-scale and local-scale dispersion in diminishing the macrodispersion variance under non-ergodic conditions. The impact of the statistical anisotropy ratio in the block-effective macrodispersion self-averaging behavior is also investigated. The analysis performed in this work has implications in numerical modeling and grid design.
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, another approach computes the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
The Treatment Effect of Grade Repetitions
ERIC Educational Resources Information Center
Mahjoub, Mohamed-Badrane
2017-01-01
This paper estimates the treatment effect of grade repetitions in French junior high schools, using a value-added test score as outcome and quarter of birth as instrument. With linear two-stage least squares, local average treatment effect is estimated at around 1.6 times the standard deviation of the achievement gain. With non-linear…
Catalogue of averaged stellar effective magnetic fields. I. Chemically peculiar A and B type stars
NASA Astrophysics Data System (ADS)
Bychkov, V. D.; Bychkova, L. V.; Madej, J.
2003-08-01
This paper presents the catalogue and the method of determination of averaged quadratic effective magnetic fields < B_e > for 596 main sequence and giant stars. The catalogue is based on measurements of the stellar effective (or mean longitudinal) magnetic field strengths B_e, which were compiled from the existing literature. We analysed the properties of 352 chemically peculiar A and B stars in the catalogue, including Am, ApSi, He-weak, He-rich, HgMn, ApSrCrEu, and all ApSr type stars. We have found that the number distribution of all chemically peculiar (CP) stars vs. averaged magnetic field strength is described by a decreasing exponential function. Relations of this type hold also for stars of all the analysed subclasses of chemical peculiarity. The exponential form of the above distribution function can break down below about 100 G, the latter value representing approximately the resolution of our analysis for A type stars. Table A.1 and its references are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/407/631 and Tables 3 to 9 are only available in electronic form at http://www.edpsciences.org
Ren, Dongxu; Zhao, Huiying; Zhang, Chupeng; Yuan, Daocheng; Xi, Jianpu; Zhu, Xueliang; Ban, Xinxing; Dong, Longchao; Gu, Yawen; Jiang, Chunye
2016-01-01
A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect that periodically superposes the light intensity of different locations of pitches in the mask to make a consistent energy distribution at a specific wavelength, from which the accuracy of a linear scale can be improved precisely using the average pitch with different step distances. The method’s theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the repeat exposure number of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the effectiveness of the multi-repeated photolithography method is confirmed to easily realize a pitch accuracy of 43 nm in any 10 locations of 1 m, and the whole length accuracy of the linear scale is less than 1 µm/m. PMID:27089348
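The cancellation behind the average homogenization effect can be illustrated with a toy model: a sine-wave pitch error superposed at N exposure positions stepped by equal fractions of the error period sums to zero in the average. The names and numbers below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Toy model (assumed, simplified): each exposure contributes the ideal
# grating intensity plus a periodic mask error; stepping the error phase
# by equal fractions of its period over N repeat exposures cancels it.
x = np.linspace(0.0, 10.0, 1000)     # position along the scale (pitch units)
ideal = np.cos(2.0 * np.pi * x)      # ideal grating intensity profile
err_period = 2.0                     # period of the sine-wave mask error

def exposure(shift):
    """One exposure: ideal profile plus a 5% periodic mask error."""
    return ideal + 0.05 * np.sin(2.0 * np.pi * (x + shift) / err_period)

N = 8                                # repeat-exposure count
averaged = np.mean([exposure(k * err_period / N) for k in range(N)], axis=0)
residual = float(np.max(np.abs(averaged - ideal)))   # error left after averaging
single = float(np.max(np.abs(exposure(0.0) - ideal)))  # error of one exposure
```

Because the N shifted sine errors are equally spaced in phase over one period, their sum vanishes identically, so the residual of the averaged profile drops to numerical noise while a single exposure retains the full 5% error.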
The effect of aperture averaging upon tropospheric delay fluctuations seen with a DSN antenna
NASA Technical Reports Server (NTRS)
Linfield, R.
1996-01-01
The spectrum of tropospheric delay fluctuations expected for a DSN antenna at time scales less than 100 s has been calculated. A new feature included in these calculations is the effect of aperture averaging, which causes a reduction in delay fluctuations on time scales less than the antenna wind speed crossing time (approximately 5-10 s). On time scales less than a few seconds, the Allan deviation varies as σ_y(Δt) ∝ (Δt)^(+1), rather than σ_y(Δt) ∝ (Δt)^(-1/6) as without aperture averaging. Due to thermal radiometer noise, calibration of tropospheric delay fluctuations with water vapor radiometers will not be possible on time scales less than approximately 10 s. However, the tropospheric fluctuation level will be small enough that radio science measurements with a spacecraft on time scales less than a few seconds will be limited by the stability of frequency standards and/or other nontropospheric effects.
Effects of turbulence on average refraction angles in occultations by planetary atmospheres
NASA Technical Reports Server (NTRS)
Eshleman, V. R.; Haugstad, B. S.
1978-01-01
Four separable effects of atmospheric turbulence on average refraction angles in occultation experiments are derived from a simplified analysis, and related to more general formulations by B. S. Haugstad. The major contributors are shown to be due to gradients in height of the strength of the turbulence, and the sense of the resulting changes in refraction angles is explained in terms of Fermat's principle. Because the results of analyses of such gradient effects by W. B. Hubbard and J. R. Jokipii are expressed in other ways, a special effort is made to compare all of the predictions on a common basis. We conclude that there are fundamental differences, and use arguments based on energy conservation and Fermat's principle to help characterize the discrepancies.
NASA Astrophysics Data System (ADS)
Nitta, Junsaku; Bergsten, Tobias
2008-03-01
Time reversal symmetric Al'tshuler-Aronov-Spivak (AAS) oscillations are measured in an array of InGaAs mesoscopic loops. We confirm that the gate voltage dependence of h/2e-period oscillations is due to spin interference from the effect of ensemble average on the AAS and Aharonov-Bohm (AB) amplitudes. This spin interference is based on the time reversal Aharonov-Casher (AC) effect. The AC interference oscillations are controlled over several periods. This result shows evidence for electrical manipulation of the spin precession angle in an InGaAs two-dimensional electron gas channel. We control the precession rate in a precise and predictable way with an electrostatic gate.
Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P
2016-01-01
Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample (N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.
Effect of anode/filter combination on average glandular dose in mammography.
Biegała, Michał; Jakubowska, Teresa; Markowska, Karolina
2015-01-01
A comparative analysis of mean glandular doses was conducted in 100 female patients who underwent screening mammography in 2011 and 2013. A Siemens Mammomat Novation with the W/Rh anode/filter combination was used in 2011, whereas in 2013 the anode/filter combination was Mo/Mo or Mo/Rh. The functioning of the mammography unit was checked, and the effectiveness of the automatic exposure control (AEC) system was verified by measuring the compensation for changes in phantom thickness and by measuring tube voltage. On the basis of the exposure parameters, an average glandular dose was estimated for each of the 100 female patients. The images obtained using the AEC system had acceptable threshold contrast visibility irrespective of the applied anode/filter combination. Mean glandular doses in the females examined with the W/Rh anode/filter combination were on average 23.6% lower than those obtained with the Mo/Mo or Mo/Rh combinations. The W/Rh anode/filter combination, which exhibited lower mean glandular doses, is therefore recommended.
Cloutier, G; Allard, L; Guo, Z; Durand, L G
1992-03-01
The effect of averaging cardiac Doppler spectrograms on the reduction of their amplitude variability was investigated in 30 patients. Beat-to-beat variations in the amplitude of Doppler spectrograms were also analysed. The quantification of amplitude variability was based on the computation of the area under the absolute value of the derivative function of each spectrum composing mean spectrograms. A fast Fourier transform with a Hanning window was used to compute the Doppler spectra. Results obtained over systolic and diastolic periods showed that the reduction of amplitude variability followed an exponentially decreasing curve characterised by the equation f(r) = 100 e^(-β(r-1)), where r is the number of cardiac cycles, β the exponential decay rate, and 100 the normalised variability for r = 1. In systole the decay rate β was 0.165, whereas in diastole it was 0.225. Reductions of the variability in systole for 5, 10, 15, and 20 cardiac cycles were 48, 77, 90 and 96 per cent, respectively. In diastole, reductions of the variability for the same numbers of cardiac cycles were 59, 87, 96 and 99 per cent, respectively. Based on these results, it can be concluded that no significant further reduction in amplitude variability is obtained by averaging more than 20 cardiac cycles.
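The reported reductions follow directly from the fitted decay curve. A quick numerical check (a sketch only; the percentage reduction is taken as the complement of the normalised variability f(r)):

```python
import math

def variability(r, beta):
    """Normalised amplitude variability after averaging r cardiac cycles."""
    return 100.0 * math.exp(-beta * (r - 1))

def reduction(r, beta):
    """Percentage reduction in variability relative to a single cycle (r = 1)."""
    return 100.0 - variability(r, beta)

for label, beta in (("systole", 0.165), ("diastole", 0.225)):
    print(label, [round(reduction(r, beta)) for r in (5, 10, 15, 20)])
# systole  [48, 77, 90, 96]
# diastole [59, 87, 96, 99]
```

The rounded values reproduce the percentages quoted in the abstract for both cardiac phases.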
Munoz-Menendez, Cristina; Conde-Leboran, Ivan; Baldomir, Daniel; Chubykalo-Fesenko, Oksana; Serantes, David
2015-11-07
An efficient and safe hyperthermia cancer treatment requires the accurate control of the heating performance of magnetic nanoparticles, which is directly related to their size. However, in any particle system the existence of some size polydispersity is experimentally unavoidable, which results in a different local heating output and consequently a different hyperthermia performance depending on the size of each particle. With the aim to shed some light on this significant issue, we have used a Monte Carlo technique to study the role of size polydispersity in heat dissipation at both the local (single particle) and global (macroscopic average) levels. We have systematically varied size polydispersity, temperature and interparticle dipolar interaction conditions, and evaluated local heating as a function of these parameters. Our results provide a simple guide on how to choose, for a given polydispersity degree, the more adequate average particle size so that the local variation in the released heat is kept within some limits that correspond to safety boundaries for the average-system hyperthermia performance. All together we believe that our results may help in the design of more effective magnetic hyperthermia applications.
The effect of temperature on the average volume of Barkhausen jump on Q235 carbon steel
NASA Astrophysics Data System (ADS)
Guo, Lei; Shu, Di; Yin, Liang; Chen, Juan; Qi, Xin
2016-06-01
On the basis of the average volume of Barkhausen jump (AVBJ) v̄ generated by irreversible displacement of magnetic domain walls under an incentive magnetic field applied to ferromagnetic materials, the functional relationship between the saturation magnetization Ms and temperature T is employed in this paper to deduce an explicit mathematical expression relating AVBJ v̄, stress σ, incentive magnetic field H, and temperature T. The dependence of AVBJ v̄ on temperature T is then investigated using this expression. Moreover, tensile and compressive stress experiments were carried out on Q235 carbon steel specimens at different temperatures to verify the theory. This paper offers a theoretical basis for solving the temperature compensation problem of the Barkhausen testing method.
NASA Technical Reports Server (NTRS)
Connell, Peter S.; Kinnison, Douglas E.; Wuebbles, Donald J.; Burley, Joel D.; Johnston, Harold S.
1994-01-01
We have investigated the effects of incorporating representations of heterogeneous chemical processes associated with stratospheric sulfuric acid aerosol into the LLNL two-dimensional, zonally averaged, model of the troposphere and stratosphere. Using distributions of aerosol surface area and volume density derived from SAGE II satellite observations, we were primarily interested in changes in partitioning within the Cl- and N- families in the lower stratosphere, compared to a model including only gas phase photochemical reactions. We have considered the heterogeneous hydrolysis reactions N2O5 + H2O(aerosol) yields 2 HNO3 and ClONO2 + H2O(aerosol) yields HOCl + HNO3 alone and in combination with the proposed formation of nitrosyl sulfuric acid (NSA) in the aerosol and its reaction with HCl. Inclusion of these processes produces significant changes in partitioning in the NO(y) and ClO(y) families in the middle stratosphere.
Finite size effects in the averaged eigenvalue density of Wigner random-sign real symmetric matrices
NASA Astrophysics Data System (ADS)
Dhesi, G. S.; Ausloos, M.
2016-06-01
Nowadays, strict finite size effects must be taken into account in condensed matter problems when treated through models based on lattices or graphs. On the other hand, the cases of directed bonds or links are known to be highly relevant in topics ranging from ferroelectrics to quotation networks. Combining these two points leads us to examine finite size random matrices. To obtain basic materials properties, the Green's function associated with the matrix has to be calculated. To obtain the first finite size correction, a perturbative scheme is hereby developed within the framework of the replica method. The averaged eigenvalue spectrum and the corresponding Green's function of Wigner random sign real symmetric N ×N matrices to order 1 /N are finally obtained analytically. Related simulation results are also presented. The agreement is excellent between the analytical formulas and finite size matrix numerical diagonalization results, confirming the correctness of the first-order finite size expression.
Volokin, Den; ReLlez, Lark
2014-01-01
The presence of an atmosphere can appreciably warm a planet's surface above the temperature of an airless environment. Known as a natural Greenhouse Effect (GE), this near-surface Atmospheric Thermal Enhancement (ATE), as named herein, is presently attributed entirely to the absorption of up-welling long-wave radiation by greenhouse gases. Often quoted as 33 K for Earth, GE is estimated as the difference between a planet's observed mean surface temperature and an effective radiating temperature calculated from the globally averaged absorbed solar flux using the Stefan-Boltzmann (SB) radiation law. This approach equates a planet's average temperature in the absence of greenhouse gases or atmosphere to an effective emission temperature, assuming ATE ≡ GE. The SB law is also routinely employed to estimate the mean temperatures of airless bodies. We demonstrate that this formula, as applied to spherical objects, is mathematically incorrect owing to Hölder's inequality between integrals, and leads to biased results such as a significant underestimation of Earth's ATE. We derive a new expression for the mean physical temperature of airless bodies based on an analytic integration of the SB law over a sphere that accounts for effects of regolith heat storage and cosmic background radiation on nighttime temperatures. Upon verifying our model against Moon surface temperature data provided by the NASA Diviner Lunar Radiometer Experiment, we propose it as a new analytic standard for evaluating the thermal environment of airless bodies. Physical evidence is presented that Earth's ATE should be assessed against the temperature of an equivalent airless body such as the Moon rather than a hypothetical atmosphere devoid of greenhouse gases. Employing the new temperature formula, we show that Earth's total ATE is ~90 K, not 33 K, and that ATE = GE + TE, where GE is the thermal effect of greenhouse gases, while TE > 15 K is a thermodynamic enhancement independent of the
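The Hölder-inequality point can be illustrated with a minimal numerical sketch. This is not the authors' model: it ignores regolith heat storage and cosmic background radiation (both included in the paper), assumes a non-rotating body with a ~0 K night side, and the solar constant and albedo below are illustrative values.

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0              # solar constant at 1 AU, W m^-2 (illustrative)
ALBEDO = 0.12           # Moon-like Bond albedo (illustrative)

# Effective radiating temperature from the globally averaged absorbed flux:
T_eff = (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25

# Mean physical temperature: integrate T(theta) = T_ss * cos(theta)^(1/4)
# over the sunlit hemisphere (night side idealised to ~0 K).
T_ss = (S * (1 - ALBEDO) / SIGMA) ** 0.25  # subsolar temperature
n = 100000
total = 0.0
for i in range(n):
    theta = (i + 0.5) * (math.pi / 2) / n     # solar zenith angle (midpoint rule)
    weight = math.sin(theta) * (math.pi / 2) / n
    total += T_ss * math.cos(theta) ** 0.25 * weight
T_mean = total / 2.0  # normalise by the full sphere (day + night)

print(round(T_eff, 1), round(T_mean, 1))  # T_mean is far below T_eff
```

Because the fourth root is concave, averaging the temperature over the sphere (here T_mean = 0.4 T_ss) gives a value well below the effective temperature computed from the averaged flux (T_eff ≈ 0.71 T_ss), which is the sense in which the conventional approach underestimates ATE.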
Liang, Yuanyuan; Ehler, Benjamin R; Hollenbeak, Christopher S; Turner, Barbara J
2015-02-01
The complier average causal effect (CACE) analysis addresses noncompliance with intervention and missing end-point measures in randomized controlled trials. To conduct a CACE analysis for the Peer Coach and Office Staff Support Trial examining the intervention's effect among "compliers," defined as subjects who would have received an effective dose of the intervention had it been offered, and to compare with an intention-to-treat analysis. A randomized controlled trial of 280 African American patients aged 40-75 with sustained uncontrolled hypertension from 2 general internal medicine practices. Change in 4-year coronary heart disease (CHD) risk (primary) and in systolic blood pressure (SBP) (secondary) from the baseline to the end of the 6-month intervention. Of 136 intervention subjects, 68% were compliers who had significantly more end points measured (86% vs. 34% for CHD risk; 99% vs. 57% for SBP) and lower baseline CHD risk (5% vs. 7.5%) and SBP (139 vs. 144 mm Hg) compared with noncompliers. In the intention-to-treat analysis, the effect of offering the intervention was nonsignificant for 4-year CHD risk (P=0.08) but significant for SBP (P=0.003). CACE analyses showed that receipt of an effective dose of the intervention resulted in a 1% greater reduction in 4-year CHD risk (P<0.05) and at least 8.1 mm Hg greater reduction in SBP compared with compliers in the control group (P<0.05). Among compliers, an effective dose of peer coach and office-based support resulted in significant reductions in 4-year CHD risk and SBP. More intensive interventions are likely to be required for noncompliers.
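Under the standard instrumental-variable assumptions (randomized assignment as the instrument, no effect of assignment on noncompliers), the CACE is the intention-to-treat effect scaled by the compliance proportion. A sketch with hypothetical numbers (the 5.5 mm Hg ITT effect below is illustrative, not taken from the trial; only the 68% compliance rate and the 8.1 mm Hg complier effect appear in the abstract):

```python
def cace(itt_effect, compliance_rate):
    """Complier average causal effect under standard IV assumptions:
    the intention-to-treat effect divided by the proportion of compliers."""
    return itt_effect / compliance_rate

# Hypothetical ITT reduction in SBP of 5.5 mm Hg with 68% compliance:
print(round(cace(5.5, 0.68), 1))  # -> 8.1
```

This scaling is why a CACE analysis can detect an effect among compliers that a diluted intention-to-treat analysis misses.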
Stanger, Catherine; Ryan, Stacy R.; Fu, Hongyun; Budney, Alan J.
2011-01-01
Background: Children of substance abusers are at risk for behavioral/emotional problems. To improve outcomes for these children, we developed and tested an intervention that integrated a novel contingency management (CM) program designed to enhance compliance with an empirically validated parent training curriculum. CM provided incentives for daily monitoring of parenting and child behavior, completion of home practice assignments, and session attendance. Methods: Forty-seven mothers with substance abuse or dependence were randomly assigned to parent training + incentives (PTI) or parent training without incentives (PT). Children were 55% male, ages 2-7 years. Results: Homework completion and session attendance did not differ between PTI and PT mothers, but PTI mothers had higher rates of daily monitoring. PTI children had larger reductions in child externalizing problems in all models. Complier Average Causal Effects (CACE) analyses showed additional significant effects of PTI on child internalizing problems, parent problems, and parenting. These effects were not significant in standard intent-to-treat analyses. Conclusion: Results suggest our incentive program may offer a method for boosting outcomes. PMID:21466925
Effect of filter on average glandular dose and image quality in digital mammography
NASA Astrophysics Data System (ADS)
Songsaeng, C.; Krisanachinda, A.; Theerakul, K.
2016-03-01
The aim was to determine the average glandular dose and entrance surface air kerma in both phantoms and patients and to assess image quality for different target/filter combinations (W/Rh and W/Ag) in a digital mammography system. The compressed breast thickness, compression force, average glandular dose, entrance surface air kerma, peak kilovoltage, and tube current-time product were recorded and compared between the W/Rh and W/Ag target filters. The CNR and the figure of merit were used to determine the effect of the target filter on image quality. For the W/Rh target filter, the mean AGD was 1.75 mGy, the mean ESAK was 6.67 mGy, the mean CBT was 54.1 mm, and the mean CF was 14 lbs. For the W/Ag target filter, the mean AGD was 2.7 mGy, the mean ESAK was 12.6 mGy, the mean CBT was 75.5 mm, and the mean CF was 15 lbs. In the phantom study, the AGD was 1.2 mGy at 4 cm, 3.3 mGy at 6 cm, and 3.83 mGy at 7 cm thickness. The FOM was 24.6 and the CNR was 9.02 at 6 cm thickness; the FOM was 18.4 and the CNR was 8.6 at 7 cm thickness. The AGD from the digital mammography system with W/Rh at thinner CBT was lower than the AGD from the W/Ag target filter.
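A common convention in mammography optimisation defines the figure of merit as FOM = CNR²/AGD; assuming that definition (the abstract does not state it), the 6 cm phantom values above are self-consistent to within rounding:

```python
def figure_of_merit(cnr, agd_mgy):
    """Image quality per unit dose: FOM = CNR^2 / AGD (a common convention)."""
    return cnr ** 2 / agd_mgy

# 6 cm phantom values from the study: CNR = 9.02, AGD = 3.3 mGy
print(round(figure_of_merit(9.02, 3.3), 1))  # close to the reported FOM of 24.6
```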
Extracurricular Activities and Their Effect on the Student's Grade Point Average: Statistical Study
ERIC Educational Resources Information Center
Bakoban, R. A.; Aljarallah, S. A.
2015-01-01
Extracurricular activities (ECA) are part of students' everyday life and play important roles in it. Few studies have addressed how student engagement in ECA affects grade point average (GPA). This research was conducted to determine whether the students' grade point average in King Abdulaziz University,…
Heckman, James J.; Humphries, John Eric; Veramendi, Gregory
2016-01-01
This paper develops robust models for estimating and interpreting treatment effects arising from both ordered and unordered multistage decision problems. Identification is secured through instrumental variables and/or conditional independence (matching) assumptions. We decompose treatment effects into direct effects and continuation values associated with moving to the next stage of a decision problem. Using our framework, we decompose the IV estimator, showing that IV generally does not estimate economically interpretable or policy relevant parameters in prototypical dynamic discrete choice models, unless policy variables are instruments. Continuation values are an empirically important component of estimated total treatment effects of education. We use our analysis to estimate the components of what LATE estimates in a dynamic discrete choice model. PMID:27041793
Pinheiro, Samya de Lara Lins de Araujo; Saldiva, Paulo Hilário Nascimento; Schwartz, Joel; Zanobetti, Antonella
2014-01-01
OBJECTIVE To analyze the effect of air pollution and temperature on mortality due to cardiovascular and respiratory diseases. METHODS We evaluated the isolated and synergistic effects of temperature and particulate matter with aerodynamic diameter < 10 µm (PM10) on the mortality of individuals > 40 years old due to cardiovascular disease and that of individuals > 60 years old due to respiratory diseases in Sao Paulo, SP, Southeastern Brazil, between 1998 and 2008. Three methodologies were used to evaluate the isolated association: time-series analysis using Poisson regression model, bidirectional case-crossover analysis matched by period, and case-crossover analysis matched by the confounding factor, i.e., average temperature or pollutant concentration. The graphical representation of the response surface, generated by the interaction term between these factors added to the Poisson regression model, was interpreted to evaluate the synergistic effect of the risk factors. RESULTS No differences were observed between the results of the case-crossover and time-series analyses. The percentage change in the relative risk of cardiovascular and respiratory mortality was 0.85% (0.45;1.25) and 1.60% (0.74;2.46), respectively, due to an increase of 10 μg/m3 in the PM10 concentration. The pattern of correlation of the temperature with cardiovascular mortality was U-shaped and that with respiratory mortality was J-shaped, indicating an increased relative risk at high temperatures. The values for the interaction term indicated a higher relative risk for cardiovascular and respiratory mortalities at low temperatures and high temperatures, respectively, when the pollution levels reached approximately 60 μg/m3. CONCLUSIONS The positive association standardized in the Poisson regression model for pollutant concentration is not confounded by temperature, and the effect of temperature is not confounded by the pollutant levels in the time-series analysis. The simultaneous exposure
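In a Poisson log-linear model, the reported percentage changes in relative risk map directly onto the regression coefficient. A sketch (the coefficient is recovered from the reported 0.85% change per 10 μg/m³ in PM10; variable names are illustrative):

```python
import math

def pct_change_rr(beta, delta):
    """Percentage change in relative risk for a `delta` increase in exposure,
    given the Poisson log-linear coefficient `beta` (per unit exposure)."""
    return (math.exp(beta * delta) - 1.0) * 100.0

# Coefficient implied by the reported 0.85% change per 10 ug/m3 increase:
beta_cardio = math.log(1.0085) / 10.0
print(round(pct_change_rr(beta_cardio, 10.0), 2))  # -> 0.85
```

The same transform applies to the 1.60% figure for respiratory mortality.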
Effects of Averaging Mass on Predicted Specific Absorption Rate (SAR) Values
2002-09-01
This parametric study examined ratios between spatial peak SAR and whole-body average SAR for various tissues (lung, liver, muscle, cerebral spinal fluid, nerve spinal, heart) in relation to various frequencies and orientations. Relatively low ratios between spatial peak SAR and whole-body average SAR (between 3 and 7) were found in heart, liver, lung outer, and lung inner. Figure 19 shows ratios between peak localized SAR and whole-body average SAR for liver for various averaging-mass intervals (1 g and 10 g).
NASA Astrophysics Data System (ADS)
Cheng, C.; Perfect, E.; Cropper, C.
2011-12-01
Numerical models are an important tool in petroleum engineering, geoscience, and environmental applications, e.g. feasibility evaluation and prediction for enhanced oil recovery, enhanced geothermal systems, geological carbon storage, and remediation of contaminated sites. Knowledge of capillary pressure-saturation functions is essential in such applications for simulating multiphase fluid flow and chemical transport in variably-saturated rocks and soils in the subsurface. Parameters from average capillary pressure-saturation functions are sometimes employed due to their relative ease of measurement in the laboratory. However, the use of average capillary pressure-saturation function parameters instead of point capillary pressure-saturation function parameters for numerical simulations of flow and transport can result in significant errors, especially in the case of coarse-grained sediments and fractured rocks. Such erroneous predictions can impose great risks and challenges on decision-making. In this paper we present a comparison of simulation results based on average and point estimates of van Genuchten model parameters (Sr, α, and n) for Berea sandstone, packed glass beads, and Hanford sediments. The capillary pressure-saturation functions were measured using steady-state centrifugation. Average and point parameters were estimated for each sample using the averaging and integral methods, respectively. Results indicated that the Sr and α parameters estimated using the averaging and integral methods were close to a 1-to-1 correspondence, with R-squared values of 0.958 and 0.994, respectively. The n parameter, however, showed a major curvilinear deviation from the 1-to-1 line for the two estimation methods. This trend indicates that the averaging method systematically underestimates the n parameter relative to the point-based estimates of the integral method, leading to an overprediction of the breadth of the pore size distribution. Forward numerical simulations
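For reference, the van Genuchten retention model with parameters (Sr, α, n) mentioned above can be sketched as follows (the parameter values are illustrative, not the measured Berea, glass-bead, or Hanford values):

```python
def van_genuchten_saturation(h, alpha, n, s_r=0.0, s_s=1.0):
    """Saturation S(h) for capillary pressure head h >= 0 under the
    van Genuchten model with m = 1 - 1/n (Mualem constraint)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * h) ** n) ** (-m)  # effective saturation
    return s_r + (s_s - s_r) * se

# Illustrative parameters: alpha in 1/cm, n dimensionless
alpha, n = 0.05, 2.5
print(van_genuchten_saturation(0.0, alpha, n))             # -> 1.0 (saturated)
print(round(van_genuchten_saturation(100.0, alpha, n), 3)) # -> 0.088
```

A larger n gives a steeper retention curve, i.e. a narrower pore size distribution, which is why underestimating n overpredicts the breadth of the distribution.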
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, Cɛ, of measurement errors for estimating the negative log-likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cɛk, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cɛk from the residuals during model calibration. The inferred Cɛk was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cɛk resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cɛk
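The weighting scheme that produces this pathology is the standard information-criterion transform, w_k ∝ exp(−Δ_k/2), where Δ_k is each model's difference from the smallest criterion value. A sketch showing how even modest IC differences concentrate essentially all weight on one model (the IC values are made up):

```python
import math

def ic_weights(ic_values):
    """Model-averaging weights from information-criterion values
    (AIC/BIC/KIC-style): w_k proportional to exp(-Delta_k / 2)."""
    best = min(ic_values)
    raw = [math.exp(-(ic - best) / 2.0) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

# IC differences of only 10 and 20 units already give ~100% to one model:
print([round(w, 4) for w in ic_weights([100.0, 110.0, 120.0])])
# -> [0.9933, 0.0067, 0.0]
```

Replacing the measurement-error covariance with the total-error covariance in the likelihood changes the IC values themselves, shrinking the Δ_k and spreading the weights, which is the effect the two-stage method achieves.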
Lightfoot, Guy; Stevens, John
2014-01-01
This study seeks to identify whether the most efficient artefact rejection (AR) level for recording the newborn auditory brainstem response (ABR) changes with recording conditions and, if so, to suggest a simple strategy for testers to adopt when faced with nonideal test conditions. Twenty-six babies referred from the English Newborn Hearing Screening Programme were tested with ABR as a routine component of their post-screening assessment, and their raw EEG responses were recorded for off-line analysis. One hundred 61-second data samples (equivalent to 3000 stimuli) were re-averaged off-line at five AR levels and at two AR levels with Bayesian averaging: a total of 700 waveforms. An objective measurement of residual noise was used to determine the most efficient AR level (i.e., the one associated with the highest signal-to-noise ratio) to use in low, moderate, and severe noise conditions. The best performing AR levels were as follows: (1) low noise conditions: conventional averaging with AR = ±5 μV; (2) moderate noise conditions: conventional averaging with AR = ±5 μV or ±6.5 μV, or Bayesian averaging with AR = ±10 μV; and (3) severe noise conditions: Bayesian averaging with AR = ±10 μV. In severe noise conditions a more lenient AR level was most efficient for conventional averaging, but a greater number of sweeps would be needed to reduce the noise allowed to enter the average. An interactive AR strategy is proposed, including the trade-off between AR level and the number of sweeps required to control residual noise. AR level does influence test efficiency, and the optimum level depends on the prevailing noise levels, which can change during the test session. It is important that testers are aware of this and develop evidence-based skills to optimize test quality, particularly in challenging test conditions.
The effect of the behavior of an average consumer on the public debt dynamics
NASA Astrophysics Data System (ADS)
De Luca, Roberto; Di Mauro, Marco; Falzarano, Angelo; Naddeo, Adele
2017-09-01
An important issue within the present economic crisis is understanding the dynamics of the public debt of a given country, and how the behavior of average consumers and tax payers in that country affects it. Starting from a model of the average consumer behavior introduced earlier by the authors, we propose a simple model to quantitatively address this issue. The model is then studied and analytically solved under some reasonable simplifying assumptions. In this way we obtain a condition under which the public debt steadily decreases.
NASA Astrophysics Data System (ADS)
Nonaka, Toshihiro; Kitazawa, Masakiyo; Esumi, ShinIchi
2017-06-01
We derive formulas for the efficiency correction of cumulants with many efficiency bins. The derivation of the formulas is simpler than in the previously suggested method, and the numerical cost is drastically reduced compared with the naïve method. From analytical and numerical analyses in simple toy models, we show that using the averaged efficiency in the efficiency correction can cause large deviations in some cases and should be avoided, especially for higher-order cumulants. These analyses show the importance of carrying out the efficiency correction without taking the average.
Gueymard, C.
1986-11-01
An analytical method is described for calculating the daily averages, or effective values, of the sun's elevation, azimuth, hour angle, angle of incidence, and air mass. A particular case, corresponding to extraterrestrial radiation, is considered first. The general derivation takes atmospheric effects into account in a simple way, provided that long-term averages of solar radiation are available. Examples of application are given for the climate of Montreal, Canada. In particular, it is shown that monthly averages of beam radiation on the horizontal may be directly converted to normal-incidence values (and vice versa) by use of the mean solar elevation.
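The building block of such calculations is the standard solar-position relation sin h = sin φ sin δ + cos φ cos δ cos ω. A sketch of the daytime average elevation by direct quadrature (this is not the paper's analytical method; the latitude and declination values are illustrative):

```python
import math

def solar_elevation(lat_deg, decl_deg, hour_angle_deg):
    """Sun elevation angle (degrees) from latitude, declination, hour angle."""
    lat, dec, ha = (math.radians(x) for x in (lat_deg, decl_deg, hour_angle_deg))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha)))

def daytime_mean_elevation(lat_deg, decl_deg, steps=1000):
    """Average elevation between sunrise and sunset (simple midpoint quadrature)."""
    lat, dec = math.radians(lat_deg), math.radians(decl_deg)
    ws = math.acos(-math.tan(lat) * math.tan(dec))  # sunset hour angle (radians)
    total = sum(solar_elevation(lat_deg, decl_deg,
                                math.degrees((i + 0.5) * ws / steps))
                for i in range(steps))
    return total / steps

# Montreal-like latitude at the June solstice (illustrative values):
print(round(solar_elevation(45.5, 23.45, 0.0), 1))  # noon elevation -> 68.0
print(round(daytime_mean_elevation(45.5, 23.45), 1))
```

The daytime mean is well below the noon value, which is why an effective (radiation-weighted) elevation, rather than the instantaneous one, is needed for converting monthly radiation averages.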
Allard, L; Langlois, Y E; Durand, L G; Roederer, G O; Cloutier, G
1992-05-01
The objective of the present study was to analyse the effect of averaging Doppler blood flow signals in lower limb arteries on amplitude and feature variabilities. Doppler signals recorded in 41 iliac and 35 superficial femoral arteries having different categories of stenosis were averaged over 1-15 cardiac cycles. Based on the relative decreasing rate of an index of variability, results indicated that amplitude variability of the spectrograms was exponentially reduced to 30, 6 and 1 per cent when averaging five, ten and 15 cardiac cycles, respectively. Nine diagnostic features were extracted from Doppler spectrograms and their variations from one cardiac cycle to the next quantified. Based on the relative decreasing rate of these variations, results indicated that feature variability was exponentially reduced to 30, 4 and 1 per cent when averaging five, ten and 15 cardiac cycles, respectively. The effect of averaging on the discriminant power of the features to separate the different categories of stenosis was also investigated by performing t-test analyses. Results showed that averaging between five and ten cardiac cycles provided the better discriminant power for most cases, whereas averaging over more than ten cardiac cycles was of little benefit. Based on the spectral analysis technique used in the present study, we conclude that averaging over ten cardiac cycles is sufficient for the analysis of Doppler spectrograms recorded in the lower limbs.
ERIC Educational Resources Information Center
Pfeffer, Jeffrey; Moore, William L.
1980-01-01
The average tenure of academic department heads was found to be positively related to the level of paradigm development characterizing the department, negatively related to departmental size, and related to interactions of the level of paradigm development with the seniority mix of the faculty and with the departmental size. (Author/IRT)
Using National Data to Estimate Average Cost Effectiveness of EFNEP Outcomes by State/Territory
ERIC Educational Resources Information Center
Baral, Ranju; Davis, George C.; Blake, Stephanie; You, Wen; Serrano, Elena
2013-01-01
This report demonstrates how existing national data can be used to first calculate upper limits on the average cost per participant and per outcome per state/territory for the Expanded Food and Nutrition Education Program (EFNEP). These upper limits can then be used by state EFNEP administrators to obtain more precise estimates for their states,…
NASA Astrophysics Data System (ADS)
Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum
2017-04-01
We analyze the relations among the parameters of the moving average method in order to enhance the event detectability of a phase-sensitive optical time domain reflectometer (OTDR). If an external event has a unique vibration frequency, the control parameters of the moving average method should be optimized to detect the event efficiently. A phase-sensitive OTDR was implemented with a pulsed light source, composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier, and a fiber Bragg grating filter, and a light receiving part comprising a photo-detector and a high-speed data acquisition system. The moving average method is operated with three control parameters: the total number of raw traces, M; the number of averaged traces, N; and the moving step size, n. Raw traces were obtained with the phase-sensitive OTDR using sound signals generated by a speaker, and the relation among the control parameters was analyzed with these trace data. The results show that if the event signal has a single frequency, optimal values of N and n exist for efficient event detection.
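The three parameters can be made concrete with a small sketch (synthetic traces; the function below is an assumption about how the averaging is organised, not the authors' implementation):

```python
def moving_average_traces(traces, n_avg, step):
    """Average groups of `n_avg` consecutive traces, sliding by `step` traces.
    `traces` is a list of equal-length sample lists (M raw traces in total)."""
    averaged = []
    for start in range(0, len(traces) - n_avg + 1, step):
        group = traces[start:start + n_avg]
        # element-wise mean across the group, one value per sample position
        averaged.append([sum(samples) / n_avg for samples in zip(*group)])
    return averaged

# Synthetic example: M = 10 raw traces of 4 samples each, N = 4, n = 2
raw = [[float(i + j) for j in range(4)] for i in range(10)]
out = moving_average_traces(raw, n_avg=4, step=2)
print(len(out))  # -> (M - N) // n + 1 = 4 averaged traces
```

Larger N suppresses more noise per averaged trace, while smaller n preserves more of the event's temporal (frequency) content, which is the trade-off the abstract describes.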
NASA Astrophysics Data System (ADS)
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-09-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, Cɛ
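The information-criterion weighting referred to above is conventionally computed from criterion differences; a minimal sketch with made-up AIC values (the same formula applies to AICc, BIC, and KIC):

```python
import numpy as np

def akaike_weights(aic):
    """Model-averaging weights from information-criterion values.
    Subtracting the minimum before exponentiating is the standard,
    numerically stable form of w_i = exp(-AIC_i/2) / sum_j exp(-AIC_j/2)."""
    d = np.asarray(aic) - np.min(aic)
    w = np.exp(-0.5 * d)
    return w / w.sum()

# Made-up criterion values for three candidate models: even a 10-unit
# gap already drives one weight close to zero, illustrating how the
# best model can dominate the average.
w = akaike_weights([100.0, 102.0, 110.0])
print(np.round(w, 3))  # → [0.727 0.268 0.005]
```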
Forwood, Suzanna E; Ahern, Amy; Hollands, Gareth J; Fletcher, Paul C; Marteau, Theresa M
2013-01-01
Previous studies have shown that estimations of the calorie content of an unhealthy main meal food tend to be lower when the food is shown alongside a healthy item (e.g. fruit or vegetables) than when shown alone. This effect has been called the negative calorie illusion and has been attributed to averaging the unhealthy (vice) and healthy (virtue) foods leading to increased perceived healthiness and reduced calorie estimates. The current study aimed to replicate and extend these findings to test the hypothesized mediating effect of ratings of healthiness of foods on calorie estimates. In three online studies, participants were invited to make calorie estimates of combinations of foods. Healthiness ratings of the food were also assessed. The first two studies failed to replicate the negative calorie illusion. In a final study, the use of a reference food, closely following a procedure from a previously published study, did elicit a negative calorie illusion. No evidence was found for a mediating role of healthiness estimates. The negative calorie illusion appears to be a function of the contrast between a food being judged and a reference, supporting the hypothesis that the negative calorie illusion arises from the use of a reference-dependent anchoring and adjustment heuristic and not from an 'averaging' effect, as initially proposed. This finding is consistent with existing data on sequential calorie estimates, and highlights a significant impact of the order in which foods are viewed on how foods are evaluated.
Hassan, Md Kamrul; Nahid, Nur Mohammad; Bahar, Ali Newaz; Bhuiyan, Mohammad Maksudur Rahman; Abdullah-Al-Shafi, Md; Ahmed, Kawsar
2017-08-01
Quantum-dot cellular automata (QCA) is a developing nanotechnology, which seems to be a good candidate to replace the conventional complementary metal-oxide-semiconductor (CMOS) technology. In this article, we present the dataset of average output polarization (AOP) for basic reversible logic gates presented in Ali Newaz et al. (2016) [1]. QCADesigner 2.0.3 has been employed to analysis the AOP of reversible gates at different temperature levels in Kelvin (K) unit.
Effect on long-term average spectrum of pop singers' vocal warm-up with vocal function exercises.
Guzman, Marco; Angulo, Mabel; Muñoz, Daniel; Mayerhoff, Ross
2013-04-01
This case-control study aimed to investigate whether there is any change in the spectral slope declination immediately after vocal function exercises (VFE) versus traditional vocal warm-up exercises in normal singers. Thirty-eight pop singers with perceptually normal voices were divided into two groups: an experimental group (n = 20) and a control group (n = 18). A single session with VFE for the experimental group and traditional singing warm-up exercises for the control group was applied. Voice was recorded before and after the exercises. The recorded tasks were to read a phonetically balanced text and to sing a song. Long-term average spectrum (LTAS) analysis included alpha ratio, L1-L0 ratio, and singing power ratio (SPR). Acoustic parameters of voice samples pre- and post-training were compared. Comparison between the VFE and control groups was also performed. Significant changes after treatment included the alpha ratio and singing power ratio in the speaking voice, and SPR in the singing voice, for the VFE group. The traditional vocal warm-up of the control group also showed pre-post changes. Significant differences between the VFE group and the control group for alpha ratio and SPR were found in speaking voice samples. This study demonstrates that VFE have an immediate effect on the spectrum of the voice, specifically a decrease in the spectral slope declination. The results of this study provide support for the advantageous effect of VFE as vocal warm-up on voice quality.
Effect of the initial excitation energy on the average fission lifetime of nuclei
Gontchar, I. I.; Ponomarenko, N. A.; Litnevsky, A. L.
2008-07-15
The dependence of the fission time on the initial nuclear excitation energy E{sub tot0}* is studied on the basis of a refined combined dynamical and statistical model. It is shown that this dependence may be nonmonotonic, in which case it features a broad maximum. It turns out that the form of the average fission time
Effect of spatial averaging on temporal statistical and scaling properties of rainfall
NASA Astrophysics Data System (ADS)
Olsson, Jonas; Singh, Vijay P.; Jinno, Kenji
1999-08-01
The variation of temporal continuous rainfall properties with spatial scale was investigated by analyses of daily 11.2-year rainfall time series of point values and spatial averages obtained from a network of 161 rain gauges in southern Sweden. The number of series studied ranged from 16 for point values to 1 for 8000 km2, the latter corresponding to the total network area. The analyses included investigations of general descriptive statistics, power spectra, empirical probability distribution functions, and scaling of statistical moments. Two important characteristics of the rainfall process in the region were indicated. The first was that temporal statistical properties related to extreme values qualitatively changed at a spatial scale of ˜2800 km2, whereas properties of the mean process did not. Although this change may partly be related to changes in the sample size, its presence does raise some important questions concerning the transformation between point values and regional scales in hydroclimatological rainfall modeling. The second important characteristic was a temporal scaling behavior between 1 day and 1 month with properties depending on the spatial averaging area. This finding is in agreement with previous investigations where a spatial variation of temporal scaling properties has been found and suggests the possibility of scaling-based time series modeling where the regional parameters are adjusted to take local climatological factors affecting the rainfall process into account.
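The "scaling of statistical moments" analysis mentioned above can be sketched as follows: aggregate the series over increasing time scales, compute q-th order moments, and read scaling exponents off log-log slopes. This is an illustrative sketch on synthetic data, not the authors' exact procedure:

```python
import numpy as np

def scaling_exponents(series, scales, qs):
    """Moment-scaling analysis: average the series over non-overlapping
    windows of each scale, take q-th order moments, and fit the slope of
    log(moment) versus log(scale) for each q."""
    K = []
    for q in qs:
        logm = []
        for s in scales:
            m = len(series) // s
            agg = series[:m * s].reshape(m, s).mean(axis=1)
            logm.append(np.log(np.mean(np.abs(agg) ** q)))
        K.append(np.polyfit(np.log(scales), logm, 1)[0])
    return np.array(K)

# Synthetic positive 'rainfall' series; for q = 1 the moment is just the
# overall mean, so its slope is ~0, while higher moments decay with scale.
rng = np.random.default_rng(1)
rain = rng.exponential(size=4096)
K = scaling_exponents(rain, scales=[1, 2, 4, 8, 16, 32], qs=[1.0, 2.0])
print(K)
```

A multifractal series would show a nonlinear dependence of the exponent on q; the iid data here serve only to make the mechanics concrete.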
NASA Astrophysics Data System (ADS)
Zupan, Ales; Burke, Kieron; Ernzerhof, Matthias; Perdew, John P.
1997-06-01
We analyze the electron densities n(r) of atoms, molecules, solids, and surfaces. The distributions of values of the Seitz radius rs = (3/(4πn))^(1/3) and the reduced density gradient s = |∇n|/(2(3π²)^(1/3) n^(4/3)) in an electron density indicate which ranges of these variables are significant for physical processes. We also define energy-weighted averages ⟨rs⟩ and ⟨s⟩ of these variables, from which local spin density (LSD) and generalized gradient approximation (GGA) exchange-correlation energies may be estimated. The changes in these averages upon rearrangement of the nuclei (atomization of molecules or solids, stretching of bond lengths or lattice parameters, change of crystal structure, etc.) are used to explain why GGA corrects LSD in the way it does. A thermodynamic-like inequality (essentially d/>d
ERIC Educational Resources Information Center
Chambers, John R.; Windschitl, Paul D.
2004-01-01
Biases in social comparative judgments, such as those illustrated by above-average and comparative-optimism effects, are often regarded as products of motivated reasoning (e.g., self-enhancement). These effects, however, can also be produced by information-processing limitations or aspects of judgment processes that are not necessarily biased by…
Average Causal Effects from Nonrandomized Studies: A Practical Guide and Simulated Example
ERIC Educational Resources Information Center
Schafer, Joseph L.; Kang, Joseph
2008-01-01
In a well-designed experiment, random assignment of participants to treatments makes causal inference straightforward. However, if participants are not randomized (as in observational study, quasi-experiment, or nonequivalent control-group designs), group comparisons may be biased by confounders that influence both the outcome and the alleged…
NASA Technical Reports Server (NTRS)
North, G. R.; Bell, T. L.; Cahalan, R. F.; Moeng, F. J.
1982-01-01
Geometric characteristics of the spherical earth are shown to be responsible for the increase of variance with latitude of zonally averaged meteorological statistics. An analytic model is constructed to display the effect of a spherical geometry on zonal averages, employing a sphere labeled with radial unit vectors in a real, stochastic field expanded in complex spherical harmonics. The variance of a zonally averaged field is found to be expressible in terms of the spectrum of the vector field of the spherical harmonics. A maximum variance is then located at the poles, and the ratio of the variance to the zonally averaged grid-point variance, weighted by the cosine of the latitude, yields the zonal correlation typical of the latitude. An example is provided for the 500 mb level in the Northern Hemisphere compared to 15 years of data. Variance is determined to increase north of 60 deg latitude.
On the averaging of cardiac diffusion tensor MRI data: the effect of distance function selection
NASA Astrophysics Data System (ADS)
Giannakidis, Archontis; Melkus, Gerd; Yang, Guang; Gullberg, Grant T.
2016-11-01
Diffusion tensor magnetic resonance imaging (DT-MRI) allows a unique insight into the microstructure of highly-directional tissues. The selection of the most appropriate distance function for the space of diffusion tensors is crucial in enhancing the clinical application of this imaging modality. Both linear and nonlinear metrics have been proposed in the literature over the years. The debate on the most appropriate DT-MRI distance function is still ongoing. In this paper, we present a framework to compare the Euclidean, affine-invariant Riemannian and log-Euclidean metrics using actual high-resolution DT-MRI rat heart data. We employed temporal averaging at the diffusion tensor level of three consecutive and identically-acquired DT-MRI datasets from each of five rat hearts as a means to rectify the background noise-induced loss of myocyte directional regularity. This procedure is applied here for the first time in the context of tensor distance function selection. When compared with previous studies that used a different concrete application to juxtapose the various DT-MRI distance functions, this work is unique in that it combines the following: (i) metrics were judged by quantitative, rather than qualitative, criteria, (ii) the comparison tools were non-biased, (iii) a longitudinal comparison operation was used on a same-voxel basis. The statistical analyses of the comparison showed that the three DT-MRI distance functions tend to provide equivalent results. Hence, we came to the conclusion that the tensor manifold for cardiac DT-MRI studies is a curved space of almost zero curvature. The signal-to-noise ratio dependence of the operations was investigated through simulations. Finally, the ‘swelling effect’ occurrence following Euclidean averaging was found to be too unimportant to be worth consideration.
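Of the metrics compared above, the log-Euclidean mean is the simplest nonlinear one to implement: tensors are averaged in the matrix-logarithm domain and mapped back, which avoids the swelling effect of plain Euclidean averaging. A minimal sketch, using an eigendecomposition-based matrix log/exp (valid because diffusion tensors are symmetric positive-definite):

```python
import numpy as np

def _logm_spd(A):
    # matrix logarithm of a symmetric positive-definite matrix
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def _expm_sym(A):
    # matrix exponential of a symmetric matrix
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(tensors):
    """Log-Euclidean mean of diffusion tensors: average in the
    matrix-log domain, then map back with the matrix exponential."""
    return _expm_sym(np.mean([_logm_spd(T) for T in tensors], axis=0))

# The mean of diag(1,1,1) and diag(4,1,1) has first eigenvalue
# sqrt(1*4) = 2: geometric, not arithmetic, averaging of eigenvalues.
A = np.diag([1.0, 1.0, 1.0])
B = np.diag([4.0, 1.0, 1.0])
M = log_euclidean_mean([A, B])
print(np.diag(M))
```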
Introducing the at-risk average causal effect with application to HealthWise South Africa.
Coffman, Donna L; Caldwell, Linda L; Smith, Edward A
2012-08-01
Researchers often hypothesize that a causal variable, whether randomly assigned or not, has an effect on an outcome behavior and that this effect may vary across levels of initial risk of engaging in the outcome behavior. In this paper, we propose a method for quantifying initial risk status. We then illustrate the use of this risk-status variable as a moderator of the causal effect of leisure boredom, a non-randomized continuous variable, on cigarette smoking initiation. The data come from the HealthWise South Africa study. We define the causal effects using marginal structural models and estimate the causal effects using inverse propensity weights. Indeed, we found leisure boredom had a differential causal effect on smoking initiation across different risk statuses. The proposed method may be useful for prevention scientists evaluating causal effects that may vary across levels of initial risk.
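For a binary treatment, the inverse-propensity-weighting step described above reduces to the standard IPW estimator. A minimal numpy sketch with simulated data; the propensity scores are taken as given here, whereas the paper estimates them and embeds the weights in marginal structural models:

```python
import numpy as np

def ipw_ate(y, t, e):
    """IPW (Horvitz-Thompson style) estimator of the average treatment
    effect for binary treatment t, outcome y, and propensity scores e."""
    return np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

# Toy data: randomized treatment (constant propensity 0.5), true effect 2.
rng = np.random.default_rng(2)
t = rng.integers(0, 2, size=10000)
y = 1.0 + 2.0 * t + rng.normal(scale=0.1, size=10000)
e = np.full(10000, 0.5)
est = ipw_ate(y, t, e)
print(est)  # close to the true effect of 2
```

With non-constant propensities the same formula reweights each unit by the inverse probability of the treatment it actually received, which is what removes measured confounding.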
Estimating Heterogeneous Treatment Effects with Observational Data*
Xie, Yu; Brand, Jennie E.; Jann, Ben
2011-01-01
Individuals differ not only in their background characteristics, but also in how they respond to a particular treatment, intervention, or stimulation. In particular, treatment effects may vary systematically by the propensity for treatment. In this paper, we discuss a practical approach to studying heterogeneous treatment effects as a function of the treatment propensity, under the same assumption commonly underlying regression analysis: ignorability. We describe one parametric method and two non-parametric methods for estimating interactions between treatment and the propensity for treatment. For the first method, we begin by estimating propensity scores for the probability of treatment given a set of observed covariates for each unit and construct balanced propensity score strata; we then estimate propensity score stratum-specific average treatment effects and evaluate a trend across them. For the second method, we match control units to treated units based on the propensity score and transform the data into treatment-control comparisons at the most elementary level at which such comparisons can be constructed; we then estimate treatment effects as a function of the propensity score by fitting a non-parametric model as a smoothing device. For the third method, we first estimate non-parametric regressions of the outcome variable as a function of the propensity score separately for treated units and for control units and then take the difference between the two non-parametric regressions. We illustrate the application of these methods with an empirical example of the effects of college attendance on women's fertility. PMID:23482633
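The first method described above (propensity-score stratification) can be sketched as follows; the quantile-based strata and the stratum-size pooling weights are conventional choices, not necessarily the authors' exact implementation:

```python
import numpy as np

def stratified_ate(y, t, e, n_strata=5):
    """Propensity-score stratification: split units into quantile strata
    of the propensity score e, take the treated-minus-control mean
    difference within each stratum, and pool with stratum-size weights."""
    edges = np.quantile(e, np.linspace(0, 1, n_strata + 1))
    idx = np.clip(np.searchsorted(edges, e, side="right") - 1, 0, n_strata - 1)
    effect = 0.0
    for s in range(n_strata):
        m = idx == s
        if t[m].sum() == 0 or (1 - t[m]).sum() == 0:
            continue  # stratum lacks one group; skipped in this sketch
        diff = y[m][t[m] == 1].mean() - y[m][t[m] == 0].mean()
        effect += (m.sum() / len(y)) * diff
    return effect

# Simulated data: propensity e confounds the outcome; the true effect is 2.
rng = np.random.default_rng(3)
e = rng.uniform(0.1, 0.9, size=20000)
t = (rng.uniform(size=20000) < e).astype(int)
y = 2.0 * t + e + rng.normal(scale=0.1, size=20000)
est = stratified_ate(y, t, e)
print(est)  # close to 2, while the naive difference in means is biased upward
```

The trend of the stratum-specific differences across strata is what the paper uses to detect effect heterogeneity; the sketch above only pools them.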
Average-atom treatment of relaxation time in x-ray Thomson scattering from warm dense matter
NASA Astrophysics Data System (ADS)
Johnson, W. R.; Nilsen, J.
2016-03-01
The influence of finite relaxation times on Thomson scattering from warm dense plasmas is examined within the framework of the average-atom approximation. Presently most calculations use the collision-free Lindhard dielectric function to evaluate the free-electron contribution to the Thomson cross section. In this work, we use the Mermin dielectric function, which includes relaxation time explicitly. The relaxation time is evaluated by treating the average atom as an impurity in a uniform electron gas and depends critically on the transport cross section. The calculated relaxation rates agree well with values inferred from the Ziman formula for the static conductivity and also with rates inferred from a fit to the frequency-dependent conductivity. Transport cross sections determined by the phase-shift analysis in the average-atom potential are compared with those evaluated in the commonly used Born approximation. The Born approximation converges to the exact cross sections at high energies; however, differences that occur at low energies lead to corresponding differences in relaxation rates. The relative importance of including relaxation time when modeling x-ray Thomson scattering spectra is examined by comparing calculations of the free-electron dynamic structure function for Thomson scattering using Lindhard and Mermin dielectric functions. Applications are given to warm dense Be plasmas, with temperatures ranging from 2 to 32 eV and densities ranging from 2 to 64 g/cc.
Li, Ye; Zhang, Yixin; Zhu, Yun; Chen, Minyu
2016-07-01
Based on the spatial power spectrum of the refractive index of anisotropic turbulence, the average polarizability of Gaussian Schell-model quantized beams and the lateral coherence length of a spherical wave propagating through an ocean water channel are derived. Numerical results show that, under strong temperature fluctuation, the depolarization effects of anisotropic turbulence are weaker than those of isotropic turbulence when the other parameters of the two links are the same. The depolarization effects of salinity fluctuation are smaller than those of temperature fluctuation; the average polarizability of beams increases with the inner scale of turbulence and the source's transverse size; and a larger rate of dissipation of kinetic energy per unit mass of fluid enhances the average polarizability of beams. When the receiving radius is smaller than the characteristic radius, the average polarizability of beams in isotropic turbulence is smaller than that in anisotropic turbulence; when the receiving radius is larger than the characteristic radius, the reverse holds.
The Averaged Face Growth Rates of lysozyme Crystals: The Effect of Temperature
NASA Technical Reports Server (NTRS)
Nadarajah, Arunan; Forsythe, Elizabeth L.; Pusey, Marc L.
1995-01-01
Measurements of the averaged or macroscopic face growth rates of lysozyme crystals are reported here for the (110) face of tetragonal lysozyme, at three sets of pH and salt concentrations, with temperatures over a 4-22 °C range for several protein concentrations. The growth rate trends with supersaturation were similar to previous microscopic growth rate measurements. However, it was found that at high supersaturations the growth rates attain a maximum and then start decreasing. No 'dead zone' was observed but the growth rates were found to approach zero asymptotically at very low supersaturations. The growth rate data also displayed a dependence on pH and salt concentration which could not be characterized solely by the supersaturation. A complete mechanism for lysozyme crystal growth, involving the formation of an aggregate growth unit, mass transport of the growth unit to the crystal interface and faceted crystal growth by growth unit addition, is suggested. Such a mechanism may provide a more consistent explanation for the observed growth rate trends than those suggested by other investigators. The nutrient solution interactions leading to the formation of the aggregate growth unit may, thus, be as important as those occurring at the crystal interface and may account for the differences between small molecule and protein crystal growth.
Loop expansion of the average effective action in the functional renormalization group approach
NASA Astrophysics Data System (ADS)
Lavrov, Peter M.; Merzlikin, Boris S.
2015-10-01
We formulate a perturbation expansion for the effective action in a new approach to the functional renormalization group method based on the concept of composite fields for regulator functions being their most essential ingredients. We demonstrate explicitly the principal difference between the properties of effective actions in these two approaches existing already on the one-loop level in a simple gauge model.
The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis.
Godfrey, Devon J; McAdams, H Page; Dobbins, James T
2013-02-01
Matrix inversion tomosynthesis (MITS) uses linear systems theory and knowledge of the imaging geometry to remove tomographic blur that is present in conventional backprojection tomosynthesis reconstructions, leaving in-plane detail rendered clearly. The use of partial-pixel interpolation during the backprojection process introduces imprecision in the MITS modeling of tomographic blur, and creates low-contrast artifacts in some MITS planes. This paper examines the use of MITS slabs, created by averaging several adjacent MITS planes, as a method for suppressing partial-pixel artifacts. Human chest tomosynthesis projection data, acquired as part of an IRB-approved pilot study, were used to generate MITS planes, three-plane MITS slabs (MITSa3), five-plane MITS slabs (MITSa5), and seven-plane MITS slabs (MITSa7). These were qualitatively examined for partial-pixel artifacts and the visibility of normal and abnormal anatomy. Additionally, small (5 mm) subtle pulmonary nodules were simulated and digitally superimposed upon human chest tomosynthesis projection images, and their visibility was qualitatively assessed in the different reconstruction techniques. Simulated images of a thin wire were used to generate modulation transfer function (MTF) and slice-sensitivity profile curves for the different MITS and MITS slab techniques, and these were examined for indications of partial-pixel artifacts and frequency response uniformity. Finally, mean-subtracted, exposure-normalized noise power spectra (ENNPS) estimates were computed and compared for MITS and MITS slab reconstructions, generated from 10 sets of tomosynthesis projection data of an acrylic slab. The simulated in-plane MTF response of each technique was also combined with the square root of the ENNPS estimate to yield stochastic signal-to-noise ratio (SNR) information about the different reconstruction techniques. For scan angles of 20° and 5 mm plane separation, seven MITS planes must be averaged to sufficiently
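The slab construction described above is a running average over adjacent reconstructed planes; a minimal sketch (the centered-window indexing is an assumption):

```python
import numpy as np

def mits_slabs(planes, width):
    """Average `width` adjacent reconstruction planes into a slab,
    centered on each plane where a full window fits; width = 3, 5, 7
    corresponds to the MITSa3/MITSa5/MITSa7 slabs in the abstract."""
    half = width // 2
    return np.array([planes[i - half:i + half + 1].mean(axis=0)
                     for i in range(half, planes.shape[0] - half)])

stack = np.arange(10 * 2 * 2, dtype=float).reshape(10, 2, 2)  # 10 planes
slabs = mits_slabs(stack, width=3)
print(slabs.shape)  # (8, 2, 2)
```

Because the partial-pixel artifacts vary plane to plane while anatomy is continuous across planes, averaging suppresses the artifacts at the cost of some depth resolution.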
NASA Technical Reports Server (NTRS)
Bernstein, R. B.
1973-01-01
The surprising feature of the Doppler problem in threshold determination is the 'amplification effect' of the target's thermal energy spread. The small thermal energy spread of the target molecules results in a large dispersion in relative kinetic energy. The Doppler broadening effect in connection with thermal energy beam experiments is discussed, and a procedure is recommended for the deconvolution of molecular scattering cross-section functions whose dominant dependence upon relative velocity is approximately that of the standard low-energy form.
USDA-ARS?s Scientific Manuscript database
A total of 24 Angus x Hereford steers (BW = 479.8 ± 4.48) were used to assess the effect of Metabolizable Energy Intake (MEI) on Average Daily Gain (ADG) and Tympanic Temperature (TT) during the wintertime in southern Chile. The study was conducted at the experimental field of the Catholic Universit...
ERIC Educational Resources Information Center
Afzal, Muhammad Tanveer; Gondal, Bashir; Fatima, Nuzhat
2014-01-01
The major objective of the study was to elicit the effect of three instructional methods for teaching of mathematics on low, average and high achiever elementary school students. Three methods: traditional instructional method, computer assisted instruction (CAI) and teacher facilitated mathematics learning software were employed for the teaching…
ERIC Educational Resources Information Center
Doerann-George, Judith
The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…
ERIC Educational Resources Information Center
Heffez, Jack
To determine what effects employment will have on high school students' grade point averages and rate of school attendance, the author involved fifty-six students in an experiment. Twenty-eight students were employed part-time under the Youth Incentive Entitlement Project (YIEP). The twenty-eight students in the control group were eligible for…
ERIC Educational Resources Information Center
Apprey, Maurice; Bassett, Kimberley C.; Preston-Grimes, Patrice; Lewis, Dion W.; Wood, Beverly
2014-01-01
Two pivotal and interconnected claims are addressed in this article. First, strategy precedes program effectiveness. Second, graduation rates and rankings are insufficient in any account of academic progress for African American students. In this article, graduation is regarded as the floor and not the ceiling, as it were. The ideal situation in…
Effects of Social Interactions on Empirical Responses to Selection for Average Daily Gain of Boars
USDA-ARS?s Scientific Manuscript database
Effects of competition on responses to selection for ADG were examined with records of 9,720 boars from dam lines (1 and 2) and sire lines (3 and 4) provided by Pig Improvement Company. Each line was analyzed separately. Pens contained 15 boars. Gains (ADG) were measured from about 71 to 161 d of...
A Statistical Analysis of the Effects of Housing Environment on Grade Point Average.
ERIC Educational Resources Information Center
Maurais, Roger L.
This study examined the effect on GPA of increased occupancy of double dormitory rooms. Seven groups of 50 students each were randomly selected: Group (1) freshmen, two per room; (2) freshmen, three per room; (3) freshmen living off campus, (4) seniors, two per room; (5) seniors, three per room; (6) seniors living off campus; (7) seniors in…
van Osch, Yvette; Blanken, Irene; Meijs, Maartje H J; van Wolferen, Job
2015-04-01
We tested whether the perceived physical attractiveness of a group is greater than the average attractiveness of its members. In nine studies, we find evidence for the so-called group attractiveness effect (GA-effect), using female, male, and mixed-gender groups, indicating that group impressions of physical attractiveness are more positive than the average ratings of the group members. A meta-analysis on 33 comparisons reveals that the effect is medium to large (Cohen's d = 0.60) and moderated by group size. We explored two explanations for the GA-effect: (a) selective attention to attractive group members, and (b) the Gestalt principle of similarity. The results of our studies are in favor of the selective attention account: People selectively attend to the most attractive members of a group and their attractiveness has a greater influence on the evaluation of the group.
Covariant and background independent functional RG flow for the effective average action
NASA Astrophysics Data System (ADS)
Safari, Mahmoud; Vacca, Gian Paolo
2016-11-01
We extend our prescription for the construction of a covariant and background-independent effective action for scalar quantum field theories to the case where momentum modes below a certain scale are suppressed by the presence of an infrared regulator. The key step is an appropriate choice of the infrared cutoff for which the Ward identity, capturing the information from single-field dependence of the ultraviolet action, continues to be exactly solvable, and therefore, in addition to covariance, manifest background independence of the effective action is guaranteed at any scale. A practical consequence is that in this framework one can adopt truncations dependent on the single total field. Furthermore we discuss the necessary and sufficient conditions for the preservation of symmetries along the renormalization group flow.
NASA Astrophysics Data System (ADS)
Li, M.; Chen, Y.
2009-12-01
Coordinate rotation is typically applied to align measured turbulence data along the streamwise direction before calculating turbulent fluxes. A standard averaging period (30 min) is commonly used when estimating these fluxes. Different rotation approaches with various averaging periods can cause systematic bias and significant variations in flux estimations. Thus, measuring surface fluxes over non-flat terrain requires that an appropriate rotation technique and an optimal averaging period are applied. In this study, two coordinate rotation approaches (double and planar-fit rotations) and no rotation, in association with averaging periods of 15-240 min, were applied to compute heat and water vapor fluxes over mountainous terrain using the eddy covariance method. Measurements were conducted in an experimental watershed, the Lien-Hua-Chih (LHC) watershed, located in central Taiwan. This watershed has considerable meso-scale circulation and mountainous terrain. Vegetation type is a mixture of natural deciduous forest and shrubs; canopy height is about 17 m. A 22 m tall observation tower was built inside the canopy. The prevailing wind direction at the LHC site is NW during the daytime and SE during the nighttime in both the dry and wet seasons. Turbulence data above the canopy were measured with an eddy covariance system comprising a 3-D sonic anemometer (Young 81000) and a krypton hygrometer (Campbell KH20). Raw data at 10 Hz were recorded simultaneously with a data logger (CR1000) and a CF card. Air temperature/humidity profiles were measured to calculate the heat/moisture storage inside the canopy layer. Air pressure data were used to correct the effect of air density fluctuations on surface fluxes. The effects of coordinate rotation approaches with various averaging periods on the average daily energy closure fraction are presented. The criteria of the best energy closure fraction and minimum uncertainty indicate that planar-fit rotation with an averaging period
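The double-rotation step described in the abstract (yaw rotation to zero the mean crosswind, then pitch rotation to zero the mean vertical velocity) can be sketched as follows. This is a minimal illustration, not the authors' code; the function and variable names are assumed.

```python
import numpy as np

def double_rotation(u, v, w):
    """Rotate sonic-anemometer wind components so the mean flow is
    streamwise: first yaw (mean v -> 0), then pitch (mean w -> 0)."""
    u, v, w = map(np.asarray, (u, v, w))
    # First rotation: align the x-axis with the mean horizontal wind.
    theta = np.arctan2(v.mean(), u.mean())
    u1 = u * np.cos(theta) + v * np.sin(theta)
    v1 = -u * np.sin(theta) + v * np.cos(theta)
    # Second rotation: tilt so the mean vertical velocity vanishes.
    phi = np.arctan2(w.mean(), u1.mean())
    u2 = u1 * np.cos(phi) + w * np.sin(phi)
    w2 = -u1 * np.sin(phi) + w * np.cos(phi)
    return u2, v1, w2

# After rotation, an eddy-covariance flux is a covariance of the
# rotated vertical velocity with a scalar, e.g. w'T' for heat.
```

Planar-fit rotation differs in that the tilt angles are fitted once over a long record rather than per averaging period, which is why the choice of method interacts with the averaging period over complex terrain.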
Lee, Jerry; Hulman, Sonia; Musci, Michael; Stang, Ellen
2015-10-01
Prescription opioid and heroin abuse have been increasing steadily year after year, and continue to be a serious national problem. A sequela of the increase in opioid abuse has been an increase in the number of infants born with opioid dependence. These infants often require costly, prolonged stays in the neonatal intensive care unit (NICU) for drug withdrawal treatment. The authors studied a population of infants from a large Medicaid health plan who were born with neonatal abstinence syndrome (NAS) secondary to in utero opioid exposure to assess the average length of stay in the NICU, and to determine the variables that may account for differences in interinstitutional lengths of stay. The overall average length of stay for NAS was 21.1 days for the 139 infants included in the study. Analysis of the medication used for treatment revealed that infants who were treated with a combined inpatient/outpatient regimen with methadone had an average length of stay of 11.4 days versus 25.1 days for infants who were treated entirely as inpatients (P<0.001), a 55% reduction in average length of stay. In 2009 there were an estimated 13,600 cases of NAS in the United States at a cost of $53,000 per case. A 55% reduction in length of stay corresponds to $396 million in annual savings for the treatment of NAS. Development of successful combined inpatient/outpatient management programs for NAS warrants further consideration.
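The savings projection in the abstract follows from simple arithmetic on the quoted figures, which can be checked directly (all numbers below are taken from the abstract itself):

```python
# Arithmetic check of the figures quoted in the abstract.
los_inpatient = 25.1   # days, inpatient-only management
los_combined = 11.4    # days, combined inpatient/outpatient methadone
reduction = (los_inpatient - los_combined) / los_inpatient
print(f"length-of-stay reduction: {reduction:.0%}")  # ~55%

cases = 13_600          # estimated US NAS cases in 2009
cost_per_case = 53_000  # USD per case
# Using the rounded 55% figure, as the abstract does:
savings = 0.55 * cases * cost_per_case
print(f"projected annual savings: ${savings / 1e6:.0f} million")  # $396 million
```

Note the projection assumes, as the abstract implicitly does, that cost scales linearly with length of stay and that all cases could adopt the combined regimen.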
Bora, B.; Bhuyan, H.; Favre, M.; Wyndham, E.; Chuaqui, H.; Kakati, M.
2011-10-15
Self-excited plasma series resonance is observed in low pressure capacitively coupled radio frequency discharges as high-frequency oscillations superimposed on the normal radio frequency current. This high-frequency contribution to the radio frequency current is generated by a series resonance between the capacitive sheath and the inductive and resistive bulk plasma. In this report, we present an experimental method to measure the plasma series resonance in a capacitively coupled radio frequency argon plasma by modifying the homogeneous discharge model. The homogeneous discharge model is modified by introducing a correction factor to the plasma resistance. Plasma parameters are also calculated by considering the plasma series resonance effect. Experimental measurements show that the self-excitation of the plasma series resonance, which arises in capacitive discharge due to the nonlinear interaction of plasma bulk and sheath, significantly enhances both the Ohmic and stochastic heating. The experimentally measured total dissipation, which is the sum of the Ohmic and stochastic heating, is found to increase significantly with decreasing pressure.
Nahshoni, Eitan; Golubchik, Pavel; Glazer, Jonathan; Sever, Jonathan; Strasberg, Boris; Imbar, Shula; Shoval, Gal; Weizman, Abraham; Zalsman, Gil
2012-02-01
Reports on sudden cardiac death (SCD) of children and adolescents treated with stimulant agents have raised concerns regarding the need for cardiovascular monitoring and risk stratification schedules. Cardiac ventricular late potentials (LPs) represent delayed ventricular activation that might predispose to fatal ventricular arrhythmias and SCD in cardiac patients. LPs have not previously been measured in children with attention deficit/hyperactivity disorder (ADHD). LPs were measured in 18 physically healthy ADHD children (5 girls and 13 boys, age 11.9 ± 2.5 years, treatment duration 2.6 ± 1.9 years) before and 2 h after oral methylphenidate administration. No significant changes were detected and LPs were found to be within normal ranges. In conclusion, this preliminary small-scale study suggests that methylphenidate in physically healthy children with ADHD was not associated with cardiac ventricular LPs, suggesting the safety of the agent in this age group.
Hinkelman, Laura M.; Evans, K. Franklin; Clothiaux, Eugene E.; Ackerman, Thomas P.; Stackhouse, Paul W.
2007-10-01
Cumulus clouds can become tilted or elongated in the presence of wind shear. Nevertheless, most studies of the interaction of cumulus clouds and radiation have assumed these clouds to be isotropic. This paper describes an investigation of the effect of fair-weather cumulus cloud field anisotropy on domain-averaged solar fluxes and atmospheric heating rate profiles. A stochastic field generation algorithm was used to produce 20 three-dimensional liquid water content fields based on the statistical properties of cloud scenes from a large eddy simulation. Progressively greater degrees of x–z plane tilting and horizontal stretching were imposed on each of these scenes, so that an ensemble of scenes was produced for each level of distortion. The resulting scenes were used as input to a three-dimensional Monte Carlo radiative transfer model. Domain-averaged transmission, reflection, and absorption of broadband solar radiation were computed for each scene along with the average heating rate profile. Both tilt and horizontal stretching were found to significantly affect calculated fluxes, with the amount and sign of flux differences depending strongly on sun position relative to cloud distortion geometry. The mechanisms by which anisotropy interacts with solar fluxes were investigated by comparisons to independent pixel approximation and tilted independent pixel approximation computations for the same scenes. Finally, cumulus anisotropy was found to most strongly impact solar radiative transfer by changing the effective cloud fraction (i.e., the cloud fraction with respect to the solar beam direction).
Liu, Yan; Korn, Edward L; Oh, Hee Soo; Pearson, Helmer; Xu, Tian-Min; Baumrind, Sheldon
2009-05-01
This study continues our assessment of agreement and disagreement among 25 Chinese and 20 US orthodontists in the ranking for facial attractiveness of end-of-treatment photographs of randomly sampled growing Chinese and white orthodontic patients. The main aims of this article were to (1) measure the overall pattern of agreement between the mean rankings of US and Chinese orthodontists, and (2) measure the strength of agreement between the rankings of the US and Chinese orthodontists for each patient. Each judge independently ranked standard clinical sets of profile, frontal, and frontal-smiling photographs of 43 US patients and 48 Chinese patients. For each patient, a separate mean rank was computed from the responses of each group of judges. Pearson correlations between the mean ranks of the 2 groups of judges were used to measure their overall agreement. Paired and unpaired t tests were used to measure the agreement between the judges of the 2 groups for each patient. The overall agreement between the mean rankings of the US and Chinese judges was very high. For the US patients, the correlation between the Chinese and US judges' means was r = 0.92, P <0.0001. For the Chinese patients, the analogous value was r = 0.86, P <0.0001. Agreement between the 2 groups of judges concerning each patient was also generally strong. For two thirds of the patients, the mean ranks of the US and Chinese judges differed by less than 1 unit in a scale of 12. However, for 6 patients considered individually (5 Chinese and 1 US), the assessment of the 2 groups of judges was statistically significantly different at P values ranging from 0.02 to less than 0.0001, even after the Bonferroni correction. These findings demonstrate that orthodontic clinicians can reliably identify and rank subtle differences between patients, and that differences between judges and between patients can be distinguished at a high level of statistical significance, given appropriate study designs. However, the
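The agreement statistics used in the study (Pearson correlation of mean ranks, plus a Bonferroni-adjusted per-patient test threshold) can be sketched in outline. The ranks below are synthetic, not the study's data, and the scatter level is an arbitrary assumption.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

rng = np.random.default_rng(7)
n_patients = 43
us_mean_rank = rng.uniform(1, 12, n_patients)
# Chinese judges broadly agree, with some per-patient scatter (assumed).
cn_mean_rank = us_mean_rank + rng.normal(0, 1.0, n_patients)

r = pearson_r(us_mean_rank, cn_mean_rank)

# Bonferroni correction: testing each of the 43 patients separately
# while holding the overall significance level at 0.05.
alpha_per_patient = 0.05 / n_patients
```

The Bonferroni step explains why per-patient differences had to reach very small raw P values (down to <0.0001) to survive correction.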
NASA Technical Reports Server (NTRS)
Matsunaga, Tsuneo
1993-01-01
Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a Japanese future imaging sensor which has five channels in thermal infrared (TIR) region. To extract spectral emissivity information from ASTER and/or TIMS data, various temperature-emissivity (T-E) separation methods have been developed to date. Most of them require assumptions on surface emissivity, in which emissivity measured in a laboratory is often used instead of in-situ pixel-averaged emissivity. But if these two emissivities are different, accuracies of separated emissivity and surface temperature are reduced. In this study, the difference between laboratory and in-situ pixel-averaged emissivity and its effect on T-E separation are discussed. TIMS data of an area containing both rocks and vegetation were also processed to retrieve emissivity spectra using two T-E separation methods.
ERIC Educational Resources Information Center
Powell, Lynda H.; Calvin, James E., III; Calvin, James E., Jr.
2007-01-01
To curb the epidemic of obesity in the United States, revised Medicare policy allows support for efficacious obesity treatments. This review summarizes the evidence from rigorous randomized trials (9 lifestyle trials, 5 drug trials, and 2 surgical trials) on the efficacy and risk-benefit profile of lifestyle, drug, and surgical interventions aimed…
... A radiation oncologist: a doctor who treats cancer with radiation therapy. A medical oncologist: a doctor who treats cancer with medicines such as chemotherapy or targeted therapy. You might have many other specialists on your treatment team as well, including physician assistants (PAs), nurse ...
Vranckx, Stijn; Vos, Peter; Maiheu, Bino; Janssen, Stijn
2015-11-01
Effects of vegetation on pollutant dispersion receive increased attention in attempts to reduce air pollutant concentration levels in the urban environment. In this study, we examine the influence of vegetation on the concentrations of traffic pollutants in urban street canyons using numerical simulations with the CFD code OpenFOAM. This CFD approach is validated against literature wind tunnel data of traffic pollutant dispersion in street canyons. The impact of trees is simulated for a variety of vegetation types and the full range of approaching wind directions at 15° interval. All these results are combined using meteo statistics, including effects of seasonal leaf loss, to determine the annual average effect of trees in street canyons. This analysis is performed for two pollutants, elemental carbon (EC) and PM10, using background concentrations and emission strengths for the city of Antwerp, Belgium. The results show that due to the presence of trees the annual average pollutant concentrations increase with about 8% (range of 1% to 13%) for EC and with about 1.4% (range of 0.2 to 2.6%) for PM10. The study indicates that this annual effect is considerably smaller than earlier estimates which are generally based on a specific set of governing conditions (1 wind direction, full leafed trees and peak hour traffic emissions). Copyright © 2015 Elsevier B.V. All rights reserved.
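The annual-averaging procedure described (per-direction CFD results combined with wind-direction climatology) can be sketched as a frequency-weighted mean. The sector frequencies and per-sector concentration changes below are invented for illustration only; the study additionally weights by season to account for leaf loss.

```python
import numpy as np

directions = np.arange(0, 360, 15)      # 24 approach directions, 15 deg apart
# Hypothetical relative frequency of each wind sector over a year.
freq = np.full(directions.size, 1 / directions.size)
# Hypothetical per-sector change in street-canyon EC concentration (%)
# from the CFD runs; positive = trees increase concentrations.
delta_ec = 8 + 4 * np.sin(np.deg2rad(directions))

# Annual-average effect: frequency-weighted mean over wind directions.
annual_effect = float(np.sum(freq * delta_ec))
print(f"annual average EC change: {annual_effect:.1f}%")
```

Weighting over all directions and seasons is what pulls the annual estimate (about 8% for EC here) well below worst-case single-condition estimates.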
A statistical study of gyro-averaging effects in a reduced model of drift-wave transport
Fonseca, Julio; Del-Castillo-Negrete, Diego B.; Sokolov, Igor M.; Caldas, Ibere L.
2016-08-25
Here, a statistical study of finite Larmor radius (FLR) effects on transport driven by electrostatic drift waves is presented. The study is based on a reduced discrete Hamiltonian dynamical system known as the gyro-averaged standard map (GSM). In this system, FLR effects are incorporated through the gyro-averaging of a simplified weak-turbulence model of electrostatic fluctuations. Formally, the GSM is a modified version of the standard map in which the perturbation amplitude, $K_0$, becomes $K_0 J_0(\hat{p})$, where $J_0$ is the zeroth-order Bessel function and $\hat{p}$ is the Larmor radius. Assuming a Maxwellian probability density function (pdf) for $\hat{p}$, we compute analytically and numerically the pdf and the cumulative distribution function of the effective drift-wave perturbation amplitude $K_0 J_0(\hat{p})$. Using these results, we compute the probability of loss of confinement (i.e., global chaos), $P_c$, and the probability of trapping, $P_t$; it is shown that $P_c$ provides an upper bound for the escape rate, and that $P_t$ provides a good estimate of the particle trapping rate. Lastly, the analytical results are compared with direct numerical Monte-Carlo simulations of particle transport.
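The central quantity of the abstract, the effective amplitude K0*J0(p) for randomly distributed Larmor radii, can be sampled numerically. This is a sketch only: the values of K0 and the thermal scale are illustrative, and the Rayleigh form (radius of a 2D Maxwellian velocity) is an assumption standing in for the paper's Maxwellian pdf.

```python
import numpy as np
from scipy.special import j0  # zeroth-order Bessel function J0

rng = np.random.default_rng(0)

K0 = 2.0      # bare standard-map perturbation amplitude (illustrative)
sigma = 1.0   # thermal Larmor-radius scale (illustrative)

# Sample normalized Larmor radii; a 2D Maxwellian velocity distribution
# gives a Rayleigh-distributed gyroradius.
rho = rng.rayleigh(scale=sigma, size=50_000)

# Effective perturbation amplitude seen in the gyro-averaged map.
K_eff = K0 * j0(rho)

# Fraction of particles whose effective amplitude stays below the
# standard-map global-chaos threshold K_c ~ 0.9716; these particles
# remain blocked by invariant (KAM) barriers.
K_c = 0.9716
p_confined = float(np.mean(np.abs(K_eff) < K_c))
print(f"estimated P(no global chaos) = {p_confined:.2f}")
```

Because J0 decays and oscillates, large gyroradii suppress the effective kick amplitude, which is the physical origin of the confinement probability the paper computes.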
Kim, Ki-Hyun
2011-01-01
To learn more about the effects of mixing different odorants, a series of air dilution sensory (ADS) tests were conducted using four reduced sulfur compounds [RSC: hydrogen sulfide (H2S), methanethiol (CH3SH), dimethylsulfide (DMS), and dimethyldisulfide (DMDS)] at varying concentration levels. The tests were initially conducted by analyzing samples containing single individual RSCs at a wide range of concentrations. The resulting data were then evaluated to define the empirical relationship for each RSC between the dilution-to-threshold (D/T) ratio and odor intensity (OI) scaling. Based on the relationships defined for each individual RSC, the D/T ratios were estimated for a synthetic mixture of four RSCs. The effect of mixing was then examined by assessing the relative contribution of each RSC to those estimates with the aid of the actually measured D/T values. This stepwise test confirmed that the odor intensity of the synthetic mixture is not governed by the common theoretical basis (e.g., rule of additivity, synergism, or a stronger component model) but is best represented by the averaged contribution of all RSC components. The overall results of this study thus suggest that the mixing phenomenon between odorants with similar chemical properties (like RSC family) can be characterized by the averaging effect of all participants. PMID:22319360
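The study's conclusion, that a mixture's odor strength is best represented by the averaged contribution of its components rather than by additivity, can be written as a one-line model. The D/T values below are hypothetical placeholders, not measurements from the study.

```python
# Hypothetical single-compound dilution-to-threshold (D/T) ratios.
dt_single = {"H2S": 250.0, "CH3SH": 620.0, "DMS": 180.0, "DMDS": 310.0}

# The rule of additivity would predict the mixture D/T as the sum...
dt_additive = sum(dt_single.values())
# ...while the averaging model favored by the study predicts the mean.
dt_averaged = dt_additive / len(dt_single)

print(f"additive: {dt_additive:.0f}, averaged: {dt_averaged:.0f}")
```

Under the averaging model, the mixture estimate sits between the weakest and strongest components instead of exceeding all of them.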
Samerdokiene, Vitalija; Mastauskas, Albinas; Atkocius, Vydmantas
2015-12-01
The use of radiation sources for various medical purposes is closely related to irradiation of the medical staff, which can cause harmful health effects and an increased risk of cancer. In total, 1463 medical staff who had been occupationally exposed to sources of ionising radiation (IR) were monitored. Records with annual dose measurements (N = 19,157) were collected and regularly analysed over a 23-y period: from 1 January 1991 to 31 December 2013. The collected annual average effective dose (AAED) data have been analysed according to different socio-demographic parameters and will be used in a future investigation to assess cancer risk among medical staff occupationally exposed to sources of IR. A thorough analysis of data extracted from the medical staff's dose records allows one to conclude that the average annual effective dose of Lithuanian medical staff occupationally exposed to sources of IR decreased consistently from 1991 (1.75 mSv) to 2013 (0.27 mSv) (p < 0.0001).
NASA Technical Reports Server (NTRS)
Hinkelman, Laura M.; Evans, K. Franklin; Clothiaux, Eugene E.; Ackerman, Thomas P.; Stackhouse, Paul W., Jr.
2006-01-01
Cumulus clouds can become tilted or elongated in the presence of wind shear. Nevertheless, most studies of the interaction of cumulus clouds and radiation have assumed these clouds to be isotropic. This paper describes an investigation of the effect of fair-weather cumulus cloud field anisotropy on domain-averaged solar fluxes and atmospheric heating rate profiles. A stochastic field generation algorithm was used to produce twenty three-dimensional liquid water content fields based on the statistical properties of cloud scenes from a large eddy simulation. Progressively greater degrees of x-z plane tilting and horizontal stretching were imposed on each of these scenes, so that an ensemble of scenes was produced for each level of distortion. The resulting scenes were used as input to a three-dimensional Monte Carlo radiative transfer model. Domain-average transmission, reflection, and absorption of broadband solar radiation were computed for each scene along with the average heating rate profile. Both tilt and horizontal stretching were found to significantly affect calculated fluxes, with the amount and sign of flux differences depending strongly on sun position relative to cloud distortion geometry. The mechanisms by which anisotropy interacts with solar fluxes were investigated by comparisons to independent pixel approximation and tilted independent pixel approximation computations for the same scenes. Cumulus anisotropy was found to most strongly impact solar radiative transfer by changing the effective cloud fraction, i.e., the cloud fraction when the field is projected on a surface perpendicular to the direction of the incident solar beam.
NASA Astrophysics Data System (ADS)
Kleeman, M.; Mahmud, A.
2008-12-01
California has one of the worst particulate air pollution problems in the nation with some estimates predicting more than 5000 premature deaths each year attributed to air pollution. Climate change will modify weather patterns in California with unknown consequences for PM2.5. Previous down-scaling exercises carried out for the entire United States have typically not resolved the details associated with California's mountain-valley topography and mixture of urban-rural emissions characteristics. Detailed studies carried out for California have identified strong effects acting in opposite directions on PM2.5 concentrations making the net prediction for climate effects on PM2.5 somewhat uncertain. More research is needed to reduce this uncertainty so that we can truly understand climate impacts on PM2.5 and public health. The objective of this research is to predict climate change effects on annual average concentrations of particulate matter (PM2.5) in California with sufficient resolution to capture the details of California's air basins. Business-as-usual scenarios generated by the Parallel Climate Model (PCM) will be down-scaled to 4km meteorology using the Weather Research Forecast (WRF) model. The CIT/UCD source-oriented photochemical air quality model will be employed to predict PM2.5 concentrations throughout the entire state of California. The modeled annual average total and speciated PM2.5 concentrations for the future (2047-2049) and the present-day (2004-2006) periods will be compared to determine climate change effects. The results from this study will improve our understanding of global climate change effects on PM2.5 concentrations in California.
Aghamohseni, Hengameh; Ohadi, Kaveh; Spearman, Maureen; Krahn, Natalie; Moo-Young, Murray; Scharer, Jeno M; Butler, Mike; Budman, Hector M
2014-09-30
The impact of operating conditions on the glycosylation pattern of humanized camelid monoclonal antibody, EG2-hFc produced by Chinese hamster ovary (CHO) cells has been evaluated by a combination of experiments and modeling. Cells were cultivated under different levels of glucose and glutamine concentrations with the goal of investigating the effect of nutrient depletion levels and ammonia build up on the cell growth and the glycoprofiles of the monoclonal antibody (Mab). The effect of average pH reduction on glycosylation level during the entire culture time or during a specific time span was also investigated. The relative abundance of glycan structures was quantified by hydrophilic interaction liquid chromatography (HILIC) and the galactosylation index (GI) and the sialylation index (SI) were determined. Lower initial concentrations of glutamine resulted in lower glucose consumption and lower cell yield but increased GI and SI levels when compared to cultures started with higher initial glutamine levels. Similarly, reducing the average pH of culture resulted in lower growth but higher SI and GI levels. These findings indicate that there is a tradeoff between cell growth, resulting Mab productivity and the achievement of desirable higher glycosylation levels. A dynamic model, based on a metabolic flux analysis (MFA), is proposed to describe the metabolism of nutrients, cell growth and Mab productivity. Finally, existing software (GLYCOVIS) that describes the glycosylation pathways was used to illustrate the impact of extracellular species on the glycoprofiles. Copyright © 2014 Elsevier B.V. All rights reserved.
Antitumor effects of electrochemical treatment
González, Maraelys Morales; Zamora, Lisset Ortíz; Cabrales, Luis Enrique Bergues; Sierra González, Gustavo Victoriano; de Oliveira, Luciana Oliveira; Zanella, Rodrigo; Buzaid, Antonio Carlos; Parise, Orlando; Brito, Luciana Macedo; Teixeira, Cesar Augusto Antunes; Gomes, Marina das Neves; Moreno, Gleyce; Feo da Veiga, Venicio; Telló, Marcos; Holandino, Carla
2013-01-01
Electrochemical treatment is an alternative modality for tumor treatment based on the application of a low-intensity direct electric current to the tumor tissue through two or more platinum electrodes placed within the tumor zone or in the surrounding areas. This treatment is noted for its great effectiveness, minimal invasiveness and local effect. Several studies have been conducted worldwide to evaluate the antitumoral effect of this therapy, and a variety of biochemical and physiological responses of tumors to the applied treatment have been observed. For this reason, researchers have suggested various mechanisms to explain how direct electric current destroys tumor cells. It is generally accepted that this treatment induces electrolysis, electroosmosis and electroporation in tumoral tissues; however, the mechanism of action of this alternative modality on tumor tissue is not well understood. Although the principle of electrochemical treatment is simple, a standardized method is not yet available, and the mechanism by which electrochemical treatment affects tumor growth and survival may be a more complex process. The present work analyzes the latest and most important research on the electrochemical treatment of tumors. We conclude with our point of view on the features of the destruction mechanism of this alternative therapy, and we suggest some mechanisms and strategies for this therapy from a thermodynamic point of view. This tool has been exploited very little in the area of electrochemical treatment of cancer, and much work remains to be done. Electrochemical treatment constitutes a good therapeutic option for patients in whom conventional oncology methods have failed. PMID:23592904
NASA Astrophysics Data System (ADS)
Chen, Feier; Tian, Kang; Ding, Xiaoxu; Miao, Yuqi; Lu, Chunxia
2016-11-01
Analysis of freight rate volatility characteristics has attracted more attention since 2008 due to the effects of the credit crunch and the slowdown in marine transportation. The multifractal detrended fluctuation analysis technique is employed to analyze the time series of the Baltic Dry Bulk Freight Rate Index and the market trend of two bulk ship sizes, namely Capesize and Panamax, for the period 1 March 1999 to 26 February 2015. In this paper, the degree of multifractality with different fluctuation sizes is calculated. In addition, a multifractal detrending moving average (MF-DMA) counting technique has been developed to quantify the components of the multifractal spectrum with the finite-size effect taken into consideration. Numerical results show that both the Capesize and Panamax freight rate index time series are of multifractal nature. The origin of multifractality for the bulk freight rate market series is found to be mostly due to nonlinear correlation.
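The abstract does not spell out the fluctuation-function machinery behind MF-DFA/MF-DMA; as a rough illustration of the underlying idea, here is a minimal monofractal DFA sketch on synthetic data (window sizes, detrending order, and the white-noise series are all assumptions for illustration, not choices from the study):

```python
import numpy as np

def dfa_fluctuation(x, scales):
    """Fluctuation function F(s) of first-order detrended fluctuation
    analysis: integrate the series, split into windows of size s,
    detrend each window with a linear fit, and average the residual
    variance across windows."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        n = len(y) // s                    # number of full windows
        rms = []
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            fit = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.mean((seg - fit) ** 2))
        F.append(np.mean(rms) ** 0.5)
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.normal(size=4096)                  # white noise: exponent near 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa_fluctuation(x, scales)
H = np.polyfit(np.log(scales), np.log(F), 1)[0]   # scaling exponent
```

The multifractal generalization replaces the second-moment average with q-th order moments and examines how the scaling exponent varies with q; a q-dependent exponent is the signature of multifractality the abstract refers to.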
Chambers, John R; Windschitl, Paul D
2004-09-01
Biases in social comparative judgments, such as those illustrated by above-average and comparative-optimism effects, are often regarded as products of motivated reasoning (e.g., self-enhancement). These effects, however, can also be produced by information-processing limitations or aspects of judgment processes that are not necessarily biased by motivational factors. In this article, the authors briefly review motivational accounts of biased comparative judgments, introduce a 3-stage model for understanding how people make comparative judgments, and then describe how various nonmotivational factors can influence the 3 stages of the comparative judgment process. Finally, the authors discuss several unresolved issues highlighted by their analysis, such as the interrelation between motivated and nonmotivated sources of bias and the influence of nonmotivated sources of bias on behavior.
ERIC Educational Resources Information Center
Huber, Martin
2012-01-01
Like any empirical method used for causal analysis, social experiments are prone to attrition, which may compromise the validity of the results. This article considers the problem of partially missing outcomes in experiments. First, it systematically reveals under which forms of attrition--in terms of its relation to observable and/or unobservable…
Effectiveness of antiretroviral treatment in Colombia.
Machado-Alba, Jorge Enrique; Vidal, Xavier
2012-11-01
To evaluate the effectiveness of antiretroviral therapies and factors associated with HIV/AIDS control in a population of patients treated by the Colombian Social Security Health System (SGSSS). This was a descriptive study of 510 HIV/AIDS patients treated with antiretroviral therapies in 19 cities in Colombia from June 1992 to April 2011. Factors assessed from each patient's clinical history were: viral load, CD4 count, antiretroviral treatment regimens, prescribed daily doses of medications, length of disease evolution, duration of therapy, history of opportunistic diseases, and drug costs. Patients were predominantly male (75.1% men versus 24.9% women), with a mean age of 41.0 ± 11.4 years and an average length of disease progression of 72 months. All recommended treatment regimens were prescribed at the defined daily dose. Treatment was effective in 65.3% of patients (viral load < 50 copies per mL). Non-adherence to treatment, treatment failure, the presence of anxiety or depression, and treatment in the city of Barranquilla were associated with an increased risk of uncontrolled HIV infection. The mean annual cost of drugs per patient was US$ 2,736. Factors associated with uncontrolled HIV infection, especially regarding treatment adherence, must be identified to promote solutions for health care programs treating patients with HIV/AIDS.
Fugal, M; McDonald, D; Jacqmin, D; Koch, N; Ellis, A; Peng, J; Ashenafi, M; Vanek, K
2015-06-15
Purpose: This study explores novel methods to address two significant challenges affecting measurement of patient-specific quality assurance (QA) with IBA’s Matrixx Evolution™ ionization chamber array. First, dose calculation algorithms often struggle to accurately determine dose to the chamber array due to CT artifact and algorithm limitations. Second, finite chamber size and volume averaging effects cause additional deviation from the calculated dose. Methods: QA measurements were taken with the Matrixx positioned on the treatment table in a solid-water Multi-Cube™ phantom. To reduce the effect of CT artifact, the Matrixx CT image set was masked with appropriate materials and densities. Individual ionization chambers were masked as air, while the high-Z electronic backplane and remaining solid-water material were masked as aluminum and water, respectively. Dose calculation was done using Varian’s Acuros XB™ (V11) algorithm, which is capable of predicting dose more accurately in non-biologic materials due to its consideration of each material’s atomic properties. Finally, the exported TPS dose was processed using an in-house algorithm (MATLAB) to assign the volume-averaged TPS dose to each element of a corresponding 2-D matrix. This matrix was used for comparison with the measured dose. Square fields at regularly-spaced gantry angles, as well as selected patient plans, were analyzed. Results: Analyzed plans showed improved agreement, with the average gamma passing rate increasing from 94 to 98%. Correction factors necessary for chamber angular dependence were reduced by 67% compared to factors measured previously, indicating that previously measured factors corrected for dose calculation errors in addition to true chamber angular dependence. Conclusion: By comparing volume-averaged dose, calculated with a capable dose engine, on a phantom masked with correct materials and densities, QA results obtained with the Matrixx Evolution™ can be significantly improved.
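The in-house MATLAB routine is not described in detail; a hypothetical NumPy analogue of the volume-averaging step, assigning the mean dose over a finite region to each chamber element, might look like the sketch below (grid, chamber positions, and window size are illustrative assumptions, not the study's geometry):

```python
import numpy as np

def volume_average(dose, centers, half_width):
    """Average a 2-D dose grid over a square window around each chamber
    center, mimicking the finite-chamber volume averaging described in
    the abstract (hypothetical stand-in for the in-house routine)."""
    out = np.empty(len(centers))
    for k, (i, j) in enumerate(centers):
        patch = dose[i - half_width:i + half_width + 1,
                     j - half_width:j + half_width + 1]
        out[k] = patch.mean()
    return out

# Synthetic dose plane with a linear gradient; three chamber positions
dose = np.add.outer(np.arange(20), np.arange(20)).astype(float)
centers = [(5, 5), (10, 10), (15, 15)]
avg = volume_average(dose, centers, half_width=2)   # -> [10., 20., 30.]
```

For a linear gradient the window mean equals the value at the window center, which is why volume averaging matters mainly in high-gradient regions of real plans.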
NASA Astrophysics Data System (ADS)
Managave, S. R.; Jani, R. A.; Narayana Rao, T.; Sunilkumar, K.; Satheeshkumar, S.; Ramesh, R.
2016-08-01
Evaporation of rain is known to contribute water vapor, a potent greenhouse gas, to the atmosphere. Stable oxygen and hydrogen isotopic compositions of precipitation (δ18O and δD, respectively), usually measured and presented as values integrated over rain events or as monthly means, are important tools for detecting evaporation effects. The slope of ~8 of the linear relationship between such time-averaged values of δD and δ18O (called the meteoric water line) is widely accepted as proof of condensation under isotopic equilibrium and of the absence of evaporation of rain during atmospheric fall. Here, through a simultaneous investigation of the isotopic and drop size distributions of seventeen rain events sampled on an intra-event scale at Gadanki (13.5°N, 79.2°E), southern India, we demonstrate that evaporation effects, not evident in the time-averaged data, are significantly manifested in the sub-samples of individual rain events. We detect this through (1) slopes significantly less than 8 for the δD-δ18O relation on the intra-event scale and (2) significant positive correlations between deuterium excess (d-excess = δD - 8·δ18O; lower values in rain indicate evaporation) and the mass-weighted mean diameter of the raindrops (Dm). An estimated ~44% of rain is influenced by evaporation. This study also reveals a signature of isotopic equilibration of rain with the cloud-base vapor, a process important for modeling the isotopic composition of precipitation. d-excess values of rain are modified by post-condensation processes, and the present approach offers a way to identify the d-excess values least affected by such processes. Isotope-enabled global circulation models could be improved by incorporating intra-event isotopic data and raindrop-size-dependent isotopic effects.
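The two diagnostics used above, d-excess and the local δD-δ18O slope, follow directly from paired isotope measurements; the toy calculation below uses invented sub-sample values (not the Gadanki data) chosen so the slope falls below 8, as the abstract reports for evaporated rain:

```python
# Hypothetical intra-event sub-samples (per mil); illustrative only.
d18O = [-2.0, -3.5, -5.0, -6.5]
dD   = [-8.0, -18.0, -28.0, -38.0]

# Deuterium excess: d = δD - 8·δ18O (lower values indicate evaporation)
d_excess = [d - 8.0 * o for d, o in zip(dD, d18O)]

# Least-squares slope of the local δD-δ18O line
n = len(d18O)
mo = sum(d18O) / n
md = sum(dD) / n
slope = sum((o - mo) * (d - md) for o, d in zip(d18O, dD)) / \
        sum((o - mo) ** 2 for o in d18O)   # below 8 here, unlike the MWL
```

In this made-up event the least-depleted (most evaporated) sub-sample has the lowest d-excess, and the event-scale slope of 20/3 ≈ 6.7 sits well below the meteoric-water-line value of 8, which is exactly the kind of intra-event signal the time-averaged data would hide.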
Iverson, Richard M.; George, David L.
2014-01-01
To simulate debris-flow behaviour from initiation to deposition, we derive a depth-averaged, two-phase model that combines concepts of critical-state soil mechanics, grain-flow mechanics and fluid mechanics. The model's balance equations describe coupled evolution of the solid volume fraction, m, basal pore-fluid pressure, flow thickness and two components of flow velocity. Basal friction is evaluated using a generalized Coulomb rule, and fluid motion is evaluated in a frame of reference that translates with the velocity of the granular phase, vs. Source terms in each of the depth-averaged balance equations account for the influence of the granular dilation rate, defined as the depth integral of ∇⋅vs. Calculation of the dilation rate involves the effects of an elastic compressibility and an inelastic dilatancy angle proportional to m−meq, where meq is the value of m in equilibrium with the ambient stress state and flow rate. Normalization of the model equations shows that predicted debris-flow behaviour depends principally on the initial value of m−meq and on the ratio of two fundamental timescales. One of these timescales governs downslope debris-flow motion, and the other governs pore-pressure relaxation that modifies Coulomb friction and regulates evolution of m. A companion paper presents a suite of model predictions and tests.
Plantar Fasciitis: Prescribing Effective Treatments.
ERIC Educational Resources Information Center
Shea, Michael; Fields, Karl B.
2002-01-01
Plantar fasciitis is an extremely common, painful injury seen among people in running and jumping sports. While prognosis for recovery with conservative care is excellent, prolonged duration of symptoms affects sports participation. Studies on treatment options show mixed results, so finding effective treatments can be challenging. A logical…
NASA Technical Reports Server (NTRS)
Gal-Chen, T.; Wyngaard, J. C.
1982-01-01
Calculations of the ratio of the true one-dimensional spectrum of vertical velocity to that measured with multiple-Doppler radar beams are presented. It was assumed that the effect of pulse volume averaging and objective analysis routines is the replacement of a point measurement with a volume integral. Estimation of u and v was assumed to be feasible when orthogonal radars are not available. The target fluid was configured as having an infinite vertical dimension, zero vertical velocity at the top and bottom, and homogeneous and isotropic turbulence with a Kolmogorov energy spectrum. The ratio obtained indicated that equal resolution among radars yields a monotonically decreasing, wavenumber-dependent response function. A gain of 0.95 was demonstrated in an experimental situation with 40 levels. Possible errors introduced when using radars of unequal resolution were discussed. Finally, it was found that, for some flows, the extent of attenuation depends on the number of vertical levels resolvable by the radars.
Bayesian Model Averaging for Propensity Score Analysis.
Kaplan, David; Chen, Jianshen
2014-01-01
This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
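The article's BMA machinery (the BMA package, Occam's window, full MCMC) is much richer than can be reproduced here; the toy sketch below only illustrates the approximate idea of averaging propensity scores across candidate logistic models with BIC-based posterior-probability weights (the candidate model list, the simulated data, and the exp(-BIC/2) weighting are all simplifying assumptions):

```python
import numpy as np

def fit_logit(X, t, iters=25):
    """Logistic regression by Newton-Raphson (IRLS); returns coefficients
    including an intercept."""
    X1 = np.column_stack([np.ones(len(X)), X])
    b = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ b))
        W = p * (1 - p)
        H = X1.T @ (X1 * W[:, None]) + 1e-8 * np.eye(X1.shape[1])
        b += np.linalg.solve(H, X1.T @ (t - p))
    return b

def bic_weighted_ps(models, X, t):
    """Approximate BMA: average fitted propensity scores over candidate
    covariate subsets, weighted by exp(-BIC/2) as a rough surrogate for
    posterior model probabilities."""
    n = len(t)
    scores, bics = [], []
    for cols in models:
        b = fit_logit(X[:, cols], t)
        X1 = np.column_stack([np.ones(n), X[:, cols]])
        p = 1.0 / (1.0 + np.exp(-X1 @ b))
        ll = np.sum(t * np.log(p) + (1 - t) * np.log(1 - p))
        bics.append(-2 * ll + len(b) * np.log(n))
        scores.append(p)
    w = np.exp(-0.5 * (np.array(bics) - min(bics)))
    w /= w.sum()
    return np.average(np.array(scores), axis=0, weights=w)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))                      # two candidate covariates
t = (rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))).astype(float)
ps = bic_weighted_ps([[0], [1], [0, 1]], X, t)     # model-averaged scores
```

Note that this sketch only averages over models; the fully Bayesian approach described in the abstract additionally propagates parameter uncertainty into the propensity score, which is what produces the larger uncertainty estimates the authors report.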
Roodposhti, Pezhman Mohamadi; Dabiri, Najafgholi
2012-01-01
Thirty-two Holstein female calves (initial body weight = 40±3.0 kg) were used to investigate the effects of probiotic and prebiotic on average daily gain (ADG), fecal E. coli count, white blood cell count, plasma IgG1 level and cell-mediated immune response to injection of phytohemagglutinin in suckling female calves. Calves were assigned randomly to one of four treatments: whole milk without additives (control), whole milk containing probiotic, whole milk containing prebiotic, and whole milk containing both probiotic and prebiotic (synbiotic). Average daily gain was greater in calves fed probiotic, prebiotic and synbiotic at weeks 6, 7 and 8 (p<0.05). E. coli count was significantly lower in calves fed probiotic, prebiotic and synbiotic on d 56 (p<0.05). There was no significant difference between treatments in blood samples and cell-mediated response. This study showed that addition of probiotic, prebiotic and a combination of these additives to milk enhanced ADG and reduced fecal E. coli count in preruminant calves. PMID:25049688
Effects of 27-day averaged tidal forcing on the thermosphere-ionosphere as examined by the TIEGCM
NASA Astrophysics Data System (ADS)
Maute, A. I.; Forbes, J. M.; Hagan, M. E.
2016-12-01
The variability of the ionosphere and thermosphere is influenced by solar and geomagnetic forcing and by lower atmosphere coupling. During the last solar minimum, low- and mid-latitude ionospheric observations showed strong longitudinal signals which are associated with upward propagating tides. Progress has been made in explaining observed ionospheric and thermospheric variations by investigating possible coupling mechanisms, e.g., wind dynamo, propagation of tides into the upper thermosphere, global circulation changes, and compositional effects. However, a comprehensive set of simultaneous measurements of key quantities needed to fully understand the vertical coupling is still missing. The Ionospheric Connection (ICON) explorer will provide such a data set, and the data interpretation will be supported by numerical modeling to investigate the lower to upper atmosphere coupling. Due to ICON's orbit, 27 days of measurements are needed to cover all longitudes and local times and to be able to derive tidal components. In this presentation we employ the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIEGCM) to evaluate the influence of the 27-day processing window on the ionosphere and thermosphere state. Specifically, we compare TIEGCM simulations that are forced at the 97 km lower boundary by daily tidal fields from 2009 MERRA-forced TIME-GCM output [Häusler et al., 2015], and by the corresponding 27-day mean tidal fields. Apart from the expected reduced day-to-day variability when using 27-day averaged tidal forcing, the simulations indicate net NmF2 changes at low latitudes, which vary with season. First results indicate that compositional effects may influence the NmF2 modifications. We will quantify the effect of using a 27-day averaged diurnal tidal forcing versus daily ones on the equatorial vertical drift, low- and mid-latitude NmF2 and hmF2, global circulation, and composition. The possible causes for the simulated changes will be examined. The result of
ERIC Educational Resources Information Center
Nagengast, Benjamin; Marsh, Herbert W.
2012-01-01
Being schooled with other high-achieving peers has a detrimental influence on students' self-perceptions: School-average and class-average achievement have a negative effect on academic self-concept and career aspirations--the big-fish-little-pond effect. Individual achievement, on the other hand, predicts academic self-concept and career…
NASA Astrophysics Data System (ADS)
Shu, Di; Guo, Lei; Yin, Liang; Chen, Zhaoyang; Chen, Juan; Qi, Xin
2015-11-01
The average volume of magnetic Barkhausen jump (AVMBJ) v̄ generated by irreversible magnetic domain wall displacement under the incentive magnetic field H for ferromagnetic materials, together with the relationship between the irreversible magnetic susceptibility χirr and the stress σ, is adopted in this paper to study the theoretical relationship between the AVMBJ v̄ (magneto-elasticity noise) and the incentive magnetic field H. The numerical relationship among the AVMBJ v̄, the stress σ and the incentive magnetic field H is then deduced. Utilizing this numerical relationship, the displacement process of the magnetic domain wall for a single crystal is analyzed, and the effect of the incentive magnetic field H and the stress σ on the AVMBJ v̄ (magneto-elasticity noise) is explained from experimental and theoretical perspectives. The saturation velocity of the Barkhausen jump characteristic value curve differs when tensile or compressive stress is applied to ferromagnetic materials, because the resistance to magnetic domain wall displacement differs. The idea of a critical magnetic field in the process of magnetic domain wall displacement is introduced in this paper, which solves the supersaturated calibration problem of the AVMBJ-σ calibration curve.
Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H
2016-08-01
The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution, and spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012.
NASA Astrophysics Data System (ADS)
Hassani, Behzad; Atkinson, Gail
2017-04-01
We develop an empirical site amplification model for sites in central and eastern North America (CENA) using the peak frequency of the site response transfer function (fpeak) and the time-averaged shear-wave velocity in the upper 30 m (VS30). The database for the study includes peak ground-motion amplitudes and 5%-damped pseudo spectral acceleration extracted from the Next-Generation-Attenuation-East database. The site terms are derived by analyzing the residuals calculated from the empirical data with respect to a selected regional GMPE model developed for hard-rock reference site conditions (Atkinson et al., 2015). We develop two alternative site effects models for CENA, each of which assumes that either fpeak or VS30 is the main site variable, then models any remaining residual trends with respect to the other parameter. For the first alternative, assuming that VS30 is the main model parameter, we obtain a frequency-dependent VS30 scaling term that is similar in form to that obtained in previous studies for sites in Western North America (WNA). However, the scaling term is less significant in amplitude for CENA in comparison to that for WNA, suggesting that VS30 is not as indicative of site response in CENA. For the second alternative, assuming that fpeak is the main site-effects parameter, a frequency-independent VS30 scaling term is obtained for CENA, which is much smaller in amplitude compared to the VS30 scaling effect derived in the first approach. This shows that by using fpeak as the primary site-effects modeling parameter, we remove most of the VS30 scaling effects that are implied by the data. Finally, we provide recommendations on the effective use of fpeak and VS30 to model site effects in CENA, differentiating between glaciated and non-glaciated sites. Glaciated sites show larger amplifications compared to non-glaciated sites, especially at intermediate-to-high frequencies, presumably due to the high impedance contrast at the base of the soil profile.
Paten, A M; Pain, S J; Peterson, S W; Lopez-Villalobos, N; Kenyon, P R; Blair, H T
2016-11-21
The foetal mammary gland is sensitive to maternal weight and nutrition during gestation, which could affect offspring milk production. It has previously been shown that ewes born to dams offered maintenance nutrition during pregnancy (day 21 to 140 of gestation) produced greater milk, lactose and CP yields in their first lactation when compared with ewes born to dams offered ad libitum nutrition. In addition, ewes born to heavier dams produced greater milk and lactose yields when compared with ewes born to lighter dams. The objective of this study was to analyse and compare the 5-year lactation performance of the previously mentioned ewes, born to heavy or light dams that were offered maintenance or ad libitum pregnancy nutrition. Ewes were milked once per week, for the first 6 weeks of their lactation, for 5 years. Using milk yield and composition data, accumulated yields were calculated over a 42-day period for each year for milk, milk fat, CP, true protein, casein and lactose using a Legendre orthogonal polynomial model. Over the 5-year period, ewes born to heavy dams produced greater average milk (P=0.04), lactose (P=0.01) and CP (P=0.04) yields than offspring born to light dams. In contrast, over the 5-year period dam nutrition during pregnancy did not affect average offspring milk yields or composition (P>0.05), but did increase milk and lactose accumulated yield (P=0.03 and 0.01, respectively) in the first lactation. These results indicate that maternal gestational nutrition appears to affect only the first lactational performance of ewe offspring. Neither dam nutrition nor size affected grand-offspring live weight gain to, or live weight at, weaning (P>0.05). Combined, these data indicate that under the conditions of the present study, manipulating dam weight or nutrition in pregnancy can have some effects on offspring lactational performance; however, these effects are not large enough to alter grand-offspring growth to weaning. Therefore, such manipulations
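Accumulated yields of the kind described above can be obtained by fitting an orthogonal Legendre polynomial to the weekly test-day records and integrating the fitted curve; the sketch below uses invented daily yields and a second-order fit (the study's model order, covariance structure, and data are not reproduced):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Weekly test-day milk yields (hypothetical, litres/day) at days 7..42
days = np.array([7, 14, 21, 28, 35, 42], dtype=float)
yields = np.array([2.1, 2.4, 2.3, 2.0, 1.8, 1.5])

# Map the sampled days onto [-1, 1], fit a 2nd-order Legendre series,
# then integrate the fitted curve to approximate accumulated yield
# over the sampled span.
x = 2 * (days - days.min()) / (days.max() - days.min()) - 1
coef = L.legfit(x, yields, deg=2)
integ = L.legint(coef)                               # antiderivative
area = L.legval(1.0, integ) - L.legval(-1.0, integ)  # integral on [-1, 1]
accumulated = area * (days.max() - days.min()) / 2   # back to day units
```

Because Legendre polynomials above order zero integrate to zero over [-1, 1], the accumulated yield is driven by the constant coefficient, i.e. roughly the average daily yield times the span, here about 72 litres over the 35 sampled days.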
ERIC Educational Resources Information Center
Gibbison, Godfrey A.; Henry, Tracyann L.; Perkins-Brown, Jayne
2011-01-01
Freshman grade point average, in particular first semester grade point average, is an important predictor of survival and eventual student success in college. As many institutions of higher learning are searching for ways to improve student success, one would hope that policies geared towards the success of freshmen have long term benefits…
ERIC Educational Resources Information Center
Carriaga, Benito T.
2012-01-01
This study evaluated the impact of the master schedule design on student attendance, discipline, and grade point averages. Unexcused and excused absences, minor and major infraction, and grade point averages in three high schools during the 2008-09 and 2009-10 school years were included in the study. The purpose was to examine if any difference…
Campbell, Justin S; Loeffler, George H; Pulos, Steven; Campbell, Annie W
2016-11-01
This study tested the hypothesis that inpatient/residential treatment for PTSD associated with military duty should result in significantly lower PTSD symptoms at patient discharge compared to patient intake. Meta-analysis of effects comparing intake and discharge PTSD symptoms from 26 samples, reported in 16 studies, supported this hypothesis (d = -.73; p < .00001). Moderator analysis indicated between-study variation in PTSD symptom changes was predominantly due to the type of measure used, with the Clinician Administered PTSD Scale producing the largest effect (d = -1.60). Larger effects were also observed for more recently published studies and studies with larger percentages of females. These findings support the efficacy of inpatient treatment for military PTSD, although a causal factor for effectiveness could not be identified. Further, the results indicate between-program comparisons of symptom reduction require the same measure of PTSD.
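The pooled d reported above is a standardized mean difference; a minimal sketch of how such effect sizes and a crude weighted average could be computed is shown below (the scores and sample sizes are invented, and a real meta-analysis would use inverse-variance weights, correction for small-sample bias, and a random-effects model):

```python
import math

def cohens_d(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference (discharge minus intake) with a
    pooled SD; negative d means lower PTSD symptoms at discharge."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# Hypothetical intake/discharge symptom scores for three samples:
# (discharge_mean, intake_mean, sd_discharge, sd_intake, n_dis, n_in)
samples = [
    (55.0, 75.0, 18.0, 16.0, 40, 40),
    (60.0, 72.0, 20.0, 19.0, 30, 30),
    (50.0, 78.0, 17.0, 18.0, 55, 55),
]
ds = [cohens_d(*s) for s in samples]

# Sample-size-weighted average (a crude fixed-effect stand-in)
ns = [s[4] + s[5] for s in samples]
d_avg = sum(d * n for d, n in zip(ds, ns)) / sum(ns)
```

All three invented samples show symptom reduction (negative d), and the weighted average lands near -1.2, on the same side and order of magnitude as the pooled effects the abstract reports.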
Effect of fringe-artifact correction on sub-tomogram averaging from Zernike phase-plate cryo-TEM.
Kishchenko, Gregory P; Danev, Radostin; Fisher, Rebecca; He, Jie; Hsieh, Chyongere; Marko, Michael; Sui, Haixin
2015-09-01
Zernike phase-plate (ZPP) imaging greatly increases contrast in cryo-electron microscopy, however fringe artifacts appear in the images. A computational de-fringing method has been proposed, but it has not been widely employed, perhaps because the importance of de-fringing has not been clearly demonstrated. For testing purposes, we employed Zernike phase-plate imaging in a cryo-electron tomographic study of radial-spoke complexes attached to microtubule doublets. We found that the contrast enhancement by ZPP imaging made nonlinear denoising insensitive to the filtering parameters, such that simple low-frequency band-pass filtering made the same improvement in map quality. We employed sub-tomogram averaging, which compensates for the effect of the "missing wedge" and considerably improves map quality. We found that fringes (caused by the abrupt cut-on of the central hole in the phase plate) can lead to incorrect representation of a structure that is well-known from the literature. The expected structure was restored by amplitude scaling, as proposed in the literature. Our results show that de-fringing is an important part of image-processing for cryo-electron tomography of macromolecular complexes with ZPP imaging.
NASA Technical Reports Server (NTRS)
Haugstad, B. S.; Eshleman, V. R.
1979-01-01
The dependence of the effects of planetary atmospheric turbulence on radio or optical wavelength in occultation experiments is discussed, and the analysis of Hubbard and Jokipii (1977) is criticized. It is argued that in deriving a necessary condition for the applicability of their method, Hubbard and Jokipii neglect a factor proportional to the square of the ratio of the atmospheric or local Fresnel zone radius and the inner scale of turbulence, and fail to establish sufficient conditions, thereby omitting the square of the ratio of the atmospheric scale height and the local Fresnel zone radius. The total discrepancy is said to mean that the results correspond to geometrical optics instead of wave optics, as claimed, thus being inapplicable in a discussion of wavelength dependence. Calculations based on geometrical optics show that the bias in the average bending angle depends on the wavelength in the same way as does the bias in phase path caused by turbulence in a homogeneous atmosphere. Hubbard and Jokipii comment that the criterion of Haugstad and Eshleman is incorrect and show that there is a large wave optical domain where the results are independent of wavelength.
NASA Technical Reports Server (NTRS)
Haugstad, B. S.; Eshleman, V. R.
1979-01-01
The dependence of the effects of planetary atmospheric turbulence on radio or optical wavelength in occultation experiments is discussed, and the analysis of Hubbard and Jokipii (1977) is criticized. It is argued that in deriving a necessary condition for the applicability of their method, Hubbard and Jokipii neglect a factor proportional to the square of the ratio of the atmospheric or local Fresnel zone radius to the inner scale of turbulence, and fail to establish sufficient conditions, thereby omitting the square of the ratio of the atmospheric scale height to the local Fresnel zone radius. The total discrepancy is said to mean that the results correspond to geometrical optics instead of wave optics, as claimed, thus being inapplicable in a discussion of wavelength dependence. Calculations based on geometrical optics show that the bias in the average bending angle depends on the wavelength in the same way as does the bias in phase path caused by turbulence in a homogeneous atmosphere. Hubbard and Jokipii comment that the criterion of Haugstad and Eshleman is incorrect and show that there is a large wave optical domain where the results are independent of wavelength.
2014-01-01
Background: The study evaluated the effect of New Neonatal Porcine Diarrhoea Syndrome (NNPDS) on average daily gain (ADG) and mortality and described the clinical manifestations in four herds suffering from the syndrome. NNPDS is a diarrhoeic syndrome affecting piglets within the first week of life, which is not caused by enterotoxigenic Escherichia coli (ETEC), Clostridium perfringens (C. perfringens) type A/C, Clostridium difficile (C. difficile), rotavirus A, coronavirus, Cystoisospora suis, Strongyloides ransomi, Giardia spp or Cryptosporidium spp. Results: Piglets were estimated to have a negative ADG of 9 and 14 g when diarrhoeic for 1 day and >1 day, respectively. However, if only diarrhoeic on the day of birth, no negative effect on ADG was seen. Piglets originating from severely affected litters were estimated to have a reduced ADG of 38 g. The study did not show an overall effect of diarrhoea on mortality, but herd of origin, sow parity, birth weight, and gender were significantly associated with mortality. In one of the herds, approximately 25% of the diarrhoeic piglets vs. 6% of the non-diarrhoeic piglets died, and 74% of necropsied piglets were diagnosed with enteritis. These findings indicate that the high mortality seen in this herd was due to diarrhoea. Conclusions: NNPDS negatively affected ADG in piglets, and even piglets that were diarrhoeic for one day only experienced a reduction in ADG. However, the study showed that diarrhoea restricted to the day of birth did not affect ADG and suggested this phenomenon to be unrelated to the syndrome. Since the diarrhoeal status of the litter had important effects on ADG, future research on NNPDS probably ought to focus on piglets from severely affected litters. The study showed important dissimilarities in the course of diarrhoea between the herds, and one herd was considerably more affected than the others. Within this herd, NNPDS seemed to be associated with a higher mortality, whereas in general the
Molins, Sergi; Trebotich, David; Steefel, Carl I.; Shen, Chaopeng
2012-03-30
The scale-dependence of geochemical reaction rates hinders their use in continuum scale models intended for the interpretation and prediction of chemical fate and transport in subsurface environments such as those considered for geologic sequestration of CO2. Processes that take place at the pore scale, especially those involving mass transport limitations to reactive surfaces, may contribute to the discrepancy commonly observed between laboratory-determined and continuum-scale or field rates. In this study we investigate the dependence of mineral dissolution rates on the pore structure of the porous media by means of pore scale modeling of flow and multicomponent reactive transport. The pore scale model is composed of high-performance simulation tools and algorithms for incompressible flow and conservative transport combined with a general-purpose multicomponent geochemical reaction code. The model performs direct numerical simulation of reactive transport based on an operator-splitting approach to coupling transport and reactions. The approach is validated with a Poiseuille flow single-pore experiment and verified with an equivalent 1-D continuum-scale model of a capillary tube packed with calcite spheres. Using the case of calcite dissolution as an example, the high-resolution model is used to demonstrate that nonuniformity in the flow field at the pore scale has the effect of decreasing the overall reactivity of the system, even when systems with identical reactive surface area are considered. The effect becomes more pronounced as the heterogeneity of the reactive grain packing increases, particularly where the flow slows sufficiently such that the solution approaches equilibrium locally and the average rate becomes transport-limited.
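The operator-splitting coupling described in this abstract can be illustrated with a minimal sketch: each time step, solve transport, then pass the result to the reaction solver. The toy below uses first-order upwind advection and simple first-order decay as a stand-in for the geochemical code; all parameters are invented for illustration, not taken from the study.

```python
import numpy as np

# Toy 1-D advection + reaction solved by sequential operator splitting,
# mirroring the transport-then-react coupling described above.
# All parameters are illustrative, not from the study.
nx = 100           # grid cells
dx, dt = 1.0, 0.1  # cell size, time step (CFL = u*dt/dx = 0.05, stable)
u, k = 0.5, 0.05   # flow velocity, first-order reaction rate

c = np.zeros(nx)   # concentration field

def transport_step(c):
    """First-order upwind advection."""
    cn = c.copy()
    cn[1:] -= u * dt / dx * (c[1:] - c[:-1])
    return cn

def reaction_step(c):
    """First-order decay, a stand-in for the geochemical solver."""
    return c * np.exp(-k * dt)

for _ in range(2000):
    c[0] = 1.0                            # fixed inflow concentration
    c = reaction_step(transport_step(c))  # split: transport, then reaction
c[0] = 1.0
```

At steady state the profile decays downstream roughly like exp(-k x / u), modified by the numerical diffusion of the upwind scheme; a finer split (e.g., Strang splitting) would reduce the splitting error.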
Migraine treatment and placebo effect.
Speciali, José G; Peres, Mário; Bigal, Marcelo E
2010-03-01
Placebos are typically defined as physiologically inactive substances that elicit a therapeutic response. The antipode of the placebo effect is the nocebo effect, or the negative effects of placebo, where unpleasant symptoms (e.g., adverse events) emerge after the administration of placebo. Placebo analgesia is one of the most striking examples of the cognitive modulation of pain perception. Herein we focus on the importance of placebo in headache research. We first review the mechanisms of the placebo effect. We then focus on the importance of placebo in the acute treatment of migraine. We close by discussing the importance of placebo in the preventive treatment of migraine and our perspectives on placebo research over the next 5 years.
Innovative and effective landfill treatment
Butler, P.B.; Karmazyn, J.; Scrivner, N.C.
1996-12-31
An innovative and effective metals treatment technology was developed for a Superfund site landfill. The new landfill technology reduced the remedial cost of that operable unit from $34 million (MM) to $12 MM. In 1993, EPA issued a Record of Decision (ROD) for a Superfund site in Newport, Delaware. Among other remedies, deep-soil mixing was specified for a 16-acre landfill. New information on waste volumes developed in the remedial design phase increased the cost of the remedy from $14 MM to $34 MM. An alternative treatment technology was developed to immobilize the metal contaminants with no increase in volume. EPA was included early in the development to ensure the proposal would be focused on issues critical to its review and acceptance. EPA accepted this technology and issued an Explanation of Significant Differences decision. The new remedy is estimated to cost $12 MM. The constituents of concern at the site are primarily metals: barium, lead, zinc, and cadmium. A treatment technology was developed which employed straightforward chemical precipitation: sulfate addition for barium and sulfide addition for lead, zinc, and cadmium. The combined effect of numerous competing chemical equilibrium effects was modeled with the Environmental Simulation Program (ESP), a state-of-the-art equilibrium simulation program from OLI Systems, Inc. Due to the potential effects of acid rain, limestone was added to the treatment plan.
The effect of spatial averaging and glacier melt on detecting a forced signal in regional sea level
NASA Astrophysics Data System (ADS)
Richter, Kristin; Marzeion, Ben; Riva, Riccardo
2017-03-01
We investigate the spatial scales that are necessary to detect an externally forced signal in regional sea level within a selected fixed time period. Detection on a regional scale is challenging due to the increasing magnitude of unforced variability in dynamic sea level on progressively smaller spatial scales. Using control simulations with no evolving external forcing, we quantify the magnitude of regional internal variability as a function of the degree of spatial averaging. We test various averaging techniques, such as zonal averaging and averaging grid points within selected radii. By comparing the results from the control simulations with historical and 21st-century simulations, the procedure allows us to estimate to what degree the data have to be averaged spatially in order to detect a forced signal within certain periods (e.g., periods with good observational coverage). We find that zonal averaging over ocean basins is necessary to detect a forced signal in steric and dynamic sea level during the past 25 years, while a signal emerges in 63% of the ocean areas over the past 45 years when smoothing with a 2000 km filter or less is applied. We also demonstrate that the addition of the glacier contribution increases the signal-to-noise ratio of regional sea level changes, leading to an emergence 10–20 years earlier in regions away from the sources of the ice mass loss. With smoothing, this results in the detection of an external signal in 90% of the ocean areas over the past 45 years.
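The detection logic in this abstract rests on a simple statistical point: spatial averaging suppresses internal variability while the forced trend survives, raising the signal-to-noise ratio. The sketch below makes that point with purely synthetic numbers and independent grid-point noise; real dynamic sea-level variability is spatially correlated, so the actual gain is smaller, which is why basin-scale zonal averaging is needed in the study.

```python
import numpy as np

# Spatial averaging of N grid points damps independent noise by ~1/sqrt(N)
# while a common forced trend is unchanged. All numbers are invented.
rng = np.random.default_rng(0)
years = np.arange(45)                              # a 45-year window
trend = 0.002 * years                              # forced signal: 2 mm/yr
noise = rng.normal(0.0, 0.05, (400, years.size))   # unforced variability (m)

single_point = trend + noise[0]                    # one grid point
regional_avg = trend + noise.mean(axis=0)          # average of 400 points

def trend_snr(series):
    """Fitted linear trend divided by the residual standard deviation."""
    coef = np.polyfit(years, series, 1)
    resid = series - np.polyval(coef, years)
    return coef[0] / resid.std()
```

For independent points the averaged series has roughly 20 times the trend signal-to-noise of a single point, so the forced signal "emerges" decades earlier, mirroring the earlier-emergence result above.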
NASA Astrophysics Data System (ADS)
Sadoun, Raphael; Shlosman, Isaac; Choi, Jun-Hwan; Romano-Díaz, Emilio
2016-10-01
We employ high-resolution cosmological zoom-in simulations focusing on a high-sigma peak and an average cosmological field at z ~ 6-12 in order to investigate the influence of environment and baryonic feedback on galaxy evolution in the reionization epoch. Strong feedback, e.g., galactic winds, caused by elevated star formation rates (SFRs) is expected to play an important role in this evolution. We compare different outflow prescriptions: (i) constant wind velocity (CW), (ii) variable wind scaling with galaxy properties (VW), and (iii) no outflows (NW). The overdensity leads to accelerated evolution of dark matter and baryonic structures, absent from the “normal” region, and to shallow galaxy stellar mass functions at the low-mass end. Although CW shows little dependence on the environment, the more physically motivated VW model does exhibit this effect. In addition, VW can reproduce the observed specific SFR (sSFR) and the sSFR-stellar mass relation, which CW and NW fail to satisfy simultaneously. Winds also differ substantially in affecting the state of the intergalactic medium (IGM). The difference lies in the volume-filling factor of hot, high-metallicity gas, which is near unity for CW, while such gas remains confined in massive filaments for VW, and locked up in galaxies for NW. Such gas is nearly absent from the normal region. Although all wind models suffer from deficiencies, the VW model seems to be promising in correlating the outflow properties with those of host galaxies. Further constraints on the state of the IGM at high z are needed to separate different wind models.
Weber, Nicolai Rosager; Pedersen, Ken Steen; Hansen, Christian Fink; Denwood, Matthew; Hjulsager, Charlotte Kristiane; Nielsen, Jens Peter
2017-02-01
Previous research projects have demonstrated the need for better diagnostic tools to support decisions on medication strategies for infections caused by Escherichia coli F4 (F4) and F18 (F18), Lawsonia intracellularis (LI) and Brachyspira pilosicoli (PILO). This study was carried out as a randomised clinical trial in three Danish pig herds and included 1047 nursery pigs, distributed over 10 batches and 78 pens. The objectives of this study were: (1) to assess the effect of four 5-day treatment strategies (initiated at clinical outbreak of diarrhoea or at fixed time points 14, 21, or 28 days after weaning) on average daily weight gain (ADG); (2) to compare the effect of treatment with doxycycline or tylosine on diarrhoea prevalence, pathogenic bacterial load, and ADG; (3) to evaluate PCR testing of faecal pen floor samples as a diagnostic tool for determining the optimal time of treatment. (1) The four treatment strategies had a significant overall effect on ADG (p=0.01). Pigs starting treatment 14 days after weaning had a significantly higher ADG (42 g) compared to pigs treated on day 28 (p=0.01). (2) When measured 2 days after treatment, doxycycline treatment resulted in fewer LI-positive pens (p=0.004), lower excretion levels of LI (p=0.013), and fewer pens with a high level of LI (p=0.031) compared to pens treated with tylosine. There was no significant difference in F4, F18 and PILO levels after treatment with the two antibiotic compounds. There was a significant difference (p=0.04) in mean diarrhoea prevalence on day 21 of the study between pens treated with tylosine (0.254, 95% CI: 0.184-0.324), and doxycycline (0.167, 95% CI: 0.124-0.210). The type of antibiotic compound was not found to have a significant effect on ADG (p=0.209). (3) Pigs starting treatment on day 14 in pens where F4, F18, LI or PILO were detected by qPCR on the pen floor had a statistically significant increase in ADG (66 g) compared to pigs treated on day 14 in pens where no enteric pathogens
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Outcomes of the Remplissage Procedure and Its Effects on Return to Sports: Average 5-Year Follow-up.
Garcia, Grant H; Wu, Hao-Hua; Liu, Joseph N; Huffman, G Russell; Kelly, John D
2016-05-01
Short-term outcomes for patients with large, engaging Hill-Sachs lesions who underwent remplissage have demonstrated good results. However, limited data are available for longer term outcomes. To evaluate the long-term outcomes of remplissage and determine the long-term rate of return to specific sports postoperatively. Case series; Level of evidence, 4. This was a retrospective review of patients treated with the remplissage procedure from 2007 to 2013. All underwent preoperative magnetic resonance imaging demonstrating large Hill-Sachs lesions by the Rowe criteria and glenoid bone loss <20%. All Hill-Sachs lesions were "off track" by an arthroscopic examination and preoperative imaging. At final follow-up, patients underwent a range of motion evaluation and were administered a detailed outcome survey, which included Western Ontario Shoulder Instability Index (WOSI) and American Shoulder and Elbow Surgeons (ASES) scores as well as questions regarding sports, employment, physical activities, and dislocation events. A total of 50 patients (51 shoulders) were included in the study. The average patient age at surgery was 29.8 years (range, 15.0-72.4 years), and the average follow-up time was 60.7 months (range, 25.5-97.6 months); 20.0% of patients underwent previous surgery on their shoulder. The average postoperative WOSI score was 79.5%, and the average ASES score was 89.3. Six shoulders had dislocation events (11.8%) postoperatively: 3 were traumatic, and 3 were atraumatic. Increased preoperative dislocations led to a greater risk of a postoperative dislocation (P < .001). There was also a trend toward higher postoperative dislocation rates in patients who underwent revision (P = .062). The average loss of external rotation was 5.26° (P = .13). The rate of return to ≥1 sports was 95.5% of patients at an average of 7.0 months postoperatively; 81.0% returned to their previous intensity and level of sport. Of patients who played a throwing sport, 65.5% (n = 19
NASA Astrophysics Data System (ADS)
Feng, Junting; Zhang, Baozhong; Wei, Zheng; Xu, Di
2017-07-01
The eddy-covariance method is an important technique for investigating the exchange of energy and substances between the atmosphere and an ecosystem. However, an inappropriate averaging period leads to inaccurate fluxes and a low energy-balance ratio (EBR). The effects of various averaging periods on fluxes and the EBR are analyzed using flux data from the entire growth stage of maize measured with an eddy-covariance system in northern China. We find that the relative error of the flux between an averaging period of 10-60 min and the commonly used averaging period of 30 min is within 3%. When the averaging period is between 10 and 60 min, the magnitudes of fluxes increase with the length of the averaging period at various growth stages. For averaging periods exceeding 60 min, the magnitudes of fluxes vary significantly, particularly for periods longer than 120 min. In general, EBR > 0.8 in the maize field, tending to increase within periods of 10-60 min, but decreasing rapidly at various growth stages for averaging periods longer than 120 min. Ogive functions indicate that the optimal averaging period for the seedling-shooting and shooting-heading stages is approximately 10-30 min, and that for the heading-filling and filling-maturity stages is 30-60 min.
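The averaging period enters the eddy-covariance calculation through Reynolds decomposition: fluctuations are defined relative to the block mean of each averaging window, and the kinematic flux is the block-mean covariance w'T'. The sketch below compares two averaging periods on synthetic 10-Hz data with a slow drift; all numbers are invented, not the maize-field measurements.

```python
import numpy as np

# Block-averaged eddy-covariance flux for two averaging periods.
# Synthetic data: white-noise turbulence plus a slow mesoscale drift.
rng = np.random.default_rng(1)
fs = 10                                   # sampling frequency (Hz)
t = np.arange(0, 7200, 1.0 / fs)          # two hours of data (s)
w = rng.normal(0.0, 0.3, t.size)          # vertical velocity fluctuations
T = 0.5 * w + rng.normal(0.0, 0.2, t.size) + 2e-4 * t  # temperature + drift

def eddy_flux(w, T, period_s):
    """Mean kinematic heat flux over consecutive averaging blocks."""
    n = int(period_s * fs)
    m = (t.size // n) * n                      # trim to whole blocks
    wb = w[:m].reshape(-1, n)
    Tb = T[:m].reshape(-1, n)
    wp = wb - wb.mean(axis=1, keepdims=True)   # w' per block
    Tp = Tb - Tb.mean(axis=1, keepdims=True)   # T' per block
    return (wp * Tp).mean()

flux_30min = eddy_flux(w, T, 30 * 60)          # conventional 30-min period
flux_10min = eddy_flux(w, T, 10 * 60)
```

With uncorrelated high-frequency turbulence the two periods agree to within a few percent, consistent with the <3% relative error reported above for 10-60 min periods; longer blocks would admit more of the low-frequency drift into the covariance.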
Simmons, H Clifton; Oxford, D Eric; Hill, Matthew D
2008-01-01
Fifty-six consecutive patients in a referral-based practice seeking treatment for a complex chronic painful temporomandibular disorder (TMD) were enrolled in a retrospective study to evaluate the skeletal relationship of patients with TMD compared to the distribution of skeletal patterns found in the average population. During the standard clinical workup, lateral cephalometric radiographs were performed. All fifty-six (56) cephalometric radiographs were analyzed using the Wits appraisal. Based on the results of the Wits analysis, 34.6 percent of the patients were skeletal Class I, 63.6 percent were skeletal Class II, and 1.8 percent were skeletal Class III. These results were compared with the data published by the National Health and Nutrition Examination Survey (NHANES) in Proffit's text Contemporary Orthodontics, which reports that in the general population eighty to eighty-five percent (80-85%) are skeletal Class I, fifteen percent (15%) are skeletal Class II, and one percent (1%) are skeletal Class III. The sample analyzed thus indicates that TMD patients have a higher prevalence of skeletal Class II than the general population.
Aarabi, Ardalan; Osharina, Victoria; Wallois, Fabrice
2017-07-15
Slow and rapid event-related designs are used in fMRI and functional near-infrared spectroscopy (fNIRS) experiments to temporally characterize the brain hemodynamic response to discrete events. Conventional averaging (CA) and the deconvolution method (DM) are the two techniques commonly used to estimate the Hemodynamic Response Function (HRF) profile in event-related designs. In this study, we conducted a series of simulations using synthetic and real NIRS data to examine the effect of the main confounding factors, including event sequence timing parameters, different types of noise, signal-to-noise ratio (SNR), temporal autocorrelation and temporal filtering on the performance of these techniques in slow and rapid event-related designs. We also compared systematic errors in the estimates of the fitted HRF amplitude, latency and duration for both techniques. We further compared the performance of deconvolution methods based on Finite Impulse Response (FIR) basis functions and gamma basis sets. Our results demonstrate that DM was much less sensitive to confounding factors than CA. Event timing was the main parameter largely affecting the accuracy of CA. In slow event-related designs, deconvolution methods provided similar results to those obtained by CA. In rapid event-related designs, our results showed that DM outperformed CA for all SNR, especially above -5 dB regardless of the event sequence timing and the dynamics of background NIRS activity. Our results also show that periodic low-frequency systemic hemodynamic fluctuations as well as phase-locked noise can markedly obscure hemodynamic evoked responses. Temporal autocorrelation also affected the performance of both techniques by inducing distortions in the time profile of the estimated hemodynamic response with inflated t-statistics, especially at low SNRs. We also found that high-pass temporal filtering could substantially affect the performance of both techniques by removing the low-frequency components of
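The deconvolution method (DM) discussed in this abstract is, at its core, least squares on a lagged-indicator design matrix built from a Finite Impulse Response basis: each column is the event train shifted by one lag, and the fitted coefficients trace out the HRF even when responses from rapid events overlap. The sketch below is a minimal, self-contained illustration on synthetic data; the toy HRF shape, timings and noise level are invented, not the authors' NIRS pipeline.

```python
import numpy as np

# FIR-basis deconvolution of overlapping event responses by least squares.
rng = np.random.default_rng(3)
n, hrf_len = 2000, 20
lags = np.arange(hrf_len)
hrf = lags * np.exp(-lags / 4.0)   # toy HRF shape (gamma-like), invented
hrf /= hrf.max()

# rapid event-related design: 60 events, inter-event spacing down to 5 samples
onsets = np.sort(rng.choice(np.arange(0, n - hrf_len, 5), 60, replace=False))
stim = np.zeros(n)
stim[onsets] = 1.0
y = np.convolve(stim, hrf)[:n] + rng.normal(0.0, 0.2, n)  # overlap + noise

# design matrix: one column per FIR lag (shifted event indicator)
X = np.zeros((n, hrf_len))
for lag in lags:
    X[lag:, lag] = stim[: n - lag]
hrf_est, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Conventional averaging would simply average fixed-length segments after each onset, which is biased here because neighboring responses bleed into each segment; the least-squares fit removes that overlap, matching the abstract's finding that DM outperforms CA in rapid designs.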
ERIC Educational Resources Information Center
Hernandez, Barbara L. Michiels; Ward, Susan; Strickland, George
2006-01-01
Legislative mandates and reforms hold universities accountable for student certification test performance. The purpose of this investigation was to determine if cumulative grade point average scores and the preprofessional academic skills test scores predict performance on elementary certification test (professional development) scores of…
A. David; E. Humenberger
2017-01-01
Because jack pine (Pinus banksiana Lamb.) is serotinous, it retains multiple years of cones until environmental conditions are favorable for releasing seed. These cones, which contain seed cohorts that developed under a variety of growing seasons, can be accurately aged using bud scale scars on twigs and branches. By calculating the average daily...
NASA Technical Reports Server (NTRS)
Kaye, Jack A.
1987-01-01
Chemical reaction rates in two-dimensional atmospheric models are usually calculated using the product of the zonal means of rate coefficients and constituent concentrations, rather than the rigorous zonal mean of the corresponding products. This assumption has been tested for the reactions O + NO2 yields NO + O2 and NO + O3 yields NO2 + O2 using mapped Limb Infrared Monitor of the Stratosphere (LIMS) data from the Nimbus 7 satellite and found to be quite satisfactory for winter 1979 at 60 deg N in the upper stratosphere. Relative differences between the two-dimensional averaged rate and the more rigorous rate, calculated from the full, longitudinally varying temperatures and mixing ratios, were small (usually below 5 percent) and exceeded 15 percent only during times of strong dynamical activity. At those times or locations where stratospheric circulation is primarily zonal, the two averages agreed to within a few percent.
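The approximation being tested replaces the rigorous zonal mean of a product, <k*n>, with the product of zonal means, <k><n>; the neglected term is the zonal covariance between the rate coefficient and the constituent concentration. The sketch below shows the size of that term for an invented temperature wave and density wave around a latitude circle (stand-ins for the LIMS fields, not actual data).

```python
import numpy as np

# Product of zonal means vs zonal mean of the product, as in the 2-D
# model assumption above. All wave amplitudes and rates are invented.
lon = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
T = 230.0 + 5.0 * np.cos(lon)                  # temperature wave (K)
k = 1.0e-11 * np.exp(-1500.0 / T)              # Arrhenius-style rate coefficient
n = 1.0e9 * (1.0 + 0.1 * np.cos(lon + 0.5))    # constituent density wave

rigorous = np.mean(k * n)          # zonal mean of the product
approx = np.mean(k) * np.mean(n)   # the 2-D model approximation
rel_diff = abs(rigorous - approx) / rigorous
```

For these modest, partially out-of-phase waves the relative difference is well under 5 percent, in line with the abstract's result; the error grows when strong dynamical activity increases the wave amplitudes and aligns the phases of temperature and mixing-ratio perturbations.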
NASA Technical Reports Server (NTRS)
Fried, D. L.
1975-01-01
Laser scintillation data obtained by the NASA Goddard Space Flight Center balloon flight no. 5 from White Sands Missile Range on 19 October 1973 are analyzed. The measurement data, taken with various size receiver apertures, were related to predictions of aperture averaging theory, and it is concluded that the data are in reasonable agreement with theory. The following parameters are assigned to the vertical distribution of the strength of turbulence during the period of the measurements (daytime), for lambda = 0.633 microns, and the source at the zenith: the aperture averaging length is d_0 = 0.125 m, and the log-amplitude variance is (beta_l)^2 = 0.084 square nepers. This corresponds to a normalized point intensity variance of 0.40.
NASA Astrophysics Data System (ADS)
Cyranka, Jacek; Zgliczyński, Piotr
2016-10-01
We describe a topological method to study the dynamics of dissipative PDEs on a torus with rapidly oscillating forcing terms. We show that a dissipative PDE, which is invariant with respect to the Galilean transformations, with a large average initial velocity can be reduced to a problem with rapidly oscillating forcing terms. We apply the technique to the viscous Burgers' equation, and the incompressible 2D Navier-Stokes equations with a time-dependent forcing. We prove that for a large initial average speed the equation admits a bounded eternal solution, which attracts all other solutions forward in time. For the incompressible 3D Navier-Stokes equations we establish the existence of a locally attracting solution.
Ganesh, Santhi K.; Chasman, Daniel I.; Larson, Martin G.; Guo, Xiuqing; Verwoert, Germain; Bis, Joshua C.; Gu, Xiangjun; Smith, Albert V.; Yang, Min-Lee; Zhang, Yan; Ehret, Georg; Rose, Lynda M.; Hwang, Shih-Jen; Papanicolau, George J.; Sijbrands, Eric J.; Rice, Kenneth; Eiriksdottir, Gudny; Pihur, Vasyl; Ridker, Paul M.; Vasan, Ramachandran S.; Newton-Cheh, Christopher; Newton-Cheh, Christopher; Johnson, Toby; Gateva, Vesela; Tobin, Martin D.; Bochud, Murielle; Coin, Lachlan; Najjar, Samer S.; Zhao, Jing Hua; Heath, Simon C.; Eyheramendy, Susana; Papadakis, Konstantinos; Voight, Benjamin F.; Scott, Laura J.; Zhang, Feng; Farrall, Martin; Tanaka, Toshiko; Wallace, Chris; Chambers, John C.; Khaw, Kay-Tee; Nilsson, Peter; van der Harst, Pim; Polidoro, Silvia; Grobbee, Diederick E.; Onland-Moret, N. Charlotte; Bots, Michiel L.; Wain, Louise V.; Elliott, Katherine S.; Teumer, Alexander; Luan, Jian’an; Lucas, Gavin; Kuusisto, Johanna; Burton, Paul R.; Hadley, David; McArdle, Wendy L.; Brown, Morris; Dominiczak, Anna; Newhouse, Stephen J.; Samani, Nilesh J.; Webster, John; Zeggini, Eleftheria; Beckmann, Jacques S.; Bergmann, Sven; Lim, Noha; Song, Kijoung; Vollenweider, Peter; Waeber, Gerard; Waterworth, Dawn M.; Yuan, Xin; Groop, Leif; Orho-Melander, Marju; Allione, Alessandra; Di Gregorio, Alessandra; Guarrera, Simonetta; Panico, Salvatore; Ricceri, Fulvio; Romanazzi, Valeria; Sacerdote, Carlotta; Vineis, Paolo; Barroso, Inês; Sandhu, Manjinder S.; Luben, Robert N.; Crawford, Gabriel J.; Jousilahti, Pekka; Perola, Markus; Boehnke, Michael; Bonnycastle, Lori L.; Collins, Francis S.; Jackson, Anne U.; Mohlke, Karen L.; Stringham, Heather M.; Valle, Timo T.; Willer, Cristen J.; Bergman, Richard N.; Morken, Mario A.; Döring, Angela; Gieger, Christian; Illig, Thomas; Meitinger, Thomas; Org, Elin; Pfeufer, Arne; Wichmann, H. 
Erich; Kathiresan, Sekar; Marrugat, Jaume; O’Donnell, Christopher J.; Schwartz, Stephen M.; Siscovick, David S.; Subirana, Isaac; Freimer, Nelson B.; Hartikainen, Anna-Liisa; McCarthy, Mark I.; O’Reilly, Paul F.; Peltonen, Leena; Pouta, Anneli; de Jong, Paul E.; Snieder, Harold; van Gilst, Wiek H.; Clarke, Robert; Goel, Anuj; Hamsten, Anders; Peden, John F.; Seedorf, Udo; Syvänen, Ann-Christine; Tognoni, Giovanni; Lakatta, Edward G.; Sanna, Serena; Scheet, Paul; Schlessinger, David; Scuteri, Angelo; Dörr, Marcus; Ernst, Florian; Felix, Stephan B.; Homuth, Georg; Lorbeer, Roberto; Reffelmann, Thorsten; Rettig, Rainer; Völker, Uwe; Galan, Pilar; Gut, Ivo G.; Hercberg, Serge; Lathrop, G. Mark; Zeleneka, Diana; Deloukas, Panos; Soranzo, Nicole; Williams, Frances M.; Zhai, Guangju; Salomaa, Veikko; Laakso, Markku; Elosua, Roberto; Forouhi, Nita G.; Völzke, Henry; Uiterwaal, Cuno S.; van der Schouw, Yvonne T; Numans, Mattijs E.; Matullo, Giuseppe; Navis, Gerjan; Berglund, Göran; Bingham, Sheila A.; Kooner, Jaspal S.; Paterson, Andrew D.; Connell, John M.; Bandinelli, Stefania; Ferrucci, Luigi; Watkins, Hugh; Spector, Tim D.; Tuomilehto, Jaakko; Altshuler, David; Strachan, David P.; Laan, Maris; Meneton, Pierre; Wareham, Nicholas J.; Uda, Manuela; Jarvelin, Marjo-Riitta; Mooser, Vincent; Melander, Olle; Loos, Ruth J.F.; Elliott, Paul; Abecasis, Gonçalo R.; Caulfield, Mark; Munroe, Patricia B.; Raffel, Leslie J.; Amin, Najaf; Rotter, Jerome I.; Liu, Kiang; Launer, Lenore J.; Xu, Ming; Caulfield, Mark; Morrison, Alanna C.; Johnson, Andrew D.; Vaidya, Dhananjay; Dehghan, Abbas; Li, Guo; Bouchard, Claude; Harris, Tamara B.; Zhang, He; Boerwinkle, Eric; Siscovick, David S.; Gao, Wei; Uitterlinden, Andre G.; Rivadeneira, Fernando; Hofman, Albert; Willer, Cristen J.; Franco, Oscar H.; Huo, Yong; Witteman, Jacqueline C.M.; Munroe, Patricia B.; Gudnason, Vilmundur; Palmas, Walter; van Duijn, Cornelia; Fornage, Myriam; Levy, Daniel; Psaty, Bruce M.; Chakravarti, Aravinda
2014-01-01
Blood pressure (BP) is a heritable, quantitative trait with intraindividual variability and susceptibility to measurement error. Genetic studies of BP generally use single-visit measurements and thus cannot remove variability occurring over months or years. We leveraged the idea that averaging BP measured across time would improve phenotypic accuracy and thereby increase statistical power to detect genetic associations. We studied systolic BP (SBP), diastolic BP (DBP), mean arterial pressure (MAP), and pulse pressure (PP) averaged over multiple years in 46,629 individuals of European ancestry. We identified 39 trait-variant associations across 19 independent loci (p < 5 × 10^-8); five associations (in four loci) uniquely identified by our long-term average (LTA) analyses included those of SBP and MAP at 2p23 (rs1275988, near KCNK3), DBP at 2q11.2 (rs7599598, in FER1L5), and PP at 6p21 (rs10948071, near CRIP3) and 7p13 (rs2949837, near IGFBP3). Replication analyses conducted in cohorts with single-visit BP data showed positive replication of associations and a nominal association (p < 0.05). We estimated a 20% gain in statistical power with LTA as compared to single-visit BP association studies. Using LTA analysis, we identified genetic loci influencing BP. LTA might be one way of increasing the power of genetic associations for continuous traits in extant samples for other phenotypes that are measured serially over time. PMID:24975945
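The power gain from long-term averaging comes from a simple variance argument: averaging k visits shrinks the visit-to-visit noise variance by a factor of k while the genetic signal is unchanged, so the association test statistic grows. The simulation below illustrates this with an ordinary single-SNP regression; the effect size, variance components, and sample size are all invented for illustration, not the study's values.

```python
import numpy as np

# Single-visit vs long-term-average (LTA) phenotype in a toy SNP test.
rng = np.random.default_rng(7)
n = 20000
g = rng.binomial(2, 0.3, n).astype(float)              # additive genotype 0/1/2
true_bp = 120.0 + 1.0 * g + rng.normal(0.0, 8.0, n)    # long-term BP, 1 mmHg/allele
visits = true_bp[:, None] + rng.normal(0.0, 10.0, (n, 4))  # 4 noisy visits

single_visit = visits[:, 0]
lta = visits.mean(axis=1)      # averaging cuts visit noise variance by 4

def assoc_t(pheno):
    """t-statistic for the genotype slope in simple linear regression."""
    x = g - g.mean()
    y = pheno - pheno.mean()
    beta = (x @ y) / (x @ x)
    resid = y - beta * x
    se = np.sqrt((resid @ resid) / (n - 2) / (x @ x))
    return beta / se
```

With four visits, residual variance drops from 8^2 + 10^2 to 8^2 + 10^2/4, so the expected t-statistic rises by roughly a third, the same mechanism behind the ~20% power gain estimated in the abstract.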
Koopmann, Jaclyn; Lanaj, Klodiana; Wang, Mo; Zhou, Le; Shi, Junqi
2016-07-01
The teams literature suggests that team tenure improves team psychological safety climate and climate strength in a linear fashion, but the empirical findings to date have been mixed. Alternatively, theories of group formation suggest that new and longer tenured teams experience greater team psychological safety climate than moderately tenured teams. Adopting this second perspective, we used a sample of 115 research and development teams and found that team tenure had a curvilinear relationship with team psychological safety climate and climate strength. Supporting group formation theories, team psychological safety climate and climate strength were higher in new and longer tenured teams compared with moderately tenured teams. Moreover, we found a curvilinear relationship between team tenure and average team member creative performance as partially mediated by team psychological safety climate. Team psychological safety climate improved average team member task performance only when team psychological safety climate was strong. Likewise, team tenure influenced average team member task performance in a curvilinear manner via team psychological safety climate only when team psychological safety climate was strong. We discuss theoretical and practical implications and offer several directions for future research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Kawanabe, Keiichi; Ise, Kentaro; Goto, Koji; Akiyama, Haruhiko; Nakamura, Takashi; Kaneuji, Ayumi; Sugimori, Tanzo; Matsumoto, Tadami
2009-07-01
A method has been developed for creating a bioactive coating on titanium by alkaline and heat treatment, and it has been shown to form a thin layer of hydroxyapatite (HA) on the surface of implants soaked in simulated body fluid. A series of 70 cementless primary total hip arthroplasties using this coating technique on a porous titanium surface was performed, and the patients were followed up for a mean period of 4.8 years. There were no instances of loosening or revision, or formation of a reactive line on the porous coating. Although radiography just after operation showed a gap between the host bone and the socket in over 70% of cases, all the gaps disappeared within a year, indicating the good osteoconduction provided by the coating. Alkaline-heat treatment of titanium to provide a thin HA coating has several advantages over plasma-spraying, including no degeneration or absorption of the HA coating, simplicity of the manufacturing process, and cost effectiveness. In addition, this method allows homogeneous deposition of bone-like apatite within a porous implant. Although this was a relatively short-term study, treatment that creates a bioactive surface on titanium and titanium alloy implants has considerable promise for clinical application.
Angel, Brad M; Simpson, Stuart L; Granger, Ellissah; Goodwyn, Kathryn; Jolley, Dianne F
2017-11-01
Intermittent, fluctuating and pulsed contaminant discharges may result in organisms receiving highly variable contaminant exposures. This study investigated the effects of dissolved copper pulse concentration and exposure duration on the toxicity to two freshwater green algal species. The effects of single copper pulses of between 1 and 48 h duration and continuous exposures (72 h) on growth rate inhibition of Pseudokirchneriella subcapitata and Chlorella sp. were compared on a time-averaged concentration (TAC) basis. Relationships were then derived between the exposure concentration and duration required to elicit different levels of toxicity expressed as inhibition concentration (IC). Continuous exposure IC50s of 3.0 and 1.9 μg/L were measured on a TAC basis for P. subcapitata and Chlorella sp., respectively. Algal growth rates generally recovered to control levels within 24-48 h of the copper pulse removal, with some treatments exhibiting significantly (p < 0.05) higher rates of cell division than controls in this recovery period. For both algae, when exposed to treatments with equivalent TACs, the continuous exposure elicited similar or slightly greater growth rate inhibition than the pulsed exposures. To elicit equivalent inhibition, the exposure concentration increased as the exposure duration decreased, and power models fitted this relationship reasonably well for both species. Water quality guideline values (WQGVs) are predominantly derived using data from continuous exposure toxicity bioassays, despite intermittent contaminant exposures often occurring in aquatic systems. The results indicate the WQGV for copper may be relaxed for pulsed exposures by a factor less than or equivalent to the TAC and still achieve protection of these sensitive algal species. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
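The time-averaged concentration basis used above can be illustrated with a minimal sketch; the function name, the single square-pulse assumption, and the example numbers are ours, not taken from the study:

```python
def time_averaged_concentration(pulse_conc_ug_L, pulse_hours, test_hours=72.0):
    """Time-averaged concentration (TAC) of a single square pulse:
    the pulse concentration scaled by the fraction of the test
    duration that the pulse occupies."""
    return pulse_conc_ug_L * pulse_hours / test_hours

# A hypothetical 12 ug/L copper pulse lasting 6 h in a 72-h bioassay
tac = time_averaged_concentration(12.0, 6.0)  # 1.0 ug/L on a TAC basis
```

On this basis a short, high-concentration pulse and a long, low-concentration exposure can be compared at equal TAC, which is how the pulsed and continuous treatments above were matched.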
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs.
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
NASA Technical Reports Server (NTRS)
Lichtenstein, J. H.
1978-01-01
An analytical method of computing the averaging effect of wing-span size on the loading of a wing induced by random turbulence was adapted for use on a digital electronic computer. The turbulence input was assumed to have a Dryden power spectral density. The computations were made for lift, rolling moment, and bending moment for two span load distributions, rectangular and elliptic. Data are presented to show the wing-span averaging effect for wing-span ratios encompassing current airplane sizes. The rectangular wing-span loading showed a slightly greater averaging effect than did the elliptic loading. In the frequency range most bothersome to airplane passengers, the wing-span averaging effect can reduce the normal lift load, and thus the acceleration, by about 7 percent for a typical medium-sized transport. Some calculations were made to evaluate the effect of using a Von Karman turbulence representation. These results showed that using the Von Karman representation generally resulted in a span averaging effect about 3 percent larger.
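The Dryden spectrum mentioned above has a standard closed form; the sketch below shows one common expression for the vertical-gust power spectral density (the function name and parameter symbols are our assumptions, not taken from the report):

```python
import math

def dryden_vertical_psd(omega, sigma_w, L_w, V):
    """One common form of the Dryden vertical-gust power spectral density,
    written against spatial frequency Omega = omega / V for airspeed V.
    sigma_w: gust intensity; L_w: turbulence scale length."""
    Omega = omega / V
    return (sigma_w**2 * L_w / math.pi) \
        * (1.0 + 3.0 * (L_w * Omega)**2) / (1.0 + (L_w * Omega)**2)**2
```

The spectrum is flat at low frequency and rolls off at high frequency, which is why span averaging (a low-pass effect in itself) mainly trims the upper end of the loading response.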
ERIC Educational Resources Information Center
Seltzer, Michael; Kim, Jinok
2007-01-01
Individual differences in response to a given treatment have been a longstanding interest in education. While many evaluation studies focus on average treatment effects (i.e., the effects of treatments on the levels of outcomes of interest), this paper additionally considers estimating the effects of treatments on the dispersion in outcomes.…
ERIC Educational Resources Information Center
Watson, Jane; Callingham, Rosemary
2013-01-01
This paper considers the responses of 26 teachers to items exploring their pedagogical content knowledge (PCK) about the concept of average. The items explored teachers' knowledge of average, their planning of a unit on average, and their understanding of students as learners in devising remediation for two student responses to a problem. Results…
Areal Average Albedo (AREALAVEALB)
Riihimaki, Laura; Marinovici, Cristina; Kassianov, Evgueni
2008-01-01
The Areal Averaged Albedo VAP yields areal averaged surface spectral albedo estimates from MFRSR measurements collected under fully overcast conditions via a simple one-line equation (Barnard et al., 2008), which links cloud optical depth, normalized cloud transmittance, asymmetry parameter, and areal averaged surface albedo under fully overcast conditions.
Ganesh, Santhi K; Chasman, Daniel I; Larson, Martin G; Guo, Xiuqing; Verwoert, Germain; Bis, Joshua C; Gu, Xiangjun; Smith, Albert V; Yang, Min-Lee; Zhang, Yan; Ehret, Georg; Rose, Lynda M; Hwang, Shih-Jen; Papanicolau, George J; Sijbrands, Eric J; Rice, Kenneth; Eiriksdottir, Gudny; Pihur, Vasyl; Ridker, Paul M; Vasan, Ramachandran S; Newton-Cheh, Christopher; Raffel, Leslie J; Amin, Najaf; Rotter, Jerome I; Liu, Kiang; Launer, Lenore J; Xu, Ming; Caulfield, Mark; Morrison, Alanna C; Johnson, Andrew D; Vaidya, Dhananjay; Dehghan, Abbas; Li, Guo; Bouchard, Claude; Harris, Tamara B; Zhang, He; Boerwinkle, Eric; Siscovick, David S; Gao, Wei; Uitterlinden, Andre G; Rivadeneira, Fernando; Hofman, Albert; Willer, Cristen J; Franco, Oscar H; Huo, Yong; Witteman, Jacqueline C M; Munroe, Patricia B; Gudnason, Vilmundur; Palmas, Walter; van Duijn, Cornelia; Fornage, Myriam; Levy, Daniel; Psaty, Bruce M; Chakravarti, Aravinda
2014-07-03
Blood pressure (BP) is a heritable, quantitative trait with intraindividual variability and susceptibility to measurement error. Genetic studies of BP generally use single-visit measurements and thus cannot remove variability occurring over months or years. We leveraged the idea that averaging BP measured across time would improve phenotypic accuracy and thereby increase statistical power to detect genetic associations. We studied systolic BP (SBP), diastolic BP (DBP), mean arterial pressure (MAP), and pulse pressure (PP) averaged over multiple years in 46,629 individuals of European ancestry. We identified 39 trait-variant associations across 19 independent loci (p < 5 × 10⁻⁸); five associations (in four loci) uniquely identified by our LTA analyses included those of SBP and MAP at 2p23 (rs1275988, near KCNK3), DBP at 2q11.2 (rs7599598, in FER1L5), and PP at 6p21 (rs10948071, near CRIP3) and 7p13 (rs2949837, near IGFBP3). Replication analyses conducted in cohorts with single-visit BP data showed positive replication of associations and a nominal association (p < 0.05). We estimated a 20% gain in statistical power with long-term average (LTA) as compared to single-visit BP association studies. Using LTA analysis, we identified genetic loci influencing BP. LTA might be one way of increasing the power of genetic associations for continuous traits in extant samples for other phenotypes that are measured serially over time. Copyright © 2014 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Labille, Jérôme; Fatin-Rouge, Nicolas; Buffle, Jacques
2007-02-13
Fluorescence correlation spectroscopy (FCS) has been used to study the diffusion of nanometric solutes in agarose gel, at microscopic and macroscopic scales. Agarose gel was prepared and put in contact with aqueous solution. Several factors were studied: (i) the role of gel relaxation after its preparation, (ii) the specific structure of the interfacial zone and its role on the local diffusion coefficient of solutes, and (iii) the comparison between the local diffusion coefficient and the average diffusion coefficient in the gel. Fluorescent dyes and labeled biomolecules were used to cover a size range of solutes of 1.5 to 15 nm. Their transport through the interface from the solution toward the gel was modeled by Fick's first law, based on either average diffusion coefficients or the knowledge of local diffusion coefficients in the system. Experimental results have shown that, at the liquid/gel interface, a gel layer with a thickness of 120 μm is formed with characteristics significantly different from the bulk gel. In particular, in this layer, the porosity of the agarose fiber network is significantly lower than in the bulk gel. The diffusion coefficient of solutes in this layer is consequently decreased for steric reasons. Modeling of solute transport shows that, in the bulk gel, macroscopic diffusion satisfactorily follows the classical Fick's diffusion laws. For the tested solutes, the local diffusion coefficients in the bulk gel, measured at microscopic scale by FCS, were equal, within experimental errors, to the average diffusion coefficients applicable at macroscopic scales (≥mm). This confirms that anomalous diffusion applies only to solutes with sizes close to the gel pore size and at short times.
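Transport of the kind modeled above can be sketched as an explicit 1-D finite-difference diffusion with a position-dependent diffusion coefficient, e.g. a reduced value inside a denser interfacial layer. This is a generic illustration under our own discretization assumptions, not the authors' model:

```python
import numpy as np

def diffuse_1d(conc, D, dx, dt, steps):
    """Explicit 1-D finite-difference diffusion with a position-dependent
    diffusion coefficient D[i]. Boundary cells are held fixed (Dirichlet
    conditions). Stable for dt <= dx**2 / (2 * max(D))."""
    c = conc.astype(float).copy()
    for _ in range(steps):
        D_face = 0.5 * (D[:-1] + D[1:])          # D evaluated at cell faces
        flux = -D_face * (c[1:] - c[:-1]) / dx   # Fick's first law: J = -D dC/dx
        c[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    return c
```

Setting `D` lower over the first few cells mimics the slower uptake through the denser interfacial layer described in the abstract.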
ERIC Educational Resources Information Center
Bruch, Sarah; Grigg, Jeffrey; Hanselman, Paul
2010-01-01
This study focuses on how the treatment effects of a teacher professional development initiative in science differed by school capacity. In other words, the authors are primarily concerned with treatment effect heterogeneity. As such, this paper complements ongoing evaluation of the average treatment effects of the initiative over time. The…
Temperature effects in treatment wetlands.
Kadlec, R H; Reddy, K R
2001-01-01
Several biogeochemical processes that regulate the removal of nutrients in wetlands are affected by temperature, thus influencing the overall treatment efficiency. In this paper, the effects of temperature on carbon, nitrogen, and phosphorus cycling processes in treatment wetlands and their implications to water quality are discussed. Many environmental factors display annual cycles that mediate whole system performance. Water temperature is one of the important cyclic stimuli, but inlet flow rates and concentrations, and several features of the annual biogeochemical cycle, also can contribute to the observed patterns of nutrient and pollutant removal. Atmospheric influences, including rain, evapotranspiration, and water reaeration, also follow seasonal patterns. Processes regulating storages in wetlands are active throughout the year and can act as seasonal reservoirs of nutrients, carbon, and pollutants. Many individual wetland processes, such as microbially mediated reactions, are affected by temperature. Response was much greater to changes at the lower end of the temperature scale (< 15 degrees C) than at the optimal range (20 to 35 degrees C). Processes regulating organic matter decomposition are affected by temperature. Similarly, all nitrogen cycling reactions (mineralization, nitrification, and denitrification) are affected by temperature. The temperature coefficient (theta) varied from 1.05 to 1.37 for carbon and nitrogen cycling processes under isolated conditions. Phosphorus sorption reactions are least affected by temperature, with theta values of 1.03 to 1.12. Physical processes involved in the removal of particulate carbon, nitrogen, and phosphorus are not affected much by temperature. In contrast, observed wetland removals may have different temperature dependence. Design models are oversimplified because of limitations of data for calibration. The result of complex system behavior and the simple model is the need to interpret whole ecosystem data
Effects of vector-averaged gravity on the response to different stimulatory signals in T-Cells
NASA Astrophysics Data System (ADS)
Vadrucci, Sonia; Henggeler, Daniele; Lovis, Pascal; Lambers, Britta; Cogoli, Augusto
2005-08-01
In this study, we investigated the influence of several combinations of activators on the stimulation of purified T-Lymphocytes under vector-averaged gravity generated by the random positioning machine (RPM). Our results show that anti-CD3 antibody in combination with either anti-CD28 or Interleukin-2 (IL-2) acted favourably on the proliferation independently of gravity conditions. It was observed that proliferation in the RPM samples only decreased if the combination of activators included the lectins Concanavalin A (Con A) or Phytohemagglutinin (PHA) in combination with anti-CD28. In these samples, we further found a clear deficit in the upregulation of c-fos, IL-2 and the α-subunit of the IL-2 receptor (IL-2Rα). Our results suggest that attenuation in T-cell activation is most likely due to a reduced expression and induction of early stimulatory factors, and is activator-dependent.
The effectiveness of stuttering treatments in Germany.
Euler, Harald A; Lange, Benjamin P; Schroeder, Sascha; Neumann, Katrin
2014-03-01
Persons who stutter (PWS) should be referred to the most effective treatments available, locally or regionally. A prospective comparison of the effects of the most common stuttering treatments in Germany is not available. Therefore, a retrospective evaluation by clients of stuttering treatments was carried out. The five most common German stuttering treatments (231 single treatment cases) were rated as to their perceived effectiveness, using a structured questionnaire, by 88 PWS recruited through various sources. The participants had received between 1 and 7 treatments for stuttering. Two stuttering treatments (stuttering modification, fluency shaping) showed favorable and three treatments (breathing therapy, hypnosis, unspecified logopedic treatment) showed unsatisfactory effectiveness ratings. The effectiveness ratings of stuttering modification and fluency shaping did not differ significantly. The three other treatments were equally ineffective. The differences between the effective and ineffective treatments were of large effect sizes. The typical therapy biography begins in childhood with an unspecified logopedic treatment administered extensively in single and individual sessions. Available comparisons showed intensive or interval treatments to be superior to extensive treatments, and group treatments to be superior to single client treatments. The stuttering treatment most often prescribed in Germany, namely a weekly session of individual treatment by a speech-language pathologist, usually with an assorted package of mostly unknown components, is of limited effectiveness. Better effectiveness can be expected from fluency shaping or stuttering modification approaches, preferably with an intensive time schedule and with group sessions. Readers will be able to: (a) discuss the five most prevalent stuttering treatments in Germany; (b) summarize the effectiveness of these treatments; and (c) describe structural treatment components that seem to be preferable
Fuel treatment guidebook: illustrating treatment effects on fire hazard
Morris Johnson; David L. Peterson; Crystal Raymond
2009-01-01
The Guide to Fuel Treatments (Johnson and others 2007) analyzes potential fuel treatments and the potential effects of those treatments for dry forest lands in the Western United States. The guide examines low- to mid-elevation dry forest stands with high stem densities and heavy ladder fuels, which are currently common due to fire exclusion and various land management...
Huang, Shi; Cordova, David; Estrada, Yannine; Brincks, Ahnalee M; Asfour, Lila S; Prado, Guillermo
2014-06-01
The Complier Average Causal Effect (CACE) method has been increasingly used in prevention research to provide more accurate causal intervention effect estimates in the presence of noncompliance. The purpose of this study was to provide an applied demonstration of the CACE analytic approach to evaluate the relative effects of a family-based prevention intervention, Familias Unidas, in preventing/reducing illicit drug use for those participants who received the intended dosage. This study is a secondary data analysis of a randomized controlled trial designed to evaluate the relative efficacy of Familias Unidas with high-risk Hispanic youth. A total of 242 high-risk Hispanic youth aged 12-17 years and their primary caregivers were randomized to either Familias Unidas or Community Practice and assessed at baseline, 6 months and 12 months postbaseline. CACE models were estimated with a finite growth mixture model. Predictors of engagement were included in the CACE model. Findings indicate that, relative to the intent-to-treat (ITT) analytic approach, the CACE analytic approach yielded stronger intervention effects among both initially engaged and overall engaged participants. The CACE analytic approach may be particularly helpful for studies involving parent/family-centered interventions given that participants may not receive the intended dosage. Future studies should consider implementing the CACE analysis in addition to ITT analysis when examining the effects of family-based prevention programs to determine whether, and the extent to which, the CACE analysis has more power to uncover intervention effects.
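The study above estimates CACE with a finite growth mixture model; a much simpler analog under one-sided noncompliance is the Bloom instrumental-variables estimator, sketched here with hypothetical numbers (the function name and example values are ours):

```python
def bloom_cace(itt_effect, compliance_rate):
    """Complier Average Causal Effect under one-sided noncompliance:
    the intent-to-treat (ITT) effect rescaled by the share of compliers.
    This Bloom estimator is a simpler analog of the growth-mixture
    approach used in the study, shown only to convey the intuition."""
    if not 0.0 < compliance_rate <= 1.0:
        raise ValueError("compliance rate must be in (0, 1]")
    return itt_effect / compliance_rate

# A hypothetical ITT effect of -0.12 with 60% engagement implies a
# larger effect among those who actually received the dosage:
cace = bloom_cace(-0.12, 0.60)  # -0.2
```

This rescaling is why CACE estimates are generally stronger than ITT estimates when not all randomized participants engage with the intervention.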
Rodrigues, Jonathan; Minhas, Kishore; Pieles, Guido; McAlindon, Elisa; Occleshaw, Christopher; Manghat, Nathan; Hamilton, Mark
2016-10-01
The aim of this study was to quantify the degree of the effect of in-plane partial volume averaging on recorded peak velocity in phase contrast magnetic resonance angiography (PCMRA). Using cardiac optimized 1.5 Tesla MRI scanners (Siemens Symphony and Avanto), 145 flow measurements (14 anatomical locations; ventricular outlets, aortic valve (AorV), aorta (5 sites), pulmonary arteries (3 sites), pulmonary veins, superior and inferior vena cava) in 37 subjects (consisting of healthy volunteers, congenital and acquired heart disease patients) were analyzed by the Siemens Argus default voxel averaging technique (where peak velocity = mean of the highest velocity voxel and four neighbouring voxels) and by the single voxel technique (1.3×1.3×5 or 1.7×1.7×5.5 mm³) (where peak velocity = highest velocity voxel only). The effects of scan protocol (breath hold versus free breathing) and scanner type (Siemens Symphony versus Siemens Avanto) were also assessed. Statistical significance was defined as P<0.05. There was a significant mean increase in peak velocity of 7.1% when the single voxel technique was used compared to voxel averaging (P<0.0001). Significant increases in peak velocity were observed with the single voxel technique compared to voxel averaging regardless of subject type, anatomical flow location, scanner type and breathing command. Disabling voxel averaging did not affect the volume of flow recorded. Reducing spatial resolution by the use of voxel averaging produces a significant underestimation of peak velocity. While this is of itself not surprising, this is the first report to quantify the size of the effect. When PCMRA is used to assess peak velocity, voxel averaging should be disabled.
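The two read-out conventions compared above can be sketched directly: the single-voxel peak versus the mean of the highest-velocity voxel and its four in-plane neighbours. The function name is our own; it is an illustration of the described definitions, not the Siemens Argus implementation:

```python
import numpy as np

def peak_velocity(vel, averaged=True):
    """Peak velocity from a 2-D velocity map.
    averaged=True: mean of the highest-velocity voxel and its four
    in-plane neighbours (the default read-out described in the study).
    averaged=False: the highest-velocity voxel alone."""
    i, j = np.unravel_index(np.argmax(vel), vel.shape)
    if not averaged:
        return float(vel[i, j])
    neighbours = [(i, j), (i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    vals = [vel[a, b] for a, b in neighbours
            if 0 <= a < vel.shape[0] and 0 <= b < vel.shape[1]]
    return float(np.mean(vals))
```

Because the four neighbours can only pull the mean down from the maximum, the averaged read-out is never above the single-voxel read-out, which is the underestimation the study quantifies.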
Rodrigues, Jonathan; Minhas, Kishore; Pieles, Guido; McAlindon, Elisa; Occleshaw, Christopher; Manghat, Nathan
2016-01-01
Background The aim of this study was to quantify the degree of the effect of in-plane partial volume averaging on recorded peak velocity in phase contrast magnetic resonance angiography (PCMRA). Methods Using cardiac optimized 1.5 Tesla MRI scanners (Siemens Symphony and Avanto), 145 flow measurements (14 anatomical locations; ventricular outlets, aortic valve (AorV), aorta (5 sites), pulmonary arteries (3 sites), pulmonary veins, superior and inferior vena cava) in 37 subjects (consisting of healthy volunteers, congenital and acquired heart disease patients) were analyzed by the Siemens Argus default voxel averaging technique (where peak velocity = mean of the highest velocity voxel and four neighbouring voxels) and by the single voxel technique (1.3×1.3×5 or 1.7×1.7×5.5 mm³) (where peak velocity = highest velocity voxel only). The effects of scan protocol (breath hold versus free breathing) and scanner type (Siemens Symphony versus Siemens Avanto) were also assessed. Statistical significance was defined as P<0.05. Results There was a significant mean increase in peak velocity of 7.1% when the single voxel technique was used compared to voxel averaging (P<0.0001). Significant increases in peak velocity were observed with the single voxel technique compared to voxel averaging regardless of subject type, anatomical flow location, scanner type and breathing command. Disabling voxel averaging did not affect the volume of flow recorded. Conclusions Reducing spatial resolution by the use of voxel averaging produces a significant underestimation of peak velocity. While this is of itself not surprising, this is the first report to quantify the size of the effect. When PCMRA is used to assess peak velocity, voxel averaging should be disabled. PMID:27942477
Meadows, Caroline C; Gable, Philip A; Lohse, Keith R; Miller, Matthew W
2016-07-01
From a neurobiological and motivational perspective, the feedback-related negativity (FRN) and reward positivity (RewP) event-related potential (ERP) components should increase with reward magnitude (reward associated with valence (success/failure) feedback). To test this hypothesis, we recorded participants' electroencephalograms while presenting them with potential monetary rewards ($0.00-$4.96) pre-trial for each trial of a reaction time task and presenting them with valence feedback post-trial. Averaged ERPs time-locked to valence feedback were extracted, and results revealed a valence by magnitude interaction for neural activity in the FRN/RewP time window. This interaction was driven by magnitude affecting RewP, but not FRN, amplitude. Moreover, single trial ERP analyses revealed a reliable correlation between magnitude and RewP, but not FRN, amplitude. Finally, P3b and late positive potential (LPP) amplitudes were affected by magnitude. Results partly support the neurobiological (dopamine) account of the FRN/RewP and suggest motivation affects feedback processing, as indicated by multiple ERP components.
Hoseyni, Fatemeh; Mahjoubi, Ehsan; Zahmatkesh, Davood; Yazdi, Mehdi Hossein
2016-11-01
This research communication describes relationships of pre-weaning average daily gain (ADG) and dam parity with the future productivity of dairy calves. Higher ADG before weaning has been shown to be related to greater milk production in the first lactation of Holstein calves fed milk replacer. However, data is limited on the relationship between pre-weaning ADG and first lactation performance of Holstein calves fed whole milk. Data from three hundred and thirty-two Holstein calves born to 35 primiparous and 297 multiparous cows were investigated to evaluate the relationship of dam parity and pre-weaning ADG with first lactation performance. Results indicated that birth (P < 0·01) and weaning body weight (P < 0·001) were greater in calves born from multiparous cows. Neither 305 d milk production nor pre-weaning ADG differed significantly between calves born to primiparous or multiparous cows, although milk yield tended to be higher in the former and ADG higher in the latter. Correlations between 305 d milk yield and pre-weaning ADG, dam parity and birth body weight were low and non-significant, although there was a tendency for a positive correlation between ADG and milk yield.
NASA Astrophysics Data System (ADS)
Fisher, B.; O'Dell, C.; Mandrake, L.
2013-12-01
The Atmospheric CO2 Observations from Space (ACOS) group has been producing and distributing total column CO2 (XCO2) products using JAXA/NIES/MOE Greenhouse Gases Observing SATellite (GOSAT) spectra and has accumulated almost 4 years of data with version 3.3. While the ACOS team strives to only process soundings that the retrieval algorithm can handle well, we are conservative in what we reject from processing. Consequently, some soundings get processed which do not yield reliable results. We have developed post-processing filters based on comparisons to a few truth proxies (model means, TCCON, and the southern hemisphere approximation) to flag the less reliable soundings. Here we compare regionally (using TRANSCOM spatial bins) and monthly averaged XCO2 that have been filtered by our normal method (described in the ACOS Level 2 Data User's Guide) and a newer method, which we have named warn levels. Mean XCO2 differences are quantified spatially and temporally to inform possible biases in carbon cycle studies that could potentially be introduced by the application of differing post-processing screening methodologies to the ACOS products.
States' Average College Tuition.
ERIC Educational Resources Information Center
Eglin, Joseph J., Jr.; And Others
This report presents statistical data on trends in tuition costs from 1980-81 through 1995-96. The average tuition for in-state undergraduate students of 4-year public colleges and universities for academic year 1995-96 was approximately 8.9 percent of median household income. This figure was obtained by dividing the students' average annual…
López-Soria, S; Sibila, M; Nofrarías, M; Calsamiglia, M; Manzanilla, E G; Ramírez-Mendoza, H; Mínguez, A; Serrano, J M; Marín, O; Joisel, F; Charreyre, C; Segalés, J
2014-12-05
Porcine circovirus type 2 (PCV2) is a ubiquitous virus that mainly affects nursery and fattening pigs causing systemic disease (PCV2-SD) or subclinical infection. A characteristic sign in both presentations is reduction of average daily weight gain (ADWG). The present study aimed to assess the relationship between PCV2 load in serum and ADWG from 3 (weaning) to 21 weeks of age (slaughter) (ADWG 3-21). Thus, three different boar lines were used to inseminate sows from two PCV2-SD affected farms. One or two pigs per sow were selected (60, 61 and 51 piglets from Pietrain, Pietrain×Large White and Duroc×Large White boar lines, respectively). Pigs were bled at 3, 9, 15 and 21 weeks of age and weighed at 3 and 21 weeks. Area under the curve of the viral load at all sampling times (AUCqPCR 3-21) was calculated for each animal according to standard and real time quantitative PCR results; this variable was categorized as "negative or low" (<10^4.3 PCV2 genome copies/ml of serum), "medium" (≥10^4.3 to ≤10^5.3) and "high" (>10^5.3). Data regarding sex, PCV2 antibody titre at weaning and sow parity was also collected. A generalized linear model was performed, obtaining that paternal genetic line and AUCqPCR 3-21 were related to ADWG 3-21. ADWG 3-21 (mean ± standard error) for "negative or low", "medium" and "high" AUCqPCR 3-21 was 672±9, 650±12 and 603±16 g/day, respectively, showing significant differences among them. This study describes different ADWG performances in 3 pig populations that suffered from different degrees of PCV2 viraemia. Copyright © 2014 Elsevier B.V. All rights reserved.
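The AUC-based categorization above can be sketched as a trapezoidal integration of the log10 viral load over the four bleeds, normalized by the follow-up length so it carries the same units as the study's cut-offs. This is one plausible reading of the abstract's AUCqPCR variable, not the authors' exact code:

```python
def categorize_viraemia(weeks, log10_load):
    """Trapezoidal area under the log10 PCV2 load curve (weeks 3-21),
    normalized by follow-up length, then binned with the study's
    cut-offs (<10^4.3, 10^4.3-10^5.3, >10^5.3 copies/ml). Illustrative
    sketch only; the normalization is our assumption."""
    auc = 0.0
    for k in range(len(weeks) - 1):
        auc += 0.5 * (log10_load[k] + log10_load[k + 1]) * (weeks[k + 1] - weeks[k])
    mean_log = auc / (weeks[-1] - weeks[0])
    if mean_log < 4.3:
        return "negative or low"
    if mean_log <= 5.3:
        return "medium"
    return "high"

# A pig holding a constant 10^5.0 copies/ml across all four bleeds
cat = categorize_viraemia([3, 9, 15, 21], [5.0, 5.0, 5.0, 5.0])  # "medium"
```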
ERIC Educational Resources Information Center
Stäbler, Franziska; Dumont, Hanna; Becker, Michael; Baumert, Jürgen
2017-01-01
Empirical studies have demonstrated that students who are taught in a group of students with higher average achievement benefit in terms of their achievement. However, there is also evidence showing that being surrounded by high-achieving students has a negative effect on students' academic self-concept, also known as the big-fish--little-pond…
ERIC Educational Resources Information Center
Wang, Wen-Chung; Su, Ya-Hui
2004-01-01
In this study we investigated the effects of the average signed area (ASA) between the item characteristic curves of the reference and focal groups and three test purification procedures on the uniform differential item functioning (DIF) detection via the Mantel-Haenszel (M-H) method through Monte Carlo simulations. The results showed that ASA,…
NASA Technical Reports Server (NTRS)
Dorney, Daniel J.
1996-01-01
Experimental data from jet-engine tests have indicated that unsteady blade-row interaction effects can have a significant impact on the efficiency of low-pressure turbine stages. Measured turbine efficiencies at takeoff can be as much as two points higher than those at cruise conditions. Preliminary studies indicate that Reynolds number effects may contribute to the lower efficiencies at cruise conditions. In the current study, numerical experiments have been performed to quantify the Reynolds number dependence of unsteady wake/separation bubble interaction on the performance of a low-pressure turbine.
ERIC Educational Resources Information Center
De La Paz, Susan; Felton, Mark K.
2010-01-01
This study examined the effects of historical reasoning strategy instruction on 11th-grade students. Students learned historical inquiry strategies using 20th Century American history topics ranging from the Spanish-American war to the Gulf of Tonkin incident. In addition, students learned a pre-writing strategy for composing argumentative essays…
Barraclough, B; Lebron, S; Li, J; Fan, Qiyong; Liu, C; Yan, G
2015-06-15
Purpose: A novel convolution-based approach has been proposed to address ion chamber (IC) volume averaging effect (VAE) for the commissioning of commercial treatment planning systems (TPS). We investigate the use of various convolution kernels and its impact on the accuracy of beam models. Methods: Our approach simulates the VAE by iteratively convolving the calculated beam profiles with a detector response function (DRF) while optimizing the beam model. At convergence, the convolved profiles match the measured profiles, indicating the calculated profiles match the “true” beam profiles. To validate the approach, beam profiles of an Elekta LINAC were repeatedly collected with ICs of various volumes (CC04, CC13 and SNC 125) to obtain clinically acceptable beam models. The TPS-calculated profiles were convolved externally with the DRF of respective IC. The beam model parameters were reoptimized using Nelder-Mead method by forcing the convolved profiles to match the measured profiles. We evaluated three types of DRFs (Gaussian, Lorentzian, and parabolic) and the impact of kernel dependence on field geometry (depth and field size). The profiles calculated with beam models were compared with SNC EDGE diode-measured profiles. Results: The method was successfully implemented with Pinnacle Scripting and Matlab. The reoptimization converged in ∼10 minutes. For all tested ICs and DRFs, penumbra widths of the TPS-calculated profiles and diode-measured profiles were within 1.0 mm. Gaussian function had the best performance with mean penumbra width difference within 0.5 mm. The use of geometry dependent DRFs showed marginal improvement, reducing the penumbra width differences to less than 0.3 mm. Significant increase in IMRT QA passing rates was achieved with the optimized beam model. Conclusion: The proposed approach significantly improved the accuracy of the TPS beam model. Gaussian functions as the convolution kernel performed consistently better than Lorentzian and
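The core operation above, simulating ion-chamber volume averaging by convolving a calculated profile with a Gaussian detector response function, can be sketched as follows. The function name and the way sigma maps to chamber size are our assumptions, not the authors' Pinnacle/Matlab implementation:

```python
import numpy as np

def convolve_with_gaussian_drf(positions_mm, profile, sigma_mm):
    """Simulate ion-chamber volume averaging by convolving a calculated
    beam profile with a normalized Gaussian detector response function.
    sigma_mm would be tied to the chamber cavity size (assumption);
    positions_mm must be uniformly spaced."""
    dx = positions_mm[1] - positions_mm[0]
    half = int(np.ceil(4 * sigma_mm / dx))
    x = np.arange(-half, half + 1) * dx
    kernel = np.exp(-0.5 * (x / sigma_mm) ** 2)
    kernel /= kernel.sum()          # unit-area kernel preserves dose level
    return np.convolve(profile, kernel, mode="same")
```

In the iterative scheme described above, the beam model would be reoptimized until this convolved profile matches the chamber-measured one, so the unconvolved calculation approaches the "true" profile.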
ERIC Educational Resources Information Center
Siegel, Irving H.
The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses the need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)
Averaging Schwarzschild spacetime
NASA Astrophysics Data System (ADS)
Tegai, S. Ph.; Drobov, I. V.
2017-07-01
We tried to average the Schwarzschild solution for the gravitational point source by analogy with the same problem in Newtonian gravity or electrostatics. We expected to get a similar result, consisting of two parts: the smoothed interior part being a sphere filled with some matter content and an empty exterior part described by the original solution. We considered several variants of generally covariant averaging schemes. The averaging of the connection in the spirit of Zalaletdinov's macroscopic gravity gave unsatisfactory results. With the transport operators proposed in the literature it did not give the expected Schwarzschild solution in the exterior part of the averaged spacetime. We were able to construct a transport operator that preserves the Newtonian analogy for the outward region, but such an operator does not have a clear geometrical meaning. In contrast, using the curvature as the primary averaged object instead of the connection does give the desired result for the exterior part of the problem. However, for the interior part, this curvature averaging does not work because the Schwarzschild curvature components diverge as 1/r³ near the center and therefore are not integrable.
Sumner, Isaiah; Iyengar, Srinivasan S
2007-10-18
We have introduced a computational methodology to study vibrational spectroscopy in clusters inclusive of critical nuclear quantum effects. This approach is based on the recently developed quantum wavepacket ab initio molecular dynamics method that combines quantum wavepacket dynamics with ab initio molecular dynamics. The computational efficiency of the dynamical procedure is drastically improved (by several orders of magnitude) through the utilization of wavelet-based techniques combined with the previously introduced time-dependent deterministic sampling procedure to achieve stable, picosecond length, quantum-classical dynamics of electrons and nuclei in clusters. The dynamical information is employed to construct a novel cumulative flux/velocity correlation function, where the wavepacket flux from the quantized particle is combined with classical nuclear velocities to obtain the vibrational density of states. The approach is demonstrated by computing the vibrational density of states of [Cl-H-Cl]-, inclusive of critical quantum nuclear effects, and our results are in good agreement with experiment. A general hierarchical procedure is also provided, based on electronic structure harmonic frequencies, classical ab initio molecular dynamics, computation of nuclear quantum-mechanical eigenstates, and employing quantum wavepacket ab initio dynamics to understand vibrational spectroscopy in hydrogen-bonded clusters that display large degrees of anharmonicity.
The effect of average cycling current on total energy of lithium-ion batteries for electric vehicles
NASA Astrophysics Data System (ADS)
Barai, Anup; Uddin, Kotub; Widanalage, W. D.; McGordon, Andrew; Jennings, Paul
2016-01-01
Predicting the remaining range of a battery reliably, accurately and simply is imperative for effective power management of electrified vehicles and for reducing driver anxiety resulting from perceived low driving range. Techniques for predicting the remaining range of an electric vehicle exist; in the best cases they are scaled by factors that account for expected energy losses due to driving style, environmental conditions and the use of on-board energy consuming devices such as air-conditioning. In this work, experimental results that establish the dependence of remaining electrical energy on the vehicle battery's immediate cycling history are presented. A method to estimate the remaining energy given short-term cycling history is presented. This method differs from the traditional state of charge methods typically used in battery management systems by considering energy throughput more directly.
Lau, T W; Fang, C; Leung, F
2017-03-01
After the implementation of the multidisciplinary geriatric hip fracture clinical pathway in 2007, the hospital length of stay and the clinical outcomes improved. Moreover, the cost of manpower for each hip fracture decreased, showing that this care model is cost-effective. The objective of this study was to compare the clinical outcomes and the cost of manpower before and after the implementation of the multidisciplinary geriatric hip fracture clinical pathway (GHFCP). The hip fracture data from 2006 were compared with the data of four consecutive years since 2008. The efficiency of the program was assessed using the hospital length of stay. The clinical outcomes compared include mortality rates and complication rates. Cost of manpower was also analysed. After the implementation of the GHFCP, the preoperative length of stay shortened significantly from 5.8 days in 2006 to 1.3 days in 2011. The total lengths of stay in the acute and rehabilitation hospitals were also shortened, by 6.1 days and 14.2 days, respectively. The postoperative pneumonia rate decreased from 1.25% to 0.25%. The short- and long-term mortalities also showed a general improvement. Although allied health manpower was increased to meet the increased workload, the shortened length of stay accounted for a marked decrease in the cost of manpower per hip fracture case. This study shows that the GHFCP shortened geriatric hip fracture patients' length of stay and improved the clinical outcomes. It is also cost-effective, showing that better care is less costly.
High average power Pockels cell
Daly, Thomas P.
1991-01-01
A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation.
Improved interval estimation of comparative treatment effects
NASA Astrophysics Data System (ADS)
Van Krevelen, Ryne Christian
Comparative experiments, in which subjects are randomized to one of two treatments, are performed often. There is no shortage of papers testing whether a treatment effect exists and providing confidence intervals for the magnitude of this effect. While it is well understood that the object and scope of inference for an experiment will depend on what assumptions are made, these entities are not always clearly presented. We have proposed one possible method, which is based on the ideas of Jerzy Neyman, that can be used for constructing confidence intervals in a comparative experiment. The resulting intervals, referred to as Neyman-type confidence intervals, can be applied in a wide range of cases. Special care is taken to note which assumptions are made and what object and scope of inference are being investigated. We have presented a notation that highlights which parts of a problem are being treated as random. This helps ensure the focus on the appropriate scope of inference. The Neyman-type confidence intervals are compared to possible alternatives in two different inference settings: one in which inference is made about the units in the sample and one in which inference is made about units in a fixed population. A third inference setting, one in which inference is made about a process distribution, is also discussed. It is stressed that certain assumptions underlying this third type of inference are unverifiable. When these assumptions are not met, the resulting confidence intervals may cover their intended target well below the desired rate. Through simulation, we demonstrate that the Neyman-type intervals have good coverage properties when inference is being made about a sample or a population. In some cases the alternative intervals are much wider than necessary on average. Therefore, we recommend that researchers consider using our Neyman-type confidence intervals when carrying out inference about a sample or a population as it may provide them with more
Cho, Misuk
2013-06-01
[Purpose] The purpose of this study was to compare the effects of bridge exercises applying the abdominal drawing-in method and modified wall squat exercises on deep abdominal muscle thickness and lumbar stability. [Subjects] A total of 30 subjects were equally divided into an experimental group and a control group. [Methods] The experimental group completed modified wall squat exercises, and the control group performed bridge exercises. Both did so for 30 minutes three times per week over a six-week period. Both groups' transversus abdominis (Tra), internal oblique (IO), and multifidus muscle thickness were measured using ultrasonography, while their static lumbar stability and dynamic lumbar stability were measured using a pressure biofeedback unit. [Results] A comparison of the pre-intervention and post-intervention measures of the experimental group and the control group was made; the Tra and IO thicknesses were significantly different in both groups. [Conclusion] The modified wall squat exercise and bridge exercise affected the thicknesses of the Tra and the IO muscles. While the bridge exercise requires space and a mattress to lie on, the modified wall squat exercise can be conveniently performed anytime.
Bellani, Giacomo; Grasselli, Giacomo; Teggia-Droghi, Maddalena; Mauri, Tommaso; Coppadoro, Andrea; Brochard, Laurent; Pesenti, Antonio
2016-04-28
Preservation of spontaneous breathing (SB) is sometimes debated because it has potentially both negative and positive effects on lung injury in comparison with fully controlled mechanical ventilation (CMV). We wanted (1) to verify in mechanically ventilated patients if the change in transpulmonary pressure was similar between pressure support ventilation (PSV) and CMV for a similar tidal volume, (2) to estimate the influence of SB on alveolar pressure (Palv), and (3) to determine whether a reliable plateau pressure could be measured during PSV. We studied ten patients equipped with esophageal catheters undergoing three levels of PSV followed by a phase of CMV. For each condition, we calculated the maximal and mean transpulmonary (ΔPL) swings and Palv. Overall, ΔPL was similar between CMV and PSV, but only loosely correlated. The differences in ΔPL between CMV and PSV were explained largely by different inspiratory flows, indicating that the resistive pressure drop caused this difference. By contrast, the Palv profile was very different between CMV and SB; SB led to progressively more negative Palv during inspiration, and Palv became lower than the set positive end-expiratory pressure in nine of ten patients at low PSV. Finally, inspiratory occlusion holds performed during PSV led to plateau and ΔPL pressures comparable with those measured during CMV. Under similar conditions of flow and volume, transpulmonary pressure change is similar between CMV and PSV. SB during mechanical ventilation can cause remarkably negative swings in Palv, a mechanism by which SB might potentially induce lung injury.
Threaded average temperature thermocouple
NASA Technical Reports Server (NTRS)
Ward, Stanley W. (Inventor)
1990-01-01
A threaded average temperature thermocouple 11 is provided to measure the average temperature of a test situs of a test material 30. A ceramic insulator rod 15 with two parallel holes 17 and 18 through the length thereof is securely fitted in a cylinder 16, which is bored along the longitudinal axis of symmetry of threaded bolt 12. Threaded bolt 12 is composed of material having thermal properties similar to those of test material 30. Leads of a thermocouple wire 20 leading from a remotely situated temperature sensing device 35 are each fed through one of the holes 17 or 18, secured at head end 13 of ceramic insulator rod 15, and exit at tip end 14. Each lead of thermocouple wire 20 is bent into and secured in an opposite radial groove 25 in tip end 14 of threaded bolt 12. Resulting threaded average temperature thermocouple 11 is ready to be inserted into cylindrical receptacle 32. The tip end 14 of the threaded average temperature thermocouple 11 is in intimate contact with receptacle 32. A jam nut 36 secures the threaded average temperature thermocouple 11 to test material 30.
Summers, A F; Weber, S P; Lardner, H A; Funston, R N
2014-06-01
Postweaning heifer development systems were evaluated at 2 locations in a 4-yr study for their effect on performance and subsequent adaptation to grazing corn residue as a pregnant heifer. In Exp. 1, heifers were blocked by BW and randomly assigned to graze winter range (WR) or graze winter range and corn residue (CR). In Exp. 2, heifers were assigned to graze winter range and corn residue (CR) or graze winter range and placed in a drylot (DL). Artificial insemination and natural mating were used at breeding on the basis of location. In Exp. 1, heifers developed on corn residue tended (P = 0.11) to have reduced ADG compared with WR heifers. Subsequently, BW at the end of the 82-d corn residue grazing period tended (P = 0.09) to be lower for CR compared with WR heifers. However, the proportion of heifers attaining puberty before the breeding season and pregnancy rates were similar (P ≥ 0.29) for CR and WR heifers. Developing heifers on winter range tended (P = 0.09) to reduce heifer development costs by $36/pregnant heifer compared with CR heifers. In Exp. 2, DL heifers had greater (P < 0.01) overall ADG during development compared with CR heifers, resulting in greater (P < 0.01) prebreeding BW for DL heifers compared with CR heifers (355 vs. 322 ± 9 kg). At pregnancy diagnosis BW remained greater (P = 0.02) for DL compared with CR heifers (423 vs. 406 ± 7 kg). Corn-residue-developed heifers had increased (P = 0.03) AI conception rates compared with DL heifers (78% vs. 67% ± 6%). However, there was no difference (P ≥ 0.21) in percent pubertal before the breeding season or final pregnancy rates for CR and DL heifers. Developing heifers on corn residue reduced (P = 0.02) heifer development costs by $38/pregnant heifer compared with DL-developed heifers. A subset of pregnant heifers from both experiments grazed corn residue fields in late gestation. As pregnant heifers grazing corn residue, WR heifers (Exp. 1) tended to have reduced ADG compared with CR heifers (0
Reznik, Ed; Chaudhary, Osman; Segrè, Daniel
2013-01-01
The Michaelis-Menten equation for an irreversible enzymatic reaction depends linearly on the enzyme concentration. Even if the enzyme concentration changes in time, this linearity implies that the amount of substrate depleted during a given time interval depends only on the average enzyme concentration. Here, we use a time re-scaling approach to generalize this result to a broad category of multi-reaction systems, whose constituent enzymes have the same dependence on time, e.g. they belong to the same regulon. This “average enzyme principle” provides a natural methodology for jointly studying metabolism and its regulation. PMID:23892076
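The time-rescaling argument can be checked numerically. In the sketch below (Python; the rate constants, oscillating enzyme profile, and Euler step size are illustrative choices, not values from the paper), substrate depletion under a time-varying enzyme concentration matches depletion under a constant enzyme fixed at the same time average:

```python
import math

def deplete(e_of_t, s0=10.0, vmax=1.0, km=0.5, T=5.0, n=200_000):
    """Euler-integrate the irreversible Michaelis-Menten rate
    ds/dt = -e(t) * vmax * s / (km + s) and return s(T)."""
    dt = T / n
    s = s0
    for i in range(n):
        s -= e_of_t(i * dt) * vmax * s / (km + s) * dt
    return s

# Oscillating enzyme level whose average over [0, T=5] is exactly 1.0
e_var = lambda t: 1.0 + 0.8 * math.sin(2.0 * math.pi * t / 5.0)
e_avg = lambda t: 1.0

s_var = deplete(e_var)    # time-varying enzyme
s_const = deplete(e_avg)  # constant enzyme at the same average
# The two final substrate levels agree: depletion over the interval depends
# only on the average enzyme concentration (the "average enzyme principle").
```

The agreement follows from the substitution τ(t) = ∫₀ᵗ e(u) du, which removes e(t) from the rate law; after time T the rescaled clock has advanced by exactly ē·T, so only the average matters.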
Barraclough, Brendan; Li, Jonathan G; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2016-05-01
To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit "real" ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry-dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. The accuracy of the beam models was evaluated by assessing the 20%-80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. The convolution-based approach was found to be effective for all three ionization chambers with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry-dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry-independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Although all three DRFs were found adequate to represent the response of the studied ionization
Barraclough, Brendan; Lebron, Sharon; Li, Jonathan G.; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2016-05-15
Purpose: To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). Methods: A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit “real” ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry-dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. The accuracy of the beam models was evaluated by assessing the 20%–80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. Results: The convolution-based approach was found to be effective for all three ionization chambers with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry-dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry-independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Conclusions: Although all three DRFs were found adequate to
ERIC Educational Resources Information Center
Watson, Jane; Chick, Helen
2012-01-01
This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…
Haas, C N; Heller, B
1988-01-01
When plate count methods are used for microbial enumeration, if too-numerous-to-count results occur, they are commonly discarded. In this paper, a method for consideration of such results in computation of an average microbial density is developed, and its use is illustrated by example. PMID:3178211
Determining average yarding distance.
Roger H. Twito; Charles N. Mann
1979-01-01
Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
Imaging treatment effects in depression.
Höflich, Anna; Baldinger, Pia; Savli, Markus; Lanzenberger, Rupert; Kasper, Siegfried
2012-01-01
In the past years a multitude of studies has revealed alterations on a neuromolecular, structural and network level in patients with major depressive disorder within key regions of emotion and cognition processing as well as implicated neurotransmitter systems. The present review gives an overview of recent developments with regard to treatment-induced changes in structural, functional and molecular neuroimaging. A number of studies have shown that antidepressant treatment may lead to a partial restoration of initially altered processes. This becomes evident in structural magnetic resonance imaging studies, which point towards a reduction of volumetric differences between depressed patients and healthy controls during treatment, along with a normalization of neuronal functioning as assessed with functional magnetic resonance imaging. On a molecular level, positron emission tomography studies investigating targets which are fundamentally implicated in antidepressant action, such as serotonergic and dopaminergic transporters and receptors, have shown these to be sustainably influenced by antidepressant treatment. However, it seems that not all dysfunctional processes can be reversed by antidepressant treatment and that state and trait factors are evident not only on a behavioral but also on a neurobiological level.
Berg, Juliette K; Bradshaw, Catherine P; Jo, Booil; Ialongo, Nicholas S
2017-07-01
Complier average causal effect (CACE) analysis is a causal inference approach that accounts for levels of teacher implementation compliance. In the current study, CACE was used to examine one-year impacts of the PAX Good Behavior Game (PAX GBG) and Promoting Alternative Thinking Strategies (PATHS) on teacher efficacy and burnout. Teachers in 27 elementary schools were randomized to PAX GBG, an integration of PAX GBG and PATHS, or a control condition. There were positive overall effects on teachers' efficacy beliefs, but high-implementing teachers also reported increases in burnout across the school year. The CACE approach may offer new information not captured using a traditional intent-to-treat approach.
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
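The bias-free error reduction idea behind AMA can be sketched in miniature. In the toy below (Python; the observable, its correlation with the approximation, and the sample counts are invented for illustration and have nothing to do with lattice-QCD specifics), a cheap biased "approximation" is averaged over many samples and then corrected by the exact-minus-approximate difference measured on only a few samples, yielding an unbiased estimate:

```python
import random

random.seed(0)

def sample_pair():
    """One Monte Carlo sample: an expensive 'exact' measurement and a
    cheap approximation that shares most of its statistical fluctuation."""
    x = random.gauss(0.0, 1.0)        # shared statistical fluctuation
    exact = 2.0 + x                   # true expectation value is 2.0
    approx = 2.1 + 0.95 * x           # biased, but strongly correlated
    return exact, approx

n_cheap, n_exact = 10_000, 100

# Average the cheap approximation over many samples ...
approx_mean = sum(sample_pair()[1] for _ in range(n_cheap)) / n_cheap

# ... and remove its bias with exact-approx differences on a few samples.
diffs = [e - a for e, a in (sample_pair() for _ in range(n_exact))]
estimate = approx_mean + sum(diffs) / n_exact   # unbiased combined estimator
```

The correction term is cheap to estimate precisely because the exact and approximate measurements are highly correlated; in AMA the covariant symmetry average plays the role of the large cheap ensemble while keeping the combined estimator free of systematic bias.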
Post-treatment Effects of Topiramate Treatment for Heavy Drinking
Kranzler, Henry R.; Wetherill, Reagan; Feinn, Richard; Pond, Timothy; Gelernter, Joel; Covault, Jonathan
2014-01-01
Background We examined whether the effects of topiramate and a single nucleotide polymorphism (SNP; rs2832407) in GRIK1, which encodes a kainate receptor subunit, persisted following a 12-week, placebo-controlled trial in 138 heavy drinkers with a treatment goal of reduced drinking. During treatment, topiramate 200 mg/day significantly reduced heavy drinking days and increased the frequency of abstinent days (Kranzler et al. 2014a). In the European-American (EA) subsample (n=122), rs2832407 moderated the treatment effect on heavy drinking. Methods Patients were re-interviewed 3 and 6 months after the end of treatment. During treatment, we obtained 92.4% of drinking data, with 89.1% and 85.5% complete data at the 3- and 6-month follow-up visits, respectively. We examined four outcomes over time in the overall sample and the EA subsample: percent heavy drinking days (PHDD), percent days abstinent (PDA), serum γ-glutamyl transpeptidase (GGTP) concentration, and a measure of alcohol-related problems. Results In the full sample, the lower PHDD and higher PDA seen with topiramate treatment were no longer significant during follow-up. Nonetheless, the topiramate-treated patients had lower alcohol-related problem scores during treatment and both follow-up periods. Further, in the EA subsample, the greater reduction in PHDD seen during treatment in rs2832407*C-allele homozygotes persisted throughout follow-up, with no significant effects in A-allele carriers. A reduction in GGTP concentration was consistent with the reduction in heavy drinking, but did not reach statistical significance. Conclusion There are persistent therapeutic effects of topiramate in heavy drinkers, principally in rs2832407*C-allele homozygotes. PMID:25581656
The Effect of Sensory Integration Treatment on Children with Multiple Disabilities.
ERIC Educational Resources Information Center
Din, Feng S.; Lodato, Donna M.
Six children with multiple disabilities (ages 5 to 8) participated in this evaluation of the effect of sensory integration treatment on sensorimotor function and academic learning. The children had cognitive abilities ranging from sub-average to significantly sub-average, three were non-ambulatory, one had severe behavioral problems, and each…
Effect of surface treatment on enamel surface roughness.
Ersahan, Seyda; Alakus Sabuncuoglu, Fidan
2016-01-01
To compare the effects of different methods of surface treatment on enamel roughness. Ninety human maxillary first premolars were randomly divided into three groups (n=30) according to type of enamel surface treatment: I, acid etching; II, Er:YAG laser; III, Nd:YAG laser. The surface roughness of enamel was measured with a noncontact optical profilometer. For each enamel sample, two readings were taken across the sample-before enamel surface treatment (T1) and after enamel surface treatment (T2). The roughness parameter analyzed was the average roughness (Ra). Statistical analysis was performed using a paired-sample t test and the post hoc Mann-Whitney U test, with the significance level set at 0.05. The highest Ra values were observed for Group II, differing significantly from Groups I and III (P<0.001). Ra values for the acid etching group (Group I) were significantly lower than those of the other groups (P<0.001). Surface treatment of enamel with Er:YAG laser and Nd:YAG laser results in significantly higher Ra than acid etching. Both the Er:YAG laser and the Nd:YAG laser can be recommended as viable treatment alternatives to acid etching.
NASA Astrophysics Data System (ADS)
Benveniste, Y.; Milton, G. W.
2010-07-01
The effective medium approximation (EMA) and the average field approximation (AFA) are two classical micromechanics models for the determination of effective properties of heterogeneous media. They are also known in the literature as 'self-consistent' approximations. In the AFA, the basic idea is to estimate the actual average field existing in a phase through a configuration in which a typical particle of that phase is embedded in the homogenized medium. In the EMA, on the other hand, one or more representative microstructural elements of the composite is embedded in the homogenized effective medium subjected to a uniform field, and the demand is made that the dominant part of the far-field disturbance vanishes. Both parts of this study are concerned with two-phase, matrix-based, effectively isotropic composites with an inclusion phase consisting of randomly oriented particles of arbitrary shape in general, and ellipsoidal shape in particular. The constituent phases are assumed to be isotropic. It is shown that in those systems the AFA and EMA give different predictions, with the distinction between them becoming especially striking regarding their standing vis-à-vis the Hashin-Shtrikman (HS) bounds. While due to its realizability property the EMA will always obey the bounds, we show that there are circumstances in which the AFA may violate the bounds. In the AFA for two-phase matrix-based composites, the embedded inclusion is a particle of the inclusion phase. If the particle is directly embedded in the effective medium, the method is called here the self-consistent scheme-average field approximation (SCS-AFA), and will obey the HS bounds for an inclusion shape that is simply connected. If the embedded entity is a matrix-coated particle, then the method is called the generalized self-consistent scheme-average field approximation (GSCS-AFA), and may violate the HS bounds. On the other hand, in the EMA for matrix-based composites with well-separated inclusions, we
ERIC Educational Resources Information Center
Chaney, Bradford
2016-01-01
The primary technique that many researchers use to analyze data from randomized control trials (RCTs)--detecting the average treatment effect (ATE)--imposes assumptions upon the data that often are not correct. Both theory and past research suggest that treatments may have significant impacts on subgroups even when showing no overall effect.…
Kilburn, Tina R.; Eriksen, Hanne-Lise Falgreen; Underbjerg, Mette; Thorsen, Poul; Mortensen, Erik Lykke; Landrø, Nils Inge; Bakketeig, Leiv S.; Grove, Jakob; Sværke, Claus; Kesmodel, Ulrik Schiøler
2015-01-01
Background Deficits in information processing may be a core deficit after fetal alcohol exposure. This study was designed to investigate the possible effects of weekly low to moderate maternal alcohol consumption and binge drinking episodes in early pregnancy on choice reaction time (CRT) and information processing time (IPT) in young children. Method Participants were sampled based on maternal alcohol consumption during pregnancy. At the age of 60–64 months, 1,333 children were administered a modified version of the Sternberg paradigm to assess CRT and IPT. In addition, a test of general intelligence (WPPSI-R) was administered. Results Adjusted for a wide range of potential confounders, this study showed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT. There was, however, an indication of slower CRT associated with binge drinking episodes in gestational weeks 1–4. Conclusion This study observed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT as assessed by the Sternberg paradigm. However, there were some indications of CRT being associated with binge drinking during very early pregnancy. Further large-scale studies are needed to investigate effects of different patterns of maternal alcohol consumption on basic cognitive processes in offspring. PMID:26382068
Perceived Effectiveness of Diverse Sleep Treatments in Older Adults
Gooneratne, Nalaka S.; Tavaria, Ashdin; Patel, Nirav; Madhusudan, Lavanya; Nadaraja, Divani; Onen, Fannie; Richards, Kathy C.
2013-01-01
Objectives To describe the different methods older adults use to treat sleep problems and the perceived effectiveness of these methods. Design Cross-sectional study of treatment patterns for sleep disorders using a mailed questionnaire that gathered information concerning sleep history, demographics, and treatment choices. Setting/Participants Study participants were drawn from a community-based sample of adults aged >65 years, of which 242 responded (67% response rate). Measurements Standardized questionnaires to assess sleep parameters (Pittsburgh Sleep Quality Index), demographic information, and sleep treatment options. Results Study participants engaged in a variety of treatment regimens to improve their sleep, with the average number of treatments attempted being 4.7 ± 2.9. The most commonly used interventions were watching TV or listening to the radio (66.4%) or reading (56.2%). The most commonly used pharmacotherapy was pain medication (40.1%). Prescription sleeping pills had the greatest self-reported effectiveness. Approximately half of all study participants who used alcohol or over-the-counter sleeping aids had not discussed their sleep problems with their doctor. Conclusion Older adults frequently choose treatments for their sleep problems that can potentially worsen their sleep symptoms. Many patients have not spoken to their health care provider about their treatment choices. These findings highlight the importance of discussing sleep habits and self-treatment choices as well as treating sleep disorders in older adults. PMID:21314649
Effectiveness of brace treatment for adolescent idiopathic scoliosis
2015-01-01
Objectives Effectiveness of brace treatment for adolescent idiopathic scoliosis (AIS) was demonstrated by the BrAIST study in 2013. Objectives of this study were to confirm its effectiveness by analyzing our results and to clarify the factors affecting the results of the treatment. Materials and methods According to the Scoliosis Research Society AIS brace studies standardization criteria, patients aged 10 years or older, Risser 0 to II, less than 1 year post-menarche, with a curve magnitude of 25 to 40 degrees before brace treatment, and with no prior treatment were included in the study. At skeletal maturity, the rates of patients whose curve was stabilized, whose curve exceeded 45 degrees, and who were recommended for or underwent surgery were investigated. Additionally, the initial correction rate by the brace and factors affecting the results were investigated. Results A total of 33 patients (27 females and 6 males) could be followed up until skeletal maturity and were included in the analysis. The average age was 11.9 years, the average Cobb angle was 30.8°, and the Risser sign was 0 in 13 patients, I in 5, and II in 15 patients before treatment. There were 13 thoracic curves, 14 thoracolumbar or lumbar curves, and 6 double curves. The initial correction rate by the brace was 53.8% for all curves; by curve pattern, it was 34.4% for thoracic curves, 73.9% for thoracolumbar or lumbar curves, and 48.8% for double curves. After an average follow-up period of 33 months, 8 patients improved by more than 6 degrees, 17 changed by less than 6 degrees, and 8 progressed by more than 6 degrees. In total, therefore, 76% (25/33) of the curves were stabilized by the treatment. Four curves (12%) exceeded 45 degrees and one patient (3%) underwent surgery. Our results were better than the reported natural history. Factors that affected the results were the hump degree before treatment and the initial correction rate by the brace. Conclusions 76% of the curves with AIS could be stabilized by brace
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. Copyright 2010 Elsevier Ltd. All rights reserved.
Treatment Effects on Neonatal EEG.
Obeid, Rawad; Tsuchida, Tammy N
2016-10-01
Conventional EEG and amplitude-integrated electroencephalography are used in neonates to assess prognosis and significant changes in brain activity. Neuroactive medications and hypothermia can influence brain activity and therefore alter EEG interpretation. There are limited studies on the effect of these therapies on neonatal EEG background activity. Medication effects on the EEG or amplitude-integrated electroencephalography include increased interburst interval duration, voltage suppression, and sleep disruption. The effect is transient in term newborns but can be persistent in premature newborns. Although therapeutic hypothermia does not produce significant changes in EEG activity, it does change the time point at which EEG can accurately predict neurodevelopmental outcome. It is important to account for these effects on the EEG to avoid inaccurate interpretation that may affect prognostication.
Effectiveness of a multimodal treatment program for somatoform pain disorder.
Pieh, Christoph; Neumeier, Susanne; Loew, Thomas; Altmeppen, Jürgen; Angerer, Michael; Busch, Volker; Lahmann, Claas
2014-03-01
Chronic pain conditions are highly prevalent, with somatoform pain disorder accounting for a large proportion. However, the psychological forms of treatment currently used achieve only small to medium effect sizes. This retrospective study investigated the effectiveness of a 5-week multimodal pain program for patients with somatoform pain disorder. The diagnosis of somatoform pain disorder was confirmed by a specialist for anesthesiology and pain management and a specialist for psychosomatic medicine. Therapy outcome was evaluated with a Numeric Rating Scale (NRS), the Pain Disability Index (PDI), and the Pain Perception Scale. Within the study sample (n = 100), all parameters showed a significant and clinically relevant improvement at the end of therapy (P values < 0.001). The highest effect sizes (d) were found for reduction in average pain rating (NRS: d = 1.00) and the affective items of the Pain Perception Scale (SES-A: d = 0.07). The lowest effect sizes were found for improvement of pain-related disabilities (PDI: d = 0.42) and sensory items of the Pain Perception Scale (SES-S: d = 0.50). Despite high chronification of pain condition, with average pain duration of greater than 8 years, the multimodal treatment program showed medium to large effect sizes on the outcome of patients with somatoform pain disorder. Compared with previous data with small to moderate effect sizes, a multimodal program seems to be more effective than other interventions to address somatoform pain disorder. © 2013 World Institute of Pain.
Pharmacogenomics of antidepressant treatment effects
Licinio, Julio; Wong, Ma-Li
2011-01-01
There has been considerable promise and hope that pharmacogenomics will optimize existing treatments for major depression, as well as identify novel targets for drug discovery. Immediately after the sequencing of the human genome, there was much hope that tremendous progress in pharmacogenomics would rapidly be achieved. In the past 10 years this initial enthusiasm has been replaced by a more sober optimism, as we have gone a long way towards the goal of guiding therapeutics based on genomics. While the effort to translate discovery to clinical applications is ongoing, we now have a vast body of knowledge as well as a clear direction forward. This article will provide a critical appraisal of the state of the art in the pharmacogenomics of depression, both in terms of pharmacodynamics and pharmacokinetics. PMID:21485747
Temperature averaging thermal probe
NASA Technical Reports Server (NTRS)
Kalil, L. F.; Reinhardt, V. (Inventor)
1985-01-01
A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.
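The abstract notes that substituting solid objects of different sizes and materials varies the probe's time constant. A lumped-capacitance estimate, tau = m·c / (h·A), sketches why: a heavier or higher-heat-capacity object averages over a longer period. The numbers below are illustrative assumptions, not the patented probe's actual specification:

```python
# Lumped-capacitance sketch (hypothetical values, not from the patent):
# the averaging time constant scales as tau = m * c / (h * A), so a
# larger mass or higher specific heat lengthens the averaging period.
def time_constant(mass_kg, specific_heat, h_coeff, area_m2):
    """tau = m*c / (h*A), in seconds."""
    return mass_kg * specific_heat / (h_coeff * area_m2)

# aluminium-like cylinder: 0.5 kg, c = 900 J/(kg*K),
# weak convective exchange h = 10 W/(m^2*K), surface area A = 0.01 m^2
tau = time_constant(0.5, 900.0, 10.0, 0.01)
print(round(tau))  # 4500 seconds, i.e. fluctuations faster than ~75 min are smoothed
```

Doubling the mass (or halving the exchange area) doubles tau, which is the substitution mechanism the abstract describes.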
Lee, Romeo B.; Baring, Rito V.; Sta. Maria, Madelene A.
2016-01-01
The study seeks to estimate gender variations in the direct effects of (a) number of organizational memberships, (b) number of social networking sites (SNS), and (c) grade-point average (GPA) on global social responsibility (GSR); and in the indirect effects of (a) and of (b) through (c) on GSR. Cross-sectional survey data were drawn from questionnaire interviews involving 3,173 Filipino university students. Based on a path model, the three factors were tested to determine their inter-relationships and their relationships with GSR. The direct and total effects of the exogenous factors on the dependent variable are statistically significantly robust. The indirect effects of organizational memberships on GSR through GPA are also statistically significant, but the indirect effects of SNS on GSR through GPA are marginal. Men and women significantly differ only in terms of the total effects of their organizational memberships on GSR. The lack of broad gender variations in the effects of SNS, organizational memberships and GPA on GSR may be linked to the relatively homogenous characteristics and experiences of the university students interviewed. There is a need for more path models to better understand the predictors of GSR in local students. PMID:27247700
Cost-Effective Fuel Treatment Planning
NASA Astrophysics Data System (ADS)
Kreitler, J.; Thompson, M.; Vaillant, N.
2014-12-01
The cost of fighting large wildland fires in the western United States has grown dramatically over the past decade. This trend will likely continue with growth of the WUI into fire-prone ecosystems, dangerous fuel conditions from decades of fire suppression, and a potentially increasing effect from prolonged drought and climate change. Fuel treatments are often considered the primary pre-fire mechanism to reduce the exposure of values at risk to wildland fire, and a growing suite of fire models and tools are employed to prioritize where treatments could mitigate wildland fire damages. Assessments using the likelihood and consequence of fire are critical because funds are insufficient to reduce risk on all lands needing treatment; therefore, prioritization is required to maximize the effectiveness of fuel treatment budgets. Cost-effectiveness, doing the most good per dollar, would seem to be an important fuel treatment metric, yet studies or plans that prioritize fuel treatments using costs or cost-effectiveness measures are absent from the literature. Therefore, to explore the effect of using costs in fuel treatment planning, we test four prioritization algorithms designed to reduce risk in a case study examining fuel treatments on the Sisters Ranger District of central Oregon. For benefits we model sediment retention and standing biomass, and we measure the effectiveness of each algorithm by comparing the differences among treatment and no-treatment alternative scenarios. Our objective is to maximize the averted loss of net benefits subject to a representative fuel treatment budget. We model costs across the study landscape using the My Fuel Treatment Planner software, tree list data, local mill prices, and GIS-measured site characteristics. We use fire simulations to generate burn probabilities, and estimate fire intensity as conditional flame length at each pixel. Two prioritization algorithms target treatments based on cost-effectiveness and show improvements over those
NASA Astrophysics Data System (ADS)
Su, Ruifeng; Liu, Haitao; Liang, Yingchun; Yu, Fuli
2017-01-01
Thermal problems pose major challenges for solid-state lasers designed for high output power. Cooling the nonlinear optics is insufficient to completely solve the problem of thermally induced stress, because residual thermal stress remains after cooling; to the best of our knowledge, this issue is identified here for the first time. In this paper a comprehensive model incorporating principles of thermodynamics, mechanics, and optics is proposed, and it is used to study, from a mechanical perspective, the residual thermal stress of a mounted KDP crystal after the cooling process, along with the effects of the residual thermal stress on the second harmonic generation (SHG) efficiency of a high-average-power laser. The effects of the structural parameters of the KDP crystal's mounting configuration on the residual thermal stress are characterized, as well as on the SHG efficiency. The numerical results demonstrate the feasibility of mitigating residual thermal stress through the structural design of the mounting configuration.
Three Essays on Estimating Causal Treatment Effects
ERIC Educational Resources Information Center
Deutsch, Jonah
2013-01-01
This dissertation is composed of three distinct chapters, each of which addresses issues of estimating treatment effects. The first chapter empirically tests the Value-Added (VA) model using school lotteries. The second chapter, co-authored with Michael Wood, considers properties of inverse probability weighting (IPW) in simple treatment effect…
Impact of Treatment Integrity on Intervention Effectiveness
ERIC Educational Resources Information Center
Fryling, Mitch J.; Wallace, Michele D.; Yassine, Jordan N.
2012-01-01
Treatment integrity has cogent implications for intervention effectiveness. Understanding these implications is an important, but often neglected, undertaking in behavior analysis. This paper reviews current research on treatment integrity in applied behavior analysis. Specifically, we review research evaluating the relation between integrity…
NASA Astrophysics Data System (ADS)
Kovsh, Ivan B.; Strekalova, M. S.
1994-02-01
An investigation is reported of the effects of a surface heat treatment of aluminium by a YAG : Nd laser beam with a power up to 0.8 kW. In particular, a study was made of the influence of the treatment conditions on the microhardness, as well as on the residual stresses and their sign in hardened surface layers of aluminium. The efficiency of aluminium hardening by radiation from a cw YAG : Nd laser was found to be considerably higher than in the case of a cw CO2 laser.
Weighing the potential effectiveness of various treatments for sleep bruxism.
Huynh, Nelly; Manzini, Christiane; Rompré, Pierre H; Lavigne, Gilles J
2007-10-01
Sleep bruxism may lead to a variety of problems, but its pathophysiology has not been completely elucidated. As such, there is no definitive treatment, but certain preventive measures and/or drugs may be used in acute cases, particularly those involving pain. This article is intended to guide clinician scientists to the treatment most appropriate for future clinical studies. To determine the best current treatment, 2 measures were used to compare the results of 10 clinical studies on sleep bruxism, 3 involving oral devices and 7 involving pharmacologic therapy. The first measure, the number needed to treat (NNT), allows several randomized clinical studies to be compared and a general conclusion to be drawn. The second measure, effect size, allows evaluation of the impact of treatment relative to a placebo using different studies of similar design. Taking into account the NNT, the effect size and the power of each study, it can be concluded that the following treatments reduce sleep bruxism: mandibular advancement device, clonidine and occlusal splint. However, the first 2 of these have been linked to adverse effects. The occlusal splint is therefore the treatment of choice, as it reduces grinding noise and protects the teeth from premature wear with no reported adverse effects. The NNT could not be calculated for an alternative pharmacologic treatment, short-term clonazepam therapy, which had a large effect size and reduced the average bruxism index. However, the risk of dependency limits its use over long periods. Assessment of efficacy and safety of the most promising treatments will require studies with larger sample sizes over longer periods.
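The two comparison measures this abstract relies on have simple closed forms: the number needed to treat is the reciprocal of the absolute risk reduction, and the effect size is Cohen's d with a pooled standard deviation. A minimal sketch, with purely hypothetical event rates and scores (none of the bruxism studies' actual data):

```python
import math

def number_needed_to_treat(p_event_control, p_event_treated):
    """NNT = 1 / absolute risk reduction (control rate minus treated rate)."""
    arr = p_event_control - p_event_treated
    return 1.0 / arr

def cohens_d(mean_treated, mean_placebo, sd_treated, sd_placebo,
             n_treated, n_placebo):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_var = (((n_treated - 1) * sd_treated**2
                   + (n_placebo - 1) * sd_placebo**2)
                  / (n_treated + n_placebo - 2))
    return (mean_treated - mean_placebo) / math.sqrt(pooled_var)

# Hypothetical numbers, for illustration only:
# 60% of controls vs 20% of treated patients still show the outcome
print(round(number_needed_to_treat(0.6, 0.2), 2))          # 2.5
# treated bruxism index mean 4.0 vs placebo 6.0, common SD 2.0, n = 20 each
print(round(cohens_d(4.0, 6.0, 2.0, 2.0, 20, 20), 2))      # -1.0
```

An NNT of 2.5 means that, on average, 2.5 patients must be treated for one additional patient to benefit; d near 1 in magnitude is conventionally a "large" effect, which is the scale the abstract uses to compare treatments.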
NASA Astrophysics Data System (ADS)
Di Iorio, T. A.; Sudicky, E. A.; Jones, J.; McLaren, R. G.
2002-05-01
The Integrated Hydrology Model (InHM) is a fully-coupled 3D control-volume finite element model which can simulate water flow and advective-dispersive solute transport on the 2D land surface and in the 3D subsurface under variably-saturated conditions. Full coupling of the surface and subsurface flow regimes is accomplished by simultaneously solving one system of non-linear discrete equations for overland flow rates and water depths, stream flow rates, subsurface pressure heads, saturations and velocities, as well as water fluxes between continua. The results of high-resolution 3D numerical experiments performed with InHM are presented which examine the impact of a subsurface contaminant plume discharging along a reach of a small stream within a watershed located in Southern Ontario, Canada. The sub-catchment under study is about 17 km2 in area, has about 60 m of topographic relief as defined by a 25m-scale DEM, and is highly heterogeneous in terms of its land use, near-surface soil types and Quaternary geology. Simulations are compared for cases where annual, monthly and daily precipitation averages are applied as rainfall inputs to the model in order to assess the effect of different temporal averaging scales on predicted downstream surface-water, hyporheic-zone and stream-bottom sediment quality. Results show that predicted water and solute exchange fluxes across the streambed can vary rapidly in space and time due to individual rainfall events and that short duration, high intensity peaks are not captured if monthly or annual average rainfall is used as input. In addition, we also compare the predicted spatial and temporal patterns of surface runoff, infiltration and exfiltration over the entire land surface using the different temporal resolutions of rainfall, and for changes in land use as represented by changes in the near-surface hydraulic properties of the sub-catchment.
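The central finding above — that monthly or annual averaging of rainfall input misses short, high-intensity peaks — can be illustrated with a toy series. The daily values below are hypothetical, not the watershed's data:

```python
# Sketch (hypothetical daily rainfall, mm): averaging over a longer
# window preserves the total depth but flattens the short, intense
# event, which is what drives peak exchange fluxes across the streambed.
daily_mm = [0, 0, 45, 0, 0, 5, 0, 0, 0, 10, 0, 0]  # one intense event on day 3

total = sum(daily_mm)                 # total depth is conserved
monthly_avg = total / len(daily_mm)   # uniform rate with the same total

print(max(daily_mm), monthly_avg)  # 45 5.0
```

A model forced with the uniform 5.0 mm/day rate receives the same water overall but never sees the 45 mm burst, so the short-duration peaks in predicted surface-subsurface exchange disappear, consistent with the simulation results described above.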
Shin, Sang Soo; Shin, Young-Jeon
2016-01-01
With an increasing number of studies highlighting regional social capital (SC) as a determinant of health, many studies are using multi-level analysis with merged and averaged scores of community residents' survey responses calculated from community SC data. Sufficient examination is required to validate if the merged and averaged data can represent the community. Therefore, this study analyzes the validity of the selected indicators and their applicability in multi-level analysis. Within and between analysis (WABA) was performed after creating community variables using merged and averaged data of community residents' responses from the 2013 Community Health Survey in Korea, using subjective self-rated health assessment as a dependent variable. Further analysis was performed following the model suggested by WABA result. Both E-test results (1) and WABA results (2) revealed that single-level analysis needs to be performed using qualitative SC variable with cluster mean centering. Through single-level multivariate regression analysis, qualitative SC with cluster mean centering showed positive effect on self-rated health (0.054, p<0.001), although there was no substantial difference in comparison to analysis using SC variables without cluster mean centering or multi-level analysis. As modification in qualitative SC was larger within the community than between communities, we validate that relational analysis of individual self-rated health can be performed within the group, using cluster mean centering. Other tests besides the WABA can be performed in the future to confirm the validity of using community variables and their applicability in multi-level analysis.
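Cluster (group) mean centering, the operation this abstract's analysis hinges on, replaces each individual's score with its deviation from the community mean, isolating within-community variation. A minimal sketch with hypothetical community IDs and scores:

```python
# Minimal sketch of cluster mean centering (hypothetical data): each
# value becomes its deviation from the mean of its own community, so a
# single-level regression on the centered values captures within-group
# variation only.
from collections import defaultdict

def cluster_mean_center(records):
    """records: list of (community_id, value) -> list of centered values."""
    sums, counts = defaultdict(float), defaultdict(int)
    for cid, v in records:
        sums[cid] += v
        counts[cid] += 1
    means = {cid: sums[cid] / counts[cid] for cid in sums}
    return [v - means[cid] for cid, v in records]

data = [("c1", 3.0), ("c1", 5.0), ("c2", 10.0), ("c2", 14.0)]
print(cluster_mean_center(data))  # [-1.0, 1.0, -2.0, 2.0]
```

Note that the large between-community gap (means 4.0 vs 12.0) vanishes after centering, which is why the approach is appropriate when, as the abstract reports, variation is larger within communities than between them.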
Effects of exercise interventions during different treatments in breast cancer
Fairman, Ciaran M; Focht, Brian C; Lucas, Alexander R; Lustberg, Maryam B
2017-01-01
Previous findings suggest that exercise is a safe and efficacious means of improving physiological and psychosocial outcomes in female breast cancer survivors. To date, most research has focused on post-treatment interventions. However, given that the type and severity of treatment-related adverse effects may be dependent on the type of treatment, and that the effects are substantially more pronounced during treatment, an assessment of the safety and efficacy of exercise during treatment is warranted. In this review, we present and evaluate the results of randomized controlled trials (RCTs) conducted during breast cancer treatment. We conducted literature searches to identify studies examining exercise interventions in breast cancer patients who were undergoing chemotherapy or radiation. Data were extracted on physiological and psychosocial outcomes. Cohen’s d effect sizes were calculated for each outcome. A total of 17 studies involving 1,175 participants undergoing active cancer therapy met the inclusion criteria. Findings revealed that, on average, exercise interventions resulted in moderate to large improvements in muscular strength: resistance exercise (RE, d = 0.86), aerobic exercise (AE, d = 0.55), small to moderate improvements in cardiovascular functioning (RE, d = 0.45; AE, d = 0.17, combination exercise (COMB, d = 0.31) and quality of life (QoL; RE, d = 0.30; AE, d = 0.50; COMB, d = 0.63). The results of this review suggest that exercise is a safe, feasible, and efficacious intervention in breast cancer patients who are undergoing different types of treatment. Additional research addressing the different modes of exercise during each type of treatment is warranted to assess the comparable efficacy of the various exercise modes during established breast cancer treatments. PMID:27258052
Đuričić, Ivana; Kotur-Stevuljević, Jelena; Miljković, Milica; Kerkez, Mirko; Đorđević, Vladimir; Đurašić, Ljubomir; Šobajić, Slađana
2015-01-01
Summary Background This study investigated the effects of a nutritionally relevant intake of eicosapentaenoic (EPA) and docosahexaenoic (DHA) fatty acids derived from oily fish or a fish oil supplement on selected cardiovascular risk factors in average middle-aged individuals. Methods Thirty-three participants were randomized to receive salmon (oily fish) providing 274 mg EPA + 671 mg DHA/day or a commercial fish oil supplement providing 396 mg EPA + 250 mg DHA/day in a cross-over trial over an 8-week period separated by a 6-month washout period. Blood samples were collected before and after each intervention and lipids, inflammatory and oxidative stress parameters were determined. Results Plasma levels of EPA, DHA and total n-3 fatty acids significantly increased after both interventions. A decreasing trend in triglycerides was more pronounced with salmon than with the fish oil supplement, but the changes noticed were not significant. Although there were no relevant changes in inflammatory marker concentrations at the end of both interventions, significant negative correlations were noticed between total plasma n-3 fatty acids and soluble intercellular adhesion molecule and C-reactive protein throughout the whole intervention period (p<0.05). Among the oxidative stress parameters, intervention with salmon showed a prooxidative effect through a superoxide anion increase (p=0.025). A relevant positive correlation was also found between its concentration and total plasma n-3 fatty acids (p<0.05). Other oxidative stress markers were not significantly influenced by the dietary interventions applied. Conclusions Following two sets of recommendations for n-3 fatty acids intake aimed at the general public had only a moderate effect on the selected cardiovascular risk factors in average healthy middle-aged subjects over a short-term period. PMID:28356841
Designing Digital Control Systems With Averaged Measurements
NASA Technical Reports Server (NTRS)
Polites, Michael E.; Beale, Guy O.
1990-01-01
Rational criteria represent improvement over "cut-and-try" approach. Recent development in theory of control systems yields improvements in mathematical modeling and design of digital feedback controllers using time-averaged measurements. By using one of new formulations for systems with time-averaged measurements, designer takes averaging effect into account when modeling plant, eliminating need to iterate design and simulation phases.
Karabanov, Alexander; van der Drift, Anniek; Edwards, Luke J; Kuprov, Ilya; Köckenberger, Walter
2012-02-28
A strategy is described for simulations of solid effect dynamic nuclear polarisation that substantially reduces the dimension of the quantum mechanical problem. Averaging the Hamiltonian in the doubly rotating frame is used to confine the active space to the zero quantum coherence subspace. A further restriction of the Liouville space is made by truncating higher spin order states, which are weakly populated due to the presence of relaxation processes. Based on a dissipative transport equation, used to estimate the transport of magnetisation from single-spin order to higher spin-order states, the minimal spin order that needs to be taken into account for the spin dynamics simulation is calculated. The strategy accelerates individual spin calculations by orders of magnitude, thus making it possible to simulate the polarisation dynamics of systems with up to 25 nuclear spins.
NASA Astrophysics Data System (ADS)
Gómez-Rocha, M.; Hilger, T.; Krassnigg, A.
2016-04-01
We extend earlier investigations of heavy-light pseudoscalar mesons to the vector case, using a simple model in the context of the Dyson-Schwinger-Bethe-Salpeter approach. We investigate the effects of a dressed quark-gluon vertex in a systematic fashion and illustrate and attempt to quantify corrections beyond the phenomenologically very useful and successful rainbow-ladder truncation. In particular we investigate the dressed quark-photon vertex in such a setup and make a prediction for the experimentally as yet unknown mass of the Bc*, which we obtain at 6.334 GeV, well in line with predictions from other approaches. Furthermore, we combine a comprehensive set of results from the theoretical literature. The theoretical average for the mass of the Bc* meson is 6.336 ± 0.002 GeV.
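The abstract does not state how the literature values were combined into the quoted theoretical average. A standard choice for such combinations, shown here purely as a hypothetical sketch with made-up inputs (not the paper's data or procedure), is the inverse-variance weighted mean:

```python
# Hypothetical sketch: inverse-variance weighted average of independent
# determinations m_i with uncertainties s_i. The paper's actual
# combination procedure is not specified in the abstract.
def weighted_average(values, errors):
    weights = [1.0 / s**2 for s in errors]      # w_i = 1 / s_i^2
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    sigma = total ** -0.5                       # uncertainty of the mean
    return mean, sigma

# Illustrative inputs (invented, roughly near the quoted B_c* mass in GeV):
mean, sigma = weighted_average([6.334, 6.337, 6.336], [0.004, 0.003, 0.005])
```

Determinations with smaller quoted uncertainties dominate the average, and the combined uncertainty is smaller than any individual one.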
Negative Effects from Psychological Treatments: A Perspective
ERIC Educational Resources Information Center
Barlow, David H.
2010-01-01
The author offers a 40-year perspective on the observation and study of negative effects from psychotherapy or psychological treatments. This perspective is placed in the context of the enormous progress in refining methodologies for psychotherapy research over that period of time, resulting in the clear demonstration of positive effects from…
Side Effects of Contingent Shock Treatment
ERIC Educational Resources Information Center
van Oorsouw, W. M. W. J.; Israel, M. L.; von Heyn, R. E.; Duker, P. C.
2008-01-01
In this study, the side effects of contingent shock (CS) treatment were addressed with a group of nine individuals, who showed severe forms of self-injurious behavior (SIB) and aggressive behavior. Side effects were assigned to one of the following four behavior categories; (a) positive verbal and nonverbal utterances, (b) negative verbal and…
Pharmacological Treatment Effects on Eye Movement Control
ERIC Educational Resources Information Center
Reilly, James L.; Lencer, Rebekka; Bishop, Jeffrey R.; Keedy, Sarah; Sweeney, John A.
2008-01-01
The increasing use of eye movement paradigms to assess the functional integrity of brain systems involved in sensorimotor and cognitive processing in clinical disorders requires greater attention to effects of pharmacological treatments on these systems. This is needed to better differentiate disease and medication effects in clinical samples, to…
[Intensified insulin treatment is cost-effective].
Reichard, P; Alm, C; Andersson, E; Wärn, I; Rosenqvist, U
1999-01-20
Both the Diabetes Control and Complications Trial (DCCT) in the USA/Canada and the Stockholm Diabetes Intervention Study (SDIS) showed that intensified insulin treatment and reduced glycaemia prevent complications in patients with insulin-dependent (type I) diabetes mellitus. In the DCCT, the intensified treatment was considered cost-effective. In the SDIS, investigation of the direct increase in costs due to intensified insulin treatment showed that the savings in direct costs, due to reduced photocoagulation requirements and a lower prevalence of renal insufficiency and amputation, corresponded to 10 years of intensive insulin treatment. Thus, as intensified insulin treatment in type I diabetes reduces suffering at a low direct cost, it may be regarded as 'evidence-based' and mandatory.
Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average
ERIC Educational Resources Information Center
DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.
2007-01-01
Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…
Obesity in Family Practice: Is Treatment Effective?
Sanborn, Margaret D.; Manske, Stephen R.; Schlegel, Ronald P.
1983-01-01
Obesity is a common condition which has important effects on health status and longevity. This review examines the efficacy of treatments for both moderate and severe obesity. A plan of treatment combining diet, exercise, and behavioral strategies is outlined. Surgery and its complications are reviewed. Eight management issues, including rate of weight loss, self-help groups, and fringe therapies, are presented. Management recommendations are based on a critical review of the weight loss literature. PMID:21283350
Effective Treatments of Atrophic Acne Scars
Zhou, Bingrong
2015-01-01
Atrophic scarring is often an unfortunate and permanent complication of acne vulgaris. It has a high prevalence, a significant impact on quality of life, and poses a therapeutic challenge for dermatologists. The treatment of atrophic acne scars varies depending on the types of acne scars and the limitations of the treatment modalities in their ability to improve scars. Therefore, many options are available for the treatment of acne scarring, including chemical peeling, dermabrasion, laser treatment, punch techniques, fat transplantation, other tissue augmenting agents, needling, subcision, and combined therapy. Various modalities have been used to treat scars, but limited efficacy and problematic side effects have restricted their application. In order to optimally treat a patient's scar, we need to consider which treatment offers the most satisfactory result. There are also promising procedures in the future, such as stem cell therapy. In this article, the authors review the different treatment options for atrophic acne scars. This may be useful for selecting the best therapeutic strategy, whether it be single or combined therapy, in the treatment of atrophic acne scars while reducing or avoiding the side effects and complications. PMID:26029333
Randomization inference for treatment effects on a binary outcome.
Rigdon, Joseph; Hudgens, Michael G
2015-03-15
Two methods are developed for constructing randomization-based confidence sets for the average effect of a treatment on a binary outcome. The methods are nonparametric and require no assumptions about random sampling from a larger population. Both of the resulting 1 - α confidence sets are exact in the sense that the probability of containing the true treatment effect is at least 1 - α. Both types of confidence sets are also guaranteed to have width no greater than one. In contrast, a previously proposed asymptotic confidence interval is not exact and may have width greater than 1. The first approach combines Bonferroni-adjusted prediction sets for the attributable effects in the treated and untreated. The second method entails inverting a permutation test. Simulations are presented comparing the two randomization-based confidence sets with the asymptotic interval as well as the standard Wald confidence interval and a commonly used exact interval for the difference in binomial proportions. Results show for small to moderate sample sizes that the permutation confidence set attains the narrowest width on average among the methods that maintain nominal coverage. Extensions that allow for stratifying on categorical baseline covariates are also discussed. Copyright © 2014 John Wiley & Sons, Ltd.
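As a hypothetical sketch of the kind of randomization inference this abstract describes (not the article's exact confidence-set construction, which combines prediction sets for attributable effects or inverts a permutation test), a Monte Carlo permutation test for the difference in proportions between two binary-outcome groups can be written as:

```python
import random

def permutation_test_diff(treated, control, n_perm=10000, seed=0):
    """Two-sided Monte Carlo permutation p-value for the difference in
    success proportions between two binary-outcome groups. A sketch of
    randomization inference, not the article's confidence-set method."""
    rng = random.Random(seed)
    n_t, n_c = len(treated), len(control)
    observed = sum(treated) / n_t - sum(control) / n_c
    pooled = list(treated) + list(control)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # re-randomize labels
        diff = sum(pooled[:n_t]) / n_t - sum(pooled[n_t:]) / n_c
        if abs(diff) >= abs(observed) - 1e-12:   # tolerance for float fuzz
            count += 1
    return count / n_perm

# Example: 8/10 successes under treatment vs 3/10 under control
p = permutation_test_diff([1] * 8 + [0] * 2, [1] * 3 + [0] * 7)
```

A confidence set can then be obtained, as the abstract notes, by inverting such a test: collecting all hypothesized treatment effects that the test fails to reject at level α.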
Delfino, R J; Zeiger, R S; Seltzer, J M; Street, D H
1998-01-01
Experimental research in humans and animals points to the importance of adverse respiratory effects from short-term particle exposures and to the importance of proinflammatory effects of air pollutants, particularly O3. However, particle averaging time has not been subjected to direct scientific evaluation, and there is a lack of epidemiological research examining both this issue and whether modification of air pollutant effects occurs with differences in asthma severity and anti-inflammatory medication use. The present study examined the relationship of adverse asthma symptoms (bothersome or interfered with daily activities or sleep) to O3 and particles ≤10 µm (PM10) in a Southern California community in the air inversion zone (1200-2100 ft) with high O3 and low PM (R = 0.3). A panel of 25 asthmatics 9-17 years of age was followed daily, August through October 1995 (n = 1,759 person-days excluding one subject without symptoms). Exposures included stationary outdoor hourly PM10 (highest 24-hr mean, 54 µg/m³, versus median of 1-hr maximums, 56 µg/m³) and O3 (mean of 1-hr maximums, 90 ppb; 5 days ≥120 ppb). Longitudinal regression analyses utilized the generalized estimating equations (GEE) model controlling for autocorrelation, day of week, outdoor fungi, and weather. Asthma symptoms were significantly associated with both outdoor O3 and PM10 in single-pollutant and co-regressions, with 1-hr and 8-hr maximum PM10 having larger effects than the 24-hr mean. Subgroup analyses showed effects of current-day PM10 maximums were strongest in 10 more frequently symptomatic (MS) children: the odds ratios (ORs) for adverse symptoms from 90th-percentile increases were 2.24 [95% confidence interval (CI), 1.46-3.46] for 1-hr PM10 (47 µg/m³); 1.82 (CI, 1.18-2.81) for 8-hr PM10 (36 µg/m³); and 1.50 (CI, 0.80-2.80) for 24-hr PM10 (25 µg/m³). Subgroup analyses
Cognitive effects of olanzapine treatment in schizophrenia.
McGurk, Susan R; Lee, M A; Jayathilake, K; Meltzer, Herbert Y
2004-05-10
Improvement in some but not all domains of cognition during treatment with the atypical antipsychotic drugs clozapine, quetiapine, olanzapine, and risperidone has been reported in some but not all studies. It has been recently suggested that these reports are an artifact, related to lessening of the impairment due to typical neuroleptic drugs and anticholinergic agents. The purpose of this study was to further test the hypothesis that olanzapine, an atypical antipsychotic drug reported to have anticholinergic properties, improves cognition in patients with schizophrenia, including domains of cognition related closely to work and social function (ie, verbal learning and memory) and that this improvement is independent of improvement in psychopathology. Thirty-four patients with schizophrenia who were partial responders to typical antipsychotic drug treatment were evaluated with a comprehensive neurocognitive battery, including measures of executive functioning; verbal and visual learning and memory; working memory; immediate, selective, and sustained attention; perceptual/motor processing; and motor skills prior to and following treatment with olanzapine for 6 weeks. The Brief Psychiatric Rating Scale (BPRS) was used to assess psychopathology in patients treated with typical antipsychotic drugs. Subjects were switched to olanzapine (average dose 13.4 mg, range 5-20 mg) and reassessed following 6 weeks and 6 months of treatment. Significant improvement was noted in 9 of 19 cognitive tests, including measures of selective attention, verbal learning and memory, and verbal fluency. No cognitive test was worsened by olanzapine treatment. Improvements in the BPRS Total and Positive Symptom Subscale scores were noted. Improvements in verbal learning and memory, sustained attention, and psychomotor tracking were independent of improvement in psychopathology. These data suggest that olanzapine improved some but not all cognitive deficits in schizophrenia, including verbal
NASA Astrophysics Data System (ADS)
Chien, David Michael
2000-10-01
The Energy Policy and Conservation Act of 1975, which created fuel economy standards for automobiles and light trucks, was passed by Congress in response to the rapid rise in world oil prices as a result of the 1973 oil crisis. The standards were first implemented in 1978 for automobiles and 1979 for light trucks, and began with initial standards of 18 MPG for automobiles and 17.2 MPG for light trucks. The current fuel economy standards for 1998 have been held constant at 27.5 MPG for automobiles and 20.5 MPG for light trucks since 1990-1991. While actual new automobile fuel economy has almost doubled from 14 MPG in 1974 to 27.2 MPG in 1994, it is reasonable to ask if the CAFE standards are still needed. Each year Congress attempts to pass another increase in the Corporate Average Fuel Economy (CAFE) standard and fails. Many have called for the abolition of CAFE standards, citing the ineffectiveness of the standards in the past. In order to determine whether CAFE standards should be increased, held constant, or repealed, an evaluation of the effectiveness of the CAFE standards to date must be established. Because fuel prices were rising concurrently with the CAFE standards, many authors have attributed the rapid rise in new car fuel economy solely to fuel prices. The purpose of this dissertation is to re-examine the determinants of new car fuel economy via three effects: CAFE regulations, fuel price, and income effects. By measuring the marginal effects of the three fuel economy determinants on consumers' and manufacturers' choices for fuel economy, an estimate was made of the influence of each on new-car fuel economy. The conclusions of this dissertation present some clear signals to policymakers: CAFE standards have been very effective in increasing fuel economy from 1979 to 1998. Furthermore, they have been the main cause of fuel economy improvement, with income being a much smaller component. Furthermore, this dissertation has suggested that fuel prices have
Averaging Robertson-Walker cosmologies
Brown, Iain A.; Robbers, Georg; Behrend, Juliane E-mail: G.Robbers@thphys.uni-heidelberg.de
2009-04-15
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.
Pharmacological treatment effects on eye movement control
Reilly, James L.; Lencer, Rebekka; Bishop, Jeffrey R.; Keedy, Sarah; Sweeney, John A.
2011-01-01
The increasing use of eye movement paradigms to assess the functional integrity of brain systems involved in sensorimotor and cognitive processing in clinical disorders requires greater attention to effects of pharmacological treatments on these systems. This is needed to better differentiate disease and medication effects in clinical samples, to learn about neurochemical systems relevant for identified disturbances, and to facilitate identification of oculomotor biomarkers of pharmacological effects. In this review, studies of pharmacologic treatment effects on eye movements in healthy individuals are summarized and the sensitivity of eye movements to a variety of pharmacological manipulations is established. Primary findings from these studies of healthy individuals involving mainly acute effects indicate that: (i) the most consistent finding across several classes of drugs, including benzodiazepines, first- and second-generation antipsychotics, anticholinergic agents, and anticonvulsant/mood stabilizing medications is a decrease in saccade and smooth pursuit velocity (or increase in saccades during pursuit); (ii) these oculomotor effects largely reflect the general sedating effects of these medications on central nervous system functioning and are often dose-dependent; (iii) in many cases changes in oculomotor functioning are more sensitive indicators of pharmacological effects than other measures; and (iv) other agents, including the antidepressant class of serotonergic reuptake inhibitors, direct serotonergic agonists, and stimulants including amphetamine and nicotine, do not appear to adversely impact oculomotor functions in healthy individuals and may well enhance aspects of saccade and pursuit performance. Pharmacological treatment effects on eye movements across several clinical disorders including schizophrenia, affective disorders, attention deficit hyperactivity disorder, Parkinson's disease, and Huntington's disease are also reviewed. While greater
Musical hallucinations: review of treatment effects
Coebergh, Jan A. F.; Lauw, R. F.; Bots, R.; Sommer, I. E. C.; Blom, J. D.
2015-01-01
Background: Despite an increased scientific interest in musical hallucinations over the past 25 years, treatment protocols are still lacking. This may well be due to the fact that musical hallucinations have multiple causes, and that published cases are relatively rare. Objective: To review the effects of published treatment methods for musical hallucinations. Methods: A literature search yielded 175 articles discussing a total number of 516 cases, of which 147 articles discussed treatment in 276 individuals. We analyzed the treatment results in relation to the etiological factor considered responsible for the mediation of the musical hallucinations, i.e., idiopathic/hypoacusis, psychiatric disorder, brain lesion, and other pathology, epilepsy or intoxication/pharmacology. Results: Musical hallucinations can disappear without intervention. When hallucinations are bearable, patients can be reassured without any other treatment. However, in other patients musical hallucinations are so disturbing that treatment is indicated. Distinct etiological groups appear to respond differently to treatment. In the hypoacusis group, treating the hearing impairment can yield significant improvement and coping strategies (e.g., more acoustic stimulation) are frequently helpful. Pharmacological treatment methods can also be successful, with antidepressants being possibly more helpful than antiepileptics (which are still better than antipsychotics). The limited use of acetylcholinesterase inhibitors has looked promising. Musical hallucinations occurring as part of a psychiatric disorder tend to respond well to psychopharmacological treatments targeting the underlying disorder. Musical hallucinations experienced in the context of brain injuries and epilepsy tend to respond well to antiepileptics, but their natural course is often benign, irrespective of any pharmacological treatment. When intoxication/pharmacology is the main etiological factor, it is important to stop or switch the
Musical hallucinations: review of treatment effects.
Coebergh, Jan A F; Lauw, R F; Bots, R; Sommer, I E C; Blom, J D
2015-01-01
Despite an increased scientific interest in musical hallucinations over the past 25 years, treatment protocols are still lacking. This may well be due to the fact that musical hallucinations have multiple causes, and that published cases are relatively rare. To review the effects of published treatment methods for musical hallucinations. A literature search yielded 175 articles discussing a total number of 516 cases, of which 147 articles discussed treatment in 276 individuals. We analyzed the treatment results in relation to the etiological factor considered responsible for the mediation of the musical hallucinations, i.e., idiopathic/hypoacusis, psychiatric disorder, brain lesion, and other pathology, epilepsy or intoxication/pharmacology. Musical hallucinations can disappear without intervention. When hallucinations are bearable, patients can be reassured without any other treatment. However, in other patients musical hallucinations are so disturbing that treatment is indicated. Distinct etiological groups appear to respond differently to treatment. In the hypoacusis group, treating the hearing impairment can yield significant improvement and coping strategies (e.g., more acoustic stimulation) are frequently helpful. Pharmacological treatment methods can also be successful, with antidepressants being possibly more helpful than antiepileptics (which are still better than antipsychotics). The limited use of acetylcholinesterase inhibitors has looked promising. Musical hallucinations occurring as part of a psychiatric disorder tend to respond well to psychopharmacological treatments targeting the underlying disorder. Musical hallucinations experienced in the context of brain injuries and epilepsy tend to respond well to antiepileptics, but their natural course is often benign, irrespective of any pharmacological treatment. When intoxication/pharmacology is the main etiological factor, it is important to stop or switch the causative substance or medication
Effective treatment of head louse with pediculicides.
Mumcuoglu, Kosta Y
2006-05-01
Of the pediculicides on the market, most are not 100% ovicidal and do not have a residual activity of more than 2 days. Therefore, at least 2 treatments are necessary to control the entire louse population. In order for a pediculicide to be effective, it should kill all active stages of the louse after a single treatment. Otherwise, remaining lice will continue laying eggs and the following treatments will not be fully effective, at least against the eggs. However, there is no general consensus as to when the second treatment should be conducted. Taking into consideration that head louse eggs hatch between 5 and 11 days, it is suggested that a second treatment be administered 10 days after the beginning of the treatment. This might also explain why most of the clinical trials conducted by treating the patients twice with an interval of 6, 7, or 8 days showed poor efficacy, whereas clinical trials where the pediculicide was applied with an interval of 10 days showed an efficacy level of more than 90%.
ERIC Educational Resources Information Center
Zwick, Rebecca; Himelfarb, Igor
2011-01-01
Research has often found that, when high school grades and SAT scores are used to predict first-year college grade-point average (FGPA) via regression analysis, African-American and Latino students, are, on average, predicted to earn higher FGPAs than they actually do. Under various plausible models, this phenomenon can be explained in terms of…
Cost-effectiveness analysis of treatments for premenstrual dysphoric disorder.
Rendas-Baum, Regina; Yang, Min; Gricar, Joseph; Wallenstein, Gene V
2010-01-01
Premenstrual syndrome (PMS) is reported to affect between 13% and 31% of women. Between 3% and 8% of women are reported to meet criteria for the more severe form of PMS, premenstrual dysphoric disorder (PMDD). Although PMDD has received increased attention in recent years, the cost effectiveness of treatments for PMDD remains unknown. To evaluate the cost effectiveness of the four medications with a US FDA-approved indication for PMDD: fluoxetine, sertraline, paroxetine and drospirenone plus ethinyl estradiol (DRSP/EE). A decision-analytic model was used to evaluate both direct costs (medication and physician visits) and clinical outcomes (treatment success, failure and discontinuation). Medication costs were based on average wholesale prices of branded products; physician visit costs were obtained from a claims database study of PMDD patients and the Agency for Healthcare Research and Quality. Clinical outcome probabilities were derived from published clinical trials in PMDD. The incremental cost-effectiveness ratio (ICER) was calculated using the difference in costs and percentage of successfully treated patients at 6 months. Deterministic and probabilistic sensitivity analyses were used to assess the impact of uncertainty in parameter estimates. Threshold values where a change in the cost-effective strategy occurred were identified using a net benefit framework. Starting therapy with DRSP/EE dominated both sertraline and paroxetine, but not fluoxetine. The estimated ICER of initiating treatment with fluoxetine relative to DRSP/EE was $US4385 per treatment success (year 2007 values). Cost-effectiveness acceptability curves revealed that for ceiling ratios ≥$US3450 per treatment success, fluoxetine had the highest probability (≥0.37) of being the most cost-effective treatment, relative to the other options. The cost-effectiveness acceptability frontier further indicated that DRSP/EE remained the option with the highest expected net monetary benefit for
Effectiveness of Psychological and Pharmacological Treatments for Nocturnal Enuresis.
ERIC Educational Resources Information Center
Houts, Arthur C.; And Others
1994-01-01
Assesses overall effectiveness of psychological and pharmacological treatments, relative effectiveness of specific treatments, and moderators of treatment effectiveness for nocturnal enuretic children via quantitative integration of research. Findings confirm that more children benefit from psychological than from pharmacological interventions and…
Valkov, T.V.; Tan, C.S.
1999-07-01
In a two-part paper, key computed results from a set of first-of-a-kind numerical simulations on the unsteady interaction of axial compressor stator with upstream rotor wakes and tip leakage vortices are employed to elucidate their impact on the time-averaged performance of the stator. Detailed interrogation of the computed flowfield showed that for both wakes and tip leakage vortices, the impact of these mechanisms can be described on the same physical basis. Specifically, there are two generic mechanisms with significant influence on performance: reversible recovery of the energy in the wakes/tip vortices (beneficial) and the associated nontransitional boundary layer response (detrimental). In the presence of flow unsteadiness associated with rotor wakes and tip vortices, the efficiency of the stator under consideration is higher than that obtained using a mixed-out steady flow approximation. The effects of tip vortices and wakes are of comparable importance. The impact of stator interaction with upstream wakes and vortices depends on the following parameters: axial spacing, loading, and the frequency of wake fluctuations in the rotor frame. At reduced spacing, this impact becomes significant. The most important aspect of the tip vortex is the relative velocity defect and the associated relative total pressure defect, which is perceived by the stator in the same manner as a wake. In Part 2, the focus will be on the interaction of stator with the moving upstream rotor tip and streamwise vortices, the controlling parametric trends, and implications on design.
NASA Astrophysics Data System (ADS)
Moran-Lopez, Tiberius; Schilling, Oleg
2014-11-01
Reshocked Richtmyer-Meshkov turbulent mixing for various gas pairs and large shock Mach numbers is simulated using a third-order weighted essentially nonoscillatory (WENO) implementation of a new K-ε multicomponent Reynolds-averaged Navier-Stokes model. Experiments previously performed at the University of Provence with gas pairs CO2/He, CO2/Ar, and CO2/Kr (with At = -0.73, -0.05, and 0.3, respectively) and incident shock Mach numbers Ma = 2.4, 3.1, 3.7, 4.2, and 4.5 are considered. The evolution of the mixing layer widths is shown to be in good agreement with the experimental data. Budgets of the turbulent transport equations are used to elucidate the mechanisms contributing to turbulent mixing in large Mach number reshocked Richtmyer-Meshkov instability. These results are contrasted with those from previous modeling of smaller Mach number experiments to identify the physical effects which require accurate modeling, including mean and turbulent enthalpy diffusion, pressure-dilatation, and dilatation dissipation. This work was performed under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Neurocognitive Effects of Treatment for Childhood Cancer
ERIC Educational Resources Information Center
Butler, Robert W.; Haser, Jennifer K.
2006-01-01
We review research on the neuropsychological effects that central nervous system (CNS) cancer treatments have on the cognitive abilities of children and adolescents. The authors focus on the two most common malignancies of childhood: leukemias and brain tumors. The literature review is structured so as to separate out earlier studies, generally…
Apathy in Alzheimer's Disease: Any Effective Treatment?
Fasanaro, Angiola M.
2014-01-01
Objective. This review has evaluated the effectiveness of pharmacological treatment of apathy in patients with Alzheimer's disease (AD). Methods. A systematic literature search was conducted on published clinical trials assessing the effects of pharmacological treatment on apathy in AD over the last 10 years. Results. Fourteen studies considered of good quality were included in the analysis (4 randomized controlled trials, 9 open-label studies, and 1 retrospective analysis). Cholinesterase inhibitors were investigated in 9 studies, monoaminergic compounds such as methylphenidate and modafinil in two trials and one trial, respectively, and Ginkgo biloba (EGb 761 extract) and citalopram in one study each. Cholinesterase inhibitors did not show a statistically significant effect in 1 RCT but were associated with improvement in 3 open-label studies. Methylphenidate elicited a small but significant effect accompanied by relevant side effects such as high blood pressure, cough, and osteoarticular pain. EGb 761 was well tolerated and countered apathy. Other treatments induced modest improvements or were ineffective. Conclusions. Apathy treatment remains a challenge and there is no evident advantage of any specific pharmacotherapy tested so far. The development of controlled studies according to updated guidelines for the diagnosis of apathy in patients with AD is desirable. PMID:24672318
Evaluating the effectiveness of postfire rehabilitation treatments
Peter R. Robichaud; Jan L. Beyers; Daniel G. Neary
2000-01-01
Spending on postfire emergency watershed rehabilitation has increased during the past decade. A west-wide evaluation of USDA Forest Service burned area emergency rehabilitation (BAER) treatment effectiveness was undertaken as a joint project by USDA Forest Service Research and National Forest System staffs. This evaluation covers 470 fires and 321 BAER projects, from...
Liu, Q.Y.; Maris, M. |; Petcov, S.T. |
1997-11-01
This is the first of two articles aimed at providing comprehensive predictions for the day-night (D-N) effect for the Super-Kamiokande detector in the case of the Mikheyev-Smirnov-Wolfenstein (MSW) νe → νμ(τ) transition solution of the solar neutrino problem. The one-year-averaged probability of survival of the solar νe crossing the Earth's mantle, the core, the inner 2/3 of the core, and the (core + mantle) is calculated with high precision (better than 1%) using the elliptical orbit approximation to describe the Earth's motion around the Sun. Results for the survival probability in the indicated cases are obtained for a large set of values of the MSW transition parameters Δm² and sin²2θ_V from the "conservative" regions of the MSW solution, derived by taking into account possible relatively large uncertainties in the values of the ⁸B and ⁷Be neutrino fluxes. Our results show that the one-year-averaged D-N asymmetry in the νe survival probability for neutrinos crossing the Earth's core can be, in the case of sin²2θ_V ≤ 0.013, larger than the asymmetry in the probability for (only mantle crossing + core crossing) neutrinos by a factor of up to 6. The enhancement is larger in the case of neutrinos crossing the inner 2/3 of the core. This indicates that the Super-Kamiokande experiment might be able to test the sin²2θ_V ≤ 0.01 region of the MSW solution of the solar neutrino problem by performing selective D-N asymmetry measurements. © 1997 The American Physical Society
NASA Astrophysics Data System (ADS)
Cunha, D. M.; Tomal, A.; Poletti, M. E.
2010-08-01
In this work, a computational code for the study of imaging systems and dosimetry in conventional and digital mammography through Monte Carlo simulations is described. The developed code includes interference and Doppler energy broadening for the simulation of elastic and inelastic photon scattering, respectively. The code estimates the contribution of scattered radiation to image quality through the spatial distribution of the scatter-to-primary ratio (S/P). It allows the inclusion of different designs of anti-scatter grids (linear or cellular), for evaluation of the contrast improvement factor (CIF), Bucky factor (BF) and signal difference-to-noise ratio improvement factor (SIF). It also allows the computation of the normalized average glandular dose, D̄_g,N. These quantities were studied for different breast thicknesses and compositions, anode/filter combinations and tube potentials. Results showed that the S/P increases linearly with breast thickness, varying slightly with breast composition or the spectrum used. Evaluation of grid performance showed that the cellular grid provides the highest CIF with the smaller BF. The SIF was also greater for the cellular grid, although both grids showed SIF < 1 for thin breasts. Results for D̄_g,N showed that it increases with the half-value layer (HVL) of the spectrum, decreases considerably with breast thickness and has a small dependence on the anode/filter combination. Inclusion of interference effects of breast tissues affected the values of S/P obtained with the grid by up to 25%, while the energy broadening effect produced smaller variations in the evaluated quantities.
NASA Astrophysics Data System (ADS)
Vieira, V. N.; Mendonça, A. P. A.; Dias, F. T.; da Silva, D. L.; Pureur, P.; Schaf, J.; Hneda, M. L.; Mesquita, F.
2014-12-01
We report on MZFC(T) and MFCC(T) reversible dc magnetizations of YBa2Cu3O7-δ, Y0.99Ca0.01Ba2Cu3O7-δ and YBa1.75Sr0.25Cu3O7-δ single crystals, with a strong focus on the effects of Ca and Sr doping on the average superconducting kinetic energy density, k(T), of YBa2Cu3O7-δ. The quantity k(T) is used as a relevant tool to provide physical information about the HTSC pairing mechanism. The determination of k(T) from MZFC(T) and MFCC(T) data is supported by the virial theorem of superconductivity [k(T) = -MB]. The MZFC(T) and MFCC(T) measurements were performed with a SQUID magnetometer for H ≤ 50 kOe applied parallel to the c axis of the samples. The results show that the samples share a common k(T) behavior, characterized by a maximum value well below Tc that gradually decreases as the temperature rises toward Tc, vanishing at Tc. The magnetic field affects the k(T) behavior only weakly. Comparison of the k(T) results for our samples shows that Ca and Sr doping reduces its amplitude. A possible explanation for this feature is that the hole doping promoted by Ca and the chemical-pressure effect produced by Sr considerably affect the superconducting pairing mechanism of YBa2Cu3O7-δ.
Ziolko, Dominik; Hala, David; Lester, John N; Scrimshaw, Mark D
2009-12-01
Eight different sewage treatment works were sampled in the North West of England. The effectiveness of the conventional treatment processes (primary sedimentation and biological trickling filters), as well as various tertiary treatment units, in removing both total and dissolved copper was evaluated. The removal of total copper across primary sedimentation averaged 53% and was relatively consistent at all sites; however, at three sites removal of dissolved copper also occurred at this stage of treatment. Removal of total copper by the biological trickling filters averaged 49%; however, substantial dissolution of copper occurred at two sites, which highlighted the unpredictability of this treatment process in the removal of dissolved copper. Copper removal during tertiary treatment varied considerably, even for the same treatment processes installed at different sites, primarily due to the variability of insoluble copper removal, with little effect on copper in the dissolved form being observed. The proportion of dissolved copper increased significantly during treatment, from an average of 22% in crude sewages to 55% in the final effluents. There may be potential to optimise existing conventional treatment processes (primary or biological treatment) to enhance dissolved copper removal, possibly reducing the requirement for installing any tertiary processes specifically for the removal of copper.
An investigation of scale effects in family substance abuse treatment programs.
Lee, A James
2010-07-05
This short report investigates scale effects in family substance abuse treatment programs. In Massachusetts, the family substance abuse treatment programs were much more costly than other adult residential treatment models. State officials were concerned that the "scale" or size of these programs (averaging just eight families) was too small to be economical. Although the sample size (just nine programs) was too small to permit reliable inference, the data clearly signalled the importance of "scale effects" in these family substance abuse treatment programs. To further investigate scale effects in family substance abuse treatment programs, data from the Center for Substance Abuse Treatment's (CSAT's) Residential Women and Children and Pregnant and Postpartum Women (RWC-PPW) Demonstration were re-analyzed, focusing on the relationship between cost per family-day and the estimated average family census. This analysis indicates strong economies of scale up until an average family census of about 14, and less apparent scale effects beyond that point. In consideration of these and other study findings, a multidisciplinary interagency team redesigned the Massachusetts' family treatment program model. The new programs are larger than the former family treatment programs, with each new program having capacity to treat 11 to 15 families depending on family makeup.
Effects of outpatient treatment of dyslexia.
van Daal, V H; Reitsma, P
1999-01-01
The effects of a Dutch intervention program for dyslexia are reported. The program was individually tailored, depending on the style of reading, the phase of the learning process, and the intermediate results of the treatment. Two groups of participants were involved: (a) a group of children with pure dyslexia (n = 109) and (b) a group that had reading problems but also suffered from cognitive deficits or psychiatric symptoms (n = 29). Scores on reading single words and text at intake and after the intervention were analyzed to assess the efficacy of the intervention program. Furthermore, the effects of pre-intervention variables, such as intelligence and reported speech and language problems, and of intervention variables, such as the initial level of performance and the duration of the treatment, were examined. Both groups benefitted from the intervention, but the children with pure dyslexia profited most. Neither group was able to make up the reading deficit. Intelligence and reported speech and language problems did not affect the treatment outcomes. Individual differences in treatment outcome were related to the absolute level of word reading and age at intake. In the group with comorbidity, the intervention program was more successful in relatively younger children. Within this group, the cognitive deficits and types of psychiatric problems were not related to the treatment outcome.
Garasto, Sabrina; Berardelli, Maurizio; DeRango, Francesco; Mari, Vincenzo; Feraco, Emidio; De Benedictis, Giovanna
2004-01-01
Background: In studies on the genetics of human aging, we observed an age-related variation of the 3'APOB-VNTR genotypic pool (alleles: Short, S, <35 repeats; Medium, M, 35–39 repeats; Long, L, >39 repeats), with the homozygous SS genotype showing a convex frequency trajectory in a healthy aging population. This genotype was rare in centenarians, indicating that S alleles are unfavorable to longevity, but common in adults, indicating a protective role at middle age. This apparent paradox could be due to possible effects exerted by the above polymorphism on lipidemic parameters. The aim of the work was to gain insight into these puzzling findings. Methods: We followed a double strategy. Firstly, we analyzed the average effects of the S (αS), M (αM), and L (αL) alleles on lipidemic parameters in a sample of healthy people (409 subjects aged 20–102 years) recruited in Calabria (southern Italy). The αS, αM, and αL values were estimated by relating 3'APOB-VNTR genotypes to lipidemic parameters, after adjustment for age, sex and body mass index (multiple regression). Then, we analyzed the S alleles as susceptibility factors for Cardiovascular Atherosclerotic Disease (CD) in CD patients characterized either by low serum HDL-Cholesterol or by high serum LDL-Cholesterol (CD-H and CD-L patients, 40 and 40 subjects respectively). Odds Ratios (OR) were computed for carriers of S alleles in CD-H and CD-L patients matched for origin, sex and age with controls extracted from the sample of healthy subjects. Results: In the healthy sample group we found that the S alleles lower the average values of serum Total Cholesterol (αS = -5.98 mg/dL with [-11.62 ÷ -0.74] 95% confidence interval) and LDL-Cholesterol (αS = -4.41 mg/dL with [-8.93 ÷ -0.20] 95% confidence interval), while the M and L alleles have no significant effect on the lipidemic phenotype. In line with these findings, the analysis of CD patients showed that the S alleles are
Assessment of the effects of CT dose in averaged x-ray CT images of a dose-sensitive polymer gel
NASA Astrophysics Data System (ADS)
Kairn, T.; Kakakhel, M. B.; Johnston, H.; Jirasek, A.; Trapp, J. V.
2015-01-01
The signal-to-noise ratio achievable in x-ray computed tomography (CT) images of polymer gels can be increased by averaging over multiple scans of each sample. However, repeated scanning delivers a small additional dose to the gel which may compromise the accuracy of the dose measurement. In this study, a NIPAM-based polymer gel was irradiated and then CT scanned 25 times, with the resulting data used to derive an averaged image and a "zero-scan" image of the gel. Comparison between these two results and the first scan of the gel showed that the averaged and zero-scan images provided better contrast, higher contrast-to-noise and higher signal-to-noise than the initial scan. The pixel values (Hounsfield units, HU) in the averaged image were not noticeably elevated compared to the zero-scan result, and the gradients used in the linear extrapolation of the zero-scan images were small and symmetrically distributed around zero. These results indicate that the averaged image was not artificially lightened by the small additional dose delivered during CT scanning. This work demonstrates the broader usefulness of the zero-scan method as a means to verify the dosimetric accuracy of gel images derived from averaged x-ray CT data.
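The averaging and zero-scan extrapolation described in this abstract can be sketched as follows. This is an illustrative Python example with synthetic pixel data standing in for real CT scans of a gel; the image size, noise level, and zero-drift assumption are all hypothetical, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stack of 25 repeated CT scans of one slice (hypothetical
# values; real data would be Hounsfield units from a gel phantom).
n_scans, shape = 25, (8, 8)
true_image = rng.uniform(0.0, 60.0, size=shape)        # underlying HU map
drift_per_scan = 0.0                                   # assume no dose-induced drift
noise = rng.normal(0.0, 5.0, size=(n_scans,) + shape)  # scan-to-scan noise
scans = true_image + drift_per_scan * np.arange(n_scans)[:, None, None] + noise

# Averaged image: mean over all scans (noise drops roughly as 1/sqrt(N)).
averaged = scans.mean(axis=0)

# "Zero-scan" image: per-pixel linear fit of pixel value vs scan index,
# extrapolated back to scan 0, i.e. zero additional imaging dose.
idx = np.arange(n_scans)
flat = scans.reshape(n_scans, -1)
slope, intercept = np.polyfit(idx, flat, deg=1)  # one fit per pixel
zero_scan = intercept.reshape(shape)

# With no drift, both estimates should track the true image far better
# than any single scan does, and agree with each other.
print(np.abs(averaged - true_image).mean())
print(np.abs(zero_scan - true_image).mean())
```

If scanning dose did lighten the images, the per-pixel slopes would be systematically positive; the abstract's observation that they were small and centered on zero is what justifies using the plain average.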
ERIC Educational Resources Information Center
Corno, Lyn; And Others
1981-01-01
Treatment and aptitude-treatment interaction (ATI) effects were assessed on grade 3 student self-appraisal data relating to self-esteem, attitude, anxiety, and locus of control. In particular, parent instruction in learning skills resulted in significantly higher average scores on student self-esteem and attitude and lower scores on anxiety.…
NASA Astrophysics Data System (ADS)
Roelvink, Dano; Costas, Susana
2015-04-01
Geological records contain a wealth of information about accretionary episodes in the life of a coastal profile, such as the age and type of the deposits and the circumstances during which the accretion took place; for erosional events, mainly the final limit of the erosion and the circumstances under which it took place can be estimated. To obtain a more complete picture of the events shaping the sedimentary record and the transport processes involved, process-based modelling can be a useful tool (e.g. Apotsos et al., 2011). However, application of such modelling to different types of events remains a challenge. In our presentation we intend to show examples of the effects of different events on the stratigraphic record and to discuss the challenges related to the modelling of each of these types of events. The test site chosen is the Costa da Caparica, south of Lisbon, Portugal. The stratigraphic record and progradation rates of the coastal barrier were obtained combining geophysical (Ground Penetrating Radar) and dating (Optically Stimulated Luminescence) techniques, which document very recent ages for the preserved coastal barrier. Within the record, we focus on a period around the big tsunami of 1755, during which the shoreline experienced a long-term prograding trend with evidence of severe erosion events. Rather than trying to exactly reproduce the stratigraphy observed here, we will carry out exploratory simulations to create 'building blocks' of stratigraphy related to the different types of events, which we can loosely compare with observations reported in Rebelo et al. (2013). The model applied for all simulations is XBeach (Roelvink et al., 2009), which is used in three different modes (no short waves, time-varying wave action balance, stationary wave action balance, respectively) to accommodate the impact of tsunamis, storms, and average conditions; for the latter we include the dune and associated processes in a simplified aeolian transport and response model. In all
Cai, Jing; Read, Paul W; Sheng, Ke
2008-11-01
Composite images such as average intensity projection (AIP) and maximum intensity projection (MIP) derived from four-dimensional computed tomography (4D-CT) images are commonly used in radiation therapy for treating lung and abdominal tumors. It has been reported that the quality of 4D-CT images is influenced by the patient respiratory variability, which can be assessed by the standard deviation of the peak and valley of the respiratory trajectory. Subsequently, the resultant MIP underestimates the actual tumor motion extent. As a more general application, AIP comprises not only the tumor motion extent but also the probability that the tumor is present. AIP generated from 4D-CT can also be affected by the respiratory variability. To quantitate the accuracy of AIP and develop clinically relevant parameters for determining suitability of the 4D-CT study for AIP-based treatment planning, real time sagittal dynamic magnetic resonance imaging (dMRI) was used as the basis for generating simulated 4D-CT. Five-minute MRI scans were performed on seven healthy volunteers and eight lung tumor patients. In addition, images of circular phantoms with diameter 1, 3, or 5 cm were generated by software to simulate lung tumors. Motion patterns determined by dMRI images were reproduced by the software generated phantoms. Resorted dMRI using a 4D-CT acquisition method (RedCAM) based on phantom or patient images was reconstructed by simulating the imaging rebinning processes. AIP images and the corresponding color intensity projection (CIP) images were reconstructed from RedCAM and the full set of dMRI for comparison. AIP similarity indicated by the Dice index between RedCAM and dMRI was calculated and correlated with respiratory variability (v) and tumor size (s). The similarity of percentile intrafractional motion target area (IMTA), defined by the area that the tumor presented for a given percentage of time, and MIP-to-percentile IMTA similarity as a function of percentile were also
Alexoff, D.L.; Alexoff, D.L.; Dewey, S.L.; Vaska, P.; Krishnamoorthy, S.; Ferrieri, R.; Schueller, M.; Schlyer, D.; Fowler, J.S.
2011-03-01
PET imaging in plants is receiving increased interest as a new strategy to measure plant responses to environmental stimuli and as a tool for phenotyping genetically engineered plants. PET imaging in plants, however, poses new challenges. In particular, the leaves of most plants are so thin that a large fraction of positrons emitted from PET isotopes (¹⁸F, ¹¹C, ¹³N) escape, while even state-of-the-art PET cameras have significant partial-volume errors for such thin objects. Although these limitations are acknowledged by researchers, little data have been published on them. Here we measured the magnitude and distribution of escaping positrons from the leaf of Nicotiana tabacum for the radionuclides ¹⁸F, ¹¹C and ¹³N using a commercial small-animal PET scanner. Imaging results were compared to radionuclide concentrations measured from dissection and counting and to a Monte Carlo simulation using GATE (Geant4 Application for Tomographic Emission). Simulated and experimentally determined escape fractions were consistent. The fractions of positrons (mean ± S.D.) escaping the leaf parenchyma were measured to be 59 ± 1.1%, 64 ± 4.4% and 67 ± 1.9% for ¹⁸F, ¹¹C and ¹³N, respectively. Escape fractions were lower in thicker leaf areas like the midrib. Partial-volume averaging underestimated activity concentrations in the leaf blade by a factor of 10 to 15. The foregoing effects combine to yield PET images whose contrast does not reflect the actual activity concentrations. These errors can be largely corrected by integrating activity along the PET axis perpendicular to the leaf surface, including detection of escaped positrons, and calculating concentration using a measured leaf thickness.
NASA Astrophysics Data System (ADS)
Zimmerman, R. W.; Leung, C. T.
2009-12-01
Most oil and gas reservoirs, as well as most potential sites for nuclear waste disposal, are naturally fractured. In these sites, the network of fractures will provide the main path for fluid to flow through the rock mass. In many cases, the fracture density is so high as to make it impractical to model it with a discrete fracture network (DFN) approach. For such rock masses, it would be useful to have recourse to analytical, or semi-analytical, methods to estimate the macroscopic hydraulic conductivity of the fracture network. We have investigated single-phase fluid flow through stochastically generated two-dimensional fracture networks. The centers and orientations of the fractures are uniformly distributed, whereas their lengths follow a lognormal distribution. The aperture of each fracture is correlated with its length, either through direct proportionality, or through a nonlinear relationship. The discrete fracture network flow and transport simulator NAPSAC, developed by Serco (Didcot, UK), is used to establish the "true" macroscopic hydraulic conductivity of the network. We then attempt to match this value by starting with the individual fracture conductances, and using various upscaling methods. Kirkpatrick's effective medium approximation, which works well for pore networks on a core scale, generally underestimates the conductivity of the fracture networks. We attribute this to the fact that the conductances of individual fracture segments (between adjacent intersections with other fractures) are correlated with each other, whereas Kirkpatrick's approximation assumes no correlation. The power-law averaging approach proposed by Desbarats for porous media is able to match the numerical value, using power-law exponents that generally lie between 0 (geometric mean) and 1 (arithmetic mean). The appropriate exponent can be correlated with statistical parameters that characterize the fracture density.
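The power-law averaging idea in this abstract can be sketched numerically. The snippet below is an illustrative Python example, not the authors' implementation: the lognormal conductance sample and the exponent 0.5 are assumptions chosen only to show how the power mean interpolates between the classical means:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical conductances of individual fracture segments (lognormal,
# loosely motivated by lognormal fracture lengths with length-correlated
# apertures).
g = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

def power_law_mean(x, p):
    """Power-law (Desbarats-style) average: (<x^p>)^(1/p).
    p -> 0 recovers the geometric mean, p = 1 the arithmetic mean,
    and p = -1 the harmonic mean."""
    if abs(p) < 1e-12:
        return float(np.exp(np.mean(np.log(x))))
    return float(np.mean(x ** p) ** (1.0 / p))

harmonic   = power_law_mean(g, -1.0)
geometric  = power_law_mean(g, 0.0)
arithmetic = power_law_mean(g, 1.0)
upscaled   = power_law_mean(g, 0.5)   # exponent between 0 and 1, the range
                                      # reported for the fracture networks

# The power mean is nondecreasing in p, so the upscaled estimate lies
# between the geometric and arithmetic means.
print(harmonic, geometric, upscaled, arithmetic)
```

Fitting the exponent p so that the power mean reproduces the DFN-simulated conductivity is what lets the exponent be correlated with fracture-density statistics.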
Widjaja, E; Mahmoodabadi, S Z; Rea, D; Moineddin, R; Vidarsson, L; Nilsson, D
2009-01-01
Tensor estimation can be improved by increasing the number of gradient directions (NGD) or increasing the number of signal averages (NSA), but at a cost of increased scan time. To evaluate the effects of NGD and NSA on fractional anisotropy (FA) and fiber density index (FDI) in vivo. Ten healthy adults were scanned on a 1.5T system using nine different diffusion tensor sequences. Combinations of 7 NGD, 15 NGD, and 25 NGD with 1 NSA, 2 NSA, and 3 NSA were used, with scan times varying from 2 to 18 min. Regions of interest (ROIs) were placed in the internal capsules, middle cerebellar peduncles, and splenium of the corpus callosum, and FA and FDI were calculated. Analysis of variance was used to assess whether there was a difference in FA and FDI of different combinations of NGD and NSA. There was no significant difference in FA of different combinations of NGD and NSA of the ROIs (P>0.005). There was a significant difference in FDI between 7 NGD/1 NSA and 25 NGD/3 NSA in all three ROIs (P<0.005). There were no significant differences in FDI between 15 NGD/3 NSA, 25 NGD/1 NSA, and 25 NGD/2 NSA and 25 NGD/3 NSA in all ROIs (P>0.005). We have not found any significant difference in FA with varying NGD and NSA in vivo in areas with relatively high anisotropy. However, lower NGD resulted in reduced FDI in vivo. With larger NGD, NSA has less influence on FDI. The optimal sequence among the nine sequences tested with the shortest scan time was 25 NGD/1 NSA.
Alternatives to the Moving Average
Paul C. van Deusen
2001-01-01
There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
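The default 5-year moving average estimator discussed in this abstract is simple to state in code. The following Python sketch uses made-up annual inventory values purely for illustration; the variable names and numbers are not from the paper:

```python
import numpy as np

# Hypothetical annual inventory estimates (e.g. volume per acre) for 9 years.
years = np.arange(2001, 2010)
annual = np.array([10.0, 11.0, 9.5, 12.0, 12.5, 13.0, 12.0, 14.0, 13.5])

def moving_average(x, window=5):
    """Unweighted moving average: each output is the mean of the most
    recent `window` annual estimates, available once `window` years exist."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

ma = moving_average(annual)  # one estimate per year from the 5th year on
print(dict(zip(years[4:].tolist(), np.round(ma, 2).tolist())))
```

The smoothing that makes the estimator stable is also its weakness for the user objectives discussed above: it lags any real trend by roughly half the window, which is why alternatives to the moving average are worth considering.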
Treatment of childhood cancers: late effects.
2015-10-01
In France, about 1 in 1000 young adults aged 20 to 30 years is a survivor of childhood cancer and is thus faced with late effects of their cancer and its treatment (radiation therapy and/or chemotherapy). What are the late effects of childhood cancer therapy? A systematic review by the Scottish Intercollegiate Guidelines Network (SIGN) provides useful information based on European and North American data. Cancer treatments can have many long-term consequences that depend on the drugs and doses used, radiation therapy protocols and irradiated organs, and age at the time of treatment. Cytotoxic drugs and radiation can both cause infertility. Abdominopelvic radiation therapy in girls has been linked to an increased risk of premature delivery and other complications of pregnancy. No increase in birth defects has been reported among children born to childhood cancer survivors. Anthracyclines and radiation therapy can cause cardiomyopathy. Neck irradiation can lead to thyroid disorders, and cranial irradiation to growth retardation. Chemotherapy can cause osteonecrosis and loss of bone density, but without an increased risk of fracture. The risk of cognitive impairment and structural abnormalities of the brain is higher when the child is younger or receives a high cumulative dose of cranial irradiation or total irradiation dosage. Some cytotoxic drugs can damage the kidneys. Cranial radiation therapy can cause long-term neuroendocrine disorders and growth disorders, especially when the dose exceeds 18 Gy. Cytotoxic drugs (alkylating agents, etoposide, etc.) and radiation therapy can cause second cancers of a different histological type. One analysis of second cancers showed a median time to onset of 7 years for solid tumours and 2.5 years for lymphoma and leukaemia. Better knowledge of the late effects of childhood cancer therapy can help orient the choice of treatment towards less harmful options or, if necessary, implement measures aimed at preventing late adverse
The economic effects of whole-herd versus selective anthelmintic treatment strategies in dairy cows.
Charlier, J; Levecke, B; Devleesschauwer, B; Vercruysse, J; Hogeveen, H
2012-06-01
Current control practices against gastrointestinal nematodes in dairy cows rely strongly on anthelmintic use. To reduce the development of anthelmintic resistance or disposition of drug residues in the environment, novel control approaches are currently proposed that target anthelmintic treatment to individual animals instead of the whole herd. However, such selective treatment strategies come with additional costs for labor and diagnostics and, so far, no studies have addressed whether they could be economically sustainable. The objectives of this study were to (1) investigate the economic effects at farm level of whole-herd versus more selective anthelmintic treatment strategies in adult dairy cows, and (2) determine how these economic effects depend on level of infection and herd size. A Monte Carlo simulation, fed by current epidemiological and economical knowledge, was used to estimate the expected economic effects and possible variation of different control strategies under Belgian conditions. Four treatment strategies were compared with a baseline situation in which no treatments were applied: whole herd at calving (S1), selective at calving with (S2) or without (S3) treatment of the first-calf cows, and whole-herd when animals are moved from grazing to the barn in the fall (housing treatment, S4). The benefit per lactation for an average dairy herd varied between -$2 and $131 (average $64) for S1, between -$2 and $127 (average $62) for S2, between -$17 and $104 (average $43) for S3, and between -$41 and $72 (average $15) for S4. The farmer's risk associated with any treatment strategy, as indicated by the width of the 95% credible intervals of economic benefit of anthelmintic treatment, decreased with increasing level of exposure, as assessed by bulk tank milk ELISA. The order of the different strategies when sorted by expected benefit was robust to changes in economic input parameters. We conclude that, on average, strategies applying anthelmintic
Lessons from placebo effects in migraine treatment.
Antonaci, Fabio; Chimento, Pierluigi; Diener, Hans-Christoph; Sances, Grazia; Bono, Giorgio
2007-02-01
In medical research, the placebo effect is an important methodological tool. Placebo is given to participants in clinical trials, with the intention of mimicking an experimental intervention. The "nocebo" effect, on the other hand, is the phenomenon whereby a patient who believes that a treatment will cause harm actually does experience adverse effects. The placebo effect strongly influences the way the results of clinical trials are interpreted. Placebo responses vary with the choice of study design, the choice of primary outcome measure, the characteristics of the patients and the cultural setting in which the trial is conducted. In migraine trials, the placebo response is high, in terms of both efficacy and side effects. Although medical ethics committees are becoming increasingly resistant to the use of placebo in acute migraine trials, placebo nevertheless remains the pivotal comparator in trials of migraine medications.
Gautam, R; Vanderstichel, R; Boerlage, A S; Revie, C W; Hammell, K L
2017-03-01
Effectiveness of sea lice bath treatment is often assessed by comparing pre- and post-treatment counts. However, in practice, the post-treatment counting window varies from the day of treatment to several days after treatment. In this study, we assess the effect of post-treatment lag time on sea lice abundance estimates after chemical bath treatment using data from the sea lice data management program (Fish-iTrends) between 2010 and 2014. Data on two life stages, (i) adult female (AF) and (ii) pre-adult and adult male (PAAM), were aggregated at the cage level and log-transformed. Average sea lice counts by post-treatment lag time were computed for AF and PAAM and compared relative to treatment day, using linear mixed models. There were 720 observations (treatment events) that uniquely matched pre- and post-treatment counts from 53 farms. Lag time had a significant effect on the estimated sea lice abundance, which was influenced by season and pre-treatment sea lice levels. During summer, sea lice were at a minimum when counted 1 day post-treatment irrespective of pre-treatment sea lice levels, whereas in the spring and autumn, low levels were observed for PAAM over a longer interval of time, provided the pre-treatment sea lice levels were >5-10. © 2016 John Wiley & Sons Ltd.
Cox, Louis A; Popken, Douglas A; Ricci, Paolo F
2013-08-01
Recent studies have indicated that reducing particulate pollution would substantially reduce average daily mortality rates, prolonging lives, especially among the elderly (age ≥ 75). These benefits are projected by statistical models of significant positive associations between fine particulate matter (PM2.5) levels and daily mortality rates. We examine the empirical correspondence between changes in average PM2.5 levels and temperatures from 1999 to 2000, and corresponding changes in average daily mortality rates, in each of 100 U.S. cities in the National Mortality and Morbidity Air Pollution Study (NMMAPS) data base, which has extensive PM2.5, temperature, and mortality data for those 2 years. Increases in average daily temperatures appear to significantly reduce average daily mortality rates, as expected from previous research. Unexpectedly, reductions in PM2.5 do not appear to cause any reductions in mortality rates. PM2.5 and mortality rates are both elevated on cold winter days, creating a significant positive statistical relation between their levels, but we find no evidence that reductions in PM2.5 concentrations cause reductions in mortality rates. For all concerned, it is crucial to use causal relations, rather than statistical associations, to project the changes in human health risks due to interventions such as reductions in particulate air pollution.
Flexible time domain averaging technique
NASA Astrophysics Data System (ADS)
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract the specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. To overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
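Conventional synchronous (time domain) averaging, the baseline that FTDA improves on, can be sketched as follows. This is not the authors' FTDA algorithm; the signal parameters and the simple whole-period truncation (one source of period cutting error) are illustrative.

```python
import numpy as np

def time_domain_average(signal, period_samples):
    """Conventional TDA: split the signal into whole periods and average them.
    Samples beyond the last whole period are discarded, which illustrates one
    source of the period cutting error that FTDA is designed to avoid."""
    n_periods = len(signal) // period_samples
    segments = signal[:n_periods * period_samples].reshape(n_periods, period_samples)
    return segments.mean(axis=0)

# Illustrative periodic signal buried in noise.
rng = np.random.default_rng(0)
period = 100
t = np.arange(50 * period)
clean = np.sin(2 * np.pi * t / period)
noisy = clean + rng.normal(scale=1.0, size=t.size)

avg = time_domain_average(noisy, period)
# Averaging 50 periods suppresses uncorrelated noise by roughly sqrt(50).
residual = np.abs(avg - clean[:period]).max()
```

Because the comb-filter behavior comes from this rigid period-by-period segmentation, any mismatch between the true period and an integer number of samples produces the reconstruction error the abstract discusses.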
Lim, Wansu; Cho, Tae-Sik; Yun, Changho; Kim, Kiseon
2009-11-09
In this paper, we derive the average bit error rate (BER) of subcarrier multiplexing (SCM)-based free space optics (FSO) systems using a dual-drive Mach-Zehnder modulator (DD-MZM) for optical single-sideband (OSSB) signals under atmospheric turbulence channels. In particular, we consider the third-order intermodulation (IM3), a significant performance degradation factor, in the case of high input signal power systems. The derived average BER, as a function of the input signal power and the scintillation index, is employed to determine the optimum number of SCM users when designing FSO systems. For instance, when the user number doubles, the input signal power decreases by almost 2 dBm under the log-normal and exponential turbulence channels at a given average BER.
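The averaging step behind such results can be illustrated generically: a conditional bit error rate is averaged over a fading (irradiance) distribution. The sketch below uses a plain Q-function BER and Monte Carlo integration over a log-normal channel; it is not the paper's SCM/DD-MZM model with IM3, and all parameter values are illustrative.

```python
import numpy as np
from math import erfc, sqrt

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def average_ber_lognormal(snr0, sigma2, n=50000, seed=0):
    """Average a conditional BER, Q(sqrt(snr0) * I), over log-normal irradiance
    fluctuations with log-variance sigma2 (related to the scintillation index).
    The mean irradiance is normalised to E[I] = 1."""
    rng = np.random.default_rng(seed)
    # For E[I] = 1 the underlying normal has mean -sigma2/2.
    I = rng.lognormal(mean=-sigma2 / 2.0, sigma=np.sqrt(sigma2), size=n)
    return float(np.mean([q_function(np.sqrt(snr0) * i) for i in I]))

ber_weak = average_ber_lognormal(16.0, 0.1)    # mild scintillation
ber_strong = average_ber_lognormal(16.0, 0.5)  # stronger scintillation
# Deeper fades under stronger turbulence raise the average BER even though
# the mean irradiance is the same.
```

The same structure (conditional BER integrated against the turbulence pdf) underlies closed-form average-BER expressions; Monte Carlo is used here only to keep the sketch self-contained.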
NASA Technical Reports Server (NTRS)
Liu, W. T.
1984-01-01
The average wind speeds from the scatterometer (SASS) on the ocean observing satellite SEASAT are found to be generally higher than the average wind speeds from ship reports. In this study, two factors, sea surface temperature and atmospheric stability, are identified which affect microwave scatter and, therefore, wave development. The problem of relating satellite observations to a fictitious quantity, such as the neutral wind, that has to be derived from in situ observations with models is examined. The study also demonstrates the dependence of SASS winds on sea surface temperature at low wind speeds, possibly due to temperature-dependent factors, such as water viscosity, which affect wave development.
NASA Astrophysics Data System (ADS)
Luznik, L.; Lust, E.; Flack, K. A.
2014-12-01
There are few studies describing the interaction between marine current turbines and an overlying surface gravity wave field. In this work we present an experimental study of the effects of surface gravity waves of different wavelengths on the wave-phase-averaged performance characteristics of a marine current turbine model. Measurements are performed with a 1/25 scale (diameter D=0.8 m) two-bladed horizontal axis turbine towed in the large (116 m long) towing tank at the U.S. Naval Academy, equipped with a dual-flap, servo-controlled wave maker. Three regular waves with wavelengths of 15.8, 8.8 and 3.9 m, with wave heights adjusted such that all waveforms have the same energy input per unit width, are produced by the wave maker, and the model turbine is towed into the waves at a constant carriage speed of 1.68 m/s, representing the case of waves travelling in the same direction as the mean current. Thrust and torque developed by the model turbine are measured using a dynamometer mounted in line with the turbine shaft. Shaft rotation speed and blade position are measured using an in-house designed shaft position indexing system. The tip speed ratio (TSR) is adjusted using a hysteresis brake attached to the output shaft. Free surface elevation and wave parameters are measured with two optical wave height sensors, one located in the turbine rotor plane and the other one diameter upstream of the rotor. All instruments are synchronized in time and data are sampled at a rate of 700 Hz. All measured quantities are conditionally sampled as a function of the measured surface elevation and transformed to wave phase space using the Hilbert transform. Phenomena observed in earlier experiments with the same turbine, such as a phase lag in the torque signal and an increase in thrust due to Stokes drift, are examined with the present data, along with spectral analysis of the torque and thrust data.
ERIC Educational Resources Information Center
McCarthy, Kevin J.
In 1997, all high schools in the largest school district in Colorado were invited to participate in a study of whether students who participated in school-sponsored activities were different from nonparticipants with respect to grade point averages (GPAs) and school attendance. The project also studied differences in these areas for gender,…
Chapman, Cole G; Brooks, John M
2016-12-01
To examine the settings of simulation evidence supporting use of nonlinear two-stage residual inclusion (2SRI) instrumental variable (IV) methods for estimating average treatment effects (ATE) using observational data and investigate potential bias of 2SRI across alternative scenarios of essential heterogeneity and uniqueness of marginal patients. Potential bias of linear and nonlinear IV methods for ATE and local average treatment effects (LATE) is assessed using simulation models with a binary outcome and binary endogenous treatment across settings varying by the relationship between treatment effectiveness and treatment choice. Results show that nonlinear 2SRI models produce estimates of ATE and LATE that are substantially biased when the relationships between treatment and outcome for marginal patients are unique from relationships for the full population. Bias of linear IV estimates for LATE was low across all scenarios. Researchers are increasingly opting for nonlinear 2SRI to estimate treatment effects in models with binary and otherwise inherently nonlinear dependent variables, believing that it produces generally unbiased and consistent estimates. This research shows that positive properties of nonlinear 2SRI rely on assumptions about the relationships between treatment effect heterogeneity and choice. © Health Research and Educational Trust.
Modeling psychiatric disorders for developing effective treatments
Kaiser, Tobias; Feng, Guoping
2016-01-01
The recent advance in identifying risk genes has provided an unprecedented opportunity for developing animal models for psychiatric disease research with the goal of attaining translational utility to ultimately develop novel treatments. However, at this early stage, successful translation has yet to be achieved. Here, we review recent advances in modeling psychiatric disease, discuss utility and limitations of animal models, and emphasize the importance of shifting from behavioral analysis to identifying neurophysiological defects, which are likely more conserved across species and thus increase translatability. Looking forward, we envision that preclinical research will align with clinical research to build a common framework of comparable neurobiological abnormalities and form subgroups of patients based on similar pathophysiology. Experimental neuroscience can then use animal models to discover mechanisms underlying distinct abnormalities and develop strategies for effective treatments. PMID:26340119
Spatial limitations in averaging social cues
Florey, Joseph; Clifford, Colin W. G.; Dakin, Steven; Mareschal, Isabelle
2016-01-01
The direction of social attention from groups provides stronger cueing than from an individual. It has previously been shown that both basic visual features such as size or orientation and more complex features such as face emotion and identity can be averaged across multiple elements. Here we used an equivalent noise procedure to compare observers’ ability to average social cues with their averaging of a non-social cue. Estimates of observers’ internal noise (uncertainty associated with processing any individual) and sample size (the effective number of gaze directions pooled) were derived by fitting equivalent noise functions to discrimination thresholds. We also used reverse correlation analysis to estimate the spatial distribution of samples used by participants. Averaging of head rotation and cone rotation was less noisy and more efficient than averaging of gaze direction, though presenting only the eye region of faces at a larger size improved gaze averaging performance. The reverse correlation analysis revealed greater sampling areas for head rotation compared to gaze. We attribute these differences in averaging between gaze and head cues to poorer visual processing of faces in the periphery. The similarity between head and cone averaging is examined within the framework of a general mechanism for averaging of object rotation. PMID:27573589
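The equivalent noise model referred to here has a standard two-parameter form: discrimination thresholds reflect internal noise and external stimulus noise adding in variance, divided by the effective number of samples pooled. A minimal sketch with made-up parameter values (not the study's estimates):

```python
import numpy as np

def equivalent_noise(sigma_ext, sigma_int, n_samp):
    """Threshold (std of the observer's pooled estimate) predicted by the
    equivalent noise model: internal and external noise add in variance,
    divided by the effective number of samples pooled."""
    return np.sqrt((sigma_int**2 + sigma_ext**2) / n_samp)

# Illustrative parameters (degrees of rotation): hypothetical, not fitted.
sigma_int, n_samp = 4.0, 3
ext_levels = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
thresholds = equivalent_noise(ext_levels, sigma_int, n_samp)
# At low external noise the threshold floor is ~ sigma_int / sqrt(n_samp);
# at high external noise it grows as sigma_ext / sqrt(n_samp), so the two
# regimes of the fitted curve separately constrain the two parameters.
```

Fitting this function to thresholds measured across external noise levels is how the internal-noise and sample-size estimates in the abstract are typically obtained.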
NASA Astrophysics Data System (ADS)
Sellitto, P.; Dufour, G.; Eremenko, M.; Cuesta, J.; Peuch, V.-H.; Eldering, A.; Edwards, D. P.; Flaud, J.-M.
2013-03-01
Practical implementations of chemical OSSEs (Observing System Simulation Experiments) usually rely on approximations of the pseudo-observations by means of a prior parametrization of the averaging kernels, which describe the sensitivity of the observing system to the target atmospheric species. This is intended to avoid the use of a computationally expensive pseudo-observations simulator that relies on full radiative transfer calculations. Here we present an investigation of how absent or limited scene-dependent averaging kernel parametrizations may misrepresent the sensitivity of an observing system, and thus possibly lead to inaccurate OSSE results. We carried out the full radiative transfer calculation for a three-day period over Europe, to produce reference pseudo-observations of lower tropospheric ozone, as they would be observed by a concept geostationary observing system called MAGEAQ (Monitoring the Atmosphere from Geostationary orbit for European Air Quality). The selected spatiotemporal interval is characterized by a marked ozone pollution event. We then compared our reference with approximated pseudo-observations, following existing simulation exercises made for both the MAGEAQ and GEOstationary Coastal and Air Pollution Events (GEO-CAPE) missions. We found that approximated averaging kernels may fail to replicate the variability of the full radiative transfer calculations. Then, we compared the full radiative transfer and the approximated pseudo-observations during a pollution event. We found that the approximations substantially overestimate the capability of MAGEAQ to follow the spatiotemporal variations of the lower tropospheric ozone in selected areas. We conclude that such approximations may lead to false conclusions if used in an OSSE. Thus, we recommend using comprehensive scene-dependent approximations of the averaging kernels in cases where the full radiative transfer is computationally too costly for the OSSE being investigated.
NASA Astrophysics Data System (ADS)
Sellitto, P.; Dufour, G.; Eremenko, M.; Cuesta, J.; Peuch, V.-H.; Eldering, A.; Edwards, D. P.; Flaud, J.-M.
2013-08-01
Practical implementations of chemical OSSEs (Observing System Simulation Experiments) usually rely on approximations of the pseudo-observations by means of a predefined parametrization of the averaging kernels, which describe the sensitivity of the observing system to the target atmospheric species. This is intended to avoid the use of a computationally expensive pseudo-observations simulator that relies on full radiative transfer calculations. Here we present an investigation of how absent or limited scene-dependent averaging kernel parametrizations may misrepresent the sensitivity of an observing system. We carried out the full radiative transfer calculation for a three-day period over Europe, to produce reference pseudo-observations of lower tropospheric ozone, as they would be observed by a concept geostationary observing system called MAGEAQ (Monitoring the Atmosphere from Geostationary orbit for European Air Quality). The selected spatio-temporal interval is characterised by an ozone pollution event. We then compared our reference with approximated pseudo-observations, following existing simulation exercises made for both the MAGEAQ and GEOstationary Coastal and Air Pollution Events (GEO-CAPE) missions. We found that approximated averaging kernels may fail to replicate the variability of the full radiative transfer calculations. In addition, we found that the approximations substantially overestimate the capability of MAGEAQ to follow the spatio-temporal variations of the lower tropospheric ozone in selected areas, during the mentioned pollution event. We conclude that such approximations may lead to false conclusions if used in an OSSE. Thus, we recommend using comprehensive scene-dependent approximations of the averaging kernels, in cases where the full radiative transfer is computationally too costly for the OSSE being investigated.
Schizophrenia costs and treatment cost-effectiveness.
Knapp, M
2000-01-01
The paper sets out to summarize evidence on the costs of schizophrenia and on the cost-effectiveness of three broad treatment areas. Evidence from a number of countries was examined, both published and unpublished, and systematic reviews and meta-analyses were consulted. The costs of schizophrenia are high and wide-ranging. They fall not only to health-care agencies but also to other parts of the public sector, to families, to sufferers themselves and to the wider society. However, there are interventions--a counselling intervention to address non-compliance with medication, family interventions to reduce levels of expressed emotion, and atypical antipsychotic drugs--that have been found to be not only effective (improving patient outcomes) but also appear to be cost-effective. Resource constraints and policy pressures make it increasingly common for economic as well as clinical questions to be asked about new modes of treatment. This is the new reality of mental health practice. Reliable evidence is now available to address these economic questions and can be factored into decision-making processes.
[Hyponatremia: effective treatment based on calculated outcomes].
Vervoort, G; Wetzels, J F M
2006-09-30
A 78-year-old man was treated for symptomatic hyponatremia. Despite administration of an isotonic NaCl 0.9% solution, plasma sodium remained unchanged due to high concentrations of sodium and potassium in the urine. After infusion of a hypertonic NaCl solution, a satisfactory increase in plasma sodium was reached and symptoms resolved gradually. The hyponatremia was found to be caused by hypothyroidism, which was treated. A 70-year-old female was admitted to the hospital with loss of consciousness and hyponatremia. She was treated initially with a hypertonic NaCl 2.5% solution, which resulted in a steady increase in plasma sodium and a resolution of symptoms. Treatment was changed to an isotonic NaCl 0.9% infusion to attenuate the rise of serum sodium. Nevertheless plasma sodium increased too rapidly due to increased diuresis and reduced urinary sodium and potassium excretion. A slower increase in plasma sodium was achieved by administering a glucose 5% infusion. Hyponatremia is frequently observed in hospitalised patients. It should be treated effectively, and the rate of correction should be adapted to the clinical situation. Effective treatment is determined by calculating changes in effective osmoles and the resulting changes in the distribution of water over extra- and intracellular spaces. Changes in urine production and urinary excretion of sodium and potassium should be taken into account.
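The kind of calculation the authors describe, predicting the change in plasma sodium from the effective osmoles infused and their distribution over body water, is commonly operationalised with the Adrogué–Madias formula; that this specific formula is the paper's exact method is an assumption. A sketch with illustrative numbers:

```python
def predicted_sodium_change(na_infusate, k_infusate, na_serum, tbw_liters):
    """Adrogue-Madias estimate of the change in serum sodium (mmol/L) after
    infusing 1 L of a solution. na_infusate and k_infusate are the sodium and
    potassium concentrations of the infusate; tbw_liters is the estimated
    total body water, and the +1 accounts for the added litre."""
    return (na_infusate + k_infusate - na_serum) / (tbw_liters + 1.0)

# Illustrative (hypothetical patient): 1 L of 3% hypertonic NaCl, which
# contains roughly 513 mmol/L sodium, given at serum sodium 115 mmol/L
# with an estimated TBW of 40 L.
delta = predicted_sodium_change(513.0, 0.0, 115.0, 40.0)
# (513 - 115) / 41 ~ 9.7 mmol/L per litre infused, so only a fraction of a
# litre would be given per day to keep the correction rate safe.
```

Note that, as both cases in the abstract illustrate, ongoing urinary sodium and potassium losses can dominate the balance, so the predicted change must be re-evaluated against measured urine output and electrolytes.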
Odenwald, Michael; Semrau, Peter
2013-03-21
Motivation to change has been proposed as a prerequisite for behavioral change, although empirical results are contradictory. Traumatic experiences are frequently found amongst patients in alcohol treatment, but this has not been systematically studied in terms of effects on treatment outcomes. This study aimed to clarify whether individual Trauma Load explains some of the inconsistencies between motivation to change and behavioral change. Over the course of two months in 2009, 55 patients admitted to an alcohol detoxification unit of a psychiatric hospital were enrolled in this study. At treatment entry, we assessed lifetime Trauma Load and motivation to change. Mode of discharge was taken from patient files following therapy. We tested whether Trauma Load moderates the effect of motivation to change on dropout from alcohol detoxification using multivariate methods. 55.4% dropped out of detoxification treatment, while 44.6% completed the treatment. Age, gender and days in treatment did not differ between completers and dropouts. Patients who dropped out reported more traumatic event types on average than completers. Treatment completers had higher scores in the URICA subscale Maintenance. Multivariate methods confirmed the moderator effect of Trauma Load: among participants with high Trauma Load, treatment completion was related to higher Maintenance scores at treatment entry; this was not true among patients with low Trauma Load. We found evidence that the effect of motivation to change on detoxification treatment completion is moderated by Trauma Load: among patients with low Trauma Load, motivation to change is not relevant for treatment completion; among highly burdened patients, however, who a priori have a greater risk of dropping out, a high motivation to change might make the difference. This finding justifies targeted and specific interventions for highly burdened alcohol patients to increase their motivation to change.
2013-01-01
Background Motivation to change has been proposed as a prerequisite for behavioral change, although empirical results are contradictory. Traumatic experiences are frequently found amongst patients in alcohol treatment, but this has not been systematically studied in terms of effects on treatment outcomes. This study aimed to clarify whether individual Trauma Load explains some of the inconsistencies between motivation to change and behavioral change. Methods Over the course of two months in 2009, 55 patients admitted to an alcohol detoxification unit of a psychiatric hospital were enrolled in this study. At treatment entry, we assessed lifetime Trauma Load and motivation to change. Mode of discharge was taken from patient files following therapy. We tested whether Trauma Load moderates the effect of motivation to change on dropout from alcohol detoxification using multivariate methods. Results 55.4% dropped out of detoxification treatment, while 44.6% completed the treatment. Age, gender and days in treatment did not differ between completers and dropouts. Patients who dropped out reported more traumatic event types on average than completers. Treatment completers had higher scores in the URICA subscale Maintenance. Multivariate methods confirmed the moderator effect of Trauma Load: among participants with high Trauma Load, treatment completion was related to higher Maintenance scores at treatment entry; this was not true among patients with low Trauma Load. Conclusions We found evidence that the effect of motivation to change on detoxification treatment completion is moderated by Trauma Load: among patients with low Trauma Load, motivation to change is not relevant for treatment completion; among highly burdened patients, however, who a priori have a greater risk of dropping out, a high motivation to change might make the difference. This finding justifies targeted and specific interventions for highly burdened alcohol patients to increase their motivation to change.
Modern Mechanical Surface Treatment: States, Stability, Effects
NASA Astrophysics Data System (ADS)
Schulze, Volker
2003-05-01
The only comprehensive, systematic comparison of major mechanical surface treatments, their effects, and the resulting material properties. The result is an up-to-date, full review of this topic, collating the knowledge hitherto spread throughout many original papers. The book begins with a description of elementary processes and mechanisms to give readers an easy introduction, before proceeding to offer systematic, detailed descriptions of the various techniques and three very important types of loading: thermal, quasistatic, and cyclic loading. It combines and correlates experimental and model aspects, while supplying in-depth explanations of the mechanisms and a wealth of exemplary data.
The Average of Rates and the Average Rate.
ERIC Educational Resources Information Center
Lindstrom, Peter
1988-01-01
Defines arithmetic, harmonic, and weighted harmonic means, and discusses their properties. Describes the application of these properties in problems involving fuel economy estimates and average rates of motion. Gives example problems and solutions. (CW)
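The distinction the abstract draws can be made concrete with a short sketch (hypothetical speeds): over equal distances, the average rate is the harmonic mean of the individual rates, not their arithmetic mean.

```python
# A trip covers the same distance at each speed (hypothetical values, mph).
speeds = [30.0, 60.0]

# Arithmetic mean of the rates -- NOT the average rate for the trip.
arithmetic_mean = sum(speeds) / len(speeds)            # 45.0

# Harmonic mean -- the true average rate over equal distances.
harmonic_mean = len(speeds) / sum(1.0 / v for v in speeds)  # 40.0

# Check against first principles: 1 mile at each speed.
distance = 2.0                               # miles
time = sum(1.0 / v for v in speeds)          # hours (1 mile at each speed)
assert abs(distance / time - harmonic_mean) < 1e-9
```

The same harmonic-mean structure appears in the fuel economy problems the abstract mentions: miles per gallon over equal distances averages harmonically, not arithmetically.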
Initial Conditions in the Averaging Cognitive Model
ERIC Educational Resources Information Center
Noventa, S.; Massidda, D.; Vidotto, G.
2010-01-01
The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…
Effects of repeated ivermectin treatment in onchocerciasis.
Njoo, F L; Stilma, J S; van der Lelij, A
1992-01-01
A group of 87 onchocerciasis patients from a hyperendemic area without vector control was treated with a single dose of 150 micrograms/kg ivermectin. A second, third and fourth dose was administered 5, 12 and 17 months later to 44, 35 and 25 patients. Skin snip loads reduced substantially following each consecutive dose. However, after three doses 44% of the patients remained skin snip positive. Side-effects decreased from 32.2% requiring medical treatment at the first dose to none after the fourth dose. From this study it was concluded that a complete eradication of microfilariae in skin snips in severely infected persons living in a hyperendemic area without vector control is probably not feasible. Medical supervision for the observation of side-effects after the third dose can be limited.
The Hubble rate in averaged cosmology
Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com
2011-03-01
The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H_0, the Hubble rate today. Both the mean Hubble rate and its variance depend on the definition of the Hubble rate and on the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.
Effects of bacterial treatments on wood extractives.
Kallioinen, Anne; Vaari, Anu; Rättö, Marjaana; Konn, Jonas; Siika-aho, Matti; Viikari, Liisa
2003-06-12
Bacterial strains were isolated from spruce wood chips and their ability to reduce the content of wood extractives was studied. Strains were screened by cultivation on liquid media containing wood extractives as the major nutrient. Some bacterial species could decrease remarkably the amount of extractives in the liquid media and reduced the amount of triglycerides, steryl esters and total extractives by 100, 20 and 39%, respectively. Spruce wood chips were treated in controlled conditions with selected bacteria to test their effects on the chips. All the bacteria grew well on wood chips. The effect of bacterial metabolism on wood extractives was significant. Bacterial treatments reduced the amount of lipophilic extractives by 16-38% in 1 week of treatment and up to 67% in 2 weeks. The most efficient strain removed 90, 66 and 50% of triglycerides, steryl esters and resin acids, respectively, in 2 weeks. These results indicate that bacteria may be promising agents for the removal of extractives for improved pulping and papermaking processes.
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical applications.
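The two averaging operations described, a running time average accumulated during the fine-scale simulation followed by a volume average onto a coarser mesh, can be sketched in one dimension. The "flow" samples below are synthetic stand-ins, not a DNS, and the coarsening factor is illustrative.

```python
import numpy as np

def running_time_average(avg, new_sample, n):
    """Incremental running mean, updated once per time step of the
    fine-scale simulation (n is the 1-based step count)."""
    return avg + (new_sample - avg) / n

def volume_average(field, factor):
    """Volume-average a 1D time-averaged field onto a mesh coarsened by
    `factor` (leftover cells beyond a whole block are dropped)."""
    n = field.size // factor * factor
    return field[:n].reshape(-1, factor).mean(axis=1)

# Synthetic fine-scale samples: a smooth profile plus fluctuations.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 64)
fine = np.zeros(64)
for n in range(1, 201):                  # running time average over 200 steps
    sample = np.sin(x) + rng.normal(scale=0.3, size=64)
    fine = running_time_average(fine, sample, n)

coarse = volume_average(fine, 4)         # transfer to a 4x coarser mesh
```

In the full DMA method, the correlations generated by these averaging operations (not shown here) are evaluated from the fine-scale field and passed to the coarser-mesh computation as source terms.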
Post-fire treatment effectiveness for hillslope stabilization
Peter R. Robichaud; Louise E. Ashmun; Bruce D. Sims
2010-01-01
This synthesis of post-fire treatment effectiveness reviews the past decade of research, monitoring, and product development related to post-fire hillslope emergency stabilization treatments, including erosion barriers, mulching, chemical soil treatments, and combinations of these treatments. In the past ten years, erosion barrier treatments (contour-felled logs and...
Light propagation in the averaged universe
Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de
2014-10-01
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
Effect of Amblyopia Treatment on Macular Thickness in Eyes With Myopic Anisometropic Amblyopia.
Pang, Yi; Frantz, Kelly A; Block, Sandra; Goodfellow, Geoffrey W; Allison, Christine
2015-04-01
To determine whether abnormal macular thickness in myopic anisometropic amblyopia differed after amblyopia treatment, and to investigate whether the effect of treatment on macular thickness was associated with subject age or improvement in stereoacuity. Seventeen children (mean age: 9.0 [±3.0] years, ranging from 5.7-13.9 years) with myopic anisometropic amblyopia (visual acuity [VA] in amblyopic eyes: 20/80-20/400) were recruited and treated with 16 weeks of refractive correction, followed by an additional 16 weeks of refractive correction and patching. Macular thickness, best-corrected VA, and stereoacuity were measured both before and after amblyopia treatment. Factorial repeated-measures analysis of variance was performed to determine whether macular thickness in amblyopic eyes changed after amblyopia treatment. Mean baseline VA in the amblyopic eye was 1.0 ± 0.3 logMAR and improved to 0.7 ± 0.3 after amblyopia treatment (P < 0.0001). The interaction between eye and amblyopia treatment was statistically significant for average foveal thickness (P = 0.040). There was no treatment effect on fellow eyes (P = 0.245); however, the average foveal thickness in the amblyopic eye was significantly reduced after amblyopia treatment (P = 0.049). No statistically significant interactions were found for the other macular thickness parameters (P > 0.05). The abnormal central macula associated with myopic anisometropic amblyopia tended to be thinner following amblyopia treatment, with no significant changes in peripheral macular thickness.
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.
Prostate Cancer Treatments Have Varying Side Effects, Study Shows
The long-term side effects of different prostate cancer treatments vary, and knowing that may help men ... (https://medlineplus.gov/news/fullstory_164200.html)
NASA Technical Reports Server (NTRS)
Conrad, G. W.; Stephens, A. P.; Conrad, A. H.; Spooner, B. S. (Principal Investigator)
1993-01-01
Fertilized eggs of Ilyanassa obsoleta Stimpson were collected immediately after their deposition in egg capsules. Unopened egg capsules then were affixed to glass slides, and incubated either statically (controls) or on a clinostat (experimentals). After incubation for 9-14 days, hatching occurred sooner and in a higher percentage of clinostated capsules than in controls. Embryos that hatched while undergoing clinostat incubation were abnormal in morphology, whereas other embryos present in non-hatched capsules in the same tubes appeared normal, as did embryos in the control tubes. Although the results are compatible with a conclusion that vector-averaged gravity in the experimental tubes caused the altered development, some other aspects of how the incubations were done may have contributed to the differences between the control and experimental results.
NASA Astrophysics Data System (ADS)
Jadhav, Nitin A.; Singh, Pramod K.; Rhee, Hee Woo; Bhattacharya, Bhaskar
2014-10-01
Mesoporous ZnO nanoparticles have been synthesized with a tremendous increase in specific surface area, up to 578 m2/g compared with 5.54 m2/g in previous reports (J. Phys. Chem. C 113:14676-14680, 2009). Different mesoporous ZnO nanoparticles with average pore sizes ranging from 7.22 to 13.43 nm and specific surface areas ranging from 50.41 to 578 m2/g were prepared through the sol-gel method via a simple evaporation-induced self-assembly process. The hydrolysis rate of zinc acetate was varied using different concentrations of sodium hydroxide. Morphology, crystallinity, porosity, and J-V characteristics of the materials have been studied using transmission electron microscopy (TEM), X-ray diffraction (XRD), BET nitrogen adsorption/desorption, and Keithley instruments.
Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L
2014-08-01
Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (Lc) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations, the algorithm mathematically demonstrates accuracy in determining the Lc and MDC when a nearest-neighbor averaging (NNA) technique is used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the Lc and MDC) for high-density data maps that are obtained by scanning radiological survey instruments.
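The statistical idea behind the sensitivity gain can be sketched in a few lines. The following is an illustration of why neighbor averaging lowers a critical level, not ERG's 102F algorithm; the grid size, smoothing kernel, and the simple proxy L_c = k * sigma of the background map are all assumptions made for the example:

```python
# Toy sketch: averaging each scan cell with its nearest neighbors
# shrinks the spread of the background noise; since the critical level
# scales with that spread (here L_c = k * sigma for a chosen
# false-positive rate k), NNA-smoothed maps get a lower L_c and MDC.
import random
import statistics

def nna_smooth(grid):
    """Replace each cell by the mean of itself and its in-bounds
    4-neighbors (a simple nearest-neighbor averaging kernel)."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [grid[r][c]]
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                if 0 <= r + dr < rows and 0 <= c + dc < cols:
                    vals.append(grid[r + dr][c + dc])
            out[r][c] = sum(vals) / len(vals)
    return out

def critical_level(grid, k=1.645):
    """L_c proxy: k times the standard deviation of the background map."""
    flat = [v for row in grid for v in row]
    return k * statistics.pstdev(flat)

random.seed(42)  # synthetic background counts, mean 10, sigma 2
background = [[random.gauss(10.0, 2.0) for _ in range(20)] for _ in range(20)]
lc_raw = critical_level(background)
lc_nna = critical_level(nna_smooth(background))
```

Because the smoothing is mean-preserving, real extended contamination survives it while independent cell-to-cell noise is suppressed, which is the high-density-map regime the abstract describes.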
The Effects of Positive Patient Testimonials on PTSD Treatment Choice
Pruitt, Larry D.; Zoellner, Lori A.; Feeny, Norah C.; Caldwell, Daniel; Hanson, Robert
2012-01-01
Despite the existence of effective treatment options for PTSD, these treatments are failing to reach those who stand to benefit from them. Understanding the processes underlying an individual's treatment-seeking behavior holds the potential for reducing treatment-seeking barriers. The current study investigates the effects that positive treatment testimonials have on decisions regarding PTSD treatment. An undergraduate (N = 439) and a trauma-exposed community (N = 203) sample were provided with videotaped treatment rationales for prolonged exposure (PE) and sertraline treatments of PTSD. Half of each sample also viewed testimonials detailing a fictional patient's treatment experience. All participants then chose among treatment options and rated the credibility of, and personal reactions toward, those options. Among treatment-naïve undergraduates, testimonials increased the proportion choosing PE alone; and among treatment-naïve members of the trauma-exposed community sample, testimonials increased the proportion choosing a combined PE plus sertraline treatment. These effects were not observed for those with a prior history of either psychotherapeutic or pharmacological treatment. Major barriers exist that prevent individuals with PTSD from seeking treatment. For a critical unreached sample, those who are treatment naïve, positive patient testimonials offer a mechanism by which to make effective treatments more appealing and accessible. PMID:23103234
Imai, Y; Abe, K; Nishiyama, A; Sekino, M; Yoshinaga, K
1997-12-01
We evaluated the effect of barnidipine, a dihydropyridine calcium antagonist, administered once daily in the morning in a dose of 5, 10, or 15 mg, on ambulatory blood pressure (BP) in 34 patients (51.3+/-9.6 years). Hypertension was diagnosed based on the clinic BP. The patients were classified into groups according to the ambulatory BP: group 1, dippers with true hypertension; group 2, nondippers with true hypertension; group 3, dippers with false hypertension; and group 4, nondippers with false hypertension. Barnidipine reduced the clinic systolic BP (SBP) and diastolic BP (DBP) in all groups and significantly reduced the average 24 h ambulatory BP (133.0+/-16.5/90.7+/-12.3 mm Hg v 119.7+/-13.7/81.8+/-10.3 mm Hg, P < .0001 for both SBP and DBP). Barnidipine significantly reduced the daytime ambulatory SBP in groups 1, 2, and 3, but not in group 4, and significantly reduced daytime ambulatory DBP in group 1 but not in groups 2, 3, and 4. Barnidipine significantly reduced the nighttime ambulatory SBP only in group 2 and the nighttime ambulatory DBP in groups 2 and 4. Once-a-day administration of barnidipine influenced 24 h BP in true hypertensives (the ratio of the trough to peak effect > 50%), but had minimal effect on low BP, such as the nocturnal BP in dippers and the ambulatory BP in false hypertensives. These findings suggest that barnidipine can be used safely in patients with isolated clinic ("white coat") hypertension and in those with dipping patterns of circadian BP variation whose nocturnal BP is low before treatment.
Treatment effect of TUSPLV on recurrent varicocele
Yan, Tian-Zhong; Wu, Xiao-Qiang; Wang, Zhi-Wei
2017-01-01
The aim of the study was to analyze the treatment effect of transumbilical single-port laparoscopic varicocelectomy (TUSPLV) on recurrent varicocele (VC). In order to compare the surgical effects of TUSPLV to traditional retroperitoneal ligation of the internal spermatic vein, 64 patients with recurrent VC were enrolled and divided into the control group (n=30) and the observation group (n=34). Patients in the control group underwent surgery using traditional retroperitoneal ligation of the internal spermatic vein, while those in the observation group underwent surgery using TUSPLV. The results showed that the time of operation and bleeding volume in the observation group were significantly lower. The occurrence and recurrence rates of periprocedural complications were considerably lower in the observation group. Differences were statistically significant (P<0.05). In terms of the pregnancy rate, the difference between the 2 groups had no statistical significance (P>0.05). We concluded that employing TUSPLV to treat recurrent VC was safe and effective. PMID:28123466
Cost-effectiveness of Family-Based Obesity Treatment.
Quattrin, Teresa; Cao, Ying; Paluch, Rocco A; Roemmich, James N; Ecker, Michelle A; Epstein, Leonard H
2017-09-01
We translated family-based behavioral treatment (FBT) to treat children with overweight and obesity and their parents in the patient-centered medical home. We reported greater reductions in child and parent weight at 6 and 24 months compared with an attention-controlled information control (IC) group. This article reports the cost-effectiveness of long-term weight change for FBT compared with IC. Ninety-six children 2 to 5 years of age with overweight or obesity, and with parents who had a BMI ≥25, were randomly assigned to FBT or IC, and both received diet and activity education (12-month treatment and 12-month follow-up). Weight loss and cost-effectiveness were assessed at 24 months. Intention-to-treat, completers, and sensitivity analyses were performed. The average societal cost per family was $1629 for the FBT and $886 for the IC groups at 24 months. At 24 months, child percent over BMI (%OBMI) decreased by 2.0 U in the FBT group versus an increase of 4.4 U in the IC group. Parents lost 6.0 vs 0.2 kg at 24 months in the FBT and IC groups, respectively. The incremental cost-effectiveness ratios (ICERs) for child and parent %OBMI were $116.1 and $83.5 per U of %OBMI, respectively. Parental ICERs were also calculated for BMI and body weight, and were $128.1 per BMI unit and $353.8 per kilogram, respectively. ICER values for child %OBMI were similar in the intention-to-treat group ($116.1 per 1-U decrease) compared with completers ($114.3). For families consisting of children and parents with overweight, FBT presents a more cost-effective alternative than an IC group. Copyright © 2017 by the American Academy of Pediatrics.
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego than over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
Evaluations of average level spacings
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables.
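The missing-level correction that underlies such fits can be sketched compactly. This is a minimal illustration of the standard Porter-Thomas argument, not Liou's full fitting procedure; the function names and the choice of threshold units are assumptions for the example:

```python
# Reduced neutron widths follow the Porter-Thomas distribution
# (chi-square with 1 degree of freedom), so the fraction of levels
# whose width falls below a detection threshold t (in units of the
# average width) is the chi^2_1 CDF, erf(sqrt(t/2)). Dividing the
# observed level count by the detected fraction estimates the true
# count, and hence the average level spacing <D>.
import math

def porter_thomas_cdf(t):
    """P(width/<width> < t) for the Porter-Thomas (chi^2, 1 dof) law."""
    return math.erf(math.sqrt(t / 2.0))

def average_spacing(energy_range_ev, n_observed, threshold):
    """Estimate <D> after correcting for levels missed below threshold."""
    detected_fraction = 1.0 - porter_thomas_cdf(threshold)
    n_true = n_observed / detected_fraction
    return energy_range_ev / n_true
```

For example, a threshold of one tenth of the average width already hides roughly a quarter of the levels, so the corrected spacing is noticeably smaller than the naive range-over-count estimate.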
Vibrational averages along thermal lines
NASA Astrophysics Data System (ADS)
Monserrat, Bartomeu
2016-01-01
A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.
Effects of sonochemical treatment on meteoritic nanodiamonds
NASA Astrophysics Data System (ADS)
Fisenko, Anatolii V.; Verchovsky, Sasha B.; Shiryaev, Andrei A.; Semjonova, Luba F.
2017-01-01
A nanodiamond-rich fraction (NDF) separated from the Orgueil meteorite was subjected to a high-intensity ultrasonic treatment in a weakly acidic aqueous solution. After sedimentation by centrifugation, two fractions of grains (suspension, designated as OD7C, and sediment, designated as OD7D) with different properties were obtained. The following effects of the sonication were revealed from comparison of the contents and isotope compositions of C, N, and Xe released during stepped pyrolysis and combustion of the fractions OD7C and OD7D, the initial NDF, and two grain-size fractions (OD10 and OD15) produced without sonication: (a) the surface layer of the sonicated diamond grains is modified to a different extent than that of nontreated ones, (b) in some grains, concentrations of bulk N and Xe are reduced significantly, and (c) nondiamond nitrogen-containing phases (e.g., Si3N4) have been destroyed. It is suggested that the combined effects of the sonication and centrifugation observed for the fractions OD7C and OD7D are due to differences in the surface chemistry of the nanodiamond grains, which statistically influences the behavior of nanoparticles during the sonication, resulting in their preferential modification in the different reaction zones of the cavitating fluid.
George, David L.; Iverson, Richard M.
2014-01-01
We evaluate a new depth-averaged mathematical model that is designed to simulate all stages of debris-flow motion, from initiation to deposition. A companion paper shows how the model’s five governing equations describe simultaneous evolution of flow thickness, solid volume fraction, basal pore-fluid pressure, and two components of flow momentum. Each equation contains a source term that represents the influence of state-dependent granular dilatancy. Here we recapitulate the equations and analyze their eigenstructure to show that they form a hyperbolic system with desirable stability properties. To solve the equations we use a shock-capturing numerical scheme with adaptive mesh refinement, implemented in an open-source software package we call D-Claw. As tests of D-Claw, we compare model output with results from two sets of large-scale debris-flow experiments. One set focuses on flow initiation from landslides triggered by rising pore-water pressures, and the other focuses on downstream flow dynamics, runout, and deposition. D-Claw performs well in predicting evolution of flow speeds, thicknesses, and basal pore-fluid pressures measured in each type of experiment. Computational results illustrate the critical role of dilatancy in linking coevolution of the solid volume fraction and pore-fluid pressure, which mediates basal Coulomb friction and thereby regulates debris-flow dynamics.
Effect of mastication and other mechanical treatments on fuel structure in chaparral
Brennan, Teresa J.; Keeley, Jon E.
2015-01-01
Mechanical fuel treatments are a common pre-fire strategy for reducing wildfire hazard that alters fuel structure by converting live canopy fuels to a compacted layer of dead surface fuels. Current knowledge concerning their effectiveness, however, comes primarily from forest-dominated ecosystems. Our objectives were to quantify and compare changes in shrub-dominated chaparral following crushing, mastication, re-mastication and mastication-plus-burning treatments, and to assess treatment longevity. Results from analysis of variance (ANOVA) identified significant differences in all fuel components by treatment type, vegetation type and time since treatment. Live woody fuel components of height, cover and mass were positively correlated with time since treatment, whereas downed woody fuel components were negatively correlated. Herbaceous fuels, conversely, were not correlated, and exhibited a 5-fold increase in cover across treatment types in comparison to controls. Average live woody fuel recovery was 50% across all treatment and vegetation types. Differences in recovery between time-since-treatment years 1–8 ranged from 32–65% and exhibited significant positive correlations with time since treatment. These results suggest that treatment effectiveness is short term due to the rapid regrowth of shrubs in these systems and is compromised by the substantial increase in herbaceous fuels. Consequences of not having a full understanding of these treatments are serious and leave concern for their widespread use on chaparral-dominated landscapes.
Total pressure averaging in pulsating flows
NASA Technical Reports Server (NTRS)
Krause, L. N.; Dudzinski, T. J.; Johnson, R. C.
1972-01-01
A number of total-pressure tubes were tested in a non-steady flow generator in which the fraction of the period during which pressure is at a maximum is approximately 0.8, thereby simulating turbomachine-type flow conditions. Most of the tubes indicated a pressure which was higher than the true average. Organ-pipe resonance, which further increased the indicated pressure, was encountered within the tubes at discrete frequencies. There was no obvious combination of tube diameter, length, and/or geometry variation used in the tests which resulted in negligible averaging error. A pneumatic-type probe was found to measure true average pressure, and is suggested as a comparison instrument to determine whether nonlinear averaging effects are serious in unknown pulsation profiles. The experiments were performed at a pressure level of 1 bar, for Mach numbers up to near 1, and frequencies up to 3 kHz.
Radial averages of astigmatic TEM images.
Fernando, K Vince
2008-10-01
The Contrast Transfer Function (CTF) of an image, which modulates images taken from a Transmission Electron Microscope (TEM), is usually determined from the radial average of the power spectrum of the image (Frank, J., Three-dimensional Electron Microscopy of Macromolecular Assemblies, Oxford University Press, Oxford, 2006). The CTF is primarily defined by the defocus. If the defocus estimate is accurate enough, then it is possible to demodulate the image, which is popularly known as the CTF correction. However, it is known that the radial average is somewhat attenuated if the image is astigmatic (see Fernando, K.V., Fuller, S.D., 2007. Determination of astigmatism in TEM images. Journal of Structural Biology 157, 189-200), but this distortion due to astigmatism has not been fully studied or understood up to now. We have discovered the exact mathematical relationship between the radial averages of TEM images with and without astigmatism. This relationship is determined by a zeroth-order Bessel function of the first kind, and hence we can exactly quantify this distortion in the radial averages of signal and power spectra of astigmatic images. The argument to this Bessel function is similar to an aberration function (without the spherical aberration term), except that the defocus parameter is replaced by the difference of the defoci in the major and minor axes of astigmatism. The ill effects due to this Bessel function are twofold. Since the zeroth-order Bessel function is a decaying oscillatory function, it introduces additional zeros to the radial average and it also attenuates the CTF signal in the radial averages. Using our analysis, it is possible to simulate the effects of astigmatism in radial averages by imposing Bessel functions on idealized radial averages of images which are not astigmatic. We validate our theory using astigmatic TEM images.
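The decaying-oscillatory behaviour invoked above is easy to check numerically. The sketch below only evaluates J0 itself (from its integral representation) to show the properties the abstract relies on; it is not the authors' CTF model, and the quadrature step count is an arbitrary choice:

```python
# The zeroth-order Bessel function of the first kind, J0, which
# modulates the radial average of an astigmatic image: it starts at 1,
# oscillates with decaying amplitude (hence the attenuation), and has
# zeros (the first near x = 2.4048) that add spurious zeros to the
# radial average. Computed via the integral representation
# J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt.
import math

def bessel_j0(x, steps=2000):
    """J0 via the trapezoidal rule on its integral representation."""
    h = math.pi / steps
    total = 0.5 * (math.cos(x * math.sin(0.0)) +
                   math.cos(x * math.sin(math.pi)))
    for k in range(1, steps):
        total += math.cos(x * math.sin(k * h))
    return total * h / math.pi
```

Multiplying an idealized radial average by this function, as the abstract suggests, reproduces both effects: extra zero crossings and amplitude attenuation growing with the defocus difference between the astigmatic axes.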
Polyhedral Painting with Group Averaging
ERIC Educational Resources Information Center
Farris, Frank A.; Tsao, Ryan
2016-01-01
The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…
Averaging inhomogeneous cosmologies - a dialogue
NASA Astrophysics Data System (ADS)
Buchert, T.
The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (RED) advocating the standard model, the other (GREEN) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by BLUE.
Del Greco, M; Nollo, G; Disertori, M; Sanna, G; Maggioni, A P; Santoro, E; Tarantino, F; Della Mea, M T; Antolini, R; Micciolo, R
1996-01-01
To evaluate the influence of different filtering techniques on the measurement of ventricular late potentials (VLP), the Sottoprogetto Aritmie of GISSI-3 collected signal-averaged ECG (SAECG) from 647 patients. Data were recorded after myocardial infarction (10 +/- 4 days) in 20 Italian Coronary Units. Three main filtering algorithms were used in the different commercial devices: Bidirectional Filter (ART, Aerotel, Fidelity Medical) (BF: 340 patients), Spectral Filter (Marquette) (SF: 258 patients), and Del Mar Filter (Del Mar Avionics) (DF: 49 patients). QRS duration (QRSD), low-amplitude signal duration (LAS40), and root-mean-square voltage (RMS40) were measured with the various filters set at 40-250 Hz high- and low-pass frequencies. After correction for clinical variables, the measurements of VLP in the three groups differed. The QRSD value obtained by BF (100.6 +/- 13 ms) was shorter than that obtained by SF (109.1 +/- 12 ms). No differences were found in LAS40 and RMS40 values between SF and BF, while DF gave longer LAS40 and lower RMS40 than SF and BF. Residual noise was lower in BF (0.3 +/- 0.1 muV) than in SF and DF (0.5 +/- 0.1 muV). Applying standard criteria, DF gave a higher prevalence of VLP (48.9%) than the BF (23.8%) and SF (19%) groups. This study demonstrates that the use of different filters produces discordant results in VLP measurements. For correct application of SAECG analysis in risk stratification after myocardial infarction, normal and abnormal values must be specifically established for the different filter techniques.
Kienapfel, K
2015-02-01
Knowledge of muscle activity in common head-neck positions (HNPs) is a necessary precondition for making judgements on HNPs. The aim of the study was to record the surface electromyography activities of important muscles of the horse's neck in various HNPs. The electrical activities of the m. splenius, m. brachiocephalicus, and m. trapezius were recorded on both sides. Five horses, both with and without a rider, were examined in all three gaits on both reins in three different HNPs: a 'free' position, a 'gathered' (head higher, neck more flexed) position with the noseline in front of the vertical, and a 'hyperflexed' position. Averages of ten consecutive gait cycles in each HNP were evaluated and compared by standard statistical methods. No difference between ridden and unridden horses could be detected. The m. brachiocephalicus was significantly (p < 0.01) more active in the hyperflexed position in all gaits than in the gathered and free positions, which were not significantly different. By contrast, the m. splenius was less active in the hyperflexed position than in the free position (p < 0.02), in which it always showed the highest activity. In walking, the muscle activities in the free and gathered positions deviated significantly (p < 0.01). The m. trapezius was significantly less active in the hyperflexed position during walking than in the free (p < 0.01) and gathered (p < 0.01) positions, with the strongest activities in the free position. Again, the free and gathered positions differed significantly (p < 0.01). In trot, the same pattern occurred, although the gathered and hyperflexed positions did not differ significantly. In canter, the activities of the m. trapezius showed no differences between HNPs. In HNPs with the noseline in front of the vertical, the muscles of the topline (m. splenius, m. trapezius) are activated and trained. In the hyperflexed position, however, a major muscle of the lower neckline (m. brachiocephalicus) is activated and trained.
Quality of Life and Cost Effectiveness of Prostate Cancer Treatment
2005-03-01
AD Award Number: W81XWH-04-1-0257. TITLE: Quality of Life and Cost Effectiveness of Prostate Cancer Treatment. PRINCIPAL INVESTIGATOR: Ravishankar... (1) ... patients across two ethnic groups; (2) analyze and compare short- and long-term cost-effectiveness of prostate cancer treatment across ethnic groups; and (3) analyze and compare resource utilization patterns, treatment modalities ...
Gondek, Dawid; Edbrooke-Childs, Julian; Fink, Elian; Deighton, Jessica; Wolpert, Miranda
2016-05-01
Due to recent increases in the use of feedback from outcome measures in mental health settings, we systematically reviewed evidence regarding the impact of feedback from outcome measures on treatment effectiveness, treatment efficiency, and collaborative practice. In over half of 32 studies reviewed, the feedback condition had significantly higher levels of treatment effectiveness on at least one treatment outcome variable. Feedback was particularly effective for not-on-track patients or when it was provided to both clinicians and patients. The findings for treatment efficiency and collaborative practice were less consistent. Given the heterogeneity of studies, more research is needed to determine when and for whom feedback is most effective.
Psychological mechanisms of effective cognitive-behavioral treatments for PTSD.
Zalta, Alyson K
2015-04-01
Several psychotherapies have been established as effective treatments for posttraumatic stress disorder (PTSD) including prolonged exposure, cognitive processing therapy, and cognitive therapy for PTSD. Understanding the key mechanisms of these treatments, i.e., how these treatments lead to therapeutic benefits, will enable us to maximize the efficacy, effectiveness, and efficiency of these therapies. This article provides an overview of the theorized mechanisms for each of these treatments, reviews the recent empirical evidence on psychological mechanisms of these treatments, discusses the ongoing debates in the field, and provides recommendations for future research. Few studies to date have examined whether changes in purported treatment mechanisms predict subsequent changes in treatment outcomes. Future clinical trials examining treatments for PTSD should use study designs that enable researchers to establish the temporal precedence of change in treatment mechanisms prior to symptom reduction. Moreover, further research is needed that explores the links between specific treatment components, underlying change mechanisms, and treatment outcomes.
Average power meter for laser radiation
NASA Astrophysics Data System (ADS)
Shevnina, Elena I.; Maraev, Anton A.; Ishanin, Gennady G.
2016-04-01
Advanced metrology equipment, in particular an average power meter for laser radiation, is necessary for the effective use of laser technology. In this paper we propose a measurement scheme with periodic scanning of a laser beam. The scheme is implemented in a pass-through average power meter that can continuously monitor the laser during operation in pulsed or continuous-wave mode without interrupting that operation. The detector used in the device is based on the thermoelastic effect in crystalline quartz, as it has a fast response, long-term stability of sensitivity, and an almost uniform sensitivity dependence on wavelength.
The effectiveness of group treatment for female adult incest survivors.
Brown, Donalee; Reyes, Sonia; Brown, Brienne; Gonzenbach, Meredith
2013-01-01
Very few clinicians receive training in the treatment of sexual abuse, yet during their careers many will encounter victims of sexual abuse. This article discusses the incidence of child sexual abuse, defines incest, and discusses treatment options. A review of group treatment is presented, with documented results providing support for the effectiveness of the group treatment process.
ERIC Educational Resources Information Center
Radunzel, Justine; Noble, Julie
2013-01-01
In this study, we evaluated the differential effects on racial/ethnic, family income, and gender groups of using ACT® College Readiness Assessment Composite score and high school grade point average (HSGPA) for predicting long-term college success. Outcomes included annual progress towards a degree (based on cumulative credit-bearing hours…
Understanding Treatment Effect Terminology in Pain and Symptom Management Research.
Garrido, Melissa M; Dowd, Bryan; Hebert, Paul L; Maciejewski, Matthew L
2016-09-01
Within health services and medical research, there is a wide variety of terminology related to treatment effects. Understanding differences in types of treatment effects is especially important in pain and symptom management research where nonexperimental and quasiexperimental observational data analysis is common. We use the example of a palliative care consultation team leader considering implementation of a medication reconciliation program and a care-coordination intervention reported in the literature to illustrate population-level and conditional treatment effects and to highlight the sensitivity of values of treatment effects to sample selection and treatment assignment. Our goal is to facilitate appropriate reporting and interpretation of study results and to help investigators understand what information a decision maker needs when deciding whether to implement a treatment. Greater awareness of the reasons why treatment effects may differ across studies of the same patients in the same treatment settings can help policy makers and clinicians understand to whom a study's results may be generalized.
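The distinction drawn here between population-level and conditional treatment effects can be made concrete with a small sketch. Assuming hypothetical outcomes, treatment indicators, and known propensity scores (none of which appear in the article), an inverse-probability-weighted estimate of the average treatment effect might look like:

```python
def ipw_ate(y, t, e):
    """Inverse-probability-weighted estimate of the average treatment
    effect E[Y(1)] - E[Y(0)]: each unit's outcome is weighted by the
    inverse probability of the treatment it actually received."""
    n = len(y)
    treated = sum(yi * ti / ei for yi, ti, ei in zip(y, t, e)) / n
    control = sum(yi * (1 - ti) / (1 - ei) for yi, ti, ei in zip(y, t, e)) / n
    return treated - control

# Toy data: constant (hypothetical) propensity score of 0.5
y = [3, 5, 2, 4, 1, 3]   # outcomes
t = [1, 1, 0, 1, 0, 0]   # treatment indicators
e = [0.5] * 6            # propensity scores
effect = ipw_ate(y, t, e)  # → 2.0
```

With a constant propensity score the estimator reduces to the simple difference in group means, which is why the toy effect comes out to exactly 2.0.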
Safe and effective treatment of seborrheic dermatitis.
Elewski, Boni E
2009-06-01
Seborrheic dermatitis is a common chronic inflammatory skin disorder that can vary in presentation from mild dandruff to dense, diffuse, adherent scale. The disorder occurs throughout the world without racial or geographic predominance; it is more common in males than females. Its precise etiology remains unknown, but the condition is strongly associated with lipophilic Malassezia yeasts found among the normal skin flora and represents a cofactor linked to several risk factors, including T-cell depression, increased sebum levels, and activation of the alternative complement pathway. The goal of treatment is symptom control, with an emphasis on the importance of maintaining patient adherence to therapy to achieve low rates of recurrence. Available therapies include corticosteroids, antifungal agents, immunomodulators, and medicated keratolytic shampoos. Although corticosteroids are associated with recurrence, they sometimes may be recommended in combination with antifungal agents. Antifungal therapy is considered primary, but some agents are more effective than others because of their favorable pharmacokinetic profiles, high rates of absorption, anti-inflammatory and antipruritic properties, and vehicle.
[The treatment effects analysis of 164 patients with sudden sensorineural hearing loss].
Zhang, Wei; Xie, Wen; Xu, Hong; Liu, Yuehui
2015-05-01
To explore effective treatments for sudden sensorineural hearing loss and the factors affecting its prognosis, the clinical data and follow-up results of 164 patients with sudden sensorineural hearing loss were analyzed retrospectively. All 164 patients were given intravenous vasodilators, neurotrophic drugs, oral prednisone, and intratympanic dexamethasone injections. Patients were divided into low-frequency, intermediate-frequency, high-frequency, and all-frequency hearing loss groups and a total deafness group. Pure-tone hearing threshold tests were performed before and 3 months after treatment, and average air-conduction hearing at the damaged frequencies was compared before and after treatment for all patients and for each group. Gender, age, disease course, audiogram curve type, impaired frequencies before treatment, and the presence or absence of vertigo were analyzed. Hearing improved after treatment in all groups; the overall treatment efficiency was 46.3%, and low-frequency hearing improved more than high-frequency hearing. Age, disease course, impaired frequencies before treatment, and the presence or absence of vertigo were independent prognostic factors. On the basis of regular treatment, oral and intratympanic glucocorticoid therapy is safe and effective for sudden hearing loss; the prognosis is closely related to age, disease course, audiogram curve type, impaired hearing before treatment, and the presence or absence of vertigo.
ERIC Educational Resources Information Center
Richards, Nancy; Smith, Manuel J.
Project STAR (Social Thinking and Reasoning Program) is a classroom-based social skills program for students in grades 5-8. To assess the long-term effectiveness of this program, students who participated in the project (N=331) were compared with control students (N=191) during 1980-83. The hypothesis that there are significant differences in current…
ERIC Educational Resources Information Center
Maheady, Larry; And Others
1983-01-01
Results indicated that extrinsic rewards improved students' test performances significantly more than no rewards or feedback reward conditions. These improvements in performance were noted for all students under extrinsic reward conditions, thereby extending the effectiveness of these procedures across IQ levels. (Author/CL)
ERIC Educational Resources Information Center
Yannucci, Michael J.
2014-01-01
The purpose of this study was to investigate school administrators' perceptions of teachers' adherence to the highly effective critical attributes of the four domains of Charlotte Danielson's "Framework for Teaching" (Planning and Preparation, The Classroom Environment, Instruction, and Professional Responsibilities) in kindergarten…
Instrument to average 100 data sets
NASA Technical Reports Server (NTRS)
Tuma, G. B.; Birchenough, A. G.; Rice, W. J.
1977-01-01
An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points, which are displayed on an oscilloscope screen to facilitate recording, and is available in real time. Input can be any parameter that is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. The instrument has been used successfully on a 1975 Chevrolet V8 engine and on a Continental 6-cylinder aircraft engine. While it was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
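The averaging principle the instrument implements in hardware is a point-by-point mean over aligned cycles. A minimal software sketch (synthetic data and an 8-point cycle standing in for the instrument's 2048 points, both hypothetical):

```python
def average_cycles(cycles):
    """Point-by-point average of equal-length cycles, as the
    curve-averaging instrument does over 100 engine cycles."""
    n = len(cycles)
    length = len(cycles[0])
    return [sum(c[i] for c in cycles) / n for i in range(length)]

# 100 synthetic "cycles" of 8 points each; point i of cycle j is
# i plus a small cycle-to-cycle drift of j * 0.01
cycles = [[i + j * 0.01 for i in range(8)] for j in range(100)]
avg = average_cycles(cycles)   # avg[i] = i + 0.495
```

Averaging suppresses the cycle-to-cycle variation while preserving the common waveform, which is exactly why the instrument reports the mean curve rather than any single cycle.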
Spatial moving average risk smoothing.
Botella-Rocamora, P; López-Quílez, A; Martinez-Beneito, M A
2013-07-10
This paper introduces spatial moving average risk smoothing (SMARS) as a new way of carrying out disease mapping. This proposal applies the moving average ideas of time series theory to the spatial domain, making use of a spatial moving average process of unknown order to define dependence on the risk of a disease occurring. Correlation of the risks for different locations will be a function of m values (m being unknown), providing a rich class of correlation functions that may be reproduced by SMARS. Moreover, the distance (in terms of neighborhoods) that should be covered for two units to be found to make the correlation of their risks 0 is a quantity to be fitted by the model. This way, we reproduce patterns that range from spatially independent to long-range spatially dependent. We will also show a theoretical study of the correlation structure induced by SMARS, illustrating the wide variety of correlation functions that this proposal is able to reproduce. We will also present three applications of SMARS to both simulated and real datasets. These applications will show SMARS to be a competitive disease mapping model when compared with alternative proposals that have already appeared in the literature. Finally, the application of SMARS to the study of mortality for 21 causes of death in the Comunitat Valenciana will allow us to identify some qualitative differences in the patterns of those diseases. Copyright © 2012 John Wiley & Sons, Ltd.
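The defining property claimed for SMARS, that correlation drops to exactly zero beyond m neighborhoods, can be illustrated with a one-dimensional analogue (this sketch is my own simplification, not the paper's model, and the weight scheme is hypothetical):

```python
import random

def ma_risks(n, theta, seed=1):
    """1-D sketch of a spatial moving-average process: region i's
    log-risk is a weighted sum of its own innovation and those of its
    m = len(theta) nearest neighbours on each side.  Regions more than
    m neighbourhoods apart share no innovations, so their risks are
    uncorrelated, mirroring the finite-range correlation of SMARS."""
    m = len(theta)
    rng = random.Random(seed)
    eps = [rng.gauss(0.0, 1.0) for _ in range(n + 2 * m)]
    weights = theta + [1.0] + theta[::-1]   # symmetric kernel of width 2m+1
    return [sum(w * eps[i + k] for k, w in enumerate(weights))
            for i in range(n)]

risks = ma_risks(10, [0.5])   # order m = 1: correlation vanishes beyond 1 step
```

In the actual model both m and the weights are fitted to the disease-mapping data rather than fixed in advance.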
NASA Astrophysics Data System (ADS)
Feroughi, Omid M.; Kronemayer, Helmut; Dreier, Thomas; Schulz, Christof
2015-09-01
Multi-line NO laser-induced fluorescence (LIF) thermometry enables accurate gas-phase temperature imaging in combustion systems through least-squares fitting of excitation spectra. The required excitation wavelength scan takes several minutes, which systematically biases the results in case of temperature fluctuations. In this work, the effect of various types (linear, Gaussian and bimodal) and amplitudes of temperature fluctuations is quantified based on simulated NO-LIF excitation spectra. Temperature fluctuations of less than ±5 % result in a negligible error of less than ±1 % in temperature for all cases. Bimodal temperature distributions have the largest effect on the determined temperature. Symmetric temperature fluctuations around 900 K have a negligible effect. At lower mean temperatures, fluctuations cause a positive bias leading to over-predicted mean temperatures, while at higher temperatures the bias is negative. The results of the theoretical analysis were applied as a guide for interpreting experimental multi-line NO-LIF temperature measurements in a mildly turbulent pilot-plant scale flame reactor dedicated to nanoparticle synthesis.
Han, Z T; Chen, Q X
2015-07-31
This study aimed to investigate the curative effect and costs of surgical and gamma knife treatments on intractable epilepsy caused by temporal-hippocampal sclerosis. The subjects comprised patients who suffered from intractable epilepsy caused by temporal-hippocampal sclerosis and received treatment in the Department of Neurosurgery of our hospital between 2010 and 2011. After obtaining their consent, patients were evaluated and selected to receive surgical or gamma knife treatments. In the surgical group, the short-term curative rate was 92.60% and the average cost was US$ 1311.50 while in the gamma knife group, the short-term curative rate was 53.79%, and the average cost was US$ 2786.90. Both surgical and gamma knife treatments of intractable epilepsy caused by temporal-hippocampal sclerosis are safe and effective, but the short-term curative effect of surgical treatment is better than that of gamma knife, and its cost is lower.
Treatment strategy: Role of enfuvirtide in managing treatment-limiting side effects
Tsoukas, Christos
2007-01-01
Side effects can limit the options available to physicians for the treatment of HIV infection. Management of these side effects is essential, to avoid cessation of treatment. The entry inhibitor enfuvirtide can be useful as one of three active agents in an HIV treatment regimen as a way to both reduce treatment-limiting side effects and provide an efficacious agent for viral control. In the present case, the patient had a problematic and lengthy treatment history, with numerous concomitant conditions. His latest regimen, which includes an agent in a new drug class (enfuvirtide), has maintained HIV suppression while minimizing toxicity. PMID:23365593
Effects of Cognitive Adjunct Treatments on Assertiveness.
ERIC Educational Resources Information Center
Derry, Paul A.; Stone, Gerald L.
This study examined the contribution of cognitively-oriented adjunct treatments to assertive training. Unassertive university students (N=42) were randomly assigned within an analysis of covariance design with three levels of treatment (Cognitive Self-Statement Training (CSST), Attribution Training (AT), and Behavioral Rehearsal (BR]. Multiple…
Nilotinib Effective and Safe in Initial Treatment of CML
Preliminary results from a phase III trial testing nilotinib (Tasigna) against imatinib mesylate (Gleevec) as first-line treatment for chronic-phase chronic myelogenous leukemia (CML) indicate that nilotinib is effective and safe as initial treatment for
Effectiveness of Acupuncture in the Treatment of Gulf War Illness
2010-07-01
Technical Reporting: 1-year Progress Report GW080059 - Effectiveness of Acupuncture in the Treatment of Gulf War Illness PI - Lisa Conboy, MA, MS, ScD...individualized acupuncture treatment on subjects' overall health and disease burden. This three-year project collects main outcomes after 2 months of biweekly acupuncture treatment. Longer-term effectiveness will be measured with a 6-month follow-up. Our objectives are to find: a successful treatment of GWI
Effects of glucocorticoid treatment on bone strength.
Manolides, Andrew S; Cullen, Diane M; Akhter, Mohammed P
2010-09-01
Glucocorticoids (GCs) are prescribed for the treatment of several diseases, but their long-term use causes osteoporosis. Current research suggests that GCs suppress the canonical Wnt/beta pathway, resulting in decreased expression of critical bone proteins. This study examined how bone structure and strength of high bone mass (HBM) mice and low density lipoprotein receptor-related protein 5 (LRP5) knockout (KO+/-) mice are affected by GC treatment in comparison to wild-type (WT) mice, and if changes were specific to either trabecular or cortical bone. Mice were treated with either prednisone or placebo. The femurs and L4 vertebral bodies were analyzed by micro-CT for structure and mechanically tested to determine strength and apparent material strength properties. Differences in all measured variables corresponding to GC treatment and genotype were tested using two-way ANOVA. GC treatment caused decreased structural strength parameters, weakened apparent material strength properties, and disruption of bone structure in HBM, but not LRP5+/- or WT, mice. Despite treatment-related loss, trabecular bone structure and strength remained elevated as compared to LRP5+/- and WT mice. In HBM femurs, both cortical and trabecular structure, but not strength parameters, were negatively affected by treatment. In HBM vertebral bodies, both structural and strength parameters were negatively affected by treatment.
Cardona-Arias, Jaiberth Antonio; López-Carvajal, Liliana; Tamayo Plata, Mery Patricia; Vélez, Iván Darío
2017-05-01
The treatment of cutaneous leishmaniasis is toxic, has contraindications, and a high cost. The objective of this study was to estimate the cost-effectiveness of thermotherapy versus pentavalent antimonials for the treatment of cutaneous leishmaniasis. Effectiveness was the proportion of healing and safety with the adverse effects; these parameters were estimated from a controlled clinical trial and a meta-analysis. A standard costing was conducted. Average and incremental cost-effectiveness ratios were estimated. The uncertainty regarding effectiveness, safety, and costs was determined through sensitivity analyses. The total costs were $66,807 with Glucantime and $14,079 with thermotherapy. The therapeutic effectiveness rates were 64.2% for thermotherapy and 85.1% for Glucantime. The average cost-effectiveness ratios ranged between $721 and $1275 for Glucantime and between $187 and $390 for thermotherapy. Based on the meta-analysis, thermotherapy may be a dominant strategy. The excellent cost-effectiveness ratio of thermotherapy shows the relevance of its inclusion in guidelines for the treatment. © 2017 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.
Bangotra, Pargin; Mehra, Rohit; Kaur, Kirandeep; Jakhu, Rajan
2016-10-01
The activity concentrations of (226)Ra (radium), (232)Th (thorium) and (40)K (potassium) have been measured in soil samples collected from the Mansa and Muktsar districts of Punjab (India) using a NaI(Tl) gamma detector. The concentrations of the three radionuclides ((226)Ra, (232)Th and (40)K) in the studied area varied from 18±4 to 46±5, 53±7 to 98±8 and 248±54 to 756±110 Bq kg(-1), respectively. Radium equivalent activities (Raeq) have been calculated for the soil samples to assess the radiation hazards arising from their use. The absorbed dose rates due to (226)Ra, (232)Th and (40)K in the studied area varied from 8 to 21, 33 to 61 and 9 to 25 nGy h(-1), respectively. The corresponding indoor and outdoor annual effective doses in the studied area were 0.38 and 0.09 mSv, respectively. The external and internal hazard indices were also calculated for the assessment of radiation hazards in the studied area.
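The abstract does not state the Raeq formula; the relation commonly used for this index combines the three concentrations so their summed gamma dose equals that of an equivalent amount of (226)Ra. A sketch using that standard relation, applied here to the upper bounds reported above:

```python
def radium_equivalent(c_ra, c_th, c_k):
    """Radium equivalent activity (Bq/kg) from the commonly used
    relation Raeq = C_Ra + 1.43 * C_Th + 0.077 * C_K, where the
    coefficients convert Th and K activity to the 226Ra activity
    producing the same gamma dose."""
    return c_ra + 1.43 * c_th + 0.077 * c_k

# Upper bounds reported for the studied soils (Bq/kg)
raeq = radium_equivalent(46, 98, 756)   # → 244.352 Bq/kg
```

The result is well below the 370 Bq/kg limit usually quoted for building materials, consistent with the modest dose rates reported in the abstract.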
Effect on performance of weanling alpacas following treatments against gastro-intestinal parasites.
Thomas, Susan M; Morgan, Eric R
2013-11-15
Nematodes and coccidia are common parasites of alpacas (Vicugna pacos), and important causes of disease in this increasingly popular livestock species. Endoparasitic infestation is thought to increase at times of natural or imposed stress, and antiparasitic treatments are often administered, although to date there is little evidence regarding their effect. Thirty-one alpaca juveniles (crias) were divided into four groups at weaning, and received either no treatment as a control (C), fenbendazole anthelmintic (FB), toltrazuril coccidiostat (T), or both treatments (FBT). Body weights and faecal egg/oocyst counts were recorded weekly for six weeks following treatment. Although the prophylactic treatments decreased faecal egg/oocyst counts of the target organisms in the short term, there was no significant difference in egg/oocyst output over the course of the trial from animals given wormer, coccidiostat or both treatments. The group receiving anthelmintic only showed a significant reduction in live weight gain (LWG), with no significant difference in LWG between the other groups. At the conclusion of the trial, 'wormed only' alpacas weighed 3.3% less than at weaning, losing an average 1.3 kg over six weeks, whereas average LWG in the control group was 2.5 kg. Antiparasitics transiently reduced egg/oocyst output but results suggest that further investigation is required on the action of anthelmintics administered to alpaca crias at weaning and their effect on animal health and welfare. Copyright © 2013 Elsevier B.V. All rights reserved.
Effect of treatment pressure on treatment quality and bending properties of red pine lumber
Patricia K. Lebow; Stan T. Lebow; William J. Nelson
2010-01-01
Although higher treatment pressures have the potential to improve preservative penetration, higher pressures may possibly result in greater reduction in mechanical properties. The present study evaluated the effect of treatment pressure on the treatment quality and mechanical properties of red pine (Pinus resinosa Ait.) lumber. End-matched sections of red pine lumber...
NASA Technical Reports Server (NTRS)
Hah, Chunill
2011-01-01
The current paper reports on an investigation of steady and unsteady flow effects of circumferential grooves casing treatment in a transonic compressor rotor. Circumferential grooves casing treatment is used mainly to increase stall margin in axial compressors with a relatively small decrease in aerodynamic efficiency. It is widely believed that flow mechanisms of circumferential grooves casing treatment near stall conditions are not yet well understood even though this treatment has been used widely in real engines. Numerical analysis based on steady Reynolds-averaged Navier-Stokes (RANS) has been the primary tool used to understand flow mechanism for circumferential grooves casing treatment. Although steady RANS explains some flow effects of circumferential grooves casing treatment, it does not calculate all the measured changes in the compressor characteristics. Therefore, design optimization of circumferential grooves with steady RANS has not been very successful. As a compressor operates toward the stall condition, the flow field becomes transient. Major sources of self-generated flow unsteadiness are shock oscillation and interaction between the passage shock and the tip leakage vortex. In the present paper, an unsteady Reynolds-averaged Navier-Stokes (URANS) approach is applied to study the effects of circumferential grooves in a transonic compressor. The results from URANS are compared with the results from RANS and measured data. The current investigation shows that there are significant unsteady flow effects on the performance of the circumferential grooves casing treatment. For the currently investigated rotor, the unsteady effects are of the same magnitude as the steady effects in terms of extending the compressor stall margin.
Cost-effectiveness of early treatment for retinopathy of prematurity.
Kamholz, Karen L; Cole, Cynthia H; Gray, James E; Zupancic, John A F
2009-01-01
The Early Treatment for Retinopathy of Prematurity trial demonstrated that peripheral retinal ablation of eyes with high-risk prethreshold retinopathy of prematurity (early treatment) is associated with improved visual outcomes at 9 months' corrected gestational age compared with treatment at threshold disease (conventional management). However, early treatment increased the frequency of laser therapy, anesthesia with intubation, treatment-related systemic complications, and the need for repeat treatments. To determine the cost-effectiveness of an early treatment strategy for retinopathy of prematurity compared with conventional management. We developed a stochastic decision analytic model to assess the incremental cost of early treatment per eye with severe visual impairment prevented. We derived resource-use and efficacy estimates from the Early Treatment for Retinopathy of Prematurity trial's published outcome data. We used a third-party payer perspective. Our primary analysis focused on outcomes from birth through 9 months' corrected gestational age. A secondary analysis used a lifetime horizon. Parameter uncertainty was quantified by using probabilistic and deterministic sensitivity analyses. The incremental cost-effectiveness of early treatment was $14,200 per eye with severe visual impairment prevented. There was a 90% probability that the cost-effectiveness of early treatment would be less than $40,000 per eye with severe visual impairment prevented and a 0.5% probability that early treatment would be cost-saving (less costly and more effective). Limiting early treatment to more severely affected eyes (eyes with "type 1 retinopathy of prematurity" as defined by the Early Treatment for Retinopathy of Prematurity trial) had a cost-effectiveness of $6,200 per eye with severe visual impairment prevented. Analyses that considered long-term costs and outcomes found that early treatment was cost-saving. Early treatment of retinopathy of prematurity is both
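The incremental cost-effectiveness figures quoted above all come from the same simple ratio: extra cost divided by extra effect when moving from the comparator to the new strategy. A generic sketch (the per-patient numbers below are hypothetical, chosen only to illustrate the calculation, not taken from the trial):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of effect for the new strategy vs. the old one."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical per-patient values for illustration only
ratio = icer(cost_new=12000.0, cost_old=9000.0,
             effect_new=0.90, effect_old=0.75)   # → 20000.0 per unit of effect
```

In the trial-based analysis, "effect" is eyes with severe visual impairment prevented, and the denominator is the difference in that outcome between early treatment and conventional management.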
Model averaging and muddled multimodel inferences
Cade, Brian S.
2015-01-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the
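The AIC weights criticized here as measures of predictor importance are themselves straightforward to compute: each model's weight is proportional to exp(-delta/2), where delta is its AIC difference from the best model. A minimal sketch with hypothetical AIC values:

```python
import math

def aic_weights(aics):
    """Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    with delta_i = AIC_i - min(AIC).  Weights sum to 1 and quantify
    relative support for each *model*, not for individual predictors."""
    best = min(aics)
    raw = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

w = aic_weights([100.0, 102.0, 110.0])   # best model gets the largest weight
```

As the abstract argues, summing these weights over models containing a given predictor measures support for those models, which is not the same thing as that predictor's partial effect, especially under multicollinearity.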
Steiger, U; Cotting, J; Reichen, J
1990-03-01
We prospectively studied the effect of albendazole on microsomal reserve and on first-pass activation to albendazole sulfoxide in patients with hydatid disease. An aminopyrine breath test was performed in 12 patients while they were receiving albendazole treatment and while they were not. Excretion of 14CO2 in breath averaged 0.70%.kg.mmol-1 +/- 0.20%.kg.mmol-1 without treatment and 0.54%.kg.mmol-1 +/- 0.14%.kg.mmol-1 with treatment (p less than 0.005). Plasma levels of albendazole sulfoxide were measured 4 hours after the morning dose during the first and second half of the 4-week treatment cycles. In nine of the 12 patients albendazole sulfoxide levels decreased during the second half of the cycle by an average of 0.84 +/- 0.76 mumol/L (p less than 0.02). Transaminase levels increased in 10 of the 12 patients during long-term albendazole treatment, and major side effects, including hepatotoxicity, neutropenia, and alopecia, were observed in three patients. We conclude that albendazole partially inhibits microsomal enzyme function but induces its own metabolism. Hepatotoxicity and other possible severe side effects necessitate close therapeutic monitoring of patients who are given albendazole.
Microplastics in Sewage Sludge: Effects of Treatment.
Mahon, A M; O'Connell, B; Healy, M G; O'Connor, I; Officer, R; Nash, R; Morrison, L
2017-01-17
Waste water treatment plants (WWTPs) are receptors for the cumulative loading of microplastics (MPs) derived from industry, landfill, domestic wastewater and stormwater. The partitioning of MPs through the settlement processes of wastewater treatment results in the majority becoming entrained in the sewage sludge. This study characterized MPs in sludge samples from seven WWTPs in Ireland which use anaerobic digestion (AD), thermal drying (TD), or lime stabilization (LS) treatment processes. Abundances ranged from 4196 to 15 385 particles kg(-1) (dry weight). Results of a general linear mixed model (GLMM) showed significantly higher abundances of MPs in smaller size classes in the LS samples, suggesting that the treatment process of LS shears MP particles. In contrast, lower abundances of MPs found in the AD samples suggests that this process may reduce MP abundances. Surface morphologies examined using scanning electron microscopy (SEM) showed characteristics of melting and blistering of TD MPs and shredding and flaking of LS MPs. This study highlights the potential for sewage sludge treatment processes to affect the risk of MP pollution prior to land spreading and may have implications for legislation governing the application of biosolids to agricultural land.
Effectiveness and safety of topical tacrolimus in treatment of vitiligo
Rokni, Ghasem Rahmatpour; Golpour, Massoud; Gorji, Alimorad Heidari; Khalilian, Alireza; Ghasemi, Hamta
2017-01-01
Vitiligo is one of the oldest known skin disorders, with a variety of proposed therapies. This study therefore investigated the efficacy and safety of topical tacrolimus in the treatment of patients with vitiligo. A pre-post clinical study was conducted on thirty patients with vitiligo who were referred to a polyclinic and dermatology clinic. Participants were evaluated and demographic information was recorded on a structured checklist. Disease activity was then scored using the vitiligo index of disease activity system. Photographs and depigmentation percentages were recorded before treatment and again at weeks 4, 8, 12, 16, 20, and 24. Data were analyzed using SPSS version 20. The final sample comprised 30 patients: 12 men (40%) and 18 women (60%). Mean age was 26.13 ± 18.20 years (range 2-76 years); eleven patients were ≤15 years old and the rest were older. Sixty-six lesions were found, occurring most often on the face and neck (37.87%) and trunk (21.21%) and least often on the genitalia (9.09%). For face and neck lesions, improvement increased significantly at weeks 4, 8, 12, and 16 relative to the preceding weeks; improvement continued through weeks 20 and 24 but the additional gain was not significant. For trunk lesions, the improvement at week 4 was not significant relative to baseline, but increased significantly at weeks 8, 12, 16, 20, and 24 relative to the preceding weeks. Improvement of lesions on the limbs and genitalia was lower. There were no significant differences by sex or age, although improvement was slower in older patients. The results indicate that topical tacrolimus can be effective in vitiligo, particularly on the face and neck, and no specific adverse effects were observed
Bartl, R; Bartl, C
2015-12-01
Osteoporosis is still an underdiagnosed and insufficiently treated widespread disease in Germany. Of the estimated 7 million osteoporosis patients, only 1.5 million receive a guideline-conform diagnosis and even fewer receive appropriate treatment. Some 90% of patients are provided with analgesics but only 10% receive an effective therapy, although efficacious, well-tested and affordable medications are available. In addition, approximately half of all patients terminate treatment after only 1 year, although according to the results of recent studies the duration of therapy should be at least 3-5 years. In view of increasing average life expectancy, consistent management to prevent osteoporosis-associated fractures is of great importance for society, if only for reasons of cost. Achieving this target depends on four factors: clarification of the origin of osteoporosis and fractures (bone consciousness), prophylaxis of bone loss and fractures (primary prevention), consistent guideline-conform diagnostics and therapy (secondary and tertiary prevention), and cooperation of all disciplines in medicine (bone is everybody's business). This article describes the current state of diagnostics (bone density measurement with dual X-ray absorptiometry, FRAX®), prophylaxis of fractures (screening programs) and therapy (use of economical and effective medications with few side effects). Novel medications are already undergoing clinical testing, and a "healing" of bone loss with restoration of the normal bone structure can be expected.
Achronal averaged null energy condition
Graham, Noah; Olum, Ken D.
2007-09-15
The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.
Solymosi, Tamas; Melczer, Zsolt; Szabolcs, Istvan; Nagy, Endre V.; Goth, Miklos
2015-01-01
Background. Because of the increased risk of surgery, thyroid nodules causing compression signs and/or hyperthyroidism are concerning during pregnancy. Patients and Methods. Six patients with nontoxic cystic, four with nontoxic solid, and three with overt hyperthyroidism caused by toxic nodules were treated with percutaneous ethanol injection therapy (PEI). An average of 0.68 mL ethanol per 1 mL nodule volume was administered. Mean number of PEI treatments for patients was 2.9. Success was defined as the shrinkage of the nodule by more than 50% of the pretreatment volume (V0) and the normalization of TSH and FT4 levels. The average V0 was 15.3 mL. Short-term success was measured prior to labor, whereas long-term success was determined during the final follow-up (an average of 6.8 years). Results. The pressure symptoms decreased in all but one patient after PEI and did not worsen until delivery. The PEI was successful in 11 (85%) and 7 (54%) patients at short-term and long-term follow-up, respectively. Three patients underwent repeat PEI which was successful in 2 patients. Conclusions. PEI is a safe tool and seems to have good short-term results in treating selected symptomatic pregnant patients. Long-term success may require repeat PEI. PMID:26697066
Nonparametric Bounds and Sensitivity Analysis of Treatment Effects
Richardson, Amy; Hudgens, Michael G.; Gilbert, Peter B.; Fine, Jason P.
2015-01-01
This paper considers conducting inference about the effect of a treatment (or exposure) on an outcome of interest. In the ideal setting where treatment is assigned randomly, under certain assumptions the treatment effect is identifiable from the observable data and inference is straightforward. However, in other settings such as observational studies or randomized trials with noncompliance, the treatment effect is no longer identifiable without relying on untestable assumptions. Nonetheless, the observable data often do provide some information about the effect of treatment, that is, the parameter of interest is partially identifiable. Two approaches are often employed in this setting: (i) bounds are derived for the treatment effect under minimal assumptions, or (ii) additional untestable assumptions are invoked that render the treatment effect identifiable and then sensitivity analysis is conducted to assess how inference about the treatment effect changes as the untestable assumptions are varied. Approaches (i) and (ii) are considered in various settings, including assessing principal strata effects, direct and indirect effects and effects of time-varying exposures. Methods for drawing formal inference about partially identified parameters are also discussed. PMID:25663743
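Approach (i) above, deriving bounds under minimal assumptions, can be illustrated with the classic worst-case construction for a bounded outcome. This sketch is my own hedged illustration, not the paper's method: with a binary outcome Y in {0, 1}, the unobserved potential outcomes are replaced by the extremes 0 and 1, yielding bounds on the average treatment effect whose width is exactly 1.

```python
# Hedged illustration (not from the paper): worst-case bounds on
# E[Y(1)] - E[Y(0)] for a binary outcome when treatment is not randomized.
import numpy as np

def worst_case_bounds(y, z):
    """Bounds on the ATE for binary outcome y and binary treatment z."""
    p = z.mean()                              # P(Z = 1)
    # Unobserved arm of each potential-outcome mean set to 0 (lower) or 1 (upper)
    ey1_lo = y[z == 1].mean() * p + 0.0 * (1 - p)
    ey1_hi = y[z == 1].mean() * p + 1.0 * (1 - p)
    ey0_lo = y[z == 0].mean() * (1 - p) + 0.0 * p
    ey0_hi = y[z == 0].mean() * (1 - p) + 1.0 * p
    return ey1_lo - ey0_hi, ey1_hi - ey0_lo

rng = np.random.default_rng(3)
z = rng.integers(0, 2, 1000)
y = rng.binomial(1, 0.3 + 0.2 * z)            # true effect of 0.2 on this scale
lo, hi = worst_case_bounds(y, z)
print(round(lo, 2), round(hi, 2))             # interval of width exactly 1
```

Because the interval always has width 1 for a binary outcome, such bounds are informative about sign only in favorable cases, which is why the paper pairs them with the sensitivity-analysis approach (ii).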
Effects of Behavioral and Pharmacological Treatment on Smokeless Tobacco Users.
ERIC Educational Resources Information Center
Hatsukami, Dorothy; And Others
1996-01-01
Examined the effects of 2 mg of nicotine polacrilex versus placebo gum and a group behavioral treatment versus minimal contact on cessation of smokeless tobacco use. Participants (n=210) were randomly assigned 1 of the 4 treatment conditions. Withdrawal symptoms were assessed throughout the treatment. Discusses findings. (KW)
Reitzel, Lorraine R; Carbonell, Joyce L
2006-10-01
Published and unpublished data from nine studies on juvenile sexual offender treatment effectiveness were summarized by meta-analysis (N=2986, 2604 known male). Recidivism rates for sexual, non-sexual violent, non-sexual non-violent, and unspecified non-sexual crimes were 12.53%, 24.73%, 28.51%, and 20.40%, respectively, based on an average 59-month follow-up period. Four included studies contained a control group (n=2288) and five studies included a comparison treatment group (n=698). An average weighted effect size of 0.43 (CI=0.33-0.55) was obtained, indicating a statistically significant effect of treatment on sexual recidivism. However, individual study characteristics (e.g., handling of dropouts and non-equivalent follow-up periods between treatment groups) suggest that results should be interpreted with caution. A comparison of odds ratios by quality of study design indicated that higher-quality designs yielded better effect sizes, though the difference between groups was not significant.
Ferron, John M; Bell, Bethany A; Hess, Melinda R; Rendina-Gobioff, Gianna; Hibbard, Susan T
2009-05-01
Multiple-baseline studies are prevalent in behavioral research, but questions remain about how to best analyze the resulting data. Monte Carlo methods were used to examine the utility of multilevel models for multiple-baseline data under conditions that varied in the number of participants, number of repeated observations per participant, variance in baseline levels, variance in treatment effects, and amount of autocorrelation in the Level 1 errors. Interval estimates of the average treatment effect were examined for two specifications of the Level 1 error structure (σ²I and first-order autoregressive) and for five different methods of estimating the degrees of freedom (containment, residual, between-within, Satterthwaite, and Kenward-Roger). When the Satterthwaite or Kenward-Roger method was used and an autoregressive Level 1 error structure was specified, the interval estimates of the average treatment effect were relatively accurate. Conversely, the interval estimates of the treatment effect variance were inaccurate, and the corresponding point estimates were biased.
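The multilevel setup studied here can be sketched with simulated data. The code below is a minimal illustration under my own assumed parameter values, not the study's simulation: participants get their own baseline level and treatment effect, with staggered treatment onset, and a two-level model recovers the average effect. Note that statsmodels fits a simple i.i.d. Level-1 error and does not provide Satterthwaite or Kenward-Roger degrees of freedom (those are available in R via lmerTest/pbkrtest).

```python
# Sketch (assumed values): simulate multiple-baseline data and fit a
# two-level model with a random intercept and random treatment effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for pid in range(6):                       # 6 participants
    base = rng.normal(10, 1)               # variance in baseline levels
    effect = rng.normal(3, 0.5)            # variance in treatment effects
    start = 5 + 2 * pid                    # staggered treatment onset
    for t in range(20):                    # 20 repeated observations each
        phase = int(t >= start)            # 0 = baseline, 1 = treatment
        rows.append(dict(pid=pid, phase=phase,
                         y=base + effect * phase + rng.normal(0, 1)))
df = pd.DataFrame(rows)

# Random intercept and random treatment effect by participant
m = smf.mixedlm("y ~ phase", df, groups="pid", re_formula="~phase").fit()
print(m.params["phase"])                   # estimate of the average treatment effect
```

With an autoregressive Level-1 error, as the study recommends, the error structure would additionally need to be modeled, which is beyond this sketch.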
Providing cost-effective treatment of hard-to-heal wounds in the community through use of NPWT.
Hampton, Jane
2015-06-01
The treatment of non-healing wounds accounts for a high proportion of wound care costs. Advanced technology treatments, such as negative pressure wound therapy (NPWT), could be cost-effective if they result in faster healing. The objective of this study was to assess the effect on healing and the cost-effectiveness of a single-use NPWT device (PICO, Smith & Nephew) when used on hard-to-heal wounds in a community setting. This was a cohort case study in which wounds were treated with NPWT for 2 weeks. Wounds were assessed every 2-4 weeks until healed. The weekly cost of treatment prior to the intervention, that is, the products used and nurse time, was compared with treatment costs during NPWT and after a return to standard treatment. The study included 9 patients with leg ulcers or pressure ulcers that had been slow healing or non-healing for at least 6 weeks. While treated with NPWT, the average weekly reduction in wound size was 21%. The wound size achieved with NPWT was reached on average 10 weeks earlier than predicted. The increased healing rate continued after PICO stopped, and 5 wounds healed on average 8 weeks later. Frequency of dressing changes fell from 4 times weekly at baseline to 2 times a week during NPWT and to 1.8 after NPWT stopped. Weekly cost of treatment with NPWT was, on average, 1.6 times higher than at baseline, but fell to a third of the baseline cost after NPWT stopped, owing to the reduction in dressing changes. The improvement in healing rate considerably outweighed the increase in costs associated with NPWT. NPWT is a cost-effective treatment for hard-to-heal wounds: wounds decreased in size and healed more quickly under NPWT than under standard treatment, and the additional NPWT costs can be quickly offset by faster healing and a shortened treatment period.
Cost-effectiveness of treatment of diabetic macular edema.
Pershing, Suzann; Enns, Eva A; Matesic, Brian; Owens, Douglas K; Goldhaber-Fiebert, Jeremy D
2014-01-07
Macular edema is the most common cause of vision loss among patients with diabetes. To determine the cost-effectiveness of different treatments of diabetic macular edema (DME). Markov model. Published literature and expert opinion. Patients with clinically significant DME. Lifetime. Societal. Laser treatment, intraocular injections of triamcinolone or a vascular endothelial growth factor (VEGF) inhibitor, or a combination of both. Discounted costs, gains in quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios (ICERs). All treatments except laser monotherapy substantially reduced costs, and all treatments except triamcinolone monotherapy increased QALYs. Laser treatment plus a VEGF inhibitor achieved the greatest benefit, gaining 0.56 QALYs at a cost of $6975 for an ICER of $12 410 per QALY compared with laser treatment plus triamcinolone. Monotherapy with a VEGF inhibitor achieved similar outcomes to combination therapy with laser treatment plus a VEGF inhibitor. Laser monotherapy and triamcinolone monotherapy were less effective and more costly than combination therapy. VEGF inhibitor monotherapy was sometimes preferred over laser treatment plus a VEGF inhibitor, depending on the reduction in quality of life with loss of visual acuity. When the VEGF inhibitor bevacizumab was as effective as ranibizumab, it was preferable because of its lower cost. Long-term outcome data for treated and untreated diseases are limited. The most effective treatment of DME is VEGF inhibitor injections with or without laser treatment. This therapy compares favorably with cost-effective interventions for other conditions. Agency for Healthcare Research and Quality.
Personality and Treatment Effectiveness in Anorexia Nervosa.
ERIC Educational Resources Information Center
Skoog, Dagna K.; And Others
1984-01-01
Compared pre- and posttreatment Minnesota Multiphasic Personality Inventory profiles of female inpatients (N=12) with anorexia nervosa. Results showed change after treatment, and found that pretreatment profiles obtained at a different hospital were remarkably similar, which suggests a common constellation of personality characteristics in…
Effects of ecosystem-based management treatments
Michael G. Harrington; Carl E. Fiedler; Stephen F. Arno; Ward W. McCaughey; Leon J. Theroux; Clinton E. Carlson; Kristin L. Zouhar; Thomas H. DeLuca; Donald J. Bedunah; Dayna M. Ayers; Elizabeth A. Beringer; Sallie J. Hejl; Lynn Bacon; Robert E. Benson; Jane Kapler Smith; Rick Floch
1999-01-01
The prescribed burn treatments were applied to reduce pre-existing and new slash fuel loadings, reduce understory tree density to lower crown fire potential, stimulate vigor of decadent understory vegetation, produce mineral seedbeds for seral species establishment, and increase availability of mineral nutrients. To test the feasibility of prescribed burning under a...
Effect of aspirin treatment on chondromalacia patellae.
Bentley, G; Leslie, I J; Fischer, D
1981-01-01
Twenty-nine patients (21 females and 8 males) with chondromalacia patellae diagnosed by arthroscopy were randomly allocated to receive aspirin or placebo for 3 months. Clinical and arthroscopic examination after 3 months showed no significant change in symptoms, signs, or macroscopic appearances in either group. Surgical treatment was performed in 14 patients for deteriorating symptoms. Images PMID:7008711
Effect of heat-treatment temperatures on density and porosity in MgB2 superconductor
NASA Astrophysics Data System (ADS)
Liu, C. F.; Yan, G.; Du, S. J.; Xi, W.; Feng, Y.; Zhang, P. X.; Wu, X. Z.; Zhou, L.
2003-04-01
The density, porosity, and microstructure of MgB2 samples are very important factors for the critical current density. The effect of heat-treatment temperature on density and porosity in MgB2 superconductors has been investigated. The open porosity increases with increasing heat-treatment temperature, but the closed porosity decreases. The theoretical densities calculated from the lattice parameters of the measured samples are 2.6-2.63 g/cm3. The average measured total porosity (open plus closed) is about 50%.
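The reported theoretical density can be checked from the hexagonal cell geometry. The numbers below are my own (typical literature lattice parameters for MgB2, a ≈ 3.09 Å, c ≈ 3.52 Å, one formula unit per cell), not values taken from this paper:

```python
# Worked check (assumed typical lattice parameters, not the paper's values):
# theoretical density of MgB2 from its hexagonal unit cell.
import math

N_A = 6.02214e23            # Avogadro's number, 1/mol
M = 24.305 + 2 * 10.811     # molar mass of MgB2, g/mol
a, c = 3.09e-8, 3.52e-8     # lattice parameters in cm (assumed values)

V = (math.sqrt(3) / 2) * a**2 * c   # hexagonal cell volume, cm^3
rho = M / (N_A * V)                 # one formula unit per cell
print(round(rho, 2))                # ~2.6 g/cm^3, consistent with 2.6-2.63
```

The measured total porosity of ~50% then follows from comparing the apparent density of sintered samples against this theoretical value.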
Frick, K. D.; Lietman, T. M.; Holm, S. O.; Jha, H. C.; Chaudhary, J. S.; Bhatta, R. C.
2001-01-01
OBJECTIVE: The present study compares the cost-effectiveness of targeted household treatment and mass treatment of children in the most westerly part of Nepal. METHODS: Effectiveness was measured as the percentage point change in the prevalence of trachoma. Resource measures included personnel time required for treatment, transportation, the time that study subjects had to wait to receive treatment, and the quantity of azithromycin used. The costs of the programme were calculated from the perspectives of the public health programme sponsor, the study subjects, and the society as a whole. FINDINGS: Previous studies have indicated no statistically significant differences in effectiveness, and the present work showed no significant differences in total personnel and transportation costs per child aged 1-10 years, the total time that adults spent waiting, or the quantity of azithromycin per child. However, the mass treatment of children was slightly more effective and used less of each resource per child aged 1-10 years than the targeted treatment of households. CONCLUSION: From all perspectives, the mass treatment of children is at least as effective and no more expensive than targeted household treatment, notwithstanding the absence of statistically significant differences. Less expensive targeting methods are required in order to make targeted household treatment more cost-effective. PMID:11285663
The role of symmetry in attraction to average faces.
Jones, Benedict C; DeBruine, Lisa M; Little, Anthony C
2007-11-01
Although many studies have demonstrated that average faces tend to be attractive, few studies have examined the extent to which symmetry contributes to the attractiveness of average faces. Such studies are potentially important, however, because average faces are highly symmetric and increasing the symmetry of face images increases their attractiveness. Here we demonstrate that increasing averageness of 2-D face shape independently of symmetry is sufficient to increase attractiveness, indicating that preferences for symmetry cannot solely explain the attractiveness of average faces. Additionally, we show that averageness preferences are significantly weaker when the effects of symmetry are controlled for using computer graphic methods than when the effects of symmetry are not controlled for, suggesting that symmetry contributes to the attractiveness of average faces. Importantly, this latter finding was not explained by the greater perceived similarity between versions of faces that varied in averageness, but not symmetry, than between versions of faces that varied in both averageness and symmetry.
Cost-effectiveness of viral hepatitis B & C treatment.
Toy, Mehlika
2013-12-01
With the availability of effective antiviral therapies for chronic viral hepatitis B and C, cost-effectiveness studies have been performed to assess the outcomes and costs of these therapies to support health policy. It is now accepted that treatment of active CHB is cost-effective versus no treatment, although there are a variety of options. With new developments in CHC treatment and diagnostic tools, both clinicians and policy makers need to know the costs and effects of these choices. The purpose of this article is to provide the reader with an insight into recent treatment developments and cost-effectiveness issues related to chronic hepatitis B and C treatment, and an overview of recent cost-effectiveness studies of HBV and HCV therapy. Copyright © 2013 Elsevier Ltd. All rights reserved.
Natural Acne Treatment: What's Most Effective?
... helpful in reducing acne inflammation and breakouts: Tea tree oil. Gels containing 5 percent tea tree oil may be as effective as are lotions containing 5 percent benzoyl peroxide, although tea tree oil might work more slowly. Possible side effects ...
21 CFR 1240.10 - Effective bactericidal treatment.
Code of Federal Regulations, 2010 CFR
2010-04-01
... owner or operator of a conveyance, to be effective to prevent the spread of communicable disease. [40 FR... DISEASES General Provisions § 1240.10 Effective bactericidal treatment. Whenever, under the provisions of...
21 CFR 1240.10 - Effective bactericidal treatment.
Code of Federal Regulations, 2011 CFR
2011-04-01
... owner or operator of a conveyance, to be effective to prevent the spread of communicable disease. ... DISEASES General Provisions § 1240.10 Effective bactericidal treatment. Whenever, under the provisions of...
Moore, Eva; Wisniewski, Amy; Dobs, Adrian
2003-08-01
Cross-sex hormone treatment is an important component in medical treatment of transsexual people. Endocrinologists are often faced with designing treatment recommendations. Although guidelines from organizations, such as the Harry Benjamin International Gender Dysphoria Association, have been helpful, management remains complex and experience guided. We discuss the range of treatment used by transsexual people, the rationale behind these, and the expectation from such treatment. Recommendations from seven clinical research centers treating transsexual people are discussed. In addition, self-reported hormonal regimens from 25 male-to-female transsexual people and five female-to-male transsexual people are reported. Finally, the potential adverse effects of cross-sex hormone treatment of transsexual people are reviewed. In light of the complexity of managing treatment goals and adverse effects, the active involvement of a medical doctor experienced in cross-sex hormonal therapy is vital to ensure the safety of transsexual people.
Treatment effect on biases in size estimation in spider phobia.
Shiban, Youssef; Fruth, Martina B; Pauli, Paul; Kinateder, Max; Reichenberger, Jonas; Mühlberger, Andreas
2016-12-01
The current study investigates biases in size estimations made by spider-phobic and healthy participants before and after treatment. Forty-one spider-phobic and 20 healthy participants received virtual reality (VR) exposure treatment and were then asked to rate the size of a real spider immediately before and, on average, 15 days after the treatment. During the VR exposure treatment skin conductance response was assessed. Prior to the treatment, both groups tended to overestimate the size of the spider, but this size estimation bias was significantly larger in the phobic group than in the control group. The VR exposure treatment reduced this bias, which was reflected in a significantly smaller size rating post treatment. However, the size estimation bias was unrelated to the skin conductance response. Our results confirm the hypothesis that size estimation by spider-phobic patients is biased. This bias is not stable over time and can be decreased with adequate treatment. Copyright © 2016 Elsevier B.V. All rights reserved.
The average Indian female nose.
Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh
2011-12-01
This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.
Frequency of streamflow measurements required to determine forest treatment effects
Kenneth G. Reinhart
1964-01-01
Most of the stream-discharge records for our experimental watersheds are taken by continuous measurements. But the question arises: are continuous measurements necessary to determine effects of forest treatments? Or could treatment effects be determined by measurement of discharge at intervals, say, once a day or once a week?
Adverse effects of orthodontic treatment: A clinical perspective
Talic, Nabeel F.
2011-01-01
Orthodontic treatment is associated with a number of adverse effects, such as root resorption, pain, pulpal changes, periodontal disease, and temporomandibular dysfunction (TMD). Orthodontists should be aware of these effects and the associated risk factors. Risk factors linked to root resorption include treatment duration, root length and shape, trauma history, habits, and genetic predisposition. PMID:24151415
Side effects as influencers of treatment outcome.
Sharif, Zafar
2008-01-01
Research relative to the efficacy of a therapeutic agent commands a clinician's greatest interest, but treatment decisions are made based on optimizing efficacy and tolerability/safety considerations. Second-generation atypical antipsychotic drugs are a study in the importance of taking a careful look at the full benefit-risk profile of each drug. The disorders that atypical antipsychotics are approved to treat--schizophrenia, schizoaffective disorder, and bipolar disorder--are associated with an increased rate of certain medical comorbidities compared to the general population. Between-drug differences in efficacy are relatively modest for the atypicals, or between atypicals and conventionals, while differences in safety and tolerability are larger and more clinically relevant. The current article will provide a brief summary of safety-related issues that influence treatment outcome and choice of drug.
Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.
Ma, Yunbei; Zhou, Xiao-Hua
2017-02-01
For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and for the difference between the covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and for the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate the finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed method in a real-world data set.
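The idea of a covariate-specific treatment effect curve with resampling-based intervals can be sketched in a simplified setting. The code below is my own illustration with a continuous outcome rather than the paper's time-to-event data, and uses a crude box-kernel smoother and a naive bootstrap, not the paper's estimators:

```python
# Hedged sketch (continuous outcome, box kernel, naive bootstrap; the paper's
# time-to-event methodology is not reproduced here): estimate the treatment
# effect as a function of a biomarker and attach pointwise 95% intervals.
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(0, 1, n)                       # biomarker
z = rng.integers(0, 2, n)                      # randomized treatment arm
y = 2 * x * z + x + rng.normal(0, 0.5, n)      # effect grows with the biomarker

grid = np.linspace(0.1, 0.9, 9)

def effect_curve(x, z, y, grid, h=0.1):
    """Local treatment-minus-control mean difference at each grid point."""
    out = []
    for g in grid:
        w = np.abs(x - g) < h                  # simple box kernel window
        out.append(y[w & (z == 1)].mean() - y[w & (z == 0)].mean())
    return np.array(out)

est = effect_curve(x, z, y, grid)
boot = np.array([
    effect_curve(x[i], z[i], y[i], grid)       # resample patients with replacement
    for i in (rng.integers(0, n, n) for _ in range(200))
])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)   # pointwise 95% intervals
print(est.round(2))
```

Simultaneous confidence bands over the whole biomarker interval, as the paper proposes, require widening these pointwise intervals, for example by calibrating the supremum of the resampled deviations.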
The effect of amblyopia treatment on stereoacuity.
Stewart, Catherine E; Wallace, Michael P; Stephens, David A; Fielder, Alistair R; Moseley, Merrick J
2013-04-01
To explore how stereoacuity changes in patients while they are being treated for amblyopia. The Monitored Occlusion Treatment for Amblyopia Study (MOTAS) comprised 3 distinct phases. In the first phase, baseline, assessments of visual function were made to confirm the initial visual and binocular visual deficit. The second phase, refractive adaptation, now commonly termed "optical treatment," was an 18-week period of spectacle wear with measurements of logMAR visual acuity and stereoacuity with the Frisby test at weeks 0, 6, 12, and 18. In the third phase, occlusion, participants were prescribed 6 hours of patching per day. A total of 85 children were enrolled (mean age, 5.1 ± 1.5 years). In 21 children amblyopia was associated with anisometropia; in 29, with strabismus; and in 35, with both. At study entry, poor stereoacuity was associated with poor visual acuity (P < 0.001) in the amblyopic eye and greater angle of strabismus (P < 0.001). Of 66 participants, 25 (38%) who received refractive adaptation and 19 (29%) who received occlusion improved by at least one octave in stereoacuity, exceeding test-retest variability. Overall, 38 (45%) improved one or more octaves across both treatment phases. Unmeasurable stereoacuity was observed in 56 participants (66%) at study entry and in 37 (43%) at study exit. Stereoacuity improved for almost one half of the study participants. Improvement was observed in both treatment phases. Factors associated with poor or nil stereoacuity at study entry and exit were poor visual acuity of the amblyopic eye and large-angle strabismus. Copyright © 2013 American Association for Pediatric Ophthalmology and Strabismus. Published by Mosby, Inc. All rights reserved.
Invisalign: current guidelines for effective treatment.
Kuncio, Daniel A
2014-03-01
Invisalign is an increasingly popular technique for aligning teeth and correcting malocclusions orthodontically. This article analyzes the current professional literature published on Invisalign and the benefits and risks of using the technique for both patients and doctors. The steady increase in the number of cases treated with Invisalign and where the technique is going in the future is investigated. Ten guidelines for Invisalign treatment and patient selection are given, along with case examples.
Maximizing cost-effectiveness by adjusting treatment strategy according to glaucoma severity.
Guedes, Ricardo Augusto Paletta; Guedes, Vanessa Maria Paletta; Gomes, Carlos Eduardo de Mello; Chaoubah, Alfredo
2016-12-01
The aim of this study is to determine the most cost-effective strategy for the treatment of primary open-angle glaucoma (POAG) in Brazil, from the payer's perspective (Brazilian Public Health System) in the setting of the Glaucoma Referral Centers. The study design was a cost-effectiveness analysis of different treatment strategies for POAG. We developed 3 Markov models (one for each glaucoma stage: early, moderate and advanced), using a hypothetical cohort of POAG patients, from the perspective of the Brazilian Public Health System (SUS) and a horizon of the average life expectancy of the Brazilian population. Different strategies were tested according to disease severity. For early glaucoma, we compared observation, laser and medications. For moderate glaucoma, medications, laser and surgery. For advanced glaucoma, medications and surgery. Main outcome measures were the ICER (incremental cost-effectiveness ratio), direct medical costs and QALYs (quality-adjusted life years). In early glaucoma, both laser and medical treatment were cost-effective (the ICERs of initial laser and initial medical treatment over observation only were R$ 2,811.39/QALY and R$ 3,450.47/QALY). Compared to the observation strategy, the two alternatives provided significant gains in quality of life. In the moderate glaucoma population, medical treatment presented the highest costs among the treatment strategies. Both laser and surgery were highly cost-effective in this group. For advanced glaucoma, both tested strategies were cost-effective. Starting age had a great impact on results in all studied groups. Initiating glaucoma therapy with laser or surgery was more cost-effective the younger the patient was. All tested treatment strategies for glaucoma provided real gains in quality of life and were cost-effective. However, according to disease severity, not all strategies provided the same cost-effectiveness profile. Based on our findings, there should be a preferred strategy for each glaucoma stage.
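The ICER used as the main outcome measure above is a simple ratio: the extra cost of one strategy over a reference, divided by the extra QALYs it yields. A minimal sketch follows; the numbers in the usage example are hypothetical, not the study's figures.

```python
def icer(cost_new: float, cost_ref: float,
         qaly_new: float, qaly_ref: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY
    of the new strategy relative to the reference strategy."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical illustration: a strategy costing R$ 1,200 more
# that yields 0.4 extra QALYs over the reference
print(round(icer(4200.0, 3000.0, 10.4, 10.0), 2))  # → 3000.0 (R$ per QALY)
```

A strategy is then judged cost-effective when its ICER falls below the payer's willingness-to-pay threshold per QALY; comparing ICERs across the three disease-stage models is what lets the authors recommend a preferred strategy per stage.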
Maximizing cost-effectiveness by adjusting treatment strategy according to glaucoma severity
Guedes, Ricardo Augusto Paletta; Guedes, Vanessa Maria Paletta; Gomes, Carlos Eduardo de Mello; Chaoubah, Alfredo
2016-01-01
Background: The aim of this study is to determine the most cost-effective strategy for the treatment of primary open-angle glaucoma (POAG) in Brazil, from the payer's perspective (Brazilian Public Health System) in the setting of the Glaucoma Referral Centers. Methods: The study design was a cost-effectiveness analysis of different treatment strategies for POAG. We developed 3 Markov models (one for each glaucoma stage: early, moderate and advanced), using a hypothetical cohort of POAG patients, from the perspective of the Brazilian Public Health System (SUS) and a horizon of the average life expectancy of the Brazilian population. Different strategies were tested according to disease severity. For early glaucoma, we compared observation, laser and medications. For moderate glaucoma, medications, laser and surgery. For advanced glaucoma, medications and surgery. Main outcome measures were the ICER (incremental cost-effectiveness ratio), direct medical costs and QALYs (quality-adjusted life years). Results: In early glaucoma, both laser and medical treatment were cost-effective (the ICERs of initial laser and initial medical treatment over observation only were R$ 2,811.39/QALY and R$ 3,450.47/QALY). Compared to the observation strategy, the two alternatives provided significant gains in quality of life. In the moderate glaucoma population, medical treatment presented the highest costs among the treatment strategies. Both laser and surgery were highly cost-effective in this group. For advanced glaucoma, both tested strategies were cost-effective. Starting age had a great impact on results in all studied groups. Initiating glaucoma therapy with laser or surgery was more cost-effective the younger the patient was. Conclusion: All tested treatment strategies for glaucoma provided real gains in quality of life and were cost-effective. However, according to disease severity, not all strategies provided the same cost-effectiveness profile. Based on our findings, there should be a preferred strategy for each glaucoma stage.
Effect of malignant disease and treatments on oral structures.
Seymour, Robin A; Walton, Graham
2009-12-01
There has been an increase in the diagnosis and treatment options for malignant diseases. In this article we provide an overview of the impact of the treatments of malignant diseases on the oral structures. Whilst some of the complications, such as oral mucositis and oral infection, are of short duration and respond once chemotherapy has been completed, other treatments have a prolonged effect. Of particular concern is the effect of bisphosphonates on bone turnover and the risk of osteonecrosis on the jaw and hormones affecting the periodontal tissues. These unwanted effects all impact upon the quality of life of many patients diagnosed with malignant disease. Treatments of malignant diseases can have a profound effect on oral structures and functions. All members of the dental team need to be aware of adverse effects arising from such treatments and how they can affect oral function and quality of life.
Duration effects in contingency management treatment of methamphetamine disorders.
Roll, John M; Chudzynski, Joy; Cameron, Jennifer M; Howell, Donelle N; McPherson, Sterling
2013-09-01
The primary aim of this study was to determine whether different durations of contingency management (CM) in conjunction with psychosocial treatment produced different rates of abstinence among methamphetamine dependent individuals. Participants were randomized to one of the four 16-week treatment conditions: standard psychosocial treatment or psychosocial treatment plus one of the three durations of CM (one-month, two-month, or four-month). A total of 118 participants were randomized to the four treatment conditions. There were significant differences across treatment conditions for number of consecutive days of methamphetamine abstinence (p<0.05). These differences were in the hypothesized direction, as participants were more likely to remain abstinent through the 16-week trial as CM duration increased. A significant effect of treatment condition (p<0.05) and time (p<0.05) on abstinence over time was also found. Longer durations of CM were more effective for maintaining methamphetamine abstinence.
Asymmetric inhibitory treatment effects in multilingual aphasia.
Goral, Mira; Naghibolhosseini, Maryam; Conner, Peggy S
2013-01-01
Findings from recent psycholinguistic studies of bilingual processing support the hypothesis that both languages of a bilingual are always active and that bilinguals continually engage in processes of language selection. This view aligns with the convergence hypothesis of bilingual language representation. Furthermore, it is hypothesized that when bilinguals perform a task in one language they need to inhibit their other, nontarget language(s) and that stronger inhibition is required when the task is performed in the weaker language than in the stronger one. The study of multilingual individuals who acquire aphasia resulting from a focal brain lesion offers a unique opportunity to test the convergence hypothesis and the inhibition asymmetry. We report on a trilingual person with chronic nonfluent aphasia who at the time of testing demonstrated greater impairment in her first acquired language (Persian) than in her third, later learned language (English). She received treatment in English followed by treatment in Persian. An examination of her connected language production revealed improvement in her grammatical skills in each language following intervention in that language, but decreased grammatical accuracy in English following treatment in Persian. The increased error rate was evident in structures that are used differently in the two languages (e.g., auxiliary verbs). The results support the prediction that greater inhibition is applied to the stronger language than to the weaker language, regardless of their age of acquisition. We interpret the findings as consistent with convergence theories that posit overlapping neuronal representation and simultaneous activation of multiple languages and with proficiency-dependent asymmetric inhibition in multilinguals.
Intralesional Cryotherapy for the Treatment of Keloid Scars: Evaluating Effectiveness
Bulstra, Anne Eva J.; Ket, Johannes C. F.; Ritt, Marco J. P. F.; van Leeuwen, Paul A. M.; Niessen, Frank B.
2015-01-01
Background: Intralesional (IL) cryotherapy is a novel treatment technique for keloid scars, in which the scar is frozen from inside. Over the past decade, several studies have been published with varying outcomes. A critical analysis of the current literature is, therefore, warranted to determine whether IL cryotherapy is an alternative to established keloid scar treatments. Methods: A comprehensive review was performed, based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses. PubMed and EMBASE were searched from inception. Studies and levels of recommendation were graded according to the American Society of Plastic Surgeons criteria. Results: Eight studies meeting the inclusion criteria were selected. The average scar volume decrease ranged from 51% to 63%, but no complete scar eradication was achieved on average. Scar recurrence ranged from 0% to 24%. Posttreatment hypopigmentation was seen mostly in patients with Fitzpatrick 4–6 skin types. Finally, complaints of pain and pruritus decreased significantly in most studies. Conclusions: IL cryotherapy for the treatment of keloid scars shows favorable results in terms of volume reduction and alleviated complaints of pain and pruritus. However, no complete scar eradication is established, and recurrences are seen. Also, persistent hypopigmentation proved a problem in patients with Fitzpatrick 4–6 skin types. In summary, the evidence proved limited and inconsistent, resulting in an American Society of Plastic Surgeons grade C recommendation for this type of treatment of keloid scars. PMID:26180738
Evaluating the effect of synchronized sea lice treatments in Chile.
Arriagada, G; Stryhn, H; Sanchez, J; Vanderstichel, R; Campistó, J L; Rees, E E; Ibarra, R; St-Hilaire, S
2017-01-01
The sea louse is considered an important ectoparasite affecting farmed salmonids around the world. Sea lice control relies heavily on pharmacological treatments in several salmon-producing countries, including Chile. Among the options for drug administration, immersion treatments represent the majority of antiparasitic control strategies used in Chile. As a topical procedure, immersion treatments do not induce a long-lasting effect; therefore, re-infestation from neighbouring farms may undermine their efficacy. Synchronization of treatments has been proposed as a strategy to improve immersion treatment performance, but it has not been evaluated so far. Using a repeated-measures linear mixed-effect model, we evaluated the impact of treatment synchronization among neighbouring farms (within 10 km seaway distance) on the mean abundance of adult lice from weeks 2 to 8 post-treatment on rainbow trout and Atlantic salmon farms in Chile, while controlling for external and internal sources of lice before the treatments, and also for environmental and fish-related variables. Results indicate that treatment synchronization was significantly associated with lower adult lice levels from weeks 5 to 7 after treatment. This relationship appeared to be linear, suggesting that higher levels of synchronization may result in lower adult sea lice levels during these weeks. These findings suggest that synchronization can improve the performance of immersion delousing treatments by keeping sea lice levels low for a longer period of time. Our results may be applicable to other regions of the world where immersion treatments are widely used.
Revisiting the effectiveness of methadone treatment on crime reductions in the 1990s.
Rothbard, A; Alterman, A; Rutherford, M; Liu, F; Zelinski, S; McKay, J
1999-06-01
This study examines the relationship between methadone treatment and the criminal activity of 126 individuals participating in treatment during the early 1990s. The primary question addressed is to what extent methadone maintenance treatment is associated with reductions in crime. Although prior studies in the 1970s and early 1980s showed significant decreases in crime for individuals in treatment programs, criteria for remaining in this treatment modality have changed in recent years, particularly with the advent of acquired immune deficiency syndrome and the need to reduce intravenous drug use. A pre-post study design is employed, spanning a 6-year period of subject recruitment and follow-up (1987-1993). Uniform administrative records on arrests are used for the analyses. A multiple regression model is employed to explain the variance in the number of arrests 2 years following program admission, with prior criminal history, prior and current drug treatment, and current cocaine use as explanatory variables. Results indicate that treatment retention has only a slight, though significant, effect on reducing criminal activity during treatment. Two other factors that appear to increase arrest activity are cocaine use and prior criminal history. The fact that arrests did not decrease during a treatment period of 18 months on average requires more investigation in light of the increase in cocaine use in this population.
Diane M. Gercke; Susan A. Stewart
2006-01-01
In 2005, eight U.S. Forest Service and Bureau of Land Management interdisciplinary teams participated in a test of strategic placement of treatments (SPOTS) techniques to maximize the effectiveness of fuel treatments in reducing problem fire behavior, adverse fire effects, and suppression costs. This interagency approach to standardizing the assessment of risks and...
Continuum treatment of electronic polarization effect
NASA Astrophysics Data System (ADS)
Tan, Yu-Hong; Luo, Ray
2007-03-01
A continuum treatment of electronic polarization has been explored for molecular mechanics simulations in implicit solvents. The dielectric constant of the molecular interior is the only parameter in the continuum polarizable model. A value of 4 is found to yield optimal agreement with high-level ab initio quantum mechanical calculations for the tested molecular systems. Interestingly, its performance is not sensitive to the definition of the molecular volume in which the continuum electronic polarization is defined. In this model, quantum mechanical electrostatic fields in different dielectric environments (vacuum, low-dielectric organic solvent, and water) can be used simultaneously in atomic charge fitting to achieve a consistent treatment of electrostatic interactions. The tests show that a single set of atomic charges can be used consistently in different dielectric environments and different molecular conformations, and that the atomic charges transfer well from training monomers to tested dimers. This preliminary study gives us hope of developing a continuum polarizable force field for more consistent simulations of proteins and nucleic acids in implicit solvents.
Enhancing treatment effectiveness through social modelling: A pilot study.
Faasse, Kate; Perera, Anna; Loveys, Kate; Grey, Andrew; Petrie, Keith J
2017-05-01
Medical treatments take place in social contexts; however, little research has investigated how social modelling might influence treatment outcomes. This experimental pilot study investigated social modelling of treatment effectiveness and placebo treatment outcomes. Fifty-nine participants took part in the study, ostensibly examining the use of beta-blockers (actually placebos) for examination anxiety. Participants were randomly assigned to observe a female confederate report positive treatment effects (reduced heart rate, relaxed, calm) or feeling no different. Heart rate, anxiety and blood pressure were assessed, as were symptoms and attributed side effects. Heart rate decreased significantly more in the social modelling compared to control condition, p = .027 (d = .63), and there were trends towards effects in the same direction for both anxiety, p = .097 (d = .46), and systolic blood pressure, p = .077 (d = .51). Significant pre-post placebo differences in heart rate, anxiety and diastolic blood pressure were found in the social modelling group, ps < .007 (ds = .77-1.37), but not the control condition, ps > .28 (ds = .09-.59). Social observation of medication effectiveness enhanced placebo effectiveness in heart rate, and showed a trend towards enhancing treatment effectiveness in both anxiety and systolic blood pressure. Social modelling may have utility in enhancing the effectiveness of many active medical treatments.
THE EFFECTIVENESS OF COMPULSORY DRUG TREATMENT: A SYSTEMATIC REVIEW
Werb, D; Kamarulzaman, A; Meacham, MC; Rafful, C; Fisher, B; Strathdee, SA; Wood, E
2016-01-01
Background: Despite widespread implementation of compulsory treatment modalities for drug dependence, there has been no systematic evaluation of the scientific evidence on the effectiveness of compulsory drug treatment. Methods: We conducted a systematic review of studies assessing the outcomes of compulsory treatment. We conducted a search in duplicate of all relevant peer-reviewed scientific literature evaluating compulsory treatment modalities. The following academic databases were searched: PubMed, PAIS International, Proquest, PsycINFO, Web of Science, Soc Abstracts, JSTOR, EBSCO/Academic Search Complete, REDALYC, SciELO Brazil. We also searched the Internet, and article reference lists, from database inception to July 15th, 2015. Eligibility criteria were as follows: peer-reviewed scientific studies presenting original data. The primary outcome of interest was post-treatment drug use; the secondary outcome of interest was post-treatment criminal recidivism. Results: Of an initial 430 potential studies identified, nine quantitative studies met the inclusion criteria. Studies evaluated compulsory treatment options including drug detention facilities, short (i.e., 21-day) and long-term (i.e., 6-month) inpatient treatment, community-based treatment, group-based outpatient treatment, and prison-based treatment. Three studies (33%) reported no significant impacts of compulsory treatment compared with control interventions. Two studies (22%) found equivocal results but did not compare against a control condition. Two studies (22%) observed negative impacts of compulsory treatment on criminal recidivism. Two studies (22%) observed positive impacts of compulsory inpatient treatment on criminal recidivism and drug use. Conclusion: There is limited scientific literature evaluating compulsory drug treatment. Evidence does not, on the whole, suggest improved outcomes related to compulsory treatment approaches, with some studies suggesting potential harms. Given the potential for human
The effectiveness of compulsory drug treatment: A systematic review.
Werb, D; Kamarulzaman, A; Meacham, M C; Rafful, C; Fischer, B; Strathdee, S A; Wood, E
2016-02-01
Despite widespread implementation of compulsory treatment modalities for drug dependence, there has been no systematic evaluation of the scientific evidence on the effectiveness of compulsory drug treatment. We conducted a systematic review of studies assessing the outcomes of compulsory treatment. We conducted a search in duplicate of all relevant peer-reviewed scientific literature evaluating compulsory treatment modalities. The following academic databases were searched: PubMed, PAIS International, Proquest, PsycINFO, Web of Science, Soc Abstracts, JSTOR, EBSCO/Academic Search Complete, REDALYC, SciELO Brazil. We also searched the Internet, and article reference lists, from database inception to July 15th, 2015. Eligibility criteria are as follows: peer-reviewed scientific studies presenting original data. Primary outcome of interest was post-treatment drug use. Secondary outcome of interest was post-treatment criminal recidivism. Of an initial 430 potential studies identified, nine quantitative studies met the inclusion criteria. Studies evaluated compulsory treatment options including drug detention facilities, short (i.e., 21-day) and long-term (i.e., 6 months) inpatient treatment, community-based treatment, group-based outpatient treatment, and prison-based treatment. Three studies (33%) reported no significant impacts of compulsory treatment compared with control interventions. Two studies (22%) found equivocal results but did not compare against a control condition. Two studies (22%) observed negative impacts of compulsory treatment on criminal recidivism. Two studies (22%) observed positive impacts of compulsory inpatient treatment on criminal recidivism and drug use. There is limited scientific literature evaluating compulsory drug treatment. Evidence does not, on the whole, suggest improved outcomes related to compulsory treatment approaches, with some studies suggesting potential harms. Given the potential for human rights abuses within compulsory
Common Language Effect Size for Multiple Treatment Comparisons
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven
2015-01-01
Researchers who need to explain treatment effects to laypeople can translate Cohen's effect size (standardized mean difference) to a common language effect size--a probability of a random observation from one population being larger than a random observation from the other population. This common language effect size can be extended to represent…
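Under the usual equal-variance normal model, the translation this abstract describes has a closed form: the common language effect size is P(X > Y) = Φ(d/√2), where d is Cohen's standardized mean difference and Φ is the standard normal CDF. A minimal sketch of that standard conversion follows (the formula is textbook material, not taken from the article itself):

```python
from math import sqrt
from statistics import NormalDist

def common_language_effect_size(d: float) -> float:
    """Probability that a random observation from one normal population
    exceeds a random observation from the other, given Cohen's d
    (assumes two normal populations with equal variances)."""
    return NormalDist().cdf(d / sqrt(2))

# A medium effect (d = 0.5) corresponds to roughly a 64% chance
print(round(common_language_effect_size(0.5), 3))  # → 0.638
```

So instead of reporting "d = 0.5," one can tell a lay audience that a randomly chosen member of the treated group outperforms a randomly chosen member of the control group about 64% of the time.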
De Clerck, H. J.; Cevidanes, L. H.; Franchi, L.
2011-01-01
The aim of the present morphometric investigation was to evaluate the effects of bone-anchored maxillary protraction (BAMP) in the treatment of growing patients with Class III malocclusion. The shape and size changes in the craniofacial configuration of a sample of 26 children with Class III malocclusions consecutively treated with the BAMP protocol were compared with those of a matched sample of 15 children with untreated Class III malocclusions. All subjects in the two groups were at a prepubertal stage of skeletal development at the time of first observation. The average duration of treatment was 14 months. Significant treatment-induced modifications involved both the maxilla and the mandible. The most evident deformation consisted of marked forward displacement of the maxillary complex, with more moderate favourable effects in the mandible. Deformations in the vertical dimension were not detected. The significant deformations were associated with significant differences in size in the group treated with the BAMP protocol. PMID:21187527
Psychopharmacologic Treatment: A Note on Classroom Effects.
ERIC Educational Resources Information Center
Forness, Steven R.; Kavale, Kenneth A.
1988-01-01
Intended for teachers, the article provides an introduction to the four major classes of psychotropic medication (stimulants, tranquilizers, anticonvulsants, and antidepressants) commonly prescribed for children with learning or behavioral disorders. Specific effects on the classroom are addressed. (DB)
Hodgkin lymphoma: Late effects of treatment and guidelines for surveillance.
Ng, Andrea K; van Leeuwen, Flora E
2016-07-01
Long-term survivors of Hodgkin lymphoma (HL) are at risk for a range of late effects, with second malignant neoplasms and cardiovascular diseases being the leading causes of death in these patients. The excess risks remain significantly elevated decades after treatment, and are clearly associated with the extent of treatment exposures. Other late effects have also been identified, such as pulmonary dysfunction, endocrinopathies, muscle atrophy, and persistent fatigue. Systematic documentation of late effects and recognition of treatment- and patient-related risk factors are important, as they inform optimal surveillance and risk-reduction strategies, as well as guide therapeutic modifications in newly diagnosed patients to minimize treatment-related complications. As HL therapy evolves over time, with the adoption of novel agents and contemporary treatment techniques, late effect risks and follow-up recommendations need to be continuously updated. Copyright © 2016 Elsevier Inc. All rights reserved.
Aptitude-treatment interaction effects in psychooncological interventions.
Stulz, Niklaus; Künzler, Alfred; Barth, Jürgen; Hepp, Urs
2014-01-01
To examine aptitude-treatment interaction (ATI) effects in cancer patients receiving psychooncological interventions (POIs). N=36 cancer patients were treated with POI. Hierarchical linear regression was used to test two interaction effects between patient baseline characteristics (aptitudes) and process analyses of therapy sessions (treatment) on change in mental health during POI. Patients with high emotional distress did best when their therapy reduced arousal, and patients with lower emotional distress benefited most if therapists emphasized arousal induction. The interaction between the coping style of the patient (internalizing vs. externalizing) and the focus of the treatment (emotion vs. behavior) did not predict POI outcomes. The ATI effect of patient's distress and therapist's arousal induction/reduction may help therapists to make differential treatment decisions in POI. Tailoring treatments to cancer patients based on their personal characteristics may enhance the effectiveness of POI. © 2014.
Sample Size Bias in Judgments of Perceptual Averages
ERIC Educational Resources Information Center
Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.
2014-01-01
Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…