Park, Jihoon; Yoon, Chungsik; Lee, Kiyoung
2018-05-30
In the field of exposure science, various exposure assessment models have been developed to complement experimental measurements; however, few studies have been published on their validity. This study compares the estimated inhaled aerosol doses of several inhalation exposure models to experimental measurements of aerosols released from consumer spray products, and then compares deposited doses within different parts of the human respiratory tract according to deposition models. Exposure models, including the European Centre for Ecotoxicology and Toxicology of Chemicals Targeted Risk Assessment (ECETOC TRA), the Consumer Exposure Model (CEM), SprayExpo, ConsExpo Web and ConsExpo Nano, were used to estimate the inhaled dose under various exposure scenarios, and modeled and experimental estimates were compared. The deposited dose in different respiratory regions was estimated using the International Commission on Radiological Protection model and multiple-path particle dosimetry models under the assumption of polydispersed particles. The modeled estimates of the inhaled doses were accurate in the short term, i.e., within 10 min of the initial spraying, with differences from experimental estimates ranging from 0 to 73% among the models. However, the estimates for long-term exposure, i.e., exposure times of several hours, deviated significantly from the experimental estimates in the absence of ventilation. The differences between the experimental and modeled estimates of particle number and surface area were constant over time under ventilated conditions. ConsExpo Nano, as a nano-scale model, showed stable estimates of short-term exposure, with a difference from the experimental estimates of less than 60% for all metrics. The deposited particle estimates were similar among the deposition models, particularly in the nanoparticle range for the head airway and alveolar regions.
In conclusion, the results showed that the inhalation exposure models tested in this study are suitable for estimating short-term aerosol exposure (within half an hour), but not for estimating long-term exposure. Copyright © 2018 Elsevier GmbH. All rights reserved.
Use of Internal Consistency Coefficients for Estimating Reliability of Experimental Tasks Scores
Green, Samuel B.; Yang, Yanyun; Alt, Mary; Brinkley, Shara; Gray, Shelley; Hogan, Tiffany; Cowan, Nelson
2017-01-01
Reliabilities of scores for experimental tasks are likely to differ from one study to another to the extent that the task stimuli change, the number of trials varies, the type of individuals taking the task changes, the administration conditions are altered, or the focal task variable differs. Given that reliabilities vary as a function of the design of these tasks and the characteristics of the individuals taking them, making inferences about the reliability of scores in an ongoing study based on reliability estimates from prior studies is precarious. Thus, it would be advantageous to estimate reliability based on data from the ongoing study. We argue that internal consistency estimates of reliability are underutilized for experimental task data and in many applications could provide this information using a single administration of a task. We discuss different methods for computing internal consistency estimates with a generalized coefficient alpha and the conditions under which these estimates are accurate. We illustrate use of these coefficients using data for three different tasks. PMID:26546100
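In the simplest case, the generalized coefficient alpha the authors discuss reduces to the familiar coefficient alpha computed from a single administration of a multi-trial task. A minimal sketch (the simulated 200-subject, 8-trial data set and all parameter values are illustrative assumptions, not data from the study):

```python
import numpy as np

def coefficient_alpha(scores):
    """Coefficient (Cronbach's) alpha for an n_subjects x n_items score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # per-trial variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated task: 200 subjects, 8 trials, scores driven by a common ability
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
trials = ability + rng.normal(scale=1.0, size=(200, 8))
alpha = coefficient_alpha(trials)
```

With equal signal and noise variance per trial, the expected alpha for 8 trials is about 0.89; the point of the paper is that this kind of estimate can be computed from the ongoing study's own data rather than borrowed from prior studies.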
The effects of survey question wording on rape estimates: evidence from a quasi-experimental design.
Fisher, Bonnie S
2009-02-01
The measurement of rape is among the leading methodological issues in the violence against women field. Methodological discussion continues to focus on decreasing measurement errors and improving the accuracy of rape estimates. The current study used a quasi-experimental design to examine the effect of survey question wording on estimates of completed and attempted rape and verbal threats of rape. Specifically, the study statistically compares self-reported rape estimates from two nationally representative studies of college women's sexual victimization experiences, the National College Women Sexual Victimization study and the National Violence Against College Women study. Results show significant differences between the two sets of rape estimates, with National Violence Against College Women study rape estimates ranging from 4.4% to 10.4% lower than the National College Women Sexual Victimization study rape estimates. Implications for future methodological research are discussed.
Replicating Experimental Impact Estimates Using a Regression Discontinuity Approach. NCEE 2012-4025
ERIC Educational Resources Information Center
Gleason, Philip M.; Resch, Alexandra M.; Berk, Jillian A.
2012-01-01
This NCEE Technical Methods Paper compares the estimated impacts of an educational intervention using experimental and regression discontinuity (RD) study designs. The analysis used data from two large-scale randomized controlled trials--the Education Technology Evaluation and the Teach for America Study--to provide evidence on the performance of…
It Pays to Compare: An Experimental Study on Computational Estimation
ERIC Educational Resources Information Center
Star, Jon R.; Rittle-Johnson, Bethany
2009-01-01
Comparing and contrasting examples is a core cognitive process that supports learning in children and adults across a variety of topics. In this experimental study, we evaluated the benefits of supporting comparison in a classroom context for children learning about computational estimation. Fifth- and sixth-grade students (N = 157) learned about…
Trade-offs in experimental designs for estimating post-release mortality in containment studies
Rogers, Mark W.; Barbour, Andrew B; Wilson, Kyle L
2014-01-01
Estimates of post-release mortality (PRM) facilitate accounting for unintended deaths from fishery activities and contribute to development of fishery regulations and harvest quotas. The most popular method for estimating PRM employs containers for comparing control and treatment fish, yet guidance for experimental design of PRM studies with containers is lacking. We used simulations to evaluate trade-offs in the number of containers (replicates) employed versus the number of fish per container when estimating tagging mortality. We also investigated effects of control fish survival and how among-container variation in survival affects the ability to detect additive mortality. Simulations revealed that high experimental effort was required when: (1) additive treatment mortality was small, (2) control fish mortality was non-negligible, and (3) among-container variability in control fish mortality exceeded 10% of the mean. We provide programming code to allow investigators to compare alternative designs for their individual scenarios and expose trade-offs among experimental design options. Results from our simulations and simulation code will help investigators develop efficient PRM experimental designs for precise mortality assessment.
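The replicate-versus-fish-per-container trade-off described above can be illustrated with a small Monte Carlo sketch (the survival rates, variability level, and t-test comparison are assumptions for illustration, not the authors' simulation code):

```python
import numpy as np
from scipy import stats

def containment_power(n_containers, fish_per, p_ctrl=0.90, added_mort=0.15,
                      cv=0.10, n_sims=1500, alpha=0.05, seed=1):
    """Simulated power to detect additive treatment mortality when
    container-level survival probabilities vary (sd = cv * mean)."""
    rng = np.random.default_rng(seed)
    p_trt = p_ctrl - added_mort
    rejections = 0
    for _ in range(n_sims):
        pc = np.clip(rng.normal(p_ctrl, cv * p_ctrl, n_containers), 0.01, 0.99)
        pt = np.clip(rng.normal(p_trt, cv * p_trt, n_containers), 0.01, 0.99)
        surv_c = rng.binomial(fish_per, pc) / fish_per   # container proportions
        surv_t = rng.binomial(fish_per, pt) / fish_per
        if stats.ttest_ind(surv_c, surv_t).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

# Same total fish per group (80): many small containers vs few large ones
power_many_small = containment_power(n_containers=16, fish_per=5)
power_few_large = containment_power(n_containers=4, fish_per=20)
```

Holding total fish fixed, replicating containers captures the among-container variance component directly and yields noticeably higher power than a few large containers, which is the kind of trade-off the paper's simulations quantify.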
ERIC Educational Resources Information Center
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim
2014-01-01
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
Theoretical and Experimental Estimations of Volumetric Inductive Phase Shift in Breast Cancer Tissue
NASA Astrophysics Data System (ADS)
González, C. A.; Lozano, L. M.; Uscanga, M. C.; Silva, J. G.; Polo, S. M.
2013-04-01
Impedance measurements based on magnetic induction for breast cancer detection have been proposed in several studies. This study evaluates, theoretically and experimentally, the use of a non-invasive technique based on magnetic induction for detection of patho-physiological conditions in breast cancer tissue associated with its volumetric electrical conductivity changes through inductive phase shift measurements. An induction coils-breast 3D pixel model was designed and tested. The model involves two circular coils coaxially centered and a human breast volume centrally placed with respect to the coils. A time-harmonic numerical simulation study addressed the effects of frequency-dependent electrical properties of tumoral tissue on the volumetric inductive phase shift of the breast model measured with the circular coils as inductor and sensor elements. Experimentally, five female volunteer patients with infiltrating ductal carcinoma previously diagnosed by the radiology and oncology departments of the Specialty Clinic for Women of the Mexican Army were measured with an experimental inductive spectrometer and an ergonomic inductor-sensor coil designed to estimate the volumetric inductive phase shift in human breast tissue. Theoretical and experimental inductive phase shift estimations were made at four frequencies: 0.01, 0.1, 1 and 10 MHz. The theoretical estimations were qualitatively in agreement with the experimental findings. Important increments in volumetric inductive phase shift measurements were evident at 0.01 MHz in both theoretical and experimental observations. The results suggest that the tested technique has the potential to detect pathological conditions in breast tissue associated with cancer by non-invasive monitoring. Further complementary studies are warranted to confirm these observations.
An estimating formula for ion-atom association rates in gases
NASA Technical Reports Server (NTRS)
Chatterjee, B. K.; Johnsen, R.
1990-01-01
A simple estimating formula is derived for rate coefficients of three-body ion-atom association in gases, and its predictions are compared to experimental data on ion association and three-body radiative charge transfer reactions of singly- and doubly-charged rare-gas ions. The formula appears to reproduce most experimental data quite well. It may be useful for estimating the rates of reactions that have not been studied in the laboratory.
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2016-12-01
We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype sufficiently followed a Poisson distribution. Furthermore, to examine the validity of the Poisson distribution parameters obtained from the experimental bacterial cell counts, we compared them with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters obtained experimentally from bacterial cell counts were within the range of the parameters estimated by computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
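The dilution-count check described above can be sketched as a likelihood ratio (G) test against a Poisson distribution with an MLE-fitted rate; the counts below are simulated draws, not the study's Salmonella, E. coli, or Listeria data:

```python
import numpy as np
from scipy import stats

def poisson_lr_test(counts, max_bin=5):
    """Likelihood ratio (G) test of observed cell counts against a Poisson
    distribution whose rate is the maximum likelihood estimate (the mean)."""
    counts = np.asarray(counts)
    lam = counts.mean()                     # MLE of the Poisson rate
    n = counts.size
    bins = np.arange(max_bin + 1)
    observed = np.array([(counts == b).sum() for b in bins], dtype=float)
    observed = np.append(observed, (counts > max_bin).sum())
    expected = n * np.append(stats.poisson.pmf(bins, lam),
                             stats.poisson.sf(max_bin, lam))
    mask = observed > 0                     # empty categories contribute 0
    g = 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))
    df = mask.sum() - 1 - 1                 # categories - 1 - fitted parameters
    return g, stats.chi2.sf(g, df)

# Aliquots prepared so that the mean count is ~1 cell per aliquot
rng = np.random.default_rng(42)
aliquots = rng.poisson(1.0, size=300)
g_stat, p_value = poisson_lr_test(aliquots)
```

Since the simulated aliquots really are Poisson, the test should usually fail to reject; applied to plate counts from a real dilution series, a small p-value would flag departure from the single-cell Poisson assumption.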
USDA-ARS?s Scientific Manuscript database
A number of recent soil biota studies have deviated from the standard experimental approach of generating a distinct data value for each experimental unit (e.g. Yang et al., 2013; Gundale et al., 2014). Instead, these studies have mixed together soils from multiple experimental units (i.e. sites wi...
Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.
Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf
2010-05-25
Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows us to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows us to conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation, and shows that the proposed method allows us to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.
NASA Technical Reports Server (NTRS)
Liu, G.
1985-01-01
One of the major concerns in the design of an active control system is obtaining the information needed for effective feedback. This involves the combination of sensing and estimation. A sensor location index is defined as the weighted sum of the mean square estimation errors, in which the sensor locations can be regarded as estimator design parameters. The design goal is to choose these locations to minimize the sensor location index. The choice of the number of sensors is a tradeoff between the estimation quality, based upon the same performance index, and the total costs of installing and maintaining extra sensors. An experimental study for choosing the sensor location was conducted on an aeroelastic system. The system model, which includes the unsteady aerodynamics model developed by Stephen Rock, was improved. Experimental results verify the trend of the theoretical predictions of the sensor location index for different sensor locations at various wind speeds.
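A toy version of the sensor location index (a weighted sum of steady-state mean-square estimation errors, minimized over candidate sensor locations) can be written with a discrete-time Kalman filter; the two-state system below is an illustrative assumption, not the aeroelastic system or Rock's aerodynamics model from the study:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def sensor_location_index(A, C, Q, R, W):
    """Weighted trace of the steady-state prediction-error covariance for a
    Kalman filter with measurement matrix C (one candidate sensor location)."""
    # The estimation Riccati equation is the dual of the control one:
    # substitute A -> A.T and B -> C.T in solve_discrete_are.
    P = solve_discrete_are(A.T, C.T, Q, R)
    return float(np.trace(W @ P))

# Toy 2-state stable system; candidate sensors measure state 1 or state 2
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
Q = np.diag([0.5, 0.05])        # process noise: state 1 is much noisier
R = np.array([[0.1]])           # identical sensor noise for both candidates
W = np.eye(2)                   # equal weighting of both states in the index
candidates = {
    "sensor at state 1": np.array([[1.0, 0.0]]),
    "sensor at state 2": np.array([[0.0, 1.0]]),
}
index = {name: sensor_location_index(A, C, Q, R, W)
         for name, C in candidates.items()}
best = min(index, key=index.get)
```

In this toy system the sensor placed on the noisy state gives a much smaller index, so the minimization selects it; the paper's procedure is the same idea with the index evaluated for physical sensor locations on the aeroelastic model.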
Acute and chronic environmental effects of clandestine methamphetamine waste.
Kates, Lisa N; Knapp, Charles W; Keenan, Helen E
2014-09-15
The illicit manufacture of methamphetamine (MAP) produces substantial amounts of hazardous waste that is dumped illegally. This study presents the first environmental evaluation of waste produced from illicit MAP manufacture. Chemical oxygen demand (COD) was measured to assess immediate oxygen depletion effects. A mixture of five waste components (10 mg/L per chemical) was found to have a COD (130 mg/L) higher than the European Union wastewater discharge regulations (125 mg/L). Two environmental partition coefficients, K(OW) and K(OC), were measured for several chemicals identified in MAP waste. Experimental values were input into a computer fugacity model (EPI Suite™) to estimate environmental fate. Experimental log K(OW) values ranged from -0.98 to 4.91, which were in accordance with computer-estimated values. Experimental K(OC) values ranged from 11 to 72, which were much lower than the default computer values. The experimental fugacity model for discharge to water estimates that waste components will remain in the water compartment for 15 to 37 days. Using a combination of laboratory experimentation and computer modelling, the environmental fate of MAP waste products was estimated. While fugacity models using experimental and computational values were very similar, default computer models should not take the place of laboratory experimentation. Copyright © 2014 Elsevier B.V. All rights reserved.
Zhu, Hong; Xu, Xiaohan; Ahn, Chul
2017-01-01
Paired experimental design is widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected. Some study units contribute complete pairs of observations, while the others contribute either pre- or post-intervention observations. Statistical inference for paired experimental design with incomplete observations of continuous outcomes has been extensively studied in the literature. However, sample size methods for such study designs are sparsely available. We derive a closed-form sample size formula based on the generalized estimating equation approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of the mixed structure of the observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible enough to accommodate different missing patterns, magnitudes of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed generalized estimating equation sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method leads to a more accurate sample size estimate compared with the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the generalized estimating equation sample size formula. A real application example is presented for illustration.
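For intuition about what the closed-form formula must reduce to under complete observations, here is a hedged sketch of the paired t-test sample size (normal approximation) and the crude missing-data adjustment the authors compare against; the effect size, correlation, and completeness values are illustrative, and this is not the paper's GEE formula:

```python
import math
from scipy.stats import norm

def paired_n(delta, sd, rho, alpha=0.05, power=0.80):
    """Pairs needed for a paired t-test (normal approximation); the variance
    of within-pair differences is 2 * sd^2 * (1 - rho)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_diff = 2.0 * sd**2 * (1.0 - rho)
    return math.ceil((z**2 * var_diff) / delta**2)

def crude_adjusted_n(delta, sd, rho, p_complete, alpha=0.05, power=0.80):
    """Crude adjustment: inflate the complete-data sample size by the
    fraction of units expected to contribute complete pairs."""
    return math.ceil(paired_n(delta, sd, rho, alpha, power) / p_complete)

n_complete = paired_n(delta=0.5, sd=1.0, rho=0.5)
n_crude = crude_adjusted_n(delta=0.5, sd=1.0, rho=0.5, p_complete=0.8)
```

The crude adjustment simply discards the information in incomplete pairs; the GEE-based formula in the paper uses that information, which is why it can justify a smaller and more accurate sample size.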
Caruso, Christina M; Martin, Ryan A; Sletvold, Nina; Morrissey, Michael B; Wade, Michael J; Augustine, Kate E; Carlson, Stephanie M; MacColl, Andrew D C; Siepielski, Adam M; Kingsolver, Joel G
2017-09-01
Although many selection estimates have been published, the environmental factors that cause selection to vary in space and time have rarely been identified. One way to identify these factors is by experimentally manipulating the environment and measuring selection in each treatment. We compiled and analyzed selection estimates from experimental studies. First, we tested whether the effect of manipulating the environment on selection gradients depends on taxon, trait type, or fitness component. We found that the effect of manipulating the environment was larger when selection was measured on life-history traits or via survival. Second, we tested two predictions about the environmental factors that cause variation in selection. We found support for the prediction that variation in selection is more likely to be caused by environmental factors that have a large effect on mean fitness but not for the prediction that variation is more likely to be caused by biotic factors. Third, we compared selection gradients from experimental and observational studies. We found that selection varied more among treatments in experimental studies than among spatial and temporal replicates in observational studies, suggesting that experimental studies can detect relationships between environmental factors and selection that would not be apparent in observational studies.
An Investigation Into the Effects of Frequency Response Function Estimators on Model Updating
NASA Astrophysics Data System (ADS)
Ratcliffe, M. J.; Lieven, N. A. J.
1999-03-01
Model updating is a very active research field, in which significant effort has been invested in recent years. Model updating methodologies are invariably successful when used on noise-free simulated data, but tend to be unpredictable when presented with real experimental data that are unavoidably corrupted with uncorrelated noise content. In the development and validation of model-updating strategies, a random zero-mean Gaussian variable is added to simulated test data to tax the updating routines more fully. This paper proposes a more sophisticated model for experimental measurement noise, and this is used in conjunction with several different frequency response function estimators, from the classical H1 and H2 to more refined estimators that purport to be unbiased. Finite-element model case studies, in conjunction with a genuine experimental test, suggest that the proposed noise model is a more realistic representation of experimental noise phenomena. The choice of estimator is shown to have a significant influence on the viability of the FRF sensitivity method. These test cases find that the use of the H2 estimator for model updating purposes is contraindicated, and that there is no advantage to be gained by using the sophisticated estimators over the classical H1 estimator.
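The classical estimators can be formed directly from Welch auto- and cross-spectra. A minimal sketch for a static-gain system with output measurement noise (the signals and the gain of 2 are illustrative assumptions):

```python
import numpy as np
from scipy.signal import csd, welch

# Broadband excitation through a known static gain, with output noise
rng = np.random.default_rng(7)
fs, n = 1024.0, 2**14
x = rng.normal(size=n)                  # measured input
y = 2.0 * x + 0.2 * rng.normal(size=n)  # measured output, true |FRF| = 2

f, Sxx = welch(x, fs=fs, nperseg=1024)       # input auto-spectrum
_, Syy = welch(y, fs=fs, nperseg=1024)       # output auto-spectrum
_, Sxy = csd(x, y, fs=fs, nperseg=1024)      # cross-spectrum x -> y

H1 = Sxy / Sxx             # unbiased under output noise (this case)
H2 = Syy / np.conj(Sxy)    # biased upward by output noise
```

Per frequency bin |H2| >= |H1| (their ratio is the reciprocal of the estimated coherence), so with output-only noise H1 tracks the true FRF while H2 is biased high, consistent with the paper's finding against H2 for updating.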
Chiang, Kuo-Szu; Bock, Clive H; Lee, I-Hsuan; El Jarroudi, Moussa; Delfosse, Philippe
2016-12-01
The effect of rater bias and assessment method on hypothesis testing was studied for representative experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed "balanced" and those with unequal numbers of replicate estimates are termed "unbalanced". The three assessment methods considered were nearest percent estimates (NPEs), an amended 10% incremental scale, and the Horsfall-Barratt (H-B) scale. Estimates of severity of Septoria leaf blotch on leaves of winter wheat were used to develop distributions for a simulation model. The experimental designs are presented here in the context of simulation experiments which consider the optimal design for the number of specimens (individual units sampled) and the number of replicate estimates per specimen for a fixed total number of observations (total sample size for the treatments being compared). The criterion used to gauge each method was the power of the hypothesis test. As expected, at a given fixed number of observations, the balanced experimental designs invariably resulted in a higher power compared with the unbalanced designs at different disease severity means, mean differences, and variances. Based on these results, with unbiased estimates using NPE, the recommended number of replicate estimates taken per specimen is 2 (from a sample of specimens of at least 30), because this conserves resources. Furthermore, for biased estimates, an apparent difference in the power of the hypothesis test was observed between assessment methods and between experimental designs. Results indicated that, regardless of experimental design or rater bias, an amended 10% incremental scale has slightly less power compared with NPEs, and that the H-B scale is more likely than the others to cause a type II error. 
These results suggest that choice of assessment method, optimizing sample number and number of replicate estimates, and using a balanced experimental design are important criteria to consider to maximize the power of hypothesis tests for comparing treatments using disease severity estimates.
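The balanced-versus-unbalanced power comparison can be sketched by simulation (normally distributed severity estimates and a two-sample t-test are simplifying assumptions here, not the paper's severity distributions or rater-bias model):

```python
import numpy as np
from scipy import stats

def ttest_power(n1, n2, mean_diff, sd, n_sims=4000, alpha=0.05, seed=3):
    """Simulated power of a two-sample t-test for a given difference in
    mean disease severity, at a fixed total number of observations."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, sd, n1)             # treatment 1 estimates
        b = rng.normal(mean_diff, sd, n2)       # treatment 2 estimates
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

# Two treatments, 30 severity estimates in total (e.g., % leaf area diseased)
balanced = ttest_power(15, 15, mean_diff=8.0, sd=10.0)
unbalanced = ttest_power(5, 25, mean_diff=8.0, sd=10.0)
```

At a fixed total sample size the standard error of the mean difference is minimized when the groups are equal, so the balanced allocation gives higher power, matching the simulation result reported above.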
It pays to compare: an experimental study on computational estimation.
Star, Jon R; Rittle-Johnson, Bethany
2009-04-01
Comparing and contrasting examples is a core cognitive process that supports learning in children and adults across a variety of topics. In this experimental study, we evaluated the benefits of supporting comparison in a classroom context for children learning about computational estimation. Fifth- and sixth-grade students (N=157) learned about estimation either by comparing alternative solution strategies or by reflecting on the strategies one at a time. At posttest and retention test, students who compared were more flexible problem solvers on a variety of measures. Comparison also supported greater conceptual knowledge, but only for students who already knew some estimation strategies. These findings indicate that comparison is an effective learning and instructional practice in a domain with multiple acceptable answers.
Longhi, Daniel Angelo; Martins, Wiaslan Figueiredo; da Silva, Nathália Buss; Carciofi, Bruno Augusto Mattar; de Aragão, Gláucia Maria Falcão; Laurindo, João Borges
2017-01-02
In predictive microbiology, model parameters have been estimated using the sequential two-step modeling (TSM) approach, in which primary models are fitted to the microbial growth data, and then secondary models are fitted to the primary model parameters to represent their dependence on the environmental variables (e.g., temperature). The Optimal Experimental Design (OED) approach allows a reduction in experimental workload and costs, and improves model identifiability, because primary and secondary models are fitted simultaneously from non-isothermal data. Lactobacillus viridescens was selected for this study because it is a lactic acid bacterium of great interest for meat product preservation. The objectives of this study were to estimate the growth parameters of L. viridescens in culture medium with the TSM and OED approaches and to evaluate, for each approach, the number of experimental data points, the time required, and the confidence intervals of the model parameters. Experimental data for estimating the model parameters with the TSM approach were obtained at six temperatures (total experimental time of 3540 h and 196 microbial growth data points). Data for the OED approach were obtained from four optimal non-isothermal profiles (total experimental time of 588 h and 60 microbial growth data points), two profiles with increasing temperatures (IT) and two with decreasing temperatures (DT). The Baranyi and Roberts primary model and the square-root secondary model were used to describe the microbial growth, in which the parameters b and Tmin (±95% confidence interval) were estimated from the experimental data. The parameters obtained with the TSM approach were b = 0.0290 (±0.0020) [1/(h^0.5 °C)] and Tmin = -1.33 (±1.26) [°C], with R² = 0.986 and RMSE = 0.581, and the parameters obtained with the OED approach were b = 0.0316 (±0.0013) [1/(h^0.5 °C)] and Tmin = -0.24 (±0.55) [°C], with R² = 0.990 and RMSE = 0.436.
The parameters obtained with the OED approach had smaller confidence intervals and better statistical indices than those from the TSM approach. Moreover, fewer experimental data and less time were needed to estimate the model parameters with OED than with TSM. Furthermore, the OED model parameters were validated with non-isothermal experimental data with great accuracy. Thus, the OED approach is feasible and is a very useful tool for improving the prediction of microbial growth under non-isothermal conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
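The square-root secondary model used in the study is linear in temperature after the square-root transform, so b and Tmin can be estimated with a simple linear fit; the data below are synthetic and noise-free, not the L. viridescens measurements:

```python
import numpy as np

# Square-root (Ratkowsky) secondary model: sqrt(mu_max) = b * (T - Tmin).
# Since sqrt(mu) = b*T - b*Tmin is linear in T, an ordinary least-squares
# line gives slope = b and intercept = -b*Tmin.
T = np.array([4.0, 8.0, 12.0, 16.0, 20.0, 25.0])      # temperatures, °C
b_true, Tmin_true = 0.03, -1.0                        # illustrative values
mu = (b_true * (T - Tmin_true)) ** 2                  # growth rates, 1/h

slope, intercept = np.polyfit(T, np.sqrt(mu), 1)
b_hat = slope
Tmin_hat = -intercept / slope
```

The TSM approach fits this line to primary-model rates estimated at each isothermal temperature; the OED approach instead fits b and Tmin directly inside the primary model using non-isothermal profiles, which is what shrinks the confidence intervals reported above.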
Hendershot, Christian S; George, William H
2007-03-01
Research addressing relationships between alcohol and human sexuality has proliferated, due in part to efforts to characterize alcohol's role in HIV risk behavior. This study provides a descriptive review of the alcohol-sexuality literature, using abstracts from 264 identified studies to estimate changes in publication activity, target populations, and the prevalence of HIV-related studies over time. We also examine methodological trends by estimating the prevalence of experimental vs. non-experimental studies. Findings show considerable increases in research activity and diversity of populations studied since the mid-1980's and highlight the emergence of HIV-related studies as a focal point of alcohol-sexuality research efforts. Results also demonstrate a substantial decline in the proportion of studies utilizing experimental methods, in part because of frequent use of non-experimental approaches in studies of alcohol and HIV risk behavior. We discuss implications and review the role of experiments in evaluating causal relationships between alcohol and sexual risk behavior. PMID:16897352
Development of advanced methods for analysis of experimental data in diffusion
NASA Astrophysics Data System (ADS)
Jaques, Alonso V.
There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration-dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed. These methods require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method to estimate a differentiation operation on the data, i.e., to estimate the concentration gradient term, which is important in the analysis process for determining the diffusivity. This approach, therefore, has the potential to be less subjective, and in numerical simulations shows an increased accuracy in the estimated diffusion coefficients. We also present a regression approach to estimating linear multicomponent diffusion coefficients that eliminates the need to pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. The equation for the analytical solution is reformulated in order to reduce the size of the problem and accelerate the convergence. The objective function for the regression can incorporate point estimates of the error in the concentration, improving the statistical confidence in the estimated diffusivity matrix.
Case studies are presented to demonstrate the reliability and the stability of the method. To the best of our knowledge there is no published analysis of the effects of experimental errors on the reliability of the estimates for the diffusivities. For the case of linear multicomponent diffusion, we analyze the effects of the instrument analytical spot size, positioning uncertainty, and concentration uncertainty on the resulting values of the diffusivities. These effects are studied using Monte Carlo method on simulated experimental data. Several useful scaling relationships were identified which allow more rigorous and quantitative estimates of the errors in the measured data, and are valuable for experimental design. To further analyze anomalous diffusion processes, where traditional diffusional transport equations do not hold, we explore the use of fractional calculus in analytically representing these processes is proposed. We use the fractional calculus approach for anomalous diffusion processes occurring through a finite plane sheet with one face held at a fixed concentration, the other held at zero, and the initial concentration within the sheet equal to zero. This problem is related to cases in nature where diffusion is enhanced relative to the classical process, and the order of differentiation is not necessarily a second--order differential equation. That is, differentiation is of fractional order alpha, where 1 ≤ alpha < 2. For alpha = 2, the presented solutions reduce to the classical second-order diffusion solution for the conditions studied. The solution obtained allows the analysis of permeation experiments. Frequently, hydrogen diffusion is analyzed using electrochemical permeation methods using the traditional, Fickian-based theory. Experimental evidence shows the latter analytical approach is not always appropiate, because reported data shows qualitative (and quantitative) deviation from its theoretical scaling predictions. 
Preliminary analysis of data shows better agreement with fractional diffusion analysis when compared to traditional square-root scaling. Although there is a large amount of work in the estimation of the diffusivity from experimental data, reported studies typically present only the analytical description for the diffusivity, without scattering. However, because these studies do not consider effects produced by instrument analysis, their direct applicability is limited. We propose alternatives to address these, and to evaluate their influence on the final resulting diffusivity values.
Optimal experimental designs for the estimation of thermal properties of composite materials
NASA Technical Reports Server (NTRS)
Scott, Elaine P.; Moncman, Deborah A.
1994-01-01
Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
Atomistic determination of flexoelectric properties of crystalline dielectrics
NASA Astrophysics Data System (ADS)
Maranganti, R.; Sharma, P.
2009-08-01
Upon application of a uniform strain, internal sublattice shifts within the unit cell of a noncentrosymmetric dielectric crystal result in the appearance of a net dipole moment: a phenomenon well known as piezoelectricity. A macroscopic strain gradient on the other hand can induce polarization in dielectrics of any crystal structure, even those which possess a centrosymmetric lattice. This phenomenon, called flexoelectricity, has both bulk and surface contributions: the strength of the bulk contribution can be characterized by means of a material property tensor called the bulk flexoelectric tensor. Several recent studies suggest that strain-gradient induced polarization may be responsible for a variety of interesting and anomalous electromechanical phenomena in materials including electromechanical coupling effects in nonuniformly strained nanostructures, “dead layer” effects in nanocapacitor systems, and “giant” piezoelectricity in perovskite nanostructures among others. In this work, adopting a lattice dynamics based microscopic approach we provide estimates of the flexoelectric tensor for certain cubic crystalline ionic salts, perovskite dielectrics, III-V and II-VI semiconductors. We compare our estimates with experimental/theoretical values wherever available and also revisit the validity of an existing empirical scaling relationship for the magnitude of flexoelectric coefficients in terms of material parameters. It is interesting to note that two independent groups report values of flexoelectric properties for perovskite dielectrics that are orders of magnitude apart: Cross and co-workers from Penn State have carried out experimental studies on a variety of materials including barium titanate while Catalan and co-workers from Cambridge used theoretical ab initio techniques as well as experimental techniques to study paraelectric strontium titanate as well as ferroelectric barium titanate and lead titanate. 
We find that, in the case of perovskite dielectrics, our estimates agree to an order of magnitude with the experimental and theoretical estimates for strontium titanate. For barium titanate however, while our estimates agree to an order of magnitude with existing ab initio calculations, there exists a large discrepancy with experimental estimates. The possible reasons for the observed deviations are discussed.
Experimental Estimation of Mutation Rates in a Wheat Population With a Gene Genealogy Approach
Raquin, Anne-Laure; Depaulis, Frantz; Lambert, Amaury; Galic, Nathalie; Brabant, Philippe; Goldringer, Isabelle
2008-01-01
Microsatellite markers are extensively used to evaluate genetic diversity in natural or experimental evolving populations. Their high degree of polymorphism reflects their high mutation rates. Estimates of the mutation rates are therefore necessary when characterizing diversity in populations. As a complement to the classical experimental designs, we propose to use experimental populations, where the initial state is entirely known and some intermediate states have been thoroughly surveyed, thus providing a short timescale estimation together with a large number of cumulated meioses. In this article, we derived four original gene genealogy-based methods to assess mutation rates with limited bias due to relevant model assumptions incorporating the initial state, the number of new alleles, and the genetic effective population size. We studied the evolution of genetic diversity at 21 microsatellite markers, after 15 generations in an experimental wheat population. Compared to the parents, 23 new alleles were found in generation 15 at 9 of the 21 loci studied. We provide evidence that they arose by mutation. Corresponding estimates of the mutation rates ranged from 0 to 4.97 × 10−3 per generation (i.e., year). Sequences of several alleles revealed that length polymorphism was only due to variation in the core of the microsatellite. Among different microsatellite characteristics, both the motif repeat number and an independent estimation of the Nei diversity were correlated with the novel diversity. Despite a reduced genetic effective size, global diversity at microsatellite markers increased in this population, suggesting that microsatellite diversity should be used with caution as an indicator in biodiversity conservation issues. PMID:18689900
Experimental estimation of mutation rates in a wheat population with a gene genealogy approach.
Raquin, Anne-Laure; Depaulis, Frantz; Lambert, Amaury; Galic, Nathalie; Brabant, Philippe; Goldringer, Isabelle
2008-08-01
Microsatellite markers are extensively used to evaluate genetic diversity in natural or experimental evolving populations. Their high degree of polymorphism reflects their high mutation rates. Estimates of the mutation rates are therefore necessary when characterizing diversity in populations. As a complement to the classical experimental designs, we propose to use experimental populations, where the initial state is entirely known and some intermediate states have been thoroughly surveyed, thus providing a short timescale estimation together with a large number of cumulated meioses. In this article, we derived four original gene genealogy-based methods to assess mutation rates with limited bias due to relevant model assumptions incorporating the initial state, the number of new alleles, and the genetic effective population size. We studied the evolution of genetic diversity at 21 microsatellite markers, after 15 generations in an experimental wheat population. Compared to the parents, 23 new alleles were found in generation 15 at 9 of the 21 loci studied. We provide evidence that they arose by mutation. Corresponding estimates of the mutation rates ranged from 0 to 4.97 x 10(-3) per generation (i.e., year). Sequences of several alleles revealed that length polymorphism was only due to variation in the core of the microsatellite. Among different microsatellite characteristics, both the motif repeat number and an independent estimation of the Nei diversity were correlated with the novel diversity. Despite a reduced genetic effective size, global diversity at microsatellite markers increased in this population, suggesting that microsatellite diversity should be used with caution as an indicator in biodiversity conservation issues.
Navarro-Fontestad, Carmen; González-Álvarez, Isabel; Fernández-Teruel, Carlos; Bermejo, Marival; Casabó, Vicente Germán
2012-01-01
The aim of the present work was to develop a new mathematical method for estimating the area under the curve (AUC) and its variability that could be applied in different preclinical experimental designs and amenable to be implemented in standard calculation worksheets. In order to assess the usefulness of the new approach, different experimental scenarios were studied and the results were compared with those obtained with commonly used software: WinNonlin® and Phoenix WinNonlin®. The results do not show statistical differences among the AUC values obtained by both procedures, but the new method appears to be a better estimator of the AUC standard error, measured as the coverage of 95% confidence interval. In this way, the new proposed method demonstrates to be as useful as WinNonlin® software when it was applicable. Copyright © 2011 John Wiley & Sons, Ltd.
Jaciw, Andrew P
2016-06-01
Various studies have examined bias in impact estimates from comparison group studies (CGSs) of job training programs, and in education, where results are benchmarked against experimental results. Such within-study comparison (WSC) approaches investigate levels of bias in CGS-based impact estimates, as well as the success of various design and analytic strategies for reducing bias. This article reviews past literature and summarizes conditions under which CGSs replicate experimental benchmark results. It extends the framework to, and develops the methodology for, situations where results from CGSs are generalized to untreated inference populations. Past research is summarized; methods are developed to examine bias in program impact estimates based on cross-site comparisons in a multisite trial that are evaluated against site-specific experimental benchmarks. Students in Grades K-3 in 79 schools in Tennessee; students in Grades 4-8 in 82 schools in Alabama. Grades K-3 Stanford Achievement Test (SAT) in reading and math scores; Grades 4-8 SAT10 reading scores. Past studies show that bias in CGS-based estimates can be limited through strong design, with local matching, and appropriate analysis involving pretest covariates and variables that represent selection processes. Extension of the methodology to investigate accuracy of generalized estimates from CGSs shows bias from confounders and effect moderators. CGS results, when extrapolated to untreated inference populations, may be biased due to variation in outcomes and impact. Accounting for effects of confounders or moderators may reduce bias. © The Author(s) 2016.
Impact and quantification of the sources of error in DNA pooling designs.
Jawaid, A; Sham, P
2009-01-01
The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
Wimberley, Catriona J; Fischer, Kristina; Reilhac, Anthonin; Pichler, Bernd J; Gregoire, Marie Claude
2014-10-01
The partial saturation approach (PSA) is a simple, single injection experimental protocol that will estimate both B(avail) and appK(D) without the use of blood sampling. This makes it ideal for use in longitudinal studies of neurodegenerative diseases in the rodent. The aim of this study was to increase the range and applicability of the PSA by developing a data driven strategy for determining reliable regional estimates of receptor density (B(avail)) and in vivo affinity (1/appK(D)), and validate the strategy using a simulation model. The data driven method uses a time window guided by the dynamic equilibrium state of the system as opposed to using a static time window. To test the method, simulations of partial saturation experiments were generated and validated against experimental data. The experimental conditions simulated included a range of receptor occupancy levels and three different B(avail) and appK(D) values to mimic diseases states. Also the effect of using a reference region and typical PET noise on the stability and accuracy of the estimates was investigated. The investigations showed that the parameter estimates in a simulated healthy mouse, using the data driven method were within 10±30% of the simulated input for the range of occupancy levels simulated. Throughout all experimental conditions simulated, the accuracy and robustness of the estimates using the data driven method were much improved upon the typical method of using a static time window, especially at low receptor occupancy levels. Introducing a reference region caused a bias of approximately 10% over the range of occupancy levels. Based on extensive simulated experimental conditions, it was shown the data driven method provides accurate and precise estimates of B(avail) and appK(D) for a broader range of conditions compared to the original method. Copyright © 2014 Elsevier Inc. All rights reserved.
Modelling of the UV Index on vertical and 40° tilted planes for different orientations.
Serrano, D; Marín, M J; Utrillas, M P; Tena, F; Martínez-Lozano, J A
2012-02-01
In this study, estimated data of the UV Index on vertical planes are presented for the latitude of Valencia, Spain. For that purpose, the UVER values have been generated on vertical planes by means of four different geometrical models a) isotropic, b) Perez, c) Gueymard, d) Muneer, based on values of the global horizontal UVER and the diffuse horizontal UVER, measured experimentally. The UVER values, obtained by any model, overestimate the experimental values for all orientations, with the exception of the Perez model for the East plane. The results show statistical values of the MAD parameter (Mean Absolute Deviation) between 10% and 25%, the Perez model being the one that obtained a lower MAD for all levels. As for the statistic RMSD parameter (Root Mean Square Deviation), the results show values between 17% and 32%, and again the Perez model provides the best results in all vertical planes. The difference between the estimated UV Index and the experimental UV Index, for vertical and 40° tilted planes, was also calculated. 40° is an angle close to the latitude of Burjassot, Valencia, (39.5°), which, according to various studies, is the optimum angle to capture maximum radiation on tilted planes. We conclude that the models provide a good estimate of the UV Index, as they coincide or differ in one unit compared to the experimental values in 99% of cases, and this is valid for all orientations. Finally, we examined the relation between the UV Index on vertical and 40° tilted planes, both the experimental and estimated by the Perez model, and the experimental UV Index on a horizontal plane at 12 GMT. Based on the results, we can conclude that it is possible to estimate with a good approximation the UV Index on vertical and 40° tilted planes in different directions on the basis of the experimental horizontal UVI value, thus justifying the interest of this study. This journal is © The Royal Society of Chemistry and Owner Societies 2012
Model-Based Estimation of Ankle Joint Stiffness
Misgeld, Berno J. E.; Zhang, Tony; Lüken, Markus J.; Leonhardt, Steffen
2017-01-01
We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model’s inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as, joint stiffness during experimental test bench movements. PMID:28353683
Estimation of unsteady lift on a pitching airfoil from wake velocity surveys
NASA Technical Reports Server (NTRS)
Zaman, K. B. M. Q.; Panda, J.; Rumsey, C. L.
1993-01-01
The results of a joint experimental and computational study on the flowfield over a periodically pitched NACA0012 airfoil, and the resultant lift variation, are reported in this paper. The lift variation over a cycle of oscillation, and hence the lift hysteresis loop, is estimated from the velocity distribution in the wake measured or computed for successive phases of the cycle. Experimentally, the estimated lift hysteresis loops are compared with available data from the literature as well as with limited force balance measurements. Computationally, the estimated lift variations are compared with the corresponding variation obtained from the surface pressure distribution. Four analytical formulations for the lift estimation from wake surveys are considered and relative successes of the four are discussed.
Deng, Zhimin; Tian, Tianhai
2014-07-29
The advances of systems biology have raised a large number of sophisticated mathematical models for describing the dynamic property of complex biological systems. One of the major steps in developing mathematical models is to estimate unknown parameters of the model based on experimentally measured quantities. However, experimental conditions limit the amount of data that is available for mathematical modelling. The number of unknown parameters in mathematical models may be larger than the number of observation data. The imbalance between the number of experimental data and number of unknown parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. This approach first uses a spline interpolation to generate continuous functions of system dynamics as well as the first and second order derivatives of continuous functions. The expanded dataset is the basis to infer unknown model parameters using various continuous optimization criteria, including the error of simulation only, error of both simulation and the first derivative, or error of simulation as well as the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed new approach. Compared with the corresponding discrete criteria using experimental data at the measurement time points only, numerical results of the ERK kinase activation module show that the continuous absolute-error criteria using both function and high order derivatives generate estimates with better accuracy. This result is also supported by the second and third case studies for the G1/S transition network and the MAP kinase pathway, respectively. This suggests that the continuous absolute-error criteria lead to more accurate estimates than the corresponding discrete criteria. 
We also study the robustness property of these three models to examine the reliability of estimates. Simulation results show that the models with estimated parameters using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
NASA Astrophysics Data System (ADS)
Saini, K. K.; Sehgal, R. K.; Sethi, B. L.
2008-10-01
In this paper major reliability estimators are analyzed and there comparatively result are discussed. There strengths and weaknesses are evaluated in this case study. Each of the reliability estimators has certain advantages and disadvantages. Inter-rater reliability is one of the best ways to estimate reliability when your measure is an observation. However, it requires multiple raters or observers. As an alternative, you could look at the correlation of ratings of the same single observer repeated on two different occasions. Each of the reliability estimators will give a different value for reliability. In general, the test-retest and inter-rater reliability estimates will be lower in value than the parallel forms and internal consistency ones because they involve measuring at different times or with different raters. Since reliability estimates are often used in statistical analyses of quasi-experimental designs.
Xiong, Yongliang
2006-01-01
In this study, a series of interaction coefficients of the Brønsted-Guggenheim-Scatchard specific interaction theory (SIT) have been estimated up to 200°C and 400 bars. The interaction coefficients involving Cl- estimated include ε(H+, Cl-), ε(Na+, Cl-), ε(Ag+, Cl-), ε(Na+, AgCl2 -), ε(Mg2+, Cl-), ε(Ca2+, Cl-), ε(Sr2+, Cl-), ε(Ba2+, Cl-), ε(Sm3+, Cl-), ε(Eu3+, Cl-), ε(Gd3+, Cl-), and ε(GdAc2+, Cl-). The interaction coefficients involving OH- estimated include ε(Li+, OH-), ε(K+, OH-), ε(Na+, OH-), ε(Cs+, OH-), ε(Sr2+, OH-), and ε(Ba2+, OH-). In addition, the interaction coefficients of ε(Na+, Ac-) and ε(Ca2+, Ac-) have also been estimated. The bulk of interaction coefficients presented in this study has been evaluated from the mean activity coefficients. A few of them have been estimated from the potentiometric and solubility studies. The above interaction coefficients are tested against both experimental mean activity coefficients and equilibrium quotients. Predicted mean activity coefficients are in satisfactory agreement with experimental data. Predicted equilibrium quotients are in very good agreement with experimental values. Based upon its relatively rapid attainment of equilibrium and the ease of determining magnesium concentrations, this study also proposes that the solubility of brucite can be used as a pH (pcH) buffer/sensor for experimental systems in NaCl solutions up to 200°C by employing the predicted solubility quotients of brucite in conjunction with the dissociation quotients of water and the first hydrolysis quotients of Mg2+, all in NaCl solutions. PMID:16759370
Estimation of Supercapacitor Energy Storage Based on Fractional Differential Equations.
Kopka, Ryszard
2017-12-22
In this paper, new results on using only voltage measurements on supercapacitor terminals for estimation of accumulated energy are presented. For this purpose, a study based on application of fractional-order models of supercapacitor charging/discharging circuits is undertaken. Parameter estimates of the models are then used to assess the amount of the energy accumulated in supercapacitor. The obtained results are compared with energy determined experimentally by measuring voltage and current on supercapacitor terminals. All the tests are repeated for various input signal shapes and parameters. Very high consistency between estimated and experimental results fully confirm suitability of the proposed approach and thus applicability of the fractional calculus to modelling of supercapacitor energy storage.
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
This study is hoped to provide a new insight in developing more accurate and reliable biological models based on limited and low quality experimental data.
Field Validation of the Stability Limit of a Multi MW Turbine
NASA Astrophysics Data System (ADS)
Kallesøe, Bjarne S.; Kragh, Knud A.
2016-09-01
Long slender blades of modern multi-megawatt turbines exhibit a flutter like instability at rotor speeds above a critical rotor speed. Knowing the critical rotor speed is crucial to a safe turbine design. The flutter like instability can only be estimated using geometrically non-linear aeroelastic codes. In this study, the estimated rotor speed stability limit of a 7 MW state of the art wind turbine is validated experimentally. The stability limit is estimated using Siemens Wind Powers in-house aeroelastic code, and the results show that the predicted stability limit is within 5% of the experimentally observed limit.
Using Non-experimental Data to Estimate Treatment Effects
Stuart, Elizabeth A.; Marcus, Sue M.; Horvitz-Lennon, Marcela V.; Gibbons, Robert D.; Normand, Sharon-Lise T.
2009-01-01
While much psychiatric research is based on randomized controlled trials (RCTs), where patients are randomly assigned to treatments, sometimes RCTs are not feasible. This paper describes propensity score approaches, which are increasingly used for estimating treatment effects in non-experimental settings. The primary goal of propensity score methods is to create sets of treated and comparison subjects who look as similar as possible, in essence replicating a randomized experiment, at least with respect to observed patient characteristics. A study to estimate the metabolic effects of antipsychotic medication in a sample of Florida Medicaid beneficiaries with schizophrenia illustrates methods. PMID:20563313
USDA-ARS?s Scientific Manuscript database
Soil moisture estimates are valuable for hydrologic modeling and agricultural decision support. These estimates are typically produced via a combination of sparse ¬in situ networks and remotely-sensed products or where sensory grids and quality satellite estimates are unavailable, through derived h...
ERIC Educational Resources Information Center
Bottema-Beutel, Kristen; Lloyd, Blair; Carter, Erik W.; Asmus, Jennifer M.
2014-01-01
Attaining reliable estimates of observational measures can be challenging in school and classroom settings, as behavior can be influenced by multiple contextual factors. Generalizability (G) studies can enable researchers to estimate the reliability of observational data, and decision (D) studies can inform how many observation sessions are…
76 FR 78927 - Proposed Information Collection Activity; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-20
... estimates of the effectiveness of the programs. This component will use an experimental design. Program... inform decisions related to future investments in this kind of programming as well as the design and operation of such services. To meet the objective of the study, experimental impact studies with...
ERIC Educational Resources Information Center
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S. Natasha; Van den Noortgate, Wim
2015-01-01
The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs…
On experimental damage localization by SP2E: Application of H∞ estimation and oblique projections
NASA Astrophysics Data System (ADS)
Lenzen, Armin; Vollmering, Max
2018-05-01
In this article, experimental damage localization based on H∞ estimation and state projection estimation error (SP2E) is studied. Based on an introduced difference process, a state space representation is derived for advantageous numerical solvability. Because real structural excitations are presumed to be unknown, a general input is applied, which allows synchronization and normalization. Furthermore, state projections are introduced to enhance damage identification. While first experiments verifying the SP2E method have already been conducted and published, further laboratory results are analyzed here: SP2E is used to experimentally localize stiffness degradations and mass alterations, and the influence of projection techniques is analyzed. In summary, the SP2E method is able to localize structural alterations, as observed in the results of laboratory experiments.
NASA Astrophysics Data System (ADS)
de Lautour, Oliver R.; Omenzetter, Piotr
2010-07-01
Developed for studying long sequences of regularly sampled data, time series analysis methods are increasingly being investigated for use in Structural Health Monitoring (SHM). In this research, Autoregressive (AR) models were used to fit the acceleration time histories obtained from two experimental structures, a 3-storey bookshelf structure and the ASCE Phase II Experimental SHM Benchmark Structure, in undamaged and a limited number of damaged states. The coefficients of the AR models were treated as damage-sensitive features and used as input to an Artificial Neural Network (ANN). The ANN was trained to classify damage cases or estimate remaining structural stiffness. The results showed that the combination of AR models and ANNs is an efficient tool for damage classification and estimation, performing well with a small number of damage-sensitive features and limited sensors.
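The feature-extraction step this abstract describes, fitting AR coefficients to an acceleration record, can be sketched as an ordinary least-squares fit (synthetic data; the AR order, solver, and coefficient value here are illustrative assumptions, not those of the cited study):

```python
import numpy as np

def ar_coefficients(x, p):
    """Least-squares fit of an AR(p) model x[t] = sum_k a_k * x[t-k] + e[t].
    Returns the p coefficients, usable as damage-sensitive features."""
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# Synthetic check: a pure AR(1) series generated with a = 0.6 is recovered.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.6 * x[t - 1] + rng.normal()
```

In an SHM setting, a vector of such coefficients per sensor and per record would feed the neural network classifier; a shift in the coefficients signals a change in structural dynamics.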
ERIC Educational Resources Information Center
Unlu, Fatih; Yamaguchi, Ryoko; Bernstein, Larry; Edmunds, Julie
2010-01-01
This paper addresses methodological issues arising from an experimental study of North Carolina's Early College High School Initiative, a four-year longitudinal experimental study funded by Institute for Education Sciences. North Carolina implemented the Early College High School (ECHS) Initiative in response to low high school graduation rates.…
Zimmer, Christoph
2016-01-01
Computational modeling is a key technique for analyzing models in systems biology. There are well established methods for the estimation of kinetic parameters in models of ordinary differential equations (ODE). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well established approaches for experimental design and even software tools. However, data from single cell experiments on signaling pathways in systems biology often show intrinsic stochastic effects, prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted in recent years, only very few articles focus on experimental design for stochastic models. The Fisher information matrix is the central measure for experimental design, as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach to calculate a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of a recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. The performance of the approach is evaluated with simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, a qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for an application in realistic size models.
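For the simpler case of a deterministic model with independent Gaussian observation noise, the Fisher information matrix discussed above reduces to a quadratic form in the sensitivity (Jacobian) matrix; a minimal sketch (the exponential-decay model and noise level are illustrative assumptions, not the MSS method of the article):

```python
import numpy as np

def fisher_information(jacobian, sigma2):
    """FIM for independent Gaussian observation noise of variance sigma2:
    F = J^T J / sigma2, where J[i, j] = d(observation_i)/d(theta_j)."""
    J = np.asarray(jacobian, dtype=float)
    return J.T @ J / sigma2

# Illustrative model y(t) = exp(-theta * t), so dy/dtheta = -t * exp(-theta * t).
theta = 0.5
t = np.array([1.0, 2.0, 4.0])          # candidate measurement times
J = (-t * np.exp(-theta * t)).reshape(-1, 1)
F = fisher_information(J, sigma2=0.01)
print(F.shape)  # (1, 1)
```

In experimental design, one compares candidate measurement-time sets by a scalar function of F (e.g. its determinant) and picks the most informative set.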
Abiko, Hironobu; Furuse, Mitsuya; Takano, Tsuguo
2016-01-01
Objectives: In the use of activated carbon beds as adsorbents for various types of organic vapor in respirator gas filters, water adsorption of the bed and test gas humidity are expected to alter the accuracy in the estimation of breakthrough data. There is increasing interest in the effects of moisture on estimation methods, and this study has investigated the effects with actual breakthrough data. Methods: We prepared several activated carbon beds preconditioned by equilibration with moisture at different relative humidities (RH=40%-70%) and a constant temperature of 20°C. Then, we measured breakthrough curves in the early region of breakthrough time for 10 types of organic vapor, and investigated the effects of moisture on estimation using the Wheeler-Jonas equation, the simulation software NIOSH MultiVapor™ 2.2.3, and RBT (Relative Breakthrough Time) proposed by Tanaka et al. Results: The Wheeler-Jonas equation showed good accordance with breakthrough curves at all RH in this study. However, the correlation coefficient decreased gradually with increasing RH regardless of type of organic vapor. Estimation of breakthrough time by MultiVapor showed good accordance with experimental data at RH=50%. In contrast, it showed discordance at high RH (>50%). RBTs reported previously were consistent with experimental data at RH=50%. On the other hand, the values of RBT changed markedly with increasing RH. Conclusions: The results of each estimation method showed good accordance with experimental data under comparatively dry conditions (RH≤50%). However, there were discrepancies under high humidified conditions, and further studies are warranted. PMID:27725483
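The Wheeler-Jonas equation used above predicts breakthrough time from equilibrium capacity and adsorption kinetics; a sketch with the standard form of the equation and illustrative parameter values (not data from this study):

```python
import math

def wheeler_jonas(We, W, c0, Q, rho_b, kv, cx):
    """Wheeler-Jonas breakthrough time (min).
    We: equilibrium adsorption capacity (g/g), W: carbon mass (g),
    c0: inlet concentration (g/cm^3), Q: volumetric flow (cm^3/min),
    rho_b: bed bulk density (g/cm^3), kv: overall rate coefficient (1/min),
    cx: breakthrough concentration (g/cm^3)."""
    capacity_term = (We * W) / (c0 * Q)
    kinetic_term = (We * rho_b) / (kv * c0) * math.log((c0 - cx) / cx)
    return capacity_term - kinetic_term

# Illustrative values only (not from the study):
t = wheeler_jonas(We=0.3, W=50.0, c0=1e-6, Q=3.0e4,
                  rho_b=0.45, kv=4.0e3, cx=1e-8)
```

The first term is the stoichiometric bed life; the second is the kinetic penalty, which shrinks as the rate coefficient kv grows. Humidity effects of the kind studied above enter through changes in We and kv.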
USDA-ARS?s Scientific Manuscript database
Controlling for spatial variability is important in high-throughput phenotyping studies that enable large numbers of genotypes to be evaluated across time and space. In the current study, we compared the efficacy of different experimental designs and spatial models in the analysis of canopy spectral...
Estimation of Soil-Water Characteristic Curves in Multiple-Cycles Using Membrane and TDR System
Hong, Won-Taek; Jung, Young-Seok; Kang, Seonghun; Lee, Jong-Sub
2016-01-01
The objective of this study is to estimate multiple-cycles of the soil-water characteristic curve (SWCC) using an innovative volumetric pressure plate extractor (VPPE), which is incorporated with a membrane and time domain reflectometry (TDR). The pressure cell includes the membrane to reduce the experimental time and the TDR probe to automatically estimate the volumetric water content. For the estimation of SWCC using the VPPE system, four specimens with different grain size and void ratio are prepared. The volumetric water contents of the specimens according to the matric suction are measured by the burette system and are estimated in the TDR system during five cycles of SWCC tests. The volumetric water contents estimated by the TDR system are almost identical to those determined by the burette system. The experimental time significantly decreases with the new VPPE. The hysteresis in the SWCC is largest in the first cycle and is nearly identical after 1.5 cycles. As the initial void ratio decreases, the air entry value increases. This study suggests that the new VPPE may effectively estimate multiple-cycles of the SWCC of unsaturated soils. PMID:28774139
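TDR systems such as the one above typically convert the measured apparent dielectric constant to volumetric water content via an empirical calibration; a widely used choice is the Topp et al. (1980) polynomial, shown here as an assumption (the cited study may use a different calibration):

```python
def topp_vwc(ka):
    """Topp et al. (1980) empirical calibration: volumetric water content
    (cm^3/cm^3) from the TDR-measured apparent dielectric constant Ka."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3

# Dry sand (Ka ~ 3) versus a wet soil (Ka ~ 25):
print(topp_vwc(3.0), topp_vwc(25.0))
```

Because water's dielectric constant (~80) dwarfs that of dry minerals (~3-5), Ka tracks water content closely, which is what lets the TDR probe replace the burette readings in the VPPE system.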
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
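The one-sample variance estimator discussed above pools all observations while ignoring group labels, so between-group separation inflates the variance estimate and hence the re-estimated sample size; a minimal two-group sketch using the standard normal-approximation sample size formula (illustrative numbers, not the three-arm design of the article):

```python
import math
from statistics import NormalDist, variance

def blinded_n(pooled, delta, alpha=0.025, power=0.9):
    """Per-group sample size for a two-group comparison, recomputed from
    the blinded one-sample variance estimator (group labels ignored)."""
    s2 = variance(pooled)  # inflated by any between-group mean difference
    z = NormalDist().inv_cdf
    return math.ceil(2 * s2 * (z(1 - alpha) + z(power)) ** 2 / delta ** 2)

# Two hidden groups [1, 2, 3] and [5, 6, 7]: within-group variance is 1.0,
# but pooling yields 5.6, so the blinded estimator overpowers the trial.
n = blinded_n([1, 2, 3, 5, 6, 7], delta=2.0)
print(n)  # 30, versus 6 if the true within-group variance 1.0 were used
```

This is exactly the overpowering mechanism the article reports; unbiased blinded estimators correct the inflation at the cost of extra variability.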
A Powerful, Potential Outcomes Method for Estimating Any Estimand across Multiple Groups
ERIC Educational Resources Information Center
Pattanayak, Cassandra W.; Rubin, Donald B.; Zell, Elizabeth R.
2013-01-01
In educational research, outcome measures are often estimated across separate studies or across schools, districts, or other subgroups to assess the overall causal effect of an active treatment versus a control treatment. Students may be partitioned into such strata or blocks by experimental design, or separated into studies within a…
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study contains multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and contains four parameters. The three OED/PE strategies are considered and the impact of the design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor.
Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
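The CTMI model identified in this study has a closed form (Rosso et al., 1993) in its four cardinal parameters Tmin, Topt, Tmax and mu_opt; a sketch (parameter values below are illustrative, not the study's estimates):

```python
def ctmi(T, Tmin, Topt, Tmax, mu_opt):
    """Cardinal Temperature Model with Inflection (Rosso et al., 1993):
    microbial growth rate vs. temperature, zero outside (Tmin, Tmax)."""
    if T <= Tmin or T >= Tmax:
        return 0.0
    num = (T - Tmax) * (T - Tmin) ** 2
    den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                           - (Topt - Tmax) * (Topt + Tmin - 2 * T))
    return mu_opt * num / den

# By construction the model returns mu_opt exactly at the optimum:
print(ctmi(37.0, Tmin=5.0, Topt=37.0, Tmax=45.0, mu_opt=2.0))  # 2.0
```

OED/PE for this model amounts to choosing temperature profiles that make the growth-rate data maximally sensitive to these four parameters.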
NASA Astrophysics Data System (ADS)
Dikmen, Erkan; Ayaz, Mahir; Gül, Doğan; Şahin, Arzu Şencan
2017-07-01
The determination of the drying behavior of herbal plants is a complex process. In this study, a gene expression programming (GEP) model was used to determine the drying behavior of herbal plants: fresh sweet basil, parsley and dill leaves. Time and drying temperature are the input parameters for the estimation of the moisture ratio of the herbal plants. The results of the GEP model are compared with experimental drying data. Statistical measures such as the mean absolute percentage error, root-mean-squared error and R-square are used to quantify the difference between the values predicted by the GEP model and the values actually observed in the experimental study. It was found that the results of the GEP model and the experimental study are in moderately good agreement. The results show that the GEP model can be considered an efficient modelling technique for the prediction of the moisture ratio of herbal plants.
ERIC Educational Resources Information Center
Unlu, Fatih; Layzer, Carolyn; Clements, Douglas; Sarama, Julie; Cook, David
2013-01-01
Many educational Randomized Controlled Trials (RCTs) collect baseline versions of outcome measures (pretests) to be used in the estimation of impacts at posttest. Although pretest measures are not necessary for unbiased impact estimates in well executed experimental studies, using them increases the precision of impact estimates and reduces sample…
A Robust Adaptive Autonomous Approach to Optimal Experimental Design
NASA Astrophysics Data System (ADS)
Gu, Hairong
Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in pursuit of prediction accuracy, encounter difficulties in conducting experiments using existing experimental procedures, for two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize the experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial.
Directly addressing the challenges in the experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, removing the requirement of a parametric model at the beginning of an experiment; design optimization selects experimental designs on the fly, during an experiment, based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without assuming a parametric model as the proxy of the latent data structure, whereas the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by taking fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
Experimental Demonstration of a Cheap and Accurate Phase Estimation
Rudinger, Kenneth; Kimmel, Shelby; Lobser, Daniel; ...
2017-05-11
We demonstrate an experimental implementation of robust phase estimation (RPE) to learn the phase of a single-qubit rotation on a trapped Yb⁺ ion qubit. Here, we show this phase can be estimated with an uncertainty below 4 × 10⁻⁴ rad using as few as 176 total experimental samples, and our estimates exhibit Heisenberg scaling. Unlike standard phase estimation protocols, RPE neither assumes perfect state preparation and measurement, nor requires access to ancillae. We cross-validate the results of RPE with the more resource-intensive protocol of gate set tomography.
NASA Astrophysics Data System (ADS)
Haddag, B.; Kagnaya, T.; Nouari, M.; Cutard, T.
2013-01-01
Modelling machining operations allows estimating cutting parameters which are difficult to obtain experimentally, in particular quantities characterizing the tool-workpiece interface. Temperature is one of these quantities; it affects tool wear, so its estimation is important. This study deals with a new modelling strategy, based on two calculation steps, for analysis of the heat transfer into the cutting tool. Unlike classical methods, which consider only the cutting tool and apply an approximate heat flux at the cutting face estimated from experimental data (e.g. measured cutting force, cutting power), the proposed approach consists of two successive 3D Finite Element calculations that are fully independent of experimental measurements; only the definition of the behaviour of the tool-workpiece couple is necessary. The first is a 3D thermomechanical modelling of the chip formation process, which allows estimating cutting forces, chip morphology and its flow direction. The second is a 3D thermal modelling of the heat diffusion into the cutting tool, using an adequate thermal loading (applied uniform or non-uniform heat flux). This loading is estimated using quantities obtained from the first calculation step, such as contact pressure, sliding velocity distributions and contact area. Comparisons, on the one hand between experimental data and the first calculation, and on the other hand between temperatures measured with embedded thermocouples and the second calculation, show good agreement in terms of cutting forces, chip morphology and cutting temperature.
An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
It is hoped that this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data. PMID:23593445
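The Akaike Information Criterion used above for model selection trades goodness of fit against parameter count; for least-squares fits with Gaussian errors a common form is AIC = n·ln(RSS/n) + 2k, sketched here with illustrative numbers (not the study's models):

```python
import math

def aic_ls(rss, n, k):
    """Akaike Information Criterion for a least-squares fit with Gaussian
    errors: n * ln(RSS/n) + 2k. Lower values indicate a more plausible model."""
    return n * math.log(rss / n) + 2 * k

# Two candidate fits to the same n = 50 data set: model B's three extra
# parameters are only worth it if they cut the residual sum of squares enough.
a = aic_ls(rss=12.0, n=50, k=3)
b = aic_ls(rss=11.5, n=50, k=6)
print(a < b)  # True: the small RSS gain does not justify 3 more parameters
```

This penalty is what stops parameter estimation on noisy, incomplete data from always favoring the most flexible model.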
ERIC Educational Resources Information Center
Dong, Nianbo; Lipsey, Mark W.
2011-01-01
Attrition occurs when study participants who were assigned to the treatment and control conditions do not provide outcome data and thus do not contribute to the estimation of the treatment effects. It is very common in experimental studies in education as illustrated, for instance, in a meta-analysis studying "the effects of attrition on baseline…
Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le
2015-01-01
Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for ABM to estimate key parameters of the model by incorporating experimental data, whereas the differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing ABM to mimic the multi-scale immune system with various phenotypes and types of cells, and by using the input and output of the ABM to build up a Loess regression for key parameter estimation. Next, we employed a greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set and used the ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer the key parameters as the DE model does. Therefore, this study innovatively developed a complex system development mechanism that can simulate the complicated immune system in detail, like ABM, and validate the reliability and efficiency of the model, like DE, by fitting the experimental data. PMID:26535589
A comparative analysis of experimental selection on the stickleback pelvis.
Miller, S E; Barrueto, M; Schluter, D
2017-06-01
Mechanisms of natural selection can be identified using experimental approaches. However, such experiments often yield nonsignificant effects and imprecise estimates of selection due to low power and small sample sizes. Combining results from multiple experimental studies might produce an aggregate estimate of selection that is more revealing than individual studies. For example, bony pelvic armour varies conspicuously among stickleback populations, and predation by vertebrate and insect predators has been hypothesized to be the main driver of this variation. Yet experimental selection studies testing these hypotheses frequently fail to find a significant effect. We experimentally manipulated length of threespine stickleback (Gasterosteus aculeatus) pelvic spines in a mesocosm experiment to test whether prickly sculpin (Cottus asper), an intraguild predator of stickleback, favours longer spines. The probability of survival was greater for stickleback with unclipped pelvic spines, but this effect was noisy and not significant. We used meta-analysis to combine the results of our mesocosm experiment with previously published experimental studies of selection on pelvic armour. We found evidence that fish predation indeed favours increased pelvic armour, with a moderate effect size. The same approach found little evidence that insect predation favours reduced pelvic armour. The causes of reduced pelvic armour in many stickleback populations remain uncertain. © 2017 European Society For Evolutionary Biology.
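The meta-analytic pooling used above can be sketched as a fixed-effect inverse-variance combination, one standard approach (the study's exact model may differ; the effect sizes and variances below are illustrative, not its estimates):

```python
def fixed_effect_meta(effects, variances):
    """Inverse-variance fixed-effect meta-analysis: each study is weighted
    by 1/variance. Returns the pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Three noisy, individually nonsignificant selection estimates combine
# into a more precise aggregate:
est, var = fixed_effect_meta([0.30, 0.10, 0.25], [0.04, 0.02, 0.08])
```

The pooled variance is always smaller than the smallest individual variance, which is exactly why aggregating several underpowered selection experiments can reveal an effect no single experiment detects.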
Sensitivity and systematics of calorimetric neutrino mass experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nucciotti, A.; Cremonesi, O.; Ferri, E.
2009-12-16
A large calorimetric neutrino mass experiment using thermal detectors is expected to play a crucial role in the challenge of directly assessing the neutrino mass. We discuss and compare here two approaches for estimating the experimental sensitivity of such an experiment. The first method uses an analytic formulation and allows one to readily obtain a close estimate over a wide range of experimental configurations. The second method is based on a Monte Carlo technique and is more precise and reliable. The Monte Carlo approach is then exploited to study some sources of systematic uncertainty peculiar to calorimetric experiments. Finally, the tools are applied to investigate the optimal experimental configuration of the MARE project.
Zimmer, Christoph
2016-01-01
Background Computational modeling is a key technique for analyzing models in systems biology. There are well established methods for the estimation of kinetic parameters in models of ordinary differential equations (ODE). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well established approaches for experimental design and even software tools. However, data from single cell experiments on signaling pathways in systems biology often show intrinsic stochastic effects, prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted in recent years, only very few articles focus on experimental design for stochastic models. Methods The Fisher information matrix is the central measure for experimental design, as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach to calculate a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of a recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. Results The performance of the approach is evaluated with simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, a qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for an application in realistic size models. PMID:27583802
Bolt, Hermann M; Başaran, Nurşen; Duydu, Yalçın
2012-01-01
The reproductive toxicity of boric acid and borates is a matter of current regulatory concern. Based on experimental studies in rats, no-observed-adverse-effect levels (NOAELs) were found to be 17.5 mg boron (B)/kg body weight (b.w.) for male fertility and 9.6 mg B/kg b.w. for developmental toxicity. Recently, occupational human field studies in highly exposed cohorts were reported from China and Turkey, with both studies showing negative results regarding male reproduction. A comparison of the conditions of these studies with the experimental NOAEL conditions is based on reported B blood levels, which is clearly superior to scaling according to estimated B exposures. A comparison of estimated daily B exposure levels and measured B blood levels confirms the preference for biomonitoring data in evaluating human field studies. In general, it appears that high environmental exposures to B are lower than possible high occupational exposures. The comparison reveals no contradiction between human and experimental reproductive toxicity data. It clearly appears that human B exposures, even in the highest exposed cohorts, are too low to reach the blood (and target tissue) concentrations that would be required to exert adverse effects on reproductive functions.
New infrastructure for studies of transmutation and fast systems concepts
NASA Astrophysics Data System (ADS)
Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria
2017-09-01
In this work we report initial studies on a low power Accelerator-Driven System as a possible experimental facility for the measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.
A low power ADS for transmutation studies in fast systems
NASA Astrophysics Data System (ADS)
Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria
2017-12-01
In this work, we report studies on a fast, low-power accelerator-driven system (ADS) model as a possible experimental facility, focusing on its capabilities in terms of measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.
Experimental validation of pulsed column inventory estimators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyerlein, A.L.; Geldard, J.F.; Weh, R.
Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. It is difficult to measure the inventory of the solvent contactors; therefore, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and have applied computer codes developed at Clemson University to analyze these data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA.
Stochastic estimation of human shoulder impedance with robots: an experimental design.
Park, Kyungbin; Chang, Pyung Hun
2011-01-01
Previous studies treated the shoulder as a hinge joint during human arm impedance measurement. This is a vast simplification, since the shoulder is a complex of several joints with multiple degrees of freedom. In the present work, a practical methodology for more general and realistic estimation of human shoulder impedance is proposed and validated with a spring array. It includes a gravity compensation scheme, developed and used for experiments with a spatial three-degrees-of-freedom PUMA-type robot. The experimental results were accurate and reliable, demonstrating the strong potential of the proposed methodology for estimating human shoulder impedance. © 2011 IEEE
Non-Targeted Effects and the Dose Response for Heavy Ion Tumorigenesis
NASA Technical Reports Server (NTRS)
Chappell, Lori J.; Cucinotta, Francis A.
2010-01-01
There is no human epidemiology data available to estimate the heavy ion cancer risks experienced by astronauts in space. Studies of tumor induction in mice are a necessary step to estimate risks to astronauts. Previous experimental data can be better utilized to model dose response for heavy ion tumorigenesis and plan future low dose studies.
Chi, Yulang; Zhang, Huanteng; Huang, Qiansheng; Lin, Yi; Ye, Guozhu; Zhu, Huimin; Dong, Sijun
2018-02-01
Environmental risks of organic chemicals are largely determined by their persistence, bioaccumulation, and toxicity (PBT) and their physicochemical properties. Major regulations in different countries and regions identify chemicals according to their bioconcentration factor (BCF) and octanol-water partition coefficient (Kow), which frequently displays a substantial correlation with the sediment sorption coefficient (Koc). Half-life or degradability is crucial for the persistence evaluation of chemicals. Quantitative structure-activity relationship (QSAR) estimation models are indispensable for predicting environmental fate and health effects in the absence of field- or laboratory-based data. In this study, 39 chemicals of high concern were chosen for half-life testing based on total organic carbon (TOC) degradation, and two widely accepted and highly used QSAR estimation models (i.e., EPI Suite and PBT Profiler) were adopted for environmental risk evaluation. The experimental results and estimated data, as well as the two model-based results, were compared on the basis of water solubility, Kow, Koc, BCF, and half-life. Environmental risk assessment of the selected compounds was achieved by combining experimental data and estimation models. It was concluded that both EPI Suite and PBT Profiler were fairly accurate in predicting the physicochemical properties and degradation half-lives for water, soil, and sediment. However, the experimental and estimated half-lives were still not fully consistent. This suggests deficiencies in the prediction models and the necessity of combining experimental data and predicted results when evaluating the environmental fate and risks of pollutants. Copyright © 2016. Published by Elsevier B.V.
Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data
NASA Astrophysics Data System (ADS)
Shulenin, V. P.
2016-10-01
Properties of robust estimators of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimator of the average Gini difference have asymptotically normal distributions and bounded influence functions, and are B-robust; hence, unlike the standard deviation, they are protected against the presence of outliers in the sample. Results of a comparison of scale-parameter estimators are given for a Gaussian model with contamination. An adaptive variant of the modified estimator of the average Gini difference is considered.
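The contrast described in this abstract is easy to reproduce numerically. The sketch below (illustrative data and seed, not from the paper) compares the standard deviation with the consistency-scaled median of absolute deviations (MAD) and the Gini mean difference under a contaminated Gaussian model; the function names are ours:

```python
import numpy as np

def mad_scale(x):
    """Median of absolute deviations, scaled by 1.4826 for consistency at the Gaussian."""
    x = np.asarray(x, dtype=float)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def gini_mean_difference(x):
    """Mean absolute difference over all pairs, E|Xi - Xj| (unmodified Gini estimator)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Pairwise mean |xi - xj| computed in O(n log n) via the sorted-sample identity.
    i = np.arange(1, n + 1)
    return 2.0 / (n * (n - 1)) * np.sum((2 * i - n - 1) * x)

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=10_000)
# Contaminated model: 5% of observations drawn from a wide N(0, 10) component.
contaminated = np.concatenate([clean, rng.normal(0.0, 10.0, size=500)])

# The standard deviation is strongly inflated by the outliers; the MAD barely moves.
print(np.std(contaminated), mad_scale(contaminated))
```

For a standard normal sample, `mad_scale` is close to 1 and `gini_mean_difference` is close to 2/√π ≈ 1.128, while the contaminated standard deviation jumps well above 1, illustrating the bounded influence the abstract refers to.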
Transverse oscillations in plasma wakefield experiments at FACET
NASA Astrophysics Data System (ADS)
Adli, E.; Lindstrøm, C. A.; Allen, J.; Clarke, C. I.; Frederico, J.; Gessner, S. J.; Green, S. Z.; Hogan, M. J.; Litos, M. D.; White, G. R.; Yakimenko, V.; An, W.; Clayton, C. E.; Marsh, K. A.; Mori, W. B.; Joshi, C.; Vafaei-Najafabadi, N.; Corde, S.; Lu, W.
2016-09-01
We study transverse effects in a plasma wakefield accelerator. Experimental data from FACET with asymmetry in the beam-plasma system is presented. Energy dependent centroid oscillations are observed on the accelerated part of the charge. The experimental results are compared to PIC simulations and theoretical estimates.
Bayesian inference for dynamic transcriptional regulation; the Hes1 system as a case study.
Heron, Elizabeth A; Finkenstädt, Bärbel; Rand, David A
2007-10-01
In this study, we address the problem of estimating the parameters of regulatory networks and provide the first application of Markov chain Monte Carlo (MCMC) methods to experimental data. As a case study, we consider a stochastic model of the Hes1 system expressed in terms of stochastic differential equations (SDEs) to which rigorous likelihood methods of inference can be applied. When fitting continuous-time stochastic models to discretely observed time series the lengths of the sampling intervals are important, and much of our study addresses the problem when the data are sparse. We estimate the parameters of an autoregulatory network providing results both for simulated and real experimental data from the Hes1 system. We develop an estimation algorithm using MCMC techniques which are flexible enough to allow for the imputation of latent data on a finer time scale and the presence of prior information about parameters which may be informed from other experiments as well as additional measurement error.
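The Hes1 SDE system in this study is far richer than anything that fits in a few lines, but the core MCMC machinery can be sketched on a toy problem. The example below (entirely illustrative: a noisy exponential decay observed at discrete times, flat prior, random-walk Metropolis) shows the kind of likelihood-based sampling step such methods build on:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "experimental" data: exponential decay observed with Gaussian noise.
true_rate, sigma = 0.5, 0.1
t = np.linspace(0.0, 5.0, 20)
y = np.exp(-true_rate * t) + rng.normal(0.0, sigma, size=t.size)

def log_likelihood(rate):
    if rate <= 0.0:
        return -np.inf  # flat prior restricted to rate > 0
    resid = y - np.exp(-rate * t)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis over the single rate parameter.
samples, current = [], 1.0
ll_current = log_likelihood(current)
for _ in range(5000):
    proposal = current + rng.normal(0.0, 0.05)
    ll_prop = log_likelihood(proposal)
    if np.log(rng.uniform()) < ll_prop - ll_current:  # accept/reject step
        current, ll_current = proposal, ll_prop
    samples.append(current)

posterior = np.array(samples[1000:])  # discard burn-in
print(posterior.mean())
```

The posterior mean recovers a value near the true rate; the paper's contribution lies in extending such sampling to latent-data imputation on a finer time scale and to genuinely stochastic dynamics.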
Direction of Arrival Estimation Using a Reconfigurable Array
2005-05-06
Keywords: direction-of-arrival estimation; MUSIC algorithm; reconfigurable array; experimental.
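The MUSIC algorithm named in this record can be sketched briefly. The example below is illustrative only (an 8-element half-wavelength uniform linear array with two synthetic sources, unrelated to the reconfigurable array of the report): it forms the sample covariance, separates signal and noise subspaces, and scans the pseudospectrum for peaks:

```python
import numpy as np

rng = np.random.default_rng(1)
n_elems, n_src, n_snap = 8, 2, 200
true_deg = np.array([-20.0, 35.0])  # ground-truth source directions

def steering(theta_rad):
    # Response of a uniform linear array with half-wavelength element spacing.
    return np.exp(1j * np.pi * np.arange(n_elems)[:, None] * np.sin(theta_rad))

A = steering(np.deg2rad(true_deg))                                  # (8, 2)
S = rng.normal(size=(n_src, n_snap)) + 1j * rng.normal(size=(n_src, n_snap))
N = 0.1 * (rng.normal(size=(n_elems, n_snap))
           + 1j * rng.normal(size=(n_elems, n_snap)))
X = A @ S + N                                                       # array snapshots

R = X @ X.conj().T / n_snap                                         # sample covariance
_, eigvecs = np.linalg.eigh(R)                                      # eigenvalues ascending
En = eigvecs[:, : n_elems - n_src]                                  # noise subspace

grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
# MUSIC pseudospectrum: large where the steering vector is orthogonal to the noise subspace.
pseudo = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)

# Take the two strongest local maxima as the DOA estimates.
peaks = [i for i in range(1, grid.size - 1)
         if pseudo[i] > pseudo[i - 1] and pseudo[i] > pseudo[i + 1]]
peaks.sort(key=lambda i: pseudo[i], reverse=True)
doa_est = sorted(np.rad2deg(grid[peaks[:n_src]]))
print(doa_est)  # close to [-20, 35]
```

At this high signal-to-noise ratio the pseudospectrum peaks sit within a fraction of a degree of the true directions; the report's subject is how a reconfigurable array geometry affects such estimates.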
Experimental and theoretical studies of near-ground acoustic radiation propagation in the atmosphere
NASA Astrophysics Data System (ADS)
Belov, Vladimir V.; Burkatovskaya, Yuliya B.; Krasnenko, Nikolai P.; Rakov, Aleksandr S.; Rakov, Denis S.; Shamanaeva, Liudmila G.
2017-11-01
Results are presented from experimental and theoretical studies of the near-ground propagation of monochromatic acoustic radiation along atmospheric paths from a source to a receiver. The analysis accounts for multiple scattering by fluctuations of atmospheric temperature and wind velocity, refraction of sound by wind-velocity and temperature gradients, and reflection from the underlying surface, for different models of the atmosphere as functions of sound frequency, reflection coefficient of the underlying surface, propagation distance, and source and receiver altitudes. Calculations were performed by the Monte Carlo method using the local-estimation algorithm in a computer program developed by the authors. Results of experimental investigations under controllable conditions are compared with theoretical estimates and with analytical calculations for the Delany-Bazley impedance model. The satisfactory agreement of the data confirms the correctness of the suggested computer program.
Comparison of standing volume estimates using optical dendrometers
Neil A. Clark; Stanley J. Zarnoch; Alexander Clark; Gregory A. Reams
2001-01-01
This study compared height and diameter measurements and volume estimates on 20 hardwood and 20 softwood stems using traditional optical dendrometers, an experimental camera instrument, and mechanical calipers. Multiple comparison tests showed significant differences among the means for lower stem diameters when the camera was used. There were no significant...
Octanol/water partition coefficient (logP) and aqueous solubility (logS) are two important parameters in pharmacology and toxicology studies, and experimental measurements are usually time-consuming and expensive. In the present research, novel methods are presented for the estim...
Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G
2016-05-09
The aim of this study was to evaluate the suitability of statistics as measures of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy, the F-test value for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy, the F-test value for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) for evaluating experimental precision in trials with cowpea genotypes.
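The precision statistics discussed here can be computed from a standard randomized-block ANOVA. The sketch below simulates an illustrative trial (all numbers invented) and computes the F value for genotype, the coefficient of variation, entry-mean heritability, and selective accuracy, assuming the commonly used relation SA = sqrt(1 − 1/F):

```python
import numpy as np

rng = np.random.default_rng(7)
g, r = 20, 4  # genotypes x blocks, randomized block design

# Simulated yields: grand mean + genotype effect + block effect + error.
genotype_effects = rng.normal(0.0, 0.4, size=g)
block_effects = rng.normal(0.0, 0.2, size=r)
yield_ = (2.0 + genotype_effects[:, None] + block_effects[None, :]
          + rng.normal(0.0, 0.25, size=(g, r)))

grand = yield_.mean()
ss_gen = r * np.sum((yield_.mean(axis=1) - grand) ** 2)
ss_blk = g * np.sum((yield_.mean(axis=0) - grand) ** 2)
ss_err = np.sum((yield_ - grand) ** 2) - ss_gen - ss_blk

ms_gen = ss_gen / (g - 1)
ms_err = ss_err / ((g - 1) * (r - 1))

F = ms_gen / ms_err                        # F-test value for genotype
cv = 100.0 * np.sqrt(ms_err) / grand       # coefficient of variation (%)
h2 = (ms_gen - ms_err) / ms_gen            # heritability on an entry-mean basis
sa = np.sqrt(max(0.0, 1.0 - 1.0 / F))      # selective accuracy
print(F, cv, h2, sa)
```

Note that on this basis selective accuracy is simply the square root of entry-mean heritability, which is why the abstract reports the two statistics as directly related.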
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim
2014-09-01
The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but still a lot of questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, the multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data. © The Author(s) 2014.
Experimental and analytical studies for the NASA carbon fiber risk assessment
NASA Technical Reports Server (NTRS)
1980-01-01
Various experimental and analytical studies performed for the NASA carbon fiber risk assessment program are described, with emphasis on carbon fiber characteristics, the sensitivity of electrical equipment and components to shorting or arcing by carbon fibers, the attenuation effect of carbon fibers on aircraft landing aids, and the impact of carbon fibers on industrial facilities. A simple method of estimating damage from airborne carbon fibers is presented.
A Markerless 3D Computerized Motion Capture System Incorporating a Skeleton Model for Monkeys.
Nakamura, Tomoya; Matsumoto, Jumpei; Nishimaru, Hiroshi; Bretas, Rafael Vieira; Takamura, Yusaku; Hori, Etsuro; Ono, Taketoshi; Nishijo, Hisao
2016-01-01
In this study, we propose a novel markerless motion capture system (MCS) for monkeys, in which 3D surface images of monkeys were reconstructed by integrating data from four depth cameras, and a skeleton model of the monkey was fitted onto 3D images of monkeys in each frame of the video. To validate the MCS, first, estimated 3D positions of body parts were compared between the 3D MCS-assisted estimation and manual estimation based on visual inspection when a monkey performed a shuttling behavior in which it had to avoid obstacles in various positions. The mean estimation errors of the positions of body parts (3-14 cm) and of head rotation (35-43°) between the 3D MCS-assisted and manual estimations were comparable to the errors between two different experimenters performing manual estimation. Furthermore, the MCS could identify specific monkey actions, and there were no false positive or false negative detections of actions compared with manual estimation. Second, to check the reproducibility of MCS-assisted estimation, the same analyses of the above experiments were repeated by a different user. The estimation errors of positions of most body parts between the two experimenters were significantly smaller in the MCS-assisted estimation than in the manual estimation. Third, effects of methamphetamine (MAP) administration on the spontaneous behaviors of four monkeys were analyzed using the MCS. MAP significantly increased head movements, tended to decrease locomotion speed, and had no significant effect on total path length. The results were comparable to previous human clinical data. Furthermore, estimated data following MAP injection (total path length, walking speed, and speed of head rotation) correlated significantly between the two experimenters in the MCS-assisted estimation (r = 0.863 to 0.999).
The results suggest that the presented MCS in monkeys is useful in investigating neural mechanisms underlying various psychiatric disorders and developing pharmacological interventions.
Composing problem solvers for simulation experimentation: a case study on steady state estimation.
Leye, Stefan; Ewald, Roland; Uhrmacher, Adelinde M
2014-01-01
Simulation experiments involve various sub-tasks, e.g., parameter optimization, simulation execution, or output data analysis. Many algorithms can be applied to such tasks, but their performance depends on the given problem. Steady state estimation in systems biology is a typical example for this: several estimators have been proposed, each with its own (dis-)advantages. Experimenters, therefore, must choose from the available options, even though they may not be aware of the consequences. To support those users, we propose a general scheme to aggregate such algorithms to so-called synthetic problem solvers, which exploit algorithm differences to improve overall performance. Our approach subsumes various aggregation mechanisms, supports automatic configuration from training data (e.g., via ensemble learning or portfolio selection), and extends the plugin system of the open source modeling and simulation framework James II. We show the benefits of our approach by applying it to steady state estimation for cell-biological models.
Caiazzo, A; Caforio, Federica; Montecinos, Gino; Muller, Lucas O; Blanco, Pablo J; Toro, Eluterio F
2016-10-25
This work presents a detailed investigation of a parameter estimation approach on the basis of the reduced-order unscented Kalman filter (ROUKF) in the context of 1-dimensional blood flow models. In particular, the main aims of this study are (1) to investigate the effects of using real measurements versus synthetic data for the estimation procedure (i.e., numerical results of the same in silico model, perturbed with noise) and (2) to identify potential difficulties and limitations of the approach in clinically realistic applications to assess the applicability of the filter to such setups. For these purposes, the present numerical study is based on a recently published in vitro model of the arterial network, for which experimental flow and pressure measurements are available at few selected locations. To mimic clinically relevant situations, we focus on the estimation of terminal resistances and arterial wall parameters related to vessel mechanics (Young's modulus and wall thickness) using few experimental observations (at most a single pressure or flow measurement per vessel). In all cases, we first perform a theoretical identifiability analysis on the basis of the generalized sensitivity function, and then compare the results of the ROUKF, using either synthetic or experimental data, to results obtained using reference parameters and to available measurements. Copyright © 2016 John Wiley & Sons, Ltd.
Bottema-Beutel, Kristen; Lloyd, Blair; Carter, Erik W; Asmus, Jennifer M
2014-11-01
Attaining reliable estimates of observational measures can be challenging in school and classroom settings, as behavior can be influenced by multiple contextual factors. Generalizability (G) studies can enable researchers to estimate the reliability of observational data, and decision (D) studies can inform how many observation sessions are necessary to achieve a criterion level of reliability. We conducted G and D studies using observational data from a randomized control trial focusing on social and academic participation of students with severe disabilities in inclusive secondary classrooms. Results highlight the importance of anchoring observational decisions to reliability estimates from existing or pilot data sets. We outline steps for conducting G and D studies and address options when reliability estimates are lower than desired.
NASA Astrophysics Data System (ADS)
Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.
2017-01-01
Real-time estimation of the dynamical characteristics of thalamocortical (TC) cells, such as the dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field-programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, an FPGA-based unscented Kalman filter is implemented with a conductance-based TC neuron model. Since the complexity of the TC neuron model restrains its hardware implementation in a parallel structure, a cost-efficient model is proposed to reduce the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. Beyond estimating the hidden properties of the thalamus and exploring the mechanism of the Parkinsonian state, the proposed method can be useful in the dynamic clamp technique of electrophysiological experiments, in neural control engineering, and in brain-machine interface studies.
ERIC Educational Resources Information Center
Shin, Hye Sook
2009-01-01
Using data from a nationwide, large-scale experimental study of the effects of a connected classroom technology on student learning in algebra (Owens et al., 2004), this dissertation focuses on challenges that can arise in estimating treatment effects in educational field experiments when samples are highly heterogeneous in terms of various…
Applications of Small Area Estimation to Generalization with Subclassification by Propensity Scores
ERIC Educational Resources Information Center
Chan, Wendy
2018-01-01
Policymakers have grown increasingly interested in how experimental results may generalize to a larger population. However, recently developed propensity score-based methods are limited by small sample sizes, where the experimental study is generalized to a population that is at least 20 times larger. This is particularly problematic for methods…
Estimating the Stoichiometry of HIV Neutralization
Magnus, Carsten; Regoes, Roland R.
2010-01-01
HIV-1 virions infect target cells by first establishing contact between envelope glycoprotein trimers on the virion's surface and CD4 receptors on a target cell, recruiting co-receptors, fusing with the cell membrane and finally releasing the genetic material into the target cell. Specific experimental setups allow the study of the number of trimer-receptor-interactions needed for infection, i.e., the stoichiometry of entry, and also the number of antibodies needed to prevent one trimer from engaging successfully in the entry process, i.e., the stoichiometry of (trimer) neutralization. Mathematical models are required to infer the stoichiometric parameters from these experimental data. Recently, we developed mathematical models for the estimation of the stoichiometry of entry [1]. In this article, we show how our models can be extended to investigate the stoichiometry of trimer neutralization. We study how various biological parameters affect the estimate of the stoichiometry of neutralization. We find that the distribution of trimer numbers—which is also an important determinant of the stoichiometry of entry—influences the estimated value of the stoichiometry of neutralization. In contrast, other parameters, which characterize the experimental system, diminish the information we can extract from the data about the stoichiometry of neutralization, and thus reduce our confidence in the estimate. We illustrate the use of our models by re-analyzing previously published data [2], which contain measurements of the neutralization sensitivity of viruses with different envelope proteins to antibodies with various specificities. Our mathematical framework represents the formal basis for the estimation of the stoichiometry of neutralization. 
Together with the stoichiometry of entry, the stoichiometry of trimer neutralization will allow one to calculate how many antibodies are required to neutralize a virion or even an entire population of virions. PMID:20333245
Phelps, Geoffrey; Kelcey, Benjamin; Jones, Nathan; Liu, Shuangshuang
2016-10-03
Mathematics professional development is widely offered, typically with the goal of improving teachers' content knowledge, the quality of teaching, and ultimately students' achievement. Recently, new assessments focused on mathematical knowledge for teaching (MKT) have been developed to assist in the evaluation and improvement of mathematics professional development. This study presents empirical estimates of average program change in MKT and its variation with the goal of supporting the design of experimental trials that are adequately powered to detect a specified program effect. The study drew on a large database representing five different assessments of MKT and collectively 326 professional development programs and 9,365 teachers. Results from cross-classified hierarchical growth models found that standardized average change estimates across the five assessments ranged from a low of 0.16 standard deviations (SDs) to a high of 0.26 SDs. Power analyses using the estimated pre- and posttest change estimates indicated that hundreds of teachers are needed to detect changes in knowledge at the lower end of the distribution. Even studies powered to detect effects at the higher end of the distribution will require substantial resources to conduct rigorous experimental trials. Empirical benchmarks that describe average program change and its variation provide a useful preliminary resource for interpreting the relative magnitude of effect sizes associated with professional development programs and for designing adequately powered trials. © The Author(s) 2016.
Volumetric blood flow via time-domain correlation: experimental verification.
Embree, P M; O'Brien, W R
1990-01-01
A novel ultrasonic volumetric flow measurement method using time-domain correlation of consecutive pairs of echoes has been developed. An ultrasonic data acquisition system determined the time shift between a pair of range-gated echoes by searching for the time shift with the maximum correlation between the RF sampled waveforms. Experiments with a 5-MHz transducer indicate that the standard deviation of the estimate of steady fluid velocity through 6-mm-diameter tubes is less than 10% of the mean. Experimentally, Sephadex (G-50; 20–80 μm dia.) particles in water and fresh porcine blood have been used as ultrasound scattering fluids. Two-dimensional (2-D) flow velocity can be estimated by slowly sweeping the ultrasonic beam across the blood vessel phantom. Volumetric flow through the vessel is estimated by integrating the 2-D flow velocity field and then is compared to hydrodynamic flow measurements to assess the overall experimental accuracy of the time-domain method. Flow rates from 50-500 ml/min have been estimated with an accuracy better than 10% under the idealized characteristics used in this study, which include straight circular thin-walled tubes, laminar axially-symmetric steady flow, and no intervening tissues.
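The core of the time-domain correlation idea above can be sketched in a few lines: the time shift between two successive range-gated echoes is found by maximizing their cross-correlation, and velocity follows from that shift, the pulse interval, and the round-trip geometry. All parameter values below are illustrative stand-ins, not those of the 5-MHz experiments:

```python
import numpy as np

fs = 50e6     # RF sampling rate (Hz), illustrative
prf = 5e3     # pulse repetition frequency (Hz), illustrative
c = 1540.0    # speed of sound in a tissue-like fluid (m/s)
true_v = 0.3  # scatterer velocity along the beam (m/s)

# Echo 1: band-limited random scattering signal within one range gate.
rng = np.random.default_rng(3)
t = np.arange(512) / fs
echo1 = np.convolve(rng.normal(size=t.size), np.hanning(16), mode="same")

# Echo 2 is echo 1 delayed by the inter-pulse round-trip shift 2*v/(c*prf).
shift_samples = 2.0 * true_v / (c * prf) * fs
echo2 = np.roll(echo1, int(round(shift_samples)))

# Search for the lag maximizing the correlation between the two echoes.
lags = np.arange(-32, 33)
corr = [np.dot(echo1, np.roll(echo2, -k)) for k in lags]
best = lags[int(np.argmax(corr))]

# Convert the best lag back to an axial velocity estimate.
v_est = best / fs * c * prf / 2.0
print(v_est)  # near true_v, up to one-sample quantization
```

The residual error here is pure lag quantization (one sample corresponds to c·prf/(2·fs) ≈ 0.077 m/s at these illustrative settings); practical systems interpolate the correlation peak to do better.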
Lee, Jongsuh; Wang, Semyung; Pluymers, Bert; Desmet, Wim; Kindt, Peter
2015-02-01
Generally, the dynamic characteristics (natural frequency, damping, and mode shape) of a structure can be estimated by experimental modal analysis. Among these characteristics, mode shape requires measurements of the structure at multiple positions, which increases experimental cost and time. Recently, the Hilbert-Huang transform (HHT) method has been introduced to extract mode-shape information from a continuous measurement, in which vibration is measured continuously from one position to another with a non-contact sensor. In this study, the mode shapes of a rolling tire are estimated from a single continuous measurement instead of the conventional setup of measuring the vibration of the rolling tire at multiple positions, as is done for a non-rotating structure. For this purpose, the HHT, previously applied to continuous measurements of non-rotating structures, is applied here to a rotating system. Ambiguous mode combinations can occur in such a rotating system, and a method to overcome this ambiguity is proposed. In addition, a phenomenon specific to rotating systems is introduced, and its effect on the results obtained through the HHT is investigated.
NASA Astrophysics Data System (ADS)
Pérez Gutiérrez, B. R.; Vera-Rivera, F. H.; Niño, E. D. V.
2016-08-01
Estimating the ionic charge generated in electrical discharges allows more accurate determination of the concentration of ions implanted on the surfaces of nonmetallic solids. For this reason, a web application was developed in this research to calculate the ionic charge generated in an electrical discharge from the experimental parameters established in an ion implantation process performed in the JUPITER (Joint Universal Plasma and Ion Technologies Experimental Reactor) reactor. The estimated value of the ionic charge is determined from data acquired on an oscilloscope during startup and shutdown of the electrical discharge, which are then analyzed and processed. The study will support further developments in the application of ion implantation in various industrial sectors.
NASA Astrophysics Data System (ADS)
Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio
The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider the anatomical European human phantoms and plane-wave in the 2GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.
Li, S; Oreffo, ROC; Sengers, BG; Tare, RS
2014-01-01
Significant oxygen gradients occur within tissue engineered cartilaginous constructs. Although oxygen tension is an important limiting parameter in the development of new cartilage matrix, its precise role in matrix formation by chondrocytes remains controversial, primarily due to discrepancies in the experimental setup applied in different studies. In this study, the specific effects of oxygen tension on the synthesis of cartilaginous matrix by human articular chondrocytes were studied using a combined experimental-computational approach in a “scaffold-free” 3D pellet culture model. Key parameters including cellular oxygen uptake rate were determined experimentally and used in conjunction with a mathematical model to estimate oxygen tension profiles in 21-day cartilaginous pellets. A threshold oxygen tension (pO2 ≈ 8% atmospheric pressure) for human articular chondrocytes was estimated from these inferred oxygen profiles and histological analysis of pellet sections. Human articular chondrocytes that experienced oxygen tension below this threshold demonstrated enhanced proteoglycan deposition. Conversely, oxygen tension higher than the threshold favored collagen synthesis. This study has demonstrated a close relationship between oxygen tension and matrix synthesis by human articular chondrocytes in a “scaffold-free” 3D pellet culture model, providing valuable insight into the understanding and optimization of cartilage bioengineering approaches. Biotechnol. Bioeng. 2014;111: 1876–1885. PMID:24668194
Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.
Spiess, Martin; Jordan, Pascal; Wendt, Mike
2018-05-07
In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task switch experiment based on a within-subjects experimental design with 32 cells and 33 participants.
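The naive percentile bootstrap the abstract mentions can be sketched in a few lines: resample with replacement, recompute the statistic, and take empirical quantiles of the resampled statistics (the bias-corrected and accelerated variant adds correction terms not shown here). The resample count and statistic are illustrative:

```python
import random

def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Naive percentile bootstrap CI: resample with replacement n_boot times,
    evaluate stat on each resample, return the alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    stats = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

For example, a 95% interval for the mean of 1..100 should bracket the true mean of 50.5.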
Determination of some pure compound ideal-gas enthalpies of formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steele, W. V.; Chirico, R. D.; Nguyen, A.
1989-06-01
The results of a study aimed at improvement of group-additivity methodology for estimation of thermodynamic properties of organic substances are reported. Specific weaknesses where ring corrections were unknown or next-nearest-neighbor interactions were only estimated because of lack of experimental data are addressed by experimental studies of enthalpies of combustion in the condensed phase and vapor pressure measurements. Ideal-gas enthalpies of formation are reported for acrylamide, succinimide, γ-butyrolactone, 2-pyrrolidone, 2,3-dihydrofuran, 3,4-dihydro-2H-pyran, 1,3-cyclohexadiene, 1,4-cyclohexadiene, and 1-methyl-1-phenylhydrazine. Ring corrections, group terms, and next-nearest-neighbor interaction terms useful in the application of group additivity correlations are derived. 44 refs., 2 figs., 59 tabs.
Eisenberg, Marisa C; Jain, Harsh V
2017-10-27
Mathematical modeling has a long history in the field of cancer therapeutics, and there is increasing recognition that it can help uncover the mechanisms that underlie tumor response to treatment. However, making quantitative predictions with such models often requires parameter estimation from data, raising questions of parameter identifiability and estimability. Even in the case of structural (theoretical) identifiability, imperfect data and the resulting practical unidentifiability of model parameters can make it difficult to infer the desired information, and in some cases, to yield biologically correct inferences and predictions. Here, we examine parameter identifiability and estimability using a case study of two compartmental, ordinary differential equation models of cancer treatment with drugs that are cell cycle-specific (taxol) as well as non-specific (oxaliplatin). We proceed through model building, structural identifiability analysis, parameter estimation, practical identifiability analysis and its biological implications, as well as alternative data collection protocols and experimental designs that render the model identifiable. We use the differential algebra/input-output relationship approach for structural identifiability, and primarily the profile likelihood approach for practical identifiability. Despite the models being structurally identifiable, we show that without consideration of practical identifiability, incorrect cell cycle distributions can be inferred, that would result in suboptimal therapeutic choices. We illustrate the usefulness of estimating practically identifiable combinations (in addition to the more typically considered structurally identifiable combinations) in generating biologically meaningful insights. We also use simulated data to evaluate how the practical identifiability of the model would change under alternative experimental designs. 
These results highlight the importance of understanding the underlying mechanisms rather than purely using parsimony or information criteria/goodness-of-fit to decide model selection questions. The overall roadmap for identifiability testing laid out here can be used to help provide mechanistic insight into complex biological phenomena, reduce experimental costs, and optimize model-driven experimentation. Copyright © 2017. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Xue, Yuanyuan; Wang, Zujun; He, Baoping; Yao, Zhibin; Liu, Minbo; Ma, Wuying; Sheng, Jiangkun; Dong, Guantao; Jin, Junshan
2017-12-01
The CMOS image sensors (CISs) are irradiated with neutrons from a nuclear reactor. The dark signal in CISs affected by neutron radiation is studied theoretically and experimentally. The primary knock-on atom (PKA) energy spectra for 1 MeV incident neutrons are simulated by Geant4, and theoretical models for the mean dark signal, dark signal non-uniformity (DSNU) and dark signal distribution versus neutron fluence are established. The results are found to be in good agreement with the experimental outputs. Finally, the dark signal in the CISs under different neutron fluence conditions is estimated. This study provides theoretical and experimental evidence for the displacement damage effects on the dark signal of CISs.
ERIC Educational Resources Information Center
Bordoloi Pazich, Loni
2014-01-01
This study uses statewide longitudinal data from Texas to estimate the impact of a state grant program intended to encourage low-income community college students to transfer to four-year institutions and complete the baccalaureate. Quasi-experimental methods employed include propensity score matching and regression discontinuity. Results indicate…
USDA-ARS?s Scientific Manuscript database
The impact of rater bias and assessment method on hypothesis testing was studied for different experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed ‘balanced’, and those ...
Water Withdrawn From the Luquillo Experimental Forest, 2004
Kelly E. Crook; Fred N. Scatena; Catherine M. Pringle
2007-01-01
This study quantifies the amount of water withdrawn from the Luquillo Experimental Forest (LEF) in 2004. Spatially averaged mean monthly water budgets were generated for watersheds draining the LEF by combining long-term data from various government agencies with estimated extraction data. Results suggest that, on a typical day, 70 percent of water generated within the...
NASA Astrophysics Data System (ADS)
Bruschewski, Martin; Freudenhammer, Daniel; Buchenberg, Waltraud B.; Schiffer, Heinz-Peter; Grundmann, Sven
2016-05-01
Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75 % is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented.
Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2017-05-01
The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data was evaluated in a bluff body wake. To allow for the ground truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
NASA Astrophysics Data System (ADS)
Lahiri, B. B.; Ranoo, Surojit; Philip, John
2017-11-01
Magnetic fluid hyperthermia (MFH) is becoming a viable cancer treatment methodology where the alternating magnetic field induced heating of magnetic fluid is utilized for ablating the cancerous cells or making them more susceptible to the conventional treatments. The heating efficiency in MFH is quantified in terms of specific absorption rate (SAR), which is defined as the heating power generated per unit mass. In the majority of the experimental studies, SAR is evaluated from the temperature rise curves, obtained under non-adiabatic experimental conditions, which is prone to various thermodynamic uncertainties. A proper understanding of the experimental uncertainties and their remedies is a prerequisite for obtaining accurate and reproducible SAR. Here, we study the thermodynamic uncertainties associated with peripheral heating, delayed heating, heat loss from the sample and spatial variation in the temperature profile within the sample. Using first order approximations, an adiabatic reconstruction protocol for the measured temperature rise curves is developed for SAR estimation, which is found to be in good agreement with those obtained from the computationally intense slope corrected method. Our experimental findings clearly show that the peripheral and delayed heating are due to radiation heat transfer from the heating coils and slower response time of the sensor, respectively. Our results suggest that the peripheral heating is linearly proportional to the sample area to volume ratio and coil temperature. It is also observed that peripheral heating decreases in the presence of a non-magnetic insulating shielding. The delayed heating is found to contribute up to ~25% uncertainty in SAR values. As the SAR values are very sensitive to the initial slope determination method, explicit mention of the range of linear regression analysis is appropriate to reproduce the results. 
The effect of sample volume to area ratio on linear heat loss rate is systematically studied and the results are compared using a lumped system thermal model. The various uncertainties involved in SAR estimation are categorized as material uncertainties, thermodynamic uncertainties and parametric uncertainties. The adiabatic reconstruction is found to decrease the uncertainties in SAR measurement by approximately three times. Additionally, a set of experimental guidelines for accurate SAR estimation using adiabatic reconstruction protocol is also recommended. These results warrant a universal experimental and data analysis protocol for SAR measurements during field induced heating of magnetic fluids under non-adiabatic conditions.
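The abstract's warning about the initial slope determination can be made concrete with the common initial-slope SAR estimate, SAR = m_fluid * c_p * (dT/dt) / m_magnetic, with the slope taken by least squares over an explicitly stated number of early samples. This is a generic sketch of that textbook estimate, not the authors' adiabatic reconstruction protocol; parameter names and the fit range are illustrative:

```python
def sar_initial_slope(times, temps, c_p, mass_fluid, mass_magnetic, fit_points=5):
    """SAR (W per g of magnetic material) from the initial linear slope of a
    heating curve. The fit range (fit_points) must be reported, since SAR is
    sensitive to this choice."""
    t = list(times[:fit_points])
    T = list(temps[:fit_points])
    n = len(t)
    tbar = sum(t) / n
    Tbar = sum(T) / n
    # ordinary least-squares slope dT/dt over the chosen initial window
    slope = (sum((ti - tbar) * (Ti - Tbar) for ti, Ti in zip(t, T))
             / sum((ti - tbar) ** 2 for ti in t))
    return mass_fluid * c_p * slope / mass_magnetic
```

For a perfectly linear rise of 0.05 K/s in 1 g of fluid (c_p = 4.18 J/g/K) containing 0.01 g of particles, this yields 20.9 W/g.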
Combining Propensity Score Methods and Complex Survey Data to Estimate Population Treatment Effects
ERIC Educational Resources Information Center
Stuart, Elizabeth A.; Dong, Nianbo; Lenis, David
2016-01-01
Complex surveys are often used to estimate causal effects regarding the effects of interventions or exposures of interest. Propensity scores (Rosenbaum & Rubin, 1983) have emerged as one popular and effective tool for causal inference in non-experimental studies, as they can help ensure that groups being compared are similar with respect to a…
Assessing the Generalizability of Estimates of Causal Effects from Regression Discontinuity Designs
ERIC Educational Resources Information Center
Bloom, Howard S.; Porter, Kristin E.
2012-01-01
In recent years, the regression discontinuity design (RDD) has gained widespread recognition as a quasi-experimental method that when used correctly, can produce internally valid estimates of causal effects of a treatment, a program or an intervention (hereafter referred to as treatment effects). In an RDD study, subjects or groups of subjects…
Fisher, Sir Ronald Aylmer (1890-1962)
NASA Astrophysics Data System (ADS)
Murdin, P.
2000-11-01
Statistician, born in London, England. After studying astronomy using AIRY's manual on the Theory of Errors he became interested in statistics, and laid the foundation of randomization in experimental design, the analysis of variance and the use of data in estimating the properties of the parent population from which it was drawn. Invented the maximum likelihood method for estimating from random ...
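Fisher's maximum likelihood method is easiest to see in its simplest closed-form case, the normal sample, where maximizing the likelihood gives the sample mean and the variance divided by n (not n-1). This is a textbook illustration, not drawn from the source:

```python
def normal_mle(xs):
    """Maximum likelihood estimates for a normal sample:
    mu_hat = sample mean, sigma2_hat = sum((x - mu_hat)^2) / n.
    Note the ML variance divides by n, which makes it biased."""
    n = len(xs)
    mu = sum(xs) / n
    sigma2 = sum((x - mu) ** 2 for x in xs) / n
    return mu, sigma2
```

For data [1, 2, 3, 4] the estimates are mu = 2.5 and sigma^2 = 1.25.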
Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J
2015-04-01
The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objectives of the current study were to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and to assess the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of SID Lys requirements. However, the best gain-to-feed ratio (G : F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of Lys requirements. Terminal line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal line pigs but both genetic lines had similar ADG and protein deposition rates during the two phases. 
The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be used in precision feeding systems without adjustments. However, the method's ability to accommodate large genetic differences in feed intake and protein deposition patterns needs to be studied further.
Estimation of the object orientation and location with the use of MEMS sensors
NASA Astrophysics Data System (ADS)
Sawicki, Aleksander; Walendziuk, Wojciech; Idzkowski, Adam
2015-09-01
The article presents the implementation of algorithms for estimating orientation in 3D space and displacement of an object in 2D space. Moreover, general methods of storing orientation using Euler angles, quaternions and rotation matrices are presented. The experimental part presents the results of a complementary filter implementation. In the study, an experimental microprocessor module based on the STM32F4 Discovery board and a myRIO hardware platform equipped with an FPGA were used. The attempt to track an object in two-dimensional space, which is shown in the final part of this article, was made with the use of the equipment mentioned above.
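A single-axis complementary filter of the kind implemented in such studies can be sketched in one update rule: high-pass the integrated gyroscope rate, low-pass the accelerometer-derived angle, and blend. This is a generic form with an assumed blending coefficient, not the authors' exact implementation:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter step: trust the integrated gyro on short time scales
    (weight alpha) and the drift-free accelerometer angle on long ones
    (weight 1 - alpha). Units: angle in degrees, rate in deg/s, dt in s."""
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Run in a loop over IMU samples; alpha near 1 suppresses accelerometer noise while the accelerometer term removes gyro drift.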
NASA Astrophysics Data System (ADS)
Malov, Alexander N.; Neupokoeva, Anna V.; Kokorina, Lubov A.; Simonova, Elena V.
2016-11-01
The results of microbiological observation of bacterial colony growth on laser-photomodified nutrient media and antibiotics are discussed. It is shown experimentally that growth of bacterial colonies is delayed on the irradiated media. The influence of laser radiation on the activity of an antibiotic is also studied experimentally; laser photomodification is found to increase the antimicrobial activity of the preparation. The mechanism of activation of biological solutions is connected with the phenomenon of laser nanoclusterization. Parameters of bacterial growth dynamics allow the degree of laser activation of nutrient media and pharmaceutical preparations to be estimated numerically.
NASA Astrophysics Data System (ADS)
Rao, D. V.; Cesareo, R.; Brunetti, A.; Gigante, G. E.; Takeda, T.; Itai, Y.; Akatsuka, T.
2002-10-01
A new approach is developed to estimate the geometrical factors, solid angle approximation and geometrical efficiency for a system with experimental arrangements using X-ray tube and secondary target as an excitation source in order to produce the nearly monoenergetic Kα radiation to excite the sample. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced and the optimum value is used for the experimental work.
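For intuition about the quantities involved, the on-axis solid angle subtended by a single circular aperture has the closed form Ω = 2π(1 - d/√(d² + r²)), with geometrical efficiency Ω/4π. This is the textbook special case only, not the full source-collimator-sample geometry optimized in the study:

```python
import math

def on_axis_solid_angle(d, r):
    """Solid angle (sr) subtended at an on-axis point by a circular aperture
    of radius r at perpendicular distance d."""
    return 2.0 * math.pi * (1.0 - d / math.sqrt(d * d + r * r))

def geometrical_efficiency(d, r):
    """Fraction of isotropic emission passing through the aperture."""
    return on_axis_solid_angle(d, r) / (4.0 * math.pi)
```

As d approaches 0 the aperture covers a hemisphere (2π sr, efficiency 0.5); lengthening the collimator (increasing d) shrinks the accepted solid angle.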
Paul B. Alaback; Duncan C. Lutes
1997-01-01
Methods for the quantification of coarse woody debris volume and the description of spatial patterning were studied in the Tenderfoot Creek Experimental Forest, Montana. The line transect method was found to be an accurate, unbiased estimator of down debris volume (> 10cm diameter) on 1/4 hectare fixed-area plots, when perpendicular lines were used. The Fischer...
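The line transect method evaluated above is commonly implemented with Van Wagner's line-intersect estimator, V = π² Σ d_i² / (8L), where d_i are the diameters of pieces crossed by the transect and L is the transect length. The formula below is that standard estimator and may differ in detail from the study's exact protocol:

```python
import math

def line_transect_volume(diameters_m, transect_length_m):
    """Van Wagner line-intersect estimate of down woody debris volume per
    unit area (m^3 per m^2): V = pi^2 * sum(d_i^2) / (8 * L), with piece
    diameters d_i (m) measured at the transect crossing and length L (m)."""
    return math.pi ** 2 * sum(d * d for d in diameters_m) / (8.0 * transect_length_m)
```

For example, two pieces of 0.2 m and 0.3 m diameter on a 100 m transect give about 0.0016 m³/m² (16 m³/ha).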
Experimental Study on Impact Load on a Dam Due to Debris Flow
lwao Miyoshi
1991-01-01
When a dam is struck by mud or debris flow, it is put under a great impact load and sometimes is destroyed. To prevent such destruction, it is important to perform basic research about the impact load on a dam due to debris flow. Thus, we have made an experimental study and tried to establish a method to estimate such an impact load on the dam. The experiment was...
Stratton, Shawna L; Henrich, Cindy L; Matthews, Nell I; Bogusiewicz, Anna; Dawson, Amanda M; Horvath, Thomas D; Owen, Suzanne N; Boysen, Gunnar; Moran, Jeffery H; Mock, Donald M
2012-01-01
To date, marginal, asymptomatic biotin deficiency has been successfully induced experimentally by the use of labor-intensive inpatient designs requiring rigorous dietary control. We sought to determine if marginal biotin deficiency could be induced in humans in a less expensive outpatient design incorporating a self-selected, mixed general diet. We sought to examine the efficacy of three outpatient study designs: two based on oral avidin dosing and one based on a diet high in undenatured egg white for a period of 28 d. In study design 1, participants (n = 4; 3 women) received avidin in capsules with a biotin binding capacity of 7 times the estimated dietary biotin intake of a typical self-selected diet. In study design 2, participants (n = 2; 2 women) received double the amount of avidin capsules (14 times the estimated dietary biotin intake). In study design 3, participants (n = 5; 3 women) consumed egg-white beverages containing avidin with a biotin binding capacity of 7 times the estimated dietary biotin intake. Established indices of biotin status [lymphocyte propionyl-CoA carboxylase activity; urinary excretion of 3-hydroxyisovaleric acid, 3-hydroxyisovaleryl carnitine (3HIA-carnitine), and biotin; and plasma concentration of 3HIA-carnitine] indicated that study designs 1 and 2 were not effective in inducing marginal biotin deficiency, but study design 3 was as effective as previous inpatient study designs that induced deficiency by egg-white beverage. Marginal biotin deficiency can be induced experimentally by using a cost-effective outpatient design by avidin delivery in egg-white beverages. This design should be useful to the broader nutritional research community.
Experimental demonstration of cheap and accurate phase estimation
NASA Astrophysics Data System (ADS)
Rudinger, Kenneth; Kimmel, Shelby; Lobser, Daniel; Maunz, Peter
We demonstrate experimental implementation of robust phase estimation (RPE) to learn the phases of X and Y rotations on a trapped Yb+ ion qubit. Unlike many other phase estimation protocols, RPE requires neither ancillae nor near-perfect state preparation and measurement operations. Additionally, its computational requirements are minimal. Via RPE, using only 352 experimental samples per phase, we estimate phases of implemented gates with errors as small as 10^-4 radians, as validated using gate set tomography. We also demonstrate that these estimates exhibit Heisenberg scaling in accuracy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Owman, T
1981-07-01
In an experimental model in the rabbit, the excretion of sodium diatrizoate and meglumine diatrizoate was compared. Urographic density, estimated through renal pelvic volume as calculated according to previous experiments (Owman 1978; Owman & Olin 1980) and urinary iodine concentration, is suggested to be more accurate than mere determination of urine iodine concentration and diuresis when evaluating and comparing urographic contrast media experimentally. More reliable dose optima are probably found when calculating density rather than determining urine concentrations. Of the media examined in this investigation, the sodium salt of diatrizoate was not superior to the meglumine salt in dose ranges up to 320 mg I/kg body weight, while at higher doses sodium diatrizoate gave higher urinary iodine concentrations and higher estimated density.
SBML-PET: a Systems Biology Markup Language-based parameter estimation tool.
Zi, Zhike; Klipp, Edda
2006-11-01
The estimation of model parameters from experimental data remains a bottleneck for a major breakthrough in systems biology. We present a Systems Biology Markup Language (SBML) based Parameter Estimation Tool (SBML-PET). The tool is designed to enable parameter estimation for biological models including signaling pathways, gene regulation networks and metabolic pathways. SBML-PET supports import and export of the models in the SBML format. It can estimate the parameters by fitting a variety of experimental data from different experimental conditions. SBML-PET has a unique feature of supporting event definition in the SBML model. SBML models can also be simulated in SBML-PET. Stochastic Ranking Evolution Strategy (SRES) is incorporated in SBML-PET for parameter estimation tasks. A classic ODE solver called ODEPACK is used to solve the Ordinary Differential Equation (ODE) system. SBML-PET is available at http://sysbio.molgen.mpg.de/SBML-PET/; the website also contains detailed documentation for SBML-PET.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKone, Thomas E.; Maddalena, Randy L.
2007-01-01
The role of terrestrial vegetation in transferring chemicals from soil and air into specific plant tissues (stems, leaves, roots, etc.) is still not well characterized. We provide here a critical review of plant-to-soil bioconcentration ratio (BCR) estimates based on models and experimental data. This review includes the conceptual and theoretical formulations of the bioconcentration ratio, constructing and calibrating empirical and mathematical algorithms to describe this ratio and the experimental data used to quantify BCRs and calibrate the model performance. We first evaluate the theoretical basis for the BCR concept and BCR models and consider how lack of knowledge and data limits reliability and consistency of BCR estimates. We next consider alternate modeling strategies for BCR. A key focus of this evaluation is the relative contributions to overall uncertainty from model uncertainty versus variability in the experimental data used to develop and test the models. As a case study, we consider a single chemical, hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX), and focus on variability of bioconcentration measurements obtained from 81 experiments with different plant species, different plant tissues, different experimental conditions, and different methods for reporting concentrations in the soil and plant tissues. We use these observations to evaluate both the magnitude of experimental variability in plant bioconcentration and compare this to model uncertainty. Among these 81 measurements, the variation of the plant/soil BCR has a geometric standard deviation (GSD) of 3.5 and a coefficient of variability (CV; the ratio of arithmetic standard deviation to mean) of 1.7. These variations are significant but low relative to model uncertainties, which have an estimated GSD of 10 with a corresponding CV of 14.
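The two variability statistics quoted above (GSD and CV) can be computed directly from a set of measurements; this is a generic sketch, not tied to the RDX data set:

```python
import math

def gsd(samples):
    """Geometric standard deviation: exp of the (sample) SD of log-values."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((v - mu) ** 2 for v in logs) / (n - 1)
    return math.exp(math.sqrt(var))

def cv(samples):
    """Coefficient of variability: arithmetic SD divided by arithmetic mean."""
    n = len(samples)
    m = sum(samples) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in samples) / (n - 1))
    return sd / m
```

For a perfectly lognormal sample spanning factors of 10 (e.g. 1, 10, 100) the GSD is exactly 10; for skewed data the CV and GSD diverge, which is why the review reports both.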
Mousa-Pasandi, Mohammad E; Zhuge, Qunbi; Xu, Xian; Osman, Mohamed M; El-Sahn, Ziad A; Chagnon, Mathieu; Plant, David V
2012-07-02
We experimentally investigate the performance of a low-complexity non-iterative phase noise induced inter-carrier interference (ICI) compensation algorithm in reduced-guard-interval dual-polarization coherent-optical orthogonal-frequency-division-multiplexing (RGI-DP-CO-OFDM) transport systems. This interpolation-based ICI compensator estimates the time-domain phase noise samples by a linear interpolation between the CPE estimates of consecutive OFDM symbols. We experimentally study the performance of this scheme for a 28 Gbaud QPSK RGI-DP-CO-OFDM employing a low cost distributed feedback (DFB) laser. Experimental results using a DFB laser with the linewidth of 2.6 MHz demonstrate 24% and 13% improvement in transmission reach with respect to the conventional equalizer (CE) in the presence of weak and strong dispersion-enhanced-phase-noise (DEPN), respectively. A brief analysis of the computational complexity of this scheme in terms of the number of required complex multiplications is provided. This practical approach does not suffer from error propagation while enjoying low computational complexity.
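The interpolation step of such a compensator can be sketched as follows: one common-phase-error (CPE) estimate per OFDM symbol is expanded to a per-sample phase track by linear interpolation between neighboring symbols. Details such as where each CPE estimate is anchored within its symbol are simplified assumptions here, not the paper's exact scheme:

```python
def interpolate_phase(cpe_phases, samples_per_symbol):
    """Linearly interpolate per-symbol CPE estimates (radians) to one phase
    value per time-domain sample, anchoring each estimate at its symbol start."""
    track = []
    for k in range(len(cpe_phases) - 1):
        a, b = cpe_phases[k], cpe_phases[k + 1]
        for m in range(samples_per_symbol):
            track.append(a + (b - a) * m / samples_per_symbol)
    track.append(cpe_phases[-1])  # final anchor point
    return track
```

Each sample is then de-rotated by exp(-j*phase) before the FFT, which is what suppresses the ICI relative to applying a single CPE per symbol.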
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Iliff, Kenneth
2002-01-01
A maximum-likelihood output-error parameter estimation technique is used to obtain stability and control derivatives for the NASA Dryden Flight Research Center SR-71A airplane and for configurations that include experiments externally mounted to the top of the fuselage. This research is being done as part of the envelope clearance for the new experiment configurations. Flight data are obtained at speeds ranging from Mach 0.4 to Mach 3.0, with an extensive amount of test points at approximately Mach 1.0. Pilot-input pitch and yaw-roll doublets are used to obtain the data. This report defines the parameter estimation technique used, presents stability and control derivative results, and compares the derivatives for the three configurations tested. The experimental configurations studied generally show acceptable stability, control, trim, and handling qualities throughout the Mach regimes tested. The reduction of directional stability for the experimental configurations is the most significant aerodynamic effect measured and identified as a design constraint for future experimental configurations. This report also shows the significant effects of aircraft flexibility on the stability and control derivatives.
Sato, Masashi; Yamashita, Okito; Sato, Masa-Aki; Miyawaki, Yoichi
2018-01-01
To understand information representation in human brain activity, it is important to investigate its fine spatial patterns at high temporal resolution. One possible approach is to use source estimation of magnetoencephalography (MEG) signals. Previous studies have mainly quantified accuracy of this technique according to positional deviations and dispersion of estimated sources, but it remains unclear how accurately MEG source estimation restores information content represented by spatial patterns of brain activity. In this study, using simulated MEG signals representing artificial experimental conditions, we performed MEG source estimation and multivariate pattern analysis to examine whether MEG source estimation can restore information content represented by patterns of cortical current in source brain areas. Classification analysis revealed that the corresponding artificial experimental conditions were predicted accurately from patterns of cortical current estimated in the source brain areas. However, accurate predictions were also possible from brain areas whose original sources were not defined. Searchlight decoding further revealed that this unexpected prediction was possible across wide brain areas beyond the original source locations, indicating that information contained in the original sources can spread through MEG source estimation. This phenomenon of "information spreading" may easily lead to false-positive interpretations when MEG source estimation and classification analysis are combined to identify brain areas that represent target information. Real MEG data analyses also showed that presented stimuli were able to be predicted in the higher visual cortex at the same latency as in the primary visual cortex, also suggesting that information spreading took place. These results indicate that careful inspection is necessary to avoid false-positive interpretations when MEG source estimation and multivariate pattern analysis are combined.
NASA Astrophysics Data System (ADS)
Mehdinejadiani, Behrouz
2017-08-01
This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Both numerical and experimental studies were performed to verify the validity of the Bees Algorithm. The experimental studies were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routine in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm estimated the sFADE parameters considerably more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. The results obtained from the Bees Algorithm were also more reliable than those from the Genetic Algorithm. The Bees Algorithm performed similarly across all cases, while the Genetic Algorithm and LSQNONLIN varied in performance from case to case. The performance of LSQNONLIN depends strongly on the initial guess values, so that, given suitable initial guesses, it can estimate the sFADE parameters more accurately than the Genetic Algorithm. In summary, the Bees Algorithm was found to be a very simple, robust and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.
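The optimization setup described above can be sketched generically: a minimal Bees Algorithm (scout bees sample the search space at random; recruited bees refine the best sites in shrinking neighbourhoods) minimizing a sum-of-squared-errors objective. The toy forward model, bounds and tuning constants below are illustrative assumptions, not the sFADE solver or settings used in the study.

```python
import random

def bees_algorithm(objective, bounds, n_scouts=20, n_best=5, n_recruits=10,
                   patch=0.1, iters=100, seed=1):
    """Minimal Bees Algorithm: scouts sample the space at random, recruited
    bees refine the best sites, and the patch (neighbourhood) shrinks."""
    rng = random.Random(seed)
    rand_point = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    sites = sorted((objective(x), x) for x in (rand_point() for _ in range(n_scouts)))
    for _ in range(iters):
        new_sites = []
        for fit, x in sites[:n_best]:          # exploit the best sites
            best_fit, best_x = fit, x
            for _ in range(n_recruits):
                cand = [min(max(xi + rng.uniform(-patch, patch) * (hi - lo), lo), hi)
                        for xi, (lo, hi) in zip(best_x, bounds)]
                f = objective(cand)
                if f < best_fit:
                    best_fit, best_x = f, cand
            new_sites.append((best_fit, best_x))
        # the remaining bees keep scouting at random for new sites
        new_sites += [(objective(p), p) for p in
                      (rand_point() for _ in range(n_scouts - n_best))]
        sites = sorted(new_sites)
        patch *= 0.95                          # shrink neighbourhoods over time
    return sites[0]

# toy inverse problem (NOT the sFADE): recover (a, b) of y = a*t + b*t^2
true_params = (0.8, 0.05)
model = lambda p, t: p[0] * t + p[1] * t ** 2
obs = [(t, model(true_params, t)) for t in range(1, 11)]
sse = lambda p: sum((model(p, t) - y) ** 2 for t, y in obs)
best_fit, best_p = bees_algorithm(sse, [(0.0, 2.0), (0.0, 1.0)])
```

The shrinking patch trades exploration for refinement, which is one reason such population methods are less sensitive to initial guesses than a local routine like LSQNONLIN.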
1981-08-01
electro-optic effect is investigated both theoretically and experimentally. The theoretical approach is based upon W.A. Harrison's 'Bond-Orbital Model'. The separate electronic and lattice contributions to the second-order electro-optic susceptibility are examined within the context of this model, and formulae which can accommodate any crystal structure are presented. In addition, a method for estimating the lattice response to a low frequency (dc) electric field is outlined. Finally, experimental measurements of the electro-
ERIC Educational Resources Information Center
Rossi, Robert Joseph
Methods drawn from four logical theories associated with studies of inductive processes are applied to the assessment and evaluation of experimental episode construct validity. It is shown that this application provides for estimates of episode informativeness with respect to the person examined in terms of the construct and to the construct…
"No Excuses" Charter Schools: A Meta-Analysis of the Experimental Evidence on Student Achievement
ERIC Educational Resources Information Center
Cheng, Albert; Hitt, Collin; Kisida, Brian; Mills, Jonathan N.
2017-01-01
Many of the most well-known charter schools in the United States use a "No Excuses" approach. We conduct the first meta-analysis of the achievement impacts of No Excuses charter schools, focusing on experimental, lottery-based studies. We estimate that No Excuses charter schools increase student math and literacy achievement by 0.25 and 0.17,…
NASA Astrophysics Data System (ADS)
Kim, Jaewook; Lee, W.-J.; Jhang, Hogun; Kaang, H. H.; Ghim, Y.-C.
2017-10-01
Stochastic magnetic fields are thought to be one of the possible mechanisms for anomalous transport of density, momentum and heat across the magnetic field lines. The Kubo number and Chirikov parameter quantify this stochasticity, and previous studies show that perpendicular transport strongly depends on the magnetic Kubo number (MKN). If the MKN is smaller than one, the diffusion process follows the Rechester-Rosenbluth model; if it is larger than one, percolation theory dominates the diffusion process. Thus, estimation of the Kubo number plays an important role in understanding the diffusion process caused by stochastic magnetic fields. However, spatially localized experimental measurement of fluctuating magnetic fields in a tokamak is difficult, and we attempt to estimate MKNs using BOUT++ simulation data with pedestal collapse. In addition, we calculate correlation lengths of the fluctuating pressure and Chirikov parameters to investigate the variation of correlation lengths in the simulation. We then discuss how one may experimentally estimate MKNs.
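As a rough illustration of the threshold logic above: one common form of the magnetic Kubo number multiplies the relative field-line perturbation by the ratio of parallel to perpendicular correlation lengths. Definitions vary across the literature, so this sketch is an assumption-laden toy, not the diagnostic applied to the BOUT++ data.

```python
def magnetic_kubo_number(db_over_b, l_parallel, l_perp):
    """One common (assumed) form: relative magnetic perturbation amplitude
    times the ratio of parallel to perpendicular correlation lengths."""
    return db_over_b * (l_parallel / l_perp)

def diffusion_regime(kubo):
    # MKN < 1: Rechester-Rosenbluth diffusion; MKN > 1: percolation scaling
    return "Rechester-Rosenbluth" if kubo < 1.0 else "percolation"

# hypothetical numbers: perturbation amplitudes and correlation lengths (m)
weak = magnetic_kubo_number(1e-4, l_parallel=10.0, l_perp=0.01)
strong = magnetic_kubo_number(1e-2, l_parallel=10.0, l_perp=0.01)
```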
Rousu, Matthew C.; Thrasher, James F.
2014-01-01
Experimental and observational research often involves asking consumers to self-report the impact of some proposed option. Because self-reported responses involve no consequence to the respondent for falsely revealing how he or she feels about an issue, self-reports may be subject to social desirability and other influences that bias responses in important ways. In this article, we analyzed data from an experiment on the impact of cigarette packaging and pack warnings, comparing smokers’ self-reported impact (four-item scale) and the bids they placed in experimental auctions to estimate differences in demand. The results were consistent across methods; however, the estimated effect size associated with different warning labels was two times greater for the four-item self-reported response scale when compared to the change in demand as indicated by auction bids. Our study provides evidence that self-reported psychosocial responses provide a valid proxy for behavioral change as reflected by experimental auction bidding behavior. More research is needed to better understand the advantages and disadvantages of behavioral economic methods and traditional self-report approaches to evaluating health behavior change interventions. PMID:24399267
Xie, Weizhen; Zhang, Weiwei
2017-11-01
The present study dissociated the number (i.e., quantity) and precision (i.e., quality) of visual short-term memory (STM) representations in change detection using receiver operating characteristic (ROC) and experimental manipulations. Across three experiments, participants performed both recognition and recall tests of visual STM using the change-detection task and the continuous color-wheel recall task, respectively. Experiment 1 demonstrated that the estimates of the number and precision of visual STM representations based on the ROC model of change-detection performance were robustly correlated with the corresponding estimates based on the mixture model of continuous-recall performance. Experiments 2 and 3 showed that the experimental manipulation of mnemonic precision using white-noise masking and the experimental manipulation of the number of encoded STM representations using consolidation masking produced selective effects on the corresponding measures of mnemonic precision and the number of encoded STM representations, respectively, in both change-detection and continuous-recall tasks. Altogether, using the individual-differences (Experiment 1) and experimental dissociation (Experiment 2 and 3) approaches, the present study demonstrated the some-or-none nature of visual STM representations across recall and recognition.
Inter-atomic potentials for radiation damage studies in CePO4 monazite
NASA Astrophysics Data System (ADS)
Jolley, Kenny; Asuvathraman, Rajaram; Smith, Roger
2017-02-01
An original empirical potential used for modelling phosphate glasses is adapted to be suitable for use with monazite (CePO4) so as to have a consistent formulation for radiation damage studies of phosphates. This is done by adding a parameterisation for the Ce-O interaction to the existing potential set. The thermal and structural properties of the resulting computer model are compared to experimental results. The parameter set gives a stable monazite structure where the volume of the unit cell is almost identical to that measured experimentally, but with some shrinkage in the a and b lengths and a small expansion in the c direction compared to experiment. The thermal expansion, specific heat capacity and estimates of the melting point are also determined. The estimate of the melting temperature of 2500 K is comparable to the experimental value of 2318 ± 20 K, but the simulated thermal expansion of 49 × 10⁻⁶ K⁻¹ is larger than the usually reported value. The simulated specific heat capacity at constant pressure was found to be approximately constant at 657 J kg⁻¹ K⁻¹ in the range 300-1000 K; however, this is not observed experimentally or in more detailed ab initio calculations.
Gitifar, Vahid; Eslamloueyan, Reza; Sarshar, Mohammad
2013-11-01
In this study, pretreatment of sugarcane bagasse and subsequent enzymatic hydrolysis are investigated using two categories of pretreatment methods: dilute acid (DA) pretreatment and a combined DA-ozonolysis (DAO) method. Both methods are carried out at different solid ratios, sulfuric acid concentrations, autoclave residence times, bagasse moisture contents, and ozonolysis times. The results show that the DAO pretreatment can significantly increase the production of glucose compared to the DA method. Applying the k-fold cross-validation method, two optimal artificial neural networks (ANNs) are trained to estimate glucose concentrations for the DA and DAO pretreatment methods. Comparing the modeling results with experimental data indicates that the proposed ANNs have good estimation abilities. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhu, J. G.; Sun, Z. C.; Wei, X. Z.; Dai, H. F.
2015-01-01
The power battery thermal management problem in EVs (electric vehicles) and HEVs (hybrid electric vehicles) has been widely discussed, and EIS (electrochemical impedance spectroscopy) is an effective experimental method to test and estimate the status of the battery. Firstly, an electrochemical-based impedance matrix analysis for lithium-ion batteries is developed to describe the impedance response in electrochemical impedance spectroscopy. Then a method based on electrochemical impedance spectroscopy measurement is proposed to estimate the internal temperature of a power lithium-ion battery by analyzing the phase shift and magnitude of the impedance at different ambient temperatures. In the experimental study, SoC (state of charge) and temperature affect the impedance characteristics of the battery in different frequency ranges; the effect of SoH (state of health) on the impedance spectrum is also discussed preliminarily. Therefore, the excitation frequency selected for estimating the inner temperature lies in the frequency range that is significantly influenced by temperature but not by SoC or SoH. The intrinsic relationship between the phase shift and temperature is established at the chosen excitation frequency, and the dependence of the impedance magnitude on temperature is also studied. In practical applications, the inner temperature can then be estimated from the measured phase shift and impedance magnitude. Verification experiments are conducted to validate the estimation method. Finally, an estimation strategy and an on-line estimation system implementation scheme utilizing the battery management system are presented to describe the engineering value.
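The last step — mapping a measured phase shift back to an inner temperature at one fixed excitation frequency — can be illustrated with a piecewise-linear inversion of a calibration curve. The calibration points below are hypothetical, not measured battery data.

```python
def invert_phase_to_temperature(calib, phase):
    """Piecewise-linear inversion of a (phase shift -> temperature)
    calibration curve measured at one fixed excitation frequency."""
    pts = sorted(calib)                      # (phase, temperature) pairs
    if phase <= pts[0][0]:
        return pts[0][1]
    if phase >= pts[-1][0]:
        return pts[-1][1]
    for (p0, t0), (p1, t1) in zip(pts, pts[1:]):
        if p0 <= phase <= p1:
            w = (phase - p0) / (p1 - p0)     # linear interpolation weight
            return t0 + w * (t1 - t0)

# hypothetical calibration: phase shift (deg) vs. cell temperature (deg C)
calib = [(-35.0, -10.0), (-28.0, 0.0), (-20.0, 10.0), (-14.0, 25.0), (-10.0, 40.0)]
t_est = invert_phase_to_temperature(calib, -24.0)
```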
Objective estimates based on experimental data and initial and final knowledge
NASA Technical Reports Server (NTRS)
Rosenbaum, B. M.
1972-01-01
An extension of the method of Jaynes, whereby least biased probability estimates are obtained, permits such estimates to be made which account for experimental data on hand as well as prior and posterior knowledge. These estimates can be made for both discrete and continuous sample spaces. The method allows a simple interpretation of Laplace's two rules: the principle of insufficient reason and the rule of succession. Several examples are analyzed by way of illustration.
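Laplace's two rules mentioned above have simple closed forms, sketched here with exact rational arithmetic:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: with s successes in n trials and a
    uniform prior, P(next trial succeeds) = (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

def insufficient_reason(n_outcomes):
    """Principle of insufficient reason: absent any data, each of n
    mutually exclusive outcomes gets probability 1/n."""
    return Fraction(1, n_outcomes)
```

With no data at all the rule of succession gives 1/2, and after n unbroken successes it gives (n+1)/(n+2), approaching but never reaching certainty.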
Modeling the erythemal surface diffuse irradiance fraction for Badajoz, Spain
NASA Astrophysics Data System (ADS)
Sanchez, Guadalupe; Serrano, Antonio; Cancillo, María Luisa
2017-10-01
Despite its important role in human health and numerous biological processes, the diffuse component of erythemal ultraviolet irradiance (UVER) is scarcely measured at standard radiometric stations and therefore needs to be estimated. This study proposes and compares 10 empirical models to estimate the UVER diffuse fraction. These models are inspired by mathematical expressions originally used to estimate the total diffuse fraction, but, in this study, they are applied to the UVER case and tested against experimental measurements. In addition to adapting the various independent variables involved in these models to the UVER range, the total ozone column has been added in order to account for its strong impact on the attenuation of ultraviolet radiation. The proposed models are fitted to experimental measurements and validated against an independent subset. The best-performing model (RAU3) is based on a model proposed by Ruiz-Arias et al. (2010) and shows r2 equal to 0.91 and relative root-mean-square error (rRMSE) equal to 6.1 %. This entirely empirical model performs better than previous semi-empirical approaches and requires no additional information from physically based models. This study extends previous research to the ultraviolet range and provides reliable empirical models to accurately estimate the UVER diffuse fraction.
Experimental Design for Parameter Estimation of Gene Regulatory Networks
Timmer, Jens
2012-01-01
Systems biology aims for building quantitative models to address unresolved issues in molecular biology. In order to describe the behavior of biological cells adequately, gene regulatory networks (GRNs) are intensively investigated. As the validity of models built for GRNs depends crucially on the kinetic rates, various methods have been developed to estimate these parameters from experimental data. For this purpose, it is favorable to choose the experimental conditions yielding maximal information. However, existing experimental design principles often rely on unfulfilled mathematical assumptions or become computationally demanding with growing model complexity. To solve this problem, we combined advanced methods for parameter and uncertainty estimation with experimental design considerations. As a showcase, we optimized three simulated GRNs in one of the challenges from the Dialogue for Reverse Engineering Assessment and Methods (DREAM). This article presents our approach, which was awarded the best performing procedure at the DREAM6 Estimation of Model Parameters challenge. For fast and reliable parameter estimation, local deterministic optimization of the likelihood was applied. We analyzed identifiability and precision of the estimates by calculating the profile likelihood. Furthermore, the profiles provided a way to uncover a selection of most informative experiments, from which the optimal one was chosen using additional criteria at every step of the design process. In conclusion, we provide a strategy for optimal experimental design and show its successful application on three highly nonlinear dynamic models. Although presented in the context of the GRNs to be inferred for the DREAM6 challenge, the approach is generic and applicable to most types of quantitative models in systems biology and other disciplines. PMID:22815723
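The profile likelihood used above for identifiability and uncertainty analysis can be illustrated on a toy normal model: for each fixed value of the parameter of interest the nuisance parameters are maximised out, and an approximate 95% interval collects the values whose profile lies within 1.92 log-units of the maximum. The data set and grid below are illustrative, not DREAM6 material.

```python
import math

def profile_loglik_mean(data, mu):
    """Profile log-likelihood of the mean of a normal model: for fixed mu
    the nuisance variance is maximised analytically, s2 = mean((x - mu)^2)."""
    n = len(data)
    s2 = sum((x - mu) ** 2 for x in data) / n
    return -0.5 * n * (math.log(2 * math.pi * s2) + 1)

data = [4.1, 5.0, 5.3, 4.7, 5.1, 4.8, 5.2, 4.9]      # illustrative sample
grid = [i / 100 for i in range(400, 601)]            # candidate means
profile = [(profile_loglik_mean(data, mu), mu) for mu in grid]
best_ll, mu_hat = max(profile)
# approximate 95% interval: points within 1.92 log-units of the maximum
interval = [mu for ll, mu in profile if best_ll - ll < 1.92]
```

A flat profile (a very wide interval) flags a practically non-identifiable parameter, which is exactly the situation that motivates designing a further experiment.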
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-13
... dietary choices. Results of the study will not be used to develop population estimates. To help design and... be sent to panelists to have 200 of them complete a 15-minute (0.25 hour) pretest. The total for the pretest activities is 106 hours (53 hours + 50 hours). For the survey, we estimate that 32,000 invitations...
Limits on estimating the width of thin tubular structures in 3D images.
Wörz, Stefan; Rohr, Karl
2006-01-01
This work studies limits on estimating the width of thin tubular structures in 3D images. Based on nonlinear estimation theory we analyze the minimal stochastic error of estimating the width. Given a 3D analytic model of the image intensities of tubular structures, we derive a closed-form expression for the Cramér-Rao bound of the width estimate under image noise. We use the derived lower bound as a benchmark and compare it with three previously proposed accuracy limits for vessel width estimation. Moreover, by experimental investigations we demonstrate that the derived lower bound can be achieved by fitting a 3D parametric intensity model directly to the image data.
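For a single parameter under i.i.d. Gaussian noise, the Cramér-Rao bound reduces to the noise variance divided by the summed squared sensitivities of the model to that parameter. A numerical sketch for the width of a 1D Gaussian intensity profile (a stand-in for the paper's 3D tubular model, with sensitivities taken by finite differences):

```python
import math

def crb_width(xs, w, sigma, eps=1e-6):
    """Scalar Cramér-Rao bound for the width w of a Gaussian profile
    f(x) = exp(-x^2 / (2 w^2)) sampled at xs with i.i.d. Gaussian noise:
    var(w_hat) >= sigma^2 / sum_i (df/dw)^2."""
    f = lambda x, w: math.exp(-x * x / (2.0 * w * w))
    # central finite-difference sensitivity of each sample to the width
    dfdw = [(f(x, w + eps) - f(x, w - eps)) / (2.0 * eps) for x in xs]
    return sigma ** 2 / sum(d * d for d in dfdw)

xs = [i / 10.0 for i in range(-30, 31)]   # sample positions along the profile
bound_lo = crb_width(xs, w=1.0, sigma=0.05)
bound_hi = crb_width(xs, w=1.0, sigma=0.10)
```

Doubling the noise standard deviation quadruples the bound, consistent with the closed-form dependence on sigma^2.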
NASA Technical Reports Server (NTRS)
Sawyer, W. C.; Allen, J. M.; Hernandez, G.; Dillenius, M. F. E.; Hemsch, M. J.
1982-01-01
This paper presents a survey of engineering computational methods and experimental programs used for estimating the aerodynamic characteristics of missile configurations. Emphasis is placed on those methods which are suitable for preliminary design of conventional and advanced concepts. An analysis of the technical approaches of the various methods is made in order to assess their suitability to estimate longitudinal and/or lateral-directional characteristics for different classes of missile configurations. Some comparisons between the predicted characteristics and experimental data are presented. These comparisons are made for a large variation in flow conditions and model attitude parameters. The paper also presents known experimental research programs developed for the specific purpose of validating analytical methods and extending the capability of data-base programs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velaga, A.
1986-01-01
Packed cross-flow internals consisting of four and ten stages, including samplers for liquid and vapor, were fabricated to fit into the existing distillation column. Experiments were conducted using methanol-water, ethanol-water and hexane-heptane binary mixtures. Experimental data were collected for the compositions of the inlet and exit streams of the cross-flow stages. The overall gas-phase heights of transfer units (H_og) were estimated using the experimental data, and the H_og values were compared to those for counter-current conditions. The individual mass transfer coefficients in the liquid and vapor phases were estimated using the collected experimental data for degree of separation, flow rates and physical properties of the binary system used. The physical properties were estimated at the average temperature of the specific cross-flow stage. The mass transfer coefficients were evaluated using three different correlations, proposed by Shulman, Onda and Hayashi, respectively. The interfacial areas were estimated using the evaluated mass transfer coefficients and the experimental data at each stage of the column for different runs, and compared.
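The overall gas-phase height of a transfer unit follows from the packed height and the number of transfer units, H_og = Z / N_og; for a dilute system with a near-constant equilibrium composition, N_og has a simple logarithmic form. A sketch under those simplifying assumptions (the compositions and height below are illustrative, not the thesis data):

```python
import math

def n_og_dilute(y_in, y_out, y_eq):
    """Number of overall gas-phase transfer units for a dilute system with
    an (assumed) constant equilibrium vapor composition y_eq."""
    return math.log((y_in - y_eq) / (y_out - y_eq))

def h_og(packed_height_m, n_og):
    """Height of an overall gas-phase transfer unit: H_og = Z / N_og."""
    return packed_height_m / n_og

# illustrative stage: mole fractions in and out, equilibrium value, height
n = n_og_dilute(y_in=0.10, y_out=0.02, y_eq=0.01)   # ln(9) transfer units
h = h_og(packed_height_m=1.2, n_og=n)
```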
Comparative study of soil erodibility and critical shear stress between loess and purple soils
NASA Astrophysics Data System (ADS)
Xing, Hang; Huang, Yu-han; Chen, Xiao-yan; Luo, Bang-lin; Mi, Hong-xing
2018-03-01
Loess and purple soils are two important cultivated soils in China, the former in the loess region and the latter in the southern sub-tropical region; both face a high risk of erosion and differ considerably in soil structure owing to differences in mineral and nutrient composition. Studying the soil erodibility (Kr) and critical shear stress (τc) of these two soils helps predict soil erosion with models such as WEPP. In this study, rill erosion experimental data sets for the two soils are used to estimate their Kr and τc, which are then compared to understand differences in their rill erosion behavior. The maximum detachment rates of the loess and purple soils are calculated under different hydrodynamic conditions (flow rates: 2, 4, 8 L/min; slope gradients: 5°, 10°, 15°, 20°, 25°) through analytical and numerical methods, respectively. The analytical method used the derivative of the function between sediment concentration and rill length to estimate potential detachment rates at the rill beginning. The numerical method estimated potential detachment rates from the experimental data at the rill beginning and at the 0.5 m location. The Kr and τc of the two soils are determined by a linear equation fitted to the experimental data. Results show that both methods estimate Kr and τc well, as the values remain essentially unchanged under different hydrodynamic conditions. The Kr value of the loess soil is about twice that of the purple soil, whereas its τc is about half. The numerical results correlate well with the analytical values. These results can be useful in modeling the rill erosion processes of loess and purple soils.
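The linear equation used to determine Kr and τc is typically the WEPP detachment relation Dc = Kr(τ − τc): Kr is the slope of a least-squares line of detachment rate against shear stress, and τc is where that line crosses zero. A sketch with synthetic flume numbers (not the study's measurements):

```python
def fit_erodibility(taus, rates):
    """Least-squares fit of the detachment relation Dc = Kr * (tau - tau_c):
    Kr is the slope of Dc versus tau; tau_c is the zero crossing."""
    n = len(taus)
    mt = sum(taus) / n
    md = sum(rates) / n
    kr = (sum((t - mt) * (d - md) for t, d in zip(taus, rates))
          / sum((t - mt) ** 2 for t in taus))
    tau_c = mt - md / kr          # tau where the fitted line gives Dc = 0
    return kr, tau_c

# synthetic flume data: shear stress (Pa) vs detachment rate (kg m^-2 s^-1)
taus = [2.0, 4.0, 6.0, 8.0, 10.0]
rates = [0.05 * (t - 1.5) for t in taus]   # exact line: Kr = 0.05, tau_c = 1.5
kr, tau_c = fit_erodibility(taus, rates)
```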
Epidemiologic methods in clinical trials.
Rothman, K J
1977-04-01
Epidemiologic methods developed to control confounding in non-experimental studies are equally applicable for experiments. In experiments, most confounding is usually controlled by random allocation of subjects to treatment groups, but randomization does not preclude confounding except for extremely large studies, the degree of confounding expected being inversely related to the size of the treatment groups. In experiments, as in non-experimental studies, the extent of confounding for each risk indicator should be assessed, and if sufficiently large, controlled. Confounding is properly assessed by comparing the unconfounded effect estimate to the crude effect estimate; a common error is to assess confounding by statistical tests of significance. Assessment of confounding involves its control as a prerequisite. Control is most readily and cogently achieved by stratification of the data, though with many factors to control simultaneously, multivariate analysis or a combination of multivariate analysis and stratification might be necessary.
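The recommended comparison of crude and unconfounded estimates can be sketched for a stratified risk ratio, using the Mantel-Haenszel estimator as the stratified (adjusted) summary. The counts below are synthetic and constructed so that the stratum-specific risk ratios are 1 while the crude ratio is not, i.e. pure confounding:

```python
def crude_and_mh_risk_ratio(strata):
    """Crude vs Mantel-Haenszel stratified risk ratio. Each stratum is
    (a, n1, c, n0): exposed cases, exposed total, unexposed cases,
    unexposed total. A large crude/adjusted gap signals confounding."""
    A = sum(a for a, n1, c, n0 in strata)
    N1 = sum(n1 for a, n1, c, n0 in strata)
    C = sum(c for a, n1, c, n0 in strata)
    N0 = sum(n0 for a, n1, c, n0 in strata)
    crude = (A / N1) / (C / N0)
    # Mantel-Haenszel summary risk ratio across strata
    num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
    den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
    return crude, num / den

# synthetic strata with risk ratio 1 inside each stratum (pure confounding)
strata = [(9, 90, 1, 10),    # stratum 1: risks 0.10 vs 0.10
          (4, 10, 36, 90)]   # stratum 2: risks 0.40 vs 0.40
crude_rr, mh_rr = crude_and_mh_risk_ratio(strata)
```

Here the crude ratio suggests a strong protective effect while the adjusted ratio is null, the comparison the abstract recommends over significance testing.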
Development and validation of a MRgHIFU non-invasive tissue acoustic property estimation technique.
Johnson, Sara L; Dillon, Christopher; Odéen, Henrik; Parker, Dennis; Christensen, Douglas; Payne, Allison
2016-11-01
MR-guided high-intensity focussed ultrasound (MRgHIFU) non-invasive ablative surgeries have advanced into clinical trials for treating many pathologies and cancers. A remaining challenge of these surgeries is accurately planning and monitoring tissue heating in the face of patient-specific and dynamic acoustic properties of tissues. Currently, non-invasive measurements of acoustic properties have not been implemented in MRgHIFU treatment planning and monitoring procedures. This methods-driven study presents a technique using MR temperature imaging (MRTI) during low-temperature HIFU sonications to non-invasively estimate sample-specific acoustic absorption and speed of sound values in tissue-mimicking phantoms. Using measured thermal properties, specific absorption rate (SAR) patterns are calculated from the MRTI data and compared to simulated SAR patterns iteratively generated via the Hybrid Angular Spectrum (HAS) method. Once the error between the simulated and measured patterns is minimised, the estimated acoustic property values are compared to the true phantom values obtained via an independent technique. The estimated values are then used to simulate temperature profiles in the phantoms, and compared to experimental temperature profiles. This study demonstrates that trends in acoustic absorption and speed of sound can be non-invasively estimated with average errors of 21% and 1%, respectively. Additionally, temperature predictions using the estimated properties on average match within 1.2 °C of the experimental peak temperature rises in the phantoms. The positive results achieved in tissue-mimicking phantoms presented in this study indicate that this technique may be extended to in vivo applications, improving HIFU sonication temperature rise predictions and treatment assessment.
Maleke, Caroline; Luo, Jianwen; Gamarnik, Viktor; Lu, Xin L; Konofagou, Elisa E
2010-07-01
The objective of this study is to show that Harmonic Motion Imaging (HMI) can be used as a reliable tumor-mapping technique based on the tumor's distinct stiffness at the early onset of disease. HMI is a radiation-force-based imaging method that generates a localized vibration deep inside the tissue to estimate the relative tissue stiffness based on the resulting displacement amplitude. In this paper, a finite-element model (FEM) study is presented, followed by an experimental validation in tissue-mimicking polyacrylamide gels and excised human breast tumors ex vivo. This study compares the resulting tissue motion in simulations and experiments at four different gel stiffnesses and three distinct spherical inclusion diameters. The elastic moduli of the gels were separately measured using mechanical testing. Identical transducer parameters were used in both the FEM and experimental studies, i.e., a 4.5-MHz single-element focused ultrasound (FUS) and a 7.5-MHz diagnostic (pulse-echo) transducer. In the simulation, an acoustic pressure field was used as the input stimulus to generate a localized vibration inside the target. Radiofrequency (rf) signals were then simulated using a 2D convolution model. A one-dimensional cross-correlation technique was performed on the simulated and experimental rf signals to estimate the axial displacement resulting from the harmonic radiation force. In order to measure the reliability of the displacement profiles in estimating the tissue stiffness distribution, the contrast-transfer efficiency (CTE) was calculated. For tumor mapping ex vivo, a harmonic radiation force was applied using a 2D raster-scan technique. The 2D HMI images of the breast tumor ex vivo could detect a malignant tumor (20 x 10 mm2) surrounded by glandular and fat tissues. 
The FEM and experimental results from both gels and breast tumors ex vivo demonstrated that HMI was capable of detecting and mapping the tumor or stiff inclusion with various diameters or stiffnesses. HMI may thus constitute a promising technique in tumor detection (>3 mm in diameter) and mapping based on its distinct stiffness.
Accurate position estimation methods based on electrical impedance tomography measurements
NASA Astrophysics Data System (ADS)
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. Estimation of the conductivity field leads to low-resolution images compared with other technologies, and to high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches for estimating it. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted-error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than the data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones.
The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.
Estimating distribution of hidden objects with drones: from tennis balls to manatees.
Martin, Julien; Edwards, Holly H; Burgess, Matthew A; Percival, H Franklin; Fagan, Daniel E; Gardner, Beth E; Ortega-Ortiz, Joel G; Ifju, Peter G; Evers, Brandon S; Rambo, Thomas J
2012-01-01
Unmanned aerial vehicles (UAV), or drones, have been used widely in military applications, but more recently civilian applications have emerged (e.g., wildlife population monitoring, traffic monitoring, law enforcement, oil and gas pipeline threat detection). UAV can have several advantages over manned aircraft for wildlife surveys, including reduced ecological footprint, increased safety, and the ability to collect high-resolution geo-referenced imagery that can document the presence of species without the use of a human observer. We illustrate how geo-referenced data collected with UAV technology in combination with recently developed statistical models can improve our ability to estimate the distribution of organisms. To demonstrate the efficacy of this methodology, we conducted an experiment in which tennis balls were used as surrogates of organisms to be surveyed. We used a UAV to collect images of an experimental field with a known number of tennis balls, each of which had a certain probability of being hidden. We then applied spatially explicit occupancy models to estimate the number of balls and created precise distribution maps. We conducted three consecutive surveys over the experimental field and estimated the total number of balls to be 328 (95%CI: 312, 348). The true number was 329 balls, but simple counts based on the UAV pictures would have led to a total maximum count of 284. The distribution of the balls in the field followed a simulated environmental gradient. We also were able to accurately estimate the relationship between the gradient and the distribution of balls. Our experiment demonstrates how this technology can be used to create precise distribution maps in which discrete regions of the study area are assigned a probability of presence of an object. Finally, we discuss the applicability and relevance of this experimental study to the case study of Florida manatee distribution at power plants.
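The gap between the raw maximum count (284) and the model-based estimate (328) reflects imperfect detection. A toy count correction — far simpler than the spatially explicit occupancy models actually used — divides the count by a detection probability estimated from calibration surveys; the survey numbers below are hypothetical:

```python
def pooled_detection_probability(counts, n_true):
    """Detection probability from repeated surveys of a plot seeded with a
    known number of objects (as in the tennis-ball trial)."""
    return sum(counts) / (len(counts) * n_true)

def detection_corrected_estimate(count, p_detect):
    """If each object is seen with probability p, E[count] = N * p,
    so a simple corrected estimate is N_hat = count / p."""
    return count / p_detect

# hypothetical calibration surveys over a plot seeded with 100 objects
p_hat = pooled_detection_probability([82, 88, 85], n_true=100)
n_hat = detection_corrected_estimate(284, p_hat)
```

Unlike this pooled correction, the occupancy models in the study let detection and presence probabilities vary spatially, which is what yields per-cell distribution maps rather than a single total.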
Estimating Distribution of Hidden Objects with Drones: From Tennis Balls to Manatees
Martin, Julien; Edwards, Holly H.; Burgess, Matthew A.; Percival, H. Franklin; Fagan, Daniel E.; Gardner, Beth E.; Ortega-Ortiz, Joel G.; Ifju, Peter G.; Evers, Brandon S.; Rambo, Thomas J.
2012-01-01
Unmanned aerial vehicles (UAV), or drones, have been used widely in military applications, but more recently civilian applications have emerged (e.g., wildlife population monitoring, traffic monitoring, law enforcement, oil and gas pipeline threat detection). UAV can have several advantages over manned aircraft for wildlife surveys, including reduced ecological footprint, increased safety, and the ability to collect high-resolution geo-referenced imagery that can document the presence of species without the use of a human observer. We illustrate how geo-referenced data collected with UAV technology in combination with recently developed statistical models can improve our ability to estimate the distribution of organisms. To demonstrate the efficacy of this methodology, we conducted an experiment in which tennis balls were used as surrogates of organisms to be surveyed. We used a UAV to collect images of an experimental field with a known number of tennis balls, each of which had a certain probability of being hidden. We then applied spatially explicit occupancy models to estimate the number of balls and created precise distribution maps. We conducted three consecutive surveys over the experimental field and estimated the total number of balls to be 328 (95%CI: 312, 348). The true number was 329 balls, but simple counts based on the UAV pictures would have led to a total maximum count of 284. The distribution of the balls in the field followed a simulated environmental gradient. We also were able to accurately estimate the relationship between the gradient and the distribution of balls. Our experiment demonstrates how this technology can be used to create precise distribution maps in which discrete regions of the study area are assigned a probability of presence of an object. Finally, we discuss the applicability and relevance of this experimental study to the case study of Florida manatee distribution at power plants. PMID:22761712
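The detection-correction logic behind the tennis-ball experiment can be sketched with a toy calculation. The function below is an illustrative simplification (independent, equal detection probability per survey), not the authors' spatially explicit occupancy model; the detection probability used in the example is a made-up value.

```python
def estimate_abundance(combined_count, p_detect, n_surveys):
    """Correct a raw count for imperfect detection.

    If each object is detected independently with probability p_detect on
    each of n_surveys passes, the chance it is seen at least once is
    1 - (1 - p_detect) ** n_surveys; dividing the combined count by that
    probability inflates the naive count toward the true abundance.
    """
    p_seen = 1.0 - (1.0 - p_detect) ** n_surveys
    return combined_count / p_seen

# With perfect detection the estimate equals the raw count;
# with p_detect = 0.5 over 3 surveys, a count of 284 is inflated.
naive = estimate_abundance(284, 0.5, 3)
```

This illustrates why the simple maximum count of 284 underestimates the true 329: some balls are missed on every pass, and the model accounts for that missing fraction.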
Saez Vergara, J C; Romero Gutiérrez, A M; Rodriguez Jiménez, R; Dominguez-Mompell Román, R
2004-01-01
The results from 2 years (2001-2002) of experimental measurements of on-board radiation doses received on IBERIA commercial flights are presented. The routes studied cover the most significant destinations and provide a good estimate of the route doses as required by the new Spanish regulations on air crew radiation protection. Details on the experimental procedures and calibration methods are given. The experimental measurements from the different instruments (a tissue-equivalent proportional counter and the combination of a high-pressure ion chamber and a high-energy neutron-compensated rem-counter) and their comparison with the predictions of the route-dose codes CARI-6 and EPCARD 3.2 are discussed. In contrast with previously published data, which focus mainly on northern latitudes above the 50th parallel, many of the data presented in this work were obtained for routes from Spain to Central and South America.
Palamara, Gian Marco; Childs, Dylan Z; Clements, Christopher F; Petchey, Owen L; Plebani, Marco; Smith, Matthew J
2014-01-01
Understanding and quantifying the temperature dependence of population parameters, such as intrinsic growth rate and carrying capacity, is critical for predicting the ecological responses to environmental change. Many studies provide empirical estimates of such temperature dependencies, but a thorough investigation of the methods used to infer them has not been performed yet. We created artificial population time series using a stochastic logistic model parameterized with the Arrhenius equation, so that activation energy drives the temperature dependence of population parameters. We simulated different experimental designs and used different inference methods, varying the likelihood functions and other aspects of the parameter estimation methods. Finally, we applied the best performing inference methods to real data for the species Paramecium caudatum. The relative error of the estimates of activation energy varied between 5% and 30%. The fraction of habitat sampled played the most important role in determining the relative error; sampling at least 1% of the habitat kept it below 50%. We found that methods that simultaneously use all time series data (direct methods) and methods that estimate population parameters separately for each temperature (indirect methods) are complementary. Indirect methods provide a clearer insight into the shape of the functional form describing the temperature dependence of population parameters; direct methods enable a more accurate estimation of the parameters of such functional forms. Using both methods, we found that growth rate and carrying capacity of Paramecium caudatum scale with temperature according to different activation energies. Our study shows how careful choice of experimental design and inference methods can increase the accuracy of the inferred relationships between temperature and population parameters. 
The comparison of estimation methods provided here can increase the accuracy of model predictions, with important implications in understanding and predicting the effects of temperature on the dynamics of populations. PMID:25558365
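The core of the simulation design above, a logistic model whose parameters scale with temperature via the Arrhenius equation, can be sketched as follows. The reference temperature, the Boltzmann constant in eV/K, the multiplicative noise model, and all numeric values are my own illustrative choices, not the study's actual parameterization.

```python
import math
import random

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_scale(value_ref, activation_energy, temp, temp_ref=293.15):
    # Scale a population parameter (e.g. growth rate) from temp_ref to temp,
    # with temperatures in kelvin and activation energy in eV.
    return value_ref * math.exp(-(activation_energy / K_B)
                                * (1.0 / temp - 1.0 / temp_ref))

def logistic_series(r, K, n0, steps, noise_sd=0.0, seed=0):
    # Discrete-time logistic growth with optional multiplicative noise,
    # returning the full population time series.
    rng = random.Random(seed)
    n, series = n0, [n0]
    for _ in range(steps):
        n += r * n * (1.0 - n / K)
        if noise_sd > 0:
            n = max(n + rng.gauss(0.0, noise_sd * n), 0.0)
        series.append(n)
    return series

# Growth rate at 30 °C derived from a reference rate at 20 °C,
# assuming an activation energy of 0.65 eV (a hypothetical value).
r_warm = arrhenius_scale(0.5, 0.65, 303.15)
warm_series = logistic_series(r_warm, 100.0, 10.0, 100, noise_sd=0.05)
```

Generating such series across a grid of temperatures and then re-estimating the activation energy from them is the essence of the inference comparison described in the abstract.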
Forward hadron calorimeter at MPD/NICA
NASA Astrophysics Data System (ADS)
Golubeva, M.; Guber, F.; Ivashkin, A.; Izvestnyy, A.; Kurepin, A.; Morozov, S.; Parfenov, P.; Petukhov, O.; Taranenko, A.; Selyuzhenkov, I.; Svintsov, I.
2017-01-01
The forward hadron calorimeter (FHCAL) of the MPD/NICA experimental setup is described. The main purpose of the FHCAL is to provide an experimental measurement of heavy-ion collision centrality (impact parameter) and of the orientation of the reaction plane. A precise event-by-event estimate of these basic observables is crucial for many of the physics studies to be performed by the MPD experiment. Simulation results on FHCAL performance are presented.
NASA Astrophysics Data System (ADS)
Balakin, V. V.; Vorobev, N. S.; Berkaev, D. V.; Glukhov, S. A.; Gornostaev, P. B.; Dorokhov, V. L.; Chao, Ma Xiao; Meshkov, O. I.; Nikiforov, D. A.; Shashkov, E. V.; Emanov, F. A.; Astrelina, K. V.; Blinov, M. F.; Borin, V. M.
2018-03-01
The efficiency of injection from a linear accelerator into the damping ring of the BINP injection complex has been experimentally studied. The estimations of the injection efficiency are in good agreement with the experimental results. Our method of increasing the capture efficiency can enhance the productivity of the injection complex by a factor of 1.5-2.
Estimation of Transformation Temperatures in Ti-Ni-Pd Shape Memory Alloys
NASA Astrophysics Data System (ADS)
Narayana, P. L.; Kim, Seong-Woong; Hong, Jae-Keun; Reddy, N. S.; Yeom, Jong-Taek
2018-03-01
The present study focused on estimating the complex nonlinear relationship between the composition and phase transformation temperatures of Ti-Ni-Pd shape memory alloys by artificial neural networks (ANN). The ANN models were developed using experimental data for Ti-Ni-Pd alloys. The predictions were found to be in good agreement with both the training data and unseen test data for existing alloys. The developed model was able to simulate new virtual alloys to quantitatively estimate the effect of Ti, Ni, and Pd on the transformation temperatures. The transformation temperature behavior of these virtual alloys was validated by conducting new experiments on a Ti-rich thin film deposited using multi-target sputtering equipment. The transformation behavior of the film was measured while varying its composition with the help of aging treatments. The predicted trend of transformation temperatures was explained with the help of the experimental results.
Palazoğlu, T K; Gökmen, V
2008-04-01
In this study, a numerical model was developed to simulate frying of potato strips and estimate acrylamide levels in French fries. Heat and mass transfer parameters determined during frying of potato strips and the formation and degradation kinetic parameters of acrylamide obtained with a sugar-asparagine model system were incorporated within the model. The effect of reducing sugar content (0.3 to 2.15 g/100 g dry matter), strip thickness (8.5 x 8.5 mm and 10 x 10 mm), and frying time (3, 4, 5, and 6 min) and temperature (150, 170, and 190 degrees C) on the resultant acrylamide level in French fries was investigated both numerically and experimentally. The model appeared to estimate the acrylamide contents closely, and may thereby save considerable time, money, and effort during the stages of process design and optimization.
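The coupled formation/degradation kinetics mentioned above can be illustrated by the textbook closed-form solution for consecutive first-order reactions (precursor to acrylamide to degradation products). This is a generic kinetic sketch, not the paper's heat-and-mass-transfer model, and the rate constants in the example are hypothetical.

```python
import math

def acrylamide_conc(t, k_form, k_deg, precursor0=1.0):
    """Intermediate concentration for precursor -> acrylamide -> degraded,
    both steps first order, starting from precursor0 and zero acrylamide."""
    if abs(k_form - k_deg) < 1e-12:
        # Degenerate case where both rate constants coincide
        return precursor0 * k_form * t * math.exp(-k_form * t)
    return (precursor0 * k_form / (k_deg - k_form)
            * (math.exp(-k_form * t) - math.exp(-k_deg * t)))

# Hypothetical rate constants: fast formation, slower degradation, so
# the acrylamide level rises to a peak and then falls with frying time.
levels = [acrylamide_conc(t, k_form=0.8, k_deg=0.2) for t in (1, 2, 4, 6)]
```

The rise-then-fall shape of this curve is what makes frying time and temperature such strong levers on the final acrylamide level.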
Stratton, Shawna L.; Henrich, Cindy L.; Matthews, Nell I.; Bogusiewicz, Anna; Dawson, Amanda M.; Horvath, Thomas D.; Owen, Suzanne N.; Boysen, Gunnar; Moran, Jeffery H.; Mock, Donald M.
2012-01-01
To date, marginal, asymptomatic biotin deficiency has been successfully induced experimentally by the use of labor-intensive inpatient designs requiring rigorous dietary control. We sought to determine if marginal biotin deficiency could be induced in humans in a less expensive outpatient design incorporating a self-selected, mixed general diet. We sought to examine the efficacy of three outpatient study designs: two based on oral avidin dosing and one based on a diet high in undenatured egg white for a period of 28 d. In study design 1, participants (n = 4; 3 women) received avidin in capsules with a biotin binding capacity of 7 times the estimated dietary biotin intake of a typical self-selected diet. In study design 2, participants (n = 2; 2 women) received double the amount of avidin capsules (14 times the estimated dietary biotin intake). In study design 3, participants (n = 5; 3 women) consumed egg-white beverages containing avidin with a biotin binding capacity of 7 times the estimated dietary biotin intake. Established indices of biotin status [lymphocyte propionyl-CoA carboxylase activity; urinary excretion of 3-hydroxyisovaleric acid, 3-hydroxyisovaleryl carnitine (3HIA-carnitine), and biotin; and plasma concentration of 3HIA-carnitine] indicated that study designs 1 and 2 were not effective in inducing marginal biotin deficiency, but study design 3 was as effective as previous inpatient study designs that induced deficiency by egg-white beverage. Marginal biotin deficiency can be induced experimentally by using a cost-effective outpatient design by avidin delivery in egg-white beverages. This design should be useful to the broader nutritional research community. PMID:22157538
Losses in radial inflow turbines
NASA Technical Reports Server (NTRS)
Khalil, I. M.; Tabakoff, W.; Hamed, A.
1976-01-01
A study was conducted to determine experimentally and theoretically the losses in radial inflow turbine nozzles. Extensive experimental data was obtained to investigate the flow behavior in a full-scale radial turbine stator annulus. A theoretical model to predict the losses in both the vaned and vaneless regions of the nozzle was developed. In this analysis, the interaction effects between the stator and the rotor are not considered. It was found that the losses incurred due to the end wall boundary layers can be significant, especially if they are characterized by a strong crossflow. The losses estimated using the analytical study are compared with the experimentally determined values.
Classes of Split-Plot Response Surface Designs for Equivalent Estimation
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey
2006-01-01
When planning an experimental investigation, we are frequently faced with factors that are difficult or time-consuming to manipulate, thereby making complete randomization impractical. A split-plot structure differentiates between the experimental units associated with these hard-to-change factors and others that are relatively easy to change, and it provides an efficient strategy that integrates the restrictions imposed by the experimental apparatus. Several industrial and scientific examples are presented to illustrate design considerations encountered in the restricted-randomization context. In this paper, we propose classes of split-plot response surface designs that provide an intuitive and natural extension from the completely randomized context. For these designs, the ordinary least squares estimates of the model are equivalent to the generalized least squares estimates. This property provides best linear unbiased estimators and simplifies model estimation. The design conditions that allow for equivalent estimation are presented, enabling design construction strategies to transform completely randomized Box-Behnken, equiradial, and small composite designs into a split-plot structure.
Feng, Yuan; Lee, Chung-Hao; Sun, Lining; Ji, Songbai; Zhao, Xuefeng
2017-01-01
Characterizing the mechanical properties of white matter is important to understand and model brain development and injury. With embedded aligned axonal fibers, white matter is typically modeled as a transversely isotropic material. However, most studies characterize the white matter tissue using models with a single anisotropic invariant or in a small-strain regime. In this study, we combined a single experimental procedure - asymmetric indentation - with inverse finite element (FE) modeling to estimate the nearly incompressible transversely isotropic material parameters of white matter. A minimal form comprising three parameters was employed to simulate indentation responses in the large-strain regime. The parameters were estimated using a global optimization procedure based on a genetic algorithm (GA). Experimental data from two indentation configurations of porcine white matter, parallel and perpendicular to the axonal fiber direction, were utilized to estimate model parameters. Results in this study confirmed a strong mechanical anisotropy of white matter in large strain. Further, our results suggested that both indentation configurations are needed to estimate the parameters with sufficient accuracy, and that the indenter-sample friction is important. Finally, we also showed that the estimated parameters were consistent with those previously obtained via a trial-and-error forward FE method in the small-strain regime. These findings are useful in modeling and parameterization of white matter, especially under large deformation, and demonstrate the potential of the proposed asymmetric indentation technique to characterize other soft biological tissues with transversely isotropic properties. Copyright © 2016 Elsevier Ltd. All rights reserved.
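The global optimization step described above can be sketched with a minimal real-coded genetic algorithm. This toy implementation (tournament selection, midpoint crossover, Gaussian mutation, elitism) stands in for the study's GA-driven inverse finite element fitting; the objective in the example is a generic test function, not an indentation response.

```python
import random

def genetic_minimize(objective, bounds, pop_size=40, generations=60, seed=0):
    """Minimal real-coded genetic algorithm returning the best parameter
    vector found within the given per-parameter bounds."""
    rng = random.Random(seed)

    def clamp(x, lo, hi):
        return min(max(x, lo), hi)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(generations):
        children = [best[:]]  # elitism: carry the best individual over
        while len(children) < pop_size:
            # Tournament selection: the fitter of 3 random individuals
            mom = min(rng.sample(pop, 3), key=objective)
            dad = min(rng.sample(pop, 3), key=objective)
            # Midpoint crossover plus Gaussian mutation, clamped to bounds
            child = [clamp(0.5 * (m + d) + rng.gauss(0.0, 0.1 * (hi - lo)), lo, hi)
                     for m, d, (lo, hi) in zip(mom, dad, bounds)]
            children.append(child)
        pop = children
        best = min(pop, key=objective)  # elitism guarantees no regression
    return best
```

In the inverse FE setting, `objective` would run a forward simulation of both indentation configurations and return the misfit to the measured force-displacement curves.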
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veselka, T. D.; Poch, L. A.; Palmer, C. S.
2010-04-21
Because of concerns about the impact that Glen Canyon Dam (GCD) operations were having on downstream ecosystems and endangered species, the Bureau of Reclamation (Reclamation) conducted an Environmental Impact Statement (EIS) on dam operations (DOE 1996). New operating rules and management goals for GCD that had been specified in the Record of Decision (ROD) (Reclamation 1996) were adopted in February 1997. In addition to issuing new operating criteria, the ROD mandated experimental releases for the purpose of conducting scientific studies. This paper examines the financial implications of the experimental flows that were conducted at the GCD from 1997 to 2005. An experimental release may have either a positive or negative impact on the financial value of energy production. This study estimates the financial costs of experimental releases, identifies the main factors that contribute to these costs, and compares the interdependencies among these factors. An integrated set of tools was used to compute the financial impacts of the experimental releases by simulating the operation of the GCD under two scenarios, namely, (1) a baseline scenario that assumes operations comply with the ROD operating criteria and experimental releases that actually took place during the study period, and (2) a 'without experiments' scenario that is identical to the baseline scenario of operations that comply with the GCD ROD, except it assumes that experimental releases did not occur. The Generation and Transmission Maximization (GTMax) model was the main simulation tool used to dispatch GCD and other hydropower plants that comprise the Salt Lake City Area Integrated Projects (SLCA/IP). Extensive data sets and historical information on SLCA/IP power plant characteristics, hydrologic conditions, and Western Area Power Administration's (Western's) power purchase prices were used for the simulation.
In addition to estimating the financial impact of experimental releases, the GTMax model was also used to gain insights into the interplay among ROD operating criteria, exceptions that were made to criteria to accommodate the experimental releases, and Western operating practices. Experimental releases in some water years resulted in financial benefits to Western while others resulted in financial costs. During the study period, the total financial costs of all experimental releases were $11.9 million.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veselka, T. D.; Poch, L. A.; Palmer, C. S.
2011-01-11
Because of concerns about the impact that Glen Canyon Dam (GCD) operations were having on downstream ecosystems and endangered species, the Bureau of Reclamation (Reclamation) conducted an Environmental Impact Statement (EIS) on dam operations (DOE 1996). New operating rules and management goals for GCD that had been specified in the Record of Decision (ROD) (Reclamation 1996) were adopted in February 1997. In addition to issuing new operating criteria, the ROD mandated experimental releases for the purpose of conducting scientific studies. This paper examines the financial implications of the experimental flows that were conducted at the GCD from 1997 to 2005. An experimental release may have either a positive or negative impact on the financial value of energy production. This study estimates the financial costs of experimental releases, identifies the main factors that contribute to these costs, and compares the interdependencies among these factors. An integrated set of tools was used to compute the financial impacts of the experimental releases by simulating the operation of the GCD under two scenarios, namely, (1) a baseline scenario that assumes operations comply with the ROD operating criteria and experimental releases that actually took place during the study period, and (2) a 'without experiments' scenario that is identical to the baseline scenario of operations that comply with the GCD ROD, except it assumes that experimental releases did not occur. The Generation and Transmission Maximization (GTMax) model was the main simulation tool used to dispatch GCD and other hydropower plants that comprise the Salt Lake City Area Integrated Projects (SLCA/IP). Extensive data sets and historical information on SLCA/IP power plant characteristics, hydrologic conditions, and Western Area Power Administration's (Western's) power purchase prices were used for the simulation.
In addition to estimating the financial impact of experimental releases, the GTMax model was also used to gain insights into the interplay among ROD operating criteria, exceptions that were made to criteria to accommodate the experimental releases, and Western operating practices. Experimental releases in some water years resulted in financial benefits to Western while others resulted in financial costs. During the study period, the total financial costs of all experimental releases were more than $23 million.
Estimating Per-Pixel GPP of the Contiguous USA Directly from MODIS EVI Data
NASA Astrophysics Data System (ADS)
Rahman, A. F.; Sims, D. A.; El-Masri, B. Z.; Cordova, V. D.
2005-12-01
We estimated gross primary production (GPP) of the contiguous USA using enhanced vegetation index (EVI) data from NASA's moderate resolution imaging spectroradiometer (MODIS). Based on recently published values of correlation coefficients between EVI and GPP of North American vegetation types, we derived GPP maps of the contiguous USA for 2001-2004, which included one La Niña year and three moderate El Niño years. The product was a truly per-pixel GPP estimate (named E-GPP), in contrast to the pseudo-continuous MOD17, the standard MODIS GPP product. We compared E-GPP with fine-scale experimental GPP data and MOD17 estimates from three Bigfoot experimental sites, and also with MOD17 estimates from the whole contiguous USA for the above-mentioned four years. For each of the 7 km x 7 km Bigfoot experimental sites, E-GPP was able to track the primary production activity during the green-up period while MOD17 failed to do so. The E-GPP estimates during peak production season were similar to those from Bigfoot and MOD17 for most vegetation types except for the deciduous types, where it was lower. Annual E-GPP of the Bigfoot sites compared well with Bigfoot experimental GPP (r = 0.71) and MOD17 (r = 0.78). But for the contiguous USA for 2001-2004, annual E-GPP showed disagreement with MOD17 in both magnitude and seasonal trends for deciduous forests and grasslands. In this study we explored the reasons for this mismatch between E-GPP and MOD17 and also analyzed the uncertainties in E-GPP across multiple spatial scales. Our results show that E-GPP, based on a simple regression model, can work as a robust alternative to MOD17 for large-area annual GPP estimation. The relative advantages of E-GPP are that it is truly per-pixel, solely dependent on remotely sensed data that is routinely available from NASA, easy to compute, and has the potential of being used as an operational product.
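The "simple regression model" behind a per-pixel EVI-based GPP product can be sketched as an ordinary least squares fit followed by per-pixel application. The fitting routine and all numbers below are generic illustrations; the published correlation coefficients and units are not reproduced here.

```python
def ols_fit(evi, gpp):
    # Ordinary least squares for the linear model gpp ~ slope * evi + intercept,
    # fitted from paired site-level EVI and GPP observations.
    n = len(evi)
    mean_x = sum(evi) / n
    mean_y = sum(gpp) / n
    sxx = sum((x - mean_x) ** 2 for x in evi)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(evi, gpp))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def gpp_estimate(evi_pixel, slope, intercept):
    # Apply the fitted relation independently to each pixel's EVI value
    return slope * evi_pixel + intercept
```

Because the relation is applied pixel by pixel, the resulting map is "truly per-pixel" in the sense the abstract contrasts with the MOD17 product.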
Swirling flow in a model of the carotid artery: Numerical and experimental study
NASA Astrophysics Data System (ADS)
Kotmakova, Anna A.; Gataulin, Yakov A.; Yukhnev, Andrey D.
2018-05-01
The present contribution is aimed at a numerical and experimental study of inlet swirling flow in a model of the carotid artery. Flow visualization is performed both with the ultrasound color Doppler imaging mode and by postprocessing of CFD data for swirling flow in a carotid artery model. Special attention is paid to obtaining data on the secondary motion in the internal carotid artery. The principal errors of the measurement technique developed are estimated using the results of the flow calculations.
Motion estimation of subcellular structures from fluorescence microscopy images.
Vallmitjana, A; Civera-Tregon, A; Hoenicka, J; Palau, F; Benitez, R
2017-07-01
We present an automatic image processing framework to study moving intracellular structures from live cell fluorescence microscopy. The system includes the identification of static and dynamic structures from time-lapse images using data clustering as well as the identification of the trajectory of moving objects with a probabilistic tracking algorithm. The method has been successfully applied to study mitochondrial movement in neurons. The approach provides excellent performance under different experimental conditions and is robust to common sources of noise including experimental, molecular and biological fluctuations.
ERIC Educational Resources Information Center
Biesma, R. G.; Pavlova, M.; van Merode, G. G.; Groot, W.
2007-01-01
This paper uses an experimental design to estimate preferences of employers for key competencies during the transition from initial education to the labor market. The study is restricted to employers of entry-level academic graduates entering public health organizations in the Netherlands. Given the changing and complex demands in public health,…
Estimation of strength parameters of small-bore metal-polymer pipes
NASA Astrophysics Data System (ADS)
Shaydakov, V. V.; Chernova, K. V.; Penzin, A. V.
2018-03-01
The paper presents results from a set of laboratory studies of the strength parameters of small-bore metal-polymer pipes of type TG-5/15. A wave method was used to estimate the provisional modulus of elasticity of the pipes' metal-polymer material. Longitudinal deformation, transverse deformation, and leak-off pressure were determined experimentally, accounting for mechanical damage and pipe bending.
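A wave method for estimating an elastic modulus typically rests on the thin-rod relation between longitudinal wave speed and stiffness, c = sqrt(E / rho). The sketch below uses that standard relation with hypothetical density and wave-speed values; it is not the paper's specific procedure for TG-5/15 pipe.

```python
def modulus_from_wave_speed(density, wave_speed):
    # Thin-rod approximation: longitudinal wave speed c = sqrt(E / rho),
    # so the elastic modulus follows as E = rho * c**2.
    return density * wave_speed ** 2

# Hypothetical metal-polymer composite: rho = 1400 kg/m^3, c = 1500 m/s
E = modulus_from_wave_speed(1400.0, 1500.0)  # modulus in Pa
```

Measuring the transit time of a pulse over a known pipe length gives the wave speed, from which the modulus follows directly.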
Mueller, Christoph Emanuel
2015-06-01
In order to assess website content effectiveness (WCE), investigations have to be made into whether the reception of website contents leads to a change in the characteristics of website visitors or not. Because randomized controlled trials (RCTs) are not always the method of choice, researchers may have to follow other strategies such as using retrospective pretest methodology (RPM), a straightforward and easy-to-implement tool for estimating intervention effects. This article aims to introduce RPM in the context of website evaluation and test its viability under experimental conditions. Building on the idea that RCTs deliver unbiased estimates of the true causal effects of website content reception, I compared the performance of RPM with that of an RCT within the same study. Hence, if RPM provides effect estimates similar to those of the RCT, it can be considered a viable tool for assessing the effectiveness of the website content features under study. RPM was capable of delivering comparatively resilient estimates of the effects of a YouTube video and a text feature on knowledge and attitudes. With regard to all of the outcome variables considered, the differences between the sizes of the effects estimated by the RCT and RPM were not significant. Additionally, RPM delivered relatively accurate effect size estimates in most of the cases. Therefore, I conclude that RPM could be a viable alternative for assessing WCE in cases where RCTs are not the preferred method. © The Author(s) 2015.
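The contrast between the two effect estimates in the abstract can be made concrete with a small sketch. The data and functions are illustrative only: an RCT estimates the effect as a between-group difference at post-test, while RPM estimates it as a within-group difference between the post-test and a retrospectively reported pre-test.

```python
def mean(values):
    return sum(values) / len(values)

def rct_effect(treated_post, control_post):
    # RCT estimate: difference in post-test means between randomized groups
    return mean(treated_post) - mean(control_post)

def rpm_effect(retrospective_pre, post):
    # Retrospective pretest estimate: post-test minus retrospectively
    # reported pre-test, within the same (treated) group
    return mean(post) - mean(retrospective_pre)
```

The study's viability test amounts to checking that these two quantities, computed on the same intervention, do not differ significantly.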
Yuan, Zaijian; Shen, Yanjun
2013-01-01
Over-exploitation of groundwater resources for irrigated grain production in Hebei province threatens national grain food security. The objective of this study was to quantify agricultural water consumption (AWC) and irrigation water consumption in this region. A methodology to estimate AWC was developed based on the Penman-Monteith method using meteorological station data (1984–2008) and existing actual ET data (2002–2008) that were estimated from MODIS satellite data through a remote sensing ET model. Validation of the model using data observed at experimental plots (50 m2) of the Luancheng Agro-ecosystem Experimental Station, Chinese Academy of Sciences, showed that the average deviation of the model was −3.7% for non-rainfed plots. The total AWC and irrigation water (mainly groundwater) consumption for Hebei province for 1984–2008 were then estimated as 864 km3 and 139 km3, respectively. In addition, we found that AWC has significantly increased during the past 25 years except in a few counties located in mountainous regions. Estimates of net groundwater consumption for grain food production within the plain area of Hebei province in the past 25 years accounted for 113 km3, which could cause an average groundwater-level decrease of 7.4 m over the plain. The integration of meteorological and satellite data allows us to extend estimation of actual ET beyond the record available from satellite data, and the approach could be applicable in other regions globally where similar data are available. PMID:23516537
A combined experimental and computational thermodynamic study of difluoronitrobenzene isomers.
Ribeiro da Silva, Manuel A V; Monte, Manuel J S; Lobo Ferreira, Ana I M C; Oliveira, Juliana A S A; Cimas, Álvaro
2010-10-14
This work reports the experimental and computational thermochemical study performed on three difluorinated nitrobenzene isomers: 2,4-difluoronitrobenzene (2,4-DFNB), 2,5-difluoronitrobenzene (2,5-DFNB), and 3,4-difluoronitrobenzene (3,4-DFNB). The standard (p° = 0.1 MPa) molar enthalpies of formation in the liquid phase of these compounds were derived from the standard molar energies of combustion, in oxygen, at T = 298.15 K, measured by rotating bomb combustion calorimetry. A static method was used to perform the vapor pressure study of the referred compounds allowing the construction of the phase diagrams and determination of the respective triple point coordinates, as well as the standard molar enthalpies of vaporization, sublimation, and fusion for two of the isomers (2,4-DFNB and 3,4-DFNB). For 2,5-difluoronitrobenzene, only liquid vapor pressures were measured enabling the determination of the standard molar enthalpies of vaporization. Combining the thermodynamic parameters of the compounds studied, the following standard (p° = 0.1 MPa) molar enthalpies of formation in the gaseous phase, at T = 298.15 K, were derived: Δ(f)H(m)° (2,4-DFNB, g) = -(296.3 ± 1.8) kJ · mol⁻¹, Δ(f)H(m)° (2,5-DFNB, g) = -(288.2 ± 2.1) kJ · mol⁻¹, and Δ(f)H(m)° (3,4-DFNB, g) = -(302.4 ± 2.1) kJ · mol⁻¹. Using the empirical scheme developed by Cox, several approaches were evaluated in order to identify the best method for estimating the standard molar gas phase enthalpies of formation of these compounds. The estimated values were compared to the ones obtained experimentally, and the approach providing the best comparison with experiment was used to estimate the thermodynamic behavior of the other difluorinated nitrobenzene isomers not included in this study. 
Additionally, the enthalpies of formation of these compounds along with the enthalpies of formation of the other isomers not studied experimentally, i.e., 2,3-DFNB, 2,6-DFNB, and 3,5-DFNB, were estimated using the composite G3MP2B3 approach together with adequate gas-phase working reactions. Furthermore, we also used this computational approach to calculate the gas-phase basicities, proton and electron affinities, and, finally, adiabatic ionization enthalpies.
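The way gas-phase enthalpies of formation are assembled in such studies follows a simple thermodynamic cycle: the liquid-phase enthalpy of formation plus the enthalpy of vaporization, with independent uncertainties combined in quadrature. The sketch below uses that standard cycle with made-up numbers, not the paper's measured values.

```python
import math

def gas_phase_enthalpy(dfh_liquid, u_liquid, dvap_h, u_vap):
    # Thermodynamic cycle: dfH(g) = dfH(l) + dvapH, all in kJ/mol,
    # with independent uncertainties combined in quadrature.
    value = dfh_liquid + dvap_h
    uncertainty = math.sqrt(u_liquid ** 2 + u_vap ** 2)
    return value, uncertainty

# Hypothetical example: liquid-phase -350.0 +/- 1.5 kJ/mol plus
# vaporization 55.0 +/- 2.0 kJ/mol gives the gas-phase value below.
dfh_gas, u_gas = gas_phase_enthalpy(-350.0, 1.5, 55.0, 2.0)
```

The same arithmetic applies with the sublimation enthalpy in place of the vaporization enthalpy when the condensed phase studied is the crystal.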
NASA Astrophysics Data System (ADS)
Dai, Z.; Wolfsberg, A. V.; Zhu, L.; Reimus, P. W.
2017-12-01
Colloids have the potential to enhance mobility of strongly sorbing radionuclide contaminants in fractured rocks at underground nuclear test sites. This study presents an experimental and numerical investigation of colloid-facilitated plutonium reactive transport in fractured porous media for identifying plutonium sorption/filtration processes. The transport parameters for dispersion, diffusion, sorption, and filtration are estimated with inverse modeling for minimizing the least squares objective function of multicomponent concentration data from multiple transport experiments with the Shuffled Complex Evolution Metropolis (SCEM). Capitalizing on an unplanned experimental artifact that led to colloid formation and migration, we adopt a stepwise strategy to first interpret the data from each experiment separately and then to incorporate multiple experiments simultaneously to identify a suite of plutonium-colloid transport processes. Nonequilibrium or kinetic attachment and detachment of plutonium-colloid in fractures was clearly demonstrated and captured in the inverted modeling parameters along with estimates of the source plutonium fraction that formed plutonium-colloids. The results from this study provide valuable insights for understanding the transport mechanisms and environmental impacts of plutonium in fractured formations and groundwater aquifers.
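The joint inversion idea, one least-squares objective pooled over multiple transport experiments, can be sketched as below. The data layout, the toy first-order attachment model, and the parameter names are my assumptions; the actual study inverts a full multicomponent reactive transport model with SCEM sampling.

```python
import math

def attachment_model(params, t):
    # Toy first-order attachment model: c(t) = c0 * exp(-k_att * t).
    # Stands in for the full colloid-facilitated transport simulator.
    c0, k_att = params
    return c0 * math.exp(-k_att * t)

def joint_objective(params, experiments, model):
    # Sum of squared residuals pooled over all transport experiments,
    # so a single parameter set must explain every breakthrough curve.
    total = 0.0
    for exp in experiments:
        for t, observed in zip(exp["times"], exp["concentrations"]):
            total += (model(params, t) - observed) ** 2
    return total
```

Minimizing this pooled objective (e.g. with an MCMC-style sampler such as SCEM) is what lets several experiments jointly constrain dispersion, sorption, and filtration parameters.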
NASA Astrophysics Data System (ADS)
Bermúdez, Vicente; Pastor, José V.; López, J. Javier; Campos, Daniel
2014-06-01
A study of soot measurement deviation using a diffusion charger sensor with three dilution ratios was conducted in order to obtain an optimum setting for accurate measurements of the soot mass emitted by a light-duty diesel engine under transient operating conditions. The paper includes three experimental phases: an experimental validation of the measurement settings in steady-state operating conditions; an evaluation of the proposed setting under the New European Driving Cycle; and a study of correlations for different measurement techniques. These correlations provide a reliable tool for estimating soot emission from light extinction measurements or from the accumulation-mode particle concentration. There are several methods and correlations for estimating soot concentration in the literature, but most of them were assessed for steady-state operating points. In this case, the correlations are obtained from more than 4000 points measured in transient conditions. The results of the two new correlations, with less than 4% deviation from the reference measurement, are presented in this paper.
Comparison of volume estimation methods for pancreatic islet cells
NASA Astrophysics Data System (ADS)
Dvořák, Jiří; Švihlík, Jan; Habart, David; Kybic, Jan
2016-03-01
In this contribution we study different methods of automatic volume estimation for pancreatic islets, which can be used in the quality control step prior to islet transplantation. The total islet volume is an important criterion in quality control. The individual islet volume distribution is also of interest, since it has been indicated that smaller islets can be more effective. A 2D image of a microscopy slice containing the islets is acquired. The inputs to the volume estimation methods are segmented images of individual islets; the segmentation step is not discussed here. We consider simple methods of volume estimation assuming that the islets have a spherical or ellipsoidal shape. We also consider a local stereological method, namely the nucleator. The nucleator does not rely on any shape assumptions and provides unbiased estimates if isotropic sections through the islets are observed. We present a simulation study comparing the performance of the volume estimation methods in different scenarios and an experimental study comparing the methods on a real dataset.
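The simple shape-based estimators mentioned above can be sketched directly. This is a minimal illustration, not code from the paper: it treats each segmented islet as a sphere whose cross-section matches the segmented 2D area, with a prolate-ellipsoid variant; the pixel size and helper names are assumptions.

```python
import numpy as np

def sphere_volume_from_area(area_px, px_size_um=1.0):
    """Sphere-model estimate: islet volume (um^3) from its segmented 2D
    area, assuming the profile is the islet's equatorial cross-section."""
    r = np.sqrt(area_px * px_size_um**2 / np.pi)   # area-equivalent radius
    return 4.0 / 3.0 * np.pi * r**3

def ellipsoid_volume_from_axes(a_um, b_um):
    """Ellipsoid-model variant: semi-axes a, b of the fitted 2D ellipse;
    the unobserved third semi-axis is assumed equal to the minor one."""
    return 4.0 / 3.0 * np.pi * a_um * b_um * b_um

# A circular islet profile of radius 50 um
print(sphere_volume_from_area(np.pi * 50.0**2))    # ~523599 um^3
```

The sphere model is unbiased only for truly spherical islets sectioned through the equator; the nucleator avoids that assumption at the cost of requiring isotropic sections.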
A joint-space numerical model of metabolic energy expenditure for human multibody dynamic system.
Kim, Joo H; Roberts, Dustyn
2015-09-01
Metabolic energy expenditure (MEE) is a critical performance measure of human motion. In this study, a general joint-space numerical model of MEE is derived by integrating the laws of thermodynamics and principles of multibody system dynamics, which can evaluate MEE without the limitations inherent in experimental measurements (phase delays, steady state and task restrictions, and limited range of motion) or muscle-space models (complexities and indeterminacies from excessive DOFs, contacts and wrapping interactions, and reliance on in vitro parameters). Muscle energetic components are mapped to the joint space, in which the MEE model is formulated. A constrained multi-objective optimization algorithm is established to estimate the model parameters from experimental walking data also used for initial validation. The joint-space parameters estimated directly from active subjects provide reliable MEE estimates with a mean absolute error of 3.6 ± 3.6% relative to validation values, which can be used to evaluate MEE for complex non-periodic tasks that may not be experimentally verifiable. This model also enables real-time calculations of instantaneous MEE rate as a function of time for transient evaluations. Although experimental measurements may not be completely replaced by model evaluations, predicted quantities can be used as strong complements to increase reliability of the results and yield unique insights for various applications. Copyright © 2015 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Shadish, William R.
2011-01-01
This article reviews several decades of the author's meta-analytic and experimental research on the conditions under which nonrandomized experiments can approximate the results from randomized experiments (REs). Several studies make clear that we can expect accurate effect estimates from the regression discontinuity design, though its statistical…
Experimental approach for thermal parameters estimation during glass forming process
NASA Astrophysics Data System (ADS)
Abdulhay, B.; Bourouga, B.; Alzetto, F.; Challita, C.
2016-10-01
In this paper, an experimental device designed and developed to estimate thermal conditions at the glass/piston contact interface is presented. This device is made of two parts: the upper part contains the piston, made of metal, and a heating device to raise the temperature of the piston up to 500 °C. The lower part is composed of a lead crucible and a glass sample. The assembly is provided with a heating system, an induction furnace of 6 kW, for heating the glass up to 950 °C. The developed experimental procedure made it possible, in a previously published study, to estimate the thermal contact resistance (TCR) using the inverse technique developed by Beck [1]. The semi-transparent character of the glass has been taken into account by an additional radiative heat flux and an equivalent thermal conductivity. After the set-up tests, reproducibility experiments for a specific contact pressure were carried out, with a maximum dispersion that does not exceed 6%. Then, experiments under different conditions for a specific glass forming process regarding the application (packaging, buildings and automobile) were carried out. The objective is to determine experimentally, for each application, the typical conditions capable of minimizing the glass temperature loss during the glass forming process.
NASA Astrophysics Data System (ADS)
Kurokawa, Yusaku; Taki, Hirofumi; Yashiro, Satoshi; Nagasawa, Kan; Ishigaki, Yasushi; Kanai, Hiroshi
2016-07-01
We propose a method for assessing the degree of red blood cell (RBC) aggregation using the backscattering property of high-frequency ultrasound. In this method, the scattering property of RBCs is extracted from the power spectrum of RBC echoes normalized by that from the posterior wall of a vein. In an experimental study using a phantom, the proposed method estimated the sizes of microspheres 5 and 20 µm in diameter with mean values of 4.7 and 17.3 µm and standard deviations of 1.9 and 1.4 µm, respectively. In an in vivo experimental study, we compared the results between three healthy subjects and four diabetic patients. The average estimated scatterer diameters in healthy subjects at rest and during avascularization were 7 and 28 µm, respectively. In contrast, those in diabetic patients receiving both antithrombotic therapy and insulin therapy were 11 and 46 µm, respectively. These results show that the proposed method has high potential for clinical application in assessing RBC aggregation, which may be related to the progress of diabetes.
Genescà, Meritxell; Svensson, U Peter; Taraldsen, Gunnar
2015-04-01
Ground reflections cause problems when estimating the direction of arrival of aircraft noise. In traditional methods, based on the time differences between the microphones of a compact array, they may cause a significant loss of accuracy in the vertical direction. This study evaluates the use of first-order directional microphones, instead of omnidirectional ones, with the aim of reducing the amplitude of the reflected sound. Such a modification allows the problem to be treated as in free-field conditions. Although further tests are needed for a complete evaluation of the method, the experimental results presented here show that, under the particular conditions tested, the vertical angle error is reduced by ∼10° for both jet and propeller aircraft when an appropriate directivity pattern is selected. It is also shown that the final level of error depends on the vertical angle of arrival of the sound, and that the estimates of the horizontal angle of arrival are influenced neither by the directivity pattern of the microphones nor by the reflective properties of the ground.
Quantitative Determination of Isotope Ratios from Experimental Isotopic Distributions
Kaur, Parminder; O’Connor, Peter B.
2008-01-01
Isotope variability due to natural processes provides important information for studying a variety of complex natural phenomena from the origins of a particular sample to the traces of biochemical reaction mechanisms. These measurements require high-precision determination of isotope ratios of a particular element involved. Isotope Ratio Mass Spectrometers (IRMS) are widely employed tools for such a high-precision analysis, which have some limitations. This work aims at overcoming the limitations inherent to IRMS by estimating the elemental isotopic abundance from the experimental isotopic distribution. In particular, a computational method has been derived which allows the calculation of 13C/12C ratios from the whole isotopic distributions, given certain caveats, and these calculations are applied to several cases to demonstrate their utility. The limitations of the method in terms of the required number of ions and S/N ratio are discussed. For high-precision estimates of the isotope ratios, this method requires very precise measurement of the experimental isotopic distribution abundances, free from any artifacts introduced by noise, sample heterogeneity, or other experimental sources. PMID:17263354
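The abstract's idea of recovering a ¹³C/¹²C ratio from a whole isotopic distribution can be illustrated with a simple moment estimator under a binomial model of carbon isotope incorporation. This is a hedged sketch, not the paper's actual computational method; the function name and the 60-carbon example are hypothetical.

```python
import math
import numpy as np

def carbon13_fraction(intensities, n_carbons):
    """Moment estimator of the 13C abundance p from a carbon-only
    isotopic distribution: under a binomial incorporation model, the
    mean number of 13C atoms per molecule equals n*p."""
    I = np.asarray(intensities, dtype=float)
    k = np.arange(I.size)                 # peak index M+0, M+1, ...
    return float((k * I).sum() / (I.sum() * n_carbons))

# Synthetic distribution for a hypothetical 60-carbon species at
# natural 13C abundance (peaks M+0 .. M+5)
p_true, n = 0.0107, 60
I = [math.comb(n, k) * p_true**k * (1.0 - p_true)**(n - k) for k in range(6)]
print(carbon13_fraction(I, n))  # ~0.0107
```

As the abstract stresses, such an estimator is only as good as the measured peak abundances: noise, sample heterogeneity, or contributions from other elements bias the recovered ratio.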
Flexible manipulator control experiments and analysis
NASA Technical Reports Server (NTRS)
Yurkovich, S.; Ozguner, U.; Tzes, A.; Kotnik, P. T.
1987-01-01
Modeling and control design for flexible manipulators, from both an experimental and an analytical viewpoint, are described. From the application perspective, an ongoing effort within the laboratory environment at the Ohio State University, where experimentation on a single-link flexible arm is underway, is described. Several unique features of this study are described here. First, the manipulator arm is slewed by a direct-drive dc motor and has a rigid counterbalance appendage. Current experimentation is from two viewpoints: (1) rigid-body slewing and vibration control via actuation with the hub motor, and (2) vibration suppression through the use of structure-mounted proof-mass actuation at the tip. Such an application to manipulator control is of particular interest in the design of space-based telerobotic control systems, but has received little attention to date. From an analytical viewpoint, parameter estimation techniques within the closed loop for self-tuning adaptive control approaches are discussed. Also introduced is a control approach based on output feedback and frequency weighting to counteract the effects of spillover in reduced-order model design. A model of the flexible manipulator based on experimental measurements is evaluated for such estimation and control approaches.
NASA Astrophysics Data System (ADS)
Meyer, V.; Maxit, L.; Renou, Y.; Audoly, C.
2017-09-01
The understanding of the influence of non-axisymmetric internal frames on the vibroacoustic behavior of a stiffened cylindrical shell is of high interest for the naval and aeronautic industries. Several numerical studies have shown that a non-axisymmetric internal frame can increase the radiation efficiency significantly in the case of a mechanical point force. However, less attention has been paid to the experimental verification of this statement. That is why this paper compares the radiation efficiency estimated experimentally for a stiffened cylindrical shell with and without internal frames. The experimental process is based on scanning laser vibrometer measurements of the vibrations on the surface of the shell. A transform of the vibratory field into the wavenumber domain is then performed, which makes it possible to estimate the far-field radiated pressure via the stationary phase theorem. An increase in the radiation efficiency is observed at low frequencies. Analysis of the velocity field in the physical and wavenumber spaces highlights the coupling of the circumferential orders at the origin of the increase in radiation efficiency.
ERIC Educational Resources Information Center
Dong, Nianbo; Maynard, Rebecca
2013-01-01
This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
Methods for the accurate estimation of confidence intervals on protein folding ϕ-values
Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.
2006-01-01
ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
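The dependence effect described above can be made concrete with a delta-method (first-order error-propagation) sketch for the ratio ϕ = ΔΔG‡/ΔΔG_eq. This is a generic illustration, not the authors' derivation, and the numerical values are invented.

```python
import numpy as np

def phi_stderr(ddG_ts, ddG_eq, var_ts, var_eq, cov):
    """Delta-method standard error of phi = ddG_ts / ddG_eq.
    cov is the covariance of the two free-energy estimates;
    cov = 0 reproduces the common independence assumption."""
    phi = ddG_ts / ddG_eq
    rel_var = (var_ts / ddG_ts**2 + var_eq / ddG_eq**2
               - 2.0 * cov / (ddG_ts * ddG_eq))
    return phi, abs(phi) * np.sqrt(rel_var)

# Positively correlated estimates (typical, since both free-energy
# changes come from the same chevron fits) make the true uncertainty
# smaller than the independence assumption suggests:
phi, se_dep = phi_stderr(0.8, 2.0, 0.04, 0.09, 0.03)
_, se_indep = phi_stderr(0.8, 2.0, 0.04, 0.09, 0.0)
print(phi, se_dep, se_indep)
```

With a positive covariance term, the independence assumption (cov = 0) overstates the error bar, i.e., understates the precision of ϕ, matching the abstract's claim.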
Graphical Models for Quasi-Experimental Designs
ERIC Educational Resources Information Center
Kim, Yongnam; Steiner, Peter M.; Hall, Courtney E.; Su, Dan
2016-01-01
Experimental and quasi-experimental designs play a central role in estimating cause-effect relationships in education, psychology, and many other fields of the social and behavioral sciences. This paper presents and discusses the causal graphs of experimental and quasi-experimental designs. For quasi-experimental designs the authors demonstrate…
Sampling flies or sampling flaws? Experimental design and inference strength in forensic entomology.
Michaud, J-P; Schoenly, Kenneth G; Moreau, G
2012-01-01
Forensic entomology is an inferential science because postmortem interval estimates are based on the extrapolation of results obtained in field or laboratory settings. Although enormous gains in scientific understanding and methodological practice have been made in forensic entomology over the last few decades, a majority of the field studies we reviewed do not meet the standards for inference, which are 1) adequate replication, 2) independence of experimental units, and 3) experimental conditions that capture a representative range of natural variability. Using a mock case-study approach, we identify design flaws in field and lab experiments and suggest methodological solutions for increasing inference strength that can inform future casework. Suggestions for improving data reporting in future field studies are also proposed.
Dillon, C R; Borasi, G; Payne, A
2016-01-01
For thermal modeling to play a significant role in treatment planning, monitoring, and control of magnetic resonance-guided focused ultrasound (MRgFUS) thermal therapies, accurate knowledge of ultrasound and thermal properties is essential. This study develops a new analytical solution for the temperature change observed in MRgFUS which can be used with experimental MR temperature data to provide estimates of the ultrasound initial heating rate, Gaussian beam variance, tissue thermal diffusivity, and Pennes perfusion parameter. Simulations demonstrate that this technique provides accurate and robust property estimates that are independent of the beam size, thermal diffusivity, and perfusion levels in the presence of realistic MR noise. The technique is also demonstrated in vivo using MRgFUS heating data in rabbit back muscle. Errors in property estimates are kept less than 5% by applying a third order Taylor series approximation of the perfusion term and ensuring the ratio of the fitting time (the duration of experimental data utilized for optimization) to the perfusion time constant remains less than one. PMID:26741344
Noroozi, Javad; Paluch, Andrew S
2017-02-23
Molecular dynamics simulations were employed to both estimate the solubility of nonelectrolyte solids, such as acetanilide, acetaminophen, phenacetin, methylparaben, and lidocaine, in supercritical carbon dioxide and understand the underlying molecular-level driving forces. The solubility calculations involve the estimation of the solute's limiting activity coefficient, which may be computed using conventional staged free-energy calculations. For the case of lidocaine, wherein the infinite dilution approximation is not appropriate, we demonstrate how the activity coefficient at finite concentrations may be estimated without additional effort using the dilute solution approximation and how this may be used to further understand the solvation process. Combining with experimental pure-solid properties, namely, the normal melting point and enthalpy of fusion, solubilities were estimated. The results are in good quantitative agreement with available experimental data, suggesting that molecular simulations may be a powerful tool for understanding supercritical processes and the design of carbon dioxide-philic molecular systems. Structural analyses were performed to shed light on the microscopic details of the solvation of different functional groups by carbon dioxide and the observed solubility trends.
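The final solubility step described above, combining a simulated activity coefficient with pure-solid fusion properties, follows the classical ideal-solubility relation. A minimal sketch with illustrative, not paper-derived, numbers:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def solubility_mole_fraction(dH_fus, T_m, T, ln_gamma):
    """Classical solid-solubility relation:
    ln x = -dHfus/R * (1/T - 1/Tm) - ln(gamma),
    where ln_gamma is the solute activity coefficient in the solvent,
    here imagined to come from staged free-energy simulations."""
    return math.exp(-dH_fus / R * (1.0 / T - 1.0 / T_m) - ln_gamma)

# Illustrative numbers only (roughly acetaminophen-like, not from the paper)
x = solubility_mole_fraction(dH_fus=27000.0, T_m=443.0, T=318.0, ln_gamma=5.0)
print(x)  # ~3.8e-4 mole fraction
```

The fusion term captures how far the solid is below its melting point; the activity coefficient carries all of the solute-solvent interaction physics that the simulations supply.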
Fast myopic 2D-SIM super resolution microscopy with joint modulation pattern estimation
NASA Astrophysics Data System (ADS)
Orieux, François; Loriette, Vincent; Olivo-Marin, Jean-Christophe; Sepulveda, Eduardo; Fragola, Alexandra
2017-12-01
Super-resolution in structured illumination microscopy (SIM) is obtained through de-aliasing of modulated raw images, in which high frequencies are measured indirectly inside the optical transfer function. Usual approaches that use 9 or 15 images are often too slow for dynamic studies. Moreover, as experimental conditions change with time, modulation parameters must be estimated within the images. This paper tackles the problem of image reconstruction for fast super resolution in SIM, where the number of available raw images is reduced to four instead of nine or fifteen. Within an optimization framework, the solution is inferred via a joint myopic criterion for image and modulation (or acquisition) parameters, leading to what is frequently called a myopic or semi-blind inversion problem. The estimate is chosen as the minimizer of the nonlinear criterion, numerically calculated by means of a block coordinate optimization algorithm. The effectiveness of the proposed method is demonstrated for simulated and experimental examples. The results show precise estimation of the modulation parameters jointly with the reconstruction of the super resolution image. The method also shows its effectiveness for thick biological samples.
Gupta, Manoj; Gupta, T C
2017-10-01
The present study aims to accurately estimate the inertial, physical, and dynamic parameters of a human body vibratory model that is consistent with the physical structure of the human body and also replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, the elastic modulus of individual body segments, and modal damping ratios. The elastic moduli of ellipsoidal body segments are initially estimated by comparing the stiffness of spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using two dominant peaks in the frequency range of 0-25 Hz. From a comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. An acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for the 50th percentile U.S. male, except at very low frequencies, validates the human body model developed. A reasonable agreement between the theoretical response curve and the experimental response envelope for the average Indian male also affirms the technique used for constructing the vibratory model of a standing person. The present work thus develops an effective technique for constructing a subject-specific damped vibratory model from physical measurements.
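A reduced, single-DOF analogue can illustrate how damping ratios are read off transmissibility curves. This is a textbook base-excitation sketch with the half-power bandwidth rule, not the paper's 13-DOF procedure, and all numbers are assumed.

```python
import numpy as np

def base_excitation_tr(f, fn, zeta):
    """Transmissibility of a single-DOF oscillator under base (support)
    excitation; a reduced analogue of platform-to-head TR curves."""
    r = f / fn
    num = 1.0 + (2.0 * zeta * r)**2
    den = (1.0 - r**2)**2 + (2.0 * zeta * r)**2
    return np.sqrt(num / den)

fn_true, zeta_true = 5.0, 0.05          # assumed natural frequency (Hz), damping
f = np.linspace(0.1, 25.0, 2500)        # the 0-25 Hz band used in the study
tr = base_excitation_tr(f, fn_true, zeta_true)

# Half-power (-3 dB) bandwidth estimate of the damping ratio
peak = tr.max()
band = f[tr >= peak / np.sqrt(2.0)]
zeta_est = (band[-1] - band[0]) / (2.0 * fn_true)
print(peak, zeta_est)
```

The half-power rule is accurate only for light damping; for the strongly damped modes of a real human body, fitting the full TR curve (as the paper does) is more robust.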
ERIC Educational Resources Information Center
Garrido, Mariquita; Payne, David A.
Minimum competency cut-off scores on a statistics exam were estimated under four conditions: the Angoff judging method with item data (n=20), and without data available (n=19); and the Modified Angoff method with (n=19), and without (n=19) item data available to judges. The Angoff method required free response percentage estimates (0-100) percent,…
Consistency of nuclear thermometric measurements at moderate excitation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, T. K.; Bhattacharya, C.; Kundu, S.
2008-08-15
A comparison of various thermometric techniques used for the estimation of nuclear temperature has been made from the decay of the hot composite ³²S* produced in the reaction ²⁰Ne (145 MeV) + ¹²C. It is shown that the temperatures estimated by different techniques, known to vary significantly in the Fermi energy domain, are consistent with each other within experimental limits for the system studied here.
NASA Astrophysics Data System (ADS)
Lim, Yee Yan; Kiong Soh, Chee
2011-12-01
Structures in service are often subjected to fatigue loads. Cracks would develop and lead to failure if left unnoticed after a large number of cyclic loadings. Monitoring the process of fatigue crack propagation as well as estimating the remaining useful life of a structure is thus essential to prevent catastrophe while minimizing earlier-than-required replacement. The advent of smart materials such as piezo-impedance transducers (lead zirconate titanate, PZT) has ushered in a new era of structural health monitoring (SHM) based on non-destructive evaluation (NDE). This paper presents a series of investigative studies to evaluate the feasibility of fatigue crack monitoring and estimation of remaining useful life using the electromechanical impedance (EMI) technique employing a PZT transducer. Experimental tests were conducted to study the ability of the EMI technique in monitoring fatigue crack in 1D lab-sized aluminum beams. The experimental results prove that the EMI technique is very sensitive to fatigue crack propagation. A proof-of-concept semi-analytical damage model for fatigue life estimation has been developed by incorporating the linear elastic fracture mechanics (LEFM) theory into the finite element (FE) model. The prediction of the model matches closely with the experiment, suggesting the possibility of replacing costly experiments in future.
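The LEFM ingredient of such a fatigue-life model can be illustrated with a generic Paris-law crack-growth integration. This is a standard textbook sketch, not the authors' semi-analytical FE model, and the material constants are illustrative.

```python
import numpy as np

def paris_law_life(a0, ac, dsigma, C=1e-11, m=3.0, Y=1.12, steps=20000):
    """Cycles to grow an edge crack from a0 to ac (meters) under stress
    range dsigma (MPa), integrating da/dN = C*(dK)^m with
    dK = Y*dsigma*sqrt(pi*a). C, m, Y are illustrative aluminum-alloy
    values, not parameters fitted in the paper."""
    a = np.linspace(a0, ac, steps)
    dK = Y * dsigma * np.sqrt(np.pi * a)          # MPa*sqrt(m)
    dN_da = 1.0 / (C * dK**m)                     # cycles per meter of growth
    return np.sum(0.5 * (dN_da[1:] + dN_da[:-1]) * np.diff(a))  # trapezoid rule

N = paris_law_life(a0=1e-3, ac=10e-3, dsigma=60.0)
print(f"{N:.3g} cycles to failure")
```

Remaining life estimated this way depends strongly on the current crack size a0, which is exactly what an EMI-based monitor would track in service.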
Design and analysis of three-arm trials with negative binomially distributed endpoints.
Mütze, Tobias; Munk, Axel; Friede, Tim
2016-02-20
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations will be derived. The performance of the proposed test will be assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
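The flavor of a Wald-type non-inferiority test on negative binomially distributed counts can be sketched by simulation. This simplified version uses sample-variance (not restricted maximum-likelihood) standard errors and only the experimental and active-control arms, so it is an assumption-laden illustration rather than the paper's proposed test.

```python
import numpy as np

rng = np.random.default_rng(42)

def nb_sample(mean, theta, n):
    """Negative binomial counts via a gamma-Poisson mixture
    (dispersion theta, so Var = mean + mean^2/theta)."""
    lam = rng.gamma(shape=theta, scale=mean / theta, size=n)
    return rng.poisson(lam)

def wald_noninferiority(x_exp, x_ctl, margin=1.3, alpha_z=-1.959964):
    """Wald test of H0: mu_exp/mu_ctl >= margin on the log scale,
    with delta-method SEs from sample means and variances."""
    mE, mC = x_exp.mean(), x_ctl.mean()
    se2 = (x_exp.var(ddof=1) / (len(x_exp) * mE**2)
           + x_ctl.var(ddof=1) / (len(x_ctl) * mC**2))
    z = (np.log(mE / mC) - np.log(margin)) / np.sqrt(se2)
    return z < alpha_z  # reject H0 -> conclude non-inferiority

# Empirical power when the experimental arm truly equals the control
power = np.mean([wald_noninferiority(nb_sample(2.0, 1.0, 200),
                                     nb_sample(2.0, 1.0, 200))
                 for _ in range(500)])
print(power)
```

The paper's restricted-ML variance estimator replaces the sample variances here; the simulation scaffold for assessing type I error and power is otherwise the same.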
Assessing Methods for Generalizing Experimental Impact Estimates to Target Populations
ERIC Educational Resources Information Center
Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P.
2016-01-01
Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…
Cagetti, Maria Grazia; Strohmenger, Laura; Basile, Valentina; Abati, Silvio; Mastroberardino, Stefano; Campus, Guglielmo
2015-05-01
This randomized double-blind in vivo pilot study has evaluated the effects of a toothpaste containing fluoride (control) versus toothpaste containing fluoride, triclosan, cetylpyridinium chloride and essential oils (experimental) in controlling supragingival dental plaque and bleeding on probing in a sample of healthy schoolchildren. In total, 48 children (8 to 10 years) were selected and randomly divided into two groups (experimental and control), using the two different toothpastes twice a day for 2 minutes each for a 4-week period. The investigation included an evaluation of plaque quantity, using the Turesky modified Quigley-Hein method, and bleeding on probing that was recorded dichotomously. The unit of analysis was set at the gingival site level. Plaque Index and bleeding on probing were analyzed using distribution tables and chi-square test. A generalized estimating equation was used to estimate the parameters of a generalized linear model with a possible unknown correlation between outcomes. In total, 40 schoolchildren completed the trial. Considering each group separately, a statistically significant difference in plaque scores was recorded for both treatments (z-test = 9.23, P < .01 for the experimental toothpaste; and z-test = 7.47, P < .01 for the control toothpaste). Nevertheless, the effect over time was higher for the experimental toothpaste than for the control one (3.38 vs 1.96). No statistically significant results were observed regarding bleeding on probing. The 4-week use of the experimental toothpaste seems to produce higher plaque reduction compared to fluoridated toothpaste without other antibacterial ingredients. This finding has to be confirmed in a larger study.
Kinetic modeling of the photocatalytic degradation of clofibric acid in a slurry reactor.
Manassero, Agustina; Satuf, María Lucila; Alfano, Orlando Mario
2015-01-01
A kinetic study of the photocatalytic degradation of the pharmaceutical clofibric acid is presented. Experiments were carried out under UV radiation employing titanium dioxide in water suspension. The main reaction intermediates were identified and quantified. Intrinsic expressions to represent the kinetics of clofibric acid and the main intermediates were derived. The modeling of the radiation field in the reactor was carried out by Monte Carlo simulation. Experimental runs were performed by varying the catalyst concentration and the incident radiation. Kinetic parameters were estimated from the experiments by applying a non-linear regression procedure. Good agreement was obtained between model predictions and experimental data, with an error of 5.9% in the estimation of the primary pollutant concentration.
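The non-linear regression step can be sketched for the common pseudo-first-order simplification of photocatalytic kinetics. This is a generic illustration with synthetic data, not the intrinsic kinetic expressions derived in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, k):
    """Pseudo-first-order decay, a common simplification of
    photocatalytic (Langmuir-Hinshelwood) kinetics at low concentration."""
    return c0 * np.exp(-k * t)

# Synthetic concentration data (mg/L) with measurement noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 120.0, 13)                  # irradiation time, min
c = first_order(t, 20.0, 0.03) + rng.normal(0.0, 0.3, t.size)

popt, pcov = curve_fit(first_order, t, c, p0=[15.0, 0.01])
perr = np.sqrt(np.diag(pcov))                    # 1-sigma parameter uncertainties
print(popt, perr)
```

In the actual study the fitted model also couples the intermediates and the Monte Carlo radiation field, but the regression machinery is the same: minimize the residuals between predicted and measured concentrations.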
The measurement of bacterial translation by photon correlation spectroscopy.
Stock, G B; Jenkins, T C
1978-01-01
Photon correlation spectroscopy is shown to be a practical technique for the accurate determination of translational speeds of bacteria. Though other attempts have been made to use light scattering as a probe of various aspects of bacterial motility, no other comprehensive studies to establish firmly the basic capabilities and limitations of the technique have been published. The intrinsic accuracy of the assay of translational speeds by photon correlation spectroscopy is investigated by analysis of synthetic autocorrelation data; consistently accurate estimates of the mean and second moment of the speed distribution can be calculated. Extensive analyses of experimental preparations of Salmonella typhimurium examine the possible sources of experimental difficulty with the assay. Cinematography confirms the bacterial speed estimates obtained by photon correlation techniques. PMID:346073
Believers' estimates of God's beliefs are more egocentric than estimates of other people's beliefs
Epley, Nicholas; Converse, Benjamin A.; Delbosc, Alexa; Monteleone, George A.; Cacioppo, John T.
2009-01-01
People often reason egocentrically about others' beliefs, using their own beliefs as an inductive guide. Correlational, experimental, and neuroimaging evidence suggests that people may be even more egocentric when reasoning about a religious agent's beliefs (e.g., God). In both nationally representative and more local samples, people's own beliefs on important social and ethical issues were consistently correlated more strongly with estimates of God's beliefs than with estimates of other people's beliefs (Studies 1–4). Manipulating people's beliefs similarly influenced estimates of God's beliefs but did not as consistently influence estimates of other people's beliefs (Studies 5 and 6). A final neuroimaging study demonstrated a clear convergence in neural activity when reasoning about one's own beliefs and God's beliefs, but clear divergences when reasoning about another person's beliefs (Study 7). In particular, reasoning about God's beliefs activated areas associated with self-referential thinking more so than did reasoning about another person's beliefs. Believers commonly use inferences about God's beliefs as a moral compass, but that compass appears especially dependent on one's own existing beliefs. PMID:19955414
NASA Astrophysics Data System (ADS)
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
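A minimal GLUE sketch for a first-order inactivation model shows the Monte Carlo flavor of the approach: sample parameters from a prior, score each set with an informal likelihood, keep the "behavioral" sets, and report likelihood-weighted quantiles. The model, thresholds, and data below are all assumed, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic log-inactivation data, first-order model: log10(C/C0) = -k*t
t_obs = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])   # days
k_true = 0.05
y_obs = -k_true * t_obs + rng.normal(0.0, 0.05, t_obs.size)

# GLUE: uniform prior on k, informal likelihood from the SSE,
# behavioral threshold, then likelihood-weighted uncertainty bounds.
k_samples = rng.uniform(0.0, 0.2, 20000)
sse = ((y_obs - (-k_samples[:, None] * t_obs))**2).sum(axis=1)
L = np.exp(-sse / sse.min())                    # informal likelihood measure
behavioral = L > 0.01
k_b, w = k_samples[behavioral], L[behavioral] / L[behavioral].sum()

order = np.argsort(k_b)
cdf = np.cumsum(w[order])
lo, med, hi = np.interp([0.05, 0.5, 0.95], cdf, k_b[order])
print(med, (lo, hi))
```

Unlike least squares, which returns a single best-fit k with an asymptotic error, GLUE keeps every behavioral parameter set, which is why the two approaches agreed here yet GLUE also exposes equifinality when it exists.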
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery
2016-10-01
This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimating the orientation of objects lacking axial symmetry is proposed. The algorithm is intended to estimate the orientation of a specific, known 3D object based on its 3D model. The proposed algorithm consists of two stages: learning and estimation. The learning stage explores the studied object: using the 3D model, a set of training images is gathered by capturing the model from viewpoints evenly distributed on a sphere, with the points distributed according to the geosphere principle. Descriptors calculated from the training image set are then used in the estimation stage, which matches an observed image descriptor against the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies, and real-time performance of the algorithm in the FPGA-based vision system was demonstrated.
Contour-based object orientation estimation
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel
2016-04-01
Real-time object orientation estimation is a current problem in computer vision. In this paper we propose an approach to estimating the orientation of objects lacking axial symmetry. The algorithm is intended to estimate the orientation of a specific, known 3D object, so a 3D model is required for learning. The proposed algorithm consists of two stages: learning and estimation. The learning stage explores the studied object: using the 3D model, a set of training images is gathered by capturing the model from viewpoints evenly distributed on a sphere. The points are distributed according to the geosphere principle, which minimizes the size of the training image set. Descriptors calculated from the training images are then used in the estimation stage, which matches an observed image descriptor against the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error below 6°) in all case studies, and real-time performance was also demonstrated.
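The estimation stage's matching step can be sketched as a nearest-neighbour search over stored viewpoint descriptors; the random 32-dimensional vectors below are hypothetical stand-ins for the paper's contour-based features, and all sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Training "descriptors" for viewpoints sampled on a sphere (hypothetical:
# each viewpoint's descriptor is a fixed random vector here, standing in
# for features computed from rendered views of the 3D model)
n_views = 500
train_desc = rng.normal(size=(n_views, 32))
train_angles = rng.uniform(0, 360, n_views)      # stored orientation per view

def estimate_orientation(obs_desc):
    """Return the stored orientation of the closest training descriptor."""
    d = np.linalg.norm(train_desc - obs_desc, axis=1)
    return train_angles[np.argmin(d)]

# An observed view equal to training view 42 plus small noise should match it
obs = train_desc[42] + rng.normal(scale=0.01, size=32)
est = estimate_orientation(obs)
print(f"estimated orientation: {est:.1f} deg")
```

Real implementations refine the matched viewpoint (e.g. by interpolating between neighbouring views), but the lookup structure is the same.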
Two-photon absorption in oxazole derivatives: An experimental and quantum chemical study
NASA Astrophysics Data System (ADS)
Silva, D. L.; De Boni, L.; Correa, D. S.; Costa, S. C. S.; Hidalgo, A. A.; Zilio, S. C.; Canuto, S.; Mendonca, C. R.
2012-05-01
Experimental and theoretical studies on the two-photon absorption properties of two oxazole derivatives: 2,5-diphenyloxazole (PPO) and 2-(4-biphenylyl)-5-phenyl-1,3,4-oxadiazole (PBD) are presented. The two-photon absorption cross-section spectra were determined by means of the Z-scan technique, from 460 up to 650 nm, and reached peak values of 84 GM for PBD and 27 GM for PPO. Density Functional Theory and response function formalism are used to determine the molecular structures and the one- and two-photon absorption properties and to assist in the interpretation of the experimental results. The Polarizable Continuum Model in one-photon absorption calculations is used to estimate solvent effects.
ERIC Educational Resources Information Center
McCaffrey, Daniel F.; Ridgeway, Greg; Morral, Andrew R.
2004-01-01
Causal effect modeling with naturalistic rather than experimental data is challenging. In observational studies participants in different treatment conditions may also differ on pretreatment characteristics that influence outcomes. Propensity score methods can theoretically eliminate these confounds for all observed covariates, but accurate…
NASA Astrophysics Data System (ADS)
Wang, S. G.; Li, X.; Han, X. J.; Jin, R.
2011-05-01
Radar remote sensing has demonstrated its applicability to the retrieval of basin-scale soil moisture. The mechanism of radar backscattering from soils is complicated and strongly influenced by surface roughness. Moreover, retrieval of soil moisture using AIEM (advanced integral equation model)-like models is a classic underdetermined problem, owing to the lack of credible soil roughness distributions at the regional scale. Characterizing this roughness is therefore crucial for accurately deriving soil moisture from backscattering models. This study aims to obtain the surface roughness parameters (standard deviation of surface height σ and correlation length cl) simultaneously with soil moisture from multi-angular ASAR images, using a two-step retrieval scheme based on the AIEM. The method first uses a semi-empirical relationship relating the roughness slope Zs (Zs = σ²/cl) to the difference in backscattering coefficient (Δσ) between two ASAR images acquired at different incidence angles. Combined with an experimental statistical relationship between σ and cl, both roughness parameters can be estimated. The deduced roughness parameters are then used with the AIEM to retrieve soil moisture. The proposed method was evaluated in an experimental area in the middle reaches of the Heihe River Basin, where the Watershed Allied Telemetry Experimental Research (WATER) campaign took place. The results demonstrate that the method yields reliable estimates of soil water content. The key challenge is the presence of vegetation cover, which significantly affects the estimates of surface roughness and soil moisture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poch, L. A.; Veselka, T. D.; Palmer, C. S.
2011-08-22
Because of concerns about the impact that Glen Canyon Dam (GCD) operations were having on downstream ecosystems and endangered species, the Bureau of Reclamation (Reclamation) conducted an Environmental Impact Statement (EIS) on dam operations (DOE 1996). New operating rules and management goals for GCD that had been specified in the Record of Decision (ROD) (Reclamation 1996) were adopted in February 1997. In addition to issuing new operating criteria, the ROD mandated experimental releases for the purpose of conducting scientific studies. A report released in January 2011 examined the financial implications of the experimental flows that were conducted at the GCD from 1997 to 2005. This report continues the analysis and examines the financial implications of the experimental flows conducted at the GCD from 2006 to 2010. An experimental release may have either a positive or negative impact on the financial value of energy production. This study estimates the financial costs of experimental releases, identifies the main factors that contribute to these costs, and compares the interdependencies among these factors. An integrated set of tools was used to compute the financial impacts of the experimental releases by simulating the operation of the GCD under two scenarios, namely, (1) a baseline scenario that assumes both that operations comply with the ROD operating criteria and the experimental releases that actually took place during the study period, and (2) a 'without experiments' scenario that is identical to the baseline scenario of operations that comply with the GCD ROD, except it assumes that experimental releases did not occur. The Generation and Transmission Maximization (GTMax) model was the main simulation tool used to dispatch GCD and other hydropower plants that comprise the Salt Lake City Area Integrated Projects (SLCA/IP).
Extensive data sets and historical information on SLCA/IP powerplant characteristics, hydrologic conditions, and Western Area Power Administration's (Western's) power purchase prices were used for the simulation. In addition to estimating the financial impact of experimental releases, the GTMax model was also used to gain insights into the interplay among ROD operating criteria, exceptions that were made to criteria to accommodate the experimental releases, and Western operating practices. Experimental releases in some water years resulted in financial benefits to Western while others resulted in financial costs. During the study period, the total financial costs of all experimental releases were more than $4.8 million.
Roelker, Sarah A; Caruthers, Elena J; Baker, Rachel K; Pelz, Nicholas C; Chaudhari, Ajit M W; Siston, Robert A
2017-11-01
With more than 29,000 OpenSim users, several musculoskeletal models with varying levels of complexity are available to study human gait. However, how different model parameters affect estimated joint and muscle function between models is not fully understood. The purpose of this study is to determine the effects of four OpenSim models (Gait2392, Lower Limb Model 2010, Full-Body OpenSim Model, and Full Body Model 2016) on gait mechanics and estimates of muscle forces and activations. Using OpenSim 3.1 and the same experimental data for all models, six young adults were scaled in each model, gait kinematics were reproduced, and static optimization estimated muscle function. Simulated measures differed between models by up to 6.5° knee range of motion, 0.012 Nm/Nm peak knee flexion moment, 0.49 peak rectus femoris activation, and 462 N peak rectus femoris force. Differences in coordinate system definitions between models altered joint kinematics, influencing joint moments. Muscle parameter and joint moment discrepancies altered muscle activations and forces. Additional model complexity yielded greater error between experimental and simulated measures; therefore, this study suggests Gait2392 is a sufficient model for studying walking in healthy young adults. Future research is needed to determine which model(s) is best for tasks with more complex motion.
Cellular signaling identifiability analysis: a case study.
Roper, Ryan T; Pia Saccomani, Maria; Vicini, Paolo
2010-05-21
Two primary purposes for mathematical modeling in cell biology are (1) simulation for making predictions of experimental outcomes and (2) parameter estimation for drawing inferences from experimental data about unobserved aspects of biological systems. While the former purpose has become common in the biological sciences, the latter is less common, particularly when studying cellular and subcellular phenomena such as signaling-the focus of the current study. Data are difficult to obtain at this level. Therefore, even models of only modest complexity can contain parameters for which the available data are insufficient for estimation. In the present study, we use a set of published cellular signaling models to address issues related to global parameter identifiability. That is, we address the following question: assuming known time courses for some model variables, which parameters is it theoretically impossible to estimate, even with continuous, noise-free data? Following an introduction to this problem and its relevance, we perform a full identifiability analysis on a set of cellular signaling models using DAISY (Differential Algebra for the Identifiability of SYstems). We use our analysis to bring to light important issues related to parameter identifiability in ordinary differential equation (ODE) models. We contend that this is, as of yet, an under-appreciated issue in biological modeling and, more particularly, cell biology. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
The Influence of Social Networking Photos on Social Norms and Sexual Health Behaviors
Jordan, Alexander H.
2013-01-01
Two studies tested whether online social networking technologies influence health behavioral social norms, and in turn, personal health behavioral intentions. In Study 1, experimental participants browsed peers' Facebook photos on a college network with a low prevalence of sexually suggestive content. Participants estimated the percentage of their peers who have sex without condoms, and rated their own future intentions to use condoms. Experimental participants, compared to controls who did not view photos, estimated that a larger percentage of their peers use condoms, and indicated a greater intention to use condoms themselves in the future. In Study 2, participants were randomly assigned to view sexually suggestive or nonsexually suggestive Facebook photos, and responded to sexual risk behavioral questions. Compared to participants viewing nonsuggestive photos, those who viewed sexually suggestive Facebook photos estimated that a larger percentage of their peers have unprotected sexual intercourse and sex with strangers and were more likely to report that they themselves would engage in these behaviors. Thus, online social networks can influence perceptions of the peer prevalence of sexual risk behaviors, and can influence users' own intentions with regard to such behaviors. These studies suggest the potential power of social networks to affect health behaviors by altering perceptions of peer norms. PMID:23438268
Application of the Bernoulli enthalpy concept to the study of vortex noise and jet impingement noise
NASA Technical Reports Server (NTRS)
Yates, J. E.
1978-01-01
A complete theory of aeroacoustics of homentropic fluid media is developed and compared with previous theories. The theory is applied to study the interaction of sound with vortex flows, for the DC-9 in a standard take-off configuration. The maximum engine-wake interference noise is estimated to be 3 or 4 db in the ground plane. It is shown that the noise produced by a corotating vortex pair departs significantly from the compact M scaling law for eddy Mach numbers (M) greater than 0.1. An estimate of jet impingement noise is given that is in qualitative agreement with experimental results. The increased noise results primarily from the nonuniform acceleration of turbulent eddies through the stagnation point flow. It is shown that the corotating vortex pair can be excited or de-excited by an externally applied sound field. The model is used to qualitatively explain experimental results on excited jets.
A combined crossed molecular beams and theoretical study of the reaction CN + C2H4
NASA Astrophysics Data System (ADS)
Balucani, Nadia; Leonori, Francesca; Petrucci, Raffaele; Wang, Xingan; Casavecchia, Piergiorgio; Skouteris, Dimitrios; Albernaz, Alessandra F.; Gargano, Ricardo
2015-03-01
The CN + C2H4 reaction has been investigated experimentally, in crossed molecular beam (CMB) experiments at the collision energy of 33.4 kJ/mol, and theoretically, by electronic structure calculations of the relevant potential energy surface and Rice-Ramsperger-Kassel-Marcus (RRKM) estimates of the product branching ratio. Differently from previous CMB experiments at lower collision energies, but similarly to a high energy study, we have some indication that a second reaction channel is open at this collision energy, the characteristics of which are consistent with the channel leading to CH2CHNC + H. The RRKM estimates using M06L electronic structure calculations qualitatively support the experimental observation of C2H3NC formation at this and at the higher collision energy of 42.7 kJ/mol of previous experiments.
Effect of corrosion on the buckling capacity of tubular members
NASA Astrophysics Data System (ADS)
Øyasæter, F. H.; Aeran, A.; Siriwardane, S. C.; Mikkelsen, O.
2017-12-01
Offshore installations are subjected to a harsh marine environment and often suffer corrosion damage. Several experimental and numerical studies have been performed to estimate the buckling capacity of corroded tubular members; however, these studies were based either on limited experimental tests or on numerical analyses of a few cases, resulting in semi-empirical relations. There are also no guidelines or recommendations in currently available design standards. To fill this research gap, a new formula is proposed to estimate the residual strength of tubular members, accounting for corrosion and initial geometric imperfections. The proposed formula is verified against results from finite element analyses performed on several members and for varying corrosion patch parameters. The members are selected to represent the most relevant Eurocode buckling curve for tubular members. It is concluded that corrosion reduces buckling capacity significantly and that the proposed formula can be applied easily by practicing engineers without detailed numerical analyses.
Lerner, J S; Keltner, D
2001-07-01
Drawing on an appraisal-tendency framework (J. S. Lerner & D. Keltner, 2000), the authors predicted and found that fear and anger have opposite effects on risk perception. Whereas fearful people expressed pessimistic risk estimates and risk-averse choices, angry people expressed optimistic risk estimates and risk-seeking choices. These opposing patterns emerged for naturally occurring and experimentally induced fear and anger. Moreover, estimates of angry people more closely resembled those of happy people than those of fearful people. Consistent with predictions, appraisal tendencies accounted for these effects: Appraisals of certainty and control moderated and (in the case of control) mediated the emotion effects. As a complement to studies that link affective valence to judgment outcomes, the present studies highlight multiple benefits of studying specific emotions.
Financial analysis of experimental releases conducted at Glen Canyon Dam during water year 2011
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poch, L. A.; Veselka, T. D.; Palmer, C. S.
2012-07-16
This report examines the financial implications of experimental flows conducted at the Glen Canyon Dam (GCD) in water year 2011. It is the third report in a series examining financial implications of experimental flows conducted since the Record of Decision (ROD) was adopted in February 1997 (Reclamation 1996). A report released in January 2011 examined water years 1997 to 2005 (Veselka et al. 2011), and a report released in August 2011 examined water years 2006 to 2010 (Poch et al. 2011). An experimental release may have either a positive or negative impact on the financial value of energy production. This study estimates the financial costs of experimental releases, identifies the main factors that contribute to these costs, and compares the interdependencies among these factors. An integrated set of tools was used to compute the financial impacts of the experimental releases by simulating the operation of the GCD under two scenarios, namely, (1) a baseline scenario that assumes both that operations comply with the ROD operating criteria and the experimental releases that actually took place during the study period, and (2) a 'without experiments' scenario that is identical to the baseline scenario of operations that comply with the GCD ROD, except it assumes that experimental releases did not occur. The Generation and Transmission Maximization (GTMax) model was the main simulation tool used to dispatch GCD and other hydropower plants that comprise the Salt Lake City Area Integrated Projects (SLCA/IP). Extensive data sets and historical information on SLCA/IP powerplant characteristics, hydrologic conditions, and Western Area Power Administration's (Western's) power purchase prices were used for the simulation.
In addition to estimating the financial impact of experimental releases, the GTMax model was also used to gain insights into the interplay among ROD operating criteria, exceptions that were made to criteria to accommodate the experimental releases, and Western operating practices. Experimental releases conducted in water year 2011 resulted only in financial costs; the total cost of all experimental releases was about $622,000.
75 FR 2197 - Western Pacific Fisheries; Regulatory Restructuring
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-14
Excerpted contents: Sec. 665.16 Vessel identification. Sec. 665.17 Experimental fishing (OMB control nos. 0648-0214, -0490, -0586, and -0589). ... (c) experimental fishing reports, estimated at 4 hours (hr) per reporting action; (d) sales and transshipment...
Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A
2013-06-27
The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian popcorn genotypes, consisting of 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa), and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design was a randomized complete block design with 7 repetitions. The bootstrap method was employed to obtain samples of all possible combinations within the 7 blocks, and confidence intervals of the parameters of interest were then calculated for all simulated data sets. The optimal number of repetitions for a trait was taken to be the smallest number for which all estimates of the parameter in question fell within the confidence interval. The estimated number of repetitions varied with the parameter, the trait evaluated, and the environment, ranging from 2 to 7. Only expansion capacity in the Colégio Agrícola environment (for residual variance and coefficient of variation) and number of ears per plot in the Itaocara environment (for coefficient of variation) required 7 repetitions to fall within the confidence interval. Thus, for the 3 studies conducted, we conclude that 6 repetitions are optimal for obtaining high experimental precision.
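The resampling step can be sketched as follows; the block means and the coverage criterion below are hypothetical, simplified stand-ins for the paper's parameter-wise confidence-interval check:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical plot means for one genotype across the 7 complete blocks
blocks = np.array([31.2, 29.8, 33.5, 30.1, 32.0, 28.9, 31.7])

full_mean = blocks.mean()
half_width = 1.96 * blocks.std(ddof=1) / np.sqrt(blocks.size)   # full-data CI half-width

# For each candidate number of repetitions r, bootstrap-resample r blocks and
# record how often the subsample mean falls inside the full-data interval
coverage = {}
for r in range(2, 8):
    means = np.array([rng.choice(blocks, size=r, replace=True).mean()
                      for _ in range(5000)])
    coverage[r] = np.mean(np.abs(means - full_mean) <= half_width)
    print(f"r={r}: {100 * coverage[r]:.1f}% of resampled means inside the full-data CI")
```

As in the study, the smallest r whose estimates reliably land inside the reference interval would be taken as sufficient; coverage rises monotonically with r here.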
Melanoma Cell Colony Expansion Parameters Revealed by Approximate Bayesian Computation
Vo, Brenda N.; Drovandi, Christopher C.; Pettitt, Anthony N.; Pettet, Graeme J.
2015-01-01
In vitro studies and mathematical models are now being widely used to study the underlying mechanisms driving the expansion of cell colonies. This can improve our understanding of cancer formation and progression. Although much progress has been made in terms of developing and analysing mathematical models, far less progress has been made in terms of understanding how to estimate model parameters using experimental in vitro image-based data. To address this issue, a new approximate Bayesian computation (ABC) algorithm is proposed to estimate key parameters governing the expansion of melanoma cell (MM127) colonies, including cell diffusivity, D, cell proliferation rate, λ, and cell-to-cell adhesion, q, in two experimental scenarios, namely with and without a chemical treatment to suppress cell proliferation. Even when little prior biological knowledge about the parameters is assumed, all parameters are precisely inferred with a small posterior coefficient of variation, approximately 2–12%. The ABC analyses reveal that the posterior distributions of D and q depend on the experimental elapsed time, whereas the posterior distribution of λ does not. The posterior mean values of D are in the ranges 226–268 µm² h⁻¹ and 311–351 µm² h⁻¹, and those of q in the ranges 0.23–0.39 and 0.32–0.61, for the experimental periods of 0–24 h and 24–48 h, respectively. Furthermore, we found that the posterior distribution of q also depends on the initial cell density, whereas the posterior distributions of D and λ do not. The ABC approach also enables information from the two experiments to be combined, resulting in greater precision for all estimates of D and λ. PMID:26642072
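The core of ABC rejection sampling can be illustrated for a single parameter (a proliferation rate standing in for λ); the data, prior range, and tolerance rule below are illustrative assumptions, not the paper's MM127 setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# "Observed" cell counts over 48 h, generated with an assumed true rate
t = np.array([0.0, 12.0, 24.0, 36.0, 48.0])
true_lam = 0.04                                  # per hour (assumption)
obs = 100 * np.exp(true_lam * t)

def simulate(lam):
    """Forward model: exponential colony growth from 100 cells."""
    return 100 * np.exp(lam * t)

# ABC rejection: draw from the prior, keep draws whose simulated output
# lies within a tolerance of the observed data
prior = rng.uniform(0.0, 0.1, 50000)
dist = np.array([np.linalg.norm(simulate(lam) - obs) for lam in prior])
eps = np.quantile(dist, 0.01)                    # keep the closest 1% of draws
posterior = prior[dist <= eps]

cv = posterior.std() / posterior.mean()          # posterior coefficient of variation
print(f"posterior mean {posterior.mean():.4f}, CV {100 * cv:.1f}%")
```

The paper's analysis uses richer image-based summary statistics and a lattice model, but the accept/reject structure and the CV-based precision report are the same idea.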
A practical model for pressure probe system response estimation (with review of existing models)
NASA Astrophysics Data System (ADS)
Hall, B. F.; Povey, T.
2018-04-01
The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk
2009-01-12
An experiment-based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time-domain input/output data were utilized in a gray-box modeling approach. Prior knowledge of the form of the elements of the system transfer function matrix is assumed. Continuous-time transfer function matrix parameters were estimated in real time by the least-squares method. Simulation results using the experimentally determined transfer function matrix compare very well with the experimental results. For comparison, and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.
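The least-squares step can be illustrated for a single first-order transfer function element estimated from sampled step-response data; the process gain, time constant, and sample time are assumptions, not the paper's plant:

```python
import numpy as np

# Simulated step response of a first-order process G(s) = K / (tau*s + 1)
K_true, tau_true, Ts = 2.0, 5.0, 0.1
a_true = np.exp(-Ts / tau_true)          # exact discrete-time pole
b_true = K_true * (1 - a_true)

n = 200
u = np.ones(n)                           # unit step input
y = np.zeros(n)
for k in range(n - 1):
    y[k + 1] = a_true * y[k] + b_true * u[k]

# Least squares on the ARX form y[k+1] = a*y[k] + b*u[k]
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta

# Map back to continuous-time parameters
K_hat = b_hat / (1 - a_hat)
tau_hat = -Ts / np.log(a_hat)
print(f"K = {K_hat:.3f}, tau = {tau_hat:.3f} s")
```

With noiseless data the gain and time constant are recovered exactly; a recursive variant of this regression is what makes the real-time estimation in the paper practical.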
Liang, Yuzhen; Kuo, Dave T F; Allen, Herbert E; Di Toro, Dominic M
2016-10-01
There is concern about the environmental fate and effects of munition constituents (MCs). Polyparameter linear free energy relationships (pp-LFERs) that employ Abraham solute parameters can aid in evaluating the risk of MCs to the environment. However, poor predictions using pp-LFERs and ABSOLV estimated Abraham solute parameters are found for some key physico-chemical properties. In this work, the Abraham solute parameters are determined using experimental partition coefficients in various solvent-water systems. The compounds investigated include hexahydro-1,3,5-trinitro-1,3,5-triazacyclohexane (RDX), octahydro-1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane (HMX), hexahydro-1-nitroso-3,5-dinitro-1,3,5-triazine (MNX), hexahydro-1,3,5-trinitroso-1,3,5-triazine (TNX), hexahydro-1,3-dinitroso-5-nitro-1,3,5-triazine (DNX), 2,4,6-trinitrotoluene (TNT), 1,3,5-trinitrobenzene (TNB), and 4-nitroanisole. The solvents in the solvent-water systems are hexane, dichloromethane, trichloromethane, octanol, and toluene. The only available reported solvent-water partition coefficients are for octanol-water for some of the investigated compounds and they are in good agreement with the experimental measurements from this study. Solvent-water partition coefficients fitted using experimentally derived solute parameters from this study have significantly smaller root mean square errors (RMSE = 0.38) than predictions using ABSOLV estimated solute parameters (RMSE = 3.56) for the investigated compounds. Additionally, the predictions for various physico-chemical properties using the experimentally derived solute parameters agree with available literature reported values with prediction errors within 0.79 log units except for water solubility of RDX and HMX with errors of 1.48 and 2.16 log units respectively. However, predictions using ABSOLV estimated solute parameters have larger prediction errors of up to 7.68 log units.
This large discrepancy is probably due to the missing R2NNO2 and R2NNO functional groups in the ABSOLV fragment database. Copyright © 2016. Published by Elsevier Ltd.
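The pp-LFER form itself is a linear combination of the Abraham solute descriptors. The sketch below uses approximate literature coefficients for the octanol-water system and hypothetical solute descriptors; neither corresponds to the specific compounds studied here:

```python
# Polyparameter LFER: log K = c + e*E + s*S + a*A + b*B + v*V,
# where E, S, A, B, V are the Abraham solute descriptors and
# c, e, s, a, b, v characterise the solvent-water system.
def pp_lfer(E, S, A, B, V, c, e, s, a, b, v):
    return c + e * E + s * S + a * A + b * B + v * V

# Approximate literature coefficients for the octanol-water system
octanol_water = dict(c=0.088, e=0.562, s=-1.054, a=0.034, b=-3.460, v=3.814)

# Hypothetical descriptors for a nitroaromatic-like solute (illustrative only)
logKow = pp_lfer(E=1.0, S=1.7, A=0.0, B=0.5, V=1.4, **octanol_water)
print(f"predicted log Kow = {logKow:.2f}")
```

Fitting the descriptors from measured partition coefficients, as done in the study, amounts to solving this same linear system in the other direction, with one equation per solvent-water system.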
Predictive Surface Complexation Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sverjensky, Dimitri A.
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
EXPERIMENTAL MOLTEN-SALT-FUELED 30-Mw POWER REACTOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, L.G.; Kinyon, B.W.; Lackey, M.E.
1960-03-24
A preliminary design study was made of an experimental molten-salt- fueled power reactor. The reactor considered is a single-region homogeneous burner coupled with a Loeffler steam-generating cycle. Conceptual plant layouts, basic information on the major fuel circuit components, a process flowsheet, and the nuclear characteristics of the core are presented. The design plant electrical output is 10 Mw, and the total construction cost is estimated to be approximately ,000,000. (auth)
Quantum state estimation when qubits are lost: a no-data-left-behind approach
Williams, Brian P.; Lougovski, Pavel
2017-04-06
We present an approach to Bayesian mean estimation of quantum states using hyperspherical parametrization and an experiment-specific likelihood which allows utilization of all available data, even when qubits are lost. With this method, we report the first closed-form Bayesian mean and maximum likelihood estimates for the ideal single qubit. Due to computational constraints, we utilize numerical sampling to determine the Bayesian mean estimate for a photonic two-qubit experiment in which our novel analysis reduces burdens associated with experimental asymmetries and inefficiencies. This method can be applied to quantum states of any dimension and experimental complexity.
Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal
2017-12-01
Mathematical modelling is fundamental to understanding the dynamic behavior and regulation of the biochemical metabolisms and pathways found in biological systems. Pathways describe complex processes that involve many parameters, and it is important to have an accurate and complete set of parameters describing the characteristics of a given model. However, measuring these parameters is typically difficult and in some cases impossible. Furthermore, the experimental data are often incomplete and suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that represent the actual biological processes. Computational approaches are required to estimate these parameters. The estimation is converted into a multimodal optimization problem that requires a global optimization algorithm able to avoid local solutions, which can lead to a poor fit when calibrating the model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGSA) that improves the efficiency of the search for the global optimum (the best set of kinetic parameter values). The findings suggest that the proposed algorithm is capable of narrowing the search space by exploiting feasible solution areas, and is thus able to reach a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated on two aspartate pathways obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation.
Nevertheless, the proposed algorithm is only expected to work well in small scale systems. In addition, the results of this study can be used to estimate kinetic parameter values in the stage of model selection for different experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
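The core of such a hybrid can be sketched as follows. This is an illustrative PSO-GSA update on a toy objective, not the authors' IPSOGSA: the function names, the coefficients (inertia 0.7, acceleration weights 1.5, gravity decay 0.98) and the sphere objective are all assumptions made for the sketch.

```python
import math
import random

def sphere(x):
    # Toy objective standing in for the sum of squared residuals
    # between simulated and measured metabolite time courses.
    return sum(xi * xi for xi in x)

def pso_gsa(obj, dim=2, n=20, iters=200, seed=1):
    """Minimal PSO-GSA hybrid: a GSA-style gravitational acceleration
    term plus a PSO-style pull toward the global best (hypothetical
    coefficient choices throughout)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    gbest = min(X, key=obj)[:]
    for t in range(iters):
        fits = [obj(x) for x in X]
        if min(fits) < obj(gbest):
            gbest = X[fits.index(min(fits))][:]
        best, worst = min(fits), max(fits)
        span = (worst - best) or 1e-12
        m = [(worst - f) / span for f in fits]     # better agents are heavier
        total = sum(m) or 1e-12
        m = [mi / total for mi in m]
        G = 0.98 ** t                              # decaying gravitational constant
        for i in range(n):
            acc = [0.0] * dim
            for j in range(n):
                if i == j:
                    continue
                R = math.dist(X[i], X[j]) + 1e-9
                for d in range(dim):
                    acc[d] += rng.random() * G * m[j] * (X[j][d] - X[i][d]) / R
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * acc[d]                 # GSA term
                           + 1.5 * rng.random() * (gbest[d] - X[i][d]))  # PSO term
                X[i][d] += V[i][d]
    return gbest

best = pso_gsa(sphere)
```

In a real calibration task the objective would be the deviation between simulated and experimental data for a candidate kinetic parameter vector, evaluated through the pathway model.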
2012-07-01
Final Report, 12 July 2012 (reporting period 1 October 2008 – 31 January 2012): Experimental Studies on the Effects of Thermal Bumps in the Flow-Field around a
Estimating the SCS runoff curve number in forest catchments of Korea
NASA Astrophysics Data System (ADS)
Choi, Hyung Tae; Kim, Jaehoon; Lim, Hong-geun
2016-04-01
Estimating flood runoff discharge is very important in the design of hydraulic structures in streams, rivers and lakes, such as dams, bridges and culverts, so many researchers have tried to develop better methods for estimating it. The SCS runoff curve number is an empirical parameter determined by analysis of runoff from small catchments and hillslope plots monitored by the USDA. The method is an efficient way of determining the approximate amount of runoff from a rainfall event in a particular area and is very widely used around the world. However, Korea and the USA differ considerably in topography, geology and land use, so the adaptability of the SCS runoff curve number must be examined to raise the accuracy of runoff prediction with this method. The purpose of this study is to find SCS runoff curve numbers based on analysis of observed data from several experimental forest catchments monitored by the National Institute of Forest Science (NIFOS), as a pilot study toward modifying the SCS runoff curve number for forest lands in Korea. Rainfall and runoff records observed in the Gwangneung coniferous and broadleaf forests and the Sinwol, Hwasoon, Gongju and Gyeongsan catchments were selected to analyze the variability of flood runoff coefficients over the last 5 years. This study shows that runoff curve numbers of the experimental forest catchments range from 55 to 65. Because the SCS runoff curve number method is widely used to estimate design discharge for small ungauged watersheds, this study can help estimate discharge for forest watersheds in Korea more accurately.
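For reference, the runoff depth implied by a given curve number follows from the standard SCS relations. A minimal sketch in metric units, with the usual handbook assumption that the initial abstraction is Ia = 0.2S (the CN and rainfall values in the example are hypothetical):

```python
def scs_runoff(p_mm, cn):
    """Direct runoff depth Q (mm) from event rainfall P (mm) via the
    SCS curve number method, assuming the standard initial abstraction
    Ia = 0.2 * S."""
    s = 25400.0 / cn - 254.0      # potential maximum retention, mm
    ia = 0.2 * s                  # initial abstraction, mm
    if p_mm <= ia:
        return 0.0                # all rainfall abstracted; no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# e.g., a 100 mm storm on a forest catchment with CN = 60
q = scs_runoff(100.0, 60)
```

With CN around 55-65, as found for the Korean forest catchments here, most of a moderate storm is retained rather than converted to direct runoff.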
Emissions of biogenic volatile organic compounds (BVOC) observed during 2007 from a Pinus taeda experimental plantation in Central North Carolina are compared with model estimates from MEGAN 2.1. Relaxed Eddy Accumulation (REA) estimates of 2-methyl-3-buten-2-ol (MBO) fluxes are ...
Thoracic and respirable particle definitions for human health risk assessment.
Brown, James S; Gordon, Terry; Price, Owen; Asgharian, Bahman
2013-04-10
Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fractions of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities, with both nasal and oral inhalation, that may be used in the design of experimental studies and the interpretation of health effects evidence. We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and the ciliated airways (based on a mathematical model) for an adult male, an adult female, and a 10-yr-old child during typical daily activities and breathing patterns. Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria suggest. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on the route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. By design, current size-selective sampling criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract in order to provide protection for individuals who may breathe orally.
We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects.
Thoracic and respirable particle definitions for human health risk assessment
2013-01-01
Background Particle size-selective sampling refers to the collection of particles of varying sizes that potentially reach and adversely affect specific regions of the respiratory tract. Thoracic and respirable fractions are defined as the fractions of inhaled particles capable of passing beyond the larynx and ciliated airways, respectively, during inhalation. In an attempt to afford greater protection to exposed individuals, current size-selective sampling criteria overestimate the population means of particle penetration into regions of the lower respiratory tract. The purpose of our analyses was to provide estimates of the thoracic and respirable fractions for adults and children during typical activities, with both nasal and oral inhalation, that may be used in the design of experimental studies and the interpretation of health effects evidence. Methods We estimated the fraction of inhaled particles (0.5-20 μm aerodynamic diameter) penetrating beyond the larynx (based on experimental data) and the ciliated airways (based on a mathematical model) for an adult male, an adult female, and a 10-yr-old child during typical daily activities and breathing patterns. Results Our estimates show less penetration of coarse particulate matter into the thoracic and gas exchange regions of the respiratory tract than current size-selective criteria suggest. Of the parameters we evaluated, particle penetration into the lower respiratory tract was most dependent on the route of breathing. For typical activity levels and breathing habits, we estimated a 50% cut-size for the thoracic fraction at an aerodynamic diameter of around 3 μm in adults and 5 μm in children, whereas current ambient and occupational criteria suggest a 50% cut-size of 10 μm. Conclusions By design, current size-selective sampling criteria overestimate the mass of particles generally expected to penetrate into the lower respiratory tract in order to provide protection for individuals who may breathe orally.
We provide estimates of thoracic and respirable fractions for a variety of breathing habits and activities that may benefit the design of experimental studies and interpretation of particle size-specific health effects. PMID:23575443
NASA Astrophysics Data System (ADS)
Fitton, N.; Datta, A.; Hastings, A.; Kuhnert, M.; Topp, C. F. E.; Cloy, J. M.; Rees, R. M.; Cardenas, L. M.; Williams, J. R.; Smith, K.; Chadwick, D.; Smith, P.
2014-09-01
The United Kingdom currently reports nitrous oxide emissions from agriculture using the IPCC default Tier 1 methodology. However, Tier 1 estimates have a large degree of uncertainty because they do not account for spatial variation in emissions, so biogeochemical models such as DailyDayCent (DDC) are increasingly being used to provide spatially disaggregated assessments of annual emissions. Before use, the model's ability to predict annual emissions should be assessed, together with an analysis of how model inputs influence model outputs and of whether the modelled estimates are more robust than those derived from the Tier 1 methodology. The aims of the study were (a) to evaluate whether the DailyDayCent model can accurately estimate annual N2O emissions across nine different experimental sites, (b) to examine its sensitivity to different soil and climate inputs across a number of experimental sites and (c) to examine the influence of uncertainty in the measured inputs on modelled N2O emissions. DailyDayCent performed well across the range of cropland and grassland sites, particularly for fertilized fields, indicating that it is robust for UK conditions. The sensitivity of the model varied across the sites and also between fertilizer/manure treatments. Overall, our results showed that N2O emissions were more sensitive to changes in soil pH and clay content than to the remaining input parameters used in this study. The lower the initial site values for soil pH and clay content, the more sensitive DDC was to changes from their initial values. When we compared modelled estimates with Tier 1 estimates for each site, we found that DailyDayCent provided a more accurate representation of the rate of annual emissions.
Murado, M A; Prieto, M A
2013-09-01
NOEC and LOEC (no- and lowest-observed-effect concentrations, respectively) are toxicological concepts derived from analysis of variance (ANOVA), a not very sensitive method that produces ambiguous results and does not provide confidence intervals (CI) for its estimates. Despite the abundant criticism these concepts have raised, the field of ecotoxicology has long been reluctant to abandon them (two possible reasons will be discussed), adducing the difficulty of finding clear alternatives. However, this work proves that a debugged dose-response (DR) modeling approach, through explicit algebraic equations, enables two simple options for accurately calculating the CI of doses substantially lower than the NOEC. Both ANOVA and DR analyses are affected by experimental error, response profile, number of observations and experimental design. The study of these effects, analytically complex and experimentally unfeasible, was carried out using systematic simulations with realistic data, including different error levels. The results revealed the weakness of the NOEC and LOEC notions, confirmed the feasibility of the proposed alternatives and allowed discussion of the often-violated conditions that minimize the CI of the parametric estimates from DR assays. In addition, a table was developed providing the experimental design that minimizes the parametric CI for a given set of working conditions. This makes it possible to reduce the experimental effort and to avoid the inconclusive results frequently obtained from intuitive experimental plans. Copyright © 2013 Elsevier B.V. All rights reserved.
Using Reading as an Automated Learning Tool
ERIC Educational Resources Information Center
Ruiz Fodor, Ana
2017-01-01
The problem addressed in this quantitative experimental study was that students were having more difficulty learning from audiovisual lessons than necessary because educators had eliminated textual references, based on early findings from CLT research. In more recent studies, CLT researchers estimated that long-term memory schemas may be used by…
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for the coarse estimate (locating the peak of the FFT amplitude spectrum). The proposed algorithm therefore requires fewer hardware and software resources and becomes even more efficient as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of the frequency estimate is below 0.032 Hz, and the proposed algorithm has lower computational complexity and better global performance than conventional frequency estimation methods.
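A basic zero-crossing coarse estimate of the kind the abstract alludes to can be sketched as below. This is a generic illustration with hypothetical parameter choices, not the paper's modified technique, and the fine estimation stage is omitted:

```python
import math

def coarse_freq(samples, fs):
    """Coarse frequency estimate from linearly interpolated zero
    crossings of a roughly sinusoidal signal sampled at fs Hz."""
    times = []
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a == 0.0 or a * b >= 0.0:
            continue
        # linear interpolation of the crossing instant between samples
        times.append((i - 1 + a / (a - b)) / fs)
    if len(times) < 2:
        return 0.0
    # (len(times) - 1) half-periods span times[-1] - times[0] seconds
    return (len(times) - 1) / (2.0 * (times[-1] - times[0]))

fs, f0 = 5000.0, 50.5                 # hypothetical sampling rate and tone
x = [math.sin(2 * math.pi * f0 * n / fs) for n in range(5000)]
f_est = coarse_freq(x, fs)
```

In a two-stage scheme, this coarse value would seed a fine search (e.g., around the FFT peak) rather than serve as the final estimate.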
Estimation of Electron Bernstein Emission in the TJ-II Stellarator
NASA Astrophysics Data System (ADS)
García-Regaña, J. M.; Cappa, A.; Castejón, F.; Tereshchenko, M.
2009-05-01
In order to study experimentally the viability of first-harmonic EBW heating in the TJ-II stellarator by means of the O-X-B technique [1], an EBE diagnostic was recently installed [2]. In the present work, a theoretical estimation of the EBW radiation in TJ-II plasmas has been carried out using the ray tracing code TRUBA [3]. The line of sight of the EBE diagnostic can be modified with an internal movable mirror; therefore, for comparison with the experimental results, the theoretical O-X-B emission window has been determined. Experimental density and temperature profiles obtained in NBI discharges are used in the simulations. References: [1] F. Castejon et al., Nucl. Fusion 48, 075011 (2008). [2] J. Caughman et al., Proc. 15th Joint Workshop on ECE and ECRH, Yosemite, USA (2008). [3] M. A. Tereshchenko et al., Proc. 30th EPS Conference on Contr. Fusion and Plasma Phys., 27A, P-1.18 (2003).
The CaO orange system in meteor spectra
NASA Astrophysics Data System (ADS)
Berezhnoy, A. A.; Borovička, J.; Santos, J.; Rivas-Silva, J. F.; Sandoval, L.; Stolyarov, A. V.; Palma, A.
2018-02-01
The CaO orange band system was simulated in the region 5900-6300 Å and compared with the experimentally observed spectra of the Benešov bolide wake. The required vibronic Einstein emission coefficients were estimated from experimental radiative lifetimes under the simplest Franck-Condon approximation. Moderate agreement was achieved; the largest uncertainties come from modeling the shape of the FeO orange bands. Using a simple model, the CaO column density in the wake of the Benešov bolide at a height of 29 km was estimated as (5 ± 2) × 10^14 cm^-2 by comparing the present CaO spectra with the AlO bands clearly observed at 4600-5200 Å in the same spectrum. The obtained CaO content is in good agreement with the quenching model developed for the impact-produced cloud, although future theoretical and experimental studies of the contributions of both the CaO and FeO orange systems are needed to confirm these results.
Alici, Gursel; Canty, Taylor; Mutlu, Rahim; Hu, Weiping; Sencadas, Vitor
2018-02-01
In this article, we establish an analytical model to estimate the quasi-static bending displacement (i.e., angle) of pneumatic actuators made of two different elastomeric silicones (Elastosil M4601, with an experimentally determined bulk modulus of elasticity of 262 kPa, and Translucent Soft silicone, 48 kPa) and of discrete chambers, partially separated from each other with gaps in between to increase the magnitude of the bending angle. The numerical bending angle results from the proposed gray-box model and the corresponding experimental results match well enough that the model can predict the bending behavior of this class of pneumatic soft actuators. Further, using the experimental bending angle and blocking force results, the effective modulus of elasticity of the actuators is estimated from a blocking force model. The numerical and experimental results presented show that the bending angle and blocking force models are valid for this class of pneumatic actuators. Another contribution of this study is the incorporation of a bistable flexible thin metal, typified by a tape measure, into the topology of the actuators to prevent their deflection under their own weight when operating in the vertical plane.
2012-09-01
make end-of-life (EOL) and remaining-useful-life (RUL) estimations. Model-based prognostics approaches perform these tasks with the help of first... [flow-chart residue: degradation modeling, parameter estimation and prediction from experimental data under thermal/electrical stress, using a state-space model to obtain RUL and EOL] ...distribution at a given single time point, and use this for multi-step predictions to EOL. There are several methods that exist for selecting the sigma
William T. Simpson
2005-01-01
To use small-diameter trees effectively as square timbers, we need to be able to estimate how long these timbers take to air-dry. Since experimental data for estimating the air-drying time of small-diameter logs have been developed, this study looked at a way to relate that method to square timbers. Drying times were determined for a group of round cross-...
Experimental Estimation of Entanglement at the Quantum Limit
NASA Astrophysics Data System (ADS)
Brida, Giorgio; Degiovanni, Ivo Pietro; Florio, Angela; Genovese, Marco; Giorda, Paolo; Meda, Alice; Paris, Matteo G. A.; Shurupov, Alexander
2010-03-01
Entanglement is the central resource of quantum information processing, and the precise characterization of entangled states is a crucial issue for the development of quantum technologies. This leads to the need for a precise, experimentally feasible measure of entanglement. Such measurements are nevertheless limited both by experimental uncertainties and by intrinsic quantum bounds. Here we present an experiment in which the amount of entanglement of a family of two-qubit mixed photon states is estimated with the ultimate precision allowed by quantum mechanics.
The price elasticity of demand for heroin: matched longitudinal and experimental evidence
Olmstead, Todd A.; Alessi, Sheila M.; Kline, Brendan; Pacula, Rosalie Liccardo; Petry, Nancy M.
2015-01-01
This paper reports estimates of the price elasticity of demand for heroin based on a newly constructed dataset. The dataset has two matched components concerning the same sample of regular heroin users: longitudinal information about real-world heroin demand (actual price and actual quantity at daily intervals for each heroin user in the sample) and experimental information about laboratory heroin demand (elicited by presenting the same heroin users with scenarios in a laboratory setting). Two empirical strategies are used to estimate the price elasticity of demand for heroin. The first strategy exploits the idiosyncratic variation in the price experienced by a heroin user over time that occurs in markets for illegal drugs. The second strategy exploits the experimentally-induced variation in price experienced by a heroin user across experimental scenarios. Both empirical strategies result in the estimate that the conditional price elasticity of demand for heroin is approximately −0.80. PMID:25702687
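In a constant-elasticity demand model, the elasticity is the OLS slope of log quantity on log price. The sketch below is purely illustrative, with synthetic data constructed to have elasticity -0.8; the paper's estimates come from matched panel and laboratory data with richer econometric controls:

```python
import math

def log_log_elasticity(prices, quantities):
    """OLS slope of ln(quantity) on ln(price): the price elasticity in
    a constant-elasticity demand specification."""
    xs = [math.log(p) for p in prices]
    ys = [math.log(q) for q in quantities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# hypothetical demand data generated with true elasticity -0.8
prices = [5.0, 8.0, 10.0, 15.0, 20.0]
quantities = [100.0 * p ** -0.8 for p in prices]
elasticity = log_log_elasticity(prices, quantities)
```

With real data, price variation must be plausibly exogenous (here supplied by idiosyncratic market fluctuations or experimentally induced scenarios) for the slope to be interpreted as an elasticity.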
Dawson, Ree; Lavori, Philip W
2012-01-01
Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because often there is no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy and its standard error are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominately reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.
Stated Choice design comparison in a developing country: recall and attribute nonattendance
2014-01-01
Background Experimental designs constitute a vital component of all Stated Choice (aka discrete choice experiment) studies. However, there exists limited empirical evaluation of the statistical benefits of Stated Choice (SC) experimental designs that employ non-zero prior estimates in constructing non-orthogonal constrained designs. This paper statistically compares the performance of contrasting SC experimental designs. In so doing, the effect of respondent literacy on patterns of Attribute non-Attendance (ANA) across fractional factorial orthogonal and efficient designs is also evaluated. The study uses a ‘real’ SC design to model consumer choice of primary health care providers in rural north India. A total of 623 respondents were sampled across four villages in Uttar Pradesh, India. Methods Comparison of orthogonal and efficient SC experimental designs is based on several measures. Appropriate comparison of each design’s respective efficiency measure is made using D-error results. Standardised Akaike Information Criteria are compared between designs and across recall periods. Comparisons control for stated and inferred ANA. Coefficient and standard error estimates are also compared. Results The added complexity of the efficient SC design, theorised elsewhere, is reflected in higher estimated amounts of ANA among illiterate respondents. However, controlling for ANA using stated and inferred methods consistently shows that the efficient design performs statistically better. Modelling SC data from the orthogonal and efficient design shows that model-fit of the efficient design outperform the orthogonal design when using a 14-day recall period. The performance of the orthogonal design, with respect to standardised AIC model-fit, is better when longer recall periods of 30-days, 6-months and 12-months are used. 
Conclusions The effect of the efficient design’s cognitive demand is apparent among both literate and illiterate respondents, although it is more pronounced among illiterate respondents. This study empirically confirms that relaxing the orthogonality constraint of SC experimental designs increases the information collected in choice tasks, subject to the accuracy of the non-zero priors in the design and the correct specification of a ‘real’ SC recall period. PMID:25386388
Estimating Divergence Dates and Substitution Rates in the Drosophila Phylogeny
Obbard, Darren J.; Maclennan, John; Kim, Kang-Wook; Rambaut, Andrew; O’Grady, Patrick M.; Jiggins, Francis M.
2012-01-01
An absolute timescale for evolution is essential if we are to associate evolutionary phenomena, such as adaptation or speciation, with potential causes, such as geological activity or climatic change. Timescales in most phylogenetic studies use geologically dated fossils or phylogeographic events as calibration points, but more recently, it has also become possible to use experimentally derived estimates of the mutation rate as a proxy for substitution rates. The large radiation of drosophilid taxa endemic to the Hawaiian islands has provided multiple calibration points for the Drosophila phylogeny, thanks to the "conveyor belt" process by which this archipelago forms and is colonized by species. However, published date estimates for key nodes in the Drosophila phylogeny vary widely, and many are based on simplistic models of colonization and coalescence or on estimates of island age that are not current. In this study, we use new sequence data from seven species of Hawaiian Drosophila to examine a range of explicit coalescent models and estimate substitution rates. We use these rates, along with a published experimentally determined mutation rate, to date key events in drosophilid evolution. Surprisingly, our estimate for the date for the most recent common ancestor of the genus Drosophila based on mutation rate (25–40 Ma) is closer to being compatible with independent fossil-derived dates (20–50 Ma) than are most of the Hawaiian-calibration models and also has smaller uncertainty. We find that Hawaiian-calibrated dates are extremely sensitive to model choice and give rise to point estimates that range between 26 and 192 Ma, depending on the details of the model. Potential problems with the Hawaiian calibration may arise from systematic variation in the molecular clock due to the long generation time of Hawaiian Drosophila compared with other Drosophila and/or uncertainty in linking island formation dates with colonization dates. 
As either source of error will bias estimates of divergence time, we suggest mutation rate estimates be used until better models are available. PMID:22683811
A linear least squares approach for evaluation of crack tip stress field parameters using DIC
NASA Astrophysics Data System (ADS)
Harilal, R.; Vyasarayani, C. P.; Ramji, M.
2015-12-01
In the present work, an experimental study is carried out to estimate the mixed-mode stress intensity factors (SIFs) for different cracked specimen configurations using the digital image correlation (DIC) technique. For the estimation of mixed-mode SIFs using DIC, a new algorithm is proposed for extracting the crack tip location and the coefficients of the multi-parameter displacement field equations; the SIFs are then obtained from those estimated coefficients. The displacement data surrounding the crack tip were obtained using the 2D-DIC technique, with the open-source 2D DIC software Ncorr used for displacement field extraction. The methodology has been applied to extract mixed-mode SIFs for specimen configurations such as single edge notch (SEN) and centre slant crack (CSC) specimens made of Al 2014-T6 alloy. The experimental results have been compared with analytical values and are found to be in good agreement, confirming the accuracy of the proposed algorithm.
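The coefficient-extraction step reduces to an overdetermined linear least-squares problem: displacement samples at many points are fitted to a linear combination of known basis functions. A generic sketch via the normal equations is given below; the actual Williams-series basis and crack-tip geometry handling are not reproduced, and the two-term radial basis in the example is hypothetical:

```python
import math

def lstsq(A, b):
    """Solve the normal equations (A^T A) c = A^T b by Gaussian
    elimination with partial pivoting: the linear least-squares step
    used to extract field coefficients from overdetermined data."""
    n = len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
         for i in range(n)]
    v = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    for i in range(n):                       # forward elimination
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
            v[r] -= f * v[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # back substitution
        x[i] = (v[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# hypothetical two-term radial basis u(r) ~ c1*sqrt(r) + c2*r,
# loosely echoing the leading terms of a crack-tip series
rs = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
A = [[math.sqrt(r), r] for r in rs]
u = [2.0 * math.sqrt(r) + 0.3 * r for r in rs]
c = lstsq(A, u)
```

In the SIF application, the leading square-root coefficient is the one proportional to the stress intensity factor, so recovering it accurately from noisy DIC fields is the whole point of the least-squares formulation.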
NASA Astrophysics Data System (ADS)
Cardona, Javier Fernando; García Bonilla, Alba Carolina; Tomás García, Rogelio
2017-11-01
This article shows that the effect of all quadrupole errors present in an interaction region with low β* can be modeled by an equivalent magnetic kick, which can be estimated from action and phase jumps found in beam position data. This equivalent kick is used to find the strengths that certain normal and skew quadrupoles located in the IR must have to make an effective correction in that region. Additionally, averaging techniques that reduce noise in beam position data, allowing precise estimates of equivalent kicks, are presented and mathematically justified. The complete procedure is tested with simulated data obtained from madx and with 2015 LHC experimental data. The analyses performed on the experimental data indicate that the strengths of the IR skew quadrupole correctors and normal quadrupole correctors can be estimated within a 10% uncertainty. Finally, the effect of IR corrections on β* is studied, and a correction scheme that returns this parameter to its design value is proposed.
Hadronic production of the P-wave excited B_c states (B*_{cJ,L=1})
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C.-H.; Institute of Theoretical Physics, Chinese Academy of Sciences, P.O. Box 2735, Beijing 100080; Wang, J.-X.
2004-12-01
Adopting the complete α_s^4 approach of perturbative QCD and updated parton distribution functions, we have estimated the hadronic production of the P-wave excited B_c states (B*_{cJ,L=1}). In the estimate, special care has been paid to the dependence of the production amplitude on the derivative of the wave function at the origin, which is obtained from the potential model. For experimental reference, the main theoretical uncertainties are discussed, and the total cross section as well as the distributions of the production with reasonable cuts at Tevatron and CERN LHC energies are computed and presented. The results show that the P-wave production may contribute to B_c-meson production indirectly by a factor of about 0.5 of the direct production, and, according to the estimated cross section, it is worthwhile to study the possibility of observing the P-wave production itself experimentally.
NASA Astrophysics Data System (ADS)
Mikhailovna Smolenskaya, Natalia; Vladimirovich Smolenskii, Victor; Vladimirovich Korneev, Nicholas
2018-02-01
This work is devoted to the substantiation and practical implementation of a new approach for estimating the change in internal energy from pressure and volume. The pressure is measured with a calibrated sensor, and the change in volume inside the cylinder is determined from the position of the piston, which is in turn precisely determined by the angle of rotation of the crankshaft. On the basis of the proposed approach, the thermodynamic efficiency of the working process of spark-ignition engines running on natural gas with the addition of hydrogen was estimated. Experimental studies were carried out on a single-cylinder UIT-85 unit. Their analysis showed an increase in the thermodynamic efficiency of the working process when hydrogen is added to compressed natural gas (CNG). The results obtained make it possible to determine the heat release characteristic from the analysis of experimental data, and the effect of hydrogen addition on the CNG combustion process is estimated.
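Under a single-zone ideal-gas assumption, the change in internal energy follows directly from measured pressure and volume, since U = pV/(γ - 1). A minimal sketch of this step (the γ value and the numbers in the example are assumptions for illustration, not values from the paper):

```python
def delta_internal_energy(p1, v1, p2, v2, gamma=1.3):
    """Change in internal energy (J) between two crank-angle points from
    in-cylinder pressure (Pa) and volume (m^3), assuming a single-zone
    ideal gas with constant ratio of specific heats: U = p*V/(gamma - 1)."""
    return (p2 * v2 - p1 * v1) / (gamma - 1.0)

# e.g., compression from 1 bar, 500 cm^3 to 30 bar, 50 cm^3
dU = delta_internal_energy(1.0e5, 5.0e-4, 3.0e6, 5.0e-5)
```

Evaluated step by step over the crank-angle record, such ΔU values, combined with the p dV work term, yield the heat release characteristic mentioned in the abstract.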
NASA Astrophysics Data System (ADS)
Cunha, J. S.; Cavalcante, F. R.; Souza, S. O.; Souza, D. N.; Santos, W. S.; Carvalho Júnior, A. B.
2017-11-01
One of the main criteria that must be met in total body irradiation (TBI) is uniformity of dose in the body. In TBI procedures, verification that the prescribed doses are absorbed by the organs is made with dosimeters positioned on the patient's skin. In this work, we modelled TBI scenarios in the MCNPX code to estimate the entrance dose rate in the skin for comparison and validation of the simulations against experimental measurements from the literature. Dose rates were estimated by simulating an ionization chamber laterally positioned on the thorax, abdomen, leg and thigh. Four exposure scenarios were simulated: the ionization chamber alone (S1), the TBI room (S2), and the patient represented by a hybrid phantom (S3) or a stylized water phantom (S4) in a sitting posture. The posture of the patient in the experimental work was better represented by S4 than by the hybrid phantom, which led to minimum and maximum percentage differences from the experimental measurements of 1.31% and 6.25% for the thorax and thigh regions, respectively. Since the percentage differences in the estimated dose rates were less than 10% for all simulations reported here, we consider the obtained results consistent with the experimental measurements and the modelled scenarios suitable for estimating the absorbed dose in organs during TBI procedures.
Machado, G D.C.; Paiva, L M.C.; Pinto, G F.; Oestreicher, E G.
2001-03-08
The Enantiomeric Ratio (E) of an enzyme acting as a specific catalyst in the resolution of enantiomers is an important parameter in the quantitative description of these chiral resolution processes. In the present work, two novel methods, hereby called Methods I and II, for estimating E and the kinetic parameters Km and Vm of the enantiomers were developed. These methods are based upon initial rate (v) measurements using different concentrations of enantiomeric mixtures (C) with several molar fractions of the substrate (x). Both methods were tested using simulated "experimental data" and actual experimental data. Method I is easier to use than Method II but requires that one of the enantiomers be available in pure form. Method II, besides not requiring the enantiomers in pure form, showed better results, as indicated by the magnitude of the standard errors of the estimates. The theoretical predictions were experimentally confirmed using the oxidation of 2-butanol and 2-pentanol catalyzed by Thermoanaerobium brockii alcohol dehydrogenase as reaction models. The parameters E, Km and Vm were estimated by Methods I and II with precision and were not significantly different from those obtained experimentally by direct estimation of E from the kinetic parameters of each enantiomer available in pure form.
Dynamic modulus estimation and structural vibration analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, A.
1998-11-18
Often the dynamic elastic modulus of a material with frequency-dependent properties is difficult to estimate, and these uncertainties are compounded in any structural vibration analysis that uses the material properties. Here, different experimental techniques are used to estimate the properties of a particular elastomeric material over a broad frequency range. Once the properties are determined, various structures incorporating the elastomer are analyzed by an interactive finite element method to determine natural frequencies and mode shapes. The finite element results are then correlated with results obtained by experimental modal analysis.
NASA Astrophysics Data System (ADS)
Zhu, Meng-Hua; Liu, Liang-Gang; You, Zhong; Xu, Ao-Ao
2009-03-01
In this paper, a heuristic approach based on Slavic's peak-searching method is employed to estimate the width of peak regions for background removal. Synthetic and experimental data are used to test this method. With the peak regions across the whole spectrum estimated using the proposed method, we find it simple and effective enough to be used together with the Statistics-sensitive Nonlinear Iterative Peak-Clipping (SNIP) method.
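A minimal sketch of the peak-clipping idea behind SNIP-style background estimation (the window schedule and the synthetic spectrum below are illustrative assumptions, not the parameters of the study):

```python
import numpy as np

def snip_background(spectrum, iterations=24):
    """Iteratively clip peaks to leave a smooth baseline estimate.

    At iteration p, each channel is replaced by the minimum of its current
    value and the mean of its two neighbours p channels away; peaks are
    progressively flattened while the slowly varying background survives.
    """
    y = spectrum.astype(float).copy()
    n = len(y)
    for p in range(1, iterations + 1):
        clipped = y.copy()
        for i in range(p, n - p):
            clipped[i] = min(y[i], 0.5 * (y[i - p] + y[i + p]))
        y = clipped
    return y
```

On a synthetic spectrum (flat baseline plus a narrow Gaussian peak), the returned background stays close to the baseline under the peak, which is what makes a prior estimate of peak-region width valuable: it tells the clipping stage how wide its windows must grow.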
Blowers, Paul; Hollingshead, Kyle
2009-05-21
In this work, the global warming potential (GWP) of methylene fluoride (CH2F2), or HFC-32, is estimated through computational chemistry methods. We find that our computational chemistry approach reproduces well all of the phenomena important for predicting global warming potentials. Geometries predicted using the B3LYP/6-311g** method were in good agreement with experiment, although some other computational methods performed slightly better. Frequencies needed for both partition function calculations in transition-state theory and infrared intensities needed for radiative forcing estimates agreed well with experiment compared to other computational methods. A modified CBS-RAD method used to obtain energies led to results superior to all previous heat-of-reaction estimates and most barrier-height calculations when the B3LYP/6-311g** optimized geometry was used as the base structure. Use of the small-curvature tunneling correction and a hindered-rotor treatment where appropriate led to accurate reaction rate constants and radiative forcing estimates without requiring any experimental data. Atmospheric lifetimes from theory at 277 K were indistinguishable from experimental results, as were the final global warming potentials compared to experiment. This is the first time entirely computational methods have been applied to estimate a global warming potential for a chemical, and we have found the approach to be robust, inexpensive, and accurate compared to prior experimental results. This methodology was subsequently used to estimate GWPs for three additional species [methane (CH4); fluoromethane (CH3F), or HFC-41; and fluoroform (CHF3), or HFC-23], where the estimates also compare favorably to experimental values.
A Statistical Guide to the Design of Deep Mutational Scanning Experiments
Matuszewski, Sebastian; Hildebrandt, Marcel E.; Ghenu, Ana-Hermina; Jensen, Jeffrey D.; Bank, Claudia
2016-01-01
The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. PMID:27412710
Silverman, Arielle M; Pitonyak, Jennifer S; Nelson, Ian K; Matsuda, Patricia N; Kartin, Deborah; Molton, Ivan R
2018-05-01
To develop and test a novel impairment simulation activity to teach beginning rehabilitation students how people adapt to physical impairments. Master of Occupational Therapy students (n = 14) and Doctor of Physical Therapy students (n = 18) completed the study during the first month of their program. Students were randomized to the experimental or control learning activity. Experimental students learned to perform simple tasks while simulating paraplegia and hemiplegia. Control students viewed videos of others completing tasks with these impairments. Before and after the learning activities, all students estimated average self-perceived health, life satisfaction, and depression rates among people with paraplegia and hemiplegia. Experimental students increased their estimates of self-perceived health, and decreased their estimates of depression rates, among people with paraplegia and hemiplegia after the learning activity. The control activity had no effect on these estimates. Impairment simulation can be an effective way to teach rehabilitation students about the adaptations that people make to physical impairments. Positive impairment simulations should allow students to experience success in completing activities of daily living with impairments. Impairment simulation is complementary to other pedagogical methods, such as simulated clinical encounters using standardized patients. Implications for Rehabilitation: It is important for rehabilitation students to learn how people live well with disabilities. Impairment simulations can improve students' assessments of quality of life with disabilities. To be beneficial, impairment simulations must include guided exposure to effective methods for completing daily tasks with disabilities.
Youn, Wonkeun; Kim, Jung
2010-11-01
Mechanomyography (MMG) comprises the muscle surface oscillations generated by the dimensional changes of contracting muscle fibers. Because MMG reflects the number of recruited motor units and their firing rates, just as electromyography (EMG) is influenced by these two factors, it can be used to estimate the force exerted by skeletal muscles. The aim of this study was to demonstrate the feasibility of MMG for estimating the elbow flexion force at the wrist under an isometric contraction by using an artificial neural network, in comparison with EMG. We performed experiments with five subjects, recording the force at the wrist and the MMG from the contributing muscles. It was found that MMG could be used to accurately estimate the isometric elbow flexion force, based on the values of the normalized root mean square error (NRMSE = 0.131 ± 0.018) and the cross-correlation coefficient (CORR = 0.892 ± 0.033). Although MMG can be influenced by the physical milieu/morphology of the muscle, and EMG performed better than MMG, these experimental results suggest that MMG has the potential to estimate muscle forces. They also demonstrate that MMG in combination with EMG resulted in better estimation performance than EMG or MMG alone, indicating that a combination of MMG and EMG signals could provide complementary information on muscle contraction.
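The two reported figures of merit can be computed as follows; note that the normalization convention for NRMSE (here, by the range of the measured force) is an assumption, since the abstract does not state it:

```python
import numpy as np

def nrmse(measured, estimated):
    """RMSE normalized by the range of the measured force (the normalization
    convention is an assumption; the study does not state it here)."""
    rmse = np.sqrt(np.mean((measured - estimated) ** 2))
    return rmse / (np.max(measured) - np.min(measured))

def corr(measured, estimated):
    """Peak of the normalized cross-correlation between the two signals."""
    m = measured - measured.mean()
    e = estimated - estimated.mean()
    return np.max(np.correlate(m, e, mode="full")) / (
        np.linalg.norm(m) * np.linalg.norm(e))
```

A perfect estimate gives NRMSE = 0 and CORR = 1, so the reported NRMSE of 0.131 and CORR of 0.892 indicate a close but not exact tracking of the measured force.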
NASA Astrophysics Data System (ADS)
Ben Shabat, Yael; Shitzer, Avraham
2012-07-01
Facial heat exchange convection coefficients were estimated from experimental data in cold and windy ambient conditions applicable to wind chill calculations. The measured facial temperature datasets made available to this study originated from 3 separate studies involving 18 male and 6 female subjects. Most of these data were for a -10°C ambient environment and wind speeds in the range of 0.2 to 6 m s⁻¹. Additional single experiments were for -5°C, 0°C and 10°C environments and wind speeds in the same range. Convection coefficients were estimated for all these conditions by means of a numerical facial heat exchange model, applying properties of biological tissues and a typical facial diameter of 0.18 m. Estimation was performed by adjusting guessed convection coefficients in the computed facial temperatures while comparing them to the measured data, to obtain a satisfactory fit (r² > 0.98 in most cases). In one of the studies, heat flux meters were additionally used. Convection coefficients derived from these meters closely approached the estimated values for the male subjects only; they differed significantly, by about 50%, from the estimates for the female subjects' data. Regression analysis was performed for just the -10°C ambient temperature, over the range of experimental wind speeds, due to the limited availability of data for other ambient temperatures. The regressed equation was assumed to take the form of the equation underlying the "new" wind chill chart. The regressed convection coefficients, which closely duplicated the measured data, were consistently higher than those calculated by this equation, except in one single case. The estimated and currently used convection coefficients are shown to diverge exponentially from each other as wind speed increases.
This finding casts considerable doubt on the validity of the convection coefficients used in the computation of the "new" wind chill chart and on their applicability to humans in cold and windy environments.
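The guess-and-compare estimation procedure can be illustrated with a lumped-capacitance stand-in for the numerical facial model (the geometry, thermal properties, and synthetic measurements below are illustrative assumptions, not data from the study):

```python
import numpy as np

def lumped_temps(h, t, t_amb, t0, area=0.05, mass=1.0, cp=3500.0):
    """Lumped-capacitance cooling curve; a crude stand-in for the numerical
    facial model (area, mass and heat capacity here are illustrative)."""
    tau = mass * cp / (h * area)          # thermal time constant, s
    return t_amb + (t0 - t_amb) * np.exp(-t / tau)

def fit_h(times, measured, t_amb, t0, h_grid=np.linspace(5.0, 100.0, 950)):
    """Pick the convection coefficient whose modelled temperatures best match
    the measurements, mirroring the guess-and-compare procedure."""
    errs = [np.mean((lumped_temps(h, times, t_amb, t0) - measured) ** 2)
            for h in h_grid]
    return float(h_grid[int(np.argmin(errs))])
```

The study performed the same inner loop with a full numerical facial heat exchange model in place of the exponential cooling curve, judging the fit by r² rather than mean squared error.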
Ruíz, A; Ramos, A; San Emeterio, J L
2004-04-01
An estimation procedure to efficiently find approximate values of the internal parameters of ultrasonic transducers intended for broadband operation would be a valuable tool for discovering internal construction data. This information is necessary for modelling and simulating the acoustic and electrical behaviour of ultrasonic systems containing commercial transducers. There is no general solution to this generic problem of parameter estimation in the case of broadband piezoelectric probes. In this paper, the general problem is briefly analysed under broadband conditions, and the viability of applying an artificial intelligence technique, based on modelling of the transducer's internal components, is studied. A genetic algorithm (GA) procedure is presented and applied to the estimation of different parameters related to two transducers working as pulsed transmitters. The efficiency of this GA technique is studied, considering the influence of the number and variation range of the estimated parameters. The estimation results are experimentally validated.
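A hedged sketch of GA-based parameter estimation against a measured waveform; the forward model here is a generic damped sinusoid standing in for the authors' transducer model, and the GA operators (truncation selection, uniform crossover, Gaussian mutation, elitism) are common textbook choices, not necessarily those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(params, t):
    """Stand-in forward model (a damped sinusoid), not the authors' transducer model."""
    amp, freq, decay = params
    return amp * np.exp(-decay * t) * np.sin(2.0 * np.pi * freq * t)

def fitness(params, t, target):
    return -np.mean((model(params, t) - target) ** 2)

def ga_estimate(t, target, bounds, pop_size=80, generations=150):
    lo, hi = np.array(bounds, float).T
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    n_par = pop_size // 4
    for _ in range(generations):
        scores = np.array([fitness(p, t, target) for p in pop])
        parents = pop[np.argsort(scores)[::-1][:n_par]]        # truncation selection
        # Uniform crossover of two random parents, then Gaussian mutation.
        pa = parents[rng.integers(0, n_par, size=pop_size - n_par)]
        pb = parents[rng.integers(0, n_par, size=pop_size - n_par)]
        mask = rng.random(pa.shape) < 0.5
        children = np.where(mask, pa, pb) + rng.normal(0.0, 0.02 * (hi - lo), pa.shape)
        pop = np.vstack([parents, np.clip(children, lo, hi)])   # elitist survival
    scores = np.array([fitness(p, t, target) for p in pop])
    return pop[np.argmax(scores)]
```

The search cost grows with the number of parameters and the width of their bounds, which is the influence the paper studies: each extra parameter or wider variation range enlarges the space the population must cover.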
Pierrillas, Philippe B; Tod, Michel; Amiel, Magali; Chenel, Marylore; Henin, Emilie
2016-09-01
The purpose of this study was to explore the impact, on parameter estimates, of censoring due to animal sacrifice and of tumor volume being calculated from two diameters in larger tumors during tumor growth experiments in preclinical studies. The type of measurement error that can be expected was also investigated. Different scenarios were challenged using a stochastic simulation and estimation process. One thousand datasets were simulated under the design of a typical tumor growth study in xenografted mice, and eight approaches were then used for parameter estimation with the simulated datasets. The distribution of estimates and simulation-based diagnostics were computed for comparison. The different approaches were robust with regard to the choice of residual error and gave equivalent results. However, when the missing data induced by sacrificing the animals were not taken into account, parameter estimates were biased and led to false inferences about compound potency: the threshold concentration for tumor eradication when ignoring censoring was 581 ng ml⁻¹, whereas the true value was 240 ng ml⁻¹.
Health Auctions: a Valuation Experiment (HAVE) study protocol.
Kularatna, Sanjeewa; Petrie, Dennis; Scuffham, Paul A; Byrnes, Joshua
2016-04-07
Quality-adjusted life years are derived using health state utility weights, which adjust for the relative value of living in each health state compared with living in perfect health. Various techniques are used to estimate health state utility weights, including time trade-off and standard gamble. These methods have exhibited limitations in terms of complexity, validity and reliability. A new composite approach using experimental auctions to value health states is introduced in this protocol. A pilot study will test the feasibility and validity of using experimental auctions to value health states in monetary terms. A convenience sample (n=150) from a population of university staff and students will be invited to participate in 30 auction sets, with a group of 5 people in each set. The 9 health states auctioned in each auction set will come from the commonly used EQ-5D-3L instrument. Each participant may purchase at most 2 health states, and the participant who acquires the 2 'best' health states on average will keep the amount of money they did not spend in acquiring those health states. The value (highest bid and average bid) of each of the 24 health states will be compared across auctions to test for reliability across auction groups and across auctioneers. A test-retest will be conducted for 10% of the sample to assess the reliability of responses to the health state auctions. The feasibility of conducting experimental auctions to value health states will also be examined. The validity of the estimated health state values will be assessed by comparison with published utility estimates from other methods. This pilot study will explore the feasibility, reliability and validity of using experimental auctions for valuing health states. Ethical clearance was obtained from the Griffith University ethics committee. The results will be disseminated in peer-reviewed journals and at major international conferences. Published by the BMJ Publishing Group Limited.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao
Surrogate models are commonly used in Bayesian approaches such as Markov chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimation of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or by implementing MCMC in a two-stage manner. Since two-stage MCMC requires extra original-model evaluations, the computational cost is still high. If the measurement information is incorporated, a locally accurate approximation of the original model can be constructed adaptively at low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimate of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing estimation accuracy, the new approach achieves about a 200-fold speed-up compared to our previous work using two-stage MCMC.
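The idea of letting the surrogate's own error estimate widen the likelihood can be sketched as follows, with a toy scalar forward model standing in for the groundwater model (the design, kernel, prior range and observation noise are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(theta):
    """Toy 'expensive' forward model: one observable of one parameter."""
    return np.sin(theta) + 0.5 * theta

# Surrogate: a GP fitted to a handful of original-model runs.
def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

X = np.linspace(-2.0, 2.0, 15)            # design points (original-model evaluations)
K = rbf(X, X) + 1e-8 * np.eye(X.size)
alpha = np.linalg.solve(K, forward(X))

def surrogate(theta):
    """GP posterior mean and variance at a scalar theta."""
    k = rbf(np.array([theta]), X)[0]
    mean = k @ alpha
    var = 1.0 - k @ np.linalg.solve(K, k)
    return mean, max(var, 0.0)

def log_post(theta, obs, sigma_obs=0.1):
    """Gaussian likelihood whose variance is inflated by the surrogate's error."""
    if not -2.0 <= theta <= 2.0:          # uniform prior on [-2, 2]
        return -np.inf
    m, v = surrogate(theta)
    s2 = sigma_obs ** 2 + v               # approximation error enters here
    return -0.5 * ((obs - m) ** 2 / s2 + np.log(s2))

def metropolis(obs, n=4000, step=0.25):
    theta, lp = 0.0, log_post(0.0, obs)
    chain = np.empty(n)
    for j in range(n):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop, obs)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[j] = theta
    return chain
```

Where the surrogate is accurate (near design points) the extra variance v vanishes and the ordinary likelihood is recovered; where it is poor, the likelihood flattens instead of over-committing, which is the mechanism that guards against an over-confident posterior.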
NASA Astrophysics Data System (ADS)
Roy, Kuntal
2017-11-01
There exists considerable confusion in estimating the spin diffusion length of materials with high spin-orbit coupling from spin pumping experiments. For designing functional devices, it is important to determine the spin diffusion length with sufficient accuracy from experimental results. An inaccurate estimation of the spin diffusion length also affects the estimation of other parameters (e.g., spin mixing conductance, spin Hall angle) concomitantly. The spin diffusion length for platinum (Pt) has been reported in the literature over the wide range of 0.5-14 nm, usually as a constant value independent of the Pt thickness. Here, the key reasons behind such a wide range of reported values of the spin diffusion length are identified comprehensively. In particular, it is shown that a thickness-dependent conductivity and spin diffusion length are necessary to simultaneously match the experimental results for the effective spin mixing conductance and the inverse spin Hall voltage due to spin pumping. Such a thickness-dependent spin diffusion length is tantamount to the Elliott-Yafet spin relaxation mechanism, as expected for transition metals. This conclusion is not altered even when there is significant interfacial spin memory loss. Furthermore, the variations in the estimated parameters are also studied, which is important for technological applications.
Kalman-variant estimators for state of charge in lithium-sulfur batteries
NASA Astrophysics Data System (ADS)
Propp, Karsten; Auger, Daniel J.; Fotouhi, Abbas; Longo, Stefano; Knap, Vaclav
2017-03-01
Lithium-sulfur batteries are now commercially available, offering high specific energy density, low production costs and high safety. However, there is no commercially available battery management system for them, and there are no published methods for determining state of charge in situ. This paper describes a study to address this gap. The properties and behaviours of lithium-sulfur cells are briefly introduced, and the applicability of 'standard' lithium-ion state-of-charge estimation methods is explored. Open-circuit-voltage methods and 'Coulomb counting' are found to be a poor fit for lithium-sulfur, and model-based methods, particularly recursive Bayesian filters, are identified as showing strong promise. Three recursive Bayesian filters are implemented: an extended Kalman filter (EKF), an unscented Kalman filter (UKF) and a particle filter (PF). These estimators are tested through practical experimentation, considering both a pulse-discharge test and a test based on the New European Driving Cycle (NEDC). Experimentation is carried out at a constant temperature, mirroring the environment expected in the authors' target automotive application. It is shown that the estimators, which are based on a relatively simple equivalent-circuit-network model, can deliver useful results. Of the three estimators implemented, the unscented Kalman filter gives the most robust and accurate performance, with an acceptable computational effort.
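As an illustration of the model-based approach, here is a minimal extended Kalman filter on a one-state SoC model. All constants are illustrative assumptions, and real lithium-sulfur OCV curves are far flatter and more complex than the linear one assumed here; this is a sketch of the filtering idea, not the paper's equivalent-circuit-network model:

```python
import numpy as np

# Hypothetical cell constants (illustrative only, not lithium-sulfur data)
Q = 3600.0   # capacity in coulombs (1 Ah)
R0 = 0.05    # series resistance, ohm

def ocv(soc):
    """Assumed linear open-circuit-voltage curve; real Li-S curves are much flatter."""
    return 2.1 + 0.4 * soc

def ekf_soc(currents, voltages, dt=1.0, soc0=0.5, p0=0.1, q=1e-7, r=1e-3):
    """One-state EKF: predict by Coulomb counting, correct with the voltage reading."""
    h = 0.4                      # d(OCV)/d(SoC) for the linear curve above
    soc, p = soc0, p0
    estimates = []
    for i, v in zip(currents, voltages):
        soc -= i * dt / Q        # predict (state transition Jacobian F = 1)
        p += q
        k = p * h / (h * p * h + r)              # Kalman gain
        soc += k * (v - (ocv(soc) - R0 * i))     # correct with voltage innovation
        p *= (1.0 - k * h)
        estimates.append(soc)
    return np.array(estimates)
```

Even with a deliberately wrong initial SoC guess, the voltage correction pulls the estimate onto the true trajectory within a few samples. The flat OCV of real lithium-sulfur cells is precisely what weakens this correction in practice, which is why the paper finds plain open-circuit-voltage methods a poor fit and turns to richer models and filters.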
Computation of acoustic pressure fields produced in feline brain by high-intensity focused ultrasound
NASA Astrophysics Data System (ADS)
Omidi, Nazanin
In 1975, Dunn et al. (JASA 58:512-514) showed that a simple relation describes the ultrasonic threshold for cavitation-induced changes in the mammalian brain. The thresholds for tissue damage were estimated for a variety of acoustic parameters in exposed feline brain. The goal of this study was to improve the estimates of the acoustic pressures and intensities present in vivo during those experimental exposures by computing them with nonlinear rather than linear theory. In the current project, the acoustic pressure waveforms produced in the brains of anesthetized felines were numerically simulated for a spherically focused, nominally f/1 transducer (focal length = 13 cm) at increasing values of the source pressure and at frequencies of 1, 3, and 9 MHz. The corresponding focal intensities were correlated with the experimental data of Dunn et al. The focal pressure waveforms were also computed at the location of the true maximum. For low source pressures, the computed waveforms were the same as those determined using linear theory, and the focal intensities matched experimentally determined values. For higher source pressures, the focal pressure waveforms became increasingly distorted, with the compressional amplitude of the wave becoming greater, and the rarefactional amplitude becoming lower, than the values calculated using linear theory. The implications of these results for clinical exposures are discussed.
NASA Astrophysics Data System (ADS)
Shaanika, E.; Yamaguchi, K.; Miki, M.; Ida, T.; Izumi, M.; Murase, Y.; Oryu, T.; Yanamoto, T.
2017-12-01
Superconducting generators offer numerous advantages over conventional generators of the same rating: they are lighter, smaller and more efficient. Amongst the many methods for cooling HTS machinery, thermosyphon-based cooling systems have been employed for their high heat transfer rate and near-isothermal operating characteristics. To use them optimally, it is essential to study the thermal characteristics of these cryogenic thermosyphons. To this end, a stand-alone neon thermosyphon cooling system with a topology resembling an HTS rotating machine was studied. Heat load tests were conducted by applying a series of heat loads to the evaporator at different filling ratios. The temperatures at selected points of the evaporator, adiabatic tube and condenser, as well as the total heat leak, were measured. A computer thermal model, employing boundary conditions taken from the heat load test data, was developed to gain further insight into the temperature distribution of the thermosyphon components and the heat leak of the cooling system. This work presents a comparison between the estimated (modelled) and experimental (measured) temperature distributions in a two-phase cryogenic thermosyphon cooling system. The simulated temperature distribution and heat leak generally compared well with the experimental data.
Aggregate and individual replication probability within an explicit model of the research process.
Miller, Jeff; Schwarz, Wolf
2011-09-01
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.
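The model's three normally distributed components lend themselves to a direct Monte Carlo estimate of the aggregate replication probabilities (a sketch of the modelling idea, not the authors' analytical approximations; the parameter names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def replication_probs(mu_delta, sd_delta, sd_jitter, se, n_sim=100_000, crit=1.96):
    """Aggregate probabilities of a same-direction and of a significant
    same-direction replication under the normal-normal-normal model."""
    true = rng.normal(mu_delta, sd_delta, n_sim)      # true effect sizes in the context
    obs1 = true + rng.normal(0.0, se, n_sim)          # initial observed effects
    # Replications draw fresh procedural jitter and fresh measurement error.
    obs2 = true + rng.normal(0.0, sd_jitter, n_sim) + rng.normal(0.0, se, n_sim)
    same_dir = np.sign(obs2) == np.sign(obs1)
    sig_same = same_dir & (np.abs(obs2) > crit * se)
    return same_dir.mean(), sig_same.mean()
```

The simulation makes the paper's point concrete: the returned probabilities depend on mu_delta, sd_delta and sd_jitter, research-context parameters that would rarely be known in practice, so a replication probability computed from a single observed effect size inherits all of that uncertainty.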
Quantifying Selection with Pool-Seq Time Series Data.
Taus, Thomas; Futschik, Andreas; Schlötterer, Christian
2017-11-01
Allele frequency time series data constitute a powerful resource for unraveling mechanisms of adaptation, because the temporal dimension captures important information about evolutionary forces. In particular, Evolve and Resequence (E&R), the whole-genome sequencing of replicated experimentally evolving populations, is becoming increasingly popular. Based on computer simulations, several studies have proposed experimental parameters to optimize the identification of selection targets, but no such recommendations are available for the underlying parameters, selection strength and dominance. Here, we introduce a highly accurate method to estimate selection parameters from replicated time series data, which is fast enough to be applied on a genome scale. Using this new method, we evaluate how experimental parameters can be optimized to obtain the most reliable estimates of the selection parameters. We show that the effective population size (Ne) and the number of replicates have the largest impact. Because the number of time points and the sequencing coverage had only a minor effect, we suggest that time series analysis is feasible without a major increase in sequencing costs. We anticipate that time series analysis will become routine in E&R studies. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
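A simple illustration of estimating selection strength from trajectory data (a textbook logit-slope estimator for haploid selection acting on a Wright-Fisher population, not the authors' method, and ignoring the Pool-Seq sampling noise they model):

```python
import numpy as np

def simulate_trajectory(p0, s, n_gen, ne, rng):
    """Wright-Fisher trajectory with haploid selection and binomial drift."""
    p, traj = p0, [p0]
    for _ in range(n_gen):
        p = p * (1.0 + s) / (1.0 + s * p)     # deterministic selection step
        p = rng.binomial(ne, p) / ne          # drift: binomial resampling
        traj.append(p)
    return np.array(traj)

def estimate_s(freqs, times):
    """Under haploid selection the logit of the allele frequency increases by
    log(1 + s) per generation, so a least-squares slope recovers s."""
    logit = np.log(freqs / (1.0 - freqs))
    slope = np.polyfit(np.asarray(times, float), logit, 1)[0]
    return np.exp(slope) - 1.0
```

Repeating the simulation at smaller Ne shows the estimator's variance growing with drift, which mirrors the paper's finding that effective population size and the number of replicates dominate the achievable precision.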
Céspedes, V; Pallarés, S; Arribas, P; Millán, A; Velasco, J
2013-10-01
Water salinity and ionic composition are among the main environmental variables that constrain the fundamental niches of aquatic species; accordingly, physiological tolerance to these factors constitutes a crucial part of the evolution, ecology, and biogeography of these organisms. The present study experimentally estimated the fundamental saline and anionic niches of adults of two pairs of congeneric saline beetle species that differ in habitat preference (lotic and lentic) in order to test the habitat constraint hypothesis. Osmotic and anionic realised niches were also estimated from the field occurrences of the adult beetle species using Outlying Mean Index analysis, and related to the experimental tolerances. In the laboratory, all of the studied species showed a threshold response to increased salinity, displaying high survival times when exposed to low and intermediate conductivity levels. These results suggest that these species are not strictly halophilic, but that they are able to regulate both hyperosmotically and hypoosmotically. Anionic water composition had a significant effect on salinity tolerance at conductivity levels near the upper tolerance limits, with decreased survival at elevated sulphate concentrations. Species occupying lentic habitats demonstrated higher salinity tolerance than their lotic congeners, in agreement with the habitat constraint hypothesis. As expected, realised salinity niches were narrower than fundamental niches and corresponded to conditions near the upper tolerance limits of the species. These species are uncommon in freshwater, low-conductivity habitats despite the fact that such conditions might be physiologically suitable for the adult life stage; other factors, such as biotic interactions, could prevent their establishment at low salinities. Differences in the realised anionic niches of congeneric species could be partially explained by the varying habitat availability in the study area.
Combining the experimental estimation of fundamental niches with realised niche estimates from field data is a powerful method for understanding the main factors constraining species' distributions at multiple scales, which is a key issue when predicting species' ability to cope with global change. Copyright © 2013 Elsevier Ltd. All rights reserved.
Applying Propensity Score Methods in Medical Research: Pitfalls and Prospects
Luo, Zhehui; Gardiner, Joseph C.; Bradley, Cathy J.
2012-01-01
The authors review experimental and nonexperimental causal inference methods, focusing on assumptions for the validity of instrumental variables and propensity score (PS) methods. They provide guidance in four areas for the analysis and reporting of PS methods in medical research and selectively evaluate mainstream medical journal articles from 2000 to 2005 in the four areas, namely, examination of balance, overlapping support description, use of estimated PS for evaluation of treatment effect, and sensitivity analyses. In spite of the many pitfalls, when appropriately evaluated and applied, PS methods can be powerful tools in assessing average treatment effects in observational studies. Appropriate PS applications can create experimental conditions using observational data when randomized controlled trials are not feasible and, thus, lead researchers to an efficient estimator of the average treatment effect. PMID:20442340
Quark degrees of freedom in the production of soft pion jets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okorokov, V. A., E-mail: VAOkorokov@mephi.ru, E-mail: Okorokov@bnl.gov
2015-05-15
Experimental results obtained by studying the properties of soft jets in the 4-velocity space at √s ∼ 2 to 20 GeV are presented. The changes in the mean distance from the jet axis to the jet particles, the mean kinetic energy of these particles, and the cluster dimension in response to the growth of the collision energy are consistent with the assumption that quark degrees of freedom manifest themselves in processes of pion-jet production at intermediate energies. The energy at which quark degrees of freedom begin to manifest themselves experimentally in the production of soft pion jets is estimated for the first time. The estimated value of this energy is 2.8 ± 0.6 GeV.
Wildland fire probabilities estimated from weather model-deduced monthly mean fire danger indices
Haiganoush K. Preisler; Shyh-Chin Chen; Francis Fujioka; John W. Benoit; Anthony L. Westerling
2008-01-01
The National Fire Danger Rating System indices deduced from a regional simulation weather model were used to estimate probabilities and numbers of large fire events on monthly and 1-degree grid scales. The weather model simulations and forecasts are ongoing experimental products from the Experimental Climate Prediction Center at the Scripps Institution of Oceanography...
Sources of uncertainty in estimating stream solute export from headwater catchments at three sites
Ruth D. Yanai; Naoko Tokuchi; John L. Campbell; Mark B. Green; Eiji Matsuzaki; Stephanie N. Laseter; Cindi L. Brown; Amey S. Bailey; Pilar Lyons; Carrie R. Levine; Donald C. Buso; Gene E. Likens; Jennifer D. Knoepp; Keitaro Fukushima
2015-01-01
Uncertainty in the estimation of hydrologic export of solutes has never been fully evaluated at the scale of a small-watershed ecosystem. We used data from the Gomadansan Experimental Forest, Japan, Hubbard Brook Experimental Forest, USA, and Coweeta Hydrologic Laboratory, USA, to evaluate many sources of uncertainty, including the precision and accuracy of...
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey William; Devaud, Cecile
2017-05-01
A Reynolds-Averaged Navier-Stokes (RANS) simulation of the semi-industrial International Flame Research Foundation (IFRF) furnace is performed using a non-adiabatic Conditional Source-term Estimation (CSE) formulation. This represents the first time that a CSE formulation, which accounts for the effect of radiation on the conditional reaction rates, has been applied to a large scale semi-industrial furnace. The objective of the current study is to assess the capabilities of CSE to accurately reproduce the velocity field, temperature, species concentration and nitrogen oxides (NOx) emission for the IFRF furnace. The flow field is solved using the standard k-ε turbulence model and detailed chemistry is included. NOx emissions are calculated using two different methods. Predicted velocity profiles are in good agreement with the experimental data. The predicted peak temperature occurs closer to the centreline, as compared to the experimental observations, suggesting that the mixing between the fuel jet and vitiated air jet may be overestimated. Good agreement between the species concentrations, including NOx, and the experimental data is observed near the burner exit. Farther downstream, the centreline oxygen concentration is found to be underpredicted. Predicted NOx concentrations are in good agreement with experimental data when calculated using the method of Peters and Weber. The current study indicates that RANS-CSE can accurately predict the main characteristics seen in a semi-industrial IFRF furnace.
NASA Astrophysics Data System (ADS)
Nammi, Srinagalakshmi; Vasa, Nilesh J.; Gurusamy, Balaganesan; Mathur, Anil C.
2017-09-01
A plasma shielding phenomenon and its influence on micromachining are studied experimentally and theoretically for laser wavelengths of 355 nm, 532 nm and 1064 nm. A time-resolved pump-probe technique is proposed and demonstrated by splitting a single nanosecond Nd3+:YAG laser into an ablation laser (pump laser) and a probe laser to understand the influence of plasma shielding on laser ablation of copper (Cu) clad on polyimide thin films. The proposed nanosecond pump-probe technique allows simultaneous measurement of the absorption characteristics of the plasma produced during Cu film ablation by the pump laser. Experimental measurements of the probe intensity distinctly show that absorption by the ablated plume increases with increasing pump intensity, as a result of plasma shielding. Theoretical estimation of the intensity of the transmitted pump beam based on thermo-temporal modeling is in qualitative agreement with the pump-probe based experimental measurements. The theoretical estimate of the depth attained by a single pulse with a high pump intensity on a Cu thin film is limited by plasma shielding of the incident laser beam, similar to that observed experimentally. Further, the depth of the micro-channels produced shows a similar trend for all three wavelengths; however, the channel depth achieved is smaller at the wavelength of 1064 nm.
The determination of accurate dipole polarizabilities alpha and gamma for the noble gases
NASA Technical Reports Server (NTRS)
Rice, Julia E.; Taylor, Peter R.; Lee, Timothy J.; Almloef, Jan
1989-01-01
The static dipole polarizabilities alpha and gamma for the noble gases helium through xenon were determined using large flexible one-particle basis sets in conjunction with high-level treatments of electron correlation. The electron correlation methods include single and double excitation coupled-cluster theory (CCSD), an extension of CCSD that includes a perturbational estimate of connected triple excitations, CCSD(T), and second order perturbation theory (MP2). The computed alpha and gamma values are estimated to be accurate to within a few percent. Agreement with experimental data for the static hyperpolarizability gamma is good for neon and xenon, but for argon and krypton the differences are larger than the combined theoretical and experimental uncertainties. Based on our calculations, we suggest that the experimental value of gamma for argon is too low; adjusting this value would bring the experimental value of gamma for krypton into better agreement with our computed result. The MP2 values for the polarizabilities of neon, argon, krypton and xenon are in reasonable agreement with the CCSD and CCSD(T) values, suggesting that this less expensive method may be useful in studies of polarizabilities for larger systems.
Weight training in youth-growth, maturation, and safety: an evidence-based review.
Malina, Robert M
2006-11-01
To review the effects of resistance training programs on pre- and early-pubertal youth in the context of response, potential influence on growth and maturation, and occurrence of injury. Evidence-based review. Twenty-two reports dealing with experimental resistance training protocols, excluding isometric programs, in pre- and early-pubertal youth, were reviewed in the context of subject characteristics, training protocol, responses, and occurrence of injury. Experimental programs most often used isotonic machines and free weights, 2- and 3-day protocols, and 8- and 12-week durations, with significant improvements in muscular strength during childhood and early adolescence. Strength gains were lost during detraining. Experimental resistance training programs did not influence growth in height and weight of pre- and early-adolescent youth, and changes in estimates of body composition were variable and quite small. Only 10 studies systematically monitored injuries, and only three injuries were reported. Estimated injury rates were 0.176, 0.053, and 0.055 per 100 participant-hours in the respective programs. Experimental training protocols with weights and resistance machines and with supervision and low instructor/participant ratios are relatively safe and do not negatively impact growth and maturation of pre- and early-pubertal youth.
Term Dependence: Truncating the Bahadur Lazarsfeld Expansion.
ERIC Educational Resources Information Center
Losee, Robert M., Jr.
1994-01-01
Studies the performance of probabilistic information retrieval systems using differing statistical dependence assumptions when estimating the probabilities inherent in the retrieval model. Experimental results using the Bahadur Lazarsfeld expansion on the Cystic Fibrosis database are discussed that suggest that incorporating term dependence…
Thoracic and respirable particle definitions for human health risk assessment
Provides estimates of the thoracic and respirable fractions, for adults and children during typical activities during both nasal and oral inhalation, that may be used in the design of experimental studies and interpretation of evidence of health effects.
Martins Cunha, Raphael; Raiana Bentes, Mariana; Araújo, Victor H; DA Costa Souza, Mayara C; Vasconcelos Noleto, Marcelo; Azevedo Soares, Ademar; Machado Lehnen, Alexandre
2016-12-01
Blood glucose responses during and after exercise are modulated by the postabsorptive state, the intensity and duration of exercise, and the level of physical fitness. This study focused on the idea that high-intensity interval exercise, such as a mini-trampoline class, can reduce blood glucose. Thus, we examined acute changes in blood glucose among trained normoglycemic adults during a mini-trampoline exercise session. Twenty-four normoglycemic adult subjects were enrolled in the study. After physical assessment they were randomly assigned to either the experimental (N.=12) or the control group (N.=12). The experimental group performed a 50-minute session of moderate-to-high intensity (70 to 85% HRmax) exercise on a mini-trampoline commonly used in fitness classes. The control group did not perform any exercise, and all other procedures were the same as for the experimental group. Capillary blood glucose was measured before and every 15 minutes during the exercise session. The effects of exercise on blood glucose levels (group, time, and group-by-time interaction) were estimated using generalized estimating equations (GEE) followed by Bonferroni's post-hoc test (P<0.05). The experimental group showed a decrease in blood glucose levels from baseline (108.7 mg/dL): 26.1% reduction (15 min; P<0.001), 24.2% (30 min; P<0.001), and 15.7% (45 min; P<0.001). Compared to the control group, blood glucose levels in the experimental group were reduced by 18.8% (15 min; P<0.001), 14.3% (30 min; P<0.001) and 6.9% (45 min; P=0.025). The study results provide good evidence that a prescribed exercise program on a mini-trampoline can be used to reduce blood glucose levels and thus can potentially help control blood glucose.
Alkaline phosphatase activity in gingival crevicular fluid during canine retraction.
Batra, P; Kharbanda, Op; Duggal, R; Singh, N; Parkash, H
2006-02-01
The aim of the study was to investigate alkaline phosphatase activity in the gingival crevicular fluid (GCF) during orthodontic tooth movement in humans. Postgraduate orthodontic clinic. Ten female patients requiring extraction of all first premolars were selected and treated with standard edgewise mechanotherapy. Canine retraction was carried out using 100 g sentalloy springs. The maxillary canine on one side acted as the experimental site, while the contralateral canine acted as the control. Gingival crevicular fluid was collected from the mesial and distal aspects of the canines before initiation of canine retraction (baseline), immediately after initiation of retraction, and on the 1st, 7th, 14th and 21st days, and the alkaline phosphatase activity was estimated. The results show significant (p < 0.05) changes in alkaline phosphatase activity on the 7th, 14th and 21st days on both the mesial and distal aspects when the experimental and control sides were compared. The peak in enzyme activity occurred on the 14th day after initiation of retraction, followed by a significant fall in activity, especially on the mesial aspect. The study showed that alkaline phosphatase activity can be successfully estimated in the GCF using colorimetric assay kits. The enzyme activity varied according to the amount of tooth movement.
Estimating neural response functions from fMRI
Kumar, Sukhbinder; Penny, William
2014-01-01
This paper proposes a methodology for estimating Neural Response Functions (NRFs) from fMRI data. These NRFs describe non-linear relationships between experimental stimuli and neuronal population responses. The method is based on a two-stage model comprising an NRF and a Hemodynamic Response Function (HRF) that are simultaneously fitted to fMRI data using a Bayesian optimization algorithm. This algorithm also produces a model evidence score, providing a formal model comparison method for evaluating alternative NRFs. The HRF is characterized using previously established “Balloon” and BOLD signal models. We illustrate the method with two example applications based on fMRI studies of the auditory system. In the first, we estimate the time constants of repetition suppression and facilitation, and in the second we estimate the parameters of population receptive fields in a tonotopic mapping study. PMID:24847246
NASA Astrophysics Data System (ADS)
Barone, Alessandro; Fenton, Flavio; Veneziani, Alessandro
2017-09-01
An accurate estimation of cardiac conductivities is critical in computational electro-cardiology, yet experimental results in the literature significantly disagree on the values and ratios between longitudinal and tangential coefficients. These are known to have a strong impact on the propagation of the potential, particularly during defibrillation shocks. Data assimilation is a procedure for merging experimental data and numerical simulations in a rigorous way. In particular, variational data assimilation relies on the least-square minimization of the misfit between simulations and experiments, constrained by the underlying mathematical model, which in this study is represented by the classical Bidomain system, or its common simplification given by the Monodomain problem. Operating on the conductivity tensors as control variables of the minimization, we obtain a parameter estimation procedure. As the theory of this approach currently provides only an existence proof and is not informative for practical experiments, we present here an extensive numerical simulation campaign to assess practical critical issues such as the size and the location of the measurement sites needed for in silico test cases of potential experimental and realistic settings. This will be finalized with a validation of the variational data assimilation procedure on real data. Results indicate the presence of lower and upper bounds for the number of sites which guarantee an accurate and minimally redundant parameter estimation, the location of sites being generally noncritical for properly designed experiments. An effective combination of parameter estimation based on the Monodomain and Bidomain models is tested for the sake of computational efficiency. Parameter estimation based on the Monodomain equation potentially leads to the accurate computation of the transmembrane potential in real settings.
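The variational idea described here, least-squares minimization of the misfit between simulated and measured values at a few sites, constrained by the forward model, can be sketched in one dimension. The diffusion model, site locations, and noise level below are all invented for illustration; the Bidomain/Monodomain systems of the study are far richer:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Forward model: analytic solution of u_t = k * u_xx on [0, 1] with
# u(x, 0) = sin(pi x), namely u(x, t) = exp(-k pi^2 t) sin(pi x).
# The conductivity-like parameter k is the control variable.
def model(k, x, t):
    return np.exp(-k * np.pi**2 * t) * np.sin(np.pi * x)

k_true = 0.15
sites = np.array([0.2, 0.5, 0.8])      # hypothetical measurement locations
times = np.array([0.1, 0.2, 0.4])
rng = np.random.default_rng(1)
data = np.array([model(k_true, sites, t) for t in times])
data += 0.005 * rng.normal(size=data.shape)   # measurement noise

# Misfit functional: sum of squared differences at the measurement sites
def misfit(k):
    sim = np.array([model(k, sites, t) for t in times])
    return ((sim - data) ** 2).sum()

res = minimize_scalar(misfit, bounds=(0.01, 1.0), method="bounded")
print(round(res.x, 3))
```

With three well-placed sites the minimizer recovers k accurately; moving all sites to where the solution is nearly zero would degrade the estimate, which is the site-placement question the campaign above investigates.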
Schneider, Iris K.; Parzuchowski, Michal; Wojciszke, Bogdan; Schwarz, Norbert; Koole, Sander L.
2015-01-01
Previous work suggests that perceived importance of an object influences estimates of its weight. Specifically, important books were estimated to be heavier than non-important books. However, the experimental set-up of these studies may have suffered from a potential confound and findings may be confined to books only. Addressing this, we investigate the effect of importance on weight estimates by examining whether the importance of information stored on a data storage device (USB-stick or portable hard drive) can alter weight estimates. Results show that people thinking a USB-stick holds important tax information (vs. expired tax information vs. no information) estimate it to be heavier (Experiment 1) compared to people who do not. Similarly, people who are told a portable hard drive holds personally relevant information (vs. irrelevant), also estimate the drive to be heavier (Experiments 2A,B). PMID:25620942
Experimental Bayesian Quantum Phase Estimation on a Silicon Photonic Chip.
Paesani, S; Gentile, A A; Santagati, R; Wang, J; Wiebe, N; Tew, D P; O'Brien, J L; Thompson, M G
2017-03-10
Quantum phase estimation is a fundamental subroutine in many quantum algorithms, including Shor's factorization algorithm and quantum simulation. However, so far, results have cast doubt on its practicability for near-term, non-fault-tolerant quantum devices. Here we report experimental results demonstrating that this intuition need not be true. We implement a recently proposed adaptive Bayesian approach to quantum phase estimation and use it to simulate molecular energies on a silicon quantum photonic device. The approach is verified to be well suited for prethreshold quantum processors by investigating its superior robustness to noise and decoherence compared to the iterative phase estimation algorithm. This shows a promising route to unlock the power of quantum phase estimation much sooner than previously believed.
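The Bayesian phase-estimation idea, maintaining a posterior over the unknown phase and updating it after each binary measurement outcome, can be sketched classically with a simulated device. The likelihood form, the schedule for the circuit depth m, and the random choice of the control angle θ are simplifying assumptions for illustration; the paper's adaptive rule is more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(7)
phi_true = 1.234                       # unknown phase to be estimated
grid = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
post = np.ones_like(grid) / grid.size  # flat prior over the phase

def p0(phi, theta, m):
    # Outcome-0 probability for a phase-estimation circuit applying the
    # unitary m times with an extra control rotation theta (standard form).
    return (1 + np.cos(m * phi + theta)) / 2

for k in range(60):
    m = 1 + k // 10                    # slowly increase circuit depth
    theta = rng.uniform(0, 2 * np.pi)  # control angle (random here, not optimal)
    d = rng.binomial(1, 1 - p0(phi_true, theta, m))   # simulated measurement
    like = p0(grid, theta, m) if d == 0 else 1 - p0(grid, theta, m)
    post = post * like                 # Bayes update on the grid
    post /= post.sum()

phi_hat = grid[post.argmax()]
print(round(phi_hat, 2))
```

Starting at depth m = 1 keeps the posterior unimodal before deeper (more informative but 2π/m-ambiguous) circuits sharpen it, which is the essential trick behind such schedules.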
Planned Axial Reorientation Investigation on Sloshsat
NASA Technical Reports Server (NTRS)
Chato, David J.
2000-01-01
This paper details the design and logic of an experimental investigation to study axial reorientation in low gravity. The Sloshsat free-flyer is described. The planned axial reorientation experiments and test matrices are presented. Existing analytical tools are discussed. Estimates for settling range from 64 to 1127 seconds. The planned experiments are modelled using computational fluid dynamics. These models show promise in reducing settling estimates and demonstrate the ability of pulsed high-thrust settling to emulate lower-thrust continuous firing.
Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad; Roman, Monica
2015-01-01
Monoclonal antibodies (mAbs) are at present one of the fastest growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, with improvements achieved through trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. Using a dynamical model of this class of processes, an optimization-based technique for the estimation of kinetic parameters is developed. The estimation is achieved by minimizing an error function with a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed using a particular model of mammalian cell culture as a case study, but is generic for this class of bioprocesses. The case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies. PMID:25685797
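The PSO-based estimation loop, proposing parameter vectors, scoring them by the squared error between model output and data, and moving particles toward personal and global bests, can be sketched with a toy Monod-type kinetic law. The model, bounds, and PSO coefficients below are generic choices, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "kinetics": Monod-type rate mu(S) = mu_max * S / (Ks + S), where
# mu_max and Ks stand in for the unknown kinetic parameters.
def rate(params, S):
    mu_max, Ks = params
    return mu_max * S / (Ks + S)

S_obs = np.linspace(0.1, 5.0, 20)
true_params = np.array([0.6, 1.2])
y_obs = rate(true_params, S_obs) + 0.005 * rng.normal(size=S_obs.size)

def error(params):                     # least-squares error function
    return ((rate(params, S_obs) - y_obs) ** 2).sum()

# Minimal global-best PSO
n_part, n_iter = 30, 200
pos = rng.uniform([0.01, 0.01], [2.0, 5.0], size=(n_part, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([error(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.01, 5.0)
    vals = np.array([error(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print(np.round(gbest, 2))
```

PSO needs no gradients of the error function, which is why it suits bioprocess models where the objective is evaluated by simulation.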
NASA Astrophysics Data System (ADS)
Song, H.; Huerta-Lopez, C. I.; Martinez-Cruzado, J. A.; Rodriguez-Lozoya, H. E.; Espinoza-Barreras, F.
2009-05-01
Results of an ongoing study to estimate the ground response to weak and moderate earthquake excitations are presented. A reliable site characterization in terms of soil properties and sub-soil layer configuration is required in order to make a trustworthy estimation of the ground response under dynamic loads. This study can be described by the following four steps: (1) ambient noise measurements were collected at the study site, where a bridge was under construction between the cities of Tijuana and Ensenada in Mexico. The time series were collected using a six-channel recorder with a 16-bit ADC converter within a maximum voltage range of ±2.5 V; the recorder has optional settings for Butterworth/Bessel filters, gain, and sampling rate. The sensors were three-orthogonal-component (X, Y, Z) accelerometers with a sensitivity of 20 V/g, a flat frequency response from DC to 200 Hz, and a full range of ±0.25 g; (2) experimental H/V spectral ratios were computed to estimate the fundamental vibration frequency at the site; (3) using the time-domain experimental H/V spectral ratios as well as the original recorded time series, the random decrement method was applied to estimate the fundamental frequency and damping of the site (system); and (4) finally, the theoretical H/V spectral ratios were obtained by means of the stiffness matrix wave propagation method. The obtained results were then compared with a geotechnical study available for the site.
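Step (2), the experimental H/V spectral ratio, amounts to dividing smoothed horizontal and vertical amplitude spectra and picking the peak frequency. A minimal numpy sketch on synthetic "ambient noise" follows; the resonance frequency, noise levels, and smoothing width are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 100.0                       # sampling rate, Hz
t = np.arange(0, 120, 1 / fs)    # two minutes of synthetic ambient noise
f0 = 2.5                         # site fundamental frequency to recover

# The horizontal component carries a resonance at f0; vertical is flat noise.
h = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.normal(size=t.size)
v = 0.5 * rng.normal(size=t.size)

def smooth_amp_spectrum(x, width=21):
    # Windowed amplitude spectrum, smoothed with a moving average
    win = np.hanning(x.size)
    amp = np.abs(np.fft.rfft(x * win))
    kernel = np.ones(width) / width
    return np.convolve(amp, kernel, mode="same")

freqs = np.fft.rfftfreq(t.size, 1 / fs)
hv = smooth_amp_spectrum(h) / (smooth_amp_spectrum(v) + 1e-12)

band = (freqs > 0.5) & (freqs < 20.0)    # search band for the peak
f_peak = freqs[band][hv[band].argmax()]
print(round(f_peak, 1))
```

Smoothing both spectra before taking the ratio avoids spurious peaks from near-zero bins in the vertical spectrum; practical H/V processing typically uses a logarithmic (Konno-Ohmachi-style) smoother instead of the flat kernel assumed here.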
Extension of the thermal porosimetry method to high gas pressure for nanoporosimetry estimation
NASA Astrophysics Data System (ADS)
Jannot, Y.; Degiovanni, A.; Camus, M.
2018-04-01
Standard pore size determination methods like mercury porosimetry, nitrogen sorption, microscopy, or X-ray tomography are not suited to highly porous, low-density, and thus very fragile materials. For this kind of material, a method based on thermal characterization has been developed in a previous study. This method has been used with air pressure varying from 10^-1 to 10^5 Pa for materials having a thermal conductivity of less than 0.05 W m^-1 K^-1 at atmospheric pressure. It enables the estimation of pore size distributions between 100 nm and 1 mm. In this paper, we present a new experimental device enabling thermal conductivity measurement under gas pressure up to 10^6 Pa, enabling the estimation of the volume fraction of pores having a 10 nm diameter. It is also demonstrated that the main thermal conductivity models (parallel, series, Maxwell, Bruggeman, self-consistent) lead to the same estimation of the pore size distribution as the extended parallel model (EPM) presented in this paper and then used to process the experimental data. Three materials with thermal conductivities at atmospheric pressure ranging from 0.014 W m^-1 K^-1 to 0.04 W m^-1 K^-1 are studied. The thermal conductivity measurement results obtained with the three materials are presented, and the corresponding pore size distributions between 10 nm and 1 mm are presented and discussed.
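Three of the conductivity models compared above (parallel, series, Maxwell) have simple closed forms. A short sketch follows; the solid/gas conductivity pair and porosity are illustrative values for a highly porous insulation in air, not the paper's materials:

```python
# Effective thermal conductivity of a two-phase medium (solid + gas)
# under classical mixing rules; phi is the gas (pore) volume fraction.
def parallel(lam_s, lam_g, phi):
    # Phases conduct side by side: arithmetic (upper Wiener) bound
    return (1 - phi) * lam_s + phi * lam_g

def series(lam_s, lam_g, phi):
    # Phases conduct in series: harmonic (lower Wiener) bound
    return 1.0 / ((1 - phi) / lam_s + phi / lam_g)

def maxwell(lam_s, lam_g, phi):
    # Maxwell (Maxwell-Garnett) model: spherical pores in a solid matrix
    num = 2 * lam_s + lam_g - 2 * phi * (lam_s - lam_g)
    den = 2 * lam_s + lam_g + phi * (lam_s - lam_g)
    return lam_s * num / den

lam_s, lam_g, phi = 0.30, 0.026, 0.9   # illustrative: skeleton, air, porosity
for name, f in [("parallel", parallel), ("series", series), ("maxwell", maxwell)]:
    print(name, round(f(lam_s, lam_g, phi), 4))
```

The parallel and series rules bound every other model from above and below, so a measured conductivity falling between them is a basic consistency check on such data.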
NASA Astrophysics Data System (ADS)
Settar, Abdelhakim; Abboudi, Saïd; Madani, Brahim; Nebbali, Rachid
2018-02-01
Due to the endothermic nature of the steam methane reforming reaction, the process is often limited by the heat transfer behavior in the reactors. Poor thermal behavior sometimes leads to slow reaction kinetics, characterized by the presence of cold spots in the catalytic zones. Within this framework, the present work consists of a numerical investigation, in conjunction with an experimental one, of the one-dimensional heat transfer phenomenon during the heat supply of a catalytic-wall reactor designed for hydrogen production. The studied reactor is inserted in an electric furnace, where the heat requirement of the endothermic reaction is supplied by an electric heating system. During the heat supply, the unknown heat flux density received by the reactive flow is estimated using inverse methods. On the basis of the catalytic-wall reactor model, an experimental setup is engineered in situ to measure the temperature distribution. Thereafter, the measurements are injected into the numerical heat flux estimation procedure, which is based on the Function Specification Method (FSM). The measured and estimated temperatures are compared, and the heat flux density crossing the reactor wall is determined.
Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.
Hack, C Eric
2006-04-17
Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate the toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least squared error approaches. The Bayesian approach called Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
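The MCMC calibration loop described, proposing a parameter value, evaluating prior × likelihood against experimental data, and accepting or rejecting, can be sketched for a toy one-compartment kinetic model. The model, prior, and noise level are invented for illustration; real PBTK models have many parameters and hierarchical priors:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy one-compartment kinetic model: C(t) = C0 * exp(-k * t),
# with elimination rate k as the single parameter to calibrate.
C0, k_true = 10.0, 0.3
t_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y = C0 * np.exp(-k_true * t_obs) + 0.2 * rng.normal(size=t_obs.size)

def log_post(k):
    if k <= 0:
        return -np.inf
    resid = y - C0 * np.exp(-k * t_obs)
    log_lik = -0.5 * ((resid / 0.2) ** 2).sum()      # Gaussian noise model
    log_prior = -0.5 * ((np.log(k) - np.log(0.5)) / 1.0) ** 2  # lognormal prior
    return log_lik + log_prior

# Random-walk Metropolis sampler
k, lp, samples = 0.5, log_post(0.5), []
for i in range(20000):
    prop = k + 0.05 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:          # accept/reject step
        k, lp = prop, lp_prop
    if i >= 5000:                                    # discard burn-in
        samples.append(k)

post_mean = np.mean(samples)
print(round(post_mean, 2))
```

The retained samples approximate the posterior distribution of k, so uncertainty summaries (credible intervals, variance) come for free, which is the advantage over MLE-style calibration noted in the abstract.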
Servais, P
1995-03-01
In aquatic ecosystems, [(3)H]thymidine incorporation into bacterial DNA and [(3)H]leucine incorporation into proteins are usually used to estimate bacterial production. The incorporation rates of four amino acids (leucine, tyrosine, lysine, alanine) into bacterial proteins were measured in parallel on natural freshwater samples from the basin of the river Meuse (Belgium). Comparison of incorporation into proteins and into the total macromolecular fraction showed that more than 90% of each of these amino acids was incorporated into proteins. From incorporation measurements at four subsaturating concentrations (range, 2-77 nM), the maximum incorporation rates were determined. Strong correlations (r > 0.91 for all the calculated correlations) were found between the maximum incorporation rates of the different tested amino acids over a range of two orders of magnitude of bacterial activity. Bacterial production estimates were calculated using theoretical and experimental conversion factors. The productions calculated from the incorporation rates of the four amino acids were in good agreement, especially when the experimental conversion factors were used (slope range, 0.91-1.11, and r > 0.91). This study suggests that the incorporation of various amino acids into proteins can be used to estimate bacterial production.
ERIC Educational Resources Information Center
Chen, Mo; Hyppa-Martin, Jolene K.; Reichle, Joe E.; Symons, Frank J.
2016-01-01
Meaningfully synthesizing single case experimental data from intervention studies comprised of individuals with low incidence conditions and generating effect size estimates remains challenging. Seven effect size metrics were compared for single case design (SCD) data focused on teaching speech generating device use to individuals with…
A Critical Look at Methodologies Used to Evaluate Charter School Effectiveness
ERIC Educational Resources Information Center
Ackerman, Matthew; Egalite, Anna J.
2017-01-01
There is no consensus among researchers on charter school effectiveness in the USA, in part because of discrepancies in the research methods employed across various studies. Causal impact estimates from experimental studies demonstrate large positive impacts, but concerns about the generalizability of these results have prompted the development of…
Pixel-by-Pixel Estimation of Scene Motion in Video
NASA Astrophysics Data System (ADS)
Tashlinskii, A. G.; Smirnov, P. V.; Tsaryov, M. G.
2017-05-01
The paper considers the effectiveness of motion estimation in video using pixel-by-pixel recurrent algorithms. The algorithms use stochastic gradient descent to find the inter-frame shifts of all pixels of a frame; these vectors form the shift vector field. As the estimated parameters of the vectors, the paper studies their projections and polar parameters. Two methods for estimating the shift vector field are considered. The first method uses a stochastic gradient descent algorithm to sequentially process all nodes of the image row by row. It processes each row bidirectionally, i.e., from left to right and from right to left; subsequent joint processing of the results compensates for the inertia of the recursive estimation. The second method uses the correlation between rows to increase processing efficiency. It processes rows one after the other, changing direction after each row, and uses the obtained values to form the resulting estimate. Two criteria for forming this estimate are studied: the minimum of the gradient estimate and the maximum of the correlation coefficient. The paper gives examples of experimental results of pixel-by-pixel estimation for a video with a moving object and of moving-object trajectory estimation using the shift vector field.
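The core recurrent step, a stochastic-gradient update of a shift estimate from one pixel at a time, can be sketched in one dimension. The signal, noise level, and step size are invented, and a single global shift is estimated rather than the paper's full per-pixel vector field:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512
x = np.arange(n, dtype=float)
scene = np.sin(2 * np.pi * x / 64) + 0.5 * np.sin(2 * np.pi * x / 17)

true_shift = 3.4
frame1 = scene
# frame2 is frame1 shifted by true_shift (linear interpolation) plus noise
frame2 = np.interp(x - true_shift, x, scene) + 0.02 * rng.normal(size=n)

# Stochastic gradient descent on the inter-frame shift s, using one
# randomly chosen pixel per iteration (recurrent pixel-wise estimation).
s, mu = 0.0, 0.2
for _ in range(6000):
    i = rng.integers(8, n - 8)
    pred = np.interp(i - s, x, frame1)           # frame1 resampled at i - s
    # Local slope of frame1 at i - s, by central finite difference
    slope = np.interp(i - s + 0.5, x, frame1) - np.interp(i - s - 0.5, x, frame1)
    err = frame2[i] - pred
    s -= mu * err * slope                        # gradient step on (err)^2
print(round(s, 1))
```

The update uses only one pixel per step, so the estimate drifts toward the true shift over many iterations; this per-sample inertia is exactly what the bidirectional row processing in the paper is designed to compensate for.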
NASA Technical Reports Server (NTRS)
Rutledge, Charles K.
1988-01-01
The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series, and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
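The chi-square interval in question follows from modeling an averaged spectral estimate as P̂ ~ P·χ²_ν/ν with ν degrees of freedom. A small sketch using scipy; the segment count and confidence level are illustrative:

```python
from scipy.stats import chi2

# For a power spectral density estimate p_hat obtained by averaging
# n_segments independent periodogram segments, p_hat ~ P * chi2(nu) / nu
# with nu = 2 * n_segments degrees of freedom, giving the interval
#   [nu * p_hat / chi2.ppf(1 - a/2, nu),  nu * p_hat / chi2.ppf(a/2, nu)].
def psd_confidence(p_hat, n_segments, alpha=0.05):
    nu = 2 * n_segments
    lo = nu * p_hat / chi2.ppf(1 - alpha / 2, nu)
    hi = nu * p_hat / chi2.ppf(alpha / 2, nu)
    return lo, hi

lo, hi = psd_confidence(1.0, n_segments=16)
print(round(lo, 2), round(hi, 2))
```

The interval assumes an unbiased, chi-square-distributed estimate, which is exactly the assumption the study found violated at tonally associated frequencies (bias plus nonconformity to the chi-square distribution).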
Model-Based Estimation of Knee Stiffness
Pfeifer, Serge; Vallery, Heike; Hardegger, Michael; Riener, Robert; Perreault, Eric J.
2013-01-01
During natural locomotion, the stiffness of the human knee is modulated continuously and subconsciously according to the demands of activity and terrain. Given modern actuator technology, powered transfemoral prostheses could theoretically provide a similar degree of sophistication and function. However, experimentally quantifying knee stiffness modulation during natural gait is challenging. Alternatively, joint stiffness could be estimated in a less disruptive manner using electromyography (EMG) combined with kinetic and kinematic measurements to estimate muscle force, together with models that relate muscle force to stiffness. Here we present the first step in that process, where we develop such an approach and evaluate it in isometric conditions, where experimental measurements are more feasible. Our EMG-guided modeling approach allows us to consider conditions with antagonistic muscle activation, a phenomenon commonly observed in physiological gait. Our validation shows that model-based estimates of knee joint stiffness coincide well with experimental data obtained using conventional perturbation techniques. We conclude that knee stiffness can be accurately estimated in isometric conditions without applying perturbations, which presents an important step towards our ultimate goal of quantifying knee stiffness during gait. PMID:22801482
Pierobon, Alberto; DiZio, Paul; Lackner, James R.
2013-01-01
We tested an innovative method to estimate joint stiffness and damping during multijoint unfettered arm movements. The technique employs impulsive perturbations and a time-frequency analysis to estimate the arm's mechanical properties along a reaching trajectory. Each single impulsive perturbation provides a continuous estimation on a single-reach basis, making our method ideal to investigate motor adaptation in the presence of force fields and to study the control of movement in impaired individuals with limited kinematic repeatability. In contrast with previous dynamic stiffness studies, we found that stiffness varies during movement, achieving levels higher than during static postural control. High stiffness was associated with elevated reflexive activity. We observed a decrease in stiffness and a marked reduction in long-latency reflexes around the reaching movement velocity peak. This pattern could partly explain the difference between the high stiffness reported in postural studies and the low stiffness measured in dynamic estimation studies, where perturbations are typically applied near the peak velocity point. PMID:23945781
Murdande, Sharad B; Pikal, Michael J; Shanker, Ravi M; Bogner, Robin H
2010-12-01
To quantitatively assess the solubility advantage of amorphous forms of nine insoluble drugs with a wide range of physico-chemical properties, utilizing a previously reported thermodynamic approach. Thermal properties of the amorphous and crystalline forms of the drugs were measured using modulated differential scanning calorimetry. Equilibrium moisture sorption uptake by the amorphous drugs was measured with a gravimetric moisture sorption analyzer, and ionization constants were determined from the pH-solubility profiles. Solubilities of the crystalline and amorphous forms were measured in de-ionized water at 25°C. Polarized microscopy was used to provide qualitative information about crystallization of the amorphous drug in solution during solubility measurement. For three of the nine compounds, the estimated solubility based on thermodynamic considerations was within two-fold of the experimental measurement. For one compound, the estimated solubility enhancement was lower than the experimental value, likely due to extensive ionization in solution and hence its sensitivity to error in the pKa measurement. For the remaining five compounds, the estimated solubility was about 4- to 53-fold higher than the experimental results. In all cases where the theoretical solubility estimates were significantly higher, the amorphous drug was observed to crystallize rapidly during the experimental determination of solubility, preventing an accurate experimental assessment of the solubility advantage. It has been demonstrated that the theoretical approach provides an accurate estimate of the maximum solubility enhancement of an amorphous drug relative to its crystalline form for structurally diverse insoluble drugs when recrystallization during dissolution is minimal.
Nitrogen balance study in young Nigerian adult males using four levels of protein intake.
Atinmo, T; Mbofung, C M; Egun, G; Osotimehin, B
1988-11-01
1. The present study was carried out to estimate precisely, via the nitrogen balance technique, the protein requirement of Nigerians (earlier estimated via the obligatory N method) using graded levels of protein intake. 2. Fifteen medical students of the University of Ibadan who volunteered to participate in the study were given graded levels of protein (0.3, 0.45, 0.6 and 0.75 g/kg body-weight per d) derived from foods similar to those usually consumed by the subjects. 3. Each subject was given each of the dietary protein levels for a period of 10 d. Subjects were divided into two groups and the feeding pattern followed a criss-cross design, with one group starting at the lowest level of protein intake (0.3 g/kg). Mean energy intake during each of the eleven experimental periods was maintained at 0.2 MJ/kg per d. After an initial 5 d adaptation period in each experimental period, 24 h urine and faecal samples were collected in marked containers for five consecutive days for N determination. 4. Mean N balances during consumption of the four protein levels (0.30, 0.45, 0.6 and 0.75 g/kg) were -11.02 (SD 8.07), -9.90 (SD 6.64), +9.70 (SD 4.15) and +5.13 (SD 4.62) respectively. Using regression analysis, the mean daily N requirement was estimated at 110.25 mg N/kg body-weight (0.69 g protein/kg body-weight). Allowing for individual variation to cover 97.5% of the population adjusted this value to 0.75 g protein/kg body-weight. Net protein utilization for the diet at maintenance level was estimated at 57.5.
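The regression step used to obtain the requirement, fitting N balance against intake and solving for the zero-balance intercept, can be illustrated as follows. The group-mean balances here are invented round numbers for the sketch, not the study's data:

```python
import numpy as np

# Hypothetical group means: protein intake (g/kg per d) vs N balance
# (illustrative values only, chosen to be roughly linear)
intake = np.array([0.30, 0.45, 0.60, 0.75])
balance = np.array([-40.0, -18.0, 5.0, 28.0])

# Linear fit: balance = slope * intake + intercept
slope, intercept = np.polyfit(intake, balance, 1)

# The maintenance requirement is the intake at which N balance is zero
requirement = -intercept / slope
# (1 g protein corresponds to 160 mg N, since protein ≈ 6.25 x N,
#  which is how a g-protein figure maps to a mg-N figure.)
```

With real data, individual-subject regressions and the between-subject variance are what drive the 97.5%-coverage adjustment mentioned in the abstract.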
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables efficient implementation of the updated maximum-likelihood (UML) procedure. It uses an object-oriented architecture to organize the experimental variables and computational algorithms, giving experimenters flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by examples of toolbox use. Finally, guidelines and recommendations for parameter configuration are given.
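As a hedged illustration of the quantities the UML procedure estimates (written in Python rather than the toolbox's MATLAB; the function form, parameter values, and grid search are all illustrative assumptions, not the toolbox's algorithm), a logistic psychometric function and its likelihood can be written as:

```python
import numpy as np

def psychometric(x, alpha, beta, gamma=0.5, lam=0.02):
    """Logistic psychometric function: threshold alpha, slope beta,
    guess rate gamma (0.5 for 2AFC), lapse rate lam."""
    return gamma + (1 - gamma - lam) / (1 + np.exp(-beta * (x - alpha)))

def log_likelihood(x, r, alpha, beta, lam):
    """Bernoulli log-likelihood of binary responses r at stimulus levels x."""
    p = psychometric(x, alpha, beta, 0.5, lam)
    return np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))

# Simulate an observer and recover the threshold by maximum likelihood
rng = np.random.default_rng(3)
alpha_true, beta_true = 0.0, 2.0
x_trials = rng.uniform(-3, 3, 300)
r = (rng.random(300) < psychometric(x_trials, alpha_true, beta_true)).astype(float)

grid = np.linspace(-2, 2, 201)
ll = [log_likelihood(x_trials, r, a, beta_true, 0.02) for a in grid]
alpha_hat = grid[int(np.argmax(ll))]
```

The actual UML procedure updates such a likelihood trial by trial and places the next stimulus adaptively; the static grid here only shows the estimation target.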
Equation of State for RX-08-EL and RX-08-EP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, E.L.; Walton, J.
1985-05-07
JWL equations of state (EOSs) have been estimated for RX-08-EL and RX-08-EP, and the estimated JWL EOS parameters are listed. Previously, we derived a JWL EOS for RX-08-EN based on DYNA2D hydrodynamic code cylinder computations; comparisons with experimental cylinder test results are shown. The experimental cylinder shot results for RX-08-EL, shot K-473, were compared to those for RX-08-EN, shot K-463, as a reference. 10 figs., 6 tabs.
Liang, Hua; Miao, Hongyu; Wu, Hulin
2010-03-01
Modeling viral dynamics in HIV/AIDS studies has resulted in deep understanding of pathogenesis of HIV infection from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying parameters) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we intend to combine the newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. 
We demonstrate that HIV viral dynamics can be well characterized and quantified for individual patients. As a result, personalized treatment decision based on viral dynamic models is possible.
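As a minimal, hedged illustration of one classical viral-dynamics estimate (deliberately far simpler than the paper's MSSB/SNLS machinery, with invented numbers): under an assumed fully effective therapy the free-virus equation reduces to dV/dt = -cV, so V(t) = V0·exp(-ct) and the clearance rate c can be read off a log-linear fit to serial viral loads:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic viral-load decay under an (assumed) fully effective therapy:
# V(t) = V0 * exp(-c t), measured with multiplicative lognormal noise.
c_true, v0 = 0.35, 1.0e5          # clearance rate (1/day), baseline load
t = np.linspace(0.0, 14.0, 8)     # sampling days
v = v0 * np.exp(-c_true * t) * np.exp(0.05 * rng.standard_normal(t.size))

# Log-linear least squares: log V = log V0 - c t
slope, intercept = np.polyfit(t, np.log(v), 1)
c_hat = -slope                    # estimated clearance rate
v0_hat = np.exp(intercept)        # estimated baseline load
```

Estimating the full model of the paper, with time-varying infection rate and unobserved cell compartments, is exactly why the more elaborate smoothing and spline-enhanced least-squares machinery is needed.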
NASA Astrophysics Data System (ADS)
Adam, L.; Frehner, M.; Sauer, K. M.; Toy, V.; Guerin-Marthe, S.; Boulton, C. J.
2017-12-01
Reconciling experimental and static-dynamic numerical estimations of seismic anisotropy in Alpine Fault mylonites
Quartzo-feldspathic mylonites and schists are the main contributors to seismic wave anisotropy in the vicinity of the Alpine Fault (New Zealand). We must determine how the physical properties of rocks like these influence elastic wave anisotropy if we want to unravel the reasons for heterogeneous seismic wave propagation and interpret deformation processes in fault zones. To study such controls on velocity anisotropy we can: (1) experimentally measure elastic wave anisotropy on cores at in-situ conditions, or (2) estimate wave velocities by static (effective medium averaging) or dynamic (finite element) modelling based on EBSD data or photomicrographs. Here we compare all three approaches in a study of schist and mylonite samples from the Alpine Fault. Volumetric proportions of intrinsically anisotropic micas in cleavage domains and comparatively isotropic quartz+feldspar in microlithons commonly vary significantly within one sample. Our analysis examines the effects of these phases and their arrangement, and further addresses how heterogeneity influences elastic wave anisotropy. We compare P-wave seismic anisotropy estimates based on millimetre-scale ultrasonic waves under in-situ conditions with simulations that account for micrometre-scale variations in the elastic properties of constituent minerals, using the MTEX toolbox and finite-element wave propagation on EBSD images. We observe that the sorts of variations in the distribution of micas and quartz+feldspar found within any one of our real core samples can change the elastic wave anisotropy by 10%.
In addition, at 60 MPa confining pressure, experimental elastic anisotropy is greater than modelled anisotropy, which could indicate that open microfractures dramatically influence seismic wave anisotropy in the top 3 to 4 km of the crust, or be related to the different resolutions of the two methods.
Allen, Marcus; Zhong, Qiang; Kirsch, Nicholas; Dani, Ashwin; Clark, William W; Sharma, Nitin
2017-12-01
Miniature inertial measurement units (IMUs) are wearable sensors that measure limb segment or joint angles during dynamic movements. However, IMUs are generally prone to drift, external magnetic interference, and measurement noise. This paper presents a new class of nonlinear state estimation technique, called state-dependent coefficient (SDC) estimation, to accurately predict joint angles from IMU measurements. The SDC estimation method uses limb dynamics, instead of limb kinematics, to estimate the limb state. Importantly, the nonlinear limb dynamic model is formulated into state-dependent matrices that facilitate the estimator design without performing a Jacobian linearization. The estimation method is experimentally demonstrated to predict knee joint angle measurements during functional electrical stimulation of the quadriceps muscle. The nonlinear knee musculoskeletal model was identified through a series of experiments. The SDC estimator was then compared with an extended Kalman filter (EKF), which uses a Jacobian linearization, and a rotation matrix method, which uses a kinematic model instead of the dynamic model. Each estimator's performance was evaluated against the true value of the joint angle, which was measured with a rotary encoder. The experimental results showed that the SDC estimator, the rotation matrix method, and the EKF had root mean square errors of 2.70°, 2.86°, and 4.42°, respectively. These preliminary results show the new estimator's clear advantage over the EKF method and a slight advantage over the rotation matrix method. Moreover, the information from the dynamic model allows the SDC method to measure the knee angle with only one IMU, compared with the two IMUs required by the rotation matrix method.
Rolf, Megan M; Taylor, Jeremy F; Schnabel, Robert D; McKay, Stephanie D; McClure, Matthew C; Northcutt, Sally L; Kerley, Monty S; Weaber, Robert L
2010-04-19
Molecular estimates of breeding value are expected to increase selection response due to improvements in the accuracy of selection and a reduction in generation interval, particularly for traits that are difficult or expensive to record or are measured late in life. Several statistical methods for incorporating molecular data into breeding value estimation have been proposed; however, most studies have utilized simulated data in which the generated linkage disequilibrium may not represent the targeted livestock population. A genomic relationship matrix was developed for 698 Angus steers and 1,707 Angus sires using 41,028 single nucleotide polymorphisms (SNPs), and breeding values were estimated using feed efficiency phenotypes (average daily feed intake (AFI), residual feed intake (RFI), and average daily gain) recorded on the steers. The number of SNPs needed to accurately estimate a genomic relationship matrix was evaluated in this population. Results were compared to estimates produced from pedigree-based mixed model analysis of 862 Angus steers with 34,864 identified paternal relatives but no female ancestors. Estimates of additive genetic variance and breeding value accuracies were similar for AFI and RFI using the numerator and genomic relationship matrices, despite fewer animals in the genomic analysis. Bootstrap analyses indicated that 2,500-10,000 markers are required for robust estimation of genomic relationship matrices in cattle. This research shows that breeding values and their accuracies may be estimated for commercially important sires, for traits recorded in experimental populations, without the need for pedigree data to establish identity by descent between commercial and experimental populations, when at least 2,500 SNPs are available for the generation of a genomic relationship matrix.
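A genomic relationship matrix of this kind is commonly built with VanRaden's first method; the sketch below assumes that construction (the study's exact variant may differ) applied to simulated 0/1/2 genotype codes:

```python
import numpy as np

def genomic_relationship_matrix(geno):
    """VanRaden-style G matrix from an (n_animals x n_snps) array of
    0/1/2 genotype codes. Illustrative; the cited study's exact
    construction may differ in centering or scaling details."""
    p = geno.mean(axis=0) / 2.0            # per-SNP allele frequencies
    z = geno - 2.0 * p                     # centre by twice the frequency
    denom = 2.0 * np.sum(p * (1.0 - p))    # expected-variance scaling
    return z @ z.T / denom

# Simulated unrelated animals in Hardy-Weinberg equilibrium
rng = np.random.default_rng(7)
p_true = rng.uniform(0.1, 0.9, size=2500)  # 2,500 SNPs, the abstract's lower bound
geno = rng.binomial(2, p_true, size=(50, 2500)).astype(float)
G = genomic_relationship_matrix(geno)
```

For unrelated animals the diagonal of G averages near 1 and off-diagonals near 0; the bootstrap result in the abstract concerns how many SNPs are needed before sampling noise in these entries stops distorting the estimated relationships.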
Bielecki, K; Grotowski, M; Kalczak, M
1995-01-01
The purpose of this study was to evaluate the healing of an experimental left-sided colonic anastomosis in rats protected by an end diverting proximal colostomy. The anastomoses were studied radiologically and biochemically, and breaking strength was estimated. The results were compared with those of a non-operated group and of a group of rats with a non-defunctioned anastomosis constructed in the same manner. In animals with an end diverting colostomy, anastomotic protein levels and enzymic activity were lower than in those without a colostomy, and the development of anastomotic strength was delayed compared with the non-defunctioned group.
Experimental investigation of non-planar sheared outboard wing planforms
NASA Technical Reports Server (NTRS)
Naik, D. A.; Ostowari, C.
1988-01-01
The outboard planforms of wings have been found to be of prime importance in studies of induced drag reduction. This conclusion is based on an experimental and theoretical study of the aerodynamic characteristics of planar and nonplanar outboard wing forms. Six different configurations (baseline rectangular, planar sheared, sheared with dihedral, sheared with anhedral, rising arc, and drooping arc) were investigated for two different spans. Span efficiencies as much as 20 percent greater than baseline can be realized with nonplanar wing forms. Optimization studies show that this advantage can be achieved along with a bending-moment benefit. Parasite drag and lateral stability estimations were not included in the analysis.
Modelling of Batch Lactic Acid Fermentation in the Presence of Anionic Clay
Jinescu, Cosmin; Aruş, Vasilica Alisa; Nistor, Ileana Denisa
2014-01-01
Batch fermentation of milk inoculated with lactic acid bacteria was conducted in the presence of hydrotalcite-type anionic clay under static and ultrasonic conditions. An experimental study of the effect of fermentation temperature (t=38-43 °C), clay/milk ratio (R=1-7.5 g/L) and ultrasonic field (ν=0 and 35 kHz) on process dynamics was performed. A mathematical model was selected to describe the fermentation process kinetics, and its parameters were estimated from the experimental data. Good agreement between the experimental and simulated results was achieved; consequently, the model can be employed to predict the dynamics of batch lactic acid fermentation for process-variable values within the studied ranges. A statistical analysis of the data based on a 2³ factorial experiment was performed in order to express the experimental and model-regressed process responses as functions of the t, R and ν factors. PMID:27904318
The isobaric heat capacity of liquid water at low temperatures and high pressures
NASA Astrophysics Data System (ADS)
Troncoso, Jacobo
2017-08-01
Isobaric heat capacity for water shows rather strongly anomalous behavior, especially at low temperature. However, almost all experimental studies supporting this statement have been carried out at low pressure; very few experimental data have been reported above 100 MPa. In order to explore the behavior of this quantity for water up to 500 MPa, a new heat flux calorimeter was developed. To test the experimental methodology and allow comparison with the water results, isobaric heat capacity was also measured for methanol and hexane. Good agreement with indirect heat capacity estimations from the literature was obtained for all three liquids. The experimental results show large anomalies in the heat capacity of water, especially in its temperature dependence, which is qualitatively different from that observed for other liquids. The heat capacity versus temperature curves show minima for most of the studied isobars; the temperature at the minimum decreases with pressure up to around 100 MPa but increases at higher pressures.
Optimal Doppler centroid estimation for SAR data from a quasi-homogeneous source
NASA Technical Reports Server (NTRS)
Jin, M. Y.
1986-01-01
This correspondence briefly describes two Doppler centroid estimation (DCE) algorithms, provides a performance summary for these algorithms, and presents the experimental results. These algorithms include that of Li et al. (1985) and a newly developed one that is optimized for quasi-homogeneous sources. The performance enhancement achieved by the optimal DCE algorithm is clearly demonstrated by the experimental results.
Phillip E. Farnes; Ward W. McCaughey; Katherine J. Hansen
1999-01-01
The objectives of this Research Joint Venture Agreement (RJVA) were to install and calibrate three flumes on Tenderfoot Creek Experimental Forest (TCEF) in central Montana; check calibration of the existing seven flumes on TCEF; estimate the influence of fire on water yields over the 400-year fire history period; and estimate back records of monthly temperature,...
Austen, Emily J.; Weis, Arthur E.
2016-01-01
Our understanding of selection through male fitness is limited by the resource demands and indirect nature of the best available genetic techniques. Applying complementary, independent approaches to this problem can help clarify evolution through male function. We applied three methods to estimate selection on flowering time through male fitness in experimental populations of the annual plant Brassica rapa: (i) an analysis of mating opportunity based on flower production schedules, (ii) genetic paternity analysis, and (iii) a novel approach based on principles of experimental evolution. Selection differentials estimated by the first method disagreed with those estimated by the other two, indicating that mating opportunity was not the principal driver of selection on flowering time. The genetic and experimental evolution methods exhibited striking agreement overall, but a slight discrepancy between the two suggested that negative environmental covariance between age at flowering and male fitness may have contributed to phenotypic selection. Together, the three methods enriched our understanding of selection on flowering time, from mating opportunity to phenotypic selection to evolutionary response. The novel experimental evolution method may provide a means of examining selection through male fitness when genetic paternity analysis is not possible. PMID:26911957
Judged Frequency of Lethal Events.
ERIC Educational Resources Information Center
Lichtenstein, Sarah; And Others
1978-01-01
College student and adult subjects were studied in five experimental formats to gauge how well people can estimate the frequency of death from specific causes. Subjects tended to overestimate the rate of rare causes, underestimate likely causes, and be influenced by drama or vividness. (Author/SJL)
Impact of study design on development and evaluation of an activity-type classifier.
van Hees, Vincent T; Golubic, Rajna; Ekelund, Ulf; Brage, Søren
2013-04-01
Methods to classify activity types are often evaluated with an experimental protocol involving prescribed physical activities under confined (laboratory) conditions, which may not reflect real-life conditions. The present study aims to evaluate how study design may impact classifier performance in real life. Twenty-eight healthy participants (21-53 yr) were asked to wear nine triaxial accelerometers while performing 58 activity types selected to simulate activities in real life. For each sensor location, logistic classifiers were trained on subsets of up to 8 activities to distinguish between walking and nonwalking activities and were then evaluated on all 58 activities. Different weighting factors were used to convert the resulting confusion matrices into an estimation of the confusion matrix as it would apply in the real-life setting, creating four different real-life scenarios as well as one traditional laboratory scenario. The sensitivity of a classifier estimated with a traditional laboratory protocol is within the range of estimates derived from real-life scenarios for any body location. The specificity, however, was systematically overestimated by the traditional laboratory scenario. Walking time was systematically overestimated (range: 7-757%), except for lower back sensor data. In conclusion, classifier performance under confined conditions may not accurately reflect classifier performance in real life. Future studies that aim to evaluate activity classification methods should pay special attention to the representativeness of experimental conditions for real-life conditions.
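The reweighting idea, converting per-activity confusion rates into scenario-specific sensitivity and specificity via time-budget weights, can be sketched as follows. All rates and weights here are invented for illustration, not the study's values:

```python
import numpy as np

# Per-activity classifier behaviour (hypothetical numbers):
# true label (1 = walking) and probability the classifier outputs "walking".
labels = np.array([1, 1, 0, 0, 0])
p_walk = np.array([0.95, 0.80, 0.05, 0.30, 0.10])

def scenario_performance(weights):
    """Weight per-activity confusion rates by a scenario's time budget."""
    w = np.asarray(weights, dtype=float)
    walk, nonwalk = labels == 1, labels == 0
    sens = np.sum(w[walk] * p_walk[walk]) / np.sum(w[walk])
    spec = np.sum(w[nonwalk] * (1 - p_walk[nonwalk])) / np.sum(w[nonwalk])
    return sens, spec

lab_weights = np.ones(5)                   # lab protocol: equal time per activity
life_weights = np.array([2, 1, 4, 12, 6])  # real life: mostly non-walking time

sens_lab, spec_lab = scenario_performance(lab_weights)
sens_life, spec_life = scenario_performance(life_weights)
```

With these illustrative numbers the real-life scenario spends most time on the hardest non-walking activity, so its specificity falls below the lab estimate, the qualitative effect the study reports.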
Puente, Gabriela F; Bonetto, Fabián J
2005-05-01
We used the temporal evolution of the bubble radius in single-bubble sonoluminescence to estimate the liquid-vapor accommodation coefficient of water. The rapid changes in bubble radius that occur during bubble collapse and rebounds are a function of the actual value of the accommodation coefficient. We combined bubble radius measurements obtained from two different experimental techniques with a robust parameter estimation strategy and found that, for water at room temperature, the mass accommodation coefficient lies in the confidence interval [0.217, 0.329].
SCDFT Study of High Tc Nitride Superconductors
NASA Astrophysics Data System (ADS)
Arita, R.
Based on the density functional theory for superconductors (SCDFT), we study the pairing mechanism of the layered nitride superconductors β-LixMNCl (M=Zr, Hf). Recently, it has been shown that SCDFT reproduces the experimental superconducting transition temperatures (Tc) of conventional superconductors very accurately. Here we use SCDFT as a "litmus paper" to determine whether the system is a conventional or unconventional superconductor. We show that the Tc estimated by SCDFT is less than half of the experimental Tc and that its doping dependence is opposite to that observed in experiments. The present result suggests that β-LixMNCl is not a Migdal-Eliashberg-type superconductor.
NASA Astrophysics Data System (ADS)
Shen, Lin; Huang, Da; Wu, Genxing
2018-05-01
In this paper, an aircraft model was tested in the wind tunnel with different degrees of yaw-roll coupling at different angles of attack. The dynamic increments of the yawing and rolling moments are compared to study the coupling effects on damping characteristics. Characteristic time constants are calculated to study the changes in flow field structure related to the coupling ratios. The damping characteristics and time-lag effects of aerodynamic loads calculated by the dynamic derivative method are also compared with experimental results to assess the applicability of the linear superposition principle at large angles of attack.
Vavoulis, Dimitrios V.; Straub, Volko A.; Aston, John A. D.; Feng, Jianfeng
2012-01-01
Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent), in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied to a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, and measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function, and it was straightforward to apply to compartmental models and multiple data sets. 
Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a potentially useful tool in the construction of biophysical neuron models. PMID:22396632
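The self-organizing state-space idea, augmenting the state with the unknown parameters and giving them a small artificial jitter so the filter can adapt them, can be shown on a toy model far simpler than a Hodgkin-Huxley neuron. The AR(1) system, noise levels, particle count, and jitter below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a 1-D AR(1) latent process with unknown gain a, observed in noise
a_true, sw, sv, T = 0.9, 0.5, 0.5, 300
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + sw * rng.standard_normal()
y = x + sv * rng.standard_normal(T)

# Self-organizing state space: each particle carries (state, parameter);
# the parameter receives a small artificial jitter so the filter adapts it.
n = 3000
xs = rng.standard_normal(n)            # state particles
As = rng.uniform(0.0, 1.2, n)          # parameter particles (prior on a)
for t in range(1, T):
    As = As + 0.005 * rng.standard_normal(n)     # parameter jitter
    xs = As * xs + sw * rng.standard_normal(n)   # propagate state particles
    w = np.exp(-0.5 * ((y[t] - xs) / sv) ** 2)   # observation likelihood
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)             # multinomial resampling
    xs, As = xs[idx], As[idx]

a_hat = As.mean()                      # posterior mean of the parameter
```

The same mechanism, with the AR(1) line replaced by the discretized Hodgkin-Huxley equations and the scalar parameter by conductances, reversal potentials, and noise terms, is what makes the paper's joint state-and-parameter inference work.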
Non-linear identification of a squeeze-film damper
NASA Technical Reports Server (NTRS)
Stanway, Roger; Mottershead, John; Firoozian, Riaz
1987-01-01
Described is an experimental study to identify the damping laws associated with a squeeze-film vibration damper. This is achieved by using a non-linear filtering algorithm to process displacement responses of the damper ring to synchronous excitation and thus to estimate the parameters in an nth-power velocity model. The experimental facility is described in detail and a representative selection of results is included. The identified models are validated through the prediction of damper-ring orbits and comparison with observed responses.
Analytical determination of thermal conductivity of W-UO2 and W-UN CERMET nuclear fuels
NASA Astrophysics Data System (ADS)
Webb, Jonathan A.; Charit, Indrajit
2012-08-01
The thermal conductivity of tungsten-based CERMET fuels containing UO2 and UN fuel particles is determined as a function of particle geometry, stabilizer fraction and fuel volume fraction by combining an analytical approach with experimental data collected from the literature. Thermal conductivity is estimated using the Bruggeman-Fricke model. This study demonstrates that the thermal conductivities of various CERMET fuels can be analytically predicted to values very close to the experimentally determined ones.
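The effective-medium step can be sketched with the symmetric two-phase Bruggeman relation (the Fricke extension adds particle-shape factors, omitted in this sketch); the conductivity values and volume fraction below are illustrative, not the paper's data:

```python
def bruggeman_k(k_matrix, k_particle, vol_frac_particle, tol=1e-10):
    """Effective conductivity of a two-phase composite via the symmetric
    Bruggeman relation for spherical inclusions:
        sum_i f_i * (k_i - k_eff) / (k_i + 2*k_eff) = 0
    Solved by bisection between the two phase conductivities."""
    f = vol_frac_particle

    def g(k_e):
        return ((1 - f) * (k_matrix - k_e) / (k_matrix + 2 * k_e)
                + f * (k_particle - k_e) / (k_particle + 2 * k_e))

    lo, hi = sorted((k_matrix, k_particle))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:   # root lies in the lower half-interval
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Illustrative numbers: a ~100 W/m-K metal matrix with 60 vol% of
# ~5 W/m-K ceramic fuel particles
k_eff = bruggeman_k(100.0, 5.0, 0.60)
```

The effective value always lies between the two phase conductivities, and for this mixture it sits much closer to the ceramic value, which is why the fuel volume fraction dominates the predicted conductivity.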
NASA Astrophysics Data System (ADS)
Copur, Hanifi; Bilgin, Nuh; Balci, Cemal; Tumac, Deniz; Avunduk, Emre
2017-06-01
This study aims at determining the effects of single-, double-, and triple-spiral cutting patterns; the effects of tool cutting speeds on the experimental scale; and the effects of the method of yield estimation on cutting performance by performing a set of full-scale linear cutting tests with a conical cutting tool. The average and maximum normal, cutting and side forces; specific energy; yield; and coarseness index are measured and compared in each cutting pattern at a 25-mm line spacing, at varying depths of cut per revolution, and using two cutting speeds on five different rock samples. The results indicate that the optimum specific energy decreases by approximately 25% with an increasing number of spirals from the single- to the double-spiral cutting pattern for the hard rocks, whereas generally little effect was observed for the soft- and medium-strength rocks. The double-spiral cutting pattern appeared to be more effective than the single- or triple-spiral cutting pattern and had an advantage of lower side forces. The tool cutting speed had no apparent effect on the cutting performance. The estimation of the specific energy by the yield based on the theoretical swept area was not significantly different from that estimated by the yield based on the muck weighing, especially for the double- and triple-spiral cutting patterns and with the optimum ratio of line spacing to depth of cut per revolution. This study also demonstrated that the cutterhead and mechanical miner designs, semi-theoretical deterministic computer simulations and empirical performance predictions and optimization models should be based on realistic experimental simulations. Studies should be continued to obtain more reliable results by creating a larger database of laboratory tests and field performance records for mechanical miners using drag tools.
Purohit, S; Joisa, Y S; Raval, J V; Ghosh, J; Tanna, R; Shukla, B K; Bhatt, S B
2014-11-01
A silicon drift detector based X-ray spectrometer diagnostic was developed to study non-thermal electrons in Aditya tokamak plasma. The diagnostic was mounted on a radial mid-plane port at Aditya. Its objectives include the estimation of the non-thermal electron temperature for ohmically heated plasma; a bi-Maxwellian plasma model was adopted for the temperature estimation. The study of high-Z impurity line radiation from the ECR pre-ionization experiments was also targeted. The performance and first experimental results from the new X-ray spectrometer system are presented.
Adhesive joint evaluation by ultrasonic interface and lamb waves
NASA Technical Reports Server (NTRS)
Rokhlin, S. I.
1986-01-01
Some results on the application of interface and Lamb waves to the study of curing of thin adhesive layers are summarized. In the case of thick substrates (thickness much greater than the wavelength), interface waves can be used. In this case the experimental data can be inverted, and the shear modulus of the adhesive film can be found explicitly from the measured interface wave velocity. It is shown that interface waves can be used to study the curing of structural adhesives as a function of temperature and other experimental conditions. The kinetics of curing was studied. In the case of thin substrates the wave phenomena are much more complicated, and proper selection of experimental conditions, guided by theoretical estimates, is essential for successful measurements. For correctly selected experimental conditions, Lamb waves can be a sensitive probe of adhesive bond quality and may be used for cure monitoring.
The price elasticity of demand for heroin: Matched longitudinal and experimental evidence.
Olmstead, Todd A; Alessi, Sheila M; Kline, Brendan; Pacula, Rosalie Liccardo; Petry, Nancy M
2015-05-01
This paper reports estimates of the price elasticity of demand for heroin based on a newly constructed dataset. The dataset has two matched components concerning the same sample of regular heroin users: longitudinal information about real-world heroin demand (actual price and actual quantity at daily intervals for each heroin user in the sample) and experimental information about laboratory heroin demand (elicited by presenting the same heroin users with scenarios in a laboratory setting). Two empirical strategies are used to estimate the price elasticity of demand for heroin. The first strategy exploits the idiosyncratic variation in the price experienced by a heroin user over time that occurs in markets for illegal drugs. The second strategy exploits the experimentally induced variation in price experienced by a heroin user across experimental scenarios. Both empirical strategies result in the estimate that the conditional price elasticity of demand for heroin is approximately -0.80. Copyright © 2015 Elsevier B.V. All rights reserved.
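A constant-elasticity demand curve, ln q = a + ε ln p, makes the price elasticity the slope of a log-log regression, which is the usual way such estimates are computed; a minimal sketch with illustrative data, not the paper's dataset:

```python
import math

def price_elasticity(prices, quantities):
    """OLS slope of ln(quantity) on ln(price). Under the constant-
    elasticity demand model ln q = a + e*ln p, the slope is the
    price elasticity of demand e."""
    lp = [math.log(p) for p in prices]
    lq = [math.log(q) for q in quantities]
    n = len(lp)
    mp, mq = sum(lp) / n, sum(lq) / n
    sxy = sum((x - mp) * (y - mq) for x, y in zip(lp, lq))
    sxx = sum((x - mp) ** 2 for x in lp)
    return sxy / sxx
```

For data generated exactly from q = 100 p^-0.8, the estimator recovers an elasticity of -0.8, in the same ballpark as the paper's reported estimate.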
NASA Astrophysics Data System (ADS)
Unnikrishnan, Madhusudanan; Rajan, Akash; Basanthvihar Raghunathan, Binulal; Kochupillai, Jayaraj
2017-08-01
Experimental modal analysis is the primary tool for obtaining the fundamental dynamic characteristics like natural frequency, mode shape and modal damping ratio that determine the behaviour of any structure under dynamic loading conditions. This paper discusses about a carefully designed experimental method for calculating the dynamic characteristics of a pre-stretched horizontal flexible tube made of polyurethane material. The factors that affect the modal parameter estimation like the application time of shaker excitation, pause time between successive excitation cycles, averaging and windowing of measured signal, as well as the precautions to be taken during the experiment are explained in detail. The modal parameter estimation is done using MEscopeVESTM software. A finite element based pre-stressed modal analysis of the flexible tube is also done using ANSYS ver.14.0 software. The experimental and analytical results agreed well. The proposed experimental methodology may be extended for carrying out the modal analysis of many flexible structures like inflatables, tires and membranes.
Takamiya, K; Imanaka, T; Ota, Y; Akamine, M; Shibata, S; Shibata, T; Ito, Y; Imamura, M; Uwamino, Y; Nogawa, N; Baba, M; Iwasaki, S; Matsuyama, S
2008-07-01
The upper and lower limits of the excitation function of the (63)Cu(n,p)(63)Ni reaction were experimentally determined, and the number of (63)Ni nuclei produced in copper samples exposed to atomic bomb neutrons in Hiroshima was estimated by using the experimental excitation functions and the neutron fluences given in the DS02 dosimetry system. The estimated number of (63)Ni nuclei was compared with that measured and with that calculated using the DS02 dosimetry system and the corresponding ENDF/B-VI cross section. In comparison with DS02, there is about a 60% maximum difference in (63)Ni production at the hypocenter when the experimental upper cross section values are used. The difference becomes smaller at greater distances from the hypocenter and decreases, for example, to less than 30 and 5% when using the upper and lower experimental cross sections at 1,000 m, respectively.
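An estimate of this kind folds a cross section with a neutron fluence spectrum; a schematic group-wise version, where the group values are placeholders rather than the DS02 or experimental data:

```python
BARN_CM2 = 1e-24  # 1 barn in cm^2

def ni63_atoms(n_cu63_atoms, group_fluence_cm2, group_xs_barn):
    """Thin-target activation estimate: product atoms =
    N_target * sum_g sigma_g * Phi_g over neutron energy groups
    (no burnup or decay corrections)."""
    return n_cu63_atoms * sum(
        xs * BARN_CM2 * flu
        for xs, flu in zip(group_xs_barn, group_fluence_cm2))
```

Using the experimental upper vs. lower excitation functions in place of `group_xs_barn` brackets the predicted 63Ni production, which is how the comparison with DS02 above is framed.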
NASA Astrophysics Data System (ADS)
Hamed Mashhadzadeh, A.; Fereidoon, Ab.; Ghorbanzadeh Ahangari, M.
2017-10-01
In the current study we combined theoretical and experimental work to evaluate the effect of functionalization and silanization on the mechanical behavior of polymer/CNT nanocomposites. Epoxy was selected as the thermoset polymer; polypropylene and polyvinyl chloride were selected as the thermoplastic polymers. The procedure is divided into two parts. First, we applied density functional theory (DFT) to analyze the effect of functionalization on the equilibrium distance and adsorption energy of unmodified, -OH-functionalized, and silanized epoxy/CNT, PP/CNT, and PVC/CNT nanocomposites. The results showed that functionalization increased the adsorption energy and reduced the equilibrium distance in all studied nanocomposites, and that silanization had a larger effect than -OH functionalization. We then prepared experimental samples of all of these nanocomposites and tested their tensile and flexural strength properties. The results showed that functionalization improved the studied mechanical properties in all evaluated nanocomposites. Finally, we compared the experimental and theoretical results and found good agreement between the two.
Space Weather Studies Using Ground-based Experimental Complex in Kazakhstan
NASA Astrophysics Data System (ADS)
Kryakunova, O.; Yakovets, A.; Monstein, C.; Nikolayevskiy, N.; Zhumabayev, B.; Gordienko, G.; Andreyev, A.; Malimbayev, A.; Levin, Yu.; Salikhov, N.; Sokolova, O.; Tsepakina, I.
2015-12-01
The Kazakhstan ground-based experimental complex for space weather studies is situated near Almaty. Results of space environment monitoring are accessible in real time via the web site of the Institute of Ionosphere (http://www.ionos.kz/?q=en/node/21). The complex maintains a database with hourly data on cosmic ray intensity, geomagnetic field intensity, and solar radio flux at 10.7 cm and 27.8 cm wavelengths. Several studies using these data are reported: an estimation of the speed of a coronal mass ejection, a study of large-scale traveling disturbances, an analysis of geomagnetically induced currents using the geomagnetic field data, and a solar energetic proton event on 27 January 2012.
NASA Technical Reports Server (NTRS)
Shahshahani, Behzad M.; Landgrebe, David A.
1992-01-01
The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes, supervised, unsupervised, and combined supervised-unsupervised, are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, combined supervised-unsupervised learning is always superior to supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.
Studying the Pc(4450) resonance in J/ψ photoproduction off protons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blin, A. N. Hiller; Fernandez-Ramirez, C.; Jackura, A.
2016-08-01
A resonance-like structure, the Pc(4450), has recently been observed in the J/ψ p spectrum by the LHCb collaboration. We discuss the feasibility of detecting this structure in J/ψ photoproduction in the CLAS12 experiment at JLab. We present a first estimate of the upper limit for the branching ratio of the Pc(4450) to J/ψ p. Our estimates, which take into account experimental resolution effects, lead to a sizable cross section close to the J/ψ production threshold, which makes future experiments covering this region very promising.
Peer influence on students' estimates of performance: social comparison in clinical rotations.
Raat, A N Janet; Kuks, Jan B M; van Hell, E Ally; Cohen-Schotanus, Janke
2013-02-01
During clinical rotations, students move from one clinical situation to another. Questions exist about students' strategies for coping with these transitions. These strategies may include a process of social comparison because in this context it offers the student an opportunity to estimate his or her abilities to master a novel rotation. These estimates are relevant for learning and performance because they are related to self-efficacy. We investigated whether student estimates of their own future performance are influenced by the performance level and gender of the peer with whom the student compares him- or herself. We designed an experimental study in which participating students (n = 321) were divided into groups assigned to 12 different conditions. Each condition entailed a written comparison situation in which a peer student had completed the rotation the participant was required to undertake next. Differences between conditions were determined by the performance level (worse, similar or better) and gender of the comparison peer. The overall grade achieved by the comparison peer remained the same in all conditions. We asked participants to estimate their own future performance in that novel rotation. Differences between their estimates were analysed using analysis of variance (ANOVA). Students' estimates of their future performance were highest when the comparison peer was presented as performing less well and lowest when the comparison peer was presented as performing better (p < 0.001). Estimates of male and female students in same-gender comparison conditions did not differ. In two of three opposite-gender conditions, male students' estimates were higher than those of females (p < 0.001 and p < 0.05, respectively). Social comparison influences students' estimates of their future performance in a novel rotation. The effect depends on the performance level and gender of the comparison peer. 
This indicates that comparisons against particular peers may strengthen or diminish a student's self-efficacy, which, in turn, may ease or hamper the student's learning during clinical rotations. The study is limited by its experimental design. Future research should focus on students' comparison behaviour in real transitions. © Blackwell Publishing Ltd 2013.
Anomalous T2 relaxation in normal and degraded cartilage.
Reiter, David A; Magin, Richard L; Li, Weiguo; Trujillo, Juan J; Pilar Velasco, M; Spencer, Richard G
2016-09-01
To compare the ordinary monoexponential model with three anomalous relaxation models (the stretched Mittag-Leffler, stretched exponential, and biexponential functions) using both simulated and experimental cartilage relaxation data. Monte Carlo simulations were used to examine both the ability to identify a given model under high signal-to-noise ratio (SNR) conditions and the accuracy and precision of parameter estimates under the more modest SNR encountered clinically. Experimental transverse relaxation data from normal and enzymatically degraded cartilage samples were analyzed under high SNR and rapid echo sampling to compare the models. Both simulation and experimental results showed improvement in signal representation with the anomalous relaxation models. The stretched exponential model consistently showed the lowest mean squared error in experimental data and closely represents the signal decay over multiple decades of the decay time (e.g., 1-10 ms, 10-100 ms, and >100 ms). The stretched exponential parameter αse showed an inverse correlation with biochemically derived cartilage proteoglycan content. Experimental results obtained at high field suggest potential application of αse as a measure of matrix integrity. Simulations reflecting more clinical imaging conditions indicate the ability to robustly estimate αse and to distinguish between normal and degraded tissue, highlighting its potential as a biomarker for human studies. Magn Reson Med 76:953-962, 2016. © 2015 Wiley Periodicals, Inc.
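The stretched exponential model referred to above is S(t) = S0 exp(-(t/τ)^α), with α = 1 recovering the monoexponential case. A minimal sketch of fitting α by coarse grid search (a stand-in for the proper nonlinear least-squares fit used in such studies; values are synthetic):

```python
import math

def stretched_exp(t, s0, tau, alpha):
    """S(t) = S0 * exp(-(t/tau)^alpha); alpha = 1 recovers the
    ordinary monoexponential relaxation model."""
    return s0 * math.exp(-((t / tau) ** alpha))

def fit_alpha(ts, ys, s0, taus, alphas):
    """Coarse grid search for (tau, alpha) minimizing the sum of
    squared errors against the measured decay; illustrative only."""
    best = None
    for tau in taus:
        for alpha in alphas:
            sse = sum((y - stretched_exp(t, s0, tau, alpha)) ** 2
                      for t, y in zip(ts, ys))
            if best is None or sse < best[0]:
                best = (sse, tau, alpha)
    return best[1], best[2]
```

On noiseless synthetic data the search recovers the generating (τ, α) exactly when they lie on the grid; with noisy clinical-SNR data the precision of α is what the Monte Carlo simulations above quantify.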
Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data
NASA Astrophysics Data System (ADS)
Stegmeir, Matthew; Kassen, Dan
2016-11-01
As Particle Image Velocimetry has continued to mature, it has developed into a robust and flexible technique for velocimetry used by expert and non-expert users. While historical estimates of PIV accuracy have typically relied heavily on "rules of thumb" and analysis of idealized synthetic images, recently increased emphasis has been placed on better quantifying real-world PIV measurement uncertainty. Multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Often real-world experimental conditions introduce complications in collecting "optimal" data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work utilizes the results of PIV Uncertainty Quantification techniques to develop a framework for PIV users to utilize estimated PIV confidence intervals to compute reliable data convergence criteria for optimal sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures leveraging estimated PIV confidence intervals for efficient sampling for converged statistics are provided.
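A simple instance of turning a confidence interval into a convergence criterion: the number of effectively independent samples needed so that the CI half-width of a mean statistic falls below a target. This is a generic sketch, not the framework of the study:

```python
import math

def n_for_convergence(sigma, halfwidth, z=1.96):
    """Independent samples needed for the z-level (default ~95%) CI
    half-width of a sample mean to fall below a target:
        n >= (z * sigma / halfwidth)^2.
    For correlated PIV snapshots, n counts effectively independent
    samples, not raw frames."""
    return math.ceil((z * sigma / halfwidth) ** 2)
```

In practice, σ would itself be estimated from the data together with the per-vector PIV uncertainty, which is what makes the estimated confidence intervals useful for sampling design.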
Optical model calculations of 14.6A GeV silicon fragmentation cross sections
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Khan, Ferdous; Tripathi, Ram K.
1993-01-01
An optical potential abrasion-ablation collision model is used to calculate hadronic dissociation cross sections for a 14.6 A GeV 28Si beam fragmenting in aluminum, tin, and lead targets. The frictional-spectator-interaction (FSI) contributions are computed with two different formalisms for the energy-dependent mean free path. These estimates are compared with experimental data and with estimates obtained from semi-empirical fragmentation models commonly used in galactic cosmic ray transport studies.
JASMINE project Instrument design and centroiding experiment
NASA Astrophysics Data System (ADS)
Yano, Taihei; Gouda, Naoteru; Kobayashi, Yukiyasu; Yamada, Yoshiyuki
JASMINE will study the fundamental structure and evolution of the Milky Way Galaxy. To accomplish these objectives, JASMINE will measure trigonometric parallaxes, positions and proper motions of about 10 million stars with a precision of 10 μarcsec at z = 14 mag. In this paper the instrument design (optics, detectors, etc.) of JASMINE is presented. We also show a CCD centroiding experiment for estimating positions of star images. The experimental result shows that the accuracy of estimated distances has a variance of less than 0.01 pixel.
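The simplest estimator used in centroiding experiments of this kind is the intensity-weighted centroid of the star image; a minimal sketch (the JASMINE experiment would use a more refined estimator and calibration):

```python
def centroid(image):
    """Intensity-weighted centroid of a 2-D pixel array (list of rows):
    returns (x, y) in pixel coordinates, with sub-pixel resolution."""
    tot = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            tot += v
            sx += v * x
            sy += v * y
    return sx / tot, sy / tot
```

For a symmetric point-spread function the centroid coincides with the peak pixel; asymmetry, noise, and pixelation are what limit the sub-0.01-pixel accuracy quoted above.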
NASA Astrophysics Data System (ADS)
Léon, Olivier; Piot, Estelle; Sebbane, Delphine; Simon, Frank
2017-06-01
The present study provides theoretical details and experimental validation results for the approach proposed by Minotti et al. (Aerosp Sci Technol 12(5):398-407, 2008) for measuring the amplitudes and phases of acoustic velocity components (AVC), i.e., the waveform parameters of each component of velocity induced by an acoustic wave, in fully turbulent duct flows carrying multi-tone acoustic waves. Theoretical results support that the proposed turbulence rejection method, based on the estimation of cross power spectra between velocity measurements and a reference signal such as a wall pressure measurement, provides asymptotically efficient estimators with respect to the number of samples. Furthermore, it is shown that the estimator uncertainties can be simply estimated, accounting for the characteristics of the measured flow turbulence spectra. Two laser-based measurement campaigns were conducted to validate the acoustic velocity estimation approach and the derived uncertainty estimates. While in previous studies estimates were obtained using laser Doppler velocimetry (LDV), it is demonstrated that high-repetition-rate particle image velocimetry (PIV) can also be successfully employed. The two measurement techniques provide very similar acoustic velocity amplitude and phase estimates for the cases investigated, which are of practical interest for acoustic liner studies. In a broader sense, this approach may be beneficial for non-intrusive sound emission studies in wind tunnel testing.
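The idea of phase-locking velocity to a reference tone can be illustrated with a single-bin DFT: fluctuations incoherent with the reference (turbulence) average out, leaving the acoustic amplitude and phase. A minimal sketch in the spirit of, but much simpler than, the cross-spectral estimators described above:

```python
import cmath
import math

def tone_component(x, fs, f0):
    """Single-bin DFT of x at frequency f0 (assumes an integer number
    of periods in the record); returns the complex amplitude A*e^{i*phi}."""
    n = len(x)
    s = sum(v * cmath.exp(-2j * math.pi * f0 * k / fs)
            for k, v in enumerate(x))
    return 2.0 * s / n

def acoustic_velocity(u, p_ref, fs, f0):
    """Amplitude and phase of the f0 tone in velocity signal u,
    referenced to the wall-pressure signal p_ref: contributions not
    coherent with the reference tone are rejected."""
    uc = tone_component(u, fs, f0)
    pc = tone_component(p_ref, fs, f0)
    rel = uc * pc.conjugate() / abs(pc)
    return abs(rel), cmath.phase(rel)
```

With an integer number of tone periods in the record, off-tone content (here a 130 Hz interferer standing in for turbulence) contributes nothing to the 50 Hz bin.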
Witnessing eigenstates for quantum simulation of Hamiltonian spectra
Santagati, Raffaele; Wang, Jianwei; Gentile, Antonio A.; Paesani, Stefano; Wiebe, Nathan; McClean, Jarrod R.; Morley-Short, Sam; Shadbolt, Peter J.; Bonneau, Damien; Silverstone, Joshua W.; Tew, David P.; Zhou, Xiaoqi; O’Brien, Jeremy L.; Thompson, Mark G.
2018-01-01
The efficient calculation of Hamiltonian spectra, a problem often intractable on classical machines, can find application in many fields, from physics to chemistry. We introduce the concept of an “eigenstate witness” and, through it, provide a new quantum approach that combines variational methods and phase estimation to approximate eigenvalues for both ground and excited states. This protocol is experimentally verified on a programmable silicon quantum photonic chip, a mass-manufacturable platform, which embeds entangled state generation, arbitrary controlled unitary operations, and projective measurements. Both ground and excited states are experimentally found with fidelities >99%, and their eigenvalues are estimated with 32 bits of precision. We also investigate and discuss the scalability of the approach and study its performance through numerical simulations of more complex Hamiltonians. This result shows promising progress toward quantum chemistry on quantum computers. PMID:29387796
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vlcek, Lukas; Chialvo, Ariel; Simonson, J Michael
2013-01-01
Molecular models and experimental estimates based on the cluster pair approximation (CPA) provide inconsistent predictions of absolute single-ion hydration properties. To understand the origin of this discrepancy we used molecular simulations to study the transition between hydration of alkali metal and halide ions in small aqueous clusters and in bulk water. The results demonstrate that the assumptions underlying the CPA are not generally valid, as a result of a significant shift in the ion hydration free energies (~15 kJ/mol) and enthalpies (~47 kJ/mol) in the intermediate range of cluster sizes. When this effect is accounted for, the systematic differences between models and experimental predictions disappear, and the value of the absolute proton hydration enthalpy based on the CPA comes into closer agreement with other estimates.
Experimental Study of Hydraulic Systems Transient Response Characteristics
1978-12-01
Towing Tank Tests on a Ram Wing in a Rectangular Guideway
DOT National Transportation Integrated Search
1973-07-01
The object of the study was to set the theoretical and experimental basis for a preliminary design of a ram wing vehicle. A simplified one-dimensional mathematical model is developed in an attempt to estimate the stability derivatives of this type of...
Improving precision of forage yield trials: A case study
USDA-ARS?s Scientific Manuscript database
Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to several facto...
Experimental study of influence characteristics of flue gas fly ash on acid dew point
NASA Astrophysics Data System (ADS)
Song, Jinhui; Li, Jiahu; Wang, Shuai; Yuan, Hui; Ren, Zhongqiang
2017-12-01
Long-term operating experience with a large number of utility boilers shows that the measured acid dew point is generally lower than the estimated value. This is because the influence of CaO and MgO in flue gas fly ash is not considered in the acid dew point estimation formula. On the basis of previous studies, an experimental device for acid dew point measurement was designed and constructed, and the acid dew point was measured under different flue gas conditions. The results show that the CaO and MgO in the flue gas fly ash have an obvious influence on the acid dew point: the fly ash content is negatively correlated with the acid dew point temperature. At the same time, the acid dew point varies with the H2SO4 concentration in the flue gas and is positively correlated with it.
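Estimation formulas of the kind referred to above typically take a Verhoff-Banchero-type form in the partial pressures of water vapor and SO3. The coefficients below are the commonly quoted ones and should be treated as an assumption, not as the correlation used in this study:

```python
import math

def acid_dew_point_K(p_h2o_mmhg, p_so3_mmhg):
    """Sulfuric acid dew point (K) from a Verhoff-Banchero-type
    correlation; partial pressures in mmHg. Coefficients as commonly
    cited in the literature - treat as an assumption."""
    lw = math.log(p_h2o_mmhg)
    ls = math.log(p_so3_mmhg)
    # 1000/T = 2.276 - 0.02943*ln(pH2O) - 0.0858*ln(pSO3)
    #          + 0.0062*ln(pH2O)*ln(pSO3)
    inv_t = 2.276e-3 - 2.943e-5 * lw - 8.58e-5 * ls + 6.2e-6 * lw * ls
    return 1.0 / inv_t
```

Correlations of this form predict dew points that rise with SO3 (i.e., H2SO4) concentration, consistent with the trend reported above; the fly ash CaO/MgO effect is precisely what such formulas omit.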
On A Problem Of Propagation Of Shock Waves Generated By Explosive Volcanic Eruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gusev, V. A.; Sobissevitch, A. L.
2008-06-24
Interdisciplinary study of flows of matter and energy in geospheres has become one of the most significant advances in Earth sciences. It is carried out by means of direct quantitative estimations based on detailed analysis of geological and geophysical observations and experimental data. The present contribution is an interdisciplinary study of nonlinear acoustics and physical volcanology dedicated to shock wave propagation in a viscous and inhomogeneous medium. The equations governing the evolution of shock waves with an arbitrary initial profile and an arbitrary beam cross-section are obtained. For the case of a low-viscosity medium, an asymptotic solution for calculating the profile of a shock wave at an arbitrary point has been derived. The analytical solution of the problem of propagation of shock pulses from the atmosphere into a two-phase fluid-saturated geophysical medium is analysed. Quantitative estimations were carried out with respect to experimental results obtained in the course of real explosive volcanic eruptions.
Hatfield, Laura A.; Gutreuter, Steve; Boogaard, Michael A.; Carlin, Bradley P.
2011-01-01
Estimation of extreme quantal-response statistics, such as the concentration required to kill 99.9% of test subjects (LC99.9), remains a challenge in the presence of multiple covariates and complex study designs. Accurate and precise estimates of the LC99.9 for mixtures of toxicants are critical to ongoing control of a parasitic invasive species, the sea lamprey, in the Laurentian Great Lakes of North America. The toxicity of those chemicals is affected by local and temporal variations in water chemistry, which must be incorporated into the modeling. We develop multilevel empirical Bayes models for data from multiple laboratory studies. Our approach yields more accurate and precise estimation of the LC99.9 compared to alternative models considered. This study demonstrates that properly incorporating hierarchical structure in laboratory data yields better estimates of LC99.9 stream treatment values that are critical to larvae control in the field. In addition, out-of-sample prediction of the results of in situ tests reveals the presence of a latent seasonal effect not manifest in the laboratory studies, suggesting avenues for future study and illustrating the importance of dual consideration of both experimental and observational data.
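Under a standard logit dose-response model on log10 concentration, extreme quantiles such as the LC99.9 invert in closed form. This is a simplified, non-hierarchical stand-in for the multilevel empirical Bayes models above, with made-up coefficients:

```python
import math

def lc(p, intercept, slope):
    """Concentration at which a fraction p of subjects respond, under
    a logit dose-response model on log10 concentration:
        logit(p) = b0 + b1*log10(C)  =>  C = 10**((logit(p) - b0)/b1)."""
    logit = math.log(p / (1.0 - p))
    return 10.0 ** ((logit - intercept) / slope)

# Hypothetical fitted coefficients; the LC99.9 is lc(0.999, b0, b1).
lc999 = lc(0.999, -3.0, 2.0)
```

The hierarchical models in the study effectively let the intercept and slope vary with water chemistry across sites and seasons, which is what sharpens the LC99.9 estimates.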
A Statistical Guide to the Design of Deep Mutational Scanning Experiments.
Matuszewski, Sebastian; Hildebrandt, Marcel E; Ghenu, Ana-Hermina; Jensen, Jeffrey D; Bank, Claudia
2016-09-01
The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. Copyright © 2016 by the Genetics Society of America.
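The fitness of a mutant in a time-sampled bulk competition is conventionally estimated as the slope of the log mutant-to-wild-type ratio over time; a minimal sketch of that estimator (synthetic counts, not the paper's data or exact model):

```python
import math

def selection_coefficient(times, mut_counts, wt_counts):
    """OLS slope of ln(n_mut / n_wt) against time: the standard
    estimator of the selection coefficient in a bulk competition."""
    ys = [math.log(m / w) for m, w in zip(mut_counts, wt_counts)]
    n = len(times)
    mt, my = sum(times) / n, sum(ys) / n
    sxy = sum((t - mt) * (y - my) for t, y in zip(times, ys))
    sxx = sum((t - mt) ** 2 for t in times)
    return sxy / sxx
```

Because the estimate is a regression slope, its precision grows with the spread of sampled time points, which is why the guidelines above favor sampling more time points, clustered early and late, over deeper sequencing.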
2011-01-01
Background Our objective was to estimate the effect of various childbirth care packages on neonatal mortality due to intrapartum-related events (“birth asphyxia”) in term babies for use in the Lives Saved Tool (LiST). Methods We conducted a systematic literature review to identify studies or reviews of childbirth care packages as defined by United Nations norms (basic and comprehensive emergency obstetric care, skilled care at birth). We also reviewed Traditional Birth Attendant (TBA) training. Data were abstracted into standard tables and quality assessed by adapted GRADE criteria. For interventions with low quality evidence, but strong GRADE recommendation for implementation, an expert Delphi consensus process was conducted to estimate cause-specific mortality effects. Results We identified evidence for the effect on perinatal/neonatal mortality of emergency obstetric care packages: 9 studies (8 observational, 1 quasi-experimental), and for skilled childbirth care: 10 studies (8 observational, 2 quasi-experimental). Studies were of low quality, but the GRADE recommendation for implementation is strong. Our Delphi process included 21 experts representing all WHO regions and achieved consensus on the reduction of intrapartum-related neonatal deaths by comprehensive emergency obstetric care (85%), basic emergency obstetric care (40%), and skilled birth care (25%). For TBA training we identified 2 meta-analyses and 9 studies reporting mortality effects (3 cRCT, 1 quasi-experimental, 5 observational). There was substantial between-study heterogeneity and the overall quality of evidence was low. Because the GRADE recommendation for TBA training is conditional on the context and region, the effect was not estimated through a Delphi or included in the LiST tool. Conclusion Evidence quality is rated low, partly because of challenges in undertaking RCTs for obstetric interventions, which are considered standard of care. 
Additional challenges for evidence interpretation include varying definitions of obstetric packages and inconsistent measurement of mortality outcomes. Thus, the LiST effect estimates for skilled birth and emergency obstetric care were based on expert opinion. Using LiST modelling, universal coverage of comprehensive obstetric care could avert 591,000 intrapartum-related neonatal deaths each year. Investment in childbirth care packages should be a priority and accompanied by implementation research and further evaluation of intervention impact and cost. Funding This work was supported by the Bill and Melinda Gates Foundation through a grant to the US Fund for UNICEF, and to Saving Newborn Lives Save the Children, through Save the Children US. PMID:21501427
In-silico wear prediction for knee replacements--methodology and corroboration.
Strickland, M A; Taylor, M
2009-07-22
The capability to predict in-vivo wear of knee replacements is a valuable pre-clinical analysis tool for implant designers. Traditionally, time-consuming experimental tests provided the principal means of investigating wear. Today, computational models offer an alternative. However, the validity of these models has not been demonstrated across a range of designs and test conditions, and several different formulas are in contention for estimating wear rates, limiting confidence in the predictive power of these in-silico models. This study collates and retrospectively simulates a wide range of experimental wear tests using fast rigid-body computational models with extant wear prediction algorithms, to assess the performance of current in-silico wear prediction tools. The number of tests corroborated gives a broader, more general assessment of the performance of these wear-prediction tools, and provides better estimates of the wear 'constants' used in computational models. High-speed rigid-body modelling allows a range of alternative algorithms to be evaluated. Whilst most cross-shear (CS)-based models perform comparably, the 'A/A+B' wear model appears to offer the best predictive power amongst existing wear algorithms. However, the range and variability of experimental data leaves considerable uncertainty in the results. More experimental data with reduced variability and more detailed reporting of studies will be necessary to corroborate these models with greater confidence. With simulation times reduced to only a few minutes, these models are ideally suited to large-volume 'design of experiment' or probabilistic studies (which are essential if pre-clinical assessment tools are to begin addressing the degree of variation observed clinically and in explanted components).
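As context for the wear algorithms compared above, the common baseline is an Archard-type law: wear volume proportional to the sum of contact load times sliding distance. The cross-shear ('A/A+B'-style) models modulate the wear factor with the local cross-shear ratio, which is not reproduced here; this is only the generic baseline with hypothetical numbers:

```python
def archard_wear(k, loads_N, slides_m):
    """Baseline Archard-type estimate of wear volume per cycle:
        V = k * sum_i(P_i * s_i),
    summing load x sliding distance over discretized contact steps.
    k is the (material- and model-dependent) wear factor."""
    return k * sum(p * s for p, s in zip(loads_N, slides_m))
```

The 'wear constants' the study back-estimates from experimental tests play the role of k; the competing algorithms differ mainly in how k is made to depend on cross-shear.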
Adaptive torque estimation of robot joint with harmonic drive transmission
NASA Astrophysics Data System (ADS)
Shi, Zhiguo; Li, Yuankai; Liu, Guangjun
2017-11-01
Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.
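For orientation, a minimal scalar Kalman filter shows the optimal-gain update that RARKF extends with a redundant factor and robust/optimal mode switching; this sketch is not the authors' algorithm:

```python
def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter (random-walk state, direct
    measurement). q: process noise variance, r: measurement noise
    variance. Returns the filtered estimates."""
    x, p = x0, p0
    out = []
    for z in zs:
        p += q                 # predict: state variance grows by q
        k = p / (p + r)        # optimal Kalman gain
        x += k * (z - x)       # update with the innovation
        p *= (1.0 - k)         # posterior variance
        out.append(x)
    return out
```

In the RARKF setting, the gain is additionally self-tuned as the joint load varies (via a motor-current-dependent redundant factor), switching between this optimal behavior and a robust mode when the model error grows.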
An estimate of the amount of road in the staggered-setting system of clearcutting.
Roy R. Silen; H.J. Gratkowski
1953-01-01
One question frequently asked by foresters in the Douglas-fir region is: "How much land is taken out of forest production by logging roads and landings?" The final answer is not known, but a rough estimate recently prepared for a sizable portion of the H. J. Andrews Experimental Forest may be useful as a tentative figure. The experimental area is located...
ERIC Educational Resources Information Center
Fortson, Kenneth; Verbitsky-Savitz, Natalya; Kopa, Emma; Gleason, Philip
2012-01-01
Randomized controlled trials (RCTs) are widely considered to be the gold standard in evaluating the impacts of a social program. When an RCT is infeasible, researchers often estimate program impacts by comparing outcomes of program participants with those of a nonexperimental comparison group, adjusting for observable differences between the two…
A novel approach to neutron dosimetry.
Balmer, Matthew J I; Gamage, Kelum A A; Taylor, Graeme C
2016-11-01
Having been overlooked for many years, research is now starting to take into account the directional distribution of neutron workplace fields. Existing neutron dosimetry instrumentation does not account for this directional distribution, resulting in conservative estimates of dose in neutron workplace fields (by around a factor of 2, although this is heavily dependent on the type of field). This conservatism could influence epidemiological studies on the health effects of radiation exposure. This paper reports on the development of an instrument which can estimate the effective dose of a neutron field, accounting for both the direction and the energy distribution. A 6Li-loaded scintillator was used to perform neutron assays at a number of locations in a 20 × 20 × 17.5 cm3 water phantom. The variation in thermal and fast neutron response to different energies and field directions was exploited. The modeled response of the instrument to various neutron fields was used to train an artificial neural network (ANN) to learn the effective dose and ambient dose equivalent of these fields. All experimental data published in this work were measured at the National Physical Laboratory (UK). Experimental results were obtained for a number of radionuclide-source-based neutron fields to test the performance of the system. The results of experimental neutron assays at 25 locations in a water phantom were fed into the trained ANN. A correlation between neutron counting rates in the phantom and neutron fluence rates was found experimentally, providing dose rate estimates. A radionuclide source behind a shadow cone was used to create a more complex field in terms of energy and direction. For all fields, the resulting estimates of effective dose rate were within 45% of their calculated values, regardless of energy distribution or direction, for measurement times greater than 25 min. This work presents a novel, real-time approach to workplace neutron dosimetry.
It is believed that in the research presented in this paper, for the first time, a single instrument has been able to estimate effective dose.
Helgason, Benedikt; Viceconti, Marco; Rúnarsson, Tómas P; Brynjólfsson, Sigurður
2008-01-01
Pushout tests can be used to estimate the shear strength of the bone-implant interface, and numerous such experimental studies have been published in the literature. Despite this, researchers are still some way from developing accurate numerical models to simulate implant stability. In the present work, a specific experimental pushout study from the literature was simulated using two different bone-implant interface models. The implant was a porous-coated Ti-6Al-4V implant retrieved 4 weeks postoperatively from a dog model. The purpose was to find out which of the interface models could replicate the experimental results using physically meaningful input parameters. The results showed that a model based on partial bone ingrowth (ingrowth stability) is superior to an interface model based on friction and prestressing due to press fit (initial stability). Even though the present study is limited to a single experimental setup, the authors suggest that the presented methodology can be used to investigate implant stability with other experimental pushout models. This would eventually enhance the much-needed understanding of the mechanical response of the bone-implant interface and help to quantify how implant stability evolves with time.
Bekker, Cindy; Voogd, Eef; Fransman, Wouter; Vermeulen, Roel
2016-11-01
Control banding can be used as a first-tier assessment to control worker exposure to nano-objects and their aggregates and agglomerates (NOAA). In a second tier, more advanced modelling approaches are needed to produce quantitative exposure estimates. As no general quantitative nano-specific exposure models are currently available, this study evaluated the validity and applicability of a generic exposure assessment model (the Advanced REACH Tool, ART) for occupational exposure to NOAA. The predictive capability of ART for occupational exposure to NOAA was tested by calculating the relative bias and correlations (Pearson) between the model estimates and measured concentrations, using a dataset of 102 NOAA exposure measurements collected during experimental and workplace exposure studies. Moderate to (very) strong correlations between the ART estimates and measured concentrations were found. Estimates correlated better with measured concentration levels of dust (r = 0.76, P < 0.01) than of liquid aerosols (r = 0.51, P = 0.19). However, ART overestimated the measured NOAA concentrations for both the experimental and field measurements (by a factor of 2-127). Overestimation was highest at low concentrations and decreased with increasing concentration. Correlations seemed to be better for individual nanomaterials than for combined scenarios, indicating that nanomaterial-specific characteristics are not well captured within the mechanistic model of the ART. Although ART in its current state is not capable of estimating occupational exposure to NOAA, the strong correlations for the individual nanomaterials indicate that ART (and potentially other generic exposure models) could be extended or adapted for exposure to NOAA. Future studies investigating the potential to estimate exposure to NOAA should incorporate nanomaterial-specific characteristics more explicitly in their models. © The Author 2016.
Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Maximum-likelihood estimation of parameterized wavefronts from multifocal data
Sakamoto, Julia A.; Barrett, Harrison H.
2012-01-01
A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282
NASA Astrophysics Data System (ADS)
Jiménez-Martínez, J.; Molinero-Huguet, J.; Candela, L.
2009-04-01
Water requirements for different crop types, which depend on soil type and climate conditions, play an important role not only in efficient agricultural production but also in water resources management and the control of pollutants in drainage water. The key issue in attaining these objectives is irrigation efficiency. Application of computer codes for irrigation simulation constitutes a fast and inexpensive approach to studying optimal agricultural management practices. To simulate the daily water balance in the soil, vadose zone and aquifer, the VisualBALAN V. 2.0 code was applied to an experimental irrigated area characterized by its aridity. The test was carried out in three experimental plots for annual row crops (lettuce and melon), perennial vegetables (artichoke), and fruit trees (citrus) under common open-air agricultural practices from October 1999 to September 2008. Drip irrigation was used because of the scarcity of water resources and the need for water conservation. Water level change was monitored in the top unconfined aquifer for each experimental plot. Results of the water balance modelling show good agreement between observed and estimated water level values. For the study period, mean drainage values were 343 mm, 261 mm and 205 mm for lettuce and melon, artichoke and citrus, respectively. Assessment of water use efficiency was based on the IE indicator proposed by the ASCE Task Committee. For the modelled period, water use efficiency was estimated as 73, 71 and 78% of the applied dose (irrigation + precipitation) for lettuce and melon, artichoke and citrus, respectively.
McCaffrey, Daniel; Ramchand, Rajeev; Hunter, Sarah B.; Suttorp, Marika
2012-01-01
We develop a new tool for assessing the sensitivity of findings on treatment effectiveness to differential follow-up rates in the two treatment conditions being compared. The method censors the group with the higher response rate to create a synthetic respondent group that is then compared with the observed cases in the other condition to estimate a treatment effect. Censoring is done under various assumptions about the strength of the relationship between follow-up and outcomes, to determine whether informative differential dropout could alter inferences relative to estimates from models that assume the data are missing at random. The method provides an intuitive measure of the strength of the association between outcomes and dropout that would be required to alter inferences about treatment effects. Our approach is motivated by translational research in which treatments found to be effective under experimental conditions are tested in standard treatment conditions. In such applications, follow-up rates in the experimental setting are likely to be substantially higher than in the standard setting, especially when observational data are used in the evaluation. We test the method on a case-study evaluation of the effectiveness of an evidence-supported adolescent substance abuse treatment program (Motivational Enhancement Therapy/Cognitive Behavioral Therapy-5 [MET/CBT-5]) delivered by community-based treatment providers, relative to its performance in a controlled research trial. In this case study, follow-up rates in the community-based settings were extremely low (54%) compared with the experimental setting (95%), giving rise to concerns about non-ignorable dropout. PMID:22956890
NASA Astrophysics Data System (ADS)
Talaghat, Mohammad Reza; Jokar, Seyyed Mohammad
2017-12-01
This article offers a study on the estimation of heat transfer parameters (heat transfer coefficient and thermal diffusivity) using analytical solutions and experimental data for regular geometric shapes (infinite slab, infinite cylinder, and sphere). Analytical solutions are broadly used in the experimental determination of these parameters. Here, the method of Finite Integral Transform (FIT) was used to solve the governing differential equations. The temperature change at the centerline of each regular shape was recorded to determine both the thermal diffusivity and the heat transfer coefficient. Aluminum and brass were used for testing. Experiments were performed under different conditions, such as in a highly agitated water medium (T = 52 °C) and in air (T = 25 °C). Then, with the known slope of the temperature-ratio-versus-time curve and the thickness of the slab or the radius of the cylindrical or spherical materials, the thermal diffusivity and heat transfer coefficient may be determined. According to the method presented in this study, the estimated thermal diffusivities of aluminum and brass are 8.395 × 10⁻⁵ and 3.42 × 10⁻⁵ m²/s for a slab, 8.367 × 10⁻⁵ and 3.41 × 10⁻⁵ m²/s for a cylindrical rod, and 8.385 × 10⁻⁵ and 3.40 × 10⁻⁵ m²/s for a sphere, respectively. The results show close agreement between the values estimated here and those already published in the literature. The TAAD% is 0.42 and 0.39 for the thermal diffusivity of aluminum and brass, respectively.
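The slope method described above can be sketched numerically. The snippet below is a minimal illustration for an infinite slab with a large Biot number (first eigenvalue lam1 = pi/2), with invented values for the half-thickness and diffusivity rather than the article's data: it recovers the diffusivity from the slope of ln(temperature ratio) versus time.

```python
import numpy as np

# One-term transient-conduction solution for an infinite slab with a large
# Biot number: theta(t) ~ C1 * exp(-lam1**2 * alpha * t / L**2), so the
# slope of ln(theta) versus t gives alpha directly.
alpha_true = 8.4e-5      # m^2/s, aluminum-like value (illustrative)
L = 0.05                 # slab half-thickness in m (assumed)
lam1 = np.pi / 2.0       # first eigenvalue for Biot -> infinity

t = np.linspace(10.0, 300.0, 15)                           # time samples, s
theta = 1.27 * np.exp(-lam1**2 * alpha_true * t / L**2)    # centerline ratio

slope, _ = np.polyfit(t, np.log(theta), 1)
alpha_hat = -slope * L**2 / lam1**2
print(f"recovered alpha = {alpha_hat:.3e} m^2/s")
```

With noiseless synthetic data the slope is recovered exactly; in practice the slope would come from a straight-line fit to the measured centerline temperature history.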
Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior
NASA Technical Reports Server (NTRS)
Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.
2017-01-01
A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
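As a much-simplified illustration of the parameter-filter half of such a scheme, the sketch below tracks a single time-varying gain, modeled as a random walk, with a scalar Kalman filter. All signals and noise levels are invented for the example; the paper's filter additionally estimates equalization states, neuromuscular parameters, and time delay.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 400

# A single time-varying gain to track (constant, then a ramp) -- a toy
# stand-in for one slowly varying control parameter.
k_true = np.concatenate([np.full(T // 2, 1.0),
                         np.linspace(1.0, 2.0, T - T // 2)])
u = rng.standard_normal(T)                       # known input signal
y = k_true * u + 0.1 * rng.standard_normal(T)    # noisy measurements

# Scalar Kalman filter with a random-walk parameter model:
#   k[t] = k[t-1] + w,   y[t] = k[t] * u[t] + v
q, r = 1e-4, 0.1**2
k_hat, P = 0.0, 1.0
est = np.empty(T)
for t in range(T):
    P += q                          # predict: random-walk variance growth
    H = u[t]                        # measurement Jacobian
    K = P * H / (H * P * H + r)     # Kalman gain
    k_hat += K * (y[t] - H * k_hat)
    P *= 1.0 - K * H
    est[t] = k_hat

print(f"estimate at mid-run: {est[T // 2 - 1]:.2f} (true 1.00)")
print(f"estimate at end:     {est[-1]:.2f} (true 2.00)")
```

The process-noise variance q plays the same tuning role the abstract describes: larger q tracks faster parameter changes at the cost of noisier estimates.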
NASA Technical Reports Server (NTRS)
Oliver, W. R.
1980-01-01
The development of an advanced technology high lift system for an energy efficient transport incorporating a high aspect ratio supercritical wing is described. This development is based on the results of trade studies to select the high lift system, analysis techniques utilized to design the high lift system, and results of a wind tunnel test program. The program included the first experimental low speed, high Reynolds number wind tunnel test for this class of aircraft. The experimental results include the effects on low speed aerodynamic characteristics of various leading and trailing edge devices, nacelles and pylons, aileron, spoilers, and Mach and Reynolds numbers. Results are discussed and compared with the experimental data and the various aerodynamic characteristics are estimated.
NASA Astrophysics Data System (ADS)
Joewondo, N.; Zhang, Y.; Prasad, M.
2016-12-01
Sequestration of carbon dioxide in shale has become a subject of interest as a result of technological advances in gas shale production. The process involves injecting CO2 to enhance methane recovery and storing CO2 in depleted shale reservoirs at elevated pressures. To better understand both shale production and carbon storage, one must study the physical phenomena acting at the different scales that control in situ fluid flow. Shale rocks are complex systems with heterogeneous structures and compositions. Their pore structures are at the nanometer scale and provide significant gas storage capacity and surface area. Adsorption is prominent in nanometer-sized pores because of the strong attraction between gas molecules and the pore surfaces. Recent studies attempt to correlate storage capacity with rock composition, particularly clay content. This study, however, focuses on the supercritical adsorption of CO2 on a pure clay sample. We have built an in-house manometric experimental setup that can be used to study both the equilibrium and the kinetics of adsorption. The experiments are conducted under isothermal conditions. The study of adsorption equilibrium gives insight into the storage capacity of these systems, and the study of adsorption kinetics is essential for understanding the resistance to fluid transport. The diffusion coefficient, which can be estimated from the dynamic experimental results, is a parameter that quantifies diffusion mobility and is affected by many factors, including pressure and temperature. The first part of this paper briefly discusses both the equilibrium and the kinetics of CO2 adsorption on illite. Both static and dynamic measurements on the system are compared to theoretical models available in the literature to estimate the storage capacity and the diffusion time constants.
The main part of the paper discusses the effect of varying temperature on the static and dynamic experimental results.
A comparative simulation study of AR(1) estimators in short time series.
Krone, Tanja; Albers, Casper J; Timmerman, Marieke E
2017-01-01
Various estimators of the autoregressive model exist. We compare their performance in estimating the autocorrelation in short time series. In Study 1, under correct model specification, we compare the frequentist r1 estimator, the C-statistic, the ordinary least squares estimator (OLS) and the maximum likelihood estimator (MLE), and a Bayesian method with flat (Bf) and symmetrized reference (Bsr) priors. In a completely crossed experimental design we vary the length of the time series (T = 10, 25, 40, 50 and 100) and the autocorrelation (from -0.90 to 0.90 in steps of 0.10). The results show the lowest bias for Bsr and the lowest variability for r1. Power across conditions is highest for Bsr and OLS. For T = 10, the absolute performance of all methods is poor, as expected. In Study 2, we study the robustness of the methods to misspecification by generating the data according to an ARMA(1,1) model but still analysing them with an AR(1) model. We use the two methods with the lowest bias in this study, i.e., Bsr and MLE. The bias grows as the non-modelled moving-average parameter becomes larger. Both the variability and the power depend on the non-modelled parameter. The differences between the two estimation methods are negligible for all measurements.
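The small-sample bias at issue can be reproduced in miniature. The sketch below, a simplified stand-in for the study's full design, compares the r1 and OLS estimators on simulated AR(1) series with T = 25 and a true autocorrelation of 0.5 (the Bayesian and C-statistic estimators are omitted); both estimators show the well-known downward bias, roughly -(1 + 3*phi)/T.

```python
import numpy as np

def ar1_r1(x):
    # Classical lag-1 sample autocorrelation estimator r1.
    xc = x - x.mean()
    return np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc)

def ar1_ols(x):
    # OLS regression of x[t] on x[t-1], with intercept.
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    beta, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return beta[1]

rng = np.random.default_rng(0)
phi, T, reps = 0.5, 25, 2000
bias_r1 = bias_ols = 0.0
for _ in range(reps):
    x = np.zeros(T + 50)
    e = rng.standard_normal(T + 50)
    for t in range(1, T + 50):
        x[t] = phi * x[t - 1] + e[t]
    x = x[50:]                      # drop burn-in
    bias_r1 += ar1_r1(x) - phi
    bias_ols += ar1_ols(x) - phi

print(f"mean bias, r1:  {bias_r1 / reps:+.3f}")
print(f"mean bias, OLS: {bias_ols / reps:+.3f}")
```

Both mean biases come out negative, illustrating why short series (especially T = 10 in the study) are hard for all estimators.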
Network dynamics of social influence in the wisdom of crowds.
Becker, Joshua; Brackbill, Devon; Centola, Damon
2017-06-27
A longstanding problem in the social, biological, and computational sciences is to determine how groups of distributed individuals can form intelligent collective judgments. Since Galton's discovery of the "wisdom of crowds" [Galton F (1907) Nature 75:450-451], theories of collective intelligence have suggested that the accuracy of group judgments requires individuals to be either independent, with uncorrelated beliefs, or diverse, with negatively correlated beliefs [Page S (2008) The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies]. Previous experimental studies have supported this view by arguing that social influence undermines the wisdom of crowds. These results showed that individuals' estimates became more similar when subjects observed each other's beliefs, thereby reducing diversity without a corresponding increase in group accuracy [Lorenz J, Rauhut H, Schweitzer F, Helbing D (2011) Proc Natl Acad Sci USA 108:9020-9025]. By contrast, we show general network conditions under which social influence improves the accuracy of group estimates, even as individual beliefs become more similar. We present theoretical predictions and experimental results showing that, in decentralized communication networks, group estimates become reliably more accurate as a result of information exchange. We further show that the dynamics of group accuracy change with network structure. In centralized networks, where the influence of central individuals dominates the collective estimation process, group estimates become more likely to increase in error.
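A minimal DeGroot-style averaging simulation, not the authors' experimental paradigm, can illustrate the decentralized-network claim: when agents on a ring repeatedly average their estimate with their neighbours', individual errors shrink while the group mean (and hence its accuracy) is preserved, because the equal-weight update is doubly stochastic. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
truth = 100.0
n = 40
est = truth + rng.normal(0.0, 30.0, n)   # independent noisy initial estimates

initial_individual_error = np.abs(est - truth).mean()
initial_group_error = abs(est.mean() - truth)

# Decentralized ring network: each node repeatedly averages its own
# estimate with those of its two neighbours (DeGroot-style updating).
for _ in range(10):
    est = (est + np.roll(est, 1) + np.roll(est, -1)) / 3.0

final_individual_error = np.abs(est - truth).mean()
final_group_error = abs(est.mean() - truth)
print(f"mean individual error: {initial_individual_error:.1f} -> {final_individual_error:.1f}")
print(f"group-mean error:      {initial_group_error:.2f} -> {final_group_error:.2f}")
```

In a centralized topology the update matrix would no longer be doubly stochastic, so the group mean would drift toward the central node's belief, consistent with the increased error the authors report there.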
Statistical strategies for averaging EC50 from multiple dose-response experiments.
Jiang, Xiaoqi; Kopp-Schneider, Annette
2015-11-01
In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring EC50 estimates to be averaged over a series of experiments. Two statistical strategies, mixed-effects modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments while accounting for the variability within and among experiments. We investigated these two strategies in two common cases of multiple dose-response experiments: (a) complete and explicit dose-response relationships are observed in all experiments, and (b) they are observed only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method for averaging EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis of EC50 estimates. If only two experiments contain complete dose-response information, the mixed-effects model approach is suggested. We also provide a web application for non-statisticians to implement the proposed meta-analysis strategy for averaging EC50 estimates from multiple dose-response experiments.
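As a sketch of the meta-analysis strategy in case (a), the snippet below computes a fixed-effect (inverse-variance) average of log-EC50 estimates from three hypothetical experiments. The estimates and standard errors are invented, and the paper's screening step is omitted.

```python
import math

# Hypothetical per-experiment EC50 estimates (in µM) with standard errors
# on the log scale -- invented numbers, not values from the study.
log_ec50 = [math.log(12.0), math.log(9.5), math.log(14.2)]
se = [0.20, 0.35, 0.25]

# Fixed-effect (inverse-variance) meta-analytic average on the log scale.
w = [1.0 / s**2 for s in se]
pooled_log = sum(wi * yi for wi, yi in zip(w, log_ec50)) / sum(w)
pooled_se = (1.0 / sum(w)) ** 0.5

ec50_avg = math.exp(pooled_log)
ci_low = math.exp(pooled_log - 1.96 * pooled_se)
ci_high = math.exp(pooled_log + 1.96 * pooled_se)
print(f"pooled EC50 = {ec50_avg:.1f} µM (95% CI {ci_low:.1f}-{ci_high:.1f})")
```

Averaging on the log scale keeps the pooled estimate and its confidence interval strictly positive, which is why EC50 values are usually combined this way.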
Xu, Ming; Lei, Zhipeng; Yang, James
2015-01-01
N95 filtering facepiece respirator (FFR) dead space is an important factor in respirator design. The dead space refers to the cavity between the internal surface of the FFR and the wearer's facial surface. This article presents a novel method to estimate the dead space volume of FFRs, together with experimental validation. In this study, six FFRs and five headforms (small, medium, large, long/narrow, and short/wide) are used in various FFR-headform combinations. Microsoft Kinect sensors (Microsoft Corporation, Redmond, WA) are used to scan the headforms without respirators and then with the FFRs donned. The FFR dead space is formed in geometric modeling software, and its volume is then obtained through LS-DYNA (Livermore Software Technology Corporation, Livermore, CA). In the experimental validation, water is used to measure the dead space. The simulated and experimental dead space volumes are 107.5-167.5 mL and 98.4-165.7 mL, respectively. Linear regression analysis is conducted to correlate the results from the Kinect and water measurements, yielding R(2) = 0.85.
Bröder, Arndt; Malejka, Simone
2017-07-01
The experimental manipulation of response biases in recognition-memory tests is an important means of testing recognition models and estimating their parameters. The textbook manipulations for binary-response formats either vary the payoff scheme or the base rate of targets in the recognition test, with the latter being the more frequently applied procedure. However, some published studies have reverted to implying different base rates by instruction rather than actually changing them. Aside from unnecessarily deceiving participants, this procedure may lead to cognitive conflicts that prompt response strategies unknown to the experimenter. To test our objection, implied base rates were compared to actual base rates in a recognition experiment followed by a post-experimental interview to assess participants' response strategies. The behavioural data show that recognition-memory performance was estimated to be lower in the implied base-rate condition. The interview data demonstrate that participants used various second-order response strategies that jeopardise the interpretability of the recognition data. We thus advise researchers against substituting actual base rates with implied base rates.
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2012-01-01
Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
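As a toy illustration of the estimation machinery (not the paper's multi-compartment model), the sketch below fits the single decay rate of a hypothetical one-phase exponential viral-load curve with random-walk Metropolis sampling; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical single-phase decay V(t) = V0 * exp(-d * t): a one-parameter
# stand-in for the paper's ODE model, observed as noisy log viral loads.
d_true, V0, sigma = 0.5, 1.0e5, 0.2
t = np.linspace(0.0, 10.0, 25)
log_v = np.log(V0) - d_true * t + rng.normal(0.0, sigma, t.size)

def log_post(d):
    # Gaussian likelihood on log viral load, flat prior on d > 0.
    if d <= 0.0:
        return -np.inf
    resid = log_v - (np.log(V0) - d * t)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis chain for the decay rate d.
d, lp = 0.3, log_post(0.3)
samples = []
for _ in range(5000):
    prop = d + rng.normal(0.0, 0.02)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        d, lp = prop, lp_prop
    samples.append(d)

post = np.array(samples[1000:])   # discard burn-in
print(f"posterior mean of d: {post.mean():.3f} (true value {d_true})")
```

The same accept/reject loop generalizes to the full parameter vector of an ODE model; the expensive step then becomes solving the ODEs inside the likelihood.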
Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.
Ette, E I; Howie, C A; Kelman, A W; Whiting, B
1995-05-01
A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimating population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide a basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on three- and four-time-point designs was evaluated in terms of the percent prediction error, design number, individual and joint confidence interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while inter-animal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three- and four-time-point designs, respectively, was not critical to the efficiency of the overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
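The simulation setup can be sketched as follows: a one-compartment IV bolus model with a three-time-point destructive design, inter-animal variability on clearance, residual error, and a naive pooled log-linear fit. All parameter values are invented, and the pooled fit is a simplified stand-in for the population methods used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
dose, V_true, CL_true = 100.0, 10.0, 2.0    # invented values
times = np.array([0.25, 1.0, 4.0])          # a three-time-point design
n_per_time = 4                              # animals sacrificed per time

# Destructive sampling: each observation comes from a different animal,
# with inter-animal variability on clearance plus residual error.
obs_t, obs_c = [], []
for t in times:
    for _ in range(n_per_time):
        cl_i = CL_true * np.exp(rng.normal(0.0, 0.2))   # ~20% CV
        c = dose / V_true * np.exp(-cl_i / V_true * t)
        obs_t.append(t)
        obs_c.append(c * np.exp(rng.normal(0.0, 0.1)))  # residual error

# Naive pooled log-linear fit: ln C = ln(dose/V) - (CL/V) * t.
slope, intercept = np.polyfit(obs_t, np.log(obs_c), 1)
V_hat = dose / np.exp(intercept)
CL_hat = -slope * V_hat
print(f"V estimate:  {V_hat:.2f} (true {V_true})")
print(f"CL estimate: {CL_hat:.2f} (true {CL_true})")
```

Repeating this loop over many simulated studies, and over different placements of the time points, gives exactly the kind of percent-prediction-error comparison the abstract describes.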
Effects of a growth check on daily age estimates of age-0 alligator gar
Snow, Richard A.; Long, James M.
2016-01-01
Accurate age and growth information is essential for a complete knowledge of life history, growth rates, age at sexual maturity, and average life span in fishes. Alligator gar are becoming increasingly managed throughout their range, and because this species spawns in backwater flooded areas, their offspring are prone to stranding in areas with limited prey, potentially affecting their growth. Because fish growth is tightly linked with otolith growth and annulus formation, the ability to discern marks not indicative of annuli (age checks) in alligator gar would give managers some insight when estimating ages. Previous studies have suggested that checks are often present prior to the first annulus in otoliths of alligator gar, affecting age estimates. We investigated check formation in otoliths of alligator gar in relation to growth and food availability. Sixteen age-0 alligator gar were marked with oxytetracycline (OTC) to give a reference point and divided equally into two groups: a control group with abundant prey and an experimental group with limited prey. The experimental group was given 2 g of food per week for 20 days and then given the same prey availability as the control group for the next 20 days. After 40 days, the gar were measured, sacrificed, and their sagittae removed to determine if checks were present. Checks were visible on 14 of the 16 otoliths in the experimental group, associated with low growth during the first 20 days when prey was limited and accelerated growth after prey availability was increased. No checks were observed on otoliths of the control group, where growth and prey availability were consistent. Age estimates of fish in the control group were more accurate than those in the experimental group, showing that fish growth as a function of prey availability likely induced the checks by compressing daily ring formation.
NASA Astrophysics Data System (ADS)
Termini, Donatella
2013-04-01
Recent catastrophic events due to intense rainfall have mobilized large amounts of sediment, causing extensive damage over vast areas. These events have highlighted that debris-flow runout estimation is of crucial importance for delineating potentially hazardous areas and for making reliable assessments of the level of risk of a territory. Especially in recent years, several studies have been conducted to define predictive models. However, existing runout estimation methods require input parameters that can be difficult to estimate. Recent experimental research has also advanced understanding of the physics of debris flows, but most experimental studies analyze the basic kinematic conditions that determine the evolution of the phenomenon. An experimental program was recently conducted at the Hydraulics Laboratory of the Department of Civil, Environmental, Aerospace and Materials Engineering (DICAM), University of Palermo (Italy). The experiments, carried out in a purpose-built laboratory flume, were planned to evaluate the influence of different geometrical parameters (such as the slope and the geometrical characteristics of the confluences to the main channel) on the propagation and deposition of the debris flow. Thus, the aim of the present work is to contribute to defining input parameters for runout estimation by numerical modeling. The propagation phenomenon is analyzed for different concentrations of solid material. Particular attention is devoted to identifying the stopping distance of the debris flow and the parameters involved (volume, angle of deposition, type of material) in the empirical predictive equations available in the literature (Rickenmann, 1999; Bathurst et al., 1997). Bathurst J.C., Burton A., Ward T.J. 1997. Debris flow run-out and landslide sediment delivery model tests. Journal of Hydraulic Engineering, ASCE, 123(5), 419-429. Rickenmann D. 1999. Empirical relationships for debris flows. Natural Hazards, 19, 47-77.
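As a rough illustration of the empirical approach discussed above, the widely cited Rickenmann (1999) relation for total travel distance, L = 1.9 V^0.16 H^0.83, can be sketched as follows; the event volume and elevation drop are hypothetical numbers, not data from these experiments.

```python
def rickenmann_runout(volume_m3, elev_drop_m):
    """Empirical total travel distance L = 1.9 * V**0.16 * H**0.83
    (Rickenmann, 1999): V is the debris-flow volume (m^3) and H the
    elevation difference between starting point and deposit (m)."""
    return 1.9 * volume_m3 ** 0.16 * elev_drop_m ** 0.83

# Hypothetical event: 10,000 m^3 mobilized over a 500 m elevation drop.
runout = rickenmann_runout(10_000.0, 500.0)
print(f"estimated travel distance: {runout:.0f} m")
```

Such single-equation estimates are exactly the kind of prediction the flume experiments aim to constrain with better input parameters.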
Kapelner, Adam; Krieger, Abba; Blanford, William J
2016-10-14
When measuring Henry's law constants (kH) using the phase ratio variation (PRV) method via headspace gas chromatography (GC), the value of kH of the compound under investigation is calculated from the ratio of the slope to the intercept of a linear regression of the inverse GC response versus the ratio of gas to liquid volumes in a series of vials drawn from the same parent solution. Thus, an experimenter collects measurements consisting of the independent variable (the gas/liquid volume ratio) and the dependent variable (the inverse GC peak area). A review of the literature found that the common design is a simple uniform spacing of liquid volumes. We present an optimal experimental design that estimates kH with minimum error and provides multiple means of building confidence intervals for such estimates. We illustrate the performance improvements of our design with an example measuring kH for naphthalene in aqueous solution, as well as simulations of previous studies. Our designs are most applicable after a trial run defines the linear GC response and the linear phase-ratio region (where the PRV method is suitable), after which a practitioner can collect measurements in bulk. The designs can be easily computed using our open-source software optDesignSlopeInt, an R package on CRAN. Copyright © 2016 Elsevier B.V. All rights reserved.
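A minimal sketch of the PRV calculation described above, using synthetic (not measured) data and NumPy's least-squares polynomial fit: the regression of the inverse GC response on the gas/liquid volume ratio yields kH as the slope/intercept ratio.

```python
import numpy as np

# Hypothetical PRV series: gas/liquid volume ratio (beta) for vials drawn
# from one parent solution, and the corresponding inverse GC peak area.
beta = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # V_gas / V_liquid
inv_area = 1.0e-6 * (1.0 + 0.05 * beta)         # synthetic linear response

# PRV: regress the inverse response on beta; kH is slope over intercept.
slope, intercept = np.polyfit(beta, inv_area, 1)
k_H = slope / intercept
print(f"estimated kH (dimensionless) = {k_H:.3f}")
```

With noisy real data the uncertainty of this ratio is what the optimal design in the paper seeks to minimize.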
Richards, V. M.; Dai, W.
2014-01-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-04-01
Model-based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation-based methods to preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse, as well as its first- and second-order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated to test the functionality of the proposed method, which was then applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. The promising results endorse the proposed method for future real-time applications.
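For comparison, a conventional least-squares fit of the bi-exponential pulse model (not the article's three-sample integral-equation estimator) can be sketched with SciPy; the pulse parameters, time scale, and noise level below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, tau_r, tau_d):
    """Bi-exponential pulse: amplitude A, rise constant tau_r, decay tau_d."""
    return A * (np.exp(-t / tau_d) - np.exp(-t / tau_r))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 500)                 # arbitrary time units
true_params = (1.0, 0.05, 1.0)                 # A, tau_r, tau_d
pulse = biexp(t, *true_params) + rng.normal(0.0, 0.005, t.size)  # noisy pulse

popt, _ = curve_fit(biexp, t, pulse, p0=(0.8, 0.1, 0.8))
A_hat, tau_r_hat, tau_d_hat = popt
print(tau_r_hat, tau_d_hat)
```

An iterative fit like this is accurate but relatively slow, which is the motivation for closed-form estimators based on a few samples and integrals of the pulse.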
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-05
... research; quasi-experimental research; and other appropriate methods. The program of research and... designs such as quasi-experimental, single- subject, qualitative, and experimental research. This research.... Figlio, D.N., Rush, M. & Yin, L. (2010). Is it live or is it Internet? Experimental estimates of the...
Zhou, Y; Jenkins, M E; Naish, M D; Trejos, A L
2016-08-01
The design of a tremor estimator is an important part of designing mechanical tremor suppression orthoses. A number of tremor estimators have been developed and applied under the assumption that tremor is a mono-frequency signal. However, recent experimental studies have shown that Parkinsonian tremor consists of multiple frequencies, and that the second and third harmonics make a large contribution to the tremor. Thus, current estimators may have limited performance in estimating the tremor harmonics. In this paper, a high-order tremor estimation algorithm is proposed and compared with its lower-order counterpart and a widely used estimator, the Weighted-frequency Fourier Linear Combiner (WFLC), using 18 Parkinsonian tremor data sets. The results show that the proposed estimator outperforms both its lower-order counterpart and the WFLC. The percentage estimation accuracy of the proposed estimator is 85±2.9%, an average improvement of 13% over the lower-order counterpart. The proposed algorithm holds promise for use in wearable tremor suppression devices.
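The harmonic-tracking idea can be illustrated with a Fourier linear combiner driven by an LMS update; this sketch assumes a known fundamental frequency (the WFLC additionally adapts the frequency estimate), and the 5 Hz tremor signal with second and third harmonics is synthetic.

```python
import numpy as np

def flc_harmonics(y, f0, fs, n_harm=3, mu=0.01):
    """LMS Fourier linear combiner tracking a tremor signal and its harmonics.
    y: measured signal; f0: fundamental (Hz); fs: sample rate (Hz)."""
    w = np.zeros(2 * n_harm)            # sine/cosine weights per harmonic
    y_hat = np.zeros(y.size)
    harmonics = np.arange(1, n_harm + 1)
    for k in range(y.size):
        phases = 2.0 * np.pi * f0 * harmonics * k / fs
        x = np.concatenate([np.sin(phases), np.cos(phases)])  # regressor
        y_hat[k] = w @ x
        w += 2.0 * mu * (y[k] - y_hat[k]) * x                 # LMS update
    return y_hat

fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
# Synthetic Parkinsonian-like tremor: 5 Hz fundamental plus 2nd/3rd harmonics.
tremor = (np.sin(2*np.pi*5*t) + 0.4*np.sin(2*np.pi*10*t)
          + 0.2*np.sin(2*np.pi*15*t))
est = flc_harmonics(tremor, f0=5.0, fs=fs, n_harm=3)
err = np.sqrt(np.mean((tremor[500:] - est[500:]) ** 2))  # RMSE after settling
print(err)
```

Truncating `n_harm` to 1 reproduces the mono-frequency assumption criticized above and leaves the harmonic content unmodeled.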
Hade, Erinn M; Murray, David M; Pennell, Michael L; Rhoda, Dale; Paskett, Electra D; Champion, Victoria L; Crabtree, Benjamin F; Dietrich, Allen; Dignan, Mark B; Farmer, Melissa; Fenton, Joshua J; Flocke, Susan; Hiatt, Robert A; Hudson, Shawna V; Mitchell, Michael; Monahan, Patrick; Shariff-Marco, Salma; Slone, Stacey L; Stange, Kurt; Stewart, Susan L; Strickland, Pamela A Ohman
2010-01-01
Screening has become one of our best tools for early detection and prevention of cancer. The group-randomized trial (GRT) is the most rigorous experimental design for evaluating multilevel interventions. However, identifying the proper sample size for a group-randomized trial requires reliable estimates of intraclass correlation (ICC) for screening outcomes, which are not available to researchers. We present crude and adjusted ICC estimates for cancer screening outcomes for various levels of aggregation (physician, clinic, and county) and provide an example of how these ICC estimates may be used in the design of a future trial. Investigators working in the area of cancer screening were contacted and asked to provide crude and adjusted ICC estimates using the analysis of variance method estimator. Of the 29 investigators identified, estimates were obtained from 10 who had relevant data. ICC estimates were calculated from 13 different studies, with more than half of the studies collecting information on colorectal screening. In the majority of cases, ICC estimates could be adjusted for age, education, and other demographic characteristics, leading to a reduction in the ICC. ICC estimates varied considerably by cancer site and level of aggregation of the groups. Previously, only two articles had published ICCs for cancer screening outcomes. We have compiled more than 130 crude and adjusted ICC estimates covering breast, cervical, colon, and prostate screening and have detailed them by level of aggregation, screening measure, and study characteristics. We have also demonstrated their use in planning a future trial and the need for evaluation of the proposed interval estimator for binary outcomes under conditions typically seen in GRTs.
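A small sketch of how such ICC estimates feed into GRT planning, using the standard design effect 1 + (m - 1)·ICC; the per-arm sample size, group size, and ICC value below are hypothetical.

```python
import math

def design_effect(m, icc):
    """Variance inflation from randomizing groups of m members with a given ICC."""
    return 1.0 + (m - 1.0) * icc

def groups_needed(n_individual, m, icc):
    """Groups per arm, given the sample size n_individual that would be
    required under individual randomization and m members per group."""
    return math.ceil(n_individual * design_effect(m, icc) / m)

# Hypothetical planning numbers: 400 per arm unadjusted, 50 patients per
# clinic, ICC = 0.02 for a colorectal screening outcome.
print(design_effect(50, 0.02))        # variance inflation factor
print(groups_needed(400, 50, 0.02))   # clinics required per arm
```

Even a modest ICC nearly doubles the required sample here, which is why reliable ICC estimates are central to planning these trials.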
Damping in aerospace composite materials
NASA Astrophysics Data System (ADS)
Agneni, A.; Balis Crema, L.; Castellani, A.
Experimental results are presented on specimens of carbon and Kevlar fibers in epoxy resin, materials used in many aerospace structures (control surfaces and wings in aircraft, large antennas in spacecraft, etc.). Some experimental methods of estimating damping ratios are first reviewed, either in the time domain or in the frequency domain. Some damping factor estimates from experimental tests are then shown; in order to evaluate the effects of the aerospace environment, damping factors have been obtained over a typical temperature range, namely between +120°C and -120°C, and over the pressure range from room pressure to 10^-6 torr. Finally, a theoretical approach for predicting the bounds of the damping coefficients is shown, and prediction data are compared with experimental results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polder, M.D.; Hulzebos, E.M.; Jager, D.T.
1998-01-01
This literature study was performed to support the implementation of two models in a risk assessment system for the evaluation of chemicals and their risk to human health and the environment. One of the exposure pathways for humans and cattle is the uptake of chemicals by plants. In this risk assessment system, the transfer of gaseous organic substances from air to plants as modeled by Riederer is included. A similar model with a more refined approach, including dilution by growth, was proposed by Trapp and Matthies and implemented in the European version of this risk assessment system (EUSES). In this study both models are evaluated by comparison with experimental data on leaf/air partition coefficients found in the literature. For herbaceous plants both models give good estimates of the leaf/air partition coefficient up to 10^7, with deviations for most substances within a factor of five. For the azalea and spruce group the fit between experimental BCF values and the calculated model values is less adequate. For substances for which Riederer estimates a leaf/air partition coefficient above 10^7, the approach of Trapp and Matthies seems more adequate; however, few data were available.
NASA Astrophysics Data System (ADS)
Cara, Javier
2016-05-01
Modal parameters comprise natural frequencies, damping ratios, modal vectors and modal masses. In a theoretical framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (as in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations needed to compute natural frequencies, damping ratios and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not generated much interest in the technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
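The eigenvalue route from an identified state-space model to natural frequencies and damping ratios (modal masses aside) can be sketched as follows, checked against a single-degree-of-freedom oscillator with known parameters.

```python
import numpy as np
from scipy.linalg import expm

def modal_from_state_space(Ad, dt):
    """Natural frequencies (Hz) and damping ratios from the discrete-time
    state matrix Ad of an identified state-space model."""
    mu = np.linalg.eigvals(Ad)
    lam = np.log(mu) / dt                # continuous-time poles
    wn = np.abs(lam)                     # natural frequencies, rad/s
    zeta = -lam.real / wn                # damping ratios
    keep = lam.imag > 0                  # one pole per conjugate pair
    return wn[keep] / (2.0 * np.pi), zeta[keep]

# Check on a 1-DOF oscillator: fn = 2 Hz, zeta = 0.05, sampled at 100 Hz.
wn, z, dt = 2.0 * np.pi * 2.0, 0.05, 0.01
Ac = np.array([[0.0, 1.0], [-wn**2, -2.0 * z * wn]])
Ad = expm(Ac * dt)                       # exact discretization
fn_hat, zeta_hat = modal_from_state_space(Ad, dt)
print(fn_hat, zeta_hat)
```

Modal vectors follow from the eigenvectors of Ad mapped through the output matrix; the modal-mass expression derived in the paper needs the input matrix as well.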
Predicting the refractive index of haemoglobin using the hybrid GA-SVR approach.
Oyehan, Tajudeen A; Alade, Ibrahim O; Bagudu, Aliyu; Sulaiman, Kazeem O; Olatunji, Sunday O; Saleh, Tawfik A
2018-04-30
The optical properties of blood play crucial roles in medical diagnostics and treatment, and in the design of new medical devices. Haemoglobin is a vital constituent of blood whose optical properties affect all of the optical properties of human blood. The refractive index of haemoglobin has been reported to depend strongly on its concentration, which is a function of the physiology of biological cells. This makes the refractive index of haemoglobin an essential non-invasive bio-marker of diseases. Unfortunately, the complexity of blood tissue makes it challenging to measure the refractive index of haemoglobin experimentally. While a few studies have reported the refractive index of haemoglobin, there is no solid consensus among the data obtained, owing to differences in measuring instruments and experimental conditions. Moreover, obtaining the refractive index via an experimental approach is quite laborious. In this work, an accurate, fast and relatively convenient strategy to estimate the refractive index of haemoglobin is reported. Thus, the GA-SVR model is presented for predicting the refractive index of haemoglobin using wavelength, temperature, and haemoglobin concentration as descriptors. The model developed is characterised by excellent accuracy and very low error estimates. The correlation coefficients obtained in these studies are 99.94% and 99.91% for the training and testing results, respectively. In addition, the result shows an almost perfect match with the experimental data and also demonstrates a significant improvement over a recent mathematical model available in the literature. The GA-SVR model predictions also give insights into the influence of concentration, wavelength, and temperature on measured refractive index values. The model outcome can be used not only to accurately estimate the refractive index of haemoglobin but could also provide a reliable common ground against which to benchmark experimental refractive index results.
Copyright © 2018 Elsevier Ltd. All rights reserved.
Wagner, Brian J.; Harvey, Judson W.
1997-01-01
Tracer experiments are valuable tools for analyzing the transport characteristics of streams and their interactions with shallow groundwater. The focus of this work is the design of tracer studies in high-gradient stream systems subject to advection, dispersion, groundwater inflow, and exchange between the active channel and zones in surface or subsurface water where flow is stagnant or slow moving. We present a methodology for (1) evaluating and comparing alternative stream tracer experiment designs and (2) identifying those combinations of stream transport properties that pose limitations to parameter estimation and therefore a challenge to tracer test design. The methodology uses the concept of global parameter uncertainty analysis, which couples solute transport simulation with parameter uncertainty analysis in a Monte Carlo framework. Two general conclusions resulted from this work. First, the solute injection and sampling strategy has an important effect on the reliability of transport parameter estimates. We found that constant injection with sampling through concentration rise, plateau, and fall provided considerably more reliable parameter estimates than a pulse injection across the spectrum of transport scenarios likely encountered in high-gradient streams. Second, for a given tracer test design, the uncertainties in mass transfer and storage-zone parameter estimates are strongly dependent on the experimental Damkohler number, DaI, which is a dimensionless combination of the rates of exchange between the stream and storage zones, the stream-water velocity, and the stream reach length of the experiment. Parameter uncertainties are lowest at DaI values on the order of 1.0. When DaI values are much less than 1.0 (owing to high velocity, long exchange timescale, and/or short reach length), parameter uncertainties are high because only a small amount of tracer interacts with storage zones in the reach. 
For the opposite conditions (DaI ≫ 1.0), solute exchange rates are fast relative to stream-water velocity and all solute is exchanged with the storage zone over the experimental reach. As DaI increases, tracer dispersion caused by hyporheic exchange eventually reaches an equilibrium condition and storage-zone exchange parameters become essentially nonidentifiable.
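A quick sketch of the experimental Damkohler number as used in this line of work, DaI = α(1 + A/As)L/u, with α the stream-storage exchange coefficient; the reach parameters below are hypothetical.

```python
def damkohler(alpha, A, A_s, L, u):
    """Experimental Damkohler number DaI = alpha * (1 + A/A_s) * L / u.
    alpha: stream-storage exchange coefficient (1/s); A: channel
    cross-sectional area (m^2); A_s: storage-zone area (m^2);
    L: reach length (m); u: stream-water velocity (m/s)."""
    return alpha * (1.0 + A / A_s) * L / u

# Hypothetical high-gradient reach; values near DaI = 1 are the target
# for reliable storage-zone parameter estimates.
print(damkohler(alpha=5e-5, A=0.4, A_s=0.08, L=300.0, u=0.1))
```

Computing DaI before the tracer release lets an experimenter adjust the reach length so that parameter uncertainties stay near their minimum.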
NASA Astrophysics Data System (ADS)
Zhou, Xianfeng; Huang, Wenjiang; Kong, Weiping; Ye, Huichun; Dong, Yingying; Casa, Raffaele
2017-05-01
Leaf carotenoid content (LCar) is an important indicator of plant physiological status. Accurate estimation of LCar provides valuable insight into early detection of stress in vegetation. With spectroscopy techniques, a semi-empirical approach based on spectral indices has been used extensively for carotenoid content estimation. However, established spectral indices for carotenoids, which generally rely on limited measured data, may lack predictive accuracy for carotenoid estimation across species and growth stages. In this study, we propose a new carotenoid index (CARI) for LCar assessment based on a large synthetic dataset simulated from the leaf radiative transfer model PROSPECT-5, and evaluate its capability with both simulated data from PROSPECT-5 and 4SAIL and extensive experimental datasets: the ANGERS dataset and experimental data acquired in field experiments in China in 2004. Results show that, among published spectral indices, CARI was the index most linearly correlated with carotenoid content at the leaf level in the synthetic dataset (R2 = 0.943, RMSE = 1.196 μg/cm2). Cross-validation with CARI on the ANGERS data achieved quite accurate estimation (R2 = 0.545, RMSE = 3.413 μg/cm2), although RBRI performed best (R2 = 0.727, RMSE = 2.640 μg/cm2). CARI also showed good accuracy (R2 = 0.639, RMSE = 1.520 μg/cm2) for LCar assessment with leaf-level field survey data, although PRI performed better (R2 = 0.710, RMSE = 1.369 μg/cm2). Whereas RBRI, PRI and the other assessed spectral indices performed well on particular datasets, their estimation accuracy was not consistent across all datasets used in this study; conversely, CARI was more robust, showing good results in all datasets. Further assessment of LCar with simulated and measured canopy reflectance data indicated that CARI might not be very sensitive to LCar changes at low leaf area index (LAI) values, and that under these conditions soil moisture influenced the LCar retrieval accuracy.
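The index-ranking procedure used above (fit LCar against an index, report R² and RMSE) can be sketched generically; the index values and carotenoid contents below are synthetic, since the band formula for CARI is not reproduced here.

```python
import numpy as np

def evaluate_index(index_vals, lcar, deg=1):
    """Fit LCar = f(index) by least squares and report R2 and RMSE,
    as done when ranking spectral indices against measured content."""
    coef = np.polyfit(index_vals, lcar, deg)
    resid = lcar - np.polyval(coef, index_vals)
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((lcar - lcar.mean()) ** 2)
    return r2, rmse

rng = np.random.default_rng(1)
idx = rng.uniform(0.1, 0.9, 60)                    # hypothetical index values
lcar = 2.0 + 15.0 * idx + rng.normal(0.0, 1.0, 60)  # synthetic LCar, ug/cm2
r2, rmse = evaluate_index(idx, lcar)
print(r2, rmse)
```

Running the same evaluation over several datasets, as the study does, is what distinguishes a robust index from one tuned to a single dataset.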
Test Equality between Three Treatments under an Incomplete Block Crossover Design.
Lui, Kung-Jong
2015-01-01
Under a random-effects linear additive risk model, we compare two experimental treatments with a placebo on continuous data under an incomplete block crossover trial. We develop three test procedures for simultaneously testing equality between the two experimental treatments and a placebo, as well as interval estimators for the mean difference between treatments. We apply Monte Carlo simulations to evaluate the performance of these test procedures and interval estimators in a variety of situations. We note that the bivariate test procedure accounting for the dependence structure based on the F-test is preferable to the other two procedures when only one of the two experimental treatments has a non-zero effect versus the placebo. We note further that when the effects of the two experimental treatments versus the placebo are in the same relative direction and of approximately equal magnitude, the summary test procedure based on a simple average of two weighted-least-squares (WLS) estimators can outperform the other two procedures with respect to power. When one of the two experimental treatments has a relatively large effect versus the placebo, the univariate test procedure using Bonferroni's inequality can still be of use. Finally, we use data on forced expiratory volume in 1 s (FEV1) readings taken from a double-blind crossover trial comparing two different doses of formoterol with a placebo to illustrate the use of the test procedures and interval estimators proposed here.
ERIC Educational Resources Information Center
Chingos, Matthew M.; Peterson, Paul E.
2015-01-01
We provide the first experimental estimates of the long-term impacts of a voucher to attend private school by linking data from a privately sponsored voucher initiative in New York City, which awarded the scholarships by lottery to low-income families, to administrative records on college enrollment and degree attainment. We find no significant…
Simulated maximum likelihood method for estimating kinetic rates in gene expression.
Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin
2007-01-01
Kinetic rate in gene expression is a key measurement of the stability of gene products and gives important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed aimed at evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression that may arise from discrete processes of gene expression, small numbers of mRNA transcript, fluctuations in the activity of transcriptional factors and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density based on the discrete nature of stochastic simulations. The genetic optimization algorithm is used as an efficient tool to search for optimal reaction rates. Numerical results indicate that the proposed methods can give robust estimations of kinetic rates with good accuracy.
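The simulated-maximum-likelihood idea (simulate the stochastic model, estimate the transitional density with a kernel, maximize over parameters) can be sketched on a pure-degradation reaction, whose stochastic-simulation endpoint happens to be exactly binomial; a grid search stands in for the genetic optimization algorithm, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_decay(x0, rate, t, n_paths):
    """Exact stochastic simulation of pure degradation X -> 0: each of the
    x0 initial transcripts survives to time t independently with probability
    exp(-rate*t), so the SSA endpoint is Binomial(x0, exp(-rate*t))."""
    return rng.binomial(x0, np.exp(-rate * t), n_paths).astype(float)

def sim_log_lik(obs, x0, rate, t, n_paths=5000, h=2.0):
    """Simulated likelihood: Gaussian-kernel density of simulated end
    states, evaluated at each observed copy number."""
    sims = simulate_decay(x0, rate, t, n_paths)
    ll = 0.0
    for y in obs:
        dens = np.mean(np.exp(-0.5 * ((y - sims) / h) ** 2))
        ll += np.log(dens / (h * np.sqrt(2.0 * np.pi)) + 1e-300)
    return ll

# Hypothetical single-cell data: 30 cells observed 10 min after induction
# with 100 transcripts each; true degradation rate 0.1 / min.
obs = simulate_decay(100, 0.1, 10.0, 30)
grid = np.linspace(0.05, 0.20, 31)
best = grid[int(np.argmax([sim_log_lik(obs, 100, r, 10.0) for r in grid]))]
print(f"estimated degradation rate: {best:.3f} / min")
```

For networks with no closed-form transition law, the binomial sampler would be replaced by Gillespie trajectories, which is where the simulated-frequency approach in the paper comes in.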
Bucci, Melanie E.; Callahan, Peggy; Koprowski, John L.; Polfus, Jean L.; Krausman, Paul R.
2015-01-01
Stable isotope analysis of diet has become a common tool in conservation research. However, the multiple sources of uncertainty inherent in this analysis framework involve consequences that have not been thoroughly addressed. Uncertainty arises from the choice of trophic discrimination factors, and for Bayesian stable isotope mixing models (SIMMs), the specification of prior information; the combined effect of these aspects has not been explicitly tested. We used a captive feeding study of gray wolves (Canis lupus) to determine the first experimentally-derived trophic discrimination factors of C and N for this large carnivore of broad conservation interest. Using the estimated diet in our controlled system and data from a published study on wild wolves and their prey in Montana, USA, we then investigated the simultaneous effect of discrimination factors and prior information on diet reconstruction with Bayesian SIMMs. Discrimination factors for gray wolves and their prey were 1.97‰ for δ13C and 3.04‰ for δ15N. Specifying wolf discrimination factors, as opposed to the commonly used red fox (Vulpes vulpes) factors, made little practical difference to estimates of wolf diet, but prior information had a strong effect on bias, precision, and accuracy of posterior estimates. Without specifying prior information in our Bayesian SIMM, it was not possible to produce SIMM posteriors statistically similar to the estimated diet in our controlled study or the diet of wild wolves. Our study demonstrates the critical effect of prior information on estimates of animal diets using Bayesian SIMMs, and suggests species-specific trophic discrimination factors are of secondary importance. When using stable isotope analysis to inform conservation decisions researchers should understand the limits of their data. It may be difficult to obtain useful information from SIMMs if informative priors are omitted and species-specific discrimination factors are unavailable. PMID:25803664
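The role of the trophic discrimination factor can be illustrated with the simplest one-isotope, two-source mixing calculation (a far cry from a full Bayesian SIMM); the prey signatures below are hypothetical, while 3.04‰ is the wolf δ15N discrimination factor reported above.

```python
def mixing_proportion(d_consumer, d_src1, d_src2, tdf):
    """One-isotope, two-source mixing: fraction of source 1 in the diet
    after correcting the consumer signature by the trophic discrimination
    factor (tdf), all values in permil."""
    return (d_consumer - tdf - d_src2) / (d_src1 - d_src2)

# Hypothetical d15N values for wolf tissue and two prey sources,
# with the experimentally derived wolf TDF of 3.04 permil.
p_deer = mixing_proportion(d_consumer=8.5, d_src1=6.0, d_src2=4.0, tdf=3.04)
print(p_deer, 1.0 - p_deer)
```

Because the TDF enters the numerator directly, any error in it shifts the inferred diet proportions; Bayesian SIMMs add priors and uncertainty on top of this same algebra.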
Diallel analysis for technological traits in upland cotton.
Queiroz, D R; Farias, F J C; Cavalcanti, J J V; Carvalho, L P; Neder, D G; Souza, L S S; Farias, F C; Teodoro, P E
2017-09-21
Final cotton quality is of great importance and depends on intrinsic and extrinsic fiber characteristics. The objective of this study was to estimate general (GCA) and specific (SCA) combining abilities for technological fiber traits among six upland cotton genotypes and their fifteen hybrid combinations, and to determine the predominant genetic effects controlling the evaluated traits. In 2015, six cotton genotypes (FM 993, CNPA 04-2080, PSC 355, TAM B 139-17, IAC 26, and TAMCOT-CAMD-E) and their fifteen hybrid combinations were evaluated at the Experimental Station of Embrapa Algodão, located in Patos, PB, Brazil. The experimental design was a randomized block with three replications. The technological fiber traits evaluated were: length (mm), strength (gf/tex), fineness (Micronaire index), uniformity (%), short fiber index (%), and spinning index. The diallel analysis was carried out according to the methodology proposed by Griffing, using method II and model I. Significant differences were detected between the treatments and combining abilities (GCA and SCA), indicating variability in the study material. Additive effects predominated in the genetic control of all traits. TAM B 139-17 presented the best GCA estimates for all traits. The best combinations, which obtained the best SCA estimates with one parent having favorable GCA estimates, were: FM 993 x TAM B 139-17, CNPA 04-2080 x PSC 355, FM 993 x TAMCOT-CAMD-E, PSC 355 x TAM B 139-17, and TAM B 139-17 x TAMCOT-CAMD-E.
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.
White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K
2016-12-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model.
Quantifying the effect of experimental design choices for in vitro scratch assays.
Johnston, Stuart T; Ross, Joshua V; Binder, Benjamin J; Sean McElwain, D L; Haridas, Parvathi; Simpson, Matthew J
2016-07-07
Scratch assays are often used to investigate potential drug treatments for chronic wounds and cancer. Interpreting these experiments with a mathematical model allows us to estimate the cell diffusivity, D, and the cell proliferation rate, λ. However, the influence of the experimental design on the estimates of D and λ is unclear. Here we apply an approximate Bayesian computation (ABC) parameter inference method, which produces a posterior distribution of D and λ, to new sets of synthetic data, generated from an idealised mathematical model, and experimental data for a non-adhesive mesenchymal population of fibroblast cells. The posterior distribution allows us to quantify the amount of information obtained about D and λ. We investigate two types of scratch assay, as well as varying the number and timing of the experimental observations captured. Our results show that a scrape assay, involving one cell front, provides more precise estimates of D and λ, and is more computationally efficient to interpret than a wound assay, with two opposingly directed cell fronts. We find that recording two observations, after making the initial observation, is sufficient to estimate D and λ, and that the final observation time should correspond to the time taken for the cell front to move across the field of view. These results provide guidance for estimating D and λ, while simultaneously minimising the time and cost associated with performing and interpreting the experiment. Copyright © 2016 Elsevier Ltd. All rights reserved.
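An ABC rejection sketch for D and λ, using the Fisher-KPP travelling-wave speed c = 2√(Dλ) as a toy forward model rather than a full cell-level simulation; it also illustrates why the choice of summary statistic matters, since front position alone identifies only the product Dλ.

```python
import numpy as np

rng = np.random.default_rng(2)

def front_positions(D, lam, times):
    """Toy forward model: the Fisher-KPP travelling-wave speed
    c = 2*sqrt(D*lam) gives the scratch-front position over time."""
    return 2.0 * np.sqrt(D * lam) * times

times = np.array([12.0, 24.0, 36.0])                    # hours
obs = front_positions(D=500.0, lam=0.05, times=times)   # synthetic "data", um

# ABC rejection: sample from uniform priors, keep draws whose simulated
# summaries fall within a tolerance of the observed front positions.
n, eps = 100_000, 20.0
D_prior = rng.uniform(100.0, 1000.0, n)                 # um^2/h
lam_prior = rng.uniform(0.01, 0.1, n)                   # 1/h
sim = front_positions(D_prior[:, None], lam_prior[:, None], times)
dist = np.sqrt(((sim - obs) ** 2).sum(axis=1))
post_D, post_lam = D_prior[dist < eps], lam_prior[dist < eps]
print(post_D.size, np.median(post_D * post_lam))
```

The accepted draws recover the product Dλ (true value 25) but leave D and λ individually wide, mirroring the paper's point that the experimental observations captured determine how much information the posterior carries.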
Parameter estimation for lithium ion batteries
NASA Astrophysics Data System (ADS)
Santhanagopalan, Shriram
With an increase in the demand for lithium based batteries at the rate of about 7% per year, the amount of effort put into improving the performance of these batteries from both experimental and theoretical perspectives is increasing. There exist a number of mathematical models ranging from simple empirical models to complicated physics-based models to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best of the physics based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also maintained as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time. 
For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of road conditions is important. An algorithm that can predict the SOC over time intervals as small as 5 ms is in critical demand. In such cases, the conventional non-linear estimation procedure is not time-effective. There exist methodologies in the literature, such as those based on fuzzy logic; however, these techniques require considerable computational storage. Consequently, it is not possible to implement such techniques on a microchip as part of a real-time device. The Extended Kalman Filter (EKF) based approach presented in this work is a first step towards developing an efficient method to predict the State of Charge of a lithium ion cell online, based on an electrochemical model. The final part of the dissertation focuses on incorporating uncertainty in parameter values into electrochemical models using polynomial chaos theory (PCT).
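A scalar sketch of the EKF idea for online SOC tracking, assuming a toy equivalent-circuit model: coulomb counting as the state equation and a hypothetical quadratic open-circuit-voltage (OCV) curve as the measurement equation. None of the parameter values below come from the dissertation.

```python
import random

random.seed(1)

# Toy equivalent-circuit cell (illustrative parameters, not from the thesis):
Q = 3600.0           # capacity in A*s (1 Ah)
R = 0.05             # ohmic resistance, ohm

def ocv(s):          # hypothetical open-circuit-voltage curve, V
    return 3.2 + 0.7 * s + 0.15 * s * s

def docv(s):         # its derivative dOCV/dSOC (EKF measurement Jacobian)
    return 0.7 + 0.3 * s

dt, I = 0.005, 1.0                   # 5 ms update interval, 1 A discharge
soc_true, soc_est, P = 0.9, 0.5, 1.0  # start from a poor initial guess
q_proc, r_meas = 1e-10, 1e-4          # process/measurement noise covariances

for _ in range(20000):                # 100 s of 5 ms steps
    # truth simulation with a noisy terminal-voltage measurement
    soc_true -= I * dt / Q
    v_meas = ocv(soc_true) - I * R + random.gauss(0.0, 0.005)

    # EKF predict: coulomb counting (linear in SOC, so F = 1)
    soc_est -= I * dt / Q
    P += q_proc

    # EKF update: linearise the OCV curve at the current estimate
    H = docv(soc_est)
    K = P * H / (H * P * H + r_meas)
    soc_est += K * (v_meas - (ocv(soc_est) - I * R))
    P *= (1.0 - K * H)

print(round(soc_true, 3), round(soc_est, 3))
```

Because the predict step is linear in SOC, only the measurement update needs linearisation (the Jacobian H); each 5 ms step costs a handful of multiplications, which is what makes this class of estimator practical on a microchip.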
Estimated landmark calibration of biomechanical models for inverse kinematics.
Trinler, Ursula; Baker, Richard
2018-01-01
Inverse kinematics is emerging as the optimal method in movement analysis to fit a multi-segment biomechanical model to experimental marker positions. A key part of this process is calibrating the model to the dimensions of the individual being analysed which requires scaling of the model, pose estimation and localisation of tracking markers within the relevant segment coordinate systems. The aim of this study is to propose a generic technique for this process and test a specific application to the OpenSim model Gait2392. Kinematic data from 10 healthy adult participants were captured in static position and normal walking. Results showed good average static and dynamic fitting errors between virtual and experimental markers of 0.8 cm and 0.9 cm, respectively. Highest fitting errors were found on the epicondyle (static), feet (static, dynamic) and on the thigh (dynamic). These result from inconsistencies between the model geometry and degrees of freedom and the anatomy and movement pattern of the individual participants. A particular limitation is in estimating anatomical landmarks from the bone meshes supplied with Gait2392 which do not conform with the bone morphology of the participants studied. Soft tissue artefact will also affect fitting the model to walking trials. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
Glorieux, Cédric; Cuguen, Joel; Roux, Fabrice
2012-01-01
Phenological traits often show variation within and among natural populations of annual plants. Nevertheless, the adaptive value of post-anthesis traits is seldom tested. In this study, we estimated the adaptive values of pre- and post-anthesis traits in two stressful environments (water stress and interspecific competition), using the selfing annual species Arabidopsis thaliana. By estimating seed production and by performing laboratory natural selection (LNS), we assessed the strength and nature (directional, disruptive and stabilizing) of selection acting on phenological traits in A. thaliana under the two tested stress conditions, each with four intensities. Both the type of stress and its intensity affected the strength and nature of selection, as did genetic constraints among phenological traits. Under water stress, both experimental approaches demonstrated directional selection for a shorter life cycle, although bolting time imposes a genetic constraint on the length of the interval between bolting and anthesis. Under interspecific competition, results from the two experimental approaches showed discrepancies. Estimation of seed production predicted directional selection toward early pre-anthesis traits and long post-anthesis periods. In contrast, the LNS approach suggested neutrality for all phenological traits. This study opens questions on adaptation in complex natural environments where many selective pressures act simultaneously. PMID:22403624
NASA Astrophysics Data System (ADS)
Belyaev, Vadim S.; Guterman, Vitaly Y.; Ivanov, Anatoly V.
2004-06-01
The report presents the theoretical and experimental results obtained during the first year of the ISTC project No. 1926. The energy and temporal characteristics of the laser radiation necessary to ignite the working-component mixture in a rocket engine combustion chamber have been predicted. Two approaches have been studied: laser-induced optical breakdown in the gaseous fuel, and a laser-initiated plasma torch on a target surface. The possibilities and conditions for igniting the rocket fuel components with a laser beam in differently designed combustion chambers have been estimated and studied. The comparative analysis shows that both the optical-spark and on-target light-focusing techniques can ignite the mixture.
NASA Astrophysics Data System (ADS)
Sidnyaev, N. I.
2018-05-01
The results of studying the high-velocity impact interactions of the meteoric-background particle flux in space with satellites have been presented. The effects that arise during microparticle motion in the material have been described; models of solid-particle interactions with the protection of a spacecraft's onboard hardware have been presented. The experimental and analytical dependences have been given. The basic factors have been revealed, and their effect on the erosion wear of the satellite's surface has been estimated. The dependences for calculating the rectilinear (horizontal, inclined and vertical) sections of the satellite's surface have been given. The presented dependences represent the results of experimental and analytical studies.
Kim, Eunjung; Cain, Kevin; Boutain, Doris; Chun, Jin-Joo; Kim, Sangho; Im, Hyesang
2017-01-01
Problems: Korean American (KA) children experience mental health problems due to parenting difficulties complicated by living in two cultures. Methods: The Korean Parent Training Program (KPTP) was pilot tested with 48 KA mothers of children (ages 3–8) using a partial group randomized controlled experimental study design. Self-report survey and observation data were gathered. Findings: Analyses using generalized estimating equations indicated that the intervention-group mothers increased effective parenting, and their children decreased behavior problems and reported less acculturation conflict with mothers. Conclusions: The KPTP is a promising way to promote effective parenting and increase positive child mental health in KA families. PMID:24645901
Flexural properties of three kinds of experimental fiber-reinforced composite posts.
Kim, Mi-Joo; Jung, Won-Chang; Oh, Seunghan; Hattori, Masayuki; Yoshinari, Masao; Kawada, Eiji; Oda, Yutaka; Bae, Ji-Myung
2011-01-01
The aim of this study was to estimate the flexural properties of three kinds of experimental fiber-reinforced composite (FRC) posts and to evaluate their potential use as posts. Experimental FRC posts were fabricated with glass, aramid, and UHMWP fibers. Commercial FRC posts were used for comparison. A three-point bending test was performed at a crosshead speed of 1 mm/min. Experimental glass fiber posts showed significantly higher flexural strengths and moduli than aramid and UHMWP posts. Experimental UHMWP posts demonstrated superior toughness to the commercial posts. The glass fiber posts displayed stiff, strong and brittle features, while the UHMWP posts were flexible, weak and ductile. The flexural properties of the aramid posts fell between those of the glass and UHMWP posts. In conclusion, the glass fiber posts proved excellent in flexural strengths and moduli. However, the superior toughness of UHMWP fibers suggests the possibility of their use as posts in combination with glass fibers.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-03
... study including the list formats, study design, and measurement plans for the listed unintended... Estimated respondent burden: Pretest, 60 respondents × 1 response each × 0.5 hours per response = 30 hours; Screener, 10,000 respondents × 1 response each × 0.0167 hours per response = 167 hours; Experimental Survey, 3,150 respondents... part in a pretest of the study, estimated to last 30 minutes (0.5 hours), for a total of 30 hours...
Lucas, P Avilés; Aubineau-Lanièce, I; Lourenço, V; Vermesse, D; Cutarella, D
2014-01-01
The absorbed dose to water is the fundamental reference quantity for brachytherapy treatment planning systems and thermoluminescence dosimeters (TLDs) have been recognized as the most validated detectors for measurement of such a dosimetric descriptor. The detector response in a wide energy spectrum as that of an (192)Ir brachytherapy source as well as the specific measurement medium which surrounds the TLD need to be accounted for when estimating the absorbed dose. This paper develops a methodology based on highly sensitive LiF:Mg,Cu,P TLDs to directly estimate the absorbed dose to water in liquid water around a high dose rate (192)Ir brachytherapy source. Different experimental designs in liquid water and air were constructed to study the response of LiF:Mg,Cu,P TLDs when irradiated in several standard photon beams of the LNE-LNHB (French national metrology laboratory for ionizing radiation). Measurement strategies and Monte Carlo techniques were developed to calibrate the LiF:Mg,Cu,P detectors in the energy interval characteristic of that found when TLDs are immersed in water around an (192)Ir source. Finally, an experimental system was designed to irradiate TLDs at different angles between 1 and 11 cm away from an (192)Ir source in liquid water. Monte Carlo simulations were performed to correct measured results to provide estimates of the absorbed dose to water in water around the (192)Ir source. The dose response dependence of LiF:Mg,Cu,P TLDs with the linear energy transfer of secondary electrons followed the same variations as those of published results. The calibration strategy which used TLDs in air exposed to a standard N-250 ISO x-ray beam and TLDs in water irradiated with a standard (137)Cs beam provided an estimated mean uncertainty of 2.8% (k = 1) in the TLD calibration coefficient for irradiations by the (192)Ir source in water. 
The 3D TLD measurements performed in liquid water were obtained with a maximum uncertainty of 11% (k = 1) found at 1 cm from the source. Radial dose values in water were compared against published results of the American Association of Physicists in Medicine and the European Society for Radiotherapy and Oncology and no significant differences (maximum value of 3.1%) were found within uncertainties except for one position at 9 cm (5.8%). At this location the background contribution relative to the TLD signal is relatively small and an unexpected experimental fluctuation in the background estimate may have caused such a large discrepancy. This paper shows that reliable measurements with TLDs in complex energy spectra require a study of the detector dose response with the radiation quality and specific calibration methodologies which model accurately the experimental conditions where the detectors will be used. The authors have developed and studied a method with highly sensitive TLDs and contributed to its validation by comparison with results from the literature. This methodology can be used to provide direct estimates of the absorbed dose rate in water for irradiations with HDR (192)Ir brachytherapy sources.
Estimation of toxicity using the Toxicity Estimation Software Tool (TEST)
Tens of thousands of chemicals are currently in commerce, and hundreds more are introduced every year. Since experimental measurements of toxicity are extremely time-consuming and expensive, it is imperative that alternative methods to estimate toxicity are developed.
Lee, Junkyo; Lee, Min Woo; Choi, Dongil; Cha, Dong Ik; Lee, Sunyoung; Kang, Tae Wook; Yang, Jehoon; Jo, Jaemoon; Bang, Won-Chul; Kim, Jongsik; Shin, Dongkuk
2017-12-21
The purpose of this study was to evaluate the accuracy of an active contour model for estimating the posterior ablative margin in images obtained by the fusion of real-time ultrasonography (US) and 3-dimensional (3D) US or magnetic resonance (MR) images of an experimental tumor model for radiofrequency ablation. Chickpeas (n=12) and bovine rump meat (n=12) were used as an experimental tumor model. Grayscale 3D US and T1-weighted MR images were pre-acquired for use as reference datasets. US and MR/3D US fusion was performed for one group (n=4), and US and 3D US fusion only (n=8) was performed for the other group. Half of the models in each group were completely ablated, while the other half were incompletely ablated. Hyperechoic ablation areas were extracted using an active contour model from real-time US images, and the posterior margin of the ablation zone was estimated from the anterior margin. After the experiments, the ablated pieces of bovine rump meat were cut along the electrode path and the cut planes were photographed. The US images with the estimated posterior margin were compared with the photographs and post-ablation MR images. The extracted contours of the ablation zones from 12 US fusion videos and post-ablation MR images were also matched. In the four models fused under real-time US with MR/3D US, compression from the transducer and the insertion of an electrode resulted in misregistration between the real-time US and MR images, making the estimation of the ablation zones less accurate than was achieved through fusion between real-time US and 3D US. Eight of the 12 post-ablation 3D US images were graded as good when compared with the sectioned specimens, and 10 of the 12 were graded as good in a comparison with nicotinamide adenine dinucleotide staining and histopathologic results. 
Estimating the posterior ablative margin using an active contour model is a feasible way of predicting the ablation area, and US/3D US fusion was more accurate than US/MR fusion.
Schilling, Chris; Petrie, Dennis; Dowsey, Michelle M; Choong, Peter F; Clarke, Philip
2017-12-01
Many treatments are evaluated using quasi-experimental pre-post studies susceptible to regression to the mean (RTM). Ignoring RTM could bias the economic evaluation. We investigated this issue using the contemporary example of total knee replacement (TKR), a common treatment for end-stage osteoarthritis of the knee. Data (n = 4796) were obtained from the Osteoarthritis Initiative database, a longitudinal observational study of osteoarthritis. TKR patients (n = 184) were matched to non-TKR patients, using propensity score matching on the predicted hazard of TKR and exact matching on osteoarthritis severity and health-related quality of life (HrQoL). The economic evaluation using the matched control group was compared to the standard method of using the pre-surgery score as the control. Matched controls were identified for 56% of the primary TKRs. The matched control HrQoL trajectory showed evidence of RTM accounting for a third of the estimated QALY gains from surgery using the pre-surgery HrQoL as the control. Incorporating RTM into the economic evaluation significantly reduced the estimated cost effectiveness of TKR and increased the uncertainty. A generalized ICER bias correction factor was derived to account for RTM in cost-effectiveness analysis. RTM should be considered in economic evaluations based on quasi-experimental pre-post studies. Copyright © 2017 John Wiley & Sons, Ltd.
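The arithmetic behind the RTM correction can be shown with invented numbers chosen to mirror the reported "one third" figure: the naive pre-post gain uses the pre-surgery HrQoL score as the counterfactual, while the adjusted gain uses the matched control's post-period score.

```python
# Illustrative HrQoL utilities (made up, not from the Osteoarthritis Initiative):
pre, post, matched_control_post = 0.60, 0.78, 0.66

gain_naive = post - pre                        # counterfactual = pre-surgery score
gain_adjusted = post - matched_control_post    # counterfactual = matched control
rtm_share = (gain_naive - gain_adjusted) / gain_naive

print(round(gain_naive, 2), round(gain_adjusted, 2), round(rtm_share, 2))
```

Here a third of the naive gain (0.06 of 0.18) is RTM rather than treatment effect, so the QALY gain attributed to surgery, and hence the cost effectiveness, shrinks accordingly.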
Leaf Chlorophyll Content Estimation of Winter Wheat Based on Visible and Near-Infrared Sensors.
Zhang, Jianfeng; Han, Wenting; Huang, Lvwen; Zhang, Zhiyong; Ma, Yimian; Hu, Yamin
2016-03-25
The leaf chlorophyll content is one of the most important factors for the growth of winter wheat. Visible and near-infrared sensors are a quick and non-destructive testing technology for the estimation of crop leaf chlorophyll content. In this paper, a new approach is developed for leaf chlorophyll content estimation of winter wheat based on visible and near-infrared sensors. First, sliding window smoothing (SWS) was integrated with multiplicative scatter correction (MSC) or the standard normal variate transformation (SNV) to preprocess the reflectance spectra images of wheat leaves. Then, a model for the relationship between the leaf relative chlorophyll content and the reflectance spectra was developed using partial least squares (PLS) and the back propagation neural network. A total of 300 samples from areas surrounding Yangling, China, were used for the experimental studies. The samples of visible and near-infrared spectroscopy at wavelengths of 450–900 nm were preprocessed using SWS, MSC and SNV. The experimental results indicate that preprocessing using SWS and SNV and then modeling using PLS achieves the most accurate estimation, with a correlation coefficient of 0.8492 and a root mean square error of 1.7216. Thus, the proposed approach can be widely used for winter wheat chlorophyll content analysis.
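The preprocessing steps named above are simple to state. A minimal sketch of sliding-window smoothing followed by the standard normal variate transform (the reflectance values are fabricated, and the window size and spectrum length are arbitrary):

```python
import statistics

def sliding_window_smooth(spectrum, w=3):
    """Moving-average smoothing with an odd window w; edges use a
    truncated window."""
    half = w // 2
    out = []
    for i in range(len(spectrum)):
        lo, hi = max(0, i - half), min(len(spectrum), i + half + 1)
        out.append(sum(spectrum[lo:hi]) / (hi - lo))
    return out

def snv(spectrum):
    """Standard normal variate transform: centre and scale each spectrum
    by its own mean and standard deviation, removing multiplicative
    scatter differences between samples."""
    m = statistics.fmean(spectrum)
    s = statistics.stdev(spectrum)
    return [(x - m) / s for x in spectrum]

raw = [0.52, 0.55, 0.61, 0.70, 0.66, 0.58, 0.54]  # fabricated reflectances
pre = snv(sliding_window_smooth(raw))
print([round(v, 2) for v in pre])
```

After SNV each spectrum has zero mean and unit variance, so scatter effects between individual leaves no longer dominate the subsequent PLS regression.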
Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M
2016-01-01
Due to the low fluorine background signal in vivo, 19F is a good marker for studying the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
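The calibration-factor idea reduces to a scaling law: once one sample of known concentration has been measured on a given scanner and coil, detection limits for other experiments follow from the standard assumptions that SNR grows linearly with spin concentration and with the square root of the number of averages. The function and numbers below are illustrative, not the paper's actual calibration.

```python
import math

def min_detectable_conc(c_cal, snr_cal, n_cal, snr_min=3.0, n_avg=1):
    """Minimum detectable concentration, scaled from a calibration
    measurement: SNR is assumed linear in concentration and
    proportional to sqrt(number of averages)."""
    return c_cal * (snr_min / snr_cal) * math.sqrt(n_cal / n_avg)

# Hypothetical calibration: a 100 mM sample gave SNR 200 with 16 averages.
# With 64 averages, what concentration still reaches the SNR-3 threshold?
c_min = min_detectable_conc(100.0, 200.0, 16, snr_min=3.0, n_avg=64)
print(c_min, "mM")  # -> 0.75 mM
```

The instrument-dependent factors the paper folds into its calibration factor are implicit here in the measured (c_cal, snr_cal, n_cal) triple, which is what lets the estimate transfer without a pilot experiment.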
Comparison of Experimental and Computational Aerothermodynamics of a 70-deg Sphere-Cone
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Perkins, John N.
1996-01-01
Numerical solutions for hypersonic flows of carbon dioxide and air around a 70-deg sphere-cone have been computed using an axisymmetric non-equilibrium Navier-Stokes solver. Freestream flow conditions for these computations were equivalent to those obtained in an experimental blunt-body heat-transfer study conducted in a high-enthalpy, hypervelocity expansion tube. Comparisons have been made between the computed and measured surface heat-transfer rates on the forebody and afterbody of the sphere-cone and on the sting which supported the test model. Computed heating rates were within the estimated experimental uncertainties of 10% on the forebody and 15% in the wake, except within the recirculating flow region of the wake.
NASA Technical Reports Server (NTRS)
Lee, S. S.; Shuler, M. L.
1986-01-01
An experimental system was developed to study the microbial growth kinetics of an undefined mixed culture in an aerobic biological waste treatment process. The experimental results were used to develop a mathematical model that can predict the performance of a bioreactor. The bioreactor will be used to regeneratively treat waste material that is expected to be generated during a long-term manned space mission. Since the presence of insoluble particles in the chemically undefined complex media made estimating biomass very difficult in the real system, a clean system was devised to study microbial growth from the soluble substrate.
Methods for determining time of death.
Madea, Burkhard
2016-12-01
Medicolegal death time estimation must determine the time since death reliably. Reliability can only be established empirically by statistical analysis of errors in field studies. Determining the time since death requires the calculation of measurable data along a time-dependent curve back to the starting point. Various methods are used to estimate the time since death. The current gold standard for death time estimation is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce the margin of error of the nomogram method, a compound method was developed based on electrical and mechanical excitability of skeletal muscle, pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions for death time estimation based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of the terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as 1H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.
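The two-exponential cooling model behind the nomogram can be inverted numerically. The sketch below uses Henssge's published parameterisation for ambient temperatures up to about 23 °C under standard cooling conditions; the case values are invented, and in real casework corrective factors for clothing, wind, and wetness must be applied.

```python
import math

def henssge_q(t, body_mass_kg, corrective=1.0):
    """Normalised rectal temperature drop Q(t) after death, using
    Henssge's double-exponential model (ambient <= 23 degC)."""
    b = -1.2815 * (corrective * body_mass_kg) ** (-0.625) + 0.0284  # 1/h
    return 1.25 * math.exp(b * t) - 0.25 * math.exp(5.0 * b * t)

def time_since_death(t_rectal, t_ambient, body_mass_kg, t0=37.2):
    """Invert Q(t) = (Tr - Ta) / (T0 - Ta) for t by bisection;
    Q decreases monotonically from 1 toward 0."""
    q_obs = (t_rectal - t_ambient) / (t0 - t_ambient)
    lo, hi = 0.0, 100.0  # hours
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if henssge_q(mid, body_mass_kg) > q_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Invented case: 80 kg body, rectal 30.0 degC, ambient 18.0 degC
t_est = time_since_death(30.0, 18.0, 80.0)
print(round(t_est, 1), "hours since death")
```

The compound method described in the review then narrows the interval around such a single-method estimate using independent signs (muscle and iris excitability, rigor, lividity).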
Chimpanzees demonstrate individual differences in social information use.
Watson, Stuart K; Vale, Gillian L; Hopper, Lydia M; Dean, Lewis G; Kendal, Rachel L; Price, Elizabeth E; Wood, Lara A; Davis, Sarah J; Schapiro, Steven J; Lambeth, Susan P; Whiten, Andrew
2018-06-19
Studies of transmission biases in social learning have greatly informed our understanding of how behaviour patterns may diffuse through animal populations, yet within-species inter-individual variation in social information use has received little attention and remains poorly understood. We have addressed this question by examining individual performances across multiple experiments with the same population of primates. We compiled a dataset spanning 16 social learning studies (26 experimental conditions) carried out at the same study site over a 12-year period, incorporating a total of 167 chimpanzees. We applied a binary scoring system to code each participant's performance in each study according to whether they demonstrated evidence of using social information from conspecifics to solve the experimental task or not (Social Information Score-'SIS'). Bayesian binomial mixed effects models were then used to estimate the extent to which individual differences influenced SIS, together with any effects of sex, rearing history, age, prior involvement in research and task type on SIS. An estimate of repeatability found that approximately half of the variance in SIS was accounted for by individual identity, indicating that individual differences play a critical role in the social learning behaviour of chimpanzees. According to the model that best fit the data, females were, depending on their rearing history, 15-24% more likely to use social information to solve experimental tasks than males. However, there was no strong evidence of an effect of age or research experience, and pedigree records indicated that SIS was not a strongly heritable trait. Our study offers a novel, transferable method for the study of individual differences in social learning.
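For a binary trait modelled with a logit-link mixed model, repeatability is commonly reported on the latent scale, where the residual variance is fixed at π²/3 by the logistic link. With an illustrative between-individual variance (not the study's actual estimate), "about half the variance" looks like this:

```python
import math

# Latent-scale repeatability for a binomial (logit-link) mixed model:
# R = var_id / (var_id + pi^2 / 3), where pi^2/3 is the residual
# variance implied by the logistic link.
var_id = 3.3  # illustrative individual-identity variance, not from the paper
R = var_id / (var_id + math.pi ** 2 / 3.0)
print(round(R, 2))  # close to 0.5: identity explains about half the variance
```

Note this latent-scale formula is one standard convention for binary repeatability; whether it matches the paper's exact estimator is an assumption of this sketch.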
NASA Astrophysics Data System (ADS)
Anghel, D.-C.; Ene, A.; Ştirbu, C.; Sicoe, G.
2017-10-01
This paper presents a study of the factors that influence the working performance of workers in the automotive industry. These factors mainly concern transportation conditions, taking into account that a large number of workers live far away from the enterprise. The quantitative data obtained from the study are generalized using a software-simulated neural network, which can estimate worker performance even for combinations of input factors that were not recorded in the study. The experimental data are divided into two classes. The first class, containing approximately 80% of the data, is used to train the neural network; the weights resulting from training are saved in a text file. The other class, containing the remaining 20% of the experimental data, is used to validate the neural network. Training and validation are performed in Java (the TrainAndValidate class). We designed another Java class, Test.java, to be used with new input data for new situations. The deliverables are the experimental data collected from the study, the software that simulates the neural network, and the software that estimates working performance when new situations are met. This application is useful for the human resources department of an enterprise. The outputs are not quantitative but qualitative (from low performance to high performance, divided into five classes).
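The 80/20 train/validation split described above is straightforward. A sketch with placeholder records follows; the paper's implementation is in Java, but the logic is language-independent, and the field names and value ranges here are invented.

```python
import random

random.seed(42)

# Placeholder study records: one input factor and a 5-class performance label.
records = [{"transport_time_h": random.uniform(0.2, 2.0),
            "performance_class": random.randint(1, 5)}
           for _ in range(200)]

random.shuffle(records)          # randomise before splitting
cut = int(0.8 * len(records))    # 80% for training
train, validate = records[:cut], records[cut:]
print(len(train), len(validate))  # -> 160 40
```

Shuffling before the split matters: without it, any ordering in the collected data (e.g. by plant or shift) would bias the validation set.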
Evidence to Support Peer Tutoring Programs at the Undergraduate Level
ERIC Educational Resources Information Center
Colver, Mitchell; Fry, Trevor
2016-01-01
The present study examined undergraduate peer tutoring in three phases. Phase I qualitatively surveyed students' perceptions about the effectiveness of tutoring. Phase II examined the usefulness of promoting regular use of services through a tutoring contract. Phase III utilized an archival, quasi-experimental approach to estimate the effect of…
2014-06-03
nozzle exit) was developed to aid in porting the VENOM diagnostic to high-enthalpy impulse tunnels. Measurements were also made in the supersonic high...
Three-body radiative capture reactions
NASA Astrophysics Data System (ADS)
Casal, J.; Rodríguez-Gallardo, M.; Arias, J. M.; Gómez-Camacho, J.
2018-01-01
Radiative capture reaction rates for 6He, 9Be and 17Ne formation at astrophysical conditions are studied within a three-body model using the analytical transformed harmonic oscillator method to calculate their states. An alternative procedure to estimate these rates from experimental data on low-energy breakup is also discussed.
EVALUATION OF FOREST CANOPY MODELS FOR ESTIMATING ISOPRENE EMISSIONS
During the summer of 1992, isoprene emissions were measured in a mixed deciduous forest near Oak Ridge, Tennessee. Measurements were aimed at the experimental scale-up of emissions from the leaf level to the forest canopy to the mixed layer. Results from the scale-up study are co...
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, five previously developed state estimation methods are examined and compared for estimating biomass concentration in a production-scale fed-batch bioprocess. These methods are: (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) estimation based on an observer; (iv) estimation based on an artificial neural network; and (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although it requires more measurements than the other methods. However, the required extra measurements come from instruments commonly employed in an industrial environment. This method is used for developing model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gurung, H.; Banerjee, A.
2016-02-01
This report presents the development of an extended Kalman filter (EKF) to harness the self-sensing capability of a shape memory alloy (SMA) wire, actuating a linear spring. The stress and temperature of the SMA wire, constituting the state of the system, are estimated using the EKF, from the measured change in electrical resistance (ER) of the SMA. The estimated stress is used to compute the change in length of the spring, eliminating the need for a displacement sensor. The system model used in the EKF comprises the heat balance equation and the constitutive relation of the SMA wire coupled with the force-displacement behavior of a spring. Both explicit and implicit approaches are adopted to evaluate the system model at each time-update step of the EKF. Next, in the measurement-update step, estimated states are updated based on the measured electrical resistance. It has been observed that for the same time step, the implicit approach consumes less computational time than the explicit method. To verify the implementation, EKF estimated states of the system are compared with those of an established model for different inputs to the SMA wire. An experimental setup is developed to measure the actual spring displacement and ER of the SMA, for any time-varying voltage applied to it. The process noise covariance is decided using a heuristic approach, whereas the measurement noise covariance is obtained experimentally. Finally, the EKF is used to estimate the spring displacement for a given input and the corresponding experimentally obtained ER of the SMA. The qualitative agreement between the EKF estimated displacement with that obtained experimentally reveals the true potential of this approach to harness the self-sensing capability of the SMA.
Estimates of Stellar Weak Interaction Rates for Nuclei in the Mass Range A=65-80
NASA Astrophysics Data System (ADS)
Pruet, Jason; Fuller, George M.
2003-11-01
We estimate lepton capture and emission rates, as well as neutrino energy loss rates, for nuclei in the mass range A=65-80. These rates are calculated on a temperature/density grid appropriate for a wide range of astrophysical applications including simulations of late time stellar evolution and X-ray bursts. The basic inputs in our single-particle and empirically inspired model are (i) experimentally measured level information, weak transition matrix elements, and lifetimes, (ii) estimates of matrix elements for allowed experimentally unmeasured transitions based on the systematics of experimentally observed allowed transitions, and (iii) estimates of the centroids of the GT resonances motivated by shell model calculations in the fp shell as well as by (n, p) and (p, n) experiments. Fermi resonances (isobaric analog states) are also included, and it is shown that Fermi transitions dominate the rates for most interesting proton-rich nuclei for which an experimentally determined ground state lifetime is unavailable. For the purposes of comparing our results with more detailed shell model based calculations we also calculate weak rates for nuclei in the mass range A=60-65 for which Langanke & Martinez-Pinedo have provided rates. The typical deviation in the electron capture and β-decay rates for these ~30 nuclei is less than a factor of 2 or 3 for a wide range of temperature and density appropriate for presupernova stellar evolution. We also discuss some subtleties associated with the partition functions used in calculations of stellar weak rates and show that the proper treatment of the partition functions is essential for estimating high-temperature β-decay rates. In particular, we show that partition functions based on unconverged Lanczos calculations can result in errors in estimates of high-temperature β-decay rates.
[Various aspects of personality change in aphasia].
Glozman, Zh M; Tsyganok, A A
1982-01-01
An experimental study of the self-estimation of patients with aphasia, carried out by the polar-profile method in the course of restorative training, is described. It is shown that aphasia causes substantial changes in the patients' self-estimation, manifested as a disparity between self-estimation during and before the disease. A comparison with a control group of neurological patients without aphasia showed that the revealed changes are specific to aphasia and connected with the disruption of communication. As general and verbal communication are restored, a positive shift in the patients' self-estimation and its approach to the premorbid level are noted. A relation between the self-estimation shift and the form of aphasia was discovered. It is concluded that personality examination in aphasia has diagnostic and prognostic importance.
NASA Technical Reports Server (NTRS)
Abbey, Craig K.; Eckstein, Miguel P.
2002-01-01
We consider estimation and statistical hypothesis testing on classification images obtained from the two-alternative forced-choice experimental paradigm. We begin with a probabilistic model of task performance for simple forced-choice detection and discrimination tasks. Particular attention is paid to general linear filter models because these models lead to a direct interpretation of the classification image as an estimate of the filter weights. We then describe an estimation procedure for obtaining classification images from observer data. A number of statistical tests are presented for testing various hypotheses from classification images based on some more compact set of features derived from them. As an example of how the methods we describe can be used, we present a case study investigating detection of a Gaussian bump profile.
Raguin, Olivier; Gruaz-Guyon, Anne; Barbet, Jacques
2002-11-01
An add-in to Microsoft Excel was developed to simulate multiple binding equilibria. A partition function, readily written even when the equilibrium is complex, describes the experimental system. It involves the concentrations of the different free molecular species and of the different complexes present in the experiment. As a result, the software is not restricted to a series of predefined experimental setups but can handle a large variety of problems involving up to nine independent molecular species. Binding parameters are estimated by nonlinear least-squares fitting of experimental measurements supplied by the user. The fitting process allows user-defined weighting of the experimental data. The flexibility of the software, and the way it may be used to describe common experimental situations and to deal with usual problems such as tracer reactivity or nonspecific binding, is demonstrated by a few examples. The software is available free of charge upon request.
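To illustrate the underlying idea, here is a minimal sketch of least-squares estimation of binding parameters for a simple hypothetical 1:1 equilibrium (the add-in handles far more general partition functions; a crude grid search stands in for its nonlinear least-squares solver, and the data are synthetic):

```python
# 1:1 binding isotherm: bound = Bmax * [L] / (Kd + [L])
def bound(L, Bmax, Kd):
    return Bmax * L / (Kd + L)

ligand = [0.1, 0.3, 1.0, 3.0, 10.0, 30.0]     # free ligand concentrations
data = [bound(L, 5.0, 2.0) for L in ligand]   # noise-free "measurements"

def sse(Bmax, Kd):
    # Sum of squared residuals between model and measurements.
    return sum((bound(L, Bmax, Kd) - y) ** 2 for L, y in zip(ligand, data))

# Grid search over plausible parameter ranges.
best = min(((sse(B, K), B, K)
            for B in [x * 0.1 for x in range(10, 101)]
            for K in [x * 0.1 for x in range(1, 101)]),
           key=lambda t: t[0])
_, Bmax_fit, Kd_fit = best
print(Bmax_fit, Kd_fit)  # recovers Bmax≈5.0, Kd≈2.0
```

With real (noisy, weighted) data, a proper optimizer replaces the grid, but the objective — squared residuals between the partition-function model and the measurements — is the same.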
A new biodegradation prediction model specific to petroleum hydrocarbons.
Howard, Philip; Meylan, William; Aronson, Dallas; Stiteler, William; Tunkel, Jay; Comber, Michael; Parkerton, Thomas F
2005-08-01
A new predictive model for determining quantitative primary biodegradation half-lives of individual petroleum hydrocarbons has been developed. This model uses a fragment-based approach similar to that of several other biodegradation models, such as those within the Biodegradation Probability Program (BIOWIN) estimation program. In the present study, a half-life in days is estimated using multiple linear regression against counts of 31 distinct molecular fragments. The model was developed using a data set consisting of 175 compounds with environmentally relevant experimental data that was divided into training and validation sets. The original fragments from the Ministry of International Trade and Industry BIOWIN model were used initially as structural descriptors and additional fragments were then added to better describe the ring systems found in petroleum hydrocarbons and to adjust for nonlinearity within the experimental data. The training and validation sets had r2 values of 0.91 and 0.81, respectively.
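The fragment-count regression described above can be sketched as follows. The fragments and coefficients here are invented for illustration (the published model uses 31 fragments with fitted values, not these); the point is the mechanics of regressing a half-life measure on fragment counts via the normal equations:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small linear system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Design matrix columns: intercept, hypothetical fragment counts
# (e.g. aromatic rings, branched carbons) for five compounds.
X = [[1, 0, 0], [1, 1, 0], [1, 0, 2], [1, 2, 1], [1, 1, 3]]
true_beta = [0.5, 1.2, 0.8]                     # illustrative coefficients
y = [sum(b * x for b, x in zip(true_beta, row)) for row in X]

# Ordinary least squares via the normal equations: (X'X) beta = X'y.
XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(3)]
       for i in range(3)]
Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(3)]
beta = solve(XtX, Xty)
print([round(b, 6) for b in beta])  # recovers [0.5, 1.2, 0.8]
```

In the actual model the response is an environmentally relevant half-life and the 31 fragment counts are the predictors, fitted on the 175-compound training set.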
Maternal employment and the health of low-income young children.
Gennetian, Lisa A; Hill, Heather D; London, Andrew S; Lopoo, Leonard M
2010-05-01
This study examines whether maternal employment affects the health status of low-income, elementary-school-aged children, using instrumental variables (IV) estimation and experimental data from a welfare-to-work program implemented in the early 1990s. Maternal report of child health status is predicted as a function of exogenous variation in maternal employment associated with random assignment to the experimental group. IV estimates show a modest adverse effect of maternal employment on children's health. Making use of data from another welfare-to-work program, we propose that any adverse effect on child health may be tempered by increased family income and access to public health insurance coverage, findings with direct relevance to a number of current policy discussions. In a secondary analysis using fixed-effects techniques on longitudinal survey data collected in 1998 and 2001, we find a comparable adverse effect of maternal employment on child health, which supports the external validity of our primary result.
NASA Astrophysics Data System (ADS)
Schönherr, Holger; Hain, Nicole; Walczyk, Wiktoria; Wesner, Daniel; Druzhinin, Sergey I.
2016-08-01
In this review, surface nanobubbles, which are presumably gas-filled enclosures found at the solid-liquid interface, are introduced and discussed together with key experimental findings suggesting that these nanoscale features indeed exist and are filled with gas. The most prominent technique used thus far has been atomic force microscopy (AFM); however, due to its potentially invasive nature, AFM data must be interpreted with great care. Owing to their curved interface, the Laplace internal pressure of surface nanobubbles substantially exceeds the ambient pressure, and the experimentally observed long-term stability conflicts with estimates of gas transport rates and predicted surface nanobubble lifetimes. Despite recent explanations of both the stability and the unusual nanoscopic contact angles, the development of new co-localization approaches and the adequate analysis of AFM data of surface nanobubbles remain important as a means to confirm the gaseous nature and correctly estimate the interfacial curvature.
Effect of Iron Redox Equilibrium on the Foaming Behavior of MgO-Saturated Slags
NASA Astrophysics Data System (ADS)
Park, Youngjoo; Min, Dong Joon
2018-04-01
In this study, the foaming index of CaO-SiO2-FetO and CaO-SiO2-FetO-Al2O3 slags saturated with MgO was measured to understand the relationship between their foaming behavior and physical properties. The foaming index of MgO-saturated slags increases with the FetO content due to the redox equilibrium of FetO. Experimental results indicated that MgO-saturated slag has a relatively high ferric ion concentration and that the foaming index increases due to the effect of ferric ions. Therefore, the foaming behavior of MgO-saturated slag is more reasonably explained by considering the effect of ferric ions on the estimation of slag properties such as viscosity, surface tension, and density. Specifically, the estimation of slag viscosity was additionally verified by the NBO/T ratio, obtained experimentally through Raman spectroscopy.
Fabietti, P G; Calabrese, G; Iorio, M; Bistoni, S; Brunetti, P; Sarti, E; Benedetti, M M
2001-10-01
Nine type 1 diabetic patients were studied for 24 hours, during which they were given three calibrated meals. Glycemia was feedback-controlled by means of an artificial pancreas. The blood glucose concentration and the insulin infusion rate were measured every minute. The experimental data for each of the three meals were used to estimate the parameters of a mathematical model suitable for describing the glycemic response of diabetic patients to meals and to the i.v. infusion of exogenous insulin. The estimates showed a marked dispersion of the parameters, both interindividual and intraindividual. Nevertheless, the models thus obtained seem usable for the synthesis of a feedback controller, especially in view of creating a portable artificial pancreas, which now seems possible owing to the realization (so far experimental) of sufficiently reliable glucose concentration sensors.
Spectral Induced Polarization approaches to characterize reactive transport parameters and processes
NASA Astrophysics Data System (ADS)
Schmutz, M.; Franceschi, M.; Revil, A.; Peruzzo, L.; Maury, T.; Vaudelet, P.; Ghorbani, A.; Hubbard, S. S.
2017-12-01
For almost a decade, geophysical methods have explored the potential for characterization of reactive transport parameters and processes relevant to hydrogeology, contaminant remediation, and oil and gas applications. Spectral Induced Polarization (SIP) methods show particular promise in this endeavour, given the sensitivity of the SIP signature to the electrical double-layer properties of geological materials and the critical role of the electrical double layer in reactive transport processes such as adsorption. In this presentation, we discuss results from several recent studies performed to quantify the value of SIP parameters for characterizing reactive transport parameters. The advances have been realized by performing experimental studies and interpreting their responses using theoretical and numerical approaches. We describe a series of controlled experimental studies performed to quantify the SIP responses to variations in grain size and specific surface area, pore-fluid geochemistry, and other factors. We also model chemical reactions at the fluid/matrix interface linked to part of our experimental data set. For some examples, both geochemical modelling and measurements are integrated into a physico-chemically based SIP model. Our studies indicate both the potential of and the opportunity for using SIP to estimate reactive transport parameters. For well-sorted samples, we find that grain size (as well as permeability, in some specific examples) can be estimated using SIP. We show that SIP is sensitive to physico-chemical conditions at the fluid/mineral interface, including different dissolved ions in the pore fluid (Na+, Cu2+, Zn2+, Pb2+), owing to their different adsorption behavior. We also show the relevance of our approach for characterizing the fluid/matrix interaction for various organic contents (wetting and non-wetting oils).
We also discuss early efforts to jointly interpret SIP and other information for improved estimation, approaches to use SIP information to constrain mechanistic flow and transport models, and the potential to apply some of the approaches to field scale applications.
ERIC Educational Resources Information Center
Tanner-Smith, Emily E.; Lipsey, Mark W.
2014-01-01
There are many situations where random assignment of participants to treatment and comparison conditions may be unethical or impractical. This article provides an overview of propensity score techniques that can be used for estimating treatment effects in nonrandomized quasi-experimental studies. After reviewing the logic of propensity score…
Stand-level bird response to experimental forest management in the Missouri Ozarks
Sarah W. Kendrick; Paul A. Porneluzi; Frank R. Thompson; Dana L. Morris; Janet M. Haslerig; John Faaborg
2015-01-01
Long-term landscape-scale experiments allow for the detection of effects of silviculture on bird abundance. Manipulative studies allow for strong inference on effects and confirmation of patterns from observational studies.We estimated bird-territory density within forest stands (2.89-62 ha) for 19 years of the Missouri Ozark Forest Ecosystem Project (MOFEP), a 100-...
Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Pardo-Vazquez, Jose L; Leboran, Victor; Molenberghs, Geert; Faes, Christel; Acuña, Carlos
2011-06-30
It is well established that neural activity is stochastically modulated over time. Therefore, direct comparisons across experimental conditions and determination of change points or maximum firing rates are not straightforward. This study sought to compare temporal firing probability curves that may vary across groups defined by different experimental conditions. Odds-ratio (OR) curves were used as a measure of comparison, and the main goal was to provide a global test to detect significant differences of such curves through the study of their derivatives. An algorithm is proposed that enables ORs based on generalized additive models, including factor-by-curve-type interactions to be flexibly estimated. Bootstrap methods were used to draw inferences from the derivatives curves, and binning techniques were applied to speed up computation in the estimation and testing processes. A simulation study was conducted to assess the validity of these bootstrap-based tests. This methodology was applied to study premotor ventral cortex neural activity associated with decision-making. The proposed statistical procedures proved very useful in revealing the neural activity correlates of decision-making in a visual discrimination task. Copyright © 2011 John Wiley & Sons, Ltd.
Del Casale, Antonio; Ferracuti, Stefano; Rapinesi, Chiara; De Rossi, Pietro; Angeletti, Gloria; Sani, Gabriele; Kotzalidis, Georgios D; Girardi, Paolo
2015-12-01
Several studies reported that hypnosis can modulate pain perception and tolerance by affecting cortical and subcortical activity in brain regions involved in these processes. We conducted an Activation Likelihood Estimation (ALE) meta-analysis on functional neuroimaging studies of pain perception under hypnosis to identify brain activation-deactivation patterns occurring during hypnotic suggestions aiming at pain reduction, including hypnotic analgesic, pleasant, or depersonalization suggestions (HASs). We searched the PubMed, Embase and PsycInfo databases; we included papers published in peer-reviewed journals dealing with functional neuroimaging and hypnosis-modulated pain perception. The ALE meta-analysis encompassed data from 75 healthy volunteers reported in 8 functional neuroimaging studies. HASs during experimentally-induced pain compared to control conditions correlated with significant activations of the right anterior cingulate cortex (Brodmann's Area [BA] 32), left superior frontal gyrus (BA 6), and right insula, and deactivation of right midline nuclei of the thalamus. HASs during experimental pain impact both cortical and subcortical brain activity. The anterior cingulate, left superior frontal, and right insular cortices activation increases could induce a thalamic deactivation (top-down inhibition), which may correlate with reductions in pain intensity. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kornelia, Indykiewicz; Bogdan, Paszkiewicz; Tomasz, Szymański; Regina, Paszkiewicz
2015-01-01
The exposure of a Hi/Lo bilayer resist system in the e-beam lithography (EBL) process, intended for fabricating mushroom-like profiles, was studied. Different exposure parameters and their influence on the resist layers were simulated in CASINO software, and the results obtained were compared with the experimental data. The AFM technique was used to estimate the e-beam penetration depth in the resist stack. The numerical and experimental results allowed us to establish useful ranges of the exposure parameters.
Coagulation of dust grains in the plasma of an RF discharge in argon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mankelevich, Yu. A.; Olevanov, M. A.; Pal', A. F.
2009-03-15
Results are presented from experimental studies of coagulation of dust grains of different sizes injected into a low-temperature plasma of an RF discharge in argon. A theoretical model describing the formation of dust clusters in a low-temperature plasma is developed and applied to interpret the results of experiments on the coagulation of dust grains having large negative charges. The grain size at which coagulation under the given plasma conditions is possible is estimated using the developed theory. The theoretical results are compared with the experimental data.
Ahmed, Hafiz; Salgado, Ivan; Ríos, Héctor
2018-02-01
Robust synchronization of master-slave chaotic systems is considered in this work. First, an approximate model of the error system is obtained using the ultra-local model concept. Then, a Continuous Singular Terminal Sliding-Mode (CSTSM) controller is designed for the purpose of synchronization. The proposed approach is output-feedback-based and uses a fixed-time higher-order sliding-mode (HOSM) differentiator for state estimation. Numerical simulation and experimental results are given to show the effectiveness of the proposed technique. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Experimental Hydrodynamics of Turning Maneuvers in Koi Carps
NASA Astrophysics Data System (ADS)
Wu, G. H.; Yang, Y.; Zeng, L. J.
The experimental hydrodynamics of two types of turning maneuvers in koi carps (Cyprinus carpio koi) are studied. The flow patterns generated by the carps during turning are quantified using digital particle image velocimetry. Based on the measured velocity fields, the momenta in the wake and the impulsive moments exerted on the carps are estimated. In addition, the turning rates and radii, and the moments of inertia of the carps (including added mass) during turning, are obtained by processing the recorded images. Comparison of the impulsive moments and moments of inertia shows good agreement.
Stuart, Elizabeth A.; DuGoff, Eva; Abrams, Michael; Salkever, David; Steinwachs, Donald
2013-01-01
Electronic health data sets, including electronic health records (EHR) and other administrative databases, are rich data sources that have the potential to help answer important questions about the effects of clinical interventions as well as policy changes. However, analyses using such data are almost always non-experimental, leading to concerns that those who receive a particular intervention are likely different from those who do not in ways that may confound the effects of interest. This paper outlines the challenges in estimating causal effects using electronic health data and offers some solutions, with particular attention paid to propensity score methods that help ensure comparisons between similar groups. The methods are illustrated with a case study describing the design of a study using Medicare and Medicaid administrative data to estimate the effect of the Medicare Part D prescription drug program on individuals with serious mental illness. PMID:24921064
Chen, S C; You, S H; Liu, C Y; Chio, C P; Liao, C M
2012-09-01
The aim of this work was to use experimental infection data of human influenza to assess a simple viral dynamics model in epithelial cells and better understand the underlying complex factors governing the infection process. The developed study model expands on previous reports of a target cell-limited model with delayed virus production. Data from 10 published experimental infection studies of human influenza was used to validate the model. Our results elucidate, mechanistically, the associations between epithelial cells, human immune responses, and viral titres and were supported by the experimental infection data. We report that the maximum total number of free virions following infection is 10³-fold higher than the initial introduced titre. Our results indicated that the infection rates of unprotected epithelial cells probably play an important role in affecting viral dynamics. By simulating an advanced model of viral dynamics and applying it to experimental infection data of human influenza, we obtained important estimates of the infection rate. This work provides epidemiologically meaningful results, meriting further efforts to understand the causes and consequences of influenza A infection.
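One common form of a target-cell-limited model with delayed virus production adds an eclipse compartment between infection and virion release. A rough Euler-integration sketch of such a model (parameter values here are invented for illustration, not the estimates from this study) is:

```python
# States: T = target cells, I1 = eclipse-phase cells (infected, not yet
# producing), I2 = productively infected cells, V = free virus titre.
beta, k, delta, p, c = 3e-5, 4.0, 3.0, 0.04, 3.0   # illustrative rates (/day)
T, I1, I2, V = 4e8, 0.0, 0.0, 1.0                  # initial conditions
dt = 0.001                                          # days per Euler step
peak = V
for _ in range(int(10 / dt)):                       # simulate 10 days
    dT = -beta * T * V                  # infection consumes target cells
    dI1 = beta * T * V - k * I1        # eclipse phase, exits at rate k
    dI2 = k * I1 - delta * I2          # productive cells die at rate delta
    dV = p * I2 - c * V                # production minus clearance
    T += dT * dt; I1 += dI1 * dt; I2 += dI2 * dt; V += dV * dt
    peak = max(peak, V)
print(peak > 1e3)  # peak titre is orders of magnitude above the inoculum
```

The qualitative behavior (sharp rise of V by several orders of magnitude, then decline as target cells deplete) matches the dynamics the abstract describes; fitting such a model to titre data yields the infection-rate estimates the authors report.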
Crowdsourcing for Cognitive Science – The Utility of Smartphones
Brown, Harriet R.; Zeidman, Peter; Smittenaar, Peter; Adams, Rick A.; McNab, Fiona; Rutledge, Robb B.; Dolan, Raymond J.
2014-01-01
By 2015, there will be an estimated two billion smartphone users worldwide. This technology presents exciting opportunities for cognitive science as a medium for rapid, large-scale experimentation and data collection. At present, cost and logistics limit most study populations to small samples, restricting the experimental questions that can be addressed. In this study we investigated whether the mass collection of experimental data using smartphone technology is valid, given the variability of data collection outside of a laboratory setting. We presented four classic experimental paradigms as short games, available as a free app and over the first month 20,800 users submitted data. We found that the large sample size vastly outweighed the noise inherent in collecting data outside a controlled laboratory setting, and show that for all four games canonical results were reproduced. For the first time, we provide experimental validation for the use of smartphones for data collection in cognitive science, which can lead to the collection of richer data sets and a significant cost reduction as well as provide an opportunity for efficient phenotypic screening of large populations. PMID:25025865
Link-state-estimation-based transmission power control in wireless body area networks.
Kim, Seungku; Eom, Doo-Seop
2014-07-01
This paper presents a novel transmission power control protocol to extend the lifetime of sensor nodes and to increase link reliability in wireless body area networks (WBANs). We first experimentally investigate the properties of link states using the received signal strength indicator (RSSI). We then propose a practical transmission power control protocol based on both short- and long-term link-state estimation. The short- and long-term link-state estimates enable the transceiver to adapt the transmission power level and the target RSSI threshold range, respectively, thereby simultaneously satisfying the requirements of energy efficiency and link reliability. Finally, the performance of the proposed protocol is experimentally evaluated in two scenarios (body posture change and dynamic body motion) and compared with typical WBAN transmission power control protocols: a real-time reactive scheme and a dynamic postural position inference mechanism. From the experimental results, the proposed protocol is found to increase the lifetime of the sensor nodes by up to 9.86% and to enhance link reliability by reducing packet loss by up to 3.02%.
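The control idea can be sketched with exponentially weighted moving averages: a fast-reacting average of the RSSI drives the power-level choice toward a target window. The constants, power levels, and channel model below are hypothetical stand-ins, not the protocol's published parameters:

```python
import random

random.seed(7)
LEVELS = list(range(-20, 1, 2))            # candidate tx power levels, dBm
TARGET_LOW, TARGET_HIGH = -85.0, -75.0     # target RSSI window, dBm

power_idx = len(LEVELS) - 1                # start at maximum power
short, longterm = -80.0, -80.0             # short-/long-term link estimates
for _ in range(300):
    path_loss = 62.0 + random.gauss(0.0, 3.0)    # channel + fading, dB
    rssi = LEVELS[power_idx] - path_loss
    short = 0.6 * short + 0.4 * rssi             # fast EWMA: tracks fading
    longterm = 0.95 * longterm + 0.05 * rssi     # slow EWMA: overall link state
    if short > TARGET_HIGH and power_idx > 0:
        power_idx -= 1                           # link is strong: save energy
    elif short < TARGET_LOW and power_idx < len(LEVELS) - 1:
        power_idx += 1                           # link is weak: add margin
print(LEVELS[power_idx])
```

In the actual protocol the long-term estimate additionally adapts the target window itself to the body's posture and motion; here it is only computed, to show the two time scales side by side.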
Studies on possible propagation of microbial contamination in planetary clouds
NASA Technical Reports Server (NTRS)
Dimmick, R. L.; Chatigny, M. A.; Wolochow, H.
1973-01-01
One of the key parameters in estimation of the probability of contamination of the outer planets (Jupiter, Saturn, Uranus, etc.) is the probability of growth (Pg) of terrestrial microorganisms on or near these planets. For example, Jupiter appears to have an atmosphere in which some microbial species could metabolize and propagate. This study includes investigation of the likelihood of metabolism and propagation of microbes suspended in dynamic atmospheres. It is directed toward providing experimental information needed to aid in rational estimation of Pg for these outer planets. Current work is directed at demonstration of aerial metabolism under near-optimal conditions and tests of propagation in simulated Jovian atmospheres.
Studies on possible propagation of microbial contamination in planetary clouds
NASA Technical Reports Server (NTRS)
Dimmick, R. L.; Chatigny, M. A.
1973-01-01
Current U.S. planetary quarantine standards based on international agreements require consideration of the probability of contamination (Pc) of the outer planets, Venus, Jupiter, Saturn, etc. One of the key parameters in estimation of the Pc of these planets is the probability of growth (Pg) of terrestrial microorganisms on or near these planets. For example, Jupiter and Saturn appear to have atmospheres in which some microbial species could metabolize and propagate. This study includes investigation of the likelihood of metabolism and propagation of microbes suspended in dynamic atmospheres. It is directed toward providing experimental information needed to aid in rational estimation of Pg for these outer planets.
Modeling air concentration over macro roughness conditions by Artificial Intelligence techniques
NASA Astrophysics Data System (ADS)
Roshni, T.; Pagliara, S.
2018-05-01
Aeration in rivers is improved by the turbulence created in flow over macro- and intermediate-roughness conditions, which are generated by flows over block ramps or rock chutes. The measurements are taken in the uniform flow region. Studies of the efficacy of soft-computing methods in modeling hydraulic parameters remain uncommon. In this study, the modeling efficiencies of the MPMR and FFNN models are evaluated for estimating air concentration over block ramps under macro-roughness conditions. The experimental data are used for the training and testing phases. The potential capability of the MPMR and FFNN models in estimating air concentration is demonstrated through this study.
Tashkova, Katerina; Korošec, Peter; Silc, Jurij; Todorovski, Ljupčo; Džeroski, Sašo
2011-10-11
We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. 
These results hold for both real and artificial data, for all observability scenarios considered, and for all amounts of noise added to the artificial data. In sum, the meta-heuristic methods considered are suitable for estimating the parameters in the ODE model of the dynamics of endocytosis under a range of conditions: With the model and conditions being representative of parameter estimation tasks in ODE models of biochemical systems, our results clearly highlight the promise of bio-inspired meta-heuristic methods for parameter estimation in dynamic system models within system biology.
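Of the meta-heuristics compared above, differential evolution (DE) performed best. A minimal DE/rand/1/bin sketch on a toy least-squares objective (a two-parameter exponential decay standing in for the full ODE parameter-estimation task; population size, F, and CR are illustrative choices) looks like:

```python
import math
import random

random.seed(3)
ts = [i * 0.5 for i in range(10)]
obs = [2.0 * math.exp(-0.7 * t) for t in ts]   # "data" from a=2.0, b=0.7

def cost(p):
    # Sum of squared residuals between model output and observations.
    a, b = p
    return sum((a * math.exp(-b * t) - y) ** 2 for t, y in zip(ts, obs))

NP, F, CR, lo, hi = 20, 0.7, 0.9, 0.0, 3.0     # DE settings and bounds
pop = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(NP)]
for _ in range(200):                            # generations
    for i in range(NP):
        # Mutation: combine three distinct individuals (rand/1).
        r1, r2, r3 = random.sample([j for j in range(NP) if j != i], 3)
        # Binomial crossover with clamping to the bounds.
        trial = [pop[i][d] if random.random() > CR else
                 min(hi, max(lo, pop[r1][d] + F * (pop[r2][d] - pop[r3][d])))
                 for d in range(2)]
        # Greedy selection: keep the better of trial and target.
        if cost(trial) <= cost(pop[i]):
            pop[i] = trial
best = min(pop, key=cost)
print(best)  # ≈ [2.0, 0.7]
```

In the endocytosis task the objective is the discrepancy between measured Rab5/Rab7 concentrations and the ODE solution, so each cost evaluation requires a numerical integration, but the DE loop itself is unchanged.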
2011-01-01
Background We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. 
These results hold for both real and artificial data, for all observability scenarios considered, and for all amounts of noise added to the artificial data. In sum, the meta-heuristic methods considered are suitable for estimating the parameters in the ODE model of the dynamics of endocytosis under a range of conditions. With the model and conditions being representative of parameter estimation tasks in ODE models of biochemical systems, our results clearly highlight the promise of bio-inspired meta-heuristic methods for parameter estimation in dynamic system models within systems biology. PMID:21989196
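The best-performing method above, differential evolution, can be sketched as a classic DE/rand/1/bin search minimizing the squared error between an ODE simulation and measured data. The endocytosis model itself is not given in the abstract, so the logistic ODE, the forward-Euler integrator, and all constants below are illustrative stand-ins:

```python
import numpy as np

def simulate(theta, t):
    # Forward-Euler integration of a simple logistic ODE dx/dt = r*x*(1 - x/K);
    # a stand-in for the (unspecified) endocytosis model, for illustration only.
    r, K = theta
    x = np.empty_like(t)
    x[0] = 0.1
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        x[i] = x[i - 1] + dt * r * x[i - 1] * (1 - x[i - 1] / K)
    return x

def differential_evolution(cost, bounds, pop=20, gens=200, F=0.8, CR=0.9, seed=0):
    # Classic DE/rand/1/bin global search over a bounded parameter space.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    f = np.array([cost(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # mutation
            cross = rng.random(len(lo)) < CR                # binomial crossover
            trial = np.where(cross, mutant, X[i])
            ft = cost(trial)
            if ft < f[i]:                                   # greedy selection
                X[i], f[i] = trial, ft
    return X[np.argmin(f)], f.min()

# Synthetic "measurements": noisy output of the true system.
t = np.linspace(0, 10, 101)
true_theta = (0.9, 2.0)
data = simulate(true_theta, t) + np.random.default_rng(1).normal(0, 0.02, t.size)

cost = lambda th: np.sum((simulate(th, t) - data) ** 2)
best, best_cost = differential_evolution(cost, bounds=[(0.1, 3.0), (0.5, 5.0)])
```

With the bounded search space, DE recovers the true parameters without any initial estimate, which is exactly the advantage over local derivative-based methods that the abstract reports.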
Neutral, ion gas-phase energetics and structural properties of hydroxybenzophenones.
Dávalos, Juan Z; Guerrero, Andrés; Herrero, Rebeca; Jimenez, Pilar; Chana, Antonio; Abboud, José Luis M; Lima, Carlos F R A C; Santos, Luís M N B F; Lago, Alexsandre F
2010-04-16
We have carried out a study of the energetics, structural, and physical properties of o-, m-, and p-hydroxybenzophenone neutral molecules, C(13)H(10)O(2), and their corresponding anions. In particular, the standard enthalpies of formation in the gas phase at 298.15 K for all of these species were determined. A reliable experimental estimation of the enthalpy associated with intramolecular hydrogen bonding in chelated species was experimentally obtained. The gas-phase acidities (GA) of benzophenones, substituted phenols, and several aliphatic alcohols are compared with the corresponding aqueous acidities (pK(a)), covering a range of 278 kJ.mol(-1) in GA and 11.4 in pK(a). A computational study of the various species shed light on structural effects and further confirmed the self-consistency of the experimental results.
Small, J R
1993-01-01
This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Estimation of sample size and testing power (part 6).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-03-01
The design of one factor with k levels (k ≥ 3) refers to the research that only involves one experimental factor with k levels (k ≥ 3), and there is no arrangement for other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and qualitative data having a binary response variable with the design of one factor with k levels (k ≥ 3).
Sapra, Mahak; Ugrani, Suraj; Mayya, Y S; Venkataraman, Chandra
2017-08-15
Air-jet atomization of solution into droplets followed by controlled drying is increasingly being used to produce nanoparticles for drug delivery applications. Nanoparticle size is an important parameter that influences the stability, bioavailability and efficacy of the drug. In the air-jet atomization technique, dry particle diameters are generally predicted using solute diffusion models built around the key concept of a critical supersaturation solubility ratio (Sc) that dictates the point of crust formation within the droplet. As no reliable method exists to determine this quantity, the present study proposes an aerosol-based method to determine Sc for a given solute-solvent system and process conditions. The feasibility has been demonstrated by conducting experiments for stearic acid in ethanol and chloroform as well as for the anti-tubercular drug isoniazid in ethanol. Sc values were estimated by combining the experimentally observed particle and droplet diameters with simulations from a solute diffusion model. Important findings of the study were: (i) the measured droplet diameters systematically decreased with increasing precursor concentration; (ii) estimated Sc values were 9.3±0.7, 13.3±2.4 and 18±0.8 for stearic acid in chloroform, stearic acid in ethanol and isoniazid in ethanol, respectively; (iii) experimental results pointed to the correct interfacial tension pre-factor to be used in theoretical estimates of Sc; and (iv) results showed consistent evidence for the existence of an induction time delay between the attainment of the theoretical Sc and crust formation. The proposed approach has been validated by testing its predictive power for a challenge concentration against experimental data. The study not only advances the spray-drying technique by establishing an aerosol-based approach to determine Sc, but also throws considerable light on the interfacial processes responsible for solid-phase formation in a rapidly supersaturating system.
Until satisfactory theoretical formulae for predicting Sc are developed, the present approach appears to offer the best option for engineering nanoparticle size through solute diffusion models. Copyright © 2017 Elsevier Inc. All rights reserved.
Tumlinson, Samuel E; Sass, Daniel A; Cano, Stephanie M
2014-03-01
While experimental designs are regarded as the gold standard for establishing causal relationships, such designs are usually impractical owing to common methodological limitations. The objective of this article is to illustrate how propensity score matching (PSM) and using propensity scores (PS) as a covariate are viable alternatives to reduce estimation error when experimental designs cannot be implemented. To mimic common pediatric research practices, data from 140 simulated participants were used to resemble an experimental and nonexperimental design that assessed the effect of treatment status on participant weight loss for diabetes. Pretreatment participant characteristics (age, gender, physical activity, etc.) were then used to generate PS for use in the various statistical approaches. Results demonstrate how PSM and using the PS as a covariate can be used to reduce estimation error and improve statistical inferences. References for issues related to the implementation of these procedures are provided to assist researchers.
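The matching approach described above can be sketched with a hand-rolled logistic propensity model and 1:1 nearest-neighbour matching on the score. The covariates, coefficients, and sample size below are invented for illustration, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Hypothetical pretreatment covariates; treatment assignment depends on them,
# so the naive treated-vs-control difference is confounded. True effect = -2.0.
age = rng.normal(50, 10, n)
act = rng.normal(3, 1, n)
z_age = (age - age.mean()) / age.std()
z_act = (act - act.mean()) / act.std()
p_treat = 1 / (1 + np.exp(-(-0.5 * z_age + 0.8 * z_act)))
treat = rng.random(n) < p_treat
y = -2.0 * treat + 0.1 * age - 1.0 * act + rng.normal(0, 1, n)

# Fit a logistic propensity model P(treat | covariates) by gradient ascent.
X = np.column_stack([np.ones(n), z_age, z_act])
w = np.zeros(3)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.5 * X.T @ (treat - p) / n
ps = 1 / (1 + np.exp(-X @ w))

# 1:1 nearest-neighbour matching on the propensity score (with replacement);
# the mean treated-minus-matched-control difference estimates the ATT.
t_idx, c_idx = np.where(treat)[0], np.where(~treat)[0]
matches = c_idx[np.argmin(np.abs(ps[t_idx, None] - ps[None, c_idx]), axis=1)]
naive = y[treat].mean() - y[~treat].mean()
att = (y[t_idx] - y[matches]).mean()
```

The naive difference absorbs the covariate imbalance, while the matched estimate lands close to the true effect — the estimation-error reduction the article demonstrates.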
Estimation of sample size and testing power (Part 3).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2011-12-01
This article introduces the definition and sample size estimation of three special tests (namely, non-inferiority test, equivalence test and superiority test) for qualitative data with the design of one factor with two levels having a binary response variable. Non-inferiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. Equivalence test refers to the research design of which the objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. Superiority test refers to the research design of which the objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. By specific examples, this article introduces formulas of sample size estimation for the three special tests, and their SAS realization in detail.
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
Peyrot des Gachons, Catherine; Avrillier, Julie; Gleason, Michael; Algarra, Laure; Zhang, Siyu; Mura, Emi; Nagai, Hajime
2016-01-01
Fluid ingestion is necessary for life, and thirst sensations are a prime motivator to drink. There is evidence of the influence of oropharyngeal stimulation on thirst and water intake in both animals and humans, but how those oral sensory cues impact thirst and ultimately the amount of liquid ingested is not well understood. We investigated which sensory trait(s) of a beverage influence the thirst quenching efficacy of ingested liquids and the perceived amount ingested. We deprived healthy individuals of liquid and food overnight (> 12 hours) to make them thirsty. After asking them to drink a fixed volume (400 mL) of an experimental beverage presenting one or two specific sensory traits, we determined the volume ingested of additional plain, ‘still’, room temperature water to assess their residual thirst and, by extension, the thirst-quenching properties of the experimental beverage. In a second study, participants were asked to drink the experimental beverages from an opaque container through a straw and estimate the volume ingested. We found that among several oro-sensory traits, the perceptions of coldness, induced either by cold water (thermally) or by l-menthol (chemically), and the feeling of oral carbonation, strongly enhance the thirst quenching properties of a beverage in water-deprived humans (additional water intake after the 400 ml experimental beverage was reduced by up to 50%). When blinded to the volume of liquid consumed, individuals' estimates of the ingested volume increase (~22%) with perceived oral cold and carbonation, raising the idea that cold and perhaps CO2-induced irritation sensations are included in how we normally encode water in the mouth and how we estimate the quantity of volume swallowed. These findings have implications for addressing inadequate hydration state in populations such as the elderly. PMID:27685093
Linearization improves the repeatability of quantitative dynamic contrast-enhanced MRI.
Jones, Kyle M; Pagel, Mark D; Cárdenas-Rodríguez, Julio
2018-04-01
The purpose of this study was to compare the repeatabilities of the linear and nonlinear Tofts and reference region models (RRM) for dynamic contrast-enhanced MRI (DCE-MRI). Simulated and experimental DCE-MRI data from 12 rats with a flank tumor of C6 glioma acquired over three consecutive days were analyzed using four quantitative and four semi-quantitative DCE-MRI metrics. The quantitative methods used were: 1) the linear Tofts model (LTM), 2) the non-linear Tofts model (NTM), 3) the linear RRM (LRRM), and 4) the non-linear RRM (NRRM). The following semi-quantitative metrics were used: 1) maximum enhancement ratio (MER), 2) time to peak (TTP), 3) initial area under the curve (iauc64), and 4) slope. LTM and NTM were used to estimate Ktrans, while LRRM and NRRM were used to estimate Ktrans relative to muscle (RKtrans). Repeatability was assessed by calculating the within-subject coefficient of variation (wSCV) and the percent intra-subject variation (iSV) determined with Gage R&R analysis. The iSV for RKtrans using LRRM was two-fold lower than with NRRM under all simulated and experimental conditions. A similar trend was observed for the Tofts model, where LTM was at least 50% more repeatable than NTM under all experimental and simulated conditions. The semi-quantitative metrics iauc64 and MER were as repeatable as Ktrans and RKtrans estimated by LTM and LRRM, respectively. The iSV for iauc64 and MER was significantly lower than the iSV for slope and TTP. In simulations and experiments, linearization improved the repeatability of quantitative DCE-MRI by at least 30%, making it as repeatable as the semi-quantitative metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
Neutron density profile in the lunar subsurface produced by galactic cosmic rays
NASA Astrophysics Data System (ADS)
Ota, Shuya; Sihver, Lembit; Kobayashi, Shingo; Hasebe, Nobuyuki
Neutron production by galactic cosmic rays (GCR) in the lunar subsurface is very important when performing lunar and planetary nuclear spectroscopy and space dosimetry. Further improvements to estimate this production with increased accuracy are therefore required. GCR, the main contributor to neutron production in the lunar subsurface, consists not only of protons but also of heavy components such as He, C, N, O, and Fe. It is therefore important to precisely estimate the neutron production from such components for lunar spectroscopy and space dosimetry. To this end, the neutron production from GCR particles including heavy components in the lunar subsurface was simulated with the Particle and Heavy Ion Transport code System (PHITS), using several heavy-ion interaction models. This work presents PHITS simulations of the neutron density as a function of depth (neutron density profile) in the lunar subsurface, and the results are compared with experimental data obtained by the Apollo 17 Lunar Neutron Probe Experiment (LNPE). From our previous study, it has been found that the accuracy of the proton-induced neutron production models is the most influential factor when performing precise calculations of neutron production in the lunar subsurface. Therefore, a benchmarking of proton-induced neutron production models against experimental data was performed to estimate and improve the precision of the calculations. It was found that the calculated neutron production using the best model of Cugnon Old (E < 3 GeV) and JAM (E > 3 GeV) gave up to 30% higher values than experimental results. Therefore, a high-energy nuclear data file (JENDL-HE) was used instead of the Cugnon Old model at energies below 3 GeV. The calculated neutron density profile then successfully reproduced the experimental data from LNPE within experimental errors of 15% (measurement) + 30% (systematic).
In this presentation, we summarize and discuss our calculated results of neutron production in the lunar subsurface.
Estimating peer effects in networks with peer encouragement designs.
Eckles, Dean; Kizilcec, René F; Bakshy, Eytan
2016-07-05
Peer effects, in which the behavior of an individual is affected by the behavior of their peers, are central to social science. Because peer effects are often confounded with homophily and common external causes, recent work has used randomized experiments to estimate effects of specific peer behaviors. These experiments have often relied on the experimenter being able to randomly modulate mechanisms by which peer behavior is transmitted to a focal individual. We describe experimental designs that instead randomly assign individuals' peers to encouragements to behaviors that directly affect those individuals. We illustrate this method with a large peer encouragement design on Facebook for estimating the effects of receiving feedback from peers on posts shared by focal individuals. We find evidence for substantial effects of receiving marginal feedback on multiple behaviors, including giving feedback to others and continued posting. These findings provide experimental evidence for the role of behaviors directed at specific individuals in the adoption and continued use of communication technologies. In comparison, observational estimates differ substantially, both underestimating and overestimating effects, suggesting that researchers and policy makers should be cautious in relying on them.
Analytical estimation shows low depth-independent water loss due to vapor flux from deep aquifers
NASA Astrophysics Data System (ADS)
Selker, John S.
2017-06-01
Recent articles have provided estimates of evaporative flux from water tables in deserts that span 5 orders of magnitude. In this paper, we present an analytical calculation that indicates aquifer vapor flux to be limited to 0.01 mm/yr for sites where there is negligible recharge and the water table is well over 20 m below the surface. This value arises from the geothermal gradient, and therefore, is nearly independent of the actual depth of the aquifer. The value is in agreement with several numerical studies, but is 500 times lower than recently reported experimental values, and 100 times larger than an earlier analytical estimate.
Estimation for time-changed self-similar stochastic processes
NASA Astrophysics Data System (ADS)
Arroum, W.; Jones, O. D.
2005-12-01
We consider processes of the form X(t) = X̃(θ(t)), where X̃ is a self-similar process with stationary increments and θ is a deterministic subordinator with a periodic activity function a = θ′ > 0. Such processes have been proposed as models for high-frequency financial data, such as currency exchange rates, where there are known to be daily and weekly periodic fluctuations in the volatility, captured here by the periodic activity function. We review an existing estimator for the activity function then propose three new methods for estimating it and present some experimental studies of their performance. We finish with an application to some foreign exchange and FTSE100 futures data.
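A quick simulation of such a time-changed process for the Brownian case (H = 1/2, where X̃ is standard Brownian motion): the increments of X(t) = B(θ(t)) have variance a(t)·dt, so the activity function can be read back off the realized squared increments. The sinusoidal activity function and all constants are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001
t = np.arange(0, 2, dt)
activity = 1 + 0.5 * np.cos(2 * np.pi * t)   # periodic activity a = θ′ > 0
theta = np.cumsum(activity) * dt             # deterministic subordinator θ(t)

# X(t) = B(θ(t)): Brownian increments rescaled by the increments of θ,
# so volatility follows the daily/weekly-style activity cycle.
paths = 500
dX = rng.normal(0, 1, (paths, t.size)) * np.sqrt(np.diff(theta, prepend=0.0))
X = dX.cumsum(axis=1)

# Naive estimator of the activity: realized squared increments averaged
# over independent paths, normalized by the clock step.
a_hat = (dX ** 2).mean(axis=0) / dt
```

Averaging over paths stands in for the intraday averaging over many periods that a real estimator on a single financial series would perform.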
Exact dimension estimation of interacting qubit systems assisted by a single quantum probe
NASA Astrophysics Data System (ADS)
Sone, Akira; Cappellaro, Paola
2017-12-01
Estimating the dimension of a Hilbert space is an important component of quantum system identification. In quantum technologies, the dimension of a quantum system (or its corresponding accessible Hilbert space) is an important resource, as larger dimensions determine, e.g., the performance of quantum computation protocols or the sensitivity of quantum sensors. Despite being a critical task in quantum system identification, estimating the Hilbert space dimension is experimentally challenging. While there have been proposals for various dimension witnesses capable of putting a lower bound on the dimension from measuring collective observables that encode correlations, in many practical scenarios, especially for multiqubit systems, the experimental control might not be able to engineer the required initialization, dynamics, and observables. Here we propose a more practical strategy that relies not on directly measuring an unknown multiqubit target system, but on the indirect interaction with a local quantum probe under the experimenter's control. Assuming only that the interaction model is given and the evolution correlates all the qubits with the probe, we combine a graph-theoretical approach and realization theory to demonstrate that the system dimension can be exactly estimated from the model order of the system. We further analyze the robustness in the presence of background noise of the proposed estimation method based on realization theory, finding that despite stringent constraints on the allowed noise level, exact dimension estimation can still be achieved.
Nematode Damage Functions: The Problems of Experimental and Sampling Error
Ferris, H.
1984-01-01
The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865
Jones, Terry L; Schlegel, Cara
2014-02-01
Accurate, precise, unbiased, reliable, and cost-effective estimates of nursing time use are needed to ensure safe staffing levels. Direct observation of nurses is costly, and conventional surrogate measures have limitations. To test the potential of electronic capture of time and motion through real time location systems (RTLS), a pilot study was conducted to assess efficacy (method agreement) of RTLS time use; inter-rater reliability of RTLS time-use estimates; and associated costs. Method agreement was high (mean absolute difference = 28 seconds); inter-rater reliability was high (ICC = 0.81-0.95; mean absolute difference = 2 seconds); and costs for obtaining RTLS time-use estimates on a single nursing unit exceeded $25,000. Continued experimentation with RTLS to obtain time-use estimates for nursing staff is warranted. © 2013 Wiley Periodicals, Inc.
Concepción-Acevedo, Jeniffer; Weiss, Howard N; Chaudhry, Waqas Nasir; Levin, Bruce R
2015-01-01
The maximum exponential growth rate, the Malthusian parameter (MP), is commonly used as a measure of fitness in experimental studies of adaptive evolution and of the effects of antibiotic resistance and other genes on the fitness of planktonic microbes. Thanks to automated, multi-well optical density plate readers and computers, with little hands-on effort investigators can readily obtain hundreds of estimates of MPs in less than a day. Here we compare estimates of the relative fitness of antibiotic susceptible and resistant strains of E. coli, Pseudomonas aeruginosa and Staphylococcus aureus based on MP data obtained with automated multi-well plate readers with the results from pairwise competition experiments. This leads us to question the reliability of estimates of MP obtained with these high throughput devices and the utility of these estimates of the maximum growth rates to detect fitness differences.
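The Malthusian parameter is typically extracted from plate-reader optical density curves as the steepest slope of ln(OD) over a sliding window of time points. A minimal sketch on a synthetic logistic growth curve (the window size and growth constants are illustrative assumptions, not values from the study):

```python
import numpy as np

def malthusian_parameter(t, od, window=5):
    # Maximum exponential growth rate: the steepest slope of ln(OD)
    # fitted over a sliding window of time points.
    log_od = np.log(od)
    slopes = []
    for i in range(len(t) - window + 1):
        slope = np.polyfit(t[i:i + window], log_od[i:i + window], 1)[0]
        slopes.append(slope)
    return max(slopes)

# Synthetic logistic growth curve: the early phase is exponential
# with rate r = 0.02 per minute.
t = np.arange(0, 600, 10.0)
r, K, x0 = 0.02, 1.0, 0.01
od = K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1))

mp = malthusian_parameter(t, od)
```

On clean data this recovers r almost exactly; the article's point is that on real plate-reader data (instrument noise, lag phases, OD-to-density nonlinearity), MP estimates obtained this way can disagree with head-to-head competition experiments.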
Nonparametric entropy estimation using kernel densities.
Lake, Douglas E
2009-01-01
The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
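For a Gaussian kernel, the integral ∫f(x)²dx underlying the quadratic (Renyi) entropy has a closed form in the pairwise sample distances, so H2 = -log ∫f² can be estimated without numerical integration. A minimal sketch assuming Silverman's rule-of-thumb bandwidth (the sample and constants are illustrative):

```python
import numpy as np

def quadratic_renyi_entropy(x, h=None):
    # H2 = -log ∫ f(x)^2 dx with f estimated by a Gaussian kernel density.
    # For Gaussian kernels the integral reduces to the closed form
    # (1/n^2) * sum_ij N(x_i - x_j; 0, 2*h^2).
    x = np.asarray(x, dtype=float)
    n = x.size
    if h is None:
        h = 1.06 * x.std() * n ** (-1 / 5)   # Silverman's rule of thumb
    d = x[:, None] - x[None, :]
    pairwise = np.exp(-d ** 2 / (4 * h ** 2)) / np.sqrt(4 * np.pi * h ** 2)
    return -np.log(pairwise.mean())

rng = np.random.default_rng(0)
sample = rng.normal(0, 1, 1000)

# For a standard normal, the exact value is H2 = log(2*sqrt(pi)) ≈ 1.266.
h2 = quadratic_renyi_entropy(sample)
```

The same quadratic-entropy quantity (up to the sign and log) is the Friedman-Tukey index the abstract mentions; lower H2 indicates a more concentrated, less Gaussian-like distribution.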
Trading Speed and Accuracy by Coding Time: A Coupled-circuit Cortical Model
Standage, Dominic; You, Hongzhi; Wang, Da-Hui; Dorris, Michael C.
2013-01-01
Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by ‘climbing’ activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification. PMID:23592967
Lim, Tau Meng; Cheng, Shanbao; Chua, Leok Poh
2009-07-01
Axial flow blood pumps are generally smaller than centrifugal pumps. This is very beneficial because they can provide a better anatomical fit in the chest cavity, as well as lower the risk of infection. This article discusses the design, levitated responses, and parameter estimation of the dynamic characteristics of a compact hybrid magnetic bearing (HMB) system for axial flow blood pump applications. The rotor/impeller of the pump is driven by a three-phase permanent magnet brushless and sensorless motor. It is levitated by two HMBs at both ends in five degrees of freedom with proportional-integral-derivative controllers, among which four radial directions are actively controlled and one axial direction is passively controlled. The frequency domain parameter estimation technique with statistical analysis is adopted to validate the stiffness and damping coefficients of the HMB system. A specially designed test rig facilitated the estimation of the bearing's coefficients in air, in both the radial and axial directions. Experimental estimation showed that the dynamic characteristics of the HMB system are dominated by the frequency-dependent stiffness coefficients. By injecting a multifrequency excitation force signal onto the rotor through the HMBs, the experimental results show that the maximum displacement linear operating range is 20% of the static eccentricity with respect to the rotor and stator gap clearance. The actuator gain was also successfully calibrated; the parameter estimation technique developed here may potentially be extended to identify and monitor the pump's dynamic properties under normal operating conditions with fluid.
NASA Astrophysics Data System (ADS)
Ulianova, O. V.; Uianov, S. S.; Li, Pengcheng; Luo, Qingming
2011-04-01
The method of speckle microscopy was adapted to estimate the reactogenicity of prototypes of vaccine preparations against extremely dangerous infections. A theory is proposed to describe the mechanism of formation of the output signal from the super-high spatial resolution speckle microscope. The experimental studies show that bacterial suspensions, irradiated in different regimes of inactivation, do not exert a negative influence on the blood microcirculation of laboratory animals.
Plasma disappearance of exogenous erythropoietin in mice under different experimental conditions.
Lezón, C E; Martínez, M P; Conti, M I; Bozzini, C E
1998-06-01
Erythropoietin (EPO) is a glycoprotein hormone produced primarily in the kidneys and to a lesser extent in the liver that regulates red cell production. Most of the studies conducted in experimental animals to assess the role of EPO in the regulation of erythropoiesis were performed in mouse models. However, little is known about the in vivo metabolism of the hormone in this species. The present study was thus undertaken to measure the plasma t1/2 of radiolabeled recombinant human EPO (rh-EPO) in normal mice as well as in mice with altered erythrocyte production rates (EPR), plasma EPO (pEPO) titer, marrow responsiveness, red cell volume, or liver function. Adult CF-1 mice of both sexes were used throughout. For the EPO life-span studies, 30 mice in each experiment were intravenously injected with 600,000 cpm of 125I-rh-EPO and bled by cardiac puncture in groups of five every hour for 6 h. Trichloroacetic acid (TCA) was added to each plasma sample and the radioactivity in the precipitate was measured in a gamma-counter. EPR, pEPO titer, marrow responsiveness, or red cell volume were altered by injections of rh-EPO, 5-fluorouracil, or phenylhydrazine, or by bleeding or red cell transfusion. Liver function was altered by CCl4 administration. In the normal groups of mice, the estimated t1/2 was 182.75 ± 14.4 (SEM) min. The estimated t1/2 of the other experimental groups was not significantly different from normal. These results, therefore, strongly suggest that the clearance rate of EPO in mice is not subject to physiologic regulation and that the pEPO titer can be taken as a reflection of the EPO production rate, at least under the experimental conditions reported here.
A Study of Wake Development and Structure in Constant Pressure Gradients
NASA Technical Reports Server (NTRS)
Thomas, Flint O.; Nelson, R. C.; Liu, Xiaofeng
2000-01-01
Motivated by the application to high-lift aerodynamics for commercial transport aircraft, a systematic investigation into the response of symmetric/asymmetric planar turbulent wake development to constant adverse, zero, and favorable pressure gradients has been conducted. The experiments are performed at a Reynolds number of 2.4 million based on the chord of the wake generator. A unique feature of this wake study is that the pressure gradients imposed on the wake flow field are held constant. The experimental measurements involve both conventional LDV and hot-wire flow field surveys of mean and turbulent quantities including the turbulent kinetic energy budget. In addition, similarity analysis and numerical simulation were conducted for this wake study. A focus of the research has been to isolate the effects of both pressure gradient and initial wake asymmetry on the wake development. Experimental results reveal that the pressure gradient has a tremendous influence on the wake development, despite the relatively modest pressure gradients imposed. For a given pressure gradient, the development of an initially asymmetric wake is different from that of the initially symmetric wake. An explicit similarity solution for the shape parameters of the symmetric wake is obtained and agrees with the experimental results. The turbulent kinetic energy budget measurements of the symmetric wake demonstrate that, except for the convection term, the imposed pressure gradient does not change the fundamental flow physics of turbulent kinetic energy transport. Based on the turbulent kinetic energy budget measurements, an approach to correct the bias error associated with the notoriously difficult dissipation estimate is proposed and validated through comparison of the experimental estimate with a direct numerical simulation result.
NASA Astrophysics Data System (ADS)
Akbarnejad, Shahin; Saffari Pour, Mohsen; Jonsson, Lage Tord Ingemar; Jönsson, Pӓr Göran
2017-02-01
Ceramic foam filters (CFFs) are used to remove solid particles and inclusions from molten metal. In general, molten metal which is poured on the top of a CFF needs to reach a certain height to build the required pressure (metal head) to prime the filter. To estimate the required metal head, it is necessary to obtain permeability coefficients using permeametry experiments. It has been mentioned in the literature that, to avoid fluid bypassing during permeametry, samples need to be sealed. However, the effect of fluid bypassing on the experimentally obtained pressure gradients seems not to have been explored. Therefore, this research focused on the effect of fluid bypassing on the experimentally obtained pressure gradients as well as on the empirically obtained Darcy and non-Darcy permeability coefficients. Specifically, the aim was to investigate the effect of fluid bypassing on the liquid permeability of 30, 50, and 80 pores per inch (PPI) commercial alumina CFFs. In addition, the experimental data were compared to the numerically modeled findings. Both studies showed that no sealing results in extremely poor estimates of the pressure gradients and the Darcy and non-Darcy permeability coefficients for all studied filters. The average deviations between the pressure gradients of the sealed and unsealed 30, 50, and 80 PPI samples were calculated to be 57.2, 56.8, and 61.3 pct. The deviations between the Darcy coefficients of the sealed and unsealed 30, 50, and 80 PPI samples were found to be 9, 20, and 31 pct. The deviations between the non-Darcy coefficients of the sealed and unsealed 30, 50, and 80 PPI samples were calculated to be 59, 58, and 63 pct.
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012)] and phantom [J. Biomed. Opt. 19, 077002 (2014)] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method: (1) high computational intensity and (2) convergence to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the subsequent fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing not only of DRS but also of related approaches such as near-infrared spectroscopy.
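The two-step idea above (coarse lookup-table guess, then iterative refinement) can be sketched with a toy forward model; the model, grid, and parameter names below are hypothetical stand-ins for the real two-layer reflectance model.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy forward model standing in for the two-layer DRS reflectance model
# (hypothetical: maps two optical parameters to a "spectrum").
wavelengths = np.linspace(500.0, 700.0, 50)

def forward(params):
    mu_a, mu_s = params
    return mu_s * np.exp(-mu_a * wavelengths / 500.0)

# Step 1: precompute a coarse lookup table of candidate spectra
grid = [(a, s) for a in np.linspace(0.1, 2.0, 20)
               for s in np.linspace(1.0, 10.0, 20)]
lut = np.array([forward(p) for p in grid])

def estimate(measured):
    # Nearest lookup-table entry gives a cheap, global initial guess
    i0 = np.argmin(((lut - measured) ** 2).sum(axis=1))
    # Step 2: iterative fitting refines the guess to the local optimum
    fit = least_squares(lambda p: forward(p) - measured, grid[i0])
    return fit.x

true = (0.73, 4.2)
est = estimate(forward(true))
```

Because the lookup-table guess already lands near the global minimum, the iterative step converges quickly and avoids distant local minima, which is the motivation the abstract describes.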
Fuzzy neural network for flow estimation in sewer systems during wet weather.
Shen, Jun; Shen, Wei; Chang, Jian; Gong, Ning
2006-02-01
Estimation of the water flow from rainfall intensity during storm events is important in hydrology, sewer system control, and environmental protection. The runoff-producing behavior of a sewer system changes from one storm event to another because rainfall loss depends not only on rainfall intensities, but also on the state of the soil and vegetation, the general condition of the climate, and so on. As such, it would be difficult to obtain a precise flowrate estimation without sufficient a priori knowledge of these factors. To establish a model for flow estimation, one can also use statistical methods, such as STORMNET, a neural network developed at Lyonnaise des Eaux, France, which analyzes the relation between rainfall intensity and flowrate data of known storm events registered in the past for a given sewer system. In this study, the authors propose a fuzzy neural network to estimate the flowrate from rainfall intensity. The fuzzy neural network combines four STORMNETs with fuzzy deduction to better estimate the flowrates. This study's system for flow estimation can be calibrated automatically by using known storm events; no data regarding the physical characteristics of the drainage basins are required. Compared with the neural network STORMNET, this method reduces the mean square error of the flow estimates by approximately 20%. Experimental results are reported herein.
NASA Astrophysics Data System (ADS)
Fonseca, E. S. R.; de Jesus, M. E. P.
2007-07-01
The estimation of optical properties of highly turbid and opaque biological tissue is a difficult task since conventional purely optical methods rapidly lose sensitivity as the mean photon path length decreases. Photothermal methods, such as pulsed or frequency domain photothermal radiometry (FD-PTR), on the other hand, show remarkable sensitivity in experimental conditions that produce very feeble optical signals. Photothermal radiometry is primarily sensitive to the absorption coefficient, yielding considerably higher estimation errors on scattering coefficients. Conversely, purely optical methods such as Local Diffuse Reflectance (LDR) depend mainly on the scattering coefficient and yield much better estimates of this parameter. Therefore, at moderate transport albedos, the combination of photothermal and reflectance methods can considerably improve the sensitivity of detection of tissue optical properties. The authors have recently proposed a novel method that combines FD-PTR with LDR, aimed at improving sensitivity in the determination of both optical properties. Signal analysis was performed by globally fitting the experimental data to forward models based on Monte-Carlo simulations. Although this approach is accurate, the associated computational burden often limits its use as a forward model. Therefore, the application of analytical models based on the diffusion approximation offers a faster alternative. In this work, we propose the calculation of the diffuse reflectance and the fluence rate profiles under the δ-P1 approximation. This approach is known to approximate fluence rate expressions close to collimated sources and boundaries better than the standard diffusion approximation (SDA). We extend this study to the calculation of the diffuse reflectance profiles. 
The ability of the δ-P1-based model to provide good estimates of the absorption, scattering and anisotropy coefficients is tested against Monte-Carlo simulations over a wide range of scattering to absorption ratios. Experimental validation of the proposed method is accomplished by a set of measurements on solid absorbing and scattering phantoms.
NASA Astrophysics Data System (ADS)
Takagi, Hideo D.; Swaddle, Thomas W.
1996-01-01
The outer-sphere contribution to the volume of activation of homogeneous electron exchange reactions is estimated for selected solvents on the basis of the mean spherical approximation (MSA), and the calculated values are compared with those estimated by the Stranks-Hush-Marcus (SHM) theory and with activation volumes obtained experimentally for the electron exchange reaction between tris(hexafluoroacetylacetonato)ruthenium(III) and -(II) in acetone, acetonitrile, methanol and chloroform. The MSA treatment, which recognizes the molecular nature of the solvent, does not improve significantly upon the continuous-dielectric SHM theory, which represents the experimental data adequately for the more polar solvents.
Estimating Coherence Measures from Limited Experimental Data Available
NASA Astrophysics Data System (ADS)
Zhang, Da-Jian; Liu, C. L.; Yu, Xiao-Dong; Tong, D. M.
2018-04-01
Quantifying coherence has received increasing attention, and considerable work has been directed towards finding coherence measures. While various coherence measures have been proposed in theory, an important next question is how to estimate these coherence measures in experiments. This is a challenging task, since the state of a system is often unknown in practical applications and the accessible measurements in a real experiment are typically limited. In this Letter, we put forward an approach to estimate coherence measures of an unknown state from any limited experimental data available. Our approach is not only applicable to coherence measures but can be extended to other resource measures.
Estimation of channel parameters and background irradiance for free-space optical link.
Khatoon, Afsana; Cowley, William G; Letzepis, Nick; Giggenbach, Dirk
2013-05-10
Free-space optical communication can experience severe fading due to optical scintillation in long-range links. Channel estimation is also corrupted by background and electrical noise. Accurate estimation of channel parameters and the scintillation index (SI) depends on perfect removal of background irradiance. In this paper, we propose three different methods, the minimum-value (MV), mean-power (MP), and maximum-likelihood (ML) based methods, to remove the background irradiance from channel samples. The MV and MP methods do not require knowledge of the scintillation distribution; while the ML-based method assumes gamma-gamma scintillation, it can be easily modified to accommodate other distributions. Each estimator's performance is evaluated from low- to high-SI regimes using simulation data as well as experimental measurements. The MV and MP methods have much lower complexity than the ML-based method. However, the ML-based method shows better SI and background-irradiance estimation performance.
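The minimum-value idea can be sketched as follows: if deep fades drive the fading irradiance close to zero, the smallest received sample approximates the constant background, and the scintillation index follows from the background-removed samples. The exponential fading model below is a toy stand-in, not the gamma-gamma model assumed by the ML method.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.exponential(scale=1.0, size=100_000)  # fading irradiance (toy model)
background = 0.4                                   # constant background irradiance
samples = signal + background                      # received channel samples

# Minimum-value (MV) method: the deepest fade approximates the background
b_mv = samples.min()

y = samples - b_mv               # background-removed irradiance
si = y.var() / y.mean() ** 2     # scintillation index: normalized variance
```

For an exponential fading model the true SI is 1, so the estimate above lands near 1 once the background has been subtracted; leaving the background in would bias the SI low.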
Kalman filter approach for uncertainty quantification in time-resolved laser-induced incandescence.
Hadwin, Paul J; Sipkens, Timothy A; Thomson, Kevin A; Liu, Fengshan; Daun, Kyle J
2018-03-01
Time-resolved laser-induced incandescence (TiRe-LII) data can be used to infer spatially and temporally resolved volume fractions and primary particle size distributions of soot-laden aerosols, but these estimates are corrupted by measurement noise as well as uncertainties in the spectroscopic and heat transfer submodels used to interpret the data. Estimates of the temperature, concentration, and size distribution of soot primary particles within a sample aerosol are typically made by nonlinear regression of modeled spectral incandescence decay, or effective temperature decay, to experimental data. In this work, we employ nonstationary Bayesian estimation techniques to infer aerosol properties from simulated and experimental LII signals, specifically the extended Kalman filter and Schmidt-Kalman filter. These techniques exploit the time-varying nature of both the measurements and the models, and they reveal how uncertainty in the estimates computed from TiRe-LII data evolves over time. Both techniques perform better when compared with standard deterministic estimates; however, we demonstrate that the Schmidt-Kalman filter produces more realistic uncertainty estimates.
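A minimal sketch of the Kalman-filter idea applied to a decaying signal follows; the scalar cooling model, noise levels, and constants are made up for illustration (the model here is linear in the state, so the "extended" filter reduces to a standard Kalman filter), not the paper's spectroscopic or heat-transfer submodels.

```python
import numpy as np

def ekf(measurements, dt=1e-3, k_cool=50.0, q=1e-2, r=25.0, x0=3100.0, p0=1e4):
    """Kalman filter for a toy cooling model T[k+1] = (1 - dt*k_cool) * T[k]."""
    x, p = x0, p0
    f = 1.0 - dt * k_cool          # Jacobian of the state transition
    out = []
    for z in measurements:
        # Update: blend the prediction with the noisy measurement
        k_gain = p / (p + r)
        x = x + k_gain * (z - x)
        p = (1.0 - k_gain) * p
        out.append(x)
        # Predict: propagate state and covariance to the next sample
        x = f * x
        p = f * p * f + q
    return np.array(out)

rng = np.random.default_rng(1)
f = 1.0 - 1e-3 * 50.0
truth = 3000.0 * f ** np.arange(200)          # noiseless decay
z = truth + rng.normal(0.0, 5.0, size=200)    # noisy "measurements"
est = ekf(z)
```

The running covariance `p` is what yields the time-evolving uncertainty estimates the abstract emphasizes, rather than a single deterministic point estimate.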
Cyber bullying prevention: intervention in Taiwan.
Lee, Ming-Shinn; Zi-Pei, Wu; Svanström, Leif; Dalal, Koustuv
2013-01-01
This study aimed to explore the effectiveness of a cyber bullying prevention WebQuest course. The study adopted a quasi-experimental design with two classes comprising a total of 61 junior high school students of seventh grade: 30 students in the experimental group and 31 students in the control group. The experimental group received eight sessions (total 360 minutes) of the teaching intervention over four consecutive weeks, while the control group did not engage in any related courses. A self-compiled questionnaire on the students' knowledge, attitudes, and intentions toward cyber bullying prevention was adopted. Data were analysed through generalized estimating equations to assess the immediate effects on the students' knowledge, attitudes, and intentions after the intervention. The results show that the WebQuest course immediately and effectively enhanced knowledge of cyber bullying, reduced intentions, and retained these effects after the learning, but it produced no significant impact on attitudes toward cyber bullying. The intervention in this pilot study was effective and positive for cyber bullying prevention. However, it involved only a small number of students; studies with larger samples and longer experimental periods, in different areas and countries, are therefore warranted.
NASA Astrophysics Data System (ADS)
Alifanov, O. M.; Budnik, S. A.; Mikhaylov, V. V.; Nenarokomov, A. V.; Titov, D. M.; Yudin, V. M.
2007-06-01
An experimental-computational system, developed at the Thermal Laboratory, Department of Space Systems Engineering, Moscow Aviation Institute (MAI), is presented for investigating the thermal properties of composite materials by methods of inverse heat transfer problems. The system is aimed at investigating the materials under conditions of unsteady contact and/or radiation heating over a wide range of temperature changes and heating rates in vacuum, air, and inert gas media. The paper considers the hardware components of the system, including the experiment facility and the automated system of control, measurement, data acquisition and processing, as well as aspects of methodical support of thermal tests. The next part presents the conception and realization of a computer code for experimental data processing to estimate the thermal properties of thermal-insulating materials. The most promising direction in further development of methods for non-destructive testing of composite materials via inverse problems is the simultaneous determination of a combination of their thermal and radiation properties. The general method of iterative regularization is applied to the estimation of material properties (e.g., thermal conductivity λ(T) and heat capacity C(T)). Such problems are of great practical importance in the study of materials used as non-destructive surface shields in space engineering, power engineering, etc. The third part gives the results of practical implementation of the hardware and software presented in the previous two parts for estimating the thermal properties of thermal-insulating materials. The main purpose of this study is to confirm the feasibility and effectiveness of the developed methods and hardware equipment for determining the thermal properties of particular modern high-porosity materials.
Hatfield, L.A.; Gutreuter, S.; Boogaard, M.A.; Carlin, B.P.
2011-01-01
Estimation of extreme quantal-response statistics, such as the concentration required to kill 99.9% of test subjects (LC99.9), remains a challenge in the presence of multiple covariates and complex study designs. Accurate and precise estimates of the LC99.9 for mixtures of toxicants are critical to ongoing control of a parasitic invasive species, the sea lamprey, in the Laurentian Great Lakes of North America. The toxicity of those chemicals is affected by local and temporal variations in water chemistry, which must be incorporated into the modeling. We develop multilevel empirical Bayes models for data from multiple laboratory studies. Our approach yields more accurate and precise estimation of the LC99.9 compared to alternative models considered. This study demonstrates that properly incorporating hierarchical structure in laboratory data yields better estimates of LC99.9 stream treatment values that are critical to larvae control in the field. In addition, out-of-sample prediction of the results of in situ tests reveals the presence of a latent seasonal effect not manifest in the laboratory studies, suggesting avenues for future study and illustrating the importance of dual consideration of both experimental and observational data. © 2011, The International Biometric Society.
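For intuition on extreme quantal-response statistics: under a logit-linear dose-response in log10 concentration, a quantile such as the LC99.9 follows in closed form from fitted coefficients. The coefficients below are hypothetical placeholders, not values from the paper's multilevel model.

```python
import numpy as np

def lc(p_kill, b0, b1):
    # Invert logit(p) = b0 + b1 * log10(c) for the concentration c
    logit = np.log(p_kill / (1.0 - p_kill))
    return 10.0 ** ((logit - b0) / b1)

b0, b1 = -6.0, 4.0          # hypothetical fitted intercept and slope
lc50 = lc(0.50, b0, b1)     # logit(0.5) = 0, so lc50 = 10**1.5
lc999 = lc(0.999, b0, b1)   # far out in the tail: much larger than lc50
```

The sketch also shows why LC99.9 estimation is hard: logit(0.999) ≈ 6.9 sits far from the data-rich center of the dose-response curve, so small errors in the slope b1 are greatly amplified, which motivates the paper's pooling of information across studies.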
Sensitivity analysis of pars-tensa Young's modulus estimation using inverse finite-element modeling
NASA Astrophysics Data System (ADS)
Rohani, S. Alireza; Elfarnawany, Mai; Agrawal, Sumit K.; Ladak, Hanif M.
2018-05-01
Accurate estimates of the pars-tensa (PT) Young's modulus (EPT) are required in finite-element (FE) modeling studies of the middle ear. Previously, we introduced an in-situ EPT estimation technique that optimizes a sample-specific FE model to match experimental eardrum pressurization data. This optimization process requires choosing modeling assumptions such as PT thickness and boundary conditions. These assumptions are reported with a wide range of variation in the literature, which affects the reliability of the models. In addition, the sensitivity of the estimated EPT to FE modeling assumptions has not been studied. Therefore, the objective of this study is to identify the most influential modeling assumption on EPT estimates. The middle-ear cavity extracted from a cadaveric temporal bone was pressurized to 500 Pa. The deformed shape of the eardrum after pressurization was measured using a Fourier transform profilometer (FTP). A baseline FE model of the unpressurized middle ear was created. The EPT was estimated using the golden-section optimization method, which minimizes a cost function comparing the deformed FE model shape to the measured shape after pressurization. The effect of varying the modeling assumptions on EPT estimates was investigated, including changes in PT thickness, pars flaccida Young's modulus, and possible FTP measurement error. The most influential parameter on EPT estimation was PT thickness and the least influential was pars flaccida Young's modulus. The results of this study provide insight into how different parameters affect the results of EPT optimization and which parameters' uncertainties require further investigation to develop robust estimation techniques.
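Golden-section search, the optimizer named above, can be sketched for a one-dimensional cost function; the quadratic toy cost and the 20-unit "true" modulus below are illustrative, not the study's FE-based cost.

```python
import math

def golden_section(cost, lo, hi, tol=1e-6):
    """Golden-section search for the minimizer of a unimodal cost on [lo, hi]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ≈ 0.618
    a, b = lo, hi
    c = b - invphi * (b - a)                # lower interior probe
    d = a + invphi * (b - a)                # upper interior probe
    while b - a > tol:
        if cost(c) < cost(d):
            b, d = d, c                     # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                     # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2.0

# Toy cost: squared mismatch against a hypothetical "true" modulus of 20
e_opt = golden_section(lambda e: (e - 20.0) ** 2, 1.0, 100.0)
```

Golden-section search is a natural fit here because each cost evaluation requires a full FE solve, the method needs no derivatives, and it shrinks the bracket by a constant factor per iteration.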
A review on the effects of different parameters on contact heat transfer
NASA Astrophysics Data System (ADS)
Abdollahi, H.; Shahraki, S.; Motahari-Nezhad, M.
2017-07-01
In this paper, a complete literature review of thermal contact between fixed and periodically contacting surfaces, and of thermal contact between the exhaust valve and its seat in internal combustion engines, is presented. Furthermore, the effects of parameters such as contact pressure, contact frequency, contacting surface topography and roughness, curvature radius of surfaces, loading-unloading cycles, gas gap conductance and properties, interface interstitial material properties, surface coatings, and surface temperature on thermal contact conductance are examined according to the papers published in this field. The reviewed papers include theoretical/analytical/experimental and numerical studies on thermal contact conductance. In studies of the thermal contact between the exhaust valve and its seat, most experimental setups use two axial rods representing the exhaust valve and seat, with one end of each rod held at a constant but different temperature. In the experimental methods, the temperatures at multiple points along the rods are measured under different conditions, and the thermal contact conductance is estimated from them.
3-D Vector Flow Estimation With Row-Column-Addressed Arrays.
Holbek, Simon; Christiansen, Thomas Lehrmann; Stuart, Matthias Bo; Beers, Christopher; Thomsen, Erik Vilain; Jensen, Jorgen Arendt
2016-11-01
Simulation and experimental results from 3-D vector flow estimations for a 62 + 62 2-D row-column (RC) array with integrated apodization are presented. A method for implementing a 3-D transverse oscillation (TO) velocity estimator on a 3-MHz RC array is developed and validated. First, a parametric simulation study is conducted, where flow direction, ensemble length, number of pulse cycles, steering angles, transmit/receive apodization, and TO apodization profiles and spacing are varied, to find the optimal parameter configuration. The performance of the estimator is evaluated with respect to relative mean bias B̃ and mean standard deviation σ̃. Second, the optimal parameter configuration is implemented on the prototype RC probe connected to the experimental ultrasound scanner SARUS. Results from measurements conducted in a flow-rig system containing a constant laminar flow and a straight-vessel phantom with a pulsating flow are presented. Both an M-mode and a steered transmit sequence are applied. The 3-D vector flow is estimated in the flow rig for four representative flow directions. In the setup with 90° beam-to-flow angle, the relative mean bias across the entire velocity profile is (-4.7, -0.9, 0.4)% with a relative standard deviation of (8.7, 5.1, 0.8)% for ( v x , v y , v z ). The estimated peak velocity is 48.5 ± 3 cm/s, giving a -3% bias. The out-of-plane velocity component perpendicular to the cross section is used to estimate volumetric flow rates in the flow rig at a 90° beam-to-flow angle. The estimated mean flow rate in this setup is 91.2 ± 3.1 L/h, corresponding to a bias of -11.1%. In a pulsating flow setup, the flow rate measured during five cycles is 2.3 ± 0.1 mL/stroke, giving a -9.7% bias. It is concluded that accurate 3-D vector flow estimation can be obtained using a 2-D RC-addressed array.
NASA Astrophysics Data System (ADS)
Kadem, L.; Knapp, Y.; Pibarot, P.; Bertrand, E.; Garcia, D.; Durand, L. G.; Rieu, R.
2005-12-01
The effective orifice area (EOA) is the most commonly used parameter to assess the severity of aortic valve stenosis as well as the performance of valve substitutes. Particle image velocimetry (PIV) may be used for in vitro estimation of valve EOA. In the present study, we propose a new and simple method based on Howe’s developments of Lighthill’s aero-acoustic theory. This method is based on an acoustical source term (AST) to estimate the EOA from the transvalvular flow velocity measurements obtained by PIV. The EOAs measured by the AST method downstream of three sharp-edged orifices were in excellent agreement with the EOAs predicted from the potential flow theory used as the reference method in this study. Moreover, the AST method was more accurate than other conventional PIV methods based on streamlines, inflexion point or vorticity to predict the theoretical EOAs. The superiority of the AST method is likely due to the nonlinear form of the AST. There was also an excellent agreement between the EOAs measured by the AST method downstream of the three sharp-edged orifices as well as downstream of a bioprosthetic valve with those obtained by the conventional clinical method based on Doppler-echocardiographic measurements of transvalvular velocity. The results of this study suggest that this new simple PIV method provides an accurate estimation of the aortic valve flow EOA. This new method may thus be used as a reference method to estimate the EOA in experimental investigation of the performance of valve substitutes and to validate Doppler-echocardiographic measurements under various physiologic and pathologic flow conditions.
Fernandes Santos, Carlos Antonio; de Souza Gama, Renata Natália Cândido
2013-06-01
The umbu tree (Spondias tuberosa) is one of the most important species endemic to the Brazilian tropical semiarid region. The umbu tree has edible fruits with a peculiar flavor that are consumed in natura or in semi-industrialized forms, such as jams, candies and juices. The majority of species endemic to the Brazilian semiarid region have not been studied or sampled to form germplasm collections, which increases the risk of losing the genetic variability of species adapted to xerophytic conditions. The aim of this study was to estimate outcrossing rates in S. tuberosa using a multilocus mixed model in order to guide genetic resources and breeding programs for this species. DNA samples were extracted from 92 progenies of umbu trees, distributed among 12 families. These trees were planted by seed in 1991 in Petrolina, PE, Brazil. The experimental design was a randomized block, with a total of 42 progenies sampled in three regions. The experimental units were composed of five plants and five replications. The outcrossing rate was estimated by the multilocus model, which is available in the MLTR software, and was based on 17 polymorphic AFLP bands obtained from AAA_CTG and AAA_CTC primer combinations. The observed heterozygote frequencies ranged from 0.147 to 0.499, with the maximum frequency estimated for the AAA_CTC 10 amplicon. The multilocus outcrossing estimate (t(m)) was 0.804 +/- 0.072, while the single-locus estimate (t(s)) was 0.841 +/- 0.079, which suggests that S. tuberosa is predominantly an outcrossing species. The difference between t(m) and t(s) was -0.037 +/- 0.029, which indicates that biparental inbreeding was nearly absent. The mean inbreeding coefficient or fixation index (F) among maternal plants was -0.103 +/- 0.045, and the expected F was 0.108, which indicates that there was no excess of heterozygotes in the maternal population. The outcrossing estimates obtained in the present study indicate that S. tuberosa is an open-pollinated species. 
Biometrical models applied to this species should therefore take into account the deviation from random outcrossing to estimate genetic parameters and the constitution of broad germplasm samples to preserve the genetic variability of the species. Outcrossing rates based on AFLP and the mixed-mating model should be applied to other studies of plant species in the Brazilian semiarid region.
The performance of sample selection estimators to control for attrition bias.
Grasdal, A
2001-07-01
Sample attrition is a potential source of selection bias in experimental, as well as non-experimental programme evaluation. For labour market outcomes, such as employment status and earnings, missing data problems caused by attrition can be circumvented by the collection of follow-up data from administrative registers. For most non-labour market outcomes, however, investigators must rely on participants' willingness to co-operate in keeping detailed follow-up records and statistical correction procedures to identify and adjust for attrition bias. This paper combines survey and register data from a Norwegian randomized field trial to evaluate the performance of parametric and semi-parametric sample selection estimators commonly used to correct for attrition bias. The considered estimators work well in terms of producing point estimates of treatment effects close to the experimental benchmark estimates. Results are sensitive to exclusion restrictions. The analysis also demonstrates an inherent paradox in the 'common support' approach, which prescribes exclusion from the analysis of observations outside of common support for the selection probability. The more important treatment status is as a determinant of attrition, the larger is the proportion of treated with support for the selection probability outside the range, for which comparison with untreated counterparts is possible. Copyright 2001 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Dolenc, B.; Vrečko, D.; Juričić, Ð.; Pohjoranta, A.; Pianese, C.
2017-03-01
Degradation and poisoning of solid oxide fuel cell (SOFC) stacks are continuously shortening the lifespan of SOFC systems. Poisoning mechanisms, such as carbon deposition, form a coating layer, rapidly decreasing the efficiency of the fuel cells. The gas composition of inlet gases is known to have a great impact on the rate of coke formation. Therefore, monitoring of these variables can be of great benefit for overall management of SOFCs. Although measuring the gas composition of the gas stream is feasible, it is too costly for commercial applications. This paper proposes three distinct approaches for the design of gas composition estimators for an SOFC system in anode off-gas recycle configuration which are (i) accurate and (ii) easy to implement on a programmable logic controller. Firstly, a classical approach is briefly revisited and problems related to implementation complexity are discussed. Secondly, the model is simplified and adapted for easy implementation. Further, an alternative data-driven approach for gas composition estimation is developed. Finally, a hybrid estimator employing experimental data and first principles is proposed. Despite the structural simplicity of the estimators, experimental validation shows high precision for all of the approaches. Experimental validation is performed on a 10 kW SOFC system.
NASA Astrophysics Data System (ADS)
Jin, Minquan; Delshad, Mojdeh; Dwarakanath, Varadarajan; McKinney, Daene C.; Pope, Gary A.; Sepehrnoori, Kamy; Tilburg, Charles E.; Jackson, Richard E.
1995-05-01
In this paper we present a partitioning interwell tracer test (PITT) technique for the detection, estimation, and remediation performance assessment of the subsurface contaminated by nonaqueous phase liquids (NAPLs). We demonstrate the effectiveness of this technique by examples of experimental and simulation results. The experimental results are from partitioning tracer experiments in columns packed with Ottawa sand. Both the method of moments and inverse modeling techniques for estimating NAPL saturation in the sand packs are demonstrated. In the simulation examples we use UTCHEM, a comprehensive three-dimensional, chemical flood compositional simulator developed at the University of Texas, to simulate a hypothetical two-dimensional aquifer with properties similar to the Borden site contaminated by tetrachloroethylene (PCE), and we show how partitioning interwell tracer tests can be used to estimate the amount of PCE contaminant before remedial action and as the remediation process proceeds. Tracer tests results from different stages of remediation are compared to determine the quantity of PCE removed and the amount remaining. Both the experimental (small-scale) and simulation (large-scale) results demonstrate that PITT can be used as an innovative and effective technique to detect and estimate the amount of residual NAPL and for remediation performance assessment in subsurface formations.
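The method of moments mentioned above can be sketched as follows: the retardation R of the partitioning tracer's mean arrival time relative to the conservative tracer's gives the NAPL saturation via S_N = (R - 1)/(R - 1 + K), where K is the tracer's NAPL-water partition coefficient. The Gaussian breakthrough curves and the K value below are synthetic, for illustration only.

```python
import numpy as np

def first_moment(t, conc):
    # Mean arrival time = first temporal moment of the breakthrough curve
    # (uniform time grid assumed, so the grid spacing cancels)
    return np.sum(t * conc) / np.sum(conc)

def napl_saturation(t, c_conservative, c_partitioning, K):
    R = first_moment(t, c_partitioning) / first_moment(t, c_conservative)
    return (R - 1.0) / (R - 1.0 + K)

# Synthetic Gaussian breakthrough curves: partitioning tracer retarded to R = 1.2
t = np.linspace(0.0, 50.0, 2001)
c_cons = np.exp(-(((t - 10.0) / 2.0) ** 2))   # conservative (nonpartitioning)
c_part = np.exp(-(((t - 12.0) / 2.0) ** 2))   # partitioning, K = 4
s_n = napl_saturation(t, c_cons, c_part, K=4.0)   # ≈ (0.2)/(4.2) ≈ 0.048
```

Repeating the same moment calculation on tracer tests run before and after remediation is what lets the difference in S_N quantify the amount of PCE removed.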
The Implications of "Contamination" for Experimental Design in Education
ERIC Educational Resources Information Center
Rhoads, Christopher H.
2011-01-01
Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…
Inclusion of quasi-experimental studies in systematic reviews of health systems research.
Rockers, Peter C; Røttingen, John-Arne; Shemilt, Ian; Tugwell, Peter; Bärnighausen, Till
2015-04-01
Systematic reviews of health systems research commonly limit studies for evidence synthesis to randomized controlled trials. However, well-conducted quasi-experimental studies can provide strong evidence for causal inference. With this article, we aim to stimulate and inform discussions on including quasi-experiments in systematic reviews of health systems research. We define quasi-experimental studies as those that estimate causal effect sizes using exogenous variation in the exposure of interest that is not directly controlled by the researcher. We incorporate this definition into a non-hierarchical three-class taxonomy of study designs - experiments, quasi-experiments, and non-experiments. Based on a review of practice in three disciplines related to health systems research (epidemiology, economics, and political science), we discuss five commonly used study designs that fit our definition of quasi-experiments: natural experiments, instrumental variable analyses, regression discontinuity analyses, interrupted time series studies, and difference studies including controlled before-and-after designs, difference-in-difference designs and fixed effects analyses of panel data. We further review current practices regarding quasi-experimental studies in three non-health fields that utilize systematic reviews (education, development, and environment studies) to inform the design of approaches for synthesizing quasi-experimental evidence in health systems research. Ultimately, the aim of any review is practical: to provide useful information for policymakers, practitioners, and researchers. Future work should focus on building a consensus among users and producers of systematic reviews regarding the inclusion of quasi-experiments. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Jacobson, Eiren K; Forney, Karin A; Barlow, Jay
2017-01-01
Passive acoustic monitoring is a promising approach for monitoring long-term trends in harbor porpoise (Phocoena phocoena) abundance. Before passive acoustic monitoring can be implemented to estimate harbor porpoise abundance, information about the detectability of harbor porpoise is needed to convert recorded numbers of echolocation clicks to harbor porpoise densities. In the present study, paired data from a grid of nine passive acoustic click detectors (C-PODs, Chelonia Ltd., United Kingdom) and three days of simultaneous aerial line-transect visual surveys were collected over a 370 km² study area. The focus of the study was estimating the effective detection area of the passive acoustic sensors, which was defined as the product of the sound production rate of individual animals and the area within which those sounds are detected by the passive acoustic sensors. Visually estimated porpoise densities were used as informative priors in a Bayesian model to solve for the effective detection area for individual harbor porpoises. This model-based approach resulted in a posterior distribution of the effective detection area of individual harbor porpoises consistent with previously published values. This technique is a viable alternative for estimating the effective detection area of passive acoustic sensors when other experimental approaches are not feasible.
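The conversion step this study addresses, from click counts to densities via an effective detection area, can be sketched as follows. The click rate, monitoring period, and effective area below are illustrative assumptions, not the study's estimates:

```python
# Converting recorded clicks to a density estimate once an effective detection
# area A_eff is known. Expected clicks = density * click_rate * A_eff * time,
# so density = n_clicks / (click_rate * A_eff * time). All numbers illustrative.

def density_from_clicks(n_clicks, hours, clicks_per_hour, a_eff_km2):
    """Animals per km^2 implied by a click count over a monitoring period."""
    return n_clicks / (clicks_per_hour * a_eff_km2 * hours)

# Hypothetical: 7200 clicks in 24 h, 600 clicks/h per porpoise, 0.5 km^2 A_eff
density = density_from_clicks(7200, 24.0, 600.0, 0.5)  # -> 1 porpoise per km^2
```

The arithmetic is trivial; the hard part, as the abstract notes, is estimating A_eff itself, which the authors solve with a Bayesian model anchored by the visual surveys.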
NASA Astrophysics Data System (ADS)
Papanastasiou, Dimitrios K.; Beltrone, Allison; Marshall, Paul; Burkholder, James B.
2018-05-01
Hydrochlorofluorocarbons (HCFCs) are ozone depleting substances and potent greenhouse gases that are controlled under the Montreal Protocol. However, the majority of the 274 HCFCs included in Annex C of the protocol do not have reported global warming potentials (GWPs) which are used to guide the phaseout of HCFCs and the future phase down of hydrofluorocarbons (HFCs). In this study, GWPs for all C1-C3 HCFCs included in Annex C are reported based on estimated atmospheric lifetimes and theoretical methods used to calculate infrared absorption spectra. Atmospheric lifetimes were estimated from a structure activity relationship (SAR) for OH radical reactivity and estimated O(1D) reactivity and UV photolysis loss processes. The C1-C3 HCFCs display a wide range of lifetimes (0.3 to 62 years) and GWPs (5 to 5330, 100-year time horizon) dependent on their molecular structure and the H-atom content of the individual HCFC. The results from this study provide estimated policy-relevant GWP metrics for the HCFCs included in the Montreal Protocol in the absence of experimentally derived metrics.
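The lifetime-based GWP calculation described above follows the standard absolute-GWP formula for a gas with single-exponential atmospheric decay. A sketch, assuming a hypothetical HCFC radiative efficiency and an approximate AR5 value for the CO2 reference AGWP:

```python
import math

def agwp(radiative_efficiency, lifetime_yr, horizon_yr=100.0):
    """Absolute GWP (W m^-2 yr per kg emitted) for single-exponential decay:
    AGWP(H) = A * tau * (1 - exp(-H / tau))."""
    return radiative_efficiency * lifetime_yr * (1.0 - math.exp(-horizon_yr / lifetime_yr))

AGWP_CO2_100 = 9.17e-14  # W m^-2 yr per kg; approximate IPCC AR5 reference value

def gwp(radiative_efficiency, lifetime_yr, horizon_yr=100.0):
    """GWP relative to CO2 over the given horizon."""
    return agwp(radiative_efficiency, lifetime_yr, horizon_yr) / AGWP_CO2_100

# Hypothetical HCFC: radiative efficiency 1.5e-11 W m^-2 per kg, 10-year lifetime
g100 = gwp(1.5e-11, 10.0)
```

For lifetimes well below the 100-year horizon the result scales almost linearly with lifetime, which is consistent with the wide range of GWPs (5 to 5330) the study reports across the HCFCs.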
NASA Astrophysics Data System (ADS)
Sade, Ziv; Halevy, Itay
2017-10-01
CO2 (de)hydration (i.e., CO2 hydration/HCO3- dehydration) and (de)hydroxylation (i.e., CO2 hydroxylation/HCO3- dehydroxylation) are key reactions in the dissolved inorganic carbon (DIC) system. Kinetic isotope effects (KIEs) during these reactions are likely to be expressed in the DIC and recorded in carbonate minerals formed during CO2 degassing or dissolution of gaseous CO2. Thus, a better understanding of KIEs during CO2 (de)hydration and (de)hydroxylation would improve interpretations of disequilibrium compositions in carbonate minerals. To date, the literature lacks direct experimental constraints on most of the oxygen KIEs associated with these reactions. In addition, theoretical estimates describe oxygen KIEs during separate individual reactions. The KIEs of the related reverse reactions were neither derived directly nor calculated from a link to the equilibrium fractionation. Consequently, KIE estimates of experimental and theoretical studies have been difficult to compare. Here we revisit experimental and theoretical data to provide new constraints on oxygen KIEs during CO2 (de)hydration and (de)hydroxylation. For this purpose, we provide a clearer definition of the KIEs and relate them both to isotopic rate constants and equilibrium fractionations. Such relations are well founded in studies of single isotope source/sink reactions, but they have not been established for reactions that involve dual isotopic sources/sinks, such as CO2 (de)hydration and (de)hydroxylation. We apply the new quantitative constraints on the KIEs to investigate fractionations during simultaneous CaCO3 precipitation and HCO3- dehydration far from equilibrium.
Estimation and uncertainty analysis of dose response in an inter-laboratory experiment
NASA Astrophysics Data System (ADS)
Toman, Blaza; Rösslein, Matthias; Elliott, John T.; Petersen, Elijah J.
2016-02-01
An inter-laboratory experiment for the evaluation of toxic effects of NH2-polystyrene nanoparticles on living human cancer cells was performed with five participating laboratories. Previously published results from nanocytotoxicity assays are often contradictory, mostly due to challenges related to producing a reliable cytotoxicity assay protocol for use with nanomaterials. Specific challenges include the reproducibility of nanoparticle dispersion preparation, biological variability from testing living cell lines, and the potential for nano-related interference effects. In this experiment, such challenges were addressed by developing a detailed experimental protocol and using a specially designed 96-well plate layout which incorporated a range of control measurements to assess multiple factors such as nanomaterial interference, pipetting accuracy, cell seeding density, and instrument performance. Detailed data analysis of these control measurements showed that good control of the experiments was attained by all participants in most cases. The main measurement objective of the study was the estimation of a dose response relationship between concentration of the nanoparticles and metabolic activity of the living cells, under several experimental conditions. The dose curve estimation was achieved by embedding a three parameter logistic curve in a three level Bayesian hierarchical model, accounting for uncertainty due to all known experimental conditions as well as between laboratory variability in a top-down manner. Computation was performed using Markov Chain Monte Carlo methods. The fit of the model was evaluated using Bayesian posterior predictive probabilities and found to be satisfactory.
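The three-parameter logistic dose-response curve at the core of the hierarchical model can be illustrated outside the Bayesian machinery. A numpy-only sketch on synthetic data (the doses, parameter values, and crude grid-search fit are illustrative assumptions; the authors used MCMC on a three-level hierarchical model):

```python
import numpy as np

def logistic3(x, top, ec50, hill):
    """Three-parameter logistic: response falls from `top` toward 0 around ec50."""
    return top / (1.0 + (x / ec50) ** hill)

rng = np.random.default_rng(0)
dose = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])       # hypothetical ug/mL
obs = logistic3(dose, 100.0, 30.0, 1.5) + rng.normal(0.0, 2.0, dose.size)

# Crude fit: grid-search ec50 and hill; `top` has a closed-form least-squares
# solution for each candidate pair, since the model is linear in `top`.
best = (np.inf, None, None, None)
for ec50 in np.geomspace(1.0, 300.0, 200):
    for hill in np.linspace(0.5, 3.0, 100):
        basis = 1.0 / (1.0 + (dose / ec50) ** hill)
        top = float(basis @ obs) / float(basis @ basis)
        sse = float(np.sum((obs - top * basis) ** 2))
        if sse < best[0]:
            best = (sse, top, ec50, hill)

sse, top_hat, ec50_hat, hill_hat = best
```

The hierarchical Bayesian treatment in the study adds what this sketch lacks: propagation of between-laboratory variability and the control-measurement factors into the uncertainty of the fitted curve.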
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, J.B.; Christensen, T.H.
1999-11-01
Complexation of cadmium (Cd), nickel (Ni), and zinc (Zn) by dissolved organic carbon (DOC) in leachate-polluted groundwater was measured using a resin equilibrium method and an aquifer material sorption technique. The first method is commonly used in complexation studies, while the second method better represents aquifer conditions. The two approaches gave similar results. Metal-DOC complexation was measured over a range of DOC concentrations using the resin equilibrium method, and the results were compared to simulations made by two speciation models containing default databases on metal-DOC complexes (WHAM and MINTEQA2). The WHAM model gave reasonable estimates of Cd and Ni complexation by DOC for both leachate-polluted groundwater samples. The estimated effect of complexation differed by less than 50% from the experimental values, corresponding to a deviation in the activity of the free metal ion of a factor of 2.5. The effect of DOC complexation for Zn was largely overestimated by the WHAM model, and it was found that using a binding constant of 1.7 instead of the default value of 1.3 would improve the fit between the simulations and experimental data. The MINTEQA2 model gave reasonable predictions of the complexation of Cd and Zn by DOC, whereas deviations in the estimated activity of the free Ni²⁺ ion from experimental results were up to a factor of 5.
Wear, Keith A
2013-04-01
The presence of two longitudinal waves in poroelastic media is predicted by Biot's theory and has been confirmed experimentally in through-transmission measurements in cancellous bone. Estimation of attenuation coefficients and velocities of the two waves is challenging when the two waves overlap in time. The modified least squares Prony's (MLSP) method in conjunction with curve-fitting (MLSP + CF) is tested using simulations based on published values for fast and slow wave attenuation coefficients and velocities in cancellous bone from several studies in bovine femur, human femur, and human calcaneus. The search algorithm is accelerated by exploiting correlations among search parameters. The performance of the algorithm is evaluated as a function of signal-to-noise ratio (SNR). For a typical experimental SNR (40 dB), the root-mean-square errors (RMSEs) for one example (human femur) with fast and slow waves separated by approximately half of a pulse duration were 1 m/s (slow wave velocity), 4 m/s (fast wave velocity), 0.4 dB/cm MHz (slow wave attenuation slope), and 1.7 dB/cm MHz (fast wave attenuation slope). The MLSP + CF method is fast (requiring less than 2 s at SNR = 40 dB on a consumer-grade notebook computer) and is flexible with respect to the functional form of the parametric model for the transmission coefficient. The MLSP + CF method provides sufficient accuracy and precision for many applications such that experimental error is a greater limiting factor than estimation error.
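Prony-type estimation of overlapping decaying modes can be sketched in its generic least-squares form. This is the textbook method applied to a synthetic two-mode signal, not the modified (MLSP + CF) procedure or the bone transmission model of the study:

```python
import numpy as np

def prony_poles(x, p):
    """Least-squares linear prediction of order p; returns the complex poles."""
    N = len(x)
    # x[n] ~= sum_{k=1..p} a_k * x[n-k]  for n = p .. N-1
    A = np.column_stack([x[p - k:N - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(A, x[p:N], rcond=None)
    # Characteristic polynomial z^p - a_1 z^(p-1) - ... - a_p; roots are poles.
    return np.roots(np.concatenate(([1.0], -a)))

# Synthetic overlapping "slow" and "fast" modes (decay rates and frequencies
# are arbitrary demo values, not bone measurements).
fs = 1000.0                                         # sampling rate, Hz
n = np.arange(200)
z_slow = np.exp((-5.0 + 2j * np.pi * 50.0) / fs)    # 50 Hz, decay 5 s^-1
z_fast = np.exp((-20.0 + 2j * np.pi * 120.0) / fs)  # 120 Hz, decay 20 s^-1
x = (z_slow ** n + z_fast ** n).real

poles = prony_poles(x, p=4)                         # two conjugate pole pairs
freqs = sorted(abs(np.angle(poles)) * fs / (2.0 * np.pi))
decays = sorted(-np.log(np.abs(poles)) * fs)
```

Because the poles separate the two components even when they overlap in time, this family of methods suits exactly the fast/slow wave separation problem the abstract describes.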
Phan, Hoang Vu; Park, Hoon Cheol
2018-04-18
Studies on wing kinematics indicate that flapping insect wings operate at higher angles of attack (AoAs) than conventional rotary wings. Thus, effectively flying an insect-like flapping-wing micro air vehicle (FW-MAV) requires appropriate wing design for achieving low power consumption and high force generation. Even though theoretical studies can be performed to identify appropriate geometric AoAs for a wing to achieve efficient hovering flight, designing an actual wing by implementing these angles into a real flying robot is challenging. In this work, we investigated the wing morphology of an insect-like tailless FW-MAV, which was named KUBeetle, for obtaining a high vertical force/power ratio, or power loading. Several deformable wing configurations with various vein structures were designed, and their characteristics of vertical force generation and power requirement were theoretically and experimentally investigated. The results of the theoretical study based on the unsteady blade element theory (UBET) were validated with reference data to prove the accuracy of power estimation. A good agreement between estimated and measured results indicated that the proposed UBET model can be used to effectively estimate the power requirement and force generation of an FW-MAV. Among the investigated wing configurations operating at flapping frequencies of 23 Hz to 29 Hz, estimated results showed that the wing with a suitable vein placed outboard exhibited an increase of approximately 23.7% ± 0.5% in vertical force and approximately 10.2% ± 1.0% in force/power ratio. The estimation was supported by experimental results, which showed that the suggested wing enhanced vertical force by approximately 21.8% ± 3.6% and force/power ratio by 6.8% ± 1.6%. In addition, wing kinematics during the flapping motion were analyzed to determine the reason for the observed improvement.
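A drastically simplified quasi-steady strip integration conveys the kind of estimate a blade-element treatment produces. All geometry, kinematics, and the lift-coefficient model below are generic assumptions, not the KUBeetle's parameters or the authors' unsteady (UBET) model:

```python
import math

RHO = 1.225  # air density, kg/m^3

def mean_vertical_force(span_m, chord_m, freq_hz, stroke_amp_rad, aoa_rad,
                        n_strips=100):
    """Cycle-averaged vertical force of one wing by quasi-steady strip theory."""
    cl = 1.8 * math.sin(2.0 * aoa_rad)        # generic flat-plate-style CL(alpha)
    # Sinusoidal stroke: peak angular rate, and mean(omega^2) = peak^2 / 2.
    omega_peak = 2.0 * math.pi * freq_hz * stroke_amp_rad
    mean_omega_sq = 0.5 * omega_peak ** 2
    dr = span_m / n_strips
    force = 0.0
    for i in range(n_strips):
        r = (i + 0.5) * dr                    # strip midpoint radius
        force += 0.5 * RHO * cl * chord_m * mean_omega_sq * r ** 2 * dr
    return force

# Two wings, 6 cm span, 2 cm chord, 26 Hz, 80 deg stroke amplitude, 45 deg AoA
force_n = 2.0 * mean_vertical_force(0.06, 0.02, 26.0,
                                    math.radians(80.0), math.radians(45.0))
```

The point of the sketch is only the scaling: vertical force grows with the square of the flapping angular rate, which is why the 23 Hz to 29 Hz frequency sweep in the study matters for force and power loading.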