Sample records for estimated interaction parameters

  1. Effects of Ignoring Item Interaction on Item Parameter Estimation and Detection of Interacting Items

    ERIC Educational Resources Information Center

    Chen, Cheng-Te; Wang, Wen-Chung

    2007-01-01

    This study explores the effects of ignoring item interaction on item parameter estimation and the efficiency of using the local dependence index Q3 and the SAS NLMIXED procedure to detect item interaction under the three-parameter logistic model and the generalized partial credit model. Through simulations, it was found that ignoring…

  2. Fisher information of a single qubit interacts with a spin-qubit in the presence of a magnetic field

    NASA Astrophysics Data System (ADS)

    Metwally, N.

    2018-06-01

    In this contribution, quantum Fisher information is used to estimate the parameters of a central qubit interacting with a single spin qubit. The effect of the longitudinal, transverse and rotating strengths of the magnetic field on the degree of estimation is discussed. It is shown that, in the resonance case, the number of peaks, and consequently the size of the estimation regions, increases as the rotating magnetic field strength increases. The precision of estimating the central qubit parameters depends on the initial state settings of the central and spin qubits, namely whether they encode classical or quantum information; the upper bounds of the estimation degree are larger when the two qubits encode classical information. In the non-resonance case, the estimation degree depends on which of the longitudinal and transverse strengths is larger. The coupling constant between the central qubit and the spin qubit affects the estimation of the weight and phase parameters differently: the possibility of estimating the weight parameter decreases as the coupling constant increases, while that of the phase parameter increases. For a large number of spin particles, i.e., a spin bath, the upper bound of the Fisher information with respect to the weight parameter of the central qubit decreases as the number of spin particles increases. As the interaction time increases, the upper bounds appear at different initial values of the weight parameter.
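    As a toy illustration of the quantity this record is built on, the sketch below computes the classical Fisher information for estimating an angle from a projective qubit measurement with outcome probability p(θ) = cos²(θ/2). This textbook setup is an assumption of ours, not the paper's central-spin model.

```python
import math

# Illustration only: classical Fisher information for estimating an angle
# theta from a projective qubit measurement with outcome probability
# p(theta) = cos(theta/2)**2. This standard textbook setup is an
# assumption, not the central-spin model analyzed in the paper.
def fisher(theta, dt=1e-6):
    p = lambda th: math.cos(th / 2.0) ** 2
    dp = (p(theta + dt) - p(theta - dt)) / (2.0 * dt)   # numerical derivative
    return dp * dp / (p(theta) * (1.0 - p(theta)))      # (p')^2 / (p(1-p))

# For this particular measurement the information is constant in theta.
print(round(fisher(1.0), 4))
```

    For this measurement the analytic value is exactly 1 for any θ, since (p')² = sin²θ/4 cancels against p(1-p); richer models like the paper's have parameter-dependent bounds.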

  3. qPIPSA: Relating enzymatic kinetic parameters and interaction fields

    PubMed Central

    Gabdoulline, Razif R; Stein, Matthias; Wade, Rebecca C

    2007-01-01

    Background: The simulation of metabolic networks in quantitative systems biology requires the assignment of enzymatic kinetic parameters. Experimentally determined values are often not available and therefore computational methods to estimate these parameters are needed. It is possible to use the three-dimensional structure of an enzyme to perform simulations of a reaction and derive kinetic parameters. However, this is computationally demanding and requires detailed knowledge of the enzyme mechanism. We have therefore sought to develop a general, simple and computationally efficient procedure to relate protein structural information to enzymatic kinetic parameters that allows consistency between the kinetic and structural information to be checked and estimation of kinetic constants for structurally and mechanistically similar enzymes. Results: We describe qPIPSA: quantitative Protein Interaction Property Similarity Analysis. In this analysis, molecular interaction fields, for example, electrostatic potentials, are computed from the enzyme structures. Differences in molecular interaction fields between enzymes are then related to the ratios of their kinetic parameters. This procedure can be used to estimate unknown kinetic parameters when enzyme structural information is available and kinetic parameters have been measured for related enzymes or were obtained under different conditions. The detailed interaction of the enzyme with substrate or cofactors is not modeled and is assumed to be similar for all the proteins compared. The protein structure modeling protocol employed ensures that differences between models reflect genuine differences between the protein sequences, rather than random fluctuations in protein structure. 
Conclusion: Provided that the experimental conditions and the protein structural models refer to the same protein state or conformation, correlations between interaction fields and kinetic parameters can be established for sets of related enzymes. Outliers may arise due to variation in the importance of different contributions to the kinetic parameters, such as protein stability and conformational changes. The qPIPSA approach can assist in the validation as well as estimation of kinetic parameters, and provide insights into enzyme mechanism. PMID:17919319
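    The core comparison step can be sketched in a few lines. Real qPIPSA compares 3-D electrostatic potential grids around aligned protein structures; the 1-D "fields" below are made-up numbers, and the Hodgkin index with its distance conversion is shown only as a commonly used similarity measure of this kind.

```python
# Sketch of the field-comparison idea behind qPIPSA. The two "fields"
# are hypothetical 1-D samples; real qPIPSA compares 3-D electrostatic
# potential grids computed around aligned protein structures.
def hodgkin_index(p1, p2):
    """Hodgkin similarity index: 1.0 for identical fields, smaller otherwise."""
    dot = sum(a * b for a, b in zip(p1, p2))
    norm = sum(a * a for a in p1) + sum(b * b for b in p2)
    return 2.0 * dot / norm

# Two toy potential samples on matching grid points (illustrative values)
field_a = [1.0, 0.5, -0.2, 0.8]
field_b = [1.1, 0.4, -0.1, 0.9]

si = hodgkin_index(field_a, field_b)
# A field *distance* of the form sqrt(2 - 2*SI) can then be regressed
# against ratios (differences of logs) of measured kinetic parameters.
dist = (2.0 - 2.0 * si) ** 0.5
print(round(si, 3), round(dist, 3))
```

    The regression from field distance to kinetic-parameter ratios is fitted on enzymes with known kinetics; the sketch stops at the similarity computation.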

  4. New procedure for the determination of Hansen solubility parameters by means of inverse gas chromatography.

    PubMed

    Adamska, K; Bellinghausen, R; Voelkel, A

    2008-06-27

    The Hansen solubility parameter (HSP) seems to be a useful tool for the thermodynamic characterization of different materials. Unfortunately, estimating HSP values can be problematic. In this work, different procedures using inverse gas chromatography are presented for calculating the solubility parameters of pharmaceutical excipients. The proposed new procedure, based on the methodology of Lindvig et al., in which experimental values of the Flory-Huggins interaction parameter are used, can be a reasonable alternative for estimating HSP values. The advantage of this method is that the Flory-Huggins interaction parameter chi for all test solutes enters the calculation, so that the diverse interactions between the test solutes and the material are taken into consideration.
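    The Lindvig-type link between χ and the Hansen parameters can be sketched as a small inverse problem: given measured χ values for probes of known HSP, search for the material HSP that reproduces them. The relation χ ≈ αV/(RT)·[(Δδd)² + 0.25(Δδp)² + 0.25(Δδh)²] with α ≈ 0.6 follows Lindvig et al.; all probe values and the "measured" χ list below are illustrative, not real IGC data.

```python
# Hedged sketch of estimating a material's Hansen solubility parameters
# from Flory-Huggins chi values, Lindvig-style. Probe HSPs, molar
# volumes and the synthetic chi data are illustrative only.
R = 8.314  # J/(mol K); delta in MPa**0.5, V in cm3/mol keeps units consistent

def chi_model(hsp_solute, hsp_material, V, T, alpha=0.6):
    dd, dp, dh = (a - b for a, b in zip(hsp_solute, hsp_material))
    return alpha * V / (R * T) * (dd ** 2 + 0.25 * dp ** 2 + 0.25 * dh ** 2)

probes = [  # ((delta_d, delta_p, delta_h), molar volume) -- illustrative
    ((15.5, 0.0, 0.0), 131.0),   # roughly alkane-like probe
    ((18.4, 1.4, 2.0), 89.0),    # roughly aromatic-like probe
    ((15.8, 8.8, 19.4), 58.0),   # roughly alcohol-like probe
]
T = 313.15
true_material = (17.0, 8.0, 8.0)               # assumed "unknown" excipient
chi_meas = [chi_model(h, true_material, V, T) for h, V in probes]

# Coarse grid search for the material HSP minimizing the squared chi misfit
best = min(
    ((dd, dp, dh)
     for dd in range(14, 21) for dp in range(0, 13) for dh in range(0, 13)),
    key=lambda m: sum((chi_model(h, m, V, T) - c) ** 2
                      for (h, V), c in zip(probes, chi_meas)),
)
print(best)
```

    With noise-free synthetic χ values, the grid search recovers the assumed material HSP exactly; with real retention data, the misfit minimum and its flatness indicate how well the three components are determined.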

  5. Effects of data structure on the estimation of covariance functions to describe genotype by environment interactions in a reaction norm model

    PubMed Central

    Calus, Mario PL; Bijma, Piter; Veerkamp, Roel F

    2004-01-01

    Covariance functions have been proposed to predict breeding values and genetic (co)variances as a function of phenotypic within herd-year averages (environmental parameters) in order to include genotype by environment interaction. The objective of this paper was to investigate the influence of the definition of environmental parameters and of non-random use of sires on expected breeding values and estimated genetic variances across environments. Breeding values were simulated as a linear function of simulated herd effects. The definition of environmental parameters hardly influenced the results. In situations with random use of sires, estimated genetic correlations between the trait expressed in different environments were 0.93, 0.93 and 0.97 while the simulated value was 0.89, and estimated genetic variances deviated by up to 30% from the simulated values. Non-random use of sires, poor genetic connectedness and small herd size had a large impact on the estimated covariance functions, expected breeding values and calculated environmental parameters. Estimated genetic correlations between a trait expressed in different environments were biased upwards, and breeding values were more biased, when genetic connectedness became poorer and herd composition more diverse. The best possible solution at this stage is to use environmental parameters combining large numbers of animals per herd, even though this loses some information on genotype by environment interaction in the data. PMID:15339629

  6. Identification of dominant interactions between climatic seasonality, catchment characteristics and agricultural activities on Budyko-type equation parameter estimation

    NASA Astrophysics Data System (ADS)

    Xing, Wanqiu; Wang, Weiguang; Shao, Quanxi; Yong, Bin

    2018-01-01

    Quantifying the partition of precipitation (P) into evapotranspiration (E) and runoff (Q) is of great importance for global and regional water availability assessment. The Budyko framework serves as a powerful tool for making a simple and transparent estimate of this partition, using a single parameter to characterize the shape of the Budyko curve for a specific basin, where the single parameter reflects the overall effect of not only climatic seasonality and catchment characteristics (e.g., soil, topography and vegetation) but also agricultural activities (e.g., cultivation and irrigation). At the regional scale, these influencing factors are interconnected, and the interactions between them can also affect the estimation of the single parameter of Budyko-type equations. Here we employ the multivariate adaptive regression splines (MARS) model to estimate the Budyko curve shape parameter (n in Choudhury's equation, one form of the Budyko framework) for 96 selected catchments across China, using a data set of long-term averages for climatic seasonality, catchment characteristics and agricultural activities. Results show that average storm depth (ASD), vegetation coverage (M) and the seasonality index of precipitation (SI) are three statistically significant factors affecting the Budyko parameter. More importantly, four pairs of interactions are recognized by the MARS model: the interaction between CA (percentage of cultivated land area to total catchment area) and ASD shows that cultivation can weaken the reducing effect of high ASD (>46.78 mm) on the estimated Budyko parameter. Drought (represented by a Palmer drought severity index value < -0.74) and uneven distribution of annual rainfall (represented by a coefficient of variation of precipitation > 0.23) tend to enhance the reduction of the Budyko parameter by large SI (>0.797). 
Low vegetation coverage (below 34.56%) is likely to intensify the rising effect on the evapotranspiration ratio of IA (percentage of irrigation area to total catchment area). The Budyko n values estimated by the MARS model agree well with those calculated from observations for the selected 96 catchments (with R = 0.817, MAE = 4.09). Compared to a multiple stepwise regression model that estimates the parameter n by taking the influencing factors as independent inputs, the MARS model enhances the capability of the Budyko framework for assessing water availability at the regional scale using readily available data.
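    Choudhury's form of the Budyko curve, whose shape parameter n is being estimated above, can be written as E/P = φ / (1 + φⁿ)^(1/n) with aridity index φ = E₀/P. A minimal sketch shows how n controls the evaporation ratio (the aridity value 1.5 is an arbitrary example):

```python
# Minimal sketch of Choudhury's form of the Budyko framework, the
# equation whose shape parameter n the MARS model estimates here.
def evap_ratio(aridity, n):
    """E/P as a function of the aridity index E0/P and shape parameter n."""
    return aridity / (1.0 + aridity ** n) ** (1.0 / n)

# Larger n pushes the curve toward the water and energy limits.
for n in (1.0, 1.8, 2.6):
    print(n, round(evap_ratio(1.5, n), 3))
```

    For a fixed aridity index the evaporation ratio increases monotonically with n while staying below min(1, φ), which is why a single catchment-specific n suffices to place the curve.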

  7. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that, compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimation. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
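    The classic search-curve FAST that this paper extends can be sketched compactly: one scalar s drives all parameters through incommensurate frequencies, and the Fourier power at each parameter's frequency (and its harmonics) estimates that parameter's partial variance. The frequencies, sample count, and harmonic cutoff below are illustrative choices, not the paper's settings.

```python
import math

# Sketch of classic search-curve FAST (first-order indices only), the
# baseline scheme this paper generalizes to interaction effects.
def fast_first_order(model, freqs, n=257, harmonics=4):
    s = [math.pi * (2 * k + 1 - n) / n for k in range(n)]     # search variable
    xs = [[0.5 + math.asin(math.sin(w * sk)) / math.pi for w in freqs]
          for sk in s]                                        # uniform on [0,1]
    y = [model(x) for x in xs]

    def power(j):  # squared Fourier amplitude of y at integer frequency j
        a = sum(yk * math.cos(j * sk) for yk, sk in zip(y, s)) / n
        b = sum(yk * math.sin(j * sk) for yk, sk in zip(y, s)) / n
        return a * a + b * b

    total = 2.0 * sum(power(j) for j in range(1, (n - 1) // 2 + 1))
    return [2.0 * sum(power(p * w) for p in range(1, harmonics + 1)) / total
            for w in freqs]

# Additive test model y = x1 + 2*x2: analytic first-order indices 0.2, 0.8.
S = fast_first_order(lambda x: x[0] + 2.0 * x[1], freqs=(11, 21))
print([round(v, 2) for v in S])
```

    With an additive model the two indices should sum to roughly 1; the small residual comes from truncating harmonics, which is one of the estimation errors the paper quantifies.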

  8. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that, compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimation. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.

  9. Evolutionary model selection and parameter estimation for protein-protein interaction network based on differential evolution algorithm

    PubMed Central

    Huang, Lei; Liao, Li; Wu, Cathy H.

    2016-01-01

    Revealing the underlying evolutionary mechanism plays an important role in understanding protein interaction networks in the cell. While many evolutionary models have been proposed, applying these models to real network data, and in particular determining which model better describes the evolutionary process behind an observed network, remains a challenge. The traditional way is to use a model with presumed parameters to generate a network and then evaluate the fitness by summary statistics, which, however, cannot capture complete network structure information or estimate parameter distributions. In this work we developed a novel method based on Approximate Bayesian Computation and modified Differential Evolution (ABC-DEP) that is capable of conducting model selection and parameter estimation simultaneously and of detecting the underlying evolutionary mechanisms more accurately. We tested the method's power to differentiate models and estimate parameters on simulated data and found significant improvement in performance benchmarks compared with a previous method. We further applied our method to real protein interaction networks in human and yeast. Our results show the Duplication Attachment model as the predominant evolutionary mechanism for the human PPI network and the Scale-Free model as the predominant mechanism for the yeast PPI network. PMID:26357273
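    The basic ABC model-selection idea can be shown with a toy rejection sampler: draw a model and parameters from the prior, simulate data, and accept when summary statistics match the observations; the acceptance fractions approximate posterior model probabilities. This is plain rejection ABC on a made-up degree-distribution choice, far simpler than the paper's ABC-DEP with differential evolution.

```python
import math, random
random.seed(7)

# Toy rejection-ABC model selection (not the paper's ABC-DEP): decide
# whether synthetic "node degree" data came from a Poisson or a
# geometric model, using mean and variance as summary statistics.
def poisson(lam):  # Knuth's method, fine for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while p >= L:
        p *= random.random()
        k += 1
    return k - 1

def summary(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

obs = summary([poisson(4.0) for _ in range(200)])   # "observed" data

accepted = {"poisson": 0, "geometric": 0}
for _ in range(2000):
    model = random.choice(["poisson", "geometric"])
    if model == "poisson":
        lam = random.uniform(0.0, 10.0)             # prior on the rate
        sim = [poisson(lam) for _ in range(200)]
    else:
        p = random.uniform(0.05, 0.95)              # prior on success prob
        sim = [int(math.log(1.0 - random.random()) / math.log(1.0 - p))
               for _ in range(200)]
    m, v = summary(sim)
    if abs(m - obs[0]) < 0.5 and abs(v - obs[1]) < 1.0:
        accepted[model] += 1

print(accepted)  # the generating model should dominate the accepted draws
```

    The variance statistic is what discriminates here (a geometric law with the observed mean is overdispersed); the paper's contribution is making this selection efficient for network models where good summaries are hard to choose.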

  10. Parameter estimating state reconstruction

    NASA Technical Reports Server (NTRS)

    George, E. B.

    1976-01-01

    Parameter estimation is considered for systems whose entire state cannot be measured. Linear observers are designed to recover the unmeasured states to a sufficient accuracy to permit the estimation process. There are three distinct dynamics that must be accommodated in the system design: the dynamics of the plant, the dynamics of the observer, and the system updating of the parameter estimation. The latter two are designed to minimize interaction of the involved systems. These techniques are extended to weakly nonlinear systems. The application to a simulation of a space shuttle POGO system test is of particular interest. A nonlinear simulation of the system is developed, observers designed, and the parameters estimated.
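    The observer idea in this record, reconstructing unmeasured states by feeding the measurement innovation back through a gain, can be sketched for a discrete-time linear system. The plant matrix, output map, and observer gain below are hypothetical numbers chosen so the error dynamics are stable, not the report's POGO system design.

```python
# Schematic Luenberger-style observer for a hypothetical 2-state
# discrete-time plant; A, C and the gain L are illustrative values,
# with L chosen so that A - L*C has eigenvalues inside the unit circle.
A = [[1.0, 0.1], [-0.2, 0.9]]
C = [1.0, 0.0]   # only the first state is measured
L = [0.5, 0.3]   # observer gain

def step(A, x):  # one step of the linear dynamics x -> A x
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

x, xhat = [1.0, -1.0], [0.0, 0.0]   # true state vs. observer state
for _ in range(60):
    innov = (C[0] * x[0] + C[1] * x[1]) - (C[0] * xhat[0] + C[1] * xhat[1])
    x = step(A, x)
    xhat = step(A, xhat)
    xhat = [xhat[0] + L[0] * innov, xhat[1] + L[1] * innov]

err = ((x[0] - xhat[0]) ** 2 + (x[1] - xhat[1]) ** 2) ** 0.5
print(err)  # estimation error decays toward zero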

  11. Performance comparison of first-order conditional estimation with interaction and Bayesian estimation methods for estimating the population parameters and its distribution from data sets with a low number of subjects.

    PubMed

    Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol

    2017-12-01

    Exploratory preclinical and clinical trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of the classical first-order conditional estimation with interaction (FOCE-I) method and of expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distribution from data sets with a low number of subjects. One hundred data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter estimates (fixed effects and random effects) showed that all four methods performed equally well at the lower IIV levels, while the FOCE-I method performed better than the EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. 
Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
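    The two comparison metrics used above are simple to state precisely. The "true" clearance value and replicate estimates below are made-up numbers for illustration, not the study's results.

```python
# Hedged sketch of the study's two comparison metrics; the clearance
# value and estimates are hypothetical, not taken from the paper.
def ree(true, est):
    """Relative estimation error (%) for a single estimate."""
    return 100.0 * (est - true) / true

def rrmse(true, estimates):
    """Relative root mean squared error (%) across replicate estimates."""
    n = len(estimates)
    return 100.0 * (sum((e - true) ** 2 for e in estimates) / n) ** 0.5 / true

cl_true = 2.0                       # hypothetical population clearance
cl_estimates = [1.8, 2.3, 2.1, 1.9] # hypothetical estimates from replicates
print(round(rrmse(cl_true, cl_estimates), 2),
      [round(ree(cl_true, e), 1) for e in cl_estimates])
```

    REE keeps the sign and so reveals systematic bias per replicate, while rRMSE aggregates bias and imprecision into one spread measure, which is why the study reports both.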

  12. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions

    PubMed Central

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. 
In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation. PMID:26150807
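    The global-optimization step named in the abstract, simulated annealing of a priori unknown transition rates, can be sketched on a drastically reduced stand-in: one killing rate in an exponential decay fitted to noise-free synthetic counts. The schedule, step size, and toy model are our assumptions, not the paper's SBM.

```python
import math, random
random.seed(3)

# Toy stand-in for the paper's state-based model calibration: a single
# transition rate k governing exponential pathogen killing, estimated by
# simulated annealing against synthetic (noise-free) count data.
t_obs = [0.0, 1.0, 2.0, 4.0]
k_true = 0.8
data = [100.0 * math.exp(-k_true * t) for t in t_obs]

def cost(k):  # squared misfit between model and "measured" counts
    return sum((100.0 * math.exp(-k * t) - d) ** 2 for t, d in zip(t_obs, data))

k, temp = 2.0, 5.0                 # arbitrary start, initial temperature
for _ in range(4000):
    cand = k + random.gauss(0.0, 0.1)
    if cand > 0:
        delta = cost(cand) - cost(k)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            k = cand               # accept downhill, sometimes uphill
    temp *= 0.999                  # geometric cooling schedule
print(round(k, 2))
```

    Accepting occasional uphill moves at high temperature is what lets annealing escape local minima; for this one-dimensional toy the landscape is unimodal, so the chain simply homes in on the generating rate.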

  13. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions.

    PubMed

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. 
In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation.

  14. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

    Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating that no single optimal parameterization exists. Hence, many experts prefer a manual approach to calibration, where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in parameter estimation: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selecting one of the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a three-stage framework for parameter estimation. Stage 1 incorporates an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework serves as a decision support tool for choosing a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit, metric-based interactive framework for identification of a small (typically fewer than 10), meaningful and diverse subset of the calibration alternatives obtained in Stage 1. 
Stage 3 incorporates an interactive visual analytics framework for decision support in selecting one parameter combination from the alternatives identified in Stage 2. HAMS is applied to the calibration of flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of automatic and manual strategies for parameter estimation of computationally expensive watershed models.
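    The kind of output Stage 1 hands to the decision stages, a set of non-dominated calibration alternatives under several error criteria, can be sketched with a plain Pareto filter. The candidate names and their two error values are made-up illustrations, not GOMORS results.

```python
# Toy sketch of the multi-objective calibration output: filter candidate
# parameterizations down to the non-dominated (Pareto) alternatives
# under two minimized error criteria. All numbers are illustrative.
candidates = {           # name -> (peak-flow error, low-flow error)
    "p1": (0.30, 0.90), "p2": (0.50, 0.40), "p3": (0.80, 0.20),
    "p4": (0.60, 0.60), "p5": (0.25, 1.10),
}

def dominates(a, b):
    """a dominates b: at least as good in every criterion, not identical."""
    return all(x <= y for x, y in zip(a, b)) and a != b

pareto = sorted(name for name, obj in candidates.items()
                if not any(dominates(other, obj)
                           for other in candidates.values()))
print(pareto)
```

    Everything that survives the filter embodies a genuine trade-off; picking one of them is exactly the subjective step that HAMS supports with visual and metric-based analytics.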

  15. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models, like multi-strain dynamics describing the virus-host interaction in dengue fever, even the most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, reach their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and the deterministic skeleton. The deterministic system on its own already displays complex dynamics, up to deterministic chaos and coexistence of multiple attractors.
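    The simplest instance of the calibration problem discussed here is fitting a transmission rate to an epidemic time series. The sketch below uses a discrete-time SIR model and synthetic data generated by the model itself, not the Thailand dengue data, and a crude grid search instead of iterated filtering.

```python
# Hedged sketch: least-squares estimation of the transmission rate beta
# in a discrete-time SIR model, fitted to synthetic prevalence data
# (stand-in for the far harder multi-strain dengue calibration above).
def sir(beta, gamma=0.1, s0=0.99, i0=0.01, steps=100):
    s, i, traj = s0, i0, []
    for _ in range(steps):
        new_inf = beta * s * i            # new infections this step
        s, i = s - new_inf, i + new_inf - gamma * i
        traj.append(i)                    # infected fraction over time
    return traj

data = sir(beta=0.3)                      # synthetic "observed" series
sse = lambda b: sum((x - y) ** 2 for x, y in zip(sir(b), data))
best = min((round(b * 0.01, 2) for b in range(10, 60)), key=sse)
print(best)  # grid value closest to the generating beta
```

    With noise-free synthetic data the grid search recovers the generating rate exactly; with real, noisy, multi-strain data the likelihood surface becomes rugged, which is where the computational limits mentioned above arise.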

  16. Accuracy and Variability of Item Parameter Estimates from Marginal Maximum a Posteriori Estimation and Bayesian Inference via Gibbs Samplers

    ERIC Educational Resources Information Center

    Wu, Yi-Fang

    2015-01-01

    Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and…

  17. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
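    The Bayesian viewpoint in this record can be illustrated on the smallest possible system: Metropolis sampling of a single coupling J for a two-spin model whose partition function is tractable, so the likelihood is a plain Bernoulli law for spin alignment. The synthetic counts are invented; the paper's point is precisely the harder, intractable-Z case, which this sketch sidesteps.

```python
import math, random
random.seed(11)

# Minimal Bayesian inverse-Ising sketch: sample the posterior of one
# coupling J for a two-spin system with P(aligned) = e^J / (e^J + e^-J).
# The observation counts below are synthetic, illustrative numbers.
n_pairs, n_aligned = 100, 80

def log_post(J):  # flat prior on J; Bernoulli likelihood for alignment
    p = math.exp(J) / (math.exp(J) + math.exp(-J))
    return n_aligned * math.log(p) + (n_pairs - n_aligned) * math.log(1.0 - p)

samples, J = [], 0.0
for step in range(20000):
    cand = J + random.gauss(0.0, 0.3)               # random-walk proposal
    if math.log(1.0 - random.random()) < log_post(cand) - log_post(J):
        J = cand                                    # Metropolis accept
    if step >= 5000:                                # discard burn-in
        samples.append(J)

post_mean = sum(samples) / len(samples)
print(round(post_mean, 2))
```

    For 80/100 aligned pairs the maximum-likelihood coupling is ln(4)/2 ≈ 0.69, and the posterior mean lands nearby; the sequential Monte Carlo scheme in the paper replaces this exact likelihood with estimates of the partition function.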

  18. Bayesian approach to inverse statistical mechanics

    NASA Astrophysics Data System (ADS)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  19. A multi-mode real-time terrain parameter estimation method for wheeled motion control of mobile robots

    NASA Astrophysics Data System (ADS)

    Li, Yuankai; Ding, Liang; Zheng, Zhizhong; Yang, Qizhi; Zhao, Xingang; Liu, Guangjun

    2018-05-01

    For motion control of wheeled planetary rovers traversing deformable terrain, real-time terrain parameter estimation is critical for modeling the wheel-terrain interaction and compensating for the effect of wheel slipping. A multi-mode real-time estimation method is proposed in this paper to achieve accurate terrain parameter estimation. The proposed method is composed of an inner layer for real-time filtering and an outer layer for online updating. In the inner layer, the sinkage exponent and internal friction angle, which have higher sensitivity to the wheel-terrain interaction forces than the other terrain parameters, are estimated in real time by using an adaptive robust extended Kalman filter (AREKF), whereas the other parameters are fixed at nominal values. The inner-layer result can help synthesize the current wheel-terrain contact forces with adequate precision, but has limited prediction capability for time-variable wheel slipping. To improve the estimation accuracy of the inner layer, an outer layer based on a recursive Gauss-Newton (RGN) algorithm is introduced to refine the result of real-time filtering according to the innovation contained in the historical data. With the two-layer structure, the proposed method can work in three fundamental estimation modes: EKF, REKF and RGN, making the method applicable to flat, rough and non-uniform terrains. Simulations have demonstrated the effectiveness of the proposed method under three terrain types, showing the advantages of introducing the two-layer structure.
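    The inner-layer idea, recursively estimating a terrain parameter observed through a nonlinear force model, can be sketched with a scalar extended Kalman filter. Both the measurement model h() and every number below are illustrative assumptions, not the paper's wheel-terrain mechanics or its adaptive robust variant.

```python
import random
random.seed(5)

# Schematic scalar EKF for a constant "terrain" parameter theta observed
# through a hypothetical nonlinear force model h(theta); the model and
# all numbers are illustrative, not the paper's wheel-terrain equations.
def h(theta):
    return 10.0 * theta ** 1.5

theta_true = 1.2
r = 0.2 ** 2                      # measurement noise variance
x, p = 0.5, 1.0                   # initial estimate and covariance
for _ in range(200):
    z = h(theta_true) + random.gauss(0.0, 0.2)   # noisy "force" reading
    H = 15.0 * x ** 0.5           # Jacobian dh/dx at the current estimate
    k = p * H / (H * p * H + r)   # Kalman gain
    x = x + k * (z - h(x))        # measurement update
    p = (1.0 - k * H) * p         # covariance update
print(round(x, 3))
```

    Because the parameter is constant, no prediction step is needed and the covariance shrinks steadily; the paper's AREKF adds robustness terms so the filter stays reliable when the force model is violated.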

  20. Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.

    PubMed

    Omer, Travis; Intes, Xavier; Hahn, Juergen

    2015-01-01

    Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
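A D-optimality-based reduction of this kind can be sketched by greedily selecting time points that maximize det(JᵀJ) of the sensitivity matrix of a bi-exponential decay; the model, nominal parameter values and temporal grid below are illustrative assumptions, not the paper's:

```python
import numpy as np

# Bi-exponential fluorescence decay (quenched/unquenched donor fractions):
# F(t) = A*exp(-t/tau1) + (1-A)*exp(-t/tau2); parameters theta = (A, tau1, tau2).
A, tau1, tau2 = 0.4, 0.5, 2.5   # assumed nominal values (ns)
t = np.linspace(0.05, 9.0, 90)  # full 90-point temporal grid

# Sensitivity (Jacobian) rows dF/dtheta at each candidate time point.
J = np.column_stack([
    np.exp(-t / tau1) - np.exp(-t / tau2),
    A * t / tau1**2 * np.exp(-t / tau1),
    (1 - A) * t / tau2**2 * np.exp(-t / tau2),
])

def greedy_d_optimal(J, k):
    """Greedily pick k rows maximizing det(J_S^T J_S) (the D-criterion)."""
    chosen = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for i in range(J.shape[0]):
            if i in chosen:
                continue
            S = J[chosen + [i]]
            d = np.linalg.det(S.T @ S)
            if d > best_det:
                best, best_det = i, d
        chosen.append(best)
    return sorted(chosen), best_det

sel, dval = greedy_d_optimal(J, 10)  # 10 time points out of 90
```

Greedy selection is a heuristic for the D-criterion; it typically spreads the points across the early and late parts of the decay, where the different parameters dominate the sensitivity.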

  1. Markov Chain Monte Carlo Used in Parameter Inference of Magnetic Resonance Spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hock, Kiel; Earle, Keith

    2016-02-06

In this paper, we use Boltzmann statistics and the maximum likelihood distribution derived from Bayes' theorem to infer parameter values for a Pake doublet spectrum, a lineshape of historical significance and contemporary relevance for determining distances between interacting magnetic dipoles. A Metropolis-Hastings Markov chain Monte Carlo algorithm is implemented and designed to find the optimum parameter set and to estimate parameter uncertainties. Finally, the posterior distribution allows us to define a metric on parameter space that induces a geometry with negative curvature, which affects the parameter uncertainty estimates, particularly for spectra with a low signal-to-noise ratio.
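A random-walk Metropolis-Hastings sampler of the kind described can be sketched for a simple symmetric doublet of Gaussian lines (an illustrative stand-in for the Pake doublet; all values below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic symmetric doublet: two Gaussian lines split by 2*d around zero.
x = np.linspace(-10, 10, 201)
def lineshape(d, w):
    return np.exp(-(x - d)**2 / (2 * w**2)) + np.exp(-(x + d)**2 / (2 * w**2))

d_true, w_true, noise = 3.0, 1.0, 0.05
y = lineshape(d_true, w_true) + rng.normal(0, noise, x.size)

def log_post(theta):
    d, w = theta
    if d <= 0 or w <= 0:
        return -np.inf  # flat prior restricted to positive values
    r = y - lineshape(d, w)
    return -0.5 * np.sum(r**2) / noise**2

# Random-walk Metropolis-Hastings.
theta, lp = np.array([2.0, 2.0]), log_post(np.array([2.0, 2.0]))
chain = []
for _ in range(4000):
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept with prob min(1, ratio)
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain)
d_est, w_est = chain[2000:].mean(axis=0)  # posterior means after burn-in
```

The spread of the post-burn-in chain also gives the parameter uncertainty estimates the abstract refers to.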

  2. Convergent Cross Mapping: Basic concept, influence of estimation parameters and practical application.

    PubMed

    Schiecke, Karin; Pester, Britta; Feucht, Martha; Leistritz, Lutz; Witte, Herbert

    2015-01-01

In neuroscience, data are typically generated from neural network activity. Complex interactions between the measured time series are involved, and little or nothing is known about the underlying dynamic system. Convergent Cross Mapping (CCM) makes it possible to investigate nonlinear causal interactions between time series by using nonlinear state space reconstruction. The aim of this study is to investigate the general applicability of CCM and to show its potential and limitations. The influence of the estimation parameters is demonstrated by means of simulated data, and an interval-based application of CCM to real data is adapted for the investigation of interactions between heart rate and specific EEG components of children with temporal lobe epilepsy.

  3. Controls on the physical properties of gas-hydrate-bearing sediments because of the interaction between gas hydrate and porous media

    USGS Publications Warehouse

    Lee, Myung W.; Collett, Timothy S.

    2005-01-01

Physical properties of gas-hydrate-bearing sediments depend on the pore-scale interaction between gas hydrate and porous media as well as the amount of gas hydrate present. Well log measurements such as proton nuclear magnetic resonance (NMR) relaxation and electromagnetic propagation tool (EPT) techniques depend primarily on the bulk volume of gas hydrate in the pore space irrespective of the pore-scale interaction. However, elastic velocities and permeability depend on how gas hydrate is distributed in the pore space as well as the amount of gas hydrate. Gas-hydrate saturations estimated from NMR and EPT measurements are free of adjustable parameters; thus, they are unbiased estimates of gas-hydrate saturation if the measurements are accurate. However, the amount of gas hydrate estimated from elastic velocities or electrical resistivities depends on many adjustable parameters and models related to the interaction of gas hydrate and porous media, so these estimates are model dependent and biased. NMR, EPT, elastic-wave velocity, electrical resistivity, and permeability measurements acquired in the Mallik 5L-38 well in the Mackenzie Delta, Canada, show that all of the well log evaluation techniques considered provide comparable gas-hydrate saturations in clean (low shale content) sandstone intervals with high gas-hydrate saturations. However, in shaly intervals, estimates from log measurements that depend on the pore-scale interaction between gas hydrate and host sediments are higher than estimates from measurements that depend on the bulk volume of gas hydrate.

  4. A Computational Model of Active Vision for Visual Search in Human-Computer Interaction

    DTIC Science & Technology

    2010-08-01

    processors that interact with the production rules to produce behavior, and (c) parameters that constrain the behavior of the model (e.g., the...velocity of a saccadic eye movement). While the parameters can be task-specific, the majority of the parameters are usually fixed across a wide variety...previously estimated durations. Hooge and Erkelens (1996) review these four explanations of fixation duration control. A variety of research

  5. Interactions Between Item Content And Group Membership on Achievement Test Items.

    ERIC Educational Resources Information Center

    Linn, Robert L.; Harnisch, Delwyn L.

The purpose of this investigation was to examine the interaction of item content and group membership on achievement test items. Estimates of the parameters of the three-parameter logistic model were obtained on the 46-item math test for the sample of eighth grade students (N = 2055) participating in the Illinois Inventory of Educational Progress,…

  6. Bayesian parameter estimation for chiral effective field theory

    NASA Astrophysics Data System (ADS)

    Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie

    2016-09-01

The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate uncertainty resulting from the fit to data to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare results of our framework with other fitting procedures, interpreting the underlying assumptions in Bayesian probabilistic language. We also compare fitting all partial waves of the interaction simultaneously to cross-section data with fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.

  7. A weighted least squares estimation of the polynomial regression model on paddy production in the area of Kedah and Perlis

    NASA Astrophysics Data System (ADS)

    Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd

    2017-08-01

The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation; in other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its property of minimum variance in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
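The weighted fitting criterion can be sketched directly: with weights equal to the inverse error variances, the WLS estimate solves the weighted normal equations. The toy quadratic model and coefficients below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Quadratic trend with error standard deviation growing with x (heteroscedastic).
x = np.linspace(1.0, 10.0, 60)
sigma = 0.2 * x                      # non-constant error standard deviation
y = 1.0 + 0.5 * x - 0.03 * x**2 + rng.normal(0.0, sigma)

X = np.column_stack([np.ones_like(x), x, x**2])
w = 1.0 / sigma**2                   # weights = inverse error variance

# Weighted least squares: minimize sum_i w_i (y_i - X_i beta)^2.
W = np.diag(w)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Ordinary least squares for comparison (ignores the heteroscedasticity).
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

Both estimators are unbiased here, but the WLS solution down-weights the noisy high-x points and so has smaller variance, which is exactly the efficiency argument in the abstract.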

  8. A Note on the Specification of Error Structures in Latent Interaction Models

    ERIC Educational Resources Information Center

    Mao, Xiulin; Harring, Jeffrey R.; Hancock, Gregory R.

    2015-01-01

    Latent interaction models have motivated a great deal of methodological research, mainly in the area of estimating such models. Product-indicator methods have been shown to be competitive with other methods of estimation in terms of parameter bias and standard error accuracy, and their continued popularity in empirical studies is due, in part, to…

  9. Load forecasting via suboptimal seasonal autoregressive models and iteratively reweighted least squares estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbamalu, G.A.N.; El-Hawary, M.E.

The authors propose suboptimal least squares or iteratively reweighted least squares (IRWLS) procedures for estimating the parameters of a seasonal multiplicative AR model encountered in power system load forecasting. The proposed method uses an interactive computer environment and comprises five major computational steps. The first determines the order of the seasonal multiplicative AR process, and the second uses least squares or IRWLS to estimate the optimal nonseasonal AR model parameters. In the third step the intermediate series is obtained by back-forecasting, after which least squares or IRWLS is used to estimate the optimal seasonal AR parameters. The final step uses the estimated parameters to forecast future load. The method is applied to predict the Nova Scotia Power Corporation's hourly load at lead times of up to 168 hours. The results obtained are documented and compared with results based on the Box-Jenkins method.

  10. Environmental confounding in gene-environment interaction studies.

    PubMed

    Vanderweele, Tyler J; Ko, Yi-An; Mukherjee, Bhramar

    2013-07-01

    We show that, in the presence of uncontrolled environmental confounding, joint tests for the presence of a main genetic effect and gene-environment interaction will be biased if the genetic and environmental factors are correlated, even if there is no effect of either the genetic factor or the environmental factor on the disease. When environmental confounding is ignored, such tests will in fact reject the joint null of no genetic effect with a probability that tends to 1 as the sample size increases. This problem with the joint test vanishes under gene-environment independence, but it still persists if estimating the gene-environment interaction parameter itself is of interest. Uncontrolled environmental confounding will bias estimates of gene-environment interaction parameters even under gene-environment independence, but it will not do so if the unmeasured confounding variable itself does not interact with the genetic factor. Under gene-environment independence, if the interaction parameter without controlling for the environmental confounder is nonzero, then there is gene-environment interaction either between the genetic factor and the environmental factor of interest or between the genetic factor and the unmeasured environmental confounder. We evaluate several recently proposed joint tests in a simulation study and discuss the implications of these results for the conduct of gene-environment interaction studies.

  11. Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations

    DOE PAGES

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...

    2016-05-31

    Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.

  12. Sequential Exposure of Bortezomib and Vorinostat is Synergistic in Multiple Myeloma Cells

    PubMed Central

    Nanavati, Charvi; Mager, Donald E.

    2018-01-01

    Purpose To examine the combination of bortezomib and vorinostat in multiple myeloma cells (U266) and xenografts, and to assess the nature of their potential interactions with semi-mechanistic pharmacodynamic models and biomarkers. Methods U266 proliferation was examined for a range of bortezomib and vorinostat exposure times and concentrations (alone and in combination). A non-competitive interaction model was used with interaction parameters that reflect the nature of drug interactions after simultaneous and sequential exposures. p21 and cleaved PARP were measured using immunoblotting to assess critical biomarker dynamics. For xenografts, data were extracted from literature and modeled with a PK/PD model with an interaction parameter. Results Estimated model parameters for simultaneous in vitro and xenograft treatments suggested additive drug effects. The sequence of bortezomib preincubation for 24 hours, followed by vorinostat for 24 hours, resulted in an estimated interaction term significantly less than 1, suggesting synergistic effects. p21 and cleaved PARP were also up-regulated the most in this sequence. Conclusions Semi-mechanistic pharmacodynamic modeling suggests synergistic pharmacodynamic interactions for the sequential administration of bortezomib followed by vorinostat. Increased p21 and cleaved PARP expression can potentially explain mechanisms of their enhanced effects, which require further PK/PD systems analysis to suggest an optimal dosing regimen. PMID:28101809

  13. A Bayesian approach to tracking patients having changing pharmacokinetic parameters

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Jelliffe, Roger W.

    2004-01-01

    This paper considers the updating of Bayesian posterior densities for pharmacokinetic models associated with patients having changing parameter values. For estimation purposes it is proposed to use the Interacting Multiple Model (IMM) estimation algorithm, which is currently a popular algorithm in the aerospace community for tracking maneuvering targets. The IMM algorithm is described, and compared to the multiple model (MM) and Maximum A-Posteriori (MAP) Bayesian estimation methods, which are presently used for posterior updating when pharmacokinetic parameters do not change. Both the MM and MAP Bayesian estimation methods are used in their sequential forms, to facilitate tracking of changing parameters. Results indicate that the IMM algorithm is well suited for tracking time-varying pharmacokinetic parameters in acutely ill and unstable patients, incurring only about half of the integrated error compared to the sequential MM and MAP methods on the same example.
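The IMM recursion (mixing, model-conditioned filtering, mode-probability update, combination) can be sketched for a scalar random-walk parameter observed directly; the jump scenario and all values below are illustrative, not the paper's pharmacokinetic models:

```python
import numpy as np

rng = np.random.default_rng(4)

# Scalar "clearance-like" parameter that jumps halfway through (hypothetical).
T = 100
x_true = np.where(np.arange(T) < 50, 2.0, 4.0)
R = 0.25
z = x_true + rng.normal(0.0, np.sqrt(R), T)

Qs = [1e-6, 0.5]                 # model 1: stable; model 2: rapidly changing
PI = np.array([[0.97, 0.03],     # mode transition probabilities
               [0.03, 0.97]])
mu = np.array([0.5, 0.5])        # mode probabilities
xs = np.array([2.0, 2.0])        # per-model estimates and variances
Ps = np.array([1.0, 1.0])

est = []
for k in range(T):
    # 1) Mixing: blend the model-conditioned estimates for each target mode.
    c = PI.T @ mu                # predicted mode probabilities
    x0 = np.array([np.sum(PI[:, j] * mu * xs) / c[j] for j in range(2)])
    P0 = np.array([np.sum(PI[:, j] * mu * (Ps + (xs - x0[j])**2)) / c[j]
                   for j in range(2)])
    # 2) Model-conditioned Kalman filters (random walk, direct observation).
    like = np.empty(2)
    for j in range(2):
        Pp = P0[j] + Qs[j]
        S = Pp + R
        K = Pp / S
        innov = z[k] - x0[j]
        xs[j] = x0[j] + K * innov
        Ps[j] = (1 - K) * Pp
        like[j] = np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
    # 3) Mode-probability update and overall combination.
    mu = c * like
    mu /= mu.sum()
    est.append(np.sum(mu * xs))
est = np.array(est)
```

Before the jump the "stable" mode dominates and the estimate is heavily smoothed; after the jump the "changing" mode takes over quickly, which is the behaviour that makes IMM attractive for tracking time-varying parameters.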

  14. A modified Leslie-Gower predator-prey interaction model and parameter identifiability

    NASA Astrophysics Data System (ADS)

    Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed

    2018-01-01

In this work, bifurcation and a systematic approach for the estimation of the identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is established by applying the fluctuation lemma. The system undergoes Hopf bifurcation with respect to the intrinsic growth rate of predators (s) and the prey reserve (m). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports our theoretical study. To estimate the unknown parameters from the data, an optimization procedure (pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are also compared with the true noise-free data. It is found that the system dynamics with the true set of parametric values is similar to that with the estimated parametric values. Numerical simulations are presented to substantiate the analytical findings.

  15. State estimation of stochastic non-linear hybrid dynamic system using an interacting multiple model algorithm.

    PubMed

    Elenchezhiyan, M; Prakash, J

    2015-09-01

In this work, state estimation schemes for non-linear hybrid dynamic systems subjected to stochastic state disturbances and random errors in measurements using interacting multiple-model (IMM) algorithms are formulated. In order to compute both discrete modes and continuous state estimates of a hybrid dynamic system either an IMM extended Kalman filter (IMM-EKF) or an IMM based derivative-free Kalman filters is proposed in this study. The efficacy of the proposed IMM based state estimation schemes is demonstrated by conducting Monte-Carlo simulation studies on the two-tank hybrid system and switched non-isothermal continuous stirred tank reactor system. Extensive simulation studies reveal that the proposed IMM based state estimation schemes are able to generate fairly accurate continuous state estimates and discrete modes. In the presence and absence of sensor bias, the simulation studies reveal that the proposed IMM unscented Kalman filter (IMM-UKF) based simultaneous state and parameter estimation scheme outperforms multiple-model UKF (MM-UKF) based simultaneous state and parameter estimation scheme.

  16. Adaptive Estimation and Heuristic Optimization of Nonlinear Spacecraft Attitude Dynamics

    DTIC Science & Technology

    2016-09-15

    Algorithm GPS Global Positioning System HOUF Higher Order Unscented Filter IC initial conditions IMM Interacting Multiple Model IMU Inertial Measurement Unit ...sources ranging from inertial measurement units to star sensors are used to construct observations for attitude estimation algorithms. The sensor...parameters. A single vector measurement will provide two independent parameters, as a unit vector constraint removes a DOF making the problem underdetermined

  17. Learning dependence from samples.

    PubMed

    Seth, Sohan; Príncipe, José C

    2014-01-01

Mutual information, conditional mutual information and interaction information have been widely used in the scientific literature as measures of dependence, conditional dependence and mutual dependence. However, these concepts suffer from several computational issues: they are difficult to estimate in the continuous domain, the existing regularised estimators are almost always defined only for real- or vector-valued random variables, and these measures address what dependence, conditional dependence and mutual dependence imply in terms of the random variables, but not in terms of finite realisations. In this paper, we address the question of what characteristic makes a given set of realisations in an arbitrary metric space dependent, conditionally dependent or mutually dependent. With this novel understanding, we develop new estimators of association, conditional association and interaction association. Attractive properties of these estimators are that they do not require choosing free parameters, they are computationally simpler, and they can be applied to arbitrary metric spaces.

  18. A real-time digital program for estimating aircraft stability and control parameters from flight test data by using the maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Grove, R. D.; Mayhew, S. C.

    1973-01-01

    A computer program (Langley program C1123) has been developed for estimating aircraft stability and control parameters from flight test data. These parameters are estimated by the maximum likelihood estimation procedure implemented on a real-time digital simulation system, which uses the Control Data 6600 computer. This system allows the investigator to interact with the program in order to obtain satisfactory results. Part of this system, the control and display capabilities, is described for this program. This report also describes the computer program by presenting the program variables, subroutines, flow charts, listings, and operational features. Program usage is demonstrated with a test case using pseudo or simulated flight data.

  19. Inverse gas chromatographic determination of solubility parameters of excipients.

    PubMed

    Adamska, Katarzyna; Voelkel, Adam

    2005-11-04

The principal aim of this work was the application of inverse gas chromatography (IGC) to the estimation of the solubility parameter of pharmaceutical excipients. The retention data of a number of test solutes were used to calculate the Flory-Huggins interaction parameter (χ1,2∞) and then the solubility parameter (δ2), the corrected solubility parameter (δT) and its components (δd, δp, δh) using different procedures. The influence of different values of the test solutes' solubility parameter (δ1) on the calculated values was estimated. The solubility parameter values obtained for all excipients from the slope, following the procedure of Guillet and co-workers, are higher than those obtained from the components according to the procedure of Voelkel and Janas. It was found that the solubility parameter value of the test solutes influences, though not significantly, the values of the solubility parameter of the excipients.
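The slope-based estimation described here can be sketched under regular-solution assumptions: synthesize χ values for a hypothetical probe set, then recover δ2 from the Guillet-type linearisation (δ1²/RT − χ/V1) = (2δ2/RT)·δ1 + intercept. The probe data, δ2 and χs values below are made up for illustration:

```python
import numpy as np

R, T = 8.314, 313.15            # J/(mol K), assumed column temperature (K)
delta2_true = 19.0              # assumed excipient solubility parameter, MPa^0.5
chi_s = 0.35                    # assumed entropic contribution to chi

# Hypothetical probe set: solubility parameters (MPa^0.5), molar volumes (cm^3/mol).
delta1 = np.array([14.9, 16.8, 17.8, 18.2, 19.4, 20.3, 22.1])
V1 = np.array([131.6, 106.7, 104.7, 89.4, 80.9, 74.0, 58.7])

# Flory-Huggins chi generated from regular-solution theory plus chi_s.
chi = V1 / (R * T) * (delta1 - delta2_true)**2 + chi_s

# Guillet-type linearisation: slope of the fitted line equals 2*delta2/RT.
lhs = delta1**2 / (R * T) - chi / V1
slope, intercept = np.polyfit(delta1, lhs, 1)
delta2_est = slope * R * T / 2.0   # approximate, since chi_s/V1 varies across probes
```

The recovered δ2 is close to, but not exactly, the assumed value because the entropic term χs/V1 is not constant across probes, one reason different IGC procedures give somewhat different solubility parameters.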

  20. Use of the Flory-Huggins theory to predict the solubility of nifedipine and sulfamethoxazole in the triblock, graft copolymer Soluplus.

    PubMed

    Altamimi, Mohammad A; Neau, Steven H

    2016-01-01

Drug dispersed in a polymer can improve bioavailability, but dispersed amorphous drug undergoes recrystallization. Solid solutions eliminate amorphous regions but require a measure of the solubility. The Flory-Huggins theory was used to predict the solubility of crystalline drugs in the triblock, graft copolymer Soluplus® in order to provide a solid solution. Physical mixtures of two drugs with similar melting points but different glass-forming ability, sulfamethoxazole and nifedipine, were prepared with Soluplus® using a quick technique. Drug melting point depression (MPD) was measured using differential scanning calorimetry. The Flory-Huggins theory allowed: (1) calculation of the interaction parameter, χ, from the MPD data to provide a measure of drug-polymer interaction strength and (2) estimation of the free energy of mixing. A phase diagram was constructed from the MPD data and glass transition temperature (Tg) curves. The interaction parameters with Soluplus® and the free energy of mixing were estimated. Drug solubility was calculated from the intersection of the solubility equations with the MPD and Tg curves in the phase diagram. Negative interaction parameters indicated strong drug-polymer interactions. The phase diagram and solubility equations provided comparable solubility estimates for each drug in Soluplus®. Results using the onset of melting rather than the end of melting support the use of the onset of melting. The Flory-Huggins theory indicates that Soluplus® interacts effectively with each drug, making solid solution formation feasible. The predicted solubility of the drugs in Soluplus® compared favorably across the methods and supports the use of the onset of melting.
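The melting-point-depression route to χ can be sketched with the standard Flory-Huggins MPD relation, 1/Tm − 1/Tm0 = −(R/ΔHf)[ln φd + (1 − 1/m)φp + χφp²], solved for χ by linear least squares; the drug properties and noise level below are illustrative assumptions, not measured values:

```python
import numpy as np

R = 8.314
# Assumed drug properties (illustrative only).
Tm0 = 446.0        # pure-drug melting point (K), roughly nifedipine-like
dHf = 37000.0      # enthalpy of fusion (J/mol)
m = 200.0          # ratio of polymer to drug molar volume
chi_true = -1.2    # assumed interaction parameter (negative = strong interaction)

phi_d = np.array([0.95, 0.90, 0.85, 0.80, 0.75])   # drug volume fractions
phi_p = 1.0 - phi_d

def inv_Tm(chi):
    """Depressed melting point from the Flory-Huggins MPD relation."""
    return 1.0 / Tm0 - (R / dHf) * (np.log(phi_d) + (1 - 1 / m) * phi_p
                                    + chi * phi_p**2)

rng = np.random.default_rng(5)
Tm_obs = 1.0 / inv_Tm(chi_true) + rng.normal(0, 0.3, phi_d.size)  # noisy DSC onsets

# Rearranged as y = chi * x with
# y = (1/Tm0 - 1/Tm_obs)*dHf/R - ln(phi_d) - (1 - 1/m)*phi_p and x = phi_p^2.
y = (1.0 / Tm0 - 1.0 / Tm_obs) * dHf / R - np.log(phi_d) - (1 - 1 / m) * phi_p
x = phi_p**2
chi_est = np.sum(x * y) / np.sum(x * x)  # least squares through the origin
```

A concentration-independent χ is assumed here; in practice χ may vary with composition and temperature, which is one motivation for constructing the full phase diagram.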

  2. Efficient and robust pupil size and blink estimation from near-field video sequences for human-machine interaction.

    PubMed

    Chen, Siyuan; Epps, Julien

    2014-12-01

Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications because of the variability of eye images; hence, to date, such methods have required manual intervention for fine-tuning of parameters. In this paper, a novel self-tuning threshold method, applicable to any infrared-illuminated eye image without a tuning parameter, is proposed for segmenting the pupil from background images recorded by a low-cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy of the proposed methods is higher than that of widely used manually tuned or fixed-parameter methods. Importantly, the approach is convenient and robust for accurate and fast estimation of eye activity in the presence of variations due to different users, task types, loads, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation, which requires no threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future.

  3. GGOS and the EOP - the key role of SLR for a stable estimation of highly accurate Earth orientation parameters

    NASA Astrophysics Data System (ADS)

    Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael

    2016-04-01

The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups, since the observed satellite orbit dynamics are sensitive to the above-mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single-satellite SLR solutions and (ii) multi-satellite SLR solutions. Therefore, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus of this investigation is the de-correlation of the different geodetic parameter groups through the combination of SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as, e.g., the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy for obtaining reliable estimates of station coordinates, EOP, orbit parameters and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.

  4. Quantitative estimation of film forming polymer-plasticizer interactions by the Lorentz-Lorenz Law.

    PubMed

    Dredán, J; Zelkó, R; Dávid, A Z; Antal, I

    2006-03-09

Molar refraction, like refractive index, has many uses. Beyond confirming the identity and purity of a compound and aiding the determination of molecular structure and molecular weight, molar refraction is also used in other estimation schemes, such as for critical properties, surface tension, solubility parameter, molecular polarizability and dipole moment. In the present study, molar refraction values of polymer dispersions were determined for the quantitative estimation of film-forming polymer-plasticizer interactions. Information about the extent of the interaction between the polymer and the plasticizer can be obtained from the calculated molar refraction values of film-forming polymer dispersions containing plasticizer.
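The Lorentz-Lorenz relation itself is compact: R_m = ((n² − 1)/(n² + 2))·(M/ρ), with refractive index n, molar mass M and density ρ. A minimal sketch, using water as a familiar check:

```python
# Molar refraction from the Lorentz-Lorenz relation:
#   R_m = (n^2 - 1)/(n^2 + 2) * M / rho   [cm^3/mol]
def molar_refraction(n, M, rho):
    return (n**2 - 1.0) / (n**2 + 2.0) * M / rho

# Example with water (n ~ 1.333, M = 18.02 g/mol, rho = 0.997 g/cm^3);
# the result is close to the well-known value of about 3.7 cm^3/mol.
Rm_water = molar_refraction(1.333, 18.02, 0.997)
```

For a dispersion, the same relation applied to the measured refractive index and density gives a molar refraction whose deviation from additivity reflects the polymer-plasticizer interaction.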

  5. Nonlinear Quantum Metrology of Many-Body Open Systems

    NASA Astrophysics Data System (ADS)

    Beau, M.; del Campo, A.

    2017-07-01

We introduce general bounds for the parameter estimation error in nonlinear quantum metrology of many-body open systems in the Markovian limit. Given a k-body Hamiltonian and p-body Lindblad operators, the estimation error of a Hamiltonian parameter using a Greenberger-Horne-Zeilinger state as a probe is shown to scale as N^{-[k-(p/2)]}, surpassing the shot-noise limit for 2k > p + 1. Metrology equivalence between initial product states and maximally entangled states is established for p ≥ 1. We further show that one can estimate the system-environment coupling parameter with precision N^{-(p/2)}, while many-body decoherence enhances the precision to N^{-k} in the noise-amplitude estimation of a fluctuating k-body Hamiltonian. For the long-range Ising model, we show that the precision of this parameter beats the shot-noise limit when the range of interactions is below a threshold value.

  6. Estimation of the ARNO model baseflow parameters using daily streamflow data

    NASA Astrophysics Data System (ADS)

    Abdulla, F. A.; Lettenmaier, D. P.; Liang, Xu

    1999-09-01

    An approach is described for estimating the baseflow parameters of the ARNO model from historical baseflow recession sequences extracted from daily streamflow records. This approach allows four of the model parameters to be estimated without rainfall data, and it effectively partitions the parameter estimation procedure so that parsimonious search procedures can be used to estimate the remaining storm-response parameters separately. Three optimization methods are evaluated for estimating the four baseflow parameters: the downhill Simplex (S), Simulated Annealing combined with the Simplex method (SA), and Shuffled Complex Evolution (SCE). These estimation procedures are explored in conjunction with four objective functions: (1) ordinary least squares; (2) ordinary least squares with Box-Cox transformation; (3) ordinary least squares on prewhitened residuals; (4) ordinary least squares on prewhitened residuals with Box-Cox transformation. The effects of changing the seed of the random number generator for the SA and SCE methods are also explored, as are the effects of the parameter bounds. Although all schemes converge to the same values of the objective function, the SCE method was found to be less sensitive to these issues than either the SA or the Simplex scheme. Parameter uncertainty and interactions are investigated through estimation of the variance-covariance matrix and confidence intervals. As expected, the parameters were found to be correlated and the covariance matrix was not diagonal. Furthermore, linearized confidence interval theory failed for about one-fourth of the catchments, while maximum likelihood theory did not fail for any of the catchments.
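    As an illustration of objective (2), the snippet below fits a hypothetical exponential baseflow recession Q(t) = Q0·exp(−t/τ) by ordinary least squares on Box-Cox-transformed flows, using the downhill simplex (Nelder-Mead) scheme mentioned in the abstract. The recession form, the parameter values, and the choice λ = 0.3 are illustrative assumptions, not the actual ARNO baseflow formulation:

```python
import numpy as np
from scipy.optimize import minimize

def boxcox_transform(q, lam=0.3):
    """Box-Cox transform of flows; lam = 0.3 is an illustrative choice."""
    return np.log(q) if lam == 0 else (q**lam - 1.0) / lam

# Hypothetical recession data: Q(t) = Q0 * exp(-t/tau) with ~5% noise
rng = np.random.default_rng(0)
t = np.arange(0.0, 30.0)
Q0_true, tau_true = 12.0, 8.0
q_obs = Q0_true * np.exp(-t / tau_true) * rng.lognormal(0.0, 0.05, t.size)

def objective(params):
    """Objective (2): ordinary least squares with Box-Cox transformation."""
    Q0, tau = params
    if Q0 <= 0.0 or tau <= 0.0:          # keep parameters physical
        return np.inf
    q_sim = Q0 * np.exp(-t / tau)
    return np.sum((boxcox_transform(q_obs) - boxcox_transform(q_sim))**2)

# Downhill simplex (Nelder-Mead), one of the three search schemes compared
res = minimize(objective, x0=[5.0, 3.0], method="Nelder-Mead")
Q0_hat, tau_hat = res.x
print(Q0_hat, tau_hat)  # should recover values near (12, 8)
```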

  7. Parameter estimation and sensitivity analysis for a mathematical model with time delays of leukemia

    NASA Astrophysics Data System (ADS)

    Cândea, Doina; Halanay, Andrei; Rǎdulescu, Rodica; Tǎlmaci, Rodica

    2017-01-01

    We consider a system of nonlinear delay differential equations that describes the interaction between three competing cell populations: healthy, leukemic, and anti-leukemia T cells involved in Chronic Myeloid Leukemia (CML) under treatment with Imatinib. The aim of this work is to establish, using a sensitivity analysis of the model parameters, which parameters are the most important for the success or failure of leukemia remission under treatment. For the parameters that most affect the evolution of CML during Imatinib treatment, we estimate realistic values from experimental data. For these parameters, steady states are calculated and their stability is analyzed and interpreted biologically.

  8. Constraints on the dark matter and dark energy interactions from weak lensing bispectrum tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Rui; Feng, Chang; Wang, Bin, E-mail: an_rui@sjtu.edu.cn, E-mail: chang.feng@uci.edu, E-mail: wang_b@sjtu.edu.cn

    We estimate uncertainties of cosmological parameters for phenomenological interacting dark energy models using the weak lensing convergence power spectrum and bispectrum. We focus on bispectrum tomography and examine how well the weak lensing bispectrum with tomography can constrain the interactions between the dark sectors, as well as other cosmological parameters. Employing a Fisher matrix analysis, we forecast parameter uncertainties derived from weak lensing bispectra with a two-bin tomography and place upper bounds on the strength of the interactions between the dark sectors. The cosmic shear will be measured by upcoming weak lensing surveys with high sensitivity, which enables us to use the higher-order correlation functions of weak lensing to constrain the interaction between the dark sectors, and will potentially provide more stringent results when combined with other observations.

  9. Interacting dark sector and the coincidence problem within the scope of LRS Bianchi type I model

    NASA Astrophysics Data System (ADS)

    Muharlyamov, Ruslan K.; Pankratyeva, Tatiana N.

    2018-05-01

    It is shown that a suitable interaction between dark energy and dark matter in locally rotationally symmetric (LRS) Bianchi type I space-time can solve the coincidence problem without contradicting the accelerated expansion of the present Universe. The interaction parameters are estimated from observational data.

  10. Optimization of multi-environment trials for genomic selection based on crop models.

    PubMed

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion to optimize multi-environment trials so that genotype × environment interactions can be predicted more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting the breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling through crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method for optimizing the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined to this aim and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracies. In terms of quality of the parameter estimates, a MET defined with OptiMET was on average more efficient than a random MET composed of twice as many environments. OptiMET is thus a valuable tool for determining optimal experimental conditions to best exploit METs and the phenotyping tools that are currently being developed.

  11. Experimental design for estimating parameters of rate-limited mass transfer: Analysis of stream tracer studies

    USGS Publications Warehouse

    Wagner, Brian J.; Harvey, Judson W.

    1997-01-01

    Tracer experiments are valuable tools for analyzing the transport characteristics of streams and their interactions with shallow groundwater. The focus of this work is the design of tracer studies in high-gradient stream systems subject to advection, dispersion, groundwater inflow, and exchange between the active channel and zones in surface or subsurface water where flow is stagnant or slow moving. We present a methodology for (1) evaluating and comparing alternative stream tracer experiment designs and (2) identifying those combinations of stream transport properties that pose limitations to parameter estimation and therefore a challenge to tracer test design. The methodology uses the concept of global parameter uncertainty analysis, which couples solute transport simulation with parameter uncertainty analysis in a Monte Carlo framework. Two general conclusions resulted from this work. First, the solute injection and sampling strategy has an important effect on the reliability of transport parameter estimates. We found that constant injection with sampling through concentration rise, plateau, and fall provided considerably more reliable parameter estimates than a pulse injection across the spectrum of transport scenarios likely encountered in high-gradient streams. Second, for a given tracer test design, the uncertainties in mass transfer and storage-zone parameter estimates are strongly dependent on the experimental Damkohler number, DaI, which is a dimensionless combination of the rates of exchange between the stream and storage zones, the stream-water velocity, and the stream reach length of the experiment. Parameter uncertainties are lowest at DaI values on the order of 1.0. When DaI values are much less than 1.0 (owing to high velocity, long exchange timescale, and/or short reach length), parameter uncertainties are high because only a small amount of tracer interacts with storage zones in the reach. 
For the opposite conditions (DaI ≫ 1.0), solute exchange rates are fast relative to stream-water velocity and all solute is exchanged with the storage zone over the experimental reach. As DaI increases, tracer dispersion caused by hyporheic exchange eventually reaches an equilibrium condition and storage-zone exchange parameters become essentially nonidentifiable.
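    The Damkohler grouping described above can be written as a one-line function. The form below, DaI = α(1 + A/As)·L/v, follows the dimensionless combination of exchange rate, velocity, and reach length named in the abstract; the reach values are hypothetical:

```python
def damkohler(alpha, A, As, L, v):
    """Experimental Damkohler number, DaI = alpha * (1 + A/As) * L / v.

    alpha : stream-storage exchange rate coefficient [1/s]
    A, As : cross-sectional areas of the channel and storage zone [m^2]
    L     : experimental reach length [m]
    v     : stream-water velocity [m/s]
    """
    return alpha * (1.0 + A / As) * L / v

# Hypothetical reach: parameter uncertainty is expected to be lowest when DaI ~ 1
DaI = damkohler(alpha=2e-4, A=0.5, As=0.25, L=500.0, v=0.3)
print(round(DaI, 2))  # 1.0
```

    A quick design check with this function (shorten L or raise v and watch DaI fall below 1) reproduces the abstract's point that reach length and velocity control identifiability of the storage-zone parameters.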

  12. Inferring interactions in complex microbial communities from nucleotide sequence data and environmental parameters

    PubMed Central

    Shang, Yu; Sikorski, Johannes; Bonkowski, Michael; Fiore-Donno, Anna-Maria; Kandeler, Ellen; Marhan, Sven; Boeddinghaus, Runa S.; Solly, Emily F.; Schrumpf, Marion; Schöning, Ingo; Wubet, Tesfaye; Buscot, Francois; Overmann, Jörg

    2017-01-01

    Interactions, in which two or more organisms affect each other, are decisive for the ecology of the organisms involved. Without direct experimental evidence, the analysis of interactions is difficult, and correlation analyses based on co-occurrences are often used to approximate interaction. Here, we present a new mathematical model to estimate the interaction strengths between taxa, based on changes in their relative abundances across environmental gradients. PMID:28288199
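    The co-occurrence-correlation baseline that this model aims to improve upon can be sketched quickly. The data below are synthetic: two hypothetical taxa with opposite trends along an environmental gradient yield a strong negative correlation even with no direct interaction, which is exactly the ambiguity the abstract points to:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical relative abundances of two taxa sampled along an
# environmental gradient (e.g., soil pH across 50 plots).
rng = np.random.default_rng(1)
gradient = np.linspace(0.0, 1.0, 50)
taxon_a = 0.2 + 0.5 * gradient + rng.normal(0, 0.02, 50)  # increases along gradient
taxon_b = 0.7 - 0.5 * gradient + rng.normal(0, 0.02, 50)  # decreases along gradient

# Co-occurrence correlation: the common but indirect interaction proxy.
rho, pval = spearmanr(taxon_a, taxon_b)
print(round(rho, 2))  # strongly negative -- but this could mean competition
                      # OR merely opposite environmental preferences;
                      # correlation alone cannot distinguish the two.
```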

  13. STELLAR ENCOUNTER RATE IN GALACTIC GLOBULAR CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bahramian, Arash; Heinke, Craig O.; Sivakoff, Gregory R.

    2013-04-01

    The high stellar densities in the cores of globular clusters cause significant stellar interactions. These stellar interactions can produce close binary mass-transferring systems involving compact objects and their progeny, such as X-ray binaries and radio millisecond pulsars. Comparing the numbers of these systems and the interaction rates in different clusters drives our understanding of how cluster parameters affect the production of close binaries. In this paper we estimate stellar encounter rates (Γ) for 124 Galactic globular clusters based on observational data, as opposed to previously employed methods that assumed 'King-model' profiles for all clusters. By deprojecting cluster surface brightness profiles to estimate luminosity density profiles, we treat 'King-model' and 'core-collapsed' clusters in the same way. In addition, we use Monte Carlo simulations to investigate the effects of uncertainties in various observational parameters (distance, reddening, surface brightness) on Γ, producing the first catalog of globular cluster stellar encounter rates with estimated errors. Comparing our results with published observations of likely products of stellar interactions (numbers of X-ray binaries, numbers of radio millisecond pulsars, and γ-ray luminosity), we find both clear correlations and some differences with published results.
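    A minimal sketch of the Monte Carlo error-propagation idea, using the widely quoted approximation Γ ∝ ρ0^1.5·rc² (velocity dispersion eliminated via a virial-type scaling) rather than the paper's deprojection method; all input values and uncertainties are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical cluster: central luminosity density rho0 (arbitrary units)
# and core radius rc (pc), each with ~10% observational scatter.
rho0 = rng.normal(5.0, 0.5, N)
rc = rng.normal(1.2, 0.12, N)

# Approximate encounter rate Gamma ~ rho0^1.5 * rc^2; the paper itself
# deprojects surface-brightness profiles instead of assuming this form.
gamma = rho0**1.5 * rc**2

# Report the median with 16th-84th percentile (1-sigma-like) error bars,
# which is how Monte Carlo draws turn into catalog uncertainties.
lo, med, hi = np.percentile(gamma, [16, 50, 84])
print(f"Gamma = {med:.1f} (+{hi - med:.1f} / -{med - lo:.1f})  [arbitrary units]")
```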

  14. Quantile regression models of animal habitat relationships

    USGS Publications Warehouse

    Cade, Brian S.

    2003-01-01

    Typically, not all of the factors that limit an organism are measured and included in the statistical models used to investigate relationships with its environment. If important unmeasured variables interact multiplicatively with the measured variables, the statistical models often will have heterogeneous response distributions with unequal variances. Quantile regression is an approach for estimating the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Chapter 1 introduces quantile regression and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of estimates for homogeneous and heterogeneous regression models. Chapter 2 evaluates the performance of quantile rankscore tests used for hypothesis testing and for constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). A permutation F test maintained better Type I errors than the Chi-square T test for models with smaller n, a greater number of parameters p, and more extreme quantiles τ. Both versions of the test required weighting to maintain correct Type I errors when there was heterogeneity under the alternative model. An example application related trout densities to stream channel width:depth. Chapter 3 evaluates a drop-in-dispersion, F-ratio-like permutation test for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). Chapter 4 simulates from a large (N = 10,000) finite population representing grid areas on a landscape to demonstrate various forms of hidden bias that can occur when the effect of a measured habitat variable on some animal is confounded with the effect of another unmeasured variable (spatially structured or not). 
Depending on whether interactions of the measured habitat and unmeasured variable were negative (interference interactions) or positive (facilitation interactions), either upper (τ > 0.5) or lower (τ < 0.5) quantile regression parameters were less biased than mean rate parameters. Sampling (n = 20 - 300) simulations demonstrated that confidence intervals constructed by inverting rankscore tests provided valid coverage of these biased parameters. Quantile regression was used to estimate effects of physical habitat resources on a bivalve mussel (Macomona liliana) in a New Zealand harbor by modeling the spatial trend surface as a cubic polynomial of location coordinates.
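    The heterogeneous-variance situation above can be sketched with a hand-rolled quantile (pinball-loss) regression; a production analysis would use statsmodels or R's quantreg instead. The data-generating model here is hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(beta, X, y, tau):
    """Quantile-regression check loss for the linear predictor X @ beta."""
    r = y - X @ beta
    return np.sum(np.where(r >= 0.0, tau * r, (tau - 1.0) * r))

# Hypothetical heterogeneous data: the response ceiling grows with x while
# the floor stays near zero, so upper and lower quantile slopes differ.
rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(0.0, 10.0, n)
y = rng.uniform(0.0, 1.0, n) * (1.0 + 2.0 * x)   # Q_tau(y|x) = tau*(1 + 2x)
X = np.column_stack([np.ones(n), x])

slopes = {}
for tau in (0.1, 0.5, 0.9):
    res = minimize(pinball_loss, x0=np.array([0.5, 1.0]),
                   args=(X, y, tau), method="Nelder-Mead")
    slopes[tau] = res.x[1]

print(slopes)  # slope ~ 2*tau: the upper quantiles reveal the limiting
               # relationship that a mean regression would average away
```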

  15. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization, by relaxing the model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), through a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for the uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  16. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    PubMed

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making are not a simple one-way sequence but a complex iterative cognitive process. However, the underlying functional mechanisms are still unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. The model assumes that significant uncertainty about task-related parameters of the environment results in parameter estimation errors, and that an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of the parameter estimate and optimization of the control action cannot be performed separately under parameter uncertainty combined with asymmetry of the estimation error cost, making the certainty equivalence principle inapplicable under those conditions. The hypothesis that not only the action but also perception itself is biased by the above deviation of the parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and action planning under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
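    The core effect, that asymmetric error cost shifts the optimal estimate away from the maximum-likelihood value, can be illustrated with the standard result that the minimizer of an expected asymmetric linear cost is a quantile of the belief distribution. The numbers below are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
# Belief (posterior) about a task parameter, e.g. an object's distance:
# symmetric about 10.0, so the maximum-likelihood estimate is 10.0.
posterior_samples = rng.normal(10.0, 1.0, 200_000)

def optimal_estimate(samples, cost_under, cost_over):
    """Minimizer of expected asymmetric linear cost: cost_under per unit of
    underestimation, cost_over per unit of overestimation. The optimum is
    the q-quantile of the belief, with q = cost_under / (cost_under + cost_over)."""
    q = cost_under / (cost_under + cost_over)
    return np.quantile(samples, q)

# If underestimating is 4x as costly as overestimating (e.g., undershooting
# a reach), the rational estimate is biased upward, away from the ML value.
est = optimal_estimate(posterior_samples, cost_under=4.0, cost_over=1.0)
print(round(est, 2))  # > 10.0: a systematic deviation from the MLE
```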

  17. Estimating turbulent electrovortex flow parameters near the dynamo cycle bifurcation point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimin, V.D.; Kolpakov, N.Yu.; Khripchenko, S.Yu.

    1988-07-01

    Models for estimating turbulent electrovortex flow parameters, derived in earlier studies, were delineated and extended in this paper to express those parameters near the dynamo cycle bifurcation point in a spherical cavity. Toroidal and poloidal fields arising from the induction currents within the liquid metal and their electrovortex interactions were calculated. Toroidal field strengthening by the poloidal electrovortex flow, the first part of the dynamo loop, was determined by the viscous dissipation in the liquid metal. The second part of the loop, in which the toroidal field localized in the liquid metal is converted to a poloidal field and emerges from the sphere, was also established. The dissipative effects near the critical magnetic Reynolds number were estimated.

  18. Entangling measurements for multiparameter estimation with two qubits

    NASA Astrophysics Data System (ADS)

    Roccia, Emanuele; Gianani, Ilaria; Mancino, Luca; Sbroscia, Marco; Somma, Fabrizia; Genoni, Marco G.; Barbieri, Marco

    2018-01-01

    Carefully tailoring the quantum state of probes offers the capability of investigating matter at unprecedented precision. Rarely, however, is the interaction with the sample fully encompassed by a single parameter, and the information contained in the probe then needs to be partitioned among multiple parameters. There exist, then, practical bounds on the ultimate joint-estimation precision set by the unavailability of a single optimal measurement for all parameters. Here, we discuss how these considerations are modified for two-level quantum probes (qubits) by the use of two copies and entangling measurements. We find that the joint estimation of phase and phase diffusion benefits from such a collective measurement, while for multiple phases no enhancement can be observed. We demonstrate this in a proof-of-principle photonic setup.

  19. A data-input program (MFI2005) for the U.S. Geological Survey modular groundwater model (MODFLOW-2005) and parameter estimation program (UCODE_2005)

    USGS Publications Warehouse

    Harbaugh, Arien W.

    2011-01-01

    The MFI2005 data-input (entry) program was developed for use with the U.S. Geological Survey modular three-dimensional finite-difference groundwater model, MODFLOW-2005. MFI2005 runs on personal computers and is designed to be easy to use; data are entered interactively through a series of display screens. MFI2005 supports parameter estimation with the UCODE_2005 program. Data for MODPATH, a particle-tracking program for use with MODFLOW-2005, can also be entered using MFI2005. MFI2005 can be used in conjunction with other data-input programs so that each part of a model dataset can be entered with the most suitable program.

  20. K-ε Turbulence Model Parameter Estimates Using an Approximate Self-similar Jet-in-Crossflow Solution

    DOE PAGES

    DeChant, Lawrence; Ray, Jaideep; Lefantzi, Sophia; ...

    2017-06-09

    The k-ε turbulence model has been described as perhaps "the most widely used complete turbulence model." This family of heuristic Reynolds Averaged Navier-Stokes (RANS) turbulence closures is supported by a suite of model parameters that have been estimated by demanding the satisfaction of well-established canonical flows such as homogeneous shear flow, log-law behavior, etc. While this procedure does yield a set of so-called nominal parameters, it is abundantly clear that they do not provide a universally satisfactory turbulence model capable of simulating complex flows. Recent work on the Bayesian calibration of the k-ε model using jet-in-crossflow wind tunnel data has yielded parameter estimates that are far more predictive than the nominal parameter values. In this paper, we develop a self-similar asymptotic solution for axisymmetric jet-in-crossflow interactions and derive analytical estimates of the parameters that were inferred using Bayesian calibration. The self-similar method uses a near-field approach to estimate the turbulence model parameters while retaining the classical far-field scaling to model flow-field quantities. Our parameter values are seen to be far more predictive than the nominal values, as checked using RANS simulations and experimental measurements. They are also closer to the Bayesian estimates than the nominal parameters. A traditional simplified jet trajectory model is explicitly related to the turbulence model parameters and is shown to yield good agreement with measurement when utilizing the analytically derived turbulence model coefficients. Finally, the close agreement between the turbulence model coefficients obtained via Bayesian calibration and the analytically estimated coefficients derived in this paper is consistent with the contention that the Bayesian calibration approach is firmly rooted in the underlying physical description.
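    For orientation, the nominal coefficients referred to above are the widely quoted standard k-ε values, and a "traditional simplified jet trajectory model" is typically a power law in downstream distance. The sketch below uses illustrative trajectory coefficients (A, m), not the analytically derived ones from the paper:

```python
# Nominal k-epsilon closure coefficients (the widely quoted standard values);
# the paper argues these are not predictive for jet-in-crossflow and derives
# flow-specific estimates instead.
NOMINAL = {"C_mu": 0.09, "C_eps1": 1.44, "C_eps2": 1.92,
           "sigma_k": 1.0, "sigma_eps": 1.3}

def jet_trajectory(x, d, r, A=1.6, m=1.0 / 3.0):
    """Simplified jet-in-crossflow centerline, y/(r*d) = A * (x/(r*d))**m.

    A and m are illustrative placeholders here; in the paper the trajectory
    coefficients are tied analytically to the turbulence-model parameters.
    x : downstream distance, d : jet diameter, r : jet-to-crossflow
    momentum-flux ratio parameter.
    """
    return r * d * A * (x / (r * d)) ** m

# Penetration grows sublinearly: doubling x raises y by 2**m (~1.26 for m=1/3)
y1 = jet_trajectory(x=10.0, d=1.0, r=5.0)
y2 = jet_trajectory(x=20.0, d=1.0, r=5.0)
print(round(y2 / y1, 3))  # ~1.26
```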

  1. Boosted Multivariate Trees for Longitudinal Data

    PubMed Central

    Pande, Amol; Li, Liang; Rajeswaran, Jeevanantham; Ehrlinger, John; Kogalur, Udaya B.; Blackstone, Eugene H.; Ishwaran, Hemant

    2017-01-01

    Machine learning methods provide a powerful approach for analyzing longitudinal data in which repeated measurements are observed for a subject over time. We boost multivariate trees to fit a novel flexible semi-nonparametric marginal model for longitudinal data. In this model, features are assumed to be nonparametric, while feature-time interactions are modeled semi-nonparametrically using P-splines with an estimated smoothing parameter. To avoid overfitting, we describe a relatively simple in-sample cross-validation method which can be used to estimate the optimal boosting iteration and which has the surprising added benefit of stabilizing certain parameter estimates. Our new multivariate tree boosting method is shown to be highly flexible, robust to covariance misspecification and unbalanced designs, and resistant to overfitting in high dimensions. Feature selection can be used to identify important features and feature-time interactions. An application to longitudinal data of forced expiratory volume in 1 second (FEV1) for lung transplant patients identifies an important feature-time interaction and illustrates the ease with which our method can find complex relationships in longitudinal data. PMID:29249866

  2. Estimation of kinematic parameters in CALIFA galaxies: no-assumption on internal dynamics

    NASA Astrophysics Data System (ADS)

    García-Lorenzo, B.; Barrera-Ballesteros, J.; CALIFA Team

    2016-06-01

    We propose a simple approach to homogeneously estimate the kinematic parameters of a broad variety of galaxies (elliptical, spiral, irregular, or interacting systems). This methodology avoids the use of any kinematic model or any assumption about internal dynamics. This simple but novel approach allows us to determine the frequency of kinematic distortions, the systemic velocity, the kinematic center, and the kinematic position angles, all measured directly from the two-dimensional distributions of radial velocities. We test our analysis tools using the CALIFA Survey.
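    The model-free spirit of such an approach can be sketched on a synthetic velocity map: take the median of the map as the systemic velocity, and the orientation that maximizes the side-to-side velocity difference as the kinematic position angle. This is an illustrative reconstruction, not the CALIFA pipeline:

```python
import numpy as np

# Hypothetical 2-D radial-velocity map of a rotating disk (solid-body),
# with systemic velocity 1500 km/s and kinematic PA of 30 degrees.
n_half = 50
y, x = np.mgrid[-n_half:n_half + 1, -n_half:n_half + 1]
pa_true = np.deg2rad(30.0)
v_map = 1500.0 + 2.0 * (x * np.sin(pa_true) + y * np.cos(pa_true))

# Systemic velocity without any dynamical model: the median of the map.
v_sys = float(np.median(v_map))

# Kinematic PA without a model: the angle whose dividing line through the
# center gives the largest mean velocity difference between the two sides.
angles = np.deg2rad(np.arange(0.0, 180.0, 1.0))
proj = (x[None, :, :] * np.sin(angles)[:, None, None]
        + y[None, :, :] * np.cos(angles)[:, None, None])
asym = [np.mean(v_map[p > 0]) - np.mean(v_map[p < 0]) for p in proj]
pa_est = float(np.rad2deg(angles[int(np.argmax(asym))]))

print(v_sys, pa_est)  # recovers ~1500.0 km/s and ~30 degrees
```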

  3. Validation of the alternating conditional estimation algorithm for estimation of flexible extensions of Cox's proportional hazards model with nonlinear constraints on the parameters.

    PubMed

    Wynant, Willy; Abrahamowicz, Michal

    2016-11-01

    Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
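    A minimal sketch of the ACE idea on a toy convex objective (not the flexible Cox extensions in the paper): split the parameters into mutually exclusive blocks and cycle conditional optimizations, each holding the other block fixed, until the estimates stop changing:

```python
from scipy.optimize import minimize_scalar

# Toy objective with two coupled parameter blocks a and b; the coupling
# term 0.5*a*b is what makes the alternation nontrivial.
def f(a, b):
    return (a - 2.0)**2 + (b + 1.0)**2 + 0.5 * a * b

a, b = 0.0, 0.0
for _ in range(50):
    a = minimize_scalar(lambda u: f(u, b)).x   # step 1: optimize a given b
    b = minimize_scalar(lambda u: f(a, u)).x   # step 2: optimize b given a

# Analytic joint minimum is (a, b) = (2.4, -1.6); alternation reaches it.
print(round(a, 3), round(b, 3))
```

    For a convex objective like this one, the alternation provably converges to the joint optimum; the paper's simulations play the analogous validation role for the nonconvex likelihoods where no such guarantee exists.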

  4. Uncertainty quantification of effective nuclear interactions

    DOE PAGES

    Pérez, R. Navarro; Amaro, J. E.; Arriola, E. Ruiz

    2016-03-02

    We give a brief review of the development of phenomenological NN interactions and the corresponding quantification of statistical uncertainties. We look into the uncertainty of effective interactions broadly used in mean-field calculations through the Skyrme parameters and effective field theory counter-terms, estimating both the statistical and systematic uncertainties stemming from the NN interaction. We also comment on the role played by different fitting strategies in the light of recent developments.

  6. Earth-Moon system: Dynamics and parameter estimation

    NASA Technical Reports Server (NTRS)

    Breedlove, W. J., Jr.

    1979-01-01

    The following topics are discussed: (1) the Unified Model of Lunar Translation/Rotation (UMLTR); (2) the effect of figure-figure interactions on lunar physical librations; (3) the effect of translational-rotational coupling on the lunar orbit; and (4) an error analysis for estimating lunar inertias from LURE (Lunar Laser Ranging Experiment) data.

  7. Optimal Design for the Precise Estimation of an Interaction Threshold: The Impact of Exposure to a Mixture of 18 Polyhalogenated Aromatic Hydrocarbons

    PubMed Central

    Yeatts, Sharon D.; Gennings, Chris; Crofton, Kevin M.

    2014-01-01

    Traditional additivity models provide little flexibility in modeling the dose-response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, their implicit nature is an obstacle to the formation of the parameter covariance matrix, which forms the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for the FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to more precisely estimate the location of the interaction threshold, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to more precisely define the interaction threshold while maintaining the characteristics deemed important in practice. PMID:22640366

  8. Functional interaction-based nonlinear models with application to multiplatform genomics data.

    PubMed

    Davenport, Clemontina A; Maity, Arnab; Baladandayuthapani, Veerabhadran

    2018-05-07

    Functional regression allows for a scalar response to be dependent on a functional predictor; however, not much work has been done when a scalar exposure that interacts with the functional covariate is introduced. In this paper, we present 2 functional regression models that account for this interaction and propose 2 novel estimation procedures for the parameters in these models. These estimation methods allow for a noisy and/or sparsely observed functional covariate and are easily extended to generalized exponential family responses. We compute standard errors of our estimators, which allows for further statistical inference and hypothesis testing. We compare the performance of the proposed estimators to each other and to one found in the literature via simulation and demonstrate our methods using a real data example. Copyright © 2018 John Wiley & Sons, Ltd.

  9. Bayesian Methods for Effective Field Theories

    NASA Astrophysics Data System (ADS)

    Wesolowski, Sarah

    Microscopic predictions of the properties of atomic nuclei have reached a high level of precision in the past decade. This progress mandates improved uncertainty quantification (UQ) for a robust comparison of experiment with theory. With the uncertainty from many-body methods under control, calculations are now sensitive to the input inter-nucleon interactions. These interactions include parameters that must be fit to experiment, inducing both uncertainty from the fit and from missing physics in the operator structure of the Hamiltonian. Furthermore, the implementation of the inter-nucleon interactions is not unique, which presents the additional problem of assessing results using different interactions. Effective field theories (EFTs) take advantage of a separation of high- and low-energy scales in the problem to form a power-counting scheme that allows the organization of terms in the Hamiltonian based on their expected contribution to observable predictions. This scheme gives a natural framework for quantification of uncertainty due to missing physics. The free parameters of the EFT, called the low-energy constants (LECs), must be fit to data, but in a properly constructed EFT these constants will be natural-sized, i.e., of order unity. The constraints provided by the EFT, namely the size of the systematic uncertainty from truncation of the theory and the natural size of the LECs, are assumed information even before a calculation is performed or a fit is done. Bayesian statistical methods provide a framework for treating uncertainties that naturally incorporates prior information as well as putting stochastic and systematic uncertainties on an equal footing. For EFT UQ Bayesian methods allow the relevant EFT properties to be incorporated quantitatively as prior probability distribution functions (pdfs). 
    Following the logic of probability theory, observable quantities and underlying physical parameters such as the EFT breakdown scale may be expressed as pdfs that incorporate the prior pdfs. Problems of model selection, such as distinguishing between competing EFT implementations, are also natural in a Bayesian framework. In this thesis we focus on two complementary topics for EFT UQ using Bayesian methods: quantifying EFT truncation uncertainty and estimating the LECs. Using order-by-order calculations and underlying EFT constraints as prior information, we show how to estimate EFT truncation uncertainties. We then apply the result to calculating truncation uncertainties on predictions of nucleon-nucleon scattering in chiral effective field theory. We apply model-checking diagnostics to our calculations to ensure that the statistical model of truncation uncertainty produces consistent results. A framework for EFT parameter estimation based on EFT convergence properties and naturalness is developed that includes a series of diagnostics to ensure the extraction of the maximum amount of available information from data to estimate LECs with minimal bias. We develop this framework using model EFTs and apply it to the problem of extrapolating lattice quantum chromodynamics results for the nucleon mass. We then apply aspects of the parameter estimation framework to perform case studies in chiral EFT parameter estimation, investigating a possible operator redundancy at fourth order in the chiral expansion and the appropriate inclusion of truncation uncertainty in estimating LECs.

  10. Modeling Complex Equilibria in ITC Experiments: Thermodynamic Parameters Estimation for a Three Binding Site Model

    PubMed Central

    Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.

    2013-01-01

    Isothermal titration calorimetry (ITC) is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g., Keq (or ΔG), ΔH, ΔS, and n) for a ligand-binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combining equilibrium-constant, mass-balance, and charge-balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example, one binding site, two binding sites, sequential sites, and n independent binding sites. More complex models, for example, three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding, need to be developed on a case-by-case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple-binding-site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g., up to nine parameters in the three-binding-site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
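    For a flavor of the regression problem involved, the following sketch fits the simplest case, a single-site binding model, to simulated heats by separable least squares: for each trial association constant K, the enthalpy ΔH enters linearly and has a closed-form solution. All numbers are hypothetical, and this is far simpler than the paper's three-site model.

```python
import numpy as np

def bound(Mt, Lt, K):
    """[ML] for 1:1 binding M + L <-> ML with association constant K,
    from the quadratic mass-balance equation."""
    b = Mt + Lt + 1.0 / K
    return 0.5 * (b - np.sqrt(b * b - 4.0 * Mt * Lt))

# Simulate cumulative heats Q = dH * V * [ML] for a titration (hypothetical values)
Mt = 10e-6                           # macromolecule in the cell, M
Lt = np.linspace(1e-6, 30e-6, 25)    # total ligand after each injection, M
V = 1.4e-3                           # cell volume, L
K_true, dH_true = 1e6, -40e3         # 1/M and J/mol
rng = np.random.default_rng(1)
Q = dH_true * V * bound(Mt, Lt, K_true) + 1e-8 * rng.standard_normal(Lt.size)

# Separable least squares: for each trial K, dH is a linear parameter
best = None
for logK in np.linspace(4, 8, 401):
    m = V * bound(Mt, Lt, 10.0 ** logK)
    dH = (m @ Q) / (m @ m)                 # closed-form LS solution for dH given K
    sse = np.sum((Q - dH * m) ** 2)
    if best is None or sse < best[0]:
        best = (sse, 10.0 ** logK, dH)
sse, K_hat, dH_hat = best
```

    A multi-site model replaces the quadratic mass balance with a coupled polynomial system solved numerically at each injection, but the outer fitting loop has the same shape.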

  11. The link between a negative high resolution resist contrast/developer performance and the Flory-Huggins parameter estimated from the Hansen solubility sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    StCaire, Lorri; Olynick, Deirdre L.; Chao, Weilun L.

    We have implemented a technique to identify candidate polymer solvents for spinning, developing, and rinsing a high-resolution, negative electron-beam resist, hexa-methyl acetoxy calix(6)arene, to elicit optimum pattern-development performance. Using the three-dimensional Hansen solubility parameters of over 40 solvents, we have constructed a Hansen solubility sphere. From this sphere, we have estimated the Flory-Huggins interaction parameter between each solvent and hexa-methyl acetoxy calix(6)arene and found a correlation between resist development contrast and the Flory-Huggins parameter. This provides new insight into the development behavior of resist materials, which is necessary for obtaining the ultimate lithographic resolution.
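    A common way to estimate a Flory-Huggins parameter from Hansen parameters (not necessarily the authors' exact procedure) uses χ ≈ α·V1/(RT)·[(Δδd)² + 0.25(Δδp)² + 0.25(Δδh)²], with solvent molar volume V1 and an empirical correction factor α. A minimal sketch with illustrative numbers:

```python
def chi_from_hansen(V1, d1, d2, T=298.15, alpha=1.0):
    """Approximate Flory-Huggins chi between a solvent (1) and a polymer (2)
    from Hansen parameters. V1: solvent molar volume in cm^3/mol; d1, d2:
    tuples (delta_d, delta_p, delta_h) in MPa**0.5; alpha: empirical factor
    (values around 0.6-1 appear in the literature). Since MPa * cm^3 = J,
    the result is dimensionless."""
    R = 8.314  # J/(mol K)
    dd = d1[0] - d2[0]
    dp = d1[1] - d2[1]
    dh = d1[2] - d2[2]
    return alpha * V1 * (dd**2 + 0.25 * dp**2 + 0.25 * dh**2) / (R * T)

# Illustrative only: a chloroform-like solvent against a hypothetical polar resist
chi = chi_from_hansen(80.7, (17.8, 3.1, 5.7), (18.0, 9.0, 7.0))
```

    Smaller χ indicates a better solvent; solvents inside the Hansen sphere of the resist correspond to small Hansen distances and hence small χ.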

  12. Estimating Colloidal Contact Model Parameters Using Quasi-Static Compression Simulations.

    PubMed

    Bürger, Vincent; Briesen, Heiko

    2016-10-05

    For colloidal particles interacting in suspensions, clusters, or gels, contact models should attempt to include all physical phenomena experimentally observed. One critical point when formulating a contact model is to ensure that the interaction parameters can be easily obtained from experiments. Experimental determinations of contact parameters for particles are either based on bulk measurements for simulations on the macroscopic scale or require elaborate setups, such as atomic force microscopy, for obtaining tangential parameters. However, on the colloidal scale, a simple method is required to obtain all interaction parameters simultaneously. This work demonstrates that quasi-static compression of a fractal-like particle network provides all the information necessary to obtain particle interaction parameters using a simple spring-based contact model. These springs provide resistances against all degrees of freedom associated with two-particle interactions and include critical forces or moments at which the springs break, indicating a bond-breakage event. A position-based cost function is introduced to show the identifiability of the two-particle contact parameters, and a discrete, nonlinear, non-gradient-based global optimization method (simplex with simulated annealing, SIMPSA) is used to minimize the cost function calculated from deviations of particle positions. Results show that, in principle, all necessary contact parameters for an arbitrary particle network can be identified, although numerical efficiency as well as experimental noise must be addressed when applying this method. Such an approach lays the groundwork for identifying particle-contact parameters from a position-based particle analysis for a colloidal system using just one experiment.
Spring constants also directly influence the time step of the discrete-element method, and a detailed knowledge of all necessary interaction parameters will help to improve the efficiency of colloidal particle simulations.

  13. Quantum metrology and estimation of Unruh effect

    PubMed Central

    Wang, Jieci; Tian, Zehua; Jing, Jiliang; Fan, Heng

    2014-01-01

    We study quantum metrology for a pair of entangled Unruh-DeWitt detectors when one of them is accelerated and coupled to a massless scalar field. Compared with previous schemes, our model requires only local interaction and avoids the use of cavities in the probe-state preparation process. We show that the probe-state preparation and the interaction between the accelerated detector and the external field have significant effects on the value of the quantum Fisher information and, correspondingly, set different ultimate limits of precision in the estimation of the Unruh effect. We find that the precision of the estimation can be improved by a larger effective coupling strength and a longer interaction time. Alternatively, there is a range of detector energy gaps that provides better precision. Thus we may adjust these parameters to attain higher precision in the estimation. We also find that an extremely high acceleration is not required in the quantum metrology process. PMID:25424772

  14. Bayesian parameter estimation for nonlinear modelling of biological pathways.

    PubMed

    Ghasemi, Omid; Lindsey, Merry L; Yang, Tianyi; Nguyen, Nguyen; Huang, Yufei; Jin, Yu-Fang

    2011-01-01

    The availability of temporal measurements from biological experiments has significantly promoted research in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred format for representing reaction rates in differential-equation frameworks because of their simple structure and the ease with which they fit saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of this high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal-to-noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly parameterized dynamic systems.
Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models for biological pathways. This method can be further extended to high order systems and thus provides a useful tool to analyze biological dynamics and extract information using temporal data.
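    A toy version of the approach can be sketched with a random-walk Metropolis sampler (one instance of MCMC) estimating two Hill parameters from simulated data; the actual study embeds the Hill term in a Runge-Kutta-discretized pathway model, and all values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def hill(x, K, n):
    """Normalized Hill response x^n / (K^n + x^n)."""
    return x**n / (K**n + x**n)

# Synthetic dose-response data (hypothetical pathway readout)
x = np.linspace(0.1, 10.0, 30)
K_true, n_true, sigma = 2.0, 3.0, 0.03
y = hill(x, K_true, n_true) + sigma * rng.standard_normal(x.size)

def log_post(theta):
    """Gaussian log-likelihood with flat priors on a bounded box."""
    K, n = theta
    if K <= 0 or n <= 0 or K > 50 or n > 10:
        return -np.inf
    resid = y - hill(x, K, n)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis with a 5000-iteration burn-in
theta = np.array([1.0, 1.0])
lp = log_post(theta)
samples = []
for it in range(20000):
    prop = theta + 0.1 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    if it >= 5000:
        samples.append(theta.copy())
samples = np.array(samples)
K_hat, n_hat = samples.mean(axis=0)
```

    The posterior samples also give credible intervals directly, which is the main practical advantage over a point fit.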

  15. Enthalpic parameters of interaction between diglycylglycine and polyatomic alcohols in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Mezhevoi, I. N.; Badelin, V. G.

    2015-12-01

    Integral enthalpies of solution Δ_sol H_m of diglycylglycine in aqueous solutions of glycerol, ethylene glycol, and 1,2-propylene glycol are measured via solution calorimetry. The experimental data are used to calculate the standard enthalpies of solution (Δ_sol H°) and transfer (Δ_tr H°) of the tripeptide from water to aqueous solutions of polyatomic alcohols. The enthalpic pairwise coefficients h_xy of interactions between the tripeptide and polyatomic alcohol molecules are calculated using the McMillan-Mayer solution theory and are found to have positive values. The findings are discussed in terms of the various types of interactions in ternary systems and the effect that the structural features of the interacting biomolecules have on the thermochemical parameters of diglycylglycine dissolution.

  16. Identification of Spey engine dynamics in the augmentor wing jet STOL research aircraft from flight data

    NASA Technical Reports Server (NTRS)

    Dehoff, R. L.; Reed, W. B.; Trankle, T. L.

    1977-01-01

    The development and validation of a Spey engine model are described. An analysis of the dynamical interactions involved in the propulsion unit is presented. The model was reduced to contain only significant effects and was used, in conjunction with flight data obtained from an augmentor wing jet STOL research aircraft, to develop initial estimates of parameters in the system. The theoretical background employed in estimating the parameters is outlined. The software package developed for processing the flight data is described. Results are summarized.

  17. ITC Recommendations for Transporter Kinetic Parameter Estimation and Translational Modeling of Transport-Mediated PK and DDIs in Humans

    PubMed Central

    Zamek-Gliszczynski, MJ; Lee, CA; Poirier, A; Bentz, J; Chu, X; Ellens, H; Ishikawa, T; Jamei, M; Kalvass, JC; Nagar, S; Pang, KS; Korzekwa, K; Swaan, PW; Taub, ME; Zhao, P; Galetin, A

    2013-01-01

    This white paper provides a critical analysis of methods for estimating transporter kinetics and recommendations on proper parameter calculation in various experimental systems. Rational interpretation of transporter-knockout animal findings and application of static and dynamic physiologically based modeling approaches for prediction of human transporter-mediated pharmacokinetics and drug–drug interactions (DDIs) are presented. The objective is to provide appropriate guidance for the use of in vitro, in vivo, and modeling tools in translational transporter science. PMID:23588311

  18. Deriving percentage study weights in multi-parameter meta-analysis models: with application to meta-regression, network meta-analysis and one-stage individual participant data models.

    PubMed

    Riley, Richard D; Ensor, Joie; Jackson, Dan; Burke, Danielle L

    2017-01-01

    Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
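    The decomposition can be sketched for a two-parameter fixed-effect meta-regression: each study contributes information S_i, the variance matrix of the estimates is V = (Σ_i S_i)^(-1), and the percentage weight of study i toward parameter k is the share of V_kk contributed by V S_i V. The numbers below are illustrative, not from the paper.

```python
import numpy as np

# Hypothetical meta-regression: theta = (intercept, covariate effect),
# study i contributes Fisher information S_i = x_i x_i' / v_i with x_i = (1, z_i)
z = np.array([0.0, 1.0, 2.0, 3.0])        # study-level covariate values
v = np.array([0.04, 0.09, 0.04, 0.16])    # within-study variances of the effects

S = [np.outer([1.0, zi], [1.0, zi]) / vi for zi, vi in zip(z, v)]
I_total = np.sum(S, axis=0)               # total Fisher information
V = np.linalg.inv(I_total)                # variance matrix of the estimates

# Percentage weight of study i toward parameter k, via V = V (sum_i S_i) V:
# the i-th summand's kk element, as a share of Var(theta_k)
pct = np.array([[100.0 * (V @ Si @ V)[k, k] / V[k, k] for k in range(2)]
                for Si in S])
```

    Because the study contributions V S_i V sum exactly to V, each column of `pct` sums to 100, which is the property that makes these shares interpretable as percentage weights.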

  19. Estimates of genetic parameters in turkeys. 3. Sexual dimorphism and its implications in selection procedures.

    PubMed

    Toelle, V D; Havenstein, G B; Nestor, K E; Bacon, W L

    1990-10-01

    Live, carcass, and skeletal data taken at 16 wk of age on 504 female and 584 male turkeys from 34 sires and 168 dams were utilized to evaluate sex differences in genetic parameter estimates. Data were transformed to a common mean and variance to evaluate possible scaling effects. Genetic parameters were estimated from transformed and untransformed data. Further analyses were conducted with a model that included sire by sex and dam-within-sire by sex interactions, and the variance estimates were used to calculate genetic correlations between the sexes and genetic regression parameters. Heritability estimates from transformed and untransformed data were similar, indicating that sex differences were present in the genetic parameters but that scaling effects were not an important factor. Genetic correlation estimates from paternal (PHS) and maternal (MHS) half-sib analyses were close to unity for BW (1.14, PHS; 1.09, MHS), shank width (.99, PHS; .93, MHS), breast muscle weight (1.23, PHS; 1.04, MHS), and shank length (1.09, PHS; .97, MHS). However, estimates for abdominal fat (.79, PHS; .59, MHS), total drumstick muscle weight (.75, PHS; 1.14, MHS), rough cleaned shank weight (.78, PHS; not estimable, MHS), and shank bone density (1.00, PHS; .53, MHS) were somewhat lower. The estimates suggest that measurement of these latter "traits" at the same age in the two sexes may, in fact, be measuring different genetic effects and that selection procedures in turkeys need to take these correlations into account in order to make optimum progress. The genetic regression parameters indicated that more intense selection in the sex with the smaller genetic variation could be practiced to make greater gains in the opposite sex.

  20. Assessment of the thermodynamic properties of poly(2,2,2-trifluoroethyl methacrylate) by inverse gas chromatography.

    PubMed

    Papadopoulou, Stella K; Panayiotou, Costas

    2014-01-10

    The thermodynamic properties of poly(2,2,2-trifluoroethyl methacrylate) (PTFEMA) were determined with the aid of the inverse gas chromatography (IGC) technique at infinite dilution. The interactions between the polymer and 15 solvents were examined in the temperature range of 120-150 °C via estimation of the thermodynamic sorption parameters, the parameters of mixing at infinite dilution, the weight-fraction activity coefficients, and the Flory-Huggins interaction parameters. Additionally, the total and partial solubility parameters of PTFEMA were estimated. The findings of this work indicate that the type and strength of the intermolecular interactions between the polymer and the solvents depend strongly on the functional groups of both. The proton-acceptor character of the polymer is responsible for the preferential solubility of PTFEMA in chloroform, which acts as a proton-donor solvent. The results also reveal that the polymer is insoluble in alkanes and alcohols, whereas it presents good miscibility with polar solvents, especially 2-butanone, 2-pentanone, and 1,4-dioxane. Furthermore, the total and dispersive solubility parameters decrease with rising temperature, whereas the opposite behavior is observed for the polar and hydrogen-bonding solubility parameters. The latter increase with temperature, probably because of conformational changes of the polymer on the solid support. Finally, comparison of the solubilization profiles of fluorinated methacrylic polymers studied by IGC leads to the conclusion that PTFEMA is more soluble than polymers with higher fluorine content. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. A microwave method for measuring moisture content, density, and grain angle of wood

    Treesearch

    W. L. James; Y.-H. Yen; R. J. King

    1985-01-01

    The attenuation, phase shift and depolarization of a polarized 4.81-gigahertz wave as it is transmitted through a wood specimen can provide estimates of the moisture content (MC), density, and grain angle of the specimen. Calibrations are empirical, and computations are complicated, with considerable interaction between parameters. Measured dielectric parameters,...

  2. Characterization of classical static noise via qubit as probe

    NASA Astrophysics Data System (ADS)

    Javed, Muhammad; Khan, Salman; Ullah, Sayed Arif

    2018-03-01

    The dynamics of the quantum Fisher information (QFI) of a single qubit coupled to classical static noise is investigated. The analytical relation for the QFI fixes the optimal initial state of the qubit that maximizes it. An approximate limit for the coupling time that leads to physically useful results is identified. Moreover, using the approach of quantum estimation theory and the analytical relation for the QFI, the qubit is used as a probe to precisely estimate the disorder parameter of the environment. A relation for the optimal interaction time with the environment is obtained, and a condition for the optimal measurement of the noise parameter of the environment is given. It is shown that all values of the noise parameter in the mentioned range are estimable with equal precision. A comparison of our results with previous studies in different classical environments is made.
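    For a pure-state family, the QFI underlying such probe analyses has the closed form F = 4(⟨∂ψ|∂ψ⟩ − |⟨ψ|∂ψ⟩|²). A minimal numerical check on a generic single-qubit family (finite-difference derivative; not this paper's noise model), for which F = 1 exactly:

```python
import numpy as np

def qfi_pure(psi, dpsi):
    """QFI of a pure-state family: F = 4 * (<dpsi|dpsi> - |<psi|dpsi>|^2)."""
    return 4.0 * (np.vdot(dpsi, dpsi).real - abs(np.vdot(psi, dpsi)) ** 2)

def state(theta):
    """|psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

theta, eps = 0.7, 1e-6
dpsi = (state(theta + eps) - state(theta - eps)) / (2 * eps)   # central difference
F = qfi_pure(state(theta), dpsi)
```

    The quantum Cramér-Rao bound then reads Var(θ̂) ≥ 1/(M·F) for M independent probe repetitions, which is how QFI translates into estimation precision.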

  3. Breathing dynamics based parameter sensitivity analysis of hetero-polymeric DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talukder, Srijeeta; Sen, Shrabani; Chaudhury, Pinaki, E-mail: pinakc@rediffmail.com

    We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence-dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the 14 model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen-bond interaction ε_hb(AT) for an AT base pair and the ring factor ξ turn out to be the most sensitive parameters. In addition, the stacking interaction ε_st(TA-TA) for a TA-TA nearest-neighbor pair of base pairs is found to be the most sensitive among all stacking interactions. Moreover, we also establish that the nature of a stacking interaction has a deciding effect on the DNA breathing dynamics, not the number of times a particular stacking interaction appears in a sequence. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to the case where the rate constants are measured using the conventional unbiased way of optimization.

  4. Bayesian characterization of uncertainty in species interaction strengths.

    PubMed

    Wolf, Christopher; Novak, Mark; Gitelman, Alix I

    2017-06-01

    Considerable effort has been devoted to the estimation of species interaction strengths. This effort has focused primarily on statistical significance testing and obtaining point estimates of parameters that contribute to interaction strength magnitudes, leaving the characterization of uncertainty associated with those estimates unconsidered. We consider a means of characterizing the uncertainty of a generalist predator's interaction strengths by formulating an observational method for estimating a predator's prey-specific per capita attack rates as a Bayesian statistical model. This formulation permits the explicit incorporation of multiple sources of uncertainty. A key insight is the informative nature of several so-called non-informative priors that have been used in modeling the sparse data typical of predator feeding surveys. We introduce to ecology a new neutral prior and provide evidence for its superior performance. We use a case study to consider the attack rates in a New Zealand intertidal whelk predator, and we illustrate not only that Bayesian point estimates can be made to correspond with those obtained by frequentist approaches, but also that estimation uncertainty as described by 95% intervals is more useful and biologically realistic using the Bayesian method. In particular, unlike in bootstrap confidence intervals, the lower bounds of the Bayesian posterior intervals for attack rates do not include zero when a predator-prey interaction is in fact observed. We conclude that the Bayesian framework provides a straightforward, probabilistic characterization of interaction strength uncertainty, enabling future considerations of both the deterministic and stochastic drivers of interaction strength and their impact on food webs.

  5. Nonlinear Directed Interactions Between HRV and EEG Activity in Children With TLE.

    PubMed

    Schiecke, Karin; Pester, Britta; Piper, Diana; Benninger, Franz; Feucht, Martha; Leistritz, Lutz; Witte, Herbert

    2016-12-01

    Epileptic seizure activity influences the autonomic nervous system (ANS) in different ways. Heart rate variability (HRV) is used as an indicator of alterations of the ANS. It has been shown that linear, nondirected interactions between HRV and EEG activity occur before, during, and after epileptic seizures. Accordingly, investigation of directed nonlinear interactions is a logical step to provide, e.g., deeper insight into the development of seizure onsets. Convergent cross mapping (CCM) investigates nonlinear, directed interactions between time series by using nonlinear state-space reconstruction. CCM is applied to simulated and clinically relevant data, i.e., interactions between HRV and specific EEG components of children with temporal lobe epilepsy (TLE). In addition, time-variant multivariate autoregressive (AR) model-based estimation of partial directed coherence (PDC) was performed for the same data. The influence of estimation parameters and the time-varying behavior of the CCM estimate could be demonstrated by means of simulated data. AR-based estimation of PDC failed for our clinical data. Time-varying, interval-based application of CCM to these data revealed directed interactions between HRV and delta-related EEG activity. Interactions between HRV and alpha-related EEG activity were visible but less pronounced. The EEG components mainly drive HRV. The interaction pattern and directionality clearly change with the onset of seizure. Statistically relevant interactions were quantified by a bootstrapping and surrogate-data approach. In contrast to AR-based estimation of PDC, CCM was able to reveal time courses and frequency-selective views of nonlinear interactions, furthering the understanding of the complex interactions between the epileptic network and the ANS in children with TLE.

  6. User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A user's manual for the FORTRAN IV computer program MMLE3 is presented. MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.

  7. Estimation of Filling and Afterload Conditions by Pump Intrinsic Parameters in a Pulsatile Total Artificial Heart.

    PubMed

    Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich

    2016-07-01

    A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key factor for that is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method to determine estimation models for predicting hemodynamic parameters (pump chamber filling and afterload) for both the left and right circulations. The estimation models are based on linear regression models that correlate filling and afterload values with pump-intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values; predictions for systemic afterload (AoP_mean, AoP_sys) and mean pulmonary afterload (PAP_mean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAP_sys) show an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable for estimating hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and an essential step in the development of a physiological control algorithm for a fully implantable TAH. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  8. Effects of photosynthetic photon flux density, frequency, duty ratio, and their interactions on net photosynthetic rate of cos lettuce leaves under pulsed light: explanation based on photosynthetic-intermediate pool dynamics.

    PubMed

    Jishi, Tomohiro; Matsuda, Ryo; Fujiwara, Kazuhiro

    2018-06-01

    Square-wave pulsed light is characterized by three parameters: average photosynthetic photon flux density (PPFD), pulsed-light frequency, and duty ratio (the ratio of light-period duration to that of the light-dark cycle). In addition, the light-period PPFD is determined by the averaged PPFD and the duty ratio. We investigated the effects of these parameters and their interactions on the net photosynthetic rate (P_n) of cos lettuce leaves for every combination of parameters. Averaged PPFD values were 0-500 µmol m^-2 s^-1. Frequency values were 0.1-1000 Hz. White LED arrays were used as the light source. Every parameter affected P_n, and interactions between parameters were observed for all combinations. P_n under pulsed light was lower than that measured under continuous light of the same averaged PPFD, and this difference was enhanced with decreasing frequency and increasing light-period PPFD. A mechanistic model was constructed to estimate the amount of stored photosynthetic intermediates over time under pulsed light. The results indicated that all effects of the parameters and their interactions on P_n were explainable by the dynamics of accumulation and consumption of photosynthetic intermediates.
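    The pool picture can be caricatured in a few lines: an intermediate pool fills during the light period, saturating at some capacity, and is consumed continuously, so low-frequency pulses waste light once the pool saturates while high-frequency pulses approach continuous light. The model form and all parameter values below are illustrative, not those fitted in the paper.

```python
import numpy as np

def mean_pn(ppfd_avg, freq, duty, a=0.01, k=50.0, pmax=0.1, T=1.0, dt=1e-5):
    """Time-averaged assimilation proxy k*P under square-wave light.
    The pool P fills at rate a*L*(1 - P/pmax) during the light period and is
    consumed at rate k*P throughout; forward-Euler integration over T seconds."""
    ppfd_on = ppfd_avg / duty          # light-period PPFD from average PPFD and duty
    period = 1.0 / freq
    P, acc, n, t = 0.0, 0.0, 0, 0.0
    while t < T:
        L = ppfd_on if (t % period) < duty * period else 0.0
        P += dt * (a * L * (1.0 - P / pmax) - k * P)
        acc += k * P
        n += 1
        t += dt
    return acc / n

low = mean_pn(300.0, 1.0, 0.5)      # 1 Hz pulses: pool saturates, light is wasted
high = mean_pn(300.0, 500.0, 0.5)   # 500 Hz pulses approach continuous behavior
cont = mean_pn(300.0, 1.0, 1.0)     # duty ratio 1 is continuous light
```

    With these toy constants the 1 Hz average falls well below the continuous-light value while the 500 Hz average nearly matches it, reproducing the qualitative frequency dependence reported above.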

  9. Estimation of the solubility parameters of model plant surfaces and agrochemicals: a valuable tool for understanding plant surface interactions

    PubMed Central

    2012-01-01

    Background Most aerial plant parts are covered with a hydrophobic lipid-rich cuticle, which is the interface between the plant organs and the surrounding environment. Plant surfaces may have a high degree of hydrophobicity because of the combined effects of surface chemistry and roughness. The physical and chemical complexity of the plant cuticle limits the development of models that explain its internal structure and interactions with surface-applied agrochemicals. In this article we introduce a thermodynamic method for estimating the solubilities of model plant surface constituents and relating them to the effects of agrochemicals. Results Following the van Krevelen and Hoftyzer method, we calculated the solubility parameters of three model plant species and eight compounds that differ in hydrophobicity and polarity. In addition, intact tissues were examined by scanning electron microscopy and the surface free energy, polarity, solubility parameter and work of adhesion of each were calculated from contact angle measurements of three liquids with different polarities. By comparing the affinities between plant surface constituents and agrochemicals derived from (a) theoretical calculations and (b) contact angle measurements we were able to distinguish the physical effect of surface roughness from the effect of the chemical nature of the epicuticular waxes. A solubility parameter model for plant surfaces is proposed on the basis of an increasing gradient from the cuticular surface towards the underlying cell wall. Conclusions The procedure enabled us to predict the interactions among agrochemicals, plant surfaces, and cuticular and cell wall components, and promises to be a useful tool for improving our understanding of biological surface interactions. PMID:23151272
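    The Hoftyzer-van Krevelen calculation referred to above sums group contributions: δ_d = ΣF_d/V, δ_p = (ΣF_p²)^(1/2)/V, δ_h = (ΣE_h/V)^(1/2). A sketch with a three-group table (values quoted from standard tables, but treat them as illustrative) roughly reproduces ethanol's Hansen parameters:

```python
import numpy as np

# Hoftyzer-van Krevelen group contributions (illustrative values;
# F_d, F_p in MPa^0.5 cm^3/mol, E_h in J/mol)
GROUPS = {
    "CH3": (420.0, 0.0, 0.0),
    "CH2": (270.0, 0.0, 0.0),
    "OH":  (210.0, 500.0, 20000.0),
}

def hansen_parameters(formula, V):
    """formula: dict group -> count; V: molar volume in cm^3/mol.
    Returns (delta_d, delta_p, delta_h) in MPa^0.5."""
    Fd = sum(GROUPS[g][0] * n for g, n in formula.items())
    Fp2 = sum((GROUPS[g][1] ** 2) * n for g, n in formula.items())
    Eh = sum(GROUPS[g][2] * n for g, n in formula.items())
    return Fd / V, np.sqrt(Fp2) / V, np.sqrt(Eh / V)

# Ethanol: CH3-CH2-OH, molar volume ~58.5 cm^3/mol
dd, dp, dh = hansen_parameters({"CH3": 1, "CH2": 1, "OH": 1}, 58.5)
```

    The computed values land within roughly 1 MPa^0.5 of tabulated Hansen parameters for ethanol (about 15.8, 8.8, 19.4), which is typical of the accuracy of group-contribution estimates.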

  10. Estimation of the solubility parameters of model plant surfaces and agrochemicals: a valuable tool for understanding plant surface interactions.

    PubMed

    Khayet, Mohamed; Fernández, Victoria

    2012-11-14

    Most aerial plant parts are covered with a hydrophobic lipid-rich cuticle, which is the interface between the plant organs and the surrounding environment. Plant surfaces may have a high degree of hydrophobicity because of the combined effects of surface chemistry and roughness. The physical and chemical complexity of the plant cuticle limits the development of models that explain its internal structure and interactions with surface-applied agrochemicals. In this article we introduce a thermodynamic method for estimating the solubilities of model plant surface constituents and relating them to the effects of agrochemicals. Following the van Krevelen and Hoftyzer method, we calculated the solubility parameters of three model plant species and eight compounds that differ in hydrophobicity and polarity. In addition, intact tissues were examined by scanning electron microscopy and the surface free energy, polarity, solubility parameter and work of adhesion of each were calculated from contact angle measurements of three liquids with different polarities. By comparing the affinities between plant surface constituents and agrochemicals derived from (a) theoretical calculations and (b) contact angle measurements we were able to distinguish the physical effect of surface roughness from the effect of the chemical nature of the epicuticular waxes. A solubility parameter model for plant surfaces is proposed on the basis of an increasing gradient from the cuticular surface towards the underlying cell wall. The procedure enabled us to predict the interactions among agrochemicals, plant surfaces, and cuticular and cell wall components, and promises to be a useful tool for improving our understanding of biological surface interactions.

  11. Inverse sequential procedures for the monitoring of time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy

    1993-01-01

Climate changes traditionally have been detected from long series of observations and long after they happened. The 'inverse sequential' monitoring procedure is designed to detect changes as soon as they occur. Frequency distribution parameters are estimated both from the most recent existing set of observations and from the same set augmented by 1, 2, ..., j new observations. Individual-value probability products ('likelihoods') are then calculated which yield probabilities for erroneously accepting the existing parameter(s) as valid for the augmented data set and vice versa. A parameter change is signaled when these probabilities (or a more convenient and robust compound 'no change' probability) show a progressive decrease. New parameters are then estimated from the new observations alone to restart the procedure. The detailed algebra is developed and tested for Gaussian means and variances, Poisson and chi-square means, and linear or exponential trends; a comprehensive and interactive Fortran program is provided in the appendix.
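A minimal numerical sketch of the 'no change' probability for the Gaussian-mean case (the report also covers variances, Poisson and chi-square means, and trends); the data and the simplification of holding the variance at its baseline estimate are our own:

```python
import math

def no_change_probability(baseline, new_obs):
    """Compound 'no change' probability for a Gaussian mean (illustrative sketch).

    Likelihood ratio of the new observations under the existing parameters
    versus under a mean refit to the augmented sample (variance held at the
    baseline estimate for simplicity)."""
    n0 = len(baseline)
    mu0 = sum(baseline) / n0
    var0 = sum((x - mu0) ** 2 for x in baseline) / (n0 - 1)

    # mean refit to the augmented (baseline + new) sample
    mu1 = (sum(baseline) + sum(new_obs)) / (n0 + len(new_obs))

    def loglik(data, mu, var):
        return sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
                   for x in data)

    lr = loglik(new_obs, mu0, var0) - loglik(new_obs, mu1, var0)
    return math.exp(lr)  # values well below 1 are evidence against 'no change'

base = [0.1, -0.2, 0.05, 0.3, -0.15, 0.0, 0.2, -0.1]
p_same = no_change_probability(base, [0.0, 0.1])    # near 1: no change signaled
p_shift = no_change_probability(base, [2.0, 2.1])   # near 0: change signaled
```

A progressive decrease of this probability over successive augmentations signals a parameter change, after which estimation restarts from the new observations alone.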

  12. Optimal structure and parameter learning of Ising models

    DOE PAGES

    Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant; ...

    2018-03-16

Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community has recently shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. The efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.
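The interaction-screening idea can be illustrated on a toy three-spin chain. This population-limit sketch (exact distribution in place of samples, plain gradient descent on the screening objective for one node) is our own illustration, not the authors' implementation:

```python
import itertools
import math

def ising_distribution(J):
    """Exact Boltzmann distribution over 3 spins; J maps (i, j) -> coupling."""
    configs = list(itertools.product([-1, 1], repeat=3))
    weights = [math.exp(sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))
               for s in configs]
    Z = sum(weights)
    return configs, [w / Z for w in weights]

def screen_node(configs, probs, node, neigh, steps=3000, lr=0.2):
    """Minimize the interaction-screening objective E[exp(-s_i * H_i(J))]
    for one node by gradient descent; convex, so plain descent suffices."""
    J = [0.0] * len(neigh)
    for _ in range(steps):
        grad = [0.0] * len(neigh)
        for s, p in zip(configs, probs):
            h = sum(Jk * s[j] for Jk, j in zip(J, neigh))
            w = math.exp(-s[node] * h)
            for k, j in enumerate(neigh):
                grad[k] += p * (-s[node] * s[j]) * w
        J = [Jk - lr * g for Jk, g in zip(J, grad)]
    return J

true_J = {(0, 1): 0.5, (1, 2): -0.3}   # chain; no direct 0-2 coupling
configs, probs = ising_distribution(true_J)
J_est = screen_node(configs, probs, node=0, neigh=[1, 2])
```

In the population limit the minimizer of the screening objective coincides with the true couplings, so the estimate recovers the 0-1 coupling and correctly screens out the absent 0-2 edge.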

  13. Optimal structure and parameter learning of Ising models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant

Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community has recently shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. The efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.

  14. MC3: Multi-core Markov-chain Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan

    2016-10-01

MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share a single value among multiple parameters or fix parameters to constant values, and it offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
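The Gelman-Rubin convergence test mentioned above reduces to comparing between-chain and within-chain variance; a textbook-style sketch (not MC3's actual code), with synthetic chains:

```python
def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor (R-hat) for 1-D chains.
    Values near 1 indicate the chains have mixed; large values indicate
    non-convergence. Simplified single-parameter sketch."""
    m = len(chains)
    n = len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)   # between-chain
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m               # within-chain
    var_hat = (n - 1) / n * W + B / n
    return (var_hat / W) ** 0.5

# two nearly identical chains vs. one shifted far away (synthetic data)
steady = [[0.01 * i for i in range(100)],
          [0.01 * i + 0.001 for i in range(100)]]
diverged = [steady[0], [x + 5.0 for x in steady[0]]]
```

In practice one declares convergence when R-hat falls below a threshold such as 1.01-1.1 for every free parameter.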

  15. Electronic polarizability and interaction parameter of gadolinium tungsten borate glasses with high WO{sub 3} content

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taki, Yukina; Shinozaki, Kenji; Honma, Tsuyoshi

    2014-12-15

Glasses with the compositions of 25Gd{sub 2}O{sub 3}–xWO{sub 3}–(75−x)B{sub 2}O{sub 3} with x=25–65 were prepared by using a conventional melt quenching method, and their electronic polarizabilities, optical basicities Λ(n{sub o}), and interaction parameters A(n{sub o}) were estimated from density and refractive index measurements in order to clarify the features of electronic polarizability and bonding states in the glasses with high WO{sub 3} contents. The optical basicity of the glasses increases monotonously with the substitution of WO{sub 3} for B{sub 2}O{sub 3}, whereas the interaction parameter decreases monotonously with increasing WO{sub 3} content. A good linear correlation was observed between Λ(n{sub o}) and A(n{sub o}) and between the glass transition temperature and A(n{sub o}). It was proposed that Gd{sub 2}O{sub 3} belongs to the category of basic oxides, with a value of A(n{sub o})=0.044 Å{sup −3} similar to that of WO{sub 3}. The relationship between glass formation and electronic polarizability in the glasses was discussed, and it was proposed that the glasses with high WO{sub 3} and Gd{sub 2}O{sub 3} contents would be a floppy network system consisting mainly of basic oxides. - Graphical abstract: This figure shows the correlation between the optical basicity and interaction parameter in borate-based glasses. The data obtained in the present study for Gd{sub 2}O{sub 3}–WO{sub 3}–B{sub 2}O{sub 3} glasses lie on the correlation line for other borate glasses. These results, shown in Fig. 8, clearly demonstrate that Gd{sub 2}O{sub 3}–WO{sub 3}–B{sub 2}O{sub 3} glasses, having a wide range of optical basicity and interaction parameter, are regarded as glasses consisting of acidic and basic oxides. - Highlights: • Gd{sub 2}O{sub 3}–WO{sub 3}–B{sub 2}O{sub 3} glasses with high WO{sub 3} contents were prepared. • Electronic polarizability and interaction parameter were estimated. 
• Optical basicity increases monotonously with increasing WO{sub 3} content. • Interaction parameter decreases monotonously with increasing WO{sub 3} content. • Glasses with high WO{sub 3} contents are regarded as a floppy network system.
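The density/refractive-index route to electronic polarizability rests on the Lorentz-Lorenz equation, and optical basicity is commonly related to oxide-ion polarizability through the Duffy relation. A sketch with illustrative numbers, not the paper's data:

```python
import math

AVOGADRO = 6.02214076e23

def molar_polarizability_A3(n, molar_volume_cm3):
    """Lorentz-Lorenz: mean electronic polarizability (angstrom^3 per formula
    unit) from refractive index n and molar volume (cm^3/mol)."""
    lhs = (n ** 2 - 1) / (n ** 2 + 2)
    alpha_cm3 = 3.0 * molar_volume_cm3 * lhs / (4.0 * math.pi * AVOGADRO)
    return alpha_cm3 * 1e24  # cm^3 -> angstrom^3

def optical_basicity(alpha_oxide_A3):
    """Duffy relation linking optical basicity to oxide-ion polarizability:
    Lambda = 1.67 * (1 - 1/alpha_O2-)."""
    return 1.67 * (1.0 - 1.0 / alpha_oxide_A3)

# illustrative inputs: n = 1.5, molar volume 30 cm^3/mol
alpha_A3 = molar_polarizability_A3(1.5, 30.0)
lam = optical_basicity(2.0)   # for an assumed oxide-ion polarizability of 2 A^3
```

The interaction parameter A(n_o) is derived from the same polarizability data along a separate route, which is why the paper can correlate it linearly with Λ(n_o).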

  16. Estimating Likelihood of Fetal In Vivo Interactions Using In ...

    EPA Pesticide Factsheets

Tox21/ToxCast efforts provide in vitro concentration-response data for thousands of compounds. Predicting whether chemical-biological interactions observed in vitro will occur in vivo is challenging. We hypothesize that using a modified model from the FDA guidance for drug interaction studies, Cmax/AC50 (i.e., maximal in vivo blood concentration over the half-maximal in vitro activity concentration), will give a useful approximation for concentrations where in vivo interactions are likely. Further, for doses where maternal blood concentrations are likely to elicit an interaction (Cmax/AC50>0.1), where do the compounds accumulate in fetal tissues? In order to estimate these doses based on Tox21 data, in silico parameters of chemical fraction unbound in plasma and intrinsic hepatic clearance were estimated from ADMET Predictor (Simulations Plus Inc.) and used in the HTTK R-package to obtain Cmax values from a physiologically-based toxicokinetics model. In silico estimated Cmax values predicted in vivo human Cmax with median absolute error of 0.81 for 93 chemicals, giving confidence in the R-package and in silico estimates. A case example evaluating Cmax/AC50 values for peroxisome proliferator-activated receptor gamma (PPARγ) and glucocorticoid receptor revealed known compounds (glitazones and corticosteroids, respectively) highest on the list at pharmacological doses. Doses required to elicit likely interactions across all Tox21/ToxCast assays were compared to
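The screening heuristic above amounts to a simple ratio test; a sketch with hypothetical chemical names and concentrations (the 0.1 threshold is the one stated in the abstract):

```python
def rank_by_cmax_ac50(records, threshold=0.1):
    """Rank chemicals by the Cmax/AC50 heuristic; ratios above the threshold
    flag a likely in vivo interaction. records: (name, cmax_uM, ac50_uM)
    tuples with hypothetical values."""
    ranked = sorted(((name, cmax / ac50) for name, cmax, ac50 in records),
                    key=lambda t: t[1], reverse=True)
    return [(name, ratio, ratio > threshold) for name, ratio in ranked]

# hypothetical illustration values (micromolar)
hits = rank_by_cmax_ac50([("chem_A", 2.0, 10.0),    # ratio 0.2 -> flagged
                          ("chem_B", 0.05, 10.0)])  # ratio 0.005 -> not flagged
```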

  17. Study of a few cluster candidates in the Magellanic Bridge

    NASA Astrophysics Data System (ADS)

Choudhury, Samyaday; Subramaniam, Annapurni; Sohn, Young-Jong

    2018-06-01

The Magellanic Clouds (LMC & SMC) are gas-rich, metal-poor dwarf satellite galaxies of our Milky Way that are interacting with each other. The Magellanic Bridge (MB), joining the larger and smaller Clouds, is considered to be a signature of this interaction process. Studies have revealed that the MB, apart from gas, also hosts stellar populations and star clusters. The census of clusters with well-estimated parameters within the MB is still underway. In this work, we study a sample of 9 previously cataloged star clusters in the MB region. We use Washington C, Harris R and Cousins I band data from the literature, taken with the 4-m Blanco telescope, to estimate the cluster properties (size, age, reddening). We also identify and separate genuine cluster candidates from possible clusters/asterisms. Increasing the number of genuine cluster candidates with well-estimated parameters is important in the context of understanding cluster formation and evolution in such a low-metallicity, tidally disrupted environment. The clusters studied here can also help estimate distances to different parts of the MB, as recent studies indicate that portions of the MB near the SMC are closer to us than the LMC.

  18. Epistasis interaction of QTL effects as a genetic parameter influencing estimation of the genetic additive effect.

    PubMed

    Bocianowski, Jan

    2013-03-01

    Epistasis, an additive-by-additive interaction between quantitative trait loci, has been defined as a deviation from the sum of independent effects of individual genes. Epistasis between QTLs assayed in populations segregating for an entire genome has been found at a frequency close to that expected by chance alone. Recently, epistatic effects have been considered by many researchers as important for complex traits. In order to understand the genetic control of complex traits, it is necessary to clarify additive-by-additive interactions among genes. Herein we compare estimates of a parameter connected with the additive gene action calculated on the basis of two models: a model excluding epistasis and a model with additive-by-additive interaction effects. In this paper two data sets were analysed: 1) 150 barley doubled haploid lines derived from the Steptoe × Morex cross, and 2) 145 DH lines of barley obtained from the Harrington × TR306 cross. The results showed that in cases when the effect of epistasis was different from zero, the coefficient of determination was larger for the model with epistasis than for the one excluding epistasis. These results indicate that epistatic interaction plays an important role in controlling the expression of complex traits.

  19. Semiparametric Bayesian analysis of gene-environment interactions with error in measurement of environmental covariates and missing genetic data.

    PubMed

    Lobach, Iryna; Mallick, Bani; Carroll, Raymond J

    2011-01-01

Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of the data and leading to loss of power and spurious/masked associations. We develop a Bayesian methodology for analysis of case-control studies for the case when measurement error is present in an environmental covariate and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure under investigation is that the analysis is based on a pseudo-likelihood function, therefore conventional Bayesian techniques may not be technically correct. We propose an approach using Markov Chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produced parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma development.

  20. An interacting O + O supergiant close binary system: Cygnus OB2-5 (V729 Cyg)

    NASA Astrophysics Data System (ADS)

    Yaşarsoy, B.; Yakut, K.

    2014-08-01

The massive interacting close binary system V729 Cyg (OIa + O/WN9), plausibly progenitor of a Wolf-Rayet system, is studied using new observations gathered over 65 nights and earlier published data. Radial velocity and five color light curves are analysed simultaneously. Estimated physical parameters of the components are M1=36±3 M⊙, M2=10±1 M⊙, R1=27±1 R⊙, R2=15±0.6 R⊙, log(L1/L⊙)=5.59±0.06, and log(L2/L⊙)=4.65±0.07. We give only the formal 1σ scatter, but we believe systematic errors in the luminosities, of uncertain origin as discussed in the text, are likely to be much bigger. The distance of the Cygnus OB2 association is estimated as 967±48 pc by using our newly obtained parameters.

  1. Critical behavior near the ferromagnetic phase transition in double perovskite Nd2NiMnO6

    NASA Astrophysics Data System (ADS)

    Ali, Anzar; Sharma, G.; Singh, Yogesh

    2018-05-01

The knowledge of critical exponents plays a crucial role in trying to understand the interaction mechanism near a phase transition. In this report, we present a detailed study of the critical behaviour near the ferromagnetic (FM) transition (TC ≈ 193 K) in Nd2NiMnO6 using temperature- and magnetic-field-dependent isothermal magnetisation measurements. We used various analysis methods such as the Arrott plot, modified Arrott plot, and Kouvel-Fisher plot to estimate the critical parameters. The magnetic critical parameters β = 0.49±0.02, γ = 1.05±0.04 and the critical isothermal parameter δ = 3.05±0.02 are in excellent agreement with Widom scaling. The critical-parameter analysis emphasizes that mean-field interaction is the mechanism driving the FM transition in Nd2NiMnO6.
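The Widom scaling consistency check behind the statement above is δ = 1 + γ/β; with the reported exponents this predicts δ ≈ 3.14, compatible with the measured 3.05±0.02. The first-order error propagation is our addition:

```python
import math

def widom_delta(beta, gamma):
    """Widom scaling relation: delta = 1 + gamma/beta."""
    return 1.0 + gamma / beta

def widom_delta_sigma(beta, dbeta, gamma, dgamma):
    """First-order error propagation for the predicted delta (our addition)."""
    return math.hypot(dgamma / beta, gamma * dbeta / beta ** 2)

# exponents reported above for Nd2NiMnO6
d_pred = widom_delta(0.49, 1.05)                    # ~3.14 vs measured 3.05
sigma = widom_delta_sigma(0.49, 0.02, 1.05, 0.04)   # ~0.12
```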

  2. Interactions of solutes and streambed sediment: 2. A dynamic analysis of coupled hydrologic and chemical processes that determine solute transport

    USGS Publications Warehouse

    Bencala, Kenneth E.

    1984-01-01

Solute transport in streams is determined by the interaction of physical and chemical processes. Data from an injection experiment for chloride and several cations indicate significant influence of solute-streambed processes on transport in a mountain stream. These data are interpreted in terms of transient storage processes for all tracers and sorption processes for the cations. Process parameter values are estimated with simulations based on coupled quasi-two-dimensional transport and first-order mass transfer sorption. Comparative simulations demonstrate the relative roles of the physical and chemical processes in determining solute transport. During the first 24 hours of the experiment, chloride concentrations were attenuated relative to expected plateau levels. Additional attenuation occurred for the sorbing cation strontium. The simulations account for these storage processes. Parameter values determined by calibration compare favorably with estimates from other studies in mountain streams. Without further calibration, the transport of potassium and lithium is adequately simulated using parameters determined in the chloride-strontium simulation and with measured cation distribution coefficients.

  3. Evolutionary optimization with data collocation for reverse engineering of biological networks.

    PubMed

    Tsai, Kuan-Yao; Wang, Feng-Sheng

    2005-04-01

Modern experimental biology is moving away from analyses of single elements to whole-organism measurements. Such measured time-course data contain a wealth of information about the structure and dynamics of the pathway or network. The dynamic modeling of whole systems is formulated as an inverse problem that requires a well-suited mathematical model and a very efficient computational method to identify the model structure and parameters. Numerical integration of differential equations and finding globally optimal parameter values are still two major challenges in the parameter estimation of nonlinear dynamic biological systems. We compare three techniques of parameter estimation for nonlinear dynamic biological systems. In the proposed scheme, the modified collocation method is applied to convert the differential equations to a system of algebraic equations. The observed time-course data are then substituted into the algebraic system equations to decouple system interactions in order to obtain the approximate model profiles. Hybrid differential evolution (HDE) with a population size of five is able to find a global solution. The method is not only suited for parameter estimation but can also be applied to structure identification. The solution obtained by HDE is then used as the starting point for a local search method to yield the refined estimates.
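The decoupling trick, substituting the observed time course so that the ODEs become algebraic relations in the parameters, can be sketched on a one-parameter decay model. This toy example (dx/dt = -k x with noiseless data) is our own, not the paper's S-system models:

```python
import math

def estimate_decay_rate(times, obs):
    """Collocation-style decoupled estimate for dx/dt = -k x: approximate the
    derivative from the observed series by central differences, then solve the
    resulting algebraic relations for k by least squares."""
    slopes, xs = [], []
    for i in range(1, len(times) - 1):
        slopes.append((obs[i + 1] - obs[i - 1]) / (times[i + 1] - times[i - 1]))
        xs.append(obs[i])
    # minimize sum (slope + k*x)^2  ->  k = -sum(slope*x) / sum(x^2)
    return -sum(s * x for s, x in zip(slopes, xs)) / sum(x * x for x in xs)

# synthetic time course generated with k = 0.7
times = [0.1 * i for i in range(21)]
obs = [math.exp(-0.7 * t) for t in times]
k_hat = estimate_decay_rate(times, obs)
```

Because no ODE is integrated inside the fit, each such algebraic subproblem is cheap, which is what makes a global search such as hybrid differential evolution over the parameter vector affordable.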

  4. The structural identifiability and parameter estimation of a multispecies model for the transmission of mastitis in dairy cows with postmilking teat disinfection.

    PubMed

    White, L J; Evans, N D; Lam, T J G M; Schukken, Y H; Medley, G F; Godfrey, K R; Chappell, M J

    2002-01-01

    A mathematical model for the transmission of two interacting classes of mastitis causing bacterial pathogens in a herd of dairy cows is presented and applied to a specific data set. The data were derived from a field trial of a specific measure used in the control of these pathogens, where half the individuals were subjected to the control and in the others the treatment was discontinued. The resultant mathematical model (eight non-linear simultaneous ordinary differential equations) therefore incorporates heterogeneity in the host as well as the infectious agent and consequently the effects of control are intrinsic in the model structure. A structural identifiability analysis of the model is presented demonstrating that the scope of the novel method used allows application to high order non-linear systems. The results of a simultaneous estimation of six unknown system parameters are presented. Previous work has only estimated a subset of these either simultaneously or individually. Therefore not only are new estimates provided for the parameters relating to the transmission and control of the classes of pathogens under study, but also information about the relationships between them. We exploit the close link between mathematical modelling, structural identifiability analysis, and parameter estimation to obtain biological insights into the system modelled.

  5. Quasiparticle Interactions in Neutron Matter for Applications in Neutron Stars

    NASA Technical Reports Server (NTRS)

Wambach, J.; Ainsworth, T. L.; Pines, D.

    1993-01-01

A microscopic model for the quasiparticle interaction in neutron matter is presented. Both particle-particle (pp) and particle-hole (ph) correlations are included. The pp correlations are treated in a semi-empirical way, while ph correlations are incorporated by solving coupled two-body equations for the particle-hole interaction and the scattering amplitude on the Fermi sphere. The resulting integral equations self-consistently sum the ph reducible diagrams. Antisymmetry is kept at all stages and hence the forward-scattering sum rules are obeyed. Results for Landau parameters and transport coefficients in a density regime representing the crust of a neutron star are presented. We also estimate the ¹S₀ gap parameter for neutron superfluidity and comment briefly on neutron-star implications.

  6. Quasiparticle Interactions in Neutron Matter for Applications in Neutron Stars

    NASA Technical Reports Server (NTRS)

    Wambach, J; Ainsworth, T. L.; Pines, D.

    1993-01-01

A microscopic model for the quasiparticle interaction in neutron matter is presented. Both particle-particle (pp) and particle-hole (ph) correlations are included. The pp correlations are treated in a semi-empirical way, while ph correlations are incorporated by solving coupled two-body equations for the particle-hole interaction and the scattering amplitude on the Fermi sphere. The resulting integral equations self-consistently sum the ph reducible diagrams. Antisymmetry is kept at all stages and hence the forward-scattering sum rules for the scattering amplitude are obeyed. Results for Landau parameters and transport coefficients in a density regime representing the crust of a neutron star are presented. We also estimate the ¹S₀ gap parameter for neutron superfluidity and comment briefly on neutron-star implications.

  7. Relaxation limit of a compressible gas-liquid model with well-reservoir interaction

    NASA Astrophysics Data System (ADS)

    Solem, Susanne; Evje, Steinar

    2017-02-01

    This paper deals with the relaxation limit of a two-phase compressible gas-liquid model which contains a pressure-dependent well-reservoir interaction term of the form q (P_r - P) where q>0 is the rate of the pressure-dependent influx/efflux of gas, P is the (unknown) wellbore pressure, and P_r is the (known) surrounding reservoir pressure. The model can be used to study gas-kick flow scenarios relevant for various wellbore operations. One extreme case is when the wellbore pressure P is largely dictated by the surrounding reservoir pressure P_r. Formally, this model is obtained by deriving the limiting system as the relaxation parameter q in the full model tends to infinity. The main purpose of this work is to understand to what extent this case can be represented by a well-defined mathematical model for a fixed global time T>0. Well-posedness of the full model has been obtained in Evje (SIAM J Math Anal 45(2):518-546, 2013). However, as the estimates for the full model are dependent on the relaxation parameter q, new estimates must be obtained for the equilibrium model to ensure existence of solutions. By means of appropriate a priori assumptions and some restrictions on the model parameters, necessary estimates (low order and higher order) are obtained. These estimates that depend on the global time T together with smallness assumptions on the initial data are then used to obtain existence of solutions in suitable Sobolev spaces.

  8. Electrostatic interaction between stereocilia: I. Its role in supporting the structure of the hair bundle.

    PubMed

    Dolgobrodov, S G; Lukashkin, A N; Russell, I J

    2000-12-01

    This paper provides theoretical estimates for the forces of electrostatic interaction between adjacent stereocilia in auditory and vestibular hair cells. Estimates are given for parameters within the measured physiological range using constraints appropriate for the known geometry of the hair bundle. Stereocilia are assumed to possess an extended, negatively charged surface coat, the glycocalyx. Different charge distribution profiles within the glycocalyx are analysed. It is shown that charged glycocalices on the apical surface of the hair cells can support spatial separation between adjacent stereocilia in the hair bundles through electrostatic repulsion between stereocilia. The charge density profile within the glycocalyx is a crucial parameter. In fact, attraction instead of repulsion between adjacent stereocilia will be observed if the charge of the glycocalyx is concentrated near the membrane of the stereocilia, thereby making this type of charge distribution unlikely. The forces of electrostatic interaction between stereocilia may influence the mechanical properties of the hair bundle and, being strongly non-linear, contribute to the non-linear phenomena that have been recorded from the periphery of the auditory and vestibular systems.

  9. Farm water budgets for semiarid irrigated floodplains of northern New Mexico: characterizing the surface water-groundwater interactions

    NASA Astrophysics Data System (ADS)

    Gutierrez, K. Y.; Fernald, A.; Ochoa, C. G.; Guldan, S. J.

    2013-12-01

KEY WORDS - Hydrology, Water budget, Deep percolation, Surface water-Groundwater interactions. With the recent projections for water scarcity, water balances have become an indispensable water management tool. In irrigated floodplains, deep percolation from irrigation can represent one of the main aquifer recharge sources. A better understanding of surface water and groundwater interactions in irrigated valleys is needed for properly assessing the water balances in these systems and estimating potential aquifer recharge. We conducted a study to quantify the parameters and calculate the water budgets in three flood-irrigated hay fields with relatively low, intermediate, and high water availability in northern New Mexico. We monitored different hydrologic parameters including the total amount of water applied, change in soil moisture, drainage below the effective root zone, and shallow water level fluctuations in response to irrigation. Evapotranspiration was calculated from weather station data collected in situ using the Samani-Hargreaves method. Previous studies in the region have estimated deep percolation as a residual parameter of the water balance equation. In this study, we used both the water balance method and actual measurements of deep percolation using passive lysimeters. Preliminary analyses for the three fields show a relatively rapid movement of water through the upper 50 cm of the vadose zone and a quick response of the shallow aquifer under flood irrigation. Further results from this study will provide a better understanding of surface water-groundwater interactions in flood-irrigated valleys in northern New Mexico.

  10. A 3D interactive method for estimating body segmental parameters in animals: application to the turning and running performance of Tyrannosaurus rex.

    PubMed

    Hutchinson, John R; Ng-Thow-Hing, Victor; Anderson, Frank C

    2007-06-21

    We developed a method based on interactive B-spline solids for estimating and visualizing biomechanically important parameters for animal body segments. Although the method is most useful for assessing the importance of unknowns in extinct animals, such as body contours, muscle bulk, or inertial parameters, it is also useful for non-invasive measurement of segmental dimensions in extant animals. Points measured directly from bodies or skeletons are digitized and visualized on a computer, and then a B-spline solid is fitted to enclose these points, allowing quantification of segment dimensions. The method is computationally fast enough so that software implementations can interactively deform the shape of body segments (by warping the solid) or adjust the shape quantitatively (e.g., expanding the solid boundary by some percentage or a specific distance beyond measured skeletal coordinates). As the shape changes, the resulting changes in segment mass, center of mass (CM), and moments of inertia can be recomputed immediately. Volumes of reduced or increased density can be embedded to represent lungs, bones, or other structures within the body. The method was validated by reconstructing an ostrich body from a fleshed and defleshed carcass and comparing the estimated dimensions to empirically measured values from the original carcass. We then used the method to calculate the segmental masses, centers of mass, and moments of inertia for an adult Tyrannosaurus rex, with measurements taken directly from a complete skeleton. We compare these results to other estimates, using the model to compute the sensitivities of unknown parameter values based upon 30 different combinations of trunk, lung and air sac, and hindlimb dimensions. The conclusion that T. rex was not an exceptionally fast runner remains strongly supported by our models-the main area of ambiguity for estimating running ability seems to be estimating fascicle lengths, not body dimensions. 
Additionally, the craniad position of the CM in all of our models reinforces the notion that T. rex did not stand or move with extremely columnar, elephantine limbs. It required some flexion in the limbs to stand still, but how much flexion depends directly on where its CM is assumed to lie. Finally we used our model to test an unsolved problem in dinosaur biomechanics: how fast a huge biped like T. rex could turn. Depending on the assumptions, our whole-body model integrated with a musculoskeletal model estimates that turning 45 degrees on one leg could be achieved slowly, in about 1-2 s.
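The segmental mass, CM, and inertia computations described above can be illustrated with a crude voxel-integration stand-in for the B-spline solid, including an embedded low-density region such as a lung or air sac. The geometry and density values here are hypothetical:

```python
def mass_properties(n, density_fn):
    """Mass and center of mass of a unit-cube 'segment' by voxel integration,
    a crude stand-in for the B-spline solid integration described above.
    density_fn(x, y, z) returns local density; embedded low-density regions
    model lungs or air sacs."""
    dv = 1.0 / n ** 3
    mass = 0.0
    mx = my = mz = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x, y, z = (i + 0.5) / n, (j + 0.5) / n, (k + 0.5) / n
                m = density_fn(x, y, z) * dv
                mass += m
                mx += m * x
                my += m * y
                mz += m * z
    return mass, (mx / mass, my / mass, mz / mass)

# hypothetical densities: body tissue 1000, an embedded 'air sac' at 200
def density(x, y, z):
    return 200.0 if x > 0.7 else 1000.0

mass, cm = mass_properties(20, density)
```

As expected, the low-density region at large x pulls the center of mass toward the denser side, the same mechanism by which assumed lung and air-sac placement shifts the CM estimates in the paper's models.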

  11. Estimation of effective connectivity using multi-layer perceptron artificial neural network.

    PubMed

    Talebi, Nasibeh; Nasrabadi, Ali Motie; Mohammad-Rezazadeh, Iman

    2018-02-01

Studies on interactions between brain regions estimate effective connectivity, (usually) based on causality inferences made on the basis of temporal precedence. In this study, the causal relationship is modeled by a multi-layer perceptron feed-forward artificial neural network, because of the ANN's ability to generate appropriate input-output mappings and to learn from training examples without the need of detailed knowledge of the underlying system. At any time instant, the past samples of data are placed at the network input, and the subsequent values are predicted at its output. To estimate the strength of interactions, the measure of "Causality coefficient" is defined based on the network structure, the connecting weights and the parameters of the hidden-layer activation function. Simulation analysis demonstrates that the method, called "CREANN" (Causal Relationship Estimation by Artificial Neural Network), can estimate time-invariant and time-varying effective connectivity in terms of MVAR coefficients. The method shows robustness with respect to the noise level of the data. Furthermore, the estimations are not significantly influenced by the model order (considered time-lag) or by different initial conditions (initial random weights and parameters of the network). CREANN is also applied to EEG data collected during a memory recognition task. The results indicate that it can show changes in the information flow between brain regions involved in the episodic memory retrieval process. These convincing results emphasize that CREANN can be used as an appropriate method to estimate the causal relationship among brain signals.
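The temporal-precedence idea underlying CREANN can be illustrated with a linear Granger-style stand-in for the MLP (our simplification, not the authors' network): if adding the past of x substantially reduces the error of predicting y, an x → y influence is suggested. The synthetic data are ours:

```python
import random

def granger_style_gain(x, y):
    """Fractional reduction in one-step prediction error of y when the past of
    x is added to y's own past (linear stand-in for CREANN's MLP predictor)."""
    Y, y_lag, x_lag = y[1:], y[:-1], x[:-1]

    def sse(cols):
        # least squares via normal equations, 1 or 2 columns, no intercept
        if len(cols) == 1:
            a = cols[0]
            beta = sum(u * v for u, v in zip(a, Y)) / sum(u * u for u in a)
            resid = [v - beta * u for u, v in zip(a, Y)]
        else:
            a, b = cols
            saa = sum(u * u for u in a)
            sbb = sum(u * u for u in b)
            sab = sum(u * v for u, v in zip(a, b))
            say = sum(u * v for u, v in zip(a, Y))
            sby = sum(u * v for u, v in zip(b, Y))
            det = saa * sbb - sab * sab
            b1 = (say * sbb - sby * sab) / det
            b2 = (sby * saa - say * sab) / det
            resid = [v - b1 * u - b2 * w for u, w, v in zip(a, b, Y)]
        return sum(r * r for r in resid)

    return 1.0 - sse([y_lag, x_lag]) / sse([y_lag])

# synthetic series where x drives y with one-step lag
random.seed(7)
n = 301
x = [random.gauss(0.0, 1.0) for _ in range(n)]
e = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [0.0]
for t in range(1, n):
    y.append(0.9 * x[t - 1] + 0.1 * e[t])
gain_xy = granger_style_gain(x, y)   # large: x's past explains y
```

CREANN replaces this linear predictor with a trained MLP and derives its causality coefficient from the network weights, but the precedence logic is the same.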

  12. Interactions between Neurophysiology and Psychoacoustics: Meeting of the Acoustical Society of America (117th) Held in Syracuse, New York on 22 May 1989

    DTIC Science & Technology

    1989-06-01

the intensity for which performance equals the chosen value. We use the PEST (parameter estimation by sequential testing; Taylor and Creelman, 1967) ... forward masking in the auditory nerve." J. Acoust. Soc. Am. 84, 584-591. Taylor, M.M. and Creelman, C.D. (1967). "PEST: Efficient estimates on

  13. Short cell-penetrating peptides: a model of interactions with gene promoter sites.

    PubMed

    Khavinson, V Kh; Tarnovskaya, S I; Linkova, N S; Pronyaeva, V E; Shataeva, L K; Yakutseni, P P

    2013-01-01

Analysis of the main parameters of molecular mechanics (number of hydrogen bonds, hydrophobic and electrostatic interactions, DNA-peptide complex minimization energy) provided the data to validate the previously proposed qualitative models of peptide-DNA interactions and to evaluate their quantitative characteristics. Based on these estimations, a three-dimensional model of Lys-Glu and Ala-Glu-Asp-Gly peptide interactions with DNA sites (GCAG and ATTTC) located in the promoter zones of genes encoding CD5, IL-2, MMP2, and Tram1 signal molecules was constructed.

  14. Developing a methodology for the inverse estimation of root architectural parameters from field based sampling schemes

    NASA Astrophysics Data System (ADS)

    Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry

    2017-04-01

Root traits are increasingly important in breeding new crop varieties; longer and fewer lateral roots, for example, are suggested to improve the drought resistance of wheat. Detailed root architectural parameters are therefore important. However, classical field sampling of roots only provides aggregated information such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of recovering the root system architecture of plants from classical field-based root sampling schemes, using sensitivity analysis and inverse parameter estimation. The methodology was developed in a virtual experiment in which a root architectural model, parameterized for winter wheat, was used to simulate root system development in a field. This provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes (coring, trenching, and rhizotubes) were virtually applied and the corresponding aggregated information computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three sampling methods. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained from literature and published data on winter wheat root architecture. Root length density profiles from coring, arrival-curve characteristics observed in rhizotubes, and root counts in grids of the trench-profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and parameter sensitivity varies slightly with depth. Most parameters and their interactions with other parameters have a highly nonlinear effect on the model output. The most sensitive parameters will be subject to inverse estimation from the virtual field sampling data using the DREAMzs algorithm. The estimated parameters can then be compared with the ground truth to determine the suitability of the sampling schemes for identifying specific traits or parameters of the root growth model.
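
    The Morris one-at-a-time screening step described above can be sketched as follows; the three-parameter toy model, bounds, trajectory count, and step size are invented for illustration, not the wheat root model:

```python
import random

def morris_ee(f, lo, hi, r=20, delta=0.1, seed=1):
    """Mean and std of Morris elementary effects for each parameter."""
    rng = random.Random(seed)
    k = len(lo)
    ee = [[] for _ in range(k)]
    for _ in range(r):
        # random base point, kept clear of the upper bound so the step fits
        p = [rng.uniform(lo[i], hi[i] - delta*(hi[i]-lo[i])) for i in range(k)]
        base = f(p)
        for i in range(k):               # one-at-a-time perturbation
            q = list(p)
            step = delta*(hi[i]-lo[i])
            q[i] += step
            ee[i].append((f(q) - base) / step)
    means = [sum(e)/r for e in ee]
    stds = [(sum((v-m)**2 for v in e)/r)**0.5 for e, m in zip(ee, means)]
    return means, stds

# toy model: output linear in p0, with a p1*p2 interaction
f = lambda p: p[0] + p[1]*p[2]
means, stds = morris_ee(f, lo=[0, 0, 0], hi=[1, 1, 1])
print([round(m, 2) for m in means])      # p0's mean effect is exactly 1.0
```

    A large mean flags an influential parameter; a large standard deviation (here for p1 and p2, which interact) flags nonlinearity or interaction, which is exactly the signal discussed above.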

  15. Reactivity of fluoroalkanes in reactions of coordinated molecular decomposition

    NASA Astrophysics Data System (ADS)

    Pokidova, T. S.; Denisov, E. T.

    2017-08-01

Experimental results on the coordinated molecular decomposition of RF fluoroalkanes to olefin and HF are analyzed using the model of intersecting parabolas (IPM). The kinetic parameters are calculated to allow estimates of the activation energy (E) and rate constant (k) of these reactions, based on enthalpy and the IPM algorithms. Parameters E and k are found for the first time for eight RF decomposition reactions. The factors that affect the activation energy E of RF decomposition (the enthalpy of the reaction, the electronegativity of the atoms of the reaction centers, and the dipole-dipole interaction of polar groups) are determined. The values of E and k for the reverse addition reactions are also estimated.

  16. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    PubMed Central

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors shows that increasing the external or internal coefficient has a negative influence on the sampling level. The rate of change of the potential market has no significant influence on the sampling level, whereas the repeat-purchase rate has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, which supports a two-stage method to estimate the impact of the relevant parameters when parameter values are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  17. Skin friction and heat transfer correlations for high-speed low-density flow past a flat plate

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Baganoff, Donald

    1991-01-01

The independent and dependent variables associated with drag and heat transfer to a flat plate at zero incidence in high-speed, rarefied flow are analyzed anew to reflect the importance of kinetic effects occurring near the plate surface on energy and momentum transfer, rather than following arguments normally used to describe continuum, higher-density flowfields. A new parameter, the wall Knudsen number Kn_x,w, based on an estimate of the mean free path of molecules having just interacted with the surface of the plate, is introduced and used to correlate published drag and heat transfer data. The new parameter is shown to provide better correlation than either the viscous interaction parameter X or the widely used slip parameter V∞ for drag and heat transfer data over a wide range of Mach numbers, Reynolds numbers, and plate-to-freestream stagnation temperature ratios.

  18. Attaining insight into interactions between hydrologic model parameters and geophysical attributes for national-scale model parameter estimation

    NASA Astrophysics Data System (ADS)

    Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.

    2017-12-01

Estimating spatially distributed model parameters is a grand challenge for large-domain hydrologic modeling, especially in the context of applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) on model parameters, as well as nonlinear scaling effects. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input-data resolution and then scales them, using scaling functions, to the spatial resolution of the model implementation. One of the biggest challenges in the use of MPR is the identification of TFs for each model parameter: both the functional forms and the geophysical predictors. TFs used to estimate the parameters of hydrologic models typically rely on previous studies or were derived in an ad hoc, heuristic manner, potentially not utilizing the maximum information content of the geophysical attributes for optimal parameter identification. Thus, it is necessary to first uncover relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to obtain insight into which geophysical attributes are related to model parameters, and to what extent. We perform multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate the VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis with various hydrologic signatures as objectives, to understand the relationships among geophysical attributes, parameters, and hydrologic processes.
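
    MPR's two-step logic (a transfer function at data resolution, then a scaling function to the model grid) can be illustrated with a toy sketch; the transfer-function form, its coefficients, and the harmonic-mean upscaling choice are assumptions for illustration only, not taken from the study:

```python
# Minimal MPR-style sketch: a transfer function (TF) maps a fine-scale
# attribute (here, sand fraction) to a parameter at data resolution, then a
# scaling function aggregates to the model grid cell.

def transfer_fn(sand, a=10.0, b=40.0):
    """Hypothetical TF: a conductivity-like parameter grows with sand fraction.
    Both the linear form and the coefficients a, b are invented."""
    return a + b * sand

def harmonic_mean(vals):
    """One plausible scaling function for conductivity-like parameters."""
    return len(vals) / sum(1.0 / v for v in vals)

fine_sand = [0.2, 0.5, 0.8, 0.3]             # four fine cells in one model cell
fine_param = [transfer_fn(s) for s in fine_sand]
cell_param = harmonic_mean(fine_param)       # upscaled model parameter
print(round(cell_param, 2))                  # → 25.29
```

    Identifying which attribute to use as the TF predictor, and which functional form, is exactly the open question the multivariate analysis above addresses.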

  19. Estimate the effective connectivity in multi-coupled neural mass model using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Shan, Bonan; Wang, Jiang; Deng, Bin; Zhang, Zhen; Wei, Xile

    2017-03-01

Assessment of the effective connectivity among different brain regions during seizures is a crucial problem in neuroscience today. Consequently, a new model-inversion framework for brain functional imaging is introduced in this manuscript. The framework is based on approximating brain networks with a multi-coupled neural mass model (NMM). The NMM describes the excitatory and inhibitory neural interactions, capturing the mechanisms involved in seizure initiation, evolution and termination. A particle swarm optimization method is used to estimate the effective connectivity variation (the parameters of the NMM) and the epileptiform dynamics (the states of the NMM), which cannot be directly measured by electrophysiological measurement alone. The estimated effective connectivity includes both the local connectivity parameters within a single-region NMM and the remote connectivity parameters between coupled NMMs. Once the epileptiform activities are estimated, a proportional-integral controller outputs a control signal so that the epileptiform spikes can be inhibited immediately. Numerical simulations are carried out to illustrate the effectiveness of the proposed framework. The framework and the results have a profound impact on the way we detect and treat epilepsy.
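
    The PSO-based estimation step can be sketched on a toy one-parameter "coupling" problem; the model, data, and PSO hyperparameters below are invented for illustration and are far simpler than a neural mass model:

```python
import random

def pso(loss, lo, hi, n=20, iters=100, seed=3):
    """Bare-bones particle swarm minimizer for one parameter."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    pbest, pbest_f = list(pos), [loss(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            # inertia + attraction to personal and global bests
            vel[i] = (0.7*vel[i]
                      + 1.5*rng.random()*(pbest[i] - pos[i])
                      + 1.5*rng.random()*(gbest - pos[i]))
            pos[i] += vel[i]
            f = loss(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i], f
    return gbest

# synthetic "measurement": output of a unit driven with coupling c_true = 0.65
drive = [0.1*t for t in range(50)]
c_true = 0.65
obs = [c_true*d for d in drive]
loss = lambda c: sum((o - c*d)**2 for o, d in zip(obs, drive))
c_hat = pso(loss, 0.0, 2.0)
print(round(c_hat, 3))   # recovers a value near 0.65
```

    In the framework above, the loss would instead compare simulated NMM output against recorded signals, and the searched vector would hold all local and remote connectivity parameters at once.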

  20. Calculating the sensitivity and robustness of binding free energy calculations to force field parameters

    PubMed Central

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.

    2013-01-01

Binding free energy calculations offer a thermodynamically rigorous method to compute protein-ligand binding, and they depend on empirical force fields with hundreds of parameters. We examined the sensitivity of computed binding free energies to the ligand's electrostatic and van der Waals parameters. Dielectric screening and cancellation of effects between ligand-protein and ligand-solvent interactions reduce the parameter sensitivity of binding affinity by 65%, compared with interaction strengths computed in the gas phase. However, multiple changes to parameters combine additively on average, so many small parameter changes can produce a large change in overall affinity. Using these results, we estimate that random, uncorrelated errors in force field nonbonded parameters must be smaller than 0.02 e per charge, 0.06 Å per radius, and 0.01 kcal/mol per well depth in order to obtain 68% (one standard deviation) confidence that a computed affinity for a moderately sized lead compound will fall within 1 kcal/mol of the true affinity, if these are the only sources of error considered. PMID:24015114
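
    The error-budget reasoning (independent per-parameter errors combining in quadrature into a total affinity error) can be reproduced with a toy calculation; the counts, sensitivities, and error magnitudes below are placeholders, not the paper's fitted values:

```python
import math

# If each nonbonded parameter contributes an independent affinity error of
# sensitivity * error, the total standard deviation adds in quadrature.
params = [
    # (count, sensitivity in kcal/mol per unit, assumed per-parameter error)
    (10, 5.0, 0.02),   # partial charges: error in e
    (10, 4.0, 0.06),   # vdW radii: error in Angstrom
    (10, 20.0, 0.01),  # well depths: error in kcal/mol
]

var = sum(n * (s*sig)**2 for n, s, sig in params)
total_sd = math.sqrt(var)
print(round(total_sd, 2))   # → 1.04, i.e. roughly the 1 kcal/mol target
```

    With these invented numbers the one-standard-deviation budget lands near 1 kcal/mol, which mirrors how tolerances like 0.02 e or 0.06 Å translate into an overall confidence bound.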

  1. Adaptive control based on an on-line parameter estimation of an upper limb exoskeleton.

    PubMed

    Riani, Akram; Madani, Tarek; Hadri, Abdelhafid El; Benallegue, Abdelaziz

    2017-07-01

This paper presents an adaptive control strategy for an upper-limb exoskeleton based on an on-line dynamic parameter estimator. The objective is to improve the control performance of this system, which plays a critical role in assisting patients with shoulder, elbow and wrist joint movements. In general, the dynamic parameters of the human limb are unknown and differ from one person to another, which degrades the performance of the exoskeleton-human control system. For this reason, the proposed control scheme contains a supplementary loop based on a new, efficient on-line estimator of the dynamic parameters. The estimator acts upon the parameter adaptation of the controller to ensure the performance of the system in the presence of parameter uncertainties and perturbations. The exoskeleton used in this work is presented, and a physical model of the exoskeleton interacting with a 7 degree-of-freedom (DoF) upper-limb model is generated using the SimMechanics library of MATLAB/Simulink. To illustrate the effectiveness of the proposed approach, an example of passive rehabilitation movements is performed using multi-body dynamic simulation. The aim is to maneuver the exoskeleton so that it drives the upper limb to track desired trajectories during passive arm movements.

  2. Black Hole Solar Systems Extreme Mass Ratio Inspirals

    NASA Technical Reports Server (NTRS)

    Drasco, Steve

    2006-01-01

Waveforms known well enough to detect some EMRIs today. Soon, enough to realize the Gair et al. estimate of approx. 100's to 1000's of detections to z = 1. Not yet enough for precision parameter estimation of Barack and Cutler (mass and spin to 10(exp -4)). Some turning to the more exotic: non-Kerr background, gas interaction, third body, ... More status and refs: Drasco, gr-qc/0604115.

  3. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
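
    The AIC-based selection step in the second stage can be sketched on a toy regression problem; the synthetic data and candidate models are invented for illustration (polynomials standing in for FSPM parameter subsets):

```python
import math, random

def fit_poly(xs, ys, degree):
    """Ordinary least squares for a polynomial, via the normal equations."""
    k = degree + 1
    A = [[sum(x**(i+j) for x in xs) for j in range(k)] for i in range(k)]
    b = [sum(y * x**i for x, y in zip(xs, ys)) for i in range(k)]
    for col in range(k):                       # Gaussian elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            fct = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= fct * A[col][c]
            b[r] -= fct * b[col]
    coef = [0.0] * k
    for i in reversed(range(k)):               # back substitution
        coef[i] = (b[i] - sum(A[i][j]*coef[j] for j in range(i+1, k))) / A[i][i]
    rss = sum((y - sum(c*x**i for i, c in enumerate(coef)))**2
              for x, y in zip(xs, ys))
    return coef, rss

def aic(n, rss, n_params):
    """AIC for Gaussian errors, up to an additive constant."""
    return 2*n_params + n*math.log(rss/n)

random.seed(4)
xs = [i/10 for i in range(40)]
ys = [1.0 + 2.0*x + 3.0*x*x + random.gauss(0, 0.1) for x in xs]  # truly quadratic
scores = {d: aic(len(xs), fit_poly(xs, ys, d)[1], d + 1) for d in (1, 2)}
best = min(scores, key=scores.get)
print(best)   # → 2: AIC picks the quadratic model
```

    The same logic applies to the FSPM workflow: each candidate combination of re-estimated parameters is calibrated, its AIC computed, and the lowest-AIC combination retained.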

  4. Statistical Accounting for Uncertainty in Modeling Transport in Environmental Systems

    EPA Science Inventory

    Models frequently are used to predict the future extent of ground-water contamination, given estimates of their input parameters and forcing functions. Although models have a well established scientific basis for understanding the interactions between complex phenomena and for g...

  5. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

The relative motion between a remote sensing satellite sensor and objects is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated before image restoration. Identifying the motion blur direction and length accurately is crucial for determining the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the serious noise present in actual remote sensing images often renders the stripes indistinct, making the parameters difficult to calculate and the resulting error relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
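
    The spectral idea behind blur-length estimation (the stripe spacing in the spectrum encodes the blur length) can be shown in one dimension; the signal sizes are arbitrary, and direction estimation via the Radon transform is omitted:

```python
import cmath

# For uniform motion blur of length L, the PSF's spectrum is a periodic sinc
# whose first zero falls at frequency index N/L. Estimating L from that first
# spectral null is the 1-D core of the stripe-spacing idea.

N, L = 64, 8
psf = [1.0/L]*L + [0.0]*(N - L)          # uniform motion-blur kernel

def dft_mag(h, k):
    """Magnitude of the k-th DFT coefficient (direct sum, fine for small N)."""
    return abs(sum(v * cmath.exp(-2j*cmath.pi*k*n/N) for n, v in enumerate(h)))

# first frequency where the spectrum (nearly) vanishes
k0 = next(k for k in range(1, N) if dft_mag(psf, k) < 1e-9)
print(N // k0)   # → 8, the blur length
```

    In a real noisy image the nulls are smeared out, which is why the method above first cleans the spectrum (GrabCut segmentation) and then uses whole-column statistics rather than a single null.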

  6. Hansen solubility parameters for polyethylene glycols by inverse gas chromatography.

    PubMed

    Adamska, Katarzyna; Voelkel, Adam

    2006-11-03

Inverse gas chromatography (IGC) has been applied to determine the solubility parameter and its components for nonionic surfactants--polyethylene glycols (PEG) of different molecular weights. The Flory-Huggins interaction parameter (chi) and solubility parameter (delta(2)) were calculated according to the DiPaola-Baranyi and Guillet method from retention data collected experimentally for a series of carefully selected test solutes. Hansen's three-dimensional solubility parameter concept was applied to determine the components (delta(d), delta(p), delta(h)) of the corrected solubility parameter (delta(T)). The molecular weight and the measurement temperature influence the solubility parameter values estimated from the slope, the intercept and the total solubility parameter. The solubility parameters calculated from the intercept are lower than those calculated from the slope. Temperature and structural dependences of the entropic factor (chi(S)) are presented and discussed.
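
    Hansen's three-dimensional concept combines the dispersive, polar, and hydrogen-bonding components by the standard relation delta_T^2 = delta_d^2 + delta_p^2 + delta_h^2. A quick numeric check, with component values that are purely hypothetical rather than measured PEG data:

```python
import math

# Hypothetical component values in MPa**0.5 (not measured PEG data).
delta_d, delta_p, delta_h = 17.0, 3.0, 9.0

# Hansen relation: the total parameter is the Euclidean norm of the components.
delta_T = math.sqrt(delta_d**2 + delta_p**2 + delta_h**2)
print(round(delta_T, 2))   # → 19.47
```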

  7. Slip-based terrain estimation with a skid-steer vehicle

    NASA Astrophysics Data System (ADS)

    Reina, Giulio; Galati, Rocco

    2016-10-01

In this paper, a novel approach to online terrain characterisation is presented using a skid-steer vehicle. In the context of this research, terrain characterisation refers to the estimation of physical parameters that affect the terrain's ability to support vehicular motion. These parameters are inferred from modelling the kinematic and dynamic behaviour of a skid-steer vehicle, which reveals the underlying relationships governing the vehicle-terrain interaction. The concept of slip track is introduced as a measure of the slippage experienced by the vehicle during turning motion. The proposed terrain estimation system relies on common onboard sensors, that is, wheel encoders, electrical current sensors and a yaw rate gyroscope. Using these components, the system can characterise terrain online during normal vehicle operations. Experimental results obtained on different surfaces are presented to validate the system in the field, showing its effectiveness and its potential benefits for adaptive driving assistance systems or for automatically updating the parameters of onboard control and planning algorithms.

  8. Parameter and observation importance in modelling virus transport in saturated porous media - Investigations in a homogenous system

    USGS Publications Warehouse

    Barth, Gilbert R.; Hill, M.C.

    2005-01-01

This paper evaluates the importance of seven types of parameters to virus transport: hydraulic conductivity, porosity, dispersivity, sorption rate and distribution coefficient (representing physical-chemical filtration), and in-solution and adsorbed inactivation (representing virus inactivation). The first three parameters relate to subsurface transport in general, while the last four (the sorption rate, distribution coefficient, and in-solution and adsorbed inactivation rates) represent the interaction of viruses with the porous medium and their ability to persist. The importance of four types of observations for estimating the virus-transport parameters is evaluated: hydraulic heads, flow, temporal moments of conservative-transport concentrations, and virus concentrations. The evaluations are conducted using one- and two-dimensional homogeneous simulations, designed from published field experiments, and recently developed sensitivity-analysis methods. Sensitivity to the transport-simulation time-step size is used to evaluate the importance of numerical solution difficulties. Results suggest that hydraulic conductivity, porosity, and sorption are most important to virus-transport predictions. Most observation types provide substantial information about hydraulic conductivity and porosity; only virus-concentration observations provide information about sorption and inactivation. The observations are not sufficient to estimate these important parameters uniquely. Even with all observation types, there is extreme parameter correlation between porosity and hydraulic conductivity, and between the sorption rate and in-solution inactivation. Parameter estimation was accomplished by fixing the values of porosity and in-solution inactivation.

  9. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

A nonlinear optimization algorithm helps find the best-fit curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a quadratic expansion of the chi-squared (X^2) statistic. It utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function so that X^2 is minimized. Provides the user with such statistical information as the goodness of fit and the estimated values of the parameters producing the highest degree of correlation between the experimental data and the mathematical model. Written in FORTRAN 77.
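
    The quadratic-expansion idea behind such a chi-squared minimizer can be sketched with a one-parameter Gauss-Newton iteration, which repeatedly jumps to the minimum of a local quadratic model of X^2; the model and data below are invented for illustration:

```python
import math

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.9*x) for x in xs]       # synthetic data, true b = 0.9
w = [1.0]*len(xs)                        # statistical weights (uniform here)

b = 0.3                                  # deliberately poor starting guess
for _ in range(50):
    J = [x*math.exp(b*x) for x in xs]                    # d(model)/db
    r = [y - math.exp(b*x) for x, y in zip(xs, ys)]      # weighted residuals
    # minimum of the local quadratic model of X^2 in the single parameter b
    step = (sum(wi*j*ri for wi, j, ri in zip(w, J, r))
            / sum(wi*j*j for wi, j in zip(w, J)))
    b += step
    if abs(step) < 1e-12:
        break
print(round(b, 6))   # → 0.9
```

    With several parameters the same step becomes a small linear solve per iteration, which is essentially what a program like NLINEAR automates along with the goodness-of-fit statistics.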

  10. Charge relaxation and dynamics in organic semiconductors

    NASA Astrophysics Data System (ADS)

    Kwok, H. L.

    2006-08-01

Charge relaxation in dispersive materials is often described in terms of the stretched exponential function (Kohlrausch law). The process can be explained using a "hopping" model which, in principle, also applies to charge transport such as current conduction. This work analyzed reported transient photoconductivity data on functionalized pentacene single crystals using a geometric hopping model developed by B. Sturman et al. and extracted values (or ranges of values) of the material parameters relevant to charge relaxation as well as charge transport. Using the correlated disorder model (CDM), we estimated values of the carrier mobility for the pentacene samples. From these results, we observed the following: i) the transport site density appeared to be of the same order of magnitude as the carrier density; ii) it was possible to extract lower-bound values for the material parameters linked to the transport process; and iii) by matching the simulated charge decay to the transient photoconductivity data, we were able to refine estimates of the material parameters. The data also allowed us to simulate the stretched exponential decay. Our observations suggest that the stretching index and the carrier mobility are related. Physically, such interdependence would allow one to demarcate between localized molecular interactions and distant Coulomb interactions.
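
    A common way to recover the Kohlrausch parameters, including the stretching index discussed above, is to linearize the stretched exponential and fit a straight line; the decay constants below are synthetic, not the pentacene values:

```python
import math

# Kohlrausch decay phi(t) = exp(-(t/tau)**beta) linearizes as
#   ln(-ln phi) = beta*ln t - beta*ln tau,
# so an ordinary straight-line fit recovers beta (slope) and tau.

tau, beta = 2.0, 0.6
ts = [0.25*i for i in range(1, 40)]
phi = [math.exp(-(t/tau)**beta) for t in ts]

X = [math.log(t) for t in ts]
Y = [math.log(-math.log(p)) for p in phi]
n = len(X)
mx, my = sum(X)/n, sum(Y)/n
slope = (sum((x-mx)*(y-my) for x, y in zip(X, Y))
         / sum((x-mx)**2 for x in X))
intercept = my - slope*mx
beta_hat = slope                          # stretching index
tau_hat = math.exp(-intercept/slope)      # relaxation time
print(round(beta_hat, 3), round(tau_hat, 3))   # → 0.6 2.0
```

    On noiseless synthetic data the recovery is exact; with measured transients the same fit gives the stretching index whose correlation with mobility the abstract points to.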

  11. PRince: a web server for structural and physicochemical analysis of protein-RNA interface.

    PubMed

    Barik, Amita; Mishra, Abhishek; Bahadur, Ranjit Prasad

    2012-07-01

We have developed a web server, PRince, which analyzes the structural features and physicochemical properties of the protein-RNA interface. Users need to submit a PDB file containing the atomic coordinates of both the protein and the RNA molecules in complex form (in '.pdb' format). They should also specify the chain identifiers of the interacting protein and RNA molecules. The size of the protein-RNA interface is estimated by measuring the solvent accessible surface area buried in contact. For a given protein-RNA complex, PRince calculates the structural, physicochemical and hydration properties of the interacting surfaces. All the parameters generated by the server are presented in a tabular format. The interacting surfaces can also be visualized with software plug-ins such as Jmol. In addition, the output files containing the lists of atomic coordinates of the interacting protein, RNA and interface water molecules can be downloaded. The parameters generated by PRince are novel, and users can correlate them with experimentally determined biophysical and biochemical parameters to better understand the specificity of the protein-RNA recognition process. This server will be continuously upgraded to include more parameters. PRince is publicly accessible and free for use. Available at http://www.facweb.iitkgp.ernet.in/~rbahadur/prince/home.html.

  12. SIMPLE estimate of the free energy change due to aliphatic mutations: superior predictions based on first principles.

    PubMed

    Bueno, Marta; Camacho, Carlos J; Sancho, Javier

    2007-09-01

The bioinformatics revolution of the last decade has been instrumental in the development of empirical potentials to quantitatively estimate protein interactions for modeling and design. Although computationally efficient, these potentials hide most of the relevant thermodynamics in 5 to 40 parameters that are fitted against a large experimental database. Here, we revisit this longstanding problem and show that a careful consideration of the change in hydrophobicity, electrostatics, and configurational entropy between the folded and unfolded states of aliphatic point mutations predicts 20-30% fewer false positives and yields more accurate predictions than any published empirical energy function. This significant improvement is achieved with essentially no free parameters, validating past theoretical and experimental efforts to understand the thermodynamics of protein folding. Our first-principles analysis strongly suggests that both the solute-solute van der Waals interactions in the folded state and the electrostatic free energy change of exposed aliphatic mutations are almost completely compensated by similar interactions operating in the unfolded ensemble. Not surprisingly, the problem of properly accounting for the solvent contribution to the free energy of polar and charged group mutations, as well as of mutations that disrupt the protein backbone, remains open.

  13. Program for computer aided reliability estimation

    NASA Technical Reports Server (NTRS)

    Mathur, F. P. (Inventor)

    1972-01-01

    A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.

  14. The pEst version 2.1 user's manual

    NASA Technical Reports Server (NTRS)

    Murray, James E.; Maine, Richard E.

    1987-01-01

This report is a user's manual for version 2.1 of pEst, a FORTRAN 77 computer program for interactive parameter estimation in nonlinear dynamic systems. The pEst program allows the user complete generality in defining the nonlinear equations of motion used in the analysis. The equations of motion are specified by a set of FORTRAN subroutines; a set of routines for a general aircraft model is supplied with the program and is described in the report. The report also briefly discusses the scope of the parameter estimation problem the program addresses. It gives detailed explanations of the purpose and usage of all available program commands and a description of the computational algorithms used in the program.

  15. Geometric relationships for homogenization in single-phase binary alloy systems

    NASA Technical Reports Server (NTRS)

    Unnam, J.; Tenney, D. R.; Stein, B. A.

    1978-01-01

    A semiempirical relationship is presented which describes the extent of interaction between constituents in single-phase binary alloy systems having planar, cylindrical, or spherical interfaces. This relationship makes possible a quick estimate of the extent of interaction without lengthy numerical calculations. It includes two parameters which are functions of mean concentration and interface geometry. Experimental data for the copper-nickel system are included to demonstrate the usefulness of this relationship.

  16. Methods for estimating confidence intervals in interrupted time series analyses of health interventions.

    PubMed

    Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis

    2009-02-01

    Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
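
    The bootstrap idea can be sketched for an absolute level change in a deliberately simplified ITS: pre/post means only, ignoring trend and autocorrelation, both of which the segmented-regression method above does handle. All data are synthetic.

```python
import random
import statistics

random.seed(7)
pre  = [10 + random.gauss(0, 1) for _ in range(24)]   # 24 months pre-policy
post = [ 7 + random.gauss(0, 1) for _ in range(24)]   # true level change = -3

def level_change(a, b):
    return statistics.mean(b) - statistics.mean(a)

obs = level_change(pre, post)

# percentile bootstrap: resample each segment with replacement
boots = []
for _ in range(2000):
    a = [random.choice(pre) for _ in pre]
    b = [random.choice(post) for _ in post]
    boots.append(level_change(a, b))
boots.sort()
lo, hi = boots[int(0.025*2000)], boots[int(0.975*2000)]
print(round(obs, 2), round(lo, 2), round(hi, 2))
```

    The full method resamples residuals of the autocorrelation-corrected segmented regression instead of raw observations, but the CI construction from the sorted bootstrap distribution is the same.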

  17. [Analytic methods for seed models with genotype x environment interactions].

    PubMed

    Zhu, J

    1996-01-01

    Genetic models with genotype effects (G) and genotype x environment interaction effects (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into the seed direct genetic effect (G0), the cytoplasm genetic effect (C), and the maternal plant genetic effect (Gm). The seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. The maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can likewise be partitioned into the direct genetic by environment interaction effect (G0E), the cytoplasm genetic by environment interaction effect (CE), and the maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components. GmE can also be partitioned into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parents, F1, F2, and backcrosses. A set of parents together with their reciprocal F1 and F2 seeds is sufficient for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components, and it also yields unbiased estimates of covariance components between two traits. Random genetic effects in the seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimating the sampling variances of the estimated variance and covariance components and of the predicted genetic effects, which can then be used in t-tests of the parameters. Unbiasedness and efficiency of the estimated variance components and predicted genetic effects are tested by Monte Carlo simulations.

  18. Student Effort and Performance over the Semester

    ERIC Educational Resources Information Center

    Krohn, Gregory A.; O'Connor, Catherine M.

    2005-01-01

    The authors extend the standard education production function and student time allocation analysis to focus on the interactions between student effort and performance over the semester. The purged instrumental variable technique is used to obtain consistent estimators of the structural parameters of the model using data from intermediate…

  19. An Efficient Test for Gene-Environment Interaction in Generalized Linear Mixed Models with Family Data.

    PubMed

    Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza

    2017-09-27

    Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator-Activated Receptor Gamma (PPARG) gene associated with diabetes.
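    The BLUP-ridge equivalence noted in this abstract can be checked numerically. The sketch below uses hypothetical genotype codes and variance components, not the study's actual data or model:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 8            # hypothetical: 50 subjects, 8 SNPs in the gene set
Z = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 genotype codes
y = rng.normal(size=n)                              # hypothetical phenotype

sigma2, tau2 = 1.0, 0.25         # residual and SNP-effect variances
lam = sigma2 / tau2              # the implied ridge penalty

# BLUP of the random SNP effects: tau^2 Z' (tau^2 Z Z' + sigma^2 I)^{-1} y
blup = tau2 * Z.T @ np.linalg.solve(tau2 * Z @ Z.T + sigma2 * np.eye(n), y)

# Ridge regression with penalty lambda = sigma^2 / tau^2
ridge = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

print(np.allclose(blup, ridge))   # True: the two estimators coincide
```

    With lambda = sigma^2/tau^2 the two solutions coincide exactly (a matrix identity), which is why the ridge penalty can be read off the estimated variance components instead of being tuned by cross-validation.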

  20. Experimental study and thermodynamic modeling for determining the effect of non-polar solvent (hexane)/polar solvent (methanol) ratio and moisture content on the lipid extraction efficiency from Chlorella vulgaris.

    PubMed

    Malekzadeh, Mohammad; Abedini Najafabadi, Hamed; Hakim, Maziar; Feilizadeh, Mehrzad; Vossoughi, Manouchehr; Rashtchian, Davood

    2016-02-01

    In this research, an organic solvent composed of hexane and methanol was used for lipid extraction from dry and wet biomass of Chlorella vulgaris. The results indicated that the lipid and fatty acid extraction yields decreased with increasing moisture content of the biomass. The maximum extraction efficiency was attained with an equivolume mixture of hexane and methanol for both dry and wet biomass. Thermodynamic modeling was employed to estimate the effect of the hexane/methanol ratio and moisture content on fatty acid extraction yield. The Hansen solubility parameter was used in adjusting the interaction parameters of the model, which reduced the number of tuning parameters from six to two. The results indicated that the model can accurately estimate the fatty acid recovery, with average absolute deviation percentages (AAD%) of 13.90% and 15.00% for the cases of six and two adjustable parameters, respectively.

  1. Models of Pilot Behavior and Their Use to Evaluate the State of Pilot Training

    NASA Astrophysics Data System (ADS)

    Jirgl, Miroslav; Jalovecky, Rudolf; Bradac, Zdenek

    2016-07-01

    This article discusses the possibilities of obtaining new information related to human behavior, namely the changes or progressive development of pilots' abilities during training. The main assumption is that a pilot's ability can be evaluated based on a corresponding behavioral model whose parameters are estimated using mathematical identification procedures. The mean values of the identified parameters are obtained via statistical methods. These parameters are then monitored and their changes evaluated. In this context, the paper introduces and examines relevant mathematical models of human (pilot) behavior, the pilot-aircraft interaction, and an example of the mathematical analysis.

  2. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  3. User's instructions for the 41-node thermoregulatory model (steady state version)

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A user's guide for the steady-state thermoregulatory model is presented. The model was modified to provide conversational interaction on a remote terminal, greater flexibility for parameter estimation, increased efficiency of convergence, a greater choice of output variables, and more realistic equations for respiratory and skin diffusion water losses.

  4. Search for an Annual Modulation in a p-Type Point Contact Germanium Dark Matter Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbeau, P.S.; Collar, J.I.; Fields, N.

    2011-01-01

    Fifteen months of cumulative CoGeNT data are examined for indications of an annual modulation, a predicted signature of weakly interacting massive particle (WIMP) interactions. Presently available data support the presence of a modulated component of unknown origin, with parameters prima facie compatible with a galactic halo composed of light-mass WIMPs. Unoptimized estimators yield a statistical significance for a modulation of ≈2.8σ, limited by the short exposure.

  5. Search for an Annual Modulation in a p-Type Point Contact Germanium Dark Matter Detector

    NASA Astrophysics Data System (ADS)

    Aalseth, C. E.; Barbeau, P. S.; Colaresi, J.; Collar, J. I.; Diaz Leon, J.; Fast, J. E.; Fields, N.; Hossbach, T. W.; Keillor, M. E.; Kephart, J. D.; Knecht, A.; Marino, M. G.; Miley, H. S.; Miller, M. L.; Orrell, J. L.; Radford, D. C.; Wilkerson, J. F.; Yocum, K. M.

    2011-09-01

    Fifteen months of cumulative CoGeNT data are examined for indications of an annual modulation, a predicted signature of weakly interacting massive particle (WIMP) interactions. Presently available data support the presence of a modulated component of unknown origin, with parameters prima facie compatible with a galactic halo composed of light-mass WIMPs. Unoptimized estimators yield a statistical significance for a modulation of ˜2.8σ, limited by the short exposure.

  6. Estimates of genetic and phenotypic parameters for the yield and quality of soybean seeds.

    PubMed

    Zambiazzi, E V; Bruzi, A T; Guilherme, S R; Pereira, D R; Lima, J G; Zuffo, A M; Ribeiro, F O; Mendes, A E S; Godinho, S H M; Carvalho, M L M

    2017-09-27

    The goals of this study were to estimate genetic and phenotypic parameters for the yield and quality of soybean seed grown in different environments in Minas Gerais State, and to evaluate genotype x environment (GxE) interaction effects on soybean seed yield and quality. Seeds were produced in three locations in Minas Gerais State (Lavras, Inconfidentes, and Patos de Minas) in the 2013/14 and 2014/15 seasons. Field experiments were conducted in randomized blocks in a 17 x 6 (GxE) factorial with three replications. Seed yield and quality were evaluated in the laboratory using a completely randomized design, assessing germination on paper and sand substrates, seedling emergence, emergence speed index, mechanical damage by sodium hypochlorite, electrical conductivity, accelerated aging, and seed vigor and viability by the tetrazolium test. The genotypic quadratic component, GxE variance component, genotypic determination coefficient, genetic variation coefficient, and environmental variation coefficient were estimated using the Genes software. The percentage contributions of genotype, environment, and GxE interaction were analyzed for pairwise and three-site combinations of locations using the R software. Regarding selection of broadly adapted genotypes, TMG 1179 RR, CD 2737 RR, and CD 237 RR combined better yield performance with high physical and physiological seed potential. The environmental effect was more expressive for most of the characters related to soybean seed quality. GxE interaction effects were pronounced, as the genotypes did not behave consistently across environments.

  7. Fluorimetric study on the interaction between Norfloxacin and Proflavine hemisulphate.

    PubMed

    More, Vishalkumar R; Anbhule, Prashant V; Lee, Sang H; Patil, Shivajirao R; Kolekar, Govind B

    2011-07-01

    The interaction between Norfloxacin (NF) and Proflavine hemisulphate (PF) was investigated by UV-VIS absorption and fluorescence spectroscopy. Fluorescence quenching of NF by PF was shown to be due to the formation of an NF-PF complex, which was supported by the UV-VIS absorption study. Thermodynamic parameters suggested that the key interacting forces are hydrogen bonding and van der Waals interactions and that the binding interaction was spontaneous. The distance r between NF and PF was obtained according to Förster's theory of non-radiative energy transfer. The fluorescence quenching mechanism was applied to estimate PF directly in pharmaceutical samples.
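    Förster's theory, used above to obtain the distance r, relates the energy-transfer efficiency E to the donor-acceptor distance through the Förster radius R0. A minimal sketch of that relation (the efficiency and R0 values are hypothetical):

```python
def forster_distance(E, R0):
    """Donor-acceptor distance r from FRET efficiency E and Forster radius R0.

    Forster's relation: E = R0**6 / (R0**6 + r**6)
    solved for r:       r = R0 * ((1 - E) / E) ** (1/6)
    """
    return R0 * ((1.0 - E) / E) ** (1.0 / 6.0)

print(forster_distance(0.5, 2.5))   # 2.5: at E = 0.5, r equals R0
```

    The sixth-power dependence makes the method sensitive only near r ≈ R0, which is why it is useful for estimating short intermolecular distances.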

  8. Effect of solute interactions in columbium /Nb/ on creep strength

    NASA Technical Reports Server (NTRS)

    Klein, M. J.; Metcalfe, A. G.

    1973-01-01

    The creep strength of 17 ternary columbium (Nb)-base alloys was determined using an abbreviated measuring technique, and the results were analyzed to identify the contributions of solute interactions to creep strength. Isostrength creep diagrams and an interaction strengthening parameter, ST, were used to present and analyze data. It was shown that the isostrength creep diagram can be used to estimate the creep strength of untested alloys and to identify compositions with the most economical use of alloy elements. Positive values of ST were found for most alloys, showing that interaction strengthening makes an important contribution to the creep strength of these ternary alloys.

  9. Three-body Final State Interaction in η→3π

    DOE PAGES

    Guo, Peng; Danilkin, Igor V.; Schott, Diane; ...

    2015-09-11

    We present a unitary dispersive model for the η → 3π decay process based upon the Khuri-Treiman equations, which are solved by means of the Pasquier inversion method. The description of the hadronic final-state interactions for the η → 3π decay is essential to reproduce the available data and to understand the existing discrepancies between Dalitz plot parameters from experiment and chiral perturbation theory. Our approach incorporates subtraction constants that are fixed by fitting the recent high-statistics WASA-at-COSY data for η → π+ π- π0. Based on the parameters obtained, we predict the slope parameter for the neutral channel to be α = -0.022 ± 0.004. Through matching to next-to-leading order chiral perturbation theory, we estimate the quark mass double ratio to be Q = 21.4 ± 0.4.

  10. Prognostic relevance of the interaction between short-term, metronome-paced heart rate variability, and inflammation: results from the population-based CARLA cohort study.

    PubMed

    Medenwald, Daniel; Swenne, Cees A; Loppnow, Harald; Kors, Jan A; Pietzner, Diana; Tiller, Daniel; Thiery, Joachim; Nuding, Sebastian; Greiser, Karin H; Haerting, Johannes; Werdan, Karl; Kluttig, Alexander

    2017-01-01

    The aim was to determine the interaction between heart rate variability (HRV) and inflammation and their association with cardiovascular and all-cause mortality in the general population. Subjects of the CARLA study (n = 1671; 778 women, 893 men, 45-83 years of age) were observed for an average follow-up period of 8.8 years (226 deaths, 70 cardiovascular deaths). Heart rate variability parameters were calculated from 5-min segments of 20-min resting electrocardiograms. High-sensitivity C-reactive protein (hsCRP), interleukin-6 (IL-6), and soluble tumour necrosis factor-alpha receptor type 1 (sTNF-R1) were measured as inflammation parameters. The HRV parameters determined included the standard deviation of normal-to-normal intervals (SDNN), the root-mean-square of successive normal-interval differences (RMSSD), the low- and high-frequency (HF) power, the ratio of both, and non-linear parameters [Poincaré plot (SD1, SD2, SD1/SD2), short-term detrended fluctuation analysis]. We estimated hazard ratios by using covariate-adjusted Cox regression for cardiovascular and all-cause mortality incorporating an interaction term of HRV/inflammation parameters. Relative excess risks due to interaction (RERIs) were computed. We found an interaction effect of sTNF-R1 with SDNN (RERI: 0.5; 99% confidence interval (CI): 0.1-1.0), and weaker effects with RMSSD (RERI: 0.4; 99% CI: 0.0-0.9) and HF (RERI: 0.4; 99% CI: 0.0-0.9) with respect to cardiovascular mortality on an additive scale after covariate adjustment. Neither IL-6 nor hsCRP showed a significant interaction with the HRV parameters. A change in TNF-α levels or in the autonomic nervous system influences mortality risk through both entities simultaneously; thus, both TNF-α and HRV need to be considered when predicting mortality.
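    The RERI reported above is a standard measure of additive interaction computed from relative risks or hazard ratios, RERI = RR11 - RR10 - RR01 + 1. A minimal sketch with hypothetical values (not the study's estimates):

```python
def reri(rr11, rr10, rr01):
    """Relative excess risk due to interaction on the additive scale.

    rr11: relative risk with both exposures present
    rr10, rr01: relative risks with only one exposure present
    (reference category, neither exposure, has relative risk 1).
    """
    return rr11 - rr10 - rr01 + 1.0

# Hypothetical hazard ratios: joint exposure 2.0, single exposures 1.2 and 1.3.
print(reri(2.0, 1.2, 1.3))   # 0.5: excess risk beyond additivity
```

    A RERI of 0 means the joint effect is exactly the sum of the separate excess risks; positive values, as in the sTNF-R1/SDNN result above, indicate super-additive interaction.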

  11. Geographic information system/watershed model interface

    USGS Publications Warehouse

    Fisher, Gary T.

    1989-01-01

    Geographic information systems allow for the interactive analysis of spatial data related to water-resources investigations. A conceptual design for an interface between a geographic information system and a watershed model includes functions for the estimation of model parameter values. Design criteria include ease of use, minimal equipment requirements, a generic data-base management system, and use of a macro language. An application is demonstrated for a 90.1-square-kilometer subbasin of the Patuxent River near Unity, Maryland, that performs automated derivation of watershed parameters for hydrologic modeling.

  12. The role of environmental heterogeneity in meta-analysis of gene-environment interactions with quantitative traits.

    PubMed

    Li, Shi; Mukherjee, Bhramar; Taylor, Jeremy M G; Rice, Kenneth M; Wen, Xiaoquan; Rice, John D; Stringham, Heather M; Boehnke, Michael

    2014-07-01

    With challenges in data harmonization and environmental heterogeneity across various data sources, meta-analysis of gene-environment interaction studies can often involve subtle statistical issues. In this paper, we study the effect of environmental covariate heterogeneity (within and between cohorts) on two approaches for fixed-effect meta-analysis: the standard inverse-variance weighted meta-analysis and a meta-regression approach. Akin to the results in Simmonds and Higgins (), we obtain analytic efficiency results for both methods under certain assumptions. The relative efficiency of the two methods depends on the ratio of within versus between cohort variability of the environmental covariate. We propose to use an adaptively weighted estimator (AWE), between meta-analysis and meta-regression, for the interaction parameter. The AWE retains full efficiency of the joint analysis using individual level data under certain natural assumptions. Lin and Zeng (2010a, b) showed that a multivariate inverse-variance weighted estimator retains full efficiency as joint analysis using individual level data, if the estimates with full covariance matrices for all the common parameters are pooled across all studies. We show consistency of our work with Lin and Zeng (2010a, b). Without sacrificing much efficiency, the AWE uses only univariate summary statistics from each study, and bypasses issues with sharing individual level data or full covariance matrices across studies. We compare the performance of the methods both analytically and numerically. The methods are illustrated through meta-analysis of interaction between Single Nucleotide Polymorphisms in the FTO gene and body mass index on high-density lipoprotein cholesterol data from a set of eight studies of type 2 diabetes.

  13. Graph reconstruction using covariance-based methods.

    PubMed

    Sulaimanov, Nurgazy; Koeppl, Heinz

    2016-12-01

    Methods based on correlation and partial correlation are today employed in the reconstruction of statistical interaction graphs from high-throughput omics data. These dedicated methods work well even when the number of variables exceeds the number of samples. In this study, we investigate how the graphs extracted from covariance and concentration matrix estimates are related, using Neumann series and transitive closure and by discussing concrete small examples. Considering the ideal case where the true graph is available, we also compare correlation and partial correlation methods for large realistic graphs. In particular, we perform the comparisons with optimally selected parameters based on the true underlying graph and with data-driven approaches where the parameters are estimated directly from the data.
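    The distinction between covariance-based and concentration-based graphs can be illustrated on a simulated three-variable chain, in which the end variables are marginally correlated but conditionally independent given the middle one. This is a generic sketch of the idea, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical chain X1 -> X2 -> X3: X1 and X3 are marginally correlated
# but conditionally independent given X2.
n = 5000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
x3 = 0.8 * x2 + rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

cov = np.cov(X, rowvar=False)
prec = np.linalg.inv(cov)        # concentration (inverse covariance) matrix

# Partial correlation: rho_ij = -prec_ij / sqrt(prec_ii * prec_jj)
d = np.sqrt(np.diag(prec))
pcor = -prec / np.outer(d, d)
np.fill_diagonal(pcor, 1.0)

corr = np.corrcoef(X, rowvar=False)
print(corr[0, 2], pcor[0, 2])    # marginal corr large, partial corr near 0
```

    The correlation graph includes the spurious X1-X3 edge induced by the chain, while the partial-correlation (concentration) graph suppresses it, which is the behavior the paper analyzes via Neumann series and transitive closure.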

  14. Using global sensitivity analysis to understand higher order interactions in complex models: an application of GSA on the Revised Universal Soil Loss Equation (RUSLE) to quantify model sensitivity and implications for ecosystem services management in Costa Rica

    NASA Astrophysics Data System (ADS)

    Fremier, A. K.; Estrada Carmona, N.; Harper, E.; DeClerck, F.

    2011-12-01

    Appropriate application of complex models to estimate system behavior requires understanding the influence of model structure and parameter estimates on model output. To date, most researchers perform local sensitivity analyses, rather than global, because of computational time and quantity of data produced. Local sensitivity analyses are limited in quantifying the higher order interactions among parameters, which could lead to incomplete analysis of model behavior. To address this concern, we performed a GSA on a commonly applied equation for soil loss - the Revised Universal Soil Loss Equation. USLE is an empirical model built on plot-scale data from the USA and the Revised version (RUSLE) includes improved equations for wider conditions, with 25 parameters grouped into six factors to estimate long-term plot and watershed scale soil loss. Despite RUSLE's widespread application, a complete sensitivity analysis has yet to be performed. In this research, we applied a GSA to plot and watershed scale data from the US and Costa Rica to parameterize the RUSLE in an effort to understand the relative importance of model factors and parameters across wide environmental space. We analyzed the GSA results using Random Forest, a statistical approach to evaluate parameter importance accounting for the higher order interactions, and used Classification and Regression Trees to show the dominant trends in complex interactions. In all GSA calculations the management of cover crops (C factor) ranks the highest among factors (compared to rain-runoff erosivity, topography, support practices, and soil erodibility). This is counter to previous sensitivity analyses where the topographic factor was determined to be the most important. The GSA finding is consistent across multiple model runs, including data from the US, Costa Rica, and a synthetic dataset of the widest theoretical space. 
The three most important parameters were the mass density of live and dead roots in the upper inch of soil (C factor), the slope angle (L and S factors), and the percentage of land area covered by surface cover (C factor). Our findings give further support to the importance of vegetation as a provider of a vital ecosystem service: soil loss reduction. Concurrently, progress has already been made in Costa Rica, where dam managers are moving forward on a Payment for Ecosystem Services scheme to help keep private lands forested and to improve crop management through targeted investments. Use of complex watershed models such as RUSLE can help managers quantify the effect of specific land use changes. Moreover, effective management of vegetation has other important benefits, such as bundled ecosystem services (e.g., pollination, habitat connectivity) and improvements to communities' livelihoods.
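    A global, variance-based sensitivity analysis of a multiplicative RUSLE-like model can be sketched with a Saltelli-style estimator of first-order Sobol indices. The factor ranges below are hypothetical and serve only to illustrate the GSA idea, not to reproduce the study's parameterization:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy RUSLE-like multiplicative model: A = R * K * LS * C * P.
# Factor ranges are hypothetical illustrations only.
names = ["R", "K", "LS", "C", "P"]
lows  = np.array([100.0, 0.1, 0.5, 0.01, 0.5])
highs = np.array([500.0, 0.5, 5.0, 0.50, 1.0])

def model(X):
    return np.prod(X, axis=1)

def sample(n):
    # Independent uniform draws within each factor's range.
    return lows + (highs - lows) * rng.random((n, len(names)))

# Saltelli-style estimator of first-order Sobol indices:
# S_i = mean(f(B) * (f(A with column i from B) - f(A))) / Var(f)
n = 100_000
A, B = sample(n), sample(n)
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))
for i, name in enumerate(names):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    Si = np.mean(fB * (model(ABi) - fA)) / var
    print(f"S_{name} = {Si:.2f}")
```

    The first-order index S_i measures the fraction of output variance explained by factor i alone; the gap between S_i and the total-order index (not shown) quantifies the higher-order interactions that local sensitivity analyses miss.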

  15. Flow structure generated by perpendicular blade-vortex interaction and implications for helicopter noise prediction. Volume 1: Measurements

    NASA Technical Reports Server (NTRS)

    Wittmer, Kenneth S.; Devenport, William J.

    1996-01-01

    The perpendicular interaction of a streamwise vortex with an infinite span helicopter blade was modeled experimentally in incompressible flow. Three-component velocity and turbulence measurements were made using a sub-miniature four sensor hot-wire probe. Vortex core parameters (radius, peak tangential velocity, circulation, and centerline axial velocity deficit) were determined as functions of blade-vortex separation, streamwise position, blade angle of attack, vortex strength, and vortex size. The downstream development of the flow shows that the interaction of the vortex with the blade wake is the primary cause of the changes in the core parameters. The blade sheds negative vorticity into its wake as a result of the induced angle of attack generated by the passing vortex. Instability in the vortex core due to its interaction with this negative vorticity region appears to be the catalyst for the magnification of the size and intensity of the turbulent flowfield downstream of the interaction. In general, the core radius increases while peak tangential velocity decreases with the effect being greater for smaller separations. These effects are largely independent of blade angle of attack; and if these parameters are normalized on their undisturbed values, then the effects of the vortex strength appear much weaker. Two theoretical models were developed to aid in extending the results to other flow conditions. An empirical model was developed for core parameter prediction which has some rudimentary physical basis, implying usefulness beyond a simple curve fit. An inviscid flow model was also created to estimate the vorticity shed by the interaction blade, and to predict the early stages of its incorporation into the interacting vortex.

  16. Modelling biological invasions: species traits, species interactions, and habitat heterogeneity.

    PubMed

    Cannas, Sergio A; Marco, Diana E; Páez, Sergio A

    2003-05-01

    In this paper we explore the integration of different factors to understand, predict, and control ecological invasions, through a general cellular automaton model developed especially for this purpose. The model includes life history traits of several species in a modular structure of interacting cellular automata. We performed simulations using field values corresponding to the exotic Gleditsia triacanthos and native co-dominant trees in a montane area. The presence of a G. triacanthos juvenile bank was a determining condition for invasion success. The main parameters influencing invasion velocity were mean seed dispersal distance and minimum reproductive age; seed production had only a small influence. Velocities predicted by the model agreed well with estimates from field data, and predicted population densities matched field values closely. The modular structure of the model, the explicit interaction between the invader and the native species, and the simplicity of its parameters and transition rules are novel features of the model.

  17. Integration and global analysis of isothermal titration calorimetry data for studying macromolecular interactions.

    PubMed

    Brautigam, Chad A; Zhao, Huaying; Vargas, Carolyn; Keller, Sandro; Schuck, Peter

    2016-05-01

    Isothermal titration calorimetry (ITC) is a powerful and widely used method to measure the energetics of macromolecular interactions by recording a thermogram of differential heating power during a titration. However, traditional ITC analysis is limited by stochastic thermogram noise and by the limited information content of a single titration experiment. Here we present a protocol for bias-free thermogram integration based on automated shape analysis of the injection peaks, followed by combination of isotherms from different calorimetric titration experiments into a global analysis, statistical analysis of binding parameters and graphical presentation of the results. This is performed using the integrated public-domain software packages NITPIC, SEDPHAT and GUSSI. The recently developed low-noise thermogram integration approach and global analysis allow for more precise parameter estimates and more reliable quantification of multisite and multicomponent cooperative and competitive interactions. Titration experiments typically take 1-2.5 h each, and global analysis usually takes 10-20 min.

  18. LigParGen web server: an automatic OPLS-AA parameter generator for organic ligands

    PubMed Central

    Dodda, Leela S.

    2017-01-01

    The accurate calculation of protein/nucleic acid–ligand interactions or condensed phase properties by force field-based methods requires a precise description of the energetics of intermolecular interactions. Despite the progress made in force fields, small molecule parameterization remains an open problem due to the magnitude of the chemical space; the most critical issue is the estimation of a balanced set of atomic charges with the ability to reproduce experimental properties. The LigParGen web server provides an intuitive interface for generating OPLS-AA/1.14*CM1A(-LBCC) force field parameters for organic ligands, in the formats of commonly used molecular dynamics and Monte Carlo simulation packages. This server has high value for researchers interested in studying any phenomena based on intermolecular interactions with ligands via molecular mechanics simulations. It is free and open to all at jorgensenresearch.com/ligpargen, and has no login requirements. PMID:28444340

  19. Estimation of k-ε parameters using surrogate models and jet-in-crossflow data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan

    2014-11-01

    We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds Averaged Navier Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDFs), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used instead of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameters being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model this "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than the nominal values of the parameters do. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for the poor predictive skill of RANS with nominal parameter values was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is the structural error in RANS.

  20. A Statistical Method of Identifying Interactions in Neuron–Glia Systems Based on Functional Multicell Ca2+ Imaging

    PubMed Central

    Nakae, Ken; Ikegaya, Yuji; Ishikawa, Tomoe; Oba, Shigeyuki; Urakubo, Hidetoshi; Koyama, Masanori; Ishii, Shin

    2014-01-01

    Crosstalk between neurons and glia may constitute a significant part of information processing in the brain. We present a novel method of statistically identifying interactions in a neuron–glia network. We attempted to identify neuron–glia interactions from neuronal and glial activities via maximum-a-posteriori (MAP)-based parameter estimation by developing a generalized linear model (GLM) of a neuron–glia network. The interactions in our interest included functional connectivity and response functions. We evaluated the cross-validated likelihood of GLMs that resulted from the addition or removal of connections to confirm the existence of specific neuron-to-glia or glia-to-neuron connections. We only accepted addition or removal when the modification improved the cross-validated likelihood. We applied the method to a high-throughput, multicellular in vitro Ca2+ imaging dataset obtained from the CA3 region of a rat hippocampus, and then evaluated the reliability of connectivity estimates using a statistical test based on a surrogate method. Our findings based on the estimated connectivity were in good agreement with currently available physiological knowledge, suggesting our method can elucidate undiscovered functions of neuron–glia systems. PMID:25393874

  1. An empirical relationship for homogenization in single-phase binary alloy systems

    NASA Technical Reports Server (NTRS)

    Unnam, J.; Tenney, D. R.; Stein, B. A.

    1979-01-01

    A semiempirical formula is developed for describing the extent of interaction between constituents in single-phase binary alloy systems with planar, cylindrical, or spherical interfaces. The formula contains two parameters that are functions of mean concentration and interface geometry of the couple. The empirical solution is simple, easy to use, and does not involve sequential calculations, thereby allowing quick estimation of the extent of interactions without lengthy calculations. Results obtained with this formula are in good agreement with those from a finite-difference analysis.

  2. Nonparametric tests for interaction and group differences in a two-way layout.

    PubMed

    Fisher, A C; Wallenstein, S

    1991-01-01

    Nonparametric tests of group differences and interaction across strata are developed in which the null hypotheses for these tests are expressed as functions of rho_i = P(X > Y) + (1/2)P(X = Y), where X refers to a random observation from one group and Y refers to a random observation from the other group within stratum i. The estimator r of the parameter rho is shown to be a useful way to summarize and examine ordinal and continuous data.
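    The estimator r is straightforward to compute from the two samples within a stratum; a minimal sketch:

```python
def rho_hat(x, y):
    """Estimator r of rho = P(X > Y) + (1/2) P(X = Y), computed over all
    cross-group pairs of observations."""
    wins = sum(1.0 for xi in x for yi in y if xi > yi)
    ties = sum(1.0 for xi in x for yi in y if xi == yi)
    return (wins + 0.5 * ties) / (len(x) * len(y))

# r = 0.5 under no group difference; values near 0 or 1 indicate that one
# group stochastically dominates the other.
r_equal = rho_hat([1, 2, 3], [1, 2, 3])   # identical samples -> 0.5
r_dom = rho_hat([4, 5], [1, 2])           # complete separation -> 1.0
```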

  3. Physics of ultrasonic wave propagation in bone and heart characterized using Bayesian parameter estimation

    NASA Astrophysics Data System (ADS)

    Anderson, Christian Carl

    This dissertation explores the physics underlying the propagation of ultrasonic waves in bone and in heart tissue through the use of Bayesian probability theory. Quantitative ultrasound is a noninvasive modality used for clinical detection, characterization, and evaluation of bone quality and cardiovascular disease. Approaches that extend the state of knowledge of the physics underpinning the interaction of ultrasound with inherently inhomogeneous and anisotropic tissue have the potential to enhance its clinical utility. Simulations of fast and slow compressional wave propagation in cancellous bone were carried out to demonstrate the plausibility of a proposed explanation for the widely reported anomalous negative dispersion in cancellous bone. The results showed that negative dispersion could arise from analysis that proceeded under the assumption that the data consist of only a single ultrasonic wave, when in fact two overlapping and interfering waves are present. The confounding effect of overlapping fast and slow waves was addressed by applying Bayesian parameter estimation to simulated data, to experimental data acquired on bone-mimicking phantoms, and to data acquired in vitro on cancellous bone. The Bayesian approach successfully estimated the properties of the individual fast and slow waves even when they strongly overlapped in the acquired data. The Bayesian parameter estimation technique was further applied to an investigation of the anisotropy of ultrasonic properties in cancellous bone. The degree to which fast and slow waves overlap is partially determined by the angle of insonation of ultrasound relative to the predominant direction of trabecular orientation. In the past, studies of anisotropy have been limited by interference between fast and slow waves over a portion of the range of insonation angles. 
Bayesian analysis estimated attenuation, velocity, and amplitude parameters over the entire range of insonation angles, allowing a more complete characterization of anisotropy. A novel piecewise linear model for the cyclic variation of ultrasonic backscatter from myocardium was proposed. Models of cyclic variation for 100 type 2 diabetes patients and 43 normal control subjects were constructed using Bayesian parameter estimation. Parameters determined from the model, specifically rise time and slew rate, were found to be more reliable in differentiating between subject groups than the previously employed magnitude parameter.

  4. A loosely-coupled scheme for the interaction between a fluid, elastic structure and poroelastic material

    NASA Astrophysics Data System (ADS)

    Bukač, M.

    2016-05-01

    We model the interaction between an incompressible, viscous fluid, thin elastic structure and a poroelastic material. The poroelastic material is modeled using the Biot's equations of dynamic poroelasticity. The fluid, elastic structure and the poroelastic material are fully coupled, giving rise to a nonlinear, moving boundary problem with novel energy estimates. We present a modular, loosely coupled scheme where the original problem is split into the fluid sub-problem, elastic structure sub-problem and poroelasticity sub-problem. An energy estimate associated with the stability of the scheme is derived in the case where one of the coupling parameters, β, is equal to zero. We present numerical tests where we investigate the effects of the material properties of the poroelastic medium on the fluid flow. Our findings indicate that the flow patterns highly depend on the storativity of the poroelastic material and cannot be captured by considering fluid-structure interaction only.

  5. Sufficiency and Necessity Assumptions in Causal Structure Induction

    ERIC Educational Resources Information Center

    Mayrhofer, Ralf; Waldmann, Michael R.

    2016-01-01

    Research on human causal induction has shown that people have general prior assumptions about causal strength and about how causes interact with the background. We propose that these prior assumptions about the parameters of causal systems do not only manifest themselves in estimations of causal strength or the selection of causes but also when…

  6. The nuclear Thomas-Fermi model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, W.D.; Swiatecki, W.J.

    1994-08-01

    The statistical Thomas-Fermi model is applied to a comprehensive survey of macroscopic nuclear properties. The model uses a Seyler-Blanchard effective nucleon-nucleon interaction, generalized by the addition of one momentum-dependent and one density-dependent term. The adjustable parameters of the interaction were fitted to shell-corrected masses of 1654 nuclei, to the diffuseness of the nuclear surface and to the measured depths of the optical model potential. With these parameters nuclear sizes are well reproduced, and only relatively minor deviations between measured and calculated fission barriers of 36 nuclei are found. The model determines the principal bulk and surface properties of nuclear matter and provides estimates for the more subtle, Droplet Model, properties. The predicted energy vs density relation for neutron matter is in striking correspondence with the 1981 theoretical estimate of Friedman and Pandharipande. Other extreme situations to which the model is applied are a study of Sn isotopes from {sup 82}Sn to {sup 170}Sn, and the rupture into a bubble configuration of a nucleus (constrained to spherical symmetry) which takes place when Z{sup 2}/A exceeds about 100.

  7. ECOUL: an interactive computer tool to study hydraulic behavior of swelling and rigid soils

    NASA Astrophysics Data System (ADS)

    Perrier, Edith; Garnier, Patricia; Leclerc, Christian

    2002-11-01

    ECOUL is an interactive, didactic software package which simulates vertical water flow in unsaturated soils. End-users are given an easily-used tool to predict the evolution of the soil water profile, with a large range of possible boundary conditions, through a classical numerical solution scheme for the Richards equation. Soils must be characterized by water retention curves and hydraulic conductivity curves, the form of which can be chosen among different analytical expressions from the literature. When the parameters are unknown, an inverse method is provided to estimate them from available experimental flow data. A significant original feature of the software is to include recent algorithms extending the water flow model to deal with deforming porous media: widespread swelling soils, the volume of which varies as a function of water content, must be described by a third hydraulic characteristic property, the deformation curve. Again, estimation of the parameters by means of inverse procedures and visualization facilities enable exploration, understanding and then prediction of soil hydraulic behavior under various experimental conditions.

  8. The Nuclear Thomas-Fermi Model

    DOE R&D Accomplishments Database

    Myers, W. D.; Swiatecki, W. J.

    1994-08-01

    The statistical Thomas-Fermi model is applied to a comprehensive survey of macroscopic nuclear properties. The model uses a Seyler-Blanchard effective nucleon-nucleon interaction, generalized by the addition of one momentum-dependent and one density-dependent term. The adjustable parameters of the interaction were fitted to shell-corrected masses of 1654 nuclei, to the diffuseness of the nuclear surface and to the measured depths of the optical model potential. With these parameters nuclear sizes are well reproduced, and only relatively minor deviations between measured and calculated fission barriers of 36 nuclei are found. The model determines the principal bulk and surface properties of nuclear matter and provides estimates for the more subtle, Droplet Model, properties. The predicted energy vs density relation for neutron matter is in striking correspondence with the 1981 theoretical estimate of Friedman and Pandharipande. Other extreme situations to which the model is applied are a study of Sn isotopes from {sup 82}Sn to {sup 170}Sn, and the rupture into a bubble configuration of a nucleus (constrained to spherical symmetry) which takes place when Z{sup 2}/A exceeds about 100.

  9. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    NASA Astrophysics Data System (ADS)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  10. Measuring Biomass and Carbon Stock in Resprouting Woody Plants

    PubMed Central

    Matula, Radim; Damborská, Lenka; Nečasová, Monika; Geršl, Milan; Šrámek, Martin

    2015-01-01

    Resprouting multi-stemmed woody plants form an important component of the woody vegetation in many ecosystems, but a clear methodology for reliable measurement of their size and quick, non-destructive estimation of their woody biomass and carbon stock is lacking. Our goal was to find a minimum number of sprouts, i.e., the most easily obtainable, and sprout parameters that should be measured for accurate sprout biomass and carbon stock estimates. Using data for 5 common temperate woody species, we modelled carbon stock and sprout biomass as a function of an increasing number of sprouts in an interaction with different sprout parameters. The mean basal diameter of only two to five of the thickest sprouts and the basal diameter and DBH of the thickest sprouts per stump proved to be accurate estimators for the total sprout biomass of the individual resprouters and the populations of resprouters, respectively. Carbon stock estimates were strongly correlated with biomass estimates, but relative carbon content varied among species. Our study demonstrated that the size of the resprouters can be easily measured, and their biomass and carbon stock estimated; therefore, resprouters can be simply incorporated into studies of woody vegetation. PMID:25719601

  11. An examination of sources of sensitivity of consumer surplus estimates in travel cost models.

    PubMed

    Blaine, Thomas W; Lichtkoppler, Frank R; Bader, Timothy J; Hartman, Travis J; Lucente, Joseph E

    2015-03-15

    We examine sensitivity of estimates of recreation demand using the Travel Cost Method (TCM) to four factors. Three of the four have been routinely and widely discussed in the TCM literature: a) Poisson versus negative binomial regression; b) application of the Englin correction to account for endogenous stratification; c) truncation of the data set to eliminate outliers. A fourth issue we address has not been widely modeled: the potential effect on recreation demand of the interaction between income and travel cost. We provide a straightforward comparison of all four factors, analyzing the impact of each on regression parameters and consumer surplus estimates. Truncation has a modest effect on estimates obtained from the Poisson models but a radical effect on the estimates obtained by way of the negative binomial. Inclusion of an income-travel cost interaction term generally produces a more conservative but not a statistically significantly different estimate of consumer surplus in both Poisson and negative binomial models. It also generates broader confidence intervals. Application of truncation, the Englin correction and the income-travel cost interaction produced the most conservative estimates of consumer surplus and eliminated the statistical difference between the Poisson and the negative binomial. Use of the income-travel cost interaction term reveals that for visitors who face relatively low travel costs, the relationship between income and travel demand is negative, while it is positive for those who face high travel costs. This provides an explanation of the ambiguities in the findings regarding the role of income widely observed in the TCM literature. Our results suggest that policies that reduce access to publicly owned resources inordinately impact local low-income recreationists and are contrary to environmental justice. Copyright © 2014 Elsevier Ltd. All rights reserved.
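    In count-data travel cost models with a semi-log demand curve, consumer surplus per trip is conventionally computed as -1 divided by the fitted travel-cost coefficient; with an income-travel cost interaction, that coefficient varies with income. A sketch with hypothetical coefficient values (not estimates from this study):

```python
# Hypothetical coefficients from a fitted count-data (Poisson or negative
# binomial) travel cost model; with a semi-log demand curve, consumer
# surplus per trip is -1 / (effective travel-cost coefficient).
def cs_per_trip(beta_tc, beta_inter=0.0, income=0.0):
    b = beta_tc + beta_inter * income    # slope including the interaction
    if b >= 0:
        raise ValueError("demand must slope downward in travel cost")
    return -1.0 / b

base = cs_per_trip(-0.05)                      # about 20 per trip
with_inter = cs_per_trip(-0.05, 0.0002, 50.0)  # flatter slope, about 25
```

    This makes concrete how the interaction term shifts surplus estimates and their spread across the income distribution.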

  12. Using Multistate Reweighting to Rapidly and Efficiently Explore Molecular Simulation Parameters Space for Nonbonded Interactions.

    PubMed

    Paliwal, Himanshu; Shirts, Michael R

    2013-11-12

    Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states using simulations performed at only a few sampled states combined with single point energy reevaluations of these samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance to determine which simulation parameters are both appropriate and computationally efficient in general.
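    The single-state special case of this formalism, the exponential reweighting identity that multistate estimators such as MBAR generalize, can be sketched on a toy harmonic system; the potentials below are illustrative, not the paper's molecular systems.

```python
import math
import random

random.seed(1)

# Exponential reweighting from sampled state A to unsampled state B:
# <O>_B = <O exp(-(u_B - u_A))>_A / <exp(-(u_B - u_A))>_A,
# where u(x) are reduced potential energies.
def reweight(samples, u_A, u_B, observable):
    du = [u_B(x) - u_A(x) for x in samples]
    shift = min(du)                        # subtract for numerical stability
    w = [math.exp(-(d - shift)) for d in du]
    num = sum(wi * observable(x) for wi, x in zip(w, samples))
    return num / sum(w)

# State A: unit Gaussian well u_A = x^2/2; state B: same well shifted to 0.2.
samples = [random.gauss(0.0, 1.0) for _ in range(100000)]
u_A = lambda x: 0.5 * x * x
u_B = lambda x: 0.5 * (x - 0.2) ** 2
mean_x_B = reweight(samples, u_A, u_B, lambda x: x)   # should be near 0.2
```

    No new simulation at state B is needed; only the samples from A and energy reevaluations at B, which is the efficiency argument the record makes.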

  13. Estimation of genetic parameters and selection of high-yielding, upright common bean lines with slow seed-coat darkening.

    PubMed

    Alvares, R C; Silva, F C; Melo, L C; Melo, P G S; Pereira, H S

    2016-11-21

    Slow seed coat darkening is desirable in common bean cultivars and genetic parameters are important to define breeding strategies. The aims of this study were to estimate genetic parameters for plant architecture, grain yield, grain size, and seed-coat darkening in common bean; identify any genetic association among these traits; and select lines that associate desirable phenotypes for these traits. Three experiments were set up in the winter 2012 growing season, in Santo Antônio de Goiás and Brasília, Brazil, including 220 lines obtained from four segregating populations and five parents. A triple lattice 15 x 15 experimental design was used. The traits evaluated were plant architecture, grain yield, grain size, and seed-coat darkening. Analyses of variance were carried out and genetic parameters such as heritability, gain expected from selection, and correlations, were estimated. For selection of superior lines, a "weight-free and parameter-free" index was used. The estimates of genetic variance, heritability, and gain expected from selection were high, indicating good possibility for success in selection of the four traits. The genotype x environment interaction was proportionally more important for yield than for the other traits. There was no strong genetic correlation observed among the four traits, which indicates the possibility of selection of superior lines with many traits. Considering simultaneous selection, it was not possible to join high genetic gains for the four traits. Forty-four lines that combined high yield, more upright plant architecture, slow darkening grains, and commercial grade size were selected.

  14. Effects of Genotype by Environment Interaction on Genetic Gain and Genetic Parameter Estimates in Red Tilapia (Oreochromis spp.)

    PubMed Central

    Nguyen, Nguyen H.; Hamzah, Azhar; Thoa, Ngo P.

    2017-01-01

    The extent to which genetic gain achieved from selection programs under strictly controlled environments in the nucleus that can be expressed in commercial production systems is not well-documented in aquaculture species. The main aim of this paper was to assess the effects of genotype by environment interaction on genetic response and genetic parameters for four body traits (harvest weight, standard length, body depth, body width) and survival in Red tilapia (Oreochromis spp.). The growth and survival data were recorded on 19,916 individual fish from a pedigreed population undergoing three generations of selection for increased harvest weight in earthen ponds from 2010 to 2012 at the Aquaculture Extension Center, Department of Fisheries, Jitra in Kedah, Malaysia. The pedigree comprised a total of 224 sires and 262 dams, tracing back to the base population in 2009. A multivariate animal model was used to measure genetic response and estimate variance and covariance components. When the homologous body traits in freshwater pond and cage were treated as genetically distinct traits, the genetic correlations between the two environments were high (0.85–0.90) for harvest weight and square root of harvest weight but the estimates were of lower magnitudes for length, width and depth (0.63–0.79). The heritabilities estimated for the five traits studied differed between pond (0.02 to 0.22) and cage (0.07 to 0.68). The common full-sib effects were large, ranging from 0.23 to 0.59 in pond and 0.11 to 0.31 in cage across all traits. The direct and correlated responses for four body traits were generally greater in pond than in cage environments (0.011–1.561 vs. −0.033–0.567 genetic standard deviation units, respectively). Selection for increased harvest body weight resulted in positive genetic changes in survival rate in both pond and cage culture. 
In conclusion, the reduced selection response and the magnitude of the genetic parameter estimates in the production environment (i.e., cage) relative to those achieved in the nucleus (pond) were a result of the genotype by environment interaction and this effect should be taken into consideration in the future breeding program for Red tilapia. PMID:28659970

  15. Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations

    NASA Astrophysics Data System (ADS)

    Sandhu, Rimple; Poirel, Dominique; Pettit, Chris; Khalil, Mohammad; Sarkar, Abhijit

    2016-07-01

    A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid-structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib-Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.

  16. Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandhu, Rimple; Poirel, Dominique; Pettit, Chris

    2016-07-01

    A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid–structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib–Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.

  17. Quality of traffic flow on urban arterial streets and its relationship with safety.

    PubMed

    Dixit, Vinayak V; Pande, Anurag; Abdel-Aty, Mohamed; Das, Abhishek; Radwan, Essam

    2011-09-01

    The two-fluid model for vehicular traffic flow explains the traffic on arterials as a mix of stopped and running vehicles. It describes the relationship between the vehicles' running speed and the fraction of running vehicles. The two parameters of the model essentially represent 'free flow' travel time and level of interaction among vehicles, and may be used to evaluate urban roadway networks and urban corridors with partially limited access. These parameters are influenced not only by the roadway characteristics but also by behavioral aspects of the driver population, e.g., aggressiveness. Two-fluid models are estimated for eight arterial corridors in Orlando, FL for this study. The parameters of the two-fluid model were used to evaluate corridor level operations and the correlations of these parameters with rates of crashes of different types and severities. Significant correlations were found between two-fluid parameters and rear-end and angle crash rates. The rate of severe crashes was also found to be significantly correlated with the model parameter signifying inter-vehicle interactions. While there is need for further analysis, the findings suggest that the two-fluid model parameters may have potential as surrogate measures for traffic safety on urban arterial streets. Copyright © 2011 Elsevier Ltd. All rights reserved.
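    The two-fluid relation T_r = T_m^(1/(n+1)) * T^(n/(n+1)) (running time per unit distance T_r, total trip time T, free-flow time T_m, interaction parameter n) is linear in logarithms, so the two parameters can be recovered from (T, T_r) observations by ordinary least squares. A sketch on synthetic data, not the Orlando corridor data:

```python
import math

# Two-fluid relation: T_r = T_m**(1/(n+1)) * T**(n/(n+1)). Taking logs
# gives ln T_r = intercept + slope * ln T, with slope = n/(n+1) and
# intercept = ln(T_m)/(n+1).
def fit_two_fluid(T, Tr):
    xs = [math.log(t) for t in T]
    ys = [math.log(t) for t in Tr]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    n = slope / (1.0 - slope)                    # invert slope = n/(n+1)
    T_m = math.exp(intercept / (1.0 - slope))    # invert the intercept
    return T_m, n

# Synthetic observations generated with T_m = 2.0 and n = 1.5
# (exponents 1/(n+1) = 0.4 and n/(n+1) = 0.6).
T_obs = [3.0, 4.0, 6.0, 9.0]
Tr_obs = [2.0 ** 0.4 * t ** 0.6 for t in T_obs]
T_m_hat, n_hat = fit_two_fluid(T_obs, Tr_obs)
```

    A larger fitted n indicates stronger inter-vehicle interaction, the quantity the study correlates with severe crash rates.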

  18. Mathematical Methods for Studying DNA and Protein Interactions

    NASA Astrophysics Data System (ADS)

    LeGresley, Sarah

    Deoxyribonucleic Acid (DNA) damage can lead to health related issues such as developmental disorders, aging, and cancer. It has been estimated that damage rates may be as high as 100,000 per cell per day. Because of the devastating effects that DNA damage can have, DNA repair mechanisms are of great interest yet are not completely understood. To gain a better understanding of possible DNA repair mechanisms, my dissertation focused on mathematical methods for understanding the interactions between DNA and proteins. I developed a damaged DNA model to estimate the probabilities of damaged DNA being located at specific positions. Experiments were then performed that suggested that the damaged DNA may be repositioned. These experimental results were consistent with the model's prediction that damaged DNA has preferred locations. To study how proteins might be moving along the DNA, I studied the use of the uniform motion "n-step" model. The n-step model has been used to determine the kinetic parameters (e.g. rates at which a protein moves along the DNA, how much energy is required to move a protein along a specified amount of DNA, etc.) of proteins moving along the DNA. Monte Carlo methods were used to simulate proteins moving with different types of non-uniform motion (e.g. backward, jumping, etc.) along the DNA. Estimates for the kinetic parameters in the n-step model were found by fitting of the Monte Carlo simulation data. Analysis indicated that non-uniform motion of the protein may lead to over- or underestimation of the kinetic parameters of this n-step model.
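    A toy version of such a Monte Carlo simulation, with hypothetical stepping probabilities: each tick the protein steps forward with probability p_f, backward with probability p_b, and otherwise pauses. Backward stepping lowers the apparent drift, which is one way non-uniform motion can bias kinetic parameters inferred under a uniform-motion assumption.

```python
import random

random.seed(2)

# Simulate many trajectories of a protein stepping along DNA and return
# the mean final position (a proxy for the apparent translocation rate).
def mean_position(p_f, p_b, ticks=1000, trials=1000):
    total = 0
    for _ in range(trials):
        pos = 0
        for _ in range(ticks):
            u = random.random()
            if u < p_f:
                pos += 1          # forward step
            elif u < p_f + p_b:
                pos -= 1          # backward step; otherwise pause
        total += pos
    return total / trials

uniform = mean_position(p_f=0.5, p_b=0.0)   # drift near 0.5 per tick
jittery = mean_position(p_f=0.5, p_b=0.2)   # apparent drift near 0.3
```

    Fitting a uniform-motion model to the jittery trajectories would report a slower stepping rate than the true forward rate, illustrating the over/underestimation noted above.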

  19. Application of experimental design in geothermal resources assessment of Ciwidey-Patuha, West Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Ashat, Ali; Pratama, Heru Berian

    2017-12-01

    Successful assessment of the Ciwidey-Patuha geothermal field size required integrated analysis of data from all aspects to determine the optimum capacity to be installed. Resource assessment involves significant uncertainty in subsurface information and multiple development scenarios for the field. Therefore, this paper applies an experimental design approach to geothermal numerical simulation of Ciwidey-Patuha to generate a probabilistic resource assessment. This process assesses the impact of the evaluated parameters on resources and the interactions between these parameters. The methodology successfully estimated the maximum resources with a polynomial function covering the entire range of possible values of the important reservoir parameters.

  20. Probing the tides in interacting galaxy pairs

    NASA Technical Reports Server (NTRS)

    Borne, Kirk D.

    1990-01-01

    Detailed spectroscopic and imaging observations of colliding elliptical galaxies revealed unmistakable diagnostic signatures of the tidal interactions. It is possible to compare both the distorted luminosity distributions and the disturbed internal rotation profiles with numerical simulations in order to model the strength of the tidal gravitational field acting within a given pair of galaxies. Using the best-fit numerical model, one can then measure directly the mass of a specific interacting binary system. This technique applies to individual pairs and therefore complements the classical methods of measuring the masses of galaxy pairs in well-defined statistical samples. The 'personalized' modeling of galaxy pairs also permits the derivation of each binary's orbit, spatial orientation, and interaction timescale. Similarly, one can probe the tides in less-detailed observations of disturbed galaxies in order to estimate some of the physical parameters for larger samples of interacting galaxy pairs. These parameters are useful inputs to the more universal problems of (1) the galaxy merger rate, (2) the strength and duration of the driving forces behind tidally stimulated phenomena (e.g., starbursts and perhaps quasi-stellar objects), and (3) the identification of long-lived signatures of interaction/merger events.

  1. Relation between quantum probe and entanglement in n-qubit systems within Markovian and non-Markovian environments

    NASA Astrophysics Data System (ADS)

    Rangani Jahromi, Hossein

    2017-08-01

    We address in detail the process of parameter estimation for an n-qubit system dissipating into a cavity in which the qubits are coupled to the single-mode cavity field via coupling constant g which should be estimated. In addition, the cavity field interacts with an external field considered as a set of continuum harmonic oscillators. We analyse the behaviour of the quantum Fisher information (QFI) for both weak and strong coupling regimes. In particular, we show that in strong coupling regime, the memory effects are dominant, leading to an oscillatory variation in the dynamics of the QFI and consequently information flowing from the environment to the quantum system. We show that when the number of the qubits or the coupling strength rises, the oscillations, signs of non-Markovian evolution of the QFI, increase. This indicates that in the strong-coupling regime, increasing the size of the system or the coupling strength remarkably enhances the reversed flow of information. Moreover, we find that it is possible to retard the QFI loss during the time evolution and therefore enhance the estimation of the parameter using a cavity with a larger decay rate factor. Furthermore, analysing the dynamics of the QFI and negativity of the probe state, we reveal a close relationship between the entanglement of probes and their capability for estimating the parameter. It is shown that in order to perform a better estimation of the parameter, we should avoid measuring when the entanglement between the probes is maximized.
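    For a pure-state family the QFI discussed above can be checked numerically from the standard fidelity expansion F(theta, theta + d) ≈ 1 - QFI·d²/4. A toy single-qubit example (not the record's dissipative n-qubit probe):

```python
import math

# Pure single-qubit family |psi(theta)> = cos(theta)|0> + sin(theta)|1>.
def state(theta):
    return [math.cos(theta), math.sin(theta)]

def fidelity(a, b):
    # |<a|b>|^2 for real amplitude vectors.
    return abs(sum(x * y for x, y in zip(a, b))) ** 2

def qfi(theta, d=1e-4):
    # Invert the Bures expansion F ~ 1 - QFI * d^2 / 4 numerically.
    return 4.0 * (1.0 - fidelity(state(theta), state(theta + d))) / d ** 2

# For this family the overlap is cos(d), so the QFI is exactly 4 for all theta.
```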

  2. Dynamics of cellular level function and regulation derived from murine expression array data.

    PubMed

    de Bivort, Benjamin; Huang, Sui; Bar-Yam, Yaneer

    2004-12-21

    A major open question of systems biology is how genetic and molecular components interact to create phenotypes at the cellular level. Although much recent effort has been dedicated to inferring effective regulatory influences within small networks of genes, the power of microarray bioinformatics has yet to be used to determine functional influences at the cellular level. In all cases of data-driven parameter estimation, the number of model parameters estimable from a set of data is strictly limited by the size of that set. Rather than infer parameters describing the detailed interactions of just a few genes, we chose a larger-scale investigation so that the cumulative effects of all gene interactions could be analyzed to identify the dynamics of cellular-level function. By aggregating genes into large groups with related behaviors (megamodules), we were able to determine the effective aggregate regulatory influences among 12 major gene groups in murine B lymphocytes over a variety of time steps. Intriguing observations about the behavior of cells at this high level of abstraction include: (i) a medium-term critical global transcriptional dependence on ATP-generating genes in the mitochondria, (ii) a longer-term dependence on glycolytic genes, (iii) the dual role of chromatin-reorganizing genes in transcriptional activation and repression, (iv) homeostasis-favoring influences, (v) the indication that, as a group, G protein-mediated signals are not concentration-dependent in their influence on target gene expression, and (vi) short-term-activating/long-term-repressing behavior of the cell-cycle system that reflects its oscillatory behavior.

  3. Structure and Electronic Properties of Nano-complex CCl4…Cr(AcacCl)3 on Evidence Derived from Vibrational Spectroscopy

    NASA Astrophysics Data System (ADS)

    Slabzhennikov, S. N.; Kuarton, L. A.; Ryabchenko, O. B.

    To characterize the influence of intermolecular interaction on the IR spectra of the interacting species, the process CCl4 + Cr(AcacCl)3 → CCl4…Cr(AcacCl)3 was investigated by the Hartree-Fock-Roothaan method in the MIDI basis set with p- and d-polarization functions. The effect of the intermolecular interaction on the geometrical parameters, on the electron density function both between and within the interacting particles, and on the frequencies and intensities of the normal modes was estimated. The chemical bonds whose characteristics shift most significantly upon formation of the nano-complex CCl4…Cr(AcacCl)3 are noted.

  4. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
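    The Tikhonov-regularized minimum-norm inverse at the heart of this study has a compact closed form, J = L^T (L L^T + lambda·I)^{-1} M. A toy sketch follows; the random lead field, source index, and noise level are illustrative assumptions, not the paper's 21,600-configuration simulation:

```python
import numpy as np

# Hedged toy sketch of the Tikhonov-regularized minimum-norm inverse,
# J = L^T (L L^T + lambda*I)^(-1) M; the lead field and sources are random
# illustrations, not the paper's simulation setup.
rng = np.random.default_rng(0)
n_sensors, n_sources, n_times = 32, 100, 200
L = rng.standard_normal((n_sensors, n_sources))      # toy lead-field matrix
J = np.zeros((n_sources, n_times))
J[3] = np.sin(np.linspace(0.0, 20.0, n_times))       # one active source
M = L @ J + 0.1 * rng.standard_normal((n_sensors, n_times))

def mne_inverse(M, L, lam):
    """Minimum-norm estimate with Tikhonov regularization parameter lam."""
    G = L @ L.T
    return L.T @ np.linalg.solve(G + lam * np.eye(G.shape[0]), M)

J_hat = mne_inverse(M, L, lam=1.0)
```

    The study's point is that the `lam` passed here should differ by task: a larger value when the goal is source power mapping, a much smaller one when the goal is source-level coherence.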

  5. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179

  6. Hyperfine Sublevel Correlation (HYSCORE) Spectra for Paramagnetic Centers with Nuclear Spin I = 1 Having Isotropic Hyperfine Interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maryasov, Alexander G.; Bowman, Michael K.

    2004-07-08

    It is shown that HYSCORE spectra of paramagnetic centers having nuclei of spin I=1 with isotropic hfi and arbitrary NQI consist of ridges having zero width. A parametric presentation of these ridges is found which shows the range of possible frequencies in the HYSCORE spectrum and aids in spectral assignments and rapid estimation of spin Hamiltonian parameters. An alternative approach for the spectral density calculation is presented that is based on spectral decomposition of the Hamiltonian. Only the eigenvalues of the Hamiltonian are needed in this approach. An atlas of HYSCORE spectra is given in the Supporting Information. This approach is applied to the estimation of the spin Hamiltonian parameters of the oxovanadium-EDTA complex.

  7. On the ab initio evaluation of Hubbard parameters. II. The κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal

    NASA Astrophysics Data System (ADS)

    Fortunelli, Alessandro; Painelli, Anna

    1997-05-01

    A previously proposed approach for the ab initio evaluation of Hubbard parameters is applied to BEDT-TTF dimers. The dimers are positioned according to four geometries taken as the first neighbors from the experimental data on the κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal. RHF-SCF, CAS-SCF and frozen-orbital calculations using the 6-31G** basis set are performed with different values of the total charge, allowing us to derive all the relevant parameters. It is found that the electronic structure of the BEDT-TTF planes is adequately described by the standard Extended Hubbard Model, with the off-diagonal electron-electron interaction terms (X and W) of negligible size. The derived parameters are in good agreement with available experimental data. Comparison with previous theoretical estimates shows that the t values compare well with those obtained from Extended Hückel Theory (whereas the minimal basis set estimates are completely unreliable). On the other hand, the Uaeff values exhibit an appreciable dependence on the chemical environment.
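    For reference, the standard Extended Hubbard Model that these parameters map onto can be written in the textbook form below (a hedged sketch; X and W denote the bond-charge and exchange-type off-diagonal electron-electron terms that the paper finds negligible, and are omitted here):

```latex
H = -t \sum_{\langle ij \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \text{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
    + V \sum_{\langle ij \rangle} n_{i} n_{j}
```

    Here t is the inter-dimer hopping, U the on-site repulsion, and V the nearest-neighbor repulsion, all of which the ab initio calculations estimate directly.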

  8. Cloud, Aerosol, and Volcanic Ash Retrievals Using ATSR and SLSTR with ORAC

    NASA Astrophysics Data System (ADS)

    McGarragh, Gregory; Poulsen, Caroline; Povey, Adam; Thomas, Gareth; Christensen, Matt; Sus, Oliver; Schlundt, Cornelia; Stapelberg, Stefan; Stengel, Martin; Grainger, Don

    2015-12-01

    The Optimal Retrieval of Aerosol and Cloud (ORAC) is a generalized optimal estimation system that retrieves cloud, aerosol and volcanic ash parameters using satellite imager measurements in the visible to infrared. Use of the same algorithm for different sensors and parameters leads to consistency that facilitates inter-comparison and interaction studies. ORAC currently supports ATSR, AVHRR, MODIS and SEVIRI. In this proceeding we discuss the ORAC retrieval algorithm applied to ATSR data including the retrieval methodology, the forward model, uncertainty characterization and discrimination/classification techniques. Application of ORAC to SLSTR data is discussed including the additional features that SLSTR provides relative to the ATSR heritage. The ORAC level 2 and level 3 results are discussed and an application of level 3 results to the study of cloud/aerosol interactions is presented.

  9. Drug-drug interaction predictions with PBPK models and optimal multiresponse sampling time designs: application to midazolam and a phase I compound. Part 2: clinical trial results.

    PubMed

    Chenel, Marylore; Bouzom, François; Cazade, Fanny; Ogungbenro, Kayode; Aarons, Leon; Mentré, France

    2008-12-01

    To compare results of population PK analyses obtained with a full empirical design (FD) and an optimal sparse design (MD) in a Drug-Drug Interaction (DDI) study aiming to evaluate the potential CYP3A4 inhibitory effect of a drug in development, SX, on a reference substrate, midazolam (MDZ). A secondary aim was to evaluate the interaction of SX on MDZ in the in vivo study. To compare designs, real data were analysed by population PK modelling using either FD or MD with NONMEM FOCEI for SX and with NONMEM FOCEI and MONOLIX SAEM for MDZ. When applicable, a Wald test was performed to compare model parameter estimates, such as apparent clearance (CL/F), across designs. To conclude on the potential interaction of SX on MDZ PK, a Student paired test was applied to compare the individual PK parameters (i.e. log(AUC) and log(C(max))) obtained either by a non-compartmental approach (NCA) using FD or from empirical Bayes estimates (EBE) obtained after fitting the model separately on each treatment group using either FD or MD. For SX, whatever the design, CL/F was well estimated and no statistical differences were found between CL/F estimated values obtained with FD (CL/F = 8.2 l/h) and MD (CL/F = 8.2 l/h). For MDZ, only MONOLIX was able to estimate CL/F and to provide its standard error of estimation with MD. With MONOLIX, whatever the design and the administration setting, MDZ CL/F was well estimated and there were no statistical differences between CL/F estimated values obtained with FD (72 l/h and 40 l/h for MDZ alone and for MDZ with SX, respectively) and MD (77 l/h and 45 l/h for MDZ alone and for MDZ with SX, respectively). Whatever the approach, NCA or population PK modelling, and for the latter approach, whatever the design, MD or FD, comparison tests showed that there was a statistical difference (P < 0.0001) between individual MDZ log(AUC) obtained after MDZ administration alone and co-administered with SX. 
Regarding C(max), there was a statistical difference (P < 0.05) between individual MDZ log(C(max)) obtained under the 2 administration settings in all cases, except with the sparse design with MONOLIX. However, the effect on C(max) was small. Finally, SX was shown to be a moderate CYP3A4 inhibitor, which at therapeutic doses increased MDZ exposure by a factor of 2 on average and left C(max) almost unaffected. The optimal sparse design enabled estimation of the CL/F of a CYP3A4 substrate and inhibitor when co-administered together, and showed the interaction, leading to the same conclusion as the full empirical design.
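    The paired comparison on log-transformed exposure described above can be sketched as follows. The subject values are synthetic, with an assumed 2-fold interaction effect; this is not the trial data:

```python
import numpy as np
from scipy import stats

# Hedged sketch of the paired comparison on log-transformed exposure; the
# subject values are synthetic, with an assumed 2-fold interaction effect.
rng = np.random.default_rng(1)
n = 12
log_auc_alone = rng.normal(np.log(100.0), 0.3, n)                       # MDZ alone
log_auc_with_sx = log_auc_alone + np.log(2.0) + rng.normal(0.0, 0.1, n) # MDZ + inhibitor

t_stat, p_value = stats.ttest_rel(log_auc_with_sx, log_auc_alone)
gmr = np.exp(np.mean(log_auc_with_sx - log_auc_alone))   # geometric mean ratio of AUCs
```

    Testing on the log scale makes the paired differences correspond to a geometric mean ratio, which is the conventional exposure metric in DDI assessment.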

  10. Drug-drug interaction predictions with PBPK models and optimal multiresponse sampling time designs: application to midazolam and a phase I compound. Part 2: clinical trial results

    PubMed Central

    Chenel, Marylore; Bouzom, François; Cazade, Fanny; Ogungbenro, Kayode; Aarons, Leon; Mentré, France

    2008-01-01

    Purpose To compare results of population PK analyses obtained with a full empirical design (FD) and an optimal sparse design (MD) in a Drug-Drug Interaction (DDI) study aiming to evaluate the potential CYP3A4 inhibitory effect of a drug in development, SX, on a reference substrate, midazolam (MDZ). Secondary aim was to evaluate the interaction of SX on MDZ in the in vivo study. Methods To compare designs, real data were analysed by population PK modelling using either FD or MD with NONMEM FOCEI for SX and with NONMEM FOCEI and MONOLIX SAEM for MDZ. When applicable a Wald’s test was performed to compare model parameter estimates, such as apparent clearance (CL/F), across designs. To conclude on the potential interaction of SX on MDZ PK, a Student paired test was applied to compare the individual PK parameters (i.e. log(AUC) and log(Cmax)) obtained either by a non-compartmental approach (NCA) using FD or from empirical Bayes estimates (EBE) obtained after fitting the model separately on each treatment group using either FD or MD. Results For SX, whatever the design, CL/F was well estimated and no statistical differences were found between CL/F estimated values obtained with FD (CL/F = 8.2 L/h) and MD (CL/F = 8.2 L/h). For MDZ, only MONOLIX was able to estimate CL/F and to provide its standard error of estimation with MD. With MONOLIX, whatever the design and the administration setting, MDZ CL/F was well estimated and there were no statistical differences between CL/F estimated values obtained with FD (72 L/h and 40 L/h for MDZ alone and for MDZ with SX, respectively) and MD (77 L/h and 45 L/h for MDZ alone and for MDZ with SX, respectively). Whatever the approach, NCA or population PK modelling, and for the latter approach, whatever the design, MD or FD, comparison tests showed that there was a statistical difference (p<0.0001) between individual MDZ log(AUC) obtained after MDZ administration alone and co-administered with SX. 
Regarding Cmax, there was a statistical difference (p<0.05) between individual MDZ log(Cmax) obtained under the 2 administration settings in all cases, except with the sparse design with MONOLIX. However, the effect on Cmax was small. Finally, SX was shown to be a moderate CYP3A4 inhibitor, which at therapeutic doses increased MDZ exposure by a factor of 2 on average and left Cmax almost unaffected. Conclusion The optimal sparse design enabled the estimation of CL/F of a CYP3A4 substrate and inhibitor when co-administered together and showed the interaction, leading to the same conclusion as the full empirical design. PMID:19130187

  11. Causal transfer function analysis to describe closed loop interactions between cardiovascular and cardiorespiratory variability signals.

    PubMed

    Faes, L; Porta, A; Cucino, R; Cerutti, S; Antolini, R; Nollo, G

    2004-06-01

    Although the concept of a transfer function is intrinsically related to an input-output relationship, the traditional and widely used estimation method merges both feedback and feedforward interactions between the two analyzed signals. This limitation may endanger the reliability of transfer function analysis in biological systems characterized by closed loop interactions. In this study, a method for estimating the transfer function between closed loop interacting signals was proposed and validated in the field of cardiovascular and cardiorespiratory variability. The two analyzed signals x and y were described by a bivariate autoregressive model, and the causal transfer function from x to y was estimated after imposing causality by setting to zero the model coefficients representative of the reverse effects from y to x. The method was tested in simulations reproducing linear open and closed loop interactions, showing a better adherence of the causal transfer function to the theoretical curves with respect to the traditional approach in the presence of non-negligible reverse effects. It was then applied to ten healthy young subjects to characterize the transfer functions from respiration to heart period (RR interval) and to systolic arterial pressure (SAP), and from SAP to RR interval. In the first two cases, the causal and non-causal transfer function estimates were comparable, indicating that respiration, acting as an exogenous signal, sets an open loop relationship upon SAP and RR interval. On the contrary, causal and traditional transfer functions from SAP to RR were significantly different, suggesting the presence of a considerable influence in the opposite causal direction. Thus, the proposed causal approach seems appropriate for the estimation of parameters, such as the gain and the phase lag from SAP to RR interval, which have large clinical and physiological relevance.
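    A minimal sketch of the causal transfer function idea: fit (or specify) bivariate AR coefficient matrices, zero the reverse (y to x) entries, and form Gamma_yx(f) = B_yx(f) / (1 - B_yy(f)). The coefficient layout and the toy one-lag system are illustrative assumptions, not the paper's estimated cardiovascular models:

```python
import numpy as np

# Hedged sketch of the causal transfer function from a bivariate AR model:
# reverse (y -> x) coefficients are zeroed before forming
# Gamma_yx(f) = B_yx(f) / (1 - B_yy(f)). Coefficient layout is an assumption.
def causal_tf(A, f, fs=1.0):
    """A has shape (p, 2, 2); A[k] multiplies the lag-(k+1) vector [x, y]."""
    A = A.copy()
    A[:, 0, 1] = 0.0                                   # impose causality: no y -> x
    p = A.shape[0]
    z = np.exp(-2j * np.pi * f / fs * np.arange(1, p + 1))
    B_yx = np.sum(A[:, 1, 0] * z)
    B_yy = np.sum(A[:, 1, 1] * z)
    return B_yx / (1.0 - B_yy)

# toy open-loop system: y driven by x with one-sample delay and gain 0.8
A = np.zeros((1, 2, 2))
A[0, 1, 0] = 0.8
H = causal_tf(A, f=0.1)
```

    The complex value H then yields the gain (|H|) and phase lag (arg H) in the imposed causal direction only, which is the study's central point.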

  12. Interacting Effects Induced by Two Neighboring Pits Considering Relative Position Parameters and Pit Depth

    PubMed Central

    Huang, Yongfang; Gang, Tieqiang; Chen, Lijie

    2017-01-01

    For pre-corroded aluminum alloy 7075-T6, the interacting effects of two neighboring pits on the stress concentration are comprehensively analyzed by considering various relative position parameters (inclination angle θ and dimensionless spacing parameter λ) and pit depth (d) with the finite element method. According to the severity of the stress concentration, the critical corrosion regions, bearing high susceptibility to fatigue damage, are determined for intersecting and adjacent pits, respectively. A straightforward approach is accordingly proposed to conservatively estimate the combined stress concentration factor induced by two neighboring pits, and a concrete application example is presented. It is found that for intersecting pits, the normalized stress concentration factor Ktnor increases with the increase of θ and λ and always reaches its maximum at θ = 90°, yet for adjacent pits, Ktnor decreases with the increase of λ and the maximum value appears at a slight asymmetric location. The simulations reveal that Ktnor follows a linear and an exponential relationship with the dimensionless depth parameter Rd for intersecting and adjacent cases, respectively. PMID:28772758

  13. The Finite-Size Scaling Relation for the Order-Parameter Probability Distribution of the Six-Dimensional Ising Model

    NASA Astrophysics Data System (ADS)

    Merdan, Ziya; Karakuş, Özlem

    2016-11-01

    The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz cellular automaton using five-bit demons near the infinite-lattice critical temperature with linear dimensions L = 4, 6, 8, 10. The order-parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.

  14. Exclusion Bounds for Extended Anyons

    NASA Astrophysics Data System (ADS)

    Larson, Simon; Lundholm, Douglas

    2018-01-01

    We introduce a rigorous approach to the many-body spectral theory of extended anyons, that is quantum particles confined to two dimensions that interact via attached magnetic fluxes of finite extent. Our main results are many-body magnetic Hardy inequalities and local exclusion principles for these particles, leading to estimates for the ground-state energy of the anyon gas over the full range of the parameters. This brings out further non-trivial aspects in the dependence on the anyonic statistics parameter, and also gives improvements in the ideal (non-extended) case.

  15. Multiple Parton Interactions in pp̄ Collisions in the D0 Experiment at the Tevatron Collider (in Russian)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golovanov, Georgy

    The thesis is devoted to the study of processes with multiple parton interactions (MPI) in pp̄ collisions collected by the D0 detector at the Fermilab Tevatron collider at sqrt(s) = 1.96 TeV. The study includes measurements of the MPI event fraction and the effective cross section, a process-independent parameter related to the effective interaction region inside the nucleon. The measurements are done using events with a photon and three hadronic jets in the final state. The measured effective cross section is used to estimate the background from MPI for WH production at the Tevatron energy.
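    The effective cross section mentioned here enters the standard double-parton-scattering ansatz (textbook form; a hedged reference formula, not the thesis's full extraction procedure), with m = 1 for indistinguishable and m = 2 for distinguishable subprocesses A and B:

```latex
\sigma^{AB}_{\mathrm{DP}} \;=\; \frac{m}{2} \, \frac{\sigma_{A} \, \sigma_{B}}{\sigma_{\mathrm{eff}}},
\qquad m = \begin{cases} 1 & A = B \\ 2 & A \neq B \end{cases}
```

    A smaller sigma_eff therefore means a larger double-parton rate for given single-scattering cross sections, which is why it controls the MPI background to WH production.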

  16. Modelling of interaction of the large disrupted meteoroid with the Earth atmosphere

    NASA Astrophysics Data System (ADS)

    Brykina, Irina G.

    2018-05-01

    A model of atmospheric fragmentation of large meteoroids into a cloud of fragments is proposed and compared with similar models used in the literature. An approximate analytical solution of the meteor physics equations is obtained for the mass loss of the disrupted meteoroid, the energy deposition and the light curve normalized to the maximum brightness. This solution is applied to modelling the interaction of the Chelyabinsk meteoroid with the atmosphere. The influence of uncertainty in the initial parameters of the meteoroid on the characteristics of its interaction with the atmosphere is estimated, and the analytical solution is compared with the observational data.
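    The classical single-body meteor physics equations that such models extend can be integrated directly. This is a hedged sketch with illustrative, loosely Chelyabinsk-like parameter values; it does not implement the paper's fragment-cloud model:

```python
import numpy as np

# Hedged sketch: classical single-body meteor physics equations (drag + ablation),
# not the paper's fragment-cloud model. All parameter values are illustrative.
Cd, Ch, Q = 1.0, 0.1, 8e6          # drag coeff., heat-transfer coeff., heat of ablation [J/kg]
rho0, Hs = 1.225, 7160.0           # sea-level air density [kg/m^3], scale height [m]
rho_m = 3300.0                     # meteoroid bulk density [kg/m^3]
gamma = np.radians(18.0)           # entry angle below horizontal

def midsection_area(m):
    """Cross-sectional area of a sphere of mass m and density rho_m."""
    r = (3.0 * m / (4.0 * np.pi * rho_m)) ** (1.0 / 3.0)
    return np.pi * r * r

m, v, h = 1.2e7, 1.9e4, 9.5e4      # initial mass [kg], speed [m/s], height [m]
dt = 0.01
while h > 2.0e4 and m > 1.0:
    rho_a = rho0 * np.exp(-h / Hs)                          # exponential atmosphere
    S = midsection_area(m)
    v_new = v - (Cd * rho_a * S * v**2) / (2.0 * m) * dt    # deceleration by drag
    m_new = m - (Ch * rho_a * S * v**3) / (2.0 * Q) * dt    # mass loss by ablation
    h -= v * np.sin(gamma) * dt                             # straight-line descent
    v, m = v_new, m_new
```

    The energy deposition per unit height, which shapes the light curve, follows from the kinetic energy lost per step of this integration.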

  17. Refined potentials for rare gas atom adsorption on rare gas and alkali-halide surfaces

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Heinbockel, J. H.; Outlaw, R. A.

    1985-01-01

    The use of interatomic potential models of the physical interaction to estimate the long-range attractive potential for rare gases and ions is discussed. The long-range attractive force is calculated in terms of the atomic dispersion properties. A data base of atomic dispersion parameters for rare gas atoms, alkali ions, and halogen ions is applied to the study of the repulsive core; the procedure for evaluating the repulsive core of ion interactions is described. The interaction of rare gas atoms with ideal rare gas solid and alkali-halide surfaces is analyzed, and zero-coverage adsorption potentials are derived.

  18. Discovering graphical Granger causality using the truncating lasso penalty

    PubMed Central

    Shojaie, Ali; Michailidis, George

    2010-01-01

    Motivation: Components of biological systems interact with each other in order to carry out vital cell functions. Such information can be used to improve estimation and inference, and to obtain better insights into the underlying cellular mechanisms. Discovering regulatory interactions among genes is therefore an important problem in systems biology. Whole-genome expression data over time provides an opportunity to determine how the expression levels of genes are affected by changes in transcription levels of other genes, and can therefore be used to discover regulatory interactions among genes. Results: In this article, we propose a novel penalization method, called truncating lasso, for estimation of causal relationships from time-course gene expression data. The proposed penalty can correctly determine the order of the underlying time series, and improves the performance of the lasso-type estimators. Moreover, the resulting estimate provides information on the time lag between activation of transcription factors and their effects on regulated genes. We provide an efficient algorithm for estimation of model parameters, and show that the proposed method can consistently discover causal relationships in the large p, small n setting. The performance of the proposed model is evaluated favorably in simulated, as well as real, data examples. Availability: The proposed truncating lasso method is implemented in the R-package ‘grangerTlasso’ and is freely available at http://www.stat.lsa.umich.edu/∼shojaie/ Contact: shojaie@umich.edu PMID:20823316
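    The lasso-based Granger idea (without the truncating penalty, which is the paper's contribution) can be sketched with an ordinary lasso on a lagged design matrix. The toy two-series VAR system and the alpha value are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hedged sketch of lasso-based Granger selection on a lagged design matrix;
# the paper's truncating penalty is NOT implemented here. Toy system in which
# series 0 Granger-causes series 1 but not vice versa.
rng = np.random.default_rng(2)
T = 500
x = np.zeros((T, 2))
for t in range(1, T):
    x[t, 0] = 0.5 * x[t - 1, 0] + rng.normal()
    x[t, 1] = 0.8 * x[t - 1, 0] + 0.2 * x[t - 1, 1] + rng.normal()

p = 2                                                        # maximum lag considered
X = np.hstack([x[p - k - 1 : T - k - 1] for k in range(p)])  # [lag-1 block | lag-2 block]
Y = x[p:]
coefs = np.array([Lasso(alpha=0.05).fit(X, Y[:, j]).coef_ for j in range(2)])
# coefs[j, i] is the estimated lag-1 influence of series i on series j
```

    A nonzero coefficient in `coefs[1, 0]` but not `coefs[0, 1]` recovers the one-directional causal structure; the truncating penalty additionally adapts which lags survive, estimating the true model order.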

  19. Polarimetric scattering model for estimation of above ground biomass of multilayer vegetation using ALOS-PALSAR quad-pol data

    NASA Astrophysics Data System (ADS)

    Sai Bharadwaj, P.; Kumar, Shashi; Kushwaha, S. P. S.; Bijker, Wietske

    Forests are important biomes covering a major part of the vegetation on the Earth, and as such account for seventy percent of the carbon present in living beings. A forest's above ground biomass (AGB) is considered an important parameter for the estimation of global carbon content. In the present study, quad-pol ALOS-PALSAR data were used for the estimation of AGB for the Dudhwa National Park, India. For this purpose, polarimetric decomposition components and an Extended Water Cloud Model (EWCM) were used. The PolSAR data orientation angle shifts were compensated for before the polarimetric decomposition. The scattering components obtained from the polarimetric decomposition were used in the Water Cloud Model (WCM). The WCM was extended for higher-order interactions such as double bounce scattering. The parameters of the EWCM were retrieved using the field measurements and the decomposition components. Finally, the relationship between the estimated AGB and measured AGB was assessed. The coefficient of determination (R2) and root mean square error (RMSE) were 0.4341 and 119 t/ha, respectively.
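    The (non-extended) Water Cloud Model that the EWCM builds on can be sketched as a forward function. This uses the standard Attema-Ulaby form; the paper's extension adds higher-order terms such as double bounce, and the constants below are illustrative:

```python
import numpy as np

# Hedged sketch of the standard (non-extended) Water Cloud Model in the
# Attema-Ulaby form; the paper's EWCM adds higher-order terms such as
# double-bounce scattering. A, B and sigma0_soil are illustrative constants.
def wcm_sigma0(V, theta, A, B, sigma0_soil):
    """Backscatter: sigma0 = A*V*cos(theta)*(1 - T2) + T2*sigma0_soil."""
    ct = np.cos(theta)
    T2 = np.exp(-2.0 * B * V / ct)        # two-way canopy attenuation
    return A * V * ct * (1.0 - T2) + T2 * sigma0_soil
```

    With V = 0 (no canopy) the model reduces to the bare-soil return, and as V grows the attenuated soil term gives way to the volume term; AGB retrieval inverts this relation after fitting A and B to field measurements.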

  20. Toward quantitative estimation of material properties with dynamic mode atomic force microscopy: a comparative study.

    PubMed

    Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti

    2017-08-11

    In this article, we explore methods that enable estimation of material properties with the dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising of a flexure probe interacting with the sample, as an equivalent cantilever system and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement, however, slower compared to the recursive technique. The parameters of the equivalent system are utilized to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided toward the quantitative estimation of material properties.

  1. Multiple-hit parameter estimation in monolithic detectors.

    PubMed

    Hunter, William C J; Barrett, Harrison H; Lewellen, Tom K; Miyaoka, Robert S

    2013-02-01

    We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%-12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied.

  2. Using a generalized linear mixed model approach to explore the role of age, motor proficiency, and cognitive styles in children's reach estimation accuracy.

    PubMed

    Caçola, Priscila M; Pant, Mohan D

    2014-10-01

    The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the generalized linear mixed model (GLMM) analysis indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable for exploring age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.

  3. An interactive program for pharmacokinetic modeling.

    PubMed

    Lu, D R; Mao, F

    1993-05-01

    A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C computer language based on the high-level user-interface Macintosh operating system. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method based on the chi-square criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
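    The nonlinear fitting step described (Levenberg-Marquardt least squares after initial estimates from stripping) can be sketched with a hypothetical one-compartment oral-absorption model. The model, synthetic data, and starting values are illustrative assumptions, not PharmK's implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch of Levenberg-Marquardt fitting of a hypothetical
# one-compartment oral model: C(t) = a * (exp(-ke*t) - exp(-ka*t)).
def one_cpt(t, ka, ke, a):
    return a * (np.exp(-ke * t) - np.exp(-ka * t))

rng = np.random.default_rng(3)
t = np.linspace(0.25, 24.0, 15)                     # sampling times [h]
c_true = one_cpt(t, ka=1.5, ke=0.2, a=10.0)         # simulated concentrations
c_obs = c_true * (1.0 + 0.03 * rng.standard_normal(t.size))   # 3% noise

# method="lm" selects the Levenberg-Marquardt algorithm; p0 plays the role
# of the initial estimates that exponential stripping would supply.
popt, pcov = curve_fit(one_cpt, t, c_obs, p0=[1.0, 0.1, 5.0], method="lm")
ka_hat, ke_hat, a_hat = popt
```

    Note the classic "flip-flop" ambiguity: ka and ke can exchange roles in this model, so which fitted rate is which must be checked against the data.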

  4. Statistical fusion of continuous labels: identification of cardiac landmarks

    NASA Astrophysics Data System (ADS)

    Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L.; Landman, Bennett A.

    2011-03-01

    Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter-one of the key performance indices-is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle with the left ventricle in CINE cardiac data.

  5. Statistical Fusion of Continuous Labels: Identification of Cardiac Landmarks.

    PubMed

    Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L; Landman, Bennett A

    2011-01-01

    Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter (one of the key performance indices) is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle with the left ventricle in CINE cardiac data.

  6. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    PubMed

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values will greatly improve simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating net primary productivity (NPP) of Larix olgensis forest in Wangqing, Jilin Province. First, through contrastive analysis of field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could well simulate the NPP of L. olgensis forest in the sample plot. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact on the simulation result of a single parameter as well as of the interaction between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were new stem carbon to new leaf carbon allocation and leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than that of any other parameter interaction.
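The Morris elementary-effects screening used above can be illustrated with a radial one-at-a-time sketch; the toy response standing in for NPP and the parameter ranges are hypothetical:

```python
import numpy as np

def morris_screening(model, bounds, r=20, delta=0.1, seed=0):
    """Radial one-at-a-time Morris screening: from r random base points,
    perturb each parameter in turn and record the elementary effect.
    mu_star (mean |effect|) ranks overall influence; sigma (std of effects)
    flags interactions and nonlinearity."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, span = bounds[:, 0], bounds[:, 1] - bounds[:, 0]
    k = len(bounds)
    effects = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, k)
        y0 = model(lo + x * span)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta
            effects[i, j] = (model(lo + xp * span) - y0) / delta
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# toy stand-in for an NPP response: strong main effect of p[0],
# a p[0]*p[1] interaction, and a weak purely linear p[2] effect
npp = lambda p: 3.0 * p[0] + p[0] * p[1] + 0.1 * p[2]
mu_star, sigma = morris_screening(npp, [(0.0, 1.0)] * 3)
```

A parameter with large mu_star but small sigma acts nearly linearly; a large sigma (as for p[0] and p[1] here, which interact) is the qualitative signal that EFAST's second-order indices then quantify.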

  7. New best estimates for radionuclide solid-liquid distribution coefficients in soils. Part 2: naturally occurring radionuclides.

    PubMed

    Vandenhove, H; Gil-García, C; Rigol, A; Vidal, M

    2009-09-01

    Predicting the transfer of radionuclides in the environment for normal release, accidental, disposal or remediation scenarios in order to assess exposure requires the availability of a large number of generic parameter values. One of the key parameters in environmental assessment is the solid-liquid distribution coefficient, K(d), which is used to predict radionuclide-soil interaction and subsequent radionuclide transport in the soil column. This article presents a review of K(d) values for uranium, radium, lead, polonium and thorium based on an extensive literature survey, including recent publications. The K(d) estimates are presented per soil group defined by texture and organic matter content (Sand, Loam, Clay and Organic), although the texture class seemed not to significantly affect K(d). Where relevant, other K(d) classification systems are proposed and correlations with soil parameters are highlighted. The K(d) values obtained in this compilation are compared with earlier review data.
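In transport assessments, K(d) typically enters through the retardation factor R = 1 + (rho_b / theta) * Kd, which relates sorbed transport velocity to pore-water velocity. A minimal sketch, with illustrative (not review-derived) bulk density and porosity defaults:

```python
def retardation_factor(kd_l_per_kg, bulk_density_kg_per_l=1.5, porosity=0.4):
    """R = 1 + (rho_b / theta) * Kd: factor by which a sorbing radionuclide
    migrates more slowly through the soil column than the pore water.
    Default bulk density and porosity are illustrative, not review values."""
    return 1.0 + (bulk_density_kg_per_l / porosity) * kd_l_per_kg

# a strongly sorbing nuclide (Kd = 100 L/kg) vs a non-sorbing tracer (Kd = 0)
r_sorbing = retardation_factor(100.0)
r_tracer = retardation_factor(0.0)
```

Because R scales linearly with Kd, the order-of-magnitude spread in compiled Kd values translates directly into an order-of-magnitude spread in predicted travel times.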

  8. A Conceptual Wing Flutter Analysis Tool for Systems Analysis and Parametric Design Study

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    2003-01-01

    An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate flutter instability boundaries of a typical wing when detailed structural and aerodynamic data are not available. Effects of change in key flutter parameters can also be estimated in order to guide the conceptual design. This user-friendly software was developed using MathCad and Matlab codes. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on wing torsion stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch-inertia radius of gyration. These parametric plots were compiled in a Chance-Vought Corporation report from a database of past experiments and wind tunnel test results. An example was presented for conceptual flutter analysis of the outer wing of a Blended-Wing-Body aircraft.

  9. Modeling In Vivo Interactions of Engineered Nanoparticles in the Pulmonary Alveolar Lining Fluid

    PubMed Central

    Mukherjee, Dwaipayan; Porter, Alexandra; Ryan, Mary; Schwander, Stephan; Chung, Kian Fan; Tetley, Teresa; Zhang, Junfeng; Georgopoulos, Panos

    2015-01-01

    Increasing use of engineered nanomaterials (ENMs) in consumer products may result in widespread human inhalation exposures. Due to their high surface area per unit mass, inhaled ENMs interact with multiple components of the pulmonary system, and these interactions affect their ultimate fate in the body. Modeling of ENM transport and clearance in vivo has traditionally treated tissues as well-mixed compartments, without consideration of nanoscale interaction and transformation mechanisms. ENM agglomeration, dissolution and transport, along with adsorption of biomolecules, such as surfactant lipids and proteins, cause irreversible changes to ENM morphology and surface properties. The model presented in this article quantifies ENM transformation and transport at the alveolar air-liquid interface and estimates eventual alveolar cell dosimetry. This formulation brings together established concepts from colloidal and surface science, physics, and biochemistry to provide a stochastic framework capable of capturing essential in vivo processes in the pulmonary alveolar lining layer. The model has been implemented for in vitro solutions with parameters estimated from relevant published in vitro measurements and has been extended here to in vivo systems simulating human inhalation exposures. Applications are presented for four different ENMs, and relevant kinetic rates are estimated, demonstrating an approach for improving human in vivo pulmonary dosimetry. PMID:26240755

  10. Computational tools for multi-linked flexible structures

    NASA Technical Reports Server (NTRS)

    Lee, Gordon K. F.; Brubaker, Thomas A.; Shults, James R.

    1990-01-01

    A software module which designs and tests controllers and filters in Kalman estimator form, based on a polynomial state-space model, is discussed. The user-friendly program employs an interactive graphics approach to simplify the design process. A variety of input methods are provided to test the effectiveness of the estimator. Utilities are provided which address important issues in filter design such as graphical analysis, statistical analysis, and calculation time. The program also provides the user with the ability to save filter parameters, inputs, and outputs for future use.
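A scalar Kalman estimator of the kind such a module would design and test can be sketched in a few lines; the state model and noise variances here are illustrative, not the module's defaults:

```python
import numpy as np

def kalman_1d(z, a=1.0, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Minimal scalar Kalman estimator for x[k+1] = a*x[k] + w (var q),
    z[k] = x[k] + v (var r).  Returns the filtered state estimates."""
    x, p, out = x0, p0, []
    for zk in z:
        x, p = a * x, a * a * p + q          # predict
        k = p / (p + r)                      # Kalman gain
        x = x + k * (zk - x)                 # update with measurement
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

# track a constant level through noisy measurements
rng = np.random.default_rng(0)
z = 5.0 + 0.3 * rng.standard_normal(300)
estimates = kalman_1d(z)
```

Plotting `estimates` against `z` is the kind of graphical check the module's analysis utilities would support: the filtered trace settles near the true level with far less scatter than the raw measurements.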

  11. Dynamic behavior of the interaction between epidemics and cascades on heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Jiang, Lurong; Jin, Xinyu; Xia, Yongxiang; Ouyang, Bo; Wu, Duanpo

    2014-12-01

    Epidemic spreading and cascading failure are two important dynamical processes on complex networks. They have been investigated separately for a long time, but in the real world these two dynamics sometimes interact with each other. In this paper, we explore a model combining the SIR epidemic spreading model with a local load-sharing cascading failure model. In this model there exists a critical value of the tolerance parameter. When the tolerance parameter is smaller than the critical value, the cascading failure cuts off an abundance of paths and locally blocks the spreading of the epidemic; when the tolerance parameter is larger than the critical value, an epidemic with high infection probability spreads out and infects a fraction of the network. A method for estimating the critical value is proposed. In simulations, we verify the effectiveness of this method on uncorrelated configuration model (UCM) scale-free networks.
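The SIR half of the combined model can be illustrated with a discrete-time simulation on a random network; the cascading-failure coupling is omitted, and the graph construction and all rates are illustrative:

```python
import numpy as np

def sir_on_network(adj, beta, gamma, seed_node=0, steps=500, rng_seed=1):
    """Discrete-time SIR on a graph with boolean adjacency matrix adj: each
    step, every infected node infects each susceptible neighbour with
    probability beta and recovers with probability gamma.  Returns the
    final recovered (outbreak) fraction."""
    rng = np.random.default_rng(rng_seed)
    n = adj.shape[0]
    state = np.zeros(n, dtype=int)      # 0 = S, 1 = I, 2 = R
    state[seed_node] = 1
    for _ in range(steps):
        infected = state == 1
        if not infected.any():
            break
        pressure = adj[:, infected].sum(axis=1)   # infected neighbours
        p_inf = 1.0 - (1.0 - beta) ** pressure
        new_inf = (state == 0) & (rng.random(n) < p_inf)
        recover = infected & (rng.random(n) < gamma)
        state[new_inf] = 1
        state[recover] = 2
    return float((state == 2).mean())

# Erdos-Renyi graph with mean degree ~6; high infection probability
rng = np.random.default_rng(0)
n = 300
upper = np.triu(rng.random((n, n)) < 6.0 / n, k=1)
adj = upper | upper.T
outbreak = sir_on_network(adj, beta=1.0, gamma=0.5)
```

In the full model of the paper, node overloads triggered by the load-sharing rule would remove nodes and edges during the run, pruning the transmission paths this sketch leaves intact.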

  12. Impact of alkaline alterations to a Brazilian soil on cesium retention under low temperature conditions.

    PubMed

    Calábria, Jaqueline Alves de Almeida; Cota, Stela Dalva Santos; de Morais, Gustavo Ferrari; Ladeira, Ana Cláudia Queiroz

    2017-11-01

    To be used as backfilling materials in radioactive waste disposal facilities, a natural material must have suitable permeability, mechanical properties and a high sorption capacity for radionuclides. Also important when considering a material as a backfill is the effect of its interaction with the alkaline solution generated by concrete degradation. This solution promotes mineralogical alterations that result in significant changes in the material's key properties, influencing its performance as a safety component of the repository. This paper presents the results of an investigation of the effect of alkaline interaction at low temperature on the cesium retention properties of a local soil considered suitable as a backfill for the Brazilian near surface disposal facility. A sample of the Brazilian soil was mixed for 1, 7, 14 and 28 days with an alkaline solution simulating the pore water leached in the first stage of cement degradation. The experiments were conducted at low temperature (25 °C) to approximate the conditions found in a low- and intermediate-level radioactive waste disposal installation. A non-classical isotherm sorption model was fitted to sorption data obtained from batch experiments, for unaltered and altered samples, providing parameters that allowed us to assess the effect of the interaction on the material's quality as a Cs sorbent. The sorption parameters obtained from the fitted isotherm were then used to estimate the corresponding retardation factor (R). Alkaline interaction significantly modified the soil's sorption properties for Cs. The parameter Q, related to the maximum sorption capacity, as well as the affinity parameter (K) and the retardation coefficients became significantly smaller (about 1000 times for the R coefficient) after pretreatment with the simulated alkaline solutions. Moreover, the increase in n-values, which is related to the energy distribution width and heterogeneity of surface site energies, demonstrated that the adsorbent surface became more homogeneous as a consequence of the alkaline alteration. Together these results suggest that cementitious leachate has a profound effect on Cs retention and should be accounted for when estimating radionuclide retention in radioactive waste disposal systems containing cementitious materials. Copyright © 2017 Elsevier Ltd. All rights reserved.
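A non-classical isotherm with capacity Q, affinity K and heterogeneity exponent n of the kind described can be illustrated with a Sips (Langmuir-Freundlich) form fitted to synthetic batch data; the functional form and all numbers are assumptions for illustration, not the study's actual model or data:

```python
import numpy as np
from scipy.optimize import curve_fit

def sips(c, Q, K, n):
    """Sips (Langmuir-Freundlich) isotherm S = Q*(K*c)**n / (1 + (K*c)**n):
    Q ~ maximum sorption capacity, K ~ affinity, n ~ surface-site energy
    heterogeneity (n -> 1 recovers the homogeneous Langmuir isotherm)."""
    return Q * (K * c) ** n / (1.0 + (K * c) ** n)

c = np.logspace(-3, 2, 30)        # equilibrium concentration (illustrative units)
s = sips(c, 50.0, 2.0, 0.7)       # synthetic batch-sorption data
popt, _ = curve_fit(sips, c, s, p0=[40.0, 1.0, 1.0], bounds=(0.0, np.inf))
Q_fit, K_fit, n_fit = popt
```

An n value rising toward 1 after alteration, as reported in the abstract, corresponds in this parameterization to a narrower site-energy distribution, i.e. a more homogeneous sorbent surface.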

  13. A global food demand model for the assessment of complex human-earth systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    EDMONDS, JAMES A.; LINK, ROBERT; WALDHOFF, STEPHANIE T.

    Demand for agricultural products is an important problem in climate change economics. Food consumption will shape and be shaped by climate change and emissions mitigation policies through interactions with bioenergy and afforestation, two critical issues in meeting international climate goals such as the two-degree target. We develop a model of food demand for staple and nonstaple commodities that evolves with changing incomes and prices. The model addresses a long-standing issue in estimating food demands: the evolution of demand relationships across large changes in income and prices. We discuss the model, some of its properties and limitations. We estimate parameter values using pooled cross-sectional time-series observations and the Metropolis Monte Carlo method, and cross-validate the model by estimating parameters using a subset of the observations and testing its ability to project into the unused observations. Finally, we apply bias correction techniques borrowed from the climate-modeling community and report results.

  14. Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines are described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.

  15. Analysis of the Mechanism of Prolonged Persistence of Drug Interaction between Terbinafine and Amitriptyline or Nortriptyline.

    PubMed

    Mikami, Akiko; Hori, Satoko; Ohtani, Hisakazu; Sawada, Yasufumi

    2017-01-01

    The purpose of the study was to quantitatively estimate and predict drug interactions between terbinafine and tricyclic antidepressants (TCAs), amitriptyline or nortriptyline, based on in vitro studies. Inhibition of TCA-metabolizing activity by terbinafine was investigated using human liver microsomes. Based on the unbound K i values obtained in vitro and reported pharmacokinetic parameters, a pharmacokinetic model of drug interaction was fitted to the reported plasma concentration profiles of TCAs administered concomitantly with terbinafine to obtain the drug-drug interaction parameters. Then, the model was used to predict nortriptyline plasma concentration with concomitant administration of terbinafine and changes of area under the curve (AUC) of nortriptyline after cessation of terbinafine. The CYP2D6 inhibitory potency of terbinafine was unaffected by preincubation, so the inhibition seems to be reversible. Terbinafine competitively inhibited amitriptyline or nortriptyline E-10-hydroxylation, with unbound K i values of 13.7 and 12.4 nM, respectively. Observed plasma concentrations of TCAs administered concomitantly with terbinafine were successfully simulated with the drug interaction model using the in vitro parameters. Model-predicted nortriptyline plasma concentration after concomitant nortriptyline/terbinafine administration for two weeks exceeded the toxic level, and the drug interaction was predicted to be prolonged; the AUC of nortriptyline was predicted to be increased by 2.5-, 2.0- and 1.5-fold at 0, 3 and 6 months after cessation of terbinafine, respectively. The developed model enables us to quantitatively predict the prolonged drug interaction between terbinafine and TCAs. The model should be helpful for clinical management of terbinafine-CYP2D6 substrate drug interactions, which are difficult to predict due to their time-dependency.
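For a reversible competitive inhibitor, a common static approximation of the victim-drug AUC fold-change is AUCR = 1 / (fm/(1 + I_u/Ki) + (1 - fm)), where fm is the fraction of clearance through the inhibited enzyme. A minimal sketch using the abstract's unbound Ki for nortriptyline; the fm value and the unbound inhibitor exposure are assumed illustrative numbers, not parameters from the study's full model:

```python
def auc_ratio_competitive(i_u_nM, ki_nM, fm=0.9):
    """Static AUC fold-increase for a victim drug when fraction fm of its
    clearance is via the inhibited enzyme and the inhibition is reversible
    and competitive: AUCR = 1 / (fm/(1 + I_u/Ki) + (1 - fm)).
    fm = 0.9 is an assumed illustrative value, not taken from the study."""
    return 1.0 / (fm / (1.0 + i_u_nM / ki_nM) + (1.0 - fm))

# unbound Ki for nortriptyline E-10-hydroxylation from the abstract: 12.4 nM
baseline = auc_ratio_competitive(0.0, 12.4)     # no inhibitor -> 1.0
inhibited = auc_ratio_competitive(50.0, 12.4)   # hypothetical unbound exposure
```

Because AUCR saturates at 1/(1 - fm) as the inhibitor concentration grows, the slow washout of terbinafine translates into a slowly decaying AUCR over months, consistent with the prolonged interaction the model predicts.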

  16. Size-density scaling in protists and the links between consumer-resource interaction parameters.

    PubMed

    DeLong, John P; Vasseur, David A

    2012-11-01

    Recent work indicates that the interaction between body-size-dependent demographic processes can generate macroecological patterns such as the scaling of population density with body size. In this study, we evaluate this possibility for grazing protists and also test whether demographic parameters in these models are correlated after controlling for body size. We compiled data on the body-size dependence of consumer-resource interactions and population density for heterotrophic protists grazing algae in laboratory studies. We then used nested dynamic models to predict both the height and slope of the scaling relationship between population density and body size for these protists. We also controlled for consumer size and assessed links between model parameters. Finally, we used the models and the parameter estimates to assess the individual- and population-level dependence of resource use on body-size and prey-size selection. The predicted size-density scaling for all models matched closely to the observed scaling, and the simplest model was sufficient to predict the pattern. Variation around the mean size-density scaling relationship may be generated by variation in prey productivity and area of capture, but residuals are relatively insensitive to variation in prey size selection. After controlling for body size, many consumer-resource interaction parameters were correlated, and a positive correlation between residual prey size selection and conversion efficiency neutralizes the apparent fitness advantage of taking large prey. Our results indicate that widespread community-level patterns can be explained with simple population models that apply consistently across a range of sizes. They also indicate that the parameter space governing the dynamics and the steady states in these systems is structured such that some parts of the parameter space are unlikely to represent real systems. 
Finally, predator-prey size ratios represent a kind of conundrum, because they are widely observed but apparently have little influence on population size and fitness, at least at this level of organization. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.

  17. How perfect can protein interactomes be?

    PubMed

    Levy, Emmanuel D; Landry, Christian R; Michnick, Stephen W

    2009-03-03

    Any engineered device should certainly not contain nonfunctional components, for this would be a waste of energy and money. In contrast, evolutionary theory tells us that biological systems need not be optimized and may very well accumulate nonfunctional elements. Mutational and demographic processes contribute to the cluttering of eukaryotic genomes and transcriptional networks with "junk" DNA and spurious DNA binding sites. Here, we question whether such a notion should be applied to protein interactomes, that is, whether these protein interactomes are expected to contain a fraction of nonselected, nonfunctional protein-protein interactions (PPIs), which we term "noisy." We propose a simple relationship between the fraction of noisy interactions expected in a given organism and three parameters: (i) the number of mutations needed to create and destroy interactions, (ii) the size of the proteome, and (iii) the fitness cost of noisy interactions. All three parameters suggest that noisy PPIs are expected to exist. Their existence could help to explain why PPIs determined from large-scale studies often lack functional relationships between interacting proteins, why PPIs are poorly conserved across organisms, and why the PPI space appears to be immensely large. Finally, we propose experimental strategies to estimate the fraction of evolutionary noise in PPI networks.

  18. A new analytical method for characterizing nonlinear visual processes with stimuli of arbitrary distribution: Theory and applications.

    PubMed

    Hayashi, Ryusuke; Watanabe, Osamu; Yokoyama, Hiroki; Nishida, Shin'ya

    2017-06-01

    Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms and its calculation costs become very expensive when estimating the nonlinear parameters of a large-scale system using large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system without relying on iterative calculations and yet also not requiring any specific stimulus distribution. We demonstrate the results of numerical simulations, showing that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional, spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in assessments of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.

  19. Strong spin-orbit coupling and Zeeman spin splitting in angle dependent magnetoresistance of Bi{sub 2}Te{sub 3}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dey, Rik, E-mail: rikdey@utexas.edu; Pramanik, Tanmoy; Roy, Anupam

    We have studied the angle dependent magnetoresistance of Bi{sub 2}Te{sub 3} thin film with fields up to 9 T at temperatures of 2–20 K. The perpendicular field magnetoresistance has been explained by the Hikami-Larkin-Nagaoka theory alone in a system with strong spin-orbit coupling, from which we have estimated the mean free path, the phase coherence length, and the spin-orbit relaxation time. We have obtained the out-of-plane spin-orbit relaxation time to be small and the in-plane spin-orbit relaxation time to be comparable to the momentum relaxation time. The estimation of these charge and spin transport parameters is useful for spintronics applications. For parallel field magnetoresistance, we have confirmed the presence of the Zeeman effect, which is otherwise suppressed in perpendicular field magnetoresistance due to strong spin-orbit coupling. The parallel field data have been explained using both the contributions from the Maekawa-Fukuyama localization theory for non-interacting electrons and the Lee-Ramakrishnan theory of electron-electron interactions. The estimated Zeeman g-factor and the strength of the Coulomb screening parameter agree well with theory. Finally, the anisotropy in magnetoresistance with respect to angle has been described by the Hikami-Larkin-Nagaoka theory. This anisotropy can be used in anisotropic magnetic sensor applications.
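The perpendicular-field analysis can be sketched as a fit of the simplified (strong spin-orbit limit) Hikami-Larkin-Nagaoka expression for weak antilocalization; the prefactor convention, synthetic data and starting values below are illustrative, not the paper's:

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import curve_fit

E = 1.602176634e-19      # elementary charge (C)
HBAR = 1.054571817e-34   # reduced Planck constant (J s)

def hln(B, alpha, l_phi):
    """Simplified HLN weak-antilocalization magnetoconductance in units of
    e^2/(2*pi^2*hbar): alpha * [psi(1/2 + B_phi/B) - ln(B_phi/B)], with the
    dephasing field B_phi = hbar / (4*e*l_phi**2) set by the phase
    coherence length l_phi (in meters)."""
    b_phi = HBAR / (4.0 * E * l_phi ** 2)
    return alpha * (digamma(0.5 + b_phi / B) - np.log(b_phi / B))

B = np.linspace(0.05, 9.0, 40)          # perpendicular field (T)
dsigma = hln(B, -0.5, 150e-9)           # synthetic WAL curve
popt, _ = curve_fit(hln, B, dsigma, p0=[-1.0, 100e-9])
alpha_fit, l_phi_fit = popt[0], abs(popt[1])   # l_phi enters only squared
```

The fitted prefactor alpha near -1/2 is the usual signature of a single strongly spin-orbit-coupled conduction channel, and l_phi is the phase coherence length the abstract refers to.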

  20. Classification of hydrological parameter sensitivity and evaluation of parameter transferability across 431 US MOPEX basins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi

    The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward-removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes), as well as their hydrologic indices/attributes (external hydrologic factors) separately, using a principal component analysis (PCA)- and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferrable. This classification study provides guidance on identifiable parameters and on parameterization and inverse model design for CLM, but the methodology is applicable to other models. Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.

  1. EasyFRAP-web: a web-based tool for the analysis of fluorescence recovery after photobleaching data.

    PubMed

    Koulouras, Grigorios; Panagopoulos, Andreas; Rapsomaniki, Maria A; Giakoumakis, Nickolaos N; Taraviras, Stavros; Lygerou, Zoi

    2018-06-13

    Understanding protein dynamics is crucial in order to elucidate protein function and interactions. Advances in modern microscopy facilitate the exploration of the mobility of fluorescently tagged proteins within living cells. Fluorescence recovery after photobleaching (FRAP) is an increasingly popular functional live-cell imaging technique which enables the study of the dynamic properties of proteins at a single-cell level. As an increasing number of labs generate FRAP datasets, there is a need for fast, interactive and user-friendly applications that analyze the resulting data. Here we present easyFRAP-web, a web application that simplifies the qualitative and quantitative analysis of FRAP datasets. EasyFRAP-web permits quick analysis of FRAP datasets through an intuitive web interface with interconnected analysis steps (experimental data assessment, different types of normalization and estimation of curve-derived quantitative parameters). In addition, easyFRAP-web provides dynamic and interactive data visualization and data and figure export for further analysis after every step. We test easyFRAP-web by analyzing FRAP datasets capturing the mobility of the cell cycle regulator Cdt2 in the presence and absence of DNA damage in cultured cells. We show that easyFRAP-web yields results consistent with previous studies and highlights cell-to-cell heterogeneity in the estimated kinetic parameters. EasyFRAP-web is platform-independent and is freely accessible at: https://easyfrap.vmnet.upatras.gr/.
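Curve-derived quantitative parameters such as the mobile fraction and recovery half-time are typically obtained by fitting a normalized recovery model; a minimal single-exponential sketch on synthetic data (easyFRAP-web's actual model options may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def frap_recovery(t, mobile_frac, k):
    """Single-exponential model for a fully normalized FRAP curve
    (intensity 0 just after the bleach, pre-bleach level 1):
    I(t) = mobile_frac * (1 - exp(-k*t))."""
    return mobile_frac * (1.0 - np.exp(-k * t))

t = np.linspace(0.0, 60.0, 120)          # time after bleach (s)
y = frap_recovery(t, 0.8, 0.12)          # synthetic normalized recovery
popt, _ = curve_fit(frap_recovery, t, y, p0=[0.5, 0.05])
mobile_fraction = popt[0]
t_half = np.log(2.0) / popt[1]           # recovery half-time (s)
```

Fitting each cell's curve separately yields a per-cell (mobile_fraction, t_half) pair, which is how the cell-to-cell heterogeneity mentioned in the abstract becomes visible.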

  2. Adhesion and volume constraints via nonlocal interactions determine cell organisation and migration profiles.

    PubMed

    Carrillo, José Antonio; Colombi, Annachiara; Scianna, Marco

    2018-05-14

    The description of the cell spatial pattern and characteristic distances is fundamental in a wide range of physio-pathological biological phenomena, from morphogenesis to cancer growth. Discrete particle models are widely used in this field, since they are focused on the cell level of abstraction and are able to preserve the identity of single individuals, reproducing their behavior. In particular, a fundamental role in determining the usefulness and the realism of a particle mathematical approach is played by the choice of the intercellular pairwise interaction kernel and by the estimate of its parameters. The aim of the paper is to demonstrate how the concept of H-stability, deriving from statistical mechanics, can have important implications in this respect. For any given interaction kernel, it in fact allows one to predict a priori the regions of the free parameter space that result in stable configurations of the system characterized by a finite and strictly positive minimal interparticle distance, which is fundamental when dealing with biological phenomena. The proposed analytical arguments are indeed able to restrict the range of possible variations of selected model coefficients, whose exact estimate however requires further investigation (e.g., fitting with empirical data), as illustrated in this paper by a series of representative simulations dealing with cell colony reorganization, sorting phenomena and zebrafish embryonic development. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Compost mixture influence of interactive physical parameters on microbial kinetics and substrate fractionation.

    PubMed

    Mohajer, Ardavan; Tremier, Anne; Barrington, Suzelle; Teglia, Cecile

    2010-01-01

    Composting is a feasible biological treatment for the recycling of wastewater sludge as a soil amendment. The process can be optimized by selecting an initial compost recipe with physical properties that enhance microbial activity. The present study measured the microbial O(2) uptake rate (OUR) in 16 sludge and wood residue mixtures to estimate the kinetics parameters of maximum growth rate mu(m) and rate of organic matter hydrolysis K(h), as well as the initial biodegradable organic matter fractions present. The starting mixtures covered a wide range of moisture content (MC), waste to bulking agent ratio (W/BA ratio) and BA particle size, and were placed in a laboratory respirometry apparatus to measure their OUR over 4 weeks. A microbial model based on the activated sludge process was used to calculate the kinetic parameters and was found to adequately reproduce OUR curves over time, except for the lag phase and the peak OUR, which were not represented and generally over-estimated, respectively. The maximum growth rate mu(m) was found to have a quadratic relationship with MC and a negative association with BA particle size. As a result, increasing MC up to 50% and using a smaller BA particle size of 8-12 mm was seen to maximize mu(m). The rate of hydrolysis K(h) was found to have a linear association with both MC and BA particle size. The model also estimated the initial readily biodegradable organic matter fraction, MB(0), and the slower biodegradable matter requiring hydrolysis, MH(0). The sum of MB(0) and MH(0) was associated with MC, the W/BA ratio and the interaction between these two parameters, suggesting that O(2) availability was a key factor in determining the value of these two fractions. The study reinforced the idea that optimization of the physical characteristics of a compost mixture requires a holistic approach. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. Dislocation model for aseismic fault slip in the transverse ranges of Southern California

    NASA Technical Reports Server (NTRS)

    Cheng, A.; Jackson, D. D.; Matsuura, M.

    1985-01-01

    Geodetic data at a plate boundary can reveal the pattern of subsurface displacements that accompany plate motion. These displacements are modelled as the sum of rigid block motion and the elastic effects of frictional interaction between blocks. The frictional interactions are represented by uniform dislocation on each of several rectangular fault patches. The block velocities and fault parameters are then estimated from geodetic data. A Bayesian inversion procedure employs prior estimates based on geological and seismological data. The method is applied to the Transverse Ranges, using prior geological and seismological data and geodetic data from the USGS trilateration networks. The geodetic data imply a displacement rate of about 20 mm/yr across the San Andreas Fault, while the geologic estimates exceed 30 mm/yr. The prior model and the final estimates both imply about 10 mm/yr of crustal shortening normal to the trend of the San Andreas Fault. Aseismic fault motion is a major contributor to plate motion. The geodetic data can help to identify faults that are undergoing rapid stress accumulation; in the Transverse Ranges those faults are the San Andreas and the Santa Susana.
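
    The flavor of blending geologic priors with geodetic data can be shown with a scalar conjugate-Gaussian update for a single slip-rate parameter. All numbers here are invented for illustration; the actual inversion estimates many block and fault parameters jointly.

    ```python
    # Toy Bayesian update: combine a geologic prior slip rate with geodetic
    # determinations (all values illustrative, in mm/yr).
    prior_mean, prior_var = 30.0, 25.0       # geologic estimate and its variance
    obs = [21.0, 19.5, 20.4, 18.9, 20.2]     # hypothetical geodetic determinations
    obs_var = 4.0                            # per-observation variance

    # Gaussian conjugate update: posterior precision is the sum of precisions,
    # posterior mean is the precision-weighted average.
    post_prec = 1.0 / prior_var + len(obs) / obs_var
    post_mean = (prior_mean / prior_var + sum(obs) / obs_var) / post_prec
    post_var = 1.0 / post_prec
    ```

    As in the abstract, abundant geodetic data near 20 mm/yr dominate a looser geologic prior of 30 mm/yr, pulling the posterior close to the geodetic value while shrinking its variance.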

  5. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. The method identifies individual rotor harmonic noise sources and characterizes each in terms of its non-dimensional governing parameters. It is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for a given operating condition based on a small number of measurements taken at different operating conditions.

  6. Vertical eddy diffusivity as a control parameter in the tropical Pacific

    NASA Astrophysics Data System (ADS)

    Martinez Avellaneda, N.; Cornuelle, B.

    2011-12-01

    Ocean models suffer from errors in the treatment of turbulent sub-grid-scale motions responsible for mixing and energy dissipation. Unrealistic small-scale physics in models can have large-scale consequences, such as biases in the upper ocean temperature, a symptom of poorly simulated upwelling, currents and air-sea interactions. This is of special importance in the tropical Pacific Ocean (TP), which is home to energetic air-sea interactions that affect global climate. It has been shown in a number of studies that the simulated ENSO variability is highly dependent on the state of the ocean (e.g., background mixing). Moreover, the magnitude of the vertical numerical diffusion is of primary importance in properly reproducing the Pacific equatorial thermocline. This work is part of a NASA-funded project to estimate the space- and time-varying ocean mixing coefficients in an eddy-permitting (1/3°) model of the TP to obtain an improved estimate of its time-varying circulation and its underlying dynamics. While an estimation procedure for the TP (26°S-30°N) is underway using the MIT general circulation model, complementary adjoint-based sensitivity studies have been carried out for the starting ocean state from Forget (2010). This analysis aids the interpretation of the estimated mixing coefficients and possible error compensation. The focus of the sensitivity tests is the Equatorial Undercurrent and the sub-thermocline jets (i.e., the Tsuchiya Jets), which are thought to have a strong dependence on vertical diffusivity and should provide checks on the estimated mixing parameters. In order to build intuition for the vertical diffusivity adjoint results in the TP, adjoint and forward perturbed simulations were carried out for an idealized sharp thermocline in a rectangular domain.

  7. Evaluation of kinetic constants of biomolecular interaction on optical surface plasmon resonance sensor with Newton Iteration Method

    NASA Astrophysics Data System (ADS)

    Zhao, Yuanyuan; Jiang, Guoliang; Hu, Jiandong; Hu, Fengjiang; Wei, Jianguang; Shi, Liang

    2010-10-01

    In immunology, there are two important types of biomolecular interaction: antigen-antibody and receptor-ligand. Monitoring the response rate and affinity of biomolecular interactions can aid protein function analysis, drug discovery, and genomics and proteomics research. Moreover, the association and dissociation rate constants of receptor-ligand binding are important parameters for the study of signal transmission between cells. Recent advances in bioanalyzer instruments have greatly simplified the measurement of the kinetics of molecular interactions. Non-destructive, real-time monitoring of the response to evaluate the parameters of antigen-antibody binding can be performed using optical surface plasmon resonance (SPR) biosensor technology. This technology provides a quantitative analysis that is carried out rapidly, with label-free high-throughput detection, using the binding curves of antigens and antibodies. Consequently, the kinetic parameters of the interaction between antigens and antibodies can be obtained. This article presents a low-cost integrated SPR-based bioanalyzer (HPSPR-6000) of our own design. The bioanalyzer is mainly composed of a biosensor TSPR1K23, a touch-screen monitor, a microprocessor PIC24F128, a microflow cell with three channels, a clamp and a photoelectric conversion device. To obtain the kinetic parameters, sensorgrams may be modeled using one of several binding models provided with BIAevaluation software 3.0, SensiQ or Autolab; this allows calculation of the association rate constant (ka) and the dissociation rate constant (kd). The ratio of ka to kd can be used to estimate the equilibrium constant. Alternatively, the analysis software OriginPro can process the obtained data by nonlinear fitting to extract the relevant parameters, but it cannot be embedded into the bioanalyzer, so the instrument does not support its use.
    This paper proposes a novel method to evaluate the kinetic parameters of biomolecular interaction using the Newton Iteration Method and the Least Squares Method. First, the pseudo-first-order kinetic model of biomolecular interaction was established. Then, data on the molecular interaction of HBsAg and HBsAb were obtained with the bioanalyzer. Finally, we used our own optical SPR bioanalyzer software to perform nonlinear fitting of the association and dissociation curves. The coefficients of determination (R-squared) are 0.99229 and 0.99593, respectively. Furthermore, the kinetic parameters and affinity constants were evaluated using the data obtained from the fitting results.
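
    The workflow can be sketched with synthetic data: fit the observed rate constant of the association phase by a Gauss-Newton (Newton-type least-squares) iteration, fit kd from the dissociation phase, then recover ka. The 1:1 pseudo-first-order model is standard SPR practice, but all rate constants and concentrations below are invented stand-ins, not the paper's HBsAg/HBsAb values.

    ```python
    import math

    def fit_exp_rise(ts, ys, iters=30):
        """Gauss-Newton fit of y = A*(1 - exp(-k*t)); returns (A, k)."""
        a = max(ys)
        half_t = next(t for t, y in zip(ts, ys) if y >= 0.5 * a)
        k = math.log(2.0) / half_t              # init from the half-rise time
        for _ in range(iters):
            r, ja, jk = [], [], []
            for t, y in zip(ts, ys):
                e = math.exp(-k * t)
                r.append(y - a * (1.0 - e))
                ja.append(1.0 - e)              # df/dA
                jk.append(a * t * e)            # df/dk
            s11 = sum(x * x for x in ja); s12 = sum(x * z for x, z in zip(ja, jk))
            s22 = sum(x * x for x in jk)
            b1 = sum(x * ri for x, ri in zip(ja, r))
            b2 = sum(x * ri for x, ri in zip(jk, r))
            det = s11 * s22 - s12 * s12         # solve 2x2 normal equations
            a += (s22 * b1 - s12 * b2) / det
            k += (s11 * b2 - s12 * b1) / det
        return a, k

    # Synthetic 1:1 binding data (invented constants):
    ka_true, kd_true, conc, rmax = 1e5, 1e-3, 1e-7, 100.0   # M^-1 s^-1, s^-1, M, RU
    kobs = ka_true * conc + kd_true
    req = rmax * ka_true * conc / kobs
    ts = [5.0 * i for i in range(1, 121)]                    # 5..600 s
    assoc = [req * (1.0 - math.exp(-kobs * t)) for t in ts]
    dissoc = [req * math.exp(-kd_true * t) for t in ts]

    _, kobs_fit = fit_exp_rise(ts, assoc)
    # kd from a log-linear least-squares fit to the dissociation phase
    n = len(ts)
    sx, sy = sum(ts), sum(math.log(y) for y in dissoc)
    sxx = sum(t * t for t in ts)
    sxy = sum(t * math.log(y) for t, y in zip(ts, dissoc))
    kd_fit = -(n * sxy - sx * sy) / (n * sxx - sx * sx)
    ka_fit = (kobs_fit - kd_fit) / conc
    ```

    On noiseless synthetic curves the iteration recovers kobs, kd and hence ka essentially exactly; with real sensorgrams the same scheme minimizes the residual sum of squares.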

  8. Towards robust quantification and reduction of uncertainty in hydrologic predictions: Integration of particle Markov chain Monte Carlo and factorial polynomial chaos expansion

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Ancell, B. C.

    2017-05-01

    Particle filtering techniques have been receiving increasing attention from the hydrologic community due to their ability to properly estimate model parameters and states of nonlinear and non-Gaussian systems. To facilitate a robust quantification of uncertainty in hydrologic predictions, it is necessary to explicitly examine the forward propagation and evolution of parameter uncertainties and their interactions that affect the predictive performance. This paper presents a unified probabilistic framework that merges the strengths of particle Markov chain Monte Carlo (PMCMC) and factorial polynomial chaos expansion (FPCE) algorithms to robustly quantify and reduce uncertainties in hydrologic predictions. A Gaussian anamorphosis technique is used to establish a seamless bridge between the data assimilation using the PMCMC and the uncertainty propagation using the FPCE through a straightforward transformation of posterior distributions of model parameters. The unified probabilistic framework is applied to the Xiangxi River watershed of the Three Gorges Reservoir (TGR) region in China to demonstrate its validity and applicability. Results reveal that the degree of spatial variability of soil moisture capacity is the most identifiable model parameter with the fastest convergence through the streamflow assimilation process. The potential interaction between the spatial variability in soil moisture conditions and the maximum soil moisture capacity has the most significant effect on the performance of streamflow predictions. In addition, parameter sensitivities and interactions vary in magnitude and direction over time due to temporal and spatial dynamics of hydrologic processes.
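
    The core particle-filtering idea, estimating a parameter by carrying it in the particle state, can be sketched on a toy storage model. Everything here (the AR(1) "hydrologic" model, noise levels, truth value) is invented to illustrate the mechanism; it is not the paper's PMCMC scheme or watershed model.

    ```python
    import math, random

    random.seed(0)

    # Toy storage model x_{t+1} = a*x_t + 0.2 + noise, observed with error;
    # the recession coefficient a (truth 0.8) is the unknown parameter.
    a_true, forcing, q, r_obs = 0.8, 0.2, 0.1, 0.2
    truth, obs = [1.0], []
    for _ in range(200):
        truth.append(a_true * truth[-1] + forcing + random.gauss(0.0, q))
        obs.append(truth[-1] + random.gauss(0.0, r_obs))

    n = 1000
    parts = [[random.uniform(0.0, 1.0), 1.0] for _ in range(n)]  # [a, x]
    for y in obs:
        for p in parts:
            p[0] += random.gauss(0.0, 0.005)   # parameter jitter (regularization)
            p[1] = p[0] * p[1] + forcing + random.gauss(0.0, q)
        w = [math.exp(-0.5 * ((y - p[1]) / r_obs) ** 2) for p in parts]
        tot = sum(w)
        # systematic resampling
        step_u = random.random() / n
        cum, i, new = w[0] / tot, 0, []
        for m in range(n):
            u = step_u + m / n
            while cum < u and i < n - 1:
                i += 1
                cum += w[i] / tot
            new.append([parts[i][0], parts[i][1]])
        parts = new

    a_est = sum(p[0] for p in parts) / n   # posterior mean of the parameter
    ```

    The small random-walk jitter on the parameter keeps the particle cloud from collapsing; the posterior mean converges toward the true recession coefficient as observations accumulate.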

  9. Hydrogeological controls of groundwater - land surface interactions

    NASA Astrophysics Data System (ADS)

    Bresciani, Etienne; Batelaan, Okke; Goderniaux, Pascal

    2017-04-01

    Interaction of groundwater with the land surface impacts a wide range of climatic, hydrologic, ecologic and geomorphologic processes. Many site-specific studies have successfully focused on measuring and modelling groundwater-surface water interaction, but upscaling or estimation at catchment or regional scale remains challenging. The factors controlling the interaction at regional scale are still poorly understood. In this contribution, a new 2-D (cross-sectional) analytical groundwater flow solution is used to derive a dimensionless criterion that expresses the conditions under which the groundwater outcrops at the land surface (Bresciani et al., 2016). The criterion gives insights into the functional relationships between geology, topography, climate and the locations of groundwater discharge along river systems. This sheds light on the debate about the topographic control of groundwater flow and groundwater-surface water interaction, as effectively the topography only influences the interaction when the groundwater table reaches the land surface. The criterion provides a practical tool to predict locations of groundwater discharge if a limited number of geomorphological and hydrogeological parameters (recharge, hydraulic conductivity and depth to impervious base) are known; conversely, it can provide regional estimates of the ratio of recharge over hydraulic conductivity if locations of groundwater discharge are known. A case study in South-West Brittany, France, with known groundwater discharge locations, shows the feasibility of regional estimates of the ratio of recharge over hydraulic conductivity. Bresciani, E., Goderniaux, P. and Batelaan, O., 2016, Hydrogeological controls of water table-land surface interactions. Geophysical Research Letters 43(18): 9653-9661. http://dx.doi.org/10.1002/2016GL070618
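
    The inverse use of an outcropping criterion can be illustrated with the classical 1-D Dupuit water table over a flat impervious base, a deliberately simpler configuration than the 2-D solution of Bresciani et al. (2016); the geometry and R/K value are invented for the example.

    ```python
    import math

    # 1-D Dupuit sketch: divide at x = 0, stream (fixed head) at x = L,
    # uniform recharge R over hydraulic conductivity K (ratio r_over_k).
    def water_table(x, L, h_stream, r_over_k):
        return math.sqrt(h_stream ** 2 + r_over_k * (L ** 2 - x ** 2))

    def r_over_k_from_outcrop(x_o, z_o, L, h_stream):
        """R/K for which the water table just reaches elevation z_o at x_o."""
        return (z_o ** 2 - h_stream ** 2) / (L ** 2 - x_o ** 2)

    L, h_stream = 1000.0, 10.0
    land = lambda x: 10.0 + 15.0 * (1.0 - x / L)   # linear hillslope, 25 m at divide
    true_ratio = 3e-4

    # Forward problem: locate the upslope edge of the seepage zone, i.e. the
    # first point (scanning from the divide) where the table meets the surface.
    x_o = None
    x = 0.0
    while x < L:
        if water_table(x, L, h_stream, true_ratio) >= land(x):
            x_o = x
            break
        x += 0.1

    # Inverse problem: recover R/K from the observed discharge location.
    est_ratio = r_over_k_from_outcrop(x_o, land(x_o), L, h_stream)
    ```

    Because the inversion formula is exact at the outcrop point, the recovered R/K matches the value used in the forward run up to the scan resolution, which is the logic behind using mapped discharge locations to constrain recharge over hydraulic conductivity regionally.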

  10. Towards Improving our Understanding on the Retrievals of Key Parameters Characterising Land Surface Interactions from Space: Introduction & First Results from the PREMIER-EO Project

    NASA Astrophysics Data System (ADS)

    Ireland, Gareth; North, Matthew R.; Petropoulos, George P.; Srivastava, Prashant K.; Hodges, Crona

    2015-04-01

    Acquiring accurate information on the spatio-temporal variability of soil moisture content (SM) and evapotranspiration (ET) is of key importance for extending our understanding of the Earth system's physical processes, and is also required in a wide range of multi-disciplinary research studies and applications. Earth Observation (EO) technology provides an economically feasible way to derive continuous spatio-temporal estimates of key parameters characterising land surface interactions, including ET as well as SM. Such information is of key value to practitioners, decision makers and scientists alike. The PREMIER-EO project, recently funded by High Performance Computing Wales (HPCW), is a research initiative directed towards a better understanding of EO technology's present ability to deliver operational estimates of surface fluxes and SM. Moreover, the project aims to address knowledge gaps related to the operational estimation of such parameters, and thus to contribute to ongoing global efforts to enhance the accuracy of those products. In this presentation we introduce the PREMIER-EO project, providing a detailed overview of the research aims and objectives for the 1-year duration of the project's implementation. Subsequently, we present initial results of the work carried out herein, in particular a comprehensive and robust evaluation of the accuracy of existing operational ET and SM products across different ecosystems globally. The research outcomes of this project, once completed, will make an important contribution towards addressing the knowledge gaps related to the operational estimation of ET and SM. The project results will also support ongoing global efforts towards the operational development of related products using technologically advanced EO instruments launched recently or planned for launch in the next 1-2 years.
Key Words: PREMIER-EO, HPC Wales, Soil Moisture, Evapotranspiration, Earth Observation

  11. The estimation of H-bond and metal ion-ligand interaction energies in the G-Quadruplex ⋯ Mn+ complexes

    NASA Astrophysics Data System (ADS)

    Mostafavi, Najmeh; Ebrahimi, Ali

    2018-06-01

    In order to characterize various interactions in the G-quadruplex ⋯ Mn+ (G-Q ⋯ Mn+) complexes, the individual H-bond (EHB) and metal ion-ligand interaction (EMO) energies have been estimated using the electron charge densities (ρs) calculated at the X ⋯ H (X = N and O) and Mn+ ⋯ O (Mn+ is an alkaline, alkaline earth or transition metal ion) bond critical points (BCPs) obtained from the atoms in molecules (AIM) analysis. The estimated values of EMO and EHB were evaluated using the structural parameters, the results of natural bond orbital (NBO) analysis, aromaticity indexes and atomic charges. The EMO value increases with the ratio of ionic charge to radius, e/r, and a linear correlation is observed between EMO and e/r (R = 0.97). Meaningful relationships are also observed between EMO and the indexes used for aromaticity estimation. The ENH value is higher than EOH in the complexes; this is in complete agreement with the trend of the N⋯H-N and O⋯H-N angles, the E(2) values of the nN → σ*NH and nO → σ*NH interactions, and the difference between the natural charges on the H-bonded atom and the hydrogen atom of guanine (Δq). In general, the O1MO2 angle approaches 109.5° as EMO increases and EHB decreases in the presence of a metal ion.

  12. Molecular Dynamic Simulations of Interaction of an AFM Probe with the Surface of an SCN Sample

    NASA Technical Reports Server (NTRS)

    Bune, Adris; Kaukler, William; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    Molecular dynamics (MD) simulations are conducted in order to estimate the forces of probe-substrate interaction in the Atomic Force Microscope (AFM). First, a review of available molecular dynamics techniques is given. The implementation of the MD simulation is based on an object-oriented code developed at the University of Delft. Modeling of the sample material - succinonitrile (SCN) - is based on Lennard-Jones potentials. For the polystyrene probe, an atomic interaction potential is used. Due to the object-oriented structure of the code, modification of an atomic interaction potential is straightforward. Calculation of the melting temperature is used for validation of the code and of the interaction potentials. Various fitting parameters of the probe-substrate interaction potentials are considered, as potentials fitted to certain properties and temperature ranges may not be reliable for others. This research provides a theoretical foundation for the interpretation of actual measurements of interaction forces using the AFM.

  13. Transient Inverse Calibration of Site-Wide Groundwater Model to Hanford Operational Impacts from 1943 to 1996--Alternative Conceptual Model Considering Interaction with Uppermost Basalt Confined Aquifer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vermeul, Vincent R.; Cole, Charles R.; Bergeron, Marcel P.

    2001-08-29

    The baseline three-dimensional transient inverse model for the estimation of site-wide scale flow parameters, including their uncertainties, using data on the transient behavior of the unconfined aquifer system over the entire historical period of Hanford operations, has been modified to account for the effects of intercommunication between the Hanford unconfined aquifer and the underlying upper basalt confined aquifer. Both the baseline and alternative conceptual models (ACM-1) considered only the groundwater flow component and corresponding observational data in the 3-D transient inverse calibration efforts. Subsequent efforts will examine both groundwater flow and transport. Comparisons of goodness-of-fit measures and parameter estimation results for the ACM-1 transient inverse calibrated model with those from previous site-wide groundwater modeling efforts illustrate that the new 3-D transient inverse model approach will strengthen the technical defensibility of the final model(s) and provide the ability to incorporate uncertainty in predictions related to both conceptual model and parameter uncertainty. These results, however, indicate that additional improvements to the conceptual model framework are required. An investigation was initiated at the end of this basalt inverse modeling effort to determine whether facies-based zonation would improve specific yield parameter estimation results (ACM-2). A description of the justification and methodology used to develop this zonation is discussed.

  14. Multishaker modal testing

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.

    1987-01-01

    The major accomplishments of this research are: (1) the refinement and documentation of a multi-input, multi-output modal parameter estimation algorithm which is applicable to general linear, time-invariant dynamic systems; (2) the development and testing of an unsymmetric block-Lanczos algorithm for reduced-order modeling of linear systems with arbitrary damping; and (3) the development of a control-structure-interaction (CSI) test facility.

  15. Ponderomotive perturbations of low density low-temperature plasma under laser Thomson scattering diagnostics

    NASA Astrophysics Data System (ADS)

    Shneider, Mikhail N.

    2017-10-01

    The ponderomotive perturbation in the region where laser radiation interacts with a low-density, low-temperature plasma is considered. Estimates of the perturbation magnitude are determined from the plasma parameters and from the geometry, intensity, and wavelength of the laser radiation. It is shown that ponderomotive perturbations can lead to large errors in the electron density when measured using Thomson scattering.
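
    The size of the effect can be estimated from textbook formulas: the ponderomotive potential of the probe beam compared with the electron temperature sets the Boltzmann density perturbation. The intensity and temperature below are invented round numbers for illustration, not the paper's cases.

    ```python
    import math

    # SI constants: elementary charge, electron mass, vacuum permittivity,
    # speed of light, Boltzmann constant.
    E_CHARGE, E_MASS = 1.602e-19, 9.109e-31
    EPS0, C, KB = 8.854e-12, 2.998e8, 1.381e-23

    def ponderomotive_potential(intensity_w_m2, wavelength_m):
        """U_p = e^2 E^2 / (4 m_e omega^2), with E^2 = 2 I / (eps0 c)."""
        omega = 2.0 * math.pi * C / wavelength_m
        e_field_sq = 2.0 * intensity_w_m2 / (EPS0 * C)
        return E_CHARGE ** 2 * e_field_sq / (4.0 * E_MASS * omega ** 2)

    def density_perturbation(u_p, te_kelvin):
        """Relative electron density change for a Boltzmann response."""
        return math.exp(-u_p / (KB * te_kelvin)) - 1.0

    u = ponderomotive_potential(1e16, 532e-9)   # assumed focused 532 nm probe
    delta = density_perturbation(u, 3000.0)     # assumed Te ~ 0.26 eV
    ```

    With these assumed numbers the density depression in the focal volume is of order ten percent, which is exactly the scale of error in Thomson-scattering density measurements that the abstract warns about.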

  16. Local correction of quadrupole errors at LHC interaction regions using action and phase jump analysis on turn-by-turn beam position data

    NASA Astrophysics Data System (ADS)

    Cardona, Javier Fernando; García Bonilla, Alba Carolina; Tomás García, Rogelio

    2017-11-01

    This article shows that the effect of all quadrupole errors present in an interaction region with low β* can be modeled by an equivalent magnetic kick, which can be estimated from action and phase jumps found in beam position data. This equivalent kick is used to find the strengths that certain normal and skew quadrupoles located in the IR must have to make an effective correction in that region. Additionally, averaging techniques to reduce noise in beam position data, which allow precise estimates of equivalent kicks, are presented and mathematically justified. The complete procedure is tested with simulated data obtained from madx and with 2015 LHC experimental data. The analyses performed on the experimental data indicate that the strengths of the IR skew quadrupole correctors and normal quadrupole correctors can be estimated within a 10% uncertainty. Finally, the effect of IR corrections on β* is studied, and a correction scheme that returns this parameter to its design value is proposed.

  17. Carbon and water flux responses to physiology by environment interactions: a sensitivity analysis of variation in climate on photosynthetic and stomatal parameters

    NASA Astrophysics Data System (ADS)

    Bauerle, William L.; Daniels, Alex B.; Barnard, David M.

    2014-05-01

    Sensitivity of carbon uptake and water use estimates to changes in physiology was determined with a coupled photosynthesis and stomatal conductance (gs) model, linked to canopy microclimate with a spatially explicit scheme (MAESTRA). The sensitivity analyses were conducted over the range of intraspecific physiology parameter variation observed for Acer rubrum L. and temperate hardwood C3 vegetation across the following climate conditions: carbon dioxide concentration 200-700 ppm, photosynthetically active radiation 50-2,000 μmol m-2 s-1, air temperature 5-40 °C, relative humidity 5-95 %, and wind speed at the top of the canopy 1-10 m s-1. Five key physiological inputs [quantum yield of electron transport (α), minimum stomatal conductance (g0), stomatal sensitivity to the marginal water cost of carbon gain (g1), maximum rate of electron transport (Jmax), and maximum carboxylation rate of Rubisco (Vcmax)] changed carbon and water flux estimates ≥15 % in response to climate gradients; variation in α, Jmax, and Vcmax input resulted in up to ~50 and 82 % differences in intraspecific and C3 photosynthesis estimate output, respectively. Transpiration estimates were affected up to ~46 and 147 % by differences in intraspecific and C3 g1 and g0 values - two parameters previously overlooked in modeling land-atmosphere carbon and water exchange. We show that a variable environment, within a canopy or along a climate gradient, changes the spatial parameter effects of g0, g1, α, Jmax, and Vcmax in photosynthesis-gs models. Since the effects of variation in physiology parameter inputs are dependent on climate, this approach can be used to assess the geographical importance of key physiology model inputs when estimating large-scale carbon and water exchange.
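
    The coupling of the five parameters can be sketched with a simplified Farquhar-type assimilation model closed by a Ball-Berry stomatal conductance relation and a diffusion constraint on Ci. This is an illustrative toy, not MAESTRA: the functional forms are textbook simplifications and all parameter values are invented.

    ```python
    # Hedged sketch of a coupled C3 photosynthesis / stomatal-conductance loop.
    # Units: A in umol m-2 s-1, gs in mol m-2 s-1, ca/ci in ppm.
    def coupled_a_gs(ca=400.0, par=1500.0, rh=0.7, vcmax=60.0, jmax=120.0,
                     alpha=0.3, g0=0.01, g1=9.0, gamma=40.0, kc=300.0):
        ci = 0.7 * ca                                     # initial guess
        for _ in range(50):
            j = (alpha * par * jmax) / (alpha * par + jmax)  # electron transport
            wj = j * (ci - gamma) / (4.0 * ci + 8.0 * gamma) # RuBP-limited rate
            wc = vcmax * (ci - gamma) / (ci + kc)            # Rubisco-limited rate
            a = min(wc, wj)
            gs = g0 + g1 * a * rh / ca                       # Ball-Berry closure
            ci_new = ca - 1.6 * a / gs                       # CO2 diffusion
            if abs(ci_new - ci) < 1e-6:
                break
            ci = 0.5 * (ci + ci_new)                         # damped update
        return a, gs, ci

    a, gs, ci = coupled_a_gs()   # assimilation, conductance, internal CO2
    ```

    Re-running the loop while perturbing one input at a time (α, g0, g1, Jmax, Vcmax) across a climate gradient is exactly the kind of one-at-a-time sensitivity sweep the study performs at canopy scale.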

  18. On the direct detection of multi-component dark matter: sensitivity studies and parameter estimation

    NASA Astrophysics Data System (ADS)

    Herrero-Garcia, Juan; Scaffidi, Andre; White, Martin; Williams, Anthony G.

    2017-11-01

    We study the case of multi-component dark matter, in particular how direct detection signals are modified in the presence of several stable weakly-interacting-massive particles. Assuming a positive signal in a future direct detection experiment, stemming from two dark matter components, we study the region in parameter space where it is possible to distinguish a one-component from a two-component dark matter spectrum. First, we leave as free parameters the two dark matter masses and show that the two hypotheses can be significantly discriminated for a range of dark matter masses, with their splitting being the critical factor. We then investigate how including the effects of different interaction strengths, local densities or velocity dispersions for the two components modifies these conclusions. We also consider the case of isospin-violating couplings. In all scenarios, we show results for various types of nuclei, both for elastic spin-independent and spin-dependent interactions. Finally, assuming that the two-component hypothesis is confirmed, we quantify the accuracy with which the parameters can be extracted and discuss the different degeneracies that occur. This includes studying the case in which only a single experiment observes a signal, and also the scenario of having two signals from two different experiments, in which case the ratios of the couplings to neutrons and protons may also be extracted.
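
    Why the mass splitting is the critical factor can be seen with a toy spectrum: each component contributes a roughly exponential recoil spectrum whose slope is set by its mass, so a two-component sum shows a kink that no single-component fit can absorb. The spectral form and all numbers below are invented stand-ins for the full halo-integral calculation.

    ```python
    import math

    def toy_spectrum(e, m, norm):
        """Toy recoil rate vs energy: falling exponential whose slope softens
        with the DM mass (an assumed, illustrative parametrization)."""
        e0 = 5.0 * m / (m + 50.0)     # keV; heavier DM -> harder spectrum
        return norm * math.exp(-e / e0)

    def fit_log_line(es, rates):
        """SSE of the best single-exponential fit (least squares in log space)."""
        ys = [math.log(r) for r in rates]
        n, sx, sy = len(es), sum(es), sum(ys)
        sxx = sum(e * e for e in es)
        sxy = sum(e * y for e, y in zip(es, ys))
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        inter = (sy - slope * sx) / n
        return sum((y - (inter + slope * e)) ** 2 for e, y in zip(es, ys))

    energies = [1.0 + 0.5 * i for i in range(40)]            # keV grid
    one_comp = [toy_spectrum(e, 10.0, 50.0) for e in energies]
    two_comp = [a + toy_spectrum(e, 100.0, 0.3) for e, a in zip(energies, one_comp)]

    sse_one = fit_log_line(energies, one_comp)   # ~0: single slope fits exactly
    sse_two = fit_log_line(energies, two_comp)   # large: kink between two slopes
    ```

    As the two masses approach each other the two slopes merge, the kink disappears, and the goodness-of-fit discrimination between the hypotheses vanishes, which is the degeneracy the paper maps out.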

  19. Quantifying rates of cell migration and cell proliferation in co-culture barrier assays reveals how skin and melanoma cells interact during melanoma spreading and invasion.

    PubMed

    Haridas, Parvathi; Penington, Catherine J; McGovern, Jacqui A; McElwain, D L Sean; Simpson, Matthew J

    2017-06-21

    Malignant spreading involves the migration of cancer cells amongst other native cell types. For example, in vivo melanoma invasion involves individual melanoma cells migrating through native skin, which is composed of several distinct subpopulations of cells. Here, we aim to quantify how interactions between melanoma and fibroblast cells affect the collective spreading of a heterogeneous population of these cells in vitro. We perform a suite of circular barrier assays that includes: (i) monoculture assays with fibroblast cells; (ii) monoculture assays with SK-MEL-28 melanoma cells; and (iii) a series of co-culture assays initiated with three different ratios of SK-MEL-28 melanoma cells and fibroblast cells. Using immunostaining, detailed cell density histograms are constructed to illustrate how the two subpopulations of cells are spatially arranged within the spreading heterogeneous population. Calibrating the solution of a continuum partial differential equation to the experimental results from the monoculture assays allows us to estimate the cell diffusivity and the cell proliferation rate for the melanoma and the fibroblast cells, separately. Using the parameter estimates from the monoculture assays, we then make a prediction of the spatial spreading in the co-culture assays. Results show that the parameter estimates obtained from the monoculture assays lead to a reasonably accurate prediction of the spatial arrangement of the two subpopulations in the co-culture assays. Overall, the spatial pattern of spreading of the melanoma cells and the fibroblast cells is very similar in monoculture and co-culture conditions. Therefore, we find no clear evidence of any interactions other than cell-to-cell contact and crowding effects.
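
    The continuum model typically calibrated to such barrier assays is a Fisher-Kolmogorov reaction-diffusion equation, u_t = D u_xx + r u (1 - u). The 1-D explicit finite-difference sketch below uses invented values of D, r and the geometry, not the paper's fitted estimates, and simplifies the circular assay to a line.

    ```python
    # Hedged sketch: 1-D Fisher-Kolmogorov spreading from a central "barrier"
    # region, the kind of PDE calibrated to barrier-assay density profiles.
    def fisher_kpp(D=0.001, r=0.05, L=10.0, nx=201, dt=0.05, steps=2000):
        dx = L / (nx - 1)
        # initial condition: confluent cells inside the lifted barrier
        u = [1.0 if abs(i * dx - L / 2.0) < 1.0 else 0.0 for i in range(nx)]
        for _ in range(steps):
            un = u[:]
            for i in range(1, nx - 1):
                lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx ** 2
                un[i] = u[i] + dt * (D * lap + r * u[i] * (1.0 - u[i]))
            un[0], un[-1] = un[1], un[-2]   # zero-flux boundaries
            u = un
        return u

    final_profile = fisher_kpp()
    ```

    Fitting D and r to monoculture density profiles, then summing two such calibrated populations, is the sense in which the co-culture prediction tests for interactions beyond crowding: any systematic misfit would signal an extra interaction term.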

  20. Mimosa: Mixture Model of Co-expression to Detect Modulators of Regulatory Interaction

    NASA Astrophysics Data System (ADS)

    Hansen, Matthew; Everett, Logan; Singh, Larry; Hannenhalli, Sridhar

    Functionally related genes tend to be correlated in their expression patterns across multiple conditions and/or tissue-types. Thus co-expression networks are often used to investigate functional groups of genes. In particular, when one of the genes is a transcription factor (TF), the co-expression-based interaction is interpreted, with caution, as a direct regulatory interaction. However, any particular TF, and more importantly, any particular regulatory interaction, is likely to be active only in a subset of experimental conditions. Moreover, the subset of expression samples where the regulatory interaction holds may be marked by the presence or absence of a modifier gene, such as an enzyme that post-translationally modifies the TF. Such subtlety of regulatory interactions is overlooked when one computes an overall expression correlation. Here we present a novel mixture modeling approach in which a TF-Gene pair is presumed to be significantly correlated (with unknown coefficient) in an (unknown) subset of expression samples. The parameters of the model are estimated using a Maximum Likelihood approach. The estimated mixture of expression samples is then mined to identify genes potentially modulating the TF-Gene interaction. We have validated our approach using synthetic data and on three biological cases in cow and yeast. While limited in some ways, as discussed, the work represents a novel approach to mine expression data and detect potential modulators of regulatory interactions.
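
    The mixture idea can be sketched with a two-component EM fit: in an unknown fraction of samples the gene tracks the TF with an unknown slope, elsewhere the two are independent. This is a deliberately reduced version of the model (known noise scales, a regression slope instead of a full correlation structure); all data are synthetic.

    ```python
    import math, random

    random.seed(1)

    # Synthetic expression data: in a fraction pi_true of samples the TF-gene
    # pair is "active" (gene y tracks TF x with slope b_true), elsewhere the
    # two are independent. s is the active-component noise (assumed known).
    n, pi_true, b_true, s = 400, 0.6, 0.9, 0.5
    data = []
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        if random.random() < pi_true:
            y = b_true * x + random.gauss(0.0, s)
        else:
            y = random.gauss(0.0, 1.0)
        data.append((x, y))

    def norm_pdf(v, mu, sd):
        return math.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

    pi_hat, b_hat = 0.5, 0.1
    for _ in range(100):
        # E-step: responsibility that each sample is in the correlated component
        resp = []
        for x, y in data:
            p1 = pi_hat * norm_pdf(y, b_hat * x, s)
            p0 = (1.0 - pi_hat) * norm_pdf(y, 0.0, 1.0)
            resp.append(p1 / (p1 + p0))
        # M-step: weighted updates of the mixing weight and regression slope
        pi_hat = sum(resp) / n
        b_hat = sum(r * x * y for r, (x, y) in zip(resp, data)) / \
                sum(r * x * x for r, (x, y) in zip(resp, data))
    ```

    The final responsibilities partition the samples; in the full method, that partition is then mined for modifier genes whose expression separates the "active" from the "inactive" samples.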

  1. Ensemble-Based Parameter Estimation in a Coupled General Circulation Model

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-09-10

    Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
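
    The mechanism, augmenting each ensemble member's state with its own copy of the parameter and letting the ensemble Kalman update act on the state-parameter cross-covariance, can be sketched in a scalar twin experiment. Everything below (relaxation model, noise levels, the "truth") is invented; the real study estimates parameters such as solar penetration depth in a coupled GCM.

    ```python
    import random

    random.seed(3)

    # Scalar twin experiment: state T relaxes toward forcing F at rate lam;
    # lam is the unknown parameter, carried by each member and updated with a
    # perturbed-observation ensemble Kalman step.
    lam_true, F, dt, r_obs = 0.5, 10.0, 0.1, 0.1
    n_ens, n_steps = 200, 300
    T_true = 0.0
    ens = [[random.gauss(0.0, 1.0), random.uniform(0.1, 2.0)]
           for _ in range(n_ens)]                      # member = [T, lam]

    for _ in range(n_steps):
        T_true += dt * (-lam_true * (T_true - F))
        y = T_true + random.gauss(0.0, r_obs)          # synthetic observation
        for m in ens:
            m[1] = max(m[1] + random.gauss(0.0, 0.002), 0.01)  # parameter jitter
            m[0] += dt * (-m[1] * (m[0] - F)) + random.gauss(0.0, 0.02)
        mT = sum(m[0] for m in ens) / n_ens
        mL = sum(m[1] for m in ens) / n_ens
        cTT = sum((m[0] - mT) ** 2 for m in ens) / (n_ens - 1)
        cLT = sum((m[0] - mT) * (m[1] - mL) for m in ens) / (n_ens - 1)
        gain_T = cTT / (cTT + r_obs ** 2)
        gain_L = cLT / (cTT + r_obs ** 2)              # cross-covariance gain
        for m in ens:
            innov = y + random.gauss(0.0, r_obs) - m[0]
            m[0] += gain_T * innov
            m[1] += gain_L * innov

    lam_est = sum(m[1] for m in ens) / n_ens
    ```

    The parameter is only identifiable while the state is in transient (the cross-covariance vanishes at equilibrium), which echoes the abstract's finding that richer observations speed up and stabilize multiple-parameter estimation.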

  2. Research requirements for development of improved helicopter rotor efficiency

    NASA Technical Reports Server (NTRS)

    Davis, S. J.

    1976-01-01

    The research requirements for developing an improved-efficiency rotor for a civil helicopter are documented. The various design parameters affecting the hover and cruise efficiency of a rotor are surveyed, and the parameters capable of producing the greatest potential improvement are identified. Research and development programs to achieve these improvements are defined, and estimated costs and schedules are presented. Interaction of the improved efficiency rotor with other technological goals for an advanced civil helicopter is noted, including its impact on engine noise, hover and cruise performance, one-engine-inoperative hover capability, and maintenance and reliability.

  3. On the interaction of luminol with human serum albumin: Nature and thermodynamics of ligand binding

    NASA Astrophysics Data System (ADS)

    Moyon, N. Shaemningwar; Mitra, Sivaprasad

    2010-09-01

    The mechanism and thermodynamic parameters for the binding of luminol (LH2) with human serum albumin were explored by steady-state and picosecond time-resolved fluorescence spectroscopy. It was shown that of the two possible LH2 conformers present in solution, only one is accessible for binding with HSA. The changes in thermodynamic parameters, enthalpy (ΔH) and entropy (ΔS), corresponding to the ligand binding process were also estimated by performing the experiment at different temperatures. A ligand replacement experiment with bilirubin confirms that LH2 binds in sub-domain IIA of the protein.
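
    Estimating ΔH and ΔS from experiments at several temperatures is a van't Hoff analysis: ln K is linear in 1/T with slope -ΔH/R and intercept ΔS/R. The sketch below recovers both from synthetic binding constants; the thermodynamic values are illustrative, not those measured for luminol-HSA.

    ```python
    import math

    R = 8.314                     # gas constant, J mol^-1 K^-1
    dH, dS = -25000.0, -40.0      # illustrative values (J/mol, J/mol/K)
    temps = [288.0, 298.0, 308.0, 318.0]
    # K = exp(-dG/RT) with dG = dH - T*dS
    ks = [math.exp(-(dH - T * dS) / (R * T)) for T in temps]

    # van't Hoff plot: linear least squares of ln K against 1/T
    xs = [1.0 / T for T in temps]
    ys = [math.log(k) for k in ks]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    dH_est = -slope * R           # slope = -dH/R
    dS_est = intercept * R        # intercept = dS/R
    ```

    The sign pattern of the fitted ΔH and ΔS is what is used to classify the dominant binding forces (e.g. both negative suggesting hydrogen bonding and van der Waals interactions).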

  4. Cardiovascular oscillations: in search of a nonlinear parametric model

    NASA Astrophysics Data System (ADS)

    Bandrivskyy, Andriy; Luchinsky, Dmitry; McClintock, Peter V.; Smelyanskiy, Vadim; Stefanovska, Aneta; Timucin, Dogan

    2003-05-01

    We suggest a fresh approach to the modeling of the human cardiovascular system. Taking advantage of a new Bayesian inference technique able to deal with stochastic nonlinear systems, we show that one can estimate parameters for models of the cardiovascular system directly from measured time series. We present preliminary results of inferring the parameters of a model of coupled oscillators from measured cardiovascular data addressing cardiorespiratory interaction. We argue that the inference technique offers a very promising tool for such modeling, capable of contributing significantly towards the solution of a long-standing challenge: the development of new diagnostic techniques based on noninvasive measurements.

  5. System analysis of force feedback microscopy

    NASA Astrophysics Data System (ADS)

    Rodrigues, Mario S.; Costa, Luca; Chevrier, Joël; Comin, Fabio

    2014-02-01

    It was shown recently that the Force Feedback Microscope (FFM) can avoid the jump-to-contact in Atomic Force Microscopy even when the cantilevers used are very soft, thus increasing force resolution. In this letter, we explore theoretical aspects of the associated real-time control of the tip position. We take into account the lever's characteristics in its environment, such as spring constant, mass and dissipation coefficient, as well as the operating conditions such as controller gains and interaction force. We show how the controller parameters are determined so that the FFM functions at its best, and estimate the bandwidth of the system under these conditions.

  6. Jupiter's outer atmosphere.

    NASA Technical Reports Server (NTRS)

    Brice, N. M.

    1973-01-01

    The current state of the theory of Jupiter's outer atmosphere is briefly reviewed. The similarities and dissimilarities between the terrestrial and Jovian upper atmospheres are discussed, including the interaction of the solar wind with the planetary magnetic fields. Estimates of Jovian parameters are given, including magnetosphere and auroral zone sizes, ionospheric conductivity, energy inputs, and solar wind parameters at Jupiter. The influence of the large centrifugal force on the cold plasma distribution is considered. The Jovian Van Allen belt is attributed to solar wind particles diffused in toward the planet by dynamo electric fields from ionospheric neutral winds, and the consequences of this theory are indicated.

  7. Multiple-Hit Parameter Estimation in Monolithic Detectors

    PubMed Central

    Barrett, Harrison H.; Lewellen, Tom K.; Miyaoka, Robert S.

    2014-01-01

    We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%–12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator: more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied. PMID:23193231

  8. Non-parametric directionality analysis - Extension for removal of a single common predictor and application to time series.

    PubMed

    Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob

    2016-08-01

    The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low-order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time-domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions, and to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel non-parametric approach to the estimation of directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Local quantum uncertainty guarantees the measurement precision for two coupled two-level systems in non-Markovian environment

    NASA Astrophysics Data System (ADS)

    Wu, Shao-xiong; Zhang, Yang; Yu, Chang-shui

    2018-03-01

    Quantum Fisher information (QFI) is an important quantity for the precision of quantum parameter estimation, via the quantum Cramér-Rao inequality. When the quantum state satisfies the von Neumann-Landau equation, the local quantum uncertainty (LQU), a kind of quantum correlation present in a bipartite mixed state, guarantees a lower bound on the QFI in the optimal phase estimation protocol (Girolami et al., 2013). However, in open quantum systems there is generally no explicit relation between LQU and QFI. In this paper, we study the relation between LQU and QFI in an open system composed of two interacting two-level systems coupled to independent non-Markovian environments, with an entangled initial state embedded with a phase parameter θ. The analytical calculations show that the QFI does not depend on the phase parameter θ, and that its decay can be restrained by enhancing the coupling strength or the non-Markovianity. Meanwhile, the LQU depends on the phase parameter θ and exhibits rich behavior. In particular, we find that the LQU bounds the QFI well when the coupling between the two systems is switched off or the initial state is a Bell state.
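
    As a reminder of the bound underlying this precision analysis, the quantum Cramér-Rao inequality can be written in its standard form (symbols as conventionally defined, not taken from this abstract):

    ```latex
    % Quantum Cramér-Rao bound: for \nu independent repetitions of an
    % experiment on the state \rho_\theta, any unbiased estimator
    % \hat{\theta} of the encoded parameter \theta satisfies
    \operatorname{Var}(\hat{\theta}) \;\geq\; \frac{1}{\nu\, F_Q(\rho_\theta)},
    % where F_Q denotes the quantum Fisher information of \rho_\theta.
    ```

    The LQU result cited above lower-bounds F_Q itself in the optimal phase protocol, which by this inequality translates into a guaranteed estimation precision.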

  10. Economic policy optimization based on both one stochastic model and the parametric control theory

    NASA Astrophysics Data System (ADS)

    Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit

    2016-06-01

    A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated, based on its log-linearization, by the Bayesian approach. The nonlinear model is verified by retroprognosis, estimation of stability indicators of mappings specified by the model, and estimation of the degree of coincidence of the effects of internal and external shocks on macroeconomic indicators on the basis of the estimated nonlinear model and its log-linearization. On the basis of the nonlinear model, the parametric control problems of economic growth and volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).

  11. Combining techniques for screening and evaluating interaction terms on high-dimensional time-to-event data.

    PubMed

    Sariyar, Murat; Hoffmann, Isabell; Binder, Harald

    2014-02-26

    Molecular data, e.g. arising from microarray technology, are often used for predicting survival probabilities of patients. For multivariate risk prediction models on such high-dimensional data, there are established techniques that combine parameter estimation and variable selection. One big challenge is to incorporate interactions into such prediction models. In this feasibility study, we present building blocks for evaluating and incorporating interaction terms in high-dimensional time-to-event settings, especially for settings in which it is computationally too expensive to check all possible interactions. We use a boosting technique for estimation of effects and the following building blocks for pre-selecting interactions: (1) resampling, (2) random forests and (3) orthogonalization as a data pre-processing step. In a simulation study, the strategy that uses all building blocks is able to detect true main effects and interactions with high sensitivity in different kinds of scenarios. The main challenge is interactions composed of variables that do not represent main effects, but our findings are also promising in this regard. Results on real-world data illustrate that effect sizes of interactions frequently may not be large enough to improve prediction performance, even though the interactions are potentially of biological relevance. Screening interactions through random forests is feasible and useful when one is interested in finding relevant two-way interactions. The other building blocks also contribute considerably to an enhanced pre-selection of interactions. We determined the limits of interaction detection in terms of necessary effect sizes. Our study emphasizes the importance of making full use of existing methods in addition to establishing new ones.

  12. Direct computation of general chemical energy differences: Application to ionization potentials, excitation, and bond energies.

    PubMed

    Beste, A; Harrison, R J; Yanai, T

    2006-08-21

    Chemists are mainly interested in energy differences. In contrast, most quantum chemical methods yield the total energy, which is a large number compared to the difference and must therefore be computed to a higher relative precision than would be necessary for the difference alone. Hence, it is desirable to compute energy differences directly, thereby avoiding the precision problem. Whenever it is possible to find a parameter which transforms smoothly from an initial to a final state, the energy difference can be obtained by integrating the energy derivative with respect to that parameter (cf. thermodynamic integration or adiabatic connection methods). If the dependence on the parameter is predominantly linear, accurate results can be obtained by single-point integration. In density functional theory and Hartree-Fock, we applied the formalism to ionization potentials, excitation energies, and chemical bond breaking. Example calculations for ionization potentials and excitation energies showed that accurate results could be obtained with a linear estimate. For breaking bonds, we introduce a nongeometrical parameter which gradually turns on the interaction between two fragments of a molecule. The interaction changes the potentials used to determine the orbitals as well as the constraint on the orbitals to be orthogonal.
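
    A minimal numerical sketch of the integration idea described above, using a hypothetical energy curve E(λ) invented for illustration (not taken from the paper): when E depends nearly linearly on the coupling parameter λ, a single midpoint evaluation of dE/dλ already reproduces ΔE.

    ```python
    import numpy as np

    # Hypothetical, mildly nonlinear energy curve (illustration only):
    # E(lambda) interpolates between the initial state (lambda = 0)
    # and the final state (lambda = 1).
    def E(lam):
        return -10.0 + 3.0 * lam + 0.1 * lam**2

    def dE(lam):                       # analytic derivative dE/dlambda
        return 3.0 + 0.2 * lam

    # Delta E = E(1) - E(0) = integral_0^1 (dE/dlambda) dlambda
    delta_exact = E(1.0) - E(0.0)

    # Single-point (midpoint) integration: accurate when E is nearly linear
    delta_midpoint = dE(0.5)

    # Many-point trapezoidal integration for comparison
    lams = np.linspace(0.0, 1.0, 101)
    vals = dE(lams)
    delta_trapz = float(np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(lams)))

    print(delta_exact, delta_midpoint, delta_trapz)
    ```

    For this quadratic E, the midpoint rule is exact; in general, the closer the λ-dependence is to linear, the less the single-point estimate deviates from the full integral.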

  13. A powerful test for Balaam's design.

    PubMed

    Mori, Joji; Kano, Yutaka

    2015-01-01

    The crossover trial design (AB/BA design) is often used to compare the effects of two treatments in medical science because it performs within-subject comparisons, which increase the precision of a treatment effect (i.e., a between-treatment difference). However, the AB/BA design cannot be applied in the presence of carryover effects and/or a treatment-by-period interaction. In such cases, Balaam's design is a more suitable choice. Unlike the AB/BA design, Balaam's design inflates the variance of an estimate of the treatment effect, thereby reducing the statistical power of tests; this is a serious drawback of the design. Although the variance of parameter estimators in Balaam's design has been extensively studied, estimators of the treatment effect that improve the inference have received little attention. If the estimate of the treatment effect is obtained by solving the mixed model equations, the AA and BB sequences are excluded from the estimation process. In this study, we develop a new estimator of the treatment effect and a new test statistic using the estimator, with the aim of improving statistical inference in Balaam's design. Simulation studies indicate that the type I error of the proposed test is well controlled, and that the test is more powerful and has more suitable characteristics than other existing tests when interactions are substantial. The proposed test is also applied to analyze a real dataset. Copyright © 2015 John Wiley & Sons, Ltd.

  14. Angular ellipticity correlations in a composite alignment model for elliptical and spiral galaxies and inference from weak lensing

    NASA Astrophysics Data System (ADS)

    Tugendhat, Tim M.; Schäfer, Björn Malte

    2018-05-01

    We investigate a physical, composite alignment model for both spiral and elliptical galaxies and its impact on cosmological parameter estimation from weak lensing for a tomographic survey. Ellipticity correlation functions and angular ellipticity spectra for spiral and elliptical galaxies are derived on the basis of tidal interactions with the cosmic large-scale structure and compared to the tomographic weak-lensing signal. We find that elliptical galaxies cause a contribution to the weak-lensing dominated ellipticity correlation on intermediate angular scales between ℓ ≃ 40 and ℓ ≃ 400, before that of spiral galaxies dominates on higher multipoles. The predominant term on intermediate scales is the negative cross-correlation between intrinsic alignments and weak gravitational lensing (GI-alignment). We simulate parameter inference from weak gravitational lensing with intrinsic alignments unaccounted for; the bias induced by ignoring intrinsic alignments in a survey like Euclid is shown to be several times larger than the statistical error and can lead to faulty conclusions when comparing to other observations. The biases generally point in different directions in parameter space, such that in some cases one can observe a partial cancellation effect. Furthermore, it is shown that the biases increase with the number of tomographic bins used for the parameter estimation process. We quantify this parameter estimation bias in units of the statistical error and compute the loss of Bayesian evidence for a model due to the presence of systematic errors, as well as the Kullback-Leibler divergence to quantify the distance between the true model and the wrongly inferred one.

  15. Investigation of genotype x environment interactions for weaning weight for Herefords in three countries.

    PubMed

    de Mattos, D; Bertrand, J K; Misztal, I

    2000-08-01

    The objective of this study was to investigate the possibility of genotype x environment interactions for weaning weight (WWT) between different regions of the United States (US) and between Canada (CA), Uruguay (UY), and US for populations of Hereford cattle. Original data were composed of 487,661, 102,986, and 2,322,722 edited weaning weight records from CA, UY, and US, respectively. A total of 359 sires were identified as having progeny across all three countries; 240 of them had at least one progeny with a record in each environment. The data sets within each country were reduced by retaining records from herds with more than 500 WWT records, with an average contemporary group size of greater than nine animals, and that contained WWT records from progeny or maternal grand-progeny of the across-country sires. Data sets within each country were further reduced by randomly selecting among remaining herds. Four regions within US were defined: Upper Plains (UP), Cornbelt (CB), South (S), and Gulf Coast (GC). Similar sampling criteria and common international sires were used to form the within-US regional data sets. A pairwise analysis was done between countries and regions within US (UP-CB vs S-GC, UP vs CB, and S vs GC) for the estimation of (co)variance components and genetic correlation between environments. An accelerated EM-REML algorithm and a multiple-trait animal model that considered WWT as a different trait in each environment were used to estimate parameters in each pairwise analysis. Direct and maternal (in parentheses) estimated genetic correlations for CA vs UY, CA vs US, US vs UY, UP-CB vs S-GC, UP vs CB, and S vs GC were .88 (.84), .86 (.82), .90 (.85), .88 (.87), .88 (.84), and .87 (.85), respectively. 
The general absence of genotype x country interactions observed in this study, together with a prior study that showed the similarity of genetic and environmental parameters across the three countries, strongly indicates that a joint WWT genetic evaluation for Hereford cattle could be conducted using a model that treated the information from CA, UY, and US as a single population using single population-wide genetic parameters.

  16. Updated constraints on self-interacting dark matter from Supernova 1987A

    NASA Astrophysics Data System (ADS)

    Mahoney, Cameron; Leibovich, Adam K.; Zentner, Andrew R.

    2017-08-01

    We revisit SN1987A constraints on light, hidden-sector gauge bosons ("dark photons") that are coupled to the standard model through kinetic mixing with the photon. These constraints arise because excessive bremsstrahlung radiation of the dark photon can lead to rapid cooling of the SN1987A progenitor core, in contradiction to the observed neutrinos from that event. The models we consider are of interest as phenomenological models of strongly self-interacting dark matter. We clarify several possible ambiguities in the literature and identify errors in prior analyses. We find constraints on the dark photon mixing parameter that are in rough agreement with the early estimates of Dent et al. [arXiv:1201.2683], but only because significant errors in their analyses fortuitously canceled. Our constraints are in good agreement with subsequent analyses by Rrapaj & Reddy [Phys. Rev. C 94, 045805 (2016), 10.1103/PhysRevC.94.045805] and Hardy & Lasenby [J. High Energy Phys. 02 (2017) 33, 10.1007/JHEP02(2017)033]. We estimate the dark photon bremsstrahlung rate using one-pion exchange (OPE), while Rrapaj & Reddy use a soft radiation approximation (SRA) to exploit measured nuclear scattering cross sections. We find that the mixing parameter constraints obtained through the OPE and SRA approximations differ by roughly a factor of ~2-3. Hardy & Lasenby include plasma effects in their calculations, finding significantly weaker constraints on dark photon mixing for dark photon masses below ~10 MeV; we do not consider plasma effects. Lastly, we point out that the properties of the SN1987A progenitor core remain somewhat uncertain and that this uncertainty alone causes uncertainty of at least a factor of ~2-3 in the excluded values of the dark photon mixing parameter. Further refinement of these estimates is unwarranted until either the interior of the SN1987A progenitor is better understood or additional, large, and heretofore neglected effects, such as the plasma interactions studied by Hardy & Lasenby, are identified.

  17. Point and interval estimation of pollinator importance: a study using pollination data of Silene caroliniana.

    PubMed

    Reynolds, Richard J; Fenster, Charles B

    2008-05-01

    Pollinator importance, the product of visitation rate and pollinator effectiveness, is a descriptive parameter of the ecology and evolution of plant-pollinator interactions. Naturally, sources of its variation should be investigated, but the SE of pollinator importance has never been properly reported. Here, a Monte Carlo simulation study and a result from mathematical statistics on the variance of the product of two random variables are used to estimate the mean and confidence limits of pollinator importance for three visitor species of the wildflower Silene caroliniana. Both methods provided similar estimates of mean pollinator importance and its interval when the sample sizes of the visitation and effectiveness datasets were comparatively large. These approaches allowed us to determine that bumblebee importance was significantly greater than that of the clearwing hawkmoth, which in turn was significantly greater than that of the beefly. The methods could be used to statistically quantify temporal and spatial variation in pollinator importance of particular visitor species. The approaches may be extended for estimating the variance of a product of more than two random variables. However, unless the distribution function of the resulting statistic is known, the simulation approach is preferable for calculating the parameter's confidence limits.
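
    The "result from mathematical statistics" invoked here is Goodman's exact expression for the variance of a product of two independent random variables. A short sketch with made-up visitation and effectiveness numbers (not the Silene data) checks it against a Monte Carlo estimate:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Pollinator importance = visitation rate * effectiveness.  For independent
    # random variables X and Y, Goodman's (1960) exact result gives
    #   Var(XY) = mu_x^2 var_y + mu_y^2 var_x + var_x var_y.
    # The numbers below are invented for illustration.
    mu_x, sd_x = 4.0, 1.0     # hypothetical visitation rate (visits/hour)
    mu_y, sd_y = 0.3, 0.05    # hypothetical per-visit effectiveness

    var_exact = mu_x**2 * sd_y**2 + mu_y**2 * sd_x**2 + sd_x**2 * sd_y**2

    # Monte Carlo estimate of the same variance, mirroring the paper's
    # simulation approach
    x = rng.normal(mu_x, sd_x, size=1_000_000)
    y = rng.normal(mu_y, sd_y, size=1_000_000)
    var_mc = (x * y).var()

    print(var_exact, var_mc)   # the two estimates agree closely
    ```

    The simulation route generalizes directly to products of more than two variables, which is why the abstract recommends it when the distribution of the product statistic is unknown.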

  18. Ubiquitous human upper-limb motion estimation using wearable sensors.

    PubMed

    Zhang, Zhi-Qiang; Wong, Wai-Choong; Wu, Jian-Kang

    2011-07-01

    Human motion capture technologies have been used in a wide spectrum of applications, including interactive gaming and learning, animation, film special effects, health care, and navigation. The existing human motion capture techniques, which use structured multiple high-resolution cameras in a dedicated studio, are complicated and expensive. With the rapid development of microsensors-on-chip, human motion capture using wearable microsensors has become an active research topic. Because of its agility of movement, the upper limb has been regarded as the most difficult subject in human motion capture. In this paper, we take the upper limb as our research subject and propose a novel ubiquitous upper-limb motion estimation algorithm, which concentrates on modeling the relationship between upper-arm movement and forearm movement. A link structure with 5 degrees of freedom (DOF) is proposed to model the human upper-limb skeleton. Parameters are defined according to the Denavit-Hartenberg convention, forward kinematics equations are derived, and an unscented Kalman filter is deployed to estimate the defined parameters. The experimental results show that the proposed upper-limb motion capture and analysis algorithm outperforms other fusion methods and provides accurate results in comparison to the BTS optical motion tracker.
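
    The Denavit-Hartenberg step mentioned above can be sketched as follows; the link parameters here are a placeholder two-link planar chain, not the paper's 5-DOF upper-limb model:

    ```python
    import numpy as np

    def dh_transform(theta, d, a, alpha):
        """Homogeneous transform from link i-1 to link i (standard DH convention)."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0],
        ])

    def forward_kinematics(dh_rows):
        """Chain the per-joint transforms; returns the base-to-end-effector transform."""
        T = np.eye(4)
        for theta, d, a, alpha in dh_rows:
            T = T @ dh_transform(theta, d, a, alpha)
        return T

    # Sanity check on a two-link planar arm: both joint angles zero,
    # link lengths 0.3 m, so the end effector sits at x = 0.6 m.
    T = forward_kinematics([(0.0, 0.0, 0.3, 0.0), (0.0, 0.0, 0.3, 0.0)])
    print(T[:3, 3])
    ```

    In the paper's setting, the five joint angles are the unknowns; the unscented Kalman filter propagates candidate angle states through exactly this kind of forward-kinematics chain and corrects them against the wearable-sensor measurements.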

  19. Historical trends and the long-term changes of the hydrological cycle components in a Mediterranean river basin.

    PubMed

    Mentzafou, A; Wagner, S; Dimitriou, E

    2018-04-29

    Identifying the historical hydrometeorological trends in a river basin is necessary for understanding the dominant interactions between climate, human activities and local hydromorphological conditions. Estimating the hydrological reference conditions of a river is also crucial for accurately estimating the impacts of human water-related activities and for designing appropriate water management schemes. In this effort, the output of a regional past-climate model covering the period from 1660 to 1990 was used, in combination with a dynamic, spatially distributed hydrologic model, to estimate the past and recent trends in the main hydrologic parameters, such as overland flow, water storage and evapotranspiration, in a Mediterranean river basin. The simulated past hydrologic conditions (1660-1960) were compared with the current hydrologic regime (1960-1990) to assess the magnitude of human and natural impacts on the identified hydrologic trends. The hydrological components of the recent period 2008-2016 were also examined in relation to the impact of human activities. The estimated long-term trends of the hydrologic parameters were partially attributed to varying atmospheric forcing due to volcanic activity combined with spontaneous meteorological fluctuations. Copyright © 2018. Published by Elsevier B.V.

  20. Pharmacokinetic and pharmacodynamic drug interactions of carbamazepine and glibenclamide in healthy albino Wistar rats

    PubMed Central

    Prashanth, S.; Kumar, A. Anil; Madhu, B.; Rama, N.; Sagar, J. Vidya

    2011-01-01

    Aims: To determine the pharmacokinetic and pharmacodynamic drug interaction of carbamazepine, a prototype drug used to treat painful diabetic neuropathy, with glibenclamide in healthy albino Wistar rats following single- and multiple-dose treatment. Materials and Methods: Therapeutic doses (TD) of glibenclamide and carbamazepine were administered to the animals. Blood glucose levels were estimated by the GOD/POD method, and plasma glibenclamide concentrations were estimated by a sensitive RP-HPLC method to calculate pharmacokinetic parameters. Results: In the single-dose study, the percentage reduction of blood glucose levels and the glibenclamide concentrations of rats treated with both carbamazepine and glibenclamide were significantly increased compared with rats treated with glibenclamide alone; the mechanism behind this interaction may be inhibition of P-glycoprotein-mediated transport of glibenclamide by carbamazepine. In the multiple-dose study, however, the percentage reduction of blood glucose levels and the glibenclamide concentrations were reduced, which may be due to the combination of inhibition of P-glycoprotein-mediated transport and induction of CYP2C9, the enzyme through which glibenclamide is metabolised. Conclusions: In the present study, a pharmacokinetic and pharmacodynamic interaction between carbamazepine and glibenclamide was observed. The possible interaction involves both P-gp and CYP enzymes. Investigating this type of interaction pre-clinically helps to avoid drug-drug interactions in the clinical situation. PMID:21701639

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Liu, Z.; Zhang, S.

    Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean-atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric temperature and wind improves the reliability of multiple-parameter estimation; the errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology: with the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
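
    The general technique, estimating a parameter by treating it as part of the state updated by an ensemble filter, can be sketched on a toy scalar model (invented here; it is not the study's coupled GCM):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy ensemble-based parameter estimation: a scalar parameter p drives a
    # "model" prediction h(p) = sin(t) * p.  Assimilating noisy observations
    # of h(p_true) pulls a biased parameter ensemble toward the true value.
    p_true = 2.0
    obs_sd = 0.1
    n_ens = 100
    p_ens = rng.normal(0.5, 1.0, n_ens)      # initial (biased) parameter ensemble

    for t in np.linspace(0.1, 10.0, 100):
        forcing = np.sin(t)
        y_ens = forcing * p_ens              # ensemble of predicted observations
        y_obs = forcing * p_true + rng.normal(0.0, obs_sd)
        # Kalman gain from ensemble statistics (scalar case)
        cov_py = np.cov(p_ens, y_ens)[0, 1]
        var_y = y_ens.var(ddof=1) + obs_sd**2
        gain = cov_py / var_y
        # perturbed-observation ensemble Kalman update of the parameter
        p_ens = p_ens + gain * (y_obs + rng.normal(0.0, obs_sd, n_ens) - y_ens)

    print(p_ens.mean())   # converges toward the true value 2.0
    ```

    The study's setting is vastly higher dimensional, but the principle is the same: the cross-covariance between parameter perturbations and predicted observations, estimated from the ensemble, tells the filter how to correct the parameter.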

  2. Bayesian spatio-temporal modeling of particulate matter concentrations in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Manga, Edna; Awang, Norhashidah

    2016-06-01

    This article presents an application of a Bayesian spatio-temporal Gaussian process (GP) model to particulate matter concentrations from Peninsular Malaysia. We analyze daily PM10 concentration levels from 35 monitoring sites in June and July 2011. The spatio-temporal model, set in a Bayesian hierarchical framework, allows for the inclusion of informative covariates, meteorological variables and spatio-temporal interactions. Posterior density estimates of the model parameters are obtained by Markov chain Monte Carlo methods. Preliminary data analysis indicates that information on PM10 levels at sites classified as industrial locations could explain part of the space-time variations, so we include a site-type indicator in our modeling efforts. Results for the parameter estimates of the fitted GP model show significant spatio-temporal structure and a positive effect of the location-type explanatory variable. We also compute validation criteria for the out-of-sample sites that show the adequacy of the model for predicting PM10 at unmonitored sites.

  3. Acute effect of Vagus nerve stimulation parameters on cardiac chronotropic, inotropic, and dromotropic responses

    NASA Astrophysics Data System (ADS)

    Ojeda, David; Le Rolle, Virginie; Romero-Ugalde, Hector M.; Gallet, Clément; Bonnet, Jean-Luc; Henry, Christine; Bel, Alain; Mabo, Philippe; Carrault, Guy; Hernández, Alfredo I.

    2017-11-01

    Vagus nerve stimulation (VNS) is an established therapy for drug-resistant epilepsy and depression, and is considered a potential therapy for other pathologies, including Heart Failure (HF) and inflammatory diseases. In the case of HF, several experimental studies on animals have shown an improvement in cardiac function and a reverse remodeling of the cardiac cavity when VNS is applied. However, recent clinical trials have not been able to reproduce the same response in humans. One hypothesis to explain this lack of response relates to the way in which stimulation parameters are defined. The combined effect of VNS parameters is still poorly known, especially in the case of VNS delivered synchronously with cardiac activity. In this paper, we propose a methodology to analyze the acute cardiovascular effects of VNS parameters individually, as well as their interactive effects. A Latin hypercube sampling method was applied to design a uniform experimental plan. Data gathered from this experimental plan were used to produce a Gaussian process regression (GPR) model in order to estimate unobserved VNS sequences. Finally, a Morris screening sensitivity analysis was applied to each obtained GPR model. Results highlight the dominant effects of pulse current, pulse width and number of pulses over frequency and delay and, more importantly, the degree of interaction between these parameters on the most important acute cardiovascular responses. In particular, strong interacting effects between current and pulse width were found. Similar sensitivity profiles were observed for chronotropic, dromotropic and inotropic effects. These findings are of primary importance for the future development of closed-loop, personalized neuromodulation technologies.
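
    The design step, a Latin hypercube plan over the five stimulation parameters named in the abstract, might look like the following sketch; the parameter bounds are invented placeholders, not the study's actual ranges:

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Hypothetical bounds for the five VNS parameters (illustration only)
    params = ["current_mA", "pulse_width_us", "n_pulses", "frequency_Hz", "delay_ms"]
    l_bounds = [0.1,  50.0,  1.0,  5.0,   0.0]
    u_bounds = [2.5, 500.0, 20.0, 50.0, 100.0]

    sampler = qmc.LatinHypercube(d=len(params), seed=0)
    unit_sample = sampler.random(n=20)                   # 20 points in [0, 1)^5
    design = qmc.scale(unit_sample, l_bounds, u_bounds)  # map to parameter ranges

    # Each row is one VNS sequence to test; each column stratifies its parameter
    # into 20 equal bins, so the plan covers every range uniformly.
    print(design.shape)   # (20, 5)
    ```

    A GPR surrogate fitted to the responses measured on such a design can then be probed at unobserved parameter combinations, which is what makes the Morris screening over the full parameter space affordable.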

  4. A Pipeline for Constructing a Catalog of Multi-method Models of Interacting Galaxies

    NASA Astrophysics Data System (ADS)

    Holincheck, Anthony

    Galaxies represent a fundamental unit of matter for describing the large-scale structure of the universe. One of the major processes affecting the formation and evolution of galaxies is mutual interaction. These interactions can include gravitational tidal distortion, mass transfer, and even mergers. In any hierarchical model, mergers are the key mechanism in galaxy formation and evolution. Computer simulations of interacting galaxies have evolved in the last four decades from simple restricted three-body algorithms to full n-body gravity models. These codes often include sophisticated physical mechanisms such as gas dynamics, supernova feedback, and central black holes. As the level of complexity, and perhaps realism, increases, so does the amount of computational resources needed. These advanced simulations are often used in parameter studies of interactions, but they are usually only employed in an ad hoc fashion to recreate the dynamical history of specific sets of interacting galaxies, with only a few dozen or at most a few hundred sets of simulation parameters being attempted. This dissertation presents a prototype pipeline for modeling specific pairs of interacting galaxies in bulk. The process begins with a simple image of the current disturbed morphology and an estimate of the distance to the system and the mass of the galaxies. With the use of an updated restricted three-body simulation code and the help of Citizen Scientists, the pipeline is able to sample hundreds of thousands of points in parameter space for each system. Through the use of a convenient interface and an innovative scoring algorithm, the pipeline aids researchers in identifying the best set of simulation parameters. This dissertation demonstrates a successful recreation of the disturbed morphologies of 62 pairs of interacting galaxies. The pipeline also provides for examining the level of convergence and uniqueness of the dynamical properties of each system. 
By creating a population of models for actual systems, the current research is able to compare simulation-based and observational values on a larger scale than previous efforts. Several potential relationships between star formation rate and dynamical time since closest approach are presented.
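The restricted three-body approach mentioned above treats the stars of each disk as massless test particles responding only to the gravity of the two galaxy centers, which is what makes sampling hundreds of thousands of parameter sets affordable. A minimal sketch under simplifying assumptions (fixed point-mass primaries, a small softening length, and a leapfrog integrator; none of these details are claimed to match the dissertation's actual code):

```python
import numpy as np

def accel(pos, primaries, masses, G=1.0, soft=1e-3):
    """Gravitational acceleration on massless test particles from the
    two galaxy point masses (restricted three-body approximation)."""
    a = np.zeros_like(pos)
    for p, m in zip(primaries, masses):
        d = p - pos                                # vectors to the primary
        r2 = np.sum(d * d, axis=-1, keepdims=True) + soft**2
        a += G * m * d / r2**1.5                   # softened inverse-square law
    return a

def leapfrog(pos, vel, primaries, masses, dt, n_steps):
    """Kick-drift-kick integration of the test particles; the primaries
    are held fixed here for simplicity (real codes move them too)."""
    for _ in range(n_steps):
        vel += 0.5 * dt * accel(pos, primaries, masses)
        pos += dt * vel
        vel += 0.5 * dt * accel(pos, primaries, masses)
    return pos, vel
```

In code units with G = M = 1, a particle started on a circular orbit of radius 1 returns close to its starting point after one period of 2π, which is a quick sanity check on the integrator.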

  5. Interaction of Low Frequency External Electric Fields and Pancreatic β-Cell: A Mathematical Modeling Approach to Identify the Influence of Excitation Parameters.

    PubMed

    Farashi, Sajjad; Sasanpour, Pezhman; Rafii-Tabar, Hashem

    2018-05-24

    Purpose-Although the effect of electromagnetic fields on biological systems has attracted attention in recent years, there has been no conclusive result concerning the effects of such interactions or the underlying mechanisms involved. Besides the complexity of biological systems, the parameters of the applied electromagnetic field have not been estimated in most of the experiments. Materials and Methods-In this study, we have used a computational approach to find the excitation parameters of an external electric field that produce sensible effects on the function of the insulin secretory machinery, whose failure triggers diabetes. A mathematical model of the human β-cell has been used, and the effects of external electric fields with different amplitudes, frequencies, and wave shapes have been studied. Results-The results from our simulations show that the external electric field can influence the membrane electrical activity, and perhaps insulin secretion, when its amplitude exceeds a threshold value. Furthermore, our simulations reveal that different waveforms have distinct effects on the β-cell membrane electrical activity, and that characteristic features of the excitation such as frequency can change the interaction mechanism. Conclusion-The results could help researchers investigate the possible role of environmental electromagnetic fields in the promotion of diabetes.

  6. Density functional theory and phytochemical study of 8-hydroxyisodiospyrin

    NASA Astrophysics Data System (ADS)

    Ullah, Zakir; Ata-ur-Rahman; Fazl-i-Sattar; Rauf, Abdur; Yaseen, Muhammad; Hassan, Waseem; Tariq, Muhammad; Ayub, Khurshid; Tahir, Asif Ali; Ullah, Habib

    2015-09-01

    Comprehensive theoretical and experimental studies of a natural product, 8-hydroxyisodiospyrin (HDO), have been carried out. Based on the correlation of experimental and theoretical data, an appropriate computational model was developed for obtaining the electronic, spectroscopic, and thermodynamic parameters of HDO. First, the structure of HDO was confirmed by the close agreement between theory and experiment, prior to determination of its electroactive nature. Hybrid density functional theory (DFT) is employed for all theoretical simulations. The experimental and predicted IR and UV-vis spectra [B3LYP/6-31+G(d,p) level of theory] correlate excellently. Inter-molecular non-covalent interaction of HDO with different gases such as NH3, CO2, CO, and H2O is investigated through the geometrical counterpoise (gCP) method, i.e., B3LYP-gCP-D3/6-31G∗. Furthermore, the inter-molecular interaction is also supported by geometrical parameters, electronic properties, thermodynamic parameters, and charge analysis. All these characterizations corroborate each other and confirm the electroactive nature (non-covalent interaction ability) of HDO toward the studied gases. Electronic properties such as the ionization potential (IP), electron affinity (EA), electrostatic potential (ESP), density of states (DOS), HOMO, LUMO, and band gap of HDO have been estimated theoretically for the first time.

  7. Multisubstrate biodegradation kinetics of naphthalene, phenanthrene, and pyrene mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guha, S.; Peters, C.A.; Jaffe, P.R.

    Biodegradation kinetics of naphthalene, phenanthrene, and pyrene were studied in sole-substrate systems, and in binary and ternary mixtures, to examine substrate interactions. The experiments were conducted in aerobic batch aqueous systems inoculated with a mixed culture that had been isolated from soils contaminated with polycyclic aromatic hydrocarbons (PAHs). Monod kinetic parameters and yield coefficients for the individual compounds were estimated from substrate depletion and CO2 evolution rate data in sole-substrate experiments. In all three binary mixture experiments, biodegradation kinetics were comparable to the sole-substrate kinetics. In the ternary mixture, biodegradation of naphthalene was inhibited and the biodegradation rates of phenanthrene and pyrene were enhanced. A multisubstrate form of the Monod kinetic model was found to adequately predict substrate interactions in the binary and ternary mixtures using only the parameters derived from sole-substrate experiments. Numerical simulations of biomass growth kinetics explain the observed range of behaviors in PAH mixtures. In general, the biodegradation rates of the more degradable and abundant compounds are reduced due to competitive inhibition, but enhanced biodegradation of the more recalcitrant PAHs occurs due to simultaneous biomass growth on multiple substrates. In PAH-contaminated environments, substrate interactions may be very large due to additive effects from the large number of compounds present.
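A common multisubstrate form of the Monod model extends the sole-substrate kinetics by inflating each substrate's half-saturation term with the scaled concentrations of its competitors, while biomass grows simultaneously on all substrates. A sketch with invented parameter values (not the paper's fitted ones), using simple forward-Euler integration:

```python
import numpy as np

def multisubstrate_monod_rates(S, X, mu_max, Ks, Y):
    """Competitive multisubstrate Monod kinetics: the saturation term of
    substrate i is K_i * (1 + sum_j S_j / K_j), so abundant competitors
    slow each other's degradation; biomass grows on all substrates."""
    competition = Ks * np.sum(S / Ks)      # K_i * sum_j S_j / K_j
    dS = -(mu_max / Y) * X * S / (Ks + competition)
    dX = -np.sum(Y * dS)                   # yield-weighted growth
    return dS, dX

def simulate(S0, X0, mu_max, Ks, Y, dt=0.01, t_end=10.0):
    """Forward-Euler integration (a sketch; real fits use stiff solvers)."""
    S, X = np.array(S0, float), float(X0)
    for _ in range(int(t_end / dt)):
        dS, dX = multisubstrate_monod_rates(S, X, mu_max, Ks, Y)
        S = np.maximum(S + dt * dS, 0.0)   # concentrations stay nonnegative
        X += dt * dX
    return S, X
```

Running this with three substrates of decreasing degradability reproduces the qualitative pattern described above: all substrate concentrations fall while biomass rises.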

  8. Estimating the Stoichiometry of HIV Neutralization

    PubMed Central

    Magnus, Carsten; Regoes, Roland R.

    2010-01-01

    HIV-1 virions infect target cells by first establishing contact between envelope glycoprotein trimers on the virion's surface and CD4 receptors on a target cell, recruiting co-receptors, fusing with the cell membrane, and finally releasing the genetic material into the target cell. Specific experimental setups allow the study of the number of trimer-receptor interactions needed for infection, i.e., the stoichiometry of entry, and also the number of antibodies needed to prevent one trimer from engaging successfully in the entry process, i.e., the stoichiometry of (trimer) neutralization. Mathematical models are required to infer the stoichiometric parameters from these experimental data. Recently, we developed mathematical models for the estimation of the stoichiometry of entry [1]. In this article, we show how our models can be extended to investigate the stoichiometry of trimer neutralization. We study how various biological parameters affect the estimate of the stoichiometry of neutralization. We find that the distribution of trimer numbers—which is also an important determinant of the stoichiometry of entry—influences the estimated value of the stoichiometry of neutralization. In contrast, other parameters, which characterize the experimental system, diminish the information we can extract from the data about the stoichiometry of neutralization, and thus reduce our confidence in the estimate. We illustrate the use of our models by re-analyzing previously published data [2] containing measurements of the neutralization sensitivity of viruses with different envelope proteins to antibodies with various specificities. Our mathematical framework represents the formal basis for the estimation of the stoichiometry of neutralization. 
Together with the stoichiometry of entry, the stoichiometry of trimer neutralization will allow one to calculate how many antibodies are required to neutralize a virion or even an entire population of virions. PMID:20333245
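The two stoichiometries combine naturally in a two-layer binomial model: a trimer is neutralized once enough of its antibody sites are occupied, and the virion stays infectious while enough trimers remain functional. The sketch below uses illustrative site counts and thresholds, not the paper's estimated values, and assumes independent binding:

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def virion_infection_prob(n_trimers, sites_per_trimer, p_bound,
                          t_neut, t_entry):
    """Probability that a virion with a fixed trimer count stays
    infectious: a trimer is neutralized once >= t_neut of its antibody
    sites are occupied (each independently with probability p_bound),
    and the virion needs >= t_entry functional trimers to enter."""
    q = binom_tail(sites_per_trimer, p_bound, t_neut)   # trimer neutralized
    # number of functional trimers ~ Binomial(n_trimers, 1 - q)
    return binom_tail(n_trimers, 1.0 - q, t_entry)
```

As expected, infection probability is 1 with no antibody binding, 0 with saturating binding, and decreases monotonically in between; averaging over a trimer-number distribution (the key determinant noted above) is a one-line extension.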

  9. Freeze-in through portals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blennow, Mattias; Fernandez-Martínez, Enrique; Zaldívar, Bryan, E-mail: emb@kth.se, E-mail: enrique.fernandez-martinez@uam.es, E-mail: b.zaldivar.m@csic.es

    2014-01-01

    The popular freeze-out paradigm for Dark Matter (DM) production relies on DM-baryon couplings of the order of the weak interactions. However, different search strategies for DM have failed to provide conclusive evidence of such (non-gravitational) interactions, while greatly reducing the parameter space of many representative models. This motivates the study of alternative mechanisms for DM genesis. In the freeze-in framework, the DM is slowly populated from the thermal bath while never reaching equilibrium. In this work, we analyse in detail the possibility of producing frozen-in DM via a mediator particle which acts as a portal. We give analytical estimates of different freeze-in regimes and support them with full numerical analyses, taking into account the proper distribution functions of bath particles. Finally, we constrain the parameter space of generic models by requiring agreement with DM relic abundance observations.

  10. Lung Cancer Pathological Image Analysis Using a Hidden Potts Model

    PubMed Central

    Li, Qianyun; Yi, Faliu; Wang, Tao; Xiao, Guanghua; Liang, Faming

    2017-01-01

    Nowadays, many biological data are acquired via images. In this article, we study pathological images scanned from 205 patients with lung cancer with the goal of determining the relationship between survival time and the spatial distribution of different types of cells, including lymphocyte, stroma, and tumor cells. Toward this goal, we model the spatial distribution of the different cell types using a modified Potts model, whose parameters represent the interactions between cell types, and estimate these parameters using the double Metropolis-Hastings algorithm. The double Metropolis-Hastings algorithm allows us to simulate samples approximately from a distribution with an intractable normalizing constant. Our numerical results indicate that the spatial interaction between lymphocyte and tumor cells is significantly associated with the patient's survival time, and it can be used together with the cell count information to predict the survival of the patients. PMID:28615918
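The double Metropolis-Hastings trick sidesteps the Potts model's intractable normalizing constant by simulating an auxiliary lattice at each proposed parameter value, so the constants cancel in the acceptance ratio. A single-interaction-parameter sketch on a small lattice (the paper's model uses one parameter per cell-type pair; the proposal scale and number of auxiliary sweeps here are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def like_pairs(lattice):
    """Sufficient statistic of the Potts model: number of horizontally
    or vertically adjacent sites carrying the same label."""
    return (np.sum(lattice[:, 1:] == lattice[:, :-1])
            + np.sum(lattice[1:, :] == lattice[:-1, :]))

def gibbs_sweep(lattice, theta, q):
    """One systematic Gibbs sweep of the q-state Potts model at
    interaction strength theta (free boundary conditions)."""
    n, m = lattice.shape
    for i in range(n):
        for j in range(m):
            neigh = []
            if i > 0: neigh.append(lattice[i - 1, j])
            if i < n - 1: neigh.append(lattice[i + 1, j])
            if j > 0: neigh.append(lattice[i, j - 1])
            if j < m - 1: neigh.append(lattice[i, j + 1])
            logp = np.array([theta * sum(int(k == s) for k in neigh)
                             for s in range(q)], dtype=float)
        # sample the site label from its full conditional
            p = np.exp(logp - logp.max())
            lattice[i, j] = rng.choice(q, p=p / p.sum())
    return lattice

def double_mh_step(x, theta, q, step=0.2, aux_sweeps=3):
    """One double Metropolis-Hastings update of theta: an auxiliary
    lattice simulated at the proposed value cancels the intractable
    normalizing constant in the acceptance ratio."""
    theta_new = theta + rng.normal(0.0, step)
    y = x.copy()
    for _ in range(aux_sweeps):
        y = gibbs_sweep(y, theta_new, q)
    log_ratio = ((theta_new - theta) * like_pairs(x)
                 + (theta - theta_new) * like_pairs(y))
    return theta_new if np.log(rng.random()) < log_ratio else theta
```

Iterating `double_mh_step` over an observed label map yields an approximate posterior sample for the interaction parameter.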

  11. Searching for dark matter-dark energy interactions: Going beyond the conformal case

    NASA Astrophysics Data System (ADS)

    van de Bruck, Carsten; Mifsud, Jurgen

    2018-01-01

    We consider several cosmological models which allow for nongravitational direct couplings between dark matter and dark energy. The distinguishing cosmological features of these couplings can be probed by current cosmological observations, thus enabling us to place constraints on these specific interactions which are composed of the conformal and disformal coupling functions. We perform a global analysis in order to independently constrain the conformal, disformal, and mixed interactions between dark matter and dark energy by combining current data from: Planck observations of the cosmic microwave background radiation anisotropies, a combination of measurements of baryon acoustic oscillations, a supernova type Ia sample, a compilation of Hubble parameter measurements estimated from the cosmic chronometers approach, direct measurements of the expansion rate of the Universe today, and a compilation of growth of structure measurements. We find that in these coupled dark-energy models, the influence of the local value of the Hubble constant does not significantly alter the inferred constraints when we consider joint analyses that include all cosmological probes. Moreover, the parameter constraints are remarkably improved with the inclusion of the growth of structure data set measurements. We find no compelling evidence for an interaction within the dark sector of the Universe.

  12. Accounting for Intraligand Interactions in Flexible Ligand Docking with a PMF-Based Scoring Function.

    PubMed

    Lizunov, A Y; Gonchar, A L; Zaitseva, N I; Zosimov, V V

    2015-10-26

    We analyzed the frequency with which intraligand contacts occurred in a set of 1300 protein-ligand complexes [Plewczynski et al., J. Comput. Chem. 2011, 32, 742-755]. Our analysis showed that flexible ligands often form intraligand hydrophobic contacts, while intraligand hydrogen bonds are rare. The test set was also thoroughly investigated and classified. We suggest a universal method for enhancing a scoring function based on a potential of mean force (PMF-based score) by adding a term accounting for intraligand interactions. The method was implemented via an in-house-developed program utilizing the Algo_score scoring function [Ramensky et al., Proteins: Struct., Funct., Genet. 2007, 69, 349-357] based on the Tarasov-Muryshev PMF [Muryshev et al., J. Comput.-Aided Mol. Des. 2003, 17, 597-605]. The enhancement of the scoring function was shown to significantly improve the docking and scoring quality for flexible ligands in the test set of 1300 protein-ligand complexes. We then investigated the correlation of the docking results with two parameters of the intraligand interaction estimation. These parameters are the weight of the intraligand interactions and the minimum number of bonds between ligand atoms required for their interaction to be taken into account.
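The two tunable parameters named above (the intraligand weight and the minimum bond separation) slot into the total score in a straightforward way. A hedged sketch, with the PMF passed in as a callable and purely illustrative default values (not the published settings):

```python
def enhanced_score(inter_contacts, intra_contacts, pmf,
                   w_intra=0.5, min_bond_sep=4):
    """Total score = protein-ligand PMF term plus a weighted intraligand
    term.  Each contact is a tuple (atom_type_pair, distance,
    bond_separation); intraligand pairs closer than min_bond_sep bonds
    along the molecular graph are skipped so that covalently constrained
    neighbors do not count as interactions."""
    score = sum(pmf(t, d) for t, d, _ in inter_contacts)
    score += w_intra * sum(pmf(t, d) for t, d, sep in intra_contacts
                           if sep >= min_bond_sep)
    return score
```

With a toy PMF such as `pmf = lambda t, d: -1.0 / d`, only intraligand pairs separated by at least `min_bond_sep` bonds contribute, scaled by `w_intra`.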

  13. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

    The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the authors to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.

  14. Rotorcraft Blade Mode Damping Identification from Random Responses Using a Recursive Maximum Likelihood Algorithm

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.

    1982-01-01

    An on-line technique is presented for the identification of rotor blade modal damping and frequency from rotorcraft random response test data. The identification technique is based upon a recursive maximum likelihood (RML) algorithm, which is demonstrated to have excellent convergence characteristics in the presence of random measurement noise and random excitation. The RML technique requires virtually no user interaction, provides accurate confidence bands on the parameter estimates, and can be used for continuous monitoring of modal damping during wind tunnel or flight testing. Results are presented from simulated random response data which quantify the identified parameter convergence behavior for various levels of random excitation. The data length required for acceptable parameter accuracy is shown to depend upon the amplitude of random response and the modal damping level. Random response amplitudes of 1.25 degrees to 0.05 degrees are investigated. The RML technique is applied to hingeless rotor test data. The inplane lag regressing mode is identified at different rotor speeds. The identification from the test data is compared with the simulation results and with other available estimates of frequency and damping.
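The flavor of such a recursive scheme can be sketched with recursive least squares on an AR(2) model of the sampled response: the modal frequency and damping ratio then follow from the pole locations. RLS stands in here for the paper's recursive maximum likelihood update, and the test-signal parameters are invented:

```python
import numpy as np

def rls_ar2(y, lam=1.0):
    """Recursive least-squares fit of y[t] = a1*y[t-1] + a2*y[t-2] + e[t];
    a simplified stand-in for a recursive maximum-likelihood update."""
    theta = np.zeros(2)
    P = np.eye(2) * 1e3                      # large initial covariance
    for t in range(2, len(y)):
        phi = np.array([y[t - 1], y[t - 2]])
        k = P @ phi / (lam + phi @ P @ phi)  # gain vector
        theta = theta + k * (y[t] - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

def modal_params(a1, a2, dt):
    """Map AR(2) coefficients to modal frequency (Hz) and damping ratio
    via the discrete-to-continuous pole mapping s = ln(z) / dt."""
    p = np.roots([1.0, -a1, -a2])
    p = p[np.argmax(p.imag)]                 # upper-half-plane pole
    s = np.log(p) / dt
    wn = abs(s)
    return wn / (2.0 * np.pi), -s.real / wn
```

Driving a lightly damped 1 Hz mode with white noise and running the recursion recovers the frequency and damping ratio, mirroring the convergence behavior the abstract describes.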

  15. Estimation of π-π Electronic Couplings from Current Measurements.

    PubMed

    Trasobares, J; Rech, J; Jonckheere, T; Martin, T; Aleveque, O; Levillain, E; Diez-Cabanes, V; Olivier, Y; Cornil, J; Nys, J P; Sivakumarasamy, R; Smaali, K; Leclere, P; Fujiwara, A; Théron, D; Vuillaume, D; Clément, N

    2017-05-10

    The π-π interactions between organic molecules are among the most important parameters for optimizing the transport and optical properties of organic transistors, light-emitting diodes, and (bio-)molecular devices. Despite substantial theoretical progress, direct experimental measurement of the π-π electronic coupling energy parameter t has remained a long-standing challenge due to molecular structural variability and the large number of parameters that affect charge transport. Here, we propose a study of π-π interactions from electrochemical and current measurements on a large array of ferrocene-thiolated gold nanocrystals. We confirm the theoretical prediction that t can be assessed from a statistical analysis of current histograms. The extracted value of t ≈ 35 meV is in the expected range based on our density functional theory analysis. Furthermore, the t distribution is not necessarily Gaussian and could be used as an ultrasensitive technique to assess intermolecular distance fluctuations at the sub-angström level. The present work establishes a direct bridge between quantum chemistry, electrochemistry, organic electronics, and mesoscopic physics, all of which were used to discuss results and perspectives in a quantitative manner.

  16. Modelling non-linear effects of dark energy

    NASA Astrophysics Data System (ADS)

    Bose, Benjamin; Baldi, Marco; Pourtsidou, Alkistis

    2018-04-01

    We investigate the capabilities of perturbation theory in capturing non-linear effects of dark energy. We test constant and evolving w models, as well as models involving momentum exchange between dark energy and dark matter. Specifically, we compare perturbative predictions at the 1-loop level against N-body results for four non-standard equations of state, as well as varying degrees of momentum exchange between dark energy and dark matter. The interaction is modelled phenomenologically using a time-dependent drag term in the Euler equation. We make comparisons at the level of the matter power spectrum and the redshift-space monopole and quadrupole. The multipoles are modelled using the Taruya, Nishimichi and Saito (TNS) redshift-space spectrum. We find that perturbation theory does very well in capturing non-linear effects coming from the dark sector interaction. We isolate and quantify the 1-loop contribution coming from the interaction and from the non-standard equation of state. We find the interaction parameter ξ amplifies scale-dependent signatures in the range of scales considered. Non-standard equations of state also give scale-dependent signatures within this same regime. In redshift space the match with N-body is improved at smaller scales by the addition of the TNS free parameter σv. To quantify the importance of modelling the interaction, we create mock data sets for varying values of ξ using perturbation theory. These data are given errors typical of Stage IV surveys. We then perform a likelihood analysis using the first two multipoles on these sets with a ξ=0 modelling, ignoring the interaction. We find the fiducial growth parameter f is generally recovered, even for very large values of ξ, both at z=0.5 and z=1. The ξ=0 modelling is most biased in its estimation of f for the phantom w = -1.1 case.

  17. Bayesian estimation of the discrete coefficient of determination.

    PubMed

    Chen, Ting; Braga-Neto, Ulisses M

    2016-12-01

    The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.
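The CoD itself has a simple plug-in (resubstitution) estimator, which the Bayesian estimators in the paper improve upon and which is useful as a baseline: CoD = 1 - eps/eps0, where eps0 is the error of predicting the target from its marginal mode and eps is the error of the majority-vote predictor within each predictor bin. A minimal sketch:

```python
from collections import Counter

def cod_estimate(x, y):
    """Plug-in estimate of the discrete CoD = 1 - eps/eps0.
    eps0: error of always predicting the most common target label.
    eps:  error of the empirically optimal predictor, i.e. the majority
    target label within each observed predictor value."""
    x, y = list(x), list(y)
    n = len(y)
    eps0 = 1.0 - Counter(y).most_common(1)[0][1] / n
    err = 0
    for xv in set(x):
        labels = [yi for xi, yi in zip(x, y) if xi == xv]
        err += len(labels) - Counter(labels).most_common(1)[0][1]
    eps = err / n
    return 1.0 if eps0 == 0 else 1.0 - eps / eps0   # convention: CoD=1 if target is constant
```

A perfectly predictive binary relationship gives CoD = 1, while an uninformative predictor gives CoD = 0.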

  18. Characteristics and Impact Factors of Parameter Alpha in the Nonlinear Advection-Aridity Method for Estimating Evapotranspiration at Interannual Scale in the Loess Plateau

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Liu, W.; Ning, T.

    2017-12-01

    Land surface actual evapotranspiration plays a key role in the global water and energy cycles. Accurate estimation of evapotranspiration is crucial for understanding the interactions between the land surface and the atmosphere, as well as for managing water resources. The nonlinear advection-aridity approach was formulated by Brutsaert in 2015 to estimate actual evapotranspiration. Subsequently, this approach has been verified, applied, and developed by many scholars. The estimation, impact factors, and correlation analysis of the parameter alpha (αe) of this approach have become important aspects of the research. According to the principle of this approach, the potential evapotranspiration (ETpo) (taking αe as 1) and the apparent potential evapotranspiration (ETpm) were calculated using meteorological data from 123 sites on the Loess Plateau and its surrounding areas. The mean spatial values of precipitation (P), ETpm, and ETpo for 13 catchments were then obtained by a CoKriging interpolation algorithm. Based on the runoff data of the 13 catchments, actual evapotranspiration was calculated using the catchment water balance equation at the hydrological-year scale (May to April of the following year), ignoring changes in catchment water storage. The parameter was thus estimated, and its relationships with P, ETpm, and the aridity index (ETpm/P) were further analyzed. The results showed that the annual parameter values generally ranged from 0.385 to 1.085, with an average of 0.751 and a standard deviation of 0.113. The mean annual value of αe showed distinct spatial characteristics, with lower values in the north and higher values in the south. The annual parameter was linearly related to annual P (R2=0.89) and ETpm (R2=0.49), while it exhibited a power-function relationship with the aridity index (R2=0.83). 
Considering that ETpm is already a variable in the nonlinear advection-aridity approach, so that its effect is incorporated, the relationship between precipitation and the parameter (αe=1.0×10-3*P+0.301) was developed. The value of αe in this study is lower than those in the published literature; the reason is unclear at this point and needs further investigation. The preliminary application of the nonlinear advection-aridity approach on the Loess Plateau has shown promising results.
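The fitted annual relation quoted above can be combined with one common statement of Brutsaert's nonlinear complementary relationship, E = ETpm * (2 - x) * x^2 with x = αe * ETpo / ETpm. Note that the (2 - x)x^2 form is taken from the general literature on this approach, not from this abstract, so treat it as an assumption:

```python
def alpha_e(p_annual_mm):
    """Fitted relation reported in the study:
    alpha_e = 1.0e-3 * P + 0.301, with P the annual precipitation (mm)."""
    return 1.0e-3 * p_annual_mm + 0.301

def et_actual(et_po, et_pm, a_e):
    """Nonlinear complementary relationship (general-literature form,
    assumed here): E = ETpm * (2 - x) * x**2, x = a_e * ETpo / ETpm."""
    x = a_e * et_po / et_pm
    return et_pm * (2.0 - x) * x * x
```

As a consistency check, P = 450 mm gives αe = 0.751, matching the reported mean parameter value, and E equals ETpm exactly when x = 1.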

  19. A spatial panel ordered-response model with application to the analysis of urban land-use development intensity patterns

    NASA Astrophysics Data System (ADS)

    Ferdous, Nazneen; Bhat, Chandra R.

    2013-01-01

    This paper proposes and estimates a spatial panel ordered-response probit model with temporal autoregressive error terms to analyze changes in urban land development intensity levels over time. Such a model structure maintains a close linkage between the land owner's decision (unobserved to the analyst) and the land development intensity level (observed by the analyst) and accommodates spatial interactions between land owners that lead to spatial spillover effects. In addition, the model structure incorporates spatial heterogeneity as well as spatial heteroscedasticity. The resulting model is estimated using a composite marginal likelihood (CML) approach that does not require any simulation machinery and that can be applied to data sets of any size. A simulation exercise indicates that the CML approach recovers the model parameters very well, even in the presence of high spatial and temporal dependence. In addition, the simulation results demonstrate that ignoring spatial dependency and spatial heterogeneity when both are actually present will lead to bias in parameter estimation. A demonstration exercise applies the proposed model to examine urban land development intensity levels using parcel-level data from Austin, Texas.

  20. Demonstration and evaluation of a method for assessing mediated moderation.

    PubMed

    Morgan-Lopez, Antonio A; MacKinnon, David P

    2006-02-01

    Mediated moderation occurs when the interaction between two variables affects a mediator, which then affects a dependent variable. In this article, we describe the mediated moderation model and evaluate it with a statistical simulation using an adaptation of product-of-coefficients methods to assess mediation. We also demonstrate the use of this method with a substantive example from the adolescent tobacco literature. In the simulation, relative bias (RB) in point estimates and standard errors did not exceed problematic levels of +/- 10%, although systematic variability in RB was accounted for by parameter size, sample size, and nonzero direct effects. Power to detect mediated moderation effects appears to be severely compromised under one particular combination of conditions: when the component variables that make up the interaction terms are correlated and partial mediated moderation exists. Implications for the estimation of mediated moderation effects in experimental and nonexperimental research are discussed.
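The product-of-coefficients logic reads directly as code: the interaction coefficient from the mediator regression is multiplied by the mediator coefficient from the outcome regression. Variable names and the simulated effect sizes below are illustrative, not taken from the article:

```python
import numpy as np

def mediated_moderation(x, z, m, y):
    """Product-of-coefficients point estimate of a mediated moderation
    effect: a3 (coefficient of X*Z in the mediator model) times b
    (coefficient of M in the outcome model, controlling for X, Z, X*Z)."""
    n = len(x)
    Xm = np.column_stack([np.ones(n), x, z, x * z])
    a = np.linalg.lstsq(Xm, m, rcond=None)[0]       # mediator regression
    Xy = np.column_stack([np.ones(n), x, z, x * z, m])
    b = np.linalg.lstsq(Xy, y, rcond=None)[0]       # outcome regression
    return a[3] * b[4]
```

Simulating data with a3 = 0.5 and b = 0.8 recovers an estimate near the true product 0.4.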

  1. Interactive computation of coverage regions for indoor wireless communication

    NASA Astrophysics Data System (ADS)

    Abbott, A. Lynn; Bhat, Nitin; Rappaport, Theodore S.

    1995-12-01

    This paper describes a system which assists in the strategic placement of RF base stations within buildings. Known as the site modeling tool (SMT), this system allows the user to display graphical floor plans and to select base station transceiver parameters, including location and orientation, interactively. The system then computes and highlights estimated coverage regions for each transceiver, enabling the user to assess the total coverage within the building. For single-floor operation, the user can choose between distance-dependent and partition-dependent path-loss models. Similar path-loss models are also available for the case of multiple floors. This paper describes the method used by the system to estimate coverage for both directional and omnidirectional antennas. The site modeling tool is intended to be simple to use by individuals who are not experts at wireless communication system design, and is expected to be very useful in the specification of indoor wireless systems.
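The distance-dependent model referred to above is conventionally a log-distance law, with the partition-dependent variant adding a fixed attenuation per intervening wall or floor; a point is "covered" when the received power clears the receiver sensitivity. The exponent, reference loss, and wall losses below are typical textbook values, not SMT's calibrated ones:

```python
from math import log10

def path_loss_db(d, n=3.0, pl_d0=31.7, d0=1.0, partition_losses=()):
    """Log-distance path loss with per-partition attenuation:
    PL(d) = PL(d0) + 10*n*log10(d/d0) + sum of partition losses (dB)."""
    return pl_d0 + 10.0 * n * log10(d / d0) + sum(partition_losses)

def in_coverage(tx_power_dbm, d, sensitivity_dbm, **kw):
    """A location is covered when received power (transmit power minus
    path loss) meets or exceeds the receiver sensitivity."""
    return tx_power_dbm - path_loss_db(d, **kw) >= sensitivity_dbm
```

Sweeping `in_coverage` over a floor-plan grid, with `partition_losses` built from the walls crossed by each transmitter-receiver line, yields the kind of highlighted coverage region the tool displays.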

  2. Measuring and modeling the variation in species-specific transpiration in temperate deciduous hardwoods.

    PubMed

    Bowden, Joseph D; Bauerle, William L

    2008-11-01

    We investigated which parameters required by the MAESTRA model were most important in predicting leaf-area-based transpiration in 5-year-old trees of five deciduous hardwood species: yoshino cherry (Prunus x yedoensis Matsum.), red maple (Acer rubrum L. 'Autumn Flame'), trident maple (Acer buergeranum Miq.), Japanese flowering cherry (Prunus serrulata Lindl. 'Kwanzan') and London plane-tree (Platanus x acerifolia (Ait.) Willd.). Transpiration estimated from sap flow measured by the heat balance method in branches and trunks was compared with estimates predicted by the three-dimensional transpiration, photosynthesis and absorbed radiation model, MAESTRA. MAESTRA predicted species-specific transpiration from the interactions of leaf-level physiology and spatially explicit micro-scale weather patterns in a mixed deciduous hardwood plantation on a 15-min time step. The monthly differences between modeled mean daily transpiration estimates and measured mean daily sap flow ranged from a 35% underestimation for Acer buergeranum in June to a 25% overestimation for A. rubrum in July. The sensitivity of the modeled transpiration estimates was examined across a 30% error range for seven physiological input parameters. The minimum value of stomatal conductance as incident solar radiation tends to zero was determined to be eight times more influential than all other physiological model input parameters. This work quantified the major factors that influence modeled species-specific transpiration and confirmed the ability to scale leaf-level physiological attributes to whole-crown transpiration on a species-specific basis.

  3. Movement patterns and study area boundaries: Influences on survival estimation in capture-mark-recapture studies

    USGS Publications Warehouse

    Horton, G.E.; Letcher, B.H.

    2008-01-01

    The inability to account for the availability of individuals in the study area during capture-mark-recapture (CMR) studies and the resultant confounding of parameter estimates can make correct interpretation of CMR model parameter estimates difficult. Although important advances based on the Cormack-Jolly-Seber (CJS) model have resulted in estimators of true survival that work by unconfounding either death or recapture probability from availability for capture in the study area, these methods rely on the researcher's ability to select a method that is correctly matched to emigration patterns in the population. If incorrect assumptions regarding site fidelity (non-movement) are made, it may be difficult or impossible as well as costly to change the study design once the incorrect assumption is discovered. Subtleties in characteristics of movement (e.g. life history-dependent emigration, nomads vs territory holders) can lead to mixtures in the probability of being available for capture among members of the same population. The result of these mixtures may be only a partial unconfounding of emigration from other CMR model parameters. Biologically-based differences in individual movement can combine with constraints on study design to further complicate the problem. Because of the intricacies of movement and its interaction with other parameters in CMR models, quantification of and solutions to these problems are needed. Based on our work with stream-dwelling populations of Atlantic salmon Salmo salar, we used a simulation approach to evaluate existing CMR models under various mixtures of movement probabilities. The Barker joint data model provided unbiased estimates of true survival under all conditions tested. The CJS and robust design models provided similarly unbiased estimates of true survival but only when emigration information could be incorporated directly into individual encounter histories. 
For the robust design model, Markovian emigration (future availability for capture depends on an individual's current location) was a difficult emigration pattern to detect unless survival and especially recapture probability were high. Additionally, when local movement was high relative to study area boundaries and movement became more diffuse (e.g. a random walk), local movement and permanent emigration were difficult to distinguish and had consequences for correctly interpreting the survival parameter being estimated (apparent survival vs true survival). © 2008 The Authors.
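The confounding of survival with availability that the abstract describes can be reproduced in a few lines. The sketch below (all parameter values hypothetical, not from the study) simulates a two-occasion capture study with permanent emigration and shows that the recovered "apparent survival" estimates phi*(1 - E), not true survival phi:

```python
import random

def apparent_survival(n=20000, phi=0.9, p=0.7, emig=0.1, seed=1):
    """Two-occasion CMR toy (hypothetical parameters): an animal survives
    with probability phi, permanently emigrates with probability emig, and
    is recaptured with probability p when present.  Dividing the recapture
    fraction by a known p leaves phi*(1 - emig): 'apparent survival',
    which confounds mortality with emigration."""
    rng = random.Random(seed)
    recaptured = 0
    for _ in range(n):
        alive = rng.random() < phi
        present = alive and rng.random() >= emig
        if present and rng.random() < p:
            recaptured += 1
    return recaptured / (n * p)

est = apparent_survival()
# est clusters near phi*(1 - emig) = 0.81, below the true survival of 0.9
```

Unconfounding the two, as the Barker and robust-design models do, requires auxiliary information (resightings, secondary samples) beyond this minimal design.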

  4. Beyond thriftiness: Independent and interactive effects of genetic and dietary factors on variations in fat deposition and distribution across populations

    PubMed Central

    Casazza, Krista; Beasley, T. Mark; Fernandez, Jose R.

    2011-01-01

    The thrifty genotype hypothesis initiated speculation that feast and famine cycling throughout history may have led to group-specific alterations of the human genome, thereby augmenting the capacity for excessive fat mass accrual when immersed in the modern-day obesogenic environment. Contemporary work, however, suggests alternative mechanisms influencing fuel utilization and subsequent tissue partitioning to be more relevant in the etiology of population-based variation in adipose storage. The objective of this study was to evaluate the independent and interactive contribution of ancestral admixture as a proxy for population-based genetic variation and diet on adipose tissue deposition and distribution in peripubertal children and to identify differences in racial/ethnic and sex groups. Two hundred seventy-eight children (53% male) aged 7–12 y, categorized by parental self-report as African- (n=91), European- (n=110), or Hispanic American (n=77), participated. Ancestral genetic admixture was estimated using 140 ancestry informative markers. Body composition was evaluated by dual-energy x-ray absorptiometry; energy expenditure by indirect calorimetry and accelerometry; and diet by 24-h recall. Admixture independently contributed to all adiposity parameters; i.e., estimates of European and Amerindian ancestries were positively associated with all adiposity parameters, whereas African genetic admixture was inversely associated with adiposity. In boys, energy intake was associated with adiposity, irrespective of macronutrient profile, whereas in girls, the relationship was mediated by carbohydrate. We also observed moderating effects of energy balance/fuel utilization on the interaction between ancestral genetic admixture and diet. Interactive effects of genetic and non-genetic factors alter metabolic pathways and underlie some of the present population-based differences in fat storage. PMID:21365611

  5. Influence of miscibility phenomenon on crystalline polymorph transition in poly(vinylidene fluoride)/acrylic rubber/clay nanocomposite hybrid.

    PubMed

    Abolhasani, Mohammad Mahdi; Naebe, Minoo; Jalali-Arani, Azam; Guo, Qipeng

    2014-01-01

    In this paper, intercalation of nanoclay in the miscible polymer blend of poly(vinylidene fluoride) (PVDF) and acrylic rubber (ACM) was studied. X-ray diffraction was used to investigate the formation of the nanoscale polymer blend/clay hybrid. Infrared spectroscopy and X-ray analysis revealed the coexistence of β and γ crystalline forms in the PVDF/clay nanocomposite, while the α crystalline form was found to be dominant in PVDF/ACM/clay miscible hybrids. The Flory-Huggins interaction parameter (B) was used to further explain the miscibility phenomenon observed. The B parameter was determined by combining the melting point depression and the binary interaction model. The estimated B values for both the ternary PVDF/ACM/clay hybrid and the PVDF/ACM pair were negative, indicating both proper intercalation of the polymer melt into the nanoclay galleries and good miscibility of the PVDF/ACM blend. The B value for the PVDF/ACM blend was almost the same as that measured for the PVDF/ACM/clay hybrid, suggesting that PVDF chains in nanocomposite hybrids interact with ACM chains and that nanoclay in hybrid systems is wrapped by ACM molecules.
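The melting-point-depression route to the interaction energy density B can be illustrated with the Nishi-Wang relation, Tm0 − Tm ≈ −Tm0 (V2u/ΔH2u) B φ1², by fitting the depression against the square of the diluent volume fraction. All numbers below are synthetic stand-ins (loosely in the range of PVDF constants), not the paper's data; a negative fitted B indicates miscibility:

```python
# Hypothetical constants roughly in the range of PVDF (not the paper's data):
Tm0 = 445.0     # K, equilibrium melting point of the pure crystalline polymer
dH2u = 6.7e3    # J/mol, heat of fusion per repeat unit
V2u = 36.4e-6   # m^3/mol, molar volume of the crystalline repeat unit
phi1 = [0.1, 0.2, 0.3, 0.4]        # diluent (ACM) volume fractions
Tm = [444.3, 442.1, 438.6, 433.9]  # synthetic observed melting points, K

x = [f * f for f in phi1]          # phi1^2
y = [Tm0 - t for t in Tm]          # melting point depression
# least-squares slope through the origin: slope = sum(x*y) / sum(x^2)
slope = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
# Nishi-Wang: (Tm0 - Tm) = -Tm0*(V2u/dH2u)*B*phi1^2  =>  solve for B
B = -slope * dH2u / (Tm0 * V2u)    # J/m^3; negative B => miscible pair
```

A real analysis would also check the intercept and use equilibrium melting points from Hoffman-Weeks extrapolation.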

  6. Simultaneous Estimation of Microphysical Parameters and Atmospheric State Variables With Radar Data and Ensemble Square-root Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tong, M.; Xue, M.

    2006-12-01

    An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of drop size distribution of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. Observing system simulation experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be because of the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
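The augmented-state idea behind simultaneous state and parameter estimation can be sketched with a toy ensemble filter. The sketch uses a stochastic (perturbed-observation) EnKF on a scalar model x_{k+1} = a·x_k + 1 rather than the square-root variant of the study, and every value below is hypothetical:

```python
import random

def enkf_param_estimation(a_true=0.9, n_ens=200, n_steps=40, obs_var=0.01, seed=2):
    """Toy joint state-parameter estimation: each ensemble member carries an
    augmented state [x, a]; observing only x updates the parameter a through
    the ensemble cross-covariance.  A perturbed-observation EnKF stand-in
    for the EnSRF of the study; all numbers hypothetical."""
    rng = random.Random(seed)
    x_true = 1.0
    # poor initial parameter guess: a ~ N(0.5, 0.3)
    ens = [[rng.gauss(1.0, 0.5), rng.gauss(0.5, 0.3)] for _ in range(n_ens)]
    for _ in range(n_steps):
        x_true = a_true * x_true + 1.0
        y = x_true + rng.gauss(0.0, obs_var ** 0.5)
        for m in ens:                         # forecast each member
            m[0] = m[1] * m[0] + 1.0
        xm = sum(m[0] for m in ens) / n_ens
        am = sum(m[1] for m in ens) / n_ens
        pxx = sum((m[0] - xm) ** 2 for m in ens) / (n_ens - 1)
        pax = sum((m[1] - am) * (m[0] - xm) for m in ens) / (n_ens - 1)
        kx = pxx / (pxx + obs_var)            # Kalman gain for the state
        ka = pax / (pxx + obs_var)            # gain for the parameter
        for m in ens:                         # analysis with perturbed obs
            d = y + rng.gauss(0.0, obs_var ** 0.5) - m[0]
            m[0] += kx * d
            m[1] += ka * d
    return sum(m[1] for m in ens) / n_ens

a_hat = enkf_param_estimation()  # ensemble-mean parameter estimate
```

With several parameters the cross-covariances overlap, which is one intuition for the reduced identifiability the abstract reports.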

  7. Social Interactions under Incomplete Information: Games, Equilibria, and Expectations

    NASA Astrophysics Data System (ADS)

    Yang, Chao

    My dissertation research investigates interactions of agents' behaviors through social networks when some information is not shared publicly, focusing on solutions to a series of challenging problems in empirical research, including heterogeneous expectations and multiple equilibria. The first chapter, "Social Interactions under Incomplete Information with Heterogeneous Expectations", extends the current literature in social interactions by devising econometric models and estimation tools with private information in not only the idiosyncratic shocks but also some exogenous covariates. For example, when analyzing peer effects in class performances, it was previously assumed that all control variables, including individual IQ and SAT scores, are known to the whole class, which is unrealistic. This chapter allows such exogenous variables to be private information and models agents' behaviors as outcomes of a Bayesian Nash Equilibrium in an incomplete information game. The distribution of equilibrium outcomes can be described by the equilibrium conditional expectations, which is unique when the parameters are within a reasonable range according to the contraction mapping theorem in function spaces. The equilibrium conditional expectations are heterogeneous in both exogenous characteristics and the private information, which makes estimation in this model more demanding than in previous ones. This problem is solved in a computationally efficient way by combining the quadrature method and the nested fixed point maximum likelihood estimation. In Monte Carlo experiments, if some exogenous characteristics are private information and the model is estimated under the mis-specified hypothesis that they are known to the public, estimates will be biased. Applying this model to municipal public spending in North Carolina, significant negative correlations between contiguous municipalities are found, showing free-riding effects. 
The second chapter, "A Tobit Model with Social Interactions under Incomplete Information", is an application of the first chapter to censored outcomes, corresponding to the situation when agents' behaviors are subject to some binding restrictions. In an empirical analysis of property tax rates set by North Carolina municipal governments, it is found that there is a significant positive correlation among nearby municipalities. Additionally, some private information about its own residents is used by a municipal government to predict others' tax rates, which enriches current empirical work about tax competition. The third chapter, "Social Interactions under Incomplete Information with Multiple Equilibria", extends the first chapter by investigating effective estimation methods when the condition for a unique equilibrium may not be satisfied. With multiple equilibria, the previous model is incomplete due to the unobservable equilibrium selection. Neither conventional likelihoods nor moment conditions can be used to estimate parameters without further specifications. Although there are some solutions to this issue in the current literature, they are based on strong assumptions, such as that agents with the same observable characteristics play the same strategy. This paper relaxes those assumptions and extends the all-solution method used to estimate discrete choice games to a setting with both discrete and continuous choices, bounded and unbounded outcomes, and a general form of incomplete information, where the existence of a pure strategy equilibrium has been an open question for a long time. By the use of differential topology and functional analysis, it is found that when all exogenous characteristics are public information, there are a finite number of equilibria. With privately known exogenous characteristics, the equilibria can be represented by a compact set in a Banach space and be approximated by a finite set.
As a result, a finite-state probability mass function can be used to specify a probability measure for equilibrium selection, which completes the model. From Monte Carlo experiments about two types of binary choice models, it is found that assuming equilibrium uniqueness can bring in estimation biases when the true value of interaction intensity is large and there are multiple equilibria in the data generating process.
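The uniqueness argument via the contraction mapping theorem can be illustrated with a toy linear best-response game: when |β| < 1 and the network matrix is row-normalized, iterating the expectation map converges to the unique equilibrium from any starting point. The network and parameter values below are hypothetical:

```python
def equilibrium_expectations(W, alpha, beta, tol=1e-12, max_iter=1000):
    """Fixed-point iteration for equilibrium expected actions in a toy
    linear best-response game: E[y_i] = alpha_i + beta * sum_j W_ij E[y_j].
    With row-normalized W and |beta| < 1 the map is a contraction, so the
    iteration converges to the unique equilibrium expectation vector."""
    n = len(alpha)
    e = [0.0] * n
    for _ in range(max_iter):
        e_new = [alpha[i] + beta * sum(W[i][j] * e[j] for j in range(n))
                 for i in range(n)]
        if max(abs(a - b) for a, b in zip(e, e_new)) < tol:
            return e_new
        e = e_new
    return e

# hypothetical 3-agent network (row-normalized adjacency) and payoffs
W = [[0.0, 0.5, 0.5],
     [1.0, 0.0, 0.0],
     [0.5, 0.5, 0.0]]
alpha = [1.0, 2.0, 0.5]
beta = 0.4
e_star = equilibrium_expectations(W, alpha, beta)
```

When |β| ≥ 1 the map may cease to be a contraction, which is the regime the third chapter's multiple-equilibria analysis addresses.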

  8. Value of Computed Tomographic Perfusion-Based Patient Selection for Intra-Arterial Acute Ischemic Stroke Treatment.

    PubMed

    Borst, Jordi; Berkhemer, Olvert A; Roos, Yvo B W E M; van Bavel, Ed; van Zwam, Wim H; van Oostenbrugge, Robert J; van Walderveen, Marianne A A; Lingsma, Hester F; van der Lugt, Aad; Dippel, Diederik W J; Yoo, Albert J; Marquering, Henk A; Majoie, Charles B L M

    2015-12-01

    The utility of computed tomographic perfusion (CTP)-based patient selection for intra-arterial treatment of acute ischemic stroke has not been proven in randomized trials and requires further study in a cohort that was not selected based on CTP. Our objective was to study the relationship between CTP-derived parameters and outcome and treatment effect in patients with acute ischemic stroke because of a proximal intracranial arterial occlusion. We included 175 patients who underwent CTP in the Multicenter Randomized Clinical Trial of Endovascular Treatment for Acute Ischemic Stroke in The Netherlands (MR CLEAN). Association of CTP-derived parameters (ischemic-core volume, penumbra volume, and percentage ischemic core) with outcome was estimated with multivariable ordinal logistic regression as an adjusted odds ratio for a shift in the direction of a better outcome on the modified Rankin Scale. Interaction between CTP-derived parameters and treatment effect was determined using multivariable ordinal logistic regression. Interaction with treatment effect was also tested for mismatch (core <70 mL; penumbra/core ratio >1.2; penumbra − core >10 mL). The adjusted odds ratios for improved functional outcome for ischemic core, percentage ischemic core, and penumbra were 0.79 per 10 mL (95% confidence interval: 0.71-0.89; P<0.001), 0.82 per 10% (95% confidence interval: 0.66-0.90; P=0.002), and 0.97 per 10 mL (95% confidence interval: 0.92-1.01; P=0.15), respectively. No significant interaction between any of the CTP-derived parameters and treatment effect was observed. We observed no significant interaction between mismatch and treatment effect. CTP seems useful for predicting functional outcome, but cannot reliably identify patients who will not benefit from intra-arterial therapy. © 2015 American Heart Association, Inc.
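A multiplicative reading of the reported effect sizes may help interpretation: a per-10 mL adjusted odds ratio compounds over volume on the odds scale, so the abstract's core-volume effect of 0.79 per 10 mL implies, for a 50 mL core, roughly 0.79^5 ≈ 0.31 times the odds of a better modified Rankin Scale outcome:

```python
def outcome_odds_ratio(or_per_10ml, volume_ml):
    """Compound a per-10 mL adjusted odds ratio over a lesion volume
    (multiplicative on the odds scale, as in ordinal logistic regression)."""
    return or_per_10ml ** (volume_ml / 10.0)

# abstract's core-volume effect (OR 0.79 per 10 mL) applied to a 50 mL core
or_50 = outcome_odds_ratio(0.79, 50.0)
```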

  9. Computer simulations of alkali-acetate solutions: Accuracy of the forcefields in different concentrations

    NASA Astrophysics Data System (ADS)

    Ahlstrand, Emma; Zukerman Schpector, Julio; Friedman, Ran

    2017-11-01

    When proteins are solvated in electrolyte solutions that contain alkali ions, the ions interact mostly with carboxylates on the protein surface. Correctly accounting for alkali-carboxylate interactions is thus important for realistic simulations of proteins. Acetates are the simplest carboxylates that are amphipathic, and experimental data for alkali acetate solutions are available and can be compared with observables obtained from simulations. We carried out molecular dynamics simulations of alkali acetate solutions using polarizable and non-polarizable forcefields and examined the ion-acetate interactions. In particular, activity coefficients and association constants were studied in a range of concentrations (0.03, 0.1, and 1 M). In addition, quantum-mechanics (QM) based energy decomposition analysis was performed in order to estimate the contribution of polarization, electrostatics, dispersion, and QM (non-classical) effects on the cation-acetate and cation-water interactions. Simulations of Li-acetate solutions in general overestimated the binding of Li+ and acetates. At lower concentrations, the activity coefficients of alkali-acetate solutions were too high, which is suggested to be due to the simulation protocol and not the forcefields. Energy decomposition analysis suggested that improvement of the forcefield parameters to enable accurate simulations of Li-acetate solutions can be achieved but may require the use of a polarizable forcefield. Importantly, simulations with some ion parameters could not reproduce the correct ion-oxygen distances, which calls for caution in the choice of ion parameters when protein simulations are performed in electrolyte solutions.

  10. The Rényi entropy H2 as a rigorous, measurable lower bound for the entropy of the interaction region in multi-particle production processes

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-10-01

    A model-independent lower bound on the entropy S of the multi-particle system produced in high energy collisions, provided by the measurable Rényi entropy H2, is shown to be very effective. Estimates show that the ratio H2/S remains close to one half for all realistic values of the parameters.
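The bound is easy to verify numerically: the Rényi entropy H2 = -ln(sum_i p_i^2), which is measurable from two-particle coincidences, never exceeds the Shannon entropy S = -sum_i p_i ln p_i. The toy multiplicity distribution below is hypothetical, chosen only to exercise the inequality:

```python
import math

def shannon_entropy(p):
    """Shannon entropy S = -sum p ln p (natural log)."""
    return -sum(q * math.log(q) for q in p if q > 0)

def renyi_h2(p):
    """Renyi entropy of order 2: H2 = -ln(sum p^2).
    H2 <= S for any distribution, so H2 is a rigorous lower bound on S."""
    return -math.log(sum(q * q for q in p))

# toy multiplicity distribution (hypothetical, roughly geometric)
z = sum(0.6 ** n for n in range(1, 40))
p = [0.6 ** n / z for n in range(1, 40)]
S, H2 = shannon_entropy(p), renyi_h2(p)
```

The abstract's claim is stronger: for realistic parameter values of the produced system, the ratio H2/S stays close to one half, making the measurable H2 a usefully tight bound.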

  11. Mixed models for selection of Jatropha progenies with high adaptability and yield stability in Brazilian regions.

    PubMed

    Teodoro, P E; Bhering, L L; Costa, R D; Rocha, R B; Laviola, B G

    2016-08-19

    The aim of this study was to estimate genetic parameters via mixed models and simultaneously to select Jatropha progenies grown in three regions of Brazil that combine high adaptability and stability. From a previous phenotypic selection, three progeny tests were installed in 2008 in the municipalities of Planaltina-DF (Midwest), Nova Porteirinha-MG (Southeast), and Pelotas-RS (South). We evaluated 18 half-sib families in a randomized block design with three replications. Genetic parameters were estimated using restricted maximum likelihood/best linear unbiased prediction. Selection was based on the harmonic mean of the relative performance of genetic values method in three strategies considering: 1) performance in each environment (with interaction effect); 2) performance in the mean environment (free of interaction effect); and 3) simultaneous selection for grain yield, stability and adaptability. The accuracy obtained (91%) reveals excellent experimental quality and consequently safety and credibility in the selection of superior progenies for grain yield. The gain with the selection of the best five progenies was more than 20%, regardless of the selection strategy. Thus, based on the three selection strategies used in this study, the progenies 4, 11, and 3 (selected in all environments and the mean environment and by adaptability and phenotypic stability methods) are the most suitable for growing in the three regions evaluated.

  12. Quantitative evaluation of water quality in the coastal zone by remote sensing

    NASA Technical Reports Server (NTRS)

    James, W. P.

    1971-01-01

    Remote sensing as a tool in a waste management program is discussed. By monitoring both the pollution sources and the environmental quality, the interaction between the components of the estuarine system was observed. The need for in situ sampling is reduced with the development of improved calibrated, multichannel sensors. Remote sensing is used for: (1) pollution source determination, (2) mapping the influence zone of the waste source on water quality parameters, and (3) estimating the magnitude of the water quality parameters. Diffusion coefficients and circulation patterns can also be determined by remote sensing, along with subtle changes in vegetative patterns and density.

  13. Attitude determination and parameter estimation using vector observations - Theory

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
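A minimal sketch of the q-method step (not the paper's full joint attitude-and-bias iteration): the quaternion minimizing Wahba's loss is the eigenvector of Davenport's 4x4 K matrix with the largest eigenvalue, found here by shifted power iteration so that no a priori attitude estimate is needed:

```python
import math

def q_method(v_body, v_ref, weights):
    """Davenport q-method sketch: build K from weighted body/reference vector
    pairs and return the optimal quaternion [q1, q2, q3, q4] (q4 = scalar
    part) as the dominant eigenvector of the shifted matrix K + shift*I."""
    # attitude profile matrix B = sum_i w_i * b_i r_i^T
    B = [[sum(w * b[i] * r[j] for b, r, w in zip(v_body, v_ref, weights))
          for j in range(3)] for i in range(3)]
    S = [[B[i][j] + B[j][i] for j in range(3)] for i in range(3)]
    z = [B[1][2] - B[2][1], B[2][0] - B[0][2], B[0][1] - B[1][0]]
    sigma = B[0][0] + B[1][1] + B[2][2]
    K = [[S[0][0] - sigma, S[0][1], S[0][2], z[0]],
         [S[1][0], S[1][1] - sigma, S[1][2], z[1]],
         [S[2][0], S[2][1], S[2][2] - sigma, z[2]],
         [z[0], z[1], z[2], sigma]]
    # shift by the total weight so the target eigenvalue dominates in modulus
    shift = sum(weights)
    q = [0.5, 0.5, 0.5, 0.5]
    for _ in range(200):                      # power iteration
        q = [sum(K[i][j] * q[j] for j in range(4)) + shift * q[i]
             for i in range(4)]
        n = math.sqrt(sum(c * c for c in q))
        q = [c / n for c in q]
    return q

# sanity check: coincident body and reference frames -> identity quaternion
q_id = q_method([[1, 0, 0], [0, 1, 0]], [[1, 0, 0], [0, 1, 0]], [1.0, 1.0])
```

In the paper's algorithm this eigenproblem is re-solved inside an outer iteration over the non-attitude parameters (e.g. sensor biases).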

  14. A Bayesian Approach to Estimating Coupling Between Neural Components: Evaluation of the Multiple Component, Event-Related Potential (mcERP) Algorithm

    NASA Technical Reports Server (NTRS)

    Shah, Ankoor S.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Accurate measurement of single-trial responses is key to a definitive use of complex electromagnetic and hemodynamic measurements in the investigation of brain dynamics. We developed the multiple component, Event-Related Potential (mcERP) approach to single-trial response estimation to improve our resolution of dynamic interactions between neuronal ensembles located in different layers within a cortical region and/or in different cortical regions. The mcERP model asserts that multiple components, defined as stereotypic waveforms, comprise the stimulus-evoked response and that these components may vary in amplitude and latency from trial to trial. Maximum a posteriori (MAP) solutions for the model are obtained by iterating a set of equations derived from the posterior probability. Our first goal was to use the mcERP algorithm to analyze interactions (specifically latency and amplitude correlation) between responses in different layers within a cortical region. Thus, we evaluated the model by applying the algorithm to synthetic data containing two correlated local components and one independent far-field component. Three cases were considered: the local components were correlated by an interaction in their single-trial amplitudes, by an interaction in their single-trial latencies, or by an interaction in both amplitude and latency. We then analyzed the accuracy with which the algorithm estimated the component waveshapes and the single-trial parameters as a function of the linearity of each of these relationships. Extensions of these analyses to real data are discussed as well as ongoing work to incorporate more detailed prior information.

  15. Genetic variability and heritability of chlorophyll a fluorescence parameters in Scots pine (Pinus sylvestris L.).

    PubMed

    Čepl, Jaroslav; Holá, Dana; Stejskal, Jan; Korecký, Jiří; Kočová, Marie; Lhotáková, Zuzana; Tomášková, Ivana; Palovská, Markéta; Rothová, Olga; Whetten, Ross W; Kaňák, Jan; Albrechtová, Jana; Lstibůrek, Milan

    2016-07-01

    Current knowledge of the genetic mechanisms underlying the inheritance of photosynthetic activity in forest trees is generally limited, yet it is essential both for various practical forestry purposes and for better understanding of broader evolutionary mechanisms. In this study, we investigated genetic variation underlying selected chlorophyll a fluorescence (ChlF) parameters in structured populations of Scots pine (Pinus sylvestris L.) grown on two sites under non-stress conditions. These parameters were derived from the OJIP part of the ChlF kinetics curve and characterize individual parts of primary photosynthetic processes associated, for example, with the exciton trapping by light-harvesting antennae, energy utilization in photosystem II (PSII) reaction centers (RCs) and its transfer further down the photosynthetic electron-transport chain. An additive relationship matrix was estimated based on pedigree reconstruction, utilizing a set of highly polymorphic simple sequence repeat markers. Variance decomposition was conducted using the animal genetic evaluation mixed-linear model. The majority of ChlF parameters in the analyzed pine populations showed significant additive genetic variation. Statistically significant heritability estimates were obtained for most ChlF indices, with the exception of DI0/RC, φD0 and φP0 (Fv/Fm) parameters. Estimated heritabilities varied around the value of 0.15 with the maximal value of 0.23 in the ET0/RC parameter, which indicates electron-transport flux from QA to QB per PSII RC. No significant correlation was found between these indices and selected growth traits. Moreover, no genotype × environment interaction (G × E) was detected, i.e., no differences in genotypes' performance between sites. The absence of significant G × E in our study is interesting, given the relatively low heritability found for the majority of parameters analyzed. Therefore, we infer that polygenic variability of these indices is selectively neutral.
© The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
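For readers unfamiliar with heritability estimation, the quantity being reported can be illustrated with the classic midparent-offspring regression, whose slope estimates narrow-sense h². This is a simplified stand-in for the pedigree-based animal-model (REML/BLUP) decomposition actually used in the study, with hypothetical parameter values:

```python
import random

def midparent_offspring_h2(n=4000, h2_true=0.15, seed=3):
    """Simulate parent/offspring phenotypes under a purely additive model
    (total phenotypic variance 1) and regress offspring on the midparent
    value; the regression slope is an estimator of narrow-sense h^2."""
    rng = random.Random(seed)
    va, ve = h2_true, 1.0 - h2_true        # additive and residual variances
    pairs = []
    for _ in range(n):
        a_sire = rng.gauss(0, va ** 0.5)
        a_dam = rng.gauss(0, va ** 0.5)
        p_sire = a_sire + rng.gauss(0, ve ** 0.5)
        p_dam = a_dam + rng.gauss(0, ve ** 0.5)
        # offspring breeding value: midparent average + Mendelian sampling
        a_off = 0.5 * (a_sire + a_dam) + rng.gauss(0, (va / 2) ** 0.5)
        p_off = a_off + rng.gauss(0, ve ** 0.5)
        pairs.append((0.5 * (p_sire + p_dam), p_off))
    xm = sum(x for x, _ in pairs) / n
    ym = sum(y for _, y in pairs) / n
    sxy = sum((x - xm) * (y - ym) for x, y in pairs)
    sxx = sum((x - xm) ** 2 for x, _ in pairs)
    return sxy / sxx                        # slope = h^2 estimate

h2_hat = midparent_offspring_h2()
```

The animal model used in the paper generalizes this idea to arbitrary pedigrees via the additive relationship matrix.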

  16. A novel analytical solution for estimating aquifer properties within a horizontally anisotropic aquifer bounded by a stream

    NASA Astrophysics Data System (ADS)

    Huang, Yibin; Zhan, Hongbin; Knappett, Peter S. K.

    2018-04-01

    Past studies modeling stream-aquifer interaction commonly account for vertical anisotropy in hydraulic conductivity, but rarely address horizontal anisotropy, which may exist in certain sedimentary environments. If present, horizontal anisotropy will greatly impact stream depletion and the amount of recharge a pumped aquifer captures from the river. This scenario requires a different and somewhat more sophisticated mathematical approach to model and interpret pumping test results than previous models used to describe captured recharge from rivers. In this study, a new mathematical model is developed to describe the spatiotemporal distribution of drawdown from stream-bank pumping with a well screened across a horizontally anisotropic, confined aquifer, laterally bounded by a river. This new model is used to estimate four aquifer parameters, including the magnitudes and directions of the major and minor principal transmissivities and the storativity, based on the observed drawdown-time curves from a minimum of three non-collinear observation wells. To verify the efficacy of the new model, a MATLAB script was programmed to conduct a four-parameter inversion for the four parameters of concern. A comparison of the analytical and numerical inversions shows that the accuracy of both is acceptable, although the MATLAB program sometimes becomes problematic because of the difficulty of distinguishing local minima from the global minimum. The new analytical model thus appears applicable and robust for estimating parameter values in a horizontally anisotropic aquifer laterally bounded by a stream. In addition, the new model calculates the stream depletion rate as a function of stream-bank pumping. Unique to horizontally anisotropic and homogeneous aquifers, the stream depletion rate at any given pumping rate depends closely on the horizontal anisotropy ratio and the direction of the principal transmissivities relative to the stream-bank.

  17. Vertical Eddy Diffusivity as a Control Parameter in the Tropical Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Martinez Avellaneda, N.; Cornuelle, B.; Mazloff, M. R.; Stammer, D.

    2012-12-01

    Ocean models suffer from errors in the treatment of turbulent sub-grid scale motions causing mixing and energy dissipation. Unrealistic small-scale features in models can have large-scale consequences, such as biases in the upper ocean temperature, a symptom of poorly-simulated upwelling, currents and air-sea interactions. This is of special importance in the tropical Pacific Ocean, which is home to energetic air-sea interactions that affect global climate. It has been shown in a number of studies that the simulated ENSO variability is highly dependent on the state of the ocean (e.g.: background mixing). Moreover, the magnitude of the vertical numerical diffusion is of primary importance in properly reproducing the Pacific equatorial thermocline. Yet, it is a common practice to use spatially uniform mixing parameters in ocean simulations. This work is part of a NASA-funded project to estimate the space-varying ocean mixing coefficients in an eddy-permitting model of the tropical Pacific. The usefulness of assimilation techniques in estimating mixing parameters has been previously explored (e.g.: Stammer, 2005, Ferreira et al., 2005). The authors also demonstrated that the spatial structure of the Equatorial Undercurrent (EUC) could be improved by adjusting wind-stress and surface buoyancy flux within their error bounds. In our work, we address the important question of whether adjusting mixing parameterizations can bring about similar improvements. To that end, an eddy-permitting state estimate for the tropical Pacific is developed using the MIT general circulation model and its adjoint where the vertical diffusivity is set as a control parameter. Complementary adjoint-based sensitivity results show strong sensitivities of the Tropical Pacific thermocline (thickness and location) and the EUC transport to the vertical diffusivity in the tropics. 
Argo, CTD, XBT and mooring in-situ data, as well as TMI SST and altimetry observations are assimilated in order to reduce the misfit between the model simulations and the ocean observations. [Figure caption: model domain topography at 1/3° spatial resolution, interpolated from ETOPO2; the first and last color levels represent regions shallower than 100 m and deeper than 5000 m, respectively.]

  18. Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bekele, E. G.; Nicklow, J. W.

    2005-12-01

    Hydrologic simulation models need to be calibrated and validated before using them for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm that is inspired by the social behavior of bird flocking or fish schooling. The newly-developed calibration model is integrated into the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically-based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, which is located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
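The PSO update rule (inertia plus cognitive and social attraction terms) can be sketched compactly. The objective here is a toy quadratic misfit standing in for the SWAT streamflow/sediment error measure; the coefficients are conventional defaults, not the study's settings:

```python
import random

def pso_minimize(f, bounds, n_particles=30, n_iter=200, seed=4):
    """Minimal particle swarm optimizer: each particle keeps a velocity
    updated from inertia (w), attraction to its personal best (c1), and
    attraction to the swarm's global best (c2), clamped to the bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# toy "calibration": recover two parameters minimizing a quadratic misfit
best, err = pso_minimize(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 7.0) ** 2,
                         bounds=[(0.0, 1.0), (0.0, 10.0)])
```

In a real SWAT calibration, `f` would run the hydrologic model and return an error statistic against observed streamflow, which is why reducing the calibrated parameter count matters so much.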

  19. Theoretical estimation of Photons flow rate Production in quark gluon interaction at high energies

    NASA Astrophysics Data System (ADS)

    Al-Agealy, Hadi J. M.; Hamza Hussein, Hyder; Mustafa Hussein, Saba

    2018-05-01

    Photons emitted from high-energy collisions in a quark-gluon system have been studied theoretically on the basis of color quantum theory. A simple model for photon emission in a quark-gluon system is investigated. In this model, we use a quantum treatment suited to describing the quark system. The photon current rates are estimated for two systems at different fugacity coefficients. We discuss the behavior of the photon rate and the properties of the quark-gluon system at different photon energies using a Boltzmann model. The dependence of the photon rate on the anisotropy coefficient, strong coupling constant, photon energy, color number, fugacity parameter, thermal energy, and critical energy of the system is also discussed.

  20. Real-Time Monitoring and Prediction of the Pilot Vehicle System (PVS) Closed-Loop Stability

    NASA Astrophysics Data System (ADS)

    Mandal, Tanmay Kumar

    Understanding human control behavior is an important step for improving the safety of future aircraft. Considerable resources are invested during the design phase of an aircraft to ensure that the aircraft has desirable handling qualities. However, human pilots exhibit a wide range of control behaviors that are a function of external stimulus, aircraft dynamics, and human psychological properties (such as workload, stress factor, confidence, and sense of urgency factor). This variability is difficult to address comprehensively during the design phase and may lead to undesirable pilot-aircraft interaction, such as pilot-induced oscillations (PIO). This creates the need to track human pilot performance in real time to monitor pilot vehicle system (PVS) stability. This work focused on studying human pilot behavior for the longitudinal axis of a remotely controlled research aircraft and using human-in-the-loop (HuIL) simulations to obtain information about human controlled system (HCS) stability. The work in this dissertation is divided into two main parts: PIO analysis and human control model parameter estimation. To replicate different flight conditions, the experiments included time delay and elevator rate-limiting phenomena typical of actuator dynamics. To study human control behavior, this study employed the McRuer model for single-input single-output manual compensatory tasks. The McRuer model is a lead-lag controller with time delay that has been shown to adequately model manual compensatory tasks. This dissertation presents a novel technique to estimate McRuer model parameters in real time and validates it using HuIL simulations to correctly predict HCS stability. The McRuer model parameters were estimated in real time using a Kalman filter approach. The estimated parameters were then used to analyze the stability of the closed-loop HCS and were verified against the experimental data.
Therefore, the main contribution of this dissertation is the design of an unscented Kalman filter-based algorithm to estimate McRuer model parameters in real time, and a framework to validate this algorithm for single-input single-output manual compensatory tasks to predict instabilities.
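    The parameter-as-state idea behind the Kalman filter approach can be illustrated with a minimal sketch. The linear-in-parameters pilot model, gains, and noise levels below are invented for illustration and are not the dissertation's McRuer model or its unscented filter; when constant parameters are treated as the filter state, the update reduces to recursive least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# True parameters of a simple linear-in-parameters pilot model
# (a hypothetical stand-in for the McRuer gain/lead terms).
theta_true = np.array([2.0, -0.5])

# Simulated tracking-error input and pilot output with sensor noise.
u = rng.standard_normal(500)
y = theta_true[0] * u + theta_true[1] * np.concatenate(([0.0], u[:-1]))
y += 0.1 * rng.standard_normal(500)

# Kalman filter with the parameters as a (constant) state vector:
# x_k = x_{k-1},  y_k = H_k x_k + v_k.
x = np.zeros(2)            # parameter estimate
P = np.eye(2) * 100.0      # estimate covariance (diffuse prior)
R = 0.1 ** 2               # measurement-noise variance
for k in range(1, len(u)):
    H = np.array([u[k], u[k - 1]])       # regressor row
    S = H @ P @ H + R                    # innovation variance
    K = P @ H / S                        # Kalman gain
    x = x + K * (y[k] - H @ x)           # measurement update
    P = P - np.outer(K, H) @ P           # covariance update
# x should approach theta_true as data accumulate
```

The real-time flavor comes from the recursion: each new sample refines the parameter estimate without reprocessing the full record.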

  1. Application of nonlinear least-squares regression to ground-water flow modeling, west-central Florida

    USGS Publications Warehouse

    Yobbi, D.K.

    2000-01-01

    A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal parameter estimates range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
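    A minimal sketch of the regression technique, with a hypothetical one-dimensional head model standing in for the regional flow model (the exponential form, parameter values, and noise level are invented for illustration). The Jacobian at the optimum also yields the sensitivity and correlation diagnostics the abstract mentions.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical forward model: water level as a function of distance x
# and two aquifer parameters (not a real groundwater flow solve).
def simulate_heads(params, x):
    a, b = params
    return a * np.exp(-x / b)

x_obs = np.linspace(0.1, 2.0, 20)
true_params = np.array([5.0, 0.5])
rng = np.random.default_rng(1)
h_obs = simulate_heads(true_params, x_obs) + 0.01 * rng.standard_normal(20)

# Residuals: measured minus simulated water levels.
def residuals(params):
    return h_obs - simulate_heads(params, x_obs)

fit = least_squares(residuals, x0=[1.0, 1.0])

# Approximate parameter covariance and correlation from the Jacobian,
# the quantities used to diagnose insensitivity and correlation.
J = fit.jac
cov = np.linalg.inv(J.T @ J) * (2 * fit.cost / (len(x_obs) - 2))
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
```

A correlation coefficient near plus or minus one would flag the kind of non-uniqueness that made independent estimation of all parameters impossible in the study.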

  2. Predicting drug loading in PLA-PEG nanoparticles.

    PubMed

    Meunier, M; Goupil, A; Lienard, P

    2017-06-30

    Polymer nanoparticles present advantageous physical and biopharmaceutical properties as drug delivery systems compared to conventional liquid formulations. Active pharmaceutical ingredients (APIs) are often hydrophobic, thus not soluble in conventional liquid delivery. Encapsulating the drugs in polymer nanoparticles can improve their pharmacological and bio-distribution properties, preventing rapid clearance from the bloodstream. Such nanoparticles are commonly made of non-toxic amphiphilic self-assembling block copolymers where the core (poly-[d,l-lactic acid] or PLA) serves as a reservoir for the API and the external part (Poly-(Ethylene-Glycol) or PEG) serves as a stealth corona to avoid capture by macrophages. The present study aims to predict the drug affinity for PLA-PEG nanoparticles and their effective drug loading using in silico tools in order to virtually screen potential drugs for non-covalent encapsulation applications. To that end, different simulation methods such as molecular dynamics and Monte-Carlo have been used to estimate the binding of actives on model polymer surfaces. Initially, the methods and models are validated against a series of pigment molecules for which experimental data exist. The drug affinity for the core of the nanoparticles is estimated using a Monte-Carlo "docking" method. Drug miscibility in the polymer matrix, using the Hildebrand solubility parameter (δ), and the solvation free energy of the drug in the PLA polymer model is then estimated. Finally, existing published ALogP quantitative structure-property relationships (QSPR) are compared to this method. Our results demonstrate that adsorption energies modelled by atomistic docking simulations on PLA surfaces correlate well with experimental drug loadings, whereas simpler approaches based on Hildebrand solubility parameters and Flory-Huggins interaction parameters do not. 
More complex molecular dynamics techniques which use estimation of the solvation free energies both in PLA and in water led to satisfactory predictive models. In addition, experimental drug loadings and Log P are found to correlate well. This work can be used to improve the understanding of drug-polymer interactions, a key component to designing better delivery systems. Copyright © 2017 Elsevier B.V. All rights reserved.
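    The Hildebrand-based screening step can be sketched as a back-of-the-envelope Flory-Huggins estimate; the molar volume and solubility parameter values below are illustrative assumptions, not the paper's data.

```python
# Flory-Huggins interaction parameter from Hildebrand solubility parameters:
#   chi = V_m * (delta_drug - delta_polymer)^2 / (R * T)
R = 8.314      # J/(mol*K)
T = 298.15     # K
V_m = 100e-6   # m^3/mol, molar volume of the drug (illustrative)

# Hildebrand solubility parameters in Pa^0.5 (1 MPa^0.5 = 1000 Pa^0.5);
# both values are illustrative, PLA is commonly quoted near 19-21 MPa^0.5.
delta_drug = 22.0e3
delta_pla = 20.2e3

chi = V_m * (delta_drug - delta_pla) ** 2 / (R * T)
# chi < 0.5 is the classical threshold for polymer-solute miscibility
```

The abstract's finding is precisely that this quick estimate, unlike the docking energies, did not correlate with measured drug loadings, which is why the more expensive atomistic route was needed.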

  3. Reaction norm model with unknown environmental covariate to analyze heterosis by environment interaction.

    PubMed

    Su, G; Madsen, P; Lund, M S

    2009-05-01

    Crossbreeding is currently increasing in dairy cattle production. Several studies have shown an environment-dependent heterosis [i.e., an interaction between heterosis and environment (H x E)]. An H x E interaction is usually estimated from a few discrete environment levels. The present study proposes a reaction norm model to describe H x E interaction, which can deal with a large number of environment levels using few parameters. In the proposed model, total heterosis consists of an environment-independent part, which is described as a function of heterozygosity, and an environment-dependent part, which is described as a function of heterozygosity and environmental value (e.g., herd-year effect). A Bayesian approach is developed to estimate the environmental covariates, the regression coefficients of the reaction norm, and other parameters of the model simultaneously in both linear and nonlinear reaction norms. In the nonlinear reaction norm model, the H x E is approximated using linear splines. The approach was tested using simulated data, which were generated using an animal model with a reaction norm for heterosis. The simulation study includes 4 scenarios (the combinations of moderate vs. low heritability and moderate vs. low herd-year variation) of H x E interaction in a nonlinear form. In all scenarios, the proposed model predicted total heterosis very well. The correlation between true heterosis and predicted heterosis was 0.98 in the scenarios with low herd-year variation and 0.99 in the scenarios with moderate herd-year variation. This suggests that the proposed model and method could be a good approach to analyze H x E interactions and predict breeding values in situations in which heterosis changes gradually and continuously over an environmental gradient. 
On the other hand, it was found that a model ignoring H x E interaction did not significantly harm the prediction of breeding value under the simulated scenarios in which the variance for environment-dependent heterosis effects was small (as it generally is), and sires were randomly used over production environments.

  4. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in some dynamical equations to estimate the parameters in a single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
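    The component-by-component idea can be sketched for the Rössler system: the parameter a appears only in the y-equation (dy/dt = x + a y), so given time series of x and y it can be recovered by linear least squares on that single component, with finite differences standing in for the derivative. This is a sketch of the decomposition idea only; the EA search itself is not reproduced here.

```python
import numpy as np

# Rössler system with standard chaotic parameters.
def rossler(state, a=0.2, b=0.2, c=5.7):
    x, y, z = state
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(s + dt / 2 * k1)
    k3 = f(s + dt / 2 * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Generate the "known time series for all of the components".
dt, n = 0.01, 3000
traj = np.empty((n, 3))
traj[0] = [1.0, 1.0, 1.0]
for k in range(n - 1):
    traj[k + 1] = rk4_step(rossler, traj[k], dt)

# Estimate a from the y-component alone: dy/dt - x = a * y.
x, y = traj[:, 0], traj[:, 1]
dy_dt = (y[2:] - y[:-2]) / (2 * dt)       # central differences
target = dy_dt - x[1:-1]
a_hat = target @ y[1:-1] / (y[1:-1] @ y[1:-1])
```

Repeating the same trick for the z-equation would recover b and c, which is the "stage by stage" estimation the abstract describes.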

  5. The Age of the Surface of Venus

    NASA Technical Reports Server (NTRS)

    Zahnle, K. J.; McKinnon, William B.; Young, Richard E. (Technical Monitor)

    1997-01-01

    Impact craters on Venus appear to be uniformly and randomly scattered over a once, but no longer, geologically active planet. To first approximation, the planet shows a single surface of a single age. Here we use Monte Carlo cratering simulations to estimate the age of the surface of Venus. The simulations are based on the present populations of Earth-approaching asteroids, Jupiter-family, Halley-family, and long period comets; they use standard Schmidt-Housen crater scalings in the gravity regime; and they describe interaction with the atmosphere using a semi-analytic 'pancake' model that is calibrated to detailed numerical simulations of impactors striking Venus. The lunar and terrestrial cratering records are also simulated. Both of these records suffer from poor statistics. The Moon has few young large craters and fewer still whose ages are known, and the record is biased because small craters tend to look old and large craters tend to look young. The craters of the Earth provide the only reliable ages, but these craters are few, eroded, of uncertain diameter, and statistically incomplete. Together the three cratering records can be inverted to constrain the flux of impacting bodies, crater diameters given impact parameters, and the calibration of atmospheric interactions. The surface age of Venus that results is relatively young. Alternatively, we can use our best estimates for these three input parameters to derive a best estimate for the age of the surface of Venus. Our tentative conclusions are that comets are unimportant, that the lunar and terrestrial crater records are both subject to strong biases, that there is no strong evidence for an increasing cratering flux in recent years, and that the nominal age of the surface of Venus is about 600 Ma, although the uncertainty is about a factor of two. 
The chief difference between our estimate and earlier, somewhat younger estimates is that we find that the venusian atmosphere is less permeable to impacting bodies than supposed by earlier studies. An older surface increases the likelihood that Venus is dead.
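    The statistical core of crater-count dating can be sketched with a Poisson estimate; the impactor flux below is an illustrative value tuned to reproduce the roughly 600 Ma nominal age, not the paper's calibrated flux, and the crater count is only approximately the observed Venus total.

```python
import math

# Age from crater counts: N = flux * area * age  =>  age = N / (flux * area).
n_craters = 940           # roughly the number of observed craters on Venus
flux = 3.4e-15            # craters per km^2 per year (illustrative value)
area = 4.6e8              # surface area of Venus, km^2

age = n_craters / (flux * area)          # years
rel_sigma = 1.0 / math.sqrt(n_craters)   # Poisson counting uncertainty, ~3%
```

Note that the counting uncertainty is only a few percent; the factor-of-two uncertainty quoted in the abstract is dominated by the flux and atmospheric-screening calibration, not by statistics.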

  6. A Unified Estimation Framework for State-Related Changes in Effective Brain Connectivity.

    PubMed

    Samdin, S Balqis; Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-04-01

    This paper addresses the critical problem of estimating time-evolving effective brain connectivity. Current approaches based on sliding window analysis or time-varying coefficient models do not simultaneously capture both slow and abrupt changes in causal interactions between different brain regions. To overcome these limitations, we develop a unified framework based on a switching vector autoregressive (SVAR) model. Here, the dynamic connectivity regimes are uniquely characterized by distinct vector autoregressive (VAR) processes and allowed to switch between quasi-stationary brain states. The state evolution and the associated directed dependencies are defined by a Markov process and the SVAR parameters. We develop a three-stage estimation algorithm for the SVAR model: 1) feature extraction using time-varying VAR (TV-VAR) coefficients, 2) preliminary regime identification via clustering of the TV-VAR coefficients, 3) refined regime segmentation by Kalman smoothing and parameter estimation via expectation-maximization algorithm under a state-space formulation, using initial estimates from the previous two stages. The proposed framework is adaptive to state-related changes and gives reliable estimates of effective connectivity. Simulation results show that our method provides accurate regime change-point detection and connectivity estimates. In real applications to brain signals, the approach was able to capture directed connectivity state changes in functional magnetic resonance imaging data linked with changes in stimulus conditions, and in epileptic electroencephalograms, differentiating ictal from nonictal periods. The proposed framework accurately identifies state-dependent changes in brain network and provides estimates of connectivity strength and directionality. The proposed approach is useful in neuroscience studies that investigate the dynamics of underlying brain states.
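    Stages 1 and 2 of the estimation algorithm can be sketched on synthetic data: sliding-window VAR(1) fits provide time-varying coefficient features, which a small k-means loop then clusters into preliminary regimes. The regime matrices, window sizes, and two-state setup are illustrative assumptions; the Kalman-smoothing and EM refinement of stage 3 is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two VAR(1) regimes with clearly different coefficient matrices.
A1 = np.array([[0.8, 0.0], [0.0, 0.8]])
A2 = np.array([[0.2, 0.5], [-0.5, 0.2]])

def simulate(A, n, x0):
    out = [x0]
    for _ in range(n - 1):
        out.append(A @ out[-1] + 0.1 * rng.standard_normal(2))
    return np.array(out)

seg1 = simulate(A1, 400, np.zeros(2))
seg2 = simulate(A2, 400, seg1[-1])
data = np.vstack([seg1, seg2])            # regime switch at t = 400

# Stage 1: time-varying VAR coefficients from sliding windows.
win, step = 50, 25
feats = []
for s in range(0, len(data) - win, step):
    X, Y = data[s:s + win - 1], data[s + 1:s + win]
    A_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    feats.append(A_hat.T.ravel())
feats = np.array(feats)

# Stage 2: preliminary regime identification by 2-means clustering.
centers = feats[[0, -1]].copy()
for _ in range(20):
    d = np.linalg.norm(feats[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    for j in (0, 1):
        members = feats[labels == j]
        if len(members):
            centers[j] = members.mean(axis=0)
```

Windows falling entirely before and entirely after the switch end up in different clusters, which is exactly the change-point information passed on to the state-space refinement stage.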

  7. Estimation of Ecosystem Parameters of the Community Land Model with DREAM: Evaluation of the Potential for Upscaling Net Ecosystem Exchange

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.

    2015-12-01

    Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE-time series of one year length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition to large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE-sum (23% improvement), annual NEE-cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE-data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods so that their potential for upscaling is demonstrated. 
However, simulation results also indicate that the estimated parameters may mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
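    The Bayesian estimation step can be sketched with a random-walk Metropolis sampler on a one-parameter toy flux model. DREAM(zs) and CLM are far richer, so everything below (the model form, the flat prior, the noise level, the diurnal-cycle shape) is an illustrative stand-in, but the principle of conditioning a parameter on a full NEE-like time series is the same.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "NEE-like" observations from a one-parameter toy flux model.
def model(p, t):
    return -p * np.sin(2 * np.pi * t)      # diurnal uptake with amplitude p

t = np.linspace(0, 1, 48)
p_true, noise = 6.0, 0.5
obs = model(p_true, t) + noise * rng.standard_normal(t.size)

def log_post(p):                           # flat prior on p > 0
    if p <= 0:
        return -np.inf
    return -0.5 * np.sum((obs - model(p, t)) ** 2) / noise ** 2

# Random-walk Metropolis, conditioning on the whole time series at once.
p, lp = 1.0, log_post(1.0)
chain = []
for _ in range(5000):
    prop = p + 0.3 * rng.standard_normal()
    lpp = log_post(prop)
    if np.log(rng.uniform()) < lpp - lp:
        p, lp = prop, lpp
    chain.append(p)
post_mean = np.mean(chain[1000:])          # discard burn-in
```

The posterior spread of the chain, not just its mean, is what carries over to the uncertainty of the upscaled parameters.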

  8. Covariance specification and estimation to improve top-down Green House Gas emission estimates

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.

    2015-12-01

    The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of Greenhouse Gas (GHG) emissions as well as their uncertainties in urban domains using a top down inversion method. Top down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions using different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models to introduce space-time interacting mismatches along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matern covariance class and estimate their parameters with specified mismatches. We find that best-fitted prior covariances are not always best in recovering the truth. 
To achieve accuracy, we perform a sensitivity study to further tune covariance parameters. Finally, we introduce a shrinkage based sample covariance estimation technique for both prior and mismatch covariances. This technique allows us to achieve similar accuracy nonparametrically in a more efficient and automated way.
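    The two covariance ingredients can be sketched together: an exponential (Matern with nu = 1/2) prior covariance built from inter-cell distances, and a shrinkage estimate of a covariance that pulls the noisy sample covariance toward a scaled identity. The variance, range, and shrinkage weight are fixed by assumption here, whereas the paper estimates such quantities from the data.

```python
import numpy as np

rng = np.random.default_rng(3)

# --- Prior spatial covariance: exponential model (Matern, nu = 1/2) ---
coords = rng.uniform(0, 10, size=(20, 2))     # grid-cell centroids (km)
d = np.linalg.norm(coords[:, None] - coords[None], axis=2)
sigma2, ell = 2.0, 3.0                        # variance and range (illustrative)
prior_cov = sigma2 * np.exp(-d / ell)

# --- Shrinkage estimate from a small sample (n barely exceeds p) ---
p, n = 20, 30
samples = rng.multivariate_normal(np.zeros(p), prior_cov, size=n)
S = np.cov(samples, rowvar=False)             # noisy sample covariance
alpha = 0.3                                   # shrinkage weight (assumed fixed)
mu = np.trace(S) / p                          # average eigenvalue of S
shrunk = (1 - alpha) * S + alpha * mu * np.eye(p)
```

Shrinkage preserves the total variance (the trace) while lifting the smallest eigenvalues, which is what makes the estimate usable inside an inversion even when few samples are available.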

  9. IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Aster, R. W.

    1994-01-01

    The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production on an industry wide basis or a process wide basis can be simulated. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. 
The original IPEG was developed in 1980.
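    The price-estimation and sensitivity-study workflow can be sketched as follows; the cost categories mirror the inputs listed above, but the simple capital-recovery treatment and all numeric values are assumptions for illustration, not the actual IPEG equations.

```python
# Minimal sketch of an IPEG-style annual unit-price estimate with a
# sensitivity sweep over one selected input variable.

def unit_price(equipment, space, labor, materials, supplies, utilities,
               units_per_year, capital_recovery=0.15):
    """Annual production price per unit (illustrative cost model)."""
    annual = (equipment * capital_recovery + space + labor
              + materials + supplies + utilities)
    return annual / units_per_year

base = dict(equipment=2.0e6, space=5.0e4, labor=4.0e5,
            materials=6.0e5, supplies=1.0e5, utilities=8.0e4,
            units_per_year=1.0e5)

# Sensitivity study: sweep labor cost, as a user would select in IPEG.
sweep = []
for labor in (3.0e5, 4.0e5, 5.0e5):
    price = unit_price(**{**base, "labor": labor})
    sweep.append((labor, price))
```

Each sweep value yields a new price estimate, reproducing in miniature the sensitivity-variable workflow the program offers interactively.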

  10. IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM 370 VERSION)

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1994-01-01

    The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production on an industry wide basis or a process wide basis can be simulated. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. 
The original IPEG was developed in 1980.

  11. Probabilistic assessment of the dynamic interaction between multiple pedestrians and vertical vibrations of footbridges

    NASA Astrophysics Data System (ADS)

    Tubino, Federica

    2018-03-01

    The effect of human-structure interaction in the vertical direction for footbridges is studied based on a probabilistic approach. The bridge is modeled as a continuous dynamic system, while pedestrians are schematized as moving single-degree-of-freedom systems with random dynamic properties. The non-dimensional form of the equations of motion allows us to obtain results that can be applied in a very wide set of cases. An extensive Monte Carlo simulation campaign is performed, varying the main non-dimensional parameters identified, and the mean values and coefficients of variation of the damping ratio and of the non-dimensional natural frequency of the coupled system are reported. The results obtained can be interpreted from two different points of view. If the characterization of pedestrians' equivalent dynamic parameters is assumed as uncertain, as revealed from a current literature review, then the paper provides a range of possible variations of the coupled system damping ratio and natural frequency as a function of pedestrians' parameters. Assuming that a reliable characterization of pedestrians' dynamic parameters is available (which is not the case at present, but could be in the future), the results presented can be adopted to estimate the damping ratio and natural frequency of the coupled footbridge-pedestrian system for a very wide range of real structures.
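    The Monte Carlo campaign can be sketched in dimensional form: each draw attaches a single-degree-of-freedom pedestrian to one bridge mode, and the damping ratio of the bridge-dominant mode is read off the eigenvalues of the coupled two-degree-of-freedom system. All bridge and pedestrian property values and distributions below are illustrative assumptions, not the paper's non-dimensional parameter ranges.

```python
import numpy as np

rng = np.random.default_rng(4)

# Bridge modal properties (illustrative): modal mass, frequency, damping.
M, f_b, z_b = 40_000.0, 2.0, 0.005
w_b = 2 * np.pi * f_b
K_b, C_b = M * w_b ** 2, 2 * z_b * M * w_b

def bridge_mode_damping(m, f_p, z_p):
    """Damping ratio of the bridge-dominant mode of the coupled 2-DOF system."""
    w_p = 2 * np.pi * f_p
    k_p, c_p = m * w_p ** 2, 2 * z_p * m * w_p
    Mm = np.diag([M, m])
    C = np.array([[C_b + c_p, -c_p], [-c_p, c_p]])
    K = np.array([[K_b + k_p, -k_p], [-k_p, k_p]])
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(Mm, K), -np.linalg.solve(Mm, C)]])
    lam = np.linalg.eigvals(A)
    lam = lam[lam.imag > 0]                       # one eigenvalue per mode
    zetas = -lam.real / np.abs(lam)
    freqs = np.abs(lam) / (2 * np.pi)
    return zetas[np.argmin(np.abs(freqs - f_b))]  # mode closest to the bridge

# Monte Carlo over pedestrian dynamic properties (assumed distributions).
draws = [bridge_mode_damping(m=rng.uniform(60, 90),
                             f_p=rng.uniform(1.5, 3.5),
                             z_p=rng.uniform(0.2, 0.4))
         for _ in range(500)]
mean_zeta = float(np.mean(draws))
```

Collecting the mean and coefficient of variation of such draws is exactly the kind of output the paper tabulates, there expressed against non-dimensional mass, frequency, and damping ratios.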

  12. Non-ambiguous recovery of Biot poroelastic parameters of cellular panels using ultrasonic waves

    NASA Astrophysics Data System (ADS)

    Ogam, Erick; Fellah, Z. E. A.; Sebaa, Naima; Groby, J.-P.

    2011-03-01

    The inverse problem of the recovery of the poroelastic parameters of open-cell soft plastic foam panels is solved by employing transmitted ultrasonic waves (USW) and the Biot-Johnson-Koplik-Champoux-Allard (BJKCA) model. It is shown by constructing the objective functional given by the total square of the difference between predictions from the BJKCA interaction model and experimental data obtained with transmitted USW that the inverse problem is ill-posed, since the functional exhibits several local minima and maxima. In order to solve this problem, which is beyond the capability of most off-the-shelf iterative nonlinear least squares optimization algorithms (such as the Levenberg-Marquardt or Nelder-Mead simplex methods), simple strategies are developed. The recovered acoustic parameters are compared with those obtained using simpler interaction models and a method employing asymptotic phase velocity of the transmitted USW. The retrieved elastic moduli are validated by solving an inverse vibration spectroscopy problem with data obtained from beam-like specimens cut from the panels using an equivalent solid elastodynamic model as estimator. The phase velocities are reconstructed using computed, measured resonance frequencies and a time-frequency decomposition of transient waves induced in the beam specimen. These confirm that the elastic parameters recovered using vibration are valid over the frequency range of study.

  13. The HLMA project: determination of high Δm2 LMA mixing parameters and constraint on |Ue3| with a new reactor neutrino experiment

    NASA Astrophysics Data System (ADS)

    Schönert, Stefan; Lasserre, Thierry; Oberauer, Lothar

    2003-03-01

    In the forthcoming months, the KamLAND experiment will probe the parameter space of the solar large mixing angle MSW solution as the origin of the solar neutrino deficit with ν̄e's from distant nuclear reactors. If however the solution realized in nature is such that Δm²_sol ≳ 2×10⁻⁴ eV² (hereafter named the HLMA region), KamLAND will only observe a rate suppression but no spectral distortion and hence it will not have the optimal sensitivity to measure the mixing parameters. In this case, we propose a new medium baseline reactor experiment located at Heilbronn (Germany) to pin down the precise value of the solar mixing parameters. In this paper, we present the Heilbronn detector site, we calculate the ν̄e interaction rate and the positron spectrum expected from the surrounding nuclear power plants. We also discuss the sensitivity of such an experiment to |Ue3| in both normal and inverted neutrino mass hierarchy scenarios. We then outline the detector design, estimate background signals induced by natural radioactivity as well as by in situ cosmic ray muon interaction, and discuss a strategy to detect the anti-neutrino signal `free of background'.
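    The expected rate suppression is governed by the standard two-flavor survival probability. The mixing values and baseline below are illustrative choices near the HLMA region quoted above, not the paper's actual Heilbronn baseline or reactor spectrum.

```python
import math

def survival_probability(delta_m2_eV2, sin2_2theta, L_m, E_MeV):
    """Two-flavor electron-antineutrino survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 [eV^2] * L [m] / E [MeV])."""
    return 1.0 - sin2_2theta * math.sin(
        1.27 * delta_m2_eV2 * L_m / E_MeV) ** 2

dm2 = 2e-4        # eV^2, lower edge of the HLMA region
s22 = 0.8         # sin^2(2*theta), illustrative LMA-like mixing
P = survival_probability(dm2, s22, L_m=20_000.0, E_MeV=4.0)
```

At a medium baseline the oscillation phase varies appreciably across the few-MeV reactor spectrum, which is what restores the spectral-distortion sensitivity that KamLAND would lack in the HLMA case.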

  14. Between-User Reliability of Tier 1 Exposure Assessment Tools Used Under REACH.

    PubMed

    Lamb, Judith; Galea, Karen S; Miller, Brian G; Hesse, Susanne; Van Tongeren, Martie

    2017-10-01

    When applying simple screening (Tier 1) tools to estimate exposure to chemicals in a given exposure situation under the Registration, Evaluation, Authorisation and restriction of CHemicals Regulation 2006 (REACH), users must select from several possible input parameters. Previous studies have suggested that results from exposure assessments using expert judgement and from the use of modelling tools can vary considerably between assessors. This study aimed to investigate the between-user reliability of Tier 1 tools. A remote-completion exercise and in-person workshop were used to identify and evaluate tool parameters and factors such as user demographics that may be potentially associated with between-user variability. Participants (N = 146) generated dermal and inhalation exposure estimates (N = 4066) from specified workplace descriptions ('exposure situations') and Tier 1 tool combinations (N = 20). Interactions between users, tools, and situations were investigated and described. Systematic variation associated with individual users was minor compared with random between-user variation. Although variation was observed between choices made for the majority of input parameters, differing choices of Process Category ('PROC') code/activity descriptor and dustiness level impacted most on the resultant exposure estimates. Exposure estimates ranging over several orders of magnitude were generated for the same exposure situation by different tool users. Such unpredictable between-user variation will reduce consistency within REACH processes and could result in underestimation or overestimation of exposure, risking worker ill-health or the implementation of unnecessary risk controls, respectively. Implementation of additional support and quality control systems for all tool users is needed to reduce between-assessor variation and so ensure both the protection of worker health and avoidance of unnecessary business risk management expenditure. © The Author 2017. 
Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  15. Simultaneous versus sequential optimal experiment design for the identification of multi-parameter microbial growth kinetics as a function of temperature.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2010-05-21

    Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study contains multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and contains four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. 
Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
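    The OED/PE machinery can be sketched with a two-parameter toy model in place of the four-parameter CTMI: parameter sensitivities yield the Fisher information matrix, and the D-criterion (the determinant of that matrix) ranks candidate sets of sampling times. The model form, parameter values, and candidate designs below are illustrative assumptions.

```python
import numpy as np

# Toy model: y(t) = p1 * exp(-p2 * t) (not the CTMI itself).
p1, p2, sigma2 = 1.0, 0.5, 0.01

def fisher_information(times):
    """FIM from analytical sensitivities, assuming white measurement noise."""
    F = np.zeros((2, 2))
    for t in times:
        s = np.array([np.exp(-p2 * t),           # dy/dp1
                      -p1 * t * np.exp(-p2 * t)])  # dy/dp2
        F += np.outer(s, s) / sigma2
    return F

clustered = [0.1, 0.2, 0.3, 0.4]   # poor design: early times only
spread = [0.1, 1.0, 3.0, 6.0]      # informative design
d_clustered = np.linalg.det(fisher_information(clustered))
d_spread = np.linalg.det(fisher_information(spread))
```

Maximizing the D-criterion over admissible input profiles is, in essence, what each of the single, global, and sequential strategies does; they differ in how many parameters are treated as unknown per designed experiment.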

  16. A joint-space numerical model of metabolic energy expenditure for human multibody dynamic system.

    PubMed

    Kim, Joo H; Roberts, Dustyn

    2015-09-01

    Metabolic energy expenditure (MEE) is a critical performance measure of human motion. In this study, a general joint-space numerical model of MEE is derived by integrating the laws of thermodynamics and principles of multibody system dynamics, which can evaluate MEE without the limitations inherent in experimental measurements (phase delays, steady state and task restrictions, and limited range of motion) or muscle-space models (complexities and indeterminacies from excessive DOFs, contacts and wrapping interactions, and reliance on in vitro parameters). Muscle energetic components are mapped to the joint space, in which the MEE model is formulated. A constrained multi-objective optimization algorithm is established to estimate the model parameters from experimental walking data also used for initial validation. The joint-space parameters estimated directly from active subjects provide reliable MEE estimates with a mean absolute error of 3.6 ± 3.6% relative to validation values, which can be used to evaluate MEE for complex non-periodic tasks that may not be experimentally verifiable. This model also enables real-time calculations of instantaneous MEE rate as a function of time for transient evaluations. Although experimental measurements may not be completely replaced by model evaluations, predicted quantities can be used as strong complements to increase reliability of the results and yield unique insights for various applications. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  18. Investigation of Laser Based Thomson Scattering

    DTIC Science & Technology

    2015-06-04

    laser liquid interaction has the potential to provide sources of energetic ions and fission products such as neutrons. The development of strong...by the production of heavy water d-d fusion and the production of neutrons. Finally, in section VII the tight focusing of light by a 2π mirror is...laser system is estimated to be 10^-15, using cross-polarization modulation and two plasma mirrors. These parameters allow prepulse expansion to be

  19. Estimation of the age-specific per-contact probability of Ebola virus transmission in Liberia using agent-based simulations

    NASA Astrophysics Data System (ADS)

    Siettos, Constantinos I.; Anastassopoulou, Cleo; Russo, Lucia; Grigoras, Christos; Mylonakis, Eleftherios

    2016-06-01

    Based on multiscale agent-based computations, we estimated the per-contact probability of transmission by age of the Ebola virus disease (EVD) that swept through Liberia from May 2014 to March 2015. For the approximation of the epidemic dynamics, we developed a detailed agent-based model with small-world interactions between individuals categorized by age. For the estimation of the structure of the evolving contact network, as well as the per-contact transmission probabilities by age group, we exploited the so-called Equation-Free framework. Model parameters were fitted to official case counts reported by the World Health Organization (WHO), as well as to recently published data on key epidemiological variables, such as the mean times to death and recovery and the case fatality rate.

  20. Solar radiation stress in climbing snails: behavioural and intrinsic features define the Hsp70 level in natural populations of Xeropicta derbentina (Pulmonata).

    PubMed

    Di Lellis, Maddalena A; Seifan, Merav; Troschinski, Sandra; Mazzia, Christophe; Capowiez, Yvan; Triebskorn, Rita; Köhler, Heinz-R

    2012-11-01

    Ectotherms from sunny and hot environments need to cope with solar radiation. Mediterranean land snails of the superfamily Helicoidea feature a behavioural strategy to escape from solar radiation-induced excessive soil heating by climbing up vertical objects. The height of climbing, as well as other parameters like shell colouration pattern, shell orientation, shell size, body mass, actual internal and shell surface temperature, and the interactions between those factors, may be expected to modulate proteotoxic effects in snails exposed to solar radiation and, thus, their stress response. Focussing on natural populations of Xeropicta derbentina, we conducted a 'snapshot' field study using the individual Hsp70 level as a proxy for proteotoxic stress. In addition to correlation analyses, an information-theoretic (IT) model selection approach based on Akaike's Information Criterion was applied to evaluate a set of models with respect to their explanatory power and to assess the relevance of each of the above-mentioned parameters for individual stress, by model averaging and parameter estimation. The analysis revealed the particular importance of the individuals' shell size, height above ground, shell colouration pattern and the interaction height × orientation. Our study showed that a distinct set of behavioural traits and intrinsic characters defines the Hsp70 level and that environmental factors and individual features strongly interact.

  1. Genetic potential of common bean progenies obtained by different breeding methods evaluated in various environments.

    PubMed

    Pontes Júnior, V A; Melo, P G S; Pereira, H S; Melo, L C

    2016-09-02

    Grain yield is strongly influenced by the environment, has polygenic and complex inheritance, and is a key trait in the selection and recommendation of cultivars. Breeding programs should efficiently explore the genetic variability resulting from crosses by selecting the most appropriate method for breeding in segregating populations. The goal of this study was to evaluate and compare the genetic potential of common bean progenies of carioca grain type for grain yield, obtained by different breeding methods and evaluated in different environments. Progenies originating from crosses between the lines CNFC 7812 and CNFC 7829 were advanced up to the F7 generation using three breeding methods in segregating populations: population (bulk), bulk within F2 progenies, and single-seed descent (SSD). Fifteen F8 progenies per method, two controls (BRS Estilo and Perola), and the parents were evaluated in a 7 x 7 simple lattice design, with plots of two 4-m rows. The tests were conducted in 10 environments in four states of Brazil and in three growing seasons in 2009 and 2010. Genetic parameters including genetic variance, heritability, variance of the interaction, and expected selection gain were estimated. Genetic variability among progenies and the effect of progeny-environment interactions were determined for the three methods. The breeding methods differed significantly due to the effects of sampling procedures on the progenies and due to natural selection, which mainly affected the bulk method. The SSD and bulk methods provided populations with better estimates of genetic parameters and more stable progenies that were less affected by interaction with the environment.

  2. Density-Dependent Formulation of Dispersion-Repulsion Interactions in Hybrid Multiscale Quantum/Molecular Mechanics (QM/MM) Models.

    PubMed

    Curutchet, Carles; Cupellini, Lorenzo; Kongsted, Jacob; Corni, Stefano; Frediani, Luca; Steindal, Arnfinn Hykkerud; Guido, Ciro A; Scalmani, Giovanni; Mennucci, Benedetta

    2018-03-13

    Mixed multiscale quantum/molecular mechanics (QM/MM) models are widely used to explore the structure, reactivity, and electronic properties of complex chemical systems. Whereas such models typically include electrostatics and potentially polarization in so-called electrostatic and polarizable embedding approaches, respectively, nonelectrostatic dispersion and repulsion interactions are instead commonly described through classical potentials despite their quantum mechanical origin. Here we present an extension of the Tkatchenko-Scheffler semiempirical van der Waals (vdW(TS)) scheme aimed at describing dispersion and repulsion interactions between quantum and classical regions within a QM/MM polarizable embedding framework. Starting from the vdW(TS) expression, we define a dispersion and a repulsion term, both of them density-dependent and consistently based on a Lennard-Jones-like potential. We explore transferable atom type-based parametrization strategies for the MM parameters, based either on vdW(TS) calculations performed on isolated fragments or on a direct estimation of the parameters from atomic polarizabilities taken from a polarizable force field. We investigate the performance of the implementation by computing self-consistent interaction energies for the S22 benchmark set, designed to represent typical noncovalent interactions in biological systems, in both equilibrium and out-of-equilibrium geometries. Overall, our results suggest that the present implementation is a promising strategy to include dispersion and repulsion in multiscale QM/MM models incorporating their explicit dependence on the electronic density.

  3. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

    Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of each parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
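
    The acceptance rule described above (accept a proposal with probability given by the ratio to the current state) is the Metropolis algorithm. A minimal sketch, assuming a simplified two-parameter road-load equation and synthetic logged data — the model form, parameter values, and noise level are illustrative, not the report's actual implementation:

```python
import random, math

random.seed(0)
G, RHO = 9.81, 1.2                     # gravity, air density
CDA_TRUE, M_TRUE, CRR_TRUE = 5.0, 15000.0, 0.007

def road_load(v, a, m, crr, cda=CDA_TRUE):
    # Simplified road load: inertia + aerodynamic drag + rolling resistance
    return m * a + 0.5 * RHO * cda * v * v + crr * m * G

# Synthetic "logged" data: speed, acceleration, noisy measured load
data = []
for _ in range(200):
    v, a = random.uniform(5, 25), random.uniform(-1, 1)
    f = road_load(v, a, M_TRUE, CRR_TRUE) + random.gauss(0, 200.0)
    data.append((v, a, f))

def log_prob(m, crr, sigma=200.0):
    # Gaussian log-likelihood of the logged loads given a parameter set
    return sum(-((f - road_load(v, a, m, crr)) / sigma) ** 2 / 2
               for v, a, f in data)

# Metropolis sampling: accept proposals by probability ratio to current state
m, crr = 10000.0, 0.01                 # deliberately poor starting guess
lp = log_prob(m, crr)
chain = []
for _ in range(5000):
    m_p = m + random.gauss(0, 100.0)
    crr_p = abs(crr + random.gauss(0, 0.0005))
    lp_p = log_prob(m_p, crr_p)
    if math.log(random.random()) < lp_p - lp:
        m, crr, lp = m_p, crr_p, lp_p
    chain.append((m, crr))

post = chain[2500:]                    # discard burn-in
m_est = sum(p[0] for p in post) / len(post)
print(m_est)
```

    The retained chain approximates the posterior, so spread in `post` directly quantifies the estimate uncertainty the abstract refers to.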

  4. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimates of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R² = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m²), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and higher height threshold were required to obtain accurate corn LAI estimates when compared with height and biomass estimates. In general, our results provide valuable guidance for LiDAR data acquisition and the estimation of vegetation biophysical parameters using LiDAR data.

  5. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1994-01-01

    A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
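
    The linearized chi-square minimization described above can be sketched as a Gauss-Newton iteration: retain only the first-order term of the Taylor expansion, form the n simultaneous linear (normal) equations, and solve them by matrix algebra at each step. A minimal Python sketch with a hypothetical two-parameter exponential fitting function (NLINEAR itself is Fortran 77; this only illustrates the algorithm):

```python
import math

def model(x, p):
    # Example user-specified fitting function: y = p[0] * exp(p[1] * x)
    return p[0] * math.exp(p[1] * x)

def jacobian(x, p):
    # Partial derivatives of the model w.r.t. each parameter
    e = math.exp(p[1] * x)
    return (e, p[0] * x * e)

def gauss_newton(xs, ys, p, iters=20):
    # Each iteration: build the 2x2 normal equations from the linearized
    # chi-square and solve them by Cramer's rule for the parameter update
    for _ in range(iters):
        a11 = a12 = a22 = b1 = b2 = 0.0
        for x, y in zip(xs, ys):
            r = y - model(x, p)
            j1, j2 = jacobian(x, p)
            a11 += j1 * j1; a12 += j1 * j2; a22 += j2 * j2
            b1 += j1 * r;  b2 += j2 * r
        det = a11 * a22 - a12 * a12
        d1 = (b1 * a22 - b2 * a12) / det
        d2 = (a11 * b2 - a12 * b1) / det
        p = [p[0] + d1, p[1] + d2]
    return p

xs = [0.1 * k for k in range(10)]
ys = [2.0 * math.exp(-1.3 * x) for x in xs]   # synthetic data, true p = (2.0, -1.3)
p_fit = gauss_newton(xs, ys, [1.0, -1.0])     # meaningful initial estimates
print(p_fit)
```

    As the abstract notes, convergence hinges on meaningful initial estimates; a poor start can make the full Gauss-Newton step diverge.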

  6. Point-to-point migration functions and gravity model renormalization: approaches to aggregation in spatial interaction modeling.

    PubMed

    Slater, P B

    1985-08-01

    Two distinct approaches to assessing the effect of geographic scale on spatial interactions are modeled. In the first, the question of whether a distance deterrence function, which explains interactions for one system of zones, can also succeed on a more aggregate scale is examined. Only a two-parameter function, for which distances between macrozones are found to be weighted averages of distances between component zones, is satisfactory in this regard. Estimation of continuous (point-to-point) functions — in the form of quadrivariate cubic polynomials — for US interstate migration streams is then undertaken. Upon numerical integration, these higher-order surfaces yield predictions of interzonal and intrazonal movements at any scale of interest. Tests of spatial stationarity, isotropy, and symmetry of interstate migration are conducted in this framework.

  7. Tidal interactions in the expanding universe - The formation of prolate systems

    NASA Technical Reports Server (NTRS)

    Binney, J.; Silk, J.

    1979-01-01

    The study estimates the magnitude of the anisotropy that can be tidally induced in neighboring initially spherical protostructures, be they protogalaxies, protoclusters, or even uncollapsed density enhancements in the large-scale structure of the universe. It is shown that the linear analysis of tidal interactions developed by Peebles (1969) predicts that the anisotropy energy of a perturbation grows to first order in a small dimensionless parameter, whereas the net angular momentum acquired is of second order. A simple model is presented for the growth of anisotropy by tidal interactions during the nonlinear stage of the development of perturbations. A possible observational test is described of the alignment predicted by the model between the orientations of large-scale perturbations and the positions of neighboring density enhancements.

  8. A Novel Analytical Solution for Estimating Aquifer Properties and Predicting Stream Depletion Rates by Pumping from a Horizontally Anisotropic Aquifer

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Zhan, H.; Knappett, P.

    2017-12-01

    Past studies modeling stream-aquifer interactions commonly account for vertical anisotropy, but rarely address horizontal anisotropy, which does exist in certain geological settings. Horizontal anisotropy is impacted by sediment deposition rates, the orientation of sediment particles, and the orientations of fractures, etc. We hypothesize that horizontal anisotropy controls the volume of recharge a pumped aquifer captures from the river. To test this hypothesis, a new mathematical model was developed to describe the distribution of drawdown from stream-bank pumping with a well screened across a horizontally anisotropic, confined aquifer, laterally bounded by a river. This new model was used to determine four aquifer parameters, including the magnitudes and directions of the major and minor principal transmissivities and the storativity, based on observed drawdown-time curves from a minimum of three non-collinear observation wells. By comparing the aquifer parameter values estimated from drawdown data generated with known values, the discrepancies of the major and minor transmissivities, horizontal anisotropy ratio, storativity and the direction of major transmissivity were 13.1, 8.8, 4, 0 and <1 percent, respectively. These discrepancies are well within acceptable ranges of uncertainty for aquifer parameter estimation, when compared with other pumping test interpretation methods, which typically estimate uncertainties of 20 or 30 percent for the estimated parameters. Finally, the stream depletion rate was calculated as a function of stream-bank pumping. Uniquely for a horizontally anisotropic aquifer, the stream depletion rate at any given pumping rate depends on the horizontal anisotropy ratio and the direction of the principal transmissivity. For example, when the horizontal anisotropy ratios are 5 and 50, the corresponding depletion rates under pseudo-steady-state conditions are 86 m³/day and 91 m³/day, respectively.
The results of this research fill a knowledge gap on predicting the response of horizontally anisotropic aquifers connected to streams. We further provide a method to estimate aquifer properties and predict stream depletion rates from observed drawdown. This new model can be used by water resources managers to exploit groundwater resources responsibly while protecting stream ecosystems.

  9. Mixed Effects Modeling Using Stochastic Differential Equations: Illustrated by Pharmacokinetic Data of Nicotinic Acid in Obese Zucker Rats.

    PubMed

    Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats

    2015-05-01

    Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.

  10. Information-theoretical noninvasive damage detection in bridge structures

    NASA Astrophysics Data System (ADS)

    Sudu Ambegedara, Amila; Sun, Jie; Janoyan, Kerop; Bollt, Erik

    2016-11-01

    Damage detection in mechanical structures such as bridges is an important research problem in civil engineering. Using spatially distributed sensor time series data collected from a recent experiment on a local bridge in Upstate New York, we study noninvasive damage detection using information-theoretical methods. Several findings are in order. First, the time series data, which represent accelerations measured at the sensors, more closely follow a Laplace distribution than a normal distribution, allowing us to develop parameter estimators for various information-theoretic measures such as entropy and mutual information. Second, as damage is introduced by the removal of bolts of the first diaphragm connection, the interaction between spatially nearby sensors as measured by mutual information becomes weaker, suggesting that the bridge is "loosened." Finally, using a proposed optimal mutual information interaction procedure to prune away indirect interactions, we found that the primary direction of interaction or influence aligns with the traffic direction on the bridge even after the bridge is damaged.
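
    The Laplace distribution is what makes the parameter-based estimators tractable: its maximum-likelihood parameters and its differential entropy have simple closed forms. A minimal sketch (synthetic data standing in for the acceleration series; the sample size and scale are illustrative):

```python
import math, random

def laplace_mle(xs):
    # Maximum-likelihood estimates for a Laplace distribution:
    # location = sample median, scale = mean absolute deviation from it
    s = sorted(xs)
    n = len(s)
    mu = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    b = sum(abs(x - mu) for x in xs) / n
    return mu, b

def laplace_entropy(b):
    # Differential entropy of Laplace(mu, b) in nats: 1 + ln(2b)
    return 1.0 + math.log(2.0 * b)

random.seed(1)
# Synthetic "sensor" data drawn from a Laplace(0, 2) distribution
sample = [random.choice((-1, 1)) * random.expovariate(1 / 2.0)
          for _ in range(20000)]
mu_hat, b_hat = laplace_mle(sample)
print(mu_hat, b_hat, laplace_entropy(b_hat))
```

    Plugging fitted scales into the closed-form entropy avoids histogram binning, which is the practical advantage of the parametric route taken in the paper.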

  11. Ultraviolet absorption spectrum of the half-filled bilayer graphene

    NASA Astrophysics Data System (ADS)

    Apinyan, V.; Kopeć, T. K.

    2018-07-01

    We consider the optical properties of half-filled AB-stacked bilayer graphene with excitonic pairing and condensation between the layers. Both intralayer and interlayer local Coulomb interaction effects have been taken into account, and the role of the exact Fermi energy is discussed in detail. We have calculated the absorption coefficient, refractive index, dielectric response functions and electron energy loss spectrum for different interlayer Coulomb interaction regimes and for different temperatures. Considering the full four-band model for the interacting AB bilayer graphene, good agreement is achieved with other theoretical and experimental works on the subject, in particular in the limiting cases of the theory. The calculations presented here permit an accurate estimate of the effects of excitonic pairing and condensation on the optical properties of bilayer graphene. The modifications of the plasmon excitation spectrum are discussed in detail over a very large interval of the interlayer interaction parameter.

  12. Transient high frequency signal estimation: A model-based processing approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, F.L.

    1985-03-22

    By utilizing the superposition property of linear systems, a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that the reflections are removed by direct application of a Wiener-type estimation algorithm after the appropriate input is synthesized. The structure of the nondispersive signal model is well documented, and thus its credence is established. The model is stated, and most effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here, since a fairly good reflection model is available. The technique essentially consists of calculating the ratio of the autocorrelation function at lag zero to its value at the lag where the incident signal and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple applications of the processing procedure were required when we applied the reflection-removal technique to a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.
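
    The autocorrelation-ratio step can be sketched as follows. Assuming the measurement is an incident pulse plus one delayed, scaled reflection that does not overlap the pulse, the ratio R(lag)/R(0) equals w/(1 + w²), which can be inverted for the reflection weight w. All signals below are synthetic, and this is a sketch of the idea only, not the SIG package:

```python
import math

def autocorr(y, lag):
    # Biased sample autocorrelation at a given lag
    return sum(y[i] * y[i + lag] for i in range(len(y) - lag)) / len(y)

def reflection_weight(y, lag):
    # If y(t) = s(t) + w*s(t - lag) with a short pulse s, then
    # R(lag)/R(0) = w / (1 + w**2); invert for the weight w (|w| < 1)
    r = autocorr(y, lag) / autocorr(y, 0)
    return (1.0 - math.sqrt(1.0 - 4.0 * r * r)) / (2.0 * r)

# Synthetic pulse plus one delayed, scaled reflection
n, lag, w_true = 400, 100, 0.6
pulse = [math.exp(-0.05 * k) * math.sin(0.8 * k) for k in range(40)]
pulse += [0.0] * (n - 40)
y = [pulse[i] + (w_true * pulse[i - lag] if i >= lag else 0.0)
     for i in range(n)]
print(reflection_weight(y, lag))
```

    With the weight in hand, the synthesized input for the Wiener-type estimator is the measurement minus the weighted, delayed copy.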

  13. The estimation of quantitative parameters of oligonucleotides immobilization on mica surface

    NASA Astrophysics Data System (ADS)

    Sharipov, T. I.; Bakhtizin, R. Z.

    2017-05-01

    Immobilization of nucleic acids on the surfaces of various materials is increasingly being used in research and in some practical applications. Currently, DNA chip technology is developing rapidly. The immobilization process can be based on either physical adsorption or chemisorption. A useful way to monitor the immobilization of nucleic acids on a surface is atomic force microscopy, which allows the surface topography to be investigated by direct imaging with high resolution. Usually, cations are used to fix DNA on the mica surface; they mediate the interaction between the mica surface and the DNA molecules. In our work, we have developed a method for estimating a quantitative parameter of oligonucleotide immobilization, namely the degree of aggregation, as a function of the fixation conditions on the mica surface. Results on the aggregation of oligonucleotides immobilized on the mica surface are presented. Single oligonucleotide molecules have been imaged clearly, their surface areas have been calculated, and a calibration curve has been plotted.

  14. Quantum preservation of the measurements precision using ultra-short strong pulses in exact analytical solution

    NASA Astrophysics Data System (ADS)

    Berrada, K.; Eleuch, H.

    2017-09-01

    Various schemes have been proposed to improve parameter-estimation precision. In the present work, we suggest an alternative method to preserve estimation precision by considering a model that closely describes a realistic experimental scenario. We explore this active way to control and enhance measurement precision for a two-level quantum system interacting with a classical electromagnetic field of ultra-short strong pulses, using an exact analytical solution, i.e., beyond the rotating-wave approximation. In particular, we investigate the variation of the precision with a few-cycle pulse and a smooth phase jump over a finite time interval. We show that, by acting on the shape of the phase transient and other parameters of the considered system, the amount of information may be increased and decays more slowly at long times. These features make two-level systems subjected to ultra-short, off-resonant pulses with gradually changing phase good candidates for implementing schemes for quantum computation and coherent information processing.

  15. An approach to and web-based tool for infectious disease outbreak intervention analysis

    NASA Astrophysics Data System (ADS)

    Daughton, Ashlynn R.; Generous, Nicholas; Priedhorsky, Reid; Deshpande, Alina

    2017-04-01

    Infectious diseases are a leading cause of death globally. Decisions surrounding how to control an infectious disease outbreak currently rely on a subjective process involving surveillance and expert opinion. However, there are many situations where neither may be available. Modeling can fill gaps in the decision-making process by using available data to provide quantitative estimates of outbreak trajectories. Effective reduction of the spread of infectious diseases can be achieved through collaboration between the modeling community and the public health policy community. However, such collaboration is rare, resulting in a lack of models that meet the needs of the public health community. Here we present a Susceptible-Infectious-Recovered (SIR) model modified to include control measures, which accepts parameter ranges rather than point estimates and includes a web user interface for broad adoption. We apply the model to three diseases, measles, norovirus, and influenza, to show the feasibility of its use, and describe a research agenda to further promote interactions between decision makers and the modeling community.
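
    A minimal discrete-time sketch of an SIR model with a control measure (illustrative parameter values, not the authors' web tool; the control is modeled here as a fractional reduction of the transmission rate beta after a start day):

```python
def sir(beta=0.3, gamma=0.1, control=0.5, t_start=30,
        days=200, n=100000, i0=10):
    # Forward-Euler SIR with a control measure that cuts beta by the
    # fraction `control` from day `t_start` onward
    s, i, r = n - i0, i0, 0.0
    peak = i
    for day in range(days):
        b = beta * (1 - control) if day >= t_start else beta
        new_inf = b * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, r        # peak infections, cumulative recovered

peak_ctrl, total_ctrl = sir(control=0.5)
peak_none, total_none = sir(control=0.0)
print(peak_ctrl, peak_none)
```

    Running the model over a range of (beta, gamma, control) values, rather than single point estimates, yields the envelope of outbreak trajectories the abstract describes.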

  16. Influence of Plasma Environment on K-Line Emission in Highly Ionized Iron Atoms Evaluated Using a Debye-Huckel Model

    NASA Technical Reports Server (NTRS)

    Deprince, J.; Fritzsche, S.; Kallman, T. R.; Palmeri, P.; Quinet, P.

    2017-01-01

    The influence of the plasma environment on the atomic parameters associated with K-vacancy states has been investigated theoretically for several iron ions. To do this, a time-averaged Debye-Huckel potential for both the electron-nucleus and electron-electron interactions has been considered in the framework of relativistic multiconfiguration Dirac-Fock computations. In particular, the plasma screening effects on ionization potentials, K-thresholds, transition energies, and radiative rates have been estimated in the astrophysical context of accretion disks around black holes. In the present paper, we describe the behavior of these atomic parameters for Ne-, Na-, Ar-, and K-like iron ions.

  17. Determination of recent horizontal crustal movements and deformations of African and Eurasian plates in western Mediterranean region using geodetic-GPS computations extended to 2006 (from 1997) related to NAFREF and AFREF frames.

    NASA Astrophysics Data System (ADS)

    Azzouzi, R.

    2009-04-01

    Determination of recent horizontal crustal movements and deformations of the African and Eurasian plates in the western Mediterranean region using geodetic GPS computations extended to 2006 (from 1997), related to the NAFREF and AFREF frames. By R. Azzouzi*, M. Ettarid*, El H. Semlali* and A. Rimi+ (*Filière de Formation en Topographie, Institut Agronomique et Vétérinaire Hassan II, B.P. 6202 Rabat-Instituts, Morocco; +Département de la Physique du Globe, Université Mohammed V, Rabat, Morocco). This study focuses on the use of the geodetic spatial technique GPS for geodynamic purposes, generally in the western Mediterranean area and particularly in Morocco. It aims to exploit this technique first to determine geodetic coordinates at selected western Mediterranean sites, and then to detect and quantify movements across the boundary between the African and Eurasian crustal plates at well-chosen GPS geodynamics sites. It also allows us to estimate the resulting crustal strain parameters. These parameters are linked to deformations of the terrestrial crust in the region and are associated with the tectonic stresses of the study area. The usefulness of repeated measurements of these elements, the estimation of displacements, and the determination of their temporal rates is indisputable. Indeed, seismo-tectonic studies allow a good knowledge of earthquake processes, their frequency, their amplitude, and even their prediction, in the world in general and in the Moroccan area especially. They also contribute to guaranteeing greater safety for major engineering projects, such as large structures (dams, bridges, nuclear power plants), and serve as a preliminary study for the most important joint project between Europe and Africa, through the Strait of Gibraltar. For our application, 23 GPS monitoring stations in the ITRF2000 reference frame were chosen on the Eurasian and African plates.
The sites are located around the Western Mediterranean and especially on Morocco. Exploiting parameters of positions and dispersions of these stations within the 1997-2003 period, the motion and the interaction types of interaction between African and Eurasian tectonic plates can be estimated. Similarly, the crustal dynamic parameters of tension of these sites will be computed. The time occupation on repeated observations sites is at least 72 hours. The measurements are continuous on permanent stations. The precise ephemerides are used in GPS computations. The post-treatments are done using commercial and scientific softwares. The coordinates obtained for two consecutive periods to and t within a period of 8 years will be used by programs established for this purpose to estimate crustal dynamic parameters of tension as well as to evaluate the appropriate movements. Even crustal dynamic parameters will be determined on each sites of the GPS-Geodynamics network, whose interest of seismic investigations is very important. This will allow best knowledge of substantial seismic activities of the surrounding zones. It can be deduced by measuring the motions and their parameter tensions using GPS. These estimations will contribute on the earthquake prediction by supervising the strain accumulation and its release in the active areas. For the geodetically aspect the GPS-Geodynamics sites computed in the ITRF frame can be used with other similar ounces' of Africa country and some well selected and convenient IGS, EUREF stations..to determine first the NAFREF and the AFRER frames.
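At its simplest, the displacement-rate estimation described here reduces, per site, to differencing coordinates over the observation span. A minimal sketch with invented numbers (not the study's software):

```python
# Illustrative sketch: horizontal site velocity from planar coordinates
# observed at two epochs, as when repeated GPS campaigns are differenced.

def site_velocity(e1_m, n1_m, e2_m, n2_m, dt_years):
    """Return east/north velocity components (mm/yr) and horizontal speed
    from planar coordinates (metres) at two epochs dt_years apart."""
    ve = (e2_m - e1_m) / dt_years * 1000.0  # mm/yr, east
    vn = (n2_m - n1_m) / dt_years * 1000.0  # mm/yr, north
    speed = (ve ** 2 + vn ** 2) ** 0.5
    return ve, vn, speed

# A station displaced 48 mm east and -36 mm north over 8 years:
ve, vn, speed = site_velocity(0.0, 0.0, 0.048, -0.036, 8.0)
```

The strain parameters discussed in the abstract would then follow from the spatial gradients of such velocities across the network, not from any single site.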

  18. Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets

    NASA Technical Reports Server (NTRS)

    Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.

    1978-01-01

    A parameter-estimation method is described for verifying the mathematical model of mixed (combined interactive components from various engineering fields) dynamic systems against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to adjust automatically the key parameters of a model so as to minimize response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.
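The parameter-adjustment loop can be illustrated on a toy one-degree-of-freedom system (values invented, not the POGO model): the "measured" response is a natural frequency, and a Newton iteration drives the predicted response onto it by adjusting a stiffness parameter.

```python
# Toy sketch of response-matching parameter estimation: adjust a model
# parameter (stiffness k) until predicted and measured responses agree.
import math

def predicted_freq_hz(k, m=2.0):
    """Natural frequency (Hz) of an undamped mass-spring model."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def fit_stiffness(measured_hz, k0=1.0, iters=50):
    """Newton iteration on the residual r(k) = predicted - measured,
    with a finite-difference derivative of the model response."""
    k = k0
    for _ in range(iters):
        r = predicted_freq_hz(k) - measured_hz
        h = 1e-6 * max(abs(k), 1.0)
        dr = (predicted_freq_hz(k + h) - predicted_freq_hz(k)) / h
        k -= r / dr
    return k

k_true = 8.0
k_est = fit_stiffness(predicted_freq_hz(k_true))  # "measurement" from true model
```

The paper's method does the multivariate analogue of this loop on first-order system matrices, with many parameters adjusted simultaneously.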

  19. Overcoming the sign problem at finite temperature: Quantum tensor network for the orbital eg model on an infinite square lattice

    NASA Astrophysics Data System (ADS)

    Czarnik, Piotr; Dziarmaga, Jacek; Oleś, Andrzej M.

    2017-07-01

    The variational tensor network renormalization approach to two-dimensional (2D) quantum systems at finite temperature is applied to a model suffering from the notorious quantum Monte Carlo sign problem: the orbital eg model with spatially highly anisotropic orbital interactions. Coarse graining of the tensor network along the inverse temperature β yields a numerically tractable 2D tensor network representing the Gibbs state. Its bond dimension D, which limits the amount of entanglement, is a natural refinement parameter. Increasing D, we obtain a converged order parameter and its linear susceptibility close to the critical point. They confirm the existence of a finite order parameter below the critical temperature Tc, provide a numerically exact estimate of Tc, and give the critical exponents within 1% of the 2D Ising universality class.

  20. Role of hybridization in the superconducting properties of an extended d p Hubbard model: a detailed numerical study

    NASA Astrophysics Data System (ADS)

    Calegari, E. J.; Magalhães, S. G.; Gomes, A. A.

    2005-04-01

    Roth's two-pole approximation has been used by the present authors to study the effects of hybridization on the superconducting properties of a strongly correlated electron system. The model used is the extended Hubbard model, which includes the d-p hybridization, the p-band, and a narrow d-band. The present work is an extension of our previous work (J. Mod. Phys. B 18(2) (2004) 241). Here, some important correlation functions necessary to estimate Roth's band shift are included, together with the temperature T and the Coulomb interaction U, to describe the superconductivity. The superconducting order parameter of a cuprate system is obtained following the formalism of Beenen and Edwards. We investigate in detail the dependence of the order parameter on temperature, Coulomb interaction, and Roth's band-shift effects on superconductivity. The phase diagram of Tc versus the total occupation number nT shows the differences with respect to the previous work.

  1. Effective intermolecular potential and critical point for C60 molecule

    NASA Astrophysics Data System (ADS)

    Ramos, J. Eloy

    2017-07-01

    The approximate nonconformal (ANC) theory is applied to the C60 molecule. A new binary potential function is developed for C60; it has only three parameters and is obtained by averaging the site-site carbon interactions over the surfaces of two C60 molecules. It is shown that the C60 molecule follows, to a good approximation, the corresponding-states principle with n-C8H18, n-C4F10 and n-C5F12. The critical point of C60 is estimated in two ways: first by applying the corresponding-states principle within the framework of the ANC theory, and then by using previous computer simulations. The critical parameters obtained by applying the corresponding-states principle, although very different from those reported in the literature, are consistent with the previous results of the ANC theory. It is shown that the Girifalco potential does not correspond to an average of the site-site carbon-carbon interaction.

  2. Can biophysical properties of submersed macrophytes be determined by remote sensing?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malthus, T.J.; Ciraolo, G.; La Loggia, G.

    1997-06-01

    This paper details the development of a computationally efficient Monte Carlo simulation program to model photon transport through submersed plant canopies, with emphasis on Seagrass communities. The model incorporates three components: the transmission of photons through a water column of varying depth and turbidity; the interaction of photons within a submersed plant canopy of varying biomass; and interactions with the bottom substrate. The three components of the model are discussed. Simulations were performed based on measured parameters for Posidonia oceanica and compared to measured subsurface reflectance spectra made over comparable seagrass communities in Sicilian coastal waters. It is shown that the output is realistic. Further simulations are undertaken to investigate the effect of depth and turbidity of the overlying water column. Both sets of results indicate the rapid loss of canopy signal as depth increases and water column phytoplankton concentrations increase. The implications for the development of algorithms for the estimation of submersed canopy biophysical parameters are briefly discussed.
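The three-component structure can be caricatured in a few lines of Monte Carlo (invented optical parameters, semi-analytic two-way attenuation): photons traverse the water column, then meet either canopy or bare substrate.

```python
# Toy Monte Carlo sketch of the model's structure, not the paper's program:
# Beer-Lambert two-way attenuation through water, then reflection by canopy
# (with probability canopy_cover) or by the bottom substrate.
import math
import random

def subsurface_reflectance(depth_m, k_water, canopy_cover, r_canopy, r_bottom,
                           n_photons=20000, seed=1):
    """Mean returned fraction per photon for a homogeneous water column."""
    random.seed(seed)
    survive = math.exp(-2.0 * k_water * depth_m)   # two-way survival fraction
    total = 0.0
    for _ in range(n_photons):
        hits_canopy = random.random() < canopy_cover
        total += survive * (r_canopy if hits_canopy else r_bottom)
    return total / n_photons

shallow = subsurface_reflectance(1.0, 0.2, 0.8, 0.05, 0.30)
deep = subsurface_reflectance(5.0, 0.2, 0.8, 0.05, 0.30)
```

The rapid loss of canopy signal with depth reported in the abstract is visible even in this caricature: the returned signal scales as exp(-2 k d).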

  3. On the interatomic potentials for noble gas mixtures

    NASA Astrophysics Data System (ADS)

    Watanabe, Kyoko; Allnatt, A. R.; Meath, William J.

    1982-07-01

    Recently, a relatively simple scheme for the construction of isotropic intermolecular potentials was proposed and tested for the like-species interactions involving He, Ne, Ar, Kr, and H2. The model potential has an adjustable parameter which controls the balance between its exchange and Coulomb energy components. The representation of the Coulomb energy contains a damped multipolar dispersion energy series (truncated at O(R^-10)) and provides additional flexibility through adjustment of the dispersion energy coefficients, particularly C8 and C10, within conservative error estimates. In this paper the scheme is tested further by application to interactions involving unlike noble gas atoms, where the parameters in the potential model are determined by fitting mixed second virial coefficient data as a function of temperature. Generally, the approach leads to potentials of accuracy comparable to the best available literature potentials, which are usually determined using a large base of experimental and theoretical input data. Our results also strongly indicate the need for high-quality virial data.

  4. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
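The hybrid idea can be sketched on a toy objective (a quartic with two minima, not a groundwater model); a plain random population stands in for the GA, and Newton's method on the gradient stands in for the truncated-Newton stage.

```python
# Hybrid global/local optimization sketch: a population-based global stage
# seeds a derivative-based local refinement. Objective and values invented.
import random

def objective(x):
    return x**4 - 3.0 * x**2 + x        # toy multimodal function

def hybrid_minimize(lo=-3.0, hi=3.0, pop=40, iters=30, seed=7):
    random.seed(seed)
    # Global stage: best member of a random population picks the basin.
    x = min((random.uniform(lo, hi) for _ in range(pop)), key=objective)
    # Local stage: Newton's method on the (analytic) gradient refines it.
    for _ in range(iters):
        grad = 4.0 * x**3 - 6.0 * x + 1.0
        hess = 12.0 * x**2 - 6.0
        x -= grad / hess
    return x

x_min = hybrid_minimize()               # global minimum is near x = -1.30
```

The division of labour mirrors the paper: the global stage supplies good initial values, without which the local Newton stage alone could settle in the wrong basin.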

  5. Fundamental frequency estimation of singing voice

    NASA Astrophysics Data System (ADS)

    de Cheveigné, Alain; Henrich, Nathalie

    2002-05-01

    A method of fundamental frequency (F0) estimation recently developed for speech [de Cheveigné and Kawahara, J. Acoust. Soc. Am. (to be published)] was applied to singing voice. An electroglottograph signal recorded together with the microphone signal provided a reference by which estimates could be validated. Using standard parameter settings as for speech, error rates were low despite the wide range of F0s (about 100 to 1600 Hz). Most ``errors'' were due to irregular vibration of the vocal folds, a sharp formant resonance that reduced the waveform to a single harmonic, or fast F0 changes such as in high-amplitude vibrato. Our database (18 singers from baritone to soprano) included examples of diphonic singing, for which melody is carried by variations of the frequency of a narrow formant rather than of F0. By varying one parameter (the ratio of inharmonic to total power), the algorithm could be tuned to follow either frequency. Although the method has not been formally tested on a wide range of instruments, it seems appropriate for musical applications because it is accurate, accepts a wide range of F0s, and can be implemented with low latency for interactive applications. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]
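The core of difference-function F0 estimators in this family can be sketched in a few lines: the lag that minimizes the squared difference between the signal and a delayed copy of itself estimates the period. Real implementations such as YIN add cumulative-mean normalization, thresholding, and interpolation, all omitted here.

```python
# Minimal difference-function period estimator on a synthetic tone.
import math

def estimate_period(signal, min_lag, max_lag):
    """Lag minimizing the summed squared difference between the signal
    and its delayed copy (the heart of YIN-style F0 estimators)."""
    n = len(signal)
    best_lag, best_cost = min_lag, float("inf")
    for lag in range(min_lag, max_lag + 1):
        cost = sum((signal[i] - signal[i + lag]) ** 2 for i in range(n - max_lag))
        if cost < best_cost:
            best_lag, best_cost = lag, cost
    return best_lag

fs, f0 = 8000.0, 200.0                     # sample rate and true F0 (Hz)
x = [math.sin(2.0 * math.pi * f0 * i / fs) for i in range(800)]
lag = estimate_period(x, 25, 60)           # search roughly 133-320 Hz
f0_est = fs / lag
```

Restricting the lag search range, as above, is one crude way to avoid the octave errors that normalization handles in the full method.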

  6. Power of tests for comparing trend curves with application to national immunization survey (NIS).

    PubMed

    Zhao, Zhen

    2011-02-28

    Three statistical tests were developed for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and their statistical power was compared under different trend-curve data. For large sample sizes, with independence assumed among strata and across consecutive time points under normality, Z and Chi-square test statistics were developed; these are functions of the outcome estimates and their standard errors at each of the study time points for the two strata. For small sample sizes under the same assumptions, an F-test statistic was derived as a function of the sample sizes of the two strata and the parameters estimated across the study period. If the two trend curves are approximately parallel, the power of the Z-test is consistently higher than that of both the Chi-square and F-tests. If the two curves cross with low interaction, the power of the Z-test is higher than or equal to that of the Chi-square and F-tests; at high interaction, however, the Chi-square and F-tests are more powerful than the Z-test. A measure of the interaction of two trend curves was defined. These tests were applied to comparing trend curves of vaccination coverage estimates for standard vaccine series using National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
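The paper defines its statistics in terms of the outcome estimates and standard errors at each time point; the pooled form below is one plausible construction of such a Z statistic, not necessarily the exact one used, and the coverage numbers are invented.

```python
# Hedged sketch of a Z-type comparison of two trend curves.
import math

def trend_z(est_a, se_a, est_b, se_b):
    """Pooled Z over time points, assuming independent normal estimates
    across strata and across time points."""
    diff = sum(a - b for a, b in zip(est_a, est_b))
    var = sum(sa ** 2 + sb ** 2 for sa, sb in zip(se_a, se_b))
    return diff / math.sqrt(var)

# Coverage estimates (%) for two strata over four survey years:
z = trend_z([80.0, 82.0, 84.0, 86.0], [1.0] * 4,
            [78.0, 79.0, 81.0, 83.0], [1.0] * 4)
```

For roughly parallel curves like these, the per-point differences accumulate in the numerator, which is why a pooled Z is powerful in exactly the regime the abstract describes.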

  7. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is then to select the model tuning parameters optimally, so as to minimize the Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter.
This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
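The estimation setting can be illustrated with a one-parameter toy (invented values, not the engine model): a scalar Kalman filter tracking a single health-parameter shift from noisy sensor reads. The real tuner-selection problem begins where this sketch ends, when there are more such parameters than sensors.

```python
# Scalar Kalman filter tracking a nearly constant "health parameter".
import random

def kalman_track(measurements, q=1e-4, r=0.04):
    """q: process-noise variance (slow drift); r: measurement-noise variance."""
    x, p = 0.0, 1.0            # state estimate and its error variance
    for z in measurements:
        p += q                 # predict: parameter assumed nearly constant
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the measurement residual
        p *= 1.0 - k
    return x

random.seed(3)
true_delta = -0.05             # simulated 5% efficiency shift
reads = [true_delta + random.gauss(0.0, 0.2) for _ in range(500)]
estimate = kalman_track(reads)
```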

  8. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…

  9. Toward On-line Parameter Estimation of Concentric Tube Robots Using a Mechanics-based Kinematic Model

    PubMed Central

    Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo

    2017-01-01

    Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554

  10. Influence of Miscibility Phenomenon on Crystalline Polymorph Transition in Poly(Vinylidene Fluoride)/Acrylic Rubber/Clay Nanocomposite Hybrid

    PubMed Central

    Abolhasani, Mohammad Mahdi; Naebe, Minoo; Jalali-Arani, Azam; Guo, Qipeng

    2014-01-01

    In this paper, intercalation of nanoclay in the miscible polymer blend of poly(vinylidene fluoride) (PVDF) and acrylic rubber (ACM) was studied. X-ray diffraction was used to investigate the formation of the nanoscale polymer blend/clay hybrid. Infrared spectroscopy and X-ray analysis revealed the coexistence of β and γ crystalline forms in the PVDF/Clay nanocomposite, while the α crystalline form was found to be dominant in the PVDF/ACM/Clay miscible hybrids. The Flory-Huggins interaction parameter (B) was used to further explain the observed miscibility. The B parameter was determined by combining the melting-point depression and the binary interaction model. The estimated B values for the ternary PVDF/ACM/Clay system and the PVDF/ACM pair were all negative, showing both proper intercalation of the polymer melt into the nanoclay galleries and the good miscibility of the PVDF/ACM blend. The B value for the PVDF/ACM blend was almost the same as that measured for the PVDF/ACM/Clay hybrid, suggesting that PVDF chains in the nanocomposite hybrids interact with ACM chains and that the nanoclay in the hybrid systems is wrapped by ACM molecules. PMID:24551141

  11. Thermal affected zone obtained in machining steel XC42 by high-power continuous CO2 laser

    NASA Astrophysics Data System (ADS)

    Jebbari, Neila; Jebari, Mohamed Mondher; Saadallah, Faycal; Tarrats-Saugnac, Annie; Bennaceur, Raouf; Longuemard, Jean Paul

    2008-09-01

    A high-power continuous CO2 laser (4 kW) can provide energy capable of causing melting or even, with a special treatment of the surface, vaporization of an XC42-steel sample. The laser-metal interaction causes an energetic machining mechanism, which takes place according to the assumption that the melting front precedes the laser beam, such that the laser beam interacts with a preheated surface whose temperature is near the melting point. The proposed model, obtained from the energy balance during the interaction time, concerns the case of machining with an inert gas jet and permits the calculation of the characteristic parameters of the groove according to the characteristic laser parameters (absorbed laser energy and impact diameter of the laser beam) and allows the estimation of the quantity of the energy causing the thermal affected zone (TAZ). This energy is equivalent to the heat quantity that must be injected in the heat propagation equation. In the case of a semi-infinite medium with fusion temperature at the surface, the resolution of the heat propagation equation gives access to the width of the TAZ.

  12. A Hidden Markov Model for Single Particle Tracks Quantifies Dynamic Interactions between LFA-1 and the Actin Cytoskeleton

    PubMed Central

    Das, Raibatak; Cairo, Christopher W.; Coombs, Daniel

    2009-01-01

    The extraction of hidden information from complex trajectories is a continuing problem in single-particle and single-molecule experiments. Particle trajectories are the result of multiple phenomena, and new methods for revealing changes in molecular processes are needed. We have developed a practical technique that is capable of identifying multiple states of diffusion within experimental trajectories. We model single particle tracks for a membrane-associated protein interacting with a homogeneously distributed binding partner and show that, with certain simplifying assumptions, particle trajectories can be regarded as the outcome of a two-state hidden Markov model. Using simulated trajectories, we demonstrate that this model can be used to identify the key biophysical parameters for such a system, namely the diffusion coefficients of the underlying states, and the rates of transition between them. We use a stochastic optimization scheme to compute maximum likelihood estimates of these parameters. We have applied this analysis to single-particle trajectories of the integrin receptor lymphocyte function-associated antigen-1 (LFA-1) on live T cells. Our analysis reveals that the diffusion of LFA-1 is indeed approximately two-state, and is characterized by large changes in cytoskeletal interactions upon cellular activation. PMID:19893741
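A stripped-down version of the two-state machinery can be written directly (invented parameter values; the paper's stochastic maximum-likelihood optimization is not reproduced): hidden slow/fast diffusion states emit squared frame-to-frame displacements, which for 2-D Brownian motion are exponentially distributed with mean 4·D·dt, and the forward algorithm scores a track.

```python
# Two-state HMM forward algorithm on squared displacements of a toy track.
import math
import random

def forward_loglik(sq_disps, d_slow, d_fast, p_stay, dt=0.1):
    """Log-likelihood of a track under a two-state diffusion HMM."""
    means = [4.0 * d_slow * dt, 4.0 * d_fast * dt]
    trans = [[p_stay, 1.0 - p_stay], [1.0 - p_stay, p_stay]]

    def emit(x, s):
        return math.exp(-x / means[s]) / means[s]   # exponential density

    alpha = [0.5 * emit(sq_disps[0], s) for s in (0, 1)]
    norm = sum(alpha)
    loglik = math.log(norm)
    alpha = [a / norm for a in alpha]
    for x in sq_disps[1:]:
        alpha = [emit(x, s) * (alpha[0] * trans[0][s] + alpha[1] * trans[1][s])
                 for s in (0, 1)]
        norm = sum(alpha)
        loglik += math.log(norm)
        alpha = [a / norm for a in alpha]
    return loglik

# Simulate a track switching between slow and fast diffusion states:
random.seed(5)
d_true, dt = (0.05, 1.0), 0.1
state, track = 0, []
for _ in range(400):
    if random.random() < 0.1:                       # 10% chance to switch
        state = 1 - state
    track.append(random.expovariate(1.0 / (4.0 * d_true[state] * dt)))

ll_true = forward_loglik(track, 0.05, 1.0, 0.9)
```

Maximizing this likelihood over (d_slow, d_fast, p_stay), as the paper does with stochastic optimization, recovers the diffusion coefficients and switching rates.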

  13. Bibliography for aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.; Maine, Richard E.

    1986-01-01

    An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.

  14. Observations on the interaction of nanomaterials with bacteria

    NASA Astrophysics Data System (ADS)

    Raja, P. M.; Ajayan, P. M.; Nalamasu, O.; Sharma, A.

    2006-05-01

    Large-scale commercial manufacturing of nanomaterials raises the important issue of their environmental fate. With increased production (estimated to be in the million-gallon range), nanomaterial interactions with environmental microbial ecology will become significant. However, few studies have addressed this concern. It is therefore essential to experimentally determine some fundamental parameters to ascertain any environmental stresses related to microbiological interactions of nanomaterials. There are concerns that such interactions may resemble the biogeochemical interactions of asbestos fibers, which continue to be an alarming environmental issue. Carbon nanotubes (CNTs) are newly emerging nanomaterials with a wide range of potential electronic and medical applications. Though CNTs are dimensionally similar to mineral fibers, they differ morphologically and can possess different surface chemistries, capable of complex and varied biological interactions within the environment. In this study, we present experimental data that show discernible effects on microbial morphology, biofilm formation, substrate consumption rates, and growth of Escherichia coli in the presence of carbon nanotubes, with the aim of developing a fundamental understanding of the environmental implications of CNT-microbial interactions.

  15. Modifying and reacting to the environmental pH can drive bacterial interactions

    PubMed Central

    Ratzke, Christoph

    2018-01-01

    Microbes usually exist in communities consisting of myriad different but interacting species. These interactions are typically mediated through environmental modifications; microbes change the environment by taking up resources and excreting metabolites, which affects the growth of both themselves and also other microbes. We show here that the way microbes modify their environment and react to it sets the interactions within single-species populations and also between different species. A very common environmental modification is a change of the environmental pH. We find experimentally that these pH changes create feedback loops that can determine the fate of bacterial populations; they can either facilitate or inhibit growth, and in extreme cases will cause extinction of the bacterial population. Understanding how single species change the pH and react to these changes allowed us to estimate their pairwise interaction outcomes. Those interactions lead to a set of generic interaction motifs—bistability, successive growth, extended suicide, and stabilization—that may be independent of which environmental parameter is modified and thus may reoccur in different microbial systems. PMID:29538378

  16. Two-dimensional advective transport in ground-water flow parameter estimation

    USGS Publications Warehouse

    Anderman, E.R.; Hill, M.C.; Poeter, E.P.

    1996-01-01

    Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow, parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. 
In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
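Step (1) of the analysis procedure can be illustrated on a toy Jacobian (values invented): for two parameters, the correlation of their estimates follows from the normal-equations matrix J^T J, and a single observation that responds to the parameters with opposite signs breaks a near-perfect correlation, mirroring the role the advective-transport observations played here.

```python
# Parameter correlation implied by a sensitivity (Jacobian) matrix.
import math

def param_correlation(jac):
    """Correlation between two estimated parameters, rows = observations,
    columns = 2 parameters, via the inverse of J^T J (up to scale)."""
    a = sum(row[0] * row[0] for row in jac)
    b = sum(row[0] * row[1] for row in jac)
    c = sum(row[1] * row[1] for row in jac)
    return -b / math.sqrt(a * c)

# Head observations responding almost identically to both parameters:
heads_only = [[1.0, 1.0], [2.0, 2.01], [0.5, 0.49]]
# One transport-like observation that separates the parameters:
with_transport = heads_only + [[1.0, -1.0]]

corr_heads = param_correlation(heads_only)
corr_aug = param_correlation(with_transport)
```

When |correlation| is near 1, as for the head-only case, the regression cannot estimate the two parameters uniquely, which is exactly the insensitivity/correlation problem the abstract describes.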

  17. Forest canopy height estimation using double-frequency repeat pass interferometry

    NASA Astrophysics Data System (ADS)

    Karamvasis, Kleanthis; Karathanassi, Vassilia

    2015-06-01

    In recent years, many efforts have been made to assess forest stand parameters from remote sensing data, as a means to estimate the above-ground carbon stock of forests in the context of the Kyoto protocol. Synthetic aperture radar interferometry (InSAR) techniques have gained traction in the last decade as a viable technology for vegetation parameter estimation. Many works have shown that forest canopy height, a critical parameter for quantifying the terrestrial carbon cycle, can be estimated with InSAR. However, research is still needed to further understand the interaction of SAR signals with the forest canopy and to develop an operational method for forestry applications. This work discusses the use of repeat-pass interferometry with ALOS PALSAR (L-band) HH-polarized and COSMO-SkyMed (X-band) HH-polarized acquisitions over the Taxiarchis forest (Chalkidiki, Greece), in order to produce accurate digital elevation models (DEMs) and estimate canopy height through interferometric processing. The effect of wavelength-dependent penetration depth into the canopy is known to be strong, and could potentially enable forest canopy height mapping using dual-wavelength SAR interferometry at X- and L-band. The method is based on the separation of scattering phase centers at different wavelengths. It involves the generation of a terrain elevation model underneath the forest canopy from repeat-pass L-band InSAR data, as well as the generation of a canopy surface elevation model from repeat-pass X-band InSAR data. The terrain model is then used to remove the terrain component from the repeat-pass interferometric X-band elevation model, enabling the forest canopy height estimation. The canopy height results were compared to a field survey, giving a root-mean-square error (RMSE) of 6.9 m. The effects of vegetation characteristics, SAR incidence angle and view geometry, and terrain slope on the accuracy of the results have also been studied.
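The height-retrieval step itself reduces to differencing the two elevation models: X-band scatters near the canopy top and L-band nearer the ground. A toy numerical sketch (synthetic grid values and "field" heights, not the paper's data):

```python
# Canopy height as the difference of X-band and L-band InSAR elevation
# models, with an RMSE check against synthetic field measurements.
import math

x_band_dem = [112.0, 118.5, 121.0, 109.2]   # canopy-surface elevations (m)
l_band_dem = [98.0, 101.5, 103.0, 97.2]     # terrain elevations (m)
field_h = [13.0, 16.0, 19.0, 11.0]          # "measured" canopy heights (m)

canopy_h = [x - l for x, l in zip(x_band_dem, l_band_dem)]
rmse = math.sqrt(sum((c - f) ** 2 for c, f in zip(canopy_h, field_h))
                 / len(field_h))
```

In practice the differencing is done per pixel after co-registration, and residual penetration of X-band into the canopy biases the heights low, one of the error sources the paper examines.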

  18. Estimating Parameters for the Earth-Ionosphere Waveguide Using VLF Narrowband Transmitters

    NASA Astrophysics Data System (ADS)

    Gross, N. C.; Cohen, M.

    2017-12-01

    Estimating the D-region (60 to 90 km altitude) ionospheric electron density profile has always been a challenge. The D-region's altitude is too high for aircraft and balloons to reach, but too low for satellites to orbit in. Sounding rocket measurements have been a useful tool for directly measuring the ionosphere; however, such measurements are infrequent and costly. A more sustainable type of measurement for characterizing the D-region is remote sensing with very low frequency (VLF) waves. Both the lower ionosphere and the Earth's ground strongly reflect VLF waves, and these two spherical reflectors form what is known as the Earth-ionosphere waveguide. As VLF waves propagate within the waveguide, they interact with the D-region ionosphere, causing amplitude and phase changes that are polarization dependent. These changes can be monitored with a spatially distributed array of receivers, and D-region properties can be inferred from the measurements. Researchers have previously used VLF remote sensing techniques, based on either narrowband transmitters or sferics, to estimate the density profile, but these estimates are typically limited to a short time frame and a narrow propagation region. We report on an effort to improve the understanding of VLF wave propagation by estimating the commonly used two-parameter exponential electron density profile, described by h' and beta. Measurements from multiple narrowband transmitters at multiple receivers are taken concurrently and input into an algorithm. The cornerstone of the algorithm is an artificial neural network (ANN), whose inputs are the received narrowband amplitude and phase and whose outputs are the estimated h' and beta parameters. Training data for the ANN are generated using the Navy's Long-Wavelength Propagation Capability (LWPC) model. Emphasis is placed on profiling the daytime ionosphere, which has a more stable and predictable profile than the nighttime one.
Daytime ionospheric disturbances, from high solar activity, are also analyzed.
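    As a toy stand-in for the LWPC-trained network, the inversion idea (map received narrowband amplitude and phase back to h' and beta) can be sketched as a lookup over simulated profiles; the forward model and all coefficients below are invented for illustration and are not LWPC physics:

```python
# Hypothetical stand-in for the LWPC forward model: maps an (h', beta)
# profile to one narrowband (amplitude, phase) observation.
def toy_forward(h_prime, beta):
    amp = 50.0 - 0.3 * (h_prime - 70.0) + 5.0 * beta
    phase = 10.0 * beta + 0.1 * h_prime
    return amp, phase

# "Training set" over a grid of plausible daytime profiles.
train = []
for h10 in range(650, 800, 5):      # h' = 65.0 .. 79.5 km
    for b100 in range(30, 50, 2):   # beta = 0.30 .. 0.48 km^-1
        hp, b = h10 / 10.0, b100 / 100.0
        train.append((toy_forward(hp, b), (hp, b)))

# Nearest-neighbour inversion: a minimal stand-in for the trained ANN.
def estimate_profile(amp, phase):
    best = min(train,
               key=lambda t: (t[0][0] - amp) ** 2 + (t[0][1] - phase) ** 2)
    return best[1]

obs = toy_forward(74.0, 0.42)       # synthetic observation
h_est, beta_est = estimate_profile(*obs)
```

    A real implementation would replace the grid lookup with a trained network and feed it measurements from multiple transmitter-receiver paths at once.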

  19. Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks

    NASA Astrophysics Data System (ADS)

    Kyo, Koki

    Recently, in the field of human-computer interaction, a model containing the systematic factor and human factor has been proposed to evaluate the performance of the input devices of a computer. This is called the SH-model. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly-proposed models work well.
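    A minimal sketch of the Box-Cox transformation on which the extended models are built; the pointing-task movement times below are invented for illustration:

```python
import math

def box_cox(y, lam):
    """Box-Cox power transformation; lam = 0 falls back to the log."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

# Movement times (s) for a pointing task, shrinking as practice accumulates;
# the transformed series is what a learning-effect model would be fit to.
times = [1.9, 1.6, 1.45, 1.38, 1.33]
transformed = [box_cox(t, 0.5) for t in times]
```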

  20. Seismic Imaging of VTI, HTI and TTI based on Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Rusmanugroho, H.; Tromp, J.

    2014-12-01

    Recent studies show that isotropic seismic imaging based on the adjoint method reduces the low-frequency artifacts caused by diving waves, which commonly occur in two-way wave-equation migration such as Reverse Time Migration (RTM). Here, we derive new expressions for the sensitivity kernels for Vertical Transverse Isotropy (VTI) using the Thomsen parameters (ɛ, δ, γ) plus the P- and S-wave speeds (α, β), as well as via the Chen & Tromp (GJI 2005) parameters (A, C, N, L, F). For Horizontal Transverse Isotropy (HTI), these parameters depend on an azimuthal angle φ, with the tilt angle θ fixed at 90°, and for Tilted Transverse Isotropy (TTI) they depend on both the azimuth and tilt angles. We calculate sensitivity kernels for each of these two parameterizations. Individual kernels ("images") are numerically constructed from the interaction between the regular and adjoint wavefields in smoothed models, which in practice are estimated through Full-Waveform Inversion (FWI). The final image is obtained by summing over all shots, which are distributed so as to sample the target model properly. The impedance kernel, which is a sum of the sensitivity kernels of density and the Thomsen or Chen & Tromp parameters, looks crisp and promising for seismic imaging. The other kernels suffer from low-frequency artifacts, similar to traditional seismic imaging conditions. However, all sensitivity kernels are important for estimating the gradient of the misfit function, which, in combination with a standard gradient-based inversion algorithm, is used to minimize the objective function in FWI.
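    The kernel construction described above can be sketched in miniature with a zero-lag cross-correlation imaging condition: the image at each point accumulates the product of the regular (forward) and adjoint wavefields over time and shots. The wavefields below are tiny synthetic arrays, not finite-difference solutions, so the numbers are purely illustrative:

```python
# Toy wavefields indexed as [shot][time][space]; values are arbitrary.
nx, nt, nshots = 4, 3, 2
forward = [[[(s + 1) * 0.1 * (t + x) for x in range(nx)]
            for t in range(nt)] for s in range(nshots)]
adjoint = [[[(s + 1) * 0.2 * (t - x) for x in range(nx)]
            for t in range(nt)] for s in range(nshots)]

# Zero-lag cross-correlation imaging condition, summed over shots.
image = [0.0] * nx
for s in range(nshots):
    for t in range(nt):
        for x in range(nx):
            image[x] += forward[s][t][x] * adjoint[s][t][x]
```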

  1. Advances in parameter estimation techniques applied to flexible structures

    NASA Technical Reports Server (NTRS)

    Maben, Egbert; Zimmerman, David C.

    1994-01-01

    In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes are contrasted using the NASA Mini-Mast as the focus structure.

  2. Impact of the time scale of model sensitivity response on coupled model parameter estimation

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu

    2017-11-01

    That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of the associated physics and the characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model spans characteristic synoptic to decadal scales by coupling a deep ocean with long-term (decadal) variability to a slowly varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observations. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.

  3. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
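    The underdetermined MAP step at the heart of this approach can be illustrated with a scalar-measurement sketch; the sensitivities, covariances, and measurement below are hypothetical, not values from the paper:

```python
# Underdetermined MAP estimation in miniature: two unknown health
# parameters, one sensed measurement.  The paper's tuner-selection routine
# minimizes the mean squared error of exactly this kind of estimator.
P = [[0.04, 0.0], [0.0, 0.01]]   # prior covariance of the health parameters
H = [2.0, 5.0]                   # 1 x 2 measurement sensitivity
R = 0.1                          # measurement noise variance
y = 0.3                          # measured deviation from nominal

# P H^T (a 2-vector) and the innovation variance S = H P H^T + R (scalar).
PHt = [P[0][0] * H[0] + P[0][1] * H[1],
       P[1][0] * H[0] + P[1][1] * H[1]]
S = H[0] * PHt[0] + H[1] * PHt[1] + R
# MAP / MMSE gain K = P H^T S^-1 and estimate x_hat = K y.
K = [PHt[0] / S, PHt[1] / S]
x_hat = [K[0] * y, K[1] * y]
```

    Note the shrinkage: the predicted measurement H x_hat is smaller than y, because the single measurement cannot pin down both parameters.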

  4. Interaction of the electron density fluctuations with electron cyclotron waves from the equatorial launcher in ITER

    NASA Astrophysics Data System (ADS)

    Snicker, A.; Poli, E.; Maj, O.; Guidi, L.; Köhn, A.; Weber, H.; Conway, G. D.; Henderson, M.; Saibene, G.

    2018-01-01

    We present a numerical investigation of electron cyclotron beams interacting with electron density fluctuations in the ITER 15 MA H-mode scenario. In particular, here we study how the beam from the equatorial launcher, which shall be utilized to influence the sawtooth instability, is affected by the fluctuations. Moreover, we present the theory and first estimates of the power that is scattered from the injected O-mode to a secondary X-mode in the presence of the fluctuations. It is shown that for ITER parameters the scattered power stays within acceptable limits and the broadening of the equatorial beams is smaller than that of the beams from the upper launcher.

  5. Influence of nonlinear detuning at plasma wavebreaking threshold on backward Raman compression of non-relativistic laser pulses

    NASA Astrophysics Data System (ADS)

    Balakin, A. A.; Fraiman, G. M.; Jia, Q.; Fisch, N. J.

    2018-06-01

    Taking into account the nonlinear dispersion of the plasma wave, the fluid equations for the three-wave (Raman) interaction in plasmas are derived. It is found that, in some parameter regimes, the nonlinear detuning resulting from the plasma wave dispersion during Raman compression limits the plasma wave amplitude to noticeably below the generally recognized wavebreaking threshold. Particle-in-cell simulations confirm the theoretical estimates. For weakly nonlinear dispersion, the detuning effect can be counteracted by pump chirping or, equivalently, by upshifting slightly the pump frequency, so that the frequency-upshifted pump interacts with the seed at the point where the plasma wave enters the nonlinear stage.

  6. Effects of phenotypic plasticity on pathogen transmission in the field in a Lepidoptera-NPV system.

    PubMed

    Reeson, A F; Wilson, K; Cory, J S; Hankard, P; Weeks, J M; Goulson, D; Hails, R S

    2000-08-01

    In models of insect-pathogen interactions, the transmission parameter (ν) is the term that describes the efficiency with which pathogens are transmitted between hosts. There are two components to the transmission parameter, namely the rate at which the host encounters pathogens (contact rate) and the rate at which contact between host and pathogen results in infection (host susceptibility). Here it is shown that in larvae of Spodoptera exempta (Lepidoptera: Noctuidae), in which rearing density triggers the expression of one of two alternative phenotypes, the high-density morph is associated with an increase in larval activity. This response is likely to result in an increase in the contact rate between hosts and pathogens. Rearing density is also known to affect susceptibility of S. exempta to pathogens, with the high-density morph showing increased resistance to a baculovirus. In order to determine whether density-dependent differences observed in the laboratory might affect transmission in the wild, a field trial was carried out to estimate the transmission parameter for S. exempta and its nuclear polyhedrosis virus (NPV). The transmission parameter was found to be significantly higher among larvae reared in isolation than among those reared in crowds. Models of insect-pathogen interactions, in which the transmission parameter is assumed to be constant, will therefore not fully describe the S. exempta-NPV system. The finding that crowding can influence transmission in this way has major implications for both the long-term population dynamics and the invasion dynamics of insect-pathogen systems.
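    A hedged sketch of how a transmission parameter can be estimated from counts of susceptible hosts before and after field exposure, assuming the simple mass-action model dS/dt = -nu * S * P with pathogen density P roughly constant over the exposure window. All numbers below are hypothetical, not the trial's data:

```python
import math

def estimate_nu(s0, st, pathogen_density, days):
    # Integrating dS/dt = -nu * S * P over time t gives
    # nu = ln(S0 / St) / (P * t).
    return math.log(s0 / st) / (pathogen_density * days)

# Hypothetical cohorts: solitary-reared larvae suffer more infection than
# crowded-reared ones over the same exposure, so their estimated nu is larger.
nu_solitary = estimate_nu(s0=100, st=40, pathogen_density=1e6, days=7)
nu_crowded = estimate_nu(s0=100, st=70, pathogen_density=1e6, days=7)
```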

  7. Improved Estimates of Thermodynamic Parameters

    NASA Technical Reports Server (NTRS)

    Lawson, D. D.

    1982-01-01

    Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using a parabolic equation with three adjustable parameters, the heat of vaporization can be used to estimate the boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by the improved method and compared with previously reported values. The technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
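    The parabolic correlation can be sketched as follows; the three calibration points (and hence the fitted coefficients) are hypothetical, and a real application would fit many compounds by least squares rather than exact interpolation:

```python
# Hypothetical calibration liquids: (heat of vaporization kJ/mol, boiling K).
pts = [(25.0, 298.0), (31.0, 350.0), (38.0, 400.0)]

def fit_parabola(p):
    """Coefficients (a, b, c) of y = a + b*x + c*x^2 through three points."""
    (x0, y0), (x1, y1), (x2, y2) = p
    # Divided differences give the unique interpolating quadratic.
    d01 = (y1 - y0) / (x1 - x0)
    d12 = (y2 - y1) / (x2 - x1)
    c = (d12 - d01) / (x2 - x0)
    b = d01 - c * (x0 + x1)
    a = y0 - b * x0 - c * x0 * x0
    return a, b, c

a, b, c = fit_parabola(pts)

def predict_bp(hv):
    # Estimate boiling point from heat of vaporization (and invert
    # numerically for "vice versa").
    return a + b * hv + c * hv * hv
```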

  8. Spectrometric studies on the interaction of fluoroquinolones and bovine serum albumin

    NASA Astrophysics Data System (ADS)

    Ni, Yongnian; Su, Shaojing; Kokot, Serge

    2010-02-01

    The interaction between fluoroquinolones (FQs), ofloxacin and enrofloxacin, and bovine serum albumin (BSA) was investigated by fluorescence and UV-vis spectroscopy. It was demonstrated that the fluorescence quenching of BSA by FQ is a result of the formation of the FQ-BSA complex stabilized, in the main, by hydrogen bonds and van der Waals forces. The Stern-Volmer quenching constant, KSV, and the corresponding thermodynamic parameters, Δ H, Δ S and Δ G, were estimated. The distance, r, between the donor, BSA, and the acceptor, FQ, was estimated from fluorescence resonance energy transfer (FRET). The effect of FQ on the conformation of BSA was analyzed with the aid of UV-vis absorbance spectra and synchronous fluorescence spectroscopy. Spectral analysis showed that the two FQs affected the conformation of the BSA but in a different manner. Thus, with ofloxacin, the polarity around the tryptophan residues decreased and the hydrophobicity increased, while for enrofloxacin, the opposite effect was observed.
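    The Stern-Volmer analysis reduces to a linear fit of F0/F = 1 + KSV[Q]; a minimal sketch with hypothetical fluorescence readings (not the paper's data):

```python
# Quencher (FQ) concentrations in mol/L and quenched BSA emission intensities.
quencher = [0.0, 2e-6, 4e-6, 6e-6, 8e-6]
fluorescence = [100.0, 91.0, 83.5, 77.0, 71.5]

f0 = fluorescence[0]
xs = quencher[1:]
ys = [f0 / f - 1.0 for f in fluorescence[1:]]   # F0/F - 1 = Ksv * [Q]

# Least-squares slope of a line through the origin gives Ksv.
ksv = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```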

  9. A GUI-based Tool for Bridging the Gap between Models and Process-Oriented Studies

    NASA Astrophysics Data System (ADS)

    Kornfeld, A.; Van der Tol, C.; Berry, J. A.

    2014-12-01

    Models used for simulation of photosynthesis and transpiration by canopies of terrestrial plants typically have subroutines such as STOMATA.F90, PHOSIB.F90 or BIOCHEM.m that solve for photosynthesis and associated processes. Key parameters such as the Vmax for Rubisco and temperature response parameters are required by these subroutines. These are often taken from the literature or determined by separate analysis of gas exchange experiments. It is useful to note, however, that such subroutines can be extracted and run as standalone models to simulate leaf responses collected in gas exchange experiments. Furthermore, there are excellent non-linear fitting tools that can be used to optimize the parameter values in these models to fit the observations. Ideally the Vmax fit in this way should be the same as that determined by a separate analysis, but it may not be, because of interactions with other kinetic constants and the temperature dependence of these in the full subroutine. We submit that it is more useful to fit the complete model to the calibration experiments rather than to fit disaggregated constants separately. We designed a graphical user interface (GUI) based tool that uses gas exchange photosynthesis data to directly estimate model parameters in the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model and, at the same time, allows researchers to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. We have also ported some of this functionality to an Excel spreadsheet, which could be used as a teaching tool to help integrate process-oriented and model-oriented studies.

  10. In silico exploration of the impact of pasture larvae contamination and anthelmintic treatment on genetic parameter estimates for parasite resistance in grazing sheep.

    PubMed

    Laurenson, Y C S M; Kyriazakis, I; Bishop, S C

    2012-07-01

    A mathematical model was developed to investigate the impact of level of Teladorsagia circumcincta larval pasture contamination and anthelmintic treatment on genetic parameter estimates for performance and resistance to parasites in sheep. Currently great variability is seen for published correlations between performance and resistance, with estimates appearing to vary with production environment. The model accounted for host genotype and parasitism in a population of lambs, incorporating heritable between-lamb variation in host-parasite interactions, with genetic independence of input growth and immunological variables. An epidemiological module was linked to the host-parasite interaction module via food intake (FI) to create a grazing scenario. The model was run for a population of lambs growing from 2 mo of age, grazing on pasture initially contaminated with 0, 1,000, 3,000, or 5,000 larvae/kg DM, and given either no anthelmintic treatment or drenched at 30-d intervals. The mean population values for FI and empty BW (EBW) decreased with increasing levels of initial larval contamination (IL(0)), with non-drenched lambs having a greater reduction than drenched ones. For non-drenched lambs the maximum mean population values for worm burden (WB) and fecal egg count (FEC) increased and occurred earlier for increasing IL(0), with values being similar for all IL(0) at the end of the simulation. Drenching was predicted to suppress WB and FEC, and cause reduced pasture contamination. The heritability of EBW for non-drenched lambs was predicted to be initially high (0.55) and decreased over time with increasing IL(0), whereas drenched lambs remained high throughout. The heritability of WB and FEC for all lambs was initially low (∼0.05) and increased with time to ∼0.25, with increasing IL(0) leading to this value being reached at faster rates. The genetic correlation between EBW and FEC was initially ∼-0.3. As time progressed the correlation tended towards 0, before becoming negative by the end of the simulation for non-drenched lambs, with increasing IL(0) leading to increasingly negative correlations. For drenched lambs, the correlation remained close to 0. This study highlights the impact of IL(0) and anthelmintic treatment on genetic parameters for resistance. Along with factors affecting performance penalties due to parasitism and time of reporting, the results give plausible causes for variation in genetic parameter estimates previously reported.

  11. Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation

    NASA Astrophysics Data System (ADS)

    Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei

    2018-04-01

    Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
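    The ensemble mechanism can be illustrated with a scalar toy problem (a stand-in sketch, not GFDL CM2.1): a "convection parameter" p drives a toy model y = 2*p, and observing y nudges the whole parameter ensemble through the parameter-observation cross-covariance, the same mechanism ensemble coupled data assimilation uses to reduce parameter bias. All values are invented:

```python
import random

random.seed(0)

truth_p = 1.5
obs = 2.0 * truth_p + random.gauss(0.0, 0.05)     # noisy observation of y
obs_err_var = 0.05 ** 2

ens_p = [random.gauss(1.0, 0.3) for _ in range(200)]  # biased prior ensemble
ens_y = [2.0 * p for p in ens_p]                      # model-predicted obs

n = len(ens_p)
prior_mean = sum(ens_p) / n
mean_y = sum(ens_y) / n
cov_py = sum((p - prior_mean) * (y - mean_y)
             for p, y in zip(ens_p, ens_y)) / (n - 1)
var_y = sum((y - mean_y) ** 2 for y in ens_y) / (n - 1)

gain = cov_py / (var_y + obs_err_var)                 # Kalman gain
ens_p = [p + gain * (obs - y) for p, y in zip(ens_p, ens_y)]
post_mean = sum(ens_p) / n
```

    The posterior ensemble mean moves from the biased prior toward the true parameter value.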

  12. Detecting epistasis with the marginal epistasis test in genetic mapping studies of quantitative traits

    PubMed Central

    Zeng, Ping; Mukherjee, Sayan; Zhou, Xiang

    2017-01-01

    Epistasis, commonly defined as the interaction between multiple genes, is an important genetic component underlying phenotypic variation. Many statistical methods have been developed to model and identify epistatic interactions between genetic variants. However, because of the large combinatorial search space of interactions, most epistasis mapping methods face enormous computational challenges and often suffer from low statistical power due to multiple test correction. Here, we present a novel, alternative strategy for mapping epistasis: instead of directly identifying individual pairwise or higher-order interactions, we focus on mapping variants that have non-zero marginal epistatic effects—the combined pairwise interaction effects between a given variant and all other variants. By testing marginal epistatic effects, we can identify candidate variants that are involved in epistasis without the need to identify the exact partners with which the variants interact, thus potentially alleviating much of the statistical and computational burden associated with standard epistatic mapping procedures. Our method is based on a variance component model, and relies on a recently developed variance component estimation method for efficient parameter inference and p-value computation. We refer to our method as the “MArginal ePIstasis Test”, or MAPIT. With simulations, we show how MAPIT can be used to estimate and test marginal epistatic effects, produce calibrated test statistics under the null, and facilitate the detection of pairwise epistatic interactions. We further illustrate the benefits of MAPIT in a QTL mapping study by analyzing the gene expression data of over 400 individuals from the GEUVADIS consortium. PMID:28746338

  13. Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter

    PubMed Central

    Reddy, Chinthala P.; Rathi, Yogesh

    2016-01-01

    Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion diffusion imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts. PMID:27147956

  14. Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter.

    PubMed

    Reddy, Chinthala P; Rathi, Yogesh

    2016-01-01

    Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion diffusion imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts.

  15. Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented onboard in real time.
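    The recursive Fourier transform behind the method can be sketched as a running transform at each fixed analysis frequency, updated with one complex multiply-add per new sample; that incremental update is what keeps the computation cheap enough for real time. The frequencies and sample rate below are illustrative, not flight-test values:

```python
import cmath
import math

def running_dft(samples, freq_hz, dt):
    """Running DFT at one analysis frequency, one update per sample."""
    step = cmath.exp(-2j * math.pi * freq_hz * dt)
    phasor = 1.0 + 0.0j
    X = 0.0 + 0.0j
    for x in samples:
        X += x * phasor   # incremental update; no full transform recomputed
        phasor *= step
    return X

dt, n = 0.01, 500
sig = [math.sin(2.0 * math.pi * 2.0 * i * dt) for i in range(n)]
X_on = running_dft(sig, 2.0, dt)    # at the signal frequency: large response
X_off = running_dft(sig, 9.0, dt)   # off-frequency bin: near zero
```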

  16. Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine

    2002-01-01

    The design of a linear parameter varying (LPV) controller for an aircraft under actuator failure cases is presented. The controller synthesis for actuator failure cases is formulated into linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of an LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block which represents estimated parameter uncertainties. The fault parameter is estimated using the two-stage Kalman filter. The simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.

  17. A deterministic method for estimating free energy genetic network landscapes with applications to cell commitment and reprogramming paths.

    PubMed

    Olariu, Victor; Manesso, Erica; Peterson, Carsten

    2017-06-01

    Depicting developmental processes as movements in free energy genetic landscapes is an illustrative tool. However, exploring such landscapes to obtain quantitative or even qualitative predictions is hampered by the lack of free energy functions corresponding to the biochemical Michaelis-Menten or Hill rate equations for the dynamics. Being armed with energy landscapes defined by a network and its interactions would open up the possibility of swiftly identifying cell states and computing optimal paths, including those of cell reprogramming, thereby avoiding exhaustive trial-and-error simulations with rate equations for different parameter sets. It turns out that sigmoidal rate equations do have approximate free energy associations. With this replacement of rate equations, we develop a deterministic method for estimating the free energy surfaces of systems of interacting genes at different noise levels or temperatures. Once such free energy landscape estimates have been established, we adapt a shortest path algorithm to determine optimal routes in the landscapes. We explore the method on three circuits for haematopoiesis and embryonic stem cell development for commitment and reprogramming scenarios and illustrate how the method can be used to determine sequential steps for onsets of external factors, essential for efficient reprogramming.
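    The landscape-plus-shortest-path idea can be sketched on a toy state graph; the states, free energies, and uphill-cost rule below are hypothetical stand-ins for the paper's gene-network circuits:

```python
import heapq

# Hypothetical free energies F for four network states, and which states
# are reachable from which.
F = {"stem": 1.0, "progenitor": 2.2, "barrier": 3.5, "committed": 0.5}
edges = {"stem": ["progenitor"],
         "progenitor": ["barrier", "stem"],
         "barrier": ["committed", "progenitor"],
         "committed": ["barrier"]}

def optimal_path(start, goal):
    """Dijkstra search where a step into s costs max(F[s] - F[here], 0)."""
    heap = [(0.0, start, [start])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt in edges[node]:
            if nxt not in seen:
                step = max(F[nxt] - F[node], 0.0)
                heapq.heappush(heap, (cost + step, nxt, path + [nxt]))
    return None

cost, path = optimal_path("stem", "committed")
```

    Here the route's cost counts only the uphill free-energy increments, one simple way to score a reprogramming path once a landscape estimate is in hand.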

  18. A deterministic method for estimating free energy genetic network landscapes with applications to cell commitment and reprogramming paths

    PubMed Central

    Olariu, Victor; Manesso, Erica

    2017-01-01

    Depicting developmental processes as movements in free energy genetic landscapes is an illustrative tool. However, exploring such landscapes to obtain quantitative or even qualitative predictions is hampered by the lack of free energy functions corresponding to the biochemical Michaelis–Menten or Hill rate equations for the dynamics. Being armed with energy landscapes defined by a network and its interactions would open up the possibility of swiftly identifying cell states and computing optimal paths, including those of cell reprogramming, thereby avoiding exhaustive trial-and-error simulations with rate equations for different parameter sets. It turns out that sigmoidal rate equations do have approximate free energy associations. With this replacement of rate equations, we develop a deterministic method for estimating the free energy surfaces of systems of interacting genes at different noise levels or temperatures. Once such free energy landscape estimates have been established, we adapt a shortest path algorithm to determine optimal routes in the landscapes. We explore the method on three circuits for haematopoiesis and embryonic stem cell development for commitment and reprogramming scenarios and illustrate how the method can be used to determine sequential steps for onsets of external factors, essential for efficient reprogramming. PMID:28680655

  19. Simulation of the wastewater temperature in sewers with TEMPEST.

    PubMed

    Dürrenmatt, David J; Wanner, Oskar

    2008-01-01

    TEMPEST is a new interactive simulation program for the estimation of the wastewater temperature in sewers. Intuitive graphical user interfaces assist the user in managing data, performing calculations and plotting results. The program calculates the dynamics and longitudinal spatial profiles of the wastewater temperature in sewer lines. Interactions between wastewater, sewer air and surrounding soil are modeled in TEMPEST by mass balance equations, rate expressions found in the literature and a new empirical model of the airflow in the sewer. TEMPEST was developed as a tool which can be applied in practice, i.e., it requires as few input data as possible. These data include the upstream wastewater discharge and temperature, geometric and hydraulic parameters of the sewer, material properties of the sewer pipe and surrounding soil, ambient conditions, and estimates of the capacity of openings for air exchange between sewer and environment. Based on a case study it is shown how TEMPEST can be applied to estimate the decrease of the downstream wastewater temperature caused by heat recovery from the sewer. Because the efficiency of nitrification strongly depends on the wastewater temperature, this application is of practical relevance for situations in which the sewer ends at a nitrifying wastewater treatment plant.

  20. Multi-objective optimization in quantum parameter estimation

    NASA Astrophysics Data System (ADS)

    Gong, BeiLi; Cui, Wei

    2018-04-01

We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, treating the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, the control usually introduces a significant deformation of the system state. We therefore propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, thereby improving the parameter estimation precision, and (2) minimizing the deformation of the system state, thereby maintaining its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of Hamiltonian control in improving the precision of quantum parameter estimation.
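
The ε-constrained scalarization mentioned in the abstract can be sketched as: maximize the information objective subject to a lower bound on fidelity. The functional forms below are toy assumptions standing in for the paper's actual Fisher-information and fidelity expressions.

```python
import numpy as np

# Toy sketch of the eps-constrained approach: maximize an information-like
# objective subject to a fidelity constraint. The functional forms below are
# illustrative assumptions, not the paper's Hamiltonian model.
def fisher_info(u):      # information grows with control strength u
    return u**2 / (1.0 + u**2) * 4.0

def fidelity(u):         # state deformation grows with u, so fidelity falls
    return np.exp(-0.5 * u**2)

def eps_constrained_opt(eps, grid=np.linspace(0, 3, 3001)):
    feasible = grid[fidelity(grid) >= 1.0 - eps]    # fidelity constraint
    return feasible[np.argmax(fisher_info(feasible))]

u_star = eps_constrained_opt(eps=0.1)   # best control keeping fidelity >= 0.9
```

Sweeping `eps` traces out the Pareto front between estimation precision and state deformation.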

  1. Incorporation of prior information on parameters into nonlinear regression groundwater flow models: 2. Applications

    USGS Publications Warehouse

    Cooley, Richard L.

    1983-01-01

    This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
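
The ridge-parameter idea can be sketched as a penalized linear(ized) regression pulled toward prior parameter values; `ridge_with_prior` and the synthetic problem below are illustrative, not the paper's groundwater model.

```python
import numpy as np

# Sketch of ridge-style incorporation of prior information into a linear(ized)
# regression: beta_hat = argmin ||y - X b||^2 + k ||b - b_prior||^2,
# where k is the ridge parameter controlling the prior's influence.
def ridge_with_prior(X, y, b_prior, k):
    p = X.shape[1]
    A = X.T @ X + k * np.eye(p)
    return np.linalg.solve(A, X.T @ y + k * b_prior)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
b_true = np.array([1.0, -2.0, 0.5])
y = X @ b_true + 0.1 * rng.normal(size=50)

b_ols = ridge_with_prior(X, y, b_prior=np.zeros(3), k=0.0)   # k = 0 -> OLS
b_rdg = ridge_with_prior(X, y, b_prior=b_true, k=10.0)       # pulled toward prior
```

Plotting the estimated MSE of `b_rdg` against `k`, as the paper proposes, reveals how much improvement the prior information can buy.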

  2. Dynamic assessment of microbial ecology (DAME): a web app for interactive analysis and visualization of microbial sequencing data.

    PubMed

    Piccolo, Brian D; Wankhade, Umesh D; Chintapalli, Sree V; Bhattacharyya, Sudeepa; Chunqiao, Luo; Shankar, Kartik

    2018-03-15

    Dynamic assessment of microbial ecology (DAME) is a Shiny-based web application for interactive analysis and visualization of microbial sequencing data. DAME provides researchers not familiar with R programming the ability to access the most current R functions utilized for ecology and gene sequencing data analyses. Currently, DAME supports group comparisons of several ecological estimates of α-diversity and β-diversity, along with differential abundance analysis of individual taxa. Using the Shiny framework, the user has complete control of all aspects of the data analysis, including sample/experimental group selection and filtering, estimate selection, statistical methods and visualization parameters. Furthermore, graphical and tabular outputs are supported by R packages using D3.js and are fully interactive. DAME was implemented in R but can be modified by Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), and JavaScript. It is freely available on the web at https://acnc-shinyapps.shinyapps.io/DAME/. Local installation and source code are available through Github (https://github.com/bdpiccolo/ACNC-DAME). Any system with R can launch DAME locally provided the shiny package is installed. bdpiccolo@uams.edu.

3. Resonant inelastic X-ray scattering study of spin-wave excitations in the cuprate parent compound Ca2CuO2Cl2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lebert, B. W.; Dean, M.; Nicolaou, A.

By means of resonant inelastic x-ray scattering at the Cu L3 edge, we measured the spin wave dispersion along <100> and <110> in the undoped cuprate Ca2CuO2Cl2. The data yield a reliable estimate of the superexchange parameter J = 135 ± 4 meV using a classical spin-1/2 2D Heisenberg model with nearest-neighbor interactions and including quantum fluctuations. Including further exchange interactions increases the estimate to J = 141 meV. The 40 meV dispersion between the magnetic Brillouin zone boundary points (1/2, 0) and (1/4, 1/4) indicates that next-nearest neighbor interactions in this compound are intermediate between the values found in La2CuO4 and Sr2CuO2Cl2. Owing to the low-Z elements composing Ca2CuO2Cl2, the present results may enable a reliable comparison with the predictions of quantum many-body calculations, which would improve our understanding of the role of magnetic excitations and of electronic correlations in cuprates.
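
The significance of the zone-boundary dispersion can be checked against linear spin-wave theory: with nearest-neighbor exchange only, the (1/2, 0) and (1/4, 1/4) points are degenerate, so any splitting (here 40 meV) must come from further-neighbor terms. A minimal sketch, using the standard square-lattice antiferromagnet dispersion and a quantum renormalization factor Zc ≈ 1.18 as assumptions:

```python
import math

# Linear spin-wave dispersion for the nearest-neighbour square-lattice
# Heisenberg antiferromagnet; Zc ~ 1.18 is the quantum renormalisation factor.
# Units: J in meV, wavevector (h, k) in reciprocal lattice units.
def magnon_energy(h, k, J=135.0, Zc=1.18):
    gamma = 0.5 * (math.cos(2 * math.pi * h) + math.cos(2 * math.pi * k))
    return 2.0 * J * Zc * math.sqrt(1.0 - gamma**2)

E_half0 = magnon_energy(0.5, 0.0)     # zone-boundary point (1/2, 0)
E_qq    = magnon_energy(0.25, 0.25)   # zone-boundary point (1/4, 1/4)
```

Since the two energies coincide in this nearest-neighbor-only model, the measured 40 meV difference directly probes next-nearest-neighbor exchange.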

4. Resonant inelastic X-ray scattering study of spin-wave excitations in the cuprate parent compound Ca2CuO2Cl2

    DOE PAGES

    Lebert, B. W.; Dean, M.; Nicolaou, A.; ...

    2017-04-07

By means of resonant inelastic x-ray scattering at the Cu L3 edge, we measured the spin wave dispersion along <100> and <110> in the undoped cuprate Ca2CuO2Cl2. The data yield a reliable estimate of the superexchange parameter J = 135 ± 4 meV using a classical spin-1/2 2D Heisenberg model with nearest-neighbor interactions and including quantum fluctuations. Including further exchange interactions increases the estimate to J = 141 meV. The 40 meV dispersion between the magnetic Brillouin zone boundary points (1/2, 0) and (1/4, 1/4) indicates that next-nearest neighbor interactions in this compound are intermediate between the values found in La2CuO4 and Sr2CuO2Cl2. Owing to the low-Z elements composing Ca2CuO2Cl2, the present results may enable a reliable comparison with the predictions of quantum many-body calculations, which would improve our understanding of the role of magnetic excitations and of electronic correlations in cuprates.

  5. Weakly dynamic dark energy via metric-scalar couplings with torsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sur, Sourav; Bhatia, Arshdeep Singh, E-mail: sourav.sur@gmail.com, E-mail: arshdeepsb@gmail.com

We study the dynamical aspects of dark energy in the context of a non-minimally coupled scalar field with curvature and torsion. Whereas the scalar field acts as the source of the trace mode of torsion, a suitable constraint on the torsion pseudo-trace provides a mass term for the scalar field in the effective action. In the equivalent scalar-tensor framework, we find explicit cosmological solutions representing dark energy in both Einstein and Jordan frames. We demand the dynamical evolution of the dark energy to be weak enough, so that the present-day values of the cosmological parameters could be estimated keeping them within the confidence limits set for the standard LCDM model from recent observations. For such estimates, we examine the variations of the effective matter density and the dark energy equation of state parameters over different redshift ranges. In spite of being weakly dynamic, the dark energy component differs significantly from the cosmological constant, both in characteristics and features; e.g., it interacts with the cosmological (dust) fluid in the Einstein frame, and crosses the phantom barrier in the Jordan frame. We also obtain the upper bounds on the torsion mode parameters and the lower bound on the effective Brans-Dicke parameter. The latter turns out to be fairly large, and in agreement with the local gravity constraints, which therefore support our analysis.

  6. Weakly dynamic dark energy via metric-scalar couplings with torsion

    NASA Astrophysics Data System (ADS)

    Sur, Sourav; Singh Bhatia, Arshdeep

    2017-07-01

We study the dynamical aspects of dark energy in the context of a non-minimally coupled scalar field with curvature and torsion. Whereas the scalar field acts as the source of the trace mode of torsion, a suitable constraint on the torsion pseudo-trace provides a mass term for the scalar field in the effective action. In the equivalent scalar-tensor framework, we find explicit cosmological solutions representing dark energy in both Einstein and Jordan frames. We demand the dynamical evolution of the dark energy to be weak enough, so that the present-day values of the cosmological parameters could be estimated keeping them within the confidence limits set for the standard LCDM model from recent observations. For such estimates, we examine the variations of the effective matter density and the dark energy equation of state parameters over different redshift ranges. In spite of being weakly dynamic, the dark energy component differs significantly from the cosmological constant, both in characteristics and features; e.g., it interacts with the cosmological (dust) fluid in the Einstein frame, and crosses the phantom barrier in the Jordan frame. We also obtain the upper bounds on the torsion mode parameters and the lower bound on the effective Brans-Dicke parameter. The latter turns out to be fairly large, and in agreement with the local gravity constraints, which therefore support our analysis.

  7. Simulation of sovereign CDS market based on interaction between market participant

    NASA Astrophysics Data System (ADS)

    Ko, Bonggyun; Kim, Kyungwon

    2017-08-01

The distributional properties of financial assets are of intense interest not only in financial theory but also in practice, and the CDS market is no exception. The CDS market, which began to receive attention after the global financial crisis, remains under-researched despite its importance. This study models the CDS market with an Ising system, using the market's defining characteristic (risk shifting) as a key factor, so the results should be of assistance to both financial theory and practice. Beyond the distributional properties of the CDS market, statistics such as multifractal characteristics help to further understanding of the market. A salient finding is that countries mainly cluster into two groups, which may reflect the market situation and geographical characteristics of each country. Based on this understanding of the CDS market, two simulation parameters representing the market are proposed. The estimated parameters are suited to high- and low-risk CDS market events, respectively; the two are complementary and can reproduce not only basic statistics but also the multifractal properties of most countries. These estimated parameters can therefore be used in research preparing for a particular (high- or low-risk) event. Finally, the results serve as an indirect check on the performance of the Ising system.

  8. Test apparatus to monitor time-domain signals from semiconductor-detector pixel arrays

    NASA Astrophysics Data System (ADS)

    Haston, Kyle; Barber, H. Bradford; Furenlid, Lars R.; Salçin, Esen; Bora, Vaibhav

    2011-10-01

Pixellated semiconductor detectors, such as CdZnTe, CdTe, or TlBr, are used for gamma-ray imaging in medicine and astronomy. Data analysis for these detectors typically estimates the position (x, y, z) and energy (E) of each interacting gamma ray from a set of detector signals {Si} corresponding to completed charge transport on the hit pixel and any of its neighbors that take part in charge sharing, plus the cathode. However, it is clear from an analysis of signal induction that there are transient signals on all pixel electrodes during charge transport and, when there is charge trapping, small negative residual signals on all electrodes. If we wish to optimally obtain the event parameters, we should take all these signals into account. We wish to estimate x, y, z, and E from the set of all electrode signals, {Si(t)}, including time dependence, using maximum-likelihood techniques [1]. To do this, we need to determine the probability of the electrode signals given the event parameters {x, y, z, E}, i.e., Pr( {Si(t)} | {x, y, z, E} ). Thus we need to map the detector response of all pixels, {Si(t)}, for a large number of events with known x, y, z, and E. In this paper we demonstrate the existence of the transient signals and residual signals and determine their magnitudes. They are typically 50-100 times smaller than the hit-pixel signals. We then describe the development of an apparatus to measure the response of a 16-pixel semiconductor detector and show some preliminary results. We also discuss techniques for measuring the event parameters for individual gamma-ray interactions, a requirement for determining Pr( {Si(t)} | {x, y, z, E} ).

  9. Modeling the degradation kinetics of ascorbic acid.

    PubMed

    Peleg, Micha; Normand, Mark D; Dixon, William R; Goulette, Timothy R

    2018-06-13

Most published reports on ascorbic acid (AA) degradation during food storage and heat preservation suggest that it follows first-order kinetics. Deviations from this pattern include Weibullian decay and an exponential drop approaching finite nonzero retention. Almost invariably, the degradation rate constant's temperature-dependence followed the Arrhenius equation, and hence the simpler exponential model too. A formula and freely downloadable interactive Wolfram Demonstration to convert the Arrhenius model's energy of activation, Ea, to the exponential model's c parameter, or vice versa, are provided. The AA's isothermal and non-isothermal degradation can be simulated with freely downloadable interactive Wolfram Demonstrations in which the model's parameters can be entered and modified by moving sliders on the screen. Where the degradation is known a priori to follow first- or other fixed-order kinetics, one can use the endpoints method, and in principle the successive points method too, to estimate the reaction's kinetic parameters from considerably fewer AA concentration determinations than in the traditional manner. Freeware to do the calculations by either method has recently been made available on the Internet. Once obtained in this way, the kinetic parameters can be used to reconstruct the entire degradation curves and predict those at different temperature profiles, isothermal or dynamic. Comparison of the predicted concentration ratios with experimental ones offers a way to validate or refute the kinetic model and the assumptions on which it is based.
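
A plausible form of the Ea-to-c conversion is obtained by matching the log-rate slopes of the Arrhenius and exponential models at a reference temperature, which gives c = Ea/(R·T_ref²); treat this matching condition as an assumption, since the paper's Wolfram Demonstration may use a different convention.

```python
R = 8.314  # universal gas constant, J/(mol K)

# Slope-matching sketch: Arrhenius k = A*exp(-Ea/(R*T)) and the exponential
# model k = k_ref*exp(c*(T - T_ref)) have equal d(ln k)/dT at T_ref when
# c = Ea / (R * T_ref**2). This matching condition is an assumption here.
def arrhenius_to_c(Ea_J_per_mol, T_ref_K):
    return Ea_J_per_mol / (R * T_ref_K**2)

def c_to_arrhenius(c_per_K, T_ref_K):
    return c_per_K * R * T_ref_K**2

c = arrhenius_to_c(100e3, T_ref_K=373.15)   # roughly 0.086 1/K for Ea = 100 kJ/mol
```

The conversion is exact only at the reference temperature; the two models diverge away from it.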

  10. A group contribution method for associating chain molecules based on the statistical associating fluid theory (SAFT-γ)

    NASA Astrophysics Data System (ADS)

    Lymperiadis, Alexandros; Adjiman, Claire S.; Galindo, Amparo; Jackson, George

    2007-12-01

A predictive group-contribution statistical associating fluid theory (SAFT-γ) is developed by extending the molecular-based SAFT-VR equation of state [A. Gil-Villegas et al. J. Chem. Phys. 106, 4168 (1997)] to treat heteronuclear molecules which are formed from fused segments of different types. Our models are thus a heteronuclear generalization of the standard models used within SAFT, comparable to the optimized potentials for the liquid state OPLS models commonly used in molecular simulation; an advantage of our SAFT-γ over simulation is that an algebraic description for the thermodynamic properties of the model molecules can be developed. In our SAFT-γ approach, each functional group in the molecule is modeled as a united-atom spherical (square-well) segment. The different groups are thus characterized by size (diameter), energy (well depth) and range parameters representing the dispersive interaction, and by shape factor parameters (which denote the extent to which each group contributes to the overall molecular properties). For associating groups a number of bonding sites are included on the segment: in this case the site types, the number of sites of each type, and the appropriate association energy and range parameters also have to be specified. A number of chemical families (n-alkanes, branched alkanes, n-alkylbenzenes, mono- and diunsaturated hydrocarbons, and n-alkan-1-ols) are treated in order to assess the quality of the SAFT-γ description of the vapor-liquid equilibria and to estimate the parameters of various functional groups. The group parameters for the functional groups present in these compounds (CH3, CH2, CH3CH, ACH, ACCH2, CH2=, CH=, and OH) together with the unlike energy parameters between groups of different types are obtained from an optimal description of the pure component phase equilibria.
The approach is found to describe accurately the vapor-liquid equilibria with an overall %AAD of 3.60% for the vapor pressure and 0.86% for the saturated liquid density. The fluid phase equilibria of some larger compounds comprising these groups, which are not included in the optimization database and some binary mixtures are examined to confirm the predictive capability of the SAFT-γ approach. A key advantage of our method is that the binary interaction parameters between groups can be estimated directly from an examination of pure components alone. This means that as a first approximation the fluid-phase equilibria of mixtures of compounds comprising the groups considered can be predicted without the need for any adjustment of the binary interaction parameters (which is common in other approaches). The special case of molecular models comprising tangentially bonded (all-atom and united-atom) segments is considered separately; we comment on the adequacy of such models in representing the properties of real molecules.
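
The group-contribution idea of building molecular parameters from shape-factor-weighted sums over functional groups can be sketched as follows. The group values below are made-up placeholders, not fitted SAFT-γ parameters, and the simple weighted mean is a crude stand-in for the theory's actual combining rules.

```python
# Hypothetical group table: (shape factor S, segment diameter sigma in
# angstroms, dispersion energy eps/k in kelvin). Values are illustrative only.
groups = {
    "CH3": (0.667, 3.81, 252.6),
    "CH2": (0.333, 4.02, 216.1),
}

def molecular_parameters(composition):
    """composition: {group_name: count}. Returns shape-weighted mean diameter
    and dispersion energy, a crude sketch of group-contribution averaging."""
    total_S = sum(n * groups[g][0] for g, n in composition.items())
    sigma = sum(n * groups[g][0] * groups[g][1] for g, n in composition.items()) / total_S
    eps = sum(n * groups[g][0] * groups[g][2] for g, n in composition.items()) / total_S
    return sigma, eps

# n-hexane as 2 x CH3 + 4 x CH2
sigma_hexane, eps_hexane = molecular_parameters({"CH3": 2, "CH2": 4})
```

In such schemes the predicted molecular parameters always fall between the extreme group values, weighted by how much each group "counts" via its shape factor.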

  11. A group contribution method for associating chain molecules based on the statistical associating fluid theory (SAFT-gamma).

    PubMed

    Lymperiadis, Alexandros; Adjiman, Claire S; Galindo, Amparo; Jackson, George

    2007-12-21

    A predictive group-contribution statistical associating fluid theory (SAFT-gamma) is developed by extending the molecular-based SAFT-VR equation of state [A. Gil-Villegas et al. J. Chem. Phys. 106, 4168 (1997)] to treat heteronuclear molecules which are formed from fused segments of different types. Our models are thus a heteronuclear generalization of the standard models used within SAFT, comparable to the optimized potentials for the liquid state OPLS models commonly used in molecular simulation; an advantage of our SAFT-gamma over simulation is that an algebraic description for the thermodynamic properties of the model molecules can be developed. In our SAFT-gamma approach, each functional group in the molecule is modeled as a united-atom spherical (square-well) segment. The different groups are thus characterized by size (diameter), energy (well depth) and range parameters representing the dispersive interaction, and by shape factor parameters (which denote the extent to which each group contributes to the overall molecular properties). For associating groups a number of bonding sites are included on the segment: in this case the site types, the number of sites of each type, and the appropriate association energy and range parameters also have to be specified. A number of chemical families (n-alkanes, branched alkanes, n-alkylbenzenes, mono- and diunsaturated hydrocarbons, and n-alkan-1-ols) are treated in order to assess the quality of the SAFT-gamma description of the vapor-liquid equilibria and to estimate the parameters of various functional groups. The group parameters for the functional groups present in these compounds (CH(3), CH(2), CH(3)CH, ACH, ACCH(2), CH(2)=, CH=, and OH) together with the unlike energy parameters between groups of different types are obtained from an optimal description of the pure component phase equilibria. 
The approach is found to describe accurately the vapor-liquid equilibria with an overall %AAD of 3.60% for the vapor pressure and 0.86% for the saturated liquid density. The fluid phase equilibria of some larger compounds comprising these groups, which are not included in the optimization database and some binary mixtures are examined to confirm the predictive capability of the SAFT-gamma approach. A key advantage of our method is that the binary interaction parameters between groups can be estimated directly from an examination of pure components alone. This means that as a first approximation the fluid-phase equilibria of mixtures of compounds comprising the groups considered can be predicted without the need for any adjustment of the binary interaction parameters (which is common in other approaches). The special case of molecular models comprising tangentially bonded (all-atom and united-atom) segments is considered separately; we comment on the adequacy of such models in representing the properties of real molecules.

  12. ROCS: a Reproducibility Index and Confidence Score for Interaction Proteomics Studies

    PubMed Central

    2012-01-01

Background Affinity-Purification Mass-Spectrometry (AP-MS) provides a powerful means of identifying protein complexes and interactions. Several important challenges exist in interpreting the results of AP-MS experiments. First, the reproducibility of AP-MS experimental replicates can be low, due both to technical variability and the dynamic nature of protein interactions in the cell. Second, the identification of true protein-protein interactions in AP-MS experiments is subject to inaccuracy due to high false negative and false positive rates. Several experimental approaches can be used to mitigate these drawbacks, including the use of replicated and control experiments and relative quantification to sensitively distinguish true interacting proteins from false ones. Methods To address the issues of reproducibility and accuracy of protein-protein interactions, we introduce a two-step method, called ROCS, which makes use of Indicator Prey Proteins to select reproducible AP-MS experiments, and of Confidence Scores to select specific protein-protein interactions. The Indicator Prey Proteins account for measures of protein identifiability as well as protein reproducibility, effectively allowing removal of outlier experiments that contribute noise and affect downstream inferences. The filtered set of experiments is then used in the Protein-Protein Interaction (PPI) scoring step. Prey protein scoring is done by computing a Confidence Score, which accounts for the probability of occurrence of prey proteins in the bait experiments relative to the control experiment, where the significance cutoff parameter is estimated by simultaneously controlling false positives and false negatives against metrics of false discovery rate and biological coherence, respectively. In summary, the ROCS method relies on automatic, objective criteria for parameter estimation and error-controlled procedures.
Results We illustrate the performance of our method by applying it to five previously published AP-MS experiments, each containing well characterized protein interactions, allowing for systematic benchmarking of ROCS. We show that our method may be used on its own to make accurate identification of specific, biologically relevant protein-protein interactions, or in combination with other AP-MS scoring methods to significantly improve inferences. Conclusions Our method addresses important issues encountered in AP-MS datasets, making ROCS a very promising tool for this purpose, either on its own or in conjunction with other methods. We anticipate that our methodology may be used more generally in proteomics studies and databases, where experimental reproducibility issues arise. The method is implemented in the R language, and is available as an R package called “ROCS”, freely available from the CRAN repository http://cran.r-project.org/. PMID:22682516

  13. Aquifer response to stream-stage and recharge variations. II. Convolution method and applications

    USGS Publications Warehouse

    Barlow, P.M.; DeSimone, L.A.; Moench, A.F.

    2000-01-01

In this second of two papers, analytical step-response functions, developed in the companion paper for several cases of transient hydraulic interaction between a fully penetrating stream and a confined, leaky, or water-table aquifer, are used in the convolution integral to calculate aquifer heads, streambank seepage rates, and bank storage that occur in response to stream-stage fluctuations and basinwide recharge or evapotranspiration. Two computer programs developed on the basis of these step-response functions and the convolution integral are applied to the analysis of hydraulic interaction of two alluvial stream-aquifer systems in the northeastern and central United States. These applications demonstrate the utility of the analytical functions and computer programs for estimating aquifer and streambank hydraulic properties, recharge rates, streambank seepage rates, and bank storage. Analysis of the water-table aquifer adjacent to the Blackstone River in Massachusetts suggests that the very shallow depth of water table and associated thin unsaturated zone at the site cause the aquifer to behave like a confined aquifer (negligible specific yield). This finding is consistent with previous studies that have shown that the effective specific yield of an unconfined aquifer approaches zero when the capillary fringe, where sediment pores are saturated by tension, extends to land surface. Under this condition, the aquifer's response is determined by elastic storage only. Estimates of horizontal and vertical hydraulic conductivity, specific yield, specific storage, and recharge for a water-table aquifer adjacent to the Cedar River in eastern Iowa, determined by the use of analytical methods, are in close agreement with those estimated by use of a more complex, multilayer numerical model of the aquifer. Streambank leakance of the semipervious streambank materials also was estimated for the site.
The streambank-leakance parameter may be considered to be a general (or lumped) parameter that accounts not only for the resistance of flow at the river-aquifer boundary, but also for the effects of partial penetration of the river and other near-stream flow phenomena not included in the theoretical development of the step-response functions.
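
The convolution step can be sketched numerically: discretize the stage record into increments and superpose delayed copies of the step-response function. The exponential unit step response below is an illustrative stand-in for the analytical functions of the companion paper.

```python
import numpy as np

# Discrete convolution sketch: aquifer head change at an observation point is
# the superposition of step responses to successive stream-stage increments.
def head_response(stage, step_response):
    d_stage = np.diff(stage, prepend=stage[0])     # stage increment per time step
    n = len(stage)
    head = np.zeros(n)
    for i, dh in enumerate(d_stage):               # superpose delayed step responses
        head[i:] += dh * step_response[: n - i]
    return head

t = np.arange(100)
U = 1.0 - np.exp(-t / 10.0)                        # illustrative unit step response
stage = np.where(t >= 5, 2.0, 0.0)                 # 2 m stage rise at t = 5
head = head_response(stage, U)                     # head rises toward 2 m
```

The same superposition applies with recharge or evapotranspiration pulses in place of stage increments, using the corresponding step-response function.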

  14. Bayesian Modal Estimation of the Four-Parameter Item Response Model in Real, Realistic, and Idealized Data Sets.

    PubMed

    Waller, Niels G; Feuerstahler, Leah

    2017-01-01

In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated [Formula: see text] code that shows how to estimate 4PM item and person parameters in [Formula: see text] (Chalmers, 2012).
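
For reference, the 4PM item response function extends the two-parameter logistic with a lower asymptote c (guessing) and an upper asymptote d (slipping). This is the standard textbook form, shown for illustration; the study itself estimates a, b, c, d with Bayesian modal methods.

```python
import math

# Four-parameter logistic item response function:
#   P(theta) = c + (d - c) / (1 + exp(-a * (theta - b)))
# a: discrimination, b: difficulty, c: lower asymptote, d: upper asymptote.
def p_correct_4pm(theta, a, b, c, d):
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

p_low  = p_correct_4pm(theta=-4.0, a=1.5, b=0.0, c=0.2, d=0.95)  # near c
p_mid  = p_correct_4pm(theta= 0.0, a=1.5, b=0.0, c=0.2, d=0.95)  # (c + d) / 2
p_high = p_correct_4pm(theta= 4.0, a=1.5, b=0.0, c=0.2, d=0.95)  # near d
```

Setting c = 0 and d = 1 recovers the 2PL, which is why the 4PM nests the simpler IRT models compared in the study.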

  15. Control system estimation and design for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Stefani, R. T.; Williams, T. L.; Yakowitz, S. J.

    1972-01-01

    The selection of an estimator which is unbiased when applied to structural parameter estimation is discussed. The mathematical relationships for structural parameter estimation are defined. It is shown that a conventional weighted least squares (CWLS) estimate is biased when applied to structural parameter estimation. Two approaches to bias removal are suggested: (1) change the CWLS estimator or (2) change the objective function. The advantages of each approach are analyzed.

  16. Fuzzy C-mean clustering on kinetic parameter estimation with generalized linear least square algorithm in SPECT

    NASA Astrophysics Data System (ADS)

    Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan

    2006-03-01

Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least square method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Mean (FCM) clustering and modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels and processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (KI) and volume of distribution (Vd) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (KI-k4) as well as macro parameters, such as volume of distribution (Vd) and binding potential (BPI & BPII) and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but improves noise in the parametric images. These findings indicated that it is desirable for pre-segmentation with traditional FCM clustering to generate voxel-wise parametric images with GLLS from dynamic SPECT data.
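
Logan graphical analysis, one of the estimators compared above, can be sketched on synthetic one-tissue compartment data, where the late-time slope of the Logan plot recovers Vd = K1/k2. The plasma input function and rate constants below are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

# One-tissue compartment model: dCt/dt = K1*Cp - k2*Ct, with Vd = K1/k2.
# Logan plot: int(Ct)/Ct versus int(Cp)/Ct becomes linear at late times
# with slope equal to Vd.
K1, k2 = 0.1, 0.05                       # per-minute rate constants, Vd = 2.0
t = np.linspace(0, 120, 2401)
dt = t[1] - t[0]
Cp = np.exp(-0.1 * t)                    # illustrative plasma input function
Ct = np.zeros_like(t)
for i in range(1, len(t)):               # Euler integration of the model
    Ct[i] = Ct[i - 1] + dt * (K1 * Cp[i - 1] - k2 * Ct[i - 1])

iCt = np.cumsum(Ct) * dt                 # running integrals
iCp = np.cumsum(Cp) * dt
late = t > 60                            # linear regime of the Logan plot
slope, intercept = np.polyfit(iCp[late] / Ct[late], iCt[late] / Ct[late], 1)
```

Because the slope is a ratio of integrated quantities, graphical methods like this are more noise-tolerant than direct voxel-wise fitting, which is why they serve as a useful reference for GLLS.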

  17. Rasch Model Parameter Estimation in the Presence of a Nonnormal Latent Trait Using a Nonparametric Bayesian Approach

    ERIC Educational Resources Information Center

    Finch, Holmes; Edwards, Julianne M.

    2016-01-01

    Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…

  18. A methodology for global-sensitivity analysis of time-dependent outputs in systems biology modelling.

    PubMed

    Sumner, T; Shephard, E; Bogle, I D L

    2012-09-07

    One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes, and a number of key features of the system are identified.
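    The variance-based global SA ingredient of such an approach can be illustrated with a pick-freeze Monte Carlo estimator of first-order Sobol indices. This is a textbook sketch for a scalar output with a toy model, not the authors' method, which additionally projects time-dependent outputs onto functional principal components:

```python
import numpy as np

def first_order_sobol(f, d, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for a function f of d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA = f(A)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]          # keep input i fixed, resample all the others
        S[i] = (np.mean(fA * f(ABi)) - fA.mean() ** 2) / var
    return S

# toy model whose output is dominated by the first input:
# analytically S1 = 0.9 and S2 = 0.1 for f = 3*x1 + x2
S = first_order_sobol(lambda X: 3.0 * X[:, 0] + X[:, 1], d=2)
```

    More accurate estimators exist, but the pick-freeze form shows the core idea: the covariance between runs that share only input i isolates that input's contribution to the output variance.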

  19. An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.

    2012-01-01

    A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.

  20. Investigation of Maternal Effects, Maternal-Fetal Interactions and Parent-of-Origin Effects (Imprinting), Using Mothers and Their Offspring

    PubMed Central

    Ainsworth, Holly F; Unwin, Jennifer; Jamison, Deborah L; Cordell, Heather J

    2011-01-01

    Many complex genetic effects, including epigenetic effects, may be expected to operate via mechanisms in the intrauterine environment. A popular design for the investigation of such effects, including effects of parent-of-origin (imprinting), maternal genotype, and maternal-fetal genotype interactions, is to collect DNA from affected offspring and their mothers (case/mother duos) and to compare with an appropriate control sample. An alternative design uses data from cases and both parents (case/parent trios) but does not require controls. In this study, we describe a novel implementation of a multinomial modeling approach that allows the estimation of such genetic effects using either case/mother duos or case/parent trios. We investigate the performance of our approach using computer simulations and explore the sample sizes and data structures required to provide high power for detection of effects and accurate estimation of the relative risks conferred. Through the incorporation of additional assumptions (such as Hardy-Weinberg equilibrium, random mating and known allele frequencies) and/or the incorporation of additional types of control sample (such as unrelated controls, controls and their mothers, or both parents of controls), we show that the (relative risk) parameters of interest are identifiable and well estimated. Nevertheless, parameter interpretation can be complex, as we illustrate by demonstrating the mathematical equivalence between various different parameterizations. Our approach scales up easily to allow the analysis of large-scale genome-wide association data, provided both mothers and affected offspring have been genotyped at all variants of interest. Genet. Epidemiol. 35:19–45, 2011. © 2010 Wiley-Liss, Inc. PMID:21181895

  1. Estimation of soil saturated hydraulic conductivity by artificial neural networks ensemble in smectitic soils

    NASA Astrophysics Data System (ADS)

    Sedaghat, A.; Bayat, H.; Safari Sinegani, A. A.

    2016-03-01

    The saturated hydraulic conductivity (K_s) of the soil is one of the main soil physical properties. Indirect estimation of this parameter using pedo-transfer functions (PTFs) has received considerable attention. The purpose of this study was to improve the estimation of K_s using fractal parameters of particle and micro-aggregate size distributions in smectitic soils. In this study, 260 disturbed and undisturbed soil samples were collected from Guilan province in the north of Iran. The fractal model of Bird and Perrier was used to compute the fractal parameters of particle and micro-aggregate size distributions. The PTFs were developed by an artificial neural networks (ANNs) ensemble to estimate K_s from available soil data and fractal parameters. Significant correlations were found between K_s and the fractal parameters of particles and micro-aggregates. Estimation of K_s was improved significantly by using fractal parameters of soil micro-aggregates as predictors, but using the geometric mean and geometric standard deviation of particle diameter did not improve K_s estimations significantly. Using fractal parameters of particles and micro-aggregates simultaneously had the greatest effect on the estimation of K_s. Generally, fractal parameters can be successfully used as input parameters to improve the estimation of K_s in PTFs in smectitic soils. As a result, the ANNs ensemble successfully correlated the fractal parameters of particles and micro-aggregates to K_s.

  2. Estimation of parameters related to vaccine efficacy and dengue transmission from two large phase III studies.

    PubMed

    Coudeville, Laurent; Baurin, Nicolas; Vergu, Elisabeta

    2016-12-07

    A tetravalent dengue vaccine was shown to be efficacious against symptomatic dengue in two phase III efficacy studies performed in five Asian and five Latin American countries. The objective here was to estimate key parameters of a dengue transmission model using the data collected during these studies. Parameter estimation was based on a Sequential Monte Carlo approach and used a cohort version of the transmission model. Serotype-specific basic reproduction numbers were derived for each country. Parameters related to serotype interactions included the duration of cross-protection and the level of cross-enhancement, characterized by differences in symptomaticity for primary, secondary and post-secondary infections. We tested several vaccine efficacy profiles and simulated the evolution of vaccine efficacy over time for the scenarios providing the best fit to the data. Two reference scenarios were identified. The first included temporary cross-protection; the second combined cross-protection and cross-enhancement upon wild-type infection and following vaccination. Both scenarios were associated with differences in efficacy by serotype, higher efficacy for pre-exposed subjects and against severe dengue, an increase in efficacy with doses for naïve subjects, and more pronounced waning of vaccine protection for naïve than for pre-exposed subjects. Over 20 years, the median reduction of dengue risk induced by the direct protection conferred by the vaccine ranged from 24% to 47% according to country for the first scenario and from 34% to 54% for the second. Our study is an important first step in deriving a general framework that combines disease dynamics and mechanisms of vaccine protection, which could be used to assess the impact of vaccination at a population level. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Reconstructing Mammalian Sleep Dynamics with Data Assimilation

    PubMed Central

    Sedigh-Sarvestani, Madineh; Schiff, Steven J.; Gluckman, Bruce J.

    2012-01-01

    Data assimilation is a valuable tool in the study of any complex system, where measurements are incomplete, uncertain, or both. It enables the user to take advantage of all available information including experimental measurements and short-term model forecasts of a system. Although data assimilation has been used to study other biological systems, the study of the sleep-wake regulatory network has yet to benefit from this toolset. We present a data assimilation framework based on the unscented Kalman filter (UKF) for combining sparse measurements together with a relatively high-dimensional nonlinear computational model to estimate the state of a model of the sleep-wake regulatory system. We demonstrate with simulation studies that a few noisy variables can be used to accurately reconstruct the remaining hidden variables. We introduce a metric for ranking relative partial observability of computational models, within the UKF framework, that allows us to choose the optimal variables for measurement and also provides a methodology for optimizing framework parameters such as UKF covariance inflation. In addition, we demonstrate a parameter estimation method that allows us to track non-stationary model parameters and accommodate slow dynamics not included in the UKF filter model. Finally, we show that we can even use observed discretized sleep-state, which is not one of the model variables, to reconstruct model state and estimate unknown parameters. Sleep is implicated in many neurological disorders from epilepsy to schizophrenia, but simultaneous observation of the many brain components that regulate this behavior is difficult. We anticipate that this data assimilation framework will enable better understanding of the detailed interactions governing sleep and wake behavior and provide for better, more targeted, therapies. PMID:23209396

  4. Adaptive Modal Identification for Flutter Suppression Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Drew, Michael; Swei, Sean S.

    2016-01-01

    In this paper, we will develop an adaptive modal identification method for identifying the frequencies and damping of a flutter mode based on model-reference adaptive control (MRAC) and least-squares methods. The least-squares parameter estimation will achieve parameter convergence in the presence of persistent excitation whereas the MRAC parameter estimation does not guarantee parameter convergence. Two adaptive flutter suppression control approaches are developed: one based on MRAC and the other based on the least-squares method. The MRAC flutter suppression control is designed as an integral part of the parameter estimation where the feedback signal is used to estimate the modal information. On the other hand, the separation principle of control and estimation is applied to the least-squares method. The least-squares modal identification is used to perform parameter estimation.

  5. Extremes in ecology: Avoiding the misleading effects of sampling variation in summary analyses

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1996-01-01

    Surveys such as the North American Breeding Bird Survey (BBS) produce large collections of parameter estimates. One's natural inclination when confronted with lists of parameter estimates is to look for the extreme values: in the BBS, these correspond to the species that appear to have the greatest changes in population size through time. Unfortunately, extreme estimates are liable to correspond to the most poorly estimated parameters. Consequently, the most extreme parameters may not match up with the most extreme parameter estimates. Ranking parameter values on the basis of their estimates is a difficult statistical problem. We use data from the BBS and simulations to illustrate the potentially misleading effects of sampling variation on rankings of parameters. We describe empirical Bayes and constrained empirical Bayes procedures which provide partial solutions to the problem of ranking in the presence of sampling variation.
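    The empirical Bayes correction described above can be illustrated with normal-normal shrinkage: each estimate is pulled toward the precision-weighted mean in proportion to its sampling variance, so the most poorly estimated trends shrink the hardest and stop dominating the extremes. All numbers are invented for illustration; the constrained variant is omitted:

```python
import numpy as np

# hypothetical trend estimates (e.g. % change per year) and their standard errors;
# the largest apparent changes also happen to be the noisiest
est = np.array([4.8, -3.9, 1.2, 0.4, -0.8, 2.1])
se = np.array([2.5, 2.0, 0.3, 0.4, 0.5, 0.6])

# normal-normal empirical Bayes: method-of-moments between-parameter variance,
# then shrink each estimate toward the precision-weighted mean
mu = np.average(est, weights=1.0 / se**2)
tau2 = max(0.0, est.var(ddof=1) - np.mean(se**2))
shrunk = mu + tau2 / (tau2 + se**2) * (est - mu)
```

    After shrinkage, the apparent +4.8 and -3.9 trends (standard errors 2.5 and 2.0) move much closer to the overall mean, while the precisely estimated trends barely change.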

  6. Anisotropic physical properties of myocardium characterized by ultrasonic measurements of backscatter, attenuation, and velocity

    NASA Astrophysics Data System (ADS)

    Baldwin, Steven L.

    The goal of elucidating the physical mechanisms underlying the propagation of ultrasonic waves in anisotropic soft tissue such as myocardium has posed an interesting and largely unsolved problem in the field of physics for the past 30 years. In part because of the vast complexity of the system being studied, progress towards understanding and modeling the mechanisms that underlie observed acoustic parameters may first require the guidance of careful experiment. Knowledge of the causes of observed ultrasonic properties in soft tissue including attenuation, speed of sound, and backscatter, and how those properties are altered with specific pathophysiologies, may lead to new noninvasive approaches to the diagnosis of disease. The primary aim of this Dissertation is to contribute to an understanding of the physics that underlies the mechanisms responsible for the observed interaction of ultrasound with myocardium. To this end, through-transmission and backscatter measurements were performed by varying acoustic properties as a function of angle of insonification relative to the predominant myofiber direction and by altering the material properties of myocardium by increased protein cross-linking induced by chemical fixation as an extreme form of changes that may occur in certain pathologies such as diabetes. Techniques to estimate acoustic parameters from backscatter were broadened and challenges to implementing these techniques in vivo were addressed. Provided that specific challenges identified in this Dissertation can be overcome, techniques to estimate attenuation from ultrasonic backscatter show promise as a means to investigate the physical interaction of ultrasound with anisotropic biological media in vivo. This Dissertation represents a step towards understanding the physics of the interaction of ultrasonic waves with anisotropic biological media.

  7. Observational Data Analysis and Numerical Model Assessment of the Seafloor Interaction and Mobility of Sand and Weathered Oil Agglomerates (Surface Residual Balls) in the Surf Zone

    NASA Astrophysics Data System (ADS)

    Dalyander, S.; Long, J.; Plant, N. G.; Penko, A.; Calantoni, J.; Thompson, D.; Mclaughlin, M. K.

    2014-12-01

    When weathered oil is transported ashore, such as during the Deepwater Horizon oil spill, it can mix with suspended sediment in the surf zone to create heavier-than-water sand and oil agglomerates in the form of mats several centimeters thick and tens of meters long. Broken-off pieces of these mats and smaller agglomerates formed in situ (called Surface Residual Balls, SRBs) can cause beach re-oiling months to years after the initial spill. The physical dynamics of these SRBs in the nearshore, where they are larger (cm-scale) and less dense than natural sediment, are poorly understood. In the current study, SRB mobility and seafloor interaction are investigated through a combination of laboratory and field experiments with pseudo-SRBs developed to be physically stable proxies for genuine agglomerates. Formulations for mobility prediction based on comparing estimated shear stress to the critical Shields and modified Shields parameters developed for mixed sediment beds are assessed against observations. Processes such as burial, exhumation, and interaction with bedforms (e.g., migrating ripples) are also explored. The observations suggest that incipient motion estimates based on a modified Shields parameter have some skill in predicting SRB movement, but that other forcing mechanisms such as pressure gradients may be important under some conditions. Additionally, burial and exhumation due to the relatively high mobility of sand grains are confirmed as key processes controlling SRB dynamics in the surf zone. This work has broad implications for understanding surf zone sediment transport at the short timescales associated with mobilizing sand grains and SRBs as well as at the longer timescales associated with net transport patterns, sediment budgets, and bed elevation changes.
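    Incipient-motion criteria of the kind assessed above compare the dimensionless Shields parameter θ = τ / ((ρ_s − ρ) g d) against a critical value. A minimal sketch with illustrative numbers (not values from the study) shows why a cm-scale, low-density SRB sits far below the mobility threshold that a fine sand grain exceeds under the same bed shear stress:

```python
# Shields parameter θ = τ / ((ρ_s − ρ) g d): dimensionless bed shear stress.
# All numbers are illustrative: seawater density, a fine-sand grain, and a
# cm-scale sand/oil agglomerate only slightly denser than water.
rho, g = 1025.0, 9.81                      # fluid density (kg/m^3), gravity (m/s^2)

def shields(tau, rho_s, d):
    """tau: bed shear stress (Pa); rho_s: particle density (kg/m^3); d: diameter (m)."""
    return tau / ((rho_s - rho) * g * d)

theta_sand = shields(tau=0.5, rho_s=2650.0, d=0.3e-3)   # ~0.10, above typical critical values
theta_srb = shields(tau=0.5, rho_s=1300.0, d=0.02)      # ~0.009, well below them
```

    The large diameter in the denominator outweighs the small density excess, which is consistent with the study's finding that a modified Shields criterion has some, but not complete, skill for SRBs.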

  8. Estimation of parameters in rational reaction rates of molecular biological systems via weighted least squares

    NASA Astrophysics Data System (ADS)

    Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke

    2010-01-01

    The models of gene regulatory networks are often derived from the statistical thermodynamics principle or the Michaelis-Menten kinetics equation. As a result, the models contain rational reaction rates which are nonlinear in both parameters and states. It is challenging to estimate parameters that enter a model nonlinearly, even though there are many traditional nonlinear parameter estimation methods such as the Gauss-Newton iteration method and its variants. In this article, we develop a two-step method to estimate the parameters in rational reaction rates of gene regulatory networks via weighted linear least squares. This method takes the special structure of rational reaction rates into consideration: in a rational reaction rate, the numerator and the denominator are linear in the parameters. By designing a special weight matrix for the linear least squares, parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it produces analytical solutions to the estimation of parameters in rational reaction rates, which is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks. The simulation results show superior performance over the Gauss-Newton method.
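    The linearisation at the heart of such a two-step method can be sketched as follows: multiplying a rational rate v = (a0 + a1·x)/(1 + b1·x) through by its denominator gives v = a0 + a1·x − b1·(x·v), which is linear in all three parameters, so one least-squares solve recovers them. The rate form and parameter values are invented, and the paper's weighting step (which corrects for the transformation under noise) is omitted:

```python
import numpy as np

# hypothetical rational reaction rate v = (a0 + a1*x) / (1 + b1*x)
a0, a1, b1 = 0.5, 2.0, 0.8
rng = np.random.default_rng(1)
x = rng.uniform(0.1, 5.0, 200)
v = (a0 + a1 * x) / (1.0 + b1 * x)

# multiplying through by the denominator gives v = a0 + a1*x - b1*(x*v),
# which is linear in (a0, a1, b1)
A = np.column_stack([np.ones_like(x), x, -x * v])
theta, *_ = np.linalg.lstsq(A, v, rcond=None)
```

    With noise-free data the linear solve is exact; with noisy data the regressor x·v becomes error-contaminated, which is why the authors add a designed weight matrix.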

  9. Estimates of genetic parameters and environmental effects for measures of hunting performance in Finnish hounds.

    PubMed

    Liinamo, A E; Karjalainen, L; Ojala, M; Vilva, V

    1997-03-01

    Data from field trials of Finnish Hounds between 1988 and 1992 in Finland were used to estimate genetic parameters and environmental effects for measures of hunting performance using REML procedures and an animal model. The original data set included 28,791 field trial records from 5,666 dogs. Males and females had equal hunting performance, whereas experience acquired by age improved trial results compared with results for young dogs (P < .001). Results were mostly better on snow than on bare ground (P < .001), and testing areas, years, months, and their interactions affected results (P < .001). Estimates of heritabilities and repeatabilities were low for most of the 28 measures, mainly due to large residual variances. The highest heritabilities were for frequency of tonguing (h2 = .15), pursuit score (h2 = .13), tongue score (h2 = .13), ghost trailing score (h2 = .12), and merit and final score (both h2 = .11). Estimates of phenotypic and genetic correlations were positive and moderate or high for search scores, pursuit scores, and final scores but lower for other studied measures. The results suggest that, due to low heritabilities, evaluation of breeding values for Finnish Hounds with respect to their hunting ability should be based on animal model BLUP methods instead of mere performance testing. The evaluation system of field trials should also be revised for more reliability.

  10. Spatial dynamics of the 1918 influenza pandemic in England, Wales and the United States.

    PubMed

    Eggo, Rosalind M; Cauchemez, Simon; Ferguson, Neil M

    2011-02-06

    There is still limited understanding of key determinants of spatial spread of influenza. The 1918 pandemic provides an opportunity to elucidate spatial determinants of spread on a large scale. To better characterize the spread of the 1918 major wave, we fitted a range of city-to-city transmission models to mortality data collected for 246 population centres in England and Wales and 47 cities in the US. Using a gravity model for city-to-city contacts, we explored the effect of population size and distance on the spread of disease and tested assumptions regarding density dependence in connectivity between cities. We employed Bayesian Markov Chain Monte Carlo methods to estimate parameters of the model for population, infectivity, distance and density dependence. We inferred the most likely transmission trees for both countries. For England and Wales, a model that estimated the degree of density dependence in connectivity between cities was preferable by deviance information criterion comparison. Early in the major wave, long distance infective interactions predominated, with local infection events more likely as the epidemic became widespread. For the US, with fewer more widely dispersed cities, statistical power was lacking to estimate population size dependence or the degree of density dependence, with the preferred model depending on distance only. We find that parameters estimated from the England and Wales dataset can be applied to the US data with no likelihood penalty.
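    A gravity model of the kind fitted above makes city-to-city connectivity grow with the populations of both cities and decay with distance, c_ij ∝ P_i^a · P_j^b / d_ij^γ. A minimal sketch with invented populations, coordinates and exponents (the study estimates these parameters by MCMC, which is omitted here):

```python
import numpy as np

# gravity-model connectivity: c_ij ∝ P_i^a * P_j^b / d_ij^g (illustrative values)
pop = np.array([8.0e6, 1.0e6, 2.5e5, 5.0e4])            # city populations
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 2.0], [3.0, 1.0]])
a, b, g = 0.9, 0.9, 2.0

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
with np.errstate(divide="ignore"):
    C = pop[:, None] ** a * pop[None, :] ** b / d ** g
np.fill_diagonal(C, 0.0)                                # no self-coupling
```

    With a = b the matrix is symmetric; distinct origin and destination exponents, as in the paper's density-dependence tests, would break that symmetry.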

  11. Spatial dynamics of the 1918 influenza pandemic in England, Wales and the United States

    PubMed Central

    Eggo, Rosalind M.; Cauchemez, Simon; Ferguson, Neil M.

    2011-01-01

    There is still limited understanding of key determinants of spatial spread of influenza. The 1918 pandemic provides an opportunity to elucidate spatial determinants of spread on a large scale. To better characterize the spread of the 1918 major wave, we fitted a range of city-to-city transmission models to mortality data collected for 246 population centres in England and Wales and 47 cities in the US. Using a gravity model for city-to-city contacts, we explored the effect of population size and distance on the spread of disease and tested assumptions regarding density dependence in connectivity between cities. We employed Bayesian Markov Chain Monte Carlo methods to estimate parameters of the model for population, infectivity, distance and density dependence. We inferred the most likely transmission trees for both countries. For England and Wales, a model that estimated the degree of density dependence in connectivity between cities was preferable by deviance information criterion comparison. Early in the major wave, long distance infective interactions predominated, with local infection events more likely as the epidemic became widespread. For the US, with fewer more widely dispersed cities, statistical power was lacking to estimate population size dependence or the degree of density dependence, with the preferred model depending on distance only. We find that parameters estimated from the England and Wales dataset can be applied to the US data with no likelihood penalty. PMID:20573630

  12. Modelling algae-duckweed interaction under chemical pressure within a laboratory microcosm.

    PubMed

    Lamonica, Dominique; Clément, Bernard; Charles, Sandrine; Lopes, Christelle

    2016-06-01

    Contaminant effects on species are generally assessed with single-species bioassays. As a consequence, interactions between species that occur in ecosystems are not taken into account. To investigate the effects of contaminants on the dynamics of interacting species, our study describes the functioning of a 2-L laboratory microcosm with two species, the duckweed Lemna minor and the microalga Pseudokirchneriella subcapitata, exposed to cadmium contamination. We modelled the dynamics of both species and their interactions using a mechanistic model based on coupled ordinary differential equations. The main processes occurring in this two-species microcosm were thus formalised, including growth and settling of algae, growth of duckweeds, interspecific competition between the two species and cadmium effects. We estimated model parameters by Bayesian inference, using simultaneously all the data issued from multiple laboratory experiments specifically conducted for this study. Cadmium concentrations ranged between 0 and 50 μg·L^-1. For all parameters of our model, we obtained biologically realistic values and reasonable uncertainties. Only duckweed dynamics was affected by interspecific competition, while algal dynamics was not impaired. The growth rates of both species decreased with cadmium concentration, as did competition intensity, showing that the interspecific competition pressure on duckweed decreased with cadmium concentration. This innovative combination of mechanistic modelling and model-guided experiments was successful in understanding the functioning of the algae-duckweed microcosm with and without contaminant. This approach appears promising for including interactions between species when studying contaminant effects on ecosystem functioning. Copyright © 2016 Elsevier Inc. All rights reserved.
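    The coupled-ODE structure of such a microcosm model can be sketched with a minimal two-species logistic system in which algae (N1) exert one-way competitive pressure on duckweed (N2). All parameter values are invented, and the cadmium dose-response terms and settling process of the actual model are omitted:

```python
def simulate(c12, days=30.0, dt=0.01):
    """Euler integration of a minimal algae (N1) / duckweed (N2) logistic model;
    c12 is the competitive pressure of algae on duckweed (invented units)."""
    r1, r2, K1, K2 = 0.6, 0.3, 1.0e6, 5.0e3        # growth rates, carrying capacities
    N1, N2 = 1.0e4, 100.0                          # initial abundances
    for _ in range(int(days / dt)):
        dN1 = r1 * N1 * (1.0 - N1 / K1)
        dN2 = r2 * N2 * (1.0 - (N2 + c12 * N1) / K2)
        N1 += dN1 * dt
        N2 += dN2 * dt
    return N1, N2

# algal abundance is unaffected, but duckweed ends lower under competition,
# mirroring the paper's finding that only duckweed dynamics was impaired
no_comp = simulate(c12=0.0)
with_comp = simulate(c12=0.001)
```

    In the study, the analogue of c12 (along with the growth rates) is a decreasing function of cadmium concentration, which is what makes the competitive pressure on duckweed weaken at higher doses.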

  13. Lattice-dynamical model for the filled skutterudite LaFe4Sb12: Harmonic and anharmonic couplings

    NASA Astrophysics Data System (ADS)

    Feldman, J. L.; Singh, D. J.; Bernstein, N.

    2014-06-01

    The filled skutterudite LaFe4Sb12 shows greatly reduced thermal conductivity compared to that of the related unfilled compound CoSb3, although the microscopic reasons for this are unclear. We calculate harmonic and anharmonic force constants for the interaction of the La filler atom with the framework atoms. We find that force constants show a general trend of decaying rapidly with distance and are very small for the interaction of the La with its next-nearest-neighbor Sb and nearest-neighbor La. However, a few rather long-range interactions, such as with the next-nearest-neighbor La and with the third neighbor Sb, are surprisingly strong, although still small. We test the central-force approximation and find significant deviations from it. Using our force constants we calculate a bare La mode Grüneisen parameter and find a value of 3-4, substantially higher than values associated with cage atom anharmonicity, i.e., a value of about 1 for CoSb3 but much smaller than a previous estimate [Bernstein et al., Phys. Rev. B 81, 134301 (2010), 10.1103/PhysRevB.81.134301]. This latter difference is primarily due to the previously used overestimate of the La-Fe cubic force constants. We also find a substantial negative contribution to this bare La Grüneisen parameter from the aforementioned third-neighbor La-Sb interaction. Our results underscore the need for rather long-range interactions in describing the role of anharmonicity on the dynamics in this material.

  14. A new Bayesian recursive technique for parameter estimation

    NASA Astrophysics Data System (ADS)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper to two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
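    The bound-narrowing idea can be sketched generically: sample uniformly within the current bounds, keep the fittest "parent" sets, and shrink the bounds to enclose them. This is an illustration of the principle on a toy quadratic loss only, not the authors' algorithm, which embeds Bayesian inference in each iteration:

```python
import numpy as np

def narrow_bounds_search(loss, bounds, n_samples=200, keep_frac=0.2, n_iter=10, seed=0):
    """Iteratively shrink parameter bounds around the fittest sampled sets."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    for _ in range(n_iter):
        samples = rng.uniform(lo, hi, size=(n_samples, lo.size))
        scores = np.array([loss(s) for s in samples])
        elite = samples[np.argsort(scores)[: int(keep_frac * n_samples)]]
        lo, hi = elite.min(axis=0), elite.max(axis=0)   # new "parent" bounds
    return (lo + hi) / 2.0

# demo: recover the minimiser of a simple quadratic loss
best = narrow_bounds_search(lambda p: np.sum((p - np.array([1.0, -2.0])) ** 2),
                            bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

    Each pass roughly halves the search range here, which is the convergence acceleration the abstract attributes to updating the parent bounds by fitness.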

  15. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    ERIC Educational Resources Information Center

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  16. A Comparative Study of Distribution System Parameter Estimation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
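    State-vector augmentation can be illustrated with a scalar toy system: an unknown constant input b is appended to the state vector, and an ordinary Kalman filter then estimates it alongside the state from the measurements. This generic sketch is unrelated to the feeder model in the paper; all values are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
true_b = 0.5                                       # unknown constant (the "parameter")
F = np.array([[0.9, 1.0], [0.0, 1.0]])             # x_{k+1} = 0.9*x_k + b; b constant
H = np.array([[1.0, 0.0]])                         # only x is measured
Q = np.diag([0.05**2, 1e-8])                       # tiny process noise keeps b adjustable
R = np.array([[0.1**2]])

z = np.zeros(2)                                    # augmented estimate [x, b]
P = np.eye(2)
x_true = 0.0
for _ in range(500):
    x_true = 0.9 * x_true + true_b + rng.normal(0, 0.05)
    y = x_true + rng.normal(0, 0.1)
    z = F @ z                                      # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                            # update
    K = P @ H.T @ np.linalg.inv(S)
    z = z + K @ (y - H @ z)
    P = (np.eye(2) - K @ H) @ P
```

    The augmented pair (F, H) is observable, so the filter pins down b from the measured state trajectory; combining snapshots over time plays the analogous redundancy-boosting role in the paper.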

  17. Decoding tactile afferent activity to obtain an estimate of instantaneous force and torque applied to the fingerpad

    PubMed Central

    Birznieks, Ingvars; Redmond, Stephen J.

    2015-01-01

    Dexterous manipulation is not possible without sensory information about object properties and manipulative forces. Fundamental neuroscience has been unable to demonstrate how information about multiple stimulus parameters may be continuously extracted, concurrently, from a population of tactile afferents. This is the first study to demonstrate this, using spike trains recorded from tactile afferents innervating the monkey fingerpad. A multiple-regression model, requiring no a priori knowledge of stimulus-onset times or stimulus combination, was developed to obtain continuous estimates of instantaneous force and torque. The stimuli consisted of a normal-force ramp (to a plateau of 1.8, 2.2, or 2.5 N), on top of which −3.5, −2.0, 0, +2.0, or +3.5 mNm torque was applied about the normal to the skin surface. The model inputs were sliding windows of binned spike counts recorded from each afferent. Models were trained and tested by 15-fold cross-validation to estimate instantaneous normal force and torque over the entire stimulation period. With the use of the spike trains from 58 slow-adapting type I and 25 fast-adapting type I afferents, the instantaneous normal force and torque could be estimated with small error. This study demonstrated that instantaneous force and torque parameters could be reliably extracted from a small number of tactile afferent responses in a real-time fashion with stimulus combinations that the model had not been exposed to during training. Analysis of the model weights may reveal how interactions between stimulus parameters could be disentangled for complex population responses and could be used to test neurophysiologically relevant hypotheses about encoding mechanisms. PMID:25948866
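    The decoding scheme, sliding windows of binned spike counts feeding a multiple-regression model, can be sketched with simulated data: Poisson counts whose rates scale with a hypothetical force signal. None of these numbers come from the study, and torque decoding, cross-validation and the afferent classes are omitted:

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_units, win = 600, 8, 5
force = 1.5 + np.sin(np.linspace(0, 6 * np.pi, T))    # slowly varying "normal force"
gains = rng.uniform(2.0, 6.0, n_units)                # firing gain per afferent
counts = rng.poisson(gains * force[:, None])          # binned counts, one column per unit

# regressors: the last `win` bins of counts from every unit, plus an intercept
rows = np.array([counts[t - win:t].ravel() for t in range(win, T)])
X = np.column_stack([np.ones(T - win), rows])
y = force[win:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w                                          # in-sample instantaneous estimate
```

    The fit here is in-sample; the study's stricter test is 15-fold cross-validation on stimulus combinations the model never saw during training.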

  18. Shallow aquifer storage and recovery (SASR): Initial findings from the Willamette Basin, Oregon

    NASA Astrophysics Data System (ADS)

    Neumann, P.; Haggerty, R.

    2012-12-01

    A novel mode of shallow aquifer management could increase the volumetric potential and distribution of groundwater storage. We refer to this mode as shallow aquifer storage and recovery (SASR) and gauge its potential as a freshwater storage tool. By this mode, water is stored in hydraulically connected aquifers with minimal impact to surface water resources. Basin-scale numerical modeling provides a linkage between storage efficiency and hydrogeological parameters, which in turn guides rulemaking for how and where water can be stored. Increased understanding of regional groundwater-surface water interactions is vital to effective SASR implementation. In this study we (1) use a calibrated model of the central Willamette Basin (CWB), Oregon to quantify SASR storage efficiency at 30 locations; (2) estimate SASR volumetric storage potential throughout the CWB based on these results and pertinent hydrogeological parameters; and (3) introduce a methodology for management of SASR by such parameters. Of 3 shallow, sedimentary aquifers in the CWB, we find the moderately conductive, semi-confined, middle sedimentary unit (MSU) to be most efficient for SASR. We estimate that users overlying 80% of the area in this aquifer could store injected water with greater than 80% efficiency, and find efficiencies of up to 95%. As a function of local production well yields, we estimate a maximum annual volumetric storage potential of 30 million m3 using SASR in the MSU. This volume constitutes roughly 9% of the current estimated summer pumpage in the Willamette basin at large. The dimensionless quantity lag #—calculated using modeled specific capacity, distance to nearest in-layer stream boundary, and injection duration—exhibits relatively high correlation to SASR storage efficiency at potential locations in the CWB. This correlation suggests that basic field measurements could guide SASR as an efficient shallow aquifer storage tool.

  19. Estimating the Effect of Competition on Trait Evolution Using Maximum Likelihood Inference.

    PubMed

    Drury, Jonathan; Clavel, Julien; Manceau, Marc; Morlon, Hélène

    2016-07-01

    Many classical ecological and evolutionary theoretical frameworks posit that competition between species is an important selective force. For example, in adaptive radiations, resource competition between evolving lineages plays a role in driving phenotypic diversification and exploration of novel ecological space. Nevertheless, current models of trait evolution fit to phylogenies and comparative data sets are not designed to incorporate the effect of competition. The most advanced models in this direction are diversity-dependent models where evolutionary rates depend on lineage diversity. However, these models still treat changes in traits in one branch as independent of the value of traits on other branches, thus ignoring the effect of species similarity on trait evolution. Here, we consider a model where the evolutionary dynamics of traits involved in interspecific interactions are influenced by species similarity in trait values and where we can specify which lineages are in sympatry. We develop a maximum likelihood based approach to fit this model to combined phylogenetic and phenotypic data. Using simulations, we demonstrate that the approach accurately estimates the simulated parameter values across a broad range of parameter space. Additionally, we develop tools for specifying the biogeographic context in which trait evolution occurs. In order to compare models, we also apply these biogeographic methods to specify which lineages interact sympatrically for two diversity-dependent models. Finally, we fit these various models to morphological data from a classical adaptive radiation (Greater Antillean Anolis lizards). We show that models that account for competition and geography perform better than other models. The matching competition model is an important new tool for studying the influence of interspecific interactions, in particular competition, on phenotypic evolution. 
More generally, it constitutes a step toward a better integration of interspecific interactions in many ecological and evolutionary processes.

  20. A validation study of a stochastic model of human interaction

    NASA Astrophysics Data System (ADS)

    Burchfield, Mitchel Talmadge

    The purpose of this dissertation is to validate a stochastic model of human interactions which is part of a developmentalism paradigm. Incorporating elements of ancient and contemporary philosophy and science, developmentalism defines human development as a progression of increasing competence and utilizes compatible theories of developmental psychology, cognitive psychology, educational psychology, social psychology, curriculum development, neurology, psychophysics, and physics. To validate a stochastic model of human interactions, the study addressed four research questions: (a) Does attitude vary over time? (b) What are the distributional assumptions underlying attitudes? (c) Does the stochastic model, $N\int_{-\infty}^{\infty}\varphi(\chi,\tau)\,\Psi(\tau)\,d\tau$, have utility for the study of attitudinal distributions and dynamics? (d) Are the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein theories applicable to human groups? Approximately 25,000 attitude observations were made using the Semantic Differential Scale. Positions of individuals varied over time and the logistic model predicted observed distributions with correlations between 0.98 and 1.0, with estimated standard errors significantly less than the magnitudes of the parameters. The results bring into question the applicability of Fisherian research designs (Fisher, 1922, 1928, 1938) for behavioral research, based on the apparent failure of two fundamental assumptions: the noninteractive nature of the objects being studied and the normal distribution of attributes. The findings indicate that individual belief structures are representable in terms of a psychological space which has the same or similar properties as physical space. The psychological space not only has dimension, but individuals interact by force equations similar to those described in theoretical physics models. Nonlinear regression techniques were used to estimate Fermi-Dirac parameters from the data.
The model explained a high degree of the variance in each probability distribution. The correlation between predicted and observed probabilities ranged from a low of 0.955 to a high value of 0.998, indicating that humans behave in psychological space as Fermions behave in momentum space.

  1. Moment Magnitude discussion in Austria

    NASA Astrophysics Data System (ADS)

    Weginger, Stefan; Jia, Yan; Hausmann, Helmut; Lenhardt, Wolfgang

    2017-04-01

    We implemented and tested the Moment Magnitude estimation module "dbmw" from the University of Trieste in our Antelope near-real-time system. It is used to obtain fast Moment Magnitude solutions and ground-motion parameters (PGA, PGV, PSA 0.3, PSA 1.0, and PSA 3.0) for calculating shake maps and interactive maps. A Moment Magnitude catalogue was generated and compared with the Austrian Earthquake Catalogue and all available magnitude solutions from the neighbouring agencies. Relations of Mw to Ml and of ground motion to intensity are presented.

  2. Meteorological limits on the growth and development of screwworm populations

    NASA Technical Reports Server (NTRS)

    Phinney, D. E.; Arp, G. K.

    1978-01-01

    A program to evaluate the use of remotely sensed data as an additional tool in existing and projected efforts to eradicate the screwworm began in 1973. Estimating weather conditions by use of remotely sensed data was part of the study. Next, the effect of weather on screwworm populations was modeled. A significant portion of the variation in screwworm population growth and development has been traced to weather-related parameters. This report deals with the salient points of the weather and the screwworm population interaction.

  3. Modeling, Analysis, and Control of Swarming Agents in a Probabilistic Framework

    DTIC Science & Technology

    2012-11-01

    configurations, which can ultimately lead the swarm towards configurations close to the global minimum of the total potential of interactions. The drawback ...165–171, 1992. [6] H. Ye, H. Wang, and H. Wang, “Stabilization of a PVTOL aircraft and an inertia wheel pendulum using saturation technique,” IEEE...estimate its parameters. The drawback of this approach is that the assumed form of the field can be unrealistic. In the approach that we are presenting here

  4. Fuzzy variable impedance control based on stiffness identification for human-robot cooperation

    NASA Astrophysics Data System (ADS)

    Mao, Dachao; Yang, Wenlong; Du, Zhijiang

    2017-06-01

    This paper presents a dynamic fuzzy variable impedance control algorithm for human-robot cooperation. In order to estimate the intention of human for co-manipulation, a fuzzy inference system is set up to adjust the impedance parameter. Aiming at regulating the output fuzzy universe based on the human arm’s stiffness, an online stiffness identification method is developed. A drag interaction task is conducted on a 5-DOF robot with variable impedance control. Experimental results demonstrate that the proposed algorithm is superior.

  5. On real statistics of relaxation in gases

    NASA Astrophysics Data System (ADS)

    Kuzovlev, Yu. E.

    2016-02-01

    Using the example of a particle interacting with an ideal gas, it is shown that the statistics of collisions in statistical mechanics, at any value of the gas rarefaction parameter, differ qualitatively from those associated with Boltzmann's hypothetical molecular chaos and kinetic equation. In reality, the particle's collision probability is itself random. Because of that, the relaxation of the particle's velocity acquires a power-law asymptotic behavior. An estimate of its exponent is suggested on the basis of simple kinematic reasoning.

  6. Point-Process Models of Social Network Interactions: Parameter Estimation and Missing Data Recovery

    DTIC Science & Technology

    2014-08-01

    treating them as zero will have a de minimis impact on the results, but avoiding computing them (and computing with them) saves tremendous time. Set a... test the methods on simulated time series on artificial social networks, including some toy networks and some meant to resemble IkeNet. We conclude...the section by discussing the results in detail. In each of our tests we begin with a complete data set, whether it is real (IkeNet) or simulated. Then

  7. Variation in cassava germplasm for tolerance to post-harvest physiological deterioration.

    PubMed

    Venturini, M T; Santos, L R; Vildoso, C I A; Santos, V S; Oliveira, E J

    2016-05-06

    Tolerant varieties can effectively control post-harvest physiological deterioration (PPD) of cassava, although knowledge on the genetic variability and inheritance of this trait is needed. The objective of this study was to estimate genetic parameters and identify sources of tolerance to PPD and their stability in cassava accessions. Roots from 418 cassava accessions, grown in four independent experiments, were evaluated for PPD tolerance 0, 2, 5, and 10 days post-harvest. Data were transformed into area under the PPD-progress curve (AUP-PPD) to quantify tolerance. Genetic parameters, stability (Si), adaptability (Ai), and the joint analysis of stability and adaptability (Zi) were obtained via residual maximum likelihood (REML) and best linear unbiased prediction (BLUP) methods. Variance in the genotype (G) x environment (E) interaction and genotypic variance were important for PPD tolerance. Individual broad-sense heritability (hg(2) = 0.38 ± 0.04) and average heritability in accessions (hmg(2) = 0.52) showed high genetic control of PPD tolerance. Genotypic correlation of AUP-PPD in different experiments was of medium magnitude (ȓgA = 0.42), indicating significant G x E interaction. The predicted genotypic values free of G x E interaction (û + ĝi) showed high variation. Of the 30 accessions with high Zi, 19 were common to the û + ĝi, Si, and Ai parameters. The genetic gain with selection of these 19 cassava accessions was -55.94, -466.86, -397.72, and -444.03% for û + ĝi, Si, Ai, and Zi, respectively, compared with the overall mean for each parameter. These results demonstrate the variability and potential of cassava germplasm to introduce PPD tolerance in commercial varieties.
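    The AUP-PPD transformation is a standard area-under-the-curve summary: PPD scores at 0, 2, 5, and 10 days post-harvest are integrated over time with the trapezoidal rule, so slower deterioration yields a smaller area. The two score vectors below are hypothetical, chosen only to illustrate the computation.

```python
import numpy as np

# hypothetical PPD scores (e.g., % deterioration) for two accessions,
# scored 0, 2, 5, and 10 days post-harvest as in the study design
days = np.array([0, 2, 5, 10])
ppd_tolerant = np.array([0, 5, 12, 20])
ppd_susceptible = np.array([0, 25, 60, 90])

def aup_ppd(scores, t=days):
    """Area under the PPD-progress curve via the trapezoidal rule."""
    return float(np.sum((scores[1:] + scores[:-1]) / 2 * np.diff(t)))

print(aup_ppd(ppd_tolerant), aup_ppd(ppd_susceptible))
```

    A tolerant accession accumulates a much smaller area than a susceptible one, which is what makes AUP-PPD a convenient single-number phenotype for the REML/BLUP analysis.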

  8. Seabed roughness parameters from joint backscatter and reflection inversion at the Malta Plateau.

    PubMed

    Steininger, Gavin; Holland, Charles W; Dosso, Stan E; Dettmer, Jan

    2013-09-01

    This paper presents estimates of seabed roughness and geoacoustic parameters and uncertainties on the Malta Plateau, Mediterranean Sea, by joint Bayesian inversion of mono-static backscatter and spherical wave reflection-coefficient data. The data are modeled using homogeneous fluid sediment layers overlying an elastic basement. The scattering model assumes a randomly rough water-sediment interface with a von Karman roughness power spectrum. Scattering and reflection data are inverted simultaneously using a population of interacting Markov chains to sample roughness and geoacoustic parameters as well as residual error parameters. Trans-dimensional sampling is applied to treat the number of sediment layers and the order (zeroth or first) of an autoregressive error model (to represent potential residual correlation) as unknowns. Results are considered in terms of marginal posterior probability profiles and distributions, which quantify the effective data information content to resolve scattering/geoacoustic structure. Results indicate well-defined scattering (roughness) parameters in good agreement with existing measurements, and a multi-layer sediment profile over a high-speed (elastic) basement, consistent with independent knowledge of sand layers over limestone.

  9. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models

    PubMed Central

    2011-01-01

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
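    An orthogonal-based local identifiability check of the kind integrated here works on the sensitivity matrix of the model outputs with respect to the parameters: columns are selected greedily by norm and each chosen direction is projected out, so parameters whose residual sensitivity collapses are flagged as non-identifiable. A minimal sketch on a toy model (not the Rohwer et al. sucrose model) in which two parameters enter only as a sum:

```python
import numpy as np

t = np.linspace(0, 5, 50)
p0 = np.array([2.0, 1.3, 0.5, 0.4])     # nominal parameter values

def model(p):
    # toy kinetic output: p[2] and p[3] enter only as a sum,
    # so at most one of them can be identified
    return p[0] * np.exp(-p[1] * t) + p[2] + p[3]

# finite-difference sensitivity matrix S[i, j] = d y(t_i) / d p_j
eps = 1e-6
y0 = model(p0)
S = np.column_stack([
    (model(p0 + eps * np.eye(4)[j]) - y0) / eps for j in range(4)
])

# orthogonal identifiability ranking (Gram-Schmidt with column pivoting)
identifiable, R, tol = [], S.copy(), 1e-6
for _ in range(4):
    norms = np.linalg.norm(R, axis=0)
    j = int(np.argmax(norms))
    if norms[j] < tol:
        break                            # remaining parameters not identifiable
    identifiable.append(j)
    q = R[:, j] / norms[j]
    R = R - np.outer(q, q @ R)           # project out the chosen direction

print(sorted(identifiable))
```

    Three of the four parameters survive the ranking; the fourth column becomes numerically zero once its partner is removed, mirroring how the study found 10 of 12 parameters identifiable.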

  10. Electron and muon parameters of EAS and the composition of primary cosmic rays in 10^15 to approximately 10^16 eV

    NASA Technical Reports Server (NTRS)

    Cheung, T.; Mackeown, P. K.

    1985-01-01

    The relative intensities of protons and heavy nuclei in primary cosmic rays in the energy region 10^15 to approximately 10^17 eV were estimated by a systematic comparison between all available observed data on various parameters of extensive air showers (EAS) and the results of simulation. The interaction model used is an extrapolation of the scaling violation indicated by recent pp collider results. A composition consisting of various percentages of Fe in an otherwise pure proton beam was assumed. The greatest overall consistency between the data and the simulation is found when the Fe fraction is in the region of 25%.

  11. Wiener-Hammerstein system identification - an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Naitali, Abdessamad; Giri, Fouad

    2016-01-01

    The problem of identifying parametric Wiener-Hammerstein (WH) systems is addressed within the evolutionary optimisation context. Specifically, a hybrid culture identification method is developed that involves model structure adaptation using genetic recombination and model parameter learning using particle swarm optimisation. The method enjoys three interesting features: (1) the risk of premature convergence of model parameter estimates to local optima is significantly reduced, due to the constantly maintained diversity of model candidates; (2) no prior knowledge is needed except for upper bounds on the system structure indices; (3) the method is fully autonomous as no interaction is needed with the user during the optimum search process. The performances of the proposed method will be illustrated and compared to alternative methods using a well-established WH benchmark.
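    The particle-swarm stage of such a hybrid method can be sketched in a few lines. The static nonlinearity, swarm size, and coefficients below are illustrative assumptions rather than the WH benchmark system.

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.uniform(-2, 2, 200)
theta_true = np.array([1.5, 0.8])
y = theta_true[0] * np.tanh(theta_true[1] * u)    # noiseless toy nonlinearity

def cost(theta):
    return np.sum((y - theta[0] * np.tanh(theta[1] * u)) ** 2)

# minimal particle swarm: inertia + cognitive + social terms
n_particles, n_iter = 30, 200
pos = rng.uniform(0, 3, (n_particles, 2))
vel = np.zeros((n_particles, 2))
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print(np.round(gbest, 2))
```

    The maintained population of candidates is what reduces the risk of premature convergence to local optima; the hybrid method of the paper additionally recombines model structures genetically, which is not shown here.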

  12. Structural Equation Model Trees

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

    2015-01-01

    In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree structures that separate a data set recursively into subsets with significantly different parameter estimates in a SEM. SEM Trees provide means for finding covariates and covariate interactions that predict differences in structural parameters in observed as well as in latent space and facilitate theory-guided exploration of empirical data. We describe the methodology, discuss theoretical and practical implications, and demonstrate applications to a factor model and a linear growth curve model. PMID:22984789

  13. Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior

    NASA Technical Reports Server (NTRS)

    Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.

    2017-01-01

    A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
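    A minimal dual-filter sketch for a scalar system: a state filter and a parameter filter run concurrently, with the parameter modeled as a random walk as in the paper. The system, noise levels, and tuning below are invented for illustration and are far simpler than the manual-control models identified in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, q, r = 0.9, 0.01, 0.04

# simulate x_{k+1} = a*x_k + u_k + w,  y_k = x_k + v  (known input u)
u = np.sin(0.1 * np.arange(500))
x, ys = 0.0, []
for uk in u:
    x = a_true * x + uk + rng.normal(0, np.sqrt(q))
    ys.append(x + rng.normal(0, np.sqrt(r)))

# dual filters: one for the state x, one for the parameter a (random walk)
xh, Px = 0.0, 1.0
ah, Pa = 0.3, 1.0
qa = 1e-5                      # random-walk variance of the parameter

for uk, y in zip(u, ys):
    Pa += qa                   # parameter predict: a stays put, Pa grows
    xp = ah * xh + uk          # state predict using current parameter estimate
    Pxp = ah * Px * ah + q
    Ca = xh                    # d(y_pred)/da = previous state estimate
    Ka = Pa * Ca / (Ca * Pa * Ca + Pxp + r)
    ah += Ka * (y - xp)        # parameter update from the shared innovation
    Pa *= (1 - Ka * Ca)
    Kx = Pxp / (Pxp + r)       # state update
    xh = xp + Kx * (y - xp)
    Px = (1 - Kx) * Pxp

print(round(ah, 2))            # converges toward a_true
```

    As in the paper, tuning reduces to choosing the process covariances (here q and qa) and the initial parameter guess; a persistent input is needed for the parameter filter to receive information.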

  14. A consistent framework to predict mass fluxes and depletion times for DNAPL contaminations in heterogeneous aquifers under uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Jonas; Nowak, Wolfgang

    2013-04-01

    At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long term. In industrialized countries, the number of such contaminated sites is so high that ranking them from most to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups where we vary the physical and stochastic dependencies of the input parameters and simulated processes.
Under these changes, the probability density functions demonstrate strong statistical shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physical meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model based decision support for DNAPL contaminated sites.

  15. Factors that cause genotype by environment interaction and use of a multiple-trait herd-cluster model for milk yield of Holstein cattle from Brazil and Colombia.

    PubMed

    Cerón-Muñoz, M F; Tonhati, H; Costa, C N; Rojas-Sarmiento, D; Echeverri Echeverri, D M

    2004-08-01

    Descriptive herd variables (DVHE) were used to explain genotype by environment interactions (G x E) for milk yield (MY) in Brazilian and Colombian production environments and to develop a herd-cluster model to estimate covariance components and genetic parameters for each herd environment group. Data consisted of 180,522 lactation records of 94,558 Holstein cows from 937 Brazilian and 400 Colombian herds. Herds in both countries were jointly grouped in thirds according to 8 DVHE: production level, phenotypic variability, age at first calving, calving interval, percentage of imported semen, lactation length, and herd size. For each DVHE, REML bivariate animal model analyses were used to estimate genetic correlations for MY between upper and lower thirds of the data. Based on estimates of genetic correlations, weights were assigned to each DVHE to group herds in a cluster analysis using the FASTCLUS procedure in SAS. Three clusters were defined, and genetic and residual variance components were heterogeneous among herd clusters. Estimates of heritability in clusters 1 and 3 were 0.28 and 0.29, respectively, but the estimate was larger (0.39) in Cluster 2. The genetic correlations of MY from different clusters ranged from 0.89 to 0.97. The herd-cluster model based on DVHE properly takes into account G x E by grouping similar environments accordingly and seems to be an alternative to simply considering country borders to distinguish between environments.

  16. Combining Genome-Wide Information with a Functional Structural Plant Model to Simulate 1-Year-Old Apple Tree Architecture.

    PubMed

    Migault, Vincent; Pallas, Benoît; Costes, Evelyne

    2016-01-01

    In crops, optimizing target traits in breeding programs can be fostered by selecting appropriate combinations of architectural traits which determine light interception and carbon acquisition. In apple tree, architectural traits were observed to be under genetic control. However, architectural traits also result from many organogenetic and morphological processes interacting with the environment. The present study aimed at combining a FSPM built for apple tree, MAppleT, with genetic determinisms of architectural traits, previously described in a bi-parental population. We focused on parameters related to organogenesis (phyllochron and immediate branching) and morphogenesis processes (internode length and leaf area) during the first year of tree growth. Two independent datasets collected in 2004 and 2007 on 116 genotypes, issued from a 'Starkrimson' × 'Granny Smith' cross, were used. The phyllochron was estimated as a function of thermal time and sylleptic branching was modeled subsequently depending on phyllochron. From a genetic map built with SNPs, marker effects were estimated on four MAppleT parameters with rrBLUP, using 2007 data. These effects were then considered in MAppleT to simulate tree development in the two climatic conditions. The genome wide prediction model gave consistent estimations of parameter values with correlation coefficients between observed values and estimated values from SNP markers ranging from 0.79 to 0.96. However, the accuracy of the prediction model following cross validation schemas was lower. Three integrative traits (the number of leaves, trunk length, and number of sylleptic laterals) were considered for validating MAppleT simulations. In 2007 climatic conditions, simulated values were close to observations, highlighting the correct simulation of genetic variability. However, in 2004 conditions which were not used for model calibration, the simulations differed from observations. 
This study demonstrates the possibility of integrating genome-based information in a FSPM for a perennial fruit tree. It also showed that further work is required to improve the prediction ability. In particular, the temperature effect should be extended and other factors should be taken into account for modeling G x E interactions. Improvements could also be expected by considering larger populations and by testing other genome-wide prediction models. Despite these limitations, this study opens new possibilities for supporting plant breeding by in silico evaluations of the impact of genotypic polymorphisms on plant integrative phenotypes.
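    The marker-effect step (rrBLUP) amounts to ridge regression of a phenotype or model parameter on SNP codes, with a shrinkage parameter equal to the ratio of residual to marker variance. A sketch with simulated markers; the 116 genotypes match the study, but the marker count, effect sizes, and lambda are invented:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 116, 300                               # genotypes, SNP markers
Z = rng.integers(0, 2, (n, m)) * 2.0 - 1.0    # marker codes in {-1, +1}
effects = np.zeros(m)
effects[:20] = rng.normal(0, 0.5, 20)         # a few markers carry signal
pheno = Z @ effects + rng.normal(0, 1.0, n)   # parameter-like phenotype

# rrBLUP: marker effects as random effects, shrunk by
# lambda = sigma_e^2 / sigma_u^2 (here set by hand)
lam = 10.0
u_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ pheno)
pred = Z @ u_hat

corr = np.corrcoef(pred, pheno)[0, 1]
print(round(corr, 2))
```

    The in-sample correlation is optimistic, which is exactly why the study's cross-validated accuracies were lower than the fitted correlations of 0.79 to 0.96.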

  17. A computer program (MODFLOWP) for estimating parameters of a transient, three-dimensional ground-water flow model using nonlinear regression

    USGS Publications Warehouse

    Hill, Mary Catherine

    1992-01-01

    This report documents a new version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW) which, with the new Parameter-Estimation Package that also is documented in this report, can be used to estimate parameters by nonlinear regression. The new version of MODFLOW is called MODFLOWP (pronounced MOD-FLOW-P), and functions nearly identically to MODFLOW when the Parameter-Estimation Package is not used. Parameters are estimated by minimizing a weighted least-squares objective function by the modified Gauss-Newton method or by a conjugate-direction method. Parameters used to calculate the following MODFLOW model inputs can be estimated: transmissivity and storage coefficient of confined layers; hydraulic conductivity and specific yield of unconfined layers; vertical leakance; vertical anisotropy (used to calculate vertical leakance); horizontal anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge rates; maximum evapotranspiration; pumpage rates; and the hydraulic head at constant-head boundaries. Any spatial variation in parameters can be defined by the user. Data used to estimate parameters can include existing independent estimates of parameter values, observed hydraulic heads or temporal changes in hydraulic heads, and observed gains and losses along head-dependent boundaries (such as streams). Model output includes statistics for analyzing the parameter estimates and the model; these statistics can be used to quantify the reliability of the resulting model, to suggest changes in model construction, and to compare results of models constructed in different ways.

  18. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    NASA Astrophysics Data System (ADS)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

    The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and capability to directly control the estimates transient response time. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the parametric identifiability system property introduced. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in a nonlinear adaptive speed tracking vector control of a three-phase induction motor.

  19. Data-Adaptive Bias-Reduced Doubly Robust Estimation.

    PubMed

    Vermeulen, Karel; Vansteelandt, Stijn

    2016-05-01

    Doubly robust estimators have now been proposed for a variety of target parameters in the causal inference and missing data literature. These consistently estimate the parameter of interest under a semiparametric model when one of two nuisance working models is correctly specified, regardless of which. The recently proposed bias-reduced doubly robust estimation procedure aims to partially retain this robustness in more realistic settings where both working models are misspecified. These so-called bias-reduced doubly robust estimators make use of special (finite-dimensional) nuisance parameter estimators that are designed to locally minimize the squared asymptotic bias of the doubly robust estimator in certain directions of these finite-dimensional nuisance parameters under misspecification of both parametric working models. In this article, we extend this idea to incorporate the use of data-adaptive estimators (infinite-dimensional nuisance parameters), by exploiting the bias reduction estimation principle in the direction of only one nuisance parameter. We additionally provide an asymptotic linearity theorem which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model. Simulation studies confirm the desirable finite-sample performance of the proposed estimators relative to a variety of other doubly robust estimators.
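    The augmented inverse-probability-weighted (AIPW) form of a doubly robust estimator combines an outcome working model and a propensity working model so that the estimate stays consistent if either one is correct. A sketch for an average treatment effect with a correctly specified outcome model and a known propensity score (all data simulated; real applications would estimate both nuisances, possibly data-adaptively as in the article):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
x = rng.normal(size=n)                       # confounder
p = 1 / (1 + np.exp(-0.5 * x))               # true propensity score
a = rng.binomial(1, p)                       # treatment indicator
y = 2.0 + 1.0 * a + 1.5 * x + rng.normal(size=n)   # true effect = 1.0

# outcome working model: regress y on (1, a, x) by least squares
X = np.column_stack([np.ones(n), a, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
m1 = beta[0] + beta[1] + beta[2] * x         # predicted y under a = 1
m0 = beta[0] + beta[2] * x                   # predicted y under a = 0
ps = p                                       # propensity model (known here)

# AIPW doubly robust estimate of the average treatment effect
dr = np.mean(a * (y - m1) / ps + m1) \
   - np.mean((1 - a) * (y - m0) / (1 - ps) + m0)
print(round(dr, 2))
```

    The bias-reduced variants discussed in the article change how the nuisance parameters (beta and the propensity coefficients) are fitted, not this combining formula.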

  20. Gene-environment studies: any advantage over environmental studies?

    PubMed

    Bermejo, Justo Lorenzo; Hemminki, Kari

    2007-07-01

    Gene-environment studies have been motivated by the likely existence of prevalent low-risk genes that interact with common environmental exposures. The present study assessed the statistical advantage of the simultaneous consideration of genes and environment to investigate the effect of environmental risk factors on disease. In particular, we contemplated the possibility that several genes modulate the environmental effect. Environmental exposures, genotypes and phenotypes were simulated according to a wide range of parameter settings. Different models of gene-gene-environment interaction were considered. For each parameter combination, we estimated the probability of detecting the main environmental effect, the power to identify the gene-environment interaction and the frequency of environmentally affected individuals at which environmental and gene-environment studies show the same statistical power. The proportion of cases in the population attributable to the modeled risk factors was also calculated. Our data indicate that environmental exposures with weak effects may account for a significant proportion of the population prevalence of the disease. A general result was that, if the environmental effect was restricted to rare genotypes, the power to detect the gene-environment interaction was higher than the power to identify the main environmental effect. In other words, when few individuals contribute to the overall environmental effect, individual contributions are large and result in easily identifiable gene-environment interactions. Moreover, when multiple genes interacted with the environment, the statistical benefit of gene-environment studies was limited to those studies that included major contributors to the gene-environment interaction. The advantage of gene-environment over plain environmental studies also depends on the inheritance mode of the involved genes, on the study design and, to some extent, on the disease prevalence.

  1. Compensatory effects of recruitment and survival when amphibian populations are perturbed by disease

    USGS Publications Warehouse

    Muths, E.; Scherer, R. D.; Pilliod, D.S.

    2011-01-01

    The need to increase our understanding of factors that regulate animal population dynamics has been catalysed by recent, observed declines in wildlife populations worldwide. Reliable estimates of demographic parameters are critical for addressing basic and applied ecological questions and understanding the response of parameters to perturbations (e.g. disease, habitat loss, climate change). However, to fully assess the impact of perturbation on population dynamics, all parameters contributing to the response of the target population must be estimated. We applied the reverse-time model of Pradel in Program MARK to six years of capture-recapture data from two populations of Anaxyrus boreas (boreal toad), one with disease and one without. We then assessed a priori hypotheses about differences in survival and recruitment relative to local environmental conditions and the presence of disease. We further explored the relative contribution of survival probability and recruitment rate to population growth and investigated how shifts in these parameters can alter population dynamics when a population is perturbed. High recruitment rates (0.41) are probably compensating for low survival probability (range 0.51-0.54) in the population challenged by an emerging pathogen, resulting in a relatively slow rate of decline. In contrast, the population with no evidence of disease had high survival probability (range 0.75-0.78) but lower recruitment rates (0.25). Synthesis and applications. We suggest that the relationship between survival and recruitment may be compensatory, providing evidence that populations challenged with disease are not necessarily doomed to extinction. A better understanding of these interactions may help to explain, and be used to predict, population regulation and persistence for wildlife threatened with disease. Further, reliable estimates of population parameters such as recruitment and survival can guide the formulation and implementation of conservation actions such as repatriations or habitat management aimed to improve recruitment. © 2011 The Authors. Journal of Applied Ecology © 2011 British Ecological Society.

  2. Spectral Induced Polarization approaches to characterize reactive transport parameters and processes

    NASA Astrophysics Data System (ADS)

    Schmutz, M.; Franceschi, M.; Revil, A.; Peruzzo, L.; Maury, T.; Vaudelet, P.; Ghorbani, A.; Hubbard, S. S.

    2017-12-01

    For almost a decade, geophysical methods have explored the potential for characterization of reactive transport parameters and processes relevant to hydrogeology, contaminant remediation, and oil and gas applications. Spectral Induced Polarization (SIP) methods show particular promise in this endeavour, given the sensitivity of the SIP signature to the electrical double layer properties of geological materials and the critical role of the electrical double layer in reactive transport processes, such as adsorption. In this presentation, we discuss results from several recent studies that have been performed to quantify the value of SIP parameters for characterizing reactive transport parameters. The advances have been realized by performing experimental studies and interpreting their responses using theoretical and numerical approaches. We describe a series of controlled experimental studies that have been performed to quantify the SIP responses to variations in grain size and specific surface area, pore fluid geochemistry, and other factors. We also model chemical reactions at the fluid/matrix interface linked to part of our experimental data set. For some examples, both geochemical modelling and measurements are integrated into a physico-chemically based SIP model. Our studies indicate both the potential of and the opportunity for using SIP to estimate reactive transport parameters. For samples with well-sorted granulometry, we find that grain size (as well as permeability, for some specific examples) can be estimated using SIP. We show that SIP is sensitive to physico-chemical conditions at the fluid/mineral interface, including the different pore fluid dissolved ions (Na+, Cu2+, Zn2+, Pb2+) due to their different adsorption behavior. We also show the relevance of our approach for characterizing the fluid/matrix interaction for various organic contents (wetting and non-wetting oils). We also discuss early efforts to jointly interpret SIP and other information for improved estimation, approaches to use SIP information to constrain mechanistic flow and transport models, and the potential to apply some of the approaches at the field scale.

  3. Sensitivity of drainage morphometry based hydrological response (GIUH) of a river basin to the spatial resolution of DEM data

    NASA Astrophysics Data System (ADS)

    Sahoo, Ramendra; Jain, Vikrant

    2018-02-01

    Drainage network pattern and its associated morphometric ratios are some of the important plan-form attributes of a drainage basin. Extraction of these attributes for any basin is usually done by spatial analysis of the elevation data of that basin. These plan-form attributes are further used as input data for studying numerous process-response interactions inside the physical premise of the basin. One of the important uses of the morphometric ratios is in the derivation of the hydrologic response of a basin using the GIUH concept. Hence, the accuracy of the basin hydrological response to any storm event depends upon the accuracy with which the morphometric ratios can be estimated. This, in turn, is affected by the spatial resolution of the source data, i.e. the digital elevation model (DEM). We have estimated the sensitivity of the morphometric ratios and the GIUH-derived hydrograph parameters to the resolution of the source data using a 30 m and a 90 m DEM. The analysis has been carried out for 50 drainage basins in a mountainous catchment. A simple and comprehensive algorithm has been developed for estimation of the morphometric indices from a stream network. We calculated all the morphometric parameters and the hydrograph parameters for each of these basins extracted from the two DEMs with different spatial resolutions. A paired t-test and a sign test were used for the comparison. Our results did not show any statistically significant difference in any of the parameters calculated from the two source datasets. Along with the comparative study, a first-hand empirical analysis of the frequency distribution of the morphometric and hydrologic response parameters is also presented. Further, a comparison with other hydrological models suggests that the plan-form morphometry based GIUH model is more consistent under resolution variability than a topography-based hydrological model.
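    The paired comparison described above can be sketched as follows; the basin values are simulated and all names are hypothetical, with the sign test computed as an exact two-sided binomial test on the paired differences (using only the standard library and NumPy):

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bifurcation ratios for 50 basins extracted from 30 m and 90 m
# DEMs; the 90 m values differ only by small zero-mean noise, mimicking the
# finding of no significant resolution effect. Values are illustrative.
rb_30m = rng.normal(3.5, 0.4, size=50)
rb_90m = rb_30m + rng.normal(0.0, 0.05, size=50)

# Paired t-test statistic on the differences.
d = rb_30m - rb_90m
n = len(d)
t_stat = d.mean() / (d.std(ddof=1) / math.sqrt(n))

# Sign test: exact two-sided binomial probability of the observed +/- split.
n_pos = int(np.sum(d > 0))
k = min(n_pos, n - n_pos)
sign_p = min(1.0, 2 * sum(math.comb(n, i) for i in range(k + 1)) * 0.5 ** n)

print(round(t_stat, 3), round(sign_p, 3))
```

    With zero-mean differences, both tests will usually fail to reject the null hypothesis, matching the paper's conclusion of no significant difference between the two DEM resolutions.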

  4. Identifiability measures to select the parameters to be estimated in a solid-state fermentation distributed parameter model.

    PubMed

    da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G

    2016-07-08

    Process modeling can lead to advantages such as helping in process control, reducing process costs and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Also, parameter estimation with a parameter identifiability analysis (PIA) is performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters considering different assumptions. The results have shown that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were nonidentifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful to the estimation procedure, since it may reduce the number of parameters to be evaluated. Further, PIA improved the model results, showing it to be an important step. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.

  5. Estimation of the Parameters in a Two-State System Coupled to a Squeezed Bath

    NASA Astrophysics Data System (ADS)

    Hu, Yao-Hua; Yang, Hai-Feng; Tan, Yong-Gang; Tao, Ya-Ping

    2018-04-01

    Estimation of the phase and weight parameters of a two-state system in a squeezed bath by calculating quantum Fisher information is investigated. The results show that, both for the phase estimation and for the weight estimation, the quantum Fisher information always decays with time and changes periodically with the phases. The estimation precision can be enhanced by choosing proper values of the phases and the squeezing parameter. These results can serve as a reference for the practical application of parameter estimation in a squeezed bath.
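    For intuition, quantum Fisher information has a closed form for a pure-state family, F = 4(<dpsi|dpsi> - |<psi|dpsi>|^2). The abstract's squeezed-bath setting involves mixed states, so the sketch below only illustrates the simpler pure-state limit, for the phase parameter of a qubit (all parameter values are illustrative):

```python
import numpy as np

def qfi_pure(psi, dpsi):
    """Quantum Fisher information of a pure-state family |psi(x)>:
    F = 4 * (<dpsi|dpsi> - |<psi|dpsi>|^2)."""
    overlap = np.vdot(psi, dpsi)
    return 4.0 * (np.vdot(dpsi, dpsi).real - abs(overlap) ** 2)

theta, phi = 1.0, 0.3  # weight and phase parameters (illustrative values)
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
# Analytic derivative of |psi> with respect to the phase parameter phi:
dpsi = np.array([0.0, 1j * np.exp(1j * phi) * np.sin(theta / 2)])

F_phi = qfi_pure(psi, dpsi)
print(round(F_phi, 6), round(np.sin(theta) ** 2, 6))  # both equal sin^2(theta)
```

    For this family the QFI with respect to the phase is sin^2(theta), so the precision bound depends on the weight parameter, echoing the abstract's point that estimation precision depends on the state settings.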

  6. Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.

    PubMed

    Ette, E I; Howie, C A; Kelman, A W; Whiting, B

    1995-05-01

    A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide the basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three and four time point designs was evaluated in terms of the percent prediction error, design number, coverage of individual and joint confidence intervals for the parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time point for the three and four time point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.

  7. Robust gaze-steering of an active vision system against errors in the estimated parameters

    NASA Astrophysics Data System (ADS)

    Han, Youngmo

    2015-01-01

    Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.

  8. An Evaluation of Hierarchical Bayes Estimation for the Two- Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho

    Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item parameters. Simulated data sets were analyzed using two different Bayes estimation procedures, the two-stage hierarchical Bayes estimation (HB2) and the marginal Bayesian with known hyperparameters (MB), and marginal maximum…

  9. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  10. Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.

    PubMed

    Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B

    2005-06-01

    This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment model dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and visual basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
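    For the one-compartment case, this kind of "back analysis" reduces to simple algebra: CL = Dose/AUC and V = CL/lambda_z, where lambda_z is the terminal slope. A minimal sketch under those standard relations (all numeric values are illustrative, not from the paper, and no Solver-style fitting is shown):

```python
import math

# Known one-compartment parameters (illustrative):
dose = 100.0   # mg, IV bolus
V = 20.0       # L, volume of distribution
CL = 5.0       # L/h, clearance
k = CL / V     # elimination rate constant (0.25 /h)

# Non-compartmental variables implied by this model:
auc = dose / CL       # AUC(0-inf) = Dose / CL
lambda_z = k          # terminal slope equals k for one compartment

# "Back analysis": recover compartmental parameters from the NCA variables.
CL_back = dose / auc
V_back = CL_back / lambda_z
t_half = math.log(2) / lambda_z

print(CL_back, V_back, round(t_half, 2))  # 5.0 20.0 2.77
```

    The two-compartment case is not closed-form in the same way, which is why the paper resorts to numerical optimization (Excel Solver) to invert the non-compartmental variables.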

  11. SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.

    PubMed

    Zi, Zhike

    2011-04-01

    Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.

  12. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  13. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  14. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  15. Comparison of stability and control parameters for a light, single-engine, high-winged aircraft using different flight test and parameter estimation techniques

    NASA Technical Reports Server (NTRS)

    Suit, W. T.; Cannaday, R. L.

    1979-01-01

    The longitudinal and lateral stability and control parameters for a high-wing, general aviation airplane are examined. Estimates obtained using flight data at various flight conditions within the normal range of the aircraft are presented. The estimation techniques, an output error technique (maximum likelihood) and an equation error technique (linear regression), are presented. The longitudinal static parameters are estimated from climbing, descending, and quasi-steady-state flight data. The lateral excitations involve a combination of rudder and aileron inputs. The sensitivity of the aircraft modes of motion to variations in the parameter estimates is discussed.

  16. Determination of stability and control parameters of a light airplane from flight data using two estimation methods. [equation error and maximum likelihood methods

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1979-01-01

    Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.

  17. Ensemble urban flood simulation in comparison with laboratory-scale experiments: Impact of interaction models for manhole, sewer pipe, and surface flow

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; Lee, Seungsoo; An, Hyunuk; Kawaike, Kenji; Nakagawa, Hajime

    2016-11-01

    An urban flood is an integrated phenomenon that is affected by various uncertainty sources such as input forcing, model parameters, complex geometry, and exchanges of flow among different domains in surfaces and subsurfaces. Despite considerable advances in urban flood modeling techniques, limited knowledge is currently available with regard to the impact of dynamic interaction among different flow domains on urban floods. In this paper, an ensemble method for urban flood modeling is presented to consider the parameter uncertainty of interaction models among a manhole, a sewer pipe, and surface flow. Laboratory-scale experiments on urban flood and inundation are performed under various flow conditions to investigate the parameter uncertainty of interaction models. The results show that ensemble simulation using interaction models based on weir and orifice formulas reproduces experimental data with high accuracy and detects the identifiability of model parameters. Among interaction-related parameters, the parameters of the sewer-manhole interaction show lower uncertainty than those of the sewer-surface interaction. Experimental data obtained under unsteady-state conditions are more informative than those obtained under steady-state conditions to assess the parameter uncertainty of interaction models. Although the optimal parameters vary according to the flow conditions, the difference is marginal. Simulation results also confirm the capability of the interaction models and the potential of the ensemble-based approaches to facilitate urban flood simulation.
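    The weir- and orifice-type interaction formulas mentioned above can be sketched as follows; the discharge coefficients, crest height, opening geometry, and the rule for switching between the two regimes are illustrative assumptions, not the paper's calibrated model:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def weir_discharge(c_w, length, head):
    """Free weir-type formula, Q = c_w * L * sqrt(2g) * h^(3/2) (illustrative form)."""
    return c_w * length * math.sqrt(2 * G) * head ** 1.5

def orifice_discharge(c_o, area, head_diff):
    """Submerged orifice-type formula, Q = c_o * A * sqrt(2g * |dh|), signed by dh."""
    return math.copysign(c_o * area * math.sqrt(2 * G * abs(head_diff)), head_diff)

def manhole_exchange(h_surface, h_manhole, crest=0.05, c_w=0.6, c_o=0.6,
                     perimeter=1.2, area=0.09):
    """Hypothetical manhole-surface exchange: shallow water over the manhole lid
    behaves as a weir; once fully submerged, the orifice formula takes over."""
    if h_surface < crest:  # weir-type (free) exchange
        return weir_discharge(c_w, perimeter, max(h_surface, 0.0))
    return orifice_discharge(c_o, area, h_surface - h_manhole)  # orifice-type

q = manhole_exchange(h_surface=0.02, h_manhole=0.0)
print(round(q, 5))
```

    In an ensemble simulation like the one described, the coefficients c_w and c_o would be perturbed across ensemble members and their posterior spread used to assess parameter identifiability.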

  18. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean-atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and an enhanced signal-to-noise ratio.
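    A minimal sketch of the ASA selection-and-averaging step, assuming a synthetic ensemble and an arbitrary spread threshold (both are illustrative choices, not the paper's configuration or its full assimilation cycle):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ensemble of posterior parameter estimates on a spatial grid,
# shape (n_ensemble, n_grid). The true global parameter is 1.0; some grid
# points are poorly constrained (large ensemble spread).
n_ens, n_grid = 20, 100
truth = 1.0
spread = np.where(np.arange(n_grid) < 70, 0.05, 0.8)  # 30 noisy grid points
estimates = truth + rng.normal(0.0, spread, size=(n_ens, n_grid))

# Adaptive spatial average (sketch): keep only grid points whose ensemble
# spread is small, then average those "good" values into one global parameter.
point_spread = estimates.std(axis=0)
good = point_spread < 0.2
asa_estimate = estimates[:, good].mean()
plain_average = estimates.mean()

print(round(asa_estimate, 3), round(plain_average, 3))
```

    Screening out the high-spread grid points before averaging is what raises the signal-to-noise ratio relative to a plain spatial average.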

  19. Model parameter learning using Kullback-Leibler divergence

    NASA Astrophysics Data System (ADS)

    Lin, Chungwei; Marks, Tim K.; Pajovic, Milutin; Watanabe, Shinji; Tung, Chih-kuan

    2018-02-01

    In this paper, we address the following problem: for a given set of spin configurations whose probability distribution is of the Boltzmann type, how do we determine the model coupling parameters? We demonstrate that directly minimizing the Kullback-Leibler divergence is an efficient method. We test this method against the Ising and XY models on the one-dimensional (1D) and two-dimensional (2D) lattices, and provide two estimators to quantify the model quality. We apply this method to two types of problems. First, we apply it to the real-space renormalization group (RG). We find that the obtained RG flow is sufficiently good for determining the phase boundary (within 1% of the exact result) and the critical point, but not accurate enough for critical exponents. The proposed method provides a simple way to numerically estimate amplitudes of the interactions typically truncated in the real-space RG procedure. Second, we apply this method to a dynamical system composed of self-propelled particles, where we extract the parameter of a statistical model (a generalized XY model) from a dynamical system described by the Vicsek model. We are able to obtain reasonable coupling values corresponding to different noise strengths of the Vicsek model. Our method is thus able to provide quantitative analysis of dynamical systems composed of self-propelled particles.
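    For the 1D Ising case, minimizing KL(P_data || P_J) is equivalent to moment matching on the sufficient statistic s = sum_i sigma_i * sigma_{i+1}, with gradient <s>_J - <s>_data. A small enumeration-based sketch (lattice size, sample size, and learning rate are illustrative choices):

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
N = 8  # spins on a periodic chain: small enough to enumerate all 2^N states

configs = np.array(list(itertools.product([-1, 1], repeat=N)))
# Sufficient statistic of the 1D Ising model: sum of nearest-neighbour products.
s = np.sum(configs * np.roll(configs, -1, axis=1), axis=1)

def model_probs(J):
    """Boltzmann distribution P_J(config) proportional to exp(J * s)."""
    w = np.exp(J * s)
    return w / w.sum()

# Draw "observed" configurations from the true model (J_true = 0.5).
J_true = 0.5
data_idx = rng.choice(len(configs), size=20000, p=model_probs(J_true))
s_data_mean = s[data_idx].mean()

# Minimize KL(P_data || P_J) by gradient descent; the gradient is <s>_J - <s>_data.
J = 0.0
for _ in range(500):
    grad = np.dot(model_probs(J), s) - s_data_mean
    J -= 0.05 * grad

print(round(J, 2))  # close to 0.5
```

    For larger lattices the exact enumeration of <s>_J is replaced by Monte Carlo estimates, but the gradient structure is the same.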

  20. The Effects of Statistical Multiplicity of Infection on Virus Quantification and Infectivity Assays.

    PubMed

    Mistry, Bhaven A; D'Orsogna, Maria R; Chou, Tom

    2018-06-19

    Many biological assays are employed in virology to quantify parameters of interest. Two such classes of assays, virus quantification assays (VQAs) and infectivity assays (IAs), aim to estimate the number of viruses present in a solution and the ability of a viral strain to successfully infect a host cell, respectively. VQAs operate at extremely dilute concentrations, and results can be subject to stochastic variability in virus-cell interactions. At the other extreme, high viral-particle concentrations are used in IAs, resulting in large numbers of viruses infecting each cell, enough for measurable change in total transcription activity. Furthermore, host cells can be infected at any concentration regime by multiple particles, resulting in a statistical multiplicity of infection and yielding potentially significant variability in the assay signal and parameter estimates. We develop probabilistic models for statistical multiplicity of infection at low and high viral-particle-concentration limits and apply them to the plaque (VQA), endpoint dilution (VQA), and luciferase reporter (IA) assays. A web-based tool implementing our models and analysis is also developed and presented. We test our proposed new methods for inferring experimental parameters from data using numerical simulations and show improvement on existing procedures in all limits. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
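    Under the usual dilute-limit assumption, the number of particles infecting a given cell is Poisson distributed, which gives the standard relations sketched below; this is a generic illustration of statistical multiplicity of infection, not the paper's full assay-specific models:

```python
import math

# Poisson model of statistical multiplicity of infection: if particles are
# dilute and act independently, the number infecting a given cell is
# Poisson(m), where m is the mean number of infectious particles per cell.
def p_infected(m):
    """Probability that a cell is infected by at least one particle."""
    return 1.0 - math.exp(-m)

def moi_from_fraction(f_infected):
    """Invert the Poisson model: estimate m from the infected fraction,
    the standard endpoint-dilution style calculation."""
    return -math.log(1.0 - f_infected)

m = 0.5
f = p_infected(m)
print(round(f, 4), round(moi_from_fraction(f), 4))  # 0.3935 0.5
```

    The nonlinearity of 1 - exp(-m) is exactly why raw infected-cell counts understate the particle count at high concentrations, the variability the abstract's models are designed to correct for.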

  1. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

    A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine s health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
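    The underlying selection criterion can be illustrated with a static linear-Gaussian analogue: choose the sensor subset whose minimum-mean-square-error covariance has the smallest trace. All dimensions and matrices below are synthetic assumptions, not the engine model:

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)

# Static sketch of the sensor-selection idea: health parameters x ~ N(0, Sigma)
# are observed through a chosen subset of rows of H (y = H x + v), and we pick
# the subset minimizing trace(P), where
# P = Sigma - Sigma H' (H Sigma H' + R)^{-1} H Sigma.
n_params, n_sensors, k = 4, 6, 3  # underdetermined: k < n_params
Sigma = np.eye(n_params)          # prior covariance of health parameters
H_full = rng.normal(size=(n_sensors, n_params))
r = 0.1                           # per-sensor measurement noise variance

def mse_trace(rows):
    H = H_full[list(rows)]
    R = r * np.eye(len(rows))
    S = H @ Sigma @ H.T + R
    P = Sigma - Sigma @ H.T @ np.linalg.solve(S, H @ Sigma)
    return np.trace(P)

best = min(itertools.combinations(range(n_sensors), k), key=mse_trace)
print(best, round(mse_trace(best), 3))
```

    The actual methodology works with the Kalman filter error equations for a dynamic engine model, but the subset search over sensors against a mean-squared-error objective is the same idea.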

  2. Anomalous cosmic ray fluxes in diffusive shock acceleration processes in the heliosphere and in planetary and WR-nebulae

    NASA Astrophysics Data System (ADS)

    Yeghikyan, Ararat

    2018-04-01

    Based on the analogy between interacting stellar winds of planetary nebulae and WR-nebulae, on the one hand, and the heliosphere and the expanding envelopes of supernovae, on the other, an attempt is made to calculate the differential intensity of energetic protons accelerated to energies of 100 MeV by the shock wave. The proposed one-parameter formula for estimating the intensity at 1-100 MeV, when applied to the heliosphere, shows good agreement with the Voyager-1 data, to within a factor of less than 2. The same estimate for planetary (and WR-) nebulae yields a value 7-8 (3-4) orders of magnitude higher than the mean galactic intensity value. The obtained estimate of the intensity of energetic protons in these kinds of nebulae was used to estimate the irradiation doses of certain substances, in order to show that such accelerated particles play an important role in radiation-chemical transformations in such nebulae.

  3. Molecular interactions in nanocellulose assembly

    NASA Astrophysics Data System (ADS)

    Nishiyama, Yoshiharu

    2017-12-01

    The contribution of hydrogen bonds and the London dispersion force in the cohesion of cellulose is discussed in the light of the structure, spectroscopic data, empirical molecular-modelling parameters and thermodynamics data of analogue molecules. The hydrogen bond of cellulose is mainly electrostatic, and the stabilization energy in cellulose for each hydrogen bond is estimated to be between 17 and 30 kJ mol-1. On average, hydroxyl groups of cellulose form hydrogen bonds comparable to those of other simple alcohols. The London dispersion interaction may be estimated from empirical attraction terms in molecular modelling by simple integration over all components. Although this interaction extends to relatively large distances in colloidal systems, the short-range interaction is dominant for the cohesion of cellulose and is equivalent to a compression of 3 GPa. Trends in the heat of vaporization of alkyl alcohols and alkanes suggest a stabilization from such hydroxyl hydrogen bonding of the order of 24 kJ mol-1, whereas the London dispersion force contributes about 0.41 kJ mol-1 Da-1. The simple arithmetic sum of the energy is consistent with the experimental enthalpy of sublimation of small sugars, where the main part of the cohesive energy comes from hydrogen bonds. For cellulose, because of the reduced number of hydroxyl groups, the London dispersion force provides the main contribution to intermolecular cohesion. This article is part of a discussion meeting issue `New horizons for cellulose nanotechnology'.
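The "simple arithmetic sum" mentioned above can be reproduced with the abstract's per-bond and per-dalton figures; glucose (5 hydroxyl groups, ~180 Da) is used here purely as an illustrative small sugar, not a value taken from the article.

```python
# Back-of-envelope check of the additive cohesion estimate in the abstract:
# ~24 kJ/mol per hydrogen-bonded hydroxyl plus ~0.41 kJ/mol per Da of
# London dispersion. Glucose (5 OH groups, ~180 Da) is an assumed,
# illustrative small sugar; the figures are rough, not fitted.
hb_per_oh = 24.0       # kJ/mol, hydrogen-bond stabilization per hydroxyl
disp_per_da = 0.41     # kJ/mol per dalton, London dispersion contribution
n_oh, molar_mass = 5, 180.0

cohesion = n_oh * hb_per_oh + molar_mass * disp_per_da
print(f"estimated cohesive energy: {cohesion:.1f} kJ/mol")  # 193.8
hb_fraction = n_oh * hb_per_oh / cohesion
print(f"hydrogen-bond share: {hb_fraction:.0%}")
```

The hydrogen-bond share coming out above one half is the abstract's point for small sugars; cellulose, with fewer free hydroxyls per unit mass, tips the balance toward dispersion.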

  4. Pharmacokinetic model analysis of interaction between phenytoin and capecitabine.

    PubMed

    Miyazaki, Shohei; Satoh, Hiroki; Ikenishi, Masayuki; Sakurai, Miyuki; Ueda, Mutsuaki; Kawahara, Kaori; Ueda, Rie; Ohtori, Tohru; Matsuyama, Kenji; Miki, Akiko; Hori, Satoko; Fukui, Eiji; Nakatsuka, Eitaro; Sawada, Yasufumi

    2016-09-01

    Recent reports have shown an increase in serum phenytoin levels resulting in phenytoin toxicity after initiation of fluoropyrimidine chemotherapy. To prevent phenytoin intoxication, phenytoin dosage must be adjusted. We sought to develop a pharmacokinetic model of the interaction between phenytoin and capecitabine. We developed the phenytoin-capecitabine interaction model on the assumption that fluorouracil (5-FU) inhibits cytochrome P450 (CYP) 2C9 synthesis in a concentration-dependent manner. The plasma 5-FU concentration after oral administration of capecitabine was estimated using a conventional compartment model. Nonlinear pharmacokinetics of phenytoin was modeled by incorporating the Michaelis-Menten equation to represent the saturation of phenytoin metabolism. The resulting model was fitted to data from our previously-reported cases. The developed phenytoin-capecitabine interaction model successfully described the profiles of serum phenytoin concentration in patients who received phenytoin and capecitabine concomitantly. The 50% inhibitory 5-FU concentration for CYP2C9 synthesis and the degradation rate constant of CYP2C9 were estimated to be 0.00310 ng/mL and 0.0768 day-1, respectively. This model and these parameters allow us to predict the appropriate phenytoin dosage schedule when capecitabine is administered concomitantly. This newly-developed model accurately describes changes in phenytoin concentration during concomitant capecitabine chemotherapy, and it may be clinically useful for predicting appropriate phenytoin dosage adjustments for maintaining serum phenytoin levels within the therapeutic range.
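A minimal sketch of the inhibition mechanism described above: 5-FU suppresses CYP2C9 synthesis, so the enzyme level decays toward a lower steady state and phenytoin, eliminated by saturable Michaelis-Menten metabolism, accumulates. Only the IC50 and the CYP2C9 degradation constant are taken from the abstract; every other value is an assumed placeholder, not the authors' fitted parameter.

```python
# Toy turnover + Michaelis-Menten model of the phenytoin/capecitabine
# interaction. IC50 and kdeg are from the abstract; Vmax0, Km and the
# dosing rate are invented, scaled placeholders.
IC50 = 0.00310   # ng/mL, 5-FU concentration halving CYP2C9 synthesis
kdeg = 0.0768    # 1/day, CYP2C9 degradation rate constant
Vmax0, Km = 6.0, 5.0   # assumed: mg/L/day max elimination, mg/L
dose_rate = 4.0        # assumed: mg/L/day phenytoin input (scaled)

def simulate(c5fu, days=60, dt=0.01):
    """Forward-Euler integration of enzyme level E and phenytoin Cp."""
    E, Cp = 1.0, 10.0          # relative enzyme level, phenytoin conc.
    for _ in range(int(days / dt)):
        inhib = 1.0 - c5fu / (c5fu + IC50)   # fractional synthesis left
        dE = kdeg * inhib - kdeg * E         # synthesis - degradation
        dCp = dose_rate - Vmax0 * E * Cp / (Km + Cp)
        E += dE * dt
        Cp += dCp * dt
    return Cp

print("phenytoin, no 5-FU:  ", round(simulate(0.0), 1))
print("phenytoin, with 5-FU:", round(simulate(0.001), 1))
```

With the baseline chosen at steady state, the no-5-FU run holds level while the 5-FU run drifts upward over weeks, mirroring the slow-onset toxicity the case reports describe.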

  5. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    NASA Astrophysics Data System (ADS)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting-up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting-up of initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. The results show that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
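Gradient matching for the full Richards PDE is involved, but the core idea can be illustrated on a toy ODE, dy/dt = -k*y: fit a smooth curve to the noisy data, differentiate the fit, and choose k so the fitted slope matches -k times the fitted value, with no forward solve and no boundary conditions. The polynomial surrogate below is an assumed simplification of the Gaussian-process machinery AGM actually uses.

```python
import numpy as np

# Minimal gradient-matching sketch (toy stand-in for AGM): never solve
# the ODE; instead match the derivative of a smooth data fit to the
# ODE right-hand side.
rng = np.random.default_rng(1)
k_true = 0.5
t = np.linspace(0.0, 4.0, 60)
y_obs = np.exp(-k_true * t) + rng.normal(scale=0.005, size=t.size)

coef = np.polyfit(t, y_obs, deg=6)       # smooth surrogate for y(t)
y_fit = np.polyval(coef, t)
dy_fit = np.polyval(np.polyder(coef), t)

# Least-squares match dy_fit = -k * y_fit gives a closed-form estimate:
k_est = -np.sum(dy_fit * y_fit) / np.sum(y_fit**2)
print("estimated k:", round(k_est, 3))   # close to 0.5
```

The appeal, as in the abstract, is that no initial or boundary conditions enter anywhere: only the data and the form of the differential operator.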

  6. A variational approach to parameter estimation in ordinary differential equations.

    PubMed

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely used in the fields of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.
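The "conventional parameter estimation" step mentioned above can be sketched on a toy one-reaction network with an invented rate constant: forward-solve the ODE for each candidate parameter and minimize the squared error against the time-resolved data. The grid search stands in for whatever optimizer one would actually use.

```python
import numpy as np

# Conventional ODE parameter estimation: forward-solve, then least squares.
# Toy network A -> B with unknown rate k, observing A(t).
def forward(k, t, a0=1.0):
    """RK4 integration of dA/dt = -k*A on the grid t."""
    a = np.empty_like(t)
    a[0] = a0
    for i in range(len(t) - 1):
        h, ai = t[i + 1] - t[i], a[i]
        f = lambda x: -k * x
        k1 = f(ai); k2 = f(ai + h * k1 / 2)
        k3 = f(ai + h * k2 / 2); k4 = f(ai + h * k3)
        a[i + 1] = ai + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return a

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 40)
data = forward(0.8, t) + rng.normal(scale=0.01, size=t.size)

grid = np.linspace(0.1, 2.0, 400)
sse = [np.sum((forward(k, t) - data) ** 2) for k in grid]
k_hat = grid[int(np.argmin(sse))]
print("estimated rate constant:", round(k_hat, 3))
```

Every candidate k costs a full forward solve; the paper's variational augmentation addresses exactly the case this sketch cannot handle, namely unknown input *courses* rather than a single scalar.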

  7. Estimation of distributional parameters for censored trace level water quality data: 2. Verification and applications

    USGS Publications Warehouse

    Helsel, Dennis R.; Gilliom, Robert J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.
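One common reading of log probability regression (regression on order statistics) can be sketched for a singly censored sample: regress the log of the detected values on normal quantiles of their plotting positions, then impute the censored observations from the fitted line. The data and detection limit below are invented for illustration, and the single-threshold case is a simplification of the multiply censored settings the study covers.

```python
from statistics import NormalDist

import numpy as np

# Simplified log probability regression (ROS) for one detection limit.
detects = np.array([0.8, 1.1, 1.5, 2.3, 3.4, 5.0, 7.9])
n_cens, dl = 5, 0.5            # five values reported as "<0.5"
n = n_cens + detects.size

# Weibull plotting positions i/(n+1); censored values take ranks 1..5.
pp = np.arange(1, n + 1) / (n + 1)
z = np.array([NormalDist().inv_cdf(p) for p in pp])

# Fit log(x) = a + b*z on the detected (uncensored) ranks only.
b, a = np.polyfit(z[n_cens:], np.log(detects), 1)
imputed = np.exp(a + b * z[:n_cens])     # estimates for censored ranks

full = np.concatenate([imputed, detects])
print("estimated mean:", round(full.mean(), 2))
print("estimated std: ", round(full.std(ddof=1), 2))
```

The point of the method, as the abstract notes, is that moment estimates computed this way are far less biased than substituting zero, the detection limit, or half of it for the censored values.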

  8. Parameter estimation of qubit states with unknown phase parameter

    NASA Astrophysics Data System (ADS)

    Suzuki, Jun

    2015-02-01

    We discuss a problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for the mean square errors (MSEs) when estimating relevant parameters with separable measurements, based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement which attains the HGM bound and discuss its properties. We show that the HGM bound for the relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.

  9. Image informative maps for component-wise estimating parameters of signal-dependent noise

    NASA Astrophysics Data System (ADS)

    Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem

    2013-01-01

    We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameters estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.

  10. Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Sharma, Diksha; Sze, Christina; Bhandari, Harish; Nagarkar, Vivek; Badano, Aldo

    2017-01-01

    Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness, bulk, and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets; in all cases the percent error between the estimated and actual DOI was within ±5% over the majority of the detector thickness, with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.

  11. Complex phase error and motion estimation in synthetic aperture radar imaging

    NASA Astrophysics Data System (ADS)

    Soumekh, M.; Yang, H.

    1991-06-01

    Attention is given to a SAR wave equation-based system model that accurately represents the interaction of the impinging radar signal with the target to be imaged. The model is used to estimate the complex phase error across the synthesized aperture from the measured corrupted SAR data by combining the two wave equation models governing the collected SAR data at two temporal frequencies of the radar signal. The SAR system model shows that the motion of an object in a static scene results in coupled Doppler shifts in both the temporal frequency domain and the spatial frequency domain of the synthetic aperture. The velocity of the moving object is estimated through these two Doppler shifts. It is shown that once the dynamic target's velocity is known, its reconstruction can be formulated via a squint-mode SAR geometry with parameters that depend upon the dynamic target's velocity.

  12. Continuous-variable quantum probes for structured environments

    NASA Astrophysics Data System (ADS)

    Bina, Matteo; Grasselli, Federico; Paris, Matteo G. A.

    2018-01-01

    We address parameter estimation for structured environments and suggest an effective estimation scheme based on continuous-variable quantum probes. In particular, we investigate the use of a single bosonic mode as a probe for Ohmic reservoirs, and obtain the ultimate quantum limits to the precise estimation of their cutoff frequency. We assume the probe prepared in a Gaussian state and determine the optimal working regime, i.e., the conditions for the maximization of the quantum Fisher information in terms of the initial preparation, the reservoir temperature, and the interaction time. Upon investigating the Fisher information of feasible measurements, we arrive at a remarkably simple result: homodyne detection of canonical variables allows one to achieve the ultimate quantum limit to precision under suitable, mild conditions. Finally, upon exploiting a perturbative approach, we find the invariant sweet spots of the (tunable) characteristic frequency of the probe, able to drive the probe towards the optimal working regime.

  13. Sieve estimation in semiparametric modeling of longitudinal data with informative observation times.

    PubMed

    Zhao, Xingqiu; Deng, Shirong; Liu, Li; Liu, Lei

    2014-01-01

    Analyzing irregularly spaced longitudinal data often involves modeling possibly correlated response and observation processes. In this article, we propose a new class of semiparametric mean models that allows for the interaction between the observation history and covariates, leaving patterns of the observation process to be arbitrary. For inference on the regression parameters and the baseline mean function, a spline-based least squares estimation approach is proposed. The consistency, rate of convergence, and asymptotic normality of the proposed estimators are established. Our new approach is different from the usual approaches relying on the model specification of the observation scheme, and it can be easily used for predicting the longitudinal response. Simulation studies demonstrate that the proposed inference procedure performs well and is more robust. The analyses of bladder tumor data and medical cost data are presented to illustrate the proposed method.

  14. Technique for Determination of Rational Boundaries in Combining Construction and Installation Processes Based on Quantitative Estimation of Technological Connections

    NASA Astrophysics Data System (ADS)

    Gusev, E. V.; Mukhametzyanov, Z. R.; Razyapov, R. V.

    2017-11-01

    The problems of the existing methods for determining which construction processes and activities can be combined and technologically interlinked are considered under the modern conditions of constructing various facilities. The necessity to identify common parameters that characterize the nature of interaction among all the technologically related construction and installation processes and activities is shown. Research into the technologies of construction and installation processes for buildings and structures was conducted with the goal of determining a common parameter for evaluating the relationship between technologically interconnected processes and construction works. The result of this research is a quantitative evaluation of the interaction between construction and installation processes and activities: the minimum technologically necessary volume of a preceding process that allows one to plan and organize the execution of the subsequent, technologically interconnected process. This quantitative evaluation is used as the basis for calculating the optimum range over which processes and activities can be combined. The calculation method is based on graph theory. The authors applied a generic characterization parameter to reveal the technological links between construction and installation processes, and the proposed technique has adaptive properties that are key to its wide use in forming organizational decisions. The article concludes with the practical significance of the developed technique.

  15. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    PubMed

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we want to present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications will be shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
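The Monte Carlo idea behind the IPM can be sketched with two invented, stacked unit operations whose transfer functions, parameter distributions, and spec limits are placeholders, not a real process: sample the process parameters, propagate them through the chain, and read off the fraction of simulated batches inside the CQA specification.

```python
import numpy as np

# Toy Monte Carlo process-capability estimate across stacked unit
# operations. All transfer functions and limits are illustrative.
rng = np.random.default_rng(3)
n = 100_000

# Unit operation 1 (e.g. fermentation): titer depends on two PPs.
ph = rng.normal(7.0, 0.05, n)            # process parameter 1
temp = rng.normal(37.0, 0.3, n)          # process parameter 2
titer = 5.0 - 2.0 * (ph - 7.0) ** 2 - 0.1 * (temp - 37.0) ** 2

# Unit operation 2 (e.g. chromatography): output interacts with titer,
# so upstream variation propagates downstream.
load = rng.normal(30.0, 3.0, n)
cqa = 0.9 * titer * (1.0 - 0.01 * np.abs(load - 30.0))

# Process capability: fraction of simulated batches inside the spec.
lsl, usl = 4.3, 5.0
capability = np.mean((cqa >= lsl) & (cqa <= usl))
print(f"predicted in-spec fraction: {capability:.3f}")
```

Sensitivity of `capability` to each sampled parameter is what turns such a model into the risk and criticality assessment the abstract describes.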

  16. Computational analysis of the binding ability of heterocyclic and conformationally constrained epibatidine analogs in the neuronal nicotinic acetylcholine receptor.

    PubMed

    Soriano, Elena; Marco-Contelles, José; Colmena, Inés; Gandía, Luis

    2010-05-01

    One of the most critical issues on the study of ligand-receptor interactions in drug design is the knowledge of the bioactive conformation of the ligand. In this study, we describe a computational approach aimed at estimating the binding ability of epibatidine analogs to interact with the neuronal nicotinic acetylcholine receptor (nAChR) and get insights into the bioactive conformation. The protocol followed consists of a docking analysis and evaluation of pharmacophore parameters of the docked structures. On the basis of the biological data, the results have revealed that the docking analysis is able to predict active ligands, whereas further efforts are needed to develop a suitable and solid pharmacophore model.

  17. Stochastic Acceleration of Ions Driven by Pc1 Wave Packets

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.; Sibeck, D. G.; Tel'nikhin, A. A.; Kronberg, T. K.

    2015-01-01

    The stochastic motion of protons and He⁺ ions driven by Pc1 wave packets is studied in the context of resonant particle heating. Resonant ion cyclotron heating typically occurs when wave powers exceed 10⁻⁴ nT²/Hz. Gyroresonance breaks the first adiabatic invariant and energizes keV ions. Cherenkov resonances with the electrostatic component of wave packets can also accelerate ions. The main effect of this interaction is to accelerate thermal protons to the local Alfven speed. The dependencies of observable quantities on the wave power and plasma parameters are determined, and estimates for the heating extent and rate of particle heating in these wave-particle interactions are shown to be in reasonable agreement with known empirical data.

  18. The Model Parameter Estimation Experiment (MOPEX): Its structure, connection to other international initiatives and future directions

    USGS Publications Warehouse

    Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.

  19. Transit Project Planning Guidance : Estimation of Transit Supply Parameters

    DOT National Transportation Integrated Search

    1984-04-01

    This report discusses techniques applicable to the estimation of transit vehicle fleet requirements, vehicle-hours and vehicle-miles, and other related transit supply parameters. These parameters are used for estimating operating costs and certain ca...

  20. Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo

    2016-04-01

    Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
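A simplified version of the described approach, with illustrative rather than physiological numbers: synthesize a "measured" pressure from a three-element Windkessel driven by a half-sine ejection flow, fix the total resistance (as the abstract does, from the flow), and randomly draw compliance and peripheral resistance, keeping the draw with the lowest pressure error.

```python
import numpy as np

# Three-element Windkessel: Rc in series with a parallel (Rp, C) pair.
def wk3_pressure(q, dt, rc, rp, c, p0=80.0):
    """P = Rc*q + Pc with C dPc/dt = q - Pc/Rp (forward Euler)."""
    pc = np.empty_like(q)
    pc[0] = p0
    for i in range(len(q) - 1):
        pc[i + 1] = pc[i] + dt * (q[i] - pc[i] / rp) / c
    return rc * q + pc

dt, n_beats, t_beat = 0.005, 8, 0.8
t = np.arange(0, n_beats * t_beat, dt)
phase = (t % t_beat) / t_beat
q = np.where(phase < 0.35, 300.0 * np.sin(np.pi * phase / 0.35), 0.0)

rc_true, rp_true, c_true = 0.05, 0.95, 1.3
p_meas = wk3_pressure(q, dt, rc_true, rp_true, c_true)
r_total = rc_true + rp_true          # held fixed, as in the abstract

rng = np.random.default_rng(4)
best = (np.inf, None)
for _ in range(2000):                # Monte Carlo parameter draws
    rp = rng.uniform(0.5, r_total - 0.01)
    c = rng.uniform(0.3, 3.0)
    err = np.mean((wk3_pressure(q, dt, r_total - rp, rp, c) - p_meas) ** 2)
    if err < best[0]:
        best = (err, (round(r_total - rp, 3), round(rp, 3), round(c, 3)))
print("best (Rc, Rp, C):", best[1], " mse:", round(best[0], 4))
```

Fixing the total resistance collapses the search to two dimensions, which is what makes a plain random search competitive here.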

  1. SBML-PET: a Systems Biology Markup Language-based parameter estimation tool.

    PubMed

    Zi, Zhike; Klipp, Edda

    2006-11-01

    The estimation of model parameters from experimental data remains a bottleneck for a major breakthrough in systems biology. We present a Systems Biology Markup Language (SBML) based Parameter Estimation Tool (SBML-PET). The tool is designed to enable parameter estimation for biological models including signaling pathways, gene regulation networks and metabolic pathways. SBML-PET supports import and export of the models in the SBML format. It can estimate the parameters by fitting a variety of experimental data from different experimental conditions. SBML-PET has a unique feature of supporting event definition in the SBML model. SBML models can also be simulated in SBML-PET. Stochastic Ranking Evolution Strategy (SRES) is incorporated in SBML-PET for parameter estimation jobs. A classic ODE solver called ODEPACK is used to solve the Ordinary Differential Equation (ODE) system. http://sysbio.molgen.mpg.de/SBML-PET/. The website also contains detailed documentation for SBML-PET.

  2. Population pharmacokinetics of aripiprazole in healthy Korean subjects.

    PubMed

    Jeon, Ji-Young; Chae, Soo-Wan; Kim, Min-Gul

    2016-04-01

    Aripiprazole is widely used to treat schizophrenia and bipolar disorder. This study aimed to develop a combined population pharmacokinetic model for aripiprazole in healthy Korean subjects and to identify the significant covariates in the pharmacokinetic variability of aripiprazole. Aripiprazole plasma concentrations and demographic data were collected retrospectively from previous bioequivalence studies that were conducted in Chonbuk National University Hospital. Informed consent was obtained from subjects for cytochrome P450 (CYP) genotyping. The population pharmacokinetic parameters of aripiprazole were estimated using nonlinear mixed-effect modeling with first-order conditional estimation with interaction method. The effects of age, sex, weight, height, and CYP genotype were assessed as covariates. A total of 1,508 samples from 88 subjects in three bioequivalence studies were collected. The two-compartment model was adopted, and the final population model showed that the CYP2D6 genotype polymorphism, height and weight significantly affect aripiprazole disposition. The bootstrap and visual predictive check results were evaluated, showing that the accuracy of the pharmacokinetic model was acceptable. A population pharmacokinetic model of aripiprazole was developed for Korean subjects. CYP2D6 genotype polymorphism, weight, and height were included as significant factors affecting aripiprazole disposition. The population pharmacokinetic parameters of aripiprazole estimated in the present study may be useful for individualizing clinical dosages and for studying the concentration-effect relationship of the drug.

  3. Revised age estimates of the Euphrosyne family

    NASA Astrophysics Data System (ADS)

    Carruba, Valerio; Masiero, Joseph R.; Cibulková, Helena; Aljbaae, Safwan; Espinoza Huaman, Mariela

    2015-08-01

    The Euphrosyne family, a high inclination asteroid family in the outer main belt, is considered one of the most peculiar groups of asteroids. It is characterized by the steepest size frequency distribution (SFD) among families in the main belt, and it is the only family crossed near its center by the ν6 secular resonance. Previous studies have shown that the steep size frequency distribution may be the result of the dynamical evolution of the family. In this work we further explore the unique dynamical configuration of the Euphrosyne family by refining the previous age values, considering the effects of changes in shapes of the asteroids during the YORP cycle ("stochastic YORP"), the long-term effect of close encounters of family members with (31) Euphrosyne itself, and the effect that changing key parameters of the Yarkovsky force (such as density and thermal conductivity) has on the estimate of the family age obtained using Monte Carlo methods. Numerical simulations accounting for the interaction with the local web of secular and mean-motion resonances allow us to refine previous estimates of the family age. The cratering event that formed the Euphrosyne family most likely occurred between 560 and 1160 Myr ago, and no earlier than 1400 Myr ago when we allow for larger uncertainties in the key parameters of the Yarkovsky force.

  4. GBIS (Geodetic Bayesian Inversion Software): Rapid Inversion of InSAR and GNSS Data to Estimate Surface Deformation Source Parameters and Uncertainties

    NASA Astrophysics Data System (ADS)

    Bagnardi, M.; Hooper, A. J.

    2017-12-01

    Inversions of geodetic observational data, such as Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite System (GNSS) measurements, are often performed to obtain information about the source of surface displacements. Inverse problem theory has been applied to study magmatic processes, the earthquake cycle, and other phenomena that cause deformation of the Earth's interior and of its surface. Together with increasing improvements in data resolution, both spatial and temporal, new satellite missions (e.g., European Commission's Sentinel-1 satellites) are providing the unprecedented opportunity to access space-geodetic data within hours from their acquisition. To truly take advantage of these opportunities we must become able to interpret geodetic data in a rapid and robust manner. Here we present the open-source Geodetic Bayesian Inversion Software (GBIS; available for download at http://comet.nerc.ac.uk/gbis). GBIS is written in Matlab and offers a series of user-friendly and interactive pre- and post-processing tools. For example, an interactive function has been developed to estimate the characteristics of noise in InSAR data by calculating the experimental semi-variogram. The inversion software uses a Markov-chain Monte Carlo algorithm, incorporating the Metropolis-Hastings algorithm with adaptive step size, to efficiently sample the posterior probability distribution of the different source parameters. The probabilistic Bayesian approach allows the user to retrieve estimates of the optimal (best-fitting) deformation source parameters together with the associated uncertainties produced by errors in the data (and by scaling, errors in the model). 
The current version of GBIS (V1.0) includes fast analytical forward models for magmatic sources of different geometry (e.g., point source, finite spherical source, prolate spheroid source, penny-shaped sill-like source, and dipping dike with uniform opening) and for dipping faults with uniform slip, embedded in an isotropic elastic half-space. However, the software architecture allows the user to easily add any other analytical or numerical forward models to calculate displacements at the surface. GBIS is delivered with a detailed user manual and three synthetic datasets for testing and practical training.
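The Metropolis-Hastings sampling with adaptive step size that the record describes can be illustrated with a minimal sketch. This is not the GBIS implementation: the one-parameter Gaussian target, the starting point, and the adaptation constants are all assumptions made for the example.

```python
import math
import random

random.seed(0)

def log_posterior(theta):
    # Stand-in target: unnormalized log-density of a Gaussian posterior
    # centred on a "true" source parameter of 3.0 (an assumption).
    return -0.5 * ((theta - 3.0) / 0.5) ** 2

def metropolis_hastings(n_samples, step=1.0, target_accept=0.25):
    theta, lp = 0.0, log_posterior(0.0)
    samples, accepted = [], 0
    for i in range(1, n_samples + 1):
        proposal = theta + random.gauss(0.0, step)
        lp_new = log_posterior(proposal)
        # Metropolis acceptance rule on the log-posterior ratio.
        if lp_new >= lp or random.random() < math.exp(lp_new - lp):
            theta, lp = proposal, lp_new
            accepted += 1
        # Adapt the proposal scale toward the target acceptance rate,
        # with diminishing adaptation so the chain can settle.
        step *= math.exp((accepted / i - target_accept) / math.sqrt(i))
        samples.append(theta)
    return samples

samples = metropolis_hastings(20000)
burned = samples[5000:]          # discard burn-in
posterior_mean = sum(burned) / len(burned)
```

After burn-in, the sample mean approximates the posterior mean of the source parameter, and the spread of the samples gives the associated uncertainty.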

  5. Taguchi Optimization of Cutting Parameters in Turning AISI 1020 MS with M2 HSS Tool

    NASA Astrophysics Data System (ADS)

    Sonowal, Dharindom; Sarma, Dhrupad; Bakul Barua, Parimal; Nath, Thuleswar

    2017-08-01

    In this paper the effect of three cutting parameters, viz. spindle speed, feed and depth of cut, on the surface roughness of an AISI 1020 mild steel bar in turning was investigated and optimized to obtain minimum surface roughness. All experiments were conducted on an HMT LB25 lathe machine using an M2 HSS cutting tool. Ranges of the parameters of interest were decided through preliminary experimentation (One Factor At a Time experiments). Finally, a combined experiment was carried out using Taguchi’s L27 Orthogonal Array (OA) to study the main effects and interaction effects of all three parameters. The experimental results were analyzed with ANOVA (Analysis of Variance) on both the raw data and the S/N (signal-to-noise) ratios. Results show that spindle speed, feed and depth of cut have significant effects on both the mean and the variation of surface roughness in turning AISI 1020 mild steel. Mild two-factor interactions are observed among the aforesaid factors, with significant effects only on the mean of the output variable. From the Taguchi parameter optimization, the optimum factor combination is found to be 630 rpm spindle speed, 0.05 mm/rev feed and 1.25 mm depth of cut, with an estimated surface roughness of 2.358 ± 0.970 µm. A confirmatory experiment was conducted with the optimum factor combination to verify the results. In the confirmatory experiment the average value of surface roughness is found to be 2.408 µm, which is well within the range (0.418 µm to 4.299 µm) predicted for the confirmatory experiment.
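Since surface roughness is to be minimized, the S/N analysis above uses Taguchi's standard "smaller-the-better" ratio, S/N = -10·log10(Σy²/n). A short sketch; the replicate Ra values below are hypothetical, not taken from the paper:

```python
import math

def sn_smaller_the_better(values):
    # Taguchi signal-to-noise ratio for a "smaller-the-better"
    # response such as surface roughness Ra (in micrometres).
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical replicate Ra measurements (um) at two factor settings.
ra_setting_a = [2.41, 2.38, 2.45]
ra_setting_b = [3.10, 3.25, 3.02]

sn_a = sn_smaller_the_better(ra_setting_a)
sn_b = sn_smaller_the_better(ra_setting_b)
# The higher S/N ratio marks the better (lower and less variable) roughness.
best = "A" if sn_a > sn_b else "B"
```

In the full Taguchi analysis these per-setting S/N values are averaged per factor level to pick the optimum combination.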

  6. Estimation of genetic parameters and their sampling variances of quantitative traits in the type 2 modified augmented design

    USDA-ARS?s Scientific Manuscript database

    We proposed a method to estimate the error variance among non-replicated genotypes, thus to estimate the genetic parameters by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...

  7. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and by the need to eliminate the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include deviations from reference values of aerodynamic coefficients, inertia, center-of-gravity, atmospheric wind, etc. Propulsion parameter state elements have been included not as optional states like those just discussed, but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are non-linear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.

  8. Simple model of sickle hemoglobin

    NASA Astrophysics Data System (ADS)

    Shiryayev, Andrey; Li, Xiaofei; Gunton, J. D.

    2006-07-01

    A microscopic model is proposed for the interactions between sickle hemoglobin molecules based on information from the protein data bank. A solution of this model, however, requires accurate estimates of the interaction parameters which are currently unavailable. Therefore, as a first step toward a molecular understanding of the nucleation mechanisms in sickle hemoglobin, a Monte Carlo simulation of a simplified two patch model is carried out. A gradual transition from monomers to one dimensional chains is observed as one varies the density of molecules at fixed temperature, somewhat similar to the transition from monomers to polymer fibers in sickle hemoglobin molecules in solution. An observed competition between chain formation and crystallization for the model is also discussed. The results of the simulation of the equation of state are shown to be in excellent agreement with a theory for a model of globular proteins, for the case of two interacting sites.

  9. An Interactive Image Segmentation Method in Hand Gesture Recognition

    PubMed Central

    Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. The Gaussian Mixture Model was employed for image modelling, and iteration of the Expectation Maximization algorithm learns the parameters of the Gaussian Mixture Model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, proving that the segmentation of hand gesture images helps to improve the recognition accuracy. PMID:28134818

  10. Incommensurate phase of a triangular frustrated Heisenberg model studied via Schwinger-boson mean-field theory

    NASA Astrophysics Data System (ADS)

    Li, Peng; Su, Haibin; Dong, Hui-Ning; Shen, Shun-Qing

    2009-08-01

    We study a triangular frustrated antiferromagnetic Heisenberg model with nearest-neighbor interactions J1 and third-nearest-neighbor interactions J3 by means of Schwinger-boson mean-field theory. By setting an antiferromagnetic J3 and varying J1 from positive to negative values, we disclose the low-temperature features of its interesting incommensurate phase. The gapless dispersion of quasiparticles leads to the intrinsic T2 law of specific heat. The magnetic susceptibility is linear in temperature. The local magnetization is significantly reduced by quantum fluctuations. We address possible relevance of these results to the low-temperature properties of NiGa2S4. From a careful analysis of the incommensurate spin wavevector, the interaction parameters are estimated as J1≈-3.8755 K and J3≈14.0628 K, in order to account for the experimental data.

  11. Catchment Tomography - Joint Estimation of Surface Roughness and Hydraulic Conductivity with the EnKF

    NASA Astrophysics Data System (ADS)

    Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.

    2017-12-01

    Parameter estimation for physically based, distributed hydrological models becomes increasingly challenging with increasing model complexity. The number of parameters is usually large and the number of observations relatively small, which results in large uncertainties. Catchment tomography offers a moving transmitter-receiver concept for estimating spatially distributed hydrological parameters. In this concept, precipitation, highly variable in time and space, serves as a moving transmitter. In response to precipitation, runoff and stream discharge are generated along different paths and time scales, depending on surface and subsurface flow properties. Stream water levels are thus an integrated signal of upstream parameters, measured by stream gauges which serve as the receivers. These stream water level observations are assimilated into a distributed hydrological model, which is forced with high-resolution, radar-based precipitation estimates. Applying a joint state-parameter update with the Ensemble Kalman Filter, the spatially distributed Manning's roughness coefficient and the saturated hydraulic conductivity are estimated jointly. The sequential data assimilation continuously integrates new information into the parameter estimation problem, especially during precipitation events; every precipitation event constrains the possible parameter space. In this approach, forward simulations are performed with ParFlow, a variably saturated subsurface and overland flow model. ParFlow is coupled to the Parallel Data Assimilation Framework for the data assimilation and the joint state-parameter update. In synthetic, 3-dimensional experiments including surface and subsurface flow, hydraulic conductivity and the Manning's coefficient are efficiently estimated with the catchment tomography approach.
A joint update of the Manning's coefficient and hydraulic conductivity tends to improve the parameter estimation compared to a single-parameter update, especially in cases of biased initial parameter ensembles. The computational experiments additionally show up to which degree of spatial heterogeneity, and of uncertainty in the subsurface flow parameters, the Manning's coefficient and hydraulic conductivity can still be estimated efficiently.
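The joint state-parameter update with an Ensemble Kalman Filter can be sketched in a toy linear setting. This only illustrates the analysis step on an augmented [state, parameter] vector; the scalar forward model and all numbers are assumptions for the example, not the ParFlow setup.

```python
import random

random.seed(3)

def enkf_update(ensemble, obs, obs_std, h):
    # EnKF analysis step for augmented members [state, parameter]:
    # each member is shifted by gain * (perturbed obs - h(member)),
    # with the gain built from ensemble covariances.
    m = len(ensemble)
    dim = len(ensemble[0])
    hx = [h(x) for x in ensemble]
    hbar = sum(hx) / m
    xbar = [sum(x[k] for x in ensemble) / m for k in range(dim)]
    # Cross-covariance of the augmented state with the predicted
    # observation, and the predicted-observation variance.
    pxh = [sum((x[k] - xbar[k]) * (hx[i] - hbar)
               for i, x in enumerate(ensemble)) / (m - 1)
           for k in range(dim)]
    phh = sum((v - hbar) ** 2 for v in hx) / (m - 1)
    gain = [p / (phh + obs_std ** 2) for p in pxh]
    out = []
    for x in ensemble:
        d = obs + random.gauss(0.0, obs_std) - h(x)  # perturbed obs
        out.append([x[k] + gain[k] * d for k in range(dim)])
    return out

# Toy problem: water level = K * forcing; true K = 2, biased prior.
true_k, forcing = 2.0, 1.5
obs = true_k * forcing
ensemble = [[0.0, random.gauss(0.5, 0.5)] for _ in range(200)]  # [level, K]
for x in ensemble:
    x[0] = x[1] * forcing  # forecast step with each member's own K
ensemble = enkf_update(ensemble, obs, obs_std=0.05, h=lambda x: x[0])
k_mean = sum(x[1] for x in ensemble) / len(ensemble)
```

Because the parameter is carried in the augmented vector, assimilating the water-level observation corrects the parameter ensemble toward the true value, mirroring how each precipitation event constrains the parameter space.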

  12. Quantifying Transmission Heterogeneity Using Both Pathogen Phylogenies and Incidence Time Series

    PubMed Central

    Li, Lucy M.; Grassly, Nicholas C.; Fraser, Christophe

    2017-01-01

    Abstract Heterogeneity in individual-level transmissibility can be quantified by the dispersion parameter k of the offspring distribution. Quantifying heterogeneity is important as it affects other parameter estimates, it modulates the degree of unpredictability of an epidemic, and it needs to be accounted for in models of infection control. Aggregated data such as incidence time series are often not sufficiently informative to estimate k. Incorporating phylogenetic analysis can help to estimate k concurrently with other epidemiological parameters. We have developed an inference framework that uses particle Markov Chain Monte Carlo to estimate k and other epidemiological parameters using both incidence time series and the pathogen phylogeny. Using the framework to fit a modified compartmental transmission model that includes the parameter k to simulated data, we found that more accurate and less biased estimates of the reproductive number were obtained by combining epidemiological and phylogenetic analyses. However, k was most accurately estimated using pathogen phylogeny alone. Accurately estimating k was necessary for unbiased estimates of the reproductive number, but it did not affect the accuracy of reporting probability and epidemic start date estimates. We further demonstrated that inference was possible in the presence of phylogenetic uncertainty by sampling from the posterior distribution of phylogenies. Finally, we used the inference framework to estimate transmission parameters from epidemiological and genetic data collected during a poliovirus outbreak. Despite the large degree of phylogenetic uncertainty, we demonstrated that incorporating phylogenetic data in parameter inference improved the accuracy and precision of estimates. PMID:28981709
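The dispersion parameter k above is conventionally defined through a negative binomial offspring distribution with mean R0 and variance R0 + R0²/k, so small k means strong superspreading. A small sketch of that relationship, with illustrative values only:

```python
import math
import random

random.seed(1)

def sample_offspring(r0, k):
    # Negative-binomial offspring draw with mean r0 and dispersion k,
    # built as a Poisson draw whose rate is gamma distributed
    # (the standard mixture construction).
    rate = random.gammavariate(k, r0 / k)
    # Poisson draw by inversion (the stdlib has no Poisson sampler).
    threshold = math.exp(-rate)
    n, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return n
        n += 1

def mean_and_variance(r0, k, n=20000):
    draws = [sample_offspring(r0, k) for _ in range(n)]
    mean = sum(draws) / n
    var = sum((d - mean) ** 2 for d in draws) / n
    return mean, var

# Same R0 = 2 but different heterogeneity: variance = r0 + r0**2 / k,
# so the low-k population is far more overdispersed.
mean_lo_k, var_lo_k = mean_and_variance(2.0, 0.2)
mean_hi_k, var_hi_k = mean_and_variance(2.0, 5.0)
```

Both populations have the same reproductive number, which is why aggregated incidence alone struggles to identify k; the difference shows up in the variance of the offspring counts, which phylogenies help expose.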

  13. Canal–Otolith Interactions and Detection Thresholds of Linear and Angular Components During Curved-Path Self-Motion

    PubMed Central

    MacNeilage, Paul R.; Turner, Amanda H.

    2010-01-01

    Gravitational signals arising from the otolith organs and vertical plane rotational signals arising from the semicircular canals interact extensively for accurate estimation of tilt and inertial acceleration. Here we used a classical signal detection paradigm to examine perceptual interactions between otolith and horizontal semicircular canal signals during simultaneous rotation and translation on a curved path. In a rotation detection experiment, blindfolded subjects were asked to detect the presence of angular motion in blocks where half of the trials were pure nasooccipital translation and half were simultaneous translation and yaw rotation (curved-path motion). In separate, translation detection experiments, subjects were also asked to detect either the presence or the absence of nasooccipital linear motion in blocks, in which half of the trials were pure yaw rotation and half were curved path. Rotation thresholds increased slightly, but not significantly, with concurrent linear velocity magnitude. Yaw rotation detection threshold, averaged across all conditions, was 1.45 ± 0.81°/s (3.49 ± 1.95°/s2). Translation thresholds, on the other hand, increased significantly with increasing magnitude of concurrent angular velocity. Absolute nasooccipital translation detection threshold, averaged across all conditions, was 2.93 ± 2.10 cm/s (7.07 ± 5.05 cm/s2). These findings suggest that conscious perception might not have independent access to separate estimates of linear and angular movement parameters during curved-path motion. Estimates of linear (and perhaps angular) components might instead rely on integrated information from canals and otoliths. Such interaction may underlie previously reported perceptual errors during curved-path motion and may originate from mechanisms that are specialized for tilt-translation processing during vertical plane rotation. PMID:20554843

  14. The Detectability of Radio Auroral Emission from Proxima b

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burkhart, Blakesley; Loeb, Abraham

    Magnetically active stars possess stellar winds whose interactions with planetary magnetic fields produce radio auroral emission. We examine the detectability of radio auroral emission from Proxima b, the closest known exosolar planet orbiting our nearest neighboring star, Proxima Centauri. Using the radiometric Bode’s law, we estimate the radio flux produced by the interaction of Proxima Centauri’s stellar wind and Proxima b’s magnetosphere for different planetary magnetic field strengths. For plausible planetary masses, Proxima b could produce radio fluxes of 100 mJy or more in a frequency range of 0.02–3 MHz for planetary magnetic field strengths of 0.007–1 G. According to recent MHD models that vary the orbital parameters of the system, this emission is expected to be highly variable. This variability is due to large fluctuations in the size of Proxima b’s magnetosphere as it crosses the equatorial streamer regions of dense stellar wind and high dynamic pressure. Using the MHD model of Garraffo et al. for the variation of the magnetosphere radius during the orbit, we estimate that the observed radio flux can vary by nearly an order of magnitude over the 11.2-day period of Proxima b. The detailed amplitude variation depends on the stellar wind, orbital, and planetary magnetic field parameters. We discuss observing strategies for proposed future space-based observatories to reach frequencies below the ionospheric cutoff (∼10 MHz), which would be required to detect the signal we investigate.
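The flux side of such detectability arguments follows from spreading the emitted radio power over the emission bandwidth and the distance to the source. A hedged sketch: the emitted power and bandwidth below are round illustrative numbers, not the paper's fitted values.

```python
import math

def radio_flux_mjy(p_radio_w, distance_pc, bandwidth_hz):
    # Flux density for isotropically emitted radio power spread over
    # the emission bandwidth: S = P / (4*pi*d^2 * delta_nu).
    pc_in_m = 3.086e16             # metres per parsec
    d = distance_pc * pc_in_m
    s = p_radio_w / (4 * math.pi * d ** 2 * bandwidth_hz)  # W m^-2 Hz^-1
    return s / 1e-29               # 1 mJy = 1e-29 W m^-2 Hz^-1

# Hypothetical Proxima-b-like numbers: ~1.3 pc distance, ~1 MHz
# bandwidth, and an assumed 2e13 W of auroral radio power.
flux = radio_flux_mjy(p_radio_w=2e13, distance_pc=1.3, bandwidth_hz=1e6)
```

With these assumed inputs the flux comes out near the ~100 mJy scale quoted in the abstract; the radiometric Bode's law enters upstream, by relating the emitted power to the wind power intercepted by the magnetosphere.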

  15. Using the MWC model to describe heterotropic interactions in hemoglobin

    PubMed Central

    Rapp, Olga

    2017-01-01

    Hemoglobin is a classical model allosteric protein. Research on hemoglobin parallels the development of key cooperativity and allostery concepts, such as the ‘all-or-none’ Hill formalism, the stepwise Adair binding formulation and the concerted Monod-Wyman-Changeux (MWC) allosteric model. While it is clear that the MWC model adequately describes the cooperative binding of oxygen to hemoglobin, rationalizing the effects of H+, CO2 or organophosphate ligands on hemoglobin-oxygen saturation using the same model remains controversial. According to the MWC model, allosteric ligands exert their effect on protein function by modulating the quaternary conformational transition of the protein. However, data fitting analysis of hemoglobin oxygen saturation curves in the presence or absence of inhibitory ligands persistently revealed effects on both relative oxygen affinity (c) and conformational changes (L), elementary MWC parameters. The recent realization that data fitting analysis using the traditional MWC model equation may not provide reliable estimates for L and c thus calls for a re-examination of previous data using alternative fitting strategies. In the current manuscript, we present two simple strategies for obtaining reliable estimates for MWC mechanistic parameters of hemoglobin steady-state saturation curves in cases of both evolutionary and physiological variations. Our results suggest that the simple MWC model provides a reasonable description that can also account for heterotropic interactions in hemoglobin. The results, moreover, offer a general roadmap for successful data fitting analysis using the MWC model. PMID:28793329
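The MWC saturation function referred to above has a standard closed form for an n-site protein, with L the T/R conformational equilibrium constant and c the ratio of R- and T-state ligand affinities. The parameter values below are illustrative, not fitted hemoglobin values:

```python
def mwc_saturation(alpha, L, c, n=4):
    # Fractional saturation of an n-site MWC protein:
    # alpha = [ligand] normalized by the R-state dissociation constant,
    # L = [T0]/[R0], c = K_R/K_T.
    num = (alpha * (1 + alpha) ** (n - 1)
           + L * c * alpha * (1 + c * alpha) ** (n - 1))
    den = (1 + alpha) ** n + L * (1 + c * alpha) ** n
    return num / den

# Illustrative hemoglobin-like parameters (assumptions, not fits).
L0, c0 = 1e5, 0.01
low = mwc_saturation(0.5, L0, c0)       # low oxygen
high = mwc_saturation(20.0, L0, c0)     # high oxygen
# A heterotropic inhibitor (e.g. H+ or organophosphates) that
# stabilizes the T state acts, in MWC terms, by increasing L.
inhibited = mwc_saturation(20.0, L0 * 100, c0)
```

This is exactly the controversy the record describes: whether inhibitory ligands shift only L, as the pure MWC picture requires, or also c.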

  16. Estimating the Properties of Hard X-Ray Solar Flares by Constraining Model Parameters

    NASA Technical Reports Server (NTRS)

    Ireland, J.; Tolbert, A. K.; Schwartz, R. A.; Holman, G. D.; Dennis, B. R.

    2013-01-01

    We wish to better constrain the properties of solar flares by exploring how parameterized models of solar flares interact with uncertainty estimation methods. We compare four different methods of calculating uncertainty estimates in fitting parameterized models to Ramaty High Energy Solar Spectroscopic Imager X-ray spectra, considering only statistical sources of error. Three of the four methods are based on estimating the scale-size of the minimum in a hypersurface formed by the weighted sum of the squares of the differences between the model fit and the data as a function of the fit parameters, and are implemented as commonly practiced. The fourth method is also based on the difference between the data and the model, but instead uses Bayesian data analysis and Markov chain Monte Carlo (MCMC) techniques to calculate an uncertainty estimate. Two flare spectra are modeled: one from the Geostationary Operational Environmental Satellite X1.3 class flare of 2005 January 19, and the other from the X4.8 flare of 2002 July 23. We find that the four methods give approximately the same uncertainty estimates for the 2005 January 19 spectral fit parameters, but lead to very different uncertainty estimates for the 2002 July 23 spectral fit. This is because each method implements different analyses of the hypersurface, yielding method-dependent results that can differ greatly depending on the shape of the hypersurface. The hypersurface arising from the 2005 January 19 analysis is consistent with a normal distribution; therefore, the assumptions behind the three non-Bayesian uncertainty estimation methods are satisfied and similar estimates are found. The 2002 July 23 analysis shows that the hypersurface is not consistent with a normal distribution, indicating that the assumptions behind the three non-Bayesian uncertainty estimation methods are not satisfied, leading to differing estimates of the uncertainty.
We find that the shape of the hypersurface is crucial in understanding the output from each uncertainty estimation technique, and that a crucial factor determining the shape of hypersurface is the location of the low-energy cutoff relative to energies where the thermal emission dominates. The Bayesian/MCMC approach also allows us to provide detailed information on probable values of the low-energy cutoff, Ec, a crucial parameter in defining the energy content of the flare-accelerated electrons. We show that for the 2002 July 23 flare data, there is a 95% probability that Ec lies below approximately 40 keV, and a 68% probability that it lies in the range 7-36 keV. Further, the low-energy cutoff is more likely to be in the range 25-35 keV than in any other 10 keV wide energy range. The low-energy cutoff for the 2005 January 19 flare is more tightly constrained to 107 +/- 4 keV with 68% probability.

  17. PopHuman: the human population genomics browser

    PubMed Central

    Mulet, Roger; Villegas-Mirón, Pablo; Hervas, Sergi; Sanz, Esteve; Velasco, Daniel; Bertranpetit, Jaume; Laayouni, Hafid

    2018-01-01

    Abstract The 1000 Genomes Project (1000GP) represents the most comprehensive world-wide nucleotide variation data set so far in humans, providing the sequencing and analysis of 2504 genomes from 26 populations and reporting >84 million variants. The availability of this sequence data provides the human lineage with an invaluable resource for population genomics studies, allowing the testing of molecular population genetics hypotheses and eventually the understanding of the evolutionary dynamics of genetic variation in human populations. Here we present PopHuman, a new population genomics-oriented genome browser based on JBrowse that allows the interactive visualization and retrieval of an extensive inventory of population genetics metrics. Efficient and reliable parameter estimates have been computed using a novel pipeline that faces the unique features and limitations of the 1000GP data, and include a battery of nucleotide variation measures, divergence and linkage disequilibrium parameters, as well as different tests of neutrality, estimated in non-overlapping windows along the chromosomes and in annotated genes for all 26 populations of the 1000GP. PopHuman is open and freely available at http://pophuman.uab.cat. PMID:29059408
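Per-window population-genetics summaries of the kind PopHuman reports can be illustrated with nucleotide diversity (π) computed in non-overlapping windows. A minimal sketch on toy haplotypes, not the PopHuman pipeline:

```python
def nucleotide_diversity(haplotypes):
    # Per-site nucleotide diversity (pi): the mean number of pairwise
    # differences between sequences, averaged over sites.
    n = len(haplotypes)
    length = len(haplotypes[0])
    diffs, pairs = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            diffs += sum(a != b for a, b in zip(haplotypes[i], haplotypes[j]))
            pairs += 1
    return diffs / (pairs * length)

def windowed_pi(haplotypes, window):
    # Non-overlapping windows along the sequence, mirroring the
    # browser's per-window summary statistics.
    length = len(haplotypes[0])
    return [nucleotide_diversity([h[s:s + window] for h in haplotypes])
            for s in range(0, length - window + 1, window)]

# Three toy 8-bp haplotypes split into two 4-bp windows.
haps = ["AAAAGGGG", "AAAAGGGC", "AAATGGGC"]
pi_windows = windowed_pi(haps, window=4)
```

Real pipelines must additionally handle missing data, multi-allelic sites and chromosome-scale window counts, which is where the efficiency concerns mentioned in the record come in.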

  18. An approach to and web-based tool for infectious disease outbreak intervention analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daughton, Ashlynn R.; Generous, Nicholas; Priedhorsky, Reid

    Infectious diseases are a leading cause of death globally. Decisions surrounding how to control an infectious disease outbreak currently rely on a subjective process involving surveillance and expert opinion. However, there are many situations where neither may be available. Modeling can fill gaps in the decision making process by using available data to provide quantitative estimates of outbreak trajectories. Effective reduction of the spread of infectious diseases can be achieved through collaboration between the modeling community and the public health policy community. However, such collaboration is rare, resulting in a lack of models that meet the needs of the public health community. Here we show a Susceptible-Infectious-Recovered (SIR) model modified to include control measures that allows parameter ranges, rather than parameter point estimates, and includes a web user interface for broad adoption. We apply the model to three diseases, measles, norovirus and influenza, to show the feasibility of its use and describe a research agenda to further promote interactions between decision makers and the modeling community.
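A minimal SIR model with a control measure can be written as a forward-Euler integration in which an intervention scales down the transmission rate from a chosen day. The parameter values are hypothetical, and this is a sketch of the general idea rather than the web tool's model:

```python
def sir_with_control(beta, gamma, control_start, control_effect,
                     s0=0.999, i0=0.001, days=200, dt=0.1):
    # Forward-Euler SIR integration; from day `control_start` an
    # intervention reduces transmission by `control_effect` (0..1).
    s, i, r = s0, i0, 0.0
    peak, t = i, 0.0
    for _ in range(int(days / dt)):
        b = beta * (1 - control_effect) if t >= control_start else beta
        ds = -b * s * i
        di = b * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        peak = max(peak, i)
        t += dt
    return peak, r  # peak prevalence and final epidemic size

# Hypothetical parameters (R0 = 2.5): a halving of transmission from
# day 10 lowers both the peak and the final size.
peak_none, size_none = sir_with_control(0.5, 0.2, control_start=1e9,
                                        control_effect=0.0)
peak_ctrl, size_ctrl = sir_with_control(0.5, 0.2, control_start=10,
                                        control_effect=0.5)
```

Running such a model over ranges of beta, gamma and control timing, rather than point estimates, yields the trajectory envelopes a decision maker can compare.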

  19. An approach to and web-based tool for infectious disease outbreak intervention analysis

    DOE PAGES

    Daughton, Ashlynn R.; Generous, Nicholas; Priedhorsky, Reid; ...

    2017-04-18

    Infectious diseases are a leading cause of death globally. Decisions surrounding how to control an infectious disease outbreak currently rely on a subjective process involving surveillance and expert opinion. However, there are many situations where neither may be available. Modeling can fill gaps in the decision making process by using available data to provide quantitative estimates of outbreak trajectories. Effective reduction of the spread of infectious diseases can be achieved through collaboration between the modeling community and the public health policy community. However, such collaboration is rare, resulting in a lack of models that meet the needs of the public health community. Here we show a Susceptible-Infectious-Recovered (SIR) model modified to include control measures that allows parameter ranges, rather than parameter point estimates, and includes a web user interface for broad adoption. We apply the model to three diseases, measles, norovirus and influenza, to show the feasibility of its use and describe a research agenda to further promote interactions between decision makers and the modeling community.

  20. In silico characterization of cell-cell interactions using a cellular automata model of cell culture.

    PubMed

    Kihara, Takanori; Kashitani, Kosuke; Miyake, Jun

    2017-07-14

    Cell proliferation is a key characteristic of eukaryotic cells. During cell proliferation, cells interact with each other. In this study, we developed a cellular automata model to estimate cell-cell interactions using experimentally obtained images of cultured cells. We used four types of cells: HeLa cells, human osteosarcoma (HOS) cells, rat mesenchymal stem cells (MSCs), and rat smooth muscle A7r5 cells. These cells were cultured and stained daily. The obtained cell images were binarized and clipped into squares containing about 10^4 cells. These cells showed characteristic cell proliferation patterns. The growth curves of these cells were generated from the cell proliferation images and we determined the doubling time of these cells from the growth curves. We developed a simple cellular automata system with an easily accessible graphical user interface. This system has five variable parameters, namely, initial cell number, doubling time, motility, cell-cell adhesion, and cell-cell contact inhibition (of proliferation). Of these parameters, we obtained initial cell numbers and doubling times experimentally. We set the motility to a constant value because its effect on our simulation was limited. Therefore, we simulated cell proliferation behavior with cell-cell adhesion and cell-cell contact inhibition as variables. By comparing growth curves and proliferation cell images, we succeeded in determining the cell-cell interaction properties of each cell type. Simulated HeLa and HOS cells exhibited low cell-cell adhesion and weak cell-cell contact inhibition. Simulated MSCs exhibited high cell-cell adhesion and positive cell-cell contact inhibition. Simulated A7r5 cells exhibited low cell-cell adhesion and strong cell-cell contact inhibition. These simulated results correlated with the experimental growth curves and proliferation images. Our simulation approach is an easy method for evaluating the cell-cell interaction properties of cells.
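A toy cellular automaton in this spirit can show how contact inhibition changes proliferation. This sketch is not the authors' system: the grid size, division probability and inhibition rule are assumptions made for the illustration.

```python
import random

random.seed(2)

def step(grid, p_divide, contact_inhibition):
    # One update of a toy proliferation automaton: an occupied site
    # divides into a random empty 4-neighbour with probability
    # p_divide, damped by local crowding when inhibition is on.
    n = len(grid)
    new = [row[:] for row in grid]
    for y in range(n):
        for x in range(n):
            if not grid[y][x]:
                continue
            nbrs = [(x + dx, y + dy)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= x + dx < n and 0 <= y + dy < n]
            empty = [(i, j) for i, j in nbrs if not grid[j][i]]
            if not empty:
                continue
            p = p_divide
            if contact_inhibition:
                p *= len(empty) / len(nbrs)  # crowded cells divide less
            if random.random() < p:
                i, j = random.choice(empty)
                new[j][i] = 1
    return new

def grow(contact_inhibition, steps=25, n=31):
    grid = [[0] * n for _ in range(n)]
    grid[n // 2][n // 2] = 1  # single seed cell
    for _ in range(steps):
        grid = step(grid, 0.5, contact_inhibition)
    return sum(map(sum, grid))  # final cell count

cells_free = grow(contact_inhibition=False)
cells_inhibited = grow(contact_inhibition=True)
```

Fitting the free parameters of such a rule set to the measured growth curves is, in essence, how the record's simulations recover adhesion and inhibition properties per cell type.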
