Sample records for model fitting method

  1. Curve fitting methods for solar radiation data modeling

    NASA Astrophysics Data System (ADS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. Error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods are used as a starting point for constructing a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.
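
    A minimal sketch of this kind of fit in Python with SciPy (not the authors' code; the synthetic irradiance data, starting values, and units are invented for illustration):

    ```python
    # Two-term Gaussian fit to synthetic "solar radiation" data, reporting
    # RMSE and R2 as the goodness-of-fit statistics named in the record.
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2(t, a1, b1, c1, a2, b2, c2):
        """a1*exp(-((t-b1)/c1)^2) + a2*exp(-((t-b2)/c2)^2)"""
        return (a1 * np.exp(-((t - b1) / c1) ** 2)
                + a2 * np.exp(-((t - b2) / c2) ** 2))

    t = np.linspace(7, 19, 25)                       # hour of day (assumed)
    y = 900 * np.exp(-((t - 13) / 3) ** 2) + 30 * np.random.rand(t.size)  # W/m^2, invented

    popt, _ = curve_fit(gauss2, t, y, p0=[800, 13, 3, 100, 10, 2], maxfev=10000)
    resid = y - gauss2(t, *popt)
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"RMSE = {rmse:.2f} W/m^2, R2 = {r2:.4f}")
    ```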

  2. Curve fitting methods for solar radiation data modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. Error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods are used as a starting point for constructing a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.

  3. A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad

    2016-09-01

    Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies because if the variogram model parameters are tainted with uncertainty, that uncertainty will spread into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases the automatic fitting method, which combines geostatistical principles with optimization techniques, is used to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) in the automatic fitting. Also, since the variogram model function (γ) and the number of structures (m) also affect the model quality, a program has been written in MATLAB that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single/multi-structured fitted models, the cross-validation method is used, and the best model is presented to the user as the output. To check the capability of the proposed objective function and procedure, three case studies are presented.
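
    The record describes a weighted-least-squares objective minimized by simulated annealing; the sketch below reproduces that combination in Python (SciPy's dual_annealing standing in for the authors' MATLAB program, with an invented experimental variogram and Cressie-style weights as an assumption):

    ```python
    # Fit a spherical variogram model (nugget, partial sill, range) by
    # minimizing a weighted least-squares objective with simulated annealing.
    import numpy as np
    from scipy.optimize import dual_annealing

    h = np.array([10, 20, 30, 40, 50, 60, 80, 100.0])            # lag distances
    gamma_exp = np.array([2.1, 3.8, 5.0, 5.9, 6.4, 6.7, 6.9, 7.0])
    npairs = np.array([400, 380, 350, 300, 260, 220, 150, 100])  # pairs per lag

    def spherical(h, nugget, sill, a):
        g = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return np.where(h < a, g, nugget + sill)

    def wls(params):
        model = spherical(h, *params)
        # more pairs and smaller model values -> larger weight
        w = npairs / np.maximum(model, 1e-12) ** 2
        return np.sum(w * (gamma_exp - model) ** 2)

    res = dual_annealing(wls, bounds=[(0, 5), (0, 10), (1, 200)], seed=0)
    print("nugget, partial sill, range =", res.x)
    ```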

  4. Development of an Advanced Respirator Fit-Test Headform

    PubMed Central

    Bergman, Michael S.; Zhuang, Ziqing; Hanson, David; Heimbuch, Brian K.; McDonald, Michael J.; Palmiero, Andrew J.; Shaffer, Ronald E.; Harnish, Delbert; Husband, Michael; Wander, Joseph D.

    2015-01-01

    Improved respirator test headforms are needed to measure the fit of N95 filtering facepiece respirators (FFRs) for protection studies against viable airborne particles. A Static (i.e., non-moving, non-speaking) Advanced Headform (StAH) was developed for evaluating the fit of N95 FFRs. The StAH was developed based on the anthropometric dimensions of a digital headform reported by the National Institute for Occupational Safety and Health (NIOSH) and has a silicone polymer skin with defined local tissue thicknesses. Quantitative fit factor evaluations were performed on seven N95 FFR models of various sizes and designs. Donnings were performed with and without a pre-test leak checking method. For each method, four replicate FFR samples of each of the seven models were tested with two donnings per replicate, resulting in a total of 56 tests per donning method. Each fit factor evaluation comprised three 86-sec exercises: “Normal Breathing” (NB, 11.2 liters per min (lpm)), “Deep Breathing” (DB, 20.4 lpm), then NB again. A fit factor for each exercise and an overall test fit factor were obtained. Analysis of variance methods were used to identify statistical differences among fit factors (analyzed as logarithms) for different FFR models, exercises, and testing methods. For each FFR model and for each testing method, the NB and DB fit factor data were not significantly different (P > 0.05). Significant differences were seen in the overall exercise fit factor data for the two donning methods among all FFR models (pooled data) and in the overall exercise fit factor data for the two testing methods within certain models. Utilization of the leak checking method improved the rate of obtaining overall exercise fit factors ≥100. The FFR models, which are expected to achieve overall fit factors ≥100 on human subjects, achieved overall exercise fit factors ≥100 on the StAH. Further research is needed to evaluate the correlation of FFRs fitted on the StAH to FFRs fitted on people. PMID:24369934

  5. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few studies have applied these methods to LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. In a background correction simulation, the spline interpolation method achieved a larger signal-to-background ratio (SBR) than polynomial fitting, Lorentz fitting and the model-free method, and all of these methods produced larger SBR values than before correction (the SBR before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquired large SBR values, whereas polynomial fitting and the model-free method obtained low SBR values. All of the background correction methods improved the quantitative results for Cu relative to those acquired before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and the model-free method. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
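
    A minimal sketch of the core idea in Python (not the authors' implementation): anchor a cubic spline at local minima of the spectrum, which mostly sample the smooth background, subtract it, and report an SBR. The synthetic spectrum and the choice of local minima as spline knots are assumptions:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelextrema

    wl = np.linspace(200, 800, 2000)                          # wavelength (nm)
    background = 50 + 0.05 * (wl - 200)                       # slowly varying baseline
    peak = 300 * np.exp(-((wl - 324.7) / 0.5) ** 2)           # synthetic Cu I line
    spectrum = background + peak + np.random.normal(0, 2, wl.size)

    # local minima mostly lie on the continuous background
    idx = argrelextrema(spectrum, np.less, order=25)[0]
    baseline = CubicSpline(wl[idx], spectrum[idx])(wl)

    corrected = spectrum - baseline
    sbr = corrected.max() / np.mean(baseline)                 # signal-to-background ratio
    print(f"SBR after spline correction: {sbr:.1f}")
    ```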

  6. Induced subgraph searching for geometric model fitting

    NASA Astrophysics Data System (ADS)

    Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi

    2017-11-01

    In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on the data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs including the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the searching process is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce an energy evaluation function to determine the number of model instances in the data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.

  7. Sustainable Sizing.

    PubMed

    Robinette, Kathleen M; Veitch, Daisy

    2016-08-01

    To provide a review of sustainable sizing practices that reduce waste, increase sales, and simultaneously produce safer, better fitting, accommodating products. Sustainable sizing involves a set of methods good for both the environment (sustainable environment) and business (sustainable business). Sustainable sizing methods reduce (1) materials used, (2) the number of sizes or adjustments, and (3) the amount of product unsold or marked down for sale. This reduces waste and cost. The methods can also increase sales by fitting more people in the target market and produce happier, loyal customers with better fitting products. This is a mini-review of methods that result in more sustainable sizing practices. It also reviews and contrasts current statistical and modeling practices that lead to poor fit and sizing. Fit-mapping and the use of cases are two excellent methods suited for creating sustainable sizing, when real people (vs. virtual people) are used. These methods are described and reviewed. Evidence presented supports the view that virtual fitting with simulated people and products is not yet effective. Fit-mapping and cases with real people and actual products result in good design and products that are fit for person, fit for purpose, with good accommodation and comfortable, optimized sizing. While virtual models have been shown to be ineffective for predicting or representing fit, there is an opportunity to improve them by adding fit-mapping data to the models. This will require saving fit data, product data, anthropometry, and demographics in a standardized manner. For this success to extend to the wider design community, the development of a standardized method of data collection for fit-mapping with a globally shared fit-map database is needed. It will enable the world community to build knowledge of fit and accommodation and generate effective virtual fitting for the future. A standardized method of data collection that tests products' fit methodically and quantitatively will increase our predictive power to determine fit and accommodation, thereby facilitating improved, effective design. These methods apply to all products people wear, use, or occupy. © 2016, Human Factors and Ergonomics Society.

  8. [Study on the effect of different impression methods on the marginal fit of all-ceramic crowns].

    PubMed

    Zhan, Lilin; Zeng, Liwei; Chen, Ping; Liao, Lan; Li, Shiyue; Liu, Renying

    2015-08-01

    To investigate the effect of three different impression methods on the marginal fit of all-ceramic crowns: scanning of silicone rubber impressions, scanning of cast models, and direct optical impression. A polymethyl methacrylate (PMMA) standard model of a mandibular first molar was prepared and duplicated in 16 models. The all-ceramic crowns were prepared using the three different impression methods. Accurate impressions were made using silicone rubber, and the cast models were obtained. The PMMA models, silicone rubber impressions, and cast models were scanned, and digital models of the three groups were obtained to produce 48 zirconia all-ceramic crowns with computer aided design/computer aided manufacture. The marginal fit of these groups was measured by silicone rubber gap impression. Statistical analysis was performed with SPSS 17.0 software. The marginal fit of the direct optical impression, silicone rubber impression, and cast model groups was (69.18±9.47), (81.04±10.88), and (84.42±9.96) µm, respectively. A significant difference was observed between the marginal fit of the direct optical impression group and the other groups (P<0.05). No statistically significant difference was observed between the marginal fit of the silicone rubber impression group and the cast model group (P>0.05). All marginal measurement sites obtained with the three impression scanning methods are clinically acceptable. The silicone rubber impression scanning method can be used for all-ceramic restorations.

  9. Comparison of 1-step and 2-step methods of fitting microbiological models.

    PubMed

    Jewell, Keith

    2012-11-15

    Previous conclusions that a 1-step fitting method gives more precise coefficients than the traditional 2-step method are confirmed by application to three different data sets. It is also shown that, in comparison to 2-step fits, the 1-step method gives better fits to the data (often substantially) with directly interpretable regression diagnostics and standard errors. The improvement is greatest at extremes of environmental conditions and it is shown that 1-step fits can indicate inappropriate functional forms when 2-step fits do not. 1-step fits are better at estimating primary parameters (e.g. lag, growth rate) as well as concentrations, and are much more data efficient, allowing the construction of more robust models on smaller data sets. The 1-step method can be straightforwardly applied to any data set for which the 2-step method can be used and additionally to some data sets where the 2-step method fails. A 2-step approach is appropriate for visual assessment in the early stages of model development, and may be a convenient way to generate starting values for a 1-step fit, but the 1-step approach should be used for any quantitative assessment. Copyright © 2012 Elsevier B.V. All rights reserved.
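
    A hedged sketch of what "1-step" means in practice (not the paper's code): the primary growth model and the secondary (environmental) model are fitted jointly to all curves at once, rather than fitting each curve first and then regressing the fitted rates. The logistic primary model, square-root secondary model, and data below are assumptions for illustration:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def one_step(X, logN0, logNmax, b, Tmin):
        t, T = X
        mu = (b * (T - Tmin)) ** 2          # Ratkowsky square-root secondary model
        # logistic primary model in log10 counts
        return logNmax - np.log10(1 + (10 ** (logNmax - logN0) - 1) * np.exp(-mu * t))

    # stacked observations across three temperatures (invented data)
    t = np.tile(np.linspace(0, 48, 13), 3)                   # hours
    T = np.repeat([10.0, 15.0, 20.0], 13)                    # deg C
    y = one_step((t, T), 3, 9, 0.02, 2) + np.random.normal(0, 0.1, t.size)

    popt, _ = curve_fit(one_step, (t, T), y, p0=[3, 9, 0.02, 2])
    print("logN0, logNmax, b, Tmin =", popt)
    ```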

  10. An NCME Instructional Module on Item-Fit Statistics for Item Response Theory Models

    ERIC Educational Resources Information Center

    Ames, Allison J.; Penfield, Randall D.

    2015-01-01

    Drawing valid inferences from item response theory (IRT) models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. This instructional module provides an overview of methods used for evaluating the fit of IRT models. Upon completing…

  11. A Method For Modeling Discontinuities In A Microwave Coaxial Transmission Line

    NASA Technical Reports Server (NTRS)

    Otoshi, Tom Y.

    1994-01-01

    A methodology for modeling discontinuities in a coaxial transmission line is presented. The method uses a non-linear least squares fit program to optimize the fit between a theoretical model and experimental data. When the method was applied to modeling discontinuities in a damaged S-band antenna cable, excellent agreement was obtained.

  12. Fast and exact Newton and Bidirectional fitting of Active Appearance Models.

    PubMed

    Kossaifi, Jean; Tzimiropoulos, Yorgos; Pantic, Maja

    2016-12-21

    Active Appearance Models (AAMs) are generative models of shape and appearance that have proven very attractive for their ability to handle wide changes in illumination, pose and occlusion when trained in the wild, while not requiring the large training datasets of regression-based or deep learning methods. The problem of fitting an AAM is usually formulated as a non-linear least squares one and the main way of solving it is a standard Gauss-Newton algorithm. In this paper we extend Active Appearance Models in two ways: we first extend the Gauss-Newton framework by formulating a bidirectional fitting method that deforms both the image and the template to fit a new instance. We then formulate a second order method by deriving an efficient Newton method for AAM fitting. We derive both methods in a unified framework for two types of Active Appearance Models, holistic and part-based, and additionally show how to exploit the structure in the problem to derive fast yet exact solutions. We perform a thorough evaluation of all algorithms on three challenging and recently annotated in-the-wild datasets, and investigate fitting accuracy, convergence properties and the influence of noise in the initialisation. We compare our proposed methods to other algorithms and show that they yield state-of-the-art results, outperforming other methods while having superior convergence properties.

  13. Model misspecification detection by means of multiple generator errors, using the observed potential map.

    PubMed

    Zhang, Z; Jewett, D L

    1994-01-01

    Due to model misspecification, currently-used Dipole Source Localization (DSL) methods may contain Multiple-Generator Errors (MulGenErrs) when fitting simultaneously-active dipoles. The size of the MulGenErr is a function of both the model used, and the dipole parameters, including the dipoles' waveforms (time-varying magnitudes). For a given fitting model, by examining the variation of the MulGenErrs (or the fit parameters) under different waveforms for the same generating-dipoles, the accuracy of the fitting model for this set of dipoles can be determined. This method of testing model misspecification can be applied to evoked potential maps even when the parameters of the generating-dipoles are unknown. The dipole parameters fitted in a model should only be accepted if the model can be shown to be sufficiently accurate.

  14. Clarifications regarding the use of model-fitting methods of kinetic analysis for determining the activation energy from a single non-isothermal curve.

    PubMed

    Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M

    2013-02-05

    This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Šatava are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy; therefore, it is not really possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. In short, the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions, so the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.
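
    The paper's point can be reproduced in a few lines. In the sketch below (an illustration, not the authors' script), a single non-isothermal curve is simulated for a first-order process and then fitted by the Coats-Redfern linearization with three different kinetic models g(alpha); all three give nearly perfect straight lines, each implying a different activation energy:

    ```python
    import numpy as np

    R, E_true, A, beta = 8.314, 100e3, 1e10, 10 / 60.0       # SI units; 10 K/min in K/s
    T = np.linspace(380, 470, 200)

    # first-order (F1) curve via the Coats-Redfern temperature-integral approximation
    g_f1 = (A / beta) * (R * T ** 2 / E_true) * np.exp(-E_true / (R * T))
    alpha = 1 - np.exp(-g_f1)
    mask = (alpha > 0.05) & (alpha < 0.95)
    T, alpha = T[mask], alpha[mask]

    models = {
        "F1": lambda a: -np.log(1 - a),       # first order (the true model)
        "D1": lambda a: a ** 2,               # 1-D diffusion
        "R2": lambda a: 1 - (1 - a) ** 0.5,   # contracting area
    }
    for name, g in models.items():
        y, x = np.log(g(alpha) / T ** 2), 1 / T
        slope = np.polyfit(x, y, 1)[0]        # slope = -Ea / R
        r = abs(np.corrcoef(x, y)[0, 1])
        print(f"{name}: Ea = {-slope * R / 1e3:5.0f} kJ/mol, |r| = {r:.4f}")
    ```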

  15. Evaluating the performance of the Lee-Carter method and its variants in modelling and forecasting Malaysian mortality

    NASA Astrophysics Data System (ADS)

    Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.

    2014-12-01

    This study investigated the performance of the Lee-Carter (LC) method and its variants in modeling and forecasting Malaysian mortality. These include the original LC, the Lee-Miller (LM) variant and the Booth-Maindonald-Smith (BMS) variant. The methods were evaluated using Malaysia's mortality data, measured as age-specific death rates (ASDR), for 1971 to 2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparisons were based on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD) and mean absolute percentage error (MAPE). The results indicate that the BMS method performed best in in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. For out-of-sample forecast accuracy, however, the BMS method was best only when the data were fitted to the overall population; when the data were fitted separately by sex, the unadjusted LC method performed better for the male population and the LM method for the female population.
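
    For reference, the original LC fit reduces to a rank-1 SVD of the centered log-rate matrix. The sketch below shows that standard recipe in Python (a stand-in random matrix replaces the Malaysian ASDR data, which are not reproduced here):

    ```python
    import numpy as np

    logm = np.log(np.random.uniform(1e-4, 1e-1, size=(20, 30)))  # age x year log rates (stand-in)
    a = logm.mean(axis=1)                        # a_x: average age pattern
    Z = logm - a[:, None]                        # centered log rates
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()                  # b_x, normalized to sum to 1
    k = s[0] * Vt[0] * U[:, 0].sum()             # k_t: mortality index to forecast
    fitted = a[:, None] + np.outer(b, k)
    print("in-sample RMSE of log rates:", np.sqrt(np.mean((logm - fitted) ** 2)))
    ```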

  16. FitSKIRT: genetic algorithms to automatically fit dusty galaxies with a Monte Carlo radiative transfer code

    NASA Astrophysics Data System (ADS)

    De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.

    2013-02-01

    We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage of better spatial resolution compared with FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need for human intervention and bias. The high level of automation makes it an ideal tool to use on larger sets of observed data.

  17. Posterior Predictive Bayesian Phylogenetic Model Selection

    PubMed Central

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892

  18. Goodness-of-Fit Assessment of Item Response Theory Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  19. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
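
    A simplified numerical sketch of the two-stage idea (the sign conventions, regressors, and data below are assumptions, not the paper's explicit equations): the weight W is first estimated from wind-off normal and axial forces over a pitch sweep; then, with W fixed, the center-of-gravity coordinates enter the moment equation linearly:

    ```python
    import numpy as np

    theta = np.radians([-10, -5, 0, 5, 10, 15.0])             # wind-off pitch angles
    W_true, x_cg, z_cg = 500.0, 0.10, 0.02                    # N, m, m (invented)
    NF = W_true * np.cos(theta) + np.random.normal(0, 0.5, theta.size)
    AF = W_true * np.sin(theta) + np.random.normal(0, 0.5, theta.size)
    PM = W_true * (x_cg * np.cos(theta) + z_cg * np.sin(theta))

    # Stage 1: W is the single unknown in [cos; sin] * W = [NF; AF]
    A1 = np.concatenate([np.cos(theta), np.sin(theta)])[:, None]
    W_hat = np.linalg.lstsq(A1, np.concatenate([NF, AF]), rcond=None)[0][0]

    # Stage 2: with W_hat fixed, the CG coordinates are a linear least squares fit
    A2 = W_hat * np.column_stack([np.cos(theta), np.sin(theta)])
    x_hat, z_hat = np.linalg.lstsq(A2, PM, rcond=None)[0]
    print(f"W = {W_hat:.1f} N, x_cg = {x_hat:.4f} m, z_cg = {z_hat:.4f} m")
    ```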

  20. Robustness of fit indices to outliers and leverage observations in structural equation modeling.

    PubMed

    Yuan, Ke-Hai; Zhong, Xiaoling

    2013-06-01

    Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  1. Black Versus Gray T-Shirts: Comparison of Spectrophotometric and Other Biophysical Properties of Physical Fitness Uniforms and Modeled Heat Strain and Thermal Comfort

    DTIC Science & Technology

    2016-09-01

    [Only fragments of this record were extracted: partial citations of ASTM standard test methods (thermal insulation of clothing using a heated manikin, 2010; ASTM F2370-10), part of a standard disclaimer, and repeats of the report title.]

  2. Combined approaches to flexible fitting and assessment in virus capsids undergoing conformational change☆

    PubMed Central

    Pandurangan, Arun Prasad; Shakeel, Shabih; Butcher, Sarah Jane; Topf, Maya

    2014-01-01

    Fitting of atomic components into electron cryo-microscopy (cryoEM) density maps is routinely used to understand the structure and function of macromolecular machines. Many fitting methods have been developed, but a standard protocol for successful fitting and assessment of fitted models has yet to be agreed upon among the experts in the field. Here, we created and tested a protocol that highlights important issues related to homology modelling, density map segmentation, rigid and flexible fitting, as well as the assessment of fits. As part of it, we use two different flexible fitting methods (Flex-EM and iMODfit) and demonstrate how combining the analysis of multiple fits and model assessment could result in an improved model. The protocol is applied to the case of the mature and empty capsids of Coxsackievirus A7 (CAV7) by flexibly fitting homology models into the corresponding cryoEM density maps at 8.2 and 6.1 Å resolution. As a result, and due to the improved homology models (derived from recently solved crystal structures of a close homolog – EV71 capsid – in mature and empty forms), the final models present an improvement over previously published models. In close agreement with the capsid expansion observed in the EV71 structures, the new CAV7 models reveal that the expansion is accompanied by ∼5° counterclockwise rotation of the asymmetric unit, predominantly contributed by the capsid protein VP1. The protocol could be applied not only to viral capsids but also to many other complexes characterised by a combination of atomic structure modelling and cryoEM density fitting. PMID:24333899

  3. New formulation feed method in tariff model of solar PV in Indonesia

    NASA Astrophysics Data System (ADS)

    Djamal, Muchlishah Hadi; Setiawan, Eko Adhi; Setiawan, Aiman

    2017-03-01

    Geographically, Indonesia spans 18 degrees of latitude, which correlate strongly with the potential solar radiation available for solar photovoltaic (PV) technologies. This becomes the basic assumption for developing a proportional Feed-In Tariff (FIT) model; consequently, the FIT will vary according to latitude across Indonesia. This paper proposes a new formulation of the solar PV FIT based on the potential solar radiation and several independent variables such as latitude, longitude, Levelized Cost of Electricity (LCOE), and socio-economic factors. The Principal Component Regression (PCR) method is used to analyze the correlation of the six independent variables (C1-C6), and three FIT models are then presented. Model FIT-2 is chosen because it has a small residual value and a higher financial benefit compared with the other models. This study reveals that a variable FIT tied to the solar energy potential of each region can reduce the total FIT to be paid by the state by around 80 billion rupiahs over 10 years of 1 MW photovoltaic operation in each of the 34 provinces in Indonesia.

  4. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model combines several distributions to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because its asymptotic properties provide remarkable results; it also exhibits consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion; identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber prices and stock market prices for all selected countries.
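
    A small illustration of the component-selection step (scikit-learn's GaussianMixture and an invented return series stand in for the paper's Bayesian fit; only the information-criterion comparison is being demonstrated):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # two-regime stand-in "financial returns" series
    returns = np.concatenate([rng.normal(0.001, 0.01, 400),
                              rng.normal(-0.002, 0.03, 100)])[:, None]

    # fit k-component mixtures and compare information criteria
    bics = {k: GaussianMixture(n_components=k, random_state=0)
                   .fit(returns).bic(returns)
            for k in range(1, 5)}
    print(bics, "-> chosen k =", min(bics, key=bics.get))
    ```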

  5. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    PubMed

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation.

  6. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    PubMed Central

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation. PMID:29351554

  7. Model-Free CUSUM Methods for Person Fit

    ERIC Educational Resources Information Center

    Armstrong, Ronald D.; Shi, Min

    2009-01-01

    This article demonstrates the use of a new class of model-free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model-free person-fit statistics…

  8. Goodness-Of-Fit Test for Nonparametric Regression Models: Smoothing Spline ANOVA Models as Example.

    PubMed

    Teran Hidalgo, Sebastian J; Wu, Michael C; Engel, Stephanie M; Kosorok, Michael R

    2018-06-01

    Nonparametric regression models do not require specification of the functional form between the outcome and the covariates. Despite their popularity, few diagnostic statistics are available for them in comparison with their parametric counterparts. We propose a goodness-of-fit test for nonparametric regression models with a linear smoother form. In particular, we apply this testing framework to smoothing spline ANOVA models. The test can consider two sources of lack of fit: whether covariates that are not currently in the model need to be included, and whether the current model fits the data well. The proposed method derives estimated residuals from the model. Then, statistical dependence is assessed between the estimated residuals and the covariates using the Hilbert-Schmidt independence criterion (HSIC). If dependence exists, the model does not capture all the variability in the outcome associated with the covariates; otherwise, the model fits the data well. The bootstrap is used to obtain p-values. Application of the method is demonstrated with a neonatal mental development data analysis. We demonstrate correct type I error as well as power performance through simulations.
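
    A minimal sketch of the test's core quantity (not the authors' software): the biased HSIC estimate between model residuals and a covariate, with a permutation loop standing in for their bootstrap p-value. The kernel bandwidth and data are invented:

    ```python
    import numpy as np

    def hsic(x, y, sigma=1.0):
        """Biased HSIC estimate with Gaussian kernels: trace(KHLH) / n^2."""
        n = x.shape[0]
        K = np.exp(-np.subtract.outer(x, x) ** 2 / (2 * sigma ** 2))
        L = np.exp(-np.subtract.outer(y, y) ** 2 / (2 * sigma ** 2))
        H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        return np.trace(K @ H @ L @ H) / n ** 2

    rng = np.random.default_rng(1)
    cov = rng.uniform(-1, 1, 200)
    resid = 0.5 * cov ** 2 + rng.normal(0, 0.1, 200)   # residuals still depend on cov

    stat = hsic(cov, resid)
    null = [hsic(cov, rng.permutation(resid)) for _ in range(500)]
    print(f"HSIC = {stat:.4f}, permutation p = {np.mean(np.array(null) >= stat):.3f}")
    ```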

  9. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    PubMed

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct the vascular input function (VIF) for inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate the improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve the accuracy of VIF estimation and pharmacokinetic fitting. In the animal studies, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived from uncorrected VIFs showed no significant changes. The proposed correction method improves the accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  10. A Generalized Form of Context-Dependent Psychophysiological Interactions (gPPI): A Comparison to Standard Approaches

    PubMed Central

    McLaren, Donald G.; Ries, Michele L.; Xu, Guofan; Johnson, Sterling C.

    2012-01-01

    Functional MRI (fMRI) allows one to study task-related regional responses and task-dependent connectivity analysis using psychophysiological interaction (PPI) methods. The latter affords the additional opportunity to understand how brain regions interact in a task-dependent manner. The current implementation of PPI in Statistical Parametric Mapping (SPM8) is configured primarily to assess connectivity differences between two task conditions, when in practice fMRI tasks frequently employ more than two conditions. Here we evaluate how a generalized form of context-dependent PPI (gPPI; http://www.nitrc.org/projects/gppi), which is configured to automatically accommodate more than two task conditions in the same PPI model by spanning the entire experimental space, compares to the standard implementation in SPM8. These comparisons are made using both simulations and an empirical dataset. In the simulated dataset, we compare the interaction beta estimates to their expected values and model fit using the Akaike Information Criterion (AIC). We found that interaction beta estimates in gPPI were robust to different simulated data models, were not different from the expected beta value, and had better model fits than when using standard PPI (sPPI) methods. In the empirical dataset, we compare the model fit of the gPPI approach to sPPI. We found that the gPPI approach improved model fit compared to sPPI. There were several regions that became non-significant with gPPI. These regions all showed significantly better model fits with gPPI. Also, there were several regions where task-dependent connectivity was only detected using gPPI methods, also with improved model fit. Regions that were detected with all methods had more similar model fits. These results suggest that gPPI may have greater sensitivity and specificity than standard implementation in SPM. This notion is tempered slightly as there is no gold standard; however, data simulations with a known outcome support our conclusions about gPPI. In sum, the generalized form of context-dependent PPI approach has increased flexibility of statistical modeling, and potentially improves model fit, specificity to true negative findings, and sensitivity to true positive findings. PMID:22484411

  11. An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates

    NASA Astrophysics Data System (ADS)

    Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin

    2014-03-01

    The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.

  12. Extracting Damping Ratio from Dynamic Data and Numerical Solutions

    NASA Technical Reports Server (NTRS)

    Casiano, M. J.

    2016-01-01

    There are many ways to extract damping parameters from data or models. This Technical Memorandum provides a quick reference for some of the more common approaches used in dynamics analysis. Described are six methods of extracting damping from data: the half-power method, logarithmic decrement (decay rate) method, an autocorrelation/power spectral density fitting method, a frequency response fitting method, a random decrement fitting method, and a newly developed half-quadratic gain method. Additionally, state-space models and finite element method modeling tools, such as COMSOL Multiphysics (COMSOL), provide a theoretical damping via complex frequency. Each method has its advantages which are briefly noted. There are also likely many other advanced techniques in extracting damping within the operational modal analysis discipline, where an input excitation is unknown; however, these approaches discussed here are objective, direct, and can be implemented in a consistent manner.
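
    As a worked example of one of the listed methods, the logarithmic decrement can be computed in a few lines from the peaks of a free-decay record (a synthetic single-mode signal; the 5 Hz mode and 3% damping are invented):

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    zeta_true, wn = 0.03, 2 * np.pi * 5.0                 # 3% damping, 5 Hz mode
    t = np.linspace(0, 4, 4000)
    wd = wn * np.sqrt(1 - zeta_true ** 2)                 # damped natural frequency
    x = np.exp(-zeta_true * wn * t) * np.cos(wd * t)      # free-decay response

    peaks, _ = find_peaks(x)
    amp = x[peaks]
    delta = np.log(amp[0] / amp[-1]) / (len(amp) - 1)     # mean log decrement per cycle
    zeta = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)   # invert delta = 2*pi*zeta/sqrt(1-zeta^2)
    print(f"estimated zeta = {zeta:.4f} (true {zeta_true})")
    ```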

  13. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

    Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work, frequently used methods and techniques for fitting NTCP models to dose-response data to establish dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters, a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data was obtained and compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using three methods: the covariance matrix, the jackknife method, and direct inspection of the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the width of a bundle of curves resulting from parameters lying within the one-standard-deviation region of the likelihood space was investigated. Thirdly, many parameter sets and their likelihoods were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that for the type of dose-response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the use of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.

  14. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tao; Li, Cheng; Huang, Can

    Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  15. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DOE PAGES

    Ding, Tao; Li, Cheng; Huang, Can; ...

    2017-01-09

    Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  16. Invited commentary: Lost in estimation--searching for alternatives to markov chains to fit complex Bayesian models.

    PubMed

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.

  17. Fast auto-focus scheme based on optical defocus fitting model

    NASA Astrophysics Data System (ADS)

    Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min

    2018-04-01

    An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Starting from the basic optics of defocus, the optical defocus fitting model is derived to approximate the potential-focus position. With this accurate modelling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential-focus position based on the proposed ODFM method. Around the estimated potential-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential-focus image to confirm the in-focus position using a contrast-based method. Experimental results show that the proposed scheme can complete auto-focus within only 5 to 7 steps, with good performance even under low-light conditions.

  18. A Note on Recurring Misconceptions When Fitting Nonlinear Mixed Models.

    PubMed

    Harring, Jeffrey R; Blozis, Shelley A

    2016-01-01

    Nonlinear mixed-effects (NLME) models are used when analyzing continuous repeated measures data taken on each of a number of individuals where the focus is on characteristics of complex, nonlinear individual change. Challenges with fitting NLME models and interpreting analytic results have been well documented in the statistical literature. However, parameter estimates as well as fitted functions from NLME analyses in recent articles have been misinterpreted, suggesting the need for clarification of these issues before these misconceptions become fact. These misconceptions arise from the choice of popular estimation algorithms, namely, the first-order linearization method (FO) and Gaussian-Hermite quadrature (GHQ) methods, and how these choices necessarily lead to population-average (PA) or subject-specific (SS) interpretations of model parameters, respectively. These estimation approaches also affect the fitted function for the typical individual, the lack-of-fit of individuals' predicted trajectories, and vice versa.

  19. A comparison of methods of fitting several models to nutritional response data.

    PubMed

    Vedenov, D; Pesti, G M

    2008-02-01

    A variety of models have been proposed to fit nutritional input-output response data. The models are typically nonlinear; therefore, fitting the models usually requires sophisticated statistical software and training to use it. An alternative tool for fitting nutritional response models was developed by using widely available and easier-to-use Microsoft Excel software. The tool, implemented as an Excel workbook (NRM.xls), allows simultaneous fitting and side-by-side comparisons of several popular models. This study compared the results produced by the tool we developed and PROC NLIN of SAS. The models compared were the broken line (ascending linear and quadratic segments), saturation kinetics, 4-parameter logistics, sigmoidal, and exponential models. The NRM.xls workbook provided results nearly identical to those of PROC NLIN. Furthermore, the workbook successfully fit several models that failed to converge in PROC NLIN. Two data sets were used as examples to compare fits by the different models. The results suggest that no particular nonlinear model is necessarily best for all nutritional response data.
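
    One of the workbook's models, the ascending broken line (linear-plateau), is easy to reproduce outside Excel. A hedged sketch in Python with SciPy (the dose-response numbers are invented; this is neither NRM.xls nor PROC NLIN):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def broken_line(x, plateau, slope, breakpoint):
        """Ascending linear segment that levels off at the breakpoint."""
        return np.where(x < breakpoint, plateau - slope * (breakpoint - x), plateau)

    dose = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])    # nutrient level
    gain = np.array([320, 410, 505, 590, 620, 628, 625, 630.0])  # response

    popt, _ = curve_fit(broken_line, dose, gain, p0=[620, 900, 0.5])
    print("plateau, slope, breakpoint =", popt)
    ```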

  20. Student Background, School Climate, School Disorder, and Student Achievement: An Empirical Study of New York City's Middle Schools

    ERIC Educational Resources Information Center

    Chen, Greg; Weikart, Lynne A.

    2008-01-01

    This study develops and tests a school disorder and student achievement model based upon the school climate framework. The model was fitted to 212 New York City middle schools using the Structural Equations Modeling Analysis method. The analysis shows that the model fits the data well based upon test statistics and goodness of fit indices. The…

  1. Using evolutionary algorithms for fitting high-dimensional models to neuronal data.

    PubMed

    Svensson, Carl-Magnus; Coombes, Stephen; Peirce, Jonathan Westley

    2012-04-01

    In the study of neuroscience, and of complex biological systems in general, there is frequently a need to fit mathematical models with large numbers of parameters to highly complex datasets. Here we consider algorithms of two different classes, gradient following (GF) methods and evolutionary algorithms (EA), and examine their performance in fitting a 9-parameter model of a filter-based visual neuron to real data recorded from a sample of 107 neurons in macaque primary visual cortex (V1). Although the GF method converged very rapidly on a solution, it was highly susceptible to the effects of local minima in the error surface and produced relatively poor fits unless the initial estimates of the parameters were already very good. Conversely, although the EA required many more iterations of evaluating the model neuron's response to a series of stimuli, it ultimately found better solutions in nearly all cases and its performance was independent of the starting parameters of the model. Thus, although the fitting process was lengthy in terms of processing time, the relative lack of human intervention in the evolutionary algorithm, and its ability ultimately to generate model fits that could be trusted as being close to optimal, made it far superior in this particular application to the gradient following methods. This is likely to be the case for many other complex systems, as are often found in neuroscience.
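
    A toy version of the comparison (SciPy's differential evolution standing in for the paper's EA and BFGS for the GF method, on an invented two-parameter model with a multimodal error surface):

    ```python
    import numpy as np
    from scipy.optimize import minimize, differential_evolution

    t = np.linspace(0, 1, 200)
    y = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)        # "recorded" response

    def sse(p):
        f, d = p
        return np.sum((y - np.sin(2 * np.pi * f * t) * np.exp(-d * t)) ** 2)

    gf = minimize(sse, x0=[8.0, 1.0], method="BFGS")      # poor start -> local minimum
    ea = differential_evolution(sse, bounds=[(0.1, 10), (0.1, 10)], seed=0)
    print("GF:", gf.x.round(3), "SSE =", round(gf.fun, 3))
    print("EA:", ea.x.round(3), "SSE =", round(ea.fun, 3))
    ```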

  2. In search of best fitted composite model to the ALAE data set with transformed Gamma and inversed transformed Gamma families

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Mastoureh; Bakar, Shaiful Anuar Abu

    2017-05-01

    In this paper, a recently proposed approach is applied to estimate the threshold parameter of a composite model. Several composite models from the Transformed Gamma and Inverse Transformed Gamma families are constructed based on this approach, and their parameters are estimated by the maximum likelihood method. These composite models are fitted to allocated loss adjustment expenses (ALAE) data. Among all composite models studied, the composite Weibull-Inverse Transformed Gamma model proves to be the best candidate, as it provides the best fit to the loss data. The final part applies backtesting to validate the VaR and CTE risk measures.

  3. Carbon dioxide stripping in aquaculture -- part III: model verification

    USGS Publications Warehouse

    Colt, John; Watten, Barnaby; Pfeiffer, Tim

    2012-01-01

    Based on conventional mass transfer models developed for oxygen, the non-linear ASCE method, the 2-point method, and a one-parameter linear-regression method were evaluated for carbon dioxide stripping data. For values of KLaCO2 < approximately 1.5/h, the 2-point and ASCE methods fit the experimental data well, but the fit breaks down at higher values of KLaCO2. How to correct KLaCO2 for gas-phase enrichment remains to be determined. The one-parameter linear-regression model was used to vary C*CO2 over the test, but it did not result in a better fit to the experimental data when compared with the ASCE or fixed-C*CO2 assumptions.
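
    The underlying mass-transfer model can be fitted by nonlinear least squares. A sketch assuming the conventional exponential transfer equation and hypothetical stripping data (not the study's measurements):

        import numpy as np
        from scipy.optimize import curve_fit

        # Conventional gas-transfer model: C(t) = C* - (C* - C0) * exp(-KLa * t).
        def transfer_model(t, c_star, c0, kla):
            return c_star - (c_star - c0) * np.exp(-kla * t)

        t = np.array([0., 5., 10., 20., 30., 45., 60.])           # min
        c = np.array([20.0, 16.1, 13.2, 9.1, 6.6, 4.4, 3.3])      # mg/L CO2

        (c_star, c0, kla), _ = curve_fit(transfer_model, t, c, p0=[2.0, 20.0, 0.05])
        print("KLaCO2 = %.3f /min (%.2f /h)" % (kla, kla * 60))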

  4. The Beta-Geometric Model Applied to Fecundability in a Sample of Married Women

    NASA Astrophysics Data System (ADS)

    Adekanmbi, D. B.; Bamiduro, T. A.

    2006-10-01

    The time required to achieve pregnancy among married couples, termed fecundability, has been proposed to follow a beta-geometric distribution. The accuracy of the method used to estimate the parameters of the model has implications for the goodness of fit of the model. In this study, the parameters of the model are estimated using the method of moments and the Newton-Raphson estimation procedure. The goodness of fit of the model was assessed using estimates from the two estimation methods, as well as the asymptotic relative efficiency of the estimates. A noticeable improvement in the fit of the model to the data on time to conception was observed when the parameters were estimated by the Newton-Raphson procedure, thereby providing reasonable estimates of expected fecundability for the married female population in the country.
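
    A sketch of fitting the beta-geometric model, whose marginal pmf is P(T = t) = B(a + 1, b + t - 1) / B(a, b); a generic simplex optimizer stands in here for the Newton-Raphson iteration reported in the study, and the cycles-to-conception counts are hypothetical:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import betaln

        # Negative log-likelihood of the beta-geometric distribution.
        def neg_loglik(theta, t):
            a, b = np.exp(theta)                   # positivity via log-parameters
            return -np.sum(betaln(a + 1, b + t - 1) - betaln(a, b))

        # Hypothetical counts of couples conceiving in cycles 1..12.
        t_obs = np.repeat(np.arange(1, 13), [29, 16, 10, 7, 5, 4, 3, 3, 2, 2, 1, 3])

        fit = minimize(neg_loglik, x0=np.log([2.0, 4.0]), args=(t_obs,),
                       method="Nelder-Mead")
        a_hat, b_hat = np.exp(fit.x)
        print("alpha = %.2f, beta = %.2f, mean fecundability = %.3f"
              % (a_hat, b_hat, a_hat / (a_hat + b_hat)))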

  5. Modelling the firing pattern of bullfrog vestibular neurons responding to naturalistic stimuli

    NASA Technical Reports Server (NTRS)

    Paulin, M. G.; Hoffman, L. F.

    1999-01-01

    We have developed a neural system identification method for fitting models to stimulus-response data, where the response is a spike train. The method involves using a general nonlinear optimisation procedure to fit models in the time domain. We have applied the method to model bullfrog semicircular canal afferent neuron responses during naturalistic, broad-band head rotations. These neurons respond in diverse ways, but a simple four-parameter class of models elegantly accounts for the various types of responses observed. © 1999 Elsevier Science B.V. All rights reserved.

  6. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments

    PubMed Central

    2010-01-01

    Background The relative replication rate (or fitness) of viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Results Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Conclusions Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced to make the computational tool more flexible in accommodating various experimental conditions. This Web-based tool is implemented in C# with Microsoft ASP.NET and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/. PMID:20482791

  7. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments.

    PubMed

    Ma, Jingming; Dykes, Carrie; Wu, Tao; Huang, Yangxin; Demeter, Lisa; Wu, Hulin

    2010-05-18

    The relative replication rate (or fitness) of viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced to make the computational tool more flexible in accommodating various experimental conditions. This Web-based tool is implemented in C# with Microsoft ASP.NET and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/.
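
    The regression idea is compact: in a growth competition assay, the slope of log(variant ratio) against time estimates the relative fitness difference, using all time points rather than two. A sketch with hypothetical counts (dilution-factor handling and measurement-error modeling, which vFitness includes, are omitted):

        import numpy as np

        t = np.array([0., 1., 2., 3., 4., 5.])                 # days
        ratio = np.array([1.0, 1.8, 3.1, 5.6, 10.2, 17.9])     # mutant/wild-type copies

        # Slope of log-ratio vs. time = difference in growth rates (relative fitness).
        slope, intercept = np.polyfit(t, np.log(ratio), 1)
        print("relative fitness difference d = %.3f per day" % slope)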

  8. Model Fit and Item Factor Analysis: Overfactoring, Underfactoring, and a Program to Guide Interpretation.

    PubMed

    Clark, D Angus; Bowles, Ryan P

    2018-04-23

    In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.
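
    For reference, the indices named above are simple functions of the model and baseline chi-square statistics. A sketch using the standard formulas (the example numbers are made up):

        import numpy as np

        # Standard formulas:
        #   RMSEA = sqrt(max(chi2_m - df_m, 0) / (df_m * (N - 1)))
        #   CFI   = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b, chi2_m - df_m)
        #   TLI   = ((chi2_b/df_b) - (chi2_m/df_m)) / ((chi2_b/df_b) - 1)
        def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
            rmsea = np.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))
            cfi = 1.0 - max(chi2_m - df_m, 0.0) / max(chi2_b - df_b,
                                                      chi2_m - df_m, 1e-12)
            tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
            return rmsea, cfi, tli

        print(fit_indices(chi2_m=85.0, df_m=62.0, chi2_b=900.0, df_b=78.0, n=500))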

  9. Determining polarizable force fields with electrostatic potentials from quantum mechanical linear response theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hao; Yang, Weitao, E-mail: weitao.yang@duke.edu; Department of Physics, Duke University, Durham, North Carolina 27708

    We developed a new method to calculate atomic polarizabilities by fitting to the electrostatic potentials (ESPs) obtained from quantum mechanical (QM) calculations within linear response theory. This parallels the conventional approach of fitting atomic charges based on electrostatic potentials from the electron density. Our ESP fitting is combined with the induced dipole model under the perturbation of uniform external electric fields of all orientations. QM calculations for the linear response to the external electric fields are used as input, fully consistent with the induced dipole model, which is itself a linear response model. The orientation of the uniform external electric fields is integrated over all directions. The integration of orientation and the QM linear response calculations together make the fitting results independent of the orientations and magnitudes of the uniform external electric fields applied. Another advantage of our method is that the QM calculation is needed only once, in contrast to the conventional approach, where many QM calculations are needed for many different applied electric fields. The molecular polarizabilities obtained from our method show accuracy comparable to those from fitting directly to the experimental or theoretical molecular polarizabilities. Since the ESP is fitted directly, atomic polarizabilities obtained from our method are expected to reproduce the electrostatic interactions better. Our method was used to calculate both transferable atomic polarizabilities for polarizable molecular mechanics force fields and nontransferable molecule-specific atomic polarizabilities.

  10. Model-Free Estimation of Tuning Curves and Their Attentional Modulation, Based on Sparse and Noisy Data.

    PubMed

    Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian

    2016-01-01

    Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g., a Gaussian or other bell-shaped curve, to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to determine reliably the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well fitted, that the best model generally varies between neurons, and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need to fit any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific. Based on these proofs of concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches.
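
    One of the model-free features mentioned above, the preferred stimulus direction, can be extracted by a simple circular vector average. A hedged sketch with hypothetical responses (not the authors' pipeline):

        import numpy as np

        # Responses of one MT-like cell to 8 motion directions (spikes/s, invented).
        theta = np.deg2rad(np.arange(0, 360, 45))
        rate = np.array([5., 9., 21., 48., 35., 12., 6., 4.])

        # Circular (vector) average: no tuning-curve model is fitted.
        vector = np.sum(rate * np.exp(1j * theta))
        preferred = np.rad2deg(np.angle(vector)) % 360
        print("preferred direction ~ %.1f deg" % preferred)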

  11. Model-Free Estimation of Tuning Curves and Their Attentional Modulation, Based on Sparse and Noisy Data

    PubMed Central

    Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian

    2016-01-01

    Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g., a Gaussian or other bell-shaped curve, to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to determine reliably the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well fitted, that the best model generally varies between neurons, and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need to fit any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific. Based on these proofs of concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches. PMID:26785378

  12. Fitted Fourier-pseudospectral methods for solving a delayed reaction-diffusion partial differential equation in biology

    NASA Astrophysics Data System (ADS)

    Adam, A. M. A.; Bashier, E. B. M.; Hashim, M. H. A.; Patidar, K. C.

    2017-07-01

    In this work, we design and analyze a fitted numerical method to solve a reaction-diffusion model with time delay, namely, a delayed version of a population model which is an extension of the logistic growth (LG) equation for a food-limited population proposed by Smith [F.E. Smith, Population dynamics in Daphnia magna and a new model for population growth, Ecology 44 (1963) 651-663]. Since the analytical solution (in closed form) is hard to obtain, we seek a robust numerical method. The method consists of a Fourier-pseudospectral semi-discretization in space and a fitted operator implicit-explicit scheme in the temporal direction. The proposed method is analyzed for convergence and found to be unconditionally stable. Illustrative numerical results will be presented at the conference.
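
    A minimal sketch of the two ingredients named above, Fourier-pseudospectral differentiation in space and an implicit-explicit (IMEX) time step, applied to a generic delayed logistic reaction-diffusion equation u_t = d*u_xx + u*(1 - u(t - tau)). This illustrates the idea only; it is not the authors' fitted-operator scheme, and all parameter values are made up:

        import numpy as np

        N, L, d, tau, dt = 128, 50.0, 1.0, 1.0, 0.01
        x = np.linspace(0.0, L, N, endpoint=False)
        k2 = (2 * np.pi * np.fft.fftfreq(N, d=L / N)) ** 2   # spectral -d^2/dx^2
        lag = int(round(tau / dt))

        u = 0.1 * np.exp(-((x - L / 2) ** 2))                # initial bump
        history = [u.copy()] * (lag + 1)                     # constant history on [-tau, 0]
        for step in range(2000):
            u_delay = history[-(lag + 1)]
            reaction = u * (1.0 - u_delay)                   # explicit (nonlinear, delayed)
            u_hat = (np.fft.fft(u) + dt * np.fft.fft(reaction)) / (1.0 + dt * d * k2)
            u = np.fft.ifft(u_hat).real                      # implicit diffusion step
            history.append(u.copy())
        print("max u after t = %.1f: %.3f" % (2000 * dt, u.max()))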

  13. A New Compression Method for FITS Tables

    NASA Technical Reports Server (NTRS)

    Pence, William; Seaman, Rob; White, Richard L.

    2010-01-01

    As the size and number of FITS binary tables generated by astronomical observatories increase, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.

  14. A new approach to correct the QT interval for changes in heart rate using a nonparametric regression model in beagle dogs.

    PubMed

    Watanabe, Hiroyuki; Miyazaki, Hiroyasu

    2006-01-01

    Over- and/or under-correction of QT intervals for changes in heart rate may lead to misleading conclusions and/or mask the potential of a drug to prolong the QT interval. This study examines a nonparametric regression model (loess smoother) to adjust the QT interval for differences in heart rate, with improved fit over a wide range of heart rates. 240 sets of (QT, RR) observations collected from each of 8 conscious, non-treated beagle dogs were used for the investigation. The fit of the nonparametric regression model to the QT-RR relationship was compared with four models (individual linear regression, common linear regression, and Bazett's and Fridericia's correction models) with reference to Akaike's Information Criterion (AIC). Residuals were assessed visually. The bias-corrected AIC of the nonparametric regression model was the best of the models examined in this study. Although the parametric models did not fit well, the nonparametric regression model improved the fit at both fast and slow heart rates. The nonparametric regression model is more flexible than the parametric methods. The mathematical fit of the linear regression models was unsatisfactory at both fast and slow heart rates, while the nonparametric regression model showed significant improvement at all heart rates in beagle dogs.
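
    A sketch of a loess-type QT-RR adjustment using the lowess smoother from statsmodels; the QT-RR pairs below are simulated, not the beagle recordings analyzed in the study:

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        rng = np.random.default_rng(2)
        rr = rng.uniform(0.3, 1.2, size=240)                     # s
        qt = 0.25 * rr**0.35 + rng.normal(0, 0.005, size=240)    # s, nonlinear QT-RR

        smooth = lowess(qt, rr, frac=0.5)          # columns: sorted RR, fitted QT
        qt_fit = np.interp(rr, smooth[:, 0], smooth[:, 1])
        print("residual SD after loess adjustment: %.4f s" % np.std(qt - qt_fit))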

  15. A global goodness-of-fit statistic for Cox regression models.

    PubMed

    Parzen, M; Lipsitz, S R

    1999-06-01

    In this paper, a global goodness-of-fit test statistic for a Cox regression model, which has an approximate chi-squared distribution when the model has been correctly specified, is proposed. Our goodness-of-fit statistic is global and has power to detect whether interactions or higher-order powers of covariates in the model are needed. The proposed statistic is similar to the Hosmer and Lemeshow (1980, Communications in Statistics A10, 1043-1069) goodness-of-fit statistic for binary data as well as Schoenfeld's (1980, Biometrika 67, 145-153) statistic for the Cox model. The methods are illustrated using data from a Mayo Clinic trial in primary biliary cirrhosis of the liver (Fleming and Harrington, 1991, Counting Processes and Survival Analysis), in which the outcome is the time until liver transplantation or death. There are 17 possible covariates. Two Cox proportional hazards models are fit to the data, and the proposed goodness-of-fit statistic is applied to the fitted models.

  16. Modeling dust emission in the Magellanic Clouds with Spitzer and Herschel

    NASA Astrophysics Data System (ADS)

    Chastenet, Jérémy; Bot, Caroline; Gordon, Karl D.; Bocchio, Marco; Roman-Duval, Julia; Jones, Anthony P.; Ysard, Nathalie

    2017-05-01

    Context. Dust modeling is crucial to infer dust properties and budgets for galaxy studies. However, there are systematic disparities between dust grain models that result in corresponding systematic differences in the inferred dust properties of galaxies. Quantifying these systematics requires a consistent fitting analysis. Aims: We compare the output dust parameters and assess the differences between two dust grain models, the DustEM model and THEMIS. In this study, we use a single fitting method applied to all the models to extract a coherent and unique statistical analysis. Methods: We fit the models to the dust emission seen by Spitzer and Herschel in the Small and Large Magellanic Clouds (SMC and LMC). The observations cover the infrared (IR) spectrum from a few microns to the sub-millimeter range. For each fitted pixel, we calculate the full n-D likelihood based on a previously described method. The free parameters are both environmental (U, the interstellar radiation field strength; αISRF, the power-law coefficient for a multi-U environment; Ω∗, the starlight strength) and intrinsic to the model (YI, the abundances of the grain species I; αsCM20, the coefficient in the small carbon grain size distribution). Results: Fractional residuals of five different sets of parameters show that fitting THEMIS reproduces the observations more accurately than the DustEM model. However, independent variations of the dust species show strong model dependencies. We find that the abundance of silicates can only be constrained to an upper limit and that the silicate/carbon ratio is different from that seen in our Galaxy. In the LMC, our fits yield dust masses slightly lower than those found in the literature, by a factor of less than 2. In the SMC, we find dust masses in agreement with previous studies.

  17. Estimation of retinal vessel caliber using model fitting and random forests

    NASA Astrophysics Data System (ADS)

    Araújo, Teresa; Mendonça, Ana Maria; Campilho, Aurélio

    2017-03-01

    Retinal vessel caliber changes are associated with several major diseases, such as diabetes and hypertension. These caliber changes can be evaluated using eye fundus images. However, the clinical assessment is tiresome and prone to errors, motivating the development of automatic methods. An automatic method based on vessel cross-section intensity profile model fitting for the estimation of vessel caliber in retinal images is herein proposed. First, vessels are segmented from the image, vessel centerlines are detected, and individual segments are extracted and smoothed. Intensity profiles are extracted perpendicularly to the vessel, and the profile lengths are determined. Then, model fitting is applied to the smoothed profiles. A novel parametric model (DoG-L7) is used, consisting of a Difference-of-Gaussians multiplied by a line, which is able to describe profile asymmetry. Finally, the parameters of the best-fit model are used to determine the vessel width through regression using ensembles of bagged regression trees with random sampling of the predictors (random forests). The method is evaluated on the REVIEW public dataset. A precision close to that of the human observers is achieved, outperforming other state-of-the-art methods. The method is robust and reliable for width estimation in images with pathologies and artifacts, with performance independent of the range of diameters.
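
    A hedged sketch of a Difference-of-Gaussians-times-line profile fit in the spirit of the 7-parameter DoG-L7 model (the authors' exact parameterization may differ; the profile below is synthetic):

        import numpy as np
        from scipy.optimize import curve_fit

        # DoG multiplied by a line: the linear factor captures profile asymmetry.
        def dog_line(x, a1, s1, a2, s2, mu, m, c):
            gauss = lambda a, s: a * np.exp(-0.5 * ((x - mu) / s) ** 2)
            return (gauss(a1, s1) - gauss(a2, s2)) * (m * x + c)

        x = np.linspace(-8, 8, 81)
        true = dog_line(x, 1.0, 2.0, 0.6, 1.0, 0.3, 0.02, 1.0)
        noisy = true + np.random.default_rng(3).normal(0, 0.01, x.size)

        p, _ = curve_fit(dog_line, x, noisy, p0=[1.0, 2.0, 0.5, 1.0, 0.0, 0.0, 1.0])
        print("fitted centre mu = %.3f" % p[4])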

  18. The More, the Better? Curvilinear Effects of Job Autonomy on Well-Being From Vitamin Model and PE-Fit Theory Perspectives.

    PubMed

    Stiglbauer, Barbara; Kovacs, Carrie

    2017-12-28

    In organizational psychology research, autonomy is generally seen as a job resource with a monotone positive relationship with desired occupational outcomes such as well-being. However, both Warr's vitamin model and person-environment (PE) fit theory suggest that negative outcomes may result from excesses of some job resources, including autonomy. Thus, the current studies used survey methodology to explore cross-sectional relationships between environmental autonomy, person-environment autonomy (mis)fit, and well-being. We found that autonomy and autonomy (mis)fit explained between 6% and 22% of variance in well-being, depending on type of autonomy (scheduling, method, or decision-making) and type of (mis)fit operationalization (atomistic operationalization through the separate assessment of actual and ideal autonomy levels vs. molecular operationalization through the direct assessment of perceived autonomy (mis)fit). Autonomy (mis)fit (PE-fit perspective) explained more unique variance in well-being than environmental autonomy itself (vitamin model perspective). Detrimental effects of autonomy excess on well-being were most evident for method autonomy and least consistent for decision-making autonomy. We argue that too-much-of-a-good-thing effects of job autonomy on well-being exist, but suggest that these may be dependent upon sample characteristics (range of autonomy levels), type of operationalization (molecular vs. atomistic fit), autonomy facet (method, scheduling, or decision-making), as well as individual and organizational moderators. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Assessing the fit of site-occupancy models

    USGS Publications Warehouse

    MacKenzie, D.I.; Bailey, L.L.

    2004-01-01

    Few species are likely to be so evident that they will always be detected at a site when present. Recently a model has been developed that enables estimation of the proportion of area occupied when the target species is not detected with certainty. Here we apply this modeling approach to data collected on terrestrial salamanders in the Plethodon glutinosus complex in the Great Smoky Mountains National Park, USA, and wish to address the question 'how accurately does the fitted model represent the data?' The goodness of fit of the model needs to be assessed in order to make accurate inferences. This article presents a method in which a simple Pearson chi-square statistic is calculated and a parametric bootstrap procedure is used to determine whether the observed statistic is unusually large. We found evidence that the most global model considered provides a poor fit to the data, and hence estimated an overdispersion factor to adjust model selection procedures and inflate standard errors. Two hypothetical datasets with known assumption violations are also analyzed, illustrating that the method may be used to guide researchers toward making appropriate inferences. The results of a simulation study are presented to provide a broader view of the method's properties.
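
    The parametric bootstrap logic generalizes beyond occupancy models. A toy sketch in which a simple binomial detection model stands in for the site-occupancy likelihood (counts and model are hypothetical):

        import numpy as np
        from scipy.stats import binom

        rng = np.random.default_rng(4)

        def chi2(obs, exp):
            return np.sum((obs - exp) ** 2 / exp)

        # Observed number of sites with 0..3 detections over K = 3 visits.
        obs = np.array([40, 30, 20, 10]); K = 3; n = obs.sum()
        p_hat = (obs * np.arange(K + 1)).sum() / (n * K)      # fitted detection prob.
        exp = n * binom.pmf(np.arange(K + 1), K, p_hat)
        stat = chi2(obs, exp)

        # Parametric bootstrap: simulate from the fitted model, refit, recompute.
        boot = []
        for _ in range(1000):
            y = rng.binomial(K, p_hat, size=n)
            c = np.bincount(y, minlength=K + 1)
            p_b = y.mean() / K
            boot.append(chi2(c, n * binom.pmf(np.arange(K + 1), K, p_b)))
        p_value = np.mean(np.array(boot) >= stat)
        print("Pearson X2 = %.2f, bootstrap p = %.3f" % (stat, p_value))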

  20. Improved Model Fitting for the Empirical Green's Function Approach Using Hierarchical Models

    NASA Astrophysics Data System (ADS)

    Van Houtte, Chris; Denolle, Marine

    2018-04-01

    Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study examines a variety of model-fitting methods and shows that the choice of method can explain some of the discrepancy. The preferred method is Bayesian hierarchical modeling, which can reduce bias, better quantify uncertainties, and allow additional effects to be resolved. Two case study earthquakes are examined, the 2016 MW7.1 Kumamoto, Japan earthquake and a MW5.3 aftershock of the 2016 MW7.8 Kaikōura earthquake. By using hierarchical models, the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be retrieved without overfitting the data. Other methods commonly used to calculate corner frequencies may give substantial biases. In particular, if fc was calculated for the Kumamoto earthquake using an ω-square model, the obtained fc could be twice as large as a realistic value.
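
    For contrast with the hierarchical approach, a sketch of the kind of conventional single-spectrum fit the authors critique, estimating fc and the falloff rate n with the generic source model log A(f) = log Ω0 - log(1 + (f/fc)^n); spectrum values are synthetic:

        import numpy as np
        from scipy.optimize import curve_fit

        def log_spectrum(f, log_omega0, fc, n):
            return log_omega0 - np.log(1.0 + (f / fc) ** n)

        f = np.logspace(-1, 1, 40)                       # Hz
        obs = (log_spectrum(f, 2.0, 0.8, 2.0)            # omega-square-like source
               + np.random.default_rng(5).normal(0, 0.05, f.size))

        popt, _ = curve_fit(log_spectrum, f, obs, p0=[1.0, 1.0, 2.0],
                            bounds=([-5.0, 0.01, 0.5], [10.0, 20.0, 5.0]))
        print("fc = %.2f Hz, falloff n = %.2f" % (popt[1], popt[2]))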

  1. Much More than Model Fitting? Evidence for the Heritability of Method Effect Associated with Positively Worded Items of the Life Orientation Test Revised

    ERIC Educational Resources Information Center

    Alessandri, Guido; Vecchione, Michele; Fagnani, Corrado; Bentler, Peter M.; Barbaranelli, Claudio; Medda, Emanuela; Nistico, Lorenza; Stazi, Maria Antonietta; Caprara, Gian Vittorio

    2010-01-01

    When a self-report instrument includes a balanced number of positively and negatively worded items, factor analysts often use method effect factors to aid model fitting. One of the most widely investigated sources of method effects stems from the respondent tendencies to agree with an item regardless of its content. The nature of these effects,…

  2. Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling

    ERIC Educational Resources Information Center

    Lee, Taehun; Cai, Li

    2012-01-01

    Model-based multiple imputation has become an indispensable method in the educational and behavioral sciences. Mean and covariance structure models are often fitted to multiply imputed data sets. However, the presence of multiple random imputations complicates model fit testing, which is an important aspect of mean and covariance structure…

  3. Fitting ARMA Time Series by Structural Equation Models.

    ERIC Educational Resources Information Center

    van Buuren, Stef

    1997-01-01

    This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)

  4. Evaluation of Statistical Methods for Modeling Historical Resource Production and Forecasting

    NASA Astrophysics Data System (ADS)

    Nanzad, Bolorchimeg

    This master's thesis project consists of two parts. Part I of the project compares modeling of historical resource production and forecasting of future production trends using the logit/probit transform advocated by Rutledge (2011) with conventional Hubbert curve fitting, using global coal production as a case study. The conventional Hubbert/Gaussian method fits a curve to historical production data, whereas a logit/probit transform uses a linear fit to a subset of transformed production data. Within the errors and limitations inherent in this type of statistical modeling, these methods provide comparable results. That is, despite the apparent goodness-of-fit achievable using the logit/probit methodology, neither approach provides a significant advantage over the other in either explaining the observed data or in making future projections. For mature production regions, those that have already substantially passed peak production, results obtained by either method are closely comparable and reasonable, and estimates of ultimately recoverable resources obtained by either method are consistent with geologically estimated reserves. In contrast, for immature regions, estimates of ultimately recoverable resources generated by either of these alternative methods are unstable and thus need to be used with caution. Although the logit/probit transform generates a high quality of fit to historical production data, this approach provides no new information compared to conventional Gaussian or Hubbert-type models and may have the effect of masking the noise and/or instability in the data and the derived fits. In particular, production forecasts for immature or marginally mature production systems based on either method need to be regarded with considerable caution. Part II of the project investigates the utility of a novel alternative method for multicyclic Hubbert modeling, tentatively termed "cycle-jumping," wherein the overlap of multiple cycles is limited. The model is designed so that each cycle is described by the same three parameters as the conventional multicyclic Hubbert model, and every two adjacent cycles are connected by a transition, which indicates the shift from one cycle to the next and is described as a weighted co-addition of the two neighboring cycles. Each transition is determined by three parameters: the transition year, the transition width, and a gamma weighting parameter. The cycle-jumping method provides a superior model compared with the conventional multicyclic Hubbert model and reflects historical production behavior more reasonably and practically, by better modeling the effects of technological transitions and socioeconomic factors that affect historical resource production and by explicitly considering the form of the transitions between production cycles.
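
    For concreteness, a sketch of the conventional single-cycle Hubbert fit (the logistic-derivative form) that Part I takes as its baseline; production numbers are invented, not the coal case study:

        import numpy as np
        from scipy.optimize import curve_fit

        # Hubbert curve: derivative of a logistic with ultimate recovery q_inf.
        def hubbert(t, q_inf, b, t_peak):
            e = np.exp(-b * (t - t_peak))
            return q_inf * b * e / (1.0 + e) ** 2

        years = np.arange(1900, 2001)
        prod = (hubbert(years, 250.0, 0.08, 1975)
                + np.random.default_rng(6).normal(0, 0.05, years.size))

        (q_inf, b, t_peak), _ = curve_fit(hubbert, years, prod,
                                          p0=[200.0, 0.05, 1970.0])
        print("URR = %.0f, peak year = %.1f" % (q_inf, t_peak))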

  5. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    PubMed

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

    The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets (possibly degenerate) with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago and implemented in CORROC2 software, utilizes a bivariate extension to the binormal model, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit-variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit-variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates, the covariance matrix of the parameters, and other statistics. A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 agreed (a) in declaring only one of the nine readers as having significantly different performances in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded higher area under the curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable and consequently yields higher performance than the binormal-model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves and able to fit degenerate datasets. © 2017 American Association of Physicists in Medicine.

  6. Scaling analysis and model estimation of solar corona index

    NASA Astrophysics Data System (ADS)

    Ray, Samujjwal; Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik

    2018-04-01

    A monthly averaged solar green coronal index time series for the period from January 1939 to December 2008, collected from NOAA (the National Oceanic and Atmospheric Administration), is analysed in this paper from the perspective of scaling analysis and modelling. As a prerequisite, smoothing and de-noising were performed using a suitable mother wavelet. The Finite Variance Scaling Method (FVSM), the Higuchi method, rescaled range (R/S) analysis, and a generalized method were applied to calculate the scaling exponents and fractal dimensions of the time series. The autocorrelation function (ACF) is used to identify an autoregressive (AR) process, and the partial autocorrelation function (PACF) is used to determine the order of the AR model. Finally, a best-fit model is proposed using the Yule-Walker method, with supporting results on goodness of fit and the wavelet spectrum. The results reveal an anti-persistent, Short Range Dependent (SRD), self-similar property with signatures of non-causality, non-stationarity and nonlinearity in the data series. The model shows the best fit to the data under observation.
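
    The Yule-Walker step named above amounts to solving a Toeplitz system in the sample autocovariances. A sketch on a synthetic AR(2) series (the corona index itself is not reproduced here):

        import numpy as np
        from scipy.linalg import solve_toeplitz

        rng = np.random.default_rng(7)
        x = np.zeros(2000)
        for t in range(2, 2000):                           # simulate AR(2)
            x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

        p = 2
        x = x - x.mean()
        acov = np.array([np.dot(x[: len(x) - k], x[k:]) / len(x)
                         for k in range(p + 1)])
        a = solve_toeplitz(acov[:p], acov[1 : p + 1])      # Yule-Walker estimates
        print("AR coefficients:", np.round(a, 3))          # ~ [0.6, -0.3]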

  7. Local Intrinsic Dimension Estimation by Generalized Linear Modeling.

    PubMed

    Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru

    2017-07-01

    We propose a method for intrinsic dimension estimation. By fitting a regression model that relates a power of the distance from an inspection point to the number of samples contained in a ball of that radius, we estimate the goodness of fit. Then, using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we show experimentally that the proposed method outperforms a conventional local dimension estimation method.
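
    The scaling relation behind such estimators is that the number of samples inside a ball of radius r grows like r^d, so the slope of log N(r) versus log r estimates the local dimension d. A least-squares sketch (the paper's GLM/maximum-likelihood machinery is not reproduced):

        import numpy as np

        rng = np.random.default_rng(8)
        # A 2-D manifold (linear subspace) embedded in 10-D space.
        data = rng.normal(size=(5000, 2)) @ rng.normal(size=(2, 10))
        point = data[0]

        dist = np.linalg.norm(data - point, axis=1)
        radii = np.quantile(dist[dist > 0], np.linspace(0.01, 0.1, 10))
        counts = np.array([(dist <= r).sum() for r in radii])

        slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
        print("estimated local dimension ~ %.2f" % slope)   # ~ 2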

  8. Volume effects of late term normal tissue toxicity in prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Bonta, Dacian Viorel

    Modeling of volume effects for treatment toxicity is paramount for the optimization of radiation therapy. This thesis proposes a new model for calculating volume effects in gastro-intestinal and genito-urinary normal tissue complication probability (NTCP) following radiation therapy for prostate carcinoma. The radiobiological and pathological basis for this model and its relationship to other models are detailed. A review of the radiobiological experiments and published clinical data identified salient features and specific properties a biologically adequate model has to conform to. The new model was fit to a set of actual clinical data. In order to verify the goodness of fit, two established NTCP models and a non-NTCP measure for complication risk were fitted to the same clinical data. The model parameters were fitted by maximum likelihood estimation. Within the framework of the maximum likelihood approach I estimated the parameter uncertainties for each complication prediction model. The quality of fit was determined using the Akaike Information Criterion. Based on the model that provided the best fit, I identified the volume effects for both types of toxicities. Computer-based bootstrap resampling of the original dataset was used to estimate the bias and variance of the fitted parameter values. Computer simulation was also used to estimate the population size that yields a specific uncertainty level (3%) in the value of the predicted complication probability. The same method was used to estimate the size of the patient population needed for an accurate choice of the model underlying the NTCP. The results indicate that, depending on the number of parameters of a specific NTCP model, 100 patients (for two-parameter models) to 500 patients (for three-parameter models) are needed for an accurate parameter fit. The correlation of complication occurrences within patients was also investigated. The results suggest that complication outcomes are correlated in a patient, although the correlation coefficient is rather small.

  9. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
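
    A minimal sketch of the augmentation idea, assuming a quadratic base model, a lowess fit to its residuals, and an illustrative mixing fraction lam (the published method estimates this mixing proportion; the value here is arbitrary):

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        rng = np.random.default_rng(9)
        x = np.linspace(0, 1, 100)
        y = 1 + 2 * x + 0.3 * np.sin(6 * x) + rng.normal(0, 0.05, x.size)

        coef = np.polyfit(x, y, 2)                     # parametric (quadratic) fit
        y_par = np.polyval(coef, x)
        resid_fit = lowess(y - y_par, x, frac=0.3, return_sorted=False)

        lam = 0.7                                      # mixing parameter in [0, 1]
        y_mrr = y_par + lam * resid_fit                # model robust prediction
        print("RMSE parametric: %.4f  MRR: %.4f" %
              (np.sqrt(np.mean((y - y_par) ** 2)),
               np.sqrt(np.mean((y - y_mrr) ** 2))))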

  10. Validation of catchment models for predicting land-use and climate change impacts. 1. Method

    NASA Astrophysics Data System (ADS)

    Ewen, J.; Parkin, G.

    1996-02-01

    Computer simulation models are increasingly being proposed as tools capable of giving water resource managers accurate predictions of the impact of changes in land-use and climate. Previous validation testing of catchment models is reviewed, and it is concluded that the methods used do not clearly test a model's fitness for such a purpose. A new generally applicable method is proposed. This involves the direct testing of fitness for purpose, uses established scientific techniques, and may be implemented within a quality assured programme of work. The new method is applied in Part 2 of this study (Parkin et al., J. Hydrol., 175:595-613, 1996).

  11. Accounting for seasonal patterns in syndromic surveillance data for outbreak detection.

    PubMed

    Burr, Tom; Graves, Todd; Klamann, Richard; Michalak, Sarah; Picard, Richard; Hengartner, Nicolas

    2006-12-04

    Syndromic surveillance (SS) can potentially contribute to outbreak detection capability by providing timely, novel data sources. One SS challenge is that some syndrome counts vary with season in a manner that is not identical from year to year. Our goal is to evaluate the impact of inconsistent seasonal effects on performance assessments (false and true positive rates) in the context of detecting anomalous counts in data that exhibit seasonal variation. To evaluate the impact of inconsistent seasonal effects, we injected synthetic outbreaks into real data and into data simulated from each of two models fit to the same real data. Using real respiratory syndrome counts collected in an emergency department from 2/1/94-5/31/03, we varied the length of training data from one to eight years, applied a sequential test to the forecast errors arising from each of eight forecasting methods, and evaluated their detection probabilities (DP) on the basis of 1000 injected synthetic outbreaks. We did the same for each of two corresponding simulated data sets. The less realistic, nonhierarchical model's simulated data set assumed that "one season fits all," meaning that each year's seasonal peak has the same onset, duration, and magnitude. The more realistic simulated data set used a hierarchical model to capture violation of the "one season fits all" assumption. This experiment demonstrated optimistic bias in DP estimates for some of the methods when data simulated from the nonhierarchical model was used for DP estimation, thus suggesting that at least for some real data sets and methods, it is not adequate to assume that "one season fits all." For the data we analyze, the "one season fits all" assumption is violated, and DP performance claims based on simulated data that assume "one season fits all," for the forecast methods considered, except for moving average methods, tend to be optimistic. Moving average methods based on relatively short amounts of training data are competitive on all three data sets, but are particularly competitive on the real data and on data from the hierarchical model, which are the two data sets that violate the "one season fits all" assumption.

  12. An optimized knife-edge method for on-orbit MTF estimation of optical sensors using Powell parameter fitting

    NASA Astrophysics Data System (ADS)

    Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue

    2017-08-01

    On-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of the optical remote sensors on a satellite. There are many methods to estimate MTF, such as the pinhole method, the slit method, and so on. Among them, the knife-edge method is quite efficient and easy to use, and it is recommended in the ISO 12233 standard for whole-frequency MTF curve acquisition. However, the accuracy of the algorithm is significantly affected by the Edge Spread Function (ESF) fitting accuracy, which limits its range of application. So in this paper, an optimized knife-edge method using the Powell algorithm is proposed to improve the ESF fitting precision. The Fermi function model is the most popular ESF fitting model, yet it is vulnerable to the initial values of the parameters. Considering its simplicity and fast convergence, the Powell algorithm is applied to fit the parameters adaptively, being insensitive to the initial parameters. Numerical simulation results reveal the accuracy and robustness of the optimized algorithm under different SNR, edge direction, and leaning angle conditions. Experimental results using images from the camera of the ZY-3 satellite show that this method is more accurate than the standard knife-edge method of ISO 12233 in MTF estimation.
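
    A sketch of the pipeline described above: fit a Fermi-function ESF with scipy's Powell method, differentiate to get the LSF, then take the FFT magnitude for the MTF. The sampled edge profile is synthetic, and the exact Fermi parameterization is an assumption:

        import numpy as np
        from scipy.optimize import minimize

        # Fermi-function edge spread function.
        def fermi(x, a, b, c, d):
            return a / (1.0 + np.exp(-(x - b) / c)) + d

        x = np.arange(-16.0, 16.0)
        edge = (fermi(x, 1.0, 0.3, 1.2, 0.05)
                + np.random.default_rng(10).normal(0, 0.01, x.size))

        loss = lambda p: np.sum((fermi(x, *p) - edge) ** 2)
        fit = minimize(loss, x0=[1.0, 0.0, 1.0, 0.0], method="Powell")

        xf = np.linspace(-16, 16, 512)
        lsf = np.gradient(fermi(xf, *fit.x), xf)       # LSF = derivative of ESF
        mtf = np.abs(np.fft.rfft(lsf)); mtf /= mtf[0]  # normalized MTF
        print("fitted ESF params:", np.round(fit.x, 3))
        print("MTF (first bins):", np.round(mtf[:5], 3))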

  13. Network growth models: A behavioural basis for attachment proportional to fitness

    NASA Astrophysics Data System (ADS)

    Bell, Michael; Perera, Supun; Piraveenan, Mahendrarajah; Bliemer, Michiel; Latty, Tanya; Reid, Chris

    2017-02-01

    Several growth models have been proposed in the literature for scale-free complex networks, with a range of fitness-based attachment models gaining prominence recently. However, the processes by which such fitness-based attachment behaviour can arise are less well understood, making it difficult to compare the relative merits of such models. This paper analyses an evolutionary mechanism that would give rise to a fitness-based attachment process. In particular, it is proven by analytical and numerical methods that in homogeneous networks, the minimisation of maximum exposure to node unfitness leads to attachment probabilities that are proportional to node fitness. This result is then extended to heterogeneous networks, with supply chain networks being used as an example.

  14. Elevation data fitting and precision analysis of Google Earth in road survey

    NASA Astrophysics Data System (ADS)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

    Objective: In order to improve the efficiency of road surveys and save manpower and material resources, this paper applies Google Earth to the feasibility study stage of road survey and design. Because Google Earth elevation data lack precision, this paper focuses on finding fitting or interpolation methods that improve the data precision, so as to meet the accuracy requirements of road survey and design specifications. Method: On the basis of the elevation differences at a limited number of public (control) points, the elevation difference at any other point can be fitted or interpolated. Thus, a precise elevation can be obtained by subtracting the elevation difference from the Google Earth data. Quadratic polynomial surface fitting, cubic polynomial surface fitting, the V4 interpolation method in MATLAB, and a neural network method are used in this paper to process Google Earth elevation data, with internal conformity, external conformity, and the cross-correlation coefficient used as evaluation indexes of the data processing effect. Results: There is no fitting difference at the fitting points when using the V4 interpolation method; its external conformity is the largest and its accuracy improvement is the worst, so the V4 interpolation method is ruled out. The internal and external conformity of the cubic polynomial surface fitting method are both better than those of the quadratic polynomial surface fitting method. The neural network method has a fitting effect similar to the cubic polynomial surface fitting method, but it fits better where the elevation differences are larger. Because the neural network method is a less manageable fitting model, the cubic polynomial surface fitting method should be used as the main method, with the neural network method as an auxiliary for cases of larger elevation differences. Conclusions: The cubic polynomial surface fitting method can clearly improve the precision of Google Earth elevation data. After the improvement, the error of data in hilly terrain areas meets the requirements of the specifications, and the data can be used in the feasibility study stage of road survey and design.
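
    A sketch of the surface-fitting step described above: a cubic polynomial in (x, y) is fitted by least squares to elevation differences at control points and then used to correct the remaining points. The coordinates and differences below are invented:

        import numpy as np

        # Design matrix with all terms x^i * y^j for i + j <= deg.
        def design(x, y, deg=3):
            return np.column_stack([x**i * y**j
                                    for i in range(deg + 1)
                                    for j in range(deg + 1 - i)])

        rng = np.random.default_rng(11)
        x, y = rng.uniform(0, 1, (2, 60))                       # control points
        diff = 2 + 1.5 * x - 0.8 * y**2 + 0.3 * x * y + rng.normal(0, 0.05, 60)

        coef, *_ = np.linalg.lstsq(design(x, y), diff, rcond=None)
        fitted = design(x, y) @ coef
        print("internal conformity (RMSE): %.3f m"
              % np.sqrt(np.mean((diff - fitted) ** 2)))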

  15. Estimating Dynamical Systems: Derivative Estimation Hints From Sir Ronald A. Fisher.

    PubMed

    Deboeck, Pascal R

    2010-08-06

    The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common approach for estimating derivatives, Local Linear Approximation (LLA), produces estimates with correlated errors. Depending on the specific differential equation model used, such correlated errors can lead to severely biased estimates of differential equation model parameters. This article shows that the fitting of dynamical systems can be improved by estimating derivatives in a manner similar to that used to fit orthogonal polynomials. Two applications using simulated data compare the proposed method and a generalized form of LLA when used to estimate derivatives and when used to estimate differential equation model parameters. A third application estimates the frequency of oscillation in observations of the monthly deaths from bronchitis, emphysema, and asthma in the United Kingdom. These data are publicly available in the statistical program R, and functions in R for the method presented are provided.
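
    Local-polynomial (Savitzky-Golay) derivative estimates are similar in spirit to the orthogonal-polynomial approach advocated here, though not identical to it. A sketch on a damped oscillator, followed by the regression step that recovers the differential equation parameters:

        import numpy as np
        from scipy.signal import savgol_filter

        dt = 0.1
        t = np.arange(0, 20, dt)
        x = np.exp(-0.1 * t) * np.cos(2 * np.pi * 0.5 * t)   # damped oscillation

        # Derivatives from a cubic fitted in each 11-point window.
        dx = savgol_filter(x, window_length=11, polyorder=3, deriv=1, delta=dt)
        d2x = savgol_filter(x, window_length=11, polyorder=3, deriv=2, delta=dt)

        # Regressing d2x on x and dx recovers the oscillator's frequency term.
        eta, _, _, _ = np.linalg.lstsq(np.column_stack([x, dx]), d2x, rcond=None)
        print("frequency estimate: %.3f Hz" % (np.sqrt(-eta[0]) / (2 * np.pi)))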

  16. Inversion method applied to the rotation curves of galaxies

    NASA Astrophysics Data System (ADS)

    Márquez-Caicedo, L. A.; Lora-Clavijo, F. D.; Sanabria-Gómez, J. D.

    2017-07-01

    We used simulated annealing, Monte Carlo, and genetic algorithm methods to match numerical data of density and velocity profiles in some low surface brightness galaxies with the theoretical Boehmer-Harko, Navarro-Frenk-White, and Pseudo Isothermal profile models for galaxies with dark matter halos. We found that the Navarro-Frenk-White model does not fit at all, in contrast with the other two models, which fit very well. Inversion methods have been widely used in various branches of science, including astrophysics (Charbonneau 1995, ApJS, 101, 309). In this work we have used three different parametric inversion methods (Monte Carlo, genetic algorithm, and simulated annealing) in order to determine the best fit of the observed data of the density and velocity profiles of a set of low surface brightness galaxies (de Blok et al. 2001, AJ, 122, 2396) with three models of galaxies containing dark matter. The parameters adjusted by the inversion methods were the central density and a characteristic distance in the Boehmer-Harko (BH) (Boehmer & Harko 2007, JCAP, 6, 25), Navarro-Frenk-White (NFW) (Navarro et al. 1997, ApJ, 490, 493) and Pseudo Isothermal (PI) (Robles & Matos 2012, MNRAS, 422, 282) profiles. The results obtained showed that the BH and PI profile dark matter galaxies fit both the density and the velocity profiles very well; in contrast, the NFW model did not produce good adjustments to the profiles in any analyzed galaxy.
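
    A sketch of a simulated-annealing style inversion for the Pseudo Isothermal profile, using scipy's dual_annealing (a generalized simulated annealing) and the standard PI circular velocity v^2(r) = 4*pi*G*rho0*rc^2 * (1 - (rc/r)*arctan(r/rc)); units are folded into rho0 (G = 1) and the "observed" curve is synthetic, not the de Blok et al. data:

        import numpy as np
        from scipy.optimize import dual_annealing

        def v_model(r, rho0, rc):
            return np.sqrt(4 * np.pi * rho0 * rc**2
                           * (1 - (rc / r) * np.arctan(r / rc)))

        r = np.linspace(0.5, 15.0, 30)                 # kpc
        v_obs = (v_model(r, 40.0, 2.5)
                 + np.random.default_rng(12).normal(0, 2.0, r.size))

        cost = lambda p: np.sum((v_model(r, *p) - v_obs) ** 2)
        best = dual_annealing(cost, bounds=[(1.0, 200.0), (0.1, 10.0)], seed=12)
        print("rho0 = %.1f, rc = %.2f kpc" % tuple(best.x))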

  17. Model Diagnostics for Bayesian Networks

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2006-01-01

    Bayesian networks are frequently used in educational assessments primarily for learning about students' knowledge and skills. There is a lack of works on assessing fit of Bayesian networks. This article employs the posterior predictive model checking method, a popular Bayesian model checking tool, to assess fit of simple Bayesian networks. A…

  18. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near the short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  19. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near the short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
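
    Treating the near-Isc straight-line fit as a regression makes the uncertainty explicit. A sketch using np.polyfit's covariance output with illustrative I-V points; note this captures the fit uncertainty only, not the model discrepancy the authors emphasize:

        import numpy as np

        v = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])        # V, near short circuit
        i = np.array([5.012, 5.006, 5.001, 4.995, 4.989])    # A

        coef, cov = np.polyfit(v, i, 1, cov=True)
        isc, isc_std = coef[1], np.sqrt(cov[1, 1])           # intercept = Isc at V = 0
        print("Isc = %.4f A +/- %.4f A (fit uncertainty only)" % (isc, isc_std))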

  20. A scientific and statistical analysis of accelerated aging for pharmaceuticals. Part 1: accuracy of fitting methods.

    PubMed

    Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L

    2014-10-01

    Three competing mathematical fitting models of chemical stability (related-substance growth) when using high-temperature data to predict room-temperature shelf life (a point-by-point estimation method, a linear fit method, and an isoconversion method) were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data from the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates when fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
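
    The Arrhenius extrapolation step common to these approaches can be sketched as follows, with hypothetical isoconversion times (days to reach the specification limit) at three accelerated temperatures:

        import numpy as np

        R = 8.314                                            # J/(mol K)
        T = np.array([50.0, 60.0, 70.0]) + 273.15            # K
        t_iso = np.array([180.0, 60.0, 22.0])                # days to spec limit

        # Arrhenius in time-to-event form: ln t = a + (Ea/R) * (1/T).
        slope, intercept = np.polyfit(1.0 / T, np.log(t_iso), 1)
        shelf_life = np.exp(intercept + slope / 298.15)      # extrapolate to 25 C
        print("Ea ~ %.0f kJ/mol, predicted 25 C shelf life ~ %.0f days"
              % (slope * R / 1000.0, shelf_life))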

  1. A Model-Free Diagnostic for Single-Peakedness of Item Responses Using Ordered Conditional Means.

    PubMed

    Polak, Marike; de Rooij, Mark; Heiser, Willem J

    2012-09-01

    In this article we propose a model-free diagnostic for single-peakedness (unimodality) of item responses. Presuming a unidimensional unfolding scale and a given item ordering, we approximate item response functions of all items based on ordered conditional means (OCM). The proposed OCM methodology is based on Thurstone & Chave's (1929) criterion of irrelevance, which is a graphical, exploratory method for evaluating the "relevance" of dichotomous attitude items. We generalized this criterion to graded response items and quantified the relevance by fitting a unimodal smoother. The resulting goodness-of-fit was used to determine item fit and aggregated scale fit. Based on a simulation procedure, cutoff values were proposed for the measures of item fit. These cutoff values showed high power rates and acceptable Type I error rates. We present 2 applications of the OCM method. First, we apply the OCM method to personality data from the Developmental Profile; second, we analyze attitude data collected by Roberts and Laughlin (1996) concerning opinions of capital punishment.

  2. SU-C-9A-04: Alternative Analytic Solution to the Paralyzable Detector Model to Calculate Deadtime and Deadtime Loss

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siman, W; Kappadath, S

    2014-06-01

    Purpose: Some common methods to solve for deadtime are (1) the dual-source method, which assumes two equal activities; (2) model fitting, which requires multiple acquisitions as the source decays; and (3) the lossless model, which assumes no deadtime loss at low count rates. We propose a new analytic alternative solution to calculate deadtime for a paralyzable gamma camera. Methods: Deadtime T can be calculated analytically from two distinct observed count rates M1 and M2 when the ratio of the true count rates alpha=N2/N1 is known. Alpha can be measured as a ratio of two measured activities using dose calibrators or via radioactive decay. Knowledge of alpha creates a system of 2 equations with 2 unknowns, i.e., T and N1. To verify the validity of the proposed method, projections of a non-uniform phantom (4 GBq 99mTc) were acquired using a Siemens Symbia S multiple times over 48 hours. Each projection had >100 kcts. The deadtime for each projection was calculated by fitting the data to a paralyzable model and also by using the proposed 2-acquisition method. The two estimates of deadtime were compared using the Bland-Altman method. In addition, the dependency of uncertainty in T on uncertainty in alpha was investigated for several imaging conditions. Results: The results strongly suggest that the 2-acquisition method is equivalent to the fitting method. The Bland-Altman analysis yielded a mean difference in deadtime estimate of ∼0.076 µs (95% CI: -0.049 µs, 0.103 µs) between the 2-acquisition and model fitting methods. The 95% limits of agreement were calculated to be -0.104 to 0.256 µs. The uncertainty in deadtime calculated using the proposed method is highly dependent on the uncertainty in the ratio alpha. Conclusion: The 2-acquisition method was found to be equivalent to the parameter fitting method. The proposed method offers a simpler and more practical way to analytically solve for a paralyzable detector deadtime, especially during physics testing.
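
    A worked numeric sketch of the 2-acquisition idea, assuming the standard paralyzable model M = N·exp(-N·T) and solving the resulting two-equation system with SciPy's root finder; the count rates, the value of alpha, and the function name are illustrative, not taken from the abstract:

    ```python
    # Two-acquisition deadtime solution for a paralyzable detector,
    # assuming M = N * exp(-N * tau). M1, M2 are observed count rates;
    # alpha = N2/N1 is the known ratio of true count rates.
    import numpy as np
    from scipy.optimize import fsolve

    def solve_deadtime(M1, M2, alpha, guess=(1.0e5, 1.0e-6)):
        """Return (N1, tau); the initial guess selects the physical branch."""
        def equations(x):
            N1, tau = x
            return (N1 * np.exp(-N1 * tau) - M1,
                    alpha * N1 * np.exp(-alpha * N1 * tau) - M2)
        return fsolve(equations, guess)

    # Synthetic check: generate M1, M2 from known N1, tau and recover them.
    N1_true, tau_true, alpha = 2.0e5, 0.8e-6, 0.5
    M1 = N1_true * np.exp(-N1_true * tau_true)
    M2 = alpha * N1_true * np.exp(-alpha * N1_true * tau_true)
    print(solve_deadtime(M1, M2, alpha))   # ~ [2.0e5, 8.0e-7]
    ```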

  3. Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations

    PubMed Central

    Nigh, Gordon

    2015-01-01

    Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem-analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested were indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike’s Information Criterion and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use, as the best-fitting methods have the most barriers to their application in terms of data and software requirements. PMID:25853472

  4. Detection of shallow buried objects using an autoregressive model on the ground penetrating radar signal

    NASA Astrophysics Data System (ADS)

    Nabelek, Daniel P.; Ho, K. C.

    2013-06-01

    The detection of shallow buried low-metal-content objects using ground penetrating radar (GPR) is a challenging task. This is because these targets lie right underneath the ground and the ground bounce reflection interferes with their detection. They do not create the distinctive hyperbolic signatures required by most existing GPR detection algorithms, owing to their special geometric shapes and low metal content. This paper proposes the use of the autoregressive (AR) modeling method for the detection of these targets. We fit an A-scan of the GPR data to an AR model. It is found that the fitting error is small when such a target is present and large when it is absent. The ratio of the energy in an A-scan before and after AR model fitting is used as the confidence value for detection. We also apply AR model fitting over scans and utilize the fitting residual energies over several scans to form a feature vector for improving the detections. Using the data collected from a government test site, the proposed method can improve the detection of such targets by 30% compared to the pre-screener, at a false alarm rate of 0.002/m².
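
    A minimal sketch of the per-A-scan confidence value described above, using a plain least-squares AR fit in NumPy; the AR order and the exact form of the energy ratio are illustrative assumptions, since the abstract does not specify them:

    ```python
    # Fit an AR(p) model to one A-scan and return the ratio of signal
    # energy to residual energy; a large ratio means the A-scan is well
    # explained by the AR model, which the paper associates with target
    # presence.
    import numpy as np

    def ar_confidence(a_scan, p=5):
        x = np.asarray(a_scan, dtype=float)
        y = x[p:]                                  # targets x[t], t = p..N-1
        # Lag matrix: row t holds [x[t-1], ..., x[t-p]].
        X = np.column_stack([x[p - 1 - j : len(x) - 1 - j] for j in range(p)])
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        residual = y - X @ coeffs
        return np.sum(y ** 2) / np.sum(residual ** 2)

    # Synthetic A-scan: a damped oscillation (AR-like) plus noise.
    rng = np.random.default_rng(0)
    t = np.arange(256)
    scan = np.exp(-t / 80.0) * np.sin(0.3 * t) + 0.05 * rng.normal(size=t.size)
    print(f"confidence = {ar_confidence(scan):.1f}")
    ```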

  5. Methods of comparing associative models and an application to retrospective revaluation.

    PubMed

    Witnauer, James E; Hutchings, Ryan; Miller, Ralph R

    2017-11-01

    Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Analytic solutions to modelling exponential and harmonic functions using Chebyshev polynomials: fitting frequency-domain lifetime images with photobleaching.

    PubMed

    Malachowski, George C; Clegg, Robert M; Redford, Glen I

    2007-12-01

    A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.

  7. Study on residual discharge time of lead-acid battery based on fitting method

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Yu, Wangwang; Jin, Yueqiang; Wang, Shuying

    2017-05-01

    This paper uses curve fitting to analyze the data from Problem C of the 2016 mathematical modeling contest. A residual discharge time model for a lead-acid battery under constant-current discharge at 20 A, 30 A, ..., 100 A is obtained, and a discharge time model for discharge at an arbitrary constant current is presented. The mean relative error of the model is calculated to be about 3%, which shows that the model has high accuracy. This model can provide a basis for optimizing the adaptation of the power system to the electric vehicle.
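
    The abstract does not state the fitted functional form, so the sketch below assumes a Peukert-style power law t = a/I^b for discharge time versus constant current, fitted with scipy.optimize.curve_fit; the current/time values are invented for illustration, and the mean relative error is computed the same way the abstract reports it:

    ```python
    # Hedged sketch: fit discharge time vs. constant current with an
    # assumed Peukert-style law t = a / I**b and report the mean
    # relative error of the fit.
    import numpy as np
    from scipy.optimize import curve_fit

    currents = np.array([20, 30, 40, 50, 60, 70, 80, 90, 100.0])        # A (hypothetical)
    times = np.array([10.5, 6.6, 4.7, 3.6, 2.9, 2.4, 2.0, 1.75, 1.5])   # h (hypothetical)

    def model(I, a, b):
        return a / I ** b

    (a, b), _ = curve_fit(model, currents, times, p0=(200.0, 1.2))
    mre = np.mean(np.abs(model(currents, a, b) - times) / times)
    print(f"a = {a:.1f}, b = {b:.2f}, mean relative error = {mre:.1%}")
    ```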

  8. Evaluation of marginal/internal fit of chrome-cobalt crowns: Direct laser metal sintering versus computer-aided design and computer-aided manufacturing.

    PubMed

    Gunsoy, S; Ulusoy, M

    2016-01-01

    The purpose of this study was to evaluate the internal and marginal fit of chrome-cobalt (Co-Cr) crowns fabricated with laser sintering, computer-aided design (CAD) and computer-aided manufacturing, and conventional methods. Polyamide master and working models were designed and fabricated. The models were initially designed with a software application for three-dimensional (3D) CAD (Maya, Autodesk Inc.). All models were produced by a 3D printer (EOSINT P380 SLS, EOS). 128 one-unit Co-Cr fixed dental prostheses were fabricated with four different techniques: the conventional lost-wax method, milled wax with the lost-wax method (MWLW), direct laser metal sintering (DLMS), and milled Co-Cr (MCo-Cr). The cement film thickness of the marginal and internal gaps was measured by an observer using a stereomicroscope after taking digital photos at ×24 magnification. Based on the means and standard deviations of all measurements, the best fit was obtained with DLMS in both premolar (65.84 µm) and molar (58.38 µm) models. A significant difference was found between DLMS and the other fabrication techniques (P < 0.05). No significant difference was found between MCo-Cr and MWLW in either premolar or molar models (P > 0.05). Based on the results, DLMS was the best-fitting fabrication technique for single crowns. The best fit was found at the margin; the largest gap was found occlusally. All groups were within the clinically acceptable misfit range.

  9. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions; especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.

  10. Accuracy of methods for calculating volumetric wear from coordinate measuring machine data of retrieved metal-on-metal hip joint implants.

    PubMed

    Lu, Zhen; McKellop, Harry A

    2014-03-01

    This study compared the accuracy and sensitivity of several numerical methods employing spherical or plane triangles for calculating the volumetric wear of retrieved metal-on-metal hip joint implants from coordinate measuring machine measurements. Five methods, one using spherical triangles and four using plane triangles to represent the bearing and the best-fit surfaces, were assessed and compared on a perfect hemisphere model and a hemi-ellipsoid model (i.e. unworn models), computer-generated wear models and wear-tested femoral balls, with point spacings of 0.5, 1, 2 and 3 mm. The results showed that the algorithm (Method 1) employing spherical triangles to represent the bearing surface and to scale the mesh to the best-fit surfaces produced adequate accuracy for the wear volume with point spacings of 0.5, 1, 2 and 3 mm. The algorithms (Methods 2-4) using plane triangles to represent the bearing surface and to scale the mesh to the best-fit surface also produced accuracies that were comparable to that with spherical triangles. In contrast, if the bearing surface was represented with a mesh of plane triangles and the best-fit surface was taken as a smooth surface without discretization (Method 5), the algorithm produced much lower accuracy with a point spacing of 0.5 mm than Methods 1-4 with a point spacing of 3 mm.

  11. Does Breast Cancer Drive the Building of Survival Probability Models among States? An Assessment of Goodness of Fit for Patient Data from SEER Registries

    PubMed

    Khan, Hafiz; Saxena, Anshul; Perisetti, Abhilash; Rafiq, Aamrin; Gabbidon, Kemesha; Mende, Sarah; Lyuksyutova, Maria; Quesada, Kandi; Blakely, Summre; Torres, Tiffany; Afesse, Mahlet

    2016-12-01

    Background: Breast cancer is a worldwide public health concern and is the most prevalent type of cancer in women in the United States. This study concerned the best fit of statistical probability models on the basis of survival times for nine state cancer registries: California, Connecticut, Georgia, Hawaii, Iowa, Michigan, New Mexico, Utah, and Washington. Materials and Methods: A probability random sampling method was applied to select and extract records of 2,000 breast cancer patients from the Surveillance Epidemiology and End Results (SEER) database for each of the nine state cancer registries used in this study. EasyFit software was utilized to identify the best probability models by using goodness of fit tests, and to estimate parameters for various statistical probability distributions that fit survival data. Results: Statistical analysis for the summary of statistics is reported for each of the states for the years 1973 to 2012. Kolmogorov-Smirnov, Anderson-Darling, and Chi-squared goodness-of-fit statistics were computed for the survival data, and the model ranked best by these statistics was taken as the best-fit survival model for each state. Conclusions: It was found that California, Connecticut, Georgia, Iowa, New Mexico, and Washington followed the Burr probability distribution, while the Dagum probability distribution gave the best fit for Michigan and Utah, and Hawaii followed the Gamma probability distribution. These findings highlight differences between states through selected sociodemographic variables and also demonstrate probability modeling differences in breast cancer survival times. The results of this study can be used to guide healthcare providers and researchers for further investigations into social and environmental factors in order to reduce the occurrence of and mortality due to breast cancer. Creative Commons Attribution License
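
    An illustrative sketch of the model-ranking step with SciPy in place of EasyFit, on simulated survival times; scipy's burr12 is Burr Type XII, and scipy's burr (Type III) plays the role of the Dagum distribution. Because the parameters are estimated from the same data, only the statistics (smaller = closer fit) are compared here, not the nominal p-values:

    ```python
    # Fit candidate distributions to survival times by maximum likelihood
    # and compare Kolmogorov-Smirnov statistics.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical survival times (months), not SEER data.
    surv = stats.burr12.rvs(2.0, 1.5, scale=50.0, size=2000, random_state=rng)

    for name in ("burr12", "burr", "gamma"):
        dist = getattr(stats, name)
        params = dist.fit(surv)
        ks = stats.kstest(surv, name, args=params)
        print(f"{name:7s} KS statistic = {ks.statistic:.4f}")
    ```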

  12. Automatic selection of arterial input function using tri-exponential models

    NASA Astrophysics Data System (ADS)

    Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David

    2009-02-01

    Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and the dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted to every candidate AIF using the Levenberg-Marquardt method, and the best-fitted AIF is selected. Our method has been applied to DCE-MRI of four different body parts: breast, brain, liver and prostate. The success rate in artery segmentation across 19 cases was 89.6% ± 15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2=0.946, P(T<=t)=0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
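
    A short sketch of the tri-exponential fitting step named above, using SciPy's Levenberg-Marquardt wrapper; the amplitudes, rate constants, and the goodness measure used to rank candidates are illustrative, not the paper's values:

    ```python
    # Fit a tri-exponential model to a candidate AIF curve and score it
    # by the residual sum of squares (used here to rank candidates).
    import numpy as np
    from scipy.optimize import curve_fit

    def tri_exp(t, a1, k1, a2, k2, a3, k3):
        return a1*np.exp(-k1*t) + a2*np.exp(-k2*t) + a3*np.exp(-k3*t)

    t = np.linspace(0, 5, 120)                        # minutes
    y = tri_exp(t, 6.0, 3.0, 1.0, 0.3, 0.5, 0.02)     # synthetic AIF
    y += 0.05 * np.random.default_rng(1).normal(size=t.size)

    p0 = (5.0, 2.0, 1.0, 0.5, 0.5, 0.05)              # rough initial guess
    popt, _ = curve_fit(tri_exp, t, y, p0=p0, method="lm")
    sse = np.sum((tri_exp(t, *popt) - y) ** 2)
    print(f"fitted parameters: {np.round(popt, 3)}, SSE = {sse:.4f}")
    ```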

  13. A method for modeling discontinuities in a microwave coaxial transmission line

    NASA Technical Reports Server (NTRS)

    Otoshi, T. Y.

    1992-01-01

    A method for modeling discontinuities in a coaxial transmission line is presented. The methodology involves the use of a nonlinear least-squares fit program to optimize the fit between theoretical data (from the model) and experimental data. When this method was applied to modeling discontinuities in a slightly damaged Galileo spacecraft S-band (2.295-GHz) antenna cable, excellent agreement between theory and experiment was obtained over a frequency range of 1.70-2.85 GHz. The same technique can be applied for diagnostics and locating unknown discontinuities in other types of microwave transmission lines, such as rectangular, circular, and beam waveguides.

  14. A method for modeling discontinuities in a microwave coaxial transmission line

    NASA Astrophysics Data System (ADS)

    Otoshi, T. Y.

    1992-08-01

    A method for modeling discontinuities in a coaxial transmission line is presented. The methodology involves the use of a nonlinear least-squares fit program to optimize the fit between theoretical data (from the model) and experimental data. When this method was applied to modeling discontinuities in a slightly damaged Galileo spacecraft S-band (2.295-GHz) antenna cable, excellent agreement between theory and experiment was obtained over a frequency range of 1.70-2.85 GHz. The same technique can be applied for diagnostics and locating unknown discontinuities in other types of microwave transmission lines, such as rectangular, circular, and beam waveguides.

  15. Relatedness, conflict, and the evolution of eusociality.

    PubMed

    Liao, Xiaoyun; Rong, Stephen; Queller, David C

    2015-03-01

    The evolution of sterile worker castes in eusocial insects was a major problem in evolutionary theory until Hamilton developed a method called inclusive fitness. He used it to show that sterile castes could evolve via kin selection, in which a gene for altruistic sterility is favored when the altruism sufficiently benefits relatives carrying the gene. Inclusive fitness theory is well supported empirically and has been applied to many other areas, but a recent paper argued that the general method of inclusive fitness was wrong and advocated an alternative population genetic method. The claim of these authors was bolstered by a new model of the evolution of eusociality with novel conclusions that appeared to overturn some major results from inclusive fitness. Here we report an expanded examination of this kind of model for the evolution of eusociality and show that all three of its apparently novel conclusions are essentially false. Contrary to their claims, genetic relatedness is important and causal, workers are agents that can evolve to be in conflict with the queen, and eusociality is not so difficult to evolve. The misleading conclusions all resulted not from incorrect math but from overgeneralizing from narrow assumptions or parameter values. For example, all of their models implicitly assumed high relatedness, but modifying the model to allow lower relatedness shows that relatedness is essential and causal in the evolution of eusociality. Their modeling strategy, properly applied, actually confirms major insights of inclusive fitness studies of kin selection. This broad agreement of different models shows that social evolution theory, rather than being in turmoil, is supported by multiple theoretical approaches. It also suggests that extensive prior work using inclusive fitness, from microbial interactions to human evolution, should be considered robust unless shown otherwise.

  16. Practical Consequences of Item Response Theory Model Misfit in the Context of Test Equating with Mixed-Format Test Data

    PubMed Central

    Zhao, Yue; Hambleton, Ronald K.

    2017-01-01

    In item response theory (IRT) models, assessing model-data fit is an essential step in IRT calibration. While no general agreement has ever been reached on the best methods or approaches to use for detecting misfit, perhaps the more important comment based upon the research findings is that rarely does the research evaluate IRT misfit by focusing on the practical consequences of misfit. The study investigated the practical consequences of IRT model misfit in examining the equating performance and the classification of examinees into performance categories in a simulation study that mimics a typical large-scale statewide assessment program with mixed-format test data. The simulation study was implemented by varying three factors, including choice of IRT model, amount of growth/change of examinees’ abilities between two adjacent administration years, and choice of IRT scaling methods. Findings indicated that the extent of significant consequences of model misfit varied with the choice of model and IRT scaling methods. In comparison with the mean/sigma (MS) and Stocking and Lord characteristic curve (SL) methods, separate calibration with linking and the fixed common item parameter (FCIP) procedure was more sensitive to model misfit and more robust against various amounts of ability shifts between two adjacent administrations regardless of model fit. SL was generally the least sensitive to model misfit in recovering the equating conversion, and MS was the least robust against ability shifts in recovering the equating conversion when a substantial degree of misfit was present. The key messages from the study are that practical ways are available to study model fit, and that model fit or misfit can have consequences that should be considered when choosing an IRT model. Not only does the study address the consequences of IRT model misfit, but also it is our hope to help researchers and practitioners find practical ways to study model fit and to investigate the validity of particular IRT models for achieving a specified purpose, to assure that the successful use of the IRT models is realized, and to improve the applications of IRT models with educational and psychological test data. PMID:28421011

  17. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. The local image fitting energy captures local image information, which enables the algorithm to segment images with intensity inhomogeneities. An advantage of this method is that the LIF energy functional has less computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from becoming trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.

  18. An Empirical Investigation of Methods for Assessing Item Fit for Mixed Format Tests

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Ansley, Timothy N.

    2013-01-01

    Empirical information regarding performance of model-fit procedures has been a persistent need in measurement practice. Statistical procedures for evaluating item fit were applied to real test examples that consist of both dichotomously and polytomously scored items. The item fit statistics used in this study included PARSCALE's G²,…

  19. Optimization of isotherm models for pesticide sorption on biopolymer-nanoclay composite by error analysis.

    PubMed

    Narayanan, Neethu; Gupta, Suman; Gajbhiye, V T; Manjaiah, K M

    2017-04-01

    A carboxy methyl cellulose-nano organoclay (nano montmorillonite modified with 35-45 wt % dimethyl dialkyl (C14-C18) amine (DMDA)) composite was prepared by the solution intercalation method. The prepared composite was characterized by infrared spectroscopy (FTIR), X-ray diffraction spectroscopy (XRD) and scanning electron microscopy (SEM). The composite was utilized for its pesticide sorption efficiency for atrazine, imidacloprid and thiamethoxam. The sorption data were fitted to Langmuir and Freundlich isotherms using linear and nonlinear methods. The linear regression method suggested that the sorption data fitted best into the Type II Langmuir and Freundlich isotherms. In order to avoid the bias resulting from linearization, seven different error parameters were also analyzed by the nonlinear regression method. The nonlinear error analysis suggested that the sorption data fitted well into the Langmuir model rather than the Freundlich model. The maximum sorption capacity, Q0 (μg/g), was highest for imidacloprid (2000), followed by thiamethoxam (1667) and atrazine (1429). The study suggests that the coefficient of determination of linear regression alone cannot be used for comparing the best fitting of the Langmuir and Freundlich models, and nonlinear error analysis needs to be done to avoid inaccurate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
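
    A hedged sketch of the linear-versus-nonlinear comparison for the Langmuir isotherm q = Q0·K·C/(1 + K·C); the concentration and sorption values are invented, and the Type II linearisation (1/q against 1/C) is shown only to illustrate where the bias discussed above comes from:

    ```python
    # Compare a direct nonlinear Langmuir fit with its Type II
    # linearisation; linearising reweights the errors, which biases the
    # recovered parameters.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import linregress

    C = np.array([1, 2, 5, 10, 20, 50, 100.0])               # ug/mL (hypothetical)
    q = np.array([150, 280, 560, 850, 1150, 1420, 1545.0])   # ug/g (hypothetical)

    def langmuir(C, Q0, K):
        return Q0 * K * C / (1 + K * C)

    # Nonlinear fit of the Langmuir model directly.
    (Q0, K), _ = curve_fit(langmuir, C, q, p0=(1500.0, 0.05))

    # Type II linearisation: 1/q = 1/(Q0*K) * (1/C) + 1/Q0.
    lin = linregress(1 / C, 1 / q)
    Q0_lin = 1 / lin.intercept
    K_lin = lin.intercept / lin.slope

    # One of the error parameters: sum of squared errors on the original scale.
    sse = np.sum((langmuir(C, Q0, K) - q) ** 2)
    print(f"nonlinear: Q0={Q0:.0f}, K={K:.3f}, SSE={sse:.0f}; "
          f"linearised: Q0={Q0_lin:.0f}, K={K_lin:.3f}")
    ```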

  20. An approximation method for improving dynamic network model fitting.

    PubMed

    Carnegie, Nicole Bohme; Krivitsky, Pavel N; Hunter, David R; Goodreau, Steven M

    There has been a great deal of interest recently in the modeling and simulation of dynamic networks, i.e., networks that change over time. One promising model is the separable temporal exponential-family random graph model (ERGM) of Krivitsky and Handcock, which treats the formation and dissolution of ties in parallel at each time step as independent ERGMs. However, the computational cost of fitting these models can be substantial, particularly for large, sparse networks. Fitting cross-sectional models for observations of a network at a single point in time, while still a non-negligible computational burden, is much easier. This paper examines model fitting when the available data consist of independent measures of cross-sectional network structure and the duration of relationships under the assumption of stationarity. We introduce a simple approximation to the dynamic parameters for sparse networks with relationships of moderate or long duration and show that the approximation method works best in precisely those cases where parameter estimation is most likely to fail: networks with very little change at each time step. We consider a variety of cases: Bernoulli formation and dissolution of ties, independent-tie formation and Bernoulli dissolution, independent-tie formation and dissolution, and dependent-tie formation models.

  1. Introducing the fit-criteria assessment plot - A visualisation tool to assist class enumeration in group-based trajectory modelling.

    PubMed

    Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria

    2017-10-01

    Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class-enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: An R-code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information of model fit indices in automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate fit-criteria assessment plot's utility. Results: Fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. Fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: Fit-criteria assessment plot is an exploratory, visualisation tool that can be employed to assist decisions in the initial and decisive phase of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster group-based trajectory modelling's adequate use.

  2. Parametric modelling of cost data in medical studies.

    PubMed

    Nixon, R M; Thompson, S G

    2004-04-30

    The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BC(a) bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
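
    A minimal sketch of the parametric approach on simulated right-skewed cost data: fit candidate distributions by maximum likelihood and compare log-likelihoods and the implied population mean. scipy.stats.fisk is the log-logistic; the two-parameter versions described above are obtained here by fixing loc = 0:

    ```python
    # Fit skewed candidate distributions to cost data by maximum
    # likelihood and compare fits via log-likelihood.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    costs = rng.lognormal(mean=6.0, sigma=1.0, size=300)   # hypothetical per-patient costs

    for dist in (stats.gamma, stats.lognorm, stats.fisk):  # fisk = log-logistic
        params = dist.fit(costs, floc=0)                   # two-parameter versions
        ll = np.sum(dist.logpdf(costs, *params))
        mean_est = dist.mean(*params)                      # implied population mean cost
        print(f"{dist.name:8s} loglik = {ll:9.1f}  mean = {mean_est:8.0f}")
    ```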

  3. A novel method to assess primary stability of press-fit acetabular cups.

    PubMed

    Crosnier, Emilie A; Keogh, Patrick S; Miles, Anthony W

    2014-11-01

    Initial stability is an essential prerequisite to achieve osseointegration of press-fit acetabular cups in total hip replacements. Most in vitro methods that assess cup stability do not reproduce physiological loading conditions and use simplified acetabular models with a spherical cavity. The aim of this study was to investigate the effect of bone density and acetabular geometry on cup stability using a novel method for measuring acetabular cup micromotion. A press-fit cup was inserted into Sawbones(®) foam blocks having different densities to simulate normal and osteoporotic bone variations and different acetabular geometries. The stability of the cup was assessed in two ways: (a) measurement of micromotion of the cup in 6 degrees of freedom under physiological loading and (b) uniaxial push-out tests. The results indicate that changes in bone substrate density and acetabular geometry affect the stability of press-fit acetabular cups. They also suggest that cups implanted into weaker, for example, osteoporotic, bone are subjected to higher levels of micromotion and are therefore more prone to loosening. The decrease in stability of the cup in the physiological model suggests that using simplified spherical cavities to model the acetabulum over-estimates the initial stability of press-fit cups. This novel testing method should provide the basis for a more representative protocol for future pre-clinical evaluation of new acetabular cup designs. © IMechE 2014.

  4. SPSS macros to compare any two fitted values from a regression model.

    PubMed

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests, particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
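
    The matrix-algebra idea behind the macros, sketched with statsmodels rather than SPSS: the difference between two fitted values is c'b with c = x1 - x2, and its standard error is sqrt(c' Cov(b) c). The quadratic model and data below are invented for illustration:

    ```python
    # Compare two fitted values from an OLS model with a polynomial term.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    x = rng.uniform(0, 10, 200)
    y = 2.0 + 1.5 * x - 0.08 * x**2 + rng.normal(0, 1, 200)

    X = sm.add_constant(np.column_stack([x, x**2]))
    fit = sm.OLS(y, X).fit()

    x1 = np.array([1.0, 3.0, 9.0])     # design row for the fitted value at x = 3
    x2 = np.array([1.0, 7.0, 49.0])    # design row for the fitted value at x = 7
    c = x1 - x2
    diff = c @ fit.params                              # difference of fitted values
    se = np.sqrt(c @ fit.cov_params() @ c)             # its standard error
    ci = (diff - 1.96 * se, diff + 1.96 * se)
    print(f"difference = {diff:.3f}, SE = {se:.3f}, 95% CI = {ci}")
    ```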

  5. Comparing Indirect Effects in SEM: A Sequential Model Fitting Method Using Covariance-Equivalent Specifications

    ERIC Educational Resources Information Center

    Chan, Wai

    2007-01-01

    In social science research, an indirect effect occurs when the influence of an antecedent variable on the effect variable is mediated by an intervening variable. To compare indirect effects within a sample or across different samples, structural equation modeling (SEM) can be used if the computer program supports model fitting with nonlinear…

  6. Can Policy Alone Stop Decline of Children and Youth Fitness?

    ERIC Educational Resources Information Center

    Zhang, Chunhua; Yang, Yang

    2017-01-01

    Various models and methods have been proposed to address the worldwide decline in children's and youth's physical fitness, and the social-ecological model has shown some promise. Yet, the impact of the policy intervention, one component of that model, has not been evaluated carefully. Using limited data from policy documents, the impact of policy…

  7. An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data.

    PubMed

    Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel

    2012-01-01

    For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.

  8. Automatic Fitting of Spiking Neuron Models to Electrophysiological Recordings

    PubMed Central

    Rossant, Cyrille; Goodman, Dan F. M.; Platkiewicz, Jonathan; Brette, Romain

    2010-01-01

    Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models. PMID:20224819

  9. Goodness of fit of probability distributions for sightings as species approach extinction.

    PubMed

    Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael

    2009-04-01

    Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provide adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
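
    A compact sketch of a PPCC-style comparison using scipy.stats.probplot, which returns the correlation coefficient r of the probability plot (higher r = closer agreement); the sighting record is simulated, and the formal hypothesis tests and L-moment diagrams of the paper are not reproduced here:

    ```python
    # Compare candidate distributions for a sighting record by the
    # probability-plot correlation coefficient.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    sightings = np.sort(rng.uniform(0, 150, size=25))   # hypothetical sighting years

    for dist in ("uniform", "expon"):
        (osm, osr), (slope, intercept, r) = stats.probplot(sightings, dist=dist)
        print(f"{dist:8s} PPCC r = {r:.4f}")
    ```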

  10. Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2013-01-01

    Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.

  11. Local-aggregate modeling for big data via distributed optimization: Applications to neuroimaging.

    PubMed

    Hu, Yue; Allen, Genevera I

    2015-12-01

    Technological advances have led to a proliferation of structured big data that have matrix-valued covariates. We are specifically motivated to build predictive models for multi-subject neuroimaging data based on each subject's brain imaging scans. This is an ultra-high-dimensional problem that consists of a matrix of covariates (brain locations by time points) for each subject; few methods currently exist to fit supervised models directly to this tensor data. We propose a novel modeling and algorithmic strategy to apply generalized linear models (GLMs) to this massive tensor data in which one set of variables is associated with locations. Our method begins by fitting GLMs to each location separately, and then builds an ensemble by blending information across locations through regularization with what we term an aggregating penalty. Our so-called Local-Aggregate Model can be fit in a completely distributed manner over the locations using an Alternating Direction Method of Multipliers (ADMM) strategy, and thus greatly reduces the computational burden. Furthermore, we propose to select the appropriate model through a novel sequence of faster algorithmic solutions that is similar to regularization paths. We demonstrate both the computational and predictive modeling advantages of our methods via simulations and an EEG classification problem. © 2015, The International Biometric Society.

  12. Establishment method of a mixture model and its practical application for transmission gears in an engineering vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping

    2012-09-01

    Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors that lead to the complexity of the load. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, it cannot be well represented by a single-peak function. To represent the distribution characteristics of this complicated phenomenon accurately, this paper proposes a novel method for establishing a mixture model. Based on linear regression models and correlation coefficients, the proposed method can be used to automatically select the best-fitting function in the mixture model. The coefficient of determination, the mean square error, and the maximum deviation are chosen and then used as judging criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated by the field testing data of a wheel loader. Meanwhile, the load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for the description of the load-distribution characteristics. The proposed research improves the flexibility and intelligence of modeling, reduces the statistical error and enhances the fitting accuracy, and the load spectra compiled by this method can better reflect the actual load characteristics of the gear component.

  13. A New Z Score Curve of the Coronary Arterial Internal Diameter Using the Lambda-Mu-Sigma Method in a Pediatric Population.

    PubMed

    Kobayashi, Tohru; Fuse, Shigeto; Sakamoto, Naoko; Mikami, Masashi; Ogawa, Shunichi; Hamaoka, Kenji; Arakaki, Yoshio; Nakamura, Tsuneyuki; Nagasawa, Hiroyuki; Kato, Taichi; Jibiki, Toshiaki; Iwashima, Satoru; Yamakawa, Masaru; Ohkubo, Takashi; Shimoyama, Shinya; Aso, Kentaro; Sato, Seiichi; Saji, Tsutomu

    2016-08-01

    Several coronary artery Z score models have been developed. However, a Z score model derived by the lambda-mu-sigma (LMS) method has not been established. Echocardiographic measurements of the proximal right coronary artery, left main coronary artery, proximal left anterior descending coronary artery, and proximal left circumflex artery were prospectively collected in 3,851 healthy children ≤18 years of age and divided into developmental and validation data sets. In the developmental data set, smooth curves were fitted for each coronary artery using linear, logarithmic, square-root, and LMS methods for both sexes. The relative goodness of fit of these models was compared using the Bayesian information criterion. The best-fitting model was tested for reproducibility using the validation data set. The goodness of fit of the selected model was visually compared with that of the previously reported regression models using a Q-Q plot. Because the internal diameter of each coronary artery was not similar between sexes, sex-specific Z score models were developed. The LMS model with body surface area as the independent variable showed the best goodness of fit; therefore, the internal diameter of each coronary artery was transformed into a sex-specific Z score on the basis of body surface area using the LMS method. In the validation data set, a Q-Q plot of each model indicated that the distribution of Z scores in the LMS models was closer to the normal distribution compared with previously reported regression models. Finally, the final models for each coronary artery in both sexes were developed using the developmental and validation data sets. A Microsoft Excel-based Z score calculator was also created, which is freely available online (http://raise.umin.jp/zsp/calculator/). Novel LMS models with which to estimate the sex-specific Z score of each internal coronary artery diameter were generated and validated using a large pediatric population. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
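
    For reference, the LMS transform presumably underlying the models above, in Cole's standard formulation with skewness L(x), median M(x), and coefficient of variation S(x) estimated as smooth functions of body surface area x (an assumption about the exact notation; the paper may parameterize it slightly differently):

    ```latex
    % Z score of a measured diameter y at covariate value x (Cole's LMS):
    Z = \begin{cases}
          \dfrac{\bigl(y / M(x)\bigr)^{L(x)} - 1}{L(x)\, S(x)}, & L(x) \neq 0,\\[2ex]
          \dfrac{\ln\bigl(y / M(x)\bigr)}{S(x)},                & L(x) = 0.
        \end{cases}
    ```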

  14. Why Might Relative Fit Indices Differ between Estimators?

    ERIC Educational Resources Information Center

    Weng, Li-Jen; Cheng, Chung-Ping

    1997-01-01

    Relative fit indices using the null model as the reference point in computation may differ across estimation methods, as this article illustrates by comparing maximum likelihood, ordinary least squares, and generalized least squares estimation in structural equation modeling. The illustration uses a covariance matrix for six observed variables…

  15. Correlates of male fitness in captive zebra finches--a comparison of methods to disentangle genetic and environmental effects.

    PubMed

    Bolund, Elisabeth; Schielzeth, Holger; Forstmeier, Wolfgang

    2011-11-08

    It is a common observation in evolutionary studies that larger, more ornamented or earlier breeding individuals have higher fitness, but that body size, ornamentation or breeding time does not change despite sometimes substantial heritability for these traits. A possible explanation for this is that these traits do not causally affect fitness, but rather happen to be indirectly correlated with fitness via unmeasured non-heritable aspects of condition (e.g. undernourished offspring grow small and have low fitness as adults due to poor health). Whether this explanation applies to a specific case can be examined by decomposing the covariance between trait and fitness into its genetic and environmental components using pedigree-based animal models. We here examine different methods of doing this for a captive zebra finch population where male fitness was measured in communal aviaries in relation to three phenotypic traits (tarsus length, beak colour and song rate). Our case study illustrates how methods that regress fitness on breeding values for phenotypic traits yield biased estimates as well as anti-conservative standard errors. Hence, it is necessary to estimate the genetic and environmental covariances between trait and fitness directly from a bivariate model. This method, however, is very demanding in terms of sample sizes. In our study parameter estimates of selection gradients for tarsus were consistent with the hypothesis of environmentally induced bias (βA=0.035±0.25 (SE), βE=0.57±0.28 (SE)), yet this difference between genetic and environmental selection gradients falls short of statistical significance. To examine the generality of the idea that phenotypic selection gradients for certain traits (like size) are consistently upwardly biased by environmental covariance, a meta-analysis across study systems will be needed.

  16. Correlates of male fitness in captive zebra finches - a comparison of methods to disentangle genetic and environmental effects

    PubMed Central

    2011-01-01

    Background: It is a common observation in evolutionary studies that larger, more ornamented or earlier breeding individuals have higher fitness, but that body size, ornamentation or breeding time does not change despite sometimes substantial heritability for these traits. A possible explanation for this is that these traits do not causally affect fitness, but rather happen to be indirectly correlated with fitness via unmeasured non-heritable aspects of condition (e.g. undernourished offspring grow small and have low fitness as adults due to poor health). Whether this explanation applies to a specific case can be examined by decomposing the covariance between trait and fitness into its genetic and environmental components using pedigree-based animal models. We here examine different methods of doing this for a captive zebra finch population where male fitness was measured in communal aviaries in relation to three phenotypic traits (tarsus length, beak colour and song rate). Results: Our case study illustrates how methods that regress fitness on breeding values for phenotypic traits yield biased estimates as well as anti-conservative standard errors. Hence, it is necessary to estimate the genetic and environmental covariances between trait and fitness directly from a bivariate model. This method, however, is very demanding in terms of sample sizes. In our study parameter estimates of selection gradients for tarsus were consistent with the hypothesis of environmentally induced bias (βA = 0.035 ± 0.25 (SE), βE = 0.57 ± 0.28 (SE)), yet this difference between genetic and environmental selection gradients falls short of statistical significance. Conclusions: To examine the generality of the idea that phenotypic selection gradients for certain traits (like size) are consistently upwardly biased by environmental covariance, a meta-analysis across study systems will be needed. PMID:22067225

  17. Bayesian-MCMC-based parameter estimation of stealth aircraft RCS models

    NASA Astrophysics Data System (ADS)

    Xia, Wei; Dai, Xiao-Xia; Feng, Yuan

    2015-12-01

    When modeling a stealth aircraft with low RCS (Radar Cross Section), conventional parameter estimation methods may cause a deviation from the actual distribution, owing to the fact that the characteristic parameters are estimated via directly calculating the statistics of RCS. The Bayesian-Markov Chain Monte Carlo (Bayesian-MCMC) method is introduced herein to estimate the parameters so as to improve the fitting accuracies of fluctuation models. The parameter estimation of the lognormal and the Legendre polynomial models is reformulated in the Bayesian framework. The MCMC algorithm is then adopted to calculate the parameter estimates. Numerical results show that the distribution curves obtained by the proposed method exhibit improved consistency with the actual ones, compared with those fitted by the conventional method. The fitting accuracy could be improved by no less than 25% for both fluctuation models, which implies that the Bayesian-MCMC method might be a good candidate among the optimal parameter estimation methods for stealth aircraft RCS models. Project supported by the National Natural Science Foundation of China (Grant No. 61101173), the National Basic Research Program of China (Grant No. 613206), the National High Technology Research and Development Program of China (Grant No. 2012AA01A308), the State Scholarship Fund of the China Scholarship Council (CSC), and the Oversea Academic Training Funds of the University of Electronic Science and Technology of China (UESTC).
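
    A minimal Metropolis sketch of the Bayesian-MCMC idea, applied to a lognormal fluctuation model with a flat prior; the RCS samples, proposal scale, and burn-in length are illustrative choices, not the paper's settings:

    ```python
    # Random-walk Metropolis sampling of (mu, sigma) for a lognormal
    # RCS fluctuation model under a flat prior.
    import numpy as np

    rng = np.random.default_rng(0)
    rcs = rng.lognormal(mean=-2.3, sigma=0.8, size=500)   # hypothetical RCS samples (m^2)

    def log_post(mu, sigma):
        # Log-posterior up to a constant; flat prior on (mu, sigma > 0).
        if sigma <= 0:
            return -np.inf
        z = (np.log(rcs) - mu) / sigma
        return -rcs.size * np.log(sigma) - 0.5 * np.sum(z ** 2)

    theta = np.array([0.0, 1.0])          # initial (mu, sigma)
    lp = log_post(*theta)
    samples = []
    for _ in range(20000):
        prop = theta + rng.normal(scale=0.05, size=2)     # random-walk proposal
        lp_prop = log_post(*prop)
        if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    post = np.array(samples[5000:])       # discard burn-in
    print("posterior means (mu, sigma):", post.mean(axis=0))
    ```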

  18. Can Policy Alone Stop Decline of Children and Youth Fitness?

    PubMed

    Zhang, Chunhua; Yang, Yang

    2017-03-01

    Various models and methods have been proposed to address the worldwide decline in children's and youth's physical fitness, and the social-ecological model has shown some promise. Yet, the impact of the policy intervention, 1 component of that model, has not been evaluated carefully. Using limited data from policy documents, the impact of policy related to children and youth fitness in China was examined, and it was found that the policy alone did not seem to work. Possible reasons are explored, and a call for more policy evaluation research is made.

  19. Assessing performance of Bayesian state-space models fit to Argos satellite telemetry locations processed with Kalman filtering.

    PubMed

    Silva, Mónica A; Jonsen, Ian; Russell, Deborah J F; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F

    2014-01-01

    Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with the KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, locations and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.

  1. Paleotemperature reconstruction from mammalian phosphate δ18O records - an alternative view on data processing

    NASA Astrophysics Data System (ADS)

    Skrzypek, Grzegorz; Sadler, Rohan; Wiśniewski, Andrzej

    2017-04-01

    The stable oxygen isotope composition of phosphates (δ18O) extracted from mammalian bone and teeth material is commonly used as a proxy for paleotemperature. Historically, several different analytical and statistical procedures for determining air paleotemperatures from the measured δ18O of phosphates have been applied. This inconsistency in both stable isotope data processing and the application of statistical procedures has led to large and unwanted differences between calculated results. This study presents the uncertainty associated with two of the most commonly used regression methods: the least squares inverted fit and the transposed fit. We assessed the performance of these methods by designing and applying calculation experiments to multiple real-life data sets, back-calculating temperatures, and comparing them with the true recorded values. Our calculations clearly show that the mean absolute errors are always substantially higher for the inverted fit (a causal model), with the transposed fit (a predictive model) returning mean values closer to the measured values (Skrzypek et al. 2015). The predictive models always performed better than causal models, with 12-65% lower mean absolute errors. Moreover, the least-squares regression (LSM) model is more appropriate than Reduced Major Axis (RMA) regression for calculating the environmental water stable oxygen isotope composition from phosphate signatures, as well as for calculating air temperature from the δ18O value of environmental water. The transposed fit introduces a lower overall error than the inverted fit for both the δ18O of environmental water and Tair calculations; therefore, the predictive models are more statistically efficient than the causal models in this instance. The direct comparison of paleotemperature results from different laboratories and studies may only be achieved if a single method of calculation is applied. Reference: Skrzypek G., Sadler R., Wiśniewski A., 2016. Reassessment of recommendations for processing mammal phosphate δ18O data for paleotemperature reconstruction. Palaeogeography, Palaeoclimatology, Palaeoecology 446, 162-167.
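
    The distinction between the two regression directions is easy to demonstrate. The sketch below, on synthetic calibration data (all values invented), fits the causal direction and algebraically inverts it, then fits the predictive (transposed) direction directly, and compares the mean absolute errors of the back-calculated temperatures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration set: temperature drives delta-18O with noise.
T = rng.uniform(-5, 25, 300)
d18O = 0.5 * T - 12.0 + rng.normal(0, 1.0, T.size)

# Inverted ("causal") fit: regress d18O on T, then solve for T.
a, b = np.polyfit(T, d18O, 1)
T_inv = (d18O - b) / a

# Transposed ("predictive") fit: regress T on d18O directly.
c, d = np.polyfit(d18O, T, 1)
T_trans = c * d18O + d

print("MAE inverted:  ", np.mean(np.abs(T_inv - T)))
print("MAE transposed:", np.mean(np.abs(T_trans - T)))
# The transposed (predictive) regression minimizes error in the predicted
# variable, so it typically yields the lower mean absolute error,
# matching the study's conclusion.
```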

  2. Event-scale power law recession analysis: quantifying methodological uncertainty

    NASA Astrophysics Data System (ADS)

    Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.

    2017-01-01

    The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
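
    For readers unfamiliar with the model at the centre of this sensitivity analysis, the widely used power law recession form is -dQ/dt = aQ^b. A minimal fitting sketch, using a finite-difference derivative and a log-log least-squares fit on a synthetic event, is shown below; real analyses differ precisely in the methodological choices (event definition, derivative estimation, fitting technique) that this study examines.

```python
import numpy as np

def fit_recession(q, dt=1.0):
    """Fit -dQ/dt = a * Q**b for one recession event.

    q  : streamflow series during the recession (positive, declining)
    dt : time step (days)
    Returns (a, b) from a log-log least-squares fit.
    """
    dq_dt = np.diff(q) / dt                  # finite-difference derivative
    q_mid = 0.5 * (q[:-1] + q[1:])           # flow at interval midpoints
    mask = dq_dt < 0                         # keep strictly receding steps
    b, log_a = np.polyfit(np.log(q_mid[mask]), np.log(-dq_dt[mask]), 1)
    return np.exp(log_a), b

# Example: a synthetic recession generated with a = 0.1, b = 1.5.
t = np.arange(0.0, 30.0, 1.0)
a_true, b_true = 0.1, 1.5
q0 = 10.0
q = (q0**(1 - b_true) - (1 - b_true) * a_true * t) ** (1 / (1 - b_true))
print(fit_recession(q))   # should recover roughly (0.1, 1.5)
```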

  3. Micro-CT evaluation of the marginal fit of CAD/CAM all ceramic crowns

    NASA Astrophysics Data System (ADS)

    Brenes, Christian

    Objectives: Evaluate the marginal fit of CAD/CAM all-ceramic crowns made from lithium disilicate and zirconia using two different fabrication protocols (model and model-less). Methods: Forty anterior all-ceramic restorations (20 lithium disilicate, 20 zirconia) were fabricated using a CEREC Bluecam scanner. Two different fabrication methods were used: a full digital approach and a printed model. Completed crowns were cemented and the marginal gap was evaluated using Micro-CT. Each specimen was analyzed in sagittal and trans-axial orientations, allowing a 360° evaluation of the vertical and horizontal fit. Results: Vertical measurements in the lingual, distal and mesial views had an estimated marginal gap from 101.9 to 133.9 microns for E-max crowns and 126.4 to 165.4 microns for zirconia. No significant differences were found between model and model-less techniques. Conclusion: Lithium disilicate restorations exhibited a more accurate and consistent marginal adaptation when compared to zirconia crowns. No statistically significant differences were observed when comparing model or model-less approaches.

  4. Reverse engineering the gap gene network of Drosophila melanogaster.

    PubMed

    Perkins, Theodore J; Jaeger, Johannes; Reinitz, John; Glass, Leon

    2006-05-01

    A fundamental problem in functional genomics is to determine the structure and dynamics of genetic networks based on expression data. We describe a new strategy for solving this problem and apply it to recently published data on early Drosophila melanogaster development. Our method is orders of magnitude faster than current fitting methods and allows us to fit different types of rules for expressing regulatory relationships. Specifically, we use our approach to fit models using a smooth nonlinear formalism for modeling gene regulation (gene circuits) as well as models using logical rules based on activation and repression thresholds for transcription factors. Our technique also allows us to infer regulatory relationships de novo or to test network structures suggested by the literature. We fit a series of models to test several outstanding questions about gap gene regulation, including regulation of and by hunchback and the role of autoactivation. Based on our modeling results and validation against the experimental literature, we propose a revised network structure for the gap gene system. Interestingly, some relationships in standard textbook models of gap gene regulation appear to be unnecessary for or even inconsistent with the details of gap gene expression during wild-type development.

  5. Extreme value modelling of Ghana stock exchange index.

    PubMed

    Nortey, Ezekiel N N; Asare, Kwabena; Mettle, Felix Okoe

    2015-01-01

    Modelling of extreme events has always been of interest in fields such as hydrology and meteorology. However, after the recent global financial crises, appropriate models for modelling of such rare events leading to these crises have become quite essential in the finance and risk management fields. This paper models the extreme values of the Ghana stock exchange all-shares index (2000-2010) by applying the extreme value theory (EVT) to fit a model to the tails of the daily stock returns data. A conditional approach of the EVT was preferred, and hence an ARMA-GARCH model was fitted to the data to correct for the effects of autocorrelation and conditional heteroscedastic terms present in the returns series, before the EVT method was applied. The Peak Over Threshold approach of the EVT, which fits a Generalized Pareto Distribution (GPD) model to excesses above a certain selected threshold, was employed. Maximum likelihood estimates of the model parameters were obtained and the model's goodness of fit was assessed graphically using Q-Q, P-P and density plots. The findings indicate that the GPD provides an adequate fit to the data of excesses. The sizes of the extreme daily Ghanaian stock market movements were then computed using the value at risk and expected shortfall risk measures at some high quantiles, based on the fitted GPD model.
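
    A minimal sketch of the Peak Over Threshold workflow described above follows, using scipy's generalized Pareto distribution and the standard POT formulas for value at risk and expected shortfall (assuming shape ξ ≠ 0 and ξ < 1). The heavy-tailed synthetic "losses" stand in for the ARMA-GARCH-filtered returns; the threshold and quantile choices are illustrative.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)

# Stand-in for (filtered) negative daily returns; heavy-tailed.
losses = rng.standard_t(df=4, size=2500)

u = np.quantile(losses, 0.95)            # threshold choice
excesses = losses[losses > u] - u

# Fit the GPD to the excesses (location fixed at 0, as in POT).
xi, loc, beta = genpareto.fit(excesses, floc=0)

# Value-at-Risk and Expected Shortfall at quantile p (POT formulas).
p, n, n_u = 0.99, losses.size, excesses.size
var_p = u + (beta / xi) * ((n / n_u * (1 - p)) ** (-xi) - 1)
es_p = (var_p + beta - xi * u) / (1 - xi)
print(f"VaR_{p}: {var_p:.3f}  ES_{p}: {es_p:.3f}")
```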

  6. Determining the turnover time of groundwater systems with the aid of environmental tracers. 1. Models and their applicability

    NASA Astrophysics Data System (ADS)

    Małoszewski, P.; Zuber, A.

    1982-06-01

    Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which more adequately represents real systems than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fitting than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fitting as the multiparameter finite state mixing-cell models. It has been shown that in the case of a constant tracer input, prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems, a serious mistake may arise from neglecting the different bicarbonate contents of the particular water components.
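
    A sketch of the exponential-piston flow idea follows, assuming the commonly quoted form of its transit time distribution: g(t) = (η/T) exp(-ηt/T + η - 1) for t ≥ T(1 - 1/η), and zero earlier, where T is the turnover time and η the ratio of total to exponential-flow volume. Tracer output is the convolution of the input history with this response, including decay for radioisotopes; all parameter values below are illustrative.

```python
import numpy as np

def epm_transit_time_pdf(t, T, eta):
    """Exponential-piston-flow (EPM) transit time distribution.

    T   : mean transit (turnover) time
    eta : ratio of total volume to the exponential-flow volume
          (eta >= 1; eta = 1 recovers the purely exponential model)
    """
    g = (eta / T) * np.exp(-eta * t / T + eta - 1.0)
    return np.where(t >= T * (1.0 - 1.0 / eta), g, 0.0)

def tracer_output(c_in, dt, T, eta, lam=0.0):
    """Convolve a tracer input history with the EPM response,
    including radioactive decay at rate lam (e.g. tritium)."""
    tau = np.arange(c_in.size) * dt
    g = epm_transit_time_pdf(tau, T, eta) * np.exp(-lam * tau)
    return np.convolve(c_in, g)[: c_in.size] * dt

# Example: response of a system (T = 20 yr, eta = 1.5) to a tracer pulse.
dt = 0.1
c_in = np.zeros(600); c_in[0] = 1.0 / dt      # unit impulse input
c_out = tracer_output(c_in, dt, T=20.0, eta=1.5)
print("peak arrival (yr):", np.argmax(c_out) * dt)  # ~ piston-flow delay
```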

  7. Tumor Control Probability Modeling for Stereotactic Body Radiation Therapy of Early-Stage Lung Cancer Using Multiple Bio-physical Models

    PubMed Central

    Liu, Feng; Tai, An; Lee, Percy; Biswas, Tithi; Ding, George X.; El Naqa, Isaam; Grimm, Jimm; Jackson, Andrew; Kong, Feng-Ming (Spring); LaCouture, Tamara; Loo, Billy; Miften, Moyed; Solberg, Timothy; Li, X Allen

    2017-01-01

    Purpose To analyze pooled clinical data using different radiobiological models and to understand the relationship between biologically effective dose (BED) and tumor control probability (TCP) for stereotactic body radiotherapy (SBRT) of early-stage non-small cell lung cancer (NSCLC). Method and Materials The clinical data of 1-, 2-, 3-, and 5-year actuarial or Kaplan-Meier TCP from 46 selected studies were collected for SBRT of NSCLC in the literature. The TCP data were separated for Stage T1 and T2 tumors if possible, otherwise collected for combined stages. BED was calculated at isocenters using six radiobiological models. For each model, the independent model parameters were determined from a fit to the TCP data using the least chi-square (χ²) method with either one set of parameters regardless of tumor stage or two sets for T1 and T2 tumors separately. Results The fits to the clinical data yield consistent results of large α/β ratios of about 20 Gy for all models investigated. The regrowth model, which accounts for tumor repopulation and heterogeneity, leads to a better fit to the data than the other five models, whose fits were indistinguishable from one another. Based on the fitted parameters, the models predict that T2 tumors require approximately an additional 1 Gy of physical dose at the isocenter per fraction (≤5 fractions) to achieve the optimal TCP when compared to T1 tumors. Conclusion This systematic analysis of a large set of published clinical data using different radiobiological models shows that local TCP for SBRT of early-stage NSCLC has a strong dependence on BED with large α/β ratios of about 20 Gy. The six models predict that a BED (calculated with α/β of 20) of 90 Gy is sufficient to achieve TCP ≥ 95%. Among the models considered, the regrowth model leads to a better fit to the clinical data. PMID:27871671
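
    To make the BED-TCP relationship concrete, the sketch below fits a simple logistic dose-response in BED (computed with α/β = 20 Gy) to a handful of invented schedule/TCP points; this logistic form is not one of the paper's six radiobiological models, and the data are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def bed(n, d, ab=20.0):
    """Biologically effective dose for n fractions of d Gy (alpha/beta = ab)."""
    return n * d * (1.0 + d / ab)

def tcp_logistic(bed_vals, tcd50, k):
    """Simple logistic dose-response: TCP as a function of BED."""
    return 1.0 / (1.0 + np.exp((tcd50 - bed_vals) / k))

# Illustrative (made-up) pooled points: (fractions, dose/fraction) and TCP.
schedules = np.array([(5, 8.0), (10, 5.0), (3, 15.0),
                      (1, 34.0), (5, 12.0), (3, 20.0)])
tcp_obs = np.array([0.70, 0.78, 0.88, 0.94, 0.95, 0.99])

bed_vals = bed(schedules[:, 0], schedules[:, 1])
(tcd50, k), _ = curve_fit(tcp_logistic, bed_vals, tcp_obs, p0=(80.0, 15.0))
print(f"TCD50 = {tcd50:.1f} Gy(BED), k = {k:.1f} Gy")
# BED needed for TCP >= 95% under this toy fit:
print("BED for TCP >= 95%:", tcd50 + k * np.log(0.95 / 0.05))
```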

  8. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.

  9. Robust mislabel logistic regression without modeling mislabel probabilities.

    PubMed

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
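
    The following schematic sketch conveys the core idea of down-weighting suspect labels by the fitted probability of the observed response raised to the power γ. It is an illustrative fixed-point iteration, not the authors' exact minimum γ-divergence estimating equation, and all data are synthetic.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gamma_logistic(X, y, gamma=0.5, n_outer=20, n_newton=25):
    """Schematic robust logistic fit: observations are re-weighted by the
    fitted probability of their observed label raised to the power gamma,
    so points whose labels look mislabeled under the current fit are
    down-weighted. (Illustrative fixed-point scheme only.)"""
    n, p = X.shape
    beta, w = np.zeros(p), np.ones(n)
    for _ in range(n_outer):
        # Weighted logistic regression via Newton-Raphson.
        for _ in range(n_newton):
            mu = sigmoid(X @ beta)
            grad = X.T @ (w * (y - mu))
            H = (X * (w * mu * (1 - mu))[:, None]).T @ X
            beta += np.linalg.solve(H + 1e-8 * np.eye(p), grad)
        mu = sigmoid(X @ beta)
        lik = np.where(y == 1, mu, 1 - mu)     # P(observed label | x)
        w = lik ** gamma                       # down-weight suspect labels
    return beta, w

# Demo with 10% flipped labels.
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
beta_true = np.array([0.5, 2.0, -1.5])
y = (rng.uniform(size=500) < sigmoid(X @ beta_true)).astype(float)
flip = rng.uniform(size=500) < 0.10
y[flip] = 1 - y[flip]
beta_hat, w = gamma_logistic(X, y)
print("estimate:", np.round(beta_hat, 2), " true:", beta_true)
```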

  10. Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Rogers, Adam; Safi-Harb, Samar; Fiege, Jason

    2015-08-01

    The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.

  11. Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.

    PubMed

    Jiang, Zhixing; Zhang, David; Lu, Guangming

    2018-04-19

    Radial artery pulse diagnosis has long played an important role in traditional Chinese medicine (TCM). Because it is non-invasive and convenient, pulse diagnosis also has great significance for disease analysis in modern medicine. Practitioners sense the pulse waveform at the patient's wrist and make diagnoses based on subjective personal experience. With research on pulse acquisition platforms and computerized analysis methods, the objective study of pulse diagnosis can help TCM keep pace with the development of modern medicine. In this paper, we propose a new method to extract features from pulse waveforms based on the discrete Fourier series (DFS). It regards the waveform as a signal that consists of a series of sub-components represented by sine and cosine (SC) signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample with a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Compared with a fitting method using Gaussian mixture functions, the fitting errors of the proposed method are smaller, which indicates that our method can represent the original signal better. The classification performance of the proposed feature is superior to that of other features extracted from the waveform, such as auto-regression models and Gaussian mixture models. The coefficients of the optimized DFS function, which is used to fit the arterial pressure waveforms, give better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states. Copyright © 2018 Elsevier B.V. All rights reserved.
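
    The core of the proposed feature extraction, fitting a truncated Fourier series to one averaged pulse period by least squares and using the coefficients as a feature vector, can be sketched as follows; the harmonic count, sampling rate and synthetic waveform are illustrative assumptions.

```python
import numpy as np

def fit_fourier_series(t, y, n_harmonics, period):
    """Least-squares fit of a truncated discrete Fourier series
    y(t) ~ a0 + sum_k [a_k cos(2*pi*k*t/period) + b_k sin(2*pi*k*t/period)].
    Returns the coefficient vector (usable as a feature vector) and fit."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * t / period
        cols += [np.cos(w), np.sin(w)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef

# Example: one simulated pulse period sampled at 100 Hz.
t = np.arange(0.0, 1.0, 0.01)
pulse = np.exp(-80 * (t - 0.15) ** 2) + 0.4 * np.exp(-60 * (t - 0.45) ** 2)
coef, fitted = fit_fourier_series(t, pulse, n_harmonics=8, period=1.0)
print("feature vector length:", coef.size)         # 1 + 2*8 = 17
print("RMS fitting error:", np.sqrt(np.mean((fitted - pulse) ** 2)))
```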

  12. Promoting Fitness and Safety in Elementary Students: A Randomized Control Study of the Michigan Model for Health

    ERIC Educational Resources Information Center

    O'Neill, James M.; Clark, Jeffrey K.; Jones, James A.

    2016-01-01

    Background: In elementary grades, comprehensive health education curricula have demonstrated effectiveness in addressing singular health issues. The Michigan Model for Health (MMH) was implemented and evaluated to determine its impact on nutrition, physical fitness, and safety knowledge and skills. Methods: Schools (N = 52) were randomly assigned…

  13. Measured, modeled, and causal conceptions of fitness

    PubMed Central

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  14. Outlier Detection in High-Stakes Certification Testing. Research Report.

    ERIC Educational Resources Information Center

    Meijer, Rob R.

    Recent developments of person-fit analysis in computerized adaptive testing (CAT) are discussed. Methods from statistical process control are presented that have been proposed to classify an item score pattern as fitting or misfitting the underlying item response theory (IRT) model in a CAT. Most person-fit research in CAT is restricted to…

  15. Bayesian Revision of Residual Detection Power

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2013-01-01

    This paper addresses some issues with quality assessment and quality assurance in response surface modeling experiments executed in wind tunnels. The role of data volume on quality assurance for response surface models is reviewed. Specific wind tunnel response surface modeling experiments are considered for which apparent discrepancies exist between fit quality expectations based on implemented quality assurance tactics, and the actual fit quality achieved in those experiments. These discrepancies are resolved by using Bayesian inference to account for certain imperfections in the assessment methodology. Estimates of the fraction of out-of-tolerance model predictions based on traditional frequentist methods are revised to account for uncertainty in the residual assessment process. The number of sites in the design space for which residuals are out of tolerance is seen to exceed the number of sites where the model actually fails to fit the data. A method is presented to estimate how much of the design space is inadequately modeled by low-order polynomial approximations to the true but unknown underlying response function.

  16. Modeling and Bayesian parameter estimation for shape memory alloy bending actuators

    NASA Astrophysics Data System (ADS)

    Crews, John H.; Smith, Ralph C.

    2012-04-01

    In this paper, we employ a homogenized energy model (HEM) for shape memory alloy (SMA) bending actuators. Additionally, we utilize a Bayesian method for quantifying parameter uncertainty. The system consists of a SMA wire attached to a flexible beam. As the actuator is heated, the beam bends, providing endoscopic motion. The model parameters are fit to experimental data using an ordinary least-squares approach. The uncertainty in the fit model parameters is then quantified using Markov Chain Monte Carlo (MCMC) methods. The MCMC algorithm provides bounds on the parameters, which will ultimately be used in robust control algorithms. One purpose of the paper is to test the feasibility of the Random Walk Metropolis algorithm, the MCMC method used here.

  17. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    PubMed

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.

  18. Pseudo second order kinetics and pseudo isotherms for malachite green onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2006-08-25

    Pseudo-second-order kinetic expressions of Ho, Sobkowsk and Czerwinski, Blanachard et al. and Ritchie were fitted to the experimental kinetic data of malachite green onto activated carbon by non-linear and linear methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second-order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and Ritchie pseudo-second-order models were the same. Non-linear regression analysis showed that Blanachard et al. and Ho had similar ideas on the pseudo-second-order model, but with different assumptions. The best fit of the experimental data by Ho's pseudo-second-order expression under both linear and non-linear regression showed that Ho's was the better kinetic expression compared with the other pseudo-second-order kinetic expressions. The amount of dye adsorbed at equilibrium, q_e, was predicted from Ho's pseudo-second-order expression and fitted to the Langmuir, Freundlich and Redlich-Peterson expressions by both linear and non-linear methods to obtain the pseudo isotherms. The best-fitting pseudo isotherms were found to be the Langmuir and Redlich-Peterson isotherms. Redlich-Peterson is a special case of Langmuir when the constant g equals unity.
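
    The contrast between the two fitting routes for Ho's model can be sketched as follows: the non-linear route fits q_t = k*q_e^2*t/(1 + k*q_e*t) directly, while the linearized route regresses t/q_t on t. The kinetic data below are synthetic, not from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k):
    """Ho's pseudo-second-order kinetics: qt = k*qe^2*t / (1 + k*qe*t)."""
    return k * qe**2 * t / (1.0 + k * qe * t)

# Illustrative kinetic data (time in min, uptake in mg/g) with 2% noise.
rng = np.random.default_rng(4)
t = np.array([5, 10, 20, 30, 45, 60, 90, 120], dtype=float)
q_obs = pso(t, qe=25.0, k=0.004) * (1 + rng.normal(0, 0.02, t.size))

# Non-linear fit (the route recommended by the study).
(qe_nl, k_nl), _ = curve_fit(pso, t, q_obs, p0=(20.0, 0.01))

# Linearized fit: t/qt = 1/(k*qe^2) + t/qe.
slope, intercept = np.polyfit(t, t / q_obs, 1)
qe_lin, k_lin = 1.0 / slope, slope**2 / intercept

print(f"non-linear: qe={qe_nl:.2f}, k={k_nl:.5f}")
print(f"linearized: qe={qe_lin:.2f}, k={k_lin:.5f}")
```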

  19. SU-F-BRD-16: Relative Biological Effectiveness of Double-Strand Break Induction for Modeling Cell Survival in Pristine Proton Beams of Different Dose-Averaged Linear Energy Transfers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, C; Bronk, L; UT Graduate School of Biomedical Sciences at Houston, Houston, TX

    2015-06-15

    Purpose: High throughput in vitro experiments assessing cell survival following proton radiation indicate that both the alpha and the beta parameters of the linear quadratic model increase with increasing proton linear energy transfer (LET). We investigated the relative biological effectiveness (RBE) of double-strand break (DSB) induction as a means of explaining the experimental results. Methods: Experiments were performed with two lung cancer cell lines and a range of proton LET values (0.94 – 19.4 keV/µm) using an experimental apparatus designed to irradiate cells in a 96 well plate such that each column encounters protons of different dose-averaged LET (LETd). Traditional linear quadratic survival curve fitting was performed, and alpha, beta, and RBE values obtained. Survival curves were also fit with a model incorporating RBE of DSB induction as the sole fit parameter. Fitted values of the RBE of DSB induction were then compared to values obtained using Monte Carlo Damage Simulation (MCDS) software and energy spectra calculated with Geant4. Other parameters including alpha, beta, and number of DSBs were compared to those obtained from traditional fitting. Results: Survival curve fitting with RBE of DSB induction yielded alpha and beta parameters that increase with proton LETd, which follows from the standard method of fitting; however, relying on a single fit parameter provided more consistent trends. The fitted values of RBE of DSB induction increased beyond what is predicted from MCDS data above proton LETd of approximately 10 keV/µm. Conclusion: In order to accurately model in vitro proton irradiation experiments performed with high throughput methods, the RBE of DSB induction must increase more rapidly than predicted by MCDS above LETd of 10 keV/µm. This can be explained by considering the increased complexity of DSBs or the nature of intra-track pairwise DSB interactions in this range of LETd values. NIH Grant 2U19CA021239-35.
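
    The "traditional" first step above, extracting alpha and beta from clonogenic survival at one LET condition, reduces to a linear least-squares problem after taking logs of S(D) = exp(-αD - βD²). A minimal sketch with invented survival points follows.

```python
import numpy as np

def fit_linear_quadratic(dose, surv_frac):
    """Fit ln(S) = -alpha*D - beta*D^2 by least squares (no intercept)."""
    A = np.column_stack([-dose, -dose**2])
    (alpha, beta), *_ = np.linalg.lstsq(A, np.log(surv_frac), rcond=None)
    return alpha, beta

# Illustrative clonogenic-survival points for one LET condition.
dose = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])    # Gy
surv = np.array([0.88, 0.75, 0.52, 0.18, 0.05, 0.011])
alpha, beta = fit_linear_quadratic(dose, surv)
print(f"alpha = {alpha:.3f} Gy^-1, beta = {beta:.4f} Gy^-2, "
      f"alpha/beta = {alpha/beta:.1f} Gy")
```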

  20. Extracting harmonic signal from a chaotic background with local linear model

    NASA Astrophysics Data System (ADS)

    Li, Chenlong; Su, Liyun

    2017-02-01

    In this paper, the problems of blind detection and estimation of a harmonic signal in a strong chaotic background are analyzed, and new methods using a local linear (LL) model are put forward. The LL model has been exhaustively researched and successfully applied to fitting and forecasting chaotic signals in many fields, and we substantially enlarge its modeling capacity here. Firstly, we predict the short-term chaotic signal and obtain the fitting error based on the LL model. We then detect the frequencies in the fitting error by periodogram; a property of the fitting error that has not been addressed before is proposed, and this property ensures that the detected frequencies are similar to those of the harmonic signal. Secondly, we establish a two-layer LL model to estimate the deterministic harmonic signal in the strong chaotic background. To perform this estimation simply and effectively, we develop an efficient backfitting algorithm to select and optimize the parameters, which are difficult to search exhaustively. In the method, based on the sensitivity of chaotic motion to initial values, the minimum fitting error criterion is used as the objective function for estimating the parameters of the two-layer LL model. Simulation shows that the two-layer LL model and its estimation technique have appreciable flexibility for modeling the deterministic harmonic signal in different chaotic backgrounds (Lorenz, Henon and Mackey-Glass (M-G) equations). Specifically, the harmonic signal can be extracted well at low SNR, and the developed backfitting algorithm converges within 3-5 iterations.
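
    A bare-bones version of the first stage, local linear prediction of the chaotic series and a periodogram of the fitting error, might look like the sketch below. The embedding dimension, neighbour count and logistic-map demo are illustrative choices; the paper's two-layer model and backfitting stage are not reproduced.

```python
import numpy as np

def local_linear_predict(x, m=3, tau=1, k=20):
    """One-step-ahead local linear (LL) prediction for a scalar series.

    Delay-embeds x into vectors of dimension m, finds the k nearest
    neighbours of the last embedded state, and fits an affine map from
    neighbour states to their successors by least squares."""
    N = len(x)
    emb = np.array([x[i : i + m * tau : tau] for i in range(N - m * tau)])
    targets = x[m * tau :]                       # successor of each state
    query = x[N - m * tau :: tau]                # current state
    dists = np.linalg.norm(emb[:-1] - query, axis=1)
    idx = np.argsort(dists)[:k]                  # k nearest neighbours
    A = np.column_stack([np.ones(k), emb[idx]])  # affine design matrix
    coef, *_ = np.linalg.lstsq(A, targets[idx], rcond=None)
    return coef @ np.concatenate([[1.0], query])

# Demo: a weak harmonic buried in a chaotic (logistic-map) background.
n = 2000
chaos = np.empty(n); chaos[0] = 0.4
for i in range(1, n):
    chaos[i] = 3.9 * chaos[i - 1] * (1 - chaos[i - 1])
signal = chaos + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(n))

# Fitting error = observed - LL prediction; per the paper's claim, the
# harmonic frequency should survive in its periodogram.
errs = [signal[i] - local_linear_predict(signal[:i]) for i in range(500, 600)]
spec = np.abs(np.fft.rfft(errs))
freqs = np.fft.rfftfreq(len(errs))
print("peak frequency in fitting error:", freqs[np.argmax(spec[1:]) + 1])
```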

  1. Fitness components and ecological risk of transgenic release: a model using Japanese medaka (Oryzias latipes).

    PubMed

    Muir, W M; Howard, R D

    2001-07-01

    Any release of transgenic organisms into nature is a concern because ecological relationships between genetically engineered organisms and other organisms (including their wild-type conspecifics) are unknown. To address this concern, we developed a method to evaluate risk in which we input estimates of fitness parameters from a founder population into a recurrence model to predict changes in transgene frequency after a simulated transgenic release. With this method, we grouped various aspects of an organism's life cycle into six net fitness components: juvenile viability, adult viability, age at sexual maturity, female fecundity, male fertility, and mating advantage. We estimated these components for wild-type and transgenic individuals using the fish, Japanese medaka (Oryzias latipes). We generalized our model's predictions using various combinations of fitness component values in addition to our experimentally derived estimates. Our model predicted that, for a wide range of parameter values, transgenes could spread in populations despite high juvenile viability costs if transgenes also have sufficiently high positive effects on other fitness components. Sensitivity analyses indicated that transgene effects on age at sexual maturity should have the greatest impact on transgene frequency, followed by juvenile viability, mating advantage, female fecundity, and male fertility, with changes in adult viability having the least impact.

  2. Nonparametric Determination of Redshift Evolution Index of Dark Energy

    NASA Astrophysics Data System (ADS)

    Ziaeepour, Houri

    We propose a nonparametric method to determine the sign of γ, the redshift evolution index of dark energy. This is important for distinguishing between positive-energy models, a cosmological constant, and what are generally called ghost models. Our method is based on geometrical properties and, where the sign of γ is concerned, is more tolerant of uncertainties in other cosmological parameters than fitting methods. The same parametrization can also be used to determine γ and its redshift dependence by fitting. We apply this method to SNLS supernovae and to the gold sample of re-analyzed supernovae data from Riess et al. Both datasets show a strong indication of a negative γ. If this result is confirmed by more extended and precise data, many dark energy models, including the simple cosmological constant, standard quintessence models without interaction between quintessence scalar field(s) and matter, and scaling models, are ruled out. We have also applied this method to Gurzadyan-Xue models with varying fundamental constants to demonstrate the possibility of using it to test other cosmologies.

  3. [An experimental study on the effect of different optical impression methods on marginal and internal fit of all-ceramic crowns].

    PubMed

    Tan, Fa-Bing; Wang, Lu; Fu, Gang; Wu, Shu-Hong; Jin, Ping

    2010-02-01

    To study the effect of different optical impression methods in the Cerec 3D/Inlab MC XL system on the marginal and internal fit of all-ceramic crowns. A right mandibular first molar in a standard model was prepared for a full crown and replicated into thirty-two plaster casts. Sixteen of them were selected randomly for crown bonding and the others were used for taking optical impressions; for half of these the direct optical impression method was used and for the other half the indirect method, and eight Cerec Blocs all-ceramic crowns were then manufactured for each group. The fit of the all-ceramic crowns was evaluated by modified United States Public Health Service (USPHS) criteria and scanning electron microscope (SEM) imaging, and the data were statistically analyzed with SAS 9.1 software. The clinically acceptable rate for all marginal measurement sites was 87.5% according to the USPHS criteria. There was no statistically significant difference in marginal fit between the direct and indirect method groups (P > 0.05). With SEM imaging, all marginal measurement sites were less than 120 microns, and no statistically significant difference was found between the direct and indirect method groups in terms of marginal or internal fit (P > 0.05). However, the direct method group showed better fit than the indirect method group on the mesial, lingual, buccal and occlusal surfaces (P < 0.05). The fit of the distal surface was worse, and a clear difference was observed between the mesial and distal surfaces in the direct method group (P < 0.01). Under the conditions of this study, the optical impression method had no significant effect on the marginal fit of Cerec Blocs crowns, but it had a certain effect on internal fit. Overall, the all-ceramic crowns appeared to have clinically acceptable marginal fit.

  4. Fitting ERGMs on big networks.

    PubMed

    An, Weihua

    2016-09-01

    The exponential random graph model (ERGM) has become a valuable tool for modeling social networks. In particular, the ERGM provides great flexibility to account for both covariate effects on tie formation and endogenous network formation processes. However, there are both conceptual and computational issues in fitting ERGMs on big networks. This paper describes a framework and a series of methods (based on existing algorithms) to address these issues. It also outlines the advantages and disadvantages of the methods and the conditions under which they are most applicable. Selected methods are illustrated through examples. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. [A correction method of baseline drift of discrete spectrum of NIR].

    PubMed

    Hu, Ai-Qin; Yuan, Hong-Fu; Song, Chun-Feng; Li, Xiao-Yu

    2014-10-01

    In the present paper, a new method for correcting baseline drift in discrete spectra is proposed, combining cubic spline interpolation and the first-order derivative. A fitting spectrum is constructed by cubic spline interpolation, using the data points of the discrete spectrum as interpolation nodes. The fitting spectrum is differentiable, and the first-order derivative is applied to it to calculate a derivative spectrum. Values at the original wavelengths are then taken from the derivative spectrum to constitute the first-derivative spectrum of the discrete spectrum, thereby correcting its baseline drift. The effects of the new method were demonstrated by comparing the performance of multivariate models built using the original spectra, directly differentiated spectra, and spectra pretreated by the new method. The results show that negative effects on the performance of multivariate models caused by baseline drift of discrete spectra can be effectively eliminated by the new method.
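
    The proposed pretreatment is straightforward to sketch with scipy: build a cubic spline through the discrete points, differentiate the spline analytically, and resample at the original wavelengths. In the demo below, an additive linear drift survives only as a constant offset of the derivative spectrum; the wavelength grid and peak shapes are invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def derivative_spectrum(wavelengths, absorbance):
    """Correct baseline drift of a discrete NIR spectrum: interpolate the
    points with a cubic spline, differentiate the spline analytically, and
    sample the derivative back at the original wavelengths."""
    spline = CubicSpline(wavelengths, absorbance)
    return spline.derivative(1)(wavelengths)

# Demo: the same 'spectrum' with and without an added linear drift.
wl = np.linspace(1100, 2500, 120)                     # nm, discrete grid
peaks = np.exp(-0.5 * ((wl - 1450) / 40) ** 2) + 0.6 * np.exp(
    -0.5 * ((wl - 1940) / 60) ** 2)
drift = 1e-4 * (wl - 1100) + 0.05                     # additive baseline
d_clean = derivative_spectrum(wl, peaks)
d_drift = derivative_spectrum(wl, peaks + drift)
# A constant offset vanishes and a linear drift becomes a constant shift,
# so the two derivative spectra differ only by the drift slope (1e-4).
print("max difference:", np.abs(d_drift - d_clean).max())
```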

  6. Sensitivity of Chemical Shift-Encoded Fat Quantification to Calibration of Fat MR Spectrum

    PubMed Central

    Wang, Xiaoke; Hernando, Diego; Reeder, Scott B.

    2015-01-01

    Purpose To evaluate the impact of different fat spectral models on proton density fat-fraction (PDFF) quantification using chemical shift-encoded (CSE) MRI. Material and Methods Simulations and in vivo imaging were performed. In a simulation study, spectral models of fat were compared pairwise. Comparison of magnitude fitting and mixed fitting was performed over a range of echo times and fat fractions. In vivo acquisitions from 41 patients were reconstructed using 7 published spectral models of fat. T2-corrected STEAM-MRS was used as reference. Results Simulations demonstrate that imperfectly calibrated spectral models of fat result in biases that depend on echo times and fat fraction. Mixed fitting is more robust against this bias than magnitude fitting. Multi-peak spectral models showed much smaller differences among themselves than when compared to the single-peak spectral model. In vivo studies show all multi-peak models agree better (for mixed fitting, slope ranged from 0.967–1.045 using linear regression) with reference standard than the single-peak model (for mixed fitting, slope=0.76). Conclusion It is essential to use a multi-peak fat model for accurate quantification of fat with CSE-MRI. Further, fat quantification techniques using multi-peak fat models are comparable and no specific choice of spectral model is shown to be superior to the rest. PMID:25845713

  7. Validation of a Cognitive Diagnostic Model across Multiple Forms of a Reading Comprehension Assessment

    ERIC Educational Resources Information Center

    Clark, Amy K.

    2013-01-01

    The present study sought to fit a cognitive diagnostic model (CDM) across multiple forms of a passage-based reading comprehension assessment using the attribute hierarchy method. Previous research on CDMs for reading comprehension assessments served as a basis for the attributes in the hierarchy. The two attribute hierarchies were fit to data from…

  8. Applying a Hypoxia-Incorporating TCP Model to Experimental Data on Rat Sarcoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruggieri, Ruggero, E-mail: ruggieri.ruggero@gmail.com; Stavreva, Nadejda; Naccarato, Stefania

    2012-08-01

    Purpose: To verify whether a tumor control probability (TCP) model which mechanistically incorporates acute and chronic hypoxia is able to describe animal in vivo dose-response data, exhibiting tumor reoxygenation. Methods and Materials: The investigated TCP model accounts for tumor repopulation, reoxygenation of chronic hypoxia, and fluctuating oxygenation of acute hypoxia. Using the maximum likelihood method, the model is fitted to Fischer-Moulder data on Wag/Rij rats, inoculated with rat rhabdomyosarcoma BA1112, and irradiated in vivo using different fractionation schemes. This data set is chosen because two of the experimental dose-response curves exhibit an inverse dose behavior, which is interpreted as due to reoxygenation. The tested TCP model is complex, and therefore, in vivo cell survival data on the same BA1112 cell line from Reinhold were added to Fischer-Moulder data and fitted simultaneously with a corresponding cell survival function. Results: The obtained fit to the combined Fischer-Moulder-Reinhold data was statistically acceptable. The best-fit values of the model parameters for which information exists were in the range of published values. The cell survival curves of well-oxygenated and hypoxic cells, computed using the best-fit values of the radiosensitivities and the initial number of clonogens, were in good agreement with the corresponding in vitro and in situ experiments of Reinhold. The best-fit values of most of the hypoxia-related parameters were used to recompute the TCP for non-small cell lung cancer patients as a function of the number of fractions, TCP(n). Conclusions: The investigated TCP model adequately describes animal in vivo data exhibiting tumor reoxygenation. The TCP(n) curve computed for non-small cell lung cancer patients with the best-fit values of most of the hypoxia-related parameters confirms previously obtained abrupt reduction in TCP for n < 10, thus warning against the adoption of severely hypofractionated schedules.

  9. A new analytical method for estimating lumped parameter constants of linear viscoelastic models from strain rate tests

    NASA Astrophysics Data System (ADS)

    Mattei, G.; Ahluwalia, A.

    2018-04-01

    We introduce a new function, the apparent elastic modulus strain-rate spectrum $E_{app}(\dot{\varepsilon})$, for the derivation of lumped-parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates ($\dot{\varepsilon}$). The $E_{app}(\dot{\varepsilon})$ function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant $\dot{\varepsilon}$ input. Material viscoelastic parameters can be rapidly derived by fitting experimental $E_{app}$ data obtained at different strain rates to the $E_{app}(\dot{\varepsilon})$ function. This single-curve fitting returns viscoelastic constants similar to those of the original epsilon dot method based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited for the analysis of bulk compression and nano-indentation data of soft (bio)materials.

  10. Tensor-guided fitting of subduction slab depths

    USGS Publications Warehouse

    Bazargani, Farhad; Hayes, Gavin P.

    2013-01-01

    Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data-fitting approach to address the problem. Earthquakes and active-source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth's curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.

  11. A point cloud modeling method based on geometric constraints mixing the robust least squares method

    NASA Astrophysics Data System (ADS)

    Yue, Jianping; Pan, Yi; Yue, Shun; Liu, Dapeng; Liu, Bin; Huang, Nan

    2016-10-01

    The advent of 3D laser scanning technology has provided a new method for the acquisition of spatial 3D information. It has been widely used in the field of surveying and mapping engineering because it is automatic and highly precise. The 3D laser scanning data processing workflow comprises field data acquisition, in-office point cloud registration, and subsequent 3D modeling and data integration. Point cloud modeling has been studied extensively at home and abroad. Surface reconstruction techniques mainly include point-based methods, triangle meshes, triangular Bezier surface models and rectangular surface models; neural networks and alpha shapes have also been used in surface reconstruction. These methods, however, often focus on fitting single surfaces, automatically or manually, block by block, which ignores the model's integrity. This leads to a serious problem in the stitched model: surfaces fitted separately often fail to satisfy well-known geometric constraints such as parallelism, perpendicularity, a fixed angle, or a fixed distance. Research on modeling with dimension and position constraints has nevertheless not been applied widely. One traditional modeling method that adds geometric constraints combines the penalty function method with the Levenberg-Marquardt algorithm (L-M algorithm), and its stability is quite good; in the course of our research, however, this method was found to be strongly influenced by the initial values. In this paper, we propose an improved point cloud modeling method that takes geometric constraints into account. We first apply robust least squares to improve the accuracy of the initial values, then use the penalty function method to transform the constrained optimization problem into an unconstrained one, and finally solve the problem with the L-M algorithm. The experimental results show that the internal accuracy is improved, and the improved point cloud modeling method proposed in this paper outperforms traditional point cloud modeling methods.
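
    A toy version of the penalty-function idea is sketched below: two noisy, nominally parallel planar scans are fitted jointly, with quadratic penalties enforcing unit normals and parallelism, seeded by ordinary SVD plane fits and solved with scipy's least-squares routine. The paper seeds with robust least squares and solves with Levenberg-Marquardt; the data and penalty weight here are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def plane_init(pts):
    """Ordinary plane fit (SVD) used to seed the constrained optimization."""
    c = pts.mean(axis=0)
    n = np.linalg.svd(pts - c)[2][-1]          # smallest singular vector
    return n, n @ c

def residuals(params, P1, P2, w=100.0):
    n1, d1, n2, d2 = params[:3], params[3], params[4:7], params[7]
    return np.concatenate([
        P1 @ n1 - d1,                          # fit errors, plane 1
        P2 @ n2 - d2,                          # fit errors, plane 2
        [np.sqrt(w) * (n1 @ n1 - 1.0)],        # unit-normal penalties
        [np.sqrt(w) * (n2 @ n2 - 1.0)],
        np.sqrt(w) * np.cross(n1, n2),         # parallelism penalty
    ])

# Two noisy, nominally parallel scans (synthetic stand-in for point clouds).
rng = np.random.default_rng(6)
xy = rng.uniform(-1, 1, (200, 2))
P1 = np.column_stack([xy, 0.02 * xy[:, 0] + rng.normal(0, 0.01, 200)])
P2 = np.column_stack([xy, 0.02 * xy[:, 0] + 1.0 + rng.normal(0, 0.01, 200)])

n1, d1 = plane_init(P1); n2, d2 = plane_init(P2)
sol = least_squares(residuals, np.concatenate([n1, [d1], n2, [d2]]),
                    args=(P1, P2))
n1, n2 = sol.x[:3], sol.x[4:7]
cosang = np.clip(abs(n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2)),
                 -1.0, 1.0)
print("angle between fitted normals (deg):", np.degrees(np.arccos(cosang)))
```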

  12. Universal Linear Fit Identification: A Method Independent of Data, Outliers and Noise Distribution Model and Free of Missing or Removed Data Imputation.

    PubMed

    Adikaram, K K L B; Hussein, M A; Effenberger, M; Becker, T

    2015-01-01

    Data processing requires a robust linear fit identification method. In this paper, we introduce a non-parametric robust linear fit identification method for time series. The method uses an indicator 2/n to identify linear fit, where n is the number of terms in a series. The ratios R_max = (a_max - a_min)/(S_n - a_min*n) and R_min = (a_max - a_min)/(a_max*n - S_n) are always equal to 2/n for a linear (arithmetic) series, where a_max is the maximum element, a_min is the minimum element and S_n is the sum of all elements. If any series expected to follow y = c consists of data that do not agree with the y = c form, R_max > 2/n and R_min > 2/n imply that the maximum and minimum elements, respectively, do not agree with the linear fit. We define threshold values for outlier and noise detection as (2/n)(1 + k_1) and (2/n)(1 + k_2), respectively, where k_1 > k_2 and 0 ≤ k_1 ≤ n/2 - 1. Given this relation and a transformation technique that transforms data into the form y = c, we show that removing all data that do not agree with the linear fit is possible. Furthermore, the method is independent of the number of data points, missing data, removed data points and the nature of the distribution (Gaussian or non-Gaussian) of outliers, noise and clean data. These are major advantages over existing linear fit methods. Since a perfect linear relation between two variables is impossible in the real world, we used artificial data sets with extreme conditions to verify the method. The method detects the correct linear fit when the percentage of data agreeing with the linear fit is less than 50%, and the deviation of data that do not agree with the linear fit is very small, of the order of ±10^-4%. The method results in incorrect detections only when numerical accuracy is insufficient in the calculation process.
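
    The indicators are simple enough to compute directly; the sketch below evaluates R_max and R_min for a clean arithmetic series (both equal 2/n) and then flags an injected outlier against the (2/n)(1 + k_1) threshold. The threshold factor is an arbitrary demo choice.

```python
import numpy as np

def linear_fit_indicators(a):
    """Compute the R_max and R_min indicators of Adikaram et al.
    For a perfectly linear (arithmetic) series both equal 2/n; values
    above 2/n flag the maximum / minimum element as off the linear fit."""
    n = a.size
    s = a.sum()
    r_max = (a.max() - a.min()) / (s - a.min() * n)
    r_min = (a.max() - a.min()) / (a.max() * n - s)
    return r_max, r_min, 2.0 / n

# A linear series, then the same series with one outlier injected.
series = 3.0 + 0.5 * np.arange(20)
print(linear_fit_indicators(series))          # R_max = R_min = 2/n = 0.1

spiked = series.copy(); spiked[7] += 40.0     # outlier above the trend
r_max, r_min, ref = linear_fit_indicators(spiked)
k1 = 0.5                                      # demo threshold factor
print(r_max > ref * (1 + k1))                 # True: max element flagged
```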

  13. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    PubMed Central

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape represents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In the feature space, we design a linear classifier as a human model to obtain user preference knowledge that cannot be represented linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluations with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050

  14. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach

    PubMed Central

    Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin

    2014-01-01

    Background To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single “goodness-of-fit” (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously-published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500 – 87,600] vs. $139,700 [95%CI: 79,900 - 182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900 – 156,200] per QALY gained). The TAVR model yielded similar results. Conclusions Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456
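
    The Pareto-dominance test itself is straightforward. A minimal sketch (ours, not the authors' code), where errors[i, j] holds the fit error of candidate input set i on calibration target j:

```python
# Minimal Pareto-frontier filter for calibration input sets: keep a set
# unless some other set is no worse on every target and better on one.
import numpy as np

def pareto_frontier(errors):
    n = errors.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i: no worse on all targets, strictly better on one
            if i != j and np.all(errors[j] <= errors[i]) and np.any(errors[j] < errors[i]):
                keep[i] = False
                break
    return np.where(keep)[0]

rng = np.random.default_rng(1)
errors = rng.random((500, 3))            # 500 candidate input sets, 3 targets
print("frontier indices:", pareto_frontier(errors))
```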

  15. In vitro differential diagnosis of clavus and verruca by a predictive model generated from electrical impedance.

    PubMed

    Hung, Chien-Ya; Sun, Pei-Lun; Chiang, Shu-Jen; Jaw, Fu-Shan

    2014-01-01

    Similar clinical appearances prevent accurate diagnosis of two common skin diseases, clavus and verruca. In this study, electrical impedance is employed as a novel tool to generate a predictive model for differentiating these two diseases. We used 29 clavus and 28 verruca lesions. To obtain impedance parameters, an LCR-meter system was applied to measure capacitance (C), resistance (Re), impedance magnitude (Z), and phase angle (θ). These values were combined with lesion thickness (d) to characterize the tissue specimens. The results from clavus and verruca were then fitted to a univariate logistic regression model with the generalized estimating equations (GEE) method. In model generation, log Z_SD and θ_SD were formulated as predictors by fitting a multiple logistic regression model with the same GEE method. Potential nonlinear effects of covariates were detected by fitting generalized additive models (GAMs). Moreover, the model was validated by goodness-of-fit (GOF) assessments. Significant mean differences in the indices d, Re, Z, and θ are found between clavus and verruca (p < 0.001). A final predictive model is established with the Z and θ indices. The model fits the observed data quite well: in the GOF evaluation, the area under the receiver operating characteristic (ROC) curve is 0.875 (>0.7), the adjusted generalized R² is 0.512 (>0.3), and the p value of the Hosmer-Lemeshow GOF test is 0.350 (>0.05). This technique promises to provide a validated model for the differential diagnosis of clavus and verruca, and could serve as a rapid, relatively low-cost, safe and non-invasive screening tool in clinical use.
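
    For readers unfamiliar with the fitting step, the sketch below shows how such a binary GEE logistic model can be fit with statsmodels. The file and column names (lesions.csv, Z_sd, theta_sd, patient_id, verruca) are hypothetical, since the study's data are not public.

```python
# Hedged sketch of a GEE logistic fit (1 = verruca, 0 = clavus), standing in
# for the study's model; names of the data file and columns are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("lesions.csv")           # hypothetical measurements file
df["logZ_sd"] = np.log10(df["Z_sd"])      # log of the Z standard deviation

model = smf.gee("verruca ~ logZ_sd + theta_sd",
                groups="patient_id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
```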

  16. Pre-processing by data augmentation for improved ellipse fitting.

    PubMed

    Kumar, Pankaj; Belchamber, Erika R; Miklavcic, Stanley J

    2018-01-01

    Ellipse fitting is a highly researched and mature topic. Surprisingly, however, no existing method has thus far considered the eccentricity of the data points in its ellipse fitting procedure. Here, we introduce the concept of the eccentricity of a data point, in analogy with the idea of ellipse eccentricity. We then show empirically that, irrespective of the ellipse fitting method used, the root mean square error (RMSE) of a fit increases with the eccentricity of the data point set. The main contribution of the paper is based on the hypothesis that if the data point set were pre-processed to strategically add additional data points in regions of high eccentricity, then the quality of a fit could be improved. Conditional validity of this hypothesis is demonstrated mathematically using a model scenario. Based on this confirmation, we propose an algorithm that pre-processes the data so that data points with high eccentricity are replicated. The improvement in ellipse fitting is then demonstrated empirically in a real-world application: 3D reconstruction of a plant root system for phenotypic analysis. The degree of improvement for different underlying ellipse fitting methods as a function of data noise level is also analysed. We show that almost every method tested, whether it minimizes algebraic error or geometric error, shows improvement in the fit following data augmentation using the proposed pre-processing algorithm.

  17. Model-free estimation of the psychometric function

    PubMed Central

    Żychaluk, Kamila; Foster, David H.

    2009-01-01

    A subject's response to the strength of a stimulus is described by the psychometric function, from which summary measures, such as a threshold or slope, may be derived. Traditionally, this function is estimated by fitting a parametric model to the experimental data, usually the proportion of successful trials at each stimulus level. Common models include the Gaussian and Weibull cumulative distribution functions. This approach works well if the model is correct, but it can mislead if not. In practice, the correct model is rarely known. Here, a nonparametric approach based on local linear fitting is advocated. No assumption is made about the true model underlying the data, except that the function is smooth. The critical role of the bandwidth is identified, and its optimum value estimated by a cross-validation procedure. As a demonstration, seven vision and hearing data sets were fitted by the local linear method and by several parametric models. The local linear method frequently performed better and never worse than the parametric ones. Supplemental materials for this article can be downloaded from app.psychonomic-journals.org/content/supplemental. PMID:19633355
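
    A bare-bones version of the approach (a generic local linear smoother with leave-one-out bandwidth selection, not the authors' code; a real psychometric fit would weight by trial counts and keep the estimate inside [0, 1]):

```python
# Local linear estimate of a smooth response function with a Gaussian kernel;
# the bandwidth h is selected by leave-one-out cross-validation.
import numpy as np

def local_linear(x0, x, y, h):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)            # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                                     # local intercept at x0

def loo_cv(x, y, h):
    idx = np.arange(len(x))
    return sum((y[i] - local_linear(x[i], x[idx != i], y[idx != i], h)) ** 2
               for i in idx)

x = np.linspace(-2, 2, 15)                             # stimulus levels
y = 1 / (1 + np.exp(-2 * x))                           # true psychometric function
y += np.random.default_rng(2).normal(0, 0.05, x.size)  # sampling noise

hs = np.linspace(0.2, 2.0, 10)
h_opt = hs[np.argmin([loo_cv(x, y, h) for h in hs])]
fit = [local_linear(x0, x, y, h_opt) for x0 in np.linspace(-2, 2, 50)]
```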

  18. The Chandra Source Catalog 2.0: Spectral Properties

    NASA Astrophysics Data System (ADS)

    McCollough, Michael L.; Siemiginowska, Aneta; Burke, Douglas; Nowak, Michael A.; Primini, Francis Anthony; Laurino, Omar; Nguyen, Dan T.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Paxson, Charles; Plummer, David A.; Rots, Arnold H.; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula; Chandra Source Catalog Team

    2018-01-01

    The second release of the Chandra Source Catalog (CSC) contains all sources identified from sixteen years' worth of publicly accessible observations. The vast majority of these sources were observed with the ACIS detector and have spectral information in the 0.5-7 keV energy range. Here we describe the methods used to automatically derive spectral properties for each source detected by the standard processing pipeline and included in the final CSC. Sources with a high signal-to-noise ratio (exceeding 150 net counts) were fit in Sherpa (the modeling and fitting application from the Chandra Interactive Analysis of Observations package) using wstat as the fit statistic and the Bayesian draws method to determine errors. Three models were fit to each source: absorbed power-law, blackbody, and bremsstrahlung emission. The fitted parameter values for the power-law, blackbody, and bremsstrahlung models are included in the catalog, together with the calculated flux for each model. The CSC also provides source energy fluxes computed from the normalizations of predefined absorbed power-law, blackbody, bremsstrahlung, and APEC models needed to match the observed net X-ray counts. For sources that have been observed multiple times, a Bayesian Blocks analysis will have been performed (see the Primini et al. poster), and a joint fit of the spectral models mentioned above will have been performed on the most significant block. In addition, we provide access to data products for each source: a file with the source spectrum, the background spectrum, and the spectral response of the detector. Hardness ratios were calculated for each source between pairs of energy bands (soft, medium and hard). This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.

  19. Effect of Solar Wind Drag on the Determination of the Properties of Coronal Mass Ejections from Heliospheric Images

    NASA Astrophysics Data System (ADS)

    Lugaz, N.; Kintner, P.

    2013-07-01

    The Fixed-Φ (FΦ) and Harmonic Mean (HM) fitting methods are two methods to determine the "average" direction and velocity of coronal mass ejections (CMEs) from time-elongation tracks produced by Heliospheric Imagers (HIs), such as the HIs onboard the STEREO spacecraft. Both methods assume a constant velocity in their descriptions of the time-elongation profiles of CMEs, which are used to fit the observed time-elongation data. Here, we analyze the effect of aerodynamic drag on CMEs propagating through interplanetary space and how this drag affects the results of the FΦ and HM fitting methods. A simple drag model is used to analytically construct time-elongation profiles, which are then fitted with the two methods. It is found that higher angles and velocities give rise to greater errors in both methods, with errors in the direction of propagation reaching up to 15° and 30° for the FΦ and HM fitting methods, respectively. This is due to the physical accelerations of the CMEs being interpreted as geometrical accelerations by the fitting methods. Because of the geometrical definition of the HM fitting method, it is more affected by the acceleration than the FΦ fitting method. Overall, we find that both techniques overestimate the initial (and final) velocity and direction for fast CMEs propagating beyond 90° from the Sun-spacecraft line, meaning that arrival times at 1 AU would be predicted early (by up to 12 hours). We also find that the direction and arrival time of a wide and decelerating CME can be better reproduced by the FΦ method due to the cancellation of two errors: neglecting the CME width and neglecting the CME deceleration. Overall, the inaccuracies of the two fitting methods are expected to play an important role in the prediction of CME hits and arrival times as we head towards solar maximum and the STEREO spacecraft move further behind the Sun.

  20. An interactive program for pharmacokinetic modeling.

    PubMed

    Lu, D R; Mao, F

    1993-05-01

    A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C language for the Macintosh operating system, taking advantage of its high-level user interface; the intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method using the χ² criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs currently available for IBM PC-compatible and other types of computers.

  1. Comparative analysis through probability distributions of a data set

    NASA Astrophysics Data System (ADS)

    Cristea, Gabriel; Constantinescu, Dan Mihai

    2018-02-01

    In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, and demography. This article highlights important aspects of fitting probability distributions to data and of applying the analysis results to make informed decisions. A number of statistical methods are available to help select the best fitting model. Some graphs display the input data and the fitted distributions at the same time, as probability density and cumulative distribution functions. Goodness-of-fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution and to compare that distance to threshold values. Calculating goodness-of-fit statistics also enables us to rank the fitted distributions according to how well they fit the data, which is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness-of-fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large data set is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness-of-fit tests.
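
    A short scipy-based sketch of the workflow on a synthetic sample (the article's data set is not reproduced here); note that the Chi-Squared test additionally requires a binning choice:

```python
# Fit candidate distributions, then compare them with the three GOF tests
# named above; synthetic gamma-distributed data stand in for the real set.
import numpy as np
from scipy import stats

data = stats.gamma.rvs(a=2.5, scale=1.2, size=1000, random_state=3)

for name in ["norm", "gamma", "lognorm"]:
    dist = getattr(stats, name)
    params = dist.fit(data)                     # maximum-likelihood fit
    ks = stats.kstest(data, name, args=params)  # Kolmogorov-Smirnov
    print(f"{name:8s} KS D = {ks.statistic:.4f}, p = {ks.pvalue:.3f}")

print(stats.anderson(data, dist="norm"))        # Anderson-Darling (limited dists)

# Chi-Squared requires binning: observed vs. expected counts per bin
obs, edges = np.histogram(data, bins=20)
cdf = stats.gamma.cdf(edges, *stats.gamma.fit(data))
exp = len(data) * np.diff(cdf)
exp *= obs.sum() / exp.sum()                    # match totals for the test
print(stats.chisquare(obs, f_exp=exp, ddof=3))  # 3 fitted parameters
```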

  2. Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach

    NASA Astrophysics Data System (ADS)

    Denolle, M.; Van Houtte, C.

    2017-12-01

    Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.

  3. FragFit: a web-application for interactive modeling of protein segments into cryo-EM density maps.

    PubMed

    Tiemann, Johanna K S; Rose, Alexander S; Ismer, Jochen; Darvish, Mitra D; Hilal, Tarek; Spahn, Christian M T; Hildebrand, Peter W

    2018-05-21

    Cryo-electron microscopy (cryo-EM) is a standard method to determine the three-dimensional structures of molecular complexes. However, easy-to-use tools for modeling protein segments into cryo-EM maps are sparse. Here, we present the FragFit web application, a web server for interactive modeling of segments of up to 35 amino acids in length into cryo-EM density maps. The fragments are provided by a regularly updated database, currently containing about 1 billion entries extracted from PDB structures, and can be readily integrated into a protein structure. Fragments are selected based on geometric criteria, sequence similarity and fit to a given cryo-EM density map. Web-based molecular visualization with the NGL Viewer allows interactive selection of fragments. The FragFit web application, accessible at http://proteinformatics.de/FragFit, is free and open to all users, without any login requirements.

  4. Model selection for the North American Breeding Bird Survey: A comparison of methods

    USGS Publications Warehouse

    Link, William; Sauer, John; Niven, Daniel

    2017-01-01

    The North American Breeding Bird Survey (BBS) provides data for >420 bird species at multiple geographic scales over 5 decades. Modern computational methods have facilitated the fitting of complex hierarchical models to these data. It is easy to propose and fit new models, but little attention has been given to model selection. Here, we discuss and illustrate model selection using leave-one-out cross validation, and the Bayesian Predictive Information Criterion (BPIC). Cross-validation is enormously computationally intensive; we thus evaluate the performance of the Watanabe-Akaike Information Criterion (WAIC) as a computationally efficient approximation to the BPIC. Our evaluation is based on analyses of 4 models as applied to 20 species covered by the BBS. Model selection based on BPIC provided no strong evidence of one model being consistently superior to the others; for 14/20 species, none of the models emerged as superior. For the remaining 6 species, a first-difference model of population trajectory was always among the best fitting. Our results show that WAIC is not reliable as a surrogate for BPIC. Development of appropriate model sets and their evaluation using BPIC is an important innovation for the analysis of BBS data.
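
    For reference, WAIC is computed from the matrix of pointwise log-likelihoods over posterior draws; a minimal sketch of the standard formula (not the authors' code):

```python
# WAIC from pointwise log-likelihoods (rows = posterior draws, columns =
# observations); deviance scale, so lower values indicate better fit.
import numpy as np
from scipy.special import logsumexp

def waic(log_lik):
    S = log_lik.shape[0]                                   # number of draws
    lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))  # log pointwise density
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))       # effective parameters
    return -2.0 * (lppd - p_waic)

# toy input standing in for MCMC output from a fitted BBS-style model
log_lik = np.random.default_rng(4).normal(-1.0, 0.1, size=(4000, 200))
print(waic(log_lik))
```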

  5. Hybrid active contour model for inhomogeneous image segmentation with background estimation

    NASA Astrophysics Data System (ADS)

    Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun

    2018-03-01

    This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.

  6. Accuracy of Digital Impressions and Fitness of Single Crowns Based on Digital Impressions

    PubMed Central

    Yang, Xin; Lv, Pin; Liu, Yihong; Si, Wenjie; Feng, Hailan

    2015-01-01

    In this study, the accuracy (precision and trueness) of digital impressions and the fitness of single crowns manufactured based on digital impressions were evaluated. #14-17 epoxy resin dentitions were made, while full-crown preparations of extracted natural teeth were embedded at #16. (1) To assess precision, deviations among repeated scan models made by the intraoral scanners TRIOS and MHT and the model scanners D700 and inEos were calculated through a best-fit algorithm and three-dimensional (3D) comparison. Root mean square (RMS) values and color-coded difference images were obtained. (2) To assess trueness, micro computed tomography (micro-CT) was used to obtain the reference model (REF). Deviations between REF and the repeated scan models (from (1)) were calculated. (3) To assess fitness, single crowns were manufactured based on the TRIOS, MHT, D700 and inEos scan models. The adhesive gaps were evaluated under a stereomicroscope after cross-sectioning. Digital impressions showed lower precision but better trueness. Except for MHT, the mean RMS values for precision were below 10 μm. Digital impressions showed better internal fitness, and the fitness of single crowns based on digital impressions was up to the clinical standard. Digital impressions could be an alternative method for manufacturing single crowns. PMID:28793417

  7. Estimation of fitness from energetics and life-history data: An example using mussels.

    PubMed

    Sebens, Kenneth P; Sarà, Gianluca; Carrington, Emily

    2018-06-01

    Changing environments have the potential to alter the fitness of organisms through effects on components of fitness such as energy acquisition, metabolic cost, growth rate, survivorship, and reproductive output. Organisms, on the other hand, can alter aspects of their physiology and life histories through phenotypic plasticity, as well as through genetic change in populations (selection). Researchers examining the effects of environmental variables frequently concentrate on individual components of fitness, although methods exist to combine these into a population-level estimate of average fitness, as the per capita rate of population growth for a set of identical individuals with a particular set of traits. Recent advances in energetic modeling have provided excellent data on energy intake and costs leading to growth, reproduction, and other life-history parameters; these in turn have consequences for survivorship at all life-history stages, and thus for fitness. Components of fitness alone (performance measures) are useful in determining organism response to changing conditions but are often not good predictors of fitness; they can differ from fitness in both form and magnitude, as demonstrated in our model. Here, we combine an energetics model for growth and allocation with a matrix model that calculates the population growth rate for a group of individuals with a particular set of traits. We use intertidal mussels as an example, because data exist for some of the important energetic and life-history parameters, and because there is a hypothesized energetic trade-off between byssus production (affecting survivorship) and energy used for growth and reproduction. The model shows exactly how strong this trade-off is in terms of overall fitness, and it illustrates conditions where fitness components are good predictors of actual fitness and cases where they are not. In addition, the model is used to examine the effects of environmental change on this trade-off, and on both fitness and individual fitness components.

  8. A matrix-based method of moments for fitting the multivariate random effects model for meta-analysis and meta-regression

    PubMed Central

    Jackson, Dan; White, Ian R; Riley, Richard D

    2013-01-01

    Multivariate meta-analysis is becoming more commonly used. Methods for fitting the multivariate random effects model include maximum likelihood, restricted maximum likelihood, Bayesian estimation and multivariate generalisations of the standard univariate method of moments. Here, we provide a new multivariate method of moments for estimating the between-study covariance matrix with the properties that (1) it allows for either complete or incomplete outcomes and (2) it allows for covariates through meta-regression. Further, for complete data, it is invariant to linear transformations. Our method reduces to the usual univariate method of moments, proposed by DerSimonian and Laird, in a single dimension. We illustrate our method and compare it with some of the alternatives using a simulation study and a real example. PMID:23401213
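
    The univariate DerSimonian-Laird estimator that the proposed method generalizes can be written in a few lines (illustrative implementation with made-up study data):

```python
# Univariate DerSimonian-Laird method of moments: estimate the between-study
# variance tau^2 from Cochran's Q, then pool with random-effects weights.
import numpy as np

def dersimonian_laird(y, v):
    """y: study effect estimates; v: within-study variances."""
    w = 1.0 / v
    y_bar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_bar) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)      # random-effects pooled mean
    return mu, tau2

y = np.array([0.12, 0.30, 0.25, 0.05, 0.40])      # made-up effect sizes
v = np.array([0.01, 0.02, 0.015, 0.01, 0.03])     # made-up variances
print(dersimonian_laird(y, v))
```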

  9. Preschool Psychopathology Reported by Parents in 23 Societies: Testing the Seven-Syndrome Model of the Child Behavior Checklist for Ages 1.5–5

    PubMed Central

    Ivanova, Masha Y.; Achenbach, Thomas M.; Rescorla, Leslie A.; Harder, Valerie S.; Ang, Rebecca P.; Bilenberg, Niels; Bjarnadottir, Gudrun; Capron, Christiane; De Pauw, Sarah S.W.; Dias, Pedro; Dobrean, Anca; Doepfner, Manfred; Duyme, Michele; Eapen, Valsamma; Erol, Nese; Esmaeili, Elaheh Mohammad; Ezpeleta, Lourdes; Frigerio, Alessandra; Gonçalves, Miguel M.; Gudmundsson, Halldor S.; Jeng, Suh-Fang; Jetishi, Pranvera; Jusiene, Roma; Kim, Young-Ah; Kristensen, Solvejg; Lecannelier, Felipe; Leung, Patrick W.L.; Liu, Jianghong; Montirosso, Rosario; Oh, Kyung Ja; Plueck, Julia; Pomalima, Rolando; Shahini, Mimoza; Silva, Jaime R.; Simsek, Zynep; Sourander, Andre; Valverde, Jose; Van Leeuwen, Karla G.; Woo, Bernardine S.C.; Wu, Yen-Tzu; Zubrick, Stephen R.; Verhulst, Frank C.

    2014-01-01

    Objective To test the fit of a seven-syndrome model to ratings of preschoolers' problems by parents in very diverse societies. Method Parents of 19,106 children 18 to 71 months of age from 23 societies in Asia, Australasia, Europe, the Middle East, and South America completed the Child Behavior Checklist for Ages 1.5–5 (CBCL/1.5–5). Confirmatory factor analyses were used to test the seven-syndrome model separately for each society. Results The primary model fit index, the root mean square error of approximation (RMSEA), indicated acceptable to good fit for each society. Although a six-syndrome model combining the Emotionally Reactive and Anxious/Depressed syndromes also fit the data for nine societies, it fit less well than the seven-syndrome model for seven of the nine societies. Other fit indices yielded less consistent results than the RMSEA. Conclusions The seven-syndrome model provides one way to capture patterns of children's problems that are manifested in ratings by parents from many societies. Clinicians working with preschoolers from these societies can thus assess and describe parents' ratings of behavioral, emotional, and social problems in terms of the seven syndromes. The results illustrate possibilities for culture–general taxonomic constructs of preschool psychopathology. Problems not captured by the CBCL/1.5–5 may form additional syndromes, and other syndrome models may also fit the data. PMID:21093771

  10. Left ventricle segmentation via two-layer level sets with circular shape constraint.

    PubMed

    Yang, Cong; Wu, Weiguo; Su, Yuanqi; Zhang, Shaoxiang

    2017-05-01

    This paper proposes a circular shape constraint and a novel two-layer level set method for the segmentation of the left ventricle (LV) from short-axis magnetic resonance images without training any shape models. Since the shape of the LV along the apex-base axis is close to a ring, we propose a circle fitting term in the level set framework to detect the endocardium. The circle fitting term penalizes deviation of the evolving contour from its fitted circle, and thereby copes quite well with the main difficulties of LV segmentation, especially the presence of the outflow tract in basal slices and the intensity overlap between the trabeculae and papillary muscles (TPM) and the myocardium. To extract the whole myocardium, the circle fitting term is incorporated into the two-layer level set method. The endocardium and epicardium are represented by two specified level contours of the level set function, which are evolved by an edge-based and a region-based active contour model, respectively. The proposed method has been quantitatively validated on the public data set from the MICCAI 2009 challenge on LV segmentation. Experimental results and comparisons with the state of the art demonstrate the accuracy and robustness of our method. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. TEMPy: a Python library for assessment of three-dimensional electron microscopy density fits.

    PubMed

    Farabella, Irene; Vasishtan, Daven; Joseph, Agnel Praveen; Pandurangan, Arun Prasad; Sahota, Harpal; Topf, Maya

    2015-08-01

    Three-dimensional electron microscopy is currently one of the most promising techniques used to study macromolecular assemblies. Rigid and flexible fitting of atomic models into density maps is often essential to gain further insights into the assemblies they represent. Currently, tools that facilitate the assessment of fitted atomic models and maps are needed. TEMPy (template and electron microscopy comparison using Python) is a toolkit designed for this purpose. The library includes a set of methods to assess density fits in intermediate-to-low resolution maps, both globally and locally. It also provides procedures for single-fit assessment, ensemble generation of fits, clustering, and multiple and consensus scoring, as well as plots and output files for visualization purposes to help the user in analysing rigid and flexible fits. The modular nature of TEMPy helps the integration of scoring and assessment of fits into large pipelines, making it a tool suitable for both novice and expert structural biologists.

  12. Videodensitometric Methods for Cardiac Output Measurements

    NASA Astrophysics Data System (ADS)

    Mischi, Massimo; Kalker, Ton; Korsten, Erik

    2003-12-01

    Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. The development of more stable ultrasound contrast agents (UCAs) is leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of the SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation, with determination coefficients larger than 0.95 and 0.99, respectively.

  13. Rank-based methods for modeling dependence between loss triangles.

    PubMed

    Côté, Marie-Pier; Genest, Christian; Abdallah, Anas

    2016-01-01

    In order to determine the risk capital for their aggregate portfolio, property and casualty insurance companies must fit a multivariate model to the loss triangle data relating to each of their lines of business. As an inadequate choice of dependence structure may have an undesirable effect on reserve estimation, a two-stage inference strategy is proposed in this paper to assist with model selection and validation. Generalized linear models are first fitted to the margins. Standardized residuals from these models are then linked through a copula selected and validated using rank-based methods. The approach is illustrated with data from six lines of business of a large Canadian insurance company for which two hierarchical dependence models are considered, i.e., a fully nested Archimedean copula structure and a copula-based risk aggregation model.

  14. Fuel consumption modeling in support of ATM environmental decision-making

    DOT National Transportation Integrated Search

    2009-07-01

    The FAA has recently updated the airport terminal area fuel consumption methods used in its environmental models. These methods are based on fitting manufacturers' fuel consumption data to empirical equations. The new fuel consumption metho...

  15. Identifying model error in metabolic flux analysis - a generalized least squares approach.

    PubMed

    Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G

    2016-09-13

    The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well-established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit beyond identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2- to 4-fold larger error (when measurement uncertainty is in the 5-10% range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.

  16. Maximum-likelihood curve-fitting scheme for experiments with pulsed lasers subject to intensity fluctuations.

    PubMed

    Metz, Thomas; Walewski, Joachim; Kaminski, Clemens F

    2003-03-20

    Evaluation schemes, e.g., least-squares fitting, are not universally applicable to all types of experiments. If an evaluation scheme is not derived from a measurement model that properly describes the experiment to be evaluated, poorer precision or accuracy than attainable from the measured data can result. We outline how statistical data evaluation schemes should be derived for any type of experiment, and we demonstrate the procedure for laser-spectroscopic experiments, in which pulse-to-pulse fluctuations of the laser power cause correlated variations of laser intensity and generated signal intensity. The method of maximum likelihood is demonstrated in the derivation of an appropriate fitting scheme for this type of experiment. Statistical data evaluation comprises the following steps. First, one has to provide a measurement model that considers the statistical variation of all variables involved. Second, an evaluation scheme applicable to this particular model has to be derived or provided. Third, the scheme has to be characterized in terms of accuracy and precision. A criterion for accepting an evaluation scheme is that its accuracy and precision be as close as possible to the theoretical limit. The fitting scheme derived for experiments with pulsed lasers is compared with well-established schemes by fitting power and rational functions. The precision is found to be as much as three times better than for simple least-squares fitting. Our scheme also suppresses the bias on the estimated model parameters that other methods may exhibit if they are applied in an uncritical fashion. We focus on experiments in nonlinear spectroscopy, but the fitting scheme derived is applicable in many scientific disciplines.

  17. Fitting membrane resistance along with action potential shape in cardiac myocytes improves convergence: application of a multi-objective parallel genetic algorithm.

    PubMed

    Kaur, Jaspreet; Nygren, Anders; Vigmond, Edward J

    2014-01-01

    Fitting parameter sets of non-linear equations in cardiac single cell ionic models to reproduce experimental behavior is a time-consuming process. The standard procedure is to adjust maximum channel conductances in ionic models to reproduce action potentials (APs) recorded in isolated cells. However, vastly different sets of parameters can produce similar APs. Furthermore, even with an excellent AP match for a single cell, tissue-level behaviour may be very different. We hypothesize that this uncertainty can be reduced by additionally fitting membrane resistance (Rm). To investigate the importance of Rm, we developed a genetic algorithm approach which incorporated Rm data calculated at a few points in the cycle, in addition to AP morphology. Performance was compared to a genetic algorithm using only AP morphology data. The optimal parameter sets and goodness of fit as computed by the different methods were compared. First, we fit an ionic model to itself, starting from a random parameter set. Next, we fit the AP of one ionic model to that of another. Finally, we fit an ionic model to experimentally recorded rabbit action potentials. Adding the extra objective (Rm at a few voltages) to the AP fit led to much better convergence. Typically, a smaller MSE (mean square error, defined as the average of the squared error between the target AP and the fitted AP) was achieved in one fifth of the number of generations compared to using only AP data. Importantly, the variability in fit parameters was also greatly reduced, with many parameters showing an order-of-magnitude decrease in variability. Adding Rm to the objective function improves the robustness of fitting, better preserving tissue-level behavior, and should be incorporated.

  18. Time series modeling and forecasting using memetic algorithms for regime-switching models.

    PubMed

    Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel

    2012-11-01

    In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive with models commonly used in the field.

  19. Mechanisms of complex network growth: Synthesis of the preferential attachment and fitness models

    NASA Astrophysics Data System (ADS)

    Golosovsky, Michael

    2018-06-01

    We analyze growth mechanisms of complex networks and focus on their validation by measurements. To this end we consider the equation ΔK = A(t)(K + K0)Δt, where K is the node's degree, ΔK is its increment, A(t) is the aging constant, and K0 is the initial attractivity. This equation has been commonly used to validate the preferential attachment mechanism. We show that this equation is undiscriminating and holds for the fitness model [Caldarelli et al., Phys. Rev. Lett. 89, 258702 (2002), 10.1103/PhysRevLett.89.258702] as well. In other words, the accepted method of validating the microscopic mechanism of network growth does not discriminate between the "rich-gets-richer" and "good-gets-richer" scenarios. This means that the growth mechanism of many natural complex networks can be based on the fitness model rather than on preferential attachment, as has been believed so far. The fitness model yields the long-sought explanation for the initial attractivity K0, an elusive parameter which was left unexplained within the framework of the preferential attachment model. We show that the initial attractivity is determined by the width of the fitness distribution. We also present a network growth model based on recursive search with memory and show that this model contains both the preferential attachment and the fitness models as extreme cases.
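
    The point that the growth equation is easy to satisfy can be checked by simulation: grow a network by preferential attachment, snapshot degrees at an intermediate time, and verify that the mean degree increment is roughly proportional to the degree. A sketch under our own simplifying assumptions (repeated-stub sampling, multi-edges avoided by resampling, small seed graph):

```python
# Preferential-attachment growth; the per-bin ratio mean(dK)/mean(K) should
# come out roughly constant, i.e. dK is proportional to K, as in the equation.
import numpy as np

rng = np.random.default_rng(5)
m, N0, N1 = 3, 5000, 10000
deg = {0: 2, 1: 2, 2: 2}
stubs = [0, 0, 1, 1, 2, 2]           # each node appears deg-many times

def grow(n_from, n_to):
    for new in range(n_from, n_to):
        chosen = set()
        while len(chosen) < m:       # degree-proportional target choice
            chosen.add(stubs[rng.integers(len(stubs))])
        for t in chosen:
            deg[t] += 1
            stubs.append(t)
        deg[new] = m
        stubs.extend([new] * m)

grow(3, N0)
snap = dict(deg)                     # degrees at the intermediate time
grow(N0, N1)

ks = np.array(list(snap.values()))
dks = np.array([deg[i] - snap[i] for i in snap])
for lo, hi in [(3, 9), (9, 27), (27, 81), (81, 243)]:
    sel = (ks >= lo) & (ks < hi)
    if sel.any():
        print(f"K in [{lo},{hi}): mean dK / mean K = {dks[sel].mean() / ks[sel].mean():.3f}")
```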

  20. An evaluation of the structural validity of the shoulder pain and disability index (SPADI) using the Rasch model.

    PubMed

    Jerosch-Herold, Christina; Chester, Rachel; Shepstone, Lee; Vincent, Joshua I; MacDermid, Joy C

    2018-02-01

    The shoulder pain and disability index (SPADI) has been extensively evaluated for its psychometric properties using classical test theory (CTT). The purpose of this study was to evaluate its structural validity using Rasch model analysis. Responses to the SPADI from 1030 patients referred for physiotherapy with shoulder pain and enrolled in a prospective cohort study were available for Rasch model analysis. Overall fit, individual person and item fit, response format, dependence, unidimensionality, targeting, reliability and differential item functioning (DIF) were examined. The SPADI pain subscale initially demonstrated misfit due to DIF by age and gender. After iterative analysis it showed good fit to the Rasch model with acceptable targeting and unidimensionality (overall fit Chi-square statistic 57.2, p = 0.1; mean item fit residual 0.19 (1.5) and mean person fit residual 0.44 (1.1); person separation index (PSI) of 0.83). The disability subscale, however, shows significant misfit due to uniform DIF, even after iterative analyses were used to explore different solutions to the sources of misfit (overall fit Chi-square statistic 57.2, p = 0.1; mean item fit residual 0.54 (1.26) and mean person fit residual 0.38 (1.0); PSI 0.84). Rasch model analysis of the SPADI has identified some strengths and limitations not previously observed using CTT methods. The SPADI should be treated as two separate subscales. The SPADI is a widely used outcome measure in clinical practice and research; however, the scores derived from it must be interpreted with caution. The pain subscale fits the Rasch model expectations well. The disability subscale does not fit the Rasch model, and its current format does not meet the criteria for true interval-level measurement required for use as a primary endpoint in clinical trials. Clinicians should therefore exercise caution when interpreting score changes on the disability subscale and attempt to compare their scores to age- and sex-stratified data.

  1. AssignFit: a program for simultaneous assignment and structure refinement from solid-state NMR spectra

    PubMed Central

    Tian, Ye; Schwieters, Charles D.; Opella, Stanley J.; Marassi, Francesca M.

    2011-01-01

    AssignFit is a computer program developed within the XPLOR-NIH package for the assignment of dipolar coupling (DC) and chemical shift anisotropy (CSA) restraints derived from the solid-state NMR spectra of protein samples with uniaxial order. The method is based on minimizing the difference between experimentally observed solid-state NMR spectra and the frequencies back calculated from a structural model. Starting with a structural model and a set of DC and CSA restraints grouped only by amino acid type, as would be obtained by selective isotopic labeling, AssignFit generates all of the possible assignment permutations and calculates the corresponding atomic coordinates oriented in the alignment frame, together with the associated set of NMR frequencies, which are then compared with the experimental data for best fit. Incorporation of AssignFit in a simulated annealing refinement cycle provides an approach for simultaneous assignment and structure refinement (SASR) of proteins from solid-state NMR orientation restraints. The methods are demonstrated with data from two integral membrane proteins, one α-helical and one β-barrel, embedded in phospholipid bilayer membranes. PMID:22036904

  2. Modeling method of time sequence model based grey system theory and application proceedings

    NASA Astrophysics Data System (ADS)

    Wei, Xuexia; Luo, Yaling; Zhang, Shiqiang

    2015-12-01

    This article presents a modeling method for the grey system GM(1,1) model based on information reuse and grey system theory. The method not only greatly enhances the fitting and predicting accuracy of the GM(1,1) model, but also retains the conventional approach's merit of simple computation. On this basis, we present a syphilis trend forecasting method based on information reuse and the grey system GM(1,1) model.
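
    For context, the standard GM(1,1) construction (cumulate, fit the whitening equation by least squares, predict, de-cumulate) takes only a few lines; this is the textbook baseline, not the authors' information-reuse variant:

```python
# GM(1,1) grey model: fit a short positive series and forecast a few steps.
import numpy as np

def gm11(x0, horizon=3):
    x1 = np.cumsum(x0)                            # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones_like(z1)])  # x0(k) = -a*z1(k) + b
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # de-accumulate

x0 = np.array([2.67, 3.13, 3.25, 3.36, 3.56, 3.72])  # toy series
print(gm11(x0))                                   # fitted values + 3 forecasts
```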

  3. A systematic way for the cost reduction of density fitting methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kállay, Mihály, E-mail: kallay@mail.bme.hu

    2014-12-28

    We present a simple approach for the reduction of the size of auxiliary basis sets used in methods exploiting the density fitting (resolution of identity) approximation for electron repulsion integrals. Starting from the singular value decomposition of three-center two-electron integrals, new auxiliary functions are constructed as linear combinations of the original fitting functions. The new functions, which we term natural auxiliary functions (NAFs), are analogous to the natural orbitals widely used for the cost reduction of correlation methods. The use of the NAF basis enables the systematic truncation of the fitting basis, and thereby potentially the reduction of the computational expenses of the methods, though the scaling with the system size is not altered. The performance of the new approach has been tested for several quantum chemical methods. It is demonstrated that the most pronounced gain in computational efficiency can be expected for iterative models which scale quadratically with the size of the fitting basis set, such as the direct random phase approximation. The approach also has the promise of accelerating local correlation methods, for which the processing of three-center Coulomb integrals is a bottleneck.

  4. Robust fitting for neuroreceptor mapping.

    PubMed

    Chang, Chung; Ogden, R Todd

    2009-03-15

    Among many other uses, positron emission tomography (PET) can be used in studies to estimate the density of a neuroreceptor at each location throughout the brain by measuring the concentration of a radiotracer over time and modeling its kinetics. There are a variety of kinetic models in common usage and these typically rely on nonlinear least-squares (LS) algorithms for parameter estimation. However, PET data often contain artifacts (such as uncorrected head motion) and so the assumptions on which the LS methods are based may be violated. Quantile regression (QR) provides a robust alternative to LS methods and has been used successfully in many applications. We consider fitting various kinetic models to PET data using QR and study the relative performance of the methods via simulation. A data adaptive method for choosing between LS and QR is proposed and the performance of this method is also studied.
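
    The flavor of the comparison can be reproduced with a toy one-compartment decay: least squares is pulled by artifact-like spikes, while the median (0.5-quantile) fit, the simplest instance of QR, is not. A sketch (synthetic data; Nelder-Mead handles the non-smooth absolute-error objective):

```python
# Least squares vs. least absolute deviations on a contaminated decay curve.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 40)
y = 5.0 * np.exp(-0.4 * t) + rng.normal(0, 0.05, t.size)
y[::10] += 1.5                                   # artifact-like spikes

def model(p):
    return p[0] * np.exp(-p[1] * t)

ls = minimize(lambda p: np.sum((y - model(p)) ** 2),
              [1.0, 1.0], method="Nelder-Mead").x
lad = minimize(lambda p: np.sum(np.abs(y - model(p))),
               [1.0, 1.0], method="Nelder-Mead").x
print("least squares:", ls)                      # pulled by the spikes
print("median (LAD) :", lad)                     # closer to the true (5.0, 0.4)
```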

  5. Energy landscape analysis of neuroimaging data

    NASA Astrophysics Data System (ADS)

    Ezaki, Takahiro; Watanabe, Takamitsu; Ohzeki, Masayuki; Masuda, Naoki

    2017-05-01

    Computational neuroscience models have been used for understanding neural dynamics in the brain and how they may be altered when physiological or other conditions change. We review and develop a data-driven approach to neuroimaging data called the energy landscape analysis. The methods are rooted in statistical physics theory, in particular the Ising model, also known as the (pairwise) maximum entropy model and Boltzmann machine. The methods have been applied to fitting electrophysiological data in neuroscience for a decade, but their use in neuroimaging data is still in its infancy. We first review the methods and discuss some algorithms and technical aspects. Then, we apply the methods to functional magnetic resonance imaging data recorded from healthy individuals to inspect the relationship between the accuracy of fitting, the size of the brain system to be analysed and the data length. This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'.
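
    For a handful of regions, the pairwise maximum entropy model can even be fit by exact gradient ascent on the likelihood, enumerating all 2^N states; a toy sketch of that idea (not the reviewed pipelines, which use approximate methods for larger systems):

```python
# Exact fit of a pairwise maximum entropy (Ising) model to binarized signals:
# adjust fields h and couplings J until model means/correlations match data.
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
N, T = 5, 2000
data = np.where(rng.random((T, N)) < 0.4, 1, -1)     # toy binarized signals

states = np.array(list(product([-1, 1], repeat=N)))  # all 2^N spin patterns
emp_m = data.mean(axis=0)                            # empirical means
emp_C = (data.T @ data) / T                          # empirical correlations

h, J = np.zeros(N), np.zeros((N, N))
for _ in range(2000):                                # gradient ascent
    E = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    m = p @ states                                   # model means
    C = np.einsum("s,si,sj->ij", p, states, states)  # model correlations
    h += 0.1 * (emp_m - m)
    dJ = 0.1 * (emp_C - C)
    np.fill_diagonal(dJ, 0.0)                        # no self-couplings
    J += dJ
```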

  6. Uncertainty quantification for optical model parameters

    DOE PAGES

    Lovell, A. E.; Nunes, F. M.; Sarich, J.; ...

    2017-02-21

    Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quantified. The purpose of our work is to investigate the uncertainties in nuclear reactions that result from fitting a given model to elastic-scattering data, as well as to study how these uncertainties propagate to the inelastic and transfer channels. We use statistical methods to determine a best fit and create corresponding 95% confidence bands. A simple model of the process is fit to elastic-scattering data and used to predict either inelastic or transfer cross sections. In this initial work, we assume that our model is correct, and the only uncertainties come from the variation of the fit parameters. Here, we study a number of reactions involving neutron and deuteron projectiles with energies in the range of 5–25 MeV/u, on targets with mass A=12–208. We investigate the correlations between the parameters in the fit. The case of deuterons on 12C is discussed in detail: the elastic-scattering fit and the prediction of 12C(d,p)13C transfer angular distributions, using both uncorrelated and correlated χ² minimization functions. The general features for all cases are compiled in a systematic manner to identify trends. This work shows that, in many cases, the correlated χ² functions (in comparison to the uncorrelated χ² functions) provide a more natural parameterization of the process. These correlated functions do, however, produce broader confidence bands. Further optimization may require improvement in the models themselves and/or more information included in the fit.

  7. A Case-Based Exploration of Task/Technology Fit in a Knowledge Management Context

    DTIC Science & Technology

    2008-03-01

  8. Limited-information goodness-of-fit testing of diagnostic classification item response models.

    PubMed

    Hansen, Mark; Cai, Li; Monroe, Scott; Li, Zhen

    2016-11-01

    Despite the growing popularity of diagnostic classification models (e.g., Rupp et al., 2010, Diagnostic measurement: theory, methods, and applications, Guilford Press, New York, NY) in educational and psychological measurement, methods for testing their absolute goodness of fit to real data remain relatively underdeveloped. For tests of reasonable length and realistic sample sizes, full-information test statistics such as Pearson's X² and the likelihood ratio statistic G² suffer from sparseness in the underlying contingency table from which they are computed. Recently, limited-information fit statistics such as Maydeu-Olivares and Joe's (2006, Psychometrika, 71, 713) M₂ have been found to be quite useful in testing the overall goodness of fit of item response theory models. In this study, we applied the M₂ statistic to diagnostic classification models. Through a series of simulation studies, we found that M₂ is well calibrated across a wide range of diagnostic model structures and is sensitive to certain misspecifications of the item model (e.g., fitting disjunctive models to data generated according to a conjunctive model), errors in the Q-matrix (adding or omitting paths, omitting a latent variable), and violations of local item independence due to unmodelled testlet effects. On the other hand, M₂ was largely insensitive to misspecifications in the distribution of higher-order latent dimensions and to the specification of an extraneous attribute. To complement the analyses of overall model goodness of fit using M₂, we investigated the utility of the Chen and Thissen (1997, J. Educ. Behav. Stat., 22, 265) local dependence statistic X²LD for characterizing sources of misfit, an important aspect of model appraisal often overlooked in favour of overall statements. The X²LD statistic was found to be slightly conservative (with Type I error rates consistently below the nominal level) but still useful in pinpointing the sources of misfit. Patterns of local dependence arising from specific model misspecifications are illustrated. Finally, we used the M₂ and X²LD statistics to evaluate a diagnostic model fit to data from the Trends in Mathematics and Science Study, drawing upon analyses previously conducted by Lee et al. (2011, IJT, 11, 144). © 2016 The British Psychological Society.

  9. IRT Model Selection Methods for Dichotomous Items

    ERIC Educational Resources Information Center

    Kang, Taehoon; Cohen, Allan S.

    2007-01-01

    Fit of the model to the data is important if the benefits of item response theory (IRT) are to be obtained. In this study, the authors compared model selection results using the likelihood ratio test, two information-based criteria, and two Bayesian methods. An example illustrated the potential for inconsistency in model selection depending on…

  10. A nonparametric smoothing method for assessing GEE models with longitudinal binary data.

    PubMed

    Lin, Kuo-Chin; Chen, Yi-Ju; Shyr, Yu

    2008-09-30

    Studies involving longitudinal binary responses are widely conducted in health and biomedical sciences research and are frequently analyzed by the generalized estimating equations (GEE) method. This article proposes an alternative goodness-of-fit test based on a nonparametric smoothing approach for assessing the adequacy of GEE-fitted models, which can be regarded as an extension of the goodness-of-fit test of le Cessie and van Houwelingen (Biometrics 1991; 47:1267-1282). The expectation and approximate variance of the proposed test statistic are derived. The asymptotic distribution of the proposed test statistic, in terms of a scaled chi-squared distribution, and the power performance of the proposed test are examined in simulation studies. The testing procedure is demonstrated on two real data sets. Copyright (c) 2008 John Wiley & Sons, Ltd.

  11. On the accuracy of models for predicting sound propagation in fitted rooms.

    PubMed

    Hodgson, M

    1990-08-01

    The objective of this article is to contribute to the evaluation of the accuracy and applicability of models for predicting sound propagation in fitted rooms such as factories, classrooms, and offices. The models studied are 1:50 scale models; the method-of-images models of Jovicic, Lindqvist, Hodgson, Kurze, and of Lemire and Nicolas; the empirical formula of Friberg; and Ondet and Barbry's ray-tracing model. Sound propagation predictions by the analytic models are compared with the results of sound propagation measurements in a 1:50 scale model and in a warehouse, both containing various densities of approximately isotropically distributed, rectangular-parallelepipedic fittings. The results indicate that the models of Friberg and of Lemire and Nicolas are fundamentally incorrect. While more generally applicable versions exist, the versions of the models of Jovicic and Kurze studied here are of limited applicability since they ignore vertical-wall reflections. The Hodgson and Lindqvist models appear to be accurate in certain limited cases. This preliminary study found the ray-tracing model of Ondet and Barbry to be the most accurate in all the cases studied. Furthermore, it has the necessary flexibility with respect to room geometry, surface-absorption distribution, and fitting distribution. It appears to be the model with the greatest applicability to fitted-room sound propagation prediction.

  12. Hierarchical animal movement models for population-level inference

    USGS Publications Warehouse

    Hooten, Mevin B.; Buderman, Frances E.; Brost, Brian M.; Hanks, Ephraim M.; Ivans, Jacob S.

    2016-01-01

    New methods for modeling animal movement based on telemetry data are developed regularly. With advances in telemetry capabilities, animal movement models are becoming increasingly sophisticated. Despite a need for population-level inference, animal movement models are still predominantly developed for individual-level inference. Most efforts to upscale the inference to the population level are either post hoc or complicated enough that only the developer can implement the model. Hierarchical Bayesian models provide an ideal platform for the development of population-level animal movement models but can be challenging to fit due to computational limitations or extensive tuning required. We propose a two-stage procedure for fitting hierarchical animal movement models to telemetry data. The two-stage approach is statistically rigorous and allows one to fit individual-level movement models separately, then resample them using a secondary MCMC algorithm. The primary advantages of the two-stage approach are that the first stage is easily parallelizable and the second stage is completely unsupervised, allowing for an automated fitting procedure in many cases. We demonstrate the two-stage procedure with two applications of animal movement models. The first application involves a spatial point process approach to modeling telemetry data, and the second involves a more complicated continuous-time discrete-space animal movement model. We fit these models to simulated data and real telemetry data arising from a population of monitored Canada lynx in Colorado, USA.

  13. NGMIX: Gaussian mixture models for 2D images

    NASA Astrophysics Data System (ADS)

    Sheldon, Erin

    2015-08-01

    NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
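
    The speed advantage rests on the fact that the convolution of two Gaussians is itself a Gaussian whose means and covariances add, so a mixture-by-mixture convolution is just a pairwise combination of components. The following toy 1D version (not NGMIX code) makes this explicit.

      import numpy as np

      # Each mixture is a list of (amplitude, mean, variance) triples.
      galaxy = [(0.6, 0.0, 1.0), (0.4, 0.5, 4.0)]   # toy galaxy profile
      psf    = [(0.7, 0.0, 0.3), (0.3, 0.0, 1.2)]   # toy PSF profile

      # Convolving Gaussians: amplitudes multiply, means and variances add.
      convolved = [(ag * ap, mg + mp, vg + vp)
                   for ag, mg, vg in galaxy
                   for ap, mp, vp in psf]

      def mixture_eval(mix, x):
          """Evaluate a 1D Gaussian mixture at points x."""
          x = np.asarray(x)
          return sum(a * np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)
                     for a, m, v in mix)

      x = np.linspace(-5, 5, 11)
      print(mixture_eval(convolved, x))   # PSF-convolved model, no FFT required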

  14. Adjusting the Adjusted X[superscript 2]/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    ERIC Educational Resources Information Center

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X[superscript 2]/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  15. Recommended test methods and pass/fail criteria for a respirator fit capability test of half-mask air-purifying respirators

    PubMed Central

    Zhuang, Ziqing; Bergman, Michael; Lei, Zhipeng; Niezgoda, George; Shaffer, Ronald

    2017-01-01

    This study assessed key test parameters and pass/fail criteria options for developing a respirator fit capability (RFC) test for half-mask air-purifying particulate respirators. Using a 25-subject test panel, benchmark RFC data were collected for 101 National Institute for Occupational Safety and Health-certified respirator models. These models were further grouped into 61 one-, two-, or three-size families. Fit testing was done using a PortaCount® Plus with N95-Companion accessory and an Occupational Safety and Health Administration-accepted quantitative fit test protocol. Three repeated tests (donnings) per subject/respirator model combination were performed. The panel passing rate (PPR) (number or percentage of the 25-subject panel achieving acceptable fit) was determined for each model using five alternative criteria for determining acceptable fit. When the 101 models are evaluated individually (i.e., not grouped by families), the percentages of models capable of fitting >75% (19/25 subjects) of the panel were 29% and 32% for subjects achieving a fit factor ≥100 for at least one of the first two donnings and at least one of three donnings, respectively. When the models are evaluated grouped into families and using >75% of panel subjects achieving a fit factor ≥100 for at least one of two donnings as the PPR pass/fail criterion, 48% of all models can pass. When >50% (13/25 subjects) of panel subjects was used as the PPR criterion, the percentage of passing models increased to 70%. Testing respirators grouped into families and evaluating the first two donnings for each of two respirator sizes provided the best balance between meeting end user expectations and creating a performance bar for manufacturers. Specifying the test criterion for a subject obtaining acceptable fit as achieving a fit factor ≥100 on at least one out of the two donnings is reasonable because a majority of existing respirator families can achieve a PPR of >50% using this criterion. The different test criteria can be considered by standards development organizations when developing standards. PMID:28278067

  16. Recommended test methods and pass/fail criteria for a respirator fit capability test of half-mask air-purifying respirators.

    PubMed

    Zhuang, Ziqing; Bergman, Michael; Lei, Zhipeng; Niezgoda, George; Shaffer, Ronald

    2017-06-01

    This study assessed key test parameters and pass/fail criteria options for developing a respirator fit capability (RFC) test for half-mask air-purifying particulate respirators. Using a 25-subject test panel, benchmark RFC data were collected for 101 National Institute for Occupational Safety and Health-certified respirator models. These models were further grouped into 61 one-, two-, or three-size families. Fit testing was done using a PortaCount® Plus with N95-Companion accessory and an Occupational Safety and Health Administration-accepted quantitative fit test protocol. Three repeated tests (donnings) per subject/respirator model combination were performed. The panel passing rate (PPR) (number or percentage of the 25-subject panel achieving acceptable fit) was determined for each model using five alternative criteria for determining acceptable fit. When the 101 models are evaluated individually (i.e., not grouped by families), the percentages of models capable of fitting >75% (19/25 subjects) of the panel were 29% and 32% for subjects achieving a fit factor ≥100 for at least one of the first two donnings and at least one of three donnings, respectively. When the models are evaluated grouped into families and using >75% of panel subjects achieving a fit factor ≥100 for at least one of two donnings as the PPR pass/fail criterion, 48% of all models can pass. When >50% (13/25 subjects) of panel subjects was used as the PPR criterion, the percentage of passing models increased to 70%. Testing respirators grouped into families and evaluating the first two donnings for each of two respirator sizes provided the best balance between meeting end user expectations and creating a performance bar for manufacturers. Specifying the test criterion for a subject obtaining acceptable fit as achieving a fit factor ≥100 on at least one out of the two donnings is reasonable because a majority of existing respirator families can achieve a PPR of >50% using this criterion. The different test criteria can be considered by standards development organizations when developing standards.
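
    The PPR bookkeeping described above is straightforward to script. The sketch below uses synthetic fit factors and the illustrative thresholds quoted in the abstract: a subject counts as fitted if either of the first two donnings reaches a fit factor of 100, and the model passes if more than 50% of the panel is fitted.

      import numpy as np

      rng = np.random.default_rng(2)
      n_subjects, n_donnings = 25, 3
      # Synthetic fit factors for one respirator model (log-normal spread).
      fit_factors = rng.lognormal(mean=4.5, sigma=1.0, size=(n_subjects, n_donnings))

      FIT_FACTOR_PASS = 100          # acceptable-fit threshold from the protocol
      PANEL_CRITERION = 0.50         # >50% of the 25-subject panel must pass

      # Subject passes if at least one of the first two donnings reaches 100.
      subject_pass = (fit_factors[:, :2] >= FIT_FACTOR_PASS).any(axis=1)
      ppr = subject_pass.mean()
      print(f"panel passing rate = {ppr:.0%}, model passes: {ppr > PANEL_CRITERION}")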

  17. Improved battery parameter estimation method considering operating scenarios for HEV/EV applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jufeng; Xia, Bing; Shang, Yunlong

    This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experimental results validated the feasibility of the developed estimation method.

  18. Improved battery parameter estimation method considering operating scenarios for HEV/EV applications

    DOE PAGES

    Yang, Jufeng; Xia, Bing; Shang, Yunlong; ...

    2016-12-22

    This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experimental results validated the feasibility of the developed estimation method.
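
    At its core, the rest-period identification step amounts to fitting an exponential voltage-relaxation model to the measured terminal voltage. A minimal sketch with scipy follows, using synthetic data and a single RC branch; the authors' two-scenario procedure and spectrum-based window selection are not reproduced.

      import numpy as np
      from scipy.optimize import curve_fit

      def rest_voltage(t, v_inf, a, tau):
          """Voltage relaxation of one RC branch after current interruption."""
          return v_inf + a * np.exp(-t / tau)

      t = np.linspace(0, 600, 121)                      # 10 min rest period, s
      true = rest_voltage(t, 3.65, 0.08, 90.0)          # OCV 3.65 V, tau = 90 s
      v = true + np.random.default_rng(3).normal(0, 1e-3, t.size)

      popt, pcov = curve_fit(rest_voltage, t, v, p0=[3.6, 0.1, 60.0])
      v_inf, a, tau = popt
      print(f"OCV = {v_inf:.4f} V, amplitude = {a*1e3:.1f} mV, tau = {tau:.1f} s")
      # The fitted time constant gives R*C for that branch; the abstract's point
      # about unsaturated RC networks concerns choosing initial-voltage
      # expressions consistent with the preceding load history.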

  19. Using a second‐order differential model to fit data without baselines in protein isothermal chemical denaturation

    PubMed Central

    Tang, Chuanning; Lew, Scott

    2016-01-01

    In vitro protein stability studies are commonly conducted via thermal or chemical denaturation/renaturation of protein. Conventional data analyses of protein unfolding/(re)folding require well‐defined pre‐ and post‐transition baselines to evaluate the Gibbs free‐energy change associated with the unfolding/(re)folding. This evaluation becomes problematic when there are insufficient data for determining the pre‐ or post‐transition baselines. In this study, fitting of such partial data obtained in protein chemical denaturation is established by introducing second‐order differential (SOD) analysis to overcome the limitations of the conventional fitting method. By reducing the number of baseline‐related fitting parameters, the SOD analysis can successfully fit incomplete chemical denaturation data sets, in high agreement with the conventional evaluation of the equivalent complete data, where the conventional fitting fails. This SOD fitting of abbreviated isothermal chemical denaturation thus extends data analysis to the insufficient data sets encountered in the two prevalent protein stability studies. PMID:26757366

  20. Protein Dynamics from NMR: The Slowly Relaxing Local Structure Analysis Compared with Model-Free Analysis

    PubMed Central

    Meirovitch, Eva; Shapiro, Yury E.; Polimeno, Antonino; Freed, Jack H.

    2009-01-01

    ¹⁵N-¹H spin relaxation is a powerful method for deriving information on protein dynamics. The traditional method of data analysis is model-free (MF), where the global and local N-H motions are independent and the local geometry is simplified. The common MF analysis consists of fitting single-field data. The results are typically field-dependent, and multi-field data cannot be fit with standard fitting schemes. Cases where known functional dynamics has not been detected by MF were identified by us and others. Recently, we applied the Slowly Relaxing Local Structure (SRLS) approach, which accounts rigorously for mode-mixing and general features of local geometry, to spin relaxation in proteins. SRLS was shown to yield MF in appropriate asymptotic limits. We found that the experimental spectral density corresponds quite well to the SRLS spectral density. The MF formulae are often used outside of their validity ranges, allowing small data sets to be force-fitted with good statistics but inaccurate best-fit parameters. This paper focuses on the mechanism of force-fitting and its implications. It is shown that MF force-fits the experimental data because mode-mixing, the rhombic symmetry of the local ordering and general features of local geometry are not accounted for. Combined multi-field multi-temperature data analyzed by MF may lead to the detection of incorrect phenomena, while conformational entropy derived from MF order parameters may be highly inaccurate. On the other hand, fitting to more appropriate models can yield consistent, physically insightful information. This requires that the complexity of the theoretical spectral densities matches the integrity of the experimental data. As shown herein, the SRLS densities comply with this requirement. PMID:16821820

  1. Pseudo-second order models for the adsorption of safranin onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth

    2007-04-02

    Kinetic experiments were carried out for the sorption of safranin onto activated carbon particles. The kinetic data were fitted to the pseudo-second order models of Ho, Sobkowsk and Czerwinski, Blanchard et al. and Ritchie by linear and non-linear regression methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second order rate kinetic expressions. Both linear and non-linear regression showed that the pseudo-second order models of Sobkowsk and Czerwinski and of Ritchie were the same. Non-linear regression analysis showed that both Blanchard et al. and Ho have similar ideas on the pseudo-second order model but with different assumptions. The best fit of experimental data by Ho's pseudo-second order expression under both linear and non-linear regression showed that Ho's pseudo-second order model was a better kinetic expression than the other pseudo-second order kinetic expressions.
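
    The linear-versus-non-linear comparison at the heart of this record is easy to reproduce. The sketch below (synthetic uptake data; the type-1 linearization of Ho's model) fits the same pseudo-second-order expression both ways; because the linearization transforms the error structure, the two routes generally return different parameter estimates.

      import numpy as np
      from scipy import optimize, stats

      def pso(t, qe, k):
          """Ho's pseudo-second-order uptake: q(t) = k*qe^2*t / (1 + k*qe*t)."""
          return k * qe**2 * t / (1.0 + k * qe * t)

      t = np.array([5, 10, 20, 30, 60, 90, 120.0])             # min
      q = pso(t, qe=25.0, k=0.004)
      q = q + np.random.default_rng(4).normal(0, 0.3, t.size)  # noisy uptake, mg/g

      # Non-linear regression directly on q(t).
      (qe_nl, k_nl), _ = optimize.curve_fit(pso, t, q, p0=[20.0, 0.01])

      # Type-1 linearization: t/q = 1/(k*qe^2) + t/qe, fitted by ordinary OLS.
      res = stats.linregress(t, t / q)
      qe_lin = 1.0 / res.slope
      k_lin = res.slope**2 / res.intercept
      print(f"non-linear: qe={qe_nl:.2f}, k={k_nl:.5f}")
      print(f"linearized: qe={qe_lin:.2f}, k={k_lin:.5f}")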

  2. Parameterizing sorption isotherms using a hybrid global-local fitting procedure.

    PubMed

    Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J

    2017-05-01

    Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of state-of-the-art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization followed by Powell's derivative free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions - where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms. Copyright © 2017 Elsevier B.V. All rights reserved.
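
    A hybrid global-local fit of the kind described can be sketched with scipy, substituting differential evolution for the paper's particle swarm stage and a bounded least-squares polish for the Powell and Gauss-Marquardt-Levenberg stages. The Langmuir isotherm, data values, and weights below are illustrative only.

      import numpy as np
      from scipy.optimize import differential_evolution, least_squares

      c = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 50.0])   # aqueous concentration
      q = np.array([0.9, 1.6, 4.8, 7.1, 14.2, 17.0, 21.5])   # sorbed concentration (toy)
      w = 1.0 / np.maximum(q, 1e-6)                          # example residual weights

      def langmuir(params, c):
          qmax, b = params
          return qmax * b * c / (1.0 + b * c)

      def wssr(params):
          """Weighted sum of squared residuals, the fitness function."""
          return np.sum((w * (q - langmuir(params, c))) ** 2)

      # Stage 1: global search over wide bounds.
      glob = differential_evolution(wssr, bounds=[(1e-3, 100.0), (1e-4, 10.0)],
                                    seed=5, tol=1e-10)
      # Stage 2: local polish starting from the global optimum.
      loc = least_squares(lambda p: w * (q - langmuir(p, c)), glob.x,
                          bounds=([1e-3, 1e-4], [100.0, 10.0]))
      print("global WSSR:", glob.fun, "-> polished WSSR:", wssr(loc.x))
      print("qmax, b =", loc.x)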

  3. Examining the Process of Responding to Circumplex Scales of Interpersonal Values Items: Should Ideal Point Scoring Methods Be Considered?

    PubMed

    Ling, Ying; Zhang, Minqiang; Locke, Kenneth D; Li, Guangming; Li, Zonglong

    2016-01-01

    The Circumplex Scales of Interpersonal Values (CSIV) is a 64-item self-report measure of goals from each octant of the interpersonal circumplex. We used item response theory methods to compare whether dominance models or ideal point models best described how people respond to CSIV items. Specifically, we fit a polytomous dominance model called the generalized partial credit model and an ideal point model of similar complexity called the generalized graded unfolding model to the responses of 1,893 college students. The results of both graphical comparisons of item characteristic curves and statistical comparisons of model fit suggested that an ideal point model best describes the process of responding to CSIV items. The different models produced different rank orderings of high-scoring respondents, but overall the models did not differ in their prediction of criterion variables (agentic and communal interpersonal traits and implicit motives).

  4. Structural Equation Models in a Redundancy Analysis Framework With Covariates.

    PubMed

    Lovaglio, Pietro Giorgio; Vittadini, Giorgio

    2014-01-01

    A recent method to specify and fit structural equation modeling in the Redundancy Analysis framework based on so-called Extended Redundancy Analysis (ERA) has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA we propose a simulation study of small samples. Moreover, we propose an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.

  5. Training models of anatomic shape variability

    PubMed Central

    Merck, Derek; Tracton, Gregg; Saboo, Rohit; Levy, Joshua; Chaney, Edward; Pizer, Stephen; Joshi, Sarang

    2008-01-01

    Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, nonself-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same-patient male pelvis and head-and-neck images, and cross-patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT/ART. PMID:18777919

  6. Association of Fitness With Incident Dyslipidemias Over 25 Years in the Coronary Artery Risk Development in Young Adults Study

    PubMed Central

    Sarzynski, Mark A.; Schuna, John M.; Carnethon, Mercedes R.; Jacobs, David R.; Lewis, Cora E.; Quesenberry, Charles P.; Sidney, Stephen; Schreiner, Pamela J.; Sternfeld, Barbara

    2015-01-01

    Introduction Few studies have examined the longitudinal associations of fitness or changes in fitness with the risk of developing dyslipidemias. This study examined the associations of (1) baseline fitness with 25-year dyslipidemia incidence and (2) 20-year fitness change with dyslipidemia development in middle age in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Methods Multivariable Cox proportional hazards regression models were used to test the association of baseline fitness (1985–1986) with dyslipidemia incidence over 25 years (2010–2011) in CARDIA (N=4,898). Modified Poisson regression models were used to examine the association of 20-year change in fitness with dyslipidemia incidence between Years 20 and 25 (n=2,487). Data were analyzed in June 2014 and February 2015. Results In adjusted models, the risk of incident low high-density lipoprotein cholesterol (HDL-C), high triglycerides, and high low-density lipoprotein cholesterol (LDL-C) was significantly lower, by 9%, 16%, and 14%, respectively, for each 2.0-minute increase in baseline treadmill endurance. After additional adjustment for baseline trait level, the associations remained significant for incident high triglycerides and high LDL-C in the total population and for incident high triglycerides in both men and women. In race-stratified models, these associations appeared to be limited to whites. In adjusted models, change in fitness did not predict 5-year incidence of dyslipidemias, whereas baseline fitness significantly predicted 5-year incidence of high triglycerides. Conclusions Our findings demonstrate the importance of cardiorespiratory fitness in young adulthood as a risk factor for developing dyslipidemias, particularly high triglycerides, during the transition to middle age. PMID:26165197

  7. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    NASA Astrophysics Data System (ADS)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  8. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    PubMed

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  9. Fitting the curve in Excel®: Systematic curve fitting of laboratory and remotely sensed planetary spectra

    NASA Astrophysics Data System (ADS)

    McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.

    2017-03-01

    Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics are reported, such as band centres, the discussion of exactly how the metrics were derived, or whether any systematic curve fitting was performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorate many of the unknowns one may encounter when using black-box commercial software. The provided framework is a comprehensive record of the curve-fitting parameters used and the derived metrics, and is intended as an example of a format for dissemination when curve fitting data.

  10. Perceived sports competence mediates the relationship between childhood motor skill proficiency and adolescent physical activity and fitness: a longitudinal assessment

    PubMed Central

    Barnett, Lisa M; Morgan, Philip J; van Beurden, Eric; Beard, John R

    2008-01-01

    Background The purpose of this paper was to investigate whether perceived sports competence mediates the relationship between childhood motor skill proficiency and subsequent adolescent physical activity and fitness. Methods In 2000, children's motor skill proficiency was assessed as part of a school-based physical activity intervention. In 2006/07, participants were followed up as part of the Physical Activity and Skills Study and completed assessments for perceived sports competence (Physical Self-Perception Profile), physical activity (Adolescent Physical Activity Recall Questionnaire) and cardiorespiratory fitness (Multistage Fitness Test). Structural equation modelling techniques were used to determine whether perceived sports competence mediated between childhood object control skill proficiency (composite score of kick, catch and overhand throw), and subsequent adolescent self-reported time in moderate-to-vigorous physical activity and cardiorespiratory fitness. Results Of 928 original intervention participants, 481 were located in 28 schools and 276 (57%) were assessed with at least one follow-up measure. Slightly more than half were female (52.4%) with a mean age of 16.4 years (range 14.2 to 18.3 yrs). Relevant assessments were completed by 250 (90.6%) students for the Physical Activity Model and 227 (82.3%) for the Fitness Model. Both hypothesised mediation models had a good fit to the observed data, with the Physical Activity Model accounting for 18% (R2 = 0.18) of physical activity variance and the Fitness Model accounting for 30% (R2 = 0.30) of fitness variance. Sex did not act as a moderator in either model. Conclusion Developing a high perceived sports competence through object control skill development in childhood is important for both boys and girls in determining adolescent physical activity participation and fitness. Our findings highlight the need for interventions to target and improve the perceived sports competence of youth. PMID:18687148

  11. Potential energy surface fitting by a statistically localized, permutationally invariant, local interpolating moving least squares method for the many-body potential: Method and application to N{sub 4}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V., E-mail: truhlar@umn.edu, E-mail: candler@aem.umn.edu

    2014-02-07

    Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N{sub 4}. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.

  12. Statistical modelling for recurrent events: an application to sports injuries

    PubMed Central

    Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F

    2014-01-01

    Background Injuries are often recurrent, with subsequent injuries influenced by previous occurrences and hence correlation between events needs to be taken into account when analysing such data. Objective This paper compares five different survival models (Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Methods Empirical evaluation and comparison of different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. Results The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) players had more than 1 injury and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to WLW-TT and PWP-GT models. Conclusions Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies involving modelling of recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury. PMID:22872683

  13. SU-E-I-07: Response Characteristics and Signal Conversion Modeling of KV Flat-Panel Detector in Cone Beam CT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yu; Cao, Ruifen; Pei, Xi

    2015-06-15

    Purpose: The flat-panel detector response characteristics are investigated to optimize the scanning parameters with respect to image quality and reduced radiation dose. A signal conversion model is also established to predict tumor shape and physical thickness changes. Methods: With the ELEKTA XVI system, planar images of a 10 cm water phantom were obtained under different image acquisition conditions, including tube voltage, electric current, exposure time and number of frames. The averaged responses of a square area in the center were analyzed using Origin8.0. The response characteristics for each scanning parameter were depicted by different fitting types. The transmission measured for 10 cm of water was compared to Monte Carlo simulation. Using the quadratic calibration method, a series of images of variable-thickness water phantoms was acquired to derive the signal conversion model. A 20 cm wedge water phantom with 2 cm step thickness was used to verify the model. Finally, the stability and reproducibility of the model were explored over a four-week period. Results: The gray values of the image center all decreased with the increase of the different image acquisition parameter presets. The fitting types adopted were linear fitting, quadratic polynomial fitting, Gauss fitting and logarithmic fitting, with fitting R-Square values of 0.992, 0.995, 0.997 and 0.996, respectively. For the 10 cm water phantom, the measured transmission showed better uniformity than the Monte Carlo simulation. The wedge phantom experiment showed that the prediction error for radiological thickness changes was in the range of (-4 mm, 5 mm). The signal conversion model remained consistent over a period of four weeks. Conclusion: The flat-panel response decreases with the increase of the different scanning parameters. The preferred scanning parameter combination was 100 kV, 10 mA, 10 ms, 15 frames. It is suggested that the signal conversion model could effectively be used for prediction of tumor shape change and radiological thickness. Supported by National Natural Science Foundation of China (81101132, 11305203) and Natural Science Foundation of Anhui Province (11040606Q55, 1308085QH138)
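
    A quadratic calibration between detector signal and water-equivalent thickness of the kind described can be sketched with numpy. The gray values below are synthetic stand-ins; the actual XVI calibration data are not reproduced.

      import numpy as np

      thickness = np.arange(0.0, 22.0, 2.0)                 # cm of water
      rng = np.random.default_rng(6)
      # Toy log-signal: exponential attenuation plus mild beam-hardening curvature.
      log_signal = 9.0 - 0.18 * thickness + 0.002 * thickness**2
      log_signal += rng.normal(0, 0.01, thickness.size)

      # Quadratic signal-conversion model fitted by least squares.
      coeffs = np.polyfit(log_signal, thickness, deg=2)
      to_thickness = np.poly1d(coeffs)

      measured = 9.0 - 0.18 * 13.0 + 0.002 * 13.0**2        # pixel value on a new image
      print(f"predicted radiological thickness: {to_thickness(measured):.2f} cm")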

  14. glmnetLRC f/k/a lrc package: Logistic Regression Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-06-09

    Methods for fitting and predicting logistic regression classifiers (LRC) with an arbitrary loss function using elastic net or best subsets. This package adds model fitting features to the existing glmnet and bestglm R packages. This package was created to perform the analyses described in Amidan BG, Orton DJ, LaMarche BL, et al. 2014. Signatures for Mass Spectrometry Data Quality. Journal of Proteome Research. 13(4), 2215-2222. It makes the model fitting available in the glmnet and bestglm packages more general by identifying optimal model parameters via cross validation with a customizable loss function. It also identifies the optimal threshold for binary classification.

  15. Aeroelastic modeling for the FIT (Functional Integration Technology) team F/A-18 simulation

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Wieseman, Carol D.

    1989-01-01

    As part of Langley Research Center's commitment to developing multidisciplinary integration methods to improve aerospace systems, the Functional Integration Technology (FIT) team was established to perform dynamics integration research using an existing aircraft configuration, the F/A-18. An essential part of this effort has been the development of a comprehensive simulation modeling capability that includes structural, control, and propulsion dynamics as well as steady and unsteady aerodynamics. The structural and unsteady aerodynamics contributions come from an aeroelastic model. Some details of the aeroelastic modeling done for the Functional Integration Technology (FIT) team research are presented. Particular attention is given to work done in the area of correction factors to unsteady aerodynamics data.

  16. Convex Regression with Interpretable Sharp Partitions

    PubMed Central

    Petersen, Ashley; Simon, Noah; Witten, Daniela

    2016-01-01

    We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set. PMID:27635120

  17. A method to characterize average cervical spine ligament response based on raw data sets for implementation into injury biomechanics models.

    PubMed

    Mattucci, Stephen F E; Cronin, Duane S

    2015-01-01

    Experimental testing on cervical spine ligaments provides important data for advanced numerical modeling and injury prediction; however, accurate characterization of individual ligament response and determination of average mechanical properties for specific ligaments has not been adequately addressed in the literature. Existing methods are limited by a number of arbitrary choices made during the curve fits that often misrepresent the characteristic shape response of the ligaments, which is important for incorporation into numerical models to produce a biofidelic response. A method was developed to represent the mechanical properties of individual ligaments using a piece-wise curve fit with first derivative continuity between adjacent regions. The method was applied to published data for cervical spine ligaments and preserved the shape response (toe, linear, and traumatic regions) up to failure, for strain rates of 0.5 s⁻¹, 20 s⁻¹, and 150-250 s⁻¹, to determine the average force-displacement curves. Individual ligament coefficients of determination were 0.989 to 1.000, demonstrating excellent fit. This study produced a novel method in which a set of experimental ligament material property data exhibiting scatter was fit using a characteristic curve approach with a toe, linear, and traumatic region, as often observed in ligaments and tendons, and could be applied to other biological material data with a similar characteristic shape. The resultant average cervical spine ligament curves provide an accurate representation of the raw test data and the expected material property effects corresponding to varying deformation rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
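
    A minimal version of a piecewise fit with first-derivative continuity can be written so that continuity holds by construction. The sketch below uses one illustrative parameterization (quadratic toe joined to its tangent line) and synthetic data; the published method covers more regions, including the traumatic region.

      import numpy as np
      from scipy.optimize import curve_fit

      def toe_linear(d, a, d0):
          """Quadratic toe F = a*d^2 for d < d0, then the tangent line beyond d0.
          Value and first derivative are continuous at d0 by construction."""
          d = np.asarray(d)
          toe = a * d**2
          lin = a * d0**2 + 2.0 * a * d0 * (d - d0)
          return np.where(d < d0, toe, lin)

      disp = np.linspace(0, 6, 40)                          # displacement, mm
      rng = np.random.default_rng(7)
      force = toe_linear(disp, a=2.0, d0=1.5) + rng.normal(0, 0.5, disp.size)

      (a_fit, d0_fit), _ = curve_fit(toe_linear, disp, force, p0=[1.0, 1.0])
      print(f"toe coefficient a = {a_fit:.2f}, toe-to-linear transition d0 = {d0_fit:.2f} mm")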

  18. Optimization Methods in Sherpa

    NASA Astrophysics Data System (ADS)

    Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.

    2009-09-01

    Forward fitting is a standard technique used to model X-ray data. A statistic, usually weighted chi^2 or Poisson likelihood (e.g. Cash), is minimized in the fitting process to obtain a set of best model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g. an absorbed power law). Minimization is not trivial in such a setting, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually focused on specific functions. Sherpa, however, designed as a general fitting and modeling application, requires very robust optimization methods that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: the Levenberg-Marquardt algorithm was obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and the Nelder-Mead simplex method was implemented in-house based on variations of the algorithm described in the literature. A global-search Monte Carlo method was implemented following the differential evolution algorithm presented by Storn and Price (1997). We present the methods in Sherpa and discuss their usage cases, focusing on the application to Chandra data with both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
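
    The local-versus-global distinction is easy to demonstrate with scipy stand-ins for the same algorithm families (a Nelder-Mead simplex and Storn-Price differential evolution); this is not Sherpa's implementation, and the multimodal surface below is a toy statistic.

      import numpy as np
      from scipy.optimize import minimize, differential_evolution

      def multimodal(p):
          """A simple multimodal surface with many local minima."""
          x, y = p
          return (x**2 + y**2) / 20.0 - np.cos(x) * np.cos(y / np.sqrt(2)) + 1.0

      # Local simplex search: the result depends strongly on the starting point.
      local = minimize(multimodal, x0=[6.0, 6.0], method="Nelder-Mead")

      # Global differential-evolution search over a bounded box.
      glob = differential_evolution(multimodal, bounds=[(-10, 10), (-10, 10)], seed=8)

      print("Nelder-Mead from (6,6):", local.x, local.fun)
      print("differential evolution:", glob.x, glob.fun)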

  19. A New Computational Method to Fit the Weighted Euclidean Distance Model.

    ERIC Educational Resources Information Center

    De Leeuw, Jan; Pruzansky, Sandra

    1978-01-01

    A computational method for weighted euclidean distance scaling (a method of multidimensional scaling) which combines aspects of an "analytic" solution with an approach using loss functions is presented. (Author/JKS)

  20. Self-consistent Bulge/Disk/Halo Galaxy Dynamical Modeling Using Integral Field Kinematics

    NASA Astrophysics Data System (ADS)

    Taranu, D. S.; Obreschkow, D.; Dubinski, J. J.; Fogarty, L. M. R.; van de Sande, J.; Catinella, B.; Cortese, L.; Moffett, A.; Robotham, A. S. G.; Allen, J. T.; Bland-Hawthorn, J.; Bryant, J. J.; Colless, M.; Croom, S. M.; D'Eugenio, F.; Davies, R. L.; Drinkwater, M. J.; Driver, S. P.; Goodwin, M.; Konstantopoulos, I. S.; Lawrence, J. S.; López-Sánchez, Á. R.; Lorente, N. P. F.; Medling, A. M.; Mould, J. R.; Owers, M. S.; Power, C.; Richards, S. N.; Tonini, C.

    2017-11-01

    We introduce a method for modeling disk galaxies designed to take full advantage of data from integral field spectroscopy (IFS). The method fits equilibrium models to simultaneously reproduce the surface brightness, rotation, and velocity dispersion profiles of a galaxy. The models are fully self-consistent 6D distribution functions for a galaxy with a Sérsic profile stellar bulge, exponential disk, and parametric dark-matter halo, generated by an updated version of GalactICS. By creating realistic flux-weighted maps of the kinematic moments (flux, mean velocity, and dispersion), we simultaneously fit photometric and spectroscopic data using both maximum-likelihood and Bayesian (MCMC) techniques. We apply the method to a GAMA spiral galaxy (G79635) with kinematics from the SAMI Galaxy Survey and deep g- and r-band photometry from the VST-KiDS survey, comparing parameter constraints with those from traditional 2D bulge-disk decomposition. Our method returns broadly consistent results for shared parameters while constraining the mass-to-light ratios of stellar components and reproducing the H I-inferred circular velocity well beyond the limits of the SAMI data. Although the method is tailored for fitting integral field kinematic data, it can use other dynamical constraints like central fiber dispersions and H I circular velocities, and is well-suited for modeling galaxies with a combination of deep imaging and H I and/or optical spectra (resolved or otherwise). Our implementation (MagRite) is computationally efficient and can generate well-resolved models and kinematic maps in under a minute on modern processors.

  1. Multi-Gaussian fitting for pulse waveform using Weighted Least Squares and multi-criteria decision making method.

    PubMed

    Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan

    2013-11-01

    Analysis of the pulse waveform is a low-cost, non-invasive method for obtaining vital information related to the condition of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. All of these methods decompose the single-period pulse waveform into a constant number (such as 3, 4 or 5) of individual waves. Furthermore, those methods do not pay much attention to the estimation error of the key points in the pulse waveform, even though the assessment of human vascular conditions depends on the positions of these key points. In this paper, we propose a Multi-Gaussian (MG) model to fit real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method, and the optimized weight values corresponding to different sampling points are selected using the Multi-Criteria Decision Making (MCDM) method. The performance of the MG model and the WLS method has been evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0% and the estimation accuracy for the key points was satisfactory, demonstrating that our proposed method is effective in compressing, synthesizing and analyzing pulse waveforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
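
    The core of such a decomposition, fitting a sum of Gaussian waves with per-sample weights, can be sketched with scipy's curve_fit. The waveform is synthetic, and fixed illustrative weights stand in for the paper's MCDM weight selection.

      import numpy as np
      from scipy.optimize import curve_fit

      def multi_gaussian(t, *params):
          """Sum of Gaussian waves; params = (a1, mu1, s1, a2, mu2, s2, ...)."""
          t = np.asarray(t)
          out = np.zeros_like(t, dtype=float)
          for a, mu, s in zip(params[0::3], params[1::3], params[2::3]):
              out += a * np.exp(-0.5 * ((t - mu) / s) ** 2)
          return out

      t = np.linspace(0, 1, 200)                       # one pulse period
      truth = multi_gaussian(t, 1.0, 0.20, 0.05, 0.5, 0.45, 0.08, 0.3, 0.70, 0.10)
      y = truth + np.random.default_rng(9).normal(0, 0.01, t.size)

      weights = np.where(t < 0.5, 0.005, 0.02)         # emphasize the early systolic part
      p0 = [0.8, 0.2, 0.05, 0.4, 0.5, 0.08, 0.2, 0.7, 0.1]
      popt, _ = curve_fit(multi_gaussian, t, y, p0=p0, sigma=weights)

      resid = y - multi_gaussian(t, *popt)
      nrmse = np.sqrt(np.mean(resid**2)) / (y.max() - y.min())
      print(f"NRMSE = {nrmse:.2%}")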

  2. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012

    PubMed Central

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

    Background The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for heath planning and management. Methods In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) was used to fit a univariate time series model of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). Results The syphilis incidence rates have increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed highest goodness-of-fit results with the ARIMA(0,0,1)×(0,1,1) model. Conclusion Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance than the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis. PMID:26901682
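
    The best-fitting univariate specification reported, ARIMA(0,0,1)×(0,1,1) with a 12-month season, can be fitted in outline with statsmodels; synthetic monthly counts stand in for the surveillance series here.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(10)
      months = pd.date_range("2005-01", "2012-12", freq="MS")
      trend = np.linspace(2.0, 6.0, months.size)                 # rising incidence
      season = 0.8 * np.sin(2 * np.pi * months.month / 12)       # annual seasonality
      y = pd.Series(trend + season + rng.normal(0, 0.3, months.size), index=months)

      model = SARIMAX(y, order=(0, 0, 1), seasonal_order=(0, 1, 1, 12))
      fit = model.fit(disp=False)
      print(fit.aic)
      print(fit.forecast(steps=12))      # one-year-ahead incidence forecast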

  3. Retrospective Correction of Physiological Noise in DTI Using an Extended Tensor Model and Peripheral Measurements

    PubMed Central

    Mohammadi, Siawoosh; Hutton, Chloe; Nagy, Zoltan; Josephs, Oliver; Weiskopf, Nikolaus

    2013-01-01

    Diffusion tensor imaging is widely used in research and clinical applications, but this modality is highly sensitive to artefacts. We developed an easy-to-implement extension of the original diffusion tensor model to account for physiological noise in diffusion tensor imaging using measures of peripheral physiology (pulse and respiration), the so-called extended tensor model. Within the framework of the extended tensor model two types of regressors, which respectively modeled small (linear) and strong (nonlinear) variations in the diffusion signal, were derived from peripheral measures. We tested the performance of four extended tensor models with different physiological noise regressors on nongated and gated diffusion tensor imaging data, and compared it to an established data-driven robust fitting method. In the brainstem and cerebellum the extended tensor models reduced the noise in the tensor-fit by up to 23% in accordance with previous studies on physiological noise. The extended tensor model addresses both large-amplitude outliers and small-amplitude signal-changes. The framework of the extended tensor model also facilitates further investigation into physiological noise in diffusion tensor imaging. The proposed extended tensor model can be readily combined with other artefact correction methods such as robust fitting and eddy current correction. PMID:22936599

  4. Predicting diabetes mellitus using SMOTE and ensemble machine learning approach: The Henry Ford ExercIse Testing (FIT) project.

    PubMed

    Alghamdi, Manal; Al-Mallah, Mouaz; Keteyian, Steven; Brawner, Clinton; Ehrman, Jonathan; Sakr, Sherif

    2017-01-01

    Machine learning is becoming a popular and important approach in the field of medical research. In this study, we investigate the relative performance of various machine learning methods such as Decision Tree, Naïve Bayes, Logistic Regression, Logistic Model Tree and Random Forests for predicting incident diabetes using medical records of cardiorespiratory fitness. In addition, we apply different techniques to uncover potential predictors of diabetes. This FIT project study used data from 32,555 patients, free of any known coronary artery disease or heart failure, who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 5-year follow-up. At the completion of the fifth year, 5,099 of those patients had developed diabetes. The dataset contained 62 attributes classified into four categories: demographic characteristics, disease history, medication use history, and stress test vital signs. We developed an ensemble-based predictive model using 13 attributes that were selected based on their clinical importance, Multiple Linear Regression, and Information Gain Ranking methods. The negative effect of the class imbalance on the constructed model was handled by the Synthetic Minority Oversampling Technique (SMOTE). The overall performance of the predictive model was improved by an ensemble machine learning approach using the Vote method with three tree-based classifiers (Naïve Bayes Tree, Random Forest, and Logistic Model Tree), achieving high prediction accuracy (AUC = 0.92). The study shows the potential of ensembling and SMOTE approaches for predicting incident diabetes using cardiorespiratory fitness data.
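
    A SMOTE-plus-voting-ensemble pipeline of this shape can be sketched with scikit-learn and imbalanced-learn; sklearn's GaussianNB, RandomForestClassifier and LogisticRegression substitute for the Weka learners named in the abstract, and the imbalanced toy data merely stand in for the FIT registry.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier, VotingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score
      from imblearn.over_sampling import SMOTE

      # Imbalanced toy data (about 16% positives, 13 features as in the abstract).
      X, y = make_classification(n_samples=5000, n_features=13, weights=[0.84],
                                 random_state=11)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=11)

      # Rebalance the training set only, then combine classifiers by soft voting.
      X_res, y_res = SMOTE(random_state=11).fit_resample(X_tr, y_tr)
      ensemble = VotingClassifier(
          estimators=[("nb", GaussianNB()),
                      ("rf", RandomForestClassifier(random_state=11)),
                      ("lr", LogisticRegression(max_iter=1000))],
          voting="soft")
      ensemble.fit(X_res, y_res)
      print("AUC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))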

  5. IDENTIFICATION OF REGIME SHIFTS IN TIME SERIES USING NEIGHBORHOOD STATISTICS

    EPA Science Inventory

    The identification of alternative dynamic regimes in ecological systems requires several lines of evidence. Previous work on time series analysis of dynamic regimes includes mainly model-fitting methods. We introduce two methods that do not use models. These approaches use state-...

  6. Simulation and fitting of complex reaction network TPR: The key is the objective function

    DOE PAGES

    Savara, Aditya Ashi

    2016-07-07

    In this research, a method has been developed for finding improved fits during simulation and fitting of data from complex reaction network temperature programmed reactions (CRN-TPR). It was found that simulation and fitting of CRN-TPR presents additional challenges relative to simulation and fitting of simpler TPR systems. The method used here can enable checking the plausibility of proposed chemical mechanisms and kinetic models. The most important finding was that when choosing an objective function, use of an objective function that is based on integrated production provides more utility in finding improved fits when compared to an objective function based on the rate of production. The response surface produced by using the integrated production is monotonic, suppresses effects from experimental noise, requires fewer points to capture the response behavior, and can be simulated numerically with smaller errors. For CRN-TPR, there is increased importance (relative to simple reaction network TPR) in resolving of peaks prior to fitting, as well as from weighting of experimental data points. Using an implicit ordinary differential equation solver was found to be inadequate for simulating CRN-TPR. Lastly, the method employed here was capable of attaining improved fits in simulation and fitting of CRN-TPR when starting with a postulated mechanism and physically realistic initial guesses for the kinetic parameters.
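
    The central point, that an objective built on integrated production behaves better than one built on rates, can be illustrated directly. The toy TPR-like rate curve below is illustrative only; scipy's cumulative_trapezoid performs the integration.

      import numpy as np
      from scipy.integrate import cumulative_trapezoid

      T = np.linspace(300, 800, 251)                      # temperature ramp, K

      def rate(T, T0, width):
          """Toy desorption-rate peak for one surface species."""
          return np.exp(-0.5 * ((T - T0) / width) ** 2)

      rng = np.random.default_rng(12)
      observed = rate(T, 520.0, 30.0) + rng.normal(0, 0.05, T.size)   # noisy rate data
      model = rate(T, 540.0, 30.0)                                    # mis-set peak centre

      # Rate-based objective: dominated by point-by-point noise.
      sse_rate = np.sum((observed - model) ** 2)

      # Integrated-production objective: monotonic signals, noise suppressed.
      obs_int = cumulative_trapezoid(observed, T, initial=0.0)
      mod_int = cumulative_trapezoid(model, T, initial=0.0)
      sse_int = np.sum((obs_int - mod_int) ** 2)
      print(f"rate-based SSE = {sse_rate:.2f}, integrated SSE = {sse_int:.2f}")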

  7. Structure of Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition Criteria for Obsessive–Compulsive Personality Disorder in Patients With Binge Eating Disorder

    PubMed Central

    Ansell, Emily B; Pinto, Anthony; Edelen, Maria Orlando; Grilo, Carlos M

    2013-01-01

    Objective To examine 1-, 2-, and 3-factor model structures through confirmatory analytic procedures for Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) obsessive–compulsive personality disorder (OCPD) criteria in patients with binge eating disorder (BED). Method Participants were consecutive outpatients (n = 263) with binge eating disorder and were assessed with semi-structured interviews. The 8 OCPD criteria were submitted to confirmatory factor analyses in Mplus Version 4.2 (Los Angeles, CA) in which previously identified factor models of OCPD were compared for fit, theoretical relevance, and parsimony. Nested models were compared for significant improvements in model fit. Results Evaluation of indices of fit in combination with theoretical considerations suggest a multifactorial model is a significant improvement in fit over the current DSM-IV single-factor model of OCPD. Though the data support both 2- and 3-factor models, the 3-factor model is hindered by an underspecified third factor. Conclusion A multifactorial model of OCPD incorporating the factors perfectionism and rigidity represents the best compromise of fit and theory in modelling the structure of OCPD in patients with BED. A third factor representing miserliness may be relevant in BED populations but needs further development. The perfectionism and rigidity factors may represent distinct intrapersonal and interpersonal attempts at control and may have implications for the assessment of OCPD. PMID:19087485

  8. Validation of Western North America Models based on finite-frequency and ray theory imaging methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larmat, Carene; Maceira, Monica; Porritt, Robert W.

    2015-02-02

    We validate seismic models developed for western North America with a focus on the effect of imaging methods on data fit. We use the DNA09 models, for which our collaborators provide models built with both the body-wave finite-frequency (FF) approach and the ray theory (RT) approach, with identical data selection, processing, and reference models.

  9. Decomposition of mineral absorption bands using nonlinear least squares curve fitting: Application to Martian meteorites and CRISM data

    NASA Astrophysics Data System (ADS)

    Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.

    2011-04-01

    This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions, and explore different continuum-removal techniques. We further evaluate the suitability of curve fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve comparable results to the MGM model (Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally we use Gaussian modeling to fit CRISM spectra of pyroxene and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions shows that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low-Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
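
    As a toy version of this kind of band decomposition, the sketch below fits a sum of Gaussians to a synthetic continuum-removed spectrum using SciPy's Levenberg-Marquardt backend. The band centers, depths, widths, and noise level are invented, and the hand-picked initial guess p0 stands in for the automated parameter initialization discussed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(wl, *params):
    """Sum of Gaussian bands; params holds (depth, center, width) triples."""
    model = np.zeros_like(wl)
    for depth, center, width in zip(params[0::3], params[1::3], params[2::3]):
        model += depth * np.exp(-0.5 * ((wl - center) / width) ** 2)
    return model

# Synthetic continuum-removed reflectance: two absorptions (negative depths).
rng = np.random.default_rng(4)
wl = np.linspace(1.0, 2.6, 400)            # wavelength, microns
spectrum = (gaussian_sum(wl, -0.20, 1.9, 0.10, -0.10, 2.3, 0.08)
            + rng.normal(0.0, 0.005, wl.size))

# Hand-picked p0 stands in for the automated initialization in the paper.
p0 = [-0.15, 1.85, 0.12, -0.08, 2.25, 0.10]
popt, pcov = curve_fit(gaussian_sum, wl, spectrum, p0=p0, method="lm")
```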

  10. Improved Forecasting Methods for Naval Manpower Studies

    DTIC Science & Technology

    2015-03-25

    Using monthly data is likely to improve the overall fit of the models and the accuracy of the BP test. A measure of unemployment to control for...measure of the relative goodness of fit of a statistical model. It is grounded in the concept of information entropy, in effect, offering a relative...the Kullback-Leibler divergence, DKL(f, g1); similarly, the information lost from using g2 to

  11. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.

  12. MISFITS: evaluating the goodness of fit between a phylogenetic model and an alignment.

    PubMed

    Nguyen, Minh Anh Thi; Klaere, Steffen; von Haeseler, Arndt

    2011-01-01

    As models of sequence evolution become more and more complicated, many criteria for model selection have been proposed, and tools are available to select the best model for an alignment under a particular criterion. However, in many instances the selected model fails to explain the data adequately as reflected by large deviations between observed pattern frequencies and the corresponding expectation. We present MISFITS, an approach to evaluate the goodness of fit (http://www.cibiv.at/software/misfits). MISFITS introduces a minimum number of "extra substitutions" on the inferred tree to provide a biologically motivated explanation why the alignment may deviate from expectation. These extra substitutions plus the evolutionary model then fully explain the alignment. We illustrate the method on several examples and then give a survey about the goodness of fit of the selected models to the alignments in the PANDIT database.

  13. Automated crystallographic ligand building using the medial axis transform of an electron-density isosurface.

    PubMed

    Aishima, Jun; Russel, Daniel S; Guibas, Leonidas J; Adams, Paul D; Brunger, Axel T

    2005-10-01

    Automatic fitting methods that build molecules into electron-density maps usually fail below 3.5 Å resolution. As a first step towards addressing this problem, an algorithm has been developed using an approximation of the medial axis to simplify an electron-density isosurface. This approximation captures the central axis of the isosurface with a graph, which is then matched against a graph of the molecular model. One of the first applications of the medial axis to X-ray crystallography is presented here. When applied to ligand fitting, the method performs at least as well as methods based on selecting peaks in electron-density maps. Generalization of the method to recognition of common features across multiple contour levels could lead to powerful automatic fitting methods that perform well even at low resolution.

  14. Bringing the cross-correlation method up to date

    NASA Technical Reports Server (NTRS)

    Statler, Thomas

    1995-01-01

    The cross-correlation (XC) method of Tonry & Davis (1979, AJ, 84, 1511) is generalized to arbitrary parametrized line profiles. In the new algorithm the correlation function itself, rather than the observed galaxy spectrum, is fitted by the model line profile: this removes much of the complication in the error analysis caused by template mismatch. Like the Fourier correlation quotient (FCQ) method of Bender (1990, A&A, 229, 441), the inferred line profiles are, up to a normalization constant, independent of template mismatch as long as there are no blended lines. The standard reduced χ² is a good measure of the fit of the inferred velocity distribution, largely decoupled from the fit of the spectral template. The updated XC method performs as well as other recently developed methods, with the added virtue of conceptual simplicity.
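
    The heart of an XC measurement can be sketched as follows: correlate a mean-subtracted galaxy spectrum against a template on a common log-wavelength grid and locate the correlation peak. The parabolic peak interpolation below is a simple stand-in for fitting a full parametrized line profile to the correlation function, which is what the updated method actually does; the function name and the dlnlam argument are assumptions.

```python
import numpy as np

def xc_velocity_shift(galaxy, template, dlnlam, c=299792.458):
    """Velocity shift from the peak of the cross-correlation of two
    spectra binned on a common log-wavelength grid (step dlnlam per
    pixel); c is in km/s. Sign convention depends on binning direction.
    """
    g = galaxy - galaxy.mean()
    t = template - template.mean()
    ccf = np.correlate(g, t, mode="full")
    lags = np.arange(-len(t) + 1, len(g))
    k = np.argmax(ccf)               # assumes the peak is not at an edge
    # Parabolic interpolation around the peak; a stand-in for fitting a
    # parametrized line profile to the correlation function itself.
    y0, y1, y2 = ccf[k - 1], ccf[k], ccf[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return (lags[k] + frac) * dlnlam * c
```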

  15. Bayesian Computation for Log-Gaussian Cox Processes: A Comparative Analysis of Methods

    PubMed Central

    Teng, Ming; Nathoo, Farouk S.; Johnson, Timothy D.

    2017-01-01

    The log-Gaussian Cox process is a commonly used model for the analysis of spatial point pattern data. Fitting this model is difficult because of its doubly stochastic property, i.e., it is a hierarchical combination of a Poisson process at the first level and a Gaussian process at the second level. Various methods have been proposed to estimate such a process, including traditional likelihood-based approaches as well as Bayesian methods. We focus here on Bayesian methods and several approaches that have been considered for model fitting within this framework, including Hamiltonian Monte Carlo, the integrated nested Laplace approximation, and variational Bayes. We consider these approaches and make comparisons with respect to statistical and computational efficiency. These comparisons are made through several simulation studies as well as through two applications, the first examining ecological data and the second involving neuroimaging data. PMID:29200537

  16. An adaptive approach to the dynamic allocation of buffer storage. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Crooke, S. C.

    1970-01-01

    Several strategies for the dynamic allocation of buffer storage are simulated and compared. The basic algorithms investigated, using actual statistics observed in the Univac 1108 EXEC 8 System, include the buddy method and the first-fit method. Modifications are made to the basic methods in an effort to improve and to measure allocation performance. A simulation model of an adaptive strategy is developed which permits interchanging the two different methods, the buddy and the first-fit methods with some modifications. Using an adaptive strategy, each method may be employed in the statistical environment in which its performance is superior to the other method.
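
    For reference, a minimal sketch of the first-fit strategy, one of the two basic methods compared, is given below. The free-list representation and block-splitting rule are illustrative assumptions, not the Univac EXEC 8 implementation.

```python
def first_fit(free_list, request):
    """Allocate `request` units from the first free block that is large
    enough. free_list holds (address, size) pairs in address order and
    is mutated in place; returns the allocated address or None.
    """
    for i, (addr, size) in enumerate(free_list):
        if size >= request:
            if size == request:
                free_list.pop(i)                                  # exact fit
            else:
                free_list[i] = (addr + request, size - request)   # split block
            return addr
    return None

free = [(0, 64), (128, 32), (256, 200)]
print(first_fit(free, 100))   # -> 256; the block shrinks to (356, 100)
```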

  17. Perceptions and receptivity of non-spousal family support: A mixed methods study of psychological distress among older, church-going African American men

    PubMed Central

    Watkins, Daphne C.; Wharton, Tracy; Mitchell, Jamie A.; Matusko, Niki; Kales, Helen

    2016-01-01

    The purpose of this study was to explore the role of non-spousal family support on mental health among older, church-going African American men. The mixed methods objective was to employ a design that used existing qualitative and quantitative data to explore the interpretive context within which social and cultural experiences occur. Qualitative data (n=21) were used to build a conceptual model that was tested using quantitative data (n= 401). Confirmatory factor analysis indicated an inverse association between non-spousal family support and distress. The comparative fit index, Tucker-Lewis fit index, and root mean square error of approximation indicated good model fit. This study offers unique methodological approaches to using existing, complementary data sources to understand the health of African American men. PMID:28943829

  18. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    PubMed

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MatLab functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to determine violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations result in fast comparisons of first order kinetic rates and amplitudes as a function of changing ligand concentrations. For analysis of higher order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or using Broyden-Fletcher-Goldfarb-Shanno methods. We have included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
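
    The analytical-solution idea for first-order schemes rests on the state vector obeying dp/dt = Kp, whose solution is a matrix exponential. A minimal sketch follows, in Python rather than the package's MATLAB/Visual Basic stack, with a made-up A <-> B -> C scheme.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical first-order scheme A <-> B -> C.
k1, k_1, k2 = 5.0, 1.0, 0.5
K = np.array([[-k1,          k_1,  0.0],
              [ k1, -(k_1 + k2),   0.0],
              [0.0,          k2,   0.0]])

p0 = np.array([1.0, 0.0, 0.0])   # everything starts in state A
for t in (0.1, 1.0, 10.0):
    print(t, expm(K * t) @ p0)   # analytical solution p(t) = exp(Kt) p0
```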

  19. Bayesian Analysis of Longitudinal Data Using Growth Curve Models

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan Lijuan; Nesselroade, John R.; Grimm, Kevin J.

    2007-01-01

    Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…

  20. Calibration of GRB Luminosity Relations with Cosmography

    NASA Astrophysics Data System (ADS)

    Gao, He; Liang, Nan; Zhu, Zong-Hong

    For the use of gamma-ray bursts (GRBs) to probe cosmology in a cosmology-independent way, a new method has been proposed to obtain luminosity distances of GRBs by interpolating directly from the Hubble diagram of SNe Ia and then calibrating GRB relations at high redshift. In this paper, following the basic assumption of the interpolation method that objects at the same redshift should have the same luminosity distance, we propose another approach: calibrating GRB luminosity relations with cosmographic fitting directly from SN Ia data. In cosmography, there is a well-known fitting formula that expresses the Hubble relation between luminosity distance and redshift through cosmographic parameters, which can be fitted from observational data. Using the cosmographic fitting results from the Union set of SNe Ia, we calibrate five GRB relations using the GRB sample at z ≤ 1.4 and deduce distance moduli of GRBs at 1.4 < z ≤ 6.6 by extending the calibrated relations to high redshift. Finally, we constrain three dark energy parameterizations, the Chevallier-Polarski-Linder (CPL), Jassal-Bagla-Padmanabhan (JBP), and Alam models, with the high-redshift GRB data as well as the cosmic microwave background (CMB) and baryon acoustic oscillation (BAO) observations, and we find that the ΛCDM model is consistent with the current data within the 1-σ confidence region.
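
    A hedged sketch of the cosmographic calibration step: fit a truncated luminosity-distance expansion to SN Ia distance moduli, then evaluate it at GRB redshifts below 1.4. The third-order flat-universe expansion below is one common convention and may differ from the exact formula used by the authors; the variable names are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458  # speed of light, km/s

def mu_cosmography(z, h0, q0, j0):
    """Distance modulus from a third-order, flat-universe cosmographic
    expansion of the luminosity distance (one common convention):
    d_L = (c z / H0) [1 + (1 - q0) z / 2
                        - (1 - q0 - 3 q0^2 + j0) z^2 / 6].
    """
    dl = (C_KMS * z / h0) * (1.0 + 0.5 * (1.0 - q0) * z
                             - (1.0 - q0 - 3.0 * q0**2 + j0) * z**2 / 6.0)
    return 5.0 * np.log10(dl) + 25.0     # d_L in Mpc

# With z, mu, mu_err from an SN Ia compilation (e.g., the Union set):
# popt, _ = curve_fit(mu_cosmography, z, mu, sigma=mu_err, p0=[70.0, -0.5, 1.0])
# mu_grb = mu_cosmography(z_grb, *popt)   # distance moduli at GRB redshifts
```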

  1. Accounting for nonsampling error in estimates of HIV epidemic trends from antenatal clinic sentinel surveillance

    PubMed Central

    Eaton, Jeffrey W.; Bao, Le

    2017-01-01

    Objectives The aim of the study was to propose and demonstrate an approach that allows additional nonsampling uncertainty about HIV prevalence measured at antenatal clinic sentinel surveillance (ANC-SS) in model-based inferences about trends in HIV incidence and prevalence. Design Mathematical model fitted to surveillance data with Bayesian inference. Methods We introduce a variance inflation parameter σ_infl^2 that accounts for the uncertainty of nonsampling errors in ANC-SS prevalence. It is additive to the sampling error variance. Three approaches are tested for estimating σ_infl^2 using ANC-SS and household survey data from 40 subnational regions in nine countries in sub-Saharan Africa, as defined in UNAIDS 2016 estimates. Methods were compared using in-sample fit and out-of-sample prediction of ANC-SS data, fit to household survey prevalence data, and the computational implications. Results Introducing the additional variance parameter σ_infl^2 increased the error variance around ANC-SS prevalence observations by a median factor of 2.7 (interquartile range 1.9-3.8). Using only sampling error in ANC-SS prevalence (σ_infl^2 = 0), coverage of 95% prediction intervals was 69% in out-of-sample prediction tests. This increased to 90% after introducing the additional variance parameter σ_infl^2. The revised probabilistic model improved model fit to household survey prevalence and increased epidemic uncertainty intervals most during the early epidemic period before 2005. Estimating σ_infl^2 did not increase the computational cost of model fitting. Conclusions We recommend estimating nonsampling error in ANC-SS as an additional parameter in Bayesian inference using the Estimation and Projection Package model. This approach may prove useful for incorporating other data sources, such as routine prevalence from prevention of mother-to-child transmission testing, into future epidemic estimates. PMID:28296801
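
    The variance-inflation idea reduces, in its simplest form, to adding a common extra variance term to each observation's sampling variance inside the likelihood. The sketch below is a deliberately simplified Gaussian version; the actual EPP model works on a transformed scale within a full Bayesian framework, and the function name is an assumption.

```python
import numpy as np

def anc_loglik(prev_obs, prev_model, var_sampling, var_infl):
    """Gaussian log-likelihood with a nonsampling variance term added to
    the sampling variance (the sigma_infl^2 idea in its simplest form).
    The real EPP model works on a transformed scale with priors; this
    is only a sketch of the variance-inflation mechanism.
    """
    var_total = var_sampling + var_infl       # additive inflation
    resid = np.asarray(prev_obs) - np.asarray(prev_model)
    return -0.5 * np.sum(np.log(2.0 * np.pi * var_total) + resid**2 / var_total)
```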

  2. Kinematic modelling of disc galaxies using graphics processing units

    NASA Astrophysics Data System (ADS)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ~100 when compared to a single-threaded CPU, and up to a factor of ~10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.

  3. Evaluation of internal fit of interim crown fabricated with CAD/CAM milling and 3D printing system.

    PubMed

    Lee, Wan-Sun; Lee, Du-Hyeong; Lee, Kyu-Bok

    2017-08-01

    This study evaluates the internal fit of crowns manufactured by the CAD/CAM milling method and by the 3D printing method. The master model was fabricated from stainless steel using a CNC machine, and the working model was created from a vinyl-polysiloxane impression. After scanning the working model, design software was used to design the crown. The saved STL file was used with the CAD/CAM milling method and with two types of 3D printing method to produce 10 interim crowns per group. Internal discrepancy was measured using the silicone replica method, and the data were analyzed with one-way ANOVA to verify statistical significance. The mean (standard deviation) discrepancies of the 3 groups were 171.6 (97.4) µm for the crowns manufactured by the milling system and 149.1 (65.9) and 91.1 (36.4) µm, respectively, for the crowns manufactured with the two types of 3D printing system. The difference was statistically significant, and the 3D printing groups outperformed the milling group. The marginal and internal fit of the interim restoration is therefore better with the 3D printing method than with the CAD/CAM milling method, and the 3D printing method is considered applicable not only for producing interim restorations but also for producing dental prostheses with a higher level of completion.

  4. Photometric survey, modelling, and scaling of long-period and low-amplitude asteroids

    NASA Astrophysics Data System (ADS)

    Marciniak, A.; Bartczak, P.; Müller, T.; Sanabria, J. J.; Alí-Lagoa, V.; Antonini, P.; Behrend, R.; Bernasconi, L.; Bronikowska, M.; Butkiewicz-Bąk, M.; Cikota, A.; Crippa, R.; Ditteon, R.; Dudziński, G.; Duffard, R.; Dziadura, K.; Fauvaud, S.; Geier, S.; Hirsch, R.; Horbowicz, J.; Hren, M.; Jerosimic, L.; Kamiński, K.; Kankiewicz, P.; Konstanciak, I.; Korlevic, P.; Kosturkiewicz, E.; Kudak, V.; Manzini, F.; Morales, N.; Murawiecka, M.; Ogłoza, W.; Oszkiewicz, D.; Pilcher, F.; Polakis, T.; Poncy, R.; Santana-Ros, T.; Siwak, M.; Skiff, B.; Sobkowiak, K.; Stoss, R.; Żejmo, M.; Żukowski, K.

    2018-02-01

    Context. The available set of spin and shape modelled asteroids is strongly biased against slowly rotating targets and those with low lightcurve amplitudes. This is due to observing selection effects. As a consequence, the current picture of asteroid spin axis distribution, rotation rates, radiometric properties, or aspects related to the object's internal structure might be affected too. Aims: To counteract these selection effects, we are running a photometric campaign of a large sample of main belt asteroids omitted in most previous studies. Using least chi-squared fitting we determined synodic rotation periods and verified previous determinations. When a dataset for a given target was sufficiently large and varied, we performed spin and shape modelling with two different methods to compare their performance. Methods: We used the convex inversion method and the non-convex SAGE algorithm, applied on the same datasets of dense lightcurves. Both methods search for the lowest deviations between observed and modelled lightcurves, though using different approaches. Unlike convex inversion, the SAGE method allows for the existence of valleys and indentations on the shapes based only on lightcurves. Results: We obtain detailed spin and shape models for the first five targets of our sample: (159) Aemilia, (227) Philosophia, (329) Svea, (478) Tergeste, and (487) Venetia. When compared to stellar occultation chords, our models obtained an absolute size scale and major topographic features of the shape models were also confirmed. When applied to thermophysical modelling (TPM), they provided a very good fit to the infrared data and allowed their size, albedo, and thermal inertia to be determined. Conclusions: Convex and non-convex shape models provide comparable fits to lightcurves. However, some non-convex models fit notably better to stellar occultation chords and to infrared data in sophisticated thermophysical modelling (TPM). In some cases TPM showed strong preference for one of the spin and shape solutions. Also, we confirmed that slowly rotating asteroids tend to have higher-than-average values of thermal inertia, which might be caused by properties of the surface layers underlying the skin depth. The photometric data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/610/A7

  5. Fuzzy Analytic Hierarchy Process-based Chinese Resident Best Fitness Behavior Method Research.

    PubMed

    Wang, Dapeng; Zhang, Lan

    2015-01-01

    With the rapid development of the Chinese economy, science, and technology, people's pursuit of health has become more intense, and Chinese residents' sports fitness activities have developed rapidly. However, fitness events differ in popularity and in their effects on body energy consumption. On this basis, the paper studies fitness behaviors and derives an exercise guide for Chinese residents' sports fitness behaviors, providing guidance for implementing the national fitness plan and making Chinese residents' fitness more scientific. Starting from the perspective of energy consumption, the paper mainly adopts an empirical method: it determines the energy consumption of Chinese residents' favorite sports fitness events by observing the energy consumption of various fitness behaviors, and applies the fuzzy analytic hierarchy process to evaluate seven fitness events: bicycle riding, shadowboxing practice, swimming, rope skipping, jogging, running, and aerobics. By calculating and comparing the memberships of the fuzzy rating model, it identifies the fitness behaviors that are more effective, more popular, and more helpful for residents' health. It concludes that swimming is the best exercise mode, with the highest membership; the memberships of running, rope skipping, and shadowboxing practice are also relatively high. Residents should combine several of these fitness events according to their individual physical and living conditions so as to better achieve the purpose of fitness exercise.

  6. An investigation into the psychometric properties of the Hospital Anxiety and Depression Scale in patients with breast cancer

    PubMed Central

    Rodgers, Jacqui; Martin, Colin R; Morse, Rachel C; Kendell, Kate; Verrill, Mark

    2005-01-01

    Background To determine the psychometric properties of the Hospital Anxiety and Depression Scale (HADS) in patients with breast cancer and determine the suitability of the instrument for use with this clinical group. Methods A cross-sectional design was used. The study used a pooled data set from three breast cancer clinical groups. The dependent variables were HADS anxiety and depression sub-scale scores. Exploratory and confirmatory factor analyses were conducted on the HADS to determine its psychometric properties in 110 patients with breast cancer. Seven models were tested to determine model fit to the data. Results Both factor analysis methods indicated that three-factor models provided a better fit to the data compared to two-factor (anxiety and depression) models for breast cancer patients. Clark and Watson's three factor tripartite and three factor hierarchical models provided the best fit. Conclusion The underlying factor structure of the HADS in breast cancer patients comprises three distinct, but correlated factors, negative affectivity, autonomic anxiety and anhedonic depression. The clinical utility of the HADS in screening for anxiety and depression in breast cancer patients may be enhanced by using a modified scoring procedure based on a three-factor model of psychological distress. This proposed alternate scoring method involving regressing autonomic anxiety and anhedonic depression factors onto the third factor (negative affectivity) requires further investigation in order to establish its efficacy. PMID:16018801

  7. Results of the Verification of the Statistical Distribution Model of Microseismicity Emission Characteristics

    NASA Astrophysics Data System (ADS)

    Cianciara, Aleksander

    2016-09-01

    The paper presents the results of research aimed at verifying the hypothesis that the Weibull distribution is an appropriate statistical model of microseismic emission characteristics, namely the energy of phenomena and the inter-event time. It is understood that the emission under consideration is induced by natural rock mass fracturing. Because the recorded emission contains noise, it is subjected to appropriate filtering. The study has been conducted using statistical verification of the null hypothesis that the Weibull distribution fits the empirical cumulative distribution function. As the model describing the cumulative distribution function is given in analytical form, its verification may be performed using the Kolmogorov-Smirnov goodness-of-fit test. Interpretations by means of probabilistic methods require specifying a correct model of the statistical distribution of the data, because these methods do not use the measurement data directly but rather their statistical distributions, e.g., in the method based on hazard analysis or in the method that uses maximum value statistics.
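
    A compact version of this verification workflow, fit a Weibull distribution and test it with Kolmogorov-Smirnov, can be written with SciPy as below; the synthetic event energies and the zero-location constraint are assumptions.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for filtered emission data (event energies).
energies = stats.weibull_min.rvs(c=0.9, scale=1e3, size=500, random_state=1)

# Fit a two-parameter Weibull (location pinned at zero) ...
shape, loc, scale = stats.weibull_min.fit(energies, floc=0.0)

# ... and test the fit with Kolmogorov-Smirnov, since the model CDF
# is available in analytical form.
ks_stat, p_value = stats.kstest(energies, "weibull_min", args=(shape, loc, scale))
print(shape, scale, ks_stat, p_value)
```

    Note that when the Weibull parameters are estimated from the same data being tested, the nominal KS p-value is only approximate.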

  8. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and of how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve fitting approach based on Taylor series approximation. Both model-based methods show significant advantages compared to the curve fitting method: they need fewer reference points for calibration and, moreover, supply a function that remains valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.

  9. Method development estimating ambient mercury concentration from monitored mercury wet deposition

    NASA Astrophysics Data System (ADS)

    Chen, S. M.; Qiu, X.; Zhang, L.; Yang, F.; Blanchard, P.

    2013-05-01

    Speciated atmospheric mercury data have recently been monitored at multiple locations in North America, but the spatial coverage is far less than that of the long-established mercury wet deposition network. The present study describes a first attempt at linking ambient concentration with wet deposition using a Beta distribution fit of a ratio estimate. The mean, median, mode, standard deviation, and skewness of the fitted Beta distribution were generated using data collected in 2009 at 11 monitoring stations. A comparison of the normalized histogram with the fitted density function shows that the fitted Beta distribution of the ratio closely matches the empirical distribution. The estimated ambient mercury concentration was further partitioned into reactive gaseous mercury and particulate-bound mercury using the linear regression model developed by Amos et al. (2012). The method presented here can be used to roughly estimate the ambient mercury concentration at locations and/or times where such measurements are not available but where wet deposition is monitored.
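
    A minimal sketch of the distribution-fitting step with SciPy: fit a Beta distribution to ratio data rescaled into (0, 1) and read off the summary statistics reported in the study. The placeholder data and the fixed location/scale are assumptions.

```python
import numpy as np
from scipy import stats

# Placeholder ratios linking wet deposition to ambient concentration,
# rescaled into (0, 1) so a standard Beta distribution applies.
rng = np.random.default_rng(5)
ratios = np.clip(rng.beta(2.0, 5.0, size=200), 1e-6, 1.0 - 1e-6)

a, b, loc, scale = stats.beta.fit(ratios, floc=0.0, fscale=1.0)
fitted = stats.beta(a, b)
print("mean:", fitted.mean(), "median:", fitted.median(), "std:", fitted.std())
print("skewness:", fitted.stats(moments="s"))
print("mode:", (a - 1.0) / (a + b - 2.0) if min(a, b) > 1.0 else "at boundary")
```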

  10. DNA from fecal immunochemical test can replace stool for detection of colonic lesions using a microbiota-based model.

    PubMed

    Baxter, Nielson T; Koumpouras, Charles C; Rogers, Mary A M; Ruffin, Mack T; Schloss, Patrick D

    2016-11-14

    There is a significant demand for colorectal cancer (CRC) screening methods that are noninvasive, inexpensive, and capable of accurately detecting early stage tumors. It has been shown that models based on the gut microbiota can complement the fecal occult blood test and fecal immunochemical test (FIT). However, a barrier to microbiota-based screening is the need to collect and store a patient's stool sample. Using stool samples collected from 404 patients, we tested whether the residual buffer containing resuspended feces in FIT cartridges could be used in place of intact stool samples. We found that the bacterial DNA isolated from FIT cartridges largely recapitulated the community structure and membership of patients' stool microbiota and that the abundance of bacteria associated with CRC were conserved. We also found that models for detecting CRC that were generated using bacterial abundances from FIT cartridges were equally predictive as models generated using bacterial abundances from stool. These findings demonstrate the potential for using residual buffer from FIT cartridges in place of stool for microbiota-based screening for CRC. This may reduce the need to collect and process separate stool samples and may facilitate combining FIT and microbiota-based biomarkers into a single test. Additionally, FIT cartridges could constitute a novel data source for studying the role of the microbiome in cancer and other diseases.

  11. Local Minima Free Parameterized Appearance Models

    PubMed Central

    Nguyen, Minh Hoai; De la Torre, Fernando

    2010-01-01

    Parameterized Appearance Models (PAMs) (e.g. Eigentracking, Active Appearance Models, Morphable Models) are commonly used to model the appearance and shape variation of objects in images. While PAMs have numerous advantages relative to alternate approaches, they have at least two drawbacks. First, they are especially prone to local minima in the fitting process. Second, often few if any of the local minima of the cost function correspond to acceptable solutions. To solve these problems, this paper proposes a method to learn a cost function by explicitly optimizing that the local minima occur at and only at the places corresponding to the correct fitting parameters. To the best of our knowledge, this is the first paper to address the problem of learning a cost function to explicitly model local properties of the error surface to fit PAMs. Synthetic and real examples show improvement in alignment performance in comparison with traditional approaches. PMID:21804750

  12. A modified active appearance model based on an adaptive artificial bee colony.

    PubMed

    Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali

    2014-01-01

    The active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, fitting the model to an original image is a challenging task. The state of the art shows that optimization methods can resolve this problem; however, applying optimization introduces its own difficulties. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of the AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, a proprietary 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed face recognition technique performs effectively in terms of face recognition accuracy.

  13. Fast and accurate fitting and filtering of noisy exponentials in Legendre space.

    PubMed

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
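
    The representation step is straightforward to reproduce with NumPy's Legendre module. The sketch below projects a noisy exponential onto a low-order Legendre basis and uses the truncated expansion as a phase-preserving filter; the degree, the rescaling of time onto [-1, 1], and the noise level are illustrative assumptions, and the paper's actual parameter-retrieval step is not shown.

```python
import numpy as np
from numpy.polynomial import legendre

# Noisy exponential rescaled onto [-1, 1], the interval on which the
# Legendre polynomials are orthogonal.
rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 1000)
signal = np.exp(-2.0 * (x + 1.0)) + rng.normal(0.0, 0.05, x.size)

# Project onto a low-order Legendre basis; truncating the expansion
# removes noise without the phase shift of a conventional lowpass filter.
coeffs = legendre.legfit(x, signal, deg=8)
filtered = legendre.legval(x, coeffs)
```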

  14. Whole Protein Native Fitness Potentials

    NASA Astrophysics Data System (ADS)

    Faraggi, Eshel; Kloczkowski, Andrzej

    2013-03-01

    Protein structure prediction can be separated into two tasks: sampling the configuration space of the protein chain, and assigning a fitness between these hypothetical models and the native structure of the protein. One of the more promising developments in this area is that of knowledge-based energy functions. However, standard approaches using pair-wise interactions have shown shortcomings, demonstrated by the superiority of multi-body potentials. These shortcomings arise because residue pair-wise interactions depend on other residues along the chain. We developed a method that uses whole-protein information, filtered through machine learners, to score protein models based on their likeness to native structures. For all models we calculated parameters associated with the distance to the solvent and with distances between residues. These parameters, in addition to energy estimates obtained using a four-body potential, DFIRE, and RWPlus, were used to train machine learners to predict the fitness of the models. Testing on CASP 9 targets showed that our method is superior to DFIRE, RWPlus, and the four-body potential, which are considered standards in the field.

  15. Calibration of the COBE FIRAS instrument

    NASA Technical Reports Server (NTRS)

    Fixsen, D. J.; Cheng, E. S.; Cottingham, D. A.; Eplee, R. E., Jr.; Hewagama, T.; Isaacman, R. B.; Jensen, K. A.; Mather, J. C.; Massa, D. L.; Meyer, S. S.

    1994-01-01

    The Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on the Cosmic Background Explorer (COBE) satellite was designed to accurately measure the spectrum of the cosmic microwave background radiation (CMBR) in the frequency range 1-95/cm with an angular resolution of 7 deg. We describe the calibration of this instrument, including the method of obtaining calibration data, reduction of data, the instrument model, fitting the model to the calibration data, and application of the resulting model solution to sky observations. The instrument model fits well for calibration data that resemble sky conditions. The method of propagating detector noise through the calibration process to yield a covariance matrix of the calibrated sky data is described. The final uncertainties are variable both in frequency and position, but for a typical calibrated sky pixel of 2.6 deg square and a 0.7/cm spectral element, the random detector noise limit is of order a few times 10^-7 ergs/sq cm/s/sr cm for 2-20/cm, and the difference between the sky and the best-fit cosmic blackbody can be measured with a gain uncertainty of less than 3%.

  16. Method Effects on an Adaptation of the Rosenberg Self-Esteem Scale in Greek and the Role of Personality Traits.

    PubMed

    Michaelides, Michalis P; Koutsogiorgi, Chrystalla; Panayiotou, Georgia

    2016-01-01

    Rosenberg's Self-Esteem Scale is a balanced, 10-item scale designed to be unidimensional; however, research has repeatedly shown that its factorial structure is contaminated by method effects due to item wording. Beyond the substantive self-esteem factor, 2 additional factors linked to the positive and negative wording of items have been theoretically specified and empirically supported. Initial evidence has revealed systematic relations of the 2 method factors with variables expressing approach and avoidance motivation. This study assessed the fit of competing confirmatory factor analytic models for the Rosenberg Self-Esteem Scale using data from 2 samples of adult participants in Cyprus. Models that accounted for both positive and negative wording effects via 2 latent method factors had better fit compared to alternative models. Measures of experiential avoidance, social anxiety, and private self-consciousness were associated with the method factors in structural equation models. The findings highlight the need to specify models with wording effects for a more accurate representation of the scale's structure and support the hypothesis of method factors as response styles, which are associated with individual characteristics related to avoidance motivation, behavioral inhibition, and anxiety.

  17. Modeling two strains of disease via aggregate-level infectivity curves.

    PubMed

    Romanescu, Razvan; Deardon, Rob

    2016-04-01

    Well-formulated models of disease spread, and efficient methods to fit them to observed data, are powerful tools for aiding the surveillance and control of infectious diseases. Our project considers the problem of the simultaneous spread of two related strains of disease in a context where spatial location is the key driver of disease spread. We start our modeling work with the individual-level models (ILMs) of disease transmission and extend these models to accommodate the competing spread of the pathogens in a two-tier hierarchical population (whose levels we refer to as 'farm' and 'animal'). The postulated interference mechanism between the two strains is a period of cross-immunity following infection. We also present a framework for speeding up the computationally intensive process of fitting the ILM to data, typically done using Markov chain Monte Carlo (MCMC) in a Bayesian framework, by turning the inference into a two-stage process. First, we approximate the number of animals infected on a farm over time by infectivity curves. These curves are fit to data sampled from farms using maximum likelihood estimation; then, conditional on the fitted curves, Bayesian MCMC inference proceeds for the remaining parameters. Finally, we use posterior predictive distributions of salient epidemic summary statistics to assess the fitted model.

  18. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore applied in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between the stock market price and the rubber price for the sampled countries. The results show a negative relationship between the rubber price and the stock market price for Malaysia, Thailand, the Philippines, and Indonesia.
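
    A two-component normal mixture of the kind described can be fitted by maximum likelihood (via the EM algorithm) in a few lines, for example with scikit-learn. The synthetic two-dimensional data below are placeholders for paired price series, not the paper's dataset.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder for paired observations (e.g., rubber vs. stock returns):
# two overlapping bivariate normal clusters.
rng = np.random.default_rng(0)
data = np.vstack([
    rng.multivariate_normal([0.0, 0.0], [[1.0, -0.5], [-0.5, 1.0]], 300),
    rng.multivariate_normal([2.0, -2.0], [[1.0, -0.3], [-0.3, 1.0]], 200),
])

# EM maximizes the mixture likelihood for the two latent classes.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(data)
print(gmm.weights_)        # mixing proportions
print(gmm.means_)          # component means
print(gmm.covariances_)    # off-diagonal terms carry the co-movement signal
```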

  19. An Improved Statistical Solution for Global Seismicity by the HIST-ETAS Approach

    NASA Astrophysics Data System (ADS)

    Chu, A.; Ogata, Y.; Katsura, K.

    2010-12-01

    For long-term global seismic model fitting, recent work by Chu et al. (2010) applied the spatial-temporal ETAS model (Ogata 1998) to global data partitioned into tectonic zones based on geophysical characteristics (Bird 2003), and showed substantial improvements in model fit compared with a single overall global model. While the ordinary ETAS model assumes constant parameter values across the entire region analyzed, the hierarchical space-time ETAS model (HIST-ETAS, Ogata 2004) is a newly introduced approach that allows regional variation of the parameters for more accurate seismic prediction. As the HIST-ETAS model has been fit to regional data of Japan (Ogata 2010), our work applies the model to describe global seismicity. Employing Akaike's Bayesian Information Criterion (ABIC) as an assessment method, we compare the maximum likelihood results with zone divisions to those obtained from an overall global model. Location-dependent parameters of the model and Gutenberg-Richter b-values are optimized, and seismological interpretations are discussed.

  20. A fast, model-independent method for cerebral cortical thickness estimation using MRI.

    PubMed

    Scott, M L J; Bromiley, P A; Thacker, N A; Hutchinson, C E; Jackson, A

    2009-04-01

    Several algorithms for measuring the cortical thickness in the human brain from MR image volumes have been described in the literature, the majority of which rely on fitting deformable models to the inner and outer cortical surfaces. However, the constraints applied during the model fitting process in order to enforce spherical topology and to fit the outer cortical surface in narrow sulci, where the cerebrospinal fluid (CSF) channel may be obscured by partial voluming, may introduce bias in some circumstances, and greatly increase the processor time required. In this paper we describe an alternative, voxel based technique that measures the cortical thickness using inversion recovery anatomical MR images. Grey matter, white matter and CSF are identified through segmentation, and edge detection is used to identify the boundaries between these tissues. The cortical thickness is then measured along the local 3D surface normal at every voxel on the inner cortical surface. The method was applied to 119 normal volunteers, and validated through extensive comparisons with published measurements of both cortical thickness and rate of thickness change with age. We conclude that the proposed technique is generally faster than deformable model-based alternatives, and free from the possibility of model bias, but suffers no reduction in accuracy. In particular, it will be applicable in data sets showing severe cortical atrophy, where thinning of the gyri leads to points of high curvature, and so the fitting of deformable models is problematic.

  1. Improved accuracy and precision of tracer kinetic parameters by joint fitting to variable flip angle and dynamic contrast enhanced MRI data.

    PubMed

    Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J

    2016-10-01

    To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1 , which is obtained prior to fitting a tracer kinetic model. As T1 mapping and tracer kinetic signal models are both a function of precontrast T1 it was hypothesized that its joint estimation would improve the accuracy and precision of both precontrast T1 and tracer kinetic model parameters. Accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. Methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially. Magn Reson Med 76:1270-1281, 2016. © 2015 Wiley Periodicals, Inc.

  2. Simplified estimation of age-specific reference intervals for skewed data.

    PubMed

    Wright, E M; Royston, P

    1997-12-30

    Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that they necessarily produce smooth centile curves, the entire density is estimated and an explicit formula is available for the centiles. The method proposed here is a simplified version of a recent approach proposed by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.
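
    A deliberately crude sketch of the idea, regress the mean and a spread measure on age and then form centiles, is given below. It omits the preliminary data transformation and the skewness modeling that the actual method includes, and the function name and polynomial degrees are assumptions.

```python
import numpy as np

def age_specific_interval(age, y, z=1.96):
    """Crude age-specific reference interval: regress the mean on age,
    then regress rescaled absolute residuals on age as a proxy for the
    age-varying SD, and form mean +/- z * SD. Skewness modeling and the
    preliminary transformation of the full method are omitted.
    """
    mean_coef = np.polyfit(age, y, deg=2)
    mean_fit = np.polyval(mean_coef, age)
    # For normal residuals E|r| = sigma * sqrt(2/pi), hence the rescaling.
    abs_resid = np.abs(y - mean_fit) * np.sqrt(np.pi / 2.0)
    sd_coef = np.polyfit(age, abs_resid, deg=1)
    sd_fit = np.polyval(sd_coef, age)
    return mean_fit - z * sd_fit, mean_fit + z * sd_fit
```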

  3. Analytic modeling of aerosol size distributions

    NASA Technical Reports Server (NTRS)

    Deepack, A.; Box, G. P.

    1979-01-01

    Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.

  4. Radar cross section models for limited aspect angle windows

    NASA Astrophysics Data System (ADS)

    Robinson, Mark C.

    1992-12-01

    This thesis presents a method for building radar cross section (RCS) models of aircraft based on static data taken from limited aspect angle windows. These models statistically characterize static RCS. This is done to show that a limited number of samples can be used to effectively characterize static aircraft RCS. The optimum models are determined by performing both a Kolmogorov and a chi-square goodness-of-fit test, comparing the static RCS data with a variety of probability density functions (pdfs) that are known to be effective at approximating the static RCS of aircraft. The optimum parameter estimator is also determined by the goodness-of-fit tests when the pdf parameters obtained by the maximum likelihood estimator (MLE) differ from those obtained by the method of moments (MoM).

  5. flexsurv: A Platform for Parametric Survival Modeling in R

    PubMed Central

    Jackson, Christopher H.

    2018-01-01

    flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450

  6. A three-dimensional finite element analysis of a passive and friction fit implant abutment interface and the influence of occlusal table dimension on the stress distribution pattern on the implant and surrounding bone

    PubMed Central

    Sarfaraz, Hasan; Paulose, Anoopa; Shenoy, K. Kamalakanth; Hussain, Akhter

    2015-01-01

    Aims: The aim of the study was to evaluate the stress distribution pattern in the implant and the surrounding bone for a passive and a friction fit implant abutment interface and to analyze the influence of occlusal table dimension on the stress generated. Materials and Methods: CAD models of two different types of implant abutment connections were made: the passive fit or slip-fit, represented by the Nobel Replace Tri-lobe connection, and the friction fit or active fit, represented by the Nobel Active conical connection. The stress distribution pattern was studied at different occlusal dimensions. Six models were constructed in PRO-ENGINEER 05 for the two implant abutment connections at three different occlusal dimensions each. The implant and abutment complex was placed in cortical and cancellous bone modeled using a computed tomography scan. This complex was subjected to a force of 100 N in the axial and oblique directions. The amount of stress and the pattern of stress generated were recorded on a color scale using ANSYS 13 software. Results: The overall maximum von Mises stress on the bone was significantly less for the friction fit than for the passive fit under all loading conditions, whereas stresses on the implant were significantly higher for the friction fit than for the passive fit. The narrow occlusal table models generated the least stress at the implant abutment interface. Conclusion: It can thus be concluded that the conical connection transfers more stress to the implant body and dissipates less stress to the surrounding bone. A narrow occlusal table considerably reduces occlusal overload. PMID:26929518

  7. Wing Shape Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2015-01-01

    A new two-step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, measured strains are fitted using a piecewise least squares curve fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, the computed deflections along the fibers are combined with a finite element model of the structure in order to extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular wing, and then applied to test data from a cantilevered swept-wing model.
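
    The first step can be sketched with SciPy as below. A plain cubic spline stands in for the paper's piecewise least-squares fit combined with cubic splines, and the cantilever boundary conditions and the Euler-Bernoulli strain-to-curvature conversion are stated assumptions; the second (model-expansion) step is not shown.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.interpolate import CubicSpline

def deflection_from_strain(x_sensors, strain, half_depth, n_fine=500):
    """Integrate measured bending strain twice to get slope and deflection.

    A cubic spline stands in for the piecewise least-squares plus
    cubic-spline fit of the paper. Assumes a cantilever root at
    x_sensors[0] (zero deflection and slope) and Euler-Bernoulli bending,
    where curvature = strain / half_depth (half_depth = distance from
    the neutral axis to the strain-gauge surface).
    """
    spline = CubicSpline(x_sensors, strain)
    x = np.linspace(x_sensors[0], x_sensors[-1], n_fine)
    curvature = spline(x) / half_depth
    slope = cumulative_trapezoid(curvature, x, initial=0.0)
    deflection = cumulative_trapezoid(slope, x, initial=0.0)
    return x, slope, deflection
```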

  8. Computational methods for constructing protein structure models from 3D electron microscopy maps.

    PubMed

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2013-10-01

    Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3 Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Building of an Experimental Cline With Arabidopsis thaliana to Estimate Herbicide Fitness Cost

    PubMed Central

    Roux, Fabrice; Giancola, Sandra; Durand, Stéphanie; Reboud, Xavier

    2006-01-01

    Various management strategies aim at maintaining pesticide resistance frequency under a threshold value by taking advantage of the fitness penalty (the cost) expressed by the resistance allele outside the treated area or during the pesticide selection "off years." One method to estimate a fitness cost is to analyze the resistance allele frequency along transects across treated and untreated areas. On the basis of the shape of the cline, this method gives the relative contributions of both gene flow and the fitness difference between genotypes in the treated and untreated areas. Taking advantage of the properties of such migration–selection balance, an artificial cline was built up to optimize the conditions under which the fitness cost of two herbicide-resistant mutants (acetolactate synthase and auxin-induced target genes) in the model species Arabidopsis thaliana could be more accurately measured. The analysis of the microevolutionary dynamics in these experimental populations indicated mean fitness costs of ∼15 and 92% for the csr1-1 and axr2-1 resistances, respectively. In addition, negative frequency dependence of the fitness cost was also detected for the axr2-1 resistance. The advantages and disadvantages of the cline approach are discussed in regard to other methods of cost estimation. This comparison highlights the powerful ability of an experimental cline to measure low fitness costs and to detect sensitivity to frequency-dependent variation. PMID:16582450

  10. An Improved Cryosat-2 Sea Ice Freeboard Retrieval Algorithm Through the Use of Waveform Fitting

    NASA Technical Reports Server (NTRS)

    Kurtz, Nathan T.; Galin, N.; Studinger, M.

    2014-01-01

    We develop an empirical model capable of simulating the mean echo power cross product of CryoSat-2 SAR and SARIn mode waveforms over sea ice covered regions. The model simulations are used to show the importance of variations in the radar backscatter coefficient with incidence angle and surface roughness for the retrieval of surface elevation of both sea ice floes and leads. The numerical model is used to fit CryoSat-2 waveforms to enable retrieval of surface elevation through the use of look-up tables and a bounded trust region Newton least squares fitting approach. The use of a model to fit returns from sea ice regions offers advantages over currently used threshold retracking methods, which are here shown to be sensitive to the combined effect of bandwidth-limited range resolution and surface roughness variations. Laxon et al. (2013) compared ice thickness results from CryoSat-2 and IceBridge and found good agreement; however, consistent assumptions about the snow depth and density of sea ice were not used in the comparisons. To address this issue, we directly compare ice freeboard and thickness retrievals from the waveform fitting and threshold tracker methods of CryoSat-2 to Operation IceBridge data using a consistent set of parameterizations. For three IceBridge campaign periods from March 2011-2013, mean differences (CryoSat-2 minus IceBridge) of 0.144 m and 1.351 m are respectively found between the freeboard and thickness retrievals using a 50% sea ice floe threshold retracker, while mean differences of 0.019 m and 0.182 m are found when using the waveform fitting method. This suggests the waveform fitting technique is capable of better reconciling the sea ice thickness data record from laser and radar altimetry data sets through the use of consistent physical assumptions.

  11. Pile-up correction algorithm based on successive integration for high count rate medical imaging and radiation spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-07-01

    In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to the high interaction rate of the particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, for decreasing the exposure times in medical imaging applications, it is important to retain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one by one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results demonstrate the effectiveness of the proposed method as a pile-up processing scheme for spectroscopic and medical radiation detection applications.
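
    The successive-integration idea can be made concrete with a small sketch: for a bi-exponential pulse y(t) = A*exp(p1*t) + B*exp(p2*t), integrating the governing relation twice gives y = c0 + c1*t + c2*I1 + c3*I2, where I1 and I2 are the first and second cumulative integrals of y and c2 = p1 + p2, c3 = -p1*p2. The exponents therefore follow from a single linear least-squares solve, with no iteration. This is one way to realize such a scheme; the article's exact formulation may differ, and the pulse constants below are invented.

        # Non-iterative bi-exponential fit via successive integration:
        # regress y on [1, t, I1, I2]; the exponents are the roots of
        # p**2 - c2*p - c3 = 0, and the amplitudes follow linearly.
        import numpy as np
        from scipy.integrate import cumulative_trapezoid

        def fit_biexponential(t, y):
            I1 = cumulative_trapezoid(y, t, initial=0.0)     # first integral
            I2 = cumulative_trapezoid(I1, t, initial=0.0)    # second integral
            M = np.column_stack([np.ones_like(t), t, I1, I2])
            c0, c1, c2, c3 = np.linalg.lstsq(M, y, rcond=None)[0]
            p = np.roots([1.0, -c2, -c3])                    # p1+p2=c2, p1*p2=-c3
            E = np.exp(np.outer(t, p))
            A, B = np.linalg.lstsq(E, y, rcond=None)[0]      # amplitudes, linear step
            return A, B, p[0], p[1]

        # synthetic detector pulse: fast rise, slow decay (placeholder constants)
        t = np.linspace(0.0, 5.0, 400)
        y = np.exp(-t / 2.0) - np.exp(-t / 0.1)
        y += 0.002 * np.random.default_rng(1).normal(size=t.size)
        print(fit_biexponential(t, y))   # exponents near -0.5 and -10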

  12. Meta-Analytic Methods of Pooling Correlation Matrices for Structural Equation Modeling under Different Patterns of Missing Data

    ERIC Educational Resources Information Center

    Furlow, Carolyn F.; Beretvas, S. Natasha

    2005-01-01

    Three methods of synthesizing correlations for meta-analytic structural equation modeling (SEM) under different degrees and mechanisms of missingness were compared for the estimation of correlation and SEM parameters and goodness-of-fit indices by using Monte Carlo simulation techniques. A revised generalized least squares (GLS) method for…

  13. Determination of Kinetic Parameters for the Thermal Decomposition of Parthenium hysterophorus

    NASA Astrophysics Data System (ADS)

    Dhaundiyal, Alok; Singh, Suraj B.; Hanon, Muammel M.; Rawat, Rekha

    2018-02-01

    A kinetic study of the pyrolysis process of Parthenium hysterophorus was carried out using thermogravimetric analysis (TGA) equipment. The present study investigates the thermal degradation and the determination of kinetic parameters, such as the activation energy E and the frequency factor A, using the model-free methods of Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS) and Kissinger, and the model-fitting Coats-Redfern method. The results derived from the thermal decomposition process demarcate the decomposition of Parthenium hysterophorus into three main stages: dehydration, active pyrolysis and passive pyrolysis. The DTG thermograms show that an increase in the heating rate causes the temperature peak at the maximum weight-loss rate to shift towards a higher temperature regime. The results are compared with those of the Coats-Redfern (integral) method; the values of the kinetic parameters obtained from the model-free methods are in good agreement with one another, whereas the results obtained through the Coats-Redfern model at different heating rates are not promising. The diffusion models, however, provided a good fit to the experimental data.
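
    For a concrete picture of the model-free calculation, the snippet below applies the FWO relation ln(beta) ≈ const - 1.052*E/(R*T) at a fixed conversion: the slope of ln(beta) against 1/T across heating rates yields the activation energy. The temperatures are invented placeholders, not the study's TGA data.

        # Minimal Flynn-Wall-Ozawa (FWO) isoconversional estimate: at fixed
        # conversion alpha, ln(beta) is ~linear in 1/T with slope -1.052*E/R
        # (Doyle approximation).
        import numpy as np

        R = 8.314                                   # J/(mol K)
        beta = np.array([5.0, 10.0, 20.0])          # heating rates, K/min
        T_alpha = np.array([556.0, 566.0, 577.0])   # T (K) at alpha = 0.5 (placeholder)

        slope, _ = np.polyfit(1.0 / T_alpha, np.log(beta), 1)
        Ea = -slope * R / 1.052                     # activation energy, J/mol
        print(f"Ea ~ {Ea / 1000:.1f} kJ/mol")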

  14. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    NASA Technical Reports Server (NTRS)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  15. Sequential fitting-and-separating reflectance components for analytical bidirectional reflectance distribution function estimation.

    PubMed

    Lee, Yu; Yu, Chanki; Lee, Sang Wook

    2018-01-10

    We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection with multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model; nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.

  16. A method of fitting the gravity model based on the Poisson distribution.

    PubMed

    Flowerdew, R; Aitkin, M

    1982-05-01

    "In this paper, [the authors] suggest an alternative method for fitting the gravity model. In this method, the interaction variable is treated as the outcome of a discrete probability process, whose mean is a function of the size and distance variables. This treatment seems appropriate when the dependent variable represents a count of the number of items (people, vehicles, shipments) moving from one place to another. It would seem to have special advantages where there are some pairs of places between which few items move. The argument will be illustrated with reference to data on the numbers of migrants moving in 1970-1971 between pairs of the 126 labor market areas defined for Great Britain...." excerpt

  17. A smoothed residual based goodness-of-fit statistic for nest-survival models

    Treesearch

    Rodney X. Sturdivant; Jay J. Rotella; Robin E. Russell

    2008-01-01

    Estimating nest success and identifying important factors related to nest-survival rates is an essential goal for many wildlife researchers interested in understanding avian population dynamics. Advances in statistical methods have led to a number of estimation methods and approaches to modeling this problem. Recently developed models allow researchers to include a...

  18. Weighted regularized statistical shape space projection for breast 3D model reconstruction.

    PubMed

    Ruiz, Guillermo; Ramon, Eduard; García, Jaime; Sukno, Federico M; Ballester, Miguel A González

    2018-07-01

    The use of 3D imaging has increased as a practical and useful tool for plastic and aesthetic surgery planning. Specifically, the possibility of representing the patient's breast anatomy as a 3D shape and simulating aesthetic or plastic procedures is a great tool for communication between surgeon and patient during surgery planning. To obtain the specific 3D model of a patient's breast, model-based reconstruction methods can be used. In particular, 3D morphable models (3DMM) are a robust and widely used method to perform 3D reconstruction. However, if additional prior information (i.e., known landmarks) is combined with the 3DMM statistical model, shape constraints can be imposed to improve the 3DMM fitting accuracy. In this paper, we present a framework to fit a 3DMM of the breast to two possible inputs: 2D photos and 3D point clouds (scans). Our method consists of a Weighted Regularized (WR) projection into the shape space. The contribution of each point to the 3DMM shape is weighted, allowing more relevance to be assigned to those points that we want to impose as constraints. Our method is applied at multiple stages of the 3D reconstruction process. First, it can be used to obtain a 3DMM initialization from a sparse set of 3D points. Additionally, we embed our method in the 3DMM fitting process, in which more reliable or already known 3D points, or regions of points, can be weighted in order to preserve their shape information. The proposed method has been tested in two different input settings, scans and 2D pictures, assessing both reconstruction frameworks with very positive results. Copyright © 2018 Elsevier B.V. All rights reserved.
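
    In the spirit of the WR projection (our notation, not the authors' code), a weighted, regularized projection of a target shape onto a PCA basis has the closed form alpha = (P^T W P + lam*D)^(-1) P^T W (x - mu), where W carries the per-point weights and D is a prior built from the PCA eigenvalues. A toy sketch:

        # Weighted regularized projection into a PCA shape space; constraint
        # points get larger weights so their positions dominate the fit.
        import numpy as np

        def wr_projection(x, mu, P, eigvals, point_weights, lam=1.0):
            # x, mu: (3n,) stacked xyz coordinates; P: (3n, k) shape basis
            W = np.repeat(point_weights, 3)           # per-point weight -> per-coord
            PtW = P.T * W                             # (k, 3n)
            D = np.diag(1.0 / eigvals)                # Tikhonov prior from PCA
            alpha = np.linalg.solve(PtW @ P + lam * D, PtW @ (x - mu))
            return mu + P @ alpha                     # regularized reconstruction

        # toy usage: 100 points, 5 modes, first 5 points weighted 10x as landmarks
        rng = np.random.default_rng(3)
        n, k = 100, 5
        mu = rng.normal(size=3 * n)
        P = np.linalg.qr(rng.normal(size=(3 * n, k)))[0]     # orthonormal basis
        eig = np.array([5.0, 3.0, 2.0, 1.0, 0.5])
        x = mu + P @ (np.sqrt(eig) * rng.normal(size=k)) + 0.01 * rng.normal(size=3 * n)
        w = np.ones(n); w[:5] = 10.0                         # trusted/known points
        print(wr_projection(x, mu, P, eig, w)[:6])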

  19. [Primary branch size of Pinus koraiensis plantation: a prediction based on linear mixed effect model].

    PubMed

    Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun

    2013-09-01

    By using branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of a Pinus koraiensis plantation in Mengjiagang Forest Farm in Heilongjiang Province of Northeast China, and based on linear mixed-effect model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering tree effects, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structures. Then, correlation structures including the compound symmetry structure (CS), the first-order autoregressive structure [AR(1)], and the first-order autoregressive and moving average structure [ARMA(1,1)] were added to the optimal branch size mixed-effect model. The AR(1) structure significantly improved the fitting precision of the branch diameter and length mixed-effect models, but none of the three structures improved the precision of the branch angle mixed-effect model. To describe the heteroscedasticity when building the mixed-effect models, the CF1 and CF2 functions were added to the branch mixed-effect models. The CF1 function significantly improved the fit of the branch angle mixed model, whereas the CF2 function significantly improved the fit of the branch diameter and length mixed models. Model validation confirmed that the mixed-effect models improve prediction precision, compared with traditional regression models, for branch size prediction in Pinus koraiensis plantations.

  20. Non-invasive breast biopsy method using GD-DTPA contrast enhanced MRI series and F-18-FDG PET/CT dynamic image series

    NASA Astrophysics Data System (ADS)

    Magri, Alphonso William

    This study was undertaken to develop a nonsurgical breast biopsy method using Gd-DTPA contrast-enhanced magnetic resonance (CE-MR) images and F-18-FDG PET/CT dynamic image series. A five-step process was developed to accomplish this. (1) Dynamic PET series were nonrigidly registered to the initial frame using a finite element method (FEM) based registration that requires fiducial skin markers to sample the displacement field between image frames. A commercial FEM package (ANSYS) was used for meshing and FEM calculations. Dynamic PET image series registrations were evaluated using the similarity measurements SAVD and NCC. (2) Dynamic CE-MR series were nonrigidly registered to the initial frame using two registration methods: a multi-resolution free-form deformation (FFD) registration driven by normalized mutual information, and a FEM-based registration method. Dynamic CE-MR image series registrations were evaluated using similarity measurements, localization measurements, and qualitative comparison of motion artifacts. FFD registration was found to be superior to FEM-based registration. (3) Nonlinear curve fitting was performed for each voxel of the PET/CT volume of activity versus time, based on a realistic two-compartmental Patlak model. Three parameters for this model were fitted; two of them describe the activity levels in the blood and in the cellular compartment, while the third characterizes the washout rate of F-18-FDG from the cellular compartment. (4) Nonlinear curve fitting was performed for each voxel of the MR volume of signal intensity versus time, based on a realistic two-compartment Brix model. Three parameters for this model were fitted: the rate of Gd exiting the compartment representing the extracellular space of a lesion; the rate of Gd exiting a blood compartment; and a parameter that characterizes the strength of signal intensities. Curve fitting for the PET/CT and MR series was accomplished by application of the Levenberg-Marquardt nonlinear regression algorithm. The best-fit parameters were used to create 3D parametric images. Compartmental modeling evaluation was based on the ability of parameter values to differentiate between tissue types. This evaluation was used on registered and unregistered image series and found that registration improved results. (5) PET and MR parametric images were registered through FEM- and FFD-based registration. Parametric image registration was evaluated using similarity measurements, target registration error, and qualitative comparison. Comparing FFD and FEM-based registration results showed that the FEM method is superior. This five-step process constitutes a novel multifaceted approach to a nonsurgical breast biopsy that successfully executes each step. Comparison of this method to biopsy still needs to be done with a larger set of subject data.
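
    Step (4) can be illustrated with a per-voxel Levenberg-Marquardt fit of a simple two-compartment-style enhancement curve; the expression and all constants below are placeholders standing in for the study's actual Patlak and Brix models.

        # Voxel-wise Levenberg-Marquardt fit of a toy two-compartment curve;
        # in practice this loop runs over every voxel to build 3D parametric
        # images from the best-fit parameters.
        import numpy as np
        from scipy.optimize import curve_fit

        def brix_like(t, A, kel, kep):
            # difference-of-exponentials enhancement (placeholder model)
            return A * (np.exp(-kel * t) - np.exp(-kep * t))

        t = np.linspace(0.1, 20.0, 40)                  # minutes after injection
        rng = np.random.default_rng(4)
        truth = (1.8, 0.08, 0.6)
        signal = brix_like(t, *truth) + 0.02 * rng.normal(size=t.size)

        params, cov = curve_fit(brix_like, t, signal, p0=(1.0, 0.1, 1.0), method="lm")
        print(params)   # per-voxel parameters for the parametric image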

  1. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we study the quasi-likelihood type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.

  2. Non-proportional odds multivariate logistic regression of ordinal family data.

    PubMed

    Zaloumis, Sophie G; Scurrah, Katrina J; Harrap, Stephen B; Ellis, Justine A; Gurrin, Lyle C

    2015-03-01

    Methods to examine whether genetic and/or environmental sources can account for the residual variation in ordinal family data usually assume proportional odds. However, standard software to fit the non-proportional odds model to ordinal family data is limited because the correlation structure of family data is more complex than for other types of clustered data. To perform these analyses we propose the non-proportional odds multivariate logistic regression model and take a simulation-based approach to model fitting using Markov chain Monte Carlo methods, such as partially collapsed Gibbs sampling and the Metropolis algorithm. We applied the proposed methodology to male pattern baldness data from the Victorian Family Heart Study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
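
    For readers unfamiliar with the samplers mentioned, the sketch below shows a generic random-walk Metropolis fit of a small ordinal regression with fixed cutpoints. It illustrates the algorithmic ingredient only; the paper's model additionally handles the within-family correlation structure and combines Metropolis steps with partially collapsed Gibbs sampling.

        # Generic random-walk Metropolis for an ordinal-logistic toy model
        # with fixed cutpoints and a flat prior on the regression coefficients.
        import numpy as np
        from scipy.special import expit

        def log_post(beta, X, y, cuts):
            eta = X @ beta
            upper = expit(cuts[y + 1] - eta)    # P(latent < upper cutpoint)
            lower = expit(cuts[y] - eta)        # P(latent < lower cutpoint)
            return np.sum(np.log(np.clip(upper - lower, 1e-12, None)))

        rng = np.random.default_rng(11)
        X = rng.normal(size=(300, 2))
        latent = X @ np.array([1.0, -0.5]) + rng.logistic(size=300)
        y = np.digitize(latent, [0.0, 1.5])              # 3 ordered categories
        cuts = np.array([-np.inf, 0.0, 1.5, np.inf])

        beta = np.zeros(2)
        lp = log_post(beta, X, y, cuts)
        draws = []
        for _ in range(5000):
            prop = beta + 0.1 * rng.normal(size=2)       # random-walk proposal
            lp_prop = log_post(prop, X, y, cuts)
            if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept step
                beta, lp = prop, lp_prop
            draws.append(beta)
        print(np.mean(draws[1000:], axis=0))             # ~ (1.0, -0.5)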

  3. A methodological approach to short-term tracking of youth physical fitness: the Oporto Growth, Health and Performance Study.

    PubMed

    Souza, Michele; Eisenmann, Joey; Chaves, Raquel; Santos, Daniel; Pereira, Sara; Forjaz, Cláudia; Maia, José

    2016-10-01

    In this paper, three different statistical approaches were used to investigate short-term tracking of cardiorespiratory and performance-related physical fitness among adolescents. Data were obtained from the Oporto Growth, Health and Performance Study and comprised 1203 adolescents (549 girls) divided into two age cohorts (10-12 and 12-14 years) followed for three consecutive years, with annual assessment. Cardiorespiratory fitness was assessed with 1-mile run/walk test; 50-yard dash, standing long jump, handgrip, and shuttle run test were used to rate performance-related physical fitness. Tracking was expressed in three different ways: auto-correlations, multilevel modelling with crude and adjusted model (for biological maturation, body mass index, and physical activity), and Cohen's Kappa (κ) computed in IBM SPSS 20.0, HLM 7.01 and Longitudinal Data Analysis software, respectively. Tracking of physical fitness components was (1) moderate-to-high when described by auto-correlations; (2) low-to-moderate when crude and adjusted models were used; and (3) low according to Cohen's Kappa (κ). These results demonstrate that when describing tracking, different methods should be considered since they provide distinct and more comprehensive views about physical fitness stability patterns.

  4. Strong Ground Motion Simulation and Source Modeling of the December 16, 1993 Tapu Earthquake, Taiwan, Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.-C.; Lin, C.-Y.

    2012-04-01

    The Tapu earthquake (ML 5.7) occurred at the southwestern part of Taiwan on December 16, 1993. We examine the source model of this event using the observed seismograms by CWBSN at eight stations surrounding the source area. An objective estimation method is used to obtain the parameters N and C which are needed for the empirical Green's function method by Irikura (1986). This method is called "source spectral ratio fitting method" which gives estimate of seismic moment ratio between a large and a small event and their corner frequencies by fitting the observed source spectral ratio with the ratio of source spectra which obeys the model (Miyake et al., 1999). This method has an advantage of removing site effects in evaluating the parameters. The best source model of the Tapu mainshock in 1993 is estimated by comparing the observed waveforms with the synthetic ones using empirical Green's function method. The size of the asperity is about 2.1 km length along the strike direction by 1.5 km width along the dip direction. The rupture started at the right-bottom of the asperity and extended radially to the left-upper direction.

  5. Strong Ground Motion Simulation and Source Modeling of the December 16, 1993 Tapu Earthquake, Taiwan, Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.; Lin, C.

    2012-12-01

    The Tapu earthquake (ML 5.7) occurred at the southwestern part of Taiwan on December 16, 1993. We examine the source model of this event using the observed seismograms by CWBSN at eight stations surrounding the source area. An objective estimation method is used to obtain the parameters N and C which are needed for the empirical Green's function method by Irikura (1986). This method is called "source spectral ratio fitting method" which gives estimate of seismic moment ratio between a large and a small event and their corner frequencies by fitting the observed source spectral ratio with the ratio of source spectra which obeys the model (Miyake et al., 1999). This method has an advantage of removing site effects in evaluating the parameters. The best source model of the Tapu mainshock in 1993 is estimated by comparing the observed waveforms with the synthetic ones using empirical Green's function method. The size of the asperity is about 2.1 km length along the strike direction by 1.5 km width along the dip direction. The rupture started at the right-bottom of the asperity and extended radially to the left-upper direction.
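
    A rough sketch of the source spectral ratio fitting step: under the omega-square source model, the large/small event spectral ratio is R(f) = (M0L/M0S)(1 + (f/fcS)^2)/(1 + (f/fcL)^2), and fitting it yields the moment ratio and both corner frequencies, from which Irikura's parameters N and C follow. The synthetic spectra below are placeholders, not the CWBSN records.

        # Fit the omega-square spectral ratio to recover the moment ratio
        # and the corner frequencies of the large and small events.
        import numpy as np
        from scipy.optimize import curve_fit

        def spectral_ratio(f, moment_ratio, fc_large, fc_small):
            return moment_ratio * (1 + (f / fc_small) ** 2) / (1 + (f / fc_large) ** 2)

        f = np.logspace(-1, 1.3, 60)                     # 0.1-20 Hz
        rng = np.random.default_rng(5)
        obs = spectral_ratio(f, 300.0, 0.4, 4.0) * np.exp(0.05 * rng.normal(size=f.size))

        p, _ = curve_fit(spectral_ratio, f, obs, p0=(100.0, 1.0, 5.0))
        print(p)   # moment ratio ~300, corner frequencies ~0.4 Hz and ~4 Hz
        # N and C are then derived from the moment ratio and corner
        # frequencies following Irikura's (1986) formulation.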

  6. Quantification of brain tissue through incorporation of partial volume effects

    NASA Astrophysics Data System (ADS)

    Gage, Howard D.; Santago, Peter, II; Snyder, Wesley E.

    1992-06-01

    This research addresses the problem of automatically quantifying the various types of brain tissue, CSF, white matter, and gray matter, using T1-weighted magnetic resonance images. The method employs a statistical model of the noise and partial volume effect and fits the derived probability density function to that of the data. Following this fit, the optimal decision points can be found for the materials, and thus they can be quantified. Emphasis is placed on repeatable results for which confidence in the solution can be measured. Results are presented assuming a single Gaussian noise source and a uniform distribution of partial volume pixels, for both simulated and actual data. Thus far results have been mixed, with no clear advantage shown in taking partial volume effects into account. Because the fitting problem is ill-conditioned, it is not yet clear whether these results are due to problems with the model or with the method of solution.
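
    Conceptually, the approach amounts to fitting a derived intensity pdf, two Gaussian pure-tissue terms plus a uniform partial-volume term, to the image histogram and placing the decision threshold where the fitted class densities cross. A simulated sketch (with the partial-volume support fixed for simplicity, a shortcut the actual model does not take):

        # Fit a two-Gaussian-plus-uniform pdf to an intensity histogram and
        # locate the optimal decision point between the two tissue classes.
        import numpy as np
        from scipy.optimize import curve_fit, brentq

        LO, HI = 55.0, 105.0   # fixed partial-volume support (simplification)

        def pdf_model(x, w1, mu1, s1, w2, mu2, s2, wpv):
            g1 = w1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / (s1 * np.sqrt(2 * np.pi))
            g2 = w2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / (s2 * np.sqrt(2 * np.pi))
            pv = wpv * ((x > LO) & (x < HI)) / (HI - LO)   # uniform partial volume
            return g1 + g2 + pv

        rng = np.random.default_rng(10)
        vals = np.concatenate([rng.normal(60, 5, 4000),    # tissue class 1
                               rng.normal(100, 6, 5000),   # tissue class 2
                               rng.uniform(60, 100, 1000)])  # partial-volume voxels
        hist, edges = np.histogram(vals, bins=80, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])

        p0 = (0.4, 55.0, 4.0, 0.5, 105.0, 5.0, 0.1)
        (w1, mu1, s1, w2, mu2, s2, wpv), _ = curve_fit(pdf_model, centers, hist, p0=p0)
        cross = brentq(lambda x: w1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
                                 - w2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2,
                       mu1, mu2)
        print(cross)   # decision intensity separating the two tissue classes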

  7. Hierarchical Bass model: a product diffusion model considering a diversity of sensitivity to fashion

    NASA Astrophysics Data System (ADS)

    Tashiro, Tohru

    2016-11-01

    We propose a new product diffusion model that incorporates the number of adopters or advertisements a non-adopter encounters before he/she adopts the product, where (non-)adopters are people (not) possessing it. Through this effect, which is not considered in the Bass model, we can depict a diversity of sensitivity to fashion. As an application, we use the model to fit the iPod and iPhone unit sales data, and obtain better agreement with the iPod data than the Bass model provides. We also present a new method to estimate the number of advertisements in a society from the fitted parameters of the Bass model and this new model.
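
    For reference, the baseline Bass fit that the proposed model is compared against can be sketched as below, using the cumulative-adoption form N(t) = m(1 - e^(-(p+q)t))/(1 + (q/p)e^(-(p+q)t)). The quarterly figures are invented placeholders, not actual iPod or iPhone sales.

        # Fit the Bass diffusion model's cumulative-adoption curve to a
        # sales series; m is market size, p innovation, q imitation.
        import numpy as np
        from scipy.optimize import curve_fit

        def bass_cumulative(t, m, p, q):
            e = np.exp(-(p + q) * t)
            return m * (1 - e) / (1 + (q / p) * e)

        t = np.arange(1, 21)                             # quarters since launch
        rng = np.random.default_rng(6)
        cum_sales = bass_cumulative(t, 150.0, 0.01, 0.35)
        cum_sales *= 1 + 0.02 * rng.normal(size=t.size)  # multiplicative noise

        (m, p, q), _ = curve_fit(bass_cumulative, t, cum_sales,
                                 p0=(100.0, 0.05, 0.3), bounds=(0, [1e4, 1.0, 2.0]))
        print(m, p, q)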

  8. Using multiple group modeling to test moderators in meta-analysis.

    PubMed

    Schoemann, Alexander M

    2016-12-01

    Meta-analysis is a popular and flexible analysis that can be fit in many modeling frameworks. Two methods of fitting meta-analyses that are growing in popularity are structural equation modeling (SEM) and multilevel modeling (MLM). By using SEM or MLM to fit a meta-analysis, researchers have access to powerful techniques associated with SEM and MLM. This paper details how to use one such technique, multiple group analysis, to test categorical moderators in meta-analysis. In a multiple group meta-analysis, a model is fit to each level of the moderator simultaneously. By constraining parameters across groups, any model parameter can be tested for equality. Using multiple groups to test for moderators is especially relevant in random-effects meta-analysis, where both the mean and the between-studies variance of the effect size may be compared across groups. A simulation study and the analysis of a real data set are used to illustrate multiple group modeling with both SEM and MLM. Issues related to multiple group meta-analysis and future directions for research are discussed. Copyright © 2016 John Wiley & Sons, Ltd.

  9. [Equilibrium sorption isotherm for Cu2+ onto Hydrilla verticillata Royle and Myriophyllum spicatum].

    PubMed

    Yan, Chang-zhou; Zeng, A-yan; Jin, Xiang-can; Wang, Sheng-rui; Xu, Qiu-jin; Zhao, Jing-zhu

    2006-06-01

    Equilibrium sorption isotherms for Cu2+ onto Hydrilla verticillata Royle and Myriophyllum spicatum were studied. Both linear and non-linear fitting methods were applied to describe the sorption isotherms, and their applicability was analyzed and compared. The results were: (1) The applicability of a fitted equation cannot be judged only by R² and χ² when an equilibrium sorption model is used to quantify and contrast the performance of different biosorbents. Both linear and non-linear fitting should be applied to the different fitting equations used to describe the equilibrium sorption isotherms, in order to obtain true and credible fitting results, so that the fitting equation that best accords with the experimental data can be selected. (2) In this experiment, the Langmuir model is more suitable for describing the sorption isotherm of Cu2+ biosorption by H. verticillata and M. spicatum, and there is a greater difference between the experimental data and the calculated values of the Freundlich model, especially for its linear form. (3) The content of crude cellulose in dry matter is one of the main factors affecting the biosorption capacity of a submerged aquatic plant, and the -OH and -CONH2 groups of polysaccharides on the cell wall may be the active centers of biosorption. (4) According to the coefficient qm of the linear form of the Langmuir model, the maximum sorption capacity for Cu2+ was found to be 21.55 mg/g and 10.80 mg/g for H. verticillata and M. spicatum, respectively. The maximum specific surface area of H. verticillata for binding Cu2+ was 3.23 m²/g, and it was 1.62 m²/g for M. spicatum.
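
    The point about linear versus non-linear fitting can be seen in a few lines: for the Langmuir isotherm q = qm*b*C/(1 + b*C), the linearized regression of C/q on C (slope 1/qm, intercept 1/(qm*b)) and the direct non-linear fit weight the errors differently and can return different parameters. Synthetic data, not the Cu2+ measurements:

        # Contrast non-linear and linearized fits of the Langmuir isotherm.
        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(C, qm, b):
            return qm * b * C / (1.0 + b * C)

        C = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])   # mg/L at equilibrium
        rng = np.random.default_rng(7)
        q = langmuir(C, 21.5, 0.08) * (1 + 0.03 * rng.normal(size=C.size))

        # non-linear fit
        (qm_nl, b_nl), _ = curve_fit(langmuir, C, q, p0=(10.0, 0.01))
        # linearized fit: C/q = C/qm + 1/(qm*b)
        slope, intercept = np.polyfit(C, C / q, 1)
        qm_lin, b_lin = 1.0 / slope, slope / intercept
        print(qm_nl, b_nl, qm_lin, b_lin)   # the two routes can differ noticeably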

  10. Using forecast modelling to evaluate treatment effects in single-group interrupted time series analysis.

    PubMed

    Linden, Ariel

    2018-05-11

    Interrupted time series analysis (ITSA) is an evaluation methodology in which a single treatment unit's outcome is studied serially over time and the intervention is expected to "interrupt" the level and/or trend of that outcome. ITSA is commonly evaluated using methods which may produce biased results if model assumptions are violated. In this paper, treatment effects are alternatively assessed by using forecasting methods to closely fit the preintervention observations and then forecast the post-intervention trend. A treatment effect may be inferred if the actual post-intervention observations diverge from the forecasts by some specified amount. The forecasting approach is demonstrated using the effect of California's Proposition 99 on reducing cigarette sales. Three forecast models are fit to the preintervention series: linear regression (REG), Holt-Winters (HW) non-seasonal smoothing, and autoregressive moving average (ARIMA). Forecasts are then generated into the post-intervention period, and the actual observations are compared with the forecasts to assess intervention effects. The preintervention data were fit best by HW, followed closely by ARIMA; REG fit the data poorly. The actual post-intervention observations were above the forecasts in HW and ARIMA, suggesting no intervention effect, but below the forecasts in REG (suggesting a treatment effect), thereby raising doubts about any definitive conclusion of a treatment effect. In a single-group ITSA, treatment effects are likely to be biased if the model is misspecified. Therefore, evaluators should consider using forecast models to accurately fit the preintervention data and generate plausible counterfactual forecasts, thereby improving causal inference of treatment effects in single-group ITSA studies. © 2018 John Wiley & Sons, Ltd.
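
    A compact sketch of the forecasting workflow (fit HW and ARIMA models to the preintervention series, forecast past the interruption, and compare with what actually happened) is given below on a simulated series; it is not the California cigarette-sales data.

        # Counterfactual forecasts for a single-group ITSA on simulated data.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.holtwinters import ExponentialSmoothing
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(8)
        pre = pd.Series(100 - 1.5 * np.arange(20) + rng.normal(0, 2, 20))  # declining trend
        post_actual = 70 - 3.0 * np.arange(8) + rng.normal(0, 2, 8)        # steeper drop

        hw = ExponentialSmoothing(pre, trend="add").fit()
        arima = ARIMA(pre, order=(1, 1, 1)).fit()

        hw_fc = hw.forecast(8)
        arima_fc = arima.forecast(8)
        # observations falling well below both forecasts suggest a treatment effect
        print(np.round(post_actual - hw_fc.to_numpy(), 1))
        print(np.round(post_actual - arima_fc.to_numpy(), 1))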

  11. Fitting Residual Error Structures for Growth Models in SAS PROC MCMC

    ERIC Educational Resources Information Center

    McNeish, Daniel

    2017-01-01

    In behavioral sciences broadly, estimating growth models with Bayesian methods is becoming increasingly common, especially to combat small samples common with longitudinal data. Although Mplus is becoming an increasingly common program for applied research employing Bayesian methods, the limited selection of prior distributions for the elements of…

  12. Estimating Dynamical Systems: Derivative Estimation Hints from Sir Ronald A. Fisher

    ERIC Educational Resources Information Center

    Deboeck, Pascal R.

    2010-01-01

    The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common…

  13. Biocultural approach of the association between maturity and physical activity in youth.

    PubMed

    Werneck, André O; Silva, Danilo R; Collings, Paul J; Fernandes, Rômulo A; Ronque, Enio R V; Coelho-E-Silva, Manuel J; Sardinha, Luís B; Cyrino, Edilson S

    2017-11-13

    To test the biocultural model through direct and indirect associations between biological maturation, adiposity, cardiorespiratory fitness, feelings of sadness, social relationships, and physical activity in adolescents. This was a cross-sectional study conducted with 1,152 Brazilian adolescents aged between 10 and 17 years. Somatic maturation was estimated through Mirwald's method (peak height velocity). Physical activity was assessed through the Baecke questionnaire (occupational, leisure, and sport contexts). Body mass index, body fat (sum of skinfolds), cardiorespiratory fitness (20-m shuttle run test), self-perceptions of social relationships, and frequency of feelings of sadness were obtained for statistical modeling. Somatic maturation is directly related to sport practice and leisure-time physical activity only among girls (β=0.12 and β=0.09, respectively, both p<0.05). Moreover, biological (adiposity and cardiorespiratory fitness), psychological (sadness), and social (satisfaction with social relationships) variables mediated the association between maturity and physical activity in boys, and for occupational physical activity in girls. In general, the models presented good fit coefficients. The biocultural model presents good fit, and emotional/biological factors mediate part of the relationship between somatic maturation and physical activity. Copyright © 2017 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.

  14. Systematic wavelength selection for improved multivariate spectral analysis

    DOEpatents

    Thomas, Edward V.; Robinson, Mark R.; Haaland, David M.

    1995-01-01

    Methods and apparatus for determining in a biological material one or more unknown values of at least one known characteristic (e.g. the concentration of an analyte such as glucose in blood or the concentration of one or more blood gas parameters) with a model based on a set of samples with known values of the known characteristics and a multivariate algorithm using several wavelength subsets. The method includes selecting multiple wavelength subsets, from the electromagnetic spectral region appropriate for determining the known characteristic, for use by an algorithm wherein the selection of wavelength subsets improves the model's fitness of the determination for the unknown values of the known characteristic. The selection process utilizes multivariate search methods that select both predictive and synergistic wavelengths within the range of wavelengths utilized. The fitness of the wavelength subsets is determined by the fitness function F = f(cost, performance). The method includes the steps of: (1) using one or more applications of a genetic algorithm to produce one or more count spectra, with multiple count spectra then combined to produce a combined count spectrum; (2) smoothing the count spectrum; (3) selecting a threshold count from a count spectrum to select those wavelength subsets which optimize the fitness function; and (4) eliminating a portion of the selected wavelength subsets. The determination of the unknown values can be made: (1) noninvasively and in vivo; (2) invasively and in vivo; or (3) in vitro.
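
    A toy genetic-algorithm version of step (1) is sketched below: individuals are binary wavelength masks, the fitness trades prediction error against subset size in the spirit of F = f(cost, performance), and the population's final selection frequencies play the role of a count spectrum. The ordinary least-squares "performance" term is a stand-in for the patent's multivariate calibration model.

        # Toy GA for wavelength-subset selection with a cost/performance fitness.
        import numpy as np

        rng = np.random.default_rng(9)
        n_samp, n_wav = 60, 40
        spectra = rng.normal(size=(n_samp, n_wav))
        conc = spectra[:, [5, 12, 30]] @ np.array([1.0, -0.5, 0.8])
        conc += 0.05 * rng.normal(size=n_samp)           # analyte concentrations

        def fitness(mask):
            if mask.sum() == 0:
                return -np.inf
            X = spectra[:, mask.astype(bool)]
            coef, *_ = np.linalg.lstsq(X, conc, rcond=None)
            rmse = np.sqrt(np.mean((X @ coef - conc) ** 2))   # performance term
            return -rmse - 0.01 * mask.sum()                  # minus a cost term

        pop = rng.integers(0, 2, size=(30, n_wav))
        for _ in range(100):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[-10:]]           # keep the 10 fittest
            pa, pb = (rng.integers(0, 10, 30) for _ in range(2))
            cross = rng.random((30, n_wav)) < 0.5             # uniform crossover
            pop = np.where(cross, parents[pa], parents[pb])
            pop ^= (rng.random(pop.shape) < 0.02)             # bit-flip mutation
        counts = pop.sum(axis=0)                              # a "count spectrum"
        print(np.nonzero(counts >= 25)[0])                    # frequently kept wavelengths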

  15. A Novel Statistical Analysis and Interpretation of Flow Cytometry Data

    DTIC Science & Technology

    2013-07-05

    ...ing parameters of the mathematical model are determined by using least squares to fit the data in Figures 1 and 2 (see Section 4). The second method is... variance (for all k and j). Then the AIC, which is the expected value of the relative Kullback-Leibler distance for a given model [16], is Kn = n log(J... It is emphasized that the fit of the model is quite good for both donors and cell types. As such, we proceed to analyse the dynamic responsiveness of the...

  16. Structural and Practical Identifiability Issues of Immuno-Epidemiological Vector-Host Models with Application to Rift Valley Fever.

    PubMed

    Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia

    2016-09-01

    In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. This is a crucial step in developing multi-scale models which explain multi-scale data.

  17. Thermal Conductivity of Metallic Uranium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hin, Celine

    This project developed modeling and simulation approaches to predict the thermal conductivity of metallic fuels and their alloys. We focused on two complementary methods. The first, developed by the team at the University of Wisconsin Madison, is a practical and general modeling approach for the thermal conductivity of metals and metal alloys that integrates ab-initio and semi-empirical physics-based models to maximize the strengths of both techniques. The second, developed by the team at Virginia Tech, determines the thermal conductivity using only ab-initio methods, without any fitting parameters; it proved very helpful for understanding the physics behind the thermal conductivity of metallic uranium and other materials with similar characteristics. Both models incorporate phonon and electron contributions, show good agreement with experimental data over a wide temperature range, and provide insight into the different physical factors that govern the thermal conductivity at different temperatures. They are general enough to incorporate more complex effects, such as additional alloying species, defects, transmutation products and noble-gas bubbles, in order to predict the behavior of complex metallic alloys like U-alloy fuel systems under burnup. Thermal conductivity is an important thermophysical property affecting the performance and efficiency of metallic fuels [1]. Some experimental measurements of thermal conductivity, and correlations with composition and temperature from empirical fitting, are available for U, Zr and their alloys with Pu and other minor actinides. However, as reviewed by Kim, Cho and Sohn [2], because of the difficulty of experiments on actinide materials, the thermal conductivities of metallic fuels have been measured only at limited alloy compositions and temperatures, with some reported values even being negative and unphysical. Furthermore, the correlations developed so far are empirical in nature and may not be accurate when used for prediction at conditions far from those used in the original fitting. Moreover, as fuels burn up in the reactor and fission products build up, the thermal conductivity also changes significantly [3]; a fundamental understanding of the effect of fission products is likewise currently lacking. In this project, we probed the thermal conductivity of metallic fuels with ab-initio calculations, a theoretical tool with the potential to yield better accuracy and predictive power than empirical fitting. This work both complements experimental data, by determining thermal conductivity over wider composition and temperature ranges than are available experimentally, and develops mechanistic understanding to guide better design of metallic fuels in the future. So far, we have focused on the α-U perfect crystal, the ground-state phase of U metal.

    In Section I, the combined model developed at UWM is explained. In Section II, the ab-initio method developed at VT is described, along with the uranium pseudo-potential and its validation. Section III is devoted to the work done by Jianguo Yu at INL. Finally, we present the performance of the project in terms of milestones, publications, and presentations.

  18. The Hull Method for Selecting the Number of Common Factors

    ERIC Educational Resources Information Center

    Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L.

    2011-01-01

    A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…

  19. Improvement to microphysical schemes in WRF Model based on observed data, part I: size distribution function

    NASA Astrophysics Data System (ADS)

    Shan, Y.; Eric, W.; Gao, L.; Zhao, T.; Yin, Y.

    2015-12-01

    In this study, we evaluated the performance of size distribution functions (SDFs) with two and three moments in fitting the observed size distributions of rain droplets at three different heights. The goal is to improve the microphysics schemes in mesoscale models such as the Weather Research and Forecasting (WRF) model. Rain droplets were observed during eight periods of different rain types at three stations on the Yellow Mountain in East China. The SDFs considered were the M-P distribution, i.e., a Gamma SDF with a fixed shape parameter (FSP); Gamma SDFs whose shape parameter was obtained with three diagnostic methods, based on Milbrandt (2010; denoted DSPM10), Milbrandt (2005; denoted DSPM05) and Seifert (2008; denoted DSPS08); a Gamma SDF obtained by solving for the shape parameter directly (SSP); and the Lognormal SDF. Based on preliminary experiments, three ensemble methods for deciding the Gamma SDF were also developed and assessed. The magnitude of the average relative error caused by applying an FSP was 10^-2 for fitting the 0-order moment of the observed rain droplet distribution, and 10^-1 and 10^0 for the 1-4 order and 5-6 order moments, respectively. To different extents, the DSPM10, DSPM05, DSPS08, SSP and ensemble methods could improve the fitting accuracy for the 0-6 order moments, especially the ensemble method coupling the SSP and DSPS08 methods, which gave average relative errors of 6.46% for the 1-4 order moments and 11.90% for the 5-6 order moments. The relative error of fitting three moments using the Lognormal SDF was much larger than that of the Gamma SDF. The threshold value of the shape parameter ranged from 0 to 8, because values beyond this range could cause overflow in the calculation. When the average diameter of rain droplets was less than 2 mm, the possibility of an unavailable shape parameter value (USPV) increased with decreasing droplet size. Fitting accuracy was strongly sensitive to the choice of moment group: when the ensemble method coupling SSP and DSPS08 was used, a better fit to the 1-3-5 moment group of the SDF was possible than to the 0-3-6 moment group.
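
    As background to the shape-parameter diagnosis, note that for a Gamma SDF N(D) = N0*D^mu*exp(-lam*D) the k-th moment is Mk = N0*Γ(mu+k+1)/lam^(mu+k+1), so a moment ratio such as M3²/(M0*M6) depends on mu alone and can be inverted numerically. The generic moment inversion below illustrates the mechanics only; it is not any one of the cited DSPM/DSPS schemes.

        # Method-of-moments fit of a Gamma size distribution from the
        # (0, 3, 6) moment group; the shape parameter mu is found by
        # inverting a mu-only moment ratio.
        import numpy as np
        from scipy.special import gammaln
        from scipy.optimize import brentq

        def fit_gamma_sdf(M0, M3, M6):
            target = np.log(M3**2 / (M0 * M6))
            g = lambda mu: 2*gammaln(mu+4) - gammaln(mu+1) - gammaln(mu+7) - target
            mu = brentq(g, -0.99, 8.0)                    # shape parameter
            lam = (np.exp(gammaln(mu+4) - gammaln(mu+1)) * M0 / M3) ** (1.0/3.0)
            N0 = M0 * lam**(mu+1) / np.exp(gammaln(mu+1))
            return N0, mu, lam

        # moments of a synthetic distribution with N0=8000, mu=2, lam=2.5 mm^-1
        true = (8000.0, 2.0, 2.5)
        M = [true[0] * np.exp(gammaln(true[1]+k+1)) / true[2]**(true[1]+k+1)
             for k in (0, 3, 6)]
        print(fit_gamma_sdf(*M))   # recovers (N0, mu, lam)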

  20. Comparing two remote video survey methods for spatial predictions of the distribution and environmental niche suitability of demersal fishes.

    PubMed

    Galaiduk, Ronen; Radford, Ben T; Wilson, Shaun K; Harvey, Euan S

    2017-12-15

    Information on habitat associations from survey data, combined with spatial modelling, allow the development of more refined species distribution modelling which may identify areas of high conservation/fisheries value and consequentially improve conservation efforts. Generalised additive models were used to model the probability of occurrence of six focal species after surveys that utilised two remote underwater video sampling methods (i.e. baited and towed video). Models developed for the towed video method had consistently better predictive performance for all but one study species although only three models had a good to fair fit, and the rest were poor fits, highlighting the challenges associated with modelling habitat associations of marine species in highly homogenous, low relief environments. Models based on baited video dataset regularly included large-scale measures of structural complexity, suggesting fish attraction to a single focus point by bait. Conversely, models based on the towed video data often incorporated small-scale measures of habitat complexity and were more likely to reflect true species-habitat relationships. The cost associated with use of the towed video systems for surveying low-relief seascapes was also relatively low providing additional support for considering this method for marine spatial ecological modelling.

  1. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    PubMed

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
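
    Schematically (our notation, not the authors' code), the FIM route is: build the output Jacobian with respect to the parameters, form FIM = JᵀJ/σ², and rank parameters by their Cramer-Rao standard-error bounds, estimating the well-determined ones and fixing the rest. The placeholder response below stands in for the head-tracking model.

        # FIM-based sensitivity ranking of model parameters.
        import numpy as np

        def model_output(theta, t):
            # placeholder dynamic response; a real model would come from simulation
            a, b, c, d, e = theta
            return a * np.exp(-b * t) + c * np.sin(d * t) + e * t

        def jacobian(theta, t, h=1e-6):
            J = np.empty((t.size, theta.size))
            for i in range(theta.size):
                dp = np.zeros_like(theta)
                dp[i] = h * max(1.0, abs(theta[i]))
                J[:, i] = (model_output(theta + dp, t)
                           - model_output(theta - dp, t)) / (2 * dp[i])
            return J

        t = np.linspace(0, 10, 200)
        theta = np.array([2.0, 0.5, 1.0, 3.0, 0.1])
        J = jacobian(theta, t)
        fim = J.T @ J / 0.01**2                       # assume measurement sigma = 0.01
        crlb = np.sqrt(np.diag(np.linalg.inv(fim)))   # Cramer-Rao standard-error bounds
        rank = np.argsort(crlb / np.abs(theta))       # most identifiable first
        print(rank)   # estimate the leading parameters, fix the trailing ones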

  2. Simplified process model discovery based on role-oriented genetic mining.

    PubMed

    Zhao, Weidong; Liu, Xi; Dai, Weihui

    2014-01-01

    Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, the existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine a simplified process model. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and is applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective for streamlining the process, compared with related studies.

  3. TransFit: Finite element analysis data fitting software

    NASA Technical Reports Server (NTRS)

    Freeman, Mark

    1993-01-01

    The Advanced X-Ray Astrophysics Facility (AXAF) mission support team has made extensive use of geometric ray tracing to analyze the performance of AXAF developmental and flight optics. One important aspect of this performance modeling is the incorporation of finite element analysis (FEA) data into the surface deformations of the optical elements. TransFit is software designed for fitting FEA data of Wolter I optical surface distortions with a continuous surface description which can then be used by SAO's analytic ray tracing software, currently OSAC (Optical Surface Analysis Code). The improved capabilities of TransFit over previous methods include bicubic spline fitting of FEA data to accommodate higher spatial frequency distortions, fitted data visualization for assessing the quality of fit, the ability to accommodate input data from three FEA codes plus other standard formats, and options for alignment of the model coordinate system with the ray trace coordinate system. TransFit uses the AnswerGarden graphical user interface (GUI) to edit input parameters and then accesses routines written in PV-WAVE, C, and FORTRAN to allow the user to interactively create, evaluate, and modify the fit. The topics covered include an introduction to TransFit: requirements, design philosophy, and implementation; design specifics: modules, parameters, fitting algorithms, and data displays; a procedural example; verification of performance; future work; and appendices on online help and ray trace results of the verification section.
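
    The core bicubic-spline step can be sketched as follows: FEA nodal deformations sampled on an (axial, azimuthal) grid are fit with a bicubic spline that a ray tracer can then evaluate at arbitrary surface points. This stands in for TransFit only conceptually; file parsing and coordinate-system alignment are omitted, and the distortion field is invented.

        # Bicubic spline representation of a gridded FEA surface-deformation map.
        import numpy as np
        from scipy.interpolate import RectBivariateSpline

        z = np.linspace(0.0, 100.0, 25)                    # axial stations, mm
        phi = np.linspace(0.0, 2 * np.pi, 73)              # azimuth, rad
        Z, PHI = np.meshgrid(z, phi, indexing="ij")
        deform = 1e-4 * np.sin(2 * PHI) * np.exp(-Z / 50.0)  # toy FEA distortion

        spline = RectBivariateSpline(z, phi, deform, kx=3, ky=3)   # bicubic fit
        print(spline(37.5, 1.0))                           # deformation anywhere
        residual = spline(z, phi) - deform                 # inspect to assess fit quality
        print(np.abs(residual).max())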

  4. Structural Insights into Triglyceride Storage Mediated by Fat Storage-Inducing Transmembrane (FIT) Protein 2

    PubMed Central

    Gross, David A.; Snapp, Erik L.; Silver, David L.

    2010-01-01

    Fat storage-Inducing Transmembrane proteins 1 & 2 (FIT1/FITM1 and FIT2/FITM2) belong to a unique family of evolutionarily conserved proteins localized to the endoplasmic reticulum that are involved in triglyceride lipid droplet formation. FIT proteins have been shown to mediate the partitioning of cellular triglyceride into lipid droplets, but not triglyceride biosynthesis. FIT proteins do not share primary sequence homology with known proteins, and no structural information is available to inform on the mechanism by which FIT proteins function. Here, we present experimentally solved topological models for FIT1 and FIT2 using N-glycosylation site mapping and indirect immunofluorescence techniques. These methods indicate that both proteins have six transmembrane domains, with both N- and C-termini localized to the cytosol. Utilizing this model for structure-function analysis, we identified and characterized a gain-of-function mutant of FIT2 (FLL(157-9)AAA) in transmembrane domain 4 that markedly augmented the total number and mean size of lipid droplets. Using limited trypsin proteolysis, we determined that the FLL(157-9)AAA mutant has enhanced trypsin cleavage at K86 relative to wild-type FIT2, indicating a conformational change. Taken together, these studies indicate that FIT2 is a six-transmembrane-domain protein whose conformation likely regulates its activity in mediating lipid droplet formation. PMID:20520733

  5. Multi-omics facilitated variable selection in Cox-regression model for cancer prognosis prediction.

    PubMed

    Liu, Cong; Wang, Xujun; Genchev, Georgi Z; Lu, Hui

    2017-07-15

    New developments in high-throughput genomic technologies have enabled the measurement of diverse types of omics biomarkers in a cost-efficient and clinically feasible manner. Developing computational methods and tools for analysis and translation of such genomic data into clinically relevant information is an ongoing and active area of investigation. For example, several studies have utilized an unsupervised learning framework to cluster patients by integrating omics data. Despite such recent advances, predicting cancer prognosis using integrated omics biomarkers remains a challenge. There is also a shortage of computational tools for predicting cancer prognosis using supervised learning methods. The current standard approach is to fit a Cox regression model by concatenating the different types of omics data in a linear manner, while a penalty can be added for feature selection. A more powerful approach, however, would be to incorporate data by considering relationships among omics datatypes. Here we developed two methods, SKI-Cox and wLASSO-Cox, to incorporate the association among different types of omics data. Both methods fit the Cox proportional hazards model and predict a risk score based on mRNA expression profiles. SKI-Cox borrows the information generated by these additional types of omics data to guide variable selection, while wLASSO-Cox incorporates this information as a penalty factor during model fitting. We show that SKI-Cox and wLASSO-Cox select more true variables than a LASSO-Cox model in simulation studies. We assess the performance of SKI-Cox and wLASSO-Cox using TCGA glioblastoma multiforme and lung adenocarcinoma data. In each case, mRNA expression, methylation, and copy number variation data are integrated to predict the overall survival time of cancer patients. Our methods achieve better performance in predicting patients' survival in glioblastoma and lung adenocarcinoma. Copyright © 2017. Published by Elsevier Inc.

  6. Improvements in prevalence trend fitting and incidence estimation in EPP 2013

    PubMed Central

    Brown, Tim; Bao, Le; Eaton, Jeffrey W.; Hogan, Daniel R.; Mahy, Mary; Marsh, Kimberly; Mathers, Bradley M.; Puckett, Robert

    2014-01-01

    Objective: Describe modifications to the latest version of the Joint United Nations Programme on AIDS (UNAIDS) Estimation and Projection Package component of Spectrum (EPP 2013) to improve prevalence fitting and incidence trend estimation in national epidemics and global estimates of HIV burden. Methods: Key changes made under the guidance of the UNAIDS Reference Group on Estimates, Modelling and Projections include: availability of a range of incidence calculation models and guidance for selecting a model; a shift to reporting the Bayesian median instead of the maximum likelihood estimate; procedures for comparison and validation against reported HIV and AIDS data; incorporation of national surveys as an integral part of the fitting and calibration procedure, allowing survey trends to inform the fit; improved antenatal clinic calibration procedures in countries without surveys; adjustment of national antiretroviral therapy reports used in the fitting to include only those aged 15–49 years; better estimates of mortality among people who inject drugs; and enhancements to speed fitting. Results: The revised models in EPP 2013 allow closer fits to observed prevalence trend data and reflect improving understanding of HIV epidemics and associated data. Conclusion: Spectrum and EPP continue to adapt to make better use of the existing data sources, incorporate new sources of information in their fitting and validation procedures, and correct for quantifiable biases in inputs as they are identified and understood. These adaptations provide countries with better calibrated estimates of incidence and prevalence, which increase epidemic understanding and provide a solid base for program and policy planning. PMID:25406747

  7. Two new methods to fit models for network meta-analysis with random inconsistency effects.

    PubMed

    Law, Martin; Jackson, Dan; Turner, Rebecca; Rhodes, Kirsty; Viechtbauer, Wolfgang

    2016-07-28

    Meta-analysis is a valuable tool for combining evidence from multiple studies. Network meta-analysis is becoming more widely used as a means to compare multiple treatments in the same analysis. However, a network meta-analysis may exhibit inconsistency, whereby the treatment effect estimates do not agree across all trial designs, even after taking between-study heterogeneity into account. We propose two new estimation methods for network meta-analysis models with random inconsistency effects. The model we consider is an extension of the conventional random-effects model for meta-analysis to the network meta-analysis setting and allows for potential inconsistency using random inconsistency effects. Our first new estimation method uses a Bayesian framework with empirically-based prior distributions for both the heterogeneity and the inconsistency variances. We fit the model using importance sampling and thereby avoid some of the difficulties that might be associated with using Markov Chain Monte Carlo (MCMC). However, we confirm the accuracy of our importance sampling method by comparing the results to those obtained using MCMC as the gold standard. The second new estimation method we describe uses a likelihood-based approach, implemented in the metafor package, which can be used to obtain (restricted) maximum-likelihood estimates of the model parameters and profile likelihood confidence intervals of the variance components. We illustrate the application of the methods using two contrasting examples. The first uses all-cause mortality as an outcome, and shows little evidence of between-study heterogeneity or inconsistency. The second uses "ear discharge" as an outcome, and exhibits substantial between-study heterogeneity and inconsistency. Both new estimation methods give results similar to those obtained using MCMC. The extent of heterogeneity and inconsistency should be assessed and reported in any network meta-analysis. Our two new methods can be used to fit models for network meta-analysis with random inconsistency effects. They are easily implemented using the accompanying R code in the Additional file 1. Using these estimation methods, the extent of inconsistency can be assessed and reported.

  8. Models for forecasting hospital bed requirements in the acute sector.

    PubMed Central

    Farmer, R D; Emami, J

    1990-01-01

    STUDY OBJECTIVE--The aim was to evaluate the current approach to forecasting hospital bed requirements. DESIGN--The study was a time series and regression analysis. The time series for mean duration of stay for general surgery in the age group 15-44 years (1969-1982) was used in the evaluation of different methods of forecasting future values of mean duration of stay and its subsequent use in the estimation of hospital bed requirements. RESULTS--It has been suggested that the simple trend fitting approach suffers from model specification error and imposes unjustified restrictions on the data. The time series approach (Box-Jenkins method) was shown to be a more appropriate way of modelling the data. CONCLUSION--The simple trend fitting approach is inferior to the time series approach in modelling hospital bed requirements. PMID:2277253
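
    A sketch of the Box-Jenkins alternative to simple trend extrapolation, using the statsmodels ARIMA implementation; the mean-duration-of-stay series and the (1,1,0) order are hypothetical stand-ins.

```python
# Box-Jenkins style forecast of mean duration of stay (hypothetical series).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

stay = np.array([9.8, 9.5, 9.1, 8.9, 8.6, 8.5, 8.1, 7.9,
                 7.8, 7.5, 7.4, 7.2, 7.1, 6.9])   # annual means, 1969-1982

# Estimation: a simple ARIMA(1,1,0), i.e., an AR(1) on the differenced series.
model = ARIMA(stay, order=(1, 1, 0)).fit()
forecast = model.forecast(steps=5)                 # mean stay, next 5 years
print(forecast)
```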

  9. Fourier plane modeling of the jet in the galaxy M81

    NASA Astrophysics Data System (ADS)

    Ramessur, Arvind; Bietenholz, Michael F.; Leeuw, Lerothodi L.; Bartel, Norbert

    2015-03-01

    The nearby spiral galaxy M81 has a low-luminosity Active Galactic Nucleus in its center with a core and a one-sided curved jet, dubbed M81*, that is barely resolved with VLBI. To derive basic parameters such as the length of the jet, its orientation and curvature, the usual method of model-fitting with point sources and elliptical Gaussians may not always be the most appropriate one. We are developing Fourier-plane models for such sources, in particular an asymmetric triangle model to fit the extensive set of VLBI data of M81* in the u-v plane. This method may have an advantage over conventional ones in extracting information close to the resolution limit to provide us with a more comprehensive picture of the structure and evolution of the jet. We report on preliminary results.

  10. Strong Ground Motion Simulation and Source Modeling of the April 1, 2006 Tai-Tung Earthquake Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.; Lin, C.

    2010-12-01

    The Tai-Tung earthquake (ML=6.2) occurred in the southeastern part of Taiwan on April 1, 2006. We examine the source model of this event using seismograms observed by the CWBSN at five stations surrounding the source area. An objective estimation method was used to obtain the parameters N and C needed for the empirical Green's function method of Irikura (1986). This method, called the "source spectral ratio fitting method," estimates the seismic moment ratio between a large and a small event, together with their corner frequencies, by fitting the observed source spectral ratio with the ratio of source spectra that obeys the assumed omega-square source model (Miyake et al., 1999). It has the advantage of removing site effects when evaluating the parameters. The best source model of the 2006 Tai-Tung mainshock was estimated by comparing the observed waveforms with synthetics computed using the empirical Green's function method. The asperity is about 3.5 km long along the strike direction and 7.0 km wide along the dip direction. The rupture started at the bottom left of the asperity and spread radially toward the upper right.
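
    A sketch of the spectral-ratio fitting step under the standard omega-square assumption S(f) = M0/(1 + (f/fc)^2); the observed ratio here is synthesized, and the N and C relations follow Irikura's scaling as commonly stated.

```python
# Source spectral ratio fit assuming omega-square spectra (sketch).
import numpy as np
from scipy.optimize import curve_fit

def spectral_ratio(f, m0_ratio, fc_large, fc_small):
    """Ratio of large-event to small-event omega-square source spectra."""
    return m0_ratio * (1 + (f / fc_small) ** 2) / (1 + (f / fc_large) ** 2)

# f and ratio_obs stand in for the observed frequencies and spectral ratios.
rng = np.random.default_rng(0)
f = np.logspace(-1, 1, 50)                           # Hz (hypothetical)
ratio_obs = spectral_ratio(f, 300.0, 0.4, 4.0) * rng.lognormal(0, 0.1, f.size)

popt, _ = curve_fit(spectral_ratio, f, ratio_obs, p0=[100.0, 1.0, 5.0])
m0_ratio, fc_large, fc_small = popt

N = fc_small / fc_large        # Irikura (1986): N from the corner-frequency ratio
C = m0_ratio / N ** 3          # stress-drop ratio parameter
print(N, C)
```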

  11. Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas

    2012-04-01

    Model based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored and the effect of the other parameters is added in the post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model and all tissue parameters significantly improved spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing.
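
    A sketch of the post-processing step described above: absorption is added to presimulated, scattering-only photon paths by Beer-Lambert weighting of the stored per-layer path lengths. The path-length array and absorption coefficients are hypothetical.

```python
# Add absorption to presimulated Monte Carlo photons via Beer-Lambert weighting.
import numpy as np

# path[j, l] = path length of detected photon j in tissue layer l (mm), as
# stored from one presimulated run (hypothetical values here).
rng = np.random.default_rng(1)
n_photons, n_layers = 10_000, 3
path = rng.exponential(0.5, size=(n_photons, n_layers))

def reflectance(mu_a):
    """Detected reflectance for per-layer absorption coefficients mu_a (1/mm)."""
    weights = np.exp(-path @ np.asarray(mu_a))   # Beer-Lambert survival weight
    return weights.mean()

# Absorption set by chromophores (e.g., melanin in layer 0, blood in layer 2).
print(reflectance([0.30, 0.02, 0.15]))
```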

  12. Estimating the Properties of Hard X-Ray Solar Flares by Constraining Model Parameters

    NASA Technical Reports Server (NTRS)

    Ireland, J.; Tolbert, A. K.; Schwartz, R. A.; Holman, G. D.; Dennis, B. R.

    2013-01-01

    We wish to better constrain the properties of solar flares by exploring how parameterized models of solar flares interact with uncertainty estimation methods. We compare four different methods of calculating uncertainty estimates in fitting parameterized models to Ramaty High Energy Solar Spectroscopic Imager X-ray spectra, considering only statistical sources of error. Three of the four methods are based on estimating the scale-size of the minimum in a hypersurface formed by the weighted sum of the squares of the differences between the model fit and the data as a function of the fit parameters, and are implemented as commonly practiced. The fourth method is also based on the difference between the data and the model, but instead uses Bayesian data analysis and Markov chain Monte Carlo (MCMC) techniques to calculate an uncertainty estimate. Two flare spectra are modeled: one from the Geostationary Operational Environmental Satellite X1.3 class flare of 2005 January 19, and the other from the X4.8 flare of 2002 July 23. We find that the four methods give approximately the same uncertainty estimates for the 2005 January 19 spectral fit parameters, but lead to very different uncertainty estimates for the 2002 July 23 spectral fit. This is because each method implements different analyses of the hypersurface, yielding method-dependent results that can differ greatly depending on the shape of the hypersurface. The hypersurface arising from the 2005 January 19 analysis is consistent with a normal distribution; therefore, the assumptions behind the three non-Bayesian uncertainty estimation methods are satisfied and similar estimates are found. The 2002 July 23 analysis shows that the hypersurface is not consistent with a normal distribution, indicating that the assumptions behind the three non-Bayesian uncertainty estimation methods are not satisfied, leading to differing estimates of the uncertainty. We find that the shape of the hypersurface is crucial in understanding the output from each uncertainty estimation technique, and that a crucial factor determining the shape of the hypersurface is the location of the low-energy cutoff relative to energies where the thermal emission dominates. The Bayesian/MCMC approach also allows us to provide detailed information on probable values of the low-energy cutoff, Ec, a crucial parameter in defining the energy content of the flare-accelerated electrons. We show that for the 2002 July 23 flare data, there is a 95% probability that Ec lies below approximately 40 keV, and a 68% probability that it lies in the range 7-36 keV. Further, the low-energy cutoff is more likely to be in the range 25-35 keV than in any other 10 keV wide energy range. The low-energy cutoff for the 2005 January 19 flare is more tightly constrained to 107 +/- 4 keV with 68% probability.
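
    A minimal random-walk Metropolis sketch of the Bayesian/MCMC approach to interval estimation; the log-posterior below is a hypothetical stand-in, not the RHESSI spectral model.

```python
# Random-walk Metropolis sketch for parameter uncertainty estimation.
import numpy as np

def log_post(ec):
    # Stand-in for a skewed, non-Gaussian posterior over a low-energy cutoff (keV).
    return -0.5 * ((np.log(ec) - np.log(30.0)) / 0.4) ** 2 if ec > 0 else -np.inf

rng = np.random.default_rng(2)
n_steps, step = 50_000, 5.0
chain = np.empty(n_steps)
ec, lp = 25.0, log_post(25.0)

for i in range(n_steps):
    prop = ec + rng.normal(0, step)           # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance rule
        ec, lp = prop, lp_prop
    chain[i] = ec

burned = chain[5_000:]                        # discard burn-in
lo, hi = np.percentile(burned, [16, 84])      # 68% credible interval
print(f"Ec 68% interval: {lo:.1f}-{hi:.1f} keV")
```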

  13. An application of model-fitting procedures for marginal structural models.

    PubMed

    Mortimer, Kathleen M; Neugebauer, Romain; van der Laan, Mark; Tager, Ira B

    2005-08-15

    Marginal structural models (MSMs) are being used more frequently to obtain causal effect estimates in observational studies. Although the principal estimator of MSM coefficients has been the inverse probability of treatment weight (IPTW) estimator, there are few published examples that illustrate how to apply IPTW or discuss the impact of model selection on effect estimates. The authors applied IPTW estimation of an MSM to observational data from the Fresno Asthmatic Children's Environment Study (2000-2002) to evaluate the effect of asthma rescue medication use on pulmonary function and compared their results with those obtained through traditional regression methods. Akaike's Information Criterion and cross-validation methods were used to fit the MSM. In this paper, the influence of model selection and evaluation of key assumptions such as the experimental treatment assignment assumption are discussed in detail. Traditional analyses suggested that medication use was not associated with an improvement in pulmonary function--a finding that is counterintuitive and probably due to confounding by symptoms and asthma severity. The final MSM estimated that medication use was causally related to a 7% improvement in pulmonary function. The authors present examples that should encourage investigators who use IPTW estimation to undertake and discuss the impact of model-fitting procedures to justify the choice of the final weights.
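
    A sketch of IPTW estimation of a marginal structural model under assumed variable names (not the FACES study data): fit a treatment model given confounders, form stabilized weights, then fit a weighted outcome regression whose treatment coefficient estimates the causal effect.

```python
# IPTW sketch: treatment model -> stabilized weights -> weighted outcome model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
severity = rng.normal(size=n)                         # hypothetical confounder
p_treat = 1 / (1 + np.exp(-(-0.5 + 1.2 * severity)))  # true treatment mechanism
treat = rng.binomial(1, p_treat)
outcome = 0.07 * treat - 0.3 * severity + rng.normal(scale=0.5, size=n)

# Treatment model: P(A=1 | L), estimated by logistic regression.
ps = sm.Logit(treat, sm.add_constant(severity)).fit(disp=0).predict()

# Stabilized weights: P(A=a) / P(A=a | L).
p_marg = treat.mean()
sw = np.where(treat == 1, p_marg / ps, (1 - p_marg) / (1 - ps))

# Weighted outcome model: the coefficient on treat estimates the causal effect.
msm = sm.WLS(outcome, sm.add_constant(treat), weights=sw).fit()
print(msm.params)
```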

  14. Comparison of three commercially available fit-test methods.

    PubMed

    Janssen, Larry L; Luinenburg, D Michael; Mullins, Haskell E; Nelson, Thomas J

    2002-01-01

    American National Standards Institute (ANSI) standard Z88.10, Respirator Fit Testing Methods, includes criteria to evaluate new fit-tests. The standard allows generated aerosol, particle counting, or controlled negative pressure quantitative fit-tests to be used as the reference method to determine acceptability of a new test. This study examined (1) comparability of three Occupational Safety and Health Administration-accepted fit-test methods, all of which were validated using generated aerosol as the reference method; and (2) the effect of the reference method on the apparent performance of a fit-test method under evaluation. Sequential fit-tests were performed using the controlled negative pressure and particle counting quantitative fit-tests and the bitter aerosol qualitative fit-test. Of 75 fit-tests conducted with each method, the controlled negative pressure method identified 24 failures; bitter aerosol identified 22 failures; and the particle counting method identified 15 failures. The sensitivity of each method, that is, agreement with the reference method in identifying unacceptable fits, was calculated using each of the other two methods as the reference. None of the test methods met the ANSI sensitivity criterion of 0.95 or greater when compared with either of the other two methods. These results demonstrate that (1) the apparent performance of any fit-test depends on the reference method used, and (2) the fit-tests evaluated use different criteria to identify inadequately fitting respirators. Although "acceptable fit" cannot be defined in absolute terms at this time, the ability of existing fit-test methods to reject poor fits can be inferred from workplace protection factor studies.

  15. The Benefits of Including Clinical Factors in Rectal Normal Tissue Complication Probability Modeling After Radiotherapy for Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Defraene, Gilles, E-mail: gilles.defraene@uzleuven.be; Van den Bergh, Laura; Al-Mamgani, Abrahim

    2012-03-01

    Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman [LKB] and Relative Seriality [RS]) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011-0.013) clinical factor was 'previous abdominal surgery.' As second significant (p = 0.012-0.016) factor, 'cardiac history' was included in all three rectal bleeding fits, whereas including 'diabetes' was significant (p = 0.039-0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefitted significantly (p = 0.003-0.006) from the inclusion of the baseline toxicity score. For all models, the rectal bleeding fits had the highest AUC (0.77), versus 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. The LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas the anorectal wall dose best described the other two end points. Conclusions: Comparable prediction models were obtained with the LKB, RS, and logistic NTCP models. Including clinical factors improved the predictive power of all models significantly.
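
    A sketch of evaluating the LKB model for a single dose-volume histogram: a generalized-EUD volume reduction with parameter n followed by a probit link with TD50 and m. Parameter values are illustrative, not the trial's fitted values.

```python
# Evaluate the Lyman-Kutcher-Burman (LKB) NTCP model for one DVH (sketch).
import numpy as np
from scipy.stats import norm

def lkb_ntcp(dose_bins, vol_fracs, td50, m, n):
    """dose_bins in Gy; vol_fracs = fractional organ volume per bin (sums to 1)."""
    geud = (np.sum(vol_fracs * dose_bins ** (1.0 / n))) ** n   # generalized EUD
    t = (geud - td50) / (m * td50)                             # probit argument
    return norm.cdf(t)

dose = np.array([20.0, 40.0, 60.0, 75.0])     # illustrative DVH
vol = np.array([0.40, 0.30, 0.20, 0.10])
print(lkb_ntcp(dose, vol, td50=80.0, m=0.15, n=0.10))
```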

  16. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, this criterion, which requires a large number of events, is not easy to satisfy in practice. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides an extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to non-linear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the ubiquity of the fast Levenberg-Marquardt (L-M) procedure for fitting non-linear models by least squares (simple searches return approximately 10,000 references, not counting those who use it without knowing they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence, whereas downward gradient methods have a much wider domain of convergence but converge extremely slowly near the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on. Only those who are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. Ways have been found to use successive non-linear least squares fits to obtain similarly unbiased results, but this procedure is justified only by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE for Poisson deviates with convergence domains and rates comparable to non-linear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure to minimize not the least squares measure, but the MLE for Poisson deviates.
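
    One way to realize this idea with off-the-shelf tools (a sketch, not the authors' code): write the Poisson deviance as a sum of squared pseudo-residuals so that a standard Levenberg-Marquardt driver minimizes the Poisson MLE measure rather than ordinary least squares.

```python
# Poisson-MLE fitting through a Levenberg-Marquardt least-squares driver.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 200)
counts = rng.poisson(100 * np.exp(-t / 2.0) + 5)   # simulated decay histogram

def model(p):
    amp, tau, bg = p
    return amp * np.exp(-t / tau) + bg

def pseudo_residuals(p):
    m = model(p)
    safe = np.where(counts > 0, counts, 1.0)       # d*log(d/m) -> 0 when d == 0
    dev = 2.0 * (m - counts + counts * np.log(safe / m))
    return np.sqrt(np.maximum(dev, 0.0))           # sum of squares = Poisson deviance

fit = least_squares(pseudo_residuals, x0=[80.0, 1.0, 1.0], method="lm")
print(fit.x)
```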

  17. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
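
    A sketch of the fixed-order special case: a truncated power basis of degree p with given knots builds continuity of derivatives up to order p-1 into the design matrix. Fitting is by ordinary least squares here for brevity, whereas the paper embeds such columns in the fixed and random effects of a mixed model; the data and knots are made up.

```python
# Truncated power basis for a fixed-knot regression spline (sketch).
import numpy as np

def truncated_power_basis(t, knots, degree=2):
    cols = [t ** j for j in range(degree + 1)]                 # 1, t, ..., t^p
    cols += [np.maximum(t - k, 0.0) ** degree for k in knots]  # (t - k)_+^p terms
    return np.column_stack(cols)

t = np.linspace(0, 24, 100)                                    # e.g., hours
y = np.sin(t / 4.0) + 0.1 * np.random.default_rng(5).normal(size=t.size)

X = truncated_power_basis(t, knots=[8.0, 16.0], degree=2)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ coef
```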

  18. Three-Dimensional Surface Parameters and Multi-Fractal Spectrum of Corroded Steel

    PubMed Central

    Shanhua, Xu; Songbo, Ren; Youde, Wang

    2015-01-01

    To study the multi-fractal behavior of corroded steel surfaces, a range of fractal surfaces of corroded Q235 steel were constructed using the Weierstrass-Mandelbrot method with high overall accuracy. The multi-fractal spectrum of the fractal surface of corroded steel was calculated to study the multi-fractal characteristics of the W-M corroded surface. Based on the shape feature of the multi-fractal spectrum of the corroded steel surface, the least squares method was applied to the quadratic fitting of the multi-fractal spectrum of the corroded surface. The fitting function was quantitatively analyzed to simplify the calculation of the multi-fractal characteristics of the corroded surface. The results showed that the multi-fractal spectrum of the corroded surface was fitted well by the quadratic curve, and its evolution rules and trends were forecast accurately. The findings can be applied to research on the mechanisms of corroded surface formation of steel and provide a new approach for the establishment of corrosion damage constitutive models of steel. PMID:26121468
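
    A sketch of the quadratic fitting step: a least-squares parabola through (alpha, f(alpha)) points, from which the spectrum peak and width follow. The sample points are hypothetical.

```python
# Least-squares quadratic fit of a multi-fractal spectrum f(alpha) (sketch).
import numpy as np

alpha = np.array([1.6, 1.8, 2.0, 2.2, 2.4, 2.6])    # singularity strengths
f = np.array([0.55, 0.85, 1.00, 0.95, 0.75, 0.40])  # spectrum values (made up)

a, b, c = np.polyfit(alpha, f, deg=2)               # f(alpha) ~ a*alpha^2 + b*alpha + c
alpha_peak = -b / (2 * a)                           # location of the spectrum maximum
roots = np.roots([a, b, c])
width = abs(roots[0] - roots[1])                    # a common multi-fractal measure
print(alpha_peak, width)
```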

  19. Three-Dimensional Surface Parameters and Multi-Fractal Spectrum of Corroded Steel.

    PubMed

    Shanhua, Xu; Songbo, Ren; Youde, Wang

    2015-01-01

    To study the multi-fractal behavior of corroded steel surfaces, a range of fractal surfaces of corroded Q235 steel were constructed using the Weierstrass-Mandelbrot method with high overall accuracy. The multi-fractal spectrum of the fractal surface of corroded steel was calculated to study the multi-fractal characteristics of the W-M corroded surface. Based on the shape feature of the multi-fractal spectrum of the corroded steel surface, the least squares method was applied to the quadratic fitting of the multi-fractal spectrum of the corroded surface. The fitting function was quantitatively analyzed to simplify the calculation of the multi-fractal characteristics of the corroded surface. The results showed that the multi-fractal spectrum of the corroded surface was fitted well by the quadratic curve, and its evolution rules and trends were forecast accurately. The findings can be applied to research on the mechanisms of corroded surface formation of steel and provide a new approach for the establishment of corrosion damage constitutive models of steel.

  20. Simultaneous Measurement of Thermal Conductivity and Specific Heat in a Single TDTR Experiment

    NASA Astrophysics Data System (ADS)

    Sun, Fangyuan; Wang, Xinwei; Yang, Ming; Chen, Zhe; Zhang, Hang; Tang, Dawei

    2018-01-01

    The time-domain thermoreflectance (TDTR) technique is a powerful thermal property measurement method, especially for nano-structures and material interfaces. Thermal properties can be obtained by fitting TDTR experimental data with a proper thermal transport model. In a single TDTR experiment, thermal properties with different sensitivity trends can be extracted simultaneously. However, thermal conductivity and volumetric heat capacity usually have similar sensitivity trends for most materials, so it is difficult to measure them simultaneously. In this work, we present a two-step data fitting method to measure the thermal conductivity and volumetric heat capacity simultaneously from a set of TDTR experimental data at a single modulation frequency. This method takes full advantage of the information carried by both the amplitude and phase signals, and it is a more convenient and effective solution compared with the frequency-domain thermoreflectance method. The relative error is below 5% in most cases. A silicon wafer sample was measured by the TDTR method to verify the two-step fitting method.

  1. [Application of numerical convolution in in vivo/in vitro correlation research].

    PubMed

    Yue, Peng

    2009-01-01

    This paper introduced the concept and principle of in vivo/in vitro correlation (IVIVC) and convolution/deconvolution methods, and elucidated in detail a convolution strategy for calculating the in vivo absorption of a pharmaceutical from its pharmacokinetic data in Excel, with the results then applied to IVIVC research. First, the pharmacokinetic data were fitted with mathematical software to fill in missing points. Second, the parameters of the optimal input function were determined by a trial-and-error method according to the convolution principle in Excel, under the hypothesis that all input functions follow Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. The examples not only demonstrate the application of the method in detail but also show its simplicity and effectiveness by comparison with the compartment model method and the deconvolution method. It proves to be a powerful tool for IVIVC research.
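
    A sketch of the convolution step in Python rather than Excel: a Weibull-shaped input rate convolved with a unit impulse response predicts the plasma profile, and the Weibull parameters would be tuned by trial and error against observed data. All values are illustrative.

```python
# Predict a plasma profile by convolving a Weibull input rate with a UIR (sketch).
import numpy as np

dt = 0.1
t = np.arange(0, 24, dt)                        # hours

def weibull_input_rate(t, F, td, beta):
    """Rate form of the cumulative Weibull absorption F*(1 - exp(-(t/td)^beta))."""
    frac = np.exp(-(t / td) ** beta)
    rate = F * beta / td * (t / td) ** (beta - 1) * frac
    return np.nan_to_num(rate)                  # guard the t = 0 endpoint

uir = 0.5 * np.exp(-0.3 * t)                    # one-compartment unit impulse response
conc = np.convolve(weibull_input_rate(t, 1.0, 3.0, 1.5), uir)[: t.size] * dt
```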

  2. Least Squares Best Fit Method for the Three Parameter Weibull Distribution: Analysis of Tensile and Bend Specimens with Volume or Surface Flaw Failure

    NASA Technical Reports Server (NTRS)

    Gross, Bernard

    1996-01-01

    Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
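
    A sketch of one common least-squares formulation of the three-parameter Weibull fit (the report's exact estimator may differ): grid-search the threshold and fit the modulus and scale on linearized axes using median-rank plotting positions. The failure stresses are made up.

```python
# Least-squares fit of F = 1 - exp(-((s - su)/s0)^m) via linearization (sketch).
import numpy as np

s = np.sort(np.array([212., 230., 244., 255., 263., 275., 284., 297., 310., 330.]))
n = s.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)        # median-rank plotting positions
y = np.log(-np.log(1.0 - F))

best = None
for su in np.linspace(0.0, 0.95 * s[0], 200):      # candidate threshold values
    x = np.log(s - su)
    m, intercept = np.polyfit(x, y, 1)             # y = m*x - m*ln(s0)
    sse = np.sum((y - (m * x + intercept)) ** 2)
    if best is None or sse < best[0]:
        best = (sse, su, m, np.exp(-intercept / m))
print(best[1:])                                    # threshold, modulus, scale
```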

  3. Electropyroelectric technique: A methodology free of fitting procedures for thermal effusivity determination in liquids.

    PubMed

    Ivanov, R; Marin, E; Villa, J; Gonzalez, E; Rodríguez, C I; Olvera, J E

    2015-06-01

    This paper describes an alternative methodology to determine the thermal effusivity of a liquid sample using the recently proposed electropyroelectric technique, without fitting the experimental data with a theoretical model and without having to know the pyroelectric sensor related parameters, as in most previous reported approaches. The method is not absolute, because a reference liquid with known thermal properties is needed. Experiments have been performed that demonstrate the high reliability and accuracy of the method with measurement uncertainties smaller than 3%.

  4. [Measurements of the concentration of atmospheric CO2 based on OP/FTIR method and infrared reflecting scanning Fourier transform spectrometry].

    PubMed

    Wei, Ru-Yi; Zhou, Jin-Song; Zhang, Xue-Min; Yu, Tao; Gao, Xiao-Hui; Ren, Xiao-Qiang

    2014-11-01

    The present paper describes observations and measurements of the infrared absorption spectra of CO2 at the Earth's surface with the OP/FTIR method, employing a mid-infrared reflecting scanning Fourier transform spectrometer; these are the first results produced by the first prototype in China, developed by the authors' team. The reflecting scanning Fourier transform spectrometer works in the spectral range 2100-3150 cm(-1) with a spectral resolution of 2 cm(-1). A method to measure atmospheric molecules was described, and mathematical proofs and quantitative algorithms to retrieve molecular concentrations were established. Retrievals were performed both with a direct method based on the Beer-Lambert law and with a simulating-fitting method based on the HITRAN database and the instrument functions. Concentrations of CO2 were retrieved by the two models. The results of the observations and modeling analyses indicate that the concentrations have a distribution of 300-370 ppm and tend to first decrease slowly and then increase rapidly with environmental variation during the observation period, reaching low points in the afternoon and around sunset. The concentration time series retrieved by the direct method and by the simulating-fitting method agree with each other very well, with a correlation across all data of up to 99.79% and a relative error of no more than 2.00%; the retrieval precision is relatively high. The results demonstrate that, in the field of detecting atmospheric compositions, the OP/FTIR method performed with an infrared reflecting scanning Fourier transform spectrometer is a feasible and effective technical approach, and either the direct method or the simulating-fitting method is capable of retrieving concentrations with high precision.
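
    A sketch of the direct retrieval via Beer-Lambert inversion, c = A/(sigma L); all numbers are illustrative, not the instrument's calibration.

```python
# Direct Beer-Lambert retrieval of a column-averaged concentration (sketch).
import numpy as np

I0 = 1.00        # background intensity at the absorption line
I = 0.93         # measured intensity
sigma = 3.0e-19  # effective absorption cross-section (cm^2 per molecule)
L = 2.0e4        # open path length (cm)

A = -np.log(I / I0)        # absorbance (base e)
c = A / (sigma * L)        # number density, molecules per cm^3
ppm = c / 2.5e19 * 1e6     # relative to air number density near 1 atm, 20 C
print(ppm)
```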

  5. Comparative testing of dark matter models with 15 HSB and 15 LSB galaxies

    NASA Astrophysics Data System (ADS)

    Kun, E.; Keresztes, Z.; Simkó, A.; Szűcs, G.; Gergely, L. Á.

    2017-12-01

    Context. We assemble a database of 15 high surface brightness (HSB) and 15 low surface brightness (LSB) galaxies, for which surface brightness density and spectroscopic rotation curve data are both available and representative for various morphologies. We use this dataset to test the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. Aims: We investigate the compatibility of the pure baryonic model and baryonic plus one of the three dark matter models with observations on the assembled galaxy database. When a dark matter component improves the fit with the spectroscopic rotational curve, we rank the models according to the goodness of fit to the datasets. Methods: We constructed the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. We estimated the mass-to-light (M/L) ratio of the stellar component through a previously proposed color-mass-to-light ratio relation (CMLR), which yields stellar masses independent of the photometric band. We assumed an axisymmetric baryonic mass model with variable axis ratios together with one of the three dark matter models to provide the theoretical rotational velocity curves, and we compared them with the dataset. In a second attempt, we addressed the question of whether the dark component could be replaced by a pure baryonic model with fitted M/L ratios, varied over ranges consistent with CMLR relations derived from the available stellar population models. We employed the Akaike information criterion to establish the performance of the best-fit models. Results: For 7 galaxies (2 HSB and 5 LSB), neither model fits the dataset within the 1σ confidence level. For the other 23 cases, one of the models with dark matter explains the rotation curve data best. According to the Akaike information criterion, the pseudo-isothermal sphere emerges as most favored in 14 cases, followed by the Navarro-Frenk-White (6 cases) and the Einasto (3 cases) dark matter models. We find that the pure baryonic model with fitted M/L ratios falls within the 1σ confidence level for 10 HSB and 2 LSB galaxies, at the price of increasing the M/L ratios on average by a factor of two, but the fits are inferior compared with the best-fitting dark matter model.
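
    A sketch of fitting one of the tested dark matter models: an NFW circular-velocity curve fitted to rotation-curve points, with the Akaike information criterion computed for model comparison. The data points are invented and the baryonic contribution is omitted for brevity.

```python
# NFW rotation-curve fit with an AIC model-comparison score (sketch).
import numpy as np
from scipy.optimize import curve_fit

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def v_nfw(r, rho0, rs):
    x = r / rs
    m_enc = 4 * np.pi * rho0 * rs ** 3 * (np.log(1 + x) - x / (1 + x))
    return np.sqrt(G * m_enc / r)

r = np.array([1, 2, 4, 6, 8, 10, 12, 15.])                 # kpc (made up)
v_obs = np.array([45, 68, 90, 102, 109, 113, 116, 118.])   # km/s (made up)
v_err = np.full_like(v_obs, 5.0)

popt, _ = curve_fit(v_nfw, r, v_obs, sigma=v_err, p0=[5e7, 5.0])
chi2 = np.sum(((v_obs - v_nfw(r, *popt)) / v_err) ** 2)
aic = chi2 + 2 * len(popt)     # AIC for Gaussian errors, up to an additive constant
print(popt, aic)
```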

  6. Curve fitting and modeling with splines using statistical variable selection techniques

    NASA Technical Reports Server (NTRS)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  7. An adjoint-based method for a linear mechanically-coupled tumor model: application to estimate the spatial variation of murine glioma growth based on diffusion weighted magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Feng, Xinzeng; Hormuth, David A.; Yankeelov, Thomas E.

    2018-06-01

    We present an efficient numerical method to quantify the spatial variation of glioma growth based on subject-specific medical images using a mechanically-coupled tumor model. The method is illustrated in a murine model of glioma in which we consider the tumor as a growing elastic mass that continuously deforms the surrounding healthy-appearing brain tissue. As an inverse parameter identification problem, we quantify the volumetric growth of glioma and the growth component of deformation by fitting the model predicted cell density to the cell density estimated using the diffusion-weighted magnetic resonance imaging data. Numerically, we developed an adjoint-based approach to solve the optimization problem. Results on a set of experimentally measured, in vivo rat glioma data indicate good agreement between the fitted and measured tumor area and suggest a wide variation of in-plane glioma growth with the growth-induced Jacobian ranging from 1.0 to 6.0.

  8. Measuring the Processes of Change From the Transtheoretical Model for Physical Activity and Exercise in Overweight and Obese Adults.

    PubMed

    Romain, Ahmed Jerôme; Bernard, Paquito; Hokayem, Marie; Gernigon, Christophe; Avignon, Antoine

    2016-03-01

    This study aimed to test three factorial structures conceptualizing the processes of change (POC) from the transtheoretical model and to examine the relationships between the POC and stages of change (SOC) among overweight and obese adults. Cross-sectional study. This study was conducted at the University Hospital of Montpellier, France. A sample of 289 overweight or obese participants (199 women) was enrolled in the study. Participants completed the POC and SOC questionnaires during a 5-day hospitalization for weight management. Structural equation modeling was used to compare the different factorial structures. With the unweighted least-squares estimation method, the best fit indices were obtained for the five-factor fully correlated model (goodness-of-fit statistic = .96; adjusted goodness-of-fit statistic = .95; standardized root mean residual = .062; normed-fit index = .95; parsimonious normed-fit index = .83; parsimonious goodness-of-fit statistic = .78). The multivariate analysis of variance was significant (p < .001). A post hoc test showed that individuals in advanced SOC used more of both experiential and behavioral POC than those in preaction stages, with effect sizes ranging from .06 to .29. This study supports the validity of the factorial structure of POC concerning physical activity and confirms the assumption that, in this context, people with excess weight use both experiential and behavioral processes. These preliminary results should be confirmed in a longitudinal study. © The Author(s) 2016.

  9. Using Social Judgment Theory method to examine how experienced occupational therapy driver assessors use information to make fitness-to-drive recommendations

    PubMed Central

    Harries, Priscilla; Davies, Miranda

    2015-01-01

    Introduction As people with a range of disabilities strive to increase their community mobility, occupational therapy driver assessors are increasingly required to make complex recommendations regarding fitness-to-drive. However, very little is known about how therapists use information to make decisions. The aim of this study was to model how experienced occupational therapy driver assessors weight and combine information when making fitness-to-drive recommendations and to establish their level of decision agreement. Method Using the Social Judgment Theory method, this study examined how 45 experienced occupational therapy driver assessors from the UK, Australia and New Zealand made fitness-to-drive recommendations for a series of 64 case scenarios. Participants completed the task on a dedicated website, and data were analysed using discriminant function analysis and an intraclass correlation coefficient. Results Accounting for 87% of the variance, the cues central to the fitness-to-drive recommendations made by assessors are the client’s physical skills, cognitive and perceptual skills, road law craft skills, vehicle handling skills and the number of driving instructor interventions. Agreement (consensus) between fitness-to-drive recommendations was very high (intraclass correlation coefficient = .97; 95% confidence interval .96–.98). Conclusion Findings can be used by both experienced and novice driver assessors to reflect on and strengthen the fitness-to-drive recommendations made to clients. PMID:26435572

  10. Health-Related Fitness Knowledge Development through Project-Based Learning

    ERIC Educational Resources Information Center

    Hastle, Peter A.; Chen, Senlin; Guarino, Anthony J.

    2017-01-01

    Purpose: The purpose of this study was to examine the process and outcome of an intervention using the project-based learning (PBL) model to increase students' health-related fitness (HRF) knowledge. Method: The participants were 185 fifth-grade students from three schools in Alabama (PBL group: n = 109; control group: n = 76). HRF knowledge was…

  11. Testing spectral models for stellar populations with star clusters - II. Results

    NASA Astrophysics Data System (ADS)

    González Delgado, Rosa M.; Cid Fernandes, Roberto

    2010-04-01

    High spectral resolution evolutionary synthesis models have become a routinely used ingredient in extragalactic work, and as such deserve thorough testing. Star clusters are ideal laboratories for such tests. This paper applies the spectral fitting methodology outlined in Paper I to a sample of clusters, mainly from the Magellanic Clouds and spanning a wide range in age and metallicity, fitting their integrated light spectra with a suite of modern evolutionary synthesis models for single stellar populations. The combinations of model plus spectral library employed in this investigation are Galaxev/STELIB, Vazdekis/MILES, SED@/GRANADA and Galaxev/MILES+GRANADA, which provide a representative sample of models currently available for spectral fitting work. A series of empirical tests are performed with these models, comparing the quality of the spectral fits and the values of age, metallicity and extinction obtained with each of them. A comparison is also made between the properties derived from these spectral fits and literature data on these nearby, well studied clusters. These comparisons are done with the general goal of providing useful feedback for model makers, as well as guidance to the users of such models. We find the following. (i) All models are able to derive ages that are in good agreement both with each other and with literature data, although ages derived from spectral fits are on average slightly older than those based on the S-colour-magnitude diagram (S-CMD) method as calibrated by Girardi et al. (ii) There is less agreement between the models for the metallicity and extinction. In particular, Galaxev/STELIB models underestimate the metallicity by ~0.6 dex, and the extinction is overestimated by 0.1 mag. (iii) New generations of models using the GRANADA and MILES libraries are superior to STELIB-based models both in terms of spectral fit quality and regarding the accuracy with which age and metallicity are retrieved. Accuracies of about 0.1 dex in age and 0.3 dex in metallicity can be achieved as long as the models are not extrapolated beyond their expected range of validity.

  12. Analysis technique for controlling system wavefront error with active/adaptive optics

    NASA Astrophysics Data System (ADS)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
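
    A sketch of the control step implied by a linear optics model: with influence matrix A and disturbance wavefront w0, the least-squares actuator command minimizes the residual WFE. The matrices are random placeholders, not SigFit output.

```python
# Least-squares actuator commands from a linear optics model (sketch).
import numpy as np

rng = np.random.default_rng(6)
n_samples, n_actuators = 500, 12                # WFE samples, actuator count
A = rng.normal(size=(n_samples, n_actuators))   # fitted influence functions
w0 = rng.normal(size=n_samples)                 # disturbance wavefront error

cmd = -np.linalg.pinv(A) @ w0                   # least-squares actuator commands
residual = w0 + A @ cmd                         # corrected system WFE
print(np.std(w0), np.std(residual))             # RMS WFE before/after correction
```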

  13. Group Influences on Young Adult Warfighters’ Risk Taking

    DTIC Science & Technology

    2016-12-01

    Statistical Analysis: Latent linear growth models were fitted using the maximum likelihood estimation method in Mplus (version 7.0; Muthen & Muthen). [...] condition had a higher net score than those in the alone condition (b = 20.53, SE = 6.29, p < .001). [Fragmentary extraction of a results table: model fit statistics for three models, including BIC values of 4004.50, 5302.54, and 5540.58 and chi-square (df) values of 41.51*** (16), 38.10** (20), and 42.19** (20).]

  14. Regression Models for Identifying Noise Sources in Magnetic Resonance Images

    PubMed Central

    Zhu, Hongtu; Li, Yimei; Ibrahim, Joseph G.; Shi, Xiaoyan; An, Hongyu; Chen, Yashen; Gao, Wei; Lin, Weili; Rowe, Daniel B.; Peterson, Bradley S.

    2009-01-01

    Stochastic noise, susceptibility artifacts, magnetic field and radiofrequency inhomogeneities, and other noise components in magnetic resonance images (MRIs) can introduce serious bias into any measurements made with those images. We formally introduce three regression models including a Rician regression model and two associated normal models to characterize stochastic noise in various magnetic resonance imaging modalities, including diffusion-weighted imaging (DWI) and functional MRI (fMRI). Estimation algorithms are introduced to maximize the likelihood function of the three regression models. We also develop a diagnostic procedure for systematically exploring MR images to identify noise components other than simple stochastic noise, and to detect discrepancies between the fitted regression models and MRI data. The diagnostic procedure includes goodness-of-fit statistics, measures of influence, and tools for graphical display. The goodness-of-fit statistics can assess the key assumptions of the three regression models, whereas measures of influence can isolate outliers caused by certain noise components, including motion artifacts. The tools for graphical display permit graphical visualization of the values for the goodness-of-fit statistic and influence measures. Finally, we conduct simulation studies to evaluate performance of these methods, and we analyze a real dataset to illustrate how our diagnostic procedure localizes subtle image artifacts by detecting intravoxel variability that is not captured by the regression models. PMID:19890478
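
    A sketch of maximum likelihood estimation under the Rician model using scipy's Rice distribution (parameterized by b = nu/sigma and scale = sigma); the data are simulated rather than MR measurements.

```python
# Maximum likelihood fit of a Rician magnitude model (sketch).
import numpy as np
from scipy.stats import rice

rng = np.random.default_rng(7)
nu_true, sigma_true = 3.0, 1.0          # underlying signal and noise levels
data = rice.rvs(nu_true / sigma_true, scale=sigma_true, size=5000,
                random_state=rng)

b_hat, loc_hat, scale_hat = rice.fit(data, floc=0)   # MLE, location fixed at 0
nu_hat = b_hat * scale_hat                           # recovered signal amplitude
print(nu_hat, scale_hat)
```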

  15. Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space

    PubMed Central

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters. PMID:24603904
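
    A numerical sketch of the Legendre-domain idea (the paper's treatment is analytical): project the noisy trace onto a low-order Legendre basis, which filters noise, and fit the exponential's parameters by matching coefficients in that space.

```python
# Fit an exponential by matching low-order Legendre coefficients (sketch).
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import least_squares

t = np.linspace(0, 5, 1000)
x = 2 * t / t[-1] - 1                        # map time onto [-1, 1]
rng = np.random.default_rng(8)
data = 3.0 * np.exp(-t / 0.8) + rng.normal(scale=0.3, size=t.size)

L = 8                                        # low-dimensional Legendre space
c_data = legendre.legfit(x, data, L)         # projection also suppresses noise

def coeff_residuals(p):
    amp, tau = p
    return legendre.legfit(x, amp * np.exp(-t / tau), L) - c_data

fit = least_squares(coeff_residuals, x0=[1.0, 1.0])
print(fit.x)                                 # recovered amplitude, time constant
```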

  16. Modeling multilayer x-ray reflectivity using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sánchez del Río, M.; Pareschi, G.; Michetschläger, C.

    2000-06-01

    The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thickness, density, roughness). Non-linear fitting of experimental data with simulations requires the use of initial values sufficiently close to the optimum value. This is a difficult task when the topology of the space of the variables is highly structured. We apply global optimization methods to fit multilayer reflectivity. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual. The population is a collection of individuals. Each generation is built from the parent generation by applying some operators (selection, crossover, mutation, etc.) on the members of the parent generation. The pressure of selection drives the population to include "good" individuals. For large number of generations, the best individuals will approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C and W/Si multilayers using genetic algorithms are presented. This method can also be applied to design multilayers optimized for a target application.
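
    A minimal genetic algorithm sketch: individuals are parameter vectors, fitness is the misfit to measured reflectivity, and selection, blend crossover, and mutation act on each generation. The objective is a stand-in, not a multilayer reflectivity code.

```python
# Genetic algorithm sketch for reflectivity-style parameter fitting.
import numpy as np

rng = np.random.default_rng(9)
target = np.array([35.0, 4.2])          # "true" thickness (A), roughness (A)

def misfit(p):                          # stand-in for chi^2 against measured data
    return np.sum((p - target) ** 2)

pop = rng.uniform([10, 0], [80, 10], size=(60, 2))        # initial population
for generation in range(200):
    scores = np.array([misfit(ind) for ind in pop])
    parents = pop[np.argsort(scores)][:30]                 # selection: best half
    i, j = rng.integers(0, 30, (2, 60))
    alpha = rng.uniform(size=(60, 1))
    pop = alpha * parents[i] + (1 - alpha) * parents[j]    # blend crossover
    mutate = rng.uniform(size=(60, 1)) < 0.2               # mutate 20% of children
    pop += rng.normal(scale=0.5, size=pop.shape) * mutate

best = pop[np.argmin([misfit(ind) for ind in pop])]
print(best)
```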

  17. Statistical distributions of extreme dry spell in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Jemain, Abdul Aziz

    2010-11-01

    Statistical distributions of annual extreme (AE) series and partial duration (PD) series for dry-spell events are analyzed for a database of daily rainfall records of 50 rain-gauge stations in Peninsular Malaysia, with recording periods extending from 1975 to 2004. The three-parameter generalized extreme value (GEV) and generalized Pareto (GP) distributions are considered to model both series. In both cases, the parameters of these two distributions are fitted by means of the L-moments method, which provides robust estimates of them. The goodness of fit (GOF) between empirical data and theoretical distributions is then evaluated by means of the L-moment ratio diagram and several goodness-of-fit tests for each of the 50 stations. It is found that for the majority of stations, the AE and PD series are well fitted by the GEV and GP models, respectively. Based on the models that have been identified, we can reasonably predict the risks associated with extreme dry spells for various return periods.
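
    A sketch of the L-moments fit for the GP distribution over a threshold, using Hosking's probability-weighted-moment formulas; the excess values are simulated, not the Malaysian records.

```python
# L-moment estimation of generalized Pareto parameters (sketch).
import numpy as np

rng = np.random.default_rng(10)
excess = np.sort(rng.exponential(6.0, size=200))   # dry-spell excesses over threshold

n = excess.size
i = np.arange(1, n + 1)
b0 = excess.mean()                                 # probability-weighted moments
b1 = np.sum((i - 1) / (n - 1) * excess) / n
lam1, lam2 = b0, 2 * b1 - b0                       # first two sample L-moments

kappa = lam1 / lam2 - 2.0                          # GPD shape (Hosking's k = -xi)
alpha = (1.0 + kappa) * lam1                       # GPD scale
print(kappa, alpha)
```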

  18. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564

  19. A novel approach to determine primary stability of acetabular press-fit cups.

    PubMed

    Weißmann, Volker; Boss, Christian; Bader, Rainer; Hansmann, Harald

    2018-04-01

    Today hip cups are used in a large variety of design variants and in increasing numbers of units. Their development is steadily progressing. In addition to conventional manufacturing methods for hip cups, additive methods, in particular, play an increasingly important role as development progresses. The present paper describes a modified cup model developed based on a commercially available press-fit cup (Allofit 54/JJ). The press-fit cup was designed in two variants and manufactured using selective laser melting (SLM). Variant 1 (Ti) was modeled on the Allofit cup using an adapted process technology. Variant 2 (Ti-S) was provided with a porous load-bearing structure on its surface. In addition to the typical (complete) geometry, both variants were also manufactured and tested in a reduced shape in which only the press-fit area was formed. To assess the primary stability of the press-fit cups in the artificial bone cavity, pull-out and lever-out tests were carried out. Exact-fit conditions and a two-millimeter press-fit were investigated. The closed-cell PU foam used as an artificial bone cavity was mechanically characterized to exclude any influence on the results of the investigation. The pull-out forces of the Ti variant (complete: 526 N; reduced: 468 N) and the Ti-S variant (complete: 548 N; reduced: 526 N), as well as the lever-out moments of the Ti variant (complete: 10 Nm; reduced: 9.8 Nm) and the Ti-S variant (complete: 9 Nm; reduced: 7.9 Nm), show no significant differences between complete and reduced cups. The results show that the use of reduced cups in a press-fit design is possible within the scope of development work. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. A New Navigation Satellite Clock Bias Prediction Method Based on Modified Clock-bias Quadratic Polynomial Model

    NASA Astrophysics Data System (ADS)

    Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.

    2016-01-01

    In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposes a new SCB prediction model that takes the physical characteristics of the space-borne atomic clock, the cyclic variation, and the random part of SCB into consideration. First, the new model employs a quadratic polynomial model with periodic items to fit and extract the trend term and cyclic term of SCB; then, based on the characteristics of the fitting residuals, a time series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; eventually, the results from the two models are combined to obtain the final SCB prediction values. Finally, this paper uses precise SCB data from the IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance compared with the quadratic polynomial model, the grey model, and the ARIMA model. In addition, the new method can also overcome the insufficiency of the ARIMA model in model recognition and order determination.
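
    A sketch of the combined model: a least-squares fit of a quadratic polynomial plus a periodic term, then an ARIMA model on the fitting residuals, with the two predictions summed. The series, period, and ARIMA order are assumptions.

```python
# Quadratic-plus-periodic trend fit with ARIMA residual modeling (sketch).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(11)
t = np.arange(0, 480.0)                       # epochs (e.g., 5-minute intervals)
scb = (1e-6 + 2e-9 * t + 1e-12 * t**2
       + 5e-9 * np.sin(2 * np.pi * t / 288)
       + rng.normal(scale=1e-9, size=t.size)) # simulated clock bias (s)

P = 288.0                                     # assumed dominant period
def design(tt):
    return np.column_stack([np.ones_like(tt), tt, tt**2,
                            np.sin(2 * np.pi * tt / P),
                            np.cos(2 * np.pi * tt / P)])

coef, *_ = np.linalg.lstsq(design(t), scb, rcond=None)
resid = scb - design(t) @ coef                # trend/cyclic part removed

arima = ARIMA(resid, order=(1, 0, 1)).fit()   # stochastic residual model

t_new = np.arange(480.0, 540.0)               # prediction epochs
prediction = design(t_new) @ coef + arima.forecast(steps=t_new.size)
```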

  1. Model of flare lightcurve profile observed in soft X-rays

    NASA Astrophysics Data System (ADS)

    Gryciuk, Magdalena; Siarkowski, Marek; Gburek, Szymon; Podgorski, Piotr; Sylwester, Janusz; Kepa, Anna; Mrozek, Tomasz

    We propose a new model for describing the solar flare lightcurve profile observed in soft X-rays. The method assumes that single-peaked 'regular' flares seen in lightcurves can be fitted with an elementary time profile given by the convolution of a Gaussian and an exponential function. More complex, multi-peaked flares can be decomposed as a sum of elementary profiles. A linear background is determined during the lightcurve fitting process as well; we allow the background over the event to change linearly with time. The presented approach was originally developed for the small soft X-ray flares recorded by the Polish spectrophotometer SphinX during the very deep solar activity minimum between Solar Cycles 23 and 24. However, the method can and will be used to interpret lightcurves obtained by other soft X-ray broad-band spectrometers at times of both low and higher solar activity. We introduce the model and present example fits to SphinX and GOES 1-8 Å channel observations.
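
    The convolution of a Gaussian with a one-sided exponential decay has the closed exponentially-modified-Gaussian form sketched below; the parameter values are illustrative.

```python
# Elementary flare profile: Gaussian convolved with an exponential decay (sketch).
import numpy as np
from scipy.special import erfc

def elementary_flare(t, A, mu, sigma, tau):
    """Gaussian (mu, sigma) convolved with (1/tau) exp(-t/tau), amplitude A."""
    arg = sigma / (np.sqrt(2) * tau) - (t - mu) / (np.sqrt(2) * sigma)
    return (A / (2 * tau)) * np.exp(sigma**2 / (2 * tau**2)
                                    - (t - mu) / tau) * erfc(arg)

t = np.linspace(0, 600, 601)                  # seconds
flux = elementary_flare(t, A=1.0, mu=120.0, sigma=30.0, tau=90.0)
background = 0.02 + 1e-5 * t                  # linear background, as in the model
lightcurve = background + flux                # multi-peaked flares: sum of profiles
```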

  2. FIT: statistical modeling tool for transcriptome dynamics under fluctuating field conditions

    PubMed Central

    Iwayama, Koji; Aisaka, Yuri; Kutsuna, Natsumaro

    2017-01-01

    Abstract Motivation: Considerable attention has been given to the quantification of environmental effects on organisms. In natural conditions, environmental factors are continuously changing in a complex manner. To reveal the effects of such environmental variations on organisms, transcriptome data in field environments have been collected and analyzed. Nagano et al. proposed a model that describes the relationship between transcriptomic variation and environmental conditions and demonstrated the capability to predict transcriptome variation in rice plants. However, the computational cost of parameter optimization has prevented its wide application. Results: We propose a new statistical model and efficient parameter optimization based on the previous study. We developed and released FIT, an R package that offers functions for parameter optimization and transcriptome prediction. The proposed method achieves comparable or better prediction performance within a shorter computational time than the previous method. The package will facilitate the study of the environmental effects on transcriptomic variation in field conditions. Availability and Implementation: Freely available from CRAN (https://cran.r-project.org/web/packages/FIT/). Contact: anagano@agr.ryukoku.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online PMID:28158396

  3. A classical regression framework for mediation analysis: fitting one model to estimate mediation effects.

    PubMed

    Saunders, Christina T; Blume, Jeffrey D

    2017-10-26

    Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
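
    For context on the variance approximations the authors revisit, a first-order delta-method sketch for the classical product-of-coefficients indirect effect a*b (assuming independent estimates); the numbers are illustrative.

```python
# Delta-method variance for an indirect effect a*b (sketch).
import numpy as np

a, se_a = 0.40, 0.08      # exposure -> mediator coefficient and SE (made up)
b, se_b = 0.25, 0.06      # mediator -> outcome coefficient and SE (made up)

indirect = a * b
var_ab = b**2 * se_a**2 + a**2 * se_b**2     # first-order delta method
se_ab = np.sqrt(var_ab)
ci = (indirect - 1.96 * se_ab, indirect + 1.96 * se_ab)
print(indirect, se_ab, ci)
```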

  4. Seasonal and Non-Seasonal Generalized Pareto Distribution to Estimate Extreme Significant Wave Height in The Banda Sea

    NASA Astrophysics Data System (ADS)

    Nursamsiah; Nugroho Sugianto, Denny; Suprijanto, Jusup; Munasik; Yulianto, Bambang

    2018-02-01

    Information on extreme wave height return levels is required for maritime planning and management. A recommended method for analyzing extreme waves is the Generalized Pareto Distribution (GPD), and seasonal variation is often considered in extreme wave models. This research aims to identify the best GPD model by taking the seasonal variation of extreme waves into account. Using the 95th percentile as the threshold for extreme significant wave height, seasonal and non-seasonal GPD models were fitted. The Kolmogorov-Smirnov test was applied to assess the goodness of fit of the GPD models, and the return values from the seasonal and non-seasonal GPDs were compared using the definition of the return value as the criterion. The Kolmogorov-Smirnov results show that the GPD fits the data very well in both the seasonal and non-seasonal models, and the seasonal return values give better information about the wave height characteristics.
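
    A minimal sketch of the non-seasonal workflow, assuming synthetic wave data, the standard peaks-over-threshold return-level formula, and 365 observations per year (all our assumptions, not the paper's data):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for a significant-wave-height series
rng = np.random.default_rng(1)
hs = stats.weibull_min.rvs(1.5, scale=1.2, size=5000, random_state=rng)

u = np.percentile(hs, 95)                 # 95th-percentile threshold
excess = hs[hs > u] - u                   # exceedances over the threshold

# Fit the GPD to the exceedances (location fixed at 0)
shape, loc, scale = stats.genpareto.fit(excess, floc=0)

# Kolmogorov-Smirnov goodness-of-fit check
ks = stats.kstest(excess, "genpareto", args=(shape, loc, scale))

# N-year return level, assuming ny observations per year (shape != 0)
ny, N = 365, 100
zeta = excess.size / hs.size              # threshold exceedance rate
z_N = u + (scale / shape) * ((N * ny * zeta) ** shape - 1.0)
```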

  5. The range of attraction for light traps catching Culicoides biting midges (Diptera: Ceratopogonidae)

    PubMed Central

    2013-01-01

    Background Culicoides are vectors of e.g. bluetongue virus and Schmallenberg virus in northern Europe. Light trapping is an important tool for detecting the presence and quantifying the abundance of vectors in the field. Until now, few studies have investigated the range of attraction of light traps. Methods Here we test a previously described mathematical model (Model I) and two novel models for the attraction of vectors to light traps (Model II and III). In Model I, Culicoides fly to the nearest trap from within a fixed range of attraction. In Model II Culicoides fly towards areas with greater light intensity, and in Model III Culicoides evaluate light sources in the field of view and fly towards the strongest. Model II and III incorporated the directionally dependent light field created around light traps with fluorescent light tubes. All three models were fitted to light trap collections obtained from two novel experimental setups in the field where traps were placed in different configurations. Results Results showed that overlapping ranges of attraction of neighboring traps extended the shared range of attraction. Model I did not fit data from any of the experimental setups. Model II could only fit data from one of the setups, while Model III fitted data from both experimental setups. Conclusions The model with the best fit, Model III, indicates that Culicoides continuously evaluate the light source direction and intensity. The maximum range of attraction of a single 4W CDC light trap was estimated to be approximately 15.25 meters. The attraction towards light traps is different from the attraction to host animals and thus light trap catches may not represent the vector species and numbers attracted to hosts. PMID:23497628

  6. A Modified Active Appearance Model Based on an Adaptive Artificial Bee Colony

    PubMed Central

    Othman, Zulaiha Ali

    2014-01-01

    The active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by modeling human faces with high accuracy under various physical and environmental circumstances. However, fitting such a model to an original image is a challenging task. The state of the art shows that optimization methods can be applied to this problem, although applying them efficiently remains difficult. Hence, in this paper we propose an AAM-based face recognition technique that resolves the AAM fitting problem by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, a proprietary 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed technique performs effectively in terms of face recognition accuracy. PMID:25165748

  7. PUSHing core-collapse simulations to explosion

    NASA Astrophysics Data System (ADS)

    Fröhlich, C.; Perego, A.; Hempe, M.; Ebinger, K.; Eichler, M.; Casanova, J.; Liebendörfer, M.; Thielemann, F.-K.

    2018-01-01

    We report on the PUSH method for artificially triggering core-collapse supernova explosions of massive stars in spherical symmetry. The PUSH method increases the energy deposition in the gain region proportionally to the heavy-flavor neutrino fluxes. We summarize the parameter dependence of the method and calibrate PUSH to reproduce SN 1987A observables. We identify a best-fit progenitor and set of parameters that fit the explosion properties of SN 1987A, assuming 0.1 M⊙ of fallback. For the explored progenitor range of 18-21 M⊙, we find correlations between explosion properties and the compactness of the progenitor model.

  8. An evaluation framework for Health Information Systems: human, organization and technology-fit factors (HOT-fit).

    PubMed

    Yusof, Maryati Mohd; Kuljis, Jasna; Papazafeiropoulou, Anastasia; Stergioulas, Lampros K

    2008-06-01

    The realization of Health Information Systems (HIS) requires rigorous evaluation that addresses technology, human and organization issues. Our review indicates that current evaluation methods evaluate different aspects of HIS, and that they can be improved upon. A new evaluation framework, human, organization and technology-fit (HOT-fit), was developed after a critical appraisal of the findings of existing HIS evaluation studies. HOT-fit builds on previous models of IS evaluation, in particular the IS Success Model and the IT-Organization Fit Model. This paper introduces the new framework for HIS evaluation, which incorporates comprehensive dimensions and measures of HIS and provides for technological, human and organizational fit. The methods comprised a literature review of HIS and IS evaluation studies and pilot testing of the developed framework. The framework was used to evaluate a Fundus Imaging System (FIS) of a primary care organization in the UK. The case study was conducted through observation, interview and document analysis. The main findings show that having the right user attitude and skills base, together with good leadership, an IT-friendly environment and good communication, can have a positive influence on system adoption. The comprehensive, specific evaluation factors, dimensions and measures in the new framework (HOT-fit) are applicable to HIS evaluation. The use of such a framework is argued to be useful not only for comprehensive evaluation of the particular FIS under investigation, but potentially also for any Health Information System in general.

  9. Toward a Model-Based Approach to the Clinical Assessment of Personality Psychopathology

    PubMed Central

    Eaton, Nicholas R.; Krueger, Robert F.; Docherty, Anna R.; Sponheim, Scott R.

    2015-01-01

    Recent years have witnessed tremendous growth in the scope and sophistication of statistical methods available to explore the latent structure of psychopathology, involving continuous, discrete, and hybrid latent variables. The availability of such methods has fostered optimism that they can facilitate movement from classification primarily crafted through expert consensus to classification derived from empirically based models of psychopathological variation. The explication of diagnostic constructs with empirically supported structures can then facilitate the development of assessment tools that appropriately characterize these constructs. Our goal in this paper is to illustrate how new statistical methods can inform the conceptualization of personality psychopathology and therefore its assessment. We use magical thinking as an example, because both theory and earlier empirical work suggested the possibility of discrete aspects to the latent structure of personality psychopathology, particularly forms of psychopathology involving distortions of reality testing, yet other data suggest that personality psychopathology is generally continuous in nature. For explanatory purposes, we directly compared the fit of a variety of latent variable models to magical thinking data from a sample enriched with clinically significant variation in psychotic symptomatology. Findings generally suggested that a continuous latent variable model best represented magical thinking, but results varied somewhat depending on different indices of model fit. We discuss the implications of the findings for classification and applied personality assessment. We also highlight some limitations of this type of approach that are illustrated by these data, including the importance of substantive interpretation, in addition to the use of model fit indices, when evaluating competing structural models. PMID:24007309

  10. The effect of different screw-tightening techniques on the strain generated on an internal-connection implant superstructure. Part 2: Models created with a splinted impression technique.

    PubMed

    Choi, Jung-Han

    2011-01-01

    This study aimed to evaluate the effect of different screw-tightening sequences, torques, and methods on the strains generated on an internal-connection implant (Astra Tech) superstructure with good fit. An edentulous mandibular master model and a metal framework directly connected to four parallel implants with a passive fit to each other were fabricated. Six stone casts were made from a dental stone master model by a splinted impression technique to represent a well-fitting situation with the metal framework. Strains generated by four screw-tightening sequences (1-2-3-4, 4-3-2-1, 2-4-3-1, and 2-3-1-4), two torques (10 and 20 Ncm), and two methods (one-step and two-step) were evaluated. In the two-step method, screws were tightened to the initial torque (10 Ncm) in a predetermined screw-tightening sequence and then to the final torque (20 Ncm) in the same sequence. Strains were recorded twice by three strain gauges attached to the framework (superior face midway between abutments). Deformation data were analyzed using multiple analysis of variance at a .05 level of statistical significance. In all stone casts, strains were produced by connection of the superstructure, regardless of screw-tightening sequence, torque, and method. No statistically significant differences in superstructure strains were found based on screw-tightening sequences (range, -409.8 to -413.8 μm/m), torques (-409.7 and -399.1 μm/m), or methods (-399.1 and -410.3 μm/m). Within the limitations of this in vitro study, screw-tightening sequence, torque, and method were not critical factors for the strain generated on a well-fitting internal-connection implant superstructure by the splinted impression technique. Further studies are needed to evaluate the effect of screw-tightening techniques on the preload stress in various different clinical situations.

  11. SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, X; Duan, J; Popple, R

    2014-06-01

    Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for searching coefficients of bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of the Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
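
    A compact illustration of the scheme just described: particles encode the four coefficients of a bi-exponential radial dose function g(r) = a1*exp(-b1*r) + a2*exp(-b2*r) and follow their personal and global bests. The tabulated values below are illustrative placeholders, not the published AgX-100 data:

```python
import numpy as np

def biexp(r, p):
    """Bi-exponential radial dose function g(r)."""
    a1, b1, a2, b2 = p
    return a1 * np.exp(-b1 * r) + a2 * np.exp(-b2 * r)

# Illustrative placeholder data (NOT the published AgX-100 values)
r_data = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0])
g_data = np.array([1.04, 1.00, 0.87, 0.72, 0.58, 0.46, 0.28, 0.13])

def fitness(p):
    """Maximum relative deviation from the tabulated values."""
    return np.max(np.abs(biexp(r_data, p) - g_data) / g_data)

rng = np.random.default_rng(42)
n_particles, n_dim = 40, 4
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
pos = rng.uniform(0.0, 2.0, (n_particles, n_dim))
vel = np.zeros_like(pos)
pbest = pos.copy()                             # personal best positions
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()       # global best position

for generation in range(1500):
    r1, r2 = rng.random((2, n_particles, n_dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```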

  12. Effective degrees of freedom: a flawed metaphor

    PubMed Central

    Janson, Lucas; Fithian, William; Hastie, Trevor J.

    2015-01-01

    Summary To most applied statisticians, a fitting procedure’s degrees of freedom is synonymous with its model complexity, or its capacity for overfitting to data. In particular, it is often used to parameterize the bias-variance tradeoff in model selection. We argue that, on the contrary, model complexity and degrees of freedom may correspond very poorly. We exhibit and theoretically explore various fitting procedures for which degrees of freedom is not monotonic in the model complexity parameter, and can exceed the total dimension of the ambient space even in very simple settings. We show that the degrees of freedom for any non-convex projection method can be unbounded. PMID:26977114
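
    The paper's point can be probed numerically through the covariance definition of degrees of freedom, df = (1/σ²) Σᵢ cov(ŷᵢ, yᵢ). In the hedged sketch below (our construction, not one of the authors' examples), a procedure that fits only a single coefficient, chosen by best-subset search over ten candidate predictors, already exhibits df well above 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 50, 10, 1.0
X = rng.normal(size=(n, p))
mu = np.zeros(n)                          # null truth: selection chases noise

def best_single_predictor_fit(X, y):
    """OLS on whichever single column fits y best (best subset of size 1)."""
    best_rss, best_yhat = np.inf, None
    for j in range(X.shape[1]):
        xj = X[:, [j]]
        beta = np.linalg.lstsq(xj, y, rcond=None)[0]
        yhat = xj @ beta
        rss = np.sum((y - yhat) ** 2)
        if rss < best_rss:
            best_rss, best_yhat = rss, yhat
    return best_yhat

# Monte Carlo estimate of df = (1/sigma^2) * sum_i cov(yhat_i, y_i)
reps = 2000
ys = mu + sigma * rng.normal(size=(reps, n))
yhats = np.array([best_single_predictor_fit(X, y) for y in ys])
cov = ((ys - ys.mean(axis=0)) * (yhats - yhats.mean(axis=0))).mean(axis=0)
df = cov.sum() / sigma**2                 # noticeably larger than 1
print(df)
```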

  13. Tests of a habitat suitability model for black-capped chickadees

    USGS Publications Warehouse

    Schroeder, Richard L.

    1990-01-01

    The black-capped chickadee (Parus atricapillus) Habitat Suitability Index (HSI) model provides a quantitative rating of the capability of a habitat to support breeding, based on measures related to food and nest site availability. The model assumption that tree canopy volume can be predicted from measures of tree height and canopy closure was tested using data from foliage volume studies conducted in the riparian cottonwood habitat along the South Platte River in Colorado. Least absolute deviations (LAD) regression showed that canopy cover and overstory tree height yielded volume predictions significantly lower than volume estimated by more direct methods. Revisions to these model relations resulted in improved predictions of foliage volume. The relation between the HSI and estimates of black-capped chickadee population densities was examined using LAD regression for both the original model and the model with the foliage volume revisions. Residuals from these models were compared to residuals from both a zero-slope model and an ideal model. The fit model for the original HSI differed significantly from the ideal model, whereas the fit model for the revised HSI did not differ significantly from the ideal model. However, neither the fit model for the original HSI nor the fit model for the revised HSI differed significantly from a model with a zero slope. Although further testing of the revised model is needed, its use is recommended for more realistic estimates of tree canopy volume and habitat suitability.

  14. Target 3-D reconstruction of streak tube imaging lidar based on Gaussian fitting

    NASA Astrophysics Data System (ADS)

    Yuan, Qingyu; Niu, Lihong; Hu, Cuichun; Wu, Lei; Yang, Hongru; Yu, Bing

    2018-02-01

    Streak images obtained by the streak tube imaging lidar (STIL) contain the distance-azimuth-intensity information of a scanned target, and a 3-D reconstruction of the target can be carried out by extracting the characteristic data of multiple streak images. Noise and other factors cause significant errors in reconstructions based on simple peak detection. To obtain a more precise 3-D reconstruction, a peak detection method based on Gaussian fitting with a trust-region algorithm is proposed in this work. Gaussian modeling is performed on the returned wave of each time channel of each frame; the modeling result, which effectively reduces noise interference and possesses a unique peak, is taken as the new returned waveform, and its feature data are then extracted through peak detection. Experimental data from an aerial target were used to verify this method. This work shows that the peak detection method based on Gaussian fitting reduces the extraction error of the feature data to less than 10%; using this method to extract the feature data and reconstruct the target makes it possible to achieve a spatial resolution as fine as 30 cm in the depth direction and improves the 3-D imaging accuracy of the STIL.
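
    A minimal sketch of the core idea, assuming a synthetic single-channel return waveform: fit a Gaussian to the pulse and take the fitted centre, rather than the noisy sample maximum, as the peak position:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, t0, w, c):
    return a * np.exp(-((t - t0) ** 2) / (2.0 * w ** 2)) + c

# Synthetic single-channel return waveform (time in ns)
t = np.linspace(0.0, 60.0, 240)
rng = np.random.default_rng(3)
wave = gaussian(t, 1.0, 27.3, 2.5, 0.05) + rng.normal(0.0, 0.05, t.size)

t_naive = t[np.argmax(wave)]       # raw peak detection: noise-sensitive

# Gaussian fitting: model the pulse, take the fitted centre as the peak
p0 = [wave.max(), t_naive, 2.0, float(wave.min())]
popt, _ = curve_fit(gaussian, t, wave, p0=p0)
t_fit = popt[1]                    # far less sensitive to noise than t_naive
```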

  15. LIFESTYLE INDICATORS AND CARDIORESPIRATORY FITNESS IN ADOLESCENTS

    PubMed Central

    de Victo, Eduardo Rossato; Ferrari, Gerson Luis de Moraes; da Silva, João Pedro; Araújo, Timóteo Leandro; Matsudo, Victor Keihan Rodrigues

    2017-01-01

    ABSTRACT Objective: To evaluate the lifestyle indicators associated with cardiorespiratory fitness in adolescents from Ilhabela, São Paulo, Brazil. Methods: The sample consisted of 181 adolescents (53% male) from the Mixed Longitudinal Project on Growth, Development, and Physical Fitness of Ilhabela. Body composition (weight, height, and body mass index, or BMI), school transportation, time spent sitting, physical activity, sports, television time (TV), having a TV in the bedroom, sleep, health perception, diet, and economic status (ES) were analyzed. Cardiorespiratory fitness was estimated by the submaximal progressive protocol performed on a cycle ergometer. Linear regression models were used with the stepwise method. Results: The sample average age was 14.8 years, and the average cardiorespiratory fitness was 42.2 mL.kg-1.min-1 (42.9 for boys and 41.4 for girls; p=0.341). In the total sample, BMI (unstandardized regression coefficient [B]=-0.03), height (B=-0.01), ES (B=0.10), gender (B=0.12), and age (B=0.03) were significantly associated with cardiorespiratory fitness. In boys, BMI, height, not playing any sports, and age were significantly associated with cardiorespiratory fitness. In girls, BMI, ES, and having a TV in the bedroom were significantly associated with cardiorespiratory fitness. Conclusions: Lifestyle indicators influenced the cardiorespiratory fitness; BMI, ES, and age influenced both sexes. Not playing any sports, for boys, and having a TV in the bedroom, for girls, also influenced cardiorespiratory fitness. Public health measures to improve lifestyle indicators can help to increase cardiorespiratory fitness levels. PMID:28977318

  16. Potential fitting biases resulting from grouping data into variable width bins

    NASA Astrophysics Data System (ADS)

    Towers, S.

    2014-07-01

    When reading peer-reviewed scientific literature describing any analysis of empirical data, it is natural and correct to proceed with the underlying assumption that the experimenters have made good-faith efforts to ensure that their analyses yield unbiased results. However, particle physics experiments are expensive and time consuming to carry out, so if an analysis has inherent bias (even if unintentional), much money and effort can be wasted trying to replicate or understand the results, particularly if the analysis is fundamental to our understanding of the universe. In this note we discuss the significant biases that can result from data binning schemes. As we show, if data are binned such that they provide the best comparison to a particular (but incorrect) model, the resulting model parameter estimates when fitting to the binned data can be significantly biased, leading us to accept the model hypothesis too often when it is not in fact true. When using binned likelihood or least squares methods there is of course no a priori requirement that data bin sizes be constant, but we show that fitting to data grouped into variable-width bins is particularly prone to producing biased results if the bin boundaries are chosen to optimize the comparison of the binned data to a wrong model. The degree of bias that can be achieved simply with variable binning can be surprisingly large. Fitting the data with an unbinned likelihood method, when possible, is the best way for researchers to show that their analyses are not biased by binning effects. Failing that, equal bin widths should be employed as a cross-check of the fitting analysis whenever possible.
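
    The recommended cross-check can be sketched with synthetic exponential data: the unbinned maximum-likelihood estimate is insensitive to bin boundaries, while the binned least-squares estimate shifts as the variable-width edges are changed (our illustration, not the note's particle-physics example):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon

rng = np.random.default_rng(7)
data = expon.rvs(scale=2.0, size=1000, random_state=rng)

# Unbinned maximum likelihood: immune to bin-boundary choices
nll = lambda s: -np.sum(expon.logpdf(data, scale=s))
mle = minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded").x

# Binned least squares with arbitrary variable-width bins
edges = np.array([0.0, 0.3, 1.0, 2.5, 6.0, 15.0])
counts, _ = np.histogram(data, edges)
widths = np.diff(edges)
centers = 0.5 * (edges[:-1] + edges[1:])

def chi2(s):
    expected = data.size * widths * expon.pdf(centers, scale=s)
    return np.sum((counts - expected) ** 2 / np.maximum(expected, 1.0))

binned = minimize_scalar(chi2, bounds=(0.1, 10.0), method="bounded").x
# Re-running with different `edges` moves `binned` around; `mle` stays put.
```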

  17. Psychosocial work environment and health in U.S. metropolitan areas: a test of the demand-control and demand-control-support models.

    PubMed

    Muntaner, C; Schoenbach, C

    1994-01-01

    The authors use confirmatory factor analysis to investigate the psychosocial dimensions of work environments relevant to health outcomes, in a representative sample of five U.S. metropolitan areas. Through an aggregated inference system, scales from Schwartz and associates' job scoring system and from the Dictionary of Occupational Titles (DOT) were employed to examine two alternative models: the demand-control model of Karasek and Theorell and Johnson's demand-control-support model. Confirmatory factor analysis was used to test the two models. The two multidimensional models yielded better fits than an unstructured model. After allowing for the measurement error variance due to the method of assessment (Schwartz and associates' system or DOT), both models yielded acceptable goodness-of-fit indices, but the fit of the demand-control-support model was significantly better. Overall these results indicate that the dimensions of Control (substantive complexity of work, skill discretion, decision authority), Demands (physical exertion, physical demands and hazards), and Social Support (coworker and supervisor social supports) provide an acceptable account of the psychosocial dimensions of work associated with health outcomes.

  18. Software Dependability Assessment Methods.

    DTIC Science & Technology

    1986-11-01

    ...a bug is found and immediately removed. The model provides a good fit with data. The parameter estimates are reasonable for the data sets tested. ...where n = the number of errors found to date. Study Results: The model provides a good fit with data. The model runs into slight trouble with its "no...

  19. Modelling the distribution of chickens, ducks, and geese in China

    USGS Publications Warehouse

    Prosser, Diann J.; Wu, Junxi; Ellis, Erie C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius

    2011-01-01

    Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China's chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for 1/4 of the sample data which were not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and 4 regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China's first high resolution, species level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives.

  20. Modelling the distribution of chickens, ducks, and geese in China

    PubMed Central

    Prosser, Diann J.; Wu, Junxi; Ellis, Erle C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius

    2011-01-01

    Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China’s chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for ¼ of the sample data which was not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and 4 regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China’s first high resolution, species level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives. PMID:21765567

  1. Calibrating reaction rates for the CREST model

    NASA Astrophysics Data System (ADS)

    Handley, Caroline A.; Christie, Michael A.

    2017-01-01

    The CREST reactive-burn model uses entropy-dependent reaction rates that, until now, have been manually tuned to fit shock-initiation and detonation data in hydrocode simulations. This paper describes the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using particle swarm optimisation. The automatic method is applied to EDC32, to help develop the first CREST model for this conventional high explosive.

  2. Improving the Fitness of High-Dimensional Biomechanical Models via Data-Driven Stochastic Exploration

    PubMed Central

    Bustamante, Carlos D.; Valero-Cuevas, Francisco J.

    2010-01-01

    The field of complex biomechanical modeling has begun to rely on Monte Carlo techniques to investigate the effects of parameter variability and measurement uncertainty on model outputs, search for optimal parameter combinations, and define model limitations. However, advanced stochastic methods to perform data-driven explorations, such as Markov chain Monte Carlo (MCMC), become necessary as the number of model parameters increases. Here, we demonstrate the feasibility of an MCMC approach to improve the fitness of realistically large biomechanical models, in what is, to our knowledge, its first such use. We used a Metropolis–Hastings algorithm to search increasingly complex parameter landscapes (3, 8, 24, and 36 dimensions) to uncover underlying distributions of anatomical parameters of a “truth model” of the human thumb on the basis of simulated kinematic data (thumbnail location, orientation, and linear and angular velocities) polluted by zero-mean, uncorrelated multivariate Gaussian “measurement noise.” Driven by these data, ten Markov chains searched each model parameter space for the subspace that best fit the data (posterior distribution). As expected, the convergence time increased, more local minima were found, and marginal distributions broadened as the parameter space complexity increased. In the 36-D scenario, some chains found local minima but the majority of chains converged to the true posterior distribution (confirmed using a cross-validation dataset), thus demonstrating the feasibility and utility of these methods for realistically large biomechanical problems. PMID:19272906
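
    A self-contained sketch of the Metropolis–Hastings ingredient (random-walk proposals, log-ratio accept/reject, multiple chains). The toy Gaussian log-posterior stands in for the kinematic lack-of-fit function; it is not the thumb model itself:

```python
import numpy as np

def log_posterior(theta, data):
    """Toy Gaussian log-posterior: quadratic misfit to 'data' plus a
    broad Gaussian prior; stands in for the kinematic lack-of-fit."""
    return -0.5 * np.sum((data - theta) ** 2) - 0.5 * np.sum(theta ** 2) / 100.0

def metropolis_hastings(data, n_dim, n_steps=20000, step=0.3, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=n_dim)                 # random start for this chain
    logp = log_posterior(theta, data)
    samples = np.empty((n_steps, n_dim))
    for i in range(n_steps):
        proposal = theta + step * rng.normal(size=n_dim)   # symmetric random walk
        logp_new = log_posterior(proposal, data)
        if np.log(rng.random()) < logp_new - logp:         # accept/reject
            theta, logp = proposal, logp_new
        samples[i] = theta
    return samples

# Ten chains over a 36-dimensional space, as in the study's largest scenario
chains = [metropolis_hastings(np.ones(36), 36, seed=s) for s in range(10)]
```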

  3. Forecasting incidence of hemorrhagic fever with renal syndrome in China using ARIMA model

    PubMed Central

    2011-01-01

    Background China is the country most seriously affected by hemorrhagic fever with renal syndrome (HFRS), with 90% of HFRS cases reported globally. At present, HFRS is getting worse, with increasing cases and natural foci in China. Therefore, there is an urgent need for monitoring and predicting HFRS incidence to make the control of HFRS more effective. In this study, we applied a stochastic autoregressive integrated moving average (ARIMA) model with the objective of monitoring and short-term forecasting of HFRS incidence in China. Methods Chinese HFRS data from 1975 to 2008 were used to fit the ARIMA model. The Akaike Information Criterion (AIC) and Ljung-Box test were used to evaluate the constructed models. Subsequently, the fitted ARIMA model was applied to obtain the fitted HFRS incidence from 1978 to 2008, which was compared with the corresponding observed values. To assess the validity of the proposed model, the mean absolute percentage error (MAPE) between the observed and fitted HFRS incidence (1978-2008) was calculated. Finally, the fitted ARIMA model was used to forecast the incidence of HFRS for the years 2009 to 2011. All analyses were performed using SAS 9.1 at a significance level of p < 0.05. Results The goodness-of-fit test of the optimum ARIMA(0,3,1) model showed non-significant autocorrelations in the residuals of the model (Ljung-Box Q statistic = 5.95, P = 0.3113). The fitted values from the ARIMA(0,3,1) model for the years 1978-2008 closely followed the observed values for the same years, with a mean absolute percentage error (MAPE) of 12.20%. The forecast values from 2009 to 2011 were 0.69, 0.86, and 1.21 per 100,000 population, respectively. Conclusion ARIMA models applied to historical HFRS incidence data are an important tool for HFRS surveillance in China. This study shows that accurate forecasting of HFRS incidence is possible using an ARIMA model. If the values predicted in this study are accurate, China can expect a rise in HFRS incidence. PMID:21838933
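
    The modeling steps translate directly into, for example, Python's statsmodels; the series below is a synthetic stand-in for the 1975-2008 incidence data:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for the annual HFRS incidence per 100,000 (1975-2008)
rng = np.random.default_rng(5)
y = pd.Series(2.0 + np.cumsum(rng.normal(0.0, 0.1, 34)),
              index=pd.period_range("1975", periods=34, freq="Y"))

model = ARIMA(y, order=(0, 3, 1)).fit()     # the study's ARIMA(0,3,1)
print(model.aic)                            # AIC for model comparison

fitted = model.predict(start="1978", end="2008")
mape = np.mean(np.abs((y["1978":] - fitted) / y["1978":])) * 100.0

forecast = model.forecast(steps=3)          # point forecasts for 2009-2011
```

    A Ljung-Box check on the residuals is available separately via statsmodels.stats.diagnostic.acorr_ljungbox.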

  4. Influencing factors and kinetics analysis on the leaching of iron from boron carbide waste-scrap with ultrasound-assisted method.

    PubMed

    Li, Xin; Xing, Pengfei; Du, Xinghong; Gao, Shuaibo; Chen, Chen

    2017-09-01

    In this paper, the ultrasound-assisted leaching of iron from boron carbide waste-scrap was investigated and the different influencing factors were optimized. The factors investigated were acid concentration, liquid-solid ratio, leaching temperature, and ultrasonic power and frequency. Leaching of iron with the conventional method at various temperatures was also performed. The results show maximum iron leaching ratios of 87.4% for 80 min of conventional leaching and 94.5% for 50 min of leaching with ultrasound assistance. The leaching of waste-scrap with the conventional method fits a chemical reaction-controlled model. The leaching with ultrasound assistance fits a chemical reaction-controlled model in the first stage and a diffusion-controlled model in the second stage. Compared with the conventional method, the assistance of ultrasound greatly improves the iron leaching ratio, accelerates the leaching rate, shortens the leaching time and lowers the residual iron. The advantages of ultrasound-assisted leaching were also confirmed by SEM-EDS analysis and elemental analysis of the raw material and leached solid samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Fast, scalable prediction of deleterious noncoding variants from functional and population genomic data.

    PubMed

    Huang, Yi-Fei; Gulko, Brad; Siepel, Adam

    2017-04-01

    Many genetic variants that influence phenotypes of interest are located outside of protein-coding genes, yet existing methods for identifying such variants have poor predictive power. Here we introduce a new computational method, called LINSIGHT, that substantially improves the prediction of noncoding nucleotide sites at which mutations are likely to have deleterious fitness consequences, and which, therefore, are likely to be phenotypically important. LINSIGHT combines a generalized linear model for functional genomic data with a probabilistic model of molecular evolution. The method is fast and highly scalable, enabling it to exploit the 'big data' available in modern genomics. We show that LINSIGHT outperforms the best available methods in identifying human noncoding variants associated with inherited diseases. In addition, we apply LINSIGHT to an atlas of human enhancers and show that the fitness consequences at enhancers depend on cell type, tissue specificity, and constraints at associated promoters.

  6. Item Response Theory Modeling of the Philadelphia Naming Test.

    PubMed

    Fergadiotis, Gerasimos; Kellough, Stacey; Hula, William D

    2015-06-01

    In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating explanatory variables to item difficulty. This article describes the statistical model underlying the computer adaptive PNT presented in a companion article (Hula, Kellough, & Fergadiotis, 2015). Using archival data, we evaluated the fit of the PNT to 1- and 2-parameter logistic models and examined the precision of the resulting parameter estimates. We regressed the item difficulty estimates on three predictor variables: word length, age of acquisition, and contextual diversity. The 2-parameter logistic model demonstrated marginally better fit, but the fit of the 1-parameter logistic model was adequate. Precision was excellent for both person ability and item difficulty estimates. Word length, age of acquisition, and contextual diversity all independently contributed to variance in item difficulty. Item-response-theory methods can be productively used to analyze and quantify anomia severity in aphasia. Regression of item difficulty on lexical variables supported the validity of the PNT and interpretation of anomia severity scores in the context of current word-finding models.
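
    As a sketch of the model's form only, a 1-parameter logistic (Rasch-type) model can be fitted to synthetic naming responses by joint maximum likelihood; the published analyses use more sophisticated estimation, so everything here is illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# responses[i, j] = 1 if (synthetic) person i named item j correctly
rng = np.random.default_rng(2)
n_persons, n_items = 200, 30
theta_true = rng.normal(0.0, 1.0, n_persons)      # person abilities
b_true = rng.normal(0.0, 1.0, n_items)            # item difficulties
p_true = expit(theta_true[:, None] - b_true[None, :])
responses = (rng.random((n_persons, n_items)) < p_true).astype(float)

def neg_loglik(params):
    theta, b = params[:n_persons], params[n_persons:]
    p = np.clip(expit(theta[:, None] - b[None, :]), 1e-9, 1 - 1e-9)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

res = minimize(neg_loglik, np.zeros(n_persons + n_items), method="L-BFGS-B")
b_hat = res.x[n_persons:]
b_hat -= b_hat.mean()    # anchor the scale (difficulties sum to zero)
```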

  7. Estimated landmark calibration of biomechanical models for inverse kinematics.

    PubMed

    Trinler, Ursula; Baker, Richard

    2018-01-01

    Inverse kinematics is emerging as the optimal method in movement analysis to fit a multi-segment biomechanical model to experimental marker positions. A key part of this process is calibrating the model to the dimensions of the individual being analysed which requires scaling of the model, pose estimation and localisation of tracking markers within the relevant segment coordinate systems. The aim of this study is to propose a generic technique for this process and test a specific application to the OpenSim model Gait2392. Kinematic data from 10 healthy adult participants were captured in static position and normal walking. Results showed good average static and dynamic fitting errors between virtual and experimental markers of 0.8 cm and 0.9 cm, respectively. Highest fitting errors were found on the epicondyle (static), feet (static, dynamic) and on the thigh (dynamic). These result from inconsistencies between the model geometry and degrees of freedom and the anatomy and movement pattern of the individual participants. A particular limitation is in estimating anatomical landmarks from the bone meshes supplied with Gait2392 which do not conform with the bone morphology of the participants studied. Soft tissue artefact will also affect fitting the model to walking trials. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
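
    For each segment, pose estimation of this kind reduces to finding the rigid transform that best aligns the model's virtual markers with the experimental ones. A common least-squares solution is the Kabsch algorithm, sketched below as a generic illustration, not OpenSim's exact implementation:

```python
import numpy as np

def kabsch(P, Q):
    """Rigid rotation R and translation t minimising ||P @ R.T + t - Q||
    for matched marker sets P, Q of shape (n_markers, 3)."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

def rms_fit_error(P, Q):
    """RMS distance between transformed virtual and experimental markers."""
    R, t = kabsch(P, Q)
    return np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
```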

  8. Inferring Diffusion Dynamics from FCS in Heterogeneous Nuclear Environments

    PubMed Central

    Tsekouras, Konstantinos; Siegel, Amanda P.; Day, Richard N.; Pressé, Steve

    2015-01-01

    Fluorescence correlation spectroscopy (FCS) is a noninvasive technique that probes the diffusion dynamics of proteins down to single-molecule sensitivity in living cells. Critical mechanistic insight is often drawn from FCS experiments by fitting the resulting time-intensity correlation function, G(t), to known diffusion models. When simple models fail, the complex diffusion dynamics of proteins within heterogeneous cellular environments can be fit to anomalous diffusion models with adjustable anomalous exponents. Here, we take a different approach. We use the maximum entropy method to show—first using synthetic data—that a model for proteins diffusing while stochastically binding/unbinding to various affinity sites in living cells gives rise to a G(t) that could otherwise be equally well fit using anomalous diffusion models. We explain the mechanistic insight derived from our method. In particular, using real FCS data, we describe how the effects of cell crowding and binding to affinity sites manifest themselves in the behavior of G(t). Our focus is on the diffusive behavior of an engineered protein in 1) the heterochromatin region of the cell’s nucleus as well as 2) in the cell’s cytoplasm and 3) in solution. The protein consists of the basic region-leucine zipper (BZip) domain of the CCAAT/enhancer-binding protein (C/EBP) fused to fluorescent proteins. PMID:26153697
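
    For orientation, the anomalous-diffusion alternative mentioned above can be fitted in a few lines; the simplified (2-D-style) correlation model and the synthetic curve are illustrative and omit the full 3-D and maximum-entropy machinery of the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def g_anomalous(t, g0, tau, alpha):
    """Simplified anomalous-diffusion FCS correlation function."""
    return g0 / (1.0 + (t / tau) ** alpha)

t = np.logspace(-5, 1, 60)                     # lag times (s)
rng = np.random.default_rng(4)
g = g_anomalous(t, 0.1, 1e-3, 0.8) + rng.normal(0.0, 1e-3, t.size)

popt, _ = curve_fit(g_anomalous, t, g, p0=[0.05, 1e-3, 1.0])
g0, tau, alpha = popt   # alpha < 1 signals apparent subdiffusion
```

    The paper's argument is precisely that binding/unbinding kinetics can reproduce such an alpha < 1 signature, so the fitted exponent alone does not identify the mechanism.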

  9. A Numerical Method for Calculating Stellar Occultation Light Curves from an Arbitrary Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Chamberlain, D. M.; Elliot, J. L.

    1997-01-01

    We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of the model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^(-4) of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^(-4), with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.

  10. Rasch fit statistics and sample size considerations for polytomous data

    PubMed Central

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-01-01

    Background Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Methods Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire – 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. Results The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. Conclusion It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges. PMID:18510722

  11. The Impact of Intraclass Correlation on the Effectiveness of Level-Specific Fit Indices in Multilevel Structural Equation Modeling

    PubMed Central

    Hsu, Hsien-Yuan; Lin, Jr-Hung; Kwok, Oi-Man; Acosta, Sandra; Willson, Victor

    2016-01-01

    Several researchers have recommended that level-specific fit indices should be applied to detect the lack of model fit at any level in multilevel structural equation models. Although we concur with their view, we note that these studies did not sufficiently consider the impact of intraclass correlation (ICC) on the performance of level-specific fit indices. Our study proposed to fill this gap in the methodological literature. A Monte Carlo study was conducted to investigate the performance of (a) level-specific fit indices derived by a partially saturated model method (e.g., CFIPS_B and CFIPS_W) and (b) SRMRW and SRMRB in terms of their performance in multilevel structural equation models across varying ICCs. The design factors included intraclass correlation (ICC: ICC1 = 0.091 to ICC6 = 0.500), numbers of groups in between-level models (NG: 50, 100, 200, and 1,000), group size (GS: 30, 50, and 100), and type of misspecification (no misspecification, between-level misspecification, and within-level misspecification). Our simulation findings raise a concern regarding the performance of between-level-specific partial saturated fit indices in low ICC conditions: the performances of both TLIPS_B and RMSEAPS_B were more influenced by ICC compared with CFIPS_B and SRMRB. However, when traditional cutoff values (RMSEA≤ 0.06; CFI, TLI≥ 0.95; SRMR≤ 0.08) were applied, CFIPS_B and TLIPS_B were still able to detect misspecified between-level models even when ICC was as low as 0.091 (ICC1). On the other hand, both RMSEAPS_B and SRMRB were not recommended under low ICC conditions. PMID:29795901

  12. Improved mapping of radio sources from VLBI data by least-square fit

    NASA Technical Reports Server (NTRS)

    Rodemich, E. R.

    1985-01-01

    A method is described for producing improved maps of radio sources from Very Long Baseline Interferometry (VLBI) data. The method is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data are modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. If the rms deviation is used to measure the closeness of this fit to the observed values, we are led to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods which we show converge automatically to the minimum with no user intervention. The resulting brightness distribution will furnish the best fit to the data among all brightness distributions of the given resolution.

  13. Broadband distortion modeling in Lyman-α forest BAO fitting

    DOE PAGES

    Blomqvist, Michael; Kirkby, David; Bautista, Julian E.; ...

    2015-11-23

    Recently, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≃ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. Here, we describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. In implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter bF and the redshift-space distortion parameter βF for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on βF and the combination bF(1+βF) by more than a factor of seven. The measured values at redshift z = 2.3 are βF = 1.39 (+0.11/-0.10, +0.24/-0.19, +0.38/-0.28) and bF(1+βF) = -0.374 (+0.007/-0.007, +0.013/-0.014, +0.020/-0.022), where the quoted intervals are the 1σ, 2σ and 3σ statistical errors. Our fitting software and the input files needed to reproduce our main results are publicly available.

  14. Broadband distortion modeling in Lyman-α forest BAO fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blomqvist, Michael; Kirkby, David; Margala, Daniel, E-mail: cblomqvi@uci.edu, E-mail: dkirkby@uci.edu, E-mail: dmargala@uci.edu

    2015-11-01

    In recent years, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≅ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. We describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. Implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter bF and the redshift-space distortion parameter βF for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on βF and the combination bF(1+βF) by more than a factor of seven. The measured values at redshift z = 2.3 are βF = 1.39 (+0.11/-0.10, +0.24/-0.19, +0.38/-0.28) and bF(1+βF) = -0.374 (+0.007/-0.007, +0.013/-0.014, +0.020/-0.022), where the quoted intervals are the 1σ, 2σ and 3σ statistical errors. Our fitting software and the input files needed to reproduce our main results are publicly available.

  15. Performance of time-series methods in forecasting the demand for red blood cell transfusion.

    PubMed

    Pereira, Arturo

    2004-05-01

    Planning future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care, university hospital. Three time-series methods were investigated: the autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the younger to test the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)(12) model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the +/- 10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rates for the three methods were 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the +/- 10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in planning blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
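
    For reference, a Holt-Winters (additive trend and seasonality) fit of the kind compared above takes only a few lines in Python's statsmodels; the monthly series here is synthetic, and the coverage computation mirrors the paper's definition under the assumption that supply equals the forecast:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly RBC demand with trend and annual seasonality
rng = np.random.default_rng(8)
t = np.arange(180)
y = pd.Series(400.0 + 0.5 * t + 30.0 * np.sin(2 * np.pi * t / 12)
              + rng.normal(0.0, 10.0, t.size),
              index=pd.date_range("1988-01-01", periods=180, freq="MS"))

train, test = y[:-24], y[-24:]                 # older segment fits, younger tests
model = ExponentialSmoothing(train, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
forecast = model.forecast(24)                  # 2-year horizon

# Coverage: fraction of months in which forecast-based supply meets real demand
coverage = float(np.mean(forecast.values >= test.values))
```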

  16. Atmospheric Retrieval Analysis of the Directly Imaged Exoplanet HR 8799b

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Min; Heng, Kevin; Irwin, Patrick G. J.

    2013-12-01

    Directly imaged exoplanets are unexplored laboratories for the application of the spectral and temperature retrieval method, where the chemistry and composition of their atmospheres are inferred from inverse modeling of the available data. As a pilot study, we focus on the extrasolar gas giant HR 8799b, for which more than 50 data points are available. We upgrade our non-linear optimal estimation retrieval method to include a phenomenological model of clouds that requires the cloud optical depth and monodisperse particle size to be specified. Previous studies have focused on forward models with assumed values of the exoplanetary properties; there is no consensus on the best-fit values of the radius, mass, surface gravity, and effective temperature of HR 8799b. We show that cloud-free models produce reasonable fits to the data if the atmosphere is of super-solar metallicity and non-solar elemental abundances. Intermediate cloudy models with moderate values of the cloud optical depth and micron-sized particles provide an equally reasonable fit to the data and require a lower mean molecular weight. We report our best-fit values for the radius, mass, surface gravity, and effective temperature of HR 8799b. The mean molecular weight is about 3.8, while the carbon-to-oxygen ratio is about unity due to the prevalence of carbon monoxide. Our study emphasizes the need for robust claims about the nature of an exoplanetary atmosphere to be based on analyses involving both photometry and spectroscopy and inferred from beyond a few photometric data points, such as are typically reported for hot Jupiters.

  17. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
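
    In symbols, the construction reads roughly as follows; the notation is ours, sketched from the description above rather than taken from the paper. The best-fit path of quasi-equilibrium densities ρ_λ(t) minimizes the time integral of the ensemble-averaged, weighted, squared Liouville residual:

```latex
% Illustrative notation only: W is the arbitrary weight operator,
% {.,.} the Poisson bracket, and <.> the ensemble average.
\[
  \hat{\lambda}(\cdot) \;=\; \operatorname*{arg\,min}_{\lambda(\cdot)}
  \int_{t_0}^{t_1} \mathcal{L}[\lambda](t)\,\mathrm{d}t,
  \qquad
  \mathcal{L}[\lambda](t) \;=\;
  \Big\langle \big\| W^{1/2}\big( \partial_t \rho_{\lambda(t)}
  - \{ H, \rho_{\lambda(t)} \} \big) \big\|^{2} \Big\rangle .
\]
```

    The closed reduced equations for λ(t) then follow from Hamilton-Jacobi theory, with the value function playing the role of the dissipation potential.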

  18. The Role of Perceived and Actual Motor Competency on Children's Physical Activity and Cardiorespiratory Fitness during Middle Childhood

    ERIC Educational Resources Information Center

    Gu, Xiangli; Thomas, Katherine Thomas; Chen, Yu-Lin

    2017-01-01

    Purpose: Guided by Stodden et al.'s (2008) conceptual model, the purpose of this study was to examine the associations among perceived competence, actual motor competence (MC), physical activity (PA), and cardiorespiratory fitness in elementary children. The group differences were also investigated as a function of MC levels. Methods: A…

  19. Gray Matter Correlates of Fluid, Crystallized, and Spatial Intelligence: Testing the P-FIT Model

    ERIC Educational Resources Information Center

    Colom, Roberto; Haier, Richard J.; Head, Kevin; Alvarez-Linera, Juan; Quiroga, Maria Angeles; Shih, Pei Chun; Jung, Rex E.

    2009-01-01

    The parieto-frontal integration theory (P-FIT) nominates several areas distributed throughout the brain as relevant for intelligence. This theory was derived from previously published studies using a variety of both imaging methods and tests of cognitive ability. Here we test this theory in a new sample of young healthy adults (N = 100) using a…

  20. Bayesian cross-validation for model evaluation and selection, with application to the North American Breeding Bird Survey

    USGS Publications Warehouse

    Link, William; Sauer, John R.

    2016-01-01

    The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion. We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
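
    As a concrete anchor for the quantities named above, here is a hedged numpy sketch of WAIC computed from posterior draws of pointwise log-likelihoods; this is the standard formula, not code from the paper:

```python
import numpy as np

def waic(log_lik):
    """WAIC from an (n_draws, n_observations) array of pointwise
    log-likelihood values evaluated at posterior draws."""
    n_draws = log_lik.shape[0]
    # log pointwise predictive density, computed stably
    lppd = np.sum(np.logaddexp.reduce(log_lik, axis=0) - np.log(n_draws))
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))  # effective parameter count
    return -2.0 * (lppd - p_waic)                     # deviance scale: lower is better
```

    Cross-validation proper would instead refit (or reweight) the model with observations held out; WAIC approximates that predictive assessment from a single posterior sample.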

  1. Modeling aboveground tree woody biomass using national-scale allometric methods and airborne lidar

    NASA Astrophysics Data System (ADS)

    Chen, Qi

    2015-08-01

    Estimating tree aboveground biomass (AGB) and carbon (C) stocks using remote sensing is a critical component for understanding the global C cycle and mitigating climate change. However, the importance of allometry for remote sensing of AGB has not been recognized until recently. The overarching goals of this study are to understand the differences and relationships among three national-scale allometric methods (CRM, Jenkins, and the regional models) of the Forest Inventory and Analysis (FIA) program in the U.S. and to examine the impacts of using alternative allometry on the fitting statistics of remote sensing-based woody AGB models. Airborne lidar data from three study sites in the Pacific Northwest, USA were used to predict woody AGB estimated from the different allometric methods. It was found that the CRM and Jenkins estimates of woody AGB are related via the CRM adjustment factor. In terms of lidar-biomass modeling, CRM had the smallest model errors, the Jenkins method had the largest, and the regional method fell in between. The superior model fit of CRM is attributed to its inclusion of tree height in calculating merchantable stem volume and to the strong dependence of non-merchantable stem biomass on merchantable stem biomass. This study also argues that it is important to characterize the allometric model errors to gain a complete understanding of the remotely sensed AGB prediction errors.

  2. ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.

    2011-04-20

    While considerable advances have been made in accounting for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have generally been ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
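
    The principal-component summary of calibration samples can be sketched as follows (a simplified illustration, assuming a stack of plausible effective-area curves tabulated on a common energy grid; array names are ours, not the authors'):

      import numpy as np

      def pca_summary(curves, n_components=5):
          # curves: (n_samples, n_energies) plausible effective-area curves
          mean_curve = curves.mean(axis=0)
          u, s, vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
          # scale each component so a unit-normal coefficient reproduces
          # the sample spread along that principal direction
          comps = (s[:n_components, None] / np.sqrt(len(curves) - 1)) * vt[:n_components]
          return mean_curve, comps

      def draw_curve(mean_curve, comps, rng):
          # one plausible calibration realization, e.g. for use inside an MCMC step
          return mean_curve + rng.standard_normal(len(comps)) @ comps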

  3. Extracting Fitness Relationships and Oncogenic Patterns among Driver Genes in Cancer.

    PubMed

    Zhang, Xindong; Gao, Lin; Jia, Songwei

    2017-12-25

    Driver mutations provide fitness advantages to cancer cells, and their accumulation increases the fitness of cancer cells and accelerates cancer progression. This work seeks to extract the patterns accumulated by driver genes ("fitness relationships") in tumorigenesis. We introduce a network-based method for extracting the fitness relationships of driver genes by modeling the network properties of the "fitness" of cancer cells. Colon adenocarcinoma (COAD) and skin cutaneous malignant melanoma (SKCM) are employed as case studies. Consistent results derived from different background networks suggest the reliability of the identified fitness relationships. Additionally, co-occurrence analysis and pathway analysis reveal the functional significance of the fitness relationships in signal transduction. In addition, a subset of driver genes called the "fitness core" is recognized for each case. Further analyses indicate the functional importance of the fitness core in carcinogenesis, and provide potential therapeutic opportunities for medicinal intervention. Fitness relationships characterize the functional continuity among driver genes in carcinogenesis, suggest new insights into the oncogenic mechanisms of cancers, and provide guiding information for medicinal intervention.

  4. Mapping ecological systems with a random forest model: tradeoffs between errors and bias

    Treesearch

    Emilie Grossmann; Janet Ohmann; James Kagan; Heather May; Matthew Gregory

    2010-01-01

    New methods for predictive vegetation mapping allow improved estimations of plant community composition across large regions. Random Forest (RF) models limit over-fitting problems of other methods, and are known for making accurate classification predictions from noisy, nonnormal data, but can be biased when plot samples are unbalanced. We developed two contrasting...
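
    A minimal sketch of this kind of RF classification, with balanced class weights as one common mitigation for unbalanced plot samples (synthetic stand-in data; not the authors' code):

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier

      # stand-in for plot data: an unbalanced three-class problem
      X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                                 n_classes=3, weights=[0.7, 0.2, 0.1], random_state=0)

      rf = RandomForestClassifier(n_estimators=500,
                                  class_weight="balanced",  # counteracts unbalanced samples
                                  oob_score=True, random_state=0)
      rf.fit(X, y)
      print("out-of-bag accuracy:", round(rf.oob_score_, 3))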

  5. Older driver fitness-to-drive evaluation using naturalistic driving data.

    PubMed

    Guo, Feng; Fang, Youjia; Antin, Jonathan F

    2015-09-01

    As our driving population continues to age, it is becoming increasingly important to find a small set of easily administered fitness metrics that can meaningfully and reliably identify at-risk seniors requiring more in-depth evaluation of their driving skills and weaknesses. Sixty driver assessment metrics related to fitness-to-drive were examined for 20 seniors who were followed for a year using the naturalistic driving paradigm. Principal component analysis and negative binomial regression modeling approaches were used to develop parsimonious models relating the most highly predictive of the driver assessment metrics to the safety-related outcomes observed in the naturalistic driving data. This study provides important confirmation, using naturalistic driving methods, of the relationship between contrast sensitivity and crash-related events. The results of this study provide crucial information on the continuing journey to identify metrics and protocols that could be applied to determine seniors' fitness to drive. Published by Elsevier Ltd.
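
    The modeling pipeline described (principal components of a large metric battery feeding a negative binomial count regression) can be sketched as follows; the data here are random placeholders, not the study's:

      import numpy as np
      import statsmodels.api as sm
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      metrics = rng.normal(size=(20, 60))   # 60 assessment metrics, 20 drivers
      events = rng.poisson(2, size=20)      # safety-related event counts

      # reduce the metric battery to a few principal component scores
      scores = PCA(n_components=3).fit_transform(metrics)
      X = sm.add_constant(scores)

      # negative binomial regression of event counts on the scores
      fit = sm.GLM(events, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
      print(fit.params)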

  6. Progress Toward Optimizing Prosthetic Socket Fit and Suspension Using Elevated Vacuum to Promote Residual Limb Health.

    PubMed

    Wernke, Matthew M; Schroeder, Ryan M; Haynes, Michael L; Nolt, Lonnie L; Albury, Alexander W; Colvin, James M

    2017-07-01

    Objective: Prosthetic sockets are custom made for each amputee, yet there are no quantitative tools to determine the appropriateness of socket fit. Ensuring a proper socket fit can have significant effects on the health of residual limb soft tissues and overall function and acceptance of the prosthetic limb. Previous work found that elevated vacuum pressure data can detect movement between the residual limb and the prosthetic socket; however, the correlation between the two was specific to each user. The overall objective of this work is to determine the relationship between elevated vacuum pressure deviations and prosthetic socket fit. Approach: A tension compression machine was used to apply repeated controlled forces onto a residual limb model with sockets of different internal volume. Results: The vacuum pressure-displacement relationship was dependent on socket fit. The vacuum pressure data were sensitive enough to detect differences of 1.5% global volume and can likely detect differences even smaller. Limb motion was reduced as surface area of contact between the limb model and socket was maximized. Innovation: The results suggest that elevated vacuum pressure data provide information to quantify socket fit. Conclusions: This study provides evidence that the use of elevated vacuum pressure data may provide a method for prosthetists to quantify and monitor socket fit. Future studies should investigate the relationship between socket fit, limb motion, and limb health to define optimal socket fit parameters.

  7. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1994-01-01

    A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
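
    In modern terms, the same statistically weighted chi-square minimization can be sketched with a Levenberg-Marquardt-style routine (an illustration, not the NLINEAR source):

      import numpy as np
      from scipy.optimize import curve_fit

      def model(x, a, b, c):
          # user-specified fitting function (an exponential decay, for illustration)
          return a * np.exp(-b * x) + c

      x = np.linspace(0, 5, 40)
      sigma = 0.05 * np.ones_like(x)                 # measurement uncertainties
      y = model(x, 2.0, 1.3, 0.4) + np.random.default_rng(1).normal(0, sigma)

      p0 = [1.0, 1.0, 0.0]                           # meaningful initial estimates
      popt, pcov = curve_fit(model, x, y, p0=p0, sigma=sigma, absolute_sigma=True)
      perr = np.sqrt(np.diag(pcov))                  # standard errors of parameters
      chi2 = np.sum(((y - model(x, *popt)) / sigma) ** 2)
      print(popt, perr, chi2)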

  8. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    PubMed Central

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2015-01-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
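
    The model-comparison step can be sketched as follows (illustrative only; the diameters below are simulated, and maximum likelihood stands in for the paper's non-linear regression of the cumulative distributions):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      diameters = rng.lognormal(np.log(27.6), 0.09, size=500)  # nm, stand-in data

      # fit two candidate size-distribution reference models
      shape, _, scale = stats.lognorm.fit(diameters, floc=0)
      mu, sd = stats.norm.fit(diameters)

      # compare the candidates by log-likelihood on the same data
      ll_lognorm = stats.lognorm.logpdf(diameters, shape, 0, scale).sum()
      ll_norm = stats.norm.logpdf(diameters, mu, sd).sum()
      print(ll_lognorm, ll_norm)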

  9. Evaluating flow cytometer performance with weighted quadratic least squares analysis of LED and multi-level bead data

    PubMed Central

    Parks, David R.; Khettabi, Faysal El; Chase, Eric; Hoffman, Robert A.; Perfetto, Stephen P.; Spidlen, Josef; Wood, James C.S.; Moore, Wayne A.; Brinkman, Ryan R.

    2017-01-01

    We developed a fully automated procedure for analyzing data from LED pulses and multi-level bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all of the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than for multi-level bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fits the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. PMID:28160404
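
    The core fit, a weighted quadratic of signal variance against mean with standard errors, can be sketched like this (values are placeholders; the flowQB package itself is in R/Bioconductor):

      import numpy as np

      # per-level statistics from a multi-level bead set (placeholders):
      x = np.array([1e2, 5e2, 2e3, 1e4, 5e4, 2e5])   # mean fluorescence
      var = 50.0 + x / 4.0 + (0.01 * x) ** 2         # measured variance

      # weighted quadratic fit: variance = c2*mean^2 + c1*mean + c0
      w = 1.0 / var                                  # rough inverse-uncertainty weights
      coefs, cov = np.polyfit(x, var, 2, w=w, cov=True)
      stderr = np.sqrt(np.diag(cov))                 # standard errors of the fit
      resid = (np.polyval(coefs, x) - var) * w       # weighted residuals
      print(coefs, stderr, resid)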

  10. Cluster Free Energies from Simple Simulations of Small Numbers of Aggregants: Nucleation of Liquid MTBE from Vapor and Aqueous Phases.

    PubMed

    Patel, Lara A; Kindt, James T

    2017-03-14

    We introduce a global fitting analysis method to obtain free energies of association of noncovalent molecular clusters using equilibrated cluster size distributions from unbiased constant-temperature molecular dynamics (MD) simulations. Because the systems simulated are small enough that the law of mass action does not describe the aggregation statistics, the method relies on iteratively determining a set of cluster free energies that, using appropriately weighted sums over all possible partitions of N monomers into clusters, produces the best-fit size distribution. The quality of these fits can be used as an objective measure of self-consistency to optimize the cutoff distance that determines how clusters are defined. To showcase the method, we have simulated a united-atom model of methyl tert-butyl ether (MTBE) in the vapor phase and in explicit water solution over a range of system sizes (up to 95 MTBE in the vapor phase and 60 MTBE in the aqueous phase) and concentrations at 273 K. The resulting size-dependent cluster free energy functions follow a form derived from classical nucleation theory (CNT) quite well over the full range of cluster sizes, although deviations are more pronounced for small cluster sizes. The CNT fit to cluster free energies yielded surface tensions that were in both cases lower than those for the simulated planar interfaces. We use a simple model to derive a condition for minimizing non-ideal effects on cluster size distributions and show that the cutoff distance that yields the best global fit is consistent with this condition.

  11. Suicide risk factors for young adults: testing a model across ethnicities.

    PubMed

    Gutierrez, P M; Rodriguez, P J; Garcia, P

    2001-06-01

    A general path model based on existing suicide risk research was developed to test factors contributing to current suicidal ideation in young adults. A sample of 673 undergraduate students completed a packet of questionnaires containing the Beck Depression Inventory, Adult Suicidal Ideation Questionnaire, and Multi-Attitude Suicide Tendency Scale. They also provided information on history of suicidality and exposure to attempted and completed suicide in others. Structural equation modeling was used to test the fit of the data to the hypothesized model. Goodness-of-fit indices were adequate and supported the interactive effects of exposure, repulsion by life, depression, and history of self-harm on current ideation. Model fit for three subgroups based on race/ethnicity (i.e., White, Black, and Hispanic) determined that repulsion by life and depression function differently across groups. Implications of these findings for current methods of suicide risk assessment and future research are discussed in the context of the importance of culture.

  12. [Does the GHQ-12 scoring system affect its factor structure? An exploratory study of Ibero American students].

    PubMed

    Urzúa, Alfonso; Caqueo-Urízar, Alejandra; Bargsted, Mariana; Irarrázaval, Matías

    2015-06-01

    This study aimed to evaluate whether the scoring system of the General Health Questionnaire (GHQ-12) alters the instrument's factor structure. The sample comprised 1,972 university students from nine Ibero-American countries. Modeling was performed with structural equations for 1, 2, and 3 latent factors. The mechanism for scoring the questions was analyzed within each type of structure. The results indicate that models with 2 and 3 factors show better goodness-of-fit. Regarding scoring mechanisms, the 0-1-1-1 procedure showed the best fit for models with 2 and 3 factors. In conclusion, there appears to be a relationship between the response format and the number of factors identified in the instrument's structure. The best-fitting model was the 3-factor model with 0-1-1-1 scoring, but the 0-1-2-3 format has acceptable and more stable indicators and provides a better format for two- and three-dimensional models.

  13. Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data

    PubMed Central

    Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei

    2015-01-01

    Typical data in a microbiome study consist of operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero-inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero-inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitudes and directions of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimation. We also evaluate the ability of model selection strategies using the Akaike information criterion (AIC) or the Vuong test to identify the correct model. The simulation studies show that hurdle and zero-inflated models have well controlled type I errors, higher power, better goodness-of-fit measures, and are more accurate and efficient in parameter estimation. Moreover, the hurdle models have similar goodness of fit and parameter estimates for the count component as their corresponding zero-inflated models. However, the estimation and interpretation of the parameters for the zero components differ, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero-inflated data and implement it in a gut microbiome study of > 400 independent subjects. PMID:26148172
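
    A toy version of the model comparison (a zero-inflated Poisson against a plain Poisson GLM on simulated zero-inflated counts, judged by AIC) might look like this, assuming the count-model API in statsmodels; it is a sketch, not the paper's simulation code:

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedPoisson

      rng = np.random.default_rng(3)
      n = 400
      x = rng.normal(size=n)
      p_zero = 1 / (1 + np.exp(-(-1.0 + 0.8 * x)))   # structural-zero probability
      counts = np.where(rng.uniform(size=n) < p_zero, 0,
                        rng.poisson(np.exp(0.5 + 0.4 * x)))

      X = sm.add_constant(x)
      zip_res = ZeroInflatedPoisson(counts, X, exog_infl=X, inflation='logit').fit(disp=0)
      pois_res = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
      print("ZIP AIC:", zip_res.aic, " Poisson AIC:", pois_res.aic)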

  14. The Work Role Functioning Questionnaire v2.0 Showed Consistent Factor Structure Across Six Working Samples.

    PubMed

    Abma, Femke I; Bültmann, Ute; Amick Iii, Benjamin C; Arends, Iris; Dorland, Heleen F; Flach, Peter A; van der Klink, Jac J L; van de Ven, Hardy A; Bjørner, Jakob Bue

    2017-09-09

    Objective The Work Role Functioning Questionnaire v2.0 (WRFQ) is an outcome measure linking a person's health to the ability to meet work demands in the twenty-first century. We aimed to examine the construct validity of the WRFQ in a heterogeneous set of working samples in the Netherlands with mixed clinical conditions and job types to evaluate the comparability of the scale structure. Methods Confirmatory factor and multi-group analyses were conducted in six cross-sectional working samples (total N = 2433) to evaluate and compare a five-factor model structure of the WRFQ (work scheduling demands, output demands, physical demands, mental and social demands, and flexibility demands). Model fit indices were calculated based on RMSEA ≤ 0.08 and CFI ≥ 0.95. After fitting the five-factor model, the multidimensional structure of the instrument was evaluated across samples using a second-order factor model. Results The factor structure was robust across samples and a multi-group model had adequate fit (RMSEA = 0.063, CFI = 0.972). In sample-specific analyses, minor modifications were necessary in three samples (final RMSEA 0.055-0.080, final CFI between 0.955 and 0.989). Applying the previous first-order specifications, a second-order factor model had adequate fit in all samples. Conclusion A five-factor model of the WRFQ showed consistent structural validity across samples. A second-order factor model showed adequate fit, but the second-order factor loadings varied across samples. Therefore subscale scores are recommended when comparing across different clinical and working samples.

  15. ElEvoHI: A Novel CME Prediction Tool for Heliospheric Imaging Combining an Elliptical Front with Drag-based Model Fitting

    NASA Astrophysics Data System (ADS)

    Rollett, T.; Möstl, C.; Isavnin, A.; Davies, J. A.; Kubicka, M.; Amerstorfer, U. V.; Harrison, R. A.

    2016-06-01

    In this study, we present a new method for forecasting arrival times and speeds of coronal mass ejections (CMEs) at any location in the inner heliosphere. This new approach enables the adoption of a highly flexible geometrical shape for the CME front with an adjustable CME angular width and an adjustable radius of curvature of its leading edge, i.e., the assumed geometry is elliptical. Using, as input, Solar TErrestrial RElations Observatory (STEREO) heliospheric imager (HI) observations, a new elliptic conversion (ElCon) method is introduced and combined with the use of drag-based model (DBM) fitting to quantify the deceleration or acceleration experienced by CMEs during propagation. The result is then used as input for the Ellipse Evolution Model (ElEvo). Together, ElCon, DBM fitting, and ElEvo form the novel ElEvoHI forecasting utility. To demonstrate the applicability of ElEvoHI, we forecast the arrival times and speeds of 21 CMEs remotely observed from STEREO/HI and compare them to in situ arrival times and speeds at 1 AU. Compared to the commonly used STEREO/HI fitting techniques (Fixed-ϕ, Harmonic Mean, and Self-similar Expansion fitting), ElEvoHI improves the arrival time forecast by about 2 hr, to ±6.5 hr, and the arrival speed forecast by ≈250 km s-1, to ±53 km s-1, depending on the ellipse aspect ratio assumed. In particular, the remarkable improvement of the arrival speed prediction is potentially beneficial for predicting geomagnetic storm strength at Earth.
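
    The DBM-fitting step rests on the standard drag-based equation of motion, whose analytic solution can be fitted to time-distance data roughly as follows (all numbers are placeholders; ElEvoHI itself also handles the elliptical geometry conversion):

      import numpy as np
      from scipy.optimize import curve_fit

      def dbm_distance(t, gamma, w, r0=20 * 6.96e5, v0=900.0):
          # drag-based model: dv/dt = -gamma*(v - w)^2 for v > w, giving
          # r(t) = r0 + w*t + ln(1 + gamma*(v0 - w)*t) / gamma   (km, s)
          u = v0 - w
          return r0 + w * t + np.log(1.0 + gamma * u * t) / gamma

      t_obs = np.linspace(0, 60, 13) * 3600.0        # s
      r_obs = dbm_distance(t_obs, 2e-8, 400.0)       # km, stand-in "observations"
      r_obs += np.random.default_rng(4).normal(0, 5e5, t_obs.size)

      popt, _ = curve_fit(dbm_distance, t_obs, r_obs, p0=[1e-8, 450.0],
                          bounds=([1e-10, 200.0], [1e-6, 800.0]))
      print("gamma, w =", popt)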

  16. Modelling Schumann resonances from ELF measurements using non-linear optimization methods

    NASA Astrophysics Data System (ADS)

    Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo

    2017-04-01

    Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements are obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station located in the national park of Sierra Nevada. The first three modes, contained in the frequency band between 6 and 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated by using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. We present a study of nonlinear unconstrained optimization methods applied to the estimation of the Schumann resonances. Non-linear fitting, i.e. an optimization process, is the procedure used to obtain Schumann resonances from the natural electromagnetic noise. The optimization methods analysed are: Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The functions that the different methods fit to the data are three Lorentzian curves plus a straight line; Gaussian curves have also been considered. The conclusions of this study are as follows: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the Gradient method converges more slowly and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE between the parameters that define the fit function, and an interval from 1% to 5% has been found.
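
    The fit function itself (three Lorentzian curves plus a straight line) can be reproduced in a few lines; here is a hedged sketch using a Levenberg-Marquardt-type least-squares routine on a synthetic spectrum:

      import numpy as np
      from scipy.optimize import curve_fit

      def sr_spectrum(f, a1, f1, w1, a2, f2, w2, a3, f3, w3, m, b):
          # three Lorentzian resonance curves plus a linear background
          lor = lambda a, f0, w: a / (1.0 + ((f - f0) / w) ** 2)
          return lor(a1, f1, w1) + lor(a2, f2, w2) + lor(a3, f3, w3) + m * f + b

      f = np.linspace(6, 25, 400)                    # Hz, first three SR modes
      true = (6.0, 7.8, 1.0, 4.0, 14.1, 1.5, 3.0, 20.3, 2.0, -0.01, 0.5)
      spec = sr_spectrum(f, *true) + np.random.default_rng(5).normal(0, 0.1, f.size)

      p0 = (5, 8, 1, 4, 14, 1, 3, 20, 2, 0, 0)       # guesses near the SR modes
      popt, _ = curve_fit(sr_spectrum, f, spec, p0=p0, maxfev=20000)
      mse = np.mean((sr_spectrum(f, *popt) - spec) ** 2)
      print(popt[1], popt[4], popt[7], mse)          # fitted central frequencies, MSE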

  17. MIA analysis of FPGA BPMs and beam optics at APS

    NASA Astrophysics Data System (ADS)

    Ji, Da-Heng; Wang, Chun-Xi; Qin, Qing

    2012-11-01

    Model independent analysis, which was developed for high-precision and fast beam dynamics analysis, is a promising diagnostic tool for modern accelerators. We implemented a series of methods to analyze turn-by-turn BPM data. Green's functions corresponding to the local transfer matrix elements R12 or R34 are extracted from the BPM data and fitted to the model lattice using least-squares fitting. Here, we report experimental results obtained from analyzing the transverse motion of a beam in the storage ring at the Advanced Photon Source. BPM gains and uncoupled optics parameters are successfully determined. Quadrupole strengths are adjusted for fitting but cannot, in general, be uniquely determined due to an insufficient number of BPMs.

  18. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

    Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Because local polynomial estimation is non-parametric, the form of the heteroscedastic function need not be known, which improves the estimation precision when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
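
    The improved two-stage idea can be sketched as follows (a simplified, one-covariate kernel version of the local polynomial variance estimate, followed by feasible GLS; it is not the authors' implementation):

      import numpy as np

      rng = np.random.default_rng(6)
      n = 300
      x = rng.uniform(0, 1, size=(n, 2))
      sigma = 0.2 + 0.8 * x[:, 0]                    # unknown heteroscedastic function
      y = 1.0 + 2.0 * x[:, 0] - x[:, 1] + sigma * rng.normal(size=n)

      X = np.column_stack([np.ones(n), x])
      beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
      resid2 = (y - X @ beta_ols) ** 2

      # stage 1: local (kernel-weighted) linear fit of squared residuals,
      # evaluated at each point, estimates the variance function
      h = 0.15
      var_hat = np.empty(n)
      for i in range(n):
          k = np.exp(-0.5 * ((x[:, 0] - x[i, 0]) / h) ** 2)
          p = np.polyfit(x[:, 0], resid2, 1, w=np.sqrt(k))
          var_hat[i] = max(np.polyval(p, x[i, 0]), 1e-6)

      # stage 2: generalized (weighted) least squares with estimated weights
      sw = 1.0 / np.sqrt(var_hat)
      beta_gls = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
      print(beta_ols, beta_gls)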

  19. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend the use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models, implemented in the R language. This semiparametric model is flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models as compared to Cox models. The linear models are not validated on our data, whereas the Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.

  1. [Analysis of the impact of job characteristics and organizational support on workplace violence].

    PubMed

    Li, M L; Chen, P; Zeng, F H; Cui, Q L; Zeng, J; Zhao, X S; Li, Z N

    2017-12-20

    Objective: To analyze the effects of job characteristics and organizational support on workplace violence, explore the influence paths and the theoretical model, and provide a theoretical basis for reducing workplace violence. Methods: Stratified random sampling was used to select 813 medical staff, conductors and bus drivers in Chongqing, who were surveyed from February to October 2014 with a self-designed questionnaire covering job characteristics, organizational attitude toward workplace violence, experience of workplace violence, and fear of violence. Amos 21.0 was used for path analysis and to establish a theoretical model of workplace violence. Results: The odds ratios of job characteristics and organizational attitude for workplace violence were 6.033 and 0.669, respectively, and the path coefficients were 0.41 and -0.14, respectively (P < 0.05). The fit indices of the model were: chi-square (χ²) = 67.835, chi-square to degrees of freedom ratio (χ²/df) = 5.112, goodness-of-fit index (GFI) = 0.970, adjusted goodness-of-fit index (AGFI) = 0.945, normed fit index (NFI) = 0.923, root mean square error of approximation (RMSEA) = 0.071, and fit criterion (Fmin) = 0.092, indicating that the model fit the data well. Conclusion: Job characteristics are a risk factor for workplace violence, while organizational attitude is a protective factor; changing job characteristics and increasing the organization's readiness to deal with workplace violence are therefore conducive to reducing workplace violence and increasing loyalty to the unit.

  2. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Huang, Xiaobiao

    2016-05-13

    Here, we propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. Finally, the method has been successfully demonstrated on the NSLS-II storage ring.
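
    The mode-isolation step can be sketched with an off-the-shelf ICA on synthetic turn-by-turn data (a toy single-plane example with made-up tunes and optics; amplitudes are recovered only up to ICA's overall scale):

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(7)
      n_turns, n_bpm = 2048, 40
      turns = np.arange(n_turns)
      nu = 0.22                                      # fractional betatron tune
      phase = rng.uniform(0, 2 * np.pi, n_bpm)       # phase advance at each BPM
      amp = rng.uniform(0.5, 1.5, n_bpm)             # sqrt(beta)-like amplitudes
      bpm = amp[:, None] * np.cos(2 * np.pi * nu * turns + phase[:, None])
      bpm += 0.02 * rng.normal(size=bpm.shape)       # BPM noise

      # ICA isolates the cosine/sine pair of the betatron normal mode
      ica = FastICA(n_components=2, random_state=0).fit(bpm.T)
      m = ica.mixing_                                # (n_bpm, 2) spatial patterns
      beta_amp = np.hypot(m[:, 0], m[:, 1])          # betatron amplitude per BPM
      phase_est = np.arctan2(m[:, 1], m[:, 0])       # phase pattern per BPM
      print(beta_amp[:5], phase_est[:5])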

  3. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  4. Ultra wideband (0.5-16 kHz) MR elastography for robust shear viscoelasticity model identification.

    PubMed

    Liu, Yifei; Yasar, Temel K; Royston, Thomas J

    2014-12-21

    Changes in the viscoelastic parameters of soft biological tissues often correlate with progression of disease, trauma or injury, and response to treatment. Identifying the most appropriate viscoelastic model, then estimating and monitoring the corresponding parameters of that model, can improve insight into the underlying tissue structural changes. MR elastography (MRE) provides a quantitative method of measuring tissue viscoelasticity. In a previous study by the authors (Yasar et al 2013 Magn. Reson. Med. 70 479-89), a silicone-based phantom material was examined over the frequency range of 200 Hz-7.75 kHz using MRE, an unprecedented bandwidth at that time. Six viscoelastic models, including four integer-order models and two fractional-order models, were fit to the wideband viscoelastic data (measured storage and loss moduli as a function of frequency). The 'fractional Voigt' model (spring and springpot in parallel) exhibited the best fit and was even able to fit the entire frequency band well when it was identified based on only a small portion of the band. This paper is an extension of that study with a wider frequency range, from 500 Hz to 16 kHz. Furthermore, more fractional-order viscoelastic models are added to the comparison pool. It is found that the added complexity of the viscoelastic model provides only marginal improvement over the 'fractional Voigt' model. And, again, the fractional-order models show significant improvement over integer-order viscoelastic models that have as many or more fitting parameters.
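
    The 'fractional Voigt' model has complex modulus G*(w) = mu + eta*(i*w)^alpha, so its storage and loss moduli can be fitted jointly; a sketch with invented parameter values:

      import numpy as np
      from scipy.optimize import curve_fit

      def fractional_voigt(omega, mu, eta, alpha):
          # spring (mu) in parallel with a springpot (eta, alpha);
          # returns storage and loss moduli stacked into one vector
          g = mu + eta * (1j * omega) ** alpha
          return np.concatenate([g.real, g.imag])

      f = np.logspace(np.log10(500), np.log10(16e3), 30)  # Hz, study bandwidth
      omega = 2 * np.pi * f
      true = (3e3, 40.0, 0.7)                             # placeholder values
      data = fractional_voigt(omega, *true)
      data *= 1 + 0.03 * np.random.default_rng(8).normal(size=data.size)

      popt, _ = curve_fit(fractional_voigt, omega, data, p0=(1e3, 10.0, 0.5))
      print("mu, eta, alpha =", popt)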

  5. A comparison of modelling techniques used to characterise oxygen uptake kinetics during the on-transient of exercise.

    PubMed

    Bell, C; Paterson, D H; Kowalchuk, J M; Padilla, J; Cunningham, D A

    2001-09-01

    We compared estimates for the phase 2 time constant (tau) of oxygen uptake (VO2) during moderate- and heavy-intensity exercise, and the slow component of VO2 during heavy-intensity exercise, using previously published exponential models. Estimates for tau and the slow component were different (P < 0.05) among models. For moderate-intensity exercise, a two-component exponential model, or a mono-exponential model fitted from 20 s to 3 min, was best. For heavy-intensity exercise, a three-component model fitted throughout the entire 6 min bout of exercise, or a two-component model fitted from 20 s, was best. When the time delays for the two- and three-component models were equal, the best statistical fit was obtained; however, this model produced an inappropriately low DeltaVO2/DeltaWR (WR, work rate) for the projected phase 2 steady state, and the estimate of the phase 2 tau was shortened compared with other models. The slow component was quantified as the difference between VO2 at end-exercise (6 min) and at 3 min (DeltaVO2(6-3 min); 259 ml x min(-1)), and also using the phase 3 amplitude terms (truncated to end-exercise) from exponential fits (409-833 ml x min(-1)). Onset of the slow component, as identified by the phase 3 time delay parameter, was delayed to approximately 2 min (vs. the arbitrary 3 min). Using this delay, DeltaVO2(6-2 min) was approximately 400 ml x min(-1). Valid, consistent methods of estimating tau and the slow component in exercise are needed to advance physiological understanding.
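
    For reference, the mono-exponential phase 2 fit with a time delay, started at 20 s to exclude the phase 1 (cardiodynamic) response, looks roughly like this (simulated breath-averaged data; parameter values are invented):

      import numpy as np
      from scipy.optimize import curve_fit

      def phase2(t, base, amp, delay, tau):
          # mono-exponential VO2 on-transient with a phase 2 time delay
          return np.where(t < delay, base,
                          base + amp * (1 - np.exp(-(t - delay) / tau)))

      t = np.arange(0, 360, 5.0)                     # s
      vo2 = phase2(t, 800.0, 1500.0, 20.0, 30.0)     # ml/min, stand-in subject
      vo2 += np.random.default_rng(9).normal(0, 40.0, t.size)

      mask = t >= 20.0                               # fit from 20 s to end-exercise
      popt, _ = curve_fit(phase2, t[mask], vo2[mask], p0=(800, 1400, 15, 25))
      print("tau =", round(popt[3], 1), "s")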

  6. Prediction uncertainty and optimal experimental design for learning dynamical systems.

    PubMed

    Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.

  7. Multiple organ definition in CT using a Bayesian approach for 3D model fitting

    NASA Astrophysics Data System (ADS)

    Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.

    1995-08-01

    Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models--specifically for the liver and kidney--that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition, both before and after the addition of a kidney model to the fitting, and we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.

  8. High-order shock-fitted detonation propagation in high explosives

    NASA Astrophysics Data System (ADS)

    Romick, Christopher M.; Aslam, Tariq D.

    2017-03-01

    A highly accurate numerical shock and material interface fitting scheme composed of fifth-order spatial and third- or fifth-order temporal discretizations is applied to the two-dimensional reactive Euler equations in both slab and axisymmetric geometries. High rates of convergence are not typically possible with shock-capturing methods, as the Taylor series analysis breaks down in the vicinity of discontinuities. Furthermore, for typical high explosive (HE) simulations, the effects of material interfaces at the charge boundary can also cause significant computational errors. Fitting a computational boundary to both the shock front and material interface (i.e. streamline) alleviates the computational errors associated with captured shocks and thus opens up the possibility of high rates of convergence for multi-dimensional shock and detonation flows. Several verification tests, including a Sedov blast wave, a Zel'dovich-von Neumann-Döring (ZND) detonation wave, and Taylor-Maccoll supersonic flow over a cone, are utilized to demonstrate high rates of convergence to nontrivial shock and reaction flows. Comparisons to previously published shock-capturing multi-dimensional detonations in a polytropic fluid with a constant adiabatic exponent (PF-CAE) are made, demonstrating significantly lower computational error for the present shock and material interface fitting method. For an error on the order of 10 m/s, which is similar to that observed in experiments, shock-fitting offers a computational saving on the order of 1000. In addition, the behavior of the detonation phase speed is examined for several slab widths to evaluate the detonation performance of PBX 9501 while utilizing the Wescott-Stewart-Davis (WSD) model, which is commonly used in HE modeling. It is found that the thickness effect curve resulting from this equation of state and reaction model using published values is dramatically steeper than observed in recent experiments. Utilizing the present fitting strategy, in conjunction with a nonlinear optimizer, a new set of reaction rate parameters improves the correlation of the model to experimental results. Finally, this new model is tested against two-dimensional slabs as a validation test.

  9. A Method to Test Model Calibration Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  10. Estimating and modelling cure in population-based cancer studies within the framework of flexible parametric survival models

    PubMed Central

    2011-01-01

    Background When the mortality among a cancer patient group returns to the same level as in the general population, that is, the patients no longer experience excess mortality, the patients still alive are considered "statistically cured". Cure models can be used to estimate the cure proportion as well as the survival function of the "uncured". One limitation of parametric cure models is that the functional form of the survival of the "uncured" has to be specified. It can sometimes be hard to find a survival function flexible enough to fit the observed data, for example, when there is high excess hazard within a few months from diagnosis, which is common among older age groups. This has led to the exclusion of older age groups in population-based cancer studies using cure models. Methods Here we have extended the flexible parametric survival model to incorporate cure as a special case to estimate the cure proportion and the survival of the "uncured". Flexible parametric survival models use splines to model the underlying hazard function, and therefore no parametric distribution has to be specified. Results We have compared the fit from standard cure models to our flexible cure model, using data on colon cancer patients in Finland. This new method gives similar results to a standard cure model, when it is reliable, and better fit when the standard cure model gives biased estimates. Conclusions Cure models within the framework of flexible parametric models enables cure modelling when standard models give biased estimates. These flexible cure models enable inclusion of older age groups and can give stage-specific estimates, which is not always possible from parametric cure models. PMID:21696598
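
    The underlying mixture cure likelihood, S(t) = pi + (1 - pi)*S_u(t), is easy to state; here is a bare-bones maximum likelihood sketch with a parametric Weibull "uncured" survival (the paper's flexible spline baseline is deliberately simplified away, and the data are simulated):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import weibull_min

      def neg_loglik(params, t, event):
          logit_pi, log_k, log_lam = params
          pi = 1.0 / (1.0 + np.exp(-logit_pi))       # cure proportion
          k, lam = np.exp(log_k), np.exp(log_lam)
          s_u = np.exp(-(t / lam) ** k)              # survival of the "uncured"
          f_u = weibull_min.pdf(t, k, scale=lam)
          # events contribute the uncured density; censored observations
          # contribute the mixture survival pi + (1 - pi) * S_u(t)
          lik = np.where(event == 1, (1 - pi) * f_u, pi + (1 - pi) * s_u)
          return -np.sum(np.log(np.clip(lik, 1e-300, None)))

      rng = np.random.default_rng(10)
      n, pi_true = 500, 0.4
      cured = rng.uniform(size=n) < pi_true
      t_event = weibull_min.rvs(1.5, scale=3.0, size=n, random_state=11)
      censor = rng.uniform(1.0, 15.0, size=n)
      t = np.where(cured, censor, np.minimum(t_event, censor))
      event = (~cured & (t_event <= censor)).astype(int)

      res = minimize(neg_loglik, x0=[0.0, 0.0, 1.0], args=(t, event),
                     method="Nelder-Mead")
      print("estimated cure proportion:",
            round(1.0 / (1.0 + np.exp(-res.x[0])), 3))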

  11. Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.

    2004-01-01

    Numerical simulation of flow over complex geometry requires accurate and efficient computational methods. Different techniques are available to handle complex geometry. The unstructured grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications. In atmospheric applications, terrain-fitted single grid techniques have found common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than for a Cartesian grid method. The grid generation can be tedious, and one needs to pay special attention to the numerics to handle skewed cells for conservation purposes. Researchers have long sought alternative methods to ease the effort involved in simulating flow over complex geometry.

  12. Revisiting Gaussian Process Regression Modeling for Localization in Wireless Sensor Networks

    PubMed Central

    Richter, Philipp; Toledano-Ayala, Manuel

    2015-01-01

    Signal strength-based positioning in wireless sensor networks is a key technology for seamless, ubiquitous localization, especially in areas where Global Navigation Satellite System (GNSS) signals propagate poorly. To enable wireless local area network (WLAN) location fingerprinting in larger areas while maintaining accuracy, methods to reduce the effort of radio map creation must be consolidated and automated. Gaussian process regression has been applied to overcome this issue, with auspicious results, but the fit of the model was never thoroughly assessed. Instead, most studies trained a readily available model, relying on the zero mean and squared exponential covariance function, without further scrutiny. This paper studies Gaussian process regression model selection for WLAN fingerprinting in indoor and outdoor environments. We train several models for indoor, outdoor, and combined areas; we evaluate them quantitatively and compare them by means of adequate model measures, hence assessing the fit of these models directly. To illuminate the quality of the model fit, the residuals of the proposed model are investigated as well. Comparative experiments on the positioning performance verify and conclude the model selection. In this way, we show that the standard model is not the most appropriate, discuss alternatives and present our best candidate. PMID:26370996
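
    The model-selection point can be illustrated by comparing covariance functions on the same training data via the log marginal likelihood (a toy radio map; kernels and values are illustrative, not the paper's):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import (RBF, Matern, WhiteKernel,
                                                    ConstantKernel)

      rng = np.random.default_rng(12)
      pos = rng.uniform(0, 50, size=(120, 2))        # m, fingerprint positions
      rss = -40 - 20 * np.log10(np.linalg.norm(pos - 25.0, axis=1) + 1.0)
      rss += rng.normal(0, 2.0, size=rss.shape)      # dBm, shadowing noise

      kernels = {
          "squared exponential": ConstantKernel() * RBF() + WhiteKernel(),
          "Matern 3/2": ConstantKernel() * Matern(nu=1.5) + WhiteKernel(),
      }
      # a higher log marginal likelihood indicates a better-supported model
      for name, kernel in kernels.items():
          gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(pos, rss)
          print(name, gp.log_marginal_likelihood_value_)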

  13. Small area estimation for semicontinuous data.

    PubMed

    Chandra, Hukum; Chambers, Ray

    2016-03-01

    Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
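
    A stripped-down fixed-effects version of the two part model (the paper's random area effects are omitted for brevity) can be sketched as follows, with Duan's smearing used for the log-scale back-transformation; the data are simulated placeholders:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(13)
      n = 1000
      x = rng.normal(size=n)
      X = sm.add_constant(x)
      nonzero = rng.uniform(size=n) < 1 / (1 + np.exp(-(0.3 + 0.8 * x)))
      y = np.where(nonzero, np.exp(1.0 + 0.5 * x + rng.normal(0, 0.6, n)), 0.0)

      # part 1: model for the probability of a strictly positive response
      part1 = sm.Logit(nonzero.astype(float), X).fit(disp=0)

      # part 2: linear model on the log scale, positive values only
      pos = y > 0
      part2 = sm.OLS(np.log(y[pos]), X[pos]).fit()

      # prediction: E[y] = P(y > 0) * E[y | y > 0]
      smear = np.mean(np.exp(part2.resid))           # Duan smearing factor
      y_hat = part1.predict(X) * np.exp(part2.predict(X)) * smear
      print(y_hat[:5])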

  14. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    One of the most important topics of interest to investors is stock price changes. Investors with long-term goals are sensitive to stock prices and their changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices in this study. The MARS model is an adaptive nonparametric regression method that is well suited to high-dimensional problems with several variables. The semi-parametric technique used here is based on smoothing splines, a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) to predict stock prices with the MARS model and with the semi-parametric splines technique. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock prices using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.

  15. Large ensemble modeling of the last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert

    2016-05-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ˜ 20 000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
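
    A sketch of the simple score-weighted averaging is given below; the misfit-to-weight mapping and all numbers are placeholders, and the Gaussian-process emulation and Markov chain Monte Carlo calibration are not shown.

        import numpy as np

        rng = np.random.default_rng(3)
        n_runs = 625
        misfit = rng.gamma(2.0, 1.0, n_runs)   # aggregate model-data misfit per run
        esl = rng.normal(3.3, 0.6, n_runs)     # equivalent sea-level rise per run (m)

        weights = np.exp(-misfit)              # one common misfit-to-weight mapping
        weights /= weights.sum()
        mean_esl = np.sum(weights * esl)
        std_esl = np.sqrt(np.sum(weights * (esl - mean_esl) ** 2))
        print(f"weighted ESL = {mean_esl:.2f} +/- {std_esl:.2f} m")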

  19. TESTING WIND AS AN EXPLANATION FOR THE SPIN PROBLEM IN THE CONTINUUM-FITTING METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Bei; Czerny, Bożena; Sobolewska, Małgosia

    2016-04-20

    The continuum-fitting method is one of the two most advanced methods of determining the black hole spin in accreting X-ray binary systems. There are, however, still some unresolved issues with the underlying disk models. One of these issues manifests as an apparent decrease in spin for increasing source luminosity. Here, we perform a few simple tests to establish whether outflows from the disk close to the inner radius can address this problem. We employ four different parametric models to describe the wind and compare these to the apparent decrease in spin with luminosity measured in the sources LMC X-3 and GRS 1915+105. Wind models in which parameters do not explicitly depend on the accretion rate cannot reproduce the spin measurements. Models with mass accretion rate dependent outflows, however, have spectra that emulate the observed ones. The assumption of a wind thus effectively removes the artifact of spin decrease. This solution is not unique; the same conclusion can be obtained using a truncated inner disk model. To distinguish among the valid models, we will need high-resolution X-ray data and a realistic description of the Comptonization in the wind.

  20. PF2fit: Polar Fast Fourier Matched Alignment of Atomistic Structures with 3D Electron Microscopy Maps.

    PubMed

    Bettadapura, Radhakrishna; Rasheed, Muhibur; Vollrath, Antje; Bajaj, Chandrajit

    2015-10-01

    There continue to be increasing occurrences of both atomistic structure models in the PDB (possibly reconstructed from X-ray diffraction or NMR data), and 3D reconstructed cryo-electron microscopy (3D EM) maps (albeit at coarser resolution) of the same or homologous molecule or molecular assembly, deposited in the EMDB. To obtain the best possible structural model of the molecule at the best achievable resolution, and without any missing gaps, one typically aligns (matches and fits) the atomistic structure model with the 3D EM map. We discuss a new algorithm and generalized framework, named PF2fit (Polar Fast Fourier Fitting), for the best possible structural alignment of atomistic structures with 3D EM. While PF2fit enables only a rigid, six dimensional (6D) alignment method, it augments prior work on 6D X-ray structure and 3D EM alignment in multiple ways. Scoring: PF2fit includes a new scoring scheme that, in addition to rewarding overlaps between the volumes occupied by the atomistic structure and 3D EM map, rewards overlaps between the volumes complementary to them. We quantitatively demonstrate how this new complementary scoring scheme improves upon existing approaches. PF2fit also includes two scoring functions, the non-uniform exterior penalty and the skeleton-secondary structure score, and implements the scattering potential score as an alternative to traditional Gaussian blurring. Search: PF2fit utilizes a fast polar Fourier search scheme, whose main advantage is the ability to search over uniformly and adaptively sampled subsets of the space of rigid-body motions. PF2fit also implements a new reranking search and scoring methodology that considerably improves alignment metrics in results obtained from the initial search.

  1. PF2 fit: Polar Fast Fourier Matched Alignment of Atomistic Structures with 3D Electron Microscopy Maps

    PubMed Central

    Bettadapura, Radhakrishna; Rasheed, Muhibur; Vollrath, Antje; Bajaj, Chandrajit

    2015-01-01

    There continue to be increasing occurrences of both atomistic structure models in the PDB (possibly reconstructed from X-ray diffraction or NMR data), and 3D reconstructed cryo-electron microscopy (3D EM) maps (albeit at coarser resolution) of the same or homologous molecule or molecular assembly, deposited in the EMDB. To obtain the best possible structural model of the molecule at the best achievable resolution, and without any missing gaps, one typically aligns (matches and fits) the atomistic structure model with the 3D EM map. We discuss a new algorithm and generalized framework, named PF2 fit (Polar Fast Fourier Fitting), for the best possible structural alignment of atomistic structures with 3D EM. While PF2 fit enables only a rigid, six dimensional (6D) alignment method, it augments prior work on 6D X-ray structure and 3D EM alignment in multiple ways. Scoring: PF2 fit includes a new scoring scheme that, in addition to rewarding overlaps between the volumes occupied by the atomistic structure and 3D EM map, rewards overlaps between the volumes complementary to them. We quantitatively demonstrate how this new complementary scoring scheme improves upon existing approaches. PF2 fit also includes two scoring functions, the non-uniform exterior penalty and the skeleton-secondary structure score, and implements the scattering potential score as an alternative to traditional Gaussian blurring. Search: PF2 fit utilizes a fast polar Fourier search scheme, whose main advantage is the ability to search over uniformly and adaptively sampled subsets of the space of rigid-body motions. PF2 fit also implements a new reranking search and scoring methodology that considerably improves alignment metrics in results obtained from the initial search. PMID:26469938
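
    The speed of such alignment searches comes from scoring all shifts at once with FFT-based correlation; the 1-D toy below recovers a known translation, a far simpler setting than the volumetric polar search of PF2 fit.

        import numpy as np

        rng = np.random.default_rng(13)
        density = rng.random(128)              # 1-D stand-in for the EM map
        probe = np.roll(density, 37)           # a shifted copy to be re-aligned

        # Circular cross-correlation scores every candidate shift at once
        corr = np.fft.ifft(np.fft.fft(probe) * np.conj(np.fft.fft(density))).real
        print("recovered shift:", int(np.argmax(corr)))   # prints 37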

  2. Single point dilution method for the quantitative analysis of antibodies to the gag24 protein of HIV-1.

    PubMed

    Palenzuela, D O; Benítez, J; Rivero, J; Serrano, R; Ganzó, O

    1997-10-13

    In the present work a concept proposed in 1992 by Dopotka and Giesendorf was applied to the quantitative analysis of antibodies to the p24 protein of HIV-1 in infected asymptomatic individuals and AIDS patients. Two approaches were analyzed: a linear model, OD = b0 + b1·log(titer), and a nonlinear model, log(titer) = α·OD^β, similar to the Dopotka–Giesendorf model. Both proposed models adequately fit the dependence between the optical density values at a single point dilution and the titers obtained by the end point dilution method (EPDM). Nevertheless, the nonlinear model fits the experimental data better, according to residual analysis. The classical EPDM was compared with the new single point dilution method (SPDM) using both models. The best correlation between titers calculated with the models and titers obtained by EPDM was achieved with the nonlinear model; the correlation coefficients for the nonlinear and linear models were r = 0.85 and r = 0.77, respectively. A new correction factor was introduced into the nonlinear model, which reduced the day-to-day variation of titer values. In general, SPDM saves time and reagents and is more precise and sensitive to changes in antibody levels, and therefore has a higher resolution than EPDM.
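
    A minimal sketch of fitting the nonlinear model log(titer) = α·OD^β with scipy follows; the OD/titer pairs are invented, and the paper's correction factor is omitted.

        import numpy as np
        from scipy.optimize import curve_fit

        od = np.array([0.15, 0.30, 0.55, 0.80, 1.10, 1.45])       # single-dilution OD
        titer = np.array([50, 200, 800, 2500, 8000, 30000])       # EPDM titers

        def model(od, alpha, beta):
            return alpha * od ** beta                             # log10(titer)

        (alpha, beta), _ = curve_fit(model, od, np.log10(titer), p0=(4.0, 0.5))
        print(f"alpha = {alpha:.2f}, beta = {beta:.2f}")
        print("predicted titers:", 10 ** model(od, alpha, beta))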

  3. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, the method of image distortion correction is proposed. The image data required for image distortion correction come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images. Linear and polynomial fitting methods are applied to correct the image distortions. Second, the shape deformation features of the disparity distribution are discussed and the method of disparity distortion correction is proposed; a polynomial fitting method is applied to correct the disparity distortion. Third, a microscopic vision model is derived, which consists of two parts, i.e., an initial vision model and a residual compensation model. We derive the initial vision model from the analysis of the direct mapping relationship between object and image points. The residual compensation model is derived from the residual analysis of the initial vision model. The results show that with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two models have a similar reconstruction precision for X coordinates; however, the traditional pinhole camera model has a lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.
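
    The distortion-correction idea reduces to fitting the deviation of observed calibration points from their ideal locations and subtracting it; a toy one-dimensional version, with invented geometry, is sketched below.

        import numpy as np

        x = np.linspace(0, 640, 33)                    # ideal grid-point x (px)
        true_y = np.full_like(x, 240.0)                # an ideal horizontal grid line
        observed_y = true_y + 1e-4 * (x - 320) ** 2    # barrel-like bending
        observed_y += np.random.default_rng(4).normal(0, 0.05, x.size)

        coeffs = np.polyfit(x, observed_y - true_y, deg=2)    # model the distortion
        corrected_y = observed_y - np.polyval(coeffs, x)      # and subtract it
        print("max residual after correction (px):",
              float(np.max(np.abs(corrected_y - true_y))))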

  4. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    NASA Astrophysics Data System (ADS)

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify the tip-tilt disturbance model for Linear Quadratic Gaussian control. The identification method, based on the Levenberg-Marquardt algorithm, requires only a little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model on-line for real-time control. This makes it easy for Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of Linear Quadratic Gaussian control combined with this tip-tilt disturbance model identification method is verified in replay mode using experimental data.
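
    A hedged sketch of Levenberg-Marquardt identification with scipy follows; the damped-sinusoid signal model is a stand-in for the paper's tip-tilt disturbance model, and the data are simulated.

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0, 0.5, 500)
        rng = np.random.default_rng(5)
        data = 0.8 * np.exp(-2.0 * t) * np.sin(2 * np.pi * 10 * t) \
               + rng.normal(0, 0.02, t.size)

        def residuals(p):
            amp, damp, freq = p
            return amp * np.exp(-damp * t) * np.sin(2 * np.pi * freq * t) - data

        fit = least_squares(residuals, x0=[1.0, 1.0, 9.5], method="lm")
        print("identified [amplitude, damping, frequency]:", np.round(fit.x, 3))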

  5. Mastoid Cavity Dimensions and Shape: Method of Measurement and Virtual Fitting of Implantable Devices

    PubMed Central

    Handzel, Ophir; Wang, Haobing; Fiering, Jason; Borenstein, Jeffrey T.; Mescher, Mark J.; Leary Swan, Erin E.; Murphy, Brian A.; Chen, Zhiqiang; Peppi, Marcello; Sewell, William F.; Kujawa, Sharon G.; McKenna, Michael J.

    2009-01-01

    Temporal bone implants can be used to electrically stimulate the auditory nerve, to amplify sound, to deliver drugs to the inner ear and potentially for other future applications. The implants require storage space and access to the middle or inner ears. The most acceptable space is the cavity created by a canal wall up mastoidectomy. Detailed knowledge of the available space for implantation and pathways to access the middle and inner ears is necessary for the design of implants and successful implantation. Based on temporal bone CT scans a method for three-dimensional reconstruction of a virtual canal wall up mastoidectomy space is described. Using Amira® software the area to be removed during such surgery is marked on axial CT slices, and a three-dimensional model of that space is created. The average volume of 31 reconstructed models is 12.6 cm³ with standard deviation of 3.69 cm³, ranging from 7.97 to 23.25 cm³. Critical distances were measured directly from the model and their averages were calculated: height 3.69 cm, depth 2.43 cm, length above the external auditory canal (EAC) 4.45 cm and length posterior to EAC 3.16 cm. These linear measurements did not correlate well with volume measurements. The shape of the models was variable to a significant extent making the prediction of successful implantation for a given design based on linear and volumetric measurement unreliable. Hence, to assure successful implantation, preoperative assessment should include a virtual fitting of an implant into the intended storage space. The above-mentioned three-dimensional models were exported from Amira to a Solidworks application where virtual fitting was performed. Our results are compared to other temporal bone implant virtual fitting studies. Virtual fitting has been suggested for other human applications. PMID:19372649

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaczmarski, Krzysztof; Guiochon, Georges A

    The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N = 500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
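
    The final fitting step amounts to nonlinear least squares against an isotherm model; a sketch for the Langmuir case, on synthetic data with invented parameters:

        import numpy as np
        from scipy.optimize import curve_fit

        c = np.linspace(0.1, 10, 15)                 # mobile-phase concentration
        q = 30 * 0.8 * c / (1 + 0.8 * c)             # "measured" stationary-phase conc.
        q += np.random.default_rng(6).normal(0, 0.3, c.size)

        def langmuir(c, qs, b):
            return qs * b * c / (1 + b * c)

        (qs, b), _ = curve_fit(langmuir, c, q, p0=(20.0, 0.5))
        print(f"qs = {qs:.1f}, b = {b:.2f}")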

  7. Some Empirical Evidence for Latent Trait Model Selection.

    ERIC Educational Resources Information Center

    Hutten, Leah R.

    The results of this study suggest that for purposes of estimating ability by latent trait methods, the Rasch model compares favorably with the three-parameter logistic model. Using estimated parameters to make predictions about 25 actual number-correct score distributions with samples of 1,000 cases each, those predicted by the Rasch model fit the…

  8. Fitting aerodynamic forces in the Laplace domain: An application of a nonlinear nongradient technique to multilevel constrained optimization

    NASA Technical Reports Server (NTRS)

    Tiffany, S. H.; Adams, W. M., Jr.

    1984-01-01

    A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented; these separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer which is used is given; and results which illustrate application of the method are presented.

  9. Detecting sea-level hazards: Simple regression-based methods for calculating the acceleration of sea level

    USGS Publications Warehouse

    Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H.

    2016-01-04

    Recent studies, and most of their predecessors, use tide gage data to quantify SL acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust compared to the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique in determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future SLR resulting from anticipated climate change.
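
    A sketch of the sliding-window regression estimate of time-varying acceleration follows; the sea-level series, window length, and units are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(7)
        t = np.arange(1900, 2021, dtype=float)                       # years
        h = 1.8 * (t - 1900) + 0.01 * (t - 1900) ** 2 \
            + rng.normal(0, 5, t.size)                               # sea level (mm)

        window = 41                                  # years per regression window
        acc = []
        for i in range(t.size - window + 1):
            tw, hw = t[i:i + window], h[i:i + window]
            c2, c1, c0 = np.polyfit(tw - tw.mean(), hw, deg=2)
            acc.append(2 * c2)                       # acceleration = 2 * quad. coeff.
        print("acceleration range (mm/yr^2):",
              round(min(acc), 3), "to", round(max(acc), 3))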

  10. Theoretical study of the accuracy of the elution by characteristic points method for bi-langmuir isotherms.

    PubMed

    Ravald, L; Fornstedt, T

    2001-01-26

    The bi-Langmuir equation has recently been proven essential for describing chiral chromatographic surfaces, and we therefore investigated the accuracy of the elution by characteristic points (ECP) method for estimating bi-Langmuir isotherm parameters. The ECP calculations were performed on elution profiles generated by the equilibrium-dispersive model of chromatography for five different sets of bi-Langmuir parameters. The ECP method generates two different errors: (i) the error of the ECP-calculated isotherm and (ii) the model error of the fitting to the ECP isotherm. Both errors decreased with increasing column efficiency. Moreover, the model error was strongly affected by the weight of the bi-Langmuir function fitted. For some bi-Langmuir compositions the error of the ECP-calculated isotherm is too large even at high column efficiencies. Guidelines are given on surface types to be avoided and on the column efficiencies and loading factors required for adequate parameter estimation with ECP.

  11. Nonlinear Constitutive Modeling of Piezoelectric Ceramics

    NASA Astrophysics Data System (ADS)

    Xu, Jia; Li, Chao; Wang, Haibo; Zhu, Zhiwen

    2017-12-01

    Nonlinear constitutive modeling of piezoelectric ceramics is discussed in this paper. A Van der Pol term is introduced to explain the simple hysteretic curve. Improved nonlinear difference terms are used to interpret the hysteresis phenomena of piezoelectric ceramics. The fit of the model to experimental data is verified by the partial least-squares regression method. The results show that this method describes the real curves well. The results of this paper are helpful for the constitutive modeling of piezoelectric ceramics.

  12. Time series behaviour of the number of Air Asia passengers: A distributional approach

    NASA Astrophysics Data System (ADS)

    Asrah, Norhaidah Mohd; Djauhari, Maman Abdurachman

    2013-09-01

    The common practice in time series analysis is to fit a model and then conduct further analysis on the residuals. However, if we know the distributional behavior of the time series, model identification, parameter estimation, and model checking become more straightforward. In this paper, we show that the number of Air Asia passengers can be represented as a geometric Brownian motion process. Therefore, instead of using the standard approach to model fitting, we use an appropriate transformation to arrive at a stationary, normally distributed and even independent time series. An example of forecasting the number of Air Asia passengers is given to illustrate the advantages of the method.
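
    A sketch of the distributional idea: under geometric Brownian motion the log-differences of the series are i.i.d. normal, so estimation and forecasting can work on that stationary series. The monthly counts below are simulated, not Air Asia data.

        import numpy as np

        rng = np.random.default_rng(8)
        log_counts = np.log(1e5) + np.cumsum(rng.normal(0.01, 0.05, 120))
        counts = np.exp(log_counts)                   # GBM-like monthly passengers

        log_returns = np.diff(np.log(counts))         # the stationarizing transform
        mu, sigma = log_returns.mean(), log_returns.std(ddof=1)
        forecast = counts[-1] * np.exp(mu * 12)       # naive 12-month-ahead level
        print(f"drift = {mu:.4f}, volatility = {sigma:.4f}, "
              f"12-month forecast = {forecast:,.0f}")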

  13. Framework based on stochastic L-Systems for modeling IP traffic with multifractal behavior

    NASA Astrophysics Data System (ADS)

    Salvador, Paulo S.; Nogueira, Antonio; Valadas, Rui

    2003-08-01

    In previous work we introduced a multifractal traffic model based on so-called stochastic L-Systems, which were introduced by biologist A. Lindenmayer as a method to model plant growth. L-Systems are string rewriting techniques, characterized by an alphabet, an axiom (initial string) and a set of production rules. In this paper, we propose a novel traffic model, and an associated parameter fitting procedure, which describes jointly the packet arrival and the packet size processes. The packet arrival process is modeled through a L-System, where the alphabet elements are packet arrival rates. The packet size process is modeled through a set of discrete distributions (of packet sizes), one for each arrival rate. In this way the model is able to capture correlations between arrivals and sizes. We applied the model to measured traffic data: the well-known pOct Bellcore, a trace of aggregate WAN traffic and two traces of specific applications (Kazaa and Operation Flashing Point). We assess the multifractality of these traces using Linear Multiscale Diagrams. The suitability of the traffic model is evaluated by comparing the empirical and fitted probability mass and autocovariance functions; we also compare the packet loss ratio and average packet delay obtained with the measured traces and with traces generated from the fitted model. Our results show that our L-System based traffic model can achieve very good fitting performance in terms of first and second order statistics and queuing behavior.
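
    A minimal stochastic L-system in Python illustrates the string-rewriting mechanics; the alphabet, rules, probabilities, and rates are invented, and the paper's fitting procedure is not shown.

        import random

        rules = {                 # symbol -> list of (probability, production)
            "L": [(0.6, ["L", "H"]), (0.4, ["L", "L"])],
            "H": [(0.7, ["H", "H"]), (0.3, ["H", "L"])],
        }

        def rewrite(string, rng):
            out = []
            for symbol in string:
                probs = [p for p, _ in rules[symbol]]
                prods = [prod for _, prod in rules[symbol]]
                out.extend(rng.choices(prods, weights=probs)[0])
            return out

        rng = random.Random(9)
        seq = ["L"]                                  # the axiom
        for _ in range(5):                           # five rewriting generations
            seq = rewrite(seq, rng)
        rates = {"L": 100.0, "H": 400.0}             # packets/s per symbol
        print([rates[s] for s in seq][:8])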

  14. Application Mail Tracking Using RSA Algorithm As Security Data and HOT-Fit a Model for Evaluation System

    NASA Astrophysics Data System (ADS)

    Permadi, Ginanjar Setyo; Adi, Kusworo; Gernowo, Rahmad

    2018-02-01

    The RSA algorithm secures the sending of messages or data by using two keys, a private key and a public key. In this research, to ensure and directly assess whether the system meets its goals, the comprehensive HOT-Fit evaluation method is used. The purpose of this research is to build an information system for sending mail that applies the RSA algorithm for security, and to evaluate it using the HOT-Fit method, producing a system suitable for the physics faculty. The security of the RSA algorithm rests on the difficulty of factoring large numbers into their prime factors; these prime factors must be known to obtain the private key. HOT-Fit assesses three aspects: technology, judged by system status, system quality, and service quality; human, judged by system use and user satisfaction; and organization, judged by structure and environment. The result is a mail tracking system for sending messages, assessed through the evaluation.
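
    Textbook RSA with toy primes makes the two-key mechanism concrete; real deployments use large primes and padding (e.g., OAEP), and this is not the paper's implementation.

        p, q = 61, 53                   # two (toy) primes
        n = p * q                       # modulus, part of both keys
        phi = (p - 1) * (q - 1)
        e = 17                          # public exponent, coprime with phi
        d = pow(e, -1, phi)             # private exponent (modular inverse)

        message = 42
        cipher = pow(message, e, n)     # encrypt with the public key (e, n)
        plain = pow(cipher, d, n)       # decrypt with the private key (d, n)
        print(cipher, plain)            # plain == 42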

  15. Three-dimensional simulation of human teeth and its application in dental education and research.

    PubMed

    Koopaie, Maryam; Kolahdouz, Sajad

    2016-01-01

    Background: A comprehensive database, comprising geometry and properties of human teeth, is needed for dentistry education and dental research. The aim of this study was to create a three-dimensional model of human teeth to improve dental E-learning and dental research. Methods: In this study, cross-sectional images were used to build the three-dimensional model of the teeth. CT-Scan images were used in the first method. The space between the cross-sectional images was about 200 to 500 micrometers. The hard tissue margin was detected in each image using Matlab (R2009b) as the image processing software. The images were transferred to the Solidworks 2015 software. The tooth border curve was fitted with B-spline curves using a least-squares curve-fitting algorithm. After transferring all curves for each tooth to Solidworks, the surface was created based on the surface fitting technique. This surface was meshed in Meshlab-v132 software, and the optimization of the surface was done based on the remeshing technique. The mechanical properties of the teeth were applied to the dental model. Results: This study presented a methodology for communication between CT-Scan images and the finite element and training software through which modeling and simulation of the teeth were performed. In this study, cross-sectional images were used for modeling. According to the findings, the cost and time were reduced compared to other studies. Conclusion: The three-dimensional model method presented in this study facilitated the learning of the dental students and dentists. Based on the three-dimensional model proposed in this study, designing and manufacturing the implants and dental prosthesis are possible.

  16. Three-dimensional simulation of human teeth and its application in dental education and research

    PubMed Central

    Koopaie, Maryam; Kolahdouz, Sajad

    2016-01-01

    Background: A comprehensive database, comprising geometry and properties of human teeth, is needed for dentistry education and dental research. The aim of this study was to create a three-dimensional model of human teeth to improve dental E-learning and dental research. Methods: In this study, cross-sectional images were used to build the three-dimensional model of the teeth. CT-Scan images were used in the first method. The space between the cross-sectional images was about 200 to 500 micrometers. The hard tissue margin was detected in each image using Matlab (R2009b) as the image processing software. The images were transferred to the Solidworks 2015 software. The tooth border curve was fitted with B-spline curves using a least-squares curve-fitting algorithm. After transferring all curves for each tooth to Solidworks, the surface was created based on the surface fitting technique. This surface was meshed in Meshlab-v132 software, and the optimization of the surface was done based on the remeshing technique. The mechanical properties of the teeth were applied to the dental model. Results: This study presented a methodology for communication between CT-Scan images and the finite element and training software through which modeling and simulation of the teeth were performed. In this study, cross-sectional images were used for modeling. According to the findings, the cost and time were reduced compared to other studies. Conclusion: The three-dimensional model method presented in this study facilitated the learning of the dental students and dentists. Based on the three-dimensional model proposed in this study, designing and manufacturing the implants and dental prosthesis are possible. PMID:28491836
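
    A sketch of the least-squares B-spline step, using scipy's periodic smoothing spline on a synthetic closed contour as a stand-in for the tooth border points extracted from one slice:

        import numpy as np
        from scipy.interpolate import splprep, splev

        rng = np.random.default_rng(10)
        theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
        r = 10 + 0.8 * np.sin(3 * theta)                 # a lumpy closed outline
        x = r * np.cos(theta) + rng.normal(0, 0.05, 60)
        y = r * np.sin(theta) + rng.normal(0, 0.05, 60)

        tck, _ = splprep([x, y], s=1.0, per=True)        # periodic smoothing B-spline
        xs, ys = splev(np.linspace(0, 1, 200), tck)      # densely resampled border
        print(len(xs), "points on the fitted contour")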

  17. Adaptive Strategies and Person-Environment Fit among Functionally Limited Older Adults Aging in Place: A Mixed Methods Approach

    PubMed Central

    Lien, Laura L.; Steggell, Carmen D.; Iwarsson, Susanne

    2015-01-01

    Older adults prefer to age in place, necessitating a match between person and environment, or person-environment (P-E) fit. In occupational therapy practice, home modifications can support independence, but more knowledge is needed to optimize interventions targeting the housing situation of older adults. In response, this study aimed to explore the accessibility and usability of the home environment to further understand adaptive environmental behaviors. Mixed methods data were collected using objective and perceived indicators of P-E fit among 12 older adults living in community-dwelling housing. Quantitative data described objective P-E fit in terms of accessibility, while qualitative data explored perceived P-E fit in terms of usability. While accessibility problems were prevalent, participants’ perceptions of usability revealed a range of adaptive environmental behaviors employed to meet functional needs. A closer examination of the P-E interaction suggests that objective accessibility does not always stipulate perceived usability, which appears to be malleable with age, self-perception, and functional competency. Findings stress the importance of evaluating both objective and perceived indicators of P-E fit to provide housing interventions that support independence. Further exploration of adaptive processes in older age may serve to deepen our understanding of both P-E fit frameworks and theoretical models of aging well. PMID:26404352

  18. Adaptive Strategies and Person-Environment Fit among Functionally Limited Older Adults Aging in Place: A Mixed Methods Approach.

    PubMed

    Lien, Laura L; Steggell, Carmen D; Iwarsson, Susanne

    2015-09-23

    Older adults prefer to age in place, necessitating a match between person and environment, or person-environment (P-E) fit. In occupational therapy practice, home modifications can support independence, but more knowledge is needed to optimize interventions targeting the housing situation of older adults. In response, this study aimed to explore the accessibility and usability of the home environment to further understand adaptive environmental behaviors. Mixed methods data were collected using objective and perceived indicators of P-E fit among 12 older adults living in community-dwelling housing. Quantitative data described objective P-E fit in terms of accessibility, while qualitative data explored perceived P-E fit in terms of usability. While accessibility problems were prevalent, participants' perceptions of usability revealed a range of adaptive environmental behaviors employed to meet functional needs. A closer examination of the P-E interaction suggests that objective accessibility does not always stipulate perceived usability, which appears to be malleable with age, self-perception, and functional competency. Findings stress the importance of evaluating both objective and perceived indicators of P-E fit to provide housing interventions that support independence. Further exploration of adaptive processes in older age may serve to deepen our understanding of both P-E fit frameworks and theoretical models of aging well.

  19. Inference of gene regulatory networks from genome-wide knockout fitness data

    PubMed Central

    Wang, Liming; Wang, Xiaodong; Arkin, Adam P.; Samoilov, Michael S.

    2013-01-01

    Motivation: Genome-wide fitness is an emerging type of high-throughput biological data generated for individual organisms by creating libraries of knockouts, subjecting them to broad ranges of environmental conditions, and measuring the resulting clone-specific fitnesses. Since fitness is an organism-scale measure of gene regulatory network behaviour, it may offer certain advantages when insights into such phenotypical and functional features are of primary interest over individual gene expression. Previous works have shown that genome-wide fitness data can be used to uncover novel gene regulatory interactions, when compared with results of more conventional gene expression analysis. Yet, to date, few algorithms have been proposed for systematically using genome-wide mutant fitness data for gene regulatory network inference. Results: In this article, we describe a model and propose an inference algorithm for using fitness data from knockout libraries to identify underlying gene regulatory networks. Unlike most prior methods, the presented approach captures not only structural, but also dynamical and non-linear nature of biomolecular systems involved. A state–space model with non-linear basis is used for dynamically describing gene regulatory networks. Network structure is then elucidated by estimating unknown model parameters. Unscented Kalman filter is used to cope with the non-linearities introduced in the model, which also enables the algorithm to run in on-line mode for practical use. Here, we demonstrate that the algorithm provides satisfying results for both synthetic data as well as empirical measurements of GAL network in yeast Saccharomyces cerevisiae and TyrR–LiuR network in bacteria Shewanella oneidensis. Availability: MATLAB code and datasets are available to download at http://www.duke.edu/∼lw174/Fitness.zip and http://genomics.lbl.gov/supplemental/fitness-bioinf/ Contact: wangx@ee.columbia.edu or mssamoilov@lbl.gov Supplementary information: Supplementary data are available at Bioinformatics online PMID:23271269

  20. Syndromes of Self-Reported Psychopathology for Ages 18-59 in 29 Societies.

    PubMed

    Ivanova, Masha Y; Achenbach, Thomas M; Rescorla, Leslie A; Tumer, Lori V; Ahmeti-Pronaj, Adelina; Au, Alma; Maese, Carmen Avila; Bellina, Monica; Caldas, J Carlos; Chen, Yi-Chuen; Csemy, Ladislav; da Rocha, Marina M; Decoster, Jeroen; Dobrean, Anca; Ezpeleta, Lourdes; Fontaine, Johnny R J; Funabiki, Yasuko; Guðmundsson, Halldór S; Harder, Valerie S; de la Cabada, Marie Leiner; Leung, Patrick; Liu, Jianghong; Mahr, Safia; Malykh, Sergey; Maras, Jelena Srdanovic; Markovic, Jasminka; Ndetei, David M; Oh, Kyung Ja; Petot, Jean-Michel; Riad, Geylan; Sakarya, Direnc; Samaniego, Virginia C; Sebre, Sandra; Shahini, Mimoza; Silvares, Edwiges; Simulioniene, Roma; Sokoli, Elvisa; Talcott, Joel B; Vazquez, Natalia; Zasepa, Ewa

    2015-06-01

    This study tested the multi-society generalizability of an eight-syndrome assessment model derived from factor analyses of American adults' self-ratings of 120 behavioral, emotional, and social problems. The Adult Self-Report (ASR; Achenbach and Rescorla 2003) was completed by 17,152 18-59-year-olds in 29 societies. Confirmatory factor analyses tested the fit of self-ratings in each sample to the eight-syndrome model. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all samples, while secondary indices showed acceptable to good fit. Only 5 (0.06%) of the 8,598 estimated parameters were outside the admissible parameter space. Confidence intervals indicated that sampling fluctuations could account for the deviant parameters. Results thus supported the tested model in societies differing widely in social, political, and economic systems, languages, ethnicities, religions, and geographical regions. Although other items, societies, and analytic methods might yield different results, the findings indicate that adults in very diverse societies were willing and able to rate themselves on the same standardized set of 120 problem items. Moreover, their self-ratings fit an eight-syndrome model previously derived from self-ratings by American adults. The support for the statistically derived syndrome model is consistent with previous findings for parent, teacher, and self-ratings of 1½-18-year-olds in many societies. The ASR and its parallel collateral-report instrument, the Adult Behavior Checklist (ABCL), may offer mental health professionals practical tools for the multi-informant assessment of clinical constructs of adult psychopathology that appear to be meaningful across diverse societies.

  1. An automatic scaling method for obtaining the trace and parameters from oblique ionogram based on hybrid genetic algorithm

    NASA Astrophysics Data System (ADS)

    Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian

    2016-12-01

    Scaling the oblique ionogram plays an important role in obtaining the ionospheric structure at the midpoint of the oblique sounding path. This paper proposes an automatic scaling method to extract the trace and parameters of the oblique ionogram based on a hybrid genetic algorithm (HGA). The 10 extracted parameters come from the F2 layer and the Es layer, and include the maximum observation frequency, critical frequency, and virtual height. The method adopts the quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing technology, and the echoes' characteristics to determine the best-fit values of seven parameters, and the initial values of the three QP-model parameters that set up their search spaces, which are the inputs required by the HGA. The HGA then searches these spaces for the three parameters' best-fit values based on the fitness between the synthesized trace and the real trace. To verify the performance of the method, 240 oblique ionograms were scaled and the results compared with manual scaling results and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate, or at least adequate, 60-90% of the time.
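
    A bare-bones genetic search over three parameters is sketched below, with a toy misfit function standing in for the fitness between synthesized and observed traces; the real HGA adds hybrid refinements not shown here.

        import numpy as np

        rng = np.random.default_rng(11)
        true_p = np.array([8.0, 250.0, 80.0])        # "unknown" QP-type parameters

        def misfit(p):                               # stand-in for trace misfit
            return float(np.sum(((p - true_p) / true_p) ** 2))

        pop = rng.uniform([5, 200, 50], [12, 350, 120], size=(40, 3))
        for _ in range(60):
            scores = np.array([misfit(ind) for ind in pop])
            parents = pop[np.argsort(scores)[:10]]                          # selection
            children = parents[rng.integers(0, 10, (30, 3)), np.arange(3)]  # crossover
            children += rng.normal(0, [0.1, 2.0, 1.0], children.shape)      # mutation
            pop = np.vstack([parents, children])
        best = pop[np.argmin([misfit(ind) for ind in pop])]
        print("best-fit parameters:", np.round(best, 2))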

  2. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    PubMed

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes and increases with depth above 0.5 cm. A calibration curve with one to three dose points fitted with the model achieves 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
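
    The single-target single-hit response has the closed form OD(D) = background + saturation·(1 - exp(-slope·D)); a sketch of fitting it with scipy, on illustrative dose/OD pairs rather than the study's measurements:

        import numpy as np
        from scipy.optimize import curve_fit

        dose = np.array([0, 16, 32, 48, 64, 96, 128], float)     # cGy
        od = np.array([0.20, 0.45, 0.66, 0.84, 1.00, 1.26, 1.46])

        def single_hit(D, background, saturation, slope):
            return background + saturation * (1 - np.exp(-slope * D))

        params, _ = curve_fit(single_hit, dose, od, p0=(0.2, 2.0, 0.01))
        print("background, saturation, slope:", np.round(params, 3))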

  3. Molecular dynamics force-field refinement against quasi-elastic neutron scattering data

    DOE PAGES

    Borreguero Calvo, Jose M.; Lynch, Vickie E.

    2015-11-23

    Quasi-elastic neutron scattering (QENS) is one of the experimental techniques of choice for probing the dynamics at length and time scales that are also in the realm of full-atom molecular dynamics (MD) simulations. This overlap enables extension of current fitting methods that use time-independent equilibrium measurements to new methods fitting against dynamics data. We present an algorithm that fits simulation-derived incoherent dynamical structure factors against QENS data probing the diffusive dynamics of the system. We showcase the difficulties inherent to this type of fitting problem, namely, the disparity between simulation and experiment environment, as well as limitations in the simulation due to incomplete sampling of phase space. We discuss a methodology to overcome these difficulties and apply it to a set of full-atom MD simulations for the purpose of refining the force-field parameter governing the activation energy of methyl rotation in the octa-methyl polyhedral oligomeric silsesquioxane molecule. Our optimal simulated activation energy agrees with the experimentally derived value up to a 5% difference, well within experimental error. We believe the method will find applicability to other types of diffusive motions and other representations of the system, such as coarse-grain models where empirical fitting is essential. In addition, the refinement method can be extended to the coherent dynamic structure factor with no additional effort.

  4. Indirect scaling methods for testing quantitative emotion theories.

    PubMed

    Junge, Martin; Reisenzein, Rainer

    2013-01-01

    Two studies investigated the utility of indirect scaling methods, based on graded pair comparisons, for the testing of quantitative emotion theories. In Study 1, we measured the intensity of relief and disappointment caused by lottery outcomes, and in Study 2, the intensity of disgust evoked by pictures, using both direct intensity ratings and graded pair comparisons. The stimuli were systematically constructed to reflect variables expected to influence the intensity of the emotions according to theoretical models of relief/disappointment and disgust, respectively. Two probabilistic scaling methods were used to estimate scale values from the pair comparison judgements: Additive functional measurement (AFM) and maximum likelihood difference scaling (MLDS). The emotion models were fitted to the direct and indirect intensity measurements using nonlinear regression (Study 1) and analysis of variance (Study 2). Both studies found substantially improved fits of the emotion models for the indirectly determined emotion intensities, with their advantage being evident particularly at the level of individual participants. The results suggest that indirect scaling methods yield more precise measurements of emotion intensity than rating scales and thereby provide stronger tests of emotion theories in general and quantitative emotion theories in particular.

  5. A Method for Modeling the Intrinsic Dynamics of Intraindividual Variability: Recovering the Parameters of Simulated Oscillators in Multi-Wave Panel Data.

    ERIC Educational Resources Information Center

    Boker, Steven M.; Nesselroade, John R.

    2002-01-01

    Examined two methods for fitting models of intrinsic dynamics to intraindividual variability data by testing these techniques' behavior in equations through simulation studies. Among the main results is the demonstration that a local linear approximation of derivatives can accurately recover the parameters of a simulated linear oscillator, with…

  6. The Impact of Student Teaching Experience on Pre-Service Teachers' Readiness for Technology Integration: A Mixed Methods Study with Growth Curve Modeling

    ERIC Educational Resources Information Center

    Sun, Yan; Strobel, Johannes; Newby, Timothy J.

    2017-01-01

    Adopting a two-phase explanatory sequential mixed methods research design, the current study examined the impact of student teaching experiences on pre-service teachers' readiness for technology integration. In phase-1 of quantitative investigation, 2-level growth curve models were fitted using online repeated measures survey data collected from…

  7. A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems

    PubMed Central

    Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.

    2013-01-01

    Summary In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
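
    For reference, the five-parameter logistic curve itself, fitted here by plain least squares on simulated data; the paper's actual contribution, the robust Bayesian random-effects machinery, is not sketched.

        import numpy as np
        from scipy.optimize import curve_fit

        def fpl(x, a, d, c, b, g):
            # a: low asymptote, d: high asymptote, c: mid-point, b: slope,
            # g: asymmetry (g = 1 recovers the symmetric 4PL)
            return d + (a - d) / (1 + (x / c) ** b) ** g

        conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100], float)
        signal = fpl(conc, 50, 3000, 8, 1.2, 0.8)
        signal *= np.random.default_rng(12).lognormal(0, 0.03, conc.size)

        params, _ = curve_fit(fpl, conc, signal, p0=(100, 2500, 5, 1, 1),
                              maxfev=20000)
        print("fitted 5PL parameters:", np.round(params, 2))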

  8. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    NASA Astrophysics Data System (ADS)

    Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

    The large number of bolts and screws attached to the subway shield ring plates, along with the many metal brackets and electrical equipment accessories mounted on the tunnel walls, cause the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data is first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then fitted, by iteration, as a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the non-points on the inner wall. Two groups of experiments showed consistent results: the elliptic cylindrical model based method effectively filters out the non-points and meets the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around tunnel section deformation in routine subway operation and maintenance.

  9. A comparison of direct and indirect methods for the estimation of health utilities from clinical outcomes.

    PubMed

    Hernández Alava, Mónica; Wailoo, Allan; Wolfe, Fred; Michaud, Kaleb

    2014-10-01

    Analysts frequently estimate health state utility values from other outcomes. Utility values like EQ-5D have characteristics that make standard statistical methods inappropriate. We have developed a bespoke, mixture model approach to directly estimate EQ-5D. An indirect method, "response mapping," first estimates the level on each of the 5 dimensions of the EQ-5D and then calculates the expected tariff score. These methods have never previously been compared. We use a large observational database from patients with rheumatoid arthritis (N = 100,398). Direct estimation of UK EQ-5D scores as a function of the Health Assessment Questionnaire (HAQ), pain, and age was performed with a limited dependent variable mixture model. Indirect modeling was undertaken with a set of generalized ordered probit models with expected tariff scores calculated mathematically. Linear regression was reported for comparison purposes. Impact on cost-effectiveness was demonstrated with an existing model. The linear model fits poorly, particularly at the extremes of the distribution. The bespoke mixture model and the indirect approaches improve fit over the entire range of EQ-5D. Mean average error is 10% and 5% lower compared with the linear model, respectively. Root mean squared error is 3% and 2% lower. The mixture model demonstrates superior performance to the indirect method across almost the entire range of pain and HAQ. These lead to differences in cost-effectiveness of up to 20%. There are limited data from patients in the most severe HAQ health states. Modeling of EQ-5D from clinical measures is best performed directly using the bespoke mixture model. This substantially outperforms the indirect method in this example. Linear models are inappropriate, suffer from systematic bias, and generate values outside the feasible range. © The Author(s) 2013.

  10. Feed-in tariff structure development for photovoltaic electricity and the associated benefits for the Kingdom of Bahrain

    NASA Astrophysics Data System (ADS)

    Haji, Shaker; Durazi, Amal; Al-Alawi, Yaser

    2018-05-01

    In this study, the feed-in tariff (FIT) scheme was considered to facilitate an effective introduction of renewable energy in the Kingdom of Bahrain. An economic model was developed for the estimation of feasible FIT rates for photovoltaic (PV) electricity on a residential scale. The calculations of FIT rates were based mainly on the local solar radiation, the cost of a grid-connected PV system, the operation and maintenance cost, and the provided financial support. The net present value and internal rate of return methods were selected for model evaluation with the guide of simple payback period to determine the cost of energy and feasible FIT rates under several scenarios involving different capital rebate percentages, loan down payment percentages, and PV system costs. Moreover, to capitalise on the FIT benefits, its impact on the stakeholders beyond the households was investigated in terms of natural gas savings, emissions cutback, job creation, and PV-electricity contribution towards the energy demand growth. The study recommended the introduction of the FIT scheme in the Kingdom of Bahrain due to its considerable benefits through a setup where each household would purchase the PV system through a loan, with the government and the electricity customers sharing the FIT cost.
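
    The evaluation step reduces to standard discounted-cash-flow arithmetic; a sketch with hypothetical figures (every number below, including the FIT rate, is an assumption, not the study's result):

        capex = 4000.0          # installed PV system cost (hypothetical, BHD)
        annual_kwh = 8000.0     # yearly generation (hypothetical)
        fit_rate = 0.05         # FIT payment, BHD/kWh (assumed)
        om_cost = 40.0          # yearly operation and maintenance cost
        rate = 0.05             # discount rate
        years = 20              # contract length

        cash = annual_kwh * fit_rate - om_cost
        npv = -capex + sum(cash / (1 + rate) ** y for y in range(1, years + 1))
        payback = capex / cash  # simple payback period, years
        print(f"NPV = {npv:.0f} BHD, simple payback = {payback:.1f} yr")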

  11. Associations between different components of fitness and fatness with academic performance in Chilean youths

    PubMed Central

    2016-01-01

    Objectives To analyze the associations between different components of fitness and fatness with academic performance, adjusting the analysis by sex, age, socio-economic status, region and school type in a Chilean sample. Methods Data of fitness, fatness and academic performance was obtained from the Chilean System for the Assessment of Educational Quality test for eighth grade in 2011 and includes a sample of 18,746 subjects (49% females). Partial correlations adjusted by confounders were done to explore association between fitness and fatness components, and between the academic scores. Three unadjusted and adjusted linear regression models were done in order to analyze the associations of variables. Results Fatness has a negative association with academic performance when Body Mass Index (BMI) and Waist to Height Ratio (WHR) are assessed independently. When BMI and WHR are assessed jointly and adjusted by cofounders, WHR is more associated with academic performance than BMI, and only the association of WHR is positive. For fitness components, strength was the variable most associated with the academic performance. Cardiorespiratory capacity was not associated with academic performance if fatness and other fitness components are included in the model. Conclusions Fitness and fatness are associated with academic performance. WHR and strength are more related with academic performance than BMI and cardiorespiratory capacity. PMID:27761345

  12. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    PubMed

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  13. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    PubMed Central

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures. PMID:24723812

  14. Model Fit to Experimental Data for Foam-Assisted Deep Vadose Zone Remediation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roostapour, A.; Lee, G.; Zhong, Lirong

    2014-01-15

    Foam has been regarded as a promising means of remedial amendment delivery to overcome subsurface heterogeneity in subsurface remediation processes. This study investigates how a foam model, developed by the Method of Characteristics and fractional flow analysis in the companion paper of Roostapour and Kam (2012), can be applied to fit a set of existing laboratory flow experiments (Zhong et al., 2009) in an application relevant to deep vadose zone remediation. This study reveals a few important insights regarding foam-assisted deep vadose zone remediation: (i) the mathematical framework established for foam modeling can fit typical flow experiments, matching wave velocities, saturation history, and pressure responses; (ii) the set of input parameters may not be unique for the fit, and therefore conducting experiments to measure basic model parameters related to relative permeability, initial and residual saturations, surfactant adsorption and so on should not be overlooked; and (iii) gas compressibility plays an important role in data analysis and thus should be handled carefully in laboratory flow experiments. Foam kinetics, causing foam texture to reach its steady-state value slowly, may impose additional complications.

  15. Rasch analysis of the Chedoke-McMaster Attitudes towards Children with Handicaps scale.

    PubMed

    Armstrong, Megan; Morris, Christopher; Tarrant, Mark; Abraham, Charles; Horton, Mike C

    2017-02-01

    Aim To assess whether the Chedoke-McMaster Attitudes towards Children with Handicaps (CATCH) 36-item total scale and subscales fit the unidimensional Rasch model. Method The CATCH was administered to 1881 children, aged 7-16 years in a cross-sectional survey. Data were used from a random sample of 416 for the initial Rasch analysis. The analysis was performed on the 36-item scale and then separately for each subscale. The analysis explored fit to the Rasch model in terms of overall scale fit, individual item fit, item response categories, and unidimensionality. Item bias for gender and school level was also assessed. Revised scales were then tested on an independent second random sample of 415 children. Results Analyses indicated that the 36-item overall scale was not unidimensional and did not fit the Rasch model. Two scales of affective attitudes and behavioural intention were retained after four items were removed from each due to misfit to the Rasch model. Additionally, the scaling was improved when the two most negative response categories were aggregated. There was no item bias by gender or school level on the revised scales. Items assessing cognitive attitudes did not fit the Rasch model and had low internal consistency as a scale. Conclusion Affective attitudes and behavioural intention CATCH sub-scales should be treated separately. Caution should be exercised when using the cognitive subscale. Implications for Rehabilitation The 36-item Chedoke-McMaster Attitudes towards Children with Handicaps (CATCH) scale as a whole did not fit the Rasch model; thus indicating a multi-dimensional scale. Researchers should use two revised eight-item subscales of affective attitudes and behavioural intentions when exploring interventions aiming to improve children's attitudes towards disabled people or factors associated with those attitudes. Researchers should use the cognitive subscale with caution, as it did not create a unidimensional and internally consistent scale. Therefore, conclusions drawn from this scale may not accurately reflect children's attitudes.

  16. Kinetics and thermodynamics studies of silver ions adsorption onto coconut shell activated carbon.

    PubMed

    Silva-Medeiros, Flávia V; Consolin-Filho, Nelson; Xavier de Lima, Mateus; Bazzo, Fernando Previato; Barros, Maria Angélica S D; Bergamasco, Rosângela; Tavares, Célia R G

    2016-12-01

    The presence of silver in the natural water environment has been of great concern because of its toxicity, especially when it is in the free ion form (Ag+). This paper studies the adsorption kinetics of silver ions from an aqueous solution onto coconut shell activated carbon using batch methods. Batch kinetic data were fitted to the first-order model and the pseudo-second-order model; the latter fitted the experimental data well. Equilibrium experiments were carried out at 30°C, 40°C, and 50°C. The adsorption isotherms were reasonably fitted by the Langmuir model, and the adsorption process was only slightly influenced by changes in temperature. Thermodynamic parameters (ΔH°, ΔG°, and ΔS°) were determined. The adsorption process appears to be non-favorable and exothermic, and to proceed with an increase in order.
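
    The pseudo-second-order model that fit these data has the closed form q(t) = k2*qe^2*t / (1 + k2*qe*t). A minimal sketch of fitting it with SciPy, using invented uptake data rather than the study's measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def pseudo_second_order(t, qe, k2):
        """Adsorbed amount q(t) = k2*qe^2*t / (1 + k2*qe*t)."""
        return k2 * qe**2 * t / (1.0 + k2 * qe * t)

    # Illustrative contact times (min) and uptakes (mg/g); not the study's data.
    t = np.array([5, 10, 20, 40, 60, 90, 120, 180], dtype=float)
    q = np.array([2.1, 3.4, 4.8, 5.9, 6.4, 6.8, 7.0, 7.1])

    (qe, k2), _ = curve_fit(pseudo_second_order, t, q, p0=[q.max(), 0.01])
    print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg min)")
    ```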

  17. BOOTSTRAPPING THE CORONAL MAGNETIC FIELD WITH STEREO: UNIPOLAR POTENTIAL FIELD MODELING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aschwanden, Markus J.; Sandman, Anne W., E-mail: aschwanden@lmsal.co

    We investigate the recently quantified misalignment of α_mis ≈ 20°-40° between the three-dimensional geometry of stereoscopically triangulated coronal loops observed with STEREO/EUVI (in four active regions (ARs)) and theoretical (potential or nonlinear force-free) magnetic field models extrapolated from photospheric magnetograms. We develop an efficient method of bootstrapping the coronal magnetic field by forward fitting a parameterized potential field model to the STEREO-observed loops. The potential field model consists of a number of unipolar magnetic charges that are parameterized by decomposing a photospheric magnetogram from the Michelson Doppler Imager. The forward-fitting method yields a best-fit magnetic field model with a reduced misalignment of α_PF ≈ 13°-20°. We also evaluate stereoscopic measurement errors and find a contribution of α_SE ≈ 7°-12°, which constrains the residual misalignment to α_NP ≈ 11°-17°, which is likely due to the nonpotentiality of the ARs. The residual misalignment angle, α_NP, of the potential field due to nonpotentiality is found to correlate with the soft X-ray flux of the AR, which implies a relationship between electric currents and plasma heating.

  18. Text vectorization based on character recognition and character stroke modeling

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang; Zhou, Bingfeng; Tse, Francis; Mu, Yadong; He, Tao

    2014-03-01

    In this paper, a text vectorization method is proposed using OCR (Optical Character Recognition) and character stroke modeling. This is based on the observation that for a particular character, its font glyphs may have different shapes, but often share the same stroke structures. Like many other methods, the proposed algorithm contains two procedures: dominant point determination and data fitting. The first partitions the outlines into segments, and the second fits a curve to each segment. In the proposed method, the dominant points are classified as "major" (specifying stroke structures) and "minor" (specifying serif shapes). A set of rules (parameters) is determined offline, specifying for each character the number of major and minor dominant points and, for each dominant point, the detection and fitting parameters (projection directions, boundary conditions, and smoothness). For minor points, multiple sets of parameters can be used for different fonts. During operation, OCR is performed and the parameters associated with the recognized character are selected. Both major and minor dominant points are detected by a maximization process as specified by the parameter set. For minor points, an additional step can be performed to test competing hypotheses and detect degenerate cases.

  19. A calculation and uncertainty evaluation method for the effective area of a piston rod used in quasi-static pressure calibration

    NASA Astrophysics Data System (ADS)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2018-04-01

    This paper describes the merits and demerits of different sensors for measuring propellant gas pressure, the applicable range of the frequently used dynamic pressure calibration methods, and the working principle of absolute quasi-static pressure calibration based on the drop-weight device. The main factors affecting the accuracy of pressure calibration are analyzed from two aspects: the force sensor and the piston area. To calculate the effective area of the piston rod and to evaluate the uncertainty of the relationship between the force sensor output and the corresponding peak pressure in the absolute quasi-static pressure calibration process, a method based on the least squares principle is proposed. From the relevant quasi-static pressure calibration experimental data, the least squares fitting model between the peak force and the peak pressure, the effective area of the piston rod, and its measurement uncertainty are obtained. The fitting model is tested with an additional group of experiments, with the peak pressure obtained by the existing high-precision comparison calibration method taken as the reference value. The test results show that the peak pressure obtained by the least squares fitting model is closer to the reference value than the one calculated directly from the cross-sectional area of the piston rod. When the peak pressure is higher than 150 MPa, the percentage difference is less than 0.71%, which can meet the requirements of practical application.
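
    The least squares step described here amounts to regressing peak force on peak pressure, with the slope giving the effective area of the piston rod. A sketch with hypothetical calibration pairs, not the paper's data:

    ```python
    import numpy as np

    # Hypothetical calibration pairs: peak pressure (MPa) from the reference
    # standard and peak force (kN) from the force sensor.
    p = np.array([50.0, 100.0, 150.0, 200.0, 250.0, 300.0])
    F = np.array([4.02, 8.05, 12.01, 16.08, 20.04, 24.10])

    # Least squares fit F = A_eff * p + c; the slope is the effective area.
    M = np.vstack([p, np.ones_like(p)]).T
    (A_eff, c), *_ = np.linalg.lstsq(M, F, rcond=None)
    # 1 kN/MPa corresponds to 1e-3 m^2, i.e. 1000 mm^2.
    print(f"effective area ~ {A_eff * 1e3:.2f} mm^2, intercept {c:.3f} kN")
    ```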

  20. SU-D-204-03: Comparison of Patient Positioning Methods Through Modeling of Acute Rectal Toxicity in Intensity Modulated Radiation Therapy for Prostate Cancer. Does Quality of Data Matter More Than the Quantity?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, X; Fatyga, M; Vora, S

    Purpose: To determine if differences in patient positioning methods have an impact on the incidence and modeling of grade ≥ 2 acute rectal toxicity in prostate cancer patients who were treated with Intensity Modulated Radiation Therapy (IMRT). Methods: We compared two databases of patients treated with radiation therapy for prostate cancer: a database of 79 patients who were treated with 7-field IMRT and daily image-guided positioning based on implanted gold markers (IGRTdb), and a database of 302 patients who were treated with 5-field IMRT and daily positioning using a trans-abdominal ultrasound system (USdb). Complete planning dosimetry was available for IGRTdb patients, while limited planning dosimetry, recorded at the time of planning, was available for USdb patients. We fit the Lyman-Kutcher-Burman (LKB) model to IGRTdb only, and a Univariate Logistic Regression (ULR) NTCP model to both databases. We performed Receiver Operating Characteristic analysis to determine the predictive power of the NTCP models. Results: The incidence of grade ≥ 2 acute rectal toxicity in IGRTdb was 20%, while the incidence in USdb was 54%. Fits of both LKB and ULR models yielded predictive NTCP models for IGRTdb patients with Area Under the Curve (AUC) in the 0.63-0.67 range. Extrapolation of the ULR model from IGRTdb to planning dosimetry in USdb predicts that the incidence of acute rectal toxicity in USdb should not exceed 40%. Fits of the ULR model to the USdb do not yield predictive NTCP models, and their AUC is consistent with AUC = 0.5. Conclusion: The accuracy of a patient positioning system affects clinically observed toxicity rates and the quality of NTCP models that can be derived from toxicity data. Poor correlation between planned and clinically delivered dosimetry may lead to erroneous or poorly performing NTCP models, even if the number of patients in a database is large.
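
    A univariate logistic NTCP fit with ROC analysis of the kind described can be sketched in a few lines, here with a synthetic cohort rather than either database (scikit-learn assumed available):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Synthetic cohort: one dosimetric predictor (e.g., a rectal dose metric, Gy)
    # and a binary grade >= 2 toxicity outcome; not the study's data.
    dose = rng.uniform(40, 75, size=300)
    p_true = 1.0 / (1.0 + np.exp(-(dose - 62.0) / 4.0))
    toxicity = rng.random(300) < p_true

    model = LogisticRegression().fit(dose.reshape(-1, 1), toxicity)
    ntcp = model.predict_proba(dose.reshape(-1, 1))[:, 1]
    print(f"AUC = {roc_auc_score(toxicity, ntcp):.2f}")
    ```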

  1. Variation and Grey GM(1, 1) Prediction of Melting Peak Temperature of Polypropylene During Ultraviolet Radiation Aging

    NASA Astrophysics Data System (ADS)

    Chen, K.; Y Zhang, T.; Zhang, F.; Zhang, Z. R.

    2017-12-01

    Grey system theory treats uncertain systems, in which information is partly known and partly unknown, as its research object; it extracts useful information from the known part and thereby reveals the potential variation rule of the system. To investigate the applicability of this data-driven modelling method to fitting and predicting the melting peak temperature (Tm) of polypropylene (PP) during ultraviolet radiation aging, the Tm of homo-polypropylene after different ultraviolet radiation exposure times, measured by differential scanning calorimetry, was fitted and predicted by the grey GM(1, 1) model based on grey system theory. The results show that the Tm of PP declines as aging time increases, and the fitting and prediction equation obtained by the grey GM(1, 1) model is Tm = 166.567472 exp(-0.00012t). The fit of this equation is excellent, and the maximum relative error between the predicted and actual values of Tm is 0.32%. Grey system theory needs little original data, has high prediction accuracy, and can be used to predict the aging behaviour of PP.
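
    The GM(1, 1) construction behind the quoted equation is short: accumulate the series, estimate the whitening-equation coefficients a and b by least squares, and difference the exponential solution back. A sketch with illustrative temperatures, not the paper's measurements:

    ```python
    import numpy as np

    def gm11(x0, n_forecast=3):
        """Fit a grey GM(1,1) model to a positive series and forecast ahead."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                     # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])          # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(len(x0) + n_forecast)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x0_hat = np.diff(x1_hat, prepend=0.0)  # recover the series by differencing
        return a, b, x0_hat

    # Illustrative melting peak temperatures (deg C) over aging steps; invented.
    tm = [166.5, 166.2, 165.9, 165.7, 165.4]
    a, b, fitted = gm11(tm)
    print(f"a = {a:.5f}, b = {b:.3f}")
    print("fitted series + 3-step forecast:", np.round(fitted, 2))
    ```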

  2. SU-D-204-05: Fitting Four NTCP Models to Treatment Outcome Data of Salivary Glands Recorded Six Months After Radiation Therapy for Head and Neck Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavroidis, P; Price, A; Kostich, M

    Purpose: To estimate the radiobiological parameters of four popular NTCP models that describe the dose-response relations of salivary glands to the severity of patient-reported dry mouth 6 months post chemo-radiotherapy; to identify the glands which best correlate with the manifestation of those clinical endpoints; and to evaluate the goodness-of-fit of the NTCP models. Methods: Forty-three patients were treated on a prospective multi-institutional phase II study for oropharyngeal squamous cell carcinoma. All the patients received 60 Gy IMRT and reported symptoms using the novel patient-reported outcome version of the CTCAE. We derived the individual patient dosimetric data of the parotid and submandibular glands (SMG) as separate structures as well as combinations. The Lyman-Kutcher-Burman (LKB), Relative Seriality (RS), Logit and Relative Logit (RL) NTCP models were used to fit the patients' data. The fitting of the different models was assessed through the area under the receiver operating characteristic curve (AUC) and the Odds Ratio methods. Results: The AUC values were highest for the contralateral parotid for Grade ≥ 2 (0.762 for the LKB, RS, and Logit and 0.753 for the RL). For the salivary glands the AUC values were 0.725 for the LKB, RS, and Logit and 0.721 for the RL. For the contralateral SMG the AUC values were 0.721 for LKB, 0.714 for Logit, and 0.712 for RS and RL. The Odds Ratio for the contralateral parotid was 5.8 (1.3-25.5) for all four NTCP models at the radiobiological dose threshold of 21 Gy. Conclusion: It was shown that all the examined NTCP models could fit the clinical data well, with very similar accuracy. The contralateral parotid gland appears to correlate best with the clinical endpoints of severe/very severe dry mouth. An EQD2Gy dose of 21 Gy appears to be a safe threshold to be used as a constraint in treatment planning.

  3. Feasibility of Clinician-Facilitated Three-Dimensional Printing of Synthetic Cranioplasty Flaps.

    PubMed

    Panesar, Sandip S; Belo, Joao Tiago A; D'Souza, Rhett N

    2018-05-01

    Integration of three-dimensional (3D) printing and stereolithography into clinical practice is in its nascence, and concepts may be esoteric to the practicing neurosurgeon. Currently, creation of 3D-printed implants involves recruitment of offsite third parties. We explored a range of 3D scanning and stereolithographic techniques to create patient-specific synthetic implants using an onsite, clinician-facilitated approach. We simulated bilateral craniectomies in a single cadaveric specimen and devised 3 methods of creating stereolithographically viable virtual models from the removed bone. First, we used preoperative and postoperative computed tomography (CT) scanner-derived bony window models from which the flap was extracted. Second, we used an entry-level 3D light scanner to scan and render models of the individual bone pieces. Third, we used an arm-mounted 3D laser scanner to create virtual models using a real-time approach. Flaps were printed, in an ultraviolet-cured polymer, from the CT scanner and laser scanner models only; the light scanner did not produce virtual models suitable for printing. The CT scanner-derived models required extensive post-fabrication modification to fit the existing defects, whereas the laser scanner models assumed good fit within the defects without any modification. The methods presented varying levels of complexity in acquisition and model rendering, and each technique required hardware at price points varying from $0 to approximately $100,000. The laser scanner models produced the best-quality parts, which had near-perfect fit with the original defects. Potential neurosurgical applications of this technology are discussed. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Modeling of ultrasonic degradation of non-volatile organic compounds by Langmuir-type kinetics.

    PubMed

    Chiha, Mahdi; Merouani, Slimane; Hamdaoui, Oualid; Baup, Stéphane; Gondrexon, Nicolas; Pétrier, Christian

    2010-06-01

    Sonochemical degradation of phenol (Ph), 4-isopropylphenol (4-IPP) and Rhodamine B (RhB) in aqueous solutions was investigated for a large range of initial concentrations in order to analyze the reaction kinetics. The initial rates of substrate degradation and H2O2 formation as a function of initial concentrations were determined. The obtained results show that the degradation rate increases with increasing initial substrate concentration up to a plateau and that the sonolytic destruction occurs mainly through reactions with hydroxyl radicals in the interfacial region of cavitation bubbles. The rate of H2O2 formation decreases with increasing substrate concentration and reaches a minimum, followed by an almost constant production rate at higher substrate concentrations. Sonolytic degradation data were analyzed by the models of Okitsu et al. [K. Okitsu, K. Iwasaki, Y. Yobiko, H. Bandow, R. Nishimura, Y. Maeda, Sonochemical degradation of azo dyes in aqueous solution: a new heterogeneous kinetics model taking into account the local concentration of OH radicals and azo dyes, Ultrason. Sonochem. 12 (2005) 255-262.] and Serpone et al. [N. Serpone, R. Terzian, H. Hidaka, E. Pelizzetti, Ultrasonic induced dehalogenation and oxidation of 2-, 3-, and 4-chlorophenol in air-equilibrated aqueous media. Similarities with irradiated semiconductor particulates, J. Phys. Chem. 98 (1994) 2634-2640.], developed on the basis of a Langmuir-type mechanism. The five linearized forms of the Okitsu et al. equation as well as the non-linear curve fitting analysis method were discussed. Results show that it is not appropriate to use the coefficient of determination of the linear regression method for comparing best fits. Among the five linear expressions of the Okitsu et al. kinetic model, the form-2 expression represents the degradation data for Ph and 4-IPP very well. Non-linear curve fitting analysis was found to be the more appropriate method to determine the model parameters. An excellent representation of the experimental results of the sonolytic destruction of RhB was obtained using the Serpone et al. model, whereas the Serpone et al. model gives a worse fit for the sonolytic degradation data of Ph and 4-IPP. These results indicate that Ph and 4-IPP undergo degradation predominantly at the bubble/solution interface, whereas RhB undergoes degradation both at the bubble/solution interface and in the bulk solution. (c) 2010 Elsevier B.V. All rights reserved.

  5. FIREFLY (Fitting IteRativEly For Likelihood analYsis): a full spectral fitting code

    NASA Astrophysics Data System (ADS)

    Wilkinson, David M.; Maraston, Claudia; Goddard, Daniel; Thomas, Daniel; Parikh, Taniya

    2017-12-01

    We present a new spectral fitting code, FIREFLY, for deriving the stellar population properties of stellar systems. FIREFLY is a chi-squared minimization fitting code that fits combinations of single-burst stellar population models to spectroscopic data, following an iterative best-fitting process controlled by the Bayesian information criterion. No priors are applied; rather, all solutions within a statistical cut are retained with their weight. Moreover, no additive or multiplicative polynomials are employed to adjust the spectral shape. This fitting freedom is envisaged in order to map out the effect of intrinsic spectral energy distribution degeneracies, such as age, metallicity, dust reddening on galaxy properties, and to quantify the effect of varying input model components on such properties. Dust attenuation is included using a new procedure, which was tested on Integral Field Spectroscopic data in a previous paper. The fitting method is extensively tested with a comprehensive suite of mock galaxies, real galaxies from the Sloan Digital Sky Survey and Milky Way globular clusters. We also assess the robustness of the derived properties as a function of signal-to-noise ratio (S/N) and adopted wavelength range. We show that FIREFLY is able to recover age, metallicity, stellar mass, and even the star formation history remarkably well down to an S/N ∼ 5, for moderately dusty systems. Code and results are publicly available.

  6. Spatially explicit habitat models for 28 fishes from the Upper Mississippi River System (AHAG 2.0)

    USGS Publications Warehouse

    Ickes, Brian S.; Sauer, J.S.; Richards, N.; Bowler, M.; Schlifer, B.

    2014-01-01

    Environmental management actions in the Upper Mississippi River System (UMRS) typically require pre-project assessments of predicted benefits under a range of project scenarios. The U.S. Army Corps of Engineers (USACE) now requires certified and peer-reviewed models to conduct these assessments. Previously, habitat benefits were estimated for fish communities in the UMRS using the Aquatic Habitat Appraisal Guide (AHAG v.1.0; AHAG from hereon). This spreadsheet-based model used a habitat suitability index (HSI) approach that drew heavily upon Habitat Evaluation Procedures (HEP; U.S. Fish and Wildlife Service, 1980) by the U.S. Fish and Wildlife Service (USFWS). The HSI approach requires developing species response curves for different environmental variables that seek to broadly represent habitat. The AHAG model uses species-specific response curves assembled from literature values, data from other ecosystems, or best professional judgment. A recent scientific review of the AHAG indicated that the model’s effectiveness is reduced by its dated approach to large river ecosystems, uncertainty regarding its data inputs and rationale for habitat-species response relationships, and lack of field validation (Abt Associates Inc., 2011). The reviewers made two major recommendations: (1) incorporate empirical data from the UMRS into defining the empirical response curves, and (2) conduct post-project biological evaluations to test pre-project benefits estimated by AHAG. Our objective was to address the first recommendation and generate updated response curves for AHAG using data from the Upper Mississippi River Restoration-Environmental Management Program (UMRR-EMP) Long Term Resource Monitoring Program (LTRMP) element. Fish community data have been collected by LTRMP (Gutreuter and others, 1995; Ratcliff and others, in press) for 20 years from 6 study reaches representing 1,930 kilometers of river and >140 species of fish. We modeled a subset of these data (28 different species; occurrences at sampling sites as observed in day electrofishing samples) using multiple logistic regression with presence/absence responses. Each species’ probability of occurrence, at each sample site, was modeled as a function of 17 environmental variables observed at each sample site by LTRMP standardized protocols. The modeling methods used (1) a forward-selection process to identify the most important predictors and their relative contributions to predictions; (2) partial methods on the predictor set to control variance inflation; and (3) diagnostics for LTRMP design elements that may influence model fits. Models were fit for 28 species, representing 3 habitat guilds (Lentic, Lotic, and Generalist). We intended to develop “systemic models” using data from all six LTRMP study reaches simultaneously; however, this proved impossible. Thus, we “regionalized” the models, creating two models for each species: “Upper Reach” models, using data from Pools 4, 8, and 13; and “Lower Reach” models, using data from Pool 26, the Open River Reach of the Mississippi River, and the La Grange reach of the Illinois River. A total of 56 models were attempted. For any given site-scale prediction, each model used data from the three LTRMP study reaches comprising the regional model to make predictions. For example, a site-scale prediction in Pool 8 was made using data from Pools 4, 8, and 13. This is the fundamental nature and trade-off of regionalizing these models for broad management application. 
Model fits were deemed "certifiably good" using the Hosmer and Lemeshow goodness-of-fit statistic (Hosmer and Lemeshow, 2000). This test post-partitions model predictions into 10 groups and conducts inferential tests on correspondences between observed and expected probability of occurrence across all partitions, under chi-square distributional assumptions. This permits an inferential test of how well the models fit and a tool for reporting when they did not (and perhaps why). Our goal was to develop regionalized models, and to assess and describe circumstances when a good fit was not possible. Seven fish species composed the Lentic guild. Good fits were achieved for six Upper Reach models. In the Lower Reach, no model produced good fits for the Lentic guild. This was due to (1) lentic species being much less prominent in the Lower Reach study areas, and (2) those that do express greater prominence principally doing so only in the La Grange reach of the Illinois River. Thus, developing Lower Reach models for Lentic species will require parsing La Grange from the other two Lower Reach study areas and fitting separate models. We did not do that as part of this study, but it could be done at a later time. Nine species comprised the Lotic guild. Good fits were achieved for seven Upper Reach models and six Lower Reach models. Four species had good fits for both regions (flathead catfish, blue sucker, sauger, and shorthead redhorse). Three species showed zoogeographic zonation, with a good model fit in one of the regions, but not in the region in which they were absent or rarely occurred (blue catfish, rock bass, and skipjack herring). Twelve species comprised the Generalist guild. Good fits were achieved for five Upper Reach models and eight Lower Reach models. Six species had good fits for both regions (brook silverside, emerald shiner, freshwater drum, logperch, longnose gar, and white bass). Two species showed zoogeographic zonation, with a good model fit in one of the regions, but not in the region in which they were absent or rarely occurred (red shiner and blackstripe topminnow). Poorly fit models were almost always due to the diagnostic variable "field station," a surrogate for river mile. In these circumstances, the residuals for "field station" were non-randomly distributed and often strongly ordered. This indicates either fitting "pool scale" models for these species and regions, or explicitly modeling covariances between "field station" and the other predictors within the existing modeling framework. Further efforts on these models should seek to resolve these issues using one of these two approaches. In total, nine species, representing two of the three guilds (Lotic and Generalist), produced well-fit models for both regions. These nine species should comprise the basis for AHAG 2.0. Additional work, likely requiring downscaling of the regional models to pool-scale models, will be needed to incorporate additional species. Alternately, a regionalized AHAG could comprise those species, per region, that achieved well-fit models. The number of species and the composition of the regional species pools will differ among regions as a consequence. Each of these alternatives has both pros and cons, and managers are encouraged to consider them fully before further advancing this approach to modeling multi-species habitat suitability.
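
    The Hosmer-Lemeshow certification step used above can be sketched directly: sort predictions, partition them into deciles, and compare observed with expected occurrences under a chi-square statistic with g - 2 degrees of freedom. Synthetic probabilities stand in for the fitted AHAG models:

    ```python
    import numpy as np
    from scipy.stats import chi2

    def hosmer_lemeshow(y, p, groups=10):
        """Hosmer-Lemeshow goodness-of-fit test for a fitted logistic model."""
        order = np.argsort(p)
        y, p = np.asarray(y)[order], np.asarray(p)[order]
        bins = np.array_split(np.arange(len(p)), groups)
        stat = 0.0
        for idx in bins:
            obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
            stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
        return stat, chi2.sf(stat, groups - 2)

    rng = np.random.default_rng(1)
    p_hat = rng.uniform(0.05, 0.95, 500)        # model-predicted occurrence probs
    y_obs = rng.random(500) < p_hat             # outcomes consistent with the model
    stat, pval = hosmer_lemeshow(y_obs, p_hat)
    print(f"HL chi-square = {stat:.2f}, p = {pval:.2f}")  # large p => adequate fit
    ```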

  7. "The Theory was Beautiful Indeed": Rise, Fall and Circulation of Maximizing Methods in Population Genetics (1930-1980).

    PubMed

    Grodwohl, Jean-Baptiste

    2017-08-01

    Describing the theoretical population geneticists of the 1960s, Joseph Felsenstein reminisced: "our central obsession was finding out what function evolution would try to maximize. Population geneticists used to think, following Sewall Wright, that mean relative fitness, W, would be maximized by natural selection" (Felsenstein 2000). The present paper describes the genesis, diffusion and fall of this "obsession" by giving a biography of the mean fitness function in population genetics. This modeling method, devised by Sewall Wright in the 1930s, found its heyday in the late 1950s and early 1960s, in the wake of Motoo Kimura's and Richard Lewontin's works. It seemed a reliable guide in the mathematical study of deterministic effects (the study of natural selection in populations of infinite size, with no drift), leading to powerful generalizations presenting law-like properties. Progress in population genetics theory, it then seemed, would come from the application of this method to the study of systems with several genes. This ambition came to a halt in the context of the influential objections made by the Australian mathematician Patrick Moran in 1963. These objections triggered a controversy between mathematically and biologically inclined geneticists, which affected both the formal standards and the aims of population genetics as a science. Over the course of the 1960s, the mean fitness method withered along with the ambition of developing the deterministic theory. The mathematical theory became increasingly complex. Kimura re-focused his modeling work on the theory of random processes; as a result of his computer simulations, Lewontin became the staunchest critic of maximizing principles in evolutionary biology. The mean fitness method then migrated to other research areas, being refashioned and used in evolutionary quantitative genetics and behavioral ecology.

  8. Diffusion weighted imaging in patients with rectal cancer: Comparison between Gaussian and non-Gaussian models

    PubMed Central

    Marias, Kostas; Lambregts, Doenja M. J.; Nikiforaki, Katerina; van Heeswijk, Miriam M.; Bakers, Frans C. H.; Beets-Tan, Regina G. H.

    2017-01-01

    Purpose: The purpose of this study was to compare the performance of four diffusion models, mono- and bi-exponential, both Gaussian and non-Gaussian, in diffusion weighted imaging of rectal cancer. Material and methods: Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7-b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm²) at a 1.5 T scanner. Four different diffusion models, mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied to whole-tumor volumes of interest. Two different statistical criteria were recruited to assess their fitting performance: the adjusted R² and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. Results: All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was illustrated in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and over the overall tumor area. Conclusion: No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior. PMID:28863161
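
    The mono- versus bi-exponential comparison by AIC can be illustrated on a single synthetic voxel; the b-values follow the protocol above, while the signal, parameters, and noise level are invented:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    b = np.array([0, 25, 50, 100, 500, 1000, 2000], dtype=float)  # s/mm^2

    def mono(b, S0, D):
        return S0 * np.exp(-b * D)

    def biexp(b, S0, f, Dfast, Dslow):
        return S0 * (f * np.exp(-b * Dfast) + (1 - f) * np.exp(-b * Dslow))

    def aic(y, yhat, k):
        # Gaussian-likelihood AIC: n*log(RSS/n) + 2k
        n, rss = len(y), np.sum((y - yhat) ** 2)
        return n * np.log(rss / n) + 2 * k

    rng = np.random.default_rng(2)
    S = biexp(b, 1.0, 0.2, 20e-3, 0.8e-3) + rng.normal(0, 0.01, b.size)

    pm, _ = curve_fit(mono, b, S, p0=[1.0, 1e-3])
    pb, _ = curve_fit(biexp, b, S, p0=[1.0, 0.3, 10e-3, 1e-3],
                      bounds=([0, 0, 1e-4, 1e-5], [2, 1, 1, 1e-2]))
    print(f"AIC mono = {aic(S, mono(b, *pm), 2):.1f}, "
          f"AIC biexp = {aic(S, biexp(b, *pb), 4):.1f}")  # lower is preferred
    ```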

  9. Normal Tissue Complication Probability Modeling of Radiation-Induced Hypothyroidism After Head-and-Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakhshandeh, Mohsen; Hashemi, Bijan, E-mail: bhashemi@modares.ac.ir; Mahdavi, Seied Rabi Mehdi

    Purpose: To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Methods and Materials: Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Results: Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D50 estimated from the models was approximately 44 Gy. Conclusions: The implemented normal tissue complication probability models showed a parallel architecture for the thyroid. The mean dose model can be used as the best model to describe the dose-response relationship for hypothyroidism complication.
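
    As a concrete illustration of the LEUD variant named above, the DVH is reduced to a generalized EUD and passed through a probit dose-response. Parameter values here are merely in the vicinity of the reported D50 of about 44 Gy, not the paper's fits:

    ```python
    import numpy as np
    from scipy.stats import norm

    def eud(doses, volumes, n):
        """Generalized EUD from a differential DVH (volumes sum to 1)."""
        return np.sum(volumes * doses ** (1.0 / n)) ** n

    def lkb_ntcp(doses, volumes, td50, m, n):
        """Lyman model with the DVH reduced to the EUD (the 'LEUD' variant)."""
        t = (eud(doses, volumes, n) - td50) / (m * td50)
        return norm.cdf(t)

    # Hypothetical thyroid differential DVH (2 Gy/fraction equivalent doses)
    # and illustrative parameters; not the fitted values.
    doses = np.array([10.0, 30.0, 45.0, 55.0])
    volumes = np.array([0.10, 0.20, 0.40, 0.30])
    print(f"NTCP = {lkb_ntcp(doses, volumes, td50=44.0, m=0.3, n=0.5):.2f}")
    ```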

  10. Growth curves for ostriches (Struthio camelus) in a Brazilian population.

    PubMed

    Ramos, S B; Caetano, S L; Savegnago, R P; Nunes, B N; Ramos, A A; Munari, D P

    2013-01-01

    The objective of this study was to fit growth curves using nonlinear and linear functions to describe the growth of ostriches in a Brazilian population. The data set consisted of 112 animals with BW measurements from hatching to 383 d of age. Two nonlinear growth functions (Gompertz and logistic) and a third-order polynomial function were applied. The parameters for the models were estimated using the least-squares method and the Gauss-Newton algorithm. The goodness-of-fit of the models was assessed using R² and the Akaike information criterion. The R² calculated for the logistic growth model was 0.945 for hens and 0.928 for cockerels, and for the Gompertz growth model, 0.938 for hens and 0.924 for cockerels. The third-order polynomial fit gave R² of 0.938 for hens and 0.924 for cockerels. Among the Akaike information criterion calculations, the logistic growth model presented the lowest values in this study, both for hens and for cockerels. Nonlinear models are more appropriate for describing the sigmoid nature of ostrich growth.
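
    Both growth functions are easy to fit and rank with an information criterion; a sketch with invented weight-for-age data (the AIC form assumes Gaussian residuals):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, A, k, t0):
        return A / (1.0 + np.exp(-k * (t - t0)))

    def gompertz(t, A, b, k):
        return A * np.exp(-b * np.exp(-k * t))

    def aic(y, yhat, n_par):
        n, rss = len(y), np.sum((y - yhat) ** 2)
        return n * np.log(rss / n) + 2 * n_par

    # Illustrative age (d) and body weight (kg) points; not the study's records.
    t = np.array([0, 30, 60, 90, 150, 210, 270, 330, 383], dtype=float)
    w = np.array([0.9, 4.0, 12.0, 25.0, 52.0, 75.0, 88.0, 94.0, 97.0])

    pl, _ = curve_fit(logistic, t, w, p0=[100, 0.02, 150])
    pg, _ = curve_fit(gompertz, t, w, p0=[100, 4.0, 0.01])
    print(f"logistic AIC = {aic(w, logistic(t, *pl), 3):.1f}")
    print(f"gompertz AIC = {aic(w, gompertz(t, *pg), 3):.1f}")
    ```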

  11. Theoretical study of the accuracy of the pulse method, frontal analysis, and frontal analysis by characteristic points for the determination of single component adsorption isotherms.

    PubMed

    Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges

    2009-02-13

    The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N = 500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records, and (2) the errors made in selecting an incorrect isotherm model and fitting the experimental data to it. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
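
    Once the data points have been derived by FA, FACP, or PM, the model-fitting stage for, e.g., the Langmuir isotherm reduces to nonlinear least squares. A sketch with invented data points:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(c, qs, K):
        """Langmuir isotherm: q = qs*K*c / (1 + K*c)."""
        return qs * K * c / (1.0 + K * c)

    # Illustrative isotherm points (mobile-phase conc. vs adsorbed amount).
    c = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
    q = np.array([0.9, 3.6, 5.9, 8.4, 11.2, 12.6])

    (qs, K), _ = curve_fit(langmuir, c, q, p0=[15.0, 0.5])
    print(f"saturation capacity qs = {qs:.1f}, equilibrium constant K = {K:.2f}")
    ```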

  12. A novel method for designing and fabricating low-cost facepiece prototypes.

    PubMed

    Joe, Paula S; Shum, Phillip C; Brown, David W; Lungu, Claudiu T

    2014-01-01

    In 2010, the National Institute for Occupational Safety and Health (NIOSH) published new digital head form models based on its recently updated fit-test panel. The new panel, based on the 2000 census to better represent the modern work force, created two additional sizes: Short/Wide and Long/Narrow. While collecting the anthropometric data that comprised the panel, additional three-dimensional data were collected on a subset of the subjects. Within each sizing category, five individuals' three-dimensional data were used to create the new head form models. While NIOSH has recommended a switch to a five-size system for designing respirators, little has been done to assess the potential benefits of this change. With commercially available elastomeric facepieces offered in only three- or four-size systems, it was necessary to develop new facepieces to enable testing. This study aims to develop a method for designing and fabricating elastomeric facepieces tailored to the new head form designs for use in fit-testing studies. This novel method used computed tomography of a solid silicone facepiece and a number of computer-aided design programs (VolView, ParaView, MEGG3D, and RapidForm XOR) to develop a facepiece model to accommodate the Short/Wide head form. The generated model was given physical form by means of three-dimensional printing using stereolithography (SLA). The printed model was then used to create a silicone mold from which elastomeric prototypes can be cast. The prototype facepieces were cast in two types of silicone for use in future fit-testing.

  13. Dynamic characteristics of oxygen consumption.

    PubMed

    Ye, Lin; Argha, Ahmadreza; Yu, Hairong; Celler, Branko G; Nguyen, Hung T; Su, Steven

    2018-04-23

    Previous studies have indicated that oxygen uptake (VO2) is one of the most accurate indices for assessing the cardiorespiratory response to exercise. In most existing studies, the response of VO2 is often roughly modelled as a first-order system due to inadequate stimulation and low signal-to-noise ratio. To overcome this difficulty, this paper proposes a novel nonparametric kernel-based method for the dynamic modelling of the VO2 response to provide a more robust estimation. Twenty healthy non-athlete participants conducted treadmill exercises with monotonous stimulation (e.g., a single step function as input). During the exercise, VO2 was measured and recorded by a popular portable gas analyser (COSMED). Based on the recorded data, a kernel-based estimation method was proposed to perform the nonparametric modelling of VO2. For the proposed method, a properly selected kernel can represent the prior modelling information to reduce the dependence on comprehensive stimulations. Furthermore, due to the special elastic net formed by the l1 norm and a kernelised l2 norm, the estimations are smooth and concise. Additionally, the finite impulse response based nonparametric model estimated by the proposed method can optimally select the order and fits better in terms of goodness-of-fit compared to classical methods. Several kernels were introduced for the kernel-based VO2 modelling method. The results clearly indicated that the stable spline (SS) kernel has the best performance for VO2 modelling. In particular, based on the experimental data from 20 participants, the estimated response from the proposed method with the SS kernel was significantly better than the results from the benchmark method, the prediction error method (PEM). The proposed nonparametric modelling method is an effective method for estimating the impulse response of the VO2-speed system. Furthermore, the identified average nonparametric model can dynamically predict the VO2 response with acceptable accuracy during treadmill exercise.

  14. Global parameter estimation for thermodynamic models of transcriptional regulation.

    PubMed

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

    Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high-quality data sets, but this difference was negligible on lower-quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluating gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
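
    The local-versus-global contrast studied here can be reproduced in miniature with SciPy, using differential evolution as a stand-in for CMA-ES (a substitute, not the paper's optimizer) on a multimodal least-squares-like surface:

    ```python
    import numpy as np
    from scipy.optimize import minimize, differential_evolution

    def loss(theta):
        """Multimodal toy objective standing in for a model-vs-data fit score."""
        x, y = theta
        return (x - 3) ** 2 + (y + 1) ** 2 + 10 * np.sin(3 * x) ** 2

    local = minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead")
    glob = differential_evolution(loss, bounds=[(-10, 10), (-10, 10)], seed=0)
    print(f"Nelder-Mead from (0,0): f = {local.fun:.3f} at {np.round(local.x, 2)}")
    print(f"differential evolution: f = {glob.fun:.3f} at {np.round(glob.x, 2)}")
    ```

    The local simplex stalls in the basin nearest its start, while the global search finds the deeper minimum, mirroring the QN/NMS versus CMA-ES comparison on clean data.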

  15. More Sophisticated Fits of the Orbits of Haumea's Interacting Moons

    NASA Astrophysics Data System (ADS)

    Oldroyd, William Jared; Ragozzine, Darin; Porter, Simon

    2018-04-01

    Since the discovery of Haumea's moons, it has been a challenge to model the orbits of its moons, Hi'iaka and Namaka. With many precision HST observations, Ragozzine & Brown 2009 succeeded in calculating a three-point-mass model, which was essential because Keplerian orbits were not a statistically acceptable fit. New data obtained in 2010 could be fit by adding a J2 and spin pole to Haumea, but new data from 2015 were far from the predicted locations, even after an extensive exploration using Bayesian Markov Chain Monte Carlo methods (using emcee). Here we report on continued investigations as to why our model cannot fit the full 10-year baseline of data. We note that ignoring Haumea and instead examining the relative motion of the two moons in the Hi'iaka-centered frame leads to adequate fits for the data. This suggests there are additional parameters connected to Haumea that will be required in a full model. These parameters are potentially related to photocenter-barycenter shifts, which could be significant enough to affect the fitting process; these are unlikely to be caused by the newly discovered ring (Ortiz et al. 2017) or by unknown satellites (Burkhart et al. 2016). Additionally, we have developed a new SPIN+N-bodY integrator called SPINNY that self-consistently calculates the interactions between n quadrupoles and is designed to test the importance of other possible effects (Haumea's C22, satellite torques on the spin pole, the Sun, etc.) on our astrometric fits. By correctly determining the orbit of Haumea's satellites we develop a better understanding of the physical properties of each of the objects, with implications for the formation of Haumea, its moons, and its collisional family.

  16. Stochastic optimization for modeling physiological time series: application to the heart rate response to exercise

    NASA Astrophysics Data System (ADS)

    Zakynthinaki, M. S.; Stirling, J. R.

    2007-01-01

    Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
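
    ALOPEX-style optimization updates each parameter by correlating its previous step with the previous change in cost, plus noise. The following is a hedged sketch of that idea applied to a toy first-order heart-rate response; it is not the ALOPEX IV algorithm or the authors' code:

    ```python
    import numpy as np

    def alopex_minimize(f, x0, delta, iters=5000, seed=0, flip_prob=0.1):
        """ALOPEX-style correlation optimizer (a sketch, not ALOPEX IV itself).

        Each parameter keeps stepping in its current direction while the cost
        keeps dropping; steps correlated with a cost increase are reversed, and
        occasional random flips keep the search stochastic.
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        delta = np.asarray(delta, dtype=float)
        dx = rng.choice([-1.0, 1.0], size=x.size) * delta
        e_old = f(x)
        best_x, best_e = x.copy(), e_old
        for _ in range(iters):
            x = x + dx
            e = f(x)
            dx = np.where(dx * (e - e_old) > 0, -dx, dx)  # reverse bad moves
            dx = np.where(rng.random(x.size) < flip_prob, -dx, dx)
            e_old = e
            if e < best_e:
                best_x, best_e = x.copy(), e
        return best_x, best_e

    # Toy heart-rate response: fit amplitude and time constant of a first-order
    # exponential rise to synthetic data (illustrative, not the paper's model).
    t = np.linspace(0.0, 10.0, 50)
    hr = 60.0 * (1.0 - np.exp(-t / 2.5)) + np.random.default_rng(1).normal(0, 1.0, 50)
    cost = lambda p: np.mean((hr - p[0] * (1.0 - np.exp(-t / max(p[1], 1e-3)))) ** 2)
    p_best, e_best = alopex_minimize(cost, x0=[40.0, 1.0], delta=[0.5, 0.02])
    print(f"amplitude ~ {p_best[0]:.1f}, tau ~ {p_best[1]:.2f}, MSE = {e_best:.2f}")
    ```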

  17. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge for fitting empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As sample sizes to train and validate empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship and by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables, or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions but are only valid for the data used, and too complex to support inferences about the underlying process.

  18. Poroviscoelastic cartilage properties in the mouse from indentation.

    PubMed

    Chiravarambath, Sidharth; Simha, Narendra K; Namani, Ravi; Lewis, Jack L

    2009-01-01

    A method for fitting parameters in a poroviscoelastic (PVE) model of articular cartilage in the mouse is presented. Indentation is performed using two different-sized indenters, and these data are fitted using a PVE finite element program and a parameter extraction algorithm. Data from a smaller indenter, a 15 μm diameter flat-ended 60° cone, are first used to fit the viscoelastic (VE) parameters, on the basis that for this tip size the gel diffusion time (the approximate time constant of the poroelastic (PE) response) is of the order of 0.1 s, so that the PE response is negligible. These parameters are then used to fit the data from a second, 170 μm diameter flat-ended 60° cone for the PE parameters, using the VE parameters extracted from the data from the 15 μm tip. Data from tests on five different mouse tibial plateaus are presented and fitted. Parameter variation studies for the larger indenter show that in this case the VE and PE time responses overlap in time, necessitating the use of both models.

  19. A TCP model for external beam treatment of intermediate-risk prostate cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walsh, Sean; Putten, Wil van der

    2013-03-15

    Purpose: Biological models offer the ability to predict clinical outcomes. The authors describe a model to predict the clinical response of intermediate-risk prostate cancer to external beam radiotherapy for a variety of fractionation regimes. Methods: A fully heterogeneous population-averaged tumor control probability model was fit to clinical outcome data for hyper-, standard, and hypofractionated treatments. The tumor control probability model was then employed to predict the clinical outcome of extreme hypofractionation regimes, as utilized in stereotactic body radiotherapy. Results: The tumor control probability model achieves an excellent level of fit, an R² value of 0.93 and a root mean squared error of 1.31%, to the clinical outcome data for hyper-, standard, and hypofractionated treatments using realistic values for biological input parameters. Residuals ≤ 1.0% are produced by the tumor control probability model when compared to clinical outcome data for stereotactic body radiotherapy. Conclusions: The authors conclude that this tumor control probability model, used with the optimized radiosensitivity values obtained from the fit, is an appropriate mechanistic model for the analysis and evaluation of external beam RT plans with regard to tumor control for these clinical conditions.
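
    The mechanistic skeleton of such a TCP model is a Poisson cell-kill term averaged over inter-patient radiosensitivity. A sketch with illustrative parameter values (clonogen number, alpha distribution), not the paper's fitted ones:

    ```python
    import numpy as np

    def tcp_individual(alpha, n0, d_per_fx, n_fx, alpha_beta=1.5):
        """Poisson TCP with LQ cell kill for a uniform fractionated dose."""
        total = n_fx * d_per_fx
        bed = total * (1.0 + d_per_fx / alpha_beta)
        return np.exp(-n0 * np.exp(-alpha * bed))

    def tcp_population(alpha_mean, alpha_sd, n0, d_per_fx, n_fx, samples=20000):
        """Average individual TCPs over a normal inter-patient alpha spread."""
        rng = np.random.default_rng(0)
        alphas = np.clip(rng.normal(alpha_mean, alpha_sd, samples), 1e-4, None)
        return tcp_individual(alphas, n0, d_per_fx, n_fx).mean()

    # Illustrative values only (not the optimized radiosensitivities):
    for n_fx, d in [(39, 2.0), (20, 3.0), (5, 7.25)]:
        print(f"{n_fx} x {d} Gy: TCP ~ {tcp_population(0.15, 0.04, 1e6, d, n_fx):.2f}")
    ```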

  20. A discrete spectral analysis for determining quasi-linear viscoelastic properties of biological materials

    PubMed Central

    Babaei, Behzad; Abramowitch, Steven D.; Elson, Elliot L.; Thomopoulos, Stavros; Genin, Guy M.

    2015-01-01

    The viscoelastic behaviour of a biological material is central to its functioning and is an indicator of its health. The Fung quasi-linear viscoelastic (QLV) model, a standard tool for characterizing biological materials, provides excellent fits to most stress–relaxation data by imposing a simple form upon a material's temporal relaxation spectrum. However, model identification is challenging because the Fung QLV model's ‘box’-shaped relaxation spectrum, predominant in biomechanics applications, can provide an excellent fit even when it is not a reasonable representation of a material's relaxation spectrum. Here, we present a robust and simple discrete approach for identifying a material's temporal relaxation spectrum from stress–relaxation data in an unbiased way. Our ‘discrete QLV’ (DQLV) approach identifies ranges of time constants over which the Fung QLV model's typical box spectrum provides an accurate representation of a particular material's temporal relaxation spectrum, and is effective at providing a fit to this model. The DQLV spectrum also reveals when other forms or discrete time constants are more suitable than a box spectrum. After validating the approach against idealized and noisy data, we applied the methods to analyse medial collateral ligament stress–relaxation data and identify the strengths and weaknesses of an optimal Fung QLV fit. PMID:26609064
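
    The discrete-spectrum idea can be sketched as nonnegative least squares over a fixed log-spaced grid of time constants, G(t) = g_inf + sum_i c_i exp(-t/tau_i); this illustrates the general approach, not the authors' DQLV code:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Synthetic stress-relaxation curve: two discrete time constants plus noise.
    t = np.logspace(-2, 3, 200)
    rng = np.random.default_rng(0)
    g = 0.2 + 0.5 * np.exp(-t / 0.5) + 0.3 * np.exp(-t / 50.0)
    g += rng.normal(0, 0.005, t.size)

    # Fixed log-spaced grid of candidate time constants; a constant column
    # captures the equilibrium modulus.
    taus = np.logspace(-3, 4, 40)
    A = np.hstack([np.ones((t.size, 1)), np.exp(-t[:, None] / taus[None, :])])
    w, _ = nnls(A, g)
    peaks = taus[w[1:] > 0.05]
    print(f"equilibrium weight = {w[0]:.2f}; spectrum peaks near tau = {np.round(peaks, 2)}")
    ```

    A box-like spectrum would spread weight across many adjacent grid points, while discrete time constants concentrate it, which is the distinction the DQLV analysis exploits.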

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walsh, Seán, E-mail: walshsharp@gmail.com; Department of Oncology, Gray Institute for Radiation Oncology and Biology, University of Oxford, Oxford OX3 7DQ; Roelofs, Erik

    Purpose: A fully heterogeneous population averaged mechanistic tumor control probability (TCP) model is appropriate for the analysis of external beam radiotherapy (EBRT). This has been accomplished for EBRT photon treatment of intermediate-risk prostate cancer. Extending the TCP model for low and high-risk patients would be beneficial in terms of overall decision making. Furthermore, different radiation treatment modalities such as protons and carbon-ions are becoming increasingly available. Consequently, there is a need for a complete TCP model. Methods: A TCP model was fitted and validated to a primary endpoint of 5-year biological no evidence of disease clinical outcome data obtained frommore » a review of the literature for low, intermediate, and high-risk prostate cancer patients (5218 patients fitted, 1088 patients validated), treated by photons, protons, or carbon-ions. The review followed the preferred reporting item for systematic reviews and meta-analyses statement. Treatment regimens include standard fractionation and hypofractionation treatments. Residual analysis and goodness of fit statistics were applied. Results: The TCP model achieves a good level of fit overall, linear regression results in a p-value of <0.000 01 with an adjusted-weighted-R{sup 2} value of 0.77 and a weighted root mean squared error (wRMSE) of 1.2%, to the fitted clinical outcome data. Validation of the model utilizing three independent datasets obtained from the literature resulted in an adjusted-weighted-R{sup 2} value of 0.78 and a wRMSE of less than 1.8%, to the validation clinical outcome data. The weighted mean absolute residual across the entire dataset is found to be 5.4%. Conclusions: This TCP model fitted and validated to clinical outcome data, appears to be an appropriate model for the inclusion of all clinical prostate cancer risk categories, and allows evaluation of current EBRT modalities with regard to tumor control prediction.« less

  2. Inferring diffusion dynamics from FCS in heterogeneous nuclear environments.

    PubMed

    Tsekouras, Konstantinos; Siegel, Amanda P; Day, Richard N; Pressé, Steve

    2015-07-07

    Fluorescence correlation spectroscopy (FCS) is a noninvasive technique that probes the diffusion dynamics of proteins down to single-molecule sensitivity in living cells. Critical mechanistic insight is often drawn from FCS experiments by fitting the resulting time-intensity correlation function, G(t), to known diffusion models. When simple models fail, the complex diffusion dynamics of proteins within heterogeneous cellular environments can be fit to anomalous diffusion models with adjustable anomalous exponents. Here, we take a different approach. We use the maximum entropy method to show, first using synthetic data, that a model for proteins diffusing while stochastically binding/unbinding to various affinity sites in living cells gives rise to a G(t) that could otherwise be equally well fit using anomalous diffusion models. We explain the mechanistic insight derived from our method. In particular, using real FCS data, we describe how the effects of cell crowding and binding to affinity sites manifest themselves in the behavior of G(t). Our focus is on the diffusive behavior of an engineered protein in (1) the heterochromatin region of the cell's nucleus, (2) the cell's cytoplasm, and (3) solution. The protein consists of the basic region-leucine zipper (BZip) domain of the CCAAT/enhancer-binding protein (C/EBP) fused to fluorescent proteins. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
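
    For orientation, a standard FCS fitting step looks like this: fit G(t) for free 3D diffusion and for an anomalous variant, then compare. The correlation curve below is synthetic, and the structure parameter kappa is fixed at an assumed value:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def g_normal(t, g0, tau_d, kappa=5.0):
        """3D free-diffusion FCS model for a Gaussian observation volume."""
        return g0 / ((1 + t / tau_d) * np.sqrt(1 + t / (kappa**2 * tau_d)))

    def g_anomalous(t, g0, tau_d, alpha, kappa=5.0):
        """Anomalous-diffusion variant with exponent alpha."""
        s = (t / tau_d) ** alpha
        return g0 / ((1 + s) * np.sqrt(1 + s / kappa**2))

    # Synthetic correlation curve with mild subdiffusion (alpha = 0.8).
    t = np.logspace(-5, 1, 120)
    g = g_anomalous(t, 0.1, 1e-3, 0.8) + np.random.default_rng(0).normal(0, 5e-4, t.size)

    (g0_n, tau_n), _ = curve_fit(g_normal, t, g, p0=[0.1, 1e-3])
    (g0_a, tau_a, alpha), _ = curve_fit(g_anomalous, t, g, p0=[0.1, 1e-3, 1.0])
    print(f"normal fit: tau_D = {tau_n:.2e} s; anomalous fit: alpha = {alpha:.2f}")
    ```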

  3. Comparison of Two Methods for Calculating the Frictional Properties of Articular Cartilage Using a Simple Pendulum and Intact Mouse Knee Joints

    PubMed Central

    Drewniak, Elizabeth I.; Jay, Gregory D.; Fleming, Braden C.; Crisco, Joseph J.

    2009-01-01

    In attempts to better understand the etiology of osteoarthritis, a debilitating joint disease that results in the degeneration of articular cartilage in synovial joints, researchers have focused on joint tribology, the study of joint friction, lubrication, and wear. Several different approaches have been used to investigate the frictional properties of articular cartilage. In this study, we examined two analysis methods for calculating the coefficient of friction (μ) using a simple pendulum system with BL6 murine knee joints (n=10) as the fulcrum. A Stanton linear decay model (Lin μ) and an exponential model that accounts for viscous damping (Exp μ) were fit to the decaying pendulum oscillations. Root mean square error (RMSE), asymptotic standard error (ASE), and coefficient of variation (CV) were calculated to evaluate the fit and measurement precision of each model. This investigation demonstrated that while Lin μ was more repeatable, based on CV (5.0% for Lin μ; 18% for Exp μ), Exp μ provided a better-fitting model, based on RMSE (0.165° for Exp μ; 0.391° for Lin μ) and ASE (0.033 for Exp μ; 0.185 for Lin μ), and had a significantly lower coefficient of friction value (0.022±0.007 for Exp μ; 0.042±0.016 for Lin μ) (p=0.001). This study details the use of a simple pendulum for examining cartilage properties in situ, an approach that will have applications in investigating cartilage mechanics in a variety of species. The Exp μ model provided a more accurate fit to the experimental data for predicting the frictional properties of intact joints in pendulum systems. PMID:19632680
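
    The two competing decay models can be fitted to cycle-by-cycle amplitude data in a few lines; the amplitudes below are synthetic, and the mapping from decay constants to μ is omitted (it depends on pendulum geometry):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def linear_decay(n, a0, delta):
        """Stanton-style amplitude loss: constant decrement per cycle (friction)."""
        return np.maximum(a0 - delta * n, 0.0)

    def exp_decay(n, a0, lam):
        """Viscous-damping amplitude loss: constant fraction per cycle."""
        return a0 * np.exp(-lam * n)

    cycle = np.arange(30, dtype=float)
    amp = 10.0 * np.exp(-0.08 * cycle) + np.random.default_rng(3).normal(0, 0.05, 30)

    pl, _ = curve_fit(linear_decay, cycle, amp, p0=[10, 0.3])
    pe, _ = curve_fit(exp_decay, cycle, amp, p0=[10, 0.05])
    rmse = lambda yhat: np.sqrt(np.mean((amp - yhat) ** 2))
    print(f"RMSE linear = {rmse(linear_decay(cycle, *pl)):.3f} deg")
    print(f"RMSE exponential = {rmse(exp_decay(cycle, *pe)):.3f} deg")
    ```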

  4. A Quadriparametric Model to Describe the Diversity of Waves Applied to Hormonal Data.

    PubMed

    Abdullah, Saman; Bouchard, Thomas; Klich, Amna; Leiva, Rene; Pyper, Cecilia; Genolini, Christophe; Subtil, Fabien; Iwaz, Jean; Ecochard, René

    2018-05-01

    Even in normally cycling women, hormone level shapes may vary widely between cycles and between women. Over decades, finding ways to characterize and compare cycle hormone waves has been difficult, and most solutions, in particular polynomials or splines, do not correspond to physiologically meaningful parameters. We present an original concept to characterize most hormone waves with only two parameters. The modelling attempt considered pregnanediol-3-alpha-glucuronide (PDG) and luteinising hormone (LH) levels in 266 cycles (with ultrasound-identified ovulation day) in 99 normally fertile women aged 18 to 45. The study searched for a convenient wave description process and carried out an extended search for the best-fitting density distribution. The highly flexible beta-binomial distribution offered the best fit of most hormone waves and required only two readily available and understandable wave parameters: location and scale. In bell-shaped waves (e.g., PDG curves), early peaks may be fitted with a low location parameter and a low scale parameter; plateau shapes are obtained with higher scale parameters. I-shaped, J-shaped, and U-shaped waves (sometimes the shapes of LH curves) may be fitted with a high scale parameter and, respectively, a low, high, or medium location parameter. These location and scale parameters will later be correlated with feminine physiological events. Our results demonstrate that, with unimodal waves, complex methods (e.g., functional mixed effects models using smoothing splines, second-order growth mixture models, or functional principal-component-based methods) may be avoided. The use, application, and, especially, result interpretation of four-parameter analyses might be advantageous within the context of feminine physiological events.
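
    To make the location/scale idea concrete, the sketch below evaluates beta-binomial wave shapes over a 28-day cycle using a mean/precision mapping from (location, scale) to the beta shape parameters. This mapping (a = location·κ, b = (1 − location)·κ with κ = 1/scale) is an assumed parameterization for illustration, not necessarily the authors' exact one.

    ```python
    import numpy as np
    from scipy.stats import betabinom

    def wave_shape(location, scale, n_days=28):
        # Assumed mean/precision mapping to the underlying beta parameters.
        kappa = 1.0 / scale
        a, b = location * kappa, (1.0 - location) * kappa
        days = np.arange(n_days + 1)
        return days, betabinom.pmf(days, n_days, a, b)

    for loc, sc, label in [(0.3, 0.05, "early peak"),
                           (0.5, 0.5, "plateau-like"),
                           (0.5, 5.0, "U-shaped")]:
        days, pmf = wave_shape(loc, sc)
        print(label, "| peak day:", days[np.argmax(pmf)],
              "| max/min pmf: %.1f" % (pmf.max() / pmf.min()))
    ```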

  5. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    PubMed

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
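
    The broken-line linear (BLL) idea, stripped of the mixed-model machinery, can be sketched with a simple fixed-effects fit; the loop over starting breakpoints mirrors the grid-search convergence aid described above. The dose-response data and parameterization below are invented for illustration, not the study's dataset or SAS models.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def broken_line(x, plateau, slope, bp):
        # Ascending broken line: rises with `slope` until breakpoint `bp`, then flat.
        return plateau - slope * np.maximum(0.0, bp - x)

    # Invented dose-response data: G:F vs. SID Trp:Lys ratio (%).
    x = np.array([14.0, 14.5, 15.0, 15.5, 16.0, 16.5, 17.0, 17.5, 18.0])
    y = np.array([0.58, 0.60, 0.62, 0.64, 0.66, 0.67, 0.67, 0.67, 0.67])

    # Grid of starting breakpoints to help convergence, keeping the best fit.
    best = None
    for bp0 in np.arange(14.5, 18.0, 0.5):
        try:
            popt, _ = curve_fit(broken_line, x, y, p0=(0.67, 0.03, bp0))
        except RuntimeError:
            continue
        rss = np.sum((y - broken_line(x, *popt)) ** 2)
        if best is None or rss < best[0]:
            best = (rss, popt)
    print("breakpoint estimate:", best[1][2])
    ```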

  6. Interaction Models for Functional Regression.

    PubMed

    Usset, Joseph; Staicu, Ana-Maria; Maity, Arnab

    2016-02-01

    A functional regression model with a scalar response and multiple functional predictors is proposed that accommodates two-way interactions in addition to their main effects. The proposed estimation procedure models the main effects using penalized regression splines, and the interaction effect by a tensor product basis. Extensions to generalized linear models and data observed on sparse grids or with measurement error are presented. A hypothesis testing procedure for the functional interaction effect is described. The proposed method can be easily implemented through existing software. Numerical studies show that fitting an additive model in the presence of interaction leads to both poor estimation performance and a loss of prediction power, while fitting an interaction model where there is in fact no interaction leads to negligible losses. The methodology is illustrated on the AneuRisk65 study data.

  7. Progress Toward Optimizing Prosthetic Socket Fit and Suspension Using Elevated Vacuum to Promote Residual Limb Health

    PubMed Central

    Wernke, Matthew M.; Schroeder, Ryan M.; Haynes, Michael L.; Nolt, Lonnie L.; Albury, Alexander W.; Colvin, James M.

    2017-01-01

    Objective: Prosthetic sockets are custom made for each amputee, yet there are no quantitative tools to determine the appropriateness of socket fit. Ensuring a proper socket fit can have significant effects on the health of residual limb soft tissues and on overall function and acceptance of the prosthetic limb. Previous work found that elevated vacuum pressure data can detect movement between the residual limb and the prosthetic socket; however, the correlation between the two was specific to each user. The overall objective of this work is to determine the relationship between elevated vacuum pressure deviations and prosthetic socket fit. Approach: A tension-compression machine was used to apply repeated, controlled forces onto a residual limb model with sockets of different internal volumes. Results: The vacuum pressure–displacement relationship was dependent on socket fit. The vacuum pressure data were sensitive enough to detect differences of 1.5% global volume and could likely detect even smaller differences. Limb motion was reduced as the surface area of contact between the limb model and socket was maximized. Innovation: The results suggest that elevated vacuum pressure data provide information to quantify socket fit. Conclusions: This study provides evidence that the use of elevated vacuum pressure data may provide a method for prosthetists to quantify and monitor socket fit. Future studies should investigate the relationship between socket fit, limb motion, and limb health to define optimal socket fit parameters. PMID:28736683

  8. A comparison of fit of CNC-milled titanium and zirconia frameworks to implants.

    PubMed

    Abduo, Jaafar; Lyons, Karl; Waddell, Neil; Bennani, Vincent; Swain, Michael

    2012-05-01

    Computer numeric controlled (CNC) milling has been proven to be a predictable method for fabricating accurately fitting titanium implant frameworks. However, no data are available regarding the fit of CNC-milled zirconia implant frameworks. The aim was to compare the precision of fit of implant frameworks milled from titanium and zirconia and relate it to peri-implant strain development after framework fixation. A partially edentulous epoxy resin model received two Branemark implants in the areas of the lower left second premolar and second molar. From this model, 10 identical frameworks were fabricated by means of CNC milling: half from titanium and half from zirconia. Strain gauges were mounted close to the implants to qualitatively and quantitatively assess strain development as a result of framework fitting. In addition, the fit of the framework-implant interface was measured using an optical microscope when only one screw was tightened (passive fit) and when all screws were tightened (vertical fit). The data were statistically analyzed using the Mann-Whitney test. All frameworks produced measurable amounts of peri-implant strain. The zirconia frameworks produced significantly less strain than titanium. Combining the qualitative and quantitative information indicates that the implants were under vertical rather than horizontal displacement. The vertical fit was similar for zirconia (3.7 µm) and titanium (3.6 µm) frameworks; however, the zirconia frameworks exhibited a significantly finer passive fit (5.5 µm) than the titanium frameworks (13.6 µm). CNC milling produced zirconia and titanium frameworks with high accuracy. The difference between the two materials in terms of fit is expected to be of minimal clinical significance. The strain developed around the implants was related more to framework fit than to framework material.

  9. Beta-Poisson model for single-cell RNA-seq data analyses.

    PubMed

    Vu, Trung Nghia; Wills, Quin F; Kalari, Krishna R; Niu, Nifang; Wang, Liewei; Rantalainen, Mattias; Pawitan, Yudi

    2016-07-15

    Single-cell RNA-sequencing technology allows detection of gene expression at the single-cell level. One typical feature of the data is a bimodality in the cellular distribution even for highly expressed genes, primarily caused by a proportion of non-expressing cells. The standard and the over-dispersed gamma-Poisson models that are commonly used in bulk-cell RNA-sequencing are not able to capture this property. We introduce a beta-Poisson mixture model that can capture the bimodality of the single-cell gene expression distribution. We further integrate the model into the generalized linear model framework in order to perform differential expression analyses. The whole analytical procedure is called BPSC. The results from several real single-cell RNA-seq datasets indicate that ∼90% of the transcripts are well characterized by the beta-Poisson model; the model fit from BPSC is better than the fit of the standard gamma-Poisson model in >80% of the transcripts. Moreover, in differential expression analyses of simulated and real datasets, BPSC performs well against edgeR, a conventional method widely used in bulk-cell RNA-sequencing data, and against scde and MAST, two recent methods specifically designed for single-cell RNA-seq data. An R package BPSC for model fitting and differential expression analyses of single-cell RNA-seq data is available under the GPL-3 license at https://github.com/nghiavtr/BPSC. Supplementary data are available at Bioinformatics online.
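
    A minimal simulation of the beta-Poisson idea: cell-specific expression rates are drawn from a scaled beta distribution, and counts are Poisson around those rates; a beta shape parameter below 1 concentrates mass near zero, producing the non-expressing subpopulation (bimodality) described above. Parameter values are invented, and this is not the BPSC fitting procedure itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def beta_poisson(n, alpha, beta, lam):
        # Cell-specific rates drawn from a scaled beta; counts are Poisson around them.
        p = rng.beta(alpha, beta, size=n)
        return rng.poisson(lam * p)

    # alpha < 1 puts mass near zero -> a non-expressing subpopulation (bimodality).
    counts = beta_poisson(5000, alpha=0.3, beta=2.0, lam=50)
    print("fraction of zeros:", np.mean(counts == 0))
    print("mean of expressing cells:", counts[counts > 0].mean())
    ```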

  10. Species area relationships in mediterranean-climate plant communities

    USGS Publications Warehouse

    Keeley, Jon E.; Fotheringham, C.J.

    2003-01-01

    Aim: To determine the best-fit model of species–area relationships for Mediterranean-type plant communities and evaluate how community structure affects these species–area models. Location: Data were collected from California shrublands and woodlands and compared with literature reports for other Mediterranean-climate regions. Methods: The number of species was recorded from 1, 100, and 1000 m² nested plots. Best fit to the power model or exponential model was determined by comparing adjusted r² values from least squares regression, the pattern of residuals, homoscedasticity across scales, and semi-log slopes at 1–100 m² and 100–1000 m². Dominance–diversity curves were tested for fit to the lognormal model, MacArthur's broken stick model, and the geometric and harmonic series. Results: Early successional Western Australian and Californian shrublands represented the extremes and provide an interesting contrast, as the exponential model was the best fit for the former and the power model for the latter, despite similar total species richness. We hypothesize that structural differences in these communities account for the different species–area curves and are tied to patterns of dominance, equitability, and life form distribution. Dominance–diversity relationships for Western Australian heathlands exhibited a close fit to MacArthur's broken stick model, indicating a more equitable distribution of species. In contrast, Californian shrublands, both postfire and mature stands, were best fit by the geometric model, indicating strong dominance and many minor subordinate species. These regions differ in life form distribution, with annuals being a major component of diversity in early successional Californian shrublands although they are largely lacking in mature stands. Both young and old Australian heathlands are dominated by perennials, and annuals are largely absent. Inherent in all of these ecosystems is cyclical disequilibrium caused by periodic fires. The potential for community reassembly is greater in Californian shrublands, where only a quarter of the flora resprout, whereas three quarters resprout in Australian heathlands. Other Californian vegetation types sampled include coniferous forests, oak savannas, and desert scrub, and demonstrate that different community structures may lead to a similar species–area relationship. Dominance–diversity relationships for coniferous forests closely follow a geometric model, whereas associated oak savannas show a close fit to the lognormal model. However, for both communities, species–area curves fit a power model. The primary driver appears to be the presence of annuals. Desert scrub communities illustrate dramatic changes in both species diversity and dominance–diversity relationships in high and low rainfall years, because of the disappearance of annuals in drought years. Main conclusions: Species–area curves for immature shrublands in California and the majority of Mediterranean plant communities fit a power function model. Exceptions that fit the exponential model are not because of sampling error or scaling effects; rather, structural differences in these communities provide plausible explanations. The exponential species–area model may arise in more than one way. In the highly diverse Australian heathlands it results from a rapid increase in species richness at small scales; in mature Californian shrublands it results from very depauperate richness at the community scale. In both instances the exponential model is tied to a preponderance of perennials and a paucity of annuals. For communities fit by a power model, the coefficients z and log c exhibit a number of significant correlations with other diversity parameters, suggesting that they have some predictive value in ecological communities.
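
    A minimal sketch of the two species-area fits compared above: the power model S = cA^z (linear in log-log space) and the exponential (semi-log) model S = k + d·log A, compared by R². The three nested-plot species counts are invented; a real analysis would pool many plots per community.

    ```python
    import numpy as np

    # Species counts at nested plot areas (m^2); invented illustrative values.
    area = np.array([1.0, 100.0, 1000.0])
    species = np.array([6.0, 21.0, 35.0])

    # Power model S = c * A^z  -> linear in log-log space.
    z, logc = np.polyfit(np.log10(area), np.log10(species), 1)

    # Exponential (semi-log) model S = k + d * log10(A).
    d, k = np.polyfit(np.log10(area), species, 1)

    def r2(y, yhat):
        ss_res = np.sum((y - yhat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    s_pow = 10 ** logc * area ** z
    s_exp = k + d * np.log10(area)
    print("power model    z=%.2f  R2=%.3f" % (z, r2(species, s_pow)))
    print("semi-log model d=%.2f  R2=%.3f" % (d, r2(species, s_exp)))
    ```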

  11. Evaluating flow cytometer performance with weighted quadratic least squares analysis of LED and multi-level bead data.

    PubMed

    Parks, David R; El Khettabi, Faysal; Chase, Eric; Hoffman, Robert A; Perfetto, Stephen P; Spidlen, Josef; Wood, James C S; Moore, Wayne A; Brinkman, Ryan R

    2017-03-01

    We developed a fully automated procedure for analyzing data from LED pulses and multilevel bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than that from multilevel bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fit the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB.
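
    A sketch of the core computation: a weighted quadratic fit of signal variance against mean, with standard errors taken from the scaled (XᵀWX)⁻¹ covariance. The bead data, weighting scheme, and coefficients below are invented for illustration; flowQB implements the full procedure.

    ```python
    import numpy as np

    def weighted_quadratic_fit(x, y, w):
        """Fit y = b0 + b1*x + b2*x^2 with weights w; return coefficients and SEs."""
        X = np.vander(x, 3, increasing=True)      # columns: 1, x, x^2
        W = np.diag(w)
        XtW = X.T @ W
        cov = np.linalg.inv(XtW @ X)              # (X'WX)^-1
        beta = cov @ XtW @ y
        resid = y - X @ beta
        dof = len(x) - 3
        scale = np.sum(w * resid ** 2) / dof      # weighted residual variance
        se = np.sqrt(np.diag(cov) * scale)
        return beta, se, resid

    # Invented multi-level bead data: variance vs. mean fluorescence per peak.
    rng = np.random.default_rng(2)
    mean_fl = np.array([50., 200., 800., 3000., 12000., 48000.])
    var_fl = 100.0 + 2.5 * mean_fl + 1e-6 * mean_fl ** 2   # underlying quadratic
    var_fl = var_fl * (1 + rng.normal(0, 0.03, 6))          # measurement noise
    w = 1.0 / var_fl ** 2                                   # rough inverse-variance weights

    beta, se, resid = weighted_quadratic_fit(mean_fl, var_fl, w)
    print("b0, b1, b2:", beta)
    print("standard errors:", se)
    ```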

  12. Normal tissue complication probability modelling of tissue fibrosis following breast radiotherapy

    NASA Astrophysics Data System (ADS)

    Alexander, M. A. R.; Brooks, W. A.; Blake, S. W.

    2007-04-01

    Cosmetic late effects of radiotherapy, such as tissue fibrosis, are increasingly regarded as important. It is generally considered that the complication probability of a radiotherapy plan depends on dose uniformity and can be reduced by using better compensation to remove dose hotspots. This work aimed to model the effects of improved dose homogeneity on complication probability. The Lyman and relative seriality NTCP models were fitted to clinical fibrosis data for the breast collated from the literature. Breast outlines were obtained from a commercially available Rando phantom using the Osiris system. Multislice breast treatment plans were produced using a variety of compensation methods. Dose-volume histograms (DVHs) obtained for each treatment plan were reduced to simple numerical parameters using the equivalent uniform dose and effective volume DVH reduction methods. These parameters were input into the models to obtain complication probability predictions. The fitted model parameters were consistent with a parallel tissue architecture. Conventional clinical plans generally showed decreasing complication probabilities with increasing compensation sophistication. Extremely homogeneous plans representing idealized IMRT treatments showed increased complication probabilities compared to conventional planning methods, as a result of increased dose to areas that receive sub-prescription doses with conventional techniques.
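
    The Lyman model referenced above can be sketched in a few lines: reduce the DVH to an equivalent uniform dose (EUD) and map it through a probit function, NTCP = Φ((EUD − TD50)/(m·TD50)). The DVH, TD50, m, and volume parameter below are invented illustration values, not the fitted breast-fibrosis parameters.

    ```python
    import numpy as np
    from scipy.stats import norm

    def eud(dose_bins, vol_fracs, a):
        """Equivalent uniform dose for a DVH; `a` is the volume-effect parameter."""
        return np.sum(vol_fracs * dose_bins ** a) ** (1.0 / a)

    def lyman_ntcp(eud_gy, td50, m):
        """Lyman model: NTCP = Phi((EUD - TD50) / (m * TD50))."""
        t = (eud_gy - td50) / (m * td50)
        return norm.cdf(t)

    # Toy DVH: doses (Gy) and the fraction of breast volume at each dose.
    doses = np.array([45.0, 48.0, 50.0, 52.0])
    fracs = np.array([0.10, 0.25, 0.55, 0.10])
    e = eud(doses, fracs, a=1.0)   # a near 1 ~ parallel-like dose averaging
    print("EUD: %.1f Gy, NTCP: %.3f" % (e, lyman_ntcp(e, td50=55.0, m=0.2)))
    ```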

  13. Machine learning approaches to the social determinants of health in the health and retirement study.

    PubMed

    Seligman, Benjamin; Tuljapurkar, Shripad; Rehkopf, David

    2018-04-01

    Social and economic factors are important predictors of health and of recognized importance for health systems. However, machine learning, used elsewhere in the biomedical literature, has not been extensively applied to study relationships between society and health. We investigate how machine learning may add to our understanding of social determinants of health using data from the Health and Retirement Study. A linear regression of age and gender, and a parsimonious theory-based regression additionally incorporating income, wealth, and education, were used to predict systolic blood pressure, body mass index, waist circumference, and telomere length. Prediction, fit, and interpretability were compared across four machine learning methods: linear regression, penalized regressions, random forests, and neural networks. All models had poor out-of-sample prediction. Most machine learning models performed similarly to the simpler models. However, neural networks greatly outperformed the three other methods. Neural networks also had good fit to the data (R² between 0.4 and 0.6, versus <0.3 for all others). Across machine learning models, nine variables were frequently selected or highly weighted as predictors: dental visits, current smoking, self-rated health, serial-seven subtractions, probability of receiving an inheritance, probability of leaving an inheritance of at least $10,000, number of children ever born, African-American race, and gender. Some of the machine learning methods do not improve prediction or fit beyond simpler models; however, neural networks performed well. The predictors identified across models suggest underlying social factors that are important predictors of biological indicators of chronic disease, and the non-linear and interactive relationships between variables fundamental to the neural network approach may be important to consider.

  14. Biomechanical modeling of acetabular component polyethylene stresses, fracture risk, and wear rate following press-fit implantation.

    PubMed

    Ong, Kevin L; Rundell, Steve; Liepins, Imants; Laurent, Ryan; Markel, David; Kurtz, Steven M

    2009-11-01

    Press-fit implantation may result in acetabular component deformation between the ischial-ilial columns ("pinching"). The biomechanical and clinical consequences of liner pinching due to press-fit implantation have not been well studied. We compared the effects of pinching on the polyethylene fracture risk, potential wear rate, and stresses for two different thickness liners using computational methods. Line-to-line ("no pinch") reaming and 2 mm underreaming press fit ("pinch") conditions were examined for Trident cups with X3 polyethylene liner wall thicknesses of 5.9 mm (36E) and 3.8 mm (40E). Press-fit cup deformations were measured from a foam block configuration. A hybrid material model, calibrated to experimentally determined stress-strain behavior of sequentially annealed polyethylene, was applied to the computational model. Molecular chain stretch did not exceed the fracture threshold in any cases. Nominal shell pinch of 0.28 mm was estimated to increase the volumetric wear rate by 70% for both cups and peak contact stresses by 140 and 170% for the 5.9 and 3.8 mm-thick liners, respectively. Although pinching increases liner stresses, polyethylene fracture is highly unlikely, and the volumetric wear rates are likely to be low compared to conventional polyethylene.

  15. Sensitivity analysis of respiratory parameter uncertainties: impact of criterion function form and constraints.

    PubMed

    Lutchen, K R

    1990-08-01

    A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz. This reduces data acquisition requirements from a 16- to a 5.33- to 8-s breath holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
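
    The linearized-confidence-region idea above can be sketched as follows: after a weighted least-squares fit, approximate the parameter covariance as s²(JᵀJ)⁻¹ from the Jacobian at the solution, and take standard errors from its diagonal. The two-parameter impedance-magnitude model and all values below are invented stand-ins for the four- and six-element respiratory models in the study.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def zmag(params, f):
        # |Z| of a series resistance-compliance element (simplified analogue).
        R, C = params
        return np.sqrt(R ** 2 + (1.0 / (2 * np.pi * f * C)) ** 2)

    f = np.linspace(0.125, 4.0, 30)             # Hz, the low-frequency range cited
    rng = np.random.default_rng(3)
    y = zmag((2.0, 0.05), f) * (1 + rng.normal(0, 0.02, f.size))
    w = np.ones_like(f)                          # uniform weights for this sketch

    res = least_squares(lambda p: np.sqrt(w) * (zmag(p, f) - y), x0=(1.0, 0.1))
    J = res.jac                                  # weighted Jacobian at the solution
    dof = f.size - 2
    s2 = 2 * res.cost / dof                      # res.cost = 0.5 * sum(residual^2)
    cov = np.linalg.inv(J.T @ J) * s2            # linearized covariance approximation
    print("parameter SEs:", np.sqrt(np.diag(cov)))
    ```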

  16. Experimental study of water desorption isotherms and thin-layer convective drying kinetics of bay laurel leaves

    NASA Astrophysics Data System (ADS)

    Ghnimi, Thouraya; Hassini, Lamine; Bagane, Mohamed

    2016-12-01

    The aim of this work is to determine the desorption isotherms and the drying kinetics of bay laurel leaves (Laurus nobilis L.). The desorption isotherms were performed at three temperature levels (50, 60, and 70 °C) and at water activities ranging from 0.057 to 0.88 using the static gravimetric method. Five sorption models were used to fit the experimental desorption isotherm data. It was found that the Kuhn model offers the best fit of the experimental moisture isotherms in the investigated ranges of temperature and water activity. The net isosteric heat of water desorption was evaluated using the Clausius-Clapeyron equation and was then best correlated to equilibrium moisture content by the empirical Tsami equation. Thin-layer convective drying curves of bay laurel leaves were obtained for temperatures of 45, 50, 60, and 70 °C, relative humidities of 5, 15, 30, and 45%, and air velocities of 1, 1.5, and 2 m/s. A nonlinear regression procedure (Levenberg-Marquardt) was used to fit the drying curves with five semi-empirical mathematical models available in the literature; R² and χ² were used to evaluate the goodness of fit of the models to the data. Based on the experimental drying curves, the drying characteristic curve (DCC) was established and fitted with a third-degree polynomial function. The Midilli-Kucuk model was found to be the best semi-empirical model describing the thin-layer drying kinetics of bay laurel leaves. The effective moisture diffusivity and activation energy of bay laurel leaves were also identified.
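
    As a sketch of the drying-curve fitting step, the code below fits the Midilli-Kucuk form MR = a·exp(−k·tⁿ) + b·t with scipy's curve_fit, which uses Levenberg-Marquardt by default for unconstrained problems, and reports R² and reduced χ². All data and parameter values are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def midilli_kucuk(t, a, k, n, b):
        # Semi-empirical thin-layer drying model: MR = a*exp(-k*t^n) + b*t
        return a * np.exp(-k * t ** n) + b * t

    # Invented moisture-ratio data over drying time (hours).
    t = np.linspace(0.1, 6.0, 25)
    rng = np.random.default_rng(4)
    mr = midilli_kucuk(t, 1.0, 0.45, 1.1, -0.004) + rng.normal(0, 0.01, t.size)

    popt, _ = curve_fit(midilli_kucuk, t, mr, p0=(1.0, 0.5, 1.0, 0.0))
    resid = mr - midilli_kucuk(t, *popt)
    r2 = 1 - np.sum(resid ** 2) / np.sum((mr - mr.mean()) ** 2)
    chi2 = np.sum(resid ** 2) / (t.size - 4)     # reduced chi-square
    print("a, k, n, b:", popt)
    print("R2 = %.4f, chi2 = %.2e" % (r2, chi2))
    ```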

  17. Fit of interim crowns fabricated using photopolymer-jetting 3D printing.

    PubMed

    Mai, Hang-Nga; Lee, Kyu-Bok; Lee, Du-Hyeong

    2017-08-01

    The fit of interim crowns fabricated using 3-dimensional (3D) printing is unknown. The purpose of this in vitro study was to evaluate the fit of interim crowns fabricated using photopolymer-jetting 3D printing and to compare it with that of milling and compression molding methods. Twelve study models were fabricated by making an impression of a metal master model of the mandibular first molar. On each study model, interim crowns (N=36) were fabricated using compression molding (molding group, n=12), milling (milling group, n=12), and 3D polymer-jetting (polymer-jetting group, n=12) methods. The crowns were prepared as follows: molding group, overimpression technique; milling group, a 5-axis dental milling machine; polymer-jetting group, a 3D printer. The fit of the interim crowns was evaluated in the proximal, marginal, internal axial, and internal occlusal regions by using the image-superimposition and silicone-replica techniques. The Mann-Whitney U and Kruskal-Wallis tests were used to compare the results among groups (α=.05). Compared with the molding group, the milling and polymer-jetting groups showed more accurate results in the proximal and marginal regions (P<.001). In the axial regions, even though the mean discrepancy was smallest in the molding group, the data showed large deviations. In the occlusal region, the polymer-jetting group was the most accurate, and compared with the other groups, the milling group showed larger internal discrepancies (P<.001). Polymer-jet 3D printing significantly enhanced the fit of interim crowns, particularly in the occlusal region.
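
    A minimal sketch of the statistical comparison described above: an overall Kruskal-Wallis test across the three fabrication groups, followed by pairwise Mann-Whitney U tests. The discrepancy values are invented, and the Bonferroni correction shown is an assumed follow-up strategy, not necessarily the authors'.

    ```python
    import numpy as np
    from scipy.stats import kruskal, mannwhitneyu

    rng = np.random.default_rng(5)
    # Invented marginal-discrepancy measurements (micrometers) per group (n=12 each).
    molding = rng.normal(120, 25, 12)
    milling = rng.normal(80, 15, 12)
    printing = rng.normal(70, 12, 12)

    h, p_overall = kruskal(molding, milling, printing)
    print(f"Kruskal-Wallis H={h:.2f}, p={p_overall:.4f}")

    # Pairwise follow-up with Mann-Whitney U (Bonferroni-corrected alpha = .05/3).
    for name, grp in [("milling", milling), ("printing", printing)]:
        u, p = mannwhitneyu(molding, grp, alternative="two-sided")
        print(f"molding vs {name}: U={u:.0f}, p={p:.4f}")
    ```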

  18. Rosenberg's Self-Esteem Scale: Two Factors or Method Effects.

    ERIC Educational Resources Information Center

    Tomas, Jose M.; Oliver, Amparo

    1999-01-01

    Results of a study with 640 Spanish high school students suggest the existence of a global self-esteem factor underlying responses to Rosenberg's (M. Rosenberg, 1965) Self-Esteem Scale, although the inclusion of method effects is needed to achieve a good model fit. Method effects are associated with item wording. (SLD)

  19. Capturing the Central Line Bundle Infection Prevention Interventions: Comparison of Reflective and Composite Modeling Methods

    PubMed Central

    Gilmartin, Heather M.; Sousa, Karen H.; Battaglia, Catherine

    2016-01-01

    Background: The central line (CL) bundle interventions are important for preventing central line-associated bloodstream infections (CLABSIs), but a modeling method for testing the CL bundle interventions within a health systems framework is lacking. Objectives: Guided by the Quality Health Outcomes Model (QHOM), this study tested the CL bundle interventions in reflective and composite latent-variable measurement models to assess the impact of the modeling approaches on an investigation of the relationships between adherence to the CL bundle interventions, organizational context, and CLABSIs. Methods: A secondary data analysis study was conducted using data from 614 U.S. hospitals that participated in the Prevention of Nosocomial Infection and Cost-Effectiveness-Refined study. The sample was randomly split into exploration and validation subsets. Results: The two CL bundle modeling approaches resulted in adequately fitting structural models (RMSEA = .04; CFI = .94) and supported similar relationships within the QHOM. Adherence to the CL bundle had a direct effect on organizational context (reflective = .23; composite = .20; p = .01) and on CLABSIs (reflective = −.28; composite = −.25; p = .01). The relationship between context and CLABSIs was not significant. Both modeling methods resulted in partial support of the QHOM. Discussion: There were small statistical but large conceptual differences between the reflective and composite modeling approaches. The empirical impact of the modeling approaches was inconclusive, as both models resulted in a good fit to the data. Lessons learned are presented. Comparing modeling approaches is recommended when initially modeling variables that have never been modeled, or whose directionality is ambiguous, to increase transparency and bring confidence to study findings. PMID:27579507

  20. Ipsative imputation for a 15-item Geriatric Depression Scale in community-dwelling elderly people.

    PubMed

    Imai, Hissei; Furukawa, Toshiaki A; Kasahara, Yoriko; Ishimoto, Yasuko; Kimura, Yumi; Fukutomi, Eriko; Chen, Wen-Ling; Tanaka, Mire; Sakamoto, Ryota; Wada, Taizo; Fujisawa, Michiko; Okumiya, Kiyohito; Matsubayashi, Kozo

    2014-09-01

    Missing data are inevitable in almost all medical studies. Imputation methods based on probabilistic models are common, but they cannot impute individual data and they require special software. In contrast, the ipsative imputation method, which substitutes missing items with the mean of the remaining items within the individual, is easy, requires no special software, and can provide individual scores. The aim of the present study was to evaluate the validity of the ipsative imputation method using data from the 15-item Geriatric Depression Scale. Participants were community-dwelling elderly individuals (n = 1178). A structural equation model was constructed, and model fit indexes were calculated to assess the validity of the imputation method when applied to individuals missing 20% of items or less and 40% of items or less, depending on whether we assumed that their correlation coefficients were the same as in the dataset with no missing items. All of the model fit indexes were better under the assumption that the dataset without missing data is the same as the dataset missing 20% of items or less than under the assumption that the datasets differed; under the same assumption, however, the model fit indexes were worse for the dataset missing 40% of items or less. Finally, the path coefficients of the datasets imputed by ipsative imputation and by multiple imputation were compatible with each other when the proportion of missing items was 20% or less. Ipsative imputation appears to be a valid imputation method and can be used to impute data in studies using the 15-item Geriatric Depression Scale if the percentage of missing items is 20% or less.
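
    Ipsative imputation is simple enough to state in a few lines: each respondent's missing items are replaced by the mean of that respondent's observed items, applied only when 20% or fewer items are missing, per the study's finding. The sketch below assumes 0/1 GDS item scoring; the function name and data are illustrative.

    ```python
    import numpy as np

    def ipsative_impute(scores, max_missing_frac=0.20):
        """Replace missing items with the mean of the respondent's remaining items.

        Rows with more than `max_missing_frac` missing items are left untouched,
        following the 20%-or-less rule supported by the study."""
        scores = np.array(scores, dtype=float)
        out = scores.copy()
        for i, row in enumerate(scores):
            miss = np.isnan(row)
            if 0 < miss.sum() <= max_missing_frac * row.size:
                out[i, miss] = np.nanmean(row)
        return out

    # 15-item GDS responses (0/1), NaN = missing; two respondents for illustration.
    data = [
        [1, 0, 1, np.nan, 0, 1, 0, 0, 1, 0, 1, np.nan, 0, 1, 0],  # 2/15 missing (~13%)
        [np.nan] * 7 + [1, 0, 1, 0, 1, 0, 1, 0],                  # 7/15 missing (~47%)
    ]
    imputed = ipsative_impute(data)
    print(imputed[0])   # missing items filled with the person's own item mean
    print(imputed[1])   # left unimputed: too many missing items
    ```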
