Application of separable parameter space techniques to multi-tracer PET compartment modeling.
Zhang, Jeff L; Morey, A Michael; Kadrmas, Dan J
2016-02-07
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
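The core of the separable approach above is variable projection: for any trial of the nonlinear parameters, the linear coefficients have a closed-form least-squares solution, so the outer search runs only over the nonlinear subspace. Below is a minimal sketch in Python using a toy two-exponential model rather than the paper's multi-tracer compartment equations; all values are illustrative.

```python
# Separable (variable-projection) least squares: the linear amplitudes are
# solved exactly for each trial of the nonlinear decay rates, reducing the
# dimensionality of the nonlinear fit.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 60, 121)
rng = np.random.default_rng(0)
y = 3.0 * np.exp(-0.05 * t) + 1.5 * np.exp(-0.4 * t) + 0.05 * rng.standard_normal(t.size)

def basis(theta):
    # One column per nonlinear parameter; a multi-tracer model would stack
    # additional columns (e.g., input functions convolved with exponentials).
    return np.column_stack([np.exp(-th * t) for th in theta])

def projected_rss(theta):
    B = basis(theta)
    a, *_ = np.linalg.lstsq(B, y, rcond=None)   # inner linear solve
    r = y - B @ a
    return r @ r

res = minimize(projected_rss, x0=[0.1, 1.0], method="Nelder-Mead")
B = basis(res.x)
amps, *_ = np.linalg.lstsq(B, y, rcond=None)
print("decay rates:", res.x, "amplitudes:", amps)
```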
Calibrating White Dwarf Asteroseismic Fitting Techniques
NASA Astrophysics Data System (ADS)
Castanheira, B. G.; Romero, A. D.; Bischoff-Kim, A.
2017-03-01
The main goal of looking for intrinsic variability in stars is the unique opportunity to study their internal structure. Once we have extracted independent modes from the data, it appears to be a simple matter of comparing the period spectrum with those from theoretical model grids to learn the inner structure of that star. However, asteroseismology is much more complicated than this simple description. We must account not only for observational uncertainties in period determination, but most importantly for the limitations of the model grids, coming from the uncertainties in the constitutive physics, and of the fitting techniques. In this work, we will discuss results of numerical experiments where we used different independently calculated model grids (white dwarf cooling models WDEC and fully evolutionary LPCODE-PUL) and fitting techniques to fit synthetic stars. The advantage of using synthetic stars is that we know the details of their interior structure, so we can assess how well our models and fitting techniques are able to recover the interior structure, as well as the stellar parameters.
NASA Astrophysics Data System (ADS)
Adler, Ronald S.; Swanson, Scott D.; Yeung, Hong N.
1996-01-01
A projection-operator technique is applied to a general three-component model for magnetization transfer, extending our previous two-component model [R. S. Adler and H. N. Yeung, J. Magn. Reson. A 104, 321 (1993), and H. N. Yeung, R. S. Adler, and S. D. Swanson, J. Magn. Reson. A 106, 37 (1994)]. The PO technique provides an elegant means of deriving a simple, effective rate equation in which there is a natural separation of relaxation and source terms, and it allows incorporation of Redfield-Provotorov theory without any additional assumptions or restrictive conditions. The PO technique is extended to incorporate more general, multicomponent models. The three-component model is used to fit experimental data from samples of human hyaline cartilage and fibrocartilage. The fits of the three-component model are compared to the fits of the two-component model.
NASA Astrophysics Data System (ADS)
Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.
2011-04-01
This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions, and explore different continuum-removal techniques. We further evaluate the suitability of curve-fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least-squares techniques such as the Levenberg-Marquardt algorithm achieve results comparable to the MGM model (Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally, we use Gaussian modeling to fit CRISM spectra of pyroxene- and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions shows that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low-Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
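A hedged sketch of the kind of Gaussian band decomposition described above: negative Gaussians about a unit continuum are fitted to a continuum-removed spectrum with Levenberg-Marquardt. Band centers, widths, and depths here are illustrative assumptions, not values from the paper.

```python
# Decompose a continuum-removed reflectance spectrum into two Gaussian
# absorption bands using Levenberg-Marquardt (scipy's method="lm").
import numpy as np
from scipy.optimize import curve_fit

x = np.linspace(1.0, 2.6, 400)            # wavelength, micrometers

def two_bands(x, d1, c1, w1, d2, c2, w2):
    # Absorptions modeled as negative Gaussians about a unit continuum.
    return 1.0 - d1 * np.exp(-0.5 * ((x - c1) / w1) ** 2) \
               - d2 * np.exp(-0.5 * ((x - c2) / w2) ** 2)

rng = np.random.default_rng(1)
y = two_bands(x, 0.3, 1.9, 0.15, 0.2, 2.3, 0.12) + 0.005 * rng.standard_normal(x.size)

p0 = [0.2, 1.85, 0.1, 0.15, 2.25, 0.1]    # an automated initializer would supply this
popt, pcov = curve_fit(two_bands, x, y, p0=p0, method="lm")
print("fitted band parameters:", popt)
```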
An interactive user-friendly approach to surface-fitting three-dimensional geometries
NASA Technical Reports Server (NTRS)
Cheatwood, F. Mcneil; Dejarnette, Fred R.
1988-01-01
A surface-fitting technique has been developed which addresses two problems with existing geometry packages: computer storage requirements and the time required of the user for the initial setup of the geometry model. Coordinates of cross sections are fit using segments of general conic sections. The next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. Provisions are made to allow the fitting of fuselages and wings so that entire wing-body combinations may be modeled. This report includes the development of the technique along with a User's Guide for the various menus within the program. Results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment geometry are presented.
Fitting neuron models to spike trains.
Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
Analysis technique for controlling system wavefront error with active/adaptive optics
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Fitting and Modeling in the ASC Data Analysis Environment
NASA Astrophysics Data System (ADS)
Doe, S.; Siemiginowska, A.; Joye, W.; McDowell, J.
As part of the AXAF Science Center (ASC) Data Analysis Environment, we will provide to the astronomical community a Fitting Application. We present a design of the application in this paper. Our design goal is to give the user the flexibility to use a variety of optimization techniques (Levenberg-Marquardt, maximum entropy, Monte Carlo, Powell, downhill simplex, CERN-Minuit, and simulated annealing) and fit statistics (chi-squared, Cash, variance, and maximum likelihood); our modular design allows the user easily to add their own optimization techniques and/or fit statistics. We also present a comparison of the optimization techniques to be provided by the Application. The high spatial and spectral resolutions that will be obtained with AXAF instruments require a sophisticated data modeling capability. We will provide not only a suite of astronomical spatial and spectral source models, but also the capability of combining these models into source models of up to four data dimensions (i.e., into source functions f(E,x,y,t)). We will also provide tools to create instrument response models appropriate for each observation.
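The modular design described above, with pluggable fit statistics and optimizers, can be illustrated with a small registry pattern. This is a sketch, not the ASC application's actual API; the statistic definitions follow the usual chi-squared and Cash forms.

```python
# Registry-based fitter: statistics and optimizers are looked up by name,
# so users can register their own without touching the fitting core.
import numpy as np
from scipy.optimize import minimize

STATS = {
    "chi2": lambda data, model, err: np.sum(((data - model) / err) ** 2),
    # Cash statistic for Poisson data, up to a model-independent constant.
    "cash": lambda data, model, err: 2.0 * np.sum(
        model - data * np.log(np.clip(model, 1e-12, None))),
}

def fit(stat_name, model_fn, p0, x, data, err=1.0, method="Nelder-Mead"):
    objective = lambda p: STATS[stat_name](data, model_fn(x, p), err)
    return minimize(objective, p0, method=method)

x = np.linspace(1, 10, 50)
data = np.random.default_rng(2).poisson(40.0 * x ** -1.2)
powerlaw = lambda x, p: p[0] * x ** p[1]
print(fit("cash", powerlaw, [30.0, -1.0], x, data).x)
```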
Some Improved Diagnostics for Failure of The Rasch Model.
ERIC Educational Resources Information Center
Molenaar, Ivo W.
1983-01-01
Goodness of fit tests for the Rasch model are typically large-sample, global measures. This paper offers suggestions for small-sample exploratory techniques for examining the fit of item data to the Rasch model. (Author/JKS)
Gunsoy, S; Ulusoy, M
2016-01-01
The purpose of this study was to evaluate the internal and marginal fit of chrome-cobalt (Co-Cr) crowns fabricated by laser sintering, computer-aided design (CAD) and computer-aided manufacturing, and conventional methods. Polyamide master and working models were designed and fabricated. The models were initially designed with a software application for three-dimensional (3D) CAD (Maya, Autodesk Inc.), and all models were produced by a 3D printer (EOSINT P380 SLS, EOS). 128 single-unit Co-Cr fixed dental prostheses were fabricated with four different techniques: the conventional lost-wax method, milled wax with lost-wax method (MWLW), direct metal laser sintering (DMLS), and milled Co-Cr (MCo-Cr). The cement film thickness of the marginal and internal gaps was measured by an observer using a stereomicroscope after taking digital photos at ×24 magnification. The best fit rates, according to the means and standard deviations of all measurements (in μm), were obtained with DMLS in both the premolar (65.84) and molar (58.38) models. A significant difference was found between DMLS and the other fabrication techniques (P < 0.05). No significant difference was found between MCo-Cr and MWLW in either the premolar or molar models (P > 0.05). DMLS was the best-fitting fabrication technique for single crowns. The best fit was found at the margin; the largest gap was found occlusally. All groups were within the clinically acceptable misfit range.
Sakr, Sherif; Elshawi, Radwa; Ahmed, Amjad M; Qureshi, Waqas T; Brawner, Clinton A; Keteyian, Steven J; Blaha, Michael J; Al-Mallah, Mouaz H
2017-12-19
Prior studies have demonstrated that cardiorespiratory fitness (CRF) is a strong marker of cardiovascular health. Machine learning (ML) can enhance the prediction of outcomes through classification techniques that classify the data into predetermined categories. The aim of this study is to present an evaluation and comparison of how machine learning techniques can be applied to medical records of cardiorespiratory fitness and how the various techniques differ in their ability to predict medical outcomes (e.g. mortality). We use data from 34,212 patients free of known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 10-year follow-up. Seven machine learning classification techniques were evaluated: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN) and Random Forest (RF). In order to handle the imbalanced dataset, the Synthetic Minority Over-Sampling Technique (SMOTE) was applied. Two sets of experiments were conducted, with and without the SMOTE sampling technique. On average over the different evaluation metrics, the SVM classifier showed the lowest performance, while other models such as BN, BC and DT performed better. The RF classifier showed the best performance (AUC = 0.97) among all models trained using SMOTE sampling. The results show that the various ML techniques can vary significantly in their performance on the different evaluation metrics. It is also not necessarily the case that more complex ML models achieve higher prediction accuracy. The prediction performance of all models trained with SMOTE is much better than that of models trained without SMOTE. The study shows the potential of machine learning methods for predicting all-cause mortality using cardiorespiratory fitness data.
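Below is a sketch of the SMOTE-plus-classifier workflow the study describes, using synthetic imbalanced data in place of the patient records, which are not public. Note that oversampling is applied only to the training split.

```python
# SMOTE oversampling followed by a random-forest classifier, evaluated by AUC.
# Requires the imbalanced-learn package (pip install imbalanced-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split so the test set stays untouched.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```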
Dahl, Bjørn Einar; Rønold, Hans Jacob; Dahl, Jon E
2017-03-01
Whether single crowns produced by computer-aided design and computer-aided manufacturing (CAD-CAM) have an internal fit comparable to crowns made by lost-wax metal casting technique is unknown. The purpose of this in vitro study was to compare the internal fit of single crowns produced with the lost-wax and metal casting technique with that of single crowns produced with the CAD-CAM technique. The internal fit of 5 groups of single crowns produced with the CAD-CAM technique was compared with that of single crowns produced in cobalt-chromium with the conventional lost-wax and metal casting technique. Comparison was performed using the triple-scan protocol; scans of the master model, the crown on the master model, and the intaglio of the crown were superimposed and analyzed with computer software. The 5 groups were milled presintered zirconia, milled hot isostatic pressed zirconia, milled lithium disilicate, milled cobalt-chromium, and laser-sintered cobalt-chromium. The cement space in both the mesiodistal and buccopalatal directions was statistically smaller (P<.05) for crowns made by the conventional lost-wax and metal casting technique compared with that of crowns produced by the CAD-CAM technique. Single crowns made using the conventional lost-wax and metal casting technique have better internal fit than crowns produced using the CAD-CAM technique. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese yen swaps market and the US dollar yield market.
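The central idea, augmenting a curve-fitting error with a Kullback-Leibler hint penalty, can be sketched as follows. The toy model, the symmetry hint, and the weight lambda are illustrative assumptions, not the paper's Vasicek calibration.

```python
# Curve fitting with an added consistency-hint penalty: the KL distance pulls
# the residual distribution toward a hinted property (here, symmetry).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 200)
y = np.exp(-2.0 * x) + 0.02 * rng.standard_normal(x.size)

def kl(p, q, eps=1e-12):
    # Kullback-Leibler distance between two discrete distributions.
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def objective(params, lam=5.0):
    rate = params[0]
    resid = y - np.exp(-rate * x)
    sse = resid @ resid
    # Consistency hint: residuals should look like symmetric zero-mean noise,
    # so their histogram is pulled toward its own mirror image.
    hist, _ = np.histogram(resid, bins=10, range=(-0.5, 0.5))
    p = hist / hist.sum()
    return sse + lam * kl(p, p[::-1])

print(minimize(objective, x0=[1.0], method="Nelder-Mead").x)
```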
A Method to Test Model Calibration Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, Ron; Polly, Ben; Neymark, Joel
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
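A minimal sketch of the underlying fit follows: a cumulative-Gaussian psychometric function estimated by maximum likelihood with the Nelder-Mead simplex, on toy staircase-like data. The paper's bias-reduction step is not reproduced here.

```python
# Maximum-likelihood fit of a cumulative-Gaussian psychometric function.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Stimulus levels, trials per level, and "greater than" responses (toy data).
x = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])
n = np.array([20, 20, 20, 20, 20, 20, 20])
k = np.array([1, 4, 7, 10, 14, 17, 19])

def neg_log_lik(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    p = norm.cdf(x, loc=mu, scale=sigma).clip(1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 2.0], method="Nelder-Mead")
print("bias (mu), spread (sigma):", fit.x)
```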
Ullattuthodi, Sujana; Cherian, Kandathil Phillip; Anandkumar, R; Nambiar, M Sreedevi
2017-01-01
This in vitro study seeks to evaluate and compare the marginal and internal fit of cobalt-chromium copings fabricated using the conventional and direct metal laser sintering (DMLS) techniques. A master model of a prepared molar tooth was made using cobalt-chromium alloy. A silicone impression of the master model was made, and thirty standardized working models were then produced: twenty working models for the conventional lost-wax technique and ten working models for the DMLS technique. A total of twenty metal copings were fabricated using the two production techniques, the conventional lost-wax method and DMLS, with ten samples in each group. The conventional and DMLS copings were cemented to the working models using glass ionomer cement. The marginal gap of the copings was measured at four predetermined points. The dies with the cemented copings were sectioned in a standardized manner with a heavy-duty lathe. Each sectioned sample was then analyzed for the internal gap between the die and the metal coping using a metallurgical microscope. Digital photographs were taken at ×50 magnification and analyzed using measurement software. Statistical analysis was done by unpaired t-test and analysis of variance (ANOVA). The results of this study reveal that no significant difference was present in the marginal gap of conventional and DMLS copings (P > 0.05) by ANOVA. The mean internal gap of DMLS copings was significantly greater than that of conventional copings (P < 0.05). Within the limitations of this in vitro study, it was concluded that the internal fit of conventional copings was superior to that of the DMLS copings, while the marginal fit of copings fabricated by the two different techniques showed no significant difference.
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-10-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
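A sketch of the discrete-time rescaling check described above: per-bin spike probabilities from a (here, exactly correct) model are converted to rescaled ISIs, with the spike bin's contribution drawn randomly to correct for finite bin width, then KS-tested against a unit exponential. The randomized spike-bin term is one way to implement the correction discussed; details differ from the paper's analytic treatment.

```python
# Time-rescaling goodness-of-fit for a discrete-time spike train model.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(4)
T = 20000
p = 0.02 * np.ones(T)                # the model's per-bin spike probability
spikes = rng.random(T) < p           # simulate from the same (correct) model
q = -np.log1p(-p)                    # per-bin integrated intensity

spike_bins = np.flatnonzero(spikes)
taus = []
for prev, s in zip(spike_bins[:-1], spike_bins[1:]):
    inter = q[prev + 1:s].sum()      # whole bins between the two spikes
    u = rng.random()                 # randomized partial term for the spike bin
    delta = -np.log1p(-u * (1.0 - np.exp(-q[s])))
    taus.append(inter + delta)

# Rescaled ISIs should be Exp(1) if the model is correct (large p-value).
print(kstest(taus, "expon"))
```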
A modified active appearance model based on an adaptive artificial bee colony.
Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali
2014-01-01
Active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods can resolve this problem, although applying optimization raises its own difficulties. Hence, in this paper we propose an AAM-based face recognition technique that resolves the fitting problem of AAM by introducing a new adaptive ABC algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 images dataset. The results reveal that the proposed face recognition technique performs effectively in terms of face recognition accuracy.
ERIC Educational Resources Information Center
Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey
2009-01-01
The purpose of this study was to assess the model fit of a 2PL through comparison with nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that the three nonparametric procedures implemented produced ICCs similar to those of the 2PL for items simulated to fit the 2PL. However, for misfitting items,…
A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Safa, Mohammad
2016-09-01
Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies because if the variogram model parameters are tainted with uncertainty, the latter will spread in the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases use is made of the automatic fitting method on the basis of putting together the geostatistical principles and optimization techniques to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) in the automatic fitting. Also, since the variogram model function (γ) and the number of structures (m) also affect the model quality, a program has been provided in the MATLAB software that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single/multi-structured fitted models, use has been made of the cross-validation method, and the best model has been introduced to the user as the output. In order to check the capability of the proposed objective function and the procedure, 3 case studies have been presented.
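A hedged sketch of the same idea using SciPy's dual_annealing in place of the authors' MATLAB program: a spherical variogram model is fitted to an experimental variogram by minimizing a weighted least-squares objective. The lag data and pair counts below are illustrative.

```python
# Automatic variogram fitting by simulated annealing (spherical model,
# weighted least squares with weights proportional to pairs per lag).
import numpy as np
from scipy.optimize import dual_annealing

h = np.array([5, 10, 15, 20, 25, 30, 40, 50], dtype=float)      # lag distances
g = np.array([0.31, 0.57, 0.76, 0.87, 0.95, 0.98, 1.01, 0.99])  # experimental variogram
npairs = np.array([420, 390, 360, 330, 300, 270, 220, 180])     # pairs per lag

def spherical(h, nugget, sill, a):
    return np.where(h < a,
                    nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3),
                    nugget + sill)

def wls(params):
    nugget, sill, a = params
    resid = g - spherical(h, nugget, sill, a)
    return np.sum(npairs * resid ** 2)

bounds = [(0.0, 0.5), (0.1, 2.0), (5.0, 100.0)]
res = dual_annealing(wls, bounds, seed=0)
print("nugget, sill, range:", res.x)
```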
Curve fitting and modeling with splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
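A minimal sketch of backward knot elimination with a cubic B-spline basis: starting from many candidate interior knots, the knot whose removal increases the residual sum of squares the least is dropped repeatedly. This assumes scipy >= 1.8 for BSpline.design_matrix and is not the original FORTRAN implementation.

```python
# Backward elimination of spline knots by least-squares refitting.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
k = 3                                        # cubic B-splines

def fit_sse(interior):
    t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]   # clamped knot vector
    B = BSpline.design_matrix(x, t, k).toarray()
    c, *_ = np.linalg.lstsq(B, y, rcond=None)
    r = y - B @ c
    return r @ r

knots = list(np.linspace(0.1, 0.9, 9))       # candidate interior knots
while len(knots) > 3:
    sses = [fit_sse(knots[:i] + knots[i + 1:]) for i in range(len(knots))]
    del knots[int(np.argmin(sses))]          # drop the knot that hurts least
print("retained knots:", knots)
```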
Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S
2018-01-01
The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.
Using multiple group modeling to test moderators in meta-analysis.
Schoemann, Alexander M
2016-12-01
Meta-analysis is a popular and flexible analysis that can be fit in many modeling frameworks. Two methods of fitting meta-analyses that are growing in popularity are structural equation modeling (SEM) and multilevel modeling (MLM). By using SEM or MLM to fit a meta-analysis researchers have access to powerful techniques associated with SEM and MLM. This paper details how to use one such technique, multiple group analysis, to test categorical moderators in meta-analysis. In a multiple group meta-analysis a model is fit to each level of the moderator simultaneously. By constraining parameters across groups any model parameter can be tested for equality. Using multiple groups to test for moderators is especially relevant in random-effects meta-analysis where both the mean and the between studies variance of the effect size may be compared across groups. A simulation study and the analysis of a real data set are used to illustrate multiple group modeling with both SEM and MLM. Issues related to multiple group meta-analysis and future directions for research are discussed. Copyright © 2016 John Wiley & Sons, Ltd.
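A hedged sketch of the moderator test in plain Python rather than SEM or MLM software: a random-effects mean is estimated separately in each moderator group (DerSimonian-Laird), then the group means are compared with a Wald-type test, which is analogous to relaxing an equality constraint across groups. Effect sizes and variances below are toy values.

```python
# Multiple-group random-effects meta-analysis with a Wald test of group means.
import numpy as np
from scipy.stats import chi2

def dl_pool(effects, variances):
    w = 1.0 / variances
    mu_fixed = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - mu_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(effects) - 1)) / c)   # between-studies variance
    w_star = 1.0 / (variances + tau2)
    mu = np.sum(w_star * effects) / np.sum(w_star)
    return mu, 1.0 / np.sum(w_star)                 # pooled mean, its variance

g1 = dl_pool(np.array([0.30, 0.45, 0.25, 0.50]), np.array([0.02, 0.03, 0.02, 0.04]))
g2 = dl_pool(np.array([0.05, 0.10, -0.02, 0.12]), np.array([0.02, 0.02, 0.03, 0.03]))

wald = (g1[0] - g2[0]) ** 2 / (g1[1] + g2[1])
print("moderator test p =", chi2.sf(wald, df=1))
```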
Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...
2015-12-10
We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
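The two calibration routes compared above can be sketched with an illustrative Padé-type response R = a·m/(1 + b·m): direct nonlinear least squares versus ordinary least squares on the transformed line 1/R = 1/(a·m) + b/a. The data below are synthetic, not UNCL measurements; note that the transformation also transforms the response errors, which is the crux of the paper's conclusion.

```python
# Nonlinear Pade fit versus its linearized counterpart.
import numpy as np
from scipy.optimize import curve_fit

m = np.array([50, 100, 150, 200, 250, 300.0])      # 235U linear density (toy units)
rng = np.random.default_rng(6)
R = 2.0 * m / (1 + 0.004 * m) + rng.normal(0, 2.0, m.size)

pade = lambda m, a, b: a * m / (1 + b * m)
(a_nl, b_nl), _ = curve_fit(pade, m, R, p0=[1.0, 0.01])

# Linearized route: regress 1/R on 1/m; slope = 1/a, intercept = b/a.
slope, intercept = np.polyfit(1 / m, 1 / R, 1)
a_lin = 1 / slope
b_lin = intercept * a_lin
print("nonlinear:", a_nl, b_nl, " linearized:", a_lin, b_lin)
```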
Impact of Missing Data on Person-Model Fit and Person Trait Estimation
ERIC Educational Resources Information Center
Zhang, Bo; Walker, Cindy M.
2008-01-01
The purpose of this research was to examine the effects of missing data on person-model fit and person trait estimation in tests with dichotomous items. Under the missing-completely-at-random framework, four missing data treatment techniques were investigated including pairwise deletion, coding missing responses as incorrect, hotdeck imputation,…
Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS
Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise
2013-01-01
1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.
Rheem, Sungsue; Rheem, Insoo; Oh, Sejong
2017-01-01
Response surface methodology (RSM) is a useful set of statistical techniques for modeling and optimizing responses in research studies of food science. In the analysis of response surface data, a second-order polynomial regression model is usually used. However, sometimes we encounter situations where the fit of the second-order model is poor. If the model fitted to the data has a poor fit, including a lack of fit, the modeling and optimization results might not be accurate. In such a case, using a fullest balanced model, which has no lack of fit, can fix this problem, enhancing the accuracy of the response surface modeling and optimization. This article presents how to develop and use such a model for better modeling and optimization of the response, through an illustrative re-analysis of a dataset in Park et al. (2014), published in the Korean Journal for Food Science of Animal Resources.
NASA Astrophysics Data System (ADS)
Jackson, B. V.; Yu, H. S.; Hick, P. P.; Buffington, A.; Odstrcil, D.; Kim, T. K.; Pogorelov, N. V.; Tokumaru, M.; Bisi, M. M.; Kim, J.; Yun, J.
2017-12-01
The University of California, San Diego has developed an iterative remote-sensing time-dependent three-dimensional (3-D) reconstruction technique which provides volumetric maps of density, velocity, and magnetic field. We have applied this technique in near real time for over 15 years with a kinematic model approximation to fit data from ground-based interplanetary scintillation (IPS) observations. Our modeling concept extends volumetric data from an inner boundary placed above the Alfvén surface out to the inner heliosphere. We now use this technique to drive 3-D MHD models at their inner boundary and generate output 3-D data files that are fit to remotely-sensed observations (in this case IPS observations), and iterated. These analyses are also iteratively fit to in-situ spacecraft measurements near Earth. To facilitate this process, we have developed a traceback from input 3-D MHD volumes to yield an updated boundary in density, temperature, and velocity, which also includes magnetic-field components. Here we will show examples of this analysis using the ENLIL 3D-MHD and the University of Alabama Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) heliospheric codes. These examples help refine poorly-known 3-D MHD variables (i.e., density, temperature), and parameters (gamma) by fitting heliospheric remotely-sensed data between the region near the solar surface and in-situ measurements near Earth.
Molitor, John
2012-03-01
Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.
ERIC Educational Resources Information Center
Shaw, Susan M.; Kemeny, Lidia
1989-01-01
Looked at techniques for promoting fitness participation among adolescent girls, in particular those which emphasize the slim ideal. Relative effectiveness of posters using different models (slim, average, overweight) and different messages (slimness, activity, health) was tested using 627 female high school students. Found slim model to be most…
A best-fit model for concept vectors in biomedical research grants.
Johnson, Calvin; Lau, William; Bhandari, Archna; Hays, Timothy
2008-11-06
The Research, Condition, and Disease Categorization (RCDC) project was created to standardize budget reporting by research topic. Text mining techniques have been implemented to classify NIH grant applications into proper research and disease categories. A best-fit model is shown to achieve classification performance rivaling that of concept vectors produced by human experts.
Behavioral Modeling and Characterization of Nonlinear Operation in RF and Microwave Systems
2005-01-01
…the model further reinforces the intuition gained by employing this modeling technique. …was used to extract the power series coefficients, 21 dBm. This further reinforces the conclusion that the nonlinear coefficients should be extracted …are becoming important. The fit of the odd-ordered model reinforces this hypothesis, since the phase component of the fit roughly splits the…
NASA Astrophysics Data System (ADS)
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes and react to them. In this study, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices. The MARS model, as a nonparametric method, is an adaptive method for regression that suits problems with high dimensions and several variables. Smoothing splines, the basis of the semi-parametric technique, is a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock price with both approaches. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
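The semi-parametric splines side of the study can be sketched with a univariate smoothing spline on a noisy price-like series; the smoothing parameter s trades fidelity against roughness. A MARS fit would require a third-party package such as py-earth, so it is not shown. All data below are simulated.

```python
# Smoothing-spline trend estimate for a simulated daily price series.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(7)
t = np.arange(250, dtype=float)                        # trading days
price = 100 + 0.05 * t + 5 * np.sin(t / 20) + rng.normal(0, 1.0, t.size)

# s roughly equal to n * noise_variance is a common starting point.
spl = UnivariateSpline(t, price, k=3, s=t.size * 1.0)
trend = spl(t)
print("last fitted value:", trend[-1])
```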
Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.
2015-01-01
Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
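A hedged sketch of comparing the standard gamma variate with a stretched-exponential variant and selecting by an information criterion, as the study does. The functional forms follow the common gamma-variate convention and the breath-test-like data are simulated, so all parameter values are illustrative.

```python
# Fit gamma variate and stretched-exponential variants; compare by AIC.
import numpy as np
from scipy.optimize import curve_fit

gamma_var = lambda t, A, b, c: A * t**b * np.exp(-t / c)
stretched = lambda t, A, b, c, d: A * t**b * np.exp(-(t / c)**d)

t = np.linspace(1, 240, 120)                           # minutes
rng = np.random.default_rng(8)
y = stretched(t, 0.05, 1.2, 40.0, 0.8) + 0.001 * rng.standard_normal(t.size)

def aic(fn, p0):
    # Positivity bounds keep the stretched exponent well defined.
    p, _ = curve_fit(fn, t, y, p0=p0, bounds=(1e-6, np.inf))
    rss = np.sum((y - fn(t, *p)) ** 2)
    return t.size * np.log(rss / t.size) + 2 * len(p)

print("gamma variate AIC:", aic(gamma_var, [0.05, 1.0, 50.0]))
print("stretched AIC:   ", aic(stretched, [0.05, 1.0, 50.0, 1.0]))
```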
Genome-wide heterogeneity of nucleotide substitution model fit.
Arbiza, Leonardo; Patricio, Mateus; Dopazo, Hernán; Posada, David
2011-01-01
At a genomic scale, the patterns that have shaped molecular evolution are believed to be largely heterogeneous. Consequently, comparative analyses should use appropriate probabilistic substitution models that capture the main features under which different genomic regions have evolved. While efforts have concentrated on the development and understanding of model selection techniques, no descriptions of overall relative substitution model fit at the genome level have been reported. Here, we provide a characterization of best-fit substitution models across three genomic data sets including coding regions from mammals, vertebrates, and Drosophila (24,000 alignments). According to the Akaike Information Criterion (AIC), 82 of 88 models considered were selected as best-fit models on at least one occasion, although with very different frequencies. Most parameter estimates also varied broadly among genes. Patterns found for vertebrates and Drosophila were quite similar and often more complex than those found in mammals. Phylogenetic trees derived from models in the 95% confidence interval set showed much less variance and were significantly closer to the tree estimated under the best-fit model than trees derived from models outside this interval. Although alternative criteria selected simpler models than the AIC, they suggested similar patterns. Altogether, our results show that at a genomic scale, different gene alignments for the same set of taxa are best explained by a large variety of substitution models and that model choice has implications for different parameter estimates, including the inferred phylogenetic trees. After taking into account the differences related to sample size, our results suggest a noticeable diversity in the underlying evolutionary process. Overall, we conclude that the use of model selection techniques is important for obtaining consistent phylogenetic estimates from real data at a genomic scale.
Anan, Mohammad Tarek M.; Al-Saadi, Mohannad H.
2015-01-01
Objective: The aim of this study was to compare the fit accuracies of metal partial removable dental prosthesis (PRDP) frameworks fabricated by the traditional technique (TT) or the light-curing modeling material technique (LCMT). Materials and methods: A metal model of a Kennedy class III modification 1 mandibular dental arch with two edentulous spaces of different spans, short and long, was used for the study. Thirty identical working casts were used to produce 15 PRDP frameworks each by TT and by LCMT. Every framework was transferred to a metal master cast to measure the gap between the metal base of the framework and the crest of the alveolar ridge of the cast. Gaps were measured at three points on each side by a USB digital intraoral camera at ×16.5 magnification. Images were transferred to a graphics editing program. A single examiner performed all measurements. The two-tailed t-test was performed at the 5% significance level. Results: The mean gap value was significantly smaller in the LCMT group compared to the TT group. The mean value of the short edentulous span was significantly smaller than that of the long edentulous span in the LCMT group, whereas the opposite result was obtained in the TT group. Conclusion: Within the limitations of this study, it can be concluded that the fit of the LCMT-fabricated frameworks was better than the fit of the TT-fabricated frameworks. The framework fit can differ according to the span of the edentate ridge and the fabrication technique for the metal framework. PMID:26236129
Bayesian component separation: The Planck experience
NASA Astrophysics Data System (ADS)
Wehus, Ingunn Kathrine; Eriksen, Hans Kristian
2018-05-01
Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.
Unterhofer, Claudia; Wipplinger, Christoph; Verius, Michael; Recheis, Wolfgang; Thomé, Claudius; Ortler, Martin
Reconstruction of large cranial defects after craniectomy can be accomplished with free-hand poly-methyl-methacrylate (PMMA) or industrially manufactured implants. The free-hand technique often does not achieve satisfactory cosmetic results but is inexpensive. In an attempt to combine the accuracy of specifically manufactured implants with the low cost of PMMA, 46 consecutive patients with large skull defects after trauma or infection were retrospectively analyzed. The defects were reconstructed using computer-aided design/computer-aided manufacturing (CAD/CAM) techniques. The computer file was imported into a rapid prototyping (RP) machine to produce an acrylonitrile-butadiene-styrene (ABS) model of the patient's bony head. The gas-sterilized model was used as a template for intraoperative modeling of the PMMA cranioplasty. Thus, the CAD/CAM technique was used to generate not the PMMA implant itself but a model of the patient's head, on which a well-fitting implant could easily be formed. Cosmetic outcome was rated on a six-tiered scale by the patients after a minimum follow-up of three months. The mean size of the defect was 74.36 cm². The implants fitted well in all patients. Seven patients had a postoperative complication and underwent reoperation. The mean follow-up period was 41 months (range 2-91 months). Results were excellent in 42 patients, good in three and not satisfactory in one. Costs per implant were approximately 550 Euros. PMMA implants fabricated in-house by direct molding on a bio-model of the patient's bony head are easily produced, fit properly and are inexpensive compared with cranial implants fabricated with other RP or milling techniques. Copyright © 2017 Polish Neurological Society. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.
Nonparametric Model of Smooth Muscle Force Production During Electrical Stimulation.
Cole, Marc; Eikenberry, Steffen; Kato, Takahide; Sandler, Roman A; Yamashiro, Stanley M; Marmarelis, Vasilis Z
2017-03-01
A nonparametric model of smooth muscle tension response to electrical stimulation was estimated using the Laguerre expansion technique of nonlinear system kernel estimation. The experimental data consisted of force responses of smooth muscle to energy-matched alternating single-pulse and burst current stimuli. The burst stimuli led to at least a 10-fold increase in peak force in smooth muscle from Mytilus edulis, despite the constant energy constraint. A linear model did not fit the data. However, a second-order model fit the data accurately, so higher-order models were not required. Results showed that smooth muscle force response is not linearly related to the stimulation power.
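For readers unfamiliar with the Laguerre expansion technique, a minimal Python sketch of the idea follows: the stimulus is passed through a discrete Laguerre filter bank, and a second-order model (filter outputs plus their pairwise products) is then estimated by ordinary least squares, so the fit is linear in its coefficients. The recursion is the standard one from the kernel-estimation literature; the alpha and L values are illustrative, and this is not the authors' code:

```python
import numpy as np

def laguerre_filter_bank(x, alpha=0.6, L=5):
    """Convolve input x with L discrete Laguerre functions via the
    standard recursive filter network."""
    n = len(x)
    v = np.zeros((L, n))
    sa = np.sqrt(alpha)
    for t in range(n):
        v[0, t] = sa * (v[0, t - 1] if t else 0.0) + np.sqrt(1 - alpha) * x[t]
        for j in range(1, L):
            prev_self = v[j, t - 1] if t else 0.0
            prev_lower = v[j - 1, t - 1] if t else 0.0
            v[j, t] = sa * prev_self + sa * v[j - 1, t] - prev_lower
    return v

def fit_second_order(x, y, alpha=0.6, L=5):
    """Second-order Volterra-Laguerre model: least squares on the filter
    outputs and their pairwise products (linear-in-parameters)."""
    v = laguerre_filter_bank(np.asarray(x, float), alpha, L)
    cols = [np.ones(len(x))] + [v[j] for j in range(L)]
    cols += [v[i] * v[j] for i in range(L) for j in range(i, L)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef  # coefficients and fitted force response
```

Comparing the residuals of the first-order (linear) columns alone against the full second-order design reproduces the kind of model-order comparison reported above.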
Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria
2017-10-01
Background and objective Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods An R program that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output on model fit indices into automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate the fit-criteria assessment plot's utility. Results The fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. The fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions The fit-criteria assessment plot is an exploratory visualisation tool that can be employed to assist decisions in the initial and decisive phases of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster its adequate use.
Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H.
2016-01-04
Recent studies, and most of their predecessors, use tide gage data to quantify sea-level (SL) acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, the two techniques based on sliding a regression window through the time series proved more robust than the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique for determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future SLR resulting from anticipated climate change.
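A minimal sketch of the sliding-window idea, as opposed to a single quadratic fit over the whole record, might look as follows in Python; the window width and minimum-point threshold are assumptions for illustration, not the study's settings:

```python
import numpy as np

def sliding_acceleration(t, sl, window=30.0):
    """Estimate time-varying acceleration a(t) = 2*c2 by fitting a
    quadratic within a sliding window (width in years) centred on each
    time stamp of the tide gage record."""
    t = np.asarray(t, float)
    sl = np.asarray(sl, float)
    centers, accel = [], []
    for tc in t:
        mask = np.abs(t - tc) <= window / 2
        if mask.sum() < 10:            # require enough points in the window
            continue
        c2, c1, c0 = np.polyfit(t[mask] - tc, sl[mask], 2)
        centers.append(tc)
        accel.append(2 * c2)           # second derivative of the quadratic
    return np.array(centers), np.array(accel)
```

A single-fit estimate is the special case where the window spans the entire record, which is why it cannot resolve temporal variation in the acceleration.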
Analytical methods in multivariate highway safety exposure data estimation
DOT National Transportation Integrated Search
1984-01-01
Three general analytical techniques which may be of use in extending, enhancing, and combining highway accident exposure data are discussed. The techniques are log-linear modelling, iterative proportional fitting and the expectation maximization...
Efe, Turgay; Füglein, Alexander; Heyse, Thomas J; Stein, Thomas; Timmesfeld, Nina; Fuchs-Winkelmann, Susanne; Schmitt, Jan; Paletta, Jürgen R J; Schofer, Markus D
2012-02-01
Adequate graft fixation over a certain time period is necessary for successful cartilage repair and permanent integration of the graft into the surrounding tissue. The aim of the present study was to test the primary stability of a new cell-free collagen gel plug (CaReS(®)-1S) with two different graft fixation techniques over a simulated early postoperative period. Isolated chondral lesions (11 mm diameter by 6 mm deep) down to the subchondral bone plate were created on the medial femoral condyle in 40 porcine knee specimens. The collagen scaffolds were fixed in 20 knees each by press-fit only or by press-fit + fibrin glue. Each knee was then put through 2,000 cycles in an ex vivo continuous passive motion model. Before and after the 2,000 cycles, standardized digital pictures of the grafts were taken. The area of worn surface as a percentage of the total collagen plug surface was evaluated using image analysis software. No total delamination of the scaffolds leaving an empty defect site was recorded in any of the knees. The two fixation techniques showed no significant difference in worn surface area after 2,000 cycles (P = n.s.). This study reveals that both the press-fit only and the press-fit + fibrin glue techniques provide similar, adequate stability of a type I collagen plug in the described porcine model. In the clinical setting, this fact may be particularly important for the implantation of arthroscopic grafts.
Variable Complexity Optimization of Composite Structures
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
2002-01-01
The use of several levels of modeling in design has been dubbed variable complexity modeling. The work under the grant focused on developing variable complexity modeling strategies with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and design against uncertainty using response surface techniques.
Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods
NASA Astrophysics Data System (ADS)
Rogers, Adam; Safi-Harb, Samar; Fiege, Jason
2015-08-01
The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.
Effects of replicative fitness on competing HIV strains.
Chirove, Faraimunashe; Lungu, Edward M
2013-07-01
We develop an n-strain model to show the effects of replicative fitness of competing viral strains exerting selective density-dependent infective pressure on each other. A two-strain model is used to illustrate the results. A perturbation technique and numerical simulations were used to establish the existence and stability of steady states. More than one infected steady state, governed by the replicative fitness, resulted from the model, exhibiting either strain replacement or co-infection. We found that the presence of two or more HIV strains could result in a disease-free state that, in general, is not globally stable. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.
2013-02-01
We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage that they have better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need of human intervention and biasing. The high level of automation makes it an ideal tool to use on larger sets of observed data.
Markgraf, Rainer; Deutschinoff, Gerd; Pientka, Ludger; Scholten, Theo; Lorenz, Cristoph
2001-01-01
Background: Mortality predictions calculated using scoring scales are often not accurate in populations other than those in which the scales were developed because of differences in case-mix. The present study investigates the effect of first-level customization, using a logistic regression technique, on discrimination and calibration of the Acute Physiology and Chronic Health Evaluation (APACHE) II and III scales. Method: Probabilities of hospital death for patients were estimated by applying APACHE II and III and comparing these with observed outcomes. Using the split sample technique, a customized model to predict outcome was developed by logistic regression. The overall goodness-of-fit of the original and the customized models was assessed. Results: Of 3383 consecutive intensive care unit (ICU) admissions over 3 years, 2795 patients could be analyzed, and were split randomly into development and validation samples. The discriminative powers of APACHE II and III were unchanged by customization (areas under the receiver operating characteristic [ROC] curve 0.82 and 0.85, respectively). Hosmer-Lemeshow goodness-of-fit tests showed good calibration for APACHE II, but insufficient calibration for APACHE III. Customization improved calibration for both models, with a good fit for APACHE III as well. However, fit was different for various subgroups. Conclusions: The overall goodness-of-fit of APACHE III mortality prediction was improved significantly by customization, but uniformity of fit in different subgroups was not achieved. Therefore, application of the customized model provides no advantage, because differences in case-mix still limit comparisons of quality of care. PMID:11178223
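First-level customization of a severity score is commonly implemented by regressing the observed outcome on the logit of the original predicted risk, refitting only an intercept and slope on the local sample. A minimal sketch under that assumption follows; p_apache, died, and the dev/val index arrays are hypothetical, and this is not the study's code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def customize(original_prob, died):
    """First-level customization: refit the intercept and slope of the
    original model's logit (log-odds) against observed hospital deaths."""
    logit = np.log(original_prob / (1 - original_prob)).reshape(-1, 1)
    model = LogisticRegression()
    model.fit(logit, died)
    return model

# Split-sample use, as in the study: fit on a development sample, then
# assess discrimination (ROC area) and calibration on the validation sample.
# model = customize(p_apache[dev], died[dev])                 # hypothetical arrays
# logit_val = np.log(p_apache[val] / (1 - p_apache[val])).reshape(-1, 1)
# p_custom = model.predict_proba(logit_val)[:, 1]
```

Because only two coefficients are refitted, this recalibrates the overall mortality level and slope but cannot, by itself, repair non-uniform fit across case-mix subgroups, which matches the study's conclusion.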
Ivanov, R; Marin, E; Villa, J; Gonzalez, E; Rodríguez, C I; Olvera, J E
2015-06-01
This paper describes an alternative methodology to determine the thermal effusivity of a liquid sample using the recently proposed electropyroelectric technique, without fitting the experimental data with a theoretical model and without having to know the pyroelectric sensor-related parameters, as in most previously reported approaches. The method is not absolute, because a reference liquid with known thermal properties is needed. Experiments have been performed that demonstrate the high reliability and accuracy of the method, with measurement uncertainties smaller than 3%.
Kinematic modelling of disc galaxies using graphics processing units
NASA Astrophysics Data System (ADS)
Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.
2016-01-01
With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ~100 when compared to a single-threaded CPU, and up to a factor of ~10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Galactic Foreground Emission
NASA Technical Reports Server (NTRS)
Gold, B.; Bennett, C.L.; Larson, D.; Hill, R.S.; Odegard, N.; Weiland, J.L.; Hinshaw, G.; Kogut, A.; Wollack, E.; Page, L.;
2008-01-01
We present a new estimate of foreground emission in the WMAP data, using a Markov chain Monte Carlo (MCMC) method. The new technique delivers maps of each foreground component for a variety of foreground models, error estimates of the uncertainty of each foreground component, and provides an overall goodness-of-fit measurement. The resulting foreground maps are in broad agreement with those from previous techniques used both within the collaboration and by other authors. We find that for WMAP data, a simple model with power-law synchrotron, free-free, and thermal dust components fits 90% of the sky with a reduced χ²_ν of 1.14. However, the model does not work well inside the Galactic plane. The addition of either synchrotron steepening or a modified spinning dust model improves the fit. This component may account for up to 14% of the total flux at Ka-band (33 GHz). We find no evidence for foreground contamination of the CMB temperature map in the 85% of the sky used for cosmological analysis.
Stochastic approach to data analysis in fluorescence correlation spectroscopy.
Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo
2006-09-21
Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data are conventionally fitted using standard local search techniques, for example, the Marquardt-Levenberg (ML) algorithm. A prerequisite for this category of algorithms is sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise fitting artifacts result. For known fit models and with user experience of the behavior of the fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, a procedure is needed that treats FCS data fitting as a black box and generates reliable fit parameters with accuracy for the chosen model. We present a computational approach to analyzing FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and performs the fitting by searching for solutions through global sampling. It is flexible and at the same time computationally fast for multiparameter evaluations. We present a performance study of PGSL for two-component fits with a triplet state. The statistical study and the goodness-of-fit criterion for PGSL are also presented. The robustness of PGSL for parameter estimation on noisy experimental data is also verified. We further extend the scope of PGSL by a hybrid analysis wherein the output of PGSL is fed as initial guesses to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
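PGSL itself is not sketched here, but the global-sampling-then-local-refinement idea the paper exploits can be illustrated with standard SciPy optimizers on the usual two-component 3D diffusion FCS model with a triplet term. The parameter bounds and the fixed structure parameter are assumptions, not values from the paper:

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

KAPPA = 5.0  # structure parameter, assumed known from calibration

def g_model(tau, N, f1, tau1, tau2, T, tauT):
    """Two-component 3D diffusion FCS autocorrelation with triplet term."""
    def comp(td):
        return 1.0 / ((1 + tau / td) * np.sqrt(1 + tau / (KAPPA**2 * td)))
    triplet = 1 + (T / (1 - T)) * np.exp(-tau / tauT)
    return (triplet / N) * (f1 * comp(tau1) + (1 - f1) * comp(tau2))

def fit_fcs(tau, g):
    # Parameters: N, f1, tau1 (s), tau2 (s), triplet fraction T, tauT (s)
    bounds = [(0.1, 100), (0, 1), (1e-6, 1e-2), (1e-4, 1), (0, 0.5), (1e-7, 1e-5)]
    # Global sampling stage (a stand-in for PGSL): no initial guess required.
    res = differential_evolution(
        lambda p: np.sum((g_model(tau, *p) - g) ** 2), bounds, seed=0)
    # Local ML refinement stage, as in the paper's hybrid analysis.
    out = least_squares(lambda p: g_model(tau, *p) - g, res.x, method="lm")
    return out.x
```

The global stage removes the need for user-supplied starting values; the local stage then polishes the solution, mirroring the hybrid PGSL-plus-ML scheme described above.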
TEMPy: a Python library for assessment of three-dimensional electron microscopy density fits.
Farabella, Irene; Vasishtan, Daven; Joseph, Agnel Praveen; Pandurangan, Arun Prasad; Sahota, Harpal; Topf, Maya
2015-08-01
Three-dimensional electron microscopy is currently one of the most promising techniques used to study macromolecular assemblies. Rigid and flexible fitting of atomic models into density maps is often essential to gain further insights into the assemblies they represent. Currently, tools that facilitate the assessment of fitted atomic models and maps are needed. TEMPy (template and electron microscopy comparison using Python) is a toolkit designed for this purpose. The library includes a set of methods to assess density fits in intermediate-to-low resolution maps, both globally and locally. It also provides procedures for single-fit assessment, ensemble generation of fits, clustering, and multiple and consensus scoring, as well as plots and output files for visualization purposes to help the user in analysing rigid and flexible fits. The modular nature of TEMPy helps the integration of scoring and assessment of fits into large pipelines, making it a tool suitable for both novice and expert structural biologists.
Sensitivity of Chemical Shift-Encoded Fat Quantification to Calibration of Fat MR Spectrum
Wang, Xiaoke; Hernando, Diego; Reeder, Scott B.
2015-01-01
Purpose To evaluate the impact of different fat spectral models on proton density fat-fraction (PDFF) quantification using chemical shift-encoded (CSE) MRI. Material and Methods Simulations and in vivo imaging were performed. In a simulation study, spectral models of fat were compared pairwise. Magnitude fitting and mixed fitting were compared over a range of echo times and fat fractions. In vivo acquisitions from 41 patients were reconstructed using 7 published spectral models of fat. T2-corrected STEAM-MRS was used as the reference. Results Simulations demonstrate that imperfectly calibrated spectral models of fat result in biases that depend on echo times and fat fraction. Mixed fitting is more robust against this bias than magnitude fitting. Multi-peak spectral models showed much smaller differences among themselves than when compared with the single-peak spectral model. In vivo studies show that all multi-peak models agree better with the reference standard (for mixed fitting, slopes ranged from 0.967 to 1.045 using linear regression) than the single-peak model (for mixed fitting, slope = 0.76). Conclusion It is essential to use a multi-peak fat model for accurate quantification of fat with CSE-MRI. Further, fat quantification techniques using multi-peak fat models are comparable, and no specific choice of spectral model was shown to be superior to the rest. PMID:25845713
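The multi-peak signal model at the heart of such techniques can be written as S(TE) = (W + F Σp αp exp(i2πfp TE)) exp((-R2* + i2πψ)TE), with PDFF = F/(W + F); magnitude fitting works with |S(TE)|. A minimal sketch follows, using approximate literature values for a six-peak fat spectrum; the amplitudes, shifts, and field strength are illustrative, not the specific calibrations compared in the paper:

```python
import numpy as np

GYRO = 42.577e6          # Hz/T, proton gyromagnetic ratio
B0 = 3.0                 # T, assumed field strength
# Illustrative six-peak fat spectrum: relative amplitudes and shifts (ppm vs water)
ALPHA = np.array([0.087, 0.693, 0.128, 0.004, 0.039, 0.048])
PPM = np.array([-3.80, -3.40, -2.60, -1.94, -0.50, 0.60])

def signal(te, water, fat, r2star, field_hz):
    """Complex CSE-MRI signal for a multi-peak fat model.
    te: 1-D array of echo times in seconds; field_hz: off-resonance psi."""
    f_p = PPM * 1e-6 * GYRO * B0                         # chemical shifts in Hz
    fat_phasor = np.sum(ALPHA * np.exp(2j * np.pi * f_p * te[:, None]), axis=1)
    return (water + fat * fat_phasor) * np.exp((-r2star + 2j * np.pi * field_hz) * te)
```

Replacing ALPHA/PPM with a single peak collapses this to the single-peak model, which is the comparison driving the biases reported above.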
A CAD System for Evaluating Footwear Fit
NASA Astrophysics Data System (ADS)
Savadkoohi, Bita Ture; de Amicis, Raffaele
With the great growth in footwear demand, the footwear manufacturing industry, to achieve commercial success, must be able to provide footwear that fulfills consumers' requirements better than its competitors. Accurate fit is an important factor in shoe comfort and functionality. Footwear fitting has long relied on manual measurement, but the development of 3D acquisition devices and the advent of powerful 3D visualization and modeling techniques for automatically analyzing, searching and interpreting models have now made automatic determination of different foot dimensions feasible. In this paper, we propose an approach for finding footwear fit within a shoe last database. We first properly align the 3D models using "Weighted" Principal Component Analysis (WPCA). After solving the alignment problem, we use an efficient algorithm for cutting the 3D model in order to find the footwear fit from the shoe last database.
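A minimal sketch of weighted PCA alignment follows, under the assumption that per-point weights emphasize stable regions of the foot scan; this is a generic WPCA, not necessarily the authors' exact formulation:

```python
import numpy as np

def wpca_align(points, weights):
    """Align a 3D point cloud (n x 3) to its weighted principal axes.
    Weights let anatomically stable regions dominate the alignment."""
    w = weights / weights.sum()
    centroid = (w[:, None] * points).sum(axis=0)
    centered = points - centroid
    cov = (w[:, None] * centered).T @ centered   # weighted covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    axes = eigvecs[:, ::-1]                      # principal axes, largest first
    return centered @ axes                       # coordinates in the PCA frame
```

Once every scan and every shoe last is expressed in its own principal-axis frame, cross-sectional cuts become directly comparable across the database.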
NASA Astrophysics Data System (ADS)
Pinheiro da Silva, L.; Auvergne, M.; Toublanc, D.; Rowe, J.; Kuschnig, R.; Matthews, J.
2006-06-01
Context: Fitting photometry algorithms can be very effective provided that an accurate model of the instrumental point spread function (PSF) is available. When high-precision time-resolved photometry is required, however, the use of point-source star images as empirical PSF models can be unsatisfactory, due to the limits of their spatial resolution. Theoretically derived models, on the other hand, are limited by the unavoidable assumption of simplifying hypotheses, while the use of analytical approximations is restricted to regularly shaped PSFs. Aims: This work investigates an innovative technique for space-based fitting photometry, based on the reconstruction of an empirical but properly resolved PSF. The aim is the exploitation of arbitrary star images, including those produced under intentional defocus. The cases of both MOST and COROT, the first space telescopes dedicated to time-resolved stellar photometry, are considered in the evaluation of the effectiveness and performance of the proposed methodology. Methods: PSF reconstruction is based on a set of star images, periodically acquired and presenting relative subpixel displacements due to motion of the acquisition system, in this case the jitter of the satellite attitude. Higher resolution is achieved through the solution of the inverse problem. The approach can be regarded as a special application of super-resolution techniques, though a specialised procedure is proposed to better address the specificities of the PSF determination problem. The application of such a model to fitting photometry is illustrated by numerical simulations for COROT and on a complete set of observations from MOST. Results: We verify that, in both scenarios, significantly better resolved PSFs can be estimated, leading to corresponding improvements in photometric results. For COROT, indeed, subpixel reconstruction enabled the successful use of fitting algorithms despite its rather complex PSF profile, which could hardly be modeled otherwise. For MOST, whose direct-imaging PSF is closer to ordinary, comparisons to other models and photometry techniques were carried out and confirmed the potential of PSF reconstruction in real observational conditions.
Örtorp, Anders; Jönsson, David; Mouhsen, Alaa; Vult von Steyern, Per
2011-04-01
This study sought to evaluate and compare the marginal and internal fit in vitro of three-unit FDPs in Co-Cr made using four fabrication techniques, and to determine in which area the largest misfit is present. An epoxy resin master model was produced. The impression was first made with silicone, and master and working models were then produced. A total of 32 three-unit Co-Cr FDPs were fabricated with four different production techniques: conventional lost-wax method (LW), milled wax with lost-wax method (MW), milled Co-Cr (MC), and direct laser metal sintering (DLMS). Each of the four groups consisted of eight FDPs (test groups). The FDPs were cemented on their casts and sectioned in a standardised manner. The cement film thickness of the marginal and internal gaps was measured in a stereomicroscope, digital photos were taken at 12× magnification and then analyzed using measurement software. Statistical analyses were performed with one-way ANOVA and Tukey's test. Best fit, based on the means (SDs) in μm for all measurement points, was found in the DLMS group, 84 (60), followed by MW, 117 (89), LW, 133 (89), and MC, 166 (135). Significant differences were present between MC and DLMS (p < 0.05). The regression analyses showed differences within the parameters production technique, tooth size, position and measurement point (p < 0.05). Best fit was found in the DLMS group followed by MW, LW and MC. In all four groups, best fit in both abutments was along the axial walls and in the deepest part of the chamfer preparation. The greatest misfit was present occlusally in all specimens. Copyright © 2010 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Shot model parameters for Cygnus X-1 through phase portrait fitting
NASA Technical Reports Server (NTRS)
Lochner, James C.; Swank, J. H.; Szymkowiak, A. E.
1991-01-01
Shot models for systems having an approximately 1/f power density spectrum are developed by utilizing a distribution of shot durations. Parameters of the distribution are determined by fitting the power spectrum either with analytic forms for the spectrum of a shot model with a given shot profile, or with the spectrum derived from numerical realizations of trial shot models. The shot fraction is specified by fitting the phase portrait, which is a plot of intensity at a given time versus intensity at a delayed time and in principle is sensitive to different shot profiles. These techniques have been extensively applied to the X-ray variability of Cygnus X-1, using HEAO 1 A-2 and an Exosat ME observation. The power spectra suggest models having characteristic shot durations lasting from milliseconds to a few seconds, while the phase portrait fits give shot fractions of about 50 percent. Best fits to the portraits are obtained if the amplitude of the shot is a power-law function of the duration of the shot. These fits prefer shots having a symmetric exponential rise and decay. Results are interpreted in terms of a distribution of magnetic flares in the accretion disk.
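Constructing the phase portrait itself is straightforward; a minimal sketch follows, where the delay and bin counts are arbitrary choices rather than the values used in the study. Model portraits generated from trial shot realizations are then compared against the observed one to constrain the shot fraction:

```python
import numpy as np

def phase_portrait(intensity, delay_bins=10, nbins=32):
    """2-D histogram of intensity I(t) versus delayed intensity I(t + delay),
    the 'phase portrait' used to constrain shot fraction and profile."""
    x = intensity[:-delay_bins]
    y = intensity[delay_bins:]
    H, xedges, yedges = np.histogram2d(x, y, bins=nbins)
    return H / H.sum(), xedges, yedges   # normalized portrait for chi-square fitting
```

Because the portrait encodes the joint distribution of the light curve with its delayed copy, asymmetric shot profiles leave a signature that the power spectrum alone cannot capture.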
KNGEOID14: A national hybrid geoid model in Korea
NASA Astrophysics Data System (ADS)
Kang, S.; Sung, Y. M.; KIM, H.; Kim, Y. S.
2016-12-01
This study briefly describes the construction of a national hybrid geoid model in Korea, KNGEOID14, which can be used as an accurate vertical datum in and around Korea. The hybrid geoid model is determined by fitting the gravimetric geoid to the geometric geoid undulations from GNSS/Leveling data, which represent the local vertical datum. For developing the gravimetric geoid model, we determined all frequency parts (long-, middle- and short-wavelength) of the gravimetric geoid using all available data with an optimal remove-restore technique based on the EGM2008 reference surface. In the remove-restore technique, the EGM2008 model to degree 360 and the RTM reduction method were used to calculate the long- and short-wavelength parts of the gravimetric geoid, respectively. The gravity data compiled for modeling the middle-wavelength part, the residual geoid, comprised 8,866 points on land and ocean areas. DEM data gridded at 100 m × 100 m were used for the short-wavelength part, the topographic effect on the geoid generated by the RTM method. The accuracy of the gravimetric geoid model, evaluated by comparison with GNSS/Leveling data, was about -0.362 m ± 0.055 m. Finally, we developed the national hybrid geoid model in Korea, KNGEOID14, by correcting the gravimetric geoid with a correction term obtained by fitting about 1,200 GNSS/Leveling data points on Korean benchmarks. The correction term is modeled using the difference between GNSS/Leveling-derived geoidal heights and gravimetric geoidal heights. The stochastic model used in the calculation of the correction term is the LSC technique based on a second-order Markov covariance function. The post-fit error (mean and std. dev.) of the KNGEOID14 model was evaluated as 0.001 m ± 0.033 m. Based on these results, accurate orthometric heights at any point in Korea can be easily and precisely calculated by combining the geoidal height from KNGEOID14 with the ellipsoidal height from GPS observation.
NASA Astrophysics Data System (ADS)
Lukman, Iing; Ibrahim, Noor A.; Daud, Isa B.; Maarof, Fauziah; Hassan, Mohd N.
2002-03-01
Survival analysis algorithms are often applied in the data mining process. Cox regression is one of the survival analysis tools that has been used in many areas, and it can be used to analyze the failure times of crashed aircraft. Another survival analysis tool is competing risks, where more than one cause of failure acts simultaneously. Lunn and McNeil analyzed competing risks in the survival model using Cox regression with censored data. The modified Lunn-McNeil technique is a simplification of the Lunn-McNeil technique. The Kalbfleisch-Prentice technique involves fitting models separately for each type of failure, treating other failure types as censored. To compare the two techniques (the modified Lunn-McNeil and Kalbfleisch-Prentice), a simulation study was performed. Samples with various sizes and censoring percentages were generated and fitted using both techniques. The study was conducted by comparing the inference of models using root mean square error (RMSE), power tests, and Schoenfeld residual analysis. The power tests in this study were the likelihood ratio test, Rao score test, and Wald statistic. The Schoenfeld residual analysis was conducted to check the proportionality of the model through its covariates. The estimated parameters were computed for the cause-specific hazard situation. Results showed that the modified Lunn-McNeil technique was better than the Kalbfleisch-Prentice technique based on the RMSE measurement and Schoenfeld residual analysis. However, the Kalbfleisch-Prentice technique was better than the modified Lunn-McNeil technique based on the power test measurement.
NASA Astrophysics Data System (ADS)
Horvath, Sarah; Myers, Sam; Ahlers, Johnathon; Barnes, Jason W.
2017-10-01
Stellar seismic activity produces variations in brightness that introduce oscillations into transit light curves, which can create challenges for traditional fitting models. These oscillations disrupt baseline stellar flux values and potentially mask transits. We develop a model that removes these oscillations from transit light curves by minimizing the significance of each oscillation in frequency space. By removing stellar variability, we prepare each light curve for traditional fitting techniques. We apply our model to the δ Scuti star KOI-976 and demonstrate that our variability subtraction routine successfully allows for measuring bulk system characteristics using traditional light curve fitting. These results open a new window for characterizing bulk system parameters of planets orbiting seismically active stars.
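A simple stand-in for this kind of frequency-space cleaning is iterative prewhitening: locate the strongest peak in the Fourier amplitude spectrum, fit and subtract a sinusoid at that frequency, and repeat. The sketch below assumes roughly uniform time sampling and a fixed number of removed frequencies; it is not the authors' routine:

```python
import numpy as np

def prewhiten(time, flux, n_freqs=5):
    """Iteratively remove the strongest sinusoidal oscillations from a
    light curve (a simple prewhitening stand-in for the paper's method).
    Assumes roughly uniform sampling so the FFT peak is meaningful."""
    resid = flux - np.mean(flux)
    dt = np.median(np.diff(time))
    for _ in range(n_freqs):
        freqs = np.fft.rfftfreq(len(resid), dt)
        amp = np.abs(np.fft.rfft(resid))
        k = np.argmax(amp[1:]) + 1                 # skip the zero-frequency bin
        f0 = freqs[k]
        # Least-squares fit of a sinusoid at f0: resid ~ a*cos + b*sin
        A = np.column_stack([np.cos(2 * np.pi * f0 * time),
                             np.sin(2 * np.pi * f0 * time)])
        coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
        resid = resid - A @ coef
    return resid + np.mean(flux)
```

With the pulsation signal suppressed, the flat baseline needed by conventional transit-fitting codes is restored.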
Model-checking techniques based on cumulative residuals.
Lin, D Y; Wei, L J; Ying, Z
2002-03-01
Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
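As a rough illustration of the cumulative-residual idea, the sketch below orders residuals by a covariate, takes the supremum of the cumulative sum, and calibrates it against sign-randomized realizations; this randomization is a simple stand-in for the zero-mean Gaussian process simulation described in the paper:

```python
import numpy as np

def cumres_test(x, residuals, n_sim=1000, seed=0):
    """Cumulative residuals over covariate x; p-value from the supremum
    statistic against sign-randomized realizations (a simplified stand-in
    for the Gaussian process simulation in the paper)."""
    order = np.argsort(x)
    r = residuals[order]
    n = len(r)
    obs = np.max(np.abs(np.cumsum(r))) / np.sqrt(n)
    rng = np.random.default_rng(seed)
    sims = [np.max(np.abs(np.cumsum(r * rng.choice([-1, 1], n)))) / np.sqrt(n)
            for _ in range(n_sim)]
    return obs, np.mean(np.array(sims) >= obs)   # statistic and p-value
```

A systematic drift in the observed cumulative-sum path, large relative to the randomized paths, points to a misspecified functional form for that covariate rather than natural variation.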
Next generation initiation techniques
NASA Technical Reports Server (NTRS)
Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans
1993-01-01
Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those that are presently used routinely in operational-forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. It should be noted that these techniques are the subject of continued research, and their improvement will parallel the development of next generation techniques described by the other speakers. Next generation assimilation techniques are those that are under development but are not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another 'next generation' category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window that is centered on the analysis time. Continuous first-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' techniques. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Under the topic of next-generation assimilation techniques, variational approaches are currently being actively developed. Variational approaches seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. The third kind of next-generation technique involves strategies to initialize convective scale (non-hydrostatic) models.
Jeon, Young-Chan; Jeong, Chang-Mo
2017-01-01
PURPOSE The purpose of this study was to compare the fit of cast gold crowns fabricated from the conventional and the digital impression technique. MATERIALS AND METHODS An artificial tooth in a master model and abutment teeth in ten patients were restored with cast gold crowns fabricated from the digital and the conventional impression technique. The forty silicone replicas were cut in three sections; each section was evaluated at nine points. The measurement was carried out using a measuring microscope and i-Solution image-analysis software. A single examiner performed all measurements. Data from the silicone replicas were analyzed, and all tests were performed with an α-level of 0.05. RESULTS 1. The average gaps of cast gold crowns fabricated from the digital impression technique were significantly larger than those of the conventional impression technique. 2. In marginal and internal axial gaps of cast gold crowns, no statistical differences were found between the two impression techniques. 3. The internal occlusal gaps of cast gold crowns fabricated from the digital impression technique were significantly larger than those of the conventional impression technique. CONCLUSION Both types of prostheses presented clinically acceptable fit. The prostheses fabricated from the digital impression technique showed larger gaps at the occlusal surface. PMID:28243386
Dahl, Bjørn E; Dahl, Jon E; Rønold, Hans J
2018-02-01
Suboptimal adaptation of fixed dental prostheses (FDPs) can lead to technical and biological complications. It is unclear whether the computer-aided design/computer-aided manufacturing (CAD/CAM) technique improves adaptation of FDPs compared with FDPs made using the lost-wax and metal casting technique. Three-unit FDPs were manufactured by CAD/CAM based on a digital impression of a typodont model. The FDPs were made from one of five materials: pre-sintered zirconium dioxide; hot isostatic pressed zirconium dioxide; lithium disilicate glass-ceramic; milled cobalt-chromium; and laser-sintered cobalt-chromium. FDPs made using the lost-wax and metal casting technique were used as the reference. The fit of the FDPs was analysed using the triple-scan method. The fit was evaluated for both single abutments and three-unit FDPs. The average cement space varied between 50 μm and 300 μm. No significant differences in internal fit were observed between the CAD/CAM-manufactured FDPs, and none of the FDPs had cement spaces that were statistically significantly different from those of the reference FDP. For all FDPs, the cement space at a marginal band 0.5-1.0 mm from the preparation margin was less than 100 μm. The milled cobalt-chromium FDP had the closest fit. The cement space of FDPs produced using the CAD/CAM technique was similar to that of FDPs produced using the conventional lost-wax and metal casting technique. © 2017 Eur J Oral Sci.
A fast, model-independent method for cerebral cortical thickness estimation using MRI.
Scott, M L J; Bromiley, P A; Thacker, N A; Hutchinson, C E; Jackson, A
2009-04-01
Several algorithms for measuring the cortical thickness in the human brain from MR image volumes have been described in the literature, the majority of which rely on fitting deformable models to the inner and outer cortical surfaces. However, the constraints applied during the model fitting process in order to enforce spherical topology and to fit the outer cortical surface in narrow sulci, where the cerebrospinal fluid (CSF) channel may be obscured by partial voluming, may introduce bias in some circumstances, and greatly increase the processor time required. In this paper we describe an alternative, voxel based technique that measures the cortical thickness using inversion recovery anatomical MR images. Grey matter, white matter and CSF are identified through segmentation, and edge detection is used to identify the boundaries between these tissues. The cortical thickness is then measured along the local 3D surface normal at every voxel on the inner cortical surface. The method was applied to 119 normal volunteers, and validated through extensive comparisons with published measurements of both cortical thickness and rate of thickness change with age. We conclude that the proposed technique is generally faster than deformable model-based alternatives, and free from the possibility of model bias, but suffers no reduction in accuracy. In particular, it will be applicable in data sets showing severe cortical atrophy, where thinning of the gyri leads to points of high curvature, and so the fitting of deformable models is problematic.
Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method
NASA Technical Reports Server (NTRS)
Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.
2004-01-01
Numerical simulation of flow over complex geometry needs accurate and efficient computational methods. Different techniques are available to handle complex geometry. The unstructured grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications. In atmospheric applications, terrain-fitted single grid techniques have found common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than for a Cartesian grid method. The grid generation can be tedious, and one needs to pay special attention in the numerics to handle skewed cells for conservation purposes. Researchers have long sought alternative methods to ease the effort involved in simulating flow over complex geometry.
Barnett, Lisa M; Morgan, Philip J; van Beurden, Eric; Beard, John R
2008-08-08
The purpose of this paper was to investigate whether perceived sports competence mediates the relationship between childhood motor skill proficiency and subsequent adolescent physical activity and fitness. In 2000, children's motor skill proficiency was assessed as part of a school-based physical activity intervention. In 2006/07, participants were followed up as part of the Physical Activity and Skills Study and completed assessments for perceived sports competence (Physical Self-Perception Profile), physical activity (Adolescent Physical Activity Recall Questionnaire) and cardiorespiratory fitness (Multistage Fitness Test). Structural equation modelling techniques were used to determine whether perceived sports competence mediated between childhood object control skill proficiency (composite score of kick, catch and overhand throw), and subsequent adolescent self-reported time in moderate-to-vigorous physical activity and cardiorespiratory fitness. Of 928 original intervention participants, 481 were located in 28 schools and 276 (57%) were assessed with at least one follow-up measure. Slightly more than half were female (52.4%) with a mean age of 16.4 years (range 14.2 to 18.3 yrs). Relevant assessments were completed by 250 (90.6%) students for the Physical Activity Model and 227 (82.3%) for the Fitness Model. Both hypothesised mediation models had a good fit to the observed data, with the Physical Activity Model accounting for 18% (R2 = 0.18) of physical activity variance and the Fitness Model accounting for 30% (R2 = 0.30) of fitness variance. Sex did not act as a moderator in either model. Developing a high perceived sports competence through object control skill development in childhood is important for both boys and girls in determining adolescent physical activity participation and fitness. Our findings highlight the need for interventions to target and improve the perceived sports competence of youth.
Choi, Jung-Han
2011-01-01
This study aimed to evaluate the effect of different screw-tightening sequences, torques, and methods on the strains generated on an internal-connection implant (Astra Tech) superstructure with good fit. An edentulous mandibular master model and a metal framework directly connected to four parallel implants with a passive fit to each other were fabricated. Six stone casts were made from a dental stone master model by a splinted impression technique to represent a well-fitting situation with the metal framework. Strains generated by four screw-tightening sequences (1-2-3-4, 4-3-2-1, 2-4-3-1, and 2-3-1-4), two torques (10 and 20 Ncm), and two methods (one-step and two-step) were evaluated. In the two-step method, screws were tightened to the initial torque (10 Ncm) in a predetermined screw-tightening sequence and then to the final torque (20 Ncm) in the same sequence. Strains were recorded twice by three strain gauges attached to the framework (superior face midway between abutments). Deformation data were analyzed using multiple analysis of variance at a .05 level of statistical significance. In all stone casts, strains were produced by connection of the superstructure, regardless of screw-tightening sequence, torque, and method. No statistically significant differences in superstructure strains were found based on screw-tightening sequences (range, -409.8 to -413.8 μm/m), torques (-409.7 and -399.1 μm/m), or methods (-399.1 and -410.3 μm/m). Within the limitations of this in vitro study, screw-tightening sequence, torque, and method were not critical factors for the strain generated on a well-fitting internal-connection implant superstructure by the splinted impression technique. Further studies are needed to evaluate the effect of screw-tightening techniques on the preload stress in various different clinical situations.
Estimation of parameters of dose volume models and their confidence limits
NASA Astrophysics Data System (ADS)
van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.
2003-07-01
Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work, frequently used methods and techniques for fitting NTCP models to dose-response data to establish dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters, a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data is obtained and has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using three methods: employing the covariance matrix, the jackknife method, and direct inspection of the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the width of a bundle of curves resulting from parameters within the one-standard-deviation region in the likelihood space was investigated. Thirdly, many parameter sets and their likelihoods were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that for the type of dose response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the usage of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
Irvine, Michael A; Hollingsworth, T Déirdre
2018-05-26
Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
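A minimal sketch of rejection ABC with an adaptive tolerance follows; it is a simplified relative of the paper's scheme, without perturbation kernels or the kernel density estimation step, and all function arguments (simulator, prior sampler, distance) are user-supplied:

```python
import numpy as np

def adaptive_abc(simulate, observed_summary, prior_sample, distance,
                 n_particles=500, n_rounds=5, quantile=0.5, seed=0):
    """Rejection ABC with an adaptive tolerance: each round accepts
    particles whose simulated summaries fall within the current tolerance,
    then shrinks the tolerance to a quantile of the accepted distances."""
    rng = np.random.default_rng(seed)
    tol = np.inf                           # first round accepts everything
    for _ in range(n_rounds):
        particles, dists = [], []
        while len(particles) < n_particles:
            theta = prior_sample(rng)
            d = distance(simulate(theta, rng), observed_summary)
            if d < tol:
                particles.append(theta)
                dists.append(d)
        tol = np.quantile(dists, quantile)  # adapt tolerance downward
    return np.array(particles), tol
```

The quantile-based shrinkage is what removes the need to hand-tune a tolerance schedule, which is the accessibility point the abstract emphasizes.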
Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J
2016-06-01
The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback of IG fitting was a loss of precision, approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
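For context, the conventional three-parameter MOLLI model referred to above is S(TI) = A - B exp(-TI/T1*), with the Look-Locker correction T1 = T1*(B/A - 1). A minimal sketch of that baseline fit follows, assuming polarity-restored signal data and illustrative starting values; the IG extension (one extra parameter per inversion grouping) is not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def molli_signal(ti, A, B, T1_star):
    """Conventional three-parameter MOLLI model: S(TI) = A - B exp(-TI/T1*)."""
    return A - B * np.exp(-ti / T1_star)

def fit_t1(ti, signal):
    """Fit the three-parameter model, then apply the Look-Locker
    correction T1 = T1* (B/A - 1). Assumes polarity-restored data."""
    p0 = [signal.max(), 2 * signal.max(), 1000.0]   # rough starting values (ms)
    (A, B, T1_star), _ = curve_fit(molli_signal, ti, signal, p0=p0)
    return T1_star * (B / A - 1)
```

IG fitting generalizes this by giving each inversion grouping its own recovery parameter, which is what removes the complete-recovery (rest period) requirement at the cost of precision.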
Random-growth urban model with geographical fitness
NASA Astrophysics Data System (ADS)
Kii, Masanobu; Akimoto, Keigo; Doi, Kenji
2012-12-01
This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
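A toy simulation of preferential attachment with fitness in the city-growth setting might look as follows; the founding probability and the fitness distribution are assumptions for illustration, not the paper's specification:

```python
import numpy as np

def grow_cities(n_steps, new_city_prob=0.01, fitness_sample=None, seed=0):
    """Toy growth process: each arriving unit founds a new city with
    probability p, else joins city i with probability proportional to
    population_i * fitness_i (preferential attachment with fitness)."""
    rng = np.random.default_rng(seed)
    if fitness_sample is None:
        fitness_sample = lambda r: r.lognormal(0.0, 0.5)  # positive fitness
    pops, fit = [1.0], [fitness_sample(rng)]
    for _ in range(n_steps):
        if rng.random() < new_city_prob:
            pops.append(1.0)
            fit.append(fitness_sample(rng))
        else:
            w = np.array(pops) * np.array(fit)
            i = rng.choice(len(pops), p=w / w.sum())
            pops[i] += 1.0
    return np.array(pops)
```

Sorting the output, for example np.sort(grow_cities(100_000))[::-1], yields a rank-size distribution with a Pareto-like upper tail when fitness is strictly positive, consistent with the continuum result described above; negative fitness values, which the paper also analyzes, would require modifying the weighting step.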
Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu
2016-03-01
The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
NASA Astrophysics Data System (ADS)
Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert
2016-05-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~20,000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
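A minimal sketch of the simple score-weighted averaging step, with invented ensemble values and an exponential misfit-to-weight mapping chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical large ensemble: one equivalent sea-level-rise value and one
# aggregate model-data misfit score per run (lower misfit = better fit).
n_runs = 625
eslr = rng.normal(3.3, 0.8, n_runs)          # metres, illustrative only
misfit = rng.gamma(2.0, 1.0, n_runs)

# Simple averaging weighted by the aggregate score: turn misfit into a weight
w = np.exp(-misfit)                          # one common scoring choice
w /= w.sum()
mean = np.sum(w * eslr)
var = np.sum(w * (eslr - mean) ** 2)

# Weighted 5-95% envelope by sorting runs and accumulating their weights
order = np.argsort(eslr)
cdf = np.cumsum(w[order])
lo = eslr[order][np.searchsorted(cdf, 0.05)]
hi = eslr[order][np.searchsorted(cdf, 0.95)]
print(f"score-weighted ESLR = {mean:.2f} +/- {np.sqrt(var):.2f} m, 5-95%: [{lo:.2f}, {hi:.2f}]")
```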
Balbuena Ortega, A; Arroyo Carrasco, M L; Méndez Otero, M M; Gayou, V L; Delgado Macuil, R; Martínez Gutiérrez, H; Iturbe Castillo, M D
2014-12-12
In this paper, the nonlinear refractive index of colloidal gold nanoparticles under continuous-wave illumination is investigated with the z-scan technique. Gold nanoparticles were synthesized using ascorbic acid as reductant, phosphates as stabilizer and cetyltrimethylammonium chloride (CTAC) as surfactant agent. The nanoparticle size was controlled with the CTAC concentration. Experiments varying the incident power and sample concentration were performed. The experimental z-scan results were fitted with three models: thermal lens, aberrant thermal lens and the nonlocal model. The nonlocal model is shown to reproduce the observed experimental behaviour with exceptionally good agreement.
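For orientation, a sketch of a closed-aperture z-scan fit using the standard thin-sample Kerr transmittance formula as a stand-in for the three models compared in the paper; beam parameters and noise levels are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def t_closed(z, z0, dphi):
    """Standard closed-aperture z-scan transmittance for a thin Kerr medium;
    a stand-in for the thermal/aberrant/nonlocal models compared in the paper."""
    x = z / z0
    return 1.0 + 4.0 * x * dphi / ((x**2 + 9.0) * (x**2 + 1.0))

# Synthetic z-scan trace (all values illustrative)
rng = np.random.default_rng(3)
z = np.linspace(-30e-3, 30e-3, 121)                # metres
data = t_closed(z, 5e-3, -0.6) + 0.005 * rng.standard_normal(z.size)

(p_z0, p_dphi), _ = curve_fit(t_closed, z, data, p0=[4e-3, -0.3])
print(f"z0 = {p_z0*1e3:.2f} mm, on-axis phase shift = {p_dphi:.3f}")
# n2 would then follow from dphi and the known beam intensity and path length.
```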
ERIC Educational Resources Information Center
Cheung, Mike W.-L.; Cheung, Shu Fai
2016-01-01
Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…
Wing Shape Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2015-01-01
A new two-step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, the measured strain is fitted using a piecewise least-squares curve-fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, the computed deflections along the fibers are combined with a finite element model of the structure in order to extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular wing. It is then applied to test data from a cantilevered swept-wing model.
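A simplified sketch of the first step, assuming pure bending so that curvature is strain divided by an assumed fiber offset c; a cubic spline is fitted to synthetic strains and integrated twice to recover slope and deflection:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical strain measurements at discrete fiber locations along a
# cantilevered beam; under pure bending, w''(x) ~ strain(x)/c, with c the
# distance from the neutral axis (a simplification of the two-step method).
x_sens = np.linspace(0.0, 1.0, 12)                 # sensor stations (m)
c = 0.01                                           # assumed fiber offset (m)
strain = 1e-4 * (1.0 - x_sens) ** 2                # synthetic bending strain

curvature = CubicSpline(x_sens, strain / c)        # step 1: smooth curve fit
x = np.linspace(0.0, 1.0, 200)
slope = curvature.antiderivative()(x)              # first integration
defl = curvature.antiderivative(2)(x)              # second integration
# Cantilever root conditions w(0) = w'(0) = 0 hold automatically because the
# antiderivatives are taken from x = 0.
print(f"tip slope = {slope[-1]:.4e} rad, tip deflection = {defl[-1]:.4e} m")
```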
NASA Technical Reports Server (NTRS)
Zoladz, Tom; Patel, Sandeep; Lee, Erik; Karon, Dave
2011-01-01
An advanced methodology for extracting the hydraulic dynamic pump transfer matrix (Yp) for a cavitating liquid rocket engine turbopump inducer+impeller has been developed. The transfer function is required for integrated vehicle pogo stability analysis as well as optimization of local inducer pumping stability. Laboratory pulsed subscale waterflow testing of the J-2X oxygen turbopump is introduced, and the new extraction method is applied to the data collected. From accurate measurements of pump inlet and discharge perturbational mass flows and pressures, and from one-dimensional flow models that represent the complete waterflow loop physics, we are able to derive Yp and hence extract the characteristic pump parameters: compliance, pump gain, impedance, and mass flow gain. Detailed modeling is necessary to accurately translate instrument-plane measurements to the pump inlet and discharge and extract Yp. We present the MSFC Dynamic Lump Parameter Fluid Model Framework and describe critical dynamic component details. We report on fit minimization techniques and cost (fitness) function derivation, and present the resulting model fits to our experimental data. Comparisons are made to alternate techniques for spatially translating measurement stations to the actual pump inlet and discharge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholey, J. E.; Lin, L.; Ainsley, C. G.
2015-06-15
Purpose: To evaluate the accuracy and limitations of a commercially-available treatment planning system’s (TPS’s) dose calculation algorithm for proton pencil-beam scanning (PBS) and present a novel technique to efficiently derive a clinically-acceptable beam model. Methods: In-air fluence profiles of PBS spots were modeled in the TPS alternately as single-(SG) and double-Gaussian (DG) functions, based on fits to commissioning data. Uniform-fluence, single-energy-layer square fields of various sizes and energies were calculated with both beam models and delivered to water. Dose was measured at several depths. Motivated by observed discrepancies in measured-versus-calculated dose comparisons, a third model was constructed based on double-Gaussian parameters contrived through a novel technique developed to minimize these differences (DGC). Eleven cuboid-dose-distribution-shaped fields with varying range/modulation and field size were subsequently generated in the TPS, using each of the three beam models described, and delivered to water. Dose was measured at the middle of each spread-out Bragg peak. Results: For energies <160 MeV, the DG model fit square-field measurements to <2% at all depths, while the SG model could disagree by >6%. For energies >160 MeV, both SG and DG models fit square-field measurements to <1% at <4 cm depth, but could exceed 6% deeper. By comparison, disagreement with the DGC model was always <3%. For the cuboid plans, calculation-versus-measured percent dose differences exceeded 7% for the SG model, being larger for smaller fields. The DG model showed <3% disagreement for all field sizes in shorter-range beams, although >5% differences for smaller fields persisted in longer-range beams. In contrast, the DGC model predicted measurements to <2% for all beams. Conclusion: Neither the TPS’s SG nor DG models, employed as intended, are ideally suited for routine clinical use. However, via a novel technique to be presented, its DG model can be tuned judiciously to yield acceptable results.
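A toy illustration of the single- versus double-Gaussian spot-fluence fitting underlying the SG and DG beam models; the radial profile, halo amplitude and noise are invented, and the DGC optimization step is not reproduced:

```python
import numpy as np
from scipy.optimize import curve_fit

def single_gauss(r, a, s):
    return a * np.exp(-0.5 * (r / s) ** 2)

def double_gauss(r, a1, s1, a2, s2):
    # Narrow core plus a wide, low-amplitude halo, as used for PBS spots
    return a1 * np.exp(-0.5 * (r / s1) ** 2) + a2 * np.exp(-0.5 * (r / s2) ** 2)

# Synthetic in-air spot profile with a faint large-radius halo (illustrative)
r = np.linspace(0.0, 50.0, 200)                       # mm
rng = np.random.default_rng(4)
truth = double_gauss(r, 1.0, 4.0, 0.02, 15.0)
data = truth + 0.002 * rng.standard_normal(r.size)

p_sg, _ = curve_fit(single_gauss, r, data, p0=[1.0, 5.0])
p_dg, _ = curve_fit(double_gauss, r, data, p0=[1.0, 5.0, 0.01, 20.0])
for name, model, p in [("SG", single_gauss, p_sg), ("DG", double_gauss, p_dg)]:
    rms = np.sqrt(np.mean((model(r, *p) - data) ** 2))
    print(f"{name}: rms residual = {rms:.4f}")   # the halo makes SG fit worse
```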
Simplified estimation of age-specific reference intervals for skewed data.
Wright, E M; Royston, P
1997-12-30
Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that it necessarily produces smooth centile curves, estimates the entire density, and provides an explicit formula for the centiles. The method proposed here is a simplified version of a recent approach proposed by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.
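A minimal sketch in the spirit of the method: a log transformation removes skewness, the mean and SD of the transformed data are modelled by regression on age, and centiles follow from an explicit formula; the data and linear age models are assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic skewed measurements over age (stand-in for real clinical data)
age = rng.uniform(20.0, 70.0, 500)
y = np.exp(0.02 * age + 0.15 * rng.standard_normal(age.size))   # log-normal-like

# Simplified Royston-Wright recipe: transform to remove skewness, then model
# the mean and SD of z = log(y) as simple regressions on age.
z = np.log(y)
mean_coef = np.polyfit(age, z, 1)
resid = z - np.polyval(mean_coef, age)
# E|resid| = SD * sqrt(2/pi) under normality, so rescale |resid| to an SD
sd_coef = np.polyfit(age, np.abs(resid) * np.sqrt(np.pi / 2.0), 1)

def centile(a, p):
    """Age-specific reference limit at centile p (explicit formula)."""
    mu, sd = np.polyval(mean_coef, a), np.polyval(sd_coef, a)
    return np.exp(mu + stats.norm.ppf(p) * sd)

for a in (30, 50, 70):
    print(f"age {a}: 95% interval = [{centile(a, 0.025):.2f}, {centile(a, 0.975):.2f}]")
```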
On Using Surrogates with Genetic Programming.
Hildebrandt, Torsten; Branke, Jürgen
2015-01-01
One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP.
Kattner, Florian; Cochrane, Aaron; Green, C Shawn
2017-09-01
The majority of theoretical models of learning consider learning to be a continuous function of experience. However, most perceptual learning studies use thresholds estimated by fitting psychometric functions to independent blocks, sometimes then fitting a parametric function to these block-wise estimated thresholds. Critically, such approaches tend to violate the basic principle that learning is continuous through time (e.g., by aggregating trials into large "blocks" for analysis that each assume stationarity, then fitting learning functions to these aggregated blocks). To address this discrepancy between base theory and analysis practice, here we instead propose fitting a parametric function to thresholds from each individual trial. In particular, we implemented a dynamic psychometric function whose parameters were allowed to change continuously with each trial, thus parameterizing nonstationarity. We fit the resulting continuous time parametric model to data from two different perceptual learning tasks. In nearly every case, the quality of the fits derived from the continuous time parametric model outperformed the fits derived from a nonparametric approach wherein separate psychometric functions were fit to blocks of trials. Because such a continuous trial-dependent model of perceptual learning also offers a number of additional advantages (e.g., the ability to extrapolate beyond the observed data; the ability to estimate performance on individual critical trials), we suggest that this technique would be a useful addition to each psychophysicist's analysis toolkit.
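A minimal sketch of fitting a trial-continuous psychometric function by maximum likelihood, assuming an exponential threshold decay and a cumulative-normal link; the task, parameter values and noise are invented:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)

# Synthetic perceptual-learning experiment: the threshold decays smoothly
# across trials rather than being constant within blocks.
n = 2000
trial = np.arange(n)
stim = rng.uniform(0.0, 2.0, n)                         # stimulus intensity

def thresh(t, a, b, tau):
    return a + b * np.exp(-t / tau)                     # trial-continuous threshold

p_correct = norm.cdf((stim - thresh(trial, 0.3, 0.8, 400.0)) / 0.2)
resp = rng.random(n) < p_correct                        # True = correct response

def nll(theta):
    # Negative log-likelihood of the dynamic psychometric function
    a, b, tau, sigma = theta
    p = norm.cdf((stim - thresh(trial, a, b, tau)) / sigma)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (~resp) * np.log1p(-p))

fit = minimize(nll, x0=[0.5, 0.5, 200.0, 0.3], method="Nelder-Mead")
print("a, b, tau, sigma =", np.round(fit.x, 3))
```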
Using the Flipchem Photochemistry Model When Fitting Incoherent Scatter Radar Data
NASA Astrophysics Data System (ADS)
Reimer, A. S.; Varney, R. H.
2017-12-01
The north-face Resolute Bay Incoherent Scatter Radar (RISR-N) routinely images the dynamics of the polar ionosphere, providing measurements of the plasma density, electron temperature, ion temperature, and line-of-sight velocity with seconds-to-minutes time resolution. RISR-N does not directly measure ionospheric parameters, but backscattered signals, recording them as voltage samples. Using signal processing techniques, radar autocorrelation functions (ACF) are estimated from the voltage samples. A model of the signal ACF is then fitted to the estimated ACF using non-linear least-squares techniques to obtain the best-fit ionospheric parameters. The signal model, and therefore the fitted parameters, depend on the ionospheric ion composition that is used [e.g. Zettergren et al. (2010), Zou et al. (2017)]. The software used to process RISR-N ACF data includes the "flipchem" model, an ion photochemistry model developed by Richards [2011] that was adapted from the Field Line Interhemispheric Plasma (FLIP) model. Flipchem requires neutral densities, neutral temperatures, electron density, ion temperature, electron temperature, solar zenith angle, and F10.7 as inputs to compute ion densities, which are input to the signal model. A description of how the flipchem model is used in the RISR-N fitting software will be presented. Additionally, a statistical comparison of the fitted electron density, ion temperature, electron temperature, and velocity obtained using a flipchem ionosphere, a pure O+ ionosphere, and a Chapman O+ ionosphere will be presented. The comparison covers nearly two years of RISR-N data (April 2015 - December 2016). Richards, P. G. (2011), Reexamination of ionospheric photochemistry, J. Geophys. Res., 116, A08307, doi:10.1029/2011JA016613. Zettergren, M., Semeter, J., Burnett, B., Oliver, W., Heinselman, C., Blelly, P.-L., and Diaz, M.: Dynamic variability in F-region ionospheric composition at auroral arc boundaries, Ann. Geophys., 28, 651-664, https://doi.org/10.5194/angeo-28-651-2010, 2010. Zou, S., D. Ozturk, R. Varney, and A. Reimer (2017), Effects of sudden commencement on the ionosphere: PFISR observations and global MHD simulation, Geophys. Res. Lett., 44, 3047-3058, doi:10.1002/2017GL072678.
Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.
Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E
2016-01-01
Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model.
Liu, Y F; Yu, H; Wang, W N; Gao, B
2017-06-09
Objective: To evaluate the processing accuracy, internal quality and suitability of titanium alloy frameworks of removable partial dentures (RPD) fabricated by the selective laser melting (SLM) technique, and to provide reference for clinical application. Methods: The plaster model of one clinical patient was used as the working model; it was scanned and reconstructed into a digital working model, on which an RPD framework was designed. Eight corresponding RPD frameworks were then fabricated using the SLM technique. A three-dimensional (3D) optical scanner was used to obtain the 3D data of the frameworks, and these data were compared with the original computer-aided design (CAD) model to evaluate processing precision. Traditionally cast pure titanium frameworks were used as the control group, and the internal quality was analyzed by X-ray examination. Finally, the fit of the frameworks was examined on the plaster model. Results: The overall average deviation of the titanium alloy RPD frameworks fabricated by SLM technology was (0.089±0.076) mm, and the root mean square error was 0.103 mm. No visible pores, cracks or other internal defects were detected in the frameworks. The frameworks seated completely on the plaster model, their tissue surfaces fitted well, and there was no obvious movement. Conclusions: Titanium alloy RPD frameworks fabricated by SLM technology are of good quality.
Accelerated Microstructure Imaging via Convex Optimization (AMICO) from diffusion MRI data.
Daducci, Alessandro; Canales-Rodríguez, Erick J; Zhang, Hui; Dyrby, Tim B; Alexander, Daniel C; Thiran, Jean-Philippe
2015-01-15
Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide a biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data which, in practice, demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which, then, can be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; however, the AMICO framework is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to accelerate the fit of existing techniques drastically (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
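The core AMICO idea, sketched with toy mono-exponential kernels rather than the actual ActiveAx/NODDI response functions: gridding the nonlinear parameters turns the fit into a linear nonnegative least-squares problem that is solved in one shot:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)

# Dictionary of response "atoms": signals precomputed at fixed values of the
# nonlinear parameters (here: toy mono-exponential decays at gridded rates).
b = np.linspace(0.0, 3.0, 30)                      # acquisition variable
rates = np.linspace(0.2, 5.0, 40)                  # gridded nonlinear parameter
A = np.exp(-np.outer(b, rates))                    # 30 x 40 dictionary

# Synthetic measurement: sparse nonnegative mixture of two atoms plus noise
w_true = np.zeros(rates.size)
w_true[[5, 30]] = [0.7, 0.3]
y = A @ w_true + 0.01 * rng.standard_normal(b.size)

# The fit is now a convex nonnegative least-squares problem
w, resid = nnls(A, y)
print("recovered support:", np.nonzero(w > 0.05)[0], "residual:", round(resid, 4))
```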
Russell, Robert D; Huo, Michael H; Rodrigues, Danieli C; Kosmopoulos, Victor
2016-11-14
Stable femoral fixation during uncemented total hip arthroplasty is critical to allow for subsequent osseointegration of the prosthesis. Varying stem designs provide surgeons with multiple options to gain femoral fixation. The purpose of this study was to compare the initial fixation stability of cylindrical and tapered stem implants using two different underreaming techniques (press-fit conditions) for revision total hip arthroplasty (THA). A finite element femur model was created from three-dimensional computed tomography images simulating a trabecular bone defect commonly observed in revision THA. Two 18-mm generic femoral hip implants were modeled using the same geometry, differing only in that one had a cylindrical stem and the other had a 2 degree tapered stem. Surgery was simulated using a 0.05-mm and 0.01-mm press-fit and tested with a physiologically relevant loading protocol. Mean contact pressure was influenced more by the surgical technique than by the stem geometry. The 0.05-mm press-fit condition resulted in the highest contact pressures for both the cylindrical (27.35 MPa) and tapered (20.99 MPa) stems. Changing the press-fit to 0.01-mm greatly decreased the contact pressure by 79.8% and 78.5% for the cylindrical (5.53 MPa) and tapered (4.52 MPa) models, respectively. The cylindrical stem geometry consistently showed less relative micromotion at all the cross-sections sampled as compared to the tapered stem regardless of press-fit condition. This finite element analysis study demonstrates that tapered stem results in lower average contact pressure and greater micromotion at the implant-bone interface than a cylindrical stem geometry. More studies are needed to establish how these different stem geometries perform in such non-ideal conditions encountered in revision THA cases where less bone stock is available.
NASA Astrophysics Data System (ADS)
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
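A minimal GLUE-style sketch for a constant first-order inactivation model, with an invented informal likelihood and behavioural threshold (both are user choices in GLUE):

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic inactivation data: log10 reduction under first-order decay
t = np.array([0, 7, 14, 28, 42, 56], float)            # days
k_true = 0.08
obs = -k_true * t / np.log(10) + 0.05 * rng.standard_normal(t.size)

# GLUE: Monte-Carlo sample the rate, keep "behavioural" parameter sets
k_samples = rng.uniform(0.0, 0.3, 20000)
sse = np.array([np.sum((obs + k * t / np.log(10)) ** 2) for k in k_samples])
like = np.exp(-sse / sse.min())                         # informal likelihood (choice)
behavioural = like > 0.05                               # acceptance threshold (choice)
k_keep, w = k_samples[behavioural], like[behavioural]
w /= w.sum()

# Likelihood-weighted posterior bounds from the retained samples
order = np.argsort(k_keep)
cdf = np.cumsum(w[order])
lo, hi = k_keep[order][np.searchsorted(cdf, [0.05, 0.95])]
print(f"k = {np.sum(w * k_keep):.3f} /day, 5-95% GLUE bounds: [{lo:.3f}, {hi:.3f}]")
```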
Jadhav, Vivek Dattatray; Motwani, Bhagwan K; Shinde, Jitendra; Adhapure, Prasad
2017-01-01
The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. This study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques in the vertical direction was checked, in Part II, the fit of sectional metal crowns in the horizontal direction made by both casting techniques was checked, and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. A conventional technique was compared with an accelerated technique. In Part I of the study, the marginal fit of the full metal crowns as well as in Part II, the horizontal fit of sectional metal crowns made by both casting techniques was determined, and in Part III, the surface roughness of castings made with the same techniques was compared. The results of the t -test and independent sample test do not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. For the marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. Accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness.
Haselhuhn, Klaus; Marotti, Juliana; Tortamano, Pedro; Weiss, Claudia; Suleiman, Lubna; Wolfart, Stefan
2014-12-01
Passive fit of the prosthetic superstructure is important to avoid complications; however, evaluation of passive fit is not possible using conventional procedures. Thus, the aim of this study was to check and locate mechanical stress in bar restorations fabricated using two casting techniques. Fifteen patients received four implants in the interforaminal region of the mandible, and a bar was fabricated using either the cast-on abutment or lost-wax casting technique. The fit accuracy was checked according to the Sheffield's test criteria. Measurements were recorded on the master model with a gap-free, passive fit using foil strain gauges both before and after tightening the prosthetic screws. Data acquisition and processing was analyzed with computer software and submitted to statistical analysis (ANOVA). The greatest axial distortion was at position 42 with the cast-on abutment technique, with a mean distortion of 450 μm/m. The lowest axial distortion occurred at position 44 with the lost-wax casting technique, with a mean distortion of 100 μm/m. The minimal differences between the means of axial distortion do not indicate any significant differences between the techniques (P = 0.2076). Analysis of the sensor axial distortion in relation to the implant position produced a significant difference (P < 0.0001). Significantly higher measurements were recorded in the axial distortion analysis of the distal sensors of implants at the 34 and 44 regions than on the mesial positions at the 32 and 42 regions (P = 0.0481). The measuring technique recorded axial distortion in the implant-supported superstructures. Distortions were present at both casting techniques, with no significant difference between the sides.
EM in high-dimensional spaces.
Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim
2005-06-01
This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.
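A minimal sketch of the setting, using scikit-learn's EM as a stand-in for the paper's algorithm: with fewer samples than dimensions, the Gaussians are modelled in the subspace spanned by the data (here via PCA to N-1 components) rather than by compressing to an arbitrarily low-dimensional model; the two-cluster data are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(16)

# Fewer samples than feature dimensions: N = 40 points in D = 500 dimensions
N, D = 40, 500
X = np.vstack([rng.normal(0.0, 1.0, (N // 2, D)),
               rng.normal(3.0, 1.0, (N // 2, D))])

# Centered data span at most an (N - 1)-dimensional subspace; fit the mixture
# there instead of discarding most of the variance
Z = PCA(n_components=N - 1).fit_transform(X)
gmm = GaussianMixture(n_components=2, covariance_type="diag",
                      random_state=0).fit(Z)
print("cluster sizes:", np.bincount(gmm.predict(Z)))
```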
Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers
NASA Technical Reports Server (NTRS)
Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.
2010-01-01
This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
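A minimal sketch of the MRR idea: a parametric fit augmented by a portion of a kernel-smoothed fit to its residuals; the data, bandwidth and mixing fraction are invented (MRR chooses the mixing portion in a data-driven way):

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic calibration curve with a mild unmodeled nonlinearity
x = np.linspace(0.0, 1.0, 60)
y = 1.0 + 2.0 * x + 0.15 * np.sin(6.0 * x) + 0.02 * rng.standard_normal(x.size)

# Step 1: predetermined parametric fit (here: a straight line)
beta = np.polyfit(x, y, 1)
y_par = np.polyval(beta, x)

# Step 2: nonparametric (kernel) regression on the residuals
def kernel_smooth(x0, xs, r, h=0.08):
    w = np.exp(-0.5 * ((xs - x0) / h) ** 2)
    return np.sum(w * r) / np.sum(w)

resid_fit = np.array([kernel_smooth(xi, x, y - y_par) for xi in x])

# MRR blend: augment the parametric fit with a portion lam of the residual fit
lam = 0.7                                  # fixed here; normally data-driven
y_mrr = y_par + lam * resid_fit
print("rms error, parametric vs MRR:",
      np.sqrt(np.mean((y - y_par) ** 2)).round(4),
      np.sqrt(np.mean((y - y_mrr) ** 2)).round(4))
```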
A method for modeling discontinuities in a microwave coaxial transmission line
NASA Technical Reports Server (NTRS)
Otoshi, T. Y.
1992-01-01
A method for modeling discontinuities in a coaxial transmission line is presented. The methodology involves the use of a nonlinear least-squares fit program to optimize the fit between theoretical data (from the model) and experimental data. When this method was applied to modeling discontinuities in a slightly damaged Galileo spacecraft S-band (2.295-GHz) antenna cable, excellent agreement between theory and experiment was obtained over a frequency range of 1.70-2.85 GHz. The same technique can be applied for diagnostics and locating unknown discontinuities in other types of microwave transmission lines, such as rectangular, circular, and beam waveguides.
Validation of catchment models for predicting land-use and climate change impacts. 1. Method
NASA Astrophysics Data System (ADS)
Ewen, J.; Parkin, G.
1996-02-01
Computer simulation models are increasingly being proposed as tools capable of giving water resource managers accurate predictions of the impact of changes in land-use and climate. Previous validation testing of catchment models is reviewed, and it is concluded that the methods used do not clearly test a model's fitness for such a purpose. A new generally applicable method is proposed. This involves the direct testing of fitness for purpose, uses established scientific techniques, and may be implemented within a quality assured programme of work. The new method is applied in Part 2 of this study (Parkin et al., J. Hydrol., 175:595-613, 1996).
Barnett, Lisa M; Morgan, Philip J; van Beurden, Eric; Beard, John R
2008-01-01
Background The purpose of this paper was to investigate whether perceived sports competence mediates the relationship between childhood motor skill proficiency and subsequent adolescent physical activity and fitness. Methods In 2000, children's motor skill proficiency was assessed as part of a school-based physical activity intervention. In 2006/07, participants were followed up as part of the Physical Activity and Skills Study and completed assessments for perceived sports competence (Physical Self-Perception Profile), physical activity (Adolescent Physical Activity Recall Questionnaire) and cardiorespiratory fitness (Multistage Fitness Test). Structural equation modelling techniques were used to determine whether perceived sports competence mediated between childhood object control skill proficiency (composite score of kick, catch and overhand throw), and subsequent adolescent self-reported time in moderate-to-vigorous physical activity and cardiorespiratory fitness. Results Of 928 original intervention participants, 481 were located in 28 schools and 276 (57%) were assessed with at least one follow-up measure. Slightly more than half were female (52.4%) with a mean age of 16.4 years (range 14.2 to 18.3 yrs). Relevant assessments were completed by 250 (90.6%) students for the Physical Activity Model and 227 (82.3%) for the Fitness Model. Both hypothesised mediation models had a good fit to the observed data, with the Physical Activity Model accounting for 18% (R2 = 0.18) of physical activity variance and the Fitness Model accounting for 30% (R2 = 0.30) of fitness variance. Sex did not act as a moderator in either model. Conclusion Developing a high perceived sports competence through object control skill development in childhood is important for both boys and girls in determining adolescent physical activity participation and fitness. Our findings highlight the need for interventions to target and improve the perceived sports competence of youth. PMID:18687148
Information management: considering adolescents' regulation of parental knowledge.
Marshall, Sheila K; Tilton-Weaver, Lauree C; Bosdet, Lara
2005-10-01
Employing Goffman's [(1959). The presentation of self in everyday life. New York: Doubleday and Company] notion of impression management, adolescents' conveyance of information about their whereabouts and activities to parents was assessed employing two methodologies. First, a two-wave panel design with a sample of 121 adolescents was used to test a model of information management incorporating two forms of information regulation (lying and willingness to disclose), adolescents' perception of their parents' knowledge about their activities, and adolescent misconduct. Path analysis was used to examine the model for two forms of misconduct as outcomes: substance use and antisocial behaviours. Fit indices indicate the path models were all good fits to the data. Second, 96 participants' responses to semi-structured questions were analyzed using a qualitative analytic technique. Findings reveal adolescents withhold or divulge information in coordination with their parents, employ impression management techniques, and try to balance safety issues with preservation of the parent-adolescent relationship.
Determination of Time Dependent Virus Inactivation Rates
NASA Astrophysics Data System (ADS)
Chrysikopoulos, C. V.; Vogler, E. T.
2003-12-01
A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
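A simplified sketch of the resampling step, substituting a quadratic-in-time log-concentration model and a residual bootstrap for the paper's universal-kriging slope estimation; bootstrap percentiles provide the confidence intervals for the drift coefficients:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(10)

# Synthetic normalized virus concentrations with a time-dependent rate
t = np.linspace(0.0, 30.0, 16)                       # days

def log_c(t, k0, k1):
    # d ln C / dt = -(k0 + k1 t)  =>  ln C = -(k0 t + 0.5 k1 t^2)
    return -(k0 * t + 0.5 * k1 * t**2)

obs = log_c(t, 0.10, 0.004) + 0.05 * rng.standard_normal(t.size)

p0, _ = curve_fit(log_c, t, obs, p0=[0.05, 0.0])
resid = obs - log_c(t, *p0)

# Residual bootstrap: refit on resampled residuals, take percentile intervals
boot = []
for _ in range(2000):
    y_star = log_c(t, *p0) + rng.choice(resid, size=resid.size, replace=True)
    p_star, _ = curve_fit(log_c, t, y_star, p0=p0)
    boot.append(p_star)
lo, hi = np.percentile(np.array(boot), [2.5, 97.5], axis=0)
print(f"k0 = {p0[0]:.3f} [{lo[0]:.3f}, {hi[0]:.3f}], "
      f"k1 = {p0[1]:.4f} [{lo[1]:.4f}, {hi[1]:.4f}]")
```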
Alghamdi, Manal; Al-Mallah, Mouaz; Keteyian, Steven; Brawner, Clinton; Ehrman, Jonathan; Sakr, Sherif
2017-01-01
Machine learning is becoming a popular and important approach in the field of medical research. In this study, we investigate the relative performance of various machine learning methods such as Decision Tree, Naïve Bayes, Logistic Regression, Logistic Model Tree and Random Forests for predicting incident diabetes using medical records of cardiorespiratory fitness. In addition, we apply different techniques to uncover potential predictors of diabetes. This FIT project study used data from 32,555 patients who were free of any known coronary artery disease or heart failure, who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009, and who had a complete 5-year follow-up. At the completion of the fifth year, 5,099 of those patients had developed diabetes. The dataset contained 62 attributes classified into four categories: demographic characteristics, disease history, medication use history, and stress test vital signs. We developed an ensembling-based predictive model using 13 attributes that were selected based on their clinical importance, Multiple Linear Regression, and Information Gain Ranking methods. The negative effect of class imbalance on the constructed model was handled by the Synthetic Minority Oversampling Technique (SMOTE). The overall performance of the predictive model was improved by an ensemble machine learning approach using the Vote method with three decision-tree-based classifiers (Naïve Bayes Tree, Random Forest, and Logistic Model Tree), achieving high prediction accuracy (AUC = 0.92). The study shows the potential of ensembling and SMOTE approaches for predicting incident diabetes using cardiorespiratory fitness data.
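A sketch of the SMOTE-plus-soft-voting pipeline using scikit-learn and imbalanced-learn on synthetic data; NBTree and LMT are Weka models, so common scikit-learn classifiers are substituted, and no real FIT attributes are used:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import SMOTE   # from the imbalanced-learn package

# Synthetic imbalanced stand-in for the cohort (real attributes unavailable)
X, y = make_classification(n_samples=20000, n_features=13, weights=[0.85],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Handle class imbalance with SMOTE on the training split only
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Soft-voting ensemble of three base classifiers
ens = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("nb", GaussianNB()),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")
ens.fit(X_res, y_res)
auc = roc_auc_score(y_te, ens.predict_proba(X_te)[:, 1])
print(f"hold-out AUC = {auc:.3f}")
```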
Brassey, Charlotte A.; Gardiner, James D.
2015-01-01
Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates of body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and a low mean square error (r2=0.975, m.s.e.=0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques, as the resulting volumes are less sensitive to uncertainties in skeletal reconstructions and do not require manual separation of body segments from skeletons. PMID:26361559
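The predictive-equation step reduces to a power-law regression of mass on α-shape volume in log-log space; a minimal sketch with invented calibration values:

```python
import numpy as np

# Hypothetical calibration set: alpha-shape volumes (m^3) and known body
# masses (kg) for extant quadrupeds; all values invented for illustration.
vol = np.array([0.08, 0.35, 0.90, 2.1, 4.8])
mass = np.array([75.0, 350.0, 900.0, 2100.0, 4700.0])

# Power-law predictive equation fitted in log-log space: mass = a * vol^b
b, log_a = np.polyfit(np.log(vol), np.log(mass), 1)

def predict_mass(v):
    return np.exp(log_a) * v ** b

# Applying the regression to a reconstructed fossil volume (hypothetical)
v_fossil = 3.7          # alpha-shape volume, m^3
print(f"exponent b = {b:.3f}, predicted mass = {predict_mass(v_fossil):.0f} kg")
```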
Micro-CT evaluation of the marginal fit of CAD/CAM all ceramic crowns
NASA Astrophysics Data System (ADS)
Brenes, Christian
Objectives: Evaluate the marginal fit of CAD/CAM all-ceramic crowns made from lithium disilicate and zirconia using two different fabrication protocols (model and model-less). Methods: Forty anterior all-ceramic restorations (20 lithium disilicate, 20 zirconia) were fabricated using a CEREC Bluecam scanner. Two different fabrication methods were used: a fully digital approach and a printed model. Completed crowns were cemented and the marginal gap was evaluated using micro-CT. Each specimen was analyzed in sagittal and trans-axial orientations, allowing a 360° evaluation of the vertical and horizontal fit. Results: Vertical measurements in the lingual, distal and mesial views had an estimated marginal gap from 101.9 to 133.9 microns for E-max crowns and 126.4 to 165.4 microns for zirconia. No significant differences were found between model and model-less techniques. Conclusion: Lithium disilicate restorations exhibited a more accurate and consistent marginal adaptation when compared to zirconia crowns. No statistically significant differences were observed when comparing model and model-less approaches.
NASA Astrophysics Data System (ADS)
Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido; Andersson-Engels, Stefan
2008-02-01
One of the most challenging problems in medical imaging is to "see" a tumour embedded in tissue, a turbid medium, by using fluorescent probes for tumour labeling. Despite the efforts made in recent years, this problem has not been fully solved, due to the non-linear nature of the inverse problem and the convergence failures of many optimization techniques. This paper describes a robust solution of the inverse problem, based on data fitting and image fine-tuning techniques. As a forward solver, the coupled radiative transfer equation and diffusion approximation model is proposed and solved via a finite element method, enhanced with adaptive multi-grids for faster and more accurate convergence. A database is constructed by applying the forward model to virtual tumours with known geometry, and thus known fluorophore distribution, embedded in simulated tissues. The fitting procedure produces the best match between the real and virtual data, and thus provides the initial estimate of the fluorophore distribution. Using this information, the coupled radiative transfer equation and diffusion approximation model has the required initial values for a computationally reasonable and successful convergence during the image fine-tuning stage.
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
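A minimal sketch of the described procedure: a linear fit to log-transformed data supplies the nominal estimates, and the Taylor-series linearization yields a least-squares correction applied iteratively until a convergence criterion is met (the Gauss-Newton scheme):

```python
import numpy as np

rng = np.random.default_rng(11)

# Decay-type data y = A * exp(-b t) + noise (values illustrative)
t = np.linspace(0.0, 10.0, 40)
A_true, b_true = 5.0, 0.35
y = A_true * np.exp(-b_true * t) + 0.05 * rng.standard_normal(t.size)

# Initial nominal estimates from a linear fit to log(y) (valid while y > 0)
slope, lnA0 = np.polyfit(t, np.log(np.clip(y, 1e-6, None)), 1)
theta = np.array([np.exp(lnA0), -slope])

# Gauss-Newton: linearize the model by a first-order Taylor expansion and
# apply the least-squares correction until the step is negligible
for _ in range(50):
    A, b = theta
    f = A * np.exp(-b * t)
    J = np.column_stack([np.exp(-b * t), -A * t * np.exp(-b * t)])  # df/dA, df/db
    delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)               # correction
    theta += delta
    if np.linalg.norm(delta) < 1e-10:                               # criterion
        break
print(f"A = {theta[0]:.3f} (true {A_true}), b = {theta[1]:.3f} (true {b_true})")
```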
NASA Astrophysics Data System (ADS)
Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi
2010-03-01
In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable, and its accuracy is of special interest. However, the automatic segmentation of these organs is a challenging task owing to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, so considering the image information alone often leads to poor segmentation. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, the segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to its high-level texture features, extracted using the overcomplete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters for novel images. We demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.
NASA Astrophysics Data System (ADS)
Chen, Lei; Zhang, Liguo; Tang, Yixian; Zhang, Hong
2018-04-01
The principle of the exponent Knothe model is introduced in detail, and the variation of mining subsidence with time is analysed based on the formulas for subsidence, subsidence velocity and subsidence acceleration. Five scenes of radar images and six levelling measurements were collected to extract ground deformation characteristics in a coal mining area for this study. The unknown parameters of the exponent Knothe model were then estimated by combining levelling data with the line-of-sight deformation information obtained by the InSAR technique. Comparing the fitting and prediction results obtained from InSAR combined with levelling against those obtained from levelling alone showed that the accuracy of the combined approach was clearly better. The InSAR measurements can therefore significantly improve the fitting and prediction accuracy of the exponent Knothe model.
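For orientation, a sketch of fitting a simple Knothe-type time function to combined levelling and InSAR epochs; the simple single-exponent form and all observation values are illustrative stand-ins for the paper's exponent Knothe model and data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Knothe-type time function: subsidence w(t) = w0 * (1 - exp(-c t)). The
# exponent Knothe model generalizes this; the simple form is used here.
def knothe(t, w0, c):
    return w0 * (1.0 - np.exp(-c * t))

rng = np.random.default_rng(17)
t_lev = np.array([30, 90, 180, 270, 360, 450], float)     # levelling epochs (days)
t_sar = np.array([12, 47, 82, 117, 152], float)           # InSAR acquisitions
t_all = np.concatenate([t_lev, t_sar])
obs = knothe(t_all, 1.2, 0.008) + 0.02 * rng.standard_normal(t_all.size)

p, _ = curve_fit(knothe, t_all, obs, p0=[1.0, 0.01])
print(f"w0 = {p[0]:.3f} m, c = {p[1]:.4f} /day")
# Adding the InSAR epochs densifies the early-time sampling, which is what
# improves the fit relative to levelling alone.
```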
Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin
NASA Astrophysics Data System (ADS)
He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu
2017-06-01
This work investigates a geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta of a tin sample with a V-shaped groove etched in its free surface are collected by a soft-recovery technique. The produced fragments are then automatically detected with fine post-shot analysis techniques, including X-ray micro-tomography and an improved watershed method. To characterize the fragment size distributions, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which yields a linear combination of exponential distributions. The experimental fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison shows that the proposed model provides a far more reasonable fit for laser shock-loaded tin.
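A toy sketch of fitting a linear combination of exponentials to a fragment-size complementary cumulative distribution, as the Poisson-mixture model predicts; the sample is synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(12)

# Synthetic fragment sizes drawn from a two-component exponential mixture
s = np.concatenate([rng.exponential(0.05, 3000), rng.exponential(0.25, 1000)])

# Complementary cumulative distribution N(>s) from the sorted sample
s_sorted = np.sort(s)
ccdf = 1.0 - np.arange(1, s.size + 1) / s.size

def mix_ccdf(x, w, l1, l2):
    # Linear combination of exponential distributions
    return w * np.exp(-x / l1) + (1.0 - w) * np.exp(-x / l2)

p, _ = curve_fit(mix_ccdf, s_sorted[:-1], ccdf[:-1],
                 p0=[0.5, 0.1, 0.3], bounds=([0, 1e-4, 1e-4], [1, 1, 1]))
print(f"weight = {p[0]:.2f}, scales = {p[1]:.3f}, {p[2]:.3f}")
```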
ERIC Educational Resources Information Center
Kim, Young-Mi; Neff, James Alan
2010-01-01
A model incorporating the direct and indirect effects of parental monitoring on adolescent alcohol use was evaluated by applying structural equation modeling (SEM) techniques to data on 4,765 tenth-graders in the 2001 Monitoring the Future Study. Analyses indicated good fit of hypothesized measurement and structural models. Analyses supported both…
McCann, Cooper; Repasky, Kevin S.; Morin, Mikindra; ...
2017-05-23
Hyperspectral image analysis has benefited from an array of methods that take advantage of the increased spectral depth compared to multispectral sensors; however, the focus of these developments has been on supervised classification methods. Lack of a priori knowledge regarding land cover characteristics can make unsupervised classification methods preferable under certain circumstances. An unsupervised classification technique is presented in this paper that utilizes physically relevant basis functions to model the reflectance spectra. These fit parameters used to generate the basis functions allow clustering based on spectral characteristics rather than spectral channels and provide both noise and data reduction. Histogram splitting of the fit parameters is then used as a means of producing an unsupervised classification. Unlike current unsupervised classification techniques that rely primarily on Euclidian distance measures to determine similarity, the unsupervised classification technique uses the natural splitting of the fit parameters associated with the basis functions creating clusters that are similar in terms of physical parameters. The data set used in this work utilizes the publicly available data collected at Indian Pines, Indiana. This data set provides reference data allowing for comparisons of the efficacy of different unsupervised data analysis. The unsupervised histogram splitting technique presented in this paper is shown to be better than the standard unsupervised ISODATA clustering technique with an overall accuracy of 34.3/19.0% before merging and 40.9/39.2% after merging. Finally, this improvement is also seen as an improvement of kappa before/after merging of 24.8/30.5 for the histogram splitting technique compared to 15.8/28.5 for ISODATA.
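A minimal sketch of the histogram-splitting step: class boundaries are placed at the minima between modes of a fit-parameter histogram, so clustering follows physical parameters rather than Euclidean distance in band space; the parameter values are synthetic:

```python
import numpy as np
from scipy.signal import argrelmin

rng = np.random.default_rng(15)

# One fit parameter per pixel (synthetic bimodal stand-in)
param = np.concatenate([rng.normal(0.2, 0.05, 5000), rng.normal(0.7, 0.08, 5000)])

counts, edges = np.histogram(param, bins=64)
smooth = np.convolve(counts, np.ones(5) / 5.0, mode="same")   # suppress noise
minima = argrelmin(smooth)[0]                                  # candidate splits
splits = edges[minima + 1]
labels = np.digitize(param, splits)                            # class per pixel
print("split points:", np.round(splits, 3), "class sizes:", np.bincount(labels))
```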
Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.
2018-03-01
Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physico-chemical processes that affect fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique based on obtaining an equivalent circuit that not only fits the experimental spectra correctly, but whose elements have a mechanistic physical meaning. To obtain such a circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra, and a two-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted-parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
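For orientation, a sketch of fitting one candidate equivalent circuit (a generic Randles-type circuit with a constant-phase element, not necessarily one of the paper's 12 models) to complex impedance data by least squares; all parameter values are invented:

```python
import numpy as np
from scipy.optimize import least_squares

# Generic Randles-type circuit: ohmic resistance R0 in series with the
# parallel combination of a charge-transfer resistance Rct and a CPE.
def z_model(f, R0, Rct, Q, n):
    w = 2.0 * np.pi * f
    z_cpe = 1.0 / (Q * (1j * w) ** n)            # constant-phase element
    return R0 + (Rct * z_cpe) / (Rct + z_cpe)

rng = np.random.default_rng(13)
f = np.logspace(-1, 4, 40)                        # frequency sweep (Hz)
z_obs = z_model(f, 0.02, 0.15, 0.8, 0.9)
z_obs += 0.001 * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))

def resid(p):
    # Stack real and imaginary residuals so both parts constrain the fit
    z = z_model(f, *p)
    return np.concatenate([z.real - z_obs.real, z.imag - z_obs.imag])

fit = least_squares(resid, x0=[0.01, 0.1, 1.0, 0.8],
                    bounds=([0, 0, 0, 0], [1, 1, 10, 1]))
print("R0, Rct, Q, n =", np.round(fit.x, 4))
```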
Structure, Nanomechanics and Dynamics of Dispersed Surfactant-Free Clay Nanocomposite Films
NASA Astrophysics Data System (ADS)
Zhang, Xiao; Zhao, Jing; Snyder, Chad; Karim, Alamgir; National Institute of Standards and Technology Collaboration
Natural montmorillonite particles were dispersed as tactoids in thin films of polycaprolactone (PCL) through a flow-coating technique assisted by ultra-sonication. Wide-angle X-ray scattering (WAXS), grazing-incidence wide-angle X-ray scattering (GI-WAXS), and transmission electron microscopy (TEM) were used to confirm the level of dispersion. These characterization techniques were used in conjunction with nanomechanical measurements via strain-induced buckling instability for modulus measurements (SIEBIMM), a high-throughput technique for characterizing thin-film mechanical properties. The linear strengthening trend of the elastic modulus enhancements was fitted with the Halpin-Tsai (HT) model, which correlates nanoparticle geometric effects with mechanical behavior based on continuum theories. The overall aspect ratio of the dispersed tactoids obtained through HT model fitting is in reasonable agreement with digital electron microscope image analysis. Moreover, the glass transition behavior of the composites was characterized using broadband dielectric relaxation spectroscopy. The segmental relaxation behavior indicates that the associated mechanical property changes are due to the continuum filler effect rather than an interfacial confinement effect.
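A minimal sketch of extracting an effective tactoid aspect ratio by fitting the Halpin-Tsai relation to relative moduli versus filler volume fraction; the filler-to-matrix modulus ratio and data points are invented assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

# Halpin-Tsai model for platelet-filled composites: relative modulus E_c/E_m
# as a function of filler volume fraction phi, with shape factor
# zeta = 2 * (aspect ratio) for platelets.
def halpin_tsai(phi, aspect, Ef_over_Em=100.0):
    zeta = 2.0 * aspect
    eta = (Ef_over_Em - 1.0) / (Ef_over_Em + zeta)
    return (1.0 + zeta * eta * phi) / (1.0 - eta * phi)

# Hypothetical SIEBIMM-derived relative moduli at several clay loadings
phi = np.array([0.00, 0.01, 0.02, 0.03, 0.05])
rel_E = np.array([1.00, 1.18, 1.37, 1.55, 1.92])

(aspect_fit,), _ = curve_fit(lambda p, a: halpin_tsai(p, a), phi, rel_E, p0=[20.0])
print(f"effective tactoid aspect ratio ~ {aspect_fit:.1f}")
```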
ERIC Educational Resources Information Center
Huang, Francis L.; Cornell, Dewey G.
2016-01-01
Advances in multilevel modeling techniques now make it possible to investigate the psychometric properties of instruments using clustered data. Factor models that overlook the clustering effect can lead to underestimated standard errors, incorrect parameter estimates, and model fit indices. In addition, factor structures may differ depending on…
Group Comparisons in the Presence of Missing Data Using Latent Variable Modeling Techniques
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2010-01-01
A latent variable modeling approach for examining population similarities and differences in observed variable relationship and mean indexes in incomplete data sets is discussed. The method is based on the full information maximum likelihood procedure of model fitting and parameter estimation. The procedure can be employed to test group identities…
Short-term forecasts gain in accuracy. [Regression technique using 'Box-Jenkins' analysis]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Box-Jenkins time-series models offer accuracy for short-term forecasts that compares with large-scale macroeconomic forecasts. Utilities need to be able to forecast peak demand in order to plan their generating, transmitting, and distribution systems. This new method differs from conventional models by not assuming specific data patterns, but by fitting available data into a tentative pattern on the basis of auto-correlations. Three types of models (autoregressive, moving average, or mixed autoregressive/moving average) can be used according to which provides the most appropriate combination of autocorrelations and related derivatives. Major steps in choosing a model are identifying potential models, estimating the parameters of the problem, and running a diagnostic check to see if the model fits the parameters. The Box-Jenkins technique is well suited for seasonal patterns, which makes it possible to have as short as hourly forecasts of load demand. With accuracy up to two years, the method will allow electricity price-elasticity forecasting that can be applied to facility planning and rate design.
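A minimal sketch of the Box-Jenkins cycle on synthetic hourly load data with a 24-hour seasonal pattern, using the statsmodels ARIMA implementation; the tentative model order would normally be identified from the autocorrelations:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(14)

# Synthetic hourly load with a daily (24 h) seasonal pattern plus AR(1) noise
n = 24 * 60
hours = np.arange(n)
load = 100 + 15 * np.sin(2 * np.pi * hours / 24)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.7 * noise[i - 1] + rng.standard_normal()
load = load + noise

# Estimate a tentative seasonal ARIMA, then diagnose via information criteria
model = ARIMA(load, order=(1, 0, 0), seasonal_order=(1, 0, 0, 24)).fit()
print("AIC:", round(model.aic, 1))

forecast = model.forecast(steps=24)          # next-day hourly forecast
print("forecast peak demand:", forecast.max().round(1))
```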
Bayesian investigation of isochrone consistency using the old open cluster NGC 188
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hills, Shane; Courteau, Stéphane; Von Hippel, Ted
2015-03-01
This paper provides a detailed comparison of the differences in parameters derived for a star cluster from its color–magnitude diagrams (CMDs) depending on the filters and models used. We examine the consistency and reliability of fitting three widely used stellar evolution models to 15 combinations of optical and near-IR photometry for the old open cluster NGC 188. The optical filter response curves match those of theoretical systems and are thus not the source of fit inconsistencies. NGC 188 is ideally suited to this study thanks to a wide variety of high-quality photometry and available proper motions and radial velocities that enable us to remove non-cluster members and many binaries. Our Bayesian fitting technique yields inferred values of age, metallicity, distance modulus, and absorption as a function of the photometric band combinations and stellar models. We show that the historically favored three-band combinations of UBV and VRI can be meaningfully inconsistent with each other and with longer baseline data sets such as UBVRIJHK_S. Differences among model sets can also be substantial. For instance, fitting Yi et al. (2001) and Dotter et al. (2008) models to UBVRIJHK_S photometry for NGC 188 yields the following cluster parameters: age = (5.78 ± 0.03, 6.45 ± 0.04) Gyr, [Fe/H] = (+0.125 ± 0.003, −0.077 ± 0.003) dex, (m−M)_V = (11.441 ± 0.007, 11.525 ± 0.005) mag, and A_V = (0.162 ± 0.003, 0.236 ± 0.003) mag, respectively. Within the formal fitting errors, these two fits are substantially and statistically different. Such differences among fits using different filters and models are a cautionary tale regarding our current ability to fit star cluster CMDs. Additional modeling of this kind, with more models and star clusters, and future Gaia parallaxes are critical for isolating and quantifying the most relevant uncertainties in stellar evolutionary models.
Reverse engineering the gap gene network of Drosophila melanogaster.
Perkins, Theodore J; Jaeger, Johannes; Reinitz, John; Glass, Leon
2006-05-01
A fundamental problem in functional genomics is to determine the structure and dynamics of genetic networks based on expression data. We describe a new strategy for solving this problem and apply it to recently published data on early Drosophila melanogaster development. Our method is orders of magnitude faster than current fitting methods and allows us to fit different types of rules for expressing regulatory relationships. Specifically, we use our approach to fit models using a smooth nonlinear formalism for modeling gene regulation (gene circuits) as well as models using logical rules based on activation and repression thresholds for transcription factors. Our technique also allows us to infer regulatory relationships de novo or to test network structures suggested by the literature. We fit a series of models to test several outstanding questions about gap gene regulation, including regulation of and by hunchback and the role of autoactivation. Based on our modeling results and validation against the experimental literature, we propose a revised network structure for the gap gene system. Interestingly, some relationships in standard textbook models of gap gene regulation appear to be unnecessary for or even inconsistent with the details of gap gene expression during wild-type development.
Analysis of Learning Curve Fitting Techniques.
1987-09-01
[Only OCR fragments of this report survive, drawn from its reference list and methods text: random errors are assumed to be normally distributed when using ordinary least squares (Johnston); for a more detailed explanation of the ordinary least-squares technique, see Neter, John, et al., Applied Linear Regression Models, Homewood, IL: Irwin, and the SAS User's Guide: Basics, Version 5 Edition.]
Use of the Box and Jenkins time series technique in traffic forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nihan, N.L.; Holmesland, K.O.
The use of recently developed time series techniques for short-term traffic volume forecasting is examined. A data set containing monthly volumes on a freeway segment for 1968-76 is used to fit a time series model. The resultant model is used to forecast volumes for 1977, and the forecast volumes are then compared with actual volumes in 1977. Time series techniques can be used to develop highly accurate and inexpensive short-term forecasts. The feasibility of using these models to evaluate the effects of policy changes or other outside impacts is considered. (1 diagram, 1 map, 14 references, 2 tables)
Gross, David A.; Snapp, Erik L.; Silver, David L.
2010-01-01
Fat storage-Inducing Transmembrane proteins 1 & 2 (FIT1/FITM1 and FIT2/FITM2) belong to a unique family of evolutionarily conserved proteins localized to the endoplasmic reticulum that are involved in triglyceride lipid droplet formation. FIT proteins have been shown to mediate the partitioning of cellular triglyceride into lipid droplets, but not triglyceride biosynthesis. FIT proteins do not share primary sequence homology with known proteins, and no structural information is available to inform on the mechanism by which FIT proteins function. Here, we present experimentally solved topological models for FIT1 and FIT2 based on N-glycosylation site mapping and indirect immunofluorescence techniques. These methods indicate that both proteins have six transmembrane domains, with both the N- and C-termini localized to the cytosol. Utilizing this model for structure-function analysis, we identified and characterized a gain-of-function mutant of FIT2 (FLL(157-9)AAA) in transmembrane domain 4 that markedly augmented the total number and mean size of lipid droplets. Using limited trypsin proteolysis, we determined that the FLL(157-9)AAA mutant has enhanced trypsin cleavage at K86 relative to wild-type FIT2, indicating a conformational change. Taken together, these studies indicate that FIT2 is a six-transmembrane-domain protein whose conformation likely regulates its activity in mediating lipid droplet formation. PMID:20520733
Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds
NASA Astrophysics Data System (ADS)
Abdo, Mohammad Gamal Mohammad Mostafa
This thesis develops robust reduced order modeling (ROM) techniques to achieve the efficiency needed to render the use of high-fidelity tools feasible for routine engineering analyses. Markedly different from state-of-the-art ROM techniques, our work focuses only on techniques that can quantify the credibility of the reduction, measured by upper bounds on the reduction errors over the envisaged range of ROM application. Our objective is two-fold. First, further developments of ROM techniques are proposed for cases in which conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models, such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM predictions over the full range of model application conditions are upper-bounded in a probabilistic sense with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction because it offers a rigorous approach by which reduction errors can be quantified via upper bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bounded by the error estimated from the fitting residual. Dimensionality reduction techniques employ a different philosophy, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower-dimensional subspaces, referred to as "active subspaces", which are selected to capture a user-defined portion of the snapshot variations. Once these are determined, applying the ROM involves constraining the variables to the active subspaces. In doing so, the contribution from the variables' discarded components can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators; the thesis extends the theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace, determined using a given set of snapshots generated either with the full high-fidelity model or with lower-fidelity models, can be assessed. This provides insight to the analyst on the type of snapshots required to reach a reduction that satisfies user-defined tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus is on reducing the effective dimensionality of the various data streams, such as the cross-section data and the neutron flux.
The developed methods will be applied to representative assembly-level calculations, where the sizes of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation. (Abstract shortened by ProQuest.)
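The snapshot-projection idea described above can be illustrated with a toy Python example (not the thesis's algorithm): build an active subspace by SVD of randomized snapshots, then check the discarded-component error on fresh snapshots, emulating the probabilistic error check.

```python
# Minimal sketch: active subspace from randomized snapshots via SVD,
# with a reduction-error check on snapshots withheld from training.
import numpy as np

rng = np.random.default_rng(0)

def model(params):
    """Stand-in for an expensive high-fidelity model (assumed)."""
    A = np.outer(np.arange(1, 51), np.ones(5))  # smooth low-rank structure
    return np.tanh(A @ params)

# Training snapshots spanning the parameter range.
train = np.column_stack([model(rng.normal(size=5)) for _ in range(40)])
U, s, _ = np.linalg.svd(train, full_matrices=False)
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999) + 1
Ur = U[:, :r]  # active subspace basis

# Error check on snapshots not used in training.
errs = []
for _ in range(20):
    y = model(rng.normal(size=5))
    errs.append(np.linalg.norm(y - Ur @ (Ur.T @ y)) / np.linalg.norm(y))
print(f"rank {r}, max relative reduction error on test snapshots: {max(errs):.2e}")
```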
Feasibility of Clinician-Facilitated Three-Dimensional Printing of Synthetic Cranioplasty Flaps.
Panesar, Sandip S; Belo, Joao Tiago A; D'Souza, Rhett N
2018-05-01
Integration of three-dimensional (3D) printing and stereolithography into clinical practice is in its nascence, and the concepts may be esoteric to the practicing neurosurgeon. Currently, creation of 3D-printed implants involves recruitment of offsite third parties. We explored a range of 3D scanning and stereolithographic techniques to create patient-specific synthetic implants using an onsite, clinician-facilitated approach. We simulated bilateral craniectomies in a single cadaveric specimen. We devised 3 methods of creating stereolithographically viable virtual models from the removed bone. First, we used preoperative and postoperative computed tomography scanner-derived bony window models from which the flap was extracted. Second, we used an entry-level 3D light scanner to scan and render models of the individual bone pieces. Third, we used an arm-mounted 3D laser scanner to create virtual models using a real-time approach. Flaps were printed, in an ultraviolet-cured polymer, from the computed tomography scanner and laser scanner models only; the light scanner did not produce virtual models suitable for printing. The computed tomography scanner-derived models required extensive postfabrication modification to fit the existing defects, whereas the laser scanner models assumed good fit within the defects without any modification. The methods presented varying levels of complexity in acquisition and model rendering, and each technique required hardware at price points varying from $0 to approximately $100,000. The laser scanner models produced the best quality parts, with near-perfect fit to the original defects. Potential neurosurgical applications of this technology are discussed. Copyright © 2018 Elsevier Inc. All rights reserved.
Garabedian, Stephen P.
1986-01-01
A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity, from about 0.05 to 44 feet squared per second, and in leakance, from about 2.2×10⁻⁹ to 6.0×10⁻⁸ feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values. The model was most sensitive to changes in recharge and, in some areas, to changes in transmissivity, particularly near the spring discharge area from Milner Dam to King Hill.
NASA Astrophysics Data System (ADS)
Westfeld, Patrick; Maas, Hans-Gerd; Bringmann, Oliver; Gröllich, Daniel; Schmauder, Martin
2013-11-01
The paper shows techniques for the determination of structured motion parameters from range camera image sequences. The core contribution of the work presented here is the development of an integrated least squares 3D tracking approach based on amplitude and range image sequences to calculate dense 3D motion vector fields. Geometric primitives of a human body model are fitted to time series of range camera point clouds using these vector fields as additional information. Body poses and motion information for individual body parts are derived from the model fit. On the basis of these pose and motion parameters, critical body postures are detected. The primary aim of the study is to automate ergonomic studies for risk assessments regulated by law, identifying harmful movements and awkward body postures in a workplace.
3D Modeling Techniques for Print and Digital Media
NASA Astrophysics Data System (ADS)
Stephens, Megan Ashley
In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.
NASA Technical Reports Server (NTRS)
McIlraith, Sheila; Biswas, Gautam; Clancy, Dan; Gupta, Vineet
2005-01-01
This paper reports on an ongoing project to investigate techniques to diagnose complex dynamical systems that are modeled as hybrid systems. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial, or full failure of component devices. We cast the diagnosis problem as a model selection problem. To reduce the space of potential models under consideration, we exploit techniques from qualitative reasoning to conjecture an initial set of qualitative candidate diagnoses, which induce a smaller set of models. We refine these diagnoses using parameter estimation and model fitting techniques. As a motivating case study, we have examined the problem of diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.
Rai, Rathika; Kumar, S Arun; Prabhu, R; Govindan, Ranjani Thillai; Tanveer, Faiz Mohamed
2017-01-01
Accuracy of fit of a cast metal restoration has always been one of the primary factors determining the success of the restoration. A well-fitting restoration needs to be accurate both along its margin and with regard to its internal surface. The aim of this study was to compare the marginal fit of metal ceramic crowns obtained from a conventional inlay casting wax pattern made on a conventional impression with that of metal ceramic crowns obtained by computer-aided design and computer-aided manufacturing (CAD/CAM) using direct and indirect optical scanning. This in vitro study used preformed, custom-made stainless steel models with a former assembly resembling prepared tooth surfaces of standardized dimensions, and comprised three groups: the first group included ten samples of metal ceramic crowns fabricated with the conventional technique, the second group included CAD/CAM direct metal laser sintering (DMLS) crowns produced using indirect scanning, and the third group included DMLS crowns fabricated by direct scanning of the stainless steel model. The vertical marginal gap and the internal gap were evaluated with a stereomicroscope (Zoomstar 4); the post hoc Tukey test was used for statistical analysis, and one-way analysis of variance was used to compare the mean values. Metal ceramic crowns obtained from direct optical scanning showed the smallest marginal and internal gaps when compared with the castings obtained from inlay casting wax and indirect optical scanning. Indirect and direct optical scanning both yielded results within the clinically acceptable range.
Khan, I.; Hawlader, Mohammad Delwer Hossain; Arifeen, Shams El; Moore, Sophie; Hills, Andrew P.; Wells, Jonathan C.; Persson, Lars-Åke; Kabir, Iqbal
2012-01-01
The aim of this study was to investigate the validity of the Tanita TBF 300A leg-to-leg bioimpedance analyzer for estimating fat-free mass (FFM) in Bangladeshi children aged 4-10 years and to develop novel prediction equations for use in this population, using deuterium dilution as the reference method. Two hundred Bangladeshi children were enrolled. The isotope dilution technique with deuterium oxide was used for estimation of total body water (TBW). FFM estimated by the Tanita analyzer was compared with results of the deuterium oxide dilution technique. Novel prediction equations for estimating FFM were created using linear regression models, fitting the child's height and impedance as predictors. There was a significant difference in FFM and percentage of body fat (BF%) between methods (p<0.01), with the Tanita analyzer underestimating TBW in boys (p=0.001) and underestimating BF% in girls (p<0.001). A basic linear regression model with height and impedance explained 83% of the variance in FFM estimated by the deuterium oxide dilution technique. The best-fit equation to predict FFM from linear regression modelling was achieved by adding weight, sex, and age to the basic model, bringing the adjusted R2 to 89% (standard error=0.90, p<0.001). These data suggest that the Tanita analyzer may be a valid field-assessment technique in Bangladeshi children when using population-specific prediction equations, such as the ones developed here. PMID:23082630
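A minimal sketch of deriving such a population-specific prediction equation by ordinary least squares; all data below are synthetic stand-ins for the study's measurements, and the adjusted R² computation mirrors the statistic the abstract reports.

```python
# Hedged sketch of building an FFM prediction equation by linear regression;
# every array is a hypothetical stand-in for the study's measurements.
import numpy as np

n = 200
rng = np.random.default_rng(1)
height = rng.normal(120, 10, n)      # cm
impedance = rng.normal(700, 60, n)   # ohm
weight = rng.normal(22, 4, n)        # kg
sex = rng.integers(0, 2, n)          # 0 = girl, 1 = boy
age = rng.uniform(4, 10, n)          # years
ffm = (0.7 * weight + 0.05 * height - 0.005 * impedance
       + 0.5 * sex + 0.2 * age + rng.normal(0, 0.9, n))  # reference FFM, kg

# Design matrix for the "best-fit" model: height, impedance, weight, sex, age.
X = np.column_stack([np.ones(n), height, impedance, weight, sex, age])
beta, *_ = np.linalg.lstsq(X, ffm, rcond=None)

pred = X @ beta
ss_res = np.sum((ffm - pred) ** 2)
ss_tot = np.sum((ffm - ffm.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
r2_adj = 1 - (1 - r2) * (n - 1) / (n - X.shape[1])
print(f"adjusted R^2 = {r2_adj:.2f}")
print("intercept, height, impedance, weight, sex, age:", np.round(beta, 3))
```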
Monte Carlo analysis of neutron diffuse scattering data
NASA Astrophysics Data System (ADS)
Goossens, D. J.; Heerdegen, A. P.; Welberry, T. R.; Gutmann, M. J.
2006-11-01
This paper presents a discussion of a technique developed for the analysis of neutron diffuse scattering data. The technique involves processing the data into reciprocal space sections and modelling the diffuse scattering in these sections. A Monte Carlo modelling approach is used in which the crystal energy is a function of interatomic distances between molecules and torsional rotations within molecules. The parameters of the model are the spring constants governing the interactions, as they determine the correlations which evolve when the model crystal structure is relaxed at finite temperature. When the model crystal has reached equilibrium its diffraction pattern is calculated and a χ2 goodness-of-fit test between observed and calculated data slices is performed. This allows a least-squares refinement of the fit parameters and so automated refinement can proceed. The first application of this methodology to neutron, rather than X-ray, data is outlined. The sample studied was deuterated benzil, d-benzil, C14D10O2, for which data was collected using time-of-flight Laue diffraction on SXD at ISIS.
Update to core reporting practices in structural equation modeling.
Schreiber, James B
This paper is a technical update to "Core Reporting Practices in Structural Equation Modeling." As such, the content covered includes sample size, missing data, specification and identification of models, estimation method choices, fit and residual concerns, nested, alternative, and equivalent models, and unique issues within the SEM family of techniques. Copyright © 2016 Elsevier Inc. All rights reserved.
Stable cycling in discrete-time genetic models.
Hastings, A
1981-11-01
Examples of stable cycling are discussed for two-locus, two-allele, deterministic, discrete-time models with constant fitnesses. The cases that cycle were found by using numerical techniques to search for stable Hopf bifurcations. One consequence of the results is that apparent cases of directional selection may be due to stable cycling.
Diagnostic Procedures for Detecting Nonlinear Relationships between Latent Variables
ERIC Educational Resources Information Center
Bauer, Daniel J.; Baldasaro, Ruth E.; Gottfredson, Nisha C.
2012-01-01
Structural equation models are commonly used to estimate relationships between latent variables. Almost universally, the fitted models specify that these relationships are linear in form. This assumption is rarely checked empirically, largely for lack of appropriate diagnostic techniques. This article presents and evaluates two procedures that can…
Németh, Károly; Chapman, Karena W; Balasubramanian, Mahalingam; Shyam, Badri; Chupas, Peter J; Heald, Steve M; Newville, Matt; Klingler, Robert J; Winans, Randall E; Almer, Jonathan D; Sandi, Giselle; Srajer, George
2012-02-21
An efficient implementation of simultaneous reverse Monte Carlo (RMC) modeling of pair distribution function (PDF) and EXAFS spectra is reported. This implementation is an extension of the technique established by Krayzman et al. [J. Appl. Cryst. 42, 867 (2009)] in the sense that it enables simultaneous real-space fitting of X-ray PDF, with accurate treatment of the Q-dependence of the scattering cross sections, and of EXAFS with multiple photoelectron scattering included. The extension also allows for atom swaps during EXAFS fits, thereby enabling modeling of the effects of chemical disorder, such as migrating atoms and vacancies. Significant acceleration of the EXAFS computation is achieved via discretization of effective path lengths and subsequent reduction of operation counts. The validity and accuracy of the approach is illustrated on small atomic clusters and on 5500-9000 atom models of bcc-Fe and α-Fe2O3. The accuracy gains of combined simultaneous EXAFS and PDF fits are pointed out against PDF-only and EXAFS-only RMC fits. Our modeling approach may be widely used in PDF- and EXAFS-based investigations of disordered materials. © 2012 American Institute of Physics
Improved Model Fitting for the Empirical Green's Function Approach Using Hierarchical Models
NASA Astrophysics Data System (ADS)
Van Houtte, Chris; Denolle, Marine
2018-04-01
Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One potential origin of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study examines a variety of model-fitting methods and shows that the choice of method can explain some of the discrepancy. The preferred method is Bayesian hierarchical modeling, which can reduce bias, better quantify uncertainties, and allow additional effects to be resolved. Two case study earthquakes are examined, the 2016 M_W 7.1 Kumamoto, Japan earthquake and an M_W 5.3 aftershock of the 2016 M_W 7.8 Kaikōura earthquake. By using hierarchical models, the variation of the corner frequency, f_c, and the falloff rate, n, across the focal sphere can be retrieved without overfitting the data. Other methods commonly used to calculate corner frequencies may give substantial biases. In particular, if f_c were calculated for the Kumamoto earthquake using an ω-square model, the obtained f_c could be twice as large as a realistic value.
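For contrast with the hierarchical approach the authors prefer, here is a minimal non-hierarchical sketch of fitting a generalized source spectrum S(f) = Ω0/(1 + (f/f_c)^n); the data are synthetic. Re-running the fit with n fixed at 2 (the ω-square assumption) illustrates the kind of f_c bias the abstract warns about.

```python
# Minimal (non-hierarchical) sketch: fitting a generalized Brune-type
# source spectrum to a synthetic amplitude spectrum.
import numpy as np
from scipy.optimize import curve_fit

def source_spectrum(f, omega0, fc, n):
    return omega0 / (1.0 + (f / fc) ** n)

f = np.logspace(-1, 1.5, 120)  # frequency, Hz
true = source_spectrum(f, 1.0, 0.8, 2.4)
obs = true * np.exp(0.1 * np.random.default_rng(2).normal(size=f.size))

# Fitting in log space stabilizes the wide dynamic range of the spectrum.
popt, _ = curve_fit(lambda f, o, fc, n: np.log(source_spectrum(f, o, fc, n)),
                    f, np.log(obs), p0=[1.0, 1.0, 2.0])
print(f"Omega0={popt[0]:.2f}, fc={popt[1]:.2f} Hz, n={popt[2]:.2f}")
```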
A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models
ERIC Educational Resources Information Center
Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.
2013-01-01
Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…
Fit of interim crowns fabricated using photopolymer-jetting 3D printing.
Mai, Hang-Nga; Lee, Kyu-Bok; Lee, Du-Hyeong
2017-08-01
The fit of interim crowns fabricated using 3-dimensional (3D) printing is unknown. The purpose of this in vitro study was to evaluate the fit of interim crowns fabricated using photopolymer-jetting 3D printing and to compare it with that of milling and compression molding methods. Twelve study models were fabricated by making an impression of a metal master model of the mandibular first molar. On each study model, interim crowns (N=36) were fabricated using compression molding (molding group, n=12), milling (milling group, n=12), and 3D polymer-jetting (polymer-jetting group, n=12) methods. The crowns were prepared as follows: molding group, overimpression technique; milling group, a 5-axis dental milling machine; and polymer-jetting group, a 3D printer. The fit of the interim crowns was evaluated in the proximal, marginal, internal axial, and internal occlusal regions by using the image-superimposition and silicone-replica techniques. The Mann-Whitney U and Kruskal-Wallis tests were used to compare the results among groups (α=.05). Compared with the molding group, the milling and polymer-jetting groups showed more accurate results in the proximal and marginal regions (P<.001). In the axial regions, even though the mean discrepancy was smallest in the molding group, the data showed large deviations. In the occlusal region, the polymer-jetting group was the most accurate, and compared with the other groups, the milling group showed larger internal discrepancies (P<.001). Polymer-jet 3D printing significantly enhanced the fit of interim crowns, particularly in the occlusal region. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Tiffany, S. H.; Adams, W. M., Jr.
1984-01-01
A technique that employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. An optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective functions are presented; these separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer that is used is given; and results illustrating application of the method are presented.
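The linear/nonlinear separation described above can be sketched as follows, assuming a simple rational-function form Q(ik) ≈ A0 + A1(ik) + Σ_j B_j(ik)/(ik + b_j): for fixed lag roots b_j the coefficients follow from a linear least-squares solve, so the outer nonlinear search (here Nelder-Mead, a nongradient method) runs over the lag roots alone. The tabular data are synthetic stand-ins.

```python
# Hedged sketch of the linear/nonlinear split in a rational-function
# approximation of oscillatory aerodynamic data.
import numpy as np
from scipy.optimize import minimize

k = np.linspace(0.05, 1.5, 30)          # reduced frequencies
s = 1j * k
Q_data = 1.0 + 0.5 * s + s / (s + 0.3)  # assumed tabular aerodynamic data

def fit_error(log_b):
    b = np.exp(log_b)  # keep lag roots positive
    cols = [np.ones_like(s), s] + [s / (s + bj) for bj in b]
    A = np.column_stack(cols)
    # Stack real and imaginary parts for a real-valued least-squares solve.
    Ar = np.vstack([A.real, A.imag])
    yr = np.concatenate([Q_data.real, Q_data.imag])
    coef, *_ = np.linalg.lstsq(Ar, yr, rcond=None)
    return np.linalg.norm(Ar @ coef - yr)

res = minimize(fit_error, x0=np.log([0.2, 0.8]), method="Nelder-Mead")
print("optimized lag roots:", np.exp(res.x), "residual:", res.fun)
```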
Estimated landmark calibration of biomechanical models for inverse kinematics.
Trinler, Ursula; Baker, Richard
2018-01-01
Inverse kinematics is emerging as the optimal method in movement analysis to fit a multi-segment biomechanical model to experimental marker positions. A key part of this process is calibrating the model to the dimensions of the individual being analysed which requires scaling of the model, pose estimation and localisation of tracking markers within the relevant segment coordinate systems. The aim of this study is to propose a generic technique for this process and test a specific application to the OpenSim model Gait2392. Kinematic data from 10 healthy adult participants were captured in static position and normal walking. Results showed good average static and dynamic fitting errors between virtual and experimental markers of 0.8 cm and 0.9 cm, respectively. Highest fitting errors were found on the epicondyle (static), feet (static, dynamic) and on the thigh (dynamic). These result from inconsistencies between the model geometry and degrees of freedom and the anatomy and movement pattern of the individual participants. A particular limitation is in estimating anatomical landmarks from the bone meshes supplied with Gait2392 which do not conform with the bone morphology of the participants studied. Soft tissue artefact will also affect fitting the model to walking trials. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and experimental environment. The continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting, and model-free methods, but few have applied them in the field of LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. Experiments on a background correction simulation indicated that the spline interpolation method achieved the largest signal-to-background ratio (SBR) over polynomial fitting, Lorentz fitting, and the model-free method after background correction. All of these background correction methods achieve larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods improve the quantitative results for Cu over those acquired before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the linear correlation coefficients after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu compared with polynomial fitting, Lorentz fitting, and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
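A hedged sketch of spline-based background estimation in the spirit of the abstract (not the authors' exact implementation): anchor a cubic spline at smoothed local minima, subtract it, and compare SBR in a line-free window. All spectral values are synthetic.

```python
# Sketch of spline-based continuous-background correction for a spectrum.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelmin

wavelength = np.linspace(300, 600, 3000)
background = 50 * np.exp(-((wavelength - 450) / 150) ** 2)
lines = 200 * np.exp(-((wavelength - 324.7) / 0.3) ** 2)  # e.g. a Cu line
spectrum = background + lines + np.random.default_rng(3).normal(0, 1, wavelength.size)

# Anchor the spline at smoothed local minima, where no emission lines sit;
# include the endpoints so the spline spans the full range.
smooth = np.convolve(spectrum, np.ones(25) / 25, mode="same")
idx = np.r_[0, argrelmin(smooth, order=50)[0], wavelength.size - 1]
bg_est = CubicSpline(wavelength[idx], spectrum[idx])(wavelength)
corrected = spectrum - bg_est

window = (wavelength > 480) & (wavelength < 520)  # line-free region
sbr_before = spectrum.max() / spectrum[window].mean()
sbr_after = corrected.max() / np.abs(corrected[window]).mean()
print(f"SBR before ~ {sbr_before:.1f}, after ~ {sbr_after:.1f}")
```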
NASA Astrophysics Data System (ADS)
Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.
2015-11-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.
Extracting Damping Ratio from Dynamic Data and Numerical Solutions
NASA Technical Reports Server (NTRS)
Casiano, M. J.
2016-01-01
There are many ways to extract damping parameters from data or models. This Technical Memorandum provides a quick reference for some of the more common approaches used in dynamics analysis. Six methods of extracting damping from data are described: the half-power method, the logarithmic decrement (decay rate) method, an autocorrelation/power spectral density fitting method, a frequency response fitting method, a random decrement fitting method, and a newly developed half-quadratic gain method. Additionally, state-space models and finite element modeling tools, such as COMSOL Multiphysics (COMSOL), provide a theoretical damping via complex frequency. Each method has its advantages, which are briefly noted. There are likely many other advanced techniques for extracting damping within the operational modal analysis discipline, where the input excitation is unknown; however, the approaches discussed here are objective, direct, and can be implemented in a consistent manner.
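Of the six data-driven methods listed, the logarithmic decrement is the most compact to sketch; the free-decay signal below is synthetic and the peak-picking is deliberately simple.

```python
# Sketch of the logarithmic-decrement method: the damping ratio follows
# from the ratio of successive free-decay peaks.
import numpy as np
from scipy.signal import find_peaks

fn, zeta_true = 5.0, 0.02  # natural frequency (Hz) and damping ratio
t = np.linspace(0, 4, 8000)
wd = 2 * np.pi * fn * np.sqrt(1 - zeta_true**2)  # damped frequency
x = np.exp(-zeta_true * 2 * np.pi * fn * t) * np.cos(wd * t)

peaks, _ = find_peaks(x)
n_cycles = len(peaks) - 1
delta = np.log(x[peaks[0]] / x[peaks[-1]]) / n_cycles  # log decrement
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(f"estimated damping ratio: {zeta_est:.4f} (true {zeta_true})")
```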
NASA Technical Reports Server (NTRS)
Johnson, T. J.; Harding, A. K.; Venter, C.
2012-01-01
Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outer-magnetospheric emission models assuming the retarded vacuum dipole magnetic field, using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.
Multiple organ definition in CT using a Bayesian approach for 3D model fitting
NASA Astrophysics Data System (ADS)
Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.
1995-08-01
Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models--specifically for the liver and kidney--that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.
A New Compression Method for FITS Tables
NASA Technical Reports Server (NTRS)
Pence, William; Seaman, Rob; White, Richard L.
2010-01-01
As the size and number of FITS binary tables generated by astronomical observatories increase, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are that (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
Simultaneous fits in ISIS on the example of GRO J1008-57
NASA Astrophysics Data System (ADS)
Kühnel, Matthias; Müller, Sebastian; Kreykenbohm, Ingo; Schwarm, Fritz-Walter; Grossberger, Christoph; Dauser, Thomas; Pottschmidt, Katja; Ferrigno, Carlo; Rothschild, Richard E.; Klochkov, Dmitry; Staubert, Rüdiger; Wilms, Joern
2015-04-01
Parallel computing and steadily increasing computation speed have led to a new tool for analyzing multiple datasets and datatypes: fitting several datasets simultaneously. With this technique, physically connected parameters of individual datasets can be treated as a single parameter by implementing the connection directly in the fit. We discuss the terminology, implementation, and possible issues of simultaneous fits, based on the X-ray data analysis tool Interactive Spectral Interpretation System (ISIS). While all data modeling tools in X-ray astronomy in principle allow fitting data from multiple datasets individually, the syntax used in these tools is often not well suited for this task. Applying simultaneous fits to the transient X-ray binary GRO J1008-57, we find that the spectral shape depends only on X-ray flux. We determine time-independent parameters, such as the folding energy E_fold, with unprecedented precision.
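The tied-parameter idea is independent of ISIS; a generic sketch with scipy (not ISIS syntax) fits two synthetic spectra that share one folding-energy-like parameter while keeping individual normalizations.

```python
# Hedged sketch of a simultaneous fit: two datasets share one tied
# parameter; residuals are concatenated into a single least-squares problem.
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(1, 20, 100)
rng = np.random.default_rng(4)
data1 = 3.0 * np.exp(-x / 7.0) + rng.normal(0, 0.02, x.size)
data2 = 1.5 * np.exp(-x / 7.0) + rng.normal(0, 0.02, x.size)

def residuals(p):
    norm1, norm2, efold = p  # efold is the tied (shared) parameter
    r1 = data1 - norm1 * np.exp(-x / efold)
    r2 = data2 - norm2 * np.exp(-x / efold)
    return np.concatenate([r1, r2])

fit = least_squares(residuals, x0=[1.0, 1.0, 5.0])
print("norm1, norm2, shared E_fold:", np.round(fit.x, 3))
```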
Hardebeck, J.L.; Michael, A.J.
2006-01-01
We present a new focal mechanism stress inversion technique to produce regional-scale models of stress orientation containing the minimum complexity necessary to fit the data. Current practice is to divide a region into small subareas and to independently fit a stress tensor to the focal mechanisms of each subarea. This procedure may lead to apparent spatial variability that is actually an artifact of overfitting noisy data or nonuniquely fitting data that does not completely constrain the stress tensor. To remove these artifacts while retaining any stress variations that are strongly required by the data, we devise a damped inversion method to simultaneously invert for stress in all subareas while minimizing the difference in stress between adjacent subareas. This method is conceptually similar to other geophysical inverse techniques that incorporate damping, such as seismic tomography. In checkerboard tests, the damped inversion removes the stress rotation artifacts exhibited by an undamped inversion, while resolving sharper true stress rotations than a simple smoothed model or a moving-window inversion. We show an example of a spatially damped stress field for southern California. The methodology can also be used to study temporal stress changes, and an example for the Coalinga, California, aftershock sequence is shown. We recommend use of the damped inversion technique for any study examining spatial or temporal variations in the stress field.
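The damping idea above can be sketched generically as Tikhonov-style regularized least squares on a toy problem with one scalar value per subarea (not a full stress-tensor inversion): the least-squares system is augmented with a first-difference penalty that couples adjacent subareas.

```python
# Toy sketch of a damped inversion: estimate one scalar value per subarea
# while penalizing differences between adjacent subareas.
import numpy as np

n_sub = 10
rng = np.random.default_rng(5)
true = np.where(np.arange(n_sub) < 5, 1.0, 2.0)  # one sharp rotation

# Each subarea has a handful of noisy observations of its value.
G_rows, d = [], []
for i in range(n_sub):
    for _ in range(4):
        row = np.zeros(n_sub)
        row[i] = 1.0
        G_rows.append(row)
        d.append(true[i] + rng.normal(0, 0.5))
G, d = np.array(G_rows), np.array(d)

# First-difference operator couples adjacent subareas (the damping term).
L = np.diff(np.eye(n_sub), axis=0)
damping = 2.0
A = np.vstack([G, damping * L])
b = np.concatenate([d, np.zeros(L.shape[0])])
m, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(m, 2))
```

The damping weight trades data misfit against smoothness and would in practice be chosen from a trade-off curve, much as in seismic tomography.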
Modeling the Etiology of Adolescent Substance Use: A Test of the Social Development Model
Catalano, Richard F.; Kosterman, Rick; Hawkins, J. David; Newcomb, Michael D.; Abbott, Robert D.
2007-01-01
The social development model is a general theory of human behavior that seeks to explain antisocial behaviors through specification of predictive developmental relationships. It incorporates the effects of empirical predictors (“risk factors” and “protective factors”) for antisocial behavior and attempts to synthesize the most strongly supported propositions of control theory, social learning theory, and differential association theory. This article examines the power of social development model constructs measured at ages 9 to 10 and 13 to 14 to predict drug use at ages 17 to 18. The sample of 590 is from the longitudinal panel of the Seattle Social Development Project, which in 1985 sampled fifth grade students from high crime neighborhoods in Seattle, Washington. Structural equation modeling techniques were used to examine the fit of the model to the data. Although all but one path coefficient were significant and in the expected direction, the model did not fit the data as well as expected (CFI=.87). We next specified second-order factors for each path to capture the substantial common variance in the constructs' opportunities, involvement, and rewards. This model fit the data well (CFI=.90). We conclude that the social development model provides an acceptable fit to predict drug use at ages 17 to 18. Implications for the temporal nature of key constructs and for prevention are discussed. PMID:17848978
Extracting harmonic signal from a chaotic background with local linear model
NASA Astrophysics Data System (ADS)
Li, Chenlong; Su, Liyun
2017-02-01
In this paper, the problems of blind detection and estimation of a harmonic signal in a strong chaotic background are analyzed, and new methods based on a local linear (LL) model are put forward. The LL model has been extensively researched and successfully applied to fitting and forecasting chaotic signals in many chaotic fields; we enlarge its modeling capacity substantially. Firstly, we predict the short-term chaotic signal and obtain the fitting error based on the LL model. We then detect the frequencies from the fitting error by periodogram, and propose a property of the fitting error that has not been addressed before, which ensures that the detected frequencies are similar to those of the harmonic signal. Secondly, we establish a two-layer LL model to estimate the deterministic harmonic signal in the strong chaotic background. To perform this estimation simply and effectively, we develop an efficient backfitting algorithm to select and optimize the parameters that are difficult to search exhaustively. In the method, exploiting the sensitivity of chaotic motion to initial values, the minimum fitting error criterion is used as the objective function for estimating the parameters of the two-layer LL model. Simulation shows that the two-layer LL model and its estimation technique have appreciable flexibility for modeling the deterministic harmonic signal in different chaotic backgrounds (Lorenz, Henon and Mackey-Glass (M-G) equations). Specifically, the harmonic signal can be extracted well at low SNR, and the developed backfitting algorithm converges within 3-5 iterations.
Revision of laser-induced damage threshold evaluation from damage probability data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas
2013-04-15
In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied for Monte Carlo experiments. The reproducibility of LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. The widely accepted linear fitting resulted in systematic errors when estimating LIDT and its error bars. For the same purpose, a Bayesian approach was proposed. A novel concept of parametric regression based on a varying kernel and a maximum likelihood fitting technique is introduced and studied. This approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars are obtained as a natural outcome of the parametric fitting and exhibit realistic values. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).
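A minimal maximum-likelihood sketch of fitting 1-on-1 damage-probability data with proper binomial statistics, in contrast to unweighted linear fitting; the logistic model form and all counts are illustrative assumptions, not the paper's varying-kernel regression.

```python
# Hedged sketch: maximum-likelihood fit of damage-probability data with a
# binomial likelihood (logistic curve as an assumed model form).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

fluence = np.array([2, 4, 6, 8, 10, 12], dtype=float)  # J/cm^2, assumed
n_shots = np.full(6, 20)                                # sites tested per fluence
n_damaged = np.array([0, 1, 6, 14, 19, 20])             # observed damage counts

def neg_log_like(p):
    thr, width = p
    prob = np.clip(expit((fluence - thr) / width), 1e-9, 1 - 1e-9)
    return -np.sum(n_damaged * np.log(prob)
                   + (n_shots - n_damaged) * np.log(1 - prob))

fit = minimize(neg_log_like, x0=[7.0, 1.0], method="Nelder-Mead")
print(f"50% damage fluence ~ {fit.x[0]:.2f} J/cm^2, width ~ {fit.x[1]:.2f}")
```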
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate design goal of an optical system subjected to dynamic loads is to minimize system-level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system-level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
USDA-ARS?s Scientific Manuscript database
Non-linear regression techniques are used widely to fit weed field emergence patterns to soil microclimatic indices using S-type functions. Artificial neural networks present interesting and alternative features for such modeling purposes. In this work, a univariate hydrothermal-time based Weibull m...
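The fragment above mentions a univariate hydrothermal-time-based Weibull model; a hedged sketch of fitting such an S-type cumulative emergence curve follows, with all values illustrative.

```python
# Hedged sketch: fitting a cumulative Weibull emergence curve to
# hydrothermal time; data values are illustrative stand-ins.
import numpy as np
from scipy.optimize import curve_fit

def weibull_emergence(htt, k_max, lam, shape):
    """Cumulative emergence (%) as a function of hydrothermal time."""
    return k_max * (1.0 - np.exp(-(htt / lam) ** shape))

htt = np.array([10, 25, 50, 75, 100, 150, 200], dtype=float)  # hydrothermal time
emerg = np.array([1, 8, 35, 62, 80, 94, 97], dtype=float)     # % emergence

popt, _ = curve_fit(weibull_emergence, htt, emerg, p0=[100, 80, 2])
print("K_max, lambda, shape:", np.round(popt, 2))
```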
NASA Astrophysics Data System (ADS)
Nättilä, J.; Miller, M. C.; Steiner, A. W.; Kajava, J. J. E.; Suleimanov, V. F.; Poutanen, J.
2017-12-01
Observations of thermonuclear X-ray bursts from accreting neutron stars (NSs) in low-mass X-ray binary systems can be used to constrain NS masses and radii. Most previous work of this type has set these constraints using Planck function fits as a proxy: the models and the data are both fit with diluted blackbody functions to yield normalizations and temperatures that are then compared with each other. For the first time, we here fit atmosphere models of X-ray bursting NSs directly to the observed spectra. We present a hierarchical Bayesian fitting framework that uses current X-ray bursting NS atmosphere models with realistic opacities and relativistic exact Compton scattering kernels as a model for the surface emission. We test our approach against synthetic data and find that for data that are well described by our model, we can obtain robust radius, mass, distance, and composition measurements. We then apply our technique to Rossi X-ray Timing Explorer observations of five hard-state X-ray bursts from 4U 1702-429. Our joint fit to all five bursts shows that the theoretical atmosphere models describe the data well, but there are still some unmodeled features in the spectrum corresponding to a relative error of 1-5% of the energy flux. After marginalizing over this intrinsic scatter, we find that at 68% credibility, the circumferential radius of the NS in 4U 1702-429 is R = 12.4±0.4 km, the gravitational mass is M = 1.9±0.3 M⊙, the distance is 5.1 < D/kpc < 6.2, and the hydrogen mass fraction is X < 0.09.
ERIC Educational Resources Information Center
Mitchell, James K.; Carter, William E.
2000-01-01
Describes using a computer statistical software package called Minitab to model the sensitivity of several microbes to the disinfectant NaOCl (Clorox) using the Kirby-Bauer technique. Each group of students collects data from one microbe, conducts regression analyses, then chooses the best-fit model based on the highest r-values obtained.…
ERIC Educational Resources Information Center
Song, Hairong; Ferrer, Emilio
2009-01-01
This article presents a state-space modeling (SSM) technique for fitting process factor analysis models directly to raw data. The Kalman smoother via the expectation-maximization algorithm to obtain maximum likelihood parameter estimates is used. To examine the finite sample properties of the estimates in SSM when common factors are involved, a…
Exploring the fitness landscape of poliovirus
NASA Astrophysics Data System (ADS)
Bianco, Simone; Acevedo, Ashely; Andino, Raul; Tang, Chao
2012-02-01
RNA viruses are known to display extraordinary adaptation capabilities to different environments, due to high mutation rates. Their very dynamical evolution is captured by the quasispecies concept, according to which the viral population forms a swarm of genetic variants linked through mutation, which cooperatively interact at a functional level and collectively contribute to the characteristics of the population. The description of the viral fitness landscape becomes paramount towards a more thorough understanding of the virus evolution and spread. The high mutation rate, together with the cooperative nature of the quasispecies, makes it particularly challenging to explore its fitness landscape. I will present an investigation of the dynamical properties of poliovirus fitness landscape, through both the adoption of new experimental techniques and theoretical models.
NASA Technical Reports Server (NTRS)
Fu, Lee-Lueng; Vazquez, Jorge; Perigaud, Claire
1991-01-01
Free, equatorially trapped sinusoidal wave solutions to a linear model on an equatorial beta plane are used to fit the Geosat altimetric sea level observations in the tropical Pacific Ocean. The Kalman filter technique is used to estimate the wave amplitude and phase from the data. The estimation is performed at each time step by combining the model forecast with the observation in an optimal fashion utilizing the respective error covariances. The model error covariance is determined such that the performance of the model forecast is optimized. It is found that the dominant observed features can be described qualitatively by basin-scale Kelvin waves and the first meridional-mode Rossby waves. Quantitatively, however, only 23 percent of the signal variance can be accounted for by this simple model.
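The forecast/update cycle used above can be illustrated with a scalar toy example: a slowly wandering wave amplitude modeled as a random walk is estimated from noisy observations, with the model forecast and the data weighted by their error covariances. The state dimension and noise levels are placeholders, not the study's configuration.

```python
# Toy sketch of the Kalman forecast/update cycle for a slowly varying
# wave amplitude (random-walk state) observed with noise.
import numpy as np

rng = np.random.default_rng(6)
n_steps = 100
true_amp = np.cumsum(rng.normal(0, 0.05, n_steps)) + 1.0  # wandering amplitude
obs = true_amp + rng.normal(0, 0.3, n_steps)              # noisy sea-level proxy

q, r = 0.05**2, 0.3**2  # model and observation error variances
x, p = 0.0, 1.0         # initial state estimate and covariance
estimates = []
for z in obs:
    p = p + q             # forecast step (random-walk dynamics)
    k = p / (p + r)       # Kalman gain from the error covariances
    x = x + k * (z - x)   # update with the innovation
    p = (1 - k) * p
    estimates.append(x)

print(f"final estimate {estimates[-1]:.2f}, truth {true_amp[-1]:.2f}")
```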
Sex versus asex: An analysis of the role of variance conversion.
Lewis-Pye, Andrew E M; Montalbán, Antonio
2017-04-01
The question as to why most complex organisms reproduce sexually remains a very active research area in evolutionary biology. Theories dating back to Weismann have suggested that the key may lie in the creation of increased variability in offspring, causing enhanced response to selection. Under appropriate conditions, selection is known to result in the generation of negative linkage disequilibrium, with the effect of recombination then being to increase genetic variance by reducing these negative associations between alleles. It has therefore been a matter of significant interest to understand precisely those conditions resulting in negative linkage disequilibrium, and to recognise also the conditions in which the corresponding increase in genetic variation will be advantageous. Here, we prove rigorous results for the multi-locus case, detailing the build up of negative linkage disequilibrium, and describing the long term effect on population fitness for models with and without bounds on fitness contributions from individual alleles. Under the assumption of large but finite bounds on fitness contributions from alleles, the non-linear nature of the effect of recombination on a population presents serious obstacles in finding the genetic composition of populations at equilibrium, and in establishing convergence to those equilibria. We describe techniques for analysing the long term behaviour of sexual and asexual populations for such models, and use these techniques to establish conditions resulting in higher fitnesses for sexually reproducing populations. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE), and a linear pseudo-model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE; the present research paper introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure for obtaining a linear pseudo-model for a nonlinear regression model. In this research article, a new technique is developed to obtain the linear pseudo-model for the nonlinear regression model using multivariate calculus. The linear pseudo-model of Edmond Malinvaud [4] is explained in a very different way in this paper. In 2006, David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for fitting nonlinear regression functions. In Jae Myung [13] provided a conceptual guide to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
Building an ACT-R Reader for Eye-Tracking Corpus Data.
Dotlačil, Jakub
2018-01-01
Cognitive architectures have often been applied to data from individual experiments. In this paper, I develop an ACT-R reader that can model a much larger set of data: eye-tracking corpus data. It is shown that the resulting model has a good fit to the data for the considered low-level processes. Unlike previous related works (most prominently, Engelmann, Vasishth, Engbert & Kliegl), the model achieves the fit by estimating the free parameters of ACT-R using Bayesian estimation and Markov chain Monte Carlo (MCMC) techniques, rather than by relying on a mix of manual selection and default values. The method used in the paper is generalizable beyond this particular model and data set and could be used on other ACT-R models. Copyright © 2017 Cognitive Science Society, Inc.
Modeling T1 and T2 relaxation in bovine white matter
NASA Astrophysics Data System (ADS)
Barta, R.; Kalantari, S.; Laule, C.; Vavasour, I. M.; MacKay, A. L.; Michal, C. A.
2015-10-01
The fundamental basis of T1 and T2 contrast in brain MRI is not well understood; recent literature contains conflicting views on the nature of relaxation in white matter (WM). We investigated the effects of inversion pulse bandwidth on measurements of T1 and T2 in WM. Hybrid inversion-recovery/Carr-Purcell-Meiboom-Gill experiments with broad or narrow bandwidth inversion pulses were applied to bovine WM in vitro. Data were analysed with the commonly used 1D non-negative least squares (NNLS) algorithm, a 2D NNLS algorithm, and a four-pool model which was based upon microscopically distinguishable WM compartments (myelin non-aqueous protons, myelin water, non-myelin non-aqueous protons, and intra/extracellular water) and incorporated magnetization exchange between adjacent compartments. 1D NNLS showed that different T2 components had different T1 behaviours and yielded dissimilar results for the two inversion conditions. 2D NNLS revealed significantly more complicated T1/T2 distributions for narrow-bandwidth than for broad-bandwidth inversion pulses. The four-pool model fits allow physical interpretation of the parameters, fit better than the NNLS techniques, and fit results from both inversion conditions with the same parameters. The results demonstrate that exchange cannot be neglected when analysing experimental inversion recovery data from WM, in part because it can introduce exponential components having negative amplitude coefficients that cannot be correctly modeled with non-negative fitting techniques. While assignment of an individual T1 to one particular pool is not possible, the results suggest that under carefully controlled experimental conditions the amplitude of an apparent short-T1 component might be used to quantify myelin water.
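The 1D-NNLS step referred to above can be sketched by expanding a synthetic two-pool CPMG decay on a dictionary of exponentials; the echo times, T2 grid, and pool fractions are illustrative.

```python
# Sketch of the 1D-NNLS step: express a CPMG decay on a dictionary of
# exponentials to recover a T2 distribution.
import numpy as np
from scipy.optimize import nnls

te = np.arange(1, 321) * 1e-3                # echo times, s
t2_grid = np.logspace(-3, 0, 120)            # candidate T2 values, s
A = np.exp(-te[:, None] / t2_grid[None, :])  # dictionary matrix

# Synthetic two-pool decay: myelin water (~20 ms) + intra/extracellular
# water (~80 ms), plus a little noise.
signal = 0.15 * np.exp(-te / 0.02) + 0.85 * np.exp(-te / 0.08)
signal += np.random.default_rng(7).normal(0, 1e-3, te.size)

amplitudes, _ = nnls(A, signal)
peaks = t2_grid[amplitudes > 0.01]
print("recovered T2 components near (s):", np.round(peaks, 3))
```

As the abstract notes, exchange can introduce components with negative amplitudes, which a non-negative solver like this cannot represent; that limitation motivates the four-pool model.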
Discrete range clustering using Monte Carlo methods
NASA Technical Reports Server (NTRS)
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
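A minimal sketch of the kind of comparison being made: Metropolis-style cluster reassignment where the cooling schedule turns the basic Monte Carlo method into simulated annealing. The energy function, data, and schedule are illustrative assumptions, not the paper's algorithms.

    # Toy Monte Carlo clustering of 2-D "range points" by within-cluster variance.
    import numpy as np

    rng = np.random.default_rng(1)
    points = np.concatenate([rng.normal(0, 0.3, (30, 2)),
                             rng.normal(3, 0.3, (30, 2))])
    k = 2
    labels = rng.integers(0, k, len(points))

    def energy(lab):
        # Sum of squared distances to each cluster mean
        return sum(((points[lab == c] - points[lab == c].mean(0)) ** 2).sum()
                   for c in range(k) if (lab == c).any())

    T = 1.0
    for step in range(5000):
        trial = labels.copy()
        trial[rng.integers(len(points))] = rng.integers(k)
        dE = energy(trial) - energy(labels)
        # Metropolis acceptance; letting T -> 0 recovers a greedy Monte Carlo
        if dE < 0 or rng.random() < np.exp(-dE / max(T, 1e-9)):
            labels = trial
        T *= 0.999                                 # annealing schedule

    print("cluster sizes:", np.bincount(labels, minlength=k))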
unmarked: An R package for fitting hierarchical models of wildlife occurrence and abundance
Fiske, Ian J.; Chandler, Richard B.
2011-01-01
Ecological research uses data collection techniques that are prone to substantial and unique types of measurement error to address scientific questions about species abundance and distribution. These data collection schemes include a number of survey methods in which unmarked individuals are counted, or determined to be present, at spatially-referenced sites. Examples include site occupancy sampling, repeated counts, distance sampling, removal sampling, and double observer sampling. To appropriately analyze these data, hierarchical models have been developed to separately model explanatory variables of both a latent abundance or occurrence process and a conditional detection process. Because these models have a straightforward interpretation paralleling mechanisms under which the data arose, they have recently gained immense popularity. The common hierarchical structure of these models is well-suited for a unified modeling interface. The R package unmarked provides such a unified modeling framework, including tools for data exploration, model fitting, model criticism, post-hoc analysis, and model comparison.
Order reduction for a model of marine bacteriophage evolution
NASA Astrophysics Data System (ADS)
Pagliarini, Silvia; Korobeinikov, Andrei
2017-02-01
A typical mechanistic model of viral evolution necessarily includes several time scales, which can differ by orders of magnitude. Such a diversity of time scales makes analysis of these models difficult, and reducing the order of a model is highly desirable when handling it. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time-scale separation technique, for which constructing the so-called quasi-steady-state approximation is the usual first step. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of the evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation is able to produce only a qualitative, but not a quantitative, fit.
NASA Astrophysics Data System (ADS)
Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal
2014-06-01
This study presents artificial intelligence (AI)-based modeling of total bed material load, improving on the accuracy of traditional models' predictions. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used to develop and validate the applied techniques. To assess the applied techniques against traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. It was also found that the k-fold test is a practical but computationally costly technique for completely scanning the applied data and avoiding over-fitting.
NASA Astrophysics Data System (ADS)
Leja, Joel; Johnson, Benjamin D.; Conroy, Charlie; van Dokkum, Pieter
2018-02-01
Forward modeling of the full galaxy SED is a powerful technique, providing self-consistent constraints on stellar ages, dust properties, and metallicities. However, the accuracy of these results is contingent on the accuracy of the model. One significant source of uncertainty is the contribution of obscured AGN, as they are relatively common and can produce substantial mid-IR (MIR) emission. Here we include emission from dusty AGN tori in the Prospector SED-fitting framework, and fit the UV–IR broadband photometry of 129 nearby galaxies. We find that 10% of the fitted galaxies host an AGN contributing >10% of the observed galaxy MIR luminosity. We demonstrate the necessity of this AGN component in the following ways. First, we compare observed spectral features to spectral features predicted from our model fit to the photometry. We find that the AGN component greatly improves predictions for observed Hα and Hβ luminosities, as well as mid-infrared Akari and Spitzer/IRS spectra. Second, we show that inclusion of the AGN component changes stellar ages and SFRs by up to a factor of 10, and dust attenuations by up to a factor of 2.5. Finally, we show that the strength of our model AGN component correlates with independent AGN indicators, suggesting that these galaxies truly host AGN. Notably, only 46% of the SED-detected AGN would be detected with a simple MIR color selection. Based on these results, we conclude that SED models which fit MIR data without AGN components are vulnerable to substantial bias in their derived parameters.
Model analysis for the MAGIC telescope
NASA Astrophysics Data System (ADS)
Mazin, D.; Bigongiari, C.; Goebel, F.; Moralejo, A.; Wittek, W.
The MAGIC Collaboration operates a 17 m imaging Cherenkov telescope on the Canary island of La Palma. The main goal of the experiment is an energy threshold below 100 GeV for primary gamma rays. The new analysis technique (model analysis) takes advantage of the camera's high resolution in both space and time by fitting averaged expected templates of the shower development to the measured shower images in the camera. This approach makes it possible to recognize and reconstruct images just above the level of the night-sky background light fluctuations. Progress and preliminary results of the model analysis technique will be presented.
Farina, Ana Paula; Spazzin, Aloísio Oro; Consani, Rafael Leonardo Xediek; Mesquita, Marcelo Ferraz
2014-06-01
Screws can loosen through mechanisms that have not been clearly established. The purpose of this study was to evaluate the influence of the tightening technique (the application of torque and retorque on the joint stability of titanium and gold prosthetic screws) in implant-supported dentures under different fit levels after 1 year of simulated masticatory function by means of mechanical cycling. Ten mandibular implant-supported dentures were fabricated, and 20 cast models were prepared by using the dentures to create 2 fit levels: passive fit and created misfit. The tightening protocol was evaluated according to 4 distinct profiles: without retorque plus titanium screws, without retorque plus gold screws, retorque plus titanium screws, and retorque plus gold screws. In the retorque application, the screws were tightened to 10 Ncm and retightened to 10 Ncm after 10 minutes. The screw joint stability after 1 year of simulated clinical function was measured with a digital torque meter. Data were analyzed statistically by 2-way ANOVA and Tukey honestly significant difference (HSD) post hoc tests (α=.05). The factors of fit level and tightening technique, as well as the interaction between them, were statistically significant. Misfit decreased the loosening torque: all tightening techniques revealed reduced loosening torque values that were significantly lower in misfit dentures than in passive-fit dentures. However, the retorque application significantly increased the loosening torque for both titanium and gold screws, independent of fit level or screw material. Therefore, this procedure should be performed routinely during screw tightening. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Verdel, Nina; Marin, Ana; Vidovič, Luka; Milanič, Matija; Majaron, Boris
2017-02-01
We have combined two optical techniques to enable simultaneous assessment of the structure and composition of human skin in vivo: pulsed photothermal radiometry (PPTR), which involves measurements of transient dynamics in mid-infrared emission from the sample surface after exposure to a light pulse, and diffuse reflectance spectroscopy (DRS) in the visible part of the spectrum. While PPTR is highly sensitive to the depth distribution of selected absorbers, DRS provides spectral information and thus enables differentiation between various chromophores. The accuracy and robustness of the inverse analysis is thus considerably improved compared to the use of either technique on its own. Our analysis approach is simultaneous multi-dimensional fitting of the measured PPTR signals and DRS with predictions from a numerical model of light-tissue interaction (a.k.a. inverse Monte Carlo). By using a three-layer skin model (epidermis, dermis, and subcutis), we obtain a good match between the experimental and modeling data. However, dividing the dermis into two separate layers (i.e., papillary and reticular dermis) helps to bring all assessed parameter values within anatomically and physiologically plausible intervals. Both the quality of the fit and the assessed parameter values depend somewhat on the assumed scattering properties for skin, which vary in the literature and likely depend on the subject's age and gender, anatomical site, etc. In our preliminary experience, simultaneous fitting of the scattering properties is possible and leads to considerable improvement of the fit. The described approach may thus have potential for simultaneous determination of the absorption and scattering properties of human skin in vivo.
Some photometric techniques for atmosphereless solar system bodies.
Lumme, K; Peltoniemi, J; Irvine, W M
1990-01-01
We discuss various photometric techniques and their absolute scales in relation to the information that can be derived from the relevant data. We also outline a new scattering model for atmosphereless bodies in the solar system and show how it fits Mariner 10 surface photometry of the planet Mercury. We show how important the correct scattering law is when deriving topography by photoclinometry.
Framework based on stochastic L-Systems for modeling IP traffic with multifractal behavior
NASA Astrophysics Data System (ADS)
Salvador, Paulo S.; Nogueira, Antonio; Valadas, Rui
2003-08-01
In previous work we introduced a multifractal traffic model based on so-called stochastic L-Systems, which were introduced by the biologist A. Lindenmayer as a method to model plant growth. L-Systems are string-rewriting techniques, characterized by an alphabet, an axiom (initial string) and a set of production rules. In this paper, we propose a novel traffic model, and an associated parameter fitting procedure, which jointly describes the packet arrival and packet size processes. The packet arrival process is modeled through an L-System, where the alphabet elements are packet arrival rates. The packet size process is modeled through a set of discrete distributions (of packet sizes), one for each arrival rate. In this way the model is able to capture correlations between arrivals and sizes. We applied the model to measured traffic data: the well-known pOct Bellcore trace, a trace of aggregate WAN traffic and two traces of specific applications (Kazaa and Operation Flashing Point). We assess the multifractality of these traces using linear multiscale diagrams. The suitability of the traffic model is evaluated by comparing the empirical and fitted probability mass and autocovariance functions; we also compare the packet loss ratio and average packet delay obtained with the measured traces and with traces generated from the fitted model. Our results show that our L-System based traffic model can achieve very good fitting performance in terms of first and second order statistics and queuing behavior.
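To illustrate the string-rewriting mechanism itself, here is a toy stochastic L-System over a two-symbol alphabet of arrival rates; the alphabet and production probabilities are invented and are not the fitted model's.

    # Stochastic L-System: each symbol rewrites to a string drawn from its rules.
    import random

    rules = {  # symbol -> list of (probability, replacement)
        "L": [(0.7, "LL"), (0.3, "LH")],   # low arrival rate
        "H": [(0.5, "HH"), (0.5, "HL")],   # high arrival rate
    }

    def rewrite(s):
        out = []
        for ch in s:
            r, acc = random.random(), 0.0
            for p, repl in rules[ch]:
                acc += p
                if r <= acc:
                    out.append(repl)
                    break
        return "".join(out)

    axiom = "L"
    for _ in range(4):      # four rewriting iterations: 1 -> 16 symbols
        axiom = rewrite(axiom)
    print(axiom)            # per-interval arrival-rate symbols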
Healy, Michael R; Light, Leah L; Chung, Christie
2005-07-01
In 3 experiments, young and older adults studied lists of unrelated word pairs and were given confidence-rated item and associative recognition tests. Several different models of recognition were fit to the confidence-rating data using techniques described by S. Macho (2002, 2004). Concordant with previous findings, item recognition data were best fit by an unequal-variance signal detection theory model for both young and older adults. For both age groups, associative recognition performance was best explained by models incorporating both recollection and familiarity components. Examination of parameter estimates supported the conclusion that recollection is reduced in old age, but inferences about age differences in familiarity were highly model dependent. Implications for dual-process models of memory in old age are discussed. ((c) 2005 APA, all rights reserved).
Hung, Chien-Ya; Sun, Pei-Lun; Chiang, Shu-Jen; Jaw, Fu-Shan
2014-01-01
Similar clinical appearances prevent accurate diagnosis of two common skin diseases, clavus and verruca. In this study, electrical impedance is employed as a novel tool to generate a predictive model for differentiating these two diseases. We used 29 clavus and 28 verruca lesions. To obtain impedance parameters, an LCR-meter system was applied to measure capacitance (C), resistance (Re), impedance magnitude (Z), and phase angle (θ). These values were combined with lesion thickness (d) to characterize the tissue specimens. The results from clavus and verruca were then fitted to a univariate logistic regression model with the generalized estimating equations (GEE) method. In model generation, log ZSD and θSD were formulated as predictors by fitting a multiple logistic regression model with the same GEE method. The potential nonlinear effects of covariates were detected by fitting generalized additive models (GAM). Moreover, the model was validated by goodness-of-fit (GOF) assessments. Significant mean differences in the indices d, Re, Z, and θ were found between clavus and verruca (p<0.001). A final predictive model was established with the Z and θ indices. The model fits the observed data quite well. In the GOF evaluation, the area under the receiver operating characteristics (ROC) curve is 0.875 (>0.7), the adjusted generalized R2 is 0.512 (>0.3), and the p value of the Hosmer-Lemeshow GOF test is 0.350 (>0.05). This technique promises to provide a validated model for differential diagnosis of clavus and verruca. It could provide a rapid, relatively low-cost, safe and non-invasive screening tool in clinical use.
Bell, C; Paterson, D H; Kowalchuk, J M; Padilla, J; Cunningham, D A
2001-09-01
We compared estimates for the phase 2 time constant (tau) of oxygen uptake (VO2) during moderate- and heavy-intensity exercise, and the slow component of VO2 during heavy-intensity exercise, using previously published exponential models. Estimates for tau and the slow component differed (P < 0.05) among models. For moderate-intensity exercise, a two-component exponential model, or a mono-exponential model fitted from 20 s to 3 min, were best. For heavy-intensity exercise, a three-component model fitted throughout the entire 6 min bout of exercise, or a two-component model fitted from 20 s, were best. When the time delays for the two- and three-component models were equal the best statistical fit was obtained; however, this model produced an inappropriately low DeltaVO2/DeltaWR (WR, work rate) for the projected phase 2 steady state, and the estimate of the phase 2 tau was shortened compared with other models. The slow component was quantified as the difference between VO2 at end-exercise (6 min) and at 3 min (DeltaVO2(6-3 min); 259 ml min(-1)), and also using the phase 3 amplitude terms (truncated to end-exercise) from the exponential fits (409-833 ml min(-1)). The phase 3 time delay parameter placed the onset of the slow component at approximately 2 min (vs. the arbitrary 3 min); using this delay, DeltaVO2(6-2 min) was approximately 400 ml min(-1). Valid, consistent methods of estimating tau and the slow component in exercise are needed to advance physiological understanding.
The analytical representation of viscoelastic material properties using optimization techniques
NASA Technical Reports Server (NTRS)
Hill, S. A.
1993-01-01
This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
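The separability being exploited can be sketched briefly: with the exponential time constants held fixed, the Prony coefficients enter the model linearly and follow from an ordinary least-squares solve, so an outer search (or optimizer) only has to handle the time constants. The data and search grid below are synthetic assumptions, not the report's material data.

    # Separable fit of g(t) = c0 + c1*exp(-t/tau1) + c2*exp(-t/tau2).
    import numpy as np

    t = np.linspace(0.0, 10.0, 200)
    data = 2.0 + 1.0 * np.exp(-t / 0.5) + 0.5 * np.exp(-t / 3.0)  # synthetic modulus

    def linear_fit(taus):
        # For fixed taus, the coefficients are a linear least-squares problem
        X = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
        coeffs, *_ = np.linalg.lstsq(X, data, rcond=None)
        resid = data - X @ coeffs
        return coeffs, float(resid @ resid)

    # Coarse outer search over the nonlinear time constants only
    grid = np.logspace(-1, 1, 20)
    best = min(((t1, t2) for t1 in grid for t2 in grid if t1 < t2),
               key=lambda taus: linear_fit(taus)[1])
    print("best taus:", best, "coefficients:", linear_fit(best)[0])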
Guenole, Nigel; Brown, Anna A; Cooper, Andrew J
2018-06-01
This article describes an investigation of whether Thurstonian item response modeling is a viable method for assessment of maladaptive traits. Forced-choice responses from 420 working adults to a broad-range personality inventory assessing six maladaptive traits were considered. The Thurstonian item response model's fit to the forced-choice data was adequate, while the fit of a counterpart item response model to responses to the same items but arranged in a single-stimulus design was poor. Monotrait heteromethod correlations indicated corresponding traits in the two formats overlapped substantially, although they did not measure equivalent constructs. A better goodness of fit and higher factor loadings for the Thurstonian item response model, coupled with a clearer conceptual alignment to the theoretical trait definitions, suggested that the single-stimulus item responses were influenced by biases that the independent clusters measurement model did not account for. Researchers may wish to consider forced-choice designs and appropriate item response modeling techniques such as Thurstonian item response modeling for personality questionnaire applications in industrial psychology, especially when assessing maladaptive traits. We recommend further investigation of this approach in actual selection situations and with different assessment instruments.
NASA Astrophysics Data System (ADS)
Bowler, Brendan P.; Liu, Michael C.; Cushing, Michael C.
2009-12-01
We present a near-infrared spectroscopic study of HD 114762B, the latest-type metal-poor companion discovered to date and the only ultracool subdwarf with a known metallicity, inferred from the primary star to be [Fe/H] = -0.7. We obtained a medium-resolution (R ~ 3800) Keck/OSIRIS 1.18-1.40 μm spectrum and a low-resolution (R ~ 150) Infrared Telescope Facility/SpeX 0.8-2.4 μm spectrum of HD 114762B to test atmospheric and evolutionary models for the first time in this mass-metallicity regime. HD 114762B exhibits spectral features common to both late-type dwarfs and subdwarfs, and we assign it a spectral type of d/sdM9 ± 1. We use a Monte Carlo technique to fit PHOENIX/GAIA synthetic spectra to the observations, accounting for the coarsely gridded nature of the models. Fits to the entire OSIRIS J-band and to the metal-sensitive J-band atomic absorption features (Fe I, K I, and Al I lines) yield model parameters that are most consistent with the metallicity of the primary star and the high surface gravity expected of old late-type objects. The effective temperatures and radii inferred from the model atmosphere fitting broadly agree with those predicted by the evolutionary models of Chabrier & Baraffe, and the model color-absolute magnitude relations accurately predict the metallicity of HD 114762B. We conclude that current low-mass, mildly metal-poor atmospheric and evolutionary models are mutually consistent for spectral fits to medium-resolution J-band spectra of HD 114762B, but are inconsistent for fits to low-resolution near-infrared spectra of mild subdwarfs. Finally, we develop a technique for estimating distances to ultracool subdwarfs based on a single near-infrared spectrum. We show that this "spectroscopic parallax" method enables distance estimates accurate to ≲10% of parallactic distances for ultracool subdwarfs near the hydrogen burning minimum mass. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting
NASA Astrophysics Data System (ADS)
Yan, Y. T.; Cai, Y.
2006-03-01
A singular value decomposition (SVD)-enhanced least-squares fitting technique is discussed. By automatically identifying, ordering, and selecting the dominant SVD modes of the derivative matrix that respond to the variations of the variables, the convergence of the least-squares fitting is significantly enhanced, making the fitting fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement, in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs, as well as all BPM gains and BPM cross-plane couplings, through least-squares fitting of the phase advances and the local Green's functions, as well as the coupling ellipses among BPMs. The local Green's functions are specified by 4 local transfer matrix components R12, R34, R32, R14. These measurable quantities (the Green's functions, the phase advances and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn beam position monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
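A hedged sketch of the central numerical step, solving a least-squares system while keeping only the dominant singular modes of the derivative (response) matrix; the matrix, right-hand side, and cutoff are illustrative, not PEP-II quantities.

    # Truncated-SVD least squares: solve J dx ~= dy using dominant modes only.
    import numpy as np

    rng = np.random.default_rng(2)
    J = rng.normal(size=(100, 20))        # derivative/response matrix (assumed)
    J[:, -1] = J[:, 0] * 1e-8             # make one direction nearly degenerate
    dy = rng.normal(size=100)

    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > 1e-6 * s[0]                # select dominant modes by threshold
    dx = Vt[keep].T @ ((U[:, keep].T @ dy) / s[keep])
    print("kept", int(keep.sum()), "of", len(s), "modes")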
Bayesian inference in an item response theory model with a generalized student t link function
NASA Astrophysics Data System (ADS)
Azevedo, Caio L. N.; Migon, Helio S.
2012-10-01
In this paper we introduce a new item response theory (IRT) model with a generalized Student t link function with unknown degrees of freedom (df), named the generalized t-link (GtL) IRT model. In this model we consider only the difficulty parameter in the item response function. GtL is an alternative to the two-parameter logit and probit models, since the degrees of freedom play a role similar to that of the discrimination parameter. However, the behavior of the GtL curves differs from that of the two-parameter models and the usual Student t link, since in GtL the curves obtained for different df's can cross the probit curves at more than one latent trait level. The GtL model has properties similar to those of generalized linear mixed models, such as the existence of sufficient statistics and easy parameter interpretation, and many techniques of parameter estimation, model fit assessment and residual analysis developed for those models can be used for the GtL model. We develop fully Bayesian estimation and model fit assessment tools through a Metropolis-Hastings step within a Gibbs sampling algorithm, and we examine sensitivity to the prior choice for the degrees of freedom. A simulation study indicates that the algorithm recovers all parameters properly. In addition, some Bayesian model fit assessment tools are considered. Finally, a real data set is analyzed using our approach and other usual models. The results indicate that our model fits the data better than the two-parameter models.
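A minimal sketch of a Metropolis-Hastings step for the degrees of freedom inside a Gibbs sampler, the kind of update described above; the stand-in log-posterior (a t likelihood on illustrative residuals plus an exponential prior) and all values are assumptions, not the paper's model.

    # Random-walk MH update for the df parameter nu of a Student t likelihood.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    resid = stats.t.rvs(df=5, size=200, random_state=rng)    # pretend residuals

    def log_post(nu):
        if nu <= 1.0:
            return -np.inf                                    # support constraint
        return stats.t.logpdf(resid, df=nu).sum() - 0.1 * nu  # likelihood + prior

    nu, chain = 10.0, []
    for _ in range(2000):
        prop = nu + rng.normal(scale=1.0)                     # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(nu):
            nu = prop                                         # accept
        chain.append(nu)
    print("posterior mean df ~", np.mean(chain[500:]))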
Stoklosa, Jakub; Dann, Peter; Huggins, Richard
2014-09-01
To accommodate seasonal effects that change from year to year into models for the size of an open population we consider a time-varying coefficient model. We fit this model to a capture-recapture data set collected on the little penguin Eudyptula minor in south-eastern Australia over a 25 year period using Jolly-Seber type estimators and nonparametric P-spline techniques. The time-varying coefficient model identified strong changes in the seasonal pattern across the years which we further examined using functional data analysis techniques. To evaluate the methodology we also conducted several simulation studies that incorporate seasonal variation. Copyright © 2014 Elsevier Inc. All rights reserved.
Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans
NASA Astrophysics Data System (ADS)
Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj
2016-06-01
This work deals with the development of an algorithm for physical replication of patient-specific human bone, and construction of corresponding implant/insert RP models, using a reverse engineering approach from non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e. voxel and triangular facet based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other hand, recent advances in computer-aided design (CAD) technology provide additional facilities/functions for design, prototyping and manufacturing of any object having freeform surfaces, based on boundary representation techniques. This work presents a process for physical replication of 3D rapid prototyping (RP) models of human bone using various CAD modeling techniques, based on 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. The point cloud data is used to construct a 3D CAD model by fitting B-spline curves through the points and then fitting surfaces between these curve networks using swept blend techniques. This can also be achieved by generating a triangular mesh directly from the 3D point cloud data, without developing any surface model in commercial CAD software. The STL file generated from the 3D point cloud data is used as the basic input for the RP process; the Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. CT scan data of a metacarpus (human bone) is used as the case study for generation of the 3D RP model. A 3D physical model of the human bone is generated on a rapid prototyping machine and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in replication of human bone in the medical field.
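The curve-fitting step can be sketched for a single cross-sectional contour as follows; the boundary points are synthetic, and the smoothing parameter is an assumption, not a value from the study.

    # Smoothing B-spline through an ordered, closed contour of boundary points.
    import numpy as np
    from scipy.interpolate import splprep, splev

    rng = np.random.default_rng(4)
    theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
    x = 10 * np.cos(theta) + rng.normal(0, 0.2, theta.size)   # noisy contour
    y = 6 * np.sin(theta) + rng.normal(0, 0.2, theta.size)

    tck, u = splprep([x, y], s=2.0, per=True)    # periodic least-squares spline
    xs, ys = splev(np.linspace(0, 1, 200), tck)  # densely sampled fitted curve
    print("fitted", len(xs), "curve points")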
ERIC Educational Resources Information Center
Furlow, Carolyn F.; Beretvas, S. Natasha
2005-01-01
Three methods of synthesizing correlations for meta-analytic structural equation modeling (SEM) under different degrees and mechanisms of missingness were compared for the estimation of correlation and SEM parameters and goodness-of-fit indices by using Monte Carlo simulation techniques. A revised generalized least squares (GLS) method for…
Guess, Petra C; Vagkopoulou, Thaleia; Zhang, Yu; Wolkewitz, Martin; Strub, Joerg R
2014-02-01
The aim of the study was to evaluate the marginal and internal fit of heat-pressed and CAD/CAM fabricated all-ceramic onlays before and after luting as well as after thermo-mechanical fatigue. Seventy-two caries-free, extracted human mandibular molars were randomly divided into three groups (n=24/group). All teeth received an onlay preparation with a mesio-occlusal-distal inlay cavity and an occlusal reduction of all cusps. Teeth were restored with heat-pressed IPS-e.max-Press* (IP, *Ivoclar-Vivadent) and Vita-PM9 (VP, Vita-Zahnfabrik) as well as CAD/CAM fabricated IPS-e.max-CAD* (IC, Cerec 3D/InLab/Sirona) all-ceramic materials. After cementation with a dual-polymerising resin cement (VariolinkII*), all restorations were subjected to mouth-motion fatigue (98 N, 1.2 million cycles; 5°C/55°C). Marginal fit discrepancies were examined on epoxy replicas before and after luting as well as after fatigue at 200× magnification. Internal fit was evaluated by a multiple sectioning technique. For the statistical analysis, a linear model accounting for repeated measurements was fitted. Adhesive cementation of onlays resulted in significantly increased marginal gap values in all groups, whereas thermo-mechanical fatigue had no effect. Marginal gap values of all test groups were equal after fatigue exposure. Internal discrepancies of CAD/CAM fabricated restorations were significantly higher than those of both press-manufactured onlays. Mean marginal gap values of the investigated onlays before and after luting as well as after fatigue were within the clinically acceptable range. Marginal fit was not affected by the investigated heat-press versus CAD/CAM fabrication technique. Press fabrication resulted in a superior internal fit of onlays as compared to the CAD/CAM technique. Clinical requirements of 100 μm for marginal fit were fulfilled by the heat-press as well as by the CAD/CAM fabricated all-ceramic onlays. Superior internal fit was observed with the heat-press manufacturing method. The impact of the present findings on the clinical long-term behaviour of differently fabricated all-ceramic onlays warrants further investigation. Copyright © 2013 Elsevier Ltd. All rights reserved.
Predictive modeling of transient storage and nutrient uptake: Implications for stream restoration
O'Connor, Ben L.; Hondzo, Miki; Harvey, Judson W.
2010-01-01
This study examined two key aspects of reactive transport modeling for stream restoration purposes: the accuracy of the nutrient spiraling and transient storage models for quantifying reach-scale nutrient uptake, and the ability to quantify transport parameters using measurements and scaling techniques in order to improve upon traditional conservative tracer fitting methods. Nitrate (NO3–) uptake rates inferred using the nutrient spiraling model underestimated the total NO3– mass loss by 82%, which was attributed to the exclusion of dispersion and transient storage. The transient storage model was more accurate with respect to the NO3– mass loss (±20%) and also demonstrated that uptake in the main channel was more significant than in storage zones. Conservative tracer fitting was unable to produce transport parameter estimates for a riffle-pool transition of the study reach, while forward modeling of solute transport using measured/scaled transport parameters matched conservative tracer breakthrough curves for all reaches. Additionally, solute exchange between the main channel and embayment surface storage zones was quantified using first-order theory. These results demonstrate that it is vital to account for transient storage in quantifying nutrient uptake, and the continued development of measurement/scaling techniques is needed for reactive transport modeling of streams with complex hydraulic and geomorphic conditions.
Global parameter estimation for thermodynamic models of transcriptional regulation.
Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N
2013-07-15
Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Collins, Curtis Andrew
Ordinary and weighted least-squares multiple linear regression techniques were used to derive 720 models predicting Katrina-induced storm damage in cubic-foot volume (outside bark) and green-weight tons (outside bark). The large number of models was dictated by the use of three damage classes, three product types, and four forest-type model strata. These 36 models were then fit and reported across 10 variable sets and variable-set combinations for volume and ton units. Along with the large model counts, potential independent variables were created using power transforms and interactions. These variables were based on field-measured plot data, satellite (Landsat TM and ETM+) imagery, and NOAA HWIND wind data. As part of the modeling process, lone variable types as well as two-type and three-type combinations were examined. By deriving models with these varying inputs, model utility is kept flexible, as not all independent variable data are needed in future applications. The large number of potential variables led to the use of forward, sequential, and exhaustive independent variable selection techniques. After variable selection, weighted least-squares techniques were often employed, with weights of one over the square root of the pre-storm volume or weight of interest. This was generally successful in improving the homogeneity of the residual variance. Finished model fits, as represented by the coefficient of determination (R2), surpassed 0.5 in numerous models, with values over 0.6 noted in a few cases. Given these models, an analyst is provided with a toolset to aid in risk assessment and disaster recovery should Katrina-like weather events reoccur.
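A minimal sketch of the weighted least-squares step described above, with each residual weighted by one over the square root of the pre-storm volume; the predictors and data are invented for illustration.

    # WLS by row-rescaling: minimize sum((w_i * r_i)^2) with w = 1/sqrt(volume).
    import numpy as np

    rng = np.random.default_rng(5)
    pre_volume = rng.uniform(100, 1000, 200)      # pre-storm volume per plot
    wind = rng.uniform(20, 60, 200)               # e.g., an HWIND wind variable
    damage = 0.05 * pre_volume + 2.0 * wind + rng.normal(0, 10, 200)

    w = 1.0 / np.sqrt(pre_volume)                 # weights
    X = np.column_stack([np.ones_like(wind), pre_volume, wind])
    beta, *_ = np.linalg.lstsq(X * w[:, None], damage * w, rcond=None)
    print("WLS coefficients:", beta)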
Estimation of steady-state leakage current in polycrystalline PZT thin films
NASA Astrophysics Data System (ADS)
Podgorny, Yury; Vorotilov, Konstantin; Sigov, Alexander
2016-09-01
Estimation of the steady-state (or "true") leakage current Js in polycrystalline ferroelectric PZT films using the voltage-step technique is discussed. Curie-von Schweidler (CvS) and sum-of-exponents (Σexp) models are studied for fitting current-time J(t) data. The Σexp model (a sum of two or three exponents) gives better fitting characteristics and provides good accuracy in estimating Js at reduced measurement time, making it possible to avoid film degradation, whereas the CvS model is very sensitive to both the start and finish time points and in many cases gives incorrect results. The results suggest the existence of low-frequency relaxation processes in PZT films with characteristic durations of tens to hundreds of seconds.
Hendriks, Jacqueline; Fyfe, Sue; Styles, Irene; Skinner, S Rachel; Merriman, Gareth
2012-01-01
Measurement scales seeking to quantify latent traits, such as attitudes, are often developed using traditional psychometric approaches. Application of the Rasch unidimensional measurement model may complement or replace these techniques, as the model can be used to construct scales and check their psychometric properties. If data fit the model, then a scale with invariant measurement properties, including interval-level scores, will have been developed. This paper highlights the unique properties of the Rasch model. Items developed to measure adolescent attitudes towards abortion are used to exemplify the process. Ten attitude and intention items relating to abortion were answered by 406 adolescents aged 12 to 19 years, as part of the "Teen Relationships Study". The sampling framework captured a range of sexual and pregnancy experiences. Items were assessed for fit to the Rasch model including checks for Differential Item Functioning (DIF) by gender, sexual experience or pregnancy experience. Rasch analysis of the original dataset initially demonstrated that some items did not fit the model. Rescoring of one item (B5) and removal of another (L31) resulted in fit, as shown by a non-significant item-trait interaction total chi-square and a mean log residual fit statistic for items of -0.05 (SD=1.43). No DIF existed for the revised scale. However, items did not distinguish as well amongst persons with the most intense attitudes as they did for other persons. A person separation index of 0.82 indicated good reliability. Application of the Rasch model produced a valid and reliable scale measuring adolescent attitudes towards abortion, with stable measurement properties. The Rasch process provided an extensive range of diagnostic information concerning item and person fit, enabling changes to be made to scale items. This example shows the value of the Rasch model in developing scales for both social science and health disciplines.
Three-dimensional simulation of human teeth and its application in dental education and research.
Koopaie, Maryam; Kolahdouz, Sajad
2016-01-01
Background: A comprehensive database comprising the geometry and properties of human teeth is needed for dentistry education and dental research. The aim of this study was to create a three-dimensional model of human teeth to improve dental e-learning and dental research. Methods: Cross-sectional images were used to build the three-dimensional model of the teeth. CT scan images were used in the first method. The spacing between the cross-sectional images was about 200 to 500 micrometers. The hard tissue margin was detected in each image using Matlab (R2009b) as image-processing software. The images were transferred to Solidworks 2015 software. The tooth border curve was fitted with B-spline curves using a least-squares curve-fitting algorithm. After transferring all curves for each tooth to Solidworks, the surface was created based on a surface-fitting technique. This surface was meshed in Meshlab-v132 software, and the surface was optimized based on a remeshing technique. The mechanical properties of the teeth were applied to the dental model. Results: This study presents a methodology for communication between CT scan images and finite element and training software, through which modeling and simulation of the teeth were performed. In this study, cross-sectional images were used for modeling. According to the findings, the cost and time were reduced compared to other studies. Conclusion: The three-dimensional modeling method presented in this study facilitates learning for dental students and dentists. Based on the three-dimensional model proposed in this study, designing and manufacturing implants and dental prostheses are possible.
Finding Effective Models in Transition Metals using Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Williams, Kiel; Wagner, Lucas K.
There is a gap between high-accuracy ab-initio calculations, like those produced from Quantum Monte Carlo (QMC), and effective lattice models such as the Hubbard model. We have developed a method that combines data produced from QMC with fitting techniques taken from data science, allowing us to determine which degrees of freedom are required to connect ab-initio and model calculations. We test this approach for transition metal atoms, where spectroscopic reference data exists. We report on the accuracy of several derived effective models that include different degrees of freedom, and comment on the quality of the parameter values we obtain from our fitting procedure. We gratefully acknowledge funding from the National Science Foundation Graduate Research Fellowship Program under Grant Number DGE-1144245 (K.T.W.) and from SciDAC Grant DE-FG02-12ER46875 (L.K.W.).
Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M
2018-07-01
Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
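A hedged sketch of such a model, a stochastic, discrete-time, discrete-state SIR process with binomial process and observation error; parameter values and the reporting scheme are invented, and no particular platform's syntax is implied.

    # Discrete-time stochastic SIR with binomial transitions and reporting.
    import numpy as np

    rng = np.random.default_rng(6)
    N, I, R = 1000, 5, 0
    S = N - I - R
    beta, gamma, report_prob = 0.3, 0.1, 0.5

    observed = []
    for day in range(100):
        p_inf = 1.0 - np.exp(-beta * I / N)            # per-susceptible risk
        new_inf = rng.binomial(S, p_inf)               # process error
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        observed.append(rng.binomial(new_inf, report_prob))  # observation error

    print("peak reported incidence:", max(observed))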
Digital Model of Railway Electric Traction Lines
NASA Astrophysics Data System (ADS)
Garg, Rachana; Mahajan, Priya; Kumar, Parmod
2017-08-01
The characteristic impedance and propagation constant define the behavior of signal propagation over transmission lines. A digital model for railway traction lines, including the railway tracks, is developed using curve-fitting techniques in MATLAB. The sensitivity of this model with respect to frequency has been computed, and the digital sensitivity values are compared with the analog sensitivity values. The developed model is useful for digital protection, integrated operation, control and planning of the system.
Assessment of Response Surface Models using Independent Confirmation Point Analysis
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2010-01-01
This paper highlights various advantages that confirmation-point residuals have over conventional model design-point residuals in assessing the adequacy of a response surface model fitted by regression techniques to a sample of experimental data. Particular advantages are highlighted for the case of design matrices that may be ill-conditioned for a given sample of data. The impact of both aleatory and epistemological uncertainty in response model adequacy assessments is considered.
Fitting Nonlinear Curves by use of Optimization Techniques
NASA Technical Reports Server (NTRS)
Hill, Scott A.
2005-01-01
MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
Right-Sizing Statistical Models for Longitudinal Data
Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.
2015-01-01
Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507
Liu, Y; Allen, R
2002-09-01
The study aimed to model the cerebrovascular system, using a linear ARX model based on data simulated by a comprehensive physiological model, and to assess the range of applicability of linear parametric models. Arterial blood pressure (ABP) and middle cerebral arterial blood flow velocity (MCAV) were measured from 11 subjects non-invasively, following step changes in ABP, using the thigh cuff technique. By optimising parameters associated with autoregulation, using a non-linear optimisation technique, the physiological model showed a good performance (r = 0.83 ± 0.14) in fitting MCAV. An additional five sets of measured ABP of length 236 ± 154 s were acquired from a subject at rest. These were normalised and rescaled to coefficients of variation (CV = SD/mean) of 2% and 10% for model comparisons. Randomly generated Gaussian noise with standard deviation (SD) from 1% to 5% was added to both ABP and physiologically simulated MCAV (SMCAV), with 'normal' and 'impaired' cerebral autoregulation, to simulate real measurement conditions. ABP and SMCAV were fitted by ARX modelling, and cerebral autoregulation was quantified by a 5 s recovery percentage R5% of the step responses of the ARX models. The study suggests that cerebral autoregulation can be assessed by computing the R5% of the step response of an ARX model of appropriate order, even when measurement noise is considerable.
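An illustrative sketch of the ARX approach: fit lagged-regression coefficients by least squares, then read a recovery measure off the fitted model's step response. The model order, signals, and coefficients are assumptions, not the study's data.

    # Fit v(t) = a1*v(t-1) + a2*v(t-2) + b0*p(t) + b1*p(t-1) by least squares.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 500
    p = rng.normal(size=n)                  # ABP input (normalized), assumed
    v = np.zeros(n)                         # MCAV output from a "true" system
    for t in range(2, n):
        v[t] = 0.6 * v[t-1] - 0.2 * v[t-2] + 0.5 * p[t] - 0.3 * p[t-1]

    X = np.column_stack([v[1:-1], v[:-2], p[2:], p[1:-1]])
    a1, a2, b0, b1 = np.linalg.lstsq(X, v[2:], rcond=None)[0]

    # Step response of the fitted model (first two samples left at zero);
    # a recovery percentage such as R5% could be read off this curve.
    step, y = np.ones(50), np.zeros(50)
    for t in range(2, 50):
        y[t] = a1 * y[t-1] + a2 * y[t-2] + b0 * step[t] + b1 * step[t-1]
    print("fitted parameters:", round(a1, 3), round(a2, 3), round(b0, 3), round(b1, 3))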
Zahedi, Edmond; Sohani, Vahid; Ali, M A Mohd; Chellappan, Kalaivani; Beng, Gan Kok
2015-01-01
The feasibility of a novel system to reliably estimate the normalized central blood pressure (CBPN) from the radial photoplethysmogram (PPG) is investigated. Right-wrist radial blood pressure and left-wrist PPG were simultaneously recorded on five different days. An industry-standard applanation tonometer was employed for recording radial blood pressure. The CBP waveform was amplitude-normalized to determine CBPN. A total of fifteen second-order autoregressive models with exogenous input were investigated using system identification techniques. Among these 15 models, the model producing the lowest coefficient of variation (CV) of the fitness during the five days was selected as the reference model. Results show that the proposed model is able to faithfully reproduce CBPN (mean fitness = 85.2% ± 2.5%) from the radial PPG for all 15 segments during the five recording days. The low CV value of 3.35% suggests a stable model valid for different recording days.
Longitudinal analysis of categorical epidemiological data: a study of Three Mile Island.
Fienberg, S E; Bromet, E J; Follmann, D; Lambert, D; May, S M
1985-11-01
The accident at the Three Mile Island nuclear power plant in 1979 led to an unprecedented set of events with potentially life threatening implications. This paper focuses on the analysis of a longitudinal study of the psychological well-being of the mothers of young children living within 10 miles of the plant. The initial analyses of the data utilize loglinear/logit model techniques from the contingency table literature, and involve the fitting of a sequence of logit models. The inadequacies of these analyses are noted, and a new class of mixture models for logistic response structures is introduced to overcome the noted shortcomings. The paper includes a brief outline of the methodology relevant for the fitting of these models using the method of maximum likelihood, and then the model is applied to the TMI data. The paper concludes with a discussion of some of the substantive implications of the mixture model analysis.
A Comparison of Two Methods for Estimating Black Hole Spin in Active Galactic Nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Capellupo, Daniel M.; Haggard, Daryl; Wafflard-Fernandez, Gaylor, E-mail: danielc@physics.mcgill.ca
Angular momentum, or spin, is a fundamental property of black holes (BHs), yet it is much more difficult to estimate than mass or accretion rate (for actively accreting systems). In recent years, high-quality X-ray observations have allowed for detailed measurements of the Fe Kα emission line, where relativistic line broadening allows constraints on the spin parameter (the X-ray reflection method). Another technique uses accretion disk models to fit the AGN continuum emission (the continuum-fitting, or CF, method). Although each technique has model-dependent uncertainties, these are the best empirical tools currently available and should be vetted in systems where both techniques can be applied. A detailed comparison of the two methods is also useful because neither method can be applied to all AGN. The X-ray reflection technique targets mostly local (z ≲ 0.1) systems, while the CF method can be applied at higher redshift, up to and beyond the peak of AGN activity and growth. Here, we apply the CF method to two AGN with X-ray reflection measurements. For both the high-mass AGN, H1821+643, and the Seyfert 1, NGC 3783, we find a range in spin parameter consistent with the X-ray reflection measurements. However, the near-maximal spin favored by the reflection method for NGC 3783 is more probable if we add a disk wind to the model. Refinement of these techniques, together with improved X-ray measurements and tighter BH mass constraints, will permit this comparison in a larger sample of AGN and increase our confidence in these spin estimation techniques.
Fitting C² Continuous Parametric Surfaces to Frontiers Delimiting Physiologic Structures
Bayer, Jason D.
2014-01-01
We present a technique to fit C² continuous parametric surfaces to scattered geometric data points forming frontiers delimiting physiologic structures in segmented images. Such a mathematical representation is interesting because it facilitates a large number of operations in modeling. While the fitting of C² continuous parametric curves to scattered geometric data points is quite trivial, the fitting of C² continuous parametric surfaces is not. The difficulty comes from the fact that each scattered data point should be assigned a unique parametric coordinate, and the fit is quite sensitive to their distribution on the parametric plane. We present a new approach where a polygonal (quadrilateral or triangular) surface is extracted from the segmented image. This surface is subsequently projected onto a parametric plane in a manner to ensure a one-to-one mapping. The resulting polygonal mesh is then regularized for area and edge length. Finally, from this point, surface fitting is relatively trivial. The novelty of our approach lies in the regularization of the polygonal mesh. Process performance is assessed with the reconstruction of a geometric model of mouse heart ventricles from a computerized tomography scan. Our results show an excellent reproduction of the geometric data with surfaces that are C² continuous. PMID:24782911
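The surface-fitting stage can be illustrated with bicubic smoothing splines, which are C² continuous; this is a minimal stand-in for the paper's method and assumes the one-to-one (u, v) parameterization and mesh regularization described above have already been performed. The toy coordinates below are invented.

    import numpy as np
    from scipy.interpolate import SmoothBivariateSpline

    rng = np.random.default_rng(0)
    u, v = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)
    x = np.cos(np.pi * u)                      # stand-in scattered geometry
    y = np.sin(np.pi * u) * v
    z = 0.2 * np.sin(2 * np.pi * v)

    # One C2 spline per spatial coordinate, all parameterized over (u, v)
    sx = SmoothBivariateSpline(u, v, x, kx=3, ky=3)
    sy = SmoothBivariateSpline(u, v, y, kx=3, ky=3)
    sz = SmoothBivariateSpline(u, v, z, kx=3, ky=3)

    uu, vv = np.linspace(0.05, 0.95, 50), np.linspace(0.05, 0.95, 50)
    surface = np.stack([s(uu, vv) for s in (sx, sy, sz)])  # 3 x 50 x 50 grid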
A New Stellar Atmosphere Grid and Comparisons with HST /STIS CALSPEC Flux Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bohlin, Ralph C.; Fleming, Scott W.; Gordon, Karl D.
The Space Telescope Imaging Spectrograph has measured the spectral energy distributions for several stars of types O, B, A, F, and G. These absolute fluxes from the CALSPEC database are fit with a new spectral grid computed from the ATLAS-APOGEE ATLAS9 model atmosphere database using a chi-square minimization technique in four parameters. The quality of the fits is compared for complete LTE grids by Castelli and Kurucz (CK04) and our new comprehensive LTE grid (BOSZ). For the cooler stars, the fits with the MARCS LTE grid are also evaluated, while the hottest stars are also fit with the NLTE Lanz and Hubeny OB star grids. Unfortunately, these NLTE models do not transition smoothly in the infrared to agree with our new BOSZ LTE grid at the NLTE lower limit of Teff = 15,000 K. The new BOSZ grid is available via the Space Telescope Institute MAST archive and has a much finer sampled IR wavelength scale than CK04, which will facilitate the modeling of stars observed by the James Webb Space Telescope. Our result for the angular diameter of Sirius agrees with the ground-based interferometric value.
A phase transition induces chaos in a predator-prey ecosystem with a dynamic fitness landscape.
Gilpin, William; Feldman, Marcus W
2017-07-01
In many ecosystems, natural selection can occur quickly enough to influence the population dynamics and thus future selection. This suggests the importance of extending classical population dynamics models to include such eco-evolutionary processes. Here, we describe a predator-prey model in which the prey population growth depends on a prey density-dependent fitness landscape. We show that this two-species ecosystem is capable of exhibiting chaos even in the absence of external environmental variation or noise, and that the onset of chaotic dynamics is the result of the fitness landscape reversibly alternating between epochs of stabilizing and disruptive selection. We draw an analogy between the fitness function and the free energy in statistical mechanics, allowing us to use the physical theory of first-order phase transitions to understand the onset of rapid cycling in the chaotic predator-prey dynamics. We use quantitative techniques to study the relevance of our model to observational studies of complex ecosystems, finding that the evolution-driven chaotic dynamics confer community stability at the "edge of chaos" while creating a wide distribution of opportunities for speciation during epochs of disruptive selection, a potential observable signature of chaotic eco-evolutionary dynamics in experimental studies.
A New Stellar Atmosphere Grid and Comparisons with HST/STIS CALSPEC Flux Distributions
NASA Astrophysics Data System (ADS)
Bohlin, Ralph C.; Mészáros, Szabolcs; Fleming, Scott W.; Gordon, Karl D.; Koekemoer, Anton M.; Kovács, József
2017-05-01
The Space Telescope Imaging Spectrograph has measured the spectral energy distributions for several stars of types O, B, A, F, and G. These absolute fluxes from the CALSPEC database are fit with a new spectral grid computed from the ATLAS-APOGEE ATLAS9 model atmosphere database using a chi-square minimization technique in four parameters. The quality of the fits is compared for complete LTE grids by Castelli & Kurucz (CK04) and our new comprehensive LTE grid (BOSZ). For the cooler stars, the fits with the MARCS LTE grid are also evaluated, while the hottest stars are also fit with the NLTE Lanz & Hubeny OB star grids. Unfortunately, these NLTE models do not transition smoothly in the infrared to agree with our new BOSZ LTE grid at the NLTE lower limit of Teff = 15,000 K. The new BOSZ grid is available via the Space Telescope Institute MAST archive and has a much finer sampled IR wavelength scale than CK04, which will facilitate the modeling of stars observed by the James Webb Space Telescope. Our result for the angular diameter of Sirius agrees with the ground-based interferometric value.
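The four-parameter chi-square minimization can be sketched as a brute-force search over a precomputed model grid; the dictionary name, parameter tuple and free normalization below are assumptions for illustration, not the paper's code.

    import numpy as np

    def best_fit(obs_flux, obs_err, grid):
        # grid: {(teff, logg, mh, ebv): model_flux on the observed wavelengths}
        w = 1.0 / obs_err**2
        best, best_chi2 = None, np.inf
        for params, model_flux in grid.items():
            # analytic scale factor minimizing chi-square for this model
            scale = np.sum(w * obs_flux * model_flux) / np.sum(w * model_flux**2)
            chi2 = np.sum(w * (obs_flux - scale * model_flux)**2)
            if chi2 < best_chi2:
                best, best_chi2 = params, chi2
        return best, best_chi2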
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2014-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan Walker
2015-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
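The residual-monitoring idea can be reduced to a few lines; this is a generic sketch, not NASA's implementation, and the per-output noise scales and threshold are assumed inputs.

    import numpy as np

    def detect_anomalies(sensed, predicted, sigma, threshold=3.0):
        # sensed, predicted: (n_samples, n_outputs); sigma: per-output noise scale
        residuals = (sensed - predicted) / sigma
        scores = np.linalg.norm(residuals, axis=1)   # one score per time step
        return scores > threshold, scores

In the architecture above, the predictions would come from the piecewise linear engine model whose trim points are periodically re-fitted from nominal steady-state data.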
Bayesian Estimation in the One-Parameter Latent Trait Model.
1980-03-01
ERIC Educational Resources Information Center
Kobrin, Jennifer L.; Sinharay, Sandip; Haberman, Shelby J.; Chajewski, Michael
2011-01-01
This study examined the adequacy of a multiple linear regression model for predicting first-year college grade point average (FYGPA) using SAT® scores and high school grade point average (HSGPA). A variety of techniques, both graphical and statistical, were used to examine if it is possible to improve on the linear regression model. The results…
Regression analysis of current-status data: an application to breast-feeding.
Grummer-Strawn, L M
1993-09-01
"Although techniques for calculating mean survival time from current-status data are well known, their use in multiple regression models is somewhat troublesome. Using data on current breast-feeding behavior, this article considers a number of techniques that have been suggested in the literature, including parametric, nonparametric, and semiparametric models as well as the application of standard schedules. Models are tested in both proportional-odds and proportional-hazards frameworks....I fit [the] models to current status data on breast-feeding from the Demographic and Health Survey (DHS) in six countries: two African (Mali and Ondo State, Nigeria), two Asian (Indonesia and Sri Lanka), and two Latin American (Colombia and Peru)." excerpt
Videodensitometric Methods for Cardiac Output Measurements
NASA Astrophysics Data System (ADS)
Mischi, Massimo; Kalker, Ton; Korsten, Erik
2003-12-01
Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. Developments of more stable ultrasound contrast agents (UCA) are leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation, with determination coefficients larger than 0.95 and 0.99, respectively.
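As a rough illustration, a dilution curve can be fitted with one common parameterization of the Local Density Random Walk model; the paper's algorithm is based on multiple linear regression, so generic nonlinear least squares is used here only as a stand-in, and the functional form, parameter names and data are assumptions to be checked against the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    def ldrw(t, auc, mu, lam, t0):
        # mu: mean transit time, lam: skewness parameter, t0: injection time
        tau = np.clip(t - t0, 1e-9, None)
        return (auc * np.exp(lam) * np.sqrt(lam / (2 * np.pi * mu * tau))
                * np.exp(-0.5 * lam * (tau / mu + mu / tau)))

    t = np.linspace(0, 30, 300)
    obs = ldrw(t, 100, 8, 3, 1) + np.random.default_rng(1).normal(0, 0.2, t.size)
    popt, _ = curve_fit(ldrw, t, obs, p0=[80, 6, 2, 0.5])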
Exploring novel objective functions for simulating muscle coactivation in the neck.
Mortensen, J; Trkov, M; Merryweather, A
2018-04-11
Musculoskeletal modeling allows for analysis of individual muscles in various situations. However, current techniques to realistically simulate muscle response when significant intentional coactivation is required are inadequate. This would include stiffening the neck or spine through muscle coactivation in preparation for perturbations or impacts. Muscle coactivation has been modeled previously in the neck and spine using optimization techniques that seek to maximize joint stiffness by maximizing total muscle activation or muscle force. These approaches have not sought to replicate human response, but rather to explore the possible effects of active muscle. Coactivation remains a challenging feature to include in musculoskeletal models, and may be improved by extracting optimization objective functions from experimental data. However, the components of such an objective function must be known before fitting to experimental data. This study explores the effect of components in several objective functions, in order to recommend components to be used for fitting to experimental data. Four novel approaches to modeling coactivation through optimization techniques are presented, two of which produce greater levels of stiffness than previous techniques. Simulations were performed using OpenSim and MATLAB cooperatively. Results show that maximizing the moment generated by a particular muscle appears analogous to maximizing joint stiffness. The approach of optimizing for the maximum moment generated by individual muscles may be a good candidate for developing objective functions that accurately simulate muscle coactivation in complex joints. This new approach will be the focus of future studies with human subjects. Copyright © 2018 Elsevier Ltd. All rights reserved.
3D spherical-cap fitting procedure for (truncated) sessile nano- and micro-droplets & -bubbles.
Tan, Huanshu; Peng, Shuhua; Sun, Chao; Zhang, Xuehua; Lohse, Detlef
2016-11-01
In the study of nanobubbles, nanodroplets or nanolenses immobilised on a substrate, a cross-section of a spherical cap is widely applied to extract geometrical information from atomic force microscopy (AFM) topographic images. In this paper, we have developed a comprehensive 3D spherical-cap fitting procedure (3D-SCFP) to extract morphologic characteristics of complete or truncated spherical caps from AFM images. Our procedure integrates several advanced digital image analysis techniques to construct a 3D spherical-cap model, from which the geometrical parameters of the nanostructures are extracted automatically by a simple algorithm. The procedure takes into account all valid data points in the construction of the 3D spherical-cap model to achieve high fidelity in morphology analysis. We compare our 3D fitting procedure with the commonly used 2D cross-sectional profile fitting method to determine the contact angle of a complete spherical cap and a truncated spherical cap. The results from 3D-SCFP are consistent and accurate, while 2D fitting is unavoidably arbitrary in the selection of the cross-section and has a much lower number of data points on which the fitting can be based, which in addition is biased to the top of the spherical cap. We expect that the developed 3D spherical-cap fitting procedure will find many applications in imaging analysis.
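The core geometric step can be sketched as a linear least-squares sphere fit to the AFM points, followed by the contact angle of the cap cut by the substrate plane z = 0; the mesh projection and regularization stages described above are omitted, and the demo points are synthetic.

    import numpy as np

    def fit_sphere(pts):
        # Algebraic fit: |p|^2 = 2 c.p + (R^2 - |c|^2), linear in the unknowns
        A = np.c_[2 * pts, np.ones(len(pts))]
        b = (pts**2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center = sol[:3]
        R = np.sqrt(sol[3] + center @ center)
        return center, R

    def contact_angle_deg(center, R):
        # For a cap resting on z = 0: cos(theta) = -zc / R
        return np.degrees(np.arccos(np.clip(-center[2] / R, -1.0, 1.0)))

    # demo: noisy points on part of a sphere of radius 5 centered at (0, 0, -3)
    rng = np.random.default_rng(0)
    phi = rng.uniform(0, 2 * np.pi, 400)
    costh = rng.uniform(0.75, 1.0, 400)
    sinth = np.sqrt(1 - costh**2)
    pts = np.c_[5 * sinth * np.cos(phi), 5 * sinth * np.sin(phi), 5 * costh - 3]
    c, R = fit_sphere(pts + 0.01 * rng.normal(size=pts.shape))
    print(contact_angle_deg(c, R))   # ~ 53 degrees (= arccos(3/5))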
Aab, A.; Abreu, P.; Aglietta, M.; ...
2014-12-01
Using the data taken at the Pierre Auger Observatory between December 2004 and December 2012, we have examined the implications of the distributions of depths of atmospheric shower maximum (Xmax), using a hybrid technique, for composition and hadronic interaction models. We do this by fitting the distributions with predictions from a variety of hadronic interaction models for variations in the composition of the primary cosmic rays and examining the quality of the fit. Regardless of what interaction model is assumed, we find that our data are not well described by a mix of protons and iron nuclei over most of the energy range. Acceptable fits can be obtained when intermediate masses are included, and when this is done consistent results for the proton and iron-nuclei contributions can be found using the available models. We observe a strong energy dependence of the resulting proton fractions, and find no support from any of the models for a significant contribution from iron nuclei. However, we also observe a significant disagreement between the models with respect to the relative contributions of the intermediate components.
de Monchy, Romain; Rouyer, Julien; Destrempes, François; Chayer, Boris; Cloutier, Guy; Franceschini, Emilie
2018-04-01
Quantitative ultrasound techniques based on the backscatter coefficient (BSC) have been commonly used to characterize red blood cell (RBC) aggregation. Specifically, a scattering model is fitted to measured BSC and estimated parameters can provide a meaningful description of the RBC aggregates' structure (i.e., aggregate size and compactness). In most cases, scattering models assumed monodisperse RBC aggregates. This study proposes the Effective Medium Theory combined with the polydisperse Structure Factor Model (EMTSFM) to incorporate the polydispersity of aggregate size. From the measured BSC, this model allows estimating three structural parameters: the mean radius of the aggregate size distribution, the width of the distribution, and the compactness of the aggregates. Two successive experiments were conducted: a first experiment on blood sheared in a Couette flow device coupled with an ultrasonic probe, and a second experiment, on the same blood sample, sheared in a plane-plane rheometer coupled to a light microscope. Results demonstrated that the polydisperse EMTSFM provided the best fit to the BSC data when compared to the classical monodisperse models for the higher levels of aggregation at hematocrits between 10% and 40%. Fitting the polydisperse model yielded aggregate size distributions that were consistent with direct light microscope observations at low hematocrits.
NASA Astrophysics Data System (ADS)
D, Meena; Francis, Fredy; T, Sarath K.; E, Dipin; Srinivas, T.; K, Jayasree V.
2014-10-01
Wavelength Division Multiplexing (WDM) techniques over fibre links help to exploit the high bandwidth capacity of single-mode fibres. A typical WDM link consisting of a laser source, multiplexer/demultiplexer, amplifier and detector is considered for obtaining the open-loop gain model of the link. The methodology used here is to obtain individual component models using mathematical and curve-fitting techniques. These individual models are then combined to obtain the WDM link model. The objective is to deduce a single-variable model for the WDM link in terms of the input current to the system, thus providing a black-box solution for the link. The Root Mean Square Error (RMSE) associated with each of the approximated models is given for comparison. This will help the designer to select a suitable WDM link model during a complex link design.
NASA Technical Reports Server (NTRS)
Bedewi, Nabih E.; Yang, Jackson C. S.
1987-01-01
Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The results of an experiment conducted on an offshore platform scale model to verify the validity of the technique and to demonstrate its application in damage detection are presented.
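A toy single-degree-of-freedom version of the least-squares step (the paper estimates full mass, damping and stiffness matrices): given a Random Decrement signature, i.e. a free-decay response y(t), the model y'' = -2*zeta*wn*y' - wn^2*y is fitted by linear least squares on numerically differentiated data. The signal below is synthetic.

    import numpy as np

    def fit_sdof(y, dt):
        v = np.gradient(y, dt)               # velocity estimate
        a = np.gradient(v, dt)               # acceleration estimate
        (c1, c2), *_ = np.linalg.lstsq(np.c_[-v, -y], a, rcond=None)
        wn = np.sqrt(c2)                     # natural frequency (rad/s)
        zeta = c1 / (2 * wn)                 # damping ratio
        return wn, zeta

    t = np.arange(0, 10, 0.01)
    y = np.exp(-0.05 * 2 * np.pi * t) * np.cos(2 * np.pi * t)
    print(fit_sdof(y, 0.01))                 # ~ (2*pi, 0.05)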
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
CalFitter: a web server for analysis of protein thermal denaturation data.
Mazurenko, Stanislav; Stourac, Jan; Kunka, Antonin; Nedeljkovic, Sava; Bednar, David; Prokop, Zbynek; Damborsky, Jiri
2018-05-14
Despite significant advances in the understanding of protein structure-function relationships, revealing protein folding pathways still poses a challenge due to a limited number of relevant experimental tools. Widely-used experimental techniques, such as calorimetry or spectroscopy, critically depend on a proper data analysis. Currently, there are only separate data analysis tools available for each type of experiment with a limited model selection. To address this problem, we have developed the CalFitter web server to be a unified platform for comprehensive data fitting and analysis of protein thermal denaturation data. The server allows simultaneous global data fitting using any combination of input data types and offers 12 protein unfolding pathway models for selection, including irreversible transitions often missing from other tools. The data fitting produces optimal parameter values, their confidence intervals, and statistical information to define unfolding pathways. The server provides an interactive and easy-to-use interface that allows users to directly analyse input datasets and simulate modelled output based on the model parameters. CalFitter web server is available free at https://loschmidt.chemi.muni.cz/calfitter/.
ERIC Educational Resources Information Center
Boker, Steven M.; Nesselroade, John R.
2002-01-01
Examined two methods for fitting models of intrinsic dynamics to intraindividual variability data by testing these techniques' behavior in equations through simulation studies. Among the main results is the demonstration that a local linear approximation of derivatives can accurately recover the parameters of a simulated linear oscillator, with…
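A minimal sketch of the local linear approximation (LLA) idea named above: derivatives are estimated from lagged differences of the observed series, and a damped linear oscillator x'' = eta*x + zeta*x' is then fitted by least squares. The lag, step size and simulated data are illustrative choices, not those of the study.

    import numpy as np

    def lla_oscillator_fit(x, dt, tau=4):
        x0 = x[tau:-tau]
        d1 = (x[2 * tau:] - x[:-2 * tau]) / (2 * tau * dt)          # x'
        d2 = (x[2 * tau:] - 2 * x0 + x[:-2 * tau]) / (tau * dt)**2  # x''
        coef, *_ = np.linalg.lstsq(np.c_[x0, d1], d2, rcond=None)
        return coef                       # (eta, zeta); eta < 0 -> oscillation

    t = np.arange(0, 60, 0.1)
    x = np.exp(-0.02 * t) * np.sin(t)     # simulated damped oscillator
    print(lla_oscillator_fit(x, 0.1))     # eta ~ -1.0, zeta ~ -0.04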
Samuel V. Glass; Charles R. Boardman; Samuel L. Zelinka
2017-01-01
Recently, the dynamic vapor sorption (DVS) technique has been used to measure sorption isotherms and develop moisture-mechanics models for wood and cellulosic materials. This method typically involves measuring the time-dependent mass response of a sample following step changes in relative humidity (RH), fitting a kinetic model to the data, and extrapolating the...
ERIC Educational Resources Information Center
Healy, Michael R.; Light, Leah L.; Chung, Christie
2005-01-01
In 3 experiments, young and older adults studied lists of unrelated word pairs and were given confidence-rated item and associative recognition tests. Several different models of recognition were fit to the confidence-rating data using techniques described by S. Macho (2002, 2004). Concordant with previous findings, item recognition data were best…
Consistent data-driven computational mechanics
NASA Astrophysics Data System (ADS)
González, D.; Chinesta, F.; Cueto, E.
2018-05-01
We present a novel method, within the realm of data-driven computational mechanics, to obtain reliable and thermodynamically sound simulations from experimental data. We thus avoid the need to fit any phenomenological model in the construction of the simulation model. This kind of technique opens unprecedented possibilities in the framework of data-driven application systems and, particularly, in the paradigm of Industry 4.0.
Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.
Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko
2016-03-01
In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
MOPET: a context-aware and user-adaptive wearable system for fitness training.
Buttussi, Fabio; Chittaro, Luca
2008-02-01
Cardiovascular disease, obesity, and lack of physical fitness are increasingly common and negatively affect people's health, requiring medical assistance and decreasing people's wellness and productivity. In the last years, researchers as well as companies have been increasingly investigating wearable devices for fitness applications with the aim of improving user's health, in terms of cardiovascular benefits, loss of weight or muscle strength. Dedicated GPS devices, accelerometers, step counters and heart rate monitors are already commercially available, but they are usually very limited in terms of user interaction and artificial intelligence capabilities. This significantly limits the training and motivation support provided by current systems, making them poorly suited for untrained people who are more interested in fitness for health rather than competitive purposes. To better train and motivate users, we propose the mobile personal trainer (MOPET) system. MOPET is a wearable system that supervises a physical fitness activity based on alternating jogging and fitness exercises in outdoor environments. By exploiting real-time data coming from sensors, knowledge elicited from a sport physiologist and a professional trainer, and a user model that is built and periodically updated through a guided autotest, MOPET can provide motivation as well as safety and health advice, adapted to the user and the context. To better interact with the user, MOPET also displays a 3D embodied agent that speaks, suggests stretching or strengthening exercises according to user's current condition, and demonstrates how to correctly perform exercises with interactive 3D animations. By describing MOPET, we show how context-aware and user-adaptive techniques can be applied to the fitness domain. In particular, we describe how such techniques can be exploited to train, motivate, and supervise users in a wearable personal training system for outdoor fitness activity.
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna
2011-01-01
A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157
Imaging and modelling root water uptake
NASA Astrophysics Data System (ADS)
Zarebanadkouki, M.; Meunier, F.; Javaux, M.; Kaestner, A.; Carminati, A.
2017-12-01
Spatially resolved measurement and modelling of root water uptake is urgently needed to identify root traits that can improve the capture of water from the soil. However, measuring water fluxes into the roots of transpiring plants growing in soil remains challenging. Here, we describe an in-situ technique to measure local fluxes of water into roots. The technique consists of tracing the transport of deuterated water (D2O) in soil and roots using time-series neutron radiography and tomography. A diffusion-convection model was used to model the transport of D2O in roots. The model includes root features such as the endodermis, xylem and the composite flow of water in the apoplastic and symplastic pathways. The diffusion permeability of root cells and of the endodermis were estimated by fitting the experiment during the night, when transpiration was negligible. The water fluxes at different positions in the root system were obtained by fitting the experiments at daytime. The results showed that root water uptake was not uniform along the root system and varied among different root types. The measured profiles of water uptake into roots were used to estimate the radial and axial hydraulic conductivities of the roots. A three-dimensional model of root water uptake was used to fit the measured water fluxes by adjusting the root radial and axial hydraulic conductivities. We found that the estimated radial conductivities decreased with root age, while the axial conductances increased, and both differed among root types. The significance of this study is the development of a method to estimate 1) water uptake and 2) the radial and axial hydraulic conductivities of the roots of transpiring plants growing in soil.
Measurements of strain at plate boundaries using space based geodetic techniques
NASA Technical Reports Server (NTRS)
Robaudo, Stefano; Harrison, Christopher G. A.
1993-01-01
We have used the space-based geodetic techniques of Satellite Laser Ranging (SLR) and VLBI to study strain along subduction and transform plate boundaries and have interpreted the results using a simple elastic dislocation model. Six stations located behind island arcs were analyzed as representative of subduction zones, while 13 sites located on either side of the San Andreas fault were used for the transcurrent zones. The deformation length scale was then calculated for both tectonic margins by fitting the relative strain to an exponentially decreasing function of distance from the plate boundary. Results show that space-based data for the transcurrent boundary along the San Andreas fault help to better define the deformation length scale in the area while fitting the elastic half-space earth model nicely. For subduction-type boundaries, the analysis indicates that there is no single scale length which uniquely describes the deformation. This is mainly due to the difference in subduction characteristics among the different areas.
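A minimal sketch of the length-scale fit described above: relative strain modeled as an exponentially decreasing function of distance from the plate boundary, strain(d) = s0 * exp(-d / L), with L the deformation length scale. The data values are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def strain_model(d, s0, L):
        return s0 * np.exp(-d / L)

    d_km = np.array([10, 25, 50, 100, 200, 400], float)      # site distances
    strain = np.array([0.95, 0.80, 0.62, 0.35, 0.15, 0.03])  # relative strain
    (s0, L), _ = curve_fit(strain_model, d_km, strain, p0=[1.0, 100.0])
    print(f"deformation length scale ~ {L:.0f} km")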
Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques.
Barzaghi, Riccardo; Cazzaniga, Noemi Emanuela; De Gaetani, Carlo Iapige; Pinto, Livio; Tornatore, Vincenza
2018-03-02
Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to the recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D'Arborea (Cantoniera) dam, which is in Sardinia. Time series of pendulum and GNSS data over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit pendulum data and GNSS data, with standard deviations of residuals smaller than one millimeter. These encouraging results led to the conclusion that the GNSS technique can be profitably applied to dam monitoring, allowing a denser description, both in space and time, of dam displacements than the one based on pendulum observations.
Abbas, Aamir; Ihsanullah; Al-Baghli, Nadhir A. H.
2017-01-01
Multiwall carbon nanotubes (CNTs) and iron oxide impregnated carbon nanotubes (CNTs-iron oxide) were investigated for the adsorption of hazardous toluene and paraxylene (p-xylene) from aqueous solution. Pure CNTs were impregnated with iron oxide nanoparticles using a wet impregnation technique. Various characterization techniques, including thermogravimetric analysis, scanning electron microscopy, elemental dispersion spectroscopy, X-ray diffraction, and nitrogen adsorption analysis, were used to study the thermal degradation, surface morphology, purity, and surface area of the materials. Batch adsorption experiments show that iron oxide impregnated CNTs achieve a higher degree of removal of p-xylene (i.e., 90%) compared with toluene (i.e., 70%), for a soaking time of 2 h, an initial pollutant concentration of 100 ppm, pH 6 and a shaking speed of 200 rpm at 25°C. A pseudo-second-order model provides the better fit for the toluene and p-xylene adsorption kinetics. The Langmuir and Freundlich isotherm models demonstrate good fits for the adsorption data of toluene and p-xylene. PMID:28386208
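The two standard fits named above can be sketched with invented data: pseudo-second-order kinetics q(t) = k*qe^2*t / (1 + k*qe*t) and the Langmuir isotherm qe(Ce) = qm*b*Ce / (1 + b*Ce); the Freundlich form qe = KF*Ce^(1/n) would be fitted the same way.

    import numpy as np
    from scipy.optimize import curve_fit

    def pso(t, qe, k):                       # pseudo-second-order kinetics
        return k * qe**2 * t / (1 + k * qe * t)

    def langmuir(Ce, qm, b):                 # Langmuir isotherm
        return qm * b * Ce / (1 + b * Ce)

    t = np.array([5, 10, 20, 40, 60, 90, 120], float)    # min (invented)
    q = np.array([12, 19, 27, 33, 36, 38, 39], float)    # mg/g (invented)
    (qe, k), _ = curve_fit(pso, t, q, p0=[40, 1e-3])

    Ce = np.array([2, 5, 10, 20, 40, 60], float)         # mg/L (invented)
    qeq = np.array([8, 17, 26, 34, 40, 42], float)       # mg/g (invented)
    (qm, b), _ = curve_fit(langmuir, Ce, qeq, p0=[50, 0.1])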
Brookings, Ted; Goeritz, Marie L; Marder, Eve
2014-11-01
We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current-clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between biological and model neuron is inevitable and results in poor phenomenological match between the model and data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy. Copyright © 2014 the American Physiological Society.
New Approaches For Asteroid Spin State and Shape Modeling From Delay-Doppler Radar Images
NASA Astrophysics Data System (ADS)
Raissi, Chedy; Lamee, Mehdi; Mosiane, Olorato; Vassallo, Corinne; Busch, Michael W.; Greenberg, Adam; Benner, Lance A. M.; Naidu, Shantanu P.; Duong, Nicholas
2016-10-01
Delay-Doppler radar imaging is a powerful technique to characterize the trajectories, shapes, and spin states of near-Earth asteroids, and has yielded detailed models of dozens of objects. Reconstructing objects' shapes and spins from delay-Doppler data is a computationally intensive inversion problem. Since the 1990s, delay-Doppler data have been analyzed using the SHAPE software. SHAPE performs sequential single-parameter fitting, and requires considerable computer runtime and human intervention (Hudson 1993, Magri et al. 2007). Recently, multiple-parameter fitting algorithms have been shown to more efficiently invert delay-Doppler datasets (Greenberg & Margot 2015), decreasing runtime while improving accuracy. However, extensive human oversight of the shape modeling process is still required. We have explored two new techniques to better automate delay-Doppler shape modeling: Bayesian optimization and a machine-learning neural network. One of the most time-intensive steps of the shape modeling process is to perform a grid search to constrain the target's spin state. We have implemented a Bayesian optimization routine that uses SHAPE to autonomously search the space of spin-state parameters. To test the efficacy of this technique, we compared it to results with human-guided SHAPE for asteroids 1992 UY4, 2000 RS11, and 2008 EV5. Bayesian optimization yielded similar spin-state constraints with a factor of 3 less computer runtime. The shape modeling process could be further accelerated using a deep neural network to replace iterative fitting. We have implemented a neural network with a variational autoencoder (VAE), using a subset of known asteroid shapes and a large set of synthetic radar images as inputs to train the network. Conditioning the VAE in this manner allows the user to give the network a set of radar images and get a 3D shape model as an output. Additional development will be required to train a network to reliably render shapes from delay-Doppler images. This work was supported by NASA Ames, NVIDIA, Autodesk and the SETI Institute as part of the NASA Frontier Development Lab program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolphin, Andrew E., E-mail: adolphin@raytheon.com
The combination of spectroscopic stellar metallicities and resolved star color–magnitude diagrams (CMDs) has the potential to constrain the entire star formation history (SFH) of a galaxy better than fitting CMDs alone (as is most common in SFH studies using resolved stellar populations). In this paper, two approaches to incorporating external metallicity information into CMD-fitting techniques are presented. Overall, the joint fitting of metallicity and CMD information can increase the precision of measured age–metallicity relationships (AMRs) and star formation rates by 10% over CMD fitting alone. However, systematics in stellar isochrones and mismatches between spectroscopic and photometric determinations of metallicity can reduce the accuracy of the recovered SFHs. I present a simple mitigation of these systematics that can reduce their amplitude to the level obtained from CMD fitting alone, while ensuring that the AMR is consistent with spectroscopic metallicities. As is the case in CMD-fitting analysis, improved stellar models and calibrations between spectroscopic and photometric metallicities are currently the primary impediment to gains in SFH precision from jointly fitting stellar metallicities and CMDs.
Inter-technique validation of tropospheric slant total delays
NASA Astrophysics Data System (ADS)
Kačmařík, Michal; Douša, Jan; Dick, Galina; Zus, Florian; Brenot, Hugues; Möller, Gregor; Pottiaux, Eric; Kapłon, Jan; Hordyniec, Paweł; Václavovic, Pavel; Morel, Laurent
2017-06-01
An extensive validation of line-of-sight tropospheric slant total delays (STD) from Global Navigation Satellite Systems (GNSS), ray tracing in numerical weather prediction model (NWM) fields and a microwave water vapour radiometer (WVR) is presented. Ten GNSS reference stations, including collocated sites, and almost 2 months of data from 2013, including severe weather events, were used for the comparison. Seven institutions delivered their STDs based on GNSS observations processed using 5 software programs and 11 strategies, enabling comparison of rather different solutions and assessment of the impact of several aspects of the processing strategy. STDs from NWM ray tracing came from three institutions using three different NWMs and ray-tracing software. Inter-technique evaluations demonstrated a good mutual agreement of various GNSS STD solutions compared to NWM and WVR STDs. The mean bias among GNSS solutions not considering post-fit residuals in STDs was -0.6 mm for STDs scaled in the zenith direction, and the mean standard deviation was 3.7 mm. Standard deviations of comparisons between GNSS and NWM ray-tracing solutions were typically 10 mm ± 2 mm (scaled in the zenith direction), depending on the NWM model and the GNSS station. Comparing GNSS versus WVR STDs reached standard deviations of 12 mm ± 2 mm, also scaled in the zenith direction. The impacts of raw GNSS post-fit residuals and cleaned residuals on optimally reconstructing GNSS STDs were evaluated in the inter-technique comparison and for GNSS at collocated sites. The use of raw post-fit residuals is not generally recommended, as they might contain strong systematic effects, as demonstrated in the case of station LDB0. Simplified STDs reconstructed only from estimated GNSS tropospheric parameters, i.e. without applying post-fit residuals, performed the best in all the comparisons; however, they obviously missed part of the tropospheric signal due to non-linear temporal and spatial variations in the troposphere. Although the post-fit residuals cleaned of visible systematic errors generally showed a slightly worse performance, they contained significant tropospheric signal on top of the simplified model. They are thus recommended for the reconstruction of STDs, particularly during high variability in the troposphere. Cleaned residuals also showed a stable performance during ordinary days while containing promising information about the troposphere at low-elevation angles.
Simultaneous parameter optimization of x-ray and neutron reflectivity data using genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Surendra, E-mail: surendra@barc.gov.in; Basu, Saibal
2016-05-23
X-ray and neutron reflectivity are two non-destructive techniques which provide a wealth of information on thickness, structure and interfacial properties at nanometer length scales. The combination of X-ray and neutron reflectivity is well suited for obtaining physical parameters of nanostructured thin films and superlattices. Neutrons provide a different contrast between the elements than X-rays and are also sensitive to the magnetization depth profile in thin films and superlattices. The real-space information is extracted by fitting a model for the structure of the thin film sample in reflectometry experiments. We have applied a Genetic Algorithms technique to extract the depth-dependent structure and magnetism in thin film and multilayer systems by simultaneously fitting X-ray and neutron reflectivity data.
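A schematic joint fit in the spirit of the approach above, using scipy's differential evolution (itself an evolutionary method) as a stand-in for the authors' genetic algorithm. The single-slab "reflectivity" is a toy kinematic expression: the shared structural parameters (thickness, roughness) are constrained by both datasets, while the contrast differs between probes.

    import numpy as np
    from scipy.optimize import differential_evolution

    def toy_refl(q, d, sigma, rho):
        # toy single-slab reflectivity: thickness fringes damped by roughness
        return (rho * np.sinc(q * d / np.pi))**2 * np.exp(-(q * sigma)**2)

    def combined_chi2(p, q, Rx, Rn):
        d, sigma, rho_x, rho_n = p           # shared structure, two contrasts
        return (np.sum((Rx - toy_refl(q, d, sigma, rho_x))**2)
                + np.sum((Rn - toy_refl(q, d, sigma, rho_n))**2))

    q = np.linspace(0.01, 0.3, 120)
    rng = np.random.default_rng(2)
    Rx = toy_refl(q, 40, 0.5, 2.0) + 1e-4 * rng.normal(size=q.size)
    Rn = toy_refl(q, 40, 0.5, 0.8) + 1e-4 * rng.normal(size=q.size)
    res = differential_evolution(combined_chi2,
                                 [(5, 100), (0, 2), (0.1, 5), (0.1, 5)],
                                 args=(q, Rx, Rn), seed=0)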
Focal ratio degradation: a new perspective
NASA Astrophysics Data System (ADS)
Haynes, Dionne M.; Withford, Michael J.; Dawes, Judith M.; Haynes, Roger; Bland-Hawthorn, Joss
2008-07-01
We have developed an alternative FRD empirical model for the parallel laser beam technique which can accommodate contributions from both scattering and modal diffusion. It is consistent with scattering inducing a Lorentzian contribution and modal diffusion inducing a Gaussian contribution. The convolution of these two functions produces a Voigt function which is shown to better simulate the observed behavior of the FRD distribution and provides a greatly improved fit over the standard Gaussian fitting approach. The Voigt model can also be used to quantify the amount of energy displaced by FRD, therefore allowing astronomical instrument scientists to identify, quantify and potentially minimize the various sources of FRD, and optimise the fiber and instrument performance.
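The Voigt-based fit is straightforward to sketch because scipy's voigt_profile is exactly the convolution of a Gaussian and a Lorentzian, so the fitted sigma and gamma separate the modal-diffusion and scattering contributions. The angle axis and data below are simulated, not measurements.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import voigt_profile

    def frd_profile(theta, amp, center, sigma, gamma):
        return amp * voigt_profile(theta - center, sigma, gamma)

    theta = np.linspace(-10, 10, 400)            # output angle (arbitrary units)
    obs = frd_profile(theta, 5.0, 0.0, 1.2, 0.4)
    obs += 0.002 * np.random.default_rng(4).normal(size=theta.size)
    popt, _ = curve_fit(frd_profile, theta, obs, p0=[1, 0, 1, 0.1],
                        bounds=([0, -5, 1e-3, 0], [np.inf, 5, 10, 10]))
    amp, center, sigma, gamma = popt             # Gaussian vs Lorentzian widths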
Tarlak, Fatih; Ozdemir, Murat; Melikoglu, Mehmet
2018-02-02
The growth data of Pseudomonas spp. on sliced mushrooms (Agaricus bisporus) stored between 4 and 28°C were obtained and fitted to three different primary models, known as the modified Gompertz, logistic and Baranyi models. The goodness of fit of these models was compared by considering the mean squared error (MSE) and the coefficient of determination for nonlinear regression (pseudo-R²). The Baranyi model yielded the lowest MSE and highest pseudo-R² values. Therefore, the Baranyi model was selected as the best primary model. The maximum specific growth rate (r_max) and lag phase duration (λ) obtained from the Baranyi model were fitted to secondary models, namely the Ratkowsky and Arrhenius models. High pseudo-R² and low MSE values indicated that the Arrhenius model has a high goodness of fit for determining the effect of temperature on r_max. The observed number of Pseudomonas spp. on sliced mushrooms from independent experiments was compared with the number predicted by the models by considering the B_f and A_f values, which were found to be 0.974 and 1.036, respectively. The correlation between the observed and predicted number of Pseudomonas spp. was high. Mushroom spoilage was simulated as a function of temperature with the models used. The models used for Pseudomonas spp. growth can provide a fast and cost-effective alternative to traditional microbiological techniques to determine the effect of storage temperature on product shelf-life. The models can be used to evaluate the growth behaviour of Pseudomonas spp. on sliced mushroom, set limits for the quantitative detection of microbial spoilage and assess product shelf-life. Copyright © 2017 Elsevier B.V. All rights reserved.
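One primary/secondary pair named above can be sketched as follows, with invented counts and rates: the modified Gompertz primary model for log counts, and the Ratkowsky square-root secondary model for the temperature dependence of r_max.

    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, y0, A, rmax, lam):
        # modified Gompertz: y(t) = y0 + A*exp(-exp(rmax*e*(lam - t)/A + 1))
        return y0 + A * np.exp(-np.exp(rmax * np.e * (lam - t) / A + 1))

    def ratkowsky(T, b, Tmin):
        # sqrt(rmax) = b*(T - Tmin)  =>  rmax = (b*(T - Tmin))**2
        return (b * (T - Tmin))**2

    t = np.array([0, 12, 24, 36, 48, 72, 96], float)         # h (invented)
    logN = np.array([3.0, 3.1, 3.9, 5.2, 6.3, 7.4, 7.6])     # log CFU/g
    (y0, A, rmax, lam), _ = curve_fit(gompertz, t, logN, p0=[3, 5, 0.1, 10])

    T = np.array([4, 8, 12, 16, 20, 24, 28], float)          # deg C (invented)
    r = np.array([0.02, 0.05, 0.09, 0.15, 0.22, 0.31, 0.41]) # 1/h
    (b, Tmin), _ = curve_fit(ratkowsky, T, r, p0=[0.02, -5])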
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.
Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model is capable of incorporating a general correlation structure and accounts for the overdispersion in the data, which leads to a superior data fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits the use of the MVPLN model in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using the pedestrian vehicle crash data collected in New York City from 2002 to 2006, and the highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fitting. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity accounting for correlations.
Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.
2015-11-19
Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model is capable of incorporating a general correlation structure and accounts for the overdispersion in the data, which leads to a superior data fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits the use of the MVPLN model in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using the pedestrian vehicle crash data collected in New York City from 2002 to 2006, and the highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fitting. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity accounting for correlations.
Hospital survey on patient safety culture: psychometric analysis on a Scottish sample.
Sarac, Cakil; Flin, Rhona; Mearns, Kathryn; Jackson, Jeanette
2011-10-01
To investigate the psychometric properties of the Hospital Survey on Patient Safety Culture on a Scottish NHS data set. The data were collected from 1969 clinical staff (estimated 22% response rate) from one acute hospital in each of seven Scottish Health boards. Using a split-half validation technique, the data were randomly split; an exploratory factor analysis was conducted on the calibration data set, and confirmatory factor analyses were conducted on the validation data set to investigate and check the original US model fit in a Scottish sample. Following the split-half validation technique, exploratory factor analysis results showed a 10-factor optimal measurement model. Confirmatory factor analyses were then performed to compare the model fit of two competing models (10-factor alternative model vs 12-factor original model). A Satorra-Bentler scaled χ² difference test demonstrated that the original 12-factor model performed significantly better in a Scottish sample. Furthermore, reliability analyses of each component yielded satisfactory results. The mean scores on the climate dimensions in the Scottish sample were comparable with those found in other European countries. This study provided evidence that the original 12-factor structure of the Hospital Survey on Patient Safety Culture scale has been replicated in this Scottish sample. Therefore, no modifications are required to the original 12-factor model, which is suggested for use, since it would allow researchers the possibility of cross-national comparisons.
Event-scale power law recession analysis: quantifying methodological uncertainty
NASA Astrophysics Data System (ADS)
Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.
2017-01-01
The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power-law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power-law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal Mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering these results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
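The event-scale fit itself is compact: for each recession event, -dQ/dt = a*Q^b is linearized as log(-dQ/dt) = log(a) + b*log(Q) and fitted by ordinary least squares. This is one of the several fitting choices whose sensitivity the study examines, and the synthetic limb below is illustrative only.

    import numpy as np

    def fit_recession_event(Q, dt=1.0):
        dQdt = np.diff(Q) / dt
        Qmid = 0.5 * (Q[1:] + Q[:-1])
        mask = dQdt < 0                       # keep strictly receding steps
        b, log_a = np.polyfit(np.log(Qmid[mask]), np.log(-dQdt[mask]), 1)
        return np.exp(log_a), b               # scale a and exponent b

    Q = 10.0 * (1 + 0.05 * np.arange(20.0))**(-2.0)   # synthetic recession limb
    print(fit_recession_event(Q))                     # b ~ 1.5 for this limb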
Souto, Juan Carlos; Yustos, Pedro; Ladero, Miguel; Garcia-Ochoa, Felix
2011-02-01
In this work, a phenomenological study of the isomerisation and disproportionation of rosin acids using an industrial 5% Pd on charcoal catalyst from 200 to 240°C is carried out. Medium composition is determined by elemental microanalysis, GC-MS and GC-FID. Dehydrogenated and hydrogenated acid species molar amounts in the final product show that dehydrogenation is the main reaction. Moreover, kinetic models that either consider or neglect hydrogen concentration are fitted to experimental data using a multivariable non-linear technique. Statistical discrimination among the proposed kinetic models leads to the conclusion that hydrogen-considering models fit the experimental results much better. The final kinetic model involves first-order isomerisation reactions of neoabietic and palustric acids to abietic acid, first-order dehydrogenation and hydrogenation of the latter acid, and hydrogenation of pimaric acids. Hydrogenation reactions are partial first-order with respect to both the acid and hydrogen. Copyright © 2010 Elsevier Ltd. All rights reserved.
Latest astronomical constraints on some non-linear parametric dark energy models
NASA Astrophysics Data System (ADS)
Yang, Weiqiang; Pan, Supriya; Paliathanasis, Andronikos
2018-04-01
We consider non-linear redshift-dependent equation of state parameters as dark energy models in a spatially flat Friedmann-Lemaître-Robertson-Walker universe. To depict the expansion history of the universe in such cosmological scenarios, we take into account the large-scale behaviour of such parametric models and fit them using the latest observational data from distinct origins, including cosmic microwave background radiation, Type Ia Supernovae, baryon acoustic oscillations, redshift space distortion, weak gravitational lensing, Hubble parameter measurements from cosmic chronometers, and finally the local Hubble constant from the Hubble Space Telescope. The fitting employs the publicly available code Cosmological Monte Carlo (CosmoMC) to extract the cosmological information from these parametric dark energy models. From our analysis, it follows that such models can describe the late-time accelerating phase of the universe, while remaining distinguishable from the Λ-cosmology.
Simplified process model discovery based on role-oriented genetic mining.
Zhao, Weidong; Liu, Xi; Dai, Weihui
2014-01-01
Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts its understandability and quality. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments, comparing with related studies, to show that the proposed method is more effective for streamlining the process.
Revisiting Isotherm Analyses Using R: Comparison of Linear, Non-linear, and Bayesian Techniques
Extensive adsorption isotherm data exist for an array of chemicals of concern on a variety of engineered and natural sorbents. Several isotherm models exist that can accurately describe these data from which the resultant fitting parameters may subsequently be used in numerical ...
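Although the paper above works in R, a minimal Python sketch conveys the comparison it makes: the same Langmuir isotherm fitted by the classical linearization and by direct non-linear least squares. All data values below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

C = np.array([0.5, 1, 2, 5, 10, 20, 50])                  # equilibrium conc. (mg/L)
q = np.array([8.1, 14.2, 22.5, 35.0, 43.8, 49.5, 54.0])   # sorbed amount (mg/g)

# Linearized Langmuir: C/q = C/qmax + 1/(K*qmax)
slope, intercept = np.polyfit(C, C / q, 1)
qmax_lin, K_lin = 1 / slope, slope / intercept

# Direct non-linear fit of q = qmax*K*C / (1 + K*C)
langmuir = lambda C, qmax, K: qmax * K * C / (1 + K * C)
(qmax_nl, K_nl), _ = curve_fit(langmuir, C, q, p0=[60, 0.1])

print(f"linearized: qmax = {qmax_lin:.1f}, K = {K_lin:.3f}")
print(f"non-linear: qmax = {qmax_nl:.1f}, K = {K_nl:.3f}")
```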
Malachowski, George C; Clegg, Robert M; Redford, Glen I
2007-12-01
A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
NASA Astrophysics Data System (ADS)
Mert, Bayram Ali; Dag, Ahmet
2017-12-01
In this study, firstly, a practical and educational geostatistical program (JeoStat) was developed, and then an example analysis of porosity parameter distribution using oilfield data was presented. With this program, two- or three-dimensional variogram analysis can be performed using normal, log-normal or indicator-transformed data. In these analyses, JeoStat offers seven commonly used theoretical variogram models (Spherical, Gaussian, Exponential, Linear, Generalized Linear, Hole Effect and Paddington Mix) to the user. These theoretical models can be easily and quickly fitted to the experimental variograms using a mouse. JeoStat uses the ordinary kriging interpolation technique for computing point or block estimates, and cross-validation tests for validating the fitted theoretical model. All results obtained by the analysis, as well as all graphics such as histograms, variograms and kriging estimation maps, can be saved to the hard drive, including digitised graphics and maps. In addition, the numerical values of any point in a map can be monitored using the mouse and text boxes. This program is available to students, researchers, consultants and corporations of any size free of charge. The JeoStat software package and source codes are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
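JeoStat's variogram fitting is interactive, but the underlying computation can be sketched as ordinary curve fitting. The example below is not JeoStat code; the lag distances and semivariances are made up. It fits one of the seven supported models, the spherical model, with scipy.

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, a):
    """Spherical variogram: rises from nugget to nugget+sill at range a."""
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, nugget + sill)

lags = np.array([50, 100, 150, 200, 250, 300, 400])       # lag distance (m)
gamma = np.array([0.8, 1.9, 2.6, 3.1, 3.4, 3.5, 3.6])     # semivariance

(nugget, sill, a), _ = curve_fit(spherical, lags, gamma,
                                 p0=[0.5, 3.0, 250], bounds=(0, np.inf))
print(f"nugget = {nugget:.2f}, sill = {sill:.2f}, range = {a:.0f} m")
```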
KAMINSKI, GEORGE A.; STERN, HARRY A.; BERNE, B. J.; FRIESNER, RICHARD A.; CAO, YIXIANG X.; MURPHY, ROBERT B.; ZHOU, RUHONG; HALGREN, THOMAS A.
2014-01-01
We present results of developing a methodology suitable for producing molecular mechanics force fields with explicit treatment of electrostatic polarization for proteins and other molecular system of biological interest. The technique allows simulation of realistic-size systems. Employing high-level ab initio data as a target for fitting allows us to avoid the problem of the lack of detailed experimental data. Using the fast and reliable quantum mechanical methods supplies robust fitting data for the resulting parameter sets. As a result, gas-phase many-body effects for dipeptides are captured within the average RMSD of 0.22 kcal/mol from their ab initio values, and conformational energies for the di- and tetrapeptides are reproduced within the average RMSD of 0.43 kcal/mol from their quantum mechanical counterparts. The latter is achieved in part because of application of a novel torsional fitting technique recently developed in our group, which has already been used to greatly improve accuracy of the peptide conformational equilibrium prediction with the OPLS-AA force field.1 Finally, we have employed the newly developed first-generation model in computing gas-phase conformations of real proteins, as well as in molecular dynamics studies of the systems. The results show that, although the overall accuracy is no better than what can be achieved with a fixed-charges model, the methodology produces robust results, permits reasonably low computational cost, and avoids other computational problems typical for polarizable force fields. It can be considered as a solid basis for building a more accurate and complete second-generation model. PMID:12395421
Analysis of Sediment Transport for Rivers in South Korea based on Data Mining technique
NASA Astrophysics Data System (ADS)
Jang, Eun-kyung; Ji, Un; Yeo, Woonkwang
2017-04-01
The purpose of this study is to assess sediment discharge for rivers in South Korea using data mining. The Model Tree was selected for this study as the most suitable data mining technique for explicitly analyzing the relationship between input and output variables in large and diverse databases. To derive the sediment discharge equation using the Model Tree, the dimensionless variables used in the Engelund and Hansen, Ackers and White, Brownlie, and van Rijn equations were employed as analytical conditions. In addition, a total of 14 analytical conditions were set, considering dimensional variables and combinations of dimensionless and dimensional variables, according to the relationship between flow and sediment transport. For each case, the results were evaluated by means of the discrepancy ratio, root mean square error, mean absolute percent error, and correlation coefficient. The results showed that the best fit was obtained using five dimensional variables: velocity, depth, slope, width, and median diameter. The closest approximation to this best goodness-of-fit was estimated from the depth, slope, width, median grain size of the bed material, and dimensionless tractive force, excluding the slope among the single variables. In addition, when the three most appropriate Model Tree variants are compared with the Ackers and White equation, the best performing of the existing equations, the mean discrepancy ratio and the correlation coefficient of the Model Tree are improved relative to the Ackers and White equation.
Fitting Prony Series To Data On Viscoelastic Materials
NASA Technical Reports Server (NTRS)
Hill, S. A.
1995-01-01
An improved method of fitting Prony series to data on viscoelastic materials involves the use of least-squares optimization techniques. The optimization-based approach yields closer correlation with the data than the traditional method. It involves no assumptions regarding the γ'_i terms and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in the data. The curve-fitting problem is treated as a design-optimization problem and solved by use of partially-constrained-optimization techniques.
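A common way to make such a Prony-series fit tractable, shown in the hedged sketch below (mine, not the NASA method; the relaxation data are synthetic), is to fix the relaxation times on a logarithmic grid so that the coefficients follow from non-negative linear least squares.

```python
# Fit G(t) = G_inf + sum_i g_i * exp(-t / tau_i) with fixed tau grid.
import numpy as np
from scipy.optimize import nnls

t = np.logspace(-2, 2, 40)                       # time (s)
G_obs = 2.0 + 3.0 * np.exp(-t / 0.1) + 1.5 * np.exp(-t / 10.0)

taus = np.logspace(-3, 3, 13)                    # fixed relaxation-time grid
A = np.exp(-t[:, None] / taus[None, :])          # exponential basis
A = np.hstack([A, np.ones((t.size, 1))])         # constant column for G_inf

coef, resid = nnls(A, G_obs)                     # non-negative least squares
g, G_inf = coef[:-1], coef[-1]
print(f"G_inf = {G_inf:.2f}, residual = {resid:.2e}")
for gi, tau in zip(g, taus):
    if gi > 1e-6:
        print(f"  g = {gi:.2f} at tau = {tau:g} s")
```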
Wavelet transform approach for fitting financial time series data
NASA Astrophysics Data System (ADS)
Ahmed, Amel Abdoullah; Ismail, Mohd Tahir
2015-10-01
This study investigates a newly developed technique, a combined wavelet filtering and VEC model, to study the dynamic relationship among financial time series. A wavelet filter is used to remove noise from daily data of the NASDAQ stock market of the US and of three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover the period from 6/29/2001 to 5/5/2009. The returns of the series generated by the wavelet filter and of the original series are then analyzed by a cointegration test and a VEC model. The results show that the cointegration test affirms the existence of cointegration between the studied series, and that there is a long-term relationship between the US and MENA stock markets. A comparison between the proposed and traditional models demonstrates that the proposed model (DWT with VEC model) outperforms the traditional model (VEC model) in fitting the financial stock market series, and reveals real information about the relationships among the stock markets.
NASA Astrophysics Data System (ADS)
Sahoo, Sasmita; Jha, Madan K.
2013-12-01
The potential of multiple linear regression (MLR) and artificial neural network (ANN) techniques in predicting transient water levels over a groundwater basin was compared. MLR and ANN modeling was carried out at 17 sites in Japan, considering all significant inputs: rainfall, ambient temperature, river stage, 11 seasonal dummy variables, and influential lags of rainfall, ambient temperature, river stage and groundwater level. Seventeen site-specific ANN models were developed, using multi-layer feed-forward neural networks trained with the Levenberg-Marquardt backpropagation algorithm. The performance of the models was evaluated using statistical and graphical indicators. Comparison of the goodness-of-fit statistics of the MLR models with those of the ANN models indicated better agreement between the ANN-predicted groundwater levels and the observed groundwater levels at all the sites, compared to the MLR. This finding was supported by the graphical indicators and the residual analysis. Thus, it is concluded that the ANN technique is superior to the MLR technique in predicting the spatio-temporal distribution of groundwater levels in a basin. However, considering the practical advantages of the MLR technique, it is recommended as an alternative and cost-effective groundwater modeling tool.
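The comparison can be sketched generically as follows. This is not the study's code: the rainfall/temperature/stage inputs are fabricated, and sklearn's MLPRegressor trains with Adam rather than the Levenberg-Marquardt backpropagation used in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 3))                      # rainfall, temperature, river stage
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + np.tanh(X[:, 2]) + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

print(f"MLR R^2: {r2_score(y_te, mlr.predict(X_te)):.3f}")
print(f"ANN R^2: {r2_score(y_te, ann.predict(X_te)):.3f}")
```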
ERIC Educational Resources Information Center
Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan
2016-01-01
This study investigated the multiple-choice test of understanding of vectors (TUV), by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fitted with the three-parameter logistic model of IRT, using the PARSCALE program. The TUV ability is an ability parameter, here estimated assuming…
Photometric Supernova Classification with Machine Learning
NASA Astrophysics Data System (ADS)
Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.
2016-08-01
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
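The final classification stage can be illustrated with a small stand-in example (mine): boosted decision trees scored by AUC, with random Gaussian features substituting for the SALT2 or wavelet features of the real pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic "light-curve features" and binary type labels
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

bdt = GradientBoostingClassifier(n_estimators=200).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.3f}")          # 1.0 would represent perfect classification
```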
Small, Scott R; Hensley, Sarah E; Cook, Paige L; Stevens, Rebecca A; Rogge, Renee D; Meding, John B; Berend, Michael E
2017-02-01
Short-stemmed femoral components facilitate reduced-exposure surgical techniques while preserving native bone. A clinically successful stem should ideally reduce the risk of stress shielding while maintaining adequate primary stability for biological fixation. We asked (1) how stem length changes the cortical strain distribution in the proximal femur in a fit-and-fill geometry and (2) whether short-stemmed components exhibit primary stability on par with clinically successful designs. Cortical strain was assessed via digital image correlation in composite femurs implanted with long, medium, and short metaphyseal fit-and-fill stem designs in a single-leg stance loading model. Strain was compared to that of a loaded, unimplanted femur. Bone-implant micromotion was then compared with reduced lateral shoulder short stem and short tapered-wedge designs in cyclic axial and torsional testing. Femurs implanted with short-stemmed components exhibited the cortical strain response most closely matching that of the intact femur model, theoretically reducing the potential for proximal stress shielding. In micromotion testing, no difference in primary stability was observed as a function of reduced stem length within the same component design. Our findings demonstrate that within this fit-and-fill stem design, reduction in stem length improved proximal cortical strain distribution and maintained axial and torsional stability on par with other stem designs in a composite femur model. Short-stemmed implants may accommodate less invasive surgical techniques while facilitating more physiological femoral loading without sacrificing primary implant stability. Copyright © 2016 Elsevier Inc. All rights reserved.
Richardson, Daniel R; Stauffer, Hans U; Roy, Sukesh; Gord, James R
2017-04-10
A comparison is made between two ultrashort-pulse coherent anti-Stokes Raman scattering (CARS) thermometry techniques, hybrid femtosecond/picosecond (fs/ps) CARS and chirped-probe-pulse (CPP) fs-CARS, that have become standards for high-repetition-rate thermometry in the combustion diagnostics community. These two variants of fs-CARS differ only in the characteristics of the ps-duration probe pulse: in hybrid fs/ps CARS a spectrally narrow, time-asymmetric probe pulse is used, whereas a highly chirped, spectrally broad probe pulse is used in CPP fs-CARS. Temperature measurements were performed using both techniques in near-adiabatic flames in the temperature range 1600-2400 K and for probe time delays of 0-30 ps. Under these conditions, both techniques are shown to exhibit temperature measurement accuracies and precisions similar to previously reported values and to each other. However, it is observed that initial calibration fits to the spectrally broad CPP results require more fitting parameters and a more robust optimization algorithm, and therefore significantly increased computational cost and complexity, compared to the fitting of hybrid fs/ps CARS data. The optimized model parameters varied more for the CPP measurements than for the hybrid fs/ps measurements across different experimental conditions.
Classifying and modelling spiral structures in hydrodynamic simulations of astrophysical discs
NASA Astrophysics Data System (ADS)
Forgan, D. H.; Ramón-Fox, F. G.; Bonnell, I. A.
2018-05-01
We demonstrate numerical techniques for automatic identification of individual spiral arms in hydrodynamic simulations of astrophysical discs. Building on our earlier work, which used tensor classification to identify regions that were `spiral-like', we can now obtain fits to spirals for individual arm elements. We show this process can even detect spirals in relatively flocculent spiral patterns, but the resulting fits to logarithmic `grand-design' spirals are less robust. Our methods not only permit the estimation of pitch angles, but also direct measurements of the spiral arm width and pattern speed. In principle, our techniques will allow the tracking of material as it passes through an arm. Our demonstration uses smoothed particle hydrodynamics simulations, but we stress that the method is suitable for any finite-element hydrodynamics system. We anticipate our techniques will be essential to studies of star formation in disc galaxies, and attempts to find the origin of recently observed spiral structure in protostellar discs.
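The last step, fitting a logarithmic spiral to identified arm elements, reduces to a linear fit in log space, with the pitch angle following from the fitted slope. A minimal sketch with synthetic arm coordinates (mine, not the authors' tensor-classification code):

```python
import numpy as np

# Synthetic arm-element coordinates on a logarithmic spiral r = a*exp(b*theta)
theta = np.linspace(0.5, 4 * np.pi, 200)
r = 1.2 * np.exp(0.18 * theta) * (1 + 0.02 * np.random.default_rng(8).normal(size=theta.size))

b, log_a = np.polyfit(theta, np.log(r), 1)       # ln r = b*theta + ln a
pitch_deg = np.degrees(np.arctan(b))             # pitch angle = arctan(b)
print(f"a = {np.exp(log_a):.2f}, b = {b:.3f}, pitch angle = {pitch_deg:.1f} deg")
```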
ERIC Educational Resources Information Center
Maruyama, Geoffrey
1992-01-01
A Lewinian orientation to educational problems fits current innovative thinking in education (e.g., models for making education multicultural), and provides the bases of important applied work on cooperative learning techniques and constructive ways of structuring conflict within educational settings. Lewinian field theory provides a broad…
Differential Item Functioning Analysis Using Rasch Item Information Functions
ERIC Educational Resources Information Center
Wyse, Adam E.; Mapuranga, Raymond
2009-01-01
Differential item functioning (DIF) analysis is a statistical technique used for ensuring the equity and fairness of educational assessments. This study formulates a new DIF analysis method using the information similarity index (ISI). ISI compares item information functions when data fits the Rasch model. Through simulations and an international…
Combat Identification Modeling Using Neural Networks Techniques
2009-03-01
Bhaskaran, Eswaran; Azhagarasan, N S; Miglani, Saket; Ilango, T; Krishna, G Phani; Gajapathi, B
2013-09-01
Accuracy of fit has always been one of the primary factors determining the success of a restoration. A well-fitting restoration needs to be accurate both along its margins and on its internal surface. This study comparatively evaluated the marginal gap and internal gap of cobalt-chromium (Co-Cr) copings fabricated by conventional casting procedures and by the direct metal laser sintering (DMLS) technique. Of the 30 test samples, 10 cast copings were made from inlay casting wax, 10 from 3D-printed resin patterns, and 10 were obtained by the DMLS technique. All 30 test samples were then cemented sequentially on a stainless steel model using pressure-indicating paste and evaluated for vertical marginal gap in 8 predetermined reference areas. All copings were then removed, partially sectioned, and cemented sequentially on the same master model for evaluation of the internal gap at 4 predetermined reference areas. Both marginal and internal gaps were measured in microns using a video measuring system (VMS2010F). The results for both gaps were statistically analyzed, and the values fell within the clinically acceptable range. The DMLS technique had an edge over the other two techniques, exhibiting the smallest gap in the marginal region, which is the area of chief concern.
Risk factors for unsuccessful acetabular press-fit fixation at primary total hip arthroplasty.
Brulc, U; Antolič, V; Mavčič, B
2017-11-01
The surgeon at primary total hip arthroplasty sometimes cannot achieve sufficient cementless acetabular press-fit fixation and must resort to other fixation methods. Despite the predominant use of cementless cups, this issue is not fully clarified; we therefore performed a large retrospective study to: (1) identify patient-, implant- and surgeon-related risk factors for unsuccessful intraoperative press-fit; (2) check for correlation between surgeons' volume of operated cases and the press-fit success rate. The working hypothesis was that unsuccessful intraoperative press-fit occurs more often in older female patients, with particular implants, and due to learning curves and low-volume surgeons. A retrospective observational cohort of prospectively collected intraoperative data (2009-2016) included all primary total hip arthroplasty patients with implant brands that offered acetabular press-fit fixation only. Press-fit was considered successful if the acetabular component was of the same implant brand as the femoral component, without additional screws or cement. Logistic regression models for unsuccessful acetabular press-fit included patients' gender/age/operated side, implant, surgeon, approach (posterior n=1206, direct-lateral n=871) and surgery date (i.e. learning curve). In 2077 patients (mean 65.5 years, 1093 females, 1163 right hips), three different implant brands (973 ABG-II™-Stryker, 646 EcoFit™ Implantcast, 458 Procotyl™ L-Wright) were implanted by eight surgeons. Their unsuccessful press-fit fixation rates ranged from 3.5% to 23.7%. Older age (odds ratio 1.01 [95% CI: 0.99-1.02]), female gender (2.87 [95% CI: 2.11-3.91]), right side (1.44 [95% CI: 1.08-1.92]), surgery date (0.90 [95% CI: 1.08-1.92]) and particular implants were significant risk factors only in three surgeons with less successful surgical technique (higher rates of unsuccessful press-fit with Procotyl™-L and EcoFit™ [P=0.01]). The direct-lateral hip approach had a lower rate of unsuccessful press-fit than the posterior approach (P<0.01), but there was no correlation between surgeons' volume and rate of successful press-fit (Spearman's rho=0.10, P=0.82). A subcohort of 961 patients with 5-7-years follow-up indicated higher early/late cup revision rates with unsuccessful press-fit. Success of press-fit fixation depends entirely on the surgeon and surgical approach. With proper operative technique, the unsuccessful press-fit fixation rate should be below 5%, and the impact of patients' characteristics or implants on press-fit fixation is then insignificant. The large variability in operative technique between surgeons in the present study emphasizes the need for surgeon-specific data stratification in arthroplasty studies, and indicates the possibility of falsely attributing clinically observed phenomena to patient-related factors in pooled data from large centers or hip arthroplasty registers. Level III, retrospective observational case control study. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Wheeler, Matthew W; Bailer, A John
2007-06-01
Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD dose estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO(2) dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
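A much-simplified sketch of the model-averaging idea follows. It is mine, not the paper's method: the dose-response data and the two-model space are invented, and the bootstrap loop that would yield the BMDL is omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize, brentq

dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
n = np.array([50, 50, 50, 50, 50])
k = np.array([2, 5, 10, 22, 40])                 # responders (synthetic)

def logistic(d, p):
    return 1 / (1 + np.exp(-(p[0] + p[1] * d)))

def weibull(d, p):                               # background-free Weibull
    return 1 - np.exp(-np.exp(p[0]) * np.asarray(d, float) ** np.exp(p[1]))

def nll(p, model):                               # binomial negative log-likelihood
    pr = np.clip(model(dose, p), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(pr) + (n - k) * np.log(1 - pr))

fits, aics = [], []
for model, p0 in [(logistic, [-3.0, 1.0]), (weibull, [-2.0, 0.0])]:
    res = minimize(nll, p0, args=(model,), method="Nelder-Mead")
    fits.append((model, res.x))
    aics.append(2 * len(p0) + 2 * res.fun)

w = np.exp(-0.5 * (np.array(aics) - min(aics)))  # AIC weights
w /= w.sum()

def avg_p(d):                                    # model-averaged response
    return sum(wi * m(d, p) for (m, p), wi in zip(fits, w))

p_bg = float(avg_p(0.0))
bmd = brentq(lambda d: (avg_p(d) - p_bg) / (1 - p_bg) - 0.10, 1e-6, 4.0)
print(f"AIC weights = {w.round(3)}, model-averaged BMD10 = {bmd:.3f}")
```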
A dynamic mechanical analysis technique for porous media
Pattison, Adam J; McGarry, Matthew; Weaver, John B; Paulsen, Keith D
2015-01-01
Dynamic mechanical analysis (DMA) is a common way to measure the mechanical properties of materials as functions of frequency. Traditionally, a viscoelastic mechanical model is applied, and current DMA techniques fit an analytical approximation to measured dynamic motion data by neglecting inertial forces and adding empirical correction factors to account for transverse boundary displacements. Here, a finite element (FE) approach to processing DMA data was developed to estimate poroelastic material properties. Frequency-dependent inertial forces, which are significant in soft media and often neglected in DMA, were included in the FE model. The technique applies a constitutive relation to the DMA measurements and exploits a non-linear inversion to estimate the material properties in the model that best fit the model response to the DMA data. A viscoelastic version of this approach was developed for validation, by comparing complex modulus estimates to the direct DMA results. Both analytical and FE poroelastic models were also developed to explore their behavior in the DMA testing environment. All of the models were applied to tofu as a representative soft poroelastic material that is a common phantom in elastography imaging studies. Five samples of three different stiffnesses were tested from 1-14 Hz, with rough platens placed on the top and bottom surfaces of the specimen under test to restrict transverse displacements and promote fluid-solid interaction. The viscoelastic models were identical in the static case, and nearly the same at frequency, with inertial forces accounting for some of the discrepancy. The poroelastic analytical method was not sufficient when the relevant physical boundary constraints were applied, whereas the poroelastic FE approach produced high-quality estimates of shear modulus and hydraulic conductivity. These results illustrated appropriate shear modulus contrast between tofu samples and yielded a consistent contrast in hydraulic conductivity as well. PMID:25248170
Multiplexed absorption tomography with calibration-free wavelength modulation spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Weiwei; Kaminski, Clemens F., E-mail: cfk23@cam.ac.uk
2014-04-14
We propose a multiplexed absorption tomography technique, which uses calibration-free wavelength modulation spectroscopy with tunable semiconductor lasers for the simultaneous imaging of temperature and species concentration in harsh combustion environments. Compared with the commonly used direct absorption spectroscopy (DAS) counterpart, the present variant enjoys better signal-to-noise ratios and requires no baseline fitting, a particularly desirable feature for high-pressure applications, where adjacent absorption features overlap and interfere severely. We present proof-of-concept numerical demonstrations of the technique using realistic phantom models of harsh combustion environments and prove that the proposed techniques outperform currently available tomography techniques based on DAS.
Atmospheric and Fundamental Parameters of Stars in Hubble's Next Generation Spectral Library
NASA Technical Reports Server (NTRS)
Heap, Sally
2010-01-01
Hubble's Next Generation Spectral Library (NGSL) consists of R ≈ 1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. We are presently working to determine the atmospheric and fundamental parameters of the stars from the NGSL spectra themselves, via full-spectrum fitting of model spectra to the observed (extinction-corrected) spectrum over the full wavelength range, 0.2-1.0 micron. We use two grids of model spectra for this purpose: the very-low-resolution spectral grid from Castelli-Kurucz (2004), and the grid from MARCS (2008). Both the observed spectrum and the MARCS spectra are first degraded in resolution to match the very low resolution of the Castelli-Kurucz models, so that our fitting technique is the same for both model grids. We will present our preliminary results together with a comparison to those from the Sloan/SEGUE Stellar Parameter Pipeline, ELODIE, and MILES, etc.
Fitting and Calibrating a Multilevel Mixed-Effects Stem Taper Model for Maritime Pine in NW Spain
Arias-Rodil, Manuel; Castedo-Dorado, Fernando; Cámara-Obregón, Asunción; Diéguez-Aranda, Ulises
2015-01-01
Stem taper data are usually hierarchical (several measurements per tree, and several trees per plot), making application of a multilevel mixed-effects modelling approach essential. However, correlation between trees in the same plot/stand has often been ignored in previous studies. Fitting and calibration of a variable-exponent stem taper function were conducted using data from 420 trees felled in even-aged maritime pine (Pinus pinaster Ait.) stands in NW Spain. In the fitting step, the tree level explained much more variability than the plot level, and therefore calibration at plot level was omitted. Several stem heights were evaluated for measurement of the additional diameter needed for calibration at tree level. Calibration with an additional diameter measured at between 40 and 60% of total tree height showed the greatest improvement in volume and diameter predictions. If additional diameter measurement is not available, the fixed-effects model fitted by the ordinary least squares technique should be used. Finally, we also evaluated how the expansion of parameters with random effects affects the stem taper prediction, as we consider this a key question when applying the mixed-effects modelling approach to taper equations. The results showed that correlation between random effects should be taken into account when assessing the influence of random effects in stem taper prediction. PMID:26630156
Full two-dimensional transient solutions of electrothermal aircraft blade deicing
NASA Technical Reports Server (NTRS)
Masiulaniec, K. C.; Keith, T. G., Jr.; Dewitt, K. J.; Leffel, K. L.
1985-01-01
Two finite difference methods are presented for the analysis of transient, two-dimensional responses of an electrothermal de-icer pad of an aircraft wing or blade with an attached ice layer of variable thickness. Both models employ a Crank-Nicolson iterative scheme and use an enthalpy formulation to handle the phase change in the ice layer. The first technique makes use of a 'staircase' approach, fitting the irregular ice boundary with square computational cells. The second technique uses a body-fitted coordinate transform, mapping the exact shape of the irregular boundary into a rectangular body with uniformly square computational cells; the numerical solution takes place in the transformed plane. Initial results accounting for variable ice layer thickness are presented. Details of planned de-icing tests at NASA-Lewis, which will provide empirical verification for the above two methods, are also presented.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1990-01-01
Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.
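For illustration, the kind of curve fitting the standard calls for might look like the following sketch (mine; the monthly data are invented), which fits linear, quadratic, and exponential models and compares residual sums of squares.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(24, dtype=float)                    # months
y = 5.0 * np.exp(0.08 * t) + np.random.default_rng(2).normal(0, 1, 24)

models = {
    "linear":      (lambda t, a, b: a + b * t,               [5, 1]),
    "quadratic":   (lambda t, a, b, c: a + b * t + c * t**2, [5, 1, 0]),
    "exponential": (lambda t, a, b: a * np.exp(b * t),       [5, 0.05]),
}
for name, (f, p0) in models.items():
    p, _ = curve_fit(f, t, y, p0=p0)
    rss = np.sum((y - f(t, *p)) ** 2)             # residual sum of squares
    print(f"{name:12s} RSS = {rss:8.2f}  params = {np.round(p, 3)}")
```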
Nelson, Neha; K S, Jyothi; Sunny, Kiran
2017-03-01
The margins of copings for crowns and retainers of fixed partial dentures affect the progress of microleakage and dental caries. Failures occur due to altered fit, which is also influenced by the method of fabrication. An in-vitro study was conducted to determine which of cast base metal, copy-milled zirconia, computer-aided design/computer-aided manufacturing (CAD/CAM) zirconia and direct metal laser sintered copings showed the best marginal accuracy and internal fit. Forty extracted maxillary premolars were mounted on an acrylic model and reduced occlusally using a milling machine to a final tooth height of 4 mm from the cementoenamel junction. Axial reduction was accomplished on a surveyor and a chamfer finish line was given. Impressions and dies were made for fabrication of copings, which were luted onto the prepared teeth under standardized loading, embedded in self-cure acrylic resin, sectioned, and observed using a scanning electron microscope for internal gap and marginal accuracy. The copings fabricated using the direct metal laser sintering technique exhibited the best marginal accuracy and internal fit. Comparison of means between the four groups by ANOVA and post-hoc Tukey HSD tests showed a statistically significant difference between all the groups (p<0.05). It was concluded that the copings fabricated using the direct metal laser sintering technique exhibited the best marginal accuracy and internal fit. Additive digital technologies such as direct metal laser sintering could be cost-effective for the clinician, minimize failures related to fit, and increase the longevity of teeth and prostheses. Copyright© 2017 Dennis Barber Ltd.
FitEM2EM—Tools for Low Resolution Study of Macromolecular Assembly and Dynamics
Frankenstein, Ziv; Sperling, Joseph; Sperling, Ruth; Eisenstein, Miriam
2008-01-01
Studies of the structure and dynamics of macromolecular assemblies often involve comparison of low resolution models obtained using different techniques such as electron microscopy or atomic force microscopy. We present new computational tools for comparing (matching) and docking of low resolution structures, based on shape complementarity. The matched or docked objects are represented by three dimensional grids where the value of each grid point depends on its position with regard to the interior, surface or exterior of the object. The grids are correlated using fast Fourier transformations producing either matches of related objects or docking models depending on the details of the grid representations. The procedures incorporate thickening and smoothing of the surfaces of the objects which effectively compensates for differences in the resolution of the matched/docked objects, circumventing the need for resolution modification. The presented matching tool FitEM2EMin successfully fitted electron microscopy structures obtained at different resolutions, different conformers of the same structure and partial structures, ranking correct matches at the top in every case. The differences between the grid representations of the matched objects can be used to study conformation differences or to characterize the size and shape of substructures. The presented low-to-low docking tool FitEM2EMout ranked the expected models at the top. PMID:18974836
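The shape-complementarity matching described above rests on FFT-based correlation of 3-D grids. A toy sketch of that core idea follows (mine, not the FitEM2EM code; binary grids stand in for the smoothed, thickened surface representations).

```python
import numpy as np

a = np.zeros((32, 32, 32))
a[10:18, 12:20, 8:16] = 1.0                      # "structure" in grid A
b = np.roll(a, shift=(5, -3, 2), axis=(0, 1, 2)) # same shape, translated

# Translational cross-correlation via FFT; the peak gives the best shift
corr = np.fft.ifftn(np.fft.fftn(a) * np.conj(np.fft.fftn(b))).real
shift = np.unravel_index(np.argmax(corr), corr.shape)
# unwrap shifts larger than half the box
shift = [s - n if s > n // 2 else s for s, n in zip(shift, a.shape)]
print("recovered shift:", shift)                 # expect [-5, 3, -2]
```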
A phase transition induces chaos in a predator-prey ecosystem with a dynamic fitness landscape
2017-01-01
In many ecosystems, natural selection can occur quickly enough to influence the population dynamics and thus future selection. This suggests the importance of extending classical population dynamics models to include such eco-evolutionary processes. Here, we describe a predator-prey model in which the prey population growth depends on a prey density-dependent fitness landscape. We show that this two-species ecosystem is capable of exhibiting chaos even in the absence of external environmental variation or noise, and that the onset of chaotic dynamics is the result of the fitness landscape reversibly alternating between epochs of stabilizing and disruptive selection. We draw an analogy between the fitness function and the free energy in statistical mechanics, allowing us to use the physical theory of first-order phase transitions to understand the onset of rapid cycling in the chaotic predator-prey dynamics. We use quantitative techniques to study the relevance of our model to observational studies of complex ecosystems, finding that the evolution-driven chaotic dynamics confer community stability at the “edge of chaos” while creating a wide distribution of opportunities for speciation during epochs of disruptive selection—a potential observable signature of chaotic eco-evolutionary dynamics in experimental studies. PMID:28678792
MGS-TES thermal inertia study of the Arsia Mons Caldera
Cushing, G.E.; Titus, T.N.
2008-01-01
Temperatures of the Arsia Mons caldera floor and two nearby control areas were obtained by the Mars Global Surveyor (MGS) Thermal Emission Spectrometer (TES). These observations revealed that the Arsia Mons caldera floor exhibits thermal behavior different from the surrounding Tharsis region when compared with thermal models. Our technique compares modeled and observed data to determine best-fit values of thermal inertia, layer depth, and albedo. Best-fit modeled values are accurate in the two control regions, but those in the Arsia Mons caldera are consistently either up to 15 K warmer than afternoon observations, or have albedo values that are more than two standard deviations higher than the observed mean. Models of both homogeneous and layered (such as dust over bedrock) cases were compared, with layered cases indicating a surface layer at least thick enough to insulate itself from diurnal effects of an underlying substrate material. Because best-fit models of the caldera floor poorly match observations, it is likely that the caldera floor experiences some physical process not incorporated into our thermal model. Even on Mars, Arsia Mons is an extreme environment where CO2 condenses on the caldera floor every night, diurnal temperatures range each day by a factor of nearly 2, and annual average atmospheric pressure is only around one millibar. Here, we explore several possibilities that may explain the poor model fits to the caldera floor and conclude that temperature-dependent thermal conductivity may cause thermal inertia to vary diurnally, an effect that may be exaggerated by the presence of water-ice clouds, which occur frequently above Arsia Mons. Copyright 2008 by the American Geophysical Union.
Modeling Phase-Aligned Gamma-Ray and Radio Millisecond Pulsar Light Curves
NASA Technical Reports Server (NTRS)
Venter, C.; Johnson, T.; Harding, A.
2012-01-01
Since the discovery of the first eight gamma-ray millisecond pulsars (MSPs) by the Fermi Large Area Telescope, this population has been steadily expanding. Four of the more recent detections, PSR J0034-0534, PSR J1939+2134 (B1937+21; the first MSP ever discovered), PSR J1959+2048 (B1957+20; the first discovery of a black widow system), and PSR J2214+3000, exhibit a phenomenon not present in the original discoveries: nearly phase-aligned radio and gamma-ray light curves (LCs). To account for the phase alignment, we explore models where both the radio and gamma-ray emission originate either in the outer magnetosphere near the light cylinder or near the polar caps. Using a Markov Chain Monte Carlo technique to search for best-fit model parameters, we obtain reasonable LC fits for the first three of these MSPs in the context of altitude-limited outer gap (alOG) and two-pole caustic (alTPC) geometries (for both gamma-ray and radio emission). These models differ from the standard outer gap (OG)/two-pole caustic (TPC) models in two respects: the radio emission originates in caustics at relatively high altitudes compared to the usual conal radio beams, and we allow both the minimum and maximum altitudes of the gamma-ray and radio emission regions to vary within a limited range (excluding the minimum gamma-ray altitude of the alTPC model, which is kept constant at the stellar radius, and that of the alOG model, which is set to the position-dependent null charge surface altitude). Alternatively, phase-aligned solutions also exist for emission originating near the stellar surface in a slot gap scenario (low-altitude slot gap (laSG) models). We find that the alTPC models provide slightly better LC fits than the alOG models, and both of these give better fits than the laSG models (for the limited range of parameters considered in the case of the laSG models). Thus, our fits imply that the phase-aligned LCs are likely of caustic origin, produced in the outer magnetosphere, and that the radio emission for these pulsars may come from close to the light cylinder. In addition, we were able to constrain the minimum and maximum emission altitudes with typical uncertainties of 30% of the light cylinder radius. Our results therefore describe a third gamma-ray MSP subclass, in addition to the two previously found by Venter et al.: those with LCs fit by standard OG/TPC models and those with LCs fit by pair-starved polar cap models.
Experimental characterization of wingtip vortices in the near field using smoke flow visualizations
NASA Astrophysics Data System (ADS)
Serrano-Aguilera, J. J.; García-Ortiz, J. Hermenegildo; Gallardo-Claros, A.; Parras, L.; del Pino, C.
2016-08-01
In order to predict the axial development of wingtip vortex strength, an accurate theoretical model is required. Several experimental techniques have been used to that end, e.g. PIV or hot-wire anemometry, but they imply a significant cost and effort. For this reason, we have performed experiments using the smoke-wire technique to visualize smoke streaks in six planes perpendicular to the main stream flow direction. Using this visualization technique, we obtained quantitative information regarding the vortex velocity field by means of Batchelor's model for two chord-based Reynolds numbers, Re_c = 3.33×10^4 and 10^5. This theoretical vortex model has been introduced in the integration of the ordinary differential equations which describe the temporal evolution of streak lines as a function of two parameters: the swirl number S and the virtual axial origin z̄_0. We have applied two different procedures to minimize the distance between experimental and theoretical flow patterns: individual curve fitting at six different control planes in the streamwise direction, and global curve fitting over all the control planes simultaneously. Both sets of results have been compared with those provided by del Pino et al. (Phys Fluids 23(013):602, 2011b. doi: 10.1063/1.3537791), finding good agreement. Finally, we have observed a weak influence of the Reynolds number on the values of S and z̄_0 at low-to-moderate Re_c. This experimental technique is proposed as a low-cost alternative for characterizing wingtip vortices based on flow visualizations.
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. Firstly, local polynomial fitting is applied to estimate the heteroscedastic function, then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function; we can therefore improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
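A compact sketch of the two-stage idea follows (mine; a simple Gaussian kernel smoother stands in for the paper's multivariate local polynomial estimator, and the data are simulated): estimate the variance function from squared OLS residuals, then re-fit by weighted (generalized) least squares.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.uniform(0, 2, n)
sigma = 0.2 + 0.5 * x                             # heteroscedastic noise level
y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]   # stage 1: OLS
r2 = (y - X @ beta_ols) ** 2                      # squared residuals

def smooth(x0, h=0.15):                           # kernel estimate of var(x0)
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * r2) / np.sum(w)

var_hat = np.array([smooth(xi) for xi in x])
W = 1.0 / var_hat                                 # stage 2: weighted LS
beta_wls = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
print("OLS:", beta_ols.round(3), " WLS:", beta_wls.round(3))
```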
Tian, Lu; Wei, Wan-Zhi; Mao, You-An
2004-04-01
The adsorption of human serum albumin onto hydroxyapatite-modified silver electrodes has been investigated in situ by utilizing the piezoelectric quartz crystal impedance technique. The changes in equivalent circuit parameters were used to interpret the adsorption process. A kinetic model of two consecutive steps was derived to describe the process and compared with a first-order kinetic model by using residual analysis. The experimental frequency-shift data were fitted to the model, and the kinetic parameters k1, k2, psi1, psi2 and qr were obtained. All fitted results were in reasonable agreement with the corresponding experimental results. Two adsorption activation energies (7.19 kJ mol(-1) and 22.89 kJ mol(-1)) were calculated according to the Arrhenius formula.
Treatment of oroantral fistulas using bony press-fit technique.
Er, Nuray; Tuncer, Hakan Yusuf; Karaca, Ciğdem; Copuroğlu, Seçil
2013-04-01
The objective of this study was to determine the effectiveness of the bony press-fit technique in closing oroantral communications (OACs) and oroantral fistulas (OAFs) and in identifying potential intraoral donor sites. Ten patients, 4 with OACs and 6 with OAFs, were treated with autogenous bone grafts using the bony press-fit technique. In 9 patients, dental extractions caused OACs or OAFs; in 1 patient, an OAC appeared after cyst enucleation. Donor sites included the chin (3 patients), buccal exostosis (1 patient), maxillary tuberosity (2 patients), ramus (1 patient), and the lateral wall of the maxillary sinus (3 patients). The preoperative evaluation of the patients, surgical technique, and postoperative management were examined. In all 10 patients, a stable press fit of the graft was achieved. Additional fixation methods were not needed. In 2 patients, mucosal dehiscence developed, but healed spontaneously. In 2 patients, dental implant surgery was performed in the grafted area. Treatment of 10 patients with OACs or OAFs was performed, with a 100% success rate. The bony press-fit technique can be used to safely close OACs or OAFs, and it presents some advantages compared with other techniques. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
An Improved Cryosat-2 Sea Ice Freeboard Retrieval Algorithm Through the Use of Waveform Fitting
NASA Technical Reports Server (NTRS)
Kurtz, Nathan T.; Galin, N.; Studinger, M.
2014-01-01
We develop an empirical model capable of simulating the mean echo power cross product of CryoSat-2 SAR and SARIn mode waveforms over sea-ice-covered regions. The model simulations are used to show the importance of variations in the radar backscatter coefficient with incidence angle and surface roughness for the retrieval of surface elevation of both sea ice floes and leads. The numerical model is used to fit CryoSat-2 waveforms to enable retrieval of surface elevation through the use of look-up tables and a bounded trust region Newton least squares fitting approach. The use of a model to fit returns from sea ice regions offers advantages over currently used threshold retracking methods, which are here shown to be sensitive to the combined effect of bandwidth-limited range resolution and surface roughness variations. Laxon et al. (2013) have compared ice thickness results from CryoSat-2 and IceBridge and found good agreement; however, consistent assumptions about the snow depth and density of sea ice were not used in the comparisons. To address this issue, we directly compare ice freeboard and thickness retrievals from the waveform fitting and threshold retracker methods of CryoSat-2 to Operation IceBridge data using a consistent set of parameterizations. For three IceBridge campaign periods from March 2011-2013, mean differences (CryoSat-2 minus IceBridge) of 0.144 m and 1.351 m are respectively found between the freeboard and thickness retrievals using a 50% sea ice floe threshold retracker, while mean differences of 0.019 m and 0.182 m are found when using the waveform fitting method. This suggests the waveform fitting technique is capable of better reconciling the sea ice thickness data record from laser and radar altimetry data sets through the use of consistent physical assumptions.
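The retracking approach can be illustrated generically. The sketch below is mine, not the CryoSat-2 processor; the echo model and all parameters are invented. It fits a toy waveform with scipy's bounded trust-region least-squares solver, the same class of method named in the abstract.

```python
import numpy as np
from scipy.optimize import least_squares

gate = np.arange(64, dtype=float)                 # range bins

def echo(p, g):                                   # amplitude, position, width, decay
    A, t0, w, d = p
    rise = np.exp(-0.5 * ((g - t0) / w) ** 2)     # Gaussian leading edge
    tail = np.exp(-d * np.clip(g - t0, 0, None))  # exponential trailing edge
    return A * np.where(g < t0, rise, tail)

p_true = [100.0, 30.0, 2.0, 0.08]
y = echo(p_true, gate) + np.random.default_rng(5).normal(0, 2, gate.size)

fit = least_squares(lambda p: echo(p, gate) - y, x0=[80, 25, 3, 0.05],
                    bounds=([0, 0, 0.5, 0], [1e4, 64, 10, 1]),
                    method="trf")                 # bounded trust-region solver
print("estimated parameters:", fit.x.round(3))
```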
Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian
2018-02-01
This paper proposes a combined Virtual Reference Feedback Tuning-Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and it is referred to as a mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach. Learning convergence of Q-learning schemes generally depends, among other settings, on efficient exploration of the state-action space. Handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can ensure that an initial stabilizing controller is learned from few input-output data, and this controller can then be used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. These data are used to learn significantly superior nonlinear state feedback neural network controllers for model reference tracking, using the proposed Batch Fitted Q-learning iterative tuning strategy, motivating the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach is experimentally validated for water level control of a multi-input multi-output nonlinear constrained coupled two-tank system. Discussions on the observed control behavior are offered. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Convergence in parameters and predictions using computational experimental design.
Hagen, David R; White, Jacob K; Tidor, Bruce
2013-08-06
Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.
NASA Astrophysics Data System (ADS)
Varady, Mark; Mantooth, Brent; Pearl, Thomas; Willis, Matthew
2014-03-01
A continuum model of reactive decontamination in absorbing polymeric thin film substrates exposed to the chemical warfare agent O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothioate (known as VX) was developed to assess the performance of various decontaminants. Experiments were performed in conjunction with an inverse analysis method to obtain the necessary model parameters. The experiments involved contaminating a substrate with a fixed VX exposure, applying a decontaminant, followed by a time-resolved, liquid phase extraction of the absorbing substrate to measure the residual contaminant by chromatography. Decontamination model parameters were uniquely determined using the Levenberg-Marquardt nonlinear least squares fitting technique to best fit the experimental time evolution of extracted mass. The model was implemented numerically in both a 2D axisymmetric finite element program and a 1D finite difference code, and it was found that the more computationally efficient 1D implementation was sufficiently accurate. The resulting decontamination model provides an accurate quantification of contaminant concentration profile in the material, which is necessary to assess exposure hazards.
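The Levenberg-Marquardt fitting step can be illustrated with a toy version of the time-resolved extraction measurement. This is mine, not the paper's model: the extraction curve m(t) = m_inf(1 - e^{-kt}) and all data values are invented stand-ins.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 2, 5, 10, 20, 40, 60], dtype=float)    # minutes
m_obs = np.array([0.9, 1.7, 3.2, 4.6, 5.4, 5.8, 5.9])   # extracted mass (ug)

model = lambda t, m_inf, k: m_inf * (1 - np.exp(-k * t))
(m_inf, k), cov = curve_fit(model, t, m_obs, p0=[6.0, 0.1],
                            method="lm")                 # Levenberg-Marquardt
perr = np.sqrt(np.diag(cov))                             # 1-sigma uncertainties
print(f"m_inf = {m_inf:.2f} ± {perr[0]:.2f}, k = {k:.3f} ± {perr[1]:.3f}")
```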
NASA Technical Reports Server (NTRS)
Kuhlman, J. M.
1979-01-01
The aerodynamic design of a wind-tunnel model of a wing representative of that of a subsonic jet transport aircraft, fitted with winglets, was performed using two recently developed optimal wing-design computer programs. Both potential flow codes use a vortex lattice representation of the near-field of the aerodynamic surfaces for determination of the required mean camber surfaces for minimum induced drag, and both codes use far-field induced drag minimization procedures to obtain the required spanloads. One code uses a discrete vortex wake model for this far-field drag computation, while the second uses a 2-D advanced panel wake model. Wing camber shapes for the two codes are very similar, but the resulting winglet camber shapes differ widely. Design techniques and considerations for these two wind-tunnel models are detailed, including a description of the necessary modifications of the design geometry to format it for use by a numerically controlled machine for the actual model construction.
Pawlikowski, Marek; Jankowski, Krzysztof; Skalski, Konstanty
2018-05-30
A new constitutive model for human trabecular bone is presented. As the model is based on indentation tests performed on single trabeculae, it is formulated at the microscale. The constitutive law takes into account the non-linear viscoelasticity of the tissue. The elastic response is described by the hyperelastic Mooney-Rivlin model, while the viscoelastic effects are considered by means of the hereditary integral, in which stress depends on both time and strain. The material constants in the constitutive equation are identified on the basis of stress relaxation tests and indentation tests using a curve-fitting procedure. The constitutive model is implemented in the finite element package Abaqus® by means of a UMAT subroutine. The curve-fitting error is low, and the viscoelastic behaviour of the tissue predicted by the proposed constitutive model corresponds well to the realistic response of trabecular bone. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Real estate value prediction using multivariate regression models
NASA Astrophysics Data System (ADS)
Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav
2017-11-01
The real estate market is one of the most competitive in terms of pricing, and prices vary significantly with a large number of factors, making it a prime field for applying machine learning to optimize and predict prices with high accuracy. In this paper, we therefore present various important features to use when predicting housing prices with good accuracy. We describe regression models that use various features to achieve a lower residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction. Often a set of features (multiple regression) or polynomial regression (raising the features to various powers) is used to achieve a better model fit. Because these models are susceptible to overfitting, ridge regression is used to reduce it. This paper thus points to the best application of regression models, in addition to other techniques, to optimize the result.
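A hedged sketch of the pipeline described above: polynomial feature expansion followed by ridge regression to curb overfitting. The feature names and data are placeholders, not the authors' dataset.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))     # stand-ins for features such as area, rooms, distance
y = 3 * X[:, 0] + 0.5 * X[:, 1]**2 - X[:, 2] + rng.normal(0, 0.1, 200)

# polynomial expansion supplies the "various powers" of features; the ridge
# penalty shrinks coefficients to reduce overfitting
model = make_pipeline(PolynomialFeatures(degree=2), StandardScaler(), Ridge(alpha=1.0))
model.fit(X, y)
rss = np.sum((y - model.predict(X))**2)   # residual sum of squares
print(model.score(X, y), rss)
```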
Fitting of the Thomson scattering density and temperature profiles on the COMPASS tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stefanikova, E. (Division of Fusion Plasma Physics, KTH Royal Institute of Technology, SE-10691 Stockholm); Peterka, M.
2016-11-15
A new technique for fitting the full radial profiles of electron density and temperature obtained by the Thomson scattering diagnostic in H-mode discharges on the COMPASS tokamak is described. The technique combines the conventionally used modified hyperbolic tangent function for fitting the edge transport barrier (pedestal) with a modification of a Gaussian function for fitting the core plasma. The low number of parameters of this combined function, and their straightforward interpretability and controllability, provide a robust method for obtaining physically reasonable profile fits. Deconvolution with the diagnostic instrument function is applied to the profile fit, taking into account the dependence on the actual magnetic configuration.
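The modified hyperbolic tangent ("mtanh") pedestal form is standard in the pedestal-fitting literature; below is a sketch of a combined pedestal-plus-Gaussian-core fit. The exact core modification used on COMPASS may differ, so the functional form and parameter names here are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def mtanh(x, b):
    # modified hyperbolic tangent: tanh with a linear slope on the core side
    x = np.clip(x, -50, 50)   # guard against overflow for extreme trial parameters
    return ((1 + b * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

def profile(r, h, r_ped, w, b, offset, a_core, w_core):
    ped = 0.5 * h * (mtanh((r_ped - r) / (2 * w), b) + 1) + offset
    core = a_core * np.exp(-(r / w_core) ** 2)   # Gaussian core modification (assumed form)
    return ped + core

rng = np.random.default_rng(0)
r = np.linspace(0.0, 1.1, 60)                           # normalized radius
true = (2.0, 1.0, 0.02, 0.05, 0.1, 0.8, 0.5)
ne = profile(r, *true) + rng.normal(0, 0.02, r.size)    # synthetic Thomson profile

popt, _ = curve_fit(profile, r, ne, p0=[2, 1, 0.03, 0, 0, 1, 0.5], maxfev=20000)
```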
[Tibial press-fit fixation of flexor tendons for reconstruction of the anterior cruciate ligament].
Ettinger, M; Liodakis, E; Haasper, C; Hurschler, C; Breitmeier, D; Krettek, C; Jagodzinski, M
2012-09-01
Press-fit fixation of hamstring tendon autografts for anterior cruciate ligament reconstruction is an interesting technique because no hardware is necessary. This study compares the biomechanical properties of press-fit fixations to an interference screw fixation. Twenty-eight human cadaveric knees were used for hamstring tendon explantation. An additional bone block was harvested from the tibia. We used 28 porcine femora for graft fixation. Constructs were cyclically stretched and then loaded until failure. Maximum load to failure, stiffness and elongation during failure testing and cyclic loading were investigated. The maximum load to failure was 970±83 N for the press-fit tape fixation (T), 572±151 N for the bone bridge fixation (TS), 544±109 N for the interference screw fixation (I), 402±77 N for the press-fit suture fixation (S) and 290±74 N for the bone block fixation technique (F). The T fixation had a significantly better maximum load to failure compared to all other techniques (p<0.001). This study demonstrates that a tibial press-fit technique which uses an additional bone block has better maximum load to failure results compared to a simple interference screw fixation.
[Arthroscopic reconstruction of anterior cruciate ligament with press-fit technique].
Halder, A M
2010-08-01
Problems related to the use of interference screws for fixation of bone-patellar tendon-bone grafts for anterior cruciate ligament (ACL) replacement have led to increasing interest in press-fit techniques. Most of the described techniques use press-fit fixation on either the femoral or tibial side. Therefore an arthroscopic technique was developed which achieves bone-patellar tendon-bone graft fixation by press-fit on both sides without the need for supplemental fixation material. The first consecutive 40 patients were examined clinically with a KT-1000 arthrometer and radiologically after a mean of 28.7 months (range 20-40 months) postoperatively. The mean difference in side-to-side laxity was 1.3 mm (SD 2.2 mm) and the results according to the International Knee Documentation Committee (IKDC) score were as follows: 7 A, 28 B, 5 C, 0 D. The presented press-fit technique avoids all complications related to the use of interference screws. It achieves primary stable fixation of the bone-patellar tendon-bone graft thereby allowing early functional rehabilitation. However, fixation strength depends on bone quality and the arthroscopic procedure is demanding. The results showed reliable stabilization of the operated knees.
Sabonghy, Eric Peter; Wood, Robert Michael; Ambrose, Catherine Glauber; McGarvey, William Christopher; Clanton, Thomas Oscar
2003-03-01
Tendon transfer techniques in the foot and ankle are used for tendon ruptures, deformities, and instabilities. This fresh-cadaver study compares tendon fixation strength in 10 paired specimens using either a tendon-to-tendon fixation technique or a 7 × 20-25 mm bioabsorbable interference-fit screw fixation technique. Load at failure of the tendon-to-tendon fixation method averaged 279 N (standard deviation 81 N) and that of the bioabsorbable screw 148 N (standard deviation 72 N) [p = 0.0008]. Bioabsorbable interference-fit screws in these specimens showed decreased fixation strength relative to the traditional fixation technique. However, the mean bioabsorbable screw fixation strength of 148 N provides physiologic strength at the tendon-bone interface.
A system approach to aircraft optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.
Mayr, Hermann O; Dietrich, Markwart; Fraedrich, Franz; Hube, Robert; Nerlich, Andreas; von Eisenhart-Rothe, Rüdiger; Hein, Werner; Bernstein, Anke
2009-09-01
A sheep study was conducted to test a press-fit technique using microporous pure beta-tricalcium phosphate (beta-TCP) dowels for fixation of the anterior cruciate ligament (ACL) graft. Microporous (5 μm) cylindrical plugs of beta-TCP (diameter, 7 mm; length, 25 mm) with interconnecting pores were used. The material featured a novel configuration of structure and surface geometry. Implants were tested by use of press-fit fixation of ACL grafts with and without bone blocks in 42 sheep over a period of 24 weeks. Biomechanical, radiologic, histologic, and immunohistochemical evaluations were performed. In load-to-failure tests at 6, 12, and 24 weeks after surgery, the intra-articular graft always failed, not the fixation. Grafts showed bony fixation in the tunnel at 6 weeks and primary healing at the junction of the tunnel and joint after 24 weeks. Tricalcium phosphate was resorbed and simultaneously replaced by bone. Remodeling was still incomplete at 24 weeks. In the sheep model microporous beta-TCP implants used with press-fit fixation of ACL grafts permit early functional rehabilitation. After 6 weeks, the graft is fixed by woven bone or bony integration. Implanted microporous tricalcium phosphate is resorbed and replaced by bone. In a sheep model we showed that primary healing of ACL grafts with resorption and bony replacement of the fixating implant can be achieved by means of press-fit fixation with pure beta-TCP.
Can a Linear Sigma Model Describe Walking Gauge Theories at Low Energies?
NASA Astrophysics Data System (ADS)
Gasbarro, Andrew
2018-03-01
In recent years, many investigations of confining Yang-Mills gauge theories near the edge of the conformal window have been carried out using lattice techniques. These studies have revealed that the spectrum of hadrons in nearly conformal ("walking") gauge theories differs significantly from the QCD spectrum. In particular, a light singlet scalar appears in the spectrum which is nearly degenerate with the PNGBs at the lightest currently accessible quark masses. This state is a viable candidate for a composite Higgs boson. Presently, an acceptable effective field theory (EFT) description of the light states in walking theories has not been established. Such an EFT would be useful for performing chiral extrapolations of lattice data and for serving as a bridge between lattice calculations and phenomenology. It has been shown that the chiral Lagrangian fails to describe the IR dynamics of a theory near the edge of the conformal window. Here we assess a linear sigma model as an alternate EFT description by performing explicit chiral fits to lattice data. In a combined fit to the Goldstone (pion) mass and decay constant, a tree-level linear sigma model has a χ2/d.o.f. = 0.5, compared to χ2/d.o.f. = 29.6 from fitting next-to-leading-order chiral perturbation theory. When the 0++ (σ) mass is included in the fit, χ2/d.o.f. = 4.9. We remark on future directions for providing better fits to the σ mass.
Fatone, Stefania; Caldwell, Ryan
2017-06-01
Current transfemoral prosthetic sockets restrict function, lack comfort, and cause residual limb problems. Lower proximal trim lines are an appealing way to address this problem. Development of a more comfortable and possibly functional subischial socket may contribute to improving the quality of life of persons with transfemoral amputation. The purpose of this study was to (1) describe the design and fabrication of a new subischial socket and (2) describe efforts to teach this technique. Development project. Socket development involved defining the following: subject and liner selection, residual limb evaluation, casting, positive mold rectification, check socket fitting, definitive socket fabrication, and troubleshooting of socket fit. Three hands-on workshops to teach the socket were piloted and attended by 30 certified prosthetists and their patient models. Patient models responded positively to the comfort, range of motion, and stability of the new socket, while prosthetists described the technique as "straight forward, reproducible." To our knowledge, this is the first attempt to create a teachable subischial socket, and while it appears promising, more definitive evaluation is needed. Clinical relevance: We developed the Northwestern University Flexible Subischial Vacuum (NU-FlexSIV) Socket as a more comfortable alternative to current transfemoral sockets and demonstrated that it could be taught successfully to prosthetists.
NASA Astrophysics Data System (ADS)
Riley, P.
2016-12-01
The southward component of the interplanetary magnetic field plays a key role in many space weather-related phenomena. However, thus far, it has proven difficult to predict it with any degree of fidelity. In this talk I outline the difficulties in making such forecasts, and describe several promising techniques that may ultimately prove successful. In particular, I focus on predictions of magnetic fields embedded within interplanetary coronal mass ejections (ICMEs), which are the cause of most large, non-recurrent geomagnetic storms. I discuss three specific techniques that are already producing modest, but promising results. First, a pattern recognition approach, which matches observed coherent rotations in the magnetic field with historical intervals of similar variations, then forecasts future variations based on the historical data. Second, a novel flux rope fitting technique that uses an MCMC algorithm to find a best fit to the partially observed ICME. And third, an empirical modular CME model (based on the approach outlined by N. Savani and colleagues), which links several ad hoc models of coronal properties of the flux rope, its kinematics and geometry in the corona, dynamic evolution, and time of transit to 1 AU. We highlight the uncertainties associated with these predictions, and, in particular, identify those that we believe can be reduced in the future.
Bending the Rules: Widefield Microscopy and the Abbe Limit of Resolution
Verdaasdonk, Jolien S.; Stephens, Andrew D.; Haase, Julian; Bloom, Kerry
2014-01-01
One of the most fundamental concepts of microscopy is that of resolution–the ability to clearly distinguish two objects as separate. Recent advances such as structured illumination microscopy (SIM) and point localization techniques including photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) strive to overcome the inherent limits of resolution of the modern light microscope. These techniques, however, are not always feasible or optimal for live cell imaging. Thus, in this review, we explore three techniques for extracting high resolution data from images acquired on a widefield microscope–deconvolution, model convolution, and Gaussian fitting. Deconvolution is a powerful tool for restoring a blurred image using knowledge of the point spread function (PSF) describing the blurring of light by the microscope, although care must be taken to ensure the accuracy of subsequent quantitative analysis. The process of model convolution also requires knowledge of the PSF to blur a simulated image, which can then be compared to the experimentally acquired data to reach conclusions regarding its geometry and fluorophore distribution. Gaussian fitting is the basis for point localization microscopy, and can also be applied to tracking spot motion over time or measuring spot shape and size. Altogether, these three methods serve as powerful tools for high-resolution imaging using widefield microscopy. PMID:23893718
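A minimal sketch of the Gaussian-fitting approach for spot localization: fit a 2D Gaussian to a cropped spot to recover a sub-pixel center, width, and amplitude. The synthetic image and noise model are assumptions; a real analysis would crop around a detected maximum and model camera noise.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    g = amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + offset
    return g.ravel()

yy, xx = np.mgrid[0:15, 0:15].astype(float)
truth = gauss2d((xx, yy), 100.0, 7.3, 6.8, 1.8, 10.0).reshape(15, 15)
img = truth + np.random.default_rng(1).normal(0, 2, truth.shape)   # noisy spot

p0 = [img.max() - img.min(), 7.0, 7.0, 2.0, img.min()]   # rough initial guess
popt, _ = curve_fit(gauss2d, (xx, yy), img.ravel(), p0=p0)
print("sub-pixel center:", popt[1], popt[2])
```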
VizieR Online Data Catalog: Optical/NIR photometry of OGLE-2012-SN-006 (Pastorello+, 2015)
NASA Astrophysics Data System (ADS)
Pastorello, A.; Wyrzykowski, L.; Valenti, S.; Prieto, J. L.; Kozlowski, S.; Udalski, A.; Elias-Rosa, N.; Morales-Garoffolo, A.; Anderson, J. P.; Benetti, S.; Bersten, M.; Botticella, M. T.; Cappellaro, E.; Fasano, G.; Fraser, M.; Gal-Yam, A.; Gillone, M.; Graham, M. L.; Greiner, J.; Hachinger, S.; Howell, D. A.; Inserra, C.; Parrent, J.; Rau, A.; Schulze, S.; Smartt, S. J.; Smith, K. W.; Turatto, M.; Yaron, O.; Young, D. R.; Kubiak, M.; Szymanski, M. K.; Pietrzynski, G.; Soszynski, I.; Ulaczyk, K.; Poleski, R.; Pietrukowicz, P.; Skowron, J.; Mroz, P.
2017-11-01
Photometric measurements in the optical and NIR bands were obtained through the PSF-fitting technique. A template PSF was built using stars in the SN field. Using this PSF model along with a low-order polynomial surface, we performed a fit to the SN and the underlying background. OGLE-IV photometry was obtained using difference imaging analysis, a template subtraction method adapted to the OGLE data and detailed in Wyrzykowski et al. 2014, J/AcA/64/197 (see also Wozniak 2000, J/AcA/50/421). (2 data files).
Applying the Yule-Nielsen equation with negative n.
Lewandowski, Achim; Ludl, Marcus; Byrne, Gerald; Dorffner, Georg
2006-08-01
The theory of halftone printing has attracted the attention of both industry and scientists for decades. The Yule-Nielsen equation has been used for color prediction for more than fifty years now, but apparently the tuning parameter n has been taken to be only larger than or equal to one. Our paper shows that the extension to the whole real axis is well defined in a mathematical sense. We fitted models to data from the ceramics tile printing sector (Rotocolor and Kerajet printing techniques) and found that using a negative n in almost all considered cases resulted in a better fit.
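For reference, the single-ink Yule-Nielsen relation is R = (a·Ri^(1/n) + (1-a)·Rp^(1/n))^n, where a is the dot coverage and Ri, Rp are the solid-ink and paper reflectances. The sketch below fits n over both the negative and positive halves of the real axis, as the paper proposes; the reflectance data are illustrative, not the Rotocolor/Kerajet measurements.

```python
import numpy as np
from scipy.optimize import minimize_scalar

Rp, Ri = 0.85, 0.06            # paper and solid-ink reflectances (assumed)
a = np.linspace(0, 1, 11)      # nominal dot coverages
R_meas = np.array([0.85, 0.74, 0.64, 0.55, 0.47, 0.40,
                   0.33, 0.27, 0.21, 0.15, 0.06])      # measured reflectances (illustrative)

def yule_nielsen(a, n):
    return (a * Ri**(1.0 / n) + (1 - a) * Rp**(1.0 / n))**n

def sse(n):
    return np.sum((yule_nielsen(a, n) - R_meas)**2)

# search both sides of the real axis, excluding a neighborhood of n = 0
fits = [minimize_scalar(sse, bounds=b, method='bounded') for b in [(-10, -0.1), (0.1, 10)]]
best = min(fits, key=lambda f: f.fun)
print(best.x, best.fun)
```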
Development of a winter wheat adjustable crop calendar model
NASA Technical Reports Server (NTRS)
Baker, J. R. (Principal Investigator)
1978-01-01
The author has identified the following significant results. After parameter estimation, tests were conducted with variances from the fits, and on independent data. From these tests, it was generally concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. The Robertson's triquadratic form, in general use for spring wheat, was found to show promise for winter wheat, but special techniques and care were required for its use. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with daily environmental values as independent variables.
Using CFD Techniques to Predict Slosh Force Frequency and Damping Rate
NASA Technical Reports Server (NTRS)
Marsell, Brandon; Gangadharan, Sathya; Chatman, Yadira; Sudermann, James
2009-01-01
Resonant effects and energy dissipation due to sloshing fuel inside propellant tanks are problems that arise in the initial design of any spacecraft or launch vehicle. A faster and more reliable method for calculating these effects during the design stages is needed. Using Computational Fluid Dynamics (CFD) techniques, a model of these fuel tanks can be created and used to predict important parameters such as resonant slosh frequency and damping rate. This initial study addresses the case of free surface slosh. Future studies will focus on creating models for tanks fitted with propellant management devices (PMD) such as diaphragms and baffles.
A study of data analysis techniques for the multi-needle Langmuir probe
NASA Astrophysics Data System (ADS)
Hoang, H.; Røed, K.; Bekkeng, T. A.; Moen, J. I.; Spicher, A.; Clausen, L. B. N.; Miloch, W. J.; Trondsen, E.; Pedersen, A.
2018-06-01
In this paper we evaluate two data analysis techniques for the multi-needle Langmuir probe (m-NLP). The instrument uses several cylindrical Langmuir probes, which are positively biased with respect to the plasma potential in order to operate in the electron saturation region. Since the currents collected by these probes can be sampled at kilohertz rates, the instrument is capable of resolving the ionospheric plasma structure down to the meter scale. The two data analysis techniques, a linear fit and a non-linear least squares fit, are discussed in detail using data from the Investigation of Cusp Irregularities 2 sounding rocket. It is shown that each technique has pros and cons with respect to the m-NLP implementation. Even though the linear fitting technique agrees better with measurements from incoherent scatter radar and other in situ instruments, the probes could be made longer and could be cleaned during operation to improve instrument performance. The non-linear least squares fitting technique would be more reliable provided that a higher number of probes were deployed.
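A sketch of the linear-fit technique under standard orbital-motion-limited (OML) assumptions: the square of the electron saturation current of a cylindrical probe is linear in bias voltage, so the electron density follows from the slope of I² versus V across the fixed-bias needles. Probe geometry and current values below are assumptions.

```python
import numpy as np

e, m_e = 1.602e-19, 9.109e-31             # electron charge (C) and mass (kg)
r_p, l_p = 0.25e-3, 25e-3                 # probe radius and length (assumed), m
A = 2 * np.pi * r_p * l_p                 # cylindrical collector area

V_bias = np.array([2.5, 4.0, 5.5, 10.0])              # fixed probe biases (V), assumed
I_meas = np.array([0.9e-6, 1.1e-6, 1.3e-6, 1.7e-6])   # collected currents (A), synthetic

# OML: I^2 = (2 A^2 e^3 n_e^2 / (pi^2 m_e)) * (V + kTe/e), linear in V
slope, _ = np.polyfit(V_bias, I_meas**2, 1)
n_e = np.sqrt(slope * np.pi**2 * m_e / (2 * A**2 * e**3))
print(f"n_e = {n_e:.3e} m^-3")
```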
1980-09-01
A Simple Formula to Calculate Shallow-Water Transmission Loss by Means of a Least-Squares Surface Fit Technique. Hastrup, Ole F.; Akal, Tuncay. SACLANT ASW Research Centre, SACLANTCEN Memorandum SM-139.
Frisardi, Gianni; Barone, Sandro; Razionale, Armando V; Paoli, Alessandro; Frisardi, Flavio; Tullio, Antonio; Lumbau, Aurea; Chessa, Giacomo
2012-05-29
A fundamental pre-requisite for clinical success in dental implant surgery is fast and stable implant osseointegration. The press-fit phenomenon occurring at implant insertion induces biomechanical effects in the bone tissues, which ensure implant primary stability. In the field of dental surgery, the understanding of the key factors governing the osseointegration process still remains of utmost importance. A thorough analysis of the biomechanics of dental implantology requires a detailed knowledge of bone mechanical properties as well as an accurate definition of the jaw bone geometry. In this work, a CT image-based approach, combined with the Finite Element Method (FEM), has been used to investigate the effect of the drill size on the biomechanics of the dental implant technique. A very accurate model of the human mandible bone segment has been created by processing high resolution micro-CT image data. The press-fit phenomenon has been simulated by FE analyses for different common drill diameters (DA=2.8 mm, DB=3.3 mm, and DC=3.8 mm) with depth L=12 mm. A virtual implant model has been assumed with a cylindrical geometry having height L=11 mm and diameter D=4 mm. The maximum stresses calculated for drill diameters DA, DB and DC were 12.31 GPa, 7.74 GPa and 4.52 GPa, respectively. High strain values were measured in the cortical area for the models of diameters DA and DB, while a uniform distribution was observed for the model of diameter DC. The maximum logarithmic strains, calculated in nonlinear analyses, were ϵ=2.46, 0.51 and 0.49 for the three models, respectively. This study introduces a very powerful, accurate and non-destructive methodology for investigating the effect of the drill size on the biomechanics of the dental implant technique. Further studies could aim at understanding how different drill shapes can determine the optimal press-fit condition, with an equally distributed preload on both the cortical and trabecular structure around the implant.
Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques
Barzaghi, Riccardo; De Gaetani, Carlo Iapige
2018-01-01
Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D’Arborea (Cantoniera) dam, which is in Sardinia. Time series of pendulum and GNSS data over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit both pendulum and GNSS data, with standard deviations of the residuals smaller than one millimeter. These encouraging results led to the conclusion that the GNSS technique can be profitably applied to dam monitoring, allowing a denser description, both in space and time, of dam displacements than one based on pendulum observations. PMID:29498650
NASA Technical Reports Server (NTRS)
Jennings, W. P.; Olsen, N. L.; Walter, M. J.
1976-01-01
The development of testing techniques useful in airplane ground resonance testing, wind tunnel aeroelastic model testing, and airplane flight flutter testing is presented. Included is the consideration of impulsive excitation, steady-state sinusoidal excitation, and random and pseudorandom excitation. Reasons for the selection of fast sine sweeps for transient excitation are given. The use of the fast Fourier transform dynamic analyzer (HP-5451B) is presented, together with a curve-fitting data process in the Laplace domain to experimentally evaluate values of generalized mass, modal frequencies, dampings, and mode shapes. The effects of poor signal-to-noise ratios due to turbulence creating data variance are discussed. Data manipulation techniques used to overcome variance problems are also included. The experience gained by using these techniques since the early stages of the SST program is described. Data measured during 747 flight flutter tests, and SST, YC-14, and 727 empennage flutter model tests are included.
NASA Astrophysics Data System (ADS)
Menezes-Blackburn, Daniel; Sun, Jiahui; Lehto, Niklas; Zhang, Hao; Stutter, Marc; Giles, Courtney D.; Darch, Tegan; George, Timothy S.; Shand, Charles; Lumsdon, David; Blackwell, Martin; Wearing, Catherine; Cooper, Patricia; Wendler, Renate; Brown, Lawrie; Haygarth, Philip M.
2017-04-01
The phosphorus (P) labile pool and desorption kinetics were simultaneously evaluated in ten representative UK soils using the diffusive gradients in thin films (DGT) technique. The DGT-induced fluxes in soils and sediments (DIFS) model was fitted to the time series of DGT deployments (1 h to 240 h). The desorbable P concentration (labile P) was obtained by multiplying the fitted Kd by the soil solution P concentration obtained using diffusive equilibration in thin films (DET) devices. The labile P was then compared to several soil P extracts, including Olsen P, resin P, FeO-P and water-extractable P, in order to assess whether these analytical procedures can be used to represent the labile P across different soils. The Olsen P, commonly used as a representation of the soil labile P pool, overestimated the desorbable P concentration by a factor of seven. The use of this approach for the quantification of soil P desorption kinetics parameters was somewhat imprecise, showing a wide range of equally valid solutions for the system response time (Tc). Additionally, the performance of different DIFS model versions (1D, 2D and 3D) was compared. Although these models had a good fit to the experimental DGT time series data, the fitted parameters showed poor agreement between the different model versions. The limitations of the DIFS model family are associated with the assumptions made in the modelling approach, and the 3D version is considered here to be the most precise among them.
Gaikwad, Bhushan Satish; Nazirkar, Girish; Dable, Rajani; Singh, Shailendra
2018-01-01
The present study aims to compare and evaluate the marginal fit and axial wall adaptability of Co-Cr copings fabricated by metal laser sintering (MLS) and lost-wax (LW) techniques using a stereomicroscope. A stainless steel master die assembly was fabricated simulating a prepared crown; 40 replicas of the master die were fabricated in type IV gypsum and randomly divided into two equal groups. Group A copings were fabricated by the LW technique and Group B copings by the MLS technique. The copings were seated on their respective gypsum dies and the marginal fit was measured using a stereomicroscope and image analysis software. For evaluation of axial wall adaptability, the coping and die assemblies were embedded in autopolymerizing acrylic resin and sectioned vertically. The discrepancies between the dies and copings were measured along the axial wall on each half. The data were subjected to statistical analysis using an unpaired t-test. The mean marginal fit values for copings in Group B (MLS) were lower (24.6 μm) than those in Group A (LW) (39.53 μm), and the difference was statistically significant (P < 0.05). The mean axial wall discrepancy value was lower for Group B (31.03 μm) compared with Group A (54.49 μm), and the difference was statistically significant (P < 0.05). The copings fabricated by the MLS technique had better marginal fit and axial wall adaptability than copings fabricated by the LW technique. However, the marginal fit values of copings fabricated by the two techniques were within the clinically acceptable limit (<50 μm).
Htun, Tha Pyai; Lim, Peng Im; Ho-Lim, Sarah
2015-01-01
Objectives: The aim of this study was to examine the relationships among maternal and infant characteristics, breastfeeding techniques, and exclusive breastfeeding initiation in different modes of birth using structural equation modeling approaches. Methods: We examined a hypothetical model based on integrating concepts of a breastfeeding decision-making model, a breastfeeding initiation model, and social cognitive theory among 952 mother-infant dyads. The LATCH breastfeeding assessment tool was used to evaluate breastfeeding techniques, and two infant feeding categories were used (exclusive and non-exclusive breastfeeding). Results: Structural equation models (SEM) showed that multiparity was significantly positively associated with breastfeeding techniques and that infant jaundice was significantly negatively related to exclusive breastfeeding initiation. A multigroup analysis in the SEM showed no difference between the caesarean section and vaginal delivery groups' estimates of the effect of breastfeeding techniques on exclusive breastfeeding initiation. Breastfeeding techniques were significantly positively associated with exclusive breastfeeding initiation in the entire sample and in the vaginal deliveries group. However, breastfeeding techniques were not significantly associated with exclusive breastfeeding initiation in the caesarean section group. Maternal age, maternal race, gestations, infant birth weight, and postnatal complications had no significant impact on breastfeeding techniques or exclusive breastfeeding initiation in our study. Overall, the models fitted the data satisfactorily (GFI = 0.979–0.987; AGFI = 0.951–0.962; IFI = 0.958–0.962; CFI = 0.955–0.960; RMSEA = 0.029–0.034). Conclusions: Multiparity and infant jaundice were found to affect breastfeeding technique and exclusive breastfeeding initiation, respectively. Breastfeeding technique was related to exclusive breastfeeding initiation according to the mode of birth. This relationship implies the importance of early effective interventions among first-time mothers with jaundiced infants in improving breastfeeding techniques and promoting exclusive breastfeeding initiation. PMID:26566028
Diaby, Vakaramoko; Adunlin, Georges; Montero, Alberto J
2014-02-01
Survival modeling techniques are increasingly being used as part of decision modeling for health economic evaluations. As many models are available, it is imperative for interested readers to know about the steps in selecting and using the most suitable ones. The objective of this paper is to propose a tutorial for the application of appropriate survival modeling techniques to estimate transition probabilities, for use in model-based economic evaluations, in the absence of individual patient data (IPD). An illustration of the use of the tutorial is provided based on the final progression-free survival (PFS) analysis of the BOLERO-2 trial in metastatic breast cancer (mBC). An algorithm was adopted from Guyot and colleagues, and was then run in the statistical package R to reconstruct IPD, based on the final PFS analysis of the BOLERO-2 trial. It should be emphasized that the reconstructed IPD represent an approximation of the original data. Afterwards, we fitted parametric models to the reconstructed IPD in the statistical package Stata. Both statistical and graphical tests were conducted to verify the relative and absolute validity of the findings. Finally, the equations for transition probabilities were derived using the general equation for transition probabilities used in model-based economic evaluations, and the parameters were estimated from fitted distributions. The results of the application of the tutorial suggest that the log-logistic model best fits the reconstructed data from the latest published Kaplan-Meier (KM) curves of the BOLERO-2 trial. Results from the regression analyses were confirmed graphically. An equation for transition probabilities was obtained for each arm of the BOLERO-2 trial. In this paper, a tutorial was proposed and used to estimate the transition probabilities for model-based economic evaluation, based on the results of the final PFS analysis of the BOLERO-2 trial in mBC. The results of our study can serve as a basis for any model (Markov) that needs the parameterization of transition probabilities, and only has summary KM plots available.
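The final step of such a tutorial converts a fitted survival function S(t) into per-cycle transition probabilities via tp(t) = 1 - S(t)/S(t - u), where u is the model cycle length. A hedged sketch with a Weibull form is shown for simplicity; the paper found a log-logistic model best for the BOLERO-2 data, and the parameter values here are illustrative.

```python
import numpy as np

def weibull_S(t, lam, gamma):
    # Weibull survival function S(t) = exp(-lam * t^gamma)
    return np.exp(-lam * t**gamma)

def transition_prob(t, u, lam, gamma):
    # probability that the event occurs during the cycle ending at time t
    return 1.0 - weibull_S(t, lam, gamma) / weibull_S(t - u, lam, gamma)

u = 1.0                              # model cycle length (months), assumed
lam, gamma = 0.05, 1.2               # illustrative fitted Weibull parameters
cycles = np.arange(1, 25) * u
tp = transition_prob(cycles, u, lam, gamma)   # time-dependent transition probabilities
```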
Fitting and Reconstruction of Thirteen Simple Coronal Mass Ejections
NASA Astrophysics Data System (ADS)
Al-Haddad, Nada; Nieves-Chinchilla, Teresa; Savani, Neel P.; Lugaz, Noé; Roussev, Ilia I.
2018-05-01
Coronal mass ejections (CMEs) are the main drivers of geomagnetic disturbances, but the effects of their interaction with Earth's magnetic field depend on their magnetic configuration and orientation. Fitting and reconstruction techniques have been developed to determine important geometrical and physical CME properties, such as the orientation of the CME axis, the CME size, and its magnetic flux. In many instances, there is disagreement between different methods but also between fitting from in situ measurements and reconstruction based on remote imaging. This could be due to the geometrical or physical assumptions of the models, but also to the fact that the magnetic field inside CMEs is only measured at one point in space as the CME passes over a spacecraft. In this article we compare three methods that are based on different assumptions for measurements by the Wind spacecraft for 13 CMEs from 1997 to 2015. These CMEs are selected from the interplanetary coronal mass ejections catalog on
Su, Ting-Shu; Sun, Jian
2016-09-01
For 20 years, the intraoral digital impression technique has been applied to the fabrication of computer aided design and computer aided manufacturing (CAD-CAM) fixed dental prostheses (FDPs). Clinical fit is one of the main determinants of the success of an FDP. Studies of the clinical fit of 3-unit ceramic FDPs made by means of a conventional impression versus a digital impression technology are limited. The purpose of this in vitro study was to evaluate and compare the internal fit and marginal fit of CAD-CAM, 3-unit ceramic FDP frameworks fabricated from an intraoral digital impression and a conventional impression. A standard model was designed for a prepared maxillary left canine and second premolar and missing first premolar. The model was scanned with an intraoral digital scanner, exporting stereolithography (STL) files as the experimental group (digital group). The model was used to fabricate 10 stone casts that were scanned with an extraoral scanner, exporting STL files to a computer connected to the scanner as the control group (conventional group). The STL files were used to produce zirconia FDP frameworks with CAD-CAM. These frameworks were seated on the standard model and evaluated for marginal and internal fit. Each framework was segmented into 4 sections per abutment teeth, resulting in 8 sections per framework, and was observed using optical microscopy with ×50 magnification. Four measurement points were selected on each section as marginal discrepancy (P1), mid-axial wall (P2), axio-occusal edge (P3), and central-occlusal point (P4). Mean marginal fit values of the digital group (64 ±16 μm) were significantly smaller than those of the conventional group (76 ±18 μm) (P<.05). The mean internal fit values of the digital group (111 ±34 μm) were significantly smaller than those of the conventional group (132 ±44 μm) (P<.05). CAD-CAM 3-unit zirconia FDP frameworks fabricated from intraoral digital and conventional impressions showed clinically acceptable marginal and internal fit. The marginal and internal fit of frameworks fabricated from the intraoral digital impression system were better than those fabricated from conventional impressions. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Inferring Diffusion Dynamics from FCS in Heterogeneous Nuclear Environments
Tsekouras, Konstantinos; Siegel, Amanda P.; Day, Richard N.; Pressé, Steve
2015-01-01
Fluorescence correlation spectroscopy (FCS) is a noninvasive technique that probes the diffusion dynamics of proteins down to single-molecule sensitivity in living cells. Critical mechanistic insight is often drawn from FCS experiments by fitting the resulting time-intensity correlation function, G(t), to known diffusion models. When simple models fail, the complex diffusion dynamics of proteins within heterogeneous cellular environments can be fit to anomalous diffusion models with adjustable anomalous exponents. Here, we take a different approach. We use the maximum entropy method to show—first using synthetic data—that a model for proteins diffusing while stochastically binding/unbinding to various affinity sites in living cells gives rise to a G(t) that could otherwise be equally well fit using anomalous diffusion models. We explain the mechanistic insight derived from our method. In particular, using real FCS data, we describe how the effects of cell crowding and binding to affinity sites manifest themselves in the behavior of G(t). Our focus is on the diffusive behavior of an engineered protein in 1) the heterochromatin region of the cell’s nucleus as well as 2) in the cell’s cytoplasm and 3) in solution. The protein consists of the basic region-leucine zipper (BZip) domain of the CCAAT/enhancer-binding protein (C/EBP) fused to fluorescent proteins. PMID:26153697
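For context, a minimal sketch contrasting the two fits discussed: a standard 3D diffusion model versus an anomalous-diffusion model for the FCS correlation G(t). The structure parameter s and all data are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

s = 5.0  # axial-to-lateral ratio of the confocal volume (assumed)

def G_normal(t, G0, tau):
    return G0 / ((1 + t / tau) * np.sqrt(1 + t / (s**2 * tau)))

def G_anomalous(t, G0, tau, alpha):
    x = (t / tau)**alpha          # anomalous exponent alpha < 1: subdiffusion
    return G0 / ((1 + x) * np.sqrt(1 + x / s**2))

t = np.logspace(-5, 0, 80)        # lag times (s)
G_data = G_anomalous(t, 0.1, 1e-3, 0.8) + np.random.default_rng(2).normal(0, 1e-4, t.size)

p_norm, _ = curve_fit(G_normal, t, G_data, p0=[0.1, 1e-3], bounds=([0, 0], [1, 1]))
p_anom, _ = curve_fit(G_anomalous, t, G_data, p0=[0.1, 1e-3, 1.0], bounds=([0, 0, 0.1], [1, 1, 2]))
```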
Ramo, Nicole L.; Puttlitz, Christian M.
2018-01-01
Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
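The computational trick generalizes the classical recursive update for an exponential relaxation kernel, under which the hereditary integral needs only one stored state variable per step. The sketch below shows the linear-kernel core of that idea; the strain-dependent weighting of the paper's formulation is omitted.

```python
import numpy as np

def stress_history(strain, dt, E_inf, E1, tau):
    """sigma(t) = E_inf*eps(t) + int_0^t E1*exp(-(t-s)/tau) * deps/ds ds,
    evaluated recursively: only the state variable h from the previous step
    is stored, never the full strain history. Assumes zero strain at t = 0."""
    h = 0.0
    decay = np.exp(-dt / tau)
    sigma = np.empty_like(strain)
    sigma[0] = E_inf * strain[0]
    for n in range(1, len(strain)):
        deps = strain[n] - strain[n - 1]
        # exact update for piecewise-linear strain within the step
        h = decay * h + E1 * (tau / dt) * (1 - decay) * deps
        sigma[n] = E_inf * strain[n] + h
    return sigma

# e.g. a stress-relaxation test: ramp to 2% strain, then hold
t = np.linspace(0, 10, 1001)
eps = np.minimum(t / 0.5, 1.0) * 0.02
sig = stress_history(eps, t[1] - t[0], E_inf=1.0, E1=0.5, tau=2.0)
```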
Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco
2017-10-01
The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique, to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably, using the PFU counts as determined by the ISO method after only 3 hours of incubation. One linear model, for when the number of plaques detected after 3 hours was between 4 and 26 PFU, had a linear fit of 1.48 × Counts(3 h) + 1.97; the other, for values >26 PFU, had a fit of 1.18 × Counts(3 h) + 2.95. If the number of plaques detected after 3 hours was <4 PFU, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has a reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
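The two prediction rules quoted above translate directly into a small piecewise function; this is a transcription of the reported fits, with the <4 PFU case deferred to overnight incubation.

```python
def predict_pfu(counts_3h):
    """Predict the standard (overnight) PFU count from a 3 h plaque count."""
    if counts_3h < 4:
        return None            # too few plaques: incubate (18 +/- 3) h instead
    if counts_3h <= 26:
        return 1.48 * counts_3h + 1.97
    return 1.18 * counts_3h + 2.95

# e.g. predict_pfu(10) -> 16.77, predict_pfu(40) -> 50.15
```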
NASA Astrophysics Data System (ADS)
Elbeih, Ahmed; Abd-Elghany, Mohamed; Elshenawy, Tamer
2017-03-01
Vacuum stability test (VST) is mainly used to study the compatibility and stability of energetic materials. In this work, VST has been used to study the thermal decomposition kinetics of four cyclic nitramines, 1,3,5-trinitro-1,3,5-triazinane (RDX), 1,3,5,7-tetranitro-1,3,5,7-tetrazocane (HMX), cis-1,3,4,6-tetranitrooctahydroimidazo-[4,5-d]imidazole (BCHMX), and 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (ε-HNIW, CL-20), each bonded by a polyurethane matrix based on hydroxyl-terminated polybutadiene (HTPB). Model-fitting and model-free (isoconversional) methods have been applied to determine the decomposition kinetics from VST results. For comparison, the decomposition kinetics were determined isothermally by the ignition delay technique and non-isothermally by the Advanced Kinetics and Technology Solution (AKTS) software. The activation energies for thermolysis obtained by the isoconversional method based on the VST technique for RDX/HTPB, HMX/HTPB, BCHMX/HTPB and CL20/HTPB were 157.1, 203.1, 190.0 and 176.8 kJ mol-1, respectively. The model-fitting method showed that the thermal decomposition of BCHMX/HTPB is controlled by a nucleation model, while all the other studied PBXs are controlled by diffusion models. A linear relationship between the ignition temperatures and the activation energies was observed. BCHMX/HTPB is an interesting new PBX at the research stage.
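As a hedged sketch of the model-free step: Friedman's isoconversional method estimates the activation energy at a fixed conversion α from the slope of ln(dα/dt) versus 1/T across runs at different temperatures. The data layout (isothermal VST runs) and numbers below are assumptions.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def friedman_Ea(temps_K, rates_at_alpha):
    """temps_K: temperatures of the isothermal runs;
    rates_at_alpha: dalpha/dt measured at the same conversion alpha in each run.
    From ln(dalpha/dt) = ln(A f(alpha)) - Ea/(R T), the slope vs 1/T is -Ea/R."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps_K), np.log(rates_at_alpha), 1)
    return -slope * R   # Ea in J/mol

# e.g. friedman_Ea([393, 403, 413], [1.2e-5, 3.1e-5, 7.4e-5]) -> ~1.2e5 J/mol
```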
Efficient Power Network Analysis with Modeling of Inductive Effects
NASA Astrophysics Data System (ADS)
Zeng, Shan; Yu, Wenjian; Hong, Xianlong; Cheng, Chung-Kuan
In this paper, an efficient method is proposed to accurately analyze large-scale power/ground (P/G) networks in which inductive parasitics are modeled with partial reluctances. The method is based on frequency-domain circuit analysis and the technique of vector fitting [14], and obtains the time-domain voltage response at given P/G nodes. The frequency-domain circuit equation including partial reluctances is derived and then solved with the GMRES algorithm using rescaling, preconditioning and recycling techniques. Thanks to the sparsified reluctance matrix and the iterative solution of the frequency-domain circuit equations, the proposed method is able to handle large-scale P/G networks with complete inductive modeling. Numerical results show that the proposed method is orders of magnitude faster than HSPICE, several times faster than INDUCTWISE [4], and capable of handling inductive P/G structures with more than 100,000 wire segments.
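A minimal sketch of the frequency-domain solve at the core of such a method: at each sample frequency, solve the sparse equation (G + jωC)x = b with preconditioned GMRES. The matrices are random stand-ins, the ILU preconditioner is a generic choice rather than the paper's, and the subsequent vector-fitting step is not shown.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
G = 5.0 * sp.eye(n, format='csc') + sp.random(n, n, density=0.01, random_state=0)
C = sp.diags(np.full(n, 1e-12))               # stand-in capacitance matrix
b = np.zeros(n, dtype=complex); b[0] = 1.0    # unit current injection at node 0

omega = 2 * np.pi * 1e9                       # one sample frequency (rad/s)
A = (G + 1j * omega * C).tocsc()
ilu = spla.spilu(A)                           # generic ILU preconditioner (assumed choice)
M = spla.LinearOperator(A.shape, matvec=ilu.solve, dtype=complex)
x, info = spla.gmres(A, b, M=M)
print(info)   # 0 means converged; x holds node voltages at this frequency
```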
Four-dimensional modeling of recent vertical movements in the area of the southern California uplift
Vanicek, Petr; Elliot, Michael R.; Castle, Robert O.
1979-01-01
This paper describes an analytical technique that utilizes scattered geodetic relevelings and tide-gauge records to portray Recent vertical crustal movements that may have been characterized by spasmodic changes in velocity. The technique is based on the fitting of a time-varying algebraic surface of prescribed degree to the geodetic data treated as tilt elements and to tide-gauge readings treated as point movements. Desired variations in time can be selected as any combination of powers of vertical movement velocity and episodic events. The state of the modeled vertical displacement can be shown for any number of dates for visual display. Statistical confidence limits of the modeled displacements, derived from the density of measurements in both space and time, line length, and accuracy of input data, are also provided. The capabilities of the technique are demonstrated on selected data from the region of the southern California uplift.
Recovery of atmospheric refractivity profiles from simulated satellite-to-satellite tracking data
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.; Rangaswamy, S.
1975-01-01
Techniques for recovering atmospheric refractivity profiles from simulated satellite-to-satellite tracking data are documented. Examples are given using the geometric configuration of the ATS-6/NIMBUS-6 Tracking Experiment. The underlying refractivity model for the lower atmosphere has the spherically symmetric form N = exp P(s) where P(s) is a polynomial in the normalized height s. For the simulation used, the Herglotz-Wiechert technique recovered values which were 0.4% and 40% different from the input values at the surface and at a height of 33 kilometers, respectively. Using the same input data, the model fitting technique recovered refractivity values 0.05% and 1% different from the input values at the surface and at a height of 50 kilometers, respectively. It is also shown that if ionospheric and water vapor effects can be properly modelled or effectively removed from the data, pressure and temperature distributions can be obtained.
Compositional Models of Glass/Melt Properties and their Use for Glass Formulation
Vienna, John D.; Richland, Washington, USA
2014-12-18
Nuclear waste glasses must simultaneously meet a number of criteria related to their processability, product quality, and cost factors. The properties that must be controlled in glass formulation and waste vitrification plant operation tend to vary smoothly with composition, allowing glass property-composition models to be developed and used. Models have been fit to the key glass properties. The properties are transformed so that simple functions of composition (e.g., linear, polynomial, or component ratios) can be used as model forms. The model forms are fit to experimental data designed statistically to efficiently cover the composition space of interest. Examples of these models are found in the literature. The glass property-composition models, their uncertainty definitions, property constraints, and optimality criteria are combined to formulate optimal glass compositions, control composition in vitrification plants, and qualify waste glasses for disposal. An overview of current glass property-composition modeling techniques is summarized in this paper, along with an example of how those models are applied to glass formulation and product qualification at the planned Hanford high-level waste vitrification plant.
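A minimal sketch of a first-order property-composition model of the kind described: a transformed property modeled as a linear mixture of component fractions and fit by least squares. Component names and data are placeholders, not the Hanford models.

```python
import numpy as np

# rows: glasses; columns: mass fractions of SiO2, B2O3, Na2O, Al2O3 (sum to 1)
X = np.array([[0.50, 0.20, 0.20, 0.10],
              [0.55, 0.15, 0.20, 0.10],
              [0.45, 0.25, 0.15, 0.15],
              [0.52, 0.18, 0.18, 0.12],
              [0.48, 0.22, 0.22, 0.08]])
ln_visc = np.array([4.1, 4.6, 3.8, 4.3, 3.9])   # transformed property, synthetic

# mixture model: no separate intercept, since the fractions sum to one
coeffs, *_ = np.linalg.lstsq(X, ln_visc, rcond=None)
pred = X @ coeffs   # predicted transformed property for each glass
```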
NASA Astrophysics Data System (ADS)
Halbrügge, Marc
2010-12-01
This paper describes the creation of a cognitive model submitted to the ‘Dynamic Stocks and Flows’ (DSF) modeling challenge. This challenge aims at comparing computational cognitive models of human behavior during an open-ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models, while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings of the training data. In-depth analysis of the data set prior to the development of the model led to the dismissal of correlations and other parametric statistics as goodness-of-fit indicators. A new statistical measurement based on rank orders and sequence matching techniques is proposed instead. This measurement, when applied to the human sample, also identifies clusters of subjects that use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.
PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
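A hedged sketch of the second pipeline stage: train boosted decision trees on extracted light-curve features and score with the AUC metric used in the paper. The features here are random placeholders standing in for SALT2 or wavelet coefficients.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))    # stand-in feature vectors (e.g., wavelet coefficients)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000) > 0).astype(int)  # Ia vs non-Ia stand-in

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1]))
```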
Quantifying cell turnover using CFSE data.
Ganusov, Vitaly V; Pilyugin, Sergei S; de Boer, Rob J; Murali-Krishna, Kaja; Ahmed, Rafi; Antia, Rustom
2005-03-01
The CFSE dye dilution assay is widely used to determine the number of divisions a given CFSE labelled cell has undergone in vitro and in vivo. In this paper, we consider how the data obtained with the use of CFSE (CFSE data) can be used to estimate the parameters determining cell division and death. For a homogeneous cell population (i.e., a population with the parameters for cell division and death being independent of time and the number of divisions cells have undergone), we consider a specific biologically based "Smith-Martin" model of cell turnover and analyze three different techniques for estimation of its parameters: direct fitting, indirect fitting and rescaling method. We find that using only CFSE data, the duration of the division phase (i.e., approximately the S+G2+M phase of the cell cycle) can be estimated with the use of either technique. In some cases, the average division or cell cycle time can be estimated using the direct fitting of the model solution to the data or by using the Gett-Hodgkin method [Gett A. and Hodgkin, P. 2000. A cellular calculus for signal integration by T cells. Nat. Immunol. 1:239-244]. Estimation of the death rates during commitment to division (i.e., approximately the G1 phase of the cell cycle) and during the division phase may not be feasible with the use of only CFSE data. We propose that measuring an additional parameter, the fraction of cells in division, may allow estimation of all model parameters including the death rates during different stages of the cell cycle.
Optimal pacing for running 400- and 800-m track races
NASA Astrophysics Data System (ADS)
Reardon, James
2013-06-01
We present a toy model of anaerobic glycolysis that incorporates appropriate physiological and mathematical considerations while remaining useful to the athlete. The toy model produces an optimal pacing strategy for 400-m and 800-m races that is analytically calculated via the Euler-Lagrange equation. The calculation of the optimum v(t) is presented in detail, with an emphasis on intuitive arguments, in order to serve as a bridge between the basic techniques presented in undergraduate physics textbooks and the more advanced techniques of control theory. Observed pacing strategies in 400-m and 800-m world-record races are found to be well fit by the toy model, which allows us to draw a new physiological interpretation for the advantages of common weight-training practices.
A variable-gain output feedback control design approach
NASA Technical Reports Server (NTRS)
Haylo, Nesim
1989-01-01
A multi-model design technique to find a variable-gain control law defined over the whole operating range is proposed. The design is formulated as an optimal control problem which minimizes a cost function weighting the performance at many operating points. The solution is obtained by embedding the problem into Multi-Configuration Control (MCC), a multi-model robust control design technique. In contrast to conventional gain scheduling, which uses a curve fit of single-model designs, the optimal variable-gain control law stabilizes the plant at every operating point included in the design. An iterative algorithm to compute the optimal control gains is presented. The methodology has been successfully applied to reconfigurable aircraft flight control and to nonlinear flight control systems.
Riegel, Adam C; Chen, Yu; Kapur, Ajay; Apicello, Laura; Kuruvilla, Abraham; Rea, Anthony J; Jamshidi, Abolghassem; Potters, Louis
Optically stimulated luminescent dosimeters (OSLDs) are utilized for in vivo dosimetry (IVD) of modern radiation therapy techniques such as intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). Dosimetric precision achieved with conventional techniques may not be attainable. In this work, we measured accuracy and precision for a large sample of clinical OSLD-based IVD measurements. Weekly IVD measurements were collected from 4 linear accelerators for 2 years and were expressed as percent differences from planned doses. After outlier analysis, 10,224 measurements were grouped in the following way: overall, modality (photons, electrons), treatment technique (3-dimensional [3D] conformal, field-in-field intensity modulation, inverse-planned IMRT, and VMAT), placement location (gantry angle, cardinality, and central axis positioning), and anatomical site (prostate, breast, head and neck, pelvis, lung, rectum and anus, brain, abdomen, esophagus, and bladder). Distributions were modeled via a Gaussian function. Fitting was performed with least squares, and goodness-of-fit was assessed with the coefficient of determination. Model means (μ) and standard deviations (σ) were calculated. Sample means and variances were compared for statistical significance by analysis of variance and the Levene tests (α = 0.05). Overall, μ ± σ was 0.3 ± 10.3%. Precision for electron measurements (6.9%) was significantly better than for photons (10.5%). Precision varied significantly among treatment techniques (P < .0001) with field-in-field lowest (σ = 7.2%) and IMRT and VMAT highest (σ = 11.9% and 13.4%, respectively). Treatment site models with goodness-of-fit greater than 0.90 (6 of 10) yielded accuracy within ±3%, except for head and neck (μ = -3.7%). Precision varied with treatment site (range, 7.3%-13.0%), with breast and head and neck yielding the best and worst precision, respectively. Placement on the central axis of cardinal gantry angles yielded more precise results (σ = 8.5%) compared with other locations (range, 10.5%-11.4%). Accuracy of ±3% was achievable. Precision ranged from 6.9% to 13.4% depending on modality, technique, and treatment site. Simple, standardized locations may improve IVD precision. These findings may aid development of patient-specific tolerances for OSLD-based IVD. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Modeling of forest canopy BRDF using DIRSIG
NASA Astrophysics Data System (ADS)
Rengarajan, Rajagopalan; Schott, John R.
2016-05-01
The characterization and temporal analysis of multispectral and hyperspectral data to extract the biophysical information of the Earth's surface can be significantly improved by understanding its anisotropic reflectance properties, which are best described by a Bi-directional Reflectance Distribution Function (BRDF). Advancements in remote sensing techniques and instrumentation have made hyperspectral BRDF measurements in the field possible using sophisticated goniometers. However, natural surfaces such as forest canopies impose limitations on both the data collection techniques and the range of illumination angles that can be covered in the field. These limitations can be mitigated by measuring BRDF in a virtual environment. This paper presents an approach to model the spectral BRDF of a forest canopy using the Digital Image and Remote Sensing Image Generation (DIRSIG) model. A synthetic forest canopy scene is constructed by modeling the 3D geometries of different tree species using OnyxTree software. Field-collected spectra from the Harvard Forest are used to represent the optical properties of the tree elements. The canopy radiative transfer is estimated using the DIRSIG model for specific view and illumination angles to generate BRDF measurements. A full hemispherical BRDF is generated by fitting the measured BRDF to a semi-empirical BRDF model. The results from fitting the model to the measurements indicate a root mean square error of less than 5% (2 reflectance units) relative to the forest's reflectance in the VIS-NIR-SWIR region. The process can be easily extended to generate a spectral BRDF library for various biomes.
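The abstract does not name the semi-empirical model used. The sketch below assumes a kernel-driven form (reduced here to an isotropic term plus the RossThick volumetric kernel), whose weights enter the reflectance linearly and can therefore be recovered by ordinary least squares; the angles and samples are synthetic placeholders for DIRSIG output.

```python
import numpy as np

def ross_thick(theta_s, theta_v, phi):
    """RossThick volumetric scattering kernel (Roujean-style convention)."""
    cos_xi = (np.cos(theta_s) * np.cos(theta_v)
              + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(theta_s) + np.cos(theta_v)) - np.pi / 4)

# Hypothetical BRDF samples over view geometry at a fixed solar zenith
rng = np.random.default_rng(1)
theta_s = np.full(200, np.deg2rad(30.0))
theta_v = rng.uniform(0, np.deg2rad(60), 200)
phi = rng.uniform(0, 2 * np.pi, 200)
refl = 0.25 + 0.08 * ross_thick(theta_s, theta_v, phi) + rng.normal(0, 0.005, 200)

# Kernel weights are linear in the reflectance, so a design matrix suffices
A = np.column_stack([np.ones_like(refl), ross_thick(theta_s, theta_v, phi)])
weights, *_ = np.linalg.lstsq(A, refl, rcond=None)
rmse = np.sqrt(np.mean((A @ weights - refl) ** 2))
print(f"f_iso = {weights[0]:.3f}, f_vol = {weights[1]:.3f}, RMSE = {rmse:.4f}")
```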
Deeb, Omar; Shaik, Basheerulla; Agrawal, Vijay K
2014-10-01
Quantitative Structure-Activity Relationship (QSAR) models for the binding affinity constants (log Ki) of 78 flavonoid ligands towards the benzodiazepine site of the GABA(A) receptor complex were calculated using two machine learning methods: artificial neural network (ANN) and support vector machine (SVM) techniques. The models obtained were compared with those obtained using multiple linear regression (MLR) analysis. The descriptor selection and model building were performed with 10-fold cross-validation using the training data set. The SVM and MLR coefficient of determination values are 0.944 and 0.879, respectively, for the training set and are higher than those of the ANN models. Though the SVM model shows improved fitting of the training set, the ANN model was superior to SVM and MLR in predicting the test set. A randomization test was employed to check the suitability of the models.
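A minimal sketch of the SVM-versus-MLR comparison under 10-fold cross-validation, using scikit-learn. The descriptor matrix and log Ki values below are synthetic stand-ins for the 78-ligand data set.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression

# Hypothetical descriptor matrix (78 ligands x 8 descriptors) and log Ki values
rng = np.random.default_rng(42)
X = rng.normal(size=(78, 8))
y = X @ rng.normal(size=8) + rng.normal(0, 0.3, 78)   # synthetic stand-in

cv = KFold(n_splits=10, shuffle=True, random_state=0)
for name, model in [("SVM", make_pipeline(StandardScaler(), SVR(C=10.0))),
                    ("MLR", LinearRegression())]:
    r2 = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: mean 10-fold CV R^2 = {r2.mean():.3f}")
```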
Empirical evidence for multi-scaled controls on wildfire size distributions in California
NASA Astrophysics Data System (ADS)
Povak, N.; Hessburg, P. F., Sr.; Salter, R. B.
2014-12-01
Ecological theory asserts that regional wildfire size distributions are examples of self-organized critical (SOC) systems. By definition, controls on SOC event-size distributions are purely endogenous to the system and include (1) the frequency and pattern of ignitions, (2) the distribution and size of prior fires, and (3) lagged successional patterns after fires. However, recent work has shown that the largest wildfires often result from extreme climatic events, and that patterns of vegetation and topography may help constrain local fire spread, calling into question the SOC model's simplicity. Using an atlas of >12,000 California wildfires (1950-2012) and maximum likelihood estimation (MLE), we fit four different power-law models and broken-stick regressions to fire-size distributions across 16 Bailey's ecoregions. Comparisons among empirical fire-size distributions indicated that most ecoregions' distributions were significantly different, suggesting that broad-scale top-down controls differed among ecoregions. One-parameter power-law models consistently fit a middle range of fire sizes (~100 to 10,000 ha) across most ecoregions, but did not fit the larger and smaller fire sizes. We fit the same four power-law models to patch size distributions of aspect, slope, and curvature topographies and found that the power-law models fit a similar middle range of topography patch sizes. These results suggested that empirical evidence may exist for topographic controls on fire sizes. To test this, we used neutral landscape modeling techniques to determine whether observed fire edges corresponded with aspect breaks more often than expected at random. We found significant differences between the empirical and neutral models for some ecoregions, particularly within the middle range of fire sizes. Our results, combined with other recent work, suggest that controls on ecoregional fire size distributions are multi-scaled and likely not purely SOC. California wildfire ecosystems appear to be adaptive, governed by stationary and non-stationary controls, which may be either exogenous or endogenous to the system.
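A minimal sketch of the MLE step for a continuous power law (the standard Clauset et al. estimator), applied to a synthetic fire-size sample restricted to the middle size range. The exponent and cutoffs are illustrative only, and the upper truncation at 10,000 ha is ignored in the estimator for simplicity.

```python
import numpy as np

def powerlaw_mle(sizes, x_min):
    """Continuous power-law exponent MLE: alpha = 1 + n / sum(ln(x / x_min))."""
    x = sizes[sizes >= x_min]
    alpha = 1.0 + len(x) / np.sum(np.log(x / x_min))
    return alpha, len(x)

# Hypothetical fire-size sample (ha), drawn by inverse-CDF from a power law
rng = np.random.default_rng(7)
u = rng.uniform(size=5000)
sizes = 100.0 * (1 - u) ** (-1.0 / (2.2 - 1.0))   # alpha = 2.2, x_min = 100 ha

# Fit only the middle range, as in the ecoregion analyses
alpha_hat, n_tail = powerlaw_mle(sizes[sizes <= 10000], x_min=100.0)
print(f"alpha = {alpha_hat:.2f} from {n_tail} fires in the 100-10,000 ha range")
```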
Evidence of a Supermassive Black Hole in the Galaxy NGC 1023 From The Nuclear Stellar Dynamics
NASA Technical Reports Server (NTRS)
Bower, G. A.; Green, R. F.; Bender, R.; Gebhardt, K.; Lauer, T. R.; Magorrian, J.; Richstone, D. O.; Danks, A.; Gull, T.; Hutchings, J.
2000-01-01
We analyze the nuclear stellar dynamics of the SB0 galaxy NGC 1023, utilizing observational data both from the Space Telescope Imaging Spectrograph aboard the Hubble Space Telescope and from the ground. The stellar kinematics measured from these long-slit spectra show rapid rotation (V ≈ 70 km/s at a distance of 0.1 arcsec = 4.9 pc from the nucleus) and increasing velocity dispersion toward the nucleus (where σ = 295 ± 30 km/s). We model the observed stellar kinematics assuming an axisymmetric mass distribution with both two and three integrals of motion. Both modeling techniques point to the presence of a central dark compact mass (which presumably is a supermassive black hole) with confidence > 99%. The isotropic two-integral models yield a best-fitting black hole mass of (6.0 ± 0.4) × 10^7 solar masses and mass-to-light ratio (M/L_V) of 5.38 ± 0.08, and the goodness-of-fit (χ²) is insensitive to reasonable values for the galaxy's inclination. The three-integral models, which non-parametrically fit the observed line-of-sight velocity distribution as a function of position in the galaxy, suggest a black hole mass of (3.9 ± 0.4) × 10^7 solar masses and M/L_V of 5.56 ± 0.02 (internal errors), and the edge-on models are vastly superior fits compared with models at other inclinations. The internal dynamics in NGC 1023 suggested by our best-fit three-integral model show that the velocity distribution function at the nucleus is tangentially anisotropic, suggesting the presence of a nuclear stellar disk. The nuclear line-of-sight velocity distribution has enhanced wings at velocities ≥ 600 km/s from systemic, suggesting that perhaps we have detected a group of stars very close to the central dark mass.
Proceedings of the NASA Workshop on Surface Fitting
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator)
1982-01-01
Surface fitting techniques and their utilization are addressed. Surface representation, approximation, and interpolation are discussed, along with statistical estimation problems associated with surface fitting.
Bustamante, Carlos D.; Valero-Cuevas, Francisco J.
2010-01-01
The field of complex biomechanical modeling has begun to rely on Monte Carlo techniques to investigate the effects of parameter variability and measurement uncertainty on model outputs, search for optimal parameter combinations, and define model limitations. However, advanced stochastic methods to perform data-driven explorations, such as Markov chain Monte Carlo (MCMC), become necessary as the number of model parameters increases. Here, we demonstrate the feasibility of, and to our knowledge the first use of, an MCMC approach to improve the fitness of realistically large biomechanical models. We used a Metropolis–Hastings algorithm to search increasingly complex parameter landscapes (3, 8, 24, and 36 dimensions) to uncover underlying distributions of anatomical parameters of a “truth model” of the human thumb on the basis of simulated kinematic data (thumbnail location, orientation, and linear and angular velocities) polluted by zero-mean, uncorrelated multivariate Gaussian “measurement noise.” Driven by these data, ten Markov chains searched each model parameter space for the subspace that best fit the data (posterior distribution). As expected, the convergence time increased, more local minima were found, and marginal distributions broadened as the parameter space complexity increased. In the 36-D scenario, some chains found local minima but the majority of chains converged to the true posterior distribution (confirmed using a cross-validation dataset), thus demonstrating the feasibility and utility of these methods for realistically large biomechanical problems. PMID:19272906
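A minimal Metropolis-Hastings sketch of the kind of sampler described: a symmetric random-walk proposal, a Gaussian likelihood around a "truth model" prediction, and a flat prior. The forward model, noise level, and dimensions below are hypothetical and far simpler than the thumb kinematics model.

```python
import numpy as np

def log_posterior(theta, data):
    """Gaussian likelihood around a toy forward-model prediction; flat prior."""
    prediction = np.tanh(theta)              # hypothetical forward model
    return -0.5 * np.sum((data - prediction) ** 2 / 0.05 ** 2)

def metropolis_hastings(data, dim, n_steps=20000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=dim)
    logp = log_posterior(theta, data)
    chain = np.empty((n_steps, dim))
    for i in range(n_steps):
        proposal = theta + step * rng.normal(size=dim)   # symmetric proposal
        logp_new = log_posterior(proposal, data)
        if np.log(rng.uniform()) < logp_new - logp:      # accept/reject
            theta, logp = proposal, logp_new
        chain[i] = theta
    return chain

true_theta = np.array([0.3, -0.8, 1.2])
data = np.tanh(true_theta)                               # noise-free for brevity
chain = metropolis_hastings(data, dim=3)
print("posterior means:", chain[5000:].mean(axis=0))     # discard burn-in
```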
Uncertainty Model for Total Solar Irradiance Estimation on Australian Rooftops
NASA Astrophysics Data System (ADS)
Al-Saadi, Hassan; Zivanovic, Rastko; Al-Sarawi, Said
2017-11-01
Installations of solar panels on Australian rooftops have been on the rise in recent years, especially in urban areas. This motivates academic researchers, distribution network operators and engineers to accurately address the level of uncertainty introduced by grid-connected solar panels. The main source of uncertainty is the intermittent nature of solar radiation; this paper therefore presents a new model to estimate the total radiation incident on a tilted solar panel. The model is driven by the clearness index, whose variability is described by a probability distribution, with special attention paid to Australian conditions through the use of a best-fit correlation for the diffuse fraction. Model validity is assessed using four goodness-of-fit techniques. In addition, Quasi Monte Carlo and sparse grid methods are used as the sampling and uncertainty computation tools, respectively. High-resolution solar irradiance data for the city of Adelaide were used for this assessment, and the outcome indicates satisfactory agreement between the observed data variation and the model.
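A minimal sketch of the sampling and goodness-of-fit machinery: low-discrepancy Sobol points (SciPy's stats.qmc) mapped through an assumed Beta distribution for the clearness index, then checked with a Kolmogorov-Smirnov test. The distribution family and parameters are placeholders, not the paper's fitted model.

```python
import numpy as np
from scipy.stats import qmc, beta, kstest

# Clearness-index uncertainty represented by a Beta distribution (assumed shape)
kt_dist = beta(a=4.0, b=2.0)

# Quasi Monte Carlo sampling: Sobol points mapped through the inverse CDF
# converge faster than plain pseudo-random sampling for smooth integrands
sampler = qmc.Sobol(d=1, scramble=True, seed=0)
u = sampler.random_base2(m=10).ravel()          # 2^10 points in [0, 1)
kt_samples = kt_dist.ppf(u)

# One of several possible goodness-of-fit checks: Kolmogorov-Smirnov
stat, pvalue = kstest(kt_samples, kt_dist.cdf)
print(f"mean clearness index = {kt_samples.mean():.3f}, KS p = {pvalue:.3f}")
```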
NASA Astrophysics Data System (ADS)
Geszke-Moritz, Małgorzata; Moritz, Michał
2016-04-01
Four mesoporous siliceous materials (SBA-16, SBA-15, PHTS and MCF) functionalized with (3-aminopropyl)triethoxysilane were successfully prepared and applied as carriers for the poorly water-soluble drug diflunisal. Several techniques including nitrogen sorption analysis, XRD, TEM, FTIR and thermogravimetric analysis were employed to characterize the mesoporous matrices. Adsorption isotherms were analyzed using the Langmuir, Freundlich, Temkin and Dubinin-Radushkevich models. In order to find the best-fit isotherm for each model, both linear and nonlinear regressions were carried out. The equilibrium data were best fitted by the Langmuir isotherm model, revealing a maximum adsorption capacity of 217.4 mg/g for aminopropyl group-modified SBA-15. The negative values of the Gibbs free energy change indicated that the adsorption of diflunisal is a spontaneous process. The Weibull release model was employed to describe the dissolution profile of diflunisal. At pH 4.5 all prepared mesoporous matrices exhibited improved drug dissolution kinetics compared to the dissolution rate of pure diflunisal.
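A minimal sketch of the nonlinear (as opposed to linearized) Langmuir fit, with hypothetical equilibrium data standing in for the diflunisal measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm: monolayer adsorption on energetically uniform sites."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Hypothetical equilibrium data: concentration (mg/L) vs. uptake (mg/g)
c_eq = np.array([5, 10, 20, 40, 80, 160.0])
q_obs = np.array([45, 78, 120, 158, 188, 203.0])

popt, pcov = curve_fit(langmuir, c_eq, q_obs, p0=[200.0, 0.02])
q_max, k_l = popt
print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.4f} L/mg")
```

Fitting the untransformed model directly, as here, avoids the error-distortion that linearized Langmuir plots introduce, which is presumably why both regressions were compared.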
Generalized Optoelectronic Model of Series-Connected Multijunction Solar Cells
Geisz, John F.; Steiner, Myles A.; Garcia, Ivan; ...
2015-10-02
The emission of light from each junction in a series-connected multijunction solar cell, we found, both complicates and elucidates the understanding of its performance under arbitrary conditions. Bringing together many recent advances in this understanding, we present a general 1-D model to describe luminescent coupling that arises from both voltage-driven electroluminescence and voltage-independent photoluminescence in nonideal junctions that include effects such as Sah-Noyce-Shockley (SNS) recombination with n ≠ 2, Auger recombination, shunt resistance, reverse-bias breakdown, series resistance, and significant dark area losses. The individual junction voltages and currents are experimentally determined from measured optical and electrical inputs and outputs of the device within the context of the model to fit parameters that describe the device's performance under arbitrary input conditions. Furthermore, our techniques to experimentally fit the model are demonstrated for a four-junction inverted metamorphic solar cell, and the predictions of the model are compared with concentrator flash measurements.
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.; Leban, F.; Mashiko, S.
1986-01-01
Single-channel pilot manual control output in closed-tracking tasks is modeled in terms of linear discrete transfer functions which are parsimonious and guaranteed stable. The transfer functions are found by applying a modified superposition time-series generation technique. A Levinson-Durbin algorithm is used to determine the filter which prewhitens the input, and a projective (least squares) fit of pulse response estimates is used to guarantee identified model stability. Results from two case studies are compared to previous findings, where the source data are relatively short records, approximately 25 seconds long. Time delay effects and pilot seasonalities are discussed and analyzed. It is concluded that single-channel time series controller modeling is feasible on short records, and that it is important for the analyst to determine a criterion for best time-domain fit which allows association of model parameter values, such as pure time delay, with actual physical and physiological constraints. The purpose of the modeling is thus paramount.
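A minimal sketch of the prewhitening step: the Levinson-Durbin recursion converts the sample autocorrelation into an AR prediction filter, which is then applied to whiten a short record. The AR(1)-like test signal below is a stand-in for pilot stick output; the order and record length are assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: autocorrelation -> AR prediction-error filter."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m-1:0:-1])
        k = -acc / err                     # reflection coefficient
        a[1:m+1] += k * a[m-1::-1]         # update filter taps (a[m] becomes k)
        err *= 1.0 - k * k                 # prediction-error power
    return a, err

# Hypothetical short record (~25 s at 10 Hz) of pilot stick output
rng = np.random.default_rng(3)
x = lfilter([1.0], [1.0, -0.8], rng.normal(size=250))    # AR(1)-like signal

r = np.correlate(x, x, mode="full")[len(x)-1:] / len(x)  # biased autocorrelation
a, err = levinson_durbin(r, order=4)
whitened = lfilter(a, [1.0], x)                          # prewhitened innovations
print("AR filter:", np.round(a, 3), " residual power:", round(err, 3))
```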
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1988-01-01
This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.
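A minimal sketch of the three model families the Standard names, fitted to a synthetic monthly anomaly series: linear and quadratic trends by polynomial least squares, and an exponential trend via a log transform. The series and its parameters are illustrative only.

```python
import numpy as np

# Hypothetical monthly anomaly counts for a developmental program
t = np.arange(24, dtype=float)
y = 50.0 * np.exp(-0.08 * t) + np.random.default_rng(5).normal(0, 1.5, 24)

linear = np.polyfit(t, y, 1)               # y ~ a*t + b
quadratic = np.polyfit(t, y, 2)            # y ~ a*t^2 + b*t + c
log_linear = np.polyfit(t, np.log(y), 1)   # exponential fit via log transform

for name, p in [("linear", linear), ("quadratic", quadratic)]:
    resid = y - np.polyval(p, t)
    print(f"{name}: RMS residual = {np.sqrt(np.mean(resid ** 2)):.2f}")
rate, log_amp = log_linear
print(f"exponential: y ~ {np.exp(log_amp):.1f} * exp({rate:.3f} t)")
```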
NASA Astrophysics Data System (ADS)
McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.
2017-03-01
Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make-up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics such as band centres are reported, the discussion of exactly how the metrics were derived, or whether any systematic curve fitting was performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorate many of the unknowns one may encounter when using black-box commercial software. The provided framework comprehensively records the curve fitting parameters used and the derived metrics, and is intended as an example of a format for dissemination when curve fitting data.
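In the same spirit (though in Python rather than Excel), a minimal empirical curve fit of an absorption band superposed on a linear continuum, reporting the derived band centre and depth. The spectrum is synthetic and the functional form is an assumption, not the paper's prescription.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_band(x, depth, center, width, c0, c1):
    """Gaussian absorption band on a linear continuum (empirical model)."""
    return (c0 + c1 * x) - depth * np.exp(-0.5 * ((x - center) / width) ** 2)

# Hypothetical reflectance spectrum with an absorption band near 1000 nm
wl = np.linspace(800, 1200, 200)
rng = np.random.default_rng(6)
refl = gauss_band(wl, 0.15, 1005.0, 40.0, 0.55, 1e-4) + rng.normal(0, 0.003, 200)

popt, _ = curve_fit(gauss_band, wl, refl, p0=[0.1, 1000.0, 30.0, 0.5, 0.0])
print(f"band centre = {popt[1]:.1f} nm, band depth = {popt[0]:.3f}")
```

Reporting the fitted parameters alongside the model form, as here, is exactly the kind of dissemination record the authors advocate.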
Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska
NASA Astrophysics Data System (ADS)
Bonin, J. A.; Chambers, D. P.
2012-12-01
Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.
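A minimal sketch of the weighted least squares projection at the heart of the forward modeling: given a basin membership matrix and inverse-variance weights, recover per-basin mass rates from a noisy gridded signal. The basin layout, rates, and noise level are placeholders, not GRACE products.

```python
import numpy as np

# Hypothetical setup: 3 basin patterns observed on a coarse 100-cell grid
rng = np.random.default_rng(10)
G = (rng.uniform(size=(100, 3)) > 0.7).astype(float)   # basin membership matrix
true_rates = np.array([-150.0, -30.0, 5.0])            # Gt/yr per basin
obs = G @ true_rates + rng.normal(0, 20.0, 100)        # noisy "observed" signal
W = np.diag(np.full(100, 1.0 / 20.0 ** 2))             # inverse-variance weights

# Weighted least squares: rates = (G^T W G)^(-1) G^T W obs
rates = np.linalg.solve(G.T @ W @ G, G.T @ W @ obs)
print("recovered basin rates:", np.round(rates, 1))
```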
Vickers, Anna A; Potter, Nicola J; Fishwick, Colin W G; Chopra, Ian; O'Neill, Alex J
2009-06-01
This study sought to expand knowledge on the molecular mechanisms of mutational resistance to trimethoprim in Staphylococcus aureus, and the fitness costs associated with resistance. Spontaneous trimethoprim-resistant mutants of S. aureus SH1000 were recovered in vitro, resistance genotypes characterized by DNA sequencing of the gene encoding the drug target (dfrA) and the fitness of mutants determined by pair-wise growth competition assays with SH1000. Novel resistance genotypes were confirmed by ectopic expression of dfrA alleles in a trimethoprim-sensitive S. aureus strain. Molecular models of S. aureus dihydrofolate reductase (DHFR) were constructed to explore the structural basis of trimethoprim resistance, and to rationalize the observed in vitro fitness of trimethoprim-resistant mutants. In addition to known amino acid substitutions in DHFR mediating trimethoprim resistance (F99Y and H150R), two novel resistance polymorphisms (L41F and F99S) were identified among the trimethoprim-resistant mutants selected in vitro. Molecular modelling of mutated DHFR enzymes provided insight into the structural basis of trimethoprim resistance. Calculated binding energies of the substrate (dihydrofolate) for the mutant and wild-type enzymes were similar, consistent with apparent lack of fitness costs for the resistance mutations in vitro. Reduced susceptibility to trimethoprim of DHFR enzymes carrying substitutions L41F, F99S, F99Y and H150R appears to result from structural changes that reduce trimethoprim binding to the enzyme. However, the mutations conferring trimethoprim resistance are not associated with fitness costs in vitro, suggesting that the survival of trimethoprim-resistant strains emerging in the clinic may not be subject to a fitness disadvantage.
ACCELERATED FITTING OF STELLAR SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ting, Yuan-Sen; Conroy, Charlie; Rix, Hans-Walter
2016-07-20
Stellar spectra are often modeled and fitted by interpolating within a rectilinear grid of synthetic spectra to derive the stars’ labels: stellar parameters and elemental abundances. However, the number of synthetic spectra needed for a rectilinear grid grows exponentially with the label space dimensions, precluding the simultaneous and self-consistent fitting of more than a few elemental abundances. Shortcuts such as fitting subsets of labels separately can introduce unknown systematics and do not produce correct error covariances in the derived labels. In this paper we present a new approach—Convex Hull Adaptive Tessellation (chat)—which includes several new ideas for inexpensively generating a sufficient stellar synthetic library, using linear algebra and the concept of an adaptive, data-driven grid. A convex hull approximates the region where the data lie in the label space. A variety of tests with mock data sets demonstrate that chat can reduce the number of required synthetic model calculations by three orders of magnitude in an eight-dimensional label space. The reduction will be even larger for higher dimensional label spaces. In chat the computational effort increases only linearly with the number of labels that are fit simultaneously. Around each of these grid points in the label space an approximate synthetic spectrum can be generated through linear expansion using a set of “gradient spectra” that represent flux derivatives at every wavelength point with respect to all labels. These techniques provide new opportunities to fit the full stellar spectra from large surveys with 15–30 labels simultaneously.
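A minimal sketch of the gradient-spectra idea: compute flux derivatives with respect to each label by finite differences around a grid point, then approximate nearby spectra by first-order expansion instead of new model calls. The three-label "synthetic spectrum" below is a toy stand-in for a real spectral synthesis code.

```python
import numpy as np

def synthetic_spectrum(labels, wavelengths):
    """Toy stand-in for an expensive synthetic-spectrum calculation."""
    teff, logg, feh = labels
    return (1.0 - 0.3 * np.exp(-0.5 * ((wavelengths - 5000 - 50 * feh) / 2.0) ** 2)
            * (teff / 5000.0) ** -0.5 * (logg / 4.0) ** 0.1)

wl = np.linspace(4990, 5010, 400)
anchor = np.array([5000.0, 4.0, 0.0])          # grid point in label space
f0 = synthetic_spectrum(anchor, wl)

# "Gradient spectra": flux derivative w.r.t. each label, by finite differences
steps = np.array([50.0, 0.1, 0.05])
grad = np.array([(synthetic_spectrum(anchor + np.eye(3)[i] * steps[i], wl) - f0)
                 / steps[i] for i in range(3)])

# First-order expansion approximates nearby spectra without new model calls
target = anchor + np.array([30.0, -0.05, 0.02])
approx = f0 + grad.T @ (target - anchor)
exact = synthetic_spectrum(target, wl)
print(f"max |approx - exact| = {np.abs(approx - exact).max():.2e}")
```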
Evidence for the associated production of a W boson and a top quark at ATLAS
NASA Astrophysics Data System (ADS)
Koll, James
This thesis discusses a search for the Standard Model single top Wt-channel process. An analysis has been performed searching for the Wt-channel process using 4.7 fb-1 of integrated luminosity collected with the ATLAS detector at the Large Hadron Collider. A boosted decision tree is trained using machine learning techniques to increase the separation between signal and background. A profile likelihood fit is used to measure the cross-section of the Wt-channel process at σ(pp → Wt + X) = 16.8 ± 2.9 (stat) ± 4.9 (syst) pb, consistent with the Standard Model prediction. This fit is also used to generate pseudoexperiments to calculate the significance, finding an observed (expected) 3.3σ (3.4σ) excess over background.
Dynamic Analysis of Recalescence Process and Interface Growth of Eutectic Fe82B17Si1 Alloy
NASA Astrophysics Data System (ADS)
Fan, Y.; Liu, A. M.; Chen, Z.; Li, P. Z.; Zhang, C. H.
2018-03-01
By employing the glass fluxing technique in combination with cyclical superheating, the microstructural evolution of the undercooled Fe82B17Si1 alloy over the obtained undercooling range was studied. With increasing undercooling, a transition of the cooling curves was detected from one recalescence to two recalescences, and then back to a single recalescence. The two types of cooling curves were fitted by the break equation and the Johnson-Mehl-Avrami-Kolmogorov model. Based on the cooling curves at different undercoolings, the recalescence rate was calculated by the multi-logistic growth model and the Boettinger-Coriel-Trivedi model. Both the recalescence features and the interface growth kinetics of the eutectic Fe82B17Si1 alloy were explored. The fitting results were consistent with the microstructural evolution observed using TEM (SAED), SEM and XRD. Finally, the relationship between the microstructure and hardness was also investigated.
A Model-Based Approach for the Measurement of Eye Movements Using Image Processing
NASA Technical Reports Server (NTRS)
Sung, Kwangjae; Reschke, Millard F.
1997-01-01
This paper describes a video eye-tracking algorithm which searches for the best fit of the pupil modeled as a circular disk. The algorithm is robust to common image artifacts such as droopy eyelids and light reflections while maintaining the measurement resolution available with the centroid algorithm. The presented algorithm is used to derive the pupil size and center coordinates, and can be combined with iris-tracking techniques to measure ocular torsion. A comparison search over pupil candidates using pixel-coordinate reference lookup tables optimizes the processing requirements for a least-squares fit of the circular-disk model. This paper includes quantitative analyses and simulation results for the resolution and the robustness of the algorithm. The algorithm presented in this paper provides a platform for a noninvasive, multidimensional eye measurement system which can be used for clinical and research applications requiring the precise recording of eye movements in three-dimensional space.
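A minimal sketch of a least-squares circular-disk fit of the sort described, using the algebraic (Kasa) formulation, which tolerates a partially occluded arc such as an eyelid-covered pupil edge. The edge pixels below are simulated; the paper's lookup-table search is not reproduced here.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit (Kasa method):
    x^2 + y^2 + D*x + E*y + F = 0 is linear in (D, E, F)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, radius

# Hypothetical pupil-edge pixels: a circular arc partly hidden by the eyelid
rng = np.random.default_rng(2)
theta = rng.uniform(-2.0, 2.0, 300)            # upper arc missing
x = 120 + 35 * np.cos(theta) + rng.normal(0, 0.5, 300)
y = 90 + 35 * np.sin(theta) + rng.normal(0, 0.5, 300)

cx, cy, radius = fit_circle(x, y)
print(f"pupil center = ({cx:.1f}, {cy:.1f}) px, radius = {radius:.1f} px")
```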
Apps to promote physical activity among adults: a review and content analysis.
Middelweerd, Anouk; Mollee, Julia S; van der Wal, C Natalie; Brug, Johannes; Te Velde, Saskia J
2014-07-25
In May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear. The study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Means and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play. On average, the reviewed apps included 5 behavior change techniques (range 2-8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores, were found. The present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number was not different for paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions.
Accuracy of Different Implant Impression Techniques: Evaluation of New Tray Design Concept.
Liu, David Yu; Cader, Fathima Nashmie; Abduo, Jaafar; Palamara, Joseph
2017-12-29
To evaluate implant impression accuracy with a new tray design concept in comparison to nonsplinted and splinted impression techniques for a 2-implant situation. A reference bar titanium framework was fabricated to fit on 2 parallel implants. The framework was used to generate a resin master model with 2 implants that fit precisely against the framework. Three impression techniques were evaluated: (1) nonsplinted, (2) splinted, and (3) nonsplinted with modified tray impressions. All the trays were fabricated from light-cured acrylic resin material with openings that corresponded to the implant impression copings. Ten impressions were taken for each technique using poly(vinyl siloxane) impression material. The impressions were poured with type IV dental stone to generate the test casts. A rosette strain gauge was bonded to the middle of the framework. As the framework retaining screws were tightened on each test cast, the developed strains were recorded until the completion of the tightening to 35 Ncm. The generated strains of the rosette strain gauge were used to calculate the maximum principal strain. A statistically significant difference was observed among the different impression techniques. The modified tray design impression technique was associated with the least framework strains, which indicates greater accuracy compared with the other techniques. There was no significant difference between the splinted and the nonsplinted impression techniques. The new tray design concept appeared to produce more accurate implant impressions than the other techniques. Despite the statistical difference among the impression techniques, the clinical significance of this difference is yet to be determined. © 2017 by the American College of Prosthodontists.
NASA Astrophysics Data System (ADS)
Rahim, K. J.; Cumming, B. F.; Hallett, D. J.; Thomson, D. J.
2007-12-01
An accurate assessment of historical local Holocene data is important in making future climate predictions. Holocene climate is often reconstructed through proxy measures such as diatoms or pollen using radiocarbon dating. Wiggle Match Dating (WMD) uses an iterative least squares approach to tune a core with a large number of 14C dates to the 14C calibration curve. This poster presents a new method of tuning a time series when only a modest number of 14C dates is available. The method uses multitaper spectral estimation, and specifically a multitaper spectral coherence tuning technique. Holocene climate reconstructions are often based on a simple depth-time fit such as a linear interpolation, splines, or low-order polynomials. Many of these models make use of only a small number of 14C dates, each of which is a point estimate with a significant variance. This technique attempts to tune the 14C dates to a reference series, such as tree rings, varves, or the radiocarbon calibration curve. The amount of 14C in the atmosphere is not constant, and a significant source of variance is solar activity. A decrease in solar activity coincides with an increase in cosmogenic isotope production, and an increase in cosmogenic isotope production coincides with a decrease in temperature. The method presented uses multitaper coherence estimates and adjusts the phase of the time series to line up significant line components with those of the reference series, in an attempt to obtain a better depth-time fit than the original model. Given recent concerns and demonstrations of the variation in estimated dates from radiocarbon labs, methods to confirm and tune the depth-time fit can aid climate reconstructions by improving, and serving to confirm, the accuracy of the underlying fit. Climate reconstructions can then be made on the improved depth-time fit. This poster presents a run-through of this process using Chauvin Lake in the Canadian prairies and Mt. Barr Cirque Lake in British Columbia as examples.
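A minimal sketch of the multitaper spectral estimation underlying the coherence tuning: average periodograms over orthogonal DPSS (Slepian) tapers (available in SciPy) and locate the dominant line component. The proxy series, its period, and the taper settings are hypothetical.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, nw=4.0, n_tapers=7):
    """Average periodograms over orthogonal DPSS (Slepian) tapers."""
    tapers = dpss(len(x), nw, Kmax=n_tapers)     # shape (n_tapers, len(x))
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return spectra.mean(axis=0)

# Hypothetical proxy series with a solar-like periodic component
t = np.arange(512, dtype=float)
x = np.sin(2 * np.pi * t / 88.0) + np.random.default_rng(9).normal(0, 1.0, 512)

psd = multitaper_psd(x)
freqs = np.fft.rfftfreq(len(x), d=1.0)
peak = np.argmax(psd[1:]) + 1                    # skip the DC bin
print(f"strongest line at period ~ {1.0 / freqs[peak]:.1f} samples")
```

Coherence tuning then compares the phase of such line components between the proxy series and the reference series, adjusting the depth-time fit until they align.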
Buonaccorsi, Giovanni A; Roberts, Caleb; Cheung, Sue; Watson, Yvonne; O'Connor, James P B; Davies, Karen; Jackson, Alan; Jayson, Gordon C; Parker, Geoff J M
2006-09-01
The quantitative analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data is subject to model fitting errors caused by motion during the time-series data acquisition. However, the time-varying features that occur as a result of contrast enhancement can confound motion correction techniques based on conventional registration similarity measures. We have therefore developed a heuristic, locally controlled tracer kinetic model-driven registration procedure, in which the model accounts for contrast enhancement, and applied it to the registration of abdominal DCE-MRI data at high temporal resolution. Using severely motion-corrupted data sets that had been excluded from analysis in a clinical trial of an antiangiogenic agent, we compared the results obtained when using different models to drive the tracer kinetic model-driven registration with those obtained when using a conventional registration against the time series mean image volume. Using tracer kinetic model-driven registration, it was possible to improve model fitting by reducing the sum of squared errors but the improvement was only realized when using a model that adequately described the features of the time series data. The registration against the time series mean significantly distorted the time series data, as did tracer kinetic model-driven registration using a simpler model of contrast enhancement. When an appropriate model is used, tracer kinetic model-driven registration influences motion-corrupted model fit parameter estimates and provides significant improvements in localization in three-dimensional parameter maps. This has positive implications for the use of quantitative DCE-MRI for example in clinical trials of antiangiogenic or antivascular agents.
UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, N. E.; Soderberg, A. M.; Betancourt, M., E-mail: nsanders@cfa.harvard.edu
2015-02-10
Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST.
SU-E-T-664: Radiobiological Modeling of Prophylactic Cranial Irradiation in Mice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, D; Debeb, B; Woodward, W
Purpose: Prophylactic cranial irradiation (PCI) is a clinical technique used to reduce the incidence of brain metastasis and improve overall survival in select patients with ALL and SCLC, and we have shown the potential of PCI in select breast cancer patients through a mouse model (manuscript in preparation). We developed a computational model using our experimental results to demonstrate the advantage of treating brain micro-metastases early. Methods: MATLAB was used to develop the computational model of brain metastasis and PCI in mice. The number of metastases per mouse and the volume of metastases from four- and eight-week endpoints were fit to normal and log-normal distributions, respectively. Model input parameters were optimized so that model output would match the experimental number of metastases per mouse. A limiting dilution assay was performed to validate the model. The effect of radiation at different time points was computationally evaluated through the endpoints of incidence, number of metastases, and tumor burden. Results: The correlation between experimental number of metastases per mouse and the Gaussian fit was 87% and 66% at the two endpoints. The experimental volumes and the log-normal fit had correlations of 99% and 97%. In the optimized model, the correlation between number of metastases per mouse and the Gaussian fit was 96% and 98%. The log-normal volume fit and the model agree 100%. The model was validated by a limiting dilution assay, where the correlation was 100%. The model demonstrates that cells are very sensitive to radiation at early time points, and delaying treatment introduces a threshold dose at which point the incidence and number of metastases decline. Conclusion: We have developed a computational model of brain metastasis and PCI in mice that is highly correlated to our experimental data. The model shows that early treatment of subclinical disease is highly advantageous.
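A minimal sketch of the volume-distribution step: fit a log-normal to pooled metastasis volumes and check the agreement, analogous to the reported correlations. The sample below is simulated, not the experimental data.

```python
import numpy as np
from scipy import stats

# Hypothetical metastasis volumes (mm^3) pooled across mice at one endpoint
rng = np.random.default_rng(11)
volumes = rng.lognormal(mean=-1.0, sigma=0.8, size=200)

shape, loc, scale = stats.lognorm.fit(volumes, floc=0)   # fix location at 0
print(f"log-normal fit: sigma = {shape:.2f}, median = {scale:.3f} mm^3")

# One way to quantify agreement with the fitted model
ks = stats.kstest(volumes, stats.lognorm(shape, 0, scale).cdf)
print(f"KS statistic = {ks.statistic:.3f}")
```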
Gutiérrez-Juárez, G; Vargas-Luna, M; Córdova, T; Varela, J B; Bernal-Alvarado, J J; Sosa, M
2002-08-01
A photoacoustic technique is used for studying the absorption of topically applied substances in human skin. The proposed method utilizes a double-chamber PA cell. The absorption determination was obtained through the measurement of the thermal effusivity of the binary substance-skin system. The theoretical model assumes that the effective thermal effusivity of the binary system corresponds to that of a two-phase system. Experimental applications of the method employed different substances of topical application on different parts of the body of a volunteer. The method is demonstrated to be an easily used, non-invasive technique for dermatology research. The relative concentrations as a function of time of substances such as ketoconazole and sunscreen were determined by fitting a sigmoidal function to the data, while an exponential function provided the best fit for the data for nitrofurazone, Vaseline and VapoRub. The time constants associated with the rates of absorption were found to vary in the range between 10 and 58 min, depending on the substance and the part of the body.
The effect of various veneering techniques on the marginal fit of zirconia copings.
Torabi, Kianoosh; Vojdani, Mahroo; Giti, Rashin; Taghva, Masumeh; Pardis, Soheil
2015-06-01
This study aimed to evaluate the fit of zirconia ceramics before and after veneering, using 3 different veneering processes (layering, press-over, and CAD-on techniques). Thirty standardized zirconia CAD/CAM frameworks were constructed and divided into three groups of 10 each. The first group was veneered using the traditional layering technique. Press-over and CAD-on techniques were used to veneer the second and third groups. The marginal gap of the specimens was measured before and after the veneering process at 18 sites on the master die using a digital microscope. A paired t-test was used to evaluate mean marginal gap changes. One-way ANOVA and post hoc tests were also employed for comparison among the 3 groups (α=.05). The marginal gaps of all 3 groups increased after porcelain veneering. The mean marginal gap values after veneering in the layering group (63.06 µm) were higher than those of the press-over (50.64 µm) and CAD-on (51.50 µm) groups (P<.001). All three veneering methods altered the marginal fit of the zirconia copings. The conventional layering technique increased the marginal gap of the zirconia framework more than the pressing and CAD-on techniques. All-ceramic crowns made using the three different veneering methods revealed clinically acceptable marginal fit.
NASA Technical Reports Server (NTRS)
Hall, David G.; Heidelberg, Laurence; Konno, Kevin
1993-01-01
The rotating microphone measurement technique and data analysis procedures are documented which are used to determine circumferential and radial acoustic mode content in the inlet of the Advanced Ducted Propeller (ADP) model. Circumferential acoustic mode levels were measured at a series of radial locations using the Doppler frequency shift produced by a rotating inlet microphone probe. Radial mode content was then computed using a least squares curve fit with the measured radial distribution for each circumferential mode. The rotating microphone technique is superior to fixed-probe techniques because it results in minimal interference with the acoustic modes generated by rotor-stator interaction. This effort represents the first experimental implementation of a measuring technique developed by T. G. Sofrin. Testing was performed in the NASA Lewis Low Speed Anechoic Wind Tunnel at a simulated takeoff condition of Mach 0.2. The design of the data analysis software and the performance of the rotating rake apparatus are also described. The effect of experimental errors is discussed.
Correcting for deformation in skin-based marker systems.
Alexander, E J; Andriacchi, T P
2001-03-01
A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living is modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in limb segment pose estimates by 33 and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.
Experiences with Markov Chain Monte Carlo Convergence Assessment in Two Psychometric Examples
ERIC Educational Resources Information Center
Sinharay, Sandip
2004-01-01
There is an increasing use of Markov chain Monte Carlo (MCMC) algorithms for fitting statistical models in psychometrics, especially in situations where the traditional estimation techniques are very difficult to apply. One of the disadvantages of using an MCMC algorithm is that it is not straightforward to determine the convergence of the…
Annual Tree Growth Predictions From Periodic Measurements
Quang V. Cao
2004-01-01
Data from annual measurements of a loblolly pine (Pinus taeda L.) plantation were available for this study. Regression techniques were employed to model annual changes of individual trees in terms of diameters, heights, and survival probabilities. Subsets of the data that include measurements every 2, 3, 4, 5, and 6 years were used to fit the same...
ERIC Educational Resources Information Center
Meulman, Jacqueline J.; Verboon, Peter
1993-01-01
Points of view analysis, as a way to deal with individual differences in multidimensional scaling, was largely supplanted by the weighted Euclidean model. It is argued that the approach deserves new attention, especially as a technique to analyze group differences. A streamlined and integrated process is proposed. (SLD)
An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics
NASA Astrophysics Data System (ADS)
Turkington, Bruce
2013-08-01
A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
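Schematically, and with notation assumed rather than taken from the paper, the lack-of-fit cost functional described above can be rendered as follows, where ρ(t) is the path of quasi-equilibrium densities, H the Hamiltonian, {·,·} the Poisson bracket (so ∂t ρ − {H, ρ} is the Liouville-equation residual), W an arbitrary weight operator, and ⟨·⟩ the ensemble average:

```latex
% Schematic lack-of-fit cost functional (notation assumed, not the paper's):
\[
  \Sigma[\rho] \;=\; \frac{1}{2}\int_{0}^{T}
  \Big\langle \big\| \, W\big(\partial_t \rho(t) - \{H, \rho(t)\}\big) \big\|^{2} \Big\rangle \, dt .
\]
% The best-fit evolution of the mean resolved vector minimizes \Sigma[\rho]
% over feasible paths of densities in the statistical model.
```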
Callina, Kristina Schmid; Johnson, Sara K; Tirrell, Jonathan M; Batanova, Milena; Weiner, Michelle B; Lerner, Richard M
2017-06-01
There were two purposes of the present research: first, to add to scholarship about a key character virtue, hopeful future expectations; and second, to demonstrate a recent innovation in longitudinal methodology that may be especially useful in enhancing the understanding of the developmental course of hopeful future expectations and other character virtues that have been the focus of recent scholarship in youth development. Burgeoning interest in character development has led to a proliferation of short-term, longitudinal studies on character. These data sets are sometimes limited in their ability to model character development trajectories due to low power or relatively brief time spans assessed. However, the integrative data analysis approach allows researchers to pool raw data across studies in order to fit one model to an aggregated data set. The purpose of this article is to demonstrate the promises and challenges of this new tool for modeling character development. We used data from four studies evaluating youth character strengths in different settings to fit latent growth curve models of hopeful future expectations from participants aged 7 through 26 years. We describe the analytic strategy for pooling the data and modeling the growth curves. Implications for future research are discussed in regard to the advantages of integrative data analysis. Finally, we discuss issues researchers should consider when applying these techniques in their own work.
A simple prescription for simulating and characterizing gravitational arcs
NASA Astrophysics Data System (ADS)
Furlanetto, C.; Santiago, B. X.; Makler, M.; de Bom, C.; Brandt, C. H.; Neto, A. F.; Ferreira, P. C.; da Costa, L. N.; Maia, M. A. G.
2013-01-01
Simple models of gravitational arcs are crucial for simulating large samples of these objects with full control of the input parameters. These models also provide approximate and automated estimates of the shape and structure of the arcs, which are necessary for detecting and characterizing these objects on massive wide-area imaging surveys. We here present and explore the ArcEllipse, a simple prescription for creating objects with a shape similar to gravitational arcs. We also present PaintArcs, which is a code that couples this geometrical form with a brightness distribution and adds the resulting object to images. Finally, we introduce ArcFitting, which is a tool that fits ArcEllipses to images of real gravitational arcs. We validate this fitting technique using simulated arcs and apply it to CFHTLS and HST images of tangential arcs around clusters of galaxies. Our simple ArcEllipse model for the arc, associated to a Sérsic profile for the source, recovers the total signal in real images typically within 10%-30%. The ArcEllipse+Sérsic models also automatically recover visual estimates of length-to-width ratios of real arcs. Residual maps between data and model images reveal the incidence of arc substructure. They may thus be used as a diagnostic for arcs formed by the merging of multiple images. The incidence of these substructures is the main factor that prevents ArcEllipse models from accurately describing real lensed systems.
Fit Analysis of Different Framework Fabrication Techniques for Implant-Supported Partial Prostheses.
Spazzin, Aloísio Oro; Bacchi, Atais; Trevisani, Alexandre; Farina, Ana Paula; Dos Santos, Mateus Bertolini
2016-01-01
This study evaluated the vertical misfit of implant-supported frameworks made using different techniques to obtain passive fit. Thirty 3-unit fixed partial dentures were fabricated in cobalt-chromium alloy (n = 10 per technique) using three fabrication methods: one-piece casting, framework cemented on prepared abutments, and laser welding. The vertical misfit between the frameworks and the abutments was evaluated with an optical microscope using the single-screw test. Data were analyzed using one-way analysis of variance and the Tukey test (α = .05). The one-piece casted frameworks presented significantly higher vertical misfit values than those found for the framework-cemented-on-prepared-abutments and laser welding techniques (P < .001 and P < .003, respectively). Laser welding and cementing the framework on prepared abutments are effective techniques to improve the adaptation of three-unit implant-supported prostheses. These techniques presented similar fit.
NASA Astrophysics Data System (ADS)
Rollett, T.; Möstl, C.; Isavnin, A.; Davies, J. A.; Kubicka, M.; Amerstorfer, U. V.; Harrison, R. A.
2016-06-01
In this study, we present a new method for forecasting arrival times and speeds of coronal mass ejections (CMEs) at any location in the inner heliosphere. This new approach enables the adoption of a highly flexible geometrical shape for the CME front with an adjustable CME angular width and an adjustable radius of curvature of its leading edge, i.e., the assumed geometry is elliptical. Using, as input, Solar TErrestrial RElations Observatory (STEREO) heliospheric imager (HI) observations, a new elliptic conversion (ElCon) method is introduced and combined with the use of drag-based model (DBM) fitting to quantify the deceleration or acceleration experienced by CMEs during propagation. The result is then used as input for the Ellipse Evolution Model (ElEvo). Together, ElCon, DBM fitting, and ElEvo form the novel ElEvoHI forecasting utility. To demonstrate the applicability of ElEvoHI, we forecast the arrival times and speeds of 21 CMEs remotely observed from STEREO/HI and compare them to in situ arrival times and speeds at 1 AU. Compared to the commonly used STEREO/HI fitting techniques (Fixed-ϕ, Harmonic Mean, and Self-similar Expansion fitting), ElEvoHI improves the arrival time forecast by about 2 hr, to ±6.5 hr, and the arrival speed forecast by ≈250 km s-1, to ±53 km s-1, depending on the ellipse aspect ratio assumed. In particular, the remarkable improvement of the arrival speed prediction is potentially beneficial for predicting geomagnetic storm strength at Earth.
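A minimal sketch of the DBM-fitting step, assuming the standard analytic drag-based model solution (in the style of Vršnak et al.) for the decelerating case. The time-distance track, noise level, and parameter bounds are placeholders for HI-derived kinematics, not the paper's pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def dbm_distance(t, gamma, w, v0, r0):
    """Analytic drag-based model (decelerating case, v0 > w):
    dv/dt = -gamma * (v - w) * |v - w|, with w the solar wind speed."""
    return r0 + w * t + np.log(1.0 + gamma * (v0 - w) * t) / gamma

# Hypothetical HI-derived CME kinematics: time (s) vs. heliocentric distance (km)
t = np.linspace(0, 40, 30) * 3600.0
truth = dbm_distance(t, 2e-7, 400.0, 900.0, 30 * 6.96e5)
r_obs = truth + np.random.default_rng(4).normal(0, 2e5, t.size)

bounds = ([1e-9, 100.0, 500.0, 1e6], [1e-5, 500.0, 2000.0, 1e8])
popt, _ = curve_fit(dbm_distance, t, r_obs,
                    p0=[1e-7, 450.0, 800.0, 2e7], bounds=bounds)
gamma, w, v0, r0 = popt
v_end = w + (v0 - w) / (1.0 + gamma * (v0 - w) * t[-1])
print(f"gamma = {gamma:.2e} /km, w = {w:.0f} km/s, v(t_end) = {v_end:.0f} km/s")
```

The fitted drag parameter and solar wind speed are then what a utility like ElEvoHI would propagate forward with the elliptical front model.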
Yu, Jian-Hong; Lo, Lun-Jou; Hsu, Pin-Hsin
2017-01-01
This study integrates cone-beam computed tomography (CBCT)/laser scan image superposition, computer-aided design (CAD), and 3D printing (3DP) to develop a technology for producing customized dental (orthodontic) miniscrew surgical templates from polymer material. Solid models of the maxillary bone and teeth reconstructed from CBCT images were superimposed with the outer profiles of the teeth and mucosa acquired by laser scanning, allowing visual planning of miniscrew insertion and fabrication of the surgical template. The customized surgical template CAD model was built by offsetting the teeth/mucosa/bracket contour profiles in the superimposed model and was exported to produce the plastic template in polymer material using the 3DP technique. A clinical test of anterior retraction and intrusion for the maxillary canines/incisors showed that two miniscrews were placed safely and produced no inflammation or other discomfort symptoms one week after surgery. The fit between the mucosa and the template showed average gap sizes smaller than 0.5 mm, confirming that the surgical template provided good holding power and well-fitting adaptation. This study shows that integrating CBCT and laser scan image superposition with CAD and 3DP techniques can produce an accurate customized surgical template for dental orthodontic miniscrews. PMID:28280726
Learning in the model space for cognitive fault diagnosis.
Chen, Huanhuan; Tino, Peter; Rodan, Ali; Yao, Xin
2014-01-01
The emergence of large sensor networks has facilitated the collection of large amounts of real-time data to monitor and control complex engineering systems. However, in many cases the collected data may be incomplete or inconsistent, while the underlying environment may be time-varying or unformulated. In this paper, we develop an innovative cognitive fault diagnosis framework that tackles the above challenges. This framework investigates fault diagnosis in the model space instead of the signal space. Learning in the model space is implemented by fitting a series of models using a series of signal segments selected with a sliding window. By investigating the learning techniques in the fitted model space, faulty models can be discriminated from healthy models using a one-class learning algorithm. The framework enables us to construct a fault library when unknown faults occur, which can be regarded as cognitive fault isolation. This paper also theoretically investigates how to measure the pairwise distance between two models in the model space and incorporates the model distance into the learning algorithm in the model space. The results on three benchmark applications and one simulated model for the Barcelona water distribution network confirm the effectiveness of the proposed framework.
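A minimal sketch of learning in the model space: fit a simple AR(2) model in each sliding window, treat the coefficient vectors as points in model space, and train a one-class learner on healthy windows to flag faulty ones. The dynamics, window sizes, and one-class SVM settings are illustrative; the paper's framework uses richer fitted models and a model-distance measure.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def ar2_coeffs(segment):
    """Fit an AR(2) model to one window; the coefficients are the 'model'."""
    X = np.column_stack([segment[1:-1], segment[:-2]])
    y = segment[2:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def window_models(signal, width=100, step=50):
    return np.array([ar2_coeffs(signal[i:i + width])
                     for i in range(0, len(signal) - width, step)])

rng = np.random.default_rng(8)
healthy = np.zeros(4000)
for i in range(2, 4000):                      # healthy dynamics: stable AR(2)
    healthy[i] = 1.2 * healthy[i-1] - 0.4 * healthy[i-2] + rng.normal()
faulty = np.zeros(1000)
for i in range(2, 1000):                      # drifted dynamics after a fault
    faulty[i] = 0.6 * faulty[i-1] - 0.1 * faulty[i-2] + rng.normal()

detector = OneClassSVM(nu=0.05, gamma="scale").fit(window_models(healthy))
flags = detector.predict(window_models(faulty))      # -1 marks anomalous models
print(f"{np.mean(flags == -1):.0%} of faulty windows flagged")
```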
Lifting degeneracy in holographic characterization of colloidal particles using multi-color imaging.
Ruffner, David B; Cheong, Fook Chiong; Blusewicz, Jaroslaw M; Philips, Laura A
2018-05-14
Micrometer sized particles can be accurately characterized using holographic video microscopy and Lorenz-Mie fitting. In this work, we explore some of the limitations in holographic microscopy and introduce methods for increasing the accuracy of this technique with the use of multiple wavelengths of laser illumination. Large high index particle holograms have near degenerate solutions that can confuse standard fitting algorithms. Using a model based on diffraction from a phase disk, we explain the source of these degeneracies. We introduce multiple color holography as an effective approach to distinguish between degenerate solutions and provide improved accuracy for the holographic analysis of sub-visible colloidal particles.
Residential magnetic fields predicted from wiring configurations: I. Exposure model.
Bowman, J D; Thomas, D C; Jiang, L; Jiang, F; Peters, J M
1999-10-01
A physically based model for residential magnetic fields from electric transmission and distribution wiring was developed to reanalyze the Los Angeles study of childhood leukemia by London et al. For this exposure model, magnetic field measurements were fitted to a function of wire configuration attributes that was derived from a multipole expansion of the Law of Biot and Savart. The model parameters were determined by nonlinear regression techniques, using wiring data, distances, and the geometric mean of the ELF magnetic field magnitude from 24-h bedroom measurements taken at 288 homes during the epidemiologic study. The best fit to the measurement data was obtained with separate models for the two major utilities serving Los Angeles County. This model's predictions produced a correlation of 0.40 with the measured fields, an improvement on the 0.27 correlation obtained with the Wertheimer-Leeper (WL) wire code. For the leukemia risk analysis in a companion paper, the regression model predicts exposures to the 24-h geometric mean of the ELF magnetic fields in Los Angeles homes where only wiring data and distances have been obtained. Since these input parameters for the exposure model usually do not change for many years, the predicted magnetic fields will be stable over long time periods, just like the WL code. If the geometric mean is not the exposure metric associated with cancer, this regression technique could be used to estimate long-term exposures to temporal variability metrics and other characteristics of the ELF magnetic field which may be cancer risk factors.
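As an illustration of the regression step, the sketch below fits a hypothetical multipole-style decay B(d) = a1/d + a2/d^2 + a3/d^3 to geometric-mean field measurements with scipy; the actual exposure model used many wiring-configuration attributes and separate fits per utility, and all numbers here are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def multipole(d, a1, a2, a3):
    # Hypothetical truncated multipole expansion in distance d to the lines
    return a1 / d + a2 / d**2 + a3 / d**3

d = np.array([10.0, 20.0, 30.0, 50.0, 80.0])      # distance (m), illustrative
b = np.array([0.40, 0.15, 0.08, 0.03, 0.015])     # geometric-mean field (uT), illustrative

params, cov = curve_fit(multipole, d, b, p0=[1.0, 1.0, 1.0])
r = np.corrcoef(multipole(d, *params), b)[0, 1]   # analogous to the reported correlation
print(params, r)
```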
Biomechanical characterization of double-bundle femoral press-fit fixation techniques.
Ettinger, M; Haasper, C; Hankemeier, S; Hurschler, C; Breitmeier, D; Krettek, C; Jagodzinski, M
2011-03-01
Press-fit fixation of patellar tendon bone anterior cruciate ligament autografts is an interesting technique because no hardware is necessary. To date, no biomechanical data exist describing an implant-free double-bundle press-fit procedure. The purpose of this study was to characterize the biomechanical properties of three double-bundle press-fit fixations. In a controlled laboratory study, the patellar, quadriceps, and hamstring tendons of 10 human cadavers (age: 49.2 ± 18.5 years) were used. An inside-out press-fit fixation using knotted semitendinosus and gracilis tendons combined with an additional bone block (SG) and a fixation using two quadriceps tendon bone block grafts (QU) were compared with press-fit fixation of two bone-patellar tendon-bone (PT) grafts in 30 porcine femora. Constructs were cyclically stretched and then loaded until failure. Maximum load to failure, stiffness, and elongation during failure testing and cyclical loading were investigated. The maximum load to failure was 703 ± 136 N for SG fixation, 632 ± 130 N for QU, and 656 ± 127 N for PT fixation. Stiffness of the constructs averaged 138 ± 26 N/mm for SG, 159 ± 74 N/mm for QU, and 154 ± 50 N/mm for PT fixation. Elongation during initial cyclical loading was 1.2 ± 1.4 mm for SG, 2.0 ± 1.4 mm for QU, and 1.0 ± 0.6 mm for PT (significantly larger for PT and QU in the first 5 cycles compared with cycles 15-20, P < 0.01). All investigated double-bundle fixation techniques were equivalent in terms of maximum load to failure, stiffness, and elongation. Unlike with published single-bundle press-fit fixation techniques, no difference was observed between pure tendon grafts combined with an additional bone block and tendon-bone grafts. All techniques exhibited larger elongation during initial cyclical loading. All three press-fit fixation techniques investigated exhibit comparable biomechanical properties. Preconditioning of the constructs is critical.
How motivation affects academic performance: a structural equation modelling analysis.
Kusurkar, R A; Ten Cate, Th J; Vos, C M P; Westers, P; Croiset, G
2013-03-01
Few studies in medical education have examined the effect of the quality of motivation on performance. Self-Determination Theory, based on the quality of motivation, differentiates between Autonomous Motivation (AM), which originates within an individual, and Controlled Motivation (CM), which originates from external sources. The aims were to determine whether Relative Autonomous Motivation (RAM, a measure of the balance between AM and CM) affects academic performance through good study strategy and higher study effort, and to compare this model between subgroups: males and females, and students selected via two different systems, namely qualitative and weighted lottery selection. Data on motivation, study strategy, and effort were collected from 383 medical students of VU University Medical Center Amsterdam, and their academic performance results were obtained from the student administration. Structural Equation Modelling was used to test a hypothesized model in which high RAM would positively affect Good Study Strategy (GSS) and study effort, which in turn would positively affect academic performance in the form of grade point averages. This model fit the data well, Chi square = 1.095, df = 3, p = 0.778, RMSEA = 0.000. The model also fitted well for all tested subgroups of students. Differences were found in the strength of relationships between the variables for the different subgroups, as expected. In conclusion, RAM positively correlated with academic performance through a deep strategy towards study and higher study effort. This model seems valid in medical education in subgroups such as males, females, and students selected by qualitative and weighted lottery selection.
BGFit: management and automated fitting of biological growth curves.
Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana
2013-09-25
Existing tools to model cell growth curves do not offer a flexible, integrative approach to managing large datasets and automatically estimating parameters. Given the increase in experimental time-series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, further allowing the results to be managed efficiently in a structured and hierarchical way. The data management system allows users to organize projects, experiments, and measurement data, and to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and the user can easily add new models, thus expanding the current set. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing different data modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is also applicable to any type of two-dimensional data, e.g., physical chemistry and macroeconomic time series, being fully scalable to large numbers of projects, data points and model complexity.
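For a sense of the underlying fitting step, below is a minimal sketch of estimating Gompertz parameters from a single growth curve with scipy; the Zwietering parameterization (asymptote A, maximum rate mu, lag lam) is one common variant, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    # Zwietering (modified) Gompertz growth curve
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1))

t = np.linspace(0, 24, 25)                 # time (h), illustrative
rng = np.random.default_rng(1)
od = gompertz(t, 1.2, 0.25, 4.0) + rng.normal(0, 0.02, t.size)  # noisy "measurements"

(A, mu, lam), cov = curve_fit(gompertz, t, od, p0=[1.0, 0.2, 3.0])
print(f"A={A:.3f}, mu={mu:.3f}, lag={lam:.2f} h")
```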
Evaluation of impression accuracy for a four-implant mandibular model--a digital approach.
Stimmelmayr, Michael; Erdelt, Kurt; Güth, Jan-Frederik; Happe, Arndt; Beuer, Florian
2012-08-01
Implant-supported prosthodontics requires precise impressions to achieve a passive fit. Since the early 1990s, in vitro studies comparing different implant impression techniques were performed, capturing the data mostly mechanically. The purpose of this study was to evaluate the accuracy of three different impression techniques digitally. Dental implants were inserted bilaterally in ten polymer lower-arch models at the positions of the first molars and canines. From each original model, three different impressions (A, transfer; B, pick-up; and C, splinted pick-up) were taken. Scan-bodies were mounted on the implants of the polymer and on the lab analogues of the stone models and digitized. The scan-body in position 36 (FDI) of the digitized original and master casts were each superimposed, and the deviations of the remaining three scan-bodies were measured three-dimensionally. The systematic error of digitizing the models was 13 μm for the polymer and 5 μm for the stone model. The mean discrepancies of the original model to the stone casts were 124 μm (±34) μm for the transfer technique, 116 (±46) μm for the pick-up technique, and 80 (±25) μm for the splinted pick-up technique. There were statistically significant discrepancies between the evaluated impression techniques (p ≤ 0.025; ANOVA test). The splinted pick-up impression showed the least deviation between original and stone model; transfer and pick-up techniques showed similar results. For better accuracy of implant-supported prosthodontics, the splinted pick-up technique should be used for impressions of four implants evenly spread in edentulous jaws.
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
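A minimal sketch of the two core analyses the chapter covers, Pearson correlation and simple linear regression with inference, using scipy on invented microbiology-flavored data:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # e.g., log nutrient concentration
y = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1])   # e.g., log colony count

r, p_corr = stats.pearsonr(x, y)                # estimation and inference for r
fit = stats.linregress(x, y)                    # slope, intercept, r, p-value, stderr
print(f"r={r:.3f} (p={p_corr:.3g}); "
      f"slope={fit.slope:.3f} +/- {fit.stderr:.3f} (p={fit.pvalue:.3g})")
```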
Bayesian inference of ice thickness from remote-sensing data
NASA Astrophysics Data System (ADS)
Werder, Mauro A.; Huss, Matthias
2017-04-01
Knowledge about ice thickness and volume is indispensable for studying ice dynamics, future sea-level rise due to glacier melt, or glaciers' contribution to regional hydrology. Accurate measurements of glacier thickness require on-site work, usually employing radar techniques. However, these field measurements are time consuming, expensive, and sometimes downright impossible. Conversely, measurements of the ice surface, namely elevation and flow velocity, are becoming available worldwide through remote sensing. The model of Farinotti et al. (2009) calculates ice thicknesses based on a mass conservation approach paired with shallow ice physics, using estimates of the surface mass balance. The presented work applies a Bayesian inference approach to estimate the parameters of a modified version of this forward model by fitting it to both measurements of surface flow speed and of ice thickness. The inverse model outputs ice thickness as well as the distribution of the error. We fit the model to ten test glaciers and ice caps and quantify the improvements in thickness estimates through the use of surface ice flow measurements.
Rittner, Linda L; Pulos, Steven M
2014-01-01
The purpose of this study was to develop a general procedure for evaluation of a dynamic assessment and to demonstrate an analysis of a dynamic assessment, the CITM (Tzuriel, 1995b), as an objective measure for use as a group assessment. The techniques used to determine the fit of the CITM to a Rasch partial credit model are explicitly outlined. A modified format of the CITM was administered to 266 diverse second grade students in the USA; 58% of participants were identified as low SES. The participants (males n = 144) were White Anglo and Latino American students (55%), many of whom were first generation Mexican immigrants. The CITM was found to adequately fit a Rasch partial credit model (PCM) indicating that the CITM is a likely candidate for a group administered dynamic assessment that can be measured objectively. Data also supported that a model for objectively measuring change in learning ability for inferential thinking in the CITM was feasible.
Size determination of gold nanoparticles in silicate glasses by UV-Vis spectroscopy
NASA Astrophysics Data System (ADS)
Ali, Shahid; Khan, Younas; Iqbal, Yaseen; Hayat, Khizar; Ali, Muhammad
2017-01-01
A relatively simple and more accurate method for determining the average size of metal nanoparticles/aggregates in silicate glasses, based on ultraviolet-visible (UV-Vis) spectra fitted with the Mie and Mie-Gans models, is reported. Gold ions were diffused into sodalime silicate and borosilicate glasses by a field-assisted solid-state ion-exchange technique using the same experimental parameters for both glasses. Transmission electron microscopy was performed to directly investigate the morphology and distribution of the dopant nanoparticles. UV-Vis spectra of the doped glasses showed broad surface plasmon resonance peaks in their fingerprint regions, i.e., at 525 and 500 nm for the sodalime silicate and borosilicate glass matrices, respectively. These spectra were fitted with the Mie model for spherical nanoparticles and the Mie-Gans model for spheroidal nanoparticles. Although both models were developed for colloidal nanoparticles, the size of the nanoparticles/aggregates calculated was accurate to within ~10% in both glass matrices in comparison to the size measured directly from the transmission electron microscope images.
Diong, B; Grainger, J; Goldman, M; Nazeran, H
2009-01-01
The forced oscillation technique offers some advantages over spirometry for assessing pulmonary function. It requires only passive patient cooperation, and it provides data as frequency-dependent impedance, a form that is very amenable to engineering analysis. In particular, the data can be used to obtain parameter estimates for electric circuit-based models of the respiratory system, which can in turn aid the detection and diagnosis of various diseases/pathologies. In this study, we compare the least-squares error performance of the RIC, extended RIC, augmented RIC, augmented RIC+I(p), DuBois, Nagels and Mead models in fitting 3 sets of impedance data. These data were obtained by pseudorandom noise forced oscillation of healthy subjects, mild asthmatics and more severe asthmatics. We found that the augmented RIC+I(p) and DuBois models yielded the lowest fitting errors (for the healthy subjects group and the 2 asthmatic patient groups, respectively) without producing unphysiologically large component estimates.
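A minimal sketch of the simplest of these circuit models, the RIC model with impedance Z(w) = R + jwI + 1/(jwC), fitted to complex impedance data by stacking real and imaginary residuals; frequencies, data, and starting values are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

f = np.array([4.0, 6.0, 8.0, 10.0, 14.0, 18.0, 24.0])   # oscillation frequency (Hz)
w = 2 * np.pi * f
z_meas = 2.0 + 1j * (w * 0.01 - 1.0 / (w * 0.02))        # synthetic "measured" impedance

def residuals(p):
    R, I, C = p                                          # resistance, inertance, compliance
    z_model = R + 1j * w * I - 1j / (w * C)              # note 1/(jwC) = -j/(wC)
    err = z_model - z_meas
    return np.concatenate([err.real, err.imag])          # fit real and imaginary parts jointly

sol = least_squares(residuals, x0=[1.0, 0.005, 0.01], bounds=(1e-6, np.inf))
R, I, C = sol.x
```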
Kelleher, Deirdre C; Jagadeesh Chandra Bose, R P; Waterhouse, Lauren J; Carter, Elizabeth A; Burd, Randall S
2014-03-01
Trauma resuscitations without pre-arrival notification are often initially chaotic, which can potentially compromise patient care. We hypothesized that trauma resuscitations without pre-arrival notification are performed with more variable adherence to ATLS protocol and that implementation of a checklist would improve performance. We analyzed event logs of trauma resuscitations from two 4-month periods before (n = 222) and after (n = 215) checklist implementation. Using process mining techniques, individual resuscitations were compared with an ideal workflow model of 6 ATLS primary survey tasks performed by the bedside evaluator and given model fitness scores (range 0 to 1). Mean fitness scores and frequency of conformance (fitness = 1) were compared (using Student's t-test or chi-square test, as appropriate) for activations with and without notification both before and after checklist implementation. Multivariable linear regression, controlling for patient and resuscitation characteristics, was also performed to assess the association between pre-arrival notification and model fitness before and after checklist implementation. Fifty-five (12.6%) resuscitations lacked pre-arrival notification (23 pre-implementation and 32 post-implementation; p = 0.15). Before checklist implementation, resuscitations without notification had lower fitness (0.80 vs 0.90; p < 0.001) and conformance (26.1% vs 50.8%; p = 0.03) than those with notification. After checklist implementation, the fitness (0.80 vs 0.91; p = 0.007) and conformance (26.1% vs 59.4%; p = 0.01) improved for resuscitations without notification, but still remained lower than activations with notification. In multivariable analysis, activations without notification had lower fitness both before (b = -0.11, p < 0.001) and after checklist implementation (b = -0.04, p = 0.02). Trauma resuscitations without pre-arrival notification are associated with a decreased adherence to key components of the ATLS primary survey protocol. The addition of a checklist improves protocol adherence and reduces the effect of notification on task performance. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Vdovin, R. A.; Smelov, V. G.
2017-02-01
This work describes the experience of manufacturing a turbine rotor for a micro-engine. It demonstrates the design principles for a complex investment casting process that combines the ProCast simulation software with rapid prototyping techniques. At the virtual modelling stage, in addition to optimizing the process parameters, the casting structure was improved to obtain a defect-free section. The real production stage demonstrated the performance and fitness of rapid prototyping techniques for the manufacture of geometrically complex engine-building parts.
Use of reconstructed 3D VMEC equilibria to match effects of toroidally rotating discharges in DIII-D
Wingen, Andreas; Wilcox, Robert S.; Cianciosa, Mark R.; ...
2016-10-13
Here, a technique for tokamak equilibrium reconstructions is used for multiple DIII-D discharges, including L-mode and H-mode cases when weakly 3D fields (δB/B ∼ 10⁻³) are applied. The technique couples diagnostics to the non-linear, ideal MHD equilibrium solver VMEC, using the V3FIT code, to find the most likely 3D equilibrium based on a suite of measurements. It is demonstrated that V3FIT can be used to find non-linear 3D equilibria that are consistent with experimental measurements of the plasma response to very weak 3D perturbations, as well as with 2D profile measurements. Observations at DIII-D show that plasma rotation larger than 20 krad s⁻¹ changes the relative phase between the applied 3D fields and the measured plasma response. Discharges with low averaged rotation (10 krad s⁻¹) and peaked rotation profiles (40 krad s⁻¹) are reconstructed. Similarities and differences to forward-modeled VMEC equilibria, which do not include rotational effects, are shown. Toroidal phase shifts of up to 30° are found between the measured and forward-modeled plasma responses at the highest values of rotation. The plasma response phases of reconstructed equilibria, on the other hand, match the measured ones. This is the first time V3FIT has been used to reconstruct weakly 3D tokamak equilibria.
Kiss, Marc-Olivier; Levasseur, Annie; Petit, Yvan; Lavigne, Patrick
2012-05-01
Osteochondral autografts in mosaicplasty are inserted in a press-fit fashion, and hence, patients are kept nonweightbearing for up to 2 months after surgery to allow bone healing and prevent complications. Very little has been published regarding alternative fixation techniques of those grafts. Osteochondral autografts stabilized with a resorbable osteoconductive bone cement would have a greater load-bearing capacity than standard press-fit grafts. Controlled laboratory study. Biomechanical testing was conducted on 8 pairs of cadaveric bovine distal femurs. For the first 4 pairs, 6 single osteochondral autografts were inserted in a press-fit fashion on one femur. On the contralateral femur, 6 grafts were stabilized with a calcium triglyceride osteoconductive bone cement. For the 4 remaining pairs of femurs, 4 groups of 3 adjacent press-fit grafts were inserted on one femur, whereas on the contralateral femur, grafts were cemented. After a maturation period of 48 hours, axial loading was applied on all single grafts and on the middle graft of each 3-in-a-row series. For the single-graft configuration, median loads required to sink the press-fit and cemented grafts by 2 and 3 mm were 281.87 N versus 345.56 N (P = .015) and 336.29 N versus 454.08 N (P = .018), respectively. For the 3-in-a-row configuration, median loads required to sink the press-fit and cemented grafts by 2 and 3 mm were 260.31 N versus 353.47 N (P = .035) and 384.83 N versus 455.68 N (P = .029), respectively. Fixation of osteochondral grafts using bone cement appears to improve immediate stability over the original mosaicplasty technique for both single- and multiple-graft configurations. Achieving greater primary stability of osteochondral grafts could potentially accelerate postoperative recovery, allowing early weightbearing and physical therapy.
Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems
NASA Technical Reports Server (NTRS)
Koch, Patrick N.
1997-01-01
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigation and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method, as well as the associated approaches, is illustrated through application to the preliminary design of a commercial turbofan turbine propulsion system. The case study was developed in collaboration with Allison Engine Company, Rolls Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial, regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified.
The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.
Fatone, Stefania; Caldwell, Ryan
2017-01-01
Background: Current transfemoral prosthetic sockets restrict function, lack comfort, and cause residual limb problems. Lower proximal trim lines are an appealing way to address this problem. Development of a more comfortable and possibly functional subischial socket may contribute to improving quality of life of persons with transfemoral amputation. Objectives: The purpose of this study was to (1) describe the design and fabrication of a new subischial socket and (2) describe efforts to teach this technique. Study design: Development project. Methods: Socket development involved defining the following: subject and liner selection, residual limb evaluation, casting, positive mold rectification, check socket fitting, definitive socket fabrication, and troubleshooting of socket fit. Three hands-on workshops to teach the socket were piloted and attended by 30 certified prosthetists and their patient models. Results: Patient models responded positively to the comfort, range of motion, and stability of the new socket while prosthetists described the technique as “straight forward, reproducible.” Conclusion: To our knowledge, this is the first attempt to create a teachable subischial socket, and while it appears promising, more definitive evaluation is needed. Clinical relevance We developed the Northwestern University Flexible Subischial Vacuum (NU-FlexSIV) Socket as a more comfortable alternative to current transfemoral sockets and demonstrated that it could be taught successfully to prosthetists. PMID:28094686
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression) and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction-error or other specific statistic, as well as two alternative cross-validation techniques, adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps) and generation of cross-validated end-point estimates; (ii) parallel computing; (iii) various S3-generic and specific plotting functions for data visualization, diagnostic, prediction, summary and display of results. It is available on CRAN and GitHub. PMID:26798326
NASA Astrophysics Data System (ADS)
Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise
2017-11-01
The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge for fitting empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As sample sizes for training and validating empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship and by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables, or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to those of the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions valid only for the data used, and that are too complex to support inferences about the underlying process.
Response Surface Modeling Using Multivariate Orthogonal Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2001-01-01
A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures, one based on modern design of experiments (MDOE) and one using a classical one-factor-at-a-time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions, generated from the independent variable data, as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. The efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
McCaig, Duncan; Bhatia, Sudeep; Elliott, Mark T; Walasek, Lukasz; Meyer, Caroline
2018-05-07
Text-mining offers a technique to identify and extract information from a large corpus of textual data. As an example, this study presents the application of text-mining to assess and compare interest in fitness tracking technology across eating disorder and health-related online communities. A list of fitness tracking technology terms was developed, and communities (i.e., 'subreddits') on a large online discussion platform (Reddit) were compared regarding the frequency with which these terms occurred. The corpus used in this study comprised all comments posted between May 2015 and January 2018 (inclusive) on six subreddits-three eating disorder-related, and three relating to either fitness, weight-management, or nutrition. All comments relating to the same 'thread' (i.e., conversation) were concatenated, and formed the cases used in this study (N = 377,276). Within the eating disorder-related subreddits, the findings indicated that a 'pro-eating disorder' subreddit, which is less recovery focused than the other eating disorder subreddits, had the highest frequency of fitness tracker terms. Across all subreddits, the weight-management subreddit had the highest frequency of the fitness tracker terms' occurrence, and MyFitnessPal was the most frequently mentioned fitness tracker. The technique exemplified here can potentially be used to assess group differences to identify at-risk populations, generate and explore clinically relevant research questions in populations who are difficult to recruit, and scope an area for which there is little extant literature. The technique also facilitates methodological triangulation of research findings obtained through more 'traditional' techniques, such as surveys or interviews. © 2018 Wiley Periodicals, Inc.
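A minimal sketch of the counting step, assuming a hypothetical term list and toy thread texts in place of the Reddit corpus:

```python
import re
from collections import Counter

# Illustrative term list; the study developed its own list of tracker terms.
TRACKER_TERMS = ["fitbit", "myfitnesspal", "garmin", "apple watch", "step count"]

def term_frequency(threads):
    """Occurrences of each tracker term per 10,000 words across a community's threads."""
    text = " ".join(threads).lower()
    n_words = max(len(text.split()), 1)
    counts = Counter({t: len(re.findall(re.escape(t), text)) for t in TRACKER_TERMS})
    return {t: 1e4 * c / n_words for t, c in counts.items()}

# Toy stand-ins for concatenated comment threads per subreddit.
subreddits = {
    "community_a": ["logged everything in myfitnesspal today", "my fitbit says 20k steps"],
    "community_b": ["myfitnesspal streak day 100", "garmin vs apple watch?"],
}
for name, threads in subreddits.items():
    print(name, term_frequency(threads))
```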
Compact continuum brain model for human electroencephalogram
NASA Astrophysics Data System (ADS)
Kim, J. W.; Shin, H.-B.; Robinson, P. A.
2007-12-01
A low-dimensional, compact brain model has recently been developed based on physiologically based mean-field continuum formulation of electric activity of the brain. The essential feature of the new compact model is a second order time-delayed differential equation that has physiologically plausible terms, such as rapid corticocortical feedback and delayed feedback via extracortical pathways. Due to its compact form, the model facilitates insight into complex brain dynamics via standard linear and nonlinear techniques. The model successfully reproduces many features of previous models and experiments. For example, experimentally observed typical rhythms of electroencephalogram (EEG) signals are reproduced in a physiologically plausible parameter region. In the nonlinear regime, onsets of seizures, which often develop into limit cycles, are illustrated by modulating model parameters. It is also shown that a hysteresis can occur when the system has multiple attractors. As a further illustration of this approach, power spectra of the model are fitted to those of sleep EEGs of two subjects (one with apnea, the other with narcolepsy). The model parameters obtained from the fittings show good matches with previous literature. Our results suggest that the compact model can provide a theoretical basis for analyzing complex EEG signals.
Langmuir probe analysis in electronegative plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bredin, Jerome, E-mail: jerome.bredin@lpp.polytechnique.fr; Chabert, Pascal; Aanesland, Ane
2014-12-15
This paper compares two methods to analyze Langmuir probe data obtained in electronegative plasmas. The techniques are developed to allow investigations in plasmas where the electronegativity α₀ = n₋/nₑ (the ratio between the negative ion and electron densities) varies strongly. The first technique uses an analytical model to express the Langmuir probe current-voltage (I-V) characteristic and its second derivative as a function of the electron and ion densities (nₑ, n₊, n₋), temperatures (Tₑ, T₊, T₋), and masses (mₑ, m₊, m₋). The analytical curves are fitted to the experimental data by adjusting these variables and parameters. To reduce the number of fitted parameters, the ion masses are assumed constant within the source volume, and quasi-neutrality is assumed everywhere. In this theory, Maxwellian distributions are assumed for all charged species. We show that this data analysis can predict the various plasma parameters within 5-10%, including the ion temperatures when α₀ > 100. However, the method is tedious, time consuming, and requires a precise measurement of the energy distribution function. A second technique is therefore developed for easier access to the electron and ion densities, but does not give access to the ion temperatures. Here, only the measured I-V characteristic is needed. The electron density, temperature, and ion saturation current for positive ions are determined by classical probe techniques. The electronegativity α₀ and the ion densities are deduced via an iterative method since these variables are coupled via the modified Bohm velocity. For both techniques, a Child-law sheath model for cylindrical probes has been developed and is presented to emphasize the importance of this model for small cylindrical Langmuir probes.
Application of a metabolic balancing technique to the analysis of microbial fermentation data.
de Hollander, J A
1991-01-01
A general method for the development of fermentation models, based on elemental and metabolic balances, is illustrated with three examples from the literature. Physiological parameters such as the (maximal) yield on ATP, the energetic maintenance coefficient, the P/O ratio and others are estimated by fitting model equations to experimental data. Further, phenomenological relations concerning kinetics of product formation and limiting enzyme activities are assessed. The results are compared with the conclusions of the original articles, and differences due to the application of improved models are discussed.
NASA Astrophysics Data System (ADS)
Laborda, Eduardo; Wang, Yijun; Henstridge, Martin C.; Martínez-Ortiz, Francisco; Molina, Angela; Compton, Richard G.
2011-08-01
The Marcus-Hush and Butler-Volmer kinetic electrode models are compared experimentally by studying the reduction of 2-methyl-2-nitropropane in acetonitrile at mercury microelectrodes using Reverse Scan Square Wave Voltammetry. This technique is found to be very sensitive to the electrode kinetics and to permit critical comparison of the two models. The Butler-Volmer model satisfactorily fits the experimental data whereas Marcus-Hush does not quantitatively describe this redox system.
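A minimal sketch of fitting the Butler-Volmer rate law i = i0[exp((1-α)fη) - exp(-αfη)], with f = F/RT, to synthetic current-overpotential data; the paper's square wave voltammetric analysis is far more involved than this steady-state toy:

```python
import numpy as np
from scipy.optimize import curve_fit

F, R, T = 96485.0, 8.314, 298.15
f = F / (R * T)                                 # ~38.9 V^-1 at room temperature

def butler_volmer(eta, i0, alpha):
    # i0: exchange current (A), alpha: transfer coefficient
    return i0 * (np.exp((1 - alpha) * f * eta) - np.exp(-alpha * f * eta))

eta = np.linspace(-0.08, 0.08, 33)              # overpotential (V)
rng = np.random.default_rng(2)
i = butler_volmer(eta, 1e-4, 0.45) + rng.normal(0, 2e-6, eta.size)

(i0, alpha), cov = curve_fit(butler_volmer, eta, i, p0=[1e-5, 0.5])
print(f"i0={i0:.2e} A, alpha={alpha:.3f}")
```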
Wall shear stress effects of different endodontic irrigation techniques and systems.
Goode, Narisa; Khan, Sara; Eid, Ashraf A; Niu, Li-na; Gosier, Johnny; Susin, Lisiane F; Pashley, David H; Tay, Franklin R
2013-07-01
This study examined débridement efficacy as a result of wall shear stresses created by different irrigant delivery/agitation techniques in an inaccessible recess of a curved root canal model. A reusable, curved canal cavity containing a simulated canal fin was milled into mirrored titanium blocks. Calcium hydroxide (Ca(OH)2) paste was used as debris and loaded into the canal fin. The titanium blocks were bolted together to provide a fluid-tight seal. Sodium hypochlorite was delivered at a previously-determined flow rate of 1 mL/min that produced either negligible or no irrigant extrusion pressure into the periapex for all the techniques examined. Nine irrigation delivery/agitation techniques were examined: NaviTip passive irrigation control, Max-i-Probe(®) side-vented needle passive irrigation, manual dynamic agitation (MDA) using non-fitting and well-fitting gutta-percha points, EndoActivator™ sonic agitation with medium and large points, VPro™ EndoSafe™ irrigation system, VPro™ StreamClean™ continuous ultrasonic irrigation and EndoVac apical negative pressure irrigation. Débridement efficacies were analysed with Kruskal-Wallis ANOVA and Dunn's multiple comparisons tests (α=0.05). EndoVac was the only technique that removed more than 99% calcium hydroxide debris from the canal fin at the predefined flow rate. This group was significantly different (p<0.05) from the other groups that exhibited incomplete Ca(OH)2 removal. The ability of the EndoVac system to significantly clean more debris from a mechanically inaccessible recess of the model curved root canal may be caused by robust bubble formation during irrigant delivery, creating higher wall shear stresses by a two-phase air-liquid flow phenomenon that is well known in other industrial débridement systems. Copyright © 2013 Elsevier Ltd. All rights reserved.
Supermassive Black Holes and Their Host Spheroids. I. Disassembling Galaxies
NASA Astrophysics Data System (ADS)
Savorgnan, G. A. D.; Graham, A. W.
2016-01-01
Several recent studies have performed galaxy decompositions to investigate correlations between the black hole mass and various properties of the host spheroid, but they have not converged on the same conclusions. This is because their models for the same galaxy were often significantly different and not consistent with each other in terms of fitted components. Using 3.6 μm Spitzer imagery, which is a superb tracer of the stellar mass (superior to the K band), we have performed state-of-the-art multicomponent decompositions for 66 galaxies with directly measured black hole masses. Our sample is the largest to date and, unlike previous studies, contains a large number (17) of spiral galaxies with low black hole masses. We paid careful attention to the image mosaicking, sky subtraction, and masking of contaminating sources. After a scrupulous inspection of the galaxy photometry (through isophotal analysis and unsharp masking) and, for the first time, 2D kinematics, we were able to account for spheroids; large-scale, intermediate-scale, and nuclear disks; bars; rings; spiral arms; halos; extended or unresolved nuclear sources; and partially depleted cores. For each individual galaxy, we compared our best-fit model with previous studies, explained the discrepancies, and identified the optimal decomposition. Moreover, we have independently performed one-dimensional (1D) and two-dimensional (2D) decompositions and concluded that, at least when modeling large, nearby galaxies, 1D techniques have more advantages than 2D techniques. Finally, we developed a prescription to estimate the uncertainties on the 1D best-fit parameters for the 66 spheroids that takes into account systematic errors, unlike popular 2D codes that only consider statistical errors.
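A minimal sketch of the basic 1D ingredient of such decompositions, a single Sersic profile fit using the common approximation b_n ≈ 1.9992n - 0.3271; real multicomponent fits add disks, bars, rings, and other components, and the data here are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def sersic(R, Ie, Re, n):
    # Sersic surface brightness profile; bn approximation valid for ~0.5 < n < 10
    bn = 1.9992 * n - 0.3271
    return Ie * np.exp(-bn * ((R / Re) ** (1.0 / n) - 1.0))

R = np.linspace(0.5, 50, 60)                       # radius (arcsec), illustrative
rng = np.random.default_rng(3)
I_obs = sersic(R, 100.0, 8.0, 4.0) * (1 + rng.normal(0, 0.03, R.size))

(Ie, Re, n), cov = curve_fit(sersic, R, I_obs, p0=[50.0, 5.0, 2.0])
print(f"Ie={Ie:.1f}, Re={Re:.2f} arcsec, n={n:.2f}")
```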
Inferring diffusion dynamics from FCS in heterogeneous nuclear environments.
Tsekouras, Konstantinos; Siegel, Amanda P; Day, Richard N; Pressé, Steve
2015-07-07
Fluorescence correlation spectroscopy (FCS) is a noninvasive technique that probes the diffusion dynamics of proteins down to single-molecule sensitivity in living cells. Critical mechanistic insight is often drawn from FCS experiments by fitting the resulting time-intensity correlation function, G(t), to known diffusion models. When simple models fail, the complex diffusion dynamics of proteins within heterogeneous cellular environments can be fit to anomalous diffusion models with adjustable anomalous exponents. Here, we take a different approach. We use the maximum entropy method to show, first using synthetic data, that a model for proteins diffusing while stochastically binding/unbinding to various affinity sites in living cells gives rise to a G(t) that could otherwise be equally well fit using anomalous diffusion models. We explain the mechanistic insight derived from our method. In particular, using real FCS data, we describe how the effects of cell crowding and binding to affinity sites manifest themselves in the behavior of G(t). Our focus is on the diffusive behavior of an engineered protein in 1) the heterochromatin region of the cell's nucleus, 2) the cell's cytoplasm, and 3) solution. The protein consists of the basic region-leucine zipper (BZip) domain of the CCAAT/enhancer-binding protein (C/EBP) fused to fluorescent proteins. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
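A minimal sketch of fitting G(t) to the standard single-species 3D diffusion model, one of the "known diffusion models" referred to above; the focal aspect ratio κ and all data are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

kappa = 5.0                                       # axial/lateral focal ratio, assumed

def g_diff(t, g0, tau_d):
    # Single-component 3D diffusion autocorrelation model
    return g0 / ((1 + t / tau_d) * np.sqrt(1 + t / (kappa**2 * tau_d)))

t = np.logspace(-6, 0, 80)                        # lag time (s)
rng = np.random.default_rng(4)
g = g_diff(t, 0.05, 1e-3) + rng.normal(0, 5e-4, t.size)

(g0, tau_d), cov = curve_fit(g_diff, t, g, p0=[0.1, 1e-4])
print(f"G(0)={g0:.4f}, diffusion time={tau_d:.2e} s")
```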
Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach
NASA Astrophysics Data System (ADS)
Denolle, M.; Van Houtte, C.
2017-12-01
Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
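A minimal sketch of the spectral fit at issue: an omega-square-type source model Ω(f) = Ω0/(1 + (f/fc)^n) fitted in log space to synthetic data, illustrating how fixing n = 2 when the true falloff is nearer 1.6 biases the recovered corner frequency. This is a point-estimate toy, not the paper's Bayesian hierarchical model:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_spectrum(f, logO0, fc, n):
    # log10 of an omega-square-type displacement source spectrum
    return logO0 - np.log10(1 + (f / fc) ** n)

f = np.logspace(-1, 1.3, 60)                                  # frequency (Hz)
rng = np.random.default_rng(5)
obs = log_spectrum(f, 0.0, 0.8, 1.6) + rng.normal(0, 0.05, f.size)

free_n, _ = curve_fit(log_spectrum, f, obs, p0=[0.0, 1.0, 2.0])
fixed_n, _ = curve_fit(lambda f, logO0, fc: log_spectrum(f, logO0, fc, 2.0),
                       f, obs, p0=[0.0, 1.0])
print("fc with n free:", free_n[1], " fc with n fixed at 2:", fixed_n[1])
```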
NASA Technical Reports Server (NTRS)
Shoberg, Tom; Stein, Seth
1994-01-01
Spreading center segments that have experienced a complex tectonic history including rift propagation may have a complicated signature in bathymetric and magnetic anomaly data. To gain insight into the history of such regions, we have developed techniques in which both the magnetic anomaly patterns and seafloor fabric trends are predicted theoretically, and the combined predictions are compared numerically with the data to estimate best fitting parameters for the propagation history. Fitting functions are constructed to help determine which model best matches the digitized fabric and magnetic anomaly data. Such functions offer statistical criteria for choosing the best fit model. We use this approach to resolve the propagation history of the Cobb Offset along the Juan de Fuca ridge. In this example, the magnetic anomaly data prove more useful in defining the geometry of the propagation events, while the fabric, with its greater temporal resolution, is more useful for constraining the rate of propagation. It thus appears that joint inversion of magnetic and seafloor fabric data can be valuable in tectonic analyses.
Rounds, S.A.; Tiffany, B.A.; Pankow, J.F.
1993-01-01
Aerosol particles from a highway tunnel were collected on a Teflon membrane filter (TMF) using standard techniques. Sorbed organic compounds were then desorbed for 28 days by passing clean nitrogen through the filter. Volatile n-alkanes and polycyclic aromatic hydrocarbons (PAHs) were liberated from the filter quickly; only a small fraction of the less volatile n-alkanes and PAHs were desorbed. A nonlinear least-squares method was used to fit an intraparticle diffusion model to the experimental data. Two fitting parameters were used: the gas/particle partition coefficient (Kp) and an effective intraparticle diffusion coefficient (Deff). Optimized values of Kp are in agreement with previously reported values. The slope of a correlation between the fitted values of Deff and Kp agrees well with theory, but the absolute values of Deff are a factor of ~10^6 smaller than predicted for sorption-retarded, gaseous diffusion. Slow transport through an organic or solid phase within the particles, or preferential flow through the bed of particulate matter on the filter, might be the cause of these very small effective diffusion coefficients. © 1993 American Chemical Society.
Mean gravity anomalies and sea surface heights derived from GEOS-3 altimeter data
NASA Technical Reports Server (NTRS)
Rapp, R. H.
1978-01-01
Approximately 2000 GEOS-3 altimeter arcs were analyzed to improve knowledge of the geoid and gravity field. The sea surface heights (geoid undulations) were fitted in an adjustment process that incorporated cross-over constraints. The error model used for the fit was a one- or two-parameter model designed to remove altimeter bias and orbit error. The undulations on the adjusted arcs were used to produce geoid maps in 20 regions. The adjusted data were used to derive 301 5-degree equal area anomalies and 9995 1 x 1 degree anomalies in areas where the altimeter data were most dense, using least squares collocation techniques. Also emphasized was the ability of the altimeter data to imply rapid anomaly changes of up to 240 mgal in adjacent 1 x 1 degree blocks.
Accuracy of 3 different impression techniques for internal connection angulated implants.
Tsagkalidis, George; Tortopidis, Dimitrios; Mpikos, Pavlos; Kaisarlis, George; Koidis, Petros
2015-10-01
Making implant impressions with different angulations requires a more precise and time-consuming impression technique. The purpose of this in vitro study was to compare the accuracy of nonsplinted, splinted, and snap-fit impression techniques of internal connection implants with different angulations. An experimental device was used to allow a clinical simulation of impression making by means of open and closed tray techniques. Three different impression techniques (nonsplinted, acrylic-resin splinted, and indirect snap-fit) for 6 internal-connected implants at different angulations (0, 15, 25 degrees) were examined using polyether. Impression accuracy was evaluated by measuring the differences in 3-dimensional (3D) position deviations between the implant body/impression coping before the impression procedure and the coping/laboratory analog positioned within the impression, using a coordinate measuring machine. Data were analyzed by 2-way ANOVA. Means were compared with the least significant difference criterion at P<.05. Results showed that at 25 degrees of implant angulation, the highest accuracy was obtained with the splinted technique (mean ±SE: 0.39 ±0.05 mm) and the lowest with the snap-fit technique (0.85 ±0.09 mm); at 15 degrees of angulation, there were no significant differences among splinted (0.22 ±0.04 mm) and nonsplinted technique (0.15 ±0.02 mm) and the lowest accuracy obtained with the snap-fit technique (0.95 ±0.15 mm); and no significant differences were found between nonsplinted and splinted technique at 0 degrees of implant placement. Splinted impression technique exhibited a higher accuracy than the other techniques studied when increased implant angulations at 25 degrees were involved. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Uncertainty quantification for constitutive model calibration of brain tissue.
Brewick, Patrick T; Teferra, Kirubel
2018-05-31
The results of a study comparing model calibration techniques for Ogden's constitutive model, which describes the hyperelastic behavior of brain tissue, are presented. One- and two-term Ogden models are fit to two different sets of stress-strain experimental data for brain tissue using both least squares optimization and Bayesian estimation. For the Bayesian estimation, the joint posterior distribution of the constitutive parameters is calculated by employing Hamiltonian Monte Carlo (HMC) sampling, a type of Markov Chain Monte Carlo method. The HMC method is enriched in this work to intrinsically enforce the Drucker stability criterion by formulating a nonlinear parameter constraint function, which ensures the constitutive model produces physically meaningful results. Through application of the nested sampling technique, 95% confidence bounds on the constitutive model parameters are identified, and these bounds are then propagated through the constitutive model to produce the resultant bounds on the stress-strain response. The behavior of the model calibration procedures and the effect of the characteristics of the experimental data are extensively evaluated. It is demonstrated that increasing model complexity (i.e., adding an additional term in the Ogden model) improves the accuracy of the best-fit set of parameters while also increasing the uncertainty via the widening of the confidence bounds of the calibrated parameters. Despite some similarity between the two data sets, the resulting distributions are noticeably different, highlighting the sensitivity of the calibration procedures to the characteristics of the data. For example, the amount of uncertainty reported on the experimental data plays an essential role in how data points are weighted during the calibration, and this significantly affects how the parameters are calibrated when combining experimental data sets from disparate sources. Published by Elsevier Ltd.
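A minimal least-squares sketch of the simpler (non-Bayesian) calibration path, fitting a one-term incompressible Ogden model to uniaxial data via the nominal stress P = (2μ/α)(λ^(α-1) - λ^(-α/2-1)); all values are synthetic and merely brain-tissue-flavored:

```python
import numpy as np
from scipy.optimize import curve_fit

def ogden_1term(L, mu, alpha):
    # Nominal (first Piola-Kirchhoff) stress for uniaxial stretch L,
    # one-term incompressible Ogden model
    return (2 * mu / alpha) * (L ** (alpha - 1) - L ** (-alpha / 2 - 1))

L = np.linspace(1.0, 1.3, 20)                         # stretch ratio
rng = np.random.default_rng(6)
P = ogden_1term(L, 1.5e3, -4.7) + rng.normal(0, 10.0, L.size)   # stress (Pa)

(mu, alpha), cov = curve_fit(ogden_1term, L, P, p0=[1e3, -4.0])
print(f"mu={mu:.1f} Pa, alpha={alpha:.2f}")
```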
Cai, Jing; Tyree, Melvin T
2010-07-01
The objective of this study was to quantify the relationship between vulnerability to cavitation and vessel diameter within a species. We measured vulnerability curves (VCs: percentage loss of hydraulic conductivity versus tension) in aspen stems and measured vessel-size distributions. Measurements were done on seed-grown, 4-month-old aspen (Populus tremuloides Michx) grown in a greenhouse. VCs of stem segments were measured using a centrifuge technique and by a staining technique that allowed a VC to be constructed based on vessel diameter size-classes (D). Vessel-based VCs were also fitted to Weibull cumulative distribution functions (CDF), which provided best-fit values of the Weibull CDF constants (c and b) and P50 (the tension causing 50% loss of hydraulic conductivity). We show that P50 = 6.166D^(-0.3134) (R^2 = 0.995) and that b and 1/c are both linear functions of D with R^2 > 0.95. The results are discussed in terms of models of VCs based on vessel D size-classes and in terms of concepts such as the 'pit area hypothesis' and vessel pathway redundancy.
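A minimal sketch of the Weibull CDF fit, PLC(T) = 100(1 - exp(-(T/b)^c)), with P50 recovered as b(ln 2)^(1/c); tensions and losses here are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_vc(T, b, c):
    # Weibull CDF vulnerability curve: percent loss of conductivity at tension T
    return 100.0 * (1.0 - np.exp(-(T / b) ** c))

T = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])     # xylem tension (MPa)
plc = np.array([5.0, 15.0, 35.0, 55.0, 75.0, 88.0, 95.0])  # % loss, illustrative

(b, c), cov = curve_fit(weibull_vc, T, plc, p0=[2.0, 2.0])
P50 = b * np.log(2.0) ** (1.0 / c)                     # tension at 50% loss
print(f"b={b:.3f} MPa, c={c:.3f}, P50={P50:.3f} MPa")
```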
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genest-Beaulieu, C.; Bergeron, P., E-mail: genest@astro.umontreal.ca, E-mail: bergeron@astro.umontreal.ca
We present a comparative analysis of atmospheric parameters obtained with the so-called photometric and spectroscopic techniques. Photometric and spectroscopic data for 1360 DA white dwarfs from the Sloan Digital Sky Survey (SDSS) are used, as well as spectroscopic data from the Villanova White Dwarf Catalog. We first test the calibration of the ugriz photometric system by using model atmosphere fits to observed data. Our photometric analysis indicates that the ugriz photometry appears well calibrated when the SDSS to AB₉₅ zeropoint corrections are applied. The spectroscopic analysis of the same data set reveals that the so-called high-log g problem can be solved by applying published correction functions that take into account three-dimensional hydrodynamical effects. However, a comparison between the SDSS and the White Dwarf Catalog spectra also suggests that the SDSS spectra still suffer from a small calibration problem. We then compare the atmospheric parameters obtained from both fitting techniques and show that the photometric temperatures are systematically lower than those obtained from spectroscopic data. This systematic offset may be linked to the hydrogen line profiles used in the model atmospheres. We finally present the results of an analysis aimed at measuring surface gravities using photometric data only.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Warne, Larry K.; Sainath, Kamalesh
In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked time-wise, either for late-time predictions (about the 100 ns-10000 ns range) or predictions concerning EMP excitation of long TLs (order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. Addressing this to facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential function basis-fit to the ground/Earth's impedance function, followed by incorporating this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve necessary time speed-ups across a variety of TL/Earth geometry/material scenarios.
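A minimal sketch of the recursion idea under the stated assumptions: once a kernel has been approximated as a sum of complex exponentials, f(t) ≈ Σ_k c_k·exp(-a_k·t), its convolution with an input can be advanced one step at a time from K running accumulators instead of re-summing the whole history. The coefficients and the rectangle-rule discretization below are illustrative, not the report's implementation:

```python
import numpy as np

def recursive_convolution(x, dt, c, a):
    # O(N*K) evaluation of y(t_n) = integral f(t_n - s) x(s) ds for
    # f(t) = sum_k c_k exp(-a_k t), versus O(N^2) for brute-force convolution.
    c = np.asarray(c, dtype=complex)
    decay = np.exp(-np.asarray(a, dtype=complex) * dt)
    state = np.zeros_like(c)                 # one running accumulator per exponential
    y = np.zeros(len(x), dtype=complex)
    for n, xn in enumerate(x):
        state = state * decay + c * xn * dt  # rectangle-rule update of each term
        y[n] = state.sum()
    return y

# Illustrative use: two-exponential kernel driven by a step input
y = recursive_convolution(np.ones(1000), 1e-9, c=[2e8, 5e7], a=[1e8, 3e6])
```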
Langdon, Jonathan H; Elegbe, Etana; McAleavey, Stephen A
2015-01-01
Single Tracking Location (STL) Shear wave Elasticity Imaging (SWEI) is a method for detecting elastic differences between tissues. It has the advantage of intrinsic speckle bias suppression compared to Multiple Tracking Location (MTL) variants of SWEI. However, the assumption of a linear model leads to an overestimation of the shear modulus in viscoelastic media. A new reconstruction technique denoted Single Tracking Location Viscosity Estimation (STL-VE) is introduced to correct for this overestimation. This technique utilizes the same raw data generated in STL-SWEI imaging. Here, the STL-VE technique is developed by way of a Maximum Likelihood Estimation (MLE) for general viscoelastic materials. The method is then implemented for the particular case of the Kelvin-Voigt Model. Using simulation data, the STL-VE technique is demonstrated and the performance of the estimator is characterized. Finally, the STL-VE method is used to estimate the viscoelastic parameters of ex-vivo bovine liver. We find good agreement between the STL-VE results and the simulation parameters as well as between the liver shear wave data and the modeled data fit. PMID:26168170
Apps to promote physical activity among adults: a review and content analysis
2014-01-01
Background In May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear. Methods The study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Means and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play. Results On average, the reviewed apps included 5 behavior change techniques (range 2–8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores, were found. Conclusions The present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number was not different for paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions. PMID:25059981
The effect of various veneering techniques on the marginal fit of zirconia copings
Torabi, Kianoosh; Vojdani, Mahroo; Giti, Rashin; Pardis, Soheil
2015-01-01
PURPOSE This study aimed to evaluate the fit of zirconia ceramics before and after veneering, using 3 different veneering processes (layering, press-over, and CAD-on techniques). MATERIALS AND METHODS Thirty standardized zirconia CAD/CAM frameworks were constructed and divided into three groups of 10 each. The first group was veneered using the traditional layering technique. Press-over and CAD-on techniques were used to veneer the second and third groups. The marginal gap of the specimens was measured before and after the veneering process at 18 sites on the master die using a digital microscope. A paired t-test was used to evaluate mean marginal gap changes. One-way ANOVA and post hoc tests were also employed for comparison among the 3 groups (α=.05). RESULTS The marginal gap increased in all 3 groups after porcelain veneering. The mean marginal gap after veneering in the layering group (63.06 µm) was higher than in the press-over (50.64 µm) and CAD-on (51.50 µm) groups (P<.001). CONCLUSION All three veneering methods altered the marginal fit of the zirconia copings. The conventional layering technique increased the marginal gap of the zirconia framework more than the pressing and CAD-on techniques. All-ceramic crowns made through the three different veneering methods revealed clinically acceptable marginal fit. PMID:26140175
Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects
NASA Astrophysics Data System (ADS)
Jarndal, Anwar; Ghannouchi, Fadhel M.
2016-09-01
In this paper, an improved modeling approach has been developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed for designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations showed very good agreement with RF large-signal measurements.
Forecasting coconut production in the Philippines with ARIMA model
NASA Astrophysics Data System (ADS)
Lim, Cristina Teresa
2015-02-01
The study aimed to depict the situation of the coconut industry in the Philippines in future years by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period 1990 to 2012 were analyzed using time-series methods. Autocorrelation (ACF) and partial autocorrelation functions (PACF) were calculated for the data. An appropriate Box-Jenkins autoregressive moving average model was fitted. Validity of the model was tested using standard statistical techniques. The forecasting power of the autoregressive moving average (ARMA) model was used to forecast coconut production for the following eight years.
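A minimal sketch of this Box-Jenkins workflow with statsmodels; the annual production series below is a hypothetical placeholder for the 1990-2012 figures, and the (1, 1, 1) order stands in for whatever the ACF/PACF analysis would actually suggest:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
years = pd.period_range("1990", "2012", freq="Y")
production = pd.Series(14.0 + 0.1 * np.arange(len(years))
                       + rng.normal(0, 0.3, len(years)),
                       index=years)              # million tonnes, hypothetical

fit = ARIMA(production, order=(1, 1, 1)).fit()   # order chosen from ACF/PACF in practice
print(fit.summary())
print(fit.forecast(steps=8))                     # projections for the next eight years
```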
Effect of various putty-wash impression techniques on marginal fit of cast crowns.
Nissan, Joseph; Rosner, Ofir; Bukhari, Mohammed Amin; Ghelfan, Oded; Pilo, Raphael
2013-01-01
Marginal fit is an important clinical factor that affects restoration longevity. The accuracy of three polyvinyl siloxane putty-wash impression techniques was compared by marginal fit assessment using a nondestructive method. A stainless steel master cast containing three abutments with three metal crowns matching the three preparations was used to make 45 impressions: group A = single-step technique (putty and wash impression materials used simultaneously), group B = two-step technique with a 2-mm relief (putty as a preliminary impression to create a 2-mm wash space followed by the wash stage), and group C = two-step technique with a polyethylene spacer (plastic spacer used with the putty impression followed by the wash stage). Accuracy was assessed using a toolmaker microscope to measure and compare the marginal gaps between each crown and finish line on the duplicated stone casts. Each abutment was further measured at the mesial, buccal, and distal aspects. One-way analysis of variance was used for statistical analysis. P values and Scheffe post hoc contrasts were calculated. Significance was determined at .05. One-way analysis of variance showed significant differences among the three impression techniques in all three abutments and at all three locations (P < .001). Group B yielded dies with minimal gaps compared to groups A and C. The two-step impression technique with 2-mm relief was the most accurate regarding the crucial clinical factor of marginal fit.
Binder, Harald; Porzelius, Christine; Schumacher, Martin
2011-03-01
Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions, variance homogeneity and normality, that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
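A minimal sketch of the Box-Cox step under the stated assumptions, with a hypothetical response whose variance shrinks at the severe-effect end of the concentration gradient; scipy chooses the transformation exponent by maximum likelihood:

```python
import numpy as np
from scipy.stats import boxcox

# Hypothetical responses (e.g. frond counts) across increasing concentrations;
# note the compressed variance where adverse effects are severe.
response = np.array([52.0, 48.0, 35.0, 20.0, 8.0, 2.5, 1.2, 0.9])

transformed, lam = boxcox(response)   # lam = maximum-likelihood Box-Cox exponent
print(f"Box-Cox lambda = {lam:.2f}")
# `transformed` would then serve as the response in the nonlinear regression.
```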
Nonexercise Equations to Estimate Fitness in White European and South Asian Men.
O'Donovan, Gary; Bakrania, Kishan; Ghouri, Nazim; Yates, Thomas; Gray, Laura J; Hamer, Mark; Stamatakis, Emmanuel; Khunti, Kamlesh; Davies, Melanie; Sattar, Naveed; Gill, Jason M R
2016-05-01
Cardiorespiratory fitness is a strong, independent predictor of health, whether it is measured in an exercise test or estimated in an equation. The purpose of this study was to develop and validate equations to estimate fitness in middle-age white European and South Asian men. Multiple linear regression models (n = 168, including 83 white European and 85 South Asian men) were created using variables that are thought to be important in predicting fitness (V̇O2max, mL·kg⁻¹·min⁻¹): age (yr), body mass index (kg·m⁻²), resting HR (bpm); smoking status (0, never smoked; 1, ex or current smoker); physical activity expressed as quintiles (0, quintile 1; 1, quintile 2; 2, quintile 3; 3, quintile 4; 4, quintile 5), categories of moderate-to-vigorous intensity physical activity (MVPA) (0, <75 min·wk⁻¹; 1, 75-150 min·wk⁻¹; 2, >150-225 min·wk⁻¹; 3, >225-300 min·wk⁻¹; 4, >300 min·wk⁻¹), or minutes of MVPA (min·wk⁻¹); and ethnicity (0, South Asian; 1, white). The leave-one-out cross-validation procedure was used to assess the generalizability, and the bootstrap and jackknife resampling techniques were used to estimate the variance and bias of the models. Around 70% of the variance in fitness was explained in models with an ethnicity variable, such as: V̇O2max = 77.409 - (age × 0.374) - (body mass index × 0.906) - (ex or current smoker × 1.976) + (physical activity quintile coefficient) - (resting HR × 0.066) + (white ethnicity × 8.032), where the physical activity quintile coefficient is 0 for quintile 1, 1.127 for quintile 2, 1.869 for quintile 3, 3.793 for quintile 4, and 3.029 for quintile 5. Only around 50% of the variance was explained in models without an ethnicity variable. All models with an ethnicity variable were generalizable and had low variance and bias. These data demonstrate the importance of incorporating ethnicity in nonexercise equations to estimate cardiorespiratory fitness in multiethnic populations.
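A worked implementation of the quoted equation, with the quintile coefficients taken directly from the abstract; the example inputs are hypothetical:

```python
PA_QUINTILE_COEF = {1: 0.0, 2: 1.127, 3: 1.869, 4: 3.793, 5: 3.029}

def estimated_vo2max(age, bmi, ever_smoker, resting_hr, pa_quintile, white):
    """Estimated VO2max (mL/kg/min) from the ethnicity-inclusive model."""
    return (77.409 - 0.374 * age - 0.906 * bmi - 1.976 * ever_smoker
            + PA_QUINTILE_COEF[pa_quintile] - 0.066 * resting_hr + 8.032 * white)

# Hypothetical subject: 50-year-old white never-smoker, BMI 26 kg/m2,
# resting HR 62 bpm, physical activity quintile 4.
print(estimated_vo2max(age=50, bmi=26, ever_smoker=0, resting_hr=62,
                       pa_quintile=4, white=1))
```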
Comments on 'Different techniques for finding best-fit parameters'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenimore, Edward E.; Triplett, Laurie A.
2014-07-01
A common data analysis problem is to find best-fit parameters through chi-square minimization. Levenberg-Marquardt is an often-used method that depends on gradients and converges when successive iterations do not change chi-square by more than a specified amount. We point out that, in cases where the sought-after parameter weakly affects the fit, and in cases where the overall scale factor is a parameter, a Golden Search technique can often do better. The Golden Search converges when the best-fit point is located within a specified range, and that range can be made arbitrarily small. It does not depend on the value of chi-square.
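A minimal sketch of a golden-section search on chi-square, assuming the minimum has already been bracketed; the stopping rule is purely an interval width, independent of the chi-square values themselves, which is the property highlighted above. The quadratic objective is a placeholder for fitting an overall scale factor:

```python
import math

def golden_search(f, a, b, tol=1e-6):
    # Shrink the bracket [a, b] by the golden ratio until it is narrower
    # than tol; converges regardless of how flat f is near the minimum.
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Best-fit overall scale s for model values m against data y (chi-square)
data = [(1.2, 1.0), (2.1, 2.0), (3.3, 3.0)]
chi2 = lambda s: sum((y - s * m) ** 2 for y, m in data)
print(golden_search(chi2, 0.0, 5.0))
```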
Shaban, Mohamed; Hassouna, Mohamed E M; Nasief, Fadya M; AbuKhadra, Mostafa R
2017-10-01
Raw kaolinite was used in the synthesis of metakaolinite/carbon nanotubes (K/CNTs) and kaolinite/starch (K/starch) nanocomposites. Raw kaolinite and the synthetic composites were characterized using XRD, SEM, and TEM techniques. The synthetic composites were used as adsorbents for Fe and Mn ions from aqueous solutions and natural underground water. Adsorption by both composites is highly pH dependent and achieves high efficiency within the neutral pH range. The experimental adsorption data for the uptake of Fe and Mn ions by K/CNTs were found to be well represented by the pseudo-second-order kinetic model rather than the intra-particle diffusion model or the Elovich model. For the adsorption using K/starch, the uptake of Fe ions was well fitted by the pseudo-second-order model, whereas the uptake of Mn ions fitted the Elovich model better than the pseudo-second-order and intra-particle diffusion models. The equilibrium studies revealed excellent fitting of the removal of Fe and Mn ions by K/CNTs, and of Fe by K/starch, with the Langmuir isotherm model rather than the Freundlich and Temkin models. The adsorption of Mn ions by K/starch, however, is better fitted by the Freundlich model than by the Temkin and Langmuir isotherm models. The thermodynamic studies reflected the endothermic nature and the exothermic nature of the adsorption by K/CNTs and K/starch nanocomposites, respectively. Natural ground water contaminated by 0.4 mg/L Fe and 0.5 mg/L Mn was treated at the optimum conditions of pH 6 and 120 min contact time. Under these conditions, 92.5 and 72.5% Fe removal efficiencies were achieved using 20 mg of K/CNTs and K/starch nanocomposites, respectively. Also, the K/CNTs nanocomposite shows higher efficiency in the removal of Mn ions as compared to the K/starch nanocomposite.
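A minimal sketch of the pseudo-second-order screening step, using the linearized form t/q_t = 1/(k2·qe²) + t/qe; the uptake series below is a hypothetical stand-in for the measured Fe/Mn data:

```python
import numpy as np

t = np.array([5.0, 10, 20, 40, 60, 90, 120])        # contact time, min
q = np.array([2.1, 3.4, 4.6, 5.5, 5.8, 6.0, 6.1])   # uptake at time t, mg/g

slope, intercept = np.polyfit(t, t / q, 1)  # regress t/q on t
qe = 1.0 / slope                            # equilibrium capacity, mg/g
k2 = slope ** 2 / intercept                 # rate constant, g/(mg*min)
print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg min)")
```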
An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
NASA Astrophysics Data System (ADS)
Gao, Hua; Ho, Luis C.
2017-08-01
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.
Using stereophotogrammetric technology for obtaining intraoral digital impressions of implants.
Pradíes, Guillermo; Ferreiroa, Alberto; Özcan, Mutlu; Giménez, Beatriz; Martínez-Rus, Francisco
2014-04-01
The procedure for making impressions of multiple implants continues to be a challenge, despite the various techniques proposed to date. The authors' objective in this case report is to describe a novel digital impression method for multiple implants involving the use of stereophotogrammetric technology. The authors present three cases of patients who had multiple implants in which the impressions were obtained with this technology. Initially, a stereo camera with an infrared flash detects the position of special flag abutments screwed into the implants. This process is based on registering the x, y and z coordinates of each implant and the distances between them. This information is converted into a stereolithographic (STL) file. To add the soft-tissue information, the user must obtain another STL file by using an intraoral or extraoral scanner. In the first case presented, this information was acquired from the plaster model with an extraoral scanner; in the second case, from a Digital Imaging and Communication in Medicine (DICOM) file of the plaster model obtained with cone-beam computed tomography; and in the third case, through an intraoral digital impression with a confocal scanner. In the three cases, the frameworks manufactured from this technique showed a correct clinical passive fit. At follow-up appointments held six, 12 and 24 months after insertion of the prosthesis, no complications were reported. Stereophotogrammetric technology is a viable, accurate and easy technique for making multiple implant impressions. Clinicians can use stereophotogrammetric technology to acquire reliable digital master models as a first step in producing frameworks with a correct passive fit.
Response Surface Methods For Spatially-Resolved Optical Measurement Techniques
NASA Technical Reports Server (NTRS)
Danehy, P. M.; Dorrington, A. A.; Cutler, A. D.; DeLoach, R.
2003-01-01
Response surface methods (or methodology), RSM, have been applied to improve data quality for two vastly different spatially-resolved optical measurement techniques. In the first application, modern design of experiments (MDOE) methods, including RSM, are employed to map the temperature field in a direct-connect supersonic combustion test facility at NASA Langley Research Center. The laser-based measurement technique known as coherent anti-Stokes Raman spectroscopy (CARS) is used to measure temperature at various locations in the combustor. RSM is then used to develop temperature maps of the flow. Even though the temperature fluctuations at a single point in the flowfield have a standard deviation on the order of 300 K, RSM provides analytic fits to the data having 95% confidence interval half-width uncertainties in the fit as low as +/- 30 K. Methods of optimizing future CARS experiments are explored. The second application of RSM is to quantify the shape of a 5-meter diameter, ultra-lightweight, inflatable space antenna at NASA Langley Research Center. Photogrammetry is used to simultaneously measure the shape of the antenna at approximately 500 discrete spatial locations. RSM allows an analytic model to be developed that describes the shape of the majority of the antenna with an uncertainty of 0.4 mm, with 95% confidence. This model would allow a quantitative comparison between the actual shape of the antenna and the original design shape. Accurately determining this shape also allows confident interpolation between the measured points. Such a model could, for example, be used for ray tracing of radio-frequency waves up to 95 GHz to predict the performance of the antenna.
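A minimal sketch of the generic step behind both applications: an ordinary least squares fit of a full second-order response surface to scattered point measurements. The coordinates and temperatures below are hypothetical stand-ins for the CARS data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)                    # normalized spatial coordinates
y = rng.uniform(-1, 1, 60)
T = (1800 + 250 * x - 120 * y - 90 * x * y
     - 200 * x ** 2 + rng.normal(0, 50, 60))  # temperature "measurements", K

# Design matrix for T ~ b0 + b1*x + b2*y + b3*x*y + b4*x^2 + b5*y^2
A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
coef, *_ = np.linalg.lstsq(A, T, rcond=None)
print(dict(zip(["1", "x", "y", "xy", "x^2", "y^2"], coef.round(1))))
```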
Hwang, Dae-Hee; Shetty, Gautam M; Kim, Jong In; Kwon, Jae Ho; Song, Jae-Kwang; Muñoz, Michael; Lee, Jun Seop; Nha, Kyung-Wook
2013-01-01
The purpose of this prospective, randomized, computed tomography-based study was to investigate whether the press-fit technique reduces tunnel volume enlargement (TVE) and improves the clinical outcome after anterior cruciate ligament reconstruction at a minimum follow-up of 1 year compared with the conventional technique. Sixty-nine patients undergoing primary ACL reconstruction using hamstring autografts were randomly allocated to either the press-fit technique group (group A) or the conventional technique group (group B). All patients were evaluated for TVE and tunnel widening using computed tomography scanning, for functional outcome using International Knee Documentation Committee and Lysholm scores, for rotational stability using the pivot-shift test, and for anterior laxity using the KT-2000 arthrometer at a minimum of 1-year follow-up. There were no significant differences in TVE between the 2 groups. In group A, in which the press-fit technique was used, mean volume enlargement in the femoral tunnel was 65% compared with 71.5% in group B (P = .84). In group A, 57% (20 of 35) of patients developed femoral TVE compared with 67% (23 of 34) of patients in group B (P = .27). Both groups showed no significant difference in functional outcome (mean Lysholm score P = .73, International Knee Documentation Committee score P = .15) or knee laxity (anterior P = .78, rotational P = .22) at a minimum follow-up of 1 year. In a comparison of press-fit and conventional techniques, there were no significant differences in TVE and clinical outcome at short-term follow-up. Level II, therapeutic study, prospective randomized clinical trial. Copyright © 2013 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Fitting multidimensional splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
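A minimal sketch of the backward knot-elimination idea in modern terms, assuming scipy in place of the report's FORTRAN programs: fit a cubic B-spline on a dense interior-knot set, then repeatedly drop the knot whose removal raises the residual sum of squares the least, stopping by an illustrative ad hoc threshold:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.sin(x) + rng.normal(0, 0.1, x.size)

knots = list(np.linspace(1, 9, 17))          # dense initial interior knots
while len(knots) > 2:
    current_rss = LSQUnivariateSpline(x, y, knots, k=3).get_residual()
    trials = [LSQUnivariateSpline(x, y, knots[:i] + knots[i + 1:], k=3).get_residual()
              for i in range(len(knots))]
    best = int(np.argmin(trials))
    if trials[best] > 1.1 * current_rss:     # ad hoc stopping rule for illustration
        break
    del knots[best]                          # eliminate the cheapest knot
print(f"{len(knots)} interior knots retained")
```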
Janneck, Robby; Vercesi, Federico; Heremans, Paul; Genoe, Jan; Rolin, Cedric
2016-09-01
A model that describes solvent evaporation dynamics in meniscus-guided coating techniques is developed. In combination with a single fitting parameter, it is shown that this formula can accurately predict a processing window for various coating conditions. Organic thin-film transistors (OTFTs), fabricated by a zone-casting setup, indeed show the best performance at the predicted coating speeds, with mobilities reaching 7 cm² V⁻¹ s⁻¹. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Foveal Curvature and Asymmetry Assessed Using Optical Coherence Tomography.
VanNasdale, Dean A; Eilerman, Amanda; Zimmerman, Aaron; Lai, Nicky; Ramsey, Keith; Sinnott, Loraine T
2017-06-01
The aims of this study were to use cross-sectional optical coherence tomography imaging and custom curve fitting software to evaluate and model the foveal curvature as a spherical surface, to compare the radius of curvature in the horizontal and vertical meridians, and to test the sensitivity of this technique to anticipated meridional differences. Six 30-degree foveal-centered radial optical coherence tomography cross-section scans were acquired in the right eye of 20 clinically normal subjects. Cross sections were manually segmented, and custom curve fitting software was used to determine the foveal pit radius of curvature using the central 500, 1000, and 1500 μm of the foveal contour. The radius of curvature was compared across the different fitting distances. Root mean square error was used to determine goodness of fit. The radius of curvature was compared between the horizontal and vertical meridians for each fitting distance. The radius of curvature was significantly different when comparing each of the three fitting distances (P < .01 for each comparison). The average radii of curvature were 970 μm (95% confidence interval [CI], 913 to 1028 μm), 1386 μm (95% CI, 1339 to 1439 μm), and 2121 μm (95% CI, 2066 to 2183 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. Root mean square error was also significantly different when comparing each fitting distance (P < .01 for each comparison). The average root mean square errors were 2.48 μm (95% CI, 2.41 to 2.53 μm), 6.22 μm (95% CI, 5.77 to 6.60 μm), and 13.82 μm (95% CI, 12.93 to 14.58 μm) for the 500-, 1000-, and 1500-μm fitting distances, respectively. The radius of curvature differed significantly between the horizontal and vertical meridians only at the 1000- and 1500-μm fitting distances (P < .01 for each), with the horizontal meridian being flatter than the vertical. The foveal contour can be modeled as a sphere with low curve fitting error over a limited distance, and the technique is capable of detecting subtle foveal contour differences between meridians.
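A minimal sketch of one way to recover a radius of curvature from a segmented cross section, using an algebraic (Kasa) circle fit; the contour points below are hypothetical stand-ins for segmented OCT data, and the method is an assumption rather than the study's custom software:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-250, 250, 51)               # lateral position, um (500-um window)
R_true = 1000.0
z = R_true - np.sqrt(R_true ** 2 - x ** 2)   # circular pit profile, depth in um
z += rng.normal(0, 2.5, x.size)              # segmentation noise

# Kasa fit: x^2 + z^2 = 2*xc*x + 2*zc*z + (r^2 - xc^2 - zc^2), linear in unknowns
A = np.column_stack([2 * x, 2 * z, np.ones_like(x)])
b = x ** 2 + z ** 2
(xc, zc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
radius = np.sqrt(c + xc ** 2 + zc ** 2)
print(f"radius of curvature ~ {radius:.0f} um")
```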
Three-dimensional accuracy of different impression techniques for dental implants
Nakhaei, Mohammadreza; Madani, Azam S; Moraditalab, Azizollah; Haghi, Hamidreza Rajati
2015-01-01
Background: Accurate impression making is an essential prerequisite for achieving a passive fit between the implant and the superstructure. The aim of this in vitro study was to compare the three-dimensional accuracy of open-tray and three closed-tray impression techniques. Materials and Methods: Three acrylic resin mandibular master models with four parallel implants were used: Biohorizons (BIO), Straumann tissue-level (STL), and Straumann bone-level (SBL). Forty-two putty/wash polyvinyl siloxane impressions of the models were made using open-tray and closed-tray techniques. Closed-tray impressions were made using snap-on (STL model), transfer coping (TC) (BIO model) and TC plus plastic cap (TC-Cap) (SBL model). The impressions were poured with type IV stone, and the positional accuracy of the implant analog heads in each dimension (x, y and z axes), and the linear displacement (ΔR) were evaluated using a coordinate measuring machine. Data were analyzed using ANOVA and post-hoc Tukey tests (α = 0.05). Results: The ΔR values of the snap-on technique were significantly lower than those of TC and TC-Cap techniques (P < 0.001). No significant differences were found between closed and open impression techniques for STL in Δx, Δy, Δz and ΔR values (P = 0.444, P = 0.181, P = 0.835 and P = 0.911, respectively). Conclusion: Considering the limitations of this study, the snap-on implant-level impression technique resulted in more three-dimensional accuracy than TC and TC-Cap, but it was similar to the open-tray technique. PMID:26604956
Ripley, Beth; Kelil, Tatiana; Cheezum, Michael K.; Goncalves, Alexandra; Di Carli, Marcelo F.; Rybicki, Frank J.; Steigner, Mike; Mitsouras, Dimitrios; Blankstein, Ron
2017-01-01
Background 3D printing is a promising technique that may have applications in medicine, and there is expanding interest in the use of patient-specific 3D models to guide surgical interventions. Objective To determine the feasibility of using cardiac CT to print individual models of the aortic root complex for transcatheter aortic valve replacement (TAVR) planning as well as to determine the ability to predict paravalvular aortic regurgitation (PAR). Methods This retrospective study included 16 patients (9 with PAR identified on blinded interpretation of post-procedure trans-thoracic echocardiography and 7 age, sex, and valve size-matched controls with no PAR). 3D printed models of the aortic root were created from pre-TAVR cardiac computed tomography data. These models were fitted with printed valves and predictions regarding post-implant PAR were made using a light transmission test. Results Aortic root 3D models were highly accurate, with excellent agreement between annulus measurements made on 3D models and those made on corresponding 2D data (mean difference of −0.34 mm, 95% limits of agreement: ± 1.3 mm). The 3D printed valve models were within 0.1 mm of their designed dimensions. Examination of the fit of valves within patient-specific aortic root models correctly predicted PAR in 6 of 9 patients (6 true positive, 3 false negative) and absence of PAR in 5 of 7 patients (5 true negative, 2 false positive). Conclusions Pre-TAVR 3D-printing based on cardiac CT provides a unique patient-specific method to assess the physical interplay of the aortic root and implanted valves. With additional optimization, 3D models may complement traditional techniques used for predicting which patients are more likely to develop PAR. PMID:26732862
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J². © 2015 John Wiley & Sons Ltd/London School of Economics.
Dynamical Cognitive Models of Social Issues in Russia
NASA Astrophysics Data System (ADS)
Mitina, Olga; Abraham, Fred; Petrenko, Victor
We examine and model dynamics in three areas of social cognition: (1) political transformations within Russia, (2) evaluation of political trends in other countries by Russians, and (3) evaluation of Russian stereotypes concerning women. We try to represent consciousness as vectorfields and trajectories in a cognitive state space. We use psychosemantic techniques that allow definition of the state space and the systematic construction of these vectorfields and trajectories and their portrait from research data. Then we construct models to fit them, using multiple regression methods to obtain linear differential equations. These dynamical models of social cognition fit the data quite well. (1) The political transformations were modeled by a spiral repellor in a two-dimensional space of a democratic-totalitarian factor and a social depression-optimism factor. (2) The evaluation of alien political trends included a flow away from a saddle toward more stable and moderate political regimes in a 2D space of democratic-totalitarian and unstable-stable cognitive dimensions. (3) The gender study showed expectations (attractors) for more liberated, emancipated roles for women in the future.
Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI
Farooq, Hamza; Xu, Junqian; Nam, Jung Who; Keefe, Daniel F.; Yacoub, Essa; Georgiou, Tryphon; Lenglet, Christophe
2016-01-01
Diffusion MRI (dMRI) reveals microstructural features of the brain white matter by quantifying the anisotropic diffusion of water molecules within axonal bundles. Yet, identifying features such as axonal orientation dispersion, density, diameter, etc., in complex white matter fiber configurations (e.g. crossings) has proved challenging. Besides optimized data acquisition and advanced biophysical models, computational procedures to fit such models to the data are critical. However, these procedures have been largely overlooked by the dMRI microstructure community and new, more versatile, approaches are needed to solve complex biophysical model fitting problems. Existing methods are limited to models assuming single fiber orientation, relevant to limited brain areas like the corpus callosum, or multiple orientations but without the ability to extract detailed microstructural features. Here, we introduce a new and versatile optimization technique (MIX), which enables microstructure imaging of crossing white matter fibers. We provide a MATLAB implementation of MIX, and demonstrate its applicability to general microstructure models in fiber crossings using synthetic as well as ex-vivo and in-vivo brain data. PMID:27982056
NASA Astrophysics Data System (ADS)
Chrobak, Ł.; Maliński, M.
2018-06-01
This paper presents a comparison of three nondestructive and contactless techniques used for the determination of the recombination parameters of silicon samples: the photoacoustic method, the modulated free-carrier absorption method, and the photothermal radiometry method. The experimental set-ups used to measure the recombination parameters with these methods, as well as the theoretical models used to interpret the experimental data, are presented and described. The experimental results and their respective fits obtained with these nondestructive techniques are shown and discussed. The values of the recombination parameters obtained with these methods are also presented and compared. The main advantages and disadvantages of the presented methods are discussed.
Modelling a single phase voltage controlled rectifier using Laplace transforms
NASA Technical Reports Server (NTRS)
Kraft, L. Alan; Kankam, M. David
1992-01-01
The development of a 20 kHz AC power system by NASA for large space projects has spurred a need to develop models for the equipment which will be used on these single phase systems. To date, models for the AC source (i.e., inverters) have been developed. It is the intent of this paper to develop a method to model the single phase voltage controlled rectifiers which will be attached to the AC power grid as an interface for connected loads. A modified version of EPRI's HARMFLO program is used as the shell for these models. The results obtained from the model developed in this paper are quite adequate for the analysis of problems such as voltage resonance. The unique technique presented in this paper uses Laplace transforms to determine the harmonic content of the load current of the rectifier rather than a curve fitting technique. The Laplace transforms yield the coefficients of the differential equations which model the line current to the rectifier directly.
NASA Astrophysics Data System (ADS)
Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2017-05-01
A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
NASA Astrophysics Data System (ADS)
Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng
2014-05-01
It is very important to select an appropriate probability distribution in statistical hydrology. A goodness-of-fit test is a statistical method for selecting an appropriate probability model for given data. The probability plot correlation coefficient (PPCC) test, one of the goodness-of-fit tests, was originally developed for the normal distribution. Since then, this test has been widely applied to other probability models. The PPCC test is considered one of the best goodness-of-fit tests because it shows relatively high rejection power. In this study, we focus on PPCC tests for the GEV distribution, which is widely used around the world. For the GEV model, several plotting position formulas have been suggested. However, the PPCC statistics are derived only for the plotting position formulas (Goel and De, In-na and Nguyen, and Kim et al.) in which the skewness coefficient (or shape parameter) is included. Regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte-Carlo simulation. Keywords: Goodness-of-fit test, Probability plot correlation coefficient test, Plotting position, Monte-Carlo simulation. ACKNOWLEDGEMENTS: This research was supported by a grant 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
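A minimal sketch of a PPCC computation for a GEV sample, with the Cunnane plotting position standing in for the shape-dependent formulas the study examines; the data are hypothetical annual maxima:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
sample = np.sort(genextreme.rvs(c=-0.1, loc=100, scale=30, size=40,
                                random_state=rng))

n = sample.size
p = (np.arange(1, n + 1) - 0.4) / (n + 0.2)    # Cunnane plotting positions
shape, loc, scale = genextreme.fit(sample)     # ML fit of the GEV model
quantiles = genextreme.ppf(p, shape, loc=loc, scale=scale)

ppcc = np.corrcoef(sample, quantiles)[0, 1]    # compare to a critical value
print(f"PPCC = {ppcc:.4f}")
```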
NASA Technical Reports Server (NTRS)
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torello, David; Kim, Jin-Yeon; Qu, Jianmin
2015-03-31
This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β₁₁ is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a β₁₁(7075)/β₁₁(2024) ratio of 1.363 agrees well with previous literature and earlier work.
AIDS-related health behavior: coping, protection motivation, and previous behavior.
Van der Velde, F W; Van der Pligt, J
1991-10-01
The purpose of this study was to examine Rogers' protection motivation theory and aspects of Janis and Mann's conflict theory in the context of AIDS-related health behavior. Subjects were 84 heterosexual men and women and 147 homosexual men with multiple sexual partners; LISREL's path-analysis techniques were used to evaluate the goodness of fit of the structural equation models. Protection motivation theory did fit the data but had considerably more explanatory power for heterosexual than for homosexual subjects (49 vs. 22%, respectively). When coping styles were added, different patterns of findings were found among both groups. Adding variables such as social norms and previous behavior increased the explained variance to 73% for heterosexual subjects and to 44% for homosexual subjects. It was concluded that although protection motivation theory did fit the data fairly adequately, expanding the theory with other variables--especially those related to previous behavior--could improve our understanding of AIDS-related health behavior.
A curve fitting method for solving the flutter equation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Cooper, J. L.
1972-01-01
A curve fitting approach was developed to solve the flutter equation for the critical flutter velocity. The psi versus nu curves are approximated by cubic and quadratic equations. The curve fitting technique utilized the first and second derivatives of psi with respect to nu. The method was tested for two structures, one structure being six times the total mass of the other structure. The algorithm never showed any tendency to diverge from the solution. The average time for the computation of a flutter velocity was 3.91 seconds on an IBM Model 50 computer for an accuracy of five per cent. For values of nu close to the critical root of the flutter equation the algorithm converged on the first attempt. The maximum number of iterations for convergence to the critical flutter velocity was five with an assumed value of nu relatively distant from the actual crossover.
Serroukh, Sonia; Huber, Patrick; Lallam, Abdelaziz
2018-01-19
Inverse liquid chromatography is a technique for studying solid/liquid interactions and, more specifically, for determining solute adsorption isotherms. For the first time, the adsorption behaviour of microfibrillated cellulose was assessed using inverse liquid chromatography. We showed that microfibrillated cellulose could adsorb 17 mg/g of tetrasulfonated optical brightening agent in typical papermaking conditions. The adsorbed amount of hexasulfonated optical brightening agent was lower (7 mg/g). The packing of the column with microfibrillated cellulose caused substantial axial dispersion (Da = 5×10⁻⁷ m²/s). Simulation of transport phenomena in the column showed that neglecting axial dispersion in the analysis of the chromatogram caused significant error (8%) in the determination of the maximum adsorbed amount. We showed that conventional chromatogram analysis techniques such as elution by characteristic point could not be used to fit our data. Using a bi-Langmuir isotherm model improved the fitting, but did not take into account axial dispersion, and thus provided adsorption parameters which may have no physical significance. Using an inverse method with a single Langmuir isotherm, and fitting the transport equation to the chromatogram, was shown to provide a satisfactory fit to the chromatogram data. In general, the inverse method can be recommended for analysing inverse liquid chromatography data for column packings with significant axial dispersion (Da > 1×10⁻⁷ m²/s). Copyright © 2017 Elsevier B.V. All rights reserved.
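A minimal sketch of the single-Langmuir ingredient of the inverse method, q(C) = qmax·K·C/(1 + K·C), fitted to hypothetical equilibrium data; the study itself fits the full transport equation, including axial dispersion, to the chromatogram rather than isolated isotherm points:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qmax, K):
    # Adsorbed amount (mg/g) at equilibrium concentration C (mg/L)
    return qmax * K * C / (1.0 + K * C)

C = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])    # mg/L, hypothetical
q = np.array([1.5, 3.8, 8.2, 12.9, 15.8, 16.8])   # mg/g, hypothetical

(qmax, K), _ = curve_fit(langmuir, C, q, p0=(17.0, 5.0))
print(f"qmax = {qmax:.1f} mg/g, K = {K:.1f} L/mg")
```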
NASA Astrophysics Data System (ADS)
Mede, Kyle; Brandt, Timothy D.
2017-03-01
We present the Exoplanet Simple Orbit Fitting Toolbox (ExoSOFT), a new, open-source suite to fit the orbital elements of planetary or stellar-mass companions to any combination of radial velocity and astrometric data. To explore the parameter space of Keplerian models, ExoSOFT may be operated with its own multistage sampling approach or interfaced with third-party tools such as emcee. In addition, ExoSOFT is packaged with a collection of post-processing tools to analyze and summarize the results. Although only a few systems have been observed with both radial velocity and direct imaging techniques, this number will increase, thanks to upcoming spacecraft and ground-based surveys. Providing both forms of data enables simultaneous fitting that can help break degeneracies in the orbital elements that arise when only one data type is available. The dynamical mass estimates this approach can produce are important when investigating the formation mechanisms and subsequent evolution of substellar companions. ExoSOFT was verified through fitting to artificial data and was implemented using the Python and Cython programming languages; it is available for public download at https://github.com/kylemede/ExoSOFT under GNU General Public License v3.
Oscillation mechanics of the respiratory system.
Bates, Jason H T; Irvin, Charles G; Farré, Ramon; Hantos, Zoltán
2011-07-01
The mechanical impedance of the respiratory system defines the pressure profile required to drive a unit of oscillatory flow into the lungs. Impedance is a function of oscillation frequency, and is measured using the forced oscillation technique. Digital signal processing methods, most notably the Fourier transform, are used to calculate impedance from measured oscillatory pressures and flows. Impedance is a complex function of frequency, having both real and imaginary parts that vary with frequency in ways that can be used empirically to distinguish normal lung function from a variety of different pathologies. The most useful diagnostic information is gained when anatomically based mathematical models are fit to measurements of impedance. The simplest such model consists of a single flow-resistive conduit connecting to a single elastic compartment. Models of greater complexity may have two or more compartments, and provide more accurate fits to impedance measurements over a variety of different frequency ranges. The model that currently enjoys the widest application in studies of animal models of lung disease consists of a single airway serving an alveolar compartment comprising tissue with a constant-phase impedance. This model has been shown to fit very accurately to a wide range of impedance data, yet contains only four free parameters, and as such is highly parsimonious. The measurement of impedance in human patients is also now rapidly gaining acceptance, and promises to provide a more comprehensible assessment of lung function than parameters derived from conventional spirometry. © 2011 American Physiological Society.
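A minimal sketch of fitting the simplest model named above, a single resistive conduit (R) serving a single elastic compartment (E), so that Z(ω) = R + E/(jω); the impedance values are hypothetical and inertance is neglected:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.array([0.5, 1, 2, 4, 8, 16.0])          # oscillation frequencies, Hz
w = 2 * np.pi * f
Z = 2.0 + 20.0 / (1j * w)                      # synthetic impedance, cmH2O.s/L
Z += rng.normal(0, 0.05, f.size)               # measurement noise (real part)

# Z = R - j*E/w is linear in (R, E): stack real and imaginary parts
A = np.vstack([np.column_stack([np.ones_like(w), np.zeros_like(w)]),
               np.column_stack([np.zeros_like(w), -1.0 / w])])
b = np.concatenate([Z.real, Z.imag])
(R, E), *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"R = {R:.2f} cmH2O.s/L, E = {E:.1f} cmH2O/L")
```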
Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M
2016-05-01
Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, the validation of the assumptions of the multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study is to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) that were applied in analytical observational studies published between 2003 and 2014 by journals indexed in MEDLINE. Review of a representative sample of articles indexed in MEDLINE (n = 428) with observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting about: model assumptions and goodness-of-fit, interactions, sensitivity analysis, crude and adjusted effect estimate, and specification of more than 1 adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0-30.3) of the articles and 18.5% (95% CI: 14.8-22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the model selected as an adjustment method. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required in reporting the use of MRMs in the scientific literature.
Kim, Hyong Nyun; Liu, Xiao Ning; Noh, Kyu Cheol
2015-06-10
Open reduction and plate fixation is the standard operative treatment for displaced midshaft clavicle fractures. However, it is sometimes difficult to achieve anatomic reduction with the open reduction technique in cases with comminution. We describe a novel technique using a real-size three-dimensionally (3D) printed clavicle model as a preoperative and intraoperative tool for minimally invasive plating of displaced comminuted midshaft clavicle fractures. A computed tomography (CT) scan is taken of both clavicles in patients with a unilateral displaced comminuted midshaft clavicle fracture. Both clavicles are 3D printed into real-size clavicle models. Using the mirror-imaging technique, the uninjured clavicle is 3D printed as its mirror image to produce a suitable replica of the fractured clavicle pre-injury. The 3D-printed fractured clavicle model allows the surgeon to observe and manipulate an accurate anatomical replica of the fractured bone to assist in fracture reduction prior to surgery. The 3D-printed uninjured clavicle model can be utilized as a template to select the anatomically precontoured locking plate which best fits the model. The plate can be inserted through a small incision and fixed with locking screws without exposing the fracture site. Seven comminuted clavicle fractures treated with this technique achieved good bone union. This technique can be used for a unilateral displaced comminuted midshaft clavicle fracture when it is difficult to achieve anatomic reduction by the open reduction technique. Level of evidence V.
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.; Silk, Joseph; Vittorio, Nicola
1992-01-01
A new technique is used to compute the correlation function for large-angle cosmic microwave background anisotropies resulting from both the space and time variations in the gravitational potential in flat, vacuum-dominated, cold dark matter cosmological models. Such models, with Ω₀ of about 0.2, fit the excess power, relative to the standard cold dark matter model, observed in the large-scale galaxy distribution and allow a high value for the Hubble constant. The low order multipoles and quadrupole anisotropy that are potentially observable by COBE and other ongoing experiments should definitively test these models.
Automation of reverse engineering process in aircraft modeling and related optimization problems
NASA Technical Reports Server (NTRS)
Li, W.; Swetits, J.
1994-01-01
During 1994, engineering problems in aircraft modeling were studied. The initial concern was to obtain a surface model with desirable geometric characteristics. Much of the effort during the first half of the year was to find an efficient way of solving a computationally difficult optimization model. Since the smoothing technique in the proposal 'Surface Modeling and Optimization Studies of Aerodynamic Configurations' requires solutions of a sequence of large-scale quadratic programming problems, it is important to design algorithms that can solve each quadratic program in a few iterations. This research led to three papers by Dr. W. Li, which were submitted to SIAM Journal on Optimization and Mathematical Programming. Two of these papers have been accepted for publication. Even though significant progress was made during this phase of research and computation time was reduced from 30 min to 2 min for a sample problem, it was not good enough for on-line processing of digitized data points. After discussion with Dr. Robert E. Smith Jr., it was decided not to enforce shape constraints in order to simplify the model. As a consequence, P. Dierckx's nonparametric spline fitting approach was adopted, in which there is only one control parameter for the fitting process: the error tolerance. At the same time the surface modeling software developed by Imageware was tested. Research indicated that a substantially improved fit of digitized data points can be achieved if a proper parameterization of the spline surface is chosen. A winning strategy is to incorporate Dierckx's surface fitting with a natural parameterization for aircraft parts. The report consists of 4 chapters. Chapter 1 provides an overview of reverse engineering related to aircraft modeling and some preliminary findings of the effort in the second half of the year. Chapters 2-4 are the research results by Dr. W. Li on penalty functions and conjugate gradient methods for quadratic programming problems.
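A minimal sketch of the Dierckx-style fitting idea: scipy's bisplrep wraps Dierckx's FITPACK surface-fitting routines, in which the smoothing factor s plays the role of the single error-tolerance control parameter mentioned above. The data and parameter values below are invented for illustration.

```python
# Sketch of Dierckx-style nonparametric surface fitting via scipy's
# FITPACK wrappers; the smoothing factor s is the error-tolerance knob.
import numpy as np
from scipy import interpolate

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)
z = np.sin(2 * np.pi * x) * np.cos(np.pi * y) + 0.01 * rng.standard_normal(500)

# Larger s -> smoother surface, looser fit; smaller s -> tighter fit.
tck = interpolate.bisplrep(x, y, z, s=0.5)

# Evaluate the fitted spline surface on a regular grid.
xg = yg = np.linspace(0, 1, 50)
zg = interpolate.bisplev(xg, yg, tck)
print(zg.shape)  # (50, 50)
```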
Draenert, Florian Guy; Huetzen, Dominic; Kämmerer, Peer; Wagner, Wilfried
2011-09-01
Bone transplants are mostly prepared with cutting drills, chisels, and rasps. These techniques are difficult for inexperienced surgeons, and the implant interface is less precise due to unstandardized preparation. Cylindrical bone transplants are a known alternative; current techniques include fixation with osteosynthesis screws or the dental implant itself. A new bone cylinder transplant technique is presented that uses a twin-drill principle to produce a customized press fit of the transplant without fixation devices, combined with the superior grinding properties of a diamond coating. New cylindrical diamond hollow drills were used for customized press-fit bone transplants in a case series of five patients for socket reconstruction in the anterior and molar regions of the maxilla and mandible, with and without simultaneous implant placement. The technical approach was successful without intra- or postoperative complications during the acute healing phase. The customized press fit completes a technological triad of bone cylinder transplant techniques, adding to the assisted press fit with either osteosynthesis screws or the dental implant itself. © 2009 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Suttles, J. T.; Sullivan, E. M.; Margolis, S. B.
1974-01-01
Curve-fit formulas are presented for the stagnation-point radiative heating rate, cooling factor, and shock standoff distance for inviscid flow over blunt bodies at conditions corresponding to high-speed earth entry. The data which were curve fitted were calculated by using a technique which utilizes a one-strip integral method and a detailed nongray radiation model to generate a radiatively coupled flow-field solution for air in chemical and local thermodynamic equilibrium. The ranges of free-stream parameters considered were altitudes from about 55 to 70 km and velocities from about 11 to 16 km/sec. Spherical bodies with nose radii from 30 to 450 cm and elliptical bodies with major-to-minor axis ratios of 2, 4, and 6 were treated. Power-law formulas are proposed, and a least-squares logarithmic fit is used to evaluate the constants. It is shown that the data can be described in this manner with an average deviation of about 3 percent (or less) and a maximum deviation of about 10 percent (or less). The curve-fit formulas provide an effective and economic means for making preliminary design studies for situations involving high-speed earth entry.
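The least-squares logarithmic fitting step can be illustrated as follows: a power-law formula is linear in log space, so its constants follow from an ordinary linear least-squares solve. The functional form, variable names, and synthetic data below are illustrative assumptions, not the paper's actual formulas.

```python
# Illustrative power-law curve fit of the form q = C * V**a * R**b,
# with constants found by least squares in log space. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
V = rng.uniform(11, 16, 200)        # entry velocity, km/s
R = rng.uniform(30, 450, 200)       # nose radius, cm
q = 5e-9 * V**8.5 * R**0.6 * (1 + 0.02 * rng.standard_normal(200))

# Linear system in log space: log q = log C + a*log V + b*log R.
A = np.column_stack([np.ones_like(V), np.log(V), np.log(R)])
coef, *_ = np.linalg.lstsq(A, np.log(q), rcond=None)
C, a, b = np.exp(coef[0]), coef[1], coef[2]
print(f"C={C:.3g}, a={a:.3f}, b={b:.3f}")
```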
Mahlke, C; Hernando, D; Jahn, C; Cigliano, A; Ittermann, T; Mössler, A; Kromrey, ML; Domaska, G; Reeder, SB; Kühn, JP
2016-01-01
Purpose: To investigate the feasibility of estimating the proton-density fat fraction (PDFF) using a 7.1 Tesla magnetic resonance imaging (MRI) system and to compare the accuracy of liver fat quantification using different fitting approaches. Materials and Methods: Fourteen leptin-deficient ob/ob mice and eight intact controls were examined in a 7.1 Tesla animal scanner using a 3-dimensional six-echo chemical shift-encoded pulse sequence. Confounder-corrected PDFF was calculated using magnitude fitting (magnitude data alone) and combined fitting (complex and magnitude data). Differences between fitting techniques were compared using Bland-Altman analysis. In addition, PDFFs derived with both reconstructions were correlated with histopathological fat content and triglyceride mass fraction using linear regression analysis. Results: The PDFFs determined with use of both reconstructions correlated very strongly (r=0.91). However, a small mean bias between reconstructions demonstrated divergent results (3.9%; CI 2.7%-5.1%). For both reconstructions, there was linear correlation with histopathology (combined fitting: r=0.61; magnitude fitting: r=0.64) and triglyceride content (combined fitting: r=0.79; magnitude fitting: r=0.70). Conclusion: Liver fat quantification using the PDFF derived from MRI performed at 7.1 Tesla is feasible. PDFF has strong correlations with histopathologically determined fat and with triglyceride content. However, small differences between PDFF reconstruction techniques may impair the robustness and reliability of the biomarker at 7.1 Tesla. PMID:27197806
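For readers unfamiliar with Bland-Altman analysis, a minimal sketch: the bias and 95% limits of agreement are computed from the paired differences of the two reconstructions. The arrays below are simulated stand-ins, not the study's data.

```python
# Minimal Bland-Altman comparison of two PDFF reconstructions;
# all values are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(2)
pdff_magnitude = rng.uniform(2, 60, 22)                      # % fat fraction
pdff_combined = pdff_magnitude + 3.9 + rng.normal(0, 2, 22)  # simulated bias

diff = pdff_combined - pdff_magnitude
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement
print(f"bias={bias:.1f}%, limits=[{bias - loa:.1f}, {bias + loa:.1f}]%")
```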
Niskanen, Ilpo; Peiponen, Kai-Erik; Räty, Jukka
2010-05-01
Using a multifunction spectrophotometer, the refractive index of a pigment can be estimated by measuring the backscattering of light from the pigment in immersion liquids having slightly different refractive indices. A simple theoretical Gaussian function model related to the optical path distribution is introduced that makes it possible to describe quantitatively the backscattering signal from transparent pigments using a set of only a few immersion liquids. With the aid of the data fitting by a Gaussian function, the measurement time of the refractive index of the pigment can be reduced. The backscattering measurement technique is suggested to be useful in industrial measurement environments of pigments.
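A hedged sketch of the fitting idea: the backscattering signal measured in a handful of immersion liquids is fitted with a Gaussian-shaped dip whose center estimates the pigment's refractive index (the index-matching point). The dip form, values, and initial guesses are our assumptions; the paper's exact Gaussian model may differ.

```python
# Sketch: estimate a pigment's refractive index from backscatter measured
# in a few immersion liquids by fitting a Gaussian dip (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def gaussian_dip(n, n0, depth, width, base):
    # Backscatter is lowest where the liquid index matches the pigment's.
    return base - depth * np.exp(-((n - n0) / width) ** 2)

n_liquid = np.array([1.49, 1.51, 1.53, 1.55, 1.57])   # immersion liquid indices
signal = np.array([0.82, 0.55, 0.20, 0.58, 0.85])     # backscatter (a.u.)

popt, _ = curve_fit(gaussian_dip, n_liquid, signal,
                    p0=[1.53, 0.7, 0.02, 0.9])
print(f"estimated pigment index: {popt[0]:.3f}")
```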
Impact of digital impression techniques on the adaption of ceramic partial crowns in vitro.
Schaefer, Oliver; Decker, Mike; Wittstock, Frank; Kuepper, Harald; Guentsch, Arndt
2014-06-01
To investigate the effects digital impression procedures can have on the three-dimensional fit of ceramic partial crowns in vitro. An acrylic model of a mandibular first molar was prepared to receive a partial coverage all-ceramic crown (mesio-occlusal-distal inlay preparation with reduction of all cusps and rounded shoulder finish line of the buccal wall). Digital impressions were taken using iTero (ITE), cara TRIOS (TRI), CEREC AC with Bluecam (CBC), and Lava COS (COS) systems, before restorations were designed and machined from lithium disilicate blanks. Both the preparation and the restorations were digitised using an optical reference scanner. Data were entered into quality inspection software, which superimposed the records (best-fit algorithm), calculated fit discrepancies for every pixel, and colour-coded the results to aid visualisation. Furthermore, root-mean-square (RMS) deviations were computed and analysed statistically with a one-way ANOVA. Scheffé's procedure was applied for multiple comparisons (n=5, α=0.05). Mean marginal (internal) discrepancies were: ITE 90 (92) μm, TRI 128 (106) μm, CBC 146 (84) μm, and COS 109 (93) μm. Differences among impression systems were statistically significant at p<0.001 (p=0.039). Qualitatively, partial crowns were undersized especially around cusp tips or the occluso-approximal isthmus. By contrast, potential high spots could be detected along the preparation finish line and at central occlusal boxes. Marginal and internal fit of milled lithium disilicate partial crowns depended on the employed digital impression technique. The investigated digital impression procedures demonstrated significant fit discrepancies. However, all fabricated restorations showed acceptable marginal and internal gap sizes when considering clinically relevant thresholds reported in the literature. Copyright © 2014 Elsevier Ltd. All rights reserved.
Papaspyridakos, Panos; Hirayama, Hiroshi; Chen, Chun-Jung; Ho, Chung-Han; Chronopoulos, Vasilios; Weber, Hans-Peter
2016-09-01
The aim of this study was to assess the effect of connection type and impression technique on the accuracy of fit of implant-supported fixed complete-arch dental prostheses (IFCDPs). An edentulous mandibular cast with five implants was fabricated to serve as master cast (control) for both implant- and abutment-level baselines. A titanium one-piece framework for an IFCDP was milled at abutment level and used for accuracy-of-fit measurements. Polyether impressions were made using a splinted and a non-splinted technique at the implant and abutment level, leading to four test groups, n = 10 each. Hence, four groups of test casts were generated. The impression accuracy was evaluated indirectly by assessing the fit of the IFCDP framework on the generated casts of the test groups, clinically and radiographically. Additionally, the control and all test casts were digitized with a high-resolution reference scanner (IScan D103i, Imetric, Courgenay, Switzerland), and standard tessellation language datasets were generated and superimposed. Potential correlations between the clinical accuracy-of-fit data and the data from the digital scanning were investigated. To compare the accuracy of casts of the test groups versus the control at the implant and abutment level, Fisher's exact test was used. Of the 10 casts of test group I (implant-level splint), all 10 presented with accurate clinical fit when the framework was seated on its respective cast, while only five of 10 casts of test group II (implant-level non-splint) showed adequate fit. All casts of group III (abutment-level splint) presented with accurate fit, whereas nine of 10 casts of test group IV (abutment-level non-splint) were accurate. Significant 3D deviations (P < 0.05) were found between group II and the control. No statistically significant differences were found between groups I, III, and IV compared with the control. Implant connection type (implant level vs. abutment level) and impression technique affected the 3D accuracy of implant impressions only with the non-splint technique (P < 0.05). For one-piece IFCDPs, the implant-level splinted impression technique was shown to be more accurate than the non-splinted approach, whereas at the abutment level no difference in accuracy was found. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
What do you do when you have a loose screw?
Brady, Paul C; Arrigoni, Paolo; Burkhart, Stephen S
2006-09-01
This study seeks to compare the pullout strength of various anchor configurations in an osteoporotic bone model. We have tested and present here a technique designed to augment the pullout resistance of an anchor in poor-quality bone with the use of a second anchor as an interference fit; this report describes our in vivo results with this procedure. Four groups of suture anchor constructs were tested. These included a single 5.0-mm Bio-Corkscrew (Arthrex, Naples, FL) (group I), a single 5.5-mm Bio-Corkscrew FT (fully threaded; Arthrex, Naples, FL) (group II), a single 6.5-mm Bio-Corkscrew (Arthrex, Naples, FL) (group III), and an interference fit of two 5.0-mm Bio-Corkscrew suture anchors (group IV). Anchors were secured in a 10-lb/ft³ polyurethane foam block to simulate osteoporotic bone. Each construct was cycled, then was pulled to failure with an Instron testing device (Instron, Canton, MA); measurements regarding cyclic displacement, yield load, and extension at yield load were recorded. During the in vivo portion of the study, the interference fit technique was performed in 18 shoulder arthroscopy cases in which a loose screw was a matter of concern. After the technique was performed, both anchors were pulled so their security could be assessed; cuff repair then proceeded normally. Biomechanical study: In terms of yield load, every anchor construct was significantly different from every other construct. Specifically, pullout strength increased significantly as follows: group I was the weakest against pullout (176 ± 13 N), group III (223 ± 17 N) was significantly stronger than group I, group II (247 ± 12 N) was significantly stronger than group III, and, finally, group IV (305 ± 16 N) was significantly stronger than group II. The only statistically significant difference in terms of cyclic displacement was that group IV (1.4 ± 0.2 mm) had significantly less displacement than group III (1.9 ± 0.3 mm). No significant differences in extension at yield load were observed among any of the groups. In vivo study: The interference anchor technique was used in 18 of 24 loose screw situations over a 6-month period. In all 18 of these cases (100%), a stable dual-anchor construct was achieved. All anchors were stable to the tug test, and none failed during knot tying or at any time during the procedure. From the perspective of strength against pullout, the strongest suture construct of those tested in the osteoporotic bone model was the dual-anchor-against-an-anchor interference fit construct. The next strongest anchor tested was the 5.5-mm Bio-Corkscrew FT, followed by the 6.5-mm Bio-Corkscrew, and, finally, the 5.0-mm Bio-Corkscrew. Each group was statistically different from every other group in terms of pullout strength. The interference fit construct was not only the strongest in vitro, but it performed well in the in vivo setting, offering the added benefit of additional sutures to be used for securing a cuff defect. This study gives the arthroscopic surgeon important data for use in planning what to do when a loose screw is encountered. Data from this study may be useful for the arthroscopic surgeon in choosing the proper anchor construct for osteoporotic bone. This study also lends support to the technique of press-fitting an anchor against an anchor in the loose screw situation.
3D Lunar Terrain Reconstruction from Apollo Images
NASA Technical Reports Server (NTRS)
Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Aleksandr V.
2009-01-01
Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.
Frequentist Model Averaging in Structural Equation Modelling.
Jin, Shaobo; Ankargren, Sebastian
2018-06-04
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
NASA Astrophysics Data System (ADS)
Amiri, Nafise; Moradi, Ali; Abolghasem Sajjadi Tabasi, Sayyed; Movaffagh, Jebrail
2018-04-01
Chitosan-collagen composite nanofiber is of great interest to researchers in biomedical fields. Since electrospinning is the most popular method for nanofiber production, having a comprehensive knowledge of the electrospinning process is beneficial. Modeling techniques are precious tools for managing variables in the electrospinning process, prior to the more time-consuming and expensive experimental techniques. In this study, a central composite design of response surface methodology (RSM) was employed to develop a statistical model as well as to define the optimum condition for fabrication of chitosan-collagen nanofiber with minimum diameter. The individual and the interaction effects of applied voltage (10–25 kV), flow rate (0.5–1.5 mL h⁻¹), and needle-to-collector distance (15–25 cm) on the fiber diameter were investigated. ATR-FTIR and cell study were done to evaluate the optimized nanofibers. According to the RSM, a two-factor interaction (2FI) model was the most suitable model. The high regression coefficient value (R² ≥ 0.9666) of the fitted regression model and insignificant lack of fit (P = 0.0715) indicated that the model was highly adequate in predicting chitosan-collagen nanofiber diameter. The optimization process showed that a chitosan-collagen nanofiber diameter of 156.05 nm could be obtained at 9 kV, 0.2 mL h⁻¹, and 25 cm, which was confirmed by experiment (155.92 ± 18.95 nm). The ATR-FTIR and cell study confirmed the structure and biocompatibility of the optimized membrane. The represented model could assist researchers in fabricating chitosan-collagen electrospun scaffolds with a predictable fiber diameter, and the optimized chitosan-collagen nanofibrous mat could be a potential candidate for wound healing and tissue engineering.
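A two-factor-interaction (2FI) response-surface model contains the main effects plus all two-way interactions but no quadratic terms. The sketch below shows that model form with statsmodels; the design points and diameters are invented, not the study's data.

```python
# Sketch of a 2FI response-surface fit relating fiber diameter to voltage,
# flow rate, and distance; all data values are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "voltage":  [10, 25, 10, 25, 10, 25, 10, 25, 17.5, 17.5],
    "flow":     [0.5, 0.5, 1.5, 1.5, 0.5, 0.5, 1.5, 1.5, 1.0, 1.0],
    "distance": [15, 15, 15, 15, 25, 25, 25, 25, 20, 20],
    "diameter": [310, 240, 360, 300, 280, 190, 330, 260, 270, 268],
})

# (a + b + c) ** 2 expands to main effects plus all two-way interactions.
model = smf.ols("diameter ~ (voltage + flow + distance) ** 2", data=df).fit()
print(model.summary())
```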
Sabouhi, Mahmoud; Bajoghli, Farshad; Abolhasani, Majid
2015-01-01
The success of an implant-supported prosthesis is dependent on the passive fit of its framework fabricated on a precise cast. The aim of this in vitro study was to digitally compare the three-dimensional accuracy of implant impression techniques in partially and completely edentulous conditions. The master model simulated two clinical conditions. The first condition was a partially edentulous mandibular arch with an anterior edentulous space (D condition). Two implant analogs were inserted in bilateral canine sites. After elimination of the teeth, the model was converted to a completely edentulous condition (E condition). Three different impression techniques were performed (open splinted [OS], open unsplinted [OU], closed [C]) for each condition. Six groups of casts (DOS, DOU, DC, EOS, EOU, EC) (n = 8), totaling 48 casts, were made. Two scan bodies were secured onto the master edentulous model and onto each test cast and digitized by an optical scanning system. The related scans were superimposed, and the mean discrepancy for each cast was determined. The statistical analysis showed no significant difference in the accuracy of casts as a function of model status (P = .78, analysis of variance [ANOVA] test), impression technique (P = .57, ANOVA test), or as the combination of both (P = .29, ANOVA test). The distribution of data was normal (Kolmogorov-Smirnov test). Model status (dentate or edentulous) and impression technique did not influence the precision of the casts. There is no difference among any of the impression techniques in either simulated clinical condition.
Operant Conditioning in Honey Bees (Apis mellifera L.): The Cap Pushing Response.
Abramson, Charles I; Dinges, Christopher W; Wells, Harrington
2016-01-01
The honey bee has been an important model organism for studying learning and memory. More recently, the honey bee has become a valuable model to understand perception and cognition. However, the techniques used to explore psychological phenomena in honey bees have been limited to only a few primary methodologies, such as the proboscis extension reflex, sting extension reflex, and free-flying target discrimination tasks. Methods to explore operant conditioning in bees and other invertebrates are not as varied as with vertebrates. This may be due to the availability of a suitable response requirement. In this manuscript we offer a new method to explore operant conditioning in honey bees: the cap pushing response (CPR). We used the CPR to test for differences in learning curves between novel auto-shaping and more traditional explicit shaping. The CPR protocol requires bees to exhibit a novel behavior by pushing a cap to uncover a food source. Using the CPR protocol we tested the effects of both explicit-shaping and auto-shaping techniques on operant conditioning. The goodness of fit and lack of fit of these data to the Rescorla-Wagner learning-curve model, widely used in classical conditioning studies, were tested. The model fit well to both control and explicit-shaping results, but only for a limited number of trials: learning ceased rather than continuing to asymptotically approach the physiological maximum possible. Rate of learning differed between shaped and control bee treatments. Learning was about 3 times faster for shaped bees, but on all measures of proficiency control and shaped bees reached the same level. Auto-shaped bees showed one-trial learning rather than an asymptotic approach to maximal efficiency. However, in terms of return time, the auto-shaped bees' learning did not carry over to the covered-well test treatments.
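The Rescorla-Wagner update, V(n+1) = V(n) + αβ(λ − V(n)), has the closed form V(n) = λ(1 − (1 − k)^n) for V(0) = 0 when the two rates are combined into a single k. A hedged sketch of fitting that curve to trial-by-trial performance follows; the data are invented.

```python
# Sketch: fit a Rescorla-Wagner learning curve to performance data,
# using the closed form V(n) = lam * (1 - (1 - k)**n) with V(0) = 0.
import numpy as np
from scipy.optimize import curve_fit

def rescorla_wagner(trial, k, lam):
    return lam * (1 - (1 - k) ** trial)

trials = np.arange(1, 13)
perf = np.array([0.22, 0.40, 0.50, 0.61, 0.66, 0.71,
                 0.74, 0.77, 0.78, 0.79, 0.80, 0.80])  # invented data

(k, lam), _ = curve_fit(rescorla_wagner, trials, perf, p0=[0.2, 0.8],
                        bounds=([0, 0], [1, 1]))
print(f"learning rate k={k:.2f}, asymptote lambda={lam:.2f}")
```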
Alsmadi, Othman M K; Abo-Hammour, Zaer S
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order model, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
Does money matter in inflation forecasting?
NASA Astrophysics Data System (ADS)
Binner, J. M.; Tino, P.; Tepper, J.; Anderson, R.; Jones, B.; Kendall, G.
2010-11-01
This paper provides the most fully comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naïve random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies.
Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.
Schneider, Martin; Iskander, D Robert; Collins, Michael J
2009-02-01
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
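A simplified sketch of the rational-function idea: in one dimension, a ratio of low-order polynomials (standing in for ratios of Zernike expansions) is fitted with the Levenberg-Marquardt algorithm, as in the paper. The surface values and parameterization below are illustrative assumptions only.

```python
# Sketch: Levenberg-Marquardt fit of a 1-D rational function, a simplified
# stand-in for the paper's Zernike-polynomial-based rational functions.
import numpy as np
from scipy.optimize import least_squares

def rational(c, r):
    num = c[0] + c[1] * r + c[2] * r**2
    den = 1.0 + c[3] * r + c[4] * r**2   # leading 1 fixes the overall scale
    return num / den

def residuals(c, r, h):
    return rational(c, r) - h

r = np.linspace(0, 1, 100)
h = (7.8 - 0.3 * r**2) / (1 + 0.15 * r**2)        # synthetic corneal height
h += 0.001 * np.random.default_rng(3).standard_normal(r.size)

fit = least_squares(residuals, x0=np.zeros(5), args=(r, h), method="lm")
print(fit.x)
```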
Development and analysis of a twelfth degree and order gravity model for Mars
NASA Technical Reports Server (NTRS)
Christensen, E. J.; Balmino, G.
1979-01-01
Satellite geodesy techniques previously applied to artificial earth satellites have been extended to obtain a high-resolution gravity field for Mars. Two-way Doppler data collected by 10 Deep Space Network (DSN) stations during Mariner 9 and Viking 1 and 2 missions have been processed to obtain a twelfth degree and order spherical harmonic model for the martian gravitational potential. The quality of this model was evaluated by examining the rms residuals within the fit and the ability of the model to predict the spacecraft state beyond the fit. Both indicators show that more data and higher degree and order harmonics will be required to further refine our knowledge of the martian gravity field. The model presented shows much promise, since it resolves local gravity features which correlate highly with the martian topography. An isostatic analysis based on this model, as well as an error analysis, shows rather complete compensation on a global (long wavelength) scale. Though further model refinements are necessary to be certain, local (short wavelength) features such as the shield volcanos in Tharsis appear to be uncompensated. These are interpreted to place some bounds on the internal structure of Mars.
The 3XMM spectral fit database
NASA Astrophysics Data System (ADS)
Georgantopoulos, I.; Corral, A.; Watson, M.; Carrera, F.; Webb, N.; Rosen, S.
2016-06-01
I will present the XMMFITCAT database, which is a spectral fit inventory of the sources in the 3XMM catalogue. Spectra are made available by the XMM/SSC for all 3XMM sources which have more than 50 background-subtracted counts per module. This work is funded in the framework of the ESA Prodex project. The 3XMM catalogue currently covers 877 sq. degrees and contains about 400,000 unique sources. Spectra are available for over 120,000 sources. Spectral fits have been performed with various spectral models. The results are available on the web page http://xraygroup.astro.noa.gr/ and also at the University of Leicester LEDAS database webpage ledas-www.star.le.ac.uk/. The database description as well as some science results in the area of overlap with SDSS are presented in two recent papers: Corral et al. 2015, A&A, 576, 61 and Corral et al. 2014, A&A, 569, 71. At least for extragalactic sources, the spectral fits will acquire added value when photometric redshifts become available. In the framework of a new Prodex project we have been funded to derive photometric redshifts for the 3XMM sources using machine learning techniques. I will present the techniques as well as the optical near-IR databases that will be used.
ERIC Educational Resources Information Center
Golino, Hudson F.; Gomes, Cristiano M. A.
2016-01-01
This paper presents a non-parametric imputation technique, named random forest, from the machine learning field. The random forest procedure has two main tuning parameters: the number of trees grown in the prediction and the number of predictors used. Fifty experimental conditions were created in the imputation procedure, with different…
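A minimal sketch of random-forest imputation in the missForest style, assuming a single pass in which the variable with missing entries is predicted from the complete variables; the data and the 20% missingness rate are invented.

```python
# Sketch of random-forest imputation: the variable with missing values is
# predicted from the other variables using a random forest (one pass).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 4))
X[:, 3] += X[:, 0] * 2 - X[:, 1]            # make column 3 predictable
miss = rng.random(200) < 0.2                # 20% missing in column 3
X_obs = X.copy()
X_obs[miss, 3] = np.nan

# Train on complete rows, then predict the missing entries.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_obs[~miss, :3], X_obs[~miss, 3])
X_obs[miss, 3] = rf.predict(X_obs[miss, :3])
print("imputation RMSE:", np.sqrt(np.mean((X_obs[miss, 3] - X[miss, 3])**2)))
```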
NASA Astrophysics Data System (ADS)
Levay, Z. G.
2004-12-01
A new, freely-available accessory for Adobe's widely-used Photoshop image editing software makes it much more convenient to produce presentable images directly from FITS data. It merges a fully-functional FITS reader with an intuitive user interface and includes fully interactive flexibility in scaling data. Techniques for producing attractive images from astronomy data using the FITS plugin will be presented, including the assembly of full-color images. These techniques have been successfully applied to producing colorful images for public outreach with data from the Hubble Space Telescope and other major observatories. Now it is much less cumbersome for students or anyone not experienced with specialized astronomical analysis software, but reasonably familiar with digital photography, to produce useful and attractive images.
NASA Astrophysics Data System (ADS)
Almurshedi, Ahmed; Ismail, Abd Khamim
2015-04-01
EEG source localization was studied in order to determine the locations of the brain sources responsible for the potentials measured at the scalp electrodes, using EEGLAB with the Independent Component Analysis (ICA) algorithm. Neuronal sources generate current dipoles in different brain states, which give rise to the measured potentials. The current dipole source locations were estimated by fitting an equivalent current dipole model using a non-linear optimization technique with a standardized boundary element head model. To fit dipole models to ICA components in an EEGLAB dataset, ICA decomposition is performed and appropriate components to be fitted are selected. The topographical scalp distributions of delta, theta, alpha, and beta power spectra and the cross-coherence of EEG signals were observed. In the closed-eyes condition, during both resting and action states of the brain, the alpha band was activated over the occipital (O1, O2) and parietal (P3, P4) areas; the parieto-occipital area of the brain is therefore active in both resting and action states. Cross-coherence, however, shows more coherence between the right and left hemispheres in the action state of the brain than in the resting state. The preliminary result indicates that these potentials arise from the same generators in the brain.
Spatial uncertainty of a geoid undulation model in Guayaquil, Ecuador
NASA Astrophysics Data System (ADS)
Chicaiza, E. G.; Leiva, C. A.; Arranz, J. J.; Buenaño, X. E.
2017-06-01
Geostatistics is a discipline that deals with the statistical analysis of regionalized variables. In this case study, geostatistics is used to estimate geoid undulation in the rural area of Guayaquil, Ecuador. The geostatistical approach was chosen because it yields the estimation error of the prediction map. The open-source statistical software R was used, mainly the geoR, gstat and RGeostats libraries. Exploratory data analysis (EDA), trend analysis and structural analysis were carried out. Automatic model fitting by iterative least squares and other fitting procedures were employed to fit the variogram. Finally, kriging using the Bouguer gravity anomaly as external drift and universal kriging were used to obtain a detailed map of geoid undulation. The estimation uncertainty was within the interval [-0.5; +0.5] m for errors, with a maximum estimation standard deviation of 2 mm in relation to the interpolation method applied. The error distribution of the geoid undulation map obtained in this study provides a better result than publicly available Earth gravitational models for the study area, according to a comparison with independent validation points. The main goal of this paper is to confirm the feasibility of using geoid undulations from Global Navigation Satellite Systems and levelling field measurements together with geostatistical techniques in high-accuracy engineering projects.
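A hedged sketch of the kriging step using the third-party pykrige package (the study itself used R libraries), with a regional linear drift standing in for the Bouguer-anomaly external drift described above; coordinates and undulation values are synthetic.

```python
# Sketch: universal kriging of geoid undulation N from scattered points,
# using pykrige (third-party); a linear regional drift replaces the
# Bouguer-anomaly external drift of the study. Data are synthetic.
import numpy as np
from pykrige.uk import UniversalKriging

rng = np.random.default_rng(5)
x, y = rng.uniform(0, 10, 80), rng.uniform(0, 10, 80)   # local coords, km
N = 24.0 + 0.05 * x - 0.03 * y + 0.1 * rng.standard_normal(80)  # undulation, m

uk = UniversalKriging(x, y, N, variogram_model="spherical",
                      drift_terms=["regional_linear"])
gridx = gridy = np.linspace(0, 10, 25)
N_pred, N_var = uk.execute("grid", gridx, gridy)   # prediction + variance
print(N_pred.shape, N_var.max())
```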
Modelling local GPS/levelling geoid undulations using artificial neural networks
NASA Astrophysics Data System (ADS)
Kavzoglu, T.; Saka, M. H.
2005-04-01
The use of GPS for establishing height control in an area where levelling data are available can involve the so-called GPS/levelling technique. Modelling of the GPS/levelling geoid undulations has usually been carried out using polynomial surface fitting, least-squares collocation (LSC) and finite-element methods. Artificial neural networks (ANNs) have recently been used for many investigations, and proven to be effective in solving complex problems represented by noisy and missing data. In this study, a feed-forward ANN structure, learning the characteristics of the training data through the back-propagation algorithm, is employed to model the local GPS/levelling geoid surface. The GPS/levelling geoid undulations for Istanbul, Turkey, were estimated from GPS and precise levelling measurements obtained during a field study in the period 1998-99. The results are compared to those produced by two well-known conventional methods, namely polynomial fitting and LSC, in terms of root mean square error (RMSE) that ranged from 3.97 to 5.73 cm. The results show that ANNs can produce results that are comparable to polynomial fitting and LSC. The main advantage of the ANN-based surfaces seems to be the low deviations from the GPS/levelling data surface, which is particularly important for distorted levelling networks.
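As a rough analogue of the feed-forward/back-propagation setup described above, the sketch below trains scikit-learn's MLPRegressor on synthetic position-undulation pairs; the network size, data, and train/test split are illustrative assumptions.

```python
# Sketch: feed-forward network (trained by back-propagation) modelling
# geoid undulation from horizontal position; all data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
XY = rng.uniform(0, 50, size=(300, 2))               # easting/northing, km
N = 36 + 0.02*XY[:, 0] - 0.01*XY[:, 1] + 0.3*np.sin(XY[:, 0] / 8)  # metres

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                   random_state=0))
model.fit(XY[:250], N[:250])
rmse = np.sqrt(np.mean((model.predict(XY[250:]) - N[250:])**2))
print(f"test RMSE: {rmse * 100:.1f} cm")
```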
On the Confounding Effect of Temperature on Chemical Shift-Encoded Fat Quantification
Hernando, Diego; Sharma, Samir D.; Kramer, Harald; Reeder, Scott B.
2014-01-01
Purpose: To characterize the confounding effect of temperature on chemical shift-encoded (CSE) fat quantification. Methods: The proton resonance frequency of water, unlike that of triglycerides, depends on temperature. This leads to a temperature dependence of the spectral models of fat (relative to water) that are commonly used by CSE-MRI methods. Simulation analysis was performed for 1.5 Tesla CSE fat–water signals at various temperatures and echo time combinations. Oil–water phantoms were constructed and scanned at temperatures between 0 and 40°C using spectroscopy and CSE imaging at three echo time combinations. An explanted human liver, rejected for transplantation due to steatosis, was scanned using spectroscopy and CSE imaging. Fat–water reconstructions were performed using four different techniques: magnitude and complex fitting, with standard or temperature-corrected signal modeling. Results: In all experiments, magnitude fitting with standard signal modeling resulted in large fat quantification errors. Errors were largest for echo time combinations near TEinit ≈ 1.3 ms, ΔTE ≈ 2.2 ms. Errors in fat quantification caused by temperature-related frequency shifts were smaller with complex fitting, and were avoided using a temperature-corrected signal model. Conclusion: Temperature is a confounding factor for fat quantification. If not accounted for, it can result in large errors in fat quantification in phantom and ex vivo acquisitions. PMID:24123362
Modelling passive diastolic mechanics with quantitative MRI of cardiac structure and function.
Wang, Vicky Y; Lam, H I; Ennis, Daniel B; Cowan, Brett R; Young, Alistair A; Nash, Martyn P
2009-10-01
The majority of patients with clinically diagnosed heart failure have normal systolic pump function and are commonly categorized as suffering from diastolic heart failure. The left ventricle (LV) remodels its structure and function to adapt to pathophysiological changes in geometry and loading conditions, which in turn can alter the passive ventricular mechanics. In order to better understand passive ventricular mechanics, a LV finite element (FE) model was customized to geometric data segmented from in vivo tagged magnetic resonance imaging (MRI) data and to myofibre orientations derived from ex vivo diffusion tensor MRI (DTMRI) of a canine heart using nonlinear finite element fitting techniques. MRI tissue tagging enables quantitative evaluation of cardiac mechanical function with high spatial and temporal resolution, whilst the direction of maximum water diffusion in each voxel of a DTMRI directly corresponds to the local myocardial fibre orientation. Due to differences in myocardial geometry between in vivo and ex vivo imaging, myofibre orientations were mapped into the geometric FE model using host mesh fitting (a free-form deformation technique). Pressure recordings, temporally synchronized to the tagging data, were used as the loading constraints to simulate LV deformation during diastole. Simulation of diastolic LV mechanics allowed us to estimate the stiffness of the passive LV myocardium based on kinematic data obtained from tagged MRI. Integrated physiological modelling of this kind will allow more insight into the mechanics of the LV on an individualized basis, thereby improving our understanding of the underlying structural basis of mechanical dysfunction under pathological conditions.
Nonlinear tuning techniques of plasmonic nano-filters
NASA Astrophysics Data System (ADS)
Kotb, Rehab; Ismail, Yehea; Swillam, Mohamed A.
2015-02-01
In this paper, a fitting model for the propagation constant and losses of metal-insulator-metal (MIM) plasmonic waveguides is proposed. Using this model, the modal characteristics of a MIM plasmonic waveguide can be obtained directly without solving Maxwell's equations from scratch. As a consequence, the simulation time and computational cost needed to predict the response of different plasmonic structures can be reduced significantly. This fitting model is used to develop a closed-form model that describes the behavior of a plasmonic nano-filter. Easy and accurate mechanisms to tune the filter are investigated and analyzed. The filter tunability is based on using a nonlinear dielectric material with a Pockels or Kerr effect, and is achieved by applying an external voltage or by controlling the input light intensity. The proposed nano-filter supports both red and blue shifts in the resonance response, depending on the type of nonlinear material used. A new approach to control the input light intensity by applying an external voltage to a previous stage is investigated: the tunability of a stage that uses a Kerr material can be achieved by applying a voltage to a preceding stage that uses a Pockels material. Using this method, the Kerr effect can be exploited electrically instead of by varying the intensity of the input source. This technique enhances the ability to integrate the device for on-chip applications. Tuning of the resonance wavelength with high accuracy, minimum insertion loss, and high quality factor is obtained using these approaches.
NASA Astrophysics Data System (ADS)
Jiménez, Noé; Camarena, Francisco; Redondo, Javier; Sánchez-Morcillo, Víctor; Konofagou, Elisa E.
2015-10-01
We report a numerical method for solving the constitutive relations of nonlinear acoustics, where multiple relaxation processes are included in a generalized formulation that allows time-domain numerical solution by an explicit finite-differences scheme. The proposed physical model thus overcomes the limitations of one-way Khokhlov-Zabolotskaya-Kuznetsov (KZK) type models and, because the Lagrangian density is implicitly included in the calculation, it also overcomes the limitations of the Westervelt equation in complex configurations for medical ultrasound. In order to model frequency power-law attenuation and dispersion, such as observed in biological media, the relaxation parameters are fitted both to exact frequency power-law attenuation/dispersion media and to empirically measured attenuation of a variety of tissues that does not follow an exact power law. Finally, a computational technique based on artificial relaxation is included to correct the non-negligible numerical dispersion of the finite difference scheme and, on the other hand, to improve stability through artificial attenuation when shock waves are present. This technique avoids the use of high-order finite-difference schemes, leading to fast calculations. The present algorithm is especially suited for practical configurations where spatial discontinuities are present in the domain (e.g. axisymmetric domains or zero normal velocity boundary conditions in general). The accuracy of the method is discussed by comparing the proposed simulation solutions to one-dimensional analytical and k-space numerical solutions.
Cilla, M; Pérez-Rey, I; Martínez, M A; Peña, Estefania; Martínez, Javier
2018-06-23
Motivated by the search for new strategies for fitting a material model, a new approach is explored in the present work. The use of numerical algorithms based on machine learning techniques, such as support vector machines for regression, bagged decision trees and artificial neural networks, is proposed for solving the parameter identification of constitutive laws for soft biological tissues. First, the mathematical tools were trained with analytical uniaxial data (circumferential and longitudinal directions) as inputs and the corresponding material parameters of the Gasser-Ogden-Holzapfel strain energy function (SEF) as outputs. The training and test errors show that the training process is efficient at finding correlations between inputs and outputs; moreover, the correlation coefficients were very close to 1. Second, the tool was validated with unseen observations of analytical circumferential and longitudinal uniaxial data. The results show an excellent agreement between the predicted material parameters of the SEF and the analytical curves. Finally, data from real circumferential and longitudinal uniaxial tests on different cardiovascular tissues were fitted, and thus the material model of these tissues was predicted. We found that the method was able to consistently identify model parameters, and we believe that the use of these numerical tools could lead to an improvement in the characterization of soft biological tissues. This article is protected by copyright. All rights reserved.
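A hedged sketch of the inverse-fitting idea: a regressor is trained to map sampled stress-strain curves back to the material parameter that generated them. A toy one-parameter exponential law stands in for the Gasser-Ogden-Holzapfel SEF, and all values are invented.

```python
# Sketch: train a support vector regressor to map stress-strain curves
# to the material parameter that generated them (toy one-parameter law).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
strain = np.linspace(0, 0.2, 20)

def toy_stress(b):
    # Toy exponential stiffening law (placeholder for the real SEF).
    return np.exp(b * strain) - 1.0

b_train = rng.uniform(5, 15, 300)
X_train = np.array([toy_stress(b) for b in b_train])   # curves as features
svr = SVR(C=100.0).fit(X_train, b_train)

b_true = 9.3
print("recovered b:", svr.predict(toy_stress(b_true).reshape(1, -1))[0])
```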
Dissolution enhancement of efavirenz by solid dispersion and PEGylation techniques
Madhavi, B. Bindu; Kusum, B.; Chatanya, CH. Krishna; Madhu, M. Naga; Harsha, V. Sri; Banji, David
2011-01-01
Background: Efavirenz is the preferred nonnucleotide reverse transcriptase inhibitor for first-line antiretroviral treatment in many countries. It is orally active and is specific for human immunodeficiency virus type 1. Its effectiveness can be attributed to its long half-life, which is 52–76 h after multiple doses. The drug has poor water solubility. The formulation of poorly soluble drugs for oral delivery is one of the biggest challenges for formulation scientists in the research field. Among the available approaches, the solid dispersion technique has often proved to be the most commonly used method in improving dissolution and bioavailability of drugs because of its simplicity and economy in preparation and evaluation. Materials and Methods: Solid dispersions were prepared by solvent evaporation and physical mixture methods using polyethylene glycol as the hydrophilic carrier, and a PEGylated product was also prepared. The prepared products were evaluated for various parameters, such as polymer interaction, saturation solubility, and drug release. The drug release data were analyzed by fitting them to various kinetic models. Results: There was an improvement in dissolution from 16% to 70% with the solid dispersion technology. The Higuchi model was found to be the best-fit model. Conclusion: Solid dispersion is a simple, efficient, and economic method to improve the dissolution of poorly water-soluble drugs. PMID:23071917
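The Higuchi model relates cumulative release to the square root of time, Q(t) = k_H·√t, so fitting reduces to linear regression on √t. The release values below are invented for illustration.

```python
# Sketch: fit the Higuchi model Q(t) = k_H * sqrt(t) to cumulative release
# data (values invented) via linear least squares on sqrt(t).
import numpy as np

t = np.array([5, 10, 15, 30, 45, 60], dtype=float)      # minutes
Q = np.array([14, 21, 26, 38, 47, 54], dtype=float)     # % drug released

k_H, *_ = np.linalg.lstsq(np.sqrt(t).reshape(-1, 1), Q, rcond=None)
Q_fit = k_H[0] * np.sqrt(t)
r2 = 1 - np.sum((Q - Q_fit)**2) / np.sum((Q - Q.mean())**2)
print(f"k_H = {k_H[0]:.2f} %/min^0.5, R^2 = {r2:.3f}")
```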
Cheing, Gladys; Vong, Sinfia; Chan, Fong; Ditchman, Nicole; Brooks, Jessica; Chan, Chetwyn
2014-12-01
Pain is a complex phenomenon not easily discerned from psychological, social, and environmental characteristics and is an oft-cited barrier to return to work for people experiencing low back pain (LBP). The purpose of this study was to evaluate a path-analytic mediation model to examine how motivational enhancement physiotherapy, which incorporates tenets of motivational interviewing, improves physical functioning of patients with chronic LBP. Seventy-six patients with chronic LBP were recruited from the outpatient physiotherapy department of a government hospital in Hong Kong. The re-specified path-analytic model fit the data very well, χ²(3, N = 76) = 3.86, p = .57; comparative fit index = 1.00; root mean square error of approximation = 0.00. Specifically, results indicated that (a) using motivational interviewing techniques in physiotherapy was associated with increased working alliance with patients, (b) working alliance increased patients' outcome expectancy, and (c) greater outcome expectancy resulted in a reduction of subjective pain intensity and improvement in physical functioning. Change in pain intensity also directly influenced improvement in physical functioning. The effect of motivational enhancement therapy on physical functioning can be explained by social-cognitive factors such as motivation, outcome expectancy, and working alliance. The use of motivational interviewing techniques to increase outcome expectancy of patients and improve working alliance could further strengthen the impact of physiotherapy on rehabilitation outcomes of patients with chronic LBP.
Inversion for the driving forces of plate tectonics
NASA Technical Reports Server (NTRS)
Richardson, R. M.
1983-01-01
Inverse modeling techniques have been applied to the problem of determining the roles of various forces that may drive and resist plate tectonic motions. Separate linear inverse problems have been solved to find the best fitting pole of rotation for finite element grid point velocities and to find the best combination of force models to fit the observed relative plate velocities for the earth's twelve major plates using the generalized inverse operator. Variance-covariance data on plate motion have also been included. Results emphasize the relative importance of ridge push forces in the driving mechanism. Convergent margin forces are smaller by at least a factor of two, and perhaps by as much as a factor of twenty. Slab pull, apparently, is poorly transmitted to the surface plate as a driving force. Drag forces at the base of the plate are smaller than ridge push forces, although the sign of the force remains in question.
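The linear inverse step can be sketched as follows: observed plate velocities are modelled as a linear combination of velocities predicted by candidate force models, and the weights come from a least-squares (generalized inverse) solve. The matrix and weights below are synthetic placeholders, not the study's data.

```python
# Sketch of the linear inverse: v_obs ~ G @ c, where each column of G holds
# velocities predicted by one force model (ridge push, slab pull, drag, ...).
import numpy as np

rng = np.random.default_rng(8)
G = rng.normal(size=(36, 3))                 # synthetic model predictions
c_true = np.array([1.0, 0.4, -0.2])          # illustrative relative weights
v_obs = G @ c_true + 0.05 * rng.standard_normal(36)

# Generalized-inverse (least-squares) solution for the force weights.
c_hat, res, rank, sv = np.linalg.lstsq(G, v_obs, rcond=None)
print("recovered force weights:", c_hat)
```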
Barbi, Francisco C L; Camarini, Edevaldo T; Silva, Rafael S; Endo, Eliana H; Pereira, Jefferson R
2012-12-01
The influence of different joining techniques on passive fit at the interface between structure and abutment of cobalt-chromium (Co-Cr) superstructures has not yet been clearly established. The purpose of this study was to compare 3 different techniques for joining Co-Cr superstructures by measuring the resulting marginal misfit in a simulated prosthetic assembly. A specially designed metal model was used for casting, sectioning, joining, and measuring marginal misfit. Forty-five cast bar-type superstructures were fabricated in a Co-Cr alloy and randomly assigned by drawing lots to 3 groups (n=15) according to the joining method used: conventional gas-torch brazing (G-TB), laser welding (LW), and tungsten inert gas welding (TIG). Joined specimens were assembled onto abutment analogs in the metal model with the 1-screw method. The resulting marginal misfit was measured with scanning electron microscopy (SEM) at 3 different points, distal (D), central (C), and mesial (M), along the buccal aspect of both abutments: A (tightened) and B (without screw). The Levene test was used to evaluate variance homogeneity, followed by the Welch ANOVA for heteroscedastic data (α=.05). Significant differences were found on abutment A between groups G-TB and LW (P=.013) measured mesially and between groups G-TB and TIG (P=.037) measured centrally. On abutment B, significant differences were found between groups G-TB and LW (P<.001) and groups LW and TIG (P<.001) measured mesially; groups G-TB and TIG (P=.007) measured distally; and groups G-TB and TIG (P=.001) and LW and TIG (P=.007) measured centrally. The method used for joining Co-Cr prosthetic structures had an influence on the level of resulting passive fit. Structures joined by the tungsten inert gas method produced better mean results than did the brazing or laser method. Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
Multi-filter spectrophotometry of quasar environments
NASA Technical Reports Server (NTRS)
Craven, Sally E.; Hickson, Paul; Yee, Howard K. C.
1993-01-01
A many-filter photometric technique for determining redshifts and morphological types, by fitting spectral templates to spectral energy distributions, has good potential for application in surveys. Despite success in studies performed on simulated data, the results have not been fully reliable when applied to real, low signal-to-noise data. We are investigating techniques to improve the fitting process.
Ion distribution in dry polyelectrolyte multilayers: a neutron reflectometry study.
Ghoussoub, Yara E; Zerball, Maximilian; Fares, Hadi M; Ankner, John F; von Klitzing, Regine; Schlenoff, Joseph B
2018-02-28
Ultrathin films of the complexed polycation poly(diallyldimethylammonium), PDADMA, and polyanion poly(styrenesulfonate), PSS, were prepared on silicon wafers using the layer-by-layer adsorption technique. When terminated with PDADMA, all films had excess PDADMA, which was balanced by counterions. Neutron reflectivity of these as-made multilayers was compared with measurements on multilayers which had been further processed to ensure 1:1 stoichiometry of PDADMA and PSS. The compositions of all films, including polymers and counterions, were determined experimentally rather than by fitting, reducing the number of fit parameters required to model the reflectivity. For each sample, acetate, either protiated (CH₃COO⁻) or deuterated (CD₃COO⁻), served as the counterion. All films were maintained dry under vacuum. Scattering length density profiles were constrained to fit reflectivity data from samples having either counterion. The best fits were obtained with uniform counterion concentrations, even for stoichiometric samples that had been exposed to PDADMA for ca. 5 minutes, showing that surprisingly fast and complete transport of excess cationic charge occurs throughout the multilayer during its construction.
Barker, Fiona; Mackenzie, Emma; de Lusignan, Simon
2016-11-01
To observe and analyse the range and nature of behaviour change techniques (BCTs) employed by audiologists during hearing-aid fitting consultations to encourage and enable hearing-aid use. Non-participant observation and qualitative thematic analysis using the behaviour change technique taxonomy (version 1) (BCTTv1). Ten consultations across five English NHS audiology departments. Audiologists engage in behaviours to ensure the hearing-aid is fitted to prescription and is comfortable to wear. They provide information, equipment, and training in how to use a hearing-aid including changing batteries, cleaning, and maintenance. There is scope for audiologists to use additional BCTs: collaborating with patients to develop a behavioural plan for hearing-aid use that includes goal-setting, action-planning and problem-solving; involving significant others; providing information on the benefits of hearing-aid use or the consequences of non-use and giving advice about using prompts/cues for hearing-aid use. This observational study of audiologist behaviour in hearing-aid fitting consultations has identified opportunities to use additional behaviour change techniques that might encourage hearing-aid use. This information defines potential intervention targets for further research with the aim of improving hearing-aid use amongst adults with acquired hearing loss.
NASA Technical Reports Server (NTRS)
Miller, Eric J.; Holguin, Andrew C.; Cruz, Josue; Lokos, William A.
2014-01-01
The safety-of-flight parameters for the Adaptive Compliant Trailing Edge (ACTE) flap experiment require that flap-to-wing interface loads be sensed and monitored in real time to ensure that the structural load limits of the wing are not exceeded. This paper discusses the strain gage load calibration testing and load equation derivation methodology for the ACTE interface fittings. Both the left and right wing flap interfaces were monitored; each contained four uniquely designed and instrumented flap interface fittings. The interface hardware design and instrumentation layout are discussed. Twenty-one applied test load cases were developed using the predicted in-flight loads. Pre-test predictions of strain gage responses were produced using finite element models of the interface fittings. Predicted and measured test strains are presented. A load testing rig and three hydraulic jacks were used to apply combinations of shear, bending, and axial loads to the interface fittings. Hardware deflections under load were measured using photogrammetry and transducers. Because of deflections in the interface fitting hardware and test rig, finite element techniques were used to calculate the reaction loads throughout the applied load range, taking into account the elastically deformed geometry. The primary load equations were selected based on multiple calibration metrics. An independent set of validation cases was used to validate each derived equation. The 2-sigma residual errors for the shear loads were less than eight percent of the full-scale calibration load; the 2-sigma residual errors for the bending moment loads were less than three percent of the full-scale calibration load. The derived load equations for shear, bending, and axial loads are presented, with the calculated errors for both the calibration cases and the independent validation load cases.
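The core of a load-equation derivation is a linear least-squares calibration that maps strain-gage outputs to applied loads. The sketch below shows only that step, on fabricated data; the gage count, load-case count, and noise levels are assumptions, and the actual ACTE work additionally corrected the applied reaction loads with finite element models before regression.

```python
# Hedged sketch of the general load-equation idea (not the ACTE code):
# calibration loads are regressed onto strain-gage responses so that, in
# flight, measured strains can be converted back to shear/bending/axial loads.
import numpy as np

rng = np.random.default_rng(3)
n_cases, n_gages = 21, 4                         # 21 load cases, 4 gages
true_coeff = rng.normal(0, 1, (n_gages, 3))      # unknown strain -> load map

strain = rng.normal(0, 1, (n_cases, n_gages))    # measured gage outputs
loads = strain @ true_coeff + rng.normal(0, 0.02, (n_cases, 3))  # shear, bend, axial

# Load-equation coefficients by linear least squares; an independent set of
# validation cases would be checked against these same coefficients.
coeff, *_ = np.linalg.lstsq(strain, loads, rcond=None)

resid = loads - strain @ coeff
print("2-sigma residuals per load component:", 2 * resid.std(axis=0))
```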
Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI
NASA Astrophysics Data System (ADS)
Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.
2017-12-01
Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least-squares (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured one and a theoretical population-based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore, two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve (interstitial volume fraction) shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution approach was also faster by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated for by using the differential equations. Fitting with the convolution approach is superior in computation time, while offering better stability and accuracy at the same time.
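A minimal version of the convolution approach can be written down directly: the 2CXM impulse response is a biexponential, so the tissue curve is the AIF convolved with A·exp(-a·t) + B·exp(-b·t), and NLLS fitting only ever evaluates that convolution. The AIF, sampling, noise, and the direct biexponential parameterization below are illustrative assumptions; mapping (A, a, B, b) back to Fp, PS, ve, vp follows the standard 2CXM algebra, which is not shown.

```python
# Illustrative sketch of the convolution approach to 2CXM fitting.
import numpy as np
from scipy.optimize import least_squares

dt = 2.0                                     # temporal resolution, s
t = np.arange(0, 300, dt)
aif = (t / 30.0) * np.exp(-t / 30.0)         # toy population-based AIF

def tissue_curve(p):
    A, a, B, b = p
    irf = A * np.exp(-a * t) + B * np.exp(-b * t)   # 2CXM impulse response
    return np.convolve(aif, irf)[: t.size] * dt     # discrete convolution

rng = np.random.default_rng(4)
truth = (0.04, 0.10, 0.01, 0.005)            # assumed "true" biexp parameters
data = tissue_curve(truth) + rng.normal(0, 0.005, t.size)

fit = least_squares(lambda p: data - tissue_curve(p),
                    x0=(0.02, 0.05, 0.005, 0.01), bounds=(0, np.inf))
print("estimated (A, a, B, b):", fit.x)
```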
NASA Astrophysics Data System (ADS)
Bhattacharya, P.; Viesca, R. C.
2017-12-01
In the absence of in situ field-scale observations of quantities such as fault slip, shear stress and pore pressure, observational constraints on models of fault slip have mostly been limited to laboratory and/or remote observations. Recent controlled fluid-injection experiments on well-instrumented faults fill this gap by simultaneously monitoring fault slip and pore pressure evolution in situ [Guglielmi et al., 2015]. Such experiments can reveal interesting fault behavior: Guglielmi et al. report fluid-activated aseismic slip, with the onset of micro-seismicity following only later. We show that the Guglielmi et al. dataset can be used to constrain the hydro-mechanical model parameters of a fluid-activated expanding shear rupture within a Bayesian framework. We assume that (1) pore pressure diffuses radially outward (from the injection well) within a permeable pathway along the fault bounded by a narrow damage zone about the principal slip surface; and (2) the pore-pressure increase activates slip on a pre-stressed planar fault by reducing frictional strength (expressed as a constant friction coefficient times the effective normal stress). Owing to efficient, parallel, numerical solutions to the axisymmetric fluid-diffusion and crack problems (under the imposed history of injection), we are able to jointly fit the observed histories of pore pressure and slip using an adaptive Monte Carlo technique. Our hydrological model provides an excellent fit to the pore-pressure data without requiring any statistically significant permeability enhancement due to the onset of slip. Further, for realistic elastic properties of the fault, the crack model fits both the onset of slip and its early-time evolution reasonably well. However, our model requires unrealistic fault properties to fit the marked acceleration of slip observed later in the experiment (coinciding with the triggering of microseismicity). Therefore, besides producing meaningful and internally consistent bounds on in situ fault properties such as permeability, storage coefficient, resolved stresses, friction and the shear modulus, our results also show that fitting the complete observed time history of slip requires alternative model considerations, such as variations in fault mechanical properties or friction coefficient with slip.
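The joint-fitting idea, summing log-likelihoods of the pressure and slip records so that one parameter set must explain both, can be sketched with a plain random-walk Metropolis sampler. The study used an adaptive variant and full diffusion/crack solvers; the forward models and values below are placeholders only.

```python
# Generic Metropolis sketch for jointly fitting two observables with shared
# parameters; the forward models stand in for the real diffusion/crack codes.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.1, 10.0, 50)

def pressure_model(D):          # placeholder for radial diffusion from the well
    return 1.0 - np.exp(-D * t)

def slip_model(D, s):           # placeholder for the expanding-crack solution
    return s * np.maximum(t - 1.0 / D, 0.0)

p_obs = pressure_model(0.8) + rng.normal(0, 0.02, t.size)
u_obs = slip_model(0.8, 0.5) + rng.normal(0, 0.02, t.size)

def log_like(theta):
    D, s = theta
    if D <= 0 or s <= 0:
        return -np.inf
    r1 = p_obs - pressure_model(D)          # pressure residuals
    r2 = u_obs - slip_model(D, s)           # slip residuals
    return -0.5 * (np.sum(r1**2) + np.sum(r2**2)) / 0.02**2

theta, ll = np.array([0.5, 0.3]), -np.inf
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.02, 2)   # random-walk proposal
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(theta.copy())
print("posterior mean (D, s):", np.mean(chain[1000:], axis=0))
```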
The basis function approach for modeling autocorrelation in ecological data
Hefley, Trevor J.; Broms, Kristin M.; Brost, Brian M.; Buderman, Frances E.; Kay, Shannon L.; Scharf, Henry; Tipton, John; Williams, Perry J.; Hooten, Mevin B.
2017-01-01
Analyzing ecological data often requires modeling the autocorrelation created by spatial and temporal processes. Many seemingly disparate statistical methods used to account for autocorrelation can be expressed as regression models that include basis functions. Basis functions also enable ecologists to modify a wide range of existing ecological models in order to account for autocorrelation, which can improve inference and predictive accuracy. Furthermore, understanding the properties of basis functions is essential for evaluating the fit of spatial or time-series models, detecting a hidden form of collinearity, and analyzing large data sets. We present important concepts and properties related to basis functions and illustrate several tools and techniques ecologists can use when modeling autocorrelation in ecological data.
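A compact example of the basis-function device: when a covariate is confounded with a smooth autocorrelated process, adding basis functions as extra regression columns recovers the covariate effect. The simulated data, the Fourier basis choice, and the effect sizes below are illustrative assumptions.

```python
# Basis-function regression sketch: a hidden smooth process biases the naive
# slope estimate; adding a Fourier basis absorbs it and restores the truth.
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 200)
trend = np.sin(2 * np.pi * 3 * t)                 # hidden autocorrelated process
x = 0.8 * trend + rng.normal(0, 1, t.size)        # covariate confounded with it
y = 1.5 * x + trend + rng.normal(0, 0.3, t.size)  # true covariate effect = 1.5

def fourier_basis(t, k):
    cols = [np.ones_like(t)]
    for j in range(1, k + 1):
        cols += [np.sin(2 * np.pi * j * t), np.cos(2 * np.pi * j * t)]
    return np.column_stack(cols)

X_naive = np.column_stack([x, np.ones_like(x)])
X_basis = np.column_stack([x, fourier_basis(t, 4)])

b_naive = np.linalg.lstsq(X_naive, y, rcond=None)[0][0]
b_basis = np.linalg.lstsq(X_basis, y, rcond=None)[0][0]
print(f"naive slope: {b_naive:.2f}; with basis functions: {b_basis:.2f}")
```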
Level-Specific Evaluation of Model Fit in Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Ryu, Ehri; West, Stephen G.
2009-01-01
In multilevel structural equation modeling, the "standard" approach to evaluating the goodness of model fit has a potential limitation in detecting the lack of fit at the higher level. Level-specific model fit evaluation can address this limitation and is more informative in locating the source of lack of model fit. We proposed level-specific test…
Lim, Chi Kim; Bay, Hui Han; Neoh, Chin Hong; Aris, Azmi; Abdul Majid, Zaiton; Ibrahim, Zaharah
2013-10-01
In this study, the adsorption behavior of the azo dye Acid Orange 7 (AO7) from aqueous solution onto a macrocomposite (MC) was investigated under various experimental conditions. The adsorbent, MC, which consists of a mixture of zeolite and activated carbon, was found to be effective in removing AO7. The MC was characterized by scanning electron microscopy (SEM), energy dispersive X-ray analysis, point of zero charge, and Brunauer-Emmett-Teller surface area analysis. A series of experiments was performed via the batch adsorption technique to examine the effect of the process variables, namely, contact time, initial dye concentration, and solution pH. The dye equilibrium adsorption was investigated, and the equilibrium data were fitted to the Langmuir, Freundlich, and Temkin isotherm models. The Langmuir isotherm model fits the equilibrium data better than the Freundlich isotherm model. For the kinetic study, pseudo-first-order, pseudo-second-order, and intraparticle diffusion models were used to fit the experimental data. The adsorption kinetics were found to be well described by the pseudo-second-order model. Thermodynamic analysis indicated that the adsorption process is spontaneous and endothermic. SEM, Fourier transform infrared spectroscopy, ultraviolet-visible spectroscopy, and high-performance liquid chromatography analyses were carried out before and after the adsorption process. In the phytotoxicity test, treated AO7 was found to be less toxic. Thus, the study indicated that MC has good potential as an adsorbent for the removal of azo dye from aqueous solution.
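The isotherm comparison reduces to fitting two candidate curves and comparing goodness of fit. A minimal sketch on invented (Ce, qe) data is shown below; the Langmuir form qe = qmax·KL·Ce/(1 + KL·Ce) and Freundlich form qe = KF·Ce^(1/n) are the standard ones, but the data points and starting values are assumptions.

```python
# Illustrative isotherm fit (data values are invented, not the study's):
# equilibrium uptake qe versus concentration Ce, compared by R^2.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([5, 10, 20, 40, 80, 120.0])            # mg/L, hypothetical
qe = np.array([8.2, 13.1, 18.9, 23.5, 26.4, 27.1])   # mg/g, hypothetical

def langmuir(C, qmax, KL):
    return qmax * KL * C / (1 + KL * C)

def freundlich(C, KF, n):
    return KF * C ** (1.0 / n)

for name, f, p0 in [("Langmuir", langmuir, (30, 0.05)),
                    ("Freundlich", freundlich, (5, 2))]:
    p, _ = curve_fit(f, Ce, qe, p0=p0)
    ss_res = np.sum((qe - f(Ce, *p)) ** 2)
    ss_tot = np.sum((qe - qe.mean()) ** 2)
    print(name, np.round(p, 3), "R2 =", round(1 - ss_res / ss_tot, 4))
```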
Hyland, Philip; Shevlin, Mark; Adamson, Gary; Boduszek, Daniel
2014-01-01
This study directly tests a central prediction of rational emotive behaviour therapy (REBT) that has received little empirical attention, concerning the role of core and intermediate beliefs in the development of posttraumatic stress symptoms. A theoretically consistent REBT model of posttraumatic stress disorder (PTSD) was examined using structural equation modelling techniques among a sample of 313 trauma-exposed military and law enforcement personnel. The REBT model of PTSD provided a good fit to the data, χ(2) = 599.173, df = 356, p < .001; root mean square error of approximation = .05 (confidence interval = .04-.05); standardized root mean square residual = .04; comparative fit index = .95; Tucker-Lewis index = .95. Results demonstrated that demandingness beliefs indirectly affected the various symptom groups of PTSD through a set of secondary irrational beliefs that includes catastrophizing, low frustration tolerance, and depreciation beliefs. Results were consistent with the predictions of REBT theory and provide strong empirical support that the cognitive variables described by REBT theory are critical cognitive constructs in the prediction of PTSD symptomology. © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Sahai, Swupnil
This thesis includes three parts. The overarching theme is how to analyze structured hierarchical data, with applications to astronomy and sociology. The first part discusses how expectation propagation can be used to parallelize the computation when fitting big hierarchical Bayesian models. This methodology is then used to fit a novel, nonlinear mixture model to ultraviolet radiation from various regions of the observable universe. The second part discusses how the Stan probabilistic programming language can be used to numerically integrate terms in a hierarchical Bayesian model. This technique is demonstrated on supernova data to significantly speed up convergence to the posterior distribution compared to a previous study that used a Gibbs-type sampler. The third part builds a formal latent kernel representation for aggregate relational data as a way to more robustly estimate the mixing characteristics of agents in a network. In particular, the framework is applied to sociology surveys to estimate, as a function of ego age, the age and sex composition of the personal networks of individuals in the United States.
Hu, Jingwen; Klinich, Kathleen D; Miller, Carl S; Nazmi, Giseli; Pearlman, Mark D; Schneider, Lawrence W; Rupp, Jonathan D
2009-11-13
Motor-vehicle crashes are the leading cause of fetal deaths resulting from maternal trauma in the United States, and placental abruption is the most common cause of these deaths. To minimize this injury, new assessment tools, such as crash-test dummies and computational models of pregnant women, are needed to evaluate vehicle restraint systems with respect to reducing the risk of placental abruption. Developing these models requires accurate material properties for tissues in the pregnant abdomen under dynamic loading conditions that can occur in crashes. A method has been developed for determining dynamic material properties of human soft tissues that combines results from uniaxial tensile tests, specimen-specific finite-element models based on laser scans that accurately capture non-uniform tissue-specimen geometry, and optimization techniques. The current study applies this method to characterizing material properties of placental tissue. For 21 placenta specimens tested at a strain rate of 12/s, the mean failure strain is 0.472 ± 0.097 and the mean failure stress is 34.80 ± 12.62 kPa. A first-order Ogden material model with ground-state shear modulus (μ) of 23.97 ± 5.52 kPa and exponent (α1) of 3.66 ± 1.90 best fits the test results. The new method provides a nearly 40% error reduction (p<0.001) compared to traditional curve-fitting methods by considering detailed specimen geometry, loading conditions, and dynamic effects from high-speed loading. The proposed method can be applied to determine mechanical properties of other soft biological tissues.
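For intuition, a first-order Ogden model can be fitted directly to uniaxial stress-stretch data by nonlinear least squares, which is essentially the traditional curve-fitting baseline the authors improve upon with specimen-specific finite element models. The sketch assumes the convention in which incompressible uniaxial nominal stress is P(λ) = (2μ/α)(λ^(α-1) - λ^(-α/2-1)), and the data are invented near the reported parameter values.

```python
# Sketch of a first-order Ogden fit to synthetic uniaxial data (not the
# paper's optimization against specimen-specific FE models).
import numpy as np
from scipy.optimize import curve_fit

lam = np.linspace(1.0, 1.45, 20)   # stretch range (failure strain ~ 0.47)

def ogden_stress(l, mu, alpha):
    """Incompressible first-order Ogden, uniaxial nominal stress (kPa)."""
    return (2 * mu / alpha) * (l ** (alpha - 1) - l ** (-alpha / 2 - 1))

rng = np.random.default_rng(7)
stress = ogden_stress(lam, 24.0, 3.7) + rng.normal(0, 0.5, lam.size)

(mu_hat, a_hat), _ = curve_fit(ogden_stress, lam, stress, p0=(10.0, 2.0))
print(f"mu = {mu_hat:.1f} kPa, alpha = {a_hat:.2f}")
```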
Ellipsoidal head model for fetal magnetoencephalography: forward and inverse solutions
NASA Astrophysics Data System (ADS)
Gutiérrez, David; Nehorai, Arye; Preissl, Hubert
2005-05-01
Fetal magnetoencephalography (fMEG) is a non-invasive technique in which measurements of the magnetic field outside the maternal abdomen are used to infer the source location and signals of the fetus's neural activity. A number of aspects of fMEG modelling must be addressed, such as the conductor volume, fetal position and orientation, and gestation period. We propose a solution to the forward problem of fMEG based on an ellipsoidal head geometry. This model has the advantage of highlighting special characteristics of the field that are inherent to the anisotropy of the human head, such as the spread and orientation of the field in relation to the localization and position of the fetal head. Our forward solution is presented in the form of a kernel matrix that facilitates the solution of the inverse problem by decoupling the dipole localization parameters from the source signals. We then use this model and the maximum likelihood technique to solve the inverse problem, assuming the availability of measurements from multiple trials. The applicability and performance of our methods are illustrated through numerical examples based on a real 151-channel SQUID fMEG measurement system (SARA). SARA is an MEG system especially designed for fetal assessment and is currently used for heart and brain studies. Finally, since our model requires knowledge of the best-fitting ellipsoid's centre location and semiaxis lengths, we propose a method for estimating these parameters through a least-squares fit on anatomical information obtained from three-dimensional ultrasound images.
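The decoupling described here is another instance of separable least squares: for any trial source location the kernel (lead-field) matrix is linear in the source signals, so the signals are recovered by a linear solve and only the location is searched nonlinearly. A geometric toy (a 1-D sensor line instead of the ellipsoidal forward model; all values assumed) is sketched below.

```python
# Toy kernel-matrix inverse solution: grid-search the source location, solve
# the source signal linearly at each trial location via the pseudoinverse.
import numpy as np

sensors = np.linspace(-1, 1, 151)                 # 151 "channels"
t = np.linspace(0, 1, 200)

def kernel(loc):
    """Toy lead field for a single source at position loc."""
    return np.exp(-((sensors - loc) ** 2) / 0.05)[:, None]

rng = np.random.default_rng(8)
signal_true = np.sin(2 * np.pi * 5 * t)[None, :]  # assumed source waveform
data = kernel(0.3) @ signal_true + rng.normal(0, 0.1, (sensors.size, t.size))

def residual(loc):
    K = kernel(loc)
    S = np.linalg.pinv(K) @ data                  # linear solve for the signal
    return np.linalg.norm(data - K @ S)           # residual depends on loc only

locs = np.linspace(-0.9, 0.9, 181)
best = locs[np.argmin([residual(l) for l in locs])]
print("estimated source location:", best)
```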
Small area estimation for semicontinuous data.
Chandra, Hukum; Chambers, Ray
2016-03-01
Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
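A stripped-down, fixed-effects version of the two-part model conveys the mechanics; the paper's estimator uses mixed models with area-level random effects, which this sketch omits. Part 1 is a logistic model for the probability of a nonzero value, part 2 a linear model on the log scale for the positive values, and a lognormal back-transform combines them. Data and coefficients are invented.

```python
# Two-part model sketch for semicontinuous data, using statsmodels for the
# two regressions (fixed effects only; no small-area random effects).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 500
x = rng.normal(0, 1, n)
X = sm.add_constant(x)
p = 1 / (1 + np.exp(-(-0.5 + 1.0 * x)))        # probability of a nonzero value
positive = rng.uniform(size=n) < p
y = np.where(positive,
             np.exp(0.2 + 0.8 * x + rng.normal(0, 0.5, n)),  # skewed positives
             0.0)

part1 = sm.Logit((y > 0).astype(int), X).fit(disp=0)   # zero vs positive
part2 = sm.OLS(np.log(y[y > 0]), X[y > 0]).fit()       # positives, log scale

sigma2 = part2.scale                           # residual variance on log scale
mu_hat = part1.predict(X) * np.exp(part2.predict(X) + sigma2 / 2)
print("estimated mean:", mu_hat.mean(), "observed mean:", y.mean())
```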
Wedge, David C; Rowe, William; Kell, Douglas B; Knowles, Joshua
2009-03-07
We model the process of directed evolution (DE) in silico using genetic algorithms. Making use of the NK fitness landscape model, we analyse the effects of mutation rate, crossover and selection pressure on the performance of DE. A range of values of K, the epistatic interaction of the landscape, are considered, and high- and low-throughput modes of evolution are compared. Our findings suggest that for runs of around ten generations' duration, as is typical in DE, there is little difference between the way in which DE needs to be configured in the high- and low-throughput regimes, nor across different degrees of landscape epistasis. In all cases, a high selection pressure (but not an extreme one) combined with a moderately high mutation rate works best, while crossover provides some benefit, but only on the less rugged landscapes. These genetic algorithms were also compared with a "model-based approach" from the literature, which uses sequential fixing of the problem parameters based on fitting a linear model. Overall, we find that purely evolutionary techniques fare better than model-based approaches across all but the smoothest landscapes.
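A compact sketch of the basic simulation loop follows: an NK landscape built from random interaction tables, and a generational GA with tournament selection and point mutation run for ten generations. Population size, tournament size, and rates are illustrative, not the tuned settings from the study.

```python
# NK-landscape + simple genetic algorithm sketch (parameters are assumptions).
import numpy as np

rng = np.random.default_rng(10)
N, K, pop_size, gens, mut_rate = 20, 4, 100, 10, 0.05

# Each site i interacts with K randomly chosen other sites; its fitness
# contribution is looked up from a random table indexed by those K+1 bits.
neigh = np.array([rng.choice([j for j in range(N) if j != i], K, replace=False)
                  for i in range(N)])
tables = rng.uniform(size=(N, 2 ** (K + 1)))

def fitness(genome):
    idx = (genome[neigh] @ (2 ** np.arange(K))) * 2 + genome
    return tables[np.arange(N), idx].mean()

pop = rng.integers(0, 2, (pop_size, N))
for _ in range(gens):
    fit = np.array([fitness(ind) for ind in pop])
    # tournament selection of size 4 sets the selection pressure
    pop = np.array([pop[max(rng.integers(0, pop_size, 4),
                            key=lambda i: fit[i])]
                    for _ in range(pop_size)])
    flips = rng.uniform(size=pop.shape) < mut_rate   # point mutation
    pop = np.where(flips, 1 - pop, pop)

print("best fitness after", gens, "generations:",
      max(fitness(ind) for ind in pop))
```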